Question about Supersampling #8

@XLR-man

Description

The supersampling method described in the paper divides each pixel into sub-pixels for training. I'm a little confused about how this method is implemented in the code.

First, the downscale variable in the code downsamples the image to the specified resolution. For example, if the input image resolution is 504×378 and downscale=2, the downsampled image is 252×189. My question is: why downsample at all? Is it to train with the 252×189 images as low-resolution inputs and the 504×378 images as ground truth (GT)?
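To make my understanding concrete, here is a tiny sketch of how I assume downscale maps the full resolution to the training resolution (the function name is mine, not from the repo; I'm guessing integer division is used):

```python
# Hypothetical sketch of the downscale logic -- not code from the repo.
def downsampled_size(w, h, downscale):
    """Map a full-resolution (w, h) to the low-resolution training size."""
    return w // downscale, h // downscale

print(downsampled_size(504, 378, 2))  # (252, 189)
```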

At the same time, I'm not sure whether my understanding of the following code is correct:

self.all_rays = torch.cat(self.all_rays, 0)          # (61 * h/X * w/X, X*X, 8)
self.all_rgbs = torch.cat(self.all_rgbs, 0)          # (61 * h/X * w/X, 3)
self.all_rgbs_ori = torch.cat(self.all_rgbs_ori, 0)  # (61 * h/X * w/X, X*X, 3)

It appears that the 252×189 images serve as the low-resolution input and the 504×378 images as GT. At the same time, the 504×378 image seems to be divided into multiple s×s patches, which should correspond to the supersampling method.
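If that reading is right, then each low-resolution pixel would own one s×s patch of GT pixels, matching the (num_pixels, X*X, 3) shape of all_rgbs_ori above. Here is my guess at that grouping as a small sketch (gather_subpixel_patches is a name I invented; this is not the repo's code):

```python
# Hypothetical sketch: group a (H*s) x (W*s) GT image into one s*s patch
# per low-resolution pixel, mirroring the (num_pixels, s*s, ...) layout
# suggested by the tensor shapes in the snippet above.
def gather_subpixel_patches(gt, s):
    H, W = len(gt) // s, len(gt[0]) // s  # low-resolution dimensions
    patches = []
    for i in range(H):          # low-res row
        for j in range(W):      # low-res column
            # Collect the s*s GT pixels covered by low-res pixel (i, j),
            # flattened in row-major order.
            patch = [gt[i * s + di][j * s + dj]
                     for di in range(s) for dj in range(s)]
            patches.append(patch)
    return patches

# A 4x4 "image" of pixel indices with s=2 gives four 2x2 patches:
gt = [[0, 1, 2, 3],
      [4, 5, 6, 7],
      [8, 9, 10, 11],
      [12, 13, 14, 15]]
print(gather_subpixel_patches(gt, 2)[0])  # [0, 1, 4, 5]
```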

In general, I do not understand how each pixel of the input low-resolution image is divided into s×s sub-pixels: are the s×s patches taken on the input low-resolution image or on the GT image? And how are the rays of the divided sub-pixels and their corresponding colors obtained? Could you point me to the specific code?
Thank you very much for your contribution. My understanding may be wrong; I hope you can clear up my doubts.
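In case it helps show where I'm confused, here is how I imagine the s×s sub-pixel ray origins might be placed inside one low-resolution pixel (subpixel_centers and the coordinate convention are my assumptions, not the repo's code):

```python
# Hypothetical sketch: for low-res pixel (px, py), list the s*s sub-pixel
# centers in full-resolution pixel coordinates, assuming pixel (px, py)
# covers the square [px*s, (px+1)*s) x [py*s, (py+1)*s). Rays would then
# be cast through these centers and supervised by the GT colors there.
def subpixel_centers(px, py, s):
    return [(px * s + dj + 0.5, py * s + di + 0.5)
            for di in range(s) for dj in range(s)]

print(subpixel_centers(0, 0, 2))
# [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5), (1.5, 1.5)]
```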

Best regards!
