
Comments (6)

ginazhouhuiwu commented on September 18, 2024

Actually I think a while back we wanted to upgrade this part to support downscaling better (not just by a constant), so thanks for bringing it up! I would be happy to look into this and you are more than welcome to make a PR if you want to address this feature immediately.


jb-ye commented on September 18, 2024

Interpolation-based downscaling is known to have a detrimental effect on training GS when the downscale factor is greater than 2, because the interpolated images are not antialiased. The reason I use convolution is that it is both antialiased and differentiable.

Yes, the current method only supports downscale factors of 2, 4, 8, etc., but this is not really a major issue in coarse-to-fine training.
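
For illustration, a minimal sketch in plain PyTorch contrasting the two behaviors (this is not the nerfstudio code; the shapes and mode choices here are my own, with F.interpolate's "area" mode standing in for the antialiased path):

import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 64, 64)  # a random image, NCHW

# Bilinear downscaling by 4 blends only a small neighborhood per output
# pixel, so high-frequency content aliases.
bilinear = F.interpolate(x, scale_factor=0.25, mode="bilinear", align_corners=False)

# 'area' downscaling averages every input pixel inside each 4x4 block;
# this is the antialiased behavior the convolution method reproduces.
area = F.interpolate(x, scale_factor=0.25, mode="area")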


jb-ye commented on September 18, 2024

Regarding your concerns about "misaligned coordinates": in the gsplat library, we use the convention of the graphics/calibration literature rather than the OpenCV convention. That is, the top-left pixel of the image represents the color at 2D coordinate (0.5, 0.5). See the related discussion here.

Basically, under this convention (rather than the one commonly used in OpenCV), we won't have misaligned coordinates.
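
As a tiny illustration of the convention (the helper names here are mine, not gsplat's):

def pixel_center(i: int) -> float:
    # Pixel i covers the interval [i, i + 1); its sample sits at the
    # center, i + 0.5, per the graphics/calibration convention.
    return i + 0.5

def downscale_coord(x: float, d: int) -> float:
    # A continuous coordinate x maps to x / d after downscaling by d,
    # with no extra half-pixel offset under this convention.
    return x / d

assert pixel_center(0) == 0.5            # top-left pixel of the image
assert downscale_coord(0.5, 4) == 0.125  # still inside pixel 0 after 4x downscale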


wzy-99 commented on September 18, 2024

Hi @jb-ye,

I understand the 0.5-pixel problem you mentioned, but my concern is actually something else.

Here is a detailed explanation:

Assume:

  • an image of size 19x19
  • a downscale factor of 4
  • so the convolution stride is 4

The original method:

import torch
import torch.nn.functional as tf


def resize_image(image: torch.Tensor, d: int):
    """
    Downscale an image using the same 'area' method as OpenCV.

    :param image: tensor of shape [H, W, C]
    :param d: downscale factor (must be 2, 4, 8, etc.)

    :return: downscaled image of shape [H//d, W//d, C]
    """
    image = image.to(torch.float32)
    # Average each d x d block with a uniform kernel applied at stride d;
    # this is antialiased and differentiable.
    weight = (1.0 / (d * d)) * torch.ones((1, 1, d, d), dtype=torch.float32, device=image.device)
    return tf.conv2d(image.permute(2, 0, 1)[:, None, ...], weight, stride=d).squeeze(1).permute(1, 2, 0)

[image: a 19x19 image, with each colored square representing a convolution region; only one row is shown as an example]

You can see that the 3 unsampled pixels are all clustered at the end of the first row, which causes misalignment.
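
A quick numeric check (my own test code, not from the thread) confirms the cut-off: fill a 19x19 image with its column indices and run resize_image from above.

import torch

# Each pixel stores its column index, so the averaged values reveal
# which columns each output pixel covers.
col = torch.arange(19, dtype=torch.float32)
image = col.repeat(19, 1).unsqueeze(-1)  # [H=19, W=19, C=1]

small = resize_image(image, d=4)
print(small.shape)     # torch.Size([4, 4, 1])
print(small[0, :, 0])  # tensor([ 1.5000,  5.5000,  9.5000, 13.5000])
# Windows cover columns 0-3, 4-7, 8-11, 12-15; columns 16-18 are dropped.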

The better method:

[image: the same row with the 3 leftover pixels scattered evenly between the convolution regions]

If we instead scatter the 3 leftover pixels evenly across the row, the misalignment is reduced.
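
For what it's worth, this evenly-scattered behavior is close in spirit to PyTorch's adaptive_avg_pool2d, which spreads its pooling windows across the full extent so no border pixels are dropped (a sketch, not a drop-in replacement; its windows here are ~5 pixels wide and may overlap, unlike the figure):

import torch
import torch.nn.functional as F

col = torch.arange(19, dtype=torch.float32).repeat(19, 1)  # [19, 19]
x = col[None, None]  # [N=1, C=1, H=19, W=19]

# Four windows spread evenly over the 19 columns: [0,5), [4,10), [9,15), [14,19).
small = F.adaptive_avg_pool2d(x, output_size=(4, 4))
print(small[0, 0, 0])  # tensor([ 2.0000,  6.5000, 11.5000, 16.0000])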


jb-ye commented on September 18, 2024

@wzy-99 Assume the principal point (PP) of the original resolution is at (9.5, 9.5), the center of the original image. When we resize the image, we simply multiply the principal point by the scale factor, which gives (2.375, 2.375). This is the convention used in nerfstudio and most other calibration libraries (e.g. colmap). Note that this also means the rescaled PP is not necessarily the image center unless the resolution is a multiple of 4.

In the original method, the rightmost columns are cut off by design. We resize the 19x19 image to a 4x4 image with the principal point at (2.375, 2.375), where 2.375 = 9.5 / 4, so the downscaled image is consistent with how the principal point is scaled. Note that the PP shifts to the right of the image center because of the aforementioned cut-off.

In your method, the pixels of the first row in the downsampled image represent 4x4 color regions centered at 2, 7, 12, 17 (instead of 2, 6, 10, 14). The resulting scaling factor is not exactly 4 but 5. If we downscaled the image as you suggest, the principal point after scaling should be 1.9 = 9.5 / 5 instead of 2.375.

In short, what really matters is that the way we scale the principal point is consistent with the way we scale the image.
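
A worked check of the numbers above (plain arithmetic):

d = 4
pp = 9.5            # principal point of the 19x19 image (its center)

# nerfstudio / colmap convention: scale the PP by the same factor used
# for the image.
pp_scaled = pp / d  # 2.375

# The evenly-scattered sampling has window centers spaced 5 apart
# (2, 7, 12, 17), i.e. an effective factor of 5, so consistency would
# instead require:
pp_alt = pp / 5     # 1.9

print(pp_scaled, pp_alt)  # 2.375 1.9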


wzy-99 commented on September 18, 2024

Yes, you are right. I made a mistake: I ignored that the pixel plane is also shifted.

[image: the original image overlaid with a red box (pixel plane for cx=2, cy=2) and a blue box (pixel plane after the shift to cx=2.375, cy=2.375)]

As we know, when rendering a 4x4 image with cx=2, cy=2, the pixel plane is exactly the red box area.

But following the code below,

    def rescale_output_resolution(
        self,
        scaling_factor: Union[Shaped[Tensor, "*num_cameras"], Shaped[Tensor, "*num_cameras 1"], float, int],
        scale_rounding_mode: str = "floor",
    ) -> None:
        """Rescale the output resolution of the cameras.

        Args:
            scaling_factor: Scaling factor to apply to the output resolution.
            scale_rounding_mode: round down or round up when calculating the scaled image height and width
        """
        if isinstance(scaling_factor, (float, int)):
            scaling_factor = torch.tensor([scaling_factor]).to(self.device).broadcast_to((self.cx.shape))
        elif isinstance(scaling_factor, torch.Tensor) and scaling_factor.shape == self.shape:
            scaling_factor = scaling_factor.unsqueeze(-1)
        elif isinstance(scaling_factor, torch.Tensor) and scaling_factor.shape == (*self.shape, 1):
            pass
        else:
            raise ValueError(
                f"Scaling factor must be a float, int, or a tensor of shape {self.shape} or {(*self.shape, 1)}."
            )

        self.fx = self.fx * scaling_factor
        self.fy = self.fy * scaling_factor
        self.cx = self.cx * scaling_factor
        self.cy = self.cy * scaling_factor
        if scale_rounding_mode == "floor":
            self.height = (self.height * scaling_factor).to(torch.int64)
            self.width = (self.width * scaling_factor).to(torch.int64)
        elif scale_rounding_mode == "round":
            self.height = torch.floor(0.5 + (self.height * scaling_factor)).to(torch.int64)
            self.width = torch.floor(0.5 + (self.width * scaling_factor)).to(torch.int64)
        elif scale_rounding_mode == "ceil":
            self.height = torch.ceil(self.height * scaling_factor).to(torch.int64)
            self.width = torch.ceil(self.width * scaling_factor).to(torch.int64)
        else:
            raise ValueError("Scale rounding mode must be 'floor', 'round' or 'ceil'.")

we get cx=2.375, cy=2.375, which shifts the red box to the blue one.

And the blue box is indeed the same as the downsampled GT produced by the convolution method.
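
For reference, a standalone sketch of the "floor" branch arithmetic above, applied to the 19x19 example (the helper name is mine):

def rescale_floor(cx: float, width: int, s: float):
    # Mirrors rescale_output_resolution with scale_rounding_mode="floor":
    # intrinsics are multiplied by s, sizes are truncated to int.
    return cx * s, int(width * s)

cx, w = rescale_floor(9.5, 19, 0.25)
print(cx, w)  # 2.375 4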

Thank you very much!!!🌹
