
Comments (13)

aluo-x commented on July 24, 2024

Much appreciated! You really went above and beyond. It really has answered all of my questions. Hopefully the z interpolation bit can be merged into master.

The project I am working on involves many small, potentially very irregular meshes in a single scene, which can warp and change depending on several factors. The loss is currently RGB & mask with the usual regularization applied. In this respect, having a texture is less meaningful than having per-face or per-vertex colorization options (neural mesh renderer does really well with its per-face option), since parts of the scene can change depending on the warp.

But there are also a few other losses currently being experimented with that depend on outputs that other differentiable renderers either do not provide, or do not provide in sufficiently high quality. Hopefully once the project is a little more mature I can talk about it more.

Closing this issue now. Thanks again!

from pytorch3d.

nikhilaravi commented on July 24, 2024

@aluo-x I can try to replicate this and get back to you tomorrow. I think it is due to some parameter setting in the blending. Can you try the PyTorch3d version again with the same gamma and sigma values as for SoftRas (it seems they are currently not the same)?

In the meantime, you could copy this function, play around with the settings, and then define a new shader which uses it (e.g. copy TexturedPhongShader but swap in your modified blending function).
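As a rough illustration of what such a modified blending function might compute, here is a minimal pure-PyTorch sketch of softmax depth blending. This is not pytorch3d's actual softmax_rgb_blend (which also handles per-face coverage probabilities derived from sigma, and a background color); the function name and tensor shapes below are assumptions for illustration only:

```python
import torch

def soft_rgb_blend_sketch(colors, zbuf, gamma=1e-4):
    """Illustrative soft blending of the K nearest face colors per pixel.

    colors: (H, W, K, 3) RGB color of each of the K nearest faces per pixel.
    zbuf:   (H, W, K) depth values, here with larger = closer to the camera.

    gamma acts as a softmax temperature: a small gamma means the nearest
    face dominates each pixel; a large gamma blends colors more uniformly.
    """
    weights = torch.softmax(zbuf / gamma, dim=-1)      # (H, W, K)
    return (weights[..., None] * colors).sum(dim=-2)   # (H, W, 3)
```

A custom shader could then call a function like this in place of the stock blending step while reusing the rest of the texturing/lighting pipeline.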


aluo-x commented on July 24, 2024

Playing around with gamma & sigma doesn't seem to cause the artifacts to disappear in the color image. The artifacts occur only if blur radius is set to greater than 0, even for very small values like 1e-5. Using hard/soft rgb blending doesn't seem to affect the artifacts either.

If this is a bug, then it's probably somewhere in the rasterization code?


nikhilaravi commented on July 24, 2024

@aluo-x There are a couple of things you need to change. First, pass in the blend_params to the TexturedPhongShader as currently the default values are being used so any changes you make will not be used:

    shader=TexturedPhongShader(
        blend_params=blend_params,
        device=device, 
        cameras=cameras,
        lights=lights
    )

Second, you need to enable barycentric clipping to [0, 1] before texture interpolation. I will add this in a pull request soon, but in the meantime you can enable it by changing this line to:

pixel_uvs = interpolate_face_attributes(fragments, faces_verts_uvs, bary_clip=True)

There are a few more things I am debugging to resolve the artifacts. I will get back to you shortly!


aluo-x commented on July 24, 2024

Apologies, I made an error when copy-pasting the code. My full notebook had too much unrelated stuff to paste here in full.

Minor question: by clamping the values to [0, 1], wouldn't we be stopping the gradient at the border? I'm unsure if this would matter, but would something like:

clipped[clipped>1.0] = torch.max(clipped[clipped<1.0])

be necessary? Or do we not need the gradients to be calculated at the edge? Thanks for the help debugging this.


nikhilaravi commented on July 24, 2024

@aluo-x we have a function which does the barycentric clipping see here. To enable this you just need to set bary_clip=True at the line mentioned above. This uses the torch.clamp function which is differentiable.

In addition to the two steps mentioned above, the last step to resolve the black artifacts is to interpolate the z coordinate after barycentric clipping. To do this, pass meshes into the softmax_rgb_blend function here, i.e.

images = softmax_rgb_blend(colors, fragments, self.blend_params, meshes)

Then modify the softmax_rgb_blend function to take meshes as an input argument and then replace line 167 with:

# Reinterpolate the z values using clipped barycentrics
verts_z = meshes.verts_packed()
faces = meshes.faces_packed()
faces_verts_z = verts_z[faces][..., 2][..., None]
pixel_z = interpolate_face_attributes(fragments, faces_verts_z, bary_clip=True)
pixel_z = pixel_z.squeeze()[None, ...]  # (1, H, W, K)

Bear in mind that the lighting/texturing approaches in SoftRas and PyTorch3D are different, so they may give slightly different results:

  • SoftRasterizer uses a texture atlas method where each face has a (T, T, 3) map; lighting/flat shading is applied to the texture map before rasterization.
  • PyTorch3D interpolates textures using uv coordinates and a texture image after rasterization; lighting/Phong shading is applied after texture interpolation.
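To make the uv-interpolation idea in the second bullet concrete, here is a tiny pure-Python sketch of bilinearly sampling a texture at continuous uv coordinates. The names and conventions here are mine, not pytorch3d's (the real implementation is batched, operates on multi-channel torch tensors, and handles v-axis orientation and padding modes); a single-channel texture is used to keep the sketch short:

```python
def sample_texture(texture, u, v):
    """Bilinearly sample a single-channel texture (list of rows)
    at continuous uv coordinates in [0, 1]."""
    h, w = len(texture), len(texture[0])
    x, y = u * (w - 1), v * (h - 1)          # continuous pixel coordinates
    x0, y0 = int(x), int(y)                  # top-left neighboring texel
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0                  # fractional offsets
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bot = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Per-pixel uv coordinates come from interpolating the face's vertex uvs with the (clipped) barycentric coordinates, which is why out-of-range barycentrics produce out-of-range uvs and the artifacts discussed above.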

We have plans to support the texture atlas method soon.

Here are some example outputs with different settings for sigma and gamma:

sigma = 5e-4, gamma = 1e-4 [image]

sigma = 1e-3, gamma = 1e-3 [image]

sigma = 1e-4, gamma = 1e-3 [image]

Gamma controls the opacity of the image, so lower gamma means the image is more transparent - in the image below you can see part of the back of the cow.

sigma = 1e-6, gamma = 1e-2 [image]
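The temperature-like role of gamma can be seen in isolation with a small numeric sketch of the softmax weights over per-pixel depths (illustrative only - the real blending also includes the sigma-based coverage probabilities and a background weight, which drive the transparency behavior described above):

```python
import math

def softmax_weights(zs, gamma):
    """Softmax over depth values with temperature gamma."""
    exps = [math.exp(z / gamma) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

# Two overlapping faces per pixel: near (z = 0.9) and far (z = 0.1).
sharp = softmax_weights([0.9, 0.1], gamma=1e-2)  # near face gets almost all weight
soft = softmax_weights([0.9, 0.1], gamma=1.0)    # weights much closer to uniform
```

With small gamma the nearest face dominates each pixel; with larger gamma the contributions of overlapping faces are mixed more evenly.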


aluo-x commented on July 24, 2024

Much appreciated! This does look more correct.

Minor nitpick, I was under the impression that clamp only passed gradients to points within or on a given bound, and a quick test seems to confirm this.

import torch
a = torch.randn(20, requires_grad=True)
b = a.clamp(1)  # clamp to min=1.0
b.sum().backward()
print(a.grad)  # entries where a < 1.0 were clamped and receive zero gradient

I have no idea if this is enough to be an issue. If not please feel free to close the issue.


nikhilaravi commented on July 24, 2024

@aluo-x If you want to still pass gradients in the backward pass but clamp in the forward pass then you can try using a 'fakeclamp' function. For example a simple version could look like this:

class fakeclamp(torch.autograd.Function):
    @staticmethod
    def forward(
        ctx,
        input_tensor,
        min=0.0,
        max=1.0,
    ):
        ctx.save_for_backward(input_tensor)
        return torch.clamp(input_tensor, min=min, max=max)

    @staticmethod
    def backward(
        ctx, grad_output_tensor
    ):
        input_tensor = ctx.saved_tensors[0]
        grad_input_tensor = input_tensor.new_ones(input_tensor.shape) * grad_output_tensor
        return grad_input_tensor, None, None

>>> clamp = fakeclamp.apply  # Function.apply takes positional arguments only
>>> a = torch.randn(20, requires_grad=True)
>>> b = clamp(a, 0.0, 0.6)
>>> b.sum().backward()
>>> print(a.grad)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
        1., 1.])

For reference, in SoftRasterizer the clamping is done in the CUDA kernel, not in PyTorch. In the backward pass kernel, there is some gradient flow to the clipped inputs (from what I understand, w > 1 clipped to 1.0 still gets a gradient, but w < 0 clipped to 0 does not). This isn't the complete implementation of the clamp + normalize backward, so it wouldn't pass any gradient tests against the PyTorch autograd version. The fakeclamp method above would give you gradients to all the inputs even if they are clipped.

We plan to add more support for blurry blending of textured meshes so stay tuned! :)


nikhilaravi commented on July 24, 2024

@aluo-x Let me know if this answers your question. Also, what is the task you are working on? It would be easier to identify the effect of clamping on the gradients given the use case, e.g. are you using the RGB images for computing a loss?


nikhilaravi commented on July 24, 2024

@aluo-x Great, glad to help! :) We'll get the z interpolation with clipping added soon and also support for the per face texturing.

Regarding the "outputs that other diff renderers do not provide or do not provide in sufficiently high quality" could you share what these are? No worries if you are not ready to share this yet.

It's great for us to know how people are using the PyTorch3d renderer so we can prioritize improvements and features!


nikhilaravi commented on July 24, 2024

@aluo-x did you end up needing to use the fakeclamp function instead of torch.clamp?


aluo-x commented on July 24, 2024

Thanks for the follow-up.

I'm currently using the torch.clamp function without any ill effects. It should be easy to switch to the fakeclamp function down the road if it ever becomes necessary, since PyTorch3D is so modular.

Minor (unrelated) question: would it be possible to have documentation on how to canonically do multi-GPU training? I could open another issue if that is better.


nikhilaravi commented on July 24, 2024

@aluo-x the barycentric clipping fix has now landed in master.

Yes, please raise a separate issue about multi-GPU training.

