
Comments (8)

nikhilaravi commented on August 27, 2024

@aluo-x If I understand your question correctly, you want to concatenate multiple meshes into one mesh and then render one image.

This is definitely possible with PyTorch3D. You just need to format the input data correctly yourself, since internally we assume that for a batch of meshes each mesh is rendered onto a separate image (the same assumption is made in NMR and Kaolin).

For example, take two ico spheres where one has a blue texture and the other has a red texture. We can initialize the textures as an RGB color per vertex, then create one mesh which contains both spheres and render it to a single image.

import torch

from pytorch3d.utils import ico_sphere
from pytorch3d.structures import Meshes, Textures

# Initialize two ico spheres of different sizes
mesh1 = ico_sphere(1)  # (42 verts, 80 faces)
mesh2 = ico_sphere(2)  # (162 verts, 320 faces)
verts1, faces1 = mesh1.get_mesh_verts_faces(0)
verts2, faces2 = mesh2.get_mesh_verts_faces(0)

# Initialize the textures as an RGB color per vertex
tex1 = torch.ones_like(verts1) 
tex2 = torch.ones_like(verts2)
tex1[:, 1:] *= 0.0  # red
tex2[:, :2] *= 0.0  # blue

# Create one mesh which contains two spheres of different sizes.
# To do this we can concatenate verts1 and verts2
# but we need to offset the face indices of faces2 so they index
# into the correct positions in the combined verts tensor. 

# Make the red sphere smaller and offset both spheres so they are not overlapping
verts1 *= 0.25  
verts1[:, 0] += 0.8
verts2[:, 0] -= 0.5
verts = torch.cat([verts1, verts2])  # (204, 3)

#  Offset by the number of vertices in mesh1
faces2 = faces2 + verts1.shape[0]  
faces = torch.cat([faces1, faces2])  # (400, 3)

tex = torch.cat([tex1, tex2])[None]  # (1, 204, 3)
textures = Textures(verts_rgb=tex)

mesh = Meshes(verts=[verts], faces=[faces], textures=textures)

# Initialize a renderer separately and then render the mesh
image = renderer(mesh)   # (1, H, W, 4)

The output as seen below is a single RGBA image which contains both meshes.

[Image: the two spheres rendered together in a single frame]

Let me know if that answered your question. Bear in mind that the texturing API is still experimental and we are working on improvements and more functionality.


NOTE

To learn how to initialize a renderer, please refer to one of the tutorials, e.g. camera position optimization.
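As a rough sketch only (the camera distance, angles, image size and light position below are illustrative assumptions, not values from this thread), a renderer could be initialized like this with a recent PyTorch3D release:

import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras, PointLights, RasterizationSettings,
    MeshRasterizer, MeshRenderer, SoftPhongShader, look_at_view_transform,
)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Camera looking at the origin (illustrative distance/angles)
R, T = look_at_view_transform(dist=2.7, elev=10, azim=20)
cameras = FoVPerspectiveCameras(device=device, R=R, T=T)

raster_settings = RasterizationSettings(image_size=512)
lights = PointLights(device=device, location=[[0.0, 0.0, 3.0]])

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SoftPhongShader(device=device, cameras=cameras, lights=lights),
)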



aluo-x commented on August 27, 2024

Wow, thanks for the extremely fast response and the great tool!
The example is very clear and is exactly what I was looking for. Somehow I missed the verts_rgb option while browsing the texture structure code. The documentation of the world/camera/image space conventions is also something many other renderers are missing.

One minor suggestion: this method (like the vertex color mode in DIB-R) is limited to some degree by the mesh resolution. It would still be a nice improvement to allow barycentric-like interpolation of textures per face, so instead of [F, 3] for the texture you could specify [F, T, T, T, 3].

This solves my problem so I will close the issue for now.


nikhilaravi commented on August 27, 2024

@aluo-x Great! :) Specifying a texture atlas with a T*T*3 texture map per face is a feature we are planning to add. Stay tuned!


bottler commented on August 27, 2024

@rainsoulsrx This is an old closed issue. There is now a function join_meshes_as_scene which makes joining meshes together into a single mesh easier. The "order" in which objects appear is determined mainly by their z-distances and the blending function, and slightly also by the settings of the rasterizer. I don't know what you mean by "correct", but if you are not getting what you expect then you might want to open a new issue.
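As a minimal sketch (assuming mesh1 and mesh2 are textured Meshes objects and renderer is already initialized), joining and rendering could look like:

from pytorch3d.structures import join_meshes_as_scene

# Combine the meshes into a single scene mesh; each keeps its own textures
scene = join_meshes_as_scene([mesh1, mesh2])
image = renderer(scene)  # (1, H, W, 4)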


rahuldey91 commented on August 27, 2024

I am facing problems with rendering a mesh whose Textures are initialized using verts_rgb. I used the same code as above, and my renderer is:

MeshRenderer(
  (rasterizer): MeshRasterizer()
  (shader): TexturedSoftPhongShader()
)

When I try to render using the following lines:

tex = torch.cat([tex1, tex2])[None]  # (1, 204, 3)
textures = Textures(verts_rgb=tex)

mesh = Meshes(verts=[verts], faces=[faces], textures=textures)

# Initialize a renderer separately and then render the mesh
image = renderer(mesh)   # (1, H, W, 4)

I get the following error:

-> image = renderer(mesh)   # (1, H, W, 4)
(Pdb) c
Traceback (most recent call last):
  File "main.py", line 93, in <module>
    main()
  File "main.py", line 68, in main
    loss_train = trainer.train(epoch, loaders)
  File "~/3DMM/pytorchnet_3d/train.py", line 369, in train
    image = renderer(mesh)   # (1, H, W, 4)
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/renderer.py", line 69, in forward
    images = self.shader(fragments, meshes_world, **kwargs)
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/shader.py", line 269, in forward
    texels = interpolate_texture_map(fragments, meshes)
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/texturing.py", line 43, in interpolate_texture_map
    faces_uvs = meshes.textures.faces_uvs_packed()
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/structures/textures.py", line 147, in faces_uvs_packed
    return list_to_packed(self.faces_uvs_list())[0]
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/structures/utils.py", line 116, in list_to_packed
    N = len(x)
TypeError: object of type 'NoneType' has no len()

However, the rendering works only when the Texture is initialized with verts_uvs, faces_uvs and texture_maps.


nikhilaravi commented on August 27, 2024

TexturedSoftPhongShader supports only textures specified as texture maps and vertex uv coordinates. Please use SoftPhongShader instead.
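As a sketch (assuming cameras, raster_settings, device and lights are defined as in the renderer setup earlier in this thread), switching shaders looks like:

from pytorch3d.renderer import MeshRenderer, MeshRasterizer, SoftPhongShader

# SoftPhongShader interpolates per-vertex colors, so Textures(verts_rgb=...) works
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SoftPhongShader(device=device, cameras=cameras, lights=lights),
)
image = renderer(mesh)  # (1, H, W, 4)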


nikhilaravi commented on August 27, 2024

@aluo-x support for textures as a per-face atlas of shape (F, R, R, 3) has now been added (based on the SoftRas implementation). Here is a complete example of how to load the textures as an atlas, create a mesh and render it: https://github.com/facebookresearch/pytorch3d/blob/master/tests/test_render_meshes.py#L468.
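A rough sketch of that workflow (the obj path is hypothetical, and the exact texture class, e.g. TexturesAtlas, depends on the PyTorch3D version; the linked test is the authoritative example):

from pytorch3d.io import load_obj
from pytorch3d.renderer import TexturesAtlas
from pytorch3d.structures import Meshes

# Load an obj and bake its texture map into a per-face (F, R, R, 3) atlas
verts, faces, aux = load_obj(
    "model.obj",               # hypothetical path
    load_textures=True,
    create_texture_atlas=True,
    texture_atlas_size=4,      # R = 4, so the atlas has shape (F, 4, 4, 3)
)
mesh = Meshes(
    verts=[verts],
    faces=[faces.verts_idx],
    textures=TexturesAtlas(atlas=[aux.texture_atlas]),
)
image = renderer(mesh)  # assumes a renderer is already initialized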


rainsoulsrx commented on August 27, 2024

(Quoting @nikhilaravi's two-sphere example from above.)

When the two objects overlap, how can I render them in the correct order?
