
Comments (15)

maturk commented on June 29, 2024

@hanjoonwon unfortunately there is no way to do this without manually editing the mesh in some software. It is possible in code, however, before the mesh is generated. The SuGaR paper considered two meshes that were later combined into one: a foreground mesh bounded by the camera poses, and a background mesh covering everything outside those bounds. This distinction can be made, but I don't think I will be making a PR for it anytime soon, due to some time issues :P

from nerfstudio.

maturk commented on June 29, 2024

I suggest you look into some popular open-source mesh tools, like MeshLab or Blender, to manually edit your meshes. It is actually pretty easy these days with these tools: you can manually move vertices and delete things.


maturk commented on June 29, 2024

@hanjoonwon have you looked into making a bounding box for your scene and then generating the mesh? There are ways to crop the scene beforehand using these flags:

bounding_box_min: Tuple[float, float, float] = (-1, -1, -1)
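As a minimal sketch of how such a crop might look on the command line, assuming nerfstudio's `ns-export` CLI exposes these fields as `--bounding-box-min` / `--bounding-box-max` (verify the exact flag names with `ns-export poisson --help` for your version; the paths below are placeholders):

```shell
# Hypothetical paths; flag names should be checked against your nerfstudio version.
ns-export poisson \
  --load-config outputs/my-scene/nerfacto/config.yml \
  --output-dir exports/mesh/ \
  --bounding-box-min -0.5 -0.5 -0.5 \
  --bounding-box-max 0.5 0.5 0.5
```

Shrinking the box from the default (-1, -1, -1) to (1, 1, 1) range is the trial-and-error knob discussed below.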


hanjoonwon commented on June 29, 2024

> I suggest you look into some popular open-source mesh tools, like MeshLab or Blender, to manually edit your meshes. […]

Thanks :) I know editing tools like MeshLab, but removing the background is quite tedious.
Is it just a matter of viewing the scene and adjusting the crop in the viewer to find the bounding box I want?


maturk commented on June 29, 2024

> Thanks :) I know editing tools like MeshLab, but removing the background is quite tedious. Is it just a matter of viewing the scene and adjusting the crop in the viewer to find the bounding box I want?

Yes, you can do some trial and error. If you are using a NeRF, the bounding box will automatically be between -1 and 1, so you can start cropping it down to a smaller box that targets only the correct area. Good luck.


hanjoonwon commented on June 29, 2024

> @hanjoonwon unfortunately there is no way to do this without manually editing the mesh in some software. […] The SuGaR paper considered two meshes that were later combined into one: a foreground mesh and a background mesh. […]

I'm sorry to bother you, but if you don't mind, could you point me to where that part of the code is in SuGaR? It seems like it would be easier to extract just the object if the foreground and background mesh generation were separated.


maturk commented on June 29, 2024

> Could you point me to where that part of the code is in SuGaR? […]

From this line downwards: https://github.com/Anttwo/SuGaR/blob/60fc76f9cfdc652e643e9cfa48252a88f3726ea5/sugar_extractors/coarse_mesh.py#L342

They distinguish between foreground and background based on the camera centers. Later they simply merge the two meshes together, but you could skip that step and keep only fg_mesh. You can also approximate this behaviour with the cropping bounding box, by the way.
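A rough sketch of the foreground/background split described above (my own illustration of the idea, not SuGaR's actual code): points inside the axis-aligned bounding box of the camera centers count as foreground, everything else as background.

```python
import numpy as np

def foreground_mask(points: np.ndarray, camera_centers: np.ndarray) -> np.ndarray:
    """True for points inside the axis-aligned bounding box of the camera centers."""
    bbox_min = camera_centers.min(axis=0)
    bbox_max = camera_centers.max(axis=0)
    return np.all((points >= bbox_min) & (points <= bbox_max), axis=1)

# Toy example: cameras spanning the unit cube, one point inside, one far outside.
cams = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.5, 0.5, 0.0]])
pts = np.array([[0.5, 0.5, 0.5], [2.0, 2.0, 2.0]])
mask = foreground_mask(pts, cams)
print(mask)  # [ True False]
```

Keeping only `points[mask]` (and the faces whose vertices all survive) would give a foreground-only mesh; merging the two sets back reproduces the combined mesh the paper describes.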


hanjoonwon commented on June 29, 2024

> From this line downwards: https://github.com/Anttwo/SuGaR/blob/60fc76f9cfdc652e643e9cfa48252a88f3726ea5/sugar_extractors/coarse_mesh.py#L342
>
> They distinguish between foreground and background based on the camera centers. […]

Thanks for the kind answer.

An additional question: is it possible to extract just the object mesh automatically, like image segmentation, without having to adjust bounding boxes and such by trial and error?


maturk commented on June 29, 2024

@hanjoonwon probably, yes. Anything seems to be possible these days with deep learning/AI, but this is not implemented in nerfstudio. Masking with known masks should be straightforward.


Lizhinwafu commented on June 29, 2024

Hi, I have another question.

Why is the point cloud generated by nerfstudio not the same size as the original target?
How can I restore the point cloud to its original size?


hanjoonwon commented on June 29, 2024

> Why is the point cloud generated by nerfstudio not the same size as the original target? […]

Tag maturk with @ in the question.


maturk commented on June 29, 2024

> Why is the point cloud generated by nerfstudio not the same size as the original target? How can I restore the point cloud to its original size?

@Lizhinwafu, the default behaviour of nerfacto and the nerfstudio dataparser is to squeeze all of your camera poses into a [-1, 1] box (a cube with side length 2). This is because the Instant-NGP hashgrid expects normalized coordinates, and also due to the scene contraction in nerfacto. When you generate the point cloud, the result is still in this contracted space. To undo this, you need to reverse the transformation the nerfstudio dataparser applies. The transformation and scaling information is stored in the dataparser_outputs.json file in the same directory as your config.yml. Please check out issue #1606 for more details and the math required to rescale your mesh back to the original scale.
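As a rough sketch of the rescaling math (the field layout is an assumption; check your dataparser JSON and issue #1606 for the exact format your version writes), undoing a rigid transform followed by a uniform scale looks like:

```python
import numpy as np

# Sketch of undoing the dataparser normalization. Assumes the JSON provides a
# 3x4 rigid "transform" (rotation R | translation t, applied as p' = R p + t)
# and a scalar "scale" applied afterwards; field names vary by version.
def restore_original_scale(points, transform: np.ndarray, scale: float) -> np.ndarray:
    points = np.asarray(points, dtype=float) / scale  # undo the uniform scaling
    R, t = transform[:, :3], transform[:, 3]
    # Undo the rigid transform; R is orthonormal, so its inverse is R^T,
    # and for row-vector points that is a right-multiplication by R.
    return (points - t) @ R

# Toy check: identity rotation, zero translation, scale 2 halves all coordinates.
transform = np.hstack([np.eye(3), np.zeros((3, 1))])
restored = restore_original_scale([[2.0, 2.0, 2.0]], transform, 2.0)
print(restored)  # [[1. 1. 1.]]
```

The same function applies to mesh vertices; faces are untouched since the map is affine.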


hanjoonwon commented on June 29, 2024

> The transformation and scaling information is stored in the dataparser_outputs.json file […]. Please check out issue #1606 for more details […]

#2924 I'm sorry to bother you, but could you give me some advice when you have time? I followed the issue and rescaled to the original size, but the result is still too small compared to the actual object.


Lizhinwafu commented on June 29, 2024

> I saw the issue and worked on rescaling to the original size, but it's still too small compared to the actual object.

I think the point cloud exported by nerfstudio can only be restored to its true size by using an external reference object of known dimensions.


hanjoonwon commented on June 29, 2024

> I think the point cloud exported by nerfstudio can only be restored to its true size by using an external reference object of known dimensions.

Thanks for the answer. Can I ask how you did it?

