Comments (15)
@hanjoonwon unfortunately there is no way to do this without manually editing the mesh in some software. This is actually possible in code, however, before the mesh is generated. I saw that the SuGaR paper considered two different meshes that were later combined into a single one: a foreground mesh and a background mesh. The foreground mesh is simply a mesh that is bounded by the camera poses, and the background mesh is everything outside the bounds of the camera poses. This distinction can be made, but I don't think I will be making a PR for it anytime soon, due to some time issues :P
from nerfstudio.
I suggest you look into some popular open-source mesh tools, like MeshLab or Blender, to manually edit your meshes. It is actually pretty easy these days; with these tools you can manually move vertices and delete things.
@hanjoonwon have you looked into making a bounding box for your scene, and then generating the mesh? There are ways to crop the scene beforehand using these flags:
nerfstudio/nerfstudio/scripts/exporter.py, line 284 in 57fbc07
Thanks :) I know edit tools like MeshLab, but removing the background with them is quite bothersome.
Is it just a matter of viewing and adjusting the viewer to find the bounding box I want?
Yes, you can do some trial and error. If you are using a NeRF, the bounding box will automatically be between -1 and 1, so you can start cropping this down to a smaller bounding box that targets only the correct area. Good luck!
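In case it helps picture what that trial-and-error cropping does, here is a minimal sketch (illustrative code, not the nerfstudio exporter itself): keep only the points that fall inside an axis-aligned box chosen within the default [-1, 1] cube.

```python
import numpy as np

def crop_to_bbox(points, bbox_min, bbox_max):
    """Keep only points inside an axis-aligned bounding box.

    points: (N, 3) array in the normalized scene coordinates.
    bbox_min / bbox_max: 3-vectors, e.g. a sub-box of [-1, 1]^3.
    """
    points = np.asarray(points)
    lo = np.asarray(bbox_min)
    hi = np.asarray(bbox_max)
    # A point is kept only if every coordinate lies within the box.
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

# Example: shrink the default [-1, 1] box to target a smaller region.
pts = np.array([[0.0, 0.0, 0.0], [0.9, 0.9, 0.9], [-0.2, 0.1, 0.3]])
cropped = crop_to_bbox(pts, [-0.5, -0.5, -0.5], [0.5, 0.5, 0.5])
```

Shrinking `bbox_min`/`bbox_max` step by step until only the object remains is essentially what the exporter's crop flags let you do.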
I'm sorry to bother you, but if you don't mind, could you please point me to where that part of the code is in SuGaR? It seems like it would be easier to get just the objects if the foreground and background mesh generation in SuGaR were separated.
From this line downwards: https://github.com/Anttwo/SuGaR/blob/60fc76f9cfdc652e643e9cfa48252a88f3726ea5/sugar_extractors/coarse_mesh.py#L342
They distinguish between fg and bg based on camera centers. Later they simply merge the two meshes together, but maybe you can skip this and only keep fg_mesh. You can simulate this behaviour with the cropping bbox, by the way.
Thanks for the kind answer.
An additional question: is it possible to get just the mesh object automatically, like image segmentation, without having to adjust the bounding boxes and such by trial and error?
@hanjoonwon probably yes. Anything seems to be possible these days with deep learning/AI. But this is not implemented in nerfstudio. Masking with known masks should be straightforward.
Hi, I have another question.
Why is the point cloud generated by nerfstudio not the same size as the original target?
How can I restore the point cloud to its original size?
Tag @maturk with the question.
@Lizhinwafu, the default behaviour of nerfacto and the nerfstudio dataparser is to squeeze all of your camera poses into a [-1, 1] box (a cube of side 2); this is because the Instant-NGP hashgrid expects normalized coordinates, and also due to the contraction in nerfacto. When you generate the point cloud, the result is still in this contracted space. To undo this, you need to reverse the process that the nerfstudio dataparser does. The transformation and scaling information needed is stored in the dataparser_outputs.json file, in the same directory as your config.yml file. Please check out issue #1606 for more details and the math required to rescale your mesh back to the original scale.
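As a hedged sketch of that reversal (the convention here is an assumption; check the actual JSON and issue #1606 for the exact fields and ordering): if the dataparser maps original points as `p_norm = scale * (R @ p + t)` with a 3x4 `[R | t]` transform and a scalar scale, inverting it looks like:

```python
import numpy as np

def restore_original_scale(points, transform, scale):
    """Undo a dataparser-style normalization.

    Assumes points were mapped as p_norm = scale * (R @ p + t), where
    transform is a 3x4 [R | t] matrix (R a rotation) and scale a scalar.
    The field names and convention are illustrative; verify them against
    the JSON stored next to your config.yml.
    """
    T = np.asarray(transform)       # 3x4 [R | t]
    R, t = T[:, :3], T[:, 3]
    p = np.asarray(points) / scale  # undo the scaling
    # Undo rotation + translation; R^-1 = R^T for a rotation matrix,
    # and for row vectors x @ R computes R^T @ x.
    return (p - t) @ R

# Example round trip with an identity rotation and a pure translation.
orig = np.array([[1.0, 2.0, 3.0]])
T = np.hstack([np.eye(3), np.array([[0.5], [0.0], [0.0]])])
s = 0.25
norm = s * (orig @ T[:, :3].T + T[:, 3])  # forward normalization
back = restore_original_scale(norm, T, s)  # recovers orig
```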
#2924 I'm sorry to bother you, but I was wondering if you could give me some advice when you have time? I saw the issue and worked on rescaling to the original size, but the result is still too small compared to the actual object.
I think the point cloud exported by nerfstudio must obtain its original size through an external reference object.
Thanks for the answer, can I ask how you did it?
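One common way to do the reference-object approach (a hypothetical helper, not part of nerfstudio): pick two points on an object of known length in the exported cloud (e.g. in MeshLab), compute the ratio of true length to measured length, and scale the whole cloud by it.

```python
import numpy as np

def rescale_by_reference(points, p_a, p_b, real_length):
    """Scale a point cloud to metric size using a reference object.

    p_a, p_b: two points on the reference object, picked in the
    exported cloud; real_length: the object's true length in your
    target units. Hypothetical helper, not part of nerfstudio.
    """
    measured = np.linalg.norm(np.asarray(p_b) - np.asarray(p_a))
    factor = real_length / measured
    return np.asarray(points) * factor

# Example: the reference spans 0.1 units in the cloud but is really 1.0.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
scaled = rescale_by_reference(pts, pts[0], pts[1], 1.0)
```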