Comments (11)
instead of dividing by scale what happens if you multiply by scale
from nerfstudio.
There is no bounding box ("bb") in the viewer, so you have to do some trial and error, sorry.
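Regarding the divide-vs-multiply question above: nerfstudio maps the input poses into its own internal space with a rigid transform followed by a scale, and (in recent versions) saves both in a `dataparser_transforms.json` file in the training output directory, with `"transform"` (3x4) and `"scale"` fields. The field names and the transform-then-scale order are assumptions based on recent versions; check your install if they differ. Under those assumptions, going from nerfstudio space back to the original COLMAP space means dividing by the scale first, then inverting the rigid transform:

```python
import numpy as np

def nerfstudio_to_original(points, transform, scale):
    """Map Nx3 nerfstudio-space points back to the original (COLMAP) space.

    Assumes the forward map was: x_ns = scale * (R @ x + t),
    with transform = [R | t] taken from dataparser_transforms.json.
    """
    pts = np.asarray(points) / scale         # undo the scale (applied last)
    R, t = transform[:, :3], transform[:, 3]
    return (R.T @ (pts - t).T).T             # invert the rigid transform

# Example with placeholder values (normally you would json.load these from
# outputs/<run>/dataparser_transforms.json):
transform = np.hstack([np.eye(3), np.array([[0.1], [0.2], [0.3]])])
scale = 0.25
pts_ns = scale * (np.array([[1.0, 2.0, 3.0]]) + np.array([0.1, 0.2, 0.3]))
print(nerfstudio_to_original(pts_ns, transform, scale))  # ≈ [[1. 2. 3.]]
```

Note this only undoes nerfstudio's own normalization; the result is still in COLMAP's arbitrary scale, not metres.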
Thank you for the answer :)
When I compare the size before and after rescaling, it looks like the scale factor is applied correctly, but the size is still ridiculously small, like 0.7 mm or 3.5 mm. Is this an intrinsic issue with the old COLMAP, or an issue with the old process?
If you are using COLMAP to process your data, it is impossible to know the true scale of the scene. COLMAP does not output poses at real-world scale; everything is in an arbitrary coordinate system, because ground-truth scale can never be recovered from RGB images alone. The only way to know the true scale is to use some kind of VSLAM algorithm that incorporates real-world depth data when generating its pose estimates.
I think maybe there is a big misunderstanding here. Reconstructed 3D points and camera poses from COLMAP are defined only up to a scale factor, and the actual metric scale is unknown without additional information. If you want real-world scale for your reconstructed scene, you need to incorporate that extra information yourself.
For Instant-NGP, the result seems to come out close to the actual size, but since I can't get the ground-truth scale from ns-process-data on ordinary images, do I need to resize it with a tool like MeshLab?
It is not possible to reconstruct correct metric poses with COLMAP; you need some additional information. You can manually resize the output if you want (scale it by some number to match whatever size you desire in the real world).
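The manual resizing suggested above can be done with one measured reference: pick two points in the reconstruction whose real-world separation you know (e.g. measured with a ruler), compute the ratio, and multiply every vertex by it. The point coordinates and the 30 cm measurement below are placeholders you would read off in a viewer such as MeshLab:

```python
import numpy as np

# Two points picked in the reconstruction (arbitrary COLMAP units):
p_a = np.array([0.012, 0.003, 0.040])
p_b = np.array([0.015, 0.003, 0.041])
known_distance_m = 0.30  # the same span measured in reality: 30 cm

# Ratio of real-world distance to reconstructed distance:
scale = known_distance_m / np.linalg.norm(p_b - p_a)

vertices = np.random.rand(100, 3)   # stand-in for your mesh's Nx3 vertex array
vertices_metric = vertices * scale  # vertices now approximately in metres
```

The same scalar can be applied in MeshLab's transform dialog instead of in code; the result is only as accurate as your reference measurement.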
@hanjoonwon please don't open more issues about this same problem. When you just take photos with your camera, there is no way to recover the real scale of the scene from RGB images alone. Scale ambiguity is a fundamental characteristic of perspective projection (i.e. how the 3D world maps to 2D images): the reconstructed scene can be scaled uniformly without changing the projected images, so the metric scale of the 3D world cannot be recovered from 2D images without other prior information about the scene. You can manually rescale the mesh if you want, but it is not physically possible to get metric scale from COLMAP data processing alone.
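The scale ambiguity described above is easy to verify numerically: scaling the whole scene (3D points and camera translation together) by any factor leaves every projected pixel unchanged, so no algorithm can tell the two scenes apart from images. A minimal pinhole-camera sketch with made-up intrinsics:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of Nx3 world points to Nx2 pixel coordinates."""
    cam = (R @ points_3d.T + t[:, None]).T  # world -> camera frame
    pix = (K @ cam.T).T                     # camera -> image plane
    return pix[:, :2] / pix[:, 2:3]         # perspective divide

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # toy intrinsics
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
pts = np.array([[0.1, 0.2, 1.0], [-0.3, 0.1, 2.0]])

s = 7.3  # any scale factor
px_original = project(pts, K, R, t)
px_scaled = project(s * pts, K, R, s * t)  # scene and camera scaled together

print(np.allclose(px_original, px_scaled))  # True: identical images
```

Since both scenes produce identical images, COLMAP (or any SfM pipeline) can only ever return one arbitrary representative of this family of scaled scenes.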
If you want metric-scaled exports, you need metric-scaled poses. To get them, use RGB-D SLAM or another technique that incorporates depth data when estimating camera poses. If you have a recent iPhone with a depth (time-of-flight) sensor, you can get metric poses using e.g. https://www.spectacularai.com/mapping or other apps such as PolyCam.
I'm sorry, I think I misused issues due to my lack of basic knowledge. Thank you for your kind reply.
I guess it was a coincidence or a mistake that I thought the Instant-NGP mesh was life-size.
It is most likely just a coincidence that the poses in Instant-NGP happened to line up with the real-world poses. COLMAP outputs arbitrarily scaled poses, so they may closely match sometimes, but this is not generally the case.
Thank you very much for your work. My question seems related to this issue: what does the crop scale value mean in the new version of the viewer? I found that the crop scale does not match coordinates in the world coordinate system; for example, a crop scale of 4, 4, 4 is approximately equivalent to a box with a side length of 40 in world coordinates.
Also, what do "crop max" and "crop min" mean in Viewer_legacy?