Comments (3)

puzzlepaint commented on September 19, 2024

I want to know how to use the intrinsics0.yaml file and how to apply the calibrated results in my other algorithms.

For this, example implementations of the generic camera models that can be integrated into your code are provided at:
https://github.com/puzzlepaint/camera_calibration/tree/master/applications/camera_calibration/generic_models
These should be able to load the calibrated intrinsics files.
For more details, see the section "How to use generic camera models in your application" in the Readme.
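
As a rough sketch, an integration could look like the following; note that the class name, Load(), and Unproject() calls below are placeholders chosen for illustration, not necessarily the exact API of those files:

```cpp
#include <iostream>

#include <Eigen/Core>

#include "central_generic.h"  // assumed header name from generic_models

int main() {
  // Load the per-camera calibration written by the calibration tool.
  // Both the class name and the Load() function are hypothetical here.
  CentralGenericModel model;
  if (!model.Load("intrinsics0.yaml")) {
    std::cerr << "Failed to load intrinsics0.yaml" << std::endl;
    return 1;
  }

  // For a central model, each pixel maps to an observation direction
  // through the single projection center (Unproject() is hypothetical).
  Eigen::Vector3d direction;
  if (model.Unproject(320.5, 240.5, &direction)) {
    std::cout << "direction: " << direction.transpose() << std::endl;
  }
  return 0;
}
```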

Also, I can't get the right results when using the depth estimation. I wonder if there is a problem with my usage. Can you give me some instructions on how to run these programs correctly?

From what I can tell, you calibrated a single camera, made a copy of its intrinsics file, used two different images of this same camera as "stereo camera" images, and wrote camera_tr_rig.yaml such that it contains the image_tr_global poses of the individual images? I think that this should work in principle. From the information given, I don't think one could tell what the issue is that leads to the wrong depth estimates. Given that the calibration result appears to look fine (as far as I can tell from the small images), perhaps the camera_tr_rig poses would be most likely to contain a mistake, but I am just guessing here.
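
For reference, if the rig frame is chosen to coincide with camera 0, the pose of the second camera relative to the rig follows from composing the two image_tr_global poses; a minimal Eigen sketch (the variable names are illustrative, and A_tr_B denotes the transform mapping frame B coordinates into frame A):

```cpp
#include <Eigen/Geometry>

// Sketch: deriving camera_tr_rig from two image_tr_global poses when the
// rig frame is chosen to coincide with camera 0. With this choice,
// camera0_tr_rig is the identity.
Eigen::Isometry3d Camera1TrRig(const Eigen::Isometry3d& image0_tr_global,
                               const Eigen::Isometry3d& image1_tr_global) {
  // camera1_tr_rig = image1_tr_global * global_tr_image0
  return image1_tr_global * image0_tr_global.inverse();
}
```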

I should also mention that by default the stereo depth estimation only works for the central-generic camera model, as mentioned in the Readme, in case you used a different model.


liangshunkun commented on September 19, 2024

Thank you very much for your reply. I have been busy with courses recently, so I am sorry for not replying to you in time.
I have obtained the correct depth estimation results, but I do not understand the principle very well.
I read the code in the project, but I cannot understand it at all. You used Patch Match Stereo to get disparity maps and "d = bf / x" to get depth results. But in fact the focal length f doesn't exist in the generic camera model, and I'm very confused about it.
Can you briefly explain the principle of obtaining the depth values, or point me to some references, such as papers?
Thank you again for your work.


puzzlepaint commented on September 19, 2024

You used Patch Match Stereo to get disparity maps and "d = bf / x" to get depth results.

No, actually the Patch Match Stereo implementation determines inverse depths directly rather than disparities. The inverse depths only need to be inverted (depth = 1 / inverse_depth) to get the actual depth values, without involving the baseline or focal length (the latter is anyway not a parameter that is estimated by the generic camera models).

The reason for using inverse depths instead of actual depth values is that they behave more similarly to disparities, yielding a better distribution of values than actual depths. For example, in a given stereo configuration, the depth values from 10 meters to infinity (a huge range) might correspond to only a tiny disparity shift in the stereo image (depending on the relative pose of the cameras, etc.), whereas the depth values from 0 to 10 meters might cause a much larger shift. With inverse depths, the mapping is expected to be more even: in the stereo-rectified case, disparity and inverse depth differ only by the linear factor baseline * focal_length. While this does not hold exactly when there is image distortion rather than a stereo-rectified setup, it still applies approximately in usual cases.
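
To make the uneven mapping concrete, here is a small example; the baseline and focal length values are arbitrary and chosen only for illustration:

```cpp
#include <cstdio>

// In the stereo-rectified case, disparity = baseline * focal_length / depth,
// i.e. disparity is linear in inverse depth. The baseline (0.1 m) and focal
// length (500 px) below are arbitrary example values.
int main() {
  const double baseline = 0.1;        // meters (assumed)
  const double focal_length = 500.0;  // pixels (assumed)

  const double depths[] = {0.5, 1.0, 2.0, 5.0, 10.0, 100.0, 1000.0};
  for (double depth : depths) {
    const double inverse_depth = 1.0 / depth;
    const double disparity = baseline * focal_length * inverse_depth;
    std::printf("depth %7.1f m -> inverse depth %8.6f -> disparity %7.3f px\n",
                depth, inverse_depth, disparity);
  }
  // All depths beyond 10 m are squeezed into the 0..5 px disparity band,
  // while depths between 0.5 m and 10 m span 5..100 px: parameterizing by
  // inverse depth spreads the search range much more evenly.
  return 0;
}
```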

The basic principle for estimating a depth (or, analogously, inverse depth) value directly, without using disparities as in the stereo-rectified case, is actually rather simple: the calibrated camera models and the known relative pose of the cameras make it possible to take a pixel in one image, assume some (inverse) depth hypothesis for it, and transform it into the other (stereo) image. One can then check how similar the original pixel is to the corresponding pixel in the other (stereo) image. By testing a number of different depth hypotheses (as done in the Patch Match Stereo algorithm), the best hypothesis can be picked.
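
As a rough sketch of that principle (the CameraModel interface below is a placeholder rather than the project's actual API, and a real implementation compares whole patches, e.g. with ZNCC, instead of single pixels):

```cpp
#include <functional>
#include <limits>

#include <Eigen/Geometry>

// Placeholder camera interface; the generic models in the project provide
// equivalent project/unproject functionality.
struct CameraModel {
  virtual ~CameraModel() = default;
  // Pixel -> observation direction in the camera frame.
  virtual Eigen::Vector3d Unproject(const Eigen::Vector2d& px) const = 0;
  // 3D point in the camera frame -> pixel; returns false if not visible.
  virtual bool Project(const Eigen::Vector3d& point,
                       Eigen::Vector2d* px) const = 0;
};

// Cost of one inverse depth hypothesis for one reference pixel: transform
// the pixel into the stereo image under the hypothesis and compare
// intensities there.
double HypothesisCost(
    const CameraModel& ref_camera, const CameraModel& stereo_camera,
    const Eigen::Isometry3d& stereo_tr_ref, const Eigen::Vector2d& ref_px,
    double inverse_depth,
    const std::function<double(const Eigen::Vector2d&)>& ref_intensity,
    const std::function<double(const Eigen::Vector2d&)>& stereo_intensity) {
  // Unproject the pixel and scale the ray to the hypothesized depth.
  const Eigen::Vector3d point_in_ref =
      ref_camera.Unproject(ref_px) / inverse_depth;
  // Move the 3D point into the stereo camera frame and project it.
  const Eigen::Vector3d point_in_stereo = stereo_tr_ref * point_in_ref;
  Eigen::Vector2d stereo_px;
  if (!stereo_camera.Project(point_in_stereo, &stereo_px)) {
    return std::numeric_limits<double>::infinity();  // outside the image
  }
  const double diff = ref_intensity(ref_px) - stereo_intensity(stereo_px);
  return diff * diff;  // lower cost = better hypothesis
}
```

Patch Match Stereo then samples, propagates between neighboring pixels, and refines such hypotheses, keeping the one with the lowest cost for each pixel.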
