Comments (7)

puzzlepaint commented on June 30, 2024

The generic camera models do not require or use focal length and optical center parameters. These parameters are only necessary for parametric models that include them, and they are not required to do stereo matching. What do you think you need these parameters for?

Trying to elaborate on the statement about the rotation ambiguity:

  • Consider that you have a camera, calibrated with the central-generic model, that observes some 3D feature points. Taking a 3D feature point and projecting it to the camera image using the calibrated model gives a corresponding 2D observation point (pixel location) in the image.
  • Starting from this state, you can modify the camera calibration by rotating all of the generic model's observation directions around the camera origin by some amount. You further modify the camera pose by applying the exact opposite rotation to it. This gives you a modified state.
  • In this modified state, taking any 3D feature position and projecting it to the camera image using the modified calibration still returns the same 2D observation point as before, since the two rotations cancel each other out. The calibration is thus equivalent to the original one: with the modified camera pose, the modified observation directions still point in the same (absolute) directions. However, the direction values in the calibration were changed (and this would give you a different undistorted image, unless you deterministically compute a unique orientation for the undistorted image). A small numeric sketch of this follows below.
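A minimal sketch of this ambiguity in plain NumPy (hypothetical values; a single observation direction stands in for the calibrated grid):

import numpy as np

def rot_z(a):
    # Rotation around the z axis by angle a (radians).
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

d_cam = np.array([0.1, 0.2, 1.0])       # calibrated observation direction for some pixel
d_cam /= np.linalg.norm(d_cam)
R_wc = rot_z(0.5)                       # original camera orientation (world from camera)

R = rot_z(0.3)                          # arbitrary rotation applied to the calibration
d_cam_mod = R @ d_cam                   # modified calibration: rotated observation direction
R_wc_mod = R_wc @ R.T                   # camera pose rotated by the exact opposite amount

# Both states describe the same ray in world coordinates, so every 3D feature
# point still projects to the same 2D observation:
print(R_wc @ d_cam)                     # original
print(R_wc_mod @ d_cam_mod)             # modified: identical vector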


StarryPath commented on June 30, 2024

I also encountered this problem. Have you solved it?


StarryPath commented on June 30, 2024

@puzzlepaint I'm also baffled by this question. When I enter a pixel coordinate, how can I get the undistorted coordinates of that pixel from the generic model, similar to what the OpenCV function undistortPoints does?


puzzlepaint commented on June 30, 2024

Based on the description of undistortPoints in the OpenCV documentation, it seems to first unproject the given input pixels and then optionally apply some further steps (with R and P) that are independent of the original camera model.

The corresponding unprojection function in the generic camera model implementation, for example in the model in camera_calibration/applications/camera_calibration/generic_models/src/central_generic.h, would be its Unproject function.
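As a rough illustration of what such an unprojection does conceptually (not the repository's actual code; the real model interpolates its calibrated grid with a smoother scheme, and the grid values here are hypothetical), a minimal bilinear sketch in Python:

import numpy as np

def unproject_central_generic(grid, px, py, cell_w, cell_h):
    # grid: H x W x 3 array of calibrated unit observation directions.
    # (px, py): pixel to unproject; cell_w / cell_h: pixel size of one grid cell.
    gx, gy = px / cell_w, py / cell_h
    x0, y0 = int(np.floor(gx)), int(np.floor(gy))
    fx, fy = gx - x0, gy - y0
    d = ((1 - fx) * (1 - fy) * grid[y0,     x0    ] +
         fx       * (1 - fy) * grid[y0,     x0 + 1] +
         (1 - fx) * fy       * grid[y0 + 1, x0    ] +
         fx       * fy       * grid[y0 + 1, x0 + 1])
    return d / np.linalg.norm(d)        # unit observation direction for the pixel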

Regarding the question in the first post of this GitHub issue, I don't understand what exactly the question asked for.


StarryPath commented on June 30, 2024

@puzzlepaint Thank you for your reply.
When I multiply the pinhole camera matrix with the normalized direction d computed by OpenCV, I can get the undistorted pixel coordinates:

import numpy as np
import cv2

camera_matrix1 = np.array([[3.54146051e+03, 0.00000000e+00, 2.02736891e+03],
                           [0.00000000e+00, 3.54195528e+03, 1.52639175e+03],
                           [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])

dist_coeffs1 = np.array([-0.09569434, 0.10140725, 0.00025152, -0.000199, -0.0185507])

src1 = np.array([[[1000, 500]]], np.float32)

# Without R and P, undistortPoints returns normalized image coordinates.
dst3 = cv2.undistortPoints(src1, camera_matrix1, dist_coeffs1)

tmp = np.array([dst3[0][0][0], dst3[0][0][1], 1], np.float32)

# Reproject through the pinhole matrix to get the undistorted pixel.
pixel = np.dot(camera_matrix1, tmp.T)  # result: [985.92114981 485.65509851 1.]

But when I multiply the pinhole camera matrix with the direction d computed by the generic model, I do not get the same undistorted pixel coordinates:

// Eigen types assumed for illustration; "camera" is the calibrated central-generic model.
Eigen::Matrix3d camera_matrix1;
camera_matrix1 << 3.54146051e+03, 0.00000000e+00, 2.02736891e+03,
                  0.00000000e+00, 3.54195528e+03, 1.52639175e+03,
                  0.00000000e+00, 0.00000000e+00, 1.00000000e+00;

Eigen::Vector2d p2;
p2 << 1000, 500;

Eigen::Vector3d d;
camera.Unproject(p2, &d);

Eigen::Vector3d pixel = camera_matrix1 * d / d[2];  // result: [975.902, 522.557, 1]

How can I solve this problem?


puzzlepaint commented on June 30, 2024

Sorry, but I don't really understand what the problem is. What exactly would you like to achieve?

Do you think that the calculation should return the same value in both cases if both camera models have been calibrated for the same camera? If so, that is not necessarily the case. The central-generic model's observation directions can for example be arbitrarily rotated while rotating all camera poses by the same amount in the opposite direction in order to cancel out the effect. Doing this does not change the shape of the camera intrinsics; however, it will change the directions returned by camera.Unproject(p2, &d);. If you take these directions and project them to a pinhole image that is arbitrarily defined to look towards the z direction, then the results will differ even though the camera remained the same. While the generic camera calibration program does try to apply a canonical orientation to the generic calibrations, this is not necessarily the same orientation as what is obtained with a different camera model.
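To illustrate this with the numbers from the previous comment (a hypothetical NumPy sketch; the rotation R below stands in for the unknown orientation offset between the two calibrations and is not a value from the repository):

import numpy as np

K = np.array([[3541.46051, 0.0, 2027.36891],
              [0.0, 3541.95528, 1526.39175],
              [0.0, 0.0, 1.0]])

def rot_y(a):
    # Rotation around the y axis by angle a (radians).
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

d_pinhole = np.array([-0.2941, -0.2938, 1.0])    # roughly the normalized direction behind the OpenCV result
R = rot_y(0.01)                                  # hypothetical orientation offset between the calibrations
d_generic = R.T @ d_pinhole                      # the same physical ray, expressed in the generic calibration

p1 = K @ (d_pinhole / d_pinhole[2])              # roughly reproduces the OpenCV pixel above
p2 = K @ (d_generic / d_generic[2])              # a different pixel, although the ray is the same
p3 = K @ ((R @ d_generic) / (R @ d_generic)[2])  # rotating back first reproduces p1
print(p1, p2, p3)

The difference between p1 and p2 mirrors the difference between the two pixel values computed above; the two only agree once the relative orientation between the calibrations is accounted for.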


StarryPath commented on June 30, 2024

@puzzlepaint Thank you very much. I'm very happy to see your reply; I have been waiting for it.
I want to use the central-generic model with a stereo camera for a measurement task, and I have high requirements for the measurement accuracy. After reading your paper, I think the generic model may give higher accuracy. However, unlike with a parametric model, the calibration does not give me the focal length and optical center of the camera, so I cannot complete the measurement. I would like to know how to determine the optical center of a camera calibrated with the central-generic model.
Besides, I don't understand this sentence of yours: "The central-generic model's observation directions can for example be arbitrarily rotated while rotating all camera poses by the same amount in the opposite direction in order to cancel out the effect." Is there any relevant reference, or could you explain it in more detail? Thanks!

