
Comments (5)

puzzlepaint commented on August 11, 2024

It might help a little to try different values for the --refinement_window_half_extent parameter. With larger values, more features usually tend to be found. For example, if I use 25, the application finds some features in two of your four example images.

Otherwise, one has to dig into the source code. Generally, the feature detection was not designed for strong fisheye lenses. It assumes that the image locally behaves similarly to a pinhole camera. This assumption is used when fitting homographies to small groups of nearby feature matches in order to predict the positions of additional neighboring features. Unfortunately, the assumption is strongly violated here, so the feature positions are predicted quite wrongly. I think a proper fix would be to fit a slightly more complicated model (than only a homography) to the matches, one that can account for some of the distortion. It does not need to be perfect; it only needs to account for generic distortion well enough that the predictions become 'good enough' for the feature detection scheme to use them.

There are also a few hardcoded outlier rejection thresholds in feature_detector_tagged_pattern.cc, but I don't think that relaxing them would help significantly.

from camera_calibration.

chris-hagen commented on August 11, 2024

I have tested several parameters to get this to work. With --refinement_window_half_extent 25 it works best, as you suggested. I used the live feature detection within the tool to check whether features were found in all corners of the lens's FOV. You have to be very careful and patient to get the current algorithm to detect enough features.

Now I am trying to calibrate the camera with the recorded features. Unfortunately, I can't get good results at the moment. If I use a small feature set for calibration with a big pattern (e.g. pattern_resolution_17x24_segments_16_apriltag_0.yaml), the results are OK:
(Attached images: report_camera0_error_directions, report_camera0_error_magnitudes, report_camera0_errors_histogram, report_camera0_observation_directions; attachment: features.bin.zip)

If I use a larger feature set with many features detected with a smaller pattern (e.g. pattern_resolution_25x36_segments_16_apriltag_6.yaml), the results are very bad:
(Attached images: report_camera0_error_directions, report_camera0_error_magnitudes, report_camera0_errors_histogram, report_camera0_observation_directions; attachment: features_6.bin.zip)

Am I right that the smaller FOV and the fact that some image corners (black areas) contain no features break the calibration process? Is there a way to get a good calibration result?

Thanks in advance!


puzzlepaint commented on August 11, 2024

Am I right that [...] the fact that some image corners (black areas) contain no features break the calibration process?

Yes, I think so. The calibration program is unfortunately not designed for this kind of camera. It uses the bounding box of all detected feature locations in image space to define a rectangular image area that it tries to calibrate. This area can be seen in the bottommost visualizations that you posted. If the actual area that contains feature detections is circular, then the rectangle includes large parts without any detections (at its corners). Thus there are no, or almost no, constraints on the calibration values in these areas. They might happen to change in a way that breaks the point projection algorithm used by the program. I think this probably does not depend on the type of pattern used, but more or less on luck regarding how the calibration values are initialized and how they change during the optimization.

I don't have time to work on this right now, but I think it should be comparatively easy to fix this problem, in case you are willing to make a few changes to the source code. First, the image area to be calibrated needs to fit the detected feature points more tightly. One simple way to do this would be to draw a mask image manually. Another would be to compute the convex hull of the detected feature points (rather than their bounding box). Then the point projections need to be constrained to this area: each time a point moves outside the calibrated area during the optimization process used for projection, it should be clamped back to the closest point within the area. Also, one has to be careful to prevent points from getting stuck in small protrusions of the calibrated area. I imagine that a relatively smooth boundary of the area helps here, for example the polygon defined by the convex hull, rather than a boundary shaped by right-angled pixel edges.

An alternative might be to introduce some kind of regularization on the calibration values that tries to keep them smooth. Then the near-absence of data-based constraints on the values in the rectangle corners might be less of a problem, since the regularization should keep them at sane values.


puzzlepaint commented on August 11, 2024

For a very quick test of whether the suggested clamping would solve the problem, one could probably simply measure a suitable circle manually for the specific camera and add a clamping step to this circle directly below this line:

// Compute the test state (constrained to the calibrated image area).

(or in an analogous part if using another camera model)


chris-hagen commented on August 11, 2024

I have extended the noncentral model in use with the following lines of code:

      // Compute the test state (constrained to the calibrated image area).
      Vec2d test_result(
          std::max<double>(m_calibration_min_x, std::min(m_calibration_max_x + 0.999, result->x() - x_0)),
          std::max<double>(m_calibration_min_y, std::min(m_calibration_max_y + 0.999, result->y() - x_1)));

      // FOV circle, measured manually for this camera.
      const double fov_centre_x = 1306;
      const double fov_centre_y = 972;
      const double fov_centre_r = 1011;

      // Check whether the point lies outside the FOV.
      const double dist = std::sqrt(std::pow(fov_centre_x - test_result.x(), 2) +
                                    std::pow(fov_centre_y - test_result.y(), 2));
      if (dist > fov_centre_r) {
          // Clamp back to the closest point within the FOV.
          const double phi = std::atan2(test_result.y() - fov_centre_y, test_result.x() - fov_centre_x);
          const double clamped_x = fov_centre_x + fov_centre_r * std::cos(phi);
          const double clamped_y = fov_centre_y + fov_centre_r * std::sin(phi);
          /*
            LOG(INFO) << "Clamping calculated Point(" << test_result.x() << ", " << test_result.y() << ") to fov("
                      << clamped_x << ", " << clamped_y << ")";
           */
          test_result = Vec2d(clamped_x, clamped_y);
      }

After that the calibration delivered much better results:
(Attached images: report_camera0_error_directions, report_camera0_error_magnitudes, report_camera0_errors_histogram, report_camera0_observation_directions)

There are still some small spots to improve, but I think I can do that by providing more/better features at these points.

I will modify the existing code to accept the three constant parameters on the command line, so that I can calibrate this kind of camera given a circular FOV. It would be very nice if you delivered an enhanced version with a graphical function to select the FOV on the calibration image, but for the moment this will do the trick for me.

Thanks a lot!!!

from camera_calibration.
