
scikit-surgerycalibration's Introduction

scikit-surgery


SciKit-Surgery is a collection of compact libraries developed for surgical navigation. Individual libraries can be combined using Python to create clinical applications for translational research. However, because each application's requirements are unique, the individual SciKit-Surgery libraries are kept independent, enabling them to be maintained, modified and combined in new ways to create new clinical applications. Keeping the libraries independent enables researchers to implement novel algorithms within a small library that can be readily reused and built on by the research community.

A typical clinical application might consist of an imaging source (e.g. SciKit-SurgeryBK to stream ultrasound images), a tracking source (e.g. SciKit-SurgeryNDITracker) to locate the images in space, an image processor (e.g. SciKit-SurgeryTorch) to segment anatomy from the image, and a visualisation layer (e.g. SciKit-SurgeryVTK).

SciKit-Surgery is developed at the Wellcome EPSRC Centre for Interventional and Surgical Sciences, part of University College London (UCL).

Packages

Please see Documentation for further module details.

Tutorials

Tutorials are split into three groups: those that show how to assemble SciKit-Surgery libraries into an application, those that concentrate on the workings of a single application, and those that are aimed at general education in image guided interventions using SciKit-Surgery.

General Tutorials

* ROS Integration
* scikit-surgeryvtk
* scikit-surgeryimage
* scikit-surgerycalibration
  * Camera Calibration

Educational Tutorials

Encountering Problems?

Please check the list of common issues.

Contributing

Please see the contributing guidelines.

Copyright 2018 University College London. scikit-surgery is released under the BSD-3 license. Please see the license file for details.

Acknowledgements

Supported by Wellcome and EPSRC.

scikit-surgerycalibration's People

Contributors

mattclarkson · mxochicale · tdowrick · thompson318


Forkers

sph001

scikit-surgerycalibration's Issues

Reproducible calibration tutorial

    The tutorial at https://scikit-surgerycalibration.readthedocs.io/en/latest/tutorials/calibration.html seems to work much better using a Jupyter notebook, especially for this bit: "If you hold the chessboard in view of your camera, and run the below cell repeatedly, it will grab some frames." However, I got a few errors:
  • For Chessboard Calibration, I got
      1 if number_of_views > 1:
      2 #     proj_err, params = calibrator.calibrate()
----> 3     proj_err, recon_err, params = calibrator.calibrate()
      4 
      5     print(f'Reprojection (2D) error is: {proj_err}')

ValueError: not enough values to unpack (expected 3, got 2)

Sorted with proj_err, params = calibrator.calibrate()
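As a sketch of the workaround, the unpacking can be made robust to either return arity. The `calibrate()` below is a hypothetical stand-in for `calibrator.calibrate()`, which returned two values in some releases and three in others; it is not the library's implementation.

```python
# Hypothetical stand-in for calibrator.calibrate(): some releases return
# (proj_err, params), others return (proj_err, recon_err, params).
def calibrate():
    return 0.42, {'camera_matrix': '...'}   # older two-value form

result = calibrate()
if len(result) == 3:
    proj_err, recon_err, params = result
else:
    proj_err, params = result
    recon_err = None                        # no reconstruction error available

print(proj_err, recon_err)  # 0.42 None
```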

  • For Dot Calibration, I get: error: OpenCV(4.6.0) :-1: error: (-5:Bad argument) in function 'circle'
---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
/tmp/ipykernel_44007/3599310274.py in <module>
      9     # Draw detected points on frame
     10     for point in img_pts:
---> 11         frame = cv2.circle(frame, (point[0][0], point[0][1]), 5, (255, 0 ,0), -1)
     12 
     13 print(f"Detected {number_of_points} points")

error: OpenCV(4.6.0) :-1: error: (-5:Bad argument) in function 'circle'
> Overload resolution failed:
>  - Can't parse 'center'. Sequence item with index 0 has a wrong type
>  - Can't parse 'center'. Sequence item with index 0 has a wrong type
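The "Can't parse 'center'" overload error arises because cv2.circle requires the centre to be a tuple of plain Python ints, while the point detector returns float32 coordinates. A minimal sketch of the conversion (using only numpy, with an illustrative `img_pts` array shaped like the detector output):

```python
import numpy as np

# img_pts shaped like the point detector output: (N, 1, 2), dtype float32.
img_pts = np.array([[[10.5, 20.25]], [[30.0, 40.75]]], dtype=np.float32)

# cv2.circle rejects numpy float coordinates ("Can't parse 'center'"), so
# round each coordinate and cast to a plain Python int before drawing.
centres = [tuple(int(c) for c in np.rint(point[0])) for point in img_pts]
print(centres)  # [(10, 20), (30, 41)]
```

Each `centre` tuple can then be passed as the second argument of `cv2.circle(frame, centre, 5, (255, 0, 0), -1)`.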

conda env packages

## Some useful commands to manage your conda env:
## LIST CONDA ENVS: conda list -n *VE # show list of installed packages
## UPDATE CONDA: conda update -n base -c defaults conda
## INSTALL CONDA EV: conda env create -f *VE.yml
## UPDATE CONDA ENV: conda env update -f *VE.yml --prune
## ACTIVATE CONDA ENV: conda activate *VE
## REMOVE CONDA ENV: conda remove -n *VE --all

name: scikit-surgerycalibrationVE
channels:
  - defaults
  - conda-forge #vtk; tox;
  - anaconda #coverage; scipy; pyserial 
  #- fastai #opencv-python-headless
dependencies:
  - python=3.7
  - jupyter
  #- cookiecutter>=1.7.3
  - numpy>=1.11
  #- vtk=8.1.2
  - matplotlib
  # - six>=1.10
  # - scipy>=1.7.3
  - tox>=3.26.0
  - pytest>=7.1.2
  - pylint>=2.14.5
  #- pyserial 
  - pip>=22.2.2
  - pip:
     #- scikit-surgeryvtk>=1.0.6
     #- scikit-surgeryutils>=1.2.0
     #- scikit-surgerycore>=0.1.7
     - scikit-surgeryimage>=0.10.1 
     - scikit-surgerycalibration
     # - ndicapi>=3.2.6 
     #- PySide2<5.15.0
     - opencv-contrib-python-headless<4.6 # 4.5.3.56 works; 4.5.4.58 breaks (see issue #42); 4.6.0.66 is current
     #- pylint-exit

Originally posted by @mxochicale in #46 (comment)

Fix handeye tests

tests/video/test_hand_eye.py is failing with incorrect residuals.

assert recon_err_1 < one_pixel
E assert 1.0183434217891203 < 1

The cv2.triangulatePoints method generates different decimal values for the reconstructed points.

Triangulating points using cv2.triangulatePoints() instead of _iter_triangulate_point_w_svd() is definitely quicker (0.91077 milliseconds vs 2.426845 milliseconds); however, cv2.triangulatePoints seems to use a different computational method to resolve the points. See below the decimal values of the reconstructed points.

  • CODE SNIPPETS
FROM: def triangulate_points_hartley(...

# array shapes for input args _iter_triangulate_point_w_svd( [3, 4]; [3, 4]; [3, 1]; [3, 1] )
reconstructed_point = _iter_triangulate_point_w_svd(p1d, p2d, u1t, u2t)
#reconstructed_point = np.around(reconstructed_point, 3) # Optionally round to 3 decimal places (np.around)
print(f'\n {reconstructed_point}')

FROM: def triangulate_points_opencv(...
# array shapes for input args cv2.triangulatePoints( [3, 4]; [3, 4]; [2, 1]; [2, 1] )
reconstructed_point = cv2.triangulatePoints(p1mat, p2mat, u1t[:2], u2t[:2])
reconstructed_point /= reconstructed_point[3]  # Homogenize
#reconstructed_point = np.around(reconstructed_point, 3) # Optionally round to 3 decimal places (np.around)
print(f'\n {reconstructed_point}')


  • LOGS
$ pytest -v -s tests/algorithms/test_triangulate.py #for individual tests

tests/algorithms/test_triangulate.py::test_triangulate_points_hartley
 [[  9.88256179]
 [-22.44358672]
 [127.9216597 ]]

 [[ 48.46473971]
 [-23.09607652]
 [119.95244623]]

 [[  6.94680311]
 [  1.97332699]
 [115.79217294]]

 [[ 44.77458535]
 [  1.76836361]
 [106.8097294 ]]

 Elapsed time for at.triangulate_points_hartley(): 2.426845 millisecs

 [[  9.88261104]
 [-22.44364881]
 [127.92238956]
 [  1.        ]]

 [[ 48.46887878]
 [-23.09824868]
 [119.96328148]
 [  1.        ]]

 [[  6.94682563]
 [  1.97329098]
 [115.79279036]
 [  1.        ]]

 [[ 44.77479814]
 [  1.76837639]
 [106.81026738]
 [  1.        ]]

 Elapsed time for at.triangulate_points_opencv(): 0.91077 millisecs

 rms_hartley: 1.2954729076604021 and test (rms_hartley < 1.5)

 rms_hartley_opencv:  1.3009661570992705 and test (rms_hartley < 1.5)

I guess we might like to dig into the implementation of cv2.triangulatePoints to understand where the differences come from.
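For reference, both routines are solving the same linear (DLT-style) triangulation problem, so small numerical differences between solvers are expected. A minimal self-contained sketch of that computation, with synthetic projection matrices and an illustrative `triangulate_dlt` helper (not the library's API):

```python
import numpy as np

def triangulate_dlt(p1, p2, uv1, uv2):
    """Triangulate one 3D point from two 3x4 projection matrices and
    normalised pixel coordinates, via SVD of the DLT system."""
    u1, v1 = uv1
    u2, v2 = uv2
    a = np.vstack([u1 * p1[2] - p1[0],
                   v1 * p1[2] - p1[1],
                   u2 * p2[2] - p2[0],
                   v2 * p2[2] - p2[1]])
    _, _, vt = np.linalg.svd(a)
    x = vt[-1]                      # null-space vector of the DLT system
    return x[:3] / x[3]             # dehomogenise

# Two synthetic cameras: identity intrinsics; second camera translated in x.
p1 = np.hstack([np.eye(3), np.zeros((3, 1))])
p2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.5, -0.2, 4.0])                           # known 3D point
uv1 = point[:2] / point[2]                                   # project, camera 1
uv2 = (point + np.array([-1.0, 0.0, 0.0]))[:2] / point[2]    # project, camera 2

recovered = triangulate_dlt(p1, p2, uv1, uv2)
print(np.allclose(recovered, point, atol=1e-6))  # True
```

With exact (noise-free) projections both SVD-based iteration and cv2.triangulatePoints should agree to this precision; the sub-pixel differences seen in the logs come from noisy inputs interacting with each solver's conditioning.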

Visualise results of pivot calibration

Investigate how much effort it would be to add graphical results to pivot calibration. For example, could we use matplotlib or something from scipy to make nice plots that make the errors clearer?

Stereo calibration, optimising extrinsics only

Adapt stereo calibration to enable optimising the extrinsics only, for examples such as a stereo laparoscope, which has fixed focus and fixed stereo geometry; hence the intrinsics can be calibrated accurately in the lab.

Errors in unit test

The following tests produce errors:

test_charuco_plus_chessboard.py
test_chessboard_calibration.py
test_hand_eye.py
test_precalib.py

logs for test_precalib.py:

==================================================================================================== FAILURES =====================================================================================================
_____________________________________________________________________________________________ test_charuco_dataset_B ______________________________________________________________________________________________

    def test_charuco_dataset_B():
    
        calib_dir = 'tests/data/precalib/data_moved_scope'
        calib_driver = get_calib_driver(calib_dir)
    
        stereo_reproj_err, stereo_recon_err, _ = \
            calib_driver.calibrate()
    
        tracked_reproj_err, tracked_recon_err, _ = \
            calib_driver.handeye_calibration()
    
        print(stereo_reproj_err, stereo_recon_err, tracked_reproj_err, tracked_recon_err)
>       assert stereo_reproj_err < 1
E       assert 7.116922375480872 < 1

tests/video/test_precalib.py:92: AssertionError
_______________________________________________________________________________________________ test_precalbration ________________________________________________________________________________________________

    def test_precalbration():
        """ Use intrinsics from A to calibration B, currently failing. """
        left_intrinsics = np.loadtxt('tests/data/precalib/precalib_base_data/calib.left.intrinsics.txt')
        left_distortion = np.loadtxt('tests/data/precalib/precalib_base_data/calib.left.distortion.txt')
        right_intrinsics = np.loadtxt('tests/data/precalib/precalib_base_data/calib.right.intrinsics.txt')
        right_distortion = np.loadtxt('tests/data/precalib/precalib_base_data/calib.right.distortion.txt')
        l2r = np.loadtxt('tests/data/precalib/precalib_base_data/calib.l2r.txt')
        l2r_rmat = l2r[0:3, 0:3]
        l2r_tvec = l2r[0:3, 3]
    
        calib_dir = 'tests/data/precalib/data_moved_scope'
        calib_driver = get_calib_driver(calib_dir)
    
        stereo_reproj_err, stereo_recon_err, _ = \
            calib_driver.calibrate(
                override_left_intrinsics=left_intrinsics,
                override_left_distortion=left_distortion,
                override_right_intrinsics=right_intrinsics,
                override_right_distortion=right_distortion,
                override_l2r_rmat=l2r_rmat,
                override_l2r_tvec=l2r_tvec)
    
        tracked_reproj_err, tracked_recon_err, _ = \
            calib_driver.handeye_calibration()
    
        print(stereo_reproj_err, stereo_recon_err, tracked_reproj_err, tracked_recon_err)
>       assert stereo_reproj_err < 4.5
E       assert 34.74794862170866 < 4.5

tests/video/test_precalib.py:124: AssertionError
================================================================================================ warnings s

Next steps

  • create a branch from 38bc2e6 and test github actions
  • change the version of opencv in requirements and setup; currently 'opencv-contrib-python-headless' with no version pin, to be changed to opencv-contrib-python-headless<4.6. See #42

Set up deploy to PyPI

Need to pass secure variables to Travis to deploy to PyPI. Hold off doing this until we have something implemented.

Image is undistorted/warped twice in detect_points_in_stereo_canonical_space if using dot detector

Iterative calibration calls detect_points_in_stereo_canonical_space(), which has this bit of code:

    for j, _ in enumerate(left_images):
        left_undistorted = cv2.undistort(
            left_images[j],
            left_camera_matrix,
            left_distortion_coeffs,
            left_camera_matrix
        )
        left_ids, left_obj_pts, left_img_pts = \
            left_point_detector.get_points(left_undistorted)

        if left_ids is not None \
                and left_ids.shape[0] >= minimum_points_per_frame \
                and right_ids is not None \
                and right_ids.shape[0] >= minimum_points_per_frame:

            left_common_points = match_points_by_id(left_ids,
                                                    left_img_pts,
                                                    reference_ids,
                                                    reference_image_points)
            left_homography, _ = \
                cv2.findHomography(left_common_points[0:, 0:2],
                                   left_common_points[0:, 2:4])
            left_warped = cv2.warpPerspective(left_undistorted,
                                              left_homography,
                                              reference_image_size,)

            left_ids, left_obj_pts, left_img_pts = \
                left_point_detector.get_points(left_warped)

The image is undistorted and warped. However, the get_points() function for dotty_grid_point_detector also undistorts and warps the image, so we end up with something like this, which messes up the rest of the processing:

[image: example frame after being undistorted and warped twice]

Other point detectors (charuco etc) are unaffected as they don't undistort or warp the input image.
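To see why the duplicated correction corrupts the result: applying the same homography twice is equivalent to applying H @ H once, so the second warp over-corrects. An illustrative numpy example (a pure-translation homography, not the library's code):

```python
import numpy as np

# Warping by homography h twice equals warping once by h @ h, so an image
# undistorted/warped by both the caller and get_points() is over-corrected.
h = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])            # translate by (5, 3)

point = np.array([10.0, 20.0, 1.0])        # homogeneous pixel coordinate
once = h @ point
twice = h @ (h @ point)
print(once[:2].tolist(), twice[:2].tolist())  # [15.0, 23.0] [20.0, 26.0]
```

The same doubling applies to the undistortion step, which is why only the dotty grid detector (the one that undistorts and warps internally) is affected.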

Use opencv-headless

If we can use opencv-headless, we can help avoid Qt conflicts downstream. Currently the apps in the ui folder use OpenCV GUI windows. If we can move or modify them, then we can change the opencv requirement.

Pivot calibration: erroneous mixing algorithms by recursion

Hello, I'm not a good Python programmer, but it seems that the pivot_calibration_with_ransac function calls its recursion badly:

def pivot_calibration(tracking_matrices, configuration=None):

will call

return pivot_calibration_with_ransac(

which calls

def pivot_calibration_with_ransac(tracking_matrices,

which in turn calls

pointer_offset, pivot_location, _ = pivot_calibration(sample)

with an implicit configuration=None parameter, so the recursive call falls through to

return pivot_calibration_aos(tracking_matrices)

and the intended method is never applied to the samples.

I've not traced the calls (I'm a noob in Python; does that kind of tool exist?); this is a simple "manual" code review, so I may be wrong. I guess you have forgotten to pass configuration in the recursive calls. Finally, your recursive algorithm is complex to understand: I think pivot_calibration_with_ransac should not call back into the top-level pivot_calibration function.
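A simplified sketch of the dispatch pattern described above. The function names mirror the library but the bodies are placeholders, not the real implementation; it illustrates forwarding the configuration into the recursion while stripping the selector that would otherwise recurse forever:

```python
# Simplified sketch of the dispatch; names mirror the library, bodies are
# illustrative placeholders only.
def pivot_calibration(tracking_matrices, configuration=None):
    use_ransac = bool(configuration and configuration.get('ransac'))
    if use_ransac:
        return pivot_calibration_with_ransac(tracking_matrices, configuration)
    return 'aos', len(tracking_matrices)   # stand-in for pivot_calibration_aos

def pivot_calibration_with_ransac(tracking_matrices, configuration):
    sample = tracking_matrices[:2]         # stand-in for random sampling
    # Strip the 'ransac' selector before recursing: each sample should be
    # solved with the base method. Forwarding the flag unchanged would
    # recurse forever, while dropping configuration entirely (the reported
    # bug) silently ignores any other settings it carries.
    inner_config = {k: v for k, v in configuration.items() if k != 'ransac'}
    return pivot_calibration(sample, inner_config)

result = pivot_calibration([1, 2, 3, 4], {'ransac': True})
print(result)  # ('aos', 2)
```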

Check residual errors

I think the sphere-fitting residual is returning the mean distance rather than the RMS. Ideally, write a calculate_residual(matrices, pivot_point, pointer_offset) method that all implementations can call.
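A hedged sketch of what that shared method could look like, following the signature suggested above (this is a proposal, not an existing library function): the RMS, not the mean, of the distances between each tracked tip position and the pivot point.

```python
import numpy as np

def calculate_residual(matrices, pivot_point, pointer_offset):
    """RMS distance between each tracked tip position and the pivot point.
    Sketch of the proposed shared helper; not an existing library function."""
    distances = []
    for mat in matrices:
        tip = mat[:3, :3] @ pointer_offset + mat[:3, 3]  # tip in tracker space
        distances.append(np.linalg.norm(tip - pivot_point))
    distances = np.asarray(distances)
    return np.sqrt(np.mean(distances ** 2))  # RMS, not np.mean(distances)

# Tiny check: two identity-rotation poses whose tips sit 1.0 and 3.0 units
# from a pivot at the origin -> RMS = sqrt((1 + 9) / 2).
m1 = np.eye(4); m1[:3, 3] = [1.0, 0.0, 0.0]
m2 = np.eye(4); m2[:3, 3] = [3.0, 0.0, 0.0]
rms = calculate_residual([m1, m2], np.zeros(3), np.zeros(3))
print(rms)  # sqrt(5) ~= 2.236, whereas the mean distance would be 2.0
```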
