
SlicerSandbox


A collection of modules for 3D Slicer that are already useful, but not yet finalized, polished, or proven useful enough to be included in the Slicer core.

  • Auto Save: automatically save the scene at specified time intervals.
  • Characterize Transform Matrix: quick geometric interpretation of a transformation matrix.
  • Combine Models: Boolean operations (union, intersection, difference) for models.
  • Curved Planar Reformat: straighten vessels, bones, or other structures for easier visualization, quantification, creating panoramic dental X-ray, etc.
  • Documentation Tools: tools for creating documentation on read-the-docs. It can generate html documentation from a Slicer source tree, convert module documentation from MediaWiki to markdown, etc.
  • Import ITK-Snap label description file: import label description (*.label, *.txt) files as a color table node
  • Import OCT: Load Topcon OCT image file (*.fda).
  • Import Osirix ROI: Load Osirix ROI files as segmentation.
  • Import SliceOmatic: Load SliceOmatic segmentation files.
  • Lights: customize lighting in 3D views.
  • Line Profile: compute and plot image intensity profile along a line.
  • Scene Recorder: record all MRML node change events into a json document.
  • Segment Cross-Section Area: Measure the cross-section of a segmentation along one of its axes. Note that more advanced tools for this now exist in the Segment Geometry and SlicerVMTK extensions.
  • Stitch Volumes: stitch together image volumes which share physical coordinate systems (e.g. CT scans with different stations)
  • Style Tester: test Qt style sheet changes.
  • User Statistics: collect statistics about what modules and tools are used and for how long.
  • Volume Rendering Special Effects: custom shaders for special volume rendering effects.

Lights

This module can be used to adjust lighting and rendering options in 3D views. Select all or specific 3D views at the top, then adjust options in sections below.

  • Lighting: configures a light kit that is used for rendering all 3D content, including volume rendering. The kit consists of the key light (typically the strongest light, simulating overhead lighting such as ceiling lights or the sun), the fill light (typically on the opposite side of the key light, simulating diffuse reflection of the key light), the headlight (moves with the camera and reduces the contrast between the key light and the fill light), and back lights (filling in the high-contrast areas behind the object). A short demo video is available.
  • Ambient shadows: uses the screen-space ambient occlusion (SSAO) method to simulate shadows. Size scale determines which details are emphasized; the scale is logarithmic, and the default value of 0 corresponds to 100 mm. Reduce the value to highlight smaller details (such as an uneven surface); use larger values to make large objects easier to distinguish. These settings have no effect on volume rendering.
  • Image-based lighting: necessary when models are displayed with PBR (physically based rendering) interpolation. The brightness of the image determines the amount of light reflected from object surfaces, and fragments of the image appear as reflections on the surfaces of smooth metallic objects. Currently only a single image (hospital_room) is provided via the user interface, but other images can be downloaded (for example, from polyhaven.com) and used via the module's Python API. These settings have no effect on volume rendering. See some examples here.

Remove CT table

Remove patient table from CT images fully automatically, by blanking out (filling with -1000 HU) voxels that are not included in an automatically determined convex-shaped region of interest.

If the boundary of the extracted region is chipped away in the output image, add fixed-size padding and/or increase the computation accuracy (in the Advanced section).

Curved Planar Reformat

The Curved Planar Reformat module allows "straightening" (default) or "stretching" a curved volume for visualization or quantification. The module provides a two-way spatial mapping between the original and straightened space.

The difference between the two modes is nicely explained in the paper by Kanitsar et al. (see the image below: a = projection, b = stretching, c = straightening):

  • Straightening: Fully straightens the tubular structure. This CPR method generates a linear representation of the vessel with varying diameter. The height of the resulting image corresponds to the length of the central axis.
  • Stretching: The surface defined by the vessel central axis and the vector-of-interest is curved in one dimension and planar in the other. As the distance between two consecutive points is preserved by this operation in image space, isometry is maintained. This isometry is the main advantage of this mode over the projected or straightened modes.

Adjust reformatting parameters for robust mapping

If the slice size is too large or the curve resolution is too fine, then in some regions the transform may map the same point to different positions (the displacement field folds into itself). In these regions the transform is not invertible.

To reduce these ambiguously mapped regions, decrease the slice size. If necessary, the curve resolution can be slightly increased as well (it controls how densely the curve is sampled to generate the displacement field; samples that are farther apart reduce the chance of contradictory samples).
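The folding failure can be pictured in one dimension. Below is a hypothetical sketch (not the module's code) in which straightening maps a curve parameter t to an output position p(t); once the displacement grows too large, p(t) is no longer monotonic, two parameters map to the same position, and the transform cannot be inverted there:

```python
import numpy as np

def has_fold(positions):
    """Return True if the sampled forward mapping folds back on itself,
    i.e. consecutive output positions are not strictly increasing."""
    return bool(np.any(np.diff(positions) <= 0))

t = np.linspace(0, 1, 50)
smooth = t + 0.05 * np.sin(2 * np.pi * t)  # small displacement: monotonic, invertible
folded = t + 0.40 * np.sin(2 * np.pi * t)  # large displacement: folds into itself

print(has_fold(smooth))  # False
print(has_fold(folded))  # True
```

Decreasing the slice size plays the role of shrinking the displacement amplitude in this toy model.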


You can quickly validate the transform by going to the Transforms module: in the Display/Visualization section, check all three checkboxes, set the straightened image as the Region, and set the visualization mode to Grid.

For example, this transform results in a smooth displacement field and is invertible in the visualized region:

(screenshot: smooth grid visualization of the displacement field)

If the slice size is increased, folding occurs:

(screenshot: grid visualization showing the displacement field folding into itself)

You can probably find a parameter set that works for a large group of patients. A single parameter set may work for all, but you may need a few different presets (small, medium, large).

Characterize Transform Matrix

It is often difficult to understand what a transform matrix is doing just by inspecting the numerical values. All the information is in those 12 numbers, but not in an easily understood format. Characterize Transform Matrix module provides information on what a transformation matrix is doing. For example, is it a rigid transformation or is there scaling or reflection? If there is scaling, what are the scale factors and stretch directions? Is there rotation? If so, what is the axis of rotation and how much rotation occurs around that axis? Alternatively, if we break down the rotation into a sequence of rotations around coordinate axes, what is the rotation about each axis?

How to use

Open the module and select the transform node you want to know about. An analysis such as the following will appear in the text box below:

This transformation does not include a reflection.
Scale factors and stretch directions (eigenvalues and eigenvectors of stretch matrix K):
  f0: +0.012% change in direction [1.00, 0.03, -0.08]
  f1: -2.843% change in direction [-0.08, -0.10, -0.99]
  f2: +3.248% change in direction [0.04, -0.99, 0.10]
This transform is not rigid! Total volume changes by +0.325%, and maximal change in one direction is +3.248%
The rotation matrix portion of this transformation rotates 15.0 degrees ccw (if you look in the direction the vector points) around a vector which points to [0.76, -0.59, -0.27] (RAS)
Broken down into a series of rotations around axes, the rotation matrix portion of the transformation rotates
  11.8 degrees ccw around the positive R axis, then
  8.4 degrees cw around the positive A axis, then
  5.0 degrees cw around the positive S axis
Lastly, this transformation translates, shifting:
  +194.2 mm in the R direction
  +73.4 mm in the A direction
  -1170.3 mm in the S direction
The order of application of the decomposed operations is stretch, then rotate, then translate. A different order of transform application would generally lead to a different set of decomposition matrices.

This analysis is for the matrix

0.985821 0.0570188 -0.157817 194.155
-0.0873217 1.01 -0.192319 73.4412
0.14329 0.203373 0.94 -1170.25
0 0 0 1
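The overall volume change reported in the analysis follows directly from the three per-direction scale factors: it is their product, minus one. A quick check of the numbers above:

```python
# Per-direction scale factors from the sample analysis:
# +0.012%, -2.843%, +3.248% change -> factors 1.00012, 0.97157, 1.03248
factors = [1.00012, 0.97157, 1.03248]

volume_factor = factors[0] * factors[1] * factors[2]
volume_percent_change = 100 * (volume_factor - 1)

print(round(volume_percent_change, 3))  # 0.325, matching the reported +0.325%
```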

Some Decomposition Details

This module uses polar decomposition to describe the components of a 4x4 transform matrix. The decomposition has the form: H = T * F * R * K, where H is the full homogeneous transformation matrix (with 0,0,0,1 as the bottom row), T is a translation-only matrix, F is a reflection-only matrix, R is a rotation-only matrix, and K is a stretch matrix. K can further be decomposed into three scale matrices, which can each be characterized by a stretch direction (an eigenvector) and a stretch factor (the associated eigenvalue). Points to be transformed are on the right, so the order of operations is stretching first, then rotation, then reflection, then translation.
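As an illustration (assuming an SVD-based polar decomposition; the module's own implementation may differ), the orthogonal and stretch parts of the example matrix's linear 3x3 portion can be recovered like this:

```python
import numpy as np

def polar_decompose(A):
    """Split a 3x3 matrix A into A = Q @ K, where Q is orthogonal
    (rotation, possibly with reflection, i.e. F @ R above) and K is the
    symmetric positive-definite stretch, via the SVD A = U S V^T."""
    U, S, Vt = np.linalg.svd(A)
    Q = U @ Vt                   # orthogonal part
    K = Vt.T @ np.diag(S) @ Vt   # symmetric stretch part
    hasReflection = np.linalg.det(Q) < 0
    return Q, K, hasReflection

# Linear 3x3 portion of the example matrix above
A = np.array([[0.985821, 0.0570188, -0.157817],
              [-0.0873217, 1.01, -0.192319],
              [0.14329, 0.203373, 0.94]])

Q, K, hasReflection = polar_decompose(A)
assert np.allclose(Q @ K, A)
print(hasReflection)  # False, as the sample analysis reports

# Eigenvalues of K are the stretch factors; det(A) is about 1.00325,
# matching the reported +0.325% total volume change.
scaleFactors = np.linalg.eigvalsh(K)
```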

If you would like access to the decomposed components of the matrix, you can call the relevant logic function of this module as follows:

import CharacterizeTransformMatrix
decompositionResults = CharacterizeTransformMatrix.CharacterizeTransformMatrixLogic().characterizeLinearTransformNode(transformNode)

decompositionResults will then be a namedtuple with all the information from the decomposition. For example, decompositionResults.rotationAngleDegrees holds the angle the transformation rotates by around the rotation axis. The named fields of the results are:

  • textResults: line-by-line list of the analysis text
  • isRigid: boolean; true if the largest stretch % change is less than 0.1% and there is no reflection
  • hasReflection: boolean; true if there is a reflection
  • scaleFactors: numpy vector of scale factors in the eigendirections of the stretch matrix (with a 4th element which is always 1)
  • scaleDirections: list of 3 scale directions as 4-element vectors (4th element always 0)
  • largestPercentChangeScale: largest scale factor as a percent change (100 * (scaleFactor - 1))
  • volumePercentChangeOverall: total volume % change after all stretching/shrinking
  • scipyRotationObject: scipy Rotation object for the rotation component of the transform
  • rotationAxis: RAS vector describing the axis the transform rotates about
  • rotationAngleDegrees: positive if counterclockwise when looking down the axis
  • eulerAnglesRAS: sequence of rotation angles about the Right, Anterior, and then Superior axes
  • translationVector: 3-element vector of the RAS translation
  • translationOnlyMatrix: identity matrix with the translation vector in the 4th column
  • rotationOnlyMatrix: 4x4 rotation matrix R from the decomposition
  • reflectionOnlyMatrix: 4x4 reflection matrix F from the decomposition; the identity matrix if there is no reflection, and np.diag([-1, -1, -1, 1]) if a reflection is present
  • stretchOnlyMatrix: 4x4 stretch matrix K from the decomposition
  • scaleMatrixList: list of three 4x4 symmetric (likely non-uniform) scale matrices (S1*S2*S3 = K)
  • stretchEigenvectorMatrix: 4x4 matrix with the stretch direction eigenvectors as the first 3 columns

Stitch Volumes

This module allows a user to stitch together two or more image volumes. A set of volumes to stitch, as well as a rectangular ROI (to define the output geometry), is supplied, and the module produces an output volume which represents all the input volumes cropped, resampled, and stitched together. Areas of overlap between original volumes are handled by finding the center of the overlap region and assigning each half of the overlap to the closer original volume.

The resolution (voxel dimensions) of the output stitched volume is set to match the first input volume. If the other image volumes are at the same resolution, the stitched volume uses nearest-neighbor interpolation to avoid any image degradation due to interpolation; note, however, that this can mean a physical-space shift of up to 1/2 voxel in each dimension for the positioning of one original volume compared to where it appears in the stitched volume's physical space. If the original volumes are not at the same voxel resolution, interpolation is required, and linear interpolation is used. Voxels in the stitched image which are outside all original image volumes are assigned a value of zero.
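The overlap rule can be illustrated with a hypothetical 1D sketch (not the module's actual code): two scan ranges overlap, and the midpoint of the overlap region splits ownership between them.

```python
import numpy as np

# Toy 1D illustration of the overlap rule described above: volume A covers
# positions [0, 60), volume B covers [40, 100). The overlap [40, 60) is
# split at its midpoint 50: positions below 50 take A's voxels (closer to
# A), positions at or above 50 take B's.
positions = np.arange(0, 100)
a_value, b_value = 1, 2

stitched = np.zeros_like(positions)
in_a = positions < 60
in_b = positions >= 40

stitched[in_a] = a_value
stitched[in_b] = b_value                       # overwrites the overlap
stitched[in_a & in_b & (positions < 50)] = a_value  # give A its half back

print(stitched[45], stitched[55])  # 1 2
```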

(screenshot)

How to use

The input image volumes must already be positioned in their correct locations. The Fiducial Registration Wizard module in the SlicerIGT extension can be used to move the images to the correct location, based on matching landmarks.

A region of interest (ROI) markup needs to be defined to designate the region that the stitched volume should fill. This is typically done in two steps:

  • Go to the Crop Volumes module, select the first volume to stitch as the input volume, select Create new ROI as the ROI, and then click Fit to Volume. This creates an ROI oriented the same way and with the same dimensions as the input volume. (The volume is not actually cropped; the Crop Volumes module is used here just for its handy Fit to Volume button.)
  • Next, extend the ROI as needed to encompass the desired regions of all image volumes to stitch using the interaction handles in the slice views or in 3D.

This ROI definition works well when connecting a series of image volumes along one axis (e.g., a series of bed positions), where mostly extending the ROI in one direction (and often also bringing in the sides to reduce the number of air voxels) is sufficient; however, a rectilinear ROI can be created in any other way.

Once the ROI is created, go to the Stitch Volumes module, select the image volumes to be stitched together, select the ROI, select or create the output to put the stitched volume in, and click the Create Stitched Volume button. The selected output will be a new image volume with the same orientation and extent as the ROI, with the same voxel size as the first image volume listed to stitch.

ImportItkSnapLabel

This module registers a file reader for label description files (*.label, *.txt, see example) created by ITK-Snap. The reader creates a color table node that can be used for loading labelmap volume or segmentation files (*.nrrd, *.nii.gz, etc.).
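The label description format is plain text: each non-comment line holds the columns IDX, R, G, B, A, VIS, MSH, followed by a quoted label name, and lines starting with # are comments. A minimal parser sketch (assuming this column layout; the module's own reader may differ):

```python
import shlex

def parse_itksnap_labels(text):
    """Minimal parser for ITK-SNAP label description lines.
    Assumed line format: IDX  R  G  B  A  VIS  MSH  "LABEL"."""
    labels = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        idx, r, g, b, a, vis, msh, name = shlex.split(line)
        labels[int(idx)] = {"color": (int(r), int(g), int(b)),
                            "alpha": float(a), "name": name}
    return labels

sample = '''
# ITK-SNAP label description file
0   0   0   0   0  0 0 "Clear Label"
1 255   0   0   1  1 1 "Tumor"
'''
labels = parse_itksnap_labels(sample)
print(labels[1]["name"], labels[1]["color"])  # Tumor (255, 0, 0)
```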

How to use:

  • Load the *.label or *.txt file as a color table node:
    • Drag-and-drop the label description file over the Slicer main window (or choose Add Data in the File menu and choose the file to add)
    • Make sure that in the description column ITK-Snap Label Description is selected
    • Click OK
  • Load the image or segmentation *.nii.gz or *.nrrd file
    • Drag-and-drop the image file over the Slicer main window (or choose Add Data in the File menu and choose the file to add)
    • Check Show Options checkbox
    • Make sure that in the description column Segmentation is selected (or Volume is selected and Label checkbox is checked)
    • Choose the loaded color table node in the options column (rightmost widget)
    • Click OK

ImportNumpyArray

This module registers a file reader for numpy array (.npy, .npz).

See https://numpy.org/devdocs/reference/generated/numpy.lib.format.html#npy-format

The reader can read an array of 1 to 5 dimensions into an image represented by a specific MRML node.

  Dimension   Axis order                  MRML node
  1D          I                           vtkMRMLScalarVolumeNode
  2D          J, I                        vtkMRMLScalarVolumeNode
  3D          K, J, I                     vtkMRMLScalarVolumeNode
  4D          K, J, I, component          vtkMRMLVectorVolumeNode
  5D          time, K, J, I, component    vtkMRMLSequenceNode

Notes:

  • The user is responsible for setting the correct IJK-to-RAS matrix.

  • For the 4D and 5D cases, the channel-last convention corresponds to the ITK/VTK memory layout, which differs from PyTorch's convention of NCHW for 4D tensors/arrays and NCDHW for 5D tensors/arrays.
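A PyTorch-style array can be rearranged into the channel-last layout the reader expects with np.moveaxis. In this 4D illustration, the N axis is treated as the K/slice axis, which is an assumption made for the sketch:

```python
import numpy as np

# A PyTorch-style NCHW array (e.g. N=10 slices, C=3 channels, H=368, W=640).
nchw = np.zeros((10, 3, 368, 640), dtype=np.float32)

# Move the channel axis last to obtain the (K, J, I, component) layout the
# reader expects for 4D arrays.
kjic = np.moveaxis(nchw, 1, -1)
print(kjic.shape)  # (10, 368, 640, 3)
```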

LoadRemoteFile

An example module that allows opening a file in 3D Slicer by clicking a link in a web browser. It requires the slicer:// custom URL protocol to be associated with the 3D Slicer application. The Slicer installer on Windows does this automatically, but it has to be set up manually on Linux and macOS.

Example links that open image and/or segmentation in 3D Slicer:

  • LungCTAnalyzerChestCT image: slicer://viewer/?download=https%3A%2F%2Fgithub.com%2Frbumm%2FSlicerLungCTAnalyzer%2Freleases%2Fdownload%2FSampleData%2FLungCTAnalyzerChestCT.nrrd
  • LungCTAnalyzerChestCT image + segmentation: slicer://viewer/?show3d=true&segmentation=https%3A%2F%2Fgithub.com%2Frbumm%2FSlicerLungCTAnalyzer%2Freleases%2Fdownload%2FSampleData%2FLungCTAnalyzerMaskSegmentation.seg.nrrd&image=https%3A%2F%2Fgithub.com%2Frbumm%2FSlicerLungCTAnalyzer%2Freleases%2Fdownload%2FSampleData%2FLungCTAnalyzerChestCT.nrrd
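The query parameters in such links are just percent-encoded download URLs, so they can be generated with the Python standard library alone:

```python
from urllib.parse import quote

# Build a slicer:// link like the first example above by percent-encoding
# the download URL (safe="" also encodes ':' and '/').
imageUrl = ("https://github.com/rbumm/SlicerLungCTAnalyzer/releases/"
            "download/SampleData/LungCTAnalyzerChestCT.nrrd")
link = "slicer://viewer/?download=" + quote(imageUrl, safe="")
print(link)
```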

Contributors

adamaji, chir-set, cpinter, dzenanz, henrykrumb, lassoan, mikebind, sjh26, sunderlandkyl


Issues

readPlist removed in Python 3.9 - unable to use ImportOsirixROI

As discussed in https://discourse.slicer.org/t/problems-using-importosirixroi-from-the-sandbox-extension/31406, ImportOsirixROI functionality appears to be broken.

  File "[...]/AppData/Local/slicer.org/Slicer 5.4.0/slicer.org/Extensions-31938/Sandbox/lib/Slicer-5.4/qt-scripted-modules/ImportOsirixROI.py", line 284, in importOsirixRoiFileToSegmentation
    inputRoiData = plistlib.readPlist(inputRoi)
  AttributeError: module 'plistlib' has no attribute 'readPlist'
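For reference, plistlib.readPlist() was deprecated in Python 3.4 and removed in 3.9; the replacement API takes a binary file object instead of a path. A minimal sketch of the fix:

```python
import plistlib

# old (removed in Python 3.9):
#   inputRoiData = plistlib.readPlist(inputRoi)
# new:
def read_plist(path):
    """Load a plist file using the Python 3.4+ API."""
    with open(path, "rb") as f:
        return plistlib.load(f)

# Round-trip demonstration with an in-memory plist:
data = plistlib.loads(plistlib.dumps({"ROIs": [1, 2, 3]}))
print(data["ROIs"])  # [1, 2, 3]
```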

Update NumPyDataLoader to provide axis option

This issue was created to document improvements to NumPyDataLoader contributed through:

and originally discussed in:


Originally posted by @dzenanz in Slicer/Slicer#6733 (comment), Slicer/Slicer#6733 (comment), Slicer/Slicer#6733 (comment) and Slicer/Slicer#6733 (comment)

My principal motivation for this PR is the desire to easily and conveniently visualize dumps of NCHW 4D tensors. Having an option to load them as "channel last/fastest" would be nice, but I really want PyTorch's convention to be the default.

Even when reordering the axes so that the channel is last, with the updated reader I get an all-black image of "4 components". Visualizing something (the vector norm, the component average, or even just the first component) would be more useful than an all-black image.

Also, we could employ a heuristic: the dimensions with the largest sizes could be considered spatial. So (10, 4, 368, 640) would be interpreted as a 4D image with 4 timepoints and a 3D size of 640x368x10 (IJK). Prefer timepoints over components, unless the size is 3, which can easily be visualized as RGB.

Of course, all of these complications stem from .npy not having suitable metadata.

Originally posted by @lassoan in Slicer/Slicer#6733 (comment)

We could have auto, NCDHW, NKJIC axis options. Auto could implement the heuristics you described.

My only concern is that sooner or later users will report that the loader randomly breaks. We experienced this with ITK, as the ITK vector image reader has such a heuristic, too. We were shocked when, after being annoyed by seemingly random load failures for a long time, we realized that loading failed for all data sets with 3 or 4 time points, simply because they were interpreted as channel data instead of a time sequence.

A somewhat more predictable heuristic could be to decide based on a (composite) file extension. For example, something.torch.npy could make NCDHW order the default.

Why don't you save the result using pynrrd? You could use nrrd.write and specify the axis kinds like this:

nrrd.write("test.nrrd", a, {'kinds': ['RGB-color', 'domain', 'domain']})

How do you load the image into Slicer? Drag-and-drop is pretty tedious, and it also shows the loaded image by default (there is no way to update an existing image). You could solve both the tedious drag-and-drop and the axis setting by using slicerio:

np.save("path/to/myfile.npy", someTensor)
nodeID = slicerio.server.file_load("path/to/myfile.npy", "NumpyArrayFile", {'axes': 'NCDHW'})

slicerio.server.file_load loads a data set in Slicer (if Slicer is not running then it launches it). If you load the node from a standard file format then you can reload it by simply calling slicerio.server.node_reload(id=nodeID).

User statistics module slows down the application

User feedback (https://discourse.slicer.org/t/new-module-for-measuring-user-statistics/9220/5):
"One time, I did try enabling collection of user statistics, and a few minutes later, I was struck by how slow and unresponsive Slicer had become. After unchecking the UserStatistics “Enable” button and restarting Slicer, the application was responsive again, as usual. I did not do any further testing, but it was suspiciously correlated with my only attempt at enabling collection of user statistics."

Enable ColorizeVolume to work with already existing RGB volume

I have saved the colorized volume output from ColorizeVolume to disk. However, I still have to load the original intensity image and the segmentation the next time I need to work on it in Slicer. It would be nice to have a load-data-from-scene/disk option. I like the controls in ColorizeVolume better than the volume rendering ones, so I would like to keep using them.

Unable to build the 3D Slicer module

I built the Slicer Sandbox and Slicer GitHub modules using the Makefile and CMake build commands, but Qt5 fails to build even though the Python modules pyqt5 and slicer are installed. Qt5 is not open source. I need help here. Any other suggestions on how to build the Sandbox Remove CT Table module?

SSAO Prevents Volume Rendering in Managed View

When SSAO is enabled in the latest version of the Lights module (stable or nightly release), volume rendering isn't displayed at all in the 3D views. This is not affected by the "Shadows for volume rendering" option.

Running the following code will reproduce the issue:

lightsLogic = slicer.modules.lights.widgetRepresentation().self().logic
lightsLogic.setUseSSAO(True)
for viewIndex in range(slicer.app.layoutManager().threeDViewCount):
    viewNode = slicer.app.layoutManager().threeDWidget(viewIndex).threeDView().mrmlViewNode()
    lightsLogic.addManagedView(viewNode)
    viewNode.SetOrientationMarkerType(viewNode.OrientationMarkerTypeCube)

import SampleData
mrHead = SampleData.SampleDataLogic().downloadMRHead()
slicer.modules.volumerendering.logic().CreateDefaultVolumeRenderingNodes(mrHead)

There also seems to be a rendering issue with the orientation marker when SSAO is enabled, as the front faces of the orientation markers are now culled:

(screenshot)

Additionally, if there are multiple 3D views in the current layout that are being managed by the Lights module, then all but one of them will stop rendering. Interacting with the broken views will cause the one working view to become transparent (you can see the window behind Slicer).

(screenshot)

Requested an insecure image

In the Extensions Manager for Slicer 5.1.0-2022-07-04, if I click on the image for Sandbox, I get errors on the command line, and it does not open the Sandbox page.

Errors are of the sort

"https://extensions.slicer.org/view/Sandbox/31077/linux" 0 "Mixed Content: The page at 'https://extensions.slicer.org/view/Sandbox/31077/linux' was loaded over HTTPS, but requested an insecure image 'http://www.slicer.org/slicerWiki/images/c/c2/VolumeClipLogo.png'. This content should also be served over HTTPS."

Button to unset/reset light settings

Currently there is no option to go back to the original light settings. Consider implementing a button to revert to the defaults (or a way to unset them).

Add CombineModels function RobustBooleanOperation

RobustBooleanOperation will transform one of the inputs by a small pose variation to get the Boolean operation to succeed.

The syntax should be:

RobustBooleanOperation(
    inputMesh1,
    inputMesh2,
    resultingMesh,
    mmTranslationDelta=1e-4,
    degRotationDelta=1e-2,
    transformTries=15,
    randomSeed=None
)

Feedback is welcome. I think I could develop this in the near future, so you could assign it to me.

Thank you!

Add Cross-sectional area measurement computation along any physical axis

Measurement currently does not happen along anatomical axes but along IJK axes (row, column, slice). It would be nice if anatomical directions (axial, sagittal, coronal) were supported, and even nicer if the direction could be specified, for example, by a markups line.

This would require resampling of the volume along the chosen axis.

Feature Request: Apply ColorizeVolume to all items of a sequence

If the selected volume and segmentation belong to a sequence, I would want the colorized volume to be generated for each frame of that sequence, and the colorized volume to be tracked within that sequence. So far, I have to apply my settings manually for each individual frame, which is quite tedious.

The user should be prompted whether they want to apply changes to the whole sequence. We could use a "Colorize whole Sequence" checkbox that is disabled for atomic volumes (not part of a sequence) and enabled for sequential volumes + segmentations.

I created this PR in SlicerTotalSegmentator which has a similar behavior, but for creating the segmentation:

lassoan/SlicerTotalSegmentator#56

If I use it to generate a 4D segmentation, I would like to use ColorizeVolume to create a 4D visualization of that sequence automatically.
