slicerigt / aigt
Deep learning software modules for image-guided medical procedures
Home Page: http://www.slicerigt.org
License: BSD 3-Clause "New" or "Revised" License
Hi, I am trying to run Live US Segmentation in Slicer and have customized the loading and preprocessing of the input/output image for my training net. However, the SegmentationUNet extension is no longer visible in the Slicer modules. I am using a FlexibleUNet and PyTorch (MONAI template). How can I adapt SegmentationUNet.py so that it is available in Slicer?
None was being returned from the __getitem__ method of the UltrasoundDataset in the case where transform arrays were not provided, causing PyTorch to raise an error. This issue was temporarily fixed in 1a4c1dd by removing None from the method return, but there is probably a better way to handle this.
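A minimal sketch of the fix, assuming the dataset wraps image/label arrays and takes an optional transform (class and argument names here are illustrative, not the actual UltrasoundDataset signature):

```python
import numpy as np

class UltrasoundDataset:
    """Sketch: always return an (image, label) pair, never None."""

    def __init__(self, images, labels, transform=None):
        self.images = images
        self.labels = labels
        self.transform = transform  # optional callable, may be None

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image, label = self.images[idx], self.labels[idx]
        if self.transform is not None:
            image, label = self.transform(image, label)
        # Always return the pair; returning None alongside the data is what
        # made PyTorch's default collate function raise an error.
        return image, label
```

The key point is that the untransformed case falls through to the same return statement instead of a separate branch that returned None.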
We need to start supporting multiple segments using one-hot encoded labels, multi-channel model outputs, and a softmax (instead of sigmoid) final activation. Robert tried this on his dataset by modifying our code, and he says it was quite simple to do.
I think Single Slice Segmentation exports different segments as different values, so that may need to be converted to one-hot encoding later by one of the scripts. The safest place to do this would be in the training script. But we may need to modify the prepare_data script too. Multi-value segmentation arrays should only be resized using nearest neighbor interpolation. Standard image resize functions can interpolate between values (if a new pixel position falls between two old pixel positions). But interpolated non-integer values cannot be later converted using one-hot encoding.
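The nearest-neighbor constraint can be illustrated with a small sketch (plain NumPy, not the actual prepare_data code): nearest-neighbor sampling only copies existing label values, so the result can still be one-hot encoded, whereas an averaging interpolation could introduce non-integer values.

```python
import numpy as np

def resize_nearest(seg, out_h, out_w):
    # Nearest-neighbor resize: each output pixel copies one input pixel,
    # so no new (non-integer) label values can appear.
    in_h, in_w = seg.shape
    rows = (np.arange(out_h) * in_h / out_h).astype(int)
    cols = (np.arange(out_w) * in_w / out_w).astype(int)
    return seg[rows[:, None], cols]

seg = np.array([[0, 0, 2, 2],
                [0, 1, 1, 2]], dtype=np.uint8)
small = resize_nearest(seg, 1, 2)
# A bilinear resize of the same array could produce values like 0.75,
# which cannot be mapped back to a one-hot encoding.
```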
Hi,
unfortunately I am unable to access the dataset and the trained models at https://pocus.cs.queensu.ca/api/v1. Would you please let me know how I can obtain them?
Thank you,
Miruna
Hi,
Thank you for developing such a wonderful software!
Inspired by a very interesting YouTube video about Spine Ultrasound Segmentation, I wanted to test with the same data as the video, but I'm running into errors.
I get an error with "from Processes import Process, ProcessesLogic" in SlicerExtension/LiveUltrasoundAi/SegmentationUNet/SegmentationUNet.py.
Where can I get this "Processes" module?
We should keep in mind that most of our models will be deployed in real time. A typical ultrasound machine produces 15-25 frames/second. At the end of the training script, we should add a loop that feeds a batch of image frames (e.g. 100) to the trained model and computes the prediction (inference mode, no augmentation). We should time this and save the average, minimum, and maximum time per frame.
The goal is around 20 ms inference time, to leave time in the user application for resizing images and doing some UI rendering after each frame. We can save trained models with higher inference times for experimentation. But if we see much higher numbers, that should tell us to look at smaller models.
Additionally, we should check time/frame when using the trained model in Slicer, so we have an idea how much overhead it takes to also render images at the same time.
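A rough sketch of such a timing loop (the model below is a placeholder standing in for the trained network, not our actual net):

```python
import time
import numpy as np

def time_inference(model_fn, frames):
    # Time each single-frame prediction and collect statistics (seconds).
    times = []
    for frame in frames:
        start = time.perf_counter()
        model_fn(frame)
        times.append(time.perf_counter() - start)
    return {"avg": sum(times) / len(times), "min": min(times), "max": max(times)}

# Placeholder model (an assumption for the sketch, not one of our networks):
dummy_model = lambda x: x * 0.5
frames = [np.zeros((128, 128), dtype=np.float32) for _ in range(100)]
stats = time_inference(dummy_model, frames)
# For real-time use, stats["avg"] should stay under the ~20 ms budget (0.02 s).
```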
The extension has a build error: subdirectories that are added to the build do not exist.
I cannot figure out which modules should be included in the extension.
It would be more intuitive.
Seems redundant to have both. Might make more sense to resize using PyTorch transforms during training?
Inspired by tf.keras.datasets
Although we probably won't have fixed default datasets, we should be able to fetch data by one line of code, if we specify data, e.g. in a CSV file.
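A self-contained sketch of the one-line fetch, assuming the CSV manifest simply lists one .npy file path per row (the manifest format and function name are assumptions, not something the repo defines yet):

```python
import csv
import os
import tempfile
import numpy as np

def load_dataset(csv_path):
    # Load every array listed in the CSV manifest (one file path per row).
    with open(csv_path, newline="") as f:
        return [np.load(row[0]) for row in csv.reader(f) if row]

# Self-contained demo: write two arrays and a manifest, then fetch in one line.
tmp = tempfile.mkdtemp()
paths = []
for i in range(2):
    p = os.path.join(tmp, f"patient_{i}.npy")
    np.save(p, np.ones((4, 4)) * i)
    paths.append(p)
manifest = os.path.join(tmp, "data.csv")
with open(manifest, "w", newline="") as f:
    csv.writer(f).writerows([[p] for p in paths])

arrays = load_dataset(manifest)  # the one-line fetch
```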
This is needed because 3d segmentation-based export only works when the segmented structure doesn't move relative to the reference coordinate system. In human scans, almost every internal structure moves a bit during scanning.
Some ultrasound image processing tasks may be more intuitive if curvilinear ultrasound images are resampled in a rectangular image with one scanline per column.
I'm currently facing an issue while running the prepare_data.py script, and I could use your expertise to resolve it.
Here's a brief overview of the problem:
I exported my segmentations from 3D Slicer in the form of two files: "_segmentation.npy" and "_ultrasound.npy". I've placed these two files in a folder named "SegmentationOutput" and attempted to execute the following command in my terminal:
python prepare_data.py --input_dir SegmentationOutput --output_dir PatientArrays --config_file prepare_data_config.yaml --log_file "PrepareData.log"
My problem arises from the creation of the "_indices.npy" file. As per my understanding of the prepare_data.py script, the "_indices.npy" file should be derived from the "_segmentation.npy" file. However, the line in the script that reads seg_filename.replace("_segmentation", "_indices") only changes the string that describes the file path; it does not actually create the "_indices.npy" file. Even if it did, just renaming "_segmentation.npy" to "_indices.npy" wouldn't work because the dimensions don't match. Ideally, the "_indices.npy" file should have only one dimension, but it is 3D when derived from "_segmentation.npy".
I would greatly appreciate your assistance in understanding how to obtain the correct "_indices.npy" file or any insights on how to resolve this issue.
Thank you in advance for your support.
This will allow resizing images without the need to scale locations.
I think instead of (or maybe in addition to) saving after a certain number of epochs, we can save the best model based on the validation loss?
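A minimal sketch of best-model checkpointing, with a NumPy save standing in for the real model serialization (function and file names are illustrative assumptions):

```python
import os
import tempfile
import numpy as np

def train_with_best_checkpoint(val_losses, checkpoint_path):
    # Save only when validation loss improves; np.save stands in for the
    # actual model serialization in this sketch.
    best_loss = float("inf")
    for epoch, val_loss in enumerate(val_losses):
        if val_loss < best_loss:
            best_loss = val_loss
            np.save(checkpoint_path, np.array([epoch, val_loss]))
    return best_loss

checkpoint = os.path.join(tempfile.mkdtemp(), "best_model.npy")
best = train_with_best_checkpoint([0.9, 0.7, 0.8, 0.6, 0.65], checkpoint)
```

Saving on improvement can be combined with the existing periodic saves rather than replacing them.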
Volume reconstruction doesn't need to be added immediately.
This is an example for adding a new sequence to an existing sequence browser:
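For instance, a sketch using the Slicer Sequences API (this only runs inside 3D Slicer's Python console, and the node lookup and names below are assumptions):

```python
import slicer

# Find an existing sequence browser node (assumes one is already in the scene)
browserNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLSequenceBrowserNode")

# Create a new sequence node and synchronize it with the browser
sequenceNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSequenceNode", "Segmentation")
browserNode.AddSynchronizedSequenceNode(sequenceNode)

# Optionally enable recording of a proxy node into the new sequence:
# browserNode.SetRecording(sequenceNode, True)
```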
Some transforms should be used for both training and validation data, like resize and normalization.
But some transforms like noise, crop, rotation, should only be applied to the training data.
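Sketched with plain NumPy (the transform set and parameters are illustrative, not our actual pipeline): resize and normalization are shared, while noise is applied only in the training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def resize_nearest(img, shape):
    # Shared by training and validation: nearest-neighbor resize
    rows = (np.arange(shape[0]) * img.shape[0] / shape[0]).astype(int)
    cols = (np.arange(shape[1]) * img.shape[1] / shape[1]).astype(int)
    return img[rows[:, None], cols]

def normalize(img):
    # Shared by training and validation
    return (img - img.mean()) / (img.std() + 1e-8)

def add_noise(img):
    # Training only: random augmentation
    return img + rng.normal(0.0, 0.05, img.shape)

def train_transform(img):
    return add_noise(normalize(resize_nearest(img, (64, 64))))

def val_transform(img):
    return normalize(resize_nearest(img, (64, 64)))
```

The same split applies to crop and rotation: compose them into train_transform only, so validation metrics are computed on unaugmented data.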
Hi,
Is the dataset used to train the US bone segmentation networks publicly available?
If so, could you please provide me with a link?
Thanks!
This coincides in time with the code starting to use MONAI; I'm not sure if the two are related. Even though the config-file argument is given and logging.basicConfig is called with the correct file name, the log output still goes to the terminal instead of the file. It would be good to debug this, because log files are best preserved in the same folder as the trained models, so we have a good record of how each model was trained.
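One common cause worth checking: logging.basicConfig silently does nothing if the root logger already has handlers, which can happen when an imported library configures logging first (whether MONAI is actually the culprit here is an assumption to verify). On Python 3.8+, the force=True argument replaces any existing handlers:

```python
import logging
import os
import tempfile

log_file = os.path.join(tempfile.mkdtemp(), "train.log")
# force=True removes handlers that imported libraries may have installed;
# without it, basicConfig is a no-op once the root logger is configured,
# and all output keeps going to the terminal.
logging.basicConfig(filename=log_file, level=logging.INFO, force=True)
logging.info("model training started")
```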
To decide how live segmentation works, let's test these solutions:
It would be good to also test resize in Slicer vs resize in separate process
The current dataset is great for saving memory. But using it with the shuffle option is extremely slow.
Shuffling datasets would be important. The way it is now, we only use data from one patient in one batch. This makes every training step biased towards one patient. Even worse, one batch of data only contains one part of the anatomy, because they are all consecutive ultrasound images.
The fastest and most memory-efficient option for training would be to implement shuffling in the prepare_data script. It would take longer to prepare the shuffled data, but then it could be used many times. It's not ideal in the sense that the same batches would be used in every epoch, but at least they would represent a broader range of data.
Another option would be to load a set of data arrays (patients) at a time and add shuffling among those while in memory. The number of arrays loaded could be a parameter in the dataset class. This would create different batches in every epoch.
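The second option could look roughly like this (a sketch, not the current dataset class): load a configurable number of patient arrays into memory and shuffle frames across them.

```python
import numpy as np

class BufferedShuffleDataset:
    """Sketch: shuffle frames across a small buffer of patient arrays."""

    def __init__(self, patient_arrays, buffer_patients=2, seed=0):
        self.patient_arrays = patient_arrays
        self.buffer_patients = buffer_patients
        self.rng = np.random.default_rng(seed)

    def __iter__(self):
        # Visit patients in random order, a buffer-full at a time
        order = self.rng.permutation(len(self.patient_arrays))
        for start in range(0, len(order), self.buffer_patients):
            chunk = [self.patient_arrays[i]
                     for i in order[start:start + self.buffer_patients]]
            frames = np.concatenate(chunk, axis=0)
            # Shuffle frames within the in-memory buffer
            for idx in self.rng.permutation(len(frames)):
                yield frames[idx]
```

Each buffer mixes frames from several patients, so consecutive batches are no longer dominated by one patient or one part of the anatomy.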
Extension build currently fails with this error:
CMake Error at CMakeLists.txt:23 (add_subdirectory):
add_subdirectory given source
"SlicerExtension/UsAnnotationExport/SingleSliceSegmentation" which is not
an existing directory.
Something needs to be reset: currently, a reloaded scene can only be continued if we press the Delete button first.
Do you have any tutorials on the "Deep Learning Live" modules? I am not sure how I should exactly use it, such as the format of data we import. Any tutorials would be very helpful. Thank you very much!
It should be easier to add/remove data to/from validation rounds. It should also be possible to specify validation data without leave-one-out, or to use no validation at all (when we want to maximize training data before testing).
The repo description of: "Module to export ultrasound annotations for machine learning" isn't a great fit anymore. It would be nice to update to something more relevant such as the first sentence in Readme.md. -- Requires admin privileges to change
Idea from an earlier issue: #3
Probably for this it would be good to have a Sequence Segmentation module, which would help set up a segmentation sequence (synchronized with the input image sequence) and provide some tools to help with propagating segmentations between time points (e.g., using intensity-based registration). This module could also have an export feature to save both the input image sequence and the corresponding segmentation.
This module should be probably in the Sequences extension.
However, in ultrasound sequences, we typically don't segment every item in the input sequence. We skip 5-10 frames between ones that we invest time in segmenting. The number of skipped frames varies within one sequence. When the ultrasound is not moving, we may skip a lot more frames.