
SynthSeg



πŸŽ‰ Update 29/10/2021: SynthSeg is now available on the dev version of FreeSurfer !! πŸŽ‰
See here on how to use it.


In this repository, we present SynthSeg, the first convolutional neural network to segment brain scans of any contrast and resolution without retraining or fine-tuning. SynthSeg is also robust to:

  • a wide array of subject populations: from young and healthy to ageing and diseased subjects,
  • white matter lesions,
  • scans with or without preprocessing, including bias field corruption, skull stripping, intensity normalisation, template registration, etc.

As a result, SynthSeg relies on a single model that can be used out-of-the-box without retraining or fine-tuning. Here, we distribute the open-source model along with the corresponding code to enable researchers to run SynthSeg on their own data. We emphasise that predictions are given at 1mm isotropic resolution (regardless of the resolution of the input images), and can be obtained either by running on the GPU (6s per scan) or on the CPU (1min).

Generation examples


Easily segment your data with one command

Once all the python packages are installed (see below), you can simply test SynthSeg on your own data with:

python ./scripts/commands/SynthSeg_predict.py --i <image> --o <segmentation> --post <post> --resample <resample> --vol <vol>

where:

  • <image> is the path to an image to segment (supported formats are .nii, .nii.gz, and .mgz).
    This can also be a folder, in which case all the images inside that folder will be segmented.
  • <segmentation> is the path where the output segmentation(s) will be saved.
    This must be a folder if <image> designates a folder.
  • <post> (optional) is the path where the posteriors (given as soft probability maps) will be saved.
    This must be a folder if <image> designates a folder.
  • <resample> (optional) SynthSeg segmentations are always given at 1mm isotropic resolution. Therefore, images are internally resampled to this resolution (unless they are already at 1mm resolution). Use this optional flag to save the resampled images: it must be the path to a single image, or a folder if <image> designates a folder.
  • <vol> (optional) is the path to an output csv file where the volumes of all segmented structures will be saved for all scans (i.e., one csv file for all subjects; e.g., /path/to/volumes.csv).
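For instance, the volumes file written with the --vol flag can be read back with standard Python. The layout assumed below (one row per scan, one column per structure, with hypothetical structure names) is inferred from the description above rather than taken from the code:

```python
import csv
import io

# Hypothetical example of a volumes CSV as described above: one row per
# scan, one column per segmented structure (layout assumed, not verified).
example = io.StringIO(
    "subject,left-hippocampus,right-hippocampus\n"
    "scan_01,4321.5,4398.2\n"
    "scan_02,4102.8,4275.0\n"
)

volumes = {}
for row in csv.DictReader(example):
    subject = row.pop("subject")
    # Convert every remaining column to a float volume (in mm^3, since
    # SynthSeg predicts at 1mm isotropic resolution).
    volumes[subject] = {name: float(v) for name, v in row.items()}

print(volumes["scan_01"]["left-hippocampus"])  # 4321.5
```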


Additional optional flags are also available:

  • --cpu: to enforce the code to run on the CPU, even if a GPU is available.
  • --threads: to indicate the number of cores to be used if running on a CPU (example: --threads 3 to run on 3 cores). This value defaults to 1, but we recommend increasing it for faster analysis.
  • --crop: to crop the input images to a given shape before segmentation. The given size must be divisible by 32. Images are cropped around their centre, and their segmentations are given at the original size. It can be given as a single integer (e.g., --crop 160 to run on 160³ patches), or as several integers (e.g., --crop 160 128 192 to crop to different sizes in each direction, ordered in RAS coordinates). This value defaults to 192, but it can be decreased for faster analysis or to fit into your GPU memory.
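Since crop sizes must be divisible by 32, a small helper can round a desired size down to the nearest valid value. This helper is our own illustration, not part of SynthSeg:

```python
def valid_crop(size, multiple=32):
    """Round a requested crop size down to the nearest multiple of 32,
    since SynthSeg requires crop shapes divisible by 32."""
    if size < multiple:
        raise ValueError(f"crop size must be at least {multiple}")
    return (size // multiple) * multiple

print(valid_crop(200))  # 192, the default crop size
print(valid_crop(160))  # 160, already divisible by 32
```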

IMPORTANT: Because SynthSeg may produce segmentations at higher resolution than the input images (i.e., at 1mm³), some viewers will not display them correctly when overlaying the segmentations on the original images. If that's the case, you can use the --resample flag to obtain a resampled image that lives in the same space as the segmentation, such that any viewer can be used to visualise them together. We highlight that the resampling is performed internally, to avoid any dependence on external tools.
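As a back-of-the-envelope illustration of the resolution mismatch described above, the output grid size at 1mm isotropic can be computed from the input shape and voxel sizes. This sketch is our own illustration, not SynthSeg's actual resampling code:

```python
def output_shape_at_1mm(shape, voxel_sizes):
    """Compute the grid size of an image after resampling to 1mm isotropic.
    Each axis spans shape[i] * voxel_sizes[i] millimetres, so at 1mm
    resolution it needs that many voxels."""
    return tuple(round(n * v) for n, v in zip(shape, voxel_sizes))

# A 128^3 scan at 2mm isotropic covers 256mm per axis,
# so at 1mm it needs a 256^3 grid:
print(output_shape_at_1mm((128, 128, 128), (2.0, 2.0, 2.0)))  # (256, 256, 256)
```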

The complete list of segmented structures is available in labels table.txt, along with their corresponding label values. This table also details the order in which the posterior maps are sorted.


Requirements

All the Python requirements are listed in requirements.txt. The most important dependencies are:

  • Python 3.6 (this is important to have access to the right keras and tensorflow versions!)
  • tensorflow-gpu 2.0.1
  • keras 2.3.1
  • nibabel
  • numpy, scipy, sklearn, tqdm, pillow, matplotlib, ipython, ...

This code also relies on several external packages (already included in ext for convenience):

  • lab2im: contains functions for data augmentation, and a simple version of the generative model, on which we build label_to_image_model.
  • neuron: contains functions for deforming and resizing tensors, as well as functions to build the segmentation network [1,2].
  • pytool-lib: library required by the neuron package.

If you wish to run SynthSeg on the GPU, or to train your own model, you will also need the usual deep learning libraries:

  • Cuda 10.0
  • cuDNN 7.0

How does it work?

In short, we train a network with synthetic images sampled on the fly from a generative model based on the forward model of Bayesian segmentation. Crucially, we adopt a domain randomisation strategy, where we fully randomise the generation parameters by drawing them from uninformative uniform distributions. By maximising the variability of the training data, we force the network to learn domain-agnostic features. As a result, SynthSeg can readily segment real scans of any target domain, without retraining or fine-tuning.
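The core sampling idea can be sketched in a toy form with numpy: for every label in a segmentation map, intensity parameters are drawn from uninformative uniform priors before sampling the voxels. The specific ranges below (means in [0, 255], standard deviations in [0, 35]) are illustrative assumptions, and the real model additionally randomises spatial deformation, bias field, noise, and resolution:

```python
import numpy as np

def synthesise(label_map, rng):
    """Toy sketch of the generative idea described above: for every label,
    draw GMM intensity parameters from uniform priors, then sample voxel
    intensities conditioned on the label map."""
    image = np.zeros(label_map.shape, dtype=float)
    for label in np.unique(label_map):
        mean = rng.uniform(0.0, 255.0)  # fully randomised mean per label
        std = rng.uniform(0.0, 35.0)    # fully randomised std per label
        mask = label_map == label
        image[mask] = rng.normal(mean, std, size=mask.sum())
    return image

# A tiny 2-label "label map" stands in for a brain segmentation:
labels = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]])
print(synthesise(labels, np.random.default_rng(0)).shape)  # (3, 3)
```

Because the parameters are re-drawn at every call, each synthetic image shows a different "contrast", which is what pushes the network towards domain-agnostic features.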

The following figure first illustrates the workflow of a training iteration, and then provides an overview of the different steps of the generative model:

Generation examples

Finally, we show additional examples of the synthesised images, along with an overlay of their target segmentations:

Generation examples

If you are interested in learning more about SynthSeg, you can read the associated publication (see below) and watch this presentation, which was given at MIDL 2020 for a related article on a preliminary version of SynthSeg (robust to MR contrast, but not resolution).

Talk SynthSeg


Train your own model

This repository contains all the code and data necessary to train, validate, and test your own network. Importantly, the proposed method only requires a set of anatomical segmentations for training (no images), which we include in data. While the provided functions are thoroughly documented, we highly recommend starting with the following tutorials:

  • 1-generation_visualisation: This very simple script shows examples of the synthetic images used to train SynthSeg.

  • 2-generation_explained: This second script describes all the parameters used to control the generative model. We advise you to thoroughly follow this tutorial, as it is essential to understand how the synthetic data is formed before you start training your own models.

  • 3-training: This script reuses the parameters explained in the previous tutorial and focuses on the learning/architecture parameters. It is the very script we used to train SynthSeg!

  • 4-prediction: This script shows how to make predictions once the network has been trained.

  • 5-generation_advanced: Here we detail more advanced generation options, in the case of training a version of SynthSeg that is specific to a given contrast and/or resolution (although these types of variants were shown to be outperformed by the SynthSeg model trained in the 3rd tutorial).

  • 6-intensity_estimation: Finally, this script shows how to estimate the Gaussian priors of the GMM when training a contrast-specific version of SynthSeg.

These tutorials cover a lot of material and will enable you to train your own SynthSeg model. Even more detailed information is provided in the docstrings of all functions, so don't hesitate to have a look at them!
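As a rough sketch of the intensity-estimation step covered in the last tutorial, Gaussian priors for a contrast-specific GMM can be estimated from a real scan and its label map. The function below is our own minimal illustration, not the tutorial's actual code:

```python
import numpy as np

def estimate_gaussian_priors(image, label_map):
    """Sketch of the intensity-estimation idea: estimate a Gaussian
    (mean, std) per label from a real scan of the target contrast,
    to use as priors for a contrast-specific GMM."""
    priors = {}
    for label in np.unique(label_map):
        values = image[label_map == label]
        priors[int(label)] = (float(values.mean()), float(values.std()))
    return priors

# Toy 2x2 "scan" with two labels:
image = np.array([[10.0, 12.0], [50.0, 54.0]])
labels = np.array([[0, 0], [1, 1]])
print(estimate_gaussian_priors(image, labels))
# {0: (11.0, 1.0), 1: (52.0, 2.0)}
```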


Content

  • SynthSeg: this is the main folder containing the generative model and training function:

    • labels_to_image_model.py: contains the generative model for MRI scans.

    • brain_generator.py: contains the class BrainGenerator, which is a wrapper around labels_to_image_model. New images can simply be generated by instantiating an object of this class and calling its generate_image() method.

    • training.py: contains the code to train the segmentation network (with explanations of all training parameters). This function also shows how to integrate the generative model in a training setting.

    • predict.py: prediction and testing.

    • validate.py: includes code for validation (which has to be done offline on real images).

  • models: this is where you will find the trained model for SynthSeg.

  • data: this folder contains some examples of brain label maps if you wish to train your own SynthSeg model.

  • scripts: contains tutorials as well as scripts to launch training and testing from a terminal.

  • ext: includes external packages, especially the lab2im package, and a modified version of neuron.
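The generator-wrapper pattern described for brain_generator.py (instantiate once, then repeatedly call generate_image()) can be mocked in a few lines. The class below is our own toy stand-in, not the real BrainGenerator, which is constructed from paths to training label maps and wraps the full generative model:

```python
import numpy as np

class ToyBrainGenerator:
    """Minimal mock of the wrapper pattern described above: hold the
    training label map, then draw a fresh synthetic image/target pair
    on every call to generate_image()."""

    def __init__(self, label_map, seed=0):
        self.label_map = np.asarray(label_map)
        self.rng = np.random.default_rng(seed)

    def generate_image(self):
        # Stand-in synthesis: one randomly drawn Gaussian per label value.
        image = np.zeros(self.label_map.shape, dtype=float)
        for label in np.unique(self.label_map):
            mask = self.label_map == label
            image[mask] = self.rng.normal(self.rng.uniform(0.0, 255.0),
                                          10.0, size=mask.sum())
        return image, self.label_map

gen = ToyBrainGenerator([[0, 1], [1, 1]])
image, target = gen.generate_image()
print(image.shape, target.shape)  # (2, 2) (2, 2)
```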


Citation/Contact

This code is under Apache 2.0 licensing.
If you use it, please cite one of the following papers:

SynthSeg: Domain Randomisation for Segmentation of Brain MRI Scans of any Contrast and Resolution
B. Billot, D.N. Greve, O. Puonti, A. Thielscher, K. Van Leemput, B. Fischl, A.V. Dalca, J.E. Iglesias
[arxiv | bibtex]

A Learning Strategy for Contrast-agnostic MRI Segmentation
B. Billot, D.N. Greve, K. Van Leemput, B. Fischl, J.E. Iglesias*, A.V. Dalca*
*contributed equally
MIDL 2020
[link | arxiv | bibtex]

Partial Volume Segmentation of Brain MRI Scans of any Resolution and Contrast
B. Billot, E.D. Robinson, A.V. Dalca, J.E. Iglesias
MICCAI 2020
[link | arxiv | bibtex]

If you have any question regarding the usage of this code, or any suggestions to improve it, you can contact us at:
[email protected]


References

[1] Anatomical Priors in Convolutional Networks for Unsupervised Biomedical Segmentation
Adrian V. Dalca, John Guttag, Mert R. Sabuncu
CVPR 2018

[2] Unsupervised Data Imputation via Variational Inference of Deep Subspaces
Adrian V. Dalca, John Guttag, Mert R. Sabuncu
arXiv preprint 2019
