default-cars-model's Introduction

Train and test with nnUNetv2

Structure of the scripts Directory

This directory contains the following components:

  • Conversion Script: convert_from_bids_to_nnunetv2_format.py converts the SEM segmentation dataset from the BIDS format to the format expected by nnUNetv2. The script requires two arguments: the path to the original dataset and the target directory for the new dataset. The general invocation is shown below (a worked example with sample values follows this list):
python scripts/convert_from_bids_to_nnunetv2_format.py <PATH/TO/ORIGINAL/DATASET> --TARGETDIR <PATH/TO/NEW/DATASET>

For more information about the script and its additional arguments, run the script with the -h flag:

python scripts/convert_from_bids_to_nnunetv2_format.py -h
  • Setup Script: This script sets up the nnUNet environment and runs the preprocessing and dataset integrity verification. To run it, execute the following command:
source scripts/setup_nnunet.sh <PATH/TO/ORIGINAL/DATASET> <PATH/TO/SAVE/RESULTS> [DATASET_ID] [LABEL_TYPE] [DATASET_NAME]
  • Training Script: This script is used to train the nnUNet model. It requires four arguments:
    • DATASET_ID: The ID of the dataset to be used for training. This should be an integer.
    • DATASET_NAME: The name of the dataset. This is combined with the ID to form the full dataset name, Dataset<FORMATTED_DATASET_ID>_<DATASET_NAME>, where the ID is zero-padded to three digits.
    • DEVICE: The device to be used for training: a specific GPU device ID, 'cuda' for any GPU, 'mps' for Apple Silicon (M1/M2), or 'cpu' for the CPU.
    • FOLDS: The folds to be used for training, given as a space-separated list of integers. To run the training script, execute the following command:
./scripts/train_nnunet.sh <DATASET_ID> <DATASET_NAME> <DEVICE> <FOLDS...>
  • Train Test Split File: A JSON file that contains the training and testing split for the dataset; it is used by the conversion script above. The file should be named train_test_split.json and placed in the same directory as the dataset (an illustrative sketch follows this list).
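
For reference, here is the conversion command again with sample paths filled in; the paths are hypothetical and should be adapted to your own layout:

python scripts/convert_from_bids_to_nnunetv2_format.py data/sem_bids --TARGETDIR data/nnunet_raw

The exact schema of train_test_split.json is defined by the conversion script; the sketch below is only an illustration, assuming top-level "train" and "test" lists of BIDS subject IDs (hypothetical values), and writes the file as described above:

# Hypothetical sketch only -- check convert_from_bids_to_nnunetv2_format.py for the schema it expects
cat > train_test_split.json <<'EOF'
{
    "train": ["sub-01", "sub-02", "sub-03"],
    "test": ["sub-04"]
}
EOF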

Setting Up Conda Environment

To set up the environment and run the scripts, follow these steps:

  1. Create a new conda environment:
conda create --name cars_seg
  2. Activate the environment:
conda activate cars_seg
  3. Install PyTorch, torchvision, and torchaudio. For NeuroPoly lab members using the GPU servers, use the following command:
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

For others, please refer to the PyTorch installation guide at https://pytorch.org/get-started/locally/ to get the appropriate command for your system.

  4. Update the environment with the remaining dependencies:
conda env update --file environment.yaml

Setting Up nnUNet

  1. Activate the environment:
conda activate cars_seg
  2. Before training the model, set up nnUNet and preprocess the dataset by running the setup script:
source scripts/setup_nnunet.sh <PATH/TO/ORIGINAL/DATASET> <PATH/TO/SAVE/RESULTS> [DATASET_ID] [LABEL_TYPE] [DATASET_NAME]
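
For example, with hypothetical paths and the optional bracketed arguments left to the script's defaults, this might look like:

source scripts/setup_nnunet.sh data/sem_bids results/nnunet

The script is sourced rather than executed, presumably so that it can export environment variables, such as the nnU-Net paths (nnUNet_raw, nnUNet_preprocessed, nnUNet_results) and the ${RESULTS_DIR} referenced by the inference command below, into the current shell.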

Training nnUNet

After setting up nnUNet and preprocessing the dataset, you can train the model using the training script. The script requires the following arguments:

  • DATASET_ID: The ID of the dataset to be used for training. This should be an integer.
  • DATASET_NAME: The name of the dataset. This is combined with the ID to form the full dataset name, Dataset<FORMATTED_DATASET_ID>_<DATASET_NAME>, where the ID is zero-padded to three digits.
  • DEVICE: The device to be used for training: a specific GPU device ID, 'cuda' for any GPU, 'mps' for Apple Silicon (M1/M2), or 'cpu' for the CPU.
  • FOLDS: The folds to be used for training, given as a space-separated list of integers. To run the training script, execute the following command:
./scripts/train_nnunet.sh <DATASET_ID> <DATASET_NAME> <DEVICE> <FOLDS...>
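
For example, assuming the dataset was registered with ID 1 and name SEM (hypothetical values) and you want to train the usual five nnUNet folds on a CUDA GPU, the call might look like:

./scripts/train_nnunet.sh 1 SEM cuda 0 1 2 3 4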

Inference

After training the model, you can perform inference using the following command:

python scripts/nn_unet_inference.py --path-dataset ${RESULTS_DIR}/nnUNet_raw/Dataset<FORMATTED_DATASET_ID>_<DATASET_NAME>/imagesTs --path-out <WHERE/TO/SAVE/RESULTS> --path-model ${RESULTS_DIR}/nnUNet_results/Dataset<FORMATTED_DATASET_ID>_<DATASET_NAME>/nnUNetTrainer__nnUNetPlans__2d/ --use-gpu --use-best-checkpoint

The --use-best-checkpoint flag is optional. When it is set, inference uses the best checkpoints; otherwise it uses the latest checkpoints. Based on empirical results, using the --use-best-checkpoint flag is recommended.

Note: <FORMATTED_DATASET_ID> should be a three-digit number where 1 would become 001 and 23 would become 023.
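
For example, with the hypothetical dataset ID 1 and name SEM used above, the formatted ID becomes 001 and the command expands to (the output path is likewise just a placeholder):

python scripts/nn_unet_inference.py \
    --path-dataset ${RESULTS_DIR}/nnUNet_raw/Dataset001_SEM/imagesTs \
    --path-out results/inference \
    --path-model ${RESULTS_DIR}/nnUNet_results/Dataset001_SEM/nnUNetTrainer__nnUNetPlans__2d/ \
    --use-gpu --use-best-checkpoint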

default-cars-model's People

Contributors

arthurboschet

Watchers

Julien Cohen-Adad

default-cars-model's Issues

Add preview image

After the model is trained and we have a test prediction, we could add a preview image like in our other default model repos.

There shouldn't be any issue with privacy, etc., because the original CARS slices are publicly available in the old OSF repo.
