
clinicadl's People

Contributors

14thibea, alexandreroutier, anbai106, camillebrianceau, dependabot[bot], ghisvail, mdiazmel, mselimata, nburgos, ncassereau-idris, nicolasgensollen, oliviercolliot, ravih18, sophieloiz, thibaultdvx


clinicadl's Issues

File not found

Hi there @mdiazmel,
Thanks for developing such great software. I encountered a problem when preprocessing data; below is a crash example. It says the file ADNI_BIDS/sub-ADNI011S1080/ses-M24/anat/sub-ADNI011S1080_ses-M24_T1w.nii.gz does not exist. However, the file does exist:

xxx@xxx:~/adni$ ls -lh ADNI_BIDS/sub-ADNI011S1080/ses-M24/anat/sub-ADNI011S1080_ses-M24_T1w.nii.gz
-rwxrwxr-x 1 xxx xxx 12M May 27 20:55 ADNI_BIDS/sub-ADNI011S1080/ses-M24/anat/sub-ADNI011S1080_ses-M24_T1w.nii.gz
save_bias = True
shrink_factor = <undefined>
weight_image = <undefined>
Traceback: 
Traceback (most recent call last):
  File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 148, in __init__
    mp_context=mp_context,
TypeError: __init__() got an unexpected keyword argument 'initializer'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 397, in run
    runtime = self._run_interface(runtime)
  File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 792, in _run_interface
    self.raise_exception(runtime)
  File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 723, in raise_exception
    ).format(**runtime.dictcopy())
RuntimeError: Command:
N4BiasFieldCorrection --bspline-fitting [ 600 ] -d 3 --input-image ADNI_BIDS/sub-ADNI011S1080/ses-M24/anat/sub-ADNI011S1080_ses-M24_T1w.nii.gz --output [ sub-ADNI011S1080_ses-M24_T1w_corrected.nii.gz, sub-ADNI011S1080_ses-M24_T1w_bias.nii.gz ]
Standard output:
Standard error:
 file ADNI_BIDS/sub-ADNI011S1080/ses-M24/anat/sub-ADNI011S1080_ses-M24_T1w.nii.gz does not exist . 
Segmentation fault (core dumped)
Return code: 139

Implement resume

It would be great to be able to resume a job that crashed, and to launch the remaining folds of a job (if only a subset was requested at first).

.sh files to .py and preprocessing

Hi,
First of all thank you very much for sharing this- I am working on a very similar final year university project and this is extremely useful to me. I was wondering a couple of things:

  1. I see most of your code files are .sh, would you happen to have .py versions too?
  2. For the data preprocessing, would you happen to have the scripts for the 'minimal' and 'extensive' procedures (along with the 4 main stages of: (i) bias field correction (ii) intensity rescaling and standardisation (iii) skull stripping (iv) image registration) as outlined in your report please?
    Many thanks in advance for your support!

What is your clinica version?

Hi, I am very interested in your work. I encountered a problem and hope you can help me:
ImportError: No module named clinica.utils.inputs

raise exception if the diagnoses all have the same label value

Diagnoses are pre-associated with class values

  • 'CN': 0,
  • 'AD': 1,
  • 'sMCI': 0,
  • 'pMCI': 1,
  • 'MCI': 1

If someone wants to perform the binary classification CN vs sMCI, this could be a problem as the two diagnoses have the same class value --> raise an Exception to prevent that case.
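A guard along these lines could be added when parsing the diagnosis list. The sketch below is illustrative only; `DIAGNOSIS_CODES` and `check_diagnoses` are hypothetical names, not ClinicaDL's actual API:

```python
# Hypothetical sketch of the proposed guard; names are illustrative,
# not ClinicaDL's actual API.
DIAGNOSIS_CODES = {"CN": 0, "AD": 1, "sMCI": 0, "pMCI": 1, "MCI": 1}

def check_diagnoses(diagnoses):
    """Raise if two requested diagnoses map to the same class value."""
    labels = [DIAGNOSIS_CODES[d] for d in diagnoses]
    if len(set(labels)) < len(labels):
        raise ValueError(
            f"Diagnoses {diagnoses} map to overlapping class values {labels}: "
            "the classification task would be ill-defined."
        )
    return dict(zip(diagnoses, labels))
```

With this check, `CN vs AD` passes while `CN vs sMCI` fails fast with an explicit message instead of silently training on a single class.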

Add public continuous integration

Data for continuous integration would be given in a submodule of GitHub.
Tests should be run at a given frequency on Inria machine.

DataLoaders refactoring

Dataset and DataLoaders in tools/deep_learning/data.py could use Clinica's tools to automatically find files in a CAPS structure and load tensors and information into the DataLoaders.

This can facilitate the creation of generic Dataloaders for other modalities of images.

For implementation see clinica_file_reader and get_subject_session_list functions in Clinica.

Implement classify / test

This issue is to discuss the implementation of clinicadl classify (and eventually clinicadl test) as discussed in https://github.com/aramis-lab/AD-DL/pull/45.

What has to be discussed:

  1. Must the input folder have a fixed structure, or should we give the json path / model path?
  2. Should there be two pipelines depending on the presence of the true labels (resp. classify and test), or should we use a flag (such as --true_labels)?
  3. Where are the results stored? Should the user give the output directory (and we write a json file recording which model / caps / tsv file was used), or is it stored in the experiment folder?

Add subparsers to `train slice`

train slice is not homogeneous with others because it does not require a network type. cnn subparser could be added, as well as multicnn as a new feature.

Subparsers of training `patch`

Could the subparsers of patch be autoencoder, multi-cnn and single-cnn instead of autoencoder and cnn? This would make the user realize that these choices exist and that they are quite different (some arguments are also specific to multi-cnn).

Suggested in PR #20.

Bug in soft voting for very small number of subjects

The test pipeline test_train_cnn occasionally crashes with train_slice: soft voting can fail if one slice is badly classified for all subjects, which happens when the number of subjects is very low (so hopefully, only in tests).
This should be fixed.
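A minimal sketch of the failure mode and a possible fix, assuming soft voting weights each slice's class probabilities by a per-slice validation accuracy (the function below is illustrative, not ClinicaDL's actual implementation): if one slice is wrong for every subject its weight is 0, and in the degenerate case all weights are 0, so normalising by their sum divides by zero unless a small epsilon is added.

```python
def soft_voting(slice_probs, slice_accuracies, eps=1e-8):
    """Aggregate per-slice class probabilities, weighting each slice by its
    validation accuracy. With very few subjects a slice can be misclassified
    for everyone, making its weight 0 -- and possibly the whole weight sum 0,
    hence the eps guard against division by zero."""
    total = sum(slice_accuracies) + eps
    weights = [a / total for a in slice_accuracies]
    n_classes = len(slice_probs[0])
    return [
        sum(w * p[c] for w, p in zip(weights, slice_probs))
        for c in range(n_classes)
    ]
```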

Unit tests failing

Hello, your work is amazing. I tested it after the installation and found the following errors. Can you help me solve them?

[screenshot of the errors attached]

Fix generate pipelines

The paths of the images extracted by preprocessing or extract have changed, but generate was not updated.
This will be fixed once the preprocessing pipelines are finished.

Preprocessing: Missing ANTs software

Hi,

I'm a high-schooler doing research on Alzheimer's disease diagnosis, and I ran into an issue when running the preprocessing command on my MacBook: it says I'm missing the ANTs software. Do I need it, and where can I find it? Running pip3 did not seem to install it. I have attached a screenshot of the error below. I would greatly appreciate any help.

[screenshot of the error attached]

Gather 'preprocessing', tensor extraction and QC into a single category

We previously planned to refactor clinicadl preprocessing so that:

  • clinicadl preprocessing t1-linear will wrap CLI from Clinica >= 0.3.4
  • clinicadl preprocessing t1-extensive (cf. Issue #31) will use this upcoming pipeline in AD-DL

To simplify the use of ClinicaDL, a proposition from this morning is to merge 'preprocessing', tensor extraction and QC into preprocessing as follows:

  • clinicadl preprocessing run {t1-linear, t1-extensive,....} (only t1-extensive and t1-linear will be added on a short basis; this could be extended to t1-volume). Basically, clinicadl preprocessing run is equivalent to clinica run in terms of behaviour.
  • clinicadl preprocessing qc {t1-linear, t1-extensive,....} (only t1-linear will be covered, see PR #47)
  • clinicadl preprocessing extract-tensor {t1-linear, t1-extensive,....} (TBD how we could nicely wrap deeplearning-prepare-data within this sub-parser)

`clinicadl classify` to implement

The classify task should be implemented.
Scripts examples for evaluation of the trained models can be found inside of each folder (e.g. clinicadl/clinicadl/subject_level/evaluation.py).

Remove tsv_path from mandatory arguments of generate

Generate could automatically detect the subjects and sessions that can be used to synthesize new data.
Eventually we could keep it as an option, in case the user wants to specify which part of the CAPS to use (but I don't think this is really useful...)
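Detecting the pairs could amount to walking the CAPS tree; a hypothetical sketch, assuming the usual caps_dir/subjects/sub-*/ses-* layout (the function name is illustrative):

```python
from pathlib import Path

def find_subject_sessions(caps_dir):
    """Sketch: derive (participant_id, session_id) pairs directly from the
    CAPS tree (caps_dir/subjects/sub-*/ses-*) instead of requiring tsv_path."""
    pairs = []
    for sub in sorted(Path(caps_dir).glob("subjects/sub-*")):
        for ses in sorted(sub.glob("ses-*")):
            pairs.append((sub.name, ses.name))
    return pairs
```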

Renaming args in `clinicadl train`

Some arguments could be changed:

  • use_extracted_patches|roi|slices could be replaced by use_extracted_features to be homogeneous with classify.
  • evaluation_steps could be a computational resources argument instead of optimization

How to train single CNN of 3D Patch?

When I use 3D patches to train the network with the parameter set to single, there is an error: the .pt file for the entire image is missing, although I do have the 36 3D patch files.

Implement verbose levels

There are far too many print calls in clinicadl train.
A verbose argument could be added to control the number of prints:

  • 0 no print
  • 1 default level
  • 2 debug print (time for batch loading)
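One way to implement the levels above is to route the prints through the standard logging module; a sketch (the function and logger names are illustrative):

```python
import logging

def set_verbosity(verbose):
    """Map the proposed levels onto the logging module:
    0 -> warnings only, 1 -> info (the current default prints),
    2 -> debug (e.g. time spent on batch loading)."""
    level = {0: logging.WARNING, 1: logging.INFO}.get(verbose, logging.DEBUG)
    logger = logging.getLogger("clinicadl")
    logger.setLevel(level)
    return logger
```

Existing `print(...)` calls would then become `logger.info(...)` or `logger.debug(...)`, and the level alone decides what reaches the console.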

Transfer learning AE --> AE

This pre-existing feature was lost during the previous refactoring.
It could be interesting to put it back as AE training is costly and this feature was used during architecture search to transfer weights between two AE of different sizes.

Non-ASCII spaces cause errors during runtime

In clinicadl_env/lib/python3.6/site-packages/clinica/utils/ux.py, line 21
cprint("\t%s | %s" % (unique_participants[-1], sessions_last_participant))
The first space in "\t%s | %s" is actually a non-breaking space (U+00A0) rather than an ASCII space, which causes an error during runtime:
UnicodeEncodeError: 'ascii' codec can't encode character '\xa0' in position 17: ordinal not in range(128)
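A minimal reproduction, assuming the stray character is the non-breaking space U+00A0 reported in the traceback (its exact position within the literal is an assumption):

```python
fmt = "\t%s\xa0| %s"          # offending literal: U+00A0 instead of a space
fixed = fmt.replace("\xa0", " ")  # swap the NBSP for a plain ASCII space

# Under an ASCII-only locale the original string cannot be encoded:
try:
    fmt.encode("ascii")
except UnicodeEncodeError:
    pass                       # this is the reported crash
fixed.encode("ascii")          # succeeds once the NBSP is gone
```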

Implement t1-extensive preprocessing pipeline

Extensive preprocessing (t1-volume-tissue-segmentation + skull-stripping) should be implemented.
Preprocessing subparser would have two options:

  • minimal --> wrapper on clinica or message to the user.
  • extensive --> perform extensive preprocessing.

Add autocompletion

Similarly to clinica, autocompletion using the Tab key could be a nice feature to add :-)

Rename `preprocessing` options

The preprocessings that can be chosen on the command line are linear and mni. For more homogeneity, mni could be renamed extensive or t1-volume (and linear would become t1-linear).

Suggested in PR #20.

Clarify the use of the option `transfer_learning_path`

The transfer_learning_path parameter is not very clear: for the subject-level AE it is the path to the model file (filename included), while for the others it is the root folder of the experiment (the implementation for the subject-level AE should be changed).

Suggested in PR #20.

Error when installing with pip

I got an error while installing with pip the latest version of master:

Obtaining file:///Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl
    ERROR: Command errored out with exit status 1:
     command: /Users/elina.thibeausutre/miniconda2/envs/clinicadl_doc/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl/setup.py'"'"'; __file__='"'"'/Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/j4/pt6zgm1102913npryfnj439c000mkt/T/pip-pip-egg-info-ossi3_vv
         cwd: /Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl/
    Complete output (7 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl/setup.py", line 16, in <module>
        reqs = [str(ir.req) for ir in install_reqs]
      File "/Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl/setup.py", line 16, in <listcomp>
        reqs = [str(ir.req) for ir in install_reqs]
    AttributeError: 'ParsedRequirement' object has no attribute 'req'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

I commented out lines 17 and 46 and everything went well.
My pip version is 20.1.1.
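A workaround that avoids pip internals altogether (pip >= 20 removed the old `.req` attribute from parsed requirements, hence the crash) would be to parse requirements.txt directly in setup.py; a hypothetical sketch:

```python
# setup.py sketch: read requirement specifiers straight from requirements.txt
# instead of importing pip internals, whose API changed in pip >= 20.
from pathlib import Path

def read_requirements(path="requirements.txt"):
    """Return non-empty, non-comment lines of a requirements file."""
    reqs = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            reqs.append(line)
    return reqs
```

The resulting list can be passed to `setuptools.setup(install_requires=...)` without depending on any particular pip version.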

About eMethod

The eMethods 1, 2, 3 and 4 mentioned in "Convolutional neural networks for classification of Alzheimer's disease: Overview and reproducible evaluation" have no corresponding link. Please provide the URL, thanks a lot.

train can't find output of extract

Hi,

I am running clinicaDL to preprocess, extract and then train a DL model, in sequence. Here are the commands I ran:

clinicadl preprocessing /ad_dl/data/ /ad_dl/data/caps_dir ../pAD.tsv work_dir

clinicadl extract /ad_dl/data/caps_dir ../pAD.tsv work_dir slice

These two seem to execute without error. At the end of extract, I can see some .pt file created inside /ad_dl/data/caps_dir/subjects/sub-A00027159/ses-DS2/deeplearning_prepare_data/slice_based/t1_linear/.

But when I run the train command, I get the following error:

command: clinicadl train slice /ad_dl/data/caps_dir/ /ad_dl/ /ad_dl/data/ Conv4_FC3
Error:

FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/root/anac3/envs/clinicadl_adprog/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/root/anac3/envs/clinicadl_adprog/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
.....
FileNotFoundError: [Errno 2] No such file or directory: '/ad_dl/data/caps_dir/subjects/sub-A00027544/ses-DS2/t1/preprocessing_dl/sub-A00027544_ses-DS2_space-MNI_res-1x1x1.pt'

It seems like train is looking for files in the wrong location (I only see a directory called t1_linear inside ses-DS2), or perhaps preprocessing didn't finish?

Improve automatic tests

Other checks to implement in tests

  • check that transfer learning is working (AE --> AE, AE --> CNN, CNN --> CNN, single CNN --> multi CNN).
  • check that output of train can be read by classify.
