aramis-lab / clinicadl
Framework for the reproducible processing of neuroimaging data with deep learning methods
Home Page: https://clinicadl.readthedocs.io/
License: MIT License
Hi there @mdiazmel,
Thanks for developing such great software. I encountered a problem when preprocessing data; below is a crash example. It says the file ADNI_BIDS/sub-ADNI011S1080/ses-M24/anat/sub-ADNI011S1080_ses-M24_T1w.nii.gz does not exist. However, the file does exist:
xxx@xxx:~/adni$ ls -lh ADNI_BIDS/sub-ADNI011S1080/ses-M24/anat/sub-ADNI011S1080_ses-M24_T1w.nii.gz
-rwxrwxr-x 1 xxx xxx 12M May 27 20:55 ADNI_BIDS/sub-ADNI011S1080/ses-M24/anat/sub-ADNI011S1080_ses-M24_T1w.nii.gz
save_bias = True
shrink_factor = <undefined>
weight_image = <undefined>
Traceback:
Traceback (most recent call last):
File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 148, in __init__
mp_context=mp_context,
TypeError: __init__() got an unexpected keyword argument 'initializer'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
result = self._run_interface(execute=True)
File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
return self._run_command(execute)
File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
result = self._interface.run(cwd=outdir)
File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 397, in run
runtime = self._run_interface(runtime)
File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 792, in _run_interface
self.raise_exception(runtime)
File "/home/xxx/anaconda3/envs/adni/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 723, in raise_exception
).format(**runtime.dictcopy())
RuntimeError: Command:
N4BiasFieldCorrection --bspline-fitting [ 600 ] -d 3 --input-image ADNI_BIDS/sub-ADNI011S1080/ses-M24/anat/sub-ADNI011S1080_ses-M24_T1w.nii.gz --output [ sub-ADNI011S1080_ses-M24_T1w_corrected.nii.gz, sub-ADNI011S1080_ses-M24_T1w_bias.nii.gz ]
Standard output:
Standard error:
file ADNI_BIDS/sub-ADNI011S1080/ses-M24/anat/sub-ADNI011S1080_ses-M24_T1w.nii.gz does not exist .
Segmentation fault (core dumped)
Return code: 139
It would be great to be able to resume a crashed job and to launch the remaining folds of a job (if only a subset was requested at first).
Remove the parameter num_cnn for multi-patch training and automatically find the right number of CNNs.
Suggested in PR #20.
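One way to make num_cnn automatic would be to count the patch tensors extracted for a single subject/session. The helper and file pattern below are hypothetical, not ClinicaDL's actual naming scheme:

```python
from pathlib import Path

def infer_num_cnn(tensor_dir: str) -> int:
    """Count the patch tensors in a session folder to deduce how many CNNs
    a multi-patch experiment needs (hypothetical "*_patch-*.pt" pattern)."""
    return len(list(Path(tensor_dir).glob("*_patch-*.pt")))
```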
Hi,
First of all, thank you very much for sharing this; I am working on a very similar final-year university project and it is extremely useful to me. I was wondering a couple of things:
Hi, I am very interested in your work. I encountered some problems; I hope you can help me.
ImportError: No module named clinica.utils.inputs
Diagnoses are pre-associated with class values.
If someone wants to perform the binary classification CN vs sMCI, this could be a problem, as the two diagnoses have the same class value --> raise an Exception to prevent that case.
Data for continuous integration would be provided in a GitHub submodule.
Tests should be run at a given frequency on an Inria machine.
There are changes in this version which must be taken into account.
The Dataset and DataLoaders in tools/deep_learning/data.py could use Clinica's tools to automatically find files in a CAPS structure and load tensors and information into the DataLoaders.
This could facilitate the creation of generic DataLoaders for other image modalities.
For the implementation, see the clinica_file_reader and get_subject_session_list functions in Clinica.
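As a rough sketch of what such automatic discovery could look like (the subjects/sub-*/ses-* layout matches the CAPS paths quoted elsewhere in these issues, but the glob pattern is an assumption):

```python
from pathlib import Path

def find_tensors(caps_dir: str, pattern: str = "*.pt"):
    """Collect (participant, session, path) triples from a CAPS tree.
    Assumed layout: <caps>/subjects/sub-*/ses-*/.../<tensor>.pt"""
    triples = []
    for path in Path(caps_dir).glob("subjects/sub-*/ses-*/**/" + pattern):
        # participant and session IDs are directory names on the path
        sub = next(p for p in path.parts if p.startswith("sub-"))
        ses = next(p for p in path.parts if p.startswith("ses-"))
        triples.append((sub, ses, path))
    return sorted(triples)
```

A Dataset's __getitem__ could then index into this list and load the tensor lazily.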
This issue is to discuss the implementation of clinicadl classify (and eventually clinicadl test), as discussed in https://github.com/aramis-lab/AD-DL/pull/45.
What has to be discussed:
Should there be two separate commands (classify and test), or should we use a flag (such as --true_labels)?
train slice is not homogeneous with the others because it does not require a network type. A cnn subparser could be added, as well as multicnn as a new feature.
The option prepare_dl allows patch extraction and slice extraction to be performed during training.
Suggested in PR #20.
Could the subparsers of patch be autoencoder, multi-cnn and single-cnn instead of autoencoder and cnn? This would make the user realize that these choices exist and that they are quite different (also, some arguments are specific to multi-cnn).
Suggested in PR #20.
The test pipeline test_train_cnn occasionally crashes with train_slice because it may fail when one slice is always badly classified for all subjects. This happens when the number of subjects is very low (so hopefully, only in tests).
This could be fixed.
The paths of the images extracted by the preprocessing or extract pipelines have changed, but generate was not updated.
It will be fixed after the preprocessing pipelines are finished.
It would also be good to add a new subparser for roi instead of mixing it into the patch subparser, even though they share the same core code.
Suggested in PR #20.
Hi,
I'm a high schooler doing research on Alzheimer's disease diagnosis, and I ran into an issue when I ran the preprocessing command on my MacBook. It said I'm missing the ANTs software; do I need it, and where would I find it? I also ran the pip3 command and it didn't seem to install it. I have attached a screenshot of the error below. I would greatly appreciate it if someone could help me.
We previously planned to refactor clinicadl preprocessing so that:
clinicadl preprocessing t1-linear will wrap the CLI from Clinica >= 0.3.4
clinicadl preprocessing t1-extensive (cf. Issue #31) will use this upcoming pipeline in AD-DL
To simplify the use of ClinicaDL, a proposition from this morning is to merge preprocessing, tensor extraction and QC into preprocessing as follows:
clinicadl preprocessing run {t1-linear, t1-extensive, ...} (only t1-extensive and t1-linear will be added on a short basis; this could be extended to t1-volume). Basically, clinicadl preprocessing run is equivalent to clinica run in terms of behaviour.
clinicadl preprocessing qc {t1-linear, t1-extensive, ...} (only t1-linear will be covered, see PR #47)
clinicadl preprocessing extract-tensor {t1-linear, t1-extensive, ...} (TBD how we could nicely wrap deeplearning-prepare-data within this sub-parser)

Remove the transfer_learning_multicnn parameter from training and automatically determine whether the transfer learning folder is multi or single.
Suggested in PR #20.
By default use a GPU (and instead have a flag --cpu to force training on CPU).
Suggested in PR #20.
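A minimal sketch of the suggested flag inversion (the flag name is from the suggestion; the rest is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# GPU is the default; --cpu opts out, instead of requiring a GPU flag everywhere.
parser.add_argument("--cpu", action="store_true",
                    help="force training on CPU (default: train on GPU)")

args = parser.parse_args([])         # no flag given: train on GPU
use_gpu_default = not args.cpu
args = parser.parse_args(["--cpu"])  # flag given: train on CPU
use_gpu_forced_cpu = not args.cpu
```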
Quality check is performed thanks to models from https://github.com/vfonov/deep-qc (as mentioned in the paper).
A reference to their preprint / GitHub should be added in the code used for the quality check.
The classify task should be implemented.
Example scripts for the evaluation of the trained models can be found inside each folder (e.g. clinicadl/clinicadl/subject_level/evaluation.py).
generate could automatically detect the subjects and sessions that can be used to synthesize new data.
Eventually we could leave it as an option if the user wants to specify which part of the CAPS they want to use (but I don't think this is really useful...)
Some arguments could be changed:
use_extracted_patches|roi|slices could be replaced by use_extracted_features to be homogeneous with classify.
evaluation_steps could be a computational-resources argument instead of an optimization one.

TypeError: Object of type 'function' is not JSON serializable
When I use 3D patch to train the network with the parameter set to single, there is an error. The .pt file (the entire individual) is missing, but I have 36 3D patch files.
There are way too many print calls in clinicadl train.
A verbose argument could be added to control the number of prints.
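One common pattern would be to route those prints through the logging module and map a repeated -v flag to a level. The logger name and the mapping below are illustrative:

```python
import logging

def setup_logger(verbosity: int) -> logging.Logger:
    """Map a -v count to a log level: 0 -> WARNING, 1 -> INFO, 2+ -> DEBUG."""
    level = {0: logging.WARNING, 1: logging.INFO}.get(verbosity, logging.DEBUG)
    logger = logging.getLogger("clinicadl")  # logger name is illustrative
    logger.setLevel(level)
    return logger
```

Calls to print would then become logger.info or logger.debug, silenced by default.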
This pre-existing feature was lost during the previous refactoring.
It could be interesting to put it back, as AE training is costly and this feature was used during architecture search to transfer weights between two AEs of different sizes.
In clinicadl_env/lib/python3.6/site-packages/clinica/utils/ux.py, line 21:
cprint("\t%s | %s" % (unique_participants[-1], sessions_last_participant))
The first space in "\t %s | %s" is a non-breaking space (U+00A0), not an ASCII space, which causes an error at runtime:
UnicodeEncodeError: 'ascii' codec can't encode character '\xa0' in position 17: ordinal not in range(128)
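The failure is easy to reproduce: U+00A0 is outside ASCII, so encoding the format string fails, and replacing it with an ASCII space fixes it. The string below reproduces the character, not Clinica's exact source line:

```python
s = "\t\xa0%s | %s"  # contains U+00A0 (no-break space), as in the reported line

try:
    s.encode("ascii")
    ok_before = True
except UnicodeEncodeError:
    ok_before = False     # the no-break space cannot be encoded as ASCII

fixed = s.replace("\xa0", " ")  # swap the no-break space for an ASCII space
fixed.encode("ascii")           # now succeeds
```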
Extensive preprocessing (t1-volume-tissue-segmentation + skull-stripping) should be implemented.
The preprocessing subparser would have two options:
minimal --> wrapper on Clinica, or a message to the user.
extensive --> perform the extensive preprocessing.

Are the models used by 3D patch, multi-CNN and single-CNN all Conv4_FC3?
clinicadl train outputs a txt file in which the versions of some dependencies are stored. It could be improved so that it can be used in a template for issues on clinicadl train.
The number of discarded slices is hardcoded in MRIDataset_slice. It could be chosen by the user.
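Exposing the hardcoded value could be as simple as an extra parameter. MRIDataset_slice's internals aren't shown here, so this is a standalone sketch with an arbitrary default:

```python
def kept_slice_indices(n_slices: int, discarded: int = 20) -> list:
    """Indices that remain after dropping `discarded` slices at each end
    of the volume (the default of 20 is illustrative, not ClinicaDL's value)."""
    if 2 * discarded >= n_slices:
        raise ValueError("discarding more slices than the volume contains")
    return list(range(discarded, n_slices - discarded))
```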
Similarly to clinica, autocompletion using the Tab key could be a nice feature to add :-)
The preprocessings that can be chosen on the command line are linear and mni. For more homogeneity, mni could be renamed extensive or t1-volume (and linear would become t1-linear).
Suggested in PR #20.
There is a known issue when installing the pip version of scikit-learn with the conda version of PyTorch (see here for details).
The workaround for using clinicadl is to uninstall the conda version of PyTorch and install its pip version (at the same time, also reinstall scikit-learn and clinica with pip).
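The workaround spelled out as commands (package names as of this issue; adjust to your environment):

```shell
# Inside the conda environment used for clinicadl:
conda uninstall pytorch                          # drop the conda build of PyTorch
pip install torch                                # reinstall PyTorch from PyPI
pip install --force-reinstall scikit-learn clinica
```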
Automatically detect which files are available to conduct the analysis.
The outputs of autoencoders are different depending on the pipeline (patch/roi or image).
Moreover, the visualization argument should be implemented in the same way for patch/roi and image.
The transfer_learning_path parameter is not very clear. For the subject-level AE it is the path to the model file (filename included), while for the others it is the root folder of the experiment (the implementation of the subject-level AE should be changed).
Suggested in PR #20.
I got an error while installing the latest version of master with pip:
Obtaining file:///Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl
ERROR: Command errored out with exit status 1:
command: /Users/elina.thibeausutre/miniconda2/envs/clinicadl_doc/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl/setup.py'"'"'; __file__='"'"'/Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/j4/pt6zgm1102913npryfnj439c000mkt/T/pip-pip-egg-info-ossi3_vv
cwd: /Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl/
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl/setup.py", line 16, in <module>
reqs = [str(ir.req) for ir in install_reqs]
File "/Users/elina.thibeausutre/Documents/code/AD-DL_doc/clinicadl/setup.py", line 16, in <listcomp>
reqs = [str(ir.req) for ir in install_reqs]
AttributeError: 'ParsedRequirement' object has no attribute 'req'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
I commented lines 17 and 46 and everything went well.
My version of pip is 20.1.1.
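The traceback comes from pip 20.1 changing its private API: the objects returned by parse_requirements no longer have a .req attribute (recent pip exposes a .requirement string instead). A more robust fix is to stop importing pip internals in setup.py and read the requirements file directly; a sketch, with the file name assumed:

```python
from pathlib import Path

def read_requirements(path: str = "requirements.txt") -> list:
    """Parse a requirements file without pip's private API, whose
    ParsedRequirement class changed in pip 20.1."""
    lines = Path(path).read_text().splitlines()
    return [ln.strip() for ln in lines
            if ln.strip() and not ln.lstrip().startswith("#")]
```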
ANTs is used to perform image registration during the preprocessing. This requirement must be added to the README page.
In "Convolutional neural networks for classification of Alzheimer's disease: Overview and reproducible evaluation", the eMethods 1, 2, 3, and 4 mentioned have no corresponding link. Please provide the URL, thanks a lot.
For patch and slice, if split = None, all the splits of the k-fold CV are run. This could also be implemented for the subject level, and the default value for split could be set to None.
Suggested in PR #20.
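Once the convention is fixed, the dispatch is a one-liner (the n_splits default below is illustrative):

```python
def folds_to_run(split=None, n_splits=5):
    """split=None -> run every fold of the k-fold CV; an int -> just that fold."""
    return list(range(n_splits)) if split is None else [split]
```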
Hi,
I am running clinicaDL to preprocess, extract and then train a DL model, in sequence. Here are the commands I ran:
clinicadl preprocessing /ad_dl/data/ /ad_dl/data/caps_dir ../pAD.tsv work_dir
clinicadl extract /ad_dl/data/caps_dir ../pAD.tsv work_dir slice
These two seem to execute without error. At the end of extract, I can see some .pt file created inside /ad_dl/data/caps_dir/subjects/sub-A00027159/ses-DS2/deeplearning_prepare_data/slice_based/t1_linear/.
But when I run the train command, I get the following error:
command: clinicadl train slice /ad_dl/data/caps_dir/ /ad_dl/ /ad_dl/data/ Conv4_FC3
Error:
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/root/anac3/envs/clinicadl_adprog/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/root/anac3/envs/clinicadl_adprog/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
.....
FileNotFoundError: [Errno 2] No such file or directory: '/ad_dl/data/caps_dir/subjects/sub-A00027544/ses-DS2/t1/preprocessing_dl/sub-A00027544_ses-DS2_space-MNI_res-1x1x1.pt'
It seems like train is looking for files in the wrong location (I only see a directory called t1_linear inside ses-DS2), or perhaps preprocessing didn't finish?
Where can I view this result? And how can I draw the ROC curve and calculate the AUC?
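For the AUC part of the question: given per-subject scores and true labels, the AUC can be computed without extra dependencies, since it is the probability that a randomly chosen positive is ranked above a randomly chosen negative (scikit-learn's roc_auc_score yields the same value):

```python
def auc(labels, scores):
    """Rank-based AUC: ties between a positive and a negative count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```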
Other checks to implement in tests
scikit-image is a specific requirement only used in QC.
It could be removed by simplifying the load_nifti_images function.
Could this be used for resizing instead of the initial transform? https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.misc.imresize.html