
medical-image-analysis-laboratory / mialsuperresolutiontoolkit

26 stars · 7 watchers · 12 forks · 317.24 MB

The Medical Image Analysis Laboratory Super-Resolution ToolKit (MIALSRTK) consists of a set of C++ and Python processing and workflow tools necessary to perform motion-robust super-resolution fetal MRI reconstruction in the BIDS Apps framework.

License: BSD 3-Clause "New" or "Revised" License

Languages: CMake 1.03% · C++ 52.48% · Shell 1.57% · Python 24.71% · Dockerfile 0.82% · Jupyter Notebook 18.81% · HTML 0.59%
Topics: fetal mri super-resolution workflow bids bids-apps itk nipype

mialsuperresolutiontoolkit's People

Contributors

allcontributors[bot], hamzake, pdedumast, sebastientourbier, t-sanchez

mialsuperresolutiontoolkit's Issues

Bad memory allocation issue

Error

*** Error in `/opt/conda/envs/pymialsrtk-env/bin/python': corrupted size vs. prev_size: 0x000055d4457176f0 ***
Fatal Python error: Aborted

Current thread 0x00007f3b634f1740 (most recent call first):
Aborted (core dumped)

Extracted from circleci test-01 report: https://circleci.com/api/v1.1/project/github/Medical-Image-Analysis-Laboratory/mialsuperresolutiontoolkit/497/output/106/0?file=true&allocation-id=5fb693dccc7abf1d7a075c9a-0-build%2F3593797B

This does not seem to cause trouble, as the workflow appears to run to completion.

More
It seems to be related to some memory mismanagement when using the standard malloc library. The fix suggested in r9y9/gantts#14 (comment) is to use tcmalloc.
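
A minimal sketch of the suggested fix, assuming tcmalloc is installed at /usr/lib/libtcmalloc.so.4 (the same path the Docker image preloads, as visible in the environment dump further below). Since LD_PRELOAD must be set before the interpreter starts, the entry point is relaunched in a child process:

import os
import subprocess

# Relaunch the BIDS App entry point with tcmalloc preloaded; the library
# path is an assumption and may differ on other systems.
env = dict(os.environ, LD_PRELOAD="/usr/lib/libtcmalloc.so.4")
subprocess.run(["python", "run.py", "/bids_dir", "/output_dir", "participant"],
               env=env, check=True)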

Issue with code coverage

Error description

There seems to be an issue: code coverage dropped to 43%.

Looking at the code coverage report of preprocess.py on Codacy, it seems that coverage does not go into run_interface:
https://app.codacy.com/gh/Medical-Image-Analysis-Laboratory/mialsuperresolutiontoolkit/file/50369202494/coverage?bid=20968572&fileBranchId=20968572

This drop seems to have happened when the feature for specifying the number of cores was added. Here is a link to the coverage report for test-01 on CircleCI:
https://circle-production-customer-artifacts.s3.amazonaws.com/picard/5dad7aa5f0aec30dad88909a/5fad1f84c571f970a4e94939-0-build/artifacts/tmp/src/mialsuperresolutiontoolkit/data/test/test-01_coverage.xml?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20201119T113725Z&X-Amz-SignedHeaders=host&X-Amz-Expires=60&X-Amz-Credential=AKIAJR3Q6CR467H7Z55A%2F20201119%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=debf17e74844f97cf9c0f9734d2c1a54c60227c3a63d8c9a02f31c4d6eefc87b

Not sure though; it did not occur before.

Proposed fix to be tried

  • Set the number of cores used by Nipype to one in the test-01 command in .circleci/config.yml. Running the code in parallel might affect code coverage (see the sketch below).
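
A hedged sketch of the idea: with Nipype, running the workflow with the Linear plugin keeps all node execution in the main process, which coverage.py can trace, whereas MultiProc forks worker processes that plain coverage measurement may miss (wf is assumed to be the pipeline's Nipype workflow):

# Sequential, in-process execution: coverage can trace _run_interface calls.
wf.run(plugin='Linear')

# Parallel execution forks worker processes, which plain coverage.py does
# not trace without extra concurrency configuration.
wf.run(plugin='MultiProc', plugin_args={'n_procs': 2})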

Build and publish singularity image to SyLabs Cloud in CircleCI

This would include the following tasks:

  • Create an account in Sylabs Cloud (https://cloud.sylabs.io/home) and a token
  • Add the corresponding new environment variables in circleci
  • Add the following jobs in circleci:
    - Build the singularity image
    - Test the singularity image
    - Deploy the singularity image to Sylabs Cloud when a release is made

This might be facilitated by using the circleci singularity orb available at https://circleci.com/developer/orbs/orb/singularity/singularity.

Optimizing tensorflow model size

It is currently impossible to publish pymialsrtk to PyPI due to its 100 MB size limit, mainly because of the size of the trained TensorFlow models (2 x ~90 MB).

Here is a list of methods that can help optimize the TensorFlow models for serving predictions (a quantization sketch follows the list):

  • Freezing: Convert the variables stored in a checkpoint file of the SavedModel into constants stored directly in the model graph. This reduces the overall size of the model.

  • Pruning: Strip unused nodes in the prediction path and the outputs of the graph, merge duplicate nodes, and clean up other node ops like summary, identity, etc.

  • Constant folding: Look for any sub-graphs within the model that always evaluate to constant expressions, and replace them with those constants.

  • Folding batch norms: Fold the multiplications introduced in batch normalization into the weight multiplications of the previous layer.

  • Quantization: Convert weights from floating point to lower precision, such as 16 or 8 bits.
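
As a hedged sketch of the quantization option, assuming the trained networks can be exported to the SavedModel format (the paths are hypothetical), post-training quantization with the TensorFlow Lite converter would look like this:

import tensorflow as tf

# Post-training quantization: weights are stored at reduced precision,
# typically shrinking the model by roughly 4x for 8-bit weights.
converter = tf.lite.TFLiteConverter.from_saved_model('models/brain_extraction_savedmodel')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open('models/brain_extraction_quantized.tflite', 'wb') as f:
    f.write(tflite_model)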

ENH: Improvement of the stacksOrdering

Use the information on the number of rejected slices in the manual masks (in practice, the number of NaNs obtained from the motion index computation) to penalize a stack in the optional stacks_ordering interface, thereby taking into account motion-related intensity artifacts such as signal dropouts. A sketch follows.
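
A minimal sketch of such a penalty (function name and weight are hypothetical, not the actual stacks_ordering implementation):

import numpy as np

def penalized_motion_score(slice_motion_indices, penalty=2.0):
    """Score one stack: mean motion index over valid slices, penalized by
    the fraction of rejected slices (NaN motion index)."""
    m = np.asarray(slice_motion_indices, dtype=float)
    rejected_fraction = np.isnan(m).mean()
    base = np.nanmean(m) if rejected_fraction < 1.0 else np.inf
    return base + penalty * rejected_fraction

# Stacks would then be ordered by increasing score (least motion and
# fewest rejected slices first):
# order = np.argsort([penalized_motion_score(m) for m in per_stack_indices])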

Re. MialSRTK usage

Dear Experts,

I look forward to working with MialSRTK for fetal super-resolution MRI.

However, I ran into an error while running mialsrtk, both with and without the wrappers. Please find the error output attached below. I am also attaching the params and imaging files.

I would appreciate guidance on this issue.

Thank you.

Best Regards,
Amit.

docker run -t --rm -u $(id -u):$(id -g) -v /path/to/FetalMRI/BIDS/sub-1/anat:/bids_dir -v /path/to/FetalMRI/BIDS/derivatives:/output_dir sebastientourbier/mialsuperresolutiontoolkit /bids_dir /output_dir participant
id: : no such user
User:
id: : no such user
Group:
declare -x BIN_DIR="/usr/local/bin"
declare -x CONDA_ACTIVATE="source /opt/conda/bin/activate pymialsrtk-env"
declare -x CONDA_ENV_PATH="/opt/conda"
declare -x DISPLAY=":0"
declare -x GLIBCPP_FORCE_NEW="1"
declare -x GLIBCXX_FORCE_NEW="1"
declare -x HOME="/home/mialsrtk"
declare -x HOSTNAME="756a80a0007e"
declare -x LANG="C.UTF-8"
declare -x LC_ALL="C.UTF-8"
declare -x LD_PRELOAD="/usr/lib/libtcmalloc.so.4"
declare -x MY_CONDA_PY3ENV="pymialsrtk-env"
declare -x OLDPWD
declare -x PATH="/usr/local/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/app"
declare -x SHLVL="1"
declare -x TERM="xterm"
SHELL: /bin/sh
PATH: /usr/local/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-oxp0f0k4 because the default path (/home/mialsrtk/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
WARNING:tensorflow:From /opt/conda/envs/pymialsrtk-env/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
INFO: Number of cores used by Nipype engine set to 2
INFO: Environment variable OMP_NUM_THREADS set to: 2
/bids_dir/code/participants_params.json
{'participant_id': {'Description': 'Unique participant identifier'}, 'age': {'LongName': 'Long (unabbreviated) name of the column', 'Description': 'Description of the the column', 'Levels': {'Key': 'Value (This is for categorical variables: a dictionary of possible values (keys) and their descriptions (values))'}, 'Units': 'Measurement units. [<prefix symbol>]<unit symbol> format following the SI standard is RECOMMENDED'}, 'sex': {'LongName': 'Long (unabbreviated) name of the column', 'Description': 'Description of the the column', 'Levels': {'Key': 'Value (This is for categorical variables: a dictionary of possible values (keys) and their descriptions (values))'}, 'Units': 'Measurement units. [<prefix symbol>]<unit symbol> format following the SI standard is RECOMMENDED'}, 'size': {'LongName': 'Long (unabbreviated) name of the column', 'Description': 'Description of the the column', 'Levels': {'Key': 'Value (This is for categorical variables: a dictionary of possible values (keys) and their descriptions (values))'}, 'Units': 'Measurement units. [<prefix symbol>]<unit symbol> format following the SI standard is RECOMMENDED'}, 'weight': {'LongName': 'Long (unabbreviated) name of the column', 'Description': 'Description of the the column', 'Levels': {'Key': 'Value (This is for categorical variables: a dictionary of possible values (keys) and their descriptions (values))'}, 'Units': 'Measurement units. [<prefix symbol>]<unit symbol> format following the SI standard is RECOMMENDED'}}
dict_keys(['participant_id', 'age', 'sex', 'size', 'weight'])

Traceback (most recent call last):
  File "/opt/mialsuperresolutiontoolkit/docker/bidsapp/run.py", line 247, in <module>
    if len(args.participant_label) >= 1:
TypeError: object of type 'NoneType' has no len()

participants_params.txt
sub-1_acq-AXSSFSEBH_run-1_T2w.nii.gz
sub-1_acq-AXSSFSEBH_run-2_T2w.nii.gz
sub-1_acq-CorSSFSEBH_run-1_T2w.nii.gz
sub-1_acq-SAGSSFSEBH_run-1_T2w.nii.gz
sub-1_acq-SAGSSFSEBH_run-2_T2w.nii.gz
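
For reference, the traceback above shows that args.participant_label is None when --participant_label is not given on the command line. A defensive check along these lines (a sketch, not necessarily the actual fix in run.py) would avoid calling len() on None:

# In run.py, guard against --participant_label not being provided:
if args.participant_label is not None and len(args.participant_label) >= 1:
    subjects = args.participant_label  # process only the listed participants
else:
    subjects = None  # fall back to processing all participants in the BIDS dir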

Add tutorial "How to reorient scans to proper anatomical fetal brain orientation with Slicer3D"

The way fetal MRI scans are acquired can result in scans that are not properly oriented anatomically with respect to the fetal brain; this might decrease registration performance and thus super-resolution quality (the reconstruction can even fail if the scans are oriented too differently from each other).

Scans can be properly reoriented using Slicer3D. Although Slicer3D is a great tool, it might be hard to use at first.

Thus, this tutorial would guide the user through all the steps required to reorient an image and its corresponding mask, which would be of great practical benefit.

Generate a processing report

I created this issue to brainstorm on how we could integrate, in the future, the generation of a processing report to facilitate quality control on large datasets.

As I can see, we would need for this:

  • the creation of a PNG image in the run() function of each interface, using for instance nilearn for visualization of NIfTI images, or matplotlib or seaborn for illustration of the motion. At the same time, we will create a new output for each interface that we will use to connect a new interface that will create the report.
  • the creation of a JSON file in the run() function of each interface, which will describe the information to show in the report (for instance parameter values). Similarly, we will create a new output for each interface to connect to the new interface that will create the report.
  • the creation of an interface that collects the PNG images and JSON files of each stage and fills a Jinja2 HTML template appropriately.
  • Integration of the interface, connection of the new nodes.
  • Move and rename the report to pymialsrtk/sub-01/report/sub-01_report.html, for instance using the DataSink.

Please take all of this as suggestions; all suggestions are welcome 👍

Suggested Content

  • For each preprocessed LR scan, a nilearn plot_anat figure and a plot of the slice motion indices.
  • A nilearn plot_anat figure of the SR image, the parameters of the pipeline and of the SR, and ultimately the MSE between the scans and the scans simulated from the SR by applying the forward model.

More

There are nice resources that show how to use Jinja2 (https://jinja.palletsprojects.com/en/2.11.x/) for report templating; a minimal rendering sketch follows.
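
A minimal rendering sketch, assuming a report.html.j2 template and per-stage PNG/JSON artifact pairs (all file and variable names are hypothetical):

import json
from pathlib import Path
from jinja2 import Environment, FileSystemLoader

def generate_report(stage_dir, template_dir, out_html):
    """Collect the per-stage PNG/JSON pairs and render the HTML report."""
    stages = []
    for meta_file in sorted(Path(stage_dir).glob('*.json')):
        meta = json.loads(meta_file.read_text())
        stages.append({'params': meta,
                       'figure': str(meta_file.with_suffix('.png'))})
    env = Environment(loader=FileSystemLoader(template_dir))
    html = env.get_template('report.html.j2').render(stages=stages)
    Path(out_html).write_text(html)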

Add missing images in outputs.rst

IMPORTANT: Both should stay readable in the generated HTML and PDF docs on readthedocs.

Update .zenodo.json with Hamza and Priscille

Zenodo uses the ORCID iD to track authors. Therefore, you will have to create an ORCID account (https://orcid.org/) if you do not already have one, and then add an entry for yourself in the .zenodo.json here:

"creators": [
{
"name": "Tourbier, Sebastien",
"affiliation": "Department of Radiology, Lausanne University Hospital (CHUV), Switzerland",
"orcid": "0000-0002-4441-899X"
},
{
"name": "Bresson, Xavier",
"affiliation": "Data Science and AI Center (DSAIR), Nanyang Technological University (NTU), Singapore"
},
{
"name": "Bach Cuadra, Meritxell",
"affiliation": "Department of Radiology, Lausanne University Hospital (CHUV), Switzerland",
"orcid": "0000-0003-2730-4285"
},
{
"name": "Hagmann, Patric",
"affiliation": "Department of Radiology, Lausanne University Hospital (CHUV), Switzerland",
"orcid": "0000-0002-2854-6561"
}

Add a script that wraps the call to the Singularity image

Similarly to the script that wraps the docker run command (#41), this script will generate the singularity run/exec command (a minimal sketch follows the task list).

This includes:

  • Implementation of script in pymialsrtk/cli
  • Update the setup.py to specify this new script to be installed
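
A minimal sketch of such a wrapper (bind paths and the --containall option are illustrative choices, not a final design):

import subprocess

def run_singularity_bidsapp(bids_dir, output_dir, image,
                            analysis_level='participant'):
    """Generate and execute the singularity run command for the BIDS App."""
    cmd = [
        'singularity', 'run', '--containall',
        '-B', f'{bids_dir}:/bids_dir',
        '-B', f'{output_dir}:/output_dir',
        image,  # e.g. a library:// URI or a local .sif file
        '/bids_dir', '/output_dir', analysis_level,
    ]
    print(' '.join(cmd))
    subprocess.run(cmd, check=True)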

Add page to list changes between versions

This could be semi-automated by using the following command, which shows the commit messages between the current version (HEAD) and an old version vX.X.X:

git log vX.X.X..HEAD --pretty=format:' * %s %Cgreen(%cr) - commit %Cred%h%Creset' --abbrev-commit

Brain masks: optional masks directory parameter

So far, the brain masks are retrieved either through automatic computation in the pipeline or from a derivatives/manual_masks/ directory if the --manual flag is specified on the command line.

This issue proposes replacing the --manual flag with an option that specifies the derivatives directory in which the masks should be searched for; otherwise, the masks are computed automatically. A sketch follows.
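
A sketch of the corresponding parser change, assuming argparse is used in pymialsrtk/parser.py (the option name --masks_derivatives_dir is illustrative):

# In the BIDS App parser (pymialsrtk/parser.py):
parser.add_argument(
    '--masks_derivatives_dir', default=None,
    help='Name of the directory inside derivatives/ from which manual '
         'brain masks are loaded; if omitted, masks are computed '
         'automatically by the pipeline.')

# Later, when building the pipeline:
# use_manual_masks = args.masks_derivatives_dir is not None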

DOC: Make sure all references to version TAG are correct

In info.py: __version__ = '2.0.1'

PyPI: pip install pymialsrtk==2.0.1
Docker: docker pull sebastientourbier/mialsuperresolution:v2.0.2
Singularity: singularity pull library://tourbier/default/mialsuperresolutiontoolkit:v2.0.1

DOC: correct example for MialsrtkImageReconstruction interface

Example for the MialsrtkImageReconstruction interface as it appears in the doc:

>>> from pymialsrtk.interfaces.reconstruction import MialsrtkImageReconstruction
>>> srtkImageReconstruction = MialsrtkTVSuperResolution()
>>> srtkImageReconstruction.inputs.bids_dir = '/my_directory'
>>> srtkImageReconstruction.input_images = ['sub-01_ses-01_run-1_T2w.nii.gz', 'sub-01_ses-01_run-2_T2w.nii.gz', 'sub-01_ses-01_run-3_T2w.nii.gz', 'sub-01_ses-01_run-4_T2w.nii.gz']
>>> srtkImageReconstruction.input_masks = ['sub-01_ses-01_run-1_mask.nii.gz', 'sub-01_ses-01_run-2_mask.nii.gz', 'sub-01_ses-01_run-3_mask.nii.gz', 'sub-01_ses-01_run-4_mask.nii.gz']
>>> srtkImageReconstruction.inputs.stacks_order = [3,1,2,4]
>>> srtkImageReconstruction.inputs.sub_ses = 'sub-01_ses-01'
>>> srtkImageReconstruction.inputs.in_roi = 'mask'
>>> srtkImageReconstruction.inputs.in_deltat = 0.01
>>> srtkImageReconstruction.inputs.in_lambda = 0.75
>>> srtkImageReconstruction.run()  # doctest: +SKIP

Lines implicated:

>>> from pymialsrtk.interfaces.reconstruction import MialsrtkImageReconstruction
>>> srtkImageReconstruction = MialsrtkTVSuperResolution()
>>> srtkImageReconstruction.inputs.bids_dir = '/my_directory'
>>> srtkImageReconstruction.input_images = ['sub-01_ses-01_run-1_T2w.nii.gz', 'sub-01_ses-01_run-2_T2w.nii.gz', \
'sub-01_ses-01_run-3_T2w.nii.gz', 'sub-01_ses-01_run-4_T2w.nii.gz']
>>> srtkImageReconstruction.input_masks = ['sub-01_ses-01_run-1_mask.nii.gz', 'sub-01_ses-01_run-2_mask.nii.gz', \
'sub-01_ses-01_run-3_mask.nii.gz', 'sub-01_ses-01_run-4_mask.nii.gz']
>>> srtkImageReconstruction.inputs.stacks_order = [3,1,2,4]
>>> srtkImageReconstruction.inputs.sub_ses = 'sub-01_ses-01'
>>> srtkImageReconstruction.inputs.in_roi = 'mask'
>>> srtkImageReconstruction.inputs.in_deltat = 0.01
>>> srtkImageReconstruction.inputs.in_lambda = 0.75
>>> srtkImageReconstruction.run() # doctest: +SKIP

Typo:

>>> srtkImageReconstruction = MialsrtkTVSuperResolution()

that should be:

>>> srtkImageReconstruction = MialsrtkImageReconstruction()

To be deleted:

>>> srtkImageReconstruction.inputs.in_deltat = 0.01
>>> srtkImageReconstruction.inputs.in_lambda = 0.75

DOC: Motion index computation

Now that an exception has been raised to prevent division by zero in the computation of the motion index (thanks @sebastientourbier!), I think that the implementation should be briefly documented. Indeed, forcing the centroid coordinates to 0 artificially and drastically increases the motion index with respect to the real motion during the acquisition, whereas this exception is raised in the case of discontinuous brain masks, i.e., where slices were rejected because of strong motion/artifacts. This will further have an impact on the StacksOrdering preprocessing module, but the motion in the remaining slices that are actually used for SR reconstruction may not correspond to the high value of this index. It may be fine for most users, but they should be aware of this potential bias.
Happy to discuss this in more detail if you find it useful!
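
A toy illustration of the bias (numbers and code are illustrative, not the actual pymialsrtk implementation): a rejected slice whose centroid is forced to 0 lies far from the genuine brain-mask centroids and inflates any dispersion-based motion index.

import numpy as np

# In-plane centroids (voxels) of the brain mask in 5 slices; slice 3 was
# rejected because of strong motion, so its centroid was forced to (0, 0).
centroids = np.array([[120.0, 118.5],
                      [121.0, 119.0],
                      [120.5, 118.0],
                      [0.0, 0.0],        # rejected slice
                      [121.5, 119.5]])

motion_without_rejected = centroids[[0, 1, 2, 4]].std(axis=0).mean()  # ~0.5 voxel
motion_with_zeroed = centroids.std(axis=0).mean()                     # ~48 voxels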

Add option to specify custom trained model for brain extraction

This new feature can be accomplished by:

  • Addition of new arguments --custom_ckpt_brain_localisation and --custom_ckpt_brain_segmentation in pymialsrtk/parser.py (where the parser of the BIDS App is defined) to specify custom paths (prefixes) to the files of the trained networks. As we are using a container image, these files should be mounted and accessible inside it; I would suggest using the code/ folder, which is already mounted. (A sketch follows this list.)

  • Modification of docker/bidsapp/run.py and/or pymialsrtk/pipelines/anatomical/srr.py so that in_ckpt_loc for localization and in_ckpt_seg for segmentation are set to the custom paths (prefixes) specified by the new BIDS App arguments.
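
A sketch of the wiring, using the argument names above (the default checkpoint constants and exact attribute names are assumptions):

# pymialsrtk/parser.py -- new optional arguments:
parser.add_argument('--custom_ckpt_brain_localisation', default=None,
                    help='Path prefix (inside the container, e.g. under '
                         'code/) to a custom localization network checkpoint.')
parser.add_argument('--custom_ckpt_brain_segmentation', default=None,
                    help='Path prefix to a custom segmentation network '
                         'checkpoint.')

# docker/bidsapp/run.py and/or srr.py -- fall back to the bundled models:
in_ckpt_loc = args.custom_ckpt_brain_localisation or DEFAULT_CKPT_LOC
in_ckpt_seg = args.custom_ckpt_brain_segmentation or DEFAULT_CKPT_SEG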

Add option to generate custom trained model for brain extraction using the BIDS App

It would be great if we could execute the BIDS App on a BIDS dataset with manual masks and get the trained network as output in the derivatives. Ultimately, all the code necessary to retrain the network should be integrated into the BIDS App code, with the creation of a new interface and a new pipeline. The BIDS App would ideally be called as follows:

docker run -v /local/path/to/bids_dir:/bids_dir -v /local/path/to/output_dir:/output_dir \
sebastientourbier/mialsuperresolutiontoolkit-bidsapp /bids_dir /output_dir --group \
 --brain_extraction_training

This would include:

  • Add new BIDS App arguments: --brain_extraction_training (used to know that we want to create and run the new pipeline) and --group (defined by the BIDS App standard for execution at the group level)

  • Create a new interface (for instance BrainExtractionTraining) that incorporates the training code and generates the checkpoint files as outputs. It should take as inputs at least a list of input images and a list of input brain masks.

  • Addition of a new workflow as an attribute of the AnatomicalPipeline class

  • Creation of a new function (for instance create_brain_extraction_training_workflow()) in AnatomicalPipeline that builds a Nipype workflow (a sketch follows this list) consisting of:
    - A BIDSDataGrabber that gets the input images and the input masks
    - The BrainExtractionTraining interface itself, which takes the list of input images and brain masks as inputs and generates the checkpoint files as outputs.
    - A DataSink that takes the checkpoint files as inputs, and copies and renames the final checkpoint files to a final BIDS derivatives format. (This has to be thought through and discussed.)
    This function will set the new workflow attribute (e.g. self.new_workflow_name).

  • Creation of a new function (for instance run_brain_extraction_training_workflow()) which will run the new workflow attribute (e.g. self.new_workflow_name)

  • Modification of the docker/bidsapp/run.py and/or pymialsrtk/pipelines/anatomical/srr.py to create and run the brain extraction training workflow.

  • Documentation: Docstring and usage on readthedocs
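
A sketch of the new workflow-building function, under the assumptions above (BrainExtractionTraining does not exist yet; the BIDSDataGrabber output field names and the self.* attributes are hypothetical):

from nipype.pipeline.engine import Node, Workflow
from nipype.interfaces.io import BIDSDataGrabber, DataSink

def create_brain_extraction_training_workflow(self):
    wf = Workflow(name='brain_extraction_training', base_dir=self.wf_base_dir)

    grabber = Node(BIDSDataGrabber(), name='bids_grabber')
    grabber.inputs.base_dir = self.bids_dir

    # Hypothetical interface wrapping the network training code; it outputs
    # the checkpoint files once training is done.
    training = Node(BrainExtractionTraining(), name='training')

    sinker = Node(DataSink(), name='datasink')
    sinker.inputs.base_directory = self.output_dir

    wf.connect([
        (grabber, training, [('T2ws', 'input_images'),
                             ('masks', 'input_masks')]),
        (training, sinker, [('checkpoint_files', 'derivatives.@checkpoints')]),
    ])
    self.new_workflow_name = wf
    return wf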

BUG: TV Super Resolution Node crashes with segmentation fault

See trace below.
Note that the --nipype_nb_of_cores and --openmp_nb_of_cores parameters were not specified, and the SR image is well computed and saved.

Inner loop optimization took 96.245 (Unix time)
##########################################################################################################

Writing /output_dir/nipype-1.6.0/sub-ctrl0038/ses-20180912073131/rec-3/srr_pipeline/srtkTVSuperResolution/SRTV_sub-ctrl0038_ses-20180912073131_6V_rad1.nii.gz ... done.
  > Write JSON side-car...
  > Load SR image /output_dir/nipype-1.6.0/sub-ctrl0038/ses-20180912073131/rec-3/srr_pipeline/srtkTVSuperResolution/SRTV_sub-ctrl0038_ses-20180912073131_6V_rad1.nii.gz...
    Image properties: Zooms=(1.125, 1.125, 1.125)/ Shape=(320, 320, 94)/ FOV=[360.   360.   105.75]/ middle cut=[160, 160, 47]
  > Crop SR image at (100:220, 100:220, 0:-1)...
Fatal Python error: Segmentation fault

Current thread 0x00007f0a56ad1780 (most recent call first):
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/numpy/linalg/linalg.py", line 1456 in eigh
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nibabel/quaternions.py", line 211 in mat2quat
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nibabel/nifti1.py", line 1031 in set_qform
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nibabel/nifti1.py", line 1817 in _affine2header
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nibabel/spatialimages.py", line 503 in update_header
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nibabel/nifti1.py", line 1807 in update_header
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nibabel/nifti1.py", line 2044 in update_header
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nibabel/spatialimages.py", line 469 in __init__
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nibabel/analyze.py", line 923 in __init__
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nibabel/nifti1.py", line 1772 in __init__
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nibabel/spatialimages.py", line 350 in __getitem__
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/pymialsrtk/interfaces/reconstruction.py", line 422 in _run_interface
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 434 in run
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741 in _run_command
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635 in _run_interface
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516 in run
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 67 in run_node
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/concurrent/futures/process.py", line 239 in _process_worker
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/multiprocessing/process.py", line 99 in run
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/multiprocessing/process.py", line 297 in _bootstrap
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/multiprocessing/popen_fork.py", line 74 in _launch
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/multiprocessing/popen_fork.py", line 20 in __init__
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/multiprocessing/context.py", line 277 in _Popen
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/multiprocessing/process.py", line 112 in start
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/concurrent/futures/process.py", line 607 in _adjust_process_count
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/concurrent/futures/process.py", line 583 in _start_queue_management_thread
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/concurrent/futures/process.py", line 641 in submit
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 175 in _submit_job
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 367 in _send_procs_to_workers
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nipype/pipeline/plugins/base.py", line 184 in run
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nipype/pipeline/engine/workflows.py", line 632 in run
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/pymialsrtk/pipelines/anatomical/srr.py", line 679 in run
  File "/opt/mialsuperresolutiontoolkit/docker/bidsapp/run.py", line 235 in main
  File "/opt/mialsuperresolutiontoolkit/docker/bidsapp/run.py", line 306 in <module>
exception calling callback for <Future at 0x7f09e1868a10 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/concurrent/futures/_base.py", line 428, in result
    return self.__get_result()
  File "/opt/conda/envs/pymialsrtk-env/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

BUG: "skip_stacks_ordering" option doesn't work

When I want to keep the order of stacks specified in "stacks" (i.e., "custom_interfaces": {"skip_stacks_ordering": true}), the stacksOrdering function is still performed. The final SR reconstruction does not take into account the stack order I specified. (A sketch of the expected behavior follows.)
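
As a sketch of the expected behavior (attribute and helper names are hypothetical), the pipeline construction should bypass the ordering computation when the flag is set:

# Expected behavior when building the preprocessing workflow (sketch):
if self.m_skip_stacks_ordering:
    # Keep the user-specified order from the "stacks" parameter as-is.
    stacks_order = self.m_stacks
else:
    # Otherwise compute the order with the stacksOrdering interface.
    stacks_order = compute_stacks_ordering(input_masks)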

Add a script that wraps the call to the Docker image

This script would let us abstract away the docker run command.

Tasks:

  • Creation of the script pymialsrtk/cli/mialsuperresolutiontoolkit-bidsapp, which will (1) use the BIDS App parser, (2) generate the Docker run command, and (3) execute it with our run(cmd) function (see the sketch after this list)

  • Add this script to be installed in setup.py

  • Modify calls to BIDS App in circleci

  • Update the documentation
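
A sketch of the wrapper logic (get_parser() and run(cmd) refer to the existing BIDS App parser and command helper mentioned above; exact argument names are assumptions):

import os

def main():
    # (1) Reuse the BIDS App parser so the wrapper accepts the same arguments.
    args = get_parser().parse_args()

    # (2) Generate the docker run command, binding the BIDS and output dirs.
    cmd = ('docker run -t --rm '
           f'-u {os.getuid()}:{os.getgid()} '
           f'-v {args.bids_dir}:/bids_dir '
           f'-v {args.output_dir}:/output_dir '
           'sebastientourbier/mialsuperresolutiontoolkit-bidsapp '
           f'/bids_dir /output_dir {args.analysis_level}')

    # (3) Execute it.
    return run(cmd)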

DOC: Make sure the references to version TAG are correct

In the documentation, the 'v' is missing in the command line to get the latest release (2.0.1) of the BIDS App:
$ docker pull sebastientourbier/mialsuperresolutiontoolkit:v2.0.1
It is probably missing as well in the singularity pull command.

BUG: Encapsulate StacksOrdering and srtkHRmask in nipype nodes

Not sure what the problem is, but I observed that, on the cluster with Singularity, the pipeline fails when Function() (and possibly IdentityInterface) nodes are used.

With the custom interface settings "skip_stacks_ordering": true and "do_refine_hr_mask": true, the two problematic nodes are bypassed and the processing goes well.

Problematic nodes:

srtkHRMask = Node(interface=Function(input_names=["input_image"],
                                     output_names=["output_srmask"],
                                     function=postprocess.binarize_image),
                  name='srtkHRMask')

and

stacksOrdering = Node(interface=IdentityInterface(fields=['stacks_order']), name='stackOrdering')
stacksOrdering.inputs.stacks_order = self.m_stacks
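
A sketch of the encapsulation for the binarization node, replacing the Function node with a plain Nipype interface (the class name and the thresholding detail are suggestions; the real logic lives in postprocess.binarize_image):

import os
import nibabel as nib
import numpy as np
from nipype.interfaces.base import (BaseInterface, BaseInterfaceInputSpec,
                                    TraitedSpec, File)

class BinarizeImageInputSpec(BaseInterfaceInputSpec):
    input_image = File(exists=True, mandatory=True, desc='Image to binarize')

class BinarizeImageOutputSpec(TraitedSpec):
    output_srmask = File(desc='Binarized HR brain mask')

class BinarizeImage(BaseInterface):
    """Full Nipype interface around the binarization step."""
    input_spec = BinarizeImageInputSpec
    output_spec = BinarizeImageOutputSpec

    def _run_interface(self, runtime):
        img = nib.load(self.inputs.input_image)
        mask = (img.get_fdata() > 0).astype(np.uint8)
        self._out = os.path.abspath('output_srmask.nii.gz')
        nib.save(nib.Nifti1Image(mask, img.affine), self._out)
        return runtime

    def _list_outputs(self):
        return {'output_srmask': self._out}

# The Function node would then become:
# srtkHRMask = Node(interface=BinarizeImage(), name='srtkHRMask')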
