
halfpipe's Introduction

Welcome to ENIGMA HALFpipe

HALFpipe is a user-friendly software that facilitates reproducible analysis of fMRI data, including preprocessing, single-subject, and group analysis. It provides state-of-the-art preprocessing using fmriprep, but removes the necessity to convert data to the BIDS format. Common resting-state and task-based fMRI features can then be calculated on the fly.

HALFpipe relies on tools from well-established neuroimaging software packages, either directly or through our dependencies, including ANTs, FreeSurfer, FSL and nipype. We strongly urge users to acknowledge these tools when publishing results obtained with HALFpipe.

Subscribe to our mailing list to stay up to date with new developments and releases.

Getting started

HALFpipe is distributed as a container, meaning that all required software comes bundled in a monolithic file, the container. This allows for easy installation on new systems, and makes data analysis more reproducible, because software versions are guaranteed to be the same for all users.

Container platform

The first step is to install one of the supported container platforms. If you’re using a high-performance computing cluster, more often than not Singularity will already be available.

If not, we recommend using the latest version of Singularity. However, it can be somewhat cumbersome to install, as it needs to be built from source.

The NeuroDebian package repository provides an older version of Singularity for some Linux distributions.

If you are running macOS, then you should be able to run the container with Docker Desktop.

If you are running Windows, you can also try running with Docker Desktop, but we have not done any compatibility testing yet, so issues may occur, for example with respect to file systems.

Container platform Version Installation
Singularity 3.x https://sylabs.io/guides/3.8/user-guide/quick_start.html
Singularity 2.x sudo apt install singularity-container
Docker See https://docs.docker.com/engine/install/
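
If you are unsure whether a container platform is already installed, you can ask for its version on the command line. These are generic shell commands, not part of HALFpipe:

    singularity --version
    docker --version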

Download

The second step is to download the HALFpipe container image to your computer. This requires approximately 5 gigabytes of storage.

Container platform Version Download
Singularity 3.x https://download.fmri.science/singularity/halfpipe-latest.sif
Singularity 2.x https://download.fmri.science/singularity/halfpipe-latest.simg
Docker docker pull halfpipe/halfpipe:latest

Singularity version 3.x creates a container image file called HALFpipe_{version}.sif in the directory where you run the pull command. For Singularity version 2.x the file is named halfpipe-halfpipe-master-latest.simg. Whenever you want to use the container, you need to pass Singularity the path to this file.
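
As a sketch, the Singularity 3.x image could be downloaded directly from the URL in the table above, or pulled from Docker Hub; both commands assume the respective tool is installed, and the resulting file name may differ slightly on your system:

    wget https://download.fmri.science/singularity/halfpipe-latest.sif
    singularity pull docker://halfpipe/halfpipe:latest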

NOTE: Singularity may store a copy of the container in its cache directory. The cache directory is located by default in your home directory at ~/.singularity. If you need to save disk space in your home directory, you can safely delete the cache directory after downloading, i.e. by running rm -rf ~/.singularity. Alternatively, you could move the cache directory somewhere with more free disk space using a symlink. This way, files will automatically be stored there in the future. For example, if you have a lot of free disk space in /mnt/storage, then you could first run mv ~/.singularity /mnt/storage to move the cache directory, and then ln -s /mnt/storage/.singularity ~/.singularity to create the symlink.

Docker will store the container in its storage base directory, so it does not matter from which directory you run the pull command.

Running

The third step is to run the downloaded container. You may need to replace halfpipe-halfpipe-latest.simg with the actual path and filename where Singularity downloaded your container.

Container platform Command
Singularity singularity run --containall --bind /:/ext halfpipe-halfpipe-latest.simg
Docker docker run --interactive --tty --volume /:/ext halfpipe/halfpipe

You should now see the user interface.

Background

Containers are by default isolated from the host computer. This adds security, but also means that the container cannot access the data it needs for analysis. HALFpipe expects all inputs (e.g., image files and spreadsheets) and outputs (the working directory) to be placed in the path /ext (see also --fs-root). Using the option --bind /:/ext, we instruct Singularity to map the entire host file system (/) to that path (/ext). You can also run HALFpipe and map only part of the host file system, but keep in mind that any directories that are not mapped will not be visible later.
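
For example, if all of your input data and your working directory live under /mnt/storage (a hypothetical path), a sketch of a more restrictive invocation could look like this; inside HALFpipe, that directory will then be visible as /ext:

    singularity run --containall --bind /mnt/storage:/ext halfpipe-halfpipe-latest.simg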

Singularity passes the host shell environment to the container by default. This means that in some cases, the host computer’s configuration can interfere with the software. To avoid this, we need to pass the option --containall. Docker does not pass the host shell environment by default, so we don’t need to pass an option.

User interface

Outdated

The user interface asks a series of questions about your data and the analyses you want to run. In each question, you can press Control+C to cancel the current question and go back to the previous one. Control+D exits the program without saving. Note that these keyboard shortcuts are the same on Mac.

Files

To run preprocessing, at least a T1-weighted structural image and a BOLD image file are required. Preprocessing and data analysis proceed automatically. However, to be able to run automatically, data files need to be input in a way suitable for automation.

For this kind of automation, HALFpipe needs to know the relationships between files, such as which files belong to the same subject. However, even though it would be obvious for a human, a program cannot easily assign a file name to a subject, and this will be true as long as there are differences in naming between different researchers or labs. One researcher may name the same file subject_01_rest.nii.gz and another subject_01/scan_rest.nii.gz.

In HALFpipe, we solve this issue by inputting file names in a specific way. For example, instead of subject_01/scan_rest.nii.gz, HALFpipe expects you to input {subject}/scan_rest.nii.gz. HALFpipe can then match all files on disk that match this naming schema, and extract the subject ID subject_01. Using the extracted subject ID, other files can now be matched to this image. If all input files are available in BIDS format, then this step can be skipped.
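
To illustrate (the directory layout below is made up), a single path template can match multiple files on disk and extract one subject ID per match:

    Template: /data/study/{subject}/scan_rest.nii.gz
    Matches:  /data/study/subject_01/scan_rest.nii.gz  ->  subject = subject_01
              /data/study/subject_02/scan_rest.nii.gz  ->  subject = subject_02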

  1. Specify working directory All intermediate files and outputs of HALFpipe will be placed in the working directory. Keep in mind to choose a location with sufficient free disk space, as intermediate files can take up multiple gigabytes per subject.
  2. Is the data available in BIDS format?
    • Yes
      1. Specify the path of the BIDS directory
    • No
      1. Specify anatomical/structural data Specify the path of the T1-weighted image files
      2. Specify functional data Specify the path of the BOLD image files
      3. Check repetition time values / Specify repetition time in seconds
      4. Add more BOLD image files?
        • Yes Loop back to 2
        • No Continue
  3. Do slice timing?
    • Yes
      1. Check slice acquisition direction values
      2. Check slice timing values
    • No Skip this step
  4. Specify field maps? If the data was imported from a BIDS directory, this step will be omitted.
    • Yes
      1. Specify the type of the field maps
        • EPI (blip-up blip-down)
          1. Specify the path of the blip-up blip-down EPI image files
        • Phase difference and magnitude (used by Siemens scanners)
          1. Specify the path of the magnitude image files
          2. Specify the path of the phase/phase difference image files
          3. Specify echo time difference in seconds
        • Scanner-computed field map and magnitude (used by GE / Philips scanners)
          1. Specify the path of the magnitude image files
          2. Specify the path of the field map image files
      2. Add more field maps? Loop back to 1
      3. Specify effective echo spacing for the functional data in seconds
      4. Specify phase encoding direction for the functional data
    • No Skip this step

Features

Features are analyses that are carried out on the preprocessed data, in other words, first-level analyses.

  1. Specify first-level features?
    • Yes
      1. Specify the feature type
        • Task-based

          1. Specify feature name
          2. Specify images to use
          3. Specify the event file type
          • SPM multiple conditions A MATLAB .mat file containing three arrays: names (condition), onsets and durations
          • FSL 3-column One text file for each condition. Each file has its corresponding condition in the filename. The first column specifies the event onset, the second the duration. The third column of the files is ignored, so parametric modulation is not supported
          • BIDS TSV A tab-separated table with named columns trial_type (condition), onset and duration. While BIDS supports defining additional columns, HALFpipe will currently ignore these (example event files are shown after this list)
          1. Specify the path of the event files
          2. Select conditions to add to the model
          3. Specify contrasts
            1. Specify contrast name
            2. Specify contrast values
            3. Add another contrast?
              • Yes Loop back to 1
              • No Continue
          4. Apply a temporal filter to the design matrix? A separate temporal filter can be specified for the design matrix. In contrast, the temporal filtering of the input image and any confound regressors added to the design matrix is specified in 10. In general, the two settings should match
          5. Apply smoothing?
            • Yes
              1. Specify smoothing FWHM in mm
            • No Continue
          6. Grand mean scaling will be applied with a mean of 10000.000000
          7. Temporal filtering will be applied using a gaussian-weighted filter Specify the filter width in seconds
          8. Remove confounds?
        • Seed-based connectivity
          1. Specify feature name
          2. Specify images to use
          3. Specify binary seed mask file(s)
            1. Specify the path of the binary seed mask image files
            2. Check space values
            3. Add binary seed mask image file
        • Dual regression
          1. Specify feature name
          2. Specify images to use
          3. TODO
        • Atlas-based connectivity matrix
          1. Specify feature name
          2. Specify images to use
          3. TODO
        • ReHo
          1. Specify feature name
          2. Specify images to use
          3. TODO
        • fALFF
          1. Specify feature name
          2. Specify images to use
          3. TODO
    • No Skip this step
  2. Add another first-level feature?
    • Yes Loop back to 1
    • No Continue
  3. Output a preprocessed image?
    • Yes
      1. Specify setting name
      2. Specify images to use
      3. Apply smoothing?
        • Yes
          1. Specify smoothing FWHM in mm
        • No Continue
      4. Do grand mean scaling?
        • Yes
          1. Specify grand mean
        • No Continue
      5. Apply a temporal filter?
        • Yes
          1. Specify the type of temporal filter
            • Gaussian-weighted
            • Frequency-based
        • No Continue
      6. Remove confounds?
    • No Continue
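
As referenced in the task-based feature steps above, here is a minimal sketch of the two most common event file formats; the condition names and timings are made up. A BIDS events.tsv has named columns:

    onset   duration   trial_type
    10.0    2.0        faces
    18.5    2.0        houses

The equivalent FSL 3-column format uses one text file per condition (e.g. faces.txt), with onset, duration, and a third column that HALFpipe ignores:

    10.0   2.0   1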

Models

Models are statistical analyses that are carried out on the features.

TODO

Running on a high-performance computing cluster

  1. Log in to your cluster’s head node
  2. Request an interactive job. Refer to your cluster’s documentation for how to do this
  3.  In the interactive job, run the HALFpipe user interface, but add the flag --use-cluster to the end of the command.
     For example, singularity run --containall --bind /:/ext halfpipe-halfpipe-latest.sif --use-cluster
  4. As soon as you finish specifying all your data, features and models in the user interface, HALFpipe will generate everything needed to run on the cluster. For hundreds of subjects, this can take up to a few hours.
  5. When HALFpipe exits, edit the generated submit script submit.slurm.sh according to your cluster’s documentation and then run it. This submit script will calculate everything except group statistics.
  6. As soon as all processing has been completed, you can run group statistics. This is usually very fast, so you can do this in an interactive session. Run singularity run --containall --bind /:/ext halfpipe-halfpipe-latest.sif --only-model-chunk and then select Run without modification in the user interface.
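
Putting steps 5 and 6 together, a minimal sketch for a SLURM cluster could look like this; the exact submission command and any required scheduler options depend on your cluster:

    sbatch submit.slurm.sh
    # ... wait until all submitted jobs have finished ...
    singularity run --containall --bind /:/ext halfpipe-halfpipe-latest.sif --only-model-chunk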

Quality checks

Please see the manual

Outputs

Outdated

  • A visual report page reports/index.html
  • A table with image quality metrics reports/reportvals.txt
  • A table containing the preprocessing status reports/reportpreproc.txt
  • The untouched fmriprep derivatives (some files have been omitted to save disk space). fmriprep is very strict about only processing data that is compliant with the BIDS standard, so we may need to reformat subject names for compliance. For example, an input subject named subject_01 will appear as subject01 in the fmriprep derivatives. derivatives/fmriprep

Subject-level features

  •  For task-based, seed-based connectivity and dual regression features, HALFpipe outputs the statistical maps for the effect, the variance, the degrees of freedom of the variance and the z-statistic. In FSL, the effect and variance are also called cope and varcope
     derivatives/halfpipe/sub-.../func/..._stat-effect_statmap.nii.gz
     derivatives/halfpipe/sub-.../func/..._stat-variance_statmap.nii.gz
     derivatives/halfpipe/sub-.../func/..._stat-dof_statmap.nii.gz
     derivatives/halfpipe/sub-.../func/..._stat-z_statmap.nii.gz
     The design and contrast matrices used for the final model will be output alongside the statistical maps
     derivatives/halfpipe/sub-.../func/sub-..._task-..._feature-..._desc-design_matrix.tsv
     derivatives/halfpipe/sub-.../func/sub-..._task-..._feature-..._desc-contrast_matrix.tsv
  •  ReHo and fALFF are not calculated based on a linear model. As such, only one statistical map of the z-scaled values will be output
     derivatives/halfpipe/sub-.../func/..._alff.nii.gz
     derivatives/halfpipe/sub-.../func/..._falff.nii.gz
     derivatives/halfpipe/sub-.../func/..._reho.nii.gz
  •  For every feature, a .json file containing a summary of the preprocessing settings and a list of the raw data files that were used for the analysis (RawSources)
     derivatives/halfpipe/sub-.../func/....json
  •  For every feature, the corresponding brain mask is output beside the statistical maps. Masks do not differ between different features calculated, they are only copied out repeatedly for convenience
     derivatives/halfpipe/sub-.../func/...desc-brain_mask.nii.gz
  •  Atlas-based connectivity outputs the time series and the full covariance and correlation matrices as text files
     derivatives/halfpipe/sub-.../func/..._timeseries.txt
     derivatives/halfpipe/sub-.../func/..._desc-covariance_matrix.txt
     derivatives/halfpipe/sub-.../func/..._desc-correlation_matrix.txt

Preprocessed images

  •  Masked, preprocessed BOLD image
     derivatives/halfpipe/sub-.../func/..._bold.nii.gz
  •  Just like for features, a .json file containing a summary of the preprocessing settings
     derivatives/halfpipe/sub-.../func/..._bold.json
  •  Just like for features, the corresponding brain mask
     derivatives/halfpipe/sub-.../func/sub-..._task-..._setting-..._desc-brain_mask.nii.gz
  •  Filtered confounds time series, where all filters that are applied to the BOLD image are applied to the regressors as well. Note that this means that when grand mean scaling is active, the confounds time series are also scaled, so values such as framewise displacement cannot be interpreted in their original units anymore.
     derivatives/halfpipe/sub-.../func/sub-..._task-..._setting-..._desc-confounds_regressors.tsv

Group-level

  • grouplevel/...

Troubleshooting

  • If an error occurs, this will be output to the command line and simultaneously to the err.txt file in the working directory
  • If the error occurs while running, usually a text file detailing the error will be placed in the working directory. These are text files and their file names start with crash
    • Usually, the last line of these text files contains the error message. Please read this carefully, as it may allow you to understand the error (a quick way to view it is shown after this list)
    • For example, consider the following error message: ValueError: shape (64, 64, 33) for image 1 not compatible with first image shape (64, 64, 34) with axis == None This error message may seem cryptic at first. However, looking at the message more closely, it suggests that two input images have different, incompatible dimensions. In this case, HALFpipe correctly recognized this issue, and there is no need for concern. The images in question will simply be excluded from preprocessing and/or analysis
    • In some cases, the cause of the error can be a bug in the HALFpipe code. Please check that no similar issue has been reported here on GitHub. In this case, please submit an issue.
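
As mentioned above, the error message is usually at the end of a crash file, so a quick way to inspect it is to print the last few lines with a standard shell command:

    tail -n 5 crash-*.txt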

Command line flags

Control command line logging

--verbose

By default, only errors and warnings will be output to the command line. This makes it easier to see when something goes wrong, because there is less output. However, if you want to be able to inspect what is being run, you can add the --verbose flag to the end of the command used to call HALFpipe.

Verbose logs are always written to the log.txt file in the working directory, so going back and inspecting this log is always possible, even if the --verbose flag was not specified.

Specifying the flag --debug will print additional, fine-grained messages. It will also automatically start the Python Debugger when an error occurs. You should only use --debug if you know what you’re doing.
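
For example, a sketch of running the container with verbose logging (container file name as in the Running section above):

    singularity run --containall --bind /:/ext halfpipe-halfpipe-latest.simg --verbose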

Automatically remove unneeded files

--keep

HALFpipe saves intermediate files for each pipeline step. This speeds up re-running with different settings, or resuming a job after it was cancelled. The intermediate files are saved by the nipype workflow engine, which HALFpipe uses internally. nipype saves the intermediate files in the nipype folder in the working directory.

In environments with limited disk capacity, this can be problematic. To limit disk usage, HALFpipe can delete intermediate files as soon as they are not needed anymore. This behavior is controlled with the --keep flag.

The default option --keep some keeps all intermediate files from fMRIPrep and MELODIC, which would take the longest to re-run. We believe this is a good tradeoff between disk space and computer time. --keep all turns off all deletion of intermediate files. --keep none deletes as much as possible, so that the smallest possible amount of disk space is used.
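
For example, on a system with very limited disk space, a sketch of an invocation that deletes as many intermediate files as possible would be:

    singularity run --containall --bind /:/ext halfpipe-halfpipe-latest.simg --keep none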

Configure nipype

--nipype-<omp-nthreads|memory-gb|n-procs|run-plugin>

HALFpipe chooses sensible defaults for all of these values.

Choose which parts to run or to skip

Outdated

--<only|skip>-<spec-ui|workflow|run|model-chunk>

A HALFpipe run is divided internally into three stages, spec-ui, workflow, and run.

  1. The spec-ui stage is where you specify things in the user interface. It creates the spec.json file that contains all the information needed to run HALFpipe. To only run this stage, use the option --only-spec-ui. To skip this stage, use the option --skip-spec-ui
  2. The workflow stage is where HALFpipe uses the spec.json data to search for all the files that match what was input in the user interface. It then generates a nipype workflow for preprocessing, feature extraction and group models. nipype then validates the workflow and prepares it for execution. This usually takes a couple of minutes and cannot be parallelized. For hundreds of subjects, this may even take a few hours. This stage has the corresponding options --only-workflow and --skip-workflow.
  • This stage saves several intermediate files. These are named workflow.{uuid}.pickle.xz, execgraph.{uuid}.pickle.xz and execgraph.{n_chunks}_chunks.{uuid}.pickle.xz. The uuid in the file name is a unique identifier generated from the spec.json file and the input files. It is re-calculated every time we run this stage. The uuid algorithm produces a different output if there are any changes (such as when new input files for new subjects become available, or the spec.json is changed, for example to add a new feature or group model). Otherwise, the uuid stays the same. Therefore, if a workflow file with the calculated uuid already exists, then we do not need to run this stage. We can simply reuse the workflow from the existing file and save some time.
  • In this stage, we can also decide to split the execution into chunks. The flag --subject-chunks creates one chunk per subject. The flag --use-cluster automatically activates --subject-chunks. The flag --n-chunks allows the user to specify a specific number of chunks. This is useful if the execution should be spread over a set number of computers. In addition to these, a model chunk is generated.
  3. The run stage loads the execgraph.{n_chunks}_chunks.{uuid}.pickle.xz file generated in the previous step and runs it. This file usually contains two chunks, one for the subject level preprocessing and feature extraction (“subject level chunk”), and one for group statistics (“model chunk”). To run a specific chunk, you can use the flags --only-chunk-index ... and --only-model-chunk.
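
For example, a sketch of running only a single subject-level chunk of a previously generated execution graph; the working directory path, chunk count and uuid below are placeholders, and the flags are the same ones used in the generated cluster submit scripts:

    singularity run --containall --bind /:/ext halfpipe-halfpipe-latest.simg \
      --workdir /ext/path/to/workdir \
      --only-run \
      --execgraph-file execgraph.2_chunks.{uuid}.pickle.xz \
      --only-chunk-index 1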

Working directory

--workdir

Data file system root

--fs-root

The HALFpipe container, or really most containers, contain the entire base system needed to run

Contact

For questions or support, please submit an issue or contact us via e-mail at [email protected].

halfpipe's People

Contributors

dependabot[bot], dominikgoeller, ebohorqu, graphvar, hippocampusgirl, ilyaveer, jstaph, lalalavi, luxoar, marcbue, pre-commit-ci[bot], trislett


halfpipe's Issues

Halfpipe user interface won't load

What happened?
I have downloaded the beta 5 image, but when I run the command to get the Halfpipe user interface, nothing is happening. I have tried it on a couple of different nodes on the cluster. I did try to delete the singularity cache folder, as well, to see if that helped. I have also tried to re-run the version from last week that definitely worked for me, and that will not open either.

Is there an error message?
None, just won't load.

What is the exact command that you used to run Halfpipe?
singularity run --cleanenv --bind /:/ext halfpipe_beta5.sif --use-cluster

Which settings did you use?
N/A but I did check to make sure my image forwarding was working on the cluster (able to open graphical interfaces). It's so bizarre because none of my halfpipe images will open, even previous ones that worked before. I have also messaged our cluster admin to ask if anything might cause the issue on our end. Let me know if you have any ideas.
Screen Shot 2020-11-04 at 10 51 38 AM

Clarify slice acquisition vs. phase encoding direction

someone had the same issue as Courtney here: #29 and I realised that if the information from the header is lost, some people may not know what to put (e.g., the data could be old). Should we add instructions on how to check the header information manually (e.g., fslval {file} qform_zorient)?

Support surface-based fMRI processing

These workflows need updating. May also be a good opportunity to add targeted unit tests for different input file formats (NIfTI, CIFTI, TSV) and docstrings.

Resampling to the surface

  • Factory for the resampling to MNI152NLin6Asym
  • Factory for bold_surf_wf to fsaverage and fsnative
  • workflow and Factory for MSM
  • Factory for bold_grayords_wf

Quality checks

  • anat_reports_wf add SurfaceSnapshots
  • anat_reports_wf add segmentation render as per ENIGMA QC manual
  • halfpipe.workflow.report needs to pass through the surface-based reports images to the reports folder

Postprocessing workflows

  • halfpipe.workflow.setting.bandpassfilter init_bandpass_filter_wf
  • halfpipe.workflow.setting.confounds init_confounds_regression_wf
  • halfpipe.workflow.setting.fmriprepadapter needs to also get the CIFTI files from fMRIPrep
  • halfpipe.workflow.setting.grandmeanscaling init_grand_mean_scaling_wf
  • halfpipe.workflow.setting.icaaroma init_ica_aroma_regression_wf
  • halfpipe.workflow.setting.settingadapter needs to support multiple BOLD outputs
  • halfpipe.workflow.setting.smoothing init_smoothing_wf

Feature extraction

  • halfpipe.workflow.feature.atlasbasedconnectivity
  • halfpipe.workflow.feature.dualregression
  • halfpipe.workflow.feature.falff
  • halfpipe.workflow.feature.reho
  • halfpipe.workflow.feature.seedbasedconnectivity
  • halfpipe.workflow.feature.taskbased

Group model

  • halfpipe.workflow.model.base

Nice to have

  • Import existing FreeSurfer and match subject IDs

Implement preset files

Run the user interface with a preset file to skip certain steps with predefined values

Send this file to your collaborators so that your Halfpipe settings are the same

Maybe call it "multiverse" instead of "preset"

Group/covariates spreadsheet

Hi Lea,

When specifying the group level statistics I get an error when I try to use a csv file with the group / covariate information (see attached), but when I convert this file to a text file it works fine. Is a text file the preferred format?

With the csv file, it seems like it cannot read the header of the file (there is also a screenshot of the file attached), as it gives the error {'name': ['Not a valid string.']}. The column names above do not match the column names in the file, whereas when a text file is used, the column names are correct. Do you have any ideas on why this could be the case? Please let me know if anything is unclear.

Screenshot_csvfile
Screenshot_error

Add support for lesion maps

  • Verify that fMRIPrep can deal with lesions

  • Add an option to specify lesion maps in the user interface
  • Automatically load them from BIDS
  • Pass lesion maps through to fMRIPrep

Use SBRef files when available

Should be easy to just pass them through to fmriprep. Should we add a user interface prompt, or only import them from BIDS datasets?

Hangs on cluster after submission

What happened?
I run the interactive halfpipe and then submit it to the cluster as an array job. It always hangs at the same place near the beginning and does not progress further. Eventually after several hours or a day, I quit the jobs.

Is there an error message?
It does not seem to produce an error message. This was happening on release beta 3 for me as well, but randomly one day I tried and it actually ran.

What is the exact command that you used to run Halfpipe?
singularity run \
  --cleanenv \
  --bind /:/ext Halfpipe_latest.sif \
  --workdir ${WORKDIR} \
  --only-run \
  --execgraph-file /ext/mnt/BIAC/duhsnas-pri.dhe.duke.edu/morey_biac/Lab/Duke/execgraph.19_chunks.7e70725e.pickle.xz \
  --only-chunk-index ${SGE_TASK_ID} \
  --nipype-n-procs 2 \
  --verbose --watchdog

Logfiles.zip This contains spec.json, log.txt, err.txt, and Cluster log outputs from two different tries.

Which settings did you use?
Please provide the contents of the spec.json file from your Halfpipe working directory. Note that this file contains all information entered in the user interface. If the error that you are reporting occurred in the user interface, the file may not exist.

Slice Timing - import from file

What happened?
I got an error when I try to use "import from file" for slice timing parameters

Is there an error message?
Error that it must be one of the choices sequential increasing, etc.

This is the version I downloaded right after the original fix for even/odd slice timing mix ups.
Screen Shot 2021-02-22 at 10 15 07 AM

Tuple index out of range

What happened?
In order to test if there was something weird on our Isilon server that caused the problems with the log file and locks, we tried to test with a working directory on our other server. I still ran the halfpipe image from the usual server, but the only thing that changed was that the working directory is on another server.

Is there an error message?
Halfpipe finds the functionals and anatomicals with the path to the new server. Right after typing the task name I get an error that says "tuple index out of range." The 19 scans are all the exact same as previous tests.

What is the exact command that you used to run Halfpipe?
singularity run --cleanenv --bind /:/ext Halfpipe_latest.sif --use-cluster

Which settings did you use?
Working directory is /mnt/BIAC/munin.dhe.duke.edu/Data/Morey/Lab/Duke.
Halfpipe image is located: /mnt/BIAC/duhsnas-pri.dhe.duke.edu/morey_biac/Lab/pipeline

Does the halfpipe image need to be on the same server as the working directory? Googling the error message doesn't give very specific issues.
log (1).txt
err (1).txt
Screen Shot 2020-10-12 at 10 16 34 AM

Detect different `--bind` options from within HALFpipe when generating submit script

  • Parse /proc/self/mountinfo

Doesn't contain sufficient information, because mounts are carried over relative to the device, not the host's file system root

singularity3 --bind /home/lea:/home/fmriprep 
2540 2521 259:2 /home/lea /home/fmriprep 

docker --volume /work/charite:/ext 
583 567 0:40 /charite /ext rw,nodev,relatime - zfs zfs/work rw,xattr,posixacl

singularity2 --bind /mnt/raid5/data/lea:/ext
927 912 8:18 /data/lea /ext rw,nosuid,nodev,relatime master:34 - ext4 /dev/sdb7 rw,data=ordered

singularity2 --bind /mnt/mbServerData/leadata:/ext
927 912 0:52 / /ext rw,nosuid,nodev,relatime master:321 - nfs 192.168.1.1:/volume1/share/leadata 
  • We can detect whether we are on the same device as the root by looking at which device /etc/hosts is mounted from (which should be the root device, except in Docker, where it depends on which storage type is used), and then comparing it to the device that backs fs_root. If these are not the same, and we have --use-cluster, show a warning that we cannot infer the correct --bind. Otherwise, pass the correct --bind options to the generated submit script.

  • Re-write resolve function, cache information in singleton

Fix field map logic

  • Import IntendedFor from BIDS
  • Do not allow matching a field map to a scan from a different session

Cannot delete nipype folder in latest version

What happened?
After running halfpipe, I usually delete the nipype folder before copying over data to our storage server.

Is there an error message?
I get a bunch of messages that it cannot be deleted due to a read-only filesystem.

What is the exact command that you used to run Halfpipe?
rm -rf nipype

Which settings did you use?
Since this was the first batch I ran the latest halfpipe version with, I wanted to see if something changed at all in the code that would do something with that folder. I am also asking my cluster admin about that server and deleting the folder. Would anything have changed in the newest pipeline version that would do that?

Setting `spec.json` `global_settings.skull_strip_algorithm` to `none` does not work

How can I pass options for fmriprep through halfpipe? I received some data that is already brain-extracted and didn't realize so the outputs are wacky. I wanted to add the command "--skull-strip-t1w skip" for fmriprep but am not sure how I can do that. I tried just adding it to the singularity command but had an error. Is there a way to do this?

singularity run \
  --cleanenv \
  --bind /:/ext \
  /mnt/BIAC/duhsnas-pri.dhe.duke.edu/morey_biac/Lab/pipeline2/halfpipe_latest.sif \
  --workdir /ext/mnt/BIACnfs/duhsnas-pri.dhe.duke.edu/morey_biac/Lab/Masaryk1 \
  --only-run \
  --execgraph-file /ext/mnt/BIACnfs/duhsnas-pri.dhe.duke.edu/morey_biac/Lab/Masaryk1/execgraph.30_chunks.f7947966.pickle.xz \
  --only-chunk-index ${SGE_TASK_ID} \
  --nipype-n-procs 2 \
  --skull-strip-t1w skip \
  --nipype-omp-nthreads 8 \
  --verbose

Slice timing issue with Anterior to Posterior

I was running data from a site that specified their slice timing direction as Anterior to Posterior in beta 6. I chose that option in the halfpipe user interface, however it found 44 scans with I-S direction. I continued to try to run and got an error:

[2020-12-28 14:56:56,0915] [halfpipe ] [ERROR ] Exception: The 'slice_encoding_direction' trait of a TShiftInputSpec instance must be 'k' or 'k-', but a value of 'j-' <class 'str'> was specified.

I'm not sure if that is due to some of the scans possibly not being A-P or not.

Also, how can I tell the direction from header information? I am double checking what sites have told me but I cannot find that information from using something like fslhd to check the header.

Thank you.

Additional unit tests

  • NiftiheaderLoader parsedescrip
  • ingest QCDecisionMaker
  • ui with monkeypatching calamities app/view classes
  • logging
  • workflow init_execgraph
  • stats design matrix
  • stats missing data handling

Beta 5 Crash report resultdicts

It looks like Beta 5 ran fine for me, but I did get a crash report for each subject labeled "resultsdicts." I am still sorting through all of the files to check outputs but thought you might like to see the crash logs for this error. I attached one crash file but I have one for each subject.
crash-20201106-150423-ch186-make_resultdicts-c34bed6a-b1f3-466a-a08d-792cee643099.txt

Also, there are some weird files in the folder all ending with ~.EDU. I assume from the new logging/locking fix.
Are they safe to delete?
Screen Shot 2020-11-06 at 3 44 21 PM

Inside the file just says something like this with a number:
"/ext/mnt/munin/Morey/Lab/Dukerest/.log.txt.lock|blade14.dhe.duke.edu|11945|2478797660148851099"

zran_read returned error

What happened?
When I'm in the Halfpipe interface and load in the site's nifti files, I get this error.

Is there an error message?
err_tours.txt

What is the exact command that you used to run Halfpipe?
singularity run --cleanenv --bind /:/ext halfpipe_latest.sif --use-cluster
Which settings did you use?

Please provide the contents of the spec.json file from your Halfpipe working directory. Note that this file contains all information entered in the user interface. If the error that you are reporting occurred in the user interface, the file may not exist.
Does not exist yet.

Slice timing correction issues

What happened?
I get an error when doing slice timing correction. I always get warnings that say "Unexpected nifti slice_duration of 58.823528 ms in header for file "/ext/mnt/BIAC/munin.dhe.duke.edu/Data/Morey/Lab/Duke/func/39523.nii.gz" for all the scans. I am wondering if there is an issue due to these scans having TR and other info in milliseconds instead of seconds. I have to manually change the TR in the interface to 2.0 instead of 2000 that halfpipe automatically read. From the crash.txt it seems like the slice timing info in the nifti is causing these crashes.

Is there an error message?

(ignore the tuple index error that is still at the top of the file from earlier tries).
crash-20201012-161003-ch186-slice_timing_correction-8c3fc253-43a3-44f6-8acf-9c1280f667c3.txt
err.txt

What is the exact command that you used to run Halfpipe?

Which settings did you use?
This is what is in the crash file. I am assuming the slice timing was automatically read from the nifti and needs to be divided by 1000.

slice_encoding_direction = k
slice_timing = [58.82353125, 176.47059375, 294.11765625, 411.76471875, 529.41178125, 647.05884375, 764.70590625, 882.35296875, 1000.00003125, 1117.64709375, 1235.29415625, 1352.94121875, 1470.58828125, 1588.23534375, 1705.88240625, 1823.52946875, 1941.17653125, 0.0, 117.6470625, 235.294125, 352.9411875, 470.58825, 588.2353125, 705.882375, 823.5294375, 941.1765, 1058.8235625, 1176.470625, 1294.1176875, 1411.76475, 1529.4118125, 1647.058875, 1764.7059375, 1882.353]
tpattern =
tr = 2.0s
