
aroma's Introduction

Multi-echo ICA (ME-ICA) Processing of fMRI Data

⚠️ PLEASE NOTE This code base is currently unmaintained. No new feature enhancements or bug fixes will be considered at this time.

We encourage prospective users to instead consider tedana, which maintains and extends many of the multi-echo-specific features of ME-ICA.

For fMRI processing more generally, we refer users to AFNI or fMRIPrep.

This repository is a fork of the (also unmaintained) Bitbucket repository.

Dependencies

  1. AFNI
  2. Python 2.7
  3. numpy
  4. scipy

Installation

Install Python and other dependencies. If you have AFNI installed and on your path, you should already have an up-to-date version of ME-ICA on your path. Running meica.py without any options will check for dependencies and let you know if they are met. If you don't have numpy/scipy (or appropriate versions) installed, I would strongly recommend using the Enthought Canopy Python Distribution. Click here for more installation help.

Important Files and Directories

  • meica.py : a master script that performs preprocessing and calls the ICA/TE-dependence analysis script tedana.py
  • meica.libs : a folder of utility functions for TE-dependence analysis, denoising, and anatomical-functional co-registration
  • meica.libs/tedana.py : performs ICA and TE-dependence calculations

Usage

Assume the fMRI data are named rest1_e1.nii.gz, rest1_e2.nii.gz, rest1_e3.nii.gz, etc., and the anatomical is mprage.nii.gz.

meica.py and tedana.py have a number of options which you can view using the -h flag.

Here's an example use:

meica.py -d rest1_e1.nii.gz,rest1_e2.nii.gz,rest1_e3.nii.gz -e 15,30,45 -b 15s -a mprage.nii --MNI --prefix sub1_rest

This means:

-d rest1_e1.nii.gz,rest1_e2...   the 4-D time series datasets (comma-separated list, one dataset per TE) from a multi-echo fMRI acquisition
-e 15,30,45   the echo times in milliseconds
-b 15s   drop the first 15 seconds of data for equilibration
-a ...   a "raw" MPRAGE with the skull
--MNI   warp the anatomical to MNI space using a built-in high-resolution MNI template
--prefix sub1_rest   the prefix for the final functional output datasets, i.e. sub1_rest_....nii.gz

Again, see meica.py -h for handling other situations such as: anatomical with no skull, no anatomical at all, applying FWHM smoothing, non-linear warp to standard space, etc.

Click here for more info on group analysis.

Output

  • ./meica.rest1_e1/ : contains preprocessing intermediate files. Click here for detailed listing.
  • sub1_rest_medn.nii.gz : 'Denoised' BOLD time series after: basic preprocessing, T2* weighted averaging of echoes (i.e. 'optimal combination'), ICA denoising. Use this dataset for task analysis and resting state time series correlation analysis. See here for information on degrees of freedom in denoised data.
  • sub1_rest_tsoc.nii.gz : 'Raw' BOLD time series dataset after: basic preprocessing and T2* weighted averaging of echoes (i.e. 'optimal combination'). 'Standard' denoising or task analyses can be assessed on this dataset (e.g. motion regression, physio correction, scrubbing, blah...) for comparison to ME-ICA denoising.
  • sub1_rest_mefc.nii.gz : Component maps (in units of \delta S) of accepted BOLD ICA components. Use this dataset for ME-ICR seed-based connectivity analysis.
  • sub1_rest_mefl.nii.gz : Component maps (in units of \delta S) of ALL ICA components.
  • sub1_rest_ctab.txt : Table of component Kappa, Rho, and variance-explained values, plus a listing of component classifications. See here for more info.

For a step-by-step guide on how to assess ME-ICA results in more detail, click here

Some Notes

  • Make sure your datasets have slice-timing information in the header. If unsure, specify the --tpattern option to meica.py. Check the AFNI documentation for 3dTshift to see the slice-timing codes.
  • For more info on T2* weighted anatomical-functional coregistration click here
  • FWHM smoothing is not recommended. tSNR boost is provided by optimal combination of echoes. For better overlap of 'blobs' across subjects, use non-linear standard space normalization instead with meica.py ... --qwarp

aroma's People

Contributors

eurunuela, fabianp, gitter-badger, tsalo, vinferrer


aroma's Issues

Integration test artifacts are not being stored

Summary

This stems from #46 (comment). Basically, the path we use in the CircleCI config for store_artifacts isn't where the integration tests are writing their outputs to, which means we can't access those outputs.

Next Steps

  1. Figure out the appropriate output directory to use in the integration test.
  2. Use that path.
  3. Ensure that store_artifacts is also pointing to that directory.
  4. Check that artifacts are accessible.
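
Steps 2 and 3 can be sketched in the CircleCI config; the step names, test command, and /tmp/aroma-artifacts path below are placeholders, and the key point is just that the test output directory and store_artifacts point at the same place:

```yaml
# Sketch only: make the integration tests and store_artifacts agree
# on one directory so the outputs become accessible artifacts.
- run:
    name: Run integration tests
    command: pytest -m integration --basetemp=/tmp/aroma-artifacts
- store_artifacts:
    path: /tmp/aroma-artifacts
```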

Seaborn is not installed as dependency

Summary

I was trying out the plotting and found that seaborn was not installed on my computer. I checked, and it is not listed in setup.cfg.

Additional Detail

Next Steps

  • Add seaborn to the installation dependencies.
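
A minimal sketch of the fix in setup.cfg, assuming dependencies are declared under the standard setuptools `[options] install_requires` key:

```ini
# setup.cfg sketch: declare seaborn alongside the existing dependencies
[options]
install_requires =
    seaborn
```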

Incorporate license-required notification text

I am not all that familiar with the requirements of the Apache 2.0 license (the one used by the main ICA-AROMA repo), but we can modify and redistribute the code, with some caveats. We need to incorporate information about the changes we have made into the source code and retain the original license. I'm a little fuzzy on the specific details, but we should review the license to figure out what exactly we need to do.

Enforce BIDS Derivatives-compatible ICA inputs

We should restructure the ICA outputs that ICA-AROMA ingests to be BIDS Derivatives-compatible. In order to maintain support for MELODIC, I think we need a new function that converts MELODIC outputs to BIDS Derivatives format. The main ICA-AROMA functions would then just use BIDS Derivatives outputs.

Update badges to new repo

Summary

Badges currently point to old repo. Need to update path to new repo.

Additional Detail

Next Steps

Support native-space inputs

Currently, ICA-AROMA only supports standard-space inputs (MNI152 with 2 mm³ voxels, I believe). In order to maximize compatibility with other ICA classification methods (e.g., tedana), we need to support native-space inputs as well.

Here are some blockers on this:

  1. The thresholds in the classification procedure are hardcoded and linked to a specific standard space template.
  2. We need methods to derive our masks of interest (brain, edge, csf, and out-of-brain) in native space. I think we need to assume there are tissue probability maps available.
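
Point 2 could start from something like the sketch below, assuming tissue probability maps are available as arrays; the function name and the 0.95 cutoff are illustrative, not AROMA's actual thresholds, and a real edge mask would also need morphological operations:

```python
import numpy as np

def native_masks(gm, wm, csf, prob=0.95):
    """Derive rough native-space masks from tissue probability maps.

    gm/wm/csf are 3-D tissue probability arrays on the same grid.
    Sketch only: thresholds are illustrative, and the edge mask
    (which needs morphology) is omitted.
    """
    brain = (gm + wm + csf) > 0.5        # crude brain mask
    csf_mask = (csf >= prob) & brain     # high-confidence CSF voxels
    out_of_brain = ~brain                # everything outside the brain
    return brain, csf_mask, out_of_brain
```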

Brainhack Donostia 2021

Summary

This issue aims to host a list of tasks/goals for Brainhack Donostia 2021.

As of the start of Brainhack Donostia 2021, we have 11 issues and 5 PRs open. I have added labels to the open issues to indicate the necessary effort, the impact, and the priority of closing them. Feel free to adjust them if they do not seem realistic.

There are also a few issues labeled Good First Issue that should be quite straightforward to get done:

#3 , #42 and #44 (this one actually has PR #45 open).

At the same time, we should try to close the PRs we have open, especially #22 , #28 and #46 . I believe it would be great if we could merge these 3 PRs.

What do you think @tsalo @handwerkerd ?

Generate a benchmark dataset for integration tests

I'd suggest pulling out the MELODIC node of fMRIPrep as run by CircleCI on ds005 (because this dataset is heavily downsampled, so the computational burden is not terrible).

fMRIPrep should also generate the necessary TPMs in native space.

Generate a benchmark dataset for the features module

Steps:

  1. Modify the ICA-AROMA code to make sure the intermediate results are not wiped out.
  2. Run that derived ICA-AROMA code locally with FSL installed.
  3. Create a zipfile of the intermediate results, including all the masks generated along the way and the conversion (resampling) of the initial BOLD into standard space. Make sure all the masks that feature extraction requires are kept in standard space. Include in the zip the dataframe of features you obtain (could be done together with #31).
  4. Write unit tests of the features module using these masks.

Produce BIDS Derivatives-compatible outputs

We have a set of outputs of interest from AROMA, including:

  • Denoised 4D image, in native and standard space (possibly)
  • Metrics (feature scores)
  • Component time series
  • Component maps, in native and standard space (possibly)
  • Classifications (included with feature scores in "classification_overview.txt")
  • Figure

We should figure out how best to make those outputs BIDS Derivatives-compatible.

At minimum, I think we can use filenames that are basically BIDS-ish, like what I propose in ME-ICA/tedana#574. This means using entities and suffixes that match BIDS convention, minus the "source entities" from the original files (e.g., sub, ses, run).

I don't know if we want to output the classifications/metrics as a json (as in tedana) or a tsv. A tsv would be easier to read...

Recognize contributions

Summary

We should recognize the contributions made to aroma.

Next Steps

  • Add the all contributors bot to update the README.

Accept arrays in feature functions

Summary

To reduce the number of temporary files we may need to write, we should allow the feature functions to accept arrays in addition to files.

Additional Detail

Next Steps

  1. Check datatypes of inputs to feature functions. If str, load the file. If array, use that directly.
  2. Update the docstrings.
  3. Add tests for both types of inputs.

Remove non-Python dependencies

The current version uses fslmaths for a number of steps that could very easily be done with nilearn or nibabel + numpy.

The only difficult part that I can think of is handling MELODIC's mixture modeling. We can either (1) assume the MELODIC data are in BIDS format already and work from there or (2) try to "implement" the mixture modeling method with a Python-based ICA algorithm.
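
As a sketch of how little is needed, an fslmaths threshold-and-binarize call (`-thr <t> -bin`) reduces to two numpy operations; per the fslmaths usage text, `-thr` zeroes voxels below the threshold and `-bin` sets the remaining nonzero voxels to 1:

```python
import numpy as np

def threshold_binarize(data, thr):
    """Replicate fslmaths `-thr <thr> -bin` on an in-memory array."""
    out = np.where(data < thr, 0.0, data)  # -thr: zero anything below thr
    return (out != 0).astype(np.int8)      # -bin: nonzero voxels become 1
```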

Generate a benchmark dataframe for the `predict` function

Steps:

  1. Run the original ICA-AROMA locally with FSL installed
  2. Patch in a new classifier function where the input dataframe is written out to hard-disk, as well as the list of motion/no-motion labels.
  3. Add the dataframe to this repo
  4. Write a unit test of the predict function using this dataframe + labels

Add contributing guidelines and code of conduct

Summary

We should add contributing information and a code of conduct to make things easier for new contributors.

Additional Detail

We can base our information on tedana, which is very welcoming.

Next Steps

  1. Add contributing guidelines and code of conduct, adapted from tedana.

Move testing data into OSF

Summary

We could make the package much lighter by moving the data we use for testing into an OSF repository.

Next Steps

  • Create an OSF repo for aroma under ME-ICA.
  • Upload files under tests/data to the OSF repo.
  • Write a function to download files for testing in conftest.py.
  • Remove the tests/data directory.

Operate on tabular data using pandas

Summary

In a couple of locations, tabular data (e.g., feature scores and classifications) are operated on as numpy arrays and are written to files using basic _io.TextIOWrapper.write(). We should use pandas, which will clean the code up considerably and will also increase confidence in comparing separate arrays.
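
A sketch of the pandas approach (the column names here are illustrative, not necessarily the exact headers used in classification_overview.txt):

```python
import pandas as pd

# Illustrative columns: per-component feature scores and the
# motion/no-motion classification, written as one tab-separated table.
df = pd.DataFrame(
    {
        "component": [1, 2, 3],
        "max_RP_corr": [0.85, 0.12, 0.40],
        "classification": ["rejected", "accepted", "accepted"],
    }
)
df.to_csv("classification_overview.txt", sep="\t", index=False)
```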

Add FSL image for testing

Summary

I understand our objective is not to use FSL; however, I think at some point we need to set up a test that certifies that our AROMA and FSL's AROMA are equivalent. I therefore propose finding a Docker image able to run the integration test we have now, so that it is easier to create this compatibility test later.

Next Steps

Test on Python 3.8 and 3.9

Summary

We should add unit test jobs for Python 3.8 and 3.9. I'm not sure if integration tests are necessary.

Next Steps

  1. Add the new jobs to the CircleCI config file.
  2. Run the tests to ensure that they pass.

CLI error: folder does not exist

Summary

Running aroma from the command line raises a "The folder does not exist" error.

Additional Detail

Command I ran:

aroma -i sub-pixar123_task-pixar_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz -tr 2 -out out -mc mc.tsv

Error I got:

usage: aroma [-h] -o OUT_DIR (-i IN_FILE | -f IN_FEAT) [-mc MC] [-a AFFMAT] [-w WARP] [-m MASK]
             [-tr TR] [-den {nonaggr,aggr,both,no}] [-md MEL_DIR] [-dim DIM] [-ow] [-np]
aroma: error: The folder  does not exist!

Next Steps

The error is raised by the following line in `aroma/cli/aroma.py`:

options = _get_parser().parse_args(argv)

Sphinx documentation

Summary

Do we want to develop Sphinx-based documentation?

Additional Detail

We can build our documentation on a ReadTheDocs site, but I'm not sure if we want to do that in this repository or in a new repo that wasn't created for a demonstration.

Next Steps

  1. Copy documentation structure from an existing library (e.g., tedana, NiMARE, or phys2bids).
  2. Adapt documentation to fit AROMA.
  3. Set up RTD site.

Too many parallel targets

AFAICT, we have a lot of hands, but they are scattered across many objectives:

Should we define some list of priorities? Maybe using the project's milestones or a "Project" with the kanban columns?

It feels like we are trying to pin down too many targets. WDYT?

Leverage seaborn in figure creation

Summary

The classification plot code uses seaborn, but doesn't take advantage of the available features. We should review the plotting code and identify places where it can be improved.

Additional Detail

An example: instead of using seaborn's hue argument, the classification plot code duplicates the same lines for the accepted and rejected components. This also forces the code to add fake rows in cases where all components are either rejected or accepted.

Next Steps

  1. Review the classification plot code for unnecessary duplication.
  2. Fix up seaborn usage.

Transfer ownership to ME-ICA organization

Summary

This came up in today's tedana devs call. We think that it might make it easier to continue development of the AROMA refactor if it's under the ME-ICA organization.

Change testing docker image

Summary

We should change the docker image for the testing since we are not using fsl anymore.

Additional Detail

Next Steps

  1. Choose a new image (miniconda?)
  2. Implement the necessary changes in config.yml
  3. Check that the tests pass on the new image
