
Multi-echo ICA (ME-ICA) Processing of fMRI Data

โš ๏ธ PLEASE NOTE This code base is currently unmaintained. No new feature enhancements or bug fixes will be considered at this time.

We encourage prospective users to instead consider tedana, which maintains and extends many of the multi-echo-specific features of ME-ICA.

For fMRI processing more generally, we refer users to AFNI or fMRIPrep.

This repository is a fork of the (also unmaintained) Bitbucket repository.

Dependencies

  1. AFNI
  2. Python 2.7
  3. numpy
  4. scipy

Installation

Install Python and other dependencies. If you have AFNI installed and on your path, you should already have an up-to-date version of ME-ICA on your path. Running meica.py without any options will check for dependencies and let you know if they are met. If you don't have numpy/scipy (or appropriate versions) installed, I would strongly recommend using the Enthought Canopy Python Distribution. Click here for more installation help.
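The startup check can be approximated with a short sketch. The `check_dependencies` helper below is hypothetical (not meica.py's actual code); it only verifies that numpy and scipy can be imported and reports their versions:

```python
import importlib

def check_dependencies(modules=("numpy", "scipy")):
    """Map each module name to its version string ('unknown' if the module
    has no __version__), or to None if the module cannot be imported."""
    found = {}
    for name in modules:
        try:
            mod = importlib.import_module(name)
            found[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            found[name] = None
    return found

versions = check_dependencies()
missing = [name for name, ver in versions.items() if ver is None]
if missing:
    print("Missing dependencies: " + ", ".join(missing))
else:
    print("All Python dependencies found: " + str(versions))
```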

Important Files and Directories

  • meica.py : a master script that performs preprocessing and calls the ICA/TE-dependence analysis script tedana.py
  • meica.libs : a folder that includes utility functions for TE-dependence analysis for denoising and anatomical-functional co-registration
  • meica.libs/tedana.py : performs ICA and TE-dependence calculations

Usage

Suppose the fMRI data are named rest_e1.nii.gz, rest_e2.nii.gz, rest_e3.nii.gz, etc., and the anatomical is mprage.nii.gz.

meica.py and tedana.py have a number of options which you can view using the -h flag.

Here's an example use:

meica.py -d rest1_e1.nii.gz,rest1_e2.nii.gz,rest1_e3.nii.gz -e 15,30,45 -b 15s -a mprage.nii --MNI --prefix sub1_rest

This means:

-e 15,30,45   the echo times in milliseconds
-d rest_e1.nii.gz,rest_e2...   the 4-D time-series datasets (comma-separated list, one dataset per TE) from a multi-echo fMRI acquisition
-a ...   a "raw" mprage with a skull
-b 15s   drop the first 15 seconds of data for equilibration
--MNI   warp the anatomical to MNI space using a built-in high-resolution MNI template
--prefix sub1_rest   prefix for the final functional output datasets, i.e. sub1_rest_....nii.gz
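To make the -b option concrete: with a seconds-style value like 15s, the number of discarded volumes depends on the TR. The helper below is hypothetical (not part of meica.py) and assumes seconds are converted by dividing by the TR and rounding up:

```python
import math

def volumes_to_drop(spec, tr):
    """Convert a '-b'-style spec to a number of volumes to discard.

    Hypothetical helper, not meica.py's actual logic: a trailing 's' means
    seconds, otherwise the value is taken as a volume count. Seconds are
    converted by dividing by the repetition time (TR) and rounding up.
    """
    if spec.endswith("s"):
        return math.ceil(float(spec[:-1]) / tr)
    return int(spec)

print(volumes_to_drop("15s", 2.0))  # with TR = 2.0 s: ceil(15 / 2.0) = 8 volumes
```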

Again, see meica.py -h for handling other situations such as: anatomical with no skull, no anatomical at all, applying FWHM smoothing, non-linear warp to standard space, etc.

Click here for more info on group analysis.

Output

  • ./meica.rest1_e1/ : contains preprocessing intermediate files. Click here for a detailed listing.
  • sub1_rest_medn.nii.gz : 'Denoised' BOLD time series after: basic preprocessing, T2* weighted averaging of echoes (i.e. 'optimal combination'), ICA denoising. Use this dataset for task analysis and resting state time series correlation analysis. See here for information on degrees of freedom in denoised data.
  • sub1_rest_tsoc.nii.gz : 'Raw' BOLD time series dataset after: basic preprocessing and T2* weighted averaging of echoes (i.e. 'optimal combination'). 'Standard' denoising or task analyses can be performed on this dataset (e.g. motion regression, physio correction, scrubbing, etc.) for comparison to ME-ICA denoising.
  • sub1_rest_mefc.nii.gz : Component maps (in units of \delta S) of accepted BOLD ICA components. Use this dataset for ME-ICR seed-based connectivity analysis.
  • sub1_rest_mefl.nii.gz : Component maps (in units of \delta S) of ALL ICA components.
  • sub1_rest_ctab.nii.gz : Table of component Kappa, Rho, and variance explained values, plus listing of component classifications. See here for more info.

For a step-by-step guide on how to assess ME-ICA results in more detail, click here.
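The 'optimal combination' mentioned above is a T2*-weighted average of the echoes. A minimal sketch of one standard weighting from the multi-echo literature, w_n ∝ TE_n · exp(−TE_n / T2*); the exact scheme in meica.libs/tedana.py may differ:

```python
import numpy as np

def optimal_combination_weights(tes, t2s):
    """Per-echo weights for T2*-weighted ('optimally combined') averaging.

    Uses the weighting w_n proportional to TE_n * exp(-TE_n / T2*), a standard
    choice in the multi-echo literature; the scheme in meica.libs/tedana.py may
    differ. `tes` and `t2s` must share units (e.g. milliseconds).
    """
    tes = np.asarray(tes, dtype=float)
    w = tes * np.exp(-tes / t2s)
    return w / w.sum()

tes = [15.0, 30.0, 45.0]  # echo times from the usage example above (ms)
weights = optimal_combination_weights(tes, t2s=30.0)  # assume voxel T2* = 30 ms
# combined = (weights[:, None] * echo_data).sum(axis=0) for per-echo time series
```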

Some Notes

  • Make sure your datasets have slice-timing information in the header. If unsure, specify a --tpattern option to meica.py. Check the AFNI documentation for 3dTshift for the slice-timing codes.
  • For more info on T2*-weighted anatomical-functional coregistration, click here.
  • FWHM smoothing is not recommended; the tSNR boost is provided by the optimal combination of echoes. For better overlap of 'blobs' across subjects, use non-linear standard-space normalization instead with meica.py ... --qwarp
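As an illustration of the slice-timing codes, here is a sketch of the acquisition order implied by AFNI's alt+z pattern (interleaved, ascending, starting at slice 0); consult the 3dTshift documentation for the authoritative definitions:

```python
def slice_order_altplusz(n_slices):
    """Slice indices in acquisition order for AFNI's 'alt+z' tpattern:
    even-indexed slices bottom-up, then odd-indexed slices bottom-up.
    A sanity-checking sketch only; 3dTshift's docs are authoritative."""
    return list(range(0, n_slices, 2)) + list(range(1, n_slices, 2))

print(slice_order_altplusz(6))  # -> [0, 2, 4, 1, 3, 5]
```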

ddmra's People

Contributors: julioaperaza, tsalo


ddmra's Issues

NaNs in QC metrics lead to opaque errors

The QCRSFC and high-low analyses end up with no values after the moving-average calculation, because the analysis values are all NaNs and the moving average drops NaNs.

 Traceback (most recent call last):
  File "/home/data/nbc/misc-projects/Salo_PowerReplication/code/experiment-2/05_ddmra.py", line 124, in <module>
    run_ddmra_analyses(project_dir, participants_file, target_file_patterns, confounds_pattern)
  File "/home/data/nbc/misc-projects/Salo_PowerReplication/code/experiment-2/05_ddmra.py", line 92, in run_ddmra_analyses
    qc_thresh=0.2,
  File "/home/data/nbc/misc-projects/Salo_PowerReplication/ddmra/ddmra/workflows.py", line 210, in run_analyses
    smoothing_curves.loc[smoothing_curve_distances, "scrubbing"] = scrub_smoothing_curve
  File "/home/data/nbc/misc-projects/Salo_PowerReplication/code/conda_env/lib/python3.7/site-packages/pandas/core/indexing.py", line 719, in __setitem__
    indexer = self._get_setitem_indexer(key)
  File "/home/data/nbc/misc-projects/Salo_PowerReplication/code/conda_env/lib/python3.7/site-packages/pandas/core/indexing.py", line 660, in _get_setitem_indexer
    return self._convert_tuple(key, is_setter=True)
  File "/home/data/nbc/misc-projects/Salo_PowerReplication/code/conda_env/lib/python3.7/site-packages/pandas/core/indexing.py", line 785, in _convert_tuple
    idx = self._convert_to_indexer(k, axis=i, is_setter=is_setter)
  File "/home/data/nbc/misc-projects/Salo_PowerReplication/code/conda_env/lib/python3.7/site-packages/pandas/core/indexing.py", line 1257, in _convert_to_indexer
    return self._get_listlike_indexer(key, axis)[1]
  File "/home/data/nbc/misc-projects/Salo_PowerReplication/code/conda_env/lib/python3.7/site-packages/pandas/core/indexing.py", line 1314, in _get_listlike_indexer
    self._validate_read_indexer(keyarr, indexer, axis)
  File "/home/data/nbc/misc-projects/Salo_PowerReplication/code/conda_env/lib/python3.7/site-packages/pandas/core/indexing.py", line 1374, in _validate_read_indexer
    raise KeyError(f"None of [{key}] are in the [{axis_name}]")
KeyError: "None of [Float64Index([19.313207915827967, 19.339079605813716, 19.390719429665317,\n                19.4164878389476,  19.44222209522358, 19.467922333931785,\n              19.519221295943137, 19.544820285692065, 19.621416870348583,\n                19.6468827043885,\n              ...\n              142.89506639488994,    142.89856542317,  142.9160592795645,\n              142.93355099485913, 142.95104056983985, 142.95453822806746,\n              142.97202523570826, 142.97552238058094, 143.01048912579805,\n              143.05942821079637],\n             dtype='float64', name='distance', length=11910)] are in the [index]"
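The failure mode can be reproduced with a small sketch (the `moving_average` below is a toy stand-in, not the package's actual function): when every analysis value is NaN, dropping NaNs leaves nothing to average, so the smoothing curve's distance index is empty and the later .loc assignment has no matching labels, producing the KeyError.

```python
import numpy as np

def moving_average(distances, values, window=5):
    """Toy stand-in for the workflow's moving average (the real ddmra code
    differs): average `values` over a sliding window after dropping NaNs."""
    keep = ~np.isnan(values)
    distances, values = distances[keep], values[keep]
    if values.size < window:
        # all-NaN input (or too few points) yields empty curves
        return np.array([]), np.array([])
    kernel = np.ones(window) / window
    smoothed = np.convolve(values, kernel, mode="valid")
    return distances[window - 1:], smoothed

dists = np.linspace(19.3, 143.0, 50)
all_nan = np.full(50, np.nan)  # analysis values that are all NaN
curve_d, curve_v = moving_average(dists, all_nan)
print(curve_d.size)  # 0 -> no distances left to index, hence the KeyError
```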

Support direct comparisons between derivatives

What if we used a bootstrapping procedure to estimate the variances of the intercept and slope values? We could then directly compare intercepts and slopes between derivatives, right?

Within the moving average, we use 1000 points. At the intercept point (35mm), we could bootstrap the 1000 edges' correlation coefficients that would be used for the average to estimate the variance of the average, rather than resampling across samples (although I guess that could work too?).

Which would be better? Bootstrapping samples (runs) and then applying the moving average as-is, or bootstrapping edges within the moving average?
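The second option can be sketched as follows (`bootstrap_window_mean` is hypothetical; the 1000-edge window size is taken from the discussion above):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_window_mean(values, n_boot=2000, rng=rng):
    """Bootstrap the edges within a moving-average window: resample the
    edge-wise correlation values with replacement and return the bootstrap
    estimate of the variance of the window mean. A sketch only -- the real
    windowing scheme lives in the ddmra workflow."""
    values = np.asarray(values, dtype=float)
    means = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(values, size=values.size, replace=True)
        means[i] = sample.mean()
    return means.var(ddof=1)

# e.g. 1000 edge correlations within the 35 mm intercept window (simulated)
edge_corrs = rng.normal(loc=0.1, scale=0.05, size=1000)
var_of_mean = bootstrap_window_mean(edge_corrs)
# intercepts from two derivatives could then be compared with a z-test
# using these bootstrap variances
```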

Deploy to PyPI

I'll use the same Action that we've used in tedana and mapca.

Add tests

I'm thinking unit tests for the utility and analysis functions, along with an integration test for the workflow.
