
Introduction to dMRI


An introduction to diffusion Magnetic Resonance Imaging (dMRI) analysis in Python.

Why Python?

Python is rapidly becoming the standard language for data analysis, visualization, and automated workflow building. It is free, open-source software that is relatively easy for new programmers to pick up. In addition, Python packages such as Jupyter let you keep an interactive journal of your analysis, which is what we'll be using in this workshop. Jupyter notebooks keep a record of all the steps in your analysis, enabling transparency and easy code sharing.

Another advantage of Python is that it is maintained by a large user base, and anyone can easily make their own Python packages for others to use. As a result, there is a very large codebase you can take advantage of for your neuroimaging analysis: from basic statistical analysis, to brain visualization tools, to advanced machine learning and multivariate methods!

About the Lesson

This lesson teaches:

  • What diffusion Magnetic Resonance Imaging is
  • How dMRI data is organized within the BIDS framework
  • What the standard preprocessing steps in dMRI are
  • How local fiber orientation can be reconstructed using dMRI data
  • How dMRI can provide insight into structural white matter connectivity

Episodes

| Topic | Time (min) | Episode | Question(s) |
| --- | --- | --- | --- |
| Introduction to Diffusion MRI data | 30 | 1. Introduction to Diffusion MRI data | How is dMRI data represented? What is diffusion weighting? |
| Preprocessing dMRI data | 30 | 2. Preprocessing dMRI data | What are the standard preprocessing steps? How do we register with an anatomical image? |
| Local fiber orientation reconstruction | 30 | 3. Local fiber orientation reconstruction | What information can dMRI provide at the voxel level? |
| | 30 | 3.1 Diffusion Tensor Imaging (DTI) | What is diffusion tensor imaging? What metrics can be derived from DTI? |
| | 30 | 3.2 Constrained Spherical Deconvolution (CSD) | What is Constrained Spherical Deconvolution (CSD)? What does CSD offer compared to DTI? |
| Tractography | 30 | 4. Tractography | What information can dMRI provide at the long-range level? |
| | 30 | 4.1 Local tractography | What input data does a local tractography method require? Which steps does a local tractography method follow? |
| | 30 | 4.1.1 Deterministic tractography | What computations does deterministic tractography require? How can we visualize the streamlines generated by a tractography method? |
| | 30 | 4.1.2 Probabilistic tractography | Why do we need tractography algorithms beyond the deterministic ones? How is probabilistic tractography different from deterministic tractography? |

Contributing

We welcome all contributions to improve the lesson! Maintainers will do their best to help you if you have any questions, concerns, or experience any difficulties along the way.

We'd like to ask you to familiarize yourself with our Contribution Guide and have a look at the more detailed guidelines on proper formatting, ways to render the lesson locally, and even how to write new episodes.

Please see the current list of issues for ideas for contributing to this repository. For making your contribution, we use the GitHub flow, which is nicely explained in the chapter Contributing to a Project in Pro Git by Scott Chacon. Look for the tag good_first_issue. This indicates that the maintainers will welcome a pull request fixing this issue.

Maintainer(s)

Current maintainers of this lesson are

Authors

A list of contributors to the lesson can be found in AUTHORS.

License

Instructional material from this lesson is made available under the Creative Commons Attribution (CC BY 4.0) license. Except where otherwise noted, example programs and software included as part of this lesson are made available under the MIT license. For more information, see LICENSE.

Citation

To cite this lesson, please consult CITATION.

Contributors

bradley-karat, dependabot[bot], fmichonneau, jhlegarreta, josephmje, kaitj, ostanley, tobyhodges, zkamvar


sdc-bids-dmri's Issues

Add a pre-commit hook to format the Markdown files

Adding a pre-commit hook to ensure that the Markdown files have a consistent format and follow the Carpentries style would be a great addition, and would allow developers to concentrate on the contents rather than the formatting.

Ideally, this would use a tool adopted or recommended Carpentries-wide, with the same configuration file.

Automatically generate the Markdown episodes from the Jupyter Notebooks?

The maintenance burden of keeping all contents synchronized across the episode Markdown files, the Jupyter notebooks, and the solution Jupyter notebooks is considerable. Whenever a change is made to an episode, inadvertently leaving one of the three copies behind is quite likely.

It may be worthwhile to explore how feasible it is to use Jupytext to automatically (i.e. via its command-line interface) convert the notebooks into Markdown, and whether the layout/formatting looks as nice as when writing natively in Markdown. But that might come with some other challenges and might require some work:

  • The Jekyll commands (e.g. layout, includes, etc.) might need to be added manually to the generated files, or some way found to add what is needed automatically.
  • The Carpentries Markdown style recommends line breaks at 100 characters to keep a consistent style and improve readability. It is not clear whether that can be automated or enforced if the Markdown files are generated automatically.
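As a sketch of how the 100-character guideline could be checked automatically (a simple stand-in for a real pre-commit hook; the function name and exact behavior are illustrative assumptions, not an existing tool):

```python
def long_lines(path, limit=100):
    """Return (line_number, length) pairs for lines exceeding `limit` characters.

    A hook could run this over the staged Markdown files and fail the commit
    if any pairs are returned.
    """
    with open(path) as f:
        return [
            (i, len(line.rstrip("\n")))
            for i, line in enumerate(f, start=1)
            if len(line.rstrip("\n")) > limit
        ]
```

In practice, an off-the-shelf Markdown linter would likely be preferable to a custom script, since it could also enforce the rest of the style.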

FIX: Update Binder link

As @kaitj noticed, the current Binder link is out of date and points to my fork of the repository. I think this must have happened while we were transferring the repository from the CONP repo to mine and then to the Carpentries incubator.

I will update the link but I also noticed that we switched from using the gh-pages branch to build the website to the master branch. I have no preference on which branch to stick with. But if we're keeping the master branch, should we get rid of the gh-pages branch to avoid any confusion?

Fix LaTeX equations

According to carpentries/carpentries-theme@399bf2f, LaTeX maths should be rendered appropriately. However, this lesson's LaTeX syntax is not being rendered as expected:

Related issues and PRs
carpentries/styles#296
carpentries/styles#573
carpentries/styles#592

BUG: GitHub Actions Cache

First noticed in a different repository, but looking at the logs, it doesn't look like the cache is actually being used during the GitHub Actions runs. Not a breaking issue, but probably something we want to look into.

Tensor Imaging (Notebook 3)

Looking to also update Notebook 3 on Diffusion Tensor Imaging (DTI). This notebook should highlight what DTI is, why it's important, and its pros and cons.

Main things to update:

  • Update with images / processing from ds000221
  • Better explain different quantitative metrics (e.g. FA, MD, AD, RD)

A separate notebook should be included for higher-order / more current models used in research.

Add images for notebook 1

Add figures showing different diffusion-weighted images to illustrate differences in directional sensitivity.

Add a pre-commit hook to validate the Jupyter notebooks

At times, we manually edit the raw Jupyter notebooks (as in #137 (comment)) and break them, making them effectively unusable.

Although the CI tests in place would potentially detect such cases, this is not ideal; we would like the CI tests to spend their time executing the cells, rather than exiting because the file cannot be parsed.

Adding a pre-commit hook that ensures the files can at least be parsed could be a solution for this.
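A minimal sketch of such a parse check, assuming the hook runs a small Python script over the staged .ipynb files (the function name is illustrative; a real hook might instead run nbformat's schema validator for a stricter check):

```python
import json


def notebook_parses(path):
    """Return True if the file is valid JSON with a top-level `cells` list.

    Notebooks are plain JSON, so this lightweight sanity check needs no
    packages beyond the standard library.
    """
    try:
        with open(path) as f:
            nb = json.load(f)
    except (json.JSONDecodeError, OSError):
        return False
    return isinstance(nb, dict) and isinstance(nb.get("cells"), list)
```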

01 b-val and b-vec paragraph

> "In addition to the diffusion sensitive images two more files are collected as part of a diffusion dataset these are called b-val and b-vec files [...]"

Right now it sounds like those are your imaging data somehow.

Combine solution and non-solution notebooks

I was playing around with the Exercise2 extension for Jupyter. It seems really easy to set up and I think it would help us combine the solution and non-solution notebooks into a single file.

Here's a demonstration below. The solution can be revealed by clicking a button. You can also include a cell below the solution where the learner is meant to enter their own code. Here, you could still include fill-in-the-blank-style content and have pytest skip those cells when running the tests.

[Screen recording demonstrating the solution-reveal button]

WDYT? @kaitj @jhlegarreta

Make verbatim/code and typeface style consistent

Verbatim/code and typeface style is not consistent across the episodes. It should be made consistent by choosing one style:

  • HTML <code> vs. Markdown backticks vs. double backticks for verbatim/code markup.
  • Star (*) vs. underscore (_) and their double counterparts for italic/bold faces.

Improve CSD lesson theory

Improve the CSD lesson theory: better explain the relationship between the spherical harmonics (SH) representation and the spherical deconvolution (SD) methods.

Rework notebooks

To do:

  • Add a more complex diffusion model (CSD?) as notebook 4
  • Make tractography notebook 5

BUG: OSF ds000221 eddy missing subjects

Related to the file storage: I just noticed that 5 of the processed subjects are missing from the eddy step. It may have been something related to the OSF push. Making a note here as a reminder to update the OSF.

ENH: Update Intro to Diffusion episode

I'm working on an update to the Intro to Diffusion episode (1). Here's an initial render of it: http://www.nipreps.org/nipreps-book/dwi/02-intro_dmri.html

Still needs more work but what I'd really like to include is:

  • add an overview of different acquisition schemes (single-shell, multi-shell, DSI)
  • introduce bvec orientation, allude to how it could affect tractography
  • show metadata in the .json file that would be important for pre-processing (some of this is also discussed in the Preprocessing episode but maybe we could also introduce it here)

Proposing new OSF repository

If we move to using NeuroLibre to host this content, we will be using Repo2Data to download the pre-processed data from OSF. Repo2Data cannot use the fetch command to download specific files and instead uses the clone command to download all of the files from an OSF repository.

Can we migrate the existing ds000221_sub-010006.zip file to its own OSF repository?

@kaitj @jhlegarreta

pybids leading dot warning

As of pybids v0.14.0, file extensions will need to include the leading dot ('.') when using the 'extension' entity. PR #169 dismisses a FutureWarning by including:
bids.config.set_option('extension_initial_dot', True)
but this won't be necessary once pybids is upgraded.

Add a figure to the camera setup discussion

Add a figure to the camera setup discussion. The contribution in PR #77 would benefit from a figure. Since creating it might take some time, it is left out of the PR so that the PR can be merged and the figure created and added as time permits.

Cross-referencing #69 (comment) and #77 (comment).

The figure can be added just after the pitch, roll and yaw are introduced with a text like:

The following figure illustrates these concepts and how they are related to the
anatomical orientations.

![3D scene camera viewpoint concepts and anatomical views]({{ base }}/fig/discuss/camera_viewpoint_concepts_anatomy.png){:class="img-responsive"}

BUG: `GitHub Actions` Test Failure

The good news is that the previously set-up GitHub Actions now run. The bad news is that the workflow fails when trying to install pkg-resources==0.0.0 from requirements.txt.

Something to look into and fix to have valid tests!

Preprocessing ds000221 (Notebook 2)

Perform preprocessing for the new dataset ds000221. Previous preprocessing for ds000030 made use of existing software and bash scripting prior to using DIPY for processing. We would like to show some common preprocessing steps (possibly excluding steps more specific to certain datasets/groups), and should also discuss existing software that helps with these steps.

To do:

  • Brain masking
  • Distortion correction (e.g. TOPUP)
  • Eddy correction
  • Registration with T1w
  • Convert to carpentries format

Things to consider:

  • Denoising
  • Unringing
  • Gradient nonlinearity (would not be done, but could point to resources)

Happy to hear some thoughts about what should be included or if anything is missing with the preprocessing.

DOC: Convert Episode 1 to Jupyter notebook

A couple of things to update for episode 1:

  • Need to update the subject in the episode from 010002 to 010006 (which is actually in the subset we are using)
  • Also need to update exercises to use a different subject in the subset
  • The Jupyter notebook (located in code) should be updated to reflect the changes for the new dataset; these changes were only made to the episode

Tractogram Image

The tractogram image cannot be visualized through FURY. Tested with TrackVis, which was able to save an image with ~500,000 streamlines.

Add a pre-commit hook to clean notebook outputs

To avoid inadvertently committing and merging the output of executing the Jupyter notebooks, we should incorporate a pre-commit hook that clears the output cells, for example using the following command:

jupyter nbconvert --ClearOutputPreprocessor.enabled=True --clear-output *.ipynb

The above requires installing the nbconvert package. Other solutions that do not require any additional package might exist.
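One such stdlib-only alternative is sketched below: notebooks are plain JSON, so the outputs can be cleared without installing nbconvert (the function name is illustrative):

```python
import json


def clear_outputs(path):
    """Clear code-cell outputs and execution counts in a .ipynb file in place.

    Notebooks are plain JSON, so this needs no packages beyond the
    standard library.
    """
    with open(path) as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    with open(path, "w") as f:
        json.dump(nb, f, indent=1)
```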

Cross-referencing for example #109.

Fix the figure captions

The figure captions are not being rendered correctly, e.g. Streamline propagation differential equation in
https://carpentries-incubator.github.io/SDC-BIDS-dMRI/local_tractography/index.html or
Camera axis orientations in https://carpentries-incubator.github.io/SDC-BIDS-dMRI/discuss/index.html.

The figure index shows the Markdown image title for each image properly:
https://carpentries-incubator.github.io/SDC-BIDS-dMRI/figures/index.html

but those are not visible in the episodes, even when hovering over the image.

ModuleNotFoundError: No module named 'utils'

I get an error when running `from utils.visualization_utils import generate_anatomical_volume_figure`.
How can I fix it?
Before that, I ran `from dipy.tracking import utils`, which works fine.
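A possible fix, assuming the local `utils` package sits in the repository's code/ directory next to the notebooks (the directory name is an assumption based on the episode descriptions above):

```python
import os
import sys

# Hypothetical fix: the lesson's `utils` package lives next to the notebooks
# (in the repository's code/ directory), so that directory must be on
# sys.path before the import. `dipy.tracking.utils` is a different, installed
# module, which is why it imports fine while the local one does not.
code_dir = os.path.abspath("code")
if code_dir not in sys.path:
    sys.path.insert(0, code_dir)

# Now the local import should resolve:
# from utils.visualization_utils import generate_anatomical_volume_figure
```

Alternatively, starting Jupyter from the directory that contains `utils/` has the same effect, since the current working directory is on the import path.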

Update figures

With the slice orientations fixed (#121), figures will need to be updated accordingly. Episodes that make use of the utils call include:

  • CSD
  • Deterministic tractography
  • Probabilistic tractography

Provide better theoretical background across the lesson

After today's dry-run, I feel the lesson would greatly benefit from a few improvements: additional figures and expanded explanations would help instructors deliver a clearer message on some key aspects. Especially if time does not allow following the episodes by typing all commands, this would also help summarize an episode's contents and let learners better grasp the underlying ideas.

Specifically, it might be beneficial to:

  • Add an isotropic vs. anisotropic/random walk picture in the introduction episode (without overlapping with or detracting from the eigenvalues/eigenvectors picture in the DTI episode), probably accompanied by a better explanation.
  • Improve the explanations/add a short theory section about the dMRI physics and what it adds to a regular MRI (main magnetic field, RF pulses, gradients, briefly mentioning the sequences, hopefully with some pictures, briefly mentioning b-tensor encoding, QTI, etc., or adding a picture that shows the tensors and sequences/trajectories). It may be worth looking at the sMRI lesson's acquisition and modalities episode to avoid repetition, to assume some aspects are already known, and to build bridges across the two lessons.
  • Add the definition of the b-value, including its formula. This may require briefly explaining where it comes from/showing the formulas around it. It can also lead to linking these to the local models, EAP, FOD, etc.
  • Improve the explanation of what the B0 volume is.
  • Improve the explanation about different sources of noise/distortion, hopefully adding pictures to show what such distortions look like/how to distinguish them (magnetic susceptibility, Eddy currents, magnetic field inhomogeneities, ghost artifacts, motion).
  • Add a picture of how the FA changes with (an)isotropy.
  • Mention the asymmetric ODFs, and why only symmetric ODFs are used customarily.
  • Add pictures of the local model glyphs.
  • Add an actual picture that shows more clearly where a DTI model would fail (e.g. a crossing fiber).
  • Add a picture that clearly shows the difference between deterministic and probabilistic tracking.
  • Add further extra contents about downstream tasks/derivatives (tractography enhancement, bundling, tractometry, etc.).
  • Add some extra contents about microstructure models.

It would be best to address these in separate PRs so that contributions can be made more easily.

Related to #28 and #37.
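As a reference for the b-value definition requested above: for a standard pulsed-gradient spin-echo (Stejskal–Tanner) sequence, the b-value is commonly written as

```latex
b = \gamma^{2} G^{2} \delta^{2} \left( \Delta - \frac{\delta}{3} \right)
```

where $\gamma$ is the gyromagnetic ratio, $G$ the diffusion gradient amplitude, $\delta$ the gradient pulse duration, and $\Delta$ the time between the onsets of the two gradient pulses.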

Add exercise statements and solutions notebooks

The introduction and DTI episodes each have an exercise so that learners can demonstrate the acquired skills; however, the exercises are only written in the Markdown files, and they are missing from the Jupyter notebooks.

The exercise instructions should be added to the notebooks and the solutions should be included in the solution notebooks.

Fix the coronal and sagittal views of the peaks/tractograms

Visualizing the three orthogonal views of the peaks and the tractograms, it can be seen that the sagittal right view is actually a sagittal left view, and that the coronal view is flipped upside down (not sure whether it is anterior or posterior).

Note that we have used a sagittal right view in the matplotlib scalar maps, so sticking to that view seems the most consistent choice.

Adjusting the roll, yaw and pitch parameters will probably do the trick.

Increase consistency across episodes

Episodes should be made consistent:

  • Some Jupyter notebooks still contain the output cells (e.g. introduction or DTI). This might be interesting for the solutions, to have a reference of what the output should look like (although it should be in the Markdown files as well), but I'd say it is not for the raw notebooks that will be used for teaching.
  • We should maybe decide whether the Jupyter notebooks will contain the static images. I would remove them. This saves maintenance work (until an automatic method that works flawlessly is found, #85) and reduces the risk of discrepancies between the episode Markdown and the notebooks.
  • Similarly, some episodes (e.g. DTI) do not contain the instructions to save the figures, whereas others do (e.g. CSD, tracking). Although this is not strictly relevant for learners, it is useful for developers; otherwise, they need to manually add the commands every time a change to the figures is needed.
  • We should define values that are used repeatedly (e.g. the subject ID, the FA threshold, etc.) as variables initialized in a single place. This eases maintenance and reduces the possibility of introducing bugs.
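A sketch of the single-place-initialization idea (the module name and the FA value are illustrative assumptions; only the subject ID appears elsewhere in the lesson):

```python
# Hypothetical shared-constants module (e.g. code/lesson_config.py); episodes
# would import from here instead of hard-coding values.
SUBJECT_ID = "010006"   # subject from the ds000221 subset used in the lesson
FA_THRESHOLD = 0.2      # illustrative FA stopping threshold; actual value may differ

# In an episode notebook:
# from lesson_config import SUBJECT_ID, FA_THRESHOLD
```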
