hackathon2023's Issues

HeuDiConv

Title

Bring HeuDiConv to the next level!

Short description and the goals for the OHBM BrainHack

I plan to contribute by working on the Heudiconv tool, an essential utility in the neuroimaging community that converts DICOM datasets to BIDS (Brain Imaging Data Structure) format.
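
For illustration, here is a minimal sketch of what a HeuDiConv heuristic module can look like; heuristics are plain Python modules defining infotodict(), and the protocol names and BIDS templates below are hypothetical examples, not part of the project:

```python
def create_key(template, outtype=("nii.gz",), annotation_classes=None):
    if not template:
        raise ValueError("Template must be a valid format string")
    return template, outtype, annotation_classes


def infotodict(seqinfo):
    """Map DICOM series to BIDS-named outputs (hypothetical protocol names)."""
    t1w = create_key("sub-{subject}/anat/sub-{subject}_T1w")
    rest = create_key("sub-{subject}/func/sub-{subject}_task-rest_bold")
    info = {t1w: [], rest: []}
    for s in seqinfo:
        if "mprage" in s.protocol_name.lower():
            info[t1w].append(s.series_id)
        elif "rest" in s.protocol_name.lower():
            info[rest].append(s.series_id)
    return info
```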

Goals:

  • Enhancement of Heudiconv tool: The primary goal is to address various existing enhancement requests for the Heudiconv tool. By refining the tool's functionality, we can streamline the conversion process and improve the usability of Heudiconv for researchers around the world.

    • This might also include enhancing BIDS tools, e.g. providing helpers to rename or delete BIDS files.
  • Collaboration and knowledge exchange: Collaborating with others to improve Heudiconv will foster a productive exchange of ideas and knowledge. The aim is to learn from others while contributing to the community's growth and development.

  • Improving documentation: A well-documented tool is an easily accessible tool. By focusing on improving Heudiconv's documentation, the goal is to make the tool more approachable and useful to both new and existing users.

  • Increasing community engagement: The project aims to engage more members of the neuroimaging community with the Heudiconv tool. By working on enhancements, we can encourage more researchers to use and contribute to the tool in the future.

Link to the Project

https://github.com/nipy/heudiconv

Image for the OHBM brainhack website

https://avatars.githubusercontent.com/u/233707?s=48&v=4

Project lead

Yaroslav O. Halchenko @yarikoptic

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Python
Having experience with DICOM to BIDS conversion would be a plus

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

heudiconv

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

ET2BIDS - EyeTracker to BIDS

Title

EyeTracker to BIDS

Short description and the goals for the OHBM BrainHack

This project aims to encode eye-tracker data into BIDS, perhaps including it as a new feature of the phys2bids Python library. phys2bids is a powerful library designed to format physiological files in BIDS.
By leveraging phys2bids and integrating eye-tracker data into BIDS, we will establish a standardized approach for organizing and sharing eye-tracking data obtained in neuroimaging experiments. BIDS provides a simple and widely adopted framework for structuring neuroimaging and behavioral data, saving valuable time spent on data rearrangement or script modification.
Python knowledge is recommended but not required to participate in this hackathon project. Whether you're an experienced Python developer or just starting, your contribution is valuable.

Link to the Project

https://github.com/nipreps/

Image for the OHBM brainhack website

No response

Project lead

Elodie Savary, GitHub: @esavary, discord: esavary

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Python knowledge is recommended for this hackathon project. All levels of experience are welcome, whether you're an experienced Python developer or just starting out.

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

et2bids

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

neurolibre

Title

NeuroLibre reproducible preprints

Short description and the goals for the OHBM BrainHack

The NeuroLibre team led by Agâh Karakuzu just released the beta version of NeuroLibre.

This next-generation publication platform offers complete testing of preprints, including recreation of the computational and data environment, and reproduction of all figures, as part of the preprint publication process. All reproducibility assets are archived alongside a re-executable web version of the preprint upon acceptance.

NeuroLibre is built on GitHub and the popular Jupyter Book project, and supports traditional or MyST notebooks.

The objective of this project is to help people get started creating their first reproducible preprint. We hope that this Brainhack can ignite the first wave of publication of truly reproducible science.

Link to the Project

https://neurolibre.org

Image for the OHBM brainhack website

https://raw.githubusercontent.com/neurolibre/brand/main/png/card_tb.png

Project lead

Pierre Bellec @pbellec (both on github and on Discord)

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

The ideal profile to start a NeuroLibre preprint is to have a work in preparation (or already published) which is supported by a collection of Jupyter Notebooks, and based on open data (at least partially).

You can learn all the rest as you go using our complete documentation: https://docs.neurolibre.org/en/latest/ and the support of our team during the hackathon.

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

The https://neurolibre.org/ reproducible preprint service is now open for beta! Published preprints are re-executable and include all the artefacts needed for reproduction (and it's been tested). Come work on your first NeuroLibre submission during BrainHack!

Short name for the Discord chat channel (~15 chars)

neurolibre

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Physiopy - Documentation of Physiological Signal Acquisition Community Practices

Title

Physiopy - Documentation of Physiological Signal Acquisition Community Practices

Short description and the goals for the OHBM BrainHack

Physiopy is a community formed around developing solutions for working with physiological data in neuroimaging setups. Physiopy meetings feature discussions of suggested practices for setting up and working with physiological data, which we are compiling into documentation for all interested users. This project will continue the compilation of such documentation, including a detailed overview of what physiological data are typically recorded during an fMRI experiment, how these signals are recorded, and how these signals can improve our modeling of fMRI time series data. We also hope to expand our package documentation with tips and strategies on how to collect various forms of physiological data and how to use our packages. Imaging cerebral physiology, whether as a signal of interest or for denoising, is an active field of research, and we hope to encourage all users to get the latest recommendations prior to initiating a new study.

This project does not necessarily require coding skills to join. If you want to take your first step with git/GitHub and documentation, we’re happy to have you on board!

Link to the Project

https://github.com/physiopy/physiopy.github.io

Image for the OHBM brainhack website

https://github.com/physiopy/phys2bids/blob/master/docs/_static/physiopy_logo_1280x640.png?raw=true

Project lead

Sarah Goodale, Github Username: goodalse2019 , Discord Username: sarah goodale#5094
Ines Esteves, Github Username: isesteves, Discord Username: _iesteves
Stefano Moia, Github Username: smoia, Discord Username: smoia

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Excitement for learning the best practices in physiological signal acquisition. We welcome contributions from any skill set and level; this project will predominantly focus on documentation. No prior knowledge of physiological data necessary!
Git: any level of git experience (0-3) is welcome; Markdown knowledge is helpful.

Recommended tutorials for new contributors

Good first issues

Improve documentation for package
Build best practices libraries and summaries for various physiological signals (e.g., respiratory, cardiac, skin conductance, etc.)
Check open source libraries and software similar to what we do
Build list of open source datasets with physiological signals

Twitter summary

Physiopy - documentation of physiological signal acquisition and best practices
physiopy/physiopy.github.io
@s_goodale23 @isesteves @joanacspinto @stemoia
#OHBMHackathon #Brainhack #OHBM2023

Short name for the Discord chat channel (~15 chars)

physiopy-documentation

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

ET2BIDS - EyeTracker to BIDS

Authors

Elodie Savary <[email protected]>
Remi Gau [email protected]
Oscar Esteban [email protected]

Summary

In the realm of neuroimaging, BIDS has become the de facto standard for structuring data. Its adoption has simplified the data management process, enabling researchers to focus on their scientific inquiries rather than the intricacies of data organization [gorgolewski2016]. However, the inclusion of eye-tracking data within the BIDS framework has remained a challenge, often requiring manual data rearrangement. The ET2BIDS project aimed to take a significant step toward incorporating eye-tracking data into BIDS, leveraging the bidsphysio library to facilitate this integration.

Bidsphysio is a versatile tool designed to convert various physiological data types, including CMRR, AcqKnowledge, and Siemens PMU, into BIDS-compatible formats. In particular, it offers a dedicated module called edf2bidsphysio to convert EDF files containing data from an Eyelink eyetracker. An advantageous feature of the bidsphysio library is its Docker compatibility, ensuring smooth cross-platform execution without the need for additional installations.
Before starting our project, a bug had been identified in the Docker image, which hindered the correct execution of the edf2bidsphysio module. Our primary focus was, therefore, to address and resolve this bug in order to enable the utilization of the Docker image for eye-tracking data conversion.
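
For context, the target of such a conversion is BIDS' physiological-recording layout: a headerless, compressed TSV plus a JSON sidecar. A minimal sketch with synthetic data and illustrative column names (not the bidsphysio API):

```python
import json

import numpy as np

fs = 1000.0  # Hz, assumed eye-tracker sampling rate
t = np.arange(0, 2, 1 / fs)
# Synthetic gaze coordinates and pupil size, one row per sample.
samples = np.column_stack([np.cos(t), np.sin(t), np.ones_like(t)])

prefix = "sub-01_task-rest_recording-eyetracking_physio"
np.savetxt(prefix + ".tsv.gz", samples, delimiter="\t")  # auto-gzipped by numpy

sidecar = {
    "SamplingFrequency": fs,
    "StartTime": 0.0,  # seconds relative to the first fMRI volume
    "Columns": ["x_coordinate", "y_coordinate", "pupil_size"],
}
with open(prefix + ".json", "w") as f:
    json.dump(sidecar, f, indent=2)
```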

Results

The ET2BIDS project made significant progress in converting eye-tracking data into BIDS format. We initially processed the test data from the GitHub repository, overcoming the bug that had hindered the edf2bidsphysio module's execution. Moreover, with minor modifications to the module, we successfully processed a dataset acquired by the authors using an Eyelink eye tracker, showcasing bidsphysio's potential versatility in handling various eye-tracking datasets.

Future prospects

The future of eye-tracking data integration into BIDS is evolving as we identify essential fields and metadata necessary for reproducible research. BIDS specifications for eye-tracking data are in development, with expanding guidelines for essential reporting in research studies [e.g., dunn2023].
As these guidelines grow, we will have to adapt bidsphysio to match the evolving BIDS standards, ensuring it converts eye-tracking data in accordance with the latest recommendations.

References (Bibtex)

@Article{gorgolewski2016,
title={The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments},
author={Gorgolewski, Krzysztof and Auer, Tibor and Calhoun, Vince et al.},
journal={Scientific Data},
volume={3},
pages={160044},
year={2016},
doi={10.1038/sdata.2016.44},
url={https://doi.org/10.1038/sdata.2016.44}
}
@Article{dunn2023,
title={Minimal reporting guideline for research involving eye tracking (2023 edition)},
author={Dunn, Michael J. and Alexander, Robert G. and Amiebenomo, Omogbolahan M. et al.},
journal={Behavior Research Methods},
year={2023},
doi={10.3758/s13428-023-02187-1},
url={https://doi.org/10.3758/s13428-023-02187-1}
}

Clinica's image processing pipeline QC

Authors

Matthieu Joulot [email protected]
Ju-Chi Yu [email protected]

Summary

The goal of the project was to evaluate different QC metrics and visuals for pipelines currently existing in Clinica that deal with registration or segmentation. That way, we would be able to find a few good metrics to separate the good images from the bad or moderately good ones, which the user could then check using visuals we would generate for them.

We were able to produce first results for registration, keeping only two metrics: the correlation coefficient and the Dice score computed from HD-BET brain masks. Together, these enable a correct separation of the categories (Figure 1).

Figure 1: Scatterplot of HD-BET Dice probability and correlation ratio between reference and moving images. Red means both metrics are below the threshold, blue or purple means one of the metrics is below the threshold, and green means both metrics are above the threshold.

We will look further into these results, so that we can form a good idea of what to implement in the future version of Clinica that will include QC.
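
For reference, the two retained metrics are straightforward to compute; a minimal sketch with synthetic arrays (not Clinica code):

```python
import numpy as np


def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks (e.g., HD-BET brain masks)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def correlation(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Pearson correlation between reference and moving image intensities."""
    return float(np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1])


rng = np.random.default_rng(0)
ref, moved = rng.random((32, 32, 32)), rng.random((32, 32, 32))
print(dice(ref > 0.5, moved > 0.5), correlation(ref, moved))
```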

References (Bibtex)

@Article{routier2021clinica,
title={Clinica: An open-source software platform for reproducible clinical neuroscience studies},
author={Routier, Alexandre and Burgos, Ninon and D{\'\i}az, Mauricio and Bacci, Michael and Bottani, Simona and El-Rifai, Omar and Fontanella, Sabrina and Gori, Pietro and Guillon, J{\'e}r{\'e}my and Guyot, Alexis and others},
journal={Frontiers in Neuroinformatics},
volume={15},
pages={689675},
year={2021},
publisher={Frontiers Media SA}
}

The BIDS connectivity project - current state and next steps of the BEPs

Title

The BIDS connectivity project - current state and next steps of the BEPs

Short description and the goals for the OHBM BrainHack

Despite BIDS' success, expansion, and description of multiple data modalities, gaps still exist in developing the standard to effectively support the process of scientific results reporting. Among others, this prominently refers to data obtained through and during connectivity analyses. This comprises brain parcellations, connectivity maps, structural and functional connections, major white matter tracts, diffusion signal models, white matter tractograms and tractometry, as well as networks based on dimensionality reduction. Sharing processed data and features in addition to raw and minimally processed data is critical to accelerating scientific discovery, because substantial effort, software and hardware instrumentation, and know-how are required to bring raw data to a usable state. The aim of the present project is to extend the BIDS standard to encompass derivatives resulting from experiments related to macroscopic brain connectivity (U.S. National Institutes of Health NIMH R01-MH126699). During the Brainhack, we would like to

  1. Gather feedback from experts, users, tool developers, i.e., everyone!
  2. Work on the respective BEPs
  3. Convert BEPs from GoogleDocs to GitHub PRs

Link to the Project

https://pestillilab.github.io/bids-connectivity/

Image for the OHBM brainhack website

https://pestillilab.github.io/bids-connectivity/img/logo.svg

Project lead

Peer Herholz, GitHub: peerherholz, discord: peerherholz
Franco Pestilli, GitHub: francopestilli, discord:
Ariel Rokem, GitHub: arokem, discord

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Experience with one of our covered data modalities and respective analysis software packages would be helpful: s/fMRI, dMRI, PET, MEG, i/EEG. Additionally, knowledge of BIDS would come in handy.
However, these are not requirements: we're happy to welcome everyone interested.

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

Interested in Brain connectivity and FAIR data? The BIDS connectivity project might be of interest to you! During the Brainhack we will discuss work on the different connectivity-related BIDS Extension Proposals (BEPs) and welcome support from all interested parties!

Short name for the Discord chat channel (~15 chars)

bids-connectivity

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Neuroimaging Meta-Analyses

Authors

James Kent [email protected]
Yifan Yu [email protected]
Max Korbmacher [email protected]
Bernd Taschler [email protected]
Lea Waller [email protected]
Kendra Oudyk [email protected]

Summary

Introduction

Neuroimaging meta-analyses serve an important role in cognitive neuroscience (and beyond) to create consensus and generate new hypotheses. However, tools for neuroimaging meta-analyses implement only a small selection of analytical options, pigeon-holing researchers into particular analytical choices. Additionally, many niche tools are created and abandoned when the graduate student working on the project graduates and moves on. Neurosynth-Compose/NiMARE are part of a Python ecosystem that provides a wide range of analytical options (with reasonable defaults), so that researchers can make analytical choices based on their research questions, not the tool.
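
As a point of reference, a coordinate-based meta-analysis in this ecosystem can be as short as the following sketch, assuming a coordinates file in NiMARE's JSON format (see the NiMARE documentation for details):

```python
from nimare.dataset import Dataset
from nimare.meta.cbma.ale import ALE

dset = Dataset("dataset.json")  # hypothetical coordinates file
meta = ALE()                    # one of many available estimators
results = meta.fit(dset)
z_img = results.get_map("z")    # unthresholded z-statistic map
z_img.to_filename("ale_z.nii.gz")
```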

To help improve and expand this ecosystem, we worked on several projects:

  • Make Coordinate Based Meta-Regression more efficient/friendly
    • Goal: Increase adoption of a more flexible and sensitive model approach of coordinate-based meta-analysis
  • Improve the tutorial outlining how to use Neurosynth-Compose
    • Goal: Tutorial to increase usage of the website Neurosynth-Compose
  • Change the masking process for Image Based Meta-Analysis
    • Goal: Use more voxels/data during Image Based Meta-Analysis
  • Run topic modeling of abstracts of papers associated with NeuroVault collections
    • Identify groupings of images that are amenable for meta-analysis

Results

Progress was made on all projects.

  • Several bugs and areas of inefficient code were found in Coordinate-Based Meta-Regression, along with notebooks demonstrating usage and issues.
  • Feedback was given on the tutorial to improve clarity and conciseness.
  • An outline of a solution for including more voxels was drafted, with a plan for implementation.
  • Topic modeling identified how images on NeuroVault were distributed.

The improvements made to NiMARE and related tools make neuroimaging meta-analyses more accessible, making it easier to perform these crucial analyses in our field.

References (Bibtex)

No response

ShareStats

Title

Solving the NIH's Underpants Problem with open bibliometrics and OpenAlex.

Short description and the goals for the OHBM BrainHack

We plan to build an open and transparent resource where program officers can obtain information about the data sharing statements of a given paper. The interface will provide aggregation by investigator, institution, grant, etc. It will also provide a mechanism for investigators to submit corrections (PRs).
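
Since the resource will build on OpenAlex, the raw material is a query against the OpenAlex works API; a hedged sketch (real endpoint, but the funder ID below is only an illustrative placeholder):

```python
import requests

resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": "grants.funder:F0000000000",  # placeholder funder ID
            "per-page": 25},
)
resp.raise_for_status()
for work in resp.json()["results"]:
    print(work["id"], work.get("title"))
```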

Link to the Project

https://tokyo.o18s.com/
https://openalex.org/

Image for the OHBM brainhack website

No response

Project lead

Adam Thomas @agt24 Discord:adamtNIH

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

  • Front end (JavaScript)
  • Containers
  • CI
  • Python
  • GitHub

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

sharestats

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Mode-based morphometry (MBM)

Title

Mode-based morphometry (MBM)

Short description and the goals for the OHBM BrainHack

Classical approaches to studying neuroanatomy rely on statistical inferences at individual points (voxels or vertices), clusters of points, or a priori regions of interest (ROIs), and thus they are restricted to a single spatial resolution scale. We have developed an approach, called mode-based morphometry (MBM), that can describe any empirical map of anatomical variations in terms of the fundamental resonant modes (eigenmodes) of brain anatomy. This approach naturally yields a multiscale characterization of the empirical map, affording new opportunities for investigating the spatial frequency content of neuroanatomical variability.

We have developed a toolbox for this approach in Matlab. Depending on the expertise of participants, our goals for BrainHack are to make the toolbox more user-friendly, add more features, and translate it to Python.
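
The core computation is a least-squares projection of an empirical map onto the eigenmodes; a minimal numpy sketch with synthetic data (the toolbox itself is in Matlab):

```python
import numpy as np

n_vertices, n_modes = 10242, 200          # e.g., an fsaverage5-sized surface
rng = np.random.default_rng(0)
modes = rng.standard_normal((n_vertices, n_modes))  # eigenmodes as columns
emp_map = rng.standard_normal(n_vertices)           # empirical map

# Solve map ~ modes @ beta; beta gives each mode's contribution,
# i.e., the map's spatial-frequency content across scales.
beta, *_ = np.linalg.lstsq(modes, emp_map, rcond=None)
recon = modes @ beta
```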

Link to the Project

https://github.com/NSBLab/MBM

Image for the OHBM brainhack website

No response

Project lead

Name: Trang Cao
GitHub: https://github.com/NSBLab
Discord: trangcao.

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Coding skills
They will be tasks for different skills ranging from just starting coding on Matlab, wanting to try GUI on Matlab, to Premier League. Pythoneers are also welcomed.

Non-coding skills
It is not all about coding! Are you someone with a good flair for good visualization? Someone with more experience who can make a good user interface? Someone who knows how to develop good, tidy and clear documentation?

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

MBM

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

fMRIPrep-integrate AFNI left and right flip detection tool into fMRIPrep

Title

fMRIPrep-integrate AFNI left and right flip detection tool into fMRIPrep

Short description and the goals for the OHBM BrainHack

In this hackathon project, we aim to enhance the fMRIPrep pipeline by integrating AFNI's left and right flip detection feature. This tool detects potential left-right flips between subject EPIs and anatomicals, ensuring accurate alignment and analysis of neuroimaging data.
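
At its core, the check compares how well the original versus a left-right flipped EPI aligns to the anatomical; a hedged nibabel sketch of the flipping step only (hypothetical file name, not the AFNI implementation):

```python
import nibabel as nib
import numpy as np

img = nib.load("sub-01_bold.nii.gz")  # hypothetical file
data = np.asanyarray(img.dataobj)
# Flip the first voxel axis while keeping the affine: the image content is
# now mirrored in world space, as in a true L-R flip *if* axis 0 is the
# L-R axis; a robust tool must check the affine's orientation first.
flipped = nib.Nifti1Image(data[::-1], img.affine, img.header)
flipped.to_filename("sub-01_bold_flipped.nii.gz")
```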

Link to the Project

https://www.frontiersin.org/articles/10.3389/fninf.2020.00018/full

Image for the OHBM brainhack website

No response

Project lead

Céline Provins, GitHub: @celprov , discord: cprovins

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Having basic Python knowledge is important, however all levels of experience are welcome. Familiarity with building a workflow using Nipype is a plus but is not required.

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

fmriprep_qa_xyflip

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

BrainViewer: Seamless Brain Mesh Visualization with WebGL

Authors

Florian Rupprecht ([email protected])
Reinder Vos de Wael ([email protected])

Summary

BrainViewer, a novel JavaScript package, is poised to transform how researchers and developers interact with brain meshes by enabling seamless visualisation directly within web browsers. By harnessing the power of ThreeJS, BrainViewer offers a responsive and adaptable viewing experience. Its integration with the JavaScript event system facilitates the creation of intricate and customizable behaviours, enhancing its utility.

BrainViewer is currently in an active phase of development. With our commitment to meeting the demands of our short-term projects, we anticipate the first release of BrainViewer within the coming months. BrainViewer is available at https://github.com/cmi-dair/brainviewer. Upon release, it will be installable through the npm package management system. A live demo that moves based on microphone input is available at https://cmi-dair.github.io/brainviewer-demo/.

Current key features:

GPU Empowerment: BrainViewer harnesses the capabilities of WebGL and WebGPU (depending on device capabilities) to render brain meshes efficiently within web browsers, delivering a visually engaging experience. Notably, our test users have consistently praised the smoothness of interactivity.

Deployment across ecosystems: BrainViewer supports a large number of devices, from large PC screens with mouse and keyboard to compact mobile screens with touch interfaces.

Customizable Behaviours: BrainViewer's integration with the JavaScript event system empowers users to define complex and tailored behaviours, enhancing interactivity and exploration.

References (Bibtex)

No response

Improving surface functionality in Nilearn

Title

Improving surface functionality in Nilearn

Short description and the goals for the OHBM BrainHack

INTRODUCTION

Nilearn is an open-source Python package for fast and easy analysis and visualization of brain images. It provides statistical and machine-learning tools, with instructive documentation and a friendly community. It includes applications such as multi-voxel pattern analysis (MVPA), decoding, predictive modelling, functional connectivity, and brain parcellations. In recent years, we have developed functionality to support working with surfaces, and we now have concrete plans for extending surface support in Nilearn. To this end, we aim to get contributors familiar with the existing surface functionality and working on issues to improve the surface module and surface plotting functions.
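
As a taste of the existing surface functionality, a minimal sketch using Nilearn's fsaverage fetcher and surface plotting (see the Nilearn documentation for the full API):

```python
from nilearn import datasets, plotting, surface

fsaverage = datasets.fetch_surf_fsaverage()          # fsaverage5 meshes
curv = surface.load_surf_data(fsaverage.curv_left)   # curvature map
plotting.plot_surf_stat_map(
    fsaverage.infl_left, curv, hemi="left",
    bg_map=fsaverage.sulc_left, colorbar=True,
)
plotting.show()
```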

GOALS FOR THE BRAIN HACK

Contributors are encouraged to work on issues in the project board by opening pull requests. These include good first issues that will help first-time contributors get started, as well as issues relating to surface functionality. Contributors are also welcome to open issues for relevant bugs or suggestions for enhancement of the current surface functionality.

Link to the Project

https://github.com/nilearn/nilearn

Image for the OHBM brainhack website

https://nilearn.github.io/stable/_static/nilearn-transparent.png

Project lead

Hao-Ting Wang, Github: @htwangtw, Discord: haodareyou
Remi Gau, Github: @Remi-Gau, Discord: remigau
Yasmin Mzayek, Github: @ymzayek, Discord: ymzayek
Elizabeth DuPre, Github: @emdupre, Discord: emdupre#8727

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

We welcome all contributions from various skill sets and levels. This can include opening discussions around improvements to the documentation and/or code base, answering or commenting on questions or issues raised on GitHub and Neurostars, reviewing pull requests, and contributing code.

Recommended tutorials for new contributors

Good first issues

Surface project issues

Centralized on this kanban board: https://github.com/orgs/nilearn/projects/6

Other small issues to open your first pull request

Twitter summary

Get familiar with nilearn's surface data functionality and help us improve it! https://nilearn.github.io/stable/index.html

Short name for the Discord chat channel (~15 chars)

nilearn_surface

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

NiPreps generalizations for diffusion MRI (MRIQC+dMRIPrep)

Title

dmri-nipreps

Short description and the goals for the OHBM BrainHack

We plan to continue expanding NiPreps' support for different modalities. Continuing on that line, we recently released MRIQC with beta support for dMRI data. dMRIPrep has long been in the works, and we expect to make progress on this front too. Finally, we may delve into the eddymotion project, implementing a Gaussian Process model and thereby enhancing the dMRIPrep preprocessing pipeline's capability to handle motion-related artifacts.
We are therefore planning to act on the following lines, depending on the appetite and preferences of brain hackers:

  • MRIQC
    • improving the reportlets of the visual reports, including color FA, anisotropic power map, and/or carpetplot.
  • eddymotion
    • developing new models for diffusion and PET data:
    • Gaussian Process model to allow comparison vs. FSL topup (this line is most likely to directly contribute to DIPY); see the sketch after the links below.
    • PET model (continuing with the work initiated in the 2022 hackathon at OHBM Glasgow)
  • dMRIPrep
    • pipeline, package and containers makeover

https://github.com/nipreps/mriqc
https://github.com/nipreps/eddymotion
https://github.com/nipreps/dmriprep
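
The Gaussian Process line mentioned above boils down to predicting each diffusion-weighted volume from the others, given the gradient directions; a hedged scikit-learn sketch of the idea for a single voxel (synthetic data, not eddymotion's implementation):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
bvecs = rng.standard_normal((60, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)  # unit gradient dirs
signal = np.exp(-2.0 * bvecs[:, 2] ** 2) + 0.01 * rng.standard_normal(60)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(bvecs[1:], signal[1:])     # train on all but one direction
pred = gp.predict(bvecs[:1])      # predict the held-out volume's signal
print(float(pred[0]), signal[0])  # prediction vs. "acquired" value
```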

Link to the Project

https://github.com/nipreps

Image for the OHBM brainhack website

No response

Project lead

Ariel Rokem (GitHub: @arokem, Discord: @arokem) and Teresa Gomez (GitHub: @teresamg)

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

While we encourage participants of all skill levels to join, a good understanding of Python is recommended to contribute to this project. Some knowledge of diffusion MRI would also be helpful.

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

nipreps-dmri

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

PhysioQC: A physiological data Quality Control toolbox with physiopy

Title

PhysioQC: A physiological data Quality Control toolbox with physiopy

Short description and the goals for the OHBM BrainHack

Physiopy is a community formed around developing solutions for working with physiological data in neuroimaging setups. We manage a few physiology-oriented toolboxes to process physiological data, and we would like to add a new toolbox to support quality assurance of physiological data through (automatised) quality control. If you are familiar with MRIQC or AFNI's afni_proc.py quality control, we want to do something similar, but for physiological data.

During previous hackathons and meetings we planned the toolbox, and during these upcoming three days we'll implement a workflow to automatise it and produce textual and image-based outputs, as a starting point for reporting valuable QC information.
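
To give a flavor of the intended outputs, a hedged sketch (synthetic data, not the physioqc API) of basic signal properties such a workflow might report:

```python
import numpy as np
from scipy import signal

fs = 40.0                                   # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(t.size)

metrics = {
    "mean": float(np.mean(trace)),
    "std": float(np.std(trace)),
    "peak_to_peak": float(np.ptp(trace)),
}
freqs, pxx = signal.welch(trace, fs=fs)     # power spectral density
metrics["peak_freq_hz"] = float(freqs[np.argmax(pxx)])
print(metrics)
```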

All contributions are welcome and accepted, from any level of contribution. We follow the all-contributors specification to report contributions, and adopt physiopy's contributors guide and code of conduct.

Link to the Project

https://github.com/physiopy/physioqc

Image for the OHBM brainhack website

https://github.com/physiopy/phys2bids/blob/master/docs/_static/physiopy_logo_1280x640.png?raw=true

Project lead

Stefano Moia, Github Username: smoia, Discord Username: smoia

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

We welcome all contributors and contributions, from any skillset and level.
We are big believers in learning-by-doing!
No prior git knowledge necessary, if willing to learn on the spot!
Basic knowledge of Python, toolbox setup, and/or signal processing is helpful, but not necessary.

Recommended tutorials for new contributors

Good first issues

  • Write simple functions to compute basic signal properties
  • Select data to test the library

Twitter summary

PhysioQC: A physiological data Quality Control toolbox with physiopy @stemoia
#OHBMHackathon #Brainhack #OHBM2023 #Physiopy

Short name for the Discord chat channel (~15 chars)

physiopy-qc

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Issue with "Other" Field in "How did you learn about Brainhack 2023" Answer

Non-Working "Other" Field in "How did you learn about Brainhack 2023" Answer The "other" field in one of the answers in "How did you learn about Brainhack 2023" is currently not functioning properly. This issue needs to be fixed in order to enable users to provide additional information about how they learned about the event.

FSuB-Extractor

Title

FSuB-Extractor

Short description and the goals for the OHBM BrainHack

  1. Begin migration to Pydra or Nipype workflows
  2. Reorganize / rename / BIDS-ify outputs
  3. Make a software container with dependencies
  4. Set up testing / continuous integration workflows on the repository
  5. More beta testers!

Link to the Project

https://github.com/smeisler/fsub_extractor

Image for the OHBM brainhack website

No response

Project lead

Steven Meisler (Americas / Montreal)
https://github.com/smeisler
Discord Login: JazzyVibes#1404
[email protected]

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Familiarity with one or multiple of:

  • Python
  • MRtrix
  • DIPY
  • CIFTI processing
  • FreeSurfer
  • Pydra or Nipype
  • CircleCI
  • Docker/Apptainer/Singularity containerization

Recommended tutorials for new contributors

Good first issues

  • Documentation
  • Beta testing / error reporting

Twitter summary

Hi! The FSuB-Extractor is a tool that enables users to extract and analyze white matter bundles based on their intersection with gray matter functional ROIs. This approach has been used to study functional sub-components of bundles (FSuB) that support certain cognitive and perceptive tasks (e.g., Kubota et al., 2022, Cerebral Cortex).

Right now we have a functioning workflow, but we would like to formalize it by migrating to something like Pydra or Nipype, and by setting up a software container and continuous integration workflow. We would also like outputs that follow BIDS as much as they can (keeping in mind that the BIDS derivatives conventions, including connectivity derivatives, are still being codified). Even if you are inexperienced with contributing to code bases, we would still greatly appreciate feedback from beta testers!

Please contact me if you are interested or have questions!

Short name for the Discord chat channel (~15 chars)

fsub-extractor

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

K-particles (a.k.a. what would happen to brains if we explode k-space?)

Title

K-particles (a.k.a. what would happen to brains if we explode k-space?)

Short description and the goals for the OHBM BrainHack

Description

Join us on an extraordinary visual adventure as we dive deep into the captivating realm of k-space. Through the power of particle-explosion animations, our project aims to shed light on the hidden wonders of magnetic resonance imaging (MRI) by exploring the intricacies of k-space and its relationship to the Fourier transform and brain images. Our ultimate goal is to create an enchanting video that unveils the magic of k-space sampling patterns and sparks curiosity among viewers.

The primary objective of this project is to leverage particle animations to provide insights into the journey of traveling and sampling the k-space. By generating captivating visualizations, we aim to demystify the concept of k-space and the role it plays in MRI, engaging both the scientific community and the general public in a delightful exploration of this fundamental aspect of imaging.
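
The mathematical heart of the animation is small; a hedged numpy sketch of an image, its k-space, and the effect of keeping only the k-space center:

```python
import numpy as np

# A synthetic "brain": a bright disk on a dark background.
y, x = np.mgrid[-64:64, -64:64]
image = (x**2 + y**2 < 40**2).astype(float)

kspace = np.fft.fftshift(np.fft.fft2(image))   # travel to k-space

mask = np.zeros_like(image)                    # keep the central 32x32 block
mask[48:80, 48:80] = 1.0
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
# recon is a blurred disk: low spatial frequencies carry coarse structure,
# while the discarded periphery carries edges and fine detail.
```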

Goals

  • Particle Animation Wizardry: We will create an array of animated particles that represent the journey within the k-space. These particles will travel, interact, and dynamically respond to the sampling process, offering a visually captivating representation of the Fourier transform in action.
  • K-Space Explosion Sampling Patterns: Through manipulation of the animated particles, we will generate various k-space sampling patterns that demonstrate the importance of sampling strategies in MRI. By altering particle behavior, density, and distribution, we will illustrate the impact of different sampling schemes on image quality and resolution.
  • Video Compilation: The generated particle animations and k-space sampling patterns will be compiled into a visually stunning and informative video. This video will showcase the beauty and complexity of the k-space journey, making it accessible to a wide audience. To reach a broad community of researchers and enthusiasts, we will publish the final video on YouTube. This platform will enable us to share the captivating visuals, educate viewers about k-space and the Fourier transform, and spark curiosity and discussions around the fascinating world of MRI.

Link to the Project

https://github.com/ofgulban/k_particles

Image for the OHBM brainhack website

https://drive.google.com/file/d/1r2lLUgqEoLL2gsW9ejSGrOvD6aOli3y0/view?usp=sharing

Project lead

Omer Faruk Gulban, Github: @ofgulban , Discord: ofgulban

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

  • Python
  • ffmpeg

Recommended tutorials for new contributors

Good first issues

  • Generate animations using the example scripts

Twitter summary

TODO

Short name for the Discord chat channel (~15 chars)

k-particles

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Publishing Code in Aperture Neuro

Title

Publishing Code in Aperture Neuro

Short description and the goals for the OHBM BrainHack

Aperture Neuro is a new journal from the OHBM community for the OHBM community.

In a new initiative, Aperture Neuro aims to expand the formats of submissions beyond traditional PDF papers found in conventional journals, by including code submissions.

In the dynamic field of neuroimaging, researchers derive new insights into the brain through the application and development of innovative software and analysis tools. While these insights can be described in conventional PDF papers, many other aspects of researchers' work often do not go through a similar publication procedure, lacking peer review and full citability. These research outputs, collectively known as research objects, hold significant value for the neuroimaging community.
Examples of such research objects include:

  • Code
  • Code wrappers
  • Pipelines
  • Toolboxes
  • Toolbox plugins
  • Code notebooks

Although these research objects are typically not subject to peer review and are not indexed in standardized publication systems, they deserve to be citable in a manner that recognizes the author's contribution and aids in the assessment of their scholarly impact, such as their h-index.

In a special initiative, Aperture Neuro wants to invite members of the neuroimaging community to submit their research objects in the form of code.

This invitation is specifically aimed at code that fulfills the following criteria:

  • High quality
  • Open
  • Useful to the community
  • Not previously published as a peer-reviewed paper

The goal of this hackathon project is to:

  • Identify code that is used in the field of neuroimaging but is not citable via PubMed-listed publications.
  • Build a database of widely used code repositories. This list could be given to the Aperture Editorial Board to invite authors for submissions.
  • Discuss whether the submission guidelines of Aperture Neuro need to be revised to accommodate such code.

Resources:

This project is led by online attendees.

Link to the Project

https://docs.google.com/spreadsheets/d/1eeWxKzl-dCffbMSaciWRC-0wPJwVAD2MtceT7hqEXr0/edit?usp=sharing

Image for the OHBM brainhack website

https://drive.google.com/file/d/1VQR-Mo1lXrSJIXVLQVGSM0dP9SgaOchj/view?usp=sharing

Project lead

Discord handle: renzohuber, Twitter: @layerfMRI

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

No specific skills required

Recommended tutorials for new contributors

Good first issues

Go through the methods sections of the last few papers you have been reading and check whether all the code that was used is cited with a PubMed-listed reference.
If there is code that looks relevant to a wider community and is only referenced via GitHub, add it to the list of valuable code.

Twitter summary

Community discussions on publishing code in the OHBM journal Aperture.
There is lots of good code out there that is waiting to be peer-reviewed and published. Let's identify it and make it citable.
@ApertureOHBM
#OHBMHackathon #Brainhack #OHBM2023

Short name for the Discord chat channel (~15 chars)

Code-Aperture

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Clinica's image processing pipeline QC

Title

Clinica's image processing pipeline QC

Short description and the goals for the OHBM BrainHack

The goal is to establish a QC protocol that includes both metric and visual-based assessment, for the various image processing pipelines available in Clinica.

The project plan would be as follows:

  • First, a round table to discuss the different practices of the participants, which pipelines participants will be working on, and the objectives for the end of the hackathon. Before the end of the round table, two work groups should be established: one working on image quality metrics (IQMs), and the second working on visual QC.
  • Next, we expect the two groups to work independently:
    • The IQM group will select metrics that seem relevant and evaluate them.
    • The visual QC group will set up a few different visual QC proposals (i.e., views) for each pipeline they're studying, and attempt to evaluate their effectiveness.

We expect the outcomes of this hackathon to be a set of IQM candidates and graphical representations that could be included in the QC tools of Clinica.

Link to the Project

https://github.com/aramis-lab/clinica

Image for the OHBM brainhack website

https://www.clinica.run/assets/images/clinica-icon-257x257.png

Project lead

Name: Matthieu Joulot
GitHub: MatthieuJoulot
Discord: Matthieu Joulot

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

The skills we are looking for are:

  • expertise in visual QC of neuro-imaging (structural MRI, diffusion MRI, PET) processing (segmentation, registration, ...)
  • expertise in the use of IQMs for quantitative QC of neuro-imaging (structural MRI, diffusion MRI, PET) processing (segmentation, registration, ...)

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

processing-qc

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

NiPreps: NeuroImaging PREProcessing toolS

Authors

Oscar Esteban

Summary

The NiPreps Community participated as a sponsor of the BrainHack, and seven projects had been proposed ahead of the hackathon: “dMRI-NiPreps,” “Post-fMRIPrep ICA-AROMA BIDS-App,” “Reporting FreeSurfer outcomes with NiReports,” “fMRIPrep-integrate AFNI left and right flip detection tool into fMRIPrep,” “EyeTracker to BIDS,” “Migas Analytics: create additional visualizations,” and “fMRIPrep-Split into separate fit & apply workflows.”

References (Bibtex)

No response

CiftiPy - Better python support for cifti files

Title

CiftiPy

Short description and the goals for the OHBM BrainHack

CIFTI is a neuroimaging file format that flexibly stores vertex and voxel data in the same file. It is the format of choice in the HCP processing pipelines for storing surface maps, but can also be used for cortical parcellations, brain segmentations, structural and functional connectomes, and many other applications.

Currently, there isn't an easy way to handle this format in Python. Nibabel has support, but the interface is unintuitive and bulky.
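
For instance, extracting left-cortex values from a dense scalar file with nibabel today means walking the brain-model axis by hand; a hedged sketch (hypothetical file name) of the boilerplate CiftiPy aims to hide:

```python
import nibabel as nib

img = nib.load("example.dscalar.nii")       # hypothetical CIFTI-2 file
data = img.get_fdata()                      # maps x grayordinates
brain_models = img.header.get_axis(1)       # the brain-model (column) axis
for name, slc, bm in brain_models.iter_structures():
    if name == "CIFTI_STRUCTURE_CORTEX_LEFT":
        left_cortex = data[:, slc]          # values for left-cortex vertices
```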

CiftiPy will be a wrapper around nibabel offering a convenient, numpy-based interface for accessing and manipulating CIFTI files. It will offer convenient methods for indexing CIFTI files by data type, structure, and hemisphere; be easily and transparently viewed in a REPL (like pandas or xarray); and offer methods for common tasks.

Goals

No code has been written yet, but we have a fairly clear plan for the API. The goal will be to implement, test, and document the core interfaces.

Long term

We are prioritizing robustness over number of features, so we're purposefully trying to keep the scope as small as possible. If we get a good cifti interface built, however, it could in principle be extended to support other neuroimaging file types (gifti, nifti), resulting in a generalized, user-friendly nibabel wrapper.

Link to the Project

https://github.com/pvandyken/ciftipy

Image for the OHBM brainhack website

No response

Project lead

Peter Van Dyken - pvandyken#9542
Mohamed Yousif - dunnom8#1502
Mauricio Cespedes Tenorio - mau_cespedes99#7386

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

(Any and none of these skills are welcome! This is more a list of what tech we'll use)

  • Python
  • git
  • Documentation (Sphinx, readthedocs, markdown)
  • CIFTI files (see the CIFTI-2 specification)
  • nibabel
  • Numpy
  • Python testing (pytest, hypothesis)
  • Github actions

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

ciftipy makes it easy to view, index, and manipulate cifti files with python

Short name for the Discord chat channel (~15 chars)

ciftipy

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Pydra - updating tutorials

Title

Pydra - updating tutorials

Short description and the goals for the OHBM BrainHack

We would like to update and improve the Pydra tutorials, including instructions on how to move a Nipype interface to Pydra.

Join us also if you just want to learn about the Pydra project and how to move your interface or workflow from Nipype to Pydra.
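
For readers new to the project, a Pydra task is a decorated Python function; a hedged sketch roughly following the existing tutorials (the API may differ across Pydra versions):

```python
import pydra


@pydra.mark.task
def add_two(x: int) -> int:
    return x + 2


task = add_two(x=3)
result = task()            # run the task
print(result.output.out)   # -> 5
```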

Link to the Project

https://github.com/nipype/pydra

Image for the OHBM brainhack website

https://raw.githubusercontent.com/nipype/pydra/master/docs/logo/pydra_logo.jpg

Project lead

Dorota Jarecka, github: @djarecka, discord @dorota (coming on Thursday)
Ghislain Vaillant, github @ghisvail
Chris Markiewicz, github: @effigies

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Python if you want to work on the code or tutorial

No skills if you just want to run the tutorial

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

pydra

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

DiffSimViz

Title

DiffSimViz

Short description and the goals for the OHBM BrainHack

DiffSimViz (Diffusion Simulator Visualizer) will be a python toolbox with the aim of providing intuitive visualizations of particle diffusion in different media/microstructure. This will ideally include an interface to choose particular media to visualize diffusion within, as well as to change parameters such as diffusion time and membrane permeability. The use of such a visualizer will be didactic, or for researchers to better understand what their diffusion MRI signal might look like in particular microstructural environments.
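
As a starting point, the simplest medium is free (unrestricted) diffusion; a hedged sketch of the kind of particle simulation the toolbox will visualize:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 200, 500
D, dt = 1e-3, 1e-3                    # diffusivity (mm^2/s), time step (s)

# Free 2D Brownian motion: Gaussian steps with variance 2*D*dt per axis.
steps = rng.standard_normal((n_steps, n_particles, 2)) * np.sqrt(2 * D * dt)
paths = np.cumsum(steps, axis=0)

plt.plot(paths[:, :10, 0], paths[:, :10, 1], lw=0.5)   # a few trajectories
plt.gca().set_aspect("equal")
plt.title("Free diffusion (sketch)")
plt.show()
```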

Link to the Project

https://github.com/Bradley-Karat/DiffSimViz

Image for the OHBM brainhack website

No response

Project lead

Bradley Karat, Github: @Bradley-Karat, Discord: brad_karat

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

  • Python
  • Diffusion MRI
  • Visualization generation

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

DiffSimViz

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Boutiques Integration to Datalad

Boutiques Integration to Datalad

Project Description

The idea is to create a datalad plugin for boutiques functionality.

Datalad has a lot of useful functionality already, specifically: rerunning commands or scripts and "installing" containers to a dataset.

Boutiques is a tool for annotating an analysis script. It documents the inputs, outputs, and container to run a specific script. It's basically a .json sidecar for your analysis.
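
For readers unfamiliar with Boutiques, a much-abbreviated descriptor looks roughly like the following (the tool is hypothetical; field names follow the Boutiques schema):

```python
import json

descriptor = {
    "name": "smooth",                       # hypothetical tool
    "description": "Smooth an image",
    "tool-version": "1.0.0",
    "schema-version": "0.5",
    "command-line": "smooth [INPUT]",
    "inputs": [
        {"id": "input", "name": "Input image", "type": "File",
         "value-key": "[INPUT]"},
    ],
    "output-files": [
        {"id": "output", "name": "Smoothed image",
         "path-template": "smoothed.nii.gz"},
    ],
}
print(json.dumps(descriptor, indent=2))
```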

The goal of the extension will be to create a datalad boutiques command that can streamline:

  • installing a Boutiques command / container from Zenodo into a DataLad dataset
  • calling the installed container on data within the dataset
  • providing useful ways of tracking the outputs
  • stringing multiple pipelines together

Link to the Project

https://github.com/bcmcpher/dl-boutiques

Project lead

Brent McPherson (bcmcpher)

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

  • python (both tools are python based)

some experience (or interest in learning the mechanics of)

  • datalad
  • boutiques

Short name for the Discord chat channel (~15 chars)

dl-boutiques

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

fMRIPrep-Split into separate fit & apply workflows

Title

fMRIPrep-Split into separate fit & apply workflows

Short description and the goals for the OHBM BrainHack

fMRIPrep is a large, computationally intensive workflow that can potentially produce an order of magnitude more data than is input. However, many of these derivatives can be deterministically generated by a fairly small subset of "first order" derivatives, such as registration transforms.

This project aims to create a workflow that generates the minimal set of derivatives, as well as workflows for generating reports and other derivatives, such as resampled BOLD series and confound time series. The desired end result is a collection of modular workflows that can be composed either to produce the minimal set of derivatives or the full set that are the current outputs of fMRIPrep.

Link to the Project

https://github.com/nipreps/fmriprep/

Image for the OHBM brainhack website

No response

Project lead

Chris Markiewicz, GitHub: @effigies, discord: Chris Markiewicz (gh: effigies)

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

While we encourage participants of all skill levels to join, a good understanding of Python is recommended to contribute to this project.

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

fmriprep_fit_apply

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Post-fMRIPrep ICA-AROMA BIDS-App

Title

Post-fMRIPrep ICA-AROMA BIDS-App

Short description and the goals for the OHBM BrainHack

The aim of this hackathon project is to create an ICA-AROMA post-fMRIPrep pipeline. fMRIPrep is a robust and user-friendly preprocessing tool for diverse fMRI data. ICA-AROMA, on the other hand, is a data-driven method that automatically identifies and removes motion-related independent components from fMRI data.
Prior to fMRIPrep 23.1, ICA-AROMA was an optional component of fMRIPrep, but it was removed as being out-of-scope, as it can be performed on fMRIPrep outputs.
The project focuses on creating an ICA-AROMA pipeline that accepts fMRIPrep outputs as its inputs, in particular MNI152NLin6Asym-resampled BOLD series. The pipeline will apply SUSAN smoothing, run MELODIC, and finally ICA-AROMA. The pipeline will provide options for users to select specific outputs, including noise components and non-aggressively denoised BOLD series, as well as generate reports.
To join the discussion and contribute to this project, you can find more information in the related issue at nipreps/fmriprep#2936.
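
The non-aggressive denoising step itself is a small piece of linear algebra (the fsl_regfilt approach): fit all MELODIC components at once, then subtract only the noise components' fitted contribution. A hedged numpy sketch with synthetic shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vols, n_voxels, n_comps = 200, 1000, 30
Y = rng.standard_normal((n_vols, n_voxels))   # BOLD: time x voxels
X = rng.standard_normal((n_vols, n_comps))    # MELODIC mixing: time x comps
noise = [2, 5, 7]                             # AROMA-flagged components

beta, *_ = np.linalg.lstsq(X, Y, rcond=None)  # fit all components jointly
Y_clean = Y - X[:, noise] @ beta[noise]       # remove only the noise fit
```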

Link to the Project

https://github.com/nipreps/fmripost-aroma

Image for the OHBM brainhack website

No response

Project lead

Céline Provins, GitHub: @celprov , discord: cprovins

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Having basic Python knowledge is important, however all levels of experience are welcome.

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

fmriprep_aroma

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Cerebro: One tool to view them all

Title

Cerebro: One tool to view them all

Short description and the goals for the OHBM BrainHack

Cerebro Viewer is a Pythonic 3D viewer to visualize and plot brains and neuroimaging data.

It's still in early development, so there's a lot that you could help with:

  • Contributing code: You could help implement parts of Cerebro
  • Testing the current development version: Reporting bugs is the first step to solving them!
  • Documentation: To make the experience better for future users
  • Ideas/suggestions: Tell us what's currently missing and what you'd like us to add next!

Link to the Project

https://github.com/sina-mansour/Cerebro_Viewer

Image for the OHBM brainhack website

https://github.com/sina-mansour/Cerebro_Viewer/raw/main/static/images/screen.png?raw=true

Project lead

Sina Mansour L.
GitHub: sina-mansour
Discord: Sina_Mansour_L
Twitter: @Sina_Mansour_L

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

The skills depend on the type of contribution,

To develop code, intermediate Python knowledge would be needed.

To test the code, beginner Python familiarity is enough.

To write documentation, no particular skill is required but familiarity with Markdown or potential documentation tools can be helpful.

Finally, familiarity with git and GitHub would also be helpful.

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

🧠 Cerebro Viewer is a Python package that aims to become the visualization toolbox a neuroimager needs, in an interactive yet scriptable format.

🌟 Cerebro aims to extend the idea of reproducibility from scripts to brain visualizations.

Short name for the Discord chat channel (~15 chars)

Cerebro

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

SulciLab

Title

Sulci Lab: A collaborative tool for sulcal graph labelling

Short description and the goals for the OHBM BrainHack

SulciLab is a web-based tool to visualize and manually label sulcal graphs generated by BrainVISA/Morphologist.
It allows users to select which graph to visualize, and to create, copy, and share new labelings.

The project is still in early development; it needs user feedback, and a lot of small bugs remain to be fixed.

It also includes a 3D viewer that could be extracted into, or replaced by, a dependency.

Link to the Project

https://github.com/BastienCagna/sulcilab

Image for the OHBM brainhack website

https://drive.google.com/file/d/1CR0aaMDeAP844HfMT4hEO71Kh5LhDYV5/view?usp=sharing

Project lead

Bastien Cagna (BastienCgn#0797)

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Brain Anatomy
Python (Django)
Typescript (React)

Recommended tutorials for new contributors

Good first issues

Feedback on installation
Report any bug
Enhance the UI or user experience

Twitter summary

SulciLab is a web-based tool to visualize and manually label sulcal graphs generated by BrainVISA/Morphologist.

Short name for the Discord chat channel (~15 chars)

SulciLab-23

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Physiopy - Semi-Automated Workflows for Physiological Signals

Title

Physiopy - Semi-Automated Workflows for Physiological Signals

Short description and the goals for the OHBM BrainHack

Physiopy is a community formed around developing solutions to acquire, process, and utilize physiological data in neuroimaging contexts. Physiopy is developing or has developed several physiology-oriented modular toolboxes to this end, including: 1) phys2bids, a toolbox to standardize physiological files in BIDS format; 2) peakdet, a toolbox for automatic detection and manual correction of peaks in physiological data; and 3) phys2denoise, a toolbox to prepare derivatives of physiological data for use in fMRI denoising. Currently, we have no complete workflow encompassing all steps of physiological data processing and model estimation. The goal of this project is to facilitate a unified workflow across these toolboxes for semi-automated physiological signal processing.

During this hackathon, we aim to:

  • Create a semi-automated workflow based on peakdet and phys2denoise to process respiratory and cardiac signals and obtain models of physiological signal variance for neuroimaging analysis (e.g. RVT, HRV, ...)
  • Create a command line interface (CLI) for the workflows
  • Update and upgrade the current codebase of peakdet and phys2denoise, including better harmonization between toolboxes

Extra and optional aims could include:

  • Create a graphical report of the workflows output
  • BIDS-App-lify the workflows: we want to create an entry point to transform the workflow into a BIDS Application
  • Pydra-ify the libraries: to learn the strength of Pydra as a workflow manager, we can create a version of the workflow using Pydra.
  • Add support for eye-tracking and skin conductance data

All contributions are welcome, at any level. We follow the all-contributors specification to report contributions, and adopt physiopy's contributors guide and code of conduct.
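
To make the target concrete, here is a minimal sketch of the kind of processing chain the unified workflow would automate. It uses NeuroKit2 on simulated data; the sampling rate and function choices are illustrative assumptions, not the physiopy API.

```python
# Minimal sketch (not the final physiopy workflow): clean a respiratory
# trace, detect its peaks, and derive an fMRI noise regressor via NeuroKit2.
import neurokit2 as nk

fs = 100  # sampling rate in Hz (assumed)
rsp = nk.rsp_simulate(duration=120, sampling_rate=fs)  # stand-in for recorded data

# Peak detection / signal cleaning (the role peakdet plays in the workflow)
signals, info = nk.rsp_process(rsp, sampling_rate=fs)

# Model of physiological variance (the role phys2denoise plays), e.g. RVT
rvt = nk.rsp_rvt(signals["RSP_Clean"], sampling_rate=fs)
```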

Link to the Project

https://github.com/physiopy/

Image for the OHBM brainhack website

https://raw.githubusercontent.com/physiopy/phys2bids/master/docs/_static/physiopy_logo_small.png

Project lead

Mary Miedema (Montreal), GitHub: m-miedema, Discord: m-miedema
Roza Gunes Bayrak (Americas), GitHub: rgbayrak, Discord: @rgbayrak
Marie-Eve Picard (Montreal), GitHub: me-pic, Discord: Marie-Eve Picard (she/her)#4750

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

We welcome all contributors and contributions, from any skill set and level. Prior work with physiological data would be useful – we’d like to compare how different labs are processing their data. However, we are big believers in learning-by-doing!

No prior git knowledge necessary, if willing to learn on the spot!
Likewise, basic knowledge of python and toolbox set up are helpful, but not necessary.

However, if you are acquainted with bokeh or html reports or pygui or other graphic interfaces, we are looking for you!!!

Recommended tutorials for new contributors

Good first issues

  • Add import support for BIDS-format physiological data
  • Lay out a sample workflow to process physiological data
  • Write documentation about the workflow
  • Write a tutorial on the workflow

Twitter summary

Physiopy - semi-automated workflows for physiological signals
https://github.com/physiopy/
@MaryMiedema @redgreenblues @stemoia
#OHBMHackathon #Brainhack #OHBM2023 #physiopy

Short name for the Discord chat channel (~15 chars)

physiopy-workflows

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Neurobagel

Title

Enable subject level cohort search across BIDS datasets

Short description and the goals for the OHBM BrainHack

Neurobagel’s goal is to make it easy for researchers to annotate their own BIDS data with standardized terms and to provide solutions to make these harmonized metadata searchable.

As a proof of concept, we have done this for a large part of the OpenNeuro MRI data and put the results here: https://github.com/OpenNeuroDatasets-JSONLD. The metadata can be searched here: https://query.neurobagel.org/

We are going to work on projects for folks at any level of experience. Our goals are:

  • Understand better how users want to interact with our tools. Sit down with us, try the query tool or our "Getting started" documentation, and tell us what is easy and what can be improved. We specifically want to hear from people who are interested in a cross-dataset cohort search about how they would like to use Neurobagel in the future.
  • Increase alignment with the BIDS community. Neurobagel annotations are currently stored inside BIDS participants.json files with additional JSON keys. We would like to discuss how we can create these files in a way that makes them maximally useful to people in the wider BIDS ecosystem, and hopefully eventually fully BIDS compatible.
  • Collaborate on automating the processing of OpenNeuro annotations. We would like to have an automatic process that can be triggered (e.g. via some GitHub Action or hook) to pull a BIDS DataLad dataset and run the Neurobagel metadata extractor to create a linked data view of the BIDS dataset that we can search over. All of the steps are already in place but done manually at the moment. We hope to find ways to automate this via GitHub or a different CI (see the sketch below).
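
As a hedged sketch of the currently-manual steps such a CI job would run (the `bagel` command line below is an assumption about the extractor's interface, not a documented invocation):

```python
# Rough sketch: fetch an OpenNeuro dataset with DataLad, then run a
# metadata extraction step on it; intended to eventually run inside CI.
import subprocess
import datalad.api as dl

ds = dl.clone(source="https://github.com/OpenNeuroDatasets/ds000001", path="ds000001")
ds.get("participants.tsv")  # only the lightweight tabular metadata is needed

# Extraction step: the exact subcommand and flags are assumptions
subprocess.run(["bagel", "bids", "--bids-dir", "ds000001"], check=False)
```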

Link to the Project

https://github.com/neurobagel

Image for the OHBM brainhack website

No response

Project lead

Alyssa Dai (in person), GitHub: @alyssadai, Discord: alyssadai#5780
Arman Jahanpour (on Discord), GitHub: @rmanaem, Discord: rmanaem
Sebastian Urchs (in person), GitHub: @surchs, Discord: _surchs

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

User feedback

  • everybody is welcome
  • an interest in cross-dataset cohort search or in dealing with messy data

Discussing semantic annotation in BIDS

  • BIDS (beginner)
  • Experience working with messy (clinical) data
  • A research interest in asking questions across several datasets
  • Interest in data harmonization and metadata

Automating processing of OpenNeuro metadata

  • Datalad
  • git, Github Actions, CI workflows in general
  • Python
  • Docker (nice to have)

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

neurobagel

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

NARPS Open Pipelines

Authors

Affiliations

Boris Clénet*1, Élodie Germani*1, Arshitha Basavaraj2, Remi Gau3, Yaroslav Halchenko4, Paul Taylor5, Camille Maumet1

  1. Univ Rennes, Inria, CNRS, Inserm, France
  2. Data Science and Sharing Team, NIMH, National Institutes of Health, Bethesda, MD, USA
  3. Origami lab, McGill University, Montréal, Québec, Canada
  4. Center for Open Neuroscience, Department of Psychological and Brain Sciences, Dartmouth College, NH, USA
  5. Scientific and Statistical Computing Core, NIMH, National Institutes of Health, Bethesda, MD, USA

Contacts

Boris Clénet [email protected]
Élodie Germani [email protected]
Arshitha Basavaraj [email protected]
Remi Gau [email protected]
Yaroslav Halchenko [email protected]
Paul Taylor [email protected]
Camille Maumet [email protected]

Summary

Introduction

Different analytical choices can lead to variations in the results, a phenomenon that was illustrated in neuroimaging by the NARPS project (Botvinik-Nezer et al., 2020). In NARPS, 70 teams were tasked with analyzing the same dataset to answer 9 yes/no research questions. Each team shared their final results as well as a textual description (COBIDAS-compliant; Nichols et al., 2017) of their analysis.

The goal of NARPS Open Pipelines is to create a codebase reproducing the 70 pipelines of the NARPS project and share this as an open resource for the community.

Results

The OHBM Brainhack 2023 gave us the opportunity to:

  1. Make the repository more welcoming to new contributions:
  • Proof-read and test the contribution process: everyone helped find and fix inconsistencies in the documentation and in the processes related to contributing. PR#66, PR#65, PR#63, PR#64, PR#52, PR#50

  • Create GitHub Actions workflows enabling continuous integration, i.e. testing existing pipelines every time they change. PR#47

  • Develop a new GitHub Actions workflow to detect typos in code comments and documentation at each commit. PR#48

  2. Learn new skills:
  • Learning Nipype: joining the project was an opportunity to start using Nipype.

  3. Advance pipeline reproductions.

In the end, a total of:

  • 6 pull requests were merged and 4 opened;
  • 4 issues were closed and 6 opened.

References (Bibtex)

@article{botvinik2020,
  author  = "Botvinik-Nezer, R. et al.",
  title   = "Variability in the analysis of a single neuroimaging dataset by many teams",
  journal = "Nature",
  year    = 2020
}

@article{taylor2023,
  author  = "Paul A Taylor et al.",
  title   = "Highlight Results, Don't Hide Them: Enhance interpretation, reduce biases and improve reproducibility",
  journal = "NeuroImage",
  year    = 2023
}

@article{nichols2017best,
  title={Best practices in data analysis and sharing in neuroimaging using MRI},
  author={Nichols, Thomas E and Das, Samir and Eickhoff, Simon B and Evans, Alan C and Glatard, Tristan and Hanke, Michael and Kriegeskorte, Nikolaus and Milham, Michael P and Poldrack, Russell A and Poline, Jean-Baptiste and others},
  journal={Nature neuroscience},
  volume={20},
  number={3},
  pages={299--303},
  year={2017},
  publisher={Nature Publishing Group US New York}
}

Simulation of the interplay between brain and behavior

Title

Simulation of the interplay between brain and behavior

Short description and the goals for the OHBM BrainHack

Multiple longitudinal studies of neurodevelopment are now emerging and openly available (Baby Connectome, ABCD, etc.). However, the best approaches for modeling data longitudinally are unclear, given that a gold standard is often lacking. Thus, the goal of this project is for several different groups to generate simulated longitudinal data, embedding aspects of typical and atypical brain development into the code. Initially, the plan is that only the data will be released; after approximately one year, the code will be released with the data. This will allow different groups to assess their approaches against the generating code.
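
As a toy illustration of the kind of generative structure such simulations might embed (all parameters below are arbitrary placeholders, not the project's actual model):

```python
# Toy longitudinal simulation: each subject gets a random baseline value and
# a random per-year developmental slope, plus visit-level noise.
import numpy as np

rng = np.random.default_rng(seed=0)
n_subjects, n_visits = 100, 4
baseline_age = rng.uniform(6, 10, size=(n_subjects, 1))
age = baseline_age + np.arange(n_visits)                    # roughly yearly visits
intercept = rng.normal(1000.0, 50.0, size=(n_subjects, 1))  # e.g. regional volume
slope = rng.normal(5.0, 2.0, size=(n_subjects, 1))          # typical development
volume = intercept + slope * (age - baseline_age) + rng.normal(0.0, 10.0, size=age.shape)
```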

Link to the Project

https://github.com/tonyajohansonwhite/LongSim

Image for the OHBM brainhack website

No response

Project lead

Tonya White
Github login: tonyajohansonwhite

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Brain development, coding, neuroimaging

Recommended tutorials for new contributors

Good first issues

all invited

Twitter summary

not written yet.

Short name for the Discord chat channel (~15 chars)

LongSim

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Cerebro - Advancing Openness and Reproducibility in Neuroimaging Visualization

Authors

Sina Mansour L. [email protected]
Annie G. Bryant [email protected]
Natasha Clarke [email protected]
Niousha Dehestani [email protected]
Jayson Jeganathan [email protected]
Tristan Kuehn [email protected]
Jason Kai [email protected]
Darin Erat Sleiter [email protected]
Maria Di Biase <[email protected]>
Caio Seguin [email protected]
B.T. Thomas Yeo [email protected]
Andrew Zalesky [email protected]

Summary

Synopsis

Openness and transparency have emerged as indispensable virtues that pave the way for high-quality, reproducible research, fostering a culture of trust and collaboration within the scientific community $^1$. In the domain of neuroimaging research, these principles have led to the creation of guidelines and best practices, establishing a platform that champions openness and reproducibility $^2$. The field has made substantial strides in embracing open science, a feat achieved through the successful integration of open-source software $^3$, public dataset sharing $^4$, standardized analysis pipelines $^5$, unified brain imaging data structures $^6$, open code sharing $^7$, and the endorsement of open publishing.

The joint efforts of these initiatives have established a robust foundation for cultivating a culture of collaboration and accountability within the neuroimaging community. Nonetheless, amidst these advancements, a crucial inquiry arises: What about the visual representations —the very tools through which we observe and make sense of neuroimaging results? It is within this context that we introduce "Cerebro" $^8$, a new Python-based software designed to create publication-quality brain visualizations (see Fig.1). Cerebro represents the next step in the quest for openness and reproducibility, where authors can share not only their code and data but also their publication figures alongside the precise scripts used to generate them. In doing so, Cerebro empowers researchers to make their visualizations fully reproducible, bridging the crucial gap between data, analysis, and representation.

Future guidelines

As Cerebro continues to evolve in its early development stages, this manuscript serves as a manifesto, articulating our dedication to advancing the cause of open and reproducible neuroimaging visualization. Cerebro is guided by a set of overarching goals, which include:

Fully Scriptable Publication-Quality Neuroimaging Visualization:

Cerebro's primary mission is to equip researchers with the means to craft impeccable brain visualizations while preserving full scriptability. Cerebro enables researchers to document and share every aspect of the visualization process, ensuring the seamless reproducibility of neuroimaging figures.

Cross-Compatibility with Different Data Formats:

The diversity of brain imaging data formats can pose a significant challenge for visualization. Many existing tools are constrained by compatibility limitations with specific formats. Drawing upon the foundations laid by tools like NiBabel $^9$, Cerebro is determined to provide robust cross-compatibility across a wide spectrum of data formats.

Integration with Open Science Neuroimaging Tools:

We wholeheartedly acknowledge that Cerebro cannot thrive in isolation. To realize its full potential, it must seamlessly integrate into the existing tapestry of open science tools, software, and standards. Through active collaboration with the neuroimaging community, particularly through open, inclusive community initiatives such as brainhacks $^{10}$, Cerebro aspires to forge cross-compatibility with established and emerging tools, standards, and pipelines. Our vision is one where the future of neuroimaging visualizations is marked by unwavering openness and reproducibility, fostered by this united effort.

References (Bibtex)

References

  1. Munafò, M. R., Nosek, B. A., Bishop, D. V., Button, K. S., Chambers, C. D., Percie du Sert, N., ... & Ioannidis, J. (2017). A manifesto for reproducible science. Nature human behaviour, 1(1), 1-9.
  2. Niso, G., Botvinik-Nezer, R., Appelhoff, S., De La Vega, A., Esteban, O., Etzel, J. A., ... & Rieger, J. W. (2022). Open and reproducible neuroimaging: from study inception to publication. NeuroImage, 119623.
  3. Gorgolewski, K., Burns, C. D., Madison, C., Clark, D., Halchenko, Y. O., Waskom, M. L., & Ghosh, S. S. (2011). Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python. Frontiers in neuroinformatics, 5, 13.
  4. Markiewicz, C. J., Gorgolewski, K. J., Feingold, F., Blair, R., Halchenko, Y. O., Miller, E., ... & Poldrack, R. (2021). The OpenNeuro resource for sharing of neuroscience data. Elife, 10, e71774.
  5. Nichols, T. E., Das, S., Eickhoff, S. B., Evans, A. C., Glatard, T., Hanke, M., ... & Yeo, B. T. (2017). Best practices in data analysis and sharing in neuroimaging using MRI. Nature neuroscience, 20(3), 299-303.
  6. Gorgolewski, K. J., Auer, T., Calhoun, V. D., Craddock, R. C., Das, S., Duff, E. P., ... & Poldrack, R. A. (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific data, 3(1), 1-9.
  7. Smout, C., Holford, D. L., Garner, K., Martinez, P. A., Campbell, M. E. J., Khormi, I., ... & Coelho, L. P. (2023). An open code pledge for the neuroscience community. Aperture Neuro, Proceedings of the OHBM Brainhack 2021.
  8. Mansour L., S., Bryant, A. G., Jeganathan, J., Kai, J., Kuehn, T., Sleiter, D. E., … & Dehestani, N.. (2023). sina-mansour/Cerebro_Viewer: v0.0.10.4 (v0.0.10.4). Zenodo. https://doi.org/10.5281/zenodo.8238773
  9. Brett, M., Markiewicz, C. J., Hanke, M., Côté, M. A., Cipollini, B., McCarthy, P., ... & Guidotti, R. (2020). nipy/nibabel: 3.2. 1. Zenodo.
  10. Gau, R., Noble, S., Heuer, K., Bottenhorn, K. L., Bilgin, I. P., Yang, Y. F., ... & Marinazzo, D. (2021). Brainhack: Developing a culture of open, inclusive, community-driven neuroscience. Neuron, 109(11), 1769-1775.

Bibtex bibliography

@article{munafo2017manifesto,
  title={A manifesto for reproducible science},
  author={Munaf{\`o}, Marcus R and Nosek, Brian A and Bishop, Dorothy VM and Button, Katherine S and Chambers, Christopher D and Percie du Sert, Nathalie and Simonsohn, Uri and Wagenmakers, Eric-Jan and Ware, Jennifer J and Ioannidis, John},
  journal={Nature human behaviour},
  volume={1},
  number={1},
  pages={1--9},
  year={2017},
  publisher={Nature Publishing Group}
}

@article{niso2022open,
  title={Open and reproducible neuroimaging: from study inception to publication},
  author={Niso, Guiomar and Botvinik-Nezer, Rotem and Appelhoff, Stefan and De La Vega, Alejandro and Esteban, Oscar and Etzel, Joset A and Finc, Karolina and Ganz, Melanie and Gau, Remi and Halchenko, Yaroslav O and others},
  journal={NeuroImage},
  pages={119623},
  year={2022},
  publisher={Elsevier}
}

@article{gorgolewski2011nipype,
  title={Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python},
  author={Gorgolewski, Krzysztof and Burns, Christopher D and Madison, Cindee and Clark, Dav and Halchenko, Yaroslav O and Waskom, Michael L and Ghosh, Satrajit S},
  journal={Frontiers in neuroinformatics},
  volume={5},
  pages={13},
  year={2011},
  publisher={Frontiers}
}

@article{markiewicz2021openneuro,
  title={The OpenNeuro resource for sharing of neuroscience data},
  author={Markiewicz, Christopher J and Gorgolewski, Krzysztof J and Feingold, Franklin and Blair, Ross and Halchenko, Yaroslav O and Miller, Eric and Hardcastle, Nell and Wexler, Joe and Esteban, Oscar and Goncavles, Mathias and others},
  journal={Elife},
  volume={10},
  pages={e71774},
  year={2021},
  publisher={eLife Sciences Publications Limited}
}

@article{nichols2017best,
  title={Best practices in data analysis and sharing in neuroimaging using MRI},
  author={Nichols, Thomas E and Das, Samir and Eickhoff, Simon B and Evans, Alan C and Glatard, Tristan and Hanke, Michael and Kriegeskorte, Nikolaus and Milham, Michael P and Poldrack, Russell A and Poline, Jean-Baptiste and others},
  journal={Nature neuroscience},
  volume={20},
  number={3},
  pages={299--303},
  year={2017},
  publisher={Nature Publishing Group US New York}
}

@article{gorgolewski2016brain,
  title={The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments},
  author={Gorgolewski, Krzysztof J and Auer, Tibor and Calhoun, Vince D and Craddock, R Cameron and Das, Samir and Duff, Eugene P and Flandin, Guillaume and Ghosh, Satrajit S and Glatard, Tristan and Halchenko, Yaroslav O and others},
  journal={Scientific data},
  volume={3},
  number={1},
  pages={1--9},
  year={2016},
  publisher={Nature Publishing Group}
}

@article{smout2021open,
  title={An open code pledge for the neuroscience community},
  author={Smout, Cooper and Holford, Dawn Liu and Garner, Kelly and Martinez, Paula Andrea and Campbell, Megan Ethel Janine and Khormi, Ibrahim and Gomes, Dylan GE and Bayer, Johanna Margarete Marianne and Bradley, Claire and Schettino, Antonio and others},
  year={2021},
  publisher={Aperture Neuro},
}

@software{sina_mansour_l_2023_8238773,
  author       = {Mansour L., Sina and Bryant, Annie G. and Jeganathan, Jayson and Kai, Jason and Kuehn, Tristan and Sleiter, Darin Erat and Clarke, Natasha and Wright, Brooklyn and Dehestani, Niousha},
  title        = {sina-mansour/Cerebro\_Viewer: v0.0.10.4},
  month        = aug,
  year         = 2023,
  publisher    = {Zenodo},
  version      = {v0.0.10.4},
  doi          = {10.5281/zenodo.8238773},
  url          = {https://doi.org/10.5281/zenodo.8238773}
}

@software{brett_matthew_2020_4295521,
  author       = {Brett, Matthew and Markiewicz, Christopher J. and Hanke, Michael and Côté, Marc-Alexandre and Cipollini, Ben and McCarthy, Paul and Jarecka, Dorota and Cheng, Christopher P. and Halchenko, Yaroslav O. and Cottaar, Michiel and others},
  title        = {nipy/nibabel: 3.2.1},
  month        = nov,
  year         = 2020,
  publisher    = {Zenodo},
  version      = {3.2.1},
  doi          = {10.5281/zenodo.4295521},
  url          = {https://doi.org/10.5281/zenodo.4295521}
}

@article{gau2021brainhack,
  title={Brainhack: Developing a culture of open, inclusive, community-driven neuroscience},
  author={Gau, R{\'e}mi and Noble, Stephanie and Heuer, Katja and Bottenhorn, Katherine L and Bilgin, Isil P and Yang, Yu-Fang and Huntenburg, Julia M and Bayer, Johanna MM and Bethlehem, Richard AI and Rhoads, Shawn A and others},
  journal={Neuron},
  volume={109},
  number={11},
  pages={1769--1775},
  year={2021},
  publisher={Elsevier}
}

Neuroimaging Meta-Analyses! (Neurosynth-Compose + NiMARE)

Title

Neuroimaging Meta-Analyses! (NiMARE + Neurosynth-Compose)

Short description and the goals for the OHBM BrainHack

We are on an exciting precipice of making reproducible/sharable neuroimaging meta-analyses more accessible to a broader audience and we want your help to make it a reality.

We have multiple projects you can interface with that fit your level of experience (from newcomer to veteran)

Novice Projects

User testing

Walk through our platform at https://compose.neurosynth.org/ using a tutorial.

Goals

Provide feedback on what was intuitive, what didn’t make sense, and what didn’t work in the tutorial or on the platform.

Intermediate Projects

Coordinate Based Meta-Regression exploration

If you have run a meta-analysis before using kernel-based methods and want to try coordinate-based meta-regression (CBMR), this project is for you (a baseline NiMARE sketch follows the goals below).

Goals

  • Execute a meta-analysis with meta-regression and compare the results to a kernel-based method
  • Provide feedback on the usability of the library
  • Identify bugs
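
For orientation, a kernel-based run in NiMARE looks roughly like the following; the dataset file name is a placeholder, and CBMR exposes an analogous estimator interface to compare against.

```python
# Minimal kernel-based CBMA in NiMARE, as the baseline to compare with CBMR.
from nimare.dataset import Dataset
from nimare.meta.cbma.ale import ALE

dset = Dataset("my_coordinates.json")  # placeholder NiMARE-format dataset file
meta = ALE()                           # kernel-based estimator
results = meta.fit(dset)
results.save_maps(output_dir="ale_results")  # writes the statistical maps
```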

Documentation/Educational Material Development

If you have some knowledge of meta-analyses and want to explain concepts and help organize an educational book, we can use contributors to the meta-analysis-book

Goals

  • Create exercises for sections of the book
  • Provide more fundamental knowledge about kernel-based methods
  • Outline Image-Based Meta-Analysis Section

Advanced Projects

Data mining

Use the Neurosynth dataset to recreate term meta-analyses using coordinate-based meta-regression.

Goals

  • Determine the feasibility of using the Neurosynth dataset with CBMR
  • Compare results with the original term meta-analyses on Neurosynth

Image-Based Meta-Analysis

Prototype a workflow for image-based meta-analysis, and help develop the components needed to run an image-based meta-analysis on compose.neurosynth.org and in NiMARE.

Goals

  • Create a workflow to complete an image-based meta-analysis

Support Coordinate Based Meta-Regression on Neurosynth Compose

https://compose.neurosynth.org/

Help make a new neuroimaging meta-analytic method accessible to a broader audience

Goals

  • Outline changes to the API
  • Outline changes to the meta-analysis runner
  • Create a coordinate-based meta-regression workflow on neurosynth-compose

Link to the Project

https://github.com/neurostuff/NiMARE

Image for the OHBM brainhack website

https://compose.neurosynth.org/static/synth.png

Project lead

James Kent, GitHub: jdkent, Discord: jdkent
Alejandro De La Vega, GitHub: adelavega, Discord: neurozorro#2158
Yifan Yu, GitHub: yifan0330, Discord: Yifan Yu#0260

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Novice Projects:

  • None

Intermediate Projects:

  • Python (beginner/confirmed)
  • meta-analyses (confirmed)
  • git: 0/1

Advanced Projects:

  • Python (confirmed/advanced)
  • meta-analyses (confirmed)
  • git: 1/2

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

Hack with us on neurosynth-compose and NiMARE to create reproducible/sharable neuroimaging meta-analyses!

Short name for the Discord chat channel (~15 chars)

neurosynth-compose

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Migas Analytics: create additional visualizations

Title

Migas Analytics: create additional visualizations

Short description and the goals for the OHBM BrainHack

Migas is a telemetry web service that collects, aggregates, and displays software usage. It is broken into two parts: a client and a server. We would like to expand the types and quality of the visualizations, providing authenticated users with a dynamic and clearer view of their projects’ usage.

Link to the Project

https://github.com/nipreps/migas-server

Image for the OHBM brainhack website

No response

Project lead

Mathias Goncalves, GitHub: @mxgd, discord:

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

While we encourage participants of all skill levels to join, a good understanding of Python is recommended to contribute to this project.

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

migas_viz

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Selecting the best machine learning pipeline

Title

PHOTONAI

Short description and the goals for the OHBM BrainHack

INTRODUCTION

With photonai, we have designed machine learning software that abstracts and condenses machine learning analyses to the most important design decisions, so that it can be applied by practitioners in medicine and the life sciences. As part of that, we have automated the nested cross-validation and hyperparameter optimization loop. However, the optimal approach for choosing the most suitable hyperparameter configuration continues to be an open problem. For example, the hyperparameter configuration that performs best on the validation set might be overly specific to the particularities of the validation set and thus underperform in new applications.

GOALS FOR THE BRAIN HACK

  • brainstorm strategies on how to select the best hyperparameter configuration to maximize generalization performance
  • implement ensembling techniques, i.e. select several hyperparameter configurations
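
For context, a nested cross-validated hyperparameter search in photonai is set up roughly as follows (a sketch based on the documented Hyperpipe pattern; details may differ between versions):

```python
# Sketch of photonai's nested CV: the inner loop selects a hyperparameter
# configuration, the outer loop estimates generalization performance.
from sklearn.model_selection import KFold
from photonai.base import Hyperpipe, PipelineElement

pipe = Hyperpipe("svm_example",
                 inner_cv=KFold(n_splits=5),
                 outer_cv=KFold(n_splits=3),
                 optimizer="grid_search",
                 metrics=["accuracy"],
                 best_config_metric="accuracy")
pipe += PipelineElement("SVC", hyperparameters={"C": [0.1, 1, 10]})
# pipe.fit(X, y)  # X, y: your features and labels
```

Strategies discussed at the Brainhack (e.g. ensembling several configurations) would plug in where `best_config_metric` currently picks a single winner.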

Link to the Project

github.com/wwu-mmll/photonai

Image for the OHBM brainhack website

No response

Project lead

Jan Ernsting, Github: jernsting, Discord: Jan#5471
Ramona Leenings, Github: rleenings, Discord: ramona_photonai

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Basic Python
Basic Machine Learning or Statistic Knowledge

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

photonai

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Nobrainer

Title

Nobrainer: A framework for developing neural network models for 3D image processing

Short description and the goals for the OHBM BrainHack

Nobrainer-zoo is a toolbox with a collection of deep learning neuroimaging models that eases the use of pre-trained models for various applications. Nobrainer-zoo provides the required environment with all the dependencies for training/inference of models.

During the hackathon, we aim to incorporate new models into the zoo along the lines of existing ones. Some of the models include TopoFit, CorticalFlow, Vox2Cortex, CortexODE, PialNN, and many other deep learning-based neuroimaging models that users would want to see as part of the nobrainer-zoo.

Tasks are designed to be simple. To facilitate model integration, we provide a step-by-step guide at https://github.com/neuronets/trained-models/blob/master/add_model_instructions.md

Link to the Project

https://github.com/neuronets/nobrainer-zoo.git

Image for the OHBM brainhack website

No response

Project lead

@satra

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

Python, Datalad, Docker, Singularity

Recommended tutorials for new contributors

Good first issues

neuronets/trained-models#51

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

nobrainer-project

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

boldGPT

Title

boldGPT

Short description and the goals for the OHBM BrainHack

Humans struggle to "see" the structure in functional MRI (BOLD) brain maps. Our goal is to train a GPT that understands brain maps better than humans. This kind of "foundation" model should be useful for things like phenotype prediction and brain activity decoding. Plus it will hopefully generate neat fake brain maps.
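
One plausible ingredient (our assumption, not a settled design) is tokenizing continuous brain maps so that a GPT can do next-token prediction over them, e.g.:

```python
# Hypothetical tokenization sketch: patchify a flattened brain map and
# quantize patch means into a discrete vocabulary for next-token prediction.
import torch

bold_map = torch.randn(163_842)  # stand-in for one fsaverage-resolution map
patch_size = 642                 # arbitrary patch length
n = (bold_map.numel() // patch_size) * patch_size
patches = bold_map[:n].view(-1, patch_size)
bins = torch.linspace(-3.0, 3.0, steps=255)
tokens = torch.bucketize(patches.mean(dim=1), bins)  # integer tokens in [0, 255]
```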

Link to the Project

https://github.com/clane9/boldGPT

Image for the OHBM brainhack website

https://github.com/clane9/boldGPT/raw/main/.github/images/boldgpt.png

Project lead

Connor Lane
GitHub: clane9
Discord: connortslane

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

  • python
  • fMRI
  • deep learning
  • pytorch

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

boldgpt

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Physiopy - Practices and Tools for Physiological Signals

Authors

Mary Miedema <[email protected]>
Simon R. Steinkamp <[email protected]>
Céline Provins <[email protected]>
Sarah Goodale <[email protected]>
Marie-Eve Picard <[email protected]>
François Lespinasse <[email protected]>
Stefano Moia <[email protected]>
The physiopy community <[email protected]>

Summary

The acquisition and analysis of peripheral signals such as cardiac and respiratory measures alongside neuroimaging data provides crucial insight into physiological sources of variance in fMRI data. The physiopy community aims to develop and support a comprehensive suite of resources for researchers to integrate physiological data collection and analysis with their studies. This is achieved through regular discussion and documentation of community practices alongside the active development of open-source toolboxes for reproducibly working with physiological signals. At the OHBM 2023 Brainhack, we advanced physiopy’s goals through three parallel projects:

Documentation of Physiological Signal Acquisition Community Practices

We have been working to build “best community practices” documentation from experts in the physiological monitoring realm of neuroimaging. The aim of this project was to draft a new version of our documentation adding information from six meetings throughout the year discussing good practices in acquisition and use of cardiac, respiratory, and blood gas data. The documentation is finished and ready for editorial review from the community before we release this version publicly.

Semi-Automated Workflows for Physiological Signals

The aim of this project was to upgrade the existing code base of the peakdet and phys2denoise packages to achieve a unified workflow encompassing all steps in physiological data processing and model estimation. We mapped out and began implementing a restructured workflow for both toolboxes, incorporating configuration files for more flexible and reproducible usage. To better interface with non-physiopy workflows, we added support for NeuroKit2 functionality. We also added visualization support to the phys2denoise toolbox.

PhysioQC: A Physiological Data Quality Control Toolbox

This project was about creating a quality control pipeline for physiological data, similar to MRIQC, and leveraging NiReports. At the hackathon, we implemented a set of useful metrics and visualizations, as well as a proof-of-concept workflow for gathering the latter in an HTML report.

Going forward, development of these toolboxes and revision of our community practices continues. We welcome further contributions and contributors, at any skill level or with any background experience with physiological data.

References (Bibtex)

@software{phys2bids,
author = {Daniel Alcalá and
Apoorva Ayyagari and
Katie Bottenhorn and
Molly Bright and
César Caballero-Gaudes and
Inés Chavarría and
Vicente Ferrer and
Soichi Hayashi and
Vittorio Iacovella and
François Lespinasse and
Ross Markello and
Stefano Moia and
Robert Oostenveld and
Taylor Salo and
Rachael Stickland and
Eneko Uruñuela and
Merel van der Thiel and
Kristina Zvolanek},
title = {{physiopy/phys2bids: BIDS formatting of
physiological recordings}},
month = jun,
year = 2021,
publisher = {Zenodo},
version = {},
doi = {10.5281/zenodo.3470091},
url = {https://doi.org/10.5281/zenodo.3470091}
}

@Article{Makowski2021neurokit,
author = {Dominique Makowski and Tam Pham and Zen J. Lau and Jan C. Brammer and Fran{\c{c}}ois Lespinasse and Hung Pham and Christopher Schölzel and S. H. Annabel Chen},
title = {{NeuroKit}2: A Python toolbox for neurophysiological signal processing},
journal = {Behavior Research Methods},
volume = {53},
number = {4},
pages = {1689--1696},
publisher = {Springer Science and Business Media {LLC}},
doi = {10.3758/s13428-020-01516-y},
url = {https://doi.org/10.3758%2Fs13428-020-01516-y},
year = 2021,
month = {feb}
}

K-Particles: A visual journey into the heart of magnetic resonance imaging

Authors

Omer Faruk Gulban [email protected]
Kenshu Koiso [email protected]
Thomas Maullin-Sapey [email protected]
Jeff Mentch [email protected]
Alessandra Pizzuti [email protected]
Fernanda Ponce | <TO_BE_FILLED_IN_LATER_WE_COULD_NOT_REACH_HER>
Kevin R. Sitek [email protected]
Paul A. Taylor [email protected]

Summary

Introduction

The primary objective of this project is to leverage particle animations to provide insights into the journey of traveling through and sampling k-space. By generating captivating visualizations, we aim to demystify the concept of k-space and the role it plays in magnetic resonance imaging (MRI), engaging both the scientific community and the general public in a delightful exploration of this fundamental aspect of imaging. This project builds on our previous Brainhack experiences, which focused on generating 3D geodesic distance computation animations [CITE Brainhack2021] and particle-simulation-based brain explosions [CITE Brainhack2022].

Methods

Our methodology is as follows. First, we start from a 2D brain image (e.g. a slice selected from 3D anatomical MRI data [CITE Nibabel, Numpy, Scipy]). Then, we take its Fourier transform and subsequently perform masking (or magnitude scaling) operations on the resulting k-space data [CITE Bernstein2004]. We then create simultaneous visualizations of the k-space magnitude and the corresponding image-space magnitude data. Note that the masking of k-space data is where the participants of this project exercised their creativity, by setting up various initial conditions and a set of rules to animate the mask data. In the final step, animated frames are compiled into movies to inform (and entertain). The scripts we have used to program these steps are available at https://github.com/ofgulban/k_particles. Note that we have also included several animations where no brain images were used; instead, we generated k-space data directly in k-space to introduce unfamiliar participants to the concepts (see Figure 1, Panel A).
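
In code, the core of a single animation frame reduces to a few NumPy operations (a minimal sketch; the repository adds the particle dynamics that animate the mask over frames):

```python
# One frame of the pipeline: FFT a 2D image, mask k-space, reconstruct.
import numpy as np

img = np.random.rand(128, 128)           # stand-in for a 2D brain slice
ksp = np.fft.fftshift(np.fft.fft2(img))  # k-space, low frequencies centered

# Example mask: keep only a disc of radius 20 around the k-space center
y, x = np.indices(ksp.shape)
cy, cx = ksp.shape[0] // 2, ksp.shape[1] // 2
mask = (y - cy) ** 2 + (x - cx) ** 2 <= 20 ** 2

recon = np.abs(np.fft.ifft2(np.fft.ifftshift(ksp * mask)))  # image magnitude
```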

Results

As a result of this hackathon project, a video compilation of our progress (Figure 1, Panel B) can be seen at https://youtu.be/_5ZDctWv5X4. Some of the highlights are:

  1. Audiovisualizer that maps audio features (e.g. amplitude) to k-space mask diameters
  2. Game of life [CITE Gardner1970] simulation in k-space with Hermitian symmetry.
  3. Pacman moving in k-space is implemented as a series of radial sector masks.
  4. Semi-random initialization of particle positions and semi-randomly reassigned velocities that look like an emerging butterfly.
  5. Randomized game of life initialization with vanishing trails of the previous simulation steps that look like stars, clouds, and comets orbiting frequency bands in k-space.
  6. Semi-random initialization of particle positions and velocities that look like explosions.
  7. Predetermined initialization of particle positions (e.g. at the center) and semi-randomized velocities that look like fireworks.
  8. Dancing text animations where the positions of text pixels are manipulated by wave functions.
  9. Picture-based masks (e.g. cropped faces of the authors) moving within k-space, where each pixel’s grayscale value is mapped onto a mask coefficient between 0 and 1.

Our future efforts will involve refining the k-space simulations to generate more entertaining and educational content. For instance, instead of only visualizing the magnitude images, we can generate four-panel animations showing the real and imaginary (or magnitude and phase) components of the data.

Figure Caption

Figure 1: Our compilation of k-space animations generated during the brainhack can be seen at https://youtu.be/XS0LEQExGU8?si=I5Zufp3AcCbdhYIR .

References (Bibtex)

@book{Bernstein2004,
title = {Handbook of {MRI} {Pulse} {Sequences}},
isbn = {978-0-12-092861-3},
url = {https://linkinghub.elsevier.com/retrieve/pii/B9780120928613X50006},
abstract = {Magnetic Resonance Imaging (MRI) is among the most important medical imaging techniques available today. There is an installed base of approximately 15,000 MRI scanners worldwide. Each of these scanners is capable of running many different "pulse sequences", which are governed by physics and engineering principles, and implemented by software programs that control the MRI hardware. To utilize an MRI scanner to the fullest extent, a conceptual understanding of its pulse sequences is crucial. This book offers a complete guide that can help the scientists, engineers, clinicians, and technologists in the field of MRI understand and better employ their scanner. • Explains pulse sequences, their components, and the associated image reconstruction methods commonly used in MRI • Provides self-contained sections for individual techniques • Can be used as a quick reference guide or as a resource for deeper study • Includes both non-mathematical and mathematical descriptions • Contains numerous figures, tables, references, and worked example problems.},
publisher = {Elsevier},
author = {Bernstein, Matt A. and King, Kevin F. and Zhou, Xiaohong Joe},
year = {2004},
doi = {10.1016/B978-0-12-092861-3.X5000-6},
note = {Publication Title: Handbook of MRI Pulse Sequences
ISSN: 1053-1807},
}

@Article{Brainhack2021,
title = {Proceedings of the {OHBM} {Brainhack} 2021},
copyright = {All rights reserved},
url = {https://apertureneuro.org/article/77464-proceedings-of-the-ohbm-brainhack-2021},
doi = {10.52294/258801b4-a9a9-4d30-a468-c43646391211},
language = {en},
urldate = {2023-10-15},
journal = {Aperture Neuro},
author = {Nikolaidis, Aki and Manchini, Matteo and Auer, Tibor and L. Bottenhorn, Katherine and Alonso-Ortiz, Eva and Gonzalez-Escamilla, Gabriel and Valk, Sofie and Glatard, Tristan and Selim Atay, Melvin and M.M. Bayer, Johanna and Bijsterbosch, Janine and Algermissen, Johannes and Beck, Natacha and Bermudez, Patrick and Poyraz Bilgin, Isil and Bollmann, Steffen and Bradley, Claire and E.J. Campbell, Megan and Caron, Bryan and Civier, Oren and Pedro Coelho, Luis and El Damaty, Shady and Das, Samir and Dugré, Mathieu and Earl, Eric and Evas, Stefanie and Lopes Fischer, Nastassja and Fu Yap, De and G. Garner, Kelly and Gau, Remi and Ganis, Giorgio and G. E. Gomes, Dylan and Grignard, Martin and Guay, Samuel and Faruk Gulban, Omer and Hamburg, Sarah and O. Halchenko, Yaroslav and Hayot-Sasson, Valerie and Liu Holford, Dawn and Huber, Laurentius and Illanes, Manuel and Johnstone, Tom and Kalyani, Avinash and Kashyap, Kinshuk and Ke, Han and Khormi, Ibrahim and Kiar, Gregory and Ković, Vanja and Kuehn, Tristan and Kumar, Achintya and Lecours-Boucher, Xavier and Lührs, Michael and Luke, Robert and Madjar, Cecile and Mansour L., Sina and Markeweicz, Chris and Andrea Martinez, Paula and McCarroll, Alexandra and Michel, Léa and Moia, Stefano and Narayanan, Aswin and Niso, Guiomar and A. O’Brien, Emmet and Oudyk, Kendra and Paugam, François and G. Pavlov, Yuri and Poline, Jean-Baptiste and A. Poser, Benedikt and Provins, Céline and Reddy Raamana, Pradeep and Rioux, Pierre and Romero-Bascones, David and Sareen, Ekansh and Schettino, Antonio and Shaw, Alec and Shaw, Thomas and A. Smout, Cooper and Šoškié, Anđdela and Stone, Jessica and J Styles, Suzy and Sullivan, Ryan and Sunami, Naoyuki and Sundaray, Shamala and Wei Rou, Jasmine and Thanh Thuy, Dao and Tourbier, Sebastien and Urch, Sebastián and De La Vega, Alejandro and Viswarupan, Niruhan and Wagner, Adina and Walger, Lennart and Wang, Hao-Ting and Ting Woon, Fei and White, David and Wiggins, Christopher and Woods, Will and Yang, Yu-Fang and Zaytseva, Ksenia and D. Zhu, Judy and P. Zwiers, Marcel},
month = mar,
year = {2023},
pages = {87},
file = {Nikolaidis et al. - 2023 - Proceedings of the OHBM Brainhack 2021.pdf:/Users/faruk/Zotero/storage/MUBSB5NX/Nikolaidis et al. - 2023 - Proceedings of the OHBM Brainhack 2021.pdf:application/pdf},
}

@Article{Scipy,
title = {{SciPy} 1.0: {Fundamental} {Algorithms} for {Scientific} {Computing} in {Python}},
volume = {17},
doi = {10.1038/s41592-019-0686-2},
journal = {Nature Methods},
author = {Virtanen, Pauli and Gommers, Ralf and Oliphant, Travis E. and Haberland, Matt and Reddy, Tyler and Cournapeau, David and Burovski, Evgeni and Peterson, Pearu and Weckesser, Warren and Bright, Jonathan and van der Walt, Stéfan J. and Brett, Matthew and Wilson, Joshua and Millman, K. Jarrod and Mayorov, Nikolay and Nelson, Andrew R. J. and Jones, Eric and Kern, Robert and Larson, Eric and Carey, C J and Polat, İlhan and Feng, Yu and Moore, Eric W. and VanderPlas, Jake and Laxalde, Denis and Perktold, Josef and Cimrman, Robert and Henriksen, Ian and Quintero, E. A. and Harris, Charles R. and Archibald, Anne M. and Ribeiro, Antônio H. and Pedregosa, Fabian and van Mulbregt, Paul and {SciPy 1.0 Contributors}},
year = {2020},
pages = {261--272},
}

@Article{Numpy,
title = {Array programming with {NumPy}},
volume = {585},
url = {https://doi.org/10.1038/s41586-020-2649-2},
doi = {10.1038/s41586-020-2649-2},
number = {7825},
journal = {Nature},
author = {Harris, Charles R. and Millman, K. Jarrod and Walt, Stéfan J. van der and Gommers, Ralf and Virtanen, Pauli and Cournapeau, David and Wieser, Eric and Taylor, Julian and Berg, Sebastian and Smith, Nathaniel J. and Kern, Robert and Picus, Matti and Hoyer, Stephan and Kerkwijk, Marten H. van and Brett, Matthew and Haldane, Allan and Río, Jaime Fernández del and Wiebe, Mark and Peterson, Pearu and Gérard-Marchant, Pierre and Sheppard, Kevin and Reddy, Tyler and Weckesser, Warren and Abbasi, Hameer and Gohlke, Christoph and Oliphant, Travis E.},
month = sep,
year = {2020},
note = {Publisher: Springer Science and Business Media LLC},
pages = {357--362},
}

@misc{Nibabel,
title = {nipy/nibabel: 5.1.0},
publisher = {Zenodo},
author = {Brett, Matthew and Markiewicz, Christopher J. and Hanke, Michael and Côté, Marc-Alexandre and Cipollini, Ben and McCarthy, Paul and Jarecka, Dorota and Cheng, Christopher P. and Halchenko, Yaroslav O. and Cottaar, Michiel and Larson, Eric and Ghosh, Satrajit and Wassermann, Demian and Gerhard, Stephan and Lee, Gregory R. and Baratz, Zvi and Wang, Hao-Ting and Kastman, Erik and Kaczmarzyk, Jakub and Guidotti, Roberto and Daniel, Jonathan and Duek, Or and Rokem, Ariel and Madison, Cindee and Papadopoulos Orfanos, Dimitri and Sólon, Anibal and Moloney, Brendan and Morency, Félix C. and Goncalves, Mathias and Markello, Ross and Riddell, Cameron and Burns, Christopher and Millman, Jarrod and Gramfort, Alexandre and Leppäkangas, Jaakko and van den Bosch, Jasper J.F. and Vincent, Robert D. and Braun, Henry and Subramaniam, Krish and Van, Andrew and Gorgolewski, Krzysztof J. and Raamana, Pradeep Reddy and Klug, Julian and Nichols, B. Nolan and Baker, Eric M. and Hayashi, Soichi and Pinsard, Basile and Haselgrove, Christian and Hymers, Mark and Esteban, Oscar and Koudoro, Serge and Pérez-García, Fernando and Dockès, Jérôme and Oosterhof, Nikolaas N. and Amirbekian, Bago and Christian, Horea and Nimmo-Smith, Ian and Nguyen, Ly and Reddigari, Samir and St-Jean, Samuel and Panfilov, Egor and Garyfallidis, Eleftherios and Varoquaux, Gael and Legarreta, Jon Haitz and Hahn, Kevin S. and Waller, Lea and Hinds, Oliver P. and Fauber, Bennet and Perez, Fabian and Roberts, Jacob and Poline, Jean-Baptiste and Stutters, Jon and Jordan, Kesshi and Cieslak, Matthew and Moreno, Miguel Estevan and Hrnčiar, Tomáš and Haenel, Valentin and Schwartz, Yannick and Darwin, Benjamin C and Thirion, Bertrand and Gauthier, Carl and Solovey, Igor and Gonzalez, Ivan and Palasubramaniam, Jath and Lecher, Justin and Leinweber, Katrin and Raktivan, Konstantinos and Calábková, Markéta and Fischer, Peter and Gervais, Philippe and Gadde, Syam and Ballinger, Thomas and Roos, Thomas and Reddam, Venkateswara Reddy and {freec84}},
month = apr,
year = {2023},
}

@Article{Gardner1970,
title = {Mathematical {Games}: {The} fantastic combinations of {John} {Conway}’s new solitaire game “life”},
volume = {4},
journal = {Scientific American},
author = {Gardner, M.},
year = {1970},
pages = {120--123},
}

@misc{Brainhack2022,
title = {Proceedings of the {OHBM} {Brainhack} 2022},
}

NARPS Open Pipelines

Title

NARPS Open Pipelines

Short description and the goals for the OHBM BrainHack

The goal of the NARPS Open Pipelines project is to create a codebase reproducing the 70 pipelines of the NARPS project (Botvinik-Nezer et al., 2020) and share this as an open resource for the community.

In particular, we would like to focus on these tasks during the Brainhack:

  • Start reproducing new pipelines, based on the knowledge of participants (e.g. which fMRI analysis software they are used to, whether they use Python, ...); a minimal Nipype sketch of the kind of code involved follows this list;
  • Improve the documentation and accessibility of the project.
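
The sketch below shows the kind of Nipype building block involved; node names and inputs are illustrative, not taken from a specific team's pipeline.

```python
# Illustrative Nipype fragment: a single SPM smoothing node wired into a
# workflow, the building block used when reproducing preprocessing pipelines.
from nipype import Node, Workflow
from nipype.interfaces.spm import Smooth

smooth = Node(Smooth(fwhm=[8, 8, 8]), name="smooth")
wf = Workflow(name="preprocessing", base_dir="working_dir")
wf.add_nodes([smooth])
# smooth.inputs.in_files = ["sub-001_task-MGT_run-01_bold.nii"]  # hypothetical
# wf.run()  # requires SPM/MATLAB to be available
```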

Link to the Project

https://github.com/Inria-Empenn/narps_open_pipelines

Image for the OHBM brainhack website

https://raw.githubusercontent.com/Inria-Empenn/narps_open_pipelines/main/assets/images/project_illustration.png

Project lead

Boris Clénet (https://github.com/bclenet) - virtual hub Europe
Camille Maumet (https://github.com/cmaumet) - Montreal
Elodie Germani (https://github.com/elodiegermani) - Montreal

Main Hub

Europe / Middle East / Africa

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

  • Base knowledge of git and GitHub
  • Able to understand Python code
  • Understanding of fMRI analysis pipelines
  • Ideally but not required : some Nipype knowledge
  • Ideally but not required : having used SPM or FSL or AFNI or nistats

Recommended tutorials for new contributors

Good first issues

Twitter summary

NARPS Open Pipelines
https://github.com/Inria-Empenn/narps_open_pipelines
Project leader: Boris Clénet with @cmaumet @elodiegermani
#OHBMHackathon #Brainhack #OHBM2023

Short name for the Discord chat channel (~15 chars)

narps-open

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon

Nipreps - Reporting FreeSurfer outcomes with NiReports

Title

Reporting FreeSurfer outcomes with NiReports

Short description and the goals for the OHBM BrainHack

This project aims to gather the output of FreeSurfer in comprehensive reports. Using the powerful NiReports tooling, reportlets will be created from FreeSurfer runs to visualize and assess the quality of neuroimaging processing steps.

The project will leverage the NiReports assembler, which utilizes PyBIDS, to collect the reportlets generated by FreeSurfer. The assembler follows a report specification in YAML format, specifying the query for specific reportlets, their associated metadata, and text annotations. Ultimately, the assembler combines the reportlets into a single HTML file, providing an informative and concise summary of the FreeSurfer analysis.

FreeSurfer is a widely used software package for structural and functional neuroimaging analysis, while NiReports, a part of the NiPreps' reporting and visualization tools, offers a powerful framework for organizing and presenting the results. This hackathon project aims to streamline the reporting process and enhance the visualization of FreeSurfer output, facilitating the interpretation and analysis of neuroimaging data.

Link to the Project

https://github.com/nipreps/nireports

Image for the OHBM brainhack website

No response

Project lead

Michael Dayan, GitHub: @neurorepro , discord:

Main Hub

Montreal

Other Hub covered by the leaders

  • Montreal
  • Asia / Pacific
  • Europe / Middle East / Africa
  • Americas

Skills

To make the most of this hackathon project, having some Python knowledge is important. While prior experience with FreeSurfer and NiReports is appreciated, it's not a requirement. We encourage you to join us and contribute your skills, regardless of your level of expertise.

Recommended tutorials for new contributors

Good first issues

No response

Twitter summary

No response

Short name for the Discord chat channel (~15 chars)

nireports_freesurfer

Please read and follow the OHBM Code of Conduct

  • I agree to follow the OHBM Code of Conduct during the hackathon
