
CBIG's Introduction

Welcome to the Computational Brain Imaging Group (CBIG) repository!

We are Thomas Yeo's Computational Brain Imaging Group (CBIG).

The CBIG repository is a package that provides the following useful tools:

  1. fMRI preprocessing pipeline
  2. Brain parcellations and algorithms
  3. Mental disorder subtyping maps and algorithms
  4. fMRI dynamical models (including neural mass models)
  5. Registration between MNI and fsaverage space
  6. Phenotypic prediction algorithms

For more information, please check the stable_projects folder.

Currently, CBIG mainly uses MATLAB, bash, csh, and Python, and only supports Linux systems.

Usage

After cloning/downloading this repository, please see the README inside the setup directory for instructions on how to set up your local environment to be compatible with our repository.

We strongly encourage you to join the CBIG users group (https://groups.google.com/forum/#!forum/cbig_users/join), so that you can be informed about major updates & bugs.

If you have issues, please email Ruby ([email protected]) and Thomas ([email protected]), but we may or may not be able to help you because we are a small lab with limited resources.

License

See our LICENSE file for license rights and limitations (MIT).

Happy researching!

CBIG's Issues

Incorrect LUT (Yan2023)?

I was trying to get the network assignment from the LUT file. While the readme file of the Yan2023 homotopic parcellation says, "Parcel 1 corresponds to parcel 201 and so on", the LUT file (400Parcels_Yeo2011_7Networks_LUT.txt) shows:
1 0.80392 0.24314 0.30588 7networks_LH_Default_FPole_1
2 0.80392 0.24314 0.30588 7networks_LH_Default_FPole_2
3 0.80392 0.24314 0.30588 7networks_LH_Default_IPL_1
...
201 0.80392 0.24314 0.30588 7networks_RH_Default_FPole_1
202 0.80392 0.24314 0.30588 7networks_RH_Default_IPL_1

Moreover, the network assignment does not seem to match either. For example, there are 48 DMN ROIs on the left (ROI 1-48) and 35 on the right hemisphere (ROI 201-235). Did I miss something here?
Thanks!
Oliver

https://raw.githubusercontent.com/ThomasYeoLab/CBIG/master/stable_projects/brain_parcellation/Yan2023_homotopic/parcellations/MNI/yeo7/fsleyes_lut/400Parcels_Yeo2011_7Networks_LUT.txt

Schaefer2018 for HCP data

Dear CBIG Administrator:
Thank you for your excellent work on human brain parcellation!
Recently I have wanted to use the Schaefer2018 method to get a group parcellation on the fs_LR_32k template with different HCP data, but it is based on fsaverage space, as described in the paper (Schaefer et al., 2018).
I find that params.grad_prior uses the file "./lib/input/3_smooth_lh_borders_120_gordon_subjects_3_postsmoothing_6.mat", which is in fsaverage6 space (6*40962 in each hemisphere). So I wonder how to generate this file from other fMRI data. Is it created by assigning Gordon's boundary file to the neighborhood nodes in fs_LR_32k?
Looking forward to your reply!

Yours,
Lee

CBIG_RF_projectMNI2fsaverage to gii formatted file

Hi,
I used CBIG_RF_projectMNI2fsaverage.sh to map my statistical results from MNI to fsaverage. I see that the script uses MRIwrite to save the results, which form a matrix of size 1*163842. The code is the following:

input = MRIread('$input'); 
[lh_proj_data, rh_proj_data] = CBIG_RF_projectMNI2fsaverage('$input', '$interp', '$lh_map', '$rh_map'); 
input.vol = permute(lh_proj_data, [4 2 3 1]); 
MRIwrite(input, '$lh_output'); 
input.vol = permute(rh_proj_data, [4 2 3 1]); 
MRIwrite(input, '$rh_output'); 

It can be visualised with Freeview, but what about Connectome Workbench? I tried to convert the nii.gz file to a func.gii file with mri_convert, but it failed; the display in Workbench was just disordered.
How can I output a .gii file directly, or convert the results to .gii?
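
For reference, here is a minimal Python sketch of one way to write the projected values out as a func.gii that Connectome Workbench can display. This assumes nibabel is installed and that the projected output is a single map of 163842 values per hemisphere; the file names are illustrative, not the script's actual outputs.

import numpy as np
import nibabel as nib

# Load the projected left-hemisphere map written by MRIwrite (file name assumed).
img = nib.load('lh_output.nii.gz')
data = np.asarray(img.dataobj).ravel().astype(np.float32)  # 163842 values, one per fsaverage vertex

# Wrap the per-vertex values in a GIFTI data array and save as a functional .gii file.
darray = nib.gifti.GiftiDataArray(data, intent='NIFTI_INTENT_NONE')
gii = nib.gifti.GiftiImage(darrays=[darray])
nib.save(gii, 'lh_output.func.gii')

The same steps apply to the right hemisphere; the resulting .func.gii can then be loaded onto an fsaverage surface in Workbench.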

Kong2022/examples/readme: set $CBIG_CODE_DIR env missing

The $CBIG_CODE_DIR environment variable has to be set to (correctly) run Kong2022/examples/CBIG_ArealMSHBM_create_example_input_data.sh. This information is missing in examples/readme.md. If the variable is not set, the script doesn't terminate but just throws some warnings and probably copies things to a wrong location.

It might make even more sense to just derive the script directory via SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) or something similar.

compile MARS_DT_Boundary.c into mex file

Expected situation

Successful compilation into .mexa64 file

Actual situation

Error output:

Error using mex
/tmp/mex_3712286831561570_15131/MARS_DT_Boundary.o: In function `MARS_DT_Boundary':
MARS_DT_Boundary.c:(.text+0x64): undefined reference to `Min_HeapAllocate'
MARS_DT_Boundary.c:(.text+0xb3): undefined reference to `Min_HeapInsert'
MARS_DT_Boundary.c:(.text+0xd9): undefined reference to `Min_HeapEditKeyIndexID'
MARS_DT_Boundary.c:(.text+0x124): undefined reference to `Min_HeapGetCurrSize'
MARS_DT_Boundary.c:(.text+0x140): undefined reference to `Min_HeapExtract'
MARS_DT_Boundary.c:(.text+0x1bb): undefined reference to `Min_HeapIdIsInHeap'
MARS_DT_Boundary.c:(.text+0x1d1): undefined reference to `Min_HeapQueryKeyIndexID'
MARS_DT_Boundary.c:(.text+0x213): undefined reference to `Min_HeapEditKeyIndexID'
MARS_DT_Boundary.c:(.text+0x290): undefined reference to `Min_HeapFree'
collect2: error: ld returned 1 exit status

Steps to reproduce the issue (optional)

Run mex MARS_DT_Boundary.c in the command window of MATLAB R2020b.
I'm on Ubuntu 18.04. I've also downloaded min_heap.c, min_heap.h, and libmin_heap.a, the files that seem to be related to MARS_DT_Boundary.c, but the error remains.

Schaefer parcellation in MNI space

Hi all,

First of all, thanks for making this parcellation available to the community!

I tried to load the 1000 parcel parcellation in MNI space using FieldTrip's ft_read_atlas() function. However, I got an error message saying that the data was not in GZIP format. Next, I tried to open the same data with FSLeyes (as per your documentation) and got the same error message. Subsequently, instead of downloading this parcellation from your repository, I also tried fetching the data with the NILearn python package using nilearn.datasets.fetch_atlas_schaefer_2018(). The data fetched with NILearn indeed seems to work with FSLeyes and FieldTrip. Am I doing something wrong or is there an issue with the MNI parcellation?

Once I got the data (fetched with NILearn) loaded using FieldTrip, I did some basic checks, and it appears that the origin (0,0,0) lies outside of the brain. As far as I understand, the origin of MNI space is usually set at the anterior commissure.

Could you help me to figure out how to properly load the MNI space parcellation?
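
As a quick sanity check (a plain-Python sketch; the file name is illustrative), you can inspect the first bytes of the downloaded file: a genuine .nii.gz starts with the GZIP magic bytes 0x1f 0x8b, whereas an HTML page or a Git LFS pointer downloaded by mistake will not.

# Check whether the file on disk is really GZIP-compressed (file name assumed).
with open('Schaefer2018_1000Parcels_7Networks_order_FSLMNI152_2mm.nii.gz', 'rb') as f:
    magic = f.read(2)
print(magic)                  # b'\x1f\x8b' for a real GZIP file
print(magic == b'\x1f\x8b')   # False suggests a web page or pointer file was downloaded instead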

Group atlas selection

I found this in the scripts for group priors estimation, for example:
CBIG_ArealMSHBM_cMSHBM_estimate_group_priors_parent(project_dir, mesh, num_sub, ...
    num_sess, num_clusters, beta, tmp_dir, max_iter)

There are only numbers that can be selected, which means the group atlas used here only supports Schaefer et al., 2018. It would be very helpful to have an option to select a different atlas for the group priors. For example, I have a group atlas on fs_LR_32k; could I use it with this method?

CBIG_fMRI_Preproc2016 without downsampling

Expected situation

Use the CBIG_fMRI_Preproc2016 pipeline without downsampling to fsaverage5 space by editing the example_config.txt file to remove the -down flag:

CBIG_preproc_native2fsaverage -proj fsaverage6 -sm 6

Actual situation

Files with the fsaverage5 mesh are still created, as documented in the native2fsaverage log (screenshot attached).

The fMRI_preprocess log indicates that it cannot find files ending in the 'fs6_sm6_fs' suffix (screenshot attached).

Thanks for your help!
Maddy

write .annot file error

Hello, I updated the label of each vertex through an algorithm. The goal is to replace the label IDs in the .annot file with my updated ones, but the problem is that when I open the .annot file with Notepad, the content is garbled. Do you know how to solve this?
I want to replace the vertex labels in the .annot file. If you know how, would you be willing to share the source code?

Looking forward to your reply.
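
.annot files are binary, so they cannot be edited in a text editor such as Notepad. Below is a minimal Python sketch of one way to swap in new vertex labels, assuming nibabel is available; new_labels stands for your algorithm's output (one integer per vertex, indexing into the existing colour table) and the file names are illustrative.

import numpy as np
import nibabel.freesurfer.io as fsio

# Read the existing annotation: per-vertex labels, colour table, and region names.
labels, ctab, names = fsio.read_annot('lh.original.annot')

# Replace the labels with your own assignment (hypothetical variable; must have the
# same length as `labels` and index into the rows of `ctab`).
new_labels = np.asarray(my_algorithm_labels, dtype=np.int32)
assert new_labels.shape == labels.shape

fsio.write_annot('lh.updated.annot', new_labels, ctab, names)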

The Schaefer 1000 parcellation in fsLR has regions missing

Expected situation

I attempted to parcellate an object using the Schaefer dataset as per the bottom of this walkthrough:
https://netneurolab.github.io/neuromaps/user_guide/transformations.html
only using the 1000 parcellation instead of the 400, like so:

from neuromaps.datasets import fetch_annotation
from netneurotools import datasets as nntdata
from neuromaps.parcellate import Parcellater
from neuromaps.images import dlabel_to_gifti

fc_grad = fetch_annotation(source='margulies2016', desc='fcgradient01', space='fsLR', den='32k')
schaefer = nntdata.fetch_schaefer2018('fslr32k')['1000Parcels7Networks']
parc = Parcellater(dlabel_to_gifti(schaefer), 'fsLR')
fc_grad_parc = parc.fit_transform(fc_grad, 'fsLR')

and was met with an error:

IndexError: index 999 is out of bounds for axis 0 with size 999

Actual situation

Since neuromaps links to this GitHub repository along with the license, I assumed that they fetch the parcellation from here as the source.
I downloaded the dlabel file from here:

https://github.com/ThomasYeoLab/CBIG/blob/master/stable_projects/brain_parcellation/Schaefer2018_LocalGlobal/Parcellations/HCP/fslr32k/cifti/Schaefer2018_1000Parcels_7Networks_order.dlabel.nii

and checked it. Regions 533 and 903 were missing from the second hemisphere.

If you could assist me with this I would greatly appreciate it, as I am trying to use the parcellation on my own data. If I have misunderstood the data, please do also let me know.

Thanks!
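
For what it is worth, the missing labels can be listed directly from the dlabel file with a short Python check (a sketch assuming nibabel; the data array holds one parcel ID per grayordinate):

import numpy as np
import nibabel as nib

cii = nib.load('Schaefer2018_1000Parcels_7Networks_order.dlabel.nii')
present = set(np.unique(np.asarray(cii.get_fdata(), dtype=int)).tolist())
missing = sorted(set(range(1, 1001)) - present)
print(missing)   # parcel IDs absent from the grayordinate data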

Schaefer2018 parcellation in individual surface space : no sphere.reg file

I'm trying to follow this example in order to parcellate an individual brain. However, when I try to run mri_surf2surf, it raises an error saying it cannot find lh.sphere.reg (not provided in the repo).
Do I have to create this file manually? If so, could you explain the steps needed to achieve that?

Actual situation

mri_surf2surf: could not read surface .../fsaverage6/surf/lh.sphere.reg
No such file or directory
Return code: 255

Steps to reproduce the issue (optional)

Run the following command:

mri_surf2surf --hemi lh \
  --tval .../PT02/lh.Schaefer2018_1000Parcels_17Networks_order.annot \
  --sval-annot .../fsaverage6/label/lh.Schaefer2018_1000Parcels_17Networks_order.annot \
  --srcsubject fsaverage6 \
  --trgsubject PT02

Thanks for all your work,
Victor

Schaefer_2018 atlas MNI152 space

Expected situation

Hi there, thanks for this amazing work! I have a question about the exact MNI152 template used here. I thought MNI152NLin6Asym was used, which has an affine matrix of this:
array([[ 1., 0., 0., -91.],
[ 0., 1., 0., -126.],
[ 0., 0., 1., -72.],
[ 0., 0., 0., 1.]])

Actual situation

But after fetching the Schaefer_2018 atlas, the affine matrix I got is the following:
array([[ -1., 0., 0., 90.],
[ 0., 1., 0., -126.],
[ 0., 0., 1., -72.],
[ 0., 0., 0., 1.]])

I also found the same affine matrix in UK Biobank neuroimaging data, but they mentioned that MNI152NLin6Asym space was used when generating T1 nii images. Could you let me know which MNI152 space was used in Schaefer_2018 atlas? Thank you very much!
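
As a side note (not an official answer on the template question): the two affines above span the same physical field of view and differ only in the first row, i.e. the data are simply stored with the opposite x-axis voxel order (RAS vs LAS storage). The storage orientation is independent of which nonlinear MNI152 template was used. A small nibabel sketch to inspect this (file name illustrative):

import nibabel as nib

img = nib.load('Schaefer2018_400Parcels_7Networks_order_FSLMNI152_1mm.nii.gz')  # name assumed
print(img.affine)
print(nib.aff2axcodes(img.affine))  # ('L', 'A', 'S') for the flipped-x affine, ('R', 'A', 'S') for the other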

Steps to reproduce the issue (optional)

fsaverage to MNI projection

I've downloaded the standalone scripts for MNI to fsaverage projection. Can these be used to do the reverse, i.e., project data from fsaverage to MNI space?

Thanks,
Marta

CBIG2016 Preproc - Spatial Distortion Correction

Expected situation

I attempted to run spatial distortion correction with fieldmaps that each have more than one volume.

Actual situation

The pipeline implicitly assumes that fieldmaps only have one volume, so the datain.txt file is generated with two lines (reflecting a total of two volumes in the AP_PA.nii.gz file). This creates an error when there are more than two volumes in the AP_PA.nii.gz file:

Topup: msg=topup_clp::topup_clp: Mismatch between /fslgroup/grp_proc/compute/Nielson_analysis/CBIG2016_preproc_FS6/sub-23638/sub-23638/bold/sdc/AP_PA.nii.gz and /fslgroup/grp_proc/compute/Nielson_analysis/CBIG2016_preproc_FS6/sub-23638/sub-23638/bold/sdc/datain.txt

Steps to reproduce the issue (optional)

This error can be reproduced using the multi-echo masking test dataset on Openneuro: https://openneuro.org/datasets/ds002156/versions/2.0.0 (ds002156).

Multi-echo CBIG2016 Preprocessing

Expected situation

When the bold/001/ME_intermediate folder from Tedana is not created, an error should be thrown indicating that the folder does not exist and that there was an issue with Tedana. This would be very helpful for troubleshooting errors and for indicating whether or not Tedana is set up correctly.

Actual situation

The logfile (CBIG_preproc_multiecho_denoise.log) indicates that Tedana finished even though no ME_intermediate folder was created:

=====================combine echos and perform multi-echo ICA using tedana ======================
tedana -d sub-35793_bld001_e1_rest_skip4_mc.nii.gz sub-35793_bld001_e2_rest_skip4_mc.nii.gz sub-35793_bld001_e3_rest_skip4_mc.nii.gz -e 12.4 34.28 56.16 --out-dir /CBIG2016_preproc_FS6/sub-35793/sub-35793/bold/001/ME_intermediate
====================== Multi-echo ICA finished. ======================

Thank you for creating this wonderful resource!

Possible to get the MNI coordinates for the regions for the 7 resting state networks?

Hello! Sorry if this is a silly question but I've been stuck on this for a few days already and I'd really appreciate it if someone can help me out :)

From this link: https://sites.google.com/view/yeolab/software, for the resting state networks and under the Cortical Resting Networks, I was able to download Resting State Cortical Parcellation in nonlinear MNI152 space and then view it in freeview. I'm trying to get the MNI coordinates for each of these 7 regions so I can recreate it (build a network model) in Python.

So far I've also tried using Mango and MATLAB to open the files, but neither gives me the MNI coordinates. May I get some advice?
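
One way to get representative MNI coordinates from the volumetric parcellation is to convert the voxel indices of each network into millimetre coordinates via the image affine and, for example, take their centroid. A Python sketch assuming nibabel is installed; the file name is illustrative:

import numpy as np
import nibabel as nib
from nibabel.affines import apply_affine

img = nib.load('Yeo2011_7Networks_MNI152.nii.gz')   # name assumed
data = np.asarray(img.dataobj).squeeze()

for net in range(1, 8):
    vox = np.argwhere(data == net)                  # voxel indices belonging to this network
    mni = apply_affine(img.affine, vox)             # voxel indices -> MNI coordinates in mm
    print(f'network {net}: centroid {mni.mean(axis=0)}')

Keep in mind that a network is spatially distributed, so a single centroid may fall outside the network itself; per-parcel coordinates (e.g. from the split-labels version) may be more useful for building a network model.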

Warning: Von Mises did not converge after 100 iterations

in stable_projects/brain_parcellation/Kong2019_MSHBM/step1_generate_profiles_and_ini_params

I am running the CBIG_MSHBM_generate_ini_params on my own dataset with 15 subjects, two runs per subject. The inputs are:

seed_mesh          = 'fs_LR_900';
targ_mesh           = 'fs_LR_32k';
num_clusters       = 17;
num_initialization = 1000;

The program runs fine, but I get the following Warning from most iterations. For example:

Iter. 145...  Warning: Von mises did not converge after 100 iterations 
> In direcClus_fix_bessel_bsxfun (line 232)
  In CBIG_VonmisesSeriesClustering_fix_bessel_randnum_bsxfun (line 129)
  In CBIG_MSHBM_generate_ini_params (line 59) 
took 19.683137 second.

Should I worry about these warnings? Should I increase the default number of iterations to avoid it?

apply gradient computed from speedup_gradients to Schaefer2018 parcellation code

Hi, I successfully ran the example code of speedup_gradients with the sample data in fsaverage6 space. Its output is a 74947 * 1 array, which I believe corresponds to the left-hemisphere non-medial-wall vertices (37476 * 1) plus the right-hemisphere non-medial-wall vertices (37471 * 1). I plan to similarly use the output from other data as the gradient prior for the Schaefer parcellation code.

However, the length of the example left hem gradient prior (border) in Schaefer parcellation is 40962, which includes medial wall. Taking a closer look at the values of the sample gradient prior, the medial wall indices' values are not simply zeros, but something similar to other non-medial wall vertices, so it does not seem to be simply put the non-medial wall vertices' values back to the whole surface.

I wonder how the gradient output from speedup_gradients can be converted to the gradient prior for Schaefer parcellation?

Memory requirements for Yeo 2011 stability analysis?

I am attempting to replicate the stability analysis with my own input data, but was not able to get it to finish running. I am wondering if it's a memory issue, whether I could parallelize something so that the analysis will finish, or whether it's user error of some sort.

I ran CBIG_runresamplingK_single.m and am running CBIG_determineK_single.m (I gathered these are the scripts to replicate that analysis; please correct me if I'm wrong). It seems to hang most severely at the two lines running the Hungarian algorithm below, and I don't get past the first value of k, since I think the second Hungarian call with random clusters never finishes. I have 37k vertices per hemisphere, so somewhat larger data than the analysis was originally run on, but I tried running with really large amounts of memory and wasn't successful. How could I investigate the issue more effectively, or do you have any suggestions?

% Do the matching
[matching, cost1] = Hungarian(-assign1*assign2');

% Creating random cluster assignments
r1r  = rand(size(r1));
rr1r = rand(size(rr1));

maxr1r = max(r1r, [], 2);
assign1_random = ~(r1r - maxr1r*ones(1, size(r1, 2)));

maxrr1r = max(rr1r, [], 2);
assign2_random = ~(rr1r - maxrr1r*ones(1, size(r1, 2)));

[matching, cost2] = Hungarian(-assign1_random*assign2_random');

Expected situation

Algorithm finishes with stability values.

Actual situation

Algorithm hangs at 'Creating random cluster assignments'.

Steps to reproduce the issue (optional)

Network ordering of Kong2018_MSHBM

We are trying to release a few scripts to provide the network label with the network name.
The only problem might be that the network structure could be very different if you are training from scratch on a different dataset. Here are two cases:

Use our estimated group priors to estimate individual parcellation

The group atlas will be utilized to initialize the algorithm, therefore, the network order in the individual parcellation will be the same as the group parcellation.
We provided two group priors which are generated by a group atlas estimated by GSP dataset in fsaverage5 space, and a group atlas estimated by HCP dataset in fsLR_32k space.
I will provide these group atlases with corresponding network labels.

Use your own data to train the model and estimate group priors

If you train on your own data, then your group atlas will be different from ours, and the network ordering will also be different. In that case, you will have to run the Hungarian matching algorithm to match the networks in your estimated group atlas with our atlases.
To do that, you can use the following code:

  1. If it's fsaverage5, you can directly use the following commands to reorder the color table and visualize it:
my_par = load('<output_dir>/group/group.mat');
ref_par = load('/Kong2019_MSHBM/examples/results/estimate_group_priors/group/group.mat');
[output, assign, cost, dice_overlap] = CBIG_HungarianClusterMatch([ref_par.lh_labels; ref_par.rh_labels], [my_par.lh_labels; my_par.rh_labels], 1);
colors_old = ref_par.colors(2:end,:);
colors_new = colors_old(assign,:);
colors_new = [[0,0,0]; colors_new];
CBIG_DrawSurfaceMaps(my_par.lh_labels, my_par.rh_labels, 'fsaverage5', 'inflated', -Inf, Inf, colors_new);
  2. If your data is in fs_LR, sorry, I didn't release the HCP group atlas in the current release (will do so as soon as possible), so contact me ([email protected]) to get the group atlas file HCP_40sub_1000iter_17nets_cen_sm4.mat.
my_par = load(fullfile(output_dir,'group','group.mat'));
ref_par = load('HCP_40sub_1000iter_17nets_cen_sm4.mat');
[output, assign, cost, dice_overlap] = CBIG_HungarianClusterMatch([ref_par.lh_labels; ref_par.rh_labels], [my_par.lh_labels; my_par.rh_labels], 1);
colors_old=ref_par.colors(2:end,:);
colors_new=colors_old(assign,:);
CBIG_DrawSurfaceMaps_fslr(my_par.lh_labels, my_par.rh_labels, 'fs_LR_32k', 'inflated',-Inf,Inf,colors_new);

I will try to release all above commands and other useful scripts later.

parcellation in MNI space

Hello,

We have a question that might have a very easy answer, but we are not able to solve it.

We are working on a project where we want to obtain time series from functional MRI images (volumetric space) using the Schaefer 400-parcel, 17-network atlas. However, the available atlases differ from our images in terms of voxel size (1 and 2 mm vs 3 mm in our images). Our first approach was to resize the atlas using spmcalc, but that changed the values of the atlas, making the labels continuous.

So the question is whether there is a way to obtain the atlas at the right resolution in volumetric space. The functional data were analysed using SPM12.

Many thanks, Sarah
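
A label atlas has to be resampled with nearest-neighbour interpolation so that the parcel IDs stay integers. Below is a minimal sketch using nilearn (assuming it is installed; file names are illustrative); in SPM the equivalent is reslicing with nearest-neighbour interpolation (interpolation order 0).

from nilearn import image

atlas = image.load_img('Schaefer2018_400Parcels_17Networks_order_FSLMNI152_2mm.nii.gz')
func = image.load_img('my_functional_3mm.nii.gz')   # any image already on the 3 mm grid (name assumed)

# Nearest-neighbour interpolation keeps the labels as integer parcel IDs.
atlas_3mm = image.resample_to_img(atlas, func, interpolation='nearest')
atlas_3mm.to_filename('Schaefer2018_400Parcels_17Networks_3mm.nii.gz')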

Schaefer2018 parcellation: difference between gordon prior and gordon_water prior?

Hi there, In the example of Schaefer parcellation, the gradient priors (containing fields border and border_matrix) are provided without the need to compute ourselves. However, I'm trying to compute the gradient priors (border and border_matrix) from my own data.

I see in the code it says the paper used option 'gordon', but it's not clear to me how this 'gordon' gradient is computed. In the CBIG repo, there's an implementation of computing gradient, but because it uses watershed, it seems more likely to be 'gordon_water' option in Schaefer parcellation code? However, in Gordon et al, 2016 paper's implementation, there is also a watershed step, so I'm confused about the two. What's the difference between 'gordon' and 'gordon_water'?

Error in CBIG_VonmisesSeriesClustering_fix_bessel_randnum_bsxfun

In step1: CBIG_MSHBM_generate_ini_params

An error occurred:
In an assignment A(I) = B, the number of elements in B and I
must be the same.

When calling function CBIG_VonmisesSeriesClustering_fix_bessel_randnum_bsxfun
(line 171: rh_s(l2) = s(length(l1)+1:end);)

My MATLAB version is R2014a in Linux and the data are in HCP format (dtseries.nii).
I ran the function following the user guide and the first 2 scripts in step1 had already been finished without error.

How should I fix this error?

Abbreviations of parcel names - Post vs. PostC

Hi there,

Could you please help clarify the difference between the parcel abbreviations "Post" and "PostC"? There is no "Post" here https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/brain_parcellation/Schaefer2018_LocalGlobal/Parcellations, only "PostC" for postcentral. The same holds for the 17 networks https://github.com/ThomasYeoLab/CBIG/blob/master/stable_projects/brain_parcellation/Yeo2011_fcMRI_clustering/1000subjects_reference/Yeo_JNeurophysiol11_SplitLabels/Yeo2011_17networks_N1000.split_components.glossary.csv. However, in the 7 networks https://github.com/ThomasYeoLab/CBIG/blob/master/stable_projects/brain_parcellation/Yeo2011_fcMRI_clustering/1000subjects_reference/Yeo_JNeurophysiol11_SplitLabels/Yeo2011_7networks_N1000.split_components.glossary.csv, we only have "Post", which refers to "posterior". What do you mean by posterior? Is it the same as postcentral? Thanks a lot.

Best,
Zhengchen

Schaefer parcellation gap between parcels with HCP data

Hi there, I successfully ran Schaefer parcellation code on HCP resting-state fMRI data of 500 subjects. However, there are gaps between the parcels (see below) consisting of vertices with seemingly random parcel assignment. I wonder why it is like this and if there's a way to adjust it to the type of no-gap parcels from the example (the bottom image)?

I used the following settings in CBIG_gwMRF_build_data_and_perform_clustering.m function

num_left_cluster = 50
num_right_cluster = 50
start_gamma = 5e6;
exponential = 15;
smooth_cost = 1e5;
num_iterations = 100; (gamma_head becomes Inf in the middle of the iterations; I also tried num_iter = 7 such that all gamma_head values are finite in the iterations, but the outputs from the two values are quite similar)
num_runs = 1;

I computed the gradient prior file using the speedup_gradient code

HCP 500 subjects:
Schaefer_HCP_L50R50

From example fsaverage6 data:
Schaefer_fs6_L50R50

atlas - roi

Hello
Congrats on your impressive work!
I am new to atlas manipulation. I tried various atlases in ggseg but cannot find one that includes the following regions:

  • frontal cortex,
  • thalamus,
  • hippocampus,
  • basal ganglia,
  • cerebellum,
  • spinal cord

Are you aware of such an atlas, or of how I could adapt yours?
My goal is to plot quantitative measures for each of these regions.

thank you

Have you considered making a singularity/docker container with the required software?

I am trying to help a researcher set up the CBIG-0.10.2-Schaefer2018_LocalGlobal repository on a server running Ubuntu 18.04.3. I set up all of the paths for the required software libraries in a copy of the CBIG_gwMRF_tested_config.sh file. (The researcher is primarily interested in the Schaefer2018_LocalGlobal stable project.)

After following all of the steps in the README.md, I sourced my .bashrc and was confronted with warnings that all of the versions of the required software that were already installed on the researcher's local server were the incorrect ones:

Setting up environment for FreeSurfer/FS-FAST (and FSL)
FREESURFER_HOME   /usr/local/freesurfer
FSFAST_HOME       /usr/local/freesurfer/fsfast
FSF_OUTPUT_FORMAT nii.gz
SUBJECTS_DIR      /usr/local/freesurfer/subjects
MNI_DIR           /usr/local/freesurfer/mni
FSL_DIR           /usr/local/fsl
[WARNING]: This version of CBIG repository's default FREESURFER version is 5.3.0.
You appear to be using FREESURFER 6.0.0 (which may or may not work with current repo). Switch to FREESURFER version 5.3.0 if possible.
Note: stable projects may not follow default setting, refer to the proper config file of the project you use. 

[WARNING]: This version of CBIG repository's default FSL version is 5.0.8.
You appear to be using FSL 6.0.1 (which may or may not work with current repo). Switch to FSL version 5.0.8 if possible.
Note: stable projects may not follow default setting, refer to the proper config file of the project you use. 

[WARNING]: This version of CBIG repository's default WORKBENCH version is 1.1.1.
You appear to be using WORKBENCH 1.3.2 (which may or may not work with current repo). Switch to WORKBENCH version 1.1.1 if possible.
Note: stable projects may not follow default setting, refer to the proper config file of the project you use. 

[WARNING]: This version of CBIG repository's default AFNI version is AFNI_2011_12_21_1014.
You appear to be using AFNI AFNI_19.1.18 'Caligula' (which may or may not work with current repo). Switch to AFNI version AFNI_2011_12_21_1014 if possible.
Note: stable projects may not follow default setting, refer to the proper config file of the project you use. 

ln: failed to create symbolic link '/usr/local/CBIG-0.10.2-Schaefer2018_LocalGlobal/.git/hooks/pre-commit': No such file or directory
ln: failed to create symbolic link '/usr/local/CBIG-0.10.2-Schaefer2018_LocalGlobal/.git/hooks/pre-push': No such file or directory

Nowhere in the documentation do you specify beforehand which versions of these dependencies are required.

Given that your tool has such specific requirements, have you considered making a Singularity and/or Docker container with the correct versions? Or can I safely ignore these warnings and continue with the newer versions of the dependencies?

Thanks,
suzanne

in Kong2019, step2 : incorrectly expect .nii.gz file format for fsLR_32k profiles

The profiles are saved as .mat files in the case of fsLR_32k mesh (cf. step1_generate_profiles_and_ini_params/CBIG_MSHBM_generate_profiles.m line 109).

However step2_estimate_priors/CBIG_MSHBM_estimate_group_priors.m line 548 tries to read a MRI file. I believe a simple fix is to replace l. 548,
from
[~, series, ~] = CBIG_MSHBM_read_fmri(avg_file);
to the following:
tmp = load(avg_file); series = tmp.profile_mat;

A similar edit is needed in step3_generate_ind_parcellations/CBIG_MSHBM_generate_individual_parcellation at line 531

zeros (background) labels in HCP fs_32k space

The released parcellation labels in HCP surface space for Yeo and Schaefer (I just checked these two) contain a few hundred vertices marked as zeros (background) at non-medial-wall positions, which should only contain parcellation labels (non-zeros). This is probably due to misregistration between the volume space and the surface? I tried a nearest-neighbor interpolation, but it looks ugly.

Do you have suggestions for other possible quick fixes? It would also definitely be better if you refined the labels to exclude those zeros; after all, it doesn't make sense to have zeros at these positions.

Zero-labeled vertices: (screenshot attached)

Nearest-neighbor (1-hop neighbor of the surface tessellation) interpolation of Yeo2011_7Networks_N1000.dlabel.nii: (screenshot attached)

template to fsaverage5

Hi, I have a question. I want to map rh.white data from an individual subject to fsaverage5.
I know I can use mri_surf2surf, but I don't know how to set the parameters (srcsurfval, src_type, trg_surf):
mri_surf2surf --srcsubject bert --srcsurfval thickness --src_type curv --trg_type curv --trgsubject fsaverage5 --trgsurfval white --hemi rh

So, how should I set these parameters for rh.white?

Replicate the work of Schaefer

Expected situation

Actual situation

Steps to reproduce the issue (optional)

Dear CBIG community:

Hello! I think you did a great job on the human brain parcellation. Recently, I've been trying to use your method to generate a parcellation for our animal model. I used Gordon's method to generate the boundary files; I obtained 32k boundaries and averaged them, which seems different from your method. I see that the boundary file you provided is named waterfiles.mat. I think that you obtain the 32k*32k gradient matrix, average it by row, and then obtain the segmented_lines. Is that true?

Besides, I'm trying to use MARS_computeLogOdds to generate the border with the file you provided. However, after compiling the MARS_DT_Boundary.c file, I encountered an unexpected error that the system could not find MARS_DT_Boundary.mexw64, even though I ran the code in that folder. I guess it might be caused by the operating system, because I am using Windows.

Looking forwards to your reply.

Regards,
Zhaojin

Inconsistent LUT labelling of Left/Right Accumbens area

Hi everyone,

After projecting the Schaefer2018 7/17-network atlases to individual space and subsequently relabelling them using MRtrix3's labelconvert, I found that one of the nodes (Left-Accumbens-area) was missing in my subjects. The reason seems to be that the left accumbens area is labeled "Left-Accumbens-a" in the provided LUT files, while the right one is labeled "Right-Accumbens-area". So just something minor, but I thought it might be worth mentioning.

Hope this helps!

NAN values for vertices within VisCent parcel timeseries matrix

Hi CBIG Team,

I am working on extracting matrices of functional timeseries for all vertices within a given parcel (using the Schaefer & Yeo 400 region parcellation), and I have noticed that one parcel in particular (VisCent) contains NAN values for a number of vertices. I have encountered an older issue post noting that vertices along the medial wall may be listed as NAN, but I wanted to confirm whether this explains the NAN values within the VisCent parcel, or if this might be the result of a registration issue or perhaps the inclusion of off-brain data in the VisCent parcel.

All of my data have been preprocessed per the standard CBIG pipeline, and I am using a lightly modified version of the 'CBIG_ComputeROIs2ROIsCorrelationMatrix.m' function to extract and save the timeseries matrices in question.

Thanks in advance for your assistance.

HCP fs_lr32k 1000 parcel atlas cannot be loaded

Hi all,

I wanted to map the Schaefer 1000 parcellation onto the vertices of the fs_LR_32k surface. However, when I try loading the .dlabel.nii file using nibabel's nibabel.load(), it returns an error saying Cannot work out file type of "/Schaefer2018_1000Parcels_17Networks_order.dlabel.nii". I also tested loading the same data with cifti-MATLAB, using ft_read_cifti('Schaefer2018_1000Parcels_7Networks_order.dlabel.nii', 'mapname', 'array'); however, I end up getting this error:

Warning: could not determine filetype of Schaefer2018_1000Parcels_7Networks_order.dlabel.nii
Error using read_nifti2_hdr (line 56)
cannot open Schaefer2018_1000Parcels_7Networks_order.dlabel.nii as nifti file, hdr size = 168430090, should be 348 or 540

Error in ft_read_cifti (line 79)
hdr = read_nifti2_hdr(filename);

If I use the same loading functions with the 400 parcel atlas it works. Could it be that the 1000 parcel atlas file is corrupted?
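
A quick way to check what was actually downloaded (a plain-Python sketch): a valid CIFTI-2 file is a NIfTI-2 file, so its first four bytes are the little-endian integer 540. The reported header size of 168430090 corresponds to four 0x0A (newline) bytes, which suggests the file on disk is text (for example an HTML page or a Git LFS pointer) rather than the binary atlas; re-downloading the raw file should then fix it.

with open('Schaefer2018_1000Parcels_17Networks_order.dlabel.nii', 'rb') as f:
    head = f.read(64)
print(head)   # readable text here means a placeholder was downloaded, not the actual CIFTI data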

creating FSL mask from Schaefer parcellation

Expected situation

I want to create binary masks in FSL based on one of the Schaefer2018 parcellations. I opened the nii.gz and corresponding lookup table in FSL. I selected only the parcels of interest from the lookup table (mPFC parcels). I expected to be able to save a binary mask based on this image.

(screenshot attached)

Actual situation

I cannot figure out how to create a binary mask in FSL based on the parcels I selected. When I tried to add the Schaefer atlas as an atlas in FSL, I could not, because FSL requires an XML specification, which I could not find here. How can I use the Schaefer NIfTI file and lookup table to create an FSL atlas, so that I can create a binary mask for selected regions?
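
Independently of registering the atlas with FSL's atlas tools, a binary mask for selected parcels can be written directly. Below is a minimal Python sketch with nibabel; the parcel IDs are placeholders (take the real mPFC IDs from the lookup table) and the file names are illustrative. The same result can be obtained with fslmaths by thresholding the label image at each parcel ID (-thr <id> -uthr <id> -bin) and adding the pieces.

import numpy as np
import nibabel as nib

atlas = nib.load('Schaefer2018_400Parcels_7Networks_order_FSLMNI152_2mm.nii.gz')
data = np.asarray(atlas.dataobj)

mpfc_ids = [85, 86, 88]                          # hypothetical parcel IDs from the LUT
mask = np.isin(data, mpfc_ids).astype(np.uint8)  # 1 inside the selected parcels, 0 elsewhere

nib.save(nib.Nifti1Image(mask, atlas.affine), 'mpfc_mask.nii.gz')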

Steps to reproduce the issue (optional)

Need to run gradient step for estimating group priors with Areal cMSHBM?

Expected situation

The READMEs and the example seem to make it clear that running the gradient steps is optional and only necessary for gMSHBM.

Actual situation

I am running cMSHBM and getting an error in the estimate group priors step that the "gradient_list" files are not found:

Error using readtable (line 198)
Unable to open file
'<project_dir>/estimate_group_priors/gradient_list/training_set/gradient_list_lh.txt'.

Error in CBIG_ArealMSHBM_cMSHBM_estimate_group_priors_parent>fetch_data (line
749)
lh_gradient_name = table2cell(readtable(lh_data_gradient, 'Delimiter', ' ',
'ReadVariableNames', false));

Error in CBIG_ArealMSHBM_cMSHBM_estimate_group_priors_parent (line 192)
fetch_data(project_dir, setting_params.num_session, setting_params.num_sub,
setting_params.mesh, tmp_dir);

Steps to reproduce the issue (optional)

I think this could be easily reproduced by following the steps with new study data (rather than the example data, which includes pre-computed files).

Thanks!

Missing parcel in Schaefer 1000 conte69

Hello! First, thanks for providing this great resource.

Our group is using different versions/resolutions of the Schaefer parcellations, and we noticed that the 1000-node parcellation cifti files seem to be missing a parcel in the right hemisphere. We are using:

  • 'Schaefer2018_1000Parcels_7Networks_order.dlabel.nii': no label 533
  • 'Schaefer2018_1000Parcels_17Networks_order.dlabel.nii' no label 555
    As a result, we find 501 unique labels in the left hemisphere, and only 500 unique labels in the right hemisphere.

All labels seem to be present in the fsaverage5 annotation labels however, with the same number of unique labels in each hemisphere.

Many thanks for your help!

Unrecognized function or variable 'MARS_computeMeshFaceAreas'

Dear CBIG group,

I used the 'standalone_scripts_for_MNI_fsaverage_projection' to transform from MNI152 to fsaverage space.

I was able to implement the following function in Matlab:
[lh_proj_data, rh_proj_data] = CBIG_RF_projectMNI2fsaverage('MNI_probMap_ants.central_sulc.nii.gz');

However, when I tried to plot the results using the function below:
CBIG_DrawSurfaceMaps(lh_proj_data, rh_proj_data, 'fsaverage', 'inflated', 0, 1);

I got the error message that: Unrecognized function or variable 'MARS_computeMeshFaceAreas'.
Note that I have downloaded the whole repo and added it in the path of Matlab.

I was able to locate the MARS_computeMeshFaceAreas.c file at CBIG-master/external_packages/SD/SDv1.5.1-svn593/BasicTools but did not find a .m file of it. I'm wondering if you can kindly give some advice on how to resolve the issue?

Thanks in advance!
Chuanji

Question: Network Number <-> Network Name

Thank you so much for making this data available!

Expected situation

Being easily able to link the network number with the network name

Actual situation

Difficult to link network number to network name

Steps to reproduce the issue (optional)

I'm trying to tie the striatal network atlas to the Schaefer parcellation, but the striatal atlas lists the networks as 1, 2, 3, etc., whereas the Schaefer atlas gives names to the networks without specifying an order (e.g. VisCent, VisPeri, etc.).

I was curious whether I'm missing some obvious link between network 1 and, say, the VisCent or VisPeri network, or whether there is a file that specifies this relationship.

Thank you!
James

the freesurfer version when using fsaverage-MNI152 mapping

Hello, I would like to know whether the FreeSurfer version would influence the results of the "fsaverage-MNI152 mapping" tools. In the paper (Wu et al., 2018), the FreeSurfer version used is 4.5.0, which is quite old, and it seems that the fsaverage templates are slightly different across versions.

Compatibility of the .lut files with fsleyes

Hello,

I’m trying to add the Schaefer_2018 atlas into FSL following the instructions in the README file, but it seems that the .lut file format is not supported by fsleyes (and that the old fslview is now deprecated).

Is there some way that the atlas can be added?

Thank you in advance,
Taly

Error when running CBIG_RNN_ensemble_prediction.sh file

Create mask for D2 subjects
Generate test data
train 1667 subjects
test 896 subjects
22 features
Predict using 1st set of pre-trained weights
usage: predict.py [-h] --checkpoint CHECKPOINT --data DATA --prediction
PREDICTION
predict.py: error: argument --prediction/-p is required
Predict using 2nd set of pre-trained weights
usage: predict.py [-h] --checkpoint CHECKPOINT --data DATA --prediction
PREDICTION
predict.py: error: argument --prediction/-p is required
Predict using 3rd set of pre-trained weights
usage: predict.py [-h] --checkpoint CHECKPOINT --data DATA --prediction
PREDICTION
predict.py: error: argument --prediction/-p is required
Predict using 4th set of pre-trained weights
usage: predict.py [-h] --checkpoint CHECKPOINT --data DATA --prediction
PREDICTION
predict.py: error: argument --prediction/-p is required

Missing parcels are different between 17 Networks and 7 Networks (Schaefer2018, fsLR 1000 parcels)

Expected situation

There are two parcels missing in the fsLR template, and I understand this is related to the difference between the medial wall in fsaverage and fsLR. I thought the missing parcels might have different names between 17 Networks and 7 Networks but the same IDs.

Actual situation

However, the missing parcels differ between 17 Networks and 7 Networks: in the 17-network version the missing labels are 555 and 908, while in the 7-network version they are 533 and 903 (screenshot attached).
What caused the difference?

Steps to reproduce the issue (optional)

run Schaefer 18 parcellation code with HCP data

Hi there, I'm trying to run the Schaefer2018 parcellation code with HCP data. I've computed average gradients of both hemispheres with Gordon's code, which are of dimension #vertices * #vertices (excluding the medial wall). However, loading a sample gradient matrix in this repo gives two variables, border and border_matrix, with dimensions 1 * #vertices and 6 * #vertices respectively (including the medial wall). It's not clear to me how to compute these gradient-related variables, which are used for the actual parcellation, from the gradient matrix I have.

Second, there is a function CBIG_build_sparse_gradient in CBIG_gwMRF_graph_cut_clustering_split_newkappa_prod.m, with the comment "this function only works for fsaverageX". I wonder what this function is for, i.e. why another "sparse gradient" needs to be computed from the gradient files. And is this function, which seems to compute neighborhood vertices but treats the first 12 vertices differently from the remaining ones, applicable to HCP surface space (fs_LR_32k)?

Thanks.

Mapping from 17 to 7 and vice versa

Is there an array you have that can map the ordering of the parcels in the 17 network solution to the 7 network solution, and vice versa? Basically, we have data made in the 17 network ordering, but want to know how it would look in the 7 network ordering.
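
There may be an official lookup table for this, but a mapping can also be derived from the released files themselves: to my knowledge the parcels are identical between the 7- and 17-network versions, only their ordering and network names differ, so matching parcels by vertex overlap recovers the correspondence. A Python sketch with nibabel, assuming the fsaverage .annot files (file names illustrative); repeat for the right hemisphere:

import numpy as np
import nibabel.freesurfer.io as fsio

lab17, _, names17 = fsio.read_annot('lh.Schaefer2018_400Parcels_17Networks_order.annot')
lab7,  _, names7  = fsio.read_annot('lh.Schaefer2018_400Parcels_7Networks_order.annot')

# For each 17-network parcel, find the 7-network parcel occupying the same vertices.
mapping = {}
for p in np.unique(lab17):
    if p == 0:                      # skip the medial wall / background label
        continue
    vals = lab7[lab17 == p]
    vals = vals[vals >= 0]          # guard against unlabeled vertices
    mapping[names17[p].decode()] = names7[np.bincount(vals).argmax()].decode()

print(mapping)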
