
bibsnet's People

Contributors

ericfeczko, erikglee, gregconan, hough129, lucimoore, lundq163, madisoth, paul-reiners, rosemccollum, tikal004, tjhendrickson


bibsnet's Issues

Exactly one BIBSnet seg file

What happened?

Hello, I am using the most recent Singularity image and ran into this error:

INFO 2023-06-19 00:03:32,519: Finished dilating left/right segmentation mask

ERROR 2023-06-19 00:03:32,520: There must be exactly one BIBSnet segmentation file: /workingdir/bibsnet/sub-1007/ses-SCANS/output/*.nii.gz
Resume at postBIBSnet stage once this is fixed.

What command did you use?

module purge
module load singularity
singularity run --nv --cleanenv \
-B /projects/b1174/cabinet_test/bidsdir:/input \
-B /projects/b1174/cabinet_test/outdir:/output \
-B /projects/b1174/cabinet_test/param_files/param_file.json:/param_file.json \
-B /projects/b1174/cabinet_test/workingdir:/workingdir \
/projects/b1174/singularity_images/cabinet_2.4.0.sif \
/input /output participant -jargs /param_file.json -end postbibsnet -v -participant 1007 -w /workingdir

What version of CABINET are you using?

2.4.0

Relevant log output

INFO 2023-06-19 00:03:32,519: Finished dilating left/right segmentation mask

ERROR 2023-06-19 00:03:32,520: There must be exactly one BIBSnet segmentation file: /workingdir/bibsnet/sub-1007/ses-SCANS/output/*.nii.gz
Resume at postBIBSnet stage once this is fixed.

Add any additional information or context about the problem here.

Thank you!

Make aseg-derived mask only once

A short summary of what you would like to see in CABINET.

This is very minor, but in postbibsnet, two brainmasks are derived from the final segmentations (in native T1w and T2w spaces). It would be cleaner to generate a single brainmask from the BIBSnet output segmentation (chirality correction doesn't matter for this) and then register it to the native T1w and T2w spaces in the same manner as we do with the segmentation:
https://github.com/DCAN-Labs/CABINET/blob/t2output/src/utilities.py#L1242
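A minimal sketch of the mask-derivation step proposed above (the function name is illustrative, not CABINET's actual code; moving the single mask into native T1w/T2w space would reuse the existing transform-application code linked above):

```python
import numpy as np

# Hedged sketch: derive ONE brainmask directly from the BIBSnet output
# segmentation (any nonzero label counts as brain), then apply the same
# registrations already used for the segmentation, instead of deriving a
# separate mask in each native space.
def binarize_segmentation(seg_data):
    """Collapse a labeled segmentation array into a binary brainmask."""
    return (seg_data > 0).astype(np.uint8)
```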

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

No response

participants.json / sub-{}_sessions.json

A short summary of what you would like to see in CABINET.

  • require a participants.json or a sub-{}_sessions.json to be paired with the participants.tsv and sub-{}_sessions.tsv, respectively
  • see the BIDS specification for what a participants.json / sub-{}_sessions.json looks like: https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#participants-file:~:text=UTF%2D8%20encoding.-,Participants%20file,-Template%3A
  • validate that the age column's units are "months" before using the column
  • probably also validate the units of brain_z_size (unsure what these would be)
  • if the units do not match what CABINET needs, fail and throw an error telling the user to change the TSV/JSON to the correct units before processing with CABINET
  • we could possibly convert the units ourselves, but that might be odd for infants; it is more likely that the paired JSON accidentally has the wrong units than that the TSV really lists infant ages in years
  • this validation should happen at the very beginning, when parameters are validated, in order to fail fastest for the user
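The units check in the list above could look something like this minimal sketch (the function name and error wording are illustrative, not CABINET's actual code; it assumes a BIDS-style sidecar where each column has a "Units" field):

```python
import json

def validate_age_units(json_path):
    """Fail fast if the age column's Units field is not 'months'.

    Assumes a BIDS-style sidecar shaped like:
      {"age": {"Description": "...", "Units": "months"}}
    """
    with open(json_path) as f:
        sidecar = json.load(f)
    units = sidecar.get("age", {}).get("Units")
    if units != "months":
        # Per the request above: tell the user to fix their files, don't convert
        raise ValueError(
            f"age column Units must be 'months', got {units!r}; "
            "please correct the TSV/JSON before running CABINET")
```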

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

No response

Refactor functions that perform averaging in pre-bibsnet

What happened?

This bug report covers remaining issues with the code on the averaging-fix-2023-01 branch.

The intermediate files required for the pipeline to finish successfully are correct, but additional files that may be important for debugging have some issues. Instead of addressing the various issues individually, I think we should refactor the functions to be more modular and flexible, the way the other prebibsnet functions are written. Here is a list of current issues with the code as is:

  • the file named "desc-avg" under the prebibsnet average folder ends up being averaged multiple times due to some redundancy in the code; however, the final _0000/_0001 files used in subsequent pipeline steps are correctly averaged, so the pipeline itself should still work

  • there's no log output for when the averaging occurs, only the registration steps, so I had to visually inspect to confirm that the averaged file was actually created

  • change the code to register runs 2 and above to run 1; it currently registers run 1 to run 2

  • will also need to test that it still works when there are more than 2 input T1s and/or T2s

  • the .mat file gets overwritten because it's missing the run number; again, this isn't critical because these .mat files are never used in the pipeline, but we should either name them correctly or not create them
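The run-registration and .mat-naming fixes in the list above might look like this sketch (the output file naming and the flirt invocation are illustrative assumptions, not the pipeline's actual code):

```python
import os

def build_registration_cmds(run_files, out_dir):
    """Register runs 2..N to run 1 (not run 1 to run 2), one flirt call each.

    Each command gets a per-run -omat name so the .mat files are never
    overwritten. Works for any number of input T1s or T2s.
    """
    ref = run_files[0]  # run 1 is always the reference
    cmds = []
    for i, moving in enumerate(run_files[1:], start=2):
        mat = os.path.join(out_dir, f"run{i}_to_run1.mat")
        out = os.path.join(out_dir, f"run{i}_reg.nii.gz")
        cmds.append(["flirt", "-in", moving, "-ref", ref,
                     "-out", out, "-omat", mat])
    return cmds
```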

What command did you use?

NA

What version of CABINET are you using?

2.0.0

Relevant log output

No response

Add any additional information or context about the problem here.

No response

odd behavior of preBIBSnet with newborn subject from WashU

This is a more detailed explanation of the issue that was arising last week. If you would like to follow up on it (not urgent), I can provide you with paths to all the relevant inputs and outputs. I'm posting this issue for documentation purposes, in case something similar arises again in the future (my problem with this particular subject is solved, using additional manual interventions).

  1. The preBIBSnet "final" files from the two different paths (xfms and ACPC) show different amounts of cropping (one still has shoulders, the other doesn't).
  2. The xfms path is chosen as input to BIBSnet even though the ACPC path has much better alignment when both are compared visually. Is there a way to find the Dice coefficient on which this selection is based?
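For reference on the Dice question: the Dice coefficient between two binary masks is, in general, computed as below (a generic sketch, not CABINET's actual selection code):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary arrays: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    # Two empty masks count as perfect overlap by convention
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```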

CABINET has an issue converting 6-month-old subjects' age to int

What happened?

Essentially, when passing 6 as my subject's age in months, Python has an issue recognizing it as an int. I was initially trying to do this in a loop, then attempted to bypass the issue by simply running a single subject manually, but unfortunately ran into the same issue. I'm sure it is not too complicated an error, but I figured I'd get an extra pair of eyes. Thank you for all the help :)

What command did you use?

docker run -it \
    -v /Users/sealab/Documents/MRI/BABIES/Six_Month/bids:/input \
    -v /Users/sealab/Documents/MRI/BABIES/Six_Month/derivatives/CABINET:/output \
    -v /Users/sealab/Documents/MRI/BABIES/Six_Month/derivatives/work/cabinet_work:/work_tmp \
    -v /Users/sealab/Documents/MRI/BABIES/CABINET_parameter/parameter-file-container.json:/param_file.json \
    dcanumn/cabinet \
    /input /output participant \
    -participant 1503 \
    -ses sixmonth \
    -age 6 \
    -jargs /param_file.json \
    -end postbibsnet \
    -start prebibsnet \
    -w /work_tmp

What version of CABINET are you using?

dcanumn/cabinet latest e28757885510 2 months ago 38.6GB (I think 2.4 just updated 2 weeks ago)

Relevant log output

tsv_df.shape:  (2, 2)
tsv_df.columns:  Index(['session', 'age'], dtype='object')
Traceback (most recent call last):
  File "/home/cabinet/cabinet", line 1044, in <module>
    main()
  File "/home/cabinet/cabinet", line 69, in main
    json_args, sub_ses_IDs = get_params_from_JSON([get_stage_name(stg) for stg
  File "/home/cabinet/cabinet", line 211, in get_params_from_JSON
    return validate_cli_args(vars(parser.parse_args()), stage_names,
  File "/home/cabinet/cabinet", line 289, in validate_cli_args
    sub_ses_IDs[ix]["age_months"] = read_from_tsv(
  File "/home/cabinet/cabinet", line 519, in read_from_tsv
    desired_output = get_col_value_from_tsv(j_args, logger, tsv_path, ID_col, col_name, sub_ses)
  File "/home/cabinet/cabinet", line 557, in get_col_value_from_tsv
    return int(subj_row[col_name])
  File "/opt/conda/lib/python3.8/site-packages/pandas/core/series.py", line 185, in wrapper
    raise TypeError(f"cannot convert the series to {converter}")
TypeError: cannot convert the series to <class 'int'>
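For what it's worth, the TypeError at the bottom of this traceback is what pandas raises when int() is applied to a Series with more than one element, e.g. when the session filter matches more than one row (the log shows tsv_df.shape: (2, 2)). A minimal reproduction with the same column names:

```python
import pandas as pd

# Two sessions for one subject, as in the log above
df = pd.DataFrame({"session": ["sixmonth", "newborn"], "age": [6, 1]})

# int() on a two-element Series raises the same TypeError as the traceback:
try:
    int(df["age"])
except TypeError as err:
    print(err)  # cannot convert the series to <class 'int'>

# Filtering down to exactly one row first converts cleanly:
age = int(df.loc[df["session"] == "sixmonth", "age"].iloc[0])
```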


Add any additional information or context about the problem here.

No response

Issues with Segmentation and Surfaces Generated from BIBSNET

What happened?

After running our subjects' data through CABINET, and then using CABINET's BIBSNET-generated segmentations to process the subjects through recon-all to generate surfaces, we began to detect a variety of labeling and alignment issues. The screenshots below are indicative of some of the issues we observed. We noticed that the BIBSNet segmentation commonly mislabeled the left and right hemispheres, and delineated different views incorrectly (such as axial as coronal, etc.). We really appreciate your help and all that you do! Overall CABINET has been awesome so far; we're excited to keep using it and hopefully help improve it as well :)

What command did you use?

I wrote a script that loops over a batch of bids formatted subjects. This is the command used:

    docker run -t \
        -v "${input_dir}:/input" \
        -v "${output_dir}:/output" \
        -v "${param_file}:/param_file.json" \
        dcanumn/cabinet:t1-only_t2-only \
        /input /output \
        participant \
        -participant "${participant_id}" \
        -ses "${session}" \
        -age "${age}" \
        -jargs /param_file.json \
        --end postbibsnet \
        -v

Recon-all command used:

    infant_recon_all --s ${sub} --age ${age} \
    --inputfile ${SUBJECTS_DIR}/${sub}/mprage.nii.gz \
    --segfile ${SUBJECTS_DIR}/${sub}/aseg.nii.gz \
    --masked ${SUBJECTS_DIR}/${sub}/mprage.nii.gz

What version of CABINET are you using?

dcanumn/cabinet t1-only_t2-only a67201fd14b6 6 months ago 38.4GB

Relevant log output

No response

Add any additional information or context about the problem here.

recon-all:

bibsnet:

Question link is wrong

What happened?

When you go to submit an issue, there is a link for asking questions, but it links to some random CABINET GitHub that is definitely not ours: https://github.com/CABINET. I can help develop/implement a template for asking questions if needed!

What command did you use?

Question link

What version of CABINET are you using?

Issues

Directory Structure

No response

Relevant log output

No response

Add any additional information or context about the problem here.

No response

Adding manually corrected segmentations from WashU babies to BIBSnet model

A short summary of what you would like to see in CABINET.

Some attempts to use the 512 BIBSnet model with T1- and T2-weighted images collected at WashU resulted in missing parts in the segmentation. To increase the generalizability of the model in the future, it could be helpful to add some manually corrected segmentations of these subjects to the set of images the model is trained on.

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

No response

Issues copying concatenated .mat file from pre to post bibsnet when multiple sessions are included for one subject

What happened?

Copying the concatenated .mat file from pre- to postbibsnet fails when it's T2-only; this results in exit code 1 because postbibsnet can't complete.

What command did you use?

current cabinet singularity run command

What version of CABINET are you using?

old container, working_dir branch bound (soon to be merged into main)

Relevant log output

Traceback (most recent call last):
  File "/home/cabinet/cabinet", line 1014, in <module>
    main()
  File "/home/cabinet/cabinet", line 76, in main
    run_all_stages(STAGES, sub_ses_IDs, json_args["stage_names"]["start"],
  File "/home/cabinet/src/utilities.py", line 1331, in run_all_stages
    sub_ses_j_args = stage(sub_ses_j_args, logger)
  File "/home/cabinet/cabinet", line 648, in run_preBIBSnet
    shutil.copy2(concat_mat, out_mat_fpath)
  File "/opt/conda/lib/python3.8/shutil.py", line 435, in copy2
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/opt/conda/lib/python3.8/shutil.py", line 264, in copyfile
    with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/cabinet/postbibsnet/sub-CENSORED/ses-10mo/preBIBSnet_full_crop_T2w_to_BIBS_template.mat'

Add any additional information or context about the problem here.

No response

Issue running prebibsnet and bibsnet stages, then later running postbibsnet

What happened?

I am trying to run a test where I take a pre-existing run of bibsnet [the container] (which included prebibsnet, bibsnet, and postbibsnet), and then move it to a new location, specify a new output directory, and delete the "postbibsnet" folder under the working directory. Then what I am interested in is restarting bibsnet at the "postbibsnet" stage, taking as input the original BIDS data and the working directory (again, after the postbibsnet folder has been deleted).

When I do this, the "postbibsnet" section of the code gets very close to finishing. But based on the attached error log, it seems like the code is expecting some file to be there that doesn't exist. I don't know if this is because the original derivatives folder and the work/postbibsnet folder had been deleted, or if there is some complication that has come up because the pipeline started populating the working directory under one path (for prebibsnet and bibsnet) and is finishing populating the working directory at another path (for postbibsnet). Or I suppose it could be something else more generic to the process of running the container stage by stage.

Any idea what might be going on and how this could be fixed?

What command did you use?

/usr/bin/singularity run --nv \
-B /home/faird/shared/projects/HBCD_MIDB_IG/cabinet/bibsnet_test_from_barry/roo_data1_eriks_copy/input/:/input \
-B /home/faird/shared/projects/HBCD_MIDB_IG/cabinet/bibsnet_test_from_barry/roo_data1_eriks_copy/derivatives/:/output \
-B /home/faird/shared/projects/HBCD_MIDB_IG/cabinet/bibsnet_test_from_barry/roo_data1_eriks_copy/work/:/work \
/home/faird/shared/code/internal/pipelines/bibsnet_container/bibsnet_clusters-fix-2023.10.16.sif \
/input /output participant -jargs /home/cabinet/parameter-file-container.json -start postbibsnet -end postbibsnet -v \
--participant-label M1003 \
-w /work

What version of BIBSnet are you using?

bibsnet_clusters-fix-2023.10.16.sif

Directory Structure

No response

Relevant log output

"uname": executable file not found in $PATH
Traceback (most recent call last):
  File "/home/bibsnet/bibsnet", line 48, in <module>
    main()
  File "/home/bibsnet/bibsnet", line 39, in main
    run_all_stages(STAGES, sub_ses_IDs, json_args["stage_names"]["start"],
  File "/home/bibsnet/src/utilities.py", line 223, in run_all_stages
    sub_ses_j_args = stage(sub_ses_j_args)
  File "/home/bibsnet/src/postbibsnet.py", line 73, in run_postBIBSnet
    nii_outfpath = reverse_regn_revert_to_native(
  File "/home/bibsnet/src/postbibsnet.py", line 557, in reverse_regn_revert_to_native
    preBIBSnet_mat = glob(preBIBSnet_mat_glob).pop()
IndexError: pop from empty list

Add any additional information or context about the problem here.

No response

Dataset_description.json error

What happened?

Hi again! almost there! I am running into this error:

Traceback (most recent call last):
  File "/home/cabinet/cabinet", line 1044, in <module>
    main()
  File "/home/cabinet/cabinet", line 76, in main
    run_all_stages(STAGES, sub_ses_IDs, json_args["stage_names"]["start"],
  File "/home/cabinet/src/utilities.py", line 1331, in run_all_stages
    sub_ses_j_args = stage(sub_ses_j_args, logger)
  File "/home/cabinet/cabinet", line 821, in run_postBIBSnet
    os.remove(new_data_desc_json)
FileNotFoundError: [Errno 2] No such file or directory: '/output/bibsnet/dataset_description.json'

dataset_description.json.txt

I've attached the dataset_description.json that is in my bidsdir (as a text file so I can upload it, but it's a JSON on my system). Thanks!

What command did you use?

module purge
module load singularity
singularity run --nv --cleanenv \
-B /projects/b1174/cabinet_test/bidsdir:/input \
-B /projects/b1174/cabinet_test/outdir:/output \
-B /projects/b1174/cabinet_test/param_files/param_file.json:/param_file.json \
-B /projects/b1174/cabinet_test/workingdir:/workingdir \
/projects/b1174/singularity_images/cabinet_2.4.0.sif \
/input /output participant -jargs /param_file.json -end postbibsnet -v -participant 1007 -w /workingdir --overwrite

What version of CABINET are you using?

2.4.0

Relevant log output

See above.

Add any additional information or context about the problem here.

Thank you :)

Run Nibabies + XCP-D

A short summary of what you would like to see in CABINET.

Wrap Nibabies and XCP-D to run within CABINET.

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

No response

Read only file system error

What happened?

Erroring out because of a read-only file system (the full traceback appears in the log output section below):

OSError: [Errno 30] Read-only file system: '/home/adp9359'

What command did you use?

#!/bin/bash
#SBATCH --account=p31681  ## YOUR ACCOUNT pXXXX or bXXXX
#SBATCH --partition=gengpu
#SBATCH --nodes=1 ## how many computers do you need
#SBATCH --ntasks-per-node=1 ## how many cpus or processors do you need on each computer
#SBATCH --gres=gpu:a100:1  ## type of GPU requested, and number of GPU cards to run on
#SBATCH --time=04:00:00 ## how long does this need to run (remember different partitions have restrictions on this param)
#SBATCH --mem=40G ## how much RAM do you need per CPU (this effects your FairShare score so be careful to not ask for more than you need))
#SBATCH --job-name=run_ibeat ## When you run squeue -u NETID this is how you can identify the job
#SBATCH --output=cabinet_test_1007.log ## standard out and standard error goes to this file

module purge
module load singularity

singularity run --nv --cleanenv --no-home \
-B /projects/b1174/cabinet_test/bidsdir:/input \
-B /projects/b1174/cabinet_test/outdir:/output \
-B /projects/b1174/cabinet_test/param_files/param_file.json:/param_file.json \
/projects/b1174/singularity_images/cabinet:latest.sif \
/input /output participant -jargs /param_file.json -end postbibsnet -v -participant 1007

What version of CABINET are you using?

latest from singularity

Relevant log output

Traceback (most recent call last):
  File "/opt/conda/bin/nnUNet_predict", line 33, in <module>
    sys.exit(load_entry_point('nnunet', 'console_scripts', 'nnUNet_predict')())
  File "/opt/conda/bin/nnUNet_predict", line 25, in importlib_load_entry_point
    return next(matches).load()
  File "/opt/conda/lib/python3.8/importlib/metadata.py", line 77, in load
    module = import_module(match.group('module'))
  File "/opt/conda/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/cabinet/nnUNet/nnunet/inference/predict_simple.py", line 19, in <module>
    from nnunet.inference.predict import predict_from_folder
  File "/home/cabinet/nnUNet/nnunet/inference/predict.py", line 21, in <module>
    from batchgenerators.augmentations.utils import resize_segmentation
  File "/opt/conda/lib/python3.8/site-packages/batchgenerators/augmentations/utils.py", line 22, in <module>
    from skimage.transform import resize
  File "/opt/conda/lib/python3.8/site-packages/skimage/__init__.py", line 141, in <module>
    from .data import data_dir
  File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
  File "/opt/conda/lib/python3.8/site-packages/lazy_loader/__init__.py", line 76, in __getattr__
    submod = importlib.import_module(submod_path)
  File "/opt/conda/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/opt/conda/lib/python3.8/site-packages/skimage/data/_fetchers.py", line 270, in <module>
    _init_pooch()
  File "/opt/conda/lib/python3.8/site-packages/skimage/data/_fetchers.py", line 247, in _init_pooch
    os.makedirs(data_dir, exist_ok=True)
  File "/opt/conda/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/opt/conda/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/opt/conda/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  [Previous line repeated 1 more time]
  File "/opt/conda/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/home/adp9359'

Add any additional information or context about the problem here.

Thank you!

Generated only one half of the aseg file

What happened?

Dear Cabinet Experts,

I ran CABINET on a T2w image for segmentation. While the T2w quality was good, the output aseg file unfortunately contained only one half of the brain.

This is puzzling, and I'm reaching out in the hope of getting insights from you. Any idea about the potential reasons?

Please note that this is the first time my team has encountered this issue. We'd greatly appreciate your thoughts on what might be causing it.
Thank you in advance!

image

What command did you use?

docker run -t \
    -v /Users/sealab/Documents/MRI/ABC/newborn/bids:/input \
    -v /Users/sealab/Documents/MRI/ABC/newborn/derivatives/CABINET:/output \
    -v /Users/sealab/Documents/MRI/ABC/newborn/derivatives/work/cabinet_work:/work_tmp \
    -v /Users/sealab/Documents/MRI/BABIES/CABINET_parameter/parameter-file-container.json:/param_file.json \
    dcanumn/cabinet:2.4.2 \
    /input /output participant \
    -participant sub-12020 -ses ses-newborn -age 1 \
    -jargs /param_file.json -end postbibsnet -start prebibsnet -w /work_tmp -v

What version of CABINET are you using?

2.4.2

Directory Structure

No response

Relevant log output

No response

Add any additional information or context about the problem here.

No response

CABINET expecting "session" column in session tsv file instead of "session_id"

What happened?

Based on testing and conversations with @hough129, it looks like the current version of CABINET expects the sessions TSV file to have a column named "session", whereas according to the BIDS specification this should be "session_id". When I try running processing with a TSV file that has the "session_id" column, I receive an error that ends like: KeyError: 'age'
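A hedged sketch of how the TSV reader could tolerate both column names (the function name is illustrative, not CABINET's actual code):

```python
import pandas as pd

def read_sessions_tsv(tsv_path):
    """Read a sessions TSV, accepting both the BIDS-standard 'session_id'
    column and the 'session' column the current code expects."""
    df = pd.read_csv(tsv_path, sep="\t")
    if "session" not in df.columns and "session_id" in df.columns:
        df = df.rename(columns={"session_id": "session"})
    return df
```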

What command did you use?

singularity run -B /home/faird/shared/projects/HBCD_MIDB_IG/cabinet/cabinet_240_test/:/dir /home/faird/shared/code/internal/pipelines/cabinet_container/cabinet_v2-4-0.sif /dir/bids/ /dir/out/ participant -jargs /home/cabinet/parameter-file-container.json

What version of CABINET are you using?

2.4.0

Relevant log output

Traceback (most recent call last):
  File "/home/cabinet/cabinet", line 519, in read_from_tsv
    desired_output = get_col_value_from_tsv(j_args, logger, tsv_path, ID_col, col_name, sub_ses)
  File "/home/cabinet/cabinet", line 539, in get_col_value_from_tsv
    tsv_df = pd.read_csv(
  File "/opt/conda/lib/python3.8/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 586, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 488, in _read
    return parser.read(nrows)
  File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 1047, in read
    index, columns, col_dict = self._engine.read(nrows)
  File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 310, in read
    index, names = self._make_index(data, alldata, names)
  File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/base_parser.py", line 415, in _make_index
    index = self._get_simple_index(alldata, columns)
  File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/base_parser.py", line 447, in _get_simple_index
    i = ix(idx)
  File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/base_parser.py", line 442, in ix
    raise ValueError(f"Index {col} invalid")
ValueError: Index session invalid
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 3361, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'age'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/cabinet/cabinet", line 1044, in <module>
    main()
  File "/home/cabinet/cabinet", line 69, in main
    json_args, sub_ses_IDs = get_params_from_JSON([get_stage_name(stg) for stg
  File "/home/cabinet/cabinet", line 211, in get_params_from_JSON
    return validate_cli_args(vars(parser.parse_args()), stage_names,
  File "/home/cabinet/cabinet", line 289, in validate_cli_args
    sub_ses_IDs[ix]["age_months"] = read_from_tsv(
  File "/home/cabinet/cabinet", line 529, in read_from_tsv
    desired_output = get_col_value_from_tsv(j_args, logger, tsv_path, ID_col, col_name, sub_ses)
  File "/home/cabinet/cabinet", line 557, in get_col_value_from_tsv
    return int(subj_row[col_name])
  File "/opt/conda/lib/python3.8/site-packages/pandas/core/series.py", line 942, in __getitem__
    return self._get_value(key)
  File "/opt/conda/lib/python3.8/site-packages/pandas/core/series.py", line 1051, in _get_value
    loc = self.index.get_loc(label)
  File "/opt/conda/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 3363, in get_loc
    raise KeyError(key) from err
KeyError: 'age'

Add any additional information or context about the problem here.

No response

Add json sidecar to accompany *_space-orig_desc-aseg_dseg.nii.gz files

A short summary of what you would like to see in CABINET.

It would be nice if files like bibsnet/*_space-orig_desc-aseg_dseg.nii.gz had an accompanying JSON file with relevant info for the user. For example, it would be great if this JSON could have a field along the lines of "RegisteredTo" or "IntendedFor" saying which file from the BIDS directory the segmentation is registered to.

It could also be nice to have summary statistics for the different segmented regions here. This could just include the volume of each region in the atlas, but could also include statistics about the signal intensity of voxels within each region (mean, standard deviation, etc.).
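A minimal sketch of what generating such a sidecar could look like ("RegisteredTo" and "RegionVolumes" are the hypothetical field names proposed in this request, and both function names are illustrative):

```python
import json
import numpy as np

def region_volumes_mm3(dseg_data, voxel_dims_mm):
    """Volume in mm^3 of each labeled region in a discrete segmentation."""
    voxel_mm3 = float(np.prod(voxel_dims_mm))
    labels, counts = np.unique(dseg_data[dseg_data > 0], return_counts=True)
    return {int(lab): int(n) * voxel_mm3 for lab, n in zip(labels, counts)}

def write_dseg_sidecar(json_path, registered_to, volumes):
    # "RegisteredTo" names the BIDS file the segmentation is registered to
    with open(json_path, "w") as f:
        json.dump({"RegisteredTo": registered_to,
                   "RegionVolumes": volumes}, f, indent=2)
```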

Do you have any interest in helping implement the feature?

No :(

Add any additional information or context about the request here.

No response

alignment issues in WashU data

What happened?

We are still seeing some alignment issues, so we will also have to adjust the alignment procedure, using sub 2; we're unsure what drove this. Luci Moore, Kimberly Weldon, or Eric Feczko would be a good fit for troubleshooting this alignment problem; someone will need to experiment on the command line to find something that works.

What command did you use?

the current cabinet container

What version of CABINET are you using?

the current cabinet container

Relevant log output

No response

Add any additional information or context about the problem here.

No response

Error downloading FSL in Docker build

What happened?

The FSL download times out about half the time and the build fails.
I sometimes have to rerun a build 2 or 3 times to get it to go.

Here's an example build log

What command did you use?

This is generated on dockerhub during automated builds.

What version of BIBSnet are you using?

3.0.0

Directory Structure

N/A

Relevant log output

2023-10-13T16:11:07Z #7 19.27 Downloading FSL ...
2023-10-13T16:23:50Z #7 782.9 curl: (56) OpenSSL SSL_read: Connection timed out, errno 110
2023-10-13T16:23:50Z #7 782.9
2023-10-13T16:23:50Z #7 782.9 gzip: stdin: unexpected end of file
2023-10-13T16:23:50Z #7 782.9 tar: Unexpected EOF in archive
2023-10-13T16:23:50Z #7 782.9 tar: Error is not recoverable: exiting now
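One possible mitigation for transient timeouts like the one above is curl's built-in retry support; a hedged Dockerfile fragment (FSL_URL is a placeholder for the actual download URL, and --retry-all-errors requires curl 7.71 or newer):

```shell
# Hypothetical Dockerfile step: retry the FSL download instead of failing
# the whole build on one timeout. --retry handles transient errors;
# --retry-all-errors also retries connection resets like the SSL_read
# timeout in the log above; -f makes curl fail on HTTP errors so a bad
# response never reaches tar.
RUN curl -fL --retry 5 --retry-delay 15 --retry-all-errors "$FSL_URL" \
    | tar -xz -C /opt
```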

Add any additional information or context about the problem here.

No response

Minor bug when running BIBSNet without overwrite flag when there are partial existing intermediate outputs in tmp space

What happened?

Very small bug: in prebibsnet, if the crop2full.mat files exist, it skips cropping with robustfov. We should also check for the presence of the cropped T1 and T2 (eg cropped/T1w/sub-_ses-_0000.nii.gz) before skipping robustfov.
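The proposed check could be sketched like this (the function name and file paths are illustrative, not the pipeline's actual layout):

```python
import os

def should_skip_robustfov(crop2full_mat, cropped_image):
    """Skip robustfov only when BOTH the saved crop2full.mat AND the cropped
    image it produced exist; a .mat alone means cropping was interrupted."""
    return os.path.exists(crop2full_mat) and os.path.exists(cropped_image)
```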

What command did you use?

running application outside of container

What version of CABINET are you using?

2.0.0

Relevant log output

No response

Add any additional information or context about the problem here.

No response
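A minimal sketch of the stricter check (file names are illustrative; the real paths live under the working directory's cropped/T1w and cropped/T2w folders):

```python
import os


def can_skip_cropping(mat_path, cropped_img_path):
    """Skip robustfov only when BOTH the crop2full.mat transform and the
    cropped anatomical it produced exist, not just the .mat file."""
    return os.path.exists(mat_path) and os.path.exists(cropped_img_path)
```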

Error thrown for 0 month old subjects

What happened?

For some reason I am getting an error when I try to run CABINET on subjects whose age is 0 months. When I change their age in the sessions tsv file, CABINET is able to run correctly. Based on conversations with @hough129 , this is likely because CABINET doesn't have a 0 month template.

What command did you use?

singularity run -B /home/faird/shared/projects/HBCD_MIDB_IG/cabinet/cabinet_240_test_2/:/dir /home/midb-ig/shared/containers/leex6144/cabinet_2_4_0.sif /dir/bids /dir/output participant -end postbibsnet -jargs /home/cabinet/parameter-file-container.json

What version of CABINET are you using?

2.4.0

Relevant log output

Traceback (most recent call last):
  File "/home/cabinet/cabinet", line 1044, in <module>
    main()
  File "/home/cabinet/cabinet", line 69, in main
    json_args, sub_ses_IDs = get_params_from_JSON([get_stage_name(stg) for stg
  File "/home/cabinet/cabinet", line 211, in get_params_from_JSON
    return validate_cli_args(vars(parser.parse_args()), stage_names,
  File "/home/cabinet/cabinet", line 289, in validate_cli_args
    sub_ses_IDs[ix]["age_months"] = read_from_tsv(
  File "/home/cabinet/cabinet", line 519, in read_from_tsv
    desired_output = get_col_value_from_tsv(j_args, logger, tsv_path, ID_col, col_name, sub_ses)
  File "/home/cabinet/cabinet", line 546, in get_col_value_from_tsv
    print("ensure_prefixed: ", ensure_prefixed(sub_ses[1], "ses-") if ID_col == "session_id" else ensure_prefixed(sub_ses[0], "sub-"))
IndexError: tuple index out of range

Add any additional information or context about the problem here.

No response
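Separately from the IndexError above, a cheap guard for the missing 0-month template would be to clamp the requested age to the nearest age that actually has templates (the age list below is a placeholder, not the real set shipped with CABINET):

```python
def closest_template_age(age_months, available_ages=(1, 2, 3, 6, 9, 12, 18, 24)):
    """Map a subject's age in months to the nearest available template age,
    so a 0-month-old falls back to the 1-month templates."""
    return min(available_ages, key=lambda a: abs(a - age_months))
```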

Update readthedocs section on age column of participants.tsv file

A short summary of what you would like to see in CABINET.

https://cabinet.readthedocs.io/en/stable/participants/

I think we should add a note here to use 1 instead of 0 for age if the subject is a neonate since we don't currently have 0mo templates

we should also explain what happens when the subject age is much older - I assume it just uses the oldest set of templates over a certain age?

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

No response

Document T1-only / T2-only

A short summary of what you would like to see in CABINET.

We need to make sure we're explaining that you need to remove the T1s to run T2-only, and vice versa

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

No response

postBIBSnet chirality correction not accurate around midline

What happened?

Chirality correction during postBIBSnet is in many cases not accurate around the medial plane (see example image), which causes the MCRIBS surface reconstruction in Nibabies to crash.

image

What command did you use?

# run cabinet 
env -i ${singularity} run \
-B ${data_dir}/:/input \
-B ${out_dir}/:/output \
-B /home/faird/shared/code/external/utilities/freesurfer_license/license.txt:/opt/freesurfer/license.txt \
/home/faird/shared/code/internal/pipelines/cabinet_container/cabinet_shrink_docker_build_plus_clusters_fix_09272023a.sif \
/input /output participant -jargs /home/cabinet/parameter-file-container.json -start prebibsnet -end postbibsnet -v

What version of BIBSnet are you using?

latest

Relevant log output

No response

Add any additional information or context about the problem here.

Happy to help solving this issue as I have plenty of datasets where I'm planning to use the Nibabies MCRIBS workflow combined with BIBSnet.

Issues with Segmentation File

What happened?

For a handful of our one-month subjects with decent-quality T1 and T2 images, we are observing major issues with the segmentation file created by CABINET: the locations of the brain regions are very off. We provided an image of a subject that exemplifies this issue. We're noticing this more with the T2 segmentation files. Surprisingly, this isn't happening for all subjects, so it is unclear why only some are affected.

What command did you use?

docker run -t \
        -v "${input_dir}:/input" \
        -v "${output_dir}:/output" \
        -v "${work_dir}:/work_tmp" \
        -v "${param_file}:/param_file.json" \
        dcanumn/cabinet:2.4.2 \
        /input /output \
        participant \
        -participant "${participant_id}" \
        -ses "${session}" \
        -age "${age}" \
        -jargs /param_file.json \
        -end postbibsnet \
        -start prebibsnet \
        -w /work_tmp \
		-v

What version of CABINET are you using?

2.4.2

Relevant log output

No log output. Issues with the final product

Add any additional information or context about the problem here.

Screen Shot 2023-06-30 at 1 28 12 PM

Error when running model 515 - T2w-only

Hi @hough129, thanks for pointing to a solution. I am testing with model 515 (T2w-only). However, it threw an error:

CABINET: error: CABINET needs T1w data at the path below to run model 515, but none was found.

docker version: dcanumn/cabinet:2.4.2
parameters: participant -participant sub-13399 -ses ses-newborn -age 1 -jargs /param_file.json -end postbibsnet -start prebibsnet --model-number 515 -w /work_tmp -v

Thank you in advance!

Originally posted by @yanbin-niu in https://github.com/DCAN-Labs/CABINET/issues/59#issuecomment-1636113718

Change Parameter File from Being Subject/Session Specific to Study Specific

A short summary of what you would like to see in CABINET.

Currently the parameter file that CABINET requires to run needs the user to specify the BIDS folder, output folder, and subject/session IDs. While this works okay for processing a few subjects/sessions this does not scale well as a unique parameter file is needed per subject/session.

I propose that the BIDS folder, output folder and subject/session IDs parameters are removed from the parameter file and instead inserted as required command line arguments. This will allow CABINET to run more programmatically at scale.

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

No response
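A sketch of what the CLI surface could look like after the move (argument names here are illustrative, not the pipeline's actual interface):

```python
import argparse


def build_parser():
    """Per-run paths and IDs come from the command line; the JSON keeps
    only study-wide settings shared by every subject/session."""
    p = argparse.ArgumentParser(prog="cabinet")
    p.add_argument("bids_dir", help="input BIDS directory")
    p.add_argument("output_dir", help="derivatives output directory")
    p.add_argument("analysis_level", choices=["participant"])
    p.add_argument("-participant", help="participant label, e.g. 01")
    p.add_argument("-ses", help="session label, e.g. ses-1mo")
    p.add_argument("-jargs", help="study-wide parameter JSON")
    return p
```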

Quality check output from Segmentation pipeline

A short summary of what you would like to see in CABINET.

It would be great to have a feature that outputs some images (jpgs) overlaying the segmentation on the T1 or T2 (depending on input) for quick quality screening.
Happy to help work on it if somebody tells me how it works.

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

No response
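In practice this is a few lines with nilearn (e.g. `plot_roi` with the segmentation over the anatomical) or matplotlib's `imshow` with an alpha channel; the core operation is just a per-voxel alpha blend. A dependency-free stand-in for that blend (real code would operate on the NIfTI data arrays):

```python
def alpha_blend(anat_slice, seg_slice, alpha=0.4):
    """Blend a segmentation overlay onto an anatomical slice, pixel-wise:
    out = (1 - alpha) * anatomical + alpha * overlay."""
    return [[(1 - alpha) * a + alpha * s for a, s in zip(arow, srow)]
            for arow, srow in zip(anat_slice, seg_slice)]
```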

Test possible bugs with --overwrite flag

What happened?

I was debugging by running the application and had produced all of the preBIBSnet outputs up to the point where the preBIBSnet output anatomicals are copied into the BIBSnet input folder. I ran the application again using the --overwrite flag (for reasons that aren't relevant here), and it errored at https://github.com/DCAN-Labs/CABINET/blob/main/run.py#L666 because the output file didn't exist yet from the prior run, so when it tried to remove the file there was a file-not-found error.

I would suggest that we always run the copy command whether the file exists or not (if it doesn't exist, the copy will create it, and if it does exist from a prior run, it will be overwritten).

So replace:

        if j_args["common"]["overwrite"]:  # TODO Should --overwrite delete old image file(s)?
            os.remove(out_nii_fpath)
        if not os.path.exists(out_nii_fpath): 
            shutil.copy2(transformed_images[f"T{t}w"], out_nii_fpath)

with:

        if j_args["common"]["overwrite"]:  # TODO Should --overwrite delete old image file(s)?
            shutil.copy2(transformed_images[f"T{t}w"], out_nii_fpath)        

We'll also want to look through the rest of the code base to make sure this issue doesn't exist elsewhere

What command did you use?

not relevant - logic is apparent in code

What version of CABINET are you using?

2.4.0

Relevant log output

No response

Add any additional information or context about the problem here.

No response

Edit DockerHub Documentation

What happened?

The current DockerHub documentation refers to FSL as FreeSurferLearner, where I think you mean to say FMRIB Software Library (see below):

"fsl_bin_path": string, a valid absolute path to existing bin directory in FreeSurferLearner (FSL). Example: "/opt/fsl-6.0.5.1/bin/"

What command did you use?

NA

What version of CABINET are you using?

NA

Relevant log output

No response

Add any additional information or context about the problem here.

No response

Incorporate Model 552

A short summary of what you would like to see in CABINET.

  • Adjust dockerfile
  • Adjust application to call 552 instead of other T1 and T2 combination model
  • Perhaps make adjustments based on Tim's findings

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

No response

Allow user to input different brain z values for T1 and T2 in participants.tsv

A short summary of what you would like to see in CABINET.

It might be useful in the future to allow different brain-z size inputs for T1 vs T2. Sometimes robustfov is finicky with infant data and might choose different locations for the top of the brain depending on the contrast of the images, so the optimal cropping won't always be the same for T1 and T2 (eg T1 will look fine and T2 is over-cropped)

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

Low priority

"uname" executable file not found

What happened?

Tried to run the latest singularity container on single participant data. Received a `"uname": executable file not found in $PATH` error and the container quit.

What command did you use?

run --nv --cleanenv --no-home -B /study/dean_k99/Studies/infmri/processed-data/rawdata:/input -B /local/dean:/output -B /local/dean/cabinet/params.json:/param_file.json cabinet.sif /input /output participant -jargs /param_file.json -participant 001 -ses 01 -age 1  -end postbibsnet -v

What version of CABINET are you using?

latest

Relevant log output

No response

Add any additional information or context about the problem here.

No response

T2 images rotated during T2 to T1 alignment in preBIBSNet

What happened?

For a number of subjects (newborn infants from eLABE dataset) T2 images are rotated during T2 to T1 alignment step in preBIBSNet. The cropped images and even the ACPC aligned images in preBIBSNet look fine but once the T2 gets registered to the T1 it gets rotated. This speaks to an issue in the FSL Flirt T2 to T1 registration step which affects the transformation matrix that is produced during that step. The rotation of the T2 leads to faulty segmentations using model 551.

What command did you use?

env -i ${singularity} run \
-B ${data_dir}/input:/input \
-B ${data_dir}/processed:/output \
-B ${data_dir}/work:/workdir \
-B ${run_dir}/license.txt:/opt/freesurfer/license.txt \
/home/faird/shared/code/internal/pipelines/cabinet_container/cabinet_v2-4-0.sif \
/input /output participant -w workdir -jargs /home/cabinet/parameter-file-container.json -start prebibsnet -end postbibsnet -v

What version of CABINET are you using?

2.4.0

Relevant log output

No response

Add any additional information or context about the problem here.

No response

cannot properly average multiple T1s and T2s when running preBIBSnet

What happened?

When there are multiple T1s and/or T2s, it writes the flirt .mat file for averaging out to the top-level directory instead of under preBIBSnet, then later fails to find that .mat file.

What command did you use?

${singularity} run --nv --cleanenv --no-home \
> -B ${data_dir}:/input \
> -B ${data_dir}/processed/cabinet:/output \
> -B ${run_dir}/license.txt:/opt/freesurfer/license.txt \
> /home/faird/shared/code/internal/pipelines/cabinet_container/cabinet_latest.sif \
> /input /output participant -jargs /home/cabinet/parameter-file-container.json -start prebibsnet -end postbibsnet -v

What version of CABINET are you using?

the latest version of CABINET (11-15-2022)

Relevant log output

"uname": executable file not found in $PATH

INFO 2023-01-06 13:58:18,264: Subject details from participants.tsv row:
  participant_id       session age
0      sub-M1003  ses-20191205   1

INFO 2023-01-06 13:58:18,267: Subject age in months: 1
Closest BCP age in months in age-to-head-radius table: 1

INFO 2023-01-06 13:58:18,315: /home/cabinet/cabinet /input /output participant -jargs /home/cabinet/parameter-file-container.json -start prebibsnet -end postbibsnet -v

INFO 2023-01-06 13:58:18,315: All parameters from input args and input .JSON file:
{'common': {'fsl_bin_path': '/opt/fsl-6.0.5.1/bin/', 'task_id': None, 'bids_dir': '/input', 'overwrite': False, 'verbose': True}, 'resource_management': {'mem_mb': None, 'n_cpus': None, 'nipype_plugin_file': None, 'nthreads': None, 'omp_nthreads': None, 'resource_monitor': None}, 'bibsnet': {'model': '3d_fullres', 'nnUNet_predict_path': '/opt/conda/bin/nnUNet_predict', 'code_dir': '/home/cabinet/SW/BIBSnet'}, 'meta': {'script_dir': '/home/cabinet', 'slurm': False}, 'stage_names': {'start': 'prebibsnet', 'end': 'postbibsnet'}, 'optional_out_dirs': {'prebibsnet': '/output/prebibsnet', 'bibsnet': '/output/bibsnet', 'postbibsnet': '/output/postbibsnet', 'nibabies': '/output/nibabies', 'xcpd': '/output/xcpd', 'derivatives': '/output'}}

INFO 2023-01-06 13:58:18,315: All required input files exist.

INFO 2023-01-06 13:58:18,315: Now running prebibsnet stage on:
{'subject': 'sub-M1003', 'session': 'ses-20191205', 'age_months': 1, 'brain_z_size': 119, 'has_T1w': True, 'has_T2w': True, 'model': 512}
230106-13:58:19,80 nipype.utils WARNING:
	 A newer version (1.8.4) of nipy/nipype is available. You are using 1.7.0

WARNING 2023-01-06 13:58:19,080: A newer version (1.8.4) of nipy/nipype is available. You are using 1.7.0
flirt -in /input/sub-M1003/ses-20191205/anat/sub-M1003_ses-20191205_run-02_T2w.nii.gz -ref /input/sub-M1003/ses-20191205/anat/sub-M1003_ses-20191205_run-01_T2w.nii.gz -out sub-M1003_ses-20191205_run-02_T2w_flirt.nii.gz -omat sub-M1003_ses-20191205_run-02_T2w_flirt.mat -bins 640 -searchcost mutualinfo
230106-14:10:13,88 nipype.interface INFO:
	 stderr 2023-01-06T14:10:13.087741:Could not open file sub-M1003_ses-20191205_run-02_T2w_flirt.mat for writing

INFO 2023-01-06 14:10:13,088: stderr 2023-01-06T14:10:13.087741:Could not open file sub-M1003_ses-20191205_run-02_T2w_flirt.mat for writing
230106-14:10:15,100 nipype.interface INFO:
	 stderr 2023-01-06T14:10:15.100832:Error: cant open file sub-M1003_ses-20191205_run-02_T2w_flirt.nii.gz

INFO 2023-01-06 14:10:15,100: stderr 2023-01-06T14:10:15.100832:Error: cant open file sub-M1003_ses-20191205_run-02_T2w_flirt.nii.gz
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 454, in aggregate_outputs
    setattr(outputs, key, val)
  File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/traits_extension.py", line 330, in validate
    value = super(File, self).validate(objekt, name, value, return_pathlike=True)
  File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/traits_extension.py", line 135, in validate
    self.error(objekt, name, str(value))
  File "/opt/conda/lib/python3.8/site-packages/traits/base_trait_handler.py", line 74, in error
    raise TraitError(
traits.trait_errors.TraitError: The 'out_matrix_file' trait of a FLIRTOutputSpec instance must be a pathlike object or string representing an existing file, but a value of '/sub-M1003_ses-20191205_run-02_T2w_flirt.mat' <class 'str'> was specified.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/cabinet/cabinet", line 1002, in <module>
    main()
  File "/home/cabinet/cabinet", line 76, in main
    run_all_stages(STAGES, sub_ses_IDs, json_args["stage_names"]["start"],
  File "/home/cabinet/src/utilities.py", line 1260, in run_all_stages
    sub_ses_j_args = stage(sub_ses_j_args, logger)
  File "/home/cabinet/cabinet", line 528, in run_preBIBSnet
    create_anatomical_average(preBIBSnet_paths["avg"])  # TODO make averaging optional with later BIBSnet model?
  File "/home/cabinet/src/utilities.py", line 426, in create_anatomical_average
    register_and_average_files(avg_params["T{}w_input".format(t)],
  File "/home/cabinet/src/utilities.py", line 1052, in register_and_average_files
    registered_files = register_files(input_file_paths, reference)
  File "/home/cabinet/src/utilities.py", line 1072, in register_files
    res = flt.run()
  File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 401, in run
    outputs = self.aggregate_outputs(runtime)
  File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/fsl/preprocess.py", line 742, in aggregate_outputs
    outputs = super(FLIRT, self).aggregate_outputs(
  File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 461, in aggregate_outputs
    raise FileNotFoundError(msg)
FileNotFoundError: No such file or directory '/sub-M1003_ses-20191205_run-02_T2w_flirt.mat' for output 'out_matrix_file' of a FLIRT interface

Add any additional information or context about the problem here.

problem occurred when running an s3 wrapper but was later reproduced in the terminal interactively via an srun
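The traceback suggests FLIRT's output paths were relative, so nipype resolved them against the container's working directory (`/`), which is not writable ("Could not open file sub-M1003_ses-20191205_run-02_T2w_flirt.mat for writing"). A defensive fix is to always hand the interface absolute paths built under the preBIBSnet work directory, e.g. with a helper like this (illustrative, not the pipeline's actual code):

```python
import os


def absolutize_outputs(out_dir, *filenames):
    """Return absolute paths under out_dir (creating it if needed) so FLIRT
    never tries to write its .mat/.nii.gz outputs into the CWD."""
    out_dir = os.path.abspath(out_dir)
    os.makedirs(out_dir, exist_ok=True)
    return [os.path.join(out_dir, name) for name in filenames]
```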

Add the 526 model into BIBSnet and CABINET.

A short summary of what you would like to see in CABINET.

Add the 526 model into BIBSnet and CABINET.

Do you have any interest in helping implement the feature?

Yes, but I would need guidance

Add any additional information or context about the request here.

No response

eta squared bug

What happened?

Gregory and I need to calculate eta values for the three other subjects to see where the eta problem is. We originally used sub 1, which wasn't a good test case as it had good alignment in both choices.

What command did you use?

the current container

What version of CABINET are you using?

the current container

Relevant log output

No response

Add any additional information or context about the problem here.

No response

Repository Data Directory

A short summary of what you would like to see in CABINET.

  • add the contents of the data.zip file back into the cabinet repository on github (now that the repository is public, we shouldn't have the space issue we had while it was private)
  • remove the dockerfile command to pull the data.zip file from the s3
  • will need to test that the container still works with this method before adding it into main
  • remove data.zip from the s3 once the container works with it hosted directly within the repository
  • we'll then be able to explain and document the template files etc more in the readthedocs (i.e where they came from, how they are used, where the user can find them in the repo if they are curious)
  • after this, it would be good to BIDS-ify the template names, but that would involve some code changes too (more of an ask than changing the file names)

Do you have any interest in helping implement the feature?

Yes!

Add any additional information or context about the request here.

No response

Is the T2-only Model flag up and running?

A short summary of a question you have for the CABINET development team.

We have found in our lab (SEA Lab at Vandy) that we generally get higher-quality results from running the T2-only model exclusively. Currently, we have been manually removing our T1s to force BIBSnet to use the T2-only model. I had an issue a few months back and was told this flag was in development, so I just wanted to check in. I looked at the usage page of the docs and saw it mentioned, but wasn't sure whether it is ready yet.

Support for sessions tsv files

A short summary of what you would like to see in CABINET.

In convos with Cecile, Tim and I have realized that it is not BIDS valid to have multiple lines for a given subject in a participants.tsv file. So for HBCD Cecile will be making sessions tsv files (https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#sessions-file) that will contain the participants age in months at the time of a session. Would it be possible to have CABINET support reading these files to determine a subject's age?

Do you have any interest in helping implement the feature?

No :(

Add any additional information or context about the request here.

No response
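Reading the age from a BIDS sessions file is a small amount of code; a sketch with the standard library (the `session_id` column name follows the BIDS sessions-file spec, but the `age` column name is an assumption about how these files will be written):

```python
import csv
import io


def age_from_sessions_tsv(tsv_text, session_id, age_col="age"):
    """Return the age (in months) recorded for one session in a
    sub-<label>_sessions.tsv file."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    for row in reader:
        if row["session_id"] == session_id:
            return int(row[age_col])
    raise ValueError(f"session {session_id!r} not found")
```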

Specifying all three stages of segmentation pipeline (preBIBSnet, BIBSnet, postBIBSnet) without previously generated outputs results in error

The segmentation pipeline behaves oddly when all three stages are specified start to end in one call. Without any previously run stages, if the stages are specified as -start preBIBSnet -end postBIBSnet, the following error is thrown:

usage: run.py [-h] [-start {preBIBSnet,BIBSnet,postBIBSnet,nibabies,XCPD}] [-end {preBIBSnet,BIBSnet,postBIBSnet,nibabies,XCPD}] [--script-dir SCRIPT_DIR] parameter_json
run.py: error: The file(s) below are needed to run the BIBSnet stage, but they do not exist.
/output/BIBSnet/sub-470437/ses-1mo/input/sub-470437_ses-1mo_optimal_resized_0000.nii.gz

It appears that the preBIBSnet routines somehow get skipped and the pipeline moves right on to the BIBSnet stage; however, when just the preBIBSnet stage is run (-start preBIBSnet -end preBIBSnet), it completes just fine.
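One way to make the behavior consistent (a sketch, not the actual run.py logic) is to compute the inclusive stage slice first and validate required inputs only for the first stage that will actually run, since every later stage's inputs are produced by the stages before it:

```python
def stages_to_run(all_stages, start, end):
    """Inclusive slice of the pipeline; only all_stages[i] (the first stage
    that will actually run) needs its inputs validated up front."""
    i, j = all_stages.index(start), all_stages.index(end)
    if i > j:
        raise ValueError("start stage comes after end stage")
    return all_stages[i:j + 1]
```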
