
c-pac's Introduction

C-PAC: Configurable Pipeline for the Analysis of Connectomes


License: LGPL

A configurable, open-source, Nipype-based, automated processing pipeline for resting state fMRI data. Designed for use by both novice users and experts, C-PAC brings the power, flexibility and elegance of Nipype to users in a plug-and-play fashion; no programming required.

Website

The C-PAC website is located here: https://fcp-indi.github.io/

How to Run

Instructions can be found within our quick-start guide: https://fcp-indi.github.io/docs/latest/user/quick

Documentation

User documentation can be found here: https://fcp-indi.github.io/docs/latest/user

Developer documentation can be found here: https://fcp-indi.github.io/docs/latest/developer

Documentation pertaining to this latest release can be found here: https://fcp-indi.github.io/docs/latest/user/release_notes/latest

Discussion Forum

If you are stuck and need help or have any other questions or comments about C-PAC, there is a C-PAC discussion forum here: https://neurostars.org/tag/cpac

Issue Tracker and Bugs

This is a beta version of C-PAC, which means that it is still under active development. As such, although we have done our best to ensure a stable pipeline, there will likely still be a few bugs that we did not catch. If you find a bug or would like to suggest a new feature, please open an issue on the C-PAC GitHub issue tracker: https://github.com/FCP-INDI/C-PAC/issues?state=open

If you would like to suggest revisions to the user documentation, please open an issue on the C-PAC website's GitHub issue tracker: https://github.com/FCP-INDI/fcp-indi.github.io/issues

c-pac's People

Contributors

amygutierrez, anibalsolon, birajstha, briancheung, carolfrohlich, ccraddock, childmindinstitutecnl, chrisgorgo, czarrar, diegoaper, e-kenneally, effigies, hechengjin0, jpellman, mawebster, miykael, nrajamani3, nx10, oesteban, olivierlacan, pintohutch, ranjit, ranjitk, satra, sgiavasis, shnizzedy, ssikka, tergeorge, tsalo, xinhuili


c-pac's Issues

VMHC template

The VMHC workflow needs a template file in MNI152 space: MNI152_T1_2mm_brain_symmetric.nii.gz.
This file is not a standard file included with FSL, but right now C-PAC looks for it in the $FSLDIR/data/standard/ folder. This causes C-PAC to fail.

MNI152_T1_2mm_brain_symmetric.nii.gz should be shipped with C-PAC, and the code should be changed to fetch this file from a C-PAC folder...

  • Yang

Network Centrality Crash

TraitError: Each element of the 'op_string' trait of a DynamicTraitedSpec instance must be a string, but a value of None <type 'NoneType'> was specified.
Error setting node input:
Node: z_score
input: op_string
results_file: /home/bcheung/yard_sale/p_work_n_fix/resting_preproc_sub08224_/centrality_zscore_0/_scan_func_lfo/_csf_threshold_0.98/_gm_threshold_0.7/_wm_threshold_0.98/_threshold_0.2/_compcor_ncomponents_5_selector_pc10.linear1.wm1.global0.motion1.quadratic0.gm0.compcor1.csf1/_bandpass_freqs_0.01.0.10000000000000001/_mask_HarvardOxford-sub-maxprob-thr25-2mm/op_string/result_op_string.pklz
value: [None, None, None, None]

Derivative Template for Group Analysis

Hi all,

In the symlink directory, you can find the VMHC and f/ALFF derivatives by going into:
sym_links/%s/%s////%s.nii.gz
but for SCA, you have to go one level deeper:
sym_links/%s/%s////*/%s.nii.gz

Due to this change in the symlink structure, the derivatives can no longer be picked up by a single template. Users can work around it by running group analysis for SCA using one template and then changing the template for the other derivatives.
But this should be handled conditionally, right?

-Yang

Flexibility with multiple models and model-specific subject list

I should be able to run the 'config file for fsl' only once and get all the files I need for further analyses.
And when I indicate the name of the model in the 'CPAC.pipeline.group_runner.run' command, it should be able to pick up the subject list based on the model name.
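
A minimal sketch of the lookup this would need, assuming a per-model directory layout (all names here are hypothetical, not C-PAC's actual structure):

import os

def subject_list_for_model(models_dir, model_name):
    # hypothetical convention: each model directory carries its own subject list
    path = os.path.join(models_dir, model_name, 'subject_list.txt')
    if not os.path.exists(path):
        raise IOError('no subject list found for model %r' % model_name)
    return path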

generate_motion_statistics assumes rotations are last three columns of movement parameter file and other issues re FD/scrubbing

  1. generate_motion_statistics assumes the rotations are in the last three columns of the movement parameter file. This is indeed the case when you use 3dvolreg, but not when the user used FSL's mcflirt for motion correction: mcflirt puts the rotations in the first three columns.
  2. For calculating FD, why not stay within Python instead of piping to stdin?
  3. For scrubbing, why not stay within Python and use nibabel instead of 3dcalc to do the scrubbing?

See the code below:

# based on CPAC generate_motion_statistics and scrubbing


# check whether the motion parameter file matches the dimensions of the image.
def check(motparfile, input_image):

    import numpy as np
    import nibabel as nb

    # load motion parameters
    motpardata = np.loadtxt(motparfile)
    # load input image
    img = nb.load(input_image)

    # one row of motion parameters is expected per timepoint
    if motpardata.shape[0] != img.shape[3]:
        print(motpardata.shape[0], img.shape[3])
        print('The number of motion parameters does not correspond to the number of timepoints in the image.')
        return False
    return True


# calculate FD as in Power et al. 2012
def calc_FD(motparfile, rotcols):
    """
    Method to calculate Framewise Displacement (FD)
    as outlined in Power et al., 2012, NeuroImage.

    Parameters
    ----------
    motparfile : string
        movement parameters file path
    rotcols : list
        indices of the columns holding the rotations, first column = 1

    Returns
    -------
    FD : array
        numpy array holding the FD value for each timepoint
        (the FD of the first timepoint is defined as 0)
    """

    import numpy as np

    # read in file
    data = np.loadtxt(motparfile)

    # split the columns into rotations and translations
    # (user-supplied indices are 1-based)
    rot_idx = [c - 1 for c in rotcols]
    trans_idx = [c for c in range(data.shape[1]) if c not in rot_idx]
    rots = data[:, rot_idx]
    trans = data[:, trans_idx]

    # convert rotations (in degrees) to displacements on a 50 mm sphere
    rots2trans = (rots / 360.0) * 2 * 50 * np.pi

    # put back together as translations [0,1,2] and converted rotations [3,4,5]
    transdata = np.hstack([trans, rots2trans])

    # framewise differences: timepoint N+1 minus timepoint N
    FDtransdata = np.diff(transdata, axis=0)

    # FD = sum of the absolute framewise displacements per timepoint;
    # prepend 0 so there is one FD value per timepoint
    FD = np.concatenate([[0.0], np.abs(FDtransdata).sum(axis=1)])

    return FD


# select which frames to keep and which ones should be deleted.
def prepare_scrubbing(FD, threshold=0.2, frames_before=1, frames_after=2):
    """
    Based on the Framewise Displacement (FD)
    as outlined in Power et al., 2012, NeuroImage, select which timepoints
    (frames) to keep and which ones to delete.

    Parameters
    ----------
    FD : array
        numpy array holding one FD value per timepoint
    threshold : float
        scrubbing threshold value
    frames_before : int
        number of frames preceding the offending time frame to remove
        (default 1)
    frames_after : int
        number of frames following the offending time frame to remove
        (default 2)

    Returns
    -------
    output : dict
        'marked_frames': frames whose FD exceeds the threshold,
        'frames_ex': frames to exclude (marked frames plus neighbours),
        'frames_in': frames to keep
    """

    import numpy as np

    n_frames = len(FD)

    # initial exclusion: frames whose FD exceeds the scrubbing threshold
    indices = [i[0] for i in np.argwhere(FD >= threshold).tolist()]

    # also mark the frames surrounding each offending frame
    extra_indices = []
    for i in indices:
        # remove preceding frames, staying within bounds
        for offset in range(1, frames_before + 1):
            if i - offset >= 0:
                extra_indices.append(i - offset)
        # remove following frames, staying within bounds
        for offset in range(1, frames_after + 1):
            if i + offset < n_frames:
                extra_indices.append(i + offset)

    # take the union of indices and extra indices
    frames_ex = sorted(set(indices) | set(extra_indices))

    # frames to keep are all remaining timepoints
    excluded = set(frames_ex)
    frames_in = [i for i in range(n_frames) if i not in excluded]

    output = {'marked_frames': indices, 'frames_in': frames_in, 'frames_ex': frames_ex}

    return output


# adjust motion parameters according to frames_in
def adjust_motpar(motparfile, prep_scrub_output):

    from numpy import loadtxt
    # read in file
    data = loadtxt(motparfile)
    # keep only the rows for the retained frames
    data = data[prep_scrub_output['frames_in'], :]
    # output
    print(data.shape)
    return data


# save frames and adjusted motparfile to files for reference
def save_frames(outfile, prep_scrub_output, adjusted_motpar):

    import os
    from numpy import savetxt
    #write frames_ex to a file
    with open(os.path.join(os.path.dirname(outfile),'scrubbing_excluded_frames.1D'),'w') as myfile:
        mystring = [ str(x) for x in prep_scrub_output['frames_ex'] ]
        myfile.write(','.join(mystring))
    #write frames_in to a file
    with open(os.path.join(os.path.dirname(outfile),'scrubbing_included_frames.1D'),'w') as myfile:
        mystring = [ str(x) for x in prep_scrub_output['frames_in'] ]
        myfile.write(','.join(mystring))
    #write marked_frames to a file
    with open(os.path.join(os.path.dirname(outfile),'scrubbing_marked_frames.1D'),'w') as myfile:
        mystring = [ str(x) for x in prep_scrub_output['marked_frames'] ]
        myfile.write(','.join(mystring))
    #write adjusted motpars
    savetxt(outfile.split('.nii.gz')[0] + '_motpar.1D', adjusted_motpar, fmt='%.8f', delimiter='\t', newline='\n')


# do the actual scrubbing
def scrub(input_file, output_file, keepers):

    import nibabel as nb

    # load image and data
    img = nb.load(input_file)
    data = img.get_fdata()
    # only keep timepoints that are in the keepers list
    data = data[:, :, :, keepers]
    # save image
    temp_image = nb.Nifti1Image(data, img.affine)
    nb.save(temp_image, output_file)

    return data


# actions when the script is called from the command line.
# example: python2.7 /home/mrstats/maamen/DCCN/Scripts/NeuroImage/processing_ndes/motionn_scrubbing.py filtered_func_data.nii.gz scrub_test.nii.gz mc/prefiltered_func_data_mcf.par [1,2,3]
if __name__ == "__main__":
    import sys
    infile = sys.argv[1]
    outfile = sys.argv[2]
    motparfile = sys.argv[3]
    rotcols = sys.argv[4]
    rotcols = [ int(x) for x in rotcols.strip(']').strip('[').split(',') ]

    # check consistency of files
    if not check(motparfile, infile):
        sys.exit(1)
    # calculate FD
    FD = calc_FD(motparfile, rotcols)
    # determine which timepoints to keep
    to_scrub = prepare_scrubbing(FD)
    # scrub the data
    scrub(infile, outfile, to_scrub['frames_in'])
    # adjust motion parameters
    adjusted_motpar = adjust_motpar(motparfile, to_scrub)
    # save motion parameter
    save_frames(outfile, to_scrub, adjusted_motpar)

(re)check output directory when pipeline is run again

When the pipeline crashes or is otherwise aborted and then rerun, or when the user accidentally deletes files in the output directory, the pipeline will not restart the symlink generation process unless something is changed in the working_directory.

This is somewhat annoying, since the only workaround is to run the pipeline again for the affected subjects - once they have been found.

Could you put an option in the config file to check the output directory for missing symlinks and create them when needed (or some other easy-to-use technique that doesn't require the whole pipeline to be rerun)?
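
One easy-to-use approach, sketched here under the assumption that C-PAC can produce a mapping of expected symlink paths to their targets (the function and argument names are hypothetical):

import os

def repair_symlinks(expected_links):
    # expected_links: dict mapping symlink path -> target path
    for link, target in expected_links.items():
        link_dir = os.path.dirname(link)
        if not os.path.isdir(link_dir):
            os.makedirs(link_dir)
        # recreate the symlink only if it is missing and its target exists
        if not os.path.lexists(link) and os.path.exists(target):
            os.symlink(target, link)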

[request] data source should support DiscSci data structure

Hi guys,

We are going to use C-PAC to process the Rockland sample data for the Discovery Science project. Please log in to Rocky and go to "/home2/data/Originals/DiscSci" to see how the data for this project are organized.

We need C-PAC to be able to handle this structure...

Thanks

Yang

inconsistency in names of derivatives

The names of the final outputs of ALFF/fALFF, VMHC, and seed-based analysis are inconsistent:

'ALFF_Z_FWHM_2standard', 'fALFF_Z_FWHM_2standard', 'VMHC_Z_stat_FWHM', 'sca_Z_FWHM'

Should we make them consistent? For example:

'ALFF_Z_FWHM_2standard', 'fALFF_Z_FWHM_2standard', 'VMHC_Z_FWHM_2standard', 'sca_Z_FWHM_2standard'

Request: Run status / Progress information

It would be very useful to have a way to see how many subjects have been run, how many remain to be run, etc., without having to look at what files are in the output directory.

Missing/broken dot should not cause C-PAC to crash.

If dot/Graphviz cannot be found, C-PAC should print a message like "dot not found: visual schematics of pipelines will not be generated" but continue to run.

I can add a section to the user guide about visual pipelines being an optional feature that requires dot.
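
For reference, a minimal sketch of that guard (using Python 3's shutil.which; the write_graph call stands in for wherever C-PAC currently invokes dot):

import shutil

def write_pipeline_graph(workflow):
    # degrade gracefully instead of crashing when Graphviz is absent
    if shutil.which('dot') is None:
        print('dot not found: visual schematics of pipelines '
              'will not be generated')
        return
    # nipype workflows can render their graph via dot
    workflow.write_graph()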

Using YAML files instead of current config files

This might not be the ideal thing, but I feel that having the user provide data_config.py or config.py as YAML files would make the user experience smoother. For instance, with config.py you could ease the presentation of the different settings by arranging them in a hierarchy, for example:

anat-preproc:
    run: True
    etc: [1,2,3]

Since YAML syntax maps naturally onto Python dictionaries and lists, the transition wouldn't be bad.
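
For instance, PyYAML would load the snippet above directly into nested dictionaries and lists; a quick illustration (not C-PAC code):

import yaml  # PyYAML

config = yaml.safe_load("""
anat-preproc:
    run: True
    etc: [1, 2, 3]
""")

print(config['anat-preproc']['run'])  # True
print(config['anat-preproc']['etc'])  # [1, 2, 3]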

adding multicore support via mcflirt instead of flirt

Hi guys,

I've tried to modify the registration.py code so that it could use MCFLIRT instead of flirt.

However, they're slightly different and require someone familiar with the entire workflow to do it.

If possible, I'd recommend you guys add MCFLIRT on the "To do" list, as it will speed up the computations substantially.

The good news is that nipype does support the multicore variant.

user docs lack some desirable links

E.g., getting to the code base or the issues tab is not at all clear.
In fact, in 'Troubleshooting' there is a link to a CMI page, which is clearly not the issues tab here.

CPAC.pipeline.cpac_runner.run fails on grayMatterThreshold

In [2]: CPAC.pipeline.cpac_runner.run('/home/mrstats/maamen/TestDir/TestDir/CPAC-Test/settings/data_config.py','/home/mrstats/maamen/TestDir/TestDir/CPAC-Test/settings/CPAC_subject_list.py')

AttributeError Traceback (most recent call last)
in ()
----> 1 CPAC.pipeline.cpac_runner.run('/home/mrstats/maamen/TestDir/TestDir/CPAC-Test/settings/data_config.py','/home/mrstats/maamen/TestDir/TestDir/CPAC-Test/settings/CPAC_subject_list.py')

/home/mrstats/maamen/epd/lib/python2.7/site-packages/CPAC/pipeline/cpac_runner.pyc in run(config_file, subject_list_file)
221
222
--> 223 strategies = sorted(build_strategies(c))
224
225 print strategies

/home/mrstats/maamen/epd/lib/python2.7/site-packages/CPAC/pipeline/cpac_runner.pyc in build_strategies(configuration)
80
81
---> 82 config_iterables = {'_gm_threshold': eval('configuration.grayMatterThreshold'), '_wm_threshold': eval('configuration.whiteMatterThreshold'), '_csf_threshold': eval('configuration.cerebralSpinalFluidThreshold'), '_threshold': eval('configuration.scrubbingThreshold'), '_compcor': eval('configuration.Corrections'), '_target_angle_deg': eval('configuration.targetAngleDeg')}
83
84

/home/mrstats/maamen/epd/lib/python2.7/site-packages/CPAC/pipeline/cpac_runner.pyc in ()

AttributeError: 'module' object has no attribute 'grayMatterThreshold'

CPAC cause SGE crash when processing dataset with large number of subjects

The master node of our cluster has 16 GB of RAM. When running CPAC on it to process datasets with a large number of subjects (n > 100), CPAC keeps submitting jobs to SGE until they use up all the RAM. SGE then stops responding to CPAC, and the job submission process crashes with a timeout error:

Unable to run job: failed receiving gdi request response for mid=1 (got syncron message receive timeout error)

Some kind of job control should be implemented so this won't happen...

Yang
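
A rough sketch of such a throttle, assuming the standard SGE command-line tools (qstat/qsub) and a cap chosen to suit the cluster:

import subprocess
import time

MAX_QUEUED = 50  # assumed cap; tune to the cluster

def count_queued_jobs(user):
    # 'qstat -u <user>' lists the user's jobs after a two-line header
    out = subprocess.check_output(['qstat', '-u', user])
    lines = out.decode().strip().splitlines()
    return max(0, len(lines) - 2)

def submit_with_throttle(job_script, user):
    # block until the queue drains below the cap, then submit
    while count_queued_jobs(user) >= MAX_QUEUED:
        time.sleep(30)
    subprocess.check_call(['qsub', job_script])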

inverse warping fails when the ref image (T1) has a resolution that is too high...

Inverse warping (carried out by the FSL function /usr/share/fsl/4.1/bin/invwarp) is a slow and computationally heavy operation; it takes a long time to run, especially when the reference image has many voxels.

invwarp checks the dimensions of the reference image, and if the volume is larger than 170x220x180 voxels it prints a warning and exits, which crashes the node.

Possible solution: check the volume dimensions of the reference image and, if they are too large, downsample the data.
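
A sketch of that check, assuming nibabel for reading the image dimensions and FSL's flirt -applyisoxfm for the downsampling (the 170x220x180 limit is taken from the invwarp behaviour described above):

import subprocess
import nibabel as nb

MAX_DIM = (170, 220, 180)

def ensure_invwarp_safe(ref_path, downsampled_path, iso_mm=2.0):
    # pass the reference through unchanged if it is within invwarp's limit
    dims = nb.load(ref_path).shape[:3]
    if all(d <= m for d, m in zip(dims, MAX_DIM)):
        return ref_path
    # otherwise resample to an isotropic grid to reduce the voxel count
    subprocess.check_call([
        'flirt', '-in', ref_path, '-ref', ref_path,
        '-out', downsampled_path, '-applyisoxfm', str(iso_mm)])
    return downsampled_path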

preprocessing anatomical fails (potential newb issue)

Hi,

I've had an issue using CPAC.

Well, first, when I try using 'import CPAC', not everything loads as you'd expect.

i.e., to call create_anat_preproc, I have to type:
from CPAC.anat_preproc import create_anat_preproc

Then, I've tried using either lists or a single string, but I always get the following kind of error:

Traceback (most recent call last):
File "", line 1, in
File "/Library/Python/2.7/site-packages/nipype-0.6.0-py2.7.egg/nipype/pipeline/engine.py", line 509, in run
execgraph = generate_expanded_graph(deepcopy(flatgraph))
File "/Library/Python/2.7/site-packages/nipype-0.6.0-py2.7.egg/nipype/pipeline/utils.py", line 483, in generate_expanded_graph
graph_in = _remove_identity_nodes(graph_in, keep_iterables=True)
File "/Library/Python/2.7/site-packages/nipype-0.6.0-py2.7.egg/nipype/pipeline/utils.py", line 433, in _remove_identity_nodes
destnode.set_input(inport, value)
File "/Library/Python/2.7/site-packages/nipype-0.6.0-py2.7.egg/nipype/pipeline/engine.py", line 1017, in set_input
setattr(self.inputs, parameter, deepcopy(val))
File "/Library/Python/2.7/site-packages/nipype-0.6.0-py2.7.egg/nipype/interfaces/traits_extension.py", line 80, in validate
self.error( object, name, value )
File "/Library/Python/2.7/site-packages/traits-4.2.0-py2.7-macosx-10.8-intel.egg/traits/trait_handlers.py", line 170, in error
value )
traits.trait_errors.TraitError: The 'in_file' trait of a ThreedrefitInputSpec instance must be an existing file name, but a value of '/Users/Gagan/school shit/thesis/test/M87179880/t1_avg.nii.gz' <type 'str'> was specified.

Note that this path is correct, and I followed the instructions from the docs where it tells you to create the anat struct and then set the inputs.

note that I"m using Python 2.7.3. on mac and I installed most dependencies through easy_install with superuser privs.

any help would be great.

Symlinks failing

Error output:


AttributeError Traceback (most recent call last)
/home/millskl/CPAC_files/FCP-INDI-C-PAC-7b14cf3/ in ()
----> 1 c['node'].run()

/home/millskl/epd-7.3-2-rh3-x86_64/lib/python2.7/site-packages/nipype/pipeline/engine.pyc in run(self, updatehash)
1058 self.config = merge_dict(deepcopy(config._sections), self.config)
1059 if not self._got_inputs:
-> 1060 self._get_inputs()
1061 self._got_inputs = True
1062 outdir = self.output_dir()

/home/millskl/epd-7.3-2-rh3-x86_64/lib/python2.7/site-packages/nipype/pipeline/engine.pyc in _get_inputs(self)
1204 output_name = info[1]
1205 try:
-> 1206 output_value = results.outputs.get()[output_name]
1207 except TypeError:
1208 output_value = results.outputs.dictcopy()[output_name]

AttributeError: 'NoneType' object has no attribute 'get'

If this is being caused by the nipype datasink not being updated with the new changes we need, can you post a link to the fork in an easy-to-find place?

Pipeline fails after changing input paths

The pipeline appears to fail after the following steps below. See /home/data/Projects/Rockland/scripts/02_PreProc/test for all the relevant files.

  1. I initially run it on rocky (see data_config_rocky.py, config_rocky.py, and CPAC_subject_list_rocky.py).
  2. About halfway through, I kill the process.
  3. Then I use a different set of input paths (see data_config_gelert.py, config_gelert.py, and CPAC_subject_list_gelert.py).
  4. I restart the process with the gelert files on gelert.
  5. Many processes try looking for the old rocky-based paths instead of the new gelert-based paths.

Z

warning when instantiating create_register_func_to_mni

Hi guys,

Trying to register the functionals to the warped anatomicals, but I'm a bit wary because I get the following warning:

func2proc = create_register_func_to_mni();
/Library/Python/2.7/site-packages/nipype-0.6.0-py2.7.egg/nipype/interfaces/base.py:359: UserWarning: Input concat_xfm requires inputs: in_file2

I hope this warning is harmless and I can just follow the documentation?
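
The warning means that when concat_xfm is set on the underlying node (presumably an fsl.ConvertXFM inside the workflow), nipype also expects in_file2 to be connected. A standalone illustration with nipype's FSL wrapper, using placeholder file names:

from nipype.interfaces.fsl import ConvertXFM

xfm = ConvertXFM()
xfm.inputs.concat_xfm = True
xfm.inputs.in_file = 'func2anat.mat'    # first transform
xfm.inputs.in_file2 = 'anat2mni.mat'    # second transform, required by concat_xfm
xfm.inputs.out_file = 'func2mni.mat'

Once the workflow connects in_file2, the warning should disappear.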

CPAC.utils.extract_data.run fails

Just installed the new master branch, but ran into this error on the first step:

In [1]: import CPAC
nipype/__init__.py:53: UserWarning: Running the tests from the install directory may trigger some failures
warnings.warn('Running the tests from the install directory may '
121107-12:06:21,98 interface INFO:
stdout 2012-11-07T12:06:21.098494:/opt/cluster/matlab

In [2]: CPAC.utils.extract_data.run('/home/mrstats/maamen/TestDir/TestDir/CPAC-Test/settings/data_config.py')

AttributeError Traceback (most recent call last)
in ()
----> 1 CPAC.utils.extract_data.run('/home/mrstats/maamen/TestDir/TestDir/CPAC-Test/settings/data_config.py')

/home/mrstats/maamen/epd/lib/python2.7/site-packages/CPAC/utils/extract_data.pyc in run(data_config)
449 sys.path.append(path)
450 c = __import__(fname.split('.')[0])
--> 451 if c.scanParametersCSV is not None:
452 s_param_map = read_csv(c.scanParametersCSV)
453 else:

AttributeError: 'module' object has no attribute 'scanParametersCSV'

symlinks for group analysis

Is it possible to have something as simple as:
....group_models/all_paths_to_pipelines_n_subpipes/model1/derivative_name/*

And can you customize it so that the user can give a name to the combinations of pipelines and subpipes? What you have now is very illustrative but not compact!

Crash when file names are too long.

We should check file names and warn the user if they need to be shortened.

{'node': resting_preproc_M87100810_.func_preproc_0.func_get_mean_RPI.c0,
'traceback': ['Traceback (most recent call last):\n',
' File "/home/data/PublicProgram/epd-7.2-2-rh5-x86_64/lib/python2.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 15, in run_node\n result['result'] = node.run(updatehash=updatehash)\n',
' File "/home/data/PublicProgram/epd-7.2-2-rh5-x86_64/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 1128, in run\n self._run_interface()\n',
' File "/home/data/PublicProgram/epd-7.2-2-rh5-x86_64/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 1226, in _run_interface\n self._result = self._run_command(execute)\n',
' File "/home/data/PublicProgram/epd-7.2-2-rh5-x86_64/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 1350, in _run_command\n result = self.interface.run()\n',
' File "/home/data/PublicProgram/epd-7.2-2-rh5-x86_64/lib/python2.7/site-packages/nipype/interfaces/base.py", line 827, in run\n results.outputs = self.aggregate_outputs(results.runtime)\n',
' File "/home/data/PublicProgram/epd-7.2-2-rh5-x86_64/lib/python2.7/site-packages/nipype/interfaces/base.py", line 882, in aggregate_outputs\n raise FileNotFoundError(msg)\n',
"FileNotFoundError: File/Directory '/home/data/Projects/kiehl/working/resting_preproc_M87100810
/func_preproc_0/_scan_rest_v01_r01_0005_20081227_145638SerieMR-0005-0001restv01r01M871008100536s005a001/func_get_mean_RPI/20081227_145638SerieMR-0005-0001restv01r01M871008100536s005a001_3dc_RPI_3dT.nii.gz' not found for ThreedTstat output 'out_file'.\nInterface ThreedTstat failed to run. \n"]}
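
A minimal sketch of the suggested check, assuming the usual 255-byte per-component limit of common Linux filesystems:

import os

MAX_COMPONENT = 255  # typical per-component limit (e.g. ext3/ext4, XFS)

def warn_if_too_long(path):
    # warn about any single path component that exceeds the limit
    for part in path.split(os.sep):
        if len(part) > MAX_COMPONENT:
            print('Warning: path component longer than %d characters; '
                  'file creation may fail: %s' % (MAX_COMPONENT, part))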

Typo in leaf out_file assignment when 0 in runWorkflow

found a bug in leaf out_file

        if 0 in c.AnyWorkflow:
            # we are forking so create a new node
            tmp = strategy()
            tmp.resource_pool = dict(strat.resource_pool)
            tmp.leaf_node = (strat.leaf_node)
--------->  tmp.out_file = str(strat.leaf_out_file)
            tmp.name = list(strat.name)
            strat = tmp
            new_strat_list.append(strat)

tmp.out_file should be tmp.leaf_out_file

create_fsl_model

The column name for the subject identifier in the phenotypic file can be 'sub', 'subject_id', 'subject', or anything else; create_fsl_model should be able to handle that.
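
A sketch of how create_fsl_model could normalize this, with an assumed alias set (not the actual C-PAC implementation):

SUBJECT_ALIASES = {'sub', 'subject', 'subject_id', 'subid', 'participant'}

def find_subject_column(header_row):
    # return whichever column in the phenotypic header identifies the subject
    for name in header_row:
        if name.strip().lower() in SUBJECT_ALIASES:
            return name
    raise ValueError('no subject-identifier column found in phenotypic file')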

Problem of Multiple-scans Support

Hi All,

When Steve ran C-PAC to process the Rockland sample data (which has 3 rs-fMRI scans with 900, 404, and 120 volumes), SCA failed.

Sharad and I have just figured out the reason: for all the seeds, the 1D time series extracted from the 900- and 404-TR scans were overwritten by the time series from the 120-TR one.

That means that support for multiple scans is not ready.

This problem needs to be fixed.

Thanks

Yang
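
One obvious fix is to make the extracted time series file scan-specific so different scans cannot collide; a hypothetical sketch of the naming scheme:

import os

def seed_timeseries_path(out_dir, seed_name, scan_id):
    # include the scan identifier so the 900-, 404- and 120-volume scans
    # each write their own 1D file instead of overwriting a shared one
    return os.path.join(out_dir, '%s_%s_ts.1D' % (seed_name, scan_id))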
