
heudiconv's Introduction

HeuDiConv

a heuristic-centric DICOM converter


About

heudiconv is a flexible DICOM converter for organizing brain imaging data into structured directory layouts.

  • It allows flexible directory layouts and naming schemes through customizable heuristic implementations.
  • It only converts the necessary DICOMs and ignores everything else in a directory.
  • You can keep links to DICOM files in the participant layout.
  • Using dcm2niix under the hood, it's fast.
  • It can track the provenance of the conversion from DICOM to NIfTI in W3C PROV format.
  • It provides assistance in converting to BIDS.
  • It integrates with DataLad to place converted and original data under git/git-annex version control while automatically annotating files containing sensitive information (e.g., non-defaced anatomicals).

Heudiconv can be inserted into your workflow to provide automatic conversion as part of a data acquisition pipeline, as seen in the figure below:

figs/environment.png

Installation

See our installation page on heudiconv.readthedocs.io.

HOWTO 101

In a nutshell -- heudiconv is given a file tree of DICOMs, and it produces a restructured file tree of NIfTI files (conversion handled by dcm2niix) with accompanying metadata files. The input and output structure is as flexible as your data: a Python file called a heuristic knows how to read your input structure and decides how to name the resultant files. You can run your conversion automatically (which will produce a .heudiconv directory storing the used parameters), or generate the default parameters, edit them to customize file naming, and continue conversion via an additional invocation of heudiconv:

figs/workflow.png

heudiconv comes with existing heuristics which can be used as is, or as examples. For instance, the heuristic convertall extracts standard metadata from all matching DICOMs. heudiconv creates mapping files (<something>.edit.txt) which let researchers easily establish their own conversion mapping.

In most use-cases of retrospective study data conversion, you would need to create your custom heuristic following the examples and the "Heuristic" section in the documentation. Note that the ReproIn heuristic is generic and powerful enough to be adopted for virtually any study: for prospective studies, you would just need to name your sequences following the ReproIn convention, and for retrospective conversions, you would often be able to create a new versatile heuristic by simply providing remappings into ReproIn, as shown in this issue (documentation is coming).

Having decided on a heuristic, you could use the command line:

heudiconv -f HEURISTIC-FILE-OR-NAME -o OUTPUT-PATH --files INPUT-PATHs

with various additional options (see heudiconv --help or "Usage" in documentation) to tune its behavior to convert your data.

For detailed examples and guides, please check out ReproIn conversion invocation examples and the user tutorials in the documentation.

How to cite

Please use the Zenodo record for your specific version of HeuDiConv. We also support gathering all relevant citations via DueCredit.

How to contribute

For a detailed intro, see our contributing guide.

Our releases are packaged using Intuit auto, with the corresponding workflow (including Docker image preparation) found in .github/workflows/release.yml.

Third-party heuristics

Support

All bugs, concerns and enhancement requests for this software can be submitted here: https://github.com/nipy/heudiconv/issues.

If you have a problem or would like to ask a question about how to use heudiconv, please submit a question to NeuroStars.org with a heudiconv tag. NeuroStars.org is a platform similar to StackOverflow but dedicated to neuroinformatics.

All previous heudiconv questions are available here: http://neurostars.org/tags/heudiconv/


Contributors

aksoo, asmacdo, bpinsard, candleindark, chrisgorgo, daeh, danlurie, darrencl, dependabot[bot], effigies, hbraundsp, jdkent, jennan, jpellman, jwodder, kasbohm, keithcallenberg, leej3, matthew-brett, mgxd, mih, mvdoc, octomike, psadil, pvelasco, satra, stilley2, thechymera, tsalo, yarikoptic


heudiconv's Issues

dcm2niix converts only the first run to .nii

When I try to run heudiconv with the -c dcm2niix option using docker, heudiconv executes only for the first scan collected and then ends with the message "INFO: PROCESSING DONE:" rather than looping through all the scans.

For more background, I successfully completed the convertall stage and had moved to the dcm2niix stage. The dicominfo.tsv output included all of the scans, but heudiconv with dcm2niix option still only converted the first scan. I have tried running heudiconv in two ways: 1. with each scan's dicoms in separate directories and 2. with all dicoms in the same directory. I get the same result either way.

Strangely, although heudiconv is not successfully running dcm2niix, I can run dcm2niix on its own to convert all of the images.

If anyone has encountered this issue, I would love to hear your suggestions. Below is the heudiconv input and output, and attached are my dicominfo and heuristic files (in .txt format).

172-17-56-19:data gailrosenbaum$ docker run --rm -it -v $PWD:/data nipy/heudiconv -d /data/{subject}/raw/*.IMA -s SC02162 -f /data/MBMF_heuristic4.py -c dcm2niix -b -o /data/SC02162/output/
INFO: Need to process 1 study sessions
INFO: PROCESSING STARTS: {'session': None, 'outdir': '/data/SC02162/output/', 'subject': 'SC02162'}
INFO: Processing 5599 dicoms
INFO: Analyzing 5599 dicoms
INFO: Generated sequence info with 28 entries
INFO: Doing conversion using dcm2niix
INFO: Converting /data/SC02162/output/fmap/sub-SC02162_dir-AP_epi (72 DICOMs) -> /data/SC02162/output/fmap . Converter: dcm2niix . Output types: ('nii.gz', 'dicom')
INFO: Executing node convert in dir: /tmp/heudiconvdcmmaVro9/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f fmap -o /tmp/heudiconvdcmmaVro9/convert -s n -v n /tmp/heudiconvdcmmaVro9/convert/SC02162.MR.SACKLER_SI-CH.0002.0001.2015.10.07.16.20.19.671875.4413677.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcmmaVro9/embedder
INFO: Post-treating /data/SC02162/output/fmap/sub-SC02162_dir-AP_epi.json file
INFO: Populating template files under /data/SC02162/output/
INFO: PROCESSING DONE: {'session': None, 'outdir': '/data/SC02162/output/', 'subject': 'SC02162'}
[dicominfo.txt](https://github.com/nipy/heudiconv/files/1257097/dicominfo.txt)
MBMF_heuristic4.txt

singularity: no space on /tmp

For large conversions, I kept running into this error midway through the process:

  File "/usr/local/bin/heudiconv", line 902, in <module>
    main()
  File "/usr/local/bin/heudiconv", line 898, in main
    min_meta=args.minmeta)
  File "/usr/local/bin/heudiconv", line 778, in convert_dicoms
    min_meta=min_meta)
  File "/usr/local/bin/heudiconv", line 504, in convert
    res = convertnode.run()
  File "/opt/conda/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 366, in run
    self._run_interface()
  File "/opt/conda/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 476, in _run_interface
    self._result = self._run_command(execute)
  File "/opt/conda/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 621, in _run_command
    self._save_results(result, cwd)
  File "/opt/conda/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 489, in _save_results
    savepkl(resultsfile, result)
  File "/opt/conda/lib/python2.7/site-packages/nipype/utils/filemanip.py", line 540, in savepkl
    pkl_file.close()
  File "/opt/conda/lib/python2.7/gzip.py", line 378, in close
    fileobj.write(self.compress.flush())
IOError: [Errno 28] No space left on device

It turns out /tmp fills up within the container, causing the conversion to fail.

A quick workaround was to bind an empty directory over /tmp:

-B some/empty/dir:/tmp

but we should see if there is a more elegant solution
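Another option, assuming the converter's scratch directories go through Python's tempfile machinery (as nipype's working directories do), is to point TMPDIR at a larger filesystem before the conversion starts. In the sketch below, BIG_SCRATCH is a hypothetical environment variable naming a bind-mounted path with enough room:

```python
import os
import tempfile

# Redirect scratch space before any temp directories are created.
# BIG_SCRATCH is hypothetical; fall back to a fresh temp dir if unset.
scratch = os.environ.get('BIG_SCRATCH') or tempfile.mkdtemp()
os.environ['TMPDIR'] = scratch
tempfile.tempdir = None  # drop tempfile's cached default so TMPDIR is re-read

workdir = tempfile.mkdtemp(prefix='heudiconvdcm')
assert os.path.realpath(workdir).startswith(os.path.realpath(scratch))
```

With Singularity, the same effect can be had by exporting TMPDIR in the environment passed into the container.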

Siemens CSA headers

Hi, running heudiconv on Siemens (Avanto/Skyra/Prisma) data, I get:
Ignoring xxxx.IMA since not quite a «normal» DICOM: Dataset does not have attribute 'ProtocolName'.

I guess this is caused by the attribute not being present at its standard location (0018,1030) in our Siemens DICOMs, while it exists as tProtocolName in the private CSA header. Did I miss a switch when calling heudiconv, or does some logic need to be added (nibabel.nicom.csareader?)?

Cheers!
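One possible workaround along the lines the reporter suggests (a sketch, not heudiconv's actual behavior): fall back to the private CSA series header via nibabel.nicom.csareader when (0018,1030) is absent. The ASCCONV parsing below is a deliberately simple illustrative regex:

```python
import re


def protocol_name_from_ascconv(text):
    """Pull tProtocolName out of the ASCCONV part of a Siemens
    MrPhoenixProtocol string (handles single or doubled quotes)."""
    m = re.search(r'tProtocolName\s*=\s*"+([^"]+)"+', text)
    return m.group(1) if m else None


def get_protocol_name(path):
    # Lazy imports keep the helper above dependency-free; pydicom and
    # nibabel are assumed to be installed for actual DICOM use.
    import pydicom
    from nibabel.nicom import csareader

    ds = pydicom.dcmread(path, stop_before_pixels=True)
    # Standard DICOM location (0018,1030)
    name = getattr(ds, 'ProtocolName', None)
    if name:
        return name
    # Fall back to the private Siemens CSA series header
    csa = csareader.get_csa_header(ds, 'series')
    if csa:
        phoenix = csareader.get_scalar(csa, 'MrPhoenixProtocol')
        if phoenix:
            return protocol_name_from_ascconv(phoenix)
    return None
```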

NameError: global name 'URIRef' is not defined (nipype related)

orange:heudiconv filo$ bin/heudiconv -d /Volumes/Samsung_T1/AA_dicoms/%s_*/*/*.dcm -s S4379IUI -f heuristics/convertall.py -o /tmp -c dcm2nii
Converting /tmp/S4379IUI/run001
/tmp/S4379IUI
nii.gz
dcm2nii
150929-17:02:37,968 workflow INFO:
     Executing node convert in dir: /var/folders/rw/dpbdbfb12p91tqb08078kww40000gn/T/heudiconvtmp7pAXZG/convert
150929-17:02:37,989 workflow INFO:
     Running: dcm2nii -a y -c y -b config.ini -v y -d y -e y -g y -i n -n y -o /private/var/folders/rw/dpbdbfb12p91tqb08078kww40000gn/T/heudiconvtmp7pAXZG/convert -p y -x n -f n /private/var/folders/rw/dpbdbfb12p91tqb08078kww40000gn/T/heudiconvtmp7pAXZG/convert/S4379IUI_1_1_00001_00001_124025357500_124030855000_4081835128.dcm
Traceback (most recent call last):
  File "bin/heudiconv", line 495, in <module>
    anon_outdir=args.conv_outputdir)
  File "bin/heudiconv", line 435, in convert_dicoms
    mod, 'custom_callable', None))
  File "bin/heudiconv", line 299, in convert
    res = convertnode.run()
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 1428, in run
    self._run_interface()
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 1538, in _run_interface
    self._result = self._run_command(execute)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 1664, in _run_command
    result = self._interface.run()
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/interfaces/base.py", line 1082, in run
    prov_record = write_provenance(results)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/utils/provenance.py", line 235, in write_provenance
    ps.add_results(results)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/utils/provenance.py", line 288, in add_results
    a0_attrs)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/external/provcopy.py", line 1846, in activity
    return self.add_element(PROV_REC_ACTIVITY, identifier, {PROV_ATTR_STARTTIME: _ensure_datetime(startTime), PROV_ATTR_ENDTIME: _ensure_datetime(endTime)}, other_attributes)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/external/provcopy.py", line 1840, in add_element
    return self.add_record(record_type, identifier, attributes, other_attributes)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/external/provcopy.py", line 1802, in add_record
    new_record = PROV_REC_CLS[record_type](self, self.valid_identifier(identifier), attributes, other_attributes, asserted)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/external/provcopy.py", line 480, in __init__
    self.add_attributes(attributes, other_attributes)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/external/provcopy.py", line 896, in add_attributes
    ProvElement.add_attributes(self, attributes, extra_attributes)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/external/provcopy.py", line 576, in add_attributes
    self.add_extra_attributes(extra_attributes)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/external/provcopy.py", line 567, in add_extra_attributes
    attr_set = self.parse_extra_attributes(extra_attributes)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/external/provcopy.py", line 559, in parse_extra_attributes
    for attribute, value in extra_attributes)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/external/provcopy.py", line 559, in <genexpr>
    for attribute, value in extra_attributes)
  File "/Users/filo/anaconda/lib/python2.7/site-packages/nipype/external/provcopy.py", line 537, in _auto_literal_conversion
    if isinstance(literal, URIRef):
NameError: global name 'URIRef' is not defined

strange sorting of multiecho Philips data (dcm2niix)

I got some strangely sorted output from dcm2niix for multi-echo data: echo 3, 1, 2, 5, 4, &%$. dcm2niix mentions this strange order in its console output.

I changed the default out_filename to mention the echo, too.

                            convertnode.inputs.out_filename = os.path.basename(dirname +'_%e')  

INFO: Processing 0 dicoms

Dear experts,
I am trying the Docker version of heudiconv for the first time, with Siemens DICOM files. I am launching the following from my command prompt (Windows 10), but it seems not to find the DICOM files. Any help or suggestion is welcome. Attached is a screenshot of where my data is. Thanks in advance for your help.
C:\Users\admin\data>docker run --rm -it -v C:/Users/admin/data nipy/heudiconv -d C:/Users/admin/data/{subject}/ST000000/SE00001/MR* -s essai -f convertall.py -c dcm2niix -o output
INFO: Processing 0 dicoms
Traceback (most recent call last):
  File "/usr/local/bin/heudiconv", line 902, in <module>
    main()
  File "/usr/local/bin/heudiconv", line 898, in main
    min_meta=args.minmeta)
  File "/usr/local/bin/heudiconv", line 735, in convert_dicoms
    dcmfilter=getattr(mod, 'filter_dicom', None))
  File "/usr/local/bin/heudiconv", line 312, in process_dicoms
    lgr.info("Generated sequence info with %d entries", len(info))
UnboundLocalError: local variable 'info' referenced before assignment

no valid OpenPGP data found

Hi, docker build is throwing an error for me at:
apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9

gpg: requesting key 2649A5A9 from hkp server pool.sks-keyservers.net
gpgkeys: key A5D32F012649A5A9 can't be retrieved
gpg: no valid OpenPGP data found.

Changing hkp://pool.sks-keyservers.net -> hkp://p80.pool.sks-keyservers.net
seems to resolve the problem.

dicom_dir_template with subject and session.

In order to search using both subject and session, would it be possible to change the way dicom_dir_template is set up and formatted?

E.g., instead of

heudiconv -d /scratch/tsalo006/DICOM/%s_S1/*/* \
	-s sub-05 -f /scratch/tsalo006/heuristics.py -c dcm2niix \
	-o ./ -ss 1

we could have

heudiconv -d /scratch/tsalo006/DICOM/{s}_S{ss}/*/* \
	-s sub-05 -f /scratch/tsalo006/heuristics.py -c dcm2niix \
	-o ./ -ss 1

In heudiconv, instead of

if sid:
    sdir = dicom_dir_template % sid
    # and see what matches
    fl = sorted(glob(sdir))

there'd be

if sid:
    # if ss isn't in the template it's just ignored
    sdir = dicom_dir_template.format(s=sid, ss=ses)
    # and see what matches
    fl = sorted(glob(sdir))
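The proposal works because str.format silently ignores keyword arguments a template does not reference, so subject-only templates would keep working through the same code path. A quick sketch (`expand_template` is a hypothetical helper name, not heudiconv API):

```python
def expand_template(template, sid, ses=None):
    # Keywords absent from the template are simply ignored by .format,
    # so subject-only and subject+session templates go through one path.
    return template.format(s=sid, ss=ses)

print(expand_template('/scratch/DICOM/{s}_S{ss}/*/*', 'sub-05', 1))
# -> /scratch/DICOM/sub-05_S1/*/*
print(expand_template('/scratch/DICOM/{s}/*/*', 'sub-05'))
# -> /scratch/DICOM/sub-05/*/*
```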

Change in behaviour of -s flag

Just documenting a change in behavior; not sure if this was intended. Previously, if the DICOM contained a subject identifier different from the desired output subject identifier, one could pass the latter as part of the heudiconv call. This no longer works, as {subject} now needs to be included in the dicom template:

$ singularity exec --bind $PWD:/data nipy_heudiconv-2017-05-27-2471285b9681.img heudiconv -d /data/dicoms/*-03686103-20151022-11160-DICOM.tar -s 001 -ss 01 -f /data/anal/heudiconv_files/heuristics/convertall.py -b -o /data/bids_v_01_generic -c none

INFO: Processing 1 dicoms
INFO: Generated sequence info with 20 entries

$ singularity exec --bind $PWD:/data nipy_heudiconv_latest-2017-08-07-6b7e1ec1c6f4.img heudiconv -d /data/dicoms/*-03686103-20151022-11160-DICOM.tar -s 001 -ss 01 -f /data/anal/heudiconv_files/heuristics/convertall.py -b -o /data/bids_v_03_generic -c none

Traceback (most recent call last):
  File "/usr/local/bin/heudiconv", line 2079, in <module>
    main()
  File "/usr/local/bin/heudiconv", line 2071, in main
    return _main(args)
  File "/usr/local/bin/heudiconv", line 1835, in _main
    grouping=grouping)
  File "/usr/local/bin/heudiconv", line 1422, in get_study_sessions
    "subject id. Got %r" % dicom_dir_template)
ValueError: dicom dir template must have {subject} as a placeholder for a subject id. Got '/data/dicoms/*-03686103-20151022-11160-DICOM.tar'
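The new behaviour amounts to an explicit validation step before any template expansion happens. Roughly (a sketch mirroring the error message, not heudiconv's exact code):

```python
def check_dir_template(template):
    # A {subject} placeholder is now mandatory; a plain glob that merely
    # happens to contain the subject id is no longer accepted.
    if '{subject}' not in template:
        raise ValueError(
            'dicom dir template must have {subject} as a placeholder '
            'for a subject id. Got %r' % template)
    return template
```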

"install: illegal option -- t" message when running make install

I'm trying to install heudiconv locally, but I get the following error message.

I'm using MacBook Pro with macOS Sierra. Any inputs would be appreciated.

$ make install
mkdir -p /usr/local/share/heudiconv/heuristics
mkdir -p /usr/local/share/doc/heudiconv/examples/heuristics
mkdir -p /usr/local/bin
install -t /usr/local/bin bin/heudiconv
install: illegal option -- t
usage: install [-bCcpSsv] [-B suffix] [-f flags] [-g group] [-m mode]
               [-o owner] file1 file2
       install [-bCcpSsv] [-B suffix] [-f flags] [-g group] [-m mode]
               [-o owner] file1 ... fileN directory
       install -d [-v] [-g group] [-m mode] [-o owner] directory ...
make: *** [install] Error 64

Add cluster distributed conversion support

PR #32 merged a bunch of functionality into heudiconv. As part of the merge process, distributed conversion support was removed. This should be added back, especially if we are going to run this on a large collection of DICOMs.

@mgxd - this would be a good exercise to test your python-fu and nipype-fu

ImportError: No module named configparser

When I try to run (I've installed the heudiconv Docker image from https://hub.docker.com/r/nipy/heudiconv/)

docker run --rm -it -v $PWD:/data nipy/heudiconv -d /data/%s/YAROSLAV_DBIC-TEST1/HEAD_ADVANCED_APPLICATIONS_LIBRARIES_20160824_104430_780000/*/*IMA -s PHANTOM1_3 -f /data/convertall.py -c dcm2niix -b -o /data/output

I get this error:

INFO: Processing 128 dicoms
Traceback (most recent call last):
  File "/usr/local/bin/heudiconv", line 849, in <module>
    main()
  File "/usr/local/bin/heudiconv", line 845, in main
    is_bids=args.bids)
  File "/usr/local/bin/heudiconv", line 731, in convert_dicoms
    sourcedir=sourcedir)
  File "/usr/local/bin/heudiconv", line 461, in convert
    from nipype import Function, Node
  File "/opt/conda/lib/python2.7/site-packages/nipype/__init__.py", line 11, in <module>
    from .utils.config import NipypeConfig
  File "/opt/conda/lib/python2.7/site-packages/nipype/utils/config.py", line 15, in <module>
    import configparser
ImportError: No module named configparser

thanks,

Piero

JSON file is not as complete as dcmstack output

Hi, thanks for your work, it really helps me a lot.

I have a question about the JSON file: the information in your JSON file is not complete compared with the JSON file from dcmstack (in your pipeline, you also use the dcmstack NiftiWrapper to get the metadata). So I want to ask why you keep only part of the metadata?

I know that dcmstack doesn't work for Philips machines; is that the reason you keep just this metadata? And is heudiconv stable across all machines: Philips, Siemens, GE?

Here is the result that i used heudiconv and dcmstack to get the jsons:

heudiconv:
{ "Manufacturer": "Siemens", "ManufacturersModelName": "Prisma_fit", "ProcedureStepDescription": "PRISMA_PREV_DEMALS", "ScanningSequence": "EP", "SequenceVariant": "SK_SP", "SeriesDescription": "ep2d_diff_FREE68_p2FAD_2.5mm_iso", "BodyPartExamined": "BRAIN", "ProtocolName": "ep2d_diff_FREE68_p2FAD_2.5mm_iso", "SequenceName": "_ep_b0", "ImageType": ["ORIGINAL", "PRIMARY", "DIFFUSION", "NONE", "ND"], "AcquisitionDateTime": "2015-10-08T16:36:36.362500", "MagneticFieldStrength": 3, "FlipAngle": 90, "EchoTime": 0.09, "RepetitionTime": 7.3, "EffectiveEchoSpacing": 0.000360002, "PhaseEncodingDirection": "j-", "ConversionSoftware": "dcm2niix", "ConversionSoftwareVersion": "v1.0.20170411 GCC4.8.4" }

dcmstack:
{ "global": { "const": { "SpecificCharacterSet": "ISO_IR 100", "ImageType": [ "ORIGINAL", "PRIMARY", "DIFFUSION", "NONE", "ND" ], "StudyTime": "161446.658000", "SeriesTime": "163643.455000", "AccessionNumber": "", "Modality": "MR", "Manufacturer": "SIEMENS", "SeriesDescription": "ep2d_diff_FREE68_p2FAD_2.5mm_iso", "ManufacturerModelName": "Prisma_fit", "BodyPartExamined": "BRAIN", "ScanningSequence": "EP", "SequenceVariant": [ "SK", "SP" ], "ScanOptions": "FS", "MRAcquisitionType": "2D", "AngioFlag": "N", "SliceThickness": 2.5, "RepetitionTime": 7300.0, "EchoTime": 90.0, "NumberOfAverages": 1.0, "ImagingFrequency": 123.252999, "ImagedNucleus": "1H", "EchoNumbers": 1, "MagneticFieldStrength": 3.0, "SpacingBetweenSlices": 2.5, "NumberOfPhaseEncodingSteps": 95, "EchoTrainLength": 47, "PercentSampling": 100.0, "PercentPhaseFieldOfView": 100.0, "PixelBandwidth": 1580.0, "SoftwareVersions": "syngo MR D13D", "ProtocolName": "ep2d_diff_FREE68_p2FAD_2.5mm_iso", "TimeOfLastCalibration": [ "123723.000000", "123723.000000" ], "TransmitCoilName": "Body", "AcquisitionMatrix": [ 96, 0, 0, 96 ], "InPlanePhaseEncodingDirection": "COL", "FlipAngle": 90.0, "VariableFlipAngleFlag": "N", "SAR": 0.31044795430682, "dBdt": 0.0, "StudyID": "1", "SeriesNumber": 8, "ImageOrientationPatient": [ 1.0, 0.0, 0.0, 0.0, 0.97591675922037, -0.2181432535579 ], "PositionReferenceIndicator": "", "SamplesPerPixel": 1, "PhotometricInterpretation": "MONOCHROME2", "Rows": 96, "Columns": 96, "PixelSpacing": [ 2.5, 2.5 ], "BitsAllocated": 16, "BitsStored": 12, "HighBit": 11, "PixelRepresentation": 0, "SmallestImagePixelValue": 0, "WindowCenterWidthExplanation": "Algo1", "PerformedProcedureStepStartTime": "161446.713000", "CsaImage.UsedChannelString": "XXXXXXXXXXXXXXXXXXXX", "CsaImage.MeasuredFourierLines": 0, "CsaImage.ImaPATModeText": "p2", "CsaImage.AcquisitionMatrixText": "96*96", "CsaImage.EchoLinePosition": 48, "CsaImage.BandwidthPerPixelPhaseEncode": 28.935, "CsaImage.RFSWDDataType": "predicted", 
"CsaImage.ImaRelTablePosition": [ 0, 0, 0 ], "CsaImage.PhaseEncodingDirectionPositive": 1, "CsaImage.SequenceMask": 134217732, "CsaImage.EchoPartitionPosition": 32, "CsaImage.NonPlanarImage": 0, "CsaImage.GSWDDataType": "predicted", "CsaImage.MultistepIndex": 0, "CsaImage.ImaAbsTablePosition": [ 0, 0, -1279 ], "CsaImage.RealDwellTime": 3300, "CsaImage.ImaCoilString": "HE1-4;NE1,2", "CsaImage.EchoColumnPosition": 48, "CsaImage.ImageType4MF": [ "ORIGINAL", "PRIMARY", "DIFFUSION", "NONE", "", "" ], "CsaImage.ImageHistory": [ "ChannelMixing:ND=true_CMM=1_CDM=1", "ACC1", "", "", "", "" ], "CsaSeries.Laterality4MF": "U", "CsaSeries.TalesReferencePower": 1121.60244, "CsaSeries.Operation_mode_flag": 0, "CsaSeries.dBdt_thresh": 0.0, "CsaSeries.ProtocolChangeHistory": 0, "CsaSeries.GradientDelayTime": [ 36.0, 35.0, 31.0 ], "CsaSeries.SARMostCriticalAspect": [ 5.33599974, 2.2896638, 0.0 ], "CsaSeries.B1rms": [ 7.07106781, 1.69914282 ], "CsaSeries.PATModeText": "p2", "CsaSeries.RelTablePosition": [ 0, 0, 0 ], "CsaSeries.NumberOfPrescans": 0, "CsaSeries.dBdt_limit": 0.0, "CsaSeries.Stim_lim": [ 42.70069885, 23.97319984, 36.48099899 ], "CsaSeries.PatReinPattern": "1;HFS;50.00;62.00;1;0;0;-397572668", "CsaSeries.B1rmsSupervision": "NO", "CsaSeries.PhaseSliceOversampling": "NONE", "CsaSeries.ReadoutGradientAmplitude": 0.0, "CsaSeries.MrProtocolVersion": 41340006, "CsaSeries.RFSWDMostCriticalAspect": "Bore Local", "CsaSeries.SequenceFileOwner": "SIEMENS", "CsaSeries.GradientMode": "Fast*", "CsaSeries.EchoTrainLength": 47, "CsaSeries.SliceArrayConcatenations": 1, "CsaSeries.FlowCompensation": "No", "CsaSeries.TransmitterCalibration": 254.388132, "CsaSeries.Isocentered": 0, "CsaSeries.AbsTablePosition": -1279, "CsaSeries.ReadoutOS": 2.0, "CsaSeries.dBdt_max": 0.0, "CsaSeries.RFSWDOperationMode": 0, "CsaSeries.SelectionGradientAmplitude": 0.0, "CsaSeries.PhaseGradientAmplitude": 0.0, "CsaSeries.RfWatchdogMask": 0, "CsaSeries.CoilForGradient2": "AS82", "CsaSeries.Stim_mon_mode": 2, 
"CsaSeries.CoilId": [ 255, 0, 0, 0, 0, 4868, 4867, 0, 0, 0, 0 ], "CsaSeries.Stim_max_ges_norm_online": 0.62531626, "CsaSeries.CoilString": "HE1-4;NE1,2", "CsaSeries.CoilForGradient": "void", "CsaSeries.DICOMAcquisitionContrast": "", "CsaSeries.TablePositionOrigin": [ 0, 0, -1279 ], "CsaSeries.MiscSequenceParam": [ 40, 68, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 524288, 0, 0, 0, 0, 0, 2, 11128, 0, 0, 0, 0, 0, 0 ], "CsaSeries.SafetyStandard": "IEC", "CsaSeries.GradientEchoTrainLength": 47, "CsaSeries.LongModelName": "NUMARIS/4", "CsaSeries.DICOMImageFlavor": "", "CsaSeries.Stim_faktor": 1.0, "CsaSeries.SW_korr_faktor": 1.0, "CsaSeries.Sed": [ 1000000.0, 308.79541916, 308.79541916 ], "CsaSeries.PositivePCSDirections": "+LPH", "CsaSeries.SliceResolution": 1.0, "CsaSeries.Stim_max_online": [ 6.70691442, 3.44702697, 21.44870186 ], "CsaSeries.t_puls_max": 0.0, "CsaSeries.RFEchoTrainLength": 1, "CsaSeries.MrPhoenixProtocol.ulVersion": 41340006, "CsaSeries.MrPhoenixProtocol.tSequenceFileName": "%SiemensSeq%\\ep2d_diff", "CsaSeries.MrPhoenixProtocol.tProtocolName": "ep2d_diff_FREE68_p2FAD_2.5mm_iso", "CsaSeries.MrPhoenixProtocol.tdefaultEVAProt": "%SiemensEvaDefProt%\\DTI\\DTI.evp", "CsaSeries.MrPhoenixProtocol.tReferenceImage0": "1.3.12.2.1107.5.2.43.67048.2015100816151328731925230", "CsaSeries.MrPhoenixProtocol.tReferenceImage1": "1.3.12.2.1107.5.2.43.67048.2015100816151616715125234", "CsaSeries.MrPhoenixProtocol.tReferenceImage2": "1.3.12.2.1107.5.2.43.67048.201510081615194716725238", "CsaSeries.MrPhoenixProtocol.lScanRegionPosTra": 0.0, "CsaSeries.MrPhoenixProtocol.ucScanRegionPosValid": 1, "CsaSeries.MrPhoenixProtocol.lPtabAbsStartPosZ": -1279, "CsaSeries.MrPhoenixProtocol.bPtabAbsStartPosZValid": 1, "CsaSeries.MrPhoenixProtocol.ucTablePositioningMode": 1, "CsaSeries.MrPhoenixProtocol.ucEnableNoiseAdjust": 1, "CsaSeries.MrPhoenixProtocol.lContrasts": 1, "CsaSeries.MrPhoenixProtocol.lCombinedEchoes": 1, 
"CsaSeries.MrPhoenixProtocol.ucEnableIntro": 1, "CsaSeries.MrPhoenixProtocol.ucDisableChangeStoreImages": 1, "CsaSeries.MrPhoenixProtocol.ucAAMode": 1, "CsaSeries.MrPhoenixProtocol.ucAARegionMode": 1, "CsaSeries.MrPhoenixProtocol.ucAARefMode": 1, "CsaSeries.MrPhoenixProtocol.ucReconstructionMode": 1, "CsaSeries.MrPhoenixProtocol.ucOneSeriesForAllMeas": 1, "CsaSeries.MrPhoenixProtocol.ucPHAPSMode": 1, "CsaSeries.MrPhoenixProtocol.ulWrapUpMagn": 1, "CsaSeries.MrPhoenixProtocol.ucDixon": 1, "CsaSeries.MrPhoenixProtocol.ucDixonSaveOriginal": 1, "CsaSeries.MrPhoenixProtocol.ucWaitForPrepareCompletion": 1, "CsaSeries.MrPhoenixProtocol.lAverages": 1, "CsaSeries.MrPhoenixProtocol.dAveragesDouble": 1.0, "CsaSeries.MrPhoenixProtocol.lScanTimeSec": 518, "CsaSeries.MrPhoenixProtocol.lTotalScanTimeSec": 520, "CsaSeries.MrPhoenixProtocol.dRefSNR": 60930.28803, "CsaSeries.MrPhoenixProtocol.dRefSNR_VOI": 60930.28803, "CsaSeries.MrPhoenixProtocol.ucInlineEva": 1, "CsaSeries.MrPhoenixProtocol.ucMotionCorr": 1, "CsaSeries.MrPhoenixProtocol.ucCineMode": 1, "CsaSeries.MrPhoenixProtocol.ucSequenceType": 4, "CsaSeries.MrPhoenixProtocol.ucCoilCombineMode": 2, "CsaSeries.MrPhoenixProtocol.ucFlipAngleMode": 1, "CsaSeries.MrPhoenixProtocol.lTOM": 1, "CsaSeries.MrPhoenixProtocol.lProtID": -939, "CsaSeries.MrPhoenixProtocol.lSequenceID": 400, "CsaSeries.MrPhoenixProtocol.ucReadOutMode": 1, "CsaSeries.MrPhoenixProtocol.ucBold3dPace": 1, "CsaSeries.MrPhoenixProtocol.ucForcePositioningOnNDIS": 1, "CsaSeries.MrPhoenixProtocol.ucTmapB0Correction": 1, "CsaSeries.MrPhoenixProtocol.ucTmapEval": 1, "CsaSeries.MrPhoenixProtocol.ucTmapImageType": 1, "CsaSeries.MrPhoenixProtocol.ulOrganUnderExamination": 1, "CsaSeries.MrPhoenixProtocol.dTissueT1": 10.0, "CsaSeries.MrPhoenixProtocol.dTissueT2": 5.0, "CsaSeries.MrPhoenixProtocol.lInvContrasts": 1, "CsaSeries.MrPhoenixProtocol.ulReaquisitionMode": 1, "CsaSeries.MrPhoenixProtocol.sProtConsistencyInfo.tMeasuredBaselineString": "N4_VD13D_LATEST_20130810", 
"CsaSeries.MrPhoenixProtocol.sProtConsistencyInfo.tBaselineString": "N4_VD13D_LATEST_20130810", "CsaSeries.MrPhoenixProtocol.sProtConsistencyInfo.tSystemType": "021", "CsaSeries.MrPhoenixProtocol.sProtConsistencyInfo.flNominalB0": 2.893620014, "CsaSeries.MrPhoenixProtocol.sProtConsistencyInfo.flGMax": 34.0, "CsaSeries.MrPhoenixProtocol.sProtConsistencyInfo.flRiseTime": 5.0, "CsaSeries.MrPhoenixProtocol.sProtConsistencyInfo.lMaximumNofRxReceiverChannels": 64, "CsaSeries.MrPhoenixProtocol.sProtConsistencyInfo.ulConvFromVersion": 41340006, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.bEddyCompensationValid": 1, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.bB0CompensationValid": 1, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.bCrossTermCompensationValid": 1, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.lOffsetX": 1086, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.lOffsetY": -11305, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.lOffsetZ": 11873, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.bOffsetValid": 1, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.lDelayX": 36, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.lDelayY": 35, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.lDelayZ": 31, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.bDelayValid": 1, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.flSensitivityX": 0.0001609030005, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.flSensitivityY": 0.0001610220061, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.flSensitivityZ": 0.0001654569933, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.bSensitivityValid": 1, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.flGSWDMinRiseTime": 6.0, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.bShimCurrentValid": 1, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.ucMode": 17, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflAmplitude[0]": 0.0004820229951, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflAmplitude[1]": 0.003751589917, 
"CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflAmplitude[2]": 0.0005103289732, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflAmplitude[3]": 0.0001303360041, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflAmplitude[4]": 0.0006237300113, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflTimeConstant[0]": 2.556329966, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflTimeConstant[1]": 0.6592119932, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflTimeConstant[2]": 0.1530800015, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflTimeConstant[3]": 0.003486339934, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationX.aflTimeConstant[4]": 0.0005000000237, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflAmplitude[0]": 0.0008187500061, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflAmplitude[1]": 0.0063786502, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflAmplitude[2]": -0.001653269981, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflAmplitude[3]": 4.056790203e-06, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflAmplitude[4]": 0.0003440979926, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflTimeConstant[0]": 2.509150028, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflTimeConstant[1]": 0.6780369878, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflTimeConstant[2]": 0.4560959935, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflTimeConstant[3]": 0.04625780135, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationY.aflTimeConstant[4]": 0.001560249948, 
"CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflAmplitude[0]": -0.0003919939918, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflAmplitude[1]": -0.003124210052, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflAmplitude[2]": -0.0006287390133, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflAmplitude[3]": 4.59180992e-05, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflAmplitude[4]": -0.0006129500107, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflTimeConstant[0]": 3.197999954, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflTimeConstant[1]": 0.5136190057, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflTimeConstant[2]": 0.1338600069, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflTimeConstant[3]": 0.002825029893, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sEddyCompensationZ.aflTimeConstant[4]": 0.0005006300053, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationX.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationX.aflAmplitude[0]": 0.008810830303, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationX.aflAmplitude[1]": 0.0338425003, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationX.aflAmplitude[2]": 0.0004275010142, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationX.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationX.aflTimeConstant[0]": 1.999819994, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationX.aflTimeConstant[1]": 0.5674759746, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationX.aflTimeConstant[2]": 0.01461779978, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationY.aflAmplitude.__attribute__.size": 5, 
"CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationY.aflAmplitude[0]": 0.002319379942, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationY.aflAmplitude[1]": 0.07534249872, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationY.aflAmplitude[2]": 0.003013229929, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationY.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationY.aflTimeConstant[0]": 1.470679998, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationY.aflTimeConstant[1]": 0.6624130011, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationY.aflTimeConstant[2]": 0.1112419963, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationZ.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationZ.aflAmplitude[0]": 0.05862779915, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationZ.aflAmplitude[1]": 0.1553879976, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationZ.aflAmplitude[2]": 0.01224910002, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationZ.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationZ.aflTimeConstant[0]": 0.874168992, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationZ.aflTimeConstant[1]": 0.3712719977, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sB0CompensationZ.aflTimeConstant[2]": 0.05150299892, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationXY.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationXY.aflAmplitude[0]": -0.0002011189936, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationXY.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationXY.aflTimeConstant[0]": 0.3982659876, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationXZ.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationXZ.aflAmplitude[0]": 0.0002803739917, 
"CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationXZ.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationXZ.aflTimeConstant[0]": 0.6427000165, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationYX.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationYX.aflAmplitude[0]": 0.0001223629952, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationYX.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationYX.aflTimeConstant[0]": 0.3541249931, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationYZ.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationYZ.aflAmplitude[0]": 0.0002668160014, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationYZ.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationYZ.aflTimeConstant[0]": 0.4557920098, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationZX.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationZX.aflAmplitude[0]": 0.0004880850029, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationZX.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationZX.aflTimeConstant[0]": 0.417234987, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationZY.aflAmplitude.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationZY.aflAmplitude[0]": -0.0002583040041, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationZY.aflTimeConstant.__attribute__.size": 5, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.sCrossTermCompensationZY.aflTimeConstant[0]": 0.442979008, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.alShimCurrent.__attribute__.size": 15, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.alShimCurrent[0]": 273, 
"CsaSeries.MrPhoenixProtocol.sGRADSPEC.alShimCurrent[1]": -90, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.alShimCurrent[2]": -269, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.alShimCurrent[3]": 20, "CsaSeries.MrPhoenixProtocol.sGRADSPEC.alShimCurrent[4]": 29, "CsaSeries.MrPhoenixProtocol.sTXSPEC.bTxScaleFactorsValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.lNoOfTraPulses": 5, "CsaSeries.MrPhoenixProtocol.sTXSPEC.lBCExcitationMode": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.lBCSeqExcitationMode": 4, "CsaSeries.MrPhoenixProtocol.sTXSPEC.flKDynMagnitudeMin": 0.5, "CsaSeries.MrPhoenixProtocol.sTXSPEC.flKDynMagnitudeMax": 1.5, "CsaSeries.MrPhoenixProtocol.sTXSPEC.flKDynMagnitudeClipLow": 1.0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.flKDynMagnitudeClipHigh": 1.0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.flKDynPhaseMax": 0.6981319785, "CsaSeries.MrPhoenixProtocol.sTXSPEC.flKDynPhaseClip": 0.1745329946, "CsaSeries.MrPhoenixProtocol.sTXSPEC.bKDynValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.ucRFPulseType": 2, "CsaSeries.MrPhoenixProtocol.sTXSPEC.ucExcitMode": 32, "CsaSeries.MrPhoenixProtocol.sTXSPEC.ucSimultaneousExcitation": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.ucBCExcitationModeValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.lB1ShimMode": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo.__attribute__.size": 2, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].tNucleus": "1H", "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].lCoilSelectIndex": 0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].lFrequency": 123252999, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].bFrequencyValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].flReferenceAmplitude": 254.3881378, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].bReferenceAmplitudeValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].flCompProtectionRefAmpl": 254.3881378, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].bCompProtectionRefAmplValid": 1, 
"CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].flCompProtectionB1PlusRefAmpl": 249.2962952, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].bCompProtectionB1PlusRefAmplValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].flAmplitudeCorrection": 1.0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].bAmplitudeCorrectionValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].bCompProtectionValuesValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.MaxOfflineTxAmpl": 588.7976685, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.WorstCasePulseScaleRefAmpl": 254.3881378, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.WorstCaseMaxOfflineTxAmpl": 588.7976685, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrixValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrixValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.MaxOnlineTxAmpl.__attribute__.size": 8, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.MaxOnlineTxAmpl[0]": 425.1856079, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.MaxOnlineTxAmpl[1]": 423.4075623, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.WorstCaseMaxOnlineTxAmpl.__attribute__.size": 8, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.WorstCaseMaxOnlineTxAmpl[0]": 425.1856079, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.WorstCaseMaxOnlineTxAmpl[1]": 423.4075623, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.adGainVariation.__attribute__.size": 8, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.adGainVariation[0]": 1.021249056, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.adGainVariation[1]": 
1.016978264, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.Size1": 3, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.Size2": 3, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.ComplexData.__attribute__.size": 9, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.ComplexData[0].dRe": -0.9830442292, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.ComplexData[0].dIm": 0.5268405586, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.ComplexData[1].dRe": 0.01216587696, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.ComplexData[1].dIm": -0.02734949342, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.ComplexData[3].dRe": 0.01742247754, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.ComplexData[3].dIm": -0.02635246419, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.ComplexData[4].dRe": -1.076079495, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.DecouplingMatrix.ComplexData[4].dIm": 0.4388834552, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ZZMatrixVector.__attribute__.size": 0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.Size1": 2, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.Size2": 2, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.ComplexData.__attribute__.size": 4, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.ComplexData[0].dRe": 0.3188383758, 
"CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.ComplexData[0].dIm": -0.174032503, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.ComplexData[1].dRe": -0.04570575545, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.ComplexData[1].dIm": 0.04655692378, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.ComplexData[2].dRe": -0.04564337043, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.ComplexData[2].dIm": 0.0464012541, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.ComplexData[3].dRe": 0.4606432637, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].CompProtectionValues.ScatterMatrix.ComplexData[3].dIm": 0.1823795906, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[0].aTxScaleFactorSlice.__attribute__.size": 0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[1].lCoilSelectIndex": -1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[1].CompProtectionValues.MaxOnlineTxAmpl.__attribute__.size": 8, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[1].CompProtectionValues.WorstCaseMaxOnlineTxAmpl.__attribute__.size": 8, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[1].CompProtectionValues.adGainVariation.__attribute__.size": 8, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[1].CompProtectionValues.DecouplingMatrix.ComplexData.__attribute__.size": 0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[1].CompProtectionValues.ZZMatrixVector.__attribute__.size": 0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[1].CompProtectionValues.ScatterMatrix.ComplexData.__attribute__.size": 0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.asNucleusInfo[1].aTxScaleFactorSlice.__attribute__.size": 0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE.__attribute__.size": 128, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[0].tName": 
"ExtExciteRF", "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[0].flAmplitude": 236.5644226, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[0].bAmplitudeValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[1].tName": "RTEIdentDRF1", "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[1].flAmplitude": 313.0079651, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[1].bAmplitudeValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[2].tName": "RTEIdentDRF2", "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[2].flAmplitude": 313.0079651, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[2].bAmplitudeValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[3].tName": "SLoopFCSatNS", "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[3].flAmplitude": 62.70937729, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[3].bAmplitudeValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[4].tName": "AddCSaCSatNS", "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[4].flAmplitude": 62.70937729, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aRFPULSE[4].bAmplitudeValid": 1, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aTxScaleFactor.__attribute__.size": 8, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aTxScaleFactor[0].dRe": 0.7071, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aTxScaleFactor[1].dIm": 0.7071, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aTxScaleFactor[2].dRe": 1.0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aTxScaleFactor[3].dRe": 1.0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aTxScaleFactor[4].dRe": 1.0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aTxScaleFactor[5].dRe": 1.0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aTxScaleFactor[6].dRe": 1.0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aTxScaleFactor[7].dRe": 1.0, "CsaSeries.MrPhoenixProtocol.sTXSPEC.aPTXRFPulse.__attribute__.size": 0, "CsaSeries.MrPhoenixProtocol.sRXSPEC.lGain": 1, "CsaSeries.MrPhoenixProtocol.sRXSPEC.bGainValid": 1, "CsaSeries.MrPhoenixProtocol.sRXSPEC.UseDoubleDataRate": 0, "CsaSeries.MrPhoenixProtocol.sRXSPEC.asNucleusInfo.__attribute__.size": 2, 
"CsaSeries.MrPhoenixProtocol.sRXSPEC.asNucleusInfo[0].tNucleus": "1H", "CsaSeries.MrPhoenixProtocol.sRXSPEC.asNucleusInfo[0].lCoilSelectIndex": 0, "CsaSeries.MrPhoenixProtocol.sRXSPEC.asNucleusInfo[1].lCoilSelectIndex": -1, "CsaSeries.MrPhoenixProtocol.sRXSPEC.alVariCapVoltages.__attribute__.size": 4, "CsaSeries.MrPhoenixProtocol.sRXSPEC.alDwellTime.__attribute__.size": 128, "CsaSeries.MrPhoenixProtocol.sRXSPEC.alDwellTime[0]": 3300, "CsaSeries.MrPhoenixProtocol.sAdjData.uiAdjFreMode": 1, "CsaSeries.MrPhoenixProtocol.sAdjData.uiAdjShimMode": 2, "CsaSeries.MrPhoenixProtocol.sAdjData.uiAdjWatSupMode": 1, "CsaSeries.MrPhoenixProtocol.sAdjData.uiAdjRFMapMode": 1, "CsaSeries.MrPhoenixProtocol.sAdjData.uiAdjMDSMode": 1, "CsaSeries.MrPhoenixProtocol.sAdjData.uiAdjTableTolerance": 1, "CsaSeries.MrPhoenixProtocol.sAdjData.uiAdjProtID": 131, "CsaSeries.MrPhoenixProtocol.sAdjData.uiAdjFreProtRelated": 1, "CsaSeries.MrPhoenixProtocol.sAdjData.uiDefaultExcitationModeImaging": 0, "CsaSeries.MrPhoenixProtocol.sAdjData.lCoupleAdjVolTo": 1, "CsaSeries.MrPhoenixProtocol.alTR.__attribute__.size": 128, "CsaSeries.MrPhoenixProtocol.alTR[0]": 7300000, "CsaSeries.MrPhoenixProtocol.alTI.__attribute__.size": 128, "CsaSeries.MrPhoenixProtocol.alTI[0]": 2500000, "CsaSeries.MrPhoenixProtocol.alTD.__attribute__.size": 128, "CsaSeries.MrPhoenixProtocol.alTE.__attribute__.size": 128, "CsaSeries.MrPhoenixProtocol.alTE[0]": 90000, "CsaSeries.MrPhoenixProtocol.acFlowComp.__attribute__.size": 128, "CsaSeries.MrPhoenixProtocol.acFlowComp[0]": 1, "CsaSeries.MrPhoenixProtocol.sSliceArray.lSize": 59, "CsaSeries.MrPhoenixProtocol.sSliceArray.lConc": 1, "CsaSeries.MrPhoenixProtocol.sSliceArray.ucMode": 4, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice.__attribute__.size": 128, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[0].dThickness": 2.5, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[0].dPhaseFOV": 240.0, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[0].dReadoutFOV": 240.0, 
"CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[0].sPosition.dCor": -29.74465083, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[0].sPosition.dTra": -77.71859815, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[0].sNormal.dCor": 0.2181432414, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[0].sNormal.dTra": 0.9759167619, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[1].dThickness": 2.5, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[1].dPhaseFOV": 240.0, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[1].dReadoutFOV": 240.0, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[1].sPosition.dCor": -29.19929273, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[1].sPosition.dTra": -75.27880625, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[1].sNormal.dCor": 0.2181432414, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[1].sNormal.dTra": 0.9759167619, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[2].dThickness": 2.5, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[2].dPhaseFOV": 240.0, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[2].dReadoutFOV": 240.0, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[2].sPosition.dCor": -28.65393462, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[2].sPosition.dTra": -72.83901434, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[2].sNormal.dCor": 0.2181432414, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[2].sNormal.dTra": 0.9759167619, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[3].dThickness": 2.5, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[3].dPhaseFOV": 240.0, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[3].dReadoutFOV": 240.0, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[3].sPosition.dCor": -28.10857652, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[3].sPosition.dTra": -70.39922244, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[3].sNormal.dCor": 0.2181432414, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[3].sNormal.dTra": 0.9759167619, 
"CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[4].dThickness": 2.5, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[4].dPhaseFOV": 240.0, "CsaSeries.MrPhoenixProtocol.sSliceArray.asSlice[4].dReadoutFOV": 240.0,

Really hope to hear from you:)

How to run heudiconv

Hi

Sorry for being a newbie. My directory structure is: Project/Subject/Session/Scans/Images (.dcm)

I tried:

heudiconv -d ~/Desktop/Drive\ Images/DiabC/{subject}/*/*/*.dcm -s $sub -f /usr/local/share/heudiconv/heuristics/convertall.py -c dcm2niix -o ~/Documents/Data/

Got error:

Traceback (most recent call last):
  File "/usr/local/bin/heudiconv", line 2120, in <module>
    main()
  File "/usr/local/bin/heudiconv", line 2112, in main
    return _main(args)
  File "/usr/local/bin/heudiconv", line 1876, in _main
    grouping=grouping)
  File "/usr/local/bin/heudiconv", line 1457, in get_study_sessions
    assert sids
AssertionError

Am I doing this right? Thank you.

Kevin

Items in 's' for the heuristics file?

I'm having a difficult time gleaning what must be done to have a functioning heuristics file, as this doesn't seem to be particularly well-documented at the moment.

Specifically, when iterating over the 'seqinfo' list, I am uncertain what each index in the 's' list contains. The code here is illuminating, although I'm a bit unsure what 'total', 'series', and 'size' represent at a glance (with apologies if this turns out to be a slightly silly question).
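For what it's worth, here is a minimal sketch of the kind of inspection a heuristic typically performs on `seqinfo`. The `SeqInfo` stand-in below is a simplified assumption; the real namedtuple carries many more fields (`total_files_till_now`, `TR`, `TE`, etc., as the dump in a later issue shows):

```python
from collections import namedtuple

# Simplified stand-in for heudiconv's SeqInfo namedtuple; the real one
# has many more fields (total_files_till_now, TR, TE, ...).
SeqInfo = namedtuple('SeqInfo', 'series_id protocol_name dim1 dim2 dim3 dim4')

def describe(seqinfo):
    """Pair each series with its matrix/volume shape, which is what a
    heuristic usually inspects when classifying runs."""
    return [(s.series_id, (s.dim1, s.dim2, s.dim3, s.dim4)) for s in seqinfo]

scans = [SeqInfo('2-MPRAGE', 'MPRAGE T1', 320, 220, 208, 1)]
print(describe(scans))  # [('2-MPRAGE', (320, 220, 208, 1))]
```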

Should heuristics be refactored as classes?

The current version of the code takes one of the files under heuristics as an argument. The file has an implementation of the function infotodict that parses seqinfo in some particular way.

This implies some code repetition (e.g. the create_key function is repeated in most of the files) and it's hard to test.

I think it could be a good idea to implement the functionality in these files as classes. New heuristics could be derived from a base class defining just create_key and infotodict. The main script would get an argument with the name of the class as a string and create the proper class using some factory for heuristics.
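A sketch of what that refactoring could look like; class and method names here are hypothetical, not heudiconv's actual API:

```python
# Hypothetical base class for heuristics: shared create_key lives here,
# concrete heuristics only override infotodict.
class BaseHeuristic:
    @staticmethod
    def create_key(template, outtype=('nii.gz',), annotation_classes=None):
        if not template:
            raise ValueError('Template must be a valid format string')
        return template, outtype, annotation_classes

    def infotodict(self, seqinfo):
        raise NotImplementedError

class ConvertAll(BaseHeuristic):
    def infotodict(self, seqinfo):
        key = self.create_key('run{item:03d}')
        return {key: [s.series_id for s in seqinfo]}

# The main script could map a command-line name to the class via a factory:
HEURISTICS = {'convertall': ConvertAll}
heuristic = HEURISTICS['convertall']()
```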

Silence nipype unless in < INFO logging mode

ATM it is too talkative for my liking -- making it easy to miss important warnings etc.
@satra - how in current heudiconv code to make nipype silent?

I have tried

  1. the standard Python way, e.g.
     logging.getLogger('nipype').setLevel(logging.WARNING)
  2. modifying a custom config file:
     $> cat ~/.nipype/nipype.cfg
     [DEFAULT]
     interface_level = WARNING
     filemanip_level = WARNING
     workflow_level = WARNING

but it just keeps bringing all the useful INFO ;)
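For reference, a more aggressive variant of the standard approach, lowering the handlers as well as the loggers; whether this suffices depends on how nipype wires its handlers in a given version:

```python
import logging

# Silence nipype and its sub-loggers; nipype attaches its own handlers,
# so those may need to be lowered too, not just the logger levels.
for name in ('nipype', 'nipype.workflow', 'nipype.interface', 'nipype.utils'):
    lg = logging.getLogger(name)
    lg.setLevel(logging.WARNING)
    for h in lg.handlers:
        h.setLevel(logging.WARNING)
```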

GE scan, all TRs collapsed into dim3

I'm trying to use heudiconv, called through docker on Ubuntu 17.04 to organize some files that were collected around 2011 by another lab member (long gone) on a GE scanner (different site). So, unfortunately I don't know much about the data. Each run (either functional or anatomical) is associated with one folder (e.g., dicomDir). To call heudiconv I'll use something like:

docker run --rm -it -v $PWD:/data nipy/heudiconv -d /data/{subject}/dicomDir -f convertall.py -c none -o outDir -s 01

But, each dicomDir contains one file for every image taken by the scanner. So, if there were 150 TRs of 64x64x35 images, heudiconv reads dicomDir as containing a single TR of a 64x64x5250 image.

Am I doing something wrong? Is there a way for heudiconv to figure out the appropriate dimensions? I can convert the files by calling dcm2niix directly:

dcm2niix -z y -b y -f outFileName -o outdir dicomDir

which appropriately creates a NIfTI file of size 64x64x35x150.

Thank you for your time.

Should -s argument be mandatory?

The convert_dicoms function gets the argument passed on the command line with -s as its first argument and loops through its values.

However, the else branch of this code (around line 568 in heudiconv) seems to be intended for an optional -s argument.

# TODO: RF into a function
# expand the input template
if sid:
    sdir = dicom_dir_template % sid
    # and see what matches
    fl = sorted(glob(sdir))
else:
    # we were given no subject so we consider dicom_dir_template to be
    # an actual directory to process
    if not isdir(dicom_dir_template):
        raise ValueError(
            "No subject id was provided, and dicom_dir_template=%s is not "
            "an existing directory to traverse" % str(dicom_dir_template)
        )
    fl = sorted(find_files('.*', topdir=dicom_dir_template))

Fails on bvecs/bval copy part of DWI conversion

I have a Philips DWI scan (nothing fancy, standard clinical protocol). heudiconv fails like this:

  File "/usr/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 1128, in aggregate_outputs
    raise FileNotFoundError(msg)
nipype.utils.filemanip.FileNotFoundError: File/Directory '['/tmp/heudiconvtmpm4a6EF/convert/20130717_141500DTIhigh2isoSENSEs701a1007.bvec']' not found for Dcm2nii output 'bvecs'.
Interface Dcm2nii failed to run.

That is, because the file it is looking for carries an x prefix -- which is strange, as dcm2nii was executed with this command:

Running: dcm2nii -a y -c y -b config.ini -v y -d y -e y -g y -i n -n y -o /tmp/heudiconvtmpm4a6EF/convert -p y -x n -f n

This could be considered a bug in nipype, dcm2nii, or heudiconv. It would be relatively simple to work around in heudiconv: check whether res.outputs.bvecs is present and a corresponding file exists. If it is present but the file doesn't exist, look for the same file with an x prefix.

What approach would you suggest?
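The workaround described above could be sketched as follows; `expected` would come from `res.outputs.bvecs` in heudiconv, here it is just a path argument:

```python
import os.path as op

# Sketch of the fallback: if the expected bvec file is missing,
# look for the same name with an 'x' prefix, as dcm2nii sometimes
# writes it that way.
def resolve_bvec(expected):
    if op.exists(expected):
        return expected
    candidate = op.join(op.dirname(expected), 'x' + op.basename(expected))
    if op.exists(candidate):
        return candidate
    raise IOError("bvec file not found: %s" % expected)
```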

heuristics/cmrr_heuristic.py lacks needed infotodict method

I was familiarizing myself with heudiconv and ran into this:

$> bin/heudiconv --dbg -d ../PHANTOM1_3/YAROSLAV_DBIC-TEST1/HEAD_ADVANCED_APPLICATIONS_LIBRARIES_20160824_104430_780000/ -s '' -f heuristics/cmrr_heuristic.py -c dcm2niix -o ../outputs/ -b
Traceback (most recent call last):
  File "bin/heudiconv", line 715, in <module>
    is_bids=args.bids)
  File "bin/heudiconv", line 595, in convert_dicoms
    info = mod.infotodict(seqinfo)
AttributeError: 'module' object has no attribute 'infotodict'

Add a flag to support multiple sessions?

It would be great if heudiconv could automatically recognize multiple sessions of dicoms from the same subject ID and organize nifti files into separate session folders.
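Later heudiconv versions do support this via a {session} placeholder in the directory template together with the -ss/--ses option; the paths below are hypothetical:

```shell
# Hypothetical layout; {session} expands like {subject} in the template.
heudiconv -d /data/{subject}/{session}/*/*.dcm \
    -s sub01 -ss ses01 \
    -f heuristic.py -c dcm2niix -b -o /data/bids
```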

'NoneType' object has no attribute 'groupdict'

Today I did "docker pull" and it now produces a new type of error. It would be greatly appreciated if you could help.

Error message is like this:

$ docker run --rm -it -v $PWD:/data nipy/heudiconv -d /data/{subject}/*/*IMA -s WCWsession1 -f /data/WCW_heuristic_1.py -c dcm2niix -b -o /data/output
INFO: Need to process 1 study sessions
INFO: PROCESSING STARTS: {'session': None, 'outdir': '/data/output/', 'subject': 'WCWsession1'}
INFO: Processing 2834 dicoms
INFO: Reloading existing filegroup.json because /data/output/.heudiconv/WCWsession1/info/WCWsession1.edit.txt exists
INFO: Doing conversion using dcm2niix
INFO: Converting /data/output/run001 (224 DICOMs) -> /data/output . Converter: dcm2niix . Output types: ('nii.gz',)
INFO: Executing node convert in dir: /tmp/heudiconvdcm9ccTKA/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f output -o /tmp/heudiconvdcm9ccTKA/convert -s n -v n /tmp/heudiconvdcm9ccTKA/convert/20170612_WCW.MR.COCOAN_FMRI_MBEPI.0015.0001.2017.06.12.16.07.00.956995.128218061.IMA
Traceback (most recent call last):
  File "/usr/local/bin/heudiconv", line 2120, in <module>
    main()
  File "/usr/local/bin/heudiconv", line 2112, in main
    return _main(args)
  File "/usr/local/bin/heudiconv", line 1975, in _main
    min_meta=args.minmeta)
  File "/usr/local/bin/heudiconv", line 1351, in convert_dicoms
    outdir=tdir)
  File "/usr/local/bin/heudiconv", line 929, in convert
    save_scans_key(item, outname_bids_files)
  File "/usr/local/bin/heudiconv", line 1062, in save_scans_key
    subj, ses = _find_subj_ses(f_name)
  File "/usr/local/bin/heudiconv", line 1035, in _find_subj_ses
    res = regex.search(f_name).groupdict()
AttributeError: 'NoneType' object has no attribute 'groupdict'

The heuristic code I used is as follows:

import os

def create_key(template, outtype=('nii.gz',), annotation_classes=None):
    if template is None or not template:
        raise ValueError('Template must be a valid format string')
    return template, outtype, annotation_classes

def infotodict(seqinfo):
    """Heuristic evaluator for determining which runs belong where

    allowed template fields - follow python string module:
    item: index within category
    subject: participant id
    seqitem: run number during scanning
    subindex: sub index within group
    """
    t1 = create_key('anat/sub-{subject}_T1')
    task = create_key('func/sub-{subject}_run-{item:02d}_task-{type}_bold')
    info = {t1: [], task: []}

    for s in seqinfo:
        if (s.dim3 == 224) and (s.dim4 == 1):
            info[t1] = [s.series_id]  # assign if a single series meets criteria
        elif (s.dim4 == 870) and ('r1_2.7iso_ap_64ch_mb8' in s.protocol_name):
            info[task].append({'item': s.series_id, 'type': 'painsound'})
        elif (s.dim4 == 870) and ('r2_ap_64ch_mb8' in s.protocol_name):
            info[task].append({'item': s.series_id, 'type': 'painsound'})
    return info
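The failure itself can be reproduced in isolation: _find_subj_ses applies a regex to the output filename, and when the filename contains no sub-/ses- pattern, re.search returns None, so calling .groupdict() raises the AttributeError. The pattern below is a hypothetical stand-in for heudiconv's actual one:

```python
import re

# Hypothetical stand-in for the pattern _find_subj_ses uses.
regex = re.compile(r'sub-(?P<subj>[^/_]+)(_ses-(?P<ses>[^/_]+))?')

m = regex.search('/data/output/run001.nii.gz')  # no 'sub-' in the path
assert m is None  # m.groupdict() here would raise the AttributeError
```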

add anonymization

  • face masking
  • ear masking
  • dicom metadata masking
  • teeth masking

Please add other types of anonymization that heudiconv should consider.

problem parsing specific set of files

I am having trouble parsing a specific set of files, which I think may be happening due to the presence of spaces rather than underscores in the series_description:

SeqInfo(total_files_till_now=211, example_dcm_file='EM2311_A.MR.RIC_Modified_Protocols_TIM_Protocols.2.10.20141005.105003.rjrf9w.dcm', series_id='2-MPRAGE T1 AX 0.8 mm TI-766', unspecified1='DICOM', unspecified2='-', unspecified3='-', dim1=320, dim2=220, dim3=208, dim4=1, TR=2.2, TE=2.83, protocol_name='MPRAGE T1 AX 0.8 mm TI-766', is_motion_corrected=False, is_derived=False, patient_id='EM2311', study_description='MRI106', referring_physician_name='EM2311', series_description='MPRAGE T1 AX 0.8 mm TI-766', sequence_name='*tfl3d1', image_type=('ORIGINAL', 'PRIMARY', 'M', 'ND'), accession_number='', patient_age='050Y', patient_sex='F')

The heuristic seems to run properly, but then this occurs when the file is being parsed:

INFO: Converting /Users/poldrack/data_unsynced/GOBS/GOBS_bids/EM2311/anat/sub-EM2311_run-1_T1w (208 DICOMs) -> /Users/poldrack/data_unsynced/GOBS/GOBS_bids/EM2311/anat . Converter: dcm2niix . Output types: ('nii.gz',)
INFO: Executing node convert in dir: /tmp/heudiconvdcmbLYWNS/convert
INFO: Running: dcm2niix -b n -z i -x n -t n -m n -f anat -o /tmp/heudiconvdcmbLYWNS/convert -s n -v n /tmp/heudiconvdcmbLYWNS/convert/EM2311_A.MR.RIC_Modified_Protocols_TIM_Protocols.2.10.20141005.105003.rjrf9w.dcm
INFO: Executing node embedder in dir: /tmp/heudiconvdcmbLYWNS/embedder
INFO: Post-treating /Users/poldrack/data_unsynced/GOBS/GOBS_bids/EM2311/anat/sub-EM2311_run-1_T1w.json file
Traceback (most recent call last):
  File "/usr/local/bin/heudiconv", line 2120, in <module>
    main()
  File "/usr/local/bin/heudiconv", line 2112, in main
    return _main(args)
  File "/usr/local/bin/heudiconv", line 1975, in _main
    min_meta=args.minmeta)
  File "/usr/local/bin/heudiconv", line 1351, in convert_dicoms
    outdir=tdir)
  File "/usr/local/bin/heudiconv", line 958, in convert
    treat_infofile(scaninfo)
  File "/usr/local/bin/heudiconv", line 1227, in treat_infofile
    j_pretty = json_dumps_pretty(j_slim, indent=2, sort_keys=True)
  File "/usr/local/bin/heudiconv", line 236, in json_dumps_pretty
    assert(j == j_)
AssertionError

Pre execution bids validation

It'd be great if we could validate BIDS compliance of a planned heudiconv run before executing it. This would shorten the debug loop and reduce user frustration.

unable to get bval/bvec: converted but not copied to output

I'm processing some Prisma data and noticed heudiconv isn't outputting bvals or bvecs. When I ran the conversion using command-line dcm2niix, the bvals/bvecs were successfully created.

I thought this was a problem with nipype's dcm2nii.py interface - but even after making these changes I was still unable to produce them. Any ideas as to what is causing this?

For reference, here is dcm2niix's stdout for my diffusion scans

"no module named heudiconv" from docker

I tried running heudiconv from the Docker container pulled from Docker Hub. This is the error I get:

INFO: Processing 3556 dicoms
Traceback (most recent call last):
  File "/usr/local/bin/heudiconv", line 849, in <module>
    main()
  File "/usr/local/bin/heudiconv", line 845, in main
    is_bids=args.bids)
  File "/usr/local/bin/heudiconv", line 682, in convert_dicoms
    mod = __import__(fname.split('.')[0])
ImportError: No module named heudiconv
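
The loader shown in the traceback does `__import__(fname.split('.')[0])`, which only works if the heuristic's directory is already on sys.path and the path contains no extra dots. A sketch of a more forgiving loader (an assumption about how this could be fixed, not heudiconv's current code):

```python
import os
import sys

def load_heuristic(heuristic_file):
    # Sketch of a more robust loader: temporarily put the heuristic's
    # directory on sys.path and import by bare module name, stripping
    # only the extension rather than splitting on the first dot.
    path, fname = os.path.split(os.path.realpath(heuristic_file))
    sys.path.insert(0, path)
    try:
        return __import__(os.path.splitext(fname)[0])
    finally:
        sys.path.pop(0)
```

With something like this, `-f /path/to/my.heuristic.py`-style paths and heuristics outside the container's working directory would import cleanly.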

template field 'ReferringPhysicianName' missing

Hi,
I've just updated to the latest build and tried to run in Docker as before, however there's an AttributeError: Dataset does not have attribute 'ReferringPhysicianName'. My DICOM files don't seem to have this field. Is there a way to disable this? (Not sure how to modify the code within Docker.)
Thanks.
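
Until the converter guards against absent fields itself, the workaround is essentially to read DICOM attributes with a default instead of direct attribute access. A sketch (the helper and FakeDataset stand-in are illustrative, not heudiconv code):

```python
def get_dicom_attr(dataset, name, default=""):
    # Hypothetical workaround for code referencing fields that some
    # DICOMs lack (e.g. ReferringPhysicianName): fall back to a
    # default instead of raising AttributeError.
    return getattr(dataset, name, default)

class FakeDataset:
    # Stand-in for a pydicom Dataset missing ReferringPhysicianName.
    PatientID = "sub01"

ds = FakeDataset()
print(get_dicom_attr(ds, "ReferringPhysicianName"))  # "" instead of an error
print(get_dicom_attr(ds, "PatientID"))               # "sub01"
```

Inside the container, the equivalent one-line change would be to replace the direct attribute lookup with a getattr call carrying a default.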

Unreliable performance with Philips data

At this point, this is primarily a note for people coming here after having faced trouble.

Right now, dcmstack is not very reliable with Philips data. Dcmstack is used to sift through DICOMs and sort them into series. Especially with single-image (non-mosaic) DICOMs, it has trouble figuring out the dimensionality of volumetric images. Sometimes DICOMs get sorted wrongly, so that dcm2nii subsequently refuses conversion.

This needs some attention.

ValueError: No JSON object could be decoded (when running w/ --bids)

When trying to run heudiconv with the --bids flag (to try to get it to use sub- for subject folders), I get the following error:

(legacy) # dlurie@nx4 in /home/despoB/lesion/data/original/bids_test [12:57:59]
$ heudiconv -d /home/despoB/lesion/data/original/dicom/%s/*/* -s 101 -f /home/despoB/dlurie/Software/heudiconv/heuristics/uc_bids.py -c dcm2niix -o nifti_new -b
INFO: Processing 539 dicoms
INFO: Generated sequence info with 20 entries
INFO: Doing conversion using dcm2niix
Converting /home/despoB/lesion/data/original/bids_test/nifti_new/sub-101/func/sub-101_task-rest_acq-128px_run-01_bold
/home/despoB/lesion/data/original/bids_test/nifti_new/sub-101/func
INFO: Processing 300 dicoms
dcm2niix
INFO: Executing node convert in dir: /tmp/heudiconvtmpWwNvTv/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f func_%e -o /scratch/tmp/heudiconvtmpWwNvTv/convert -s n -v n /scratch/tmp/heudiconvtmpWwNvTv/convert/IM-0009-0001.dcm
Traceback (most recent call last):
  File "/home/despoB/dlurie/Software/heudiconv/bin/heudiconv", line 884, in <module>
    main()
  File "/home/despoB/dlurie/Software/heudiconv/bin/heudiconv", line 880, in main
    min_meta=args.minmeta)
  File "/home/despoB/dlurie/Software/heudiconv/bin/heudiconv", line 761, in convert_dicoms
    min_meta=min_meta)
  File "/home/despoB/dlurie/Software/heudiconv/bin/heudiconv", line 550, in convert
    embedfunc.inputs.bids_info = load_json(os.path.abspath(scaninfo))
  File "/home/despoB/dlurie/Software/heudiconv/bin/heudiconv", line 92, in load_json
    data = json.load(fp)
  File "/home/despoB/dlurie/anaconda3/envs/legacy/lib/python2.7/json/__init__.py", line 291, in load
    **kw)
  File "/home/despoB/dlurie/anaconda3/envs/legacy/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/home/despoB/dlurie/anaconda3/envs/legacy/lib/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/despoB/dlurie/anaconda3/envs/legacy/lib/python2.7/json/decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded

The heuristic I'm using can be found here.

It looks like heudiconv makes it through conversion for the first image and then crashes. Here's what gets written out prior to crashing:

nifti_new/:
sub-101/

nifti_new/sub-101:
func/
info/

nifti_new/sub-101/func:
sub-101_task-rest_acq-128px_run-01_bold.json
sub-101_task-rest_acq-128px_run-01_bold.nii.gz

nifti_new/sub-101/info:
101.auto.txt
101.edit.txt
dicominfo.txt
filegroup.json
uc_bids.py

Everything looks normal in 101.auto.txt, so it seems to be picking up the other scans okay. The scan-parameters JSON file also looks good.

If I run the same command without the --bids flag, everything converts without an error.

I've tried to look through the code to get an idea of what is going wrong, but I can't figure out what exactly is going on with scaninfo (i.e. what it's supposed to contain, when it's supposed to be created, etc).

Any ideas? Thanks!
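
Part of what makes this hard to debug is that the bare ValueError doesn't say which file failed to parse or whether it was empty. A sketch of a more defensive load_json (an illustration of the idea, not the shipped code):

```python
import json

def load_json(filename):
    # Sketch: report which file failed and whether it was empty --
    # the bare ValueError above hides both, which makes an empty or
    # half-written scaninfo JSON hard to diagnose.
    with open(filename) as fp:
        content = fp.read()
    if not content.strip():
        raise ValueError("Empty JSON file: %s" % filename)
    try:
        return json.loads(content)
    except ValueError as exc:
        raise ValueError("Could not decode %s: %s" % (filename, exc))
```

With an error message like that, it would be immediately clear whether the scaninfo sidecar was never written, written empty, or written with invalid content.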

subject/session not added to outdir when locator is None

When locator is None, subject/session is omitted from outdir, stacking all subjects' scans in the base output directory.

command:

subj=('DOD_053' 'DOD_049')

for sub in ${subj[@]}
do
docker run --rm -it -v /code/tests/dicoms:/dicoms:ro \
-v /code/tests/heuditest/output:/output \
heudiconv:latest -d /dicoms/{subject}/*.dcm -s $sub \
-f /output/heuristic.py -c dcm2niix -b --minmeta -o /output
done

logger:

INFO: Need to process 1 study sessions
WARNING: locator: None
WARNING: study_outdir: /output/
INFO: PROCESSING STARTS: {'session': None, 'outdir': '/output/', 'subject': 'DOD_053'}
INFO: Processing 347 dicoms
WARNING: DOD_053 contained nonalphanumeric character(s), subject ID was cleaned to be DOD053
INFO: Analyzing 347 dicoms
INFO: Generated sequence info with 5 entries
INFO: Doing conversion using dcm2niix
INFO: Converting /output/dwi/sub-DOD_053_dir-PA_dwi (72 DICOMs) -> /output/dwi . Converter: dcm2niix . Output types: ('nii.gz',)
INFO: Executing node convert in dir: /tmp/heudiconvdcmRgJC4f/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f dwi -o /tmp/heudiconvdcmRgJC4f/convert -s n -v n /tmp/heudiconvdcmRgJC4f/convert/828000-10-1.dcm
INFO: Executing node embedder in dir: /tmp/heudiconvdcmRgJC4f/embedder
INFO: Post-treating /output/dwi/sub-DOD_053_dir-PA_dwi.json file
INFO: Populating template files under /output/
INFO: PROCESSING DONE: {'session': None, 'outdir': '/output/', 'subject': 'DOD_053'}
INFO: Need to process 1 study sessions
WARNING: locator: None
WARNING: study_outdir: /output/
INFO: PROCESSING STARTS: {'session': None, 'outdir': '/output/', 'subject': 'DOD_049'}
INFO: Processing 299 dicoms
WARNING: DOD_049 contained nonalphanumeric character(s), subject ID was cleaned to be DOD049
INFO: Analyzing 299 dicoms
INFO: Generated sequence info with 5 entries
INFO: Doing conversion using dcm2niix
INFO: Converting /output/fmap/sub-DOD_049_acq-dwi_dir-AP_epi (7 DICOMs) -> /output/fmap . Converter: dcm2niix . Output types: ('nii.gz',)
INFO: Executing node convert in dir: /tmp/heudiconvdcm5nVd8L/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f fmap -o /tmp/heudiconvdcm5nVd8L/convert -s n -v n /tmp/heudiconvdcm5nVd8L/convert/39000-11-1.dcm
INFO: Executing node embedder in dir: /tmp/heudiconvdcm5nVd8L/embedder
INFO: Post-treating /output/fmap/sub-DOD_049_acq-dwi_dir-AP_epi.json file
INFO: Populating template files under /output/
INFO: PROCESSING DONE: {'session': None, 'outdir': '/output/', 'subject': 'DOD_049'}
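
A sketch of the expected behavior (the function and layout below are my assumptions, not heudiconv's code): even when locator is None, the output should nest under a sub-&lt;subject&gt; (and ses-&lt;session&gt;) hierarchy instead of dumping every subject's scans directly into the base output directory:

```python
import os

def session_outdir(outdir, subject, session=None, locator=None):
    # Hypothetical fix sketch: always nest per-subject, appending the
    # locator only when one is provided.
    parts = [outdir]
    if locator:
        parts.append(locator)
    parts.append("sub-%s" % subject)
    if session:
        parts.append("ses-%s" % session)
    return os.path.join(*parts)

print(session_outdir("/output", "DOD053"))        # /output/sub-DOD053
print(session_outdir("/output", "DOD053", "01"))  # /output/sub-DOD053/ses-01
```

That would keep the two subjects in the log above from overwriting each other under /output/.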

saving of hidden files that are not updated with changes to heuristic

Hi - it appears that information saved into the .heudiconv folder prevents the heuristic from being re-run on the same data, even if the heuristic file changes. I spent several hours trying to debug a problem before realizing this. Saving info to a hidden folder seems like a suboptimal solution, but if you are going to do it, please clear it out whenever the heuristic changes, or at least save it within the newly generated subject directory, so that deleting that directory (which is necessary in order to rerun the job) also deletes the cached info.
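
Until that happens, a workaround sketch (the .heudiconv/&lt;subject&gt; layout is an assumption based on the description above; check your actual output tree) is to clear the cached per-subject info before re-running with an edited heuristic:

```python
import os
import shutil

def clear_heudiconv_cache(outdir, subject):
    # Hypothetical helper: remove cached per-subject info under
    # .heudiconv so an edited heuristic is actually re-applied.
    cache = os.path.join(outdir, ".heudiconv", subject)
    if os.path.isdir(cache):
        shutil.rmtree(cache)
        return True
    return False
```

Running this (or a plain `rm -rf <outdir>/.heudiconv/<subject>`) between heuristic edits avoids the stale-cache behavior described above.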

use NamedTuple in all examples

@mgxd - we should update the examples to use NamedTuple instead of the indices, update your slides as well, and add the slides to the README for heudiconv.
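
As a quick illustration of the difference (the field names below are a short stand-in, not heudiconv's real seqinfo fields):

```python
from collections import namedtuple

# Illustrative only: heudiconv's actual seqinfo has many more fields.
SeqInfo = namedtuple("SeqInfo", ["series_id", "dim4", "protocol_name"])

s = SeqInfo(series_id="5-anat", dim4=1, protocol_name="T1w_MPRAGE")

# Index-based access (what the old examples do) is fragile:
assert s[2] == "T1w_MPRAGE"

# Named access is self-documenting and survives field reordering:
if "T1" in s.protocol_name and s.dim4 == 1:
    print("matched anatomical:", s.series_id)
```

Heuristic rules written against names rather than positions are far easier to read and to keep correct when the seqinfo layout changes.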

No module named convertall

Hi, I am trying to use heudiconv to convert a batch of dicoms to bids format. When I attempt to run it through docker I get an error saying:

Traceback (most recent call last):
  File "/usr/local/bin/heudiconv", line 2079, in <module>
    main()
  File "/usr/local/bin/heudiconv", line 2071, in main
    return _main(args)
  File "/usr/local/bin/heudiconv", line 1830, in _main
    heuristic = load_heuristic(os.path.realpath(args.heuristic_file))
  File "/usr/local/bin/heudiconv", line 1396, in load_heuristic
    mod = __import__(fname.split('.')[0])
ImportError: No module named convertall

This is the command I used:

docker run --rm -it -v $PWD:/data nipy/heudiconv -d /data/{subject}/YAROSLAV_DBIC-TEST1/*/*/*IMA -s PHANTOM1_3 -f /convertall.py -c none -o /data/output

I have previously gotten it to work without this error (a couple months ago).
I have tried pulling the latest version and still no luck.

Thanks,
Grace

Add an option to disable dcmstack metadata dump

Currently heudiconv will try to save all of the metadata from the DICOMs into the sidecar JSON file (in addition to the BIDS metadata extracted and normalized by dcm2niix). This can be good for preserving as much information as possible, but at the same time it can generate gigabytes of data in the form of JSON files. It would be good if this could be turned off on the command line.
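
A sketch of what such an option could do (the helper and key prefixes are illustrative, not a real heudiconv flag): filter the sidecar down to a whitelist before writing it out:

```python
import json

def slim_sidecar(sidecar, keep_prefixes=("Repetition", "Echo", "Task")):
    # Hypothetical illustration of a minimal-metadata option: drop the
    # bulky dcmstack dump and keep only whitelisted keys.
    return {k: v for k, v in sidecar.items()
            if any(k.startswith(p) for p in keep_prefixes)}

full = {"RepetitionTime": 2.0, "EchoTime": 0.03,
        "CsaImage.MosaicRefAcqTimes": [0.0] * 10000}  # dcmstack-style bulk
print(json.dumps(slim_sidecar(full), sort_keys=True))
# {"EchoTime": 0.03, "RepetitionTime": 2.0}
```

Gating the dcmstack dump behind a CLI switch like this would keep the sidecars to the handful of keys BIDS actually needs.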
