appian-pet / appian

APPIAN is an open-source automated software pipeline for analyzing PET images in conjunction with MRI. The goal of APPIAN is to make PET tracer kinetic data analysis easy for users with moderate computing skills and to facilitate reproducible research.

License: MIT License

neuroscience neuroimaging pipeline automation reproducible-research openscience


APPIAN

Table of Contents

  1. Introduction
  2. Installation
  3. Documentation
    3.1 User Guide
    3.2 Developer Guide
  4. Publications
  5. Getting Help
  6. About us
  7. Terms and Conditions

Introduction

The APPIAN pipeline is implemented in Python using the Nipype library. Although the core of the code is written in Python, the pipeline can use tools or incorporate modules written in any programming language. The only condition is that the tools must be capable of being run from a command line with well-defined inputs and outputs. In this sense, APPIAN is language agnostic.
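The command-line contract described above can be sketched with a minimal wrapper. This is an illustrative sketch, not APPIAN code: the function name is hypothetical, and `cp` stands in for a real neuroimaging tool. Only the pattern matters — a tool invoked from a shell with well-defined input and output arguments.

```python
import subprocess

def run_tool(command, input_path, output_path):
    """Run an external tool the way APPIAN expects: a command line with
    well-defined input and output arguments, checked for success."""
    result = subprocess.run(
        [*command, input_path, output_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"{command[0]} failed: {result.stderr}")
    return output_path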

Cost

APPIAN is 100% free and open-source. In exchange, we would greatly appreciate your feedback, whether as bug reports, pull requests to add new features, questions on our mailing list, or suggestions for improving the documentation or the code. You can even just send us an email to let us know what kind of project you are working on!

Installation

APPIAN is currently distributed as a Docker image. Docker is a platform for creating containers that package a given piece of software in a complete filesystem containing everything it needs to run, ensuring that the software always runs in the same environment. This means that all of the dependencies required by APPIAN are inside its Docker container (no need to fumble about trying to compile obscure libraries). However, it also means that you will need to install Docker or Singularity before proceeding. Don't worry, it's very easy (except maybe for Windows). For a guide on how to install Docker on Ubuntu, Debian, Mac, Windows, or other operating systems, please see the official Docker installation documentation.

Once Docker or Singularity is installed, simply run the following command in your terminal:

docker pull tffunck/appian:latest

That’s it, APPIAN is installed on your computer.
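If you use Singularity rather than Docker, the equivalent pull converts the same image into a Singularity image file. The `docker run` line below is only an illustrative sketch: the bind-mounted data path and the Launcher.py flags are placeholders, not a complete recipe.

```shell
# Singularity can pull and convert the same Docker image.
singularity pull docker://tffunck/appian:latest

# Illustrative launch: bind-mount your data directory into the container
# and point Launcher.py at the input (-s) and output (-t) directories.
docker run --rm -v /path/to/data:/data tffunck/appian:latest \
    python3 /opt/APPIAN/Launcher.py -s /data/input -t /data/output
```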

Documentation

Users

For more information, please read our user guide.

Developers

For those interested in extending or contributing to APPIAN, please check out our developer guide and contributors' guidelines.

Publications

  1. Funck T, Larcher K, Toussaint PJ, Evans AC, Thiel A (2018) APPIAN: Automated Pipeline for PET Image Analysis. Front Neuroinform. PMCID: PMC6178989, DOI: 10.3389/fninf.2018.00064

  2. APPIAN automated QC (in preparation)

Getting help

If you get stuck or don't know how to get started, please send an email to [email protected].

About us

Thomas Funck, PhD Candidate ([email protected])
Kevin Larcher, MSc Eng.
Paule-Joanne Toussaint, PhD

Terms and Conditions

Copyright 2017 Thomas Funck, Kevin Larcher

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


Contributors

davecash75, gkiar, klarcher, llevitis, pjtoussaint, tfunck


Issues

Error running example (with quantification)

Hello, I have run into an error running the example data. Here is the last part of the output with the errors:

Running FixHeaderLinkCommand

190212-17:43:20,526 workflow INFO:
Runtime memory and threads stats unavailable
190212-17:43:20,531 workflow INFO:
Executing node pvc_qc_metrics.a1 in dir: /path/to/cimbi/dir/out_cimbi/preproc/_args_run.task.ses02.sid01/pvc_qc_metrics
19 26821106966.0
('PVC MSE:',
array(data=
-0.987869531793,
start=[0 0 0 0], count=[ 38 207 256 256],
separations=[1.0, 1.21875, 1.21875, -1.21875], dimnames=[])
190212-17:43:24,539 workflow INFO:
Runtime memory and threads stats unavailable
190212-17:43:24,543 workflow INFO:
Executing node convertPET.a1 in dir: /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/convertPET
minctoecat /path/to/cimbi/dir/out_cimbi/preproc/pvc/_args_run.task.ses02.sid01/fixHeaderNode/sub-01_ses-02_pet_center_GTM.mnc temp.v
190212-17:43:46,755 interface INFO:
stdout 2019-02-12T17:43:46.755269:Converting data: ..............................................................
('mv', 'temp.v', '/path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/convertPET/sub-01_ses-02_pet_center_GTM.v')
/path/to/cimbi/dir/out_cimbi/preproc/initialization/_args_run.task.ses02.sid01/petSettings/sub-01_ses-02_pet_center.info
('Start -- Duration:', [28, 33, 38, 43, 48, 53, 58, 73, 88, 103, 118, 133, 148, 163, 178, 193, 208, 238, 268, 298, 328, 448, 568, 688, 808, 928, 1228, 1528, 1828, 2128, 2428, 3028, 3628, 4228, 4828, 5428, 6028, 6628], [5, 5, 5, 5, 5, 5, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 30, 30, 30, 30, 120, 120, 120, 120, 120, 300, 300, 300, 300, 300, 600, 600, 600, 600, 600, 600, 600, 600])
190212-17:44:07,697 interface INFO:
stdout 2019-02-12T17:44:07.696520:/path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/convertPET/sub-01_ses-02_pet_center_GTM.v : unit 'unknown' replaced by 'Bq/cc'
190212-17:44:07,697 interface INFO:
stdout 2019-02-12T17:44:07.696520:Writing image file /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/convertPET/sub-01_ses-02_pet_center_GTM.v
190212-17:44:09,310 workflow INFO:
Runtime memory and threads stats unavailable
190212-17:44:09,311 workflow INFO:
Executing node referencemask_extract.a1 in dir: /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/referencemask_extract
190212-17:44:14,954 interface INFO:
stdout 2019-02-12T17:44:14.954082:reading /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/convertPET/sub-01_ses-02_pet_center_GTM.v
190212-17:44:14,954 interface INFO:
stdout 2019-02-12T17:44:14.954082:reading /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/minctoecat_reference/sub-01_ses-02_T1w_normalized_brain_labeled_space-pet.v
190212-17:44:14,954 interface INFO:
stdout 2019-02-12T17:44:14.954082:calculating mask VOIs
190212-17:44:14,954 interface INFO:
stdout 2019-02-12T17:44:14.954082:1 regional TACs written in /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/referencemask_extract/sub-01_ses-02_pet_center_GTM.dft
img2dft /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/convertPET/sub-01_ses-02_pet_center_GTM.v /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/minctoecat_reference/sub-01_ses-02_T1w_normalized_brain_labeled_space-pet.v /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/referencemask_extract/sub-01_ses-02_pet_center_GTM.dft

tacunit -xconv=min /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/referencemask_extract/sub-01_ses-02_pet_center_GTM.dft

190212-17:44:16,491 workflow INFO:
Runtime memory and threads stats unavailable
190212-17:44:16,492 workflow INFO:
Executing node lp.a1 in dir: /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/lp
_parse_inputs
_gen_output
190212-17:44:16,495 workflow INFO:
Running: imgdv /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/referencemask_extract/sub-01_ses-02_pet_center_GTM.dft /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/convertPET/sub-01_ses-02_pet_center_GTM.v /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/lp/sub-01_ses-02_pet_center_GTM_lp.v
_parse_inputs
190212-17:44:16,519 interface INFO:
stderr 2019-02-12T17:44:16.518981:Error: missing result file name.
190212-17:44:17,524 workflow ERROR:
['Node lp.a1 failed to run on host 9dadccbcdd40.']
190212-17:44:17,525 workflow INFO:
Saving crash info to /opt/crash-20190212-174417-root-lp.a1-4886f66e-eb19-423c-a111-3017b2b25b81.pklz
190212-17:44:17,525 workflow INFO:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/plugins/linear.py", line 39, in run
node.run(updatehash=updatehash)
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine/nodes.py", line 394, in run
self._run_interface()
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine/nodes.py", line 504, in _run_interface
self._result = self._run_command(execute)
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine/nodes.py", line 630, in _run_command
result = self._interface.run()
File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 1043, in run
runtime = self._run_wrapper(runtime)
File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 1660, in _run_wrapper
runtime = self._run_interface(runtime)
File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 1694, in _run_interface
self.raise_exception(runtime)
File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 1618, in raise_exception
raise RuntimeError(message)
RuntimeError: Command:
imgdv /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/referencemask_extract/sub-01_ses-02_pet_center_GTM.dft /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/convertPET/sub-01_ses-02_pet_center_GTM.v /path/to/cimbi/dir/out_cimbi/preproc/quantification/_args_run.task.ses02.sid01/lp/sub-01_ses-02_pet_center_GTM_lp.v
Standard output:

Standard error:
Error: missing result file name.
Return code: 1
Interface quantCommand failed to run.

190212-17:44:17,528 workflow INFO:
***********************************
190212-17:44:17,528 workflow ERROR:
could not run node: preproc.quantification.lp.a0
190212-17:44:17,529 workflow INFO:
crashfile: /opt/crash-20190212-174231-root-lp.a0-ab68a46a-c829-4ede-89ba-1ed0497c4e41.pklz
190212-17:44:17,529 workflow ERROR:
could not run node: preproc.quantification.lp.a1
190212-17:44:17,529 workflow INFO:
crashfile: /opt/crash-20190212-174417-root-lp.a1-4886f66e-eb19-423c-a111-3017b2b25b81.pklz
190212-17:44:17,529 workflow INFO:
***********************************
Traceback (most recent call last):
File "/opt/APPIAN/Launcher.py", line 43, in
run_scan_level(opts,args)
File "/opt/APPIAN/scanLevel.py", line 31, in run_scan_level
scan_level_workflow.workflow.run()
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine/workflows.py", line 597, in run
runner.run(execgraph, updatehash=updatehash, config=self.config)
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/plugins/linear.py", line 57, in run
report_nodes_not_run(notrun)
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/plugins/base.py", line 95, in report_nodes_not_run
raise RuntimeError(('Workflow did not execute cleanly. '
RuntimeError: Workflow did not execute cleanly. Check log for details

ERROR - could not run node: preproc.initialization.petVolume.a0

Hi,

When I try to run the example in the User Guide ("Default: Coregistration + MRI Preprocessing + Results Report"), after some processing using the default settings, an error occurs: could not run node: preproc.initialization.petVolume.a0

Any tips on how to solve it?

brain_mask_space_pet error

Hello,

I'm using APPIAN to run an SUVR pipeline with a T1, a FreeSurfer segmentation, and a static florbetapir PET image that is a single frame from 50-60 min post-injection. I'm using the latest release, downloaded from Docker Hub on 19 May 2022, running it through Singularity on an HPC cluster (as I can't use Docker on the HPC).

This is the command that is being run:
python3 /opt/APPIAN/Launcher.py -s /appian_test/data -t /appian_test/cerebellum_suvr_10015124_baseline --sub 10015124 --ses baseline --no-group-level --quant-method suvr --start-time 3000 --end-time 3600 --quant-label-img fs-segm --quant-label 7 8 46 47 --quant-labels-ones-only --quant-label-space t1 --no-mri-mask --normalization-type nl --results-label-img fs-segm --results-label-space t1 --threads 4

It fails consistently on the 'brain_mask_space_pet' step after completing normalisation step. I've attached the full log file. Any help working out what is going wrong would be much appreciated!
Many thanks - Will

appian_log.txt

Docker NOT pulling latest version of code + ants_nibabel import error

Hi!
I am experiencing some issues with running a quantitative analysis, and I hope you can provide some insights for me.

My command is as follows:

python3 /opt/APPIAN/Launcher.py --sub 000101 --ses baseline --user-ants-command /opt/APPIAN/src/ants_command_quick.txt -s INPUT -t OUTPUT --start-time 0.5 --filter --quant-method srtm --quant-label-space pet --quant-label-img reference_mask.nii.gz --quant-labels-ones-only

I get the following error:



Traceback (most recent call last):
File "/opt/APPIAN/Launcher.py", line 26, in
run_scan_level(opts,args)
File "/opt/APPIAN/src/scanLevel.py", line 34, in run_scan_level
scan_level_workflow.workflow.run()
File "/usr/local/lib/python3.8/dist-packages/nipype/pipeline/engine/workflows.py", line 638, in run
runner.run(execgraph, updatehash=updatehash, config=self.config)
File "/usr/local/lib/python3.8/dist-packages/nipype/pipeline/plugins/linear.py", line 82, in run
raise error from cause
File "/usr/local/lib/python3.8/dist-packages/nipype/pipeline/plugins/linear.py", line 47, in run
node.run(updatehash=updatehash)
File "/usr/local/lib/python3.8/dist-packages/nipype/pipeline/engine/nodes.py", line 521, in run
result = self._run_interface(execute=True)
File "/usr/local/lib/python3.8/dist-packages/nipype/pipeline/engine/nodes.py", line 639, in _run_interface
return self._run_command(execute)
File "/usr/local/lib/python3.8/dist-packages/nipype/pipeline/engine/nodes.py", line 750, in _run_command
raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node quant-srtm.

Traceback (most recent call last):
File "/opt/APPIAN/src/quantification.py", line 178, in srtm
isotope=header["Info"]["Tracer"]["Isotope"][0]
KeyError: 'Info'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/nipype/interfaces/base/core.py", line 398, in run
runtime = self._run_interface(runtime)
File "/opt/APPIAN/src/quantification.py", line 586, in _run_interface
quant_vol = model(pet_vol, int_vol, ref_tac_rsl, int_ref, time_frames, opts=opts, header=header)
File "/opt/APPIAN/src/quantification.py", line 184, in srtm
half_life = opts.quant_half_life
AttributeError: 'TraitDictObject' object has no attribute 'quant_half_life'


For some context, I am using APPIAN through Docker (tffunck/appian:latest).

I have tried to investigate why this error is occurring, and I believe there is a discrepancy between the source code pulled in Docker and the latest source code on GitHub (where line 178 of quantification.py is updated to reflect the actual JSON key name).

So, I decided to enter the container and from there check out the latest commit to the master branch, dated Mar 8 2023. When I run my command again, I get "ModuleNotFoundError: No module named 'src.ants_nibabel'".

I have 3 main questions that I would ask for your input with:

  1. Why doesn't the Docker tffunck/appian:latest appear to pull the most up to date source code from the git repository?
  2. In the latest source code, is there a missing file ants_nibabel.py? Do you have any plans to add this file so the imports work correctly?
  3. I see that I could get around (1) by instead directly specifying the additional flag "--half-life 20.362". But this does not work either, because I get the error "AttributeError: 'TraitDictObject' object has no attribute 'quant_half_life'". Am I using this flag incorrectly?

I would very much appreciate your time in helping to resolve these issues.
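The KeyError above comes from indexing nested JSON keys directly. A defensive lookup with an explicit fallback (a hypothetical sketch, not APPIAN's actual code; the function name is invented) shows the pattern that would avoid the hard crash when the sidecar JSON lacks the Info section:

```python
def get_isotope(header, default=None):
    """Safely read header['Info']['Tracer']['Isotope'][0] from a BIDS-style
    PET JSON sidecar, returning `default` if any level is missing."""
    try:
        return header["Info"]["Tracer"]["Isotope"][0]
    except (KeyError, IndexError, TypeError):
        return default
```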

mincreshape with -dimrange error in 3D volumes

Hi again,

When I try to run the default Launcher.py, it gives an error in the mincreshape call:

Standard error:
Unknown image dimension "time"
Return code: 1
Interface ReshapeCommand failed to run.
Interface pet3DVolume failed to run.

It probably happens because of the -dimrange 'time=0,1' parameter.

Could it be made optional?

run_mincreshape.inputs.dimrange = 'time='+str(first)+','+str(count)
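A possible guard, sketched here under the assumption that the number of image dimensions is known before the mincreshape call is built (the helper and parameter names are hypothetical, not APPIAN's actual code):

```python
def dimrange_args(ndim, first=0, count=1):
    """Return mincreshape -dimrange arguments only for 4D (time-series)
    volumes; a 3D volume has no 'time' dimension, so no restriction
    should be passed."""
    if ndim >= 4:
        return ["-dimrange", "time={},{}".format(first, count)]
    return []
```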

Installation Failed

Hi,

When I installed the software, I encountered the error in the attached screenshot. It seems that a package is missing: "ModuleNotFoundError: No module named 'sklearn.neighbors.kde'". Please see the screenshot for details.

(screenshot: error2)

Dashboard GUI

Hi,

I'd like to be able to view the output of my preprocessing in the dashboard GUI mentioned briefly in your documentation and the paper, but I can't work out how. I assumed it was the index.html page in preproc/; however, opening that just gave this:

(screenshot)

Is this possible yet? If so, how do I get to it?

Many thanks.

Applying preprocessed MRI images as input?

Thanks for developing and sharing the nice tool for PET quantification.

I am currently working with multi-modal data with both MRI and FDG images.

Since the preprocessing steps of MRI take a lot of time, I wonder if it is possible to use the preprocessed MRI images (e.g., preprocessed by fMRIprep that also used ANTs and the Brain Imaging Data Structure (BIDS)) as input for APPIAN?

Thanks a lot!

APPIAN application to 18F-DOPA PET pediatric scans

I am a postdoc fellow at the University of Genoa starting to work on pediatric PET images. I was looking for automated image-processing software for PET and came across APPIAN. I have read both the original paper and the software documentation, and have a couple of questions about its implementation for my specific case.

I would like to try and automatically reproduce some image processing step conducted semi-quantitatively by a group of radiologists on a bunch of pediatric PET/CT images on subjects affected by pontine gliomas. Specifically,

  • PET scanner in use is PET/CT Discovery ST system (GE Healthcare, Milwaukee, WI, USA).
  • MRI examinations were performed on 1.5T and 3T (5 patients; Ingenia Cx, Philips, Best, the Netherlands) scanners
  • Co-registration and fusion of 18F-DOPA PET and MR images to ensure precise anatomical comparability.
  • Volumetric tumor analysis on axial 18F-DOPA PET and MRI FLAIR images. In detail, the anatomic tumor extent was delineated on MRI FLAIR images using a perimeter technique with user-assisted semi-automated software; 18F-DOPA PET tumor volume was delineated based on 18F-DOPA uptake avidity (tumoral areas with increased uptake compared to normal background reference region).
  • For each patient, 18F-DOPA uniformity, defined as the percentage of the MRI tumor volume at admission (as delineated on FLAIR images) demonstrating increased 18F-DOPA PET uptake, was also calculated
  • PET tumor volume was delineated on 18F-DOPA PET studies by including all voxels with standardized uptake value (SUV) above the maximum (max) SUV of the normal background reference tissue. For the normal (N) background reference tissue, a VOI (diameter 20 mm) was drawn in the normal cerebral hemisphere at the level of the left centrum semiovale, including cortical and white matter. For each case, the radiotracer concentration in the tumoral VOI was normalized to the injected dose per patient body weight, and the SUV max was obtained for each lesion [maximum pixel value (kBq/mL) within the VOI/ injected dose (kBq)/patient weight (g)]. An additional VOI was drawn over the left striatum including the entire putamen (S). Ratios of tumor to normal tissue uptake were also generated by dividing the tumor SUVmax by the SUVmax of the striatum (T/S)
  • In case of absence of increased 18F-DOPA uptake, a VOI including the tumor on MRI FLAIR images was delineated on co-registered 18F-DOPA PET images to generate T/S and T/N ratios.
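For reference, the SUV normalization described in the protocol above reduces to a simple ratio; this sketch just restates the bracketed formula (tissue concentration divided by injected dose per unit body weight), with invented function and parameter names:

```python
def suv(concentration_kbq_per_ml, injected_dose_kbq, body_weight_g):
    """Standardized uptake value: voxel concentration normalized by the
    injected dose per gram of body weight (assumes 1 mL tissue ~ 1 g)."""
    return concentration_kbq_per_ml / (injected_dose_kbq / body_weight_g)
```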

Is it possible to automatically perform all of the aforementioned steps through your platform?
thank you so much in advance,
Rosella Trò

Problem getting APPIAN to identify stereotaxic template image

Hi,

I'm having an issue with APPIAN finding the image I want to use to define the stereotaxic space. I have tried defining the image path with both absolute and relative paths to the same result; failing with the error:

'Status : FAIL 1 All_Command_lines_OK

Using single precision for computations.

file OASIS/T_template0.nii.gz does not exist .'

I am wondering, does the container have some expectation that template images will be stored in a particular directory? I have tried using a sub-directory contained within the input directory defined by the -s option.
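One thing worth checking (a guess about the cause, not confirmed behavior): a path that exists on the host is not automatically visible inside a Singularity container, so the template directory may need to be bind-mounted explicitly. The image filename and all paths below are placeholders:

```shell
# Bind the host directory holding the template into the container so that
# the path OASIS/T_template0.nii.gz resolves inside it; pass the in-container
# path (/templates/OASIS/T_template0.nii.gz) to the relevant APPIAN option.
singularity run -B /host/path/to/templates:/templates appian_latest.sif \
    python3 /opt/APPIAN/Launcher.py -s /data/input -t /data/output ...
```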

PET ROI segmentation

Hi,
I was wondering whether it is possible, via Docker, to segment a specific ROI corresponding to tumor volume on PET scans. How can manual segmentation be performed via Docker if the interactive dashboard is not feasible?

thank you,

Rosella

Add Zenodo DOI

Hey!

I was bouncing around trying to remember the status of the Boutiques descriptor for this project, and realized that there's no DOI for citing the actual APPIAN codebase! These are super easy to obtain through Zenodo, as you probably know; they make it easy to differentiate between versions of software when citing (especially useful for before/after bug fixes), and give new devs co-authorship automatically without a new in-press publication.

There's the official guide for how to do it, here: https://guides.github.com/activities/citable-code/

I'd recommend taking the 5-10 min to set it up sooner than later, so that it'll be there moving forward :) Happy to help/chat/or even do it if you wanted to give me temporary access to the APPIAN org that I'd give back when finished the integration.

Once that's done, I'll circle back on the Boutiques stuff ;)

cc: @tfunck @llevitis @pjtoussaint

Update FixHeaderCommand to take into account the specified --analysis-space

The FixHeaderCommand function used by ecattominc2 updates the header of the final MINC file to match the dimensions of the native-space PET input image. If --analysis-space is set to t1, this results in the final lp (or other tka) image not being in the same space as the native T1 image, the results labels in T1 space, or the PET co-registered to the T1.
