
deepneuro's Introduction


DeepNeuro

A deep learning Python package for neuroimaging data, focused on validated command-line tools you can use today. Created by the Quantitative Tumor Imaging Lab at the Martinos Center (Harvard-MIT Program in Health Sciences and Technology / Massachusetts General Hospital).


About

DeepNeuro is an open-source toolset of deep learning applications for neuroimaging. We have several goals for this package:

  • Provide easy-to-use command line tools for neuroimaging using deep learning.
  • Create Docker containers for each tool and all out-of-package pre-processing steps, so that each can be run without having to install prerequisite libraries.
  • Provide freely available deep learning models trained on a wealth of neuroimaging data.
  • Provide training scripts and links to publicly available data to replicate the results of DeepNeuro's models.
  • Provide implementations of popular models for medical imaging data, and pre-processed datasets for educational purposes.

This package is under active development, but we encourage users both to try the modules with pre-trained models highlighted below, and to try their hand at making their own DeepNeuro modules using the tutorials below.

Installation

  1. Install Docker from Docker's website here: https://www.docker.com/get-started. Follow instructions on that link to get Docker set up properly on your workstation.

  2. Install the Docker Engine Utility for NVIDIA GPUs, AKA nvidia-docker. You can find installation instructions at their Github page, here: https://github.com/NVIDIA/nvidia-docker

  3. Pull the DeepNeuro Docker container from https://hub.docker.com/r/qtimlab/deepneuro_segment_gbm/. Use the command "docker pull qtimlab/deepneuro"

  4. If you want to run DeepNeuro outside of a Docker container, you can install the DeepNeuro Python package locally using the pip package manager. On the command line, run pip install deepneuro

Tutorials

Modules

Citation

If you use DeepNeuro in your published work, please cite:

Beers, A., Brown, J., Chang, K., Hoebel, K., Patel, J., Ly, K. Ina, Tolaney, S.M., Brastianos, P., Rosen, B., Gerstner, E., and Kalpathy-Cramer, J. (2020). DeepNeuro: an open-source deep learning toolbox for neuroimaging. Neuroinformatics. DOI: 10.1007/s12021-020-09477-5. PMID: 32578020

If you use the MRI skull-stripping or glioblastoma segmentation modules, please cite:

Chang, K., Beers, A.L., Bai, H.X., Brown, J.M., Ly, K.I., Li, X., Senders, J.T., Kavouridis, V.K., Boaro, A., Su, C., Bi, W.L., Rapalino, O., Liao, W., Shen, Q., Zhou, H., Xiao, B., Wang, Y., Zhang, P.J., Pinho, M.C., Wen, P.Y., Batchelor, T.T., Boxerman, J.L., Arnaout, O., Rosen, B.R., Gerstner, E.R., Yang, L., Huang, R.Y., and Kalpathy-Cramer, J., 2019. Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement. Neuro-Oncology. DOI: 10.1093/neuonc/noz106. PMID: 31190077

Contact

DeepNeuro is under active development, and you may run into errors or want additional features. Send any questions or requests for methods to [email protected]. You can also submit a Github issue if you run into a bug.

Acknowledgements

The Center for Clinical Data Science at Massachusetts General Hospital and Brigham and Women's Hospital provided technical and hardware support for the development of DeepNeuro, including access to graphics processing units. The DeepNeuro project is also indebted to the 3D U-Net Github repository by user ellisdg, which formed the original kernel for much of its code in its early stages. Long live open source deep learning :)

Disclaimer

This software package and the deep learning models within are intended for research purposes only and have not yet been validated for clinical use.

deepneuro's People

Contributors

annabeers, changken1, jonathanchiang


deepneuro's Issues

Dropbox links broken for notebook sample data

Just a heads up that the links provided for loading sample data seem to be broken. You get HTTPError: HTTP Error 400: Bad Request when you try to load the sample data as shown in the Jupyter notebooks, and you get a 400 error when following the links given in load.py.

Thanks,
-Michaela

Log files

A useful feature of DeepNeuro would be a log file recording the metadata and processing steps used for individual runs of training and inference. Some ideas are below; a rough sketch of gathering the preamble follows the lists.

Preamble

  • Date and time of run
  • Machine it was run on
  • Environment data (e.g. TensorFlow version)
  • DeepNeuro version information (branch/tag/hash)

Logging

  • Which config file was used
  • Details of data used for training/inference
  • Where the resulting model(s) is/are saved
  • Basic output of the steps of training/inference
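
Something very rough along these lines could gather the preamble; the function and file names here are only illustrative, not existing DeepNeuro code:

import datetime
import json
import platform
import sys


def write_run_preamble(log_path='deepneuro_run_log.json'):

    # Collect hypothetical preamble metadata for a single training/inference run.
    preamble = {'date': datetime.datetime.now().isoformat(),
                'machine': platform.node(),
                'platform': platform.platform(),
                'python_version': sys.version}

    # Environment data; these imports are optional and may not be installed.
    try:
        import tensorflow
        preamble['tensorflow_version'] = tensorflow.__version__
    except ImportError:
        preamble['tensorflow_version'] = None

    try:
        import deepneuro
        preamble['deepneuro_version'] = getattr(deepneuro, '__version__', 'unknown')
    except ImportError:
        preamble['deepneuro_version'] = None

    with open(log_path, 'w') as f:
        json.dump(preamble, f, indent=2)

    return preamble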

Replace xrange for Python 3.x compatibility

I can't use DeepNeuro on my Python 3.6 installation because of xrange:

Traceback (most recent call last):
  File "main.py", line 143, in <module>
    main()
  File "main.py", line 101, in main
    output_type='categorical_label', num_outputs=config["n_labels"], batch_norm=True)
  File "/home/eashver/anaconda3/envs/python_tf/lib/python3.6/site-packages/deepneuro-0.1.1-py3.6.egg/deepneuro/models/model.py", line 98, in __init__
    self.build_model()
  File "/home/eashver/anaconda3/envs/python_tf/lib/python3.6/site-packages/deepneuro-0.1.1-py3.6.egg/deepneuro/models/unet.py", line 52, in build_model
    for level in xrange(self.depth):
NameError: name 'xrange' is not defined

Maybe we could replace it with range for compatibility with Python 3.
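
As a minimal sketch, a compatibility shim like the one below would keep the code working on both Python 2 and 3 (simply switching to range would also work):

# Minimal Python 2/3 compatibility shim; using range() directly also works,
# at the cost of building a full list rather than a generator on Python 2.
try:
    xrange
except NameError:
    xrange = range


def build_levels(depth):
    # Equivalent of the failing loop in unet.py, now valid on Python 3.
    for level in xrange(depth):
        print('Building level', level)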

FileNotFoundError: No such file or no access

Hi Guys,

I am having a problem when I try to run segment_GBM.
It stops because it cannot find the FLAIR_N4Bias.nii.gz file. It created other files in the output folder (T1_convert, T1POST_Convert, etc.), but I don't know why there is an issue with FLAIR_N4Bias.nii.gz.

('Starting New Case...',)
('Whole Tumor Prediction',)
('======================',)
('Working on image.. ', 'C:\\Users\\krasona\\PycharmProjects\\GlioblastomaSegmentation\\output')
('Working on Preprocessor:', 'Conversion')
('Working on Preprocessor:', 'N4BiasCorrection')
('Working on Preprocessor:', 'Registration')
('Working on Preprocessor:', 'ZeroMeanNormalization')
Traceback (most recent call last):
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\nibabel\loadsave.py", line 40, in load
    stat_result = os.stat(filename)
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'C:\\Users\\krasona\\PycharmProjects\\GlioblastomaSegmentation\\output\\FLAIR_N4Bias.nii.gz'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/krasona/PycharmProjects/GlioblastomaSegmentation/GlioblastomaSegmentationMain.py", line 49, in <module>
    quiet=False)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\pipelines\Segment_GBM\predict.py", line 123, in predict_GBM
    wholetumor_file = wholetumor_model.generate_outputs(data_collection, output_folder)[0]['filenames'][-1]
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\models\model.py", line 163, in generate_outputs
    return_outputs += [output.generate()]
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\output.py", line 131, in generate
    return self.generate_individual_case(self.case)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\output.py", line 153, in generate_individual_case
    input_data = self.data_collection.get_data(self.case)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\data\data_collection.py", line 376, in get_data
    self.load_case_data(case)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\data\data_collection.py", line 321, in load_case_data
    self.preprocess()
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\data\data_collection.py", line 304, in preprocess
    preprocessor.execute(self, return_array=False)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\preprocessing\preprocessor.py", line 73, in execute
    data_group.preprocessed_case, data_group.preprocessed_affine = data_group.get_data(return_affine=True)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\data\data_group.py", line 87, in get_data
    preprocessed_case, preprocessed_affine = read_image_files(self.preprocessed_case, return_affine=True)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\utilities\conversion.py", line 44, in read_image_files
    data, _, affine, data_format = convert_input_2_numpy(data_file, return_all=True)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\utilities\conversion.py", line 495, in convert_input_2_numpy
    return NUMPY_CONVERTER_LIST[input_format](input_data, return_all=True) + (input_format,)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\utilities\conversion.py", line 329, in nifti_2_numpy
    nifti = nib.load(input_filepath)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\nibabel\loadsave.py", line 42, in load
    raise FileNotFoundError("No such file or no access: '%s'" % filename)
FileNotFoundError: No such file or no access: 'C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\output\FLAIR_N4Bias.nii.gz'

Process finished with exit code 1

I would appreciate any help, please.

Patch-searching heuristic

Our current method for extracting patches subject to a condition is very inefficient: it samples patches randomly until the patch condition is met. Code profiling has confirmed that this slows down data processing considerably.

A solution proposed in the past is to pre-compute a list of candidate patch-center indices and sample from that list, but our initial implementation of this was also not very efficient. It's clearly the right approach, though; we will have to figure this out at some point. A rough sketch is below.
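
A rough numpy sketch of the pre-computation approach, assuming the patch condition is that the patch center falls on a labeled voxel:

import numpy as np


def precompute_patch_centers(label_volume, patch_shape):

    # Indices of all voxels that satisfy the patch condition
    # (here: the patch center falls on a labeled voxel).
    candidate_centers = np.argwhere(label_volume > 0)

    # Discard centers whose patch would fall outside the volume.
    half = np.array(patch_shape) // 2
    upper = np.array(label_volume.shape) - half
    valid = np.all((candidate_centers >= half) & (candidate_centers < upper), axis=1)

    return candidate_centers[valid]


def sample_patch(data_volume, centers, patch_shape, rng=np.random):

    # Draw one precomputed center and slice out the patch around it.
    center = centers[rng.randint(len(centers))]
    slices = tuple(slice(c - s // 2, c - s // 2 + s) for c, s in zip(center, patch_shape))
    return data_volume[slices]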

Create an "un-preprocessing" pipeline

We like to register and resample our data to isotropic resolution before we create segmentations, but we might not want to do this forever, and at any rate other labs may have other workflows. Thus, we should ideally be able to de-preprocess the segmentation maps we create back into the original resolutions of the input, perhaps as an optional feature. How this would be done -- automatically in preprocessing sub-classes, or via a new sub-module, is for now anyone's guess.
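
A minimal sketch of the resampling part of this, assuming nibabel and that the preprocessed segmentation and the original image differ only by their affines/grids (nonlinear steps would need more work):

import nibabel as nib
from nibabel.processing import resample_from_to


def unpreprocess_segmentation(segmentation_path, original_image_path, output_path):

    # Load the segmentation produced in the preprocessed (isotropic, registered)
    # space and the original, unprocessed input image.
    segmentation = nib.load(segmentation_path)
    original = nib.load(original_image_path)

    # Resample the label map back onto the original image grid.
    # order=0 (nearest neighbor) keeps labels discrete.
    restored = resample_from_to(segmentation, original, order=0)

    nib.save(restored, output_path)
    return output_path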

ValueError: bad marshal data (unknown type code)

I have run the Docker container but am stuck at the following error:

nvidia-docker run --rm -v /home/user/Desktop/data/mets:/INPUT_DATA qtimlab/deepneuro_segment_mets segment_mets pipeline -T1 /INPUT_DATA/T1pre -T1POST /INPUT_DATA/T1post -FLAIR /INPUT_DATA/FLAIR -T2 /INPUT_DATA/T2 -output_folder /INPUT_DATA/op

Using TensorFlow backend.
File loading completed.
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py", line 231, in func_load
    code = marshal.loads(raw_code)
ValueError: bad marshal data (unknown type code)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/segment_mets", line 11, in <module>
    load_entry_point('deepneuro', 'console_scripts', 'segment_mets')()
  File "/home/DeepNeuro/deepneuro/pipelines/Segment_Brain_Mets/cli.py", line 96, in main
    Segment_Mets_cli()
  File "/home/DeepNeuro/deepneuro/pipelines/Segment_Brain_Mets/cli.py", line 33, in __init__
    getattr(self, args.command)()
  File "/home/DeepNeuro/deepneuro/pipelines/Segment_Brain_Mets/cli.py", line 86, in pipeline
    predict_brain_mets(args.output_folder, T2=args.T2, T1POST=args.T1POST, T1PRE=args.T1, FLAIR=args.FLAIR, ground_truth=None, input_directory=args.input_directory, bias_corrected=args.debiased, resampled=args.resampled, registered=args.registered, skullstripped=args.skullstripped, preprocessed=args.preprocessed, save_preprocess=args.save_preprocess, save_all_steps=args.save_all_steps, output_segmentation_filename=args.segmentation_output)
  File "/home/DeepNeuro/deepneuro/pipelines/Segment_Brain_Mets/predict.py", line 34, in predict_brain_mets
    mets_model = load_model_with_output(model_name='mets_enhancing', outputs=[ModelPatchesInference(**mets_prediction_parameters)], postprocessors=[BinarizeLabel(postprocessor_string='_label')], wcc_weights={0: 0.1, 1: 3.0})
  File "/home/DeepNeuro/deepneuro/pipelines/shared.py", line 50, in load_model_with_output
    model = load_old_model(load(model_name), **kwargs)
  File "/home/DeepNeuro/deepneuro/models/model.py", line 257, in load_old_model
    model = KerasModel(model=load_model(model_file, custom_objects=custom_objects))
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/saving.py", line 260, in load_model
    model = model_from_config(model_config, custom_objects=custom_objects)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/saving.py", line 334, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/usr/local/lib/python3.5/dist-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py", line 145, in deserialize_keras_object
    list(custom_objects.items())))
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/network.py", line 1017, in from_config
    process_layer(layer_data)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/network.py", line 1003, in process_layer
    custom_objects=custom_objects)
  File "/usr/local/lib/python3.5/dist-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py", line 145, in deserialize_keras_object
    list(custom_objects.items())))
  File "/usr/local/lib/python3.5/dist-packages/keras/layers/core.py", line 730, in from_config
    function = func_load(config['function'], globs=globs)
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py", line 235, in func_load
    code = marshal.loads(raw_code)
ValueError: bad marshal data (unknown type code)

Create data converters for DICOM-RT data

It would be nice to do this in a pure Python implementation, because as far as I know there are no good Python converters for DICOM-RT currently available. The other option is to use an external package such as plastimatch which has a good handle on such conversions.

Additional code will likely have to be added for data conversion in pipelines, as DICOM-RT data cannot be converted to e.g. NIFTI format without accompanying volumetric DICOMs to set orientation and spatial data.
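
For reference, a minimal pydicom sketch of reading the raw contour points out of an RTSTRUCT file; turning those points into a mask still requires the referenced volumetric series, as noted above:

import pydicom


def read_rtstruct_contours(rtstruct_path):

    # Read an RTSTRUCT file and collect raw contour points per structure.
    # Mapping these points into voxel space still requires the referenced
    # volumetric DICOM series for orientation and spacing.
    dataset = pydicom.dcmread(rtstruct_path)

    structures = {roi.ROINumber: roi.ROIName for roi in dataset.StructureSetROISequence}
    contours = {}

    for roi_contour in dataset.ROIContourSequence:
        name = structures.get(roi_contour.ReferencedROINumber, 'unknown')
        points = []
        for contour in getattr(roi_contour, 'ContourSequence', []):
            # ContourData is a flat list of x, y, z patient coordinates.
            data = contour.ContourData
            points.append([(float(data[i]), float(data[i + 1]), float(data[i + 2]))
                           for i in range(0, len(data), 3)])
        contours[name] = points

    return contours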

HTTP Error 400 while running Segment_GBM

Hi,
I got this error while trying to run the Segment_GBM module through the command line:

urllib.error.HTTPError: HTTP Error 400: Bad Request

The links are not working.

Move from Python 3.5 to 3.6

DeepNeuro is currently written for Python 3.5, not 3.6. You would think this wouldn't be a big deal, but it has already caused some problems with differing dictionary behavior between 3.5 and 3.6, and, more importantly, it causes problems loading Keras models saved in 3.5 with Lambda layers into 3.6.

Changes will need to be made to all saved models, converting them from 3.5 to 3.6, and to all current Docker containers.

Parallel inference.

Inference is currently not set up to predict in batches. Of course, with patch-based inference, this may not be needed, as predicting on patches already takes up enough memory to make predicting multiple cases in the same batch impractical. But as we move on to non-patch-based methods, we will probably want to be able to do this.
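
A minimal sketch of what batched prediction could look like with a Keras model, assuming all cases share the same shape (none of this is existing DeepNeuro API):

import numpy as np


def predict_in_batches(keras_model, cases, batch_size=4):

    # Stack multiple cases (each of identical shape) along a new batch axis
    # and let the model predict them together, instead of looping case by case.
    predictions = []
    for start in range(0, len(cases), batch_size):
        batch = np.stack(cases[start:start + batch_size], axis=0)
        predictions.extend(keras_model.predict(batch, batch_size=batch_size))
    return predictions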

Integrate neurodocker with DeepNeuro.

The Docker containers for DeepNeuro have been a bit slap-dash so far -- based off of nvidia-docker and edited as necessary. It would be much better to base our dockers off of https://github.com/kaczmarj/neurodocker.

We would need to make a pull request for a few enhancements, such as adding support for 3D Slicer. We could also build on the general format for DeepNeuro-specific items, such as templated DeepNeuro model imports for Docker containers.

Error-Catching for bad cases in data groups.

I recently ran into a situation where one modality in one data group in one case, out of a hundred cases, had an extra dimension. Specifically, a FLAIR sequence had two time-points. This caused an error in patch extraction, as patches were being extracted at the wrong dimension.

We'll face a lot of these unexpected data errors as time goes on. Currently, data is appended list-wise after it's processed via augmentation. If an error happens in one data group (e.g. input modalities) but not the other (e.g. ground truth), the two lists of data will become de-synced, leading to disaster. We need a way to A) catch errors in one data group, and then B) remove all corresponding data from the other data group. A rough sketch is below.
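
A rough sketch of the kind of check that could keep the data groups in sync; the names here are hypothetical, not the actual DataCollection internals:

def append_case_if_valid(case_name, group_arrays, group_lists, expected_ndim=4):

    # group_arrays: dict mapping data group name (e.g. 'input_modalities',
    # 'ground_truth') to the loaded array for this case.
    # group_lists: dict mapping the same names to the running lists of cases.

    for group_name, array in group_arrays.items():
        if array.ndim != expected_ndim:
            # Reject the whole case, so no data group receives a partial entry.
            print('Skipping case %s: group %s has shape %s'
                  % (case_name, group_name, array.shape))
            return False

    for group_name, array in group_arrays.items():
        group_lists[group_name].append(array)
    return True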

PermissionError Errno 13 - docker - Brain metastases segmentation

Hi, I'm getting the following error while trying to run the metastases segmentation via docker (nvidia-docker):

File loading completed.
('Starting New Case...',)
('Enhancing Mets Prediction',)
('======================',)
('Working on image.. ', '/INPUT_DATA/Output_Folder')
('Working on Preprocessor:', 'Conversion')
('Working on Preprocessor:', 'N4BiasCorrection')
('Working on Preprocessor:', 'Registration')
('Working on Preprocessor:', 'ZeroMeanNormalization')
('Predicting patch set', '1/3...')
('Predicting patch set', '2/3...')
('Predicting patch set', '3/3...')
('Working on Preprocessor:', 'SkullStrip_Model')
('Working on Preprocessor:', 'ZeroMeanNormalization')
('Predicting patch set', '1/8...')
('Predicting patch set', '2/8...')
('Predicting patch set', '3/8...')
('Predicting patch set', '4/8...')
('Predicting patch set', '5/8...')
('Predicting patch set', '6/8...')
('Predicting patch set', '7/8...')
('Predicting patch set', '8/8...')
Using TensorFlow backend.
Traceback (most recent call last):
  File "/usr/local/bin/segment_mets", line 11, in <module>
    load_entry_point('deepneuro', 'console_scripts', 'segment_mets')()
  File "/home/DeepNeuro/deepneuro/pipelines/Segment_Brain_Mets/cli.py", line 91, in main
    Segment_Mets_cli()
  File "/home/DeepNeuro/deepneuro/pipelines/shared.py", line 22, in __init__
    self.load()
  File "/home/DeepNeuro/deepneuro/pipelines/Segment_Brain_Mets/cli.py", line 20, in load
    super(Segment_Mets_cli, self).load()
  File "/home/DeepNeuro/deepneuro/pipelines/shared.py", line 48, in load
    getattr(self, args.command)()
  File "/home/DeepNeuro/deepneuro/pipelines/Segment_Brain_Mets/cli.py", line 87, in pipeline
    quiet=args.quiet)
  File "/home/DeepNeuro/deepneuro/pipelines/Segment_Brain_Mets/predict.py", line 108, in predict_brain_mets
    data_collection.clear_preprocessor_outputs()
  File "/home/DeepNeuro/deepneuro/data/data_collection.py", line 650, in clear_preprocessor_outputs
    preprocessor.clear_outputs(self)
  File "/home/DeepNeuro/deepneuro/preprocessing/preprocessor.py", line 179, in clear_outputs
    os.remove(output_filename)
PermissionError: [Errno 13] Permission denied: '/INPUT_DATA/pn-0372_hdglio/T1.nii'

Here's the command I run:

nvidia-docker run --rm -v /home/user/Desktop/tricktest/met:/INPUT_DATA qtimlab/deepneuro_segment_mets segment_mets pipeline -T1 /INPUT_DATA/pn-0372_hdglio/T1.nii -T1POST /INPUT_DATA/pn-0372_hdglio/CT1.nii -FLAIR /INPUT_DATA/pn-0372_hdglio/FLAIR.nii -T2 /INPUT_DATA/pn-0372_hdglio/T2.nii -output_folder /INPUT_DATA/Output_Folder -gpu_num 5

I don't know if this may be relevant, but I do not have sudo privileges on my system.

Thanks

UPDATE:
I was able to get one step further by using chmod 777 on the directories I'm using, but I'm still getting the same error. This time I'm also getting some output in the folders (see attached files), but I'm not sure if something is missing.

thanks!
Screenshot from 2019-10-03 12-39-01

Multi-sequence inputs stored as 4D niftis are not handled correctly

Given a setup where we have a folder of 4D (multi-sequence) MR images and a separate folder of labelmaps, we can use the wonderfully flexible DataCollection object as follows:

training_modality_dict = {
    'input_modalities':  ['Task01_BrainTumour/imagesTr'],
    'ground_truth': ['Task01_BrainTumour/labelsTr']
}
training_data_collection = DataCollection(train_dir, source='files', \
    data_group_dict=training_modality_dict, verbose=True)
training_data_collection.fill_data_groups()

However, currently DN assumes that the inputs, due to being 4D, are time-series (seemingly):

array = np.rollaxis(np.stack([image for image in image_list], axis=-1), 3, 0)

It bugs out at the patch extraction stage due to having five dimensions. Can we use the channels=True flag to ensure 4D data are loaded the desired way?
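
For what it's worth, a minimal nibabel sketch of loading a 4D NIfTI so the fourth axis is treated as channels rather than time (this is not the current DataCollection behavior, just an illustration of the desired loading):

import nibabel as nib
import numpy as np


def load_4d_nifti_as_channels(filepath):

    # Load a 4D (multi-sequence) NIfTI and keep the fourth axis as channels,
    # i.e. shape (x, y, z, channels), rather than treating it as time.
    array = nib.load(filepath).get_fdata()

    if array.ndim == 3:
        # Single-sequence volume: add a singleton channel axis for consistency.
        array = array[..., np.newaxis]

    return array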

Train/Test Splits with Patient/Identifier Groupings

One frequently wants to split data into training and testing sets intelligently before processing. In particular, one wants to make sure that cases in the training set do not share characteristics with cases in the testing set, such as being derived from the same patient.

It would be useful to create a train/val/test splitting mechanism in DeepNeuro that could figure out these problems for us.
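
A minimal sketch of how this could be done with scikit-learn's GroupShuffleSplit, where the patient ID is the grouping variable:

from sklearn.model_selection import GroupShuffleSplit


def split_by_patient(case_filenames, patient_ids, test_size=0.2, seed=0):

    # All cases that share a patient ID end up entirely in train or entirely in test.
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(case_filenames, groups=patient_ids))

    train_cases = [case_filenames[i] for i in train_idx]
    test_cases = [case_filenames[i] for i in test_idx]
    return train_cases, test_cases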

Create save-out function for any format.

Currently, DeepNeuro's output format is Nifti. In the future, we will want users to be able to specify any output format -- Nifti, NRRD, DICOM, DSO, .npy, .mat, etc. This requires a generalized saving function with wrappers for packages that can interact with all of these formats -- a great utility not only for DeepNeuro but any neuroimaging application.
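
A rough sketch of what such a dispatch-on-extension save function could look like; only a few of the formats above are covered, and the affine handling is simplified:

import numpy as np


def save_array(array, output_filepath, affine=None):

    # Dispatch on the requested output extension. Only a few formats are
    # sketched here; DICOM, DSO, and NRRD would need additional wrappers.
    if output_filepath.endswith(('.nii', '.nii.gz')):
        import nibabel as nib
        nib.save(nib.Nifti1Image(array, affine if affine is not None else np.eye(4)),
                 output_filepath)
    elif output_filepath.endswith('.npy'):
        np.save(output_filepath, array)
    elif output_filepath.endswith('.mat'):
        from scipy.io import savemat
        savemat(output_filepath, {'data': array})
    else:
        raise NotImplementedError('Unsupported output format: %s' % output_filepath)

    return output_filepath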

ValueError: could not broadcast input array from shape (64,64,8,2) into shape (64,64,8,3)

Hello,

Right now I am running Segment_GBM and I got an error in the last prediction step.

('Predicting patch set', '1/3...')
('Predicting patch set', '2/3...')
('Predicting patch set', '3/3...')
Traceback (most recent call last):
  File "C:/Users/krasona/PycharmProjects/GlioblastomaSegmentation/GlioblastomaSegmentationMain.py", line 49, in <module>
    quiet=True)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\pipelines\Segment_GBM\predict.py", line 123, in predict_GBM
    wholetumor_file = wholetumor_model.generate_outputs(data_collection, output_folder)[0]['filenames'][-1]
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\models\model.py", line 163, in generate_outputs
    return_outputs += [output.generate()]
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\output.py", line 131, in generate
    return self.generate_individual_case(self.case)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\output.py", line 154, in generate_individual_case
    self.process_case(input_data)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\segmentation.py", line 106, in process_case
    output_data = self.predict(input_data)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\segmentation.py", line 174, in predict
    input_patches = self.grab_patch(input_data, corner_batch)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\segmentation.py", line 245, in grab_patch
    output_patches[corner_idx, ...] = input_data[tuple(output_slice)]
ValueError: could not broadcast input array from shape (64,64,8,2) into shape (64,64,8,3)

> ('Starting New Case...',)
> ('Whole Tumor Prediction',)
> ('======================',)
> ('Working on image.. ', 'C:\\Users\\krasona\\PycharmProjects\\GlioblastomaSegmentation\\output5')
> ('Working on Preprocessor:', 'Conversion')
> ('Working on Preprocessor:', 'N4BiasCorrection')
> ('Working on Preprocessor:', 'Registration')
> ('Working on Preprocessor:', 'ZeroMeanNormalization')
> ('Predicting patch set', '1/3...')
> ('Predicting patch set', '2/3...')
> ('Predicting patch set', '3/3...')
> ('Working on Preprocessor:', 'SkullStrip_Model')
> ('Working on Preprocessor:', 'ZeroMeanNormalization')
> ('Predicting patch set', '1/6...')
> Traceback (most recent call last):
>   File "C:/Users/krasona/PycharmProjects/GlioblastomaSegmentation/GlioblastomaSegmentationMain.py", line 52, in <module>
>     quiet=False)
>   File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\pipelines\Segment_GBM\predict.py", line 123, in predict_GBM
>     wholetumor_file = wholetumor_model.generate_outputs(data_collection, output_folder)[0]['filenames'][-1]
>   File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\models\model.py", line 163, in generate_outputs
>     return_outputs += [output.generate()]
>   File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\output.py", line 131, in generate
>     return self.generate_individual_case(self.case)
>   File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\output.py", line 154, in generate_individual_case
>     self.process_case(input_data)
>   File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\segmentation.py", line 107, in process_case
>     output_data = self.predict(input_data)
>   File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\segmentation.py", line 175, in predict
>     input_patches = self.grab_patch(input_data, corner_batch)
>   File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\segmentation.py", line 246, in grab_patch
>     output_patches[corner_idx, ...] = input_data[tuple(output_slice)]
> ValueError: could not broadcast input array from shape (64,64,8,2) into shape (64,64,8,3)

Could anyone help me with this issue?

I would appreciate it.

Transfer data-type conversion utilities from qtim_tools to DeepNeuro

Currently, I am importing a data converter from qtim_tools using:

from qtim_tools.qtim_utilities.format_util import convert_input_2_numpy

qtim_tools is always going to be less production-ready than DeepNeuro, so I don't want to make it a prerequisite. This means that all of the qtim_tools data converters should be ported into DeepNeuro soon (into the utilities/conversion.py script, most likely).

Pre-emptive error notices.

A certain art when writing code is making sure that when your code hits an error, you have a vague understanding of why that error occurred. I have not yet mastered that art.

There are several instances in DeepNeuro where you may hit a bug because you had a typo in your inputs, or otherwise formatted your data incorrectly. The fact that you did this may not be obvious. Here are some common ones that I will address in the future.

  • Filepath checking. Always check and raise errors if a provided folder or filepath does not exist.
  • Datasize checking. Raise errors if different channels of input data do not match in dimensions, before downstream errors occur.

We should all add more types of errors to catch in advance to this issue; I will post updates as they are addressed. A rough sketch of the two checks above follows.
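
A minimal sketch of the two checks above (the helper names are hypothetical):

import os


def check_filepaths_exist(filepaths):

    # Filepath checking: fail early with a clear message instead of a cryptic
    # downstream error.
    missing = [path for path in filepaths if not os.path.exists(path)]
    if missing:
        raise FileNotFoundError('The following inputs do not exist: %s' % missing)


def check_matching_dimensions(channel_arrays):

    # Datasize checking: all input channels should agree in dimensions.
    shapes = [array.shape for array in channel_arrays]
    if len(set(shapes)) > 1:
        raise ValueError('Input channels have mismatched dimensions: %s' % shapes)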

Handling different input volume dimensions

Currently, it is necessary for all input data at train/inference time to have the same dimensions. It would be good to have either (a) dimension-agnostic processing or (b) automatic padding to the largest dimensions across all input data. The latter is probably easier.
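
A rough numpy sketch of option (b), zero-padding every volume to the largest shape found across the input data:

import numpy as np


def pad_to_largest(volumes):

    # Zero-pad every volume symmetrically to the largest shape found across
    # all input volumes, so they can be stacked into one array.
    target_shape = np.max([volume.shape for volume in volumes], axis=0)

    padded = []
    for volume in volumes:
        pad_widths = []
        for dim, target in zip(volume.shape, target_shape):
            total = target - dim
            pad_widths.append((total // 2, total - total // 2))
        padded.append(np.pad(volume, pad_widths, mode='constant'))

    return np.stack(padded, axis=0)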

Create internal practices regarding batch dimensions, channel dimensions.

DeepNeuro is plagued by errors that blossom from commits because of inconsistencies regarding the number of dimensions in input data. Data-loading, for example, loads data without the batch dimension, but many processes require a batch dimension to work properly -- particularly in data_collection. There are a few options:

  • Make data-loading include a singleton batch dimension, and pore over the code to make assumed batch dimensions the norm.
  • Keep data-loading as is (which might be useful outside the context of DeepNeuro), and create more robust processes that can accept data with or without a batch dimension (difficult).

I'm leaning towards the first option.
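
Under the first option, something as small as the helper below could be applied everywhere data enters the pipeline (shapes here assume 3D volumes with a channel axis):

import numpy as np


def ensure_batch_dimension(array, expected_ndim_with_batch=5):

    # Data is assumed to be either (x, y, z, channels) or already
    # (batch, x, y, z, channels); add a singleton batch axis if it is missing.
    if array.ndim == expected_ndim_with_batch - 1:
        return array[np.newaxis, ...]
    if array.ndim == expected_ndim_with_batch:
        return array
    raise ValueError('Unexpected number of dimensions: %s' % array.ndim)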

Refactor DICOM conversion for DeepNeuro

DICOM conversion is currently done via a somewhat cryptic script in utilities.conversion that I had written in a very ad-hoc manner. It tends to work for most uncomplicated 3D data, but will surely fail for more complicated imaging modalities.

I would like to replace this script with either a pre-existing package, like dcmstack, or a wrapper around a reliable DICOM converter, such as Freesurfer's mri_convert or, ideally, 3D Slicer. Each has its own difficulties -- Freesurfer requires a license to operate, 3D Slicer's API for DICOM conversion is difficult to access from Python scripting, and dcmstack has, at least in the past, not covered all the use cases. Any of them might be better than our current situation, however.

Refactor Data Transformation Code

The current preprocessing class is confusing, and probably inefficient. For example, it saves after every step. In the long-term, refactor it, perhaps to be more like the postprocessing class (or merged with the postprocessing class?).

Support for TPU (in Colab or otherwise)

Google offers access to cloud TPUs via Colab, but using TPUs requires some modification to existing Tensorflow/Keras code to get any actual speed-ups. Details on that process can be found at this link: https://www.tensorflow.org/guide/using_tpu.

This is likely a low-priority item, as training on medical data using Colab is already fraught because of privacy concerns, but it may also not be too difficult to implement in the context of DeepNeuro. Likely an optional "tpu" parameter would be added to keras_model.py and tensorflow_model.py, along with a few extra lines of code to convert models to their respective TPU modes. A rough sketch is below.
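
The linked guide covers the older TF 1.x contrib API; as a rough sketch under the assumption of TensorFlow 2.3+ in Colab, the "tpu" option might boil down to something like this:

import tensorflow as tf


def build_model_on_tpu(build_model_fn):

    # Connect to the Colab-provided TPU and build the Keras model inside the
    # TPU distribution strategy's scope. build_model_fn is any function that
    # returns a compiled Keras model.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)

    strategy = tf.distribute.TPUStrategy(resolver)
    with strategy.scope():
        model = build_model_fn()

    return model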

KeyError: running glioblastoma segmentation from command line

I am getting an error when I am running glioblastoma segmentation via the command line.

This is my cmd:
segment_gbm pipeline -T1 T1 -T1POST T1POST -FLAIR FLAIR -output_folder output -wholetumor_output wholetumor. nii.gz -enhancing_output enhancing.nii.gz

I got a key error:

 ('Starting New Case...',)
('Whole Tumor Prediction',)
('======================',)
('Working on image.. ', 'output')
Traceback (most recent call last):
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\Scripts\segment_gbm-script.py", line 11, in <module>
    load_entry_point('deepneuro==0.2.3', 'console_scripts', 'segment_gbm')()
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\pipelines\Segment_GBM\cli.py", line 86, in main
    Segment_GBM_cli()
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\pipelines\shared.py", line 22, in __init__
    self.load()
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\pipelines\Segment_GBM\cli.py", line 16, in load
    super(Segment_GBM_cli, self).load()
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\pipelines\shared.py", line 48, in load
    getattr(self, args.command)()
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\pipelines\Segment_GBM\cli.py", line 82, in pipeline
    quiet=args.quiet)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\pipelines\Segment_GBM\predict.py", line 123, in predict_GBM
    wholetumor_file = wholetumor_model.generate_outputs(data_collection, output_folder)[0]['filenames'][-1]
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\models\model.py", line 163, in generate_outputs
    return_outputs += [output.generate()]
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\output.py", line 131, in generate
    return self.generate_individual_case(self.case)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\outputs\output.py", line 153, in generate_individual_case
    input_data = self.data_collection.get_data(self.case)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\data\data_collection.py", line 376, in get_data
    self.load_case_data(case)
  File "C:\Users\krasona\PycharmProjects\GlioblastomaSegmentation\venv\lib\site-packages\deepneuro\data\data_collection.py", line 317, in load_case_data
    data_group.preprocessed_case = copy.copy(data_group.data[self.current_case])
KeyError: 'output'

I don't really know what I am doing wrong.
Could you help me, please?
I would appreciate any help.

fail to import preprocessing module

from deepneuro import preprocessing

Error:
ImportError                               Traceback (most recent call last)
<ipython-input> in <module>
----> 1 from deepneuro import preprocessing

4 frames
/usr/local/lib/python3.7/dist-packages/deepneuro/models/keras_model.py in <module>
----> 1 from keras.engine import Input, Model
      2 from keras.layers import Activation, Lambda
      3 from keras.layers.merge import concatenate
      4 from keras.optimizers import Nadam, SGD, Adam, RMSprop, Adagrad, Adamax, Adadelta
      5 from keras import backend as K

ImportError: cannot import name 'Input' from 'keras.engine' (/usr/local/lib/python3.7/dist-packages/keras/engine/__init__.py)

I got this error when importing the preprocessing module on Google Colaboratory. I'd be grateful if you could help me.
Thanks

ValueError: Cannot create group in read only mode

Hello,

I found a new bug.

File loading completed.
Traceback (most recent call last):
  File "/pstore/home/krasona/.local/bin/segment_gbm", line 10, in <module>
    sys.exit(main())
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/deepneuro/pipelines/Segment_GBM/cli.py", line 86, in main
    Segment_GBM_cli()
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/deepneuro/pipelines/shared.py", line 22, in __init__
    self.load()
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/deepneuro/pipelines/Segment_GBM/cli.py", line 16, in load
    super(Segment_GBM_cli, self).load()
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/deepneuro/pipelines/shared.py", line 48, in load
    getattr(self, args.command)()
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/deepneuro/pipelines/Segment_GBM/cli.py", line 82, in pipeline
    quiet=args.quiet)
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/deepneuro/pipelines/Segment_GBM/predict.py", line 70, in predict_GBM
    postprocessors=[BinarizeLabel(postprocessor_string='label')])
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/deepneuro/models/model.py", line 284, in load_model_with_output
    model = load_old_model(load(model_name), **kwargs)
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/deepneuro/models/model.py", line 260, in load_old_model
    model = KerasModel(model=load_model(model_file, custom_objects=custom_objects))
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/keras/engine/saving.py", line 419, in load_model
    model = _deserialize_model(f, custom_objects, compile)
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/keras/engine/saving.py", line 221, in _deserialize_model
    model_config = f['model_config']
  File "/pstore/home/krasona/.local/lib/python3.6/site-packages/keras/utils/io_utils.py", line 300, in __getitem__
    raise ValueError('Cannot create group in read only mode.')
ValueError: Cannot create group in read only mode.

Do you have any idea how I could fix this issue?

I would appreciate any help.

HTTP Error 400 while running Segment_GBM module on the command line

Hi,
I got this error while trying to run the Segment_GBM module through the command line:

urllib.error.HTTPError: HTTP Error 400: Bad Request

It seems to be happening after the files are loaded correctly; here's the full output:

Using TensorFlow backend.
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
File loading completed.
Traceback (most recent call last):
  File "/home/m192229/anaconda3/envs/deepneuroenv/bin/segment_gbm", line 11, in <module>
    sys.exit(main())
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/deepneuro/pipelines/Segment_GBM/cli.py", line 86, in main
    Segment_GBM_cli()
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/deepneuro/pipelines/shared.py", line 22, in __init__
    self.load()
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/deepneuro/pipelines/Segment_GBM/cli.py", line 16, in load
    super(Segment_GBM_cli, self).load()
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/deepneuro/pipelines/shared.py", line 48, in load
    getattr(self, args.command)()
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/deepneuro/pipelines/Segment_GBM/cli.py", line 82, in pipeline
    quiet=args.quiet)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/deepneuro/pipelines/Segment_GBM/predict.py", line 70, in predict_GBM
    postprocessors=[BinarizeLabel(postprocessor_string='label')])
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/deepneuro/models/model.py", line 284, in load_model_with_output
    model = load_old_model(load(model_name), **kwargs)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/site-packages/deepneuro/load/load.py", line 70, in load
    urlretrieve(data_dict[dataset][1], dataset_path)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 187, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 162, in urlopen
    return opener.open(url, data, timeout)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 471, in open
    response = meth(req, response)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 581, in http_response
    'http', request, response, code, msg, hdrs)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 503, in error
    result = self._call_chain(*args)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 443, in _call_chain
    result = func(*args)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 686, in http_error_302
    return self.parent.open(new, timeout=req.timeout)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 471, in open
    response = meth(req, response)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 581, in http_response
    'http', request, response, code, msg, hdrs)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 509, in error
    return self._call_chain(*args)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 443, in _call_chain
    result = func(*args)
  File "/home/m192229/anaconda3/envs/deepneuroenv/lib/python3.5/urllib/request.py", line 589, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request

Thank you

Templating for DeepNeuro modules.

It's taken quite a bit of boilerplate and duplicate code-writing to create the skull-stripping module, and I only anticipate that coding time will increase as modules become more flexible creatures. Consider creating a class, or a config file, or something similar, that will automate the production of DeepNeuro modules.

HTTP Error 400: bad request

Hi

I encountered the "bad request" problem.
I would be glad to get support in fixing it.
Thanks

(tf) C:\Users\ERNESTB>segment_gbm pipeline -T1 C:\Users\ERNESTB\Documents\deepNeuro\ozondo\PRE\T1 -T1POST C:\Users\ERNESTB\Documents\deepNeuro\ozondo\T1 -FLAIR C:\Users\ERNESTB\Documents\deepNeuro\ozondo\PRE\FLAIR -output_folder C:\Users\ERNESTB\Documents\deepNeuro\ozondo\result -save_all_steps -save_only_segmentations
Using TensorFlow backend.
File loading completed.
Traceback (most recent call last):
  File "c:\users\ernestb\anaconda3\envs\tf\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\ERNESTB\Anaconda3\envs\tf\Scripts\segment_gbm.exe\__main__.py", line 7, in <module>
  File "c:\users\ernestb\anaconda3\envs\tf\lib\site-packages\deepneuro\pipelines\Segment_GBM\cli.py", line 86, in main
    Segment_GBM_cli()
  File "c:\users\ernestb\anaconda3\envs\tf\lib\site-packages\deepneuro\pipelines\shared.py", line 22, in __init__
    self.load()
  File "c:\users\ernestb\anaconda3\envs\tf\lib\site-packages\deepneuro\pipelines\Segment_GBM\cli.py", line 16, in load
    super(Segment_GBM_cli, self).load()
  File "c:\users\ernestb\anaconda3\envs\tf\lib\site-packages\deepneuro\pipelines\shared.py", line 48, in load
    getattr(self, args.command)()
  File "c:\users\ernestb\anaconda3\envs\tf\lib\site-packages\deepneuro\pipelines\Segment_GBM\cli.py", line 82, in pipeline
    quiet=args.quiet)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\site-packages\deepneuro\pipelines\Segment_GBM\predict.py", line 70, in predict_GBM
    postprocessors=[BinarizeLabel(postprocessor_string='label')])
  File "c:\users\ernestb\anaconda3\envs\tf\lib\site-packages\deepneuro\models\model.py", line 284, in load_model_with_output
    model = load_old_model(load(model_name), **kwargs)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\site-packages\deepneuro\load\load.py", line 70, in load
    urlretrieve(data_dict[dataset][1], dataset_path)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 531, in open
    response = meth(req, response)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 563, in error
    result = self._call_chain(*args)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 503, in _call_chain
    result = func(*args)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 755, in http_error_302
    return self.parent.open(new, timeout=req.timeout)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 531, in open
    response = meth(req, response)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 569, in error
    return self._call_chain(*args)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 503, in _call_chain
    result = func(*args)
  File "c:\users\ernestb\anaconda3\envs\tf\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request

Test Suites

Frequently, when I change something in DeepNeuro, I break something somewhere else in the code. Fortunately, we can run tests to avoid this! Below is a list of basic tests we should add. Some things will be difficult to test, such as the use of computation-heavy preprocessing steps, GPU models, or external libraries, but this should at least make sure things work on some basic level.

-- Loading one file, and multiple files (2-3) into a data collection.
-- Saving those files out to an hdf5 file.
-- Preprocessing data with a dummy preprocessor (maybe can wait, since preprocessors are not heavily used outside of inference yet)
-- Augmenting data with a dummy augmentation (1x, 2x), multiple dummy augmentations.
-- Training a minimal model locally (may run into keras/tensorflow installation issues on Travis)
-- Loading a minimal model locally
-- Running 1, 2-3 inferences on a minimal model locally. Stream from both hdf5 and file names.

I'll assign myself for now -- post here if you have more test ideas.
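
As a starting point, here is a pytest-style sketch of what two of the simplest tests might look like; these deliberately avoid the heavier DeepNeuro machinery and just illustrate the testing pattern:

import numpy as np
import pytest


def test_dummy_augmentation_doubles_cases():
    # Minimal stand-in for the augmentation test: a dummy 2x "augmentation"
    # should double the number of cases without changing their shapes.
    cases = [np.zeros((8, 8, 8, 1)) for _ in range(3)]
    augmented = [case for case in cases for _ in range(2)]

    assert len(augmented) == 2 * len(cases)
    assert all(case.shape == (8, 8, 8, 1) for case in augmented)


def test_missing_input_raises():
    # Loading a non-existent file should fail loudly
    # (see the "Pre-emptive error notices" issue above).
    import nibabel as nib
    with pytest.raises(Exception):
        nib.load('this_file_does_not_exist.nii.gz')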

Suggestions for Improvement

To do:

  1. Multi Output Support
    -Automatic detection of number of output channels
    -Implementation: (number of outputs, x , y, z). Sigmoid activation with binary crossentropy.
  2. Additional Augmentation
    -Zooming
  3. Early stopping based on validation
  4. Log file
  5. Continue training

Embed radiomics tools from qtim_tools into DeepNeuro as either a "model", an "output", or both.

This will be good for those who want to use radiomics as a benchmark for deep learning efforts. A "model" object could extract radiomics features and then run them through a random forest (or another preferred model, like an SVM). An "output" object could just save out the features to a CSV file. Given that a radiomics + RF model does not iterate over batches in the same way that a deep learning model does, it might be better to squeeze it all into an "output" object.

This would require a bit of auxiliary code, like code to extract regions around a provided region of interest, but this shouldn't be too hard, and would be useful anyway!
