
3D U-Net Convolution Neural Network

[Update August 2023 - data loading is now 10x faster!]

Tutorials

Tumor Segmentation Example

Introduction

We designed 3DUnetCNN to make it easy to apply and control the training and application of various deep learning models to medical imaging data. The links above give examples/tutorials for how to use this project with data from various MICCAI challenges.

Quick Start Guide

How to train a UNet on your own data.

Installation

  1. Clone the repository:
    git clone https://github.com/ellisdg/3DUnetCNN.git

  2. Install the required dependencies*:
    pip install -r 3DUnetCNN/requirements.txt

*It is highly recommended that you use an Anaconda or virtual environment to manage dependencies and avoid conflicts with existing packages.

Create configuration file and run training

See the BraTS 2020 example for a description of how to create a configuration file and train a model.
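For orientation, training is driven by a configuration dictionary. The sketch below is hypothetical and only uses keys that are mentioned in the issues further down this page; the authoritative schema is the example's own config.py.

    # hypothetical sketch; see the BraTS 2020 example's config.py for the real schema
    config = dict()
    config["image_shape"] = (144, 144, 144)   # shape the input volumes are cropped/resized to
    config["labels"] = (1, 2, 3, 4)           # label values present in the ground truth images
    config["batch_size"] = 1
    config["initial_learning_rate"] = 0.00001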

Documentation

Still have questions?

Once you have reviewed the documentation, feel free to raise an issue on GitHub, or email me at [email protected].

Citation

Ellis D.G., Aizenberg M.R. (2021) Trialing U-Net Training Modifications for Segmenting Gliomas Using Open Source Deep Learning Framework. In: Crimi A., Bakas S. (eds) Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2020. Lecture Notes in Computer Science, vol 12659. Springer, Cham. https://doi.org/10.1007/978-3-030-72087-2_4

Additional Citations

Ellis D.G., Aizenberg M.R. (2020) Deep Learning Using Augmentation via Registration: 1st Place Solution to the AutoImplant 2020 Challenge. In: Li J., Egger J. (eds) Towards the Automatization of Cranial Implant Design in Cranioplasty. AutoImplant 2020. Lecture Notes in Computer Science, vol 12439. Springer, Cham. https://doi.org/10.1007/978-3-030-64327-0_6

Ellis, D.G. and M.R. Aizenberg, Structural brain imaging predicts individual-level task activation maps using deep learning. bioRxiv, 2020: https://doi.org/10.1101/2020.10.05.306951

3dunetcnn's People

Contributors

alkamid, dependabot[bot], ellisdg, ysam12345


3dunetcnn's Issues

convert_brats_data NotImplementedError

Hi there,
when I try to run convert_brats_data, I run into an error that I don't know how to solve. I've searched but could not find any solution.
Here is the error:

[screenshot of the NotImplementedError traceback, 2017-08-07]

Thanks and regards.

Why is the dice loss -dice_coef?

Thanks for your new update. But I want to know why the dice loss looks like this:

    def dice_coef(y_true, y_pred, smooth=1.):
        y_true_f = K.flatten(y_true)
        y_true_onehot = K.one_hot(tf.to_int32(y_true_f), 8)
        y_true_onehot_f = K.flatten(y_true_onehot)
        y_pred_f = K.flatten(y_pred)
        intersection = K.sum(y_true_onehot_f * y_pred_f)
        return (2. * intersection + smooth) / (K.sum(y_true_onehot_f) + K.sum(y_pred_f) + smooth)

    def dice_coef_loss(y_true, y_pred):
        return -dice_coef(y_true, y_pred)

I rewrote your dice loss to train a multi-category task, but I don't understand why this loss is just equal to -dice_coef.
Thanks.
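As a note on the question above: optimizers minimize, while Dice is a similarity to be maximized, so negating it is the standard trick. A minimal NumPy sketch (not code from this repository) of the same pattern:

    import numpy as np

    def dice_coef(y_true, y_pred, smooth=1.0):
        # soft Dice: 2*|A.B| / (|A| + |B|), smoothed to avoid division by zero
        intersection = np.sum(y_true * y_pred)
        return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

    def dice_coef_loss(y_true, y_pred):
        # minimizing -dice_coef is the same as maximizing dice_coef
        return -dice_coef(y_true, y_pred)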

Doubling channels before pooling

Hello,

Looking at your code in model.py, it seems that you overlooked this statement on page 4 of the Cicek paper:

Like suggested in [13] we avoid bottlenecks by doubling the number of channels already before max pooling. We also adopt this scheme in the synthesis path.

I annotated the figure below:

[annotated figure of the network architecture]

I believe this wasn't the case in 2D UNet, so it's likely a copy-paste artifact. I'm not sure what difference it will make in terms of performance, but I figured it's worth mentioning.

Expected size of training data set and metrics for accuracy

According to this paper, they used IoU as the accuracy metric (the paper says, and I quote: "The IoU is defined as true positives/(true positives + false negatives + false positives)"). I guess dice_coef is used here, but the score is coming out quite low, around 0.09 per epoch (the loss shows the same value, negated). I kept the number of epochs at 2; the score does increase as I increase the number of epochs. Should I ideally keep it at 50, as you used in config.py, or 70000 iterations as the paper says ("We ran 70000 training iterations on an NVIDIA TitanX GPU, which took approximately 3 days.")? Should I instead increase the initial learning rate, i.e. config["initial_learning_rate"]?
I am using 20 patients (each with 4 modalities and ground truth) for training. Is that too many or too few? I ask because each epoch takes quite long to execute; I am using an i7 processor without an NVIDIA GPU. The paper says: "In many biomedical applications, only very few images are required to train a network that generalizes reasonably well." But the doubt remains: what should the training data size be?
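A note on the metrics mentioned above: IoU and the Dice coefficient are monotonically related, so either one can track training progress. Assuming both are computed from the same overlap counts, the conversion is:

    def iou_from_dice(dice):
        # IoU = TP / (TP + FP + FN), Dice = 2*TP / (2*TP + FP + FN),
        # hence IoU = Dice / (2 - Dice)
        return dice / (2.0 - dice)

    print(iou_from_dice(0.09))  # a Dice of 0.09 corresponds to an IoU of about 0.047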

About Brats2015 datasets

hi @ellisdg
I'm trying to download the BRATS 2015 dataset. However, the website asks for registration before download. As far as I understand, someone needs to confirm the registration, but mine has been pending for 3 days. Could you provide me with this data via another link? It would be really great for me. Thanks a lot.

Implementation of label map and viewing prediction.nii.gz

@ellisdg The file is produced after successfully running predict.py, but the label map (which I assume is prediction.nii.gz) is not multicolor. I have tried ITK-SNAP and 3D Slicer. I am not able to understand how the multiclass classification is implemented. Is the tumor detected by the 3D U-Net also segmented? If yes, does the label map give different colors to the segmented portions of the tumor, e.g. red for necrotic, blue for advancing, and so on? Please suggest another tool, or alternatively can you post a screenshot of the segmented image?

Here is a screenshot of what I got using a model trained with config["image_shape"] = (16, 16, 16). This is using ITK-SNAP:
[ITK-SNAP screenshot]
And this one using 3D Slicer:
[3D Slicer screenshot]

Unused list `stds` in unet3d/normalize.py

Hi ellisdg!

Instead of taking the mean of the standard deviations, the standard deviation of the means is taken. I'm not sure if this was the intention, as `stds` is unused (see below).

    def normalize_data_storage(data_storage):
        means = list()
        stds = list()
        for index in range(data_storage.shape[0]):
            data = data_storage[index]
            means.append(data.mean(axis=(1, 2, 3)))
            stds.append(data.std(axis=(1, 2, 3)))
        mean = np.asarray(means).mean(axis=0)
        std = np.asarray(means).std(axis=0)
        for index in range(data_storage.shape[0]):
            data_storage[index] = normalize_data(data_storage[index], mean, std)
        return data_storage
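If the report above is right, the intended computation was presumably the mean of the per-subject standard deviations. A sketch of that reading (my assumption, not a confirmed patch):

    mean = np.asarray(means).mean(axis=0)
    std = np.asarray(stds).mean(axis=0)  # use the collected stds instead of repeating means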

Why there is no Batch Normalization or dropout layer in the model?

Hi @ellisdg ,
How did you solve the overfitting problem? Data augmentation may solve part of it, but judging from the network structure and my training results, there is still severe overfitting.
P.S. Due to memory limitations, I split the volume into patches, which leaves room to increase the batch size.
Cheers.

Query about the preprocessing

Hi ellisdg,
I am a rookie in deep learning, especially 3D neural networks. I wonder why you convert the .mha files to .nii and then write the .nii into an HDF5 file. What are the purposes of these preprocessing steps? I have read part of your source code and think it is wonderful learning material; I truly hope to learn more details about the implementation.

Why did you choose shape (144, 144, 144)?

Hi, ellisdg.
(1) I want to know why you chose the shape (144, 144, 144).
(2) I guess the original data shape is not (144, 144, 144), so how do you process the data (crop or rescale)?

Interface N4BiasFieldCorrection failed to run

Hi, I hit this problem when running your code and couldn't solve it after several days of struggle.
The system I use is Windows 10 x64, and I run the code with Anaconda Python 3.5.
The following is the problem I met:
170813-16:46:22,367 interface INFO:
[WinError 10038] An operation was attempted on something that is not a socket.
Traceback (most recent call last):
File "C:\my_software\Anaconda\lib\site-packages\nipype\interfaces\base.py", line 1472, in _process
res = select.select(streams, [], [], timeout)
OSError: [WinError 10038] An operation was attempted on something that is not a socket.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 1, in
File "C:\Users\LiRui\Desktop\3d-unet\3DUnetCNN-master\brats\preprocess.py", line 120, in convert_brats_data
convert_brats_folder(subject_folder, new_subject_folder, background_mask)
File "C:\Users\LiRui\Desktop\3d-unet\3DUnetCNN-master\brats\preprocess.py", line 105, in convert_brats_folder
normalize_image(image_file, out_file, background_mask)
File "C:\Users\LiRui\Desktop\3d-unet\3DUnetCNN-master\brats\preprocess.py", line 89, in normalize_image
corrected = correct_bias(windowed, append_basename(out_file, "_corrected"))
File "C:\Users\LiRui\Desktop\3d-unet\3DUnetCNN-master\brats\preprocess.py", line 56, in correct_bias
done = correct.run()
File "C:\my_software\Anaconda\lib\site-packages\nipype\interfaces\base.py", line 1081, in run
runtime = self._run_wrapper(runtime)
File "C:\my_software\Anaconda\lib\site-packages\nipype\interfaces\base.py", line 1724, in _run_wrapper
runtime = self._run_interface(runtime)
File "C:\my_software\Anaconda\lib\site-packages\nipype\interfaces\base.py", line 1755, in _run_interface
redirect_x=self._redirect_x)
File "C:\my_software\Anaconda\lib\site-packages\nipype\interfaces\base.py", line 1487, in run_command
_process()
File "C:\my_software\Anaconda\lib\site-packages\nipype\interfaces\base.py", line 1475, in _process
if e[0] == errno.EINTR:
TypeError: 'OSError' object is not subscriptable
Interface N4BiasFieldCorrection failed to run.

It would be very nice if you could help us solve this problem.

Error Running

Hello, I tried running the program:

[screenshot of the command]

but I got this:

[screenshot of the error]

I installed all the dependencies. Thank you.

Running testing.py

Would this be the correct way to run testing.py?

if __name__ == '__main__':
    run_test_case(0,"./data_test")

where ./data_test is the out_dir to which prediction.nii.gz is written. Should I also call predict_from_data_file_and_write_image from run_test_case() to actually write the image?

Testing of the model

Hello @ellisdg. Can you provide some input on how to view the prediction.nii.gz that is generated after testing? Also, please explain what the multi-class classification is doing exactly. I guess it is producing the label map, but when I view the image in ITK-SNAP I can only see red and green patches. Any help in this regard is greatly appreciated.

What is the correct way to install ANTs N4Bias?

I'm sorry, I met an error that is not about your code but about the environment. When I convert the data, it shows:
"command 'N4BiasFieldCorrection' could not be found on host
Interface N4BiasFieldCorrection failed to run.
"
First, I pip installed nipype.
Then, I built ANTs from source with CMake (ITK is also installed), following "https://brianavants.wordpress.com/2012/04/13/updated-ants-compile-instructions-april-12-2012/".
Third, I added 'path/to/ants/bin' to $PATH in my profile. But it still does not work.
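One generic way to check whether the binary is actually visible to the Python process (a diagnostic sketch, not project code; requires Python 3.3+):

    import shutil

    # prints the full path if N4BiasFieldCorrection is on $PATH, otherwise None
    print(shutil.which("N4BiasFieldCorrection"))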

The loss and dice_coef are not consistent with each other.

Hi, I tried to use this U-Net to segment images, with Dice as the loss function. The function is as below and I didn't change it.

def dice_coef(y_true, y_pred, smooth=1.):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)


def dice_coef_loss(y_true, y_pred):
    return -dice_coef(y_true, y_pred)

The compile call is:

model.compile(optimizer=Adam(lr=initial_learning_rate), loss=dice_coef_loss, metrics=[dice_coef])

However, during training, the log messages look like this:

Epoch 1/1000
299s - loss: 6.4668 - dice_coef: 0.2710 - val_loss: 5.3623 - val_dice_coef: 0.5395
Epoch 2/1000
228s - loss: 4.7529 - dice_coef: 0.5754 - val_loss: 4.4627 - val_dice_coef: 0.3444
Epoch 3/1000
229s - loss: 3.7705 - dice_coef: 0.6485 - val_loss: 3.3135 - val_dice_coef: 0.7470
Epoch 4/1000
232s - loss: 3.0341 - dice_coef: 0.7334 - val_loss: 2.7504 - val_dice_coef: 0.7445
Epoch 5/1000
230s - loss: 2.5172 - dice_coef: 0.7546 - val_loss: 2.2995 - val_dice_coef: 0.7674
Epoch 6/1000
230s - loss: 2.0970 - dice_coef: 0.7988 - val_loss: 1.9096 - val_dice_coef: 0.8277

In my opinion, the loss should be the negative dice_coef. If I'm wrong, how is this loss calculated?
Furthermore, sometimes the loss is equal to the negative dice_coef and sometimes it is not.

Thank you.
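One observation on the log above: dice_coef is bounded in (0, 1], so dice_coef_loss = -dice_coef must lie in [-1, 0), and a reported loss of 6.4668 therefore cannot be the Dice loss alone. A plausible explanation (an assumption, not confirmed for this report) is additional loss terms such as weight regularization, which Keras adds to the reported loss but not to the metrics. A toy check of the bound:

    reported_loss, reported_dice = 6.4668, 0.2710
    print(-1.0 <= -reported_loss < 0.0)  # False: extra loss terms must be present
    print(-1.0 <= -reported_dice < 0.0)  # True: the negated metric is in range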

Convert to Tensorflow

Do you have any suggestions on how to convert this Keras model to a TensorFlow model?
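One possible route for the TF-1.x era this project targets is to export the backend session's graph; a rough sketch, assuming the TensorFlow backend and old Keras/TF 1.x APIs (names may not match newer versions):

    import tensorflow as tf
    from keras import backend as K
    from keras.models import load_model

    # load the trained Keras model (custom_objects may be needed for dice_coef_loss),
    # then write the underlying TensorFlow graph definition to disk
    model = load_model("3d_unet_model.h5")
    sess = K.get_session()
    tf.train.write_graph(sess.graph_def, "export", "model.pbtxt", as_text=True)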

Pre-trained models

I don't know whether I should post this here; I couldn't find your email for contact either.

I need a pretrained model from your implementation. Could you please upload one?

About iSeg2017

Hi, ellisdg,
I was wondering what Dice coefficient you got for the infant segmentation dataset.
I am a deep learning beginner and am using your default settings. In the first 15 epochs, the Dice coefficient seems stable around 0.57.
Is the learning rate of 0.00001 too small?
Thanks.

command 'N4BiasFieldCorrection' could not be found on host mic233 Interface N4BiasFieldCorrection failed to run.

I ran this code on Linux with Python 3.6.3, but ran into this problem:

Traceback (most recent call last):
File "", line 1, in
File "/DATA/234/zywang/code/3DUnetCNN/brats/preprocess.py", line 120, in convert_brats_data
convert_brats_folder(subject_folder, new_subject_folder, background_mask)
File "/DATA/234/zywang/code/3DUnetCNN/brats/preprocess.py", line 105, in convert_brats_folder
normalize_image(image_file, out_file, background_mask)
File "/DATA/234/zywang/code/3DUnetCNN/brats/preprocess.py", line 89, in normalize_image
corrected = correct_bias(windowed, append_basename(out_file, "_corrected"))
File "/DATA/234/zywang/code/3DUnetCNN/brats/preprocess.py", line 56, in correct_bias
done = correct.run()
File "/DATA/234/zywang/anaconda/lib/python3.6/site-packages/nipype/interfaces/base.py", line 1081, in run
runtime = self._run_wrapper(runtime)
File "/DATA/234/zywang/anaconda/lib/python3.6/site-packages/nipype/interfaces/base.py", line 1724, in _run_wrapper
runtime = self._run_interface(runtime)
File "/DATA/234/zywang/anaconda/lib/python3.6/site-packages/nipype/interfaces/base.py", line 1750, in _run_interface
(self.cmd.split()[0], runtime.hostname))
OSError: command 'N4BiasFieldCorrection' could not be found on host mic233
Interface N4BiasFieldCorrection failed to run.

Because I am not familiar with Linux, I don't know whether I added ANTs to the PATH correctly. My .bashrc is shown below:

[screenshot of .bashrc, 2017-11-10]

Did I add the path correctly? If so, how can I solve this problem?

Concatenate layer requires inputs with matching shapes

Hey, thanks a lot for your awesome library!
When trying to test it using my own image cubes of size 64³ with TensorFlow as the backend, I get:

In [0]: model = unet_model_3d((64, 64, 64, 1))
ValueError: `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 16, 16, 16, 512), (None, 16, 16, 16, 256)]

Might this have something to do with the fact that you wrote the model for the Theano back-end while I'm trying to use it with TensorFlow?
Do you see a way to solve it?
Thanks and Best
Willi
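The usual cause of this mismatch when moving from Theano to TensorFlow is the image dimension ordering: the model concatenates along the channel axis, and if the backend assumes channels-last while the model was written channels-first, the concat axis points at a spatial dimension. A hedged sketch of forcing channels-first (Keras 2 API; older Keras uses K.set_image_dim_ordering('th')):

    from keras import backend as K

    # make the TensorFlow backend use Theano-style channels-first tensors
    K.set_image_data_format("channels_first")

    # input is then (channels, x, y, z); adjust to your data
    model = unet_model_3d((1, 64, 64, 64))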

Use it for regression

@ellisdg I am trying to do regression with this network, i.e., the output is now a probability map instead of concrete labels. I stored the mask files as 'float32' NIfTI images and tried to load them. But when I start training, for example for 100 epochs, it shows 'Epoch 1/100' and then nothing prints anymore; the machine seems to hang there and I have to kill it. I was wondering if there is anything tricky in the data generator part. Do you have any idea what important things I need to pay attention to when modifying the code? Thanks in advance!
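For the regression use case described above, the usual Keras-level changes are a continuous loss instead of Dice and a linear or sigmoid final activation; a minimal hedged sketch (the actual model builder in this repository may differ):

    from keras.optimizers import Adam

    initial_learning_rate = 0.00001  # value used elsewhere on this page

    # model = unet_model_3d(...) built as usual, with a linear/sigmoid final layer;
    # regression targets are continuous, so swap the Dice loss for mean squared error
    model.compile(optimizer=Adam(lr=initial_learning_rate),
                  loss="mean_squared_error")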

I can't download the data, can you help me?

I'm sorry to trouble you, but I can't download the data. I tried to register on the website but failed. Can you give me the data? I'm a newcomer, and I promise it is just for learning. Thank you!

Why do you get a common foreground mask in advance in preprocessing?

Hi @ellisdg ,
Based on your explanation in issue #15, you first get a common foreground mask and then crop the images with that mask. I guess the purpose of a global foreground mask is to make sure all the cropped images have the same size, but the images will have the same size after the resize anyway, so what is the reasoning behind preparing a foreground mask?

Please correct me if I am wrong about this, thanks.

When I run UnetTraining, I have many problems.

In UnetTraining line 151, "model_file = os.path.abspath("3d_unet_model.h5")": where is "3d_unet_model.h5"?
In UnetTraining line 165, "processed_list_file = os.path.abspath("processed_subjects.pkl")": where is "processed_subjects.pkl"?

So when I run this .py file, I meet this problem:
/usr/bin/python2.7 /home/s/myproject/3DUnetCNN-master/UnetTraining.py
Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:99] Couldn't open CUDA library libcudnn.so. LD_LIBRARY_PATH: /home/s/pycharm/pycharm-2016.3/bin:/home/s/git/torch/install/lib:
I tensorflow/stream_executor/cuda/cuda_dnn.cc:1562] Unable to load cuDNN DSO
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
Traceback (most recent call last):
File "/home/s/myproject/3DUnetCNN-master/UnetTraining.py", line 217, in
main(overwrite=False)
File "/home/s/myproject/3DUnetCNN-master/UnetTraining.py", line 155, in main
model = unet_model()
File "/home/s/myproject/3DUnetCNN-master/UnetTraining.py", line 55, in unet_model
conv1 = Conv3D(32, 3, 3, 3, activation='relu', border_mode='same')(inputs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 517, in call
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 571, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 155, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
File "/usr/local/lib/python2.7/dist-packages/keras/layers/convolutional.py", line 1219, in call
filter_shape=self.W_shape)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 1787, in conv3d
x = tf.nn.conv3d(x, kernel, strides, padding)
AttributeError: 'module' object has no attribute 'conv3d'

I would appreciate your help, please!

patches.py reconstruct_from_patches: patch_data size doesn't match patch.flatten size

Just going to post this here in case someone else runs into the same problem.

After training on my own data I tried to run prediction, but ran into this problem: my patch.flatten() size did not match the patch_data[patch_index] size. They were off by exactly 4 times, meaning patch.flatten was 1/4 the size of patch_data,

on this line:

patch_data[patch_index] = patch.flatten()

Here is the solution:
take a look at the train.py config on this line:

config["labels"] = (1, 2, 3, 4) # the label numbers on the input image

The BraTS 2017 data used actually has 4 labels (4 different areas of the same tumor); my own dataset had only 1 area, so I had to decrease the labels to only 1:
config["labels"] = (1, )

This solves the problem.

Structure?

Hi,
Thank you so much for the post.
I would like to ask: will this 3D CNN work well when the volume has a complicated structure (such as a retina, or tree branches)?

Is it possible to share training error?

I ran the code myself and found the training error (-dice) stuck at around -0.55 even though I already trained it for 50 epochs. I would expect it to be around -0.9 or -1.0 after training for so many epochs. Thanks!

I don't know what the code means, please help me, thank you~

When I ran this code following the closed issue, I understood most of it, but there are too many paths and files that need to be put into the code, so I got confused.
Then I ran the code, something went wrong, and I can't fix it. I need some help, please:

Traceback (most recent call last):
  File "/home/kaido/workspace/3DUnet/UnetTraining.py", line 226, in <module>
    main(overwrite=False)
  File "/home/kaido/workspace/3DUnet/UnetTraining.py", line 163, in main
    train_model(model, model_file, overwrite=overwrite, iterations=training_iterations)
  File "/home/kaido/workspace/3DUnet/UnetTraining.py", line 191, in train_model
    subjects[dirname.split('_')[-2]] = dirname
IndexError: list index out of range

nibabel.filebasedimages.ImageFileError

I am also using the BRATS dataset, containing images in .mha format. Will that not work? I saw in your readme that you used the BRATS 2015 dataset. The folders inside my ./data for a particular patient look something like this:
[screenshot of the patient folder structure, 2017-04-07]
The content inside a folder looks like this:
[screenshot of the folder contents, 2017-04-07]

I even tried to zip the .mha file and then give it as input, but that didn't help.
I tried this in normalize.py:
background_path = os.path.join(subject_folder, subject_folder.split("/")[-1]+str(".mha.gz"))

Please let me know the correct way to provide the input images. Also, what are foreground and background in your context? We are using 4 modalities (T1, T2, T1c and FLAIR) and of course the ground truth, and I cannot understand what background or foreground means in this context.

P.S. I am using the latest code, updated 13 hours ago.
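If the BRATS 2015 images are still in .mha, one generic way to convert them to NIfTI before running the pipeline is SimpleITK (a sketch with a hypothetical filename, not this repository's preprocess code):

    import SimpleITK as sitk

    # read the .mha volume and write it back out as compressed NIfTI
    image = sitk.ReadImage("VSD.Brain.XX.O.MR_Flair.mha")
    sitk.WriteImage(image, "flair.nii.gz")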

Running the code in GPU

Hello @ellisdg, I would like to know your thoughts on the following.
My system has the following memory capabilities (output of sudo lshw -class memory):

 *-memory
       description: System Memory
       physical id: 19
       slot: System board or motherboard
       size: 32GiB

Output of nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 745     Off  | 0000:01:00.0     Off |                  N/A |
| 25%   56C    P0    N/A /  N/A |   3701MiB /  4042MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

I ran the Keras functions with the Theano backend and GPU support, with cnmem disabled (setting it to higher values was giving memory errors). The following are the things I tried:
a) 200 samples on the Theano backend with GPU enabled, but it gives a memory error. Following is the message:

Creating validation split...
Epoch 1/50
Error allocating 37345632 bytes of device memory (out of memory). Driver report 4915200 bytes free and 4238540800 bytes total 

b) Reduced config["image_shape"] from 144x144x144 to 80x80x80 and again got a memory error:
Error allocating 7077888 bytes of device memory (out of memory). Driver report 1114112 bytes free and 4238540800 bytes total
c) Reduced the data set size to 100 patients; again got a memory error.
d) Frustrated at this point, I disabled the GPU and re-ran with 100 samples and config["image_shape"] of 80x80x80; finally it runs fine.
e) That gave hope, so I switched back to 200 patients with config["image_shape"] = (144, 144, 144) on CPU only, but then every epoch takes very long to finish: approximately 2 iterations of an epoch per hour, at which rate one epoch might take around 2 days. So 100 days for 50 epochs! That's quite a lot of time!

Epoch 1/50
  7/160 [>.............................] - ETA: 263591s - loss: -0.0894 - dice_coef: 0.0894

I looked at this, but we already had config["batch_size"] = 1. I also reduced config["image_shape"] to (80, 80, 80), but it was no use. When I referred to the paper again, I saw they used Caffe and cuDNN; I believe their GPU had a larger memory capacity.

Would it be possible for you to upload a trained model?

Attempted relative import in non-package

When I input "from preprocess import convert_brats_data" in bpython or ipython, it returns:
Traceback (most recent call last):
File "", line 1, in
from preprocess import convert_brats_data
File "/usr/lib/python2.7/dist-packages/bpython/curtsiesfrontend/repl.py", line 257, in load_module
module = pkgutil.ImpLoader.load_module(self, name)
File "/usr/lib/python2.7/pkgutil.py", line 246, in load_module
mod = imp.load_module(fullname, self.file, self.filename, self.etc)
File "preprocess.py", line 15, in
from .config import config
ValueError: Attempted relative import in non-package

Looking forward to your reply, thanks!

3d vessel Segmentation and Testing issue

Hi @ellisdg
I am doing 3D image segmentation for detecting stenosis and plaque in the vessels of the coronary artery. Can I use the code that you provided? My dataset is also in NIfTI format, but with only one modality. Also, will you provide the testing code for this 3D U-Net segmentation example?

Thank you for your time :)

Segmentation fault (core dumped)

I get a segmentation fault (core dumped) message after the first epoch is trained. I had to turn off Deconvolution3D because it wasn't available from keras_contrib; I don't know if that has anything to do with the segmentation fault.

CUDA 8, CUDNN 6, 2 x Nvidia 1080 Ti, 64 gigabytes RAM, Python 3.6 virtual environment, Keras 2.1.1

how to match input shape with array of shape

Hi, my GPU only has 8 GB of memory, so I want to reduce the image shape. But I get the error below:
ValueError: Error when checking input: expected input_2 to have shape (None, 3, 128, 128, 128) but got array with shape (1, 3, 144, 144, 144)

How can I make the model's expected input shape match the shape of my data?
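The shapes in the error have to agree end to end: the model above was built for (3, 128, 128, 128) inputs while the arrays on disk are still (3, 144, 144, 144). Two sketches of resolving it (assuming the channels-first ordering shown in the error):

    # option 1: build the model to match the data already on disk
    model = unet_model_3d((3, 144, 144, 144))

    # option 2: set config["image_shape"] = (128, 128, 128) and rerun the data
    # conversion step so the stored arrays match the smaller model input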

prediction.py patch_shape: expected int() argument must be a string or a number, not 'TensorVariable'

I'm going to post this here in case someone else runs into the same problem.

After training I ran prediction, but there was a weird error on this line:

patch_shape = tuple([int(dim) for dim in model.input.shape[-3:]])

dim is expecting an int, but model.input.shape[-3:] was returning a TensorVariable.

I think it's because I'm using a newer version of Keras and Theano. To get around the problem, what I did was just replace it with my patch_size, for example:

patch_shape = tuple([int(dim) for dim in [64, 64, 64]])

The same problem occurs with this line:

patch_shape = tuple([int(dim) for dim in model.input.shape[-3:]])

Replace it with your patch_shape:

patch_shape = tuple([int(dim) for dim in [64, 64, 64]])

This is a very hack-y solution!
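A slightly less hack-y variant, assuming Keras 2 backend utilities are available, is to ask Keras for the static shape as plain integers rather than reading the symbolic tensor's shape attribute:

    from keras import backend as K

    # K.int_shape returns a tuple of Python ints (or None), not symbolic variables
    patch_shape = tuple(K.int_shape(model.input)[-3:])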
