
promise12_segmentation's Introduction

PROMISE12 Challenge - Automated Segmentation of Prostate Structures from MR Images

This is a Keras implementation of a fully convolutional neural network with residual connections for automatic segmentation of prostate structures from MR images.

More info on this competition can be found on the Grand Challenges website. The data can be downloaded from https://promise12.grand-challenge.org/download/

The network architecture was inspired by U-Net: Convolutional Networks for Biomedical Image Segmentation and by a Keras implementation of the model by Paul-Louis Pröve.

The predictions of this model achieved a score of 83.70 and were ranked #8 in the competition. For more details about the model and the implementation, see the file project_summary.pdf

Network schematic: images/schematic.png

How to use

Dependencies

This tutorial depends on the following libraries:

  • scikit-image, numpy, matplotlib, scipy
  • SimpleITK
  • OpenCV
  • TensorFlow >= 1.4
  • Keras >= 2.0

This code should also be compatible with the Theano backend of Keras, but in my experience Theano is slower than TensorFlow.

Running the model

  • Run python train.py to pre-process the data and train the model. Model weights are saved to the file ../data/weights.h5 (a sketch showing how these weights can be reused for inference follows this list).

  • Run python test.py to test the model on the training and validation sets and generate images of some of the best and worst predictions.
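
As a rough illustration of reusing the saved weights for inference (not code taken from this repository): the build_model() helper, the 256x256 input size, and the case file name below are assumptions and would need to match the architecture and preprocessing defined in train.py.

```python
# Hypothetical inference sketch. build_model(), img_rows/img_cols and the
# input file name are placeholders, not names guaranteed by this repository.
import numpy as np
import SimpleITK as sitk
import cv2

from train import build_model  # assumed helper that rebuilds the network

img_rows, img_cols = 256, 256  # assumed training resolution

model = build_model()
model.load_weights('../data/weights.h5')

# Read one PROMISE12 case (.mhd volume) and resize each slice.
volume = sitk.GetArrayFromImage(sitk.ReadImage('Case00.mhd'))  # (slices, H, W)
slices = np.stack([cv2.resize(s, (img_cols, img_rows)) for s in volume])
slices = slices[..., np.newaxis].astype('float32')
# Note: any intensity normalisation used during training should be applied here too.

pred = model.predict(slices, batch_size=8)  # per-pixel probabilities
mask = (pred > 0.5).astype('uint8')         # binary prostate mask per slice
```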


promise12_segmentation's Issues

InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a int64 tensor but is a float tensor [Op:Mul] name: loss/conv2d_22_loss/mul/

Hello, I am running this on Colab with TensorFlow 2.0 and Keras 2.3.1, and I am training on another dataset in .nii format.

Unfortunately, the algorithm outputs an error in the first epoch:

Epoch 1/20
2020-01-24 16:05:30.895469: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-01-24 16:05:36.929059: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
Traceback (most recent call last):
  File "train.py", line 285, in <module>
    n_imgs=15*10**4, batch_size=8)
  File "train.py", line 275, in keras_fit_generator
    use_multiprocessing=True)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 1297, in fit_generator
    steps_name='steps_per_epoch')
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_generator.py", line 323, in model_iteration
    steps_name='validation_steps')
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_generator.py", line 265, in model_iteration
    batch_outs = batch_function(*batch_data)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 1070, in test_on_batch
    reset_metrics=reset_metrics)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 327, in test_on_batch
    output_loss_metrics=model._output_loss_metrics)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_eager.py", line 354, in test_on_batch
    output_loss_metrics=output_loss_metrics))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_eager.py", line 166, in _model_loss
    per_sample_losses = loss_fn.call(targets[i], outs[i])
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/losses.py", line 221, in call
    return self.fn(y_true, y_pred, **self._fn_kwargs)
  File "/content/drive/My Drive/promise12_segmentation2/codes/metrics.py", line 18, in dice_coef_loss
    return -dice_coef(y_true, y_pred)
  File "/content/drive/My Drive/promise12_segmentation2/codes/metrics.py", line 12, in dice_coef
    intersection = K.sum(y_true_f * y_pred_f)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/math_ops.py", line 899, in binary_op_wrapper
    return func(x, y, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/math_ops.py", line 1206, in _mul_dispatch
    return gen_math_ops.mul(x, y, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_math_ops.py", line 6698, in mul
    _six.raise_from(_core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a int64 tensor but is a float tensor [Op:Mul] name: loss/conv2d_22_loss/mul/

Any ideas how to solve this?

Thanks in advance!
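
A common cause of this particular error is that the ground-truth masks reach the loss as integer (int64) tensors while the predictions are float32, so the element-wise multiplication inside the Dice coefficient fails under TF 2.x eager execution. Below is a minimal sketch of a Dice loss that casts both inputs to float32; the smoothing term and exact formulation are assumptions, not a copy of the repository's metrics.py.

```python
from keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    # Cast both tensors to float32 so K.sum(y_true_f * y_pred_f) does not
    # fail when the masks arrive as int64 (the error reported above).
    y_true_f = K.flatten(K.cast(y_true, 'float32'))
    y_pred_f = K.flatten(K.cast(y_pred, 'float32'))
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return -dice_coef(y_true, y_pred)
```

Alternatively, casting the mask arrays to float32 before they are fed to fit_generator avoids the problem at the data-preparation stage.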

Missing folder test_samples

Hi everyone,
I am trying to run this code. The training completes, but when I try to run the test I cannot find the test_samples folder. I do not know whether I need to convert the files in the test data to .png format or whether a folder is missing from this repository.

Thanks for your reply.

How to use this model to segment RGB pictures?

Thanks for sharing the code, and sorry to bother you. I want to use this model to segment my own dataset, but I do not know how to label my own pictures (which are .png or .jpg files) so that they can be used with this model.
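
For a binary segmentation model like this one, the masks generally need to end up as single-channel arrays with values in {0, 1} at the network's input resolution. The sketch below is only an illustration of that conversion; the file names and the 256x256 input size are assumptions, not values taken from this repository.

```python
import cv2
import numpy as np

img_rows, img_cols = 256, 256  # assumed network input size

# Hypothetical example: turn an RGB photo and its hand-drawn annotation into
# the (1, H, W, 1) image/mask pair a 2D U-Net style network expects.
image = cv2.imread('sample.jpg', cv2.IMREAD_GRAYSCALE)       # drop colour channels
mask = cv2.imread('sample_mask.png', cv2.IMREAD_GRAYSCALE)   # annotation image

image = cv2.resize(image, (img_cols, img_rows)).astype('float32') / 255.0
mask = (cv2.resize(mask, (img_cols, img_rows)) > 127).astype('float32')

X = image[np.newaxis, ..., np.newaxis]  # shape (1, H, W, 1)
y = mask[np.newaxis, ..., np.newaxis]   # binary mask, same shape
```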

About porting from Python 2.7 to Python 3.5

Sorry to bother you, but I notice that you use Python 2.7, and I want to port the code to Python 3.5. In Python 3.5, filter returns a generator rather than a list (as in Python 2.7). I did not know how to convert it, so I deleted lines 48 and 95 (fileList.sort() no longer works).
But the problem still happens:
Traceback (most recent call last):
  File "C:/Users/lz666/PycharmProjects/promise12_segmentation/codes/train.py", line 233, in <module>
    n_imgs=15*10**4, batch_size=32)
  File "C:/Users/lz666/PycharmProjects/promise12_segmentation/codes/train.py", line 167, in keras_fit_generator
    data_to_array(img_rows, img_cols)
  File "C:/Users/lz666/PycharmProjects/promise12_segmentation/codes/train.py", line 72, in data_to_array
    images = np.concatenate( images , axis=0 ).reshape(-1, img_rows, img_cols, 1)
ValueError: need at least one array to concatenate
I do not think this is a Python 2.7 vs. 3.5 problem (I am using CPU only). Can you fix it or port the code to Python 3.5?
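
If it helps, the usual Python 3 fix is to materialise the filter result with list() rather than deleting the sort calls; the sketch below uses a hypothetical data path and filtering condition, which may differ from the ones in train.py.

```python
import os

data_path = '../data/train/'  # hypothetical location of the .mhd files

# Python 2: filter(...) returned a list, so fileList.sort() worked directly.
# Python 3: filter(...) returns a lazy iterator, so convert it to a list first.
fileList = list(filter(lambda f: f.endswith('.mhd'), os.listdir(data_path)))
fileList.sort()
```

The "need at least one array to concatenate" error itself usually means the resulting file list was empty, e.g. because the data path is wrong or no files matched the filter, so it is worth printing fileList before the concatenation.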

Stuck on Epoch 1/20

I ran the train.py file and have been stuck on `Epoch 1/20` for over an hour. I'm using a Google Cloud Platform VM instance to run the file. Any thoughts?

[Screenshot: 2019-05-23 at 11:16:05 PM]
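
One thing worth checking in this situation is whether TensorFlow can actually see a GPU on the VM; train.py generates a very large number of augmented images per epoch, so CPU-only training can easily look stuck. A quick check using standard TensorFlow 1.x-style calls:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# True only if a CUDA-capable GPU is visible and usable by TensorFlow.
print(tf.test.is_gpu_available())

# Lists all devices TensorFlow detected (CPU and, if available, GPU).
print([d.name for d in device_lib.list_local_devices()])
```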

Error after training

Hi,

Thanks for this great tutorial. I modified the code to work with Python 3; however, when I run it, the error below appears. Could you please give me some advice on how to fix this problem?

Thank you very much!

Input arrays should have the same number of samples as target arrays. Found 950 input samples and 910 target samples.

Total params: 53,249,754
Trainable params: 53,237,530
Non-trainable params: 12,224
_________________________________________________________________________________________________
Traceback (most recent call last):
  File "train.py", line 237, in <module>
    n_imgs=15*10**4, batch_size=32)
  File "train.py", line 227, in keras_fit_generator
    use_multiprocessing=True)
  File "/opt/Anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wapper
    return func(*args, **kwargs)
  File "/opt/Anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 2116, in ft_generator
    val_x, val_y, val_sample_weight)
  File "/opt/Anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 1438, in _tandardize_user_data
    _check_array_lengths(x, y, sample_weights)
  File "/opt/Anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 217, in _ceck_array_lengths
    'and ' + str(list(set_y)[0]) + ' target samples.')
ValueError: Input arrays should have the same number of samples as target arrays. Found 950 iput samples and 910 target samples.
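
This error means the preprocessed image and mask arrays ended up with different numbers of slices (950 vs. 910), typically because some cases were read into one array but not the other during preprocessing. A simple sanity check after the arrays are built can localise the problem; the .npy file names below are hypothetical and should be adjusted to whatever data_to_array() actually writes out.

```python
import numpy as np

# Hypothetical cached-array names; adjust to the files produced by train.py.
X_val = np.load('../data/X_val.npy')
y_val = np.load('../data/y_val.npy')

assert X_val.shape[0] == y_val.shape[0], (
    'Image/mask count mismatch: %d images vs %d masks'
    % (X_val.shape[0], y_val.shape[0]))
```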
