
optic-disc-unet's Introduction

Optic-Disc-Unet

A modified Attention Unet model with post-processing for retina optic disc segmentation.

The performance of our model on the Messidor dataset:

Patch-Based Attention Unet Model

I use a modified Attention Unet whose input is 128x128-pixel image patches. To learn more about Attention Unet, please see the paper. When sampling the patches, the algorithm concentrates samples around the optic disc. The patches look like this:

sample patches

and the corresponding ground truth is:
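The around-the-disc sampling described above could be sketched as follows. This is a minimal sketch, not the project's actual sampler: `disc_center`, `jitter`, and the RNG seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patch(image, disc_center, patch_size=128, jitter=30):
    """Crop a patch_size x patch_size patch near the optic-disc centre.

    disc_center is (row, col); jitter shifts the crop centre randomly so
    patches cluster around (but are not fixed on) the disc.
    """
    h, w = image.shape[:2]
    half = patch_size // 2
    cy = disc_center[0] + rng.integers(-jitter, jitter + 1)
    cx = disc_center[1] + rng.integers(-jitter, jitter + 1)
    # Clamp so the crop stays inside the image.
    cy = int(np.clip(cy, half, h - half))
    cx = int(np.clip(cx, half, w - half))
    return image[cy - half:cy + half, cx - half:cx + half]

img = np.zeros((400, 600))
patch = sample_patch(img, disc_center=(200, 300))
print(patch.shape)  # (128, 128)
```

The same crop window applied to the ground-truth mask yields matching label patches.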

Pretrained Model & Dataset

The model is trained on the DRION dataset: 90 images for training, 19 images for testing.

To get the ground truth of DRION, I wrote a conversion tool; you can find it in DRION-DB-PythonTools.

The pretrained model can be downloaded here. Extract the files to the Dataset directory.

Post-Process Methods

When using the Unet model directly, we often get some erroneous predictions, so I apply a post-processing algorithm:

  1. The predicted area can't be too small.
  2. The minimum bounding rectangle's height/width (or width/height) ratio should be within 0.45~2.5.

The remaining area is the final output. The limitation of this algorithm is that the parameters are not self-adjusting, so you have to change them if the input image is larger or smaller than before.

Project Structure

The structure is based on my own DL_Segmention_Template. The difference between this project and the template is the metric module in perception/metric/. For more information about the structure, please see the readme in DL_Segmention_Template.

You can find the model parameters in configs/segmention_config.json.

First run

Run main_trainer.py once; it will create a data_route in the experiment directory. Put your data there, then run main_trainer.py again to train the model.

Where to put the Pretrained Model

Extract the pretrained model into the Dataset directory. The model was trained with the DRION dataset on my own desktop (Intel i7-7700HQ, 24 GB RAM, GTX 1050 2 GB) within 30 minutes.

Test your own image

If you want to test your own image, put it in (OpticDisc)/test/origin, change the img_type of the predict settings in configs/segmention_config.json, and run main_test.py to get your result. The result will be in (OpticDisc)/test/result.

optic-disc-unet's People

Contributors

deeptrial, kant


optic-disc-unet's Issues

code error

Thank you for sharing your work. I tried to run your code, but it raises an error.
First, I want to know: are the image and ground truth 1D in your dataset?
In standard_loader.py, line 47: orgImg=plt.imread(orgPath) can produce a 2D array,
but line 48: imgs[index,:,:,0]=np.asarray(orgImg[:,:,1]*0.75+orgImg[:,:,0]*0.25) then raises
an error: too many indices! I don't understand what this line is meant to do.
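For reference, line 48's channel mix (0.75*G + 0.25*R) assumes a colour H x W x 3 image; plt.imread returns a 2D array for grayscale files, which is what triggers the "too many indices" error. A defensive version could look like this (a sketch, not the project's code):

```python
import numpy as np

def channel_mix(org_img):
    """Mix green and red channels as in standard_loader.py line 48,
    but accept 2D (grayscale) input as well."""
    if org_img.ndim == 2:          # already single-channel: nothing to mix
        return org_img.astype(np.float32)
    return org_img[:, :, 1] * 0.75 + org_img[:, :, 0] * 0.25

print(channel_mix(np.ones((4, 5))).shape)      # (4, 5)
print(channel_mix(np.ones((4, 5, 3))).shape)   # (4, 5)
```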

Request Pretrained Weights

Thanks for sharing your work.
Please share your pretrained weights for this work for further analysis; they would help me verify your results.
I would really appreciate an answer. Thanks very much.

test problem

When I train with your project, why is the probability output of the test results almost everywhere close to 1, so that the test picture is nearly all white?

ValueError

[INFO] Saving Training Data
Traceback (most recent call last):
  File "main_train.py", line 54, in <module>
    main_train()
  File "main_train.py", line 35, in main_train
    dataloader.prepare_dataset()
  File "/home/mir/Optic-Disc-Unet/experiments/data_loaders/standard_loader.py", line 69, in prepare_dataset
    imgs_val, groundTruth = self._access_dataset(self.val_img_path, self.val_groundtruth_path, self.val_type)
  File "/home/mir/Optic-Disc-Unet/experiments/data_loaders/standard_loader.py", line 48, in _access_dataset
    imgs[index,:,:,0]=np.asarray(orgImg[:,:,1]*0.75+orgImg[:,:,0]*0.25)
ValueError: could not broadcast input array from shape (400,599) into shape (400,600)
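The traceback above means one validation image is 400x599 pixels while the preallocated array expects 400x600. One possible workaround (a sketch; the fixed target shape and zero-padding strategy are assumptions, and resizing with OpenCV would work equally well) is to pad or crop each mixed image to the target shape before assignment:

```python
import numpy as np

# Target shape the preallocated array expects (assumed from the traceback).
target_h, target_w = 400, 600
imgs = np.zeros((2, target_h, target_w, 1), dtype=np.float32)

# Stand-ins for loaded images: one matching, one a pixel narrower.
for index, org_img in enumerate([np.ones((400, 600, 3)), np.ones((400, 599, 3))]):
    # G/R channel mix from standard_loader.py line 48.
    gray = org_img[:, :, 1] * 0.75 + org_img[:, :, 0] * 0.25
    # Zero-pad (and crop, if larger) to the target shape.
    padded = np.zeros((target_h, target_w), dtype=gray.dtype)
    h, w = gray.shape
    padded[:min(h, target_h), :min(w, target_w)] = gray[:target_h, :target_w]
    imgs[index, :, :, 0] = padded

print(imgs.shape)  # (2, 400, 600, 1)
```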

not enough values in test

Traceback (most recent call last):
  File "main_test.py", line 40, in <module>
    main_test()
  File "main_test.py", line 28, in main_test
    infer.predict()
  File "C:\Users\vcvis\Desktop\Optic-Disc-Unet-master\perception\infers\segmention_infer.py", line 51, in predict
    _, contours, _ = cv2.findContours(binaryResult, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
ValueError: not enough values to unpack (expected 3, got 2)

please help
thank you

Train with our own data set

Hi!
First of all, thank you very much for sharing your code!

I would like to train it with my own dataset. Could you explain what I should change?
Only segmention_config.json? And what does each variable mean?
Sorry for so many questions!
