seva100 / optic-nerve-cnn
Code repository for the paper "Optic Disc and Cup Segmentation Methods for Glaucoma Detection with Modification of U-Net Convolutional Neural Network"
License: MIT License
Thanks for open-sourcing this.
Noticed you had this line in one of your notebooks:
512 px cropped by Optic Disc area and resized to 128 px images were used.
How about automating the crop of images by optic disc? I was thinking of experimenting with showing slightly more context (around 15% extra), then doing another pass through all the images to make them equal in pixel height and width.
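A possible sketch of that idea (assumptions only: the optic-disc location comes from a binary mask, and the 15% margin and 128 px output are example values, not the authors' pipeline):

import cv2
import numpy as np

def crop_by_disc(image, disc_mask, margin=0.15, out_size=128):
    # Crop around the optic-disc bounding box with a relative margin,
    # then resize so every crop has equal pixel height and width.
    ys, xs = np.nonzero(disc_mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    pad_y, pad_x = int(margin * (y1 - y0)), int(margin * (x1 - x0))
    y0, y1 = max(0, y0 - pad_y), min(image.shape[0], y1 + pad_y)
    x0, x1 = max(0, x0 - pad_x), min(image.shape[1], x1 + pad_x)
    return cv2.resize(image[y0:y1, x0:x1], (out_size, out_size))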
I want to test the trained model on a single input image. I have trained my own model using your steps as an example and want to test it on a single image. Please help.
Can somebody please clearly state the preprocessing steps taken (cropping, resizing, CLAHE)? I am not able to follow them. I understand the image augmentation techniques (random flips, rotations, etc.) applied via the generator.
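For what it is worth, one common way to apply CLAHE to fundus images (a sketch only; the exact clip limit, tile size, and colour space used in the notebooks are assumptions):

import cv2

def clahe_rgb(image, clip_limit=2.0, tile_grid_size=(8, 8)):
    # Expects a uint8 RGB image; equalise only the lightness channel
    # so the colours are preserved.
    lab = cv2.cvtColor(image, cv2.COLOR_RGB2LAB)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    lab[..., 0] = clahe.apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2RGB)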
The code does not run under newer Keras/TensorFlow versions.
Unable to unzip the archive after downloading. I hope there is a new link.
Can you guide me on how to calculate the Cup-to-Disc Ratio (CDR) for a dataset such as RIM-ONE v3? In this paper you crop around the OD to segment the OC; is that crop necessary? And how do we calculate the CDR values when the input size is different?
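For reference, the vertical cup-to-disc ratio can be computed directly from the two binary masks, and because it is a ratio of lengths measured in the same image, the absolute input size cancels out. A sketch, assuming the vertical CDR is the quantity wanted:

import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    # Vertical extent (in rows) of each mask; both masks must share a resolution.
    cup_rows = np.nonzero(cup_mask.any(axis=1))[0]
    disc_rows = np.nonzero(disc_mask.any(axis=1))[0]
    return (cup_rows.max() - cup_rows.min() + 1) / (disc_rows.max() - disc_rows.min() + 1)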
How can we create an HDF5 file with a joint optic cup and disc for the RIM-ONE v3 database in extract_data.py?
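A minimal sketch of writing images plus joint disc/cup masks into an HDF5 file with h5py (the dataset names, shapes, and counts here are assumptions; extract_data.py in the repository may use a different layout):

import h5py
import numpy as np

# hypothetical arrays: N RGB images plus matching binary masks
images = np.zeros((10, 256, 256, 3), dtype=np.uint8)
disc_masks = np.zeros((10, 256, 256), dtype=np.uint8)
cup_masks = np.zeros((10, 256, 256), dtype=np.uint8)

with h5py.File('RIM_ONE_v3.hdf5', 'w') as f:
    f.create_dataset('images', data=images, compression='gzip')
    f.create_dataset('disc', data=disc_masks, compression='gzip')
    f.create_dataset('cup', data=cup_masks, compression='gzip')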
I was trying to replicate the results presented in the paper by following the procedure you have outlined. For some reason, no matter which weights I load from the 'models_weights' folder, and no matter how I crop the image, it only segments the cup and never segments the disc.
I tried running it on images from the RIM-ONE and DRIONS datasets, and it either didn't segment properly or segmented only the cup.
Here are the preprocessing steps I followed for segmenting the disc:
Then I passed the image into the model. Is there anything that I am missing or doing wrong?
I can run your "U-Net, OD on DRIONS-DB (fold 0).ipynb" with no problem using your pre-trained model. I didn't recreate the dataset; I downloaded it from the URL you gave. Now I want to run this model on other images, but I don't know how. Do I have to start from the beginning and recreate the dataset just to run it on one of my images? Please help, I am new to coding.
Dear Artem,
Can you please review my code at https://github.com/abhinav-iiit/fundus-image-segmentation ? I am unable to reproduce the results despite using the weights provided here: https://github.com/seva100/optic-nerve-cnn/tree/master/models_weights
How do I generate RIM_ONE_v3.hdf5 and all_data.hdf5, and what are the requirements?
I am using your "U-Net, OD on DRIONS-DB (fold 0)" notebook to simulate OD segmentation on DRIONS-DB. It works perfectly fine when I use your pre-trained model from the folder "05.03,02_40,U-Net light, on DRIONS-DB 256 px fold 0, SGD, high augm, CLAHE, log_dice loss" to segment images from your DRIONS_DB.hdf5. But when I load an image from the DRIONS DB directly and predict the segmentation using the following code, I get a very bad segmentation.
import numpy as np
import cv2
import matplotlib.pyplot as plt
from PIL import Image
# model and tf_to_th_encoding come from your notebook

img_path = 'E:/DRIONS/DRIONS-DB/images/image_001.jpg'
im = np.array(Image.open(img_path))
im = im[0:, 40:]                     # drop the first 40 columns
print(im.shape)
im = cv2.resize(im, (256, 256))      # resize to the network input size
print(im.shape)
plt.imshow(im), plt.show()
im = np.expand_dims(im, axis=0)      # add the batch dimension
im = tf_to_th_encoding(im)           # channels-last to channels-first, as in the notebook
prediction = (model.predict(im)[0, 0]).astype(np.float64)
plt.imshow(prediction > 0.5, cmap=plt.cm.Greys_r), plt.show()
So can you help me understand why it works when using an image from your DRIONS_DB.hdf5 file but fails when processing a new image?
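One likely cause (an assumption, not a confirmed answer from the author): the images stored in DRIONS_DB.hdf5 were already preprocessed, while the JPEG read from disk is not. The model folder name mentions CLAHE, and the network was presumably trained on intensities rescaled to [0, 1], so the raw image may need the same treatment before the expand_dims / tf_to_th_encoding steps, e.g.:

lab = cv2.cvtColor(im, cv2.COLOR_RGB2LAB)                     # im: uint8 RGB, 256x256
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # parameters assumed
lab[..., 0] = clahe.apply(lab[..., 0])
im = cv2.cvtColor(lab, cv2.COLOR_LAB2RGB)
im = im.astype(np.float64) / 255.0                            # scale to [0, 1] (assumed)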
There's a compatibility error in the "U-Net, OD cup on RIM-ONE v3, cropped by OD (fold 0).ipynb" notebook (nbviewer) in the following code:
if K.backend() == 'tensorflow':
    sess = K.get_session()
It throws this error:

RuntimeError: `get_session` is not available when using TensorFlow 2.0.
Any idea on how to fix it?
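One workaround is to go through the TF 1.x compatibility module (a sketch; whether the rest of the notebook then runs under TF 2.x is not guaranteed):

import tensorflow as tf
from tensorflow.keras import backend as K

if K.backend() == 'tensorflow':
    # K.get_session() was removed from the TF 2.x Keras backend;
    # the tf.compat.v1 variant still returns a usable session.
    sess = tf.compat.v1.keras.backend.get_session()

Alternatively, pinning tensorflow to a 1.x release avoids the problem, since the notebooks predate TF 2.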
@seva100 Can you tell me how to create an .h5py file from random sets of fundus images (e.g., DRISHTI/DRIONS) using the code you mentioned in the scripts? How can I import the images and their CSV annotation file? I just want to create a new h5py file; can I do that? I think the one you provided in the scripts takes data from a pre-existing h5py file.
@seva100, sorry to disturb you again. Can you help me with the code to create an HDF5 file from a folder containing approximately 8k images (6k for training, 1k, and 1k for testing), located in a directory such as D:/New Folder/, where the images are of different sizes (pixel dimensions)? I am getting confused.
Immediate help would be appreciated.
Thanks.
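A minimal sketch of that (the folder path, the *.jpg extension, and the 256 px target size are assumptions) for packing a directory of differently sized images into one HDF5 file:

import glob, os
import cv2
import h5py
import numpy as np

src_dir = 'D:/New Folder'             # hypothetical folder with the ~8k images
paths = sorted(glob.glob(os.path.join(src_dir, '*.jpg')))
size = 256                            # resize everything to a common size

with h5py.File('all_data.hdf5', 'w') as f:
    dset = f.create_dataset('images', shape=(len(paths), size, size, 3),
                            dtype=np.uint8, compression='gzip')
    for i, p in enumerate(paths):
        img = cv2.cvtColor(cv2.imread(p), cv2.COLOR_BGR2RGB)
        dset[i] = cv2.resize(img, (size, size))

The training/testing split can then be expressed as index ranges into this single dataset, or the subsets can be written as separate datasets in the same file.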