
ddcm-semantic-segmentation-pytorch's Introduction


Hey, I'm Qinghui (Brian) Liu, a researcher in the CRAI group at Oslo University Hospital (OUS), working on deep learning algorithms for medical image analysis. I hold a PhD from the UiT Machine Learning Group and the Norwegian Computing Center. My research interests center on supervised, semi-supervised, and self-supervised machine learning methods for computer vision and multi-modal data.

  • 😄 Ask me about anything tech related; I'm happy to help.
  • 📫 I'm looking to collaborate on machine/deep learning applications in the medical field.


ddcm-semantic-segmentation-pytorch's People

Contributors

samleoqh


ddcm-semantic-segmentation-pytorch's Issues

Dataset

Hi @samleoqh,
I can't download the ISPRS datasets via FTP. Could you share a Google Drive link to the dataset?
Yours sincerely

Questions about training

Hi,
Thank you for sharing your code on GitHub and congratulations on your TGRS 2020 paper (it's a great piece of work). I have two questions about the training phase of the DDCM model. I would really appreciate it if you could help me find their answers:

1- Can you tell me when the training process in train.py ends? I have read both the paper and the code and could not find a hint on this, specifically for training on the Vaihingen dataset. The learning rate is reduced continuously, but I am unsure when this reduction stops, since no maximum number of epochs, minimum learning rate, etc. is defined in the configurations. The only restriction defined is a maximum of 10e8 iterations. Does this mean that training should continue for 10e8 iterations? Since each iteration takes about 1 minute on my machine (batch size = 5), 10e8 iterations would take forever!
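
For context, here is a minimal sketch of how a stopping rule could bound such an open-ended loop. The schedule, `max_iters`, and `min_lr` values are all assumptions for illustration, not values taken from the repository's config:

```python
def poly_lr(base_lr, cur_iter, max_iters, power=0.9):
    """Polynomial learning-rate decay, a schedule commonly used
    in semantic-segmentation training."""
    return base_lr * (1 - cur_iter / max_iters) ** power

def should_stop(cur_iter, cur_lr, max_iters=100_000, min_lr=1e-6):
    """Stop at a practical iteration budget or a learning-rate floor,
    rather than relying on the 10e8-iteration cap alone."""
    return cur_iter >= max_iters or cur_lr <= min_lr

base_lr = 8.5e-4  # assumed value, for illustration only
for it in range(0, 100_001, 10_000):
    lr = poly_lr(base_lr, it, max_iters=100_000)
    if should_stop(it, lr):
        break
```

Either bound would make the end of training explicit instead of leaving it implicit in the iteration cap.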

2- I ran your code on the Vaihingen dataset for approximately 16 epochs (16k iterations); the training and validation loss trends are shown below.

[Figure: training (main_loss) and validation (val_loss) curves over ~16k iterations]

As the figure shows, main_loss drops (with a few abrupt steps) from 1.5 to 0.157 (the black box in the middle of the figure marks main_loss at 16k iterations). However, val_loss keeps fluctuating between 0.40 and 0.47 instead of decreasing; e.g., at epoch 16, main_loss is 0.157 while val_loss is 0.47. So the training loss decreases during training but the validation loss does not, leaving a large gap between the two (overfitting). Have you observed this behavior in your training runs? Do you have any suggestions, solutions, or comments for fixing the overfitting?
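
One common remedy for the plateauing val_loss described above is early stopping. The helper below is a hypothetical sketch (it is not part of the repository's train.py): it halts training once validation loss has not improved for `patience` consecutive evaluations:

```python
class EarlyStopping:
    """Stop training when validation loss stops improving.
    `min_delta` is the smallest decrease counted as an improvement."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_evals = 0

    def step(self, val_loss):
        """Record one validation result; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_evals = 0
        else:
            self.bad_evals += 1
        return self.bad_evals >= self.patience

stopper = EarlyStopping(patience=3)
losses = [0.47, 0.44, 0.45, 0.46, 0.45]  # plateauing val_loss, as in the figure
stops = [stopper.step(l) for l in losses]
# stops[-1] is True: no improvement for 3 evaluations after the 0.44 minimum
```

Pairing something like this with stronger regularization or data augmentation is a typical way to narrow the train/validation gap.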
