Comments (12)
Hi, could you elaborate on the problem you are facing?
As far as the training goes, we don't change any hyperparameters during training.
from rvgan.
So far I have trained for 80 epochs without changing any hyperparameters, but the F1 score is still below 0.8.
I downloaded your weights and tested them, and the results were correct, so I'm not sure whether there are some details I need to pay attention to during training.
Hi, you are on the right track. After training, loop over all the saved models and print the metrics for each to find the best snapshot. Since GANs don't converge like regular models, the loss values fluctuate a lot, so the last saved model won't necessarily be the best one.
You can also try reloading the best snapshot and training again. I believe I did this for some of my similar work, as the discriminator might suffer from mode collapse and only print loss=0.00. This is a common phenomenon in GAN training.
Also, while testing, try using a smaller stride (e.g. stride=3) for the patch-wise prediction on the whole test image. This is slow but will give you the best AUC and F1 score.
Hope this helps!
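The snapshot sweep suggested above can be sketched as follows. Note that `eval_f1` and the checkpoint paths here are placeholders for your own load-weights-and-score routine, not part of the RV-GAN codebase:

```python
# Minimal sketch: evaluate every saved generator snapshot and keep the best.
# `eval_f1` is an assumed callable that loads one snapshot, runs prediction
# on the validation images, and returns its F1 score.
def sweep_checkpoints(paths, eval_f1):
    """Return (best_path, best_f1) over all saved snapshots."""
    best_path, best_f1 = None, -1.0
    for path in sorted(paths):
        f1 = eval_f1(path)  # load this snapshot, predict, compute F1
        print(path, "F1 =", round(f1, 4))
        if f1 > best_f1:
            best_path, best_f1 = path, f1
    return best_path, best_f1
```

Because GAN losses fluctuate, picking the checkpoint by validation F1 rather than by final loss is the point of this loop.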
I tested each epoch, but the F1 score was still less than 0.80.
I also tried a smaller stride (stride=3), but not much improvement.
Could you tell me about your training process?
I trained the model for 100 epochs. After training, I loaded the weights of the best-performing coarse and fine generators (from the same epoch) and then trained again for 100 epochs. I did not load the discriminator weights while retraining, so the discriminators were trained from scratch. I think I repeated this procedure 2-3 times, and also looped over all 100 saved weights to find the best-performing coarse and fine generator pair.
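The retraining rounds described here can be sketched as below. The builder, loader, and trainer callables are stand-ins for the repo's actual model and training functions, which are assumptions here:

```python
# Hedged sketch of the retraining procedure: each round reloads the best
# coarse/fine generator pair but rebuilds both discriminators from scratch.
def retrain_rounds(build_generator, build_discriminator,
                   load_best_generators, train_round, rounds=3):
    """Run `rounds` retraining rounds, returning one result per round."""
    history = []
    for _ in range(rounds):
        coarse_g, fine_g = build_generator(), build_generator()
        load_best_generators(coarse_g, fine_g)  # best snapshot so far, same epoch
        # No load_weights call here: fresh discriminators avoid carrying over
        # a collapsed discriminator (the loss=0.00 failure mode).
        coarse_d, fine_d = build_discriminator(), build_discriminator()
        history.append(train_round(coarse_g, fine_g, coarse_d, fine_d))
    return history
```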
Thanks!
For Table 1 in your paper, I used the sensitivity, specificity and accuracy to calculate the F1 score on CHASE-DB1 and got 0.7858, but your reported result is 0.8957. M-GAN's sensitivity, specificity and accuracy are better than RV-GAN's, yet its F1 score is 0.08 lower than yours. Could you explain this?
CHASE-DB1 doesn't come with training and test splits. Which images are you using from CHASE-DB1 for testing? Maybe the evaluation technique and the test images you are using are giving you the bad results?
Also, the table results are correct, as we have experimented and validated the weights extensively with different stride values and found stride=3 to give the best results. The only typo is that the M-GAN sensitivity result should be in bold text rather than ours.
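The stride-based patch-wise prediction discussed in this thread can be sketched as below; the patch size, stride default, and `predict_patch` callable are assumptions, not the repo's exact API. Overlapping patch predictions are averaged, which is why a smaller stride is slower but gives a smoother full-image prediction:

```python
import numpy as np

# Slide a patch window over the image with the given stride, predict each
# patch, and average the overlapping predictions into one full-size map.
def predict_full(image, predict_patch, patch=128, stride=3):
    H, W = image.shape[:2]
    out = np.zeros((H, W), dtype=np.float64)
    count = np.zeros((H, W), dtype=np.float64)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            out[y:y+patch, x:x+patch] += predict_patch(image[y:y+patch, x:x+patch])
            count[y:y+patch, x:x+patch] += 1
    return out / np.maximum(count, 1)  # average overlapping predictions
```

With stride=3 each pixel is covered by many overlapping patches, so per-patch boundary artifacts average out, at the cost of many more forward passes.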
I used sensitivity=0.8199, specificity=0.9806 and accuracy=0.9697 to calculate the F1 score, as shown in Table 1; however, the F1 value I calculated by hand is far from the value reported in Table 1.
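For reference, F1 can be reconstructed from sensitivity, specificity and accuracy, because the positive-class prevalence (and hence precision) follows from those three numbers. A sketch, assuming the reported table values are exact:

```python
# Derive F1 from sensitivity (Se), specificity (Sp) and accuracy (Acc).
# Since Acc = p*Se + (1-p)*Sp, the positive-class prevalence is
# p = (Acc - Sp) / (Se - Sp), and precision follows from Bayes' rule.
# Table rounding will shift the result slightly.
def f1_from_se_sp_acc(se, sp, acc):
    p = (acc - sp) / (se - sp)                    # positive-class prevalence
    ppv = p * se / (p * se + (1 - p) * (1 - sp))  # precision
    return 2 * ppv * se / (ppv + se)

# RV-GAN's CHASE-DB1 row (Se=0.8199, Sp=0.9806, Acc=0.9697):
print(round(f1_from_se_sp_acc(0.8199, 0.9806, 0.9697), 3))  # ≈ 0.786
```

This lands close to the 0.7858 figure computed by hand above.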
The F1-score is calculated using precision and sensitivity. We used the pycm library.
from pycm import ConfusionMatrix

cm = ConfusionMatrix(actual_vector=y_true, predict_vector=y_pred)
# TNR, TPR, ACC and PPV are per-class dicts, indexed by class label (0 or 1)
print("Specificity: " + str(cm.TNR[1]))
print("Sensitivity: " + str(cm.TPR[1]))
print("Accuracy: " + str(cm.ACC[0]))
# F1 from precision (PPV of class 0) and sensitivity (TPR of class 1)
print("F1 Score: " + str((2 * cm.PPV[0] * cm.TPR[1]) / (cm.PPV[0] + cm.TPR[1])))
I think you should check the calculation again rather than the program, because I also used M-GAN's sensitivity=0.8234, specificity=0.9938 and accuracy=0.9736 to calculate its F1 score, and that result was right.
> I downloaded your weights and tested them, and the results were correct, so I'm not sure whether there are some details I need to pay attention to during training.

Where did you test the weights and biases? Could you please help me with the procedure?