Comments (10)
Hello!
There was some kind of error, sorry.
We retrained the lama-fourier model, and here is the log:
from lama.
If it is a GitHub problem, I could ask an admin for help, but I have never encountered this situation on GitHub before.
If there is anything wrong in my issue, we can confirm and correct it. But deleting it is an improper solution, I think. Of course, I do not know how that happened.
@cohimame @windj007 Please check it. Thank you.
@sydney0zq
One more important thing that is now in the README:
During training, we compute metrics on val and extra_val sets of 2000 images:
[2021-11-10 23:40:09,398][saicinpainting.training.trainers.base][INFO] - Validation metrics after epoch #0, total 24999 iterations:
        fid        lpips                ssim                ssim_fid100_f1
        mean       mean      std        mean      std       mean
0-10%   9.233729   0.030847  0.015940   0.972129  0.019427  NaN
10-20%  24.030723  0.082105  0.016849   0.923911  0.029110  NaN
20-30%  36.392329  0.139558  0.021874   0.870975  0.046391  NaN
30-40%  52.908639  0.192846  0.024251   0.820481  0.062490  NaN
40-50%  80.976875  0.248025  0.026053   0.765624  0.078790  NaN
total   14.276092  0.134199  0.074044   0.875066  0.083600  0.865561
...
[2021-11-10 23:42:07,588][saicinpainting.training.trainers.base][INFO] - Extra val random_thick_512 metrics after epoch #0, total 24999 iterations:
        fid        lpips                ssim                ssim_fid100_f1
        mean       mean      std        mean      std       mean
0-10%   8.164209   0.028028  0.016255   0.974709  0.019556  NaN
10-20%  25.238185  0.087229  0.020829   0.918119  0.034653  NaN
20-30%  42.584494  0.146621  0.023015   0.860249  0.045562  NaN
30-40%  63.517448  0.206390  0.028195   0.805664  0.063081  NaN
40-50%  83.640965  0.270336  0.032390   0.744625  0.078762  NaN
total   16.707589  0.148648  0.086964   0.859751  0.094933  0.845626
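For readers puzzling over the row labels: the 0-10% through 40-50% rows appear to group validation samples by the fraction of the image covered by the mask, with mean/std computed per bucket (and NaN where a metric such as ssim_fid100_f1 is only defined over the whole set). A minimal sketch of that kind of aggregation, with illustrative function and variable names that are not from the LaMa codebase:

```python
import math
import numpy as np

def bucket_stats(mask_ratios, values, edges=(0.0, 0.1, 0.2, 0.3, 0.4, 0.5)):
    """Group per-sample metric values by masked-area fraction and
    report (mean, std) per bucket; (NaN, NaN) when a bucket is empty."""
    rows = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bucket = [v for r, v in zip(mask_ratios, values) if lo <= r < hi]
        label = f"{int(lo * 100)}-{int(hi * 100)}%"
        if in_bucket:
            rows[label] = (float(np.mean(in_bucket)), float(np.std(in_bucket)))
        else:
            rows[label] = (math.nan, math.nan)
    return rows

# Toy example: five samples with masked-area ratios and per-sample LPIPS.
ratios = [0.05, 0.08, 0.15, 0.25, 0.33]
lpips  = [0.03, 0.04, 0.08, 0.14, 0.19]
stats = bucket_stats(ratios, lpips)
print(stats["0-10%"])   # mean/std of LPIPS over the two smallest-mask samples
print(stats["40-50%"])  # no sample falls in this bucket, so (nan, nan)
```

The "total" row would then simply be the same statistics computed over all samples at once.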
Then, for a trained model:
# To achieve same level of metric values as in paper, you need
# to sample previously unseen 30k images and generate masks for them
bash fetch_data/places_standard_evaluation_prepare_data.sh
....
Thanks for your great work and reply. I really appreciate it!
@cohimame Thanks for your answer. A very trivial question: the FID in your log is about 16.707589, but in the paper the FID is much smaller.
What is the difference? Thank you.
@cohimame, the same question here. Also, why are the FID results for the other methods low as well? I notice that the FID results reported in other papers are not that low. Did you retrain those models, or what exact procedure do you use for computing FID and LPIPS?
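One plausible explanation, consistent with the README snippet quoted above, is the evaluation protocol rather than the model: the numbers logged during training come from a 2000-image val set, while the paper's numbers use the 30k-image evaluation that places_standard_evaluation_prepare_data.sh prepares, and FID estimates are biased upward on small sample sets. For reference, FID is the Fréchet distance between Gaussian fits to Inception feature statistics; a minimal numpy/scipy sketch (not the repo's implementation):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid_from_stats(mu1, sigma1, mu2, sigma2):
    """FID between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2))."""
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary noise
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Sanity check: identical feature statistics give FID ~ 0. In practice the
# statistics are estimated from samples, so a 2000-image set yields a
# noisier covariance estimate (and typically a higher FID) than a 30k set.
rng = np.random.default_rng(0)
feats = rng.standard_normal((2000, 8))  # stand-in for Inception features
mu, sigma = feats.mean(axis=0), np.cov(feats, rowvar=False)
fid_self = fid_from_stats(mu, sigma, mu, sigma)
```

Because of this sample-size sensitivity, FID numbers are only comparable when computed on the same reference set with the same protocol.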
Hi! Did you solve this? Why are the FID results so high? Looking forward to your reply! You can contact me on QQ: 3559640395.
I appreciate your efforts and have a few inquiries. Could you assist me with the following questions?
- On what metrics is the standard deviation based?
- What causes the presence of NaN values?
- Regarding the first column being the percentage range, could you explain what these percentages represent?