lucidrains / unet-stylegan2
A Pytorch implementation of Stylegan2 with UNet Discriminator
License: MIT License
Hi,
Do you have the pre-trained model for the images shown on the project's main page? If so, could you please share a copy?
Thanks
Hi, I am not able to resume training from checkpoints; I noticed that some people have this issue on your stylegan2-pytorch repo as well. The error is a key mismatch in the discriminator module, even though I haven't changed anything. The checkpoint size is 286 MB for the CelebA dataset at image size 256. Maybe the checkpoints are not being saved properly? Can you confirm? A portion of the error message is:
RuntimeError: Error(s) in loading state_dict for StyleGAN2:
size mismatch for D.down_blocks.0.conv_res.weight: copying a param with shape torch.Size([32, 3, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 3, 1, 1]).
size mismatch for D.down_blocks.0.conv_res.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for D.down_blocks.0.net.0.weight: copying a param with shape torch.Size([32, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3]).
size mismatch for D.down_blocks.0.net.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
The error message is pretty self-explanatory, but the problem is I didn't change anything.
The other problem is that the model didn't save a '.config.json' file.
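A shape mismatch like this usually means the checkpoint was saved under a different `network_capacity` than the current run is using (the default is 16, and the doubled channel counts in the error suggest the checkpoint used 32). A minimal, framework-free sketch of diffing parameter shapes before calling `load_state_dict` — the key names are taken from the error above, and the helper name is hypothetical; in practice you would read the shapes from `torch.load(path)` and `model.state_dict()`:

```python
# Shapes copied from the error message above; stand-ins for the real
# checkpoint and model state_dicts.
saved_shapes = {
    "D.down_blocks.0.conv_res.weight": (32, 3, 1, 1),
    "D.down_blocks.0.conv_res.bias": (32,),
}
model_shapes = {
    "D.down_blocks.0.conv_res.weight": (16, 3, 1, 1),
    "D.down_blocks.0.conv_res.bias": (16,),
}

def shape_mismatches(saved, current):
    """Return the keys whose shapes differ between checkpoint and model."""
    return [k for k in saved if k in current and saved[k] != current[k]]

for key in shape_mismatches(saved_shapes, model_shapes):
    print(f"{key}: checkpoint {saved_shapes[key]} vs model {model_shapes[key]}")
```

If every mismatched key shows exactly doubled (or halved) channel counts, re-running with the matching `--network_capacity` should let the checkpoint load.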
Has anyone else tried running the U-Net discriminator in mixed precision mode? After a few iterations I get very large output values and a NaN loss.
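One common stopgap while debugging this is to skip the optimizer step whenever the loss goes non-finite, so NaNs never reach the weights. A minimal sketch — `safe_step` and the callback are hypothetical stand-ins for one training iteration, not part of this repo:

```python
import math

def safe_step(loss_value, step_fn):
    """Run step_fn (the optimizer update) only if loss_value is finite.

    Returns True if the update ran, False if it was skipped.
    """
    if not math.isfinite(loss_value):
        return False  # skip the update; keep the previous weights
    step_fn()
    return True
```

This won't fix the underlying overflow, but it tells you which iteration first produces the non-finite loss, which helps isolate whether it comes from the generator or the U-Net discriminator head.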
Hi,
Thanks for your excellent work. I want to ask whether you have the original official code for the U-Net GAN, because I couldn't find a GitHub link in their paper and cannot find the repository on GitHub.
Thanks
I installed unet_stylegan2 from pip, and I cannot generate an interpolation:
unet_stylegan2 --data '$DATASET' --results_dir '$DATASET_SG2' --models_dir '$DATASET_SG2' --name model-128x128 --generate_interpolation
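One likely culprit: the single quotes around `$DATASET` and `$DATASET_SG2` prevent shell variable expansion, so the program receives the literal string `$DATASET` as a path. A small demonstration, assuming a POSIX shell and placeholder paths:

```shell
# Placeholder paths standing in for the real dataset locations.
DATASET=./data
DATASET_SG2=./results

# Single quotes pass the literal text through; double quotes expand variables.
echo '--data $DATASET'    # prints: --data $DATASET
echo "--data $DATASET"    # prints: --data ./data
```

With double quotes, the original command becomes: `unet_stylegan2 --data "$DATASET" --results_dir "$DATASET_SG2" --models_dir "$DATASET_SG2" --name model-128x128 --generate_interpolation`.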
I'm curious whether you have any thoughts on using U2NET instead of U-Net for the discriminator?
Hi! First off, thank you for being so engaged and active with folks' issues.
I'm training on a dataset of 3k Japanese flag logos, plus their rotations of 90, 180, and 270 degrees for 12k total images.
I generally experience very quick mode collapse, often within the first 5k iterations. The one result that it does produce is pretty nice, though!
unet_stylegan2 --data jpegs/ --attn-layers [1,2]
39k iterations:
Increasing the batch size and gradient-accumulate-every has reduced it to partial mode collapse. The catch is that I'm training on Colab, and this can be unworkably slow depending on the GPU (30 seconds per iteration):
unet_stylegan2 --data jpegs/ --batch-size 32 --gradient-accumulate-every 8 --cl-reg --attn-layers [1,2]
Do you have any tuning suggestions for increasing speed and avoiding mode collapse? The results at 24k (above) are promising, but definitely need a lot more refining!
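One alternative to quadrupling the dataset on disk is to apply the quarter-turn rotations at load time, so each sampled image gets a random 0/90/180/270-degree rotation. A minimal, framework-free sketch (with torchvision you would wrap equivalent logic in a `transforms.Lambda`; the function names here are hypothetical):

```python
import random

def rot90(img):
    """Rotate a row-major 2D grid 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def random_quarter_turn(img):
    """Apply 0-3 quarter turns, chosen uniformly at random per sample."""
    for _ in range(random.randrange(4)):
        img = rot90(img)
    return img
```

This keeps the on-disk dataset at 3k images while still presenting all four orientations to the discriminator, and it avoids the exact-duplicate copies that can make memorization (and collapse onto a single mode) easier.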
I just tried to train a model on a 3.5K-face dataset, but the results seem strange.
My command was:
unet_stylegan2 --batch_size 4 --network_capacity 32 --gradient_accumulate_every 8 --aug_prob 0.25 --attn-layers [1,2]
This is at the 14th epoch (iteration 14000).
Any suggestions to improve this, or should I just wait a bit longer?
At this point I had better results with regular stylegan2.
Hi, first thanks for writing this! I'm trying to replicate your flowers results within colab.
I'm getting mode collapse with the flowers dataset: http://www.robots.ox.ac.uk/~vgg/data/flowers/102/102flowers.tgz
sample image:
https://drive.google.com/file/d/12vbFGqi8Rj3CLYYRYqMsQUezS-NgcgEC/view?usp=sharing
I am getting crazy numbers for D and G loss, but there was an abrupt change around 30k steps.
all the output values are at the bottom of this https://gist.github.com/foobar8675/affbb99eead9cf61dffc396bf4c8a6ea
I am using the default parameters from train_from_folder(), but I am definitely missing something. I was wondering if you could share what input parameters you are using for the flowers dataset?
(I copied and pasted the project into colab)
https://github.com/foobar8675/ml_experiments/blob/47c04a05e3ec9d9848d14256d41d99318b350742/stylegan2_pytorch.ipynb
Do you have any suggestions?
Hello, would it be possible to use PyTorch 1.6.0's native AMP for mixed-precision training instead of APEX?
It would be easier to use, and I personally haven't managed to get APEX working.
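For reference, a minimal sketch of a native-AMP training step with `torch.cuda.amp` (available since PyTorch 1.6). The toy model and optimizer are stand-ins, not this repo's training loop, and the scaler/autocast are disabled when CUDA is unavailable so the snippet runs anywhere:

```python
import torch

use_amp = torch.cuda.is_available()
device = "cuda" if use_amp else "cpu"

model = torch.nn.Linear(4, 1).to(device)          # toy stand-in model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x = torch.randn(8, 4, device=device)
target = torch.zeros(8, 1, device=device)

opt.zero_grad()
with torch.cuda.amp.autocast(enabled=use_amp):    # fp16 forward pass on CUDA
    loss = torch.nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()  # scaled backward pass
scaler.step(opt)               # unscales grads; skips the step on inf/nan
scaler.update()                # adjusts the loss scale for the next step
```

Unlike APEX, this needs no separate install or model rewrapping, and `GradScaler.step` already skips updates when gradients overflow, which also helps with the NaN-loss reports above.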