conorlazarou / pokegan
GAN for generating pokemon sprites
License: MIT License
@ConorLazarou
I'm a newbie with GANs.
I wrote this code and tried to generate something:
import torch
from aegan import Generator as G
import torchvision.utils as vutils
device = torch.device('cpu')
netG = G()
netG.load_state_dict(torch.load('trained_generator_weights.pt', map_location=device))
fake = netG()
print(fake)
vutils.save_image(fake.data, 'testfake.png', normalize=True)
But I'm getting this error:
Traceback (most recent call last):
  File ".\generate.py", line 7, in <module>
    netG.load_state_dict(torch.load('trained_generator_weights.pt', map_location=device))
  File "C:\Users\Akila\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Generator:
    size mismatch for projection.0.0.weight: copying a param with shape torch.Size([8, 16]) from checkpoint, the shape in current model is torch.Size([8, 8]).
    size mismatch for projection.1.0.weight: copying a param with shape torch.Size([8, 24]) from checkpoint, the shape in current model is torch.Size([8, 16]).
    size mismatch for projection.2.0.weight: copying a param with shape torch.Size([8, 32]) from checkpoint, the shape in current model is torch.Size([8, 24]).
    size mismatch for projection.3.0.weight: copying a param with shape torch.Size([8, 40]) from checkpoint, the shape in current model is torch.Size([8, 32]).
    size mismatch for projection.4.0.weight: copying a param with shape torch.Size([8, 48]) from checkpoint, the shape in current model is torch.Size([8, 40]).
    size mismatch for projection.5.0.weight: copying a param with shape torch.Size([8, 56]) from checkpoint, the shape in current model is torch.Size([8, 48]).
    size mismatch for projection.6.0.weight: copying a param with shape torch.Size([8, 64]) from checkpoint, the shape in current model is torch.Size([8, 56]).
    size mismatch for colourspace_r.0.weight: copying a param with shape torch.Size([128, 72]) from checkpoint, the shape in current model is torch.Size([128, 64]).
    size mismatch for colourspace_g.0.weight: copying a param with shape torch.Size([128, 72]) from checkpoint, the shape in current model is torch.Size([128, 64]).
    size mismatch for colourspace_b.0.weight: copying a param with shape torch.Size([128, 72]) from checkpoint, the shape in current model is torch.Size([128, 64]).
    size mismatch for seed.0.weight: copying a param with shape torch.Size([4608, 72]) from checkpoint, the shape in current model is torch.Size([4608, 64]).
    size mismatch for conv.1.1.weight: copying a param with shape torch.Size([256, 146, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 144, 4, 4]).
    size mismatch for conv.2.1.weight: copying a param with shape torch.Size([256, 82, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 80, 4, 4]).
    size mismatch for conv.3.1.weight: copying a param with shape torch.Size([256, 82, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 80, 4, 4]).
    size mismatch for conv.4.1.weight: copying a param with shape torch.Size([64, 82, 4, 4]) from checkpoint, the shape in current model is torch.Size([64, 80, 4, 4]).
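Every mismatched dimension differs by a fixed amount, which suggests the checkpoint was trained with a different latent/width configuration than the current aegan.py. A quick way to list every mismatched parameter before calling load_state_dict (a sketch; shape_mismatches is a hypothetical helper, not part of the repo):

```python
def shape_mismatches(ckpt, model_sd):
    """Return the parameter names whose shapes differ between a loaded
    checkpoint and a freshly constructed model's state_dict()."""
    return [name for name, param in ckpt.items()
            if name in model_sd and tuple(param.shape) != tuple(model_sd[name].shape)]

# usage sketch:
# ckpt = torch.load('trained_generator_weights.pt', map_location='cpu')
# print(shape_mismatches(ckpt, Generator().state_dict()))
```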
Model loading from py36 is failing, somewhat as expected given that you mentioned you used py37.
Trying py37, there seems to be no matching torch 1.6.0 GPU distribution for Windows/py37.
Are you on Windows or Linux?
I'm fairly new to NNs in general, so forgive my ignorance.
I've dropped images into the sprites folder,
but when I run main.py I get this error:
Epoch 1; Elapsed time = 00:00:00s
Traceback (most recent call last):
  File "main.py", line 113, in <module>
    main()
  File "main.py", line 90, in main
    gan.train_epoch(max_steps=100)
  File "/home/michael/nft/PokeGAN/aegan.py", line 645, in train_epoch
    ldx_, ldz_ = self.train_step_discriminators(real_samples)
  File "/home/michael/nft/PokeGAN/aegan.py", line 605, in train_step_discriminators
    Z_hat = self.encoder(X)
  File "/home/michael/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/michael/nft/PokeGAN/aegan.py", line 357, in forward
    down = intermediate.view(-1, 6*6*512)
RuntimeError: shape '[-1, 18432]' is invalid for input of size 4194304
Thanks!
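A view(-1, 6*6*512) failure like this usually means the input images are not the spatial size the encoder expects, so the flattened feature map doesn't divide into 18432-element rows. One workaround, assuming the network expects 96x96 sprites (check the expected input size in aegan.py; both the folder name and size here are assumptions), is to resize everything before training:

```python
import os
from PIL import Image  # Pillow

def resize_sprites(folder, size=(96, 96)):
    """Resize every image in `folder` in place, converting to RGBA so the
    alpha channel survives. 96x96 is an assumption; check aegan.py."""
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        img = Image.open(path).convert('RGBA')
        img.resize(size, Image.LANCZOS).save(path)

# usage sketch: resize_sprites('sprites')
```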
Thanks for the quick answer to the prior issue, Lazarou. Marvelous code.
Any chance that you have the pre-trained model somewhere?
Epoch 199 results on py36 on my machine:
Within 200 epochs, I saw proper results forming.
However, I got a crash after that.
I'm wondering whether the crash happened because I tried to run training on the GPU while a separate task was using TensorFlow GPU compute on my machine.
Questions:
What version of python did you use? (I am on Py36)
Any chance that you have the saved model somewhere?
Potential problem:
I may be mistaken, but I had expected to see transparent regions around the early generated figures, even after only 2 epochs.
Instead, I see the warning "Palette images with Transparency expressed in bytes should be converted to RGBA images", and I also see images like the one below, with colour where I anticipated transparent regions to be.
Note that the plot is also the same:
In case you don't have time to answer, my plan is to dig into the PIL/Image code behind the warning and adjust the image-loading function to perform a proper RGBA conversion. These issues may also be arising from using an incompatible Python version with the main code.
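For what it's worth, the conversion shouldn't require touching PIL internals; converting palette-mode images to RGBA at load time is enough to silence the warning and preserve the alpha channel (a sketch; load_rgba is a hypothetical helper, not the repo's loader):

```python
from PIL import Image  # Pillow

def load_rgba(path):
    """Open an image and convert palette ('P') mode to RGBA so that
    per-index transparency becomes a real alpha channel."""
    img = Image.open(path)
    if img.mode != 'RGBA':
        img = img.convert('RGBA')
    return img
```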
//////////////////////////////
Edit: After some more epochs, things start to look okay. I would still like the trained model though, for my little computer has been through a lot :)
Hello, I admire your work here. I love GANs and Pokémon, and I've read your paper on AEGAN, and I have some questions. How should I properly resume training? Is it enough to save the generator weights, or does that make it too hard for the discriminators' training to converge? Should I save all four networks? Just the generator and the encoder? Any other variables? Also, how should I change the architecture to support 48x48 or 32x32 datasets to make training faster?
How did you come up with such a neat idea?
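On resuming: the safe answer for adversarial training is usually to checkpoint all four networks plus their optimizers, since the generator weights alone lose both the adversarial balance and the optimizer momentum. A generic sketch (the exact attribute names in aegan.py are assumptions):

```python
import torch

def save_checkpoint(modules, path):
    """modules: dict mapping a name to anything with state_dict()
    (the four networks and their optimizers)."""
    torch.save({name: m.state_dict() for name, m in modules.items()}, path)

def load_checkpoint(modules, path, map_location='cpu'):
    """Restore every module's state in place from a saved checkpoint."""
    ckpt = torch.load(path, map_location=map_location)
    for name, m in modules.items():
        m.load_state_dict(ckpt[name])

# usage sketch, attribute names assumed:
# save_checkpoint({'generator': gan.generator, 'encoder': gan.encoder, ...}, 'ckpt.pt')
```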
Really more of a question, but why do you use nn.ModuleList throughout the project instead of defining an nn.Sequential model or assigning the layers to self? Is this something prescribed by the paper or was this more of a stylistic choice?
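One plausible reason (I can't speak for the author): nn.ModuleList registers submodules for parameter collection and device moves but leaves forward() free-form, which matters when blocks need skip connections or extra inputs; nn.Sequential hard-wires a single chain. A minimal illustration:

```python
import torch
from torch import nn

class ResidualStack(nn.Module):
    """nn.ModuleList lets forward() do more than chain calls: here, residual sums."""
    def __init__(self, width=8, depth=3):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(width, width) for _ in range(depth))

    def forward(self, x):
        for block in self.blocks:
            x = x + block(x)  # a skip connection nn.Sequential can't express
        return x

# an nn.Sequential(*blocks) equivalent could only compute block3(block2(block1(x)))
```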