Comments (13)
@showmin An example of how to pass the `sample_fn` is here. The `sample_fn` should call `model.sample()` and do any pre/post processing to convert the model samples into images. In the example above, we first call `model.sample()` to generate the raw samples, then binarize the output to 0 or 1 since we're generating binarized MNIST images. For ImageGPT, I believe the `sample_fn` should be as straightforward as passing `sample_fn=model.sample`.
Would you please add a simple example in the readme?
Good point, I will update the readme with an example of how to sample from a trained model.
from pytorch-generative.
Yes, like I said above, it only works for new checkpoints unfortunately. For old checkpoints you'd need to manually use `model.sample`.
@EugenHotaj Thanks. It works without any other issues.
@showmin I thought a bit more about the `sample_fn` and decided to remove it altogether because it's confusing. You should now no longer need to pass in `sample_fn` and can just set `sample_epochs` on the `Trainer`. The trainer will directly call `model.sample()`.
Hopefully this is a lot easier. Let me know if you run into issues :) .
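A rough usage sketch of the new behavior might look like the following. The constructor arguments other than `sample_epochs` are assumptions here; check the repo's `trainer.py` for the actual `Trainer` signature.

```python
from pytorch_generative import trainer

# Hypothetical sketch: argument names other than sample_epochs may differ.
model_trainer = trainer.Trainer(
    model=model,
    loss_fn=loss_fn,
    optimizer=optimizer,
    train_loader=train_loader,
    eval_loader=eval_loader,
    sample_epochs=1,  # the trainer calls model.sample() every epoch
    log_dir="/tmp/run",
)
model_trainer.interleaved_train_and_eval(n_epochs=10)
```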
@EugenHotaj Thanks for the reply. That's much easier for me. However, now I have to wait for the result after training one epoch, and I cannot restore an epoch I've already trained and then generate samples from it. It would be perfect if I could generate samples from a trained epoch.
@showmin If you want to generate samples at a particular epoch, the easiest thing to do is manually call `model.sample()` after you've loaded the checkpoint. The samples generated in the `Trainer` are mostly meant to track progress during training.
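The manual load-then-sample workflow can be sketched with plain PyTorch. `TinyModel` stands in for any model exposing a `sample()` method (e.g. ImageGPT); the checkpoint path and the `{"model": ...}` dict layout are assumptions for illustration.

```python
import torch
from torch import nn

class TinyModel(nn.Module):
    # Stand-in for a generative model with a sample() method.
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    @torch.no_grad()
    def sample(self, n_samples):
        # Map random noise through the model to produce "samples".
        return torch.sigmoid(self.linear(torch.randn(n_samples, 4)))

# Save a checkpoint, restore it into a fresh model, then sample manually.
model = TinyModel()
torch.save({"model": model.state_dict()}, "/tmp/trainer_state.ckpt")

restored = TinyModel()
checkpoint = torch.load("/tmp/trainer_state.ckpt")
restored.load_state_dict(checkpoint["model"])
samples = restored.sample(8)
```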
I tried to add `self.sample_one_batch()` after this code (pytorch-generative/pytorch_generative/trainer.py, lines 230 to 232 in f9de41c) so that I can generate samples after restoring the latest epoch. But it raised an error:
```
Found 2 saved checkpoints.
Restoring trainer state from checkpoint trainer_state_2.ckpt.
Falied to sample from the model: 'ImageGPT' object has no attribute '_c'
```
The issue is that we weren't saving some variables during checkpointing. I just landed 406a7be which should fix this issue for new checkpoints only.
I still get the same error message. Does it only work for new epochs trained after I check out the latest code?
After training a new epoch, it errors here:

```
Found 3 saved checkpoints.
Restoring trainer state from checkpoint trainer_state_3.ckpt.
Traceback (most recent call last):
  File "train.py", line 76, in <module>
    main(args)
  File "train.py", line 44, in main
    MODEL_DICT[args.model].reproduce(args.epochs, args.batch_size, args.logdir)
  File "D:\Jupyter_Home\pytorch-generative\pytorch_generative\models\autoregressive\image_gpt.py", line 175, in reproduce
    model_trainer.interleaved_train_and_eval(n_epochs)
  File "D:\Jupyter_Home\pytorch-generative\pytorch_generative\trainer.py", line 232, in interleaved_train_and_eval
    self.restore_checkpoint()
  File "D:\Jupyter_Home\pytorch-generative\pytorch_generative\trainer.py", line 134, in restore_checkpoint
    self.model.load_state_dict(checkpoint["model"])
  File "C:\Users\w00665547\Anaconda3\envs\env_ml\lib\site-packages\torch\nn\modules\module.py", line 1671, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ImageGPT:
    Unexpected key(s) in state_dict: "_c", "_h", "_w".
```
It seems the error was caused by calling `register_buffer()` in the call method in base.py. I tried moving those calls into the `__init__` method, and that removed the error. If this is the root cause, you would probably need to provide an input image (or its shape) when creating the object.
@showmin busy week, so I just had a chance to look at this. You're right, the issue is that we create some dynamic buffers during `forward` which won't be present on a newly initialized model, causing the checkpoint loading to fail. I just pushed 449795f which should fix the issue. You should be able to load any existing checkpoints now.
Let me know if you're still seeing issues.
Great! Going to close this issue out, please feel free to open again if needed.