ritheshkumar95 / pytorch-vqvae
Vector Quantized VAEs - PyTorch Implementation
Hello, thanks for your work.
I am running into some issues when training your models, even though I followed your instructions.
First, I think you may have forgotten to add the '--dataset' argument in both of your commands.
Second, I think you forgot to import datasets from torchvision in pixelcnn_prior.py.
Finally, running:
python3 pixelcnn_prior.py --data-folder /tmp/miniimagenet --output-folder models/vqvae --dataset mnist
results in:
AttributeError: 'MNIST' object has no attribute '_label_encoder'
I have the same issue with the CIFAR dataset.
The PixelCNN learns to model the prior q(z), in both the paper and the code. For any given classes/labels, the PixelCNN should model their prior q(z), as shown in the code (Line 262 in 8d123c0). I first generate the indices for some given classes as the codes (Line 262 and Line 142 in 8d123c0).
Can we evaluate the PixelCNN based on the generated images? How can I get realistic images from the prior generated by the PixelCNN?
Best wishes!
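A minimal sketch of one way to do this, assuming the usual two-stage pipeline: sample a grid of discrete latent indices from the PixelCNN autoregressively, then map the indices through the codebook and the VQ-VAE decoder. The method names prior.generate and vqvae.decode are assumptions about a generic implementation, not necessarily this repo's exact API.

import torch

@torch.no_grad()
def sample_images(prior, vqvae, label, shape=(8, 8), batch_size=16):
    # Autoregressively sample a (batch, 8, 8) map of codebook indices
    # conditioned on the class label (API names are hypothetical).
    latents = prior.generate(label, shape=shape, batch_size=batch_size)
    # Decode the index map back to pixel space through the codebook
    # embedding and the VQ-VAE decoder.
    return vqvae.decode(latents)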
I found that ctx.needs_input_grad[1] is False during VQ-VAE training. Is this correct, and does it mean the codebook embedding is not updated during training?
Line 53 in 8d123c0
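Worth noting: needs_input_grad[1] being False does not by itself mean the codebook is frozen. A common VQ-VAE pattern passes a detached copy of the codebook into the straight-through Function and updates the real weights through the separate VQ loss term instead. A small self-contained probe (my own sketch, not the repo's code) showing how detaching an argument flips the flag:

import torch

class Probe(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inputs, codebook):
        return inputs.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # needs_input_grad mirrors requires_grad of each forward input
        print(ctx.needs_input_grad)
        return grad_output, None

z_e_x = torch.randn(8, 64, requires_grad=True)
codebook = torch.randn(512, 64, requires_grad=True)

Probe.apply(z_e_x, codebook).sum().backward()           # prints (True, True)
Probe.apply(z_e_x, codebook.detach()).sum().backward()  # prints (True, False)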
Thanks for your great work! I can't figure out the difference between the variables z_q_x_st and z_q_x in forward() of class VectorQuantizedVAE.
I debugged the code and found that both z_q_x_st and z_q_x require gradients, and that the check (z_q_x_st == z_q_x) is always true.
Does anyone know the difference between them?
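One reading of the intent (shown with the common detach idiom rather than this repo's custom autograd Function, so the names here are illustrative): the two tensors hold identical values but route gradients differently. z_q_x_st lets gradients skip the non-differentiable quantization and reach the encoder, while z_q_x carries gradients into the codebook for the VQ loss.

import torch

z_e_x = torch.randn(4, 8, requires_grad=True)         # encoder output
codebook = torch.randn(16, 8, requires_grad=True)

indices = torch.cdist(z_e_x, codebook).argmin(dim=1)  # nearest code per vector
z_q_x = codebook[indices]                             # grad flows to the codebook

# Identical values, but the gradient bypasses the quantization step and
# flows back to z_e_x instead of the codebook:
z_q_x_st = z_e_x + (z_q_x - z_e_x).detach()
print(torch.allclose(z_q_x_st, z_q_x))                # True

So the equality check always holding is expected; the difference only shows up in the backward pass.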
Can you please explain how you are computing the distance between the codebook and the inputs? In functions.py, you are using this line:
distances = torch.addmm(codebook_sqr + inputs_sqr, inputs_flatten, codebook.t(), alpha=-2.0, beta=1.0)
I am unable to understand how this gives the Euclidean distance between the inputs and the codebook.
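The line is the expanded squared Euclidean distance ||x - e||^2 = ||x||^2 + ||e||^2 - 2 * x . e: torch.addmm(input, mat1, mat2, beta=1, alpha=-2) computes beta * input + alpha * (mat1 @ mat2), so the broadcasted sum of squared norms supplies the first two terms and the matrix product supplies the cross term. A small self-contained check (the sizes are my own example):

import torch

N, K, D = 5, 11, 7                            # N inputs, K codes, dimension D
inputs_flatten = torch.randn(N, D)
codebook = torch.randn(K, D)

inputs_sqr = torch.sum(inputs_flatten ** 2, dim=1, keepdim=True)  # (N, 1)
codebook_sqr = torch.sum(codebook ** 2, dim=1)                    # (K,)

# 1 * (||x||^2 + ||e||^2) + (-2) * (x @ e^T), broadcast to (N, K)
distances = torch.addmm(codebook_sqr + inputs_sqr,
                        inputs_flatten, codebook.t(),
                        alpha=-2.0, beta=1.0)

reference = torch.cdist(inputs_flatten, codebook) ** 2   # explicit pairwise distances
print(torch.allclose(distances, reference, atol=1e-4))   # True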
I want to understand this line of the loss code:
log_px = nll.mean().item() - np.log(128) + kl_d.item()
Is the 128 in np.log(128) the value of Z_DIM?
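One plausible reading (an assumption on my part, not confirmed from the repo): the 128 is not Z_DIM. If pixels in {0, ..., 255} are rescaled to [-1, 1], each discrete value occupies a bin of width 2/256 = 1/128, and converting the model's continuous log-density into a discrete log-likelihood subtracts the log of the inverse bin width, i.e. log(128), per dimension.

import numpy as np

# Bin width after rescaling 256 pixel levels to the interval [-1, 1]
bin_width = 2.0 / 256.0
print(np.log(1.0 / bin_width) == np.log(128))  # True: the constants coincide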
How do I train on CIFAR?
Hi,
I have trained the VQ-VAE network on my own dataset, comprising 10,000 images of 64×64 pixels without any labels. In order to train the PixelCNN network, I faked some labels like this:
label_set = torch.zeros((10000, 1), dtype=torch.int64)
However, the shape of my faked labels does not seem to fit the code. In modules.py, GatedMaskedConv2d.forward contains the line out_v = self.gate(h_vert + h[:, :, None, None]), where h is the label. This way, the shape of h_vert is (batch, 2×dim, 16, 16), but the shape of h is (batch, 1, 2×dim), so the addition fails.
Can anyone tell me how to deal with the labels?
Thanks.
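A guess at the fix (an assumption based on the shapes described above, not a confirmed answer): the label embedding appears to expect a 1-D LongTensor of class indices with shape (batch,), so that the embedded h comes out as (batch, 2×dim) and h[:, :, None, None] broadcasts over the spatial dimensions of h_vert. Dropping the extra axis would look like:

import torch

# One dummy class index per image: shape (10000,) rather than (10000, 1)
label_set = torch.zeros(10000, dtype=torch.int64)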
Dear ritheshkumar95,
We want to express our gratitude for your implementation of the PyTorch VQ-VAE. Thanks to your work, we were able to develop and publish our own model, TVQ-VAE, which has been accepted for presentation at the AAAI-24 conference (https://arxiv.org/abs/2312.11532).
We would like to request your permission to publish our implementation code, which was inspired by your work. Rest assured, we will properly cite your repository in our implementation as a reference.
Thank you for your contribution and support.
Best regards,
Nice implementation of the VQ straight-through function!
However, when looking at the autograd graph, there is an edge that breaks the separation between the gradients of the reconstruction loss and the VQ loss. As a result, the reconstruction loss also updates the embedding, which should not happen. I tried to figure out why this happens, but my understanding of PyTorch isn't that thorough. Do you have any idea?
I marked the edge here.
Hi, thanks for your implementation!
I'm now trying to implement the audio experiments of VQ-VAE, but when trying to imitate your code there is something that confuses me: the use of nn.Embedding for VQEmbedding. My code is:

class VQEmbedding(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(hp.K, hp.D)
        self.embedding.weight.data.uniform_(-1. / hp.K, 1. / hp.K)

    def forward(self, z_e_x):
        # z_e_x - (B, D, T)
        # emb - (K, D)
        emb = self.embedding.weight
        dists = torch.pow(z_e_x.unsqueeze(1) - emb[None, :, :, None], 2)
        z_q_x = dists.min(1)[1].float()
        return z_q_x
So my z_q_x and z_e_x have the same shape, say (1, 256, 16000) (Batch, Dim, Length). But when I train the model by computing the .grad:
optimizer.zero_grad()
x_recon, z_e_x, z_q_x = model(qt_var, speaker_var)
z_q_x.retain_grad()
loss_recon = cross_entropy_loss(x_recon.view(hp.BATCH_SIZE, hp.Q, -1), quantized_audio.view(hp.BATCH_SIZE, -1).long())
loss_recon.backward(retain_graph=True)
# Straight-through estimator
z_e_x.backward(z_q_x.grad, retain_graph=True)
An error occurs:
RuntimeError: grad can be implicitly created only for scalar outputs
It means my z_q_x does not have a grad. Actually, because I did some quantization work, my z_q_x and z_e_x are LongTensors; is this the reason there is no grad?
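For reference, integer tensors can never require grad in PyTorch, so a LongTensor of indices will always stop the backward pass. A minimal sketch (my own, under the shapes described above, not this repo's code) of a quantizer that returns the embedded float codewords, through which gradients can flow, rather than the raw indices:

import torch
import torch.nn as nn

K, D, T = 512, 256, 16000
embedding = nn.Embedding(K, D)
z_e_x = torch.randn(1, D, T, requires_grad=True)      # (B, D, T)

# Pairwise distances between each timestep's vector and every codeword
dists = torch.cdist(z_e_x.transpose(1, 2),            # (B, T, D)
                    embedding.weight.unsqueeze(0))    # (1, K, D) -> (B, T, K)
indices = dists.argmin(dim=-1)                        # (B, T), a LongTensor
z_q_x = embedding(indices).transpose(1, 2)            # (B, D, T), float

print(z_q_x.requires_grad)  # True: gradients can reach the codebook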
I am just wondering why you set shuffle=False for the train_loader and shuffle=True for the test_loader. Shouldn't it be the other way around?
Hello, and thanks for the code! I want to replicate the audio results from the paper, but the DeepMind repo does not have a VQ-VAE example for audio (see google-deepmind/sonnet#141), and the audio setup seems quite different from the CIFAR one:
We train a VQ-VAE where the encoder has 6 strided convolutions with stride 2 and window-size 4. This yields a latent space 64x smaller than the original waveform. The latents consist of one feature map and the discrete space is 512-dimensional.
Could you please include an example of using your code for audio?
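For concreteness, here is a minimal sketch of an encoder matching that description: six 1-D convolutions with stride 2 and window size 4, which shortens the waveform by 2^6 = 64×. The channel width and input channel count are my assumptions, not values from the paper.

import torch.nn as nn

class AudioEncoder(nn.Module):
    def __init__(self, in_channels=1, hidden_dim=256):
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(6):  # six strided convs: 2^6 = 64x downsampling
            layers += [nn.Conv1d(channels, hidden_dim,
                                 kernel_size=4, stride=2, padding=1),
                       nn.ReLU()]
            channels = hidden_dim
        self.net = nn.Sequential(*layers)

    def forward(self, x):   # x: (batch, in_channels, T)
        return self.net(x)  # (batch, hidden_dim, T // 64)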