In NCP-VAE and VAEBM, we trained new NVAEs from scratch using a Gaussian image decoder (i.e., p(x|z)). This was primarily because VAEBM needs to backpropagate through generated images in the decoder, which is easy to formulate for a Gaussian decoder using the reparameterization trick. NCP-VAE did not need this decoder type, as it is formulated entirely in the latent space. At the time, however, we were not aware of the implications of using a Gaussian decoder.
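To illustrate the point about backpropagation, here is a minimal NumPy sketch of the reparameterization trick for a Gaussian decoder. The function names and toy shapes are my own for illustration; in an autodiff framework such as PyTorch, the same expression is differentiable with respect to the decoder outputs `mu` and `log_sigma`, which is what lets VAEBM backpropagate through generated images:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterized_sample(mu, log_sigma, eps=None):
    """Draw x = mu + sigma * eps with eps ~ N(0, I).

    Instead of sampling x ~ N(mu, sigma^2) directly, the sample is written
    as a deterministic function of (mu, sigma) plus independent noise eps,
    so gradients can flow from x back into the decoder parameters.
    """
    sigma = np.exp(log_sigma)
    if eps is None:
        eps = rng.standard_normal(np.shape(mu))
    return mu + sigma * eps

# Toy "image": 2x2 pixels, 3 RGB channels, each predicted independently.
mu = np.zeros((2, 2, 3))
log_sigma = np.full((2, 2, 3), -1.0)
x = reparameterized_sample(mu, log_sigma)
```

Note that each output dimension here is an independent Gaussian, which is exactly the independence assumption across RGB channels discussed below.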
In the original NVAE paper, and later in LSGM, we used the discretized logistic mixture distribution in the decoder. You can read about this distribution in this paper. When writing the LSGM paper, we went back and computed FID for the original publicly available NVAE checkpoints, and we were surprised to see that they obtain a lower FID (29.76) than the NVAEs trained for NCP-VAE and VAEBM (~40).
Here is why we think the FID score gets better with the discretized logistic mixture: this decoder is a better statistical model for representing pixel intensities in an image, and it forms simple conditional dependencies between the RGB channels. In contrast, the Gaussian decoder is a simpler model that predicts the RGB channels independently. Our experiments show that the discretized logistic mixture requires encoding less information in the latent space to reconstruct input images, which in turn translates to fewer holes in the prior distribution. Because of this, the FID score appears to improve with this decoder.
I hope this clarifies the confusion. If you have any further questions, please let me know here.
from nvae.