
Comments (4)

robogast commented on June 12, 2024

Hi! Answers to your questions:

  1. You cannot sample the decoder directly; you need to train an autoregressive prior (e.g. PixelCNN, PixelSNAIL, ViT, ..., maybe using a discrete denoising model would be cool...) on the embeddings obtained by putting your dataset through the encoder.
    You then sample your autoregressive model for embeddings and put those embeddings through the decoder (a rough sketch of this two-stage pipeline follows this list). See the original VQ-VAE paper: https://arxiv.org/pdf/1711.00937.pdf
    • calc_ssim_from_checkpoint -> I simply had not added SSIM as a metric to TensorBoard yet when I wrote this script, so this file can be ignored (or removed) now.
    • decode_embeddings.py -> db_path points to the embeddings generated by your autoregressive model, so you don't have them right now.
    • extract_embeddings.py -> yes, this file in principle takes your model + dataset and creates the embeddings which should be used as training input for your autoregressive model.
    • As a general note, these three files are scripts and not intended as library files, and should be treated as such (i.e. low quality control, lots of hardcoded values).
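
To make the data flow concrete, here is a minimal sketch of that two-stage pipeline. Only the model.encode unpacking pattern comes from this repo's scripts; build_prior_training_set, sample_from_prior, prior.sample, and model.decode_code are illustrative placeholder names and assumptions, not this repository's actual API.

import torch

# Stage 1: encode the dataset into discrete code indices
# (roughly what extract_embeddings.py does).
@torch.no_grad()
def build_prior_training_set(model, dataloader, device):
    model.eval()
    model.to(device)
    codes = []
    for sample, _ in dataloader:
        # model.encode returns per-level outputs; the last group after
        # transposing with zip(*) is the encoding indices.
        *_, encoding_idx = zip(*model.encode(sample.to(device)))
        codes.append(encoding_idx)  # one tuple of index tensors per batch
    return codes  # training input for the autoregressive prior

# Stage 2: sample code indices from the trained prior and decode them
# (roughly what decode_embeddings.py does with the generated embeddings).
# NOTE: prior.sample and model.decode_code are hypothetical names.
@torch.no_grad()
def sample_from_prior(model, prior, num_samples):
    sampled_idx = prior.sample(num_samples)
    return model.decode_code(sampled_idx)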

Nice to see that you're progressing :)


aksg87 commented on June 12, 2024

@robogast - Appreciate all of the information! Need to review the paper again :)

I look forward to trying the other scripts and posting how things go!


aksg87 commented on June 12, 2024

Hi @robogast

Your comments make much more sense now after reviewing the literature further :)

This is a nice overview from AI Epiphany!

https://www.youtube.com/watch?v=VZFVUrYcig0&t=1736s


aksg87 commented on June 12, 2024

Hi @robogast

I was trying to better understand encoding_idx. My understanding is that this is the last item returned for each of the 3 bottleneck layers? I'm curious why we throw the rest of the information away.

Thanks in advance!
-Akshay

import torch

def extract_samples(model, dataloader):
    # Encode every batch and yield only the discrete code indices; these are the
    # "embeddings" that serve as training input for the autoregressive prior.
    model.eval()
    model.to(GPU)  # GPU is presumably a torch.device defined elsewhere in the script

    with torch.no_grad():
        for sample, _ in dataloader:
            sample = sample.to(GPU)
            # model.encode returns per-level outputs; transposing with zip(*) groups
            # them, and the last group is the encoding indices (one tensor per
            # bottleneck level). All other outputs are discarded here.
            *_, encoding_idx = zip(*model.encode(sample))
            yield encoding_idx
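
For reference, here is a minimal sketch (continuing directly from the snippet above) of how I'm thinking the yielded encoding_idx could be collected and stored as training data for the prior; the output filename and torch.save usage are my own assumptions, not what extract_embeddings.py actually does:

# Collect the discrete code indices from every batch and store them so they can
# later be used as training input for the autoregressive prior.
all_codes = []
for encoding_idx in extract_samples(model, dataloader):
    # encoding_idx is a tuple holding one index tensor per bottleneck level
    all_codes.append([idx.cpu() for idx in encoding_idx])

# NOTE: the file name is an assumption for illustration only.
torch.save(all_codes, "encoding_indices.pt")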

