
toward-controlled-generation-of-text-pytorch's Introduction

A PyTorch Implementation of "Toward Controlled Generation of Text"

This is a PyTorch implementation of the model proposed in the paper Toward Controlled Generation of Text, which aims to generate natural-language text with specified target attributes.


toward-controlled-generation-of-text-pytorch's Issues

Memory blows up when setting retain_graph = True

I followed the suggestion to set retain_graph to True, but when I run the code the memory blows up, even on GPUs. I believe we only need retain_graph = True on every backward call except the last; on the last call the graph should be freed, otherwise it consumes a lot of memory. Could you fix this?
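The pattern described above can be sketched as follows; the tensors and losses are toy stand-ins, not the repository's actual modules. The graph is retained on every backward call except the last, so the buffers are freed exactly once and memory does not accumulate:

```python
import torch

# Toy stand-in for a shared encoder output that feeds several losses;
# the names are illustrative, not taken from the repository.
x = torch.randn(4, 3, requires_grad=True)
shared = (x * 2).sum(dim=1)          # shared subgraph used by all losses

losses = [shared.sum(), (shared ** 2).sum(), (shared ** 3).sum()]

for i, loss in enumerate(losses):
    # Retain the graph for every backward except the last one, so the
    # buffers are kept only while they are still needed.
    loss.backward(retain_graph=(i < len(losses) - 1))

# x.grad now holds gradients accumulated from all three losses.
```

After the final `backward()` the graph's buffers are released, which is what prevents the memory growth described in the issue.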

How to create a demo?

Hi! Thanks for your work. I'm trying to train this model on lyrics and generate conditional text for a demo. How would I do something like the following? Thanks!

input_lyric = "Sky is high and ocean is blue."
output_lyric = some_function(input_lyric, latent_emotion = "sad")
print(output_lyric) # expect something like: 'Oh sad sky there is'
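The repository does not expose such a function, but once a decoder is trained, `some_function` could look roughly like the sketch below. Every size, module, and the emotion vocabulary here is a hypothetical placeholder, not the repo's API; a real demo would also first encode `input_lyric` to obtain the latent `z`, which this sketch stubs out with a random vector:

```python
import torch
import torch.nn as nn

# Hypothetical sizes and modules; the repository's real ones will differ.
vocab_size, emb_dim, hid_dim, z_dim = 100, 16, 32, 8
emotions = {"happy": 0, "sad": 1}          # hypothetical attribute vocabulary
c_dim = len(emotions)
BOS, EOS, max_len = 1, 2, 20

embed = nn.Embedding(vocab_size, emb_dim)
decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
init_h = nn.Linear(z_dim + c_dim, hid_dim)  # map [z; c] to the initial hidden state
out = nn.Linear(hid_dim, vocab_size)

def some_function(z, latent_emotion):
    """Greedy decoding from latent content z and an attribute code c."""
    c = torch.zeros(1, c_dim)
    c[0, emotions[latent_emotion]] = 1.0    # one-hot attribute code
    h = torch.tanh(init_h(torch.cat([z, c], dim=-1))).unsqueeze(0)
    tok, ids = torch.tensor([[BOS]]), []
    for _ in range(max_len):
        o, h = decoder(embed(tok), h)       # one decoding step
        tok = out(o[:, -1]).argmax(dim=-1, keepdim=True)
        if tok.item() == EOS:
            break
        ids.append(tok.item())
    return ids                              # token ids; map back to words with your vocab

z = torch.randn(1, z_dim)                   # stands in for encoding input_lyric
ids = some_function(z, "sad")
```

With untrained weights the output ids are meaningless; the point is only the control flow: condition the initial hidden state on `[z; c]`, then decode token by token.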

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

Epoch 0 batch 0's average language loss: 8.74623718262, latent loss: 0.00267713633366
l_attr_c loss: 0.693091034889, l_attr_z loss: 0.00270487181842
Traceback (most recent call last):
  File "train.py", line 296, in <module>
    train_vae_with_attr_loss(encoder, decoder, discriminator)
  File "train.py", line 270, in train_vae_with_attr_loss
    total_vae_loss.backward()
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 156, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py", line 91, in apply
    return self._forward_cls.backward(self, *args)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/blas.py", line 30, in backward
    matrix1, matrix2 = ctx.saved_variables
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
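This error is easy to reproduce outside the repository: calling `backward()` twice on the same graph fails because the first call frees the intermediate buffers. A minimal toy example, unrelated to the repo's actual losses:

```python
import torch

x = torch.ones(2, 2, requires_grad=True)
y = (x * x).sum()

y.backward()        # the first backward frees the graph's intermediate buffers
try:
    y.backward()    # a second backward over the same graph raises the error above
except RuntimeError as e:
    err = str(e)

# Passing retain_graph=True on the first call keeps the buffers alive:
z = (x * x).sum()
z.backward(retain_graph=True)
z.backward()        # now succeeds; gradients accumulate into x.grad
```

The fix for the repo is therefore to pass `retain_graph=True` on every `backward()` that is followed by another backward over the same graph.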

Is it possible to generate sentences using this repo?

Hello,
I've tried training the models using the code provided, but I haven't found any code for generating new sentences. I've been wondering whether anyone actually got to the point of generating sentences with the trained models, latent variables and all.
I know that the Generator needs to be used for that, but I'm confused about how it is trained and which function in train.py I need to use: main_alg or train_vae_with_attr_loss.

I would be grateful for some help, thanks!

Not enough memory

I encountered the following problem when running your code:

Traceback (most recent call last):
  File "/home/syrup274/toward-controlled-generation-of-text-pytorch-master/train.py", line 113, in <module>
    train_discriminator(discriminator)
  File "/home/syrup274/toward-controlled-generation-of-text-pytorch-master/train.py", line 107, in train_discriminator
    _, predicted = torch.max(discriminator(input_data).data, 1)
  File "/home/syrup274/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/syrup274/toward-controlled-generation-of-text-pytorch-master/Model/Modules.py", line 164, in forward
    emb_sentence = self.src_word_emb(input_sentence)
  File "/home/syrup274/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/syrup274/.local/lib/python3.5/site-packages/torch/nn/modules/sparse.py", line 94, in forward
    )(input, self.weight)
  File "/home/syrup274/.local/lib/python3.5/site-packages/torch/nn/_functions/thnn/sparse.py", line 53, in forward
    output = torch.index_select(weight, 0, indices.view(-1))
RuntimeError: $ Torch: not enough memory: you tried to allocate 13GB. Buy new RAM! at /b/wheel/pytorch-src/torch/lib/TH/THGeneral.c:270
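A back-of-the-envelope check helps interpret this error: the embedding lookup materialises one `emb_dim`-sized float32 row per index, so a 13 GB allocation implies an enormous number of indices in a single batch (the numbers below are illustrative, not the repository's actual dimensions):

```python
# Illustrative estimate: how many embedding lookups does 13 GB imply?
emb_dim = 300                         # hypothetical embedding dimension
bytes_per_float = 4                   # float32
alloc = 13 * 1024**3                  # the 13 GB from the error message
n_indices = alloc // (emb_dim * bytes_per_float)
# n_indices is on the order of 11-12 million indices in one batch.
```

That magnitude usually points to an oversized batch (batch_size × sequence length), or input ids accidentally flattened or tiled before the embedding layer, rather than a machine that simply needs more RAM.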
