crepe's Issues

error running example code

When I run $ qlua main.lua, I get the following error message:

dyld: Library not loaded: @rpath/./libQtGui.4.dylib
Referenced from: /usr/local/bin/qlua
Reason: image not found
Trace/BPT trap: 5

My environment: OS X 10.10.5, Torch7, Lua 5.1.4, Xcode 6.2.

duplicate hyphen (minus?) in alphabet?

In train/config.lua, the alphabet contains two hyphen (minus) characters:
... 9-, ...
and
... +-= ...

Is this intentional? If not, does it affect the convnet size specs?
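
For reference, a quick Lua check for repeated characters. The alphabet string below is the one from the paper, which matches the two fragments quoted above, but treat it as an assumption about the exact contents of train/config.lua. Whether a repeat changes the convnet's input frame size depends on how the alphabet is turned into a char-to-index table; a table keyed by character silently collapses duplicates.

    -- scan the alphabet and report any character that occurs twice
    local alphabet = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+-=<>()[]{}"
    local seen = {}
    for i = 1, #alphabet do
       local c = alphabet:sub(i, i)
       if seen[c] then print(string.format("duplicate %q at position %d", c, i)) end
       seen[c] = true
    end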

Any pre-trained model?

Is the author or another researcher able to provide a pre-trained model? Thanks! Feel free to delete this thread if a general discussion like this is not welcome as an "issue".

Printing classes

I was told that I can use test.decision[i] to see the classes Crepe outputs.
Where would I print out test.decision[i]? In main.lua?
Also, is there a way to associate the i-th input with test.decision[i]?
Can I do the same for training?
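
A hedged sketch of what that could look like, assuming test.decision is a 1-D tensor of predicted class indices aligned with the order in which samples were fed (an assumption; the repository may store decisions differently):

    -- print each input index alongside its predicted class
    for i = 1, test.decision:size(1) do
       print(i, test.decision[i])
    end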

[Question] Where is batch Tensor initialized?

Sorry for the dumb question; I must have missed something, but I fail to see where the tensor used to store batch data is initialized in this function. I can see how the one "bit" corresponding to each character is set in stringToTensor, but don't you need to explicitly zero-initialize the tensor first?
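
For reference, the usual Torch pattern is to allocate the batch tensor once and zero it before each fill, so stale one-hot bits from the previous batch cannot survive. A minimal sketch (tensor sizes and axis order are assumptions, not the repository's exact code):

    local batch = torch.Tensor(batchSize, length, alphabetSize)  -- sizes assumed
    batch:zero()  -- explicit zero-fill before any bits are set
    -- stringToTensor would then set one entry to 1 per character position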

How to run this in CPU-only mode?

My GPU is not supported by CUDA. Also, is there an AWS AMI with this pre-installed? And what would be the expected training time for the DBpedia dataset on an AWS g2xlarge?
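
A hedged guess at the relevant switches in config.lua (the field names below are inferred from the startup log shown elsewhere in this tracker, so treat them as assumptions):

    -- run everything on CPU tensors instead of CUDA
    config.main.type = "torch.DoubleTensor"  -- instead of "torch.CudaTensor"
    config.main.device = nil                 -- skip cutorch.setDevice(...)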

Replicating on mxnet - too much memory for GPU

I'm having a bit of difficulty replicating this cool model using the mxnet package (instead of Torch). I'm not sure whether it's something to do with my implementation or with the mxnet package; however, it takes up far more than 3 GB of memory.

My version is here and below is a direct paste of the model:

Edit: fixed with updated params (the paste below reflects the corrected pooling strides)

    import mxnet as mx  # missing import added

    NUM_FILTERS = 256  # feature maps per conv layer, per the shape comments below
    NOUTPUT = 14       # number of output classes (assumed value; e.g. DBpedia has 14)

    input_x = mx.sym.Variable('data')  # placeholder for input
    input_y = mx.sym.Variable('softmax_label')  # placeholder for output
    #6 Convolutional layers
    #1. alphabet x 1014
    conv1 = mx.symbol.Convolution(
        data=input_x, kernel=(7, 69), num_filter=NUM_FILTERS)
    relu1 = mx.symbol.Activation(
        data=conv1, act_type="relu")
    pool1 = mx.symbol.Pooling(  # pooling stride 3, as in the paper; stride 1 blows up memory
        data=relu1, pool_type="max", kernel=(3, 1), stride=(3, 1))
    #2. 336 x 256
    conv2 = mx.symbol.Convolution(
        data=pool1, kernel=(7, 1), num_filter=NUM_FILTERS)
    relu2 = mx.symbol.Activation(
        data=conv2, act_type="relu")
    pool2 = mx.symbol.Pooling(  # stride 3, matching the 330 -> 110 shape comment
        data=relu2, pool_type="max", kernel=(3, 1), stride=(3, 1))
    #3. 110 x 256
    conv3 = mx.symbol.Convolution(
        data=pool2, kernel=(3, 1), num_filter=NUM_FILTERS)
    relu3 = mx.symbol.Activation(
        data=conv3, act_type="relu")
    #4. 108 x 256
    conv4 = mx.symbol.Convolution(
        data=relu3, kernel=(3, 1), num_filter=NUM_FILTERS)
    relu4 = mx.symbol.Activation(
        data=conv4, act_type="relu")
    #5. 106 x 256
    conv5 = mx.symbol.Convolution(
        data=relu4, kernel=(3, 1), num_filter=NUM_FILTERS)
    relu5 = mx.symbol.Activation(
        data=conv5, act_type="relu")
    #6. 104 x 256
    conv6 = mx.symbol.Convolution(
        data=relu5, kernel=(3, 1), num_filter=NUM_FILTERS)
    relu6 = mx.symbol.Activation(
        data=conv6, act_type="relu")
    pool6 = mx.symbol.Pooling(  # stride 3, matching the 104 -> 34 shape comment
        data=relu6, pool_type="max", kernel=(3, 1), stride=(3, 1))
    #34 x 256
    flatten = mx.symbol.Flatten(data=pool6)
    #3 Fully-connected layers
    #7.  8704
    fc1 = mx.symbol.FullyConnected(
        data=flatten, num_hidden=1024)
    act_fc1 = mx.symbol.Activation(
        data=fc1, act_type="relu")
    drop1 = mx.sym.Dropout(act_fc1, p=0.5)
    #8. 1024
    fc2 = mx.symbol.FullyConnected(
        data=drop1, num_hidden=1024)
    act_fc2 = mx.symbol.Activation(
        data=fc2, act_type="relu")
    drop2 = mx.sym.Dropout(act_fc2, p=0.5)
    #9. 1024
    fc3 = mx.symbol.FullyConnected(
        data=drop2, num_hidden=NOUTPUT)
    crepe = mx.symbol.SoftmaxOutput(
        data=fc3, label=input_y, name="softmax")

Forward propagation time benchmark

Hi, may I know your forward-propagation time in test.lua, as recorded in self.time.forward?

I would like to train the model on a GPU machine (AWS g2.8xlarge, CudaTensor) and test on a large dataset on a cluster using CPU (DoubleTensor) only. In my case, the forward time is around 1.5 s on GPU and 3 s on CPU.

This seems very slow. May I know your timing for comparison, so that I can check whether something is wrong with my setup?

Thanks a lot
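
For anyone comparing numbers, a hedged sketch of how a per-batch forward time can be measured (torch.Timer is standard Torch; the model and batch names are assumptions):

    local timer = torch.Timer()
    local output = model:forward(batch)  -- one batch of inputs
    if torch.type(output) == "torch.CudaTensor" then
       cutorch.synchronize()  -- wait for the GPU before reading the clock
    end
    print(string.format("forward: %.3f s", timer:time().real))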

Attempt to call field 'TemporalConvolution_updateOutput' (a nil value)

I am using Crepe for my sequential data. On executing main.lua the following error appears:
qlua: ...b/torch/install/share/lua/5.1/nn/TemporalConvolution.lua:41: attempt to call field 'TemporalConvolution_updateOutput' (a nil value)
stack traceback:
[C]: in function 'TemporalConvolution_updateOutput'
...b/torch/install/share/lua/5.1/nn/TemporalConvolution.lua:41: in function 'updateOutput'
...e/user/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
./model.lua:30: in function 'forward'
./train.lua:89: in function 'batchStep'
./train.lua:68: in function 'run'
main.lua:140: in function 'run'
main.lua:42: in function 'main'
main.lua:316: in main chunk

Is this some kind of bug, or does a specific change need to be made to the configuration?

Thanks,
Asif
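
This error usually means the installed nn package's Lua files are out of sync with its compiled backend (or, on the GPU path, that cunn is missing), so the C function the Lua layer tries to call was never registered; updating or reinstalling nn (and cunn if using CUDA) typically fixes it. A minimal repro sketch that fails the same way when the install is broken, with no Crepe code involved:

    -- standalone check of nn.TemporalConvolution
    require 'nn'
    local conv = nn.TemporalConvolution(69, 256, 7)  -- sizes arbitrary
    print(conv:forward(torch.randn(1014, 69)):size())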

Y-axis scale and units?

I just want to verify that my network is working correctly and that I'm interpreting the graphs correctly on my own data. I was tinkering with the code before launching it, so I'm not sure if I accidentally messed something up. Several questions:

  1. For loss, I got validation values over 1. How bad is that? Is a loss over 1 large, or is it reasonable?
  2. Is the error y-axis unit in percent? For the error here, would it range from 0% to 0.16% or from 0% to 16%?

     [figures: crepe-10000-error, crepe-10000-loss]

  3. When I trained on the dbpedia_csv data, the loss and error graphs had about the same shape. In general, should the loss and error graphs always have the same shape?
  4. Are the error and loss on different scales?

[figure: crepe_output]

Apologies again, Zhang, for all the questions.
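
On the scale question, a hedged sketch of how the plotted error is presumably computed: as a fraction in [0, 1], so 0.16 on the axis would mean 16% misclassified (an assumption about the repository's plots, not a confirmed reading of test.lua):

    -- error rate as a fraction of misclassified samples
    local wrong = decisions:ne(labels):sum()  -- decisions, labels: 1-D tensors
    local err = wrong / labels:size(1)
    print(string.format("error = %.4f (%.1f%%)", err, err * 100))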

Issue in saving data

The network trains well, but after completing all the eras it gets stuck: it just displays 'Saving data' and never completes execution.

Unable to connect X11 server

When I run main.lua with 'qlua main.lua', I get the errors below:

Unable to connect X11 server (continuing with -nographics)
Device set to 1
Loading datasets...
Loading the model...
Model randomized.
Current model type: torch.CudaTensor
Loading the trainer...
Loading the tester...
qlua: not loading module qtuiloader (running with -nographics)
qlua: not loading module qtgui (running with -nographics)
qlua: qtwidget window functions will not be usable (running with -nographics)
qtwidget window functions will not be usable (running with -nographics)
qlua: ./scroll.lua:16: attempt to index global 'qtuiloader' (a nil value)
stack traceback:
    [C]: in function '__index'
    ./scroll.lua:16: in function '__init'
    ...ysfeng/lvchao/torch/install/share/lua/5.1/torch/init.lua:91: in function <...ysfeng/lvchao/torch/install/share/lua/5.1/torch/init.lua:87>
    [C]: in function 'Scroll'
    ./mui.lua:21: in function '__init'
    ...ysfeng/lvchao/torch/install/share/lua/5.1/torch/init.lua:91: in function <...ysfeng/lvchao/torch/install/share/lua/5.1/torch/init.lua:87>
    [C]: in function 'Mui'
    main.lua:123: in function 'new'
    main.lua:41: in function 'main'
    main.lua:316: in main chunk
How can I disable the UI display?
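
A hedged workaround sketch, not the repository's actual code: guard the UI construction in main.lua so a headless session skips the Qt widgets entirely (Mui and main.mui are the names suggested by the traceback; treat the rest as an assumption):

    -- only build the progress UI when the Qt loader is actually available
    local ok = pcall(require, 'qtuiloader')
    if ok and qtuiloader then
       main.mui = Mui()
    else
       print("Qt unavailable; continuing without the progress UI")
    end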

Training on a custom dataset

Hi Zhang,

Thanks for open-sourcing this code. I am trying to run your model on my dataset, and I have pre-processed the data into its required format.

I am not familiar with Lua and Torch, so could you tell me whether there is anything I need to change in the config file to run it on a dataset other than the DBpedia one? (I could only decipher the number-of-classes parameter in the config file.) Sorry if the question sounds elementary; I just want to make sure I am following the right steps for my experiments.

Thanks in advance.
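
A hedged sketch of the config.lua entries that usually need editing for a new dataset (the field names are assumptions based on typical Torch projects; the paths are illustrative):

    -- point the loaders at the new serialized data
    config.train_data.file = "data/mydata/train.t7b"  -- assumed field name
    config.test_data.file  = "data/mydata/test.t7b"   -- assumed field name
    -- and make sure the output size of the final nn.Linear in the model
    -- spec equals the number of classes in your data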

Testing on training data for era 1 never stops

The "Testing on training data for era 1" phase never seems to finish on the example data.
The function Test:run(logfunc) starts a loop, and I have no idea how to control it.
Which parameters should I change in config.lua?

Thanks in advance!

Question on batch normalization.

Why did you not use batch normalization in the Crepe network described in Text Understanding from Scratch? I looked at the paper but couldn't find anything that mentions it...
Does it have to do with the sparsity of the input data?

Thanks a lot for your work!

Yelp dataset

How is the Yelp dataset constructed?
The paper mentions that there are 1.5M+ samples in the original dataset, but you used 280K + 19K samples for the polarity version. How did you construct these smaller sets? Are they available for download?

Thanks!

Number of eras

I'm trying to reproduce the experiments in the Crepe paper. One detail I can't seem to find is how many eras the model was trained for on each of the datasets. Could you please let me know how many eras you used?

DIGITS Tutorial based on this project

Hi, just to let watchers know that I have added a Tutorial on DIGITS for text classification using the model from this project.

See the write-up

I am using cuDNN and an optimized data loader, and training is an order of magnitude faster than the reference implementation. On my system it takes ~15 min to train one epoch of 498,400 training samples and 56,000 validation samples, with 4 validation sweeps per epoch.

How to change Crepe's configuration to test a dataset on a trained model

I was wondering how to use an already-trained model to validate my dataset. Can you please outline all the steps needed to make this change in Crepe, since -resume does not fulfill my requirement?

Lastly, if I want to change the error function to predict only one label, can you shed some light on that as well?

Regards,
Khan Awais
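
For context, a hedged sketch of reusing a saved Torch model for plain classification, independent of the training driver (the checkpoint path and input shape are assumptions, not Crepe's exact API):

    require 'nn'
    local model = torch.load("models/sequential.t7")  -- assumed checkpoint path
    model:evaluate()                                  -- disable dropout
    local output = model:forward(input)               -- input: one-hot sample tensor
    local _, class = output:max(1)                    -- index of the top class
    print(class[1])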

Yelp dataset year and source

One question regarding Yelp: from which year is this dataset, and where exactly does it come from?

thanks!

How do I save the confusion matrix data in a vector?

I am looking to save the confusion matrix data as a vector (in test.lua)... is there a way to save and load it in Lua?
Also, regarding the results reported in the paper: is 1.5% the error rate on the DBpedia dataset? I was unable to replicate the results, so I was wondering...
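
On the saving question, a hedged sketch: torch.save and torch.load persist arbitrary Torch objects, so if the tester builds an optim.ConfusionMatrix, its counts (kept in the .mat tensor field) can be written out directly:

    -- persist the confusion counts and read them back
    torch.save("confusion.t7", confusion.mat)  -- confusion: optim.ConfusionMatrix
    local mat = torch.load("confusion.t7")
    print(mat)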

Exception while testing on custom dataset

Hi,
I've created my own training and testing datasets with just 2 classes. I changed the number of output classes in config.lua and set it to 2, as per the instructions in the related article. The training goes well, but testing crashes with this exception:

Testing on training data for era 1
qlua: /home/murad/torch/install/share/lua/5.1/nn/THNN.lua:110: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /home/murad/torch/extra/nn/lib/THNN/generic/ClassNLLCriterion.c:57
stack traceback:
[C]: at 0x7f0b7de7ef50
[C]: in function 'v'
/home/murad/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'ClassNLLCriterion_updateOutput'
...rad/torch/install/share/lua/5.1/nn/ClassNLLCriterion.lua:44: in function 'forward'
./test.lua:60: in function 'run'
main.lua:145: in function 'run'
main.lua:42: in function 'main'
main.lua:316: in main chunk

I googled it and read that it means the number of output classes in the last layer doesn't match the number of classes in the dataset, which obviously cannot be the case here; I have checked it a ridiculous number of times!
I'm running the code without CUDA, in CPU-only mode. I will appreciate any help. Thank you.
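
A common cause worth ruling out, sketched below: nn.ClassNLLCriterion requires targets in {1, ..., n_classes}, so a 0 label (easy to produce when exporting from 0-indexed tools) trips exactly this assertion even when the layer sizes are right:

    -- demonstrate the target-range requirement on a 2-class toy example
    require 'nn'
    local crit = nn.ClassNLLCriterion()
    local logp = torch.Tensor{0.5, 0.5}:log()  -- fake log-probabilities
    print(crit:forward(logp, 2))   -- fine: target within {1, 2}
    -- crit:forward(logp, 0)       -- would fail this same assertion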
