

hnmt's Issues

Variable minibatch size to optimally use GPU memory

Vary the number of sentences per minibatch so that the number of tokens (including padding) stays close to a fixed token budget. The current fixed-size minibatches under-utilize the available GPU memory when they are filled with short sentences.
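A minimal sketch of the idea, assuming sentences are available as token lists and a budget measured in padded tokens per minibatch (the function and parameter names are illustrative, not hnmt's actual interface):

    # Sketch: group sentences into minibatches so that
    # batch_size * longest_sentence (i.e. tokens including padding)
    # stays under a fixed token budget.  Sorting by length first
    # keeps the amount of padding small.
    def budget_minibatches(sentences, budget=4096):
        batch, max_len = [], 0
        for sent in sorted(sentences, key=len):
            new_max = max(max_len, len(sent))
            if batch and (len(batch) + 1) * new_max > budget:
                # Adding this sentence would exceed the padded-token budget:
                # emit the current minibatch and start a new one.
                yield batch
                batch, max_len = [], 0
                new_max = len(sent)
            batch.append(sent)
            max_len = new_max
        if batch:
            yield batch

Shuffling the resulting minibatches (rather than individual sentences) would keep training stochastic while preserving the length grouping.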

Averaging-and-ensembling

Averaging parameters for a single training run improves results, as does ensembling several independently trained models. We should make the decoder able to combine the two methods; it may be possible to squeeze out a bit of extra performance this way, and it should be quick to implement.
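A hedged sketch of how the combination could look at decoding time, assuming each run saves a few checkpoints as dicts of NumPy arrays and that each model can return per-token log-probabilities for a decoder state (the method names here are assumptions, not hnmt's real API):

    import numpy as np

    def average_checkpoints(checkpoints):
        # Average the parameters of several checkpoints from one training run.
        return {name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
                for name in checkpoints[0]}

    def ensemble_log_probs(models, state):
        # Combine independently trained (already averaged) models by
        # averaging their per-token log-probabilities at each decoding step.
        return np.mean([m.step_log_probs(state) for m in models], axis=0)

Each training run would first be collapsed with average_checkpoints(), and beam search would then score tokens with ensemble_log_probs() over the resulting models.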

Pretrained Models?

Pretrained models would be much appreciated. Do you provide them?

If not, could you perhaps add a note about this in the README?

Thanks!

Hybrid word-character decoder

Implement a two-level decoder following [1].

First, a word-level decoder produces a sequence that may contain <UNK> symbols.
Next, these <UNK> symbols are filled in by a character-level decoder (a sketch of this two-step procedure follows the reference below).

[1] Luong, Minh-Thang, and Christopher D. Manning. "Achieving open vocabulary neural machine translation with hybrid word-character models." arXiv preprint arXiv:1604.00788 (2016).
http://arxiv.org/pdf/1604.00788
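A hedged sketch of the two-step procedure, with placeholder word_decoder and char_decoder objects standing in for the actual models (the decode() interface is an assumption for illustration, not hnmt's real code):

    UNK = '<UNK>'

    def hybrid_decode(source, word_decoder, char_decoder):
        # Step 1: the word-level decoder produces a sequence that may
        # contain <UNK>, along with its hidden state at each position.
        words, states = word_decoder.decode(source)
        # Step 2: every <UNK> is spelled out by the character-level decoder,
        # conditioned on the word decoder's state at that position.
        output = []
        for word, state in zip(words, states):
            output.append(char_decoder.decode(state) if word == UNK else word)
        return output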

Error in training. How to overcome this problem?

ERROR (theano.gof.opt): SeqOptimizer apply <theano.scan_module.scan_opt.PushOutScanOutput object at 0x7f6bb55b4198>
ERROR (theano.gof.opt): Traceback:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/gof/opt.py", line 235, in apply
    sub_prof = optimizer.optimize(fgraph)
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/gof/opt.py", line 87, in optimize
    ret = self.apply(fgraph, *args, **kwargs)
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/scan_module/scan_opt.py", line 685, in apply
    node = self.process_node(fgraph, node)
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/scan_module/scan_opt.py", line 745, in process_node
    node, args)
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/scan_module/scan_opt.py", line 854, in push_out_inner_vars
    add_as_nitsots)
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/scan_module/scan_opt.py", line 906, in add_nitsot_outputs
    reason='scanOp_pushout_output')
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/gof/toolbox.py", line 391, in replace_all_validate_remove
    chk = fgraph.replace_all_validate(replacements, reason)
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/gof/toolbox.py", line 365, in replace_all_validate
    fgraph.validate()
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/gof/toolbox.py", line 256, in validate_
    ret = fgraph.execute_callbacks('validate')
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/gof/fg.py", line 589, in execute_callbacks
    fn(self, *args, **kwargs)
  File "/home/lnmiit/.local/lib/python3.6/site-packages/Theano-0.9.0-py3.6.egg/theano/gof/toolbox.py", line 422, in validate
    raise theano.gof.InconsistencyError("Trying to reintroduce a removed node")
theano.gof.fg.InconsistencyError: Trying to reintroduce a removed node

Traceback (most recent call last):
  File "hnmt.py", line 1426, in <module>
    if __name__ == '__main__': main()
  File "hnmt.py", line 1214, in main
    'Heldout training sentences is no longer supported')

This error is received when trying to train the machine translation system with the following command:

python3 hnmt.py --train europarl-v7.sv-en \
    --source-tokenizer word \
    --target-tokenizer char \
    --load-source-vocabulary vocab.sv \
    --load-target-vocabulary vocab.en \
    --batch-budget 32 \
    --training-time 8 \
    --log en-sv.log \
    --save-model en-sv.model
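The ERROR (theano.gof.opt) lines report a failing graph-optimization pass (PushOutScanOutput); Theano normally logs such failures and skips the optimization rather than aborting, and the run above actually stops with the 'Heldout training sentences is no longer supported' error raised in hnmt.py's main(). As a hedged check on whether the scan optimizer is involved, Theano's optimization level can be lowered before theano is imported (optimizer=fast_compile is a standard Theano configuration value; the trade-off is slower compiled code):

    # Sketch: lower Theano's optimization level so aggressive scan
    # optimizations such as PushOutScanOutput are not applied.
    # THEANO_FLAGS must be set before theano is imported for the first time.
    import os
    os.environ.setdefault('THEANO_FLAGS', 'optimizer=fast_compile')

    import theano
    print(theano.config.optimizer)  # 'fast_compile' if the flag took effect

Equivalently, the training command can be prefixed with THEANO_FLAGS=optimizer=fast_compile. The heldout-data error itself comes from hnmt.py and is independent of these Theano messages.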
