
Comments (8)

klauspa commented on July 17, 2024

Hello, could you share some information about how to translate English to Chinese? Your work is Mongolian to Chinese, which is also not listed as an example in this repo. My puzzle is how to preprocess the data and apply the pretrained model. Thanks in advance!


glample commented on July 17, 2024

The pretrained result looks good. Regarding the unsupervised MT on the pretrained model, I'm not sure why it OOMs. Can you try setting --bt_steps '' to temporarily disable back-translation? Just to get an idea of where the issue comes from.
Also, can you try setting --max_batch_size 64? It will prevent the model from generating huge batches containing a lot of very small sentences that add up to --tokens_per_batch words.
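
To see why, here is a rough sketch (assumed logic, not XLM's actual iterator) of how --tokens_per_batch and --max_batch_size interact: sentences accumulate into a batch until either cap would be exceeded, so without a sentence cap a batch of very short sentences can grow very large.

def make_batches(lengths, tokens_per_batch=2000, max_batch_size=64):
    """Group sentence indices into batches capped by total tokens
    and, optionally, by number of sentences."""
    batches, batch, n_tokens = [], [], 0
    for i, length in enumerate(lengths):
        if batch and (n_tokens + length > tokens_per_batch
                      or len(batch) >= max_batch_size):
            batches.append(batch)
            batch, n_tokens = [], 0
        batch.append(i)
        n_tokens += length
    if batch:
        batches.append(batch)
    return batches

# Without the sentence cap, 2000 tokens of five-word sentences yields a
# 400-sentence batch; with --max_batch_size 64 the padded tensor stays small.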

Last thing: we use this for back-translation: https://github.com/facebookresearch/XLM/blob/master/src/trainer.py#L816 It indicates that the generated back-translation should not exceed 1.3 times the length of the source sentence. This usually holds for English-French, but if Mn-Zh sentences can be twice as long, you should replace the 1.3 with 2 (though this will make OOM more likely as well, since generated sequences will be longer).
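
As a concrete sketch of that cap (only the 1.3 ratio is taken from the linked code; the function itself is illustrative):

def max_generated_length(max_src_len, ratio=1.3):
    # For Mn-Zh, where targets can be twice as long as sources, raise
    # `ratio` to 2.0, at the cost of longer (more memory-hungry) sequences.
    return int(ratio * max_src_len)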


Julisa-test commented on July 17, 2024

Hi @glample,

Thanks for your reply!

I tried reducing --tokens_per_batch from 2000 to 400 and it started running, but the issue reappeared at the end of epoch 0, when the BLEU score was computed.
I also tried setting --max_batch_size 64, but that didn't help.
In addition, when I tried setting --bt_steps '', another issue appeared.

(xlm) julisa@julisa:~/XLM/XLM$ python train.py --exp_name unsupMT_mnzh --dump_path './dumped/' --reload_model 'best-valid_mlm_ppl.pth,best-valid_mlm_ppl.pth' --data_path './data/processed/mn-zh/' --lgs 'mn-zh' --ae_steps 'mn,zh' --bt_steps '' --word_shuffle 3 --word_dropout '0.1' --word_blank '0.1' --lambda_ae '0:1,100000:0.1,300000:0' --encoder_only false --emb_dim 1024 --n_layers 6 --n_heads 8 --dropout '0.1' --attention_dropout 0 --gelu_activation true --tokens_per_batch 2000 --max_batch_size 64 --bptt 256 --optimizer 'adam_inverse_sqrt,beta1=0.9,beta2=0.98,lr=0.0001' --epoch_size 300000 --eval_bleu true --stopping_criterion 'valid_mn-zh_mt_bleu,10' --validation_metrics 'valid_mn-zh_mt_bleu'
Traceback (most recent call last):
  File "train.py", line 318, in <module>
    check_data_params(params)
  File "/home/julisa/XLM/XLM/src/data/loader.py", line 306, in check_data_params
    assert params.eval_bleu is False or len(params.mt_steps + params.bt_steps) > 0
AssertionError
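
The assertion says that --eval_bleu true requires at least one MT or BT step, since otherwise there is no translation output to score. A plausible way to run the denoising-only test (assuming BLEU is not needed for it) is to pass

--bt_steps '' --eval_bleu false

and to drop the BLEU-based --stopping_criterion and --validation_metrics, or point them at a metric that is still computed.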


Julisa-test commented on July 17, 2024

Hi @glample,

I tried setting --bt_steps '', but it still OOMs.

python train.py --exp_name unsupMT_mnzh --dump_path ./dumped/ --exp_id '19031015' --reload_model './dumped/my_mnzh_mlm/190228/best-valid_mlm_ppl.pth,./dumped/my_mnzh_mlm/190228/best-valid_mlm_ppl.pth' --data_path ./data/processed/mn-zh/ --lgs 'mn-zh' --ae_steps 'mn,zh' --bt_steps '' --word_shuffle 3 --word_dropout 0.1 --word_blank 0.1 --lambda_ae '0:1,100000:0.1,300000:0' --encoder_only false --emb_dim 1024 --n_layers 6 --n_heads 8 --dropout 0.1 --attention_dropout 0 --gelu_activation true --tokens_per_batch 400 --batch_size 16 --max_batch_size 64 --bptt 256 --optimizer adam_inverse_sqrt,beta1=0.9,beta2=0.98,lr=0.0001 --epoch_size 300000 --eval_bleu true --stopping_criterion 'valid_mn-zh_mt_bleu,10' --validation_metrics 'valid_mn-zh_mt_bleu'

INFO - 03/10/19 15:43:47 - 0:00:27 - ============ Starting epoch 0 ... ============
INFO - 03/10/19 15:43:47 - 0:00:27 - Creating new training data iterator (ae,mn) ...
/home/julisa/.pyenv/versions/xlm/lib/python3.6/site-packages/torch/nn/_reduction.py:16: UserWarning: reduction='elementwise_mean' is deprecated, please use reduction='mean' instead.
warnings.warn("reduction='elementwise_mean' is deprecated, please use reduction='mean' instead.")
INFO - 03/10/19 15:43:48 - 0:00:28 - Creating new training data iterator (ae,zh) ...
Traceback (most recent call last):
  File "train.py", line 328, in <module>
    main(params)
  File "train.py", line 288, in main
    trainer.mt_step(lang, lang, params.lambda_ae)
  File "/home/julisa/XLM/XLM/src/trainer.py", line 769, in mt_step
    self.optimize(loss, ['encoder', 'decoder'])
  File "/home/julisa/XLM/XLM/src/trainer.py", line 130, in optimize
    loss.backward()
  File "/home/julisa/.pyenv/versions/xlm/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/julisa/.pyenv/versions/xlm/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 98.00 MiB (GPU 0; 7.92 GiB total capacity; 5.83 GiB already allocated; 79.62 MiB free; 27.70 MiB cached)


glample commented on July 17, 2024

This is not normal; with 400 tokens per batch you should not have any issues. What GPU do you use, and what is its total capacity? From the log I can see "GPU 0; 7.92 GiB total capacity", but this seems strange to me.
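
For what it's worth, a quick way to check the capacity PyTorch sees (standard torch.cuda calls):

import torch

# Print the name and total memory of the first visible GPU.
props = torch.cuda.get_device_properties(0)
print(props.name, round(props.total_memory / 1024 ** 3, 2), 'GiB')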


Julisa-test commented on July 17, 2024

The GPU I used is an NVIDIA 1070.


Julisa-test commented on July 17, 2024

Hi @glample,

I think I found a solution: I just tried reducing --emb_dim from 1024 to 512, and it works.

Thanks for everything!


645709712 commented on July 17, 2024

@Julisa-test Hi, I have a question for you. If you change emb_dim, I think there will be a mismatch in variable sizes, because the model you pretrained has dimension 1024; if you force the dimension down, isn't there a problem? I also encountered the problem of insufficient GPU space and am looking for a solution. I saw your reply and just want to know the details of your solution. Thank you very much.
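
For context on the question above: PyTorch's load_state_dict is strict by default and rejects tensors whose shapes differ, so reducing --emb_dim presumably only works together with an MLM that was itself pretrained at the smaller dimension. A minimal illustration (plain PyTorch, not XLM code):

import torch.nn as nn

# A checkpoint saved at dimension 1024 cannot be loaded into a
# 512-dimensional module: load_state_dict raises a size-mismatch error.
pretrained = nn.Embedding(100, 1024)  # stands in for the 1024-dim model
smaller = nn.Embedding(100, 512)      # stands in for the 512-dim model

try:
    smaller.load_state_dict(pretrained.state_dict())
except RuntimeError as err:
    print(err)  # size mismatch for weight: [100, 1024] vs [100, 512]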

