Comments (11)

StillKeepTry commented on May 19, 2024

First, we used 50M monolingual sentences for each language during training; this may be one reason (pre-training usually needs more data). Second, epoch_size=200000 in this code means that 200,000 sentences are processed per epoch. In other words, nearly 62 epochs (size=200,000) is equivalent to training on the 50M data once.

To explain this, I have uploaded some logs to this link from my previous experiments with epoch_size = 200000. They reach 8.24/5.45 (at 10 epochs), 11.95/8.21 (at 50 epochs), 13.38/9.34 (at 100 epochs), and 14.26/10.06 (at 146 epochs). That is only the result after 146 epochs, while we ran over 500 epochs of pre-training in our experiments.

And in the latest code, you can try these hyperparameters for pre-training, which give better performance:

DATA_PATH=/data/processed/de-en

export NGPU=8; CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=$NGPU train.py \
        --exp_name unsupMT_ende                              \
        --dump_path ./models/en-de/                          \
        --exp_id test                                        \
        --data_path $DATA_PATH                               \
        --lgs 'en-de'                                        \
        --mass_steps 'en,de'                                 \
        --encoder_only false                                 \
        --emb_dim 1024                                       \
        --n_layers 6                                         \
        --n_heads 8                                          \
        --dropout 0.1                                        \
        --attention_dropout 0.1                              \
        --gelu_activation true                               \
        --tokens_per_batch 3000                              \
        --batch_size 32                                      \
        --bptt 256                                           \
        --optimizer adam_inverse_sqrt,beta1=0.9,beta2=0.98,lr=0.0001 \
        --epoch_size 200000                                  \
        --max_epoch 50                                       \
        --eval_bleu true                                     \
        --word_mass 0.5                                      \
        --min_len 4                                          \
        --save_periodic 10                                   \
        --lambda_span "8"                                    \
        --word_mask_keep_rand '0.8,0.05,0.15'
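
As a rough illustration of what --word_mass 0.5 and --word_mask_keep_rand '0.8,0.05,0.15' configure, here is a minimal, hypothetical Python sketch of MASS-style span corruption, assuming the triple is the (mask, keep, random) split applied within the masked span; it is an illustration, not the repository's actual code.

    import random

    MASK = "<mask>"
    VOCAB = ["the", "a", "dog", "runs", "fast", "today"]   # toy vocabulary for random replacement

    def mass_mask(tokens, word_mass=0.5, p_mask=0.8, p_keep=0.05, p_rand=0.15):
        # Pick a contiguous span covering roughly word_mass of the sentence.
        span_len = max(1, round(len(tokens) * word_mass))
        start = random.randint(0, len(tokens) - span_len)
        target = tokens[start:start + span_len]            # the decoder is trained to predict this span
        enc_input = list(tokens)
        for i in range(start, start + span_len):
            r = random.random()
            if r < p_mask:                                  # 80%: replace with <mask>
                enc_input[i] = MASK
            elif r < p_mask + p_keep:                       # 5%: keep the original token
                pass
            else:                                           # 15%: replace with a random vocabulary token
                enc_input[i] = random.choice(VOCAB)
        return enc_input, target

    enc, tgt = mass_mask("the quick brown fox jumps over the lazy dog".split())
    print(enc)   # encoder input with a corrupted span
    print(tgt)   # decoder target: the original span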

k888 commented on May 19, 2024

@StillKeepTry could you also please provide the log for the BT steps of the en_de model you pre-trained?

Logs for the pre-training and BT steps on en_fr would be highly appreciated as well!

AndyELiu commented on May 19, 2024

I also had a hard time reproducing the unsupervised MT result. I ran exactly as suggested in the README on en-fr. At epoch 4, my "valid_fr-en_mt_bleu" was only a little above 1, whereas you reported "valid_fr-en_mt_bleu -> 10.55". I ran it a few times.

StillKeepTry commented on May 19, 2024

@xianxl Our fairseq-based implementation contains some methods that differ from the paper, so the experimental settings in fairseq are slightly different.

xianxl commented on May 19, 2024

@StillKeepTry thanks for your reply. So what value do you recommend for word_mask_keep_rand in the MASS-fairseq implementation? The default is "0,0,1" (which means no masking?), and this argument is not set in the training command you shared.
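
If the flag name is taken at face value and the triple is read as (P_mask, P_keep, P_random) over the tokens selected for corruption, the two settings compare as in the small sketch below; this reading is an assumption based on the flag name, not a confirmed description of the fairseq code.

    # Reading --word_mask_keep_rand as (P_mask, P_keep, P_random); the split below
    # is an assumption from the flag name, not verified against the fairseq code.
    def describe(triple):
        p_mask, p_keep, p_rand = (float(x) for x in triple.split(","))
        return (f"{p_mask:.0%} replaced by <mask>, {p_keep:.0%} kept unchanged, "
                f"{p_rand:.0%} replaced by a random vocabulary token")

    print(describe("0,0,1"))          # the reported default: nothing masked, everything randomized
    print(describe("0.8,0.05,0.15"))  # the value used in the command shared above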

caoyuan commented on May 19, 2024

I followed exactly the same settings as the git page shows, running on a machine with 8 V100 GPUs as the paper describes:

[screenshot]

The git page claims that after 4 epochs, even without back-translation, the unsupervised BLEU should be close to the following numbers:

epoch -> 4
valid_fr-en_mt_bleu -> 10.55
valid_en-fr_mt_bleu -> 7.81
test_fr-en_mt_bleu -> 11.72
test_en-fr_mt_bleu -> 8.80

However, this is not what I got; my numbers are much worse at epoch 4:

[screenshot]

Could you please let us know whether any parameter is wrong, or whether there is some hidden recipe we're not aware of for reproducing the results?

On the other hand, I also loaded your pre-trained en-fr model, and the results were much better. So, alternatively, could you share the settings you used to train that pre-trained model?

caoyuan commented on May 19, 2024

After some investigation, it seems that the suggested epoch size (200000) is quite small and not the one used to produce the paper's results. Could you confirm this hypothesis?

Bachstelze commented on May 19, 2024

Can we conclude that the results are not reproducible?

tan-xu commented on May 19, 2024

LibertFan commented on May 19, 2024

@k888 have you found the hyperparameters for the en_de BT steps?

Frankszc commented on May 19, 2024

(Quoting @StillKeepTry's reply above: "First, we used 50M monolingual sentences for each language during training ... nearly 62 epochs (size=200,000) is equivalent to training on the 50M data once.", followed by the same pre-training command.)

Hello, thanks for your great work. You said that you used 50M monolingual data (50,000,000 sentences) for each language during training and that epoch_size is 200,000, so why is training on the 50M data once said to take about 62 epochs rather than 250? @StillKeepTry
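
One accounting that roughly reproduces the 62 figure, assuming (and this is only a guess, not something confirmed by the authors) that epoch_size counts sentences per GPU process and that one epoch covers the MASS steps for both languages:

    # Back-of-the-envelope check of the "~62 epochs per pass" claim.
    # Assumptions (not confirmed): epoch_size is counted per GPU process,
    # and one epoch covers the MASS steps for both en and de.
    epoch_size = 200_000            # sentences per GPU per epoch (assumed)
    n_gpus = 8
    per_language = 50_000_000       # monolingual sentences per language
    n_languages = 2

    sentences_per_epoch = epoch_size * n_gpus                  # 1.6M
    total_sentences = per_language * n_languages               # 100M
    print(total_sentences / sentences_per_epoch)               # 62.5 epochs for one full pass

    # Under the straightforward reading (epoch_size = total sentences per epoch,
    # one language), the answer would instead be 250, which is the question above:
    print(per_language / epoch_size)                           # 250.0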
