Comments (11)
First, we used 50M monolingual sentences for each language during training, which may be one reason (pre-training usually needs more data). Second, epoch_size=200000
in this code means training on 200,000 sentences per epoch. In other words, roughly 62 epochs (size=200,000) correspond to a single pass over the 50M data.
To explain this, I have uploaded some logs from my previous experiments with epoch_size = 200000 to this link. They reach 8.24/5.45 (at 10 epochs), 11.95/8.21 (at 50 epochs), 13.38/9.34 (at 100 epochs), and 14.26/10.06 (at 146 epochs). That is only the result after 146 epochs, while our pre-training experiments ran for over 500 epochs.
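For reference, here is one back-of-the-envelope reading of that epoch arithmetic; the assumption that epoch_size counts sentences per GPU and is shared between the two languages is mine, not something stated in the thread:

# Hypothetical accounting that would yield the ~62 epochs mentioned above.
monolingual_sentences = 50_000_000   # 50M sentences per language (from the comment)
epoch_size = 200_000                 # --epoch_size from the command below
n_gpus = 8                           # assumed: 8 GPUs, as in the launch command
n_languages = 2                      # assumed: one epoch is split between en and de

sentences_per_epoch_per_language = epoch_size * n_gpus / n_languages   # 800,000
print(monolingual_sentences / sentences_per_epoch_per_language)        # ~62.5 epochs per pass

# If epoch_size were instead a global count for a single language,
# one pass over 50M sentences would take 50M / 200K = 250 epochs.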
And in the latest code, you can try these hyperparameters for pre-training, which give better performance:
DATA_PATH=/data/processed/de-en
export NGPU=8; CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=$NGPU train.py \
--exp_name unsupMT_ende \
--dump_path ./models/en-de/ \
--exp_id test \
--data_path $DATA_PATH \
--lgs 'en-de' \
--mass_steps 'en,de' \
--encoder_only false \
--emb_dim 1024 \
--n_layers 6 \
--n_heads 8 \
--dropout 0.1 \
--attention_dropout 0.1 \
--gelu_activation true \
--tokens_per_batch 3000 \
--batch_size 32 \
--bptt 256 \
--optimizer adam_inverse_sqrt,beta1=0.9,beta2=0.98,lr=0.0001 \
--epoch_size 200000 \
--max_epoch 50 \
--eval_bleu true \
--word_mass 0.5 \
--min_len 4 \
--save_periodic 10 \
--lambda_span "8" \
--word_mask_keep_rand '0.8,0.05,0.15'
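As a side note on the optimizer string above, here is a minimal sketch of the inverse-square-root learning-rate schedule that adam_inverse_sqrt-style optimizers (fairseq/XLM style) typically follow; the warmup length and initial learning rate below are assumed defaults, not values taken from this command:

def inverse_sqrt_lr(num_updates, peak_lr=1e-4, warmup_updates=4000, warmup_init_lr=1e-7):
    """Linear warmup to peak_lr, then decay proportional to 1/sqrt(num_updates)."""
    if num_updates < warmup_updates:
        # warmup phase: increase linearly from warmup_init_lr to peak_lr
        return warmup_init_lr + (peak_lr - warmup_init_lr) * num_updates / warmup_updates
    # decay phase: peak_lr * sqrt(warmup_updates) / sqrt(num_updates)
    return peak_lr * (warmup_updates ** 0.5) * (num_updates ** -0.5)

# e.g. at 4x the warmup length, the LR has halved from its peak:
print(inverse_sqrt_lr(16000))  # ~5e-5 when peak_lr=1e-4 and warmup_updates=4000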
@StillKeepTry could you also please provide the log for the BT steps of the en_de model you pre-trained?
Logs for pre-training and BT for en_fr would be highly appreciated as well!
I also had a hard time reproducing the unsupervised MT result. I ran it exactly as suggested in the README on en-fr. At epoch 4, my "valid_fr-en_mt_bleu" is only a little above 1, whereas you reported "valid_fr-en_mt_bleu -> 10.55". I ran it a few times.
@xianxl Our implementation based on fairseq contains some methods that differ from our paper, so the experiment settings in fairseq are slightly different.
@StillKeepTry thanks for your reply. So what do you recommend setting word_mask_keep_rand to in the MASS-fairseq implementation? The default is "0,0,1" (which means no mask?), and this arg is not set in the training command you shared.
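For context, here is a minimal sketch of how a word_mask_keep_rand triple is typically applied in XLM-style codebases; this is an illustration of the general technique, not the MASS-fairseq code itself:

import random

def corrupt_token(token_id, vocab_size, mask_id, probs=(0.8, 0.05, 0.15)):
    """Apply a (mask, keep, random) triple to one token selected for prediction.

    probs=(0.8, 0.05, 0.15) mirrors the '0.8,0.05,0.15' setting above;
    (0, 0, 1) would replace every selected token with a random word
    rather than the mask symbol.
    """
    p_mask, p_keep, p_rand = probs
    r = random.random()
    if r < p_mask:
        return mask_id                    # replace with the mask symbol
    if r < p_mask + p_keep:
        return token_id                   # keep the original token
    return random.randrange(vocab_size)   # replace with a random token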
I followed exactly the same settings as the git page shows, running on a machine with 8 V100 GPUs as the paper describes.
The git page claims that after 4 epochs, even without back-translation, the unsupervised BLEU should be close to the following numbers:
epoch -> 4
valid_fr-en_mt_bleu -> 10.55
valid_en-fr_mt_bleu -> 7.81
test_fr-en_mt_bleu -> 11.72
test_en-fr_mt_bleu -> 8.80
However, this is not what I got; my numbers at epoch 4 are much worse:
Could you please let us know if any param is wrong, or whether there is some hidden recipe we're not aware of for reproducing the results?
On the other hand, I also loaded your pre-trained en-fr model, and the results are much better. So, alternatively, could you share the settings you used to train the pre-trained model?
After some investigation, it seems that the suggested epoch size (200000) is really small and not the one used to produce the paper results. Could you confirm this hypothesis?
Can we conclude that the results are not reproducible?
@k888 have you found the hyperparameters for the en_de BT steps?
Hello, thanks for your great work. You said that you used 50M monolingual sentences (50,000,000) for each language during training, and the epoch_size is 200,000, so why does a single pass over the 50M data correspond to 62 epochs? Why not 250? @StillKeepTry
Related Issues (20)
- Quick question about "masked_block_start"
- Confusion regarding data
- Do two direction data for parallel data is necessary?
- Incorrect dictionary format
- How to create dictionary dict.lg.txt
- Question towards the Pre-trained weight for the Neural Machine Translation under supNMT
- Predictions on XSUM?
- Questions for SupNMT
- Question about data processing in Unsupervised NMT
- How to create dictionary dict.lg.txt in MASS supNMT
- Translation results on Zh-En pre-trained model
- invalid task choice
- Mass_unsup has no problem on a single GPU, and errors are reported on multiple GPUs
- How does MASS supervised machine translation perform preprocessing?
- supNMT pre-train problem with multi gpus
- how can you get the data for MASS supNMT?
- Does mass implement the translate method?
- Where is the file "fairseq-preprocess"
- This repo is missing important files
- who can share the model with me