
Comments (6)

jiaaoc commented on June 19, 2024
  1. There might be something related to the update of fairseq.
  2. Can you try more epochs? From my past experiments, the best models usually appeared after 6 or 7 epochs.
  3. Can you try the BART baseline/single-view models as well?


Ricardokevins commented on June 19, 2024
> 1. There might be something related to the update of fairseq.
> 2. Can you try more epochs? From my past experiments, the best models usually appeared after 6 or 7 epochs.
> 3. Can you try the BART baseline/single-view models as well?

**Thanks a lot for your timely reply.**
For the first point, the only modification I made to the code is here: #4 (I think if I pip install from this repo, the fairseq version should be the same as yours?)
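
As a quick sanity check on the fairseq version question, a minimal snippet like this (just a sketch) prints which fairseq installation is actually imported and its reported version:

```python
# Rough sanity check: print which fairseq installation the environment imports
# and its version string, to compare against the copy bundled with this repo.
import fairseq

print("fairseq version:", fairseq.__version__)
print("imported from:", fairseq.__file__)
```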

I also tried more epochs, but the eval loss kept increasing, which looks like overfitting, so I stopped training.

I also tried the single-view model by running train_single_view.sh in train_sh, but the result still seems below the number reported in the paper. The training log follows below.

It seems to hit the lowest validation loss at epoch 4. The corresponding test ROUGE is:
Test {'rouge-1': {'f': 0.4806754387783283, 'p': 0.47575689443993296, 'r': 0.533090841327234}, 'rouge-2': {'f': 0.24723642740645052, 'p': 0.24485542975184837, 'r': 0.27718088869081886}, 'rouge-l': {'f': 0.4685603118461953, 'p': 0.4639976287785791, 'r': 0.5105758908770055}}
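
(For reference, these nested f/p/r dicts look like the output of the `rouge` pip package; a minimal sketch of how such scores can be computed, with placeholder file names rather than the repo's actual output paths:)

```python
# Minimal sketch using the `rouge` pip package, which produces nested
# {'rouge-1': {'f': ..., 'p': ..., 'r': ...}, ...} dicts like the one above.
# The file names below are placeholders, not this repo's actual output files.
from rouge import Rouge

def score_files(hyp_path, ref_path):
    with open(hyp_path) as f:
        hyps = [line.strip() for line in f]
    with open(ref_path) as f:
        refs = [line.strip() for line in f]
    # avg=True averages the per-example scores into a single dict
    return Rouge().get_scores(hyps, refs, avg=True)

print("Test", score_files("test.hypo", "test.target"))
```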

2021-05-31 10:21:37 | INFO | fairseq_cli.train | model bart_large, criterion LabelSmoothedCrossEntropyCriterion
2021-05-31 10:21:37 | INFO | fairseq_cli.train | num. model params: 416791552 (num. trained: 416791552)
2021-05-31 10:21:41 | INFO | fairseq_cli.train | training on 1 GPUs
2021-05-31 10:21:41 | INFO | fairseq_cli.train | max tokens per GPU = 800 and max sentences per GPU = None
2021-05-31 10:21:44 | INFO | fairseq.trainer | loaded checkpoint /home/data_ti4_c/gengx/PGN/DialogueSum/bart.large/bart.large/model.pt (epoch 41 @ 0 updates)
group1:
511
group2:
12
2021-05-31 10:21:44 | INFO | fairseq.trainer | NOTE: your device may support faster training with --fp16
here schedule!
2021-05-31 10:21:44 | INFO | fairseq.trainer | loading train data for epoch 0
2021-05-31 10:21:44 | INFO | fairseq.data.data_utils | loaded 14731 examples from: cnn_dm-bin/train.source-target.source
2021-05-31 10:21:44 | INFO | fairseq.data.data_utils | loaded 14731 examples from: cnn_dm-bin/train.source-target.target
2021-05-31 10:21:44 | INFO | fairseq.tasks.translation | cnn_dm-bin train source-target 14731 examples
2021-05-31 10:21:44 | WARNING | fairseq.data.data_utils | 6 samples have invalid sizes and will be skipped, max_positions=(800, 800), first few sample ids=[6248, 12799, 12502, 9490, 4269, 8197]
False
epoch 001 | loss 5.513 | nll_loss 3.596 | ppl 12.092 | wps 916.2 | ups 0.23 | wpb 4077.1 | bsz 155 | num_updates 95 | lr 1.425e-05 | gnorm 32.314 | clip 100 | oom 0 | train_wall 414 | wall 429
epoch 001 | valid on 'valid' subset | loss 4.094 | nll_loss 2.2 | ppl 4.595 | wps 2698.9 | wpb 130.4 | bsz 5 | num_updates 95
here bpe NONE
here!
Test on val set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 817/817 [02:02<00:00, 6.67it/s]
Val {'rouge-1': {'f': 0.46480339065997567, 'p': 0.47049869812919654, 'r': 0.5024883727698615}, 'rouge-2': {'f': 0.2276876538373641, 'p': 0.22955358188022437, 'r': 0.24868065849103466}, 'rouge-l': {'f': 0.4542692837269408, 'p': 0.46116078309275904, 'r': 0.48214810071155845}}
2021-05-31 10:31:16 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints_stage/checkpoint_best.pt (epoch 1 @ 95 updates, score 4.094) (writing took 12.353126897010952 seconds)
Test on testing set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 818/818 [02:05<00:00, 6.50it/s]
Test {'rouge-1': {'f': 0.46114517047708026, 'p': 0.46537271323141755, 'r': 0.5042302000143742}, 'rouge-2': {'f': 0.2224237363836951, 'p': 0.22493083103419967, 'r': 0.24544451649543284}, 'rouge-l': {'f': 0.44474361373637405, 'p': 0.448927478999051, 'r': 0.47784909879262455}}
epoch 002 | loss 4.109 | nll_loss 2.276 | ppl 4.843 | wps 555.9 | ups 0.14 | wpb 4077.1 | bsz 155 | num_updates 190 | lr 2.85e-05 | gnorm 3.147 | clip 100 | oom 0 | train_wall 411 | wall 1126
epoch 002 | valid on 'valid' subset | loss 3.962 | nll_loss 2.089 | ppl 4.256 | wps 2811.3 | wpb 130.4 | bsz 5 | num_updates 190 | best_loss 3.962
here bpe NONE
here!
Test on val set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 817/817 [02:11<00:00, 6.21it/s]
Val {'rouge-1': {'f': 0.4725457817939638, 'p': 0.4475755100804326, 'r': 0.5487082350417644}, 'rouge-2': {'f': 0.24120891668239897, 'p': 0.22704731361123773, 'r': 0.2844809692235128}, 'rouge-l': {'f': 0.4663167405970829, 'p': 0.4482708376545491, 'r': 0.5233676583405283}}
2021-05-31 10:43:12 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints_stage/checkpoint_best.pt (epoch 2 @ 190 updates, score 3.962) (writing took 22.749575738969725 seconds)
Test on testing set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 818/818 [02:17<00:00, 5.93it/s]
Test {'rouge-1': {'f': 0.46377397032285694, 'p': 0.4401052932465904, 'r': 0.5403512633724206}, 'rouge-2': {'f': 0.227645627774666, 'p': 0.21571026664497425, 'r': 0.26880516383329606}, 'rouge-l': {'f': 0.4566739235923508, 'p': 0.43997051958006705, 'r': 0.5136008923034153}}
epoch 003 | loss 3.891 | nll_loss 2.058 | ppl 4.165 | wps 532.3 | ups 0.13 | wpb 4077.1 | bsz 155 | num_updates 285 | lr 2.94688e-05 | gnorm 4.792 | clip 100 | oom 0 | train_wall 411 | wall 1854
epoch 003 | valid on 'valid' subset | loss 3.907 | nll_loss 2.06 | ppl 4.17 | wps 2804.9 | wpb 130.4 | bsz 5 | num_updates 285 | best_loss 3.907
here bpe NONE
here!
Test on val set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 817/817 [02:03<00:00, 6.63it/s]
Val {'rouge-1': {'f': 0.4898078401784239, 'p': 0.49122354401662094, 'r': 0.5326126375037953}, 'rouge-2': {'f': 0.2511270631508343, 'p': 0.25084270093658784, 'r': 0.2765290715978279}, 'rouge-l': {'f': 0.47323360177962054, 'p': 0.47368732278776043, 'r': 0.5075501058044481}}
2021-05-31 10:55:09 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints_stage/checkpoint_best.pt (epoch 3 @ 285 updates, score 3.907) (writing took 20.81832773098722 seconds)
Test on testing set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 818/818 [02:04<00:00, 6.57it/s]
Test {'rouge-1': {'f': 0.4805272898101375, 'p': 0.4894492247983221, 'r': 0.5191923582925742}, 'rouge-2': {'f': 0.24698109742466243, 'p': 0.25318572010412027, 'r': 0.26818822523077385}, 'rouge-l': {'f': 0.4670266945740137, 'p': 0.47376982129121814, 'r': 0.49816406780975997}}
epoch 004 | loss 3.695 | nll_loss 1.851 | ppl 3.608 | wps 550.5 | ups 0.14 | wpb 4077.1 | bsz 155 | num_updates 380 | lr 2.8875e-05 | gnorm 1.693 | clip 100 | oom 0 | train_wall 411 | wall 2557
epoch 004 | valid on 'valid' subset | loss 3.868 | nll_loss 2.032 | ppl 4.09 | wps 2825.3 | wpb 130.4 | bsz 5 | num_updates 380 | best_loss 3.868
here bpe NONE
here!
Test on val set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 817/817 [02:05<00:00, 6.52it/s]
Val {'rouge-1': {'f': 0.4932441558906935, 'p': 0.4849630708516529, 'r': 0.5470369285867579}, 'rouge-2': {'f': 0.257195820352599, 'p': 0.2512144258115529, 'r': 0.2890727559145565}, 'rouge-l': {'f': 0.4809091476362043, 'p': 0.4733958728540178, 'r': 0.5243670254679919}}
2021-05-31 11:06:56 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints_stage/checkpoint_best.pt (epoch 4 @ 380 updates, score 3.868) (writing took 22.06681061600102 seconds)
Test on testing set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 818/818 [02:05<00:00, 6.51it/s]
Test {'rouge-1': {'f': 0.4806754387783283, 'p': 0.47575689443993296, 'r': 0.533090841327234}, 'rouge-2': {'f': 0.24723642740645052, 'p': 0.24485542975184837, 'r': 0.27718088869081886}, 'rouge-l': {'f': 0.4685603118461953, 'p': 0.4639976287785791, 'r': 0.5105758908770055}}
epoch 005 | loss 3.531 | nll_loss 1.672 | ppl 3.186 | wps 546.2 | ups 0.13 | wpb 4077.1 | bsz 155 | num_updates 475 | lr 2.82813e-05 | gnorm 1.635 | clip 100 | oom 0 | train_wall 411 | wall 3266
epoch 005 | valid on 'valid' subset | loss 3.886 | nll_loss 2.054 | ppl 4.153 | wps 2806.1 | wpb 130.4 | bsz 5 | num_updates 475 | best_loss 3.868
here bpe NONE
here!
Test on val set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 817/817 [02:04<00:00, 6.56it/s]
Val {'rouge-1': {'f': 0.4944719510449904, 'p': 0.48697511330010795, 'r': 0.5466309284512514}, 'rouge-2': {'f': 0.2574232380298895, 'p': 0.25247220336568593, 'r': 0.2882837122736105}, 'rouge-l': {'f': 0.4798093107727924, 'p': 0.4732569365674383, 'r': 0.5224374168296736}}
2021-05-31 11:18:37 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints_stage/checkpoint_last.pt (epoch 5 @ 475 updates, score 3.886) (writing took 14.390713212022092 seconds)
Test on testing set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 818/818 [02:05<00:00, 6.51it/s]
Test {'rouge-1': {'f': 0.48509137071148967, 'p': 0.48264626996202653, 'r': 0.5366850271571914}, 'rouge-2': {'f': 0.2525567410158395, 'p': 0.2534527679918857, 'r': 0.2804807867555358}, 'rouge-l': {'f': 0.47183828333752353, 'p': 0.47041232293825985, 'r': 0.5118757042998228}}
epoch 006 | loss 3.39 | nll_loss 1.516 | ppl 2.859 | wps 552.4 | ups 0.14 | wpb 4077.1 | bsz 155 | num_updates 570 | lr 2.76875e-05 | gnorm 1.943 | clip 100 | oom 0 | train_wall 412 | wall 3967
epoch 006 | valid on 'valid' subset | loss 3.909 | nll_loss 2.08 | ppl 4.23 | wps 2779.9 | wpb 130.4 | bsz 5 | num_updates 570 | best_loss 3.868
here bpe NONE
here!
Test on val set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 817/817 [01:55<00:00, 7.07it/s]
Val {'rouge-1': {'f': 0.4910794409859213, 'p': 0.4974831279301767, 'r': 0.5273234137514343}, 'rouge-2': {'f': 0.2531795545509134, 'p': 0.2557374222451652, 'r': 0.27467339364892834}, 'rouge-l': {'f': 0.4749285189223979, 'p': 0.4792720164512937, 'r': 0.5046984304455332}}
2021-05-31 11:30:08 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints_stage/checkpoint_last.pt (epoch 6 @ 570 updates, score 3.909) (writing took 12.884504603978712 seconds)
Test on testing set:
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 818/818 [01:56<00:00, 7.05it/s]
Test {'rouge-1': {'f': 0.4831205514829332, 'p': 0.4938850069205325, 'r': 0.5181878116116928}, 'rouge-2': {'f': 0.24977383043483103, 'p': 0.2561533148681936, 'r': 0.2691522722501041}, 'rouge-l': {'f': 0.47026294118429235, 'p': 0.4798468138122066, 'r': 0.49714213160146487}}


jiaaoc commented on June 19, 2024
  1. For fairseq, it seems that they updated the bart-large model (the vocab size, including 'encoder.json', 'vocab.bpe' and 'dict.txt', changed as well). There might be some discrepancies.

  2. For single-view models, you might need to change some code in 'fairseq_cli/train.py' as well to match the test and validation files. Did you also try the BART baseline?


Ricardokevins commented on June 19, 2024
> 1. For fairseq, it seems that they updated the bart-large model (the vocab size, including 'encoder.json', 'vocab.bpe' and 'dict.txt', changed as well). There might be some discrepancies.
> 2. For single-view models, you might need to change some code in 'fairseq_cli/train.py' as well to match the test and validation files. Did you also try the BART baseline?

Thanks a lot for the reply.

Over the past few days I tried to fetch the previous version of BART, but got nothing; the download link in the fairseq repo seems unchanged since it was created. Did you keep the BART model.pt and the other files (like the vocab) you used in the paper? Thanks a lot!

For the BART baseline, I could not find the version of the baseline you implemented in this repo. The baseline I implemented myself differs from yours (because of different data preprocessing or some other things).


jiaaoc commented on June 19, 2024

I did not keep the version of BART I was using. I noticed this issue because, when I was preparing this repo and trying to load the trained model I had saved (https://drive.google.com/file/d/1Rhzxk1B7oaKi85Gsxr_8WcqTRx23HO-y/view), I got an error about a vocab mismatch. (I guessed they updated the data pre-processing configs like 'encoder.json', 'vocab.bpe' and 'dict.txt'.)
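
If it helps, one rough way to see the mismatch directly (a sketch only, assuming the usual fairseq checkpoint layout with the state dict under the 'model' key; the paths are placeholders) is to compare the checkpoint's embedding rows against the current dict.txt:

```python
# Rough sketch: compare the token-embedding size stored in a fairseq BART checkpoint
# against the current dict.txt. Assumes the usual fairseq layout (state dict under
# the 'model' key); the paths are placeholders. Exact counts can differ slightly
# because fairseq adds special symbols (<s>, <pad>, </s>, <unk>) and may pad the vocab.
import torch

ckpt = torch.load("model.pt", map_location="cpu")
emb = ckpt["model"]["encoder.embed_tokens.weight"]
print("checkpoint embedding rows:", emb.shape[0])

with open("dict.txt") as f:
    n_entries = sum(1 for _ in f)
print("dict.txt entries (+4 special symbols):", n_entries + 4)
```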


jiaaoc commented on June 19, 2024

For the BART baseline, I think the easiest way is to change the input files (remove all the segmentations).
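
Roughly something like this (a sketch only; SEG_MARKER and the file paths are placeholders for whatever token and files the preprocessing actually uses):

```python
# Rough sketch: produce "plain" BART-baseline inputs by dropping the segmentation
# markers from the source files. SEG_MARKER and the paths are placeholders; replace
# them with the token and files that the repo's preprocessing actually uses.
SEG_MARKER = "<seg>"  # hypothetical placeholder token

def strip_segmentations(in_path, out_path, marker=SEG_MARKER):
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            # remove every marker occurrence and collapse the leftover whitespace
            cleaned = " ".join(line.replace(marker, " ").split())
            fout.write(cleaned + "\n")

for split in ("train", "val", "test"):
    strip_segmentations(f"{split}.source", f"{split}.source.noseg")
```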

