Comments (17)

ChrisGeishauser commented on June 1, 2024

Hey Nick,

regarding your question in #8 (comment)

I used the best_mle model as a starting point and did not train my own. I managed to reproduce the results once, but when I tried again I somehow failed and do not know why.

In your code, you can also use the load method provided by policy_sys instead of writing your own (but whatever feels better to you :D).

Apart from that, my code is exactly like yours: after every epoch I evaluate my model on complete rate and success rate.

I don't know if you noticed it, but they fixed a small bug in "convlab2/dialog_agent/env.py", where they added the line "self.sys_dst.state['user_action'] = dialog_act". This line is very important, as it tells the system what the last user action was.

When you pretrain the model on the MultiWOZ dataset, this information is provided, and the system really exploits that knowledge to decide on the next action. I guess that PPO failed all the time because it was initialised with a model that relies on knowing the last user action, while it didn't get that information during training with the simulator.
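
To make the role of that line concrete, here is a minimal, self-contained sketch of where it sits; the class and method names are simplified stand-ins, not the exact ConvLab-2 code:

class MiniEnv:
    """Simplified stand-in for the Environment class in convlab2/dialog_agent/env.py."""

    def __init__(self, simulator, sys_dst):
        self.simulator = simulator  # user simulator
        self.sys_dst = sys_dst      # system-side dialogue state tracker

    def step(self, sys_action):
        # the user simulator reacts to the system action with a dialog act
        dialog_act = self.simulator.response(sys_action)
        # the fix: record the last user action in the tracker state, so the
        # policy sees the same information it saw during MLE pretraining
        self.sys_dst.state['user_action'] = dialog_act
        # update the belief state and return it as the next observation
        state = self.sys_dst.update(dialog_act)
        return state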

I tried again with the bug fixed, and now PPO trains well, reaching a high performance of around 90% complete rate and 76% success rate after 10 epochs.

sherlock1987 commented on June 1, 2024

GDPL still could not train; the loss is huge!

thenickben commented on June 1, 2024

Cool. I'll try later to set up a simple script where it's easy to change hyperparameters and seeds and get the performance, and will share it here and also do some runs!

zqwerty commented on June 1, 2024

I don't know if you noticed it, but they fixed a small bug in "convlab2/dialog_agent/env.py", where they added the line "self.sys_dst.state['user_action'] = dialog_act". This line is very important, as it tells the system what the last user action was.

Yes, please update this.

sherlock1987 commented on June 1, 2024

Hey guys, I am using vanilla PPO with the original reward to train. However, the evaluation (success rate) is not good at all. It cannot go higher than about 0.3, far from 0.74.

I know using MLE could help me get to 0.74, but the thing is, I have to use vanilla PPO: since I am working on the reward function, my baseline is vanilla PPO.

Besides, this is the score after 40 epochs: [0.31500000000000006, 0.31875, 0.31375000000000003, 0.308125, 0.29875, 0.30000000000000004, 0.30500000000000005, 0.30125, 0.306875, 0.318125, 0.30562500000000004, 0.31249999999999994, 0.2975, 0.30062500000000003, 0.295, 0.30250000000000005, 0.29500000000000004, 0.29874999999999996, 0.30374999999999996, 0.2975, 0.3025, 0.29125, 0.28625, 0.2875, 0.28812499999999996, 0.28437500000000004, 0.284375, 0.29124999999999995, 0.28875, 0.286875, 0.289375, 0.303125, 0.30000000000000004, 0.3025, 0.29937499999999995, 0.301875, 0.313125, 0.30874999999999997, 0.31125, 0.30437500000000006, 0.29937499999999995, 0.295625, 0.298125, 0.30187499999999995, 0.30562500000000004, 0.30125, 0.29625, 0.29125, 0.3, 0.301875, 0.3025, 0.301875, 0.305625, 0.31499999999999995, 0.31250000000000006, 0.31125, 0.311875, 0.306875, 0.314375, 0.30875]

thenickben commented on June 1, 2024

Hey guys, I am using vanilla PPO with the original reward to train. However, the evaluation (success rate) is not good at all. It cannot go higher than about 0.3, far from 0.74.

I know using MLE could help me get to 0.74, but the thing is, I have to use vanilla PPO: since I am working on the reward function, my baseline is vanilla PPO.

Besides, this is the score after 40 epochs: [0.31500000000000006, 0.31875, 0.31375000000000003, 0.308125, 0.29875, 0.30000000000000004, 0.30500000000000005, 0.30125, 0.306875, 0.318125, 0.30562500000000004, 0.31249999999999994, 0.2975, 0.30062500000000003, 0.295, 0.30250000000000005, 0.29500000000000004, 0.29874999999999996, 0.30374999999999996, 0.2975, 0.3025, 0.29125, 0.28625, 0.2875, 0.28812499999999996, 0.28437500000000004, 0.284375, 0.29124999999999995, 0.28875, 0.286875, 0.289375, 0.303125, 0.30000000000000004, 0.3025, 0.29937499999999995, 0.301875, 0.313125, 0.30874999999999997, 0.31125, 0.30437500000000006, 0.29937499999999995, 0.295625, 0.298125, 0.30187499999999995, 0.30562500000000004, 0.30125, 0.29625, 0.29125, 0.3, 0.301875, 0.3025, 0.301875, 0.305625, 0.31499999999999995, 0.31250000000000006, 0.31125, 0.311875, 0.306875, 0.314375, 0.30875]

I think the big question here would be "why is PPO not training if you don't pre-train it with MLE?". I think the answer has to be related to the complexity of this kind of environment, where without any sort of "expert knowledge" (e.g. taking information from demonstrations, as in MLE) simple models like PPO will never train.

sherlock1987 commented on June 1, 2024

Hey guys, I am using vanilla PPO with the original reward to train. However, the evaluation (success rate) is not good at all. It cannot go higher than about 0.3, far from 0.74.
I know using MLE could help me get to 0.74, but the thing is, I have to use vanilla PPO: since I am working on the reward function, my baseline is vanilla PPO.
Besides, this is the score after 40 epochs: [0.31500000000000006, 0.31875, 0.31375000000000003, 0.308125, 0.29875, 0.30000000000000004, 0.30500000000000005, 0.30125, 0.306875, 0.318125, 0.30562500000000004, 0.31249999999999994, 0.2975, 0.30062500000000003, 0.295, 0.30250000000000005, 0.29500000000000004, 0.29874999999999996, 0.30374999999999996, 0.2975, 0.3025, 0.29125, 0.28625, 0.2875, 0.28812499999999996, 0.28437500000000004, 0.284375, 0.29124999999999995, 0.28875, 0.286875, 0.289375, 0.303125, 0.30000000000000004, 0.3025, 0.29937499999999995, 0.301875, 0.313125, 0.30874999999999997, 0.31125, 0.30437500000000006, 0.29937499999999995, 0.295625, 0.298125, 0.30187499999999995, 0.30562500000000004, 0.30125, 0.29625, 0.29125, 0.3, 0.301875, 0.3025, 0.301875, 0.305625, 0.31499999999999995, 0.31250000000000006, 0.31125, 0.311875, 0.306875, 0.314375, 0.30875]

I think the big question here would be "why is PPO not training if you don't pre-train it with MLE?". I think the answer has to be related to the complexity of this kind of environment, where without any sort of "expert knowledge" (e.g. taking information from demonstrations, as in MLE) simple models like PPO will never train.

Yeah, I agree with that. I guess that without expert trajectories, PPO will never reach that high point.

lalapo commented on June 1, 2024

Is there any update on the "RuntimeError: CUDA error: device-side assert triggered" issue?
I have trained my own RL agents, all of them pretrained using MLE. However, when I do end-to-end evaluation using the analyzer, the error "RuntimeError: CUDA error: device-side assert triggered" appears. Is there any solution to this problem?

zqwerty commented on June 1, 2024

Is there any update on the "RuntimeError: CUDA error: device-side assert triggered" issue?
I have trained my own RL agents, all of them pretrained using MLE. However, when I do end-to-end evaluation using the analyzer, the error "RuntimeError: CUDA error: device-side assert triggered" appears. Is there any solution to this problem?

This error may come from a mismatch of output dimensions. You could set CUDA_LAUNCH_BLOCKING=1 to see more detailed information.
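
For example, one way to do this from inside the evaluation script (a generic PyTorch sketch, not ConvLab-2-specific): with the variable set, CUDA kernels are launched synchronously, so the stack trace points at the operation that actually failed. It has to be set before CUDA is initialised, e.g. before importing torch; setting it on the command line when launching the script works as well.

import os

# must be set before CUDA is initialised, so do it before importing torch
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

import torch  # noqa: E402  (deliberately imported after setting the variable)

print(torch.cuda.is_available())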

YenChen-Wu commented on June 1, 2024
  1. PPO performance drops from 0.8 (40 epochs) to 0.4 (200 epochs).
  2. When I train PPO without MLE pretraining, the performance gets stuck at 0.1, while 6 months ago it reached 0.35. What has been updated in the user simulator during these months? When I look into the trajectories, the rewards are weird: sequences of rewards of 5 ruin the value estimation, e.g. -1, -1, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 (end).
    Just curious whether anyone has the same problem.

ChrisGeishauser commented on June 1, 2024

Hey Yen-Chen!

I am not sure what has been changed but maybe the following happens with the reward:

https://github.com/thu-coai/ConvLab-2/blob/master/convlab2/evaluator/multiwoz_eval.py#L417

If you look here, you get a reward of 5 if one domain has been completed successfully. Once completed, I guess it counts as completed in every subsequent turn as well, so that you still get the reward in every turn and not just once. I don't know how you feel about that, but I just changed it so that you only get rewarded for success/failure at the very end of the dialogue, as is conventional. Giving the 5 correctly would enhance learning a bit, but for now I skipped that.
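
For what it's worth, a minimal sketch of that change (illustrative only; the per-turn penalty and terminal values below are placeholders, not necessarily the exact ConvLab-2 numbers):

def terminal_only_reward(done, success, turn_penalty=-1.0,
                         success_reward=40.0, failure_reward=-20.0):
    """Give a small per-turn penalty while the dialogue is running and a single
    success/failure reward at the very end, instead of repeating the +5 domain
    reward in every turn after a domain is completed."""
    if not done:
        return turn_penalty
    return success_reward if success else failure_reward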

zqwerty commented on June 1, 2024

Hey Yen-Chen!

I am not sure what has been changed but maybe the following happens with the reward:

https://github.com/thu-coai/ConvLab-2/blob/master/convlab2/evaluator/multiwoz_eval.py#L417

If you look here, you get a reward of 5 if one domain has been completed successfully. Once completed, I guess it counts as completed in every subsequent turn as well, so that you still get the reward in every turn and not just once. I don't know how you feel about that, but I just changed it so that you only get rewarded for success/failure at the very end of the dialogue, as is conventional. Giving the 5 correctly would enhance learning a bit, but for now I skipped that.

Thanks, but I think this is not the reason. We just moved the reward given by the evaluator from https://github.com/thu-coai/ConvLab-2/blob/0bd551b5b3ad7ceb97b9d9a7e86e5b9bff8a9383/convlab2/dialog_agent/env.py to https://github.com/thu-coai/ConvLab-2/blob/master/convlab2/evaluator/multiwoz_eval.py#L417. You can choose which reward function to use in env.py.

YenChen-Wu commented on June 1, 2024

Thank you, Chris and zqwerty!

I think the domain rewards of 5 should be removed or modified. Otherwise, the system agent is encouraged to stay in the completed domain and collect rewards for free. I just experimented with PPO, and both the success rate and the variance improve after the domain rewards are removed (success rate from 0.67 to 0.72).

Some other issues:

  1. reward_tot.append(np.mean(reward))
     should be reward_tot.append(np.sum(reward)), not the mean.

  2. env = Environment(None, simulator, None, dst_sys)
     should be env = Environment(None, simulator, None, dst_sys, evaluator).
     The evaluator should be included so that the testing rewards are the same as the training ones.
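
Putting the two corrections together, a sketch of how the affected lines could look in the evaluation script (the construction of simulator, dst_sys and evaluator is assumed to happen elsewhere as usual; only the two changed calls matter here):

import numpy as np
from convlab2.dialog_agent.env import Environment


def build_eval_env(simulator, dst_sys, evaluator):
    # include the evaluator so that the testing rewards match the training ones
    return Environment(None, simulator, None, dst_sys, evaluator)


def record_dialogue_return(reward_tot, rewards):
    # append the total return of the dialogue, not the per-turn mean
    reward_tot.append(np.sum(rewards))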

aaa123git commented on June 1, 2024

Thanks, all of you. The discussions helped a lot. I followed the instructions and trained a better PPO policy. The evaluation results have been updated (PR #211). I'll share my experience.

  1. Train an MLE policy by running python convlab2/policy/mle/multiwoz/train.py with the default config. The best model is saved automatically.
  2. Train the PPO policy by running python convlab2/policy/ppo/train.py --load_path PATH_TO_MLE --epoch 40. PATH_TO_MLE is the best MLE policy trained in step 1. Note that PATH_TO_MLE should be "xxxx/best_mle" instead of "xxxx/best_mle.pol.mdl".
  3. Choose the best epoch by running python convlab2/policy/evaluate.py --load_path PATH_TO_PPO. I suggest you update the code and set calculate_reward=False. Otherwise, it may take you a while to find the success rate in the log.

zqwerty commented on June 1, 2024

@YenChen-Wu @thenickben @sherlock1987 @ChrisGeishauser
We have re-trained PPO and updated the results (PR #211), thanks @aaa123git!

ruleGreen commented on June 1, 2024

I am afraid that the evaluation results of PPO may not be correct.
There are two levels of evaluation. The first one is policy/evaluate.py, which is action-level and follows the instructions by @aaa123git. The second one is tests/test_BERTNLU_xxxx.py, which is sentence-level. These two results are not the same at all.

ChrisGeishauser commented on June 1, 2024

@ruleGreen

These two results are not the same at all.

This is reasonable, as the sentence level creates a more difficult environment for the policy. At the action level, you basically have the ground-truth user simulator action as input, whereas with BERTNLU you have the errors introduced by the NLU component on top. Moreover, I guess the policy in the sentence-level pipeline is trained at the action level and only evaluated at the sentence level. This creates a mismatch between training and testing, so a drop in performance is to be expected.
