
rfqi's Introduction

Robust Reinforcement Learning using Offline Data

An implementation of the Robust Fitted Q-Iteration (RFQI) algorithm. RFQI is introduced in our paper Robust Reinforcement Learning using Offline Data (NeurIPS'22). This implementation of RFQI builds on the implementations of BCQ and PQL.

Our method is tested on one OpenAI Gym discrete control task, CartPole, and on two MuJoCo continuous control tasks, Hopper and HalfCheetah, using the D4RL benchmark. Both MuJoCo and D4RL must therefore be installed before using this repo.
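A quick way to confirm both are set up is a plain import check (nothing repo-specific; any error here points to the MuJoCo/D4RL installation, not to this code):

import mujoco_py  # noqa: F401  raises if the MuJoCo bindings are broken
import d4rl       # noqa: F401  also registers the D4RL offline-RL environments

print("MuJoCo and D4RL imports succeeded.")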

Setup

Install requirements:

pip install -r requirements.txt

Next, you need to register the perturbed Gym environments placed under the folder perturbed_env. A recommended way to do this: first, place cartpole_perturbed.py under gym/envs/classic_control, and hopper_perturbed.py and half_cheetah_perturbed.py under gym/envs/mujoco. Then add the following to __init__.py under gym/envs:

register(
    id="CartPolePerturbed-v0",
    entry_point="gym.envs.classic_control.cartpole_perturbed:CartPolePerturbedEnv",
    max_episode_steps=200,
    reward_threshold=195.0,
)
register(
    id="HopperPerturbed-v3",
    entry_point="gym.envs.mujoco.hopper_perturbed:HopperPerturbedEnv",
    max_episode_steps=1000,
    reward_threshold=3800.0,
)
register(
    id="HalfCheetahPerturbed-v3",
    entry_point="gym.envs.mujoco.half_cheetah_perturbed:HalfCheetahPerturbedEnv",
    max_episode_steps=1000,
    reward_threshold=4800.0,
)

You can test this by running:

import gym

gym.make('HopperPerturbed-v3')
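A slightly more thorough check, using only the standard Gym API (a sketch; the perturbation parameters are left at their defaults here):

import gym

env = gym.make('HopperPerturbed-v3')
obs = env.reset()
for _ in range(10):
    # Random actions are enough to confirm the environment steps correctly.
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()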

After installing MuJoCo and D4RL, you can run the following script to download the D4RL offline data and convert it to our format, or you can go directly to the TL;DR section below:

python load_d4rl_data.py
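If you want to see roughly what this conversion involves, the sketch below uses D4RL's standard qlearning_dataset helper; the output filename and array layout are illustrative assumptions, not necessarily what load_d4rl_data.py produces:

import gym
import d4rl
import numpy as np

# Illustration only: pull a D4RL dataset and store it as flat NumPy arrays.
env = gym.make('hopper-medium-v0')
data = d4rl.qlearning_dataset(env)  # observations, actions, next_observations, rewards, terminals

np.savez(
    'hopper_medium_v0_offline.npz',  # hypothetical output path
    observations=data['observations'],
    actions=data['actions'],
    next_observations=data['next_observations'],
    rewards=data['rewards'],
    terminals=data['terminals'],
)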

TL;DR

Here you can find shell scripts that take you directly from offline data generation to evaluation results.

To get all data, run

sh scripts/gen_all_data.sh

To get all results, run

sh scripts/run_cartpole.sh
sh scripts/run_hopper.sh
sh scripts/run_half_cheetah.sh

To evaluate all pre-trained models, run

sh scripts/eval_all.sh

Detailed instructions

To generate the epsilon-greedy dataset for CartPole-v0 with epsilon=0.3, run the following:

python generate_offline_data.py --env=CartPole-v0 --gendata_pol=ppo --eps=0.3
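For reference, epsilon-greedy data collection follows the usual pattern: with probability eps the behavior policy takes a uniformly random action, and otherwise it follows the trained policy (PPO for CartPole here). A minimal sketch, where policy_action is a hypothetical stand-in for however the trained policy is loaded, not a function in this repo:

import numpy as np

def collect_eps_greedy(env, policy_action, eps=0.3, num_steps=10_000):
    """Roll out an eps-greedy version of a trained policy and record transitions."""
    transitions = []
    obs = env.reset()
    for _ in range(num_steps):
        if np.random.rand() < eps:
            action = env.action_space.sample()  # explore uniformly at random
        else:
            action = policy_action(obs)         # follow the trained policy
        next_obs, reward, done, _ = env.step(action)
        transitions.append((obs, action, reward, next_obs, float(done)))
        obs = env.reset() if done else next_obs
    return transitions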

To generate the mixed dataset specified in Appendix E.1, run the following:

python generate_offline_data.py --env=Hopper-v3 --gendata_pol=sac --eps=0.3 --mixed=True

To train an RFQI policy on Hopper-v3 with d4rl-hopper-medium-v0 and uncertainty hyperparameter rho=0.5, run:

python train_rfqi.py --env=Hopper-v3 --d4rl=True --rho=0.5

You can also train an RFQI policy on Hopper-v3 with the mixed dataset and uncertainty hyperparameter rho=0.5 by running:

python train_rfqi.py --env=Hopper-v3 --data_eps=0.3 --gendata_pol=sac --mixed=True --rho=0.5

Miscellaneous

If you are running this repo on a remote machine, remember to assign a display or virtual display so that the evaluation suite can properly generate GIFs.
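One common option on a headless machine is a virtual display via pyvirtualdisplay (an extra dependency assumed here, along with Xvfb; neither is necessarily in requirements.txt):

# Start a virtual display before creating environments that render.
from pyvirtualdisplay import Display

display = Display(visible=0, size=(1400, 900))
display.start()
# ... run the evaluation suite here ...
display.stop()

Alternatively, the evaluation command can be wrapped with xvfb-run, e.g. xvfb-run -a python <your_eval_command>.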

rfqi's People

Contributors

zaiyan-x

rfqi's Issues

Reproducing results on the mixed dataset of Hopper-v3

Hi @zaiyan-x ,

Thank you for your work.

I'm trying to reproduce your results on the mixed dataset of the Hopper-v3 environment. I start by running generate_offline_data.py to generate the mixed dataset for Hopper-v3, and then run train_rfqi.py to train the agent. However, at around 80k iterations, the critic loss grows very large and max_eta goes to zero.

I am quite confused by this behavior. Did you observe the same thing while training on Hopper-v3?
Thank you so much and have a nice day.
Best,
Linh

Question about computing the target in the robust Bellman operator

Hi @zaiyan-x,

Thank you for your amazing work.
I have been using your code base for my project. I see that when you calculate the robust Bellman target for updating the Q function, you don't multiply target_Q by the not_done variable. I'm curious about this choice: was it an implementation bug or a deliberate decision?
Here is the line I mean:

RFQI/rfqi.py, line 305 at commit 0e58372:
target_Q = reward - gamma * torch.maximum(etas - target_Q, etas.new_tensor(0)) + (1 - rho) * etas * gamma
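For comparison, a conventional (non-robust) Bellman target masks the bootstrapped term with not_done. The snippet below is only meant to make the question concrete, with dummy tensors standing in for the batch:

import torch

# Illustration only: how the terminal-state mask usually enters a Bellman target.
# Dummy values; variable names mirror the quoted line above.
reward   = torch.tensor([1.0, 0.5])
target_Q = torch.tensor([2.0, 3.0])   # Q(s', a') from the target network
not_done = torch.tensor([1.0, 0.0])   # 0 where the transition ends the episode
etas     = torch.tensor([0.5, 0.5])
gamma, rho = 0.99, 0.5

# Conventional (non-robust) target, with the bootstrapped term masked:
standard_target = reward + not_done * gamma * target_Q

# Applying the same mask to the robust target from the quoted line would read:
robust_masked = reward + not_done * gamma * (
    (1 - rho) * etas - torch.maximum(etas - target_Q, etas.new_tensor(0)))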

Thank you again and have a nice day.
Best,
Linh

Training time

Hi @zaiyan-x ,

I'm running your code and trying to reproduce your results for my project, but training takes quite a long time.
I checked the log file and found that most of the time is spent on the eta optimization, which takes around 3 to 5 seconds per agent training iteration. How long did it take you to train one agent (Hopper, HalfCheetah) in your experiments? And do you have any suggestions for reducing the training time?

Thank you a lot and have a nice day.
Best,
Linh
