
batch_rl's Introduction

Important info about this fork

This fork is meant to work together with my dopamine fork

To get started:

  • In a directory named atari, clone both this fork and my dopamine fork
  • docker pull justnikos/batchrl
  • Start a Docker container from this image that mounts the atari directory so that it is accessible inside the container. I start my container like this:
sudo docker run -d -ti --name nikos --volume="$HOME/.Xauthority:/root/.Xauthority:rw" --env="DISPLAY" --net=host -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME:/root justnikos/batchrl /bin/bash

Then I create shells like this

sudo docker exec -ti nikos /bin/bash

The reason for the X11-related options is that I can run PyCharm inside the container to edit the projects, which helps a lot when you are working on someone else's code:

pycharm-community &
  • Create a venv for this project (always a good idea). Here it is important to give the venv access to the packages already installed in the Docker image:
python -m venv --system-site-packages venv
  • Activate the venv (source venv/bin/activate)
  • Go to the dopamine repo and pip install it as editable: find where setup.py resides (typically at the repo root) and run pip install -e .
  • Now any changes in our dopamine fork are reflected immediately in batch_rl, assuming we never forget to activate the venv (see the quick check after this list)
  • Download the data from the atarilogs Azure blob (below I'm assuming the datasets end up under $HOME/breakout and $HOME/seaquest):
az storage blob download-batch --account-name atarilogs -s batchrl -d $HOME

plus any SAS tokens/connection strings you need.

  • Some useful commands (assuming you are in atari/batch_rl and have activated the venv). Test that everything is installed correctly:
python -um batch_rl.tests.fixed_replay_runner_test --replay_dir=$HOME/breakout
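
If you want to double-check that the editable Dopamine install from the list above is the one being picked up inside the venv, a minimal check (not part of the repo) is:

import dopamine
# Should print a path inside atari/dopamine (the editable checkout), not site-packages.
print(dopamine.__file__)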

Run the batch DQN agent:

python -um batch_rl.fixed_replay.train --base_dir=/tmp/breakout/dqn --replay_dir=$HOME/breakout --gin_files=batch_rl/fixed_replay/configs/dqn.gin

Run the REM agent:

python -um batch_rl.fixed_replay.train --base_dir=/tmp/breakout/rem --replay_dir=$HOME/breakout --gin_files=batch_rl/fixed_replay/configs/rem.gin --agent_name=multi_head_dqn

Run our agent:

python -um batch_rl.fixed_replay.train --base_dir=/tmp/breakout/opdqn --replay_dir=$HOME/breakout --gin_files=batch_rl/fixed_replay/configs/off_policy_dqn.gin --agent_name=off_policy_dqn

You can start TensorBoard to monitor the experiments. Forward port 6006 and run:

tensorboard --logdir /tmp/breakout/

You should now see the runs at localhost:6006. It takes a while for the first data points to appear.

An Optimistic Perspective on Offline Reinforcement Learning (ICML, 2020)

This project provides the open source implementation using the Dopamine framework for running experiments mentioned in An Optimistic Perspective on Offline Reinforcement Learning. In this work, we use the logged experiences of a DQN agent for training off-policy agents (shown below) in an offline setting (i.e., batch RL) without any new interaction with the environment during training. Refer to offline-rl.github.io for the project page.
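
Conceptually, offline (batch) training means every gradient step samples from a fixed buffer of logged transitions and never queries the environment. The following is a minimal, framework-free sketch of that idea; q_network, target_network, and buffer are hypothetical objects, not classes from this repo:

import random

GAMMA = 0.99  # discount factor

def offline_training_step(q_network, target_network, buffer, batch_size=32):
    # `buffer` is a fixed list of logged (observation, action, reward,
    # next observation) tuples; no new environment interaction happens here.
    batch = random.sample(buffer, batch_size)
    inputs, targets = [], []
    for obs, action, reward, next_obs in batch:
        # Bootstrap the target from the frozen target network.
        targets.append(reward + GAMMA * max(target_network.q_values(next_obs)))
        inputs.append((obs, action))
    # Regress Q(obs, action) towards the bootstrapped targets (hypothetical API).
    q_network.fit(inputs, targets)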

Architecture of different off-policy agents

DQN Replay Dataset (Logged DQN data)

The DQN Replay Dataset was collected as follows: We first train a DQN agent on all 60 Atari 2600 games with sticky actions enabled for 200 million frames (standard protocol) and save all of the (observation, action, reward, next observation) experience tuples (approximately 50 million) encountered during training.

This logged DQN data can be found in the public GCP bucket gs://atari-replay-datasets which can be downloaded using gsutil. To install gsutil, follow the instructions here.

After installing gsutil, run the command to copy the entire dataset:

gsutil -m cp -R gs://atari-replay-datasets/dqn ./

To download the dataset for a specific Atari 2600 game only (e.g., replace GAME_NAME with Pong to download the logged DQN replay dataset for Pong), run:

gsutil -m cp -R gs://atari-replay-datasets/dqn/[GAME_NAME] ./

This data can be generated by running the online agents using batch_rl/baselines/train.py for 200 million frames (standard protocol). Note that the dataset consists of approximately 50 million experience tuples due to a frame skip of 4 (i.e., each selected action is repeated for 4 consecutive frames). The stickiness parameter is set to 0.25, i.e., at every time step there is a 25% chance that the environment will execute the agent's previous action again, instead of the agent's new action.
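
As a rough illustration of how frame skipping and sticky actions interact at data-collection time, here is a simplified sketch; it is not the actual ALE/Dopamine wrapper, and the emulator API it calls (act, observation) is hypothetical:

import random

FRAME_SKIP = 4      # each agent action is repeated for 4 emulator frames
STICKY_PROB = 0.25  # 25% chance the previous action is executed again

def agent_step(emulator, action, prev_action):
    # Sticky actions: with probability 0.25 the emulator repeats the previous action.
    executed = prev_action if random.random() < STICKY_PROB else action
    reward = 0.0
    for _ in range(FRAME_SKIP):           # frame skipping
        reward += emulator.act(executed)  # hypothetical emulator API
    return emulator.observation(), reward, executed

# 200 million emulator frames / 4 frames per agent step = ~50 million logged tuples.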

Asymptotic Performance of offline agents on Atari-replay dataset

Number of games where a batch agent outperforms online DQN

Asymptotic performance of offline agents on DQN data

Installation

Install the dependencies below, based on your operating system, and then install Dopamine, e.g.

pip install git+https://github.com/google/dopamine.git

Finally, download the source code for batch RL, e.g.

git clone https://github.com/google-research/batch_rl.git

Ubuntu

If you don't have access to a GPU, then replace tensorflow-gpu with tensorflow in the line below (see the TensorFlow instructions for details).

sudo apt-get update && sudo apt-get install cmake zlib1g-dev
pip install absl-py atari-py gin-config gym opencv-python tensorflow-gpu

Mac OS X

brew install cmake zlib
pip install absl-py atari-py gin-config gym opencv-python tensorflow

Running Tests

Assuming that you have cloned the batch_rl repository, follow the instructions below to run unit tests.

Basic test

You can test whether basic code is working by running the following:

cd batch_rl
python -um batch_rl.tests.atari_init_test

Test for training an agent with fixed replay buffer

To test an agent using a fixed replay buffer, first download the logged DQN data for the Atari 2600 game of Pong to $DATA_DIR.

export DATA_DIR="Insert directory name here"
mkdir -p $DATA_DIR/Pong
gsutil -m cp -R gs://atari-replay-datasets/dqn/Pong/1 $DATA_DIR/Pong
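
Before running the test, you can optionally confirm that the download produced the layout assumed below (a minimal check, not part of the repo):

import os
# Lists the replay checkpoint files downloaded by the gsutil command above.
replay_dir = os.path.join(os.environ["DATA_DIR"], "Pong", "1", "replay_logs")
print(sorted(os.listdir(replay_dir)))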

Assuming the replay data is present in $DATA_DIR/Pong/1/replay_logs, run the FixedReplayDQNAgent on Pong using the logged DQN data:

cd batch_rl
python -um batch_rl.tests.fixed_replay_runner_test \
  --replay_dir=$DATA_DIR/Pong/1

Training batch agents on DQN data

The entry point to the standard Atari 2600 experiment is batch_rl/fixed_replay/train.py. Run the batch DQN agent using the following command:

python -um batch_rl.fixed_replay.train \
  --base_dir=/tmp/batch_rl \
  --replay_dir=$DATA_DIR/Pong/1 \
  --gin_files='batch_rl/fixed_replay/configs/dqn.gin'

By default, this will kick off an experiment lasting 200 training iterations (equivalent to experiencing 200 million frames for an online agent).
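
For reference, the iteration-to-frame accounting behind that equivalence works out as follows, assuming Dopamine's default of 250,000 training steps per iteration and a frame skip of 4 (both values are assumptions here; check the gin configs):

# Rough accounting; the 250k steps/iteration default is an assumption to verify.
steps_per_iteration = 250_000
frame_skip = 4
iterations = 200
print(iterations * steps_per_iteration * frame_skip)  # 200,000,000 frames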

To get finer-grained information about the process, you can adjust the experiment parameters in batch_rl/fixed_replay/configs/dqn.gin, in particular by increasing FixedReplayRunner.num_iterations to see the asymptotic performance of the batch agents. For example, run the batch REM agent for 1000 training iterations on the game of Pong using the following command:

python -um batch_rl.fixed_replay.train \
  --base_dir=/tmp/batch_rl \
  --replay_dir=$DATA_DIR/Pong/1 \
  --agent_name=multi_head_dqn \
  --gin_files='batch_rl/fixed_replay/configs/rem.gin' \
  --gin_bindings='FixedReplayRunner.num_iterations=1000' \
  --gin_bindings='atari_lib.create_atari_environment.game_name = "Pong"'

More generally, since this code is based on Dopamine, it can be easily configured using the gin configuration framework.
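
For example, gin bindings like the ones passed on the command line above can also be applied programmatically; a minimal sketch using the gin-config API (the file and binding values are taken from the commands above) is:

import gin

# Equivalent to passing --gin_files and --gin_bindings to the train script.
gin.parse_config_files_and_bindings(
    config_files=['batch_rl/fixed_replay/configs/dqn.gin'],
    bindings=['FixedReplayRunner.num_iterations = 1000'],
)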

Dependencies

The code was tested under Ubuntu 16 and uses these packages:

  • tensorflow-gpu>=1.13
  • absl-py
  • atari-py
  • gin-config
  • opencv-python
  • gym
  • numpy

Citing

If you find this open source release useful, please cite it in your paper:

Agarwal, R., Schuurmans, D., & Norouzi, M. (2020). An Optimistic Perspective on Offline Reinforcement Learning. International Conference on Machine Learning (ICML).

@inproceedings{agarwal2019optimistic,
  title={An Optimistic Perspective on Offline Reinforcement Learning},
  author={Agarwal, Rishabh and Schuurmans, Dale and Norouzi, Mohammad},
  booktitle={International Conference on Machine Learning},
  year={2020}
}

Note: A previous version of this work was titled "Striving for Simplicity in Off-Policy Deep Reinforcement Learning" and was presented as a contributed talk at the NeurIPS 2019 Deep RL Workshop.

Disclaimer: This is not an official Google product.

Contributors

agarwl, n17s, thesparta, tangbotony
