
MeDQN

This is the official implementation of the MeDQN algorithm, introduced in our paper Memory-efficient Reinforcement Learning with Value-based Knowledge Consolidation.
Note that this codebase is for Atari experiments only. Please check the implementation of MeDQN in Explorer for other games.

Catastrophic forgetting prevents an agent from learning continually. In deep RL, this problem is largely masked by using a large replay buffer. In this work, we show that by reducing forgetting with value-based knowledge consolidation, we can improve memory efficiency, sample efficiency, and computational efficiency at the same time. Specifically, in Atari games, our method (MeDQN) reduces the memory size of the replay buffer in DQN from 7GB to 0.7GB, while achieving comparable or better performance, higher sample efficiency, and faster training speed.

Please share it with anyone who might be interested. Email me or submit an issue if you have any questions!

Plot of median score on each individual Atari game for each agent (m is the size of the replay buffer):

[Figure: learning curves]

Installation

  • Python (>=3.8.10)

  • PyTorch: GPU version

  • Others: see requirements.txt.

    pip install -r requirements.txt

Usage

Hyperparameters and Experiments

All hyperparameters, including parameters for grid search, are stored in a configuration file in the directory configs. To run an experiment, a configuration index is first used to generate the configuration dict for that particular combination of hyperparameters; we then run the experiment defined by this configuration dict. All results, including log files, are saved in the directory logs. Please refer to the codebase for details.

For example, run an experiment with configuration file medqn.json and configuration index 1:

python main.py --config_file ./configs/medqn.json --config_idx 1
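
To make the mapping from a configuration index to a configuration dict concrete, here is a minimal sketch. It assumes the JSON file stores each hyperparameter as a list of candidate values and takes their Cartesian product; the key names in the example are made up for illustration, and the real logic lives in utils/sweeper.py:

import json
from itertools import product

def config_dict_from_index(config_file, config_idx):
    # Load the grid: each hyperparameter maps to a list of candidate values (assumed flat here).
    with open(config_file) as f:
        grid = json.load(f)
    keys = sorted(grid)
    combos = list(product(*(grid[k] for k in keys)))
    # Configuration indexes are 1-based and wrap around the total number of combinations.
    values = combos[(config_idx - 1) % len(combos)]
    return dict(zip(keys, values))

# Hypothetical output for illustration only:
# config_dict_from_index('./configs/medqn.json', 1)
# -> {'batch_size': 32, 'learning_rate': 0.0001, ...}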

To run a grid search, we first compute the total number of combinations in a configuration file (e.g., medqn.json):

python utils/sweeper.py

The output will be:

The number of total combinations in medqn.json: 60

Then we run all configuration indexes from 1 to 60. The simplest way is to use a bash script:

for index in {1..60}
do
  python main.py --config_file ./configs/medqn.json --config_idx $index
done

GNU Parallel is usually a better choice for scheduling a large number of jobs (here procfile is a file whose content sets the number of concurrent jobs):

parallel --ungroup --jobs procfile python main.py --config_file ./configs/medqn.json --config_idx {1} ::: $(seq 1 60)

Configuration indexes that have the same remainder modulo the total number of combinations map to the same configuration dict (except for the random seed when generate_random_seed = True). So for multiple runs, we simply add the total number of combinations to the configuration index. For example, 5 runs for configuration index 1 (a short sketch of this index arithmetic follows the commands below):

for index in 1 61 121 181 241
do
  python main.py --config_file ./configs/medqn.json --config_idx $index
done

Or a simpler way:

parallel --ungroup --jobs procfile python main.py --config_file ./configs/medqn.json --config_idx {1} ::: $(seq 1 60 300)
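
As a sanity check on this index arithmetic, the mapping from a configuration index to (combination, run) can be written out directly in Python; 60 is the total number of combinations reported by the sweeper above:

TOTAL_COMBINATIONS = 60  # reported by utils/sweeper.py for medqn.json

def combination_and_run(config_idx, total=TOTAL_COMBINATIONS):
    # 1-based configuration index -> (combination index, run number).
    combination = (config_idx - 1) % total + 1
    run = (config_idx - 1) // total + 1
    return combination, run

for idx in (1, 61, 121, 181, 241):
    print(idx, combination_and_run(idx))  # all five indexes map to combination 1, runs 1-5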

Analysis

To analyze the experimental results, just run:

python analysis.py

Inside analysis.py:

  • unfinished_index prints the configuration indexes of unfinished jobs, based on the existence of the corresponding result files.

  • memory_info prints memory usage information and generates a histogram of the memory usage distribution in the directory logs/medqn/0.

  • time_info prints time usage information and generates a histogram of the time distribution in the directory logs/medqn/0.

  • analyze generates csv files that store the training and test results.

Please check analysis.py for more details. More functions are available in utils/plotter.py.

Run gather_results.py to gather all Atari results into csv files in the directory results.

Finally, run plot.py to plot the learning curves of all Atari results in the directory results.
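
To inspect the gathered results directly, a minimal pandas sketch such as the one below can be used; the file name and column names are assumptions for illustration, so check the csv files produced by gather_results.py for the actual schema:

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names; the real schema is defined by gather_results.py.
df = pd.read_csv('./results/breakout.csv')
for agent, group in df.groupby('agent'):
    plt.plot(group['step'], group['return'], label=agent)
plt.xlabel('Step')
plt.ylabel('Return')
plt.legend()
plt.show()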

Citation

If you find this work useful to your research, please cite our paper.

@article{lan2023memoryefficient,
  title={Memory-efficient Reinforcement Learning with Value-based Knowledge Consolidation},
  author={Lan, Qingfeng and Pan, Yangchen and Luo, Jun and Mahmood, A. Rupam},
  journal={Transactions on Machine Learning Research},
  year={2023}
}

Acknowledgement

We thank the following projects, which provided great references:
