Deep Q-Network

Implementation of the DQN algorithm and six independent improvements as described in the paper Rainbow: Combining Improvements in Deep Reinforcement Learning [1].

  • DQN [2]
  • Double DQN [3]
  • Prioritized Experience Replay [4]
  • Dueling Network Architecture [5]
  • Multi-step Bootstrapping [6]
  • Distributional RL [7]
  • Noisy Networks [8]
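
As an illustration of how small most of these changes are, Double DQN only modifies how the bootstrap target is computed: the online network selects the greedy action and the target network evaluates it. Below is a minimal PyTorch-style sketch of that target, independent of this repository's code (the function and tensor names are illustrative):

import torch

def double_dqn_target(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    # Double DQN decouples action selection (online network) from action
    # evaluation (target network) to reduce the overestimation bias of DQN.
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q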

I provide a main.py script and a Jupyter notebook that demonstrate how to set up, train, and compare multiple agents to reproduce the results of the aforementioned paper.

Don't hesitate to modify the default hyperparameters or the code to see how your new algorithm compares to the standard ones.
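
The exact interface lives in main.py and the agents package, but the overall pattern is the usual act/observe/learn loop. Here is a generic sketch using the classic Gym 4-tuple step API and hypothetical act() and step() agent methods, not the repository's actual API:

import gym

def train(agent, env_id="CartPole-v1", episodes=500):
    # Generic training loop: the agent picks actions, observes transitions,
    # updates itself, and the per-episode returns are collected so that
    # several agents can be compared afterwards.
    env = gym.make(env_id)
    returns = []
    for _ in range(episodes):
        state, done, episode_return = env.reset(), False, 0.0
        while not done:
            action = agent.act(state)                            # hypothetical method
            next_state, reward, done, _ = env.step(action)
            agent.step(state, action, reward, next_state, done)  # hypothetical method
            state, episode_return = next_state, episode_return + reward
        returns.append(episode_return)
    return returns

Comparing agents then amounts to running such a loop for each of them and plotting the resulting return curves, which is what plot.py is for.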

Project Structure

├── README.md
├── main.py                             # Lab where agents are declared, trained and compared
├── .gitignore
├── agents
│   ├── dqn_agent.py                    # DQN agent
│   ├── ddqn_agent.py                   # Double DQN agent
│   └── rainbow_agent.py                # Rainbow agent
├── replay_memory
│   ├── replay_buffer.py                # The standard DQN replay memory
│   ├── prioritized_replay_buffer.py    # Prioritized replay memory using a sum tree to sample
│   └── sum_tree.py                     # Sum tree implementation used by the prioritized replay memory
└── utils
    ├── network_architectures.py        # A collection of network architectures including standard, dueling, noisy or distributional
    ├── wrappers.py                     # Wrappers and utilities to create Gym environments
    └── plot.py                         # Plot utilities to display agents' performances
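
The prioritized replay buffer samples transitions with probability proportional to their priority, and the sum tree in sum_tree.py is what makes that sampling logarithmic in the buffer size. The following is a minimal sketch of the idea, not the repository's exact implementation:

import numpy as np

class SumTree:
    # Binary tree whose leaves hold priorities and whose internal nodes hold
    # the sum of their children. Drawing a value uniformly in [0, total())
    # and walking down the tree selects a leaf with probability proportional
    # to its priority, in O(log capacity) time.

    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = np.zeros(2 * capacity - 1)

    def total(self):
        return self.tree[0]

    def update(self, leaf, priority):
        index = leaf + self.capacity - 1
        change = priority - self.tree[index]
        self.tree[index] = priority
        while index != 0:                    # propagate the change to the root
            index = (index - 1) // 2
            self.tree[index] += change

    def sample(self, value):
        index = 0
        while index < self.capacity - 1:     # descend until a leaf is reached
            left = 2 * index + 1
            if value <= self.tree[left]:
                index = left
            else:
                value -= self.tree[left]
                index = left + 1
        return index - (self.capacity - 1), self.tree[index]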

In wrappers.py, I also provide a clean implementation of a CartPole Swing Up environment. The pole starts hanging down and the cart must first swing the pole to an upright position before balancing it as in normal CartPole.
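
One way to picture that task: reset the pole to the downward position, reward the cosine of the pole angle, and disable the usual angle-based termination. The wrapper below is only an illustrative sketch of that idea on top of the classic Gym CartPole-v1 (pre-gymnasium 4-tuple step API); it is not the implementation found in wrappers.py:

import numpy as np
import gym

class CartPoleSwingUpSketch(gym.Wrapper):
    # Illustrative swing-up variant of the classic CartPole: the pole starts
    # hanging down and the reward is the cosine of the pole angle, so the
    # agent must swing the pole up before it can earn reward by balancing it.

    def __init__(self, env):
        super().__init__(env)
        # Disable the stock angle-based termination so the pole is free to
        # swing through the full circle; the episode still ends if the cart
        # leaves the track or the time limit is reached.
        self.env.unwrapped.theta_threshold_radians = np.inf

    def reset(self, **kwargs):
        self.env.reset(**kwargs)
        # Start with the pole pointing down (angle = pi) instead of upright.
        self.env.unwrapped.state = np.array([0.0, 0.0, np.pi, 0.0])
        return np.array(self.env.unwrapped.state, dtype=np.float32)

    def step(self, action):
        obs, _, done, info = self.env.step(action)
        reward = float(np.cos(obs[2]))       # +1 upright, -1 hanging down
        return obs, reward, done, info

env = CartPoleSwingUpSketch(gym.make("CartPole-v1"))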

Instructions

First, download the source code:

git clone https://github.com/maxencefaldor/dqn.git

Then set up the environment and install dqn's dependencies:

pip install -U pip
pip install -r dqn/requirements.txt

Requirements

All Python dependencies are listed in requirements.txt and are installed by the pip command above.

Acknowledgements

References

[1] Rainbow: Combining Improvements in Deep Reinforcement Learning, Hessel et al., 2017.
[2] Playing Atari with Deep Reinforcement Learning, Mnih et al., 2013.
[3] Deep Reinforcement Learning with Double Q-learning, van Hasselt et al., 2015.
[4] Prioritized Experience Replay, Schaul et al., 2015.
[5] Dueling Network Architectures for Deep Reinforcement Learning, Wang et al., 2015.
[6] Reinforcement Learning: An Introduction, Sutton and Barto, 1998.
[7] A Distributional Perspective on Reinforcement Learning, Bellemare et al., 2017.
[8] Noisy Networks for Exploration, Fortunato et al., 2017.
