For this project, the agent will be trained to navigate (and collect bananas!) in a large, square world.
- A reward of +1 is provided for collecting a yellow banana,
- A reward of -1 is provided for collecting a blue banana.
Thus, the goal of your agent is to collect as many yellow bananas as possible while avoiding blue bananas.
The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. Given this information, the agent has to learn how to best select actions. Four discrete actions are available, corresponding to:
- 0 - move forward.
- 1 - move backward.
- 2 - turn left.
- 3 - turn right.
The task is episodic, and in order to solve the environment, the agent must get an average score of +13 over 100 consecutive episodes.
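As a quick illustration of this interface, here is a minimal sketch using the unityagents API that reads the state and action sizes from the default brain and steps the environment with one random action. The executable path is only a placeholder for whichever Banana build you download in the installation steps below.

```python
import numpy as np
from unityagents import UnityEnvironment

# Path is an assumption -- point it at the Banana build for your OS.
env = UnityEnvironment(file_name="p1_navigation/Banana_Windows_x86_64/Banana.exe")

brain_name = env.brain_names[0]                 # the Banana environment exposes a single brain
brain = env.brains[brain_name]

env_info = env.reset(train_mode=True)[brain_name]
state = env_info.vector_observations[0]         # 37-dimensional state vector
state_size = len(state)
action_size = brain.vector_action_space_size    # 4 discrete actions

# Take one random action and observe the reward (+1 yellow, -1 blue, 0 otherwise).
action = np.random.randint(action_size)
env_info = env.step(action)[brain_name]
reward = env_info.rewards[0]
done = env_info.local_done[0]                   # True when the episode ends
print(state_size, action_size, reward, done)
```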
- Create (and activate) a new environment with Python 3.6.
- Linux or Mac:
conda create --name drlnd python=3.6
source activate drlnd
- Windows:
conda create --name drlnd python=3.6
activate drlnd
- Install the following dependencies in the environment (my environment was Windows 10).
- Install Unity ML-Agents
pip3 install --user mlagents
- Install Unity Agents
pip install unityagents
- Install PyTorch
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
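Once the dependencies are installed, a quick sanity check like the one below (a minimal sketch; run it inside the activated drlnd environment) confirms that PyTorch and the Unity agents package import correctly and whether the CUDA build can see a GPU:

```python
import torch
import unityagents  # imported only to verify the installation

print(torch.__version__)           # should report the version installed by conda
print(torch.cuda.is_available())   # True if the cudatoolkit=9.0 build can use a GPU
```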
- Download the Banana environment from one of the links below. You need only select the environment that matches your operating system:
- Linux: click here
- Mac OSX: click here
- Windows (32-bit): click here
- Windows (64-bit): click here
(For Windows users) Check out this link if you need help determining whether your computer is running a 32-bit or 64-bit version of the Windows operating system.
(For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the environment.
- Unzip (or decompress) the file in the p1_navigation/ folder.
(Note: the project builds successfully on my local machine, but training took a long time, so I used the Udacity workspace with GPU enabled.)
- Copy both files dqn_agent.py and model.py from the dqn/solution folder into the project folder p1_navigation/.
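As a quick check that the copy worked, the two modules can be imported and instantiated from within p1_navigation/. This is a minimal sketch; the class names and signatures follow the course solution code in dqn_agent.py and model.py, so treat them as assumptions.

```python
from dqn_agent import Agent     # DQN agent: replay buffer, target network, epsilon-greedy policy
from model import QNetwork      # the fully connected Q-network used by the agent

# 37 and 4 match the Banana environment's state and action sizes.
agent = Agent(state_size=37, action_size=4, seed=0)
print(QNetwork(state_size=37, action_size=4, seed=0))
```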
- To run the agent, three cells are added to the Navigation notebook:
- Cell for the instantiation of the agent and the reset of the environment
agent = Agent(state_size=state_size, action_size=action_size, seed=0)
env_info = env.reset(train_mode=True)[brain_name]
- Cell for running and training the agent (a fuller sketch of this loop follows this list)
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    ...
scores = dqn()
- Cell for plotting the average score per 100 episodes against the episode number (see the plotting sketch after this list)
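For reference, here is a minimal sketch of the training loop elided above. It assumes the env, brain_name and agent objects created in the earlier cells, and the Agent interface from the course solution code, i.e. agent.act(state, eps) and agent.step(state, action, reward, next_state, done).

```python
from collections import deque
import numpy as np

def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    """Deep Q-Learning training loop; returns the score of every episode."""
    scores = []                          # score of each episode
    scores_window = deque(maxlen=100)    # last 100 scores, for the +13 solve criterion
    eps = eps_start                      # epsilon for the epsilon-greedy policy
    for i_episode in range(1, n_episodes + 1):
        env_info = env.reset(train_mode=True)[brain_name]
        state = env_info.vector_observations[0]
        score = 0
        for t in range(max_t):
            action = agent.act(state, eps)                        # epsilon-greedy action
            env_info = env.step(action)[brain_name]
            next_state = env_info.vector_observations[0]
            reward = env_info.rewards[0]
            done = env_info.local_done[0]
            agent.step(state, action, reward, next_state, done)   # store experience and learn
            state = next_state
            score += reward
            if done:
                break
        scores_window.append(score)
        scores.append(score)
        eps = max(eps_end, eps_decay * eps)                       # decay epsilon
        if np.mean(scores_window) >= 13.0:
            print(f'Environment solved in {i_episode - 100} episodes, '
                  f'average score {np.mean(scores_window):.2f}')
            break
    return scores

scores = dqn()
```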
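And a minimal sketch of the plotting cell, assuming matplotlib is available in the environment and scores is the list returned by dqn():

```python
import matplotlib.pyplot as plt
import numpy as np

# Average the per-episode scores over a sliding window of 100 episodes
# so the curve matches the solve criterion (+13 over 100 consecutive episodes).
window = 100
avg_scores = [np.mean(scores[max(0, i - window + 1):i + 1]) for i in range(len(scores))]

plt.plot(np.arange(len(scores)), scores, alpha=0.3, label='Score per episode')
plt.plot(np.arange(len(avg_scores)), avg_scores, label='Average over last 100 episodes')
plt.axhline(13.0, linestyle='--', color='grey', label='Solve threshold (+13)')
plt.xlabel('Episode #')
plt.ylabel('Score')
plt.legend()
plt.show()
```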