Carla RL Project

Installation and Setup

Running the CARLA Server

Our program uses the CARLA simulator as the environment. The easiest way to install CARLA is to pull its Docker image by running docker pull carlasim/carla:0.8.2

We change one of the default settings (the timeout) when running the server, so we provide our own carla-server Docker image that builds on this one. You can build it by running docker build server -t carla-server.

Next, you can run the server Docker container with nvidia-docker run --rm -it -p 2000-2002:2000-2002 carlasim/carla:0.8.2 /bin/bash

Note that this requires nvidia-docker to be installed on your machine (which also means you will need a GPU). Finally, inside the Docker container you can start the server with ./CarlaUE4.sh /Game/Maps/Town01 -carla-server -benchmark -fps=15 -windowed -ResX=800 -ResY=600

However, since we often need to run more than one server, we recommend using the script server/run_servers.py to run the CARLA servers. You can start N servers by running python server/run_servers.py --num-servers N (the stdout and stderr logs will be under the server_output folder). To follow and tail a server's output, run docker logs -ft CONTAINER_ID.
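Each CARLA server claims a block of three consecutive ports (hence the -p 2000-2002:2000-2002 mapping above); the client logs later in this page show eight environments on ports 2000, 2003, ..., 2021. A minimal sketch of this port layout, assuming that scheme (the helper names are ours, not run_servers.py's, and the commands are printed rather than executed):

```python
# Sketch of the port layout used when running N CARLA servers: each server
# takes a block of three consecutive ports starting at 2000. This mirrors
# the single-server docker command above; it is not the actual script.
def server_ports(num_servers, base=2000, ports_per_server=3):
    """Return the first port of each server's three-port block."""
    return [base + i * ports_per_server for i in range(num_servers)]

def docker_command(port):
    """Build the docker invocation for one server (launching and the
    in-container CarlaUE4.sh call are left to run_servers.py)."""
    hi = port + 2
    return (f"nvidia-docker run --rm -d -p {port}-{hi}:{port}-{hi} "
            f"carla-server")

if __name__ == "__main__":
    for port in server_ports(8):
        print(docker_command(port))
```

With eight servers this reproduces the ports seen in the connection logs below (2000 through 2021 in steps of three).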

Running the client (training code, benchmark code)

Our code requires:

  • Python 3
  • PyTorch
  • OpenAI Gym
  • OpenAI Baselines

We suggest using our Dockerfile to install all of these dependencies. You can build our client image with docker build client -t carla-client, and then run it with nvidia-docker run -it --network=host -v $PWD:/app carla-client /bin/bash. The --network=host flag allows the Docker container to make requests to the server. Once inside the container, you can run any of our scripts, e.g. python client/train.py --config client/config/base.yaml.

Arguments and Config Files

Our client/train.py script uses both command-line arguments and a configuration file. The configuration file specifies all components of the model and should contain everything necessary to reproduce the results of a given model. The command-line arguments deal with settings that are independent of the model, such as how often to create videos or log to TensorBoard.
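The split can be sketched as follows. Apart from --config, the flag names here are illustrative (args.video_interval and args.save_dir appear in a traceback below, but the exact CLI spelling is an assumption):

```python
# Sketch of the argument/config split: the config file pins down the model,
# while command-line flags cover model-independent run settings.
# Flag names other than --config are illustrative assumptions.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Train an RL agent on CARLA")
    # Everything needed to reproduce a model lives in the config file.
    parser.add_argument("--config", default="client/config/base.yaml",
                        help="YAML file specifying all model components")
    # Model-independent settings stay on the command line.
    parser.add_argument("--video-interval", type=int, default=100,
                        help="how often to record a video")
    parser.add_argument("--save-dir", default="outputs",
                        help="where to write checkpoints, videos and logs")
    return parser

args = build_parser().parse_args(["--config", "client/config/a2c.yaml"])
```

Changing the config file should change the model; changing the flags should only change how the run is monitored and saved.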

Hyperparameter Tuning

To test a set of hyperparameters, see the scripts/test_hyperparameters_parallel.py script. It lets you specify a set of hyperparameters to test that differ from those in the client/config/base.yaml file.
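The idea of overriding a base config with a grid of candidate values can be sketched as below. The parameter names and values are illustrative assumptions, not the contents of base.yaml:

```python
# Sketch: enumerate hyperparameter combinations as overrides on top of a
# base config. Parameter names and values are illustrative assumptions.
from itertools import product

base_config = {"lr": 7e-4, "gamma": 0.99, "num_steps": 5}

search_space = {"lr": [1e-4, 7e-4], "gamma": [0.95, 0.99]}

def config_variants(base, space):
    """Yield one full config per combination in the search space."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        cfg = dict(base)
        cfg.update(zip(keys, values))
        yield cfg

variants = list(config_variants(base_config, search_space))
```

Each variant is a complete config, so any one of them could be written out and passed to client/train.py via --config.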

Benchmark Results

A2C

To reproduce our results, run a CARLA server and, inside the carla-client container, run python client/train.py --config client/config/a2c.yaml

ACKTR

To reproduce our results, run a CARLA server and, inside the carla-client container, run python client/train.py --config client/config/acktr.yaml

PPO

To reproduce our results, run a CARLA server and, inside the carla-client container, run python client/train.py --config client/config/ppo.yaml

On-Policy HER

To reproduce our results, run a CARLA server and, inside the carla-client container, run python client/train.py --config client/config/her.yaml

carla-rl's Issues

connection error

Hi,
if I run observation_space, action_space = self.remotes[0].recv(), then I get:
/home/cecilia/anaconda3/envs/ros/lib/python3.8/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
Trying to make client on port 2021
Got TCPConnectionError..sleeping for 1
(127.0.0.1:2000) failed to connect: [Errno 111] Connection refused
Got TCPConnectionError..sleeping for 1
(127.0.0.1:2003) failed to connect: [Errno 111] Connection refused
Got TCPConnectionError..sleeping for 1
Got TCPConnectionError..sleeping for 1
(127.0.0.1:2006) failed to connect: [Errno 111] Connection refused
(127.0.0.1:2012) failed to connect: [Errno 111] Connection refused
Got TCPConnectionError..sleeping for 1
(127.0.0.1:2015) failed to connect: [Errno 111] Connection refused
Got TCPConnectionError..sleeping for 1
(127.0.0.1:2009) failed to connect: [Errno 111] Connection refused
Got TCPConnectionError..sleeping for 1
(127.0.0.1:2018) failed to connect: [Errno 111] Connection refused
Got TCPConnectionError..sleeping for 1
(127.0.0.1:2021) failed to connect: [Errno 111] Connection refused
Trying to make client on port 2000
Trying to make client on port 2003
Trying to make client on port 2006
Trying to make client on port 2012
Trying to make client on port 2015
Trying to make client on port 2009
Trying to make client on port 2018
Trying to make client on port 2021

How to implement CIRL paper

How do I implement the multi-branch DDPG from the paper "CIRL: Controllable Imitative Reinforcement Learning for Vision-based Self-driving"?

docker: Error response from daemon

Thank you for your great work!

I followed your instructions here and ran nvidia-docker run -it --network=host -v $PWD:/app carla-client /bin/bash.
Error reported:

$ nvidia-docker run -it --network=host -v $PWD:/app carla-client /bin/bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"process_linux.go:385: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=10.0 brand=tesla,driver>=384,driver<385 --pid=2968 /var/lib/docker/overlay2/ef4f31179c2e2eeb33275988cc9bf2c2e03be5e9319d8baea3ceeb0e048d0497/merged]\\\\nnvidia-container-cli: requirement error: invalid expression\\\\n\\\"\"": unknown.

Do you have any suggestions?

dtype must be explicitly provided

Hi, I followed the instructions in the README, but with the CARLA server running, when I try to run a client I get the following error:

:/app# python client/train.py --config client/config/a2c.yaml
Environment 0 running in port 2000
Trying to make client on port 2000
Environment 1 running in port 2003
Trying to make client on port 2003
Environment 2 running in port 2006
Trying to make client on port 2006
Environment 3 running in port 2009
Trying to make client on port 2009
Environment 4 running in port 2012
Trying to make client on port 2012
Environment 5 running in port 2015
Trying to make client on port 2015
Environment 6 running in port 2018
Trying to make client on port 2018
Environment 7 running in port 2021
Trying to make client on port 2021
Successfully made client on port 2000
Traceback (most recent call last):
  File "client/train.py", line 301, in <module>
    main()
  File "client/train.py", line 117, in main
    video_every=args.video_interval, video_dir=os.path.join(args.save_dir, 'video', experiment_name))
  File "/app/client/envs_manager.py", line 45, in make_vec_envs
    envs = VecPyTorchFrameStack(envs, num_frame_stack, device)
  File "/app/client/envs_manager.py", line 139, in __init__
    low=low, high=high, dtype=venv.observation_space.dtype)
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/gym/spaces/box.py", line 21, in __init__
    assert dtype is not None, 'dtype must be explicitly provided. '

And from the server:
[2019.05.03-03.33.36:809][ 0]LogContentStreaming: Texture pool size now 2000 MB
[2019.05.03-03.33.37:044][ 1]LogLinux: Setting swap interval to 'Immediate'
[2019.05.03-03.33.37:044][ 1]LogLinux: Warning: Unable to set desired swap interval 'Immediate'
ERROR: tcpserver 2000 : error reading message: End of file
ERROR: tcpserver 2002 : error reading message: End of file

[2019.05.03-03.33.38:873][ 3]LogCarlaServer: Warning: Client disconnected, server needs restart
[2019.05.03-03.33.38:873][ 3]LogCarlaServer: Restarting the level...
[2019.05.03-03.33.38:885][ 4]LogNet: Browse: /Game/Maps/Town01?Name=Player?restart
[2019.05.03-03.33.38:896][ 4]LogLoad: LoadMap: /Game/Maps/Town01?Name=Player
[2019.05.03-03.33.41:203][ 4]LogAIModule: Creating AISystem for world Town01
[2019.05.03-03.33.41:399][ 4]LogLoad: Game class is 'CarlaGameMode_C'
[2019.05.03-03.33.41:657][ 4]LogWorld: Bringing World /Game/Maps/Town01.Town01 up for play (max tick rate 0) at 2019.05.03-03.33.41
[2019.05.03-03.33.41:658][ 4]LogCarla: Loading weather description from ../../../CarlaUE4/Config/CarlaWeather.ini
[2019.05.03-03.33.41:659][ 4]LogCarla: Loading weather description from ../../../CarlaUE4/Config/CarlaWeather.Town01.ini
[2019.05.03-03.33.41:661][ 4]LogCarlaServer: Waiting for the client to connect...
^C[2019.05.03-03.33.42:370][ 4]LogLinux: FLinuxPlatformMisc::RequestExit(bForce=false, ReturnCode=130)
[2019.05.03-03.33.42:370][ 4]LogLinux: FLinuxPlatformMisc::RequestExit(0)
[2019.05.03-03.33.42:370][ 4]LogGenericPlatformMisc: FPlatformMisc::RequestExit(0)
ERROR:[2019.05.03-03.33.51:661][ 4]LogCarlaServer: Warning: Failed to initialize, server needs restart
tcpserver 0 : connection failed: Operation canceled

Any ideas on why this is occurring?
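The assertion comes from gym's Box space, which in the installed gym version requires an explicit dtype. A minimal stand-in illustrating the failing check and the shape of the fix (this is not gym's real class, and the fix in envs_manager.py would be to ensure a non-None dtype reaches the Box constructor):

```python
# Minimal stand-in for the gym.spaces.Box dtype check that fires in the
# traceback above (not gym's actual implementation).
class Box:
    def __init__(self, low, high, shape=None, dtype=None):
        # This is the assertion seen in gym/spaces/box.py line 21.
        assert dtype is not None, 'dtype must be explicitly provided. '
        self.low, self.high, self.shape, self.dtype = low, high, shape, dtype

# Box(low=0, high=255) would fail with the AssertionError above, because
# no dtype is supplied. Passing the wrapped space's dtype through fixes it:
space = Box(low=0, high=255, shape=(84, 84, 3), dtype="uint8")
```

In the traceback, envs_manager.py already forwards venv.observation_space.dtype, so the likely culprit is that the wrapped space itself was built without a dtype; pinning the gym version or setting the dtype where the observation space is first created should resolve it.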

Collision

I am training CARLA with A2C, with only one process, on CPU. It runs, but I quite often see:
Episodes 312, Updates 3911, num timesteps 78220, FPS 12.578678451185294

Collision

The car crashes into walls or drives off the road quite often. Is that normal?
