
MADRaS: Introduction

Description

MADRaS is a Multi-Agent Autonomous Driving Simulator built on top of TORCS. The simulator can be used to test autonomous-vehicle algorithms, both heuristic and learning-based, in an inherently multi-agent setting.

Please note: the repository is under active development and redesign. Currently the master branch holds the Single-Agent version of MADRaS, while the Multi-Agent version lives in the devel branch.

Installation

Installation prerequisites

  • TORCS
git clone https://github.com/madras-simulator/TORCS.git
  • plib
  • Install Dependencies
sudo apt-get install libalut-dev libvorbis-dev libxrandr2 libxrandr-dev \
    zlib1g-dev libpng-dev libplib-dev libplib1 python-tk xautomation
  • Installing plib (follow the instructions on the plib page)
  • Installing TORCS
cd TORCS/
./configure --prefix=$HOME/usr/local
make && make install
make datainstall
export PATH=$HOME/usr/local/bin:$PATH
export LD_LIBRARY_PATH=$HOME/usr/local/lib:$LD_LIBRARY_PATH
  • Test that TORCS runs by typing torcs in a new terminal window.
  • Test whether the scr client is installed:
    • open TORCS and navigate to the race configuration (Race -> Quickrace -> Configure Race -> Select Drivers)
    • check the Not-Selected list for scr_serverx, where x ranges over [1, 9]

Tested on Ubuntu 16.04 and Ubuntu 18.04.

Installing MADRaS

# optionally, create and activate a virtual environment first
git clone https://github.com/madras-simulator/MADRaS
cd MADRaS/
pip3 install -e .

For further information about the simulator, please check out our Wiki.

Maintainers

Credits

Developers:

Project Manager:

Mentors:

MADRaS's People

Contributors

buridiaditya · mehakaushik · rudrasohan · santara


MADRaS's Issues

Minor bug in imports

Maybe the imports section of MADRaS/traffic/example_usage.py needs a fix.

It seems import traffic.const_vel as playGame_const_vel_s should be import traffic.const_vel_s as playGame_const_vel_s

unable to run torcs command

After installing and following all the guidelines, I got this error:

The program 'torcs' is currently not installed. You can install it by typing: sudo apt install torcs
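A likely cause (assuming TORCS was built with the `--prefix=$HOME/usr/local` from the installation section above) is that the shell that reports the error does not have the install prefix on its `PATH`. The exports below are the same ones the installation section gives; adding them to `~/.bashrc` makes them persist across terminals, after which `command -v torcs` should print `$HOME/usr/local/bin/torcs`:

```shell
# torcs was installed under $HOME/usr/local, not a system path,
# so each new shell must be told where the binary and its libraries live:
export PATH=$HOME/usr/local/bin:$PATH
export LD_LIBRARY_PATH=$HOME/usr/local/lib:$LD_LIBRARY_PATH
```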

The experiment not starting properly.

I have tried and tested the MadrasEnv in both rllab and baselines. The integration seems fine, as I currently see no errors, but I am experiencing similar problems with both. The agent moves forward, i.e. the network is producing actions, but judging from the output in the terminal it's unlikely that the algorithm is making progress.

For trying out with baselines, refer to #9.
For rllab I have created a file which implements TRPO:

from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.gym_env import GymEnv
from rllab.envs.normalized_env import normalize
from rllab.misc.instrument import run_experiment_lite
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy


def run_task(*_):
    # Please note that different environments with different action spaces may
    # require different policies. For example with a Discrete action space, a
    # CategoricalMLPPolicy works, but for a Box action space may need to use
    # a GaussianMLPPolicy (see the trpo_gym_pendulum.py example)
    env = normalize(GymEnv("Madras-v0"))

    #policy = CategoricalMLPPolicy(
    #    env_spec=env.spec,
        # The neural network policy should have two hidden layers, each with 32 hidden units.
    #    hidden_sizes=(32, 32)
    #)

    policy = GaussianMLPPolicy(
        env_spec=env.spec,
        # The neural network policy should have two hidden layers, each with 32 hidden units.
        hidden_sizes=(32, 32)
    )


    baseline = LinearFeatureBaseline(env_spec=env.spec)

    algo = TRPO(
        env=env,
        policy=policy,
        baseline=baseline,
        batch_size=4000,
        max_path_length=env.horizon,
        n_itr=50,
        discount=0.99,
        step_size=0.01,
        # Uncomment both lines (this and the plot parameter below) to enable plotting
        # plot=True,
    )
    algo.train()


run_experiment_lite(
    run_task,
    # Number of parallel workers for sampling
    n_parallel=1,
    # Only keep the snapshot parameters for the last iteration
    snapshot_mode="last",
    # Specifies the seed for the experiment. If this is not provided, a random seed
    # will be used
    seed=1,
    # plot=True,
)

How to run baselines experiments?

When I start any experiment with baselines, the agent waits for a server on a random port:
Waiting for server on 38992............
I used:
python -m baselines.run --alg=ddpg --env='Madras-v0'
With multiple agents it requests different random ports.
TORCS doesn't start automatically with baselines.

How can I specify agent's port when I am using baselines?
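For context, the logs quoted later on this page ("Specified torcs_server_port 60934 is not available. Searching for alternative...") suggest MADRaS probes whether the requested UDP port is free and otherwise picks another one. A self-contained sketch of that probing logic is below; it is illustrative only, not MADRaS's actual code, and the function names are invented:

```python
import socket

def udp_port_is_free(port, host="localhost"):
    """Return True if a UDP socket can currently be bound to `port`."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        try:
            sock.bind((host, port))
            return True
        except OSError:
            return False

def choose_torcs_server_port(requested):
    """Keep the requested port if it is free, otherwise let the OS
    assign a free ephemeral port (mirroring the 'reassigned' log line)."""
    if udp_port_is_free(requested):
        return requested
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("localhost", 0))   # port 0 -> OS picks a free port
        return sock.getsockname()[1]
```

If the environment reads its port from a config file (as `sim_options.yml` with `torcs_server_port` appears in later issues on this page), pinning the port there would be the place to start.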

Multiple starting issues

Dear community,

first of all, I am not able to install torcs 1.3.6 by following the commands you provided. I tried this multiple times, each time reverting my VM, following your version1 branch and then the master branch, but after the installation torcs is simply not found. On my other VM, I tried applying MADRaS to gym-torcs 1.3.1, which I had already installed based on yanpanlau's repository (DDPG-Keras-TensorFlow). I just wanted to explore DDPG driving with multiple vehicles, which is how I came across your repository. I don't know whether the version is the issue, but it would be sufficient if I could get the experiments running, even with version 1.3.1. So I am trying to run some experiments based on your description, by just running the examples, which ends up problematic. More precisely:

Behavior reflex -> single agent:
For Quickrace, one scr_server is already selected. Then I close torcs and go with the following:

robert@robert-VirtualBox:~/Desktop/MADRaS/MADRaS$ python3 -m example_controllers.behavior_reflex.playGame_DDPG 3101
is_training : 1
Starting best_reward : -10000
600000.0
6000
10000
1
config_file : ~/.torcs/config/raceman/quickrace.xml
2019-04-03 11:20:54.783635: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-04-03 11:20:54.795673: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2600000000 Hz
2019-04-03 11:20:54.801725: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x34ab010 executing computations on platform Host. Devices:
2019-04-03 11:20:54.801761: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
WARNING:tensorflow:From /home/robert/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Could not find old network weights
I have been asked to use port:  3101
Trying to set connection
Trying to establish connection
Waiting for server on 3101............
Count Down : 15
Trying to establish connection
Waiting for server on 3101............
Count Down : 14
Trying to establish connection
Waiting for server on 3101............

I tried again, but this time starting a quickrace in TORCS (because of "waiting for server on 3101") and executing again with port 3101. Interestingly, the simulation starts, displaying the vehicle and the track and so on, but the car does not start to drive. From the first console, where I started torcs, I get:

Visual Properties Report
------------------------
Compatibility mode, properties unknown.
Can't open file tracks/oval/backyard4/backyard4.png
gfParmSetStr: fopen (config/raceman/quickrace.xml, "wb") failed
WARNING: grscene:initBackground Failed to open shadow2.rgb for reading
WARNING:         no shadow mapping on cars for this track 
Waiting for request on port 3101
OpenAL backend info:
  Vendor: OpenAL Community
  Renderer: OpenAL Soft
  Version: 1.1 ALSOFT 1.18.2
  Available sources: 256
  Available buffers: 1024 or more
  Dynamic Sources: requested: 235, created: 235
  #static sources: 21
  #dyn sources   : 235
gfParmSetStr: fopen (config/graph.xml, "wb") failed
Timeout for client answer
Timeout for client answer
Timeout for client answer
Timeout for client answer

and from the second, where I executed the start for the single agent, it looks familiar again:

robert@robert-VirtualBox:~/Desktop/MADRaS/MADRaS$ python3 -m example_controllers.behavior_reflex.playGame_DDPG 3101
is_training : 1
Starting best_reward : -10000
600000.0
6000
10000
1
config_file : ~/.torcs/config/raceman/quickrace.xml
2019-04-03 11:30:29.466835: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-04-03 11:30:29.470110: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2600000000 Hz
2019-04-03 11:30:29.470318: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x1967420 executing computations on platform Host. Devices:
2019-04-03 11:30:29.470376: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
WARNING:tensorflow:From /home/robert/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Could not find old network weights
I have been asked to use port:  3101
Trying to set connection
Trying to establish connection
Trying to set connection
Trying to establish connection
Waiting for server on 3101............
Count Down : 15
Trying to establish connection
Waiting for server on 3101............
Count Down : 14
Trying to establish connection
Waiting for server on 3101............

Behavior reflex -> multiple agents

robert@robert-VirtualBox:~/Desktop/MADRaS/MADRaS$ python3 -m example_controllers.behavior_reflex.multi_agent
numb of workers is3
2019-04-03 11:36:17.432516: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-04-03 11:36:17.439372: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2600000000 Hz
2019-04-03 11:36:17.439582: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x1cf2d40 executing computations on platform Host. Devices:
2019-04-03 11:36:17.439647: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
WARNING:tensorflow:From /home/robert/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
/home/robert/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py:1702: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).
  warnings.warn('An interactive session is already active. This can '
Could not find old network weights
I have been asked to use port:  3001
Trying to set connection
Trying to establish connection
Could not find old network weights
I have been asked to use port:  3002
Trying to set connection
Trying to establish connection
Waiting for server on 3001............
Count Down : 15
Trying to establish connection
Could not find old network weights
I have been asked to use port:  3003
Trying to set connection
Trying to establish connection
Waiting for server on 3002............
Count Down : 15
Trying to establish connection
Waiting for server on 3001............
Count Down : 14
Trying to establish connection
Waiting for server on 3003............
Count Down : 15
Trying to establish connection
Waiting for server on 3002............
Count Down : 14
Trying to establish connection
Waiting for server on 3001............
Count Down : 13
Trying to establish connection
Waiting for server on 3003............
Count Down : 14
Trying to establish connection
Waiting for server on 3002............

PID -> single agent

robert@robert-VirtualBox:~/Desktop/MADRaS/MADRaS$ python3 -m example_controllers.pid.playGame_DDPG_pid 3001
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/robert/Desktop/MADRaS/MADRaS/example_controllers/pid/playGame_DDPG_pid.py", line 15, in <module>
    from utils.gym_madras import MadrasEnv
ModuleNotFoundError: No module named 'utils.gym_madras'

For the multiple-agents file, I receive the same error as for the single agent.

I hope the cause of the problem is not version 1.3.1, because I saw the vehicle, the track, etc., so the simulation itself seems able to work. I think actually starting it correctly is the real issue, along with the missing utils.gym_madras. Can somebody please help in any way? I am becoming really desperate and would be grateful for any help!

Best regards,
Robert
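One plausible fix for the `ModuleNotFoundError: No module named 'utils.gym_madras'` above (assuming the repository lives at `~/Desktop/MADRaS`, as the logs suggest) is to make the inner package directory importable, either by installing the package in editable mode as the README does (`cd ~/Desktop/MADRaS && pip3 install -e .`) or via `PYTHONPATH`:

```shell
# make the inner MADRaS package directory importable for this shell,
# so `from utils.gym_madras import MadrasEnv` can resolve
export PYTHONPATH=$HOME/Desktop/MADRaS/MADRaS:$PYTHONPATH
```

This is a guess from the traceback, not a confirmed fix.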

Installation on ubuntu 18.04

Some things are missing from the readme, so I am pointing them out for other people's convenience:

  1. Install freeglut
  2. Install plib in the following manner:
# export CFLAGS="-fPIC"
# export CPPFLAGS=$CFLAGS
# export CXXFLAGS=$CFLAGS

# ./configure
# make
# make install

Also, the tensorflow specified in setup.py is the CPU version, right?

TORCS installation

Will the TORCS 1.3.7 SCR patched simulator work? That's what I have at the moment

Hello, I have a question!

Hi, I am very interested in this project! I want to know: do you have the multi-agent algorithm COMA in this project? I need some help with this algorithm. Thank you very much!

Multi Agent Latency Issue

The Multi-Agent part is showing some latency when the other agents are trying to connect to the simulator.

Image observations

I am trying to take image observations from the TORCS game but I can't. It seems that the raw obs doesn't provide an image observation. I have already set the screen size to 64*64 but nothing happens.

Observation values out of range

I am getting observation values greater than 1 when I run train_rllib_agent.py
(using master branch of this repo)

2020-03-26 18:21:35,305	INFO resource_spec.py:212 -- Starting Ray with 3.27 GiB memory available for workers and up to 1.64 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2020-03-26 18:21:36,115	INFO services.py:1078 -- View the Ray dashboard at localhost:8265
2020-03-26 18:21:36,904	INFO trainer.py:420 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2020-03-26 18:21:37,054	INFO trainer.py:580 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
  warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
INFO:root:Specified torcs_server_port 60934 is not available. Searching for alternative...
INFO:root:torcs_server_port has been reassigned to 44366
INFO:root:-------------------------CURRENT TRACK:forza------------------------
UDP Timeout set to 10000000 10E-6 seconds.
Laptime limit disabled!
Noisy Sensors!
Waiting for request on port 44366
/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/from_config.py:134: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  obj = yaml.load(type_)
(pid=17992) /home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
(pid=17992)   warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
(pid=17992) /home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/from_config.py:134: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
(pid=17992)   obj = yaml.load(type_)
Traceback (most recent call last):
  File "train_rllib_agent.py", line 29, in <module>
    result = trainer.train()
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 494, in train
    raise e
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 483, in train
    result = Trainable.train(self)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/tune/trainable.py", line 254, in train
    result = self._train()
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 133, in _train
    fetches = self.optimizer.step()
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/optimizers/multi_gpu_optimizer.py", line 137, in step
    self.num_envs_per_worker, self.train_batch_size)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/optimizers/rollout.py", line 25, in collect_samples
    next_sample = ray_get_and_free(fut_sample)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/memory.py", line 29, in ray_get_and_free
    result = ray.get(object_ids)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/worker.py", line 1504, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(ValueError): ray::RolloutWorker.sample() (pid=17992, ip=192.168.0.52)
  File "python/ray/_raylet.pyx", line 452, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 430, in ray._raylet.execute_task.function_executor
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 488, in sample
    batches = [self.input_reader.next()]
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 52, in next
    batches = [self.get_data()]
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 95, in get_data
    item = next(self.rollout_provider)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 315, in _env_runner
    soft_horizon, no_done_at_end)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 404, in _process_observations
    policy_id).transform(raw_obs)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/models/preprocessors.py", line 162, in transform
    self.check_shape(observation)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/models/preprocessors.py", line 61, in check_shape
    self._obs_space, observation)
ValueError: ('Observation outside expected value range', Box(60,), array([ 5.56550782e-08,  2.02869512e-02,  5.94304986e-02,  7.98150003e-02,
        1.37273997e-01,  2.09871992e-01,  3.48023504e-01,  6.04140043e-01,
        7.67295003e-01,  8.23809981e-01,  1.00119007e+00,  8.62675011e-01,
        8.87974977e-01,  8.75970006e-01,  8.09940040e-01,  4.89742011e-01,
        3.08934003e-01,  1.72173008e-01,  1.03631496e-01,  4.32050526e-02,
        3.33332986e-01,  0.00000000e+00,  0.00000000e+00, -2.22880002e-04,
        1.00978994e+00,  9.86819983e-01,  9.75160003e-01,  9.67469990e-01,
        1.01222503e+00,  9.89215016e-01,  9.94629979e-01,  1.01731002e+00,
        9.55635011e-01,  1.00125504e+00,  1.00168002e+00,  9.95970011e-01,
        9.77665007e-01,  1.01389503e+00,  9.82594967e-01,  9.79219973e-01,
        1.04600501e+00,  9.87020016e-01,  1.00498998e+00,  1.00105000e+00,
        9.85194981e-01,  1.01610005e+00,  1.01473498e+00,  9.86200035e-01,
        9.56799984e-01,  9.75569963e-01,  9.90504980e-01,  9.76555049e-01,
        9.68820035e-01,  1.00439501e+00,  1.00737500e+00,  1.00959003e+00,
        1.03243494e+00,  9.96204972e-01,  1.01459002e+00,  1.02346492e+00]))

When I set normalize = True in sim_options.yml

I am getting this error :

2020-03-26 18:38:33,066	INFO resource_spec.py:212 -- Starting Ray with 2.69 GiB memory available for workers and up to 1.35 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2020-03-26 18:38:33,488	INFO services.py:1078 -- View the Ray dashboard at localhost:8265
2020-03-26 18:38:33,794	INFO trainer.py:420 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2020-03-26 18:38:33,874	INFO trainer.py:580 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
  warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
INFO:root:Specified torcs_server_port 60934 is not available. Searching for alternative...
INFO:root:torcs_server_port has been reassigned to 36193
INFO:root:-------------------------CURRENT TRACK:forza------------------------
UDP Timeout set to 10000000 10E-6 seconds.
Laptime limit disabled!
Noisy Sensors!
Waiting for request on port 36193
/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/from_config.py:134: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  obj = yaml.load(type_)
(pid=19494) /home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
(pid=19494)   warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
(pid=19494) /home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/from_config.py:134: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
(pid=19494)   obj = yaml.load(type_)
Traceback (most recent call last):
  File "train_rllib_agent.py", line 29, in <module>
    result = trainer.train()
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 494, in train
    raise e
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 483, in train
    result = Trainable.train(self)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/tune/trainable.py", line 254, in train
    result = self._train()
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 133, in _train
    fetches = self.optimizer.step()
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/optimizers/multi_gpu_optimizer.py", line 137, in step
    self.num_envs_per_worker, self.train_batch_size)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/optimizers/rollout.py", line 25, in collect_samples
    next_sample = ray_get_and_free(fut_sample)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/utils/memory.py", line 29, in ray_get_and_free
    result = ray.get(object_ids)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/worker.py", line 1504, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(NameError): ray::RolloutWorker.sample() (pid=19494, ip=192.168.0.52)
  File "python/ray/_raylet.pyx", line 452, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 430, in ray._raylet.execute_task.function_executor
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 488, in sample
    batches = [self.input_reader.next()]
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 52, in next
    batches = [self.get_data()]
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 95, in get_data
    item = next(self.rollout_provider)
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 301, in _env_runner
    base_env.poll()
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/env/base_env.py", line 308, in poll
    self.new_obs = self.vector_env.vector_reset()
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/env/vector_env.py", line 96, in vector_reset
    return [e.reset() for e in self.envs]
  File "/home/saivinay/Documents/TORCS/temp/lib/python3.6/site-packages/ray/rllib/env/vector_env.py", line 96, in <listcomp>
    return [e.reset() for e in self.envs]
  File "/home/saivinay/Documents/TORCS/MADRaS/MADRaS/envs/gym_madras.py", line 244, in reset
    s_t = self.observation_manager.get_obs(self.ob, self._config)
  File "/home/saivinay/Documents/TORCS/MADRaS/MADRaS/utils/observation_manager.py", line 20, in get_obs
    full_obs = self.normalize_obs(full_obs, game_config)
  File "/home/saivinay/Documents/TORCS/MADRaS/MADRaS/utils/observation_manager.py", line 31, in normalize_obs
    key, key, key, key))
  File "<string>", line 1, in <module>
NameError: name 'angle' is not defined

Can someone help regarding this problem?

Waiting for server env.reset(relaunch=false)

Hi,

I was running a simple experiment in TORCS. Calling env.reset(relaunch=True) works fine, but once I call env.reset(relaunch=False), the client is never able to connect to the server. It always says:

Waiting for server on 3001............
Count Down : 5
Waiting for server on 3001............
Count Down : 4

But when I use vtorcs from 'https://github.com/ugo-nama-kun/gym_torcs' it is able to connect even with env.reset(relaunch=False), though not with 'https://github.com/madras-simulator/TORCS'. Do you know what the issue could be here?

DDPG is not training

I have run about 1000 episodes and the car does not learn to drive. I am running the algorithm with the default params. Also, I have set the train indicator to 1. Do I have to change anything else? Am I missing something?

Initialize damage to the initial value observed.

In some tracks and for some cars, the initial value of self.ob.damage is >0. In these cases, the environment detects a collision right at the outset of an episode. To prevent this from happening, the initial value of self.damage in

class CollisionPenalty(MadrasReward):
    def __init__(self, cfg):
        self.damage = 0.0
        super(CollisionPenalty, self).__init__(cfg)

    def compute_reward(self, game_config, game_state):
        del game_config
        reward = 0.0
        if self.damage < game_state["damage"]:
            reward = -self.cfg["scale"]
        return reward

and
class Collision(MadrasDone):
    def __init__(self):
        self.damage = 0.0
        self.num_steps = 0

    def check_done(self, game_config, game_state):
        del game_config
        self.num_steps += 1
        if self.damage < game_state["damage"]:
            logging.info("Done: Episode terminated because agent collided after {} steps.".format(self.num_steps))
            self.damage = 0.0
            return True
        else:
            return False

    def reset(self):
        self.damage = 0.0
        self.num_steps = 0

should be set to the initial value of self.ob.damage observed upon reset.
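A minimal sketch of that proposed change, with the baseline damage initialized lazily from the first observed value rather than hard-coded to 0.0. The MadrasReward base class is replaced here by a plain class so the sketch is self-contained; this is illustrative, not a drop-in patch:

```python
class CollisionPenalty:
    """Penalize only *new* damage: the first observed damage value
    becomes the baseline instead of assuming the episode starts at 0.0."""

    def __init__(self, cfg):
        self.cfg = cfg
        self.damage = None          # unknown until the first observation

    def compute_reward(self, game_config, game_state):
        del game_config
        if self.damage is None:
            # adopt whatever damage the car spawns with as the baseline
            self.damage = game_state["damage"]
        reward = 0.0
        if self.damage < game_state["damage"]:
            reward = -self.cfg["scale"]
        return reward

    def reset(self):
        self.damage = None          # re-learn the baseline next episode
```

With this, a track whose cars spawn with damage 5.0 no longer triggers a collision penalty on the first step.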

Error while running multiagent.py file

I have done all the preliminary setup for running the MADRaS simulator. While executing the "multiagent.py" file, I get the following error:

Could not find old network weights
I have been asked to use port: 3001
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Waiting for server on 3001............
Count Down : 100
Could not find old network weights
I have been asked to use port: 3002
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Could not find old network weights
I have been asked to use port: 3003
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Waiting for server on 3001............
Count Down : 99
Waiting for server on 3002............
Count Down : 100
Waiting for server on 3003............
.
.
.
.

May I know where I might have gone wrong?
OR
Do we have to start a server before executing?

Docstrings

  • Remove redundant and deprecated files.

  • Add Docs for each and every function.

  • Setup Sphinx doc for HTML view of the docs.

TRPO baselines experiment is not running.

I am running the trpo_mpi code on the version1 branch. When I run the experiment it waits for a random port: "Waiting for server on 33791..."
It shows a different port when multi-threaded. I tried to find where the port is specified but couldn't. A previous issue suggests setting "visualise=False", but in version1 this "visualise" argument is not present. The installation works fine for the playDDPG example, though. Attaching a screenshot of the issue.
[Screenshot from 2019-06-14 22-41-07]

Revise Intervehicular Communication protocol

The following architecture must be implemented:

  1. The buffer should be organized as a queue of dictionaries. Each dictionary must hold the variables of one time step.
  2. Each agent has its own buffer_size. If variables older than its buffer_size are requested, it will throw an error.
  3. At every step, an agent should organize its full observation as well as its action in the form of a dictionary where each variable is accessible by its name. This dictionary should be inserted into the buffer.
  4. While making an observation, Agent_1 should request Agent_0 for sharing variables by providing a list of variable names and the number of time steps that it wants to observe.
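The four points above might be sketched as follows. This is an illustrative design sketch under the stated requirements, not existing MADRaS code; the class and method names are invented:

```python
from collections import deque

class AgentCommBuffer:
    """Sketch of the proposed protocol: a bounded queue of per-step
    dictionaries (point 1), sized per agent (point 2)."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self._steps = deque(maxlen=buffer_size)  # oldest steps fall off

    def insert(self, obs_and_action):
        """Point 3: push one time step's observation/action variables,
        each accessible by name."""
        self._steps.append(dict(obs_and_action))

    def request(self, names, steps_back=0):
        """Point 4: a peer agent asks for `names` as they were
        `steps_back` steps ago. Point 2: asking further back than
        buffer_size (or than what has been recorded) raises an error."""
        if steps_back >= self.buffer_size or steps_back >= len(self._steps):
            raise IndexError(
                "requested step older than buffer_size=%d" % self.buffer_size)
        step = self._steps[-1 - steps_back]
        return {name: step[name] for name in names}
```

Here Agent_1 observing Agent_0 would call something like `agent0_buffer.request(["speed", "steer"], steps_back=1)`.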
