
decisionforce / pgdrive


PGDrive: an open-ended driving simulator with infinite scenes from procedural generation

Home Page: https://decisionforce.github.io/pgdrive/

License: Apache License 2.0

Python 71.49% Shell 0.09% GLSL 0.82% Jupyter Notebook 27.23% Cython 0.38%
autonomous-driving reinforcement-learning machine-learning simulator procedural-generation imitation-learning generalization deep-learning simulation computer-vision

pgdrive's Introduction

PGDrive: an open-ended driving simulator with infinite scenes


This project is deprecated and merged into MetaDrive. Please follow the MetaDrive repo for the latest development and maintenance.

[ 📺 Website | 🏗 Github Repo | 📜 Documentation | 🎓 Paper ]

Welcome to PGDrive! PGDrive is a driving simulator with many key features, including:

  • 🎏 Lightweight: Extremely easy to download, install and run on almost all platforms.
  • 📷 Realistic: Accurate physics simulation and multiple sensory inputs.
  • 🚀 Efficient: Up to 500 simulation steps per second, and easy to parallelize.
  • 🗺 Open-ended: Supports generating infinite scenes and configuring various traffic, vehicle, and environmental settings.

🛠 Quick Start

Please install PGDrive via:

pip install pgdrive

If you wish to contribute to this project or make modifications, you can clone the latest version of PGDrive locally and install it via:

git clone https://github.com/decisionforce/pgdrive.git
cd pgdrive
pip install -e .

You can verify the installation and efficiency of PGDrive by running:

python -m pgdrive.examples.profile_pgdrive

The above script should be runnable on all platforms. Note: please do not run the above command in a folder that has a sub-folder called ./pgdrive.

🚕 Examples

Please run the following line to drive the car in the environment manually with your keyboard!

python -m pgdrive.examples.enjoy_manual

You can also enjoy a journey carried out by our professional driver, pretrained with reinforcement learning!

python -m pgdrive.examples.enjoy_expert

A fusion of the expert and manual controllers, where the expert tries to rescue the manually controlled vehicle from danger, can be experienced via:

python -m pgdrive.examples.enjoy_saver

To showcase the main feature, procedural generation, we provide a script demonstrating BIG:

python -m pgdrive.examples.render_big

Note that the above three scripts cannot be run on a headless machine. Please refer to the installation guide in the documentation for more information.

Running the following line allows you to draw the generated maps:

python -m pgdrive.examples.draw_maps

To build the environment in a Python script, you can simply run:

import pgdrive  # Import this package to register the environment!
import gym

env = gym.make("PGDrive-v0", config=dict(use_render=True))
# env = pgdrive.PGDriveEnv(config=dict(environment_num=100))  # Or build environment from class
env.reset()
for i in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())  # Use random policy
    env.render()
    if done:
        env.reset()
env.close()

We also provide a Colab notebook demonstrating some basic usage of PGDrive: Open In Colab

📦 Predefined environment sets

We also define several Gym environment names, so users can start training in a minimalist manner:

import gym
import pgdrive  # Register the environment
env = gym.make("PGDrive-v0")

The following table presents some predefined environment names.

| Gym Environment Name | Random Seed Range | Number of Maps | Comments |
|---|---|---|---|
| PGDrive-test-v0 | [0, 200) | 200 | Test set, held fixed across all experiments. |
| PGDrive-validation-v0 | [200, 1000) | 800 | Validation set. |
| PGDrive-v0 | [1000, 1100) | 100 | Default training setting, for quick start. |
| PGDrive-10envs-v0 | [1000, 1100) | 10 | Training environment with 10 maps. |
| PGDrive-1000envs-v0 | [1000, 1100) | 1000 | Training environment with 1000 maps. |
| PGDrive-training0-v0 | [3000, 4000) | 1000 | First set of 1000 environments. |
| PGDrive-training1-v0 | [5000, 6000) | 1000 | Second set of 1000 environments. |
| PGDrive-training2-v0 | [7000, 8000) | 1000 | Third set of 1000 environments. |
| ... | | | More map sets can be added in response to requests. |
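The naming scheme above boils down to a (start seed, number of maps) pair per environment set. A minimal sketch of that mapping, with values copied from the table (the `seed_range` helper is illustrative and not part of the PGDrive API):

```python
# Seed ranges copied from the table above; the helper below is
# illustrative only, not part of the PGDrive API.
PREDEFINED_SETS = {
    "PGDrive-test-v0": (0, 200),
    "PGDrive-validation-v0": (200, 800),
    "PGDrive-v0": (1000, 100),
    "PGDrive-10envs-v0": (1000, 10),
    "PGDrive-1000envs-v0": (1000, 1000),
    "PGDrive-training0-v0": (3000, 1000),
}

def seed_range(name):
    """Return the half-open [start, start + num_maps) seed interval."""
    start, num_maps = PREDEFINED_SETS[name]
    return start, start + num_maps
```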

🏫 Documentation

More information about PGDrive can be found in the PGDrive Documentation. Besides, the training code of our paper can be found in this repo.

📎 Citation

If you find this work useful in your project, please consider citing it via:

@article{li2020improving,
  title={Improving the Generalization of End-to-End Driving through Procedural Generation},
  author={Li, Quanyi and Peng, Zhenghao and Zhang, Qihang and Qiu, Cong and Liu, Chunxiao and Zhou, Bolei},
  journal={arXiv preprint arXiv:2012.13681},
  year={2020}
}


pgdrive's People

Contributors

edwardhk, quanyili, zhoubolei, zqh0253


pgdrive's Issues

Validate the state save & restore is working

The primary goal:

Use the trajectory from a lidar-observation environment to generate a new RGB-image observation trajectory.

This is a preliminary functionality; eventually we wish to use a lidar-based agent to generate image-based trajectories for imitation learning.

To Do List:

  • Can single vehicle state be perfectly restored?
  • Can opponent vehicle state be perfectly restored?
  • Can the image observation be perfectly restored?
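The "perfectly restored" checks in the to-do list could be automated with a field-by-field comparison of state dicts. A minimal sketch, assuming a vehicle state is a flat dict of floats (the actual PGDrive state layout may differ):

```python
import math

def state_restored(saved, restored, atol=1e-6):
    """Return True if every field of the restored state matches the
    saved state within the given absolute tolerance."""
    if saved.keys() != restored.keys():
        return False
    return all(
        math.isclose(saved[k], restored[k], abs_tol=atol)
        for k in saved
    )
```

The same check could be applied to each opponent vehicle, and (with an array comparison) to the image observation.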

Formal PGDrive v0.1.0 release roadmap

  • 1. Final update of README, with carefully polished language

  • 2. Final update of Documents

  • 3. Upload a new expert agent

  • 4. Check that all test scripts are working (remove failed ones)

  • 5. Check that all example scripts are working (three local-run examples; the Colab script should wait for the publication of the repo)

Traffic Model & Sky Box problem

SkyBox

  • Tearing on OpenGL (macOS)
  • Tearing on tinydisplay (all platforms)
  • Tearing on OpenGLES (headless machines)

Other Vehicle Model

  • Loads incorrectly on tinydisplay (all platforms)
  • Loads incorrectly on OpenGLES (all platforms)
  • Loads incorrectly on OpenGL (macOS with OpenGL 3.2)
  • Too big, bloating the repo and exhausting training resources

Add several colab examples

Including:

  • 1. Walkthrough of the scalar-observation environment
  • 2. Walkthrough of image-based observation (Colab dependency installation is also important)
  • 3. How to specify the config (change density, number of blocks, and so on)
  • 4. Bird's-eye-view demonstration of the BIG algorithm (a video or GIF would be useful)
  • 5. Using my super agent to run the environment (can be applied to all 4 examples)

Scene flicker during reset

When you set camera_height to 800 to get an overview of the whole scene, you will find that during reset, the scene appears reversed for a moment.

Reproduction:

  1. Use an arbitrary multi-map environment.
  2. Set camera_height to 800.
  3. Keep resetting the environment.
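The steps above might be scripted roughly as follows. This is a sketch: the config keys are assumptions based on the reproduction steps (in particular, the exact location of camera_height in PGDrive's config may differ), and the loop is wrapped in a function so nothing runs on import.

```python
# Config keys are assumptions based on the reproduction steps above.
REPRO_CONFIG = dict(
    environment_num=10,  # step 1: an arbitrary multi-map environment
    camera_height=800,   # step 2: overhead view of the whole scene
    use_render=True,
)

def reproduce_flicker(num_resets=20):
    """Step 3: keep resetting the environment and watch the reset frames."""
    import gym
    import pgdrive  # noqa: F401  (registers PGDrive-v0)
    env = gym.make("PGDrive-v0", config=REPRO_CONFIG)
    for _ in range(num_resets):
        env.reset()
        env.render()
    env.close()
```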

Before reset (last environment):

image

First moment when resetting:

image

Third moment when resetting:

image

The agent driving in the scene:

image

pg_world.mode should include "highway" mode

We should:

  1. Use "highway" or another string to represent the "highway_mode", instead of splitting the logic between self.pg_world.mode and self.pg_world.pg_config["highway_render"].
  2. Stop querying self.pg_world.pg_config["highway_render"] everywhere inside pg_world.

Therefore, we will have four strings representing the render mode in the internal PGWorld:

  1. none
  2. onscreen
  3. offscreen
  4. highway (or other name, I am not sure, like birdview or topdown)
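Whatever the fourth name ends up being, a single validated mode string is enough to replace the mode/highway_render split. A minimal sketch (mode names taken from the list above; the helper function is hypothetical):

```python
# The four canonical render modes proposed above; "highway" is the
# provisional name for the top-down mode.
RENDER_MODES = ("none", "onscreen", "offscreen", "highway")

def validate_render_mode(mode):
    """Reject anything outside the four canonical mode strings, so no
    code needs to consult a separate highway_render flag."""
    if mode not in RENDER_MODES:
        raise ValueError(f"unknown render mode: {mode!r}")
    return mode
```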

Rendering is completely broken

image

This issue is for reference only. We believe this bug was fixed by #86 and #146.

If any user finds this issue again, please report it to us, thanks!

Keywords: memory leak, render, rendering, pipeline, model, texture, compress, compression, break, broken, gles, Mac, platform, onscreen, window, pgdrive v0.1.0

Is depth camera working?

image

The picture produced by the depth camera looks quite strange.

In short, I have not found a file that can properly display the depth camera's black-and-white image (like the one you showed last time).

I am still skeptical of its functionality: if it only filters out the floor and the sky, that is still too far from what it is supposed to do.

2D-render

A top-down view like highway-env's, to support more research topics. Besides, Carla and other simulators also provide a simple top-down render and multi-layer image observations.

Does traffic vehicle density really take effect?

    env = PGDriveEnv({
        "environment_num": 10000,
        "traffic_density": 0.2,
        "map_config": {
            Map.GENERATE_METHOD: MapGenerateMethod.BIG_BLOCK_NUM,
            Map.GENERATE_PARA: 7,
        }
    })

image


    env = PGDriveEnv({
        "environment_num": 10000,
        "traffic_density": 0.5,
        "map_config": {
            Map.GENERATE_METHOD: MapGenerateMethod.BIG_BLOCK_NUM,
            Map.GENERATE_PARA: 7,
        }
    })

image


@lqy0057 Please take a look at the add-helper branch, pgdrive/tests/generalization_env_test/test_vehicle_num.py

Some problem in top-down viewer

  1. Layout error

image

  2. Possibly a size error?

Does the ego vehicle have the correct size compared to the traffic vehicles?

image

I suggest we stop developing this feature until after the release of v0.1.0.

Support of environment with decoupling map seed and traffic seed

Currently, the map seed and the traffic seed are coupled in PGDriveEnv.
This makes it hard to re-implement works based on Carla, since works on Carla use fixed maps with changing dynamic factors (the ego car's spawn and target places, other vehicles' spawn places, ...).
Also, I am training an IL algorithm in a 1-env setting and want to collect some expert data. I used pgdrive.examples.expert, only to find that the collected trajectories were almost identical!

So, shall we provide an environment that decouples the map seed and the traffic seed? Like:
env = PGDriveDecouplingEnv(dict(environment_num=1, traffic_num=100))
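Decoupling could be as simple as holding the map seed constant while redrawing the traffic seed each episode. A pure-Python sketch of the idea (PGDriveDecouplingEnv itself and this seed-sampling logic are hypothetical):

```python
import random

def episode_seeds(map_seed, traffic_rng, traffic_num=100):
    """Fixed map, fresh traffic: the map seed never changes, while the
    traffic seed is redrawn from [0, traffic_num) each episode."""
    return map_seed, traffic_rng.randrange(traffic_num)

# Ten episodes on map 0 with varying traffic.
rng = random.Random(42)
seeds = [episode_seeds(0, rng) for _ in range(10)]
```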

Sensors are not destroyed with the vehicle

There are still references to ImageBuffer in the PGWorld class, so Python will not release these unused objects. This is fine when we only create a vehicle once, but creating and destroying vehicles several times will cause critical bugs due to the memory leak (segmentation fault or other problems).

(Don't worry about agents training with lidar; this bug does not affect them.)
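The fix pattern is to break such references explicitly when a vehicle is destroyed, rather than relying on garbage collection. A toy sketch of that pattern (class names echo the issue; the real cleanup in PGDrive involves the graphics engine):

```python
class ImageBuffer:
    live = 0  # counts buffers still "held by the engine" in this sketch

    def __init__(self):
        ImageBuffer.live += 1

    def destroy(self):
        ImageBuffer.live -= 1


class Vehicle:
    def __init__(self):
        self.image_buffers = [ImageBuffer(), ImageBuffer()]

    def destroy(self):
        # Explicitly release every sensor buffer, so repeated
        # create/destroy cycles cannot leak memory.
        for buf in self.image_buffers:
            buf.destroy()
        self.image_buffers.clear()


# Repeated create/destroy cycles, as in the bug report.
for _ in range(100):
    Vehicle().destroy()
```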

UI-Error

The driving status label and the white-line frame are laid out incorrectly when using different resolutions.

Rename everything

  • Repo name change to pgdrive
  • System name (in setup.py's description and Readme, logo) change to PGDrive
  • python package change to pgdrive (everything in code and setup.py's name)

Strange behavior when training 7 blocks

image

The episode reward stops growing and reaches a level similar to that in the 3-block environments.

image

The training success rate is quite low for 7 blocks, even though the rewards for 7 blocks and 3 blocks are similar.

New render procedure

  • When mode != "human", return the current image
  • Use igLoop to do the rendering, instead of calling renderFrame() outside the loop
  • Move the main-camera movement and other updates into taskMgr so they happen automatically
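The first point follows the standard Gym render convention: non-"human" modes return the frame instead of drawing it. A minimal sketch (TinyEnv and _current_frame are illustrative stand-ins, not PGDrive classes):

```python
class TinyEnv:
    def _current_frame(self):
        # Stand-in for grabbing the latest rendered frame from the
        # graphics pipeline (an RGB array in the real environment).
        return [[0, 0, 0]]

    def render(self, mode="human"):
        # Non-"human" modes return the current image to the caller.
        if mode != "human":
            return self._current_frame()
        # mode == "human": drawing is left to the engine's own loop
        # (igLoop) rather than an explicit renderFrame() call here.
        return None
```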

Extremely large step reward

image

We noticed extremely large step rewards.

After fixing the velocity (#125, #129), no extremely small step rewards were found:

image

Let's take a look:

        current_lane = self.vehicle.lane
        long_last, _ = current_lane.local_coordinates(self.vehicle.last_position)
        long_now, lateral_now = current_lane.local_coordinates(self.vehicle.position)

        # Suspicious!
        reward = 0.0
        lateral_factor = 1 - 2 * abs(lateral_now) / self.current_map.lane_width
        reward += self.config["driving_reward"] * (long_now - long_last) * lateral_factor

        # Suspicious!
        steering_change = abs(self.vehicle.last_current_action[0][0] - self.vehicle.last_current_action[1][0])
        steering_penalty = self.config["steering_penalty"] * steering_change * self.vehicle.speed / 20
        reward -= steering_penalty

        # Zero!
        acceleration_penalty = self.config["acceleration_penalty"] * ((action[1])**2)
        reward -= acceleration_penalty

        # Zero!
        low_speed_penalty = 0
        if self.vehicle.speed < 1:
            low_speed_penalty = self.config["low_speed_penalty"]  # encourage car
        reward -= low_speed_penalty

        # Zero!
        reward -= self.config["general_penalty"]

        # Suspicious!
        reward += self.config["speed_reward"] * (self.vehicle.speed / self.vehicle.max_speed)
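A quick way to see why the first "Suspicious!" term can explode: lateral_factor is 1 at the lane center and 0 at the lane edge, but it is unbounded below, so once the vehicle leaves the lane the driving-reward term flips sign and grows with the offset. A small worked check, with one possible clamp (the lane width and offsets here are made-up numbers, and the clamp is a suggestion, not the shipped fix):

```python
def lateral_factor(lateral_now, lane_width):
    # As in the snippet above: 1 at the lane center, 0 at the lane
    # edge, and unbounded below once the vehicle leaves the lane.
    return 1 - 2 * abs(lateral_now) / lane_width

def clamped_lateral_factor(lateral_now, lane_width):
    # One possible fix: clamp to [0, 1] so the driving-reward term can
    # never be amplified or sign-flipped by a large lateral offset.
    return max(0.0, min(1.0, lateral_factor(lateral_now, lane_width)))

# With lane_width = 3.5 and the vehicle 7.0 off-center, the raw factor
# is 1 - 2*7/3.5 = -3: the reward term is tripled and sign-flipped.
```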

Add a knob like "accepting R as reset keyboard shortcut"

Quanyi previously made a ResetEnv with the ability to reset the environment via a keystroke. I think it would be good to integrate such functionality into the default environment when manual control is used.

We can also think of other useful commands we may wish to send to the environment, for example setting the maximum FPS.
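A sketch of such a shortcut registry (the classes and binding table are hypothetical; in PGDrive this would hook into the engine's keyboard events):

```python
class KeyboardShortcuts:
    """Maps keystrokes to environment commands; 'r' triggers reset."""

    def __init__(self, env):
        self.env = env
        self.bindings = {"r": env.reset}

    def on_key(self, key):
        handler = self.bindings.get(key)
        if handler is not None:
            handler()


class CountingEnv:
    """Dummy environment that counts resets, for demonstration only."""

    def __init__(self):
        self.resets = 0

    def reset(self):
        self.resets += 1
```

New commands (e.g. an FPS cap) would just be new entries in the bindings dict.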

Configs that can be reset and take effect at runtime

The following configs can be changed and will take effect in the next episode:

  • manual control
  • traffic density
  • traffic mode
  • image source (the buffer size should stay the same when changed)
  • decision repeat
  • the whole reward scheme
  • map config, when falling back to running BIG online (needs special processing)
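A sketch of how such runtime updates could be guarded by a whitelist and applied between episodes (the key names follow the list above but are otherwise hypothetical, as is the helper):

```python
# Hypothetical whitelist of configs that may change between episodes.
RESETTABLE_KEYS = {
    "manual_control",
    "traffic_density",
    "traffic_mode",
    "image_source",
    "decision_repeat",
}

def update_config(config, changes):
    """Apply whitelisted changes; they take effect at the next reset."""
    for key, value in changes.items():
        if key not in RESETTABLE_KEYS:
            raise KeyError(f"{key} cannot be changed at runtime")
        config[key] = value
    return config
```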
