
rl_rvo_nav's Introduction

Hi 👋, I am Ruihua Han.

  • I am deeply passionate about developing generally intelligent robotic systems with theoretical guarantees, capable of performing complex tasks at a level comparable to human capabilities.

  • My current research focuses on optimal control and motion planning for ground mobile robots navigating unknown, cluttered, and inhabited environments. I am particularly interested in integrating learning techniques with optimization theory and applying them to real robots to enhance the adaptability and efficiency of intelligent autonomous systems.

I am seeking postdoctoral opportunities in the field of robotics.


rl_rvo_nav's People

Contributors

hanruihua


rl_rvo_nav's Issues

How to change environment by adding dynamic and static obstacles?

I've changed the environment using the examples in the usage folder of intelligent-robot-simulator, but where can I apply this environment in gym_env and policy_train_process? Thank you.

Basically, how should we add the varied environments from intelligent-robot-simulator, such as dynamic obstacles and obstacles of different shapes, to this project? That is, which files do we need to add or edit, and does the reward function need changing? (See the sketch below.)
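Not an official answer, but a pointer from the tracebacks elsewhere in this issue list: the training script builds the environment through gym.make with a world_name argument, so a world file modified with intelligent-robot-simulator's obstacle syntax can presumably be supplied there. A minimal sketch, where the env id and file name are illustrative assumptions, not names confirmed by the maintainer:

    # Sketch only: 'mrnav-v1' and the YAML file name are assumptions.
    import gym
    import gym_env  # importing the package presumably registers the custom envs

    env = gym.make(
        'mrnav-v1',                              # hypothetical env id; the scripts use args.env_name
        world_name='world_with_obstacles.yaml',  # a world file edited per intelligent-robot-simulator
        robot_number=4,
    )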

I cannot reproduce the result

I can only reproduce the results in the 4-robot setting; when I use those weights to train with 10 robots, I cannot obtain good results.

Code consult

Hello, I ran train_process_s1.py and the model was saved in model_save. Then I ran policy_test.py, and it failed because a file could not be found. I want to ask the author what this binary file holds (parser.add_argument('--arg_name', default='r4_17/r4_17')):
File "policy_test.py", line 33, in
r = open(args_path, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: './policy_train/model_save/r4_17/r4_17'
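For anyone else hitting this: the path in the error suggests --arg_name must match the directory name your training run actually created under model_save. As for what the binary file holds, a plausible reading of policy_test.py's open(args_path, 'rb') is that it is the pickled training-time argument object; a minimal sketch under that assumption:

    # Assumption: the file saved as model_save/<name>/<name> is the pickled
    # argparse Namespace from training (suggested by the 'rb' open mode).
    import pickle

    args_path = './policy_train/model_save/r4_17/r4_17'  # path from the error above
    with open(args_path, 'rb') as r:
        train_args = pickle.load(r)  # hyperparameters recorded at training time
    print(train_args)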

Project error: TypeError: module() takes at most 2 arguments (3 given)

The detailed error message is as follows:
Traceback (most recent call last):
  File "E:\KT_info\rl_rvo_nav\rl_rvo_nav\policy_train\train_process_s1.py", line 98, in <module>
    env = gym.make(args.env_name, world_name=args.world_path, robot_number=args.robot_number, neighbors_region=args.neighbors_region, neighbors_num=args.neighbors_num, robot_init_mode=args.init_mode, env_train=args.env_train, random_bear=args.random_bear, random_radius=args.random_radius, reward_parameter=args.reward_parameter, full=args.full)
  File "E:\Laboratory_project\interpreter\rl_rvo_nav\lib\site-packages\gym\envs\registration.py", line 581, in make
    env_creator = load(spec_.entry_point)
  File "E:\Laboratory_project\interpreter\rl_rvo_nav\lib\site-packages\gym\envs\registration.py", line 61, in load
    mod = importlib.import_module(mod_name)
  File "E:\Software\python-3.8.2\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "E:\KT_info\rl_rvo_nav\gym_env\gym_env\envs\__init__.py", line 1, in <module>
    from gym_env.envs.mrnav import mrnav
  File "E:\KT_info\rl_rvo_nav\gym_env\gym_env\envs\mrnav.py", line 2, in <module>
    from gym_env.envs.ir_gym import ir_gym
  File "E:\KT_info\rl_rvo_nav\gym_env\gym_env\envs\ir_gym.py", line 7, in <module>
    class ir_gym(env_base):
TypeError: module() takes at most 2 arguments (3 given)
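For reference, this error is not specific to the repo: Python raises it whenever a class statement lists a module object, rather than a class, as a base. Here it means that in ir_gym.py the name env_base is bound to an ir_sim submodule instead of the class inside it, which typically indicates a mismatched ir_sim (intelligent-robot-simulator) version. A self-contained reproduction:

    # Reproduces the exact TypeError by subclassing a module object.
    import math  # any module will do

    try:
        class Broken(math):  # a module, not a class, in the base list
            pass
    except TypeError as e:
        print(e)  # -> module() takes at most 2 arguments (3 given)

    # Likely fix (an assumption about ir_sim's layout): import the class from
    # its submodule, e.g. "from ir_sim.env.env_base import env_base", or
    # install the ir_sim version this repo was developed against.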

result

I have a question about stage_1: why doesn't the success rate reach 100%?

policy_name  success rate  avg EpLen  std EpLen  avg speed  std speed
r4_0_50      12.00%        90.58       9.24      1.08       0.07
r4_0_100     51.00%        91.39      12.24      0.93       0.09
r4_0_150     39.00%        77.46       7.88      1.08       0.09
r4_0_200     84.00%        64.77       5.52      1.27       0.10
r4_0_250     72.00%        64.61       5.30      1.23       0.12

The success rate is 0.00% after the second stage

Hi,

I followed the experiments in the README.md, and the first stage works as expected. However, after training in the circle scenario with 10 robots (python train_process_s2.py), the success rate is 0.00%. The experiment log is shown below:

....
time cost in one epoch 11.53249478340149 estimated remain time 0.009610412319501242 hours
current epoch 1998
The reward in this epoch: min [-81.33, -94.36, -91.05, -72.8, -132.44, -130.41, -156.77, -158.99, -124.27, -80.02] mean [-40.36, -56.27, -42.8, -50.67, -109.06, -70.3, -99.54, -83.21, -55.97, -49.13] max [-10.39, -35.73, -0.65, -26.63, -84.51, -13.68, -42.53, -23.88, -0.79, -28.84]
Early stopping at step 0 due to reaching max kl. (repeated 9 times)
time cost in one epoch 11.119003295898438 estimated remain time 0.00617722405327691 hours
current epoch 1999
The reward in this epoch: min [-70.02, -77.01, -80.48, -62.08, -86.05, -113.78, -55.8, -77.18, -93.87, -111.82] mean [-50.08, -50.29, -50.64, -44.62, -43.91, -60.43, -39.81, -40.26, -59.32, -59.31] max [-31.91, -34.94, -24.68, -0.93, -0.76, -25.75, -17.74, -19.18, -37.56, -29.54]
Early stopping at step 0 due to reaching max kl. (repeated 9 times)
time cost in one epoch 11.041501760482788 estimated remain time 0.00306708382235633 hours
current epoch 2000
The reward in this epoch: min [-67.57, -85.75, -105.89, -89.92, -103.05, -179.82, -113.64, -159.73, -124.8, -111.59] mean [-48.79, -51.79, -55.82, -50.35, -54.2, -71.38, -42.5, -52.12, -67.09, -56.98] max [-30.68, -27.21, -35.23, -26.13, -17.7, -0.68, -0.56, -0.53, -29.92, -29.35]
Policy Test Start !
Early stopping at step 0 due to reaching max kl. (repeated 9 times)
time cost in one epoch 32.75892734527588 estimated remain time 0.0 hours
policy_name: r10_0_2000 successful rate: 0.00% average EpLen: 0 std length 0 average speed: 0.96 std speed 0.05
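A note on the repeated log line, which may help with debugging: "Early stopping at step N due to reaching max kl" is the message printed by PPO implementations in the style of OpenAI Spinning Up, which this trainer appears to follow, when the estimated KL divergence between the old and new policy exceeds a threshold during the inner update loop. Stopping at step 0 in every epoch means the policies receive essentially no gradient updates, which is consistent with the 0.00% success rate. A self-contained sketch of the mechanism (the helper and values are illustrative, not the repo's code):

    # Illustrative sketch of Spinning Up-style PPO early stopping; compute_kl
    # stands in for the per-step mean KL estimate, and target_kl is a typical
    # default, not necessarily this repo's setting.
    def ppo_update(train_pi_iters, compute_kl, target_kl=0.01):
        for i in range(train_pi_iters):
            kl = compute_kl()            # mean KL(old policy || new policy)
            if kl > 1.5 * target_kl:     # the usual Spinning Up stopping rule
                print('Early stopping at step %d due to reaching max kl.' % i)
                return
            # ...otherwise take one gradient step on the clipped PPO objective

    ppo_update(train_pi_iters=80, compute_kl=lambda: 0.5)  # stops at step 0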

Unexpected keyword argument

Hello author, when I run train_process for the first time, a TypeError occurs: make() got an unexpected keyword argument 'world_name'. The subsequent keyword arguments are rejected as well. I am a novice; I hope the author can point out what is wrong.
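For others who hit this: the message means the installed gym's make() does not accept the extra keyword arguments the script forwards to the environment constructor, which points to a gym version mismatch rather than a bug in the script (recent gym releases forward **kwargs to the env's __init__). A quick check, under that assumption:

    # Version check only; the diagnosis (make() not forwarding **kwargs in
    # the installed gym release) is an assumption, not maintainer-confirmed.
    import inspect
    import gym

    print(gym.__version__)
    print(inspect.signature(gym.make))  # should accept **kwargs to reach the env

    # If it does not, install the gym version listed in this repo's
    # requirements rather than whatever is currently in the environment.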

Where do the files under the rl_rvo_nav directory call mrnav and ir_gym?

If I understand correctly, the mrnav and ir_gym files under gym_env/envs define the environment the agents learn in, while the rl_rvo_nav folder contains the algorithms used to train the agents. However, I cannot find any file under the rl_rvo_nav folder that calls mrnav or ir_gym. Why is that?

A question about a related file

Hello author, I could not find the code referenced by the line from ir_sim.env import env_base anywhere in your project. Where does it come from, and does it matter?

How does each robot find its own goal?

Hello, in your paper and code I only see the reward for RVO-based collision avoidance; I don't see a reward for reaching the target. How is target-seeking set up for each robot?
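While waiting for the author's answer, here is the generic pattern such systems commonly use, purely as an illustration and not a transcription of this repo's reward: a dense term rewarding progress toward the goal plus a terminal arrival bonus, so each robot is steered to its own goal even though the collision-avoidance terms dominate the logs.

    # Generic goal-seeking reward shaping, NOT this repo's actual reward.
    import numpy as np

    def goal_reward(prev_pos, cur_pos, goal, arrive_radius=0.2,
                    progress_weight=1.0, arrive_bonus=15.0):
        prev_dist = np.linalg.norm(goal - prev_pos)
        cur_dist = np.linalg.norm(goal - cur_pos)
        reward = progress_weight * (prev_dist - cur_dist)  # reward for moving closer
        if cur_dist < arrive_radius:                       # bonus on reaching the goal
            reward += arrive_bonus
        return reward

    # e.g. goal_reward(np.array([0., 0.]), np.array([0.5, 0.]), np.array([5., 0.]))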

SARL for multi-robot

How do you implement SARL for multi-robot settings? Could you share the source code and instructions on how to install and train it?
