reiniscimurs / GDAE
A goal-driven autonomous exploration through deep reinforcement learning (ICRA 2022) system that combines reactive and planned robot navigation in unknown environments
Hi, I've read your paper and I'm confused about how the robot knows the location of the global goal. The environment is unknown and the global goal is unseen, so how does the robot identify the position of the global target?
I want to run the algorithm in ROS, but this algorithm is not packaged as a ROS package, so I wonder how to proceed.
I'm trying to make a package myself, but I'm wondering if there's a better way.
Hi, I was trying to get the code running, but there was an issue where the final goal was never selected as the POI, so I checked GDAM_env.py. Correct me if I'm wrong: it seems to me that the h-score mentioned in the paper is calculated in the heuristic function, but the calculation is not the same as the one described in the paper?
Thanks!
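To make the h-score question concrete, here is a generic sketch of a distance-based POI heuristic. This is not the repository's or the paper's actual formula; the function name, the additive form, and the weight `k` are all assumptions used purely for illustration:

```python
import math

def heuristic_score(robot, poi, goal, k=1.0):
    """Generic POI scoring sketch (NOT GDAE's actual h-score): trade off the
    cost of reaching a candidate point against its remaining distance to the
    global goal. `k` is an assumed weighting constant."""
    d_robot = math.dist(robot, poi)  # cost to reach the candidate POI
    d_goal = math.dist(poi, goal)    # estimated remaining cost to the goal
    return d_robot + k * d_goal      # lower score = more promising POI
```

Comparing this generic form against the `heuristic` function in GDAM_env.py should make any discrepancy with the paper's equation easy to pin down.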
What does the variable "nodes" in line 140 of GDAE_env.py represent? Are these the blue dots in the map, or each individual coordinate in the map?
Really appreciate your resources btw.
I'm learning deep reinforcement learning. Your paper is wonderful. Could you share the complete project package?
Hi Reinis Cimurs, first of all, thank you very much for sharing.
I have a question about this program: does it include both local planning and global planning?
Hi, thanks for sharing your marvelous work!
I have some issues with the simulation using GDAE, because I don't have an actual robot.
I have trained the model from your other work, "DRL navigation", changed the corresponding code in https://github.com/reiniscimurs/GDAE/blob/fc793eda8de23bed98ba3acd32908843c535510f/Code/GDAM.py, and modified the topics in GDAM_env.py so they align with my own robot's topics, as shown.
Meanwhile, slam_toolbox is used for map construction, but it seems that no goal is published and the robot cannot move at all. I checked self.nodes and found that it was not empty.
Here are my RViz screenshot and rqt_graph. I also tried to run "rostopic echo /global_goal_publisher", but no response was received. Could you give me some advice about this problem?
I'm encountering an issue with my laser sensor data. My laser sensor has a 180-degree range with 720 samples. When I attempt to divide the laser data into left and right readings, the obstacle positions mapped to the map nodes appear reversed and offset by a certain distance. Code and RViz image are attached.
Interestingly, when I use the original code with right[0:360], the mapped nodes for the right side are correct.
I'm seeking suggestions on how to correctly divide the laser data into left and right samples, as well as how to further divide the laser state into 20 ranges to enhance my robot's ability to track the goal.
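A minimal NumPy sketch of the two operations asked about above: splitting a 720-sample scan into halves and min-pooling it into 20 sectors. `split_and_bin` is a hypothetical helper, not part of the repository, and it assumes the scan is ordered counter-clockwise from the rightmost beam (standard REP 103 convention); flip the halves if your driver reports the opposite ordering:

```python
import numpy as np

def split_and_bin(ranges, n_bins=20):
    """Split a laser scan into right/left halves and min-pool into n_bins
    equal angular sectors, keeping the closest obstacle per sector (mirrors
    the 20-dimensional laser state commonly used for DRL navigation)."""
    ranges = np.asarray(ranges, dtype=float)
    mid = len(ranges) // 2
    # Assumption: beams are ordered right-to-left (counter-clockwise).
    right, left = ranges[:mid], ranges[mid:]
    sectors = np.array_split(ranges, n_bins)
    state = np.array([s.min() for s in sectors])
    return left, right, state
```

If the mapped obstacles appear mirrored, reversing the array with `ranges[::-1]` before splitting is usually the first thing to try.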
Hi Reinis,
Thank you for sharing the code and for all the support. I have trained the TD3 model for ROS Noetic. Now I want to use that model in GDAE for path planning. I tried to run it; however, here are some of the issues that I, as a beginner, could not understand.
Hello, I am also working on autonomous exploration with this robot. When can you publish your code? I recently encountered a problem in the gym-gazebo simulation. Can I talk to you? My email is [email protected]
Excuse me, I get stuck after running self.client.wait_for_server(). Do I need to write another Python server file to create a server?
Dear Reinis Cimurs,
I hope this message finds you well. First of all, thank you very much for the work you have done; it has helped me a lot. Now I'm trying to build a global navigation simulation.
I removed the move_base code and used the trained TD3 network to drive the robot, and I added the code for reward in GDAM_env:
This code is written with reference to the TD3 training code in DRL_navigation. My understanding is that it adds a reward function so the robot knows that reaching the local_goal earns a positive reward of 100, and the robot then moves to the selected POI via the trained network's output.
In the original definition of the check_goal function, I also changed the code where move_base publishes movement targets:
However, the simulation video shows that the robot makes no attempt to go to the local_goal; it feels like the robot does not know where to go. I am very confused. Could you give me some guidance?
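For reference, a hedged sketch of the kind of reward described above. The shaping terms and thresholds here are assumptions in the style of DRL-navigation-type rewards, not the repository's exact code:

```python
def get_reward(reached_goal, collision, linear_action, angular_action, min_laser):
    """Sketch of a DRL-navigation-style reward (assumed values, not GDAE's
    actual code): sparse terminal rewards plus dense shaping."""
    if reached_goal:
        return 100.0   # large positive reward for reaching the local goal
    if collision:
        return -100.0  # large penalty for hitting an obstacle
    # Dense shaping: encourage forward motion, discourage spinning and
    # driving close to obstacles.
    r3 = 1.0 - min_laser if min_laser < 1.0 else 0.0
    return linear_action / 2 - abs(angular_action) / 2 - r3 / 2
```

Note that the sparse +100 only helps if episodes actually terminate on goal arrival; without the dense shaping term toward the local_goal, the policy may show exactly the aimless behavior described above.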
Thank you for your time, and I appreciate your efforts in making this project open-source.
Hi, thanks for sharing your marvelous work!
I am currently applying this research to a real-world robot.
However, I don't have a similar LiDAR, so I am using a 2D laser LiDAR as a substitute.
The front 180-degree scan provides 253 points at a 10 Hz scanning frequency, and 510 points at 5 Hz. When running the code with the front 180-degree laser data at 10 Hz, the Points of Interest (POIs) are not generated properly.
If there are fewer than 720 data points from the front 180-degree LiDAR, is it impossible to properly generate POIs?
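One common workaround for a beam-count mismatch is to resample the scan to the expected 720 points before feeding it to the code. This is a hedged sketch, not part of GDAE; `resample_scan` is a hypothetical helper, and it assumes the sparse and dense scans cover the same field of view:

```python
import numpy as np

def resample_scan(ranges, n_out=720):
    """Linearly resample a laser scan to n_out beams over the same FOV.
    Interpolation cannot add real detail, but it lets code that expects a
    fixed 720-sample layout run on a 253- or 510-beam scan."""
    ranges = np.asarray(ranges, dtype=float)
    x_in = np.linspace(0.0, 1.0, len(ranges))
    x_out = np.linspace(0.0, 1.0, n_out)
    return np.interp(x_out, x_in, ranges)
```

Interpolating across invalid returns (inf/NaN) will smear them, so masking or clipping those values first is advisable.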
Thank you!
Hi Reinis Cimurs, may I ask why you did not continue to use PyTorch and instead used TensorFlow here? Is it feasible for me to change it to a version after TensorFlow 2.0? It seems that version 1.14 cannot be downloaded anymore.
Hi, thank you for making such great work open source. But I do have some questions:
def test(env, actor):
    while True:
        # Take an initial zero action to obtain the first observation
        action = [0.0, 0.0]
        s2, toGoal = env.step(action)
        s = np.append(s2, toGoal)
        s = np.append(s, action)
        while True:
            a = actor.predict([s])
            aIn = a
            aIn[0, 0] = (aIn[0, 0] + 1) / 4  # rescale linear velocity from [-1, 1] to [0, 0.5]
            s2, toGoal = env.step(aIn[0])
            s = np.append(s2, a[0])
            s = np.append(s, toGoal)
In GDAM.py, lines 40-41:
    s = np.append(s2, toGoal)
    s = np.append(s, action)
the state input to the network is a combination of "laser ranges + goal (distance and theta) + action (linear and angular)"; the states are combined in the same order as you set in DRL-navigation.
However, in GDAM.py, lines 49-50:
    s = np.append(s2, a[0])
    s = np.append(s, toGoal)
the order of the state combination seems to become "laser ranges + action (linear and angular) + goal (distance and theta)", which is different from the order in lines 40-41.
Is there some mistake in my understanding? Thank you very much for taking the time to answer my questions!
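The ordering concern can be checked with a quick NumPy sketch. The values below are toy stand-ins, not the repository's actual state, but they show that the two append orders produce different vectors of the same length:

```python
import numpy as np

# Toy values standing in for the real state components.
laser = np.array([1.0, 2.0, 3.0])  # laser ranges
goal = np.array([0.5, 0.1])        # distance and angle to goal
action = np.array([0.25, -0.4])    # linear and angular velocity

# Ordering used at GDAM.py lines 40-41: laser + goal + action
s_a = np.append(np.append(laser, goal), action)
# Ordering used at GDAM.py lines 49-50: laser + action + goal
s_b = np.append(np.append(laser, action), goal)

# Same length, but the feature positions differ, so a network trained on
# one ordering would receive scrambled inputs under the other.
assert s_a.shape == s_b.shape
assert not np.array_equal(s_a, s_b)
```

If the trained network expects one fixed ordering, mixing the two would indeed corrupt the goal and action features at inference time.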
Hello!
Your work is very promising. However, I couldn't find the training code and the model online. Are they available? If so, could you please upload them?
Thank you very much!
Sorry, I'm here to ask a question again.
I'm trying to execute GDAM.py,
but it can't find the file:
OSError: File /home/wenzhi/GDAE/Code/assets/launch/multi_robot_scenario.launch does not exist
I'm not sure what's wrong.
I haven't connected the device yet; I'm just trying to execute it.
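Since the launch file path in the error is hard-coded, a quick existence check before launching can confirm whether the assets are where the code expects them. A small sketch (the helper name is hypothetical; the path shown is the one from the error message and should be adjusted to your own checkout):

```python
import os

def check_launch_file(path):
    """Return True if the given launch file path exists on disk.
    Sketch only: point `path` at wherever you cloned the repository."""
    return os.path.exists(path)

# The hard-coded path from the error message; update it to your checkout.
launch_file = "/home/wenzhi/GDAE/Code/assets/launch/multi_robot_scenario.launch"
if not check_launch_file(launch_file):
    print("Launch file not found - update the hard-coded path:", launch_file)
```

If the file exists in your clone under a different prefix, updating the hard-coded path in the code is typically all that is needed.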
Another question: can the TensorFlow errors be ignored?
And what is this /r1/cmd_vel node?