
GDAE's People

Contributors

reiniscimurs


GDAE's Issues

How does the robot know the global goal's location?

Hi, I've read your paper and I'm confused about how the robot knows the location of the global goal. The environment is unknown and the global goal is unseen, so how does the robot identify the position of the global target?
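In case it helps frame the question: if the global goal is simply given as a coordinate in the robot's odometry frame at start-up (an assumption about the setup, not something stated here), the robot can track its distance and heading to it without ever observing it. A minimal sketch:

import numpy as np

# Hedged sketch: compute distance and heading to a goal coordinate that is
# assumed to be given in the robot's odometry frame at start-up.
def to_goal(robot_xy, robot_yaw, goal_xy):
    dx, dy = goal_xy[0] - robot_xy[0], goal_xy[1] - robot_xy[1]
    dist = np.hypot(dx, dy)
    angle = np.arctan2(dy, dx) - robot_yaw
    angle = (angle + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return dist, angle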

How to run it in ROS

I want to run the algorithm in ROS, but it is not packaged, so I am wondering how to proceed.

I'm trying to create a package file myself, but I'm wondering if there's a better way.

Heuristic function in GDAM_env.py different from the paper?

Hi, I was trying to get the code running, but I hit an issue where the final goal was never selected as the POI, so I checked GDAM_env.py. Correct me if I have misunderstood, but it seems the h score mentioned in the paper is calculated in the heuristic function, yet the calculation is not the same as the one described in the paper?

Thanks!

What do the "nodes" represent?

What does the variable "nodes" in line 140 of GDAE_env.py represent? Are these the blue dots in the map, or each individual coordinate in the map?

Really appreciate your resources btw.

Problem with global planning

Hi Reinis Cimurs, first of all, thank you very much for sharing.
I have a question about this program. Does this program include local planning and global planning?

Problem with simulation

Hi, thanks for sharing your marvelous work!
I have some issues with the simulation using GDAE, because I don't have an actual car.

I have trained the model from your other work, "DRL navigation", changed the corresponding code in https://github.com/reiniscimurs/GDAE/blob/fc793eda8de23bed98ba3acd32908843c535510f/Code/GDAM.py, and modified the topics in GDAM_env.py to align with my own robot's topics, as shown.

Meanwhile, slam_toolbox is used for map construction, but it seems that no goal is published and the robot cannot move at all. I checked self.nodes and found it was not empty.

Here are my RViz screenshots and rqt_graph. I tried to run rostopic echo /global_goal_publisher, but received no response. Could you give me some advice about this problem?
[Attachments: RViz screenshots and rqt_graph]
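For anyone debugging the same symptom, here is a minimal Python equivalent of the rostopic check. The topic name is taken from the message above; the MarkerArray message type is an assumption and should be replaced with whatever GDAM_env.py actually publishes.

import rospy
from visualization_msgs.msg import MarkerArray  # assumed message type

def cb(msg):
    # printf-style logging confirms that goal markers are actually arriving
    rospy.loginfo("received %d goal markers", len(msg.markers))

rospy.init_node("goal_echo")
rospy.Subscriber("/global_goal_publisher", MarkerArray, cb)
rospy.spin()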

Lidar data for a physical robot

Hi, when I was experimenting in a real-world environment, I noticed that my LiDAR data was often abnormal, as shown below. Has this happened to you? I would appreciate your advice.

[Attachments: LiDAR scan screenshots]

Misalignment in Laser Data Partitioning

I'm encountering an issue with my laser sensor data. My laser has a 180-degree range with 720 samples. When I attempt to divide the laser data into left and right readings, the obstacle positions mapped to the map nodes appear reversed and offset by a certain distance. The code and an RViz image are attached.
[Attachments: modified code and RViz screenshot]
Interestingly, when I use the original code with right[0:360], the mapped nodes for the right side are correct.
[Attachments: original code and RViz screenshot]
I'm seeking suggestions on how to correctly divide the laser data into left and right samples, as well as how to further divide the laser state into 20 ranges to improve my robot's ability to track the goal.
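A minimal sketch of one way to do both splits, assuming a 180-degree, 720-sample scan. The names and the left/right ordering are illustrative, since which end of the array is the rightmost ray depends on the LiDAR driver's angle convention, so verify it in RViz first.

import numpy as np

def split_and_bin(ranges, n_bins=20):
    ranges = np.asarray(ranges, dtype=float)
    mid = len(ranges) // 2                     # 360 for a 720-sample scan
    right, left = ranges[:mid], ranges[mid:]   # swap if your mapping is mirrored
    # compress the full scan into n_bins sectors, keeping the closest
    # return per sector so nearby obstacles are never averaged away
    bins = np.array_split(ranges, n_bins)
    state = np.array([b.min() for b in bins])
    return left, right, state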

How to use the TD3 trained model for GDAE for beginners?

Hi Reinis,

Thank you for sharing the code and for all the support. I have trained the TD3 model for Noetic. Now I want to use that model in GDAE for path planning. I tried to run it; however, there are some issues I could not understand as a beginner.

  1. How do I start the path planner server? I tried running roscore in a separate shell, but the robot does not move. Is that the right way?
  2. How do I link GDAE with the trained TD3 model? That is, how do I use the trained model files (TD3_velodyne_actor.pth and/or TD3_velodyne_critic.pth)? I could not find a place for them in the GDAE code files. (See the loading sketch below.)
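For question 2, here is a minimal sketch of loading the trained actor in PyTorch. The 800/600 layer sizes and the 24-dimensional state follow the DRL-robot-navigation repository, but treat them as assumptions to verify against your own training code; also note that the GDAE code in this repo is TensorFlow-based, so the .pth weights cannot be dropped into it directly.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.layer_1 = nn.Linear(state_dim, 800)
        self.layer_2 = nn.Linear(800, 600)
        self.layer_3 = nn.Linear(600, action_dim)

    def forward(self, s):
        s = F.relu(self.layer_1(s))
        s = F.relu(self.layer_2(s))
        return torch.tanh(self.layer_3(s))

# state: 20 laser bins + goal (distance, angle) + last action (linear, angular)
actor = Actor(state_dim=24, action_dim=2)
actor.load_state_dict(torch.load("TD3_velodyne_actor.pth", map_location="cpu"))
actor.eval()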

Simulation problem

Hello, I am also working on autonomous exploration with this robot. When can you publish your code? I recently encountered a problem in the gym-gazebo simulation. Can I talk to you? My email is [email protected]

Procedural issues

Excuse me, the program stops after running self.client.wait_for_server(). Do I need to write another server Python file to create a server?
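A minimal check, assuming the client in GDAM_env.py targets the standard move_base action server, which is started by the navigation stack's launch files rather than a Python file you write yourself:

import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction

rospy.init_node("server_check")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
# wait_for_server returns False if no server appears within the timeout
if client.wait_for_server(rospy.Duration(5.0)):
    rospy.loginfo("move_base action server is up")
else:
    rospy.logwarn("move_base not found - is the navigation stack launched?")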

Global Navigation: Does the robot know the next waypoint by setting the reward function?

Dear Reinis Cimurs,
I hope this message finds you well. First of all, thank you very much for the work you have done; it has helped me a lot. I am now trying to build a global navigation simulation.
I removed the move_base code and used the trained TD3 network to drive the robot, and I added reward code in GDAM_env:

[Attachments: GDAM_env code screenshots]

This code is written with reference to the TD3 training code in DRL_navigation. My understanding is that adding the reward function lets the robot know that reaching the local_goal yields a positive reward of 100, and the robot then moves to the selected POI through the trained network's output.
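For reference, a minimal sketch in the style of DRL-robot-navigation's reward; the thresholds and weights here are assumptions, and at test time the trained policy only needs the network's action output, so a reward term is not strictly required:

def get_reward(reached_goal, collision, action, min_laser):
    # terminal rewards, as described above
    if reached_goal:
        return 100.0
    if collision:
        return -100.0
    # dense shaping: favor forward speed, penalize turning and obstacle proximity
    r3 = (1.0 - min_laser) if min_laser < 1.0 else 0.0
    return action[0] / 2 - abs(action[1]) / 2 - r3 / 2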

In the original definition of the check_goal function, I also changed the code that publishes moving targets via move_base:
[Attachment: check_goal code screenshot]

[Attachment: video GDAM_4_7.mp4]

However, the simulation video shows that the robot makes no attempt to go to the local_goal; it seems the robot does not know where to go. I am very confused. Can you give me some guidance?
Thank you for your time, and I appreciate your efforts in making this project open-source.

The relationship between the number of forward 180-degree laser samples and the generation of Points of Interest (POI)

Hi, thanks for sharing your marvelous work!

I am currently applying this research to a real-world robot.

However, I don't have a similar LiDAR, so I am using a 2D laser LiDAR as a substitute.

The front 180-degree laser data provides 253 points at a 10 Hz scanning frequency, and 510 points at 5 Hz. When running the code with the 10 Hz data, the Points of Interest (POI) are not generated properly.

If there are fewer than 720 data points from the front 180-degree LiDAR, will it be impossible to properly generate POIs?

Thank you!
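One workaround worth trying, as a minimal sketch: linearly interpolate the sparser scan up to the 720 samples the code appears to expect. The 720-sample expectation is taken from the question, and inf/NaN returns in a real scan would need masking first.

import numpy as np

def resample_scan(ranges, target=720):
    ranges = np.asarray(ranges, dtype=float)
    src = np.linspace(0.0, 1.0, num=len(ranges))  # original sample positions
    dst = np.linspace(0.0, 1.0, num=target)       # desired sample positions
    return np.interp(dst, src, ranges)

# e.g. upsample a 253-point, 10 Hz scan to 720 points
scan_720 = resample_scan(np.random.uniform(0.5, 10.0, size=253))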

Why use TensorFlow instead of PyTorch?

Hi Reinis Cimurs, may I ask why you did not continue to use PyTorch and instead used TensorFlow here? Would it be feasible for me to switch to a version after TensorFlow 2.0? It seems version 1.14 can no longer be downloaded.
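If upgrading, one commonly used path is TensorFlow 2.x's compatibility module; whether GDAE's graph-mode code runs unmodified this way is untested here, so treat this as a sketch only:

# run TF 1.x-style graph code on a TensorFlow 2.x installation
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... build and run the TF1-style graph as before ...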

Why divide the laser state by 10?

[Attachment: code screenshot]
Here, after dividing the readings into 19 groups, why divide the laser state by 10? If I use the simulated laser from the drl-robot-navigation project and divide 360 readings into 20 groups, do I still need to divide by 10?
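A minimal sketch of the binning-plus-scaling step, under the assumption that the division by 10 normalizes distances by the laser's 10 m maximum range; if so, the group count does not change the scale factor, but a different maximum range would.

import numpy as np

def laser_state(ranges, n_bins=20, max_range=10.0):
    # keep the closest return per sector, then scale distances to [0, 1]
    bins = np.array_split(np.asarray(ranges, dtype=float), n_bins)
    return np.array([b.min() for b in bins]) / max_range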

Problem with GDAM.py

Hi, thank you for making such great work open source. But I do have some questions:
def test(env, actor):

    while True:
        action = [0.0, 0.0]
        s2, toGoal = env.step(action)
        s = np.append(s2, toGoal)
        s = np.append(s, action)

        while True:

            a = actor.predict([s])
            aIn = a
            aIn[0,0] = (aIn[0,0]+1)/4
            s2, toGoal = env.step(aIn[0])
            s = np.append(s2, a[0])
            s = np.append(s, toGoal)

In GDAM.py, lines 40-41:
s = np.append(s2, toGoal)
s = np.append(s, action)
the state input to the network is a combination of "laser ranges + goal (distance and angle) + action (linear and angular)"; the states are combined in the same order as in DRL-navigation.
However, in GDAM.py, lines 49-50:
s = np.append(s2, a[0])
s = np.append(s, toGoal)
the order of the state combination appears to become "laser ranges + action (linear and angular) + goal (distance and angle)", which differs from the order used in lines 40-41.
Is there a mistake in my understanding? Thank you very much for taking the time to answer my questions!
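For what it's worth, a minimal sketch of assembling the state through one helper, so the inference-time layout cannot drift from the training-time layout (names are illustrative):

import numpy as np

# fixed layout: laser ranges, then goal (distance, angle), then last action
def make_state(laser, to_goal, last_action):
    s = np.append(laser, to_goal)
    return np.append(s, last_action)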

About the training code and the model

Hello!
Your work is very promising. However, I couldn't find the training code or the model online. Are they available? If so, could you please upload them?
Thank you very much!

Executing GDAM.py

Sorry, I'm here to ask a question again. I'm trying to execute GDAM.py, but the program can't find a file:

OSError: File /home/wenzhi/GDAE/Code/assets/launch/multi_robot_scenario.launch does not exist
[Attachment: terminal screenshot]
I'm not sure what's wrong. I haven't connected the device yet; I'm just trying to execute the script.
Another question: can the TensorFlow errors be ignored? And what is the /r1/cmd_vel node?
