
Comments (15)

laurimi commented on August 17, 2024

Hi,
I think that since the warning refers to explore_costmap/explore_boundary, this message actually originated from the frontier_exploration node that is run to support the node provided by this package. Take a look at this issue, too: paulbovbel/frontier_exploration#21
You could check to make sure; if that is the case, it may be necessary to modify that node so it can handle such costmap resizing. Do you also get the same warnings if you use another SLAM method?

The default parameters should be mostly fine for a stock Turtlebot. Of course, it helps if you tune the parameters to match your physical system (e.g., laser scanner, robot velocity, etc.). If you have a powerful computer, you could try to increase the horizon or the number of samples a bit for better exploration performance. It's hard to give suggestions that work in general, as exploration performance depends on many things, including the features of the environment.

from ase_exploration.

Severn-Vergil commented on August 17, 2024

Thank you, @laurimi
I've tried your simulation system using gmapping and teb_local_planner, and I find the costmap error still occurs sometimes. But a more complex problem has appeared: the exploration goal keeps getting stuck at the boundary, as this video shows: https://youtu.be/VQIrX6x1TfM
And the problem occurs in the real world exploration process as well.

Here are my exploration planner parameters:

```yaml
# These parameters are set at startup
map_frame_id: map
base_frame_id: base_link

goal_execution_time: 10.0
min_traj_length: 0.2
min_reward: 60.0
max_sampling_tries: 30
frontier_distance_threshold: 1.0
wait_between_planner_iterations: false

lin_v_min: 0.2
lin_v_max: 0.5
ang_v_min: -0.2
ang_v_max: 0.2
f_rot_min: -0.02
f_rot_max: 0.02

# The following can also be changed via dynamic_reconfigure
horizon: 5
discount: 10.0
schedule_a: 4
schedule_b: 3
num_kernels: 5
std_vel: 0.2
std_ang: 0.1
std_fr: 0.02
num_particles: 10
resample_thresh: 0.33

allow_unknown_targets: true
default_ctrl_duration: 1.0

laser_min_angle_deg: -90.0
laser_max_angle_deg: 90.0
laser_angle_step_deg: 5.0
laser_max_dist_m: 4.0
p_false_pos: 0.05
p_false_neg: 0.05
```

----------------------------------
I set 'discount' to a high value in order to make the robot explore new areas rather than staying stuck at an obstacle to complete the 'boundary'. Is that right?
I've also increased 'horizon'.
Thank you in advance!


laurimi commented on August 17, 2024

Setting discount high will basically do what you say. But as a side effect, it will make all the estimated rewards very high as well, so the threshold value min_reward probably needs to be increased too. I had not actually considered that discount might be set higher than 1.0, but it is certainly possible. Getting stuck at obstacles could also be caused by a faulty configuration of the navigation costmaps.
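To see why a discount above 1.0 inflates all reward estimates, here is a minimal sketch of a geometrically discounted cumulative reward (the function and the per-step reward values are made up for illustration, not taken from the package):

```python
# Minimal sketch (illustrative values, not from ase_exploration): the
# cumulative reward of a trajectory, each step weighted by discount**t.
def discounted_reward(step_rewards, discount):
    """Sum of per-step rewards, each weighted by discount raised to its step index."""
    return sum(discount ** t * r for t, r in enumerate(step_rewards))

rewards = [5.0] * 5  # identical per-step rewards over a horizon of 5

print(discounted_reward(rewards, 0.9))   # a conventional discount < 1 keeps totals bounded
print(discounted_reward(rewards, 10.0))  # discount = 10 inflates the total to 55555.0
```

With discount = 10, the later steps dominate by orders of magnitude, which is why min_reward has to be raised to remain a meaningful threshold.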


Severn-Vergil commented on August 17, 2024

Thank you! I printed avgRewards and tuned min_reward carefully, and it works!
A new question: as the optimized path is determined by both the map and the observation data (laser scans), is it possible to generate a goal whose orientation is roughly opposite to the robot's original pose?
Sometimes the robot explores into a corner and cannot generate a path to escape from it, and frontier_exploration is triggered. However, a goal generated by frontier_exploration may also be stuck in an inflated obstacle like this; I will try to ask the author of frontier_exploration.
[screenshot from 2016-09-27 16:16:03]

A related frontier_exploration issue: paulbovbel/frontier_exploration#11


laurimi commented on August 17, 2024

Yes, this may happen with frontier_exploration; you can also try to work around it by tuning the parameters of the costmap used in frontier_exploration. I'm not exactly sure what you mean by a goal with inverse orientation, but if you want to modify the goals output by ase_exploration, you either need to change the code directly or write another node that performs whatever processing you need before sending the goal on to move_base.


Severn-Vergil commented on August 17, 2024

I mean: if the robot is stuck in a corner with unexplored area behind it, can the SMCPlanner generate a goal into the unexplored area, like this:
[example_new]

(A rough picture, but I hope it shows my idea clearly :) )


Severn-Vergil commented on August 17, 2024

Another question: if I want to send a new goal (as axclient does) as soon as the exploration task fails, so that a new goal is generated by SMCPlanner or frontier_exploration and the robot explores 'endlessly', which tool can I use to implement this?
Thank you in advance!


laurimi commented on August 17, 2024

Yes, it is possible to generate a goal like that. The reason you are not getting this is probably that you have configured the robot's motion model such that it rarely, if ever, happens. Try, for example, widening the range between ang_v_min and ang_v_max to sample from a wider range of angular velocities.
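For example, a change along these lines in the planner parameters (the specific values here are illustrative, not recommendations):

```yaml
# Example only: widen the sampled angular velocity range so trajectories
# with sharp turns (including near-reversals) are proposed more often.
ang_v_min: -0.6   # was -0.2
ang_v_max: 0.6    # was 0.2
```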


Severn-Vergil commented on August 17, 2024

Thank you, @laurimi, it works! Sorry for the late reply.
I'm trying to understand the trajectory evaluation function from the code, as I'm not able to follow the formulas in the paper. Here are my questions:

  1. What is the meaning of entropy in the code? Why is the entropy stored in an array of size 101, with values between 0 and 1?
  2. How can I write the formula in a less 'abstract' form? I'm trying to explain the principle in the paper, based on the Turtlebot and Kinect platform.

Sincerely, thanks!


laurimi commented on August 17, 2024

The entropy is a measure of uncertainty related to a random variable. The random variables here are the map cell occupancies. They are random variables with two possible values, 0 for a free cell and 1 for an occupied cell. A single real number between 0 and 1 models the probability of a cell being occupied. The array for entropy that you see is just a computationally efficient way to compute the trajectory rewards. ROS uses occupancy grid maps where cell occupancy probability is represented by an 8-bit integer, between 0 and 100. The array is a lookup table for the corresponding entropy value, so the values don't have to be calculated every time.
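To make the lookup-table idea concrete, here is a sketch in Python (my own reconstruction, assuming log base 2; the actual implementation may differ, e.g. it could use the natural logarithm):

```python
import math

# Sketch of the lookup-table idea (assumed, not copied from the package):
# ROS occupancy grids store P(occupied) as an integer 0..100, so the binary
# entropy of every possible cell value can be precomputed once.
def binary_entropy(p):
    """Entropy (in bits) of a Bernoulli variable with P(occupied) = p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0  # a certainly-free or certainly-occupied cell has no uncertainty
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# 101 entries: index i corresponds to occupancy probability i / 100
entropy_table = [binary_entropy(i / 100.0) for i in range(101)]

print(entropy_table[0])    # 0.0 -> certain, zero entropy
print(entropy_table[50])   # 1.0 -> maximum uncertainty at p = 0.5
```

The table is built once, so scoring a trajectory only requires indexing it with the raw 0..100 cell values rather than recomputing logarithms per cell.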

To modify how trajectories are scored, you would have to write your own version of trajectory evaluation. Take a look at TrajectoryEvaluator.cpp - here the reward at each step is the mutual information, or, the difference between prior and posterior entropies. It should be possible to change this to whatever is appropriate in your application.


Severn-Vergil commented on August 17, 2024

Thank you! I've figured out the meaning of entropy: E(p) = -p log(p) - (1-p) log(1-p). Is that right? TrajectoryEvaluator.cpp uses the map and the current robot pose, but I couldn't find where the scan data is used; I'm trying to locate the code corresponding to Algorithm 3, "Observation sampling for a laser range finder (LRF)", in the paper.


laurimi commented on August 17, 2024

Yes, that is the entropy of a binary random variable. Scan data is simulated based on the current map information and the information in the current map sample. Sampling the scans is the responsibility of the sensor model. For laser scanners, this is done via 2D raytracing; see LaserScanner2D::sampleMeasurement for the details. This is quite close to Algorithm 3 of the paper.
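A toy version of the raytracing idea, to illustrate what simulating a range measurement on a grid map means (the names and details here are mine, not the package's; the real sampleMeasurement additionally samples occupancy for uncertain cells and applies the p_false_pos / p_false_neg error model):

```python
import math

# Toy 2D raytrace: march along a ray through a binary grid and report the
# distance to the first occupied cell, capped at the sensor's max range.
def raytrace_range(grid, x, y, angle, max_dist, step=0.05):
    """Return the distance (in cells) to the first occupied cell along the ray."""
    d = 0.0
    while d < max_dist:
        cx = int(x + d * math.cos(angle))  # column index along the ray
        cy = int(y + d * math.sin(angle))  # row index along the ray
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == 1:
            return d  # ray hit an occupied cell
        d += step
    return max_dist  # no hit within sensor range

# 5x5 map with a wall in column 3 (1 = occupied, 0 = free)
grid = [[1 if cx == 3 else 0 for cx in range(5)] for _ in range(5)]
print(raytrace_range(grid, 0.0, 2.0, 0.0, 4.0))  # ~3.0: distance to the wall
```

Repeating such a raytrace for every beam angle of the simulated scanner yields one sampled scan for one sampled map.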


Severn-Vergil commented on August 17, 2024

So the scan is simulated randomly, not using actual data from the Kinect, is that right?
By the way, what does map_sample mean as an input of LaserScanner2D::sampleMeasurement? What's the difference between map_sample and map_prior? It seems that in evaluateTrajectory in TrajectoryEvaluator.cpp, map_sample is directly copied from map, as the initialization shows:
PlanarGridOccupancyMap map_sample(map.getOrigin(), map.getMetersPerCell());


laurimi commented on August 17, 2024

Correct, this node does not use any sensor data at all. Only information about the current map.

The map_sample is exactly that: a sample drawn from the probability distribution corresponding to the current map information. It is iteratively built up through simulation along a trajectory. Note that only the origin and resolution of the map are used to initialize it; otherwise map_sample is empty. map_prior can be used to encode any other prior information not already contained in the current map.
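The cell-wise sampling idea can be sketched like this (my own illustration, not the package's code): each cell's occupancy is one Bernoulli draw from its probability in the current map, so cells that are known with certainty always keep their value.

```python
import random

# Sketch of "drawing a map sample": one Bernoulli draw per cell from its
# occupancy probability (illustrative, not from ase_exploration).
def sample_map(prob_grid, rng):
    """Return a binary grid sampled cell-wise from occupancy probabilities."""
    return [[1 if rng.random() < p else 0 for p in row] for row in prob_grid]

rng = random.Random(42)  # seeded for repeatability
probs = [[0.0, 1.0, 0.5],
         [0.2, 0.8, 0.5]]
sample = sample_map(probs, rng)
print(sample[0][0], sample[0][1])  # certain cells always come out 0 and 1
```

Only the uncertain cells (probabilities strictly between 0 and 1) differ between samples, which is what makes the simulated scans vary from particle to particle.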


laurimi commented on August 17, 2024

Closing this as the original issue was resolved.

