mdrwiega / depth_nav_tools
A set of tools for mobile robot navigation with the depth sensor
Home Page: http://wiki.ros.org/depth_nav_tools
License: Other
Hi,
Does this work with the Kinect v2? It is not mentioned in the documentation.
Hi,
I would appreciate a brief explanation of the following distance-to-ground calculation:
dist_to_ground_[i] = sensor_mount_height_ * sin(M_PI / 2 - delta_row_[i]) /
cos(M_PI / 2 - delta_row_[i] - alpha);
Shouldn't alpha also be subtracted inside the sin? If so, wouldn't it be simpler to write:
dist_to_ground_[i] = sensor_mount_height_ * tan(M_PI / 2 - delta_row_[i] - alpha);
Hi Michal,
I'm trying to use this wonderful package, without success so far. I managed to run the laserscan node and it works perfectly (better than depthimage_to_laserscan), but I'm struggling with the rest of the nodes. I'm using costmap_2d for mapping, and at the moment the local_costmap isn't detecting downward stairs. I set the Kinect height and tilt_angle correctly, but in rqt_graph only the laserscan node is publishing LaserScan messages on the scan topic, which I'm using; neither cliff_detector nor depth_sensor_pose publishes anything. Do I need all three nodes for this detection to work, or is there something I'm missing? Could you tell me what else I need, please?
By the way, for costmap_2d I'm using the scans from laserscan as a source, with max_obstacle_height: 2.0 m and min_obstacle_height: -2.0 m, but it doesn't seem to work.
Do I need to add another observation source for the scan, but as a PointCloud message with max_obstacle_height: 0.0 m and min_obstacle_height: -2.0 m, or something similar?
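For reference, an observation-source entry in a costmap_2d configuration typically looks like the sketch below; the topic names and the cliff source are assumptions, not taken from the package's documentation:

```yaml
obstacle_layer:
  observation_sources: scan cliff
  scan:
    topic: /scan                   # output of laserscan_kinect (assumed topic name)
    data_type: LaserScan
    marking: true
    clearing: true
    max_obstacle_height: 2.0
    min_obstacle_height: -2.0
  cliff:
    topic: /cliff_detector/points  # placeholder: check the actual cliff_detector topic
    data_type: PointCloud2
    marking: true
    clearing: false
```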
When I set for camera
I hope you can enlighten me about the correct use of this package.
Thanks again!
Hi,
I am trying to add your nav_layer_from_points plugin to our robot's global costmap, but we get the following errors:
Using plugin "simplelayer"
terminate called after throwing an instance of 'pluginlib::CreateClassException'
what(): MultiLibraryClassLoader: Could not create object of class type nav_layer_from_points::NavLayerPoints as no factory exists for it. Make sure that the library exists and was explicitly loaded through MultiLibraryClassLoader::loadLibrary()
My global_costmap.yaml is listed as follows:
global_costmap:
  global_frame: /map
  robot_base_frame: /base_footprint
  update_frequency: 5.0
  static_map: true
  plugins:
    - {name: static_layer, type: "costmap_2d::StaticLayer"}
    - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
    - {name: simplelayer, type: "nav_layer_from_points::NavLayerPoints"}
Could you please advise how to solve this problem?
Thanks.
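A pluginlib error like "no factory exists for it" usually means the plugin description file is not exported. The general shape of a pluginlib export is sketched below; the library path and file name here are assumptions, not taken from the package:

```xml
<!-- costmap_plugins.xml (file name assumed) -->
<library path="lib/libnav_layer_from_points">
  <class type="nav_layer_from_points::NavLayerPoints"
         base_class_type="costmap_2d::Layer">
    <description>Costmap layer built from detected points.</description>
  </class>
</library>
```

and in package.xml:

```xml
<export>
  <costmap_2d plugin="${prefix}/costmap_plugins.xml"/>
</export>
```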
How can laserscan_kinect be used to improve the performance of a single lidar? I also want to use a depth camera to avoid obstacles and downward stairs during path planning. I'd appreciate it if someone could give me some advice. Thank you.
Hi,
Distances to ground are stored as unsigned integers:
std::vector<unsigned> dist_to_ground_;
This causes the distances to always be rounded down, and thus provide wrong results. If for example, the distance is 0.9 meters, the stored value will be 0, causing false positives.
Hi,
In the parametersCallback, the depth_sensor_params_update flag is being set to true. Since this is called before detectCliff, it blocks updating the camera details from the CameraInfo message (unless cam_model_update_ is set to true).
The line that sets it to true is:
IMHO, it would be more correct to set depth_sensor_params_update to false after updating the parameters, so that the camera model is re-initialized from the next CameraInfo message, since the parameters are used in the camera-info initialization code.
We are using ROS2 Humble.
Hello,
I am trying to use this package for stair detection. When I run all the nodes I get this error:
[ERROR] [1553614236.245436569]: Ground points not detected
[ERROR] [1553614236.245497004]: height = 0.0000 angle = 0.0000
Any idea how I can fix it?
Thanks,
Heta
The URL https://github.com/mdrwiega/depth_nav_tools describes the depth_nav_tools code and states that the documentation is for ROS 1. To avoid other complications, I am building for Noetic on Ubuntu 20.04. I have Gazebo, RViz, and the Kinect camera working fine. However, when I try to build depth_nav_tools, I get the error: Warning: Skipping package laserscan_kinect because it has an unsupported package build type: ament_cmake. ament_cmake is, of course, part of ROS 2, and I notice from the URL above that the workflow update to 22.04 occurred 3 months ago. I need depth_nav_tools to be compatible with ROS 1. Please let me know where the previous, ROS 1 code resides.
Hey Michal,
Thanks for your code. I'd like to use it with the Kinect I have (v1). I'm mapping a 2D planar indoor environment, as most people do. I want to go further by adding cliff (downstairs) avoidance, which hopefully you've already implemented. Could you tell me what configuration I should set for my Kinect if it is mounted 0.28 m high and looking straight ahead? The Kinect's vertical viewing angle is 43 degrees, i.e. 21.5 degrees upwards and 21.5 degrees downwards. As my Kinect is 0.28 m high (and at the front of the robot, per the ROS navigation tutorials), it should detect the floor at a distance of 0.71084 m.
How does costmap_2d perceive the stair detection? I've sent you an email in case you weren't notified.
I hope you can enlighten me about that.
Thanks
When I run cliff_detector, I get this error in CliffDetectorNode::depthCb (line 58). What image encoding does this node expect?
Hello Michal,
I'm very interested in using your node for some close-range obstacle avoidance.
However, I can't seem to get your "laserscan_kinect" working in Indigo. I'm using an Astra Orbbec S camera on an Indigo Jackal.
My launch file looks like this:
Unfortunately, every time I try to view the output scan in RViz it crashes. It also seems it is not connecting to the two topics I give it. Do you know why?
Node [/laserscan_kinect]
Publications:
Subscriptions: None
Services:
contacting node http://CPR-J100-0150:52464/ ...
Pid: 14284
Connections:
Without tilt compensation enabled and my Kinect level, off-center obstacles line up very well with the 360-degree laser scanner on my robot. However, the Kinect is about 50 inches above the ground, so I tilt it down quite steeply (25 degrees), and after adjusting the settings, off-center objects no longer line up: the scan seems to 'narrow in' relative to the laser scanner. I don't think this is a camera calibration issue, since everything lines up well without a tilt; it might be a math issue, if it isn't a user error. I've used dynamic reconfigure via rqt but could not find any combination of height/tilt settings that realigns the off-center portion of the scan with the laser scanner.
I left a comment in commit 5e07d3f (the last one) for laserscan_kinect, but it applies to other files like cliff_detector and pose_estimator.
It looks like the callback that runs when the scan topic is subscribed to includes a test that is always false: sub_ != nullptr. The previous code tested !sub. Since sub_ is initially a nullptr, the changed test is always false, so the node never subscribes to the image topic, and therefore the callback that publishes the scan message never fires.
In my copy, I changed it to sub_ == nullptr and it worked. Whether or not that is the proper way to do it, I don't know.