
Comments (9)

carlosmccosta commented on July 30, 2024

The reference_pointcloud_filename parameter is the map that the system loads when it starts:
https://github.com/carlosmccosta/dynamic_robot_localization/blob/noetic-devel/launch/dynamic_robot_localization_system.launch#L32

If your map is not static and changes over time, you can update the reference point cloud by publishing a message to reference_pointcloud_topic or reference_costmap_topic:
https://github.com/carlosmccosta/dynamic_robot_localization/blob/noetic-devel/launch/dynamic_robot_localization_system.launch#L47
https://github.com/carlosmccosta/dynamic_robot_localization/blob/noetic-devel/launch/dynamic_robot_localization_system.launch#L45
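
As a rough sketch (not taken from the drl documentation), updating the reference map at runtime with rospy could look like the snippet below; the /reference_pointcloud topic name is an assumption and should match whatever reference_pointcloud_topic resolves to in your launch configuration:

#!/usr/bin/env python
# Hypothetical sketch: publish a new reference map as a latched PointCloud2.
# The topic name is an assumption; match it to your reference_pointcloud_topic.
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2
from std_msgs.msg import Header

rospy.init_node('reference_map_updater')
pub = rospy.Publisher('/reference_pointcloud', PointCloud2, queue_size=1, latch=True)
rospy.sleep(0.5)  # give subscribers time to connect

# Placeholder points in the map frame; in practice these would come from your mapping pipeline.
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
header = Header(stamp=rospy.Time.now(), frame_id='map')
pub.publish(pc2.create_cloud_xyz32(header, points))
rospy.spin()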

The ambient sensor data is received on the ambient_pointcloud_topic:
https://github.com/carlosmccosta/dynamic_robot_localization/blob/noetic-devel/launch/dynamic_robot_localization_system.launch#L51

If you have sensors that publish ROS sensor_msgs/LaserScan messages, you can specify the topics in laser_scan_topics, and my laserscan_to_pointcloud package will assemble and convert them into point clouds:
https://github.com/carlosmccosta/dynamic_robot_localization/blob/noetic-devel/launch/dynamic_robot_localization_system.launch#L71
https://github.com/carlosmccosta/laserscan_to_pointcloud/blob/master/launch/laserscan_to_pointcloud_assembler.launch#L4

For every message received in ambient_pointcloud_topic, the system will try to align it to the reference point cloud and estimate the pose of the robot.

When the system starts, it will try to find the initial pose of the robot using feature matching.
If your environment does not have good features, you can manually specify the initial pose in RViz, which will send a message to /initialpose:
https://github.com/carlosmccosta/dynamic_robot_localization/blob/noetic-devel/launch/dynamic_robot_localization_system.launch#L7
http://wiki.ros.org/rviz/UserGuide#A2D_Pose_Estimate_.28Keyboard_shortcut:_p.29
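
The initial pose can also be sent programmatically. A minimal rospy sketch, assuming the default /initialpose topic and the map frame (this mimics what the RViz 2D Pose Estimate tool publishes):

#!/usr/bin/env python
# Sketch: send an initial pose estimate, like RViz's "2D Pose Estimate" tool.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

rospy.init_node('initial_pose_sender')
pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped, queue_size=1, latch=True)
rospy.sleep(0.5)  # give subscribers time to connect

msg = PoseWithCovarianceStamped()
msg.header.stamp = rospy.Time.now()
msg.header.frame_id = 'map'
msg.pose.pose.position.x = 1.0     # placeholder x (meters)
msg.pose.pose.position.y = 2.0     # placeholder y (meters)
msg.pose.pose.orientation.w = 1.0  # identity quaternion -> theta = 0
pub.publish(msg)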

After having a valid initial pose, the system will enter tracking mode using the Iterative Closest Point (ICP) algorithm to align the sensor data with the reference map.


carlosmccosta commented on July 30, 2024

Hello,

The reference point cloud can be given as a file or as a ROS point cloud message:
https://github.com/carlosmccosta/dynamic_robot_localization#reference-map
https://github.com/carlosmccosta/dynamic_robot_localization/blob/noetic-devel/yaml/schema/drl_configs.yaml#L189
https://github.com/carlosmccosta/dynamic_robot_localization/blob/noetic-devel/yaml/schema/drl_configs.yaml#L73

For 3 DoF localization you can specify the file in the reference_pointcloud_filename parameter of dynamic_robot_localization_system.launch.
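
Assuming it is exposed as a launch argument, the path could also be overridden from the command line, for example:

roslaunch dynamic_robot_localization dynamic_robot_localization_system.launch reference_pointcloud_filename:=/path/to/map.pcd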

Have a nice day,


kubs0ne commented on July 30, 2024

Thank you very much! So that means I could specify the point cloud filename, and then specify the topic of the live point cloud here:

<param name="subscribe_topic_names/reference_pointcloud_topic" type="str" value="$(arg reference_pointcloud_topic)" />
and the node will estimate my localization? Or should I provide some more data?


kubs0ne commented on July 30, 2024

Thank you for the detailed response! I have one more question: how do I visualise the estimated localization in RViz?


carlosmccosta commented on July 30, 2024

Hello,

You can add a pose arrow or axis display associated with the topic /dynamic_robot_localization/localization_pose (geometry_msgs/PoseStamped).
http://wiki.ros.org/rviz/DisplayTypes/Pose

Or you can show the map frame_id in TF.
http://wiki.ros.org/rviz/DisplayTypes/TF
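
To first confirm that the pose is actually being published, you can also run rostopic echo /dynamic_robot_localization/localization_pose in a terminal.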

Have a nice day,


kubs0ne commented on July 30, 2024

Hi, I tried the localization node, but I'm not able to visualise the ambient point cloud or the localization pose. What should the fixed frame be? In my ambient_pointcloud_topic the default fixed frame is rslidar, but it doesn't work with ambient_pointcloud. The terminal window shows this message: [pcl::SampleConsensusModelLine::optimizeModelCoefficients] Not enough inliers to refine/optimize the model's coefficients (2)! Returning the same coefficients. Should I change some parameters, or is my point cloud just not compatible?


carlosmccosta commented on July 30, 2024

Hello,

The fixed frame should be the one configured in the argument map_frame_id, which by default is map.

The dynamic_robot_localization_system.launch is configured for 3 DoF (x, y, theta) and expects sensor data from sensor_msgs/LaserScan messages.

If you want to perform 6 DoF localization (x, y, z, roll, pitch, yaw), then you need to configure drl for that use case.
The dynamic_robot_localization_system_6dof.launch file gives an example of a possible configuration, which was a refactoring of the tests from ethzasl_kinect_dataset.launch, used to generate this video.
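
Assuming the workspace is built and sourced, that configuration can be started with roslaunch dynamic_robot_localization dynamic_robot_localization_system_6dof.launch, overriding its launch arguments as needed for your setup.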

Have a nice day,


kubs0ne commented on July 30, 2024

I still have problems with this configuration. I used the dynamic_robot_localization_system_6dof.launch file as you recommended and changed base_link to rslidar. But now the topics don't publish any messages. In my recorded bag file the rslidar frame acts as a base_link. But I don't understand how the map frame works. Is it generated by the launch file?


carlosmccosta commented on July 30, 2024

Hello,

What kind of sensor do you have?
Is it a 2D lidar that outputs sensor_msgs/LaserScan messages or a 3D lidar that generates sensor_msgs/PointCloud2 messages?

If it is a 2D lidar, then you are likely performing 3 DoF localization (example here) and you should look at dynamic_robot_localization_system.launch.

If it is a 3D lidar, then you probably want 6 DoF localization (example here) and you should look at dynamic_robot_localization_system_6dof.launch.

The TF package manages the transformations between coordinate frames:
http://wiki.ros.org/tf2

REP 105 specifies the naming conventions for mobile robotics:
https://www.ros.org/reps/rep-0105.html

Namely:

  • map -> odom is published by a global localization node (such as drl) and can have discrete jumps to correct the robot pose
  • odom -> base_link is published by the wheel / visual / imu odometry node and has smooth evolution over time

When used for mobile robotics, drl publishes map -> odom. More precisely:

  • it takes the sensor data and transforms it to the map frame, then uses the Iterative Closest Point (ICP) algorithm to align the sensor data with the reference map
  • at the end of the pipeline, it publishes a geometry_msgs/PoseStamped, which is received by the pose_to_tf_publisher, which converts the pose and publishes the map -> odom TF

To follow the conventions, you need to publish the TFs that connect odom -> base_link and base_link -> lidar_frame_id, as in the sketch below.
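
If those transforms are not already published by your robot description or odometry node, a minimal tf2 sketch with placeholder values for the lidar mounting could look like this:

#!/usr/bin/env python
# Sketch: publish a static base_link -> rslidar transform with tf2.
# Translation/rotation values are placeholders for the real mounting offsets.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('lidar_tf_publisher')
broadcaster = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'base_link'
t.child_frame_id = 'rslidar'
t.transform.translation.z = 0.5  # lidar 0.5 m above base_link (placeholder)
t.transform.rotation.w = 1.0     # no rotation relative to base_link

broadcaster.sendTransform(t)
rospy.spin()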

If you do not have odometry, then you can set the odom_frame_id to an empty string, which will cause drl to publish the map -> base_link TF.

The default naming is map -> odom -> base_link, but you can change the names if you want.

For example, if you do not want to follow the conventions and just want to track your lidar in 3D space, you need to set:

<arg name="map_frame_id" default="map" />
<arg name="odom_frame_id" default="" />
<arg name="base_link_frame_id" default="rslidar" />
<arg name="sensor_frame_id" default="rslidar" />

But keep in mind that you need to specify the reference point cloud file path and the initial pose of the sensor within that reference point cloud in dynamic_robot_localization_system_6dof.launch.

You can look at ethzasl_kinect_dataset_high_complexity_slow_fly_movement.launch (and analyze the chain of launch files until reaching the drl node) for an example of 6 DoF tracking of a 3d sensor (video here).

Have a nice day,
