
realsense_gazebo_plugin's Introduction

Intel RealSense Gazebo ROS plugin

This package is a Gazebo ROS plugin for the Intel RealSense D435 camera.

Acknowledgement

This is a continuation of work done by SyrianSpock for a Gazebo ROS plugin with the RS200 camera.

This package also includes the work developed by Intel Corporation on the ROS model of the D435 camera.

realsense_gazebo_plugin's People

Contributors

adriaroig, christian-rauch, dvigne, jordan-palacios, saikishor, v-lopez


realsense_gazebo_plugin's Issues

‘rosnode_’ was not declared in this scope

I cloned the foxy-devel branch, but colcon build fails:

/code/ros2_ws/src/realsense_gazebo_plugin/src/gazebo_ros_realsense.cpp:63:3: error: ‘rosnode_’ was not declared in this scope; did you mean ‘node_’?

Pointcloud without color

Hello! Great work. I used it with a MAV simulator.
When I adjust the xacro to use the RealSense D435 for a colored point cloud:

<pointCloud>true</pointCloud>
<pointCloudTopicName>depth/color/points</pointCloudTopicName>
<pointCloudCutoff>0.25</pointCloudCutoff>
<pointCloudCutoffMax>9.0</pointCloudCutoffMax>

what I get is a point cloud without color:
[screenshot: point cloud in RViz, no color]
while the corresponding scene in Gazebo is:
[screenshot: Gazebo scene]

Does it work for you? I have seen other issues mention this problem, but there is no solution so far, for example
https://github.com/pal-robotics/realsense_gazebo_plugin/issues/37#issue-916293171.
Looking forward to your kind reply. Thanks a lot!

Stereo camera (ir1 & ir2)

This could be my fault, but I'm pretty sure there is a problem with the stereo camera. First, the timestamps of the two images are not synchronized. Also, I just noticed that the ir1 and ir2 images are identical (I have specified the right frames in the plugin parameters).
Again, this could be me, but I strongly recommend that anyone using the plugin for stereo SLAM in Gazebo be careful with this.
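Not a fix inside the plugin itself, but a common ROS-side workaround for slightly mismatched stamps is approximate-time synchronization with message_filters. A minimal Python sketch, with topic names assumed from the plugin defaults used elsewhere in this page:

#!/usr/bin/env python
# Sketch: pair ir1/ir2 frames whose stamps are close, using message_filters.
# The topic names below are assumptions for illustration.
import rospy
import message_filters
from sensor_msgs.msg import Image

def pair_cb(ir1_msg, ir2_msg):
    # Both messages arrive here only when their stamps fall within `slop`.
    delta = abs((ir1_msg.header.stamp - ir2_msg.header.stamp).to_sec())
    rospy.loginfo("paired ir1/ir2 frames, stamp delta = %.4f s", delta)

if __name__ == '__main__':
    rospy.init_node('ir_pair_listener')
    ir1_sub = message_filters.Subscriber('camera/infra1/image_raw', Image)
    ir2_sub = message_filters.Subscriber('camera/infra2/image_raw', Image)
    sync = message_filters.ApproximateTimeSynchronizer(
        [ir1_sub, ir2_sub], queue_size=10, slop=0.05)
    sync.registerCallback(pair_cb)
    rospy.spin()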

Synchronized images and P(0,3) filled

Hi.
Is there any idea how to make the images coming from this plugin synchronized?
AFAIU, MultiCameraSensor should be used to create synchronized cameras, but it does not support a depth sensor,
which leaves me with no ideas.

Another useful feature would be to fill the P(0,3) projection matrix element, which holds the stereo baseline, as is done for the standard Camera plugin.
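For reference, the sensor_msgs/CameraInfo convention is that the right camera of a stereo pair carries P(0,3) = -fx * baseline. A small Python sketch that would read this back once the element is filled (the topic name is an assumption based on the plugin defaults):

#!/usr/bin/env python
# Sketch: read the projection matrix of the second (right) infrared camera
# and recover the stereo baseline from P(0,3) = -fx * B.
import rospy
from sensor_msgs.msg import CameraInfo

def info_cb(msg):
    fx = msg.P[0]   # P is row-major 3x4, so P[0] is fx and P[3] is P(0,3)
    tx = msg.P[3]
    baseline = -tx / fx if fx else float('nan')
    rospy.loginfo("fx = %.2f, P(0,3) = %.4f, implied baseline = %.4f m",
                  fx, tx, baseline)

if __name__ == '__main__':
    rospy.init_node('baseline_check')
    rospy.Subscriber('camera/infra2/camera_info', CameraInfo, info_cb)
    rospy.spin()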

How can I see the camera in Gazebo

I'm trying to visualize the camera in Gazebo but I run into the error: No name given for the robot.

Launch file:

<launch>
  <arg name="robot_ns" default="/"/>
  <param name="robot_description"
         command="$(find xacro)/xacro --inorder '$(find realsense_gazebo_plugin)/urdf/d435.urdf.xacro' use_nominal_extrinsics:=true" />
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher">
    <param name="tf_prefix" type="string" value="/" />
  </node>
  <node pkg="joint_state_publisher" name="joint_state_publisher" type="joint_state_publisher" />
</launch>

How to use with ROS

Thank you for this repository! I was able to get the basics set up and working. I have the camera spawned in Gazebo and I am able to view the different topics properly. However, my question is about the next step:
How do I use this plugin in real-time code to get my images?
The two relevant Intel packages are librealsense and realsense-ros.
Is this package meant to be a substitute for either of them?
Can I use librealsense scripts with this? If so, is there example code for that?
(I am able to record the images using rosbag and play them back; however, I am looking for real-time integration with librealsense.)
Thanks!

Edit: If it is helpful, I tried to read a rosbag using the librealsense library. If I try to integrate with librealsense, I get this message: Invalid format, file does not contain topic "/file_version" nor "/FILE_VERSION". i.e. for the library to work, it expects file_version in the rosbag.
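Since the plugin publishes standard sensor_msgs/Image topics, one real-time option that does not go through librealsense at all is to consume the streams directly with cv_bridge. A minimal Python sketch, assuming the default camera/color/image_raw topic mentioned elsewhere in this page:

#!/usr/bin/env python
# Sketch: consume the simulated colour stream in real time with cv_bridge
# instead of librealsense.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_cb(msg):
    # Convert the ROS image to an OpenCV BGR array and display it.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    cv2.imshow('color', frame)
    cv2.waitKey(1)

if __name__ == '__main__':
    rospy.init_node('realsense_sim_viewer')
    rospy.Subscriber('camera/color/image_raw', Image, image_cb)
    rospy.spin()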

Release for Melodic and Noetic

Kind of a duplicate of #32, but I just wanted to bring this up again. The release process is fairly straightforward and I think it will benefit a lot of people using this package. I am also happy to contribute anything you would need for this.

Unable to link a realsense link to a robot link

It is more a doubt than an issue itself...

How can I set the pose of a realsense frame, like "camera_color_frame" below, relative to a frame on the robot?
<colorOpticalframeName>camera_color_frame</colorOpticalframeName>

Can anyone give me an example of a complete SDF file?

Thank you in advance.

Unable to display pointcloud data in rviz

I tried changing the pointCloud tag to true, expecting data to be published on depth/points, but I don't see the topic at all. Digging into the code, I couldn't find where the point cloud is published.
Did I miss something? Any help would be appreciated :)

[screenshot]

camera_info parameters

Hello,
I was wondering if I can set camera_info parameters somewhere, through the URDF or directly inside the plugin, like with the Kinect plugin?
PS: I know I can edit the plugin directly inside the cameraInfo function.

        <plugin name="kinect_controller" filename="libgazebo_ros_openni_kinect.so">
          <distortionK1>0.00000001</distortionK1>
          <distortionK2>0.00000001</distortionK2>
          <distortionK3>0.00000001</distortionK3>
          <distortionT1>0.00000001</distortionT1>
          <distortionT2>0.00000001</distortionT2>
          <CxPrime>0</CxPrime>
          <Cx>0</Cx>
          <Cy>0</Cy>
          <focalLength>0</focalLength>
          <hackBaseline>0</hackBaseline>
        </plugin>

I am doing AprilTag detection inside Gazebo based on the RealSense D435i model; however, the tag pose is detected with an error, which I assume comes from the camera parameters.

Best wishes
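For what it's worth, the intrinsics the plugin actually publishes can be checked on the camera_info topic and compared against what the AprilTag detector is configured with. A small Python sketch (the topic name is an assumption based on the defaults used elsewhere in this page):

#!/usr/bin/env python
# Sketch: print the intrinsics published on the colour camera_info topic.
import rospy
from sensor_msgs.msg import CameraInfo

if __name__ == '__main__':
    rospy.init_node('camera_info_dump')
    # Block until one CameraInfo message arrives, then print the intrinsics.
    msg = rospy.wait_for_message('camera/color/camera_info', CameraInfo)
    print('distortion_model:', msg.distortion_model)
    print('K (3x3, row-major):', list(msg.K))
    print('D:', list(msg.D))
    print('P (3x4, row-major):', list(msg.P))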

Plan for supporting ROS Noetic / ros release?

Hi

(1) Is the Melodic version adaptable to ROS Noetic as well, or will there be a Noetic release?
(2) Besides, is there a ROS release version that can be installed via sudo apt install?

Best,
Samuel

Same camera showing images of different sizes without resizing

I put the camera on my Gazebo robot. However, despite it being the same camera and no resizing being applied, the raw image (/camera/color/image_raw) and the depth image (/camera/depth/image_raw) are displayed at different sizes. I have tried displaying them with OpenCV and with rosrun image_view; both outputs show different sizes. I have no idea what is wrong or where to check.
[links: repository, URDF of the robot with the camera, launch file]

Showing the images using rosrun:

rosrun image_view image_view image:=/camera/color/image_raw __name:=image_view_color
rosrun image_view image_view image:=/camera/depth/image_raw __name:=image_view_depth

[screenshots: raw and depth output shown via rosrun and OpenCV]
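One way to confirm the mismatch independently of image_view is to print the resolutions reported in the messages themselves. A quick Python sketch using the two topics from this issue:

#!/usr/bin/env python
# Sketch: print the width/height reported by the colour and depth streams.
import rospy
from sensor_msgs.msg import Image

if __name__ == '__main__':
    rospy.init_node('size_check')
    color = rospy.wait_for_message('/camera/color/image_raw', Image)
    depth = rospy.wait_for_message('/camera/depth/image_raw', Image)
    print('color: %dx%d (%s)' % (color.width, color.height, color.encoding))
    print('depth: %dx%d (%s)' % (depth.width, depth.height, depth.encoding))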

How to install pal-robotics / realsense_gazebo_plugin in ros2

Please explain how to use the above repo with ROS 2 Humble.
I cloned the repo into my src folder and ran colcon build, but it shows the following error.
How do I generate the "librealsense_gazebo_plugin.so" file?

--- stderr: realsense_gazebo_plugin
CMake Error at CMakeLists.txt:5 (find_package):
By not providing "Findcatkin.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "catkin", but
CMake did not find one.

Could not find a package configuration file provided by "catkin" with any
of the following names:

catkinConfig.cmake
catkin-config.cmake

Add the installation prefix of "catkin" to CMAKE_PREFIX_PATH or set
"catkin_DIR" to a directory containing one of the above files. If "catkin"
provides a separate development package or SDK, be sure it has been
installed.


Failed <<< realsense_gazebo_plugin [0.45s, exited with code 1]

How to change refresh rate of streams?

Following the documentation, I successfully enabled the RealSense D435 on my robot. I wanted to check the refresh rates, so I changed the following lines in _d435.gazebo.xacro.

Firstly, it says that the refresh rate should be 60.0 Hz, as denoted here:

<depthUpdateRate>60.0</depthUpdateRate>

Both the Gazebo Topics view and rostopic hz /camera/color/image_raw report around 20 Hz. I don't think this is the expected behavior.

Secondly, I tried to change them from

<depthUpdateRate>60.0</depthUpdateRate>
<colorUpdateRate>60.0</colorUpdateRate>
<infraredUpdateRate>60.0</infraredUpdateRate>

to

<depthUpdateRate>10.0</depthUpdateRate>
<colorUpdateRate>10.0</colorUpdateRate>
<infraredUpdateRate>10.0</infraredUpdateRate>

but nothing happened; the refresh rate remained at 20 Hz as before.

Is this a bug? How can I change the refresh rate? Or did I completely misunderstand the update rate and publish rate?
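One thing worth checking is whether the ~20 Hz figure is measured in wall-clock time or simulation time, since rostopic hz uses message arrival times. A Python sketch that measures the rate from the header stamps instead (topic name taken from this issue):

#!/usr/bin/env python
# Sketch: measure the publish rate from header stamps (simulation time)
# rather than wall-clock arrival time.
import rospy
from sensor_msgs.msg import Image

stamps = []

def cb(msg):
    stamps.append(msg.header.stamp.to_sec())
    if len(stamps) >= 50:
        dts = [b - a for a, b in zip(stamps, stamps[1:])]
        mean_dt = sum(dts) / len(dts)
        if mean_dt > 0:
            rospy.loginfo("mean rate over %d frames (stamp time): %.2f Hz",
                          len(stamps), 1.0 / mean_dt)
        rospy.signal_shutdown("done")

if __name__ == '__main__':
    rospy.init_node('stamp_rate_check')
    rospy.Subscriber('/camera/color/image_raw', Image, cb)
    rospy.spin()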

In ROS2 Foxy, realsense_gazebo_plugin shuts down the Gazebo simulator

Hello,
I really appreciate you sharing this project!

However, this plugin does not work for me.

My environment: ROS2 Foxy, Gazebo 11.x.

I just built this package using 'colcon build', then

cd {build_directory} (to find the .so plugin file)

and

export GAZEBO_PLUGIN_PATH=`pwd`:$GAZEBO_PLUGIN_PATH

Finally, I start Gazebo with the camera SDF file, but after a few seconds Gazebo shuts down.

Can you please tell me how to do it right?

The SDF file is below:

<?xml version="1.0"?>
<sdf version="1.7">
  <model name="realsense_d455">
    <pose>0 0 0.035 0 0 3.14159265359</pose>
    <static>true</static>
    <link name="link">
      <inertial>
        <pose>0.1 0.025 0.025 0 0 0</pose>
        <mass>0.077</mass>
        <inertia>
          <ixx>0.003881243</ixx>
          <ixy>0</ixy>
          <ixz>0</ixz>
          <iyy>0.000498940</iyy>
          <iyz>0</iyz>
          <izz>0.003879257</izz>
        </inertia>
      </inertial>
     <visual name="visual">
      <pose>-0.01 0 0 1.57079632679 0 1.57079632679</pose>
        <geometry>
          <mesh>
            <uri>model://realsense_d455/mesh/realsense_d455.obj</uri>
            <scale>0.001 0.001 0.001 </scale>
          </mesh>
        </geometry>
        <material>
          <script>
            <name>Gazebo/White</name>
            <uri>file://media/materials/scripts/gazebo.material</uri>
          </script>
        </material>
      </visual>
      <sensor name="cameradepth" type="depth">
        <camera name="camera">
          <horizontal_fov>1.57</horizontal_fov>
          <image>
            <width>1280</width>
            <height>720</height>
          </image>
          <clip>
            <near>0.1</near>
            <far>100</far>
          </clip>
          <noise>
            <type>gaussian</type>
            <mean>0.0</mean>
            <stddev>0.100</stddev>
          </noise>
        </camera>
        <always_on>1</always_on>
        <update_rate>30</update_rate>
        <visualize>0</visualize>
      </sensor>
      <sensor name="cameracolor" type="camera">
        <camera name="camera">
          <horizontal_fov>1.57</horizontal_fov>
          <image>
            <width>1280</width>
            <height>720</height>
            <format>RGB_INT8</format>
          </image>
          <clip>
            <near>0.1</near>
            <far>100</far>
          </clip>
          <noise>
            <type>gaussian</type>
            <mean>0.0</mean>
            <stddev>0.007</stddev>
          </noise>
        </camera>
        <always_on>1</always_on>
        <update_rate>30</update_rate>
        <visualize>1</visualize>
      </sensor>
      <sensor name="cameraired1" type="camera">
        <camera name="camera">
          <horizontal_fov>1.57</horizontal_fov>
          <image>
            <width>1280</width>
            <height>720</height>
            <format>L_INT8</format>
          </image>
          <clip>
            <near>0.1</near>
            <far>100</far>
          </clip>
          <noise>
            <type>gaussian</type>
            <mean>0.0</mean>
            <stddev>0.05</stddev>
          </noise>
        </camera>
        <always_on>1</always_on>
        <update_rate>1</update_rate>
        <visualize>0</visualize>
      </sensor>
      <sensor name="cameraired2" type="camera">
        <camera name="camera">
          <horizontal_fov>1.57</horizontal_fov>
          <image>
            <width>1280</width>
            <height>720</height>
            <format>L_INT8</format>
          </image>
          <clip>
            <near>0.1</near>
            <far>100</far>
          </clip>
          <noise>
            <type>gaussian</type>
            <mean>0.0</mean>
            <stddev>0.05</stddev>
          </noise>
        </camera>
        <always_on>1</always_on>
        <update_rate>1</update_rate>
        <visualize>0</visualize>
      </sensor>
    </link>
    <joint name="realsense_joint" type="fixed">
      <parent>base_link</parent>
      <child>realsense_link</child>
      <pose>0.4 0 0.4 0 0 0</pose>
    </joint>

    <plugin name="camera" filename="librealsense_gazebo_plugin.so">
      <prefix>camera</prefix>
      <depthUpdateRate>30.0</depthUpdateRate>
      <colorUpdateRate>30.0</colorUpdateRate>
      <infraredUpdateRate>1.0</infraredUpdateRate>
      <depthTopicName>aligned_depth_to_color/image_raw</depthTopicName>
      <depthCameraInfoTopicName>depth/camera_info</depthCameraInfoTopicName>
      <colorTopicName>color/image_raw</colorTopicName>
      <colorCameraInfoTopicName>color/camera_info</colorCameraInfoTopicName>
      <infrared1TopicName>infra1/image_raw</infrared1TopicName>
      <infrared1CameraInfoTopicName>infra1/camera_info</infrared1CameraInfoTopicName>
      <infrared2TopicName>infra2/image_raw</infrared2TopicName>
      <infrared2CameraInfoTopicName>infra2/camera_info</infrared2CameraInfoTopicName>
      <colorOpticalframeName>camera_color_optical_frame</colorOpticalframeName>
      <depthOpticalframeName>camera_depth_optical_frame</depthOpticalframeName>
      <infrared1OpticalframeName>camera_left_ir_optical_frame</infrared1OpticalframeName>
      <infrared2OpticalframeName>camera_right_ir_optical_frame</infrared2OpticalframeName>
      <rangeMinDepth>0.3</rangeMinDepth>
      <rangeMaxDepth>3.0</rangeMaxDepth>
      <pointCloud>true</pointCloud>
      <pointCloudTopicName>depth/color/points</pointCloudTopicName>
      <pointCloudCutoff>0.3</pointCloudCutoff>
    </plugin>
  </model>
</sdf>

wrong colour channel order in point cloud

The published point cloud is using the wrong colour channel order.

In this scene, there is a blue mailbox in the 2D image, but it is shown as orange in the point cloud:
[screenshot: blue mailbox shown as orange in the point cloud]
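A quick way to check the channel order is to unpack the packed rgb field of a few cloud points. A Python sketch, assuming the depth/color/points topic name used in the plugin configuration shown elsewhere in this page:

#!/usr/bin/env python
# Sketch: unpack the packed 'rgb' float of the first few cloud points to
# see which byte ends up in which channel.
import struct
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

if __name__ == '__main__':
    rospy.init_node('cloud_rgb_check')
    cloud = rospy.wait_for_message('camera/depth/color/points', PointCloud2)
    points = pc2.read_points(cloud, field_names=('x', 'y', 'z', 'rgb'),
                             skip_nans=True)
    for i, p in enumerate(points):
        packed = struct.unpack('I', struct.pack('f', p[3]))[0]
        r, g, b = (packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF
        print('point %d: r=%d g=%d b=%d' % (i, r, g, b))
        if i >= 4:
            break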

Distortion Model Missing

I cannot use this library with depthimage_to_laserscan because the distortion model is not published. Is there a way to enable or specify it? Thanks!!

Incorrect Image (lens shown)

Hello!
I am able to get the camera into my world and obtain images. However, the images are incorrect. As you can see in the screenshot, the lens of the camera and the outer mesh seem to be overlaid onto the image.

[screenshot]

I haven't changed the source code. I followed the instructions you provided in another issue, #7.

Please let me know where I am going wrong :)

Documentation missing for building and using the plugin.

Hey,

I was looking for a plugin for the RealSense D435 camera in ROS. The README file says that this package supports it. However, I see neither documentation nor an SDF file for it.

Is this package still supported?

Regards,
Aniket

Warning: Conversion of sensor type[depth] not supported

Hi,
I am using Gazebo 9.0 and ros-melodic on Ubuntu 18.04.5 LTS.
I cloned this plugin into my catkin_ws/src folder and attached the camera to my robot (via URDF files).
In general my world loads and runs; however, I always get this warning:

[Wrn] [msgs.cc:1852] Conversion of sensor type[depth] not supported

I am concerned that the plugin is not working correctly. Does anyone know the solution to this warning?

Realsense D435 Gazebo Model

Hello,
Do you have the Gazebo model for the Intel RealSense D435, and how did you test it?
Is it possible to supply an example or a tutorial on how to use the plugin?

Thanks in advance and waiting for your response.

Regards,

Error while launching multiple cameras mounted on different robots

While writing the URDF file of my robot, I added a D435 sensor as shown in the picture below. But whenever I try to create multiple robots from my robot model, this error is raised, only one camera publishes its data, and I can't figure out how to modify things so that the two cameras on different robots publish on different topics.
[screenshots: error message, URDF snippet and topic output]

Works on desktop, does not work on laptop

Hi,
I have integrated this plugin into my project, and it does what it should on my desktop, but when I pull exactly the same code onto a laptop it does not work. I tried on two different laptops, still no luck.
What could be the possible reasons? Have you seen similar issues?
Thanks for your time.

Multiple Instance Support

Great work! I'm just wondering whether your plugin supports multiple instances on one robot. I tried creating one, but had no luck.
I'd appreciate your response!

Dark tinted image outputs from image_raw

All images from this realsense plugin are noticeably darker than those rendered by other Gazebo cameras, for some reason.

When loading this plugin's xacro into the URDF, it also appears to affect ALL other cameras, including non-realsense_gazebo_plugin cameras.

Without the realsense xacro, on camera/color/image_raw:
[screenshot]

Then with the realsense xacro loaded, on camera/color/image_raw:
[screenshot]

Note this is an identical world with everything the same, just with or without the Gazebo plugin loaded. The camera is also not occluded by anything.

I have gone through the plugin code and I can't really see anything that looks like it would cause any issues.

Any ideas?

Cheers

Just publishing in gazebo

Thank you for this repository! I was able to get the basics set up and working. But when I run this plugin with my robot, it publishes only on Gazebo topics, not on ROS topics. How can I change that? Thanks in advance.

No RGB data in output pointcloud

When running with the point cloud enabled, I only seem to get mono output from the point cloud. I am checking RGB in RViz and I have manually inspected the cloud output to verify that it is indeed only publishing mono data.

I can see that the plugin always drops into this else if:

else if (this->image_msg_.data.size() == rows_arg * cols_arg)
{
  // mono (or bayer? @todo; fix for bayer)
  iter_rgb[0] = image_src[i + j * cols_arg];
  iter_rgb[1] = image_src[i + j * cols_arg];
  iter_rgb[2] = image_src[i + j * cols_arg];

Since we are filling the this->image_msg_ data with both RGB colour images and mono IR images here:

fillImage(this->image_msg_, pixel_format, cam->ImageHeight(),
          cam->ImageWidth(), cam->ImageDepth() * cam->ImageWidth(),
          reinterpret_cast<const void *>(cam->ImageData()));

I suspect that somehow only the mono images are actually being used exclusively for:

uint8_t *image_src = (uint8_t *)(&(this->image_msg_.data[0]));

It seems at least one other user may have experienced this issue too: #3

Any ideas or fixes would be much appreciated! Happy to contribute back if I figure something out too.

Cheers

catkin_make 0% built target

When trying to catkin_make the realsense_gazebo_plugin, I get a lot of "0% Built target" lines.
I have git cloned the repository into my /src folder, and when I catkin_make I get the following output:

[  0%] Built target roscpp_generate_messages_eus
[  0%] Built target tf2_msgs_generate_messages_lisp
[  0%] Built target std_msgs_generate_messages_cpp
[  0%] Built target roscpp_generate_messages_cpp
[  0%] Built target std_msgs_generate_messages_nodejs
[  0%] Built target roscpp_generate_messages_lisp
[  0%] Built target rosgraph_msgs_generate_messages_nodejs
[  0%] Built target roscpp_generate_messages_py
[  0%] Built target roscpp_generate_messages_nodejs
[  0%] Built target std_msgs_generate_messages_lisp
[  0%] Built target std_msgs_generate_messages_py
[  0%] Built target rosgraph_msgs_generate_messages_cpp
[  0%] Built target rosgraph_msgs_generate_messages_py
[  0%] Built target std_msgs_generate_messages_eus
[  0%] Built target rosgraph_msgs_generate_messages_eus
[  0%] Built target rosgraph_msgs_generate_messages_lisp
[  0%] Built target gazebo_msgs_generate_messages_lisp
[  0%] Built target trajectory_msgs_generate_messages_cpp
[  0%] Built target trajectory_msgs_generate_messages_lisp
[  0%] Built target dynamic_reconfigure_generate_messages_py
[  0%] Built target gazebo_msgs_generate_messages_py
[  0%] Built target gazebo_msgs_generate_messages_nodejs
[  0%] Built target trajectory_msgs_generate_messages_py
[  0%] Built target gazebo_msgs_generate_messages_eus
[  0%] Built target dynamic_reconfigure_gencfg
[  0%] Built target trajectory_msgs_generate_messages_nodejs
[  0%] Built target trajectory_msgs_generate_messages_eus
[  0%] Built target gazebo_msgs_generate_messages_cpp
[  0%] Built target dynamic_reconfigure_generate_messages_nodejs
[  0%] Built target dynamic_reconfigure_generate_messages_lisp
[  0%] Built target dynamic_reconfigure_generate_messages_cpp
[  0%] Built target dynamic_reconfigure_generate_messages_eus
[  0%] Built target tf_generate_messages_cpp
[  0%] Built target tf_generate_messages_lisp
[  0%] Built target sensor_msgs_generate_messages_cpp
[  0%] Built target tf2_msgs_generate_messages_nodejs
[  0%] Built target geometry_msgs_generate_messages_nodejs
[  0%] Built target sensor_msgs_generate_messages_eus
[  0%] Built target std_srvs_generate_messages_py
[  0%] Built target tf2_msgs_generate_messages_eus
[  0%] Built target std_srvs_generate_messages_lisp
[  0%] Built target tf_generate_messages_nodejs
[  0%] Built target geometry_msgs_generate_messages_lisp
[  0%] Built target actionlib_msgs_generate_messages_nodejs
[  0%] Built target actionlib_generate_messages_eus
[  0%] Built target geometry_msgs_generate_messages_cpp
[  0%] Built target tf_generate_messages_eus
[  0%] Built target gazebo_ros_gencfg
[  0%] Built target actionlib_msgs_generate_messages_cpp
[  0%] Built target std_srvs_generate_messages_eus
[  0%] Built target geometry_msgs_generate_messages_py
[  0%] Built target geometry_msgs_generate_messages_eus
[  0%] Built target tf_generate_messages_py
[  0%] Built target sensor_msgs_generate_messages_lisp
[  0%] Built target std_srvs_generate_messages_cpp
[  0%] Built target sensor_msgs_generate_messages_nodejs
[  0%] Built target std_srvs_generate_messages_nodejs
[  0%] Built target sensor_msgs_generate_messages_py
[  0%] Built target actionlib_generate_messages_cpp
[  0%] Built target actionlib_msgs_generate_messages_eus
[  0%] Built target actionlib_generate_messages_lisp
[  0%] Built target actionlib_generate_messages_py
[  0%] Built target actionlib_msgs_generate_messages_lisp
[  0%] Built target actionlib_generate_messages_nodejs
[  0%] Built target tf2_msgs_generate_messages_cpp
[  0%] Built target actionlib_msgs_generate_messages_py
[  0%] Built target tf2_msgs_generate_messages_py
[100%] Built target realsense_gazebo_plugin

Support for Intel Realsense D455

Hi,

I was wondering if this repository also supports D455, and if not, what would be the necessary changes (if possible) to integrate it.

Thank you and I appreciate a quick response.

Quest2GM

Corrupted depth images for "compressed" transport

The compressed images on depth/image_raw/compressed seem to have a wrong depth scale (or wrong intrinsics).

When visualising the depth in RViz via the DepthCloud display, the raw depth looks correct:
[screenshot: raw depth in the RViz DepthCloud display]

but the compressed depth is wrongly scaled and corrupted:
[screenshot: compressed depth, wrongly scaled and corrupted]

SDF model

Hi,

Thank you for providing this package. I am looking for the SDF file for the camera model. Can you please explain how to generate a D435 SDF file from the available xacro files? I tried to use the xacro package to first get a URDF file and then convert it to SDF, but it did not produce a proper URDF file. I am using ROS Kinetic.

Here is what I get when I run:

rosrun xacro xacro --inorder d435.urdf.xacro 

Result:

<?xml version="1.0" ?>
<!-- =================================================================================== -->
<!-- |    This document was autogenerated by xacro from d435.urdf.xacro                | -->
<!-- |    EDITING THIS FILE BY HAND IS NOT RECOMMENDED                                 | -->
<!-- =================================================================================== -->
<!--
License: Apache 2.0. See LICENSE file in root directory.
Copyright(c) 2017 Intel Corporation. All Rights Reserved

This is the URDF model for the Intel RealSense 430 camera, in it's
aluminum peripherial evaluation case.
-->
<robot xmlns:xacro="http://ros.org/wiki/xacro">
</robot>

Thanks

Simulated D435: PointCloud is shifted on the left wrt real world

Hi,
I found out that there is some error that shifts the point cloud a bit to the left with respect to the real objects:

[screenshot: Gazebo pose vs simulated D435 point cloud]

Here you can see the red container, which is positioned according to what Gazebo reports, and the green container, which is positioned according to the simulated D435.

This happens for all objects in any position relative to the camera. Has anyone else experienced this issue?

After some trial and error I found that the error is around 0.015 m.
At the moment, to remove it, I shift the point cloud by this amount after it is generated by the plugin.
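For reference, the workaround described above could look roughly like the following Python sketch, which republishes the plugin's cloud with a constant offset applied. The axis, sign, and topic names are assumptions for illustration, and iterating over every point in Python is slow, so treat this as a diagnostic hack rather than a real fix:

#!/usr/bin/env python
# Rough sketch: republish the cloud with a fixed offset applied.
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

OFFSET_X = 0.015  # metres, applied along x (assumed axis and sign)

def cloud_cb(msg):
    shifted = [(x + OFFSET_X, y, z, rgb) for x, y, z, rgb in
               pc2.read_points(msg, field_names=('x', 'y', 'z', 'rgb'))]
    pub.publish(pc2.create_cloud(msg.header, msg.fields[:4], shifted))

if __name__ == '__main__':
    rospy.init_node('cloud_shift')
    pub = rospy.Publisher('camera/depth/color/points_shifted',
                          PointCloud2, queue_size=1)
    rospy.Subscriber('camera/depth/color/points', PointCloud2, cloud_cb)
    rospy.spin()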

Can not subscribe to camera topics via Python/Cpp Node

Hey,

I am not able to subscribe to the camera topics from C++/Python nodes. Visualization in RViz and rqt works, though.

I wrote a small demo program:

import rospy
from sensor_msgs.msg import Image as msg_Image

def callback(data):
    print("hello")
    
def listener():
    rospy.init_node('listener')
    depth_image_topic = 'camera/color/image_raw'
    sub = rospy.Subscriber(depth_image_topic, msg_Image, callback)

    rospy.spin()

if __name__ == '__main__':
    listener()

And although I can see a video stream for the topic camera/color/image_raw in RViz and rqt, the callback function is never called, so "hello" is never printed.

If I use the Kinect plugin everything is fine and the callback function is called and "hello" is printed many times.

Has anyone had the same problem, or a hint about where I could take a look?

Cheers,
Johannes

Align Depth and Color Images

Thanks for the repo! I have managed to get it running and I am able to get the images. I notice that the depth images are 720x1280 while the color images are 1080x1920 (HxW). How can I align the two streams of images to a common format?
I require this because I wish to segment objects using the color images and mask them in the corresponding depth images.
Thanks!
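Proper alignment needs the intrinsics and extrinsics of both streams, but in simulation the two sensors are rendered from (almost) the same pose, so a crude approximation is to resize the depth image to the colour resolution with nearest-neighbour interpolation. A Python sketch; the topic names and the shared-viewpoint assumption are mine:

#!/usr/bin/env python
# Crude sketch: bring the 720x1280 depth image up to the 1080x1920 colour
# resolution by nearest-neighbour resizing. This assumes both sensors share
# the same viewpoint and field of view, which is only approximately true.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def depth_cb(msg):
    depth = bridge.imgmsg_to_cv2(msg)  # keep the original depth encoding
    up = cv2.resize(depth, (1920, 1080), interpolation=cv2.INTER_NEAREST)
    out = bridge.cv2_to_imgmsg(up, encoding=msg.encoding)
    out.header = msg.header
    pub.publish(out)

if __name__ == '__main__':
    rospy.init_node('depth_upscale')
    pub = rospy.Publisher('camera/depth/image_upscaled', Image, queue_size=1)
    rospy.Subscriber('camera/depth/image_raw', Image, depth_cb)
    rospy.spin()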

Align Depth ROS Topic

Hi everyone!

I'm wondering if it would be possible to use the align_depth topic from the original ROS realsense drivers with this package. I need the color image aligned with the depth image in Gazebo.

Thanks in advance for any help!

IMU data

Hi

I use an Intel D435i for simulation in Gazebo
and I want to add IMU data to the simulation.

Could you offer some instructions,
or suggest another plugin that could provide this?

Thank you!

Compatability with Realsense D415

Hey,
First of all, great work, much appreciated! I would like to know whether this plugin would also work with the Intel RealSense D415. After a first look, I could not find any good reason why it would not (given the corresponding D415 Gazebo / URDF files); is that correct?
