kinovarobotics / ros_kortex_vision
ROS package for the KINOVA® KORTEX™ arm vision module
License: BSD 3-Clause "New" or "Revised" License
Hi,
I'm trying to roslaunch the color_only node with roslaunch kinova_vision kinova_vision_color_only.launch color_camera_info_url:=file:///homes/corcodel/ws_kortex/src/ros_kortex_vision/launch/calibration/default_color_calib_1920x1080.ini
but I'm getting the message
[ WARN] [1658266356.621532826]: [color]: Calibration file sensor resolution (1080x1920 pixels) doesn't match stream resolution (720x1280 pixels)
which is not unexpected, since I never specified the resolution I want. Can you please tell me where to set the resolution? Neither launcher has a launch argument for it.
Thank you.
Radu,
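For what it's worth, the warning only compares the resolution declared in the calibration file against the live stream's resolution. A quick way to check which resolution a given calibration file declares (a hypothetical helper; it assumes the oST-style INI layout used by camera_calibration_parsers, where `width` and `height` each appear on their own line followed by the value, as in the files under launch/calibration/):

```python
def calib_resolution(ini_text):
    """Extract (width, height) from an oST-style camera calibration INI.

    Assumes the numeric value appears on the line directly after the
    'width' / 'height' key line.
    """
    lines = [ln.strip() for ln in ini_text.splitlines() if ln.strip()]
    dims = {}
    for key in ("width", "height"):
        idx = lines.index(key)
        dims[key] = int(lines[idx + 1])
    return dims["width"], dims["height"]

# Minimal INI fragment in that layout:
sample = """
[image]

width
1920

height
1080
"""
print(calib_resolution(sample))  # (1920, 1080)
```

If the tuple printed here doesn't match what the camera is actually streaming, you'll get exactly that warning.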
Hello, I managed to get a grayscale image of (480,640) through the code below.
import cv2

# Open the depth RTSP stream through GStreamer.
cap_depth = cv2.VideoCapture("rtsp://admin:[email protected]/depth", cv2.CAP_GSTREAMER)
cv2.namedWindow('kinova_depth', cv2.WINDOW_AUTOSIZE)
while True:
    ret2, frame_depth = cap_depth.read()
    if not ret2:
        continue
    cv2.imshow('kinova_depth', frame_depth)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap_depth.release()
cv2.destroyAllWindows()
However, the output image was in uint8 format, ranging from 0 to 255, so accurate depth information could not be obtained.
I tried to approximate uint16 values with (pixel/255)*65535, but large errors remained because the values had already been truncated to uint8.
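The information is destroyed by the 8-bit quantization itself, so no rescaling can bring it back. A small self-contained sketch (synthetic values, depth in millimetres) of the error that round trip introduces:

```python
import numpy as np

# Synthetic 16-bit depth values (millimetres), as the camera would provide.
depth_u16 = np.array([500, 501, 1000, 2000, 65535], dtype=np.uint16)

# What an 8-bit video pipeline does: quantize 0..65535 down to 0..255.
depth_u8 = (depth_u16.astype(np.float64) / 65535 * 255).astype(np.uint8)

# Attempting to undo it, as in (pixel / 255) * 65535:
recovered = (depth_u8.astype(np.float64) / 255 * 65535).astype(np.uint16)

# The quantization step is 65535 / 255 = 257, so depths up to ~257 mm apart
# collapse onto the same 8-bit code and cannot be told apart afterwards.
print(np.abs(recovered.astype(int) - depth_u16.astype(int)).max())
```

So the only real fix is to read the depth in its native 16-bit form rather than through an 8-bit image.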
I did a lot of searching and found several approaches, but I couldn't find the exact way to get depth data in uint16 format from the RealSense (D410) module attached to the Gen3.
I know that using ros_rgbd.launch I can get the point cloud, so I'm pretty sure the uint16 information is available, but I don't know how to access it.
I'd like to know how to get depth information from the ros_kortex_vision repo, or to see an example, so I'm leaving this question.
Thank you
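To get the 16-bit values without going through an 8-bit video pipeline, one option is to subscribe to the driver's raw depth topic instead of opening the RTSP stream yourself. A sketch, assuming the default topic /camera/depth/image_raw with 16UC1 encoding (millimetres) and tightly packed rows; the decoding helper mirrors what cv_bridge's imgmsg_to_cv2(msg, 'passthrough') does for this encoding:

```python
import numpy as np

def depth_from_msg(msg):
    """Decode a sensor_msgs/Image with 16UC1 encoding into a (H, W) uint16 array.

    Assumes tightly packed rows (msg.step == 2 * msg.width).
    """
    assert msg.encoding == "16UC1"
    dtype = np.dtype(np.uint16).newbyteorder(">" if msg.is_bigendian else "<")
    return np.frombuffer(msg.data, dtype=dtype).reshape(msg.height, msg.width)

def listen():
    """Subscriber sketch -- run inside a ROS 1 environment with the driver up."""
    import rospy
    from sensor_msgs.msg import Image

    def on_depth(msg):
        depth_mm = depth_from_msg(msg)
        rospy.loginfo("centre depth: %d mm",
                      depth_mm[msg.height // 2, msg.width // 2])

    rospy.init_node("depth_listener")
    rospy.Subscriber("/camera/depth/image_raw", Image, on_depth)
    rospy.spin()
```

The topic name and encoding are assumptions drawn from the rgbd launch file; run `rostopic info` on the depth topics of your setup to confirm them before relying on this.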
Hi, I'd like to know how to read the Cartesian position and rotation matrix of the end effector with respect to the robot base frame. Is there a ROS topic responsible for this, or can they be calculated from some other information? Thank you very much!
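For reference, the kortex_driver publishes the arm's TF tree, so one approach is to look up the base-to-tool transform with a tf listener and convert the returned quaternion into a rotation matrix. A sketch, assuming ROS 1 and the default ros_kortex frame names (base_link, tool_frame; check yours with `rosrun tf tf_echo`):

```python
import numpy as np

def quat_to_rot(x, y, z, w):
    """Rotation matrix from a quaternion (same x, y, z, w order as tf)."""
    n = np.sqrt(x * x + y * y + z * z + w * w)
    x, y, z, w = x / n, y / n, z / n, w / n
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])

def tool_pose(base="base_link", tool="tool_frame"):
    """Translation + rotation of the tool in the base frame (ROS 1 only)."""
    import rospy, tf  # requires a running ROS master and the kortex_driver
    listener = tf.TransformListener()
    listener.waitForTransform(base, tool, rospy.Time(0), rospy.Duration(5.0))
    trans, quat = listener.lookupTransform(base, tool, rospy.Time(0))
    return np.asarray(trans), quat_to_rot(*quat)
```

tool_frame is the frame name I would expect from the Gen3 description in ros_kortex; if your TF tree differs, substitute the frame reported by `rosrun tf view_frames`.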
Hi,
I want to calibrate the camera of the Kinova Gen3; however, I found that the camera has its auto-focus function enabled.
How can I disable this function?
Thanks.
Dear developers,
I have made some changes to make ros_kortex_vision work with ROS 2, for a competition.
You can find the changes in my repository:
https://github.com/RRL-ALeRT/ros_kortex_vision
Feel free to check it out.
Hi,
When I was doing some transforms with the point cloud from the wrist camera, I noticed that the registered points are in the camera_color_frame coordinate system. Shouldn't they be in the camera_depth_frame system?
Thank you.
Hi,
I'm trying to create a depth-to-color aligned image stream, similar to what the RealSense driver can do with roslaunch realsense2_camera rs_rgbd.launch. I tried launching kinova_vision_rgbd.launch from kortex_vision, but I only get a registered point cloud; which is good, but I need to go one step further and create an aligned depth stream. For this I would need both cameras' intrinsics (which we have), but I also need the extrinsics, and I have no idea where to get those. Could you help me with that?
Thank you.
Best,
Radu
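On the extrinsics: the launch file publishes static transforms between camera_depth_frame and camera_color_frame (the camera_*_tf_publisher nodes), and the device itself can report them through the Kortex API's vision configuration service, so those are two places to look. Given intrinsics and extrinsics, per-pixel registration is just deproject, transform, reproject; a naive sketch with no occlusion handling (all names here are illustrative, not from the package):

```python
import numpy as np

def register_depth_to_color(depth_mm, K_d, K_c, R, t, color_shape):
    """Naively register a depth image into the color camera's pixel grid.

    depth_mm:    (H, W) uint16 depth in millimetres (0 = no measurement)
    K_d, K_c:    3x3 intrinsics of the depth and color cameras
    R, t:        depth-to-color rotation (3x3) and translation (3, metres)
    color_shape: (rows, cols) of the color image
    """
    H, W = depth_mm.shape
    out = np.zeros(color_shape, dtype=np.uint16)
    fx, fy = K_d[0, 0], K_d[1, 1]
    cx, cy = K_d[0, 2], K_d[1, 2]
    v, u = np.mgrid[0:H, 0:W]
    z = depth_mm.astype(np.float64) / 1000.0  # millimetres -> metres
    valid = z > 0
    # Deproject valid depth pixels to 3D points in the depth camera frame ...
    X = (u - cx) / fx * z
    Y = (v - cy) / fy * z
    pts = np.stack([X[valid], Y[valid], z[valid]])
    # ... move them into the color camera frame ...
    pts_c = R @ pts + t.reshape(3, 1)
    # ... and project them into the color image.
    proj = K_c @ pts_c
    uc = np.round(proj[0] / proj[2]).astype(int)
    vc = np.round(proj[1] / proj[2]).astype(int)
    keep = (uc >= 0) & (uc < color_shape[1]) & (vc >= 0) & (vc < color_shape[0])
    out[vc[keep], uc[keep]] = depth_mm[valid][keep]
    return out
```

In practice the depth_image_proc/register nodelet implements this same pipeline from the camera_info topics and TF, so wiring that up is usually easier than rolling your own.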
I have created a Docker container to utilise the vision package. I've run this with great success when the container is running on an Ubuntu host; however, whenever it's run on a Windows system, the streams fail to connect. This even extends to running the native ROS package on an Ubuntu virtual machine, if that VM is run on a Windows host. Do you have any suggestions?
Hello, is there an example of publishing aligned color and depth image topics, similar to "/camera/align_depth/image" at 480x270 and "/camera/align_color/image_raw" at 480x270?
When I visualize the point cloud from the realsense in rviz the color and depth images are not aligned properly. It seems as though the color image is shifted to the right and I don't know how to correct this misalignment.
I'm running the kortex_vision_rgbd.launch file to start up the RealSense on the Kortex arm, and it is currently using the 1280x720 calibration file for the color stream and the 480x270 calibration file for the depth stream. I looked through the launch files and couldn't find any information about configuring the depth registration. How can I properly align the color and depth images?
ROS version: Melodic
Picking up from #18
We have a functional migration of this package to ROS 2 in PickNikRobotics#1. These changes will help ROS 2 developers using Kinova arms, alongside the updated ros2 kortex drivers in https://github.com/Kinovarobotics/ros2_kortex.
If possible, we would recommend creating a basic ros2 branch in the official, Kinova-owned repository and merging our changes there for wider consumption. Alternatively, we could create a Kinovarobotics/ros2_kortex_vision repository and merge the PickNik-owned fork there.
Looking for guidance.
I launched the kortex_driver and was able to home the robot through RViz. Then I launched the kinova_vision file to add the color camera topic to RViz; the camera came up, but I was then unable to move the robot anymore. I got this message:
[my_gen3/my_gen3_driver-2] process has died [pid 22054, exit code -6, cmd /home/maru/catkin_ws/devel/lib/kortex_driver/kortex_arm_driver __name:=my_gen3_driver __log:=/home/maru/.ros/log/2e9f0704-7bb4-11eb-a70c-00216be2491f/my_gen3-my_gen3_driver-2.log].
log file: /home/maru/.ros/log/2e9f0704-7bb4-11eb-a70c-00216be2491f/my_gen3-my_gen3_driver-2*.log
[ERROR] [1614730062.376534521]: Controller is taking too long to execute trajectory (the expected upper bound for the trajectory execution was 16.142000 seconds). Stopping trajectory.
[ INFO] [1614730062.376600621]: Cancelling execution for gen3_joint_trajectory_controller
[ INFO] [1614730062.376707557]: Completed trajectory execution with status TIMED_OUT ...
[ WARN] [1614730072.484201720]: Unable to transform object from frame 'camera_depth_frame' to planning frame 'base_link' (Could not find a connection between 'base_link' and 'camera_depth_frame' because they are not part of the same tree.Tf has two or more unconnected trees.)
[ WARN] [1614730072.484351976]: Unable to transform object from frame 'camera_link' to planning frame 'base_link' (Could not find a connection between 'base_link' and 'camera_link' because they are not part of the same tree.Tf has two or more unconnected trees.)
[ WARN] [1614730072.484435858]: Unable to transform object from frame 'camera_color_frame' to planning frame 'base_link' (Could not find a connection between 'base_link' and 'camera_color_frame' because they are not part of the same tree.Tf has two or more unconnected trees.)
Then I tried to move the robot through the Kinova Web App, but I had been logged out, and when trying to log back in I saw the message: "Connection Error. Make sure Robot is started." Nothing had changed: the robot was still on, and the camera was still streaming and updating in RViz.
Hi @alexvannobel ,
When I run roslaunch kinova_vision.launch (from ~/catkin_ws/src/ros_kortex_vision/launch), the terminal shows the following errors, and the image data cannot be obtained in RViz.
process[camera/camera_nodelet_manager-1]: started with pid [55127]
process[camera/depth/kinova_vision_depth-2]: started with pid [55128]
process[camera/color/kinova_vision_color-3]: started with pid [55129]
process[camera/camera_depth_tf_publisher-4]: started with pid [55130]
process[camera/camera_color_tf_publisher-5]: started with pid [55131]
process[camera/color_rectify_color-6]: started with pid [55145]
process[camera/depth_rectify_depth-7]: started with pid [55153]
process[camera/depth_metric_rect-8]: started with pid [55157]
[ INFO] [1632800861.267624745]: Initializing nodelet with 4 worker threads.
process[camera/depth_metric-9]: started with pid [55165]
process[camera/depth_points-10]: started with pid [55174]
[ INFO] [1632800861.295277407]: Using gstreamer config from rosparam: "rtspsrc location=rtsp://192.168.1.10/depth latency=30 ! rtpgstdepay"
[ INFO] [1632800861.351253797]: Using gstreamer config from rosparam: "rtspsrc location=rtsp://192.168.1.10/color latency=30 ! rtph264depay ! avdec_h264 ! videoconvert"
[ERROR] [1632800861.683547761]: [depth]: Failed to start stream
[ INFO] [1632800861.684516204]: [depth]: Trying to connect... (attempt #1)
[ERROR] [1632800862.087753582]: [color]: Failed to start stream
[ INFO] [1632800862.089763737]: [color]: Trying to connect... (attempt #1)
[ERROR] [1632800865.022285919]: [depth]: Failed to start stream
[ INFO] [1632800865.023103441]: [depth]: Trying to connect... (attempt #2)
[ERROR] [1632800865.424782482]: [color]: Failed to start stream
[ INFO] [1632800865.426127682]: [color]: Trying to connect... (attempt #2)
[ERROR] [1632800868.346317575]: [depth]: Failed to start stream
I'm having an issue building the vision package during the catkin_make stage. It's throwing an error relating to camera_calibration_parsersConfig.cmake. See the full output below:
Base path: /catkin_ws
Source space: /catkin_ws/src
Build space: /catkin_ws/build
Devel space: /catkin_ws/devel
Install space: /catkin_ws/install
####
#### Running command: "cmake /catkin_ws/src -DCATKIN_DEVEL_PREFIX=/catkin_ws/devel -DCMAKE_INSTALL_PREFIX=/catkin_ws/install -G Unix Makefiles" in "/catkin_ws/build"
####
-- Using CATKIN_DEVEL_PREFIX: /catkin_ws/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/kinetic
-- This workspace overlays: /opt/ros/kinetic
-- Found PythonInterp: /usr/bin/python2 (found suitable version "2.7.12", minimum required is "2")
-- Using PYTHON_EXECUTABLE: /usr/bin/python2
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /catkin_ws/build/test_results
-- Found gtest sources under '/usr/src/gmock': gtests will be built
-- Found gmock sources under '/usr/src/gmock': gmock will be built
-- Found PythonInterp: /usr/bin/python2 (found version "2.7.12")
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.7.29
-- BUILD_SHARED_LIBS is on
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 1 packages in topological order:
-- ~~ - kinova_vision
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'kinova_vision'
-- ==> add_subdirectory(ros_kortex_vision)
CMake Warning at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:76 (find_package):
Could not find a package configuration file provided by
"camera_calibration_parsers" with any of the following names:
camera_calibration_parsersConfig.cmake
camera_calibration_parsers-config.cmake
Add the installation prefix of "camera_calibration_parsers" to
CMAKE_PREFIX_PATH or set "camera_calibration_parsers_DIR" to a directory
containing one of the above files. If "camera_calibration_parsers"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
ros_kortex_vision/CMakeLists.txt:12 (find_package)
-- Could not find the required component 'camera_calibration_parsers'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found.
CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
Could not find a package configuration file provided by
"camera_calibration_parsers" with any of the following names:
camera_calibration_parsersConfig.cmake
camera_calibration_parsers-config.cmake
Add the installation prefix of "camera_calibration_parsers" to
CMAKE_PREFIX_PATH or set "camera_calibration_parsers_DIR" to a directory
containing one of the above files. If "camera_calibration_parsers"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
ros_kortex_vision/CMakeLists.txt:12 (find_package)
-- Configuring incomplete, errors occurred!
See also "/catkin_ws/build/CMakeFiles/CMakeOutput.log".
See also "/catkin_ws/build/CMakeFiles/CMakeError.log".
Invoking "cmake" failed
Hello,
I was wondering if there is a way to set the camera options specified in the Kortex API documentation through the parameters of the vision node.
This could be interesting in some contexts, where setting the exposure, the saturation, or the gain of the camera may give better images to work with.
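For reference, the Kortex API itself exposes these settings through its vision configuration service (an OptionValue set on the color sensor). A hedged sketch against the Python bindings; the module path, message, and enum names here are taken from memory of the API's vision-configuration examples and should be verified against your API version before use:

```python
def set_color_option(vision_config, device_id, option, value):
    """Set one color-sensor option via the Kortex API's VisionConfig service.

    vision_config: a connected VisionConfigClient from the kortex_api bindings
    device_id:     the vision module's device handle id
    option:        e.g. VisionConfig_pb2.OPTION_EXPOSURE / OPTION_SATURATION
    value:         the option value, in the range the device reports
    """
    # Import lazily so this file can be inspected without kortex_api installed.
    from kortex_api.autogen.messages import VisionConfig_pb2

    msg = VisionConfig_pb2.OptionValue()
    msg.sensor = VisionConfig_pb2.SENSOR_COLOR
    msg.option = option
    msg.value = value
    vision_config.SetOptionValue(msg, device_id)
```

Since these go straight to the device, they could in principle be wrapped as parameters of the vision node, but that wiring does not exist in this sketch.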