
contact_graspnet's Introduction

Contact-GraspNet

Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes

Martin Sundermeyer, Arsalan Mousavian, Rudolph Triebel, Dieter Fox
ICRA 2021

paper, project page, video

Installation

This code has been tested with Python 3.7, TensorFlow 2.2, and CUDA 11.1

Create the conda env

conda env create -f contact_graspnet_env.yml

Troubleshooting

  • Recompile pointnet2 tf_ops:
sh compile_pointnet_tfops.sh

Hardware

Training: 1x Nvidia GPU >= 24GB VRAM, >=64GB RAM
Inference: 1x Nvidia GPU >= 8GB VRAM (might work with less)

Download Models and Data

Model

Download trained models from here and copy them into the checkpoints/ folder.

Test data

Download the test data from here and copy them into the test_data/ folder.

Inference

Contact-GraspNet can directly predict a 6-DoF grasp distribution from a raw scene point cloud. However, to obtain object-wise grasps, remove background grasps, and achieve denser proposals, it is highly recommended to use (unknown) object segmentation [e.g. 1, 2] as a preprocessing step and then use the resulting segmentation map to crop local regions and filter grasp contacts.

Given a .npy/.npz file with a depth map (in meters), camera matrix K and (optionally) a 2D segmentation map, execute:

python contact_graspnet/inference.py \
       --np_path=test_data/*.npy \
       --local_regions --filter_grasps

--> close the window to go to the next scene

Given a .npy/.npz file with just a 3D point cloud (in meters), execute for example:

python contact_graspnet/inference.py --np_path=/path/to/your/pc.npy \
                                     --forward_passes=5 \
                                     --z_range=[0.2,1.1]

--np_path: input .npz/.npy file(s) with 'depth', 'K' and optionally 'segmap', 'rgb' keys. For processing a Nx3 point cloud instead use 'xyz' and optionally 'xyz_color' as keys (see the sketch after this list).
--ckpt_dir: relative path to the checkpoint directory. By default checkpoints/scene_test_2048_bs3_hor_sigma_001 is used. For very clean / noisy depth data consider scene_2048_bs3_rad2_32 / scene_test_2048_bs3_hor_sigma_0025 trained with no / strong noise.
--local_regions: Crop 3D local regions around object segments for inference (only works with segmap).
--filter_grasps: Filter grasp contacts such that they only lie on the surface of object segments (only works with segmap).
--skip_border_objects: Ignore segments touching the depth map boundary.
--forward_passes: Number of (batched) forward passes. Increase to sample more potential grasp contacts.
--z_range: [min, max] z values in meters used to crop the input point cloud, e.g. to avoid grasps in the foreground/background (as above).
--arg_configs TEST.second_thres:0.19 TEST.first_thres:0.23: Overwrite config confidence thresholds for successful grasp contacts to get more/less grasp proposals.
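
For reference, here is a minimal sketch of how an input file could be assembled from the keys listed above ('depth', 'K', optionally 'segmap'/'rgb', or 'xyz' for a raw point cloud). The dummy array shapes, dtypes and the use of np.savez are illustrative assumptions, not part of the repository's tooling; replace them with your real sensor data.

import numpy as np

# HxW depth map in meters and 3x3 camera intrinsics (example values)
depth = np.ones((480, 640), dtype=np.float32)
K = np.array([[616.0,   0.0, 320.0],
              [  0.0, 616.0, 240.0],
              [  0.0,   0.0,   1.0]])
# optional HxW instance segmentation map with one integer id per object
segmap = np.zeros((480, 640), dtype=np.uint8)

np.savez('test_data/my_scene.npz', depth=depth, K=K, segmap=segmap)

# Alternatively, for an Nx3 point cloud use the 'xyz' key (points in meters, camera frame)
pc = np.random.rand(2048, 3).astype(np.float32)
np.savez('test_data/my_pc.npz', xyz=pc)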

Training

Download Data

Download the Acronym dataset, ShapeNet meshes and make them watertight, following these steps.

Download the training data consisting of 10000 table top training scenes with contact grasp information from here and extract it to the same folder:

acronym
├── grasps
├── meshes
├── scene_contacts
└── splits

Train Contact-GraspNet

When training on a headless server set the environment variable

export PYOPENGL_PLATFORM='egl'

Start training with config contact_graspnet/config.yaml

python contact_graspnet/train.py --ckpt_dir checkpoints/your_model_name \
                                 --data_path /path/to/acronym/data

Generate Contact Grasps and Scenes yourself (optional)

The scene_contacts downloaded above are generated from the Acronym dataset. To generate/visualize table-top scenes yourself, also pip install the acronym_tools package in your conda environment as described in the acronym repository.

In the first step, object-wise 6-DoF grasps are mapped to their contact points, which are saved in mesh_contacts:

python tools/create_contact_infos.py /path/to/acronym

From the generated mesh_contacts you can create table-top scenes, which are saved in scene_contacts, with:

python tools/create_table_top_scenes.py /path/to/acronym

This takes ~3 days on a single thread. Run the command several times to process on multiple cores in parallel.

You can also visualize existing table-top scenes and grasps

python tools/create_table_top_scenes.py /path/to/acronym \
       --load_existing scene_contacts/000000.npz -vis

Citation

@inproceedings{sundermeyer2021contact,
  title={Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes},
  author={Sundermeyer, Martin and Mousavian, Arsalan and Triebel, Rudolph and Fox, Dieter},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021}
}

contact_graspnet's People

Contributors

abhishek47kashyap, arsalan-mousavian, martinsmeyer


contact_graspnet's Issues

The performance of the algorithm is greatly affected by the perspective of the collected point cloud

I found that the algorithm only seems to achieve its best performance for point clouds collected from certain viewpoints. For point clouds collected from other viewpoints, or after a coordinate transformation of the point cloud, the generated grasps are poor or no usable grasps are generated at all. Is there an optimal point cloud acquisition perspective? Or is there a way to transform the collected point cloud to this perspective?

Thanks

Output grasp pose

Hi,

I am running into some problems when transforming the output grasp pose to the pose that the robot needs. May I ask if the output grasp pose is in the actual camera coordinate system or some other coordinate system? Thank you so much! @MartinSmeyer
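
A hedged sketch of the usual transform chain, assuming the predicted grasps are 4x4 poses in the camera frame (as the name pred_grasps_cam suggests): chaining one with the camera pose from a hand-eye calibration expresses it in the robot base frame. T_base_cam below is a placeholder from your own calibration, not something provided by the repository.

import numpy as np

T_base_cam = np.eye(4)                    # 4x4 camera pose in the robot base frame (placeholder from calibration)
T_cam_grasp = np.eye(4)                   # one 4x4 grasp pose as predicted in camera coordinates

T_base_grasp = T_base_cam @ T_cam_grasp   # grasp pose expressed in the robot base frame
position = T_base_grasp[:3, 3]            # translation in meters
rotation = T_base_grasp[:3, :3]           # 3x3 rotation matrix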

tf_sampling_so.so: undefined symbol: _ZN10tensorflow14kernel_factory17OpKernel

tensorflow.python.framework.errors_impl.NotFoundError: /home/cyf/contact_graspnet/pointnet2/tf_ops/sampling/tf_sampling_so.so: undefined symbol: _ZN10tensorflow14kernel_factory17OpKernelRegistrar12InitInternalEPKNS_9KernelDefEN4absl11string_viewESt10unique_ptrINS0_15OpKernelFactoryESt14default_deleteIS8_EE

I found that it may be because the g++ version is > 4, or maybe some other problem. How can I solve it?

cannot obtain a strong and accurate grasp net

I followed all of the instructions step by step, but my grasp map looks terrible, even after running on the sample file:

[images]

Here is the image; I already generated the segmentation mask, but my grasp results were bad:

[images]

Any ideas how I can fix that, please?

No grab available

I have a problem. When I use the model I trained for prediction, the success rate of grasping is very low, about 0.003.
I hope you can give me some help to solve this problem.

No good grasps

Hi, thank you for your sharing!
I've run the examples but failed to get good grasps. Could you suggest some solutions for this?

[image]

inference runtime

Hi, thanks for the code. May I ask how much time it usually takes to run inference on a point cloud of 10k points? For some reason it is super slow for us, and we suspect it is because the GPUs are not utilized (we have two 3090s). We then updated the TF version to 2.5, but that caused conflicts in the PointNet2 implementation. Is there a way to get around this? Thanks.

Scores model output

I have noticed that the model produces scores when the segmentation mask is set to None. Do these scores have any significance, and if so, what is it?

P.S.
I added your model as a service call via ROSBRIDGE, and, currently, I am naively selecting the top 5 poses with the highest score values without a segmentation mask.
Also, I added the appropriate transforms to get it working on the Interbotix WX250s. It's surprisingly good even before training, just filtering out invalid gripper widths. I am a senior in undergrad btw, so my journey has been similar to: #8
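
A minimal sketch of the "top 5 by score" selection described above, assuming the predicted grasps and scores for one scene are NumPy arrays of matching length (how they are keyed per segment is not shown here):

import numpy as np

def top_k_grasps(pred_grasps, scores, k=5):
    # Return the k grasp poses with the highest predicted scores.
    order = np.argsort(scores)[::-1]          # indices sorted by descending score
    return pred_grasps[order[:k]], scores[order[:k]]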

ModuleNotFoundError: No module named 'tensorflow.compat'

Hi, I followed the install instructions and installed tensorflow-gpu 2.2.0, but when running inference.py I got the error:

Traceback (most recent call last):
File "contact_graspnet/inference.py", line 12, in
import tensorflow.compat.v1 as tf
ModuleNotFoundError: No module named 'tensorflow.compat'

Thanks.

Gripper Width

Hello, thank you again for your code.
I tried to change the gripper width in the config file, but I didn't see any difference at inference. Do I need to train again to change this feature? I also saw the filter on the gripper openings, but I didn't find a consistent relation; could you explain it to me, please?

Fingers length

Hello, thank you for your awesome code.
I'm trying to use your code with a different gripper; the fingers are longer, but I haven't been able to find where to adjust this.
Could you help me, please?

About the format of the trained model

Thank you for making the code public. I'm trying this method, but I found that the provided model is in model.ckpt format and some .meta files are missing, which makes my conversion to '.pb'/'.trt' format always fail. Could you provide the trained model in '.pb' format, or a method for converting it to other formats? I want to try an experimental deployment.

Fine-tuning

Hi,

I have been successful at picking up objects with a gripper different from the Panda robot using the weights from scene_test_2048_bs3_hor_sigma_001, but I would like to attempt fine-tuning those pretrained weights for the gripper I'm using. I have also attempted training as you explain in the readme, but I could only do it with a V100 GPU (16GB), which is smaller than what you suggest. The result from training is not good: the confidence of the grasps doesn't meet the thresholds set (high=0.25 and low=0.2), and if I visualize them on the test data without using the threshold they look all over the place; only a few of them look okay. I believe that fine-tuning will require much less computational resources. I think I could do this by setting the optimizer to only apply the gradient to the layers I want, which are the last fully connected layers, but please let me know if there is a better way. So, when I run train.py with ckpt_dir='checkpoints/scene_test_2048_bs3_hor_sigma_001' I get the following error:

  File "contact_graspnet/train.py", line 225, in <module>
    train(global_config, ckpt_dir)
  File "contact_graspnet/train.py", line 78, in train
    loss_ops = load_labels_and_losses(grasp_estimator, contact_infos, global_config)
  File "/home/juancm/contact_graspnet/contact_graspnet/tf_train_ops.py", line 86, in load_labels_and_losses
    tf_pos_finger_diffs, tf_scene_idcs = load_contact_grasps(contact_infos, global_config['DATA'])
  File "/home/juancm/contact_graspnet/contact_graspnet/tf_train_ops.py", line 254, in load_contact_grasps
    tf_pos_contact_points = tf.constant(np.array(pos_contact_points), tf.float32)
  File "/root/miniconda3/envs/contact_graspnet_env/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 161, in constant_v1
    allow_broadcast=False)
  File "/root/miniconda3/envs/contact_graspnet_env/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 300, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "/root/miniconda3/envs/contact_graspnet_env/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py", line 522, in make_tensor_proto
    "Cannot create a tensor proto whose content is larger than 2GB.")
ValueError: Cannot create a tensor proto whose content is larger than 2GB.

Do you think this is due to my hardware (RTX 3060 and 32GB RAM)? I wasn't planning to train on my laptop, but I at least wanted to check that the training works before doing it on a server.

Thank you! :)

Noise Levels for Training

Hello. Thank you for sharing amazing repository. I have a question about noise levels for retraining.

Regarding the provided pretrained models, the following description is available:

By default, the checkpoint/scene_test_2048_bs3_hor_sigma_001 is used. For very clean or noisy depth data, consider using scene_2048_bs3_rad2_32 or scene_test_2048_bs3_hor_sigma_0025 trained with no or strong noise.

What noise level would this be equivalent to when retraining according to the training procedure outlined in the README? I would appreciate any guidance on how to retrain with strong noise.

Problems encountered in the training process

When I changed use_farthest_point in the configuration file to true, the training time became very long. It took about 3 seconds to train a scene before the change, and about 180 seconds per scene after it. I also monitored the GPU usage and found that the GPU was barely utilized.

Gripper control points

Hello. I am an undergraduate student currently using contact-graspnet on a project.

I am using contact-graspnet on a different robot (TIAGo from PAL Robotics) and the gripper differs from the Panda robot. In order to be safe, I move the generated grasps backwards by an offset and then perform a forward motion to get the object between the robot's fingers. You have trained the network with the Panda's gripper configuration by using its STL model and also points on the gripper (contact_graspnet/gripper_control_points/panda_gripper_coords.yml). I wonder if retraining the network using gripper coordinates for TIAGo's gripper would improve the generated grasps. I think one of the things it would improve would be the grasps generated based on the gripper width constraints, as the Panda gripper is wider than TIAGo's gripper.

Thanks in advance!

PyTorch implementation

Thank you for making the code public.

Is there any possibility of providing a PyTorch implementation of your code?

Result different during inference

Hi,

I was using default parameters to test the model on 7.npy, and the resulting grasps are sparse as shown in the image below. May I ask if I was doing anything wrong?

Here is the command I used: python contact_graspnet/inference.py --np_path=test_data/7.npy --local_regions --filter_grasps

Checkpoint: checkpoints/scene_test_2048_bs3_hor_sigma_001

Thank you!
@MartinSmeyer
[image]

Grasping problem of real robotic arm

Thank you for making the code public.

When I used a Kinova Gen3 to perform the grasping task, I used inference.py to generate grasp poses (I changed gripper_width and center_to_tip in config.yaml), but the robot arm always failed to reach the specified position. Do I need to change any other parameters, or do I need to retrain the model? If the model needs to be retrained, what parameters need to be modified?

Normalizing depth image for inference

Hi, @MartinSmeyer

I wanted to test inference.py on real-time data from a RealSense depth camera, and I realized the depth image from the provided test data is normalized, while the depth image from the RealSense camera is in uint16 format.
I tried to normalize using this formula, but I am not certain about the range:
depth_normalized = (depth - min_depth) / (max_depth - min_depth)
So, could you provide the range used to normalize, or any other way to preprocess the depth image?

Thanks,
Amanuel
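
A hedged note and sketch: since the README states the depth map should be in meters, a uint16 RealSense frame (usually in millimeters, i.e. a depth scale of 0.001) would be converted by scaling rather than by min-max normalization. The 0.001 factor is the common default; query your device for its actual depth scale.

import numpy as np

depth_raw = np.zeros((480, 640), dtype=np.uint16)   # replace with your RealSense depth frame
depth_m = depth_raw.astype(np.float32) * 0.001      # millimeters -> meters
depth_m[depth_raw == 0] = 0.0                       # 0 means "no reading" on RealSense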

First call to sess.run() at inference time is slow

Hi, have you encountered an issue where the first call to sess.run() in contact_grasp_estimator.py is slow? I am running the inference example in the readme, and when I time sess.run() the first call takes much longer than subsequent calls:

Run inference 1162.3998165130615
Preprocess pc for inference 0.0007269382476806641
Run inference 0.2754530906677246
Preprocess pc for inference 0.0006759166717529297

I found this thread on what seems to be a similar issue, but the simple resolutions have not worked, and I have not tried compiling TensorFlow from source yet. I am running on an RTX 3090 with CUDA 11.1, tensorflow-gpu==2.2. Have you encountered this issue before? Thanks for your help.
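
A small sketch of excluding the warm-up call when timing, since the first sess.run typically includes graph optimization and CUDA kernel loading; run_inference() below is a hypothetical stand-in for the actual sess.run(...) call:

import time

def run_inference():
    # Hypothetical placeholder for the actual sess.run(...) call being timed.
    pass

run_inference()                              # warm-up; graph building happens here and is not timed

timings = []
for _ in range(10):
    start = time.time()
    run_inference()
    timings.append(time.time() - start)

print('mean steady-state latency: %.3f s' % (sum(timings) / len(timings)))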

the result is strange

Hello! I used my own point cloud data and the result looks a little bit problematic. I hope you can help me.

Test data:
I changed load_available_input_data() so that it returns pc_full and the others are None. pc_full is read from a .pcd file.
segmap, rgb, depth, cam_K, pc_full, pc_colors = load_available_input_data(p, K=K)
The whole data looks like this:
[image]

Experiment 1:
I generate the point cloud from my camera with no segmentation information, and the grasp result looks good.
[image]

Experiment 2:
I segment the object first and then generate the point cloud; the grasp result looks bad and is very different from Experiment 1.
[image]

I want to segment the object first (just like Experiment 2) and grasp the box from its front. Can you tell me what I should do?

Training : Killed

Hey,

python contact_graspnet/train.py --ckpt_dir checkpoints/custom1 --data_path acronym/

I am trying to get the training pipeline running, but after showing "**** EPOCH 000 ****" my training is Killed. I could pin down the error to the line sess.run(ops['iterator'].initializer) in train.py, line 109. Do you have an idea what might be wrong? I am using a conda env; when I run sh compile_pointnet_tfops.sh I get:

[ RUN ] GroupPointTest.test
[ OK ] GroupPointTest.test
[ RUN ] GroupPointTest.test_grad

1.6927719116210938e-05
[ OK ] GroupPointTest.test_grad
[ RUN ] GroupPointTest.test_session
[ SKIPPED ] GroupPointTest.test_session

3.540515899658203e-05
[ OK ] GroupPointTest.test_grad
[ RUN ] GroupPointTest.test_session
[ SKIPPED ] GroupPointTest.test_session

Might this be the reason why it is not working or is this fine?

My training data folder looks like this (no meshes folder):

acronym
--grasps
--scene_contacts
--splits

Thank you very much!

Robot Origin

When I tested Contact-GraspNet, my Franka robot fell slightly short of the goal.
So, I'm wondering whether the target point is the midpoint of the end effector or some other origin point, like joint 7.

I have a question?

When I finished training, I used the generated weights for testing. After running 'python contact_graspnet/inference.py', there is no grasp pose. I use CUDA 11.1 and TensorFlow 2.4. I want to know what causes this and how to solve this problem.

Visualizing step - Stuck for hours

Hi,

While trying to generate grasps using the following command:

python contact_graspnet/inference.py --np_path=test_data/*.npy --local_regions --filter_grasps

the code does not proceed after the following print statement:

Generated 1 grasps for object 1.0
Generated 11 grasps for object 2.0
Generated 33 grasps for object 3.0
Generated 134 grasps for object 4.0
Generated 56 grasps for object 5.0
Generated 92 grasps for object 6.0
Generated 66 grasps for object 7.0
Generated 40 grasps for object 8.0
Generated 48 grasps for object 9.0
Generated 5 grasps for object 10.0
/home/kaykay/contact_graspnet/contact_graspnet/visualization_utils.py:63: MatplotlibDeprecationWarning: You are modifying the state of a globally registered colormap. In future versions, you will not be able to modify a registered colormap in-place. To remove this warning, you can make a copy of the colormap first.
  cmap = copy.copy(mpl.cm.get_cmap("rainbow"))
cmap.set_under(alpha=0.0)
Visualizing...takes time

It has been around 2 hours. What is the average expected time for the visualization step, and if there is a CUDA memory limit issue (which I suspect), do we get a warning or an error?

how to convert output grasp from inference.py to real-world coordinate system

Hi,@MartinSmeyer
I have a question: how do I convert the output grasp from inference.py to the real-world coordinate system? I use the following code to calculate the position from pred_grasps_cam, but the result seems incorrect. So I'm wondering whether the matrix from pred_grasps_cam contains position and rotation information relative to the object it wants to grasp, coordinates in the image, or coordinates in the real world.

import math

def rotationMatrixToEulerAngles(R):
    # Convert a 3x3 rotation matrix to Euler angles, handling the gimbal-lock cases.
    if R[1, 0] > 0.998:
        x = 0
        y = math.pi / 2
        z = math.atan2(R[0, 2], R[2, 2])
    elif R[1, 0] < -0.998:
        x = 0
        y = -math.pi / 2
        z = math.atan2(R[0, 2], R[2, 2])
    else:
        x = math.atan2(-R[1, 2], R[1, 1])
        y = math.asin(R[1, 0])
        z = math.atan2(-R[2, 0], R[0, 0])

    return x, y, z

def getRotationAndPosition(transformation):
    # Split a 4x4 homogeneous transform into translation and Euler angles.
    assert transformation.shape[0] == 4 and transformation.shape[1] == 4, "shape error"
    x = transformation[0, 3]
    y = transformation[1, 3]
    z = transformation[2, 3]
    x_r, y_r, z_r = rotationMatrixToEulerAngles(transformation[0:3, 0:3])
    return x, y, z, x_r, y_r, z_r

Inference time issue

When I run python3 contact_graspnet/inference_ros.py --local_regions --filter_grasps and use my own RGB and depth data from the RealSense D435 camera (resolution: 640*480) on my computer with an Nvidia GeForce RTX 3060 GPU (12GB), the output is satisfactory. However, I am experiencing an inference time of approximately 45 seconds, whereas I noticed in your paper that the reported inference time is 0.28 seconds. Could you please help me identify the issue I might be encountering?
Thank you!!!

Generate Contact Grasps and Scenes yourself

Hi,

I'm running into an issue while trying to generate contact grasps and scenes myself. I want to use my own gripper.

Steps I did:

  1. Download Acronym dataset
  2. run python tools/create_contact_infos.py /home/avena/Downloads/acronym
    and I'm getting the following error:
Root folder /home/avena/Downloads/acronym/
Failed to establish dbus connectionComputing grasp contacts...
Reading:  /home/avena/Downloads/acronym/grasps/Chair_47ac4f73d91f8ff0c862eec8232fff1e_0.006926674889814925.h5
positive grasps: (1837, 4, 4) negative grasps: (163, 4, 4)
Traceback (most recent call last):
  File "tools/create_contact_infos.py", line 124, in <module>
    save_contact_data(pcreader, grasp_path)
  File "tools/create_contact_infos.py", line 95, in save_contact_data
    pcreader.change_object(cad_path, cad_scale)
  File "/home/avena/software/contact_graspnet/contact_graspnet/data.py", line 679, in change_object
    self._renderer.change_object(cad_path, cad_scale)
AttributeError: 'SceneRenderer' object has no attribute 'change_object'
********** terminating renderer **************

Do I need to download ShapeNet meshes and make them watertight too?

conflict environment

Hi,
My GPU is an RTX 3080 Ti and I am trying to use contact_graspnet (inference.py); it's pretty slow. I have seen the other issues #16 and #9. I tried to use CUDA 11.2 and cuDNN 8.1 with tensorflow-gpu 2.5, but there are a lot of package conflicts and Python package bugs.
So, is it possible for you to provide a new version of the yml file for tensorflow-gpu 2.5?

Best regards,
xiaolin

Two gpu training

Can I use two 12GB GPUs to train this code? I have modified some parts, but it only uses one. What should I do?

train error: string is not a file

When I start training with a small dataset (10 contact scenes), everything goes well. But when loading the full dataset, the following error is always thrown by trimesh:

Traceback (most recent call last):
File "contact_graspnet/train.py", line 229, in
train(global_config, ckpt_dir)
File "contact_graspnet/train.py", line 115, in train
step = train_one_epoch(sess, ops, summary_ops, file_writers, pcreader)
File "contact_graspnet/train.py", line 137, in train_one_epoch
batch_data, cam_poses, scene_idx = pcreader.get_scene_batch(scene_idx=batch_idx)
File "code/contact_graspnet/contact_graspnet/data.py", line 616, in get_scene_batch
self.change_scene(obj_paths, mesh_scales, obj_trafos, visualize=False)
File "code/contact_graspnet/contact_graspnet/data.py", line 698, in change_scene
self._renderer.change_scene(obj_paths, obj_scales, obj_transforms)
File "code/contact_graspnet/contact_graspnet/scene_renderer.py", line 158, in change_scene
object_context = self._load_object(p, s)
File "code/contact_graspnet/contact_graspnet/scene_renderer.py", line 114, in _load_object
obj = Object(path)
File "code/contact_graspnet/contact_graspnet/mesh_utils.py", line 28, in init
self.mesh = trimesh.load(filename)
File "software/anaconda3/envs/contact_graspnet_env/lib/python3.7/site-packages/trimesh/exchange/load.py", line 110, in load
resolver=resolver)
File "software/anaconda3/envs/contact_graspnet_env/lib/python3.7/site-packages/trimesh/exchange/load.py", line 573, in parse_file_args
raise ValueError('string is not a file: {}'.format(file_obj))
ValueError: string is not a file: code/contact_graspnet/acronym/data/meshes/feb146982d0c64dfcbf4f3f04bbad8.obj

It seems to tell me it cannot find this file. I tried both relative and absolute paths and also checked that the file exists, but the error is still there.
[image]
This file was generated following the ACRONYM instructions and the simplify script.

Hi @MartinSmeyer, can you help me analyze the reason for this? Thanks in advance for any help!

Question: Effectiveness of Learnt Grasping Strategy

I am in the process of reimplementing contact_graspnet so I can use it in my own research. One item that crossed my mind is the fact that while a stable grasp may be achievable with the given method, the grasp may be suboptimal for subsequent task completion.

For instance, suppose I wish to place an object on a cluttered shelf; how I grasped the object becomes quite important for solving this task.

Do you have thoughts and/or opinions on approaches that tackle the above problem? Maybe there is a loss term that can be introduced to help ensure the chosen grasp provides sufficient manoeuvrability.

Or maybe how grasps are being evaluated could be altered?

Issue of using my own data

Hello, I am using my own data to train contact graspnet. However, I am unable to obtain thicker grasping poses in the output. Could this be due to a problem with my data or do I need to modify the test_data/*.npy files? I have attached my input and output data for reference. Thank you!

[screenshots of input and output data]

ROS 2 Wrapper

Are there any existing ROS 2 wrappers for contact_graspnet?
