MetaGraspNet: a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis - 100,000 images across 5 difficulty levels
I have one question:
I read that your dataset provides amodal segmentation masks (occlusion masks). However, to my understanding, NVIDIA Isaac Sim Replicator cannot generate an amodal segmentation mask for each occluded object.
May I ask how your team generated these amodal segmentation masks?
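For context on what such a pipeline might look like: this is not necessarily the authors' method, but a common workaround when a renderer only outputs visible-instance masks is to render each object in isolation to obtain its full (amodal) mask, then composite the per-object depth buffers to recover visible masks and occlusion rates. A minimal NumPy sketch of the compositing step (function name and array layout are my own assumptions):

```python
import numpy as np

def visible_masks(amodal_masks, depths):
    """Composite per-object amodal masks by depth to get visible masks.

    amodal_masks: (N, H, W) bool array, one full-extent mask per object,
                  e.g. obtained by rendering each object alone.
    depths:       (N, H, W) float array of per-object rendered depth,
                  with np.inf where the object is absent.
    Returns (N, H, W) bool visible masks and per-object occlusion rates.
    """
    masked_depth = np.where(amodal_masks, depths, np.inf)
    # Winner-takes-all: each pixel is visible only for the closest object.
    nearest = np.argmin(masked_depth, axis=0)
    any_obj = amodal_masks.any(axis=0)
    n = amodal_masks.shape[0]
    visible = (nearest[None] == np.arange(n)[:, None, None]) & any_obj
    amodal_area = amodal_masks.sum(axis=(1, 2))
    vis_area = visible.sum(axis=(1, 2))
    occlusion = 1.0 - vis_area / np.maximum(amodal_area, 1)
    return visible, occlusion
```

With two overlapping objects, the nearer one keeps its full mask while the farther one's visible mask shrinks to the unoccluded region, which directly gives the per-object occlusion rate.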
Hello
I read your article and was very interested in your dataset generation process.
Will you release the code used to generate the dataset in the future?
I would really like to study how the dataset is generated, and I hope to hear back from you.
Thank you.