Comments (3)
I trained the network with depth input only for 30 epochs and the best IoU was 74%. With RGB added, the best IoU reached 78% (although IoU does not correspond exactly to grasping performance in the real world). In my opinion, some objects are so small that their depth maps contain little information; RGB can give the network extra information in those situations.
from ggcnn.
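For context on the IoU numbers above: on the Cornell dataset a predicted grasp is usually counted as correct if its rectangle overlap (Jaccard index) with a ground-truth rectangle exceeds 25% and the angle difference is under 30°. Below is a minimal numpy sketch of the rectangle-IoU part, rasterising two rotated grasp rectangles onto a pixel grid; the function names and the grid-based evaluation are my own illustration, not the repository's implementation:

```python
import numpy as np

def rect_mask(center, angle, length, width, shape=(300, 300)):
    """Boolean mask of a rotated grasp rectangle on a pixel grid.
    center=(x, y) in pixels, angle in radians, length along the grasp axis."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dx, dy = xs - center[0], ys - center[1]
    c, s = np.cos(angle), np.sin(angle)
    u = c * dx + s * dy    # coordinate along the grasp axis
    v = -s * dx + c * dy   # coordinate across the grasp axis
    return (np.abs(u) <= length / 2) & (np.abs(v) <= width / 2)

def grasp_iou(g1, g2, shape=(300, 300)):
    """IoU of two grasps, each given as (center, angle, length, width)."""
    m1 = rect_mask(*g1, shape=shape)
    m2 = rect_mask(*g2, shape=shape)
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return inter / union if union else 0.0
```

A predicted grasp would then pass if `grasp_iou(pred, gt) > 0.25` and the angle difference (mod π) is below 30°.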
Overall I found it didn't help very much, and in some cases it was a net negative. I agree with @youkenhou that it helps where there is minimal depth data, and it improves performance within the dataset. However, there were two main issues with using RGB that I found in practice:
- The network can get confused by objects with strong colour gradients, like logos etc., which almost outweighs the small benefit in the cases of poor depth information.
- With RGB, transfer to a robot with a different camera in a different setting (e.g. with a different or cluttered/textured background) doesn't work well (for example, picking out of a red Amazon tote). There are probably things you could do to avoid this, but since depth doesn't have this problem I didn't look into it further.
Thanks @dougsm and @youkenhou for your input. We found that our method depends heavily on the quality of the depth camera. Currently we are using the D435 and D415, which restricts our grasping method to large objects in cluttered scenes. If we try to grasp an object about 2 cm wide, or an object with a black surface (for example a cell phone), the depth camera can't provide much depth information. I'm looking for a solution to this, and I was thinking that RGB might help.
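One common way to let RGB compensate for depth holes (thin or dark objects the D435/D415 can't see) is to feed the network a 4-channel RGB-D input, filling invalid depth pixels before normalisation. A minimal numpy sketch; the fill strategy and the normalisation constants here are illustrative assumptions, not what ggcnn itself does:

```python
import numpy as np

def make_rgbd_input(rgb, depth):
    """Stack an aligned RGB image (H, W, 3, uint8) and depth map (H, W)
    into a 4-channel float input. Invalid depth pixels (zeros) are filled
    with the mean of the valid depth so holes don't dominate the input."""
    rgb = rgb.astype(np.float32) / 255.0 - 0.5   # rough zero-centring
    depth = depth.astype(np.float32)
    valid = depth > 0
    if valid.any():
        depth = np.where(valid, depth, depth[valid].mean())
    depth = depth - depth.mean()                 # zero-centre the depth channel
    return np.concatenate([rgb, depth[..., None]], axis=-1)
```

Whether this helps in practice will still depend on the transfer issues @dougsm describes above, since the RGB channels carry the background and texture biases along with the extra object detail.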
Related Issues
- Processing of dataset labels
- Cannot download Cornell Grasping Dataset
- What IoU % metric is used in the paper?
- Is there a bug in models/ggcnn.py?
- How to use the image-wise (IW) and object-wise (OW) Cornell data splits mentioned in the paper? Did you use data augmentation by default when training on Cornell?
- Problem with training on macOS
- Problem with Cornell dataset
- Labeling the dataset
- Error while converting pcd to depth image
- About "train_ggcnn.py"
- Can ggcnn deal with images of different sizes?
- How to set parameters when training on the Jacquard dataset
- Error while evaluating
- How to visualise the 3D grasping pose?
- models.common.py
- UnicodeEncodeError
- AttributeError
- Something wrong in eval_ggcnn.py