
grasp_multiObject_multiGrasp's Issues

I have a problem during training

I have already generated the data. When I run ./experiments/scripts/train_faster_rcnn.sh 0 graspRGB res50, I get this error:
Traceback (most recent call last):
File "./tools/trainval_net.py", line 104, in
imdb, roidb = combined_roidb(args.imdb_name)
File "./tools/trainval_net.py", line 75, in combined_roidb
roidbs = [get_roidb(s) for s in imdb_names.split('+')]
File "./tools/trainval_net.py", line 75, in
roidbs = [get_roidb(s) for s in imdb_names.split('+')]
File "./tools/trainval_net.py", line 72, in get_roidb
roidb = get_training_roidb(imdb)
File "/home/minghao/PycharmProjects/grasp_multiObject_multiGrasp/tools/../lib/model/train_val.py", line 294, in get_training_roidb
rdl_roidb.prepare_roidb(imdb)
File "/home/minghao/PycharmProjects/grasp_multiObject_multiGrasp/tools/../lib/roi_data_layer/roidb.py", line 31, in prepare_roidb
roidb[i]['image'] = imdb.image_path_at(i)
IndexError: list index out of range

I tried cleaning the data/cache folder, but it didn't help.
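For what it's worth, this IndexError means imdb.image_path_at(i) was asked for an index past the end of the image list, which usually happens when the train list is empty or points at files that were never generated. A minimal sanity check, assuming a plain text list of image paths (the file name data/grasps/train.txt below is a placeholder; adjust it to your layout):

```python
import os

def check_image_list(list_file, expected_min=1):
    """Return entries of a train/test list file plus a list of problems."""
    if not os.path.exists(list_file):
        return [], ["list file itself is missing: %s" % list_file]
    with open(list_file) as f:
        entries = [line.strip() for line in f if line.strip()]
    # Flag every listed image that does not actually exist on disk.
    problems = [e for e in entries if not os.path.exists(e)]
    if len(entries) < expected_min:
        problems.append("list has only %d entries" % len(entries))
    return entries, problems

entries, problems = check_image_list("data/grasps/train.txt")
print("%d entries, %d problems" % (len(entries), len(problems)))
```

If the list is empty or full of missing files, regenerate the data before clearing data/cache again.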

About the simulation of the code

I cannot generate the data for this code.

mainfileID = fopen(['/home/fujenchu/projects/deepLearning/deepGraspExtensiveOffline/data/grasps/scripts/trainttt' sprintf('%02d',folder) '.txt'],'a');

I do not quite understand the path in this line. How should I modify it? I cannot generate valid data at the moment.

datasets

Hi @fujenchu, how do you split the Cornell dataset image-wise and object-wise? Thanks!
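For context, the two split conventions can be sketched as follows. This is not the authors' script, just an illustration on Cornell-style file names, and it assumes the image-to-object mapping is already known:

```python
import random

def image_wise_split(images, test_frac=0.2, seed=0):
    """Image-wise: shuffle images and cut once; the same object
    can appear in both train and test."""
    imgs = sorted(images)
    random.Random(seed).shuffle(imgs)
    n_test = int(len(imgs) * test_frac)
    return imgs[n_test:], imgs[:n_test]

def object_wise_split(image_to_object, test_frac=0.2, seed=0):
    """Object-wise: hold out whole objects, so no object instance
    appears in both sets."""
    objects = sorted(set(image_to_object.values()))
    random.Random(seed).shuffle(objects)
    n_test = max(1, int(len(objects) * test_frac))
    test_objs = set(objects[:n_test])
    train = sorted(im for im, o in image_to_object.items() if o not in test_objs)
    test = sorted(im for im, o in image_to_object.items() if o in test_objs)
    return train, test
```

Object-wise is the harder protocol, since the network never sees the test objects during training.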

Error using your dataset

Dear Fu, I'm using your resized/cropped dataset above and formatted my dataset following the VOC rules.
But there are still path errors coming from train.txt. I already changed the numbers to the name pcd0100cposCropped320.txt, but the error did not change. Hopefully you can help me out.

IOError: [Errno 2] No such file or directory: '/home/grasp_multiObject_multiGrasp/data/grasps/01_Cropped320/pcd0100rCropped320.png 84.415625 180.074844 39.841938 18.498237 -0.076635.txt'
Command exited with non-zero status 1
1.14user 0.47system 0:01.09elapsed 147%CPU (0avgtext+0avgdata 199864maxresident)k
0inputs+0outputs (0major+61638minor)pagefaults 0swaps

Marcel
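The IOError above shows an image name plus five numbers being used as a single filename, which suggests a whole annotation line ended up where a bare path was expected. A sketch of splitting such a line (the field order x y w h theta is an assumption here):

```python
def split_annotation_line(line):
    """Split '<image path> x y w h theta' into the path and five floats."""
    parts = line.strip().split()
    path = parts[0]
    x, y, w, h, theta = (float(p) for p in parts[1:6])
    return path, (x, y, w, h, theta)

# The string below is taken from the IOError message in the issue:
path, grasp = split_annotation_line(
    "pcd0100rCropped320.png 84.415625 180.074844 39.841938 18.498237 -0.076635")
print(path, grasp)
```

If your train.txt mixes annotation lines with image paths, the loader will try to open the full line as a file and fail exactly like this.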

How can I train with this code?

Hello,
when I train with ./experiments/scripts/train_faster_rcnn.sh 0 graspRGB res50, I find that there is no ./tools/test_net.py and I get an error because of it. How can I resolve this? Thank you for your reply.

ModuleNotFoundError: No module named 'utils.cython_nms'

Hello, this error occurred when I was running the program. How can I solve it? Thank you!
./tools/demo_graspRGD.py --net res50 --dataset grasp
Traceback (most recent call last):
File "./tools/demo_graspRGD.py", line 20, in
from model.test import im_detect
File "/home/chen/grasp_multiObject_multiGrasp/tools/../lib/model/test.py", line 20, in
from utils.cython_nms import nms, nms_new
ModuleNotFoundError: No module named 'utils.cython_nms'
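utils.cython_nms is a compiled Cython extension that only exists after building the repo's lib directory (typically cd lib && make, per the installation steps). A small diagnostic sketch to confirm which compiled modules are importable; the module names are taken from the traceback, and you would run this from the repo with lib/ on sys.path:

```python
import importlib.util

def missing_extensions(names):
    """Return the module names that cannot currently be imported."""
    missing = []
    for name in names:
        try:
            if importlib.util.find_spec(name) is None:
                missing.append(name)
        except ModuleNotFoundError:  # parent package itself is absent
            missing.append(name)
    return missing

print(missing_extensions(["utils.cython_nms", "utils.cython_bbox"]))
```

If both show up as missing, the Cython build step was skipped or failed.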

Error when training the model

Hi, I followed step 4 to download the pretrained models and put them under /output/res50/train/default. Then I split the Cornell dataset using dataPreprocessingTest_fasterrcnn_split.m. Before training, I changed ITERS to 250000 in train_faster_rcnn.sh at line 22. However, when I run ./experiments/scripts/train_faster_rcnn.sh 0 graspRGB res50, it runs fine at the beginning but fails at iter 241300, as below:

+ set -e
+ export PYTHONUNBUFFERED=True
+ PYTHONUNBUFFERED=True
+ GPU_ID=0
+ DATASET=graspRGB
+ NET=res50
+ array=($@)
+ len=3
+ EXTRA_ARGS=
+ EXTRA_ARGS_SLUG=
+ case ${DATASET} in
+ TRAIN_IMDB=graspRGB_train
+ TEST_IMDB=graspRGB_test
+ STEPSIZE=50000
+ ITERS=250000
+ ANCHORS='[8,16,32]'
+ RATIOS='[0.5,1,2]'
++ date +%Y-%m-%d_%H-%M-%S
+ LOG=experiments/logs/res50_graspRGB_train__res50.txt.2019-04-08_14-27-29
+ exec
++ tee -a experiments/logs/res50_graspRGB_train__res50.txt.2019-04-08_14-27-29
+ echo Logging output to experiments/logs/res50_graspRGB_train__res50.txt.2019-04-08_14-27-29
Logging output to experiments/logs/res50_graspRGB_train__res50.txt.2019-04-08_14-27-29
+ set +x
+ '[' '!' -f output/res50/graspRGB_train/default/res50_faster_rcnn_iter_250000.ckpt.index ']'
+ [[ ! -z '' ]]
+ CUDA_VISIBLE_DEVICES=0
+ time python ./tools/trainval_net.py --weight data/imagenet_weights/res50.ckpt --imdb graspRGB_train --imdbval graspRGB_test --iters 250000 --cfg experiments/cfgs/res50.yml --net res50 --set ANCHOR_SCALES '[8,16,32]' ANCHOR_RATIOS '[0.5,1,2]' TRAIN.STEPSIZE 50000
    Called with args:
    Namespace(cfg_file='experiments/cfgs/res50.yml', imdb_name='graspRGB_train', imdbval_name='graspRGB_test', max_iters=250000, net='res50', set_cfgs=['ANCHOR_SCALES', '[8,16,32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'TRAIN.STEPSIZE', '50000'], tag=None, weight='data/imagenet_weights/res50.ckpt')
    Using config:
    {'ANCHOR_RATIOS': [0.5, 1, 2],
    'ANCHOR_SCALES': [8, 16, 32],
    'DATA_DIR': '/home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/data',
    'DEDUP_BOXES': 0.0625,
    'EPS': 1e-14,
    'EXP_DIR': 'res50',
    'GPU_ID': 0,
    'MATLAB': 'matlab',
    'PIXEL_MEANS': array([[[102.9801, 115.9465, 122.7717]]]),
    'POOLING_MODE': 'crop',
    'POOLING_SIZE': 7,
    'RESNET': {'BN_TRAIN': False, 'FIXED_BLOCKS': 1, 'MAX_POOL': False},
    'RNG_SEED': 3,
    'ROOT_DIR': '/home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp',
    'TEST': {'BBOX_REG': True,
    'HAS_RPN': True,
    'MAX_SIZE': 1000,
    'MODE': 'nms',
    'NMS': 0.3,
    'PROPOSAL_METHOD': 'gt',
    'RPN_NMS_THRESH': 0.7,
    'RPN_POST_NMS_TOP_N': 300,
    'RPN_PRE_NMS_TOP_N': 6000,
    'RPN_TOP_N': 5000,
    'SCALES': [600],
    'SVM': False},
    'TRAIN': {'ASPECT_GROUPING': False,
    'BATCH_SIZE': 256,
    'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
    'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
    'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
    'BBOX_NORMALIZE_TARGETS': True,
    'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,
    'BBOX_REG': True,
    'BBOX_THRESH': 0.5,
    'BG_THRESH_HI': 0.5,
    'BG_THRESH_LO': 0.0,
    'BIAS_DECAY': False,
    'DISPLAY': 20,
    'DOUBLE_BIAS': False,
    'FG_FRACTION': 0.25,
    'FG_THRESH': 0.5,
    'GAMMA': 0.1,
    'HAS_RPN': True,
    'IMS_PER_BATCH': 1,
    'LEARNING_RATE': 0.0001,
    'MAX_SIZE': 1000,
    'MOMENTUM': 0.9,
    'PROPOSAL_METHOD': 'gt',
    'RPN_BATCHSIZE': 256,
    'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
    'RPN_CLOBBER_POSITIVES': False,
    'RPN_FG_FRACTION': 0.5,
    'RPN_NEGATIVE_OVERLAP': 0.3,
    'RPN_NMS_THRESH': 0.7,
    'RPN_POSITIVE_OVERLAP': 0.7,
    'RPN_POSITIVE_WEIGHT': -1.0,
    'RPN_POST_NMS_TOP_N': 2000,
    'RPN_PRE_NMS_TOP_N': 12000,
    'SCALES': [600],
    'SNAPSHOT_ITERS': 3000,
    'SNAPSHOT_KEPT': 3,
    'SNAPSHOT_PREFIX': 'res50_faster_rcnn',
    'STEPSIZE': 50000,
    'SUMMARY_INTERVAL': 180,
    'TRUNCATED': False,
    'USE_ALL_GT': True,
    'USE_FLIPPED': False,
    'USE_GT': False,
    'WEIGHT_DECAY': 0.0001},
    'USE_GPU_NMS': True}
    Loaded dataset train for training
    Set proposal method: gt
    Preparing training data...
    train gt roidb loaded from /home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/data/cache/train_gt_roidb.pkl
    done
    88500 roidb entries
    Output will be saved to /home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/output/res50/train/default
    TensorFlow summaries will be saved to /home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/tensorboard/res50/train/default
    Loaded dataset test for training
    Set proposal method: gt
    Preparing training data...
    test gt roidb loaded from /home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/data/cache/test_gt_roidb.pkl
    done
    22125 validation roidb entries
    Filtered 155 roidb entries: 88500 -> 88345
    Filtered 1 roidb entries: 22125 -> 22124
    2019-04-08 14:27:39.699505: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2019-04-08 14:27:39.791009: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-04-08 14:27:39.791344: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
    name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
    pciBusID: 0000:01:00.0
    totalMemory: 10.92GiB freeMemory: 10.22GiB
    2019-04-08 14:27:39.791378: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
    2019-04-08 14:27:39.959998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
    2019-04-08 14:27:39.960041: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0
    2019-04-08 14:27:39.960046: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N
    2019-04-08 14:27:39.960679: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9882 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
    Solving...
    WARNING:tensorflow:From /home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/tools/../lib/nets/network.py:58: calling expand_dims (from tensorflow.python.ops.array_ops) with dim is deprecated and will be removed in a future version.
    Instructions for updating:
    Use the axis argument instead
    /home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py:108: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
    "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
    Restorining model snapshots from /home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/output/res50/train/default/res50_faster_rcnn_iter_240000.ckpt
    Restored.
    28184218
    iter: 240020 / 250000, total loss: 0.394197

rpn_loss_cls: 0.026991
rpn_loss_box: 0.022555
loss_cls: 0.130532
loss_box: 0.214119
lr: 0.000010
speed: 0.377s / iter
[... similar iter / loss / lr / speed blocks for iters 240040 through 241280 omitted; total loss fluctuates between roughly 0.2 and 1.5, lr stays at 0.000010, speed settles around 0.25 s/iter ...]
iter: 241300 / 250000, total loss: 0.780659
rpn_loss_cls: 0.035025
rpn_loss_box: 0.085696
loss_cls: 0.362855
loss_box: 0.297083
lr: 0.000010
speed: 0.246s / iter
Traceback (most recent call last):
File "./tools/trainval_net.py", line 136, in
max_iters=args.max_iters)
File "/home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/tools/../lib/model/train_val.py", line 339, in train_net
sw.train_model(sess, max_iters)
File "/home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/tools/../lib/model/train_val.py", line 222, in train_model
blobs = self.data_layer.forward()
File "/home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/tools/../lib/roi_data_layer/layer.py", line 89, in forward
blobs = self._get_next_minibatch()
File "/home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/tools/../lib/roi_data_layer/layer.py", line 84, in _get_next_minibatch
minibatch_db = [self._roidb[i] for i in db_inds]
IndexError: list index out of range
Command exited with non-zero status 1
156.56user 30.26system 5:38.28elapsed 55%CPU (0avgtext+0avgdata 2887836maxresident)k
105064inputs+6992outputs (3major+2178752minor)pagefaults 0swaps

I do not know why. When I searched on Google, it pointed me at rbgirshick/fast-rcnn#79. I have deleted the cache file, but it doesn't work. Have you ever met this issue?
By the way, would you mind telling me how to get the pretrained model? Should I train it on the Cornell dataset, or download it from github, e.g. https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models?
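On the IndexError itself: the crash in _get_next_minibatch happens when the shuffled index permutation refers to more entries than the (filtered) roidb actually contains, for example when a stale data/cache pickle was built from a different split. A sketch of the sampling logic with an explicit bounds check (MinibatchSampler is a simplified stand-in, not the repo's class):

```python
import numpy as np

class MinibatchSampler(object):
    """Sketch of the roidb sampling loop with an explicit bounds check."""
    def __init__(self, roidb, ims_per_batch=1, seed=0):
        self._roidb = roidb
        self._ims_per_batch = ims_per_batch
        self._rng = np.random.RandomState(seed)
        self._shuffle()

    def _shuffle(self):
        self._perm = self._rng.permutation(len(self._roidb))
        self._cur = 0

    def next_minibatch(self):
        if self._cur + self._ims_per_batch > len(self._perm):
            self._shuffle()  # reshuffle instead of running off the end
        inds = self._perm[self._cur:self._cur + self._ims_per_batch]
        self._cur += self._ims_per_batch
        # If this fires, the permutation was built for a different roidb
        # size (e.g. a stale data/cache pickle): rebuild the cache.
        assert all(i < len(self._roidb) for i in inds)
        return [self._roidb[i] for i in inds]
```

The usual remedy is making sure the cached roidb pickle and the current split agree in size, i.e. deleting data/cache and letting it regenerate before resuming from a snapshot.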

Import error

Traceback (most recent call last):
File "demo_graspRGD.py", line 20, in
from model.test import im_detect
File "/content/gdrive/My Drive/grasp_multiObject_multiGrasp/tools/../lib/model/test.py", line 20, in
from utils.cython_nms import nms, nms_new
ImportError: /content/gdrive/My Drive/grasp_multiObject_multiGrasp/tools/../lib/utils/cython_nms.so: undefined symbol: _Py_ZeroStruct
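_Py_ZeroStruct is a CPython 2 symbol, so this cython_nms.so was almost certainly built with Python 2 and then imported under Python 3 (Colab here). Rebuilding the extensions with the same interpreter that runs the demo should fix it. A quick check of what the current interpreter expects (output varies by platform and version):

```python
import sys
import sysconfig

def built_for_this_interpreter(so_name):
    """Rough check: does a compiled extension's filename end with the
    suffix this interpreter expects? CPython 3 builds carry a version tag
    (e.g. '.cpython-36m-x86_64-linux-gnu.so'); a bare '.so' that exports
    _Py_ZeroStruct is a Python 2 build."""
    suffix = sysconfig.get_config_var("EXT_SUFFIX") or ".so"
    return so_name.endswith(suffix)

print("interpreter:", sys.version_info[:2])
print("expected extension suffix:", sysconfig.get_config_var("EXT_SUFFIX"))
print(built_for_this_interpreter("cython_nms.so"))
```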

Missing tools/test_net.py

Dear Fu,
I forked this project and tried to train it following the steps shown in readme.md.
But there is an error when I execute
./experiments/scripts/train_faster_rcnn.sh 0 graspRGB res50

I found the reason: there is no test_net.py in ./tools.
Could you please share test_net.py with us?
Thank you very much for your help.
Best regards

Error when running the demo_graspRGD.py

When I run demo_graspRGD.py, it reports the problems below. Could you please tell me how to solve them?
my environment:
ubuntu16.04
tensorflow 1.10.0
python 2.7
cuda 9.0

2019-03-28 20:28:34.307571: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-28 20:28:34.392082: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-03-28 20:28:34.392497: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 10.92GiB freeMemory: 10.04GiB
2019-03-28 20:28:34.392510: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2019-03-28 20:28:34.555931: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-28 20:28:34.555960: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0
2019-03-28 20:28:34.555965: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N
2019-03-28 20:28:34.556152: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9705 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From /home/jinhuan/robotic_grasp/Georgia_Tech/grasp_multiObject_multiGrasp/tools/../lib/nets/network.py:58: calling expand_dims (from tensorflow.python.ops.array_ops) with dim is deprecated and will be removed in a future version.
Instructions for updating:
Use the axis argument instead
2019-03-28 20:28:36.722838: W tensorflow/core/framework/op_kernel.cc:1275] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key resnet_v1_101/bbox_pred/biases not found in checkpoint
Traceback (most recent call last):
File "./tools/demo_graspRGD.py", line 200, in
saver.restore(sess, tfmodel)
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1743, in restore
err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key resnet_v1_101/bbox_pred/biases not found in checkpoint
[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Caused by op u'save/RestoreV2', defined at:
File "./tools/demo_graspRGD.py", line 199, in
saver = tf.train.Saver()
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1281, in init
self.build()
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1293, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1330, in _build
build_save=build_save, build_restore=build_restore)
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 778, in _build_internal
restore_sequentially, reshape)
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 397, in _AddRestoreOps
restore_sequentially)
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 829, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1463, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 454, in new_func
return func(*args, **kwargs)
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3155, in create_op
op_def=op_def)
File "/home/jinhuan/ENV/Georgia_Tech/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1717, in init
self._traceback = tf_stack.extract_stack()

NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key resnet_v1_101/bbox_pred/biases not found in checkpoint
[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Something wrong when splitting the Cornell dataset

When I run dataPreprocessingTest_fasterrcnn_split.m, I find some strange things, as below. First,

mainfileID = fopen(['/home/fujenchu/projects/deepLearning/deepGraspExtensiveOffline/data/grasps/scripts/trainttt' sprintf('%02d',folder) '.txt'],'a');
...........
fclose(mainfileID);

However, there is no fprintf call to actually write anything into trainttt.txt. I think you wanted to record something here, but the code is missing. Second,

filenum = str2num(imgname(4:7));
if(any(test_list == filenum))
    file_writeID = fopen(imgSetTest,'a');
    fprintf(file_writeID, '%s\n', [imgDataDir(1:end-3) 'Cropped320_rgd/' imgname '_preprocessed_1.png' ] );
    fclose(file_writeID);
    continue;
end

I cannot understand the path for test.txt. Why is it not the same as in train.txt, differing only in the image index? The last problem is that the image indices in test.txt do not have corresponding pcdXXXXr_preprocessed_XX.txt files in Annotations or pcdXXXXr_preprocessed_XX.png files in ImageSets.

physical grasp

The 5-dimensional grasp rectangle is defined in 2D image space, so I was wondering how the end-effector knows the 'z' value of a grasp in physical grasping experiments. I am very new to grasp planning. Thank you so much!
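One common answer, not specific to this repo: the rectangle gives (x, y, w, h, theta) in the image, and z is read from the aligned depth image at the grasp centre, then back-projected into camera coordinates with the camera intrinsics via the pinhole model. A sketch (the intrinsics values below are made up):

```python
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel with known depth into camera coordinates
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# e.g. a grasp centre at pixel (320, 240) with 0.5 m of depth:
print(pixel_to_camera_xyz(320, 240, 0.5, fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```

The resulting 3D point is then transformed from the camera frame into the robot base frame with the hand-eye calibration before planning the approach.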

Error when running the demo

My env:

  • ubuntu 14.04
  • python 2.7
  • tensorflow 1.4.0
  • cuda 8.0
  • cudnn 6.0

I followed the README to run the demo step by step:

  1. clone the repository
  2. build cython modules
  3. install coco
  4. download pretrain models

Everything was fine until I ran the demo: ./tools/demo_graspRGD.py --net res50 --dataset grasp,
then I got an error:

2018-12-28 20:55:48.853153: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-12-28 20:55:48.930862: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: Quadro K2200 major: 5 minor: 0 memoryClockRate(GHz): 1.124
pciBusID: 0000:03:00.0
totalMemory: 3.94GiB freeMemory: 5.62MiB
2018-12-28 20:55:48.930895: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Quadro K2200, pci bus id: 0000:03:00.0, compute capability: 5.0)
Traceback (most recent call last):
  File "./tools/demo_graspRGD.py", line 196, in <module>
    tag='default', anchor_scales=[8, 16, 32])
  File "/home/zhan/ws/grasp_multiObject_multiGrasp/tools/../lib/nets/network.py", line 306, in create_architecture
    rois, cls_prob, bbox_pred = self.build_network(sess, training)
  File "/home/zhan/ws/grasp_multiObject_multiGrasp/tools/../lib/nets/resnet_v1.py", line 155, in build_network
    scope=self._resnet_scope)
  File "/home/zhan/.local/lib/python2.7/site-packages/tensorflow/contrib/slim/python/slim/nets/resnet_v1.py", line 207, in resnet_v1
    net = resnet_utils.stack_blocks_dense(net, blocks, output_stride)
  File "/home/zhan/.local/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
    return func(*args, **current_args)
  File "/home/zhan/.local/lib/python2.7/site-packages/tensorflow/contrib/slim/python/slim/nets/resnet_utils.py", line 215, in stack_blocks_dense
    net = block.unit_fn(net, rate=1, **unit)
TypeError: bottleneck() argument after ** must be a mapping, not tuple

Has anyone met this error before?
I also tried on ubuntu 18.04 with python 3.6 and tensorflow 1.9.0, but still saw the same error.

TypeError: bottleneck() argument after ** must be a mapping, not tuple

I run python demo_graspRGD.py --net res50 --dataset grasp and get an error like:

Traceback (most recent call last):
File "demo_graspRGD.py", line 197, in
tag='default', anchor_scales=[8, 16, 32])
File "/content/grasp_multiObject_multiGrasp/tools/../lib/nets/network.py", line 306, in create_architecture
rois, cls_prob, bbox_pred = self.build_network(sess, training)
File "/content/grasp_multiObject_multiGrasp/tools/../lib/nets/resnet_v1.py", line 155, in build_network
scope=self._resnet_scope)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/resnet_v1.py", line 207, in resnet_v1
net = resnet_utils.stack_blocks_dense(net, blocks, output_stride)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 183, in func_with_args
return func(*args, **current_args)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/resnet_utils.py", line 215, in stack_blocks_dense
net = block.unit_fn(net, rate=1, **unit)
TypeError: bottleneck() argument after ** must be a mapping, not tuple
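A commonly reported cause in tf-faster-rcnn forks is a version mismatch between the repo's bundled resnet_v1.py and the installed tf.contrib.slim: one side describes each resnet block unit as a dict, the other as a plain tuple, and ** unpacking requires a mapping. Pinning TensorFlow to the older 1.x release the repo was written against usually resolves it. Independent of TensorFlow, the TypeError itself reduces to this (bottleneck here is a stand-in for slim's function, not the real one):

```python
def bottleneck(depth=None, depth_bottleneck=None, stride=None):
    """Stand-in for slim's bottleneck; only the keyword signature matters."""
    return (depth, depth_bottleneck, stride)

unit_as_dict = {"depth": 256, "depth_bottleneck": 64, "stride": 1}  # one slim version
unit_as_tuple = (256, 64, 1)                                        # the other

print(bottleneck(**unit_as_dict))   # fine: ** accepts a mapping
try:
    bottleneck(**unit_as_tuple)     # reproduces the reported TypeError
except TypeError as exc:
    print("TypeError:", exc)
```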
