
keras-frcnn's Introduction

keras-frcnn

Keras implementation of Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, cloned from https://github.com/yhenon/keras-frcnn/.

USAGE:

  • Both theano and tensorflow backends are supported. However, compile times are very high in theano, so tensorflow is highly recommended.

  • train_frcnn.py can be used to train a model. To train on Pascal VOC data, simply do: python train_frcnn.py -p /path/to/pascalvoc/.

  • The Pascal VOC data set (images and bounding-box annotations for the classified objects) can be obtained from: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar

  • simple_parser.py provides an alternative way to input data, using a text file in which each line contains:

    filepath,x1,y1,x2,y2,class_name

    For example:

    /data/imgs/img_001.jpg,837,346,981,456,cow

    /data/imgs/img_002.jpg,215,312,279,391,cat

    The classes will be inferred from the file. To use the simple parser instead of the default Pascal VOC-style parser, use the command line option -o simple, for example: python train_frcnn.py -o simple -p my_data.txt. A short sketch of writing such a file is shown after this list.

  • Running train_frcnn.py will write the weights to an hdf5 file, as well as all the settings of the training run to a pickle file. These settings are then loaded by test_frcnn.py for testing.

  • test_frcnn.py can be used to perform inference, given pretrained weights and a config file. Specify a path to the folder containing images: python test_frcnn.py -p /path/to/test_data/

  • Data augmentation can be applied by specifying --hf for horizontal flips, --vf for vertical flips, and --rot for 90-degree rotations.
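
A minimal sketch of writing an annotation file in the simple-parser format described above (the image paths, boxes, and output file name below are just examples):

    # Each line describes one bounding box: filepath,x1,y1,x2,y2,class_name
    # (the same image may appear on several lines, one per object).
    annotations = [
        ("/data/imgs/img_001.jpg", 837, 346, 981, 456, "cow"),
        ("/data/imgs/img_002.jpg", 215, 312, 279, 391, "cat"),
    ]

    with open("my_data.txt", "w") as f:
        for row in annotations:
            f.write(",".join(str(v) for v in row) + "\n")

Training on such a file then follows the command shown above: python train_frcnn.py -o simple -p my_data.txt.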

NOTES:

  • config.py contains all settings for the train or test run. The default settings match those in the original Faster R-CNN paper. The anchor box sizes are [128, 256, 512] and the ratios are [1:1, 1:2, 2:1]; a sketch of how these combine is shown after this list.
  • The theano backend by default uses a 7x7 pooling region, instead of 14x14 as in the Faster R-CNN paper. This cuts down compile time slightly.
  • The tensorflow backend performs a resize on the pooling region, instead of max pooling. This is much more efficient and has little impact on results.
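
As a rough illustration of the anchor settings above, the 3 sizes and 3 ratios combine into 9 anchor shapes per feature-map position. The sketch below uses plain multiplication of size by ratio; the original paper instead keeps the anchor area constant per size, so treat this only as an approximation of the shapes involved:

    # 3 anchor sizes x 3 aspect ratios -> 9 (width, height) anchor shapes.
    # Simplified illustration only; see config.py for the values actually used.
    anchor_sizes = [128, 256, 512]
    anchor_ratios = [(1, 1), (1, 2), (2, 1)]

    anchors = [(size * rw, size * rh)
               for size in anchor_sizes
               for (rw, rh) in anchor_ratios]

    print(anchors)
    # [(128, 128), (128, 256), (256, 128), (256, 256), (256, 512), (512, 256), ...]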

Example output:

(four example detection images: ex1, ex2, ex3, ex4)

ISSUES:

  • If you get the error ValueError: There is a negative shape in the graph!, then update keras to the newest version.

  • This repo was developed using python2; python3 should work thanks to the contributions of a number of users.

  • If you run out of memory, try reducing the number of ROIs that are processed simultaneously by passing a lower -n to train_frcnn.py. Alternatively, try reducing the image size from the default value of 600 (this setting is found in config.py); a sketch of why this helps is shown below.
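
In the original Faster R-CNN setup, each image is rescaled so that its shorter side is 600 pixels, and activation memory grows roughly with the resized area. A small sketch using a hypothetical helper that mirrors that resize rule (not this repo's code):

    def resized_shape(width, height, shorter_side=600):
        # Hypothetical helper: rescale so the shorter side equals `shorter_side`.
        scale = float(shorter_side) / min(width, height)
        return int(round(width * scale)), int(round(height * scale))

    print(resized_shape(1024, 768))                    # (800, 600)
    print(resized_shape(1024, 768, shorter_side=300))  # (400, 300) -> ~1/4 the pixels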

keras-frcnn's People

Contributors

ahleroy, ajk4, antobi, butcher211, jonasharnau, kbardool, masoudkaviani, sam575, small-yellow-duck, thobaro, thomasjanssens, yhenon


keras-frcnn's Issues

Gray Image Training

Hi,

I want to train with gray images. Which lines should I change?
Also, what can I do to decrease the test time, other than changing the image size?
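
One common way to handle the first question, independent of this repo's code, is to keep the 3-channel pipeline and stack the gray image into three identical channels before it is fed to the network; a minimal NumPy sketch:

    import numpy as np

    def gray_to_3ch(gray):
        # Replicate a single-channel (H, W) image into three identical channels.
        return np.stack([gray, gray, gray], axis=-1)

    img = (np.random.rand(480, 640) * 255).astype("uint8")
    print(gray_to_3ch(img).shape)  # (480, 640, 3)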

Meaning of bbox_threshold and overlap_thresh - test_frcnn.py

Hi all,

I would like to know the difference between the variable "bbox_threshold" and the parameter "overlap_thresh" that is passed to the roi_helpers.rpn_to_roi(..., overlap_thresh=0.7) function, in the file "test_frcnn.py".

How do these values influence the detection results?

Thanks.
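
Broadly, overlap_thresh is an IoU (intersection-over-union) cutoff used during non-max suppression when RPN proposals are converted into ROIs, while bbox_threshold in test_frcnn.py acts, in effect, as a minimum class probability a detection must reach before it is kept. The generic sketch below (illustrative values and helper names, not this repo's code) shows how the two kinds of thresholds play different roles:

    import numpy as np

    def iou(a, b):
        # Intersection-over-union of two (x1, y1, x2, y2) boxes.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1]) +
                 (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / float(union) if union > 0 else 0.0

    def filter_and_nms(boxes, scores, score_thresh=0.8, overlap_thresh=0.7):
        # score_thresh drops low-confidence boxes; overlap_thresh decides which
        # of the surviving boxes are suppressed as duplicates during NMS.
        order = [i for i in np.argsort(scores)[::-1] if scores[i] >= score_thresh]
        keep = []
        while order:
            i = order.pop(0)
            keep.append(int(i))
            order = [j for j in order if iou(boxes[i], boxes[j]) < overlap_thresh]
        return keep

    boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
    scores = [0.95, 0.90, 0.60]
    print(filter_and_nms(boxes, scores))  # [0]: box 1 suppressed by IoU, box 2 dropped by score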

ran out of memory

Hey, I get an out-of-memory error when I execute the training file.

GeForce GTX 1060 6GB
tensorflow 1.14.0
keras 2.2.3
I want to mention that I set num_ROI = 8 and the image size to 300.

WARNING: Logging before flag parsing goes to stderr.
W0818 03:54:37.337376 140073701984064 deprecation_wrapper.py:119] From /media/romeo/Volume/Projekt/activ_lerning-_object_dection/train_frcnn.py:14: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

W0818 03:54:37.337819 140073701984064 deprecation_wrapper.py:119] From /media/romeo/Volume/Projekt/activ_lerning-_object_dection/train_frcnn.py:17: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2019-08-18 03:54:37.352707: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX FMA
2019-08-18 03:54:37.379296: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3515690000 Hz
2019-08-18 03:54:37.380498: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561330485ae0 executing computations on platform Host. Devices:
2019-08-18 03:54:37.380669: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): ,
2019-08-18 03:54:37.384859: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-08-18 03:54:37.497965: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5613315baad0 executing computations on platform CUDA. Devices:
2019-08-18 03:54:37.498098: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce GTX 1060 6GB, Compute Capability 6.1
2019-08-18 03:54:37.499131: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7085
pciBusID: 0000:01:00.0
2019-08-18 03:54:37.499561: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2019-08-18 03:54:37.502870: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2019-08-18 03:54:37.505865: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
2019-08-18 03:54:37.506353: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
2019-08-18 03:54:37.509578: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
2019-08-18 03:54:37.511485: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
2019-08-18 03:54:37.516464: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-08-18 03:54:37.517654: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-08-18 03:54:37.517826: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2019-08-18 03:54:37.519293: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-18 03:54:37.519370: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-08-18 03:54:37.519436: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-08-18 03:54:37.520565: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5075 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-08-18 03:54:37.523002: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7085
pciBusID: 0000:01:00.0
2019-08-18 03:54:37.523175: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2019-08-18 03:54:37.523307: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2019-08-18 03:54:37.523458: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
2019-08-18 03:54:37.523546: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
2019-08-18 03:54:37.523633: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
2019-08-18 03:54:37.523718: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
2019-08-18 03:54:37.523805: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-08-18 03:54:37.525239: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-08-18 03:54:37.525377: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-18 03:54:37.525497: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-08-18 03:54:37.525609: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-08-18 03:54:37.527235: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 5075 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
W0818 03:54:51.347155 140073701984064 deprecation_wrapper.py:119] From /home/romeo/anaconda3/envs/midrasAl/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

W0818 03:54:51.347605 140073701984064 deprecation_wrapper.py:119] From /home/romeo/anaconda3/envs/midrasAl/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

W0818 03:54:51.353192 140073701984064 deprecation_wrapper.py:119] From /home/romeo/anaconda3/envs/midrasAl/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

W0818 03:54:51.385004 140073701984064 deprecation_wrapper.py:119] From /home/romeo/anaconda3/envs/midrasAl/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:1919: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.

W0818 03:54:51.387523 140073701984064 deprecation_wrapper.py:119] From /home/romeo/anaconda3/envs/midrasAl/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

W0818 03:54:52.891735 140073701984064 deprecation_wrapper.py:119] From /media/romeo/Volume/Projekt/activ_lerning-_object_dection/keras_frcnn/RoiPoolingConv.py:105: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead.

W0818 03:54:54.030263 140073701984064 deprecation_wrapper.py:119] From /home/romeo/anaconda3/envs/midrasAl/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3980: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.

2019-08-18 03:54:54.596591: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7085
pciBusID: 0000:01:00.0
2019-08-18 03:54:54.596845: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2019-08-18 03:54:54.597062: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2019-08-18 03:54:54.597159: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
2019-08-18 03:54:54.597261: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
2019-08-18 03:54:54.597423: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
2019-08-18 03:54:54.597516: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
2019-08-18 03:54:54.597599: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-08-18 03:54:54.598577: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-08-18 03:54:54.598682: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-18 03:54:54.598752: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-08-18 03:54:54.598818: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-08-18 03:54:54.599826: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5075 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
W0818 03:54:58.334833 140073701984064 deprecation_wrapper.py:119] From /home/romeo/anaconda3/envs/midrasAl/lib/python3.6/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.

W0818 03:54:58.351473 140073701984064 deprecation.py:323] From /home/romeo/anaconda3/envs/midrasAl/lib/python3.6/site-packages/tensorflow/python/ops/nn_impl.py:180: add_dispatch_support..wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0818 03:54:59.863604 140073701984064 deprecation_wrapper.py:119] From /home/romeo/anaconda3/envs/midrasAl/lib/python3.6/site-packages/keras/callbacks.py:850: The name tf.summary.merge_all is deprecated. Please use tf.compat.v1.summary.merge_all instead.

W0818 03:54:59.864075 140073701984064 deprecation_wrapper.py:119] From /home/romeo/anaconda3/envs/midrasAl/lib/python3.6/site-packages/keras/callbacks.py:853: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

2019-08-18 03:55:08.801365: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-08-18 03:55:10.593121: W tensorflow/core/common_runtime/bfc_allocator.cc:237] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.14GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-08-18 03:55:10.703856: W tensorflow/core/common_runtime/bfc_allocator.cc:237] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.14GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-08-18 03:55:10.768077: W tensorflow/core/common_runtime/bfc_allocator.cc:237] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.35GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-08-18 03:55:10.771052: W tensorflow/core/common_runtime/bfc_allocator.cc:237] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.29GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-08-18 03:55:10.796909: W tensorflow/core/common_runtime/bfc_allocator.cc:237] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.29GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-08-18 03:55:10.850633: W tensorflow/core/common_runtime/bfc_allocator.cc:237] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.14GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-08-18 03:55:11.112124: W tensorflow/core/common_runtime/bfc_allocator.cc:237] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.14GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-08-18 03:55:13.396548: W tensorflow/core/common_runtime/bfc_allocator.cc:237] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-08-18 03:55:13.411168: W tensorflow/core/common_runtime/bfc_allocator.cc:237] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-08-18 03:55:13.663084: W tensorflow/core/common_runtime/bfc_allocator.cc:237] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-08-18 03:55:28.746892: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
available gpu device: /device:GPU:0
Config has been written to /media/romeo/Volume/Projekt/activ_lerning-_object_dection/models/model_frcnn.pickle, and can be loaded when testing to ensure correct results

Creating seed and unlabelled data sets

Parsing annotation files
Creating annotation, class_mapping and class_count from the seed data
Creating annotation, class_mapping and class_count from the unlabelled data
size of train data: 13700
size of remaining data: 3425
size of train data: 13700
size of remaining data: 3425
Num train samples 13700
Num val samples 0
Num test samples 0
loading weights from /media/romeo/Volume/Projekt/activ_lerning-_object_dection/models/model_frcnn.hdf5
Starting training
Epoch 1/500

1/500 [..............................] - ETA: 4:01:22 - rpn_cls: 9.6333 - rpn_regr: 0.2463 - detector_cls: 3.0445 - detector_regr: 0.0000e+00
2/500 [..............................] - ETA: 2:08:49 - rpn_cls: 9.3267 - rpn_regr: 0.3321 - detector_cls: 2.9546 - detector_regr: 0.0000e+00
3/500 [..............................] - ETA: 1:31:15 - rpn_cls: 9.1111 - rpn_regr: 0.3446 - detector_cls: 2.8315 - detector_regr: 0.0000e+00
4/500 [..............................] - ETA: 1:11:19 - rpn_cls: 9.0066 - rpn_regr: 0.3356 - detector_cls: 2.7265 - detector_regr: 0.0000e+00
5/500 [..............................] - ETA: 59:55 - rpn_cls: 8.9325 - rpn_regr: 0.3277 - detector_cls: 2.6177 - detector_regr: 0.0132
6/500 [..............................] - ETA: 57:16 - rpn_cls: 8.7480 - rpn_regr: 0.3211 - detector_cls: 2.4911 - detector_regr: 0.0202
7/500 [..............................] - ETA: 54:14 - rpn_cls: 8.6329 - rpn_regr: 0.3207 - detector_cls: 2.3776 - detector_regr: 0.0241
8/500 [..............................] - ETA: 48:25 - rpn_cls: 8.4861 - rpn_regr: 0.3197 - detector_cls: 2.2704 - detector_regr: 0.0262
9/500 [..............................] - ETA: 44:07 - rpn_cls: 8.3802 - rpn_regr: 0.3187 - detector_cls: 2.1689 - detector_regr: 0.0274
10/500 [..............................] - ETA: 41:07 - rpn_cls: 8.2609 - rpn_regr: 0.3163 - detector_cls: 2.0743 - detector_regr: 0.0279
...
301/500 [=================>............] - ETA: 4:42 - rpn_cls: 5.7984 - rpn_regr: 0.2289 - detector_cls: 0.8571 - detector_regr: 0.2693
302/500 [=================>............] - ETA: 4:41 - rpn_cls: 5.7933 - rpn_regr: 0.2288 - detector_cls: 0.8581 - detector_regr: 0.2697
303/500 [=================>............] - ETA: 4:39 - rpn_cls: 5.7882 - rpn_regr: 0.2287 - detector_cls: 0.8591 - detector_regr: 0.2701
304/500 [=================>............] - ETA: 4:38 - rpn_cls: 5.7831 - rpn_regr: 0.2287 - detector_cls: 0.8601 - detector_regr: 0.2705
305/500 [=================>............] - ETA: 4:36 - rpn_cls: 5.7779 - rpn_regr: 0.2286 - detector_cls: 0.8611 - detector_regr: 0.2709
306/500 [=================>............] - ETA: 4:34 - rpn_cls: 5.7728 - rpn_regr: 0.2285 - detector_cls: 0.8621 - detector_regr: 0.2713
307/500 [=================>............] - ETA: 4:33 - rpn_cls: 5.7676 - rpn_regr: 0.2285 - detector_cls: 0.8630 - detector_regr: 0.2717
308/500 [=================>............] - ETA: 4:31 - rpn_cls: 5.7625 - rpn_regr: 0.2284 - detector_cls: 0.8640 - detector_regr: 0.2721
309/500 [=================>............] - ETA: 4:29 - rpn_cls: 5.7574 - rpn_regr: 0.2283 - detector_cls: 0.8650 - detector_regr: 0.2725
310/500 [=================>............] - ETA: 4:28 - rpn_cls: 5.7522 - rpn_regr: 0.2283 - detector_cls: 0.8660 - detector_regr: 0.2729
311/500 [=================>............] - ETA: 4:26 - rpn_cls: 5.7471 - rpn_regr: 0.2282 - detector_cls: 0.8670 - detector_regr: 0.2733
312/500 [=================>............] - ETA: 4:24 - rpn_cls: 5.7420 - rpn_regr: 0.2281 - detector_cls: 0.8679 - detector_regr: 0.2737
313/500 [=================>............] - ETA: 4:23 - rpn_cls: 5.7369 - rpn_regr: 0.2280 - detector_cls: 0.8689 - detector_regr: 0.2741
314/500 [=================>............] - ETA: 4:21 - rpn_cls: 5.7319 - rpn_regr: 0.2280 - detector_cls: 0.8698 - detector_regr: 0.2745
315/500 [=================>............] - ETA: 4:19 - rpn_cls: 5.7268 - rpn_regr: 0.2279 - detector_cls: 0.8707 - detector_regr: 0.2749
316/500 [=================>............] - ETA: 4:18 - rpn_cls: 5.7218 - rpn_regr: 0.2278 - detector_cls: 0.8716 - detector_regr: 0.2753
317/500 [==================>...........] - ETA: 4:16 - rpn_cls: 5.7168 - rpn_regr: 0.2278 - detector_cls: 0.8726 - detector_regr: 0.2757
318/500 [==================>...........] - ETA: 4:15 - rpn_cls: 5.7118 - rpn_regr: 0.2277 - detector_cls: 0.8735 - detector_regr: 0.2760
319/500 [==================>...........] - ETA: 4:13 - rpn_cls: 5.7067 - rpn_regr: 0.2276 - detector_cls: 0.8744 - detector_regr: 0.2764
320/500 [==================>...........] - ETA: 4:12 - rpn_cls: 5.7018 - rpn_regr: 0.2276 - detector_cls: 0.8753 - detector_regr: 0.2768
321/500 [==================>...........] - ETA: 4:11 - rpn_cls: 5.6968 - rpn_regr: 0.2275 - detector_cls: 0.8762 - detector_regr: 0.2772
322/500 [==================>...........] - ETA: 4:10 - rpn_cls: 5.6919 - rpn_regr: 0.2274 - detector_cls: 0.8771 - detector_regr: 0.2775
323/500 [==================>...........] - ETA: 4:08 - rpn_cls: 5.6870 - rpn_regr: 0.2274 - detector_cls: 0.8780 - detector_regr: 0.2779
324/500 [==================>...........] - ETA: 4:07 - rpn_cls: 5.6821 - rpn_regr: 0.2273 - detector_cls: 0.8789 - detector_regr: 0.2783
325/500 [==================>...........] - ETA: 4:05 - rpn_cls: 5.6773 - rpn_regr: 0.2272 - detector_cls: 0.8798 - detector_regr: 0.2786
326/500 [==================>...........] - ETA: 4:03 - rpn_cls: 5.6725 - rpn_regr: 0.2272 - detector_cls: 0.8806 - detector_regr: 0.2790
327/500 [==================>...........] - ETA: 4:02 - rpn_cls: 5.6677 - rpn_regr: 0.2271 - detector_cls: 0.8815 - detector_regr: 0.2793
328/500 [==================>...........] - ETA: 4:00 - rpn_cls: 5.6630 - rpn_regr: 0.2270 - detector_cls: 0.8823 - detector_regr: 0.2797
329/500 [==================>...........] - ETA: 3:59 - rpn_cls: 5.6582 - rpn_regr: 0.2270 - detector_cls: 0.8832 - detector_regr: 0.2800
330/500 [==================>...........] - ETA: 3:57 - rpn_cls: 5.6535 - rpn_regr: 0.2269 - detector_cls: 0.8840 - detector_regr: 0.2804
331/500 [==================>...........] - ETA: 3:56 - rpn_cls: 5.6488 - rpn_regr: 0.2269 - detector_cls: 0.8848 - detector_regr: 0.2807
332/500 [==================>...........] - ETA: 3:54 - rpn_cls: 5.6441 - rpn_regr: 0.2268 - detector_cls: 0.8857 - detector_regr: 0.2811
333/500 [==================>...........] - ETA: 3:53 - rpn_cls: 5.6394 - rpn_regr: 0.2267 - detector_cls: 0.8865 - detector_regr: 0.2814
334/500 [===================>..........] - ETA: 3:51 - rpn_cls: 5.6347 - rpn_regr: 0.2267 - detector_cls: 0.8873 - detector_regr: 0.2817
335/500 [===================>..........] - ETA: 3:50 - rpn_cls: 5.6301 - rpn_regr: 0.2266 - detector_cls: 0.8881 - detector_regr: 0.2821
336/500 [===================>..........] - ETA: 3:48 - rpn_cls: 5.6255 - rpn_regr: 0.2266 - detector_cls: 0.8890 - detector_regr: 0.2824
337/500 [===================>..........] - ETA: 3:47 - rpn_cls: 5.6210 - rpn_regr: 0.2265 - detector_cls: 0.8898 - detector_regr: 0.2828
338/500 [===================>..........] - ETA: 3:45 - rpn_cls: 5.6164 - rpn_regr: 0.2265 - detector_cls: 0.8906 - detector_regr: 0.2831
339/500 [===================>..........] - ETA: 3:44 - rpn_cls: 5.6118 - rpn_regr: 0.2264 - detector_cls: 0.8914 - detector_regr: 0.2834
340/500 [===================>..........] - ETA: 3:42 - rpn_cls: 5.6074 - rpn_regr: 0.2263 - detector_cls: 0.8922 - detector_regr: 0.2838
341/500 [===================>..........] - ETA: 3:40 - rpn_cls: 5.6029 - rpn_regr: 0.2263 - detector_cls: 0.8930 - detector_regr: 0.2841
342/500 [===================>..........] - ETA: 3:39 - rpn_cls: 5.5985 - rpn_regr: 0.2262 - detector_cls: 0.8938 - detector_regr: 0.2844
343/500 [===================>..........] - ETA: 3:37 - rpn_cls: 5.5940 - rpn_regr: 0.2262 - detector_cls: 0.8945 - detector_regr: 0.2847
344/500 [===================>..........] - ETA: 3:36 - rpn_cls: 5.5896 - rpn_regr: 0.2261 - detector_cls: 0.8953 - detector_regr: 0.2850
345/500 [===================>..........] - ETA: 3:34 - rpn_cls: 5.5852 - rpn_regr: 0.2260 - detector_cls: 0.8961 - detector_regr: 0.2854
346/500 [===================>..........] - ETA: 3:33 - rpn_cls: 5.5808 - rpn_regr: 0.2260 - detector_cls: 0.8968 - detector_regr: 0.2857
347/500 [===================>..........] - ETA: 3:31 - rpn_cls: 5.5764 - rpn_regr: 0.2259 - detector_cls: 0.8976 - detector_regr: 0.2860
348/500 [===================>..........] - ETA: 3:30 - rpn_cls: 5.5720 - rpn_regr: 0.2259 - detector_cls: 0.8983 - detector_regr: 0.2863
349/500 [===================>..........] - ETA: 3:28 - rpn_cls: 5.5676 - rpn_regr: 0.2258 - detector_cls: 0.8991 - detector_regr: 0.2867
350/500 [====================>.........] - ETA: 3:27 - rpn_cls: 5.5633 - rpn_regr: 0.2258 - detector_cls: 0.8998 - detector_regr: 0.2870
351/500 [====================>.........] - ETA: 3:25 - rpn_cls: 5.5589 - rpn_regr: 0.2257 - detector_cls: 0.9005 - detector_regr: 0.2873
352/500 [====================>.........] - ETA: 3:24 - rpn_cls: 5.5547 - rpn_regr: 0.2257 - detector_cls: 0.9013 - detector_regr: 0.2876
353/500 [====================>.........] - ETA: 3:22 - rpn_cls: 5.5504 - rpn_regr: 0.2256 - detector_cls: 0.9020 - detector_regr: 0.2879
354/500 [====================>.........] - ETA: 3:21 - rpn_cls: 5.5461 - rpn_regr: 0.2256 - detector_cls: 0.9027 - detector_regr: 0.2882
355/500 [====================>.........] - ETA: 3:19 - rpn_cls: 5.5419 - rpn_regr: 0.2255 - detector_cls: 0.9034 - detector_regr: 0.2886
356/500 [====================>.........] - ETA: 3:18 - rpn_cls: 5.5377 - rpn_regr: 0.2255 - detector_cls: 0.9041 - detector_regr: 0.2889
357/500 [====================>.........] - ETA: 3:16 - rpn_cls: 5.5335 - rpn_regr: 0.2254 - detector_cls: 0.9049 - detector_regr: 0.2892
358/500 [====================>.........] - ETA: 3:15 - rpn_cls: 5.5294 - rpn_regr: 0.2254 - detector_cls: 0.9056 - detector_regr: 0.2895
359/500 [====================>.........] - ETA: 3:13 - rpn_cls: 5.5253 - rpn_regr: 0.2253 - detector_cls: 0.9063 - detector_regr: 0.2898
360/500 [====================>.........] - ETA: 3:12 - rpn_cls: 5.5212 - rpn_regr: 0.2253 - detector_cls: 0.9070 - detector_regr: 0.2901
361/500 [====================>.........] - ETA: 3:10 - rpn_cls: 5.5171 - rpn_regr: 0.2252 - detector_cls: 0.9077 - detector_regr: 0.2904
362/500 [====================>.........] - ETA: 3:09 - rpn_cls: 5.5130 - rpn_regr: 0.2252 - detector_cls: 0.9084 - detector_regr: 0.2907
363/500 [====================>.........] - ETA: 3:07 - rpn_cls: 5.5089 - rpn_regr: 0.2251 - detector_cls: 0.9091 - detector_regr: 0.2909
364/500 [====================>.........] - ETA: 3:06 - rpn_cls: 5.5048 - rpn_regr: 0.2250 - detector_cls: 0.9098 - detector_regr: 0.2912
365/500 [====================>.........] - ETA: 3:04 - rpn_cls: 5.5008 - rpn_regr: 0.2250 - detector_cls: 0.9106 - detector_regr: 0.2915
366/500 [====================>.........] - ETA: 3:03 - rpn_cls: 5.4967 - rpn_regr: 0.2249 - detector_cls: 0.9113 - detector_regr: 0.2918
367/500 [=====================>........] - ETA: 3:01 - rpn_cls: 5.4926 - rpn_regr: 0.2249 - detector_cls: 0.9120 - detector_regr: 0.2921
368/500 [=====================>........] - ETA: 3:00 - rpn_cls: 5.4886 - rpn_regr: 0.2248 - detector_cls: 0.9127 - detector_regr: 0.2924
369/500 [=====================>........] - ETA: 2:58 - rpn_cls: 5.4845 - rpn_regr: 0.2248 - detector_cls: 0.9133 - detector_regr: 0.2927
370/500 [=====================>........] - ETA: 2:57 - rpn_cls: 5.4805 - rpn_regr: 0.2247 - detector_cls: 0.9140 - detector_regr: 0.2930
371/500 [=====================>........] - ETA: 2:56 - rpn_cls: 5.4765 - rpn_regr: 0.2247 - detector_cls: 0.9147 - detector_regr: 0.2932
372/500 [=====================>........] - ETA: 2:54 - rpn_cls: 5.4726 - rpn_regr: 0.2246 - detector_cls: 0.9154 - detector_regr: 0.2935
373/500 [=====================>........] - ETA: 2:53 - rpn_cls: 5.4686 - rpn_regr: 0.2246 - detector_cls: 0.9161 - detector_regr: 0.2938
374/500 [=====================>........] - ETA: 2:51 - rpn_cls: 5.4647 - rpn_regr: 0.2245 - detector_cls: 0.9168 - detector_regr: 0.2941
375/500 [=====================>........] - ETA: 2:50 - rpn_cls: 5.4609 - rpn_regr: 0.2245 - detector_cls: 0.9175 - detector_regr: 0.2944
376/500 [=====================>........] - ETA: 2:48 - rpn_cls: 5.4570 - rpn_regr: 0.2244 - detector_cls: 0.9181 - detector_regr: 0.2946
377/500 [=====================>........] - ETA: 2:47 - rpn_cls: 5.4532 - rpn_regr: 0.2244 - detector_cls: 0.9188 - detector_regr: 0.2949
378/500 [=====================>........] - ETA: 2:46 - rpn_cls: 5.4494 - rpn_regr: 0.2243 - detector_cls: 0.9195 - detector_regr: 0.2952
379/500 [=====================>........] - ETA: 2:44 - rpn_cls: 5.4456 - rpn_regr: 0.2243 - detector_cls: 0.9201 - detector_regr: 0.2955
380/500 [=====================>........] - ETA: 2:43 - rpn_cls: 5.4418 - rpn_regr: 0.2242 - detector_cls: 0.9208 - detector_regr: 0.2957
381/500 [=====================>........] - ETA: 2:41 - rpn_cls: 5.4380 - rpn_regr: 0.2242 - detector_cls: 0.9215 - detector_regr: 0.2960
382/500 [=====================>........] - ETA: 2:40 - rpn_cls: 5.4342 - rpn_regr: 0.2241 - detector_cls: 0.9222 - detector_regr: 0.2963
383/500 [=====================>........] - ETA: 2:39 - rpn_cls: 5.4303 - rpn_regr: 0.2240 - detector_cls: 0.9228 - detector_regr: 0.2965
384/500 [======================>.......] - ETA: 2:37 - rpn_cls: 5.4265 - rpn_regr: 0.2240 - detector_cls: 0.9235 - detector_regr: 0.2968
385/500 [======================>.......] - ETA: 2:36 - rpn_cls: 5.4228 - rpn_regr: 0.2239 - detector_cls: 0.9242 - detector_regr: 0.2971
386/500 [======================>.......] - ETA: 2:34 - rpn_cls: 5.4190 - rpn_regr: 0.2239 - detector_cls: 0.9248 - detector_regr: 0.2973
387/500 [======================>.......] - ETA: 2:33 - rpn_cls: 5.4152 - rpn_regr: 0.2238 - detector_cls: 0.9255 - detector_regr: 0.2976
388/500 [======================>.......] - ETA: 2:32 - rpn_cls: 5.4115 - rpn_regr: 0.2238 - detector_cls: 0.9262 - detector_regr: 0.2979
389/500 [======================>.......] - ETA: 2:30 - rpn_cls: 5.4077 - rpn_regr: 0.2237 - detector_cls: 0.9268 - detector_regr: 0.2981
390/500 [======================>.......] - ETA: 2:29 - rpn_cls: 5.4041 - rpn_regr: 0.2237 - detector_cls: 0.9275 - detector_regr: 0.2984
391/500 [======================>.......] - ETA: 2:27 - rpn_cls: 5.4004 - rpn_regr: 0.2236 - detector_cls: 0.9281 - detector_regr: 0.2987
392/500 [======================>.......] - ETA: 2:26 - rpn_cls: 5.3966 - rpn_regr: 0.2236 - detector_cls: 0.9288 - detector_regr: 0.2989
393/500 [======================>.......] - ETA: 2:24 - rpn_cls: 5.3929 - rpn_regr: 0.2235 - detector_cls: 0.9294 - detector_regr: 0.2992
394/500 [======================>.......] - ETA: 2:23 - rpn_cls: 5.3893 - rpn_regr: 0.2234 - detector_cls: 0.9301 - detector_regr: 0.2994
395/500 [======================>.......] - ETA: 2:22 - rpn_cls: 5.3856 - rpn_regr: 0.2234 - detector_cls: 0.9307 - detector_regr: 0.2997
396/500 [======================>.......] - ETA: 2:20 - rpn_cls: 5.3819 - rpn_regr: 0.2233 - detector_cls: 0.9313 - detector_regr: 0.3000
397/500 [======================>.......] - ETA: 2:19 - rpn_cls: 5.3783 - rpn_regr: 0.2233 - detector_cls: 0.9320 - detector_regr: 0.3002
398/500 [======================>.......] - ETA: 2:18 - rpn_cls: 5.3746 - rpn_regr: 0.2232 - detector_cls: 0.9326 - detector_regr: 0.3005
399/500 [======================>.......] - ETA: 2:16 - rpn_cls: 5.3710 - rpn_regr: 0.2232 - detector_cls: 0.9333 - detector_regr: 0.3007
400/500 [=======================>......] - ETA: 2:15 - rpn_cls: 5.3673 - rpn_regr: 0.2231 - detector_cls: 0.9339 - detector_regr: 0.3010
401/500 [=======================>......] - ETA: 2:13 - rpn_cls: 5.3637 - rpn_regr: 0.2231 - detector_cls: 0.9345 - detector_regr: 0.3012
402/500 [=======================>......] - ETA: 2:12 - rpn_cls: 5.3602 - rpn_regr: 0.2230 - detector_cls: 0.9351 - detector_regr: 0.3015
403/500 [=======================>......] - ETA: 2:11 - rpn_cls: 5.3566 - rpn_regr: 0.2230 - detector_cls: 0.9357 - detector_regr: 0.3017
404/500 [=======================>......] - ETA: 2:09 - rpn_cls: 5.3530 - rpn_regr: 0.2229 - detector_cls: 0.9364 - detector_regr: 0.3020
405/500 [=======================>......] - ETA: 2:08 - rpn_cls: 5.3495 - rpn_regr: 0.2229 - detector_cls: 0.9370 - detector_regr: 0.3022
406/500 [=======================>......] - ETA: 2:07 - rpn_cls: 5.3460 - rpn_regr: 0.2228 - detector_cls: 0.9376 - detector_regr: 0.3025
407/500 [=======================>......] - ETA: 2:05 - rpn_cls: 5.3425 - rpn_regr: 0.2228 - detector_cls: 0.9382 - detector_regr: 0.3027
408/500 [=======================>......] - ETA: 2:04 - rpn_cls: 5.3390 - rpn_regr: 0.2227 - detector_cls: 0.9388 - detector_regr: 0.3029
409/500 [=======================>......] - ETA: 2:03 - rpn_cls: 5.3355 - rpn_regr: 0.2227 - detector_cls: 0.9394 - detector_regr: 0.3032
410/500 [=======================>......] - ETA: 2:01 - rpn_cls: 5.3320 - rpn_regr: 0.2226 - detector_cls: 0.9400 - detector_regr: 0.3034
411/500 [=======================>......] - ETA: 2:00 - rpn_cls: 5.3285 - rpn_regr: 0.2226 - detector_cls: 0.9406 - detector_regr: 0.3037
412/500 [=======================>......] - ETA: 1:58 - rpn_cls: 5.3250 - rpn_regr: 0.2225 - detector_cls: 0.9411 - detector_regr: 0.3039
413/500 [=======================>......] - ETA: 1:57 - rpn_cls: 5.3216 - rpn_regr: 0.2225 - detector_cls: 0.9417 - detector_regr: 0.3041
414/500 [=======================>......] - ETA: 1:55 - rpn_cls: 5.3182 - rpn_regr: 0.2224 - detector_cls: 0.9423 - detector_regr: 0.3044
415/500 [=======================>......] - ETA: 1:54 - rpn_cls: 5.3147 - rpn_regr: 0.2224 - detector_cls: 0.9429 - detector_regr: 0.3046
416/500 [=======================>......] - ETA: 1:53 - rpn_cls: 5.3113 - rpn_regr: 0.2223 - detector_cls: 0.9434 - detector_regr: 0.3048
417/500 [========================>.....] - ETA: 1:51 - rpn_cls: 5.3079 - rpn_regr: 0.2223 - detector_cls: 0.9440 - detector_regr: 0.3051
418/500 [========================>.....] - ETA: 1:50 - rpn_cls: 5.3045 - rpn_regr: 0.2222 - detector_cls: 0.9446 - detector_regr: 0.3053
419/500 [========================>.....] - ETA: 1:49 - rpn_cls: 5.3012 - rpn_regr: 0.2222 - detector_cls: 0.9452 - detector_regr: 0.3055
420/500 [========================>.....] - ETA: 1:47 - rpn_cls: 5.2979 - rpn_regr: 0.2221 - detector_cls: 0.9458 - detector_regr: 0.3058
421/500 [========================>.....] - ETA: 1:46 - rpn_cls: 5.2945 - rpn_regr: 0.2221 - detector_cls: 0.9464 - detector_regr: 0.3060
422/500 [========================>.....] - ETA: 1:45 - rpn_cls: 5.2912 - rpn_regr: 0.2220 - detector_cls: 0.9470 - detector_regr: 0.3062
423/500 [========================>.....] - ETA: 1:43 - rpn_cls: 5.2879 - rpn_regr: 0.2220 - detector_cls: 0.9476 - detector_regr: 0.3065
424/500 [========================>.....] - ETA: 1:42 - rpn_cls: 5.2846 - rpn_regr: 0.2219 - detector_cls: 0.9481 - detector_regr: 0.3067
425/500 [========================>.....] - ETA: 1:41 - rpn_cls: 5.2813 - rpn_regr: 0.2219 - detector_cls: 0.9487 - detector_regr: 0.3069
426/500 [========================>.....] - ETA: 1:39 - rpn_cls: 5.2781 - rpn_regr: 0.2218 - detector_cls: 0.9493 - detector_regr: 0.3071
427/500 [========================>.....] - ETA: 1:38 - rpn_cls: 5.2748 - rpn_regr: 0.2218 - detector_cls: 0.9499 - detector_regr: 0.3074
428/500 [========================>.....] - ETA: 1:36 - rpn_cls: 5.2716 - rpn_regr: 0.2217 - detector_cls: 0.9505 - detector_regr: 0.3076
429/500 [========================>.....] - ETA: 1:35 - rpn_cls: 5.2684 - rpn_regr: 0.2217 - detector_cls: 0.9511 - detector_regr: 0.3078
430/500 [========================>.....] - ETA: 1:34 - rpn_cls: 5.2652 - rpn_regr: 0.2216 - detector_cls: 0.9517 - detector_regr: 0.3080
431/500 [========================>.....] - ETA: 1:32 - rpn_cls: 5.2621 - rpn_regr: 0.2216 - detector_cls: 0.9522 - detector_regr: 0.3083
432/500 [========================>.....] - ETA: 1:31 - rpn_cls: 5.2589 - rpn_regr: 0.2215 - detector_cls: 0.9528 - detector_regr: 0.3085
433/500 [========================>.....] - ETA: 1:29 - rpn_cls: 5.2557 - rpn_regr: 0.2215 - detector_cls: 0.9534 - detector_regr: 0.3087
434/500 [=========================>....] - ETA: 1:28 - rpn_cls: 5.2526 - rpn_regr: 0.2214 - detector_cls: 0.9540 - detector_regr: 0.3089
435/500 [=========================>....] - ETA: 1:27 - rpn_cls: 5.2495 - rpn_regr: 0.2214 - detector_cls: 0.9545 - detector_regr: 0.3091
436/500 [=========================>....] - ETA: 1:25 - rpn_cls: 5.2464 - rpn_regr: 0.2213 - detector_cls: 0.9551 - detector_regr: 0.3093
437/500 [=========================>....] - ETA: 1:24 - rpn_cls: 5.2433 - rpn_regr: 0.2213 - detector_cls: 0.9557 - detector_regr: 0.3096
438/500 [=========================>....] - ETA: 1:23 - rpn_cls: 5.2403 - rpn_regr: 0.2212 - detector_cls: 0.9562 - detector_regr: 0.3098
439/500 [=========================>....] - ETA: 1:21 - rpn_cls: 5.2372 - rpn_regr: 0.2212 - detector_cls: 0.9568 - detector_regr: 0.3100
440/500 [=========================>....] - ETA: 1:20 - rpn_cls: 5.2342 - rpn_regr: 0.2211 - detector_cls: 0.9573 - detector_regr: 0.3102
441/500 [=========================>....] - ETA: 1:18 - rpn_cls: 5.2312 - rpn_regr: 0.2211 - detector_cls: 0.9579 - detector_regr: 0.3104
442/500 [=========================>....] - ETA: 1:17 - rpn_cls: 5.2282 - rpn_regr: 0.2210 - detector_cls: 0.9584 - detector_regr: 0.3106
443/500 [=========================>....] - ETA: 1:16 - rpn_cls: 5.2253 - rpn_regr: 0.2210 - detector_cls: 0.9590 - detector_regr: 0.3108
444/500 [=========================>....] - ETA: 1:14 - rpn_cls: 5.2223 - rpn_regr: 0.2209 - detector_cls: 0.9595 - detector_regr: 0.3110
445/500 [=========================>....] - ETA: 1:13 - rpn_cls: 5.2194 - rpn_regr: 0.2209 - detector_cls: 0.9600 - detector_regr: 0.3113
446/500 [=========================>....] - ETA: 1:12 - rpn_cls: 5.2165 - rpn_regr: 0.2208 - detector_cls: 0.9606 - detector_regr: 0.3115
447/500 [=========================>....] - ETA: 1:10 - rpn_cls: 5.2135 - rpn_regr: 0.2208 - detector_cls: 0.9611 - detector_regr: 0.3117
448/500 [=========================>....] - ETA: 1:09 - rpn_cls: 5.2106 - rpn_regr: 0.2207 - detector_cls: 0.9616 - detector_regr: 0.3119
449/500 [=========================>....] - ETA: 1:08 - rpn_cls: 5.2077 - rpn_regr: 0.2207 - detector_cls: 0.9622 - detector_regr: 0.3121
450/500 [==========================>...] - ETA: 1:06 - rpn_cls: 5.2048 - rpn_regr: 0.2206 - detector_cls: 0.9627 - detector_regr: 0.3123
451/500 [==========================>...] - ETA: 1:05 - rpn_cls: 5.2019 - rpn_regr: 0.2206 - detector_cls: 0.9632 - detector_regr: 0.3125
452/500 [==========================>...] - ETA: 1:03 - rpn_cls: 5.1990 - rpn_regr: 0.2206 - detector_cls: 0.9637 - detector_regr: 0.3127
453/500 [==========================>...] - ETA: 1:02 - rpn_cls: 5.1961 - rpn_regr: 0.2205 - detector_cls: 0.9642 - detector_regr: 0.3129
454/500 [==========================>...] - ETA: 1:01 - rpn_cls: 5.1933 - rpn_regr: 0.2205 - detector_cls: 0.9647 - detector_regr: 0.3131
455/500 [==========================>...] - ETA: 59s - rpn_cls: 5.1904 - rpn_regr: 0.2204 - detector_cls: 0.9652 - detector_regr: 0.3133
456/500 [==========================>...] - ETA: 58s - rpn_cls: 5.1875 - rpn_regr: 0.2204 - detector_cls: 0.9658 - detector_regr: 0.3135
457/500 [==========================>...] - ETA: 57s - rpn_cls: 5.1846 - rpn_regr: 0.2203 - detector_cls: 0.9663 - detector_regr: 0.3137
458/500 [==========================>...] - ETA: 55s - rpn_cls: 5.1818 - rpn_regr: 0.2203 - detector_cls: 0.9668 - detector_regr: 0.3139
459/500 [==========================>...] - ETA: 54s - rpn_cls: 5.1789 - rpn_regr: 0.2202 - detector_cls: 0.9673 - detector_regr: 0.3141
460/500 [==========================>...] - ETA: 53s - rpn_cls: 5.1761 - rpn_regr: 0.2202 - detector_cls: 0.9678 - detector_regr: 0.3143
461/500 [==========================>...] - ETA: 51s - rpn_cls: 5.1732 - rpn_regr: 0.2201 - detector_cls: 0.9683 - detector_regr: 0.3145
462/500 [==========================>...] - ETA: 50s - rpn_cls: 5.1704 - rpn_regr: 0.2201 - detector_cls: 0.9688 - detector_regr: 0.3147
463/500 [==========================>...] - ETA: 49s - rpn_cls: 5.1675 - rpn_regr: 0.2201 - detector_cls: 0.9693 - detector_regr: 0.3149
464/500 [==========================>...] - ETA: 47s - rpn_cls: 5.1648 - rpn_regr: 0.2200 - detector_cls: 0.9698 - detector_regr: 0.3151
465/500 [==========================>...] - ETA: 46s - rpn_cls: 5.1620 - rpn_regr: 0.2200 - detector_cls: 0.9702 - detector_regr: 0.3153
466/500 [==========================>...] - ETA: 45s - rpn_cls: 5.1592 - rpn_regr: 0.2199 - detector_cls: 0.9707 - detector_regr: 0.3155
467/500 [===========================>..] - ETA: 43s - rpn_cls: 5.1565 - rpn_regr: 0.2199 - detector_cls: 0.9712 - detector_regr: 0.3157
468/500 [===========================>..] - ETA: 42s - rpn_cls: 5.1538 - rpn_regr: 0.2198 - detector_cls: 0.9717 - detector_regr: 0.3159
469/500 [===========================>..] - ETA: 41s - rpn_cls: 5.1511 - rpn_regr: 0.2198 - detector_cls: 0.9722 - detector_regr: 0.3161
470/500 [===========================>..] - ETA: 39s - rpn_cls: 5.1485 - rpn_regr: 0.2197 - detector_cls: 0.9726 - detector_regr: 0.3163
471/500 [===========================>..] - ETA: 38s - rpn_cls: 5.1458 - rpn_regr: 0.2197 - detector_cls: 0.9731 - detector_regr: 0.3164
472/500 [===========================>..] - ETA: 37s - rpn_cls: 5.1432 - rpn_regr: 0.2197 - detector_cls: 0.9735 - detector_regr: 0.3166
473/500 [===========================>..] - ETA: 35s - rpn_cls: 5.1406 - rpn_regr: 0.2196 - detector_cls: 0.9740 - detector_regr: 0.3168
474/500 [===========================>..] - ETA: 34s - rpn_cls: 5.1380 - rpn_regr: 0.2196 - detector_cls: 0.9745 - detector_regr: 0.3170
475/500 [===========================>..] - ETA: 33s - rpn_cls: 5.1355 - rpn_regr: 0.2195 - detector_cls: 0.9749 - detector_regr: 0.3172
476/500 [===========================>..] - ETA: 31s - rpn_cls: 5.1329 - rpn_regr: 0.2195 - detector_cls: 0.9754 - detector_regr: 0.3174
477/500 [===========================>..] - ETA: 30s - rpn_cls: 5.1304 - rpn_regr: 0.2194 - detector_cls: 0.9758 - detector_regr: 0.3175
478/500 [===========================>..] - ETA: 29s - rpn_cls: 5.1279 - rpn_regr: 0.2194 - detector_cls: 0.9763 - detector_regr: 0.3177
479/500 [===========================>..] - ETA: 27s - rpn_cls: 5.1254 - rpn_regr: 0.2193 - detector_cls: 0.9767 - detector_regr: 0.3179
480/500 [===========================>..] - ETA: 26s - rpn_cls: 5.1229 - rpn_regr: 0.2193 - detector_cls: 0.9772 - detector_regr: 0.3181
481/500 [===========================>..] - ETA: 25s - rpn_cls: 5.1204 - rpn_regr: 0.2193 - detector_cls: 0.9776 - detector_regr: 0.3183
482/500 [===========================>..] - ETA: 23s - rpn_cls: 5.1180 - rpn_regr: 0.2192 - detector_cls: 0.9781 - detector_regr: 0.3184
483/500 [===========================>..] - ETA: 22s - rpn_cls: 5.1155 - rpn_regr: 0.2192 - detector_cls: 0.9785 - detector_regr: 0.3186
484/500 [============================>.] - ETA: 21s - rpn_cls: 5.1131 - rpn_regr: 0.2192 - detector_cls: 0.9790 - detector_regr: 0.3188
485/500 [============================>.] - ETA: 19s - rpn_cls: 5.1107 - rpn_regr: 0.2191 - detector_cls: 0.9794 - detector_regr: 0.3190
Average number of overlapping bounding boxes from RPN = 8.96 for 500 previous iterations

486/500 [============================>.] - ETA: 18s - rpn_cls: 5.1082 - rpn_regr: 0.2191 - detector_cls: 0.9798 - detector_regr: 0.3191
487/500 [============================>.] - ETA: 17s - rpn_cls: 5.1058 - rpn_regr: 0.2190 - detector_cls: 0.9803 - detector_regr: 0.3193
488/500 [============================>.] - ETA: 15s - rpn_cls: 5.1034 - rpn_regr: 0.2190 - detector_cls: 0.9807 - detector_regr: 0.3195
489/500 [============================>.] - ETA: 14s - rpn_cls: 5.1010 - rpn_regr: 0.2190 - detector_cls: 0.9811 - detector_regr: 0.3197
490/500 [============================>.] - ETA: 13s - rpn_cls: 5.0986 - rpn_regr: 0.2189 - detector_cls: 0.9816 - detector_regr: 0.3198
491/500 [============================>.] - ETA: 11s - rpn_cls: 5.0961 - rpn_regr: 0.2189 - detector_cls: 0.9820 - detector_regr: 0.3200
492/500 [============================>.] - ETA: 10s - rpn_cls: 5.0937 - rpn_regr: 0.2189 - detector_cls: 0.9824 - detector_regr: 0.3202
493/500 [============================>.] - ETA: 9s - rpn_cls: 5.0913 - rpn_regr: 0.2188 - detector_cls: 0.9828 - detector_regr: 0.3204
494/500 [============================>.] - ETA: 7s - rpn_cls: 5.0889 - rpn_regr: 0.2188 - detector_cls: 0.9833 - detector_regr: 0.3205
495/500 [============================>.] - ETA: 6s - rpn_cls: 5.0865 - rpn_regr: 0.2188 - detector_cls: 0.9837 - detector_regr: 0.3207
496/500 [============================>.] - ETA: 5s - rpn_cls: 5.0842 - rpn_regr: 0.2187 - detector_cls: 0.9841 - detector_regr: 0.3209
497/500 [============================>.] - ETA: 3s - rpn_cls: 5.0818 - rpn_regr: 0.2187 - detector_cls: 0.9845 - detector_regr: 0.3210
498/500 [============================>.] - ETA: 2s - rpn_cls: 5.0794 - rpn_regr: 0.2187 - detector_cls: 0.9849 - detector_regr: 0.3212
499/500 [============================>.] - ETA: 1s - rpn_cls: 5.0771 - rpn_regr: 0.2186 - detector_cls: 0.9853 - detector_regr: 0.3213
500/500 [==============================] - 659s 1s/step - rpn_cls: 5.0747 - rpn_regr: 0.2186 - detector_cls: 0.9857 - detector_regr: 0.3215
2019-08-18 04:06:25.245342: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7085
pciBusID: 0000:01:00.0
2019-08-18 04:06:25.245531: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2019-08-18 04:06:25.245594: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2019-08-18 04:06:25.245654: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
2019-08-18 04:06:25.245715: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
2019-08-18 04:06:25.245777: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
2019-08-18 04:06:25.245838: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
2019-08-18 04:06:25.245911: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-08-18 04:06:25.246598: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-08-18 04:06:25.246675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-18 04:06:25.246727: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-08-18 04:06:25.246774: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-08-18 04:06:25.247520: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5075 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)

No object detection box

I have trained for 50 epochs more than 5 times, but I cannot get any boxes in the result images. Does this algorithm work correctly or not? I have 36 classes (the digits and characters in the images).

'NoneType' object has no attribute 'shape' when training?

I have created my own dataset for a project with VOC annotation files. I added new data and now get this error:

45/500 [=>............................] - ETA: 4430s - rpn_cls: 0.2582 - rpn_regr: 0.1585 - detector_cls: 0.3072 - detector_regr: 0.2381
'NoneType' object has no attribute 'shape'

Why is this the case? I traced it down to this line:

X, Y, img_data = next(data_gen_train)
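A common cause of 'NoneType' object has no attribute 'shape' in setups like this is an image that cv2.imread could not load (missing file, wrong path, or corrupt data), so the generator receives None instead of an array. Below is a minimal sanity-check sketch, assuming the simple-parser text format; the helper name and the annotation file name are illustrative, and the same idea applies to paths taken from VOC XML files.

```python
import os
import cv2

# Hypothetical sanity check: list annotation entries whose image cannot be
# loaded. cv2.imread returns None for missing or corrupt files, which is what
# later surfaces as "'NoneType' object has no attribute 'shape'".
def find_unreadable_images(annotation_file):
    bad = []
    with open(annotation_file) as f:
        for line in f:
            filepath = line.strip().split(',')[0]
            if filepath and (not os.path.isfile(filepath) or cv2.imread(filepath) is None):
                bad.append(filepath)
    return bad

print(find_unreadable_images('train.txt'))  # illustrative annotation file name
```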

Arrays are not less-ordered

I get this issue in test_frcnn.py at the line:
new_boxes, new_probs = roi_helpers.non_max_suppression_fast(bbox, np.array(probs[key]), overlap_thresh=0.5)

prediction 1517 of 5244 for image 2008_008773.jpg
prediction 1518 of 5244 for image 2011_003657.jpg
prediction 1519 of 5244 for image 2009_000705.jpg
prediction 1520 of 5244 for image 2010_003898.jpg
prediction 1521 of 5244 for image 2012_001944.jpg
prediction 1522 of 5244 for image 2010_000045.jpg
prediction 1523 of 5244 for image 2009_001825.jpg
prediction 1524 of 5244 for image 2008_004620.jpg
prediction 1525 of 5244 for image 2008_003477.jpg
Traceback (most recent call last):
File "active_learning_modul.py", line 257, in
train_simple()
File "active_learning_modul.py", line 187, in train_simple
predict_list=test.make_predicton_new(list_to_predict,con)
File "/home/kamgo/Active-Learning-Faster-RCNN/keras_frcnn/test_frcnn.py", line 532, in make_predicton_new
new_boxes, new_probs = roi_helpers.non_max_suppression_fast(bbox, np.array(probs[key]), overlap_thresh=0.5)
File "/home/kamgo/Active-Learning-Faster-RCNN/keras_frcnn/roi_helpers.py", line 166, in non_max_suppression_fast
np.testing.assert_array_less(x1, x2)
File "/home/kamgo/AL_dir/py36/lib/python3.6/site-packages/numpy/testing/nose_tools/utils.py", line 1038, in assert_array_less
equal_inf=False)
File "/home/kamgo/AL_dir/py36/lib/python3.6/site-packages/numpy/testing/nose_tools/utils.py", line 781, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Arrays are not less-ordered

(mismatch 25.0%)
x: array([112, 112, 128, 112])
y: array([128, 160, 144, 112])

As you can see, it could predict, but then I receive that error.
Please, I need help.
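The assertion that fails here, np.testing.assert_array_less(x1, x2), requires every box to satisfy x1 < x2 (and likewise y1 < y2); the mismatch output shows one box where x1 == x2 == 112, i.e. a degenerate zero-width box. One workaround is to drop such boxes before calling non-max suppression. The helper below is a hedged sketch (its name is made up), applied to bbox and probs[key] right before the failing call:

```python
import numpy as np

# Hypothetical pre-filter: drop degenerate boxes (zero or negative width/height)
# before non-max suppression, since non_max_suppression_fast asserts x1 < x2
# and y1 < y2 for every box it receives.
def drop_degenerate_boxes(boxes, probs):
    boxes = np.asarray(boxes)
    probs = np.asarray(probs)
    keep = (boxes[:, 2] > boxes[:, 0]) & (boxes[:, 3] > boxes[:, 1])
    return boxes[keep], probs[keep]

# Usage (names as in the traceback):
# bbox, box_probs = drop_degenerate_boxes(bbox, probs[key])
# new_boxes, new_probs = roi_helpers.non_max_suppression_fast(bbox, box_probs, overlap_thresh=0.5)
```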

Why is there a `model_classifier_only` in `test_frcnn.py`?

I noticed there are two classifier models defined in the test_frcnn.py script.

  • One is model_classifier, which is compiled and has weights loaded, but is not used during actual inference.
  • The other is model_classifier_only, which is neither compiled nor has weights loaded explicitly, but is the one used during actual inference.

I suspect this could cause inconsistencies and issues.
Could you kindly explain the reasoning behind this?
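If the two models are built from the same layer instances, which is the usual pattern for this kind of split, there is no inconsistency: Keras layers own their weights, so loading weights through model_classifier also updates model_classifier_only, and the latter exists only so that inference can feed a feature map plus ROIs as separate inputs. The snippet below is a toy illustration of that weight-sharing behaviour, not the repo's code:

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# Two Model objects wrapping the SAME layer instance share weights: whatever is
# loaded or trained through one is immediately visible through the other.
inp = Input(shape=(4,))
shared = Dense(3, name='shared_dense')(inp)

model_a = Model(inp, shared)   # e.g. the model that load_weights() is called on
model_b = Model(inp, shared)   # e.g. the model actually used for inference

x = np.random.rand(1, 4).astype('float32')
print(np.allclose(model_a.predict(x), model_b.predict(x)))  # True: identical weights
```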

What is measure_map.py for?

Hi, I'm looking at this code and I don't know why there is a measure_map.py file. I can't find it imported in any other file, so it must be called directly from the command line. There's also no info in the readme, so I'm a bit confused about when I should use this file.

Strange tx,ty,tw,th values?

In data_generators.py, the tx, ty, tw, th values that are passed into y_rpn_regr are negative in most cases. These are for the box dimensions scaled to the feature-map size, so they should not be negative. Or am I missing something?
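For context, tx, ty, tw, th in Faster R-CNN are not scaled box dimensions: they are the standard signed regression targets from the paper, i.e. centre offsets normalised by the anchor size and log size ratios, so negative values are expected whenever the ground-truth centre lies to the left of or above the anchor centre, or the ground-truth box is smaller than the anchor. A small sketch written from the paper's parameterisation (not copied from data_generators.py):

```python
import math

# Standard Faster R-CNN box-regression targets: all four are signed values.
def regression_targets(gt, anchor):
    gx, gy, gw, gh = gt        # ground-truth centre x/y, width, height
    ax, ay, aw, ah = anchor    # anchor centre x/y, width, height
    tx = (gx - ax) / aw        # negative when the GT centre is left of the anchor centre
    ty = (gy - ay) / ah        # negative when the GT centre is above the anchor centre
    tw = math.log(gw / aw)     # negative when the GT box is narrower than the anchor
    th = math.log(gh / ah)     # negative when the GT box is shorter than the anchor
    return tx, ty, tw, th

print(regression_targets((50, 50, 30, 30), (60, 60, 64, 64)))  # all four negative
```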

rpn_accuracy_rpn_monitor doesn't "count" equally in different epochs

As an example, the figure below shows that in 3 different epochs the rpn_accuracy_rpn_monitor message "after 20 epochs" is not reported equally.

Every image contains only one ground-truth bbox, so I thought pos_samples should only get one sample per iteration, but sometimes the length of pos_samples is 2. I have no clue why this is happening.

Unbenannt

No object detected

I have run the above code and the model trained perfectly with no errors, but when I run the test part I get no output, i.e. there is no bounding box in the output image, although the images are written out without errors.

ValueError

Using TensorFlow backend.
{0: 'ridge', 1: 'bifurcation', 2: 'bif', 3: 'singular', 4: 'bg'}
Loading weights from ./model_frcnn.hdf5
Traceback (most recent call last):
File "test_frcnn.py", line 136, in
model_rpn.load_weights(C.model_path, by_name=True)
File "C:\Users\Siddharth\Anaconda3\envs\py33\lib\site-packages\keras\engine\network.py", line 1163, in load_weights
reshape=reshape)
File "C:\Users\Siddharth\Anaconda3\envs\py33\lib\site-packages\keras\engine\saving.py", line 1149, in load_weights_from_hdf5_group_by_name
str(weight_values[i].shape) + '.')
ValueError: Layer #177 (named "rpn_conv1"), weight <tf.Variable 'rpn_conv1/kernel:0' shape=(3, 3, 512, 512) dtype=float32_ref> has shape (3, 3, 512, 512), but the saved weight has shape (512, 1024, 3, 3).

No foreground is detected when testing

Hi @kbardool ,
Thanks for sharing this useful code.
I trained the model using a command like this:

python3 train_frcnn.py -p /path/to/VOCDevkit

After training for 100 epochs, I tested the trained model on VOC images:

python3 test_frcnn.py -p ../VOCdevkit/VOC2012/JPEGImages

However, the output looks like this:

2007_000121.jpg                                           
Elapsed time = 0.6811630725860596                               
[]                         

After debugging, I found that the test images are all treated as background, i.e., the network classifies them as the bg class.
Can you tell me what I should do to solve this problem? Thanks :)
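Besides training for longer and watching whether the four losses actually decrease, one thing worth checking is the detection confidence threshold in the test script: if every ROI's best non-background score stays below it, the result list is empty even though the images are written out. The filter typically looks like the sketch below; the variable name bbox_threshold and the exact structure are assumptions, not necessarily what test_frcnn.py uses verbatim.

```python
import numpy as np

# Hypothetical post-processing filter illustrating why a high confidence
# threshold yields "[]" for every image: only ROIs whose best non-background
# score exceeds bbox_threshold survive as detections.
bbox_threshold = 0.8  # try lowering this (e.g. to 0.5) while debugging

def keep_confident_rois(class_probs, class_mapping):
    detections = []
    for roi_idx in range(class_probs.shape[0]):
        best = int(np.argmax(class_probs[roi_idx]))
        score = float(class_probs[roi_idx, best])
        if class_mapping[best] != 'bg' and score >= bbox_threshold:
            detections.append((class_mapping[best], score, roi_idx))
    return detections

# Example: 3 ROIs over the classes {0: 'cat', 1: 'bg'}
probs = np.array([[0.6, 0.4], [0.9, 0.1], [0.2, 0.8]])
print(keep_confident_rois(probs, {0: 'cat', 1: 'bg'}))  # only the 0.9 ROI is kept
```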

Training does not start if too many training images are used

Hi,
I've already successfully done fine-tuning with about 2000 synthetic images, where all training images (train+val) were listed under "Num train samples" and "Num val samples" (first image).
im1

But if I take about 9000 images, it only shows 31 and does not start the training (second image), even though it shows the correct number of images per class (9072).
im2

Does anyone have an explanation for this, or has anyone seen a similar problem?

Question about shape of rpn_cls and rpn_regr

Hello, I'm currently implementing my own Faster R-CNN algorithm to get a better understanding of it. However, I have some trouble understanding the mathematics behind it all.

In the data_generators.py file we calculate the rpn_cls and rpn_regr values from the generated anchors and the ground-truth boxes. However, this outputs different values than what I expected.

For rpn_cls I expected the following shape: (None, 64, 64, 9), since we multiply the number of anchor_scales by the number of anchor_ratios to get 9.
However, the actual output is (None, 64, 64, 18).

The same goes for rpn_regr, where I expected the shape (None, 64, 64, 36),
with 36 = 9 * 4 (box coordinates),
but received (None, 64, 64, 72).

I don't quite understand where these values come from and why they differ from what I expected.

If anyone could explain further, I would really appreciate it.
Thanks in advance!
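In implementations structured like this one, the doubled channel count comes from packing a per-anchor validity mask into the same tensor as the actual targets, so that the custom RPN losses can ignore anchors that are neither clearly positive nor clearly negative. A minimal, channels-last sketch of that packing (array names are illustrative and may not match data_generators.py exactly):

```python
import numpy as np

# Illustrative packing of RPN targets for a 64x64 feature map with 9 anchors.
num_anchors = 9
h, w = 64, 64

y_is_box_valid = np.zeros((h, w, num_anchors))       # 1 where the anchor contributes to the loss
y_rpn_overlap  = np.zeros((h, w, num_anchors))       # 1 where the anchor is a positive (object) anchor
y_regr_targets = np.zeros((h, w, 4 * num_anchors))   # (tx, ty, tw, th) per anchor

# Classification target: mask channels followed by label channels -> 18 channels.
y_rpn_cls = np.concatenate([y_is_box_valid, y_rpn_overlap], axis=-1)

# Regression target: the positive mask repeated 4x, followed by the targets -> 72 channels.
y_rpn_regr = np.concatenate([np.repeat(y_rpn_overlap, 4, axis=-1), y_regr_targets], axis=-1)

print(y_rpn_cls.shape, y_rpn_regr.shape)  # (64, 64, 18) (64, 64, 72)
```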

Training fails for a dataset with only one class

Hi,

I am trying to train a network on a GPU to localize tables in images. My dataset has one class, i.e. "table", so including the background there are 2 classes. During training I get the error below in the rpn_out_class layer: Exception: Error when checking target: expected rpn_out_class to have shape (None, None, None, 9) but got array with shape (1, 49, 38, 18). Can anyone help me understand how the last dimension of rpn_out_class became 18, when it should be 9 because there are 9 anchor boxes?

image

I am using the simple ground-truth dataset format.

On the CPU the network trains without any errors.
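The 18 channels in the target are expected rather than a bug: as sketched in the previous issue, the ground-truth tensor carries a 9-channel validity mask in front of the 9 labels, and the custom RPN classification loss is supposed to slice them apart, which is why rpn_out_class can legitimately output only 9 channels. If Keras rejects the target shape on the GPU but not the CPU, it is worth checking that the same custom RPN loss is attached in both environments and that the Keras versions match. Below is a hedged, illustrative sketch of such a loss (not necessarily identical to the repo's losses.py):

```python
from keras import backend as K

# Illustrative RPN classification loss: y_true carries 2 * num_anchors channels
# (validity mask followed by 0/1 object labels), while y_pred from rpn_out_class
# carries num_anchors channels. The mask selects which anchors contribute.
def rpn_loss_cls(num_anchors, eps=1e-4):
    def loss(y_true, y_pred):
        valid = y_true[:, :, :, :num_anchors]
        labels = y_true[:, :, :, num_anchors:]
        ce = K.binary_crossentropy(labels, y_pred)
        return K.sum(valid * ce) / (K.sum(valid) + eps)
    return loss
```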

Recurrent Layers in Faster RCNN?

Hi, I'm checking out this code, but I'm not sure: does this model use any recurrent layers? As I understand the code, it rather uses temporal blocks, right?
Sorry for the really newbie question, but I'm trying to learn a bit about ML :)

activation function

Where is the activation function set? I'm getting an exploding loss, so I would like to change the activation function.
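In Faster R-CNN implementations of this kind, the activations are hard-coded inside the model-building functions rather than exposed in config.py: the backbone layers (e.g. in resnet.py or vgg.py) use relu, the RPN objectness output uses a sigmoid, and the detector's class output uses a softmax. The tracebacks elsewhere on this page mention layer names such as rpn_conv1 and rpn_out_class, so a hedged sketch of what that RPN head typically looks like (layer arguments assumed, not guaranteed to match the repo) is:

```python
from keras.layers import Input, Conv2D

# Sketch of a typical RPN head: the activations are set directly on the layers,
# so changing them means editing the model-building function, not a config option.
def rpn_head(base_layers, num_anchors):
    x = Conv2D(512, (3, 3), padding='same', activation='relu',
               kernel_initializer='normal', name='rpn_conv1')(base_layers)
    x_class = Conv2D(num_anchors, (1, 1), activation='sigmoid',
                     kernel_initializer='uniform', name='rpn_out_class')(x)
    x_regr = Conv2D(num_anchors * 4, (1, 1), activation='linear',
                    kernel_initializer='zero', name='rpn_out_regress')(x)
    return [x_class, x_regr, base_layers]

feature_map = Input(shape=(None, None, 512))   # illustrative backbone output
cls, regr, _ = rpn_head(feature_map, num_anchors=9)
```

That said, an exploding loss is more often tamed by lowering the optimizer's learning rate or double-checking the annotation coordinates than by swapping activations.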

InvalidArgumentError

I got the following error while running "python train_frcnn.py -o simple -p train.txt":

Using TensorFlow backend.
Parsing annotation files
Training images per class:
{'bg': 0, 'table': 418}
Num classes (including bg) = 2
Config has been written to config.pickle, and can be loaded when testing to ensure correct results
Num train samples 287
Num val samples 51
Traceback (most recent call last):
  File "/Users/ketulshah/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1628, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 1 but is rank 0 for 'bn_conv1/Reshape_4' (op: 'Reshape') with input shapes: [1,1,1,64], [].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train_frcnn.py", line 122, in <module>
    shared_layers = nn.nn_base(img_input, trainable=True)
  File "/Users/ketulshah/table-detection/keras-frcnn/keras_frcnn/resnet.py", line 180, in nn_base
    x = FixedBatchNormalization(axis=bn_axis, name='bn_conv1')(x)
  File "/Users/ketulshah/anaconda3/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "/Users/ketulshah/table-detection/keras-frcnn/keras_frcnn/FixedBatchNormalization.py", line 73, in call
    epsilon=self.epsilon)
  File "/Users/ketulshah/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 1908, in batch_normalization
    mean = tf.reshape(mean, (-1))
  File "/Users/ketulshah/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 6482, in reshape
    "Reshape", tensor=tensor, shape=shape, name=name)
  File "/Users/ketulshah/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/Users/ketulshah/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "/Users/ketulshah/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
    op_def=op_def)
  File "/Users/ketulshah/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1792, in __init__
    control_input_ops)
  File "/Users/ketulshah/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1631, in _create_c_op
    raise ValueError(str(e))
ValueError: Shape must be rank 1 but is rank 0 for 'bn_conv1/Reshape_4' (op: 'Reshape') with input shapes: [1,1,1,64], [].
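The failing call in this traceback is tf.reshape(mean, (-1)) inside the Keras backend: in Python, (-1) is just the integer -1, i.e. a rank-0 shape, which this TensorFlow build rejects, whereas [-1] is the rank-1 shape that reshape expects. In other words, this is a Keras/TensorFlow version-compatibility problem rather than something specific to this repo; commonly reported fixes are pinning Keras and TensorFlow to versions known to work together, or patching that backend line to use [-1]. A tiny sketch of the difference (the offending variant is left commented out so the snippet runs):

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])

flat_ok = tf.reshape(x, [-1])     # rank-1 shape: fine
# flat_bad = tf.reshape(x, (-1))  # (-1) is just the int -1, a rank-0 shape;
#                                 # raises "Shape must be rank 1 but is rank 0"
print(flat_ok)
```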

Convert entire prediction pipeline to a single model file

I wondered: is it in any way possible to convert the entire prediction pipeline, with all of its functions, into a single model file, resulting in a fully self-contained model?

It would be handy if you wanted to deploy the model to a different code environment.
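Not with model.save() alone: in this pipeline the RPN and the detector are separate Keras models, and the glue between them (converting RPN outputs to ROIs, non-max suppression) runs in plain NumPy outside the graph, so it cannot live inside an HDF5 file. What you can do is bundle all learned weights into a single file via a wrapper model that spans both heads, and ship the post-processing code alongside it. A toy, self-contained sketch of that idea (layer and file names are made up):

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# Toy illustration: two "sub-models" sharing layers, bundled by one wrapper
# model whose weight file can restore either of them by layer name. The same
# idea applies to the RPN / detector pair here; the NumPy post-processing
# (rpn_to_roi, non_max_suppression_fast) still has to ship as code.
inp = Input(shape=(8,))
backbone = Dense(16, name='backbone')(inp)
head_a = Dense(2, name='rpn_like_head')(backbone)
head_b = Dense(4, name='detector_like_head')(backbone)

model_a = Model(inp, head_a)
model_b = Model(inp, head_b)
model_all = Model(inp, [head_a, head_b])

model_all.save_weights('bundled_weights.h5')            # one file for everything
model_a.load_weights('bundled_weights.h5', by_name=True)
model_b.load_weights('bundled_weights.h5', by_name=True)
```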

GPU memory

2019-09-03 13:02:34.743548: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd1b00 next 95 of size 512
2019-09-03 13:02:34.743569: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd1d00 next 96 of size 512
2019-09-03 13:02:34.743590: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd1f00 next 97 of size 512
2019-09-03 13:02:34.743611: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd2100 next 98 of size 512
2019-09-03 13:02:34.743632: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd2300 next 99 of size 512
2019-09-03 13:02:34.743653: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd2500 next 100 of size 512
2019-09-03 13:02:34.743674: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd2700 next 101 of size 512
2019-09-03 13:02:34.743696: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd2900 next 102 of size 512
2019-09-03 13:02:34.743717: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd2b00 next 103 of size 512
2019-09-03 13:02:34.743738: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd2d00 next 104 of size 512
2019-09-03 13:02:34.743758: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd2f00 next 105 of size 512
2019-09-03 13:02:34.743780: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd3100 next 106 of size 256
2019-09-03 13:02:34.743801: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6cd3200 next 107 of size 589824
2019-09-03 13:02:34.743823: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6d63200 next 18446744073709551615 of size 642560
2019-09-03 13:02:34.743843: I tensorflow/core/common_runtime/bfc_allocator.cc:793] Next region of size 4194304
2019-09-03 13:02:34.743866: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a6e00000 next 18446744073709551615 of size 4194304
2019-09-03 13:02:34.743887: I tensorflow/core/common_runtime/bfc_allocator.cc:793] Next region of size 8388608
2019-09-03 13:02:34.743908: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a7200000 next 71 of size 4194304
2019-09-03 13:02:34.743930: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0x7fd6a7600000 next 18446744073709551615 of size 4194304
2019-09-03 13:02:34.743950: I tensorflow/core/common_runtime/bfc_allocator.cc:809] Summary of in-use Chunks by size:
2019-09-03 13:02:34.743980: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 868 Chunks of size 256 totalling 217.0KiB
2019-09-03 13:02:34.744004: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 480 Chunks of size 512 totalling 240.0KiB
2019-09-03 13:02:34.744027: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 886 Chunks of size 1024 totalling 886.0KiB
2019-09-03 13:02:34.744050: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 1280 totalling 1.2KiB
2019-09-03 13:02:34.744073: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 567 Chunks of size 2048 totalling 1.11MiB
2019-09-03 13:02:34.744096: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 8 Chunks of size 2816 totalling 22.0KiB
2019-09-03 13:02:34.744119: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 400 Chunks of size 4096 totalling 1.56MiB
2019-09-03 13:02:34.744141: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 190 Chunks of size 8192 totalling 1.48MiB
2019-09-03 13:02:34.744165: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 33 Chunks of size 16384 totalling 528.0KiB
2019-09-03 13:02:34.744187: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 19 Chunks of size 18432 totalling 342.0KiB
2019-09-03 13:02:34.744210: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 33 Chunks of size 37632 totalling 1.18MiB
2019-09-03 13:02:34.744233: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 166 Chunks of size 65536 totalling 10.38MiB
2019-09-03 13:02:34.752035: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 26 Chunks of size 73728 totalling 1.83MiB
2019-09-03 13:02:34.752086: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 33 Chunks of size 131072 totalling 4.12MiB
2019-09-03 13:02:34.752117: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 83 Chunks of size 147456 totalling 11.67MiB
2019-09-03 13:02:34.752145: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 26 Chunks of size 172032 totalling 4.27MiB
2019-09-03 13:02:34.752174: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 191 Chunks of size 262144 totalling 47.75MiB
2019-09-03 13:02:34.752202: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 66 Chunks of size 524288 totalling 33.00MiB
2019-09-03 13:02:34.752223: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 107 Chunks of size 589824 totalling 60.19MiB
2019-09-03 13:02:34.752244: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 642560 totalling 627.5KiB
2019-09-03 13:02:34.752264: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 25 Chunks of size 655360 totalling 15.62MiB
2019-09-03 13:02:34.752289: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 951552 totalling 929.2KiB
2019-09-03 13:02:34.752314: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 290 Chunks of size 1048576 totalling 290.00MiB
2019-09-03 13:02:34.752339: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 1117952 totalling 1.07MiB
2019-09-03 13:02:34.752367: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 53 Chunks of size 2097152 totalling 106.00MiB
2019-09-03 13:02:34.752394: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 158 Chunks of size 2359296 totalling 355.50MiB
2019-09-03 13:02:34.752423: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 3465216 totalling 3.30MiB
2019-09-03 13:02:34.752444: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 4128768 totalling 3.94MiB
2019-09-03 13:02:34.752465: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 82 Chunks of size 4194304 totalling 328.00MiB
2019-09-03 13:02:34.752484: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 7429632 totalling 7.08MiB
2019-09-03 13:02:34.752504: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 19 Chunks of size 8388608 totalling 152.00MiB
2019-09-03 13:02:34.752524: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 45 Chunks of size 9437184 totalling 405.00MiB
2019-09-03 13:02:34.752541: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 9961472 totalling 9.50MiB
2019-09-03 13:02:34.752558: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 12 Chunks of size 10272768 totalling 117.56MiB
2019-09-03 13:02:34.752575: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 14929920 totalling 14.24MiB
2019-09-03 13:02:34.752592: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 15533056 totalling 14.81MiB
2019-09-03 13:02:34.752609: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 15687168 totalling 14.96MiB
2019-09-03 13:02:34.752626: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 17 Chunks of size 18874368 totalling 306.00MiB
2019-09-03 13:02:34.752643: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 18936832 totalling 18.06MiB
2019-09-03 13:02:34.752660: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 6 Chunks of size 20275200 totalling 116.02MiB
2019-09-03 13:02:34.752677: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 21097472 totalling 20.12MiB
2019-09-03 13:02:34.752693: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 30767872 totalling 29.34MiB
2019-09-03 13:02:34.752710: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 33554432 totalling 32.00MiB
2019-09-03 13:02:34.752726: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 40009728 totalling 38.16MiB
2019-09-03 13:02:34.752743: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 7 Chunks of size 40280064 totalling 268.90MiB
2019-09-03 13:02:34.752760: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 22 Chunks of size 41091072 totalling 862.12MiB
2019-09-03 13:02:34.752776: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 42663936 totalling 40.69MiB
2019-09-03 13:02:34.752793: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 50486272 totalling 48.15MiB
2019-09-03 13:02:34.752815: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 70828032 totalling 67.55MiB
2019-09-03 13:02:34.752834: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 3 Chunks of size 81100800 totalling 232.03MiB
2019-09-03 13:02:34.752852: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 3 Chunks of size 161120256 totalling 460.97MiB
2019-09-03 13:02:34.752868: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 162278400 totalling 154.76MiB
2019-09-03 13:02:34.752885: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 201400320 totalling 192.07MiB
2019-09-03 13:02:34.752901: I tensorflow/core/common_runtime/bfc_allocator.cc:816] Sum Total of in-use chunks: 4.79GiB
2019-09-03 13:02:34.752917: I tensorflow/core/common_runtime/bfc_allocator.cc:818] total_region_allocated_bytes_: 5308940288 memory_limit_: 5308940288 available bytes: 0 curr_region_allocation_bytes_: 4294967296
2019-09-03 13:02:34.752942: I tensorflow/core/common_runtime/bfc_allocator.cc:824] Stats:
Limit: 5308940288
InUse: 5146169600
MaxInUse: 5296907776
NumAllocs: 470936928
MaxAllocSize: 2802284544

2019-09-03 13:02:34.753288: W tensorflow/core/common_runtime/bfc_allocator.cc:319] ****************************************************************************************************
2019-09-03 13:02:34.753332: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at strided_slice_op.cc:246 : Resource exhausted: OOM when allocating tensor with shape[1,38,264,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
return fn(*args)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1,38,264,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node gradients_7/roi_pooling_conv_7/strided_slice_99_grad/StridedSliceGrad}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "active_learning_modul.py", line 253, in
train_simple()
File "active_learning_modul.py", line 178, in train_simple
con = train.train_model(seed_imgs,seed_classes_count,seed_classes_mapping,con,Earlystopping_patience,config_output_filename)
File "/mnt/0CCCB718CCB6FB52/Projekt/Active-Learning-Faster-RCNN/keras_frcnn/train_frcnn.py", line 230, in train_model
loss_class = model_classifier.train_on_batch([X, X2[:, sel_samples, :]], [Y1[:, sel_samples, :], Y2[:, sel_samples, :]])
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/keras/engine/training.py", line 1621, in train_on_batch
outputs = self.train_function(ins)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2103, in call
feed_dict=feed_dict)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 950, in run
run_metadata_ptr)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
run_metadata)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1,38,264,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node gradients_7/roi_pooling_conv_7/strided_slice_99_grad/StridedSliceGrad (defined at /home/kamgo/environments/pyp36/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:2138) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Errors may have originated from an input operation.
Input Source operations connected to node gradients_7/roi_pooling_conv_7/strided_slice_99_grad/StridedSliceGrad:
roi_pooling_conv_7/strided_slice_99/stack_2 (defined at /mnt/0CCCB718CCB6FB52/Projekt/Active-Learning-Faster-RCNN/keras_frcnn/RoiPoolingConv.py:105)

Original stack trace for 'gradients_7/roi_pooling_conv_7/strided_slice_99_grad/StridedSliceGrad':
File "active_learning_modul.py", line 253, in
train_simple()
File "active_learning_modul.py", line 178, in train_simple
con = train.train_model(seed_imgs,seed_classes_count,seed_classes_mapping,con,Earlystopping_patience,config_output_filename)
File "/mnt/0CCCB718CCB6FB52/Projekt/Active-Learning-Faster-RCNN/keras_frcnn/train_frcnn.py", line 230, in train_model
loss_class = model_classifier.train_on_batch([X, X2[:, sel_samples, :]], [Y1[:, sel_samples, :], Y2[:, sel_samples, :]])
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/keras/engine/training.py", line 1620, in train_on_batch
self._make_train_function()
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/keras/engine/training.py", line 1002, in _make_train_function
self.total_loss)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/keras/optimizers.py", line 381, in get_updates
grads = self.get_gradients(loss, params)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/keras/optimizers.py", line 47, in get_gradients
grads = K.gradients(loss, params)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2138, in gradients
return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 158, in gradients
unconnected_gradients)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py", line 731, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py", line 403, in _MaybeCompile
return grad_fn() # Exit early
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py", line 731, in
lambda: grad_fn(op, *out_grads))
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/ops/array_grad.py", line 279, in _StridedSliceGrad
shrink_axis_mask=op.get_attr("shrink_axis_mask")), None, None, None
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 10193, in strided_slice_grad
shrink_axis_mask=shrink_axis_mask, name=name)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
op_def=op_def)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2005, in init
self._traceback = tf_stack.extract_stack()

...which was originally created as op 'roi_pooling_conv_7/strided_slice_99', defined at:
File "active_learning_modul.py", line 253, in
train_simple()
[elided 0 identical lines from previous traceback]
File "active_learning_modul.py", line 178, in train_simple
con = train.train_model(seed_imgs,seed_classes_count,seed_classes_mapping,con,Earlystopping_patience,config_output_filename)
File "/mnt/0CCCB718CCB6FB52/Projekt/Active-Learning-Faster-RCNN/keras_frcnn/train_frcnn.py", line 106, in train_model
classifier = nn.classifier(shared_layers, roi_input, con.num_rois, nb_classes=len(classes_count), trainable=True)
File "/mnt/0CCCB718CCB6FB52/Projekt/Active-Learning-Faster-RCNN/keras_frcnn/resnet.py", line 239, in classifier
out_roi_pool = RoiPoolingConv(pooling_regions, num_rois)([base_layers, input_rois])
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/keras/engine/topology.py", line 578, in call
output = self.call(inputs, **kwargs)
File "/mnt/0CCCB718CCB6FB52/Projekt/Active-Learning-Faster-RCNN/keras_frcnn/RoiPoolingConv.py", line 105, in call
rs = tf.image.resize_images(img[:, y:y+h, x:x+w, :], (self.pool_size, self.pool_size))
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 680, in _slice_helper
name=name)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 846, in strided_slice
shrink_axis_mask=shrink_axis_mask)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 9989, in strided_slice
shrink_axis_mask=shrink_axis_mask, name=name)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
op_def=op_def)
File "/home/kamgo/environments/pyp36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2005, in init
self._traceback = tf_stack.extract_stack()
I don't understand why I receive this error during training.
I should say that I am using the repository to implement active learning, which means I loop over the data to train the model. The train function does not raise an error during the first three iterations, and then I get this error.
Please help me!
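
Not an official fix, but since the error only shows up after a few active-learning iterations, a likely cause is that the TensorFlow graph keeps growing every time the model is rebuilt inside the loop, so memory used by earlier iterations is never released. A minimal sketch of clearing the session between iterations, assuming a loop roughly like the one described (build_and_train and active_learning_batches are hypothetical names, not functions from this repo):

import keras.backend as K

for iteration_data in active_learning_batches:   # hypothetical outer active-learning loop
    # Drop the previous iteration's graph and free its GPU memory before the
    # model is rebuilt; otherwise ops accumulate across iterations until the
    # BFC allocator runs out of memory, as in the traceback above.
    K.clear_session()

    config = build_and_train(iteration_data)      # hypothetical wrapper around train_model(...)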

Pretrained weights

Where can I get pretrained weights, please?

Problem loading the model (ValueError: Unknown layer: FixedBatchNormalization)

import keras
keras.models.load_model('./model_frcnn.hdf5')
Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W1113 16:06:51.578105 140014292600640 deprecation_wrapper.py:119] From /home/qendrim/anaconda3/envs/hello-ai/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

Traceback (most recent call last):
  File "/home/qendrim/solaborate/repos/solaborate/Solaborate.ML/scripts/solaborate_scripts/practice.py", line 2, in <module>
    keras.models.load_model('./model_frcnn.hdf5')
  File "/home/qendrim/anaconda3/envs/hello-ai/lib/python3.7/site-packages/keras/engine/saving.py", line 419, in load_model
    model = _deserialize_model(f, custom_objects, compile)
  File "/home/qendrim/anaconda3/envs/hello-ai/lib/python3.7/site-packages/keras/engine/saving.py", line 225, in _deserialize_model
    model = model_from_config(model_config, custom_objects=custom_objects)
  File "/home/qendrim/anaconda3/envs/hello-ai/lib/python3.7/site-packages/keras/engine/saving.py", line 458, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/home/qendrim/anaconda3/envs/hello-ai/lib/python3.7/site-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "/home/qendrim/anaconda3/envs/hello-ai/lib/python3.7/site-packages/keras/utils/generic_utils.py", line 145, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/qendrim/anaconda3/envs/hello-ai/lib/python3.7/site-packages/keras/engine/network.py", line 1022, in from_config
    process_layer(layer_data)
  File "/home/qendrim/anaconda3/envs/hello-ai/lib/python3.7/site-packages/keras/engine/network.py", line 1008, in process_layer
    custom_objects=custom_objects)
  File "/home/qendrim/anaconda3/envs/hello-ai/lib/python3.7/site-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "/home/qendrim/anaconda3/envs/hello-ai/lib/python3.7/site-packages/keras/utils/generic_utils.py", line 138, in deserialize_keras_object
    ': ' + class_name)
ValueError: Unknown layer: FixedBatchNormalization

Could somebody help? How can I successfully load the model?
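
Not verified against every revision of this repo, but FixedBatchNormalization and RoiPoolingConv are custom layers defined under keras_frcnn/, so Keras can only deserialize a saved model containing them if it is told about those classes via custom_objects. A minimal sketch, assuming the repo directory is on your PYTHONPATH and the layer modules live where the tracebacks elsewhere on this page show them:

import keras
from keras_frcnn.FixedBatchNormalization import FixedBatchNormalization
from keras_frcnn.RoiPoolingConv import RoiPoolingConv

# Map the custom layer names stored in the HDF5 model config back to their classes.
model = keras.models.load_model(
    './model_frcnn.hdf5',
    custom_objects={
        'FixedBatchNormalization': FixedBatchNormalization,
        'RoiPoolingConv': RoiPoolingConv,
    })

Alternatively, the scripts in this repo rebuild the model architecture in code and then call load_weights on the hdf5 file, which avoids layer deserialization entirely.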

roi_helpers code issue

It seems line 278 of roi_helpers should run only for the TensorFlow backend, since rpn_layer always has shape (1, num_anchors, height, width) for 'th'.

Early stopping

Hey, I'm beginning to learn Keras with TensorFlow. I want to know how and where I can implement early stopping in train_frcnn.py.
Thank you for your help.
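
train_frcnn.py drives training with train_on_batch inside its own epoch loop rather than model.fit, so the built-in keras.callbacks.EarlyStopping cannot simply be passed as a callback. A hand-rolled sketch of the same idea, assuming you compute a total loss at the end of every epoch the way the script already does when deciding whether to save weights (run_one_epoch, num_epochs and patience below are illustrative names, not variables from the file):

patience = 5                      # epochs to wait for an improvement before stopping
best_loss = float('inf')
epochs_without_improvement = 0

for epoch_num in range(num_epochs):
    curr_loss = run_one_epoch()   # hypothetical: runs the existing inner batch loop, returns total loss

    if curr_loss < best_loss:
        best_loss = curr_loss
        epochs_without_improvement = 0
        model_all.save_weights(C.model_path)   # keep the best weights so far
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print('Early stopping: no improvement for {} epochs'.format(patience))
            break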

Very slow Training process

Hi,

Can anyone suggest how to increase training speed? I have a GPU but am unable to utilise it with the required Keras version.

For only 10 epochs it took almost 2.5 hours on my PC.

Thanks.

RPN Code Help Needed

This screenshot shows part of the code from the data_generators.py file, in the calc_rpn function. Could you explain what is happening here and why we are doing it?
Specifically: tx, ty, tw, th.

[screenshot, 2019-01-09 22:23:30: excerpt of the regression-target code from calc_rpn in data_generators.py]
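
For context (not the exact code in the screenshot): tx, ty, tw and th are the anchor-box regression targets from the Faster R-CNN paper. They encode how the ground-truth box differs from the anchor as a shift of the centre, normalised by the anchor size, plus a log-scale change of width and height, so the network predicts small, scale-independent numbers instead of raw pixel coordinates. A sketch of that parameterisation:

import numpy as np

def bbox_regression_targets(anchor, gt):
    """Faster R-CNN style targets describing how to move/scale `anchor` onto `gt`.

    Both boxes are given as (x1, y1, x2, y2)."""
    ax, ay = (anchor[0] + anchor[2]) / 2.0, (anchor[1] + anchor[3]) / 2.0   # anchor centre
    aw, ah = anchor[2] - anchor[0], anchor[3] - anchor[1]                   # anchor width/height
    gx, gy = (gt[0] + gt[2]) / 2.0, (gt[1] + gt[3]) / 2.0                   # ground-truth centre
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]                                   # ground-truth width/height

    tx = (gx - ax) / aw      # centre shift in x, in units of anchor width
    ty = (gy - ay) / ah      # centre shift in y, in units of anchor height
    tw = np.log(gw / aw)     # log ratio of widths
    th = np.log(gh / ah)     # log ratio of heights
    return tx, ty, tw, th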

Question about intention of code snippet in measure_map.py

First of all, when I run this file I get a KeyError because it can't find the key 'difficult'.
Second, why would we append a true label of 1 with a prediction score of 0 when these conditions are met? I'm a bit confused.

Code snippet from get_map in measure_map.py:

for gt_box in gt:
	# Ground-truth boxes that no detection matched (and that are not flagged
	# 'difficult') count as missed objects: record a true label of 1 with a
	# prediction score of 0 so they lower recall in the mAP computation.
	if not gt_box['bbox_matched'] and not gt_box['difficult']:
		if gt_box['class'] not in P:
			P[gt_box['class']] = []
			T[gt_box['class']] = []

		T[gt_box['class']].append(1)
		P[gt_box['class']].append(0)

Can someone explain this? @kbardool

getting error Shape must be rank 1 but is rank 0 while running

Hi, I have followed the instructions from the blog https://www.analyticsvidhya.com/blog/2018/11/implementation-faster-r-cnn-python-object-detection/ to train on my images with 2 classes and an image size of 224 x 224. However, I am getting the error mentioned below while running the command:

python3 train_frcnn.py -o simple -p annotate.txt

python3 -c  "import keras; print(keras.__version__) "
Using TensorFlow backend.
2.2.4 

Traceback (most recent call last):
  File "train_frcnn.py", line 123, in <module>
    shared_layers = nn.nn_base(img_input, trainable=True)
  File "/Users/b0208131/notebooks/objectDetection_1/keras-frcnn/keras_frcnn/resnet.py", line 181, in nn_base
    x = FixedBatchNormalization(axis=bn_axis, name='bn_conv1')(x)
  File "/anaconda3/envs/tensorflow_env/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "/Users/b0208131/notebooks/objectDetection_1/keras-frcnn/keras_frcnn/FixedBatchNormalization.py", line 73, in call
    epsilon=self.epsilon)
  File "/anaconda3/envs/tensorflow_env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 1908, in batch_normalization
    mean = tf.reshape(mean, (-1))
  File "/anaconda3/envs/tensorflow_env/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 6199, in reshape
    "Reshape", tensor=tensor, shape=shape, name=name)
  File "/anaconda3/envs/tensorflow_env/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/anaconda3/envs/tensorflow_env/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 454, in new_func
    return func(*args, **kwargs)
  File "/anaconda3/envs/tensorflow_env/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3155, in create_op
    op_def=op_def)
  File "/anaconda3/envs/tensorflow_env/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1731, in __init__
    control_input_ops)
  File "/anaconda3/envs/tensorflow_env/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1579, in _create_c_op
    raise ValueError(str(e))
ValueError: Shape must be rank 1 but is rank 0 for 'bn_conv1/Reshape_4' (op: 'Reshape') with input shapes: [1,1,1,64], [].
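
Not a definitive diagnosis, but the failing op in the traceback is tf.reshape(mean, (-1)) inside the Keras TensorFlow backend: (-1) is just the integer -1, i.e. a rank-0 shape, while Reshape expects a rank-1 shape such as [-1]. A tiny illustration of the difference; the commonly reported workaround for this repo is to switch to a Keras/TensorFlow combination whose batch_normalization does not hit this code path:

import tensorflow as tf

x = tf.ones((1, 1, 1, 64))

# tf.reshape(x, (-1))    # (-1) is the scalar -1 -> a rank-0 shape; on this TF version it
                         # raises "Shape must be rank 1 but is rank 0", as in the traceback
y = tf.reshape(x, [-1])  # [-1] is a rank-1 shape and flattens the tensor as intended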

Exception: 'a' cannot be empty unless no samples are taken

I ran the script train_frcnn.py to identify water bodies in an image using the following command

python train_frcnn.py -o simple -p new.csv

Even though the model is working to some extent, I'm getting this message

Exception: 'a' cannot be empty unless no samples are taken

Can anyone tell me why?

Here's the traceback

$ source activate tf

$ python train_frcnn.py -o simple -p new.csv

Using TensorFlow backend.
Parsing annotation files
Training images per class:
{'bg': 0, 'w': 4368}
Num classes (including bg) = 2
Config has been written to config.pickle, and can be loaded when testing to ensure correct results
Num train samples 30
Num val samples 2
loading weights from resnet50_weights_tf_dim_ordering_tf_kernels.h5
Could not load pretrained model weights. Weights can be found in the keras application folder https://github.com/fchollet/keras/tree/master/keras/applications
Starting training
Epoch 1/100
2019-05-30 15:08:07.699326: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-05-30 15:08:08.013302: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105] Found device 0 with properties:
name: Quadro M6000 major: 5 minor: 2 memoryClockRate(GHz): 1.114
pciBusID: 0000:84:00.0
totalMemory: 11.92GiB freeMemory: 11.51GiB
2019-05-30 15:08:08.187655: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105] Found device 1 with properties:
name: Tesla K40c major: 3 minor: 5 memoryClockRate(GHz): 0.745
pciBusID: 0000:04:00.0
totalMemory: 11.17GiB freeMemory: 11.09GiB
2019-05-30 15:08:08.188024: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Device peer to peer matrix
2019-05-30 15:08:08.188331: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1126] DMA: 0 1
2019-05-30 15:08:08.188359: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1136] 0: Y N
2019-05-30 15:08:08.188372: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1136] 1: N Y
2019-05-30 15:08:08.188400: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Quadro M6000, pci bus id: 0000:84:00.0, compute capability: 5.2)
2019-05-30 15:08:08.188421: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:1) -> (device: 1, name: Tesla K40c, pci bus id: 0000:04:00.0, compute capability: 3.5)
168/500 [=========>....................] - ETA: 27:35 - rpn_cls: 4.8963 - rpn_regr: 2.6810 - detector_cls: 0.4262 - detector_regr: 0.2180Exception: 'a' cannot be empty unless no samples are taken
225/500 [============>.................] - ETA: 22:33 - rpn_cls: 4.8436 - rpn_regr: 2.4221 - detector_cls: 0.4458 - detector_regr: 0.1770Exception: 'a' cannot be empty unless no samples are taken
250/500 [==============>...............] - ETA: 20:50 - rpn_cls: 4.8341 - rpn_regr: 2.3409 - detector_cls: 0.4516 - detector_regr: 0.1718Exception: 'a' cannot be empty unless no samples are taken
252/500 [==============>...............] - ETA: 20:39 - rpn_cls: 4.8375 - rpn_regr: 2.3325 - detector_cls: 0.4510 - detector_regr: 0.1705Exception: 'a' cannot be empty unless no samples are taken
258/500 [==============>...............] - ETA: 20:07 - rpn_cls: 4.8275 - rpn_regr: 2.3078 - detector_cls: 0.4506 - detector_regr: 0.1709Exception: 'a' cannot be empty unless no samples are taken
277/500 [===============>..............] - ETA: 18:25 - rpn_cls: 4.8210 - rpn_regr: 2.2516 - detector_cls: 0.4536 - detector_regr: 0.1668Exception: 'a' cannot be empty unless no samples are taken
291/500 [================>.............] - ETA: 17:15 - rpn_cls: 4.8115 - rpn_regr: 2.2050 - detector_cls: 0.4538 - detector_regr: 0.1683Exception: 'a' cannot be empty unless no samples are taken
318/500 [==================>...........] - ETA: 15:02 - rpn_cls: 4.8006 - rpn_regr: 2.1506 - detector_cls: 0.4504 - detector_regr: 0.1646Exception: 'a' cannot be empty unless no samples are taken
332/500 [==================>...........] - ETA: 13:58 - rpn_cls: 4.7915 - rpn_regr: 2.1160 - detector_cls: 0.4538 - detector_regr: 0.1627Exception: 'a' cannot be empty unless no samples are taken
348/500 [===================>..........] - ETA: 12:42 - rpn_cls: 4.7853 - rpn_regr: 2.0814 - detector_cls: 0.4532 - detector_regr: 0.1633Exception: 'a' cannot be empty unless no samples are taken
352/500 [====================>.........] - ETA: 12:24 - rpn_cls: 4.7789 - rpn_regr: 2.0717 - detector_cls: 0.4537 - detector_regr: 0.1621Exception: 'a' cannot be empty unless no samples are taken
354/500 [====================>.........] - ETA: 12:17 - rpn_cls: 4.7760 - rpn_regr: 2.0703 - detector_cls: 0.4566 - detector_regr: 0.1613Exception: 'a' cannot be empty unless no samples are taken
358/500 [====================>.........] - ETA: 12:01 - rpn_cls: 4.7718 - rpn_regr: 2.0621 - detector_cls: 0.4586 - detector_regr: 0.1610Exception: 'a' cannot be empty unless no samples are taken
362/500 [====================>.........] - ETA: 11:39 - rpn_cls: 4.7773 - rpn_regr: 2.0579 - detector_cls: 0.4583 - detector_regr: 0.1605Exception: 'a' cannot be empty unless no samples are taken
372/500 [=====================>........] - ETA: 10:46 - rpn_cls: 4.7745 - rpn_regr: 2.0369 - detector_cls: 0.4588 - detector_regr: 0.1654Exception: 'a' cannot be empty unless no samples are taken

Kindly let me know if more information is required to answer this.
Thanks in advance :)
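
For what it's worth, this message is the text of a NumPy ValueError that the training loop in train_frcnn.py catches and prints rather than letting it abort the epoch: np.random.choice raises it when asked to draw samples from an empty array, which can happen on images where the sampling step ends up with no candidate ROIs of one type. A quick illustration of the underlying exception (not the repo's code; the exact wording depends on your NumPy version):

import numpy as np

rois = np.array([], dtype=int)   # e.g. no negative (or no positive) samples for this image
try:
    np.random.choice(rois, 2)    # asking for 2 samples from an empty array
except ValueError as e:
    print(e)                     # -> 'a' cannot be empty unless no samples are taken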

No bounding boxes in resulting images

Hi,

I trained on my custom dataset of 31 images (of varying sizes) for 25 epochs with an epoch length of 10, and then tested the model on new images, but the model does not draw any bounding boxes on the test images.
What could possibly be wrong on my part? Kindly help me with this problem.

Thanks in advance

Config file

Hi,

I want to know what the following settings in the config.py file are used for:

self.std_scaling = 4.0
self.classifier_regr_std = [8.0, 8.0, 4.0, 4.0]

They don't seem to be used anywhere in the code.
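
Not a definitive answer, but these look like the usual Faster R-CNN trick of scaling the bounding-box regression targets so their magnitude is comparable to the classification terms: the targets are multiplied by these factors when the training labels are built, and the network's raw predictions are divided by the same factors before being decoded back into boxes. A sketch of the idea (illustrative helper names, not the repo's exact functions):

classifier_regr_std = [8.0, 8.0, 4.0, 4.0]   # per-coordinate scale for (tx, ty, tw, th)

def scale_targets(tx, ty, tw, th):
    # applied when building training labels, enlarging the regression targets
    sx, sy, sw, sh = classifier_regr_std
    return tx * sx, ty * sy, tw * sw, th * sh

def unscale_predictions(tx, ty, tw, th):
    # applied to the network's raw outputs before converting them back to boxes
    sx, sy, sw, sh = classifier_regr_std
    return tx / sx, ty / sy, tw / sw, th / sh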

Shape problem

Both on VOC and my data I get the following:
ValueError: Shape must be rank 1 but is rank 0 for 'bn_conv1/Reshape_4' (op: 'Reshape') with input shapes: [1,1,1,64], [].

Any ideas?

No 'setup.py' found

When I try to install the module I get the following error:

Collecting git+https://github.com/kbardool/keras-frcnn.git
Cloning https://github.com/kbardool/keras-frcnn.git to /tmp/pip-req-build-qrj79ubo
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "/root/anaconda3/envs/python_jupyter2/lib/python3.5/tokenize.py", line 454, in open
buffer = _builtin_open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-req-build-qrj79ubo/setup.py'

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-qrj79ubo/

Why is there no setup.py file?
What should I do?

current keras does crash

Regarding the problem described here: #5, we are currently sticking to an outdated version of Keras. Is this really reasonable?

Rank issues in ROI pooling Conv.py

When I try to run RoiPoolingConv.py, at line 94 it seems that y1, y2, x1 and x2 are tensors whose rank is different from what is expected.

The bug details:
ValueError: Shapes must be equal rank, but are 2 and 0
From merging shape 2 with other shapes. for 'lambda_11/strided_slice_4/stack_2' (op: 'Pack') with input shapes: [], [?,1], [?,1], [].

Do you know how to solve it?
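
Not tested against this exact code path, but the error says the indices being packed for the slice have mismatched ranks: two of them are [?, 1] tensors while the others are scalars, which suggests the ROI coordinates were taken from the ROI tensor without reducing them to scalar values. In the TensorFlow branch of RoiPoolingConv in this repo, the coordinates are cast to scalar int32 tensors before slicing; a sketch of that pattern (illustrative, not a drop-in patch):

import keras.backend as K
import tensorflow as tf

def crop_and_resize_roi(img, rois, roi_idx, pool_size):
    """Crop one ROI out of `img` and resize it to (pool_size, pool_size).

    `img` is (1, rows, cols, channels); `rois` is (1, num_rois, 4) as (x, y, w, h)."""
    # Each coordinate must be a scalar (rank-0) integer tensor before it can be used
    # as a slice index. Indexing rois[0, roi_idx, 0] gives a scalar, whereas a slice
    # such as rois[:, roi_idx, 0:1] gives a [?, 1] tensor and produces exactly this
    # "Shapes must be equal rank" error when the slice indices are packed together.
    x = K.cast(rois[0, roi_idx, 0], 'int32')
    y = K.cast(rois[0, roi_idx, 1], 'int32')
    w = K.cast(rois[0, roi_idx, 2], 'int32')
    h = K.cast(rois[0, roi_idx, 3], 'int32')

    region = img[:, y:y + h, x:x + w, :]
    return tf.image.resize_images(region, (pool_size, pool_size))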
