
object_detector_app's Introduction

Object-Detector-App

A real-time object recognition application using Google's TensorFlow Object Detection API and OpenCV.

Getting Started

  1. conda env create -f environment.yml
  2. python object_detection_app.py or python object_detection_multithreading.py. Optional arguments (default values shown; see the argparse sketch after this list):
    • Device index of the camera --source=0
    • Width of the frames in the video stream --width=480
    • Height of the frames in the video stream --height=360
    • Number of workers --num-workers=2
    • Size of the queue --queue-size=5
    • Get video from HLS stream rather than webcam '--stream-input=http://somertmpserver.com/hls/live.m3u8'
    • Send stream to livestreaming server '--stream-output=http://somertmpserver.com/hls/live.m3u8'
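
For orientation, here is a minimal argparse sketch of how these options could be declared. The flag names and defaults come from the list above; the dest names are assumptions for illustration, not necessarily the repository's exact code.

import argparse

parser = argparse.ArgumentParser(description='Real-time object detection')
parser.add_argument('-src', '--source', dest='video_source', type=int, default=0,
                    help='Device index of the camera.')
parser.add_argument('--width', dest='width', type=int, default=480,
                    help='Width of the frames in the video stream.')
parser.add_argument('--height', dest='height', type=int, default=360,
                    help='Height of the frames in the video stream.')
parser.add_argument('--num-workers', dest='num_workers', type=int, default=2,
                    help='Number of workers.')
parser.add_argument('--queue-size', dest='queue_size', type=int, default=5,
                    help='Size of the queue.')
parser.add_argument('--stream-input', dest='stream_in', type=str, default=None,
                    help='Read video from an HLS stream instead of the webcam.')
parser.add_argument('--stream-output', dest='stream_out', type=str, default=None,
                    help='URL of a livestreaming server to send the output to.')
args = parser.parse_args()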

Tests

pytest -vs utils/

Requirements

Notes

  • OpenCV 3.1 might crash on OSX after a while, which is why I had to switch back to version 3.0. See the open issue and solution here.
  • Moving the .read() part of the video stream into multiple child processes did not work. It was, however, possible to move it to a separate thread (a minimal sketch follows below).
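
To illustrate the threading note above, here is a minimal sketch (not the repository's actual implementation) of reading frames on a background thread while the main thread always consumes the latest frame:

import threading
import cv2

class ThreadedVideoStream:
    """Reads frames from a capture device on a background thread."""

    def __init__(self, src=0):
        self.stream = cv2.VideoCapture(src)
        self.grabbed, self.frame = self.stream.read()
        self.stopped = False

    def start(self):
        threading.Thread(target=self._update, daemon=True).start()
        return self

    def _update(self):
        # Keep calling .read() in the background so the main thread can grab
        # the most recent frame without blocking on the camera.
        while not self.stopped:
            self.grabbed, self.frame = self.stream.read()

    def read(self):
        return self.frame

    def stop(self):
        self.stopped = True
        self.stream.release()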

Copyright

See LICENSE for details. Copyright (c) 2017 Dat Tran.

object_detector_app's People

Contributors

danilodeveloper, datitran, samcrane8, zahidirfan


object_detector_app's Issues

can't run object-detector-app.py

I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:906] DMA: 0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:916] 0: Y
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce 940M, pci bus id: 0000:04:00.0)
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\executor.cc:594] Executor failed to create kernel. Invalid argument: NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]
[INFO/SpawnPoolWorker-3] process shutting down
[DEBUG/SpawnPoolWorker-3] running all "atexit" finalizers with priority >= 0
[DEBUG/SpawnPoolWorker-3] running the remaining "atexit" finalizers
Process SpawnPoolWorker-3:
Traceback (most recent call last):
File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1022, in _do_call
return fn(*args)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1004, in _run_fn
status, run_metadata)
File "D:\Anaconda3\lib\contextlib.py", line 66, in exit
next(self.gen)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\Anaconda3\lib\multiprocessing\process.py", line 249, in _bootstrap
self.run()
File "D:\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "D:\Anaconda3\lib\multiprocessing\pool.py", line 103, in worker
initializer(*initargs)
File "E:\tf\Object-Detector-App-master\object_detection_app.py", line 80, in worker
output_q.put(detect_objects(frame, sess, detection_graph))
File "E:\tf\Object-Detector-App-master\object_detection_app.py", line 50, in detect_objects
feed_dict={image_tensor: image_np_expanded})
File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 767, in run
run_metadata_ptr)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 965, in _run
feed_dict_string, options, run_metadata)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1015, in _do_run
target_list, options, run_metadata)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1035, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]

Caused by op 'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise', defined at:
File "", line 1, in
File "D:\Anaconda3\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "D:\Anaconda3\lib\multiprocessing\spawn.py", line 119, in _main
return self._bootstrap()
File "D:\Anaconda3\lib\multiprocessing\process.py", line 249, in _bootstrap
self.run()
File "D:\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "D:\Anaconda3\lib\multiprocessing\pool.py", line 103, in worker
initializer(*initargs)
File "E:\tf\Object-Detector-App-master\object_detection_app.py", line 72, in worker
tf.import_graph_def(od_graph_def, name='')
File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\importer.py", line 287, in import_graph_def
op_def=op_def)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2395, in create_op
original_op=self._default_original_op, op_def=op_def)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1264, in init
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]

[INFO/SpawnPoolWorker-3] process exiting with exitcode 1
[DEBUG/MainProcess] cleaning up worker 1
[Level 5/MainProcess] finalizer calling with args (692,) and kwargs {}
[DEBUG/MainProcess] added worker
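
For what it's worth, this "NodeDef mentions attr 'data_format' not in Op<...DepthwiseConv2dNative...>" error usually means the frozen graph was exported with a newer TensorFlow than the one running it (the installed build's DepthwiseConv2dNative op does not know the 'data_format' attr; roughly, a mid-2017 SSD-MobileNet export wants TF 1.2+). A hedged sketch of two things people try, assuming od_graph_def is the GraphDef the app loads before tf.import_graph_def; the attr-stripping part is a workaround, not a guaranteed fix:

import tensorflow as tf

print(tf.__version__)  # check whether the runtime is older than the graph's exporter

# Workaround sketch: drop the 'data_format' attr that the older op definition
# does not accept; NHWC is the default layout anyway.
for node in od_graph_def.node:
    if node.op == 'DepthwiseConv2dNative' and 'data_format' in node.attr:
        del node.attr['data_format']
tf.import_graph_def(od_graph_def, name='')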

object_detection

While executing the program in Anaconda (Spyder), this error always arises; can anyone help?
error: OpenCV(3.4.6) D:\Build\OpenCV\opencv-3.4.6\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'

Can't run object_detection_app.py

Hi guys, I tried to run object_detection_app.py but I got this error:
`phong@Storm:~/PycharmProjects/Object-Detector-App-master$ python object_detection_app.py
[DEBUG/MainProcess] created semlock with handle 139891181490176
[DEBUG/MainProcess] created semlock with handle 139891181486080
[DEBUG/MainProcess] created semlock with handle 139891180916736
[DEBUG/MainProcess] Queue._after_fork()
[DEBUG/MainProcess] created semlock with handle 139891180912640
[DEBUG/MainProcess] created semlock with handle 139891180908544
[DEBUG/MainProcess] created semlock with handle 139891180904448
[DEBUG/MainProcess] Queue._after_fork()
[DEBUG/MainProcess] created semlock with handle 139891180900352
[DEBUG/MainProcess] created semlock with handle 139891180896256
[DEBUG/MainProcess] created semlock with handle 139891180892160
[DEBUG/MainProcess] created semlock with handle 139891180888064
[DEBUG/MainProcess] added worker
[DEBUG/PoolWorker-2] Queue._after_fork()
[DEBUG/PoolWorker-2] Queue._after_fork()
[INFO/PoolWorker-2] child process calling self.run()
[DEBUG/MainProcess] added worker
[DEBUG/PoolWorker-3] Queue._after_fork()
[DEBUG/PoolWorker-3] Queue._after_fork()
[INFO/PoolWorker-3] child process calling self.run()
[DEBUG/MainProcess] Queue._start_thread()
[DEBUG/MainProcess] doing self._thread.start()
[DEBUG/MainProcess] starting thread to feed data to pipe
[DEBUG/MainProcess] ... done self._thread.start()
2017-06-28 11:10:45.146711: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-28 11:10:45.146755: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-28 11:10:45.146776: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-28 11:10:45.254015: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-28 11:10:45.254073: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-28 11:10:45.254082: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-28 11:10:45.300271: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-06-28 11:10:45.301197: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 1070
major: 6 minor: 1 memoryClockRate (GHz) 1.7085
pciBusID 0000:02:00.0
Total memory: 7.92GiB
Free memory: 7.43GiB
2017-06-28 11:10:45.301219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-06-28 11:10:45.301227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y
2017-06-28 11:10:45.301238: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:02:00.0)
2017-06-28 11:10:45.390544: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-06-28 11:10:45.390759: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 1070
major: 6 minor: 1 memoryClockRate (GHz) 1.7085
pciBusID 0000:02:00.0
Total memory: 7.92GiB
Free memory: 302.94MiB
2017-06-28 11:10:45.390785: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-06-28 11:10:45.390793: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y
2017-06-28 11:10:45.390824: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:02:00.0)
2017-06-28 11:10:46.907853: E tensorflow/stream_executor/cuda/cuda_blas.cc:365] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2017-06-28 11:10:46.907894: W tensorflow/stream_executor/stream.cc:1601] attempting to perform BLAS operation using StreamExecutor without BLAS support
[INFO/PoolWorker-2] process shutting down
[DEBUG/PoolWorker-2] running all "atexit" finalizers with priority >= 0
[DEBUG/PoolWorker-2] running the remaining "atexit" finalizers
Process PoolWorker-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 97, in worker
initializer(*initargs)
File "object_detection_app.py", line 79, in worker
output_q.put(detect_objects(frame, sess, detection_graph))
File "object_detection_app.py", line 49, in detect_objects
feed_dict={image_tensor: image_np_expanded})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 997, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1132, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1152, in _do_call
raise type(e)(node_def, op, message)
InternalError: Blas SGEMM launch failed : m=22500, n=64, k=32
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_pointwise/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_pointwise/weights/read)]]
[[Node: Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Minimum_31/_1185 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_7609_Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Minimum_31", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"]]

Caused by op u'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_pointwise/convolution', defined at:
File "object_detection_app.py", line 107, in
pool = Pool(args.num_workers, worker, (input_q, output_q))
File "/usr/lib/python2.7/multiprocessing/init.py", line 232, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 159, in init
self._repopulate_pool()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 223, in _repopulate_pool
w.start()
File "/usr/lib/python2.7/multiprocessing/process.py", line 130, in start
self._popen = Popen(self)
File "/usr/lib/python2.7/multiprocessing/forking.py", line 126, in init
code = process_obj._bootstrap()
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 97, in worker
initializer(*initargs)
File "object_detection_app.py", line 71, in worker
tf.import_graph_def(od_graph_def, name='')
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 311, in import_graph_def
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1269, in init
self._traceback = _extract_stack()

InternalError (see above for traceback): Blas SGEMM launch failed : m=22500, n=64, k=32
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_pointwise/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_pointwise/weights/read)]]
[[Node: Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Minimum_31/_1185 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_7609_Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Minimum_31", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"]]

[INFO/PoolWorker-2] process exiting with exitcode 1
[DEBUG/MainProcess] cleaning up worker 0
[DEBUG/MainProcess] added worker
[DEBUG/PoolWorker-4] Queue._after_fork()
[DEBUG/PoolWorker-4] Queue._after_fork()
[INFO/PoolWorker-4] child process calling self.run()
2017-06-28 11:10:49.761792: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-28 11:10:49.761841: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-28 11:10:49.761849: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-28 11:10:49.871478: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-06-28 11:10:49.871800: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 1070
major: 6 minor: 1 memoryClockRate (GHz) 1.7085
pciBusID 0000:02:00.0
Total memory: 7.92GiB
Free memory: 7.26GiB
2017-06-28 11:10:49.871817: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-06-28 11:10:49.871824: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y
2017-06-28 11:10:49.871836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:02:00.0)`

I can see my webcam working, but no window opens after that.
One more weird thing is that I can run object_detection_multiplayer.py with no problem.
Please help me fix it!

Why doesn't the fps improve?

Hi:
I downloaded your project in June, ran it on my computer and got an fps of 2.2. Today I found that you updated it on Aug 17 with the note "Change folder structure, finish the multithreading example to improve fps on low performance machines". So I ran the new project, but the fps is still 2.2. Is there something else I could do to improve the fps, or is something wrong with what I am doing? I run this command: $ python object_detection_multithreading.py --source=0

ResolvePackageNotFound

I am trying to install these packages from the Anaconda Navigator CMD Prompt, but the issue still persists.

ResolvePackageNotFound:
- vc==14.1=h21ff451_3
- openssl==1.1.1=he774522_0
- vs2015_runtime==15.5.2=3

object_detection_app.py hangs

Output looks the same as example until the line:

[INFO/ForkPoolWorker-2] process shutting down

Then getting the traceback:

Traceback (most recent call last):
File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/multiprocessing/pool.py", line 103, in worker
initializer(*initargs)
File "object_detection_app.py", line 79, in worker
output_q.put(detect_objects(frame, sess, detection_graph))
File "object_detection_app.py", line 49, in detect_objects
feed_dict={image_tensor: image_np_expanded})
File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 968, in _run
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/site-packages/numpy/core/numeric.py", line 531, in asarray
return array(a, dtype, copy=False, order=order)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
[INFO/ForkPoolWorker-2] process exiting with exitcode 1

I'm unsure of the cause of the issue. I am using OpenCV 3.1 instead of 3.0 and replaced:

- menpo::tbb=4.3_20141023=0

with:

- menpo::tbb

although I don't see how they would be related.

Any intuition as to what the issue could be?

Thanks.

Full output:
(object-detection) daniel@daniel-Satellite-C55Dt-A:~/bin/Object-Detector-App$ python object_detection_app.py --source=0 --num-workers=2 [DEBUG/MainProcess] created semlock with handle 139755691372544 [DEBUG/MainProcess] created semlock with handle 139755691368448 [DEBUG/MainProcess] created semlock with handle 139755691364352 [DEBUG/MainProcess] Queue._after_fork() [DEBUG/MainProcess] created semlock with handle 139755691360256 [DEBUG/MainProcess] created semlock with handle 139755691356160 [DEBUG/MainProcess] created semlock with handle 139755691352064 [DEBUG/MainProcess] Queue._after_fork() [DEBUG/MainProcess] created semlock with handle 139755691347968 [DEBUG/MainProcess] created semlock with handle 139755691343872 [DEBUG/MainProcess] created semlock with handle 139755691339776 [DEBUG/MainProcess] created semlock with handle 139755691335680 [DEBUG/MainProcess] added worker [DEBUG/ForkPoolWorker-1] Queue._after_fork() [DEBUG/ForkPoolWorker-1] Queue._after_fork() [INFO/ForkPoolWorker-1] child process calling self.run() [DEBUG/MainProcess] added worker [DEBUG/ForkPoolWorker-2] Queue._after_fork() [DEBUG/ForkPoolWorker-2] Queue._after_fork() [INFO/ForkPoolWorker-2] child process calling self.run() [DEBUG/MainProcess] Queue._start_thread() [DEBUG/MainProcess] doing self._thread.start() [DEBUG/MainProcess] starting thread to feed data to pipe [DEBUG/MainProcess] ... done self._thread.start() 2017-08-02 16:02:44.171442: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-02 16:02:44.171442: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-02 16:02:44.171513: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-02 16:02:44.171516: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-02 16:02:44.171542: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-08-02 16:02:44.171554: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 
[INFO/ForkPoolWorker-2] process shutting down [DEBUG/ForkPoolWorker-2] running all "atexit" finalizers with priority >= 0 [DEBUG/ForkPoolWorker-2] running the remaining "atexit" finalizers Process ForkPoolWorker-2: Traceback (most recent call last): File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap self.run() File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/multiprocessing/pool.py", line 103, in worker initializer(*initargs) File "object_detection_app.py", line 79, in worker output_q.put(detect_objects(frame, sess, detection_graph)) File "object_detection_app.py", line 49, in detect_objects feed_dict={image_tensor: image_np_expanded}) File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 789, in run run_metadata_ptr) File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 968, in _run np_val = np.asarray(subfeed_val, dtype=subfeed_dtype) File "/home/daniel/anaconda2/envs/object-detection/lib/python3.5/site-packages/numpy/core/numeric.py", line 531, in asarray return array(a, dtype, copy=False, order=order) TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType' [INFO/ForkPoolWorker-2] process exiting with exitcode 1 [DEBUG/MainProcess] cleaning up worker 1 [Level 5/MainProcess] finalizer calling <built-in function close> with args (13,) and kwargs {} [DEBUG/MainProcess] added worker [DEBUG/ForkPoolWorker-3] Queue._after_fork() [DEBUG/ForkPoolWorker-3] Queue._after_fork() [INFO/ForkPoolWorker-3] child process calling self.run() 2017-08-02 16:02:54.370080: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-02 16:02:54.370152: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-02 16:02:54.370179: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
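
This particular TypeError (np.asarray on a None feed) typically means the frame handed to sess.run() was None, i.e. the capture returned nothing. A minimal, self-contained sketch of checking the camera output before it ever reaches the detector; it is not the repository's code, just an illustration of the guard:

import cv2

video_capture = cv2.VideoCapture(0)
while True:
    grabbed, frame = video_capture.read()
    if not grabbed or frame is None:
        # The capture returned nothing (wrong device index, unsupported pixel
        # format, busy device). Feeding None into sess.run() is what produces
        # the "int() argument must be a string ... NoneType" error above.
        print('No frame received from the camera, check --source')
        break
    # ... hand the frame to the detection worker here ...
video_capture.release()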

Make a web demo?

Thank you for your great work.
I did all the steps and got some errors, but these were solved anyway.
Do you have any idea to make a web demo for object detection app?

Less accurate

I trained the model to detect various shapes, say triangles and circles. The detection is less accurate. I used around 5000 images per class; the detection is not accurate and it is jumpy.

Hold up... Are these tensorflow models trained in RGB or BGR?

I have reason to believe that the tensorflow models have been trained with image arrays in RGB format, but this codebase loads images using OpenCV, which defaults to BGR.

This issue is plaguing me at work on a different project, and I want to confirm: Should I feed it an RGB image or a BGR image?

If anyone knows, this would be quite helpful!
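
If it helps: OpenCV's VideoCapture delivers frames in BGR channel order, while the TensorFlow Object Detection API models are trained on RGB, so frames are normally converted before inference. A minimal, self-contained sketch of that conversion:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ret, frame_bgr = cap.read()          # OpenCV gives BGR channel order
cap.release()

if ret:
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)  # RGB is what the model expects
    image_np_expanded = np.expand_dims(frame_rgb, axis=0)   # shape (1, H, W, 3) for feed_dict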

About Object Detection Demo

There is no corresponding Object Detection demo file for the Quick Start on your GitHub.

I copied all the code from the page (https://github.com/datitran/Object-Detector-App/blob/master/object_detection/object_detection_tutorial.ipynb) and ran it.

Then the following error occurred:

[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1022, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1004, in _run_fn
status, run_metadata)
File "/usr/lib/python3.5/contextlib.py", line 66, in exit
next(self.gen)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/wangzishenz/PycharmProjects/Object-Detector-App-master/object_detection/tf_qstart.py", line 97, in
feed_dict={image_tensor: image_np_expanded})
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 767, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 965, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1015, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1035, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]

Caused by op 'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise', defined at:
File "/home/wangzishenz/PycharmProjects/Object-Detector-App-master/object_detection/tf_qstart.py", line 55, in
tf.import_graph_def(od_graph_def, name='')
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 287, in import_graph_def
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1264, in init
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]


I would like to ask whether the code on that page is complete. If it is complete, where is the problem?
If the code is not complete, please upload the full source code to GitHub.

Is our technical approach the problem? I mean, do we even need to run the demo file, or can we start directly from the setup section?

Looking forward to your reply ^-^

Error while running multithreading (object_detection_multilayer.py)

PicklingError Traceback (most recent call last)
in ()
153 child_process.daemon = False
154
--> 155 main_process.start()
156 child_process.start()
157

~\Anaconda2\envs\tensorflow\lib\multiprocessing\process.py in start(self)
103 'daemonic processes are not allowed to have children'
104 _cleanup()
--> 105 self._popen = self._Popen(self)
106 self._sentinel = self._popen.sentinel
107 _children.add(self)

~\Anaconda2\envs\tensorflow\lib\multiprocessing\context.py in _Popen(process_obj)
210 @staticmethod
211 def _Popen(process_obj):
--> 212 return _default_context.get_context().Process._Popen(process_obj)
213
214 class DefaultContext(BaseContext):

~\Anaconda2\envs\tensorflow\lib\multiprocessing\context.py in _Popen(process_obj)
311 def _Popen(process_obj):
312 from .popen_spawn_win32 import Popen
--> 313 return Popen(process_obj)
314
315 class SpawnContext(BaseContext):

~\Anaconda2\envs\tensorflow\lib\multiprocessing\popen_spawn_win32.py in init(self, process_obj)
64 try:
65 reduction.dump(prep_data, to_child)
---> 66 reduction.dump(process_obj, to_child)
67 finally:
68 context.set_spawning_popen(None)

~\Anaconda2\envs\tensorflow\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
57 def dump(obj, file, protocol=None):
58 '''Replacement for pickle.dump() using ForkingPickler.'''
---> 59 ForkingPickler(file, protocol).dump(obj)
60
61 #

PicklingError: Can't pickle <function main_process at 0x000002D485859840>: it's not the same object as main.main_process

I am running this in a virtual environment with all the required dependencies; any information will be helpful.
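
On Windows, multiprocessing uses the spawn start method, so the target function has to be importable at module level and process start-up has to sit under an if __name__ == '__main__' guard; otherwise you get exactly this PicklingError. A minimal sketch of that structure (the function and queue names here are placeholders, not the repository's):

import multiprocessing as mp

def main_process(input_q, output_q):
    # Must be defined at module top level so spawn can re-import and pickle it.
    while True:
        item = input_q.get()
        if item is None:
            break
        output_q.put(item)

if __name__ == '__main__':          # required on Windows (spawn start method)
    input_q = mp.Queue()
    output_q = mp.Queue()
    p = mp.Process(target=main_process, args=(input_q, output_q))
    p.start()
    input_q.put('frame')
    input_q.put(None)
    p.join()
    print(output_q.get())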

cv2.error: D:\Build\OpenCV\opencv-3.3.1\modules\imgproc\src\color.cpp:11016: error: (-215) scn == 3 || scn == 4 in function cv::cvtColor

When I use your code, I find there is an error.
First, I modified ['-src', '--source', dest='video_source', type=int, ……]
to ['-src', '--source', dest='video_source', type=str, ……].
Then I used the command "python object_detection_multithreading.py -src E:C.mp4", but the error appeared.
E:\test_opencv\object_detector_app-master_PB>python object_detection_multithreading.py -src E:C.mp4
warning: Error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:808)
warning: E:C.mp4 (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:809)
[INFO] elapsed time: 0.00
[INFO] elapsed time: 0.00
[INFO] elapsed time: 0.00
[INFO] elapsed time: 0.00
[INFO] elapsed time: 0.00

OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cv::cvtColor, file D:\Build\OpenCV\opencv-3.3.1\modules\imgproc\src\color.cpp, line 11016
Exception in thread Thread-1:
Traceback (most recent call last):
File "E:\Anaconda3\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "E:\Anaconda3\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "object_detection_multithreading.py", line 77, in worker
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
cv2.error: D:\Build\OpenCV\opencv-3.3.1\modules\imgproc\src\color.cpp:11016: error: (-215) scn == 3 || scn == 4 in function cv::cvtColor

How to fix it?
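
That assertion (scn == 3 || scn == 4) means cvtColor received an empty image; the warnings above show the video file was never opened, so every frame is empty. A minimal, self-contained sketch that checks the capture and the frame before converting (the path below is only a placeholder):

import cv2

cap = cv2.VideoCapture('path/to/video.mp4')   # placeholder path
if not cap.isOpened():
    raise SystemExit('Could not open the video file, check the path')

ret, frame = cap.read()
if not ret or frame is None:
    raise SystemExit('No frame decoded from the video file')

# Only convert once we know the frame is a real 3-channel image.
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
cap.release()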

int() argument must be a string

I followed your installation instructions and found this error:


name: GeForce GTX 1060
major: 6 minor: 1 memoryClockRate (GHz) 1.733
pciBusID 0000:01:00.0
Total memory: 2.94GiB
Free memory: 2.42GiB
2017-08-27 18:48:40.677002: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
2017-08-27 18:48:40.677008: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y 
2017-08-27 18:48:40.677018: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0)
[INFO/ForkPoolWorker-2] process shutting down
[DEBUG/ForkPoolWorker-2] running all "atexit" finalizers with priority >= 0
[DEBUG/ForkPoolWorker-2] running the remaining "atexit" finalizers
Process ForkPoolWorker-2:
Traceback (most recent call last):
  File "/storage/anaconda3/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/storage/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/storage/anaconda3/lib/python3.6/multiprocessing/pool.py", line 103, in worker
    initializer(*initargs)
  File "object_detection_app.py", line 83, in worker
    output_q.put(detect_objects(frame, sess, detection_graph))
  File "object_detection_app.py", line 49, in detect_objects
    feed_dict={image_tensor: image_np_expanded})
  File "/storage/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/storage/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1093, in _run
    np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
  File "/storage/anaconda3/lib/python3.6/site-packages/numpy/core/numeric.py", line 531, in asarray
    return array(a, dtype, copy=False, order=order)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

This is really weird; I can't seem to find out why this is happening. TensorFlow is installed correctly, my libraries are all upgraded, and I have protobuf.

Would you happen to have any idea? :)

environment.yml ResolvePackageNotFound again

I hope I just made a stupid mistake...
I installed Anaconda, TensorFlow and OpenCV.

I got this:

`c:\repos\object_detector_app>conda env create -f environment.yml
Solving environment: failed

ResolvePackageNotFound:

  • libtiff==4.0.6=3
  • icu==54.1=0
  • freetype==2.5.5=2
  • sqlite==3.13.0=0
  • tk==8.5.18=0
  • jbig==2.1=0
  • tbb==4.3_20141023=0
  • openssl==1.0.2l=0
  • xz==5.2.2=1
  • qt==5.6.2=2
  • libpng==1.6.27=0
  • jpeg==9b=0
  • readline==6.2=2
  • zlib==1.2.8=3
  • opencv3==3.0.0=py35_0`

I tried that: #41
It didn't do much; I tried to install the packages manually, but it doesn't change anything.

What could I have done wrong? I'm on W10 x64.

data_flow_ops.py, line 91, in _as_name_list raises ValueError when run with Python 3.5, but everything runs well with Python 2.7

When I run train.py with the faster_rcnn_resnet101 model using the Python 2.7 interpreter on my customized data set, it works well. However, when I run the same code in the same context and with the same data set using the Python 3.5 interpreter, it reports the error below:
(attached screenshot: tf_35-train-fail-2)
I am running with the setup below:
GeForce GTX 1070
Ubuntu 16.04.2
tensorflow 1.4.1
and below is my config:
model {
faster_rcnn {
num_classes: 1
image_resizer {
fixed_shape_resizer {
height: 400
width: 400
}
}
feature_extractor {
type: 'faster_rcnn_resnet101'
first_stage_features_stride: 16
}
first_stage_anchor_generator {
grid_anchor_generator {
scales: [0.25, 0.5, 1.0, 2.0]
aspect_ratios: [0.5, 1.0, 2.0]
height_stride: 16
width_stride: 16
}
}
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
use_dropout: false
dropout_keep_probability: 1.0
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0002
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 300
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
}
}

train_config: {
batch_size: 1
optimizer {
momentum_optimizer: {
learning_rate: {
manual_step_learning_rate {
initial_learning_rate: 0.0001
schedule {
step: 0
learning_rate: .0001
}
schedule {
step: 5000
learning_rate: .00001
}
schedule {
step: 7000
learning_rate: .000001
}
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
batch_queue_capacity: 2
prefetch_queue_capacity: 2
fine_tune_checkpoint: "models/model.ckpt"
from_detection_checkpoint: true
num_steps: 2000
}

train_input_reader: {
tf_record_input_reader {
input_path: "data/train.record"
}
label_map_path: "data/label_map.pbtxt"
}

eval_config: {
num_examples: 272
num_visualizations: 272
}

eval_input_reader: {
tf_record_input_reader {
input_path: "data/test.record"
}
label_map_path: "data/label_map.pbtxt"
shuffle: true
}

@datitran Can you help with this?

How to switch back to OpenCV 3.0

Note: If you are on Mac OSX like me and you’re using OpenCV 3.1, there might be a chance that OpenCV’s VideoCapture crashes after a while. There is already an issue filed. Switching back to OpenCV 3.0 solved the issue though.

Above is your note.
My environment is Win7, Anaconda 4.2, OpenCV 3.1.
Every time I run "python object_detection_app.py", the computer crashes.
You have noted that we need to "switch back to OpenCV 3.0" to solve this problem.

My question is below:
I used this command to install OpenCV: "conda install -c menpo opencv3", and it automatically installs version 3.1.
So, how do I switch back to OpenCV 3.0 (my environment is Win7, Anaconda 4.2, OpenCV 3.1)?

Thank you very much

Print the detected objects on console

How can I print the detected objects to the console in real time? Could you please provide a code snippet? I have embedded it in my drone and now want to print the detected objects in real time. Thanks.
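
Not the repository's code, but a minimal sketch of how detections are usually printed, assuming you already have the scores/classes arrays from sess.run() and the category_index built by label_map_util (as in object_detection_app.py):

import numpy as np

def print_detections(classes, scores, category_index, min_score=0.5):
    """Print class name and confidence for every detection above a threshold."""
    for cls, score in zip(np.squeeze(classes).astype(np.int32), np.squeeze(scores)):
        if score >= min_score and cls in category_index:
            print('{}: {:.0%}'.format(category_index[cls]['name'], score))

# Example with dummy arrays shaped like the detector's output:
classes = np.array([[1, 3]])
scores = np.array([[0.91, 0.42]])
category_index = {1: {'id': 1, 'name': 'person'}, 3: {'id': 3, 'name': 'car'}}
print_detections(classes, scores, category_index)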

protoc on osx

Hi,
I've made my way through this blog post and the custom object detection one, and when trying to train my model it looks like Google Cloud is failing because I didn't run protoc in tensorflow/models .. just as you mentioned.

Should protoc be installed in the environment with the protobuf package??

thanks and again, awesome posts!
Chris

Tensorflow consumes all GPU memory

Hi,

For some reason I can only run 1 worker, because tensorflow automatically assigns all free memory to the first one. Also it seems like the object detection is not the bottleneck, because if I change the model to a more complex one the fps does not decrease.
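
By default, TF 1.x grabs nearly all free GPU memory in the first session it opens. A minimal sketch of capping that per worker with the standard TF 1.x options (allow_growth, or a fixed memory fraction), which is how several workers can share one GPU; the fraction value here is just an example:

import tensorflow as tf

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3,  # each worker gets ~30% of the card
                            allow_growth=True)                     # and only allocates as needed
config = tf.ConfigProto(gpu_options=gpu_options)

detection_graph = tf.Graph()
with detection_graph.as_default():
    sess = tf.Session(graph=detection_graph, config=config)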

Image unsupported by OpenCV

Hi!

So, first, I tried to copy your environment:

conda env create -f environment.yml
Using Anaconda API: https://api.anaconda.org
Fetching package metadata .............


NoPackagesFoundError: Package missing in current linux-64 channels: 
  - tbb 4.3_20141023 0 

After that did not work, I tried to set it up on my own.

conda list
# packages in environment at /home/dremet/anaconda3/envs/object_recog:
#
freetype                  2.5.5                         2  
funcsigs                  0.4                      py35_0  
hdf5                      1.8.17                        1  
jbig                      2.1                           0  
jpeg                      8d                            2  
lcms                      1.19                          0  
libpng                    1.6.27                        0  
libprotobuf               3.2.0                         0  
libtiff                   4.0.6                         2  
mkl                       2017.0.1                      0  
mock                      2.0.0                    py35_0  
numpy                     1.12.1                   py35_0  
opencv                    3.1.0               np112py35_1  
openssl                   1.0.2l                        0  
pbr                       1.10.0                   py35_0  
pillow                    3.4.2                    py35_0  
pip                       9.0.1                    py35_1  
protobuf                  3.2.0                    py35_0  
python                    3.5.3                         1  
readline                  6.2                           2  
setuptools                27.2.0                   py35_0  
six                       1.10.0                   py35_0  
sqlite                    3.13.0                        0  
tensorflow                1.1.0               np112py35_0  
tk                        8.5.18                        0  
werkzeug                  0.12.2                   py35_0  
wheel                     0.29.0                   py35_0  
xz                        5.2.2                         1  
zlib                      1.2.8                         3 

Now, when I run the script, I get this output (until I interrupted with my keyboard):

python object_detection_app.py --source=0
[DEBUG/MainProcess] created semlock with handle 140384818184192
[DEBUG/MainProcess] created semlock with handle 140384818180096
[DEBUG/MainProcess] created semlock with handle 140384818176000
[DEBUG/MainProcess] Queue._after_fork()
[DEBUG/MainProcess] created semlock with handle 140384818171904
[DEBUG/MainProcess] created semlock with handle 140384818167808
[DEBUG/MainProcess] created semlock with handle 140384818163712
[DEBUG/MainProcess] Queue._after_fork()
[DEBUG/MainProcess] created semlock with handle 140384818159616
[DEBUG/MainProcess] created semlock with handle 140384818155520
[DEBUG/MainProcess] created semlock with handle 140384818151424
[DEBUG/MainProcess] created semlock with handle 140384818147328
[DEBUG/MainProcess] added worker
[DEBUG/ForkPoolWorker-2] Queue._after_fork()
[DEBUG/ForkPoolWorker-2] Queue._after_fork()
[INFO/ForkPoolWorker-2] child process calling self.run()
[DEBUG/MainProcess] added worker
[DEBUG/ForkPoolWorker-3] Queue._after_fork()
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Unable to stop the stream: Device or resource busy
[DEBUG/ForkPoolWorker-3] Queue._after_fork()
[INFO/ForkPoolWorker-3] child process calling self.run()
[DEBUG/MainProcess] Queue._start_thread()
[DEBUG/MainProcess] doing self._thread.start()
[DEBUG/MainProcess] starting thread to feed data to pipe
[DEBUG/MainProcess] ... done self._thread.start()
2017-06-26 09:50:05.562006: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-26 09:50:05.562039: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-26 09:50:05.562045: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
[INFO/ForkPoolWorker-3] process shutting down
[DEBUG/ForkPoolWorker-3] running all "atexit" finalizers with priority >= 0
[DEBUG/ForkPoolWorker-3] running the remaining "atexit" finalizers
Process ForkPoolWorker-3:
Traceback (most recent call last):
  File "/home/dremet/anaconda3/envs/object_recog/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/home/dremet/anaconda3/envs/object_recog/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/dremet/anaconda3/envs/object_recog/lib/python3.5/multiprocessing/pool.py", line 103, in worker
    initializer(*initargs)
  File "object_detection_app.py", line 79, in worker
    output_q.put(detect_objects(frame, sess, detection_graph))
  File "object_detection_app.py", line 49, in detect_objects
    feed_dict={image_tensor: image_np_expanded})
  File "/home/dremet/anaconda3/envs/object_recog/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 778, in run
    run_metadata_ptr)
  File "/home/dremet/anaconda3/envs/object_recog/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 954, in _run
    np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
  File "/home/dremet/anaconda3/envs/object_recog/lib/python3.5/site-packages/numpy/core/numeric.py", line 531, in asarray
    return array(a, dtype, copy=False, order=order)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
[INFO/ForkPoolWorker-3] process exiting with exitcode 1
[DEBUG/MainProcess] cleaning up worker 1
[Level 5/MainProcess] finalizer calling <built-in function close> with args (13,) and kwargs {}
[DEBUG/MainProcess] added worker
[DEBUG/ForkPoolWorker-4] Queue._after_fork()
[DEBUG/ForkPoolWorker-4] Queue._after_fork()
[INFO/ForkPoolWorker-4] child process calling self.run()
2017-06-26 09:50:05.625579: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-26 09:50:05.625605: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-26 09:50:05.625613: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-26 09:50:14.247294: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-26 09:50:14.247325: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-26 09:50:14.247332: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.

Help is very appreciated.
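
The key line above is "VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV": the capture then yields no frames, and the later TypeError is the same None-frame symptom as in the other issues. A minimal sketch for probing the camera before launching any workers, optionally forcing an MJPG pixel format (whether your camera supports MJPG is an assumption):

import cv2

cap = cv2.VideoCapture(0)
# Some V4L2 webcams only deliver a format OpenCV can decode when MJPG is requested.
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 480)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 360)

ret, frame = cap.read()
print('camera opened:', cap.isOpened(), 'frame received:', ret)
cap.release()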

Getting issue while running code

I'm getting the below error while running the code:

/usr/bin/python3.5 /home/deeplearning/Object-Detector-App/object_detection_app.py
Traceback (most recent call last):
File "/home/deeplearning/Object-Detector-App/object_detection_app.py", line 26, in
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
File "/home/deeplearning/Object-Detector-App/object_detection/utils/label_map_util.py", line 107, in load_labelmap
text_format.Merge(label_map_string, label_map)
File "/usr/local/lib/python3.5/dist-packages/google/protobuf/text_format.py", line 472, in Merge
text.split('\n'),
TypeError: a bytes-like object is required, not 'str'

Error while using 'object_detection_multilayer.py' with GPU on Tegra X1

`I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:874] ARM has no NUMA node, hardcoding to return zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: NVIDIA Tegra X1
major: 5 minor: 3 memoryClockRate (GHz) 0.072
pciBusID 0000:00:00.0
Total memory: 3.90GiB
Free memory: 1.60GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0)
E tensorflow/core/common_runtime/executor.cc:594] Executor failed to create kernel. Invalid argument: NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]
Process Process-2:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1022, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1004, in _run_fn
status, run_metadata)
File "/usr/lib/python3.5/contextlib.py", line 66, in exit
next(self.gen)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "object_detection_multilayer.py", line 111, in child_process
image2 = detect_objects(image, sess, detection_graph)
File "object_detection_multilayer.py", line 48, in detect_objects
feed_dict={image_tensor: image_np_expanded})
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 767, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 965, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1015, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1035, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]

Caused by op 'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise', defined at:
File "object_detection_multilayer.py", line 126, in
child_process.start()
File "/usr/lib/python3.5/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/usr/lib/python3.5/multiprocessing/context.py", line 212, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/usr/lib/python3.5/multiprocessing/context.py", line 267, in _Popen
return Popen(process_obj)
File "/usr/lib/python3.5/multiprocessing/popen_fork.py", line 20, in init
self._launch(process_obj)
File "/usr/lib/python3.5/multiprocessing/popen_fork.py", line 74, in _launch
code = process_obj._bootstrap()
File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "object_detection_multilayer.py", line 105, in child_process
tf.import_graph_def(od_graph_def, name='')
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 288, in import_graph_def
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1226, in init
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)
[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]
`
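
This 'data_format' NodeDef error typically means the frozen graph was exported with a newer TensorFlow than the one running it (newer versions added the data_format attr to DepthwiseConv2dNative), so upgrading TensorFlow to match is the usual fix. The sketch below shows a possible workaround of stripping the unknown attribute before import; the checkpoint path is an assumption and this is not an officially supported fix.

```python
# Hedged sketch: drop the 'data_format' attr the older runtime does not recognise.
# Upgrading TensorFlow to the version that exported the graph is the cleaner fix.
import tensorflow as tf

PATH_TO_CKPT = 'object_detection/ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb'  # assumed path

print('Runtime TensorFlow version:', tf.__version__)

graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
    graph_def.ParseFromString(fid.read())

for node in graph_def.node:
    if node.op == 'DepthwiseConv2dNative' and 'data_format' in node.attr:
        del node.attr['data_format']  # attribute added by newer exporters

tf.import_graph_def(graph_def, name='')
```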

close

solved it myself. please close.

Object-Detector-App import cv2 No module named 'cv2'

Hi datitran. I have the TensorFlow object detection demo and sample code working on Ubuntu 16.04, so I thought I would try your project to improve performance. However, I've run into two errors:

  1. menpo::tbb=4.3_20141023=0 in environment.yml generates this error:

Error: NoPackagesFoundError: Package missing in current linux-64 channels:

  • tbb 4.3_20141023 0
    I "fixed" this by changing the line to "menpo::tbb" and removing the version pin. I have no idea if this is good or bad, but the environment appeared to install without errors.

  2. After installing the environment as in 1. above, I tried to run "python object_detection_app.py", but I got the following error:
    File "object_detection_app.py", line 2, in <module>
    import cv2
    ModuleNotFoundError: No module named 'cv2'

If you have any suggestions how to resolve this, I would appreciate it very much. Thank you.

Can't run python object_detection_app.py

When I try
conda env create -f environment.yml

I receive

Using Anaconda API: https://api.anaconda.org

SpecNotFound: Can't process without a name

When I run
python object_detection_app.py
I get the following output (screenshot "obj").
After stopping the program, I see the following output repeated 3-4 times when I try to close the command prompt (screenshot "obj2").

Please let me know if this is not a bug.

Cheers

Will only run with 1 worker

When I am running the script, I can only get it to execute with 1 worker. 2 or more workers lead to an endless loop of debug messages. Do I need any prerequisites to run with more workers?

Protobuf Windows installation

This issue is not related to your project, but I would really appreciate it if you could provide instructions for installing protobuf on Windows.
Thank you

environment.yml for conda linux.

Hi - this is a fun little project. You mentioned that the .yml is for MacOS currently.
I get various package version conflicts on Linux, such as:
UnsatisfiableError: The following specifications were found to be in conflict:

  • jpeg 9b 0
  • qt 5.6.2 2 -> jpeg 8*
    Use "conda info <package>" to see the dependencies for each package.

Does anyone know how to get it to run on Linux? Would be great.

Getting the following error while running the object_detection.py file

output_q.put(detect_objects(frame, sess, detection_graph))
File "test.py", line 48, in detect_objects
feed_dict={image_tensor: image_np_expanded})
File "/home/renjith/venv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/home/renjith/venv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1093, in _run
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
File "/home/renjith/venv/lib/python3.5/site-packages/numpy/core/numeric.py", line 531, in asarray
return array(a, dtype, copy=False, order=order)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
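
This NoneType error usually means cv2.VideoCapture returned an empty frame (wrong device index, or the camera could not be opened), so None ends up in the feed_dict. Below is a minimal sketch of guarding the read before anything is passed to the session; device index 0 is an assumption (see the --source option).

```python
# Hedged sketch: verify the capture device and refuse to pass an empty frame to
# sess.run(); an empty frame is what produces the NoneType error downstream.
import cv2

video_capture = cv2.VideoCapture(0)  # assumed device index
if not video_capture.isOpened():
    raise RuntimeError('Could not open the video source; check the --source index')

ret, frame = video_capture.read()
if not ret or frame is None:
    raise RuntimeError('Camera returned an empty frame')

video_capture.release()
```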

Why use Process and Pool?

I see that Process and Pool are used in 'object_detection_app.py'. Do you think the Process is necessary?

Trying to implement multi-cam object detection

Thanks for your work! I was able to improve FPS from 3 to 9 (YOLOv3). Now I would like to develop multi-camera object detection. Currently, when I try to spawn new threads for each camera, I keep getting TensorFlow session errors. Could you please guide me on how to set up multiple cameras for object detection?
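
One hedged approach (an assumption, not part of this repo) is to give each camera its own process and build the graph and tf.Session inside that process, since a session should not be shared across processes. The checkpoint path and camera indices below are placeholders.

```python
# Hedged sketch: one process per camera, each with its own graph/session,
# all feeding a shared output queue.
from multiprocessing import Process, Queue
import cv2
import tensorflow as tf

PATH_TO_CKPT = 'object_detection/ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb'  # assumed path

def camera_worker(source, output_q):
    # Build the graph and session inside the child process; never share a session.
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
            graph_def.ParseFromString(fid.read())
        tf.import_graph_def(graph_def, name='')
    sess = tf.Session(graph=detection_graph)

    cap = cv2.VideoCapture(source)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # output_q.put(detect_objects(frame, sess, detection_graph))  # as in the app
        output_q.put((source, frame))

if __name__ == '__main__':
    output_q = Queue(maxsize=5)
    for src in (0, 1):  # assumed camera indices
        Process(target=camera_worker, args=(src, output_q), daemon=True).start()
    for _ in range(100):  # consume a few frames from both cameras
        src, frame = output_q.get()
        print('camera', src, 'frame shape', frame.shape)
```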

video in out_q may not in right order?

After reading your code, I think that if I use multiple threads/processes to do the detection, the video frames in out_q may not be in the right order. Am I right?
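
With several workers, results can indeed come back out of order. A minimal sketch of one way to restore order (illustrative names, not the app's exact code): tag each frame with its capture index when it is queued, and re-sort results on the consumer side with a small heap.

```python
# Hedged sketch: workers put (frame_index, processed_frame) into out_q; the consumer
# buffers out-of-order results and yields them strictly in capture order.
import heapq

def ordered_frames(out_q):
    """Yield processed frames in capture order; stop when a None sentinel arrives."""
    pending, next_idx = [], 0
    while True:
        item = out_q.get()
        if item is None:
            break
        heapq.heappush(pending, item)  # item is (frame_index, frame)
        while pending and pending[0][0] == next_idx:
            yield heapq.heappop(pending)[1]
            next_idx += 1
```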

Performance issue in /object_detection/eval_util.py (by P3)

Hello! I've found a performance issue in /object_detection/eval_util.py: sess = tf.Session(master, graph=tf.get_default_graph()) is defined in the function run_checkpoint_once, which is repeatedly called inside a while True loop.

tf.Session being created repeatedly adds incremental overhead. If you create the tf.Session outside the loop and pass it in as a parameter, the program would be much more efficient. There is a Stack Overflow post that supports this.

Looking forward to your reply. Btw, I am very glad to create a PR to fix it if you are too busy.
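
For illustration, here is a hedged sketch of the refactor being suggested. The function names are stand-ins, not the exact eval_util.py signatures, and the checkpoint directory and polling interval are placeholders.

```python
# Hedged sketch: build tf.Session once outside the loop and pass it in, rather than
# constructing a new session on every checkpoint evaluation.
import time
import tensorflow as tf

def run_checkpoint_once(sess, checkpoint_path):
    # Restore variables and run the evaluation using the shared session.
    pass

def repeated_checkpoint_run(master=''):
    sess = tf.Session(master, graph=tf.get_default_graph())  # created once
    while True:
        checkpoint_path = tf.train.latest_checkpoint('train_dir')  # assumed dir
        if checkpoint_path:
            run_checkpoint_once(sess, checkpoint_path)
        time.sleep(300)  # wait before polling for the next checkpoint
```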

Conda env create fails using environment.yml file with gxx and gcc packages

Hello,

I installed anaconda, updated conda.
I tried to export an environment with conda env export -n my_env.yml
Then conda env create -f ~/.path/to/env/environment.yml

Actual Behaviour

Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • gcc_impl_linux-64==7.3.0=habb00fd_2
  • gxx_impl_linux-64==7.3.0=hdf63c60_2

If I remove those, my pipeline will not work properly, so what would be the optimal solution? Thanks!

my_env.yml:

`name: GEM
channels:

  • conda-forge
  • bioconda
  • defaults

dependencies:

  • _libgcc_mutex=0.1=main
  • _r-mutex=1.0.1=anacondar_1
  • bedtools=2.29.0=h6ed99ea_1
  • binutils_impl_linux-64=2.31.1=h6176602_1
  • binutils_linux-64=2.31.1=h6176602_9
  • bioconductor-gem=1.10.0=r361_1
  • bwidget=1.9.11=0
  • bzip2=1.0.8=h516909a_1
  • ca-certificates=2019.11.28=hecc5488_0
  • cairo=1.16.0=hfb77d84_1002
  • curl=7.65.3=hf8cf82a_0
  • datamash=1.1.0=0
  • fontconfig=2.13.1=h86ecdb6_1001
  • freetype=2.10.0=he983fc9_1
  • fribidi=1.0.5=h516909a_1002
  • gcc_impl_linux-64=7.3.0=habb00fd_1
  • gcc_linux-64=7.3.0=h553295d_9
  • gettext=0.19.8.1=hc5be6a0_1002
  • gfortran_impl_linux-64=7.3.0=hdf63c60_1
  • gfortran_linux-64=7.3.0=h553295d_9
  • glib=2.58.3=h6f030ca_1002
  • graphite2=1.3.13=hf484d3e_1000
  • gsl=2.5=h294904e_1
  • gxx_impl_linux-64=7.3.0=hdf63c60_1
  • gxx_linux-64=7.3.0=h553295d_9
  • harfbuzz=2.4.0=h9f30f68_3
  • icu=64.2=he1b5a44_1
  • jpeg=9c=h14c3975_1001
  • krb5=1.16.3=h05b26f9_1001
  • libblas=3.8.0=12_openblas
  • libcblas=3.8.0=12_openblas
  • libcurl=7.65.3=hda55be3_0
  • libedit=3.1.20170329=hf8c457e_1001
  • libffi=3.2.1=he1b5a44_1006
  • libgcc-ng=9.1.0=hdf63c60_0
  • libgfortran-ng=7.3.0=hdf63c60_0
  • libiconv=1.15=h516909a_1005
  • liblapack=3.8.0=12_openblas
  • libopenblas=0.3.7=h6e990d7_1
  • libpng=1.6.37=hed695b0_0
  • libssh2=1.8.2=h22169c7_2
  • libstdcxx-ng=9.1.0=hdf63c60_0
  • libtiff=4.0.10=h57b8799_1003
  • libuuid=2.32.1=h14c3975_1000
  • libxcb=1.13=h14c3975_1002
  • libxml2=2.9.9=hee79883_5
  • lz4-c=1.8.3=he1b5a44_1001
  • make=4.2.1=h14c3975_2004
  • ncurses=6.1=hf484d3e_1002
  • openssl=1.1.1d=h516909a_0
  • pango=1.42.4=ha030887_1
  • pcre=8.41=hf484d3e_1003
  • pixman=0.38.0=h516909a_1003
  • pthread-stubs=0.4=h14c3975_1001
  • r-askpass=1.1=r36hcdcec82_1
  • r-assertthat=0.2.1=r36h6115d3f_1
  • r-backports=1.1.4=r36hcdcec82_1
  • r-base=3.6.1=hba50c9b_4
  • r-brew=1.0_6=r36h6115d3f_1002
  • r-callr=3.3.2=r36h6115d3f_0
  • r-cli=1.1.0=r36h6115d3f_2
  • r-clipr=0.7.0=r36h6115d3f_0
  • r-clisymbols=1.2.0=r36h6115d3f_1002
  • r-colorspace=1.4_1=r36hcdcec82_1
  • r-commonmark=1.7=r36hcdcec82_1001
  • r-covr=3.3.1=r36h0357c0b_0
  • r-crayon=1.3.4=r36h6115d3f_1002
  • r-crosstalk=1.0.0=r36h6115d3f_1002
  • r-curl=4.2=r36hcdcec82_0
  • r-desc=1.2.0=r36h6115d3f_1002
  • r-devtools=2.2.1=r36h6115d3f_0
  • r-digest=0.6.21=r36h0357c0b_0
  • r-dt=0.9=r36h6115d3f_0
  • r-ellipsis=0.3.0=r36hcdcec82_0
  • r-evaluate=0.14=r36h6115d3f_1
  • r-fansi=0.4.0=r36hcdcec82_1001
  • r-fs=1.3.1=r36h0357c0b_1
  • r-ggplot2=3.2.1=r36h6115d3f_0
  • r-gh=1.0.1=r36h6115d3f_1002
  • r-git2r=0.26.1=r36h5ca76e2_1
  • r-glue=1.3.1=r36hcdcec82_1
  • r-gtable=0.3.0=r36h6115d3f_2
  • r-htmltools=0.3.6=r36he1b5a44_1003
  • r-htmlwidgets=1.3=r36h6115d3f_1001
  • r-httpuv=1.5.2=r36h0357c0b_1
  • r-httr=1.4.1=r36h6115d3f_1
  • r-ini=0.3.1=r36h6115d3f_1002
  • r-jsonlite=1.6=r36hcdcec82_1001
  • r-labeling=0.3=r36h6115d3f_1002
  • r-later=0.8.0=r36h0357c0b_2
  • r-lattice=0.20_38=r36hcdcec82_1002
  • r-lazyeval=0.2.2=r36hcdcec82_1
  • r-magrittr=1.5=r36h6115d3f_1002
  • r-mass=7.3_51.4=r36hcdcec82_1
  • r-matrix=1.2_17=r36hcdcec82_1
  • r-memoise=1.1.0=r36h6115d3f_1003
  • r-mgcv=1.8_29=r36hcdcec82_0
  • r-mime=0.7=r36hcdcec82_1
  • r-munsell=0.5.0=r36h6115d3f_1002
  • r-nlme=3.1_141=r36h9bbef5b_1
  • r-openssl=1.4.1=r36h9c8475f_0
  • r-pillar=1.4.2=r36h6115d3f_2
  • r-pkgbuild=1.0.5=r36h6115d3f_0
  • r-pkgconfig=2.0.3=r36h6115d3f_0
  • r-pkgload=1.0.2=r36h0357c0b_1001
  • r-plyr=1.8.4=r36h0357c0b_1003
  • r-praise=1.0.0=r36h6115d3f_1003
  • r-prettyunits=1.0.2=r36h6115d3f_1002
  • r-processx=3.4.1=r36hcdcec82_0
  • r-promises=1.0.1=r36h0357c0b_1001
  • r-ps=1.3.0=r36hcdcec82_1001
  • r-purrr=0.3.2=r36hcdcec82_1
  • r-r6=2.4.0=r36h6115d3f_2
  • r-rcmdcheck=1.3.3=r36h6115d3f_2
  • r-rcolorbrewer=1.1_2=r36h6115d3f_1002
  • r-rcpp=1.0.2=r36h0357c0b_0
  • r-remotes=2.1.0=r36h6115d3f_1
  • r-reshape2=1.4.3=r36h0357c0b_1004
  • r-rex=1.1.2=r36h6115d3f_1001
  • r-rlang=0.4.0=r36hcdcec82_1
  • r-roxygen2=6.1.1=r36h0357c0b_1001
  • r-rprojroot=1.3_2=r36h6115d3f_1002
  • r-rstudioapi=0.10=r36h6115d3f_2
  • r-rversions=2.0.0=r36h6115d3f_1
  • r-scales=1.0.0=r36h0357c0b_1002
  • r-sessioninfo=1.1.1=r36h6115d3f_1001
  • r-shiny=1.3.2=r36h6115d3f_1
  • r-sourcetools=0.1.7=r36he1b5a44_1001
  • r-stringi=1.4.3=r36h0e574ca_3
  • r-stringr=1.4.0=r36h6115d3f_1
  • r-sys=3.3=r36hcdcec82_0
  • r-testthat=2.2.1=r36h0357c0b_0
  • r-tibble=2.1.3=r36hcdcec82_1
  • r-usethis=1.5.1=r36h6115d3f_1
  • r-utf8=1.1.4=r36hcdcec82_1001
  • r-vctrs=0.2.0=r36hcdcec82_1
  • r-viridislite=0.3.0=r36h6115d3f_1002
  • r-whisker=0.4=r36h6115d3f_0
  • r-withr=2.1.2=r36h6115d3f_1001
  • r-xml2=1.2.2=r36h0357c0b_0
  • r-xopen=1.0.0=r36h6115d3f_1002
  • r-xtable=1.8_4=r36h6115d3f_2
  • r-yaml=2.2.0=r36hcdcec82_1002
  • r-zeallot=0.1.0=r36h6115d3f_1001
  • readline=8.0=hf8c457e_0
  • sed=4.7=h1bed415_1000
  • tk=8.6.9=hed695b0_1003
  • tktable=2.10=h555a92e_2
  • xorg-kbproto=1.0.7=h14c3975_1002
  • xorg-libice=1.0.10=h516909a_0
  • xorg-libsm=1.2.3=h84519dc_1000
  • xorg-libx11=1.6.8=h516909a_0
  • xorg-libxau=1.0.9=h14c3975_0
  • xorg-libxdmcp=1.1.3=h516909a_0
  • xorg-libxext=1.3.4=h516909a_0
  • xorg-libxrender=0.9.10=h516909a_1002
  • xorg-renderproto=0.11.1=h14c3975_1002
  • xorg-xextproto=7.3.0=h14c3975_1002
  • xorg-xproto=7.0.31=h14c3975_1007
  • xz=5.2.4=h14c3975_1001
  • zlib=1.2.11=h516909a_1006
  • zstd=1.4.0=h3b9ef0a_0`

Crashes after first successful start

I can run this script exactly once and it works (more or less; somehow it does not print the FPS).
But after ending it and restarting, it does not work and keeps trying to build workers.
The only way to get it working again is to reboot.

It seems that the implementation of the multiprocessing/threading causes big problems.

Has anybody experienced the same? Or even better: Has found a solution?
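
One possibility (an assumption, not a confirmed diagnosis) is that the worker pool and the camera are not released when the program ends, leaving the device locked until a reboot. A minimal sketch of explicit cleanup, with a stand-in worker and an assumed device index:

```python
# Hedged sketch: terminate the pool and release the camera on exit so the next run
# can reopen the device. The worker here is a stand-in for the app's detection worker.
import cv2
from multiprocessing import Pool, Queue

def worker(input_q, output_q):
    # Stand-in for the app's detection worker loop.
    while True:
        output_q.put(input_q.get())

if __name__ == '__main__':
    input_q, output_q = Queue(maxsize=5), Queue(maxsize=5)
    pool = Pool(2, worker, (input_q, output_q))
    video_capture = cv2.VideoCapture(0)  # assumed device index
    try:
        pass  # the main read/detect/display loop would go here
    finally:
        pool.terminate()          # stop the worker processes so none are left behind
        video_capture.release()   # free the camera device for the next run
        cv2.destroyAllWindows()
```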

about fps

My FPS rate is 1.36 using the CPU; is that normal? Why is your implementation at 0.11?

About CCTV camera with rtsp mode.

I use this project in a video surveillance platform to analyze people, cars and other objects, but I use RTSP to obtain the video from the monitoring equipment and it is very laggy; I cannot get a normal FPS. Is there any way to make this work?
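
RTSP lag is often caused by the capture buffer filling up with stale frames while detection runs. A hedged sketch of one common mitigation (not part of this repo): read the stream in a background thread and keep only the newest frame. The URL and class name are placeholders.

```python
# Hedged sketch: a background reader that always overwrites with the latest frame,
# so detection never works on a backlog of stale RTSP frames.
import threading
import cv2

class LatestFrameReader:
    def __init__(self, url):
        self.cap = cv2.VideoCapture(url)
        self.frame = None
        self.lock = threading.Lock()
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        while True:
            ret, frame = self.cap.read()
            if ret:
                with self.lock:
                    self.frame = frame  # overwrite: only the newest frame is kept

    def read(self):
        with self.lock:
            return self.frame

reader = LatestFrameReader('rtsp://user:pass@camera-ip/stream')  # placeholder URL
```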

custom object / model training question

Very cool project! .. I have it running on a raspi. Worked like a charm.

I've been using some opencv template matching for a simple object detection system, but may consider this approach.

Can you point me in the right direction on how I would train the system to detect a custom object?
Take a hotdog, for example: no bun, just the hotdog meat. Is it possible to create a training set with my own images and import it into the project so that I can improve the detection results?

I'm guessing, I'd need to create a training set, then retrain the system?

thanks and great blog post/project!
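
For what it's worth, here is a hedged sketch of the general workflow, assuming the TensorFlow Object Detection API's TFRecord input format: label your images with bounding boxes, convert each labelled image to a tf.train.Example, then point a training pipeline config at the resulting record file and at a label map containing your class (e.g. hotdog), and fine-tune from a pretrained checkpoint. File names, image sizes and boxes below are placeholders.

```python
# Hedged sketch: build one tf.train.Example per labelled image for a custom class.
import tensorflow as tf

def _bytes_feature(values):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=values))

def _float_feature(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

def _int64_feature(values):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

def make_example(jpeg_path, width, height, boxes, class_text, class_id):
    """boxes are (xmin, ymin, xmax, ymax) in pixels for one image."""
    with tf.gfile.GFile(jpeg_path, 'rb') as f:
        encoded_jpeg = f.read()
    xmins = [b[0] / width for b in boxes]
    ymins = [b[1] / height for b in boxes]
    xmaxs = [b[2] / width for b in boxes]
    ymaxs = [b[3] / height for b in boxes]
    return tf.train.Example(features=tf.train.Features(feature={
        'image/height': _int64_feature([height]),
        'image/width': _int64_feature([width]),
        'image/encoded': _bytes_feature([encoded_jpeg]),
        'image/format': _bytes_feature([b'jpeg']),
        'image/object/bbox/xmin': _float_feature(xmins),
        'image/object/bbox/ymin': _float_feature(ymins),
        'image/object/bbox/xmax': _float_feature(xmaxs),
        'image/object/bbox/ymax': _float_feature(ymaxs),
        'image/object/class/text': _bytes_feature([class_text.encode()] * len(boxes)),
        'image/object/class/label': _int64_feature([class_id] * len(boxes)),
    }))

# Placeholder file name, size and box; the real values come from your own dataset.
writer = tf.python_io.TFRecordWriter('train.record')
example = make_example('hotdog_001.jpg', 640, 480, [(100, 120, 300, 340)], 'hotdog', 1)
writer.write(example.SerializeToString())
writer.close()
```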

No checkpoint found:

Traceback (most recent call last):
File "demo.py", line 128, in
main(args)
File "demo.py", line 97, in main
checkpoint = load_checkpoint(args.resume)
File "C:\Users\Radhesh Harlalka\Desktop\Scene_text_PRL\aster.pytorch-master\lib\utils\serialization.py", line 66, in load_checkpoint
raise ValueError("=> No checkpoint found at '{}'".format(load_path))
ValueError: => No checkpoint found at 'C:/Program Files/Git/data/mkyang/logs/recognition/aster.pytorch/logs/baseline_aster/baseline_aster/demo.pth.tar'
