Comments (17)

uwmyuan commented on May 11, 2024

@Bendidi
Thanks for your response.
I'll share the full configs to recreate the errors. My environment is Ubuntu 16.04. My test video file is https://drive.google.com/open?id=1l5CZtzK6W2Kf2vYLBgmcvZqMTVqIUwL, the .pb file is https://drive.google.com/open?id=1BjUg7jvMmCsm44y3tl1vIvsVGf_xCOe, and the .meta file is https://drive.google.com/open?id=1UKU3-1nMXy9yITXo0cr_uwdinzfMuF2. Below are my run.py scripts and their error messages; I have bolded the changed code. If you have any questions, please reply and I'll respond as soon as possible.

Case 1: using SORT
Code:

from darkflow.darkflow.defaults import argHandler #Import the default arguments
import os
from darkflow.darkflow.net.build import TFNet


FLAGS = argHandler()
FLAGS.setDefaults()

FLAGS.demo = "**123.mov**" # video file to use, or just put "camera" for a camera
FLAGS.model = "darkflow/cfg/yolo.cfg" # tensorflow model
FLAGS.load = "darkflow/bin/yolo.weights" # tensorflow weights
FLAGS.threshold = 0.7 # detection confidence threshold (detection if confidence > threshold)
FLAGS.gpu = 0.7 # how much of the GPU to use (between 0 and 1); 0 means use CPU
FLAGS.track = True # whether to activate tracking or not
FLAGS.trackObj = "person" # the object to be tracked
FLAGS.saveVideo = False  # whether to save the video or not
FLAGS.BK_MOG = **True** # activate background subtraction using cv2 MOG subtraction,
                        # to help in the worst-case scenario when YOLO cannot predict (it detects movement; not ideal, but better than nothing)
                        # helps only when the number of detections < 5, as it is still better than no detection
FLAGS.tracker = "sort" # which algorithm to use for tracking: deep_sort/sort (NOTE: deep_sort is only trained for people detection)
FLAGS.skip = 0 # how many frames to skip between each detection to speed up the network
FLAGS.csv = True # whether to write a csv file or not (only when tracking is set to True)
FLAGS.display = False # whether to display the tracking or not

tfnet = TFNet(FLAGS)

tfnet.camera()
exit('Demo stopped, exit.')

Error:

Parsing darkflow/cfg/yolo.cfg
Loading darkflow/bin/yolo.weights ...
Successfully identified 203934260 bytes
Finished in 0.06644797325134277s
Model has a coco model name, loading coco labels.

Building net ...
Source | Train? | Layer description                | Output size
-------+--------+----------------------------------+---------------
       |        | input                            | (?, 608, 608, 3)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 608, 608, 32)
 Load  |  Yep!  | maxp 2x2p0_2                     | (?, 304, 304, 32)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 304, 304, 64)
 Load  |  Yep!  | maxp 2x2p0_2                     | (?, 152, 152, 64)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 152, 152, 128)
 Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 152, 152, 64)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 152, 152, 128)
 Load  |  Yep!  | maxp 2x2p0_2                     | (?, 76, 76, 128)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 76, 76, 256)
 Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 76, 76, 128)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 76, 76, 256)
 Load  |  Yep!  | maxp 2x2p0_2                     | (?, 38, 38, 256)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 38, 38, 512)
 Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 38, 38, 256)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 38, 38, 512)
 Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 38, 38, 256)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 38, 38, 512)
 Load  |  Yep!  | maxp 2x2p0_2                     | (?, 19, 19, 512)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 19, 19, 1024)
 Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 19, 19, 512)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 19, 19, 1024)
 Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 19, 19, 512)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 19, 19, 1024)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 19, 19, 1024)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 19, 19, 1024)
 Load  |  Yep!  | concat [16]                      | (?, 38, 38, 512)
 Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 38, 38, 64)
 Load  |  Yep!  | local flatten 2x2                | (?, 19, 19, 256)
 Load  |  Yep!  | concat [27, 24]                  | (?, 19, 19, 1280)
 Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 19, 19, 1024)
 Load  |  Yep!  | conv 1x1p0_1    linear           | (?, 19, 19, 425)
-------+--------+----------------------------------+---------------
GPU mode with 0.7 usage
2017-11-19 21:18:50.904161: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:18:50.904180: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:18:50.904198: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:18:50.904261: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:18:50.904267: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:18:51.256496: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-11-19 21:18:51.257303: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
name: GeForce GTX 960
major: 5 minor: 2 memoryClockRate (GHz) 1.253
pciBusID 0000:06:00.0
Total memory: 1.95GiB
Free memory: 1.61GiB
2017-11-19 21:18:51.257404: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
2017-11-19 21:18:51.257432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y 
2017-11-19 21:18:51.257518: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:06:00.0)
Finished in 7.303749084472656s

Press [ESC] to quit demo
2017-11-19 21:19:03.519535: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2.247 FPS
Traceback (most recent call last):
  File "run.py", line 27, in <module>
    tfnet.camera()
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/help.py", line 171, in camera
    encoder=encoder,tracker=tracker)
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/yolov2/predict.py", line 122, in postprocess
    trackers = tracker.update(detections)
  File "~/Documents/Tracking-with-darkflow/sort/sort.py", line 207, in update
    matched, unmatched_dets, unmatched_trks = associate_detections_to_trackers(dets,trks)
  File "~/Documents/Tracking-with-darkflow/sort/sort.py", line 146, in associate_detections_to_trackers
    iou_matrix[d,t] = iou(det,trk)
ZeroDivisionError: division by zero

Case 2: using .pb and .meta
Code:

from darkflow.darkflow.defaults import argHandler #Import the default arguments
import os
from darkflow.darkflow.net.build import TFNet


FLAGS = argHandler()
FLAGS.setDefaults()

FLAGS.demo = "/home/yunyuan/Downloads/StanfordDroneDataset/videos/nexus/video2/video.mov" # video file to use, or if camera just put "camera"
FLAGS.pbLoad = "darkflow/built_graph/tiny-yolo-voc-traffic.pb" # tensorflow model
FLAGS.metaLoad = "darkflow/built_graph/tiny-yolo-voc-traffic.meta" # tensorflow weights
FLAGS.threshold = 0.4 # detection confidence threshold (detection if confidence > threshold)
FLAGS.gpu = 0.0 # how much of the GPU to use (between 0 and 1); 0 means use CPU
FLAGS.track = True # whether to activate tracking or not
FLAGS.trackObj = "{'Bicyclist','Pedestrian','Skateboarder','Cart','Car','Bus'}" # the objects to be tracked
FLAGS.saveVideo = True  # whether to save the video or not
FLAGS.BK_MOG = True # activate background subtraction using cv2 MOG subtraction,
                        # to help in the worst-case scenario when YOLO cannot predict (it detects movement; not ideal, but better than nothing)
                        # helps only when the number of detections < 5, as it is still better than no detection
FLAGS.tracker = "sort" # which algorithm to use for tracking: deep_sort/sort (NOTE: deep_sort is only trained for people detection)
FLAGS.skip = 0 # how many frames to skip between each detection to speed up the network
FLAGS.csv = True # whether to write a csv file or not (only when tracking is set to True)
FLAGS.display = False # whether to display the tracking or not

tfnet = TFNet(FLAGS)

tfnet.camera()
exit('Demo stopped, exit.')

Error:

Loading from .pb and .meta
Running entirely on CPU
2017-11-19 21:29:38.571220: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:29:38.571235: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:29:38.571253: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:29:38.571257: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:29:38.571298: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:29:38.671981: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-11-19 21:29:38.672308: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
name: GeForce GTX 960
major: 5 minor: 2 memoryClockRate (GHz) 1.253
pciBusID 0000:06:00.0
Total memory: 1.95GiB
Free memory: 1.49GiB
2017-11-19 21:29:38.672334: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
2017-11-19 21:29:38.672338: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y 
2017-11-19 21:29:38.672358: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:06:00.0)

(python:3300): GStreamer-CRITICAL **: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed

(python:3300): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:3300): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

** (python:3300): CRITICAL **: gst_app_src_set_caps: assertion 'GST_IS_APP_SRC (appsrc)' failed

** (python:3300): CRITICAL **: gst_app_src_set_stream_type: assertion 'GST_IS_APP_SRC (appsrc)' failed

** (python:3300): CRITICAL **: gst_app_src_set_size: assertion 'GST_IS_APP_SRC (appsrc)' failed

(python:3300): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:3300): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:3300): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:3300): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:3300): GStreamer-CRITICAL **: gst_bin_add_many: assertion 'GST_IS_ELEMENT (element_1)' failed

(python:3300): GStreamer-CRITICAL **: gst_element_link_many: assertion 'GST_IS_ELEMENT (element_1)' failed
OpenCV Error: Unspecified error (GStreamer: cannot link elements
) in CvVideoWriter_GStreamer::open, file /feedstock_root/build_artefacts/opencv_1510577881683/work/opencv-3.3.0/modules/videoio/src/cap_gstreamer.cpp, line 1626
VIDEOIO(cvCreateVideoWriter_GStreamer (filename, fourcc, fps, frameSize, is_color)): raised OpenCV exception:

/feedstock_root/build_artefacts/opencv_1510577881683/work/opencv-3.3.0/modules/videoio/src/cap_gstreamer.cpp:1626: error: (-2) GStreamer: cannot link elements
 in function CvVideoWriter_GStreamer::open

Press [ESC] to quit demo
~/Documents/Tracking-with-darkflow/sort/sort.py:68: RuntimeWarning: invalid value encountered in true_divide
  h = x[2]/w
Traceback (most recent call last):
  File "run1.py", line 27, in <module>
    tfnet.camera()
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/help.py", line 171, in camera
    encoder=encoder,tracker=tracker)
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/yolov2/predict.py", line 130, in postprocess
    bbox = [int(track[0]),int(track[1]),int(track[2]),int(track[3])]
ValueError: cannot convert float NaN to integer

Case 3:
When I use darkflow as a submodule of this project, it cannot train the net (I don't know why this happens; maybe my code is wrong). So I cloned darkflow standalone, copied it into this project folder, and trained there. Then I ran the code below and got the following error. Please tell me the right way to do this.
Code:

from darkflow.net.build import TFNet
options={
"demo":"/home/yunyuan/Tracking-with-darkflow/123.mov",
"pbLoad":"built_graph/tiny-yolo-voc-traffic.pb",
"metaLoad":"built_graph/tiny-yolo-voc-traffic.meta",
"threshold":0.7,
"track":True,
"trackObj": "{'Bicyclist','Pedestrian','Skateboarder','Cart','Car','Bus'}",
"saveVideo": True, 
"BK_MOG": True,
"tracker": "deep_sort",
"skip": 0,
"csv":True,
"display":True,
"gpu":0.7
}
tfnet = TFNet(options)

tfnet.camera()
exit('Demo stopped, exit.')

Error:

Loading from .pb and .meta
GPU mode with 0.7 usage
2017-11-19 21:40:31.699879: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:40:31.699896: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:40:31.699915: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:40:31.699919: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:40:31.699937: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-11-19 21:40:31.790642: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-11-19 21:40:31.790966: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
name: GeForce GTX 960
major: 5 minor: 2 memoryClockRate (GHz) 1.253
pciBusID 0000:06:00.0
Total memory: 1.95GiB
Free memory: 1.49GiB
2017-11-19 21:40:31.790991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
2017-11-19 21:40:31.790995: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y 
2017-11-19 21:40:31.791015: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:06:00.0)
Press [ESC] to quit demo
2017-11-19 21:40:32.851397: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2017-11-19 21:40:32.867208: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.29GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2017-11-19 21:40:32.936878: E tensorflow/stream_executor/cuda/cuda_blas.cc:366] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2017-11-19 21:40:32.936915: W tensorflow/stream_executor/stream.cc:1756] attempting to perform BLAS operation using StreamExecutor without BLAS support
Traceback (most recent call last):
  File "~/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1327, in _do_call
    return fn(*args)
  File "~/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1306, in _run_fn
    status, run_metadata)
  File "~/anaconda/lib/python3.6/contextlib.py", line 89, in __exit__
    next(self.gen)
  File "~/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InternalError: Blas SGEMM launch failed : m=169, n=55, k=1024
	 [[Node: 22-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](Pad_8, 22-convolutional/filter)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "run.py", line 19, in <module>
    tfnet.camera()
  File "~/Tracking-with-darkflow/darkflow/darkflow/net/help.py", line 127, in camera
    net_out = self.sess.run(self.out, feed_dict)
  File "~/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "~/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1124, in _run
    feed_dict_tensor, options, run_metadata)
  File "~/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
    options, run_metadata)
  File "~/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Blas SGEMM launch failed : m=169, n=55, k=1024
	 [[Node: 22-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](Pad_8, 22-convolutional/filter)]]

Caused by op '22-convolutional', defined at:
  File "run.py", line 17, in <module>
    tfnet = TFNet(options)
  File "~/Tracking-with-darkflow/darkflow/darkflow/net/build.py", line 54, in __init__
    self.build_from_pb()
  File "~/Tracking-with-darkflow/darkflow/darkflow/net/build.py", line 87, in build_from_pb
    name=""
  File "~/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 313, in import_graph_def
    op_def=op_def)
  File "~/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "~/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InternalError (see above for traceback): Blas SGEMM launch failed : m=169, n=55, k=1024
	 [[Node: 22-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](Pad_8, 22-convolutional/filter)]]

obendidi commented on May 11, 2024

SORT is not a neural net, so it's not trainable; it's based mainly on Kalman filters for the tracking (check their project).
For your case it's normal that it does not work well, because the neural network (darkflow/YOLO) was not trained on that kind of data. In short, you should train your own darkflow model on UAV images using the darkflow project and use the generated weights for tracking with SORT (I think it's pretty straightforward; everything is in the README file of darkflow here).
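
For reference, here is a minimal training sketch using darkflow's Python API; the cfg/weights names and the annotation/image paths are placeholders, not the actual files from this thread (the darkflow README documents the equivalent command-line flags):

from darkflow.net.build import TFNet

# Sketch only: fine-tune a YOLO model on your own (e.g. UAV) dataset.
options = {
    "model": "cfg/tiny-yolo-voc-traffic.cfg",  # cfg adjusted to your class count
    "load": "bin/tiny-yolo-voc.weights",       # pre-trained weights to start from
    "train": True,
    "annotation": "train/annotations/",        # PASCAL VOC style XML annotations
    "dataset": "train/images/",                # matching training images
    "gpu": 0.7,
    "epoch": 100,
}

tfnet = TFNet(options)
tfnet.train()

The resulting checkpoints (or a .pb/.meta pair built from them) can then be plugged into the tracking config shown above.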

uwmyuan commented on May 11, 2024

Thanks for your quick response.

I understand your idea now and will follow your instructions. Thanks.
I just tried SORT with the default settings, but got the following error. It seems to be caused by no objects being detected.
Press [ESC] to quit demo
0.471 FPS C:\Users\Yun\Tracking-with-darkflow\sort\sort.py:59: RuntimeWarning: divide by zero encountered in true_divide
  r = w/float(h)
C:\Users\Yun\Tracking-with-darkflow\sort\sort.py:67: RuntimeWarning: invalid value encountered in multiply
  w = np.sqrt(x[2]*x[3])
0.471 FPS C:\Users\Yun\Tracking-with-darkflow\sort\sort.py:68: RuntimeWarning: invalid value encountered in true_divide
  h = x[2]/w
0.470 FPS Traceback (most recent call last):
  File "run.py", line 27, in <module>
    tfnet.camera()
  File "C:\Users\Yun\Tracking-with-darkflow\darkflow\darkflow\net\help.py", line 171, in camera
    encoder=encoder,tracker=tracker)
  File "C:\Users\Yun\Tracking-with-darkflow\darkflow\darkflow\net\yolov2\predict.py", line 122, in postprocess
    trackers = tracker.update(detections)
  File "C:\Users\Yun\Tracking-with-darkflow\sort\sort.py", line 207, in update
    matched, unmatched_dets, unmatched_trks = associate_detections_to_trackers(dets,trks)
  File "C:\Users\Yun\Tracking-with-darkflow\sort\sort.py", line 146, in associate_detections_to_trackers
    iou_matrix[d,t] = iou(det,trk)
ZeroDivisionError: division by zero
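
For what it's worth, here is a minimal sketch of the kind of guard that avoids the division by zero when a degenerate or empty box slips through (an illustration only, not the repo's code; safe_iou is a hypothetical name, and boxes are assumed to be in [x1, y1, x2, y2] form):

import numpy as np

def safe_iou(bb_test, bb_gt):
    # Intersection-over-union with a zero-union guard.
    xx1 = np.maximum(bb_test[0], bb_gt[0])
    yy1 = np.maximum(bb_test[1], bb_gt[1])
    xx2 = np.minimum(bb_test[2], bb_gt[2])
    yy2 = np.minimum(bb_test[3], bb_gt[3])
    inter = np.maximum(0., xx2 - xx1) * np.maximum(0., yy2 - yy1)
    union = ((bb_test[2] - bb_test[0]) * (bb_test[3] - bb_test[1])
             + (bb_gt[2] - bb_gt[0]) * (bb_gt[3] - bb_gt[1]) - inter)
    return inter / union if union > 0 else 0.0  # report zero overlap instead of raising ZeroDivisionError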

obendidi commented on May 11, 2024

Thanks for reporting the bug.

uwmyuan commented on May 11, 2024

Hi @Bendidi

I trained the YOLO net with darkflow on my own dataset and got the aforementioned error again. Here is more of the output:

(python:31331): GStreamer-CRITICAL **: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed

(python:31331): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:31331): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

** (python:31331): CRITICAL **: gst_app_src_set_caps: assertion 'GST_IS_APP_SRC (appsrc)' failed

** (python:31331): CRITICAL **: gst_app_src_set_stream_type: assertion 'GST_IS_APP_SRC (appsrc)' failed

** (python:31331): CRITICAL **: gst_app_src_set_size: assertion 'GST_IS_APP_SRC (appsrc)' failed

(python:31331): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:31331): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:31331): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:31331): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:31331): GStreamer-CRITICAL **: gst_bin_add_many: assertion 'GST_IS_ELEMENT (element_1)' failed

(python:31331): GStreamer-CRITICAL **: gst_element_link_many: assertion 'GST_IS_ELEMENT (element_1)' failed
OpenCV Error: Unspecified error (GStreamer: cannot link elements
) in CvVideoWriter_GStreamer::open, file /feedstock_root/build_artefacts/opencv_1510577881683/work/opencv-3.3.0/modules/videoio/src/cap_gstreamer.cpp, line 1626
VIDEOIO(cvCreateVideoWriter_GStreamer (filename, fourcc, fps, frameSize, is_color)): raised OpenCV exception:

/feedstock_root/build_artefacts/opencv_1510577881683/work/opencv-3.3.0/modules/videoio/src/cap_gstreamer.cpp:1626: error: (-2) GStreamer: cannot link elements
 in function CvVideoWriter_GStreamer::open

Press [ESC] to quit demo
~/Documents/Tracking-with-darkflow/sort/sort.py:68: RuntimeWarning: invalid value encountered in true_divide
  h = x[2]/w
Traceback (most recent call last):
  File "run.py", line 27, in <module>
    tfnet.camera()
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/help.py", line 171, in camera
    encoder=encoder,tracker=tracker)
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/yolov2/predict.py", line 130, in postprocess
    bbox = [int(track[0]),int(track[1]),int(track[2]),int(track[3])]
ValueError: cannot convert float NaN to integer


obendidi commented on May 11, 2024

It should be fixed now.

On a side note, you can activate background subtraction to detect movement when there is no detection from darkflow; a rough sketch of the idea follows.
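
To illustrate what background subtraction buys you (a rough sketch of the general technique, assuming OpenCV 3.x; this is not the project's exact implementation, and the video path is just the test file from this thread):

import cv2

cap = cv2.VideoCapture("123.mov")
fgbg = cv2.createBackgroundSubtractorMOG2()   # models the static background

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = fgbg.apply(frame)                                   # foreground (moving) pixels
    mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]   # portable across OpenCV 3/4
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    # 'boxes' (x, y, w, h) could be handed to the tracker when YOLO returns nothing

cap.release()

With a moving camera the background model is constantly invalidated, which is why the results look messy, as noted later in this thread.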

uwmyuan commented on May 11, 2024

@Bendidi
Thanks. I pulled the new version and it works well with the default settings. To use it with my own dataset and multiple object classes, I tested the following settings:

  1. If tracker="sort" and BK_MOG=False, it works (without output) when no objects are detected. That's OK.
  2. With tracker="sort" and BK_MOG=True, it doesn't work:
2017-11-19 10:04:07.378863: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2.744 FPS
Traceback (most recent call last):
  File "run.py", line 27, in <module>
    tfnet.camera()
  File "~Documents/Tracking-with-darkflow/darkflow/darkflow/net/help.py", line 171, in camera
    encoder=encoder,tracker=tracker)
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/yolov2/predict.py", line 122, in postprocess
    trackers = tracker.update(detections)
  File "~/Documents/Tracking-with-darkflow/sort/sort.py", line 207, in update
    matched, unmatched_dets, unmatched_trks = associate_detections_to_trackers(dets,trks)
  File "~/Documents/Tracking-with-darkflow/sort/sort.py", line 146, in associate_detections_to_trackers
    iou_matrix[d,t] = iou(det,trk)
ZeroDivisionError: division by zero
  3. Note that if YOLO is trained (fine-tuned) with darkflow, darkflow cannot use model and load (weights); it must use .pb and .meta. So I removed model and load and use .pb and .meta as follows:
FLAGS.pbLoad = "darkflow/built_graph/tiny-yolo-voc-traffic.pb" # tensorflow model
FLAGS.metaLoad = "darkflow/built_graph/tiny-yolo-voc-traffic.meta" # tensorflow weights

This setting doesn't work:

(python:4107): GStreamer-CRITICAL **: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed

(python:4107): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:4107): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

** (python:4107): CRITICAL **: gst_app_src_set_caps: assertion 'GST_IS_APP_SRC (appsrc)' failed

** (python:4107): CRITICAL **: gst_app_src_set_stream_type: assertion 'GST_IS_APP_SRC (appsrc)' failed

** (python:4107): CRITICAL **: gst_app_src_set_size: assertion 'GST_IS_APP_SRC (appsrc)' failed

(python:4107): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:4107): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:4107): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:4107): GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed

(python:4107): GStreamer-CRITICAL **: gst_bin_add_many: assertion 'GST_IS_ELEMENT (element_1)' failed

(python:4107): GStreamer-CRITICAL **: gst_element_link_many: assertion 'GST_IS_ELEMENT (element_1)' failed
OpenCV Error: Unspecified error (GStreamer: cannot link elements
) in CvVideoWriter_GStreamer::open, file /feedstock_root/build_artefacts/opencv_1510577881683/work/opencv-3.3.0/modules/videoio/src/cap_gstreamer.cpp, line 1626
VIDEOIO(cvCreateVideoWriter_GStreamer (filename, fourcc, fps, frameSize, is_color)): raised OpenCV exception:

/feedstock_root/build_artefacts/opencv_1510577881683/work/opencv-3.3.0/modules/videoio/src/cap_gstreamer.cpp:1626: error: (-2) GStreamer: cannot link elements
 in function CvVideoWriter_GStreamer::open

Press [ESC] to quit demo
~/Documents/Tracking-with-darkflow/sort/sort.py:68: RuntimeWarning: invalid value encountered in true_divide
  h = x[2]/w
Traceback (most recent call last):
  File "run1.py", line 27, in <module>
    tfnet.camera()
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/help.py", line 171, in camera
    encoder=encoder,tracker=tracker)
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/yolov2/predict.py", line 130, in postprocess
    bbox = [int(track[0]),int(track[1]),int(track[2]),int(track[3])]
ValueError: cannot convert float NaN to integer
  4. To use the .meta and .pb files, I ran the code with standalone darkflow:
from darkflow.net.build import TFNet
options={
"demo":"~/Tracking-with-darkflow/123.mov",
"pbLoad":"built_graph/tiny-yolo-voc-traffic.pb",
"metaLoad":"built_graph/tiny-yolo-voc-traffic.meta",
"threshold":0.7,
"track":True,
"trackObj": "{'Bicyclist','Pedestrian','Skateboarder','Cart','Car','Bus'}",
"saveVideo": True, 
"BK_MOG": True,
"tracker": "sort",
"skip": 0,
"csv":True,
"display":True,
"gpu":0.7
}
tfnet = TFNet(options)

tfnet.camera()
exit('Demo stopped, exit.')

Darkflow itself works, but the tracking part (sort or deep_sort) doesn't. I understand you may have changed the darkflow options.

Thanks for your kind help.

obendidi commented on May 11, 2024

I've tried recreating the error, but to no avail. Can you share the input you are using and the full config so I can test it locally?
On a side note, there should be no difference between using .weights or .pb for YOLO.

obendidi commented on May 11, 2024

The last error is related to your TensorFlow/CUDA/cuDNN installation. I've tested the weights you provided to see how good the detection is, and the result was a little poor. Anyway, I've tried the same config you used, with background subtraction activated:

FLAGS.demo = "123.MOV" # video file to use, or if camera just put "camera"
# FLAGS.model = "darkflow/cfg/yolo.cfg" # tensorflow model
# FLAGS.load = "darkflow/bin/yolo.weights" # tensorflow weights
FLAGS.pbLoad = "tiny-yolo-voc-traffic.pb" # tensorflow model
FLAGS.metaLoad = "tiny-yolo-voc-traffic.meta" # tensorflow weights
FLAGS.threshold = 0.7 # detection confidence threshold (detection if confidence > threshold)
FLAGS.gpu = 0.8 # how much of the GPU to use (between 0 and 1); 0 means use CPU
FLAGS.track = True # whether to activate tracking or not
FLAGS.trackObj = ['Bicyclist','Pedestrian','Skateboarder','Cart','Car','Bus'] # the objects to be tracked
#FLAGS.trackObj = ["person"]
FLAGS.saveVideo = True  # whether to save the video or not
FLAGS.BK_MOG = True # activate background subtraction using cv2 MOG subtraction,
                        # to help in the worst-case scenario when YOLO cannot predict (it detects movement; not ideal, but better than nothing)
                        # helps only when the number of detections < 3, as it is still better than no detection
FLAGS.tracker = "sort" # which algorithm to use for tracking: deep_sort/sort (NOTE: deep_sort is only trained for people detection)
FLAGS.skip = 3 # how many frames to skip between each detection to speed up the network
FLAGS.csv = False # whether to write a csv file or not (only when tracking is set to True)
FLAGS.display = True # whether to display the tracking or not

and I did get an error that happens when a detection is followed by no detections in the next frame; I made a quick (not great) fix for it, but no error matching the one you showed. Please make sure you have the latest version of both the repo and its submodules.

Here is the video file generated after the tracking; it's messy, but that's what you get using background subtraction with a moving camera:
https://drive.google.com/open?id=1Hhdowy6ZYKgkHD0Po_X-wO1XUXjp1fok

uwmyuan commented on May 11, 2024

@Bendidi
Thanks for your quick response. I pulled this repo and its submodules.
For the last error, it works! To fix the bad installation, I used conda install -c conda-forge tensorflow-gpu.
For the second case, .pb and .meta work.
For the first case, I turned off background subtraction with FLAGS.BK_MOG = False and got the following error. I wonder if this is the same as your error.

QGtkStyle could not resolve GTK. Make sure you have installed the proper libraries.
File "run.py", line 30, in <module>
    tfnet.camera()
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/help.py", line 171, in camera
    encoder=encoder,tracker=tracker)
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/yolov2/predict.py", line 125, in postprocess
    trackers = tracker.update(detections)
  File "~/Documents/Tracking-with-darkflow/sort/sort.py", line 213, in update
    trk.update(dets[d,:][0])
IndexError: too many indices for array
Segmentation fault (core dumped)

It looks like the error is caused by Qt. My Qt version is 5.6.2, installed with conda install -c conda-forge qt.

obendidi commented on May 11, 2024

My bad, I forgot to push the quick fix for that error before responding; it should be good now.
This is just a temporary fix, until I get time to find and test a better solution for the case of detections vanishing mid-tracking (e.g. frame 1: 2 detected boxes --> frame 2: 0 detected boxes); that is the case that causes this error.
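
For illustration only (this is not the pushed fix; update_tracker is a hypothetical wrapper), the core idea is to hand SORT an empty but correctly shaped array when a frame has no detections, so the association step never indexes into an empty list:

import numpy as np

def update_tracker(tracker, detections):
    # SORT's update() expects rows of [x1, y1, x2, y2, score];
    # pass an empty (0, 5) array instead of an empty list when nothing was detected.
    dets = np.asarray(detections, dtype=float)
    if dets.size == 0:
        dets = np.empty((0, 5))
    return tracker.update(dets)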

uwmyuan commented on May 11, 2024

@Bendidi
Thanks. I pulled this repo and got the following error. I'm not sure why.
Code:

FLAGS.gpu = 0.7
FLAGS.BK_MOG = False

Error:

2017-11-21 18:35:46.572624: E tensorflow/stream_executor/cuda/cuda_dnn.cc:371] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2017-11-21 18:35:46.572663: E tensorflow/stream_executor/cuda/cuda_dnn.cc:338] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
2017-11-21 18:35:46.572684: F tensorflow/core/kernels/conv_ops.cc:672] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo<T>(), &algorithms) 
Aborted (core dumped)

Code:

FLAGS.gpu = 0.0
FLAGS.BK_MOG = False

Error:

Traceback (most recent call last):
  File "run.py", line 30, in <module>
    tfnet.camera()
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/help.py", line 171, in camera
    encoder=encoder,tracker=tracker)
  File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/yolov2/predict.py", line 125, in postprocess
    trackers = tracker.update(detections)
  File "~/Documents/Tracking-with-darkflow/sort/sort.py", line 213, in update
    trk.update(dets[d,:][0])
IndexError: too many indices for array
Segmentation fault (core dumped)

obendidi commented on May 11, 2024

There is a problem with your cuDNN installation; I suggest you do a clean reinstall of everything. Also, for the second error, I don't think you have the latest version of the code, because in the error you submitted:

File "~/Documents/Tracking-with-darkflow/darkflow/darkflow/net/yolov2/predict.py", line 125, in postprocess
    trackers = tracker.update(detections)

it says that line 125 of predict.py contains trackers = tracker.update(detections), while in the latest code line 125 contains trackers = tracker.tracks.

To pull the new changes in the submodules you should use: git submodule update --init --recursive

I just tested the code with both configs and didn't get any errors!

307509256 commented on May 11, 2024

@uwmyuan @Bendidi
The links you offered for downloading are invalid. Could you please upload them again? Thanks a lot!

Thanks for your response.
I'll share the full configs to recreate the errors. My env is Ubuntu 16.04. My test video file is https://drive.google.com/open?id=1l5CZtzK6W2Kf2vYLBgmcvZqMTVqIUwL2, the .pb file is https://drive.google.com/open?id=1BjUg7jvMmCsm44y3tl1vIvsVGf_xCOec, the .meta file is https://drive.google.com/open?id=1UKU3-1nMXy9yITXo0cr_uwdinzfMuF29. The following are my run.py scripts and error messages. I'll bold the changed code. If you have any questions, please reply and I'll respond asap.
.........

uwmyuan commented on May 11, 2024

@307509256
Please follow the instructions at https://github.com/thtrieu/darkflow#save-the-built-graph-to-a-protobuf-file-pb to produce your own .meta and .pb files.
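
If it helps, here is a rough sketch of that step using darkflow's Python API (the cfg name and checkpoint are placeholders; the linked README documents the equivalent --savepb command-line flag):

from darkflow.net.build import TFNet

options = {
    "model": "cfg/tiny-yolo-voc-traffic.cfg",  # your own cfg
    "load": -1,                                # -1 loads the most recent checkpoint from ckpt/
    "savepb": True,
}

tfnet = TFNet(options)
tfnet.savepb()  # writes built_graph/<model>.pb and built_graph/<model>.meta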

307509256 commented on May 11, 2024

@uwmyuan
All the links you offered are invalid; maybe they are private.
Could you perhaps email me your .meta and .pb files?
Thank you very much!
email:
[email protected]
[email protected]

307509256 commented on May 11, 2024

@Bendidi @uwmyuan
I don't have time to produce my own .meta and .pb files, so I want to try tiny-yolo-voc-traffic.meta quickly. If necessary, I will pay in accordance with the regulations.
