
dw2tf's Introduction

DW2TF: Darknet to TensorFlow

This is a simple converter which converts:

  • Darknet weights (.weights) to TensorFlow weights (.ckpt)
  • Darknet model (.cfg) to TensorFlow graph (.pb, .meta)
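
The generated files can be loaded back with the standard TensorFlow 1.x API. A minimal sketch, assuming a yolov2 conversion written to data/ with --prefix 'yolov2/' (file and tensor names depend on your cfg and --prefix):

import tensorflow as tf

# Hedged sketch: file and tensor names are assumptions for a 'data/yolov2.cfg'
# conversion run with --output 'data/' and --prefix 'yolov2/'.
saver = tf.train.import_meta_graph('data/yolov2.meta')
with tf.Session() as sess:
    saver.restore(sess, 'data/yolov2.ckpt')                   # converted Darknet weights
    net_in = sess.graph.get_tensor_by_name('yolov2/net1:0')   # input placeholder built from the [net] block
    print(net_in.shape)                                       # e.g. (?, 608, 608, 3)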

Requirements

  • Ubuntu
  • Python 3.6 (known issues with Python 3.7)

Use it

For a full list of options:

python3 main.py -h

Provide the optional argument --training to generate a training graph (uses batch norm in training mode).

Object Detection Networks

yolov2

python3 main.py \
    --cfg 'data/yolov2.cfg' \
    --weights 'data/yolov2.weights' \
    --output 'data/' \
    --prefix 'yolov2/' \
    --gpu 0

yolov2-tiny

python3 main.py \
    --cfg 'data/yolov2-tiny.cfg' \
    --weights 'data/yolov2-tiny.weights' \
    --output 'data/' \
    --prefix 'yolov2-tiny/' \
    --gpu 0

yolov3

python3 main.py \
    --cfg 'data/yolov3.cfg' \
    --weights 'data/yolov3.weights' \
    --output 'data/' \
    --prefix 'yolov3/' \
    --gpu 0

yolov3-tiny

python3 main.py \
    --cfg 'data/yolov3-tiny.cfg' \
    --weights 'data/yolov3-tiny.weights' \
    --output 'data/' \
    --prefix 'yolov3-tiny/' \
    --gpu 0

Image Classification Networks

darknet19

python3 main.py \
    --cfg 'data/darknet19.cfg' \
    --weights 'data/darknet19.weights' \
    --output 'data/' \
    --prefix 'darknet19/' \
    --gpu 0

darknet19_448

python3 main.py \
    --cfg 'data/darknet19_448.cfg' \
    --weights 'data/darknet19_448.weights' \
    --output 'data/' \
    --prefix 'darknet19_448/' \
    --gpu 0

Todo

  • More layer types

Thanks

dw2tf's People

Contributors

jinyu121, sjain-stanford


dw2tf's Issues

ValueError: ''yolov3-tiny/'net1' is not a valid scope name


I'm currently training on my images using yolov3-tiny.

The command I use is:

python main.py --cfg "C://Users//G//Desktop//ML//YOLO//DW2TF-master//DW2TF-master//data//yolov3-tiny-new.cfg" --weights "C://Users//G//Desktop//ML//YOLO//DW2TF-master//DW2TF-master//data//yolov3-tiny.weights" --output 'data//' --prefix 'yolov3-tiny/' --gpu 0

This resulted in the error above.

Please advise. Thanks!

YoloV3 detects very large boxes after conversion from .weights to .pb

Hi,

I used DW2TF to convert a YOLOv3 model to a .pb model, but after the conversion the model no longer detects well.
The detected boxes' widths are all equal to the width of the input image, while the heights range from a third of the image height to the full height.
I successfully converted a model almost two months ago; it was exactly the same as the current one except for the number of classes (1 before, 7 now).

Have you ever encountered this problem? Do you have any idea how to solve it?

Thank you very much in advance

Transfer learning ?

Hi, is it possible to do transfer learning using the converted YOLOv3 weights? Thank you very much.

tiny Yolov3 with custom parameters

Hello,

Can I convert a custom Darknet tiny-YOLOv3 with the following parameters changed?

  • number of classes set to 1
  • image size
  • number of channels set to 1 (grayscale)

The image size is easy to modify in the Darknet class code, but I can't find where and how to set the number of classes and channels. Is that even possible?

Thank you

Can't convert Yolov3-tiny to TensorFlow

I'm trying to convert the YOLOv3-tiny model to TensorFlow.
cfg and weights files are from https://pjreddie.com/darknet/yolo/ (YOLOv3-tiny).
OS: Ubuntu 16.04
command to convert:
python3 main.py --cfg 'data/yolov3-tiny.cfg' --weights 'data/yolov3-tiny.weights' --output 'data/' --prefix 'yolov3/' --gpu 0

Error:

  File "test/anaconda3/envs/condaenv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1628, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 1 in both shapes must be equal, but are 24 and 26. Shapes are [?,24,24] and [?,26,26]. for 'yolov3/route2' (op: 'ConcatV2') with input shapes: [?,24,24,128], [?,26,26,128], [] and with computed input tensors: input[2] = <-1>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 111, in <module>
    main(args)
  File "main.py", line 52, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 32, in parse_net
    training=training, const_inits=const_inits, verbose=verbose)
  File "/test/DW2TF-master/util/cfg_layer.py", line 197, in get_cfg_layer
    layer = _cfg_layer_dict.get(layer_name, cfg_ignore)(B, H, W, C, net, param, weights_walker, stack, output_index, scope, training, const_inits, verbose)
  File "test/DW2TF-master/util/cfg_layer.py", line 131, in cfg_route
    net = tf.concat(nets, axis=-1, name=scope)
  File "test/anaconda3/envs/condaenv/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1124, in concat
    return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
  File "test/anaconda3/envs/condaenv/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1033, in concat_v2
    "ConcatV2", values=values, axis=axis, name=name)
  File "test/anaconda3/envs/condaenv/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "test/anaconda3/envs/condaenv/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "test/anaconda3/envs/condaenv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
    op_def=op_def)
  File "test/anaconda3/envs/condaenv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1792, in __init__
    control_input_ops)
  File "test/anaconda3/envs/condaenv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1631, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimension 1 in both shapes must be equal, but are 24 and 26. Shapes are [?,24,24] and [?,26,26]. for 'yolov3/route2' (op: 'ConcatV2') with input shapes: [?,24,24,128], [?,26,26,128], [] and with computed input tensors: input[2] = <-1>.

"Python has stopped working"

Hello! I'm running into a problem trying to convert YOLOv2 weights to a TensorFlow .ckpt.

When I use the following command:

python main.py --cfg data/yolov2.cfg --weights data/yolov2.weights --output data/ --prefix yolov2/ --gpu 0 --layers 0 --training

everything goes well at first, then I get a "Python has stopped working" pop-up during execution of line 92 in main.py. I do not get any other error message, but I end up with a temp file

yolov2.ckpt.index.tempstate2841949181619307473

and a model I can't use.

Do you have any idea what could be causing this issue?

I am using Python 3.6.8 by the way, in case it helps.

Convert tiny-yolov3 to Tflite.

Can you help me convert the tiny-yolov3 model to .tflite to run it on Android? Please let me know if you have any leads.
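
One possible lead (not something DW2TF itself provides): freeze the converted graph to a .pb first and then try the TF 1.x TFLite converter on it. A minimal sketch; the file names, node names and input size below are assumptions for a yolov3-tiny conversion made with --prefix 'yolov3-tiny/':

import tensorflow as tf

# Hedged sketch: the .pb must already be a frozen graph; names are assumptions.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'data/yolov3-tiny_frozen.pb',
    input_arrays=['yolov3-tiny/net1'],
    output_arrays=['yolov3-tiny/convolutional10/BiasAdd',
                   'yolov3-tiny/convolutional13/BiasAdd'],
    input_shapes={'yolov3-tiny/net1': [1, 416, 416, 3]})
with open('data/yolov3-tiny.tflite', 'wb') as f:
    f.write(converter.convert())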

Batch normalization --training parameter

Hi, I wanted to use the YOLOv3-tiny model. I downloaded the cfg and weights from the official website.

With the command below I successfully built the .pb and .meta files.
python main.py --cfg ../yolov3-tiny/yolov3-tiny.cfg --weights ../yolov3-tiny/yolov3-tiny.weights --output ../yolov3-tiny/ --prefix "YOLO/"

With the script below I could load the graph and weights.
When I tried to get the output of the last layer, convolutional13, I got an array full of nan values:

import tensorflow as tf
import numpy as np
import cv2
saver = tf.train.import_meta_graph("yolov3-tiny/yolov3-tiny.meta")
sess = tf.Session()
saver.restore(sess, "yolov3-tiny/yolov3-tiny.ckpt")

image = cv2.cvtColor(cv2.imread("sample.jpg"), cv2.COLOR_BGR2RGB) / 255.0
image = np.expand_dims(image, axis=0)
print(
	sess.run("YOLO/convolutional13/BiasAdd:0", feed_dict={"YOLO/net1:0":image})
)

Outputs:

[[[[nan nan nan ... nan nan nan]
   [nan nan nan ... nan nan nan]
   [nan nan nan ... nan nan nan]
   ...
   [nan nan nan ... nan nan nan]
   [nan nan nan ... nan nan nan]
   [nan nan nan ... nan nan nan]]

  [[nan nan nan ... nan nan nan]
   [nan nan nan ... nan nan nan]
   [nan nan nan ... nan nan nan]
   ...
   [nan nan nan ... nan nan nan]
   [nan nan nan ... nan nan nan]
   [nan nan nan ... nan nan nan]]]]

However, when I tried the same conversion with
python main.py --training --cfg ../yolov3-tiny/yolov3-tiny.cfg --weights ../yolov3-tiny/yolov3-tiny.weights --output ../yolov3-tiny/ --prefix "YOLO/"

Same script outputs:

[[[[-0.5312634   0.23449755 -0.22042923 ... -0.99058443 -0.75764066
     0.05638865]
   [-0.1264087  -0.06148954 -0.13978335 ... -0.57391363 -0.65091616
    -0.34988856]
   [-0.27005857  0.18064664 -0.1842366  ... -0.7720764  -0.63676864
    -0.22235665]
   ...
   [-0.14108022  0.12593661  0.040429   ... -0.51453155 -0.8112872
    -0.2482701 ]
   [-0.14169356  0.05826963  0.04545707 ... -0.36210614 -0.6568373
    -0.17424914]
   [-0.24074644  0.49974358 -0.17072684 ... -1.1237179  -0.8400626
    -0.20994306]]

  [[-0.37883073  0.06569445  0.07646853 ... -0.72665095 -0.5669313
     0.23495841]
   [-0.11390454  0.00512573  0.09839267 ...  0.02260823 -0.31830767
     0.00776402]
   [-0.18927872  0.14090516  0.06336813 ... -0.17192174 -0.3423958
     0.07134365]
   ...
   [-0.5374908   0.17205149  0.30092606 ... -1.299513   -0.50735444
    -0.45372528]
   [-0.44234592  0.17717186  0.11988509 ... -0.9887123  -0.25854525
    -0.40106654]
   [-0.30651295  0.32414198  0.01627261 ... -1.7556211  -0.55981153
    -0.5505434 ]]]]

I believe this is because of batch normalization and the --training parameter. I want to use this model for transfer learning.

Also, when I tried to get the output from earlier layers like convolutional2 (without the --training parameter), the values looked like:

[[[[nan -1.4262159e+36 -1.6400952e+36 ... -1.5521092e+36
     1.1826908e+38 -1.1971094e+37]
   [           nan -5.4608188e+36           -inf ... -2.9475174e+35
    -2.9942158e+36           -inf]
   [           nan -5.4608188e+36           -inf ... -2.9475174e+35
    -2.9942158e+36           -inf]
   ...
   [           nan -5.4608188e+36           -inf ... -2.9475174e+35
    -2.9942158e+36           -inf]
   [           nan -5.4608188e+36           -inf ... -2.9475174e+35
    -2.9942158e+36           -inf]
   [           nan -4.9901782e+36 -2.4481979e+36 ...  8.4210530e+36
              -inf -1.1353102e+37]]

  [[           nan -1.3676106e+36            inf ...  1.5158864e+37
               inf -8.5954786e+36]
   [           nan -7.9527132e+36            inf ...  2.1685821e+37
     1.6828479e+37           -inf]
   [           nan -7.9527132e+36            inf ...  2.1685821e+37
     1.6828479e+37           -inf]
   ...
   [           nan -3.1938362e+36            inf ...  1.5331453e+37
     3.3975579e+37 -9.5892951e+36]
   [           nan -3.1938362e+36            inf ...  1.5331453e+37
     3.3975579e+37 -9.5892951e+36]
   [           nan -5.6393693e+36  4.6983167e+37 ...  1.0347686e+37
    -5.8164126e+36 -4.1906564e+36]]]]

Is this a problem with the code, or am I missing something, for example about the image input?
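
One way to narrow this down is to inspect the restored batch-norm moving statistics directly; a hedged debugging sketch, with tensor names assumed from the "YOLO/" prefix used above:

import tensorflow as tf

# Hedged sketch: checks whether the moving statistics restored from the checkpoint look sane.
saver = tf.train.import_meta_graph("yolov3-tiny/yolov3-tiny.meta")
with tf.Session() as sess:
    saver.restore(sess, "yolov3-tiny/yolov3-tiny.ckpt")
    mean, var = sess.run(["YOLO/convolutional1/BatchNorm/moving_mean:0",
                          "YOLO/convolutional1/BatchNorm/moving_variance:0"])
    print(mean[:5], var[:5])  # NaNs or zeros here would point at the batch-norm path rather than the input image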

Wrong input and output shape?

Hi,

I'm trying to convert a Darknet yolov2-tiny model to ONNX. I'm using DW2TF to convert the weights and cfg files to TensorFlow (.pb).
I checked the TensorFlow .pb file in Netron, and I see that the input and output shapes of my yolov2-tiny VOC model are different from what I expected:

Actual shape:
Input: float32[unk__77,416,416,3]
Output: float32[unk__78,13,13,40]

Expected shape format:
Input: float32[unk__77,3,416,416]
Output: float32[unk__78,40,13,13]

Link to darknet VOC model:
https://pjreddie.com/darknet/yolov2/

Please check this onnx model example: https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/tiny-yolov2

Is it possible to change the model's input and output shapes? I need this to use the trained model in Unity with Barracuda.
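
For context, DW2TF builds the graph in NHWC layout (batch, height, width, channels), which matches what Netron reports. If the downstream consumer expects NCHW, one option is to transpose at the model boundaries; a minimal sketch with stand-in placeholders, not tensors produced by this repo:

import tensorflow as tf

# Hedged sketch: converting between NCHW and NHWC at the model boundaries.
nchw_in = tf.placeholder(tf.float32, [None, 3, 416, 416])    # layout the ONNX-style consumer expects
nhwc_in = tf.transpose(nchw_in, [0, 2, 3, 1])                # NCHW -> NHWC, feed this to the converted graph

nhwc_out = tf.placeholder(tf.float32, [None, 13, 13, 40])    # stand-in for the converted graph's output
nchw_out = tf.transpose(nhwc_out, [0, 3, 1, 2])              # NHWC -> NCHW for the consumer
print(nhwc_in.shape, nchw_out.shape)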

How to read the output of the graph?

Hi, so how exactly do we read the output of the network?

For example, using yolov3-tiny, I feed in an image and the output has shape (1, 26, 26, 255).

What's the next step to get the predicted classes and bounding boxes? Sorry, I'm new to this.
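
For context, 255 = 3 x (5 + 80): three anchor boxes per grid cell, each with x/y/w/h offsets, an objectness score and 80 class scores. A minimal hedged sketch of the first reshaping step (the anchor-based decoding itself is sketched in the bounding-box issue further below):

import numpy as np

# Hedged sketch: reshape the raw (1, 26, 26, 255) output before decoding.
num_classes = 80
raw = np.zeros((1, 26, 26, 255), dtype=np.float32)   # stand-in for the network output
pred = raw.reshape(1, 26, 26, 3, 5 + num_classes)    # [batch, gy, gx, anchor, (tx, ty, tw, th, obj, classes...)]
print(pred.shape)
# Per YOLOv3: tx/ty, objectness and class scores go through a sigmoid, tw/th are
# exponentiated and multiplied by the anchor sizes, then the grid-cell offsets are added.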

YoloV3 and YoloV3-Tiny change input image size in .cfg file

I changed the input image size in the .cfg file.
Converting to .meta and .pb files works fine.

But when I run detection, it cannot detect any objects.
The YOLOv3 case:
image size 800x608: cannot detect anything.
image size 608x608: fine.
The YOLOv3-Tiny case:
image size 640x480: cannot detect anything.
image size 608x608: fine.
image size 416x416: fine.

I want to change the input image size.
Am I doing something wrong? Please let me know.
Thanks

Can't freeze tensorflow graph

Trying to run the freeze graph utility on the output from the conversion yields the esoteric:
ValueError: No variables to save

I think it's because there are no output nodes, but it's possible that's something I'm supposed to define in my training:

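For reference, a minimal TF 1.x freezing sketch that restores the checkpoint first (so there are variables to fold in) and names the final BiasAdd ops as outputs; the file and node names are assumptions for a yolov3-tiny conversion made with --prefix 'yolov3-tiny/':

import tensorflow as tf

# Hedged sketch: file names and output node names are assumptions.
saver = tf.train.import_meta_graph('data/yolov3-tiny.meta')
with tf.Session() as sess:
    saver.restore(sess, 'data/yolov3-tiny.ckpt')              # load variables so there is something to freeze
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def,
        ['yolov3-tiny/convolutional10/BiasAdd',
         'yolov3-tiny/convolutional13/BiasAdd'])               # assumed output nodes (the two detection heads)
    with tf.gfile.GFile('data/yolov3-tiny_frozen.pb', 'wb') as f:
        f.write(frozen.SerializeToString())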

Update deprecated ops

WARNING:tensorflow:From /content/DW2TF/util/cfg_layer.py:74: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /content/DW2TF/util/cfg_layer.py:93: batch_normalization (from tensorflow.python.layers.normalization) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.batch_normalization instead.
1 Tensor("yolov3-tiny/convolutional1/Activation:0", shape=(?, 416, 416, 16), dtype=float32)
WARNING:tensorflow:From /content/DW2TF/util/cfg_layer.py:108: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.max_pooling2d instead.

VALUE ERROR!

I got this error. Any ideas?

ValueError: Dimension 2 in both shapes must be equal, but are 148 and 147. Shapes are [?,260,148] and [?,260,147]. for 'network/route2' (op: 'ConcatV2') with input shapes: [?,260,148,256], [?,260,147,512], [] and with computed input tensors: input[2] = <-1>.
85 Tensor("network/convolutional60/Activation:0", shape=(?, 130, 74, 256), dtype=float32)
86 Tensor("network/upsample1:0", shape=(?, 260, 148, 256), dtype=float32)

TypeError: string indices must be integers

Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main(args)
  File "main.py", line 57, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 28, in parse_net
    layer_name = layer['name']
TypeError: string indices must be integers

UnicodeDecodeError (Python 3.6.8)

Trying to convert yolov3, I get this error:

Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main(args)
  File "main.py", line 57, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 25, in parse_net
    for ith, layer in enumerate(cfg_walker):
  File "/home/user/temp/DW2TF/util/reader.py", line 84, in get_block
    line = next(line_getter)
  File "/home/user/temp/DW2TF/util/reader.py", line 74, in _get_line
    for line in open(self.fnm):
  File "/usr/lib/python3.6/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 1241: ordinal not in range(128)

Command:

python3 main.py \
    --cfg 'data/yolov3.cfg' \
    --weights 'data/yolov3.weights' \
    --output 'data/' \
    --prefix 'yolov3/' \
    --gpu 0
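
The cfg apparently contains a non-ASCII byte (0xc2) that the reader's default codec cannot decode. One hedged workaround is to strip non-ASCII characters (typically comments) from the cfg before converting; the cleaned file name below is arbitrary:

# Hedged workaround: rewrite the cfg as pure ASCII before running main.py.
with open('data/yolov3.cfg', 'rb') as f:
    raw = f.read()
cleaned = raw.decode('utf-8', errors='ignore').encode('ascii', errors='ignore')
with open('data/yolov3-clean.cfg', 'wb') as f:
    f.write(cleaned)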

Can't convert Yolov2

Hi, I had an error when I tried to convert yolov2. Any ideas? Thank you.
OS: Ubuntu 18.04
Cuda: 10.0
tensorflow-gpu: 1.13.1

31 Tensor("yolov2/convolutional23/BiasAdd:0", shape=(?, 19, 19, 425), dtype=float32)
=> Ignore:  {'name': 'region', 'anchors': ['0.57273', '0.677385', '1.87446', '2.06253', '3.33843', '5.47434', '7.88282', '3.52778', '9.77052', '9.16828'], 'bias_match': '1', 'classes': '80', 'coords': '4', 'num': '5', 'softmax': '1', 'jitter': '.3', 'rescore': '1', 'object_scale': '5', 'noobject_scale': '1', 'class_scale': '1', 'coord_scale': '1', 'absolute': '1', 'thresh': '.6', 'random': '1'}
32 Tensor("yolov2/convolutional23/BiasAdd:0", shape=(?, 19, 19, 425), dtype=float32)
Traceback (most recent call last):
  File "/mnt/hdd1/dev/DW2TF/util/reader.py", line 84, in get_block
    line = next(line_getter)
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "main.py", line 112, in <module>
    main(args)
  File "main.py", line 53, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 25, in parse_net
    for ith, layer in enumerate(cfg_walker):
RuntimeError: generator raised StopIteration

ValueError: not enough values to unpack (expected 2, got 1)

I am converting the YOLOv2-tiny model to TensorFlow.
I downloaded the weights from the official site and saved them to the data/ directory.
After running
!python3 main.py --cfg 'data/yolov2.cfg' --weights 'data/yolov2.weights' --output 'data/' --prefix 'yolov2/' --gpu 0
(I mistakenly renamed the yolov2-tiny files to yolov2 files, hence the cfg and weights paths may look wrong.)

But this is what I get:

WARNING:tensorflow:From main.py:56: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.

Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main(args)
  File "main.py", line 57, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 25, in parse_net
    for ith, layer in enumerate(cfg_walker):
  File "/content/DW2TF/util/reader.py", line 94, in get_block
    key, value = line[0:2]
ValueError: not enough values to unpack (expected 2, got 1)

Convert Custom Dataset

Hi. I have trained on my own dataset in Windows and I want to convert it to a .pb file. Can I simply convert the weights file that I've trained using your converter?

Converting tiny-yolov3 gives input/output error

Hello @sjain-stanford. I am trying out the new release that you made for tiny-yolov3 but am facing errors during the conversion. I used your command to convert the model:

python main.py \
    --cfg 'data/yolov3-tiny.cfg' \
    --weights 'data/yolov3-tiny.weights' \
    --output 'data/' \
    --prefix 'yolov3-tiny/' \
    --gpu 0

I used the original darkflow weights file and the cfg file. However, I am getting the error below.

UnknownError (see above for traceback): Failed to rename: data/yolov3-tiny.ckpt.index.tempstate5997668412469460368 to: data/yolov3-tiny.ckpt.index : Access is denied.
; Input/output error


As output I get the files shown in the attached screenshot. I am uncertain whether the conversion completed. Please let me know!


Converting Yolov3: ValueError: not enough values to unpack

Hi,

I'm trying to convert the YOLOv3 model to TensorFlow. I downloaded the YOLOv3-416 model from here: https://pjreddie.com/darknet/yolo/

When I'm running this command:

python3 main.py --cfg 'data/yolov3.cfg' --weights 'data/yolov3.weights' --output 'data/' --prefix 'yolov3/' --gpu 0

I'm getting the following error:
python3 main.py --cfg 'data/yolov3.cfg' --weights 'data/yolov3.weights' --output 'data/' --prefix 'yolov3/' --gpu 0

Traceback (most recent call last):
  File "main.py", line 112, in <module>
    main(args)
  File "main.py", line 53, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 25, in parse_net
    for ith, layer in enumerate(cfg_walker):
  File "/Users/mice/Desktop/DW2TF-master/util/reader.py", line 94, in get_block
    key, value = line[0:2]
ValueError: not enough values to unpack (expected 2, got 1)

My python version is: Python 3.6.8 :: Anaconda, Inc.

Can you please help and tell me how to convert that YOLOv3 model?

Is CPU conversion supported? I tried to convert on a Mac, but got an error

Is CPU conversion supported? I tried to convert on a Mac, but got this error:

Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main(args)
  File "main.py", line 57, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 25, in parse_net
    for ith, layer in enumerate(cfg_walker):

Assert in "/Users/Test/code/DW2TF/util/cfg_layer.py", line 162

DW2TF looked like exactly what I needed to convert a Darknet model to TensorFlow. I have trained a custom model based on yolov3-tiny.cfg with some custom changes; one of them is changing the stride in one of the layers to 8.

This triggers an assert in "/Users/Test/code/DW2TF/util/cfg_layer.py", line 162

Can this be fixed?

default value to pad

Hi,
When pad is not specified in the cfg file, as in enet-coco.cfg, the conversion fails. I think something like this would cover it safely:
pad = 'same' if 'pad' in param and param['pad'] == '1' else 'valid'

generator raised StopIteration

python main.py --cfg testv3.cfg --weights testv3.weights --output data/ --prefix yolov3/ --gpu 0

C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From main.py:56: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.

WARNING:tensorflow:From C:\cfg\DW2TF-master\DW2TF-master\util\cfg_layer.py:32: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

0 Tensor("yolov3/net1:0", shape=(?, 832, 832, 3), dtype=float32)
WARNING:tensorflow:From C:\Users\gaonpf\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py:507: calling Constant.init (from tensorflow.python.ops.init_ops) with verify_shape is deprecated and will be removed in a future version.
Instructions for updating:
Objects must now be the required shape or no shape can be specified
WARNING:tensorflow:From C:\cfg\DW2TF-master\DW2TF-master\util\cfg_layer.py:74: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.Conv2D instead.
WARNING:tensorflow:From C:\cfg\DW2TF-master\DW2TF-master\util\cfg_layer.py:93: batch_normalization (from tensorflow.python.layers.normalization) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, tf.control_dependencies(tf.GraphKeys.UPDATE_OPS) should not be used (consult the tf.keras.layers.batch_normalization documentation).
1 Tensor("yolov3/convolutional1/Activation:0", shape=(?, 832, 832, 32), dtype=float32)
2 Tensor("yolov3/convolutional2/Activation:0", shape=(?, 416, 416, 64), dtype=float32)
3 Tensor("yolov3/convolutional3/Activation:0", shape=(?, 416, 416, 32), dtype=float32)
4 Tensor("yolov3/convolutional4/Activation:0", shape=(?, 416, 416, 64), dtype=float32)
5 Tensor("yolov3/shortcut1:0", shape=(?, 416, 416, 64), dtype=float32)
6 Tensor("yolov3/convolutional5/Activation:0", shape=(?, 208, 208, 128), dtype=float32)
7 Tensor("yolov3/convolutional6/Activation:0", shape=(?, 208, 208, 64), dtype=float32)
8 Tensor("yolov3/convolutional7/Activation:0", shape=(?, 208, 208, 128), dtype=float32)
9 Tensor("yolov3/shortcut2:0", shape=(?, 208, 208, 128), dtype=float32)
10 Tensor("yolov3/convolutional8/Activation:0", shape=(?, 208, 208, 64), dtype=float32)
11 Tensor("yolov3/convolutional9/Activation:0", shape=(?, 208, 208, 128), dtype=float32)
12 Tensor("yolov3/shortcut3:0", shape=(?, 208, 208, 128), dtype=float32)
13 Tensor("yolov3/convolutional10/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
14 Tensor("yolov3/convolutional11/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
15 Tensor("yolov3/convolutional12/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
16 Tensor("yolov3/shortcut4:0", shape=(?, 104, 104, 256), dtype=float32)
17 Tensor("yolov3/convolutional13/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
18 Tensor("yolov3/convolutional14/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
19 Tensor("yolov3/shortcut5:0", shape=(?, 104, 104, 256), dtype=float32)
20 Tensor("yolov3/convolutional15/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
21 Tensor("yolov3/convolutional16/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
22 Tensor("yolov3/shortcut6:0", shape=(?, 104, 104, 256), dtype=float32)
23 Tensor("yolov3/convolutional17/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
24 Tensor("yolov3/convolutional18/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
25 Tensor("yolov3/shortcut7:0", shape=(?, 104, 104, 256), dtype=float32)
26 Tensor("yolov3/convolutional19/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
27 Tensor("yolov3/convolutional20/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
28 Tensor("yolov3/shortcut8:0", shape=(?, 104, 104, 256), dtype=float32)
29 Tensor("yolov3/convolutional21/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
30 Tensor("yolov3/convolutional22/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
31 Tensor("yolov3/shortcut9:0", shape=(?, 104, 104, 256), dtype=float32)
32 Tensor("yolov3/convolutional23/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
33 Tensor("yolov3/convolutional24/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
34 Tensor("yolov3/shortcut10:0", shape=(?, 104, 104, 256), dtype=float32)
35 Tensor("yolov3/convolutional25/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
36 Tensor("yolov3/convolutional26/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
37 Tensor("yolov3/shortcut11:0", shape=(?, 104, 104, 256), dtype=float32)
38 Tensor("yolov3/convolutional27/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
39 Tensor("yolov3/convolutional28/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
40 Tensor("yolov3/convolutional29/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
41 Tensor("yolov3/shortcut12:0", shape=(?, 52, 52, 512), dtype=float32)
42 Tensor("yolov3/convolutional30/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
43 Tensor("yolov3/convolutional31/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
44 Tensor("yolov3/shortcut13:0", shape=(?, 52, 52, 512), dtype=float32)
45 Tensor("yolov3/convolutional32/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
46 Tensor("yolov3/convolutional33/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
47 Tensor("yolov3/shortcut14:0", shape=(?, 52, 52, 512), dtype=float32)
48 Tensor("yolov3/convolutional34/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
49 Tensor("yolov3/convolutional35/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
50 Tensor("yolov3/shortcut15:0", shape=(?, 52, 52, 512), dtype=float32)
51 Tensor("yolov3/convolutional36/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
52 Tensor("yolov3/convolutional37/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
53 Tensor("yolov3/shortcut16:0", shape=(?, 52, 52, 512), dtype=float32)
54 Tensor("yolov3/convolutional38/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
55 Tensor("yolov3/convolutional39/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
56 Tensor("yolov3/shortcut17:0", shape=(?, 52, 52, 512), dtype=float32)
57 Tensor("yolov3/convolutional40/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
58 Tensor("yolov3/convolutional41/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
59 Tensor("yolov3/shortcut18:0", shape=(?, 52, 52, 512), dtype=float32)
60 Tensor("yolov3/convolutional42/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
61 Tensor("yolov3/convolutional43/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
62 Tensor("yolov3/shortcut19:0", shape=(?, 52, 52, 512), dtype=float32)
63 Tensor("yolov3/convolutional44/Activation:0", shape=(?, 26, 26, 1024), dtype=float32)
64 Tensor("yolov3/convolutional45/Activation:0", shape=(?, 26, 26, 512), dtype=float32)
65 Tensor("yolov3/convolutional46/Activation:0", shape=(?, 26, 26, 1024), dtype=float32)
66 Tensor("yolov3/shortcut20:0", shape=(?, 26, 26, 1024), dtype=float32)
67 Tensor("yolov3/convolutional47/Activation:0", shape=(?, 26, 26, 512), dtype=float32)
68 Tensor("yolov3/convolutional48/Activation:0", shape=(?, 26, 26, 1024), dtype=float32)
69 Tensor("yolov3/shortcut21:0", shape=(?, 26, 26, 1024), dtype=float32)
70 Tensor("yolov3/convolutional49/Activation:0", shape=(?, 26, 26, 512), dtype=float32)
71 Tensor("yolov3/convolutional50/Activation:0", shape=(?, 26, 26, 1024), dtype=float32)
72 Tensor("yolov3/shortcut22:0", shape=(?, 26, 26, 1024), dtype=float32)
73 Tensor("yolov3/convolutional51/Activation:0", shape=(?, 26, 26, 512), dtype=float32)
74 Tensor("yolov3/convolutional52/Activation:0", shape=(?, 26, 26, 1024), dtype=float32)
75 Tensor("yolov3/shortcut23:0", shape=(?, 26, 26, 1024), dtype=float32)
76 Tensor("yolov3/convolutional53/Activation:0", shape=(?, 26, 26, 512), dtype=float32)
77 Tensor("yolov3/convolutional54/Activation:0", shape=(?, 26, 26, 1024), dtype=float32)
78 Tensor("yolov3/convolutional55/Activation:0", shape=(?, 26, 26, 512), dtype=float32)
79 Tensor("yolov3/convolutional56/Activation:0", shape=(?, 26, 26, 1024), dtype=float32)
80 Tensor("yolov3/convolutional57/Activation:0", shape=(?, 26, 26, 512), dtype=float32)
81 Tensor("yolov3/convolutional58/Activation:0", shape=(?, 26, 26, 1024), dtype=float32)
82 Tensor("yolov3/convolutional59/BiasAdd:0", shape=(?, 26, 26, 255), dtype=float32)
83 Tensor("yolov3/convolutional59/BiasAdd:0", shape=(?, 26, 26, 255), dtype=float32)
84 Tensor("yolov3/route1:0", shape=(?, 26, 26, 512), dtype=float32)
85 Tensor("yolov3/convolutional60/Activation:0", shape=(?, 26, 26, 256), dtype=float32)
WARNING:tensorflow:From C:\cfg\DW2TF-master\DW2TF-master\util\cfg_layer.py:164: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.

86 Tensor("yolov3/upsample1:0", shape=(?, 52, 52, 256), dtype=float32)
87 Tensor("yolov3/route2:0", shape=(?, 52, 52, 768), dtype=float32)
88 Tensor("yolov3/convolutional61/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
89 Tensor("yolov3/convolutional62/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
90 Tensor("yolov3/convolutional63/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
91 Tensor("yolov3/convolutional64/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
92 Tensor("yolov3/convolutional65/Activation:0", shape=(?, 52, 52, 256), dtype=float32)
93 Tensor("yolov3/convolutional66/Activation:0", shape=(?, 52, 52, 512), dtype=float32)
94 Tensor("yolov3/convolutional67/BiasAdd:0", shape=(?, 52, 52, 255), dtype=float32)
95 Tensor("yolov3/convolutional67/BiasAdd:0", shape=(?, 52, 52, 255), dtype=float32)
96 Tensor("yolov3/route3:0", shape=(?, 52, 52, 256), dtype=float32)
97 Tensor("yolov3/convolutional68/Activation:0", shape=(?, 52, 52, 128), dtype=float32)
98 Tensor("yolov3/upsample2:0", shape=(?, 104, 104, 128), dtype=float32)
99 Tensor("yolov3/route4:0", shape=(?, 104, 104, 384), dtype=float32)
100 Tensor("yolov3/convolutional69/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
101 Tensor("yolov3/convolutional70/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
102 Tensor("yolov3/convolutional71/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
103 Tensor("yolov3/convolutional72/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
104 Tensor("yolov3/convolutional73/Activation:0", shape=(?, 104, 104, 128), dtype=float32)
105 Tensor("yolov3/convolutional74/Activation:0", shape=(?, 104, 104, 256), dtype=float32)
106 Tensor("yolov3/convolutional75/BiasAdd:0", shape=(?, 104, 104, 24), dtype=float32)
107 Tensor("yolov3/convolutional75/BiasAdd:0", shape=(?, 104, 104, 24), dtype=float32)
Traceback (most recent call last):
  File "C:\cfg\DW2TF-master\DW2TF-master\util\reader.py", line 84, in get_block
    line = next(line_getter)
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main(args)
  File "main.py", line 57, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 25, in parse_net
    for ith, layer in enumerate(cfg_walker):
RuntimeError: generator raised StopIteration

Why?

ValueError: ''yolov3-tiny/'net1' is not a valid scope name

Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main(args)
  File "main.py", line 57, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 33, in parse_net
    training=training, const_inits=const_inits, verbose=verbose)
  File "D:\yolo\extra\DW2TF-1.2_v2\util\cfg_layer.py", line 198, in get_cfg_layer
    layer = _cfg_layer_dict.get(layer_name, cfg_ignore)(B, H, W, C, net, param, weights_walker, stack, output_index, scope, training, const_inits, verbose)
  File "D:\yolo\extra\DW2TF-1.2_v2\util\cfg_layer.py", line 32, in cfg_net
    net = tf.compat.v1.placeholder(tf.float32, [None, width, height, channels], name=scope)
  File "C:\Users\MD-SHARIF-ULLAH.virtualenvs\yolo-lBfTfznQ\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 2630, in placeholder
    return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
  File "C:\Users\MD-SHARIF-ULLAH.virtualenvs\yolo-lBfTfznQ\lib\site-packages\tensorflow_core\python\ops\gen_array_ops.py", line 6670, in placeholder
    "Placeholder", dtype=dtype, shape=shape, name=name)
  File "C:\Users\MD-SHARIF-ULLAH.virtualenvs\yolo-lBfTfznQ\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 410, in _apply_op_helper
    with g.as_default(), ops.name_scope(name) as scope:
  File "C:\Users\MD-SHARIF-ULLAH.virtualenvs\yolo-lBfTfznQ\lib\site-packages\tensorflow_core\python\framework\ops.py", line 6349, in __enter__
    return self._name_scope.__enter__()
  File "c:\users\md-sharif-ullah\appdata\local\programs\python\python37\Lib\contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "C:\Users\MD-SHARIF-ULLAH.virtualenvs\yolo-lBfTfznQ\lib\site-packages\tensorflow_core\python\framework\ops.py", line 4132, in name_scope
    raise ValueError("'%s' is not a valid scope name" % name)
ValueError: ''yolov3-tiny/'net1' is not a valid scope name

Problem in converting model KeyError: 'width'

I cloned your repository to my directory. After that I tried to convert a yolo.weights file using your code, but I received this message and error:

Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main(args)
  File "main.py", line 57, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 33, in parse_net
    training=training, const_inits=const_inits, verbose=verbose)
  File "/home/ai-station/Desktop/workspace/navid/training/test/darknet/Convert_Model/loadingmodel/util/cfg_layer.py", line 198, in get_cfg_layer
    layer = _cfg_layer_dict.get(layer_name, cfg_ignore)(B, H, W, C, net, param, weights_walker, stack, output_index, scope, training, const_inits, verbose)
  File "/home/ai-station/Desktop/workspace/navid/training/test/darknet/Convert_Model/loadingmodel/util/cfg_layer.py", line 29, in cfg_net
    width = int(param["width"])
KeyError: 'width'

How could I resolve it?


minimal inference example with the TF model?

Thanks for the repository. I managed to convert my weights to .pb and .ckpt files. However, could you share or point to an example of inference using the model files, just to see which tensor is the input placeholder and which ones are the outputs?
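
For reference, a minimal TF 1.x inference sketch along the lines of the script in the batch-normalization issue above; the prefix, tensor names and input size are assumptions that depend on the cfg and --prefix you used:

import cv2
import numpy as np
import tensorflow as tf

# Hedged sketch: the input placeholder is '<prefix>net1:0' and the detection heads are
# the final BiasAdd ops; the exact names below are assumptions for yolov3-tiny.
saver = tf.train.import_meta_graph('data/yolov3-tiny.meta')
with tf.Session() as sess:
    saver.restore(sess, 'data/yolov3-tiny.ckpt')
    image = cv2.cvtColor(cv2.imread('sample.jpg'), cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (416, 416)).astype(np.float32) / 255.0
    out = sess.run('yolov3-tiny/convolutional10/BiasAdd:0',
                   feed_dict={'yolov3-tiny/net1:0': image[np.newaxis]})
    print(out.shape)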

Runtime error; I use Windows PyCharm to run this

Traceback (most recent call last):
  File "G:\darknet_to_tensorflow\DW2TF-1.2\util\reader.py", line 84, in get_block
    line = next(line_getter)
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "G:/darknet_to_tensorflow/DW2TF-1.2/main.py", line 116, in <module>
    main(args)
  File "G:/darknet_to_tensorflow/DW2TF-1.2/main.py", line 57, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "G:/darknet_to_tensorflow/DW2TF-1.2/main.py", line 25, in parse_net
    for ith, layer in enumerate(cfg_walker):
RuntimeError: generator raised StopIteration

AssertionError: Over-read models\yolov3.weights

python main.py --cfg models\yolov3.cfg --weights models\yolov3.weights --output data\ --gpu 0

C:\Users\me\Documents\DW2TF\util\reader.py:31: RuntimeWarning: overflow encountered in long_scalars
if ((major*10 + minor) >= 2 and major < 1000 and minor < 1000):
0 Tensor("yolov2/net1:0", shape=(?, 608, 608, 3), dtype=float32)
Traceback (most recent call last):
  File "main.py", line 112, in <module>
    main(args)
  File "main.py", line 53, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 33, in parse_net
    training=training, const_inits=const_inits, verbose=verbose)
  File "C:\Users\me\Documents\DW2TF\util\cfg_layer.py", line 198, in get_cfg_layer
    layer = _cfg_layer_dict.get(layer_name, cfg_ignore)(B, H, W, C, net, param, weights_walker, stack, output_index, scope, training, const_inits, verbose)
  File "C:\Users\me\Documents\DW2TF\util\cfg_layer.py", line 52, in cfg_convolutional
    batch_normalize=batch_normalize)
  File "C:\Users\me\Documents\DW2TF\util\reader.py", line 64, in get_weight
    return self.get_weight_convolutional(**args)
  File "C:\Users\me\Documents\DW2TF\util\reader.py", line 55, in get_weight_convolutional
    biases = self.walk(filters)
  File "C:\Users\me\Documents\DW2TF\util\reader.py", line 41, in walk
    assert end_point <= self.size, 'Over-read {}'.format(self.path)
AssertionError: Over-read models\yolov3.weights

converting yolov3-tiny got error

The output text is below. As I can see, after the last max pool the feature map size changes from 13 to 12. I don't understand why this happens. Please help.

0 Tensor("yolov3/net1:0", shape=(?, 416, 416, 3), dtype=float32)
1 Tensor("yolov3/convolutional1/Activation:0", shape=(?, 416, 416, 16), dtype=float32)
2 Tensor("yolov3/maxpool1/MaxPool:0", shape=(?, 208, 208, 16), dtype=float32)
3 Tensor("yolov3/convolutional2/Activation:0", shape=(?, 208, 208, 32), dtype=float32)
4 Tensor("yolov3/maxpool2/MaxPool:0", shape=(?, 104, 104, 32), dtype=float32)
5 Tensor("yolov3/convolutional3/Activation:0", shape=(?, 104, 104, 64), dtype=float32)
6 Tensor("yolov3/maxpool3/MaxPool:0", shape=(?, 52, 52, 64), dtype=float32)
7 Tensor("yolov3/convolutional4/Activation:0", shape=(?, 52, 52, 128), dtype=float32)
8 Tensor("yolov3/maxpool4/MaxPool:0", shape=(?, 26, 26, 128), dtype=float32)
9 Tensor("yolov3/convolutional5/Activation:0", shape=(?, 26, 26, 256), dtype=float32)
10 Tensor("yolov3/maxpool5/MaxPool:0", shape=(?, 13, 13, 256), dtype=float32)
11 Tensor("yolov3/convolutional6/Activation:0", shape=(?, 13, 13, 512), dtype=float32)
12 Tensor("yolov3/maxpool6/MaxPool:0", shape=(?, 12, 12, 512), dtype=float32)
13 Tensor("yolov3/convolutional7/Activation:0", shape=(?, 12, 12, 1024), dtype=float32)
14 Tensor("yolov3/convolutional8/Activation:0", shape=(?, 12, 12, 256), dtype=float32)
15 Tensor("yolov3/convolutional9/Activation:0", shape=(?, 12, 12, 512), dtype=float32)
16 Tensor("yolov3/convolutional10/BiasAdd:0", shape=(?, 12, 12, 30), dtype=float32)
17 Tensor("yolov3/convolutional10/BiasAdd:0", shape=(?, 12, 12, 30), dtype=float32)
18 Tensor("yolov3/route1:0", shape=(?, 12, 12, 256), dtype=float32)
19 Tensor("yolov3/convolutional11/Activation:0", shape=(?, 12, 12, 128), dtype=float32)
20 Tensor("yolov3/upsample1:0", shape=(?, 24, 24, 128), dtype=float32)
Traceback (most recent call last):
  File "/home/sg180824/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1589, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 1 in both shapes must be equal, but are 24 and 26. Shapes are [?,24,24] and [?,26,26]. for 'yolov3/route2' (op: 'ConcatV2') with input shapes: [?,24,24,128], [?,26,26,128], [] and with computed input tensors: input[2] = <-1>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "DW2TF/main.py", line 111, in <module>
    main(args)
  File "DW2TF/main.py", line 52, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "DW2TF/main.py", line 32, in parse_net
    training=training, const_inits=const_inits, verbose=verbose)
  File "/raid/jupyterhub/notebook/sg180824/minkoo/yolo/DW2TF/util/cfg_layer.py", line 197, in get_cfg_layer
    layer = _cfg_layer_dict.get(layer_name, cfg_ignore)(B, H, W, C, net, param, weights_walker, stack, output_index, scope, training, const_inits, verbose)
  File "/raid/jupyterhub/notebook/sg180824/minkoo/yolo/DW2TF/util/cfg_layer.py", line 131, in cfg_route
    net = tf.concat(nets, axis=-1, name=scope)
  File "/home/sg180824/.local/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1113, in concat
    return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
  File "/home/sg180824/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1029, in concat_v2
    "ConcatV2", values=values, axis=axis, name=name)
  File "/home/sg180824/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/sg180824/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
    op_def=op_def)
  File "/home/sg180824/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1756, in __init__
    control_input_ops)
  File "/home/sg180824/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1592, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimension 1 in both shapes must be equal, but are 24 and 26. Shapes are [?,24,24] and [?,26,26]. for 'yolov3/route2' (op: 'ConcatV2') with input shapes: [?,24,24,128], [?,26,26,128], [] and with computed input tensors: input[2] = <-1>.
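
For what it's worth, the 13 -> 12 step at maxpool6 in the log above is consistent with a 2x2, stride-1 max pool using 'valid' padding (floor((13 - 2) / 1) + 1 = 12), whereas 'same' padding keeps 13x13. A minimal check of that arithmetic, not a claim about where the repo sets its padding:

import tensorflow as tf

# Hedged sketch: shows how padding changes the output size of a 2x2, stride-1 max pool.
x = tf.placeholder(tf.float32, [None, 13, 13, 256])
valid = tf.layers.max_pooling2d(x, pool_size=2, strides=1, padding='valid')
same = tf.layers.max_pooling2d(x, pool_size=2, strides=1, padding='same')
print(valid.shape)  # (?, 12, 12, 256)
print(same.shape)   # (?, 13, 13, 256)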

TypeError: Expected Tensor's shape: (1,), got ().

Hi, I'm trying to convert custom YOLOv3 weights and cfg but I failed. Can someone help me? Thanks in advance.

68 Tensor("yolov3/convolutional48/Activation:0", shape=(?, 13, 13, 1024), dtype=float32)
69 Tensor("yolov3/shortcut21:0", shape=(?, 13, 13, 1024), dtype=float32)
Traceback (most recent call last):
File "main.py", line 116, in
main(args)
File "main.py", line 57, in main
parse_net(args.layers, args.cfg, args.weights, args.training)
File "main.py", line 33, in parse_net
training=training, const_inits=const_inits, verbose=verbose)
File "/home/linux/Bureau/DW2TF/util/cfg_layer.py", line 198, in get_cfg_layer
layer = _cfg_layer_dict.get(layer_name, cfg_ignore)(B, H, W, C, net, param, weights_walker, stack, output_index, scope, training, const_inits, verbose)
File "/home/linux/Bureau/DW2TF/util/cfg_layer.py", line 93, in cfg_convolutional
net = tf.layers.batch_normalization(net, name=scope+'/BatchNorm', **batch_norm_args)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/layers/normalization.py", line 327, in batch_normalization
return layer.apply(inputs, training=training)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1479, in apply
return self.call(inputs, *args, **kwargs)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 537, in call
outputs = super(Layer, self).call(inputs, *args, **kwargs)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 591, in call
self._maybe_build(inputs)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1881, in _maybe_build
self.build(input_shapes)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/keras/layers/normalization.py", line 358, in build
experimental_autocast=False)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 450, in add_weight
**kwargs)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 384, in add_weight
aggregation=aggregation)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/training/tracking/base.py", line 663, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1496, in get_variable
aggregation=aggregation)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1239, in get_variable
aggregation=aggregation)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 562, in get_variable
aggregation=aggregation)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 514, in _true_getter
aggregation=aggregation)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 929, in _get_single_variable
aggregation=aggregation)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 259, in call
return cls._variable_v1_call(*args, **kwargs)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 220, in _variable_v1_call
shape=shape)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 198, in
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2511, in default_variable_creator
shape=shape)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 263, in call
return super(VariableMetaclass, cls).call(*args, **kwargs)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1568, in init
shape=shape)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1698, in _init_from_args
initial_value(), name="initial_value", dtype=dtype)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 901, in
partition_info=partition_info)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py", line 244, in call
self.value, dtype=dtype, shape=shape, verify_shape=verify_shape)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 180, in constant_v1
allow_broadcast=False)
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 284, in _constant_impl
allow_broadcast=allow_broadcast))
File "/home/linux/.local/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 523, in make_tensor_proto
(tuple(shape), nparray.shape))
TypeError: Expected Tensor's shape: (1,), got ().

How to convert yolo layer output to bounding box coordinates?

yolo layers seem to have an output shape of

[ batch, height, width, 3  * (5 + num_classes)]

where height and width are the size of the grid.

This is what I've tried, referencing model.py:

def get_bbox_with_anchors(prediction, anchors, classes_num=8):
    # split by anchor box
    # split into three
    split_preds = tf.split(prediction, 3, axis=-1)
    pred = tf.stack(split_preds, axis=-2)
    
    # get grid shape
    grid_size = tf.shape(pred)[1]
    box_xy, box_wh, objectness, class_probs = tf.split(
        pred, (2, 2, 1, classes_num), axis=-1)
    
    # apply transforms
    # box_xy = tf.sigmoid(box_xy)
    # darknet does not seem to appy sigmoid on offset
    objectness = tf.sigmoid(objectness)
    class_probs = tf.sigmoid(class_probs)
    pred_box = tf.concat((box_xy, box_wh), axis=-1)  # original xywh for loss

    # calculate offsets
    grid = tf.meshgrid(tf.range(grid_size), tf.range(grid_size))
    grid = tf.expand_dims(tf.stack(grid, axis=-1), axis=2)  # [gx, gy, 1, 2]

    box_xy = (box_xy + tf.cast(grid, tf.float32)) / \
        tf.cast(grid_size, tf.float32)
    box_wh = tf.exp(box_wh) * anchors

    box_x1y1 = box_xy - box_wh / 2
    box_x2y2 = box_xy + box_wh / 2
    bbox = tf.concat([box_x1y1, box_x2y2], axis=-1)

    return bbox, objectness, class_probs

Overlaying these bboxes on the original image seems to be off quite a bit.
I am not sure what I am doing wrong.
Any help is appreciated.

Problems with converted model

Hi,

I'm trying to convert a tiny YOLOv3, trained on a custom dataset with a single class, to Core ML.
I managed to convert it to TensorFlow using this DW2TF tool (just some deprecation warnings), but when I try converting the TensorFlow graph to Core ML using tf-coreml, I get the following errors:

Graph Loaded.
Collecting all the 'Const' ops from the graph, by running it....
Traceback (most recent call last):
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value network/convolutional11/BatchNorm/moving_variance
[[{{node network/convolutional11/BatchNorm/moving_variance/read}}]]
[[{{node network/convolutional4/BatchNorm/moving_variance/Initializer/ones}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "ts_coreml.py", line 9, in
image_scale = 1 / 255.0)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py", line 586, in convert
custom_conversion_functions=custom_conversion_functions)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py", line 243, in _convert_pb_to_mlmodel
tensors_evaluated = sess.run(tensors, feed_dict=input_feed_dict)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value network/convolutional11/BatchNorm/moving_variance
[[node network/convolutional11/BatchNorm/moving_variance/read (defined at /home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py:153) ]]
[[node network/convolutional4/BatchNorm/moving_variance/Initializer/ones (defined at /home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py:153) ]]

Caused by op 'network/convolutional11/BatchNorm/moving_variance/read', defined at:
File "ts_coreml.py", line 9, in
image_scale = 1 / 255.0)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py", line 586, in convert
custom_conversion_functions=custom_conversion_functions)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py", line 153, in _convert_pb_to_mlmodel
tf.import_graph_def(gdef, name='')
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 442, in import_graph_def
_ProcessNewOps(graph)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 235, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3433, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3433, in
for c_op in c_api_util.new_tf_operations(self)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3325, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "/home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1801, in init
self._traceback = tf_stack.extract_stack()

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value network/convolutional11/BatchNorm/moving_variance
[[node network/convolutional11/BatchNorm/moving_variance/read (defined at /home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py:153) ]]
[[node network/convolutional4/BatchNorm/moving_variance/Initializer/ones (defined at /home/dock/.conda/envs/custommodelapp_dw/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py:153) ]]

I used this simple code to try to make this conversion:

import tfcoreml as tf_converter

tf_converter.convert(tf_model_path = 'data/custommodel_tiny.pb',
mlmodel_path = 'data/custommodel_tiny.mlmodel',
output_feature_names = ['detect_1', 'detect_2'],
is_bgr = False,
input_name_shape_dict = {'input1' : [1, 608, 608, 3]},
image_input_names = ['input1'],
image_scale = 1 / 255.0)

I have had success with tf-coreml in other conversions, and since the errors point to uninitialized variables (a failed precondition), I think something in the darknet -> tensorflow conversion didn't go quite right.
I viewed the produced .pb in TensorBoard:
[TensorBoard graph screenshot]

 Any ideas?
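For reference, a FailedPreconditionError about an uninitialized BatchNorm moving_variance usually means the .pb still contains tf.Variable nodes rather than frozen constants. Below is a minimal freezing sketch (TF 1.x, not the project's own tooling), assuming the DW2TF checkpoint was written next to the .pb; the paths and output node names are placeholders to be replaced with the real ones from TensorBoard:

    import tensorflow as tf

    # Hypothetical paths/names; adjust to the actual DW2TF outputs.
    ckpt = 'data/custommodel_tiny.ckpt'
    output_nodes = ['network/convolutional10/BiasAdd',   # placeholder output names
                    'network/convolutional13/BiasAdd']

    with tf.Session() as sess:
        # Rebuild the graph from the .meta file and restore the converted weights
        saver = tf.train.import_meta_graph(ckpt + '.meta')
        saver.restore(sess, ckpt)
        # Bake the restored variables (including BN moving stats) into constants
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_nodes)
        with tf.gfile.GFile('data/custommodel_tiny_frozen.pb', 'wb') as f:
            f.write(frozen.SerializeToString())

The frozen .pb could then be handed to tf_converter.convert in place of the original one.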

tiny yolo v3 conversion

When I use your tool to convert my cfg and weights files to TensorFlow models, there seems to be an error. Below is the resulting graph from TensorBoard; I think the connection down the right-hand side should go to a convolutional block (specifically, convolutional5), not a maxpool block.
[TensorBoard graph screenshot]

AlexeyAB's explanation of route layers shows yolov2 has a similar connection to a convolutional block.

Also, when I tried converting with this similar repo, it connects to a convolutional block.

I'm not 100% sure I've understood route blocks correctly, but it could be that you are counting the input net block (net1) as a layer. If so, the index should be incremented (in this case from 8 to 9). Of course, that only applies where the route index is positive.
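To make the off-by-one concrete, here is a hypothetical sketch of route-index resolution (not DW2TF's actual code), assuming one entry per parsed cfg block with the [net] block in slot 0:

    # Negative route indices are relative to the current layer; positive ones
    # are absolute layer indices, so they must be shifted if [net] occupies slot 0.
    def resolve_route(current_index, route_index, count_net_as_block=True):
        if route_index < 0:
            return current_index + route_index
        return route_index + (1 if count_net_as_block else 0)

    # Per the suggestion above, a cfg route index of 8 would then resolve to
    # block 9 (convolutional5 in the tiny yolo v3 layout) instead of maxpool4.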

Pre-trained biases are ignored when batch_norm is present using tf-slim API

Issue

As per tf-slim conv2d documentation:

normalizer_fn: Normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. default set to None for no normalizer function

In DW2TF, biases were being assigned using biases_initializer, which would silently get ignored when normalizer_fn was provided.

Examples

Conv without BN (biases are populated correctly):
[TensorBoard screenshot: without_bn]

Conv with BN (biases are ignored):
[TensorBoard screenshot: with_bn]

Solution

The correct way to handle this is to load the pre-trained biases through the beta initializer of the normalizer.

I've fixed this using tf.layers, where biases are added to the convolution when BN is absent, or absorbed into the beta of BN when it is present. PR is on the way.
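For illustration, a minimal sketch of that behavior with tf.layers (the helper name and parameters below are assumptions, not the repo's exact signature): the Darknet biases feed the conv bias initializer when there is no BN, and the BN beta initializer when there is.

    import tensorflow as tf

    # `weights`, `biases`, `gamma`, `mean`, `var` are assumed to be the arrays
    # read from the Darknet .weights file for this layer.
    def conv_block(net, weights, biases, gamma=None, mean=None, var=None,
                   filters=32, size=3, stride=1, batch_normalize=False, training=False):
        net = tf.layers.conv2d(
            net, filters, size, stride, padding='same',
            use_bias=not batch_normalize,          # no conv bias when BN is present
            kernel_initializer=tf.initializers.constant(weights, verify_shape=True),
            bias_initializer=None if batch_normalize
            else tf.initializers.constant(biases, verify_shape=True))
        if batch_normalize:
            # Darknet biases become the BN beta (shift); gamma/mean/var load as usual
            net = tf.layers.batch_normalization(
                net, training=training,
                beta_initializer=tf.initializers.constant(biases, verify_shape=True),
                gamma_initializer=tf.initializers.constant(gamma, verify_shape=True),
                moving_mean_initializer=tf.initializers.constant(mean, verify_shape=True),
                moving_variance_initializer=tf.initializers.constant(var, verify_shape=True))
        return net

With use_bias=False on the convolution, nothing is silently dropped: every pre-trained array ends up in exactly one variable.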

Weights format mismatches with Darkflow / Darknet

Issue

I was trying to (numerically) compare the DW2TF-converted TF weights with the Darkflow-converted TF weights. I found that the weights were reshaped incorrectly in DW2TF, which results in different output tensors from the convolutional layers of the two graphs. This renders the pre-trained graph pretty much useless.

Details

Convolutional weights read from Darknet .weights are of shape:
[n, c, ksize, ksize]
[reference]

They are then transposed to match with TensorFlow:
[ksize, ksize, c, n]
[reference]

However, in DW2TF the resulting weights from reshape+transpose are of shape:
[ksize, ksize, n, c]
[reference]
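A quick NumPy sketch (with made-up sizes) shows the layout conversion that should happen:

    import numpy as np

    # Darknet stores conv weights flat, in [n, c, ksize, ksize] order.
    n, c, ksize = 64, 32, 3
    flat = np.arange(n * c * ksize * ksize, dtype=np.float32)

    # Reshape to the Darknet layout, then transpose to TensorFlow's
    # [ksize, ksize, c, n].
    tf_weights = flat.reshape(n, c, ksize, ksize).transpose(2, 3, 1, 0)
    print(tf_weights.shape)  # (3, 3, 32, 64)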

Why this was not caught as a shape mismatch

The fact that any tensor fed to tf.initializers.constant() is stored as raw bytes (serialized in memory) means that any shape mismatch is silently ignored as long as the overall size matches. To catch such dangerous shape mismatches, the argument verify_shape=True should be provided to the initializer.

For example, if the expected weights.shape is [3, 3, 32, 64] but the weights const initializer is fed a tensor of shape [3, 3, 64, 32], TensorFlow doesn't complain, UNLESS verify_shape=True is provided, in which case it fails with the error:

  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1487, in get_variable
    aggregation=aggregation)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1237, in get_variable
    aggregation=aggregation)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 540, in get_variable
    aggregation=aggregation)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 492, in _true_getter
    aggregation=aggregation)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 922, in _get_single_variable
    aggregation=aggregation)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 183, in __call__
    return cls._variable_v1_call(*args, **kwargs)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 146, in _variable_v1_call
    aggregation=aggregation)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 125, in <lambda>
    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2444, in default_variable_creator
    expected_shape=expected_shape, import_scope=import_scope)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 187, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1329, in __init__
    constraint=constraint)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1437, in _init_from_args
    initial_value(), name="initial_value", dtype=dtype)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 896, in <lambda>
    shape.as_list(), dtype=dtype, partition_info=partition_info)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py", line 220, in __call__
    self.value, dtype=dtype, shape=shape, verify_shape=verify_shape)
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 208, in constant
    value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/scratch/anaconda3/envs/venv/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 492, in make_tensor_proto
    (tuple(shape), nparray.shape))
TypeError: Expected Tensor's shape: (3, 3, 32, 64), got (3, 3, 64, 32).

Fix

Update this line to

    weights = weights.reshape(filters, C, size, size).transpose([2, 3, 1, 0])

and add verify_shape=True to all const initializers.
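For example, a const initializer with shape verification turned on (TF 1.x; the values below are dummy stand-ins) would look roughly like:

    import numpy as np
    import tensorflow as tf

    # Dummy stand-ins for one layer read from the .weights file
    size, C, filters = 3, 32, 64
    weights = np.zeros((size, size, C, filters), dtype=np.float32)

    # verify_shape=True makes get_variable fail loudly on any shape mismatch
    init = tf.initializers.constant(weights, verify_shape=True)
    kernel = tf.get_variable('weights', shape=[size, size, C, filters],
                             initializer=init)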

PR to follow.

Unable to Convert - RuntimeError: generator raised StopIteration

I ran into the following problem:

Sees-MacBook-Pro:DW2TF See$ python3 main.py --cfg ../prod_model/yolov3-tiny.cfg  --weights ../prod_model/yolov3-tiny_final-TL.weights --output ../prod_model/ --prefix yolov3-tiny/ --gpu 0
WARNING: Logging before flag parsing goes to stderr.
W0709 15:39:03.818010 4551931328 deprecation_wrapper.py:119] From main.py:52: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.

W0709 15:39:03.827378 4551931328 deprecation_wrapper.py:119] From /Users/See/PycharmProjects/python-traffic-counter-with-yolo-and-sort/DW2TF/util/cfg_layer.py:32: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

0 Tensor("yolov3-tiny/net1:0", shape=(?, 608, 608, 3), dtype=float32)
W0709 15:39:03.866192 4551931328 deprecation.py:506] From /usr/local/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py:507: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with verify_shape is deprecated and will be removed in a future version.
Instructions for updating:
Objects must now be the required shape or no shape can be specified
W0709 15:39:03.866840 4551931328 deprecation.py:323] From /Users/See/PycharmProjects/python-traffic-counter-with-yolo-and-sort/DW2TF/util/cfg_layer.py:74: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.keras.layers.Conv2D` instead.
W0709 15:39:04.044296 4551931328 deprecation.py:323] From /Users/See/PycharmProjects/python-traffic-counter-with-yolo-and-sort/DW2TF/util/cfg_layer.py:93: batch_normalization (from tensorflow.python.layers.normalization) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead.  In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
1 Tensor("yolov3-tiny/convolutional1/Activation:0", shape=(?, 608, 608, 16), dtype=float32)
W0709 15:39:04.123064 4551931328 deprecation.py:323] From /Users/See/PycharmProjects/python-traffic-counter-with-yolo-and-sort/DW2TF/util/cfg_layer.py:108: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
2 Tensor("yolov3-tiny/maxpool1/MaxPool:0", shape=(?, 304, 304, 16), dtype=float32)
3 Tensor("yolov3-tiny/convolutional2/Activation:0", shape=(?, 304, 304, 32), dtype=float32)
4 Tensor("yolov3-tiny/maxpool2/MaxPool:0", shape=(?, 152, 152, 32), dtype=float32)
5 Tensor("yolov3-tiny/convolutional3/Activation:0", shape=(?, 152, 152, 64), dtype=float32)
6 Tensor("yolov3-tiny/maxpool3/MaxPool:0", shape=(?, 76, 76, 64), dtype=float32)
7 Tensor("yolov3-tiny/convolutional4/Activation:0", shape=(?, 76, 76, 128), dtype=float32)
8 Tensor("yolov3-tiny/maxpool4/MaxPool:0", shape=(?, 38, 38, 128), dtype=float32)
9 Tensor("yolov3-tiny/convolutional5/Activation:0", shape=(?, 38, 38, 256), dtype=float32)
10 Tensor("yolov3-tiny/maxpool5/MaxPool:0", shape=(?, 19, 19, 256), dtype=float32)
11 Tensor("yolov3-tiny/convolutional6/Activation:0", shape=(?, 19, 19, 512), dtype=float32)
12 Tensor("yolov3-tiny/maxpool6/MaxPool:0", shape=(?, 19, 19, 512), dtype=float32)
13 Tensor("yolov3-tiny/convolutional7/Activation:0", shape=(?, 19, 19, 1024), dtype=float32)
14 Tensor("yolov3-tiny/convolutional8/Activation:0", shape=(?, 19, 19, 256), dtype=float32)
15 Tensor("yolov3-tiny/convolutional9/Activation:0", shape=(?, 19, 19, 512), dtype=float32)
16 Tensor("yolov3-tiny/convolutional10/BiasAdd:0", shape=(?, 19, 19, 18), dtype=float32)
17 Tensor("yolov3-tiny/convolutional10/BiasAdd:0", shape=(?, 19, 19, 18), dtype=float32)
18 Tensor("yolov3-tiny/route1:0", shape=(?, 19, 19, 256), dtype=float32)
19 Tensor("yolov3-tiny/convolutional11/Activation:0", shape=(?, 19, 19, 128), dtype=float32)
W0709 15:39:04.944850 4551931328 deprecation_wrapper.py:119] From /Users/See/PycharmProjects/python-traffic-counter-with-yolo-and-sort/DW2TF/util/cfg_layer.py:164: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.

20 Tensor("yolov3-tiny/upsample1:0", shape=(?, 38, 38, 128), dtype=float32)
21 Tensor("yolov3-tiny/route2:0", shape=(?, 38, 38, 256), dtype=float32)
22 Tensor("yolov3-tiny/convolutional12/Activation:0", shape=(?, 38, 38, 256), dtype=float32)
23 Tensor("yolov3-tiny/convolutional13/BiasAdd:0", shape=(?, 38, 38, 18), dtype=float32)
24 Tensor("yolov3-tiny/convolutional13/BiasAdd:0", shape=(?, 38, 38, 18), dtype=float32)
Traceback (most recent call last):
  File "/Users/See/PycharmProjects/python-traffic-counter-with-yolo-and-sort/DW2TF/util/reader.py", line 84, in get_block
    line = next(line_getter)
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "main.py", line 112, in <module>
    main(args)
  File "main.py", line 53, in main
    parse_net(args.layers, args.cfg, args.weights, args.training)
  File "main.py", line 25, in parse_net
    for ith, layer in enumerate(cfg_walker):
RuntimeError: generator raised StopIteration

Any suggestions on how to solve this?
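For context, the traceback shows Python 3.7, where PEP 479 turns a StopIteration escaping a generator into a RuntimeError (which matches the README's note about known issues with Python 3.7). Below is a hedged sketch of the kind of change that avoids it in a cfg-walking generator like util/reader.py's get_block (not the repo's exact code):

    # Under Python 3.7+, the end of input must be handled with `return`
    # instead of letting StopIteration leak out of the generator body.
    def get_block(line_getter):
        while True:
            try:
                line = next(line_getter)
            except StopIteration:
                return  # end the generator cleanly
            yield line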
