
edge-tpu-tiny-yolo's Issues

Tiny model conversion fails

Edit #3 (a tough day at the office)

Hi, I don't know if this issue has been opened before, but I'm running into a problem converting the tiny model. I downloaded the tiny weights and cfg from the pjreddie site and successfully converted the model to a Keras one.

Now, when I try to convert the model to tflite, it tells me that the fully quantized model has been converted, with the following output:

2020-09-09 18:36:58.637167: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2020-09-09 18:36:58.637217: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:tf.keras.backend.set_learning_phase is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the training argument of the __call__ method of your layer or model.
2020-09-09 18:37:00.669780: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-09-09 18:37:00.670767: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-09 18:37:00.673642: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-09-09 18:37:00.673682: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (a7fb2c0c58ea): /proc/driver/nvidia/version does not exist
2020-09-09 18:37:00.674021: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-09-09 18:37:00.674188: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
WARNING:tensorflow:No training configuration found in the save file, so the model was not compiled. Compile it manually.
2020-09-09 18:37:01.104764: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2020-09-09 18:37:01.104999: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-09-09 18:37:01.105335: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-09-09 18:37:01.105626: I tensorflow/core/platform/profile_utils/cpu_utils.cc:108] CPU Frequency: 2200000000 Hz
2020-09-09 18:37:01.109356: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:872] Optimization results for grappler item: graph_to_optimize
function_optimizer: function_optimizer did nothing. time = 0.027ms.
function_optimizer: function_optimizer did nothing. time = 0ms.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:109: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
2020-09-09 18:37:04.393755: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:109: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
2020-09-09 18:37:07.257855: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert_saved_model.py:60: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
2020-09-09 18:37:07.478564: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2020-09-09 18:37:07.637655: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-09-09 18:37:08.125680: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2020-09-09 18:37:08.125892: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-09-09 18:37:08.126235: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-09-09 18:37:08.229160: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:872] Optimization results for grappler item: graph_to_optimize
function_optimizer: Graph size after: 632 nodes (507), 1238 edges (998), time = 13.368ms.
function_optimizer: function_optimizer did nothing. time = 0.351ms.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/util.py:326: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py:856: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
2020-09-09 18:37:08.605841: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:315] Ignored output_format.
2020-09-09 18:37:08.605918: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:318] Ignored drop_control_dependency.
2020-09-09 18:37:08.678676: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set

Meanwhile, this is what happens when I install the compiler:

% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 653 100 653 0 0 24185 0 --:--:-- --:--:-- --:--:-- 25115
OK
deb https://packages.cloud.google.com/apt coral-edgetpu-stable main
Get:1 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease [21.3 kB]
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:4 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic InRelease [15.4 kB]
Get:5 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ InRelease [3,626 B]
Get:6 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:7 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:8 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic/main amd64 Packages [43.0 kB]
Ign:9 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease
Ign:10 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease
Hit:11 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release
Get:12 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release [564 B]
Get:13 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic/main Sources [1,864 kB]
Get:14 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release.gpg [833 B]
Get:15 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ Packages [95.7 kB]
Get:16 https://packages.cloud.google.com/apt coral-edgetpu-stable InRelease [6,332 B]
Get:17 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic/main amd64 Packages [900 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1,384 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1,425 kB]
Get:20 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [27.7 kB]
Get:21 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [132 kB]
Get:23 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Packages [47.5 kB]
Get:24 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [897 kB]
Get:25 https://packages.cloud.google.com/apt coral-edgetpu-stable/main amd64 Packages [1,284 B]
Get:26 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [1,089 kB]
Get:27 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [10.1 kB]
Get:28 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [116 kB]
Fetched 8,333 kB in 2s (4,365 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
libnvidia-common-440
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
libedgetpu1-std
The following NEW packages will be installed:
edgetpu-compiler libedgetpu1-std
0 upgraded, 2 newly installed, 0 to remove and 74 not upgraded.
Need to get 6,770 kB of archives.
After this operation, 25.5 MB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt coral-edgetpu-stable/main amd64 libedgetpu1-std amd64 14.1 [311 kB]
Get:2 https://packages.cloud.google.com/apt coral-edgetpu-stable/main amd64 edgetpu-compiler amd64 14.1 [6,458 kB]
Fetched 6,770 kB in 1s (5,476 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 2.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
Selecting previously unselected package libedgetpu1-std:amd64.
(Reading database ... 144579 files and directories currently installed.)
Preparing to unpack .../libedgetpu1-std_14.1_amd64.deb ...
Unpacking libedgetpu1-std:amd64 (14.1) ...
Selecting previously unselected package edgetpu-compiler.
Preparing to unpack .../edgetpu-compiler_14.1_amd64.deb ...
Unpacking edgetpu-compiler (14.1) ...
Setting up libedgetpu1-std:amd64 (14.1) ...
Setting up edgetpu-compiler (14.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
/sbin/ldconfig.real: /usr/local/lib/python3.6/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link

The conversion does produce a tflite file. However, when I try to compile it via edgetpu_compiler output-filename.tflite, I get:

Edge TPU Compiler version 14.1.317412892
Invalid model: output-filename.tflite
Model not quantized

I'll wait for news from you. Thanks in advance.
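
[Editor's note: the "Model not quantized" message means the compiler found float ops left in the graph, i.e. the model is not fully integer-quantized. For reference, a minimal conversion sketch, not the repo's exact script: the TF2 converter API, the model path, and the random calibration data are assumptions for illustration (real images from the training set should be used in practice).]

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Calibration samples for the quantizer; random data keeps this sketch
    # self-contained, but real 416x416 RGB training images work much better.
    for _ in range(100):
        yield [np.random.rand(1, 416, 416, 3).astype(np.float32)]

model = tf.keras.models.load_model('mytest1_keras.h5')  # path is an assumption
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization; any float op left in the graph
# triggers the "Model not quantized" error from edgetpu_compiler.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
open('output-filename.tflite', 'wb').write(converter.convert())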

AttributeError: 'Delegate' object has no attribute '_library'

Python version
3.7.8
TensorFlow version
1.15.0
Command
py .\inference.py --model .\out_edgetpu.tflite --classes .\plc.names --image 1.jpg --edge_tpu --quant --anchors .\tiny_yolo_anchors.txt
Logs
2020-07-14 09:41:13.618078: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2020-07-14 09:41:13.626348: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File ".\inference.py", line 223, in <module>
    interpreter = make_interpreter(args.model, args.edge_tpu)
  File ".\inference.py", line 35, in make_interpreter
    tf.lite.experimental.load_delegate(EDGETPU_SHARED_LIB)
  File "C:\Users\test\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\lite\python\interpreter.py", line 165, in load_delegate
    delegate = Delegate(library, options)
  File "C:\Users\test\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\lite\python\interpreter.py", line 89, in __init__
    self._library = ctypes.pydll.LoadLibrary(library)
  File "C:\Users\test\AppData\Local\Programs\Python\Python37\lib\ctypes\__init__.py", line 442, in LoadLibrary
    return self._dlltype(name)
  File "C:\Users\test\AppData\Local\Programs\Python\Python37\lib\ctypes\__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
Exception ignored in: <function Delegate.__del__ at 0x000001B4AC7FE5E8>
Traceback (most recent call last):
  File "C:\Users\test\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\lite\python\interpreter.py", line 124, in __del__
    if self._library is not None:
AttributeError: 'Delegate' object has no attribute '_library'
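
[Editor's note: the underlying failure here is the OSError, not the AttributeError. WinError 126 means ctypes could not locate the Edge TPU runtime library, so the Delegate constructor fails before _library is ever set. A hedged sketch of delegate loading with the platform-specific library names documented by Coral; it assumes the Edge TPU runtime is installed and edgetpu.dll is on PATH.]

import platform
import tensorflow as tf

# Edge TPU runtime library names per platform (from the Coral documentation)
EDGETPU_SHARED_LIB = {
    'Linux': 'libedgetpu.so.1',
    'Darwin': 'libedgetpu.1.dylib',
    'Windows': 'edgetpu.dll',
}[platform.system()]

# Raises OSError ([WinError 126] on Windows) when the runtime library
# is not on the loader's search path, as in the traceback above.
delegate = tf.lite.experimental.load_delegate(EDGETPU_SHARED_LIB)
interpreter = tf.lite.Interpreter(model_path='out_edgetpu.tflite',
                                  experimental_delegates=[delegate])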

Cannot compile .tflite to Edge TPU tflite

I trained tiny-yolov3 on darknet and then converted it to a Keras model by following the instructions.
But when I use edgetpu_compiler I get this error:

Edge TPU Compiler version 16.0.384591198
Started a compilation timeout timer of 180 seconds.
ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
Compilation failed: Model failed in Tflite interpreter. Please ensure model can be loaded/run in Tflite interpreter.
Compilation child process completed within timeout period.
Compilation failed!

How can I solve it?

OS: ubuntu 20.04
tensorflow: 2.6.0
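
[Editor's note: the "dynamic-sized tensors" error usually means the converted graph still has a None dimension somewhere. One hedged workaround, assuming the Keras model was saved with a dynamic input size, is to wrap it with a fixed input before converting; the model path is an assumption.]

import tensorflow as tf

model = tf.keras.models.load_model('tiny_yolo.h5')  # filename is an assumption

# Pin the input to a static 1x416x416x3 shape so every tensor in the
# graph is static-sized, which the Edge TPU delegate requires.
fixed_input = tf.keras.Input(shape=(416, 416, 3), batch_size=1)
fixed_model = tf.keras.Model(fixed_input, model(fixed_input))

converter = tf.lite.TFLiteConverter.from_keras_model(fixed_model)
# ... quantization settings as usual, then converter.convert()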

Inference time very high

Hello Everyone,

I have converted the yolov3 model to tflite, and when I run the script two timings are printed to the console:

Net forward-pass time: 1677.3169040679932 ms.
Box computation time: 0.9748935699462891 ms.

Does "Net forward-pass time" include the time of loading the model as well? If yes, I would like to know if there is a way, where I can measure inference after the model is loaded into the memory.

For comparison, my inference time for YOLOv3 on a PC, without the USB accelerator plugged in, was around 350 ms.
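
[Editor's note: a minimal timing sketch that separates model loading from inference: build the interpreter once, run a warm-up invoke() (the first call is typically much slower), then time only the forward pass. The model path is the repo's provided model; everything else is an assumption for illustration.]

import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='quant_coco-tiny-v3-relu.tflite')
interpreter.allocate_tensors()          # model loading happens here, outside the timer
inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp['shape'], dtype=inp['dtype'])

interpreter.set_tensor(inp['index'], dummy)
interpreter.invoke()                    # warm-up pass, excluded from timing

start = time.perf_counter()
interpreter.set_tensor(inp['index'], dummy)
interpreter.invoke()
print("Forward pass: %.1f ms" % ((time.perf_counter() - start) * 1000))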

your own example is not working

I set up a new ubuntu 20.04
and did the following steps:

sudo apt-get install git
sudo apt-get install python-is-python3
sudo apt-get install python3-pip
sudo apt install curl
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
sudo apt-get update
sudo apt-get install edgetpu-compiler

pip3 install numpy
pip3 install keras
pip3 install tensorflow

git clone https://github.com/guichristmann/edge-tpu-tiny-yolo.git
git clone https://github.com/guichristmann/keras-yolo3.git

After that I copied coco-tiny-v3-relu.cfg and coco-tiny-v3-relu.weights from the edge-tpu-tiny-yolo dir to the keras-yolo3 dir and ran
python convert.py coco-tiny-v3-relu.cfg coco-tiny-v3-relu.weights mytest1_keras.h5
I copied this new h5 file back to edge-tpu-tiny-yolo and ran
python keras_to_tflite_quant.py mytest1_keras.h5 mytest1.tflite
After that I ran edgetpu_compiler mytest1.tflite,
but I got the dreaded "Model not quantized" message that I also got with my own weights.

So something has changed since you set up your scripts.
The provided quant_coco-tiny-v3-relu.tflite does compile with edgetpu_compiler, so the problem must occur earlier in the pipeline.
Please check your own files and tell me what to do.

Best regards
Roland

Huge difference to regular darknet output

Hey there!

After compiling my custom tiny YOLOv3 network for the Edge TPU, I'm able to run it, but the output is wildly different.
My custom network has only one class; the anchors and input size remain the same, so I don't know where these issues might be coming from.

Edge TPU output: [image: result]

Desired output: [image: predictions]

Do you have any idea why this is happening and how to fix it?

Segmentation Fault

I am getting "segmentation Fault" after inference.py.
I made some changes in inference.py like "#23 (comment)"
Configs :
Windows 10
Python 3.7.1
tf = 1.15

Could anyone please help me overcome this issue? And if someone has working code for this project, could you please share it?

※important※ Mistake in nms_boxes() of util.py

Hi, @guichristmann

There are mistakes in the NMS processing.

Errors:

  1. NMS is applied regardless of class.
  2. Inappropriate class reference during the NMS processing.

Code corrections:

■ utils.py, lines 120-121: remove the "#" (i.e. uncomment these lines).
■ utils.py, lines 128 and 136: correct the indentation as shown below.

※ before correction:

    if len(to_remove) != 0:
        # Remove boxes
        for r in to_remove[::-1]:
            del boxes[r]
            del scores[r]
            del classes[r]
            i += 1

※ after correction:

    if len(to_remove) != 0:
        # Remove boxes
        for r in to_remove[::-1]:
            del boxes[r]
            del scores[r]
            del classes[r]
    i += 1

With this correction, I obtained appropriate inference results.
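
[Editor's note: to make point 1 above concrete, here is an independent per-class NMS sketch, not this repo's code; the (x1, y1, x2, y2) box format is an assumption. It suppresses overlapping boxes only within the same class.]

import numpy as np

def iou_xyxy(a, b):
    # Intersection-over-union for two (x1, y1, x2, y2) boxes
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms_per_class(boxes, scores, classes, iou_thresh=0.5):
    # boxes: (N, 4) float array; scores: (N,); classes: (N,) integer ids
    keep = []
    for c in np.unique(classes):
        idx = np.where(classes == c)[0]
        idx = idx[np.argsort(scores[idx])[::-1]]   # highest score first
        while idx.size > 0:
            best, rest = idx[0], idx[1:]
            keep.append(int(best))
            ious = np.array([iou_xyxy(boxes[best], boxes[r]) for r in rest])
            idx = rest[ious < iou_thresh]          # keep only low-overlap boxes
    return keep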

Full YoloV3 model

I'm wondering if you have tried building the full model (not tiny) for the Edge TPU?

※important※ Mistake in iou() of util.py

I added exception handling for the case where the two boxes do not overlap:
def iou(box1, box2):
    # Corners of the intersection rectangle; boxes are ((x1, y1), (x2, y2))
    xi1 = max(box1[0][0], box2[0][0])
    yi1 = max(box1[0][1], box2[0][1])
    xi2 = min(box1[1][0], box2[1][0])
    yi2 = min(box1[1][1], box2[1][1])

    # No overlap: the intersection is empty, so the IoU is 0
    if ((xi2 - xi1) < 0) or ((yi2 - yi1) < 0):
        return 0

    inter_area = (xi2 - xi1) * (yi2 - yi1)

    # Formula: Union(A,B) = A + B - Inter(A,B)
    box1_area = (box1[1][1] - box1[0][1]) * (box1[1][0] - box1[0][0])
    box2_area = (box2[1][1] - box2[0][1]) * (box2[1][0] - box2[0][0])
    union_area = (box1_area + box2_area) - inter_area

    # Compute the IoU
    return inter_area / union_area
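
[Editor's note: a quick sanity check of the corrected function, with boxes in the ((x1, y1), (x2, y2)) corner format the code assumes and values chosen purely for illustration.]

# Two unit boxes offset by half a side: intersection 0.5, union 1.5
print(iou(((0, 0), (1, 1)), ((0.5, 0), (1.5, 1))))  # ~0.333
# Disjoint boxes now return 0 instead of a spurious positive value
print(iou(((0, 0), (1, 1)), ((2, 2), (3, 3))))      # 0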

Converting the provided darknet model to tflite results in a 'model not quantized' error

Using the provided darknet model weights and cfg, and the recommended repo to convert to Keras (h5) and then to tflite, I find that the generated tflite differs from the tflite in the model folder.

This results in a 'Model not quantized' error, even though the provided quant_coco-tiny-v3-relu.tflite compiles just fine.

Comparing the two tflite models (not compiled for the Edge TPU), I find the only difference is this block, but I'm not sure what is causing it. I was attempting to convert my own custom model, but tried this as a (failed) sanity check.

quant_coco-tiny-v3-relu.tflite provided: [image]

Generated tflite using keras_to_tflite_quant.py (h5 generated with coco-tiny-v3-relu.cfg and coco-tiny-v3-relu.weights): [image]

converting to tflite error

When I try to convert the Keras model to a tflite model, I get the following error:

INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "keras_to_tflite_quant.py", line 40, in <module>
    tflite_model = converter.convert()
  File "/home/bond/.local/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 922, in convert
    inference_output_type)
  File "/home/bond/.local/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 200, in _calibrate_quantize_model
    inference_output_type, allow_float)
  File "/home/bond/.local/lib/python3.6/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 78, in calibrate_and_quantize
    np.dtype(output_type.as_numpy_dtype()).num, allow_float)
  File "/home/bond/.local/lib/python3.6/site-packages/tensorflow/lite/python/optimize/tensorflow_lite_wrap_calibration_wrapper.py", line 115, in QuantizeModel
    return _tensorflow_lite_wrap_calibration_wrapper.CalibrationWrapper_QuantizeModel(self, input_py_type, output_py_type, allow_float)
RuntimeError: Quantization not yet supported for op: LEAKY_RELU

Do you know what could be the issue?

Thanks
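
[Editor's note: the quantizer in this TensorFlow version simply cannot quantize LEAKY_RELU, which is presumably why this repo ships a relu variant of the model (coco-tiny-v3-relu). A hedged sketch of the darknet cfg change that avoids the op entirely; retraining after the change is assumed to be necessary, since the weights depend on the activation.]

# in every [convolutional] block of the .cfg:
activation=relu    # instead of: activation=leaky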

License to use

Hi, are you planning on adding a license to share this repo? :)

RESIZE_BILINEAR Operation version not supported

When I use edgetpu_compiler (Edge TPU Compiler version 14.1.317412892) to compile YOLOv3.tflite, I get the message "RESIZE_BILINEAR Operation version not supported". I use tf.image.resize(input_layer, [input_layer.shape[1] * 2, input_layer.shape[2] * 2], method='nearest') to do the upsampling, with TF 2.3. Can you help me? Thanks.
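
[Editor's note: one workaround reported in another issue below ("Not all operations are supported by the Edge TPU") is to fall back to the legacy converter, which emits older op versions that compiler version 14 understands; a hedged sketch.]

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # `model` assumed loaded
# TF > 2.2 uses the new MLIR converter by default, which can emit resize op
# versions the Edge TPU compiler rejects; the legacy converter avoids this.
converter.experimental_new_converter = False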

Error running Inference.py script

I am working on the Windows platform and have installed TF 1.15.0 along with Python 3.7.11.

I am getting the below error when running the inference.py script. Please guide me in solving the issue. Thanks in advance.

[image: coral-TPU-error-21]

Description of outputs?

I've written a tool called DOODS (https://github.com/snowzach/doods) that allows for offloaded object detection. It's written in Go, and I would love to be able to work with this model (with the Edge TPU).

Is there any documentation on the output format of this model? I am having a hard time following the code (I'm not a Python guy).
I see there are two outputs: 1 x 13 x 13 x 255 and 1 x 26 x 26 x 255.

{"package": "detector.tflite", "name": "edgetpu2", "n": 0, "name": "Identity", "type": "UInt8", "num_dims": 4, "byte_size": 43095, "quant": {"Scale":0.08390676975250244,"ZeroPoint":230}, "shape": [1, 13, 13, 255]}
{"package": "detector.tflite", "name": "edgetpu2", "n": 0, "dim": 0, "dim_size": 1}
{"package": "detector.tflite", "name": "edgetpu2", "n": 0, "dim": 1, "dim_size": 13}
{"package": "detector.tflite", "name": "edgetpu2", "n": 0, "dim": 2, "dim_size": 13}
{"package": "detector.tflite", "name": "edgetpu2", "n": 0, "dim": 3, "dim_size": 255}
{"package": "detector.tflite", "name": "edgetpu2", "n": 1, "name": "Identity_1", "type": "UInt8", "num_dims": 4, "byte_size": 172380, "quant": {"Scale":0.08317398279905319,"ZeroPoint":214}, "shape": [1, 26, 26, 255]}
{"package": "detector.tflite", "name": "edgetpu2", "n": 1, "dim": 0, "dim_size": 1}
{"package": "detector.tflite", "name": "edgetpu2", "n": 1, "dim": 1, "dim_size": 26}
{"package": "detector.tflite", "name": "edgetpu2", "n": 1, "dim": 2, "dim_size": 26}
{"package": "detector.tflite", "name": "edgetpu2", "n": 1, "dim": 3, "dim_size": 255}
{"package": "detector", "name": "edgetpu2", "type": "tflite", "model": "models/quant_coco-tiny-v3-relu.tflite", "labels": 80, "width": 416, "height": 416}

Any help at all would be greatly appreciated!
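
[Editor's note: for tiny YOLOv3 trained on COCO, the conventional layout (inferred from the shapes above, not from this repo's code) is that each of the 13x13 or 26x26 grid cells holds 3 anchor predictions of (tx, ty, tw, th, objectness, 80 class scores), hence 3 * (5 + 80) = 255 channels. A dequantize-and-reshape sketch, where raw_output stands in for the uint8 tensor read from the interpreter.]

import numpy as np

def dequantize(raw, scale, zero_point):
    # Maps uint8 values back to real numbers: real = scale * (q - zero_point)
    return scale * (raw.astype(np.float32) - zero_point)

raw_output = np.zeros((1, 13, 13, 255), dtype=np.uint8)  # stand-in for interpreter.get_tensor(...)
out = dequantize(raw_output, 0.08390676975250244, 230)   # quant params from the dump above
grid = out.reshape(13, 13, 3, 85)     # (cell_y, cell_x, anchor, fields)
box_raw     = grid[..., 0:4]          # tx, ty, tw, th (decode against grid + anchors)
objectness  = grid[..., 4]            # confidence (apply sigmoid)
class_probs = grid[..., 5:]           # 80 COCO class scores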

Not all operations are supported by the Edge TPU

Hey there!

I'm using TensorFlow GPU version 2.3.0 and my custom model (anchors and image size are the same; I only modified the number of classes and used relu layers instead of leaky).

First of all, I needed to add the line converter.experimental_new_converter = False because my TensorFlow version is higher than 2.2 (just in case this is of interest to you).
After that I was able to run edgetpu_compiler -s quant.tflite, but the output tells me that some operations will run on the CPU:

Edge TPU Compiler version 14.1.317412892

Model compiled successfully in 329 ms.

Input model: quant.tflite
Input size: 8.38MiB
Output model: quant_edgetpu.tflite
Output size: 8.40MiB
On-chip memory used for caching model parameters: 1.84MiB
On-chip memory remaining for caching model parameters: 256.75KiB
Off-chip memory used for streaming uncached model parameters: 5.64MiB
Number of Edge TPU subgraphs: 1
Total number of operations: 25
Operation log: quant_edgetpu.log

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 19
Number of operations that will run on CPU: 6

Operator                       Count      Status

RESIZE_NEAREST_NEIGHBOR        1          Operation version not supported
MAX_POOL_2D                    6          Mapped to Edge TPU
CONCATENATION                  1          More than one subgraph is not supported
QUANTIZE                       1          More than one subgraph is not supported
QUANTIZE                       2          Mapped to Edge TPU
QUANTIZE                       1          Operation is otherwise supported, but not mapped due to some unspecified limitation
CONV_2D                        2          More than one subgraph is not supported
CONV_2D                        11         Mapped to Edge TPU

Do you have any idea how to fix this?

How to generate anchor file

I had a doubt about the anchor file. I got it sorted, as I found the anchors in the .cfg file. Hence, closing the issue! :)

Why does the quantized model run slower?

I know these models are designed to run on the Coral USB accelerator, but shouldn't they also run faster on a PC?

It takes about 1.15 seconds to run tiny-yolov3.tflite on my computer, but around 15 seconds to run the quant_coco-tiny-v3-relu.tflite.

Is this normal behaviour?
Will it run faster than 1.15 s on the Coral USB TPU when I buy one?

Thanks for answering!

Yolo V3 and Yolo V3 Spp

Thank you for the great work.
I am trying to compile YOLOv3 and YOLOv3-SPP to quantized tflite models but am unable to do so.
I found that there are shortcut layers in the YOLOv3 and SPP models, which do not exist in the tiny model. Could that be the reason?

Is there any way to compile them for the Coral TPU board? Please help me.
Thanks
