
tflitedetection's Introduction

TensorFlow Lite Object Detection in Python

This code snippet is heavily based on TensorFlow Lite Object Detection.
The detection model can be downloaded from the link above.
For the realtime implementation on Android, look into the Android Object Detection Example.
Follow the object detection.ipynb notebook for information about how to use the TFLite model in your Python environment.

Details

The ssd_mobilenet_v1_1_metadata_1.tflite model takes a normalized 300x300x3 image as input, and its output is composed of 4 tensors: the 1st contains the bounding box locations, the 2nd contains the label index of each predicted class, the 3rd contains the probability that the detection belongs to that class, and the 4th contains the number of detected objects (maximum 10). The labels of the classes are stored in the labelmap.txt file.
I found labelmap.txt in the Android Object Detection Example repository, in the directory below.

TFLite_examples/lite/examples/object_detection/android/app/src/main/assets

For model inference, we need to load, resize, and typecast the image.
The MobileNet model expects uint8 input, so typecast the NumPy array to uint8.
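As a minimal sketch of that preprocessing (assuming Pillow and NumPy; a synthetic array stands in for opening a real image file, so the snippet is self-contained):

```python
import numpy as np
from PIL import Image

# Stand-in for `im = Image.open("your_image.jpg")`
im = Image.fromarray(np.zeros((480, 640, 3), dtype=np.uint8))

res_im = im.resize((300, 300))                  # model expects 300x300
np_res_im = np.array(res_im, dtype=np.uint8)    # uint8, as the model requires
input_data = np.expand_dims(np_res_im, axis=0)  # add batch dim -> (1, 300, 300, 3)
print(input_data.shape, input_data.dtype)
```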

Then, if you follow the instructions provided by Google in load_and_run_a_model_in_python, you get the four output tensors described above.
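The shapes of those four outputs can be sketched as follows; the arrays below are synthetic stand-ins (the real values come from the interpreter after invoking the model), shaped according to the description above:

```python
import numpy as np

# Synthetic stand-ins showing the shapes of the four SSD outputs
outputLocations = np.zeros((1, 10, 4), dtype=np.float32)  # [ymin, xmin, ymax, xmax], normalized to [0, 1]
outputClasses   = np.zeros((1, 10), dtype=np.float32)     # label indices into labelmap.txt
outputScores    = np.zeros((1, 10), dtype=np.float32)     # confidence per detection
numDetections   = np.array([10.0], dtype=np.float32)      # number of valid detections (max 10)
print(outputLocations.shape, outputClasses.shape, outputScores.shape, numDetections.shape)
```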

Now we need to process this output to use it for object detection:

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image

label_names = [line.rstrip('\n') for line in open("labelmap.txt")]
label_names = np.array(label_names)
numDetectionsOutput = int(np.minimum(numDetections[0], 10))

for i in range(numDetectionsOutput):
    # Create figure and axes
    fig, ax = plt.subplots()

    # Display the resized input image
    ax.imshow(res_im)

    # Box coordinates are normalized [ymin, xmin, ymax, xmax]; scale to pixels
    inputSize = 300
    left = outputLocations[0][i][1] * inputSize
    top = outputLocations[0][i][0] * inputSize
    right = outputLocations[0][i][3] * inputSize
    bottom = outputLocations[0][i][2] * inputSize
    class_name = label_names[int(outputClasses[0][i])]
    print("Output class: " + class_name + " | Confidence: " + str(outputScores[0][i]))

    # Create a Rectangle patch anchored at the top-left corner
    rect = patches.Rectangle((left, top), right - left, bottom - top, linewidth=1, edgecolor='r', facecolor='none')

    # Add the patch to the Axes
    ax.add_patch(rect)

    plt.show()


You can modify the rest of the code as you like.
Thank you!

tflitedetection's People

Contributors: joonb14

tflitedetection's Issues

Missing input quantization

Thank you for the sample!

I'm not 100% sure, but I believe that the processor is missing input quantization as described at https://www.tensorflow.org/lite/performance/post_training_integer_quant#run_the_tensorflow_lite_models

Something like

Edited with corrected code based on this discussion:

# 1. Read image as 300x300x3 uint8
res_im = im.resize((300, 300))
np_res_im = np.array(res_im)

# 2. Transform from uint8 RGB [0..255] to float [-1, 1]
np_res_im = (np_res_im / 255) * 2 - 1

# 3. Apply quantization based on https://www.tensorflow.org/lite/performance/post_training_integer_quant#run_the_tensorflow_lite_models
# Check if the input type is quantized, then rescale input data to uint8
# (input_details here is interpreter.get_input_details()[0])
if input_details['dtype'] == np.uint8:
    input_scale, input_zero_point = input_details["quantization"]
    np_res_im = np_res_im / input_scale + input_zero_point

np_res_im = np.expand_dims(np_res_im, axis=0).astype(input_details["dtype"])

# The quantized input is still 300x300x3, but the values differ from the original image:
print(np_res_im)

This is based on this output from the first cell:

[{'name': 'normalized_input_image_tensor',
  'index': 175,
  'shape': array([  1, 300, 300,   3], dtype=int32),
  'shape_signature': array([  1, 300, 300,   3], dtype=int32),
  'dtype': numpy.uint8,
  'quantization': (0.0078125, 128),
  'quantization_parameters': {'scales': array([0.0078125], dtype=float32),
   'zero_points': array([128], dtype=int32),
   'quantized_dimension': 0},
  'sparsity_parameters': {}}]

The dtype is uint8, and the quantization scale and zero-point information is available.
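As a worked example of that quantization, using the scale 0.0078125 and zero point 128 reported above and the relation real = scale * (quantized - zero_point), inverted to quantized = real / scale + zero_point:

```python
import numpy as np

input_scale, input_zero_point = 0.0078125, 128

# Quantize a few normalized [-1, 1] values: quantized = real / scale + zero_point
real = np.array([-1.0, 0.0, 0.9921875], dtype=np.float32)
quantized = real / input_scale + input_zero_point
print(quantized)  # -1.0 -> 0, 0.0 -> 128, 0.9921875 -> 255 (the full uint8 range)
```

Note that values at or above 1.0 would map past 255, which is why the cast to uint8 should follow the rescaling rather than precede it.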
