
Comments (8)

vishal-patel17 avatar vishal-patel17 commented on July 20, 2024 1

Also facing the same issue when using:

```dart
Tflite.runModelOnImage(path: image.path);
```

from flutter_tflite.

2shrestha22 avatar 2shrestha22 commented on July 20, 2024 1

Anyone having this issue should export model with float16 quantization.

```python
config = QuantizationConfig.for_float16()
model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)
```


shaqian avatar shaqian commented on July 20, 2024

Hi Vishal,

Can you check if the error happens when feeding input tensor or output tensor?

You can set a breakpoint at the following line. If you are able to get here, the error is because the output tensor is of type uint8 but labelProb is float32.
https://github.com/shaqian/flutter_tflite/blob/master/android/src/main/java/sq/flutter/tflite/TflitePlugin.java#L452

The definition of labelProb:
https://github.com/shaqian/flutter_tflite/blob/master/android/src/main/java/sq/flutter/tflite/TflitePlugin.java#L55

Output of image classification is usually float number between 0 and 1. You may need to check how the model is trained.
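To see why that line throws, note that the copy is byte-for-byte: a uint8 output tensor and a float array with the same number of elements have different byte sizes, so the interpreter refuses the copy. A small illustration in Python (the label count is made up, not taken from the model above):

```python
import numpy as np

num_labels = 1001  # illustrative label count only
uint8_out = np.zeros(num_labels, dtype=np.uint8)    # quantized model output
float_out = np.zeros(num_labels, dtype=np.float32)  # what labelProb expects
print(uint8_out.nbytes, float_out.nbytes)  # 1001 4004
```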

Thanks,
Qian


shaqian avatar shaqian commented on July 20, 2024

I'm archiving this thread. Feel free to reopen if you have further questions.

Thanks,
Qian


PepeExpress avatar PepeExpress commented on July 20, 2024

I'm facing the same issue both when using `Tflite.runModelOnImage(path: image.path);` and `await Tflite.runModelOnBinary(binary: binary);`.

I attach an image with the model properties of the tflite model I'm using.
[image: netron_model]


zoraiz-WOL avatar zoraiz-WOL commented on July 20, 2024

Use this code to train your custom model:

```python
import os

import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')

from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

import matplotlib.pyplot as plt

# To unzip the dataset archive (notebook shell command):
# !unzip path-of-zip-file -d path-to-save-extract-file

data = DataLoader.from_folder('path-of-custom-folder')
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
model = image_classifier.create(train_data, validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)
config = QuantizationConfig.for_float16()
model.export(export_dir='path-to-save-model', quantization_config=config, export_format=ExportFormat.TFLITE)
model.export(export_dir='path-to-save-label', quantization_config=config, export_format=ExportFormat.LABEL)
```


elkhalifte avatar elkhalifte commented on July 20, 2024

Inside your code, change the output buffer from float to byte, then recover the float value from the byte data.

Before:

```java
float[][] labelProb = new float[1][labels.size()];
for (int i = 0; i < labels.size(); ++i) {
    float confidence = labelProb[0][i];
}
```

After:

```java
byte[][] labelProb = new byte[1][labels.size()];
for (int i = 0; i < labels.size(); ++i) {
    float confidence = (float) labelProb[0][i];
}
```

I might send a pull request for this.
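One caveat with the cast above: it yields the raw quantized score (and Java's signed `byte` goes negative for values above 127), not a probability. To recover a float in [0, 1] you generally dequantize using the output tensor's scale and zero point. A minimal sketch in Python; the particular `scale` and `zero_point` values below are illustrative, not read from any model:

```python
import numpy as np

# Raw uint8 scores as they come out of a quantized output tensor.
raw = np.array([0, 128, 255], dtype=np.uint8)

# scale and zero_point come from the tensor's quantization parameters;
# these example values map 0..255 onto roughly 0.0..1.0.
scale, zero_point = 1.0 / 255.0, 0
probs = (raw.astype(np.float32) - zero_point) * scale
```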


umang752 avatar umang752 commented on July 20, 2024

I am getting this error:

Cannot copy to a TensorFlowLite tensor (input_1) with 602112 bytes from a Java Buffer with 150528 bytes.
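For what it's worth, the two byte counts in that error are consistent with a 224×224×3 image fed as uint8 into an input tensor that expects float32 (4 bytes per value). The image dimensions here are inferred from the numbers, not from the model itself:

```python
# Decode the byte counts from the error message.
h, w, c = 224, 224, 3            # inferred: 224 * 224 * 3 = 150528
uint8_bytes = h * w * c          # bytes the Java buffer supplied
float32_bytes = uint8_bytes * 4  # bytes the float32 input tensor expects
print(uint8_bytes, float32_bytes)  # 150528 602112
```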

```python
import os

import numpy as np
import tensorflow as tf

from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

EXPORT_DIR = '/home/ailabs/work/TFLite/Model/'
CAR_POTO_DIR = '/home/ailabs/work/TFLite/car_photos/'
EPOCHS = 1

data = DataLoader.from_folder(CAR_POTO_DIR)

train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

model = image_classifier.create(train_data, epochs=EPOCHS, validation_data=validation_data)

loss, accuracy = model.evaluate(test_data)

config = QuantizationConfig.for_float16()

model.export(export_dir=EXPORT_DIR, tflite_filename='coco_ssd_mobilenet_v1_1.0_quant.tflite', quantization_config=config, export_format=ExportFormat.TFLITE)
model.export(export_dir=EXPORT_DIR, tflite_filename='coco_ssd_mobilenet_v1_1.0_labels.txt', export_format=ExportFormat.LABEL)
```

