Comments (8)
I'm also facing the same issue when using:

```dart
Tflite.runModelOnImage(path: image.path);
```
from flutter_tflite.
Anyone hitting this issue should export the model with float16 quantization:

```python
config = QuantizationConfig.for_float16()
model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)
```
Hi Vishal,
Can you check whether the error happens when feeding the input tensor or the output tensor?
You can set a breakpoint at the following line. If you are able to get there, the error is because the output tensor is of type uint8 while labelProb is float32:
https://github.com/shaqian/flutter_tflite/blob/master/android/src/main/java/sq/flutter/tflite/TflitePlugin.java#L452
The definition of labelProb:
https://github.com/shaqian/flutter_tflite/blob/master/android/src/main/java/sq/flutter/tflite/TflitePlugin.java#L55
The output of image classification is usually a float between 0 and 1, so you may need to check how the model was trained.
Thanks,
Qian
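Before changing any buffers, it is worth confirming what dtype the model actually produces. A minimal sketch, assuming you have the .tflite file at hand: with TensorFlow installed, `tf.lite.Interpreter` reports the output tensor's dtype in its details dict (shown in the commented lines); the small helper below just encodes the check that comment is describing:

```python
import numpy as np

def output_needs_byte_buffer(output_detail):
    """True if the output tensor dtype is an integer type (e.g. uint8),
    meaning a float32 buffer like labelProb will not match it."""
    return np.issubdtype(output_detail["dtype"], np.integer)

# With TensorFlow installed, inspect the real model like this:
#   import tensorflow as tf
#   interpreter = tf.lite.Interpreter(model_path="model.tflite")
#   interpreter.allocate_tensors()
#   detail = interpreter.get_output_details()[0]
#   print(detail["dtype"], output_needs_byte_buffer(detail))

print(output_needs_byte_buffer({"dtype": np.uint8}))    # quantized output
print(output_needs_byte_buffer({"dtype": np.float32}))  # float output
```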
I'm archiving this thread. Feel free to reopen if you have further questions.
Thanks,
Qian
I'm facing the same issue both when using:

```dart
Tflite.runModelOnImage(path: image.path);
```

and:

```dart
await Tflite.runModelOnBinary(binary: binary);
```

I've attached an image with the model properties of the tflite model I'm using.
Use this code to train your custom model:

```python
import os

import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')

from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

import matplotlib.pyplot as plt

# Unzip the dataset archive (notebook shell command)
!unzip path-of-zip-file -d path-to-save-extract-file

# Load the images and split them 80/10/10 into train/validation/test
data = DataLoader.from_folder('path-of-custom-folder')
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

# Train and evaluate the classifier
model = image_classifier.create(train_data, validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)

# Export the model with float16 quantization, plus the label file
config = QuantizationConfig.for_float16()
model.export(export_dir='path-to-save-model', quantization_config=config, export_format=ExportFormat.TFLITE)
model.export(export_dir='path-to-save-label', quantization_config=config, export_format=ExportFormat.LABEL)
```
Inside your code, change the output buffer from float to byte, then recover the float value from the byte data.

Before:

```java
float[][] labelProb = new float[1][labels.size()];
for (int i = 0; i < labels.size(); ++i) {
  float confidence = labelProb[0][i];
}
```

After:

```java
byte[][] labelProb = new byte[1][labels.size()];
for (int i = 0; i < labels.size(); ++i) {
  // Java bytes are signed, so mask with 0xFF to recover the unsigned uint8 value
  float confidence = (float) (labelProb[0][i] & 0xFF);
}
```

I might send a pull request for this.
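Note that the raw byte only gives the quantized integer (0-255). To map it back to a real-valued score, TFLite's quantization scheme applies the tensor's scale and zero point (both readable from the interpreter's output tensor details). A minimal Python sketch of that standard dequantization formula, using hypothetical scale/zero-point values for illustration:

```python
def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Map a quantized uint8 value back to its real-valued score:
    real = scale * (q - zero_point)."""
    return scale * (q - zero_point)

# Hypothetical parameters for illustration: scale = 1/256, zero_point = 0,
# so the quantized value 128 maps back to 0.5.
print(dequantize(128, 1.0 / 256.0, 0))  # 0.5
```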
I am getting this error:

```
Cannot copy to a TensorFlowLite tensor (input_1) with 602112 bytes from a Java Buffer with 150528 bytes.
```
```python
import os

import numpy as np
import tensorflow as tf

from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

EXPORT_DIR = '/home/ailabs/work/TFLite/Model/'
CAR_POTO_DIR = '/home/ailabs/work/TFLite/car_photos/'
EPOCHS = 1

data = DataLoader.from_folder(CAR_POTO_DIR)
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

model = image_classifier.create(train_data, epochs=EPOCHS, validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)

config = QuantizationConfig.for_float16()
model.export(export_dir=EXPORT_DIR, tflite_filename='coco_ssd_mobilenet_v1_1.0_quant.tflite', quantization_config=config, export_format=ExportFormat.TFLITE)
model.export(export_dir=EXPORT_DIR, tflite_filename='coco_ssd_mobilenet_v1_1.0_labels.txt', export_format=ExportFormat.LABEL)
```
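The 4x ratio between the two byte counts in that error suggests the model's input tensor is float32 (4 bytes per value) while the buffer being fed holds uint8 data (1 byte per value). Assuming the common 224x224 RGB input shape (not stated in the post, but consistent with the numbers), the arithmetic lines up exactly:

```python
# Assumed input shape [1, 224, 224, 3]; 1 byte/value for uint8, 4 for float32.
values = 1 * 224 * 224 * 3

uint8_bytes = values * 1    # size of the Java buffer in the error
float32_bytes = values * 4  # size the float32 input tensor expects

print(uint8_bytes)    # 150528
print(float32_bytes)  # 602112
```

So the fix is to feed the input as float32 (or export a model with a uint8 input), not to resize the image.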