rajatkalsotra / face-recognition-flutter
Realtime face recognition with Flutter
License: MIT License
Hi,
I downloaded the code and built the APK with Flutter in Android Studio, but it fails at main.dart line 64 with "Unable to create interpreter." So I downloaded the APK from this repo instead, but it won't install at all. I am using Android version 5.0.
From the log I found that while initializing the TensorFlow Lite runtime it reports "L2_NORMALIZATION: Operation is not supported", and two lines after that: OpenCL library not loaded - dlopen failed: library "libOpenCL-pixel.so" not found.
Could you help? Thank you.
Aditya
Hi, one of our projects needs face recognition, and I'm trying all the resources. This repository seems to cover it, but I couldn't run the app on Linux or Windows. I followed all the instructions, still no luck. I desperately request you, @Rajatkalsotra, to look into it. We need a solution as soon as possible.
../../snap/flutter/common/flutter/.pub-cache/hosted/pub.dartlang.org/tflite_flutter-0.4.2/lib/src/delegates/gpu_delegate.dart:58:10: Error: The getter 'addressOf' isn't defined for the class 'TfLiteGpuDelegateOptionsV2'.
I downloaded the project and it works perfectly, but when I try to use it as a module from another project, it throws an error when calling the model load. Could you tell me what the cause might be?
D/libGLESv2(25436): DTS_GLAPI : DTS is not allowed for Package : com.rajatkalsotra.face_recognition
W/linker (25436): /data/app/com.rajatkalsotra.face_recognition-1/lib/arm/libtensorflowlite_c.so: unused DT entry: type 0x1d arg 0x12ec
I/tflite (25436): Initialized TensorFlow Lite runtime.
I/CameraManagerGlobal(25436): Connecting to camera service
V/ActivityThread(25436): updateVisibility : ActivityRecord{3c1ef5 token=android.os.BinderProxy@2abe2a3 {com.rajatkalsotra.face_recognition/com.rajatkalsotra.face_recognition.MainActivity}} show : true
E/SensorManager(25436): nativeGetSensorAtIndex: name, vendor - 0, K2HH Acceleration , STM
E/SensorManager(25436): nativeGetSensorAtIndex: name, vendor - 1, STK3013 Proximity Sensor, Sensortek
E/SensorManager(25436): nativeGetSensorAtIndex: name, vendor - 2, Screen Orientation Sensor, Samsung Electronics
D/SensorManager(25436): registerListener :: 0, K2HH Acceleration , 200000, 0,
I/CameraManager(25436): Using legacy camera HAL.
I/CameraDeviceState(25436): Legacy camera service transitioning to state CONFIGURING
I/RequestThread-1(25436): Configure outputs: 2 surfaces configured.
D/Camera (25436): app passed NULL surface
I/RequestThread-1(25436): configureOutputs - set take picture size to 320x240
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e6404
D/mali_winsys(25436): new_window_surface returns 0x3000, [320x240]-format:1
I/CameraDeviceState(25436): Legacy camera service transitioning to state IDLE
I/RequestQueue(25436): Repeating capture request set.
W/LegacyRequestMapper(25436): convertRequestMetadata - control.awbRegions setting is not supported, ignoring value
I/Timeline(25436): Timeline: Activity_idle id: android.os.BinderProxy@2abe2a3 time:10156240
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
I/CameraDeviceState(25436): Legacy camera service transitioning to state CAPTURING
I/RequestQueue(25436): Repeating capture request cancelled.
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
I/CameraDeviceState(25436): Legacy camera service transitioning to state IDLE
I/CameraDeviceState(25436): Legacy camera service transitioning to state CONFIGURING
I/RequestThread-1(25436): Configure outputs: 2 surfaces configured.
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
D/Camera (25436): app passed NULL surface
D/libEGL (25436): eglTerminate EGLDisplay = 0xce1eaf94
D/libEGL (25436): eglTerminate EGLDisplay = 0xce1eaf94
D/libEGL (25436): eglTerminate EGLDisplay = 0xce1eaf94
D/libEGL (25436): eglTerminate EGLDisplay = 0xce1eaf94
D/libEGL (25436): eglTerminate EGLDisplay = 0xce0e649c
D/libEGL (25436): eglTerminate EGLDisplay = 0xce0e643c
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e6404
D/mali_winsys(25436): new_window_surface returns 0x3000, [320x240]-format:1
I/CameraDeviceState(25436): Legacy camera service transitioning to state IDLE
I/RequestQueue(25436): Repeating capture request set.
W/LegacyRequestMapper(25436): convertRequestMetadata - control.awbRegions setting is not supported, ignoring value
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
I/CameraDeviceState(25436): Legacy camera service transitioning to state CAPTURING
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 30, ycbcr.y: 0xee91e000, .cb: 0xee930c01, .cr: 0xee930c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 3, ycbcr.y: 0xee91e000, .cb: 0xee930c01, .cr: 0xee930c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 30, ycbcr.y: 0xee91e000, .cb: 0xee930c01, .cr: 0xee930c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 3, ycbcr.y: 0xee91e000, .cb: 0xee930c01, .cr: 0xee930c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 30, ycbcr.y: 0xee91e000, .cb: 0xee930c01, .cr: 0xee930c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 3, ycbcr.y: 0xee91e000, .cb: 0xee930c01, .cr: 0xee930c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 30, ycbcr.y: 0xee91e000, .cb: 0xee930c01, .cr: 0xee930c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 3, ycbcr.y: 0xee91e000, .cb: 0xee930c01, .cr: 0xee930c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 30, ycbcr.y: 0xee91e000, .cb: 0xee930c01, .cr: 0xee930c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 30, ycbcr.y: 0xd6b63000, .cb: 0xd6b75c01, .cr: 0xd6b75c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
W/DynamiteModule(25436): Local module descriptor class for com.google.android.gms.vision.dynamite.face not found.
I/DynamiteModule(25436): Considering local module com.google.android.gms.vision.dynamite.face:0 and remote module com.google.android.gms.vision.dynamite.face:0
D/FaceNativeHandle(25436): Cannot load feature, fall back to load whole module.
W/DynamiteModule(25436): Local module descriptor class for com.google.android.gms.vision.dynamite not found.
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
I/DynamiteModule(25436): Considering local module com.google.android.gms.vision.dynamite:0 and remote module com.google.android.gms.vision.dynamite:2703
I/DynamiteModule(25436): Selected remote version of com.google.android.gms.vision.dynamite, version >= 2703
V/DynamiteModule(25436): Dynamite loader version >= 2, using loadModule2NoCrashUtils
D/gralloc (25436): gralloc_lock_ycbcr success. format : 11, usage: 30, ycbcr.y: 0xd64a3000, .cb: 0xd64b5c01, .cr: 0xd64b5c00, .ystride: 320 , .cstride: 320, .chroma_step: 2
D/libEGL (25436): eglInitialize EGLDisplay = 0xce0e62f4
W/ResourceType(25436): ResTable_typeSpec entry count inconsistent: given 67, previously 69
D/ResourcesManager(25436): For user 0 new overlays fetched Null
W/System (25436): ClassLoader referenced unknown path: /data/data/com.google.android.gms/app_chimera/m/00000052/n/armeabi-v7a
W/System (25436): ClassLoader referenced unknown path: /data/data/com.google.android.gms/app_chimera/m/00000052/n/armeabi
D/ResourcesManager(25436): For user 0 new overlays fetched Null
W/DynamiteModule(25436): Local module descriptor class for com.google.android.gms.vision.face not found.
I/DynamiteModule(25436): Considering local module com.google.android.gms.vision.face:0 and remote module com.google.android.gms.vision.face:0
E/FaceDetectorCreator(25436): Error loading optional module com.google.android.gms.vision.face
E/FaceDetectorCreator(25436): hh: No acceptable module found. Local version is 0 and remote version is 0.
E/FaceDetectorCreator(25436): at hk.e(:com.google.android.gms.dynamite_dynamitemodulesa@[email protected] (040306-0):84)
E/FaceDetectorCreator(25436): at com.google.android.gms.vision.face.ChimeraNativeFaceDetectorCreator.newFaceDetector(:com.google.android.gms.dynamite_dynamitemodulesa@[email protected] (040306-0):2)
E/FaceDetectorCreator(25436): at jl.a(:com.google.android.gms.dynamite_dynamitemodulesa@[email protected] (040306-0):4)
E/FaceDetectorCreator(25436): at aq.onTransact(:com.google.android.gms.dynamite_dynamitemodulesa@[email protected] (040306-0):4)
E/FaceDetectorCreator(25436): at android.os.Binder.transact(Binder.java:387)
E/FaceDetectorCreator(25436): at com.google.android.gms.internal.vision.zza.transactAndReadException(Unknown Source)
E/FaceDetectorCreator(25436): at com.google.android.gms.vision.face.internal.client.zzh.zza(Unknown Source)
E/FaceDetectorCreator(25436): at com.google.android.gms.vision.face.internal.client.zza.zza(Unknown Source)
E/FaceDetectorCreator(25436): at com.google.android.gms.internal.vision.zzl.zzp(Unknown Source)
E/FaceDetectorCreator(25436): at com.google.android.gms.vision.face.internal.client.zza.<init>(Unknown Source)
E/FaceDetectorCreator(25436): at com.google.android.gms.vision.face.FaceDetector$Builder.build(Unknown Source)
E/FaceDetectorCreator(25436): at com.google.android.gms.internal.firebase_ml.zziv.zzfm(Unknown Source)
E/FaceDetectorCreator(25436): at com.google.android.gms.internal.firebase_ml.zzhr.zzfp(Unknown Source)
E/FaceDetectorCreator(25436): at com.google.android.gms.internal.firebase_ml.zzhr.call(Unknown Source)
E/FaceDetectorCreator(25436): at com.google.android.gms.internal.firebase_ml.zzhg.zza(Unknown Source)
E/FaceDetectorCreator(25436): at com.google.android.gms.internal.firebase_ml.zzhh.run(Unknown Source)
E/FaceDetectorCreator(25436): at android.os.Handler.handleCallback(Handler.java:739)
E/FaceDetectorCreator(25436): at android.os.Handler.dispatchMessage(Handler.java:95)
E/FaceDetectorCreator(25436): at android.os.Looper.loop(Looper.java:148)
E/FaceDetectorCreator(25436): at android.os.HandlerThread.run(HandlerThread.java:61)
I/Vision (25436): Request download for engine face
W/FaceNativeHandle(25436): Native handle not yet available. Reverting to no-op handle.
I suspect it is not able to use the Firebase face detection model.
Hello,
I'm trying to compare an image from base64 with the camera, but I'm stuck at this step:

```dart
// Get the image from base64. The substring(22) strips the data-URI
// prefix such as "data:image/png;base64,".
var strImage = this.widget.base64Photo.substring(22);
Uint8List bytes = base64.decode(strImage);

// Convert to a list as in your example code. Note: the decoded bytes
// are an encoded PNG/JPEG, so they must be decoded and resized to the
// model's 112x112 input, not treated as raw pixel data.
imglib.Image img2 =
    imglib.copyResizeCropSquare(imglib.decodeImage(bytes), 112);
List input = imageToByteListFloat32(img2, 112, 128, 128);
input = input.reshape([1, 112, 112, 3]);
List output = List.filled(1 * 192, 0.0).reshape([1, 192]);
interpreter.run(input, output);
output = output.reshape([192]);
e2 = List.from(output);
```
and then I compare the lists, where currEmb is the face embedding from the camera:

```dart
String compare(List currEmb) {
  double minDist = 999;
  double currDist = 0.0;
  String predRes = 'NOT RECOGNIZED';
  currDist = euclideanDistance(e2, currEmb);
  if (currDist <= threshold && currDist < minDist) {
    minDist = currDist;
    predRes = 'RECOGNIZED';
  }
  print(minDist.toString() + ' ' + predRes);
  return predRes;
}
```

but it always returns NOT RECOGNIZED. Any suggestion for recognizing from base64?
Hey, I'm using this model for my college mini project.
It runs fine on a Redmi 8A Dual, but when I run it on a Vivo Y18 the model crashes with this error:
E/tflite ( 9076): First 230 operations will run on the GPU, and the remaining 1 on the CPU.
E/tflite ( 9076): OpenCL library not loaded - dlopen failed: library "libOpenCL-pixel.so" not found
E/tflite ( 9076): Falling back to OpenGL
E/libEGL ( 9076): call to OpenGL ES API with no current context (logged once per thread)
I/tflite ( 9076): Initialized OpenGL-based API.
E/tflite ( 9076): TfLiteGpuDelegate Init: fuse_auto_input failed
E/tflite ( 9076): TfLiteGpuDelegate Prepare: delegate is not initialized
E/tflite ( 9076): Node number 231 (TfLiteGpuDelegateV2) failed to prepare.
E/tflite ( 9076): Restored previous execution plan after delegate application failure.
E/flutter ( 9076): [ERROR:flutter/lib/ui/ui_dart_state.cc(177)] Unhandled Exception: Invalid argument(s): Unable to create interpreter.
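For devices like this, where the OpenCL/OpenGL drivers cannot back the GPU delegate, one workaround is to catch the delegate failure and retry on the CPU. This is only a sketch against the tflite_flutter API; 'mobilefacenet.tflite' is a placeholder for whatever model asset the project actually uses:

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

// Sketch: try the GPU delegate first, then fall back to a CPU-only
// interpreter when delegate initialization fails (as in the log above).
// 'mobilefacenet.tflite' is a placeholder asset name.
Future<Interpreter> createInterpreter() async {
  try {
    final gpuOptions = InterpreterOptions()..addDelegate(GpuDelegateV2());
    return await Interpreter.fromAsset('mobilefacenet.tflite',
        options: gpuOptions);
  } catch (_) {
    // GPU delegate could not be created; run everything on the CPU.
    final cpuOptions = InterpreterOptions()..threads = 4;
    return await Interpreter.fromAsset('mobilefacenet.tflite',
        options: cpuOptions);
  }
}
```

CPU inference will be slower per frame, but it avoids the "Unable to create interpreter" crash on devices without a usable GPU backend.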
I can't run it at all. It just shows "The plugin firebase_ml_vision
uses a deprecated version of the Android embedding." How can I solve this and run the app? Please help, anybody. Thanks.
Hi
Have you tried the model on Indian faces? I am trying to use this in a student attendance project, but it recognizes falsely almost 10-15% of the time, which is not good.
Is there any way I can improve the recognition, even if it makes recognition slower or stricter in terms of pose etc.?
The Flutter setup is kind of a hassle.
Step 3 is not clear; I hope you can expand on it.
Can you make a short YouTube video covering Steps 1 to 4?
Is there a java android version of this project available?
Thanks for such a good project! Appreciate your work making the flutter version.
I:\websites\flutter\Face-Recognition-Flutter>flutter run
Using hardware rendering with device AOSP on IA Emulator. If you notice graphics artifacts, consider enabling software
rendering with "--enable-software-rendering".
Launching lib\main.dart on AOSP on IA Emulator in debug mode...
FAILURE: Build completed with 2 failures.
Where:
Build file 'C:\src\flutter.pub-cache\hosted\pub.dartlang.org\firebase_ml_vision-0.12.0+3\android\build.gradle' line: 26
What went wrong:
A problem occurred evaluating project ':firebase_ml_vision'.
Could not find the firebase_core FlutterFire plugin, have you added it as a dependency in your pubspec?
compileSdkVersion is not specified.
Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
==============================================================================
Get more help at https://help.gradle.org
BUILD FAILED in 2s
Running Gradle task 'assembleDebug'... 3.8s
Exception: Gradle task assembleDebug failed with exit code 1
I ran the model and got a Euclidean distance of 0.61 between two different faces. Most of my values for different faces hover around 0.8-0.9, occasionally around 1.008, so I want to know what threshold I should choose for high accuracy.
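There is no universally correct threshold; it has to be tuned on your own genuine/impostor pairs. A sketch of the distance computation and a tunable check (the 1.0 default below is only a starting guess, not a value from this repo):

```dart
import 'dart:math';

// Sketch: Euclidean (L2) distance between two embeddings. Given the
// distances reported above (same-ish faces below ~0.6, different faces
// around 0.8-1.0), the threshold should sit between those clusters.
double euclideanDistance(List<double> e1, List<double> e2) {
  var sum = 0.0;
  for (var i = 0; i < e1.length; i++) {
    sum += pow(e1[i] - e2[i], 2);
  }
  return sqrt(sum);
}

// Placeholder threshold: sweep it over labeled pairs from your own data
// and pick the value that minimizes false accepts + false rejects.
bool isSamePerson(List<double> e1, List<double> e2,
    {double threshold = 1.0}) {
  return euclideanDistance(e1, e2) <= threshold;
}
```

Collecting a few dozen same-person and different-person pairs and plotting the two distance distributions makes the right cut-off obvious for your specific camera and population.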
First, thank you for the inspiring work. I've run this application; the Android version works normally, but the iOS version doesn't work.
I've changed the GPU delegate code to:

```dart
final gpuDelegate = GpuDelegate(
  options: GpuDelegateOptions(true, TFLGpuDelegateWaitType.active),
);
var interpreterOptions = InterpreterOptions()..addDelegate(gpuDelegate);
final interpreter = await Interpreter.fromAsset('your_model.tflite',
    options: interpreterOptions);
```

but it still doesn't work. Is there a solution for running on iOS?
Thanks in advance.
Hello @Rajatkalsotra, very nice work. Can I ask something about the MobileFaceNet output size: is there any reason for using 192 as the output size of the MobileFaceNet model?
I run this app on my Android device and everything works fine. But when running on iOS, after debugging I found that the _convertCameraImage
method does not return results. It seems that method does too much computation, so it doesn't work on iOS, or something like that. Please give me a way to solve this problem. Thank you.
When using the back camera to detect faces, the bounding box moves opposite to the subject we are trying to detect.
Good job and a good project. Can you help me recognize only one face?
I don't want to recognize many people at once.
Currently, the performance is not very good. I want to ask if it is possible to capture an image from the camera, send it to a Python server,
and do the recognition there to lighten the app. On the web I used Socket.IO for this. Can Flutter do this too?
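Yes, Flutter can offload this; instead of Socket.IO you can simply POST each captured frame over HTTP. A sketch using the `http` package — the endpoint URL and the JSON response field `name` are placeholders for whatever your Python server actually exposes:

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

// Sketch: upload a captured JPEG frame to a recognition server and read
// back the predicted identity. 'http://your-server:5000/recognize' and
// the {"name": ...} response shape are hypothetical.
Future<String> recognizeOnServer(List<int> jpegBytes) async {
  final request = http.MultipartRequest(
      'POST', Uri.parse('http://your-server:5000/recognize'))
    ..files.add(http.MultipartFile.fromBytes('image', jpegBytes,
        filename: 'frame.jpg'));
  final response = await http.Response.fromStream(await request.send());
  return jsonDecode(response.body)['name'] as String;
}
```

Sending every preview frame will saturate the network, so in practice you would throttle uploads (e.g. one frame every second, or only when a face is detected locally).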
I get this exception: "Exception: Invalid argument(s): Failed to load dynamic library (dlopen failed: library "libtensorflowlite_c.so" not found)", and I don't know how to install this library.
Is it possible to update the migrated version of the app in the repo? Or can someone provide a link to the repo of the migrated/upgraded version of the app?