Comments (20)
I plan to do that as soon as I get time to clean it up, hopefully within the coming weeks.
from clmtrackr.
Much appreciated.
I have a question: is it possible to track more than one face on screen?
Hi Harold,
not out of the box. Theoretically it might be possible to track two faces by creating two instances of clmtrackr, and initializing them on separate bounding boxes on different parts of the screen. But you would have to fiddle with the code to get this to work properly. In practice, I think the biggest problem would be that it would run very slow, since running one instance of clmtrackr is already pretty heavy on most computers.
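The split-screen approach described above could be sketched like this. The helper below is purely illustrative and is not part of clmtrackr; only the idea of starting separate tracker instances on separate bounding boxes comes from the comment.

```javascript
// Hypothetical helper: split a video frame into left/right halves,
// returned as [x, y, width, height] bounding boxes. Each box could
// then be handed to its own clmtrackr instance, e.g.
//   tracker.start(videoElement, box);
// Expect this to run slowly, as noted above.
function splitIntoBoundingBoxes(frameWidth, frameHeight) {
  var half = Math.floor(frameWidth / 2);
  return [
    [0, 0, half, frameHeight],                 // left half
    [half, 0, frameWidth - half, frameHeight]  // right half
  ];
}
```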
Hello, is it possible to detect a particular gesture by changing the tracking parameters? In other words, to track a gesture on a face rather than the face itself, so that if there are multiple faces, only that particular gesture is captured. Or am I wrong?
Hi, I'm not sure what you mean by gesture, could you clarify?
Hi, great project! I don't know if @haroldSanchezb means the same, but what I'm trying to do is detect changes on the face. For example, if I ask the user to smile or to move their eyelashes, I want to know if they are doing so. I thought the easiest way would be comparing the position of the eyelashes or of the mouth before and after asking them to make the gesture. What do you suggest for tracking this?
Hello @djabif, I want to do the same, but the system doesn't detect gestures out of the box. To allow that, what I'm doing is recording the parameters for each gesture, and when the person makes a gesture, comparing against the recorded ones and reporting which gesture had the highest confidence.
If you're interested in detecting specific movements of the face, like close/open mouth, smile, etc., the easiest way is probably to get the parameters of the fitted facial model, via getCurrentParameters(). This will return an array of 24 values that determine the pose of the face model. The first four parameters are just scale, rotation and coordinates, so you can ignore these; the remaining parameters describe the pose. You can play with these pose parameters in an example I put up here:
http://auduno.github.io/clmtrackr/examples/modelviewer_pca.html
So for detecting a smile, you could for instance check whether component 4 or 7 (note that these will be parameters 8 and 13 in the parameter array) is larger than some threshold. I've put together a demo of emotion detection using clmtrackr that I've been meaning to put out; I'll see if I can publish it this weekend.
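As a minimal sketch of this thresholding idea (the threshold value below is a made-up example, not a calibrated number; `params` stands for the array returned by getCurrentParameters()):

```javascript
// Hypothetical smile check: parameters 8 and 13 of the fitted model
// correspond to components 4 and 7 in the model viewer. The threshold
// is an illustrative guess; tune it against your own webcam feed.
var SMILE_THRESHOLD = 5;

function looksLikeSmile(params) {
  return params[8] > SMILE_THRESHOLD || params[13] > SMILE_THRESHOLD;
}
```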
Thank you very much for the answer! If you can publish that example it would be awesome!!
Hi @auduno, thanks! Today I started playing with the positions of the points, and with those I could more or less calculate when someone opened their mouth, lol. But what you describe would be better!
Hi @auduno, how are you? I wanted to ask about the example you were going to post. When might that be possible? :P
Sorry, I haven't really had time to put it out, and I can't really promise anything in the coming days, since I have three exams this week.. :( I don't think I'll have time until next weekend to look at it.
I've put out the example of emotion detection with clmtrackr here now:
http://auduno.github.io/clmtrackr/examples/clm_emotiondetection.html
This is a bit more than simple thresholds on some of the parameters, it's a logistic regression model on all of the parameters, but the general idea is the same as what I described. I'm going to release the code for training models as well soon!
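The general idea of scoring one emotion with a logistic regression model over the parameter array can be sketched like this. The model object here is a made-up example, not the actual contents of emotionmodel.js.

```javascript
// Logistic regression score: bias plus a weighted sum of the pose
// parameters, squashed through the logistic function to give a value
// between 0 and 1 that can be read as a confidence for one emotion.
function emotionScore(params, model) {
  var z = model.bias;
  for (var i = 0; i < model.coefficients.length; i++) {
    z += model.coefficients[i] * params[i];
  }
  return 1 / (1 + Math.exp(-z));
}
```

Presumably one such model is evaluated per emotion, and the emotion with the highest score wins.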
Thanks for the example!!!
Hi,
I am really new to this tracking thing and also want to create an app to detect when a mouth is open, closed, or makes an "O" face (hehehe)... to put it in another way, I want to track a person chewing gum.
I was using getCurrentPosition but I had difficulty seeing any change between points 44 and 50 (the corners of the mouth). I read above to use getCurrentParameters, and then saw you did something with emotion, which may be the best approach, although I have no clue what those coefficients are (are they x, y and z positions?) or what bias does, let alone what the difference is between each model (svm 20 vs svm 10, etc.)...
Is there any documentation around this to understand what exactly is happening, what these data matrices are exactly, and how to generate new ones or tweak them?
I think if I have a much better understanding of how this works, I could create an emotionModel with points that would be used to detect these mouth states I need.
Hope this makes sense.
Hi, there isn't any documentation for this way of detecting emotions yet. I plan to write some articles on it during the holidays and put out code for creating models.
The coefficients and "bias" in the emotionmodel.js file are trained parameters for a logistic regression model. It would be possible to train such a model yourself; however, you'd have to have access to annotated, classified faces, which might be a problem. I've collected some annotations myself, and plan to make those public soon, but most facial images that are available have some kind of copyright, which makes it hard to make them public for this kind of open-source project.
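Training such a logistic regression model is, at its core, just gradient descent on the log-loss over annotated examples. A toy sketch, assuming each example is labeled 0 or 1 for the target emotion (this is illustrative, not the actual training code):

```javascript
// Toy logistic regression trainer. Each example is
// { params: [...], label: 0 or 1 }. Plain stochastic gradient steps
// on the log-loss; no regularization, shuffling, or early stopping.
function trainLogistic(examples, dims, steps, learningRate) {
  var bias = 0;
  var coef = new Array(dims).fill(0);
  for (var s = 0; s < steps; s++) {
    for (var e = 0; e < examples.length; e++) {
      var x = examples[e].params;
      var z = bias;
      for (var i = 0; i < dims; i++) z += coef[i] * x[i];
      var predicted = 1 / (1 + Math.exp(-z));
      var error = examples[e].label - predicted; // gradient of the log-loss
      bias += learningRate * error;
      for (var j = 0; j < dims; j++) coef[j] += learningRate * error * x[j];
    }
  }
  return { bias: bias, coefficients: coef };
}
```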
I think the best bet is to take a look at this modelviewer:
http://auduno.github.io/clmtrackr/examples/modelviewer_pca.html
If you're interested only in detecting an open mouth, for instance, parameter 7 in getCurrentParameters() is a pretty good indicator of whether the mouth is open (note that this is component 3 in the modelviewer, since the first four parameters are just scale, rotation and location parameters). So you could create an "open mouth" detector by checking whether parameter 7 is lower than some threshold, -5 for instance. Hope this is helpful!
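That check, written out as a sketch (the -5 threshold is the example value from the comment, not a calibrated constant):

```javascript
// Open-mouth detector: parameter 7 of the fitted model (component 3
// in the model viewer) drops as the mouth opens, so a value below the
// threshold is read as "mouth open". Tune the threshold per setup.
var OPEN_MOUTH_THRESHOLD = -5;

function isMouthOpen(params) {
  return params[7] < OPEN_MOUTH_THRESHOLD;
}
```

Here `params` would be the array returned by getCurrentParameters(), polled on each tracking frame.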
Thanks this is great. Will try it out and see what I can do :)
The model training code is now available, with some documentation, at https://github.com/auduno/clmtools
@auduno I'm trying to increase the number of emotions, so I'd like to know: how many images are enough for a single emotion, in your experience? And do you have any recommendations for choosing the right image samples? Thanks!