
Comments (20)

auduno commented on May 20, 2024

I plan to do that as soon as I get time to clean it up, hopefully within the coming weeks.

mdqyy commented on May 20, 2024

Much appreciated.

haroldSanchezb commented on May 20, 2024

I have a question: is it possible to track more than one face on screen?

auduno commented on May 20, 2024

Hi Harold,
Not out of the box. Theoretically it might be possible to track two faces by creating two instances of clmtrackr and initializing them on separate bounding boxes in different parts of the screen, but you would have to fiddle with the code to get this to work properly. In practice, I think the biggest problem would be that it would run very slowly, since running one instance of clmtrackr is already pretty heavy on most computers.
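
As a rough, untested sketch of the two-instance idea (this assumes start() accepts an optional [x, y, width, height] bounding box and that pModel is the face model you would normally pass to init(); check your clmtrackr version's API before relying on it):

    // Untested sketch: one tracker instance per half of the video.
    // Assumes start() takes an optional [x, y, width, height] box.
    var video = document.getElementById('videoInput');
    var leftTracker = new clm.tracker();
    var rightTracker = new clm.tracker();
    leftTracker.init(pModel);
    rightTracker.init(pModel);
    leftTracker.start(video, [0, 0, video.width / 2, video.height]);
    rightTracker.start(video, [video.width / 2, 0, video.width / 2, video.height]);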

haroldSanchezb commented on May 20, 2024

Hello, is it possible to detect a gesture by changing the tracking parameters? That is, to follow the gesture of a face rather than the face itself, so that if there are multiple faces, only the particular gesture is captured. Or am I wrong?

auduno commented on May 20, 2024

Hi, I'm not sure what you mean by "gesture"; could you clarify?

djabif commented on May 20, 2024

Hi, great project! I don't know if @haroldSanchezb means the same thing, but what I'm trying to do is detect changes in the face: for example, if I ask the user to smile or to move their eyelashes, I want to know whether they are doing so. I thought the easiest way would be to compare the position of the eyelashes or the mouth before and after asking them to make the gesture. What do you suggest for tracking this?

haroldSanchezb commented on May 20, 2024

Hello @djabif, I want to do the same, but the system doesn't detect gestures. To get around that, what I'm doing is recording the parameters for each gesture, and when the person makes a gesture, comparing against those recordings and reporting which gesture had the highest confidence.
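
A hypothetical sketch of that nearest-template comparison (the helpers and gesture names below are invented for illustration and are not part of clmtrackr; it assumes getCurrentParameters() returns a plain array):

    // Hypothetical helpers: store one parameter snapshot per gesture,
    // then report whichever stored gesture is closest to the current one.
    var gestureTemplates = {}; // e.g. { smile: [...], openMouth: [...] }

    function recordGesture(name, ctracker) {
      gestureTemplates[name] = ctracker.getCurrentParameters().slice();
    }

    function closestGesture(ctracker) {
      var current = ctracker.getCurrentParameters();
      var bestName = null;
      var bestDist = Infinity;
      for (var name in gestureTemplates) {
        var template = gestureTemplates[name];
        var dist = 0;
        // skip the first four parameters (scale, rotation, position)
        for (var i = 4; i < current.length; i++) {
          dist += Math.pow(current[i] - template[i], 2);
        }
        if (dist < bestDist) {
          bestDist = dist;
          bestName = name;
        }
      }
      return bestName;
    }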

auduno commented on May 20, 2024

If you're interested in detecting specific movements of the face, like close/open mouth, smile, etc., the easiest way is probably to get the parameters of the fitted facial model via getCurrentParameters(). This returns an array of 24 values that determine the pose of the face model. The first four parameters are just scale, rotation, and coordinates, so you can ignore those; the remaining parameters describe the pose. You can play with these pose parameters in an example I put up here:

http://auduno.github.io/clmtrackr/examples/modelviewer_pca.html

So for detecting a smile, you could for instance check whether component 4 or 7 (note that these will be parameters 8 and 13 in the parameter array) is larger than some threshold. I've put together a demo of emotion detection using clmtrackr that I've been meaning to put out; I'll see if I can publish it this weekend.
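
In code, such a check might look like the following sketch (the threshold value is invented and would need tuning, and the 8/13 indices assume the numbering described above):

    // Sketch of a threshold-based smile check; the threshold is a
    // made-up value that would need tuning against real faces.
    var params = ctracker.getCurrentParameters();
    // components 4 and 7 in the modelviewer map to indices 8 and 13
    // here, since the first four entries are scale, rotation and position
    var SMILE_THRESHOLD = 1.5; // hypothetical value
    if (params[8] > SMILE_THRESHOLD || params[13] > SMILE_THRESHOLD) {
      console.log('probably smiling');
    }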

djabif commented on May 20, 2024

Thank you very much for the answer! If you can publish that example, it would be awesome!!

haroldSanchezb commented on May 20, 2024

Hi @auduno, thanks! Today I started playing with the positions of the points, and with that I could more or less tell when someone opened their mouth, lol, but what you describe would be better!

haroldSanchezb commented on May 20, 2024

Hi @auduno, how are you? I wanted to ask about the example you were going to post: when would that be possible? :P

auduno commented on May 20, 2024

Sorry, I haven't really had time to put it out, and I can't really promise anything in the coming days, since I have three exams this week... :( I don't think I'll have time to look at it until next weekend.

auduno commented on May 20, 2024

I've put out the example of emotion detection with clmtrackr here now:
http://auduno.github.io/clmtrackr/examples/clm_emotiondetection.html
This is a bit more than simple thresholds on some of the parameters: it's a logistic regression model on all of the parameters, but the general idea is the same as what I described. I'm going to release the code for training models as well soon!
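
The scoring step of such a logistic regression model is simple; here's a minimal sketch, with bias and coefficients standing in for the trained values shipped in emotionmodel.js:

    // Minimal logistic-regression scoring sketch; `bias` and
    // `coefficients` stand in for the trained values in emotionmodel.js.
    function emotionScore(parameters, bias, coefficients) {
      var sum = bias;
      for (var i = 0; i < coefficients.length; i++) {
        sum += coefficients[i] * parameters[i];
      }
      // the logistic function squashes the sum into a (0, 1) confidence
      return 1 / (1 + Math.exp(-sum));
    }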

djabif commented on May 20, 2024

Thanks for the example!!! πŸ‘

ErenPhayte commented on May 20, 2024

Hi,

I am really new to this tracking thing, and I want to create an app that detects when a mouth is open, closed, or makes an "O" face (hehehe)... to put it another way, I want to track a person chewing gum.

I was using getCurrentPosition, but I had difficulty seeing any change between points 44 and 50 (the corners of the mouth). I read above to use getCurrentParameters, and then saw you did something with emotion detection, which may be the best approach, although I have no clue what those coefficients are (are they x, y, and z positions?) or what bias does, let alone what the difference is between each model (svm 20 vs. svm 10, etc.)...

Is there any documentation around this to explain what exactly is happening, what these data matrices are, and how to generate new ones or tweak them?

I think if I had a much better understanding of how this works, I could create an emotionModel with points to detect the mouth states I need.

Hope this makes sense.

auduno commented on May 20, 2024

Hi, there isn't any documentation for this way of detecting emotions yet. I plan to write some articles on it during the holidays and put out code for creating models.

The coefficients and "bias" in the emotionmodel.js file are trained parameters for a logistic regression model. It would be possible to train such a model yourself; however, you'd need access to annotated, classified faces, which might be a problem. I've collected some annotations myself and plan to make those public soon, but most facial images that are available have some kind of copyright, which makes them hard to release for these kinds of open-source projects.

I think the best bet is to take a look at this modelviewer:
http://auduno.github.io/clmtrackr/examples/modelviewer_pca.html
If you're only interested in detecting an open mouth, for instance, parameter 7 in getCurrentParameters() is a pretty good indicator of whether the mouth is open (note that this is component 3 in the modelviewer, since the first four parameters are just scale, rotation, and location parameters). So you could create an "open mouth" detector by checking whether parameter 7 is lower than some threshold, -5 for instance. Hope this is helpful!
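
As a sketch, using the -5 example threshold from above (which may need adjustment for a given camera and face):

    // Open-mouth check using the example threshold from the comment
    // above; -5 may need adjustment in practice.
    function isMouthOpen(ctracker) {
      var params = ctracker.getCurrentParameters();
      return params[7] < -5; // parameter 7 tracks mouth open/close
    }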

ErenPhayte commented on May 20, 2024

Thanks, this is great. Will try it out and see what I can do :)

auduno commented on May 20, 2024

The model training code is now available, with some documentation, at https://github.com/auduno/clmtools

hyzhak commented on May 20, 2024

@auduno I'm trying to increase the number of emotions, so I'd like to know: in your experience, how many images are enough for a single emotion? And do you have any recommendations about choosing the right image samples? Thanks!
