
Facial Expression Recognition

We are exploring different models to accurately identify facial expressions from photos. This repo contains the scripts and notebooks that implement KNN and CNN classifiers for facial expression recognition.

All of the models in this repo are trained, validated, and tested using the images and labels from the Kaggle challenge Challenges in Representation Learning: Facial Expression Recognition Challenge.

  • More relevant datasets can be found here
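As a rough sketch of what loading this data can look like (assuming the challenge's fer2013.csv file, which stores each 48x48 grayscale face as a space-separated pixel string with an integer emotion label; the exact loading steps used in the notebooks may differ):

import numpy as np
import pandas as pd

# Path is an assumption; place the Kaggle csv wherever your notebook expects it.
df = pd.read_csv("data/raw/fer2013.csv")

# Decode each space-separated pixel string into a 48x48 grayscale image.
X = np.stack(
    [np.array(p.split(), dtype=np.uint8).reshape(48, 48) for p in df["pixels"]]
)
y = df["emotion"].to_numpy()  # integer labels for the expression classes

print(X.shape, y.shape)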

The full research paper can be found in the reports folder.

A demo video can be found on YouTube.

Getting Started

To get started using this repo, install the dependencies:

$ pip install -r requirements.txt --upgrade

The Models

The prototypes used for this project can be found in the notebooks directory.

Currently, the notebooks with complete models are

  • knn.ipynb
  • cnn.ipynb

Using the Models

Pretrained models (our best models) will appear in the models directory. These models can be uploaded and used directly at the end of each notebook listed above on Google Colab.
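For example, a saved CNN could be loaded and queried roughly like this (the file name `models/cnn_best.h5` and the Keras save format are assumptions; check the models directory and the end of cnn.ipynb for the actual names):

import numpy as np
from tensorflow.keras.models import load_model

# File name is an assumption; use whichever model file sits in models/.
model = load_model("models/cnn_best.h5")

# A single 48x48 grayscale face, scaled to [0, 1], with batch and channel axes.
face = np.random.rand(1, 48, 48, 1).astype("float32")  # placeholder input
probabilities = model.predict(face)
print(int(np.argmax(probabilities, axis=1)[0]))  # predicted expression class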

Training the Models

To train the models, run the notebooks listed above in sequence on Jupyter or Google Colab. Detailed instructions on how to load the data into the notebooks are included there. Each model is evaluated by its accuracy as well as its confusion matrix.
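A minimal evaluation sketch (the label arrays below are placeholders; in the notebooks they come from the held-out test split):

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = np.array([0, 3, 3, 6, 4])  # placeholder ground-truth labels
y_pred = np.array([0, 3, 4, 6, 4])  # placeholder model predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))  # rows = true classes, columns = predicted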

Useful Notes

Setup Virtual Environment

cd ./example_repo
virtualenv example_repo_env
source ./example_repo_env/bin/activate
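
With the virtual environment activated, install the project dependencies using the pip command from Getting Started above.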

Setup .env File for Python Decouple

Add your environment variables to the .env file:

PYTHONPATH="/Users/usr/PATH_TO_REPO/"

Then read them in your code as follows:

from decouple import config
config('PYTHONPATH')
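
One possible use (an assumption for illustration, not something the notebooks require) is putting the repo root on sys.path so that modules under src can be imported:

import sys
from decouple import config

# Read the repo path from the .env file and make src importable.
repo_root = config("PYTHONPATH")
if repo_root not in sys.path:
    sys.path.append(repo_root)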

Check here for more information on python-decouple.

Database Setup for PostgreSQL

To set up Postgres and an engine for a Postgres database, refer to the documentation here.
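
As a hedged sketch, a SQLAlchemy engine for a local Postgres database might be created like this (connection details are placeholders; keeping them in the .env file via python-decouple is an assumption):

from decouple import config
from sqlalchemy import create_engine, text

# Placeholder connection details; in practice they would live in the .env file.
user = config("DB_USER", default="postgres")
password = config("DB_PASSWORD", default="postgres")
database = config("DB_NAME", default="expressions")

engine = create_engine(f"postgresql://{user}:{password}@localhost:5432/{database}")

with engine.connect() as connection:
    print(connection.execute(text("SELECT 1")).scalar())  # sanity check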

Project Organization

├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org

Project based on the cookiecutter data science project template. #cookiecutterdatascience
