The framework consists of four main phases:
- Feature Extraction - Extract the optical flow features (u, v, ε) that represent each frame.
- Pre-processing - Remove global head motion, mask the eye regions, select the ROI, and resample the images.
- SOFTNet - A three-stream shallow architecture that takes (u, v, ε) as inputs and outputs a spotting confidence score.
- Spotting - Smooth the spotting confidence score, then perform thresholding and peak detection to obtain the spotted intervals for evaluation.
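The Spotting phase can be sketched as below. The moving-average window, the relative-threshold rule with parameter `p`, and the interval half-length `k` are illustrative assumptions, not necessarily the exact settings used in the repository:

```python
import numpy as np
from scipy.signal import find_peaks

def spot_intervals(scores, k, p=0.55):
    """Smooth per-frame confidence scores, threshold, and detect peaks.

    Sketch of the spotting phase: `k` is assumed to be roughly half the
    average expression length; the threshold rule is illustrative.
    """
    # Moving-average smoothing over a window of 2k+1 frames
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    smoothed = np.convolve(scores, kernel, mode="same")

    # Relative threshold between the mean and max of the smoothed score
    threshold = smoothed.mean() + p * (smoothed.max() - smoothed.mean())

    # Keep peaks above the threshold that are at least k frames apart
    peaks, _ = find_peaks(smoothed, height=threshold, distance=k)

    # Each peak spans [peak - k, peak + k] as a spotted interval
    return [(max(0, pk - k), pk + k) for pk in peaks]
```

Each returned interval is then compared against the ground-truth onset-offset intervals during evaluation.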
TensorFlow and Keras are used in the experiments. Two datasets containing both macro- and micro-expressions are used for training and testing:
CAS(ME)2 - http://fu.psych.ac.cn/CASME/cas(me)2-en.php
SAMM Long Videos - http://www2.docm.mmu.ac.uk/STAFF/M.Yap/dataset.php
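The three-stream SOFTNet architecture (phase 3 above) can be sketched in Keras as follows. The input size, filter counts, and dense width here are illustrative assumptions, not the exact published configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_softnet_sketch(input_shape=(42, 42, 1)):
    """Minimal sketch of a three-stream shallow CNN for spotting.

    One shallow convolutional stream per optical-flow channel (u, v, ε);
    layer sizes are assumptions for illustration.
    """
    inputs, streams = [], []
    for _ in range(3):
        inp = layers.Input(shape=input_shape)
        x = layers.Conv2D(3, (5, 5), padding="same", activation="relu")(inp)
        x = layers.MaxPooling2D((3, 3), strides=3)(x)
        inputs.append(inp)
        streams.append(x)

    # Concatenate the three streams channel-wise, then pool and regress
    merged = layers.Concatenate()(streams)
    x = layers.MaxPooling2D((2, 2), strides=2)(merged)
    x = layers.Flatten()(x)
    x = layers.Dense(400, activation="relu")(x)
    out = layers.Dense(1, activation="linear")(x)  # spotting confidence score
    return models.Model(inputs=inputs, outputs=out)
```

The key design point is that each optical-flow channel gets its own shallow stream before concatenation, so the network can learn channel-specific filters before fusing spatio-temporal motion information.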
Comparison of the proposed approach against the baseline and state-of-the-art approaches in the Third Facial Micro-Expression Grand Challenge (MEGC 2020), in terms of F1-score:
Sample visual results for SOFTNet:
The proposed SOFTNet approach outperforms other methods on CAS(ME)2 and ranks second on SAMM Long Videos. To better justify the effectiveness of SOFTNet, we experimented with a similar framework but without SOFTNet; the results show that the framework with SOFTNet is considerably more effective overall.
Visually, the SOFTNet activation units support our intuition of concatenating the optical flow features (u, v, ε) from the three streams: spatio-temporal motion information is captured when macro- and micro-expressions occur. After concatenation, Action Unit 4 (Brow Lowerer) is triggered when a disgust emotion is elicited.
Step 1) Download the datasets, CAS(ME)2 (CASME_sq) and SAMM Long Videos (SAMMLV), and place them in the following structure:
├─SOFTNet_Weights
├─Utils
├─Extraction_Preprocess.ipynb
├─SOFTNet_Spotting.ipynb
├─extraction_preprocess.py
├─load_images.py
├─load_label.py
├─main.py
├─requirements.txt
├─training.py
├─CASME_sq
│  ├─CAS(ME)^2code_final.xlsx
│  ├─cropped
│  ├─rawpic
│  ├─rawvideo
│  └─selectedpic
└─SAMMLV
   ├─SAMM_longvideos
   └─SAMM_LongVideos_V1_Release.xlsx
Step 2) Install the required packages using pip
pip install -r requirements.txt
Step 3) Dataset setting
Open main.py and change the dataset name and expression type for evaluation.
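The settings to edit look roughly like the lines below; the variable names and accepted values here are hypothetical placeholders, so check main.py itself for the actual names:

```python
# Hypothetical configuration lines in main.py; the real variable names
# and value strings may differ -- check the script before editing.
dataset_name = 'CASME_sq'             # or 'SAMMLV'
expression_type = 'micro-expression'  # or 'macro-expression'
```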
Step 4) SOFTNet Training and Evaluation
python main.py
The step-by-step code with explanations is provided here for a better understanding.
Step 1) Same as Step 1 of the Python-script workflow above (dataset download and placement).
Step 2) Feature Extraction and Pre-processing
Open Extraction_Preprocess.ipynb and run the cells following the instructions given inside.
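The ε channel is the optical strain magnitude derived from the spatial derivatives of the flow field (u, v). A minimal NumPy sketch of the standard formulation, not necessarily the notebook's exact code:

```python
import numpy as np

def optical_strain(u, v):
    """Per-pixel optical strain magnitude ε from flow fields (u, v).

    Standard strain-tensor formulation used in micro-expression
    analysis, sketched here for illustration.
    """
    # np.gradient returns derivatives along axis 0 (rows/y), then axis 1 (cols/x)
    uy, ux = np.gradient(u)
    vy, vx = np.gradient(v)
    exx = ux                 # normal strain in x
    eyy = vy                 # normal strain in y
    exy = 0.5 * (uy + vx)    # shear strain
    # ε = sqrt(εxx² + εyy² + 2·εxy²)
    return np.sqrt(exx ** 2 + eyy ** 2 + 2.0 * exy ** 2)
```

Optical strain highlights subtle facial deformations, which is why it complements the raw (u, v) flow components as a third input channel.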
Step 3) SOFTNet and Spotting
Open SOFTNet_Spotting.ipynb and run the cells following the instructions given inside. The evaluation results (TP, FP, FN, F1-score) are returned by the last cell.
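The TP/FP/FN counting for spotting can be sketched as below, using the common MEGC-style rule that a spotted interval counts as a true positive when its IoU with a ground-truth interval is at least 0.5 (a simplified sketch, not the notebook's exact implementation):

```python
def evaluate_spotting(spotted, ground_truth, iou_thresh=0.5):
    """Count TP/FP/FN and compute F1 for spotted frame intervals.

    Each interval is an inclusive (onset, offset) pair; a spotted
    interval is a TP when its IoU with an unmatched ground-truth
    interval is >= iou_thresh.
    """
    tp, matched = 0, set()
    for s0, s1 in spotted:
        for i, (g0, g1) in enumerate(ground_truth):
            if i in matched:
                continue
            inter = max(0, min(s1, g1) - max(s0, g0) + 1)
            union = (s1 - s0 + 1) + (g1 - g0 + 1) - inter
            if union > 0 and inter / union >= iou_thresh:
                tp += 1
                matched.add(i)
                break
    fp = len(spotted) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return tp, fp, fn, f1
```

For example, one correctly overlapping interval plus one spurious interval against two ground-truth intervals yields TP = 1, FP = 1, FN = 1 and an F1-score of 0.5.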
The pre-trained weights for CAS(ME)2 and SAMM Long Videos, trained separately for macro- and micro-expressions, are located in the SOFTNet_Weights folder. You may load these weights in SOFTNet_Spotting.ipynb for evaluation. Note that the results may differ slightly from those in the table above.