This repo contains the official code and models for the "Active Speakers in Context" CVPR 2020 paper.
This code works over face crops and their corresponding audio track; before you start training you need to preprocess the videos in the AVA dataset. We provide 3 utility files that contain the basic data to start this process; download them using ./scripts/dowloads.sh.
1. Extract the audio tracks from every video in the dataset. Go to ./data/extract_audio_tracks.py and, in main, adapt `ava_video_dir` (directory with the original AVA videos) and `target_audios` (empty directory where the audio tracks will be stored) to your local file system. The code relies on 16 kHz wav files and will fail with other formats and bit rates (see the sketch after this list).
2. Slice the audio tracks by timestamp. Go to ./data/slice_audio_tracks.py and, in main, adapt `ava_audio_dir` (the audio tracks you extracted in step 1), `output_dir` (empty directory where you will store the sliced audio files) and `csv` (the utility file you downloaded previously, use the appropriate set) to your local file system.
3. Extract the face crops by timestamp. Go to ./data/extract_face_crops_time.py and, in main, adapt `ava_video_dir` (directory with the original AVA videos), `csv_file` (the utility file you downloaded previously, use the appropriate set) and `output_dir` (empty directory where you will store the face crops) to your local file system. The full audio tracks obtained in step 1 will not be used after this point.
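If you want to sanity-check the audio preprocessing outside the provided scripts, the sketch below shows what steps 1 and 2 amount to. It assumes ffmpeg is installed and on your PATH; the file paths and the `extract_wav`/`slice_wav` helper names are placeholders, not the repo's actual code.

```python
import subprocess

def extract_wav(video_path, wav_path):
    """Extract the full audio track as 16 kHz mono WAV (the only format the scripts accept)."""
    subprocess.run([
        "ffmpeg", "-y", "-i", video_path,
        "-vn",            # drop the video stream
        "-ac", "1",       # mono
        "-ar", "16000",   # 16 kHz sample rate
        wav_path,
    ], check=True)

def slice_wav(full_wav, start_sec, end_sec, out_wav):
    """Cut the [start_sec, end_sec] segment (in seconds) out of a full audio track."""
    subprocess.run([
        "ffmpeg", "-y", "-i", full_wav,
        "-ss", str(start_sec), "-to", str(end_sec),
        out_wav,
    ], check=True)

# Placeholder usage on a single AVA video and one timestamp window.
extract_wav("ava_videos/abc123.mkv", "target_audios/abc123.wav")
slice_wav("target_audios/abc123.wav", 902.0, 903.0, "sliced_audios/abc123_0902.00.wav")
```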
Training the ASC is divided into two major stages: the optimization of the Short-Term Encoder (similar to the Google baseline) and the optimization of the Context Ensemble Network. The second stage includes the pairwise refinement and the temporal refinement, and relies on a full forward pass of the Short-Term Encoder over the training and validation sets.
Go to ./core/config.py and modify the STE_inputs dictionary so that the keys audio_dir, video_dir and models_out point to the audio clips, the face crops (those extracted in 'Before Training') and an empty directory where the STE models will be saved.
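For reference, the relevant part of the configuration would look roughly like the excerpt below. The actual STE_inputs dictionary in ./core/config.py may contain additional entries; only the three keys named above are shown, and the paths are placeholders for your local file system.

```python
# ./core/config.py (excerpt, placeholder paths)
STE_inputs = {
    'audio_dir': '/data/ava/sliced_audios',   # sliced audio clips from 'Before Training'
    'video_dir': '/data/ava/face_crops',      # face crops from 'Before Training'
    'models_out': '/experiments/ste_models',  # empty directory for STE checkpoints
    # ... keep any other entries of the original dictionary unchanged
}
```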
Execute the script `python STE_train.py clip_lenght cuda_device_number`. We used clip_lenght=11 in the paper, but it can be set to any odd value greater than 0 (performance will vary).
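For example, to reproduce the paper's setting of clip_lenght=11 on GPU 0 (the device index is just an example): `python STE_train.py 11 0`.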
The Active Speaker Context relies on the features extracted by the STE. Execute the script `python STE_forward.py clip_lenght cuda_device_number`, using the same clip_lenght as in training. Check lines 44 and 45 to obtain the lists of training and val videos; you will need both subsets for the next step.
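For example, matching the training run above: `python STE_forward.py 11 0` (0 being an example GPU index). Run the forward pass over both the training and validation video lists, since the next stage needs features for both subsets.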
Once all the STE features have been calculated, go to ./core/config.py and modify the 'ASC_inputs' dictionary so that the keys features_train_full, features_val_full and models_out point to the local directories where the train and val features are stored and to an empty directory where the ASC models will be saved. Execute `./ASC_train.py clip_lenght skip_frames speakers cuda_device_number`. clip_lenght must be the same size used to train the STE, skip_frames determines the number of frames between sampled clips (we used 4 for the results presented in the paper), and speakers is the number of candidate speakers in the context.
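As with the STE stage, the relevant configuration excerpt would look roughly like the sketch below; the paths are placeholders and the real dictionary may hold further entries.

```python
# ./core/config.py (excerpt, placeholder paths)
ASC_inputs = {
    'features_train_full': '/experiments/ste_features/train',  # STE forward pass, training set
    'features_val_full': '/experiments/ste_features/val',      # STE forward pass, validation set
    'models_out': '/experiments/asc_models',                    # empty directory for ASC checkpoints
}
```

A matching invocation with the paper's clip_lenght=11 and skip_frames=4, a hypothetical 3 candidate speakers and GPU 0 would be `python ASC_train.py 11 4 3 0`.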
Use ./ASC_forward.py to forward the models produced in the previous step.