Code for "How important is motion in sign language translation?" published by IET Computer Vision
setup2.py --> This is the file used to preprocess the original dataset videos. It applies subsampling, oversampling and flipping operations to the original frames and stores the processed frames in the results folder as .npy files.
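As a rough illustration of the preprocessing above, the following is a minimal NumPy sketch of frame resampling (subsampling when the clip is too long, oversampling when it is too short), horizontal flipping, and saving to .npy. The function names here are hypothetical; the actual implementations live in the repository's utils files.

```python
import numpy as np

def resample_frames(frames, target_len):
    # Evenly spaced indices over the clip: this drops frames (subsampling)
    # when the clip is longer than target_len, and repeats frames
    # (oversampling) when it is shorter.
    idx = np.linspace(0, len(frames) - 1, target_len).astype(int)
    return frames[idx]

def flip_horizontal(frames):
    # Mirror each frame along the width axis; frames has shape (T, H, W, C).
    return frames[:, :, ::-1, :]

# Example: a dummy 10-frame clip resampled to 16 frames, flipped, saved as .npy
clip = np.random.rand(10, 64, 64, 3).astype(np.float32)
processed = flip_horizontal(resample_frames(clip, 16))
np.save("clip_0001.npy", processed)
```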
mainp.py --> This file calls the modules that build the model specified in the Notebook, creates the data generators and launches the training.
NB_SLT_RUN.ipynb --> Notebook to run the entire proposed approach.
Performance_Evaluation_Notebook.ipynb --> This Notebook evaluates the saved translations with the metrics provided by NSLT.
/utils/augmentation.py --> This file contains the frame-sampling functions used to augment the Colombian data.
/utils/generators.py --> This file contains the data generators used to train the models.
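The generators in this file feed preprocessed clips to the model during training. A minimal sketch of such a generator (hypothetical; the real ones in /utils/generators.py differ), which loads the .npy clips produced by the preprocessing step batch by batch:

```python
import numpy as np

def video_batch_generator(npy_paths, labels, batch_size):
    # Yields (batch_of_clips, batch_of_labels) tuples indefinitely,
    # loading each preprocessed .npy clip from disk on demand so the
    # whole dataset never has to fit in memory.
    n = len(npy_paths)
    while True:
        order = np.random.permutation(n)  # reshuffle every epoch
        for start in range(0, n - batch_size + 1, batch_size):
            batch = order[start:start + batch_size]
            x = np.stack([np.load(npy_paths[i]) for i in batch])
            y = np.stack([labels[i] for i in batch])
            yield x, y
```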
/utils/utils.py --> This file contains important functions such as the oversampling and subsampling functions used to augment the data.
/models/languageModels/translationModels.py --> This file contains and creates the different translation models depending on the selected parameters. It also contains the feature extraction network (based on the LTC) and the attention modules used.
/models/learningModels/learningKerasModels.py --> This file contains the function that trains the created model.
/results/dataTrain_phoenix_sentences/dicts/ --> This folder contains the dictionaries used to create the OneHot Encoding vectors for each video.
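To illustrate how those dictionaries are used, here is a minimal sketch (hypothetical function and variable names) of turning a target sentence into one-hot vectors from a word-to-index dictionary:

```python
import numpy as np

def sentence_to_onehot(sentence, word2idx):
    # Map each token to its dictionary index, then to a one-hot row vector
    # of size len(word2idx).
    vocab_size = len(word2idx)
    indices = [word2idx[w] for w in sentence.split()]
    onehot = np.zeros((len(indices), vocab_size), dtype=np.float32)
    onehot[np.arange(len(indices)), indices] = 1.0
    return onehot

# Toy dictionary of the kind stored in the dicts/ folder
word2idx = {"<pad>": 0, "hello": 1, "world": 2}
vec = sentence_to_onehot("hello world", word2idx)  # shape (2, 3)
```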
Download the training weights and translation results
Work done during my master's degree in Computer Science at the Universidad Industrial de Santander, Colombia, within the research group "Biomedical Imaging, Vision and Learning Laboratory". This project performs video translation of Colombian Sign Language and German Sign Language.