This project is an application that recognizes static American Sign Language (ASL) gestures using a selection of user-trainable, customizable Convolutional Neural Network (CNN) architectures:
- LeNet5
- AlexNet
- Custom-built CNN model (see the sketch below)
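A minimal sketch of what one of these architectures could look like, assuming a Keras backend; the project's actual framework, layer choices, and hyperparameters are not documented here and may differ:

```python
# Hypothetical LeNet-5-style CNN for 28x28 grayscale ASL images.
# 24 classes: the 26 letters minus the motion-based J and Z.
from tensorflow.keras import layers, models

def build_lenet5(num_classes: int = 24) -> models.Model:
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(6, kernel_size=5, padding="same", activation="tanh"),
        layers.AveragePooling2D(pool_size=2),
        layers.Conv2D(16, kernel_size=5, activation="tanh"),
        layers.AveragePooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```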
The application also accepts webcam input, so users can capture images and test the accuracy of trained models.
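A rough sketch of that webcam flow, assuming OpenCV (`cv2`) for capture and a Keras-style model with 24 contiguous output classes; the function, preprocessing, and label-to-letter mapping below are illustrative assumptions, not the project's actual code:

```python
# Hypothetical webcam capture + prediction flow.
import cv2
import numpy as np

# Alphabet with the motion-based J and Z removed (24 letters),
# assuming the model's classes are in this order.
LETTERS = "ABCDEFGHIKLMNOPQRSTUVWXY"

def capture_and_predict(model):
    cap = cv2.VideoCapture(0)          # default webcam
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the webcam")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    img = cv2.resize(gray, (28, 28)).astype(np.float32) / 255.0
    probs = model.predict(img.reshape(1, 28, 28, 1))
    return LETTERS[int(np.argmax(probs))]
```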
The aim in building this was not only to strengthen our design skills and our understanding of AI training, but also to give people a stepping stone they can use to build, train, and test models with ease.
The MNIST ASL dataset used can be found on Kaggle. Note that J and Z are not included in training or testing, as they are motion-based gestures.
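The Kaggle CSVs (e.g. `sign_mnist_train.csv`) store one image per row: a `label` column followed by 784 pixel values. A hedged sketch of a loader follows; it is an illustration, not the project's actual loading code, and the remapping assumes the dataset's label numbering, which skips 9 (J):

```python
# Hypothetical loader for the Sign Language MNIST CSV layout.
import numpy as np
import pandas as pd

def load_sign_mnist(csv_path: str):
    df = pd.read_csv(csv_path)
    labels = df["label"].to_numpy()
    # Labels run 0-24 with 9 (J) absent; shift 10-24 down by one
    # so the 24 classes are contiguous (0-23).
    labels[labels > 9] -= 1
    images = df.drop(columns="label").to_numpy(dtype=np.float32)
    images = images.reshape(-1, 28, 28, 1) / 255.0   # normalize to [0, 1]
    return images, labels
```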
The application is built on Python v3.9. Download the zip file or clone the repository using `git clone`.
The dependencies needed to run this application are listed in the `requirements.txt` file. It is best to create a conda environment that uses Python v3.9, navigate to the folder containing `requirements.txt`, and run the following command in the terminal:
pip install -r requirements.txt
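Put together, a full setup might look like the following; the environment name `asl-cnn` is only a placeholder:

```bash
conda create -n asl-cnn python=3.9   # hypothetical environment name
conda activate asl-cnn
cd <path-to-repository>              # folder containing requirements.txt
pip install -r requirements.txt
```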
To run the program, run `python3 main.py` in the terminal.
Planned future work:
- Allow live ASL detection via webcam to support motion-based gestures such as J and Z.
- Create a way for users to load their own custom models.