This project converts American Sign Language (ASL) to English text in real-time video using Convolutional Neural Networks (CNNs).
train.py
main.py
src/
 |- augments.py
 |- dataset.py
 |- models.py
 |- trainer.py
requirements.txt
image-data-file.csv
lb.pkl
main.py: contains the code to access the webcam and make predictions.
train.py: contains the code to train the model.
src/augments.py: defines the train and validation augmentations.
src/dataset.py: defines the train and validation Dataset classes.
src/models.py: defines the custom neural network model.
src/trainer.py: contains the code to train one epoch of the model.
image-data-file.csv: has 2 columns, image_path and the corresponding targets (labels encoded using scikit-learn).
lb.pkl: a saved scikit-learn label encoder used to encode the labels.
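The saved encoder is what turns one-hot model outputs back into letters at prediction time. A minimal sketch, assuming lb.pkl holds a fitted scikit-learn LabelBinarizer (the class names "A", "B", "C" here are placeholders, not the real dataset's labels):

```python
import pickle

from sklearn.preprocessing import LabelBinarizer

# Fit an encoder on some example labels (assumption: the real lb.pkl
# was produced the same way, fit on the ASL dataset's class names).
lb = LabelBinarizer()
lb.fit(["A", "B", "C"])

# Save the fitted encoder to disk, as lb.pkl is saved in this repo.
with open("lb.pkl", "wb") as f:
    pickle.dump(lb, f)

# Later (e.g. in main.py), load it to decode one-hot predictions.
with open("lb.pkl", "rb") as f:
    lb = pickle.load(f)

one_hot = lb.transform(["B"])          # e.g. [[0, 1, 0]]
print(lb.inverse_transform(one_hot))   # prints ['B']
```

Pickling the fitted encoder guarantees that training and inference use the exact same label-to-index mapping.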
You can download the data from here.
Take the downloaded .zip file and extract it into a new folder, dataset/, then delete the space/ folder from dataset/.
Make sure the dataset/ folder is at the same directory level as the train.py file.
Note: if you already have the requirements (libraries and packages) listed in requirements.txt installed, skip this step.
$ conda create --name <env> --file requirements.txt
If you have done the above steps correctly, running the train.py script should not produce any errors.
To run training, open the terminal and change your working directory to the same level as the train.py file.
$ python train.py
This should start training within a few seconds, and you should see a progress bar.
If you don't want to train the model yourself and just want to make predictions, install the required packages by following the environment step above.
Then run the following command:
$ python main.py
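At prediction time, main.py roughly follows this pattern: grab a frame from the webcam, crop a region of interest, run the model, and decode the label with the saved encoder. A hedged sketch of just the decode step (predict_frame, model, and lb are illustrative names, not the repo's actual code; real code would also resize the crop to the model's input size):

```python
import numpy as np

def predict_frame(frame, model, lb):
    """Preprocess one frame and return the predicted letter.

    `model` is any callable returning class scores of shape
    (1, num_classes); `lb` is a fitted scikit-learn LabelBinarizer.
    Both are assumptions about the real script's interfaces.
    """
    # Crop a centred square region of interest, normalised to [0, 1].
    h, w = frame.shape[:2]
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    roi = frame[y0:y0 + s, x0:x0 + s].astype(np.float32) / 255.0

    # Run the model and turn the argmax back into a one-hot row,
    # which the label encoder can map to a letter.
    scores = model(roi)
    one_hot = np.eye(scores.shape[1])[[scores.argmax()]]
    return lb.inverse_transform(one_hot)[0]
```

In the actual script the frame would come from a webcam loop (e.g. OpenCV's VideoCapture), with the predicted letter drawn onto each frame.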
If you have any query regarding the code or anything else, please open an Issue and I'll be happy to help!