Object Detection Model for ASL Alphabet
In an effort to maximize inclusion and improve communication between signers and non-signers, this project uses machine learning to translate the American Sign Language fingerspelling alphabet into text and speech in real time.
To load the JS version, follow this tutorial.
To test Signans on your machine with OpenCV, follow the steps below.
Use the package manager pip to install virtualenv and create a virtual environment on your machine.
pip install virtualenv
python3.8 -m venv signans-python
On Windows, run:
signans-python\Scripts\activate.bat
pip install --upgrade pip
cd signans-python
On Unix or MacOS, run:
source signans-python/bin/activate
pip install --upgrade pip
cd signans-python
git clone https://github.com/brunobpr/Signans
git clone https://github.com/tensorflow/models.git
Install the required packages:
pip install -r Signans/requirements.txt
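Optionally, you can confirm that the core dependencies installed correctly before continuing. This is only a quick sanity check and assumes the requirements file provides TensorFlow and OpenCV:

```python
# Quick sanity check: confirm TensorFlow and OpenCV import and print their versions.
import tensorflow as tf
import cv2

print("TensorFlow:", tf.__version__)
print("OpenCV:", cv2.__version__)
```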
Move the models to the python environment:
On Windows, run:
move "models\research\object_detection" "Lib\site-packages"
move "models\official" "Lib\site-packages"
cd Signans
On Unix or MacOS, run:
mv models/research/object_detection lib/python3.8/site-packages
mv models/official lib/python3.8/site-packages
cd Signans
python Signans.py
Once the video capturing window opens, use the ASL alphabet to finger-spell words or sentences. There are two extra signs: space and dot.
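Signans.py takes care of the capture and detection loop. As a rough illustration only, a frame-by-frame loop with OpenCV and a TensorFlow SavedModel could look like the sketch below; the model path, label list, and confidence threshold are placeholders, not the project's actual values:

```python
# Illustrative sketch of a webcam detection loop (Signans.py implements the real one).
# The model path, labels and threshold below are placeholders.
import cv2
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")  # placeholder path
labels = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + ["space", "dot"]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The Object Detection API expects a batched uint8 RGB tensor.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    detections = detect_fn(tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8))
    score = float(detections["detection_scores"][0][0])
    class_id = int(detections["detection_classes"][0][0])
    if score > 0.8:  # placeholder confidence threshold
        letter = labels[class_id - 1]  # class ids start at 1
        cv2.putText(frame, letter, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Signans", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```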
To change the detection speed, run the script with the argument --speed or -s:
python Signans.py --speed {time in seconds}
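For example, with an illustrative value of two seconds:

python Signans.py --speed 2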
Place your API credentials file in signans-python/Signans and rename it authentication.json.
At the moment the supported languages are:
| Language   | Code |
|:----------:|-----:|
| French     | fr   |
| Spanish    | es   |
| Portuguese | pt   |
| Mandarin   | zh   |
To change the language, run the script with the argument --language or -l:
python Signans.py --language {code}
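For example, to translate into French (code fr from the table above):

python Signans.py --language fr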
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.