
Gesture Once

Overview

Gesture Once (previously known as Learn-ASL) is a machine learning project designed to recognize American Sign Language (ASL) gestures and translate them into text using the YOLOv8 object detection model and MediaPipe for hand landmark detection; the fine-tuned model reaches 90%+ accuracy on hand gesture classification. The project aims to bridge communication gaps for ASL users by providing an efficient, educational sign-to-text service for learning basic ASL, including the alphabet and many common phrases.
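
To make the pipeline concrete, here is a minimal sketch (not the project's actual code) of running a fine-tuned YOLOv8 model and MediaPipe hand landmarks on a single frame; the weights file name and sample image path are assumptions for illustration.

```python
import cv2
import mediapipe as mp
from ultralytics import YOLO

model = YOLO("best.pt")  # assumed path to the fine-tuned YOLOv8 weights
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

frame = cv2.imread("sample_sign.jpg")  # hypothetical test image
results = model(frame)[0]              # YOLOv8 detections for this frame
mp_out = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # MediaPipe expects RGB

# Print every detected gesture with its confidence score
for box in results.boxes:
    label = results.names[int(box.cls[0])]
    print(f"Detected {label} ({float(box.conf[0]):.2f})")

# Report how many hands MediaPipe located via landmarks
if mp_out.multi_hand_landmarks:
    print(f"MediaPipe found {len(mp_out.multi_hand_landmarks)} hand(s)")
```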

Features

Object Detection with YOLOv8: Recognizes ASL letters and gestures from a live video feed.
Hand Landmarks with MediaPipe: Enhances gesture recognition by aligning bounding boxes to hand landmarks.
Gesture Logging: Logs the highest predicted gesture, with its confidence score, to a text file for debugging and potential user interfaces (sketched below).
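
As a sketch of the gesture-logging idea, assuming the results object returned by the ultralytics YOLO API (the log file name is illustrative, not the repository's):

```python
def log_top_gesture(results, log_path="gesture_log.txt"):
    """Append the highest-confidence YOLOv8 detection (label + score) to a text file."""
    if len(results.boxes) == 0:
        return  # nothing detected in this frame
    best = max(results.boxes, key=lambda b: float(b.conf[0]))
    label = results.names[int(best.cls[0])]
    with open(log_path, "a") as f:
        f.write(f"{label} {float(best.conf[0]):.2f}\n")
```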

Demo

Gesture Once

IMPORTANT NOTES

Although it'd be awesome to have this deployed so others could freely test the model, deploying it would be computationally expensive, and users might run into network issues regardless. However, setting this up locally is extremely easy! Instructions are available below.

Tools and Libraries

YOLOv8 (object detection), MediaPipe (hand landmark detection), a Python backend that serves the model, and a Node.js/npm frontend client.

Use Cases:

Real-Time Conversion

The system can be engineered to detect ASL gestures from a camera feed and convert sign language into text in real time. This would enable deaf individuals to communicate more easily with those who do not understand ASL. Additionally, the system could be extended to translate ASL videos into text for users who do not know ASL.
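
A real-time loop along these lines could look like the following sketch, which is an assumed implementation rather than the repository's code; it reads webcam frames with OpenCV, runs the fine-tuned YOLOv8 model, and overlays the top prediction as text:

```python
import cv2
from ultralytics import YOLO

model = YOLO("best.pt")  # assumed path to the fine-tuned weights

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)[0]
    if len(results.boxes) > 0:
        # Pick the highest-confidence detection and draw its label on the frame
        best = max(results.boxes, key=lambda b: float(b.conf[0]))
        label = results.names[int(best.cls[0])]
        cv2.putText(frame, label, (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("Gesture Once", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```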

Educational and Training Purposes:

The system can serve as a learning platform for users who are practicing sign language. It can be developed to evaluate a user's sign language accuracy and provide instant feedback.
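
Instant feedback could be as simple as comparing the model's prediction to the letter a lesson asked for; the function below is a hypothetical sketch of that idea (names and the confidence threshold are assumptions, not part of the project):

```python
def check_attempt(predicted_label, confidence, target_letter, threshold=0.7):
    """Return a short feedback message for a single practice attempt."""
    if confidence < threshold:
        return "Hand not recognized clearly - try adjusting lighting or position."
    if predicted_label == target_letter:
        return f"Correct! That's '{target_letter}'."
    return f"Not quite - that looked like '{predicted_label}', try '{target_letter}' again."

print(check_attempt("A", 0.92, "A"))  # Correct! That's 'A'.
```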

Setting Up

NOTE

Make sure Python and Node.js (for npm) are installed.

Installation

  1. Install the necessary Python packages
pip install -r requirements.txt
  2. Change directory to the frontend and install the necessary dependencies
cd frontend/
npm install
  3. Run the client
npm run dev
  4. In a separate terminal, change directory to the backend and run the server that serves the YOLOv8 model
cd backend/
python model_api.py
  5. Start signing!

Future Work

Interface Development: Build a GUI for real-time gesture-to-text translation.
Dataset Expansion: Incorporate more ASL gestures for robust recognition.
Performance Optimization: Optimize logging and frame processing speed.

Dataset

Dataset Link: ASL Letters Dataset

Contributors

Jay Noppone Pornpitaksuk, Claudio Perinuzzi, Loyd Flores, Kenneth Guillont
