Sign2Sub

Overview

In many Southeast Asian countries, sign language support is scarce and underfunded, and few hearing people know how to sign. This creates challenges for deaf and hard-of-hearing people communicating with those around them, especially in schools and workplaces, where it limits social inclusion, equal access to information, and opportunities.

Solution

Sign2Sub is a computer vision application that converts sign language gestures into subtitle text in real time, using MediaPipe and OpenCV to recognize hand movements and translate them into words and sentences. Our goal is to improve accessibility and communication between hearing-impaired users and the broader community.

[Figure: A person demonstrating a sign language gesture, with detected hand landmarks overlaid and the recognized word 'Bathroom' shown as a subtitle alongside a confidence percentage.]
  • Core Technology (illustrated in the sketch after this list)
    • Computer vision: analyzing hand gestures from live webcam video.
    • OpenCV: handling webcam access and video frame preprocessing.
    • MediaPipe: detecting hands and tracking their movements.
    • Teachable Machine (Google): training a basic gesture recognition model for word-level sign language.
  • Key Features
    • Real-time hand gesture recognition using a webcam
    • Automatic conversion of recognized signs into subtitle text
    • Simple and accessible user interface for live demonstrations
  • User Benefits
    • Improves communication for deaf and hard-of-hearing individuals
    • Reduces reliance on human interpreters in basic interactions
    • Supports inclusive education and accessibility initiatives
  • What Makes It Unique
    • Works without audio, unlike speech-to-text tools
    • Focuses on accessibility and real-time performance
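The sketch below shows, under stated assumptions, how these pieces could fit together: OpenCV captures webcam frames, MediaPipe Hands detects and draws landmarks, and a Keras model exported from Teachable Machine classifies each frame into a word. The file names keras_model.h5 and labels.txt follow Teachable Machine's default export; the label-file format (one class name per line) and the subtitle layout are assumptions, not the project's exact implementation.

```python
# Minimal sketch of the Sign2Sub pipeline (assumed, not the exact project code):
# OpenCV reads webcam frames, MediaPipe Hands overlays landmarks, and a Keras
# model exported from Teachable Machine predicts the signed word per frame.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("keras_model.h5")    # Teachable Machine export
labels = [line.strip() for line in open("labels.txt")]  # assumed: one class per line

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=1,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB
        results = hands.process(rgb)
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        # Teachable Machine image models expect 224x224 input scaled to [-1, 1]
        img = cv2.resize(rgb, (224, 224)).astype(np.float32) / 127.5 - 1.0
        probs = model.predict(img[None, ...], verbose=0)[0]
        word, conf = labels[int(probs.argmax())], probs.max()
        # Draw the predicted word and confidence as a subtitle
        cv2.putText(frame, f"{word} ({conf:.0%})", (10, frame.shape[0] - 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
        cv2.imshow("Sign2Sub", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
            break
cap.release()
cv2.destroyAllWindows()
```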

Conclusion

The system demonstrates that sign language gestures (ASL) can be effectively recognized using motion-based analysis. Initial experiments show promising accuracy (40%-90% across 25 words) in identifying signs and converting them into text output.

The current system focuses on word-level recognition, which is suitable for live demonstrations and early-stage deployment.

Next steps:

  • Recognize more words and sentences.
  • Transition from no-code tools to custom-trained deep learning models using Python-based frameworks for greater control and accuracy (see the sketch after this list).
  • Integrate artificial intelligence to assemble recognized words into coherent sentences.
  • Expand to other sign languages in Southeast Asia, starting with Khmer Sign Language (Cambodia).
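As a hedged illustration of that transition, the sketch below trains a small Keras classifier on MediaPipe hand-landmark vectors (21 landmarks × 3 coordinates = 63 features per frame) instead of raw images. The dataset files landmarks.npy and labels.npy, the class count, and the architecture are hypothetical placeholders, not the project's actual data or model.

```python
# Hypothetical sketch of a custom-trained replacement for Teachable Machine:
# a small Keras classifier over flattened MediaPipe hand landmarks.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 25             # e.g., the 25 words from the initial experiment
X = np.load("landmarks.npy")  # placeholder: shape (n_samples, 63)
y = np.load("labels.npy")     # placeholder: shape (n_samples,), integer class ids

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(63,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),                  # regularize the small model
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=30, batch_size=32, validation_split=0.2)
model.save("sign2sub_classifier.keras")
```

Training on landmark coordinates rather than raw pixels keeps the model small and less sensitive to lighting and background, which suits real-time use on modest hardware.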

Overall, Sign2Sub highlights the potential of visual-based AI solutions to promote inclusive communication and improve accessibility for deaf and hard-of-hearing communities.

About the team

No. | Full Name          | Major                                | Institution                       | Role
1   | Chea Rithea Vatey  | Computer Science                     | University of Pécs                | Project Lead & AI Model Developer
2   | Soun Somanith      | Software Development                 | American University of Phnom Penh | Backend Developer & System Integration Lead
3   | Heang Sophea Vatey | Information Communication Technology | American University of Phnom Penh | Research & Documentation Lead
4   | Mey Bunlong        | Information Communication Technology | American University of Phnom Penh | Product Development & Communication Lead

Motivation
The team is motivated by the goal of using technology, and what we learn in school, to create inclusive solutions that address real-world problems in Southeast Asia, starting in our home country, Cambodia.

We hope that Sign2Sub will contribute to improved accessibility and more inclusive communication through technology.

[Logo: Sign2Sub, featuring a hand making a peace sign overlaid with abstract green and blue design elements.]