Sign2Sub
Overview
In many Southeast Asian countries, sign language support is scarce and underfunded, and few hearing people know how to sign. This creates challenges for deaf and hard-of-hearing people when communicating with others. The problem is especially visible in schools and workplaces, where it limits social inclusion, equal access to information, and opportunities.
Solution
Sign2Sub is a computer vision-based application that converts sign language gestures into subtitle text in real time, using MediaPipe and OpenCV to recognize hand movements and translate them into words and sentences. Our goal is to improve accessibility and communication between deaf and hard-of-hearing users and the broader community.

- Core Technology
- Computer Vision: for analyzing hand gestures from live webcam video.
- OpenCV: for handling webcam access and video frame preprocessing.
- MediaPipe: for detecting hands and tracking movements.
- Teachable Machine (Google): for training a basic gesture recognition model for word-level sign language.
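As a rough sketch of how such a pipeline prepares its input (the function name and shapes here are illustrative assumptions, not Sign2Sub's actual code): MediaPipe's hand model reports 21 landmarks per hand, each with normalized (x, y, z) coordinates, which can be flattened into a classifier-ready feature vector expressed relative to the wrist:

```python
def landmarks_to_features(landmarks):
    """Flatten 21 (x, y, z) hand landmarks into a wrist-relative vector.

    Expressing every point relative to the wrist (landmark 0) makes the
    features invariant to where the hand sits in the frame.
    """
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks")
    wx, wy, wz = landmarks[0]
    features = []
    for (x, y, z) in landmarks:
        features.extend([x - wx, y - wy, z - wz])
    return features  # 63 values: 21 landmarks x 3 coordinates

# Example with a dummy hand (all landmarks at the same point):
dummy = [(0.5, 0.5, 0.0)] * 21
vec = landmarks_to_features(dummy)
print(len(vec))  # 63
```

In a live setting, the landmark list would come from MediaPipe's hand detector running on each OpenCV webcam frame, and the resulting vector would be passed to the trained gesture classifier.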
- Key Features
- Real-time hand gesture recognition using a webcam
- Automatic conversion of recognized signs into subtitle text
- Simple and accessible user interface for live demonstrations
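One common way to turn noisy per-frame predictions into stable subtitle text is to debounce: emit a word only after it has won a majority of recent frames with sufficient confidence. This is a minimal sketch of that idea; the class name, window size, and threshold are illustrative assumptions, not the project's actual implementation:

```python
from collections import deque

class SubtitleBuilder:
    """Turn per-frame gesture predictions into stable subtitle words.

    A word is appended only when it wins a majority of the last `window`
    confident frames and differs from the word just emitted -- a simple
    debounce against frame-level jitter in the classifier output.
    """

    def __init__(self, window=5, threshold=0.8):
        self.recent = deque(maxlen=window)
        self.threshold = threshold
        self.words = []

    def update(self, label, confidence):
        if confidence >= self.threshold:
            self.recent.append(label)
        majority = self.recent.maxlen // 2 + 1
        if (len(self.recent) == self.recent.maxlen
                and self.recent.count(label) >= majority
                and (not self.words or self.words[-1] != label)):
            self.words.append(label)
        return " ".join(self.words)

# Simulated per-frame classifier output:
builder = SubtitleBuilder(window=3, threshold=0.8)
frames = [("hello", 0.9), ("hello", 0.95), ("hello", 0.9),
          ("thanks", 0.4),  # low confidence, ignored
          ("thanks", 0.9), ("thanks", 0.9), ("thanks", 0.85)]
for frame_label, conf in frames:
    subtitle = builder.update(frame_label, conf)
print(subtitle)  # "hello thanks"
```

Tuning the window and threshold trades off latency against stability: a larger window suppresses more jitter but delays each word's appearance in the subtitle.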
- User Benefits
- Improves communication for deaf and hard-of-hearing individuals
- Reduces reliance on human interpreters in basic interactions
- Supports inclusive education and accessibility initiatives
- What Makes It Unique
- Works without audio, unlike speech-to-text tools
- Focuses on accessibility and real-time performance
Conclusion
The system demonstrates that sign language gestures (ASL) can be effectively recognized using motion-based analysis. Initial experiments show promising accuracy (40%-90% across 25 words) in identifying signs and converting them into text output.
The current system focuses on word-level recognition, which is suitable for live demonstrations and early-stage deployment.
Next steps:
- Recognize more words and sentences.
- Transition from no-code tools to custom-trained deep learning models using Python-based frameworks for greater control and accuracy.
- Integrate artificial intelligence to assemble recognized words into coherent sentences.
- Expand to other sign languages in Southeast Asia, starting with Khmer Sign Language (Cambodia).
Overall, Sign2Sub highlights the potential of visual-based AI solutions to promote inclusive communication and improve accessibility for deaf and hard-of-hearing communities.
About the team
| No. | Full Name | Major | Institution | Role |
|-----|-----------|-------|-------------|------|
| 1 | Chea Rithea Vatey | Computer Science | University of Pécs | Project Lead & AI Model Developer |
| 2 | Soun Somanith | Software Development | American University of Phnom Penh | Backend Developer & System Integration Lead |
| 3 | Heang Sophea Vatey | Information Communication Technology | American University of Phnom Penh | Research & Documentation Lead |
| 4 | Mey Bunlong | Information Communication Technology | American University of Phnom Penh | Product Development & Communication Lead |
Motivation
The team is motivated by the goal of applying technology and what we learn in school to create inclusive solutions that address real-world problems, especially in Southeast Asia, starting in our home country, Cambodia.
We hope that Sign2Sub will contribute to improved accessibility and more inclusive communication through technology.

