
BHARATI VIDYAPEETH DEEMED TO BE UNIVERSITY

DEPARTMENT OF ENGINEERING AND TECHNOLOGY, NAVI MUMBAI CAMPUS

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

MUDRATALK: INDIAN SIGN LANGUAGE TRANSLATOR
Project Members:
Sanika Khankale
Bhaskar Chaurasia
Rohit Majumder
Aditya Tyagi
Guide: Prof. Datta Deshmukh
INTRODUCTION
 MudraTalk is a transformative project that aims to enhance communication for individuals with hearing and speech disabilities through sign language.
 As social beings, we depend on communication in our daily lives, yet those facing hearing or speech challenges often encounter barriers in expressing themselves. "MudraTalk" will leverage cutting-edge technologies such as computer vision and machine learning to interpret sign language gestures in real time. By translating these gestures into text or spoken language, our system empowers individuals with hearing and speech disabilities to communicate effectively and confidently, and vice versa.
 Our project is rooted in the belief that communication is a fundamental human
right. "MudraTalk" not only improves communication but also promotes inclusivity
and accessibility in society, ensuring that everyone has the opportunity to connect
and engage with others.
LITERATURE REVIEW
1. Sign Language Recognition Based on Dynamic Time Warping and Deep Learning [2023]
   Technology used: CNNs for recognizing fingerspelling gestures in American Sign Language.
   Advantages: Combined DTW with deep learning for improved recognition accuracy.
   Limitations: May require large datasets for training deep learning models effectively.
   Accuracy: 100%

2. Sign Language Recognition Using 3D Convolutional Neural Networks [2022]
   Technology used: 3D CNNs for recognizing dynamic sign language gestures.
   Advantages: Improved recognition accuracy for dynamic signs, enabling more effective communication.
   Limitations: Requires complex training and processing due to 3D data representation.
   Accuracy: 99.87%

3. Sign Language Recognition Using Hand Tracking and Convolutional Neural Networks [2021]
   Technology used: Hand tracking and CNNs for sign language recognition.
   Advantages: Combined hand tracking with CNNs for more accurate recognition of sign language gestures.
   Limitations: May require specialized hardware for hand tracking.
   Accuracy: 87.5%

4. Real-time American Sign Language Fingerspelling Recognition Using Convolutional Neural Networks [2021]
   Technology used: CNNs for recognizing fingerspelling gestures in American Sign Language.
   Advantages: Achieved high accuracy in real-time recognition of fingerspelling, improving communication efficiency.
   Limitations: Limited to fingerspelling recognition and may not generalize to other sign language gestures.
   Accuracy: 98.53%

5. Enhancing Sign Language Recognition Using Pose Estimation and Attention Mechanism [2020]
   Technology used: Pose estimation and attention mechanism for improving sign language recognition.
   Advantages: Improved recognition accuracy by focusing on key points in sign language gestures.
   Limitations: Requires robust pose estimation algorithms for effective implementation.
   Accuracy: 79.80%
PROBLEM STATEMENT

• Individuals with hearing and speech disabilities face challenges in expressing themselves and connecting with others due to limited communication options.
• Existing tools for translating sign language gestures into text or spoken
language are often expensive, cumbersome, or inaccurate.
• There is a need for a cost-effective, user-friendly, and accurate system to bridge
the communication gap between sign language users and non-users.
• This system should empower individuals with hearing and speech disabilities to
communicate effectively, improving their quality of life and integration into
society.
• Innovative technological solutions leveraging machine learning, computer vision, and natural language processing are required to develop such a system.
OBJECTIVES
• Develop a real-time sign language recognition system that accurately interprets
and translates sign language gestures into text or spoken language.
• Create a user-friendly interface that enables individuals with hearing and speech
disabilities to easily communicate with others.
• Ensure the system is cost-effective and accessible, making it widely available to
those who need it most.
• Conduct thorough testing and validation to ensure the system's accuracy and
reliability in various real-world scenarios.
• Collaborate with experts in sign language and assistive technology to
incorporate their feedback and improve the system's effectiveness.
• Raise awareness about the importance of inclusive communication and advocate for the adoption of such systems in public and private settings.
EXISTING SYSTEM

• Computer Vision-Based Approaches: Several research studies have explored the use of
computer vision techniques for sign language recognition. These approaches typically
involve capturing video of sign language gestures and using algorithms to track and
interpret hand movements. For example, Li et al. (2020) proposed a system that uses a
combination of deep learning and computer vision to recognize American Sign Language
(ASL) gestures in real-time [1]. While these approaches show promise, they often require
complex hardware setups and may struggle with accuracy for certain signs.

• Wearable Devices: Some existing systems use wearable devices, such as gloves or
wristbands, to capture hand movements and translate them into text or speech. For
example, the SignAloud gloves developed by Roy et al. (2016) use sensors to detect hand
gestures and translate them into spoken English [2]. While wearable devices can provide a
more natural and intuitive way to communicate, they can be expensive and may not be
suitable for all users.
EXISTING SYSTEM

• Mobile Applications: There are several mobile applications available that claim to
translate sign language gestures into text or speech. These apps often use smartphone
cameras to capture gestures and then use machine learning algorithms to interpret them.
However, the accuracy of these apps can vary widely, and they may not always be reliable
for real-time communication.

• Research Prototypes: In the research community, there have been several prototypes
developed for sign language recognition and translation. For example, the SignSpeak
system developed by Vogler et al. (2018) uses a combination of depth sensors and
machine learning to translate British Sign Language (BSL) into spoken English [3]. While
these prototypes show promise, they are often not widely available or scalable for
practical use.
PROPOSED SYSTEM

• System Design (architecture diagram): CNN Model
IMPLEMENTATION DETAILS
Input Module:
• Utilize a webcam or depth sensor to capture sign language gestures.
• Use OpenCV library to access and process video frames.
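
A minimal sketch of this capture step with OpenCV, assuming a default webcam at index 0; the window name and the 'q' quit key are illustrative choices, not fixed requirements:

import cv2

cap = cv2.VideoCapture(0)                       # open the default webcam
while True:
    ret, frame = cap.read()                     # grab one BGR frame
    if not ret:
        break
    cv2.imshow("MudraTalk input", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):       # press 'q' to stop capturing
        break
cap.release()
cv2.destroyAllWindows()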

Preprocessing Module:
• Apply image preprocessing techniques like noise reduction, image enhancement, and normalization to
improve gesture recognition accuracy.
• Convert the captured images into a suitable format for feature extraction.
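
One possible sketch of these preprocessing steps, assuming grayscale conversion, Gaussian blur, histogram equalization, and a 64x64 model input size; the exact filters and target size are illustrative and would need tuning:

import cv2
import numpy as np

def preprocess(frame, size=(64, 64)):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # drop colour information
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # reduce sensor noise
    enhanced = cv2.equalizeHist(blurred)             # improve contrast
    resized = cv2.resize(enhanced, size)             # fixed size for feature extraction
    return resized.astype(np.float32) / 255.0        # normalize pixels to [0, 1]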

Feature Extraction:
• Use techniques like Histogram of Oriented Gradients (HOG), Convolutional Neural Networks (CNNs),
or Recurrent Neural Networks (RNNs) to extract features from the preprocessed images.
• Extract features such as hand shape, movement, and orientation to represent the sign language
gestures.
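
As an example of the HOG option, a short sketch using OpenCV's HOGDescriptor on a 64x64 grayscale image; the window, block, cell, and bin settings are illustrative, not tuned values:

import cv2
import numpy as np

hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)  # window, block, stride, cell, bins

def extract_hog(gray_64x64):
    img = (gray_64x64 * 255).astype(np.uint8)   # HOGDescriptor expects 8-bit input
    return hog.compute(img).flatten()           # 1-D vector of oriented-gradient features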

Recognition Module:
• Train a machine learning model, such as a CNN or RNN, using a dataset of sign language gestures to
recognize the extracted features.
• Use the trained model to classify the gestures into corresponding sign language symbols or words.
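
A minimal sketch of the CNN option with TensorFlow/Keras, assuming 64x64 grayscale inputs and a hypothetical num_classes for the gesture vocabulary; layer sizes and training settings are illustrative:

from tensorflow.keras import layers, models

num_classes = 26   # hypothetical: one class per gesture in the dataset

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),   # one probability per gesture class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# x_train: (N, 64, 64, 1) preprocessed frames, y_train: integer gesture labels (hypothetical arrays)
# model.fit(x_train, y_train, epochs=10, validation_split=0.2)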
IMPLEMENTATION DETAILS
Translation Module:
• Translate the recognized gestures into text or spoken language using natural language processing
techniques.
• Use a dictionary or mapping to convert sign language symbols or words into their corresponding text
or spoken language equivalents.
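
A minimal sketch of the mapping step, using a hypothetical label-to-word dictionary; a fuller implementation could add grammar handling with NLP tools:

label_to_word = {0: "hello", 1: "thank you", 2: "yes", 3: "no"}   # illustrative entries only

def translate(predicted_labels):
    words = [label_to_word.get(label, "?") for label in predicted_labels]
    return " ".join(words)              # assemble recognized gestures into a sentence

print(translate([0, 2]))                # -> "hello yes"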

Output Module:
• Display the translated text on a screen or output it through a speech synthesis module to enable
communication with non-sign language users.
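
A minimal sketch of the speech-output path, assuming the pyttsx3 package for offline text-to-speech; any other synthesis engine could be substituted:

import pyttsx3

def speak(text):
    engine = pyttsx3.init()   # initialise the local text-to-speech engine
    engine.say(text)          # queue the translated sentence
    engine.runAndWait()       # block until speech finishes

speak("hello yes")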

Integration and Testing:
• Integrate all modules into a cohesive system and test its functionality using a variety of sign language
gestures.
• Conduct user testing to evaluate the system's accuracy, speed, and user-friendliness.

Deployment:
• Deploy the system on a suitable platform, such as a desktop application, web application, or mobile
application, depending on the target users' needs and preferences.
• Ensure the system is accessible and easy to use for individuals with hearing and speech disabilities.
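
For the web-application option, a minimal Flask sketch; the /translate route, the "frame" field name, and the predict_gesture() stub are all hypothetical placeholders for the pipeline above:

from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_gesture(image_bytes):
    # placeholder: decode the frame and run preprocessing, the CNN, and translation here
    return "hello"

@app.route("/translate", methods=["POST"])
def translate_endpoint():
    image_bytes = request.files["frame"].read()    # one frame uploaded by the client
    return jsonify({"translation": predict_gesture(image_bytes)})

if __name__ == "__main__":
    app.run(debug=True)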
REQUIREMENTS

Software Requirements:
• Operating System: Windows, Linux, or macOS.
• Python: Version 3.x.
• OpenCV: Library for image processing.
• TensorFlow or PyTorch: Deep learning frameworks for machine learning models.
• Flask or Django: Web frameworks for developing the user interface (optional for web-
based applications).
• IDE: PyCharm, Jupyter Notebook, or any Python-compatible IDE for development.
REQUIREMENTS

Hardware Requirements:
• Camera: Webcam or depth sensor for capturing sign language gestures.
• Computer: With sufficient processing power (e.g., Intel Core i5 or higher) and memory
(e.g., 8GB RAM or more).
• Storage: Hard drive with enough space to store the software and datasets.
• Internet Connection: For downloading libraries and updates (if applicable).
• Optional: External microphone or speakers for audio output (if speech synthesis is
included).
OUTCOMES
• Improved Communication: The sign language recognition system will enable
individuals with hearing and speech disabilities to communicate more effectively with
others, improving their overall quality of life.
• Enhanced Accessibility: By providing a user-friendly and accurate tool for sign language
translation, the system will enhance accessibility for individuals with disabilities in
various settings, such as education, healthcare, and social interactions.
• Empowerment: The system will empower individuals with disabilities to express
themselves more confidently and independently, fostering a sense of empowerment and
inclusion in society.
• Social Impact: By promoting inclusivity and accessibility, the system can have a positive
social impact, raising awareness about the needs and capabilities of individuals with
disabilities.
• Research and Development: The project can contribute to research and development in
the field of assistive technology, advancing the state of the art in sign language
recognition and translation.
CONCLUSION

In conclusion, the development of a sign language recognition system has the potential to greatly enhance communication for individuals with hearing and speech disabilities. By leveraging technology, we can create a tool that improves accessibility, fosters empowerment, and promotes inclusivity in society. Continued research and collaboration are essential to further refine and implement this system for the benefit of all.
REFERENCES
1. Li, Y., et al. (2020). Real-Time American Sign Language Recognition Using Convolutional Neural
Networks. IEEE Transactions on Neural Networks and Learning Systems, 31(9), 3421-3434.

2. Roy, N., et al. (2016). SignAloud: A Glove-based American Sign Language Interpreter. In Proceedings of
the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services
(pp. 481-487).

3. Vogler, C., et al. (2018). SignSpeak: A Sign Language Translation System. In Proceedings of the 20th
International Conference on Human-Computer Interaction with Mobile Devices and Services (pp. 1-
10).
4. http://en.wikipedia.org/wiki/YIQ
5. http://en.wikipedia.org/wiki/YUV
6. http://cs229.stanford.edu/proj2011/ChenSenguptaSundaram-SignLanguageGestureRecognitionWithUnsupervisedFeatureLearning.pdf
7. http://en.wikipedia.org/wiki/Bag-of-words_model
8. Tavari, N. V., P. A. V. D. (2014). Indian sign language recognition based on histograms of oriented gradient. International Journal of Computer Science and Information Technologies, 5(3), 3657-3660.
THANK
YOU
