
SUBJECT CODE: 20ITTE301

SUBJECT NAME: LIVE IN LAB
Machine Learning based Sign Language Recognition and Translation Methods

DATE: 03.12.2022
BATCH NUMBER: 13
PROJECT CODE: SEC24ITTE30169
BATCH MEMBERS: Abinaya K, Sharmila B
YEAR/SEMESTER: III/V
SUPERVISOR: Dr. Susila Sakthy


INTRODUCTION

Sign language is the primary means of communication for people who cannot speak. Communication is essential in the real world: it lets others and ourselves understand information accurately and quickly. But it is not easy for deaf and mute people to communicate with hearing people, since most of them cannot understand sign language. This project provides one solution for improving communication between deaf and mute people and hearing people. In this digital era a mobile application is a practical solution for everyone, so we build a working mobile application using the machine learning and image processing algorithms available in the TensorFlow library. The user captures an image as input and receives the output as text and audio. This eases communication for these users, and in some cases, using the Frequent Phrases feature of our application, there is no need to capture photos at all: a single button press is enough.

ABSTRACT
In today's world communication is very important: it allows every single person to share and understand information accurately and quickly, yet it is very difficult for deaf and mute people to communicate with hearing people. They convey their message using sign language, a combination of gestures, orientations, movements of the hands, arms, or body, and facial expressions, but not every hearing person can understand that language. Our system helps people who are unable to hear or speak to converse with other people. We create an Android application that translates sign language into ordinary language: the user captures an image word by word, then after a single click a background process runs, processes the images using TensorFlow libraries, and within a few seconds converts them into a meaningful sentence or phrase.

EXISTING WORK
There are existing systems that convert sign language to speech, using sensory input to translate each letter of sign language into output. Such a system is easy to use for someone who already knows sign language, but systems that rely on image processing alone recognize gestures with high latency. In our project we aim to develop a cognitive system responsive and robust enough for day-to-day use by hearing- and speech-impaired people.
PROPOSED SYSTEM

This app will serve as a learning tool for understanding and knowing sign languages, aiming to give people an easy, handy, and convenient way to learn different sign languages, and as a communication aid both for those in need and for those still learning. The proposed system is a real-time system: live sign gestures are processed using image processing, classifiers differentiate the various signs, and the translated output is displayed as text. Machine learning algorithms are trained on the dataset, as sketched below. The purpose of the system is to improve on existing systems in this area in terms of response time and accuracy, through efficient algorithms, high-quality datasets, and better sensors.
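To make the training step concrete, here is a minimal Python sketch of the kind of classifier training described above. It assumes a hypothetical dataset/ directory with one sub-folder of images per sign class; the network shape, image size, and epoch count are illustrative, not the project's actual settings (the project itself delegates training to Teachable Machine, as described in Module 1).

```python
import tensorflow as tf

# Assumed layout: dataset/<sign_class>/*.jpg, one folder per sign.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

# A small CNN classifier; the real project trains via Teachable Machine,
# so this is only an equivalent hand-rolled sketch.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```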
SYSTEM ARCHITECTURE
System Design: The basic flow of our system is shown in the flowchart below:
authentication → filling user details → home screen → capturing image → background prediction process → output.
MODULES
Creating an Android application that translates sign language into ordinary language requires four effective steps to produce accurate output. For the project implementation, the back-end is implemented first through the following modules, explained briefly below:

• MODULE 1: Data collection and model building

• MODULE 2: Integrating the model with the Android app

The front-end implementation of the project is as follows:

• MODULE 3: Camera2 API

• MODULE 4: Developing the Android application

MODULE 1 - Data Collection and Model Building
Data Collection:
The project begins by building the back-end process using machine learning and image processing algorithms written in Python. Model training is required to predict the signs, so we supply it with an image dataset categorized into different sign classes; a customized model is then created and exported in the required format.
For this project we use real images taken from a webcam, categorized into different classes according to their signs, some of which are shown below:

Figure 1: Dataset according to class
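As an illustration of this collection step, here is a hedged Python sketch that uses OpenCV to grab webcam frames and file them into one folder per sign class. The helper name collect_sign_images, the key bindings, and the dataset/ path are all hypothetical, not the project's actual tooling.

```python
import os
import cv2  # OpenCV, for webcam capture

# Hypothetical helper: capture frames from the webcam and save them
# under dataset/<sign_label>/ so each sign gets its own class folder.
def collect_sign_images(sign_label, count=100, out_dir="dataset"):
    target = os.path.join(out_dir, sign_label)
    os.makedirs(target, exist_ok=True)
    cap = cv2.VideoCapture(0)  # default webcam
    saved = 0
    while saved < count:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("capture - press s to save, q to quit", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("s"):  # save the current frame into the class folder
            cv2.imwrite(os.path.join(target, f"{sign_label}_{saved}.jpg"), frame)
            saved += 1
        elif key == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

collect_sign_images("hello")  # one folder per sign class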


Model Building:
For model building we use the Teachable Machine web platform, in which we add the images according to their classes and adjust attributes such as epochs and batch size. The last stage is exporting the model in one of the given formats: (i) TensorFlow, (ii) TensorFlow.js, or (iii) TensorFlow Lite.
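A minimal sketch of what the Module 2 integration consumes: loading a TensorFlow Lite export and classifying one preprocessed frame in Python. The file name model.tflite and the float32 input are assumptions about the export settings; on Android the equivalent calls go through the TensorFlow Lite Interpreter API.

```python
import numpy as np
import tensorflow as tf

# Assumed file name for the Teachable Machine "TensorFlow Lite" export.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def predict_sign(frame):
    """frame: float32 array already resized to the model's input shape."""
    interpreter.set_tensor(inp["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores))  # index into the exported labels file
```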

Processing pipeline: video capture with smartphone camera → video processing → segmentation → feature extraction → pattern classification.

THANK YOU
