
A

Mini-project

on

REAL TIME SIGN LANGUAGE DETECTION SYSTEM


Submitted for Registration as a research student for the degree of Bachelor

of Engineering in Computer Science & Engineering-Data Science

Abha Gaikwad Patil College of Engineering and Technology Nagpur

Submitted by

Mr. Shreyash V. Ghogare

Mr. Yash R. Wanjari

Mr. Gaurav S. Sontakke

Under the Guidance of

Prof. Renuka Naukarkar

Submitted to

Abha Gaikwad-Patil College of Engineering & Technology, Nagpur

NAAC A+ Accredited
Rashtrasant Tukadoji Maharaj Nagpur University, Nagpur

(Faculty of Engineering and Technology)


Academic Year: 2023-2024
CERTIFICATE

This is to certify that the project entitled “REAL TIME SIGN LANGUAGE DETECTION
SYSTEM” is a bona fide work done under our supervision by Mr. Shreyash V. Ghogare, Mr.
Yash R. Wanjari and Mr. Gaurav S. Sontakke and is submitted to Rashtrasant Tukadoji Maharaj
Nagpur University, Nagpur in partial fulfilment of the requirements for the Degree of Bachelor of
Technology in Computer Science & Engineering (Data Science).

Guide Project In-charge

HOD (CSE – DS)


DECLARATION
I certify that

1. The work contained in this project has been done by me under the guidance of my
supervisors.
2. The work has not been submitted to any institute for any degree or diploma.
3. I have followed the guidelines provided by the institute in preparing the microproject
report.
4. I have conformed to the norms and guidelines given in the ethical code of conduct of the
institute.
5. Whenever I have used materials such as data, theoretical analysis, figures and text from
other sources, I have given due credit to them by citing them in the text of the report and
giving their details in the references.

Mr. Shreyash V. Ghogare

Mr. Yash R. Wanjari

Mr. Gaurav S. Sontakke


ACKNOWLEDGEMENT
On the submission of my thesis report on “REAL TIME SIGN LANGUAGE DETECTION SYSTEM”, I
would like to extend my gratitude and sincere thanks to my supervisor, Prof. Renuka Naukarkar, for
her constant motivation and support during the course of my work. It is because of her untiring
efforts, able guidance and valuable suggestions that I could cover the many diverse aspects of the
project, ensuring its smooth progress and success. I truly appreciate and value her guidance and
encouragement from the commencement to the end of this microproject thesis. Her knowledge and
company in times of crisis will be remembered lifelong.

I would like to further extend my hearty and sincere thanks with gratitude to respected Dr. Prashant
Thakre, Head of Computer Engineering Department (Data Science) and Prof. Renuka Naukarkar,
coordinator of B. Tech CSE-DS for their constant support. I truly appreciate and value the immense
guidance and encouragements during my Microproject work.

I also thank all those whose direct and indirect support helped me complete my microproject thesis in
time. Last but not least, I would like to thank the almighty and my parents for their support and co-operation
in completing the microproject. I would like to share this moment of happiness with them.

Mr. Shreyash V. Ghogare

Mr. Yash R. Wanjari

Mr. Gaurav S. Sontakke


CONTENTS

Sr. No. Contents Page No


LIST OF FIGURES
LIST OF SCREENSHOTS
ABBREVIATIONS
ABSTRACT
1. CHAPTER 1: INTRODUCTION 1
2. CHAPTER 2: LITERATURE REVIEW 3
3. CHAPTER 3: STUDY, SCOPE AND OBJECTIVES 4
4. CHAPTER 4: PROPOSED DESIGN 6
5. CHAPTER 5: IMPLEMENTATION 7
6. CHAPTER 6: RESULT ANALYSIS 14
7. CHAPTER 7: CONCLUSION AND FUTURE WORK 15
8. REFERENCES 16
LIST OF FIGURES

Figure. No. Figure Name Page No.

1.1 Sign language recognition system 2

4.1 Flowchart of proposed work 6

LIST OF SCREENSHOTS
Screenshot. No. Screenshot Name Page No
5.1 Front end code for the project 13

6.1 Final output of the project 14

ABBREVIATIONS
Sr. No. Abbreviations Full Form
1. ASLR Automatic Sign Language Recognition
2. CNN Convolutional Neural Network
3. SSD Single Shot Detector
4. SLR Sign Language Recognition
5. ML Machine Learning

6. CV2 OpenCV (Open Source Computer Vision) Python module

7. IEEE Institute of Electrical and Electronics Engineers


ABSTRACT

A real-time sign language detector is a significant step forward in improving
communication between the deaf and the general population. We are pleased to showcase the
creation and implementation of a sign language recognition model based on a Convolutional
Neural Network (CNN). We applied transfer learning by fine-tuning a pre-trained SSD
MobileNet V2 architecture on our own dataset. The resulting model is robust and consistently
classifies sign language in the majority of cases. Additionally, this approach will be extremely
beneficial to sign language learners for practising sign language. Various human-computer
interface methodologies for posture recognition were explored and assessed during the project,
and a series of image processing techniques combined with human movement classification
was identified as the best approach. The system is able to recognize the selected sign language
signs with an accuracy of 70-80%, even against an uncontrolled background in low light.
CHAPTER 1: INTRODUCTION

Sign language is an essential tool to bridge the communication gap between hearing and
hearing-impaired people. However, the diversity of present-day sign languages, with
variability in motion, hand shape, and the position of body parts, makes automatic sign
language recognition (ASLR) a complex task.

To overcome this complexity, researchers are investigating better ways of developing
ASLR systems, seeking intelligent solutions, and have demonstrated remarkable success. This
report analyzes the research published on intelligent systems in sign language recognition over
the past two decades. A total of 649 publications related to decision support and intelligent
systems for sign language recognition (SLR) were extracted from the Scopus database and
analyzed.

The extracted publications were analysed using the bibliometric VOSviewer software to obtain
their temporal and regional distributions, map the cooperation networks between affiliations
and authors, and identify productive institutions in this context. Moreover, reviews of
techniques for vision-based sign language recognition are presented.

Various feature extraction and classification techniques used in SLR to achieve good
results are discussed. The literature review presented here shows the importance of
incorporating intelligent solutions into sign language recognition systems and reveals that a
perfect intelligent system for sign language recognition is still an open problem. Overall, it is
expected that this study will facilitate knowledge accumulation on intelligent SLR and provide
readers, researchers, and practitioners with a roadmap to guide future directions.

AGPCE, Nagpur
Figure 1.1: Sign language recognition system

CHAPTER 2: LITERATURE REVIEW

Recent progress in sign language recognition: a review. Machine Vision and Applications
Original Paper Published: 21 October 2023
Published By: Aamir Wali, Roha Shariq, Sajdah Shoaib, Sukhan Amir & Asma Ahmad Farhan
The paper compiles recent progress in the field of Sign Language Recognition (SLR)
and provides a comprehensive review of emerging SLR frameworks and algorithms. Research
in the field of SLR can help reduce the barrier between deaf and hearing people.
Hand Sign Recognition using Image Processing
Published By: Hitesh Sri Sai Kaushik Vezzu, Sunny Nalluri
Original Paper Published: 2023 International Conference on Intelligent Data Communication
Technologies and Internet of Things (IDCIoT)
The deaf, and those with minor difficulty hearing and speaking, use sign language to
engage with people and communicate within their community. According to a World Health
Organization assessment, around 466 million persons worldwide have a hearing impairment,
and many of them are also unable to speak. It is challenging for these people to interact with
others, so translating sign language into English is necessary to solve their problem.

Sign Language Recognition Using OpenCV and Convolutional Neural Networks


Published By: T Manogna Reddy, Sole Abhishek, Vuppala Venkata Kalyan, Pavan Ram Varma
Original Paper Published: International Conference on Research Methodologies in Knowledge
Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE), 2023
Sign language is the method of communication of deaf and mute people all over the
world. However, communication between a speech-impaired person and a hearing person has
always been difficult. Sign language recognition is a breakthrough in helping deaf-mute people
communicate with others, and the commercialization of an economical and accurate
recognition system is today’s concern of researchers all over the world.

CHAPTER 3: STUDY, SCOPE AND OBJECTIVES

3.1: STUDY

Sign language is the predominant form of communication for a large group of society.
Sign languages are visual in nature, making them distinct from spoken languages.
Unfortunately, very few hearing people can understand sign language, which makes
communication with the hearing-impaired infeasible. Research in the field of sign language
recognition (SLR) can help reduce this barrier.

3.2: SCOPE

Real-time sign language detection systems can recognize hand gestures and label them
with specific meaning tags, allowing hearing and deaf people to communicate through hand
gestures. They can be used for a variety of applications: computer-based interaction, helping
young children learn sign language; communication, helping people with speech impairments
break through communication barriers; and human-machine interaction, enriching nonverbal
instruction.

3.3: OBJECTIVES

1. Provide a comprehensive review of sign language recognition systems, with an explicit focus
on decision support and expert system technologies over the past two decades.

2. Highlight open issues and possible research areas for future consideration.

3. Achieve a good compromise between recognition accuracy and computational load for a
real-time application.

4. Help people with disabilities communicate through sign language.

5. Generate a dataset: use a camera to create a large dataset and label it using the LabelImg
software.

6. Pre-process the images: use image pre-processing techniques to remove noise and obtain the
region of interest (ROI).

7. Design a model: design a CNN model and architecture to train on the pre-processed images
and achieve maximum accuracy.

8. Develop an algorithm: develop an algorithm to predict the gesture in real time.
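
The pre-processing step (objective 6) can be sketched in a few lines. This is a minimal illustration using plain NumPy, assuming grayscale conversion, nearest-neighbour resizing, and normalisation; the real pipeline would use OpenCV, and the input size of 224x224 is an assumption:

```python
import numpy as np

def preprocess(frame, size=(224, 224)):
    """Convert an RGB frame to a normalised grayscale array for the model."""
    # Luminance-weighted grayscale conversion
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # Nearest-neighbour resize to the model's input size
    rows = np.linspace(0, gray.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, size[1]).astype(int)
    resized = gray[np.ix_(rows, cols)]
    # Scale pixel values to [0, 1]
    return resized / 255.0

frame = np.random.randint(0, 256, (480, 640, 3)).astype(float)
out = preprocess(frame)
print(out.shape)  # (224, 224)
```

The same function can then be applied to every captured webcam frame before it is passed to the classifier.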

CHAPTER 4: PROPOSED DESIGN

We aim to gather datasets from the internet and then perform various operations on the
data: data collection, data preprocessing, data labeling, real-time image detection, and model
training.
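
The operations listed above can be sketched as a pipeline. Every stage function below is a hypothetical stub standing in for the real OpenCV/TensorFlow implementation; the point is only the order in which the stages compose:

```python
# Hypothetical pipeline mirroring the proposed design; each stage is a stub
# standing in for the real implementation described in Chapter 5.

def collect(n_frames):
    # In practice: capture frames from a webcam and save them to disk
    return [f"frame_{i}" for i in range(n_frames)]

def preprocess(frames):
    # In practice: denoise, crop the region of interest, normalise
    return [f"clean_{f}" for f in frames]

def label(frames, gesture):
    # In practice: draw bounding boxes with LabelImg and export XML
    return [(f, gesture) for f in frames]

def train(samples):
    # In practice: fine-tune SSD MobileNet V2 on the labelled samples
    return {"classes": sorted({g for _, g in samples})}

data = label(preprocess(collect(3)), "Hello")
model = train(data)
print(model)  # {'classes': ['Hello']}
```

Real-time detection then becomes a loop that feeds each new frame through the same `preprocess` stage and queries the trained model.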

Figure 4.1: Flowchart of the proposed work

CHAPTER 5: IMPLEMENTATION

First, we imported the datasets from the internet and uploaded them into a worksheet,
where the data was structured so that it could be accessed from the software we used for both
front-end and back-end programming of the sign language recognizer. For both the front-end
and the back-end we used Jupyter Notebook, where we carried out all the coding work. After
gathering the datasets, we performed data preprocessing, which is the process of identifying
and correcting or removing corrupt or inaccurate records from a dataset. Next came labeling,
the process of identifying raw data and adding labels to provide context for the model to learn
from. Real-time image detection, the next step, is a computer vision task that involves
identifying and locating objects in real-time video sequences with fast inference while
maintaining a base level of accuracy. In the last stage, we used a machine learning algorithm
to train the model, which is the process of learning the best values for the model's weights and
biases from labeled data. The goal is to minimize loss while building the model; this is done
by providing the model's learning algorithm with training data and then optimizing the
algorithm to find certain patterns or outputs. The resulting model is called the trained machine
learning model.
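
The labeling stage described above was done with LabelImg, which exports Pascal VOC-style XML. Below is a minimal sketch of reading one such annotation back with the standard library; the filename and box coordinates are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal Pascal VOC annotation of the kind LabelImg produces;
# the filename and coordinates here are made up for illustration.
ANNOTATION = """
<annotation>
  <filename>hello_001.jpg</filename>
  <object>
    <name>Hello</name>
    <bndbox>
      <xmin>120</xmin><ymin>80</ymin>
      <xmax>340</xmax><ymax>300</ymax>
    </bndbox>
  </object>
</annotation>
"""

def parse_annotation(xml_text):
    """Return (label, (xmin, ymin, xmax, ymax)) pairs from one annotation."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = tuple(int(obj.findtext(f"bndbox/{t}"))
                    for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes

print(parse_annotation(ANNOTATION))  # [('Hello', (120, 80, 340, 300))]
```

Parsed labels and boxes of this form are what the training stage consumes.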

Figure 5.1: Front end code for the project [Module 6]

This is the front-end program, also written in Jupyter Notebook, in which we imported
libraries such as tkinter, cv2, numpy, and math, and used them to write a Python program that
identifies various signs such as “Hello”, “Yes”, “No”, “Okay”, “Thank you”, and “I Love
You”. After running this code we are taken to a new web page where we need to grant access
to the front camera by clicking on START; after that we can show various signs, and the
program detects both which hand is shown (right or left) and which sign it represents.
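
A classifier over the six gestures above emits one score per class. The following is a hedged sketch of decoding that output into a gesture name; the score vectors and the 0.5 threshold are invented for illustration:

```python
GESTURES = ["Hello", "Yes", "No", "Okay", "Thank you", "I Love You"]

def decode(scores, threshold=0.5):
    """Pick the best-scoring gesture, or None if nothing is confident."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return GESTURES[best] if scores[best] >= threshold else None

# Example score vector as a model might emit for the "Okay" sign
print(decode([0.02, 0.05, 0.01, 0.86, 0.03, 0.03]))  # Okay
print(decode([0.2, 0.2, 0.15, 0.15, 0.15, 0.15]))    # None
```

Thresholding low-confidence frames this way keeps spurious labels off the screen between gestures.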

CHAPTER 6: RESULT ANALYSIS

After completing the programming, we tested the accuracy of the program and checked
whether it runs in the desired manner: given a sign as input, it should display which sign
language gesture it actually is.

Figure 6.1: Final output of the project

This is the page displayed after the program starts. It shows the Sign Language
Recognition title, below which there is a block where we click START to open the webcam in
the system; we then show a sign to the camera and it gives the recognized gesture as output.
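
The accuracy check described above reduces to comparing predicted and true labels over a test run. A minimal sketch, with predictions invented purely to illustrate the computation:

```python
def accuracy(predicted, actual):
    """Fraction of test frames whose predicted sign matches the true sign."""
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

# Hypothetical results from ten test gestures
actual    = ["Hello", "Yes", "No", "Okay", "Hello",
             "Thank you", "Yes", "No", "Okay", "I Love You"]
predicted = ["Hello", "Yes", "No", "Okay", "Yes",
             "Thank you", "Yes", "No", "Hello", "I Love You"]

print(accuracy(predicted, actual))  # 0.8
```

A run of this kind, repeated under varied lighting and backgrounds, is how a 70-80% figure like the one quoted in the abstract would be measured.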

CHAPTER 7: CONCLUSION AND FUTURE WORK

The main purpose of a sign language detection system is to provide a feasible way of
communication between hearing and speech-impaired people using hand gestures. The
proposed system can be accessed using a webcam or any built-in camera that detects the signs
and processes them for recognition. From the results of the model, we can conclude that the
proposed system gives accurate results under controlled light and intensity. Furthermore,
custom gestures can easily be added, and more images taken at different angles and framings
will further improve the model's accuracy; thus, the model can easily be extended on a large
scale by increasing the dataset. The model has some limitations: environmental factors such as
low light intensity and an uncontrolled background decrease detection accuracy. Therefore, we
will next work to overcome these flaws and also increase the dataset for more accurate results.

Future work related to our project includes implementing our model for other sign
languages, such as Indian Sign Language or American Sign Language, further training the
neural network to recognize symbols more efficiently, and enhancing the model to recognize
expressions.

REFERENCES

[1] Aamir Wali, Roha Shariq, Sajdah Shoaib, Sukhan Amir. Recent progress in sign language
recognition: a review. Machine Vision and Applications, 2023, 34(6)

[2] Piyusha Vyavahare, Sanket Dhawale, Priyanka Takale, Vikrant Koli. Detection and
Interpretation of Indian Sign Language Using LSTM Networks. Journal of Intelligent Systems
and Control, 2023, 2(3):132-142
[3] Vezzu Hitesh Sri Sai Kaushik, Nalluri Sunny, Swetha Kakumanu, Vadapalli SaiKumar. Hand
Sign Recognition using Image Processing. International Conference on Intelligent Data
Communication Technologies and Internet of Things (IDCIoT), 2023
[4] T Manogna Reddy, Sole Abhishek, Vuppala Venkata Kalyan, Pavan Ram Varma. Sign
Language Recognition Using OpenCV and Convolutional Neural Networks. International
Conference on Research Methodologies in Knowledge Management, Artificial Intelligence
and Telecommunication Engineering (RMKMATE), 2023
[5] Bhawna Jain, Manya Chandna, Abantika Dasgupta, Kashish Bansal. Sign Language
Recognition: Current State of Knowledge and Future Directions. 14th International
Conference on Computing Communication and Networking Technologies (ICCCNT), 2023
[6] Rohit Gairola, Pawan Singh Chaudhary, Abhishek Chaudhary, Shivanshu Kumar Singh.
Action Analyzer for Differently Abled People. IEEE 4th Annual Flagship India Council
International Subsections Conference (INDISCON), 2023
[7] Hope Orovwode, Ibukun Deborah Oduntan, John Amanesi Abubakar. Development of a Sign
Language Recognition System Using Machine Learning. International Conference on
Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD),
2023

[8] Ravi Kishore Veluri, Rama Sree Sripada, A. Vanathi, G. Aparna. Hand Gesture Mapping
Using MediaPipe Algorithm. Proceedings of Third International Conference on
Communication, Computing and Electronics Systems (pp.597-614), 2022

