
Mahatma Education Society’s

Pillai College of Engineering


(Department of Information Technology)

Sign Language Identifier


MAJOR PROJECT

PROJECT GUIDE:
➢ Prof. Suhas Lawand

GROUP MEMBERS:
➢ Harshal Tamboli
➢ Dhruv Karotra
➢ Shashwat Gaikwad
➢ Brijesh Mali
INTRODUCTION

● Hearing and speech impaired people have long depended on sign language.

● Communication between a speech impaired person and a hearing person is difficult.

● Hand and facial gestures are immensely influential in expressing a person's thoughts in non-verbal communication.

● The sign language identifier will help people communicate with verbally impaired people.
PROBLEM STATEMENT

The community of hearing and speech impaired people has long depended on sign
language as a medium to communicate with others. It has always been difficult for a
speech impaired person and a hearing person to communicate. The sign language
identifier will help people communicate with verbally impaired people by detecting
signs and providing the output in real time. The model is based on You Only Look
Once (YOLO); we will build it with YOLOv3. Existing recognition systems are less
reliable because they detect individual letters, taking a long time to form a single
word, or because they detect very few words. Our model will address this problem by
recognising more words than existing systems. The dataset will be built using the
LabelImg annotation tool, and the system will detect sign language through a live feed.
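As a rough illustration of the dataset step, LabelImg can export each annotation as a plain-text line in the YOLO format, `class_id cx cy w h`, with coordinates normalized to the image size. A minimal sketch of reading one such line (the function name and the sample line are hypothetical):

```python
def parse_yolo_annotation(line):
    """Parse one line of a YOLO-format label file as written by LabelImg:
    'class_id cx cy w h', with coordinates normalized to [0, 1]."""
    parts = line.split()
    class_id = int(parts[0])          # index into the class list (classes.txt)
    cx, cy, w, h = map(float, parts[1:])
    return class_id, cx, cy, w, h

# Example label line for a hypothetical gesture class with id 2:
print(parse_yolo_annotation("2 0.5 0.5 0.25 0.4"))  # → (2, 0.5, 0.5, 0.25, 0.4)
```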
LITERATURE SURVEY
1. Real Time Sign Language Detection, 2021 — SSD MobileNet V2
Advantages:
● Custom gestures can be added easily.
● Adding images at different angles and frames increases the accuracy of the model.
Disadvantages:
● An uncontrolled background decreases accuracy.
● Low light intensity may affect the prediction.

2. A Review Paper on Sign Language Recognition for The Deaf and Dumb, 2021 — CNN
Advantages:
● Two layers of algorithms are used to verify and predict symbols that are similar to each other.
● A larger input image size and dataset provide more accuracy.
Disadvantages:
● Unable to predict symbols that are not shown properly.
● Background noise and inadequate lighting make prediction difficult.
LITERATURE SURVEY

3. Real Time Sign Language Recognition and Speech Generation, 2020 — CNN VGG-16
Advantages:
● The system can interpret hand gestures and convert them into text.
● The model is also capable of converting text into sign language.
Disadvantages:
● Can interpret only hand gestures from American Sign Language.
● The method used requires more processing time than other methods.
EXISTING SYSTEM
1. Predicts only letters.
2. Some existing systems predict very few words.
3. This makes communication in real-world scenarios difficult.
PROPOSED SYSTEM
FLOWCHART
PROPOSED SYSTEM
ARCHITECTURE
REQUIREMENTS

HARDWARE REQUIREMENTS
➢ Basic computer with I/O devices
➢ RAM: 2 GB or more
➢ Processor: Intel i3 or higher

SOFTWARE REQUIREMENTS
➢ OS: Windows 7 or above
➢ Language: Python
➢ Application Software: Google Colab, Visual Studio Code
IMPLEMENTATION
Algorithm Used :- YOLO

➢ Object Detection Algorithm - You Only Look Once

➢ Divides the image into a grid system.

➢ Each cell in the grid is responsible for detecting objects within itself.

➢ Version used - YOLOv3


HOW THE YOLO ALGORITHM WORKS

YOLO algorithm works using the following three techniques:

● Residual blocks
● Bounding box regression
● Intersection Over Union (IOU)
HOW THE YOLO ALGORITHM WORKS
❏ Residual Blocks:
First, the image is divided into an S x S grid of cells.
The following image shows how an input image is divided into grids.
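The grid assignment above can be sketched in a few lines of Python. The helper below is a hypothetical illustration: it maps a pixel coordinate to the grid cell responsible for detecting an object centered there, assuming the 416 x 416 input and 13 x 13 coarse grid commonly used with YOLOv3:

```python
def grid_cell(x, y, img_w, img_h, S):
    """Return the (row, col) of the S x S grid cell containing pixel (x, y)."""
    col = min(int(x * S / img_w), S - 1)  # clamp points on the right edge
    row = min(int(y * S / img_h), S - 1)  # clamp points on the bottom edge
    return row, col

# With a 416 x 416 input and a 13 x 13 grid, each cell covers 32 x 32 pixels,
# so the pixel (100, 250) falls in row 7, column 3:
print(grid_cell(100, 250, 416, 416, 13))  # → (7, 3)
```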
HOW THE YOLO ALGORITHM WORKS
➢ Bounding box regression:

A bounding box is an outline that highlights an object in an image.


Every bounding box in the image consists of the following attributes:

● Center (bx, by)

● Width (bw)

● Height (bh)

● Class
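As a sketch of how these attributes become a drawable rectangle, the hypothetical helper below converts a box whose center (bx, by) is given relative to its grid cell and whose width/height (bw, bh) are given relative to the whole image into pixel corner coordinates. This follows one common YOLO convention; the exact parameterization varies by version:

```python
def decode_box(row, col, bx, by, bw, bh, S, img_w, img_h):
    """Convert a YOLO-style box into pixel corners (x1, y1, x2, y2):
    center (bx, by) is relative to grid cell (row, col) in an S x S grid,
    width/height (bw, bh) are relative to the whole image."""
    cx = (col + bx) / S * img_w   # absolute box center, x, in pixels
    cy = (row + by) / S * img_h   # absolute box center, y, in pixels
    w = bw * img_w
    h = bh * img_h
    return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

# A box centered in cell (7, 3) of a 13 x 13 grid on a 416 x 416 image,
# spanning a quarter of the image in each dimension:
print(decode_box(7, 3, 0.5, 0.5, 0.25, 0.25, 13, 416, 416))  # ≈ (60, 188, 164, 292)
```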
HOW THE YOLO ALGORITHM WORKS
❏ Intersection over union (IOU)

➢ Specifies the amount of overlap between the predicted and ground truth bounding
boxes.
➢ Its value ranges from 0 to 1.
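IOU can be computed directly from corner coordinates. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])          # corners of the overlap region
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # 0 if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap, about 0.14
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes → 1.0
```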
CONCLUSION

In this project, we proposed an idea for feasible communication between hearing
and speech impaired people and people who are able to hear and speak, with the
help of a Deep Learning approach.
FUTURE SCOPE

❏ Improved prediction in low light situations.

❏ Train more hand signs → more gestures recognized.

❏ Introduce a text-to-speech generation feature.


REFERENCES
[1] Real Time Sign Language Detection, International Journal for Modern Trends in Science
and Technology, 2022.

[2] A Review Paper on Sign Language Recognition for The Deaf and Dumb, International
Journal of Engineering Research & Technology (IJERT), Vol. 10, Issue 10, October 2021.

[3] YOLOv3 Documentation.

[4] Real Time Sign Language Recognition and Speech Generation, Journal of Innovative
Image Processing (JIIP), Vol. 02, No. 02, 2020.
THANK YOU
