SIGN LANGUAGE RECOGNITION
GUIDE:
G.VIHARI SIR
PRESENTED BY:
A.POOJA-18B81A1204
A.NARENDRA REDDY-18B81A1205
G.MANOJ KUMAR -18B81A1230
CH.DEEKSHITHA-18B81A1227
CONTENTS
• Abstract
• Introduction
• Existing model
• Proposed model
• Functional Requirements
• Non-functional Requirements
• Requirement Specification
• Design
• Implementation
ABSTRACT
• Sign Language Recognition (SLR) aims to interpret sign language as text or speech, so as
to facilitate communication between deaf-mute people and hearing people.
• This task has broad social impact, but it is challenging because of the large variation in
hand actions.
INTRODUCTION
• Existing methods for SLR use hand-crafted features to describe sign language motion and
build classification models based on those features.
• However, it is difficult to design reliable features that adapt to the large variation in hand
gestures.
PROPOSED MODEL
• To address this problem, we propose a convolutional neural network (CNN) that extracts
discriminative spatial-temporal features from the image automatically, without any prior
knowledge, avoiding hand-designed features.
• In this CNN model, we use a deep learning algorithm to train the system on the dataset and
overcome the difficulties of existing models.
• Deep learning is a subset of machine learning; it is essentially a neural network with three or
more layers. These neural networks attempt to simulate the behaviour of the human brain.
• While a neural network with a single layer can still make approximate predictions, additional
hidden layers help optimize and refine the model for accuracy.
• Deep learning eliminates some of the data pre-processing that is typically involved in machine
learning. The algorithm can ingest and process unstructured data, such as text and images, and it
automates feature extraction, removing some of the dependency on human experts.
• It takes the sign image as input, classifies it, and outputs the corresponding text for that image.
• The CNN accepts multiple types of data.
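To illustrate the idea of automatic spatial feature extraction, the sketch below applies a single 2-D convolution by hand in NumPy. In a trained CNN the kernel values are learned from the data rather than designed; the vertical-edge kernel here is only a stand-in for one learned filter.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and
    sum the element-wise products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel; a real CNN learns such kernels during training.
edge = np.array([[1, 0, -1],
                 [1, 0, -1],
                 [1, 0, -1]])

img = np.zeros((5, 5))
img[:, 3:] = 1.0            # a vertical edge between columns 2 and 3
fmap = conv2d(img, edge)    # the feature map responds where the edge lies
```

Stacking many such learned filters, interleaved with pooling and non-linearities, is what lets the CNN build up discriminative features for hand shapes without manual design.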
FUNCTIONAL REQUIREMENTS
• Gesture Recognition: The software should automatically recognize the gesture from the video input.
• Authentic representation: The software should give the correct meaning of the gesture.
• Cross-platform support: The software should run on as many platforms as possible.
• Threshold value: The software should calculate the threshold value for every frame and determine the contours.
• Maintainability: The software should be coded in a way that is easily readable and maintainable.
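The per-frame thresholding requirement can be sketched as follows. This is a minimal NumPy version assuming a simple mean-plus-offset threshold; the actual pipeline would more likely use OpenCV's `cv2.threshold` and `cv2.findContours` on each frame.

```python
import numpy as np

def binarize_frame(frame, offset=10):
    """Pick a per-frame threshold (mean intensity plus an offset) and
    binarize: pixels above the threshold become 255 (hand region),
    everything else 0. Contour extraction would then run on this mask."""
    threshold = frame.mean() + offset
    mask = (frame > threshold).astype(np.uint8) * 255
    return mask, threshold

# A toy 4x4 "frame": a bright region (the hand) on a dark background.
frame = np.array([[ 10,  10,  10,  10],
                  [ 10, 200, 200,  10],
                  [ 10, 200, 200,  10],
                  [ 10,  10,  10,  10]], dtype=np.float64)
mask, t = binarize_frame(frame)
```

Recomputing the threshold for every frame, rather than fixing it once, keeps the mask stable as lighting changes between frames.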
DESIGN
• Here the input is an image taken from a file.
• The model classifies the image, segments the hand within it, and produces the corresponding
text output for the hand gesture.
• Here the dataset can be mounted from the drive.
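The final step of the design, turning the classifier's scores into text, can be sketched as below. The label list is hypothetical; in practice it would come from the gesture classes in the dataset.

```python
import numpy as np

# Hypothetical gesture classes; the real list comes from the trained dataset.
LABELS = ["A", "B", "C"]

def prediction_to_text(scores, labels=LABELS):
    """Map the classifier's score vector to its text label by taking
    the highest-scoring class (argmax)."""
    return labels[int(np.argmax(scores))]

text = prediction_to_text(np.array([0.1, 0.7, 0.2]))  # -> "B"
```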
REQUIREMENT SPECIFICATIONS
• Hardware Requirements:
• Processor - Intel Core i5
• RAM - 4 GB, Hard Disk - 100 GB
• Software Requirements:
• Operating System - Windows and Android
• Front end & Back end - Tkinter & Python
• Tool - OpenCV
UML DIAGRAMS
• UML is an acronym that stands for Unified Modeling Language. Simply put, UML is a modern
approach to modeling and documenting software.
• It is based on diagrammatic representations of software components.
USE CASE DIAGRAM FOR SYSTEM TRAINING
USE CASE DIAGRAM FOR USER
CLASS DIAGRAM FOR SYSTEM TRAINING
CLASS DIAGRAM FOR USER
SEQUENCE DIAGRAM FOR SYSTEM TRAINING
SEQUENCE DIAGRAM FOR USER
MODULES
• Training Module
• User Module
IMPLEMENTATION OF TRAINING MODULE
• We trained a large number of images as the dataset, for both image recognition and voice output.
• In this project, a CNN recognizes hand gesture movements; the CNN is trained on the images
shown in the screenshots below.
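Loading the training images could start from a sketch like this, assuming the dataset is stored as one folder per gesture class (that folder layout is an assumption, not stated in the slides).

```python
import os

def index_dataset(root):
    """Collect (image_path, label) pairs from a dataset laid out as
    root/<label>/<image>, where each folder name is a gesture label.
    The actual images would then be read, resized, and fed to the CNN."""
    samples = []
    for label in sorted(os.listdir(root)):
        class_dir = os.path.join(root, label)
        if not os.path.isdir(class_dir):
            continue
        for name in sorted(os.listdir(class_dir)):
            samples.append((os.path.join(class_dir, name), label))
    return samples
```

Deriving labels from folder names keeps the dataset self-describing: adding a new gesture class only requires adding a new folder of images.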
SCREENSHOTS TO IMPLEMENT DATASETS
• In the above screen, the CNN model was trained on 2868 images and achieved a prediction
accuracy of 100%. The model is now ready; click the 'Upload Test Image & Recognize
Gesture' button to upload an image and recognize the gesture.
THANK YOU