
A PROPOSAL ON THE PROJECT TOPIC:

DESIGN OF SMART GLOVE FOR SIGN LANGUAGE TRANSLATION TO TEXT OR


SPEECH
Submitted By:

Osuala Justin Caleb

MATRIC NO: 170408004

To:

ENGR. HISHAM MUHAMMED

Department of Computer Engineering


University of Lagos (Akoka)
CERTIFICATION

This is to certify that this project was carried out by OSUALA JUSTIN, with matriculation
number 170408004, in the Department of Electrical and Electronics Engineering, Faculty of
Engineering, University of Lagos, under the supervision of ENGR. HISHAM MUHAMMED.

– – – – – – – – – – – –– – –– ––––––––––––––––
OSUALA JUSTIN DATE
Author

– – – – – – – – – – – – – – –– –––––––––––––––
ENGR. HISHAM MUHAMMED DATE
Supervisor

–––––––––––––––– ––––––––––––––––
Dr. (Mrs.) Gbenga-Ilori DATE
Project Coordinator

–––––––––––––––– – ––––––––––––––––

Prof. Akinbulire (H.O.D) DATE


TABLE OF CONTENTS
ACKNOWLEDGEMENT ........................................................................

ABSTRACT .................................................................................................

LIST OF FIGURES...................................................................
FIGURE 2.1 American Sign Language
FIGURE 2.2 French Sign Language
FIGURE 2.3 British, Australian and New Zealand Sign Language
FIGURE 3.0 Block Diagram of the Smart Glove
FIGURE 3.1 Flow Chart of the Smart Glove
FIGURE 3.2 Flex Sensor
FIGURE 3.3 Accelerometer MPU6050
FIGURE 3.4 MPU6050 Module Pinout
FIGURE 3.5 Touch Sensor
FIGURE 3.6 HC-05 Bluetooth Module
FIGURE 3.7 Arduino Nano
FIGURE 3.8 Arduino Uno Pinout
LIST OF ABBREVIATIONS....................................................................

CHAPTER ONE – INTRODUCTION......................................................


1.1 BACKGROUND........................................................................
1.2 STATEMENT OF PROBLEM................................................
1.3 AIM AND OBJECTIVES OF STUDY....................................
1.4 SCOPE.......................................................................................
1.5 REPORT OUTLINE.................................................................

CHAPTER TWO – LITERATURE REVIEW......................................................


2.1 Different Sign Languages
2.2 PAPER REVIEW AND FOCUS
CHAPTER THREE – METHODOLOGY......................................................
3.0 Working Principle of Smart Glove for sign language
3.1 FLEX SENSORS
3.2 ACCELEROMETER (MPU6050)
3.3 TOUCH SENSOR
3.4 HC-05 Bluetooth Modules
3.5 ARDUINO
CONCLUSION......................................................

REFERENCE……………………………………………………………

ACKNOWLEDGEMENT
I would like to express my gratitude to the entire staff of the Electrical/Electronics and Computer
Engineering Department, University of Lagos, Akoka, for the knowledge, skills, and values
passed on to students, and for the opportunity to contribute to the field of study through this
research.
I would also like to acknowledge my project supervisor, Engr. Hisham Muhammed, for being
very resourceful and for his support throughout this project.

ABSTRACT

This paper discusses the design of a sign language translator based on wearable technology,
capable of converting sign language into text and speech. The glove-based device reads the
motion of one arm and five fingers. To measure arm and finger motions, the system is composed
of five flex sensors and an accelerometer. Based on the interaction of these sensors, and with the
help of a mobile phone application, the device can translate gestures corresponding to the
American Sign Language (ASL) alphabet into speech and text. The device's hardware design is
explained in detail.
CHAPTER ONE
INTRODUCTION

1.1 BACKGROUND OF THE STUDY

In a contemporary society, where the principle of equality is of utmost importance, individuals
with speech and hearing impairments must not be marginalized. Sign language has long been
employed by individuals to communicate and convey messages, particularly within deaf
communities. To represent the thoughts of a speaker through sign language, a sequence of hand
shapes, orientations, movements, and facial expressions can be utilized. The use of sign language
in communication has proven to be an effective means of conveying complex information, as
well as providing a means of interaction and socialization within deaf communities. As such, the
study of sign language has become an area of increasing interest within both academic and
business circles [1].

While sign language provides numerous benefits, it also presents several challenges. One
significant challenge is the existence of multiple sign languages, each with its own distinct
vocabulary, syntax, and grammar. Such diversity creates a communication gap between
individuals who are proficient in a particular sign language and those who use a different one,
thus hindering effective communication. Furthermore, not all non-hearing-impaired and
non-speech-impaired people are skilled in sign language, leading to further communication
barriers in society [2]. As such, it is crucial to find ways to bridge the communication gap
effectively using technology.

As technology continues to advance at a rapid pace, various tools have been developed to assist
members of the deaf community with hearing and communicating with others. An array of
over-the-counter hearing aids is now available to aid with deafness and other communication
disorders, including behind-the-ear, in-the-ear, and canal aids [3]. Although they can be quite
helpful, wearing a hearing aid can make the user feel uneasy or amplify background noise.
Consequently, researchers have been developing a variety of methods for translating sign
language movements. Wearable technology and vision-based systems are the two primary
methods. Vision-based systems employ image-processing feature-extraction techniques to
interpret hand and finger movements; hand gesture recognition is often performed via a sensor-
or vision-based technique [4]. The fundamental advantage of sensor-based systems over
vision-based systems is the reduced need to convert raw data into usable values [5], while
vision-based systems require no sensors. Nonetheless, previous research has exposed many
flaws in vision-based SLR systems [6]. Some of the most significant problems, such as high
processing costs, the capturing device's narrow field of vision, and the need for multiple cameras
to obtain adequate results, are brought on by large storage requirements, complex environments,
inaccurate gesture capture, and variable lighting [7]. Conversely, wearable sensing devices offer
many benefits: streamlined data processing; direct information about finger bend and wrist
orientation supplied to the recognition system as voltage values [8]; freedom from movement
constraints (such as having to sit in front of a desk or camera); adaptability to different
environments (background conditions have little effect on hand-shape recognition); and
portable, light, and comfortable SLR-based glove devices [9]. Thus, sensor-based wearable
systems (also known as wearable sensory devices) are convenient for capturing SL motions
without environmental constraints [9].

The aim of this project is to bridge the gap between speech-impaired persons and
non-speech-impaired persons by designing a hand glove equipped with sensors, such as a flex
sensor, an accelerometer, and a touch sensor, which sense different sign language gestures.

The layout of this report is organized as follows: Chapter One introduces the background,
problem statement, aim and objectives, and scope of the study. Chapter Two reviews the
literature on sign languages and existing sign-language-translation systems. Chapter Three
presents the methodology used to design the smart glove, including its working principle and its
hardware and software components.

1.2 PROBLEM STATEMENT

Sign language is the primary mode of communication for individuals who are deaf or hard of

hearing. While there have been advancements in sign language interpretation through human

interpreters or video-based translation systems, these solutions are often costly, time-consuming,

and not readily available in all situations [10]. The need for real-time translation of sign language

has led to the exploration of wearable devices, such as the smart glove for sign language

translation using Arduino. This technology aims to capture hand gestures and convert them into

meaningful text or speech output [11].


Despite the potential benefits of the smart glove for sign language translation, there are several

gaps in the existing knowledge and technology such as:

 The need for a comprehensive understanding of the design considerations and challenges

involved in developing a smart glove that is comfortable, ergonomic, and easy to use.

 Lack of adequate research on the implementation of a robust and adaptable sign

language recognition algorithm that can accurately recognize and classify different sign

language gestures, considering variations in hand shapes, speeds, and lighting

conditions.

The lack of comprehensive knowledge and technology in the design and implementation of the

smart glove for sign language translation using Arduino poses several problems [6]:

 Firstly, without a thorough understanding of the design considerations, the resulting

glove may be uncomfortable, bulky, or difficult to wear, which can hinder its adoption

and usability. If the sensors are not placed optimally, the accuracy of gesture recognition

may be compromised, leading to incorrect translations and miscommunication.

 Additionally, without a robust and adaptable sign language recognition algorithm, the

system may struggle to accurately interpret and translate sign language gestures in real-

time, limiting its effectiveness and reliability.

These gaps in knowledge and technology pose a significant problem as they hinder the

development of a reliable and user-friendly smart glove for sign language translation. Without

addressing these issues, individuals with hearing impairments will continue to face

communication barriers, limiting their access to education, employment, and social interactions

[10]. Therefore, it is crucial to fill these gaps and develop a comprehensive understanding of the

design considerations, implementation challenges, and performance evaluation of the smart


glove for sign language translation using Arduino. This dissertation aims to address these

problems and contribute to the advancement of assistive technologies, ultimately improving the

quality of life for individuals with hearing impairments.

1.3 Aim of the study

The aim of this project is to bridge the gap between speech-impaired persons and
non-speech-impaired persons by designing a hand glove equipped with sensors, such as a flex
sensor, an accelerometer, and a touch sensor, which sense different sign language gestures.

1.4 Objectives of the study

 To develop a comprehensive smart glove system that translates sign language into text

and speech, while also translating speech and typed text into both text and sign language.

 To integrate essential hardware components, such as flex sensors, an accelerometer, and

touch sensors, into the glove using Arduino technology.

 To enable seamless interaction by implementing wireless communication protocols, like

Bluetooth, on the Arduino, facilitating connectivity between the smart glove and external

devices such as smartphones or computers for widespread accessibility and usage.

1.5 Scope of study


The primary objective of this project is to design and develop a Smart Glove that can effectively

translate sign language, thereby facilitating communication between individuals with speech

impairments and the public. The glove will be equipped with a range of sensory technologies,

including Flex sensor, Accelerometer, and Touch sensor, which will enable it to interpret various

sign language gestures. Advanced Arduino technology will be utilized to integrate the hardware

components into the glove, and Arduino code will be developed for signal processing.

Additionally, Bluetooth wireless communication protocols will be incorporated into the Arduino

platform, enabling seamless connectivity with external devices such as smartphones and

computers.

Overall, the ultimate goal of this project is to promote effective communication and interaction

between speech-impaired individuals and the broader community. By providing a reliable and

accurate means of translating sign language, the Smart Glove will serve as a tool for fostering

inclusivity and reducing communication barriers.

CHAPTER TWO

LITERATURE REVIEW
Introduction

Smart gloves for sign language interpretation have emerged as a promising technology to bridge

the communication gap between individuals who are deaf or hard of hearing and those who do

not understand sign language. These innovative devices utilize sensors, actuators, and advanced

technology to translate hand gestures into spoken language or text in real-time (Praveen).

This literature review aims to explore the design aspects, technological advancements,

applications, challenges, and future prospects of smart gloves for sign language interpretation.

Historical Overview of Sign Language Interpretation

Sign language has been used as a means of communication by deaf individuals for centuries. For
those who are hard of hearing or speech-impaired, sign language is often their only means of
communication. However, sign language is not understood by the general public, which greatly
hinders communication between hearing-impaired, speech-impaired, and non-impaired
individuals. Aside from this, Madek states that every sign language in the world depends on
three components: the finger, word, and expression levels (Madek).

Furthermore, learning sign language is challenging because of its inherent grammatical and

sentence structure variations. Consequently, in order to guarantee efficient and simple

communication between various populations, a system that can assist in translating voice to sign

language as well as sign language to voice is needed (TOK B). This need has led to the

development of various technologies over the years ranging from traditional methods such as

interpreters to video relay services which have in the long run exhibited limitations in terms of

accessibility, cost, and real-time communication. These limitations have led several scholars to

research and invent other means of communication to promote equality and communication in
the society (Bower, 2015). One of these designs is the "smart gloves for the interpretation of sign
language" (Aboug).

Design Considerations for Smart Gloves

Smart gloves have emerged as a promising technology with diverse applications, ranging from
virtual reality and healthcare to a more convenient and portable solution for sign language
interpretation.

The design of these gloves plays a pivotal role in their functionality and effectiveness.

The design of smart gloves for sign language involves several key considerations to ensure
accuracy, comfort, and usability.

Sensing Technologies:

Sensing technologies are fundamental to smart gloves, enabling them to interact with the user's

hand movements and gestures. Capacitive, resistive, and optical sensors are commonly employed

to detect finger flexion, hand gestures, and touch input. Research by Jones et al. (2019)

emphasized the importance of selecting sensors with high sensitivity and accuracy to ensure

precise tracking of hand movements.

Flexibility and Comfort:

Designing smart gloves that are comfortable to wear for extended periods is essential to user

acceptance and usability. Achieving a balance between flexibility and structural integrity is

crucial. Li and Zhang (2020) conducted a study on ergonomic design principles for wearable

devices, highlighting the significance of incorporating breathable materials and adjustable straps

to enhance comfort and fit.

Data Transmission and Processing:


Efficient data transmission and processing mechanisms are critical for real-time interaction and

feedback in smart gloves. Wireless communication protocols such as Bluetooth Low Energy

(BLE) and Wi-Fi are commonly used for seamless connectivity with external devices. Moreover,

onboard processing units equipped with microcontrollers or embedded systems are essential for

quick data analysis and interpretation (Chen et al., 2018).

Power Management:

Smart gloves rely on battery-powered systems, making power management a crucial aspect of

their design. Balancing power consumption with performance is essential to prolong battery life

and enhance user experience. Li et al. (2017) proposed energy-efficient algorithms for gesture

recognition in smart gloves, optimizing power usage without compromising accuracy.

User Interface and Feedback:

Designing intuitive user interfaces and providing effective feedback mechanisms are essential for

enhancing user experience and usability. Incorporating haptic feedback systems or visual

displays enables users to receive real-time feedback on their interactions. Research by Kim et al.

(2021) explored the integration of actuators in smart gloves to provide tactile feedback for virtual

reality applications. By incorporating these design considerations into the development process,

engineers can create smart gloves that offer enhanced functionality, usability, and user

experience across diverse applications, including sign language translation.

Technological Advancements in Smart Gloves


In recent years, a number of methods for "Hand Gesture Recognition System using Image

Processing" have been presented. These methods process the digital image using a modified

SIFT algorithm, which can yield results in real-time, but they are costly. Another approach is to

use MATLAB to create the system and the SURF (Speed up Robust Features) technique

(Mugala, 2019). In 2019, Sapkota designed another approach using a sensor-based technique
comprising flex sensors, tactile sensors, and an accelerometer to translate sign language gestures
into both text and audible speech using gloves, terming it the "smart glove". Sadek supports the

notion of smart gloves and states the three types of smart gloves for hand gesture recognition as:

 DG5-VHand gloves: include a gyroscope, magnetometer, and accelerometer built into the

glove, along with several finger sensors.

 5DT gloves: come with wireless interface cables, a wireless data transmitter, and a

wireless belt plate.

 CyberGloves: offer wireless options for their users.

Based on the approach, the two primary types of currently available gesture-recognition
techniques [ABHAYA] are: (a) the computer-vision-based approach and (b) the sensor-based
approach. In a vision-based approach, a camera is placed on the desk or interfaced with the
user's cap, and the user remains seated in a fixed position so that the system can precisely
recognize the gesture. Rizwan states that although the vision-based approach has its advantages,
lighting conditions in the surrounding area can reduce the accuracy of an image-processing
method.

In recent years, there has been a growing interest in designing smart gloves for sign language

interpretation using Arduino. Several researchers have presented low-cost, open-source


prototypes of gesture recognition and interpretation gloves that help audio-vocally impaired

individuals communicate. These gloves are highlighted for their ergonomic design, portability,

and intelligent gesture recognition abilities. The aim of these designs is to bridge the

communication gap between hearing-impaired individuals and the general public. Some

researchers have focused on developing cost-effective gloves with sign language interpretation

abilities. For instance, Sengupta et al. (2019) designed an Arduino-based sign language

interpreter. Haq et al. (2018) developed a two-way communication system that recognizes
Indonesian Sign Language and interprets hand gestures on an Android mobile device, aiming to
eliminate communication distress between deaf-mute individuals and hearing people. Chougule

et al. (2019) worked on a smart glove that interprets American Sign Language into text or

speech, aiming to bridge the communication gap between deaf and mute individuals and the
general public.

Other researchers have focused on designing gesture detection systems that provide electronic

hand gesture recognition at an affordable cost. For example, Telluri et al. (2020) introduced a

low-cost flex-powered gesture detection system using Arduino, flex sensors, gyroscope, and

accelerometer. Lee et al. (2020) focused on sensor fusion of motion-based sign language

interpretation using deep learning, aiming to design a smart wearable American Sign Language

interpretation system.

Bairagi et al., proposed a gesture recognition system called Glove based gesture recognition.

This system aims to help individuals who are unable to communicate verbally, such as dumb or

mute individuals, to communicate with others through hand gestures. The system uses hand

gloves that are equipped with pressure and accelerometer sensors to track the movement and

bending of the wearer's hands. This eliminates the need for a camera-based system, which may
not always be feasible. The gloves are designed to be easily worn and operated by the wearer,
bridging the gap in communication between individuals who are unable to speak and those who
can (Bairagi et al.). These hand signs are converted into a digital signal using an ADC. A

microcontroller is used to detect the hand gesture. Once the gesture is detected, the signals are

sent to an Android phone via a Bluetooth module. The received text signal on the Android phone

is converted into a speech signal using a Text to Speech Android application. The converted

speech output is provided through a speaker system so that it can be understood by ordinary

people to interpret the sign language of the deaf person. The LCD module is also used to display

the recognized gesture on the microcontroller (Patel., 2018).

In his paper, Subramanyam suggests the use of a smart glove that can convert gestures to speech

output. The glove is equipped with flex sensors and a MEMS sensor. A new state-estimation

technique has been developed to track the movement of the hand in three-dimensional space. The

model was tested for its feasibility in converting Indian Sign Language to speech output,

although the glove was intended for communication through sign language to speech conversion.

An artificial mouth has been created to help people with speech difficulties. It works on a motion

sensor, where the sensor responds to every action by the user. The database stores all the

messages and templates. In real-time, the template database is fed into a microcontroller, and the

motion sensor is attached to the user's hand. For every action, the motion sensor gets activated

and sends a signal to the microcontroller. The microcontroller matches the motion with the

database and produces the speech signal. The artificial mouth assists in retrieving data from the

database, and the user can speak like a normal person. The speakers are the output device in

which the system blocks interpret the matched gestures as text to speech conversion. The
embedded hardware with flex sensors and micro sensors helps in reading the gestures performed

by the user.

Rastogi uses five flex sensors and an accelerometer attached to the back of the glove to measure
the bending and motion of the hand. The limitation of this work is that it can recognize only
some letters: M, N, O, R, S, T, V, and X cannot be displayed because their gestures are similar
to those of other letters.

Overall, these studies show that there is a growing interest in developing smart gloves for sign

language interpretation using Arduino. These gloves are designed to be cost-effective, portable,

and provide intelligent gesture recognition abilities to bridge the communication gap between

hearing-impaired individuals and the general public. It is the need to bridge this gap
cost-effectively on a global scale, while supporting equality, that makes the subject of this
research so important.

Smart gloves have a range of applications in the field of sign language interpretation. One of the

most significant uses is:

Real-time translation: By using sensors embedded in the gloves to capture hand movements

and gestures, these devices can translate sign language into written text or spoken language in

real-time. Research by Hu et al. (2019) has shown that smart gloves equipped with machine

learning algorithms are highly effective in accurately translating American Sign Language (ASL)

gestures into text.

Accessibility and Inclusion: Smart gloves can also enhance accessibility and promote inclusion

for the deaf and hard of hearing community. By providing real-time translation of sign language,

these devices make communication between sign language users and non-signers seamless in
various settings, including educational institutions, workplaces, and public spaces (Li et al.,

2020).

Communication and interaction

Furthermore, smart gloves serve as valuable assistive technology tools for individuals with

hearing impairments. They facilitate communication and interaction with the hearing population,

and with advancements in sensor technology and machine learning algorithms, these gloves can

recognize and interpret a wide range of sign language gestures. This empowers users to express

themselves more effectively (Wang et al., 2021).

Education and training purposes

Smart gloves can also be used for sign language education and training purposes. By providing

instant feedback on sign language gestures, these devices help learners improve their signing

skills and enhance their proficiency in sign language communication (Chen et al., 2020).

LIMITATION OF EXISTING SYSTEM

Understanding the challenges and limitations associated with any technology is crucial for its

successful development, deployment, and adoption.

Sensor Accuracy and Reliability:

One of the primary challenges in smart glove technology is ensuring the accuracy and reliability

of sensors used for gesture recognition and hand tracking. Factors such as sensor drift, noise

interference, and calibration issues can affect the precision of hand movements captured by the
gloves (Johnston et al., 2018). Addressing these challenges requires advanced sensor

technologies and robust calibration algorithms to improve accuracy and reliability.

Complex Gesture Recognition:

Smart gloves are often tasked with recognizing complex hand gestures and movements, which

can pose significant challenges, especially in dynamic environments. Gesture ambiguity, where

similar hand movements convey different meanings, complicates the recognition process (Kumar

et al., 2020). Developing sophisticated machine learning algorithms capable of accurately

distinguishing between subtle variations in gestures is essential to overcome this limitation.

Limited Battery Life:

Power management remains a significant limitation in smart glove technology, with limited

battery life hindering prolonged use and usability. The integration of multiple sensors, wireless

communication modules, and onboard processing units contributes to high power consumption

(Park et al., 2019). Addressing this challenge requires optimizing hardware components and

implementing energy-efficient algorithms to extend battery life without compromising

performance.

Ergonomics and Comfort:

Designing smart gloves that are ergonomic and comfortable to wear for extended periods

presents a significant challenge. Factors such as glove size, weight distribution, and material

choice influence user comfort and usability (Wang et al., 2019). Balancing the need for structural

integrity with flexibility and comfort requires careful consideration during the design and

development process.
Cost and Accessibility:

Cost remains a barrier to widespread adoption of smart glove technology, particularly in

healthcare and education sectors where affordability is critical. The integration of high-quality

sensors, processors, and communication modules contributes to the overall cost of smart gloves

(Choi et al., 2021). Developing cost-effective solutions without compromising performance is

essential to enhance accessibility and promote broader adoption.

Addressing the challenges and limitations associated with smart glove technology is essential to

unlock its full potential across various applications. By leveraging advancements in sensor

technology, machine learning algorithms, and energy-efficient design principles, researchers and

engineers can overcome these challenges and pave the way for innovative and impactful smart

glove solutions.
CHAPTER THREE

METHODOLOGY

3.1 INTRODUCTION

In recent times, technology has enabled impaired individuals to live regular, healthy lives
alongside others. This chapter details the methods used to actualize the research objectives, such
as bridging the gap between speech-impaired persons and non-speech-impaired persons by
designing a hand glove equipped with sensors. To achieve this, software and hardware tools will
be used to design and build the device, and data collection will be done using publicly available
datasets, such as the Word-Level American Sign Language (WLASL) dataset or the American
Sign Language Fingerspelling dataset.

3.2 HARDWARE AND SOFTWARE REQUIREMENTS

A. Hardware Requirements

• Arduino Nano R3
• Analog accelerometers (ADXL335)
• Inertial measurement unit (IMU), 6 degrees of freedom
• HC-05 Bluetooth module
• Flex sensors
• Resistors, 10 kΩ
• Alphanumeric LCD, 16 x 2
• USB-A to micro-USB cable
• Battery / power bank
• Glove
• Connecting wires
• Solderless breadboard, full size
• Soldering iron and lead-free solder wire
• Hot glue gun
• PCB holder


B. Software Requirements

• Arduino IDE • Android Application • Android studio

3.2.1 Data Collection

This project will make use of publicly available datasets, such as the Word-Level American
Sign Language (WLASL) dataset or the American Sign Language Fingerspelling dataset.

3.3 THE SMART GLOVES


Fig: 3.1 Block Diagram of the Smart Gloves
 The accelerometer sensor measures the linear movement of the hand along the X-axis and
outputs different X values corresponding to that movement.
 After the gesture is sensed using the five accelerometers, the data from all the sensors are
processed on the NodeMCU.
 For this, appropriate ranges are set for each alphabet letter, and for the words that can be
recognized with a single hand, based on measured data obtained from repeated
measurements.
 A Bluetooth module connected to the Arduino Uno sends the result to an Android phone,
which displays it.

The smart glove would display the following qualities/characteristics:

Sensing the gestures

An accelerometer is a device used to measure proper acceleration. Here, the accelerometers are
used to detect finger bending and hand orientation. These sensors detect motions, and the glove
must be connected to its battery.
Recognition of gesture:
The analog signals are converted to digital signals, and the resulting sensor values are compared
against the reference values in the Arduino Uno's code. If the values match a stored gesture, an
output is produced; if not, the device waits for the correct input.
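The compare-and-match step described above can be sketched as a lookup over calibrated per-letter ranges. The table values below are purely illustrative placeholders, not measurements from the glove:

```cpp
#include <cassert>

// Each letter is stored as a min/max window for every finger's digitized
// sensor reading (values would come from repeated calibration measurements).
struct GestureRange {
    char letter;
    int lo[5];  // lower bound per finger
    int hi[5];  // upper bound per finger
};

// A reading matches a letter only if every finger falls inside its window.
// Returns '\0' when no stored gesture matches, so the caller keeps waiting.
char classify(const int reading[5], const GestureRange *table, int n) {
    for (int g = 0; g < n; ++g) {
        bool match = true;
        for (int f = 0; f < 5; ++f) {
            if (reading[f] < table[g].lo[f] || reading[f] > table[g].hi[f]) {
                match = false;
                break;
            }
        }
        if (match) return table[g].letter;
    }
    return '\0';
}
```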

Transmission of Output:
The output so produced is sent to the Android application as text and speech over the Bluetooth connection between the glove and the Android phone, which enables wireless communication.
Five accelerometers—one for each finger—are used to detect the gesture based on movement.
The ADXL335 is used for this: a small, thin, low-power device that measures acceleration on all three axes, with a minimum full-scale range of ±3 g. It consumes little power and produces analog output voltages proportional to acceleration.
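As a rough sketch of how such an analog output might be converted in software (this is not part of the original design; the 3.3 V reference, 1.65 V zero-g level, and 330 mV/g sensitivity are nominal datasheet-style values that would need per-board calibration):

```cpp
// Convert a 10-bit ADC reading from one ADXL335 axis into acceleration in g.
// Assumes a 3.3 V ADC reference; the zero-g output (~1.65 V) and sensitivity
// (~0.330 V/g) are nominal values and should be calibrated in practice.
float adcToG(int adcReading) {
    const float vRef       = 3.3f;    // ADC reference voltage (assumed)
    const float zeroGVolts = 1.65f;   // output at 0 g (nominal)
    const float voltsPerG  = 0.330f;  // sensitivity (nominal)
    float volts = adcReading * vRef / 1023.0f;
    return (volts - zeroGVolts) / voltsPerG;
}
```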

3.3.1 Parts of the Smart gloves


Arduino Nano (Microcontroller)
Arduino Nano is the main controller used in this project. It is a microcontroller board based on the ATmega328. It has 14 digital input/output pins, of which 6 provide PWM output.

Fig: 3.2 Arduino Nano Atmega328 Microcontrollers


Flex Sensor
The flex sensor is the most widely used and suitable sensor for capturing and measuring the bending of the fingers. It is flexible enough to bend easily under even slight pressure, and it is comfortable to wear because it is lightweight and thin. It consists of carbon resistive elements within a thin flexible substrate; as the substrate curves, the sensor produces a resistance output proportional to the bend radius.
Fig 3.3: Flex Sensor
The flex sensor's resistance varies from about 45 kΩ to 125 kΩ as it bends.
The relation between bend and resistance is shown in Fig. 3.4.

Fig 3.4: Relation between Bend and Resistance

HC 05 Bluetooth Module
A Bluetooth module is used for data transmission from the microcontroller to the smartphone. This module has an on-board antenna, works with a variety of phones and Bluetooth adapters, and acts as a transparent serial port. Its operating voltage is 3.0–3.6 V.

Fig 3.5: HC 05 Bluetooth Module


Arduino Software
The Arduino IDE (Integrated Development Environment) is a cross-platform application written in Java. It includes a code editor with features such as brace matching, automatic indentation, and syntax highlighting, and it can compile and upload programs to the board with a single click.

RF 433 MHz Transmitters and Receiver


These RF modules are used to transmit data from one Arduino to another. The operating voltage is 5 V for the receiver and 3–12 V for the transmitter.

Fig 3.6: RF 433 MHz Transmitters and Receiver

3.4 Constructing the Smart Glove:

The components used in the glove have been discussed above. The flex sensor detects how much a finger bends in response to a motion and produces a change in resistance that indicates the degree of the bend. The accelerometer sensor measures the hand's linear motion along the X, Y, and Z axes and outputs values for X, Y, and Z corresponding to the movement on each axis.
The Arduino Nano then processes all of the sensor data, logically combining the sensor outputs with IF/ELSE statements and AND/OR operations to match the final output against previously recorded values for the various signs representing the alphabet. For this, suitable ranges are established for every letter, and for the words that can be identified with a single hand, based on observed data gathered from repeated measurements.
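The IF/AND combination of the five sensor readings against recorded ranges could look like the following sketch. The two sign templates and their ADC bounds are hypothetical placeholders, not the project's measured values.

```cpp
// One hypothetical calibration entry per recognizable sign: a lower and
// upper ADC bound for each of the five fingers.
struct SignTemplate {
    char letter;
    int low[5];   // per-finger lower bound (ADC counts, assumed)
    int high[5];  // per-finger upper bound (ADC counts, assumed)
};

// Returns the letter of the first template whose ranges contain all five
// readings (IF/AND logic), or '?' so the device waits for the next input.
char matchSign(const int reading[5]) {
    static const SignTemplate templates[] = {
        {'A', {600, 600, 600, 600, 200}, {800, 800, 800, 800, 400}},
        {'B', {200, 200, 200, 200, 600}, {400, 400, 400, 400, 800}},
    };
    for (const SignTemplate &t : templates) {
        bool allMatch = true;
        for (int i = 0; i < 5; ++i) {
            if (reading[i] < t.low[i] || reading[i] > t.high[i]) {
                allMatch = false;  // one finger outside its range: no match
                break;
            }
        }
        if (allMatch) return t.letter;
    }
    return '?';  // no template matched
}
```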
Fig 3.7: Hardware Case Diagram

Fig 3.7.1: Software Case Diagram


The design of this system is based on three key principles: real-time systems, change in
resistance, and acceleration due to gravity.
Resistance governs the flex sensor's functionality: the resistance increases with the flex sensor's bend. The sensor's conductive ink serves as a resistor. This resistance is around 25 kΩ when the sensor is straight. When the sensor is bent, the conductive layer is stretched, which reduces its cross-section, and the smaller cross-section raises the resistance; at a 90° angle it is about 100 kΩ. The resistance recovers its initial value when the sensor is straightened again.
Measuring the resistance therefore shows how far the sensor has been bent. The simplest way to read the sensor is to connect the flex sensor to a fixed-value resistor (often 10 or 47 kΩ) to form a voltage divider, as shown in Fig 3.8. To accomplish this, the sensor's two ends are connected to power and to a pull-down resistor, respectively. The junction between the flex sensor and the fixed-value pull-down resistor is then connected to one of the Arduino's ADC inputs, producing a variable voltage output that the Arduino's ADC can read.

Fig 3.8: Voltage Divider
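The divider arithmetic can be made concrete with a short sketch. The 5 V supply and 10 kΩ pull-down below are illustrative assumptions only; the bend-sensing module itself uses the Nano's 3.3 V rail.

```cpp
// Voltage divider formed by the flex sensor (top leg) and a fixed pull-down
// resistor (bottom leg): Vout = Vcc * Rfixed / (Rflex + Rfixed).
// The 5 V supply and 10 kohm pull-down are illustrative assumptions.
float dividerVout(float rFlexOhms) {
    const float vcc    = 5.0f;
    const float rFixed = 10000.0f;
    return vcc * rFixed / (rFlexOhms + rFixed);
}

// Invert the divider equation to recover the flex resistance from a 10-bit
// ADC reading; the bend angle would then be estimated from this resistance.
float resistanceFromAdc(int adcReading) {
    const float vcc    = 5.0f;
    const float rFixed = 10000.0f;
    float vOut = adcReading * vcc / 1023.0f;
    return rFixed * (vcc - vOut) / vOut;
}
```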


Bend Sensing Module
Five flex sensors, one for each finger, are attached to the glove. The resistance of each flex sensor varies with how much the finger bends. Pin 2 of each flex sensor is connected to a 10 kΩ resistor, which serves as the pull-down leg of a voltage divider, while Pin 1 is connected to the Arduino Nano's 3.3 V supply. The junction between the flex sensor and the pull-down resistor is connected to one of the Arduino's analog pins.
Fig 3.9: Bend sensing module
Module for position sensing
In this system, the accelerometer (ADXL335) serves as a tilt detector. Its analog output ranges from 1.5 to 3.5 volts. The ADXL335 is a three-axis analog accelerometer IC that reports acceleration along the X, Y, and Z axes as analog voltages. By measuring the static acceleration due to gravity, it determines the tilt of the device with respect to the earth; by measuring dynamic acceleration, it determines the device's speed and direction of motion.

Fig 3.9.1 Module for position sensing
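Tilt can be recovered from the static (gravity) component of the three axis readings. The sketch below shows the standard pitch/roll formulas as an illustration under that assumption; it is not the project's calibrated code.

```cpp
#include <cmath>

const float kRadToDeg = 57.29578f;  // 180 / pi

// Pitch and roll (in degrees) from static axis accelerations in g. With
// only gravity acting, the ratios of the axis readings give the tilt of
// the hand with respect to the earth.
float pitchDegrees(float ax, float ay, float az) {
    return std::atan2(-ax, std::sqrt(ay * ay + az * az)) * kRadToDeg;
}

float rollDegrees(float ay, float az) {
    return std::atan2(ay, az) * kRadToDeg;
}
```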

Transmission module

The transmission module, which comprises the HC-05 Bluetooth module and the Arduino microcontroller, is responsible for sending output data to the mobile app, enabling text-to-speech conversion. The microcontroller determines which phrase has been signed and passes that data to the HC-05 module, which sends it to the Android device connected to it over Bluetooth.
The HC-05 Bluetooth module is linked to the Arduino Nano. After processing, the data, formatted as a string, are sent to the Bluetooth module (transmitter). The Android phone provides built-in Bluetooth functionality; once the two Bluetooth devices are paired, the string is sent to the Android application.

Fig 3.9.2: Transmission module

Through Bluetooth, the Android application receives the data in bytes and converts them into a string. Finally, a text-to-speech application on the Android smartphone converts the string into speech. For ease of use, the entire system is mounted on a standard glove and provides accurate hand-gesture recognition.
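One simple way the received byte stream could be split back into complete messages is delimiter-based framing, sketched below. The newline delimiter and the helper names are assumptions for illustration, not taken from the project's code.

```cpp
#include <string>
#include <vector>

// Frame one recognized word for transmission: a trailing newline marks the
// message boundary in the Bluetooth byte stream (assumed convention).
std::string frameMessage(const std::string &word) {
    return word + "\n";
}

// On the receiving side, split the accumulated byte stream back into
// complete messages at each newline, as the Android app might do before
// handing each word to the text-to-speech engine.
std::vector<std::string> splitMessages(const std::string &stream) {
    std::vector<std::string> messages;
    std::string current;
    for (char c : stream) {
        if (c == '\n') {
            messages.push_back(current);
            current.clear();
        } else {
            current += c;
        }
    }
    return messages;
}
```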
FLOW CHART OF SMART GLOVES (TRANSMITTING AND RECEIVING)

Fig 3.9.3: Flow charts of the smart gloves (transmitting and receiving)
