Design of Smart Gloves For Sign Language Translation
CERTIFICATION
This is to certify that this project was carried out by OSUALA JUSTIN with the matriculation
number 170408004 in the department of Electrical and Electronics Engineering, Faculty of
Engineering, University of Lagos under the supervision of ENGR. HISHAM MUHAMMED.
––––––––––––––––              ––––––––––––––––
OSUALA JUSTIN                 DATE
Author

––––––––––––––––              ––––––––––––––––
ENGR. HISHAM MUHAMMED         DATE
Supervisor

––––––––––––––––              ––––––––––––––––
Dr. (Mrs.) Gbenga-Ilori       DATE
Project Coordinator
ABSTRACT
LIST OF TABLES
LIST OF FIGURES
Figure 2.1: American Sign Language
Figure 2.2: French Sign Language
Figure 2.3: British, Australian and New Zealand Sign Language
Figure 3.0: Block diagram of Smart Glove
Figure 3.1: Flow chart of Smart Glove
Figure 3.2: Flex Sensor
Figure 3.3: Accelerometer MPU6050
Figure 3.4: MPU6050 Module Pin-out
Figure 3.5: Touch Sensor
Figure 3.6: HC-05 Bluetooth Module
Figure 3.7: Arduino Nano
Figure 3.8: Arduino Uno Pinout
LIST OF ABBREVIATIONS
REFERENCES
ACKNOWLEDGEMENT
I would like to express my gratitude to the entire staff of the Electrical/Electronics and Computer Engineering Department, University of Lagos, Akoka, for the knowledge, skills, and values passed on to the students, and for the opportunity to contribute to the field of study through this research.
I would also like to acknowledge my project supervisor, Engr. Hisham Muhammed, for being very resourceful and for his support throughout this project.
ABSTRACT
This report discusses the design of a wearable sign language translator that converts sign language into text and speech. The glove-based device reads the motion of one arm and five fingers using five flex sensors and an accelerometer. Based on the combined readings of these sensors, and with the help of a mobile phone application, the device translates gestures corresponding to the American Sign Language (ASL) alphabet into speech and text. The device's hardware design is described in detail.
CHAPTER ONE
INTRODUCTION
People with speech and hearing impairments must not be marginalized. Sign language has long been the primary means of communication within deaf communities. To represent the thoughts of a speaker through sign language, a sequence of hand shapes, orientations, movements, and facial expressions is used. Sign language serves communicative and cultural functions, as well as providing a means of interaction and socialization within deaf communities. As such, the study of sign language has become an area of increasing interest within both academic and business circles [1]. While sign language provides numerous benefits, it also presents several challenges. One significant challenge is the existence of multiple sign languages, each with its own distinct vocabulary, syntax, and grammar. Such diversity creates a communication gap between individuals who are proficient in different sign languages, hindering effective communication. Furthermore, not all people without hearing or speech impairments are skilled in sign language, leading to further communication barriers in society [2]. It is therefore crucial to find ways to bridge this communication gap. As technology advances at a rapid pace, various tools have been developed to assist members of the deaf community with hearing and communicating with others.
and communicating with others. An array of over-the-counter hearing aids are now available to
aid with deafness and other communication disorders, including behind-the-ear, in-the-ear, and
canal aids [3]. Although they can be quite helpful, wearing a hearing aid can make the user feel
uneasy or increase background noise. Consequently, researchers have been developing a variety
of methods for translating sign language movements. Wearable technology and vision-based
systems are the two primary methods. Vision-based systems employ image processing feature
extraction techniques to interpret hand and finger movements. Hand gesture recognition is often
performed via a sensor- or vision-based technique [4]. The fundamental advantage of sensor-
based systems over vision-based systems is the reduction in the requirement to convert raw data
into usable values [5]. Sensors are not required when using vision-based systems. Nonetheless,
previous research has exposed many flaws in vision-based SLR systems [6]. Some of the most significant problems, such as high processing costs, the capturing device's narrow field of vision, and the need for multiple cameras to obtain adequate results, arise from large storage requirements, complex environments, inaccurate gesture capture, and variable lighting [7]. Conversely, wearable sensing devices offer many benefits: they streamline data processing by feeding finger-bend and wrist-orientation information directly to the recognition system as voltage values [8]; they free the signer from movement constraints (such as sitting in front of a desk or camera); and glove-based SLR devices can be portable, light, and comfortable [9]. Thus, sensor-based wearable systems (also known as wearable sensory devices) are convenient for capturing SL motions without environmental constraints [9]. The aim of this project is to bridge the gap between speech-impaired persons and non-speech-impaired persons by designing a hand glove equipped with sensors such as a flex sensor, an accelerometer, and a touch sensor.
The layout of this paper is organized as follows: Section 2 reviews the recent studies evaluating
real-time SLRS-based wearable sensory devices and the existing multicriteria decision-making
(MCDM) solutions. Section 3 provides the methodology used to evaluate and benchmark real-time SLRS-based wearable sensory devices. Section 4 presents the results and discussion of the methodology. Section 5 outlines the conclusion, limitations, and directions for future research.
Sign language is the primary mode of communication for individuals who are deaf or hard of
hearing. While there have been advancements in sign language interpretation through human
interpreters or video-based translation systems, these solutions are often costly, time-consuming,
and not readily available in all situations [10]. The need for real-time translation of sign language
has led to the exploration of wearable devices, such as the smart glove for sign language
translation using Arduino. This technology aims to capture hand gestures and convert them into text and speech in real time. Two needs follow from this: a comprehensive understanding of the design considerations and challenges involved in developing a smart glove that is comfortable, ergonomic, and easy to use; and a sign language recognition algorithm that can accurately recognize and classify different sign gestures under varying conditions.
The lack of comprehensive knowledge and technology in the design and implementation of a smart glove for sign language translation using Arduino poses several problems [6]. Without careful ergonomic design, the glove may be uncomfortable, bulky, or difficult to wear, which can hinder its adoption and usability. If the sensors are not placed optimally, the accuracy of gesture recognition may suffer. Additionally, without a robust and adaptable sign language recognition algorithm, the system may struggle to accurately interpret and translate sign language gestures in real time. These gaps in knowledge and technology pose a significant problem because they hinder the development of a reliable and user-friendly smart glove for sign language translation. Without addressing these issues, individuals with hearing impairments will continue to face communication barriers, limiting their access to education, employment, and social interactions [10]. Therefore, it is crucial to fill these gaps, develop a comprehensive understanding of these problems, and contribute to the advancement of assistive technologies, ultimately improving the quality of life of individuals with hearing and speech impairments.
The aim of this project is to help bridge the gap between speech-impaired persons and non-speech-impaired persons by designing a hand glove equipped with sensors such as a flex sensor, an accelerometer, and a touch sensor. The specific objectives are:
• To develop a comprehensive smart glove system that translates sign language into text and speech, while also translating speech and typed text into both text and sign language.
• To implement wireless communication protocols, such as Bluetooth, on the Arduino, facilitating connectivity between the smart glove and external devices.
This project covers the design of a smart glove that can translate sign language, thereby facilitating communication between individuals with speech impairments and the public. The glove will be equipped with a range of sensory technologies, including a flex sensor, an accelerometer, and a touch sensor, which will enable it to interpret various sign language gestures. Arduino technology will be used to integrate the hardware components into the glove, and Arduino code will be developed for signal processing. Additionally, Bluetooth wireless communication protocols will be incorporated into the Arduino platform, enabling seamless connectivity with external devices such as smartphones and computers.
Overall, the ultimate goal of this project is to promote effective communication and interaction between speech-impaired individuals and the broader community. By providing a reliable and accurate means of translating sign language, the Smart Glove will serve as a tool for fostering inclusion and social integration.
CHAPTER TWO
LITERATURE REVIEW
Introduction
Smart gloves for sign language interpretation have emerged as a promising technology to bridge
the communication gap between individuals who are deaf or hard of hearing and those who do
not understand sign language. These innovative devices utilize sensors, actuators, and advanced
technology to translate hand gestures into spoken language or text in real-time (Praveen).
This literature review aims to explore the design aspects, technological advancements,
applications, challenges, and future prospects of smart gloves for sign language interpretation.
Sign language has been used as a means of communication by deaf individuals for centuries. For those who are hard of hearing or speech-impaired, sign language is often their only means of communication. However, sign language is not understood by the general public, which creates communication barriers for hearing- and speech-impaired individuals. Aside from this, Madek states that every sign language in the world depends on three components: the finger, word, and expression levels (Madek). Furthermore, learning sign language is challenging because of its inherent grammatical and structural complexity. To facilitate communication between these populations, a system that can assist in translating voice to sign language, as well as sign language to voice, is needed (Tok B). This need has led to the development of various technologies over the years, ranging from traditional methods such as interpreters to video relay services, which have in the long run exhibited limitations in terms of accessibility, cost, and real-time communication. These limitations have led several scholars to research and invent other means of communication to promote equality in society (Bower, 2015). One of these designs is the smart glove for interpretation of sign language (Aboug).
Smart gloves have emerged as a promising technology with diverse applications, ranging from virtual reality to healthcare, and offer a more convenient and portable solution for sign language interpretation. The design of these gloves plays a pivotal role in their functionality and effectiveness.
The design of smart gloves for sign language involves several key considerations to ensure functionality, comfort, and reliability.
Sensing Technologies:
Sensing technologies are fundamental to smart gloves, enabling them to capture the user's hand movements and gestures. Capacitive, resistive, and optical sensors are commonly employed to detect finger flexion, hand gestures, and touch input. Research by Jones et al. (2019) emphasized the importance of selecting sensors with high sensitivity and accuracy to ensure reliable gesture recognition.
Ergonomics and Comfort:
Designing smart gloves that are comfortable to wear for extended periods is essential to user acceptance and usability. Achieving a balance between flexibility and structural integrity is crucial. Li and Zhang (2020) conducted a study on ergonomic design principles for wearable devices, highlighting the significance of incorporating breathable materials and adjustable straps.
Connectivity and Processing:
Connectivity and onboard processing enable real-time interaction and feedback in smart gloves. Wireless communication protocols such as Bluetooth Low Energy (BLE) and Wi-Fi are commonly used for seamless connectivity with external devices. Moreover, onboard processing units equipped with microcontrollers or embedded systems are essential for real-time signal processing.
Power Management:
Smart gloves rely on battery-powered systems, making power management a crucial aspect of
their design. Balancing power consumption with performance is essential to prolong battery life
and enhance user experience. Li et al. (2017) proposed energy-efficient algorithms for gesture recognition that reduce power consumption.
User Interface and Feedback:
Designing intuitive user interfaces and providing effective feedback mechanisms are essential for enhancing user experience and usability. Incorporating haptic feedback systems or visual displays enables users to receive real-time feedback on their interactions. Research by Kim et al. (2021) explored the integration of actuators in smart gloves to provide tactile feedback for virtual reality applications. By incorporating these design considerations into the development process, engineers can create smart gloves that offer enhanced functionality, usability, and a better user experience.
Several vision-based approaches to sign language recognition based on image processing have been presented. These methods process the digital image using a modified SIFT algorithm, which can yield results in real time, but they are costly. Another approach is to use MATLAB to create the system together with the SURF (Speeded-Up Robust Features) technique (Mugala, 2019). Sapkota (2019) designed another approach using a sensor-based technique comprising flex sensors, tactile sensors, and an accelerometer to translate sign language gestures into both text and an auditory voice using gloves, and termed it the "smart glove". Sadek supports the notion of smart gloves and distinguishes three types of smart-glove systems for hand gesture recognition by their interface: wired cables, wireless data transmitters, and wireless belt plates. Based on the approach, the two primary types of currently available gesture recognition techniques [ABHAYA] are (a) the computer-vision-based approach and (b) the sensor-based approach. In a vision-based approach, for the system to precisely recognize the gesture, a camera is placed on the desk or interfaced with the user's cap while the user is seated in a fixed position. Rizwan states that although the vision-based approach has its advantages, the effect of lighting conditions in the surrounding area can reduce the accuracy of an image-processing method.
In recent years, there has been growing interest in designing smart gloves for sign language interpretation to transform how hearing-impaired individuals communicate. These gloves are highlighted for their ergonomic design, portability, and intelligent gesture recognition abilities. The aim of these designs is to bridge the communication gap between hearing-impaired individuals and the general public. Some researchers have focused on developing cost-effective gloves with sign language interpretation
abilities. For instance, Sengupta et al. (2019) designed an Arduino-based sign language
interpreter. Haq et al. (2018) developed a two-way communication system with Indonesian Sign Language recognition that interprets hand gestures on an Android mobile device, aiming to eliminate communication distress between deaf-mute individuals and hearing people. Chougule
et al. (2019) worked on a smart glove that interprets American Sign Language into text or
speech, aiming to bridge the communication gap between the deaf-dumb and the general public.
Other researchers have focused on designing gesture detection systems that provide electronic
hand gesture recognition at an affordable cost. For example, Telluri et al. (2020) introduced a
low-cost flex-powered gesture detection system using Arduino, flex sensors, gyroscope, and
accelerometer. Lee et al. (2020) focused on sensor fusion of motion-based sign language
interpretation using deep learning, aiming to design a smart wearable American Sign Language
interpretation system.
Bairagi et al. proposed a gesture recognition system called glove-based gesture recognition. This system aims to help individuals who are unable to communicate verbally, such as mute individuals, to communicate with others through hand gestures. The system uses hand gloves equipped with pressure and accelerometer sensors to track the movement and bending of the wearer's hands, eliminating the need for a camera-based system, which may not always be feasible. The gloves are designed to be easily worn and operated, bridging the communication gap between individuals who are unable to speak and those who can (Bairagi et al.). The hand signs are converted into a digital signal using an ADC, and a microcontroller detects the hand gesture. Once a gesture is detected, the signal is sent to an Android phone via a Bluetooth module. The received text on the Android phone is converted into a speech signal using a text-to-speech Android application, and the converted speech output is played through a speaker system so that ordinary people can understand the sign language of the deaf person. An LCD module is also used to display the recognized text.
In his paper, Subramanyam suggests the use of a smart glove that can convert gestures to speech output. The glove is equipped with flex sensors and a MEMS sensor, and a new state-estimation technique was developed to track the movement of the hand in three-dimensional space. The glove was intended for communication through sign-language-to-speech conversion, and the model was tested for its feasibility in converting Indian Sign Language to speech output.
An artificial mouth has been created to help people with speech difficulties. It works on a motion
sensor, where the sensor responds to every action by the user. The database stores all the
messages and templates. In real-time, the template database is fed into a microcontroller, and the
motion sensor is attached to the user's hand. For every action, the motion sensor gets activated
and sends a signal to the microcontroller. The microcontroller matches the motion with the
database and produces the speech signal. The artificial mouth assists in retrieving data from the database, allowing the user to speak like a normal person. Speakers serve as the output device, through which the system renders the matched gestures as text-to-speech output. The embedded hardware, with flex sensors and micro-sensors, reads the gestures performed by the user.
Rastogi used five flex sensors and an accelerometer attached to the back of the glove to measure the bending and motion of the hand. The limitation of this work is that it can recognize only some letters: the letters M, N, O, R, S, T, V, and X cannot be displayed because their gestures are similar to those of other letters.
Overall, these studies show that there is a growing interest in developing smart gloves for sign
language interpretation using Arduino. These gloves are designed to be cost-effective, portable,
and provide intelligent gesture recognition abilities to bridge the communication gap between
hearing-impaired individuals and the general public. It is the need to bridge this gap cost-effectively on a global scale, while supporting equality, that makes the subject matter of this research so important.
Smart gloves have a range of applications in the field of sign language interpretation. One of the most important is real-time translation: by using sensors embedded in the gloves to capture hand movements and gestures, these devices can translate sign language into written text or spoken language in real time. Research by Hu et al. (2019) has shown that smart gloves equipped with machine learning algorithms are highly effective in accurately translating American Sign Language (ASL) gestures.
Accessibility and Inclusion: Smart gloves can also enhance accessibility and promote inclusion
for the deaf and hard of hearing community. By providing real-time translation of sign language,
these devices make communication between sign language users and non-signers seamless in
various settings, including educational institutions, workplaces, and public spaces (Li et al.,
2020).
Furthermore, smart gloves serve as valuable assistive-technology tools for individuals with hearing impairments. They facilitate communication and interaction with the hearing population, and with advancements in sensor technology and machine learning algorithms, these gloves can recognize and interpret a wide range of sign language gestures, empowering users to express themselves more freely.
Smart gloves can also be used for sign language education and training purposes. By providing
instant feedback on sign language gestures, these devices help learners improve their signing
skills and enhance their proficiency in sign language communication (Chen et al., 2020).
Understanding the challenges and limitations associated with any technology is crucial for its improvement and adoption.
Sensor Accuracy and Reliability:
One of the primary challenges in smart glove technology is ensuring the accuracy and reliability of the sensors used for gesture recognition and hand tracking. Factors such as sensor drift, noise interference, and calibration issues can affect the precision of the hand movements captured by the gloves (Johnston et al., 2018). Addressing these challenges requires advanced sensor calibration and filtering techniques.
Gesture Complexity:
Smart gloves are often tasked with recognizing complex hand gestures and movements, which can pose significant challenges, especially in dynamic environments. Gesture ambiguity, where similar hand movements convey different meanings, complicates the recognition process (Kumar et al.).
Power Limitations:
Power management remains a significant limitation in smart glove technology, with limited battery life hindering prolonged use. The integration of multiple sensors, wireless communication modules, and onboard processing units contributes to high power consumption (Park et al., 2019). Addressing this challenge requires optimizing hardware components and processing algorithms to balance power consumption and performance.
Ergonomic Design:
Designing smart gloves that are ergonomic and comfortable to wear for extended periods
presents a significant challenge. Factors such as glove size, weight distribution, and material
choice influence user comfort and usability (Wang et al., 2019). Balancing the need for structural
integrity with flexibility and comfort requires careful consideration during the design and
development process.
Cost and Accessibility:
The high cost of smart gloves limits their accessibility, particularly in the healthcare and education sectors, where affordability is critical. The integration of high-quality sensors, processors, and communication modules contributes to the overall cost of smart gloves.
Addressing the challenges and limitations associated with smart glove technology is essential to
unlock its full potential across various applications. By leveraging advancements in sensor
technology, machine learning algorithms, and energy-efficient design principles, researchers and
engineers can overcome these challenges and pave the way for innovative and impactful smart
glove solutions.
CHAPTER THREE
METHODOLOGY
3.1 INTRODUCTION
In recent times, technology has enabled impaired individuals to live regular, healthy lives alongside others. This chapter details the methods used to actualize the research objectives, such as bridging the gap between speech-impaired and non-speech-impaired persons by designing a hand glove equipped with sensors. To achieve this, software and hardware tools will be used to design and structure the device, and data collection will draw on publicly available datasets, such as the Word-Level American Sign Language (WLASL) dataset or other American Sign Language corpora.
A. Hardware Requirements
• Micro-USB cable
• Battery
• Glove
• Connecting wires
• Power bank
• Solderless breadboard (full size)
• Resistor, 10 kΩ
• Soldering iron
• Lead-free solder wire
• Hot glue
Transmission of Output:
The output so produced is sent to the Android application as text and voice using the Bluetooth connection between the glove and the Android mobile, which enables wireless communication.
Five flex sensors, one for each finger, detect the gesture based on finger bending, while an accelerometer detects hand movement. The accelerometer used is the ADXL335, a small, thin, low-power device suitable for measuring acceleration along all three axes. It has a minimum full-scale range of ±3 g, consumes little power, and outputs analog voltages proportional to the measured acceleration.
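To make the analog output concrete, the sketch below shows one way a raw 10-bit ADC reading from an ADXL335 axis could be converted to an acceleration in g. The zero-g offset and counts-per-g values are assumptions that would be found by calibrating the real circuit, not figures from this project; with a 5 V ADC reference and the datasheet's typical 330 mV/g sensitivity, 1 g corresponds to roughly 67 ADC counts.

```cpp
#include <cassert>
#include <cmath>

// Convert a raw 10-bit ADC reading from one ADXL335 axis into an
// acceleration in g. zeroGCounts is the reading measured with the axis
// at 0 g (found by calibrating at rest); countsPerG is the change in
// ADC counts produced by 1 g of acceleration.
float adcToG(int raw, float zeroGCounts, float countsPerG) {
    return (raw - zeroGCounts) / countsPerG;
}

// With a 5 V ADC reference, 1023 full-scale counts, and the ADXL335's
// typical 0.33 V/g sensitivity: 0.33 / 5.0 * 1023 ~= 67.5 counts per g.
float typicalCountsPerG() {
    return 0.33f / 5.0f * 1023.0f;
}
```

In the actual glove firmware these values would come from `analogRead()` on the X, Y, and Z pins; the per-axis g values are what the gesture logic compares against its recorded ranges.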
HC-05 Bluetooth Module
For data transmission from the microcontroller to a smartphone, a Bluetooth module is used. The HC-05 has an on-board antenna, can operate with a variety of phones and Bluetooth adapters, and acts as a transparent serial port. Its operating voltage is 3.0 to 3.6 V.
The components used in the glove have already been discussed above. The flex sensor responds to a finger's motion with a change in resistance that indicates how much the finger bends. The hand's linear motions along the X, Y, and Z axes are measured by the accelerometer, which produces values for X, Y, and Z corresponding to the movement along these axes.
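The resistance change is typically read through a voltage divider built from the 10 kΩ resistor in the parts list. The following is a minimal sketch of that arrangement; the divider orientation and component values are assumptions for illustration, not the project's verified wiring.

```cpp
#include <cassert>
#include <cmath>

// Assume the flex sensor forms the lower leg of a divider with a fixed
// resistor rFixed on top, so Vout = Vin * Rflex / (Rflex + rFixed).
// Given a 10-bit ADC reading of Vout (0..1023 spanning 0..Vin), recover
// the sensor's resistance; more bend means higher resistance, which
// means a higher ADC reading under this wiring.
float flexResistance(int adc, float rFixed) {
    float ratio = adc / 1023.0f;             // Vout / Vin
    return rFixed * ratio / (1.0f - ratio);  // solve the divider for Rflex
}
```

In practice the firmware can skip the resistance calculation entirely and threshold the raw ADC counts directly, since the mapping is monotonic.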
The Arduino Nano then processes all of the sensor data, combining the sensor outputs logically using IF/ELSE conditions with AND and OR operators to match the final output against previously recorded values for the various signs corresponding to the alphabet. For this, based on data gathered from repeated measurements, suitable ranges are established for every letter of the alphabet and for the words that can be identified with a single hand.
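The range-matching logic described above can be sketched as a plain C++ function. This is an illustrative reconstruction, not the project's actual firmware: the `GestureRange` structure and the ranges used in testing are invented placeholders standing in for the project's calibrated measurements.

```cpp
#include <array>
#include <cassert>
#include <vector>

// One calibrated sign: a letter plus an inclusive [lo, hi] reading range
// for each of the five flex sensors.
struct GestureRange {
    char letter;
    std::array<int, 5> lo;
    std::array<int, 5> hi;
};

// Walk the table in order (the IF/ELSE chain); a reading matches a sign
// only if every finger falls inside that sign's range (the AND logic).
char classify(const std::array<int, 5>& reading,
              const std::vector<GestureRange>& table) {
    for (const GestureRange& g : table) {
        bool match = true;
        for (int i = 0; i < 5; ++i) {
            if (reading[i] < g.lo[i] || reading[i] > g.hi[i]) {
                match = false;
                break;
            }
        }
        if (match) return g.letter;
    }
    return '?';  // no calibrated sign matched
}
```

Using inclusive ranges rather than exact values is what absorbs the sensor noise and person-to-person variation observed during repeated measurements.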
Fig 3.7: Hardware Case Diagram
Transmission module
The Bluetooth module is responsible for sending output data to the mobile app, enabling text-to-speech conversion; it comprises the HC-05 Bluetooth module and the Arduino microcontroller. The microcontroller determines which phrase has been signed and sends that data to the HC-05 module, which forwards it to the Android device connected to it over Bluetooth.
The Arduino NANO is linked to the HC-05 Bluetooth module. Following processing, the string-
formatted data are sent to the Bluetooth module (transmitter). Additionally, the Android phone
features built-in Bluetooth functionality. Following the pairing of these two Bluetooth devices,
the string is sent to the Android application.
Through Bluetooth, the Android application gets data in bytes and transforms it into a string.
Finally, an Android smartphone text-to-speech application is used to transform the string into a
voice. For ease of use, the entire system is placed over a standard glove and has accurate hand
gesture recognition.
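On the receiving side, the byte stream arriving over Bluetooth must be reassembled into complete strings before being handed to text-to-speech. One common convention, assumed here for illustration rather than taken from the project's actual code, is to terminate each transmitted phrase with a newline:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Append incoming bytes to a running buffer; each '\n' marks the end of
// one transmitted phrase. Returns all phrases completed by this chunk,
// leaving any partial phrase in the buffer for the next call.
std::vector<std::string> feedBytes(std::string& buffer,
                                   const std::string& incoming) {
    std::vector<std::string> phrases;
    for (char c : incoming) {
        if (c == '\n') {
            phrases.push_back(buffer);
            buffer.clear();
        } else {
            buffer += c;
        }
    }
    return phrases;
}
```

Framing of this kind matters because Bluetooth serial delivers data in arbitrary-sized chunks, so a phrase may arrive split across several reads.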
FLOW CHART OF SMART GLOVES (TRANSMITTING AND RECEIVING)