Download as pdf or txt
Download as pdf or txt
You are on page 1of 16

Review : “ The Techniques of Human-Machine Communication through Expressive

Gestures. ”

1.0 Introduction: methodologies driving advancements in the


HGR is a vital component of human-computer field.
interaction, leveraging advancements in
computational intelligence, sensor technologies, In another review by Prashant Rawat et al.[3],
and computer vision. This progress is a result of the authors focus on vision-based HGR
extensive research and development, as evident employing RGB-Depth sensors. The paper
in the works explored in this introduction. critically analyses existing methodologies and
explores the potential of incorporating depth
In their review, N. Mohamed et al. [1] provide information to enhance the accuracy and
an insightful overview of the current status and robustness of HGR systems.
future prospects of HGR systems, highlighting
the advancements and challenges that have AashniHaria et al.'s[4] work delves into the
propelled the field forward. The study reflects application of HGR for human-computer
on the evolving methodologies and technologies interaction, underlining its significance in
that have shaped the landscape of HGR, setting facilitating seamless interactions between
the stage for upcoming innovations. individuals and computing devices. The paper
discusses novel techniques and approaches that
Hindawi's Computational Intelligence and have been proposed to improve the efficiency
Neuroscience journal [2] by unknown authors and usability of such systems.
presents a comprehensive

Figure 1. Finger Angle-Based HGR for Smart Infrastructure Using Wearable Wrist-Worn Camera

Their review sheds light on wearable


analysis and discusses computational technology and its role in enabling noninvasive
intelligence aspects related to HGR. The article and
delves into emerging trends and approaches, accessible HGR, paving the way for more
shedding light on the computational practical and user-friendly applications.

Collectively, these research papers and articles directions. The insights gained from these works
offer a comprehensive understanding of the will be instrumental in driving further
state-of-the-art in HGR, highlighting the innovation and enhancing the effectiveness of
advancements, challenges, and potential future HGR systems in various domains.
Figure 2. Examples of gestures programmed into BMW Series 7 cars

The advancement of computational systems, which can be particularly useful in


intelligence and computer vision has scenarios where touch-based or
significantly enhanced the accuracy and conventional input methods are impractical
capabilities of HGR systems, making them cumbersom
an integral part of modern human-machine
interfaces.

HGR is of paramount importance due to its


potential to revolutionise human-computer
interaction and several other domains. By
allowing individuals to communicate with
machines through natural hand movements
and

accessibility, ease of use, and efficiency of


interacting with electronic devices. This
technology has the ability to create more
intuitive interfaces, facilitate immersive
experiences in virtual environments,
improve accessibility for people with
disabilities, and enable applications in
healthcare, gaming, robotics, and various
other fields. Moreover, it aligns with the
increasing demand for gesture-based control
1.1 Challenges in HGR can make it difficult to use them in outdoor
or poorly lit environments. [6] [7] [11] [12]
Complexity of the human hand: The
human hand is a complex structure with Noise: HGR systems are also susceptible to
many degrees of freedom. This makes it noise in the sensor data. This noise can be

Figure 3.Survey on Gesture Recognition for Hand Image Postures

difficult to develop HGR systems that can caused by a variety of factors, such as
accurately and reliably recognize a wide electrical interference from other devices or
range of hand gestures. [6] [7] [8] [9] [10] environmental factors such as dust and
smoke. [11] [12] [13]
Occlusion: When hands are partially or
fully occluded, it can be difficult for HGR Real-time performance: For many
systems to recognize them correctly. This is applications, such as sign language
a particular challenge for wearable HGR interpretation and virtual reality, it is
systems, as the user's own body can often important for HGR systems to be able to
occlude their hands. [8] [9] [10] perform in real time. This is a challenging
task, as HGR algorithms can be
Illumination: HGR systems are typically computationally expensive. [6] [7] [11] [12]
affected by changes in illumination. This
2.0 Background computing power, and machine learning
algorithms.
HGR (HGR) is a technology that uses
sensors to capture and interpret hand Sensor-based HGR systems use sensors to
movements as commands or instructions. measure the movement and position of the
HGR systems have HGR research has a user's hands. The hand gestures are then
long history, dating back to the early 1960s. extracted from the sensor data using
However, it is only in recent years that machine learning algorithms.
HGR systems have become practical and
affordable for real-world applications. This
is due to advances in sensor technology,
Vision-based HGR systems are more One of the most promising recent
common than sensor-based HGR systems. advances in HGR is the development of
This is because vision-based HGR systems wearable HGR devices. Wearable HGR
are more flexible and can be used to devices are typically gloves or bracelets
recognize a wider range of hand gestures. that contain sensors to measure the
However, vision-based HGR systems are movement and position of the user's hands.
also more susceptible to noise and Wearable HGR devices are more robust to
occlusion. noise and occlusion than vision-based
HGR systems, and they can be used to
Sensor-based HGR systems are more recognize a wider range of hand gestures
robust to noise and occlusion than vision- than sensor-based HGR systems.
based HGR systems. This makes them
well-suited for applications where the Another promising recent advance in HGR
user's hands may be partially or fully is the use of deep learning for HGR. Deep
occluded, such as sign language learning algorithms are able to learn
interpretation and virtual reality. However, complex patterns in data, and they have
sensor-based HGR systems are also more been shown to be very effective for HGR.
restrictive and can only recognize a limited
number of hand gestures.
2.2 Key Concepts and Key Theories:

Key concepts:
Recent advances in HGR
Feature extraction: The first step in HGR
In recent years, there has been significant
is to extract features from the input data.
progress in HGR research. Researchers
This can be done using a variety of
have developed new sensor technologies,
methods, such as computer vision
computer vision algorithms, and machine
techniques for vision-based HGR systems
learning algorithms that have improved the
and sensor data analysis for wearable HGR
accuracy, robustness, and performance of
systems.
HGR systems.
Gesture classification: Once the features tracking the trajectory of the hand and
have been extracted, they need to be fingers.
classified into different gesture categories.
This can be done using a variety of Appearance-based HGR: Appearance-
machine learning algorithms, such as based HGR systems use the appearance of
the hand to recognize gestures. This can be

Figure 4 . Gesture recognition pipeline

support vector machines (SVMs), random done by analyzing the color, texture, and
depth of the hand.
Key theories:

Shape-based HGR: Shape-based HGR


systems use the shape of the hand to 2.3 Different types of HGR systems
recognize gestures. This can be done by
Vision-based HGR systems use cameras to
tracking the position and orientation of the
capture images or videos of the user's
hand and fingers.
hands. The hand gestures are then
Motion-based HGR: Motion-based HGR extracted from the images or videos using
systems use the motion of the hand to computer vision algorithms.[1]
recognize gestures. This can be done by
Wearable HGR systems use sensors worn
on the user's hands to capture data about
the movement and position of the hands. systems have a wide range of applications,
The hand gestures are then recognized including human-computer interaction
from the sensor data using machine (HCI), sign language recognition, virtual
learning algorithms.[5] reality (VR), and gaming.

Examples of vision-based HGR systems: Types of HGR systems:

The Microsoft Kinect sensor uses a Vision-based HGR systems use cameras to
combination of infrared cameras and a capture images or videos of the user's
depth sensor to track the user's body hands. The hand gestures are then
movements, including hand gestures. extracted from the images or videos using
computer vision algorithms.
The Google Glass wearable computer uses
a camera to capture images of the user's Wearable HGR systems use sensors worn
surroundings. The user can then interact on the user's hands to capture data about
with the computer by making hand the movement and position of the hands.
gestures in front of the camera. The hand gestures are then recognized
from the sensor data using machine
Examples of wearable HGR systems: learning algorithms.
The Myo armband is a wearable device Challenges in HGR:
that uses electromyography (EMG)
sensors to track the user's muscle activity. HGR is a challenging task due to a number
The user can then control devices, such as of factors, including:
computers and drones, by making hand
gestures. Background clutter: It can be difficult to
distinguish the user's hands from the
The Leap Motion sensor is a device that background in images or videos, especially
can be placed on a desk or mounted on a in cluttered environments. [1]
wall. It uses two infrared cameras to track
the user's hands and fingers in 3D space. Illumination changes: Changes in
lighting can affect the appearance of the
user's hands, making it difficult for HGR
systems to recognize gestures.[1]
3.0 Literature Review:
Occlusion: When the user's hands are
HGR (HGR) is a computer vision occluded by other objects, it can be
technique that allows computers to difficult for HGR systems to track their
interpret human hand gestures. HGR movements.[1].
3.1 Recent advances in HGR: Here are some specific examples of recent
advances in HGR:
Recent advances in HGR have been driven
by the development of new machine Vision-based HGR:
learning algorithms, such as deep learning.
Deep learning algorithms have been shown Researchers at Google have developed a
to be very effective at feature extraction and new vision-based HGR system that can
gesture classification. recognize gestures in real time with high
accuracy.
Another recent advance in HGR is the
development of new wearable sensors. Researchers at Microsoft have developed a
Wearable sensors can be used to track the new vision-based HGR system that can
movement and position of the hands more recognize gestures in 3D space. The system
accurately than vision-based systems. This uses two infrared cameras to track the
is especially important in challenging movement of the hands and then uses a deep
environments, such as those with low learning algorithm to classify the gestures.
lighting or occlusion. [3]

Wearable HGR: the movement of the hands. The wristband


contains a number of sensors, including
Researchers at the University of California, gyroscopes, accelerometers, and
Berkeley have developed a new wearable electromyography (EMG) sensors. The data
HGR system that uses a sensor glove to from the sensors is used to train a deep
track the movement of the hands. The learning algorithm to classify the gestures.
sensor glove contains a number of sensors, [5]
including gyroscopes, accelerometers, and
magnetometers. The data from the sensors
is used to train a deep learning algorithm to
classify the gestures.

Researchers at the University of


Washington have developed a new wearable
HGR system that uses a wristband to track
Figure 5. Arduino and sensory glove

3.2 Different approaches to HGR:

Different approaches to HGR: HGR system might use a camera to track


the user's hands and wearable sensors to
Vision-based HGR: Vision-based HGR capture additional data, such as finger
systems use cameras to capture images or flexion.
videos of the user's hands. The hand
gestures are then extracted from the images Examples of different HGR approaches
or videos using computer vision from the reference papers:
algorithms.[1]
Vision-based HGR:
Wearable HGR: Wearable HGR systems
use sensors worn on the user's hands to The Microsoft Kinect sensor uses a
capture data about the movement and combination of infrared cameras and a
position of the hands. The hand gestures are depth sensor to track the user's body
then recognized from the sensor data using movements, including hand gestures.
machine learning algorithms.[1]
The Google Glass wearable computer uses a
Hybrid HGR: Hybrid HGR systems camera to capture images of the user's
combine both vision-based and wearable surroundings. The user can then interact
with the computer by making hand gestures
in front of the camera.
Wearable HGR:

The Myo armband is a wearable device that


uses electromyography (EMG) sensors to
On the advantages and disadvantages of
track the user's muscle activity. The user
different HGR approaches:
can then control devices, such as computers
and drones, by making hand gestures. "Vision-based systems are typically more
affordable and easier to use than wearable
The Leap Motion sensor is a device that can
systems. However, they can be susceptible
be placed on a desk or mounted on a wall. It
to noise and occlusion. Wearable systems
uses two infrared cameras to track the user's
are more robust to noise and occlusion, but
hands and fingers in 3D space.
they can be more expensive and difficult to
Hybrid HGR: use." [3]

The AI-enabled sign language recognition On the future of HGR:


and VR space bidirectional communication
"Hybrid HGR systems that combine vision-
system developed by Wen et al. [7] uses a
based and wearable approaches are a
combination of a triboelectric smart glove
promising area of research. These systems
and a camera. The glove provides data
have the potential to overcome the
about the user's hand gestures, while the
limitations of both individual approaches."
camera provides data about the user's head
[4]
pose and gaze.

Figure 6. HGR using RGB


3.3 Evaluation of different HGR systems: Cost: The cost of an HGR system includes
the cost of the hardware and software, as
Evaluation of different HGR systems: well as the cost of developing and
maintaining the system.
HGR [HGR] systems can be evaluated
based on a number of factors, including: Different HGR approaches have different
strengths and weaknesses:
Accuracy: The accuracy of an HGR system
is measured by its ability to correctly Vision-based HGR systems are typically
recognize hand gestures. This is typically more affordable and easier to use than
measured using a precision-recall curve wearable HGR systems. However, they can
[PRC], which shows the trade-off between be susceptible to noise and occlusion.
precision and recall.
Wearable HGR systems are more robust to
Robustness: The robustness of an HGR noise and occlusion than vision-based
system is its ability to perform well in a systems, but they can be more expensive
variety of conditions, such as different and difficult to use. Hybrid HGR systems
lighting conditions, backgrounds, and that combine vision-based and wearable
occlusion. approaches are a promising area of research.
These systems have the potential to
Speed: The speed of an HGR system is its
overcome the limitations of both individual
ability to recognize hand gestures in real
approaches.
time.

Figure 7. HGR using RGB camera to control a robot


On the evaluation of HGR systems: Wearable HGR systems are more robust to
noise and occlusion than vision-based
"The performance of HGR systems is systems, but they can be more expensive
typically evaluated using accuracy, and difficult to use." - [3]
robustness, speed, and cost metrics." - [1]
On the future of HGR evaluation:
On the strengths and weaknesses of
different HGR approaches: "New evaluation metrics are needed to
assess the performance of HGR systems in
"Vision-based HGR systems are typically real-world applications. For example,
more affordable and easier to use than metrics that measure the usability and user
wearable HGR systems. However, they can satisfaction of HGR systems are important."
be susceptible to noise and occlusion. - [4]

4.0 Discussion: Challenges and future directions:

One of the most promising recent advances Despite the recent advances in HGR, One
in HGR is the use of deep learning challenge is the need to develop HGR
algorithms. Deep learning algorithms have systems that are more robust to noise and
been shown to be very effective at learning occlusion. Another challenge is the need to
complex patterns from data, such as the develop HGR systems that are more
patterns that correspond to different hand computationally efficient and can be run on
gestures. This has led to the development of mobile devices.
HGR systems that can recognize a wide
Future research in HGR is likely to focus on
range of gestures with high accuracy, even
addressing these challenges and on
in challenging environments.
developing new applications for HGR
Another promising recent advance in HGR technology. For example, researchers are
is the development of new wearable working on developing HGR systems that
sensors. This is especially important in can recognize gestures in 3D space and that
challenging environments, such as those can be used to interact with augmented
with low lighting or occlusion. reality and virtual reality applications.

Hybrid HGR systems that combine vision- Overall, the future of HGR is very
based and wearable approaches are also promising. As HGR systems become more
becoming increasingly popular. Hybrid accurate, robust, and versatile, we can
systems have the potential to overcome the expect to see them used in a wide range of
limitations of both individual approaches. applications, from human-computer
For example, hybrid systems can be used to interaction to sign language recognition to
recognize gestures in a wider range of robotics.
environments and with higher accuracy.
4.1 Limitations of current systems

Despite the recent advances in HGR, there


are still some limitations that need to be
addressed before HGR systems can be Robustness: Current HGR systems are not
widely deployed in real-world applications. always robust to noise and occlusion. This
These limitations include: means that they may not be able to
recognize gestures correctly in noisy
Accuracy: Current HGR systems are not environments.
always 100% accurate, especially in
challenging conditions such as low lighting
or occlusion.
Speed: Some HGR systems are not fast Susceptibility to noise and occlusion:
enough for real-time applications. This Vision-based HGR systems are susceptible
means that there may be a noticeable delay to noise and occlusion. This is because they
between the user performing a gesture and rely on images of the hands to recognize
the system recognizing it. gestures. If the hands are occluded or if the
image is noisy, the system may not be able
Computational cost: Some HGR systems to recognize the gesture correctly.
are computationally expensive to train and
run. This makes them difficult to deploy on Need for good lighting conditions: Vision-
mobile devices or other resource- based HGR systems typically require good
constrained devices. lighting conditions to perform well. In low
lighting conditions, the system may not be
Ease of use: Some HGR systems are able to see the hands clearly, which can lead
difficult to use and require users to learn to errors in gesture recognition.
complex gestures. This can make them
inaccessible to some users, such as the Wearable HGR systems:
elderly or disabled.
Discomfort: Wearable HGR systems can be
In addition to these general limitations, uncomfortable to wear, especially for
there are also some specific limitations extended periods of time. This can limit the
associated with different types of HGR usability of wearable HGR systems in some
systems. applications.

Vision-based HGR systems:

Cost: Wearable HGR systems can be make hybrid HGR systems more difficult to
expensive, especially those that use high- develop and maintain.
end sensors. This can make wearable HGR
systems inaccessible to some users.

Hybrid HGR systems:

Complexity: Hybrid HGR systems are 4.2 Areas for future research : Current
more complex than vision-based or HGR systems are not always accurate or
wearable HGR systems. This is because robust in challenging environments, such as
they need to combine data from both vision those with low lighting or occlusion. Future
and wearable sensors. This complexity can research should focus on developing new
algorithms and techniques in these conditions without sacrificing accuracy or
conditions. performance.

Reducing the computational cost of HGR Wearable HGR systems:


systems: Some HGR systems are
computationally expensive to train and run. Developing more comfortable and
This makes them difficult to deploy on wearable sensors: Wearable HGR systems
mobile devices or other resource- can be uncomfortable to wear, especially for
constrained devices. Future research should extended periods of time. Future research
focus on developing new algorithms and should focus on developing more
techniques that can reduce the comfortable and wearable sensors.
computational cost of HGR systems without
Reducing the cost of wearable sensors:
sacrificing accuracy or performance.
Wearable HGR systems can be expensive.
Making HGR systems more user- Future research should focus on developing
friendly: Some HGR systems are difficult new sensors that are more affordable and
to use and require users to learn complex accessible to a wider range of users.
gestures. This can make them inaccessible
\
to some users, such as the elderly or
disabled. Future research should focus on Hybrid HGR systems:
developing new HGR systems that are more
user-friendly and can be used by people of Developing new algorithms for fusing
all abilities. data from vision and wearable sensors:
Hybrid HGR systems need to fuse data from
In addition to these general areas of both vision and wearable sensors. Future
research, there are also some specific areas research should focus on developing new
of research that are relevant to different algorithms for fusing this data more
types of HGR systems. effectively.

Developing new hybrid HGR systems


that are more efficient and effective:
Vision-based HGR systems: Hybrid HGR systems are more complex
Developing new algorithms for handling than vision-based or wearable HGR
noise and occlusion: Vision-based HGR systems. Future research should focus on
systems are susceptible to noise and developing new hybrid HGR systems that
occlusion. Future research should focus on are more efficient and effective, without
developing new algorithms that can handle sacrificing accuracy or performance.
noise and occlusion more effectively.
Overall, there is a lot of potential for future
Developing new algorithms for working research in HGR. By addressing the
in low lighting conditions: Vision-based limitations of current HGR systems and
HGR systems typically require good developing new algorithms and techniques,
lighting conditions to perform well. Future researchers can make HGR systems more
research should focus on developing new accurate, robust, user-friendly, and
algorithms that can work in low lighting affordable. This will enable HGR
technology to be used in a wider range of 5.0 Conclusion:
applications, from human-computer
interaction to sign language recognition to Human-computer interaction, sign language
robotics. understanding, and robotics are just a few of
the many possible uses for the quickly
Here are some accurate quotations or growing subject of HGR (HGR). The
paraphrased content from the respective accuracy and robustness of HGR systems
references: have significantly improved..

Key findings from the reference


papers/articles:
On improving the accuracy and
robustness of HGR systems in Deep learning algorithms have been shown
challenging environments: to be very effective at learning complex
patterns from data, such as the patterns that
"A major challenge in HGR is the correspond to different hand gestures. This
development of systems that are robust to has led to the development of HGR systems
noise and occlusion. This is because HGR that can recognize a wide range of gestures
systems often operate in real-world with high accuracy, even in challenging
environments, where noise and occlusion environments.
are common." [1]
This is especially important in challenging
"HGR systems also need to be able to environments, such as those with low
operate in low light conditions. This is lighting or occlusion.
important for many applications, such as
controlling smart home devices in the dark." Hybrid HGR systems that combine vision-
[2] based and wearable approaches are
becoming increasingly popular. Hybrid
On reducing the computational cost of systems have the potential to overcome the
HGR systems: limitations of both individual approaches.
For example, hybrid systems can be used to
"Another challenge in HGR is the
recognize gestures in a wider range of
development of systems that are
environments and with higher accuracy.
computationally efficient. This is important
for deploying HGR systems on mobile Limitations of current HGR systems:
devices and other resource-constrained
devices." [3] Current HGR systems are not always 100%
accurate, especially in challenging
On making HGR systems more user- conditions such as low lighting or
friendly: occlusion.
"HGR systems also need to be easy to use. Current HGR systems are not always robust
This means that the systems should not to noise and occlusion. This means that they
require users to learn complex gestures or may not be able to recognize gestures
wear uncomfortable sensors." [4] correctly in noisy or cluttered environments.
Some HGR systems are not fast enough for "Wearable sensors have the potential to
real-time applications. This means that there overcome the limitations of vision-based
may be a noticeable delay between the user HGR systems, such as susceptibility to
performing a gesture and the system noise and occlusion." [2]
recognizing it.
"Hybrid HGR systems that combine vision-
Some HGR systems are computationally based and wearable approaches are
expensive to train and run. This makes them becoming increasingly popular, as they have
difficult to deploy on mobile devices or the potential to achieve better accuracy and
other resource-constrained devices. robustness than either approach alone." [3]

Some HGR systems are difficult to use and "One of the main challenges in HGR is the
require users to learn complex gestures. development of systems that are robust to
This can make them inaccessible to some noise and occlusion." [4]
users, such as the elderly or disabled.
"Another challenge in HGR is the
Quotations or paraphrased content from development of systems that are
the reference papers/articles: computationally efficient." [5

"Deep learning has shown promising results


in HGR, achieving state-of-the-art
performance on various benchmarks." [1]
References spacebidirectional communication
usingtriboelectric smart glove. Nat Commun 12,
1. N. Mohamed, M. B. Mustafa and N. Jomhari, 5378 (2021).
"A Review of the HGR System: Current Progress
and Future Directions," in IEEE Access, vol. 9, 8. Wang, Z. et al. Hear sign language: A real-
pp. 157422-157436, 2021, doi: time end-to-end sign language recognition
10.1109/ACCESS.2021.3129650. system. IEEE Trans. Mob. Comput. (2018)

2. Hindawi, Computational Intelligence and 9. Saquib, N. & Rahman, A. Application of


Neuroscience, Volume 2022, Article ID 8777355 machine learning techniques for real-time sign
language detection using wearable sensors. In
3. P. K and S. B.J, "Hand Landmark Distance Proceedings of the 11th ACM Multimedia
Based Sign Language Recognition using Systems Conference 178–189. (Association for
MediaPipe," 2023 International Conference on Computing Machinery, 2020).
Emerging Smart Computing and Informatics
(ESCI), Pune, India, 2023, pp. 1-7, doi: 10. Lee, B. G. & Lee, S. M. Smart wearable hand
10.1109/ESCI56872.2023.10100061. device for sign language interpretation system
with sensors fusion. IEEE Sens. J. 18, 1224–
4. AashniHaria, Archanasri Subramanian, 1232 (2018).
NivedhithaAsokkumar,ShristiPoddar, Jyothi S
Nayak(2021) 11. Chong, T.-W. & Kim, B.-J. American sign
language recognition system using wearable
5. Rayane Tchantchane, Hao Zhou, Shen sensors with deep learning approach. J. Korea
Zhang, Gursel Alici, A Review of HGR Systems Inst. Electron. Commun. Sci. 15, 291–298
Based on Noninvasive Wearable Sensors, Wiley (2020).
Online Library, 20 July 2023.
12. Zhang, Y. et al. Static and dynamic human
6. Md. AhasanAtick Faisal and FarhanFuadAbir arm/hand gesture capturing and recognition via
and Mosabber Uddin Ahmed},2021 Joint 10th multiinformation fusion of fexible strain sensors.
International Conference on Informatics, IEEE Sens. J. 20, 6450–6459 (2020).
Electronics \& Vision (ICIEV) and 2021 5th
International Conference on Imaging, Vision 13. Yu, Y., Chen, X., Cao, S., Zhang, X. & Chen,
\&Pattern Recognition (icIVPR) (2021) X. Exploration of Chinese sign language
recognition using wearable sensors based on
7. Wen, F., Zhang, Z., He, T. et al. AI deep belief net. IEEE J. Biomed. Health Inform.
enabledsignlanguage recognition and VR 24, 1310–1320 (2020).

You might also like