Figure 1. Finger Angle-Based HGR for Smart Infrastructure Using Wearable Wrist-Worn Camera
Collectively, these research papers and articles offer a comprehensive understanding of the state-of-the-art in HGR, highlighting the advancements, challenges, and potential future directions. The insights gained from these works will be instrumental in driving further innovation and enhancing the effectiveness of HGR systems in various domains.
Figure 2. Examples of gestures programmed into BMW Series 7 cars
difficult to develop HGR systems that can accurately and reliably recognize a wide range of hand gestures. [6] [7] [8] [9] [10]

Occlusion: When hands are partially or fully occluded, it can be difficult for HGR systems to recognize them correctly. This is a particular challenge for wearable HGR systems, as the user's own body can often occlude their hands. [8] [9] [10]

Illumination: HGR systems are typically affected by changes in illumination. This can be caused by a variety of factors, such as electrical interference from other devices or environmental factors such as dust and smoke. [11] [12] [13]

Real-time performance: For many applications, such as sign language interpretation and virtual reality, it is important for HGR systems to be able to perform in real time. This is a challenging task, as HGR algorithms can be computationally expensive. [6] [7] [11] [12]
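The real-time constraint can be made concrete: at 30 frames per second, a recognizer has roughly 33 ms to process each frame. A minimal sketch of checking that budget is shown below; the `recognize` function is a hypothetical stand-in for an actual HGR pipeline, not code from any of the cited systems.

```python
import time

FRAME_BUDGET_S = 1.0 / 30  # ~33 ms per frame at 30 FPS

def recognize(frame):
    """Hypothetical stand-in for a real HGR pipeline."""
    time.sleep(0.001)  # simulate 1 ms of processing
    return "swipe_left"

def meets_realtime_budget(frame, budget_s=FRAME_BUDGET_S):
    """Time one recognition call and check it fits within the frame budget."""
    start = time.perf_counter()
    recognize(frame)
    elapsed = time.perf_counter() - start
    return elapsed <= budget_s

print(meets_realtime_budget(None))  # True when processing fits in the budget
```

A deep model that needs 100 ms per frame would fail this check, which is why computational efficiency is listed as a core challenge.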
2.0 Background

Hand gesture recognition (HGR) is a technology that uses sensors to capture and interpret hand movements as commands or instructions. HGR research has a long history, dating back to the early 1960s. However, it is only in recent years that HGR systems have become practical and affordable for real-world applications. This is due to advances in sensor technology, computing power, and machine learning algorithms.

Sensor-based HGR systems use sensors to measure the movement and position of the user's hands. The hand gestures are then extracted from the sensor data using machine learning algorithms.
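The sensor-based pipeline just described (sensor readings in, gesture label out) can be sketched with a simple nearest-template classifier. The gesture names and feature values below are synthetic placeholders, not data from any system in this paper.

```python
import numpy as np

# Synthetic per-gesture template feature vectors (e.g. averaged
# accelerometer features). Gesture names are illustrative only.
templates = {
    "fist":  np.array([0.9, 0.1, 0.0]),
    "wave":  np.array([0.1, 0.8, 0.3]),
    "point": np.array([0.2, 0.2, 0.9]),
}

def classify_window(features):
    """Assign a window of sensor features to the nearest gesture template."""
    return min(templates, key=lambda g: np.linalg.norm(features - templates[g]))

print(classify_window(np.array([0.85, 0.15, 0.05])))  # fist
```

Real systems replace the template lookup with a trained machine learning model, but the overall flow — sensor data to feature vector to gesture label — is the same.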
Vision-based HGR systems are more common than sensor-based HGR systems. This is because vision-based HGR systems are more flexible and can be used to recognize a wider range of hand gestures. However, vision-based HGR systems are also more susceptible to noise and occlusion.

Sensor-based HGR systems are more robust to noise and occlusion than vision-based HGR systems. This makes them well-suited for applications where the user's hands may be partially or fully occluded, such as sign language interpretation and virtual reality. However, sensor-based HGR systems are also more restrictive and can only recognize a limited number of hand gestures.

One of the most promising recent advances in HGR is the development of wearable HGR devices. Wearable HGR devices are typically gloves or bracelets that contain sensors to measure the movement and position of the user's hands. Wearable HGR devices are more robust to noise and occlusion than vision-based HGR systems, and they can be used to recognize a wider range of hand gestures than sensor-based HGR systems.

Another promising recent advance in HGR is the use of deep learning for HGR. Deep learning algorithms are able to learn complex patterns in data, and they have been shown to be very effective for HGR.
Recent advances in HGR

In recent years, there has been significant progress in HGR research. Researchers have developed new sensor technologies, computer vision algorithms, and machine learning algorithms that have improved the accuracy, robustness, and performance of HGR systems.

2.2 Key Concepts and Key Theories:

Key concepts:

Feature extraction: The first step in HGR is to extract features from the input data. This can be done using a variety of methods, such as computer vision techniques for vision-based HGR systems and sensor data analysis for wearable HGR systems.
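As a concrete illustration of the feature extraction step, the sketch below turns raw 2-D hand landmark coordinates into a small feature vector of fingertip-to-wrist distances and an inter-finger angle. The landmark names and layout are assumptions for the example, not a standard landmark scheme from the cited work.

```python
import numpy as np

def extract_features(landmarks):
    """Turn raw 2-D hand landmarks into a small feature vector.

    `landmarks` is assumed to be a dict of (x, y) points; the point
    names here are illustrative, not a standard landmark scheme.
    """
    wrist = np.array(landmarks["wrist"], dtype=float)
    feats = []
    for name in ("thumb_tip", "index_tip"):
        tip = np.array(landmarks[name], dtype=float)
        feats.append(np.linalg.norm(tip - wrist))  # fingertip-to-wrist distance
    # Angle at the wrist between the two fingertips (radians).
    v1 = np.array(landmarks["thumb_tip"], dtype=float) - wrist
    v2 = np.array(landmarks["index_tip"], dtype=float) - wrist
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    feats.append(float(np.arccos(np.clip(cos, -1.0, 1.0))))
    return np.array(feats)

sample = {"wrist": (0, 0), "thumb_tip": (1, 0), "index_tip": (0, 1)}
print(extract_features(sample))  # distances 1.0, 1.0 and angle ≈ π/2
```

Distances and angles like these are invariant to where the hand sits in the frame, which is one reason geometric features are popular for vision-based HGR.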
Gesture classification: Once the features have been extracted, they need to be classified into different gesture categories. This can be done using a variety of machine learning algorithms, such as support vector machines (SVMs) and random forests.

Key theories:

Motion-based HGR: Motion-based HGR systems recognize gestures by tracking the trajectory of the hand and fingers.

Appearance-based HGR: Appearance-based HGR systems use the appearance of the hand to recognize gestures. This can be done by analyzing the color, texture, and depth of the hand.
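The gesture classification step can be sketched as follows. For brevity this uses a 1-nearest-neighbour classifier in place of the SVM the text mentions, so the example stays self-contained; the training vectors and gesture labels are synthetic stand-ins for extracted features.

```python
import numpy as np

# Toy labelled feature vectors (synthetic stand-ins for extracted features).
X_train = np.array([[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9]])
y_train = ["open_palm", "open_palm", "fist", "fist"]

def classify(x):
    """1-nearest-neighbour gesture classification.

    A real system would typically train an SVM or a neural network on
    many labelled examples; nearest-neighbour keeps the sketch minimal.
    """
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[int(np.argmin(dists))]

print(classify(np.array([0.12, 0.22])))  # open_palm
print(classify(np.array([0.88, 0.85])))  # fist
```

Swapping in an SVM changes only the model-fitting step; the interface — feature vector in, gesture label out — is unchanged.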
Examples of vision-based HGR systems:

The Microsoft Kinect sensor uses a combination of infrared cameras and a depth sensor to track the user's body movements, including hand gestures.

The Google Glass wearable computer uses a camera to capture images of the user's surroundings. The user can then interact with the computer by making hand gestures in front of the camera.

Examples of wearable HGR systems:

The Myo armband is a wearable device that uses electromyography (EMG) sensors to track the user's muscle activity. The user can then control devices, such as computers and drones, by making hand gestures.

The Leap Motion sensor is a device that can be placed on a desk or mounted on a wall. It uses two infrared cameras to track the user's hands and fingers in 3D space.

3.0 Literature Review:

HGR is a computer vision technique that allows computers to interpret human hand gestures.

Vision-based HGR systems use cameras to capture images or videos of the user's hands. The hand gestures are then extracted from the images or videos using computer vision algorithms.

Wearable HGR systems use sensors worn on the user's hands to capture data about the movement and position of the hands. The hand gestures are then recognized from the sensor data using machine learning algorithms.

Challenges in HGR:

HGR is a challenging task due to a number of factors, including:

Background clutter: It can be difficult to distinguish the user's hands from the background in images or videos, especially in cluttered environments. [1]

Illumination changes: Changes in lighting can affect the appearance of the user's hands, making it difficult for HGR systems to recognize gestures. [1]

Occlusion: When the user's hands are occluded by other objects, it can be difficult for HGR systems to track their movements. [1]
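The first stage of a vision-based pipeline — separating the hand from the background — can be sketched with simple intensity thresholding on a synthetic frame. This is a deliberately crude stand-in for real hand segmentation, and it shows directly why background clutter is a challenge: any bright background pixel would corrupt the result.

```python
import numpy as np

def hand_bounding_box(frame, thresh=128):
    """Segment bright "hand" pixels and return their bounding box.

    Crude intensity thresholding stands in for a real segmentation step;
    cluttered or bright backgrounds would defeat it immediately.
    """
    ys, xs = np.nonzero(frame > thresh)
    if len(ys) == 0:
        return None  # no hand found in this frame
    return (xs.min(), ys.min(), xs.max(), ys.max())

# Synthetic 8x8 grayscale frame: dark background, bright 3x3 "hand" patch.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:5, 3:6] = 200
print(hand_bounding_box(frame))  # (3, 2, 5, 4)
```

Production systems replace the threshold with learned segmentation or a detector precisely because real backgrounds, unlike this synthetic frame, are rarely uniformly dark.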
3.1 Recent advances in HGR:

Recent advances in HGR have been driven by the development of new machine learning algorithms, such as deep learning. Deep learning algorithms have been shown to be very effective at feature extraction and gesture classification.

Another recent advance in HGR is the development of new wearable sensors. Wearable sensors can be used to track the movement and position of the hands more accurately than vision-based systems. This is especially important in challenging environments, such as those with low lighting or occlusion. [3]

Here are some specific examples of recent advances in HGR:

Vision-based HGR:

Researchers at Google have developed a new vision-based HGR system that can recognize gestures in real time with high accuracy.

Researchers at Microsoft have developed a new vision-based HGR system that can recognize gestures in 3D space. The system uses two infrared cameras to track the movement of the hands and then uses a deep learning algorithm to classify the gestures.
One of the most promising recent advances in HGR is the use of deep learning algorithms. Deep learning algorithms have been shown to be very effective at learning complex patterns from data, such as the patterns that correspond to different hand gestures. This has led to the development of HGR systems that can recognize a wide range of gestures with high accuracy, even in challenging environments.

Another promising recent advance in HGR is the development of new wearable sensors. This is especially important in challenging environments, such as those with low lighting or occlusion.

Hybrid HGR systems that combine vision-based and wearable approaches are also becoming increasingly popular. Hybrid systems have the potential to overcome the limitations of both individual approaches. For example, hybrid systems can be used to recognize gestures in a wider range of environments and with higher accuracy.

Despite these recent advances, challenges remain. One challenge is the need to develop HGR systems that are more robust to noise and occlusion. Another challenge is the need to develop HGR systems that are more computationally efficient and can be run on mobile devices.

Future research in HGR is likely to focus on addressing these challenges and on developing new applications for HGR technology. For example, researchers are working on developing HGR systems that can recognize gestures in 3D space and that can be used to interact with augmented reality and virtual reality applications.

Overall, the future of HGR is very promising. As HGR systems become more accurate, robust, and versatile, we can expect to see them used in a wide range of applications, from human-computer interaction to sign language recognition to robotics.
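The hybrid approach can be sketched as simple late fusion: each modality produces per-gesture probabilities, and the system averages them before picking a label. The gesture set and probability values below are illustrative assumptions, not results from any cited system.

```python
import numpy as np

GESTURES = ["swipe", "pinch", "fist"]  # illustrative gesture set

def fuse(vision_probs, sensor_probs, w_vision=0.5):
    """Late fusion: weighted average of the two models' class probabilities."""
    p = w_vision * np.asarray(vision_probs) + (1 - w_vision) * np.asarray(sensor_probs)
    return GESTURES[int(np.argmax(p))]

# The vision model is unsure (e.g. the hand is occluded), but the wearable
# sensor model is confident, so the fused decision recovers "fist".
print(fuse([0.4, 0.35, 0.25], [0.05, 0.05, 0.9]))  # fist
```

This is exactly how a hybrid system can outperform either modality alone: when one sensor is degraded (occlusion for the camera, electrical noise for the wearable), the other's confident prediction dominates the average.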
4.1 Limitations of current systems

Cost: Wearable HGR systems can be expensive, especially those that use high-end sensors. This can make wearable HGR systems inaccessible to some users.

Complexity: Hybrid HGR systems are more complex than vision-based or wearable HGR systems. This is because they need to combine data from both vision and wearable sensors. This complexity can make hybrid HGR systems more difficult to develop and maintain.

Usability: Some HGR systems are difficult to use and require users to learn complex gestures. This can make them inaccessible to some users, such as the elderly or disabled.

4.2 Areas for future research: Current HGR systems are not always accurate or robust in challenging environments, such as those with low lighting or occlusion. Future research should focus on developing new algorithms and techniques that work in these conditions without sacrificing accuracy or performance.

Quotations or paraphrased content from the reference papers/articles:

"One of the main challenges in HGR is the development of systems that are robust to noise and occlusion." [4]

"Another challenge in HGR is the development of systems that are computationally efficient." [5]