CHAPTER 1.0
INTRODUCTION

In recent years, the convergence of artificial intelligence (AI) and biomechanics has
sparked innovative advancements in the development of wearable technologies,
particularly in the realm of exoskeletons. The integration of AI, specifically Long Short-
Term Memory (LSTM) networks, with Inertial Measurement Unit (IMU)-based Human
Activity Recognition (HAR) and Payload Estimation for Low-Back Exoskeletons has
become a focal point in the quest to enhance human performance, reduce ergonomic strain,
and mitigate the risk of musculoskeletal injuries.

The significance of this AI-driven approach lies in its potential to revolutionize the field of
human augmentation and rehabilitation. Exoskeletons, wearable robotic devices designed
to augment and support human physical capabilities, have shown promise in various
domains, from assisting individuals with mobility impairments to enhancing the efficiency
and safety of industrial workers. By incorporating LSTM-based AI models with IMU data,
we aim to unlock a new level of adaptability and responsiveness in low-back exoskeletons.

Exoskeletons are mechanical structures worn by individuals that mirror the human skeletal
system. These wearable devices are equipped with sensors, actuators, and often AI
algorithms to amplify or assist human movements. Low-back exoskeletons specifically
target the lumbar region, offering support and augmentation for tasks involving lifting,
bending, and maintaining a stable posture. The potential applications span industries such
as manufacturing, healthcare, and military, where the alleviation of physical strain can
enhance overall performance and well-being.

The cornerstone of our approach involves the use of LSTM networks, a type of recurrent
neural network (RNN) known for its ability to capture long-range dependencies in
sequential data. The LSTM architecture is particularly well-suited for time series analysis,
making it an ideal candidate for processing IMU data collected from the human body during
various activities. By leveraging this deep learning methodology, our AI model aims to
discern intricate patterns in movement, facilitating accurate Human Activity Recognition.

The integration of IMU-based Payload Estimation adds a further layer of sophistication to the model. Through the analysis of acceleration data, we seek to estimate the load borne by the exoskeleton wearer, allowing for real-time adjustments in the level of assistance provided. This dynamic adaptation enhances the exoskeleton's efficiency, ensuring optimal support tailored to the wearer's activities and physical state.

In conclusion, the fusion of LSTM-based AI models with IMU-driven data holds great
promise for the advancement of low-back exoskeleton technology. This research not only
contributes to the understanding of human-machine interaction but also opens avenues for
the development of intelligent, adaptive exoskeletons capable of enhancing human
performance and mitigating the physical toll associated with demanding tasks. As we delve
into this interdisciplinary exploration, the potential for transformative impact on both
industrial and rehabilitative landscapes becomes increasingly apparent.

CHAPTER 2.0
LITERATURE REVIEW
PAPER 1:
Title: "Deep learning for human activity recognition: A resource-efficient implementation
on low-power devices". Authors: Ronao, C.A., Cho, S.B. Published in: Sensors (Basel),
2017.

Summary: This paper explores the implementation of deep learning techniques for human
activity recognition, specifically focusing on a resource-efficient approach suitable for low-
power devices. The authors investigate the effectiveness of deep learning models in
recognizing activities using sensor data and discuss the implications for applications with
limited computational resources.

PAPER 2:
Title: "Human activity recognition using wearable sensors by deep convolutional neural
networks". Authors: Ronao, C.A., Cho, S.B. Published in: Future Generation Computer
Systems, 2016.

Summary:

The paper investigates the use of deep convolutional neural networks (CNNs) for human
activity recognition based on data from wearable sensors. The authors explore the
effectiveness of CNNs in extracting features from sensor data, providing insights into the
application of deep learning techniques for accurate and robust activity recognition in
various contexts.

PAPER 3:
Title: "Deep learning for sensor-based activity recognition: A survey".

Authors: Reyes-Ortiz, J.L., Oneto, L., Samà, A., Parra, X., Carrault, G., Cabestany, J., Rodríguez-Martínez, D. Published in: Pattern Recognition, 2016.

Summary: Focused on wearable and wireless bio-signal monitoring, this paper surveys
applications involving EEG and ECG signals. It explores the integration of these signals
into health informatics systems, offering insights into the potential of wearable
technologies for health monitoring and discusses the challenges and opportunities in this
emerging field.

2.1 Existing System

Existing systems in the realm of human activity recognition (HAR) and low-back
exoskeletons predominantly leverage wearable technologies equipped with Inertial
Measurement Units (IMUs) to capture and analyze human movement data. These systems
often employ advanced signal processing techniques and machine learning algorithms to
interpret the information gathered from sensors. Recognizing the significance of accurate
activity recognition, researchers have explored various approaches, including deep learning
architectures like Long Short-Term Memory (LSTM) networks, convolutional neural
networks (CNNs), and hybrid models. Additionally, efforts have been directed toward
developing robust exoskeletons designed to augment the lower back, with many systems
incorporating sophisticated control mechanisms and AI-driven algorithms for real-time
adaptation to user activities. The integration of AI models with IMU data has played a
pivotal role in enhancing the adaptability and effectiveness of these exoskeletons, fostering
the development of intelligent, user-centric systems capable of providing personalized
support and reducing the risk of musculoskeletal injuries. As the field continues to evolve,
the synergy between AI and wearable technologies holds promise for creating more
responsive and efficient systems for human augmentation and rehabilitation.

2.2 Problem Statement


The problem statement in the context of human activity recognition (HAR) and low-back
exoskeletons centers around the need for advanced systems that can accurately interpret
human movements and provide optimal support through wearable technologies. Despite
significant advancements in wearable sensors and exoskeleton designs, challenges persist
in achieving precise and context-aware recognition of diverse human activities, particularly
those involving the lower back. Existing systems may face limitations in adaptability, real-
time responsiveness, and personalized support, which are crucial for ensuring user comfort
and preventing musculoskeletal injuries. The integration of AI models, such as Long Short-
Term Memory (LSTM) networks, with Inertial Measurement Unit (IMU) data offers a
potential solution, but the effectiveness of these systems needs to be thoroughly explored
and refined. The problem at hand involves refining and advancing the capabilities of HAR
and low-back exoskeleton systems to create intelligent, user-centric solutions that
seamlessly integrate with daily activities, promoting enhanced user experience and overall
well-being.

2.3 Proposed System


The proposed systems aim to address the challenges in human activity recognition
(HAR) and low-back exoskeletons by leveraging advanced technologies, particularly
integrating Inertial Measurement Units (IMUs) with Long Short-Term Memory (LSTM)
networks. The core features of the proposed systems include:
Enhanced Activity Recognition Models: Develop and implement advanced LSTM-based
deep learning models for human activity recognition. These models should be capable of
efficiently processing time-series data from IMUs, capturing complex patterns in human
movements, and accurately classifying various activities with a focus on lower back-related
tasks.
Real-time Adaptability: Incorporate real-time adaptability mechanisms into the proposed
systems. Utilize the capabilities of LSTM networks to dynamically adjust the recognition
and support parameters based on the continuously evolving patterns in the user's
movements. This ensures that the system remains responsive and adaptable to different
activities and user conditions.
Personalized Support: Introduce personalized support features by integrating user-specific
data into the LSTM models. This could include considering individual biomechanical
characteristics, preferences, and historical activity patterns to tailor the assistance provided
by the low-back exoskeleton. The goal is to enhance user comfort and optimize support
based on the unique needs of each wearer.
Integration of AI in Exoskeleton Control: Embed AI-driven algorithms in the control
mechanisms of low-back exoskeletons. These algorithms should utilize LSTM predictions
to inform the exoskeleton's actuators for seamless coordination with the wearer's
movements. This integration aims to create a symbiotic relationship between the wearer
and the exoskeleton, enhancing overall performance and reducing the risk of strain or
discomfort.
User Interface and Feedback: Develop user-friendly interfaces that provide real-time
feedback to both the wearer and caregivers. Visualizations and alerts can inform users about
the recognized activities, estimated payload, and the exoskeleton's current mode of
operation. This fosters better communication between the human and the machine, ensuring
a transparent and user-centric interaction.
Validation and Testing Protocols: Establish robust validation and testing protocols to
assess the performance and safety of the proposed systems. This includes rigorous testing
under various conditions, such as different activity scenarios, user demographics, and
environmental factors. Validation should involve both laboratory and real-world testing to
ensure the reliability and generalizability of the proposed systems.

By combining these elements, the proposed systems aim to advance the state-of-the-art in
HAR and low-back exoskeleton technology, providing more intelligent, adaptive, and user-
specific solutions for enhanced human-machine interaction and support.
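As a purely illustrative sketch of the control-integration idea described above (the activity gains, the 20 kg design load, and the function assistance_command are assumptions made for exposition, not the project's actual controller), an activity label recognized by the LSTM and an estimated payload could be mapped to an assistive-torque command on each control cycle:

# Hypothetical mapping from recognized activity and estimated payload to an assistance command.
# ACTIVITY_GAIN and the 20 kg saturation value are illustrative placeholders.
ACTIVITY_GAIN = {"lifting": 1.0, "bending": 0.6, "standing": 0.2, "walking": 0.1}

def assistance_command(activity: str, payload_kg: float, max_torque_nm: float = 40.0) -> float:
    """Scale the exoskeleton's assistive torque by recognized activity and estimated load."""
    load_factor = min(max(payload_kg, 0.0) / 20.0, 1.0)  # saturate at an assumed 20 kg design load
    return max_torque_nm * ACTIVITY_GAIN.get(activity, 0.0) * load_factor

# Example: lifting an estimated 15 kg load -> 40.0 * 1.0 * 0.75 = 30.0 Nm of assistance
print(assistance_command("lifting", 15.0))

In a full system, such a command would be recomputed each cycle as the LSTM's activity and payload predictions update, and passed to the exoskeleton's actuator controller.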

CHAPTER 3
SYSTEM REQUIREMENT SPECIFICATION
3.1 HARDWARE REQUIREMENTS:
• Processor: Intel i5, 2.1 GHz
• Storage: 100 GB
• RAM: 8 GB
3.2 SOFTWARE REQUIREMENTS:
• Platform: Windows/Linux/macOS
• Language used: Python
• Technologies used: PyTorch, pandas, NumPy, scikit-learn

CHAPTER 4
SYSTEM ARCHITECTURE

4.1 Diagram of LSTM

Figure 4.1: A single LSTM (source: https://towardsdatascience.com/)

4.2 Working of LSTM


Long Short-Term Memory (LSTM) is a type of recurrent neural network (RNN)
architecture designed to address the vanishing gradient problem, which is a common issue
in traditional RNNs. LSTMs are particularly useful for tasks involving sequences, such as
time series prediction, natural language processing, and speech recognition.
Here's a brief overview of how LSTM works:
1. Memory Cell:
• LSTMs have a memory cell that can maintain information over long periods
of time. This allows them to capture dependencies in data that involve longer
time lags.
2. Gates:
• LSTMs use three types of gates to control the flow of information into and
out of the memory cell:
• Forget Gate: Determines what information from the cell's previous
state should be thrown away or kept.
• Input Gate: Modifies the cell state to include new information.
• Output Gate: Controls the output based on the current cell state.
3. Cell State:
• The cell state runs through the entire sequence, and its information is
selectively updated or discarded by the gates.
4. Hidden State:
• The hidden state is the output of the LSTM cell and is based on the current
input, the previous hidden state, and the current cell state. The hidden state
carries information that is useful for predicting the next element in the
sequence.

5. Training:
• During training, LSTMs use backpropagation through time (BPTT) to
update their parameters. The gates' weights are adjusted to minimize the
difference between the predicted output and the actual output.
In summary, LSTMs are designed to capture long-range dependencies in sequential data
by maintaining a memory cell with gates that regulate the flow of information. This
architecture enables them to effectively model and learn patterns in time series or sequential
data, making them well-suited for a variety of tasks in machine learning and artificial
intelligence.
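The gate mechanics described above can be written compactly. In the standard LSTM formulation (notation varies slightly between sources; here x_t is the current input, h_{t-1} the previous hidden state, c_t the cell state, \sigma the logistic sigmoid, \odot element-wise multiplication, and W, U, b the learned weights and biases), the per-time-step updates are:

f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)            (forget gate)
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)            (input gate)
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)            (output gate)
\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)     (candidate cell state)
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t      (cell state update)
h_t = o_t \odot \tanh(c_t)                           (hidden state / output)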

4.3 Neural Network Architecture

4.4 Methodology
1. Data Preprocessing:
• Load the Data: Import the raw dataset that includes time-series data and
corresponding labels.
• Handle Missing Values: Address any missing or incomplete data points by
imputation or removal.
• Normalize/Standardize Data: Scale the features to a similar range to enhance model
convergence. Common techniques include Min-Max scaling or z-score
normalization.

2. Sequence Creation:
• Define Time Windows: Determine the size of the time window or sequence length
for creating input-output pairs for the LSTM model. This involves deciding how
many previous time steps the model should consider to predict the next time step.
• Create Sequences: Slide the time window across the dataset to create sequences of
input-output pairs. Each sequence consists of input features (past time steps) and
the corresponding target (next time step or sequence).
3. Train-Test Split:
• Split the Data: Divide the dataset into training and testing sets. The training set is
used to train the model, and the testing set evaluates its generalization performance.
4. LSTM Model Architecture:
• Define Model Architecture: Create an LSTM-based model using deep learning
frameworks such as TensorFlow or PyTorch. Design the input layer to accept the
created time series sequences.
• Configure LSTM Layers: Stack one or more LSTM layers to capture temporal
dependencies in the data. Adjust the number of units in each layer based on the
complexity of the problem.
• Add Output Layer: Include an output layer with an appropriate activation function
for the specific task (e.g., softmax for classification).
5. Model Compilation and Training:
• Compile the Model: Specify the loss function, optimizer, and evaluation metric.
This depends on the nature of the problem (e.g., categorical crossentropy for
classification tasks).
• Train the Model: Fit the model to the training data, specifying the batch size and
the number of epochs. Monitor the training process for convergence and potential
overfitting.

6. Model Evaluation:
• Evaluate on Test Data: Assess the model's performance on the held-out test set
using metrics such as accuracy, precision, recall, and F1-score.
• Visualize Predictions: Visualize the model predictions against the actual values to
gain insights into its performance.
7. Fine-Tuning and Optimization:
• Hyperparameter Tuning: Fine-tune hyperparameters like learning rate, dropout
rates, or the number of LSTM units to optimize model performance.
• Iterative Improvement: Make adjustments based on the evaluation results, and
iterate the training process if necessary.
This methodology provides a systematic approach to convert time-series data into tensors
suitable for training an LSTM-based model.
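As a concrete illustration of steps 2 through 6, the sketch below builds sliding-window sequences, defines a small LSTM classifier in PyTorch (the framework listed in the software requirements), trains it, and evaluates it with scikit-learn metrics. The window size, network dimensions, synthetic IMU-like data, and names such as make_sequences and HARClassifier are illustrative assumptions for exposition, not the project's actual dataset or code.

import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# --- Step 2. Sequence creation: slide a fixed-size window over the IMU stream ---
def make_sequences(signal, labels, window_size=50, step=25):
    """signal: (T, n_features) array; labels: (T,) array of activity ids."""
    xs, ys = [], []
    for start in range(0, len(signal) - window_size, step):
        end = start + window_size
        xs.append(signal[start:end])
        ys.append(labels[end - 1])          # label of the last sample in the window
    return np.stack(xs), np.array(ys)

# Synthetic stand-in for IMU data: 6 channels (3-axis accel + 3-axis gyro), 3 activities
rng = np.random.default_rng(0)
raw = rng.standard_normal((5000, 6)).astype(np.float32)
lab = rng.integers(0, 3, size=5000)
X, y = make_sequences(raw, lab)

# --- Step 3. Train-test split ---
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# --- Step 4. LSTM model architecture ---
class HARClassifier(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, window, n_features)
        _, (h_n, _) = self.lstm(x)           # h_n: (1, batch, hidden)
        return self.head(h_n[-1])            # logits: (batch, n_classes)

model = HARClassifier()
criterion = nn.CrossEntropyLoss()            # classification loss (step 5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

train_x = torch.from_numpy(X_train)
train_y = torch.from_numpy(y_train).long()

# --- Step 5. Training loop (full-batch for brevity) ---
for epoch in range(20):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(train_x), train_y)
    loss.backward()
    optimizer.step()

# --- Step 6. Evaluation on the held-out test set ---
model.eval()
with torch.no_grad():
    preds = model(torch.from_numpy(X_test)).argmax(dim=1).numpy()
print("accuracy:", accuracy_score(y_test, preds))
print("macro F1:", f1_score(y_test, preds, average="macro"))

In practice the synthetic arrays would be replaced by the preprocessed IMU recordings from step 1, and mini-batching, validation monitoring, and hyperparameter tuning (step 7) would be layered on top of this skeleton.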

4.5 Algorithm
Neural Networks (NN):
Neural Networks are computational models inspired by the structure and functioning of the
human brain. They consist of layers of interconnected nodes, also known as neurons or
units. The layers are typically categorized into an input layer, one or more hidden layers,
and an output layer. Each connection between neurons has an associated weight, and each
neuron applies an activation function to its weighted inputs, producing an output.
The process of training a neural network involves adjusting the weights based on the error
between the predicted output and the actual output. This is done through a process called
backpropagation, where the network learns to recognize patterns and relationships in the
training data. Neural Networks have been successfully applied to a wide range of tasks,
including image recognition, natural language processing, and regression.
Long Short-Term Memory (LSTM) Networks:
LSTM networks are a type of recurrent neural network (RNN) designed to handle
sequences and time-dependent data. They were specifically developed to address the
vanishing gradient problem in traditional RNNs, which made it challenging for these
networks to capture long-term dependencies in sequences.
LSTMs introduce a memory cell and use three gates — input gate, forget gate, and output
gate — to control the flow of information into, out of, and within the memory cell. This
architecture enables LSTMs to selectively remember or forget information over long
sequences, making them well-suited for tasks such as natural language processing, time-
series analysis, and speech recognition.
In summary, while Neural Networks are general-purpose models that can learn complex
relationships in data, Long Short-Term Memory networks are a specialized type of neural
network designed for handling sequential and time-dependent data, with the ability to
capture long-range dependencies.
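To make the contrast concrete, the short PyTorch sketch below (with hypothetical toy dimensions, not values from this project) passes the same batch of IMU-style windows through a plain feedforward network, which must flatten the window and therefore discards the ordering of time steps, and through an LSTM, which consumes the window step by step while carrying a hidden state and cell state forward:

import torch
import torch.nn as nn

batch, steps, features = 8, 50, 6          # toy batch: 8 windows of 50 samples x 6 IMU channels
x = torch.randn(batch, steps, features)

# Feedforward network: the window is flattened, so temporal order is lost
mlp = nn.Sequential(nn.Flatten(), nn.Linear(steps * features, 32), nn.ReLU(), nn.Linear(32, 3))
print(mlp(x).shape)                        # torch.Size([8, 3])

# LSTM: processes the window step by step, carrying hidden and cell state across time
lstm = nn.LSTM(input_size=features, hidden_size=32, batch_first=True)
out, (h_n, c_n) = lstm(x)
print(out.shape, h_n.shape)                # torch.Size([8, 50, 32]) torch.Size([1, 8, 32])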

Loss during 20 epochs of training: graph of loss plotted against each training epoch.

Accuracy, precision, and F1-score.