
DEEP LEARNING IN AUTOMATED ECG NOISE DETECTION

DONE BY
KISHEN KANTH J
2020105542
ECG BASICS
⮚An Electrocardiogram (ECG or EKG) is a non-invasive medical test that records
the electrical activity of the heart over time. It is a crucial diagnostic tool for
assessing cardiac health.
⮚ The heart's electrical activity originates in the sinoatrial (SA) node, spreads
through the atria, passes through the atrioventricular (AV) node, and then travels
down to the ventricles, triggering muscle contraction.
⮚ ECG electrodes are placed on the patient's skin at specific locations, typically on
the limbs and chest. These electrodes detect and record the electrical signals
generated by the heart.
⮚An ECG waveform consists of several components, including the P-wave (atrial
depolarization), the QRS complex (ventricular depolarization), and the T-wave
(ventricular repolarization).
TYPES OF ECG NOISES
⮚Muscle Artifacts
⮚Baseline Wander
⮚Electrode Interference
⮚Powerline Interference
⮚Movement Artifacts
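To make these noise categories concrete, the NumPy sketch below synthesizes a toy signal corrupted by three of them: baseline wander, powerline interference, and a muscle-artifact-like broadband component. The sampling rate, amplitudes, and the sine stand-in for a clean ECG are illustrative assumptions, not values from any real recording.

```python
import numpy as np

fs = 360                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of samples

# Stand-in for a clean ECG trace; a real record would come from a database.
clean = np.sin(2 * np.pi * 1.0 * t)

baseline_wander = 0.5 * np.sin(2 * np.pi * 0.3 * t)  # slow respiratory drift
powerline = 0.2 * np.sin(2 * np.pi * 50 * t)         # 50 Hz mains interference
muscle = 0.1 * np.random.default_rng(0).standard_normal(t.size)  # EMG-like noise

noisy = clean + baseline_wander + powerline + muscle
```

Each corrupting term occupies a different frequency band, which is what makes the filtering strategies on the next slide applicable.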
TRADITIONAL NOISE REMOVAL METHODS
⮚ High-Pass Filters: High-pass filters are utilized to remove baseline wander and
slow variations from the ECG signal. They allow higher-frequency components,
such as the QRS complex, to pass through.
⮚ Low-Pass Filters: Low-pass filters are applied to eliminate high-frequency noise,
particularly muscle artifacts and electrode interference, by allowing only the
lower-frequency components of the ECG signal to pass.
⮚ Baseline Correction: Baseline correction techniques focus on removing baseline
wander and stabilizing the baseline of the ECG signal. Methods like polynomial
fitting or wavelet-based denoising are commonly employed.
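A minimal sketch of the high-pass/low-pass pipeline, using SciPy's Butterworth filter design with zero-phase filtering. The cutoff frequencies (0.5 Hz and 40 Hz), filter orders, and the toy two-tone signal are assumptions chosen for illustration, not clinically validated settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 360                     # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Toy signal: a 10 Hz "QRS-like" component riding on 0.3 Hz baseline wander.
ecg = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 0.3 * t)

# High-pass at 0.5 Hz removes baseline wander but keeps the QRS energy.
b_hp, a_hp = butter(2, 0.5 / (fs / 2), btype="highpass")
no_wander = filtfilt(b_hp, a_hp, ecg)

# Low-pass at 40 Hz suppresses muscle-artifact and powerline frequencies.
b_lp, a_lp = butter(4, 40 / (fs / 2), btype="lowpass")
denoised = filtfilt(b_lp, a_lp, no_wander)
```

`filtfilt` runs the filter forward and backward, so the cascade introduces no phase distortion, which matters when the timing of ECG waves is diagnostically relevant.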
INTRODUCTION TO DEEP LEARNING
⮚Deep learning is a subset of machine learning that focuses on training artificial neural
networks with multiple layers to perform tasks without explicitly programmed
instructions. It excels in pattern recognition, feature learning, and complex data analysis.
⮚At the heart of deep learning are artificial neural networks, which are inspired by the
structure and function of the human brain. These networks consist of interconnected
nodes (neurons) organized in layers.
⮚Deep learning has found applications in diverse fields, including computer vision, natural
language processing, speech recognition, and healthcare. In the context of ECG analysis,
it offers the potential to automate and improve various aspects of signal processing and
diagnosis.
⮚Deep learning techniques have demonstrated promise in healthcare for tasks such as
disease diagnosis, medical image analysis, drug discovery, and predictive modeling.
They are particularly valuable in scenarios where large datasets and complex patterns are
involved.
DEEP LEARNING IN NOISE DETECTION
⮚Complex Pattern Recognition: Deep learning algorithms, such as Convolutional
Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), are capable of
complex pattern recognition.
⮚Data-Driven Approach: Deep learning adopts a data-driven approach, which
means that it learns from the patterns and features present in the training data.
⮚ Adaptability: Deep learning models can be trained to recognize different types of
noise in ECG signals, making them adaptable to various noise sources. This
adaptability enables a more comprehensive approach to noise detection.
DATA COLLECTION AND PREPROCESSING

• High-Quality Labelled Datasets
• Data Normalization
• Data Augmentation
• Annotation of Noise Labels
• Preprocessing Pipelines
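Two of these steps, normalization and augmentation, can be sketched in a few lines of NumPy. The batch of random "beats", their dimensions, and the augmentation noise level are hypothetical placeholders for real labelled ECG segments.

```python
import numpy as np

rng = np.random.default_rng(42)
beats = rng.standard_normal((8, 360)) * 3 + 5  # hypothetical batch: 8 beats, 360 samples

# Per-record z-score normalization: zero mean, unit variance along time.
mean = beats.mean(axis=1, keepdims=True)
std = beats.std(axis=1, keepdims=True)
normalized = (beats - mean) / std

# Simple augmentation: add low-amplitude Gaussian noise to create extra
# training examples that keep the same noise-class label.
augmented = normalized + 0.05 * rng.standard_normal(normalized.shape)
```

Normalization keeps amplitude differences between recordings from dominating training; augmentation is one way to stretch a small labelled dataset.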
CONVOLUTIONAL NEURAL NETWORK (CNN)
• A CNN is one of the most popular DNN architectures and is usually trained with a
gradient-based optimization algorithm. In general, a CNN consists of
multiple back-to-back layers connected in a feed-forward manner. The
main layers are the convolutional layer, normalization layer,
pooling layer, and fully-connected layer. The first three layer types are
responsible for extracting features, while the fully-connected layers are in
charge of classification.
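The convolution → pooling → fully-connected pipeline described above can be traced with a plain-NumPy forward pass. The weights here are random (untrained) and the sizes are arbitrary; this is a sketch of the data flow, not a working classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)      # one ECG segment, 64 samples
kernel = rng.standard_normal(5)       # one learnable 1-D convolution filter

# Convolutional layer: slide the kernel over the signal (valid mode).
conv = np.array([signal[i:i + 5] @ kernel for i in range(60)])
relu = np.maximum(conv, 0)            # non-linearity

# Pooling layer: max over non-overlapping windows of 4, shrinking the map.
pooled = relu.reshape(15, 4).max(axis=1)

# Fully-connected layer: map pooled features to 2 class scores
# (e.g. clean vs noisy), then softmax for probabilities.
w = rng.standard_normal((15, 2))
scores = pooled @ w
probs = np.exp(scores) / np.exp(scores).sum()
```

In a trained network the kernel and `w` would be learned by the gradient-based optimizer mentioned above, and there would be many filters per layer rather than one.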
RECURRENT NEURAL NETWORKS (RNN)

• An RNN is an extension of an Artificial Neural Network (ANN) whose
weights are shared across time. RNNs are well suited to learning from
sequential input data and to time-series classification, because the
previous hidden state is fed back into the network together with the
current input, so the output accumulates information held in this
memory. At each time step, the RNN receives an input, updates its
hidden state, and makes a prediction. The weights are trained with
gradient descent applied through time (backpropagation through time).
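The recurrence can be unrolled explicitly in NumPy. Note that the same `W_x`, `W_h`, and `b` are reused at every step; that weight sharing across time is the defining property described above. Sizes and the random toy sequence are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d_in, d_h = 20, 1, 8               # 20 time steps, scalar input, hidden size 8
x = rng.standard_normal((T, d_in))    # toy input sequence (e.g. ECG samples)

# Weights shared across every time step.
W_x = rng.standard_normal((d_in, d_h)) * 0.5
W_h = rng.standard_normal((d_h, d_h)) * 0.5
b = np.zeros(d_h)

h = np.zeros(d_h)
for t in range(T):
    # New hidden state mixes the current input with the previous hidden state.
    h = np.tanh(x[t] @ W_x + h @ W_h + b)

W_out = rng.standard_normal((d_h, 1))
y = h @ W_out                          # prediction from the final hidden state
```

Training would backpropagate the loss through this loop (BPTT); the vanishing/exploding-gradient problem that motivates LSTMs arises from the repeated multiplication by `W_h` inside it.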
LONG SHORT TERM MEMORY NETWORK
(LSTM)
• To solve the problem of vanishing and exploding gradients in a deep Recurrent
Neural Network, many variations were developed. One of the most famous of
them is the Long Short Term Memory Network (LSTM). In concept, an LSTM
recurrent unit tries to “remember” all the relevant knowledge the network has seen
so far and to “forget” irrelevant data. This is done by introducing different
activation-function layers called “gates” for different purposes. Each LSTM
recurrent unit also maintains a vector called the internal cell state, which
conceptually describes the information that was chosen to be retained by the
previous LSTM recurrent unit.
• Forget Gate (f)
• Input Gate (i)
• Input Modulation Gate (g)
• Output Gate (o)
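One forward step of an LSTM cell, written out in NumPy so each of the four gates listed above is visible. Weights are random and dimensions arbitrary; a real cell would also have bias terms and learned parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
d_in, d_h = 4, 6
x_t = rng.standard_normal(d_in)       # input at the current time step
h_prev = np.zeros(d_h)                # previous hidden state
c_prev = np.zeros(d_h)                # previous internal cell state

# One weight matrix per gate, each acting on [h_prev, x_t].
z = np.concatenate([h_prev, x_t])
W_f, W_i, W_g, W_o = (rng.standard_normal((d_in + d_h, d_h)) for _ in range(4))

f = sigmoid(z @ W_f)                  # forget gate: what to drop from c_prev
i = sigmoid(z @ W_i)                  # input gate: how much to write
g = np.tanh(z @ W_g)                  # input modulation gate: candidate values
o = sigmoid(z @ W_o)                  # output gate: what to expose

c = f * c_prev + i * g                # updated internal cell state
h = o * np.tanh(c)                    # new hidden state
```

The additive update of `c` (rather than repeated matrix multiplication) is what lets gradients flow over long time spans.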
GATED RECURRENT UNIT NETWORKS
• The basic idea behind GRU is to use gating mechanisms to selectively
update the hidden state of the network at each time step. The gating
mechanisms are used to control the flow of information in and out of
the network. The GRU has two gating mechanisms, called the reset
gate and the update gate.
• The reset gate determines how much of the previous hidden state
should be forgotten, while the update gate determines how much of the
new input should be used to update the hidden state. The output of the
GRU is calculated based on the updated hidden state.
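A single GRU step in NumPy, showing the reset and update gates described above. As with the earlier sketches, the weights are random placeholders and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
d_in, d_h = 4, 6
x_t = rng.standard_normal(d_in)       # current input
h_prev = rng.standard_normal(d_h)     # previous hidden state

W_r, W_u, W_c = (rng.standard_normal((d_in + d_h, d_h)) for _ in range(3))

z = np.concatenate([h_prev, x_t])
r = sigmoid(z @ W_r)                  # reset gate: how much of h_prev to forget
u = sigmoid(z @ W_u)                  # update gate: how much new content to take

# Candidate state is computed from the *reset* previous hidden state.
cand = np.tanh(np.concatenate([r * h_prev, x_t]) @ W_c)

# Output: interpolate between old state and candidate under the update gate.
h = (1 - u) * h_prev + u * cand
```

Compared with the LSTM, the GRU merges the cell state and hidden state and uses two gates instead of four, which reduces the parameter count.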
TRANSFER LEARNING
• Transfer learning is a technique in machine learning where a model trained on
one task is used as the starting point for a model on a second task. This can be
useful when the second task is similar to the first task, or when there is limited
data available for the second task. By using the learned features from the first
task as a starting point, the model can learn more quickly and effectively on the
second task. This can also help to prevent overfitting, as the model will have
already learned general features that are likely to be useful in the second task.
TRANSFER LEARNING
1. Leveraging Pre-trained Models: Pre-trained deep learning models, trained on large and
diverse datasets for tasks like image recognition or signal processing, can be leveraged for
ECG noise detection. This is particularly beneficial when labeled ECG noise data is limited.
2. Adaptation to Specific Task: Transfer learning allows the model to adapt its knowledge
from the original task to the specific requirements of ECG noise detection. This adaptability
enhances the model's performance and efficiency.
3. Reduced Training Data Requirements: By starting with a pre-trained model, transfer
learning can significantly reduce the amount of labeled data needed for training. This is
especially advantageous in healthcare applications where collecting large labeled datasets
can be challenging.
4. Generalization to New Data: Transfer learning contributes to the generalization of models
to new and unseen ECG data, making them more effective in real-world scenarios. This
approach ensures that the model captures relevant features and patterns beyond the training
data.
Training a deep CNN prediction model from scratch requires a large amount of training
data and computational power. In many applications an adequate quantity of training data is
unavailable, and producing realistic synthetic data is impractical. In these
cases, reusing an available CNN trained on large datasets for conceptually comparable tasks is
beneficial: knowledge learned from patterns in one domain may apply to other domains.
The transfer learning technique allows this knowledge to be carried over to a new domain to
perform classification. It uses a pre-trained deep neural network
for automated feature extraction. The convolution layers inside this model contain features
learned during model training and hold knowledge about the patterns in the source dataset.
These feature representations can act as feature extractors for a new dataset. Features
extracted from the mid-layers of a deep CNN are often robust enough to
match hand-engineered features, making them well suited to feature extraction. The transfer
learning technique has great potential for automation in various domains
such as sound signals, aircraft design, and medical applications.
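The "frozen feature extractor plus new head" pattern can be sketched end-to-end in NumPy. Here the "pretrained" convolution filters are random stand-ins (in a real pipeline they would be loaded from a network trained on a large source dataset), and only a small logistic-regression head is trained on a toy clean-vs-noisy task. Every dataset detail below is synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for pretrained convolution filters: copied from a source-domain
# network and then FROZEN -- they are never updated below.
pretrained_filters = rng.standard_normal((3, 7))

def extract_features(sig):
    """Frozen feature extractor: convolve with each filter, global max-pool."""
    return np.array([
        max(sig[i:i + 7] @ f for i in range(len(sig) - 6))
        for f in pretrained_filters
    ])

# Tiny target-domain dataset: label 1 = "noisy" (higher variance), 0 = "clean".
labels = np.array([0, 1] * 10, dtype=float)
signals = [rng.standard_normal(50) * (1 + 2 * lbl) for lbl in labels]
feats = np.stack([extract_features(s) for s in signals])
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)  # standardize

# Only the new classification head is trained; the extractor never changes.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))   # logistic-regression head
    grad = p - labels
    w -= 0.1 * (feats.T @ grad) / len(labels)
    b -= 0.1 * grad.mean()
```

Because only `w` and `b` are updated, the 20-example "target dataset" is enough to fit the head, which is exactly the reduced-data benefit claimed above.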
REFERENCES
• https://www.sciencedirect.com/science/article/pii/S2590188520300123
• https://ieeexplore.ieee.org/document/9476957
• https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=ecg%20noise%20detection%20using%20deep%20learning
• https://www.geeksforgeeks.org/introduction-convolution-neural-network/
• https://www.geeksforgeeks.org/introduction-to-recurrent-neural-network/
• https://www.geeksforgeeks.org/long-short-term-memory-networks-explanation/
• https://www.geeksforgeeks.org/gated-recurrent-unit-networks/
THANK YOU
