
NOISY DEEP DICTIONARY LEARNING:

APPLICATION TO ALZHEIMER'S DISEASE CLASSIFICATION

Guided by: Submitted by:


ABSTRACT

 This project proposes a novel approach for identifying artifactual components separated by wavelet-ICA, using a pre-trained support vector machine (SVM). Our method provides a robust and extendable system that enables fully automated identification and removal of artifacts from EEG signals, without applying any arbitrary thresholding. Using test data contaminated by eye-blink artifacts, we show that our method identified artifactual components better than existing thresholding methods.
 Furthermore, wavelet-ICA in conjunction with SVM successfully removed the target artifacts while largely retaining the EEG source signals of interest.
Software used:

 MATLAB 2013b

 Signal processing tools


DICTIONARY LEARNING: LEFT – CONVENTIONAL INTERPRETATION, RIGHT – OUR INTERPRETATION
 Conventionally, dictionary learning is interpreted as learning a basis (D) and a representation (Z) for the data (X).
 The columns of D are called 'atoms'. In this work, we look at dictionary learning in a different manner.
 Instead of interpreting the columns as atoms, we can think of them as connections between the input and the representation layer.
 To showcase the similarity, we have kept the color scheme intact.
 Unlike a neural network, which is directed from the input to the representation, dictionary learning can be viewed as a network that points in the reverse direction – from the representation to the input.
 This is what is called 'synthesis dictionary learning' in signal processing.
 The dictionary is learnt so that the features (along with the dictionary) can synthesize / generate the data.
 It employs a Euclidean cost function (1), given by
min_{D,Z} ||X – DZ||_F²   (1)
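As a minimal sketch of the synthesis formulation, the cost ||X – DZ||_F² can be minimized by alternating gradient steps on Z and D. The snippet below is a pure-Python illustration (the project itself used MATLAB); the toy data, atom count, learning rate and iteration budget are all hypothetical choices, not the project's settings.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def residual(X, D, Z):
    # E = X - DZ, the reconstruction error matrix
    R = matmul(D, Z)
    return [[X[i][j] - R[i][j] for j in range(len(X[0]))]
            for i in range(len(X))]

def frob_err(X, D, Z):
    # squared Frobenius norm of the residual, i.e. the cost in (1)
    return sum(e * e for row in residual(X, D, Z) for e in row)

def dict_learn(X, n_atoms, iters=2000, lr=0.01):
    """Learn D (features x atoms) and codes Z (atoms x samples)
    by alternating gradient steps on ||X - DZ||_F^2."""
    random.seed(0)
    m, n = len(X), len(X[0])
    D = [[random.gauss(0, 0.1) for _ in range(n_atoms)] for _ in range(m)]
    Z = [[random.gauss(0, 0.1) for _ in range(n)] for _ in range(n_atoms)]
    for _ in range(iters):
        # gradient step on Z: dL/dZ = -2 D^T (X - DZ)
        G = matmul(transpose(D), residual(X, D, Z))
        Z = [[Z[i][j] + 2 * lr * G[i][j] for j in range(n)]
             for i in range(n_atoms)]
        # gradient step on D: dL/dD = -2 (X - DZ) Z^T
        G = matmul(residual(X, D, Z), transpose(Z))
        D = [[D[i][j] + 2 * lr * G[i][j] for j in range(n_atoms)]
             for i in range(m)]
    return D, Z
```

On a small rank-deficient matrix the learnt pair (D, Z) drives the reconstruction error close to zero, matching the synthesis view: Z together with D regenerates X.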

DEEP DICTIONARY LEARNING
 Building on the neural-network-type interpretation, we propose a deeper architecture with dictionary learning.
 For the first layer, a dictionary is learnt to represent the data.
 In the second layer, the representation from the first layer acts as the input, and a second dictionary is learnt to represent the features from the first level.
 This concept can be further extended to deeper layers.
 Instead of only training the deep dictionaries with clean data, we augment the training data with noisy samples, ensuring that the learnt dictionaries are more robust.
 This kind of training helps in two ways: 1. augmenting the training data helps combat over-fitting, and 2. learning from noisy data makes the dictionaries robust.
 We carry out experiments on benchmark deep learning datasets as well as on the practical problem of Alzheimer's Disease classification.
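The greedy layer-by-layer scheme with noisy augmentation can be sketched as follows. This is a Python illustration only (the project used MATLAB); the noise level, atom counts per layer and the simple gradient-descent factorization are assumptions made for the sketch, not the project's actual algorithm settings.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def gd_factorize(X, n_atoms, iters=1500, lr=0.01):
    # one dictionary-learning layer: minimize ||X - DZ||_F^2
    m, n = len(X), len(X[0])
    D = [[random.gauss(0, 0.1) for _ in range(n_atoms)] for _ in range(m)]
    Z = [[random.gauss(0, 0.1) for _ in range(n)] for _ in range(n_atoms)]
    for _ in range(iters):
        R = matmul(D, Z)
        E = [[X[i][j] - R[i][j] for j in range(n)] for i in range(m)]
        G = matmul(transpose(D), E)               # dL/dZ = -2 D^T E
        Z = [[Z[i][j] + 2 * lr * G[i][j] for j in range(n)]
             for i in range(n_atoms)]
        R = matmul(D, Z)
        E = [[X[i][j] - R[i][j] for j in range(n)] for i in range(m)]
        G = matmul(E, transpose(Z))               # dL/dD = -2 E Z^T
        D = [[D[i][j] + 2 * lr * G[i][j] for j in range(n_atoms)]
             for i in range(m)]
    return D, Z

def add_noisy_columns(X, sigma):
    # augment the sample set with a noise-corrupted copy of every sample
    noisy = [[v + random.gauss(0, sigma) for v in row] for row in X]
    return [row + nrow for row, nrow in zip(X, noisy)]

def noisy_deep_dict_learn(X, atoms_per_layer, sigma=0.1):
    """Greedy layer-wise learning: layer 1 represents the
    noise-augmented data; each deeper layer learns a dictionary
    for the previous layer's features."""
    random.seed(1)
    inp = add_noisy_columns(X, sigma)
    dictionaries = []
    for k in atoms_per_layer:
        D, Z = gd_factorize(inp, k)
        dictionaries.append(D)
        inp = Z               # features become the next layer's input
    return dictionaries, inp
```

Each layer narrows the representation, so the final code is a deep feature of the (noise-augmented) input, and the per-layer dictionaries chain together exactly as in the reverse-directed network view.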
PROPOSED BLOCK DIAGRAM
EEG Recording :

An electroencephalogram (EEG) is a recording of brain activity. During this painless test, small sensors are attached to the scalp to pick up the electrical signals produced by the brain. These signals are recorded by a machine and interpreted by a doctor.

Multiresolution analysis (DWT):

 Consists of a sequence of nested subspaces.
 Underlies most of the practically relevant discrete wavelet transforms (DWT).
 The signal is successively filtered and downsampled at multiple sampling rates for decomposition.
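The filter-and-downsample decomposition is easiest to see with the Haar wavelet, the simplest DWT. The sketch below is a pure-Python illustration (not the MATLAB toolbox routines used in the project), and its coefficient ordering is a choice made for this sketch.

```python
def haar_dwt_step(x):
    # one DWT level: low-pass (scaled pairwise sums) and high-pass
    # (scaled pairwise differences), each followed by downsampling by 2
    s = 2 ** -0.5
    approx = [(x[2 * i] + x[2 * i + 1]) * s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) * s for i in range(len(x) // 2)]
    return approx, detail

def haar_wavedec(x, levels):
    """Multilevel decomposition: re-apply the step to the approximation,
    walking down the nested subspaces; returns [d1, d2, ..., a_last]."""
    coeffs, a = [], list(x)
    for _ in range(levels):
        a, d = haar_dwt_step(a)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs
```

Because the Haar filters are orthonormal, the coefficients at all levels carry exactly the energy of the input signal, which is what makes thresholding wavelet coefficients a meaningful denoising operation.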
ICA Decomposition:

Independent Component Analysis – removing artifacts:

 Is a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals.
 Is a computational method for separating a multivariate signal into additive subcomponents.
 The subcomponents are assumed to be non-Gaussian signals that are statistically independent of each other.
 Non-Gaussian signals occur frequently in practical situations.
 ICA is used to separate EEG artifacts.
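A toy two-channel sketch of the ICA idea follows: center and whiten the mixed channels, then pick the rotation that maximizes non-Gaussianity, measured here by absolute excess kurtosis. This is a pure-Python illustration of the principle only; real EEG pipelines use more sophisticated algorithms (e.g. FastICA or Infomax), and the mixing weights and sources in the usage below are invented for the demo.

```python
import math

def ica_2ch(x1, x2, n_angles=180):
    """Separate two mixed channels: center, whiten via the closed-form
    eigendecomposition of the 2x2 covariance, then grid-search the
    rotation of the whitened data that maximizes non-Gaussianity
    (sum of absolute excess kurtosis of the two components)."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    x1 = [v - m1 for v in x1]
    x2 = [v - m2 for v in x2]
    # covariance matrix [[a, b], [b, c]]
    a = sum(v * v for v in x1) / n
    c = sum(v * v for v in x2) / n
    b = sum(u * v for u, v in zip(x1, x2)) / n
    # eigendecomposition: rotate by t, scale by 1/sqrt(eigenvalue)
    t = math.atan2(2 * b, a - c) / 2
    ct, st = math.cos(t), math.sin(t)
    l1 = a * ct * ct + 2 * b * ct * st + c * st * st
    l2 = a * st * st - 2 * b * ct * st + c * ct * ct
    w1 = [(ct * u + st * v) / math.sqrt(l1) for u, v in zip(x1, x2)]
    w2 = [(-st * u + ct * v) / math.sqrt(l2) for u, v in zip(x1, x2)]

    def kurt(y):
        var = sum(v * v for v in y) / n
        return sum(v ** 4 for v in y) / n / (var * var) - 3

    best_score, best_pair = -1.0, None
    for k in range(n_angles):
        th = math.pi * k / n_angles
        cth, sth = math.cos(th), math.sin(th)
        s1 = [cth * u + sth * v for u, v in zip(w1, w2)]
        s2 = [-sth * u + cth * v for u, v in zip(w1, w2)]
        score = abs(kurt(s1)) + abs(kurt(s2))
        if score > best_score:
            best_score, best_pair = score, (s1, s2)
    return best_pair

def corr(a, b):
    # Pearson correlation, to compare recovered vs. original sources
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)
```

Mixing a square wave with uniform noise and running `ica_2ch` on the two mixtures recovers signals strongly correlated with the originals, up to the usual ICA ambiguities of ordering, sign and scale.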
SVM classification:

Support Vector Machine


In machine learning, support vector machines (SVMs, also called support vector networks) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other.
TYPES OF SVM

o There are two types of SVM, linear and non-linear; they are used depending on the type of data.
o A non-linear SVM uses a kernel such as the Radial Basis Function, which takes the data points to a higher dimension where they become linearly separable, and the algorithm then classifies them.
SVM: NON-LINEAR MODEL (figure)
SVM KERNEL:

 The SVM kernel is a function that takes a low-dimensional input space and transforms it into a higher-dimensional space, i.e., it converts a non-separable problem into a separable one. It is mostly useful in non-linear separation problems. Simply put, the kernel applies some extremely complex data transformations and then finds out how to separate the data based on the labels or outputs defined.
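The Radial Basis Function kernel mentioned above has a one-line definition; the sketch below (illustrative Python, with a `gamma` value chosen arbitrarily) shows how it scores similarity without ever forming the higher-dimensional coordinates explicitly.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    # K(x, y) = exp(-gamma * ||x - y||^2): close to 1 for nearby points,
    # decaying toward 0 as the points move apart
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)
```

Evaluating this kernel on every pair of training points yields the Gram matrix that a non-linear SVM optimizes over, which is why the transformation never has to be computed directly.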
 The SVM algorithm steps include the following:
 Step 1: Load the important libraries.
 Step 2: Import the dataset and extract the X variables and Y separately.
 Step 3: Divide the dataset into train and test sets.
 Step 4: Initialize the SVM classifier model.
 Step 5: Fit the SVM classifier model.
 Step 6: Make predictions.


 The goal of SVM is to divide the datasets into classes by finding a maximum-margin hyperplane.
 Support vectors – the data points that are closest to the hyperplane are called support vectors. The separating line is defined with the help of these data points.
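The maximum-margin idea above can be sketched with a minimal linear SVM trained by stochastic subgradient descent. This is a Pegasos-style illustration in Python, not the project's MATLAB implementation; the regularization strength, epoch count and toy data are assumed for the demo.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=500):
    """Stochastic subgradient descent on the SVM objective
    lam/2 * ||w||^2 + mean hinge loss, with the bias folded in
    as a constant augmented feature."""
    random.seed(0)
    data = [x + [1.0] for x in X]        # append 1.0 so b lives inside w
    w = [0.0] * len(data[0])
    t = 0
    for _ in range(epochs):
        order = list(range(len(data)))
        random.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)        # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, data[i]))
            shrink = 1.0 - eta * lam     # pull from the L2 penalty
            if margin < 1.0:             # hinge active: step toward x_i
                w = [shrink * wj + eta * y[i] * xj
                     for wj, xj in zip(w, data[i])]
            else:
                w = [shrink * wj for wj in w]
    return w

def svm_predict(w, x):
    score = sum(wj * xj for wj, xj in zip(w, x + [1.0]))
    return 1 if score >= 0 else -1
```

On a small linearly separable set the learnt w classifies every training point correctly, and the points whose margin keeps triggering updates are precisely the support vectors described above.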
ADVANTAGES OF SVM:

 Effective in high-dimensional cases.
 Memory-efficient, as it uses only a subset of the training points (the support vectors) in the decision function.
 Different kernel functions can be specified for the decision function, and it is possible to specify custom kernels.
(Figure: input brain signal and separated signal)
CONCLUSION
 EEG signal analysis is accurate, simple and reliable enough to use in a brain–computer interface. The SVM substantially improves identification of artifactual components and is found to be more reliable than the standard thresholding method. Moreover, it promises to generalize to diverse kinds of artifacts, given proper features and training data. Our system functions automatically to isolate a distinctly cleaned EEG signal directly from a raw EEG recording, thus potentially lending itself to applications such as clinical diagnosis or BCI.
REFERENCES
 J. Wolpaw, N. Birbaumer, D. McFarland, G. Pfurtscheller, and T. Vaughan, "Brain–computer interfaces for communication and control", vol. 113, no. 6, pp. 767–791, 2002.
 B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K. R. Müller, "Optimizing spatial filters for robust EEG single-trial analysis", IEEE Signal Process. Mag., vol. 25, no. 1, pp. 41–56, 2008.
 H. Higashi and T. Tanaka, "Simultaneous design of FIR filter banks and spatial patterns for the EEG signal classification", IEEE Trans. Biomed. Eng., vol. 60, no. 4, pp. 1100–1110, Apr. 2013.
 H. Zhang, K. K. Ang, C. Guan, and C. Wang, "Spatio-spectral feature selection based on the robust mutual information estimate for brain computer interfaces", IEEE EMBS, Minneapolis, 2009.
 S. Park, E. Serpedin, and K. Qaraqe, "Gaussian assumption: The least favorable but the most useful", IEEE Signal Process. Mag., vol. 30, no. 3, pp. 183–186, May 2013.
 F. Lotte and C. Guan, "Regularizing common spatial patterns to improve BCI designs: Unified theory and new algorithms", IEEE Trans. Biomed. Eng., vol. 58, no. 2, pp. 355–362, Feb. 2011.
