Review
Article info
Article history:
Received 11 December 2012
Received in revised form
26 September 2013
Accepted 13 December 2013
Available online 15 January 2014
Keywords:
Sleep
EEG
Signal processing
Quantitative analysis
Artefact reduction
Feature selection/classification
Abstract
A bewildering variety of methods for analysing sleep EEG signals can be found in the literature. This article provides an overview of these methods and offers guidelines for choosing appropriate signal processing techniques. The review considers the three key stages required for the analysis of sleep EEGs, namely pre-processing, feature extraction and feature classification. The pre-processing section describes the most frequently used signal processing techniques that deal with the preparation of the sleep EEG signal prior to further analysis. The feature extraction and classification sections are likewise dedicated to highlighting the most commonly used signal analysis methods for characterising and classifying sleep EEGs. Performance criteria of the addressed techniques are given where appropriate. The online supplementary materials accompanying this article comprise an extended taxonomy table for each section, which contains the relevant signal processing techniques, their brief descriptions (including their pros and cons where possible) and their specific applications in the field of sleep EEG analysis. To further increase the readability of the article, signal processing techniques are also categorised in tabular format based on their application in intensively researched sleep areas such as sleep staging, transient pattern detection and sleep-disordered breathing diagnosis.
© 2013 Elsevier Ltd. All rights reserved.
Contents
1. Introduction
2. Pre-processing
   2.1. Artefact processing
      2.1.1. Artefact detection and rejection
      2.1.2. Methods for suppressing artefacts
   2.2. Sleep EEG segmentation
3. Feature extraction
   3.1. Temporal features
      3.1.1. Instantaneous statistics
      3.1.2. Zero-crossing and period-amplitude analysis (PAA)
      3.1.3. Hjorth parameters
      3.1.4. Detrended fluctuation analysis (DFA)
   3.2. Spectral features
      3.2.1. Non-parametric spectral estimation methods
      3.2.2. Coherence analysis
      3.2.3. Parametric spectral estimation
      3.2.4. Subspace methods
      3.2.5. Higher-order spectral analysis (HOSA)
Corresponding author at: University of Southampton, Room 3026, Building 13, Highfield, Southampton SO17 1BJ, UK. Tel.: +44 2380592873.
E-mail addresses: Smf1g08@Soton.ac.uk (S. Motamedi-Fakhr), M.M.Torbati@soton.ac.uk (M. Moshrefi-Torbati), M.Hill@soton.ac.uk (M. Hill),
C.M.Hill@soton.ac.uk (C.M. Hill), Pwr@isvr.soton.ac.uk (P.R. White).
1746-8094/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.bspc.2013.12.003
   3.3. Time–frequency features
      3.3.1. Short time Fourier transform (STFT)
      3.3.2. The wavelet transform
      3.3.3. Matching pursuits (MP)
      3.3.4. Empirical mode decomposition (EMD)
   3.4. Nonlinear features/complexity measures
      3.4.1. Fractal dimension (FD)
      3.4.2. Correlation dimension
      3.4.3. Entropy measures
      3.4.4. Lyapunov exponents
4. Feature classification
   4.1. Neural network (NN) classification (supervised learning)
      4.1.1. Multilayer perceptron (MLP)
   4.2. Clustering (unsupervised learning)
      4.2.1. Self-organising maps (SOM) or Kohonen maps
   4.3. Statistical classification
      4.3.1. Linear discriminant analysis (LDA)
      4.3.2. Support vector machines (SVM)
      4.3.3. Hidden Markov model (HMM)
   4.4. Fuzzy classification
   4.5. Combined classifiers
5. Summary and conclusion
Conflicts of interest
Appendix A. Supplementary data
References
1. Introduction
Sleep is a crucial part of everyday life. It directly affects our cognitive performance, learning capabilities, and general physical and
emotional well-being. Sleep is the primary activity of the brain
in infancy and is thought to be a factor in neural plasticity [1,2].
Sleep problems in early life may result in lasting neurocognitive
deficits. Krueger et al. [3] point out that during sleep one gives up
the opportunities to reproduce, eat, drink or socialise and one is
subject to predation. Sleep could only have evolved despite these
high evolutionary costs if it serves a crucial, primordial function.
Understanding and measuring brain activity in sleep is an exciting
frontier of neuroscience, and polysomnography (PSG) provides a
data-rich source for understanding sleep in both health and disease. PSG combines multiple signals recorded in sleep, typically including neurophysiological signals:

- EEG (usually 4–8 channels), EOG and EMG (submentalis and/or tibialis muscle)

combined with cardiorespiratory signals such as:

- ECG
- Oxyhaemoglobin saturation
- Oral–nasal airflow
- Abdominal and thoracic excursions
Visual inspection of these neurophysiological signals forms the
basis for standard sleep staging [4]. Signal processing allows the
extraction of detailed information from such signals. Applications
of these methods in relation to sleep EEG range from simple time and frequency domain analysis to the implementation of sophisticated nonlinear pattern recognition and classification algorithms.
Kubicki et al. [5] emphasise that going beyond the well-known and
commonly used Rechtschaffen and Kales scoring criteria [4] will
not be possible without the use of signal processing techniques
and computer aided analyses to reveal further information on the
microstructure of sleep. The body of literature developed for the
analysis of sleep EEG is vast and therefore this review paper provides a synthesis of a selection of this literature to generate an
overview of signal processing techniques applied to human sleep
EEG analysis and their relative merits.
1 Note that Tables 1–3 in the supplementary materials are designed to be self-contained; each table includes the names of the relevant signal processing techniques (sorted by frequency of use in the literature), a brief description of each technique with an accompanying key reference, and instances where each technique has been employed in sleep EEG analysis, followed by the corresponding references. We recommend that the reader use the supplementary materials provided alongside this article.
Fig. 1. Common EEG artefacts. (A) 50 Hz mains interference appears as a thickened signal caused by superposition of 50 Hz mains waves on the EEG. (B) Movement causes a sudden and significant deviation from the background EEG and (C) severe movement can clip the EEG. (D) ECG interference appears as a pulsed EEG; it occurs when the pulses of the ECG are superimposed on the EEG. (E) Sweat artefact is a slow drift of the EEG baseline.
within the segment; then the reliability of estimates from well-constructed processing algorithms will be enhanced. The use of longer segments will also usually increase the overall computational burden of the algorithm, since in most instances the computational load is related to the data length in a super-linear fashion. The choice of segment length is usually also influenced by the algorithm being used: some methods are more data-hungry than others and require longer segment lengths to obtain reliable results even in truly stationary environments. It should be noted that this conflict can force one to adopt segment lengths which exceed those over which the signal can reasonably be regarded as stationary, and the resulting method can suffer significantly.
Adaptive (non-uniform) segmentation is a more sophisticated approach to segmentation; it aims to examine the underlying signals to determine a suitable segment length, allowing the use of longer segments when the signal is approximately stationary and shorter segments when there are rapid changes [20,21]. An advantage of this approach is that feature extraction after adaptive segmentation will be more reliable, since features are extracted from homogeneous epochs rather than constant-length ones [22]. In order to perform such a segmentation one needs to be able to detect changes in the underlying signal structure; this can be done using a wide variety of metrics including statistical and spectral measures [23–26], auto-regressive (AR) modelling [20], time-varying AR models [27] (which allow for limited temporal variation within a segment) and various energy measures [28,29].
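As a minimal illustration of such change detection (our own sketch, not an algorithm from the cited studies), the following places a segment boundary wherever the signal variance in two adjacent windows differs by more than a chosen ratio; the window length and threshold are arbitrary illustrative parameters:

```python
import numpy as np

def adaptive_segment(x, win=250, threshold=2.0):
    """Place a segment boundary wherever the variance of the signal in two
    adjacent windows of length `win` changes by more than `threshold`
    (a simple variance-ratio change detector)."""
    x = np.asarray(x, dtype=float)
    boundaries = [0]
    for start in range(win, len(x) - win + 1, win):
        v_left = np.var(x[start - win:start])
        v_right = np.var(x[start:start + win])
        # ratio of the larger to the smaller variance (guarding against zero)
        ratio = max(v_left, v_right) / max(min(v_left, v_right), 1e-12)
        if ratio > threshold:
            boundaries.append(start)
    boundaries.append(len(x))
    return boundaries
```

Real implementations would use the statistical, spectral or AR-based measures cited above rather than a bare variance ratio, but the circularity discussed next applies equally to all of them.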
Whilst adaptive segmentation offers the potential for improved performance, it suffers from two distinct disadvantages. The first is that it clearly imposes an additional computational burden; the degree to which this is important depends on the application and method adopted. The more fundamental issue is the rather circular nature of the principle: specifically, in order to detect the boundaries needed to perform the segmentation one needs to, in some manner, analyse the data within the segment. The analysis used as the basis of the segmentation should be as powerful as the method used post-segmentation, or the segment boundaries will only occur where there is a gross non-stationarity and more subtle features in the data will be obscured. This leads to a vision where segmentation is inherent in the whole processing strategy, rather than a separate process as is traditionally the case; such approaches, whilst conceptually feasible, are not currently described and so are not discussed further.
3. Feature extraction
Analyses of time series are often carried out by extracting features from the signal of interest. Features can be defined as parameters which provide information about the underlying structure of a signal. Numerous techniques have been applied to sleep EEG signals for the purpose of feature extraction. Names, descriptions and applications of these techniques in sleep EEG processing are provided in supplementary Table 2. Note that, in general, studies employ more than one feature in their analyses and hence, more often than not, features are complementary. The rest of this section briefly describes some of the most frequently used features and feature extraction techniques in sleep EEG analysis. A review of quantitative analysis techniques applied to EEG signals can be found in [30].
3.1. Temporal features
Temporal features are characteristics obtained from the signal in the time domain. Some of the more widely used temporal features and the associated processing techniques are described below. For specific applications of these features/methods in the analysis of sleep EEG signals see supplementary Table 2, which also contains a wider range of temporal features.
3.1.1. Instantaneous statistics
These are among the simplest features which can be derived from a time series and are the most frequently used temporal features in sleep EEG analysis. Such measures treat each sample as if it were a draw from a univariate process and overlook any correlations between samples. These statistics include measures derived from moments of the waveform, including the mean absolute amplitude, standard deviation/variance, skewness and kurtosis, as well as measures relating to the probability density function of the waveform, such as the mode, median or entropy.
3.1.2. Zero-crossing and period-amplitude analysis (PAA)
Zero-crossings are the points at which the waveform crosses the x-axis. They are simple to compute, and the zero-crossing rate encodes frequency information: data dominated by high frequencies has a rapid zero-crossing rate, whereas low zero-crossing rates are associated with low-frequency processes [31,32]. The limited information offered by zero-crossings can be supplemented by also characterising the waveform between the zero-crossing points, leading to PAA [33]. This approach can be adopted within frequency bands to mitigate the effects of noise and to reduce the issues associated with signals comprising multiple components [33], but this is not always necessary and the results can compare favourably with spectral-based approaches [34].
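A minimal sketch of the zero-crossing rate (our own illustration) counts sign changes per second; a pure sinusoid at f Hz crosses zero roughly 2f times per second, which is why the rate encodes frequency:

```python
import numpy as np

def zero_crossing_rate(x, fs):
    """Number of sign changes per second for a signal sampled at fs Hz."""
    s = np.signbit(np.asarray(x, dtype=float))
    crossings = np.count_nonzero(s[1:] != s[:-1])   # sign changed between samples
    return crossings * fs / len(x)
```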
3.1.3. Hjorth parameters
These parameters are based on the variances of the derivatives of the waveform and have long been used to characterise EEG waveforms [35]. Three Hjorth parameters are commonly defined (although one can readily extend the principle) to describe the activity, mobility (shape) and complexity of EEG signals [35,36]. Being based on derivatives, the Hjorth parameters are sensitive to noise, and hence the signal of interest is commonly filtered prior to the calculation of these parameters [37].
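A minimal sketch of the three Hjorth parameters, using first differences as discrete derivatives (an assumption on our part; other derivative estimates are possible):

```python
import numpy as np

def hjorth_parameters(x):
    """Activity, mobility and complexity from the variances of the signal
    and its first two discrete derivatives."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)                                   # first derivative
    ddx = np.diff(dx)                                 # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity
```

For a pure sinusoid the complexity is 1, its minimum; noisier or broader-band signals give larger values.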
3.1.4. Detrended fluctuation analysis (DFA)
DFA is a method for characterising long-range temporal correlations in a time series [38,39] and can be used as a measure of self-similarity [40]. It is based on identifying trends in the signal's variance when analysed with different block lengths and is inherently
and interpretation of these techniques requires a good understanding of the method and the application [74,78]. However, careful analysis of nonlinearities can reveal useful information which is otherwise hidden [79]. The rest of this section briefly describes some of the nonlinear features/techniques which have been most frequently applied to the analysis of sleep EEGs.
3.4.1. Fractal dimension (FD)
The FD directly measures the complexity of the measured signal. The basic idea comes from quantifying the dimensionality of fractals (i.e. geometries which are self-similar on different scales [80]). A fractal is a mathematical object whose dimension is non-integer, whereas familiar geometric objects have integer dimensions: curves have a dimension of one, whilst planes are two-dimensional. For instance, an object of fractal dimension between one and two might be a line of infinite length which is contained within a finite area. The concept of FD has been extended to the analysis of time series, the principle being that a simple time series has a lower fractal dimension than a more complex one. For a single-channel sleep EEG signal, FD can range from one to two; that is, its dimension is at least one and cannot be greater than two [81]. There are several algorithms for the calculation of FD from a time series [82,83] which have been used in the analysis of sleep EEGs to date. FDs are suitable for the detection of transients in EEG signals [84,85] since they can be applied to short segments of data and are relatively stable measures of complexity [86].
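As one concrete example, Katz's algorithm [83] admits a compact implementation; the sketch below (our own, assuming unit spacing on the time axis) computes FD = log10(n) / (log10(n) + log10(d/L)), with L the total length of the curve, d the maximum distance from the first sample and n the number of steps:

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a waveform, treating it as a planar curve
    with unit spacing between successive samples."""
    x = np.asarray(x, dtype=float)
    n = len(x) - 1                                   # number of steps
    L = np.sqrt(1.0 + np.diff(x) ** 2).sum()         # total curve length
    # maximum distance from the first point to any other point
    d = np.sqrt(np.arange(1, len(x)) ** 2 + (x[1:] - x[0]) ** 2).max()
    return np.log10(n) / (np.log10(n) + np.log10(d / L))
```

A straight line gives FD = 1 (since d = L), while an irregular waveform gives a value above 1.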
3.4.2. Correlation dimension
The correlation dimension provides a bound on the fractal dimension of the attractor of the underlying dynamical system [87]. The dimension of the underlying attractor is fractal (non-integer) if the system is chaotic. The correlation dimension is commonly considered for use in the analysis of sleep EEGs [88]; this relatively wide usage is, in part, a consequence of an efficient and straightforward numerical algorithm for its estimation [87]. The correlation dimension has been applied to the analysis of both neonatal [89,90] and adult [91,92] sleep EEGs and has, arguably, been most successfully used for sleep staging. Its use in identifying the different stages of sleep is based on the observation that in deeper sleep the measured complexity tends to be lower [45,84,89,90,93]. Accurate estimation of the correlation dimension requires a large sample size (long segments), which limits its applicability; it is therefore generally not suitable for parameterisation of short transient events such as sleep spindles or arousals. Extensions of the approach to non-stationary signals do exist [94] but have not been widely adopted.
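The Grassberger–Procaccia algorithm [87] rests on the correlation sum C(r), the fraction of distinct point pairs closer than a radius r; the dimension is the slope of log C(r) versus log r at small r. The sketch below is a deliberate simplification (our own), estimating the slope from just two radii rather than a regression over many scales, and omitting the delay embedding that a real EEG analysis would need:

```python
import numpy as np

def correlation_sum(points, r):
    """Grassberger-Procaccia correlation integral C(r): the fraction of
    distinct point pairs whose Euclidean distance is below r."""
    pts = np.asarray(points, dtype=float)
    if pts.ndim == 1:
        pts = pts[:, None]                            # treat scalars as 1-D points
    diffs = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=-1))         # pairwise distance matrix
    iu = np.triu_indices(len(pts), k=1)               # distinct pairs only
    return np.count_nonzero(dist[iu] < r) / len(iu[0])

def correlation_dimension(points, r1, r2):
    """Crude dimension estimate: slope of log C(r) between two radii."""
    c1, c2 = correlation_sum(points, r1), correlation_sum(points, r2)
    return np.log(c2 / c1) / np.log(r2 / r1)
```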
3.4.3. Entropy measures
Entropy is a statistical measure of complexity and does not rely upon a nonlinear description of the data. A variety of entropy measures have been proposed. Estimating the entropy directly from a time series requires computation of the joint probability density function between time series, a process which is data-intensive; many entropy measures therefore attempt to approximate this quantity in an efficient manner. Approximate entropy (ApEn) [95] is one such widely considered method. ApEn reflects the conditional probability that two time series remain similar to each other for the next m samples, given that they have previously been similar. If the signals have a high degree of regularity (i.e. a low degree of complexity) it is more likely that they will remain similar for subsequent samples and hence they produce a low ApEn value [96–98]. ApEn has several desirable properties [108]: it is robust in the analysis of short data segments, resilient to outliers and strong transients, capable of dealing with noise through appropriate estimation of its parameters, and can be applied to both stochastic and deterministically chaotic signals.
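A direct, if computationally naive, sketch of ApEn following the definition above (the tolerance convention r = 0.2 × standard deviation is a common choice in the literature, not something mandated by [95]):

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy: phi(m) - phi(m+1), where phi(m) is the mean log
    fraction of length-m templates lying within tolerance r of each other
    (Chebyshev distance, self-matches included)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of templates
        dist = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=-1)
        counts = np.count_nonzero(dist <= r, axis=1) / n
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)
```

A regular (e.g. periodic) signal yields a markedly lower ApEn than white noise of the same length.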
Fig. 2. Illustration of two time–frequency analysis methods. The EEG signal used here is the one from Fig. 1 highlighting 50 Hz mains interference. The spectrogram clearly shows the 50 Hz noise and its first harmonic at 100 Hz; the low-frequency activity around the 17 s mark is also emphasised. Similarly, the Hilbert spectrum shows activity around 50 Hz localised in both time and frequency; the low-frequency activity around the 17 s mark is also highlighted.
4. Feature classification
Features are measurable characteristics of a time series used to reduce the signal's dimension whilst maintaining information vital to subsequent operations, e.g. classification. Once features are extracted from a signal, one can perform classification by grouping the features; that is, dividing the feature space into a discrete number of regions, one for each class to be classified. For instance, in sleep staging we may have five possible classes, namely wake, stage
[61] J.W.A. Fackrell, P.R. White, J.K. Hammond, R.J. Pinnington, A.T. Parsons, The
interpretation of the bispectra of vibration signals: I. Theory, Mech. Syst.
Signal Process. 9 (1995) 257266.
[62] L. Cohen, TimeFrequency Analysis, Prentice-Hall, New Jersey, 1995.
[63] C.K. Chui, An Introduction to Wavelets, Academic Press, Inc., 1992.
[64] M. Jobert, C. Tismer, E. Poiseau, H. Schulz, Wavelets a new tool in sleep
biosignal analysis, J. Sleep Res. 3 (1994) 223232.
[65] P.S. Addison, The Illustrated Wavelet Transform Handbook, Taylor & Francis,
2002.
[66] P.J. Durka, From wavelets to adaptive approximations: timefrequency
parametrization of EEG, Biomed. Eng. Online 2 (2003) 1.
[67] J. Zygierewicz, K.J. Blinowska, P.J. Durka, W. Szelenberger, S. Niemcewicz, W.
Androsiuk, High resolution study of sleep spindles, Clin. Neurophysiol. 110
(1999) 21362147.
[68] S.G. Mallat, Z.F. Zhang, Matching pursuits with timefrequency dictionaries,
IEEE Trans. Signal Process. 41 (1993) 33973415.
[69] P.J. Durka, D. Ircha, K.J. Blinowska, Stochastic timefrequency dictionaries for
matching pursuit, IEEE Trans. Signal Process. 49 (2001) 507510.
[70] K.J. Blinowska, P.J. Durka, Unbiased high resolution method of EEG analysis in timefrequency space, Acta Neurobiol. Exp. (Warsz.) 61 (2001)
157174.
[71] E. Huupponen, W. De Clercq, G. Gomez-Herrero, A. Saastamoinen, K. Egiazarian, A. Varri, B. Vanrumste, A. Vergult, S. Van Huffel, W. Van Paesschen, J.
Hasan, S.L. Himanen, Determination of dominant simulated spindle frequency
with different methods, J. Neurosci. Methods 156 (2006) 275283.
[72] N.E. Huang, Z. Shen, S.R. Long, M.C. Wu, H.H. Shih, Q. Zheng, N.-C. Yen, C.C.
Tung, H.H. Liu, The empirical mode decomposition and the Hilbert spectrum
for nonlinear and non-stationary time series analysis, Proc. Math. Phys. Eng.
Sci. 454 (1998) 903995.
[73] B.H. Jansen, Quantitative-analysis of electroencephalograms is there chaos
in the future, Int. J. Bio-Med. Comput. 27 (1991) 95123.
[74] D. Gallez, A. Babloyantz, Predictability of human EEG a dynamic approach,
Biol. Cybern. 64 (1991) 381391.
[75] J. Fell, J. Roschke, C. Schaffner, Surrogate data analysis of sleep electroencephalograms reveals evidence for nonlinearity, Biol. Cybern. 75 (1996)
8592.
[76] J.W. Milnor, Attractor, in: Scholarpedia, 2006.
[77] C.J. Stam, Nonlinear dynamical analysis of EEG and MEG: review of an emerging eld, Clin. Neurophysiol. 116 (2005) 22662301.
[78] B. Henry, N. Lovell, F. Camacho, Nonlinear dynamics time series analysis, in:
M. Akay (Ed.), Nonlinear Biomedical Signal Processing, vol. 2, 2001.
[79] M. Palus, Nonlinearity in normal human EEG: cycles, temporal asymmetry, nonstationarity and randomness, not chaos, Biol. Cybern. 75 (1996)
389396.
[80] P.S. Addison, Fractals and Chaos: An Illustrated Course, IOP publishing Ltd.,
1997.
[81] A. Accardo, M. Afnito, M. Carrozzi, F. Bouquet, Use of the fractal dimension for
the analysis of electroencephalographic time series, Biol. Cybern. 77 (1997)
339350.
[82] T. Higuchi, Approach to an irregular time series on the basis of the fractal
theory, Phys. D 31 (1988) 277283.
[83] M.J. Katz, Fractals and the analysis of waveforms, Comput. Biol. Med. 18 (1988)
145156.
[84] R. Acharya, O. Faust, N. Kannathal, T. Chua, S. Laxminarayan, Non-linear analysis of EEG signals at various sleep stages, Comput. Methods Programs Biomed.
80 (2005) 3745.
[85] J.E. Arle, R.H. Simon, An application of fractal dimension to the detection
of transients in the electroencephalogram, Electroencephalogr. Clin. Neurophysiol. 75 (1990) 296305.
[86] M.T.R. Peiris, R.D. Jones, P.R. Davidson, P.J. Bones, D.J. Myall, Fractal Dimension
of the EEG for Detection of Behavioural Microsleeps, IEEE, Shanghai, China,
2006, pp. 4.
[87] P. Grassberger, I. Procaccia, Characterization of strange attractors, Phys. Rev.
Lett. 50 (1983) 346.
[88] R. Ferri, L. Parrino, A. Smerieri, M.G. Terzano, M. Elia, S.A. Musumeci, S. Pettinato, C.J. Stam, Non-linear EEG measures during sleep: effects of the different
sleep stages and cyclic alternating pattern, Int. J. Psychophysiol. 43 (2002)
273286.
[89] S. Janjarasjitt, M.S. Scher, K.A. Loparo, Nonlinear dynamical analysis of the
neonatal EEG time series: the relationship between sleep state and complexity, Clin. Neurophysiol. 119 (2008) 18121823.
[90] M.S. Scher, H. Waisanen, K. Loparo, M.W. Johnson, Prediction of neonatal state
and maturational change using dimensional analysis, J. Clin. Neurophysiol. 22
(2005) 159165.
[91] J. Roeschke, J. Aldenhoff, The dimensionality of the human electroencephalogram during sleep, Biol. Cybern. 64 (1991) 307–314.
[92] P. Achermann, R. Hartmann, A. Gunzinger, W. Guggenbuhl, A.A. Borbely, All-night sleep EEG and artificial stochastic-control signals have similar correlation dimensions, Electroencephalogr. Clin. Neurophysiol. 90 (1994) 384–387.
[93] T. Kobayashi, S. Madokoro, Y. Wada, K. Misaki, H. Nakagawa, Human sleep EEG analysis using the correlation dimension, Clin. Electroencephalogr. 32 (2001) 112–118.
[94] J. Skinner, M. Molnar, C. Tomberg, The point correlation dimension: performance with nonstationary surrogate data and noise, Integr. Psychol. Behav. Sci. 29 (1994) 217–234.
[95] S.M. Pincus, I.M. Gladstone, R.A. Ehrenkranz, A regularity statistic for medical data analysis, J. Clin. Monit. 7 (1991) 335–345.
[96] S.M. Pincus, Approximate entropy as a measure of system complexity, Proc. Natl. Acad. Sci. U.S.A. 88 (1991) 2297–2301.
[97] S.M. Pincus, A.L. Goldberger, Physiological time-series analysis: what does regularity quantify? Am. J. Physiol. 266 (1994) H1643–H1656.
[98] Y. Fusheng, H. Bo, T. Qingyu, Approximate entropy and its application in biosignal analysis, in: M. Akay (Ed.), Nonlinear Biomedical Signal Processing, IEEE Press, 2000.
[99] J.S. Richman, J.R. Moorman, Physiological time-series analysis using approximate entropy and sample entropy, Am. J. Physiol. Heart Circ. Physiol. 278 (2000) H2039–H2049.
[100] J. Ge, P. Zhou, X. Zhao, M. Wang, Sample Entropy Analysis of Sleep EEG Under Different Stages, IEEE, Beijing, China, 2007, pp. 1527–1530.
[101] H.G. Schuster, W. Just, Deterministic Chaos: An Introduction, Wiley-VCH, 2005.
[102] J. Roschke, J. Fell, P. Beckmann, The calculation of the first positive Lyapunov exponent in sleep EEG data, Electroencephalogr. Clin. Neurophysiol. 86 (1993) 348–352.
[103] A. Wolf, J.B. Swift, H.L. Swinney, J.A. Vastano, Determining Lyapunov exponents from a time series, Phys. D 16 (1985) 285–317.
[104] J. Fell, J. Roschke, P. Beckmann, Deterministic chaos and the first positive Lyapunov exponent: a nonlinear analysis of the human electroencephalogram during sleep, Biol. Cybern. 69 (1993) 139–146.
[105] X.Y. Wang, L. Chao, M. Juan, Nonlinear dynamic research on EEG signals in HAI experiment, Appl. Math. Comput. 207 (2009) 63–74.
[106] T. Shimada, T. Shiina, Y. Saito, CT sleep stage diagnosis system with neural network analysis, in: H.K. Chang, Y.T. Zhang (Eds.), Engineering in Medicine and Biology Society, 1998. Proceedings of the 20th Annual International Conference of the IEEE, IEEE, Hong Kong, China, 1998, pp. 2074–2077.
[107] I.N. Bankman, V.G. Sigillito, R.A. Wise, P.L. Smith, Feature-based detection of the K-complex wave in the human electroencephalogram using neural networks, IEEE Trans. Biomed. Eng. 39 (1992) 1305–1310.
[108] D.R. Liu, Z.Y. Pang, S.R. Lloyd, A neural network method for detection of obstructive sleep apnea and narcolepsy based on pupil size and EEG, IEEE Trans. Neural Netw. 19 (2008) 308–318.
[109] M.E. Tagluk, M. Akin, N. Sezgin, Classification of sleep apnea by using wavelet transform and artificial neural networks, Expert Syst. Appl. 37 (2009) 1600–1607.
[110] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall PTR, 1994.
[111] C. Robert, J.F. Gaudy, A. Limoge, Electroencephalogram processing using neural networks, Clin. Neurophysiol. 113 (2002) 694–701.
[112] C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, Inc., 1995.
[113] T.M. Mitchell, Machine learning and data mining, Commun. ACM 42 (1999) 30–36.
[114] G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Syst. 2 (1989) 303–314.
[115] D. Balakrishnan, S. Puthusserypady, Multilayer perceptrons for the classification of brain computer interface data, in: Bioengineering Conference, 2005. Proceedings of the IEEE 31st Annual Northeast, 2005, pp. 118–119.
[116] J.C. Bezdek, S.K. Pal, Fuzzy models for pattern recognition: background, significance, and key points, in: Fuzzy Models for Pattern Recognition, IEEE Press, 1992.
[117] A.K. Jain, M.N. Murty, P.J. Flynn, Data clustering: a review, ACM Comput. Surv. 31 (1999) 264–323.
[118] T. Kohonen, T. Honkela, Kohonen Network, 2007.
[119] T. Kohonen, The self-organizing map, Proc. IEEE 78 (1990) 1464–1480.
[120] T. Kohonen, Self-organized formation of topologically correct feature maps, Biol. Cybern. 43 (1982) 59–69.
[121] S. Roberts, L. Tarassenko, Analysis of the Human EEG Using Self-organising Neural Nets, IEE, London, UK, 1992, pp. 6/1–6/3.
[122] S. Roberts, L. Tarassenko, Analysis of the sleep EEG using a multilayer network with spatial organisation, IEE Proc. Radar Signal Process. 139 (1992) 420–425.
[123] T. Kohonen, Self-organizing Maps, Springer, 2001.
[124] J.Y. Tian, J.Q. Liu, Automated sleep staging by a hybrid system comprising neural network and fuzzy rule-based reasoning, in: 2005 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vols. 1–7, IEEE, New York, 2005, pp. 4115–4118.
[125] S. Roberts, L. Tarassenko, New method of automated sleep quantification, Med. Biol. Eng. Comput. 30 (1992) 509–517.
[126] M. Golz, D. Sommer, T. Lembcke, B. Kurella, Classification of Pre-stimulus EEG of K-complexes Using Competitive Learning Networks, vol. 1763, Verlag Mainz, Aachen, Germany, 1998, pp. 1767–1771.
[127] D. Michie, D.J. Spiegelhalter, C.C. Taylor, Machine Learning, Neural and Statistical Classification, Ellis Horwood, 1994, 289 pp.
[128] R. Fisher, The use of multiple measurements in taxonomic problems, Ann. Eugen. 7 (1936) 179–188.
[129] L. Devroye, L. Györfi, G. Lugosi, A Probabilistic Theory of Pattern Recognition (Stochastic Modelling and Applied Probability), Springer, 1996.
[130] V.N. Vapnik, Statistical Learning Theory, Wiley Inter-Science, 1998.
[131] F. Lotte, M. Congedo, A. Lecuyer, F. Lamarche, B. Arnaldi, A review of classification algorithms for EEG-based brain–computer interfaces, J. Neural Eng. 4 (2007) R1–R13.
[137] A. Flexer, G. Dorffner, P. Sykacek, I. Rezek, An automatic, continuous and probabilistic sleep stager based on a hidden Markov model, Appl. Artif. Intell. 16 (2002) 199–207.
[138] L.A. Zadeh, Fuzzy sets, Inf. Control 8 (1965) 338–353.
[139] I. Gath, A.B. Geva, Unsupervised optimal fuzzy clustering, IEEE Trans. Pattern Anal. Mach. Intell. 11 (1989) 773–781.
[140] C.M. Held, J.E. Heiss, P.A. Estevez, C.A. Perez, M. Garrido, C. Algarin, P. Peirano, Extracting fuzzy rules from polysomnographic recordings for infant sleep classification, IEEE Trans. Biomed. Eng. 53 (2006) 1954–1962.
[141] H. Laufs, M.C. Walker, T.E. Lund, Brain activation and hypothalamic functional connectivity during human non-rapid eye movement sleep: an EEG/fMRI study – its limitations and an alternative approach, Brain 130 (2007) e75.