Andre Kohn
University of São Paulo
AUTOCORRELATION AND
CROSSCORRELATION METHODS
1. INTRODUCTION
… started being observed in the experiment and t2 the final time of observation). Such signals are indicated as y(t), x(t), w(t), and so on. On the other hand, a discrete-time signal is a set of measurements taken sequentially in time (e.g., at every millisecond). Each measurement point is usually called a sample, and a discrete-time signal is indicated by y(n), x(n), and w(n), where the index n is an integer that points to the order of the measurements in the sequence. Note that the time interval T between two adjacent samples is not shown explicitly in the y(n) representation, but this information is used whenever an interpretation is required based on continuous-time units (e.g., seconds). As a result of the low price of computers and microprocessors, almost any equipment used today in medicine or biomedical research uses digital signal processing, which means that the signals are functions of discrete time.

From basic probability and statistics theory, it is known that in the analysis of a random variable (e.g., the height of a population of human subjects), the mean and the variance are very useful quantifiers (2). When studying the linear relationship between two random variables (e.g., the height and the weight of individuals in a population), the correlation coefficient is an extremely useful quantifier (2). The correlation coefficient between N measurements of pairs of random variables, such as the weight w and height h of human subjects, may be estimated by

r = [(1/N) Σ_{i=0}^{N−1} (w(i) − w̄)(h(i) − h̄)] / sqrt{[(1/N) Σ_{i=0}^{N−1} (w(i) − w̄)²] [(1/N) Σ_{i=0}^{N−1} (h(i) − h̄)²]},   (1)

where [w(i), h(i)], i = 0, 1, 2, …, N − 1, are the N pairs of measurements (e.g., from subject number 0 up to subject number N − 1), and w̄ and h̄ are the mean values computed from the N values of w(i) and h(i), respectively. Sometimes r is called the linear correlation coefficient to emphasize that it quantifies the degree of linear relation between two variables. If the correlation coefficient between the two variables w and h is near the maximum value 1, it is said that the variables have a strong positive linear correlation, and the measurements will gather around a line with positive slope when one variable is plotted against the other. On the contrary, if the correlation coefficient is near the minimum attainable value of −1, it is said that the two variables have a strong negative linear correlation. In this case, the measured points will gather around a negatively sloped line. If the correlation coefficient is near the value 0, the two variables are not linearly correlated, and the plot of the measured points will show a spread that does not follow any specific straight line. Here, it may be important to note that two variables may have a strong nonlinear correlation and yet have an almost zero value for the linear correlation coefficient r. For example, 100 normally distributed random samples were generated by computer for a variable h, whereas variable w was computed according to the quadratic relation w = 300(h − h̄)² + 50. A plot of the pairs of points (w, h) will show that the samples follow a parabola, which means that they are strongly correlated along such a parabola. On the other hand, the value of r was 0.0373. Statistical analysis suggests that such a low value of linear correlation is not significantly different from zero. Therefore, a near-zero value of r does not necessarily mean the two variables are not associated with one another; it could mean that they are nonlinearly associated (see Extensions and Further Applications).

On the other hand, a random signal is a broadening of the concept of a random variable by the introduction of variations along time and is part of the theory of random processes. Many biological signals vary in a random way in time (e.g., the EMG in Fig. 1), and hence their mathematical characterization has to rely on probabilistic concepts (3–5). For a random signal, the mean and the autocorrelation are useful quantifiers, the first indicating the constant level about which the signal varies and the second indicating the statistical dependencies between the values of two samples taken at a certain time interval. The time relationship between two random signals may be analyzed by the cross-correlation, which is very often used in biomedical research.

Figure 1. Two signals obtained from an experiment involving a human pressing a pedal with his right foot. (a) The EMG of the soleus muscle and (b) the force or torque applied to the pedal are represented. The abscissae are in seconds and the ordinates are in arbitrary units. The ordinate calibration is not important here because in the computation of the correlation a division by the standard deviation of each signal exists. Only the first 20 s of the data are shown here. A 30 s data record was used to compute the graphs of Figs. 2 and 3.
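The nonlinear-correlation example above is easy to reproduce. The sketch below computes r from Equation 1 for the quadratic relation w = 300(h − h̄)² + 50; the random seed and sample values are choices made for this illustration, so the resulting r will be near zero but not exactly the 0.0373 quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 normally distributed samples for h; w from the quadratic relation
# w = 300*(h - h_mean)**2 + 50 used in the text's example
h = rng.standard_normal(100)
w = 300.0 * (h - h.mean()) ** 2 + 50.0

# Linear correlation coefficient r of Equation 1 (the 1/N factors cancel)
num = np.sum((w - w.mean()) * (h - h.mean()))
den = np.sqrt(np.sum((w - w.mean()) ** 2) * np.sum((h - h.mean()) ** 2))
r = num / den                      # near zero: no *linear* relation

# Yet w is perfectly (nonlinearly) related to h: correlating w with the
# quadratic regressor (h - h_mean)**2 gives a coefficient of exactly 1
r_quad = np.corrcoef(w, (h - h.mean()) ** 2)[0, 1]
```

The same r is returned by `np.corrcoef(w, h)[0, 1]`, since Equation 1 is the standard sample correlation coefficient.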
Let us analyze briefly the problem of studying quantitatively the time relationship between the two signals shown in Fig. 1. Although the EMG in Fig. 1a looks erratic, its amplitude modulations seem to have some periodicity. Such slow amplitude modulations are sometimes called the "envelope" of the signal, which may be estimated by smoothing the absolute value of the signal. The force in Fig. 1b is much less erratic and exhibits a clearer oscillation. Questions that may develop regarding such signals (the EMG envelope and the force) include: What periodicities are involved in the two signals? Are they the same in the two signals? If so, is there a delay between the two oscillations? What are the physiological interpretations? To answer the questions on the periodicities of each signal, one may analyze their respective autocorrelation functions, as shown in Fig. 2. The autocorrelation of the absolute value of the EMG (a simple estimate of the envelope) shown in Fig. 2a has low-amplitude oscillations, while those of the force (Fig. 2b) are large, but both have the same periodicity. The much lower amplitude of the oscillations in the autocorrelation function of the absolute value of the EMG, when compared with that of the force autocorrelation function, reflects the fact that the periodicity in the EMG amplitude modulations is masked to a good degree by a random activity, which is not the case for the force signal. To analyze the time relationship between the EMG envelope and the force, their cross-correlation is shown in Fig. 3a. The cross-correlation function in this figure has the same period of oscillation as that of the random signals. In the more refined view of Fig. 3b, it can be seen that the peak occurring closer to zero time shift does so at a negative delay, meaning the EMG precedes the soleus muscle force. Many factors, experimental and physiologic, contribute to such a delay between the electrical activity of the muscle and the torque exerted by the foot.

Signal processing tools such as the autocorrelation and the cross-correlation have been used with much success in a number of biomedical research projects. A few examples will be cited for illustrative purposes. In a study of absence epileptic seizures in animals, the cross-correlation between waves obtained from the cortex and a brain region called the subthalamic nucleus was a key tool to show that the two regions have their activities synchronized by a specific corticothalamic network (6). The cross-correlation function was used in Ref. 7 to show that insulin secretion by the pancreas is an important determinant of insulin clearance by the liver. In a study of preterm neonates, it was shown in Ref. 8 that the correlation between the heart rate variability (HRV) and the respiratory rhythm was similar to that found in the fetus. The same authors also employed the correlation analysis to compare the effects of two types of artificial ventilation equipment on the HRV-respiration interrelation.

After an interpretation is drawn from a cross-correlation study, this signal processing tool may be potentially useful for diagnostic purposes. For example, in healthy subjects, the cross-correlation between arterial blood pressure and intracranial blood flow showed a negative peak at positive delays, differently from patients with a malfunctioning cerebrovascular system (9).
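The envelope estimate mentioned above (smoothing the absolute value of the signal) can be sketched as follows. The EMG-like test signal, sampling rate, and window length are assumptions made for this illustration, not data from the experiment in the text:

```python
import numpy as np

def envelope(x, win_len=101):
    """Envelope estimate: absolute value (rectification) followed by a
    moving-average smoothing window."""
    w = np.ones(win_len) / win_len
    return np.convolve(np.abs(x), w, mode="same")

# EMG-like test signal (an assumption for illustration): white noise whose
# amplitude is modulated by a slow 0.5-Hz oscillation, sampled at 1 kHz
fs = 1000
t = np.arange(0, 10, 1 / fs)
modulation = 1.0 + 0.8 * np.sin(2 * np.pi * 0.5 * t)
x = modulation * np.random.default_rng(1).standard_normal(t.size)

env = envelope(x)   # tracks the slow amplitude modulation of x
```

Although x itself looks erratic, env follows the slow modulation closely, which is why the autocorrelation of the rectified EMG reveals the periodicity of its envelope.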
Next, the step-by-step computation of an autocorrelation function will be shown based on the known concept of the correlation coefficient of statistics. Actually, different, but related, definitions of autocorrelation and cross-correlation exist in the literature. Some are normalized versions of others, for example. The definition to be given in this section is not the one usually studied in undergraduate engineering courses, but it is presented here first because it is probably easier to understand by readers from other backgrounds. Other definitions will be presented in later sections, and the links between them will be readily apparent. In this section, the single term autocorrelation shall be used for simplicity; later (see Basic Definitions), some of the more precise names that have been associated with the definition presented here will be given (10,11).

Figure 2. Autocorrelation functions of the signals shown in Fig. 1. (a) shows the autocorrelation function of the absolute value of the EMG and (b) shows the autocorrelation function of the force. These autocorrelation functions were computed based on the correlation coefficient, as explained in the text. The abscissae are in seconds and the ordinates are dimensionless, ranging from −1 to 1. For these computations, the initial transients from 0 to 5 s in both signals were discarded.

The approach of defining an autocorrelation function based on the cross-correlation coefficient should help in the understanding of what the autocorrelation function tells us about a random signal. Assume that we are given a random signal x(n), with n being the counting variable: n = 0, 1, 2, …, N − 1. For example, the samples of x(n) may have been measured at every 1 ms, there being a total of N samples.

The mean or average of signal x(n) is the value x̄ given by

x̄ = (1/N) Σ_{n=0}^{N−1} x(n),   (2)

and gives an estimate of the value about which the signal varies. As an example, in Fig. 4a, the signal x(n) has a mean that is approximately equal to 0. In addition to the
[Figure 3. Cross-correlation between the EMG envelope and the force: (a) full view over a wide range of time shifts; (b) expanded view near zero time shift, showing a peak at −170 ms.]
mean, another function is needed to characterize how x(n) varies in time. In this example, one can see that x(n) has some periodicity, oscillating with positive and negative peaks repeating approximately every 10 samples (Fig. 4a). The new function to be defined is the autocorrelation r_xx(k) of x(n), which will quantify how much a given signal is similar to time-shifted versions of itself (5). One way to compute it is by using the following formula, based on the definition in Equation 1:

r_xx(k) = [(1/N) Σ_{n=0}^{N−1} (x(n − k) − x̄)(x(n) − x̄)] / [(1/N) Σ_{n=0}^{N−1} (x(n) − x̄)²],   (3)

where x(n) is supposed to have N samples. Any sample outside the range [0, N − 1] is taken to be zero in the computation of r_xx(k).

The computation steps are as follows:

* Compute the correlation coefficient between the N samples of x(n) paired with the N samples of x(n) and call it r_xx(0). The value of r_xx(0) is equal to 1 because any pair is formed by two equal values (e.g., [x(0), x(0)], [x(1), x(1)], …, [x(N − 1), x(N − 1)]), as seen in the scatter plot of the samples of x(n) with those of x(n) in Fig. 5a. The points are all along the diagonal, which means the correlation coefficient is unity.

* Next, shift x(n) by one sample to the right, obtaining x(n − 1), and then determine the correlation coefficient between the samples of x(n) and x(n − 1) (i.e., for n = 1, take the pair of samples [x(1), x(0)]; for n = 2, take the pair [x(2), x(1)]; and so on, until the pair [x(N − 1), x(N − 2)]). The correlation coefficient of these pairs of points is denoted r_xx(1).

* Repeat for a two-sample shift and compute r_xx(2), for a three-sample shift and compute r_xx(3), and so on. When x(n) is shifted by 3 samples to the right (Fig. 4b), the resulting signal x(n − 3) has its peaks and valleys still repeating at approximately 10 samples, but these are no longer aligned with those of x(n). When their scatter plot is drawn (Fig. 5b), it seems that their correlation coefficient is near zero, so we should have r_xx(3) ≈ 0. Note that as x(n − 3) is equal to x(n) delayed by 3 samples, the need exists to define what the values of x(n − 3) are for n = 0, 1, 2. As x(n) is known only from n = 0 onwards, we make the three initial samples of x(n − 3) equal to 0, which has sometimes been called in the engineering literature zero padding.

* Shifting x(n) by 5 samples to the right (Fig. 4c) generates a signal x(n − 5) still with the same periodicity as the original x(n), but with peaks aligned with the valleys in x(n). The corresponding scatter plot (Fig. 5c) indicates a negative correlation coefficient.

* Finally, shifting x(n) by a number of samples equal to the approximate period gives x(n − 10), which has its peaks (valleys) approximately aligned with the peaks (valleys) of x(n), as can be seen in Fig. 4d. The corresponding scatter plot (Fig. 5d) indicates a positive correlation coefficient. If x(n) is shifted by multiples of 10, there will again be coincidences between its peaks and those of x(n), and again the correlation coefficient of their samples will be positive.
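The steps above can be sketched directly from Equation 3. The test signal below (a cosine with a 10-sample period) is an assumption standing in for the random signal of Fig. 4, but it reproduces the same qualitative behavior: r_xx(0) = 1, a strong negative value at a half-period shift, and a strong positive value at a full-period shift:

```python
import numpy as np

def rxx(x, k):
    """Normalized autocorrelation of Equation 3.  Samples of x outside
    [0, N-1] are taken as zero (the text's zero padding)."""
    x = np.asarray(x, dtype=float)
    N = x.size
    xm = x.mean()
    num = 0.0
    for n in range(N):
        xnk = x[n - k] if 0 <= n - k < N else 0.0   # zero padding
        num += (xnk - xm) * (x[n] - xm)
    return num / np.sum((x - xm) ** 2)

# Signal with a period of 10 samples, echoing the example in the text
n = np.arange(200)
x = np.cos(2 * np.pi * n / 10)

# rxx(x, 0) is 1; rxx(x, 5) is strongly negative (half-period shift);
# rxx(x, 10) is strongly positive (full-period shift)
```

The value at a full-period shift falls slightly below 1 (here about 0.95) because the zero-padded samples contribute nothing to the numerator while the denominator still uses all N samples.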
Figure 4. Random discrete-time signal x(n) in (a) is used as a basis to explain the concept of autocorrelation. In (b)–(d), the samples of x(n) were delayed by 3, 5, and 10 samples, respectively. The two vertical lines were drawn to help visualize the temporal relations between the samples of the reference signal at the top and the three time-shifted versions below. (This figure is available in full color at http://www.mrw.interscience.wiley.com/ebe.)
[Figure 5. Scatter plots of the sample amplitudes of x(n) against those of x(n), x(n − 3), x(n − 5), and x(n − 10) in (a)–(d), respectively.]
Figure 8. The respective autocorrelation functions of the two signals in Fig. 7. They are quite different from each other: in (a) the decay is monotonic to both sides from the peak value of 1 at τ = 0, and in (b) the decay is oscillatory. These major differences between the two random signals shown in Fig. 7 are not visible directly from their time courses. The abscissae are in milliseconds.

In addition, the autocorrelation is able to uncover a periodic signal masked by noise (e.g., Fig. 1a and Fig. 2a), which is relevant in the biomedical setting because many times the biologically interesting signal is masked by other ongoing biological signals or by measurement noise. Finally, when validating stochastic models of biological systems such as neurons, autocorrelation functions obtained from the mathematical model of the physiological system may be compared with autocorrelations computed from experimental data obtained from the physiological system; see, for example, Kohn (12).

When two signals x and y are measured simultaneously in a given experiment, as in the example of Fig. 1, one may be interested in knowing if the two signals are entirely independent from each other, or if some correlation exists between them. In the simplest case, there could be a delay between one signal and the other. The cross-correlation is a frequently used tool when studying the dependency between two random signals. A (normalized) cross-correlation r_xy(k) may be defined and explained in the same way as was done for the autocorrelation in Figs. 4–6 (i.e., by computing the correlation coefficients between the samples of one of the signals, x(n), and those of the other signal time-shifted by k, y(n − k)) (5). The formula is the following:

r_xy(k) = [(1/N) Σ_{n=0}^{N−1} (x(n) − x̄)(y(n − k) − ȳ)] / sqrt{[(1/N) Σ_{n=0}^{N−1} (x(n) − x̄)²] [(1/N) Σ_{n=0}^{N−1} (y(n) − ȳ)²]},   (4)

where both signals are supposed to have N samples each. Any sample of either signal outside the range [0, N − 1] is taken to be zero.

2. AUTOCORRELATION OF A STATIONARY RANDOM PROCESS

2.1. Introduction

Initially, some concepts from random process theory shall be reviewed briefly, as covered in undergraduate courses in electrical or biomedical engineering; see, for example, Peebles (3) or Papoulis and Pillai (10). A random process is an infinite collection or ensemble of functions of time (continuous or discrete time), called sample functions or realizations (e.g., segments of EEG recordings). In continuous time, one could indicate the random process as X(t) and in discrete time as X(n). Each sample function is associated with the outcome of an experiment that has a given probabilistic description and may be indicated by x(t) or x(n), for continuous or discrete time, respectively. When the ensemble of sample functions is viewed at any single time, say t1 for a continuous-time process, a random variable is obtained whose probability distribution function is F_{X,t1}(a1) = P[X(t1) ≤ a1], for a1 ∈ ℝ, where P[·] stands for probability. If the process is viewed at two times t1 and t2, a bivariate distribution function is needed to describe the pair of resulting random variables X(t1) and X(t2): F_{X,t1,t2}(a1, a2) = P[X(t1) ≤ a1, X(t2) ≤ a2], for a1, a2 ∈ ℝ. The random process is fully described by the joint distribution functions of any N random variables defined at arbitrary times t1, t2, …, tN, for an arbitrary integer number N:

P[X(t1) ≤ a1, X(t2) ≤ a2, …, X(tN) ≤ aN], for a1, a2, …, aN ∈ ℝ.   (5)

In many applications, the properties of the random process may be assumed to be independent of the specific values t1, t2, …, tN, in the sense that if a fixed time shift T is given to all instants ti, i = 1, …, N, the probability distribution function does not change:

P[X(t1 + T) ≤ a1, X(t2 + T) ≤ a2, …, X(tN + T) ≤ aN] = P[X(t1) ≤ a1, X(t2) ≤ a2, …, X(tN) ≤ aN].   (6)

If Equation 6 holds for all possible values of ti, T, and N, the process is called strict-sense stationary (3). This class of random processes has interesting properties,
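The invariance expressed by Equation 6 can be illustrated numerically. The sketch below uses a random-phase cosine, a standard textbook example of a stationary process; the process, seed, and parameters are assumptions for this illustration, not material from the text. Ensemble averages taken at different absolute times agree, and the second moment E[X(n + k)X(n)] depends only on the lag k:

```python
import numpy as np

# A simple stationary process (assumed here for illustration):
# a cosine with phase uniformly distributed in [0, 2*pi)
rng = np.random.default_rng(2)
M, N = 20000, 64                    # realizations in the ensemble, samples each
phase = rng.uniform(0.0, 2.0 * np.pi, size=(M, 1))
n = np.arange(N)
X = np.cos(2 * np.pi * n / 20 + phase)   # each row is one sample function

# Ensemble mean at two different times: both near 0
m_a, m_b = X[:, 3].mean(), X[:, 50].mean()

# E[X(n+4) X(n)] estimated at n = 3 and at n = 50: near-equal,
# since the statistics depend only on the lag (here 4 samples)
r_a = (X[:, 3 + 4] * X[:, 3]).mean()
r_b = (X[:, 50 + 4] * X[:, 50]).mean()
```

For this process the common value is 0.5·cos(2π·4/20), which both estimates approach as the number of realizations M grows.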
C_xx(k) = E[(X(n + k) − m_x)(X(n) − m_x)] = R_xx(k) − m_x²,   (14)

where again C_xx(0) = σ_x² = E[(X(k) − m_x)²] is the constant variance of the stationary random process X(k). Also m_x = E[X(n)], a constant, is its expected value. In many applications, the interest is in studying the variations of a random process about its mean, which is what the autocovariance represents. For example, to characterize the variability of the muscle force exerted by a human subject in a certain test, the interest is in quantifying how the force varies randomly around the mean value, and hence the autocovariance is more interesting than the autocorrelation.

The independent variable τ or k in Equations 10, 11, 13, or 14 will be called, interchangeably, time shift, lag, or delay.

It should be mentioned that some books and papers, mainly those on time series analysis (5,11), define the autocorrelation function of a random process as (for discrete time): …

|R_xx(k)| ≤ R_xx(0) = σ_x² + m_x², ∀k ∈ ℤ,   (19)

|C_xx(k)| ≤ C_xx(0) = σ_x², ∀k ∈ ℤ,   (20)

and

|r_xx(k)| ≤ 1, ∀k ∈ ℤ,   (21)

with r_xx(0) = 1. These relations say that the maximum of either the autocorrelation or the autocovariance occurs at lag 0.

Any discrete-time ergodic random process without a periodic component will satisfy the following limits:

lim_{|k|→∞} R_xx(k) = m_x²   (22)

and

lim_{|k|→∞} C_xx(k) = 0.   (23)
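These properties can be checked numerically on a long realization of an ergodic process. The first-order autoregression below, its coefficient, mean, and seed are assumptions chosen for this illustration; its autocovariance peaks at lag 0 (Equation 20) and decays toward 0 (Equation 23), while the autocorrelation approaches m_x² (Equation 22):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200000
# Ergodic process with no periodic component (assumed example): a first-order
# autoregression x(n) = 0.9*x(n-1) + w(n), shifted to have a mean of 2
x = np.empty(N)
x[0] = 0.0
w = rng.standard_normal(N)
for i in range(1, N):
    x[i] = 0.9 * x[i - 1] + w[i]
x += 2.0                            # m_x = 2, so m_x**2 = 4

xc = x - x.mean()
c = np.array([np.mean(xc[k:] * xc[:N - k]) for k in range(100)])  # Cxx(k)

# c[0] is the largest value in magnitude; c[k] decays toward 0; and
# Rxx(k) = Cxx(k) + mean**2 approaches m_x**2 = 4 at large lags
```

Here the lag-k estimate averages the products of samples k apart, the biased sample autocovariance commonly used with long records.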
where the angular frequency ω is in rad/s. The average power P_xx of the random process X(t) is

P_xx = (1/2π) ∫_{−∞}^{∞} S_xx(jω) dω = R_xx(0).   (26)

If the average power in a given frequency band [ω1, ω2] is needed, it can be computed by

P_xx = (1/π) ∫_{ω1}^{ω2} S_xx(jω) dω.   (27)

For discrete time,

S_xx(e^{jΩ}) = Discrete-Time Fourier Transform [R_xx(k)],   (28)

where Ω is the normalized angular frequency given in rad (Ω = ωT, where T is the sampling interval). In Equation 28 the power spectrum is periodic in Ω, with a period equal to 2π. The average power P_xx of the random process X(n) is

P_xx = (1/2π) ∫_{−π}^{π} S_xx(e^{jΩ}) dΩ = R_xx(0).   (29)

Other common names for the power spectrum are power spectral density and autospectrum. The power spectrum is a real, non-negative, and even function of frequency (10,15), which requires a positive semi-definite autocorrelation function (10). This means that not all functions that satisfy Equations 16 and 19 are valid autocorrelation functions, because the corresponding power spectrum could have negative values for some frequency ranges, which is absurd.

The power spectrum should be used instead of the autocorrelation function in situations such as: (1) when the objective is to study the bandwidth occupied by a random signal, and (2) when one wants to discover if there are several periodic signals masked by noise (for a single periodic signal masked by noise the autocorrelation may be useful too).

The autocorrelation of continuous-time white noise is

R_xx(τ) = C δ(τ) + m_x²,   (34)

where m_x is the mean of the process. From Equations 12 and 30 it follows that the variance of the continuous-time white process is infinite (14), which indicates that it is not physically realizable. From Equation 30 we conclude that, for any time shift value τ, no matter how small, the correlation coefficient between any value in X(t) and the value at X(t + τ) would be equal to zero, which is certainly impossible to satisfy in practice because of the finite risetimes of the outputs of any physical system. From Equation 25 it follows that the power spectrum of continuous-time white noise (with m_x = 0) has a constant value equal to C at all frequencies. The name white noise comes from an extension of the concept of "white light," which similarly has constant power over the range of frequencies in the visible spectrum. White noise is non-realizable, because it would have to be generated by a system with infinite bandwidth.

Engineering texts circumvent the difficulties with continuous-time white noise by defining a band-limited white noise (3,13). The corresponding power spectral density is constant up to a very high frequency (ω_c) and is zero elsewhere, which makes the variance finite. The maximum spectral frequency ω_c is taken to be much higher than the bandwidth of the system to which the noise is applied. Therefore, in approximation, the power spectrum is taken to be constant at all frequencies, the autocovariance is a Dirac delta function, and yet a finite variance is defined for the random process.

The utility of the concept of white noise develops when it is applied at the input of a finite-bandwidth system, because the corresponding output is a well-defined random process with physical significance (see next section).

In discrete time, the white-noise process has an autocovariance proportional to the unit sample sequence:

C_xx(k) = C δ(k).   (35)
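The discrete-time white-noise autocovariance of Equation 35 is easy to check on computer-generated noise. The values of C and m_x below are arbitrary choices for this illustration; the sequence is built by scaling unit-variance Gaussian samples by √C and adding the constant mean, as described in the text:

```python
import numpy as np

rng = np.random.default_rng(4)
C, mx = 4.0, 1.5          # arbitrary values chosen for this illustration
N = 100000

# White sequence with autocovariance C*delta(k) and mean mx
x = np.sqrt(C) * rng.standard_normal(N) + mx

xc = x - x.mean()
cov = np.array([np.mean(xc[k:] * xc[:N - k]) for k in range(6)])
# cov[0] estimates C; cov[k] for k >= 1 stays near 0 (Equation 35)
```

The zero-lag estimate approaches C and all nonzero lags hover near 0, with fluctuations that shrink as the record length N grows.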
… of zero mean and unit variance for a normal or Gaussian distribution. To achieve desired values of C in Equation 35 and m_x² in Equation 36, one should multiply the (zero mean, unit variance) values of the computer-generated white sequence by √C and add to each resulting value the constant value m_x.

3. AUTOCORRELATION OF THE OUTPUT OF A LINEAR SYSTEM WITH RANDOM INPUT

In relation to the examples presented in the previous section, one may ask how two random processes may develop such differences in the autocorrelation as seen in Fig. 8. How may one autocorrelation be monotonically decreasing (for increasing positive τ) while the other exhibits oscillations? For this purpose, how the autocorrelation of a signal changes when it is passed through a time-invariant linear system will be examined.

If a continuous-time linear system has an impulse response h(t), and a random process X(t) with an autocorrelation R_xx(τ) is applied at its input, the resultant output y(t) will have an autocorrelation given by the following convolutions (10):

R_yy(τ) = h(τ) ∗ h(−τ) ∗ R_xx(τ).   (37)

Note that h(τ) ∗ h(−τ) may be viewed as an autocorrelation of h(τ) with itself and, hence, is an even function. Taking the Fourier transform of Equation 37, we conclude that the output power spectrum S_yy(jω) is the absolute value squared of the frequency response function H(jω) times the input power spectrum S_xx(jω):

S_yy(jω) = |H(jω)|² S_xx(jω),   (38)

where H(jω) is the Fourier transform of h(t), S_xx(jω) is the Fourier transform of R_xx(τ), and S_yy(jω) is the Fourier transform of R_yy(τ).

The corresponding expressions for the autocorrelation and power spectrum of the output signal from a discrete-time system are (15)

R_yy(k) = h(k) ∗ h(−k) ∗ R_xx(k)   (39)

and

S_yy(e^{jΩ}) = |H(e^{jΩ})|² S_xx(e^{jΩ}).   (40)

… a triangular sequence centered at k = 0, with amplitude 0.5 at k = 0, amplitude 0.25 at k = ±1, and 0 elsewhere.

If two new random processes are defined as U = X − m_x and Q = Y − m_y, it follows from the definitions of autocorrelation and autocovariance that R_uu = C_xx and R_qq = C_yy. Therefore, applying Equation 37 or Equation 39 to a system with input U and output Q, similar expressions to Equations 37 and 39 are obtained relating the input and output autocovariances, shown below only for the discrete-time case:

C_yy(k) = h(k) ∗ h(−k) ∗ C_xx(k).   (41)

Furthermore, if U = (X − m_x)/σ_x and Q = (Y − m_y)/σ_y are new random processes, we have R_uu = r_xx and R_qq = r_yy. From Equations 37 or 39, similar relations between the normalized autocovariance functions of the output and the input of the linear system are obtained, shown below only for the discrete-time case (for continuous time, use τ instead of k):

r_yy(k) = h(k) ∗ h(−k) ∗ r_xx(k).   (42)

A monotonically decreasing autocorrelation or autocovariance may be obtained (e.g., Fig. 8a) when, for example, a white noise is applied at the input of a system that has a monotonically decreasing impulse response (e.g., of a first-order system or a second-order overdamped system). As an example, apply a zero-mean white noise to a system that has an impulse response equal to e^{−at} u(t), where u(t) is the Heaviside step function (u(t) = 1 for t ≥ 0 and u(t) = 0 for t < 0). From Equation 37, this system's output random signal will have an autocorrelation

R_yy(τ) = e^{−aτ} u(τ) ∗ e^{aτ} u(−τ) ∗ δ(τ) = e^{−a|τ|} / (2a).   (43)

This autocorrelation has its peak at τ = 0 and decays exponentially on both sides of the time shift axis, qualitatively following the shape seen in Fig. 8a. On the other hand, an oscillatory autocorrelation or autocovariance, as seen in Fig. 8b, may be obtained when the impulse response h(t) is oscillatory, which may occur, for example, in a second-order underdamped system that would have an impulse response h(t) = e^{−at} cos(ω₀t) u(t).
(e.g., two interconnected sites in the brain (17)) or a physiological coupling between the two recorded signals (e.g., heart rate variability and respiration (18)). On the other hand, they could be independent because no anatomical and physiological link exists between the two signals. Start with a formal definition of independence of two random processes X(n) and Y(n).

Two random processes X(n) and Y(n) are independent when any set of random variables {X(n₁), X(n₂), …, X(n_N)} taken from X(n) is independent of another set of random variables {Y(n′₁), Y(n′₂), …, Y(n′_M)} taken from Y(n). Note that the time instants at which the random variables are defined from each random process are taken arbitrarily, as indicated by the sets of integers n_i, i = 1, 2, …, N, and n′_i, i = 1, 2, …, M, with N and M being arbitrary positive integers. This definition may be useful when conceptually it is known beforehand that the two systems that generate X(n) and Y(n) are totally uncoupled. However, in practice, usually no a priori knowledge about the systems exists, and the objective is to discover if they are coupled, which means that we want to study the possible association or coupling of the two systems based on their respective output signals x(n) and y(n). For this purpose, the definition of independence is unfeasible to test in practice, and one has to rely on concepts of association based on second-order moments.

In the same way as the correlation coefficient quantifies the degree of linear association between two random variables, the cross-correlation and the cross-covariance quantify the degree of linear association between two random processes X(n) and Y(n). Their cross-correlation is defined as

R_xy(k) = E[X(n + k) Y(n)],   (44)

and their cross-covariance as

C_xy(k) = E[(X(n + k) − m_x)(Y(n) − m_y)].   (45)

It should be noted that some books or papers define the cross-correlation and cross-covariance as R_xy(k) = E[X(n) Y(n + k)] and C_xy(k) = E[(X(n) − m_x)(Y(n + k) − m_y)], which are time-reversed versions of the definitions in Equations 44 and 45. The distinction is clearly important when viewing a cross-correlation graph between two experimentally recorded signals x(n) and y(n), coming from random processes X(n) and Y(n), respectively. A peak at a positive time shift k according to one definition would appear at a negative time shift −k in the alternative definition. Therefore, when using a signal processing software package or when reading a scientific text, the reader should always verify how the cross-correlation was defined. In Matlab (MathWorks, Inc.), a very popular software tool for signal processing, the commands xcorr(x,y) and xcov(x,y) use the same conventions as in Equations 44 and 45.

Similarly to what was said before for the autocorrelation definitions, texts on time series analysis define the cross-correlation as the normalized cross-covariance:

r_xy(k) = C_xy(k) / (σ_x σ_y) = E[(X(n + k) − m_x)(Y(n) − m_y)] / (σ_x σ_y),   (46)

where σ_x and σ_y are the standard deviations of the two random processes X(n) and Y(n), respectively.

4.2. Basic Properties

As E[X(n + k) Y(n)] is in general different from E[X(n − k) Y(n)], the cross-correlation and the cross-covariance have a more subtle symmetry property than the autocorrelation and autocovariance:

R_xy(k) = R_yx(−k) = E[Y(n − k) X(n)].   (47)

It should be noted that the time argument in R_yx(−k) is negative and the order xy is changed to yx. For the cross-covariance, a similar symmetry relation applies:

C_xy(k) = C_yx(−k)   (48)

and

r_xy(k) = r_yx(−k).   (49)

For the autocorrelation and autocovariance, the peak occurs at the origin, as given by the properties in Equations 19 and 20. However, for the crossed moments, the peak may occur at any time shift value, with the following inequalities being valid:

|R_xy(k)| ≤ sqrt{R_xx(0) R_yy(0)}   (50)

…

lim_{|k|→∞} R_xy(k) = m_x m_y   (53)

and

lim_{|k|→∞} C_xy(k) = 0,   (54)

with similar results being valid for continuous time. These limit results mean that the "effect" of a random variable taken from process X on a random variable taken from process Y decreases as the two random variables are taken farther apart. In the limit, they become uncorrelated.

4.3. Independent and Uncorrelated Random Processes

When two random processes X(n) and Y(n) are independent, their cross-covariance is always equal to 0 for any time shift, and hence the cross-correlation is constant and equal to the product of the mean values:
R_xy(k) = m_x m_y    ∀k.    (56)

Two random processes that satisfy the two expressions above are called uncorrelated (10,15), whether they are independent or not. In practical applications, one usually has no way to test for the independence of two random processes, but it is feasible to test if they are correlated or not. Note that the term uncorrelated may be misleading because the cross-correlation itself is not zero (unless one of the mean values is zero, when the processes are called orthogonal), but the cross-covariance is.

Two independent random processes are always uncorrelated, but the reverse is not necessarily true. Two random processes may be uncorrelated (i.e., have zero cross-covariance) but still be statistically dependent, because other probabilistic quantifiers (e.g., higher-order central moments (third, fourth, etc.)) will not be zero. However, for Gaussian (or normal) random processes, a one-to-one correspondence exists between independence and uncorrelatedness (10). This special property is valid for Gaussian random processes because they are specified entirely by the mean and second moments (the autocovariance function for a single random process and the cross-covariance for the joint distribution of two random processes).

4.4. Simple Model of Delayed and Amplitude-Scaled Random Signal

In some biomedical applications, the objective is to estimate the delay between two random signals. For example, the signals may be the arterial pressure and cerebral blood flow velocity for the study of cerebral autoregulation, or EMGs from different muscles for the study of tremor (19). The simplest model relating the two signals is y(n) = a x(n - L), where a ∈ R and L ∈ Z, which means that y is an amplitude-scaled and delayed (for L > 0) version of x. From Equation 46, r_xy(k) will have either a maximum peak equal to 1 (if a > 0) or a trough equal to -1 (if a < 0), located at time shift k = -L. Hence, the peak location on the time axis of the cross-covariance (or cross-correlation) indicates the delay value.

A slightly more realistic model in practical applications assumes that one signal is a delayed and scaled version of another but with an extraneous additive noise:

y(n) = a x(n - L) + w(n).    (57)

In this model, w(n) is a random signal caused by external interference noise or intrinsic biological noise that cannot be controlled or measured. Usually, one can assume that x(n) and w(n) are signals from uncorrelated or independent random processes X(n) and W(n). Let us determine r_xy(k) assuming access to X(n) and Y(n). From the definition of the cross-covariance (Equation 45),

C_xy(k) = a E[(X(n + k) - m_x)(X(n - L) - m_x)] + E[(X(n + k) - m_x)(W(n) - m_w)],    (58)

but the last term is zero because X(n) and W(n) are uncorrelated. Hence,

C_xy(k) = a C_xx(k + L).    (59)

As C_xx(k + L) attains its peak value when k = -L (i.e., when the argument (k + L) is zero), this provides a very practical method to estimate the delay between two random signals: find the time-shift value where the cross-covariance has a clear peak. If an additional objective is to estimate the amplitude-scaling parameter a, it may be achieved by dividing the peak value of the cross-covariance by s_x^2 (see Equation 59).

Additionally, from Equations 46 and 59:

r_xy(k) = a C_xx(k + L) / (s_x s_y).    (60)

To find s_y, we remember that C_yy(0) = s_y^2 from Equation 12. From Equation 57 we have

C_yy(0) = E[a(X(n - L) - m_x) · a(X(n - L) - m_x)] + E[(W(n) - m_w)(W(n) - m_w)],    (61)

where again we used the fact that X(n) and W(n) are uncorrelated. Therefore,

C_yy(0) = a^2 C_xx(0) + C_ww(0) = a^2 s_x^2 + s_w^2,    (62)

and therefore

r_xy(k) = a C_xx(k + L) / (s_x √(a^2 s_x^2 + s_w^2)).    (63)

When k = -L, C_xx(k + L) will reach its peak value, equal to s_x^2, which means that r_xy(k) will have a peak at k = -L equal to

r_xy(-L) = a s_x / √(a^2 s_x^2 + s_w^2) = (a/|a|) · 1/√(1 + s_w^2/(a^2 s_x^2)).    (64)

Equation 64 is consistent with the case in which no noise W(n) exists (s_w^2 = 0) because the peak in r_xy(k) will equal +1 or -1, if a is positive or negative, respectively. From Equation 64, when the noise variance s_w^2 increases, the peak in r_xy(k) will decrease in absolute value, but will still occur at time shift k = -L. Within the context of this example, another way of interpreting the peak value in the normalized cross-covariance r_xy(k) is by asking what fraction G_x→y of Y is because of X in Equation 57, meaning the ratio of the standard deviation of the term a X(n - L) to
the standard deviation of Y(n):

G_x→y = |a| s_x / s_y.    (65)

From Equation 60,

r_xy(-L) = a s_x / s_y,    (66)

and from Equations 65 and 66 it is concluded that the peak size (absolute peak value) in the normalized cross-covariance indicates the fraction of Y because of the random process X:

|r_xy(-L)| = G_x→y.    (67)
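The delay-estimation recipe just derived (locate the cross-covariance peak, then divide its value by s_x^2 to recover a) can be sketched numerically. This is an illustrative simulation, not code from the text: the signal length, delay, scale, and noise level are arbitrary choices, and the convention R_xy(k) = E[X(n + k) Y(n)] of Equation 44 is used, so the peak is expected at k = -L.

```python
import numpy as np

rng = np.random.default_rng(0)
N, a, L = 5000, 2.0, 7                     # illustrative choices
x_full = rng.standard_normal(N + L)        # white Gaussian input process
w = 0.5 * rng.standard_normal(N)           # additive, uncorrelated noise
y = a * x_full[:N] + w                     # y(n) = a*x(n - L) + w(n) ...
x = x_full[L:]                             # ... relative to the observed x(n)

def xcov(x, y, k):
    # Biased estimate of C_xy(k) = E[(X(n+k) - m_x)(Y(n) - m_y)]
    xc, yc = x - x.mean(), y - y.mean()
    prod = xc[k:] * yc[:len(y) - k] if k >= 0 else xc[:k] * yc[-k:]
    return prod.sum() / len(x)

lags = np.arange(-20, 21)
c = np.array([xcov(x, y, int(k)) for k in lags])
i = int(np.argmax(np.abs(c)))
k_hat = int(lags[i])                       # estimated delay: should be -L
a_hat = c[i] / x.var()                     # scale estimate via Equation 59
```

On this synthetic run the peak lands at k = -7 = -L and a_hat is close to a = 2; raising the noise variance lowers the peak of the normalized cross-covariance (Equation 64) without moving it.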
random signal applied to their inputs. Without prior information (e.g., on the possible existence of a common input) or additional knowledge (e.g., of the impulse response of one of the systems and R_uu(k)), it would certainly be difficult to interpret such a cross-correlation.

An example from the biomedical field shall illustrate a case where the existence of a common random input was hypothesized based on empirically obtained cross-covariances. The experiments consisted of evoking spinal cord reflexes bilaterally and simultaneously in the legs of each subject, as depicted in Fig. 10. A single electrical stimulus, applied to the right or left leg, would fire an action potential in some of the sensory nerve fibers situated under the stimulating electrodes (st in Fig. 10). The action potentials would travel to the spinal cord (indicated by the upward arrows) and activate a set of motoneurons (represented by circles in a box). The axons of these motoneurons would conduct action potentials to a leg muscle, indicated by downward arrows in Fig. 10. The recorded waveform from the muscle (shown either to the left or right of Fig. 10) is the so-called H reflex. Its peak-to-peak amplitude is of interest, being indicated as x for the left-leg reflex waveform and y for the right-leg waveform in Fig. 10. The experimental setup included two stimulators (indicated as st in Fig. 10) that applied simultaneous trains of 500 rectangular electrical stimuli at 1 per second to the two legs. If the stimulus pulses in the trains are numbered as n = 0, n = 1, ..., n = N - 1 (the authors used N = 500), the respective sequences of H-reflex amplitudes recorded on each side will be x(0), x(1), ..., x(N - 1) and y(0), y(1), ..., y(N - 1), as depicted in the inset at the lower right side of Fig. 10. The two sequences of reflex amplitudes were analyzed by the cross-covariance function (22).

Each reflex peak-to-peak amplitude depends on the upcoming sensory activity discharged by each stimulus and also on random inputs from the spinal cord that act on the motoneurons. In Fig. 10, a "U?" indicates a hypothesized common input random signal that would modulate synchronously the reflexes from the two sides of the spinal cord. The peak-to-peak amplitudes of the right- and left-side H reflexes to a train of 500 bilateral stimuli were measured in real time and stored as discrete-time signals x and y. Initial experiments had shown that a unilateral stimulus only affected the same side of the spinal cord. This finding meant that any statistically significant peak in the cross-covariance of x and y could be attributed to a common input. The first 10 reflex amplitudes in each signal were discarded to avoid the transient (nonstationarity) that occurs at the start of the stimulation.

Data from a subject are shown in Fig. 11: the first 51 H-reflex amplitude values in Fig. 11a for the right leg, and the corresponding simultaneous H-reflex amplitudes in Fig. 11b for the left leg (R. A. Mezzarane and A. F. Kohn, unpublished data). In both, a horizontal line shows the respective mean value computed from all the 490 samples of each signal. A simple visual analysis of the data probably tells us close to nothing about how the reflex amplitudes vary and if the two sides fluctuate together to some degree. Such quantitative questions may be answered by the autocovariance and cross-covariance functions. The right- and left-leg H-reflex amplitude normalized autocovariances, computed according to Equation 15, are shown in Fig. 12a and 12b, respectively. The value at time shift 0 is 1, as expected from the normalization. Both autocovariances show that for small time shifts, some degree of correlation exists between the samples because
Figure 10. Experimental setup to elicit and record bilateral reflexes in human legs. Each stimulus applied at the points marked st causes upward-propagating action potentials that activate a certain number of motoneurons (circles in a box). A hypothesized common random input to both sides is indicated by "U?". The reflex responses travel down the nerves located on each leg and cause each calf muscle to fire a compound action potential, which is the so-called H reflex. The amplitudes x and y of the reflexes on each side are measured for each stimulus. Actually, a bilateral train of 500 stimuli is applied and the corresponding reflex amplitudes x(n) and y(n) are measured, for n = 0, 1, ..., 499, as sketched in the inset in the lower corner. Later, the two sets of reflex amplitudes are analyzed by auto- and cross-covariance.
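The autocovariance analysis applied to the reflex series (normalized autocovariance estimates compared against a 95% confidence band) can be imitated on synthetic data. The AR(1) sequence and the ±1.96/√N band below are illustrative assumptions; the band is the common large-sample approximation for white noise, of the kind discussed in the section on estimation and hypothesis testing.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 490                                   # same length as the reflex series
x = np.empty(N)
x[0] = rng.standard_normal()
for n in range(1, N):                     # AR(1): weakly correlated samples
    x[n] = 0.6 * x[n - 1] + rng.standard_normal()

xc = x - x.mean()
# Biased normalized autocovariance r_xx(k) = C_xx(k)/C_xx(0), k = 0..39
r = np.array([np.dot(xc[k:], xc[:N - k]) for k in range(40)]) / np.dot(xc, xc)
band = 1.96 / np.sqrt(N)                  # approximate 95% band for white noise
first_inside = int(np.argmax(np.abs(r) < band))  # first shift within the band
```

For this sequence r[0] = 1 exactly, the small-shift values exceed the band (correlated samples), and the estimates fall inside the band after a few shifts, qualitatively like Fig. 12.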
Figure 11. The first 51 samples of two time series y(n) and x(n) representing the simultaneously measured H-reflex amplitudes from the right (a) and left (b) legs in a subject. The horizontal lines indicate the mean values of each complete series. The stimulation rate was 1 Hz. Ordinates are in mV. (This figure is available in full color at http://www.mrw.interscience.wiley.com/ebe.)

Figure 12. The respective autocovariances of the signals shown in Fig. 11. The autocovariance in (a) decays faster to the confidence interval than that in (b). The two horizontal lines represent a 95% confidence interval. (This figure is available in full color at http://www.mrw.interscience.wiley.com/ebe.)
the autocovariance values are above the upper level of a 95% confidence interval shown in Fig. 12 (see subsection Hypothesis testing). This fact supports the hypothesis that randomly varying neural inputs exist in the spinal cord that modulate the excitability of the reflex circuits. As the autocovariance in Fig. 12b decays much slower than that in Fig. 12a, it suggests that the two sides receive different randomly varying inputs. The normalized cross-covariance, computed according to Equation 46, is seen in Fig. 13. Many cross-covariance samples around k = 0 are above the upper level of a 95% confidence interval (see subsection Hypothesis testing), which suggests a considerable degree of correlation between the reflex amplitudes recorded from both legs of the subject. The decay on both sides of the cross-covariance in Fig. 13 could be because of the autocovariances of the two signals (see Fig. 12), but this issue will only be treated later. Results such as those in Fig. 13, also found in other subjects (22), suggested the existence of common inputs acting on sets of motoneurons at both sides of the spinal cord. However, as the peak of the cross-covariance was lower than 1 and as the autocovariances were different bilaterally, it can be stated that each side receives inputs from sources common to both sides plus random inputs that are uncorrelated with the other side's inputs. Experiments in cats are being pursued, with the help of the cross-covariance analysis, to unravel the neural sources of the random inputs found to act bilaterally in the spinal cord (23).

6. ESTIMATION AND HYPOTHESIS TESTING FOR AUTOCORRELATION AND CROSS-CORRELATION

This section will focus solely on discrete-time random signals because, today, the signal processing techniques are almost all realized in discrete time in very fast computers. If no a priori knowledge about the stationarity of the random process exists, a test for stationarity should be applied (4). If a trend of no physiological interest is discovered in the data, it may be removed by linear or nonlinear regression. After a stationary signal is obtained, the problem is then to estimate the first and second moments as presented in the previous sections.

Let us assume a stationary signal x(n) is known for n = 0, 1, ..., N - 1. Any estimate Θ(x(n)) based on this signal would have to present some desirable properties, derived from its corresponding estimator Θ(X(n)), such as unbiasedness and consistency (15). Note that an estimate is a particular value (or a set of values) of the estimator when a given sample function x(n) is used instead of the whole process X(n) (which is what happens in practice). For an unbiased estimator Θ(X(n)), its expected value is equal to the actual value being estimated, θ:

E[Θ(X(n))] = θ.    (72)

For example, if we want to estimate the mean m_x of the process X(n) using x̄ (Equation 2), unbiasedness means that the expected value of the estimator has to equal m_x. It
is easy to show that the finite average [X(0) + X(1) + ... + X(N - 1)]/N of a stationary process is an unbiased estimator of the expected value of the process:

E[Θ(X(n))] = E[(1/N) Σ_{n=0}^{N-1} X(n)] = (1/N) Σ_{n=0}^{N-1} E[X(n)] = (1/N) Σ_{n=0}^{N-1} m_x = m_x.    (73)

If a given estimator is unbiased, it still does not assure its usefulness, because if its variance is high it means that for a given signal x(n) - a single sample function of the process X(n) - the value of Θ(x(n)) may be quite far from the actual value θ. Therefore, a desirable property of an unbiased estimator is that its variance tends to 0 as the number of samples N tends to ∞, a property called consistency:

s_Θ^2 = E[(Θ(X(n)) - θ)^2] → 0 as N → ∞.    (74)

Returning to the unbiased mean estimator (Equation 2), let us check its consistency:

s_Θ^2 = E[(Θ(X(n)) - m_x)^2] = E[((1/N) Σ_{n=0}^{N-1} (X(n) - m_x))^2].    (75)

Figure 13. Cross-covariance between the two signals with 490 bilateral reflexes shown partially in Fig. 11. Only the time shifts near the origin are shown here because they are the most reliable. The two horizontal lines represent a 95% confidence interval. (This figure is available in full color at http://www.mrw.interscience.wiley.com/ebe.)

From Equation 23 it is concluded that the mean estimator is consistent. It is interesting to observe that the variance of the mean estimator depends on the autocovariance of the signal and not only on the number of samples, which is because of the statistical dependence between the samples of the signal, quite contrary to what happens in conventional statistics, where the samples are i.i.d.

6.1. Estimation of Autocorrelation and Autocovariance

Which estimator should be used for the autocorrelation of a process X(n)? Two slightly different estimators will be presented and their properties verified. Both are much used in practice and are part of the options of the Matlab command xcorr.

The first estimator will be called the unbiased autocorrelation estimator and defined as

R̂_xx,u(k) = (1/(N - |k|)) Σ_{n=0}^{N-1-|k|} X(n + |k|) X(n),    |k| ≤ N - 1.    (77)

The basic operations are sample-to-sample multiplications of two time-shifted versions of the process and an arithmetic average of the resulting samples, which seems to be a reasonable approximation to Equation 13. To confirm that this estimator is indeed unbiased, its expected value shall be determined:

E[R̂_xx,u(k)] = (1/(N - |k|)) Σ_{n=0}^{N-1-|k|} E[X(n + |k|) X(n)],    |k| ≤ N - 1,    (78)

and from Equations 13 and 16 it can be concluded that the estimator is indeed unbiased (i.e., E[R̂_xx,u(k)] = R_xx(k)). It is more difficult to verify consistency, and the reader is referred to Therrien (15). For a Gaussian process X(n), it can be shown that the variance indeed tends to 0 for N → ∞, which is a good approximation for more general random processes. Therefore, Equation 77 gives us a consistent estimator of the autocorrelation. When applying Equation 77 in practice, the available signal x(n), n = 0, 1, ..., N - 1, is used instead of X(n), giving a sequence of 2N - 1 values R̂_xx,u(k).

The unbiased autocovariance estimator is similarly defined as

Ĉ_xx,u(k) = (1/(N - |k|)) Σ_{n=0}^{N-1-|k|} (X(n + |k|) - x̄)(X(n) - x̄),    |k| ≤ N - 1,    (79)
where x̄ is given by Equation 2. Similarly, Ĉ_xx,u(k) is a consistent estimator of the autocovariance.

Nevertheless, one of the problems with these two estimators defined in Equations 77 and 79 is that they may not obey Equations 19, 20, or the positive semi-definite property (14,15). Therefore, other estimators are defined, such as the biased autocorrelation and autocovariance estimators R̂_xx,b(k) and Ĉ_xx,b(k):

R̂_xx,b(k) = (1/N) Σ_{n=0}^{N-1-|k|} X(n + |k|) X(n),    |k| ≤ N - 1.    (80)

... highest reliability. On the other hand, if the estimated autocorrelation or autocovariance will be Fourier-transformed to provide an estimate of a power spectrum, then the biased autocorrelation or autocovariance estimators should be chosen to assure a non-negative power spectrum.

For completeness purposes, the normalized autocovariance estimators are given below. The unbiased estimator is:

r̂_xx,u(k) = Ĉ_xx,u(k) / Ĉ_xx,u(0),    |k| ≤ N - 1.    (84)
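The unbiased (Equation 77) and biased (Equation 80) estimators differ only in the divisor, so for a given time shift they differ exactly by the factor (N - |k|)/N. A direct NumPy transcription, as a sketch (function and variable names are mine):

```python
import numpy as np

def acorr_unbiased(x, k):
    # Equation 77: divide by N - |k|
    N, k = len(x), abs(k)
    return np.dot(x[k:], x[:N - k]) / (N - k)

def acorr_biased(x, k):
    # Equation 80: divide by N
    N, k = len(x), abs(k)
    return np.dot(x[k:], x[:N - k]) / N

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
k = 100
ratio = acorr_biased(x, k) / acorr_unbiased(x, k)   # (1000 - 100)/1000 = 0.9
```

The biased estimator shrinks the large-|k| values toward zero, which is what restores the positive semi-definite property and hence a non-negative power spectrum after Fourier transformation.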
previous section. An important difference, however, is that the cross-correlation and cross-covariance do not have even symmetry. The expressions for the unbiased and biased estimators of cross-correlation given below are in a form appropriate for signals x(n) and y(n), both defined for n = 0, 1, ..., N - 1. The unbiased estimator is

R̂_xy,u(k) = (1/(N - |k|)) Σ_{n=0}^{N-1-|k|} X(n + k) Y(n) for 0 ≤ k ≤ N - 1, and
R̂_xy,u(k) = (1/(N - |k|)) Σ_{n=0}^{N-1-|k|} X(n) Y(n + |k|) for -(N - 1) ≤ k < 0.    (87)

... package calculates the unbiased variance (standard deviation) estimates

r̂_xy,u(k) = (N/(N - 1)) Ĉ_xy,u(k) / (ŝ_x ŝ_y),    (92)

which may be useful, for example, when using the Matlab command xcov, because none of its options include the direct computation of an unbiased and normalized cross-covariance between two signals x and y. Equation 92 in Matlab would be written as:
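As a sketch of Equation 92 in NumPy rather than MATLAB (the function names are mine, the conventions of Equations 44 and 45 are assumed, and ŝ_x, ŝ_y are the usual unbiased standard-deviation estimates):

```python
import numpy as np

def xcov_unbiased(x, y, k):
    # C_xy,u(k): unbiased cross-covariance estimate, divisor N - |k|
    N = len(x)
    xc, yc = x - x.mean(), y - y.mean()
    prod = xc[k:] * yc[:N - k] if k >= 0 else xc[:k] * yc[-k:]
    return prod.sum() / (N - abs(k))

def rho_xy_unbiased(x, y, k):
    # Equation 92: N/(N-1) * C_xy,u(k) / (s_x * s_y)
    N = len(x)
    return (N / (N - 1)) * xcov_unbiased(x, y, k) / (x.std(ddof=1) * y.std(ddof=1))

rng = np.random.default_rng(3)
x = rng.standard_normal(2000)
y = np.concatenate([np.zeros(3), 0.8 * x[:-3]])   # y(n) = 0.8*x(n - 3)
lags = list(range(-6, 7))
vals = [rho_xy_unbiased(x, y, k) for k in lags]
k_hat = lags[int(np.argmax(np.abs(vals)))]        # peak expected at k = -3
```

With this normalization, rho_xy_unbiased(x, x, 0) equals the sample correlation coefficient of x with itself, i.e., 1 up to rounding.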
structure of the heart rate variability of cardiac patients and to evaluate the level of respiratory heart rate modulation.

Higher-order cross-correlation functions have been used in the Wiener-Volterra kernel approach to characterize a nonlinear association between two biological signals (36,37).

Finally, the class of oscillatory biological systems has received special attention in the literature with respect to the quantification of the time relationships between different rhythmic signals. Many biological systems exist that exhibit rhythmic activity, which may be synchronized or entrained either to external signals or with another subsystem of the same organism (38). As these oscillatory (or perhaps chaotic) systems are usually highly nonlinear and stochastic, specific methods have been proposed to study their synchronization (39,40).

BIBLIOGRAPHY

1. R. Merletti and P. A. Parker, Electromyography. Hoboken, NJ: Wiley Interscience, 2004.
2. J. H. Zar, Biostatistical Analysis. Upper Saddle River, NJ: Prentice-Hall, 1999.
3. P. Z. Peebles, Jr., Probability, Random Variables, and Random Signal Principles, 4th ed. New York: McGraw-Hill, 2001.
4. J. S. Bendat and A. G. Piersol, Random Data: Analysis and Measurement Procedures. New York: Wiley, 2000.
5. C. Chatfield, The Analysis of Time Series: An Introduction, 6th ed. Boca Raton, FL: CRC Press, 2004.
6. P. J. Magill, A. Sharott, D. Harnack, A. Kupsch, W. Meissner, and P. Brown, Coherent spike-wave oscillations in the cortex and subthalamic nucleus of the freely moving rat. Neuroscience 2005; 132:659–664.
7. J. J. Meier, J. D. Veldhuis, and P. C. Butler, Pulsatile insulin secretion dictates systemic insulin delivery by regulating hepatic insulin extraction in humans. Diabetes 2005; 54:1649–1656.
8. D. Rassi, A. Mishin, Y. E. Zhuravlev, and J. Matthes, Time domain correlation analysis of heart rate variability in preterm neonates. Early Human Develop. 2005; 81:341–350.
9. R. Steinmeier, C. Bauhuf, U. Hubner, R. P. Hofmann, and R. Fahlbusch, Continuous cerebral autoregulation monitoring by cross-correlation analysis: evaluation in healthy volunteers. Crit. Care Med. 2002; 30:1969–1975.
10. A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes. New York: McGraw-Hill, 2001.
11. P. J. Brockwell and R. A. Davis, Time Series: Theory and Methods. New York: Springer Verlag, 1991.
12. A. F. Kohn, Dendritic transformations on random synaptic inputs as measured from a neuron's spike train - modeling and simulation. IEEE Trans. Biomed. Eng. 1989; 36:44–54.
13. J. S. Bendat and A. G. Piersol, Engineering Applications of Correlation and Spectral Analysis, 2nd ed. New York: John Wiley, 1993.
14. M. B. Priestley, Spectral Analysis and Time Series. London: Academic Press, 1981.
15. C. W. Therrien, Discrete Random Signals and Statistical Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1992.
16. A. V. Oppenheim, R. W. Schafer, and J. R. Buck, Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1999.
17. R. M. Bruno and D. J. Simons, Feedforward mechanisms of excitatory and inhibitory cortical receptive fields. J. Neurosci. 2002; 22:10966–10975.
18. D. Hoyer, U. Leder, H. Hoyer, B. Pompe, M. Sommer, and U. Zwiener, Mutual information and phase dependencies: measures of reduced nonlinear cardiorespiratory interactions after myocardial infarction. Med. Eng. Phys. 2002; 24:33–43.
19. T. Muller, M. Lauk, M. Reinhard, A. Hetzel, C. H. Lucking, and J. Timmer, Estimation of delay times in biological systems. Ann. Biomed. Eng. 2003; 31:1423–1439.
20. J. C. Hassab and R. E. Boucher, Optimum estimation of time delay by a generalized correlator. IEEE Trans. Acoust. Speech Signal Proc. 1979; ASSP-27:373–380.
21. D. G. Manolakis, V. K. Ingle, and S. M. Kogon, Statistical and Adaptive Signal Processing. New York: McGraw-Hill, 2000.
22. R. A. Mezzarane and A. F. Kohn, Bilateral soleus H-reflexes in humans elicited by simultaneous trains of stimuli: symmetry, variability, and covariance. J. Neurophysiol. 2002; 87:2074–2083.
23. E. Manjarrez, Z. J. Hernández-Paxtián, and A. F. Kohn, A spinal source for the synchronous fluctuations of bilateral monosynaptic reflexes in cats. J. Neurophysiol. 2005; 94:3199–3210.
24. H. Akaike, A new look at the statistical model identification. IEEE Trans. Automat. Control 1974; AC-19:716–723.
25. M. Akay, Biomedical Signal Processing. San Diego, CA: Academic Press, 1994.
26. R. M. Rangayyan, Biomedical Signal Analysis. New York: IEEE Press - Wiley, 2002.
27. D. M. Simpson, A. F. Infantosi, and D. A. Botero Rosas, Estimation and significance testing of cross-correlation between cerebral blood flow velocity and background electro-encephalograph activity in signals with missing samples. Med. Biolog. Eng. Comput. 2001; 39:428–433.
28. Y. Xu, S. Haykin, and R. J. Racine, Multiple window time-frequency distribution and coherence of EEG using Slepian sequences and Hermite functions. IEEE Trans. Biomed. Eng. 1999; 46:861–866.
29. D. M. Simpson, C. J. Tierra-Criollo, R. T. Leite, E. J. Zayen, and A. F. Infantosi, Objective response detection in an electroencephalogram during somatosensory stimulation. Ann. Biomed. Eng. 2000; 28:691–698.
30. C. A. Champlin, Methods for detecting auditory steady-state potentials recorded from humans. Hear. Res. 1992; 58:63–69.
31. L. A. Baccala and K. Sameshima, Partial directed coherence: a new concept in neural structure determination. Biolog. Cybernet. 2001; 84:463–474.
32. H. K. Meeren, J. P. M. Pijn, E. L. J. M. van Luijtelaar, A. M. L. Coenen, and F. H. Lopes da Silva, Cortical focus drives widespread corticothalamic networks during spontaneous absence seizures in rats. J. Neurosci. 2002; 22:1480–1495.
33. N. Abramson, Information Theory and Coding. New York: McGraw-Hill, 1963.
34. C. E. Shannon, A mathematical theory of communication. Bell Syst. Tech. J. 1948; 27:379–423.
35. N. J. I. Mars and G. W. van Arragon, Time delay estimation in non-linear systems using average amount of mutual information analysis. Signal Proc. 1982; 4:139–153.
36. P. Z. Marmarelis and V. Z. Marmarelis, Analysis of Physiological Systems: The White-Noise Approach. New York: Plenum Press, 1978.