
What is the Difference Between Random and Chaotic Sequences
In everyday language, people tend to use the words random and chaotic interchangeably. You will recall from the
text that chaotic sequences are in fact generated deterministically from the dynamical system
x_{n+1} = f(x_n)    (1)

where f is a smooth function on R^m.


This implies that if you create two orbits with the identical initial data, x0 , then the orbits are the same. What makes
the dynamical system chaotic is the fact that orbits arising from initial data which are arbitrarily close grow apart
exponentially.
A bounded sequence of values {x_i}_{i=1}^∞ coming from (1) is chaotic if
1. {xi } is not asymptotically periodic
2. No Lyapunov exponent vanishes
3. The largest Lyapunov exponent is strictly positive
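To make the contrast concrete, here is a short Python sketch (Python rather than the Matlab used later, purely for illustration; the logistic map f(x) = 4x(1 − x) is an assumed stand-in for a chaotic system, not one named in the text):

```python
# Determinism vs. sensitive dependence for x_{n+1} = f(x_n), using the
# logistic map f(x) = 4x(1 - x) as an illustrative chaotic system.

def f(x):
    return 4.0 * x * (1.0 - x)

def orbit(x0, n):
    """Return n iterates of x_{k+1} = f(x_k) starting from x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(f(xs[-1]))
    return xs

a = orbit(0.2, 50)
b = orbit(0.2, 50)          # identical initial data: identical orbit
c = orbit(0.2 + 1e-10, 50)  # perturbed initial data: orbits grow apart

print(a == b)                                 # True: the system is deterministic
print(max(abs(u - v) for u, v in zip(a, c)))  # grows to order one despite the 1e-10 perturbation
```

Identical initial data reproduce the orbit exactly, while a perturbation of 1e-10 is amplified roughly exponentially until the separation saturates at the size of the attractor.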
Random (sometimes called stochastic) processes are fundamentally different. Two successive realizations of a random process will give different sequences, even if the initial state is the same.
Since a random process is non-deterministic, numerical computation of a Lyapunov exponent is not well defined.
Wolf's algorithm (applied to a sequence) looks at the closest neighbor y_0 = x_K of a point x_J and uses it as the initial
value of another sequence. The average value of the rate of separation

ln( ||x_{K+1} − x_{J+1}|| / ||x_K − x_J|| )    (2)

is used as a measure of the Lyapunov exponent.
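A minimal sketch of this measure for a scalar sequence, with illustrative names (note that Wolf's full algorithm tracks and replaces neighbors along the orbit; this sketch only averages the one-step rate (2)):

```python
import math

def one_step_rate(xs):
    """Average of ln(|x_{K+1} - x_{J+1}| / |x_K - x_J|) over a scalar
    sequence, where x_K is the nearest neighbor of x_J: a sketch of (2)."""
    n = len(xs)
    total, count = 0.0, 0
    for j in range(n - 1):
        # nearest neighbor of xs[j] (excluding j itself) among points
        # that still have a successor in the sequence
        k = min((i for i in range(n - 1) if i != j),
                key=lambda i: abs(xs[i] - xs[j]))
        d0 = abs(xs[k] - xs[j])
        d1 = abs(xs[k + 1] - xs[j + 1])
        if d0 > 0.0 and d1 > 0.0:
            total += math.log(d1 / d0)
            count += 1
    return total / count

# On a chaotic orbit of the logistic map 4x(1 - x), the estimate is
# typically near the true Lyapunov exponent ln 2 ≈ 0.69.
xs = [0.3]
for _ in range(299):
    xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
print(one_step_rate(xs))
```

For a genuinely chaotic sequence this average stabilizes as the sequence grows; the next paragraph shows why it does not stabilize for random data.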


Consider the case when the x_i are uniformly randomly distributed on [0, 1]. If the x_i are truly random, then the
denominator of (2) may be arbitrarily small while the numerator remains large. As the number of points increases, the
minimal separation distance approaches zero, while the mean separation of two arbitrary points (the correlation length)
remains bounded away from zero. Consequently, the Lyapunov exponent estimate will increase without bound as the number
of points approaches infinity.
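This behavior is easy to check numerically; below is a sketch with illustrative names, using ordinary pseudo-random uniforms as a stand-in for a truly random sequence:

```python
import random

def gap_stats(n, seed=0):
    """Minimal nearest-neighbor gap and mean pairwise separation of
    n pseudo-random uniform points on [0, 1]."""
    rng = random.Random(seed)
    xs = sorted(rng.random() for _ in range(n))
    min_gap = min(b - a for a, b in zip(xs, xs[1:]))
    # mean |x - y| over sampled pairs; stays near the theoretical 1/3
    mean_gap = sum(abs(rng.choice(xs) - rng.choice(xs))
                   for _ in range(5000)) / 5000
    return min_gap, mean_gap

for n in (100, 1000, 10000):
    print(n, gap_stats(n))  # min gap shrinks (roughly like 1/n^2); mean stays near 1/3
```

Since the denominator of (2) can be as small as the minimal gap while the numerator is of the order of the mean gap, the averaged estimate grows roughly like ln N.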
We can actually estimate the following for a uniform random sequence of values on [0, 1]
Σ_{i=1}^{N−1} |x_{i+1} − x_i|² = Σ_{i=1}^{N−1} |(x_{i+1} − x̄) − (x_i − x̄)|²
  = Σ_{i=1}^{N−1} |x_{i+1} − x̄|² + Σ_{i=1}^{N−1} |x_i − x̄|² − 2 Σ_{i=1}^{N−1} (x_{i+1} − x̄)(x_i − x̄)
  ≈ 2Nσ² − 2ρNσ²

where ρ is the autocorrelation coefficient

ρ = [ Σ_{i=1}^{N−1} (x_{i+1} − x̄)(x_i − x̄) ] / [ Σ_{i=1}^{N−1} |x_i − x̄|² ]

and σ² is the variance

σ² = [ Σ_{i=1}^{N} |x_i − x̄|² ] / (N − 1)
Therefore the expected value of |x_{i+1} − x_i|² is approximately 2σ² − 2ρσ². Since σ² = 1/12 and ρ ≈ 1/N for uniform
random data, the mean length squared is approximately twice the variance (i.e. 1/6).
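As a cross-check of the 1/6 value (a direct computation, assuming the x_i are independent and uniform on [0, 1]):

```latex
\mathbb{E}\,|x_{i+1} - x_i|^2
  = \mathbb{E}[x_{i+1}^2] - 2\,\mathbb{E}[x_{i+1}]\,\mathbb{E}[x_i] + \mathbb{E}[x_i^2]
  = \tfrac{1}{3} - 2 \cdot \tfrac{1}{2} \cdot \tfrac{1}{2} + \tfrac{1}{3}
  = \tfrac{1}{6}.
```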
This can be verified by running the following Matlab program for large enough values of n:

function [d1,d2] = clen(n)
% Monte Carlo estimate of the mean distance (d1) and mean squared
% distance (d2) between two independent uniform points on [0,1].
d1 = 0;
d2 = 0;
for i = 1:n
  x = rand;
  y = rand;
  d1 = d1 + abs(x - y);
  d2 = d2 + (x - y)^2;
end
d1 = d1/n;
d2 = d2/n;

Interestingly enough, this program returns d1 = 1/3 and d2 = 1/6 asymptotically. I can prove the second result but
not the first ...
