
Calculation of Lyapunov spectrum of systems of ordinary or delay differential equations
Gábor Csernák, PhD
Department of Applied Mechanics
Budapest University of Technology and Economics
csernak@mm.bme.hu

October 20, 2011

Abstract
The algorithm for the calculation of the Lyapunov spectrum is discussed in detail, for both ordinary
and delay differential equations. The C++ codes of the examples (Lorenz model, Ikeda equation, Pálmai
model) are attached to this document.

1 Introduction
1.1 Definition of Lyapunov exponents
In dissipative systems, the phase space volumes are contracted. The local rate of this contraction is given
by the determinant of the Jacobian matrix, the so-called Jacobian. The Jacobian matrix J^t(ξ) describes
the linearized motion in the neighbourhood of a trajectory x(t) ≡ f(t, ξ), which emerged from the initial
position ξ = x(0). In case of flows, the elements of the Jacobian matrix can be expressed as the derivative
of the flow f(t, x):

$$J^t_{ij}(\xi) = \frac{\partial f_i(t, \xi)}{\partial x_j}. \tag{1}$$
The Jacobian matrix can be defined similarly for the discrete map

$$x_{m+1} = f(x_m), \tag{2}$$

where f : R^n → R^n is a piecewise differentiable vector function, and ξ = x_0:

$$J_{ij}(\xi) = \frac{\partial f_i(\xi)}{\partial x_j}. \tag{3}$$

For the mth iterate f^m of the map f, the corresponding Jacobian matrix of partial derivatives is given by
the chain rule:

$$J^m(\xi) = J(f^{m-1}(\xi)) \cdot \ldots \cdot J(f(\xi)) \cdot J(\xi). \tag{4}$$

This definition shows that the Jacobian matrix J^m is the product of local derivative matrices – or the product
of local slopes, in the case of 1D maps – along a trajectory. Since a sufficiently long trajectory visits different
parts of the attractor according to the natural measure µ_nat, the properties of this measure are also reflected
in the Jacobian matrix.
According to the so-called multiplicative ergodic theorem [1, 2], if the natural measure µnat is ergodic,
the following limits (5), (6) exist for almost all initial conditions:
$$\lim_{t\to\infty} \left(J^{t\,*}(\xi)\, J^{t}(\xi)\right)^{\frac{1}{2t}} = J_{\mathrm{flow}}, \quad\text{and}$$
$$\lim_{m\to\infty} \left(J^{m\,*}(\xi)\, J^{m}(\xi)\right)^{\frac{1}{2m}} = J_{\mathrm{map}}, \tag{5}$$

where J^*(ξ) denotes the adjoint of J(ξ). The eigenvalues of the matrices J_flow and J_map are called Lyapunov
or characteristic numbers, Λ_i, where Λ_1 ≥ Λ_2 ≥ … . Let E_i(ξ) denote the subspace corresponding to the
eigenvalues Λ_j less than or equal to Λ_i. Then E_1 ⊃ E_2 ⊃ ⋯ . Now, the practical definition of the ith
Lyapunov exponent is as follows:
$$\lim_{t\to\infty} \frac{1}{t} \log \left\| J^{t}(\xi)\, u \right\| = \lambda_i \quad\text{for flows, and}$$
$$\lim_{m\to\infty} \frac{1}{m} \log \left\| J^{m}(\xi)\, u \right\| = \lambda_i \quad\text{for maps}, \tag{6}$$

where u ∈ E_i(ξ) \ E_{i+1}(ξ) is a unit vector, ‖·‖ denotes the Euclidean norm, and we use the natural base for
logarithms. The numbers λ_i do not depend on the actual initial condition ξ, provided the natural measure
µ_nat is ergodic [1]. The whole set of Lyapunov exponents is referred to as the Lyapunov spectrum.
Following from the arguments above, the sum of the Lyapunov exponents equals the logarithm of the
determinant of the Jacobian matrices Jflow and Jmap , respectively: λ1 + · · · + λn = log |Jflow/map |. Since
this determinant gives the average rate of expansion of phase-space volumes, this sum must be zero in case
of conservative systems, and negative for dissipative ones.
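
As a minimal illustration of the definition (6) for a 1D map – where, according to (4), the Jacobian reduces to the product of local slopes – the following sketch (not one of the attached example codes) estimates the Lyapunov exponent of the logistic map x_{m+1} = r x_m (1 − x_m) by averaging log|f'(x_m)| along a trajectory; for r = 4 the result should approach ln 2. The initial condition and iteration counts are arbitrary choices made here.

    #include <cmath>
    #include <cstdio>

    // Lyapunov exponent of the logistic map as the average of log|f'(x)| along a
    // trajectory, cf. (4) and (6).
    int main()
    {
        const double r = 4.0;
        double x = 0.3;
        const int transient = 1000, iterations = 1000000;

        for (int i = 0; i < transient; ++i)          // discard the transient
            x = r * x * (1.0 - x);

        double sum_log = 0.0;
        for (int i = 0; i < iterations; ++i) {
            sum_log += std::log(std::fabs(r * (1.0 - 2.0 * x)));  // log of the local slope
            x = r * x * (1.0 - x);
        }
        std::printf("lambda = %g (ln 2 = %g)\n", sum_log / iterations, std::log(2.0));
        return 0;
    }
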
The directions belonging to positive and negative Lyapunov exponents are referred to as unstable and
stable directions, respectively. Since in dissipative systems the phase-space volumes are contracted, at least
one Lyapunov exponent must be negative. In case of flows, there is no contraction and no expansion along
the direction of the flow, thus the Lyapunov exponent corresponding to this direction is zero.

1.2 Divergence of trajectories


As many Lyapunov exponents can be defined as there are phase space dimensions, but one of them has a
distinguished role in the theory of dynamical systems: the maximal Lyapunov exponent λ. Consider two
nearby points in the phase space of a chaotic system, and start trajectories from these points. Let the
distance between these trajectories at time t be given by the vector δx(t). For almost every choice of δx(0),
the maximal Lyapunov exponent λ describes the rate of stretching of this distance vector:

$$|\delta x(t)| \approx e^{\lambda t}\, |\delta x(0)|, \qquad |\delta x(0)| \ll 1, \quad t \gg 0. \tag{7}$$
The positivity of the maximal Lyapunov exponent indicates the separation of trajectories: nearby trajectories
diverge as time passes, until the separation attains the characteristic size of the system.
This phenomenon is called sensitive dependence on initial conditions. Note that divergence of trajectories
may occur around center points, too, but this divergence is quite slow, i.e., its rate is polynomial. We speak
of sensitive dependence on initial conditions only if this separation is exponentially fast.
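
A rough way to observe the divergence (7) numerically is to follow two nearby trajectories and rescale their separation regularly. The sketch below (not one of the attached codes) does this for the Lorenz system (14) of Section 2, using the classical parameter values and a simple explicit Euler integrator; the step size, the initial separation and the integration time are assumptions made here, and the more systematic variational method of Section 2 is preferable in practice.

    #include <cmath>
    #include <cstdio>

    // Maximal Lyapunov exponent from two nearby trajectories, cf. (7): after every
    // Euler step the separation is rescaled back to d0, and the logarithms of the
    // stretching factors are accumulated.
    static void lorenz(const double x[3], double dx[3])
    {
        const double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0;  // classical values
        dx[0] = -sigma * x[0] + sigma * x[1];
        dx[1] = rho * x[0] - x[1] - x[0] * x[2];
        dx[2] = -beta * x[2] + x[0] * x[1];
    }

    int main()
    {
        const double h = 1e-4, d0 = 1e-8, T = 200.0;
        double x[3] = {1.0, 1.0, 20.0};           // reference trajectory
        double y[3] = {1.0 + d0, 1.0, 20.0};      // perturbed trajectory
        double sum_log = 0.0;
        const long steps = (long)(T / h);

        for (long n = 0; n < steps; ++n) {
            double fx[3], fy[3];
            lorenz(x, fx);
            lorenz(y, fy);
            for (int i = 0; i < 3; ++i) { x[i] += h * fx[i]; y[i] += h * fy[i]; }

            double d = 0.0;
            for (int i = 0; i < 3; ++i) d += (y[i] - x[i]) * (y[i] - x[i]);
            d = std::sqrt(d);
            sum_log += std::log(d / d0);
            for (int i = 0; i < 3; ++i)           // rescale the separation back to d0
                y[i] = x[i] + (y[i] - x[i]) * (d0 / d);
        }
        std::printf("lambda_max ~ %g\n", sum_log / T);  // roughly 0.9 at these parameters
        return 0;
    }
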
The state of a deterministic system at time t (or at the mth iteration step in case of maps) is fully
determined by its initial conditions, while in a stochastic system, the actual state is influenced also by some
noise. However, if the system has the property of sensitive dependence on initial conditions, the predictability
is lost after a short time, since the initial conditions can never be specified to infinite precision. Thus, the
behaviour of a deterministic system with sufficiently complicated dynamics may seem to be random, too [4].
The sensitive dependence on initial conditions is a necessary condition of chaos. But it does not lead in
itself to chaos, since λ is positive in the neighbourhood of repelling fixed points, too, where the trajectories
tend to infinity, which is not a complicated behaviour. Another necessary condition of chaos is the
mixing of trajectories – this is fulfilled if the solutions are confined to a bounded set, e.g., to an attractor.
Note that the case of more than one positive Lyapunov exponent is often called hyperchaos [3].

1.3 Lyapunov exponents in case of Poincaré maps


In engineering problems, maps usually arise as discretized versions of flows, because maps are easier to
handle than flows. This discretization is usually done by the construction of Poincaré maps: one observes
the flow when a special event happens. This special event can be defined as the intersection of the flow with
a hypersurface Σ in the phase space M of the continuous-time dynamical system. Successive intersections
of this Poincaré surface by the trajectory define the Poincaré map. A special kind of Poincaré map can be
constructed if the flow is observed periodically, with a constant period; the obtained map is called a
stroboscopic map. Certain relations hold between the Lyapunov exponents of a Poincaré map and the
Lyapunov exponents of the underlying flow. Poincaré maps have one fewer Lyapunov exponent than flows,
since the exponent belonging to the direction of the flow is discarded by the construction of the map. For
the other exponents the following relation holds [1, 3]:

$$\lambda_i^{\mathrm{map}} = \lambda_i^{\mathrm{flow}}\, \tau_m, \tag{8}$$

where τ_m is the average time between two crossings of the Poincaré surface Σ.
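For instance, with hypothetical values λ_1^flow = 0.9 s⁻¹ and an average return time τ_m = 0.75 s between crossings of Σ, relation (8) would give λ_1^map = 0.9 · 0.75 = 0.675 per iteration of the Poincaré map.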

2 Algorithm for the determination of the Lyapunov spectrum


The algorithm described below is based on the definition (6) [8, 9]. Consider first a set of N ordinary
differential equations. To be consistent with the C++ code given below, we start the enumeration of indices
from 0:

$$\dot x_0 = f_0(x_0, \ldots, x_{N-1})$$
$$\vdots \tag{9}$$
$$\dot x_{N-1} = f_{N-1}(x_0, \ldots, x_{N-1})$$

To measure the rate of divergence of neighbouring trajectories, we also examine the time evolution of another
trajectory that is close to the solution ξ_i, i = 0, …, N − 1 of (9):

$$\frac{d}{dt}\left(\xi_0 + \Delta x_0\right) = f_0(\xi_0 + \Delta x_0, \ldots, \xi_{N-1} + \Delta x_{N-1})$$
$$\vdots \tag{10}$$
$$\frac{d}{dt}\left(\xi_{N-1} + \Delta x_{N-1}\right) = f_{N-1}(\xi_0 + \Delta x_0, \ldots, \xi_{N-1} + \Delta x_{N-1})$$
Here the differences Δx_i, i = 0, …, N − 1 are new unknown functions. By expanding the right-hand sides of
the equations into Taylor series in these variables, about the original solution ξ ∈ R^N of (9), we obtain

$$\dot\xi_i + \Delta\dot{x}_i = f_i(\xi) + \sum_{j=0}^{N-1} \frac{\partial f_i(\xi)}{\partial x_j}\, \Delta x_j, \qquad i = 0, \ldots, N-1. \tag{11}$$

Since $\dot\xi_i = f_i(\xi)$, i = 0, …, N − 1, we arrive at

$$\Delta\dot{x}_i = \sum_{j=0}^{N-1} \frac{\partial f_i(\xi)}{\partial x_j}\, \Delta x_j, \qquad i = 0, \ldots, N-1. \tag{12}$$

To apply the definition (6), one has to perform the following steps:
• Choose the initial conditions of (9) on the attractor – or at least in the domain of attraction of the
attractor.
• Find N orthonormal initial condition vectors (separation vectors) Δx^k, k = 0, …, N − 1 for (12) –
practically, Δx^k_i = δ_ik is a good choice, where δ_ik = 1 if i = k, and δ_ik = 0 if i ≠ k.
• Integrate equations (9) to obtain a solution ξ. This solution is used during the integration of the N
sets of equations of the form (12), according to the N orthonormal initial condition vectors. Thus,
N(N + 1) scalar differential equations must be solved simultaneously.
• As time goes on, the separation vectors Δx^k stretch and turn towards the unstable manifold corre-
sponding to the maximal Lyapunov exponent λ_1. To determine the whole Lyapunov spectrum, one has
to re-orthonormalize the separation vectors regularly, using the Gram-Schmidt procedure. Due to the
orthogonality of the vectors, these will be stretched or contracted according to the different Lyapunov
exponents.
• To obtain the Lyapunov spectrum, the norms of the stretched vectors are calculated just before the
application of the Gram-Schmidt method, and the logarithms of these norms are summed in N
variables. The Lyapunov exponents can be calculated by dividing the values of these variables by the
integration time (see the sketch after this list).

• Once one has the Lyapunov spectrum, it is worth calculating the Kaplan-Yorke dimension, too:

$$D_{KY} = j + \frac{\sum_{i=1}^{j} \lambda_i}{|\lambda_{j+1}|}, \tag{13}$$

where j is the largest integer for which λ_1 + ⋯ + λ_j ≥ 0, and λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_N. Thus, the exponents
must be arranged in decreasing order before the calculation of D_KY.
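
The following fragment is a minimal sketch of the two bookkeeping steps mentioned above: the Gram-Schmidt re-orthonormalization together with the accumulation of the logarithms of the norms, and the Kaplan-Yorke dimension (13). It is not taken from the attached files; the function names and the flat storage of the N separation vectors in a single array are assumptions made here for illustration.

    #include <algorithm>
    #include <cmath>
    #include <functional>
    #include <vector>

    // dx stores N separation vectors of length N, vector k starting at dx[k*N].
    // sum_log[k] accumulates log(norm) of the kth vector; dividing it by the total
    // integration time yields the kth Lyapunov exponent.
    void gram_schmidt_step(int N, std::vector<double>& dx, std::vector<double>& sum_log)
    {
        for (int k = 0; k < N; ++k) {
            double* v = &dx[k * N];
            for (int j = 0; j < k; ++j) {              // remove projections onto the
                const double* w = &dx[j * N];          // already orthonormalized vectors
                double proj = 0.0;
                for (int i = 0; i < N; ++i) proj += v[i] * w[i];
                for (int i = 0; i < N; ++i) v[i] -= proj * w[i];
            }
            double norm = 0.0;
            for (int i = 0; i < N; ++i) norm += v[i] * v[i];
            norm = std::sqrt(norm);
            sum_log[k] += std::log(norm);              // accumulate before normalizing
            for (int i = 0; i < N; ++i) v[i] /= norm;
        }
    }

    // Kaplan-Yorke dimension (13); the exponents are sorted in decreasing order first.
    double kaplan_yorke(std::vector<double> lambda)
    {
        std::sort(lambda.begin(), lambda.end(), std::greater<double>());
        double s = 0.0;
        std::size_t j = 0;
        while (j < lambda.size() && s + lambda[j] >= 0.0) { s += lambda[j]; ++j; }
        if (j == lambda.size()) return (double)lambda.size();   // all partial sums non-negative
        return (double)j + s / std::fabs(lambda[j]);
    }
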
This procedure is illustrated in the file lor_lyap_eng.cc via the example of the Lorenz system:

$$\dot x_0 = -\sigma x_0 + \sigma x_1$$
$$\dot x_1 = \rho x_0 - x_1 - x_0 x_2 \tag{14}$$
$$\dot x_2 = -\beta x_2 + x_0 x_1.$$

The Jacobian matrix can be given in the form

$$J = \begin{pmatrix} -\sigma & \sigma & 0 \\ \rho - x_2 & -1 & -x_0 \\ x_1 & x_0 & -\beta \end{pmatrix}. \tag{15}$$

There are two fixed points if ρ > 1:

$$P_\pm = \left( \pm\sqrt{\beta(\rho-1)},\ \pm\sqrt{\beta(\rho-1)},\ \rho - 1 \right), \tag{16}$$

that are asymptotically stable if

$$\rho < \rho_{cr} = \frac{\sigma(\sigma + \beta + 3)}{\sigma - \beta - 1}. \tag{17}$$

Here σ > β + 1 is assumed, which is fulfilled by the standard parameters β = 8/3 and σ = 10. At these
parameter values we have ρ_cr ≈ 24.7368. As ρ exceeds ρ_cr, the points P_± become unstable via a subcritical
Hopf bifurcation. As numerical investigations show [5], there are no stable limit cycles in the entire range
1 < ρ < ρ_cr: as ρ is decreased from ρ_cr, the unstable limit cycles expand and touch the origin at ρ ≈ 13.926,
where a homoclinic bifurcation occurs. Thus, in the parameter range 1 < ρ < 13.926, the trajectories
immediately tend to one of the stable fixed points P_±. On the other hand, at ρ_crisis ≈ 24.06 the so-called
Lorenz attractor is born in a crisis [6, 7], and for ρ > 24.06 the trajectories tend to the fixed points
only with zero probability. In the range 13.926 ≲ ρ ≲ 24.06, transient chaotic motion occurs. According to
these results, we chose the initial conditions on the line segment between the fixed points P_±.
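
For orientation, the fragment below is a compact sketch of the whole procedure for the Lorenz system; the attached lor_lyap_eng.cc is the authoritative implementation. The explicit Euler scheme, the step size, the re-orthonormalization period and the total integration time used here are simplifying assumptions. At the classical parameter values (σ = 10, β = 8/3, ρ = 28) the spectrum is roughly (0.9, 0, −14.6), so the Kaplan-Yorke dimension is about 2.06.

    #include <cmath>
    #include <cstdio>

    // Sketch of the full algorithm for the Lorenz system (14): the original equations
    // and N = 3 copies of the variational equations (12), with the Jacobian (15), are
    // integrated together; the separation vectors are re-orthonormalized regularly.
    static const double SIGMA = 10.0, RHO = 28.0, BETA = 8.0 / 3.0;

    static void rhs(const double x[3], const double dx[3][3], double fx[3], double fdx[3][3])
    {
        fx[0] = -SIGMA * x[0] + SIGMA * x[1];
        fx[1] = RHO * x[0] - x[1] - x[0] * x[2];
        fx[2] = -BETA * x[2] + x[0] * x[1];
        for (int k = 0; k < 3; ++k) {                  // k: index of the separation vector
            fdx[k][0] = -SIGMA * dx[k][0] + SIGMA * dx[k][1];
            fdx[k][1] = (RHO - x[2]) * dx[k][0] - dx[k][1] - x[0] * dx[k][2];
            fdx[k][2] = x[1] * dx[k][0] + x[0] * dx[k][1] - BETA * dx[k][2];
        }
    }

    int main()
    {
        const double h = 1e-4, T = 500.0;
        const int reortho = 100;                       // Gram-Schmidt every 100 steps
        double x[3] = {4.0, 4.0, 27.0};                // on the segment between P+ and P-
        double dx[3][3] = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double sum_log[3] = {0.0, 0.0, 0.0};
        const long steps = (long)(T / h);

        for (long n = 1; n <= steps; ++n) {
            double fx[3], fdx[3][3];
            rhs(x, dx, fx, fdx);
            for (int i = 0; i < 3; ++i) x[i] += h * fx[i];
            for (int k = 0; k < 3; ++k)
                for (int i = 0; i < 3; ++i) dx[k][i] += h * fdx[k][i];

            if (n % reortho == 0) {                    // Gram-Schmidt re-orthonormalization
                for (int k = 0; k < 3; ++k) {
                    for (int j = 0; j < k; ++j) {
                        double p = 0.0;
                        for (int i = 0; i < 3; ++i) p += dx[k][i] * dx[j][i];
                        for (int i = 0; i < 3; ++i) dx[k][i] -= p * dx[j][i];
                    }
                    double norm = 0.0;
                    for (int i = 0; i < 3; ++i) norm += dx[k][i] * dx[k][i];
                    norm = std::sqrt(norm);
                    sum_log[k] += std::log(norm);
                    for (int i = 0; i < 3; ++i) dx[k][i] /= norm;
                }
            }
        }
        for (int k = 0; k < 3; ++k)
            std::printf("lambda_%d ~ %g\n", k, sum_log[k] / T);
        return 0;
    }
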

3 Generalization of the algorithm for delay differential equations


The algorithm can be generalized for delay differential equations, whose phase space is infinite dimensional.
The following procedure is a variant of what was published in [10]. The number of equations will be denoted
by D, and we assume that the time delay τ is the same for all equations:

$$\dot y_0(t) = f_0(y_0(t), y_0(t-\tau), \ldots, y_{D-1}(t), y_{D-1}(t-\tau))$$
$$\vdots \tag{18}$$
$$\dot y_{D-1}(t) = f_{D-1}(y_0(t), y_0(t-\tau), \ldots, y_{D-1}(t), y_{D-1}(t-\tau))$$

The state of the system described by this set of delay differential equations is determined by the functions
y_0(t), …, y_{D-1}(t) on the interval [t − τ, t]. The main idea is that these functions can be approximated by
(N + 1) equidistant samples taken from the interval [t − τ, t], with time step h = τ/N. Thus, we can introduce

S = D(N + 1) new variables:

$$x_0 = y_0(t)$$
$$x_1 = y_0(t - \tau/N)$$
$$\vdots$$
$$x_N = y_0(t - \tau)$$
$$x_{N+1} = y_1(t)$$
$$x_{N+2} = y_1(t - \tau/N)$$
$$\vdots$$
$$x_{2N+1} = y_1(t - \tau) \tag{19}$$
$$\vdots$$
$$x_{i(N+1)} = y_i(t)$$
$$\vdots$$
$$x_{i(N+1)+N} = y_i(t - \tau)$$
$$\vdots$$
$$x_{S-1} = y_{D-1}(t - \tau)$$

In the next step, the system of differential equations is approximated by discrete mappings. We choose
the simplest integration scheme, Euler's method. Thus, the scalar maps assume the following forms for
i, j = 0, …, D − 1, l = 1, …, N:

$$x'_0 = x_0 + h f_0(x_0, x_N, \ldots, x_{j(N+1)}, x_{j(N+1)+N}, \ldots, x_{(D-1)(N+1)}, x_{S-1})$$
$$x'_1 = x_0$$
$$\vdots$$
$$x'_N = x_{N-1}$$
$$x'_{N+1} = x_{N+1} + h f_1(x_0, x_N, \ldots, x_{j(N+1)}, x_{j(N+1)+N}, \ldots, x_{(D-1)(N+1)}, x_{S-1})$$
$$x'_{N+2} = x_{N+1}$$
$$\vdots$$
$$x'_{2N+1} = x_{2N}$$
$$\vdots$$
$$x'_{i(N+1)} = x_{i(N+1)} + h f_i(x_0, x_N, \ldots, x_{j(N+1)}, x_{j(N+1)+N}, \ldots, x_{(D-1)(N+1)}, x_{S-1}) \tag{20}$$
$$\vdots$$
$$x'_{i(N+1)+l} = x_{i(N+1)+l-1}$$
$$\vdots$$
$$x'_{S-1} = x_{S-2}$$

Thus, D is the dimension of the system without delay, N + 1 is the number of samples, and S =
D(N + 1) denotes the dimension of the new phase space of the discrete mapping.
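
The fragment below sketches one step of the map (20) for the simplest case D = 1, so that S = N + 1. The concrete right-hand side (an Ikeda-type delay equation y' = −y + A sin(y(t − τ))) and its parameter are only assumed examples for this sketch and are not taken from the attached Ikeda code.

    #include <cmath>
    #include <vector>

    // One Euler step (20) for a scalar delay equation dy/dt = f(y(t), y(t - tau)).
    // The vector x stores the samples (19): x[0] = y(t), x[1] = y(t - tau/N), ...,
    // x[N] = y(t - tau). The step size must be h = tau/N, so that each shift moves
    // the samples by exactly one delay step.
    static double f(double y, double y_delayed)
    {
        const double A = 20.0;                         // assumed parameter value
        return -y + A * std::sin(y_delayed);
    }

    static void euler_delay_step(std::vector<double>& x, double h)
    {
        const std::size_t N = x.size() - 1;
        const double ynew = x[0] + h * f(x[0], x[N]);  // x'_0 = x_0 + h f(x_0, x_N)
        for (std::size_t l = N; l >= 1; --l)           // x'_l = x_{l-1}
            x[l] = x[l - 1];
        x[0] = ynew;
    }
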
To determine the Lyapunov spectrum, S-dimensional orthonormal separation vectors must be introduced.
The elements of these vectors can be found in the array x: x_S is the 0th element of the 0th vector, x_{(j+1)S+k}
is the kth element of the jth vector, x_{(j+2)S-1} is the last element of the jth vector, and x_{S²+S-1} is the last
element of the last (i.e., the (S − 1)st) vector, where k, j = 0, …, S − 1.
Although there are S variables, the right-hand sides of the equations depend only on the actual and the
delayed values of the original variables y_i. Thus, it is sufficient to store only the corresponding derivatives
of the functions f_i. For the sake of computational simplicity, we defined in the C++ code an array of
functions – func_array, now denoted by F. These functions all depend on the actual state, an array of
parameters (param_array), the actual time, etc. The elements of the function array F are the following:

$$F_0 = f_0$$
$$F_1 = f_1$$
$$\vdots$$
$$F_{D-1} = f_{D-1}$$
$$F_D = \frac{\partial f_0}{\partial y_0(t)}$$
$$F_{D+1} = \frac{\partial f_0}{\partial y_0(t-\tau)}$$
$$F_{D+2} = \frac{\partial f_0}{\partial y_1(t)}$$
$$F_{D+3} = \frac{\partial f_0}{\partial y_1(t-\tau)} \tag{21}$$
$$\vdots$$
$$F_{D+2j} = \frac{\partial f_0}{\partial y_j(t)}$$
$$F_{D+2j+1} = \frac{\partial f_0}{\partial y_j(t-\tau)}$$
$$\vdots$$
$$F_{D+2Di+2j} = \frac{\partial f_i}{\partial y_j(t)}$$
$$F_{D+2Di+2j+1} = \frac{\partial f_i}{\partial y_j(t-\tau)}$$
$$\vdots$$

where i, j = 0, …, D − 1. Thus, each function f_i has 2D derivatives stored in the function array F. The
arguments of the functions f_i are not shown for brevity (see (18) and (20)). y_i(t) denotes the actual value
of a variable, while y_i(t − τ) refers to the delayed value of the same variable.
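
For the simplest case D = 1, the array (21) contains only three entries: f_0 and its two partial derivatives. The fragment below sketches how func_array could be filled for the assumed Ikeda-type example used above; the function signature is an assumption made here for illustration, not the one used in the attached codes.

    #include <cmath>

    // Assumed signature: the state array x (with x[0] = y(t) and x[N] = y(t - tau)),
    // a parameter array, the time, and the number of delay samples N.
    typedef double (*dde_func)(const double* x, const double* param, double t, int N);

    static double f0(const double* x, const double* p, double, int N)        // F_0 = f_0
    { return -x[0] + p[0] * std::sin(x[N]); }

    static double df0_dy(const double*, const double*, double, int)          // F_1 = df_0/dy(t)
    { return -1.0; }

    static double df0_dydel(const double* x, const double* p, double, int N) // F_2 = df_0/dy(t - tau)
    { return p[0] * std::cos(x[N]); }

    static dde_func func_array[] = { f0, df0_dy, df0_dydel };
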
To examine the evolution of the separation vectors, the discretized equations (20) are considered in the case
of a slightly modified solution x + Δx. The indices are in the ranges i, j = 0, …, D − 1, l = 1, …, N:

$$x'_{i(N+1)} + \Delta x'_{i(N+1)} = x_{i(N+1)} + \Delta x_{i(N+1)} + h f_i(\ldots, x_{j(N+1)} + \Delta x_{j(N+1)}, x_{j(N+1)+N} + \Delta x_{j(N+1)+N}, \ldots)$$
$$\vdots$$
$$x'_{i(N+1)+l} + \Delta x'_{i(N+1)+l} = x_{i(N+1)+l-1} + \Delta x_{i(N+1)+l-1}.$$

After linearization, we obtain

$$\Delta x'_{i(N+1)} = \Delta x_{i(N+1)} + h \sum_{j=0}^{D-1} \frac{\partial f_i(\ldots, x_{j(N+1)}, x_{j(N+1)+N}, \ldots)}{\partial x_{j(N+1)}}\, \Delta x_{j(N+1)} + h \sum_{j=0}^{D-1} \frac{\partial f_i(\ldots, x_{j(N+1)}, x_{j(N+1)+N}, \ldots)}{\partial x_{j(N+1)+N}}\, \Delta x_{j(N+1)+N} \tag{22}$$
$$\Delta x'_{i(N+1)+l} = \Delta x_{i(N+1)+l-1}.$$

According to (19) and (21), we obtain the following formula:

$$\Delta x'_{i(N+1)} = \Delta x_{i(N+1)} + h \sum_{j=0}^{D-1} F_{D+2Di+2j}\, \Delta x_{j(N+1)} + h \sum_{j=0}^{D-1} F_{D+2Di+2j+1}\, \Delta x_{j(N+1)+N} \tag{23}$$
$$\Delta x'_{i(N+1)+l} = \Delta x_{i(N+1)+l-1},$$

where the functions Fi are evaluated at the actual values of the variables x0 , . . . , xS−1 . The evolution of S
different separation vectors must be followed by the algorithm in order to calculate S Lyapunov exponents.
For the sake of simplicity, the elements of these separation vectors ∆xk can be stored in a single vector x,
together with the variables x0 , . . . , xS−1 :

$$x_S = \Delta x^1_0, \tag{24}$$
$$\vdots$$
$$x_{kS + i(N+1)} = \Delta x^k_{i(N+1)}, \tag{25}$$
$$\vdots$$

where k = 1, …, S is the index of the separation vectors, and i = 0, …, D − 1 is the index of the elements.
The calculation of the Lyapunov spectrum is done similarly to the case of ordinary differential equations:
• Choose the initial conditions of (20) on the attractor – or at least in the domain of attraction of the
attractor.
• Find S orthonormal initial condition vectors (separation vectors) Δx^k, k = 1, …, S for (23) – practically,
Δx^k_i = δ_ik is a good choice, where δ_ik = 1 if i = k, and δ_ik = 0 if i ≠ k.
• Iterate equations (20) to obtain a solution ξ. This solution is used during the iteration of the S sets of
equations of the form (23), according to the S orthonormal initial condition vectors. Thus, S(S + 1)
scalar difference equations (mappings) must be solved simultaneously.
• As time goes on, the separation vectors Δx^k stretch and turn towards the unstable manifold corre-
sponding to the maximal Lyapunov exponent λ_1. To determine the whole Lyapunov spectrum, one has
to re-orthonormalize the separation vectors regularly, using the Gram-Schmidt procedure. Due to the
orthogonality of the vectors, these will be stretched or contracted according to the different elements
of the Lyapunov spectrum.
• To obtain the Lyapunov spectrum, the norms of the stretched vectors are calculated just before the
application of the Gram-Schmidt method, and the logarithms of these norms are summed in S
variables. The Lyapunov exponents can be calculated by dividing the values of these variables by the
integration time.
• The Kaplan-Yorke dimension can be calculated according to (13). A sketch of the main loop is given
after this list.
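
As referred to in the last item, the following fragment sketches the main loop for the scalar (D = 1) assumed Ikeda-type example used above: the state and all S = N + 1 separation vectors are advanced by (20) and (23), and the Gram-Schmidt step is applied regularly. The parameter values, N, the number of steps and the re-orthonormalization period are arbitrary choices for this sketch, not those of the attached codes.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const double A = 20.0, tau = 1.0;              // assumed Ikeda-type example
        const int N = 50, S = N + 1;
        const double h = tau / N;                      // fixed by the discretization (19)
        const long steps = 100000;
        const int reortho = 50;

        std::vector<double> x(S, 0.5);                 // constant initial function
        std::vector<std::vector<double>> dx(S, std::vector<double>(S, 0.0));
        std::vector<double> sum_log(S, 0.0);
        for (int k = 0; k < S; ++k) dx[k][k] = 1.0;    // orthonormal separation vectors

        for (long n = 1; n <= steps; ++n) {
            const double F1 = -1.0;                    // df_0/dy(t),       cf. (21)
            const double F2 = A * std::cos(x[N]);      // df_0/dy(t - tau), cf. (21)
            const double xnew = x[0] + h * (-x[0] + A * std::sin(x[N]));   // (20)
            for (int k = 0; k < S; ++k) {              // linearized step (23)
                const double dnew = dx[k][0] + h * (F1 * dx[k][0] + F2 * dx[k][N]);
                for (int l = N; l >= 1; --l) dx[k][l] = dx[k][l - 1];
                dx[k][0] = dnew;
            }
            for (int l = N; l >= 1; --l) x[l] = x[l - 1];
            x[0] = xnew;

            if (n % reortho == 0) {                    // Gram-Schmidt re-orthonormalization
                for (int k = 0; k < S; ++k) {
                    for (int j = 0; j < k; ++j) {
                        double p = 0.0;
                        for (int i = 0; i < S; ++i) p += dx[k][i] * dx[j][i];
                        for (int i = 0; i < S; ++i) dx[k][i] -= p * dx[j][i];
                    }
                    double norm = 0.0;
                    for (int i = 0; i < S; ++i) norm += dx[k][i] * dx[k][i];
                    norm = std::sqrt(norm);
                    sum_log[k] += std::log(norm);
                    for (int i = 0; i < S; ++i) dx[k][i] /= norm;
                }
            }
        }
        const double T = steps * h;
        for (int k = 0; k < 5; ++k)                    // the few largest exponents
            std::printf("lambda_%d ~ %g\n", k, sum_log[k] / T);
        return 0;
    }
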

References
[1] J.-P. Eckmann, D. Ruelle, Ergodic Theory of Chaos and Strange Attractors, Reviews of Modern Physics,
57 (3), Part I, pp. 617-656, 1985
[2] V.I. Oseledec, A Multiplicative Ergodic Theorem. Lyapunov Characteristic Numbers for Dynamical
Systems, Trudy Mosk. Mat. Obsc., 19, 179, 1968 (Moscow Math Soc., 19, 197, 1968)
[3] H. Kantz, T. Schreiber, Nonlinear Time Series Analysis, Cambridge University Press, Cambridge,
1997

[4] R. Shaw, Strange Attractors, Chaotic Behavior, and Information Flow, Z. Naturforsch., 36a, pp. 80-
112, 1981
[5] J.A. Yorke, E.D. Yorke, Metastable Chaos: The Transition to Sustained Chaotic Behavior in the Lorenz
Model, Journal of Statistical Physics, 21 (3), 1979
[6] C. Grebogi, E. Ott, J.A. Yorke, Crises, Sudden Changes in Chaotic Attractors, and Transient Chaos,
Physica, 7D, pp. 181-200, 1983
[7] J.A. Kaplan, J.A. Yorke, Preturbulence: A Regime Observed in a Fluid Flow Model of Lorenz, Commun.
Math. Phys, 67, pp. 93-108, 1979
[8] A. Wolf, J.B. Swift, H.L. Swinney, J.A. Vastano, Determining Lyapunov Exponents From a Time
Series, Physica 16D, pp. 285-317, 1985
[9] J.C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, Oxford, 2003, pp. 116-117
[10] J.D. Farmer, Chaotic attractors of an infinite-dimensional system, Physica 4D, pp. 366-393, 1982
