
CHAPTER I - Introduction

NONLINEAR SYSTEMS

JOSÉ C. GEROMEL

DSCE / School of Electrical and Computer Engineering


UNICAMP, CP 6101, 13081-970, Campinas, SP, Brazil,
geromel@dsce.fee.unicamp.br

Campinas, Brazil, August 2008


Contents

1 CHAPTER I - Introduction
Note to the reader
Introduction
Preliminaries
Linear differential equations
Transfer function and frequency response
Nonlinear differential equations
Preliminaries
Existence and uniqueness
Periodic solutions
Equilibrium points
Linearization
Problems


Note to the reader

This text is based on the following main references:

J. C. Geromel and R. H. Korogui, “Controle Linear de Sistemas Dinâmicos: Teoria, Ensaios Práticos e Exercícios” (in Portuguese), Edgard Blücher Ltda, 2011.

H. K. Khalil, “Nonlinear Systems”, Macmillan Publishing Co., 1992.

M. Vidyasagar, “Nonlinear Systems Analysis”, Prentice Hall, NJ, 1993.


Introduction

Preliminaries

Nonlinear systems analysis is based on the following observations:

Classes of nonlinear systems are identified, allowing the adoption of specific results to analyze them. For example, the Lur’e and Persidskii nonlinear systems.
Some approximations are adopted, for instance in the time domain (power series) and in the frequency domain (Fourier series).
Global and local analysis, making explicit whether properties hold in the whole state space or only in certain regions or neighborhoods.


The next figure shows a pendulum of length ℓ and mass m.

[Figure: pendulum of length ℓ and mass m; the angle θ is measured from the vertical, gravity acts through the weight mg and F is the radial force along the rod.]

From Newton's Law we obtain the model

m d²/dt² (ℓ sin θ) + F sin θ = 0
m d²/dt² (ℓ cos θ) + F cos θ = mg

Eliminating the radial force F we obtain the swing equation

θ̈ + (g/ℓ) sin θ = 0

which is nonlinear. Trajectories with (θ̇(t), θ(t)) small enough for all t ≥ 0 can be obtained from the linear approximation

θ̈ + (g/ℓ) θ = 0

which is solved with no difficulty, that is

θ(t) = A sin( √(g/ℓ) t + B )

where the constants A and B are determined from the initial conditions (θ̇(0), θ(0)).

The nonlinear equation does not admit a solution θ(t) in closed form. However, defining the angular velocity ν = θ̇, noticing that

dν/dt = ν dν/dθ

and integrating ν dν + (g/ℓ) sin θ dθ = 0, we obtain the trajectories in the plane (θ, ν) given by

ν² − 2(g/ℓ) cos θ = const

Moreover, applying the same reasoning to the linear approximation we have

ν² + (g/ℓ) θ² = const


The next figure shows the trajectories in the plane (ν, θ) for
the nonlinear system (on the left side) and for the linear
approximation (on the right side):

[Figure: phase-plane trajectories in the (θ, ν) plane; left, the nonlinear pendulum; right, the linear approximation. Axes: θ (horizontal) and ν = θ̇ (vertical), both ranging from −4 to 4.]

Notice the difference for (ν, θ) far from the origin.
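
As a quick numerical illustration of these phase portraits, the sketch below integrates both models and checks the conserved quantities derived above. It is a minimal sketch assuming numpy and scipy are available; the value g/ℓ = 1 and the initial conditions are illustrative choices, not taken from the text.

```python
# Sketch: pendulum versus its linear approximation in the (theta, nu) plane.
import numpy as np
from scipy.integrate import solve_ivp

g_over_l = 1.0   # illustrative value of g/l

def pendulum(t, x):
    theta, nu = x
    return [nu, -g_over_l * np.sin(theta)]

def linearized(t, x):
    theta, nu = x
    return [nu, -g_over_l * theta]

t_span = (0.0, 20.0)
t_eval = np.linspace(*t_span, 2000)
for theta0 in (0.5, 1.5, 2.5):                 # initial angles, nu(0) = 0
    sol_nl = solve_ivp(pendulum, t_span, [theta0, 0.0], t_eval=t_eval, rtol=1e-9)
    sol_li = solve_ivp(linearized, t_span, [theta0, 0.0], t_eval=t_eval, rtol=1e-9)
    # Conserved quantities along each trajectory (should stay nearly constant):
    c_nl = sol_nl.y[1]**2 - 2 * g_over_l * np.cos(sol_nl.y[0])
    c_li = sol_li.y[1]**2 + g_over_l * sol_li.y[0]**2
    print(theta0, c_nl.std(), c_li.std())
```

Plotting sol.y[0] against sol.y[1] for several initial conditions reproduces the two phase portraits and the growing difference far from the origin.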



Linear differential equations

Our interest is in linear differential equations with the state space realization

ẋ = A(t)x + B(t)u,  x(0) = x0
y = C(t)x + D(t)u

where x(t) ∈ R^n, u(t) ∈ R^m and y(t) ∈ R^r for all t ∈ [0, T ], for some T > 0.
Generally, the final time is unbounded, that is T = +∞.
Matrices (A(t), B(t), C(t), D(t)) are time varying with compatible dimensions.
Matrices (A(t), B(t), C(t), D(t)) are continuous functions of the independent variable t ≥ 0.


Fact (Existence and uniqueness)

Under the previous assumptions, for each given initial condition x0 ∈ R^n, the linear differential equation admits a unique solution in the time interval [0, T ].

The solution follows from the so-called state transition matrix Φ(t, τ ) ∈ R^{n×n} for all (t, τ ) ∈ [0, T ] × [0, T ], which satisfies the partial differential equation

∂Φ/∂t (t, τ ) = A(t)Φ(t, τ ) ,  Φ(τ, τ ) = I


Theorem (General solution)

The general solution of the linear differential equation is

x(t) = Φ(t, 0)x0 + ∫_0^t Φ(t, τ )B(τ )u(τ ) dτ

To prove this theorem we need the following result related to the time derivative of the function

g(t) = ∫_0^t f (t, τ ) dτ

which is given by

dg/dt (t) = f (t, t) + ∫_0^t ∂f/∂t (t, τ ) dτ

The proof follows from the calculation

ẋ(t) = A(t)Φ(t, 0)x0 + B(t)u(t) + ∫_0^t A(t)Φ(t, τ )B(τ )u(τ ) dτ
     = A(t) ( Φ(t, 0)x0 + ∫_0^t Φ(t, τ )B(τ )u(τ ) dτ ) + B(t)u(t)
     = A(t)x(t) + B(t)u(t)

and from the fact that the proposed solution satisfies the initial condition, that is

x(0) = Φ(0, 0)x0 = x0


For the time invariant case, characterized by constant matrices A(t) = A, B(t) = B, C(t) = C, D(t) = D, we have

Φ(t, τ ) = e^{A(t−τ)}

where the matrix exponential of any X ∈ R^{n×n} is

e^X = Σ_{i=0}^{∞} X^i / i!

yielding the following well known solution

x(t) = e^{At} x0 + ∫_0^t e^{A(t−τ)} B u(τ ) dτ
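
A small numerical check of this formula is sketched below. The matrices A, B, the initial state and the constant input are illustrative choices, not taken from the text; the convolution integral is approximated by the trapezoidal rule.

```python
# Sketch: compare x(t) = e^{At} x0 + int_0^t e^{A(t-s)} B u(s) ds with direct integration.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = lambda t: np.array([1.0])                   # constant input, illustrative

def closed_form(t, n_quad=2000):
    # Trapezoidal approximation of the convolution integral.
    s, ds = np.linspace(0.0, t, n_quad, retstep=True)
    vals = np.array([expm(A * (t - si)) @ B @ u(si) for si in s])
    conv = ds * (vals[1:-1].sum(axis=0) + 0.5 * (vals[0] + vals[-1]))
    return expm(A * t) @ x0 + conv

sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (0.0, 5.0), x0, rtol=1e-9, atol=1e-12)
print(closed_form(5.0))                          # the two results should agree closely
print(sol.y[:, -1])
```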


Transfer function and frequency response

For time invariant systems it is useful to write the output y(t) as follows

y(t) = C e^{At} x0 + ∫_0^t C e^{A(t−τ)} B u(τ ) dτ + D u(t)
     = C e^{At} x0 + ∫_0^t { C e^{A(t−τ)} B + D δ(t − τ ) } u(τ ) dτ

where the term inside the braces is H(t − τ ), with H(t) = C e^{At} B + D δ(t) the impulse response. Notice that

y(t) = H(t) ∗ u(t)

under zero initial condition.


The transfer function H(s) is obtained from the Laplace transform of the impulse response H(t), yielding

H(s) = C(sI − A)^{−1} B + D

For SISO systems we can write H(s) = N(s)/D(s) where N(s) and D(s) are polynomials.
The poles of H(s) are the roots of D(s) = det(sI − A) = 0.
The zeros of H(s) are the roots of

N(s) = det(sI − A) H(s)
     = det [ sI − A   −B ]
           [    C      D ]
     = 0
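
Both determinant conditions translate into standard eigenvalue computations. The sketch below uses an arbitrary illustrative realization (not one from the text): the poles are the eigenvalues of A, and the zeros are the finite generalized eigenvalues associated with det([sI − A, −B; C, D]) = 0.

```python
# Sketch: poles and zeros of H(s) = C(sI - A)^{-1} B + D.
import numpy as np
from scipy.linalg import eig, eigvals

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])

poles = eigvals(A)                        # roots of det(sI - A) = 0

# Zeros: det([sI - A, -B; C, D]) = 0 is the generalized eigenvalue problem
# det(sE - F) = 0 with E = blkdiag(I, 0) and F = [[A, B], [-C, -D]].
n, m = A.shape[0], B.shape[1]
E = np.block([[np.eye(n), np.zeros((n, m))],
              [np.zeros((m, n)), np.zeros((m, m))]])
F = np.block([[A, B], [-C, -D]])
gen = eig(F, E, right=False)              # generalized eigenvalues
zeros = gen[np.isfinite(gen)]             # discard the infinite ones

print("poles:", poles)                    # here: -1 and -2
print("zeros:", zeros)                    # here: -1
```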


The frequency response of the system with transfer function H(s) is simply

H(jω) , ∀ω ∈ R

which clearly imposes that jω ∈ D(H). The imaginary axis must belong to the domain of H(s) and consequently all poles of H(s) must be located in the region Re(s) < 0.
In this case, with the input u(t) = e^{jωt}, ∀t ≥ 0, we obtain

ŷ(s) = H(s)/(s − jω)
     = R(s) + H(jω)/(s − jω)

where R(s) and H(s) have the same poles.



The fact that r(t) = L^{−1}(R(s)) vanishes as t → ∞ implies that

y(t) ≈ yperm(t) = H(jω) e^{jωt}

for t ≥ 0 large enough. The quantity yperm(t) is the steady state response of the system. Based on this we conclude that the steady state response corresponding to the input u(t) = sin(ωt), ∀t ≥ 0, is

yperm(t) = |H(jω)| sin(ωt + ∠H(jω))

The output oscillates with the same frequency as the input, with amplitude and phase depending on H(jω) exclusively.
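
The steady state formula can be checked by simulation. In the sketch below the stable transfer function H(s) = 1/(s² + 3s + 2) and the input frequency are arbitrary illustrative choices.

```python
# Sketch: steady-state response to u(t) = sin(w t) versus |H(jw)| sin(w t + angle).
import numpy as np
from scipy.signal import lti, lsim

sys = lti([1.0], [1.0, 3.0, 2.0])         # H(s) = 1 / (s^2 + 3s + 2)
w = 2.0                                    # input frequency (rad/s)

Hjw = 1.0 / ((1j * w) ** 2 + 3.0 * (1j * w) + 2.0)
t = np.linspace(0.0, 30.0, 3000)
_, y, _ = lsim(sys, U=np.sin(w * t), T=t)

# After the transient dies out, y(t) should match the predicted sinusoid:
y_pred = np.abs(Hjw) * np.sin(w * t + np.angle(Hjw))
print(np.max(np.abs(y[-500:] - y_pred[-500:])))   # small residual expected
```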


The Laplace transform domain D(f̂ ) of a function f (t) with time domain t ≥ 0 assures that the equality

f̂ (s) = ∫_0^∞ f (t) e^{−st} dt

holds for all s ∈ D(f̂ ). If jω ∈ D(f̂ ) then the equality

f̂ (jω) = ∫_0^∞ f (t) e^{−jωt} dt

holds and provides the Fourier transform of f (t). On the contrary, if jω ∉ D(f̂ ) the same equality is not verified. However, the quantity f̂ (jω) can be calculated whenever f̂ (s) does not have poles on the imaginary axis.


In a minimal state space realization, the poles of H(s) are such that det(sI − A) = 0. With A ∈ R^{n×n} the pairs of eigenvalues and eigenvectors satisfy

A vi = λi vi ,  i = 1, · · · , n ,  vi ≠ 0

where λi ∈ C and vi ∈ C^n.

Lemma
If the eigenvalues of A ∈ R^{n×n} are distinct then the eigenvectors v1, · · · , vn are linearly independent.

The proof is by contradiction: suppose there exists p ≤ n − 1 such that

vk = Σ_{i=1}^{p} αi vi


Multiplying this equality on the left by A, and separately by λk, and subtracting, we obtain

Σ_{i=1}^{p} αi (λi − λk) vi = 0

showing that the vectors v1, · · · , vp are linearly dependent as well. Applying this argument successively, until p = 1, we conclude that vp = 0 for some p = 1, · · · , n, which is a contradiction since vp is an eigenvector.

Matrix V = [v1 · · · vn] ∈ C^{n×n} is nonsingular!


Matrix V defines the so-called similarity transformation, with the following properties:
AV = V Λ where Λ ∈ C^{n×n} is the diagonal matrix

Λ = diag(λ1, · · · , λn)

The state space representation (A, B, C, D) expressed in x coordinates is converted to (V^{−1}AV, V^{−1}B, CV, D) expressed in x̃ = V^{−1}x coordinates, with invariant transfer function, that is

H̃(s) = CV (sI − V^{−1}AV )^{−1} V^{−1}B + D
      = C(sI − A)^{−1}B + D
      = H(s)


It is important to notice that if λ ∈ C is an eigenvalue of A ∈ R^{n×n} then the same occurs to the complex conjugate λ∗. Hence, by definition Av = λv implies Av∗ = λ∗v∗ and, denoting λ = σ + jω and v = vR + jvI, we have

A [vR vI] = [vR vI] [  σ  ω ]
                    [ −ω  σ ]

If a matrix A ∈ R^{n×n} has n − 2 real eigenvalues and λn−1, λn complex eigenvalues, then the similarity transformation with V = [v1, · · · , vn−2, vR, vI] ∈ R^{n×n} provides

Λ = diag( λ1, · · · , λn−2, [  σ  ω ] ) ∈ R^{n×n}
                            [ −ω  σ ]
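
The construction of this real similarity transformation from a complex eigenpair can be reproduced numerically. In the sketch below the 3 × 3 matrix A is an arbitrary illustrative choice with one real eigenvalue and one complex pair.

```python
# Sketch: real block-diagonal form V^{-1} A V built from a complex eigenpair.
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [-2.0, -1.0, 0.0],
              [0.0, 0.0, -3.0]])

lam, V = np.linalg.eig(A)
k = int(np.argmax(lam.imag))             # eigenvalue with positive imaginary part
real_idx = [i for i in range(3) if abs(lam[i].imag) <= 1e-9]

vR, vI = V[:, k].real, V[:, k].imag
Vreal = np.column_stack([V[:, real_idx].real, vR, vI])

# V^{-1} A V should be diag(real eigenvalue, [[sigma, omega], [-omega, sigma]]).
print(np.round(np.linalg.solve(Vreal, A @ Vreal), 6))
print(np.round(lam, 6))
```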


Nonlinear differential equations

Preliminaries

The nonlinear differential equation under analysis is as follows

ẋ(t) = f (t, x(t)) ,  x(0) = x0

where
x(t) ∈ R^n for t ∈ [0, T ]
f (t, x) : [0, T ] × R^n → R^n
The final time T may be unbounded
The existence of a solution follows from Picard's method.
By simple integration we obtain the equivalent version

x(t) = x0 + ∫_0^t f (τ, x(τ )) dτ ,  t ∈ [0, T ]

depending only on the trajectory x(t) for all t ∈ [0, T ].



Picard's method is an important device to solve nonlinear differential equations. It is based on the sequence of trajectories x^k(t), ∀t ∈ [0, T ], for k ∈ N, defined as

x^{k+1} = Px^k

where P is the nonlinear operator

Px = x(0) + ∫_0^t f (τ, x(τ )) dτ ,  t ∈ [0, T ]

Clearly, a solution of the previous nonlinear equation is an equilibrium trajectory of the operator P, that is x∗ = Px∗.
The method starts with any trajectory x^0(t) satisfying the initial condition x^0(0) = x0.

For illustration, let us solve the linear equation ẋ = Ax with initial condition x(0) = x0 by Picard's method. As can be easily verified, it provides the sequence

x^k(t) = Σ_{i=0}^{k} (At)^i/i! x0

which, as expected, converges (globally) to x∗(t) = e^{At} x0 for all t ∈ [0, ∞). Each iteration adds a new term of the Taylor series of e^{At} developed at t = 0.
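
The sketch below implements the iteration on a time grid and compares the result with e^{At} x0. The matrix A, the horizon T and the grid size are illustrative choices, and the integral defining P is approximated by the trapezoidal rule, so the fixed point reached is only a discretized approximation of the true solution.

```python
# Sketch: Picard iteration x^{k+1} = P x^k for xdot = A x, x(0) = x0, on a grid.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, -0.5]])
x0 = np.array([1.0, 0.0])
T, N = 2.0, 400
t = np.linspace(0.0, T, N)

def picard_step(x):
    # (Px)(t) = x0 + int_0^t A x(tau) dtau, cumulative trapezoidal integral
    f = (A @ x.T).T                        # f(tau, x(tau)) = A x(tau) on the grid
    integral = np.zeros_like(x)
    integral[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t)[:, None], axis=0)
    return x0 + integral

x = np.tile(x0, (N, 1))                    # initial guess: constant trajectory
for k in range(30):
    x = picard_step(x)

exact = np.array([expm(A * ti) @ x0 for ti in t])
print(np.max(np.abs(x - exact)))           # small after enough iterations
```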


The operator P is not unique. For the important class of nonlinear differential equations

ẋ(t) = Ax(t) + B g(t, x(t)) ,  x(0) = x0

Picard's method can be applied with:
the operator obtained from direct integration

Px = x(0) + ∫_0^t ( Ax(τ ) + B g(τ, x(τ )) ) dτ

the operator obtained from the solution of the linear part

Px = e^{At} x(0) + ∫_0^t e^{A(t−τ)} B g(τ, x(τ )) dτ

This is particularly useful for time-delay systems where g(t, x(t)) = x(t − d) for some d > 0.
Before proceeding we need to introduce some well known concepts. A linear vector space is a set of elements together with the addition and multiplication by scalar operations. For instance, the set R^n and the operations

x + y = [x1 + y1 · · · xn + yn]′ ,  αx = [αx1 · · · αxn]′ ,  α ∈ R

or the set of functions F^n defined on the interval [0, T ] and the operations

(x + y)(t) = [x1(t) + y1(t) · · · xn(t) + yn(t)]′ ,  (αx)(t) = [αx1(t) · · · αxn(t)]′ ,  α ∈ R
A normed linear space is a linear vector space X equipped with a real valued function called norm ‖ · ‖ : X → R which satisfies the following axioms:
‖x‖ ≥ 0, ∀x ∈ X and ‖x‖ = 0 iff x = 0 ∈ X
‖αx‖ = |α| ‖x‖ for any scalar α
‖x + y‖ ≤ ‖x‖ + ‖y‖, ∀x, y ∈ X
Consider the linear vector space R^n and

‖x‖p = ( Σ_{i=1}^{n} |xi|^p )^{1/p}

valid for p = 1, 2, · · · . The most important are

‖x‖1 = Σ_{i=1}^{n} |xi| ,  ‖x‖2 = ( Σ_{i=1}^{n} |xi|² )^{1/2} ,  ‖x‖∞ = max_{i=1,··· ,n} |xi|
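
For concreteness, the sketch below evaluates the three norms for an arbitrary vector and checks them against numpy.linalg.norm.

```python
# Sketch: the 1-, 2- and infinity-norms computed directly and via numpy.
import numpy as np

x = np.array([3.0, -4.0, 1.0])            # an arbitrary vector in R^3

norm_1 = np.sum(np.abs(x))
norm_2 = np.sqrt(np.sum(np.abs(x) ** 2))
norm_inf = np.max(np.abs(x))

assert np.isclose(norm_1, np.linalg.norm(x, 1))
assert np.isclose(norm_2, np.linalg.norm(x, 2))
assert np.isclose(norm_inf, np.linalg.norm(x, np.inf))
print(norm_1, norm_2, norm_inf)
```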


Consider a sequence {x^k}_{k=0}^{∞} in a normed linear space (X, ‖ · ‖). It converges to x∗ ∈ X if for every ε > 0 there exists an integer N(ε) such that

‖x^k − x∗‖ < ε, ∀k ≥ N(ε)

Notice that, to apply this test, x∗ must be known.
A sequence {x^k}_{k=0}^{∞} in a normed linear space (X, ‖ · ‖) is a Cauchy sequence if for every ε > 0 there exists an integer N(ε) such that

‖x^k − x^r‖ < ε, ∀k, r ≥ N(ε)


Any convergent sequence is a Cauchy sequence. The converse is not true since there exist Cauchy sequences which do not converge to any element x∗ ∈ X.
A normed linear space (X, ‖ · ‖) is a complete normed linear space, or a Banach space, if each Cauchy sequence converges to some element of X.
Important example: X = C[0,T] is the set of continuous functions f (t) : [0, T ] → R and ‖f‖ = max_{t∈[0,T]} |f (t)|. Notice that the first two axioms hold and

‖f1 + f2‖ = max_{t∈[0,T]} |f1(t) + f2(t)|
          ≤ max_{t∈[0,T]} ( |f1(t)| + |f2(t)| )
          ≤ ‖f1‖ + ‖f2‖

On the other hand, with ε > 0 arbitrarily small and any convergent sequence f^k ∈ X, the constraint

‖f^k − f∗‖ = max_{t∈[0,T]} |f^k(t) − f∗(t)| < ε

implies that f∗ is continuous and consequently (X, ‖ · ‖) is a Banach space.
The same set of continuous functions X = C[0,T], together with the Euclidean norm

‖f‖ = ( ∫_0^T f (τ )² dτ )^{1/2}

is not complete. Hence, it is not a Banach space.


Indeed, this follows from the fact that it is possible to generate a sequence of continuous functions f^k ∈ X that converges to a discontinuous function f∗ ∉ X. The next figure shows in solid line the discontinuous function, defined on the interval [−1, 1] (here X = C[−1,1]),

f∗(t) = 1 for 0 < t ≤ 1 ,  f∗(t) = −1 for −1 ≤ t < 0

[Figure: the discontinuous function f∗ (solid line) and the continuous approximations f^k (dashed lines) on the interval [−1, 1].]


The same figure shows in dashed lines the sequence of continuous functions

f^k(t) = 1 − e^{−t/τk} for 0 < t ≤ 1 ,  f^k(t) = −1 + e^{t/τk} for −1 ≤ t < 0

where τk > 0 goes to zero as k goes to infinity. With the Euclidean norm we obtain

‖f^k − f∗‖² = τk (1 − e^{−2/τk}) < τk

which shows that the sequence of continuous functions f^k converges to the discontinuous function f∗.
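
The norm computation can be verified numerically; the sketch below integrates |f^k(t) − f∗(t)|² on [−1, 1] for a few illustrative values of τk and compares the result with the closed-form expression above.

```python
# Sketch: check ||f^k - f*||^2 = tau_k (1 - e^{-2/tau_k}) on [-1, 1].
import numpy as np
from scipy.integrate import quad

def check(tau):
    # f^k - f* equals -e^{-t/tau} on (0, 1] and e^{t/tau} on [-1, 0);
    # only its square matters for the integral below.
    sq_norm = (quad(lambda t: np.exp(-2.0 * t / tau), 0.0, 1.0)[0]
               + quad(lambda t: np.exp(2.0 * t / tau), -1.0, 0.0)[0])
    return sq_norm, tau * (1.0 - np.exp(-2.0 / tau))

for tau in (0.5, 0.1, 0.01):
    numeric, formula = check(tau)
    print(tau, numeric, formula)           # the two values should agree
```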


Consider (X, ‖ · ‖) and (Y, ‖ · ‖) normed linear spaces. The function f : X → Y is uniformly continuous if for every ε > 0 there exists δ(ε), independent of z ∈ X, such that

‖f (x) − f (z)‖ < ε whenever ‖x − z‖ < δ(ε)

If f (x) is continuous at x0 ∈ X then a convergent sequence x^k → x0 provides a convergent sequence f (x^k) → f (x0).
A particularly important concept is the inner product of elements x, y ∈ X. For instance:
For X = R^n, x · y = Σ_{i=1}^{n} xi yi = x′y.
For X = C^n_{[0,T]}, x · y = ∫_0^T x(t)′ y(t) dt.
The quantity √(x · x) is a norm of x ∈ X.



A complete normed linear space (X, ‖ · ‖) with ‖x‖ = √(x · x) is called a Hilbert space.
Notice that:
The normed linear space (R^n, ‖ · ‖) with ‖x‖ = √(x′x) is a Hilbert space.
The normed linear space (C^n_{[0,T]}, ‖ · ‖) with ‖x‖ = ( ∫_0^T x(t)′x(t) dt )^{1/2} is not a Hilbert space.
The normed linear space (C^n_{[0,T]}, ‖ · ‖) with ‖x‖ = max_{t∈[0,T]} √(x(t)′x(t)) is not a Hilbert space, but it is a Banach space.


Existence and uniqueness

The analysis of existence and uniqueness of a solution of a given nonlinear differential equation depends on two important results to be discussed in the sequel.

Theorem (Contraction mapping theorem)
Let (X, ‖ · ‖) be a Banach space and Q : X → X be an operator. If there exists 0 ≤ ρ < 1 such that

‖Qx − Qy‖ ≤ ρ‖x − y‖, ∀x, y ∈ X

then the following hold:
There exists a unique x∗ ∈ X satisfying x∗ = Qx∗.
The sequence x^{k+1} = Qx^k, x^0 = x0, converges to x∗ for all x0 ∈ X.
For all k ≥ 1 we have ‖x^k − x∗‖ ≤ (ρ^k/(1 − ρ)) ‖Qx0 − x0‖.



Proof: Each item is proven as follows:
If there exist x, y ∈ X such that Qx = x and Qy = y then

‖Qx − Qy‖ = ‖x − y‖ ≤ ρ‖x − y‖

which together with ρ < 1 yields x = y.
For all k, r ≥ 1, we have ‖x^{k+1} − x^k‖ ≤ ρ^k ‖Qx0 − x0‖ and

‖x^{k+r} − x^k‖ ≤ Σ_{i=0}^{r−1} ‖x^{k+i+1} − x^{k+i}‖
              ≤ ρ^k ‖Qx0 − x0‖ Σ_{i=0}^{r−1} ρ^i
              ≤ (ρ^k/(1 − ρ)) ‖Qx0 − x0‖

The sequence is a Cauchy sequence and converges to some x∗ ∈ X due to the fact that (X, ‖ · ‖) is a Banach space.

The third item is important because it provides an estimate of the rate of convergence to the fixed point x∗ ∈ X.
From the previous calculations we have

‖x∗ − x^k‖ = lim_{r→∞} ‖x^{k+r} − x^k‖
           ≤ ρ^k ‖Qx0 − x0‖/(1 − ρ)

and the proof is concluded.
Example: Consider X = R and f (x) : R → R continuously differentiable. Since f (x) = f (y) + f′(ξ)(x − y) for some ξ ∈ [x, y] ⊂ R, the condition |f′(ξ)| ≤ ρ < 1, ∀ξ ∈ R, implies that the sequence x^{k+1} = f (x^k) always converges to the unique solution of the equation x = f (x).
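
As a concrete instance of the scalar example, the sketch below iterates f(x) = 0.5 cos(x), which satisfies |f′(x)| ≤ 0.5 < 1 everywhere; the function and the starting point are illustrative choices.

```python
# Sketch: fixed-point iteration x^{k+1} = f(x^k) for a scalar contraction.
import numpy as np

f = lambda x: 0.5 * np.cos(x)              # |f'(x)| = 0.5 |sin x| <= 0.5 < 1
rho = 0.5
x0 = 3.0

x = x0
for k in range(1, 26):
    x = f(x)
    bound = rho**k / (1.0 - rho) * abs(f(x0) - x0)   # theorem's error estimate
print(x, f(x), bound)                       # x ~ f(x); the bound shrinks like rho^k
```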


Theorem (Gronwall's inequality)
Let a(t) be a continuous function and c ≥ 0, b scalars. The inequality

a(t) ≤ b + c ∫_0^t a(τ ) dτ, ∀t ≥ 0

implies that a(t) ≤ b e^{ct} for all t ≥ 0.

Proof: Defining q(t) = b + c ∫_0^t a(τ ) dτ, it follows that a(t) ≤ q(t) and q̇(t) = c a(t) ≤ c q(t). With ẋ(t) = c x(t) and x(0) = q(0) = b it is seen that x(t) ≥ q(t), ∀t ≥ 0, because the existence of t > 0 such that x(t) = q(t) implies ẋ(t) = c x(t) = c q(t) ≥ q̇(t). Hence

a(t) ≤ q(t) ≤ x(t) = b e^{ct} , ∀t ≥ 0



Let us now consider the nonlinear differential equation

ẋ(t) = f (t, x(t)) ,  x(0) = x0

where x(t) ∈ R^n and t ∈ [0, T ] with T > 0. Defining the nonlinear operator

Px = x(0) + ∫_0^t f (τ, x(τ )) dτ

it is clear that a solution of the above differential equation satisfies x = Px, which can be calculated iteratively by Picard's method

x^{k+1} = Px^k

from an initial trajectory x^0.


The next theorem is important for further developments.

Theorem
Assume there exist positive constants κ and µ such that
‖f (t, x) − f (t, y)‖ ≤ κ‖x − y‖, ∀x, y ∈ R^n, t ∈ [0, T ]
‖f (t, x0)‖ ≤ µ, ∀t ∈ [0, T ].
The differential equation ẋ = f (t, x), x(0) = x0, admits a unique solution in the time interval [0, T ].

Proof: For X = C^n_{[0,T]} and ‖x‖_C = max_{t∈[0,T]} ‖x(t)‖ the normed linear space is a Banach space. On the other hand

Px(t) − Py(t) = ∫_0^t ( f (τ, x(τ )) − f (τ, y(τ )) ) dτ

since x(0) = y(0) = x0.


Yielding

‖Px(t) − Py(t)‖ ≤ κ ∫_0^t ‖x(τ ) − y(τ )‖ dτ
               ≤ (κt) ‖x − y‖_C

and, with the same reasoning,

‖P²x(t) − P²y(t)‖ ≤ κ ∫_0^t ‖Px(τ ) − Py(τ )‖ dτ
                  ≤ (κ²t²/2) ‖x − y‖_C

which implies, by induction, that

‖P^m x − P^m y‖_C ≤ (κ^m T^m/m!) ‖x − y‖_C

for all m ≥ 1.

Choosing an integer m ≥ 1 such that (it always exists)

ρ = κ^m T^m/m! < 1

and considering Q = P^m, the proof follows from the contraction mapping theorem provided that ‖Qx^0 − x^0‖ is a bounded quantity, which is assured by the second assumption.
This theorem brings out two possible procedures (to be discussed afterwards) to solve numerically a given differential equation:
Time decomposition
Space decomposition


Time decomposition: Divide the time interval [0, T ] into sub-intervals of length ∆T such that ρ = κ∆T < 1 and apply Picard's method in each time interval, successively.
Space decomposition: Determine m ≥ 1 such that ρ = κ^m T^m/m! < 1 and apply Picard's method once to the time interval [0, T ].
In both cases, global convergence is assured by the fact that

‖Qx − Qy‖_C ≤ ρ‖x − y‖_C

holds with ρ < 1, where Q = P in the first case and Q = P^m in the second.
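
Both procedures amount to choosing ∆T or m so that ρ < 1. The sketch below computes such parameters for illustrative values of κ and T.

```python
# Sketch: decomposition parameters for a given Lipschitz constant and horizon.
import math

kappa, T = 2.0, 3.0                        # illustrative values

# Time decomposition: sub-interval length with rho = kappa * dT < 1.
dT = 0.9 / kappa                           # any dT < 1/kappa works
print("dT =", dT, "rho =", kappa * dT)

# Space decomposition: smallest m with rho = kappa^m T^m / m! < 1.
m = 1
while (kappa * T) ** m / math.factorial(m) >= 1.0:
    m += 1
print("m =", m, "rho =", (kappa * T) ** m / math.factorial(m))
```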


Remarks:
The conditions of the previous theorem are only sufficient. For instance, the differential equation ẋ = −x², x(0) = 1, admits the solution x(t) = 1/(1 + t) ∈ C[0,∞) but does not satisfy the first condition.
The first condition of the previous theorem is also called the Lipschitz condition.
In the linear case f (x) = Ax, we have

‖f (x) − f (y)‖ = ‖A(x − y)‖
               ≤ ‖A‖ ‖x − y‖

providing κ = ‖A‖ and µ = ‖Ax0‖.


Periodic solutions
Linear and nonlinear differential equations may admit periodic solutions. However, they are intrinsically different, as we will show in the sequel.
Consider the linear differential equation

ẋ1 = x2
ẋ2 = −x1

which can be expressed in polar coordinates (r, φ) where r² = x1² + x2² and tan(φ) = x2/x1. Indeed, by simple differentiation with respect to time we obtain

ṙ = (x1 ẋ1 + x2 ẋ2)/r
φ̇ = (x1 ẋ2 − x2 ẋ1)/r²

By substitution we have two decoupled equations ṙ = 0 and φ̇ = −1, which provide the solution

r(t) = r(0) ,  φ(t) = φ(0) − t, ∀t ≥ 0

This is a periodic solution represented in the (x1, x2) plane by a circle. Notice however that the radius, and consequently the amplitude of the oscillation, depends on the initial conditions through

r(0)² = x1(0)² + x2(0)²

As will be seen, the behavior of the periodic solutions of nonlinear systems is quite different.

Consider now the nonlinear differential equation

ẋ1 = x2 + x1 (1 − x1² − x2²)
ẋ2 = −x1 + x2 (1 − x1² − x2²)

Adopting again the polar coordinates (r, φ), the time derivatives provide

ṙ = r (1 − r²)
φ̇ = −1

which, as can be verified, admits the solution

r(t) = 1/√(1 + c e^{−2t}) ,  φ(t) = φ(0) − t, ∀t ≥ 0

where c = −1 + 1/r(0)².
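
The convergence of every nonzero trajectory to the circle r = 1 can be observed numerically; the sketch below integrates the equations from a few illustrative initial conditions.

```python
# Sketch: all nonzero initial conditions approach the unit-circle limit cycle.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    x1, x2 = x
    s = 1.0 - x1**2 - x2**2
    return [x2 + x1 * s, -x1 + x2 * s]

t_eval = np.linspace(0.0, 10.0, 1000)
for x0 in ([0.1, 0.0], [3.0, 0.0], [0.0, -2.0]):   # illustrative initial conditions
    sol = solve_ivp(rhs, (0.0, 10.0), x0, t_eval=t_eval, rtol=1e-8)
    r_final = np.hypot(sol.y[0, -1], sol.y[1, -1])
    print(x0, "final radius ~", r_final)           # approaches 1 in all cases
```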
On the left part of the next figure, the linear equation provides
periodic solutions. Notice that different periodic solutions are
obtained corresponding to each initial condition. On the right
part, for the nonlinear equation, only one periodic solution is
obtained starting from any initial condition.

[Figure: time responses x1(t) versus t; left, the linear equation, whose oscillation amplitude depends on the initial condition; right, the nonlinear equation, whose trajectories converge to the same periodic solution regardless of the initial condition.]


Equilibrium points

A point xe ∈ R^n is said to be an equilibrium point of the autonomous differential equation

ẋ = f (x), x(0) = x0

where f (·) : R^n → R^n, if the initial condition x(0) = xe implies that the corresponding solution is x(t) = xe, ∀t ≥ 0.
From this definition, all equilibrium points are characterized by the algebraic equation

f (xe) = 0

which, clearly, may have multiple solutions.
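
For instance, for the pendulum of the introduction, f(x) = (x2, −(g/ℓ) sin x1) and the equation f(xe) = 0 has the multiple solutions xe = (kπ, 0). The sketch below locates some of them numerically; g/ℓ = 1 and the starting guesses are illustrative choices.

```python
# Sketch: solving f(xe) = 0 for the pendulum from several starting guesses.
import numpy as np
from scipy.optimize import fsolve

g_over_l = 1.0
f = lambda x: np.array([x[1], -g_over_l * np.sin(x[0])])

for guess in ([0.3, 0.1], [3.0, -0.2], [6.0, 0.5]):
    xe = fsolve(f, guess)
    print(np.round(xe, 6), "residual:", np.round(f(xe), 9))
```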


Linearization

Consider again the differential equation ẋ = f (x). In a neighborhood of some equilibrium point xe ∈ R^n, each component fi(x), i = 1, · · · , n, can be approximated by the Taylor series

fi(x) ≈ fi(xe) + Σ_{j=1}^{n} ∂fi/∂xj (xe) (xj − xej), i = 1, · · · , n

yielding

f (x) ≈ f (xe) + A(x − xe)

where A ∈ R^{n×n} is a square matrix with elements

aij = ∂fi/∂xj (xe) , i, j = 1, · · · , n


Defining the new variable z = x − xe and taking into account that f (xe) = 0, an approximate linear differential equation is obtained,

ż = Az, z(0) = x0 − xe

which provides the approximate solution

x(t) = xe + e^{At} (x0 − xe), ∀t ≥ 0

Remarks:
Of course this is only an approximation. Hence, it is valid only near the equilibrium point.
The matrix A ∈ R^{n×n} depends on each equilibrium point xe ∈ R^n.
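
The sketch below builds the matrix A for the pendulum at two of its equilibrium points, using a finite-difference approximation of the partial derivatives; g/ℓ = 1 is an illustrative value.

```python
# Sketch: linearization of the pendulum about xe = (0, 0) and xe = (pi, 0).
import numpy as np

g_over_l = 1.0
f = lambda x: np.array([x[1], -g_over_l * np.sin(x[0])])

def jacobian(f, xe, h=1e-6):
    n = len(xe)
    A = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        A[:, j] = (f(xe + e) - f(xe - e)) / (2.0 * h)   # a_ij = df_i/dx_j (xe)
    return A

for xe in (np.array([0.0, 0.0]), np.array([np.pi, 0.0])):
    A = jacobian(f, xe)
    print(xe, np.round(A, 6), np.round(np.linalg.eigvals(A), 6))
```

At (0, 0) the eigenvalues are purely imaginary, while at (π, 0) one eigenvalue is positive, so the two linearized models behave very differently.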


Problems

1. Show that the state transition matrix Φ(t, τ ) can be written as

Φ(t, τ ) = Θ(t)Θ(τ )^{−1}

and determine Θ(t) for all t ≥ 0.
2. Consider X = R^n. Prove that

‖x‖∞ = max_{i=1,··· ,n} |xi|

3. Determine and plot the phase plane (ν, θ), where ν = θ̇, for the differential equation

θ̈ + 2θ³ = 0


4. Consider A ∈ R^{n×n}, X = R^n and define the induced norm

‖A‖p = max_{x∈X} { ‖Ax‖p : ‖x‖p = 1 }

for some nonnegative integer p.
Prove that ‖A‖p satisfies the norm axioms.
Prove that for all A, B ∈ R^{n×n} we have ‖AB‖p ≤ ‖A‖p ‖B‖p.
Determine ‖A‖p for p = 1, 2, ∞.
5. Consider the normed linear space (X, ‖ · ‖). Prove that the function f (x) = ‖x‖ : X → R is convex.


6. Consider a symmetric matrix R ∈ R^{n×n}. Prove that:
All eigenvalues are real and the eigenvectors can be taken real.
There exist V ∈ R^{n×n} and Λ ∈ R^{n×n} diagonal such that

V^{−1}RV = Λ ,  V^{−1} = V′

For all x ≠ 0 ∈ R^n it is true that

λmin ≤ x′Rx / x′x ≤ λmax

where λmin and λmax are the minimum and the maximum eigenvalue of R, respectively.


7. Consider the nonlinear differential equation

ẋ − e^{−x} = 0 ,  x(0) = 1

and the time interval t ∈ [0, 10]. Determine:
its solution, analytically.
its solution by Picard's method.
8. Consider the nonlinear equation θ̈ + θ̇ + sin(θ) = 0 with initial conditions θ(0) = π/4 and θ̇(0) = 0.
Solve it numerically.
Determine κ and Tmax such that Q = P² is a contraction mapping.
For T = Tmax/2, solve it by Picard's method.


9. For the dynamic system given in the next figure, determine:
Its mathematical model in terms of the state variables (x, ẋ, θ, θ̇).
The linearized model valid around the equilibrium point (0, 0, 0, 0).

[Figure: a cart of mass M, with position x, driven by a force f and carrying a pendulum of mass m with angle θ.]


10. For the same cart-pendulum system considered in the previous problem, determine:
The transfer function from the force f to the angular displacement of the pendulum θ.
The impulse response.
The solution of the nonlinear model corresponding to f (t) = 1 for all t ≥ 0 and initial conditions

x(0) = ẋ(0) = 0 , θ(0) = π/4, θ̇(0) = 0

The solution of the linear model, for comparison.
