
MA 102: Lecture - 14

Linear Systems of First Order ODEs

March-June 2023

Lecture-14
(Sections 7.1-7.3 of Differential Equations by S. L. Ross, 3rd Edition)



Systems of First Order Differential Equations

There are many physical problems in which two or more elements or objects
interact with each other. To model such physical problems mathematically, we
need a system of two or more differential equations. For example,
• Biology: Predator-Prey Model
• Mechanics: Motion of a certain spring-mass system (two masses
moving on a frictionless surface)
• Electrical Circuit: Parallel LRC Circuit
• Mixture Problems: Connected Mixing Tanks



Lotka-Volterra Predator-Prey Model

Consider the population problem of two different species, say foxes and rabbits,
interacting in the same ecosystem. A simple model supposes that rabbits eat
only grass and foxes eat only rabbits. In other words, the fox is a predator and
the rabbit is its prey.

Let F(t) and R(t) denote the populations of foxes and rabbits, respectively, at time t.



Lotka-Volterra Predator-Prey Model Contd...
If there were no rabbits, then foxes would have no food to eat and hence the
population of foxes would decline in number according to

dF/dt = −aF, where a > 0.

When rabbits are present, the number of interactions/encounters between these
two species per unit time is proportional to both populations F and R, that is,
proportional to the product FR.

Thus, when rabbits are present, there is a supply of food for the foxes, so foxes are
added to the system at the rate bFR, b > 0. Therefore

dF/dt = −aF + bFR, a > 0, b > 0.



If there were no foxes, then the population of rabbits would increase in number
according to

dR/dt = cR, where c > 0.

But when foxes are present, the population of rabbits is decreased by the rate
at which the rabbits are eaten during their encounters with the foxes, that is,
by −dFR, d > 0. Therefore

dR/dt = cR − dFR, where c > 0 and d > 0.
Thus, this model leads to the following nonlinear system of ODEs.
Lotka-Volterra Predator-Prey Model: F - Foxes (Predator) and R - Rabbits (Prey)

dF/dt = −aF + bFR,  a, b > 0
dR/dt = cR − dFR,   c, d > 0.
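The two equations above form a coupled nonlinear system, so in general they are solved numerically. A minimal sketch using SciPy, with illustrative parameter values and initial populations that are not taken from the lecture:

import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative parameters and initial populations (not from the lecture).
a, b, c, d = 1.0, 0.1, 1.5, 0.075
F0, R0 = 10.0, 40.0

def predator_prey(t, y):
    F, R = y                       # F: foxes (predator), R: rabbits (prey)
    dF = -a * F + b * F * R        # foxes decline alone, grow on encounters
    dR = c * R - d * F * R         # rabbits grow alone, decline on encounters
    return [dF, dR]

sol = solve_ivp(predator_prey, (0.0, 30.0), [F0, R0], dense_output=True)
t_samples = np.linspace(0.0, 30.0, 5)
print(np.round(sol.sol(t_samples), 2))   # sampled values of F(t) and R(t)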
Competition Model

Suppose two different species of animals occupy the same ecosystem, not as
predator and prey, but rather as competitors for the same resources (such as food
and living space) in the system. In the absence of the other, let us assume that the
rate at which each population grows is given by

dx/dt = ax and dy/dt = cy, where a, c > 0.

Since the two species compete, another assumption might be that each of these
rates is diminished simply by the influence or existence of the other population.



Thus, this model leads to the following linear system of ODEs.
Competition Model:

dx/dt = ax − by,
dy/dt = −dx + cy,

where a, b, c and d are positive real constants.
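Since the coefficient matrix of this system is constant, its solution can be written with a matrix exponential. A minimal numerical sketch, assuming illustrative values a = 2, b = 1, c = 3, d = 1 and initial populations x(0) = 5, y(0) = 4 (not from the lecture):

import numpy as np
from scipy.linalg import expm

a, b, c, d = 2.0, 1.0, 3.0, 1.0            # assumed illustrative constants
A = np.array([[a, -b],
              [-d, c]])                    # dx/dt = ax - by, dy/dt = -dx + cy
z0 = np.array([5.0, 4.0])                  # assumed initial populations (x(0), y(0))

for t in (0.0, 0.5, 1.0):
    z_t = expm(A * t) @ z0                 # (x(t), y(t)) = exp(At) (x(0), y(0))
    print(t, np.round(z_t, 3))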



Linear System of First Order ODEs
A linear system of first order ODEs (in normal form) is given by

dx1/dt = a11(t)x1 + · · · + a1n(t)xn + b1(t)
dx2/dt = a21(t)x1 + · · · + a2n(t)xn + b2(t)
   ⋮
dxn/dt = an1(t)x1 + · · · + ann(t)xn + bn(t)

It can be written in the vector differential equation (VDE) form as

dx/dt = Ax + f   (also written dX/dt = AX + F or dX⃗/dt = AX⃗ + F⃗),

where x(t) = [x1(t), . . . , xn(t)]^T, f(t) = [b1(t), . . . , bn(t)]^T, and A(t) = [aij(t)]
is an n × n matrix.

When f = 0, the above linear system is said to be homogeneous.


Examples
Example 1: Here x(t) = (x1(t), x2(t), x3(t)). That is, t is the independent variable
and x1, x2, x3 are the dependent variables.

dx1/dt = t^2 x1 + (t + 1) x2 + 3 x3 + t^3
dx2/dt = t e^t x1 + t^3 x2 + sin(t) x3 − e^t
dx3/dt = t x1 + e^t x2 + 5 x3 + t

Example 2: Here x(t) = (x(t), y(t)). That is, t is the independent variable and x, y
are the dependent variables.

dx/dt = t^2 x + (t + 1) y + 3 t^3
dy/dt = 2 x + t^3 y − e^t
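Example 2 has time-dependent coefficients, so there is no simple closed-form matrix-exponential solution; it can still be integrated numerically. A minimal sketch, assuming the illustrative initial condition x(0) = 1, y(0) = 0 (not from the lecture):

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    x, y = z
    dx = t**2 * x + (t + 1.0) * y + 3.0 * t**3    # dx/dt from Example 2
    dy = 2.0 * x + t**3 * y - np.exp(t)           # dy/dt from Example 2
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
print(np.round(sol.y[:, -1], 4))                  # approximate (x(1), y(1))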



Solution to the Linear System of First Order ODEs
Definition
By a solution of the vector differential equation

dx/dt = Ax + f,

we mean an n × 1 column vector function

Φ(t) = [φ1(t), φ2(t), . . . , φn(t)]^T

whose components φ1, φ2, . . ., φn each have a continuous derivative on the real
interval a ≤ t ≤ b, and which satisfies

dΦ(t)/dt = A(t) Φ(t) + f(t) for all t ∈ [a, b].
Example
The column vector function Φ defined by

Φ(t) = [2e^{5t} − 1, e^{5t} − 2]^T

is a solution of the vector differential equation

dx/dt = Ax + f,

where

A(t) = [[6, −3], [2, 1]]   and   f(t) = [e^{5t}, 4]^T.

Checking Φ'(t) = A(t) Φ(t) + f(t):

[10e^{5t}, 5e^{5t}]^T = [[6, −3], [2, 1]] [2e^{5t} − 1, e^{5t} − 2]^T + [e^{5t}, 4]^T
                      = [12e^{5t} − 6 − 3e^{5t} + 6 + e^{5t}, 4e^{5t} − 2 + e^{5t} − 2 + 4]^T
                      = [10e^{5t}, 5e^{5t}]^T.
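The same verification can be reproduced symbolically. A small optional sketch using SymPy (not part of the lecture):

import sympy as sp

t = sp.symbols('t')
Phi = sp.Matrix([2*sp.exp(5*t) - 1, sp.exp(5*t) - 2])   # proposed solution
A = sp.Matrix([[6, -3], [2, 1]])
f = sp.Matrix([sp.exp(5*t), 4])

residual = sp.simplify(Phi.diff(t) - (A*Phi + f))        # should be the zero vector
print(residual)                                          # Matrix([[0], [0]])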



Homogeneous Linear System of First Order ODEs
A homogeneous linear system of first order ODEs is given by

dx1/dt = a11(t)x1 + · · · + a1n(t)xn
dx2/dt = a21(t)x1 + · · · + a2n(t)xn
   ⋮
dxn/dt = an1(t)x1 + · · · + ann(t)xn

It can be written in the vector differential equation (VDE) form as

dx/dt = Ax   (also written dX/dt = AX or dX⃗/dt = AX⃗),

where A = A(t) = [aij(t)] is the n × n coefficient matrix and x = x(t) = [x1(t), . . . , xn(t)]^T.
Example
Consider the homogeneous linear VDE x'(t) = A(t) x(t), where

A(t) = [[7, −1, 6], [−10, 4, −12], [−2, 1, −1]]   and   x(t) = [x1(t), x2(t), x3(t)]^T.

Then the column vector function

Φ(t) = [e^{3t}, −2e^{3t}, −e^{3t}]^T

is a solution of the above VDE on every real interval a ≤ t ≤ b.

Checking Φ'(t) = A(t) Φ(t):

[3e^{3t}, −6e^{3t}, −3e^{3t}]^T = [[7, −1, 6], [−10, 4, −12], [−2, 1, −1]] [e^{3t}, −2e^{3t}, −e^{3t}]^T
for all t ∈ [a, b].



Existence and Uniqueness of Solution to the IVP-HLVDE
Note that x = 0 is a solution of the homogeneous linear VDE x' = Ax.
Definition
The Initial Value Problem (IVP) for the homogeneous linear VDE

x'(t) = A(t) x(t)    (1)

is to find a vector function x(t) ∈ C^1 that satisfies the system (1) on an interval I
and the initial condition x(t0) = x0 = (x0,1, . . . , x0,n)^T, where t0 ∈ I and
x0 ∈ R^n.

Theorem (Existence and Uniqueness of Solution to the IVP-HLVDE)


Let A(t) be continuous on a closed and bounded interval I. Let t0 ∈ I. Then, for
any choice of x0 ∈ R^n, there exists a unique solution x(t) on the whole interval I
to the IVP involving the homogeneous linear VDE

x'(t) = A(t) x(t) with the initial condition x(t0) = x0.    (2)



Outline of Proof for E & U of Solution to the IVP-HLVDE

Step 1:
The solution to the IVP (2) is equivalent to the solution of the vector integral
equation

x(t) = x0 + ∫_{t0}^{t} A(s) x(s) ds,    (3)

which gives x(t0) = x0.


Step 2: Picard Iterates (for the case t ∈ I with t ≤ t0, the proof is similar).
Define the Picard iterates as follows:

x0(t) = x0,
x_{n+1}(t) = x0(t) + ∫_{t0}^{t} A(s) x_n(s) ds,   t ≥ t0, t ∈ I    (4)

for n = 0, 1, 2, 3, . . .. Then the sequence {x_n(t)} is well-defined and it is a
Cauchy sequence in R^n for each t. Therefore, the sequence {x_n(t)} converges
uniformly on [a, b] to a function x(t), say.
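A minimal numerical sketch of the Picard iterates (4) for a concrete constant-coefficient case, assuming A = [[0, 1], [−1, 0]], t0 = 0 and x0 = (1, 0)^T (chosen only for illustration; the exact solution is then (cos t, −sin t)^T):

import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # assumed constant coefficient matrix
t = np.linspace(0.0, 1.0, 201)              # grid on [t0, t] with t0 = 0
x0 = np.array([1.0, 0.0])

# Picard iterates: x_{n+1}(t) = x0 + integral_{t0}^{t} A x_n(s) ds,
# with the integral approximated by a cumulative trapezoid rule on the grid.
x = np.tile(x0, (t.size, 1))                # x_0(t) := x0 at every grid point
for _ in range(15):
    integrand = x @ A.T                     # row k holds A x_n(t_k)
    ds = np.diff(t)[:, None]
    cumint = np.vstack([np.zeros(2),
                        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * ds, axis=0)])
    x = x0 + cumint                         # next iterate on the whole grid

exact = np.column_stack([np.cos(t), -np.sin(t)])
print(np.max(np.abs(x - exact)))            # small: the iterates converge to x(t)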



Continuation of the Previous Slide

Since x_n(t) → x(t) uniformly on I, we can take the limit under the integral sign
of (4) to get

x(t) = x0(t) + ∫_{t0}^{t} A(s) x(s) ds,

which proves that x(t) is the solution of the integral equation (3) and hence x(t)
is a solution to the IVP (2).
Step 3: Uniqueness of the Solution
If x(t) and y(t) are solutions of (2), then

x(t) − y(t) = ∫_{t0}^{t} A(s)(x(s) − y(s)) ds  =⇒  |x(t) − y(t)| ≤ M ∫_{t0}^{t} |x(s) − y(s)| ds,

where M is a bound for ‖A(s)‖ on I. Thus, for any given ε > 0, we get from the above inequality

|x(t) − y(t)| < ε + M ∫_{t0}^{t} |x(s) − y(s)| ds.

By Gronwall's integral inequality, one gets

|x(t) − y(t)| < ε exp(M(t − t0)) for t ≥ t0.

Since the above inequality is true for each ε > 0, we get |x(t) − y(t)| = 0 and
hence x(t) = y(t) for t ≥ t0.



Corollary
Consider the homogeneous linear VDE x' = Ax, where A(t) is continuous on a
closed and bounded interval I = [a, b]. Let t0 be any point of [a, b], and let Φ(t), t ∈ [a, b],
be a solution of x' = Ax such that Φ(t0) = 0. Then Φ(t) = 0 for all t ∈ [a, b].

Proof.
By the Theorem on Existence and Uniqueness of the solution to the IVP involving a
homogeneous linear VDE, Φ(t) is the unique solution of the IVP
satisfying Φ(t0) = 0. Then, as in the proof of that theorem, one can
compute the Picard iterates (successive approximations) x_n(t) and observe that
x_n(t) = 0 for all t ∈ [a, b] and for each n ∈ N. Therefore, {x_n} → Φ and
Φ(t) = 0 for all t ∈ [a, b].



Theorem
Consider the homogeneous linear VDE x' = Ax, where A is an n × n matrix of
functions continuous on I = [a, b]. Then the set of all solutions of

x' = Ax

on the interval I forms an n-dimensional vector space over the field of real/
complex numbers.

Proof.
Step 1: Let V = {x(t) : x'(t) = A(t)x(t) for all t ∈ I}.
To show: V is a vector space over the real field.
Let x1 and x2 be any two elements in V.
To show: c1x1 + c2x2 ∈ V for any scalars c1 and c2.

d/dt (c1x1 + c2x2) = c1 dx1/dt + c2 dx2/dt = c1Ax1 + c2Ax2 = A(c1x1 + c2x2).

Therefore, c1x1 + c2x2 ∈ V. Hence V is a vector space.



Continuation of Proof
Step 2: Showing the dimension of V is n.
Let ei = (0, 0, . . . , 1, 0, . . . , 0)^T, with 1 in the i-th position, for i = 1, 2, . . ., n.
We know that {ei : i = 1, . . . , n} is the standard basis for R^n.
By the E & U theorem, given t0 ∈ I, there exists a unique solution
xi(t) = (xi1(t), . . . , xin(t))^T, say, satisfying x' = Ax and the initial condition
xi(t0) = ei, for each i = 1, 2, . . ., n.
To show: x1, . . ., xn are linearly independent in V.
Suppose x1, . . ., xn are not linearly independent. Then there exist scalars c1, . . .,
cn, not all zero, such that

c1x1 + c2x2 + · · · + cnxn = 0.

This is true for all t ∈ I, and in particular for t = t0, so that we have

c1x1(t0) + c2x2(t0) + · · · + cnxn(t0) = 0.

This gives c1e1 + c2e2 + · · · + cnen = 0, which contradicts the fact that e1,
e2, . . ., en are linearly independent in R^n. Therefore, x1, . . ., xn are linearly
independent in V.
Continuation of Proof

To show: x1, . . ., xn span V.
Let Φ be the solution to the IVP x' = Ax with x(t0) = x0 on I, where x0 ∈ R^n.
That is, Φ ∈ V.
Since x0 ∈ R^n, there exist unique scalars c1, . . ., cn such that

c1e1 + c2e2 + · · · + cnen = x0.

Since xi(t0) = ei, we get x0 = Σ_{i=1}^{n} ci xi(t0).
Thus, the function Ψ(t) = Σ_{i=1}^{n} ci xi(t), t ∈ I, is a solution to the IVP
x' = Ax satisfying the initial condition Ψ(t0) = x0.
Since the solution to the IVP is unique, we can conclude that
Φ ≡ Ψ = Σ_{i=1}^{n} ci xi on I.
Therefore, every solution Φ in V is a linear combination of x1, . . ., xn.
This completes the proof of the theorem.



Existence and Uniqueness of Solution to the IVP-NHLVDE

Definition: The IVP for the system

x'(t) = A(t)x(t) + f(t)    (5)

is to find a vector function x(t) ∈ C^1 that satisfies the system (5) on an interval I
and the initial condition x(t0) = x0 = (x1,0, . . . , xn,0)^T, where t0 ∈ I and
x0 ∈ R^n.

Theorem: (Existence and Uniqueness)


Let A(t) and f(t) be continuous on I and let t0 ∈ I. Then, for any choice of
x0 = (x1,0, . . . , xn,0)^T ∈ R^n, there exists a unique solution x(t) to the IVP

x'(t) = A(t)x(t) + f(t),  x(t0) = x0

on the whole interval I.



Vector Functions and their Wronskian
Theorem: Let A be an n × n matrix. The following statements are equivalent:
• A is singular.
• det A = 0.
• Ax = 0 has a nontrivial solution (x ≠ 0).
• The columns of A form a linearly dependent set.
Definition: The Wronskian of n vector-valued functions x1(t) = (x1,1, . . . , xn,1)^T,
. . ., xn(t) = (x1,n, . . . , xn,n)^T is defined as

W(x1, . . . , xn)(t) := det [ x1,1(t)  x1,2(t)  · · ·  x1,n(t) ]
                            [ x2,1(t)  x2,2(t)  · · ·  x2,n(t) ]
                            [    ⋮        ⋮               ⋮    ]
                            [ xn,1(t)  xn,2(t)  · · ·  xn,n(t) ]

                       = det[x1 x2 . . . xn],

where [x1 x2 . . . xn] denotes the n × n matrix with the vector-valued function xi
as its i-th column, for i = 1, . . . , n.
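A small numerical sketch of this definition: evaluate the matrix whose columns are the xi(t) and take its determinant. The two test functions below are the pair Φ1, Φ2 used on a later slide; the evaluation points are illustrative:

import numpy as np

def wronskian(t, *columns):
    # Wronskian at time t of vector-valued functions given as callables.
    M = np.column_stack([col(t) for col in columns])
    return np.linalg.det(M)

phi1 = lambda t: np.array([t, 0.0])        # Φ1(t) = (t, 0)^T
phi2 = lambda t: np.array([t**2, 0.0])     # Φ2(t) = (t^2, 0)^T

print([round(wronskian(t, phi1, phi2), 6) for t in (0.5, 1.0, 2.0)])   # all 0.0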
Wronskian and Linear Independence of Functions

If x1, x2, . . ., xn are linearly dependent vector-valued functions
defined on the interval I, then W(t) := det[x1 x2 . . . xn] = 0 on I.

The converse is not true in general.

Consider the following vectors:

Φ1(t) = (t, 0)^T   and   Φ2(t) = (t^2, 0)^T.

Then

W(Φ1, Φ2)(t) = det [ t  t^2 ] = 0   for every t ∈ [a, b],
                   [ 0   0  ]

but Φ1 and Φ2 are linearly independent (LI) on [a, b].



Wronskian and Linear Independence of Functions Contd....

We see that the converse is also true in the following case:


Theorem
Let A(t) be an n × n matrix of continuous functions. If x1, x2, . . ., xn
are linearly independent solutions to x'(t) = A(t)x on I, then
W(t) := det[x1 x2 . . . xn] ≠ 0 on I.

Proof. Suppose W(t0) = 0 at some point t0 ∈ I. Now, W(t0) = 0
=⇒ x1(t0), x2(t0), . . ., xn(t0) are linearly dependent.
Then ∃ scalars c1, . . . , cn, not all zero, such that
c1x1(t0) + c2x2(t0) + . . . + cnxn(t0) = 0.



Set x(t) = c1x1(t) + c2x2(t) + . . . + cnxn(t). Note that x(t) is a
solution to x'(t) = A(t)x(t) with the initial condition

x(t0) = Σ_{i=1}^{n} ci xi(t0) = 0.

By the result that the homogeneous system with zero initial condition
has only the trivial/zero solution, x(t) ≡ 0, i.e.,

c1x1(t) + c2x2(t) + . . . + cnxn(t) = 0 for all t ∈ I,

which contradicts the fact that x1, . . ., xn are linearly independent
on I. Hence, W(t0) ≠ 0.
Since t0 ∈ I is arbitrary, W(t) ≠ 0 for any t ∈ I.



Theorem: (Abel's formula)
If x1, . . . , xn are n solutions to x'(t) = A(t)x(t) on an interval I and
t0 is any point of I, then for all t ∈ I,

W(t) = W(t0) exp( ∫_{t0}^{t} Σ_{i=1}^{n} aii(s) ds ),

where the aii's are the main diagonal elements of A.

For an n × n matrix A(t) := [aij(t)]_{1≤i,j≤n}, the Trace of A(t) is defined
as

Trace A(t) := Σ_{i=1}^{n} aii(t),

where aii(t), i = 1, . . . , n, are the main diagonal elements of A(t).

Proof: See Theorem 11.12, page 528 of S. L. Ross's book (third edition).
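A minimal numerical sketch of Abel's formula for a constant-coefficient case: take Φ(t) = exp(At), whose columns are n solutions with W(0) = det Φ(0) = 1, and compare W(t) with exp(∫ Trace A). The matrix A below is an assumed illustrative choice:

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])                 # assumed constant coefficient matrix

for t in (0.0, 0.5, 1.0):
    W_t = np.linalg.det(expm(A * t))       # Wronskian of the columns of exp(At)
    abel = np.exp(np.trace(A) * t)         # W(0) * exp(integral of Trace A), W(0) = 1
    print(t, round(W_t, 6), round(abel, 6))   # the two columns agree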
Fact:
• The Wronskian of solutions to x'(t) = A(t)x(t) is either identically zero
on I or never zero on I.
• A set of n solutions to x'(t) = A(t)x(t) on I is linearly independent
on I if and only if W(x1, . . . , xn)(t) ≠ 0 on I.



Representation of Solutions
Theorem: (Homogeneous case)
Let x1, . . . , xn be n linearly independent solutions to

x'(t) = A(t)x(t),  t ∈ I,    (∗)

where A(t) is continuous on I. Then, every solution to x'(t) = A(t)x(t) can be
expressed in the form

x(t) = c1x1(t) + · · · + cnxn(t),

where the ci's are constants. That is,

General Solution (G.S.) of (∗):  c1x1(t) + · · · + cnxn(t).

Definition: A set {x1, . . . , xn} of n linearly independent solutions to

x'(t) = A(t)x(t),  t ∈ I    (∗)

is called a fundamental solution set for (∗) on I.
The matrix Φ(t) obtained by taking each solution xi(t), i = 1, . . . , n,
as its i-th column,

Φ(t) := [x1(t) x2(t) . . . xn(t)]

      = [ x1,1(t)  x1,2(t)  · · ·  x1,n(t) ]
        [ x2,1(t)  x2,2(t)  · · ·  x2,n(t) ]
        [    ⋮        ⋮               ⋮    ]
        [ xn,1(t)  xn,2(t)  · · ·  xn,n(t) ],

is a fundamental matrix for (∗).


Definition: A matrix Φ(t) whose individual columns consist of a
fundamental set of solutions of x' = A(t)x is called a fundamental
matrix of x' = A(t)x.



Note: If Φ(t) is a fundamental matrix of x' = A(t)x, then

dΦ(t)/dt = A(t)Φ(t).

Indeed, we have

dΦ(t)/dt = d/dt [x1(t) x2(t) . . . xn(t)]
         = [x1'(t) x2'(t) . . . xn'(t)]
         = [A(t)x1(t) A(t)x2(t) . . . A(t)xn(t)]
         = A(t)[x1(t) x2(t) . . . xn(t)]
         = A(t)Φ(t).



General Solution of x' = A(t)x in terms of a Fundamental Matrix

We can use Φ(t) to express the general solution:

x(t) = c1x1(t) + · · · + cnxn(t) = Φ(t)c,  where c = (c1, . . . , cn)^T.

Therefore,

x(t) = Φ(t)c for t ∈ [a, b]

is the general solution of x' = A(t)x.

Consider the IVP: x' = A(t)x with x(t0) = x0, where t0 ∈ [a, b].

If Φ(t) is a fundamental matrix of x' = A(t)x on [a, b], then

x(t) = Φ(t) Φ^{-1}(t0) x0

is the unique solution to the IVP. Here Φ^{-1}(t0) denotes the inverse matrix of
Φ(t0).
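A minimal numerical sketch of the IVP formula x(t) = Φ(t) Φ^{-1}(t0) x0 for a constant-coefficient system, where Φ(t) = exp(At) is taken as the fundamental matrix; A and the initial data are illustrative, and the result is cross-checked against direct numerical integration:

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                     # assumed constant coefficient matrix
t0, x0 = 0.0, np.array([1.0, 1.0])               # assumed initial data

def x_ivp(t):
    Phi_t = expm(A * t)                          # fundamental matrix Phi(t) = exp(At)
    Phi_t0_inv = np.linalg.inv(expm(A * t0))     # Phi(t0)^{-1}
    return Phi_t @ Phi_t0_inv @ x0               # x(t) = Phi(t) Phi(t0)^{-1} x0

sol = solve_ivp(lambda t, y: A @ y, (t0, 2.0), x0, rtol=1e-9, atol=1e-12)
print(np.round(x_ivp(2.0), 6), np.round(sol.y[:, -1], 6))   # the two agree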
Example: The set {x1, x2, x3}, where

x1 = [e^{2t}, e^{2t}, e^{2t}]^T,  x2 = [−e^{−t}, 0, e^{−t}]^T,  x3 = [0, e^{−t}, −e^{−t}]^T,

is a fundamental solution set for the system x'(t) = A(t)x(t) on R, where

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]].

Note that A xi(t) = xi'(t), i = 1, 2, 3. Further,

W(0) = det [ e^{2t}  −e^{−t}    0     ]
           [ e^{2t}     0     e^{−t}  ]
           [ e^{2t}   e^{−t}  −e^{−t} ]  evaluated at t = 0, which equals −3 ≠ 0.
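An optional symbolic check of this example, verifying that each xi solves x' = Ax and that W(0) = −3; a sketch using SymPy (not part of the lecture):

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
x1 = sp.Matrix([sp.exp(2*t), sp.exp(2*t), sp.exp(2*t)])
x2 = sp.Matrix([-sp.exp(-t), 0, sp.exp(-t)])
x3 = sp.Matrix([0, sp.exp(-t), -sp.exp(-t)])

for xi in (x1, x2, x3):
    assert sp.simplify(xi.diff(t) - A*xi) == sp.zeros(3, 1)   # each xi solves x' = Ax

Phi = sp.Matrix.hstack(x1, x2, x3)                            # fundamental matrix
print(Phi.det().subs(t, 0))                                   # prints -3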



 
The fundamental matrix is

Φ(t) = [ e^{2t}  −e^{−t}    0     ]
       [ e^{2t}     0     e^{−t}  ]
       [ e^{2t}   e^{−t}  −e^{−t} ].

Thus, the G.S. is

x(t) = c1 [e^{2t}, e^{2t}, e^{2t}]^T + c2 [−e^{−t}, 0, e^{−t}]^T + c3 [0, e^{−t}, −e^{−t}]^T = Φ(t) [c1, c2, c3]^T = Φ(t)c,

where c = [c1, c2, c3]^T.



Theorem: (Non-homogeneous case)
Let xp be a particular solution to

x'(t) = A(t)x(t) + f(t),  t ∈ I,    (∗∗)

and let {x1, . . . , xn} be a fundamental solution set on I for the
corresponding homogeneous system x'(t) = A(t)x(t). Then every
solution to (∗∗) can be expressed in the form

x(t) = c1x1(t) + · · · + cnxn(t) + xp(t)
     = Φ(t)c + xp(t),

where Φ(t) = [x1(t) . . . xn(t)] is a fundamental matrix and xp(t) is the given
particular solution of (∗∗).
*** End ***

