
EE477 (Fall 2021) S. Alghunaim

2. Linear Algebra Background

• linear independence
• system of linear equations
• eigenvalues and eigenvectors
• definite matrices

Outline

• linear independence
• system of linear equations
• eigenvalues and eigenvectors
• definite matrices
Linear independence

Linear independence: a set of vectors {a1 , a2 , . . . , ak } is linearly
independent if
α1 a1 + α2 a2 + · · · + αk ak = 0
holds only if
α1 = α2 = · · · = αk = 0

Linear dependence: a set of vectors {a1 , a2 , . . . , ak } is linearly dependent if
it is not linearly independent

(we often say that the vectors a1 , . . . , ak are linearly independent/dependent
to mean that the set {a1 , . . . , ak } is linearly independent/dependent)

linear independence SA — EE477 2.2


Examples

 a1 = (1, 2) and a2 = (2, 1) are linearly independent since

α1 (1, 2) + α2 (2, 1) = 0

means that α1 = −2α2 and α2 = −2α1 ; hence, α1 = α2 = 0

 the unit vectors e1 , e2 , . . . , en are linearly independent:

0 = α1 e1 + · · · + αn en = (α1 , α2 , . . . , αn )

only if α1 = · · · = αn = 0
 a1 = (1, 1, 0), a2 = (2, 2, 0), a3 = (0, 0, 1) are linearly dependent:

−2a1 + a2 + 0a3 = 0

 a1 = (0.2, −7, 8.6), a2 = (−0.1, 2, −1), a3 = (0, −1, 2.2) are linearly
dependent:
a1 + 2a2 − 3a3 = 0
Linear independence of matrix columns

for an m × n matrix A and an n-vector x = (x1 , x2 , . . . , xn )

Ax = [a1 a2 · · · an ] x = x1 a1 + x2 a2 + · · · + xn an

 columns of A are linearly independent if

Ax = 0 holds only if x = 0

 columns of A are linearly dependent if Ax = 0 for some x ≠ 0
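this rank-based view of column independence is easy to check numerically; a minimal sketch using numpy (the helper name `has_independent_columns` is ours, and the matrices reuse the earlier examples):

```python
import numpy as np

# columns of A are linearly independent iff Ax = 0 only for x = 0,
# which holds exactly when rank A equals the number of columns
def has_independent_columns(M):
    return np.linalg.matrix_rank(M) == M.shape[1]

A = np.array([[1., 2.],
              [2., 1.]])       # columns a1 = (1, 2), a2 = (2, 1): independent
G = np.array([[1., 2., 0.],
              [1., 2., 0.],
              [0., 0., 1.]])   # columns (1,1,0), (2,2,0), (0,0,1): dependent

print(has_independent_columns(A))   # True
print(has_independent_columns(G))   # False
```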



Supersets and subsets

Superset
 any superset of a linearly dependent set is linearly dependent
 example: if the vectors a1 , . . . , ak are linearly dependent, then the vectors

a1 , . . . , ak , ak+1

are linearly dependent as well for any vector ak+1

Subset
 removing vectors from a collection of vectors preserves linear independence
 example: if {a1 , a2 , a3 , a4 } is linearly independent, then the sets

{a1 , a2 , a4 }, {a1 , a4 }, {a2 }

are linearly independent



Independence-dimension inequality

if a1 , a2 , . . . , ak are linearly independent vectors in Rn , then

 the number of vectors cannot exceed the dimension of the vectors: k ≤ n
 any collection of n + 1 or more n-vectors is linearly dependent

Graphical illustration: in R2 , any three vectors are linearly dependent; for
example, a third vector a3 can be written as a3 = α1 a1 + α2 a2 (figure omitted)


Linear combination of independent vectors

if x is the linear combination

x = α1 a1 + · · · + αk ak

and a1 , . . . , ak are linearly independent, then the coefficients α1 , . . . , αk are
unique

Proof:
 assume that we can find β1 , . . . , βk such that

x = β1 a1 + · · · + βk ak

subtracting the last two equations, we get:

0 = (α1 − β1 )a1 + · · · + (αk − βk )ak

 since a1 , . . . , ak are linearly independent, we must have αi − βi = 0 and


thus αi = βi for all i = 1, . . . , k
Linear independence and matrix inverse

the matrix A−1 is said to be the inverse of the n × n matrix A if

AA−1 = A−1 A = In

for a square matrix A, the following are equivalent:


 A is invertible
 the columns of A are linearly independent
 the rows of A are linearly independent
 the determinant is nonzero
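these equivalences are easy to spot-check numerically; a small sketch with numpy (matrix chosen for illustration):

```python
import numpy as np

# for a square matrix: invertible <=> independent columns <=> det != 0
A = np.array([[1., 2.],
              [2., 1.]])

print(np.linalg.matrix_rank(A) == A.shape[1])        # independent columns: True
print(abs(np.linalg.det(A)) > 1e-12)                 # nonzero determinant: True
print(np.allclose(A @ np.linalg.inv(A), np.eye(2)))  # A A^{-1} = I: True
```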



Outline

• linear independence
• system of linear equations
• eigenvalues and eigenvectors
• definite matrices
Matrix rank

the rank of an m × n matrix A, denoted by rank A, is the maximal number of
linearly independent columns of A

 rank A ≤ min{m, n}
 A has full rank if rank A = min{m, n}
 A has full column rank if rank A = n (linearly independent columns)
 A has full row rank if rank A = m (linearly independent rows)

Rank of matrix transpose: rank A = rank AT , i.e., the maximum number of
linearly independent columns of a matrix is equal to the maximum number of
linearly independent rows



Example 2.1

    [ 3    0    2    2 ]
A = [−6   42   24   54 ]
    [21  −21    0  −15 ]

the first two columns a1 = (3, −6, 21) and a2 = (0, 42, −21) are linearly
independent; moreover,

(2, 24, 0) = (2/3) a1 + (2/3) a2

and

(2, 54, −15) = (2/3) a1 + (29/21) a2

therefore, only two columns are linearly independent, thus rank A = 2
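this computation can be reproduced numerically; a sketch using numpy's `matrix_rank` (an illustrative check, not part of the original slides):

```python
import numpy as np

# the matrix of Example 2.1
A = np.array([[ 3.,   0.,  2.,   2.],
              [-6.,  42., 24.,  54.],
              [21., -21.,  0., -15.]])

print(np.linalg.matrix_rank(A))   # 2

# the last two columns are the stated combinations of the first two
a1, a2 = A[:, 0], A[:, 1]
print(np.allclose(A[:, 2], (2/3)*a1 + (2/3)*a2))     # True
print(np.allclose(A[:, 3], (2/3)*a1 + (29/21)*a2))   # True
```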



Finding rank from determinant
 a pth-order minor of an m × n matrix A is the determinant of a p × p
submatrix obtained by deleting m − p rows and n − p columns from A
 the rank of a matrix is equal to the highest order p of its nonzero minor(s)

Illustration

    [ 3    0    2    2 ]
A = [−6   42   24   54 ]
    [21  −21    0  −15 ]

 the third-order minors are all zero

 consider the second-order minor

    [ 3   0 ]
det [−6  42 ]  =  126 ≠ 0

hence, rank A = 2



System of linear equations

system of m linear equations:

a11 x1 + a12 x2 + · · · + a1n xn = b1


a21 x1 + a22 x2 + · · · + a2n xn = b2
..
am1 x1 + am2 x2 + · · · + amn xn = bm

 x1 , . . . , xn are called variables


 aij are called coefficients
 bi are called right-hand-sides

compact representation:
Ax = b,
where the m × n matrix A is called the coefficient matrix and the m-vector b
is called the right-hand side



System of linear equations

a system of linear equations Ax = b is


 overdetermined system if the number of equations is more than the number
of unknowns (m > n)
 underdetermined system if the number of equations is less than the number
of unknowns (m < n)
 square system if m = n

Solution
 an n-vector x̂ is a solution of the linear equations Ax = b if Ax̂ = b
 there can be a unique solution, many solutions, or no solutions



Examples

 the system of linear equations

x1 + x2 = 1,    x1 = −1,    x1 − x2 = 0

can be described as Ax = b with

    [ 1   1 ]        [ 1 ]
A = [ 1   0 ],   b = [−1 ],
    [ 1  −1 ]        [ 0 ]

which is an overdetermined system; it has no solutions

 the system of linear equations

x1 + x2 = 1,    x2 + x3 = 2

can be written as Ax = b with

    [ 1  1  0 ]        [ 1 ]
A = [ 0  1  1 ],   b = [ 2 ],

which is an underdetermined system; it has multiple solutions, including
x̂ = (1, 0, 2) and x̂ = (0, 1, 1)
Existence of solution

Fundamental theorem of linear systems: the system Ax = b has a solution
if and only if

rank A = rank[A b]

 solution is unique if and only if

rank A = rank[A b] = n

this implies that the columns of A are linearly independent

 system has infinitely many solutions for any b if and only if rank A = m < n
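the rank test for existence translates directly into code; a sketch (the helper name `solvable` is ours; the matrix is the one from Example 2.2 below):

```python
import numpy as np

# Ax = b has a solution iff rank A == rank [A b]
def solvable(A, b):
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab)

A = np.array([[-3., -4.],
              [ 4.,  6.],
              [ 1.,  1.]])
print(solvable(A, np.array([1., -2., 0.])))   # True  (a solution exists)
print(solvable(A, np.array([1., -1., 0.])))   # False (no solution)
```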



Example 2.2

    [−3  −4 ]
A = [ 4   6 ]
    [ 1   1 ]

rank A = n = 2

 the system Ax = (1, −2, 0) has a unique solution

x = (1, −1)

note that rank A = rank[A b] = 2

 the system Ax = (1, −1, 0) does not have a solution because

rank A = 2 ≠ rank[A b] = 3



 
    [−3  4  1 ]
C = [−4  6  1 ]

 rank C = 2 = m
 the system Cx = (1, 2) has multiple solutions, including

x1 = (1/3, 2/3, −2/3),    x2 = (0, 1/2, −1)



Finding a particular solution

consider the system of equations

Ax = b

where A is an m × n matrix with m ≤ n and assume that

the matrix A has linearly independent rows: rank A = m

 there is at least one solution and there can be many solutions


 the matrix A also has m linearly independent columns

 without loss of generality, we assume that the columns of the matrix A are
reordered such that the first m columns are linearly independent



Partitioned system

A = [B D],    x = (xB , xD )

 B is an m × m matrix with linearly independent columns (invertible)


 D is an m × (n − m) matrix
 xB is an m vector
 xD is an n − m vector

we can then write

Ax = [B D](xB , xD ) = BxB + DxD = b

solving for xB , we have

xB = B −1 b − B −1 DxD



Particular solution

All solutions: the solutions of Ax = b can be parametrized by

x = (B −1 b, 0) + (−B −1 DxD , xD ) = x̂ + F xD

where xD ∈ Rn−m is an arbitrary vector and

                        [ −B −1 D ]
x̂ = (B −1 b, 0),   F = [    I    ]

Basic solution: if we set xD = 0, then we get the solution

x = (B −1 b, 0)

this solution is called a basic solution with respect to the basis B



Example 2.3

let us find a particular solution to the system of equations Ax = b given by

[ 2  3  −1  −1 ]       [ 1 ]
[ 4  1   1  −2 ] x  =  [ 3 ]

we can select any two linearly independent columns of A as basis vectors to
find a particular solution



 selecting the first and second columns, we have xB = (x1 , x2 ),
xD = (x3 , x4 ) and

    [ 2  3 ]        [−1  −1 ]
B = [ 4  1 ],   D = [ 1  −2 ]

hence,

                            [ 4/5 ]            [ 2/5  −1/2 ]
xB = (x1 , x2 ) = B −1 b =  [−1/5 ],   B −1 D = [−3/5    0  ]

thus, a particular solution is x̂ = (4/5, −1/5, 0, 0) and the set of all solutions
can be written as

    [ 4/5 ]   [−2/5  1/2 ]
    [−1/5 ]   [ 3/5   0  ] [ x3 ]
x = [  0  ] + [  1    0  ] [ x4 ]
    [  0  ]   [  0    1  ]
      x̂            F



 if we select the first and third columns instead, then we have xB = (x1 , x3 ),
xD = (x2 , x4 ) and

    [ 2  −1 ]        [ 3  −1 ]
B = [ 4   1 ],   D = [ 1  −2 ]

in this case, we have

                            [ 2/3 ]            [ 2/3  −1/2 ]
xB = (x1 , x3 ) = B −1 b =  [ 1/3 ],   B −1 D = [−5/3    0  ]

therefore, a particular solution is x̂ = (2/3, 0, 1/3, 0) and the set of all
solutions can be written as

    [ 2/3 ]   [−2/3  1/2 ]
    [  0  ]   [  1    0  ] [ x2 ]
x = [ 1/3 ] + [ 5/3   0  ] [ x4 ]
    [  0  ]   [  0    1  ]
      x̂            F

system of linear equations SA — EE477 2.23
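the computations of Example 2.3 can be replayed numerically; a sketch using the first column selection (the variable names are ours):

```python
import numpy as np

# basic solution of Ax = b with respect to columns 1 and 2 (Example 2.3)
A = np.array([[2., 3., -1., -1.],
              [4., 1.,  1., -2.]])
b = np.array([1., 3.])

B, D = A[:, :2], A[:, 2:]
x_hat = np.concatenate([np.linalg.solve(B, b), np.zeros(2)])  # (B^{-1} b, 0)
print(x_hat)                        # (4/5, -1/5, 0, 0)
print(np.allclose(A @ x_hat, b))    # True

# all solutions: x_hat + F xD for any free-variable vector xD
F = np.vstack([-np.linalg.solve(B, D), np.eye(2)])
xD = np.array([1., -2.])            # an arbitrary choice of xD = (x3, x4)
print(np.allclose(A @ (x_hat + F @ xD), b))   # True
```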


Outline

• linear independence
• system of linear equations
• eigenvalues and eigenvectors
• definite matrices
Eigenvalues and eigenvectors

for an n × n matrix A, a scalar λ is an eigenvalue of A if

Av = λv

for some non-zero vector v ∈ Rn


 v is called an eigenvector of A (associated with eigenvalue λ)
 eigenvalues can be computed by solving the characteristic equation:

det(λI − A) = 0

 the polynomial of degree n, det(sI − A), is called the characteristic
polynomial

 the characteristic equation must have n (possibly nondistinct) roots, which


are the eigenvalues of A
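the characteristic-equation route can be compared against a library eigensolver; a small sketch (matrix chosen for illustration):

```python
import numpy as np

# eigenvalues as roots of det(sI - A) = 0; for a 2x2 matrix the
# characteristic polynomial is s^2 - tr(A) s + det(A)
A = np.array([[ 2., -1.],
              [-1.,  2.]])

roots = np.sort(np.roots([1.0, -np.trace(A), np.linalg.det(A)]))
print(roots)                           # eigenvalues 1 and 3
print(np.sort(np.linalg.eigvals(A)))   # same values from the eigensolver

# check the defining relation A v = lambda v for one eigenpair
lam, V = np.linalg.eig(A)
print(np.allclose(A @ V[:, 0], lam[0] * V[:, 0]))   # True
```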



Some properties

let A ∈ Rn×n with eigenvalues λ1 , . . . , λn , then


 the eigenvalues of A + cI are equal to λ1 + c, . . . , λn + c

 the eigenvalues of Ak are equal to λk1 , . . . , λkn

 the eigenvalues of A−1 are 1/λ1 , . . . , 1/λn (assuming A is invertible)

 the eigenvalues of AT are equal to the eigenvalues of A

 if A is a triangular matrix, then its eigenvalues are equal to its diagonal
elements
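these properties can be spot-checked numerically; a sketch on a sample matrix (an illustrative check, not a proof):

```python
import numpy as np

A = np.array([[ 2., -1.],
              [-1.,  2.]])
lam = np.sort(np.linalg.eigvals(A))   # eigenvalues of A
c = 5.0

# shift, power, inverse, and transpose rules from the slide
print(np.allclose(np.sort(np.linalg.eigvals(A + c * np.eye(2))), lam + c))        # True
print(np.allclose(np.sort(np.linalg.eigvals(A @ A)), lam ** 2))                   # True
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))), np.sort(1/lam)))  # True
print(np.allclose(np.sort(np.linalg.eigvals(A.T)), lam))                          # True
```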



Symmetric matrices

Symmetric matrices: suppose that A is an n × n matrix (square).


 A is called a symmetric matrix if A = AT
 all eigenvalues of a real symmetric matrix are real

Symmetric factorization: for a symmetric matrix A ∈ Rn×n

A = U ΛU T

 U ∈ Rn×n is orthogonal (i.e., U T U = I )


 Λ = diag(λ1 , . . . , λn ) where λi denote the real eigenvalues of A
 this factorization is called symmetric eigenvalue decomposition or spectral
decomposition
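symmetric eigensolvers compute this factorization directly; a sketch using numpy's `eigh` (matrix reused from a later example):

```python
import numpy as np

# symmetric eigenvalue decomposition A = U Lambda U^T
A = np.array([[ 3., -4.,  4.],
              [-4.,  6.,  5.],
              [ 4.,  5.,  7.]])
lam, U = np.linalg.eigh(A)   # eigh assumes (and exploits) symmetry

print(np.allclose(U.T @ U, np.eye(3)))          # U is orthogonal: True
print(np.allclose(U @ np.diag(lam) @ U.T, A))   # A = U Lambda U^T: True
print(lam.dtype.kind == 'f')                    # eigenvalues are real: True
```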



Outline

• linear independence
• system of linear equations
• eigenvalues and eigenvectors
• definite matrices
Definiteness of symmetric matrices

the function f (x) = xT Qx with symmetric Q ∈ Rn×n is called a quadratic form

a symmetric matrix Q ∈ Rn×n (or quadratic form xT Qx) is said to be

 positive semidefinite if xT Qx ≥ 0 for all x (we use Q ≥ 0 to indicate that Q


is positive semidefinite)

 positive definite if xT Qx > 0 for all x ≠ 0 (we use Q > 0 to indicate that Q
is positive definite)

 negative semidefinite if −Q is positive semidefinite (we use Q ≤ 0 to


indicate that Q is negative semidefinite)

 negative definite if −Q is positive definite (we use Q < 0 to indicate that Q


is negative definite)

 indefinite if Q is none of the above



Example 2.4

Identity matrix
 the identity matrix I is positive definite since

xT Ix = xT x = ∥x∥2 > 0

for all x ≠ 0
 −I is negative definite

Diagonal matrices
 a diagonal matrix D with positive diagonal elements dii > 0 is positive
definite
 a diagonal matrix D with nonnegative diagonal elements dii ≥ 0 is positive
semi-definite
 a diagonal matrix with positive and negative diagonal elements is indefinite



Gram matrix: a Gram matrix is any matrix of the form

B TB

for any matrix B

 a Gram matrix is positive semidefinite:

xT B T Bx = (Bx)T (Bx) = ∥Bx∥2 ≥ 0

 it is positive definite if B has linearly independent columns, since

xT B T Bx = ∥Bx∥2 = 0 ⇐⇒ Bx = 0

and Bx = 0 only holds for x = 0 when B has linearly independent columns
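both claims are easy to verify numerically on a fixed matrix; a sketch (the matrices are our own illustrative choices):

```python
import numpy as np

# B^T B is positive semidefinite, and positive definite exactly when
# B has linearly independent columns
B = np.array([[1., 0.],
              [1., 1.],
              [0., 2.]])                      # independent columns
G = B.T @ B
print(np.all(np.linalg.eigvalsh(G) > 0))      # positive definite: True

B2 = np.column_stack([B[:, 0], 2 * B[:, 0]])  # second column = 2 * first
lam = np.linalg.eigvalsh(B2.T @ B2)
print(np.all(lam >= -1e-10))                  # positive semidefinite: True
print(np.isclose(lam.min(), 0.0, atol=1e-9))  # singular, so not definite: True
```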



Principal submatrices
a principal submatrix of an n × n matrix A is the (n − k) × (n − k) matrix
obtained by deleting k rows and the corresponding k columns of A

a leading principal submatrix of order k of an n × n matrix A, denoted by
Ak , is the k × k matrix obtained by deleting the last n − k rows and columns of A

Example: the principal submatrices of

    [ 3  −4  4 ]
A = [−4   6  5 ]
    [ 4   5  7 ]

are

                [ 3  −4 ]   [ 6  5 ]   [ 3  4 ]
[3], [6], [7],  [−4   6 ],  [ 5  7 ],  [ 4  7 ],  and A itself

and the leading principal submatrices are

                 [ 3  −4 ]        [ 3  −4  4 ]
A1 = [3],   A2 = [−4   6 ],  A3 = [−4   6  5 ]
                                  [ 4   5  7 ]
Properties of symmetric matrices

Determinant test (Sylvester’s criterion):


 a symmetric matrix is positive definite if and only if the determinants of all
leading principal submatrices are positive (det Ak > 0)
 a symmetric matrix is positive semidefinite if and only if the determinants of
all principal submatrices are nonnegative

Definiteness and eigenvalues


 a symmetric matrix is positive (negative) definite if and only if its eigenvalues
are positive (negative)
 a symmetric matrix is positive (negative) semidefinite if and only if its
eigenvalues are nonnegative (nonpositive)
 a symmetric matrix is indefinite if and only if it has some positive and
negative eigenvalues
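the positive-definite case of the determinant test translates directly into code; a sketch (the helper name is ours; the matrices come from Example 2.5 below):

```python
import numpy as np

# positive definite <=> all leading principal minors det A_k are positive
def sylvester_positive_definite(A, tol=1e-12):
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
print(sylvester_positive_definite(A))   # True  (leading minors 2, 3, 4)

B = np.array([[ 3., -4., 4.],
              [-4.,  6., 5.],
              [ 4.,  5., 7.]])
print(sylvester_positive_definite(B))   # False (det B = -317 < 0)
```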



Example 2.5

a) the matrix
       [ 3  −4  4 ]
   A = [−4   6  5 ]
       [ 4   5  7 ]
is indefinite; it is not positive semidefinite since the principal minor
det A3 = −317 < 0, and it is not negative semidefinite since det A1 = 3 > 0
means that −A is not positive semidefinite

b) the matrix
       [ 2  −1   0 ]
   A = [−1   2  −1 ]
       [ 0  −1   2 ]
is positive definite since the determinants of all leading principal submatrices

det A1 = 2 > 0,   det A2 = 3 > 0,   det A3 = 4 > 0

are positive
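these classifications agree with the eigenvalue test; a sketch (the helper name `definiteness` is ours):

```python
import numpy as np

# classify a symmetric matrix by the signs of its (real) eigenvalues
def definiteness(Q, tol=1e-10):
    lam = np.linalg.eigvalsh(Q)
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam >= -tol):
        return "positive semidefinite"
    if np.all(lam <= tol):
        return "negative semidefinite"
    return "indefinite"

A = np.array([[ 3., -4., 4.],
              [-4.,  6., 5.],
              [ 4.,  5., 7.]])
print(definiteness(A))   # indefinite

B = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
print(definiteness(B))   # positive definite
```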
