Linear Algebra Part I (2016)

Matrices, Vectors, Determinants, and Solution of Linear Equation System


(Notes by Dr. Xiaogang Gao)
(Part B Chapter 7 in “Advanced Eng. Math.” By E. Kreyszig)

Matrix
A matrix is a rectangular array of numbers or variables enclosed in brackets. Matrices let us express large amounts of data and functions in an organized, concise manner, and they are mathematical objects that can be manipulated much like the numbers of high-school algebra.

Matrix Structure

A_mn = [a_ij], where a_ij is the element in the i-th row and j-th column. Dimension: m × n.

Special matrices
Vector (m × 1 or n × 1):

v = [3, -1, 0]^T   (a 3×1 column vector);   V = [3 0 0 0 13]   (a 1×5 row vector)

Square matrix (m = n):                 Diagonal matrix (a_ij = 0 for i ≠ j):

    [1  0  0 0  1]                         [6  0 0 0 0]
    [0 15  0 8  0]                         [0 -1 0 0 0]
A = [0  0 -1 0  0]                     D = [0  0 0 0 0]
    [3  0  0 1  0]                         [0  0 0 1 0]
    [0  0  9 0 71]                         [0  0 0 0 4]

Identity matrix (a_ij = 1 if i = j, a_ij = 0 if i ≠ j):     Symmetric matrix (a_ij = a_ji, so Row_i = Column_i):

    [1 0 0 0 0]                                                 [1 3 5 0 2]
    [0 1 0 0 0]                                                 [3 1 0 4 0]
I = [0 0 1 0 0]                                             S = [5 0 1 0 0]
    [0 0 0 1 0]                                                 [0 4 0 1 0]
    [0 0 0 0 1]                                                 [2 0 0 0 1]

Upper (lower) triangular matrix (a_ij = 0 for i > j, respectively i < j):

    [1  5  0 0 4]
    [0 -1 -1 3 0]
A = [0  0  1 4 2]
    [0  0  0 1 3]
    [0  0  0 0 1]

Matrix construction (matrix subsets)

A matrix A_mn can be viewed as assembled from smaller pieces:

main diagonal: the elements a_11, a_22, ..., a_ii, ...
submatrices: a partition of A into a q × p array of blocks, A = [A_11 ... A_1p; ...; A_q1 ... A_qp]
row vectors: A = [r_1; r_2; ...; r_m], where r_i = [a_i1 a_i2 ... a_in] is 1 × n
column vectors: A = [c_1 c_2 ... c_n], where c_j = [a_1j; a_2j; ...; a_mj] is m × 1

Scalar Multiplication
cA = [c·a_ij], where c is a number and A a matrix.

(c_1 + c_2)A = c_1 A + c_2 A; in general (Σ_{i=1..N} c_i) A = Σ_{i=1..N} c_i A
c(A_1 + A_2) = cA_1 + cA_2; in general c(Σ_{i=1..N} A_i) = Σ_{i=1..N} c A_i
c_1(c_2 A) = (c_1 c_2) A; in general (Π_{i=1..N} c_i) A

Examples
5([1 2; 3 4] + [5 6; 7 8]) = 5[6 8; 10 12] = [30 40; 50 60]
5[1 2; 3 4] + 5[5 6; 7 8] = [5 10; 15 20] + [25 30; 35 40] = [30 40; 50 60]

Matrix operations (dimension requirements)

Equality: A_mn = B_mn if a_ij = b_ij for all i, j.
Addition: A_mn + B_mn = [a_ij + b_ij].
Zero matrix O: all elements a_ij = 0, so A + O = A and A_mp · O_pn = O_mn.
Negative: A + B = O means a_ij = -b_ij, i.e., A = -B (shift B to the other side of the equation).

Addition rules
A + (-A) = O
A + B = B + A (commutative)
(U + V) + W = U + (V + W) (associative)
Matrix Multiplication
A_mk · B_kn = C_mn = [c_ij], where each entry combines row i of A with column j of B:

c_ij = Σ_{l=1}^{k} a_il b_lj = a_i1 b_1j + a_i2 b_2j + ... + a_ik b_kj   (i = 1, ..., m; j = 1, ..., n)

Example
     [4 3]              [4·2 + 3·1   4·5 + 3·6]   [11 38]
AB = [7 2] [2 5]   =    [7·2 + 2·1   7·5 + 2·6] = [16 47]      (3×2)(2×2) = (3×2)
     [9 0] [1 6]        [9·2 + 0·1   9·5 + 0·6]   [18 45]

Special case for two vectors with the same dimension: the dot product a·b = a^T b is a single number:

a·b = b·a = [a_1 a_2 ... a_n] [b_1; b_2; ...; b_n] = Σ_{k=1}^{n} a_k b_k   (a 1×1 result)

Rules (with dimension conditions)

Associative: A_mk (B_kl C_lm) = (A_mk B_kl) C_lm
Distributive: (A_mn + B_mn) C_nk = A_mn C_nk + B_mn C_nk;  C_km (A_mn + B_mn) = CA + CB
Not commutative (unlike number multiplication): in general A_mn B_np ≠ B_np A_mn

Identity matrix I: I_mm A_mn = A_mn I_nn = A_mn   (I behaves like the number 1)

Zero matrix O: O_mm A_mn = O_mn; A_mn O_nn = O_mn   (O behaves like the number 0)

Notice
AB = O does not imply B = O or A = O (unless A^-1 or B^-1 exists):

[1 1] [ 1  1]   [0 0]
[2 2] [-1 -1] = [0 0]

AB = AC does not imply B = C (unless A^-1 exists):

[1 1] [2 1]   [4 3]   [1 1] [3 0]
[2 2] [2 2] = [8 6] = [2 2] [1 3]

Transposition
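Both pitfalls are easy to reproduce; a small NumPy demonstration with the matrices from the first notice:

```python
import numpy as np

A = np.array([[1, 1],
              [2, 2]])
B = np.array([[ 1,  1],
              [-1, -1]])
print(A @ B)        # [[0 0] [0 0]]: AB = O although neither A nor B is O
print(B @ A)        # [[3 3] [-3 -3]]: and AB != BA
```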

Amn  [aij ]mn then AT  [aijT  a ji ]nm

1 4   1 2 3 1 4 7 
1 2 3      
A    A  2 5; B  4 5 6  B  2 5 8 no change in aii
T T

 4 5 6  3 6 7 8 9 3 6 9

For a symmetric (square) matrix: AT = A


Rules
(A + B)^T = A^T + B^T
(cA)^T = c A^T
(A_mn B_np)^T = (B^T)_pn (A^T)_nm = C_pm   (change the order!)
(A^T)^T = A

Example
([1 2; 3 4][5 6 7; 8 9 10])^T = [1·5+2·8  1·6+2·9  1·7+2·10; 3·5+4·8  3·6+4·9  3·7+4·10]^T
                              = [1·5+2·8  3·5+4·8; 1·6+2·9  3·6+4·9; 1·7+2·10  3·7+4·10]

[5 6 7; 8 9 10]^T [1 2; 3 4]^T = [5 8; 6 9; 7 10][1 3; 2 4] = [5·1+8·2  5·3+8·4; 6·1+9·2  6·3+9·4; 7·1+10·2  7·3+10·4]

The two results are identical: (AB)^T = B^T A^T.

Inverse matrix (exists only for special square matrices)

A_nn A_nn^-1 = A^-1 A = I. A^-1 is effective under either right or left multiplication (the matrix analogue of a reciprocal).

Existence of the inverse matrix: A^-1 exists iff Rank(A_nn) = n, i.e., |A| ≠ 0 (A is a nonsingular matrix).

Calculation of A^-1: Gauss-Jordan elimination, or the cofactor formula

A^-1 = (1/det A) [c_11 ... c_1n; ...; c_n1 ... c_nn]^T,   c_ij = cofactor of a_ij in det A (see the Determinant Calculation section).

Rules
(AC)^-1 = C^-1 A^-1 (change the order of the product)
(A^-1)^-1 = A (inverses of each other)

If A^-1 exists, then AC = AD implies C = D, and AC = O implies C = O.
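A quick numerical check of the two-sided inverse and of the cancellation property; a hedged NumPy sketch (the matrix is my own example, chosen with det A = 1):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])                  # det A = 2*3 - 1*5 = 1 != 0, so A is nonsingular
Ainv = np.linalg.inv(A)
print(np.allclose(A @ Ainv, np.eye(2)))     # True
print(np.allclose(Ainv @ A, np.eye(2)))     # True: the inverse works on both sides

# With A^-1 available, cancellation holds: from A C = A D we may conclude C = D.
C = np.array([[1.0, 0.0],
              [0.0, 2.0]])
D = Ainv @ (A @ C)                          # recover C from the product A C
print(np.allclose(C, D))                    # True
```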
Linear Equation System (LES)

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = R_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = R_2           =>  Matrix form A_mn X_n1 = R_m1
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = R_m

The coefficient matrix A_mn = [a_11 ... a_1n; a_21 ... a_2n; ...; a_m1 ... a_mn], the unknown vector X_n1 = [x_1; x_2; ...; x_n], and the right-hand vector R_m1 = [R_1 R_2 ... R_m]^T. Here m = number of equations, n = number of unknowns.

R = O: homogeneous system; R ≠ O: non-homogeneous system.

Combine A and R into the augmented matrix Ã_m(n+1) = [a_11 ... a_1n R_1; ...; a_m1 ... a_mn R_m].

Status of solution existence: (1) no solution, (2) a unique solution, or (3) infinitely many solutions.

Gauss Elimination - a general procedure to solve an LES

If we transform the coefficient matrix into an upper triangular matrix, the LES is solved by back substitution from the bottom up:

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = R_1
0·x_1 + a_22 x_2 + ... + a_2n x_n = R_2
...
0·x_1 + ... + a_(n-1)(n-1) x_(n-1) + a_(n-1)n x_n = R_(n-1)  →  x_(n-1) = (R_(n-1) - a_(n-1)n x_n) / a_(n-1)(n-1)
0·x_1 + ... + 0·x_(n-1) + a_nn x_n = R_n  →  x_n = R_n / a_nn

Gauss Elimination is a top-down row procedure that produces the equivalent triangular matrix by:
(1) interchanging rows (changes the equation order) or columns (changes the order of the x_i);
(2) multiplying or dividing a row by a non-zero constant;
(3) adding (subtracting) one row's elements to (from) another row's corresponding elements.

(1) Set up the augmented matrix Ã from the given LES.
(2) Exchange rows to obtain a non-zero a_11, using the equality rules.
(3) Make all the elements in the first column below a_11 zero: a_i1 = 0 for i > 1.
(4) Repeat (2) and (3) for a_22, a_33, ... until the procedure stops, in one of the 3 cases shown by the examples:

Example 1
-x_1 + x_2 + x_3 = 2
3x_1 - x_2 + x_3 = 6       (1) Form the augmented matrix [A | R].
-x_1 + 3x_2 + 4x_3 = 4

[-1  1  1 |  2]             [-1  1  1 |  2]           [-1  1  1 |   2]         [-1  1  1 |   2]
[ 3 -1  1 |  6] R2 + 3R1 →  [ 0  2  4 | 12]        →  [ 0  2  4 |  12] R2/2 →  [ 0  1  2 |   6]
[-1  3  4 |  4] R3 - R1     [ 0  2  3 |  2] R3 - R2   [ 0  0 -1 | -10]         [ 0  0 -1 | -10]

-x_1 + x_2 + x_3 = 2                  x_1 = -6
       x_2 + 2x_3 = 6      →          x_2 = -14      A unique solution
             -x_3 = -10               x_3 = 10
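The elimination and back-substitution steps above can be sketched in code. A minimal Gauss elimination with partial pivoting (the function name and the pivoting choice are my own; the test system is the one from Example 1):

```python
import numpy as np

def gauss_solve(A, R):
    """Solve A x = R by forward elimination and back substitution."""
    M = np.hstack([A.astype(float), R.reshape(-1, 1).astype(float)])  # augmented matrix [A | R]
    n = len(R)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # row interchange to get a (large) non-zero pivot
        M[[k, p]] = M[[p, k]]
        for i in range(k + 1, n):             # zero out the column below the pivot a_kk
            M[i] -= (M[i, k] / M[k, k]) * M[k]
    x = np.zeros(n)                           # back substitution, bottom row first
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[-1, 1, 1], [3, -1, 1], [-1, 3, 4]])
R = np.array([2, 6, 4])
print(gauss_solve(A, R))    # [ -6. -14.  10.]
```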

Example 2
3x_1 + 2x_2 + x_3 = 3
2x_1 + x_2 + x_3 = 0
6x_1 + 2x_2 + 4x_3 = 6

Interchange rows to bring [6 2 4 | 6] to the top:

[6 2 4 | 6]               [6  2    4   |  6]            [6 2  4 |  6]           [6 2  4 |  6]
[3 2 1 | 3] R2 - R1/2  →  [0  1   -1   |  0]         →  [0 1 -1 |  0]        →  [0 1 -1 |  0]
[2 1 1 | 0] R3 - R1/3     [0 1/3 -1/3  | -2] ×3         [0 1 -1 | -6] R3 - R2   [0 0  0 | -6]

0·x_1 + 0·x_2 + 0·x_3 = -6 is impossible: no solution.

Example 3
3x_1 + 2x_2 + 2x_3 - 5x_4 = 8
2x_1 + 5x_2 + 5x_3 - 18x_4 = 9
4x_1 - x_2 - x_3 + 8x_4 = 7

Scale rows to avoid fractions (4R1, then 6R2 - R1 and 3R3 - R1 on the scaled rows):

[3  2  2  -5 | 8]       [12   8   8 -20 | 32]              [3 2 2 -5 | 8]
[2  5  5 -18 | 9]   →   [ 0  22  22 -88 | 22] /22      →   [0 1 1 -4 | 1]
[4 -1 -1   8 | 7]       [ 0 -11 -11  44 |-11] /(-11)       [0 1 1 -4 | 1] R3 - R2

    [3 2 2 -5 | 8]
→   [0 1 1 -4 | 1]
    [0 0 0  0 | 0]

3x_1 + 2x_2 + 2x_3 - 5x_4 = 8        x_1 = 2 - x_4
        x_2 + x_3 - 4x_4 = 1    →    x_2 = 1 - x_3 + 4x_4     Infinite solutions
                       0 = 0

Possible results of Gauss Elimination on Ã

[a_11 a_12  ...  ...  a_1n | b_1    ]
[ 0   a_22  ...  ...  a_2n | b_2    ]     the final (top-left) upper triangular submatrix,
[ ...            ...   ... | ...    ]     r ≤ min(n, m)
[ 0   ... 0 a_rr ...  a_rn | b_r    ]     ← bottom of the upper triangular submatrix
[ 0   ... 0  0   ...   0   | b_(r+1)]       (obtained by row/column exchanges)
[ ...                      | ...    ]
[ 0   ... 0  0   ...   0   | b_m    ]

If b_(r+1), ..., b_m are not all zero: no solution.
Else (b_(r+1), ..., b_m all zero):
r = n: a unique solution (the triangular block covers all unknowns);
r < n: infinite solutions (n - r unknowns remain free).

Notice 1: for a homogeneous equation system (b_i = 0), the no-solution case cannot occur: the trivial solution "all unknowns equal zero" is always available. If a homogeneous system also satisfies the unique-solution condition (r = n), the trivial solution is the only solution. We are usually interested in the non-zero solutions, which exist exactly when a homogeneous system has infinite solutions.
Notice 2: all conclusions are read off the final upper-triangular submatrix.

Matrix Rank
The rank of a matrix A, written Rank A = r, is defined as the size of the final upper-triangular submatrix (from a_11 ≠ 0 to a_rr ≠ 0) produced by Gauss Elimination.

Identifying the LES solution from the values of rank A and rank Ã:

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

A = [a_11 ... a_1n; ...; a_m1 ... a_mn]   and   Ã = [a_11 ... a_1n b_1; ...; a_m1 ... a_mn b_m]

If rank A ≠ rank Ã (i.e., some b_i ≠ 0 for i > r): no solution.
Else (rank A = rank Ã):
r = n: a unique solution;
r < n: infinite solutions.
Endif
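The rank test above translates directly into code; a hedged NumPy sketch (the function name is my own, and `np.linalg.matrix_rank` stands in for counting pivots after elimination):

```python
import numpy as np

def solution_status(A, R):
    """Classify A x = R by comparing rank A with the rank of the augmented matrix [A | R]."""
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, R]))
    n = A.shape[1]                       # number of unknowns
    if r != r_aug:
        return "no solution"
    return "unique solution" if r == n else "infinite solutions"

A = np.array([[-1, 1, 1], [3, -1, 1], [-1, 3, 4]])
print(solution_status(A, np.array([2, 6, 4])))       # unique solution

B = np.array([[3, 2, 2, -5], [2, 5, 5, -18], [4, -1, -1, 8]])
print(solution_status(B, np.array([8, 9, 7])))       # infinite solutions (rank 2 < n = 4)
```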

Example
3x_1 + 2x_2 + 2x_3 - 5x_4 = 8        [3 2 2 -5 | 8]       x_1 = 2 - x_4
2x_1 + 5x_2 + 5x_3 - 18x_4 = 9   →   [0 1 1 -4 | 1]       x_2 = 1 - x_3 + 4x_4
4x_1 - x_2 - x_3 + 8x_4 = 7          [0 0 0  0 | 0]       0 = 0

Infinite solutions with arbitrary x_3 and x_4: rank A = 2 < n = 4, b_3 = 0, rank Ã = 2.
n - r = 4 - 2 = 2 variables (x_3 and x_4) take arbitrary values. Usually you select (n - r) independent vectors for [x_3 x_4], such as [0 1] and [1 0], to form 2 families of solutions for [x_1 x_2 x_3 x_4], such as [1 5 0 1] and [2 0 1 0].

Example
-x_1 + x_2 + x_3 = 2        [-1  1 1 | 2]       [-1 1  1 |   2]       x_1 = -6
3x_1 - x_2 + x_3 = 6    →   [ 3 -1 1 | 6]   →   [ 0 1  2 |   6]   →   x_2 = -14
-x_1 + 3x_2 + 4x_3 = 4      [-1  3 4 | 4]       [ 0 0 -1 | -10]       x_3 = 10

A unique solution: rank A = rank Ã = 3 = n.

Example
3x_1 + 2x_2 + x_3 = 3       [3 2  1 |  3]       3x_1 + 2x_2 + x_3 = 3
2x_1 + x_2 + x_3 = 0    →   [0 1 -1 |  6]   →          x_2 - x_3 = 6
6x_1 + 2x_2 + 4x_3 = 6      [0 0  0 | -6]                      0 = -6

The unreasonable 0 = -6 means no solution: rank A = 2, b_3 ≠ 0, and rank Ã = 3.

An additional use
Given a group of vectors, investigate whether they can be linearly represented by each other, i.e., whether the vectors are dependent or independent.

The number of coordinate vectors is the maximum number of independent vectors in the group (or a space). Every other vector in the group or space can be uniquely represented by the coordinate vectors. The coordinate vectors themselves can be chosen in different ways, but their maximum number for a given matrix stays the same.

A group of given vectors (a_1, a_2, ..., a_n) is linearly independent iff the homogeneous linear equation system x_1 a_1 + x_2 a_2 + ... + x_i a_i + ... + x_n a_n = O has only the all-zero solution x_i = 0 (i = 1, ..., n). Otherwise (a_1, a_2, ..., a_n) are linearly dependent: the system has non-zero solutions for the weights {x_i}, so at least one of the vectors can be linearly represented by the other vectors of the group.
This definition reduces to solving a homogeneous linear equation system whose unknowns are the weights {x_i} and whose coefficient matrix consists of the given vectors as columns: [a_1 a_2 ... a_n].
x_1 a_1 + x_2 a_2 + ... + x_n a_n = O

x_1 [a_11; a_21; ...; a_m1] + x_2 [a_12; a_22; ...; a_m2] + ... + x_n [a_1n; a_2n; ...; a_mn] = O

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = 0
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = 0      i.e.   [a_11 ... a_1n; ...; a_m1 ... a_mn][x_1; ...; x_n] = [0; ...; 0]
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = 0

According to the solution existence of a homogeneous system:
If the all-zero solution (x_i = 0) is the unique solution, then Rank A = r = n; in this case you cannot find non-zero weights representing a vector a_i as a linear combination of the others.
Otherwise Rank A = r < n: the system has infinitely many solutions, including non-zero weights {x_i}, so some vectors can be linearly represented by the others - the vectors are linearly dependent.

The homogeneous linear equation system with the given vectors as coefficient matrix A = [a_1 a_2 ... a_n] has:
(1) if Rank A = r = n, only the unique trivial (all-zero) solution → the given vectors are linearly independent;
(2) if Rank A = r < n, infinite solutions including non-zero ones → the given vectors are linearly dependent.

In case (2), the first r column vectors (after any column exchanges) form a submatrix whose homogeneous system has only the trivial solution, so those r column vectors are independent.
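This independence test is a one-line rank check in practice; a hedged NumPy sketch using the vectors from the worked example further below (the helper name is my own):

```python
import numpy as np

def independent(vectors):
    """Column vectors a_1..a_n are independent iff rank [a_1 ... a_n] = n."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

a1 = np.array([3, 0, 2, 2])
a2 = np.array([-6, 42, 24, 54])
a3 = np.array([21, -21, 0, -15])
print(independent([a1, a2, a3]))   # False: 6 a1 - 0.5 a2 - a3 = 0
print(independent([a1, a2]))       # True: any two of the three are independent
```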

Definition of Matrix Rank based on independent vectors: the maximum number of linearly independent column vectors in a matrix A is called the rank of A.

Review
Lesson 1:
Matrices derived from an LES; matrix operations and rules.
Gauss Elimination (transform to an upper triangular matrix).
Comparison between matrix and real-number operations and rules in algebra: similarities vs. differences (especially for matrix multiplication and the inverse).
Lesson 2:
Performing Gauss Elimination on a matrix → rank as a property of the matrix.
Identify the solution status of an LES from the upper-triangular result, or from rank(A) vs. rank(Ã).
Problem: are given vectors independent? → solution of a homogeneous LES.
Maximum number of independent column vectors in a matrix → rank of the vector matrix.

Example
a_1 = [3; 0; 2; 2], a_2 = [-6; 42; 24; 54], a_3 = [21; -21; 0; -15]:   x_1 a_1 + x_2 a_2 + x_3 a_3 = O

[3 -6  21 | 0]
[0 42 -21 | 0]      the coefficient matrix is [a_1 a_2 a_3] (4×3)
[2 24   0 | 0]
[2 54 -15 | 0]

Reorder rows, eliminate the first column with 3(R3 - R1) and 2R4 - 3R1 (keeping integers), and scale:

[2 54 -15]          [2   54  -15]             [2 54 -15]           [2 54 -15]
[0 42 -21]          [0   42  -21] /21         [0  2  -1]           [0  2  -1]
[2 24   0]    →     [0  -90   45] /(-45)  →   [0  2  -1] R3 - R2 → [0  0   0]     rank 2 < n = 3
[3 -6  21]          [0 -174   87] /(-87)      [0  2  -1] R4 - R2   [0  0   0]

2x_1 + 54x_2 - 15x_3 = 0
         2x_2 -  x_3 = 0      →  infinite solutions

Non-zero solutions exist, e.g. x_3 = 1, x_2 = 0.5, x_1 = -6, i.e. 6a_1 - 0.5a_2 - a_3 = O.

Notice: when tracing which original column vectors are independent, check whether column exchanges were made during the rank calculation. For validation, choose two vectors (a_1 and a_2) to form a matrix:

A = [3 -6; 0 42; 2 24; 2 54]   --GE-->   [2 54; 0 2; 0 0; 0 0]

The Gauss Elimination process is the same except for the missing 3rd column (vector a_3); the rank is still 2, but now it equals n = 2 → only the trivial solution → these two vectors are independent.
What about the pairs (a_2, a_3) and (a_1, a_3)? From the solution equation above, each vector can be uniquely represented by the other two; therefore (a_2, a_3) and (a_1, a_3) are also independent pairs.
The rank of a matrix is fixed (i.e., the number of independent vectors in the matrix is fixed), but the independent vectors can be chosen differently.

Features of Matrix Rank

(1) For A_mn, rank A ≤ min(m, n) (consider where the last pivot a_rr can sit inside A_mn).

[3 -6  21]
[0 42 -21]    4×3: rank ≤ 3        [1 1 2 -1 5]
[2 24   0]                         [0 1 3  4 7]    2×5: rank ≤ 2
[2 54 -15]

Understand the important meaning of this feature:
For 2D vectors with 2 components (m = 2, in a plane), no matter how many 2D vectors a matrix A_2n contains (n > 2), rank A ≤ 2. That means that once you find two independent 2D vectors (two vectors that are not parallel), any other 2D vector in the matrix can be linearly represented by these 2 independent vectors (called the coordinate vectors). If you add any new 2D vector to the matrix, the new vector is still dependent on the 2 coordinate vectors; this holds even as the number of columns grows to infinity. Therefore all 2D vectors together construct the space R², R = any real number. (A space has infinitely many vector members, whereas a matrix has finitely many.)
Similarly, any 3D vector can be represented by three independent 3D vectors (three vectors not lying in one plane) as coordinates; all 3D vectors construct the space R³.

(2) Rank A = Rank A^T.

The transposition operation ([·]^T) does not change the rank of a matrix.
Example

[1 2 3]      [1 2  3]            [1  3 2]
[2 4 5]  →   [0 0 -1]  C2 ↔ C3 → [0 -1 0]     r = 2 (C3 ↔ C2 affects which columns are independent)

[1 2]      [1  2]      [1  2]
[2 4]  →   [0  0]  →   [0 -1]     r = 2 (no column exchanges)
[3 5]      [0 -1]      [0  0]

For a given matrix, the number of independent vectors is the same for its column and its row vectors (notice that the row vectors of A become the column vectors of A^T).
After finishing Gauss Elimination we find the matrix's rank is r = 2. Accounting for the COLUMN EXCHANGE made during the calculation, the first 2 independent column vectors are [1 3; 2 5] (not [1 2; 2 4]!). The feature also says that the first r row vectors are independent.
Example (A^T from the previous example)

           [ 3   0   2   2]                 [3   0   2   2]                [3  0  2  2]
Rank A^T = [-6  42  24  54] R2 + 2R1   →    [0  42  28  58] 0.5R2     →    [0 21 14 29]
           [21 -21   0 -15] R3 - 7R1        [0 -21 -14 -29] + 0.5R2        [0  0  0  0]

rank = 2 → the columns [3; -6; 21] and [0; 42; -21] are independent.
  
Example
    [1 0 1 0]             [1 0 1 0]             [1 0 1 0]                        [1 0 0 1]
A = [1 0 1 1] R2 - R1  →  [0 0 0 1] R2 ↔ R3  →  [0 1 1 0]  C4 ↔ C3 (the first →  [0 1 0 1]    Rank A = 3
    [0 1 1 0]             [0 1 1 0]             [0 0 0 1]  3 columns dependent)  [0 0 1 0]

      [1 1 0]             [1 1 0]              [1 1 0]
      [0 0 1]             [0 0 1]              [0 1 0]
A^T = [1 1 1] R3 - R1  →  [0 0 1]  reorder  →  [0 0 1]  R4 - R3 → 0 row;   Rank A = Rank A^T = 3 = min(m, n)
      [0 1 0]             [0 1 0]              [0 0 1]

Example
[1  1 0]            [1  1 0]                    [1 1 0]
[1  1 1] R2 - R1    [0  0 1]                    [0 1 0]
[0  1 0]        →   [0  1 0]   reorder,     →   [0 0 6]    rank = 3
[0  0 1]            [0  0 1]   R5 + 9R3,        [0 0 0]
[4 -5 6] R5 - 4R1   [0 -9 6]   clear the        [0 0 0]
                               [0 0 1] rows

Basis: a set of p selected independent vectors {a_1, a_2, ..., a_p} with the same dimension d.

Vector Space V: the infinite set of vectors comprising all linear combinations of the basis of p vectors:
b = b_1 a_1 + b_2 a_2 + ... + b_p a_p, with b_1, b_2, ..., b_p any real constants; all such vectors construct a vector space V from the basis vectors {a_1, a_2, ..., a_p}.

Dimension of the Vector Space
The number of independent vectors (the basis) in a vector space is called the dimension of the space (p ≤ d).
The dimension of the vectors in a basis need not equal the dimension of the V-space they construct. For example, the basis [1; 0; 0] and [0; 1; 0] consists of 2 independent 3D vectors (spanning the x-y plane R² inside the x-y-z space R³). All linear combinations of these 2 basis vectors construct a V-space of dimension 2, yet the member vectors of that V-space are 3D. From above, 3D vectors can have at most 3 independent members, which construct the R³ space; therefore this 2D V-space is a subspace of R³.

Determinant: a value defined for a square matrix

Det(A) or |A| is a number encoding certain properties of the matrix A, e.g., A is invertible iff |A| ≠ 0.

Determinant Calculation
2nd order:
det A = |a_11 a_12; a_21 a_22| = a_11 a_22 - a_12 a_21

n-th order: "Laplace Expansion" along any row or column, applied recursively:

D = det A = a_j1 c_j1 + ... + a_ji c_ji + ... + a_jn c_jn     (expand along row j)
          = a_1j c_1j + ... + a_ij c_ij + ... + a_nj c_nj     (expand along column j)

where c_ji = (-1)^(j+i) M_ji; c_ji is the cofactor of a_ji and M_ji the minor of a_ji (the sub-determinant obtained by deleting row j and column i).

Example
    |1 3 0|
D = |2 6 4| = 1·(-1)^(1+1)|6 4; 0 2| + 3·(-1)^(1+2)|2 4; 1 2| + 0·(-1)^(1+3)|2 6; 1 0|
    |1 0 2|
  = (12 - 0) - 3(4 - 4) + 0 = 12

Expanding along the 3rd column instead:
D = 0 + 4·(-1)^(2+3)|1 3; 1 0| + 2·(-1)^(3+3)|1 3; 2 6| = -4(0 - 3) + 2(6 - 6) = 12

A triangular determinant equals the product of the diagonal elements (so transforming the determinant's matrix into an equivalent upper triangular matrix makes the calculation easy):

|a_11 a_12 ... a_1n; 0 a_22 ... a_2n; ...; 0 0 ... a_nn| = a_11 a_22 ··· a_nn = |a_11 0 ... 0; a_21 a_22 ... 0; ...; a_n1 a_n2 ... a_nn|
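The recursive Laplace expansion translates directly into code; a short sketch expanding along the first row (the function name is my own, and this O(n!) method is for illustration, not efficiency):

```python
import numpy as np

def det_laplace(A):
    """Determinant by Laplace expansion along the first row (recursive)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # drop row 1 and column j+1
        total += (-1) ** j * A[0, j] * det_laplace(minor)      # cofactor sign (-1)^(1+(j+1))
    return total

A = np.array([[1, 3, 0],
              [2, 6, 4],
              [1, 0, 2]])
print(det_laplace(A))    # 12
```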

Equality Rules (compare with the row operations of Gauss Elimination)

(1) Transposition does not change the determinant value: |A| = |A^T|.
(2) Interchanging 2 rows (columns) changes the determinant's sign:
|a_11 a_12 ... a_1n; a_21 a_22 ... a_2n; ...; a_n1 a_n2 ... a_nn| = -|a_21 a_22 ... a_2n; a_11 a_12 ... a_1n; ...; a_n1 a_n2 ... a_nn|
(3) A common factor of a row (column) can be taken out of the determinant:
|a_11 ... a_1n; k·a_21 ... k·a_2n; ...; a_n1 ... a_nn| = k·|a_11 ... a_1n; a_21 ... a_2n; ...; a_n1 ... a_nn|
Note: |k·A_nn| = k^n |A|, because kA applies the same factor k to every element of A (one factor per row).
(4) A determinant with 2 identical or proportional rows/columns equals zero.
(5) A determinant whose row is a sum splits into two:
|a_11 ... a_1n; a_21 + b_21 ... a_2n + b_2n; ...; a_n1 ... a_nn|
  = (-1)^(2+1)(a_21 + b_21)M_21 + ... + (-1)^(2+n)(a_2n + b_2n)M_2n
  = |a_11 ... a_1n; a_21 ... a_2n; ...; a_n1 ... a_nn| + |a_11 ... a_1n; b_21 ... b_2n; ...; a_n1 ... a_nn|

(6) Adding a multiple of any row (column) to another row (column) does not change the determinant (the split-off term has two identical rows, so it is zero).

Notice: Gauss Elimination is used to simplify a matrix, so if you use GE to simplify a determinant calculation you must account for the impact of each GE row/column operation on the determinant value. To reach a triangular form you may process a row in two ways:

1) Add a factor-multiplied row onto another row (R2 + cR1):
D = det A = |a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33|
          = |a_11 a_12 a_13; a_21 + c·a_11  a_22 + c·a_12  a_23 + c·a_13; a_31 a_32 a_33|
          = |a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33| + c·|a_11 a_12 a_13; a_11 a_12 a_13; a_31 a_32 a_33|
and the second term is zero: no change in the determinant value.

2) Multiply a row by a factor and then add another row (replace R1 by c·R1 + R2):
|c·a_11 + a_21  c·a_12 + a_22  c·a_13 + a_23; a_21 a_22 a_23; a_31 a_32 a_33|
  = c·|a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33| + |a_21 a_22 a_23; a_21 a_22 a_23; a_31 a_32 a_33|
  = c·D: this changes the determinant value!

Example
Use Gauss Elimination to simplify a determinant calculation (keep the processed rows free of factors). Reducing a 4×4 determinant with first row [2 0 4 6] until the first two columns are triangular, then expanding along the first column twice:

D = 2·(-1)^(1+1) · 5·(-1)^(1+1) · |12/5  19/5; -57/5  146/5| = 10·(12·146 + 19·57)/25 = 10·2835/25 = 1134

6. Cramer's Rule (solution of an LES by determinants, for m = n)

Rank(A_nn) vs. det(A_nn): (1) if rank A < n then |A| = 0; (2) if rank A = n then |A| ≠ 0.

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = R_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = R_2        A_nn X_n1 = R_n1
...
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = R_n

x_1 = D_1/D,  x_2 = D_2/D, ...,  x_n = D_n/D      (equivalently D·x_k = D_k, i.e., |A|·x_k = |A_k|)

where D = |A| and D_k is the determinant obtained from D by replacing the k-th column with the column of the right-hand vector R.
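The column-replacement rule can be sketched directly; a hedged NumPy implementation (the function name is my own; the test system is the 3×3 example solved earlier by Gauss Elimination):

```python
import numpy as np

def cramer(A, R):
    """Solve A x = R via Cramer's rule: x_k = D_k / D (square, nonsingular A)."""
    D = np.linalg.det(A)
    if np.isclose(D, 0):
        raise ValueError("D = |A| = 0: Cramer's rule does not apply")
    n = len(R)
    x = np.empty(n)
    for k in range(n):
        Ak = A.copy()
        Ak[:, k] = R                   # replace the k-th column by the right-hand vector
        x[k] = np.linalg.det(Ak) / D   # x_k = D_k / D
    return x

A = np.array([[-1.0, 1, 1], [3, -1, 1], [-1, 3, 4]])
R = np.array([2.0, 6, 4])
print(cramer(A, R))    # [ -6. -14.  10.]
```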

[a_11 a_12  ...  ...  a_1n | R_1    ]
[ 0   a_22  ...  ...  a_2n | R_2    ]     the final (top-left) upper triangular submatrix,
[ ...            ...   ... | ...    ]     r ≤ min(n, m)
[ 0   ... 0 a_rr ...  a_rn | R_r    ]     ← bottom of the upper triangular submatrix
[ 0   ... 0  0   ...   0   | R_(r+1)]       (obtained by row/column exchanges)
[ ...                      | ...    ]
[ 0   ... 0  0   ...   0   | R_n    ]

For the independence of n column/row vectors (a_1, a_2, ..., a_n), based on the solution of the homogeneous system:
If D = |A| ≠ 0 → only the trivial solution → the n vectors are independent.
Else D = 0 → infinite non-zero solutions → the n vectors are dependent.

Solution existence (Cramer's rule: D·x_k = D_k, i.e., |A|·x_k = |A_k|):

D ≠ 0:  R = O → unique zero solution
        R ≠ O → unique non-zero solution
D = 0:  R = O → infinite solutions
        R ≠ O → rank Ã = rank A: infinite solutions
                rank Ã ≠ rank A: no solution

Solution Structure of an LES
Solutions of the homogeneous system A_nn X_n1 = O:
(1) X = O is always a (trivial) solution;
(2) non-trivial infinite solutions exist iff rank A < n (D = 0).

Solution structure of the non-homogeneous system AX = R:

Let X = X_S + X_H, so that A(X_S + X_H) = R + O,
where X_S is any special solution of the system (A X_S = R) and X_H runs over the non-trivial (infinite) solutions of the homogeneous system (A X_H = O).
