Department of Mathematics Indian Institute of Technology, Bombay
It should be remarked that if rank(A) < n, then a left inverse of the m × n matrix A cannot exist. If rank(A) = n < m, then infinitely many left inverses exist. Analogous definitions and remarks can be made for a right inverse.
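For instance, with m = 3 and n = 2, take
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}.$$
Then rank(A) = 2 = n < m, and every matrix $B = \begin{bmatrix} 1 & 0 & b_1 \\ 0 & 1 & b_2 \end{bmatrix}$ satisfies $BA = I_2$, so there are infinitely many left inverses; on the other hand no right inverse exists, since $AC = I_3$ would force $3 = \mathrm{rank}(I_3) = \mathrm{rank}(AC) \le \mathrm{rank}(A) = 2$.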
The situation is most interesting for square matrices. First we note the following. Let A and
B be n × n matrices.
Theorem 0.1.1. If a square matrix A has another square matrix B as a left inverse, then B
is also a right inverse and it is unique.
Proof:
It is given that BA = I. Let B̂ = E_N · · · E_2 E_1 B be a row echelon form of B; then its last row cannot be 0. Otherwise the same would be true of B̂A = E_N · · · E_2 E_1 I = E_N · · · E_2 E_1, which is impossible since the latter is invertible.
Hence (continuing the row reduction to the reduced row echelon form) we can assume that B̂ = I.
B̂ = I =⇒ B̂A = A
=⇒ E_N · · · E_2 E_1 (BA) = A, and since BA = I,
=⇒ E_N · · · E_2 E_1 = A.
But then A = E_N · · · E_2 E_1 is a product of invertibles and therefore itself invertible, and hence has a unique inverse, which is $E_1^{-1} E_2^{-1} \cdots E_N^{-1}$. Also,
A = E_N · · · E_2 E_1 =⇒ B E_N · · · E_2 E_1 = I
=⇒ B = $E_1^{-1} E_2^{-1} \cdots E_N^{-1}$ = A^{-1}.
Thus B is the unique inverse of A; in particular B is also a right inverse of A.
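For instance, $A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} = E_{12}(2)$ is itself an ERM, and its left inverse $B = E_{12}(-2) = \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix}$ satisfies BA = I = AB, exactly as the proof predicts.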
0.2.1 Dimension
Theorem 0.2.1. Let B = {v_1, ..., v_k} and B′ = {w_1, ..., w_ℓ} be two bases of a vector space V; then k = ℓ. Hence dim V is well defined.
Proof:
Using the basis B, write w_1 = a_1v_1 + · · · + a_kv_k; since w_1 ≠ 0, some a_j ≠ 0, and after renumbering the v_j we may assume a_1 ≠ 0. Put B_1 = {w_1, v_2, ..., v_k}. Then
b_1w_1 + b_2v_2 + · · · + b_kv_k = 0
=⇒ b_1(a_1v_1 + · · · + a_kv_k) + b_2v_2 + · · · + b_kv_k = 0
=⇒ b_1a_1 = 0 (comparing the coefficient of v_1, by L.I. of B) =⇒ b_1 = 0 ∵ a_1 ≠ 0
=⇒ b_2 = · · · = b_k = 0 =⇒ L.I. of B_1.
Moreover v_1 = a_1^{-1}(w_1 − a_2v_2 − · · · − a_kv_k) ∈ L(B_1), so L(B_1) = V and B_1 is a basis.
Next, assuming that B_r = {w_1, ..., w_r, v_{r+1}, ..., v_k} is a basis, we show that B_{r+1} = {w_1, ..., w_{r+1}, v_{r+2}, ..., v_k} is a basis. Using the basis B_r, write
w_{r+1} = c_1w_1 + · · · + c_rw_r + c_{r+1}v_{r+1} + · · · + c_kv_k.
In the above at least one of c_{r+1}, ..., c_k is nonzero, else w_{r+1} ∈ L(w_1, ..., w_r) and B′ would become L.D. So w.l.o.g. (renumbering v_{r+1}, ..., v_k) let c_{r+1} ≠ 0. Then B_{r+1} is also a basis; this can be proved just as above.
Continuing, if k > ℓ we arrive at a basis B_ℓ = {w_1, ..., w_ℓ, v_{ℓ+1}, ..., v_k} = B′ ∪ {v_{ℓ+1}, ..., v_k}. But v_{ℓ+1} is already in L(B′), a contradiction to the L.I. of B_ℓ. Hence v_{ℓ+1} cannot exist, i.e. k > ℓ is impossible; interchanging the roles of B and B′ shows ℓ > k is impossible as well, so k = ℓ.
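To see the exchange at work in a small case, let V = R², B = {v_1, v_2} = {e_1, e_2} and B′ = {w_1, w_2} with w_1 = e_1 + e_2 and w_2 = e_1 − e_2. Then w_1 = 1·v_1 + 1·v_2 with a_1 = 1 ≠ 0, so B_1 = {w_1, e_2} is a basis; next w_2 = 1·w_1 + (−2)·e_2 with the coefficient of e_2 nonzero, so B_2 = {w_1, w_2} = B′ is again a basis, and the process terminates with k = ℓ = 2.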
Short proof from Kreyszig, Advanced Engineering Mathematics (8th ed.), p. 333: If A has r = rank(A) linearly independent rows {A_{j_1}, ..., A_{j_r}}, then writing all the m rows as linear combinations of these gives an m × r matrix C s.t.
$$C\begin{bmatrix} A_{j_1} \\ \vdots \\ A_{j_r} \end{bmatrix} = A = [A^1 \ \cdots \ A^n].$$
Now writing $J = \{j_1, \ldots, j_r\}$ and $A^k_J = \begin{bmatrix} a_{j_1 k} \\ \vdots \\ a_{j_r k} \end{bmatrix}$ (the partial $k$-th column of $A$) we find
$$C\begin{bmatrix} A_{j_1} \\ \vdots \\ A_{j_r} \end{bmatrix} = C\,[A^1_J \ \cdots \ A^n_J] = [\underbrace{CA^1_J}_{\in C(C)} \ \cdots \ \underbrace{CA^n_J}_{\in C(C)}] = [A^1 \ \cdots \ A^n].$$
This implies that each $A^k \in C(C)$ =⇒ $C(A) \subset C(C)$ =⇒ $\mathrm{rank}_c(A) \le \mathrm{rank}_c(C) \le r = \mathrm{rank}(A)$. Therefore $\mathrm{rank}_c(A) \le \mathrm{rank}(A)$. Now invoke $A^T$. (Recall that $Mx \in C(M)$ for any $p \times q$ matrix $M$ and $q \times 1$ column vector $x$.)
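For instance, for
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 0 & 1 \end{bmatrix}$$
the rows A_1 and A_3 are linearly independent while A_2 = 2A_1, so r = 2, J = {1, 3} and
$$C = \begin{bmatrix} 1 & 0 \\ 2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad C\begin{bmatrix} A_1 \\ A_3 \end{bmatrix} = A.$$
Each column of A is C times the corresponding partial column, e.g. the second column $(2, 4, 0)^T = C\,(2, 0)^T$, so all the columns of A lie in C(C), which is spanned by just two vectors.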
0.2.3 Position of the pivots in REFs
Theorem 0.2.2. If  and à are two REFs of the same matrix A, then their pivots are exactly
in the same places.
Proof: Let A have n columns and rank(A) = r. By definition of REF, the pivots occur
in row numbers 1, 2, ..., r. So let them occur in the column numbers k_1 < k_2 < · · · < k_r and ℓ_1 < ℓ_2 < · · · < ℓ_r respectively. Suppose t is least such that k_t ≠ ℓ_t. Assume w.l.o.g. that k_t < ℓ_t. Delete the last n − k_t columns from A, Â and Ã to get A′, Â′ and Ã′ respectively. Then the last two are REFs of A′. In Â′ there are t pivots while Ã′ has only t − 1, which gives two different ranks for A′, a contradiction.
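For example, $A = \begin{bmatrix} 0 & 1 & 2 \\ 0 & 2 & 4 \end{bmatrix}$ has the two REFs $\begin{bmatrix} 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}$ and $\begin{bmatrix} 0 & 2 & 4 \\ 0 & 0 & 0 \end{bmatrix}$: the entries differ, but in both the single pivot occurs in row 1, column 2.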
D-1 Skew-symmetry
D-2 Linearity in each row
D-3 Normalization
In addition to the crucial 3 properties, the determinant satisfies two more useful properties:
0.3.2 Determinants of product and transpose
1. Let E be an ERM (elementary row matrix). Then |P_{jk}| = −1, |E_{jk}(c)| = 1 and |M_j(λ)| = λ; in each case |E^T| = |E|.
2. Let A be any square matrix of the same size as E. Then |EA| = |E||A|.
3. Let X be any square matrix with the last column 0. Then |X| = 0.
Proof:
1. • The first is skew-symmetry applied to I_n (the identity matrix), together with P_{jk}^T = P_{jk}.
• For the second, assume j < k for definiteness and expand |E_{jk}(c)| by the j-th row to obtain
$$0 + \cdots + \underbrace{1\cdot|I_{n-1}|}_{j\text{th term}} + \cdots + \underbrace{c\cdot 0}_{k\text{th term}} + \cdots + 0 = 1,$$
and E_{jk}(c)^T = E_{kj}(c).
2. • E = P_{jk} =⇒ |P_{jk}A| = −|A| (skew-symmetry) = |P_{jk}||A|.
• E = E_{jk}(c) =⇒ |E_{jk}(c)A| = |A| + c·0 = |E_{jk}(c)||A|, due to linearity in the j-th row, the second term being a determinant whose j-th row equals its k-th row (j ≠ k). (A 2 × 2 check of these first two cases is given after the proof.)
• E = M_j(λ) =⇒ |M_j(λ)A| = λ|A| = |M_j(λ)||A|, due to linearity in the j-th row (or expanding by the j-th row).
3. Let X be n × n and M_{jk} be the (jk)-th minor of X; then by the inductive hypothesis M_{jk} = 0 if 1 ≤ k < n. Expanding by the first row, |X| = 0 + · · · + 0 + 0 · M_{1n} = 0.
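As a quick check of 2. in the 2 × 2 case, with $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$:
$$|P_{12}A| = \begin{vmatrix} a_{21} & a_{22} \\ a_{11} & a_{12} \end{vmatrix} = a_{21}a_{12} - a_{22}a_{11} = -|A|, \qquad |E_{12}(c)A| = \begin{vmatrix} a_{11} + c\,a_{21} & a_{12} + c\,a_{22} \\ a_{21} & a_{22} \end{vmatrix} = |A|.$$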
Corollary 0.3.1. Let A and B be n × n matrices. Then |AB| = |A||B| and |A^T| = |A|.
Proof:
• First we consider A invertible. An invertible matrix is a product of ERMs, so let A = E_N · · · E_2 E_1. Then
$$|AB| = |E_N \cdots E_2E_1B| = |E_N|\,|E_{N-1}\cdots E_2E_1B| = \cdots = \underbrace{|E_N|\cdots|E_2|\,|E_1|}_{=|A|\ \text{(see below)}}\,|B|.$$
Also, by the same logic, |A| = |AI| = |E_N| · · · |E_2||E_1||I| = |E_N| · · · |E_2||E_1|.
Further, A^T = E_1^T · · · E_N^T is again a product of ERMs and
$$|A^T| = |E_1^T|\cdots|E_N^T| = |E_1|\cdots|E_N| = |A|.$$
• If A is not invertible, then there are ERMs E_1, E_2, ..., E_N such that the last row of E_N · · · E_2 E_1 A = GA, say, is 0. Then the last row of GAB = G(AB) is also 0, and expansion by the last row shows that |GA| = 0 = |GAB|. By the first part, |G||A| = 0 and |G||AB| = 0; since |G| = |E_N| · · · |E_1| ≠ 0, it follows that |A| = 0 and |AB| = 0 = |A||B|.
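A quick numerical check: with $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}$ we have |A| = −2, |B| = −1, $AB = \begin{bmatrix} 2 & 3 \\ 4 & 7 \end{bmatrix}$ with |AB| = 2 = |A||B|, and $|A^T| = \begin{vmatrix} 1 & 3 \\ 2 & 4 \end{vmatrix} = -2 = |A|$.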
Remark 0.3.1. Invertible matrices are dense in M_n(R), and X ↦ |XB| − |X||B| is a continuous function (a polynomial of degree n in the n² variables x_{ij}); hence if |XB| − |X||B| = 0 for all invertible matrices X, then it is so for all matrices X.
Remark 0.3.2. Likewise, by the continuity of X ↦ |X^T| − |X|, which vanishes for invertible X, we get |X^T| = |X| for all X.
Corollary 0.3.2.
• Expansion of the determinant by the rows implies that by the columns and vice-versa.
0.4 Definition of determinant
We indicate how an n × n determinant can be expanded by any row. We use induction on n.
So we assume that (n − 1) × (n − 1) determinants are well defined. Moreover, we assume that
the theorem has been verified for n = 2. So let n > 2. We expand the determinant of an n × n matrix A by its first and by its second row and show that the two expansions are equal.
Theorem 0.4.1. Let A be an n×n matrix. Let |A|1 and |A|2 be expansions of the determinant
by the first and the second row respectively. Then |A|1 = |A|2 .
Proof:
For J, K ⊆ {1, . . . , n} we use the notation $A_{JK}$ to denote the submatrix of A obtained by taking the rows and the columns corresponding to J and K respectively. Thus, e.g., $M_{jk} = |A_{\{j\}^c\{k\}^c}|$.
$$|A|_1 = \sum_{1\le k\le n} (-1)^{1+k}\, a_{1k}\, M_{1k}, \qquad |A|_2 = \sum_{1\le k\le n} (-1)^{k}\, a_{2k}\, M_{2k}.$$
Expanding each minor M_{1k} (respectively M_{2ℓ}) by its first row, which is the second (respectively the first) row of A with one entry deleted, expresses |A|_1 and |A|_2 as double sums over pairs (k, ℓ) with k ≠ ℓ. Finally, on interchanging k and ℓ,
$$|A|_2 = \sum_{k=1}^{n}\sum_{\ell=1}^{k-1} (-1)^{k+\ell}\, a_{2\ell}\, a_{1k}\, \bigl|A_{\{1,2\}^c\{k,\ell\}^c}\bigr| \;+\; \sum_{k=1}^{n}\sum_{\ell=k+1}^{n} (-1)^{k+\ell-1}\, a_{2\ell}\, a_{1k}\, \bigl|A_{\{1,2\}^c\{k,\ell\}^c}\bigr|,$$
where the order of summation has been interchanged, just as one interchanges the order of integration:
$$\int_{x=0}^{1}\int_{y=0}^{x} f(x, y)\, dy\, dx = \int_{y=0}^{1}\int_{x=y}^{1} f(x, y)\, dx\, dy.$$
Likewise, $\displaystyle\sum_{k=1}^{n}\sum_{\ell=k+1}^{n} = \sum_{\ell=1}^{n}\sum_{k=1}^{\ell-1}$.
When we rewrite the sum for |A|_2, in the first double sum we have k < ℓ, hence it is the second double sum for |A|_1, and vice versa.
Exercise: Write the expansion |A|_j by the j-th row and compare with |A|_1 for j > 2. This will complete the proof of the consistency of the definition. One should expand the minors M_{1k} by their (j − 1)-th row, which is the j-th row of A. This is valid due to the induction hypothesis: an (n − 1) × (n − 1) determinant can be expanded by any of its rows.
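As a concrete check of Theorem 0.4.1 for n = 3, take $A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{bmatrix}$. Expansion by the first row gives |A|_1 = 1(50 − 48) − 2(40 − 42) + 3(32 − 35) = 2 + 4 − 9 = −3, and expansion by the second row gives |A|_2 = −4(20 − 24) + 5(10 − 21) − 6(8 − 14) = 16 − 55 + 36 = −3, so the two expansions agree.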