
Echelon Form

If A is m × n, we can find P, L and U as before. In this case, L and P


will be m × m and U will be m × n.
U has the following properties:
1. Pivots are the 1st nonzero entries in their rows.
2. Entries below pivots are zero, by elimination.
3. Each pivot lies to the right of the pivot in the row above.
4. Zero rows are at the bottom of the matrix.
U is called an echelon form of A.
Let • = pivot entry. Possible 2 × 2 echelon forms:
[• ∗; 0 •], [• ∗; 0 0], [0 •; 0 0], and [0 0; 0 0].

H. Ananthnarayan D1 - Lecture 6 16th January 2018 5 / 13


Row Reduced Form
To obtain the row reduced form R of a matrix A:
1) Get the echelon form U. 2) Make the pivots 1.
3) Make the entries above the pivots 0.
Ex: Find all possible 2 × 2 row reduced forms.
Example. Let A = [1 2 3 5; 2 4 8 12; 3 6 7 13]. Then U = [1 2 3 5; 0 0 2 2; 0 0 0 0].
Divide by pivots: R2/2 gives [1 2 3 5; 0 0 1 1; 0 0 0 0].
By R1 = R1 − 3R2, the row reduced form of A is R = [1 2 0 2; 0 0 1 1; 0 0 0 0].
U and R are used to solve Ax = 0 and Ax = b.
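As a quick check of the elimination above, the same row reduced form can be computed with a computer algebra system. The sketch below is an added illustration, not part of the original slides, and assumes SymPy is available.

```python
# Added sketch: compute the row reduced form R of the example matrix A with SymPy.
import sympy as sp

A = sp.Matrix([[1, 2, 3, 5],
               [2, 4, 8, 12],
               [3, 6, 7, 13]])

R, pivot_columns = A.rref()   # rref() returns (R, indices of pivot columns)
print(R)                      # Matrix([[1, 2, 0, 2], [0, 0, 1, 1], [0, 0, 0, 0]])
print(pivot_columns)          # (0, 2): the 1st and 3rd columns contain pivots
```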

H. Ananthnarayan D1 - Lecture 6 16th January 2018 6 / 13


Null Space: Solution of Ax = 0

Let A be m × n. The Null Space of A, denoted N(A),


is the set of all vectors x in Rn such that Ax = 0.
Example 1: A = [1 2 3 5; 2 4 8 12; 3 6 7 13]. Are the following in N(A)?
x = (−2, 1, 0, 0)T? y = (−2, 0, −1, 1)T? z = (−5, 0, 0, 1)T?
Check that the following vectors are in N(A):
x + y = (−4, 1, −1, 1)T, −3 · x = (6, −3, 0, 0)T.

H. Ananthnarayan D1 - Lecture 6 16th January 2018 7 / 13


Linear Combinations in N(A)

Remark: Let A be an m × n matrix, u, v be real numbers.


• The null space of A, N(A) contains vectors from Rn ,
• If x, y are in N(A), i.e., Ax = 0 and Ay = 0, then
A(ux + vy) = u(Ax) + v (Ay) = 0, i.e., ux + vy is in N(A).
i.e., a linear combination of vectors in N(A) is also in N(A).
Thus N(A) is closed under linear combinations .

H. Ananthnarayan D1 - Lecture 6 16th January 2018 8 / 13


Finding N(A)

Key Point: Ax = 0 has the same solutions as Ux = 0,


which has the same solutions as Rx = 0, i.e.,
N(A) = N(U) = N(R) .
Reason: If A is m × n, and Q is an invertible m × m matrix, then
N(A) = N(QA). (Verify this)!
Example 2: For A = [1 2 3 5; 2 4 8 12; 3 6 7 13], we have Rx = [1 2 0 2; 0 0 1 1; 0 0 0 0] (t, u, v, w)T.
Rx = 0 gives t + 2u + 2w = 0 and v + w = 0,
i.e., t = −2u − 2w and v = −w.

H. Ananthnarayan D1 - Lecture 6 16th January 2018 9 / 13


Null Space: Solution of Ax = 0

Rx = 0 gives t = −2u − 2w and v = −w,


t and v are dependent on the values of u and w.
u and w are free and independent, i.e., we can choose any value for
these two variables.
Special solutions:
u = 1 and w = 0 gives x = (−2, 1, 0, 0)T.
u = 0 and w = 1 gives x = (−2, 0, −1, 1)T.
The null space contains:
x = (t, u, v, w)T = (−2u − 2w, u, −w, w)T = u(−2, 1, 0, 0)T + w(−2, 0, −1, 1)T,
i.e., all possible linear combinations of the special solutions.
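The special solutions can also be read off with SymPy's nullspace routine; this is an added sketch (SymPy is an assumption, not part of the lecture).

```python
# Added sketch: the special solutions of Ax = 0, one per free variable.
import sympy as sp

A = sp.Matrix([[1, 2, 3, 5],
               [2, 4, 8, 12],
               [3, 6, 7, 13]])

for s in A.nullspace():   # basis of N(A): (-2, 1, 0, 0)^T and (-2, 0, -1, 1)^T
    print(s.T)
    print((A * s).T)      # the zero vector, confirming A s = 0
```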

H. Ananthnarayan D1 - Lecture 6 16th January 2018 10 / 13


Rank of A
Ax = 0 always has a solution: the trivial one, i.e., x = 0.
Main Q1: When does Ax = 0 have a non-zero solution?
A: When there is at least one free variable,
i.e., not every column of R contains a pivot.
To keep track of this, we define:
rank(A) = number of columns containing pivots in R .
If A is m × n and rank(A) = r , then
• rank(A) ≤ min{m, n}.
• no. of dependent variables = r .
• no. of free variables = n − r .
• Ax = 0 has only the 0 solution ⇔ r = n.
• m < n ⇒ Ax = 0 has non-zero solutions.
True/False: If m ≥ n, then Ax = 0 has only the 0 solution.
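A numerical rank computation makes the count of free variables concrete for the running example; the sketch below is an added illustration assuming NumPy.

```python
# Added sketch: rank and number of free variables for the 3 x 4 example matrix.
import numpy as np

A = np.array([[1, 2, 3, 5],
              [2, 4, 8, 12],
              [3, 6, 7, 13]])

m, n = A.shape
r = np.linalg.matrix_rank(A)
print(r)         # 2 = rank(A)
print(n - r)     # 2 free variables, so Ax = 0 has non-zero solutions
print(r == n)    # False: Ax = 0 has only the 0 solution exactly when r == n
```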

H. Ananthnarayan D1 - Lecture 6 16th January 2018 11 / 13


Rank of A

rank(A) = number of columns containing pivots in R .


= number of dependent variables in the system Ax = 0.
Example: R = [1 2 0 2; 0 0 1 1; 0 0 0 0] when A = [1 2 3 5; 2 4 8 12; 3 6 7 13].
The number of columns containing pivots in R is 2. So, rank(A) = 2.
R contains a 2 × 2 identity matrix, namely in the rows and columns
corresponding to the pivots.
This is the row reduced form of the corresponding submatrix [1 3; 2 8]
of A, which is invertible, since it has 2 pivots.
Thus, rank(A) = r ⇒ A has an r × r invertible submatrix.
Is the converse true?

H. Ananthnarayan D1 - Lecture 6 16th January 2018 12 / 13


Reading Slide - Finding N(A) = N(U) = N(R)

Let A be m × n. To solve Ax = 0, find R and solve Rx = 0.


1. Find free (independent) and pivot (dependent) variables:
pivot variables: columns in R with pivots (↔ t and v ).
free variables: columns in R without pivots (↔ u and w).
2. No free variables, i.e., rank(A) = n ⇒ N(A) = 0.
3a. If rank(A) < n, obtain a special solution:
Set one free variable = 1, the other free variables = 0.
Solve Rx = 0 to obtain values of pivot variables.
3b. Find special solutions for each free variable.
N(A) = space of linear combinations of special solutions.
• This information is stored in a compact form in:
Null Space Matrix: Special solutions as columns.

H. Ananthnarayan D1 - Lecture 6 16th January 2018 13 / 13


Recall: Echelon Form and Rank

Let A be an m × n matrix.
• An echelon form U (also m × n) is obtained by forward elimination
and has the following properties:
1. Pivots are the 1st nonzero entries in their rows.
2. Entries below pivots are zero, by elimination.
3. Each pivot lies to the right of the pivot in the row above.
4. Zero rows are at the bottom of the matrix.
• To obtain the row reduced form R of A:
1) Get the echelon form U.
2) Make the pivots 1.
3) Make the entries above the pivots 0.
U and R are used to solve Ax = 0 and Ax = b.
• Number of columns with pivots = rank(A).

H. Ananthnarayan D1 - Lecture 7 18th January 2018 3 / 17


Recall: Null Space of A
Given an m × n matrix A, the null space of A, denoted N(A),
is the set of all vectors x in Rn such that Ax = 0.
Key Point: N(A) = N(U) = N(R)
Example: If A = [1 2 3 5; 2 4 8 12; 3 6 7 13], then R = [1 2 0 2; 0 0 1 1; 0 0 0 0].
x = (t, u, v, w)T is in N(A) if and only if Rx = 0,
i.e., t = −2u − 2w and v = −w.
Thus, x = (t, u, v, w)T = (−2u − 2w, u, −w, w)T = u(−2, 1, 0, 0)T + w(−2, 0, −1, 1)T,
i.e., all possible linear combinations of the special solutions.
This information is stored in a compact form in:
Null Space Matrix: Special solutions as columns.

H. Ananthnarayan D1 - Lecture 7 18th January 2018 4 / 17


Recall: Finding N(A) = N(U) = N(R)

Let A be m × n. To solve Ax = 0, find R and solve Rx = 0.


1. Find free (independent) and pivot (dependent) variables:
pivot variables: columns in R with pivots (↔ t and v ).
free variables: columns in R without pivots (↔ u and w).
2. No free variables, i.e., rank(A) = n ⇒ N(A) = 0.
3a. If rank(A) < n, obtain a special solution:
Set one free variable = 1, the other free variables = 0.
Solve Rx = 0 to obtain values of pivot variables.
3b. Find special solutions for each free variable.
N(A) = space of linear combinations of special solutions.

H. Ananthnarayan D1 - Lecture 7 18th January 2018 5 / 17


Solving Ax = b

Caution: If b ≠ 0, solving Ax = b may not be the same as solving
Ux = b or Rx = b.
Example: Ax = [1 2 3 5; 2 4 8 12; 3 6 7 13] (t, u, v, w)T = (b1, b2, b3)T = b.
Convert to Ux = c and then Rx = d:
[1 2 3 5 | b1; 2 4 8 12 | b2; 3 6 7 13 | b3] → [1 2 3 5 | b1; 0 0 2 2 | b2 − 2b1; 0 0 −2 −2 | b3 − 3b1]
→ [1 2 3 5 | b1; 0 0 2 2 | b2 − 2b1; 0 0 0 0 | b3 + b2 − 5b1]
System is consistent ⇔ b3 + b2 − 5b1 = 0, i.e., b3 = 5b1 − b2
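The consistency condition can be tested mechanically by comparing rank(A) with rank([A | b]). The sketch below is an added illustration, not part of the slides; the helper name `consistent` is mine and NumPy is assumed.

```python
# Added sketch: Ax = b is consistent exactly when rank([A | b]) = rank(A).
import numpy as np

A = np.array([[1, 2, 3, 5],
              [2, 4, 8, 12],
              [3, 6, 7, 13]])

def consistent(A, b):
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(consistent(A, np.array([1, 0, 4])))   # False: 4 != 5*1 - 0
print(consistent(A, np.array([1, 0, 5])))   # True:  5 == 5*1 - 0
```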

H. Ananthnarayan D1 - Lecture 7 18th January 2018 6 / 17


Solving Ax = b or Ux = c or Rx = d

Ax = b has a solution ⇔ b3 = 5b1 − b2 .


e.g., there is no solution when b = (1, 0, 4)T.
Suppose b = (1, 0, 5)T. Then [A|b] →
[1 2 3 5 | b1; 0 0 2 2 | b2 − 2b1; 0 0 0 0 | b3 + b2 − 5b1] = [1 2 3 5 | 1; 0 0 2 2 | −2; 0 0 0 0 | 0]
→ [1 2 3 5 | 1; 0 0 1 1 | −1; 0 0 0 0 | 0] → [1 2 0 2 | 4; 0 0 1 1 | −1; 0 0 0 0 | 0]
Ax = b is reduced to solving Ux = c = (1, −2, 0)T,
which is further reduced to solving Rx = d = (4, −1, 0)T.

H. Ananthnarayan D1 - Lecture 7 18th January 2018 7 / 17


Solving Ax = b or Ux = c or Rx = d

Solving Ax = b is reduced to solving Rx = d ,


i.e., we want to solve
[1 2 0 2; 0 0 1 1; 0 0 0 0] (t, u, v, w)T = (4, −1, 0)T,
i.e., t = 4 − 2u − 2w and v = −1 − w.
Set the free variables u and w = 0 to get t = 4 and v = −1.
A particular solution: x = (4, 0, −1, 0)T.
Ex: Check it is a solution i.e., check Ax = b.
Observe: In Rx = d, the vector d gives values for the pivot variables,
when the free variables are 0.

H. Ananthnarayan D1 - Lecture 7 18th January 2018 8 / 17


General Solution of Ax = b

From Rx = d, we get t = 4 − 2u − 2w and v = −1 − w, where u and


w are free. Complete set of solutions to Ax = b:
(t, u, v, w)T = (4 − 2u − 2w, u, −1 − w, w)T = (4, 0, −1, 0)T + u(−2, 1, 0, 0)T + w(−2, 0, −1, 1)T.
To solve Ax = b completely, reduce to Rx = d. Then:
1. Find xNullSpace , i.e., N(A), by solving Rx = 0.
2. Set free variables = 0, solve Rx = d for pivot variables.
This is a particular solution: xparticular .
3. Complete solutions: xcomplete = xparticular + xNullSpace

Ex: Verify geometrically for a 1 × 2 matrix, say A = [1 2].
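The decomposition xcomplete = xparticular + xNullSpace can also be checked numerically: every choice of the free variables gives a solution of Ax = b. This is an added sketch assuming NumPy.

```python
# Added sketch: x_particular plus any combination of the special solutions solves Ax = b.
import numpy as np

A = np.array([[1, 2, 3, 5],
              [2, 4, 8, 12],
              [3, 6, 7, 13]])
b = np.array([1, 0, 5])

x_particular = np.array([4, 0, -1, 0])
s1 = np.array([-2, 1, 0, 0])    # special solution for the free variable u
s2 = np.array([-2, 0, -1, 1])   # special solution for the free variable w

for u, w in [(0, 0), (1, 0), (0, 1), (3, -2)]:
    x = x_particular + u * s1 + w * s2
    print(np.array_equal(A @ x, b))   # True for every choice of u and w
```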

H. Ananthnarayan D1 - Lecture 7 18th January 2018 9 / 17


The Column Space of A

Q: Does Ax = b have a solution? A: Not always.


Main Q2: When does Ax = b have a solution?
If Ax = b has a solution, then we can find numbers x1, . . . , xn such that
[A∗1 · · · A∗n] (x1, . . . , xn)T = x1 A∗1 + · · · + xn A∗n = b,
i.e., b can be written as a linear combination of columns of A.
The column space of A, denoted C(A)
is the set of all linear combinations of the columns of A
= {b in Rm such that Ax = b is consistent}.

H. Ananthnarayan D1 - Lecture 7 18th January 2018 10 / 17


Finding C(A): Consistency of Ax = b
Example: Let A = [1 2 3 5; 2 4 8 12; 3 6 7 13]. Then Ax = b, where
b = (b1, b2, b3)T, has a solution whenever −5b1 + b2 + b3 = 0.
• C(A) is a plane in R3 passing through the origin with normal vector (−5, 1, 1)T.
• a = (1, 0, 4)T is not in C(A) as Ax = a is inconsistent.
• b = (1, 0, 5)T is in C(A) as Ax = b is consistent.
Ex: Write b as a linear combination of the columns of A.
(A different way of saying: Solve Ax = b).
x = (4, 0, −1, 0)T is a solution of Ax = b. Hence
(1, 0, 5)T = 4A∗1 + (−1)A∗3.
Q: Can you write b as a different combination of A∗1 , . . . , A∗4 ?
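One way to answer the question above: adding any null space vector to the weights gives another way of writing b as a combination of the columns. The sketch below is an added illustration assuming NumPy.

```python
# Added sketch: two different sets of weights writing b = (1, 0, 5)^T as a
# combination of the columns of A.
import numpy as np

A = np.array([[1, 2, 3, 5],
              [2, 4, 8, 12],
              [3, 6, 7, 13]])

x1 = np.array([4, 0, -1, 0])          # b = 4*A_1 + (-1)*A_3
x2 = x1 + np.array([-2, 1, 0, 0])     # add a null space vector to get new weights
for x in (x1, x2):
    print(A @ x)                      # [1 0 5] both times
```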

H. Ananthnarayan D1 - Lecture 7 18th January 2018 11 / 17


Linear Combinations in C(A)

Let A be an m × n matrix, u and v be real numbers.


• The column space of A, C(A) contains vectors from Rm .
• If a, b are in C(A), i.e., Ax = a and Ay = b for some x, y in Rn , then
ua + vb = u(Ax) + v (Ay) = A(ux + vy) = Aw, where w = ux + vy.
Hence, if w = (w1, . . . , wn)T, then ua + vb = w1 A∗1 + · · · + wn A∗n,
i.e., a linear combination of vectors in C(A) is also in C(A).
Thus, C(A) is closed under linear combinations.
• If b is in C(A), then b can be written as a linear combination of the
columns of A in as many ways as the solutions of Ax = b.

H. Ananthnarayan D1 - Lecture 7 18th January 2018 12 / 17


Summary: N(A) and C(A)

Remark: Let A be an m × n matrix.


• The null space of A, N(A) contains vectors from Rn .
• Ax = 0 ⇔ x is in N(A).
• The column space of A, C(A) contains vectors from Rm .
• If B is the nullspace matrix of A, then C(B) = N(A).
• Ax = b is consistent ⇔ b is in C(A) ⇔
b can be written as a linear combination of the columns of A. This can
be done in as many ways as the solutions of Ax = b.
• Let A be n × n.
A is invertible ⇔ N(A) = {0} ⇔ C(A) = Rn . Why?
• N(A) and C(A) are closed under linear combinations.

H. Ananthnarayan D1 - Lecture 7 18th January 2018 13 / 17


Vector Spaces: Rn
We begin with R1 , R2 , . . . , Rn , etc., where Rn consists of all column
vectors of length n, i.e.,
Rn = {x = (x1, . . . , xn)T, where x1, . . . , xn are in R}.
We can add two vectors, and we can multiply vectors by scalars, (i.e.,
real numbers). Thus, we can take linear combinations in Rn .
Examples:
R1 is the real line, R3 is the usual 3-dimensional space, and
R2 is represented by the x-y plane; the x and y co-ordinates are given
by the two components of the vector.
[Figure: the points (−2, 1), (0, 1), (−2, 0) and (1, 0) plotted in the x-y plane]

H. Ananthnarayan D1 - Lecture 7 18th January 2018 14 / 17


Vector Spaces: Examples

1 V = 0, the space consisting of only the zero vector.


2 V = Rn , the n-dimensional space.
3 V = R∞ , vectors with infinite number of components,
i.e., a sequence of real numbers, e.g., x = (1, 1, 2, 3, 5, 8, . . .),
with component-wise addition and scalar multiplication.
4 V = M, the set of 2 × 2 matrices. What are + and ∗?
Q: Is this the 'same' as R4 ?
5 V = C[0, 1], the set of continuous real-valued functions on the
closed interval [0, 1], e.g., x^2, e^x are vectors in V .
Q: Is 1/x a vector in V ? How about 1/(x − 2)?
Vector addition and scalar multiplication are pointwise:
(f + g)(x) = f (x) + g(x) and (a ∗ f )(x) = af (x).

H. Ananthnarayan D1 - Lecture 7 18th January 2018 15 / 17


Vector Spaces: Definition

Defn. A non-empty set V is a vector space if it is closed under vector


addition ( i.e., if x, y are in V, then x + y must be in V ) and scalar
multiplication, (i.e., if x is in V , a is in R, then a ∗ x must be in V ).
Equivalently, x, y in V , a, b in R ⇒ a ∗ x + b ∗ y must be in V .
• A vector space is a triple (V , +, ∗) with vector addition + and scalar
multiplication ∗
• The elements of V are called vectors and the scalars are chosen to
be real numbers (for now).
• If the scalars are allowed to be complex numbers, then V is a
complex vector space.

H. Ananthnarayan D1 - Lecture 7 18th January 2018 16 / 17


Vector Spaces: Definition continued
Let x, y and z be vectors, a and b be scalars. The vector addition and
scalar multiplication are also required to satisfy:
x + y = y + x Commutativity of addition
(x + y) + z = x + (y + z) Associativity of addition
There is a unique vector 0, such that x + 0 = x
Existence of additive identity
For each x, there is a unique −x such that x + (−x) = 0
Existence of additive inverse
1∗x =x Unit property
(a + b) ∗ x = a ∗ x + b ∗ x, a ∗ (x + y) = a ∗ x + a ∗ y
(ab) ∗ x = a ∗ (b ∗ x) Compatibility
Notation: For a scalar a, and a vector x, we denote a ∗ x by ax.

H. Ananthnarayan D1 - Lecture 7 18th January 2018 17 / 17


Vector Spaces: Definition

Defn. A non-empty set V is a vector space if it is closed under vector


addition ( i.e., if x, y are in V, then x + y must be in V ) and scalar
multiplication, (i.e., if x is in V , a is in R, then a ∗ x must be in V ).
Equivalently, x, y in V , a, b in R ⇒ a ∗ x + b ∗ y must be in V .
• A vector space is a triple (V , +, ∗) with vector addition + and scalar
multiplication ∗, with + and ∗ satisfying some additional properties
(see next slide).
• The elements of V are called vectors and the scalars are chosen to
be real numbers (for now).
• If the scalars are allowed to be complex numbers, then V is a
complex vector space.

H. Ananthnarayan D1 - Lecture 8 22nd January 2018 3/9


Vector Spaces: Definition continued
Let x, y and z be vectors, a and b be scalars. The vector addition and
scalar multiplication are also required to satisfy:
x + y = y + x Commutativity of addition
(x + y) + z = x + (y + z) Associativity of addition
There is a unique vector 0, such that x + 0 = x
Existence of additive identity
For each x, there is a unique −x such that x + (−x) = 0
Existence of additive inverse
1∗x =x Unit property
(a + b) ∗ x = a ∗ x + b ∗ x, a ∗ (x + y) = a ∗ x + a ∗ y
(ab) ∗ x = a ∗ (b ∗ x) Compatibility
Notation: For a scalar a, and a vector x, we denote a ∗ x by ax.

H. Ananthnarayan D1 - Lecture 8 22nd January 2018 4/9


Vector Spaces: Examples

1 V = 0, the space consisting of only the zero vector.


2 V = Rn , the n-dimensional space.
3 V = R∞ = sequences of real numbers, e.g.,
x = (0, 1, 0, 2, 0, 3, 0, 4, . . .), with component-wise addition and
scalar multiplication.
4 V = M, the set of m × n matrices, with entry-wise + and ∗.
5 V = P, the set of polynomials, e.g.
1 + 2x + 3x^2 + · · · + 2018x^2017, with term-wise + and ∗.
6 V = C[0, 1], the set of continuous real-valued functions on the
closed interval [0, 1], e.g., x^2, e^x are vectors in V .
Vector addition and scalar multiplication are pointwise:
(f + g)(x) = f (x) + g(x) and (a ∗ f )(x) = af (x).

H. Ananthnarayan D1 - Lecture 8 22nd January 2018 5/9


Subspaces: Definition and Examples

If V is a vector space, and W is a non-empty subset, then W is a


subspace of V if:
x, y in W , a, b in R ⇒ a ∗ x + b ∗ y is in W .
i.e., linear combinations stay in the subspace.
Examples:
1 {0}: The zero subspace and Rn itself.
2 {(x1 , x2 ) : x1 ≥ 0, x2 ≥ 0} is not a subspace of R2 . Why?
3 The line x − y = 1 is not a subspace of R2 . Why?
Exercise: A line not passing through the origin is not a subspace
of R2 .
4 The line x − y = 0 is a subspace of R2 . Why?
Exercise: Any line passing through the origin is a subspace of R2 .

H. Ananthnarayan D1 - Lecture 8 22nd January 2018 6/9


Subspaces: Examples

5. Let A be an m × n matrix.
The null space of A, N(A), is a subspace of Rn .
The column space of A, C(A), is a subspace of Rm .
Recall: They are both closed under linear combinations.
6. The set of 2 × 2 symmetric matrices is a subspace of M.
So is the set of 2 × 2 lower triangular matrices.
Is the set of invertible 2 × 2 matrices a subspace of M?
7. The set of convergent sequences is a subspace of R∞ .
What about the set of sequences convergent to 1?
8. The set of differentiable functions is a subspace of C[0, 1].
Is the same true for the set of functions integrable on [0, 1]?
9. - ? See the tutorial sheet for many more examples!
Exercise: A subspace must contain the 0 vector!

H. Ananthnarayan D1 - Lecture 8 22nd January 2018 7/9


Examples: Subspaces of R2

What are the subspaces of R2 ?


V = {(0, 0)T}.
V = R2 .
What if V is neither of the above?
Example: Suppose V contains a non-zero vector, say v = (−1, 1)T.
[Figure: the line L : x + y = 0 through the origin, with the multiples (−0.5)v, v and 1.5v of v marked on it]
V must contain the entire line L : x + y = 0, i.e., all multiples of v .

H. Ananthnarayan D1 - Lecture 8 22nd January 2018 8/9


Examples: Subspaces of R2
Let V be a subspace of R2 containing v1 = (−1, 1)T. Then V must
contain the entire line L : x + y = 0.
If V ≠ L, it contains a vector v2 , which is not a multiple of v1 , say v2 = (0, 1)T.
Observe: A = [v1 v2] = [−1 0; 1 1] has two pivots,
⇔ A is invertible.
⇔ A is invertible.
⇔ for any v in R2 , Ax = v is solvable,
⇔ v is in C(A),
⇔ v can be written as a linear combination of v1 and v2 .
⇒ v is in V , i.e., V = R2
To summarise: A subspace of R2 , which is non-zero, and not R2 , is a
line passing through the origin.

H. Ananthnarayan D1 - Lecture 8 22nd January 2018 9/9


Linear Span: Definition

Given a collection S = {v1 , v2 , . . . , vn } in a vector space V , the linear


span of S, denoted Span(S) or Span{v1 , . . . , vn },
is the set of all linear combinations of v1 , v2 , . . . , vn , i.e.,
Span(S) = {v = a1 v1 + · · · + an vn , for scalars a1 , . . . , an }.
Note:
1. If v1 , . . . , vn are in Rm , Span{v1 , . . . , vn } = C(A) for
A = [v1 · · · vn ], an m × n matrix. Thus
v is in Span{v1 , . . . , vn } ⇔ Ax = v is consistent.
2. Let {v1 , . . . , vn } be n vectors in Rn , A = [v1 · · · vn ].
Then A is invertible ⇔ A has n pivots ⇔ Ax = v is consistent for every
v in Rn ⇔ Span{v1 , . . . , vn } = Rn .
Example: Span{e1 , . . . , en } = Rn .

H. Ananthnarayan D1 - Lecture 9 23rd January 2018 3 / 12


Linear Span: Examples

Examples:
1 Span{0} = {0}.
2 If v 6= 0 is a vector, Span{v } = {av , for scalars a}.
Geometrically (in Rm ): Span{v } = the line in the direction of v
passing through the origin.
3 Span{(−1, 1)T, (0, 1)T} = R2 .
4 If A is m × n, then Span{A1 , . . . , An } = C(A).
5 If v1 , . . . , vk are the special solutions of A,
then Span {v1 , . . . , vk } = N(A).
Remark: All of the above are subspaces.
Exercise: Span(S) is a subspace of V .

H. Ananthnarayan D1 - Lecture 9 23rd January 2018 4 / 12


Linear Span: Examples
Example 1: Is v in Span{v1 , v2 , v3 , v4 }, where v = (1, 0, 4)T,
v1 = (1, 2, 3)T, v2 = (2, 4, 6)T, v3 = (3, 8, 7)T and v4 = (5, 12, 13)T?
Let A = [v1 · · · v4 ]. Recall Ax = [1 2 3 5; 2 4 8 12; 3 6 7 13] x = (b1, b2, b3)T is
solvable ⇔ 5b1 − b2 − b3 = 0.
⇒ v is not in Span{v1 , v2 , v3 , v4 }, and
w = (1, 0, 5)T = 4v1 + (−1)v3 is in it.
Observe: v2 = 2v1 and v4 = 2v1 + v3 . Hence v2 , v4 are in
Span{v1 , v3 }. Therefore, Span{v1 , v3 } = Span{v1 , v2 , v3 , v4 } = C(A) =
the plane P : (5x − y − z = 0).
Q: Is the span of two vectors in R3 always a plane?

H. Ananthnarayan D1 - Lecture 9 23rd January 2018 5 / 12


Linear Span: Examples

Example 2: Is v = (4, 3, 5)T in Span{v1 , v2 , v3 , v4 }, where
v1 = (2, 2, 2)T, v2 = (4, 5, 3)T, v3 = (6, 7, 5)T and v4 = (4, 6, 2)T?
If yes, write v as a linear combination of {v1 , v2 , v3 , v4 }.
Let A = [v1 · · · v4 ]. The question can be rephrased as:
Q: Is v in C(A), i.e., is Ax = v solvable? If yes, find a solution.
R2 ↦ R2 − R1 , R3 ↦ R3 − R1 , followed by R3 ↦ R3 + R2 , gives
[A|v] = [2 4 6 4 | a; 2 5 7 6 | b; 2 3 5 2 | c] → [U|w] = [2 4 6 4 | a; 0 1 1 2 | b − a; 0 0 0 0 | c + b − 2a]

H. Ananthnarayan D1 - Lecture 9 23rd January 2018 6 / 12


Linear Span: Examples
Ax = [2 4 6 4; 2 5 7 6; 2 3 5 2] x = (a, b, c)T is solvable ⇔ 2a − b − c = 0
⇒ v = (4, 3, 5)T is in Span{v1 , . . . , v4 },
(and, for example, (4, 3, 4)T is not in it).
Observe: C(A) is a plane!
Solve Ax = v : Convert U to the row reduced form R:
[U|w] = [2 4 6 4 | a; 0 1 1 2 | b − a; 0 0 0 0 | c + b − 2a] = [2 4 6 4 | 4; 0 1 1 2 | −1; 0 0 0 0 | 0]
→ [1 0 1 −2 | 4; 0 1 1 2 | −1; 0 0 0 0 | 0] by R1 ↦ R1 − 4R2 , R1 ↦ R1/2.
Particular solution: (4, −1, 0, 0)T and v = 4v1 + (−1)v2 .

H. Ananthnarayan D1 - Lecture 9 23rd January 2018 7 / 12


Linear Independence: Example

With v1 = (2, 2, 2)T, v2 = (4, 5, 3)T, v3 = (6, 7, 5)T and v4 = (4, 6, 2)T:
Observe: v3 = v1 + v2 and v4 = −2v1 + 2v2 . Hence v3 and v4 are in
Span{v1 , v2 }. Therefore, Span{v1 , v2 } = Span{v1 , v2 , v3 , v4 } = C(A) =
the plane P : (2x − y − z = 0).
Q: Is the span of two vectors in R3 always a plane?
A: Not always. If v is a multiple of w, then Span{v , w} = Span{w},
which is a line through the origin or zero.
Q: If v and w are not on the same line through the origin?
A: Yes. v , w are examples of linearly independent vectors.

H. Ananthnarayan D1 - Lecture 9 23rd January 2018 8 / 12


Linear Independence: Definition

The vectors v1 , v2 , . . . , vn in a vector space V , are linearly independent


if a1 v1 + a2 v2 + · · · + an vn = 0 ⇒ a1 = 0, a2 = 0, . . ., an = 0 .
Observe: When V = Rm , if A = [v1 v2 · · · vn ], then
v1 , v2 , . . . , vn are linearly independent ⇔
Ax = x1 v1 + x2 v2 + · · · + xn vn = 0 has only the trivial solution
⇔ N(A) = 0.
The vectors v1 , . . . , vn are linearly dependent if they are not linearly
independent. If V = Rm , this happens ⇔
Ax = [v1 · · · vn ] x = 0 has non-trivial solutions.

H. Ananthnarayan D1 - Lecture 9 23rd January 2018 9 / 12


Linear Independence: Remarks

Remarks/Examples:
1 The zero vector 0 is not linearly independent.
2 If v ≠ 0, then it is linearly independent.
3 v , w are not linearly independent ⇔ one is a multiple of the other
⇔ (for V = Rm ) they lie on the same line through the origin.
4 More generally, v1 , . . . , vn are not linearly independent ⇔ one of
the vi ’s can be written as a linear combination of the others, i.e., vi
is in Span{vj : j = 1, . . . , n, j ≠ i}.
5 Let A be m × n. Then rank(A) = n ⇔ N(A) = 0 ⇔ A∗1 , · · · , A∗n
are linearly independent.
In particular, if A is n × n, A is invertible ⇔ A1 , · · · , An are linearly
independent.
Example: e1 , . . . , en are linearly independent vectors in Rn .
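Remark 5 gives a mechanical independence test: stack the vectors as columns and compare the rank with the number of columns. The sketch below is an added illustration; the helper name `independent` is mine and NumPy is assumed.

```python
# Added sketch: columns are linearly independent iff rank(A) equals the number of columns.
import numpy as np

def independent(*vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

e1, e2, e3 = np.eye(3)
print(independent(e1, e2, e3))                                 # True
print(independent(np.array([1, 2, 3]), np.array([2, 4, 6])))   # False: one is a multiple of the other
```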

H. Ananthnarayan D1 - Lecture 9 23rd January 2018 10 / 12


Linear Independence: Example

Example: Are the vectors v1 , v2 , v3 , v4 linearly independent, where
v1 = (2, 2, 2)T, v2 = (4, 5, 3)T, v3 = (6, 7, 5)T and v4 = (4, 6, 2)T?
For A = [v1 · · · v4 ], the reduced form is R = [1 0 1 −2; 0 1 1 2; 0 0 0 0].
A has only 2 pivots ⇒ N(A) ≠ 0, so v1 , v2 , v3 , v4 are not independent.
A non-trivial linear combination which is zero is
(1)v1 + (1)v2 + (−1)v3 + (0)v4 , or (2)v1 + (−2)v2 + (0)v3 + (1)v4 .
More generally, if v1 , . . . , vn are vectors in Rm , then A = [v1 · · · vn ] is m × n.
If m < n, then rank(A) < n ⇒ N(A) ≠ 0. Thus
In Rm , any set with more than m vectors is linearly dependent.

H. Ananthnarayan D1 - Lecture 9 23rd January 2018 11 / 12


Summary: Vector Spaces, Span and Independence
• Vector space: A triple (V , +, ∗) which is closed under + and ∗
• Subspace: A non-empty subset W of V closed under linear
combinations.
• Span{v1 , . . . , vn }
= {v = a1 v1 + · · · + an vn , for scalars a1 , . . . , an }.
Let V = Rm , v1 , . . . , vn be in V , and A = [v1 · · · vn ].
• For v in V , v is in Span{v1 , . . . , vn } ⇔ Ax = v is consistent
• v1 , . . . , vn are linearly independent
⇔ N(A) = 0 ⇔ rank(A) = n.
• In particular, with n = m, A is invertible
⇔ Span{v1 , . . . , vn } = Rn ⇔ v1 , . . . , vn are linearly independent
⇔ N(A) = 0 ⇔ rank(A) = n.
• Any subset of Rm with more than m vectors is dependent.

H. Ananthnarayan D1 - Lecture 9 23rd January 2018 12 / 12


Summary: Vector Spaces, Span and Independence
• Vector space: A triple (V , +, ∗) which is closed under + and ∗ with
some additional properties satisfied by + and ∗.
• Subspace: A non-empty subset W of V closed under linear
combinations.
Let V = Rm , v1 , . . . , vn be in V , and A = v1 · · · vn .


• For v in V , v is in Span{v1 , . . . , vn } ⇔ Ax = v is consistent


• v1 , . . . , vn are linearly independent
⇔ N(A) = 0 ⇔ rank(A) = n.
• In particular, with n = m, A is invertible
⇔ Ax = v is consistent for every v
⇔ Span{v1 , . . . , vn } = Rn ⇔ rank(A) = n
⇔ N(A) = 0 ⇔ v1 , . . . , vn are linearly independent.
• Any subset of Rm with more than m vectors is dependent.

H. Ananthnarayan D1 - Lecture 10 25th January 2018 3/8


Minimal Spanning Set
Let v1 = (2, 2, 2)T, v2 = (4, 5, 3)T, v3 = (6, 7, 5)T and
v4 = (4, 6, 2)T. If A = (v1 v2 v3 v4 ), can C(A) = Span{v1 , v2 , v3 , v4 }
be spanned by fewer than 4 vectors?
be spanned by less than 4 vectors?
Note: v3 = v1 + v2 and v4 = −2v1 + 2v2 ⇒ C(A) = Span{v1 , v2 }.
Observe:
The span of only v1 or only v2 is a line. Clearly v1 is not on the line
spanned by v2 and vice versa.
Thus, {v1 , v2 } is a minimal spanning set for C(A).
v1 and v2 are linearly independent and span C(A).
If v is in C(A) = Span{v1 , v2 }, then v1 , v2 , v are linearly
dependent. Why?
Thus, {v1 , v2 } is a maximal linearly independent set in C(A).
Any such set of vectors gives a basis of C(A).

H. Ananthnarayan D1 - Lecture 10 25th January 2018 4/8


Basis: Definition
Defn. A subset B of a vector space V , is said to be a basis of V , if it is
linearly independent and Span(B) = V .
Theorem: For any subset S of a vector space V , the following are
equivalent:
• S is a maximal linearly independent set in V
• S is linearly independent and Span(S) = V .
• S is a minimal spanning set of V .
Note: Every vector space V has a basis.
Examples:
• By convention, the empty set is a basis for V = {0}.
• {(−1, 1)T, (0, 1)T} is a basis for R2 .
• {e1 , . . . , en } is a basis for Rn , called the standard basis.
• A basis of R is just {1}.

H. Ananthnarayan D1 - Lecture 10 25th January 2018 5/8


Basis: Remarks

• Let B = {v1 , . . . , vn } be a basis for V and v a vector in V .


Span(B) = V ⇒ v = a1 v1 + · · · + an vn for scalars a1 , . . . , an .
Linear independence ⇒ this expression for v is unique. Thus

Every v ∈ V can be uniquely written


as a linear combination of {v1 , . . . , vn }.

Exercise: Prove this.


Q: Is the basis of a vector space unique?
A: No.
e.g. {e1 , e2 } is a basis for R2 , so is {(−1, 1)T, (0, 1)T}, and so are
the columns of any 2 × 2 invertible matrix.
Exercise: Find two different bases of R3 .
The number of vectors in each basis of R3 is 3. Not a coincidence!

H. Ananthnarayan D1 - Lecture 10 25th January 2018 6/8


Dimension of a Vector Space
If v1 , . . . , vm and w1 , . . . , wn are both bases of V , then m = n. This is
called the dimension of V . Thus
dim(V ) = number of elements in a basis of V .

Exercise: Prove that every basis of R3 has only three elements.


Examples:
• dim({0}) = 0. • dim(Rn ) = n.
• If L is a line through origin in R3 , what is its dimension as a vector
space? Recall L = {tu | t ∈ R} where u is some vector in R3 . Thus
dim(L) = 1.
• The dimension of a plane P in R3 is 2. Why?
• A basis for C as a vector space over the scalars R is {1, i}.
A basis for C as a vector space over the scalars C is {1},
i.e., dim(C) = 2 as an R-vector space and 1 as a C-vector space.
Thus, dimension depends on the choice of scalars!

H. Ananthnarayan D1 - Lecture 10 25th January 2018 7/8


Dimension and Basis

Let dim (V ) = n, S = {v1 , . . . , vk } ⊆ V .


Recall: A basis is a minimal spanning set.
In particular, if Span(S) = V , then k ≥ n, and S contains a basis of V ,
i.e., there exist {vi1 , . . . , vin } ⊆ S which is a basis of V .
Example: The columns of a 3 × 4 matrix A with 3 pivots span R3 .
Hence the columns contain a basis of R3 .
Recall: A basis is a maximal linearly independent set.
In particular, if S is linearly independent, then k ≤ n, and S can be
extended to a basis of V , i.e., there exist w1 , . . . , wn−k in V such that
{v1 , . . . , vk , w1 , . . . , wn−k } is a basis of V .
Example: A 3 × 2 matrix A with 2 pivots has linearly independent
columns, which can hence be extended to a basis of R3 .

H. Ananthnarayan D1 - Lecture 10 25th January 2018 8/8


Recall: Basis and Dimension
• A basis of a vector space V is a linearly independent subset B which
spans V .
• A basis is a maximal linearly independent subset of V
⇒ any linearly independent subset in V can be extended to a basis of
V.
• A basis is a minimal spanning set of V
⇒ every spanning set of V contains a basis.
• The number of elements in each basis is the same,
and the dimension of V ,
dim(V ) = number of elements in a basis of V .
• B = {v1 , . . . , vn } is a basis for V ⇔ every v ∈ V can be uniquely
written as a linear combination of {v1 , . . . , vn }.
• dim (Rn ) = n, and the set B = {v1 , . . . , vn } ⊆ Rn is a basis of Rn ⇔
A = v1 · · · vn is invertible.

H. Ananthnarayan D1- Lecture 11 29th January 2018 3 / 13


Example: A basis for R2
Pick v1 6= 0. Choose v2 , not a multiple of v1 . For any v in R2 , there are
unique scalars a and b such that v = av1 + bv2 .
e.g., pick v1 = (1, −1)T , v2 = (0, 1)T , and let v = (1, 1)T .
Thus the lines a = 0 and b = 0 give a set of axes for R2 , and v = v1 + 2v2 .
With this basis B = {v1 , v2 }, the coordinates of v will be [v]B = (1, 2)T.
[Figure: the grid of lines a = constant and b = constant determined by the basis B, with v1 , v2 and v marked]

H. Ananthnarayan D1- Lecture 11 29th January 2018 4 / 13


Basis and Coordinates
A basis for M2×2 , the vector space of 2 × 2 matrices, is
B = {e11 , e12 , e21 , e22 }, where
e11 = [1 0; 0 0], e12 = [0 1; 0 0], e21 = [0 0; 1 0], e22 = [0 0; 0 1].
(Check this!) Hence dim(M2×2 ) = 4.
Every 2 × 2 matrix A = (aij ) can be written as
A = a11 e11 + a12 e12 + a21 e21 + a22 e22 .
For this fixed basis B, the coordinate vector of A with respect to B,
denoted
[A]B = (a11 , a12 , a21 , a22 )T
completely determines the matrix A.
Since dim (M2×2 ) = 4, once we fix a basis, we will need 4 coordinates
to describe each matrix.
Exercise: Find two bases and the dimension of Mm×n , the vector
space of m × n matrices.

H. Ananthnarayan D1- Lecture 11 29th January 2018 5 / 13


Coordinate Vectors

1 Consider the basis B = {v1 = (1, −1)T , v2 = (0, 1)T } of R2 , and
v = (1, 1)T . Note that v = 1v1 + 2v2 . Hence, the coordinate vector
of v w.r.t. B is [v]B = (1, 2)T.
2 Exercise: Show that B1 = {1, x, x^2} is a basis of P2 .
The coordinate vector of v = 2x^2 − 3x + 1 w.r.t. B1 is
[v]B1 = (1, −3, 2)T .
3 Exercise: Show that B' = {1, (x − 1), (x − 1)^2, (x − 1)^3} is a basis
of P3 . Hint: Taylor expansion.
Then [x^3]B' = ( , , , )T .
Observe: To write the coordinates, we have to fix a basis B, with a
fixed order of elements in it!

H. Ananthnarayan D1- Lecture 11 29th January 2018 6 / 13


The Four Fundamental Subspaces

Let A be an m × n matrix. Associated to A, we have four fundamental


subspaces:
• The column space of A: C(A) = {v : Ax = v is consistent} ⊆ Rm .
• The null space of A: N(A) = {x : Ax = 0} ⊆ Rn .
• The row space of A = Span{A1∗ , . . . , Am∗ } = C(AT ) ⊆ Rn .
• The left null space of A = {x : x T A = 0} = N(AT ) ⊆ Rm .
Q: Why are the row space and the left null space subspaces?
Let U be the echelon form of A, and R its reduced form.
• Recall, N(A) = N(U) = N(R).
Observe: The rows of U (and R) are linear combinations of the rows of
A, and vice versa ⇒ their row spaces are the same, i.e.,
• C(AT ) = C(U T ) = C(R T ).
We now compute bases and dimensions for these special subspaces.

H. Ananthnarayan D1- Lecture 11 29th January 2018 7 / 13


The Big Four: An Example

Let A = [2 4 6 4; 2 5 7 6; 2 3 5 2]. Find the four fundamental subspaces of A,
their bases and dimensions.
Recall: The reduced form of A is R = [1 0 1 −2; 0 1 1 2; 0 0 0 0].
• The 1st and 2nd columns are pivot columns ⇒ rank(A) = 2.
• v = (a, b, c)T is in C(A) ⇔ Ax = v is solvable ⇔ 2a − b − c = 0.
• We can compute special solutions to Ax = 0. The number of special
solutions to Ax = 0 is the number of free variables.

H. Ananthnarayan D1- Lecture 11 29th January 2018 8 / 13


The Big Four: N(A)
For A = [2 4 6 4; 2 5 7 6; 2 3 5 2], the reduced form is R = [1 0 1 −2; 0 1 1 2; 0 0 0 0].
N(A) = {(a, b, c, d)T = (−c + 2d, −c − 2d, c, d)T = c(−1, −1, 1, 0)T + d(2, −2, 0, 1)T}
= Span{w1 = (−1, −1, 1, 0)T, w2 = (2, −2, 0, 1)T}.
w1 , w2 are linearly independent (Why?)
⇒ B = {w1 , w2 } forms a basis for N(A) ⇒ dim(N(A)) = 2.

A basis for N(A) is the set of special solutions.


dim(N(A)) = no. of free variables = no. of variables - rank(A)

Exercise: Show that w = (−3, −7, 5, 1)T is in N(A). What is [w]B ?

H. Ananthnarayan D1- Lecture 11 29th January 2018 9 / 13


The Big Four: C(A)
For A = [2 4 6 4; 2 5 7 6; 2 3 5 2], the reduced form is R = [1 0 1 −2; 0 1 1 2; 0 0 0 0].
Write A = [v1 v2 v3 v4 ] and R = [w1 w2 w3 w4 ].
Recall: Relations between the column vectors of A are the same as
the relations between column vectors of R.
⇒ Ax = v3 has the same solutions as Rx = w3 , and
Ax = v4 has the same solutions as Rx = w4 .
Particular solutions are (1, 1, 0, 0)T and (−2, 2, 0, 0)T respectively ⇒
v3 = v1 + v2 , v4 = −2v1 + 2v2 .
Observe:
• v1 and v2 correspond to the pivot columns of A.
• {v1 , v2 } are linearly independent. Why?
• C(A) = Span{v1 , . . . , v4 } = Span{v1 , v2 }.
Thus B = {v1 , v2 } is a basis of C(A). Q: What is [vi ]B ?

H. Ananthnarayan D1- Lecture 11 29th January 2018 10 / 13


The Big Four: Rank-Nullity Theorem
More generally, for an m × n matrix A,
• Let rank(A) = r . The r pivot columns are linearly independent since
their reduced form contains an r × r identity matrix.
• For each non-pivot column A∗j of A, find a particular solution of
Ax = A∗j . Use this to write A∗j as a linear combination of the pivot
columns. Thus

A basis for C(A) is given by the pivot columns of A.

dim(C(A)) = no. of pivot variables = rank(A).

Rank-Nullity Theorem: Let A be an m × n matrix. Then

dim(C(A)) + dim(N(A)) = no. of variables = n
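The theorem can be checked on the running example by counting pivot columns and special solutions; the sketch below is an added illustration assuming SymPy.

```python
# Added sketch: dim C(A) + dim N(A) = n for the example matrix.
import sympy as sp

A = sp.Matrix([[2, 4, 6, 4],
               [2, 5, 7, 6],
               [2, 3, 5, 2]])

rank = A.rank()                  # dim C(A): number of pivot columns
nullity = len(A.nullspace())     # dim N(A): number of special solutions
print(rank, nullity, A.cols)     # 2 2 4
print(rank + nullity == A.cols)  # True
```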

H. Ananthnarayan D1- Lecture 11 29th January 2018 11 / 13


The Big Four: C(AT)
Recall: If A = [2 4 6 4; 2 5 7 6; 2 3 5 2], then R = [1 0 1 −2; 0 1 1 2; 0 0 0 0].
Recall: R is obtained from A by taking non-zero scalar multiples of
rows and their sums ⇒ C(R T ) = C(AT ).
Observe: The non-zero rows of R will span C(AT ), and they contain
an identity submatrix ⇒ they are linearly independent.
Thus, the non-zero rows of R form a basis for C(R T ) = C(AT ).
Exercise: Give two different bases for C(AT ).
Since the number of non-zero rows of R = number of pivots of A, we
have:
dim C(AT )= no. of pivots of A = rank(A).

• Recall that dim C(AT ) = rank(AT ). Thus,

rank(AT ) = dim (C(AT )) = rank(A)

H. Ananthnarayan D1- Lecture 11 29th January 2018 12 / 13


The Big Four: N(AT )

The no. of columns of AT is m.


By Rank-Nullity Theorem, rank(AT )+ dim(N(AT )) = m.
Hence:
dim(N(AT )) = m− rank(A).
Exercise: Complete the example by finding a basis for N(AT ).
A = [2 4 6 4; 2 5 7 6; 2 3 5 2], reduced form R = [1 0 1 −2; 0 1 1 2; 0 0 0 0].
Q. Can you use R to compute the basis for N(AT )? Why not?
A. Need the reduced form of AT , which is [1 0 2; 0 1 −1; 0 0 0; 0 0 0].

H. Ananthnarayan D1- Lecture 11 29th January 2018 13 / 13


Matrices as Transformations: Examples
Let A = [1 0; 0 −1]. Then A(x1 , x2 )T = (x1 , −x2 )T. Let x = (2, 1)T .
What is Ax? How does A transform x?
A reflects vectors across the X -axis.
Let B = [0 1; 1 0]. Then B(x1 , x2 )T = (x2 , x1 )T. If x = (−1, 0.5)T , then
Bx = (0.5, −1)T . How does B transform x?
B reflects vectors across the line x1 = x2 .
[Figure: (2, 1) and its reflection (2, −1) across the X-axis; (−1, 0.5) and its reflection (0.5, −1) across the line x1 = x2]
Q: Do reflections preserve scalar multiples? Sums of vectors?
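One way to answer the question: apply the reflection matrices from the slide to concrete vectors and compare the images of sums and scalar multiples. This is an added sketch assuming NumPy.

```python
# Added sketch: the two reflections from the slide, and a check that they
# preserve sums and scalar multiples.
import numpy as np

A = np.array([[1, 0],
              [0, -1]])   # reflection across the X-axis
B = np.array([[0, 1],
              [1, 0]])    # reflection across the line x1 = x2

x = np.array([2.0, 1.0])
y = np.array([-1.0, 0.5])
print(A @ x)              # [ 2. -1.]
print(B @ y)              # [ 0.5 -1. ]
print(np.allclose(A @ (x + y), A @ x + A @ y))   # sums are preserved
print(np.allclose(B @ (3 * x), 3 * (B @ x)))     # scalar multiples are preserved
```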


H. Ananthnarayan D1- Lecture 12 30th January 2018 3 / 11
Matrices as Transformations: Examples
• P = [1/2 1/2; 1/2 1/2] transforms x = (x1 , x2 )T to Px = ((x1 + x2)/2, (x1 + x2)/2)T.
Pe1 = (1/2, 1/2)T = Pe2 . P transforms the vector (−1, 1)T to the origin.
[Figure: e1 , e2 and their common image Pe1 = Pe2 on the line x1 = x2 ; (−1, 1) is sent to (0, 0)]
Q: Geometrically, how is P transforming the vectors?
A: Projects onto the line x1 = x2 .
Q: What happens to sums of vectors when you project them?
What about scalar multiples?
Exercise: Understand the effect of P = [1 0; 0 0] on e1 and e2 and
interpret what P represents geometrically.

H. Ananthnarayan D1- Lecture 12 30th January 2018 4 / 11


Matrices as transformations: Examples
Let Q = [1/√2 −1/√2; 1/√2 1/√2] = [cos(45°) −sin(45°); sin(45°) cos(45°)].
How does Q transform the standard basis vectors e1 and e2 ?
[Figure: Qe1 = (1/√2, 1/√2)T and Qe2 = (−1/√2, 1/√2)T, obtained by rotating e1 and e2]
Q: What does the transformation x = (x1 , x2 )T ↦ Qx represent
geometrically?
Again note that rotations map sum of vectors to sum of their images
and scalar multiple of a vector to scalar multiple of its image.

H. Ananthnarayan D1- Lecture 12 30th January 2018 5 / 11


Matrices as Transformations
• An m × n matrix A transforms a vector x in Rn into the vector Ax in
Rm . Thus T (x) = Ax defines a function T : Rn → Rm .
• The domain of T is . The codomain of T is .
• Let b ∈ Rm . Then b is in C(A) ⇔ Ax = b is consistent ⇔ T (x) = b,
i.e., b is in the image (or range) of T .
Hence, the range of T is .
Example: Let A = [2 4 6 4; 2 5 7 6; 2 3 5 2]. Then T (x) = Ax is a function with
domain R4 , codomain R3 , and range equal to
C(A) = {(a, b, c)T | 2a − b − c = 0} ⊆ R3 .


Q: How does T transform sums and scalar multiples of vectors?
A. Nicely! For scalars a and b, and vectors x and y ,
T (ax + by) = A(ax + by) = aAx + bAy = aT (x) + bT (y). Thus

T takes linear combinations to linear combinations.

H. Ananthnarayan D1- Lecture 12 30th January 2018 6 / 11


Linear Transformations
Defn. Let V and W be vector spaces. A linear transformation from V
to W is a function T : V → W which takes linear combinations of
vectors in V to the linear combinations of their images, i.e., for
x, y ∈ V , scalars a and b,
T (ax + by ) = aT (x) + bT (y )
• The image (or range) of T is defined to be
C(T ) = {y ∈ W | T (x) = y for some x ∈ V }.
• The kernel (or null space) of T is defined as
N(T ) = {x ∈ V | T (x) = 0}.
Main Example:
Let A be an m × n matrix. Define T (x) = Ax.
• This defines a linear transformation T : Rn → Rm .
• The image of T is the column space of A, i.e., C(T ) = C(A).
• The kernel of T is the null space of A, i.e., N(T ) = N(A).

H. Ananthnarayan D1- Lecture 12 30th January 2018 7 / 11


Linear Transformations: Examples

Which of the following functions are linear transformations?


• g : R3 → R3 defined as g(x1 , x2 , x3 )T = (x1 , x2 , 0)T :
ag(x) + bg(y) = a(x1 , x2 , 0)T + b(y1 , y2 , 0)T = (ax1 + by1 , ax2 + by2 , 0)T
= g(ax + by ), so g is a linear transformation.
Exercise: Find N(g) and C(g).
• h : R3 → R3 defined as h(x1 , x2 , x3 )T = (x1 , x2 , 5)T .
Note: h(0 + 0) ≠ h(0) + h(0).
Observe: A linear transformation must map 0 ∈ V to 0 ∈ W .

H. Ananthnarayan D1- Lecture 12 30th January 2018 8 / 11


Linear Transformations: Examples

• f : R2 → R4 defined by f (x1 , x2 )T = (x1 , 0, x2 , x2^2)T .
Note: f transforms the Y -axis in R2 to {(0, 0, y, y^2)T | y ∈ R}.
Observe: A linear transformation must transform a subspace of V into
a subspace of W .
• S : M2×2 → R4 defined by S([a b; c d]) = (a, b, c, d)T is a linear
transformation.
Observe: The function S is onto ⇒ C(S) = R4 ,
and S(A) = S(B) ⇒ A = B , i.e., the function S is one-one. In
particular, N(S) = {0}.

H. Ananthnarayan D1- Lecture 12 30th January 2018 9 / 11


Linear Transformations: Examples
Show that the following functions are linear transformations.
T : R∞ → R∞ defined by T (x1 , x2 , . . .) = (x1 + x2 , x2 + x3 , . . .).
Exercise: What is N(T )?
S : R∞ → R∞ defined by S(x1 , x2 , . . .) = (x2 , x3 , . . .).
Exercise: Find C(S), and a basis of N(S).
Let T : P2 → P1 be T (a0 + a1 x + a2 x^2) = a1 + 4a2 x.
Exercise: Show that dim (N(T )) = 1, and find C(T ).
Let D : C ∞ ([0, 1]) → C ∞ ([0, 1]) be defined as Df = df/dx.
Exercise: Is D 2 = D ◦ D linear? What about D 3 ?
Exercise: What is N(D)? N(D 2 )? N(D k )?
Question: Is integration linear?
Observe: Images and null spaces are subspaces!
Of which vector space?

H. Ananthnarayan D1- Lecture 12 30th January 2018 10 / 11


Properties of Linear transformations
Let B = {v1 , . . . , vn } ⊆ V , T : V → W be linear. Then:
• T takes linear combinations to linear combinations.
In particular, T (0) = 0.
• N(T ) is a subspace of V . Why? C(T ) is a subspace of W . Why?
• If Span(B) = V , is Span{T (v1 ), . . . , T (vn )} = W ?
Observe: Span{T (v1 ), . . . , T (vn )} = C(T ). Why?
Conclusion: (i) If dim (V ) = n, then dim (C(T )) ≤ n.
(ii) T is onto ⇔ Span{T (v1 ), . . . , T (vn )} = C(T ) = W .
• T (u) = T (v ) ⇔ u − v ∈ N(T ).
Conclusion: T is one-one ⇔ N(T ) = 0.
• If B ⊆ V is linearly independent, is {T (v1 ), . . . , T (vn )} ⊆ W linearly
independent?
Hint: a1 T (v1 ) + · · · + an T (vn ) = 0 ⇒ a1 v1 + · · · + an vn ∈ N(T ).
• If S : U → V , T : V → W are linear, then the composition
T ◦ S : U → W is linear. Exercise: Show that N(S) ⊆ N(T ◦ S). How
are C(T ◦ S) and C(T ) related?
H. Ananthnarayan D1- Lecture 12 30th January 2018 11 / 11
Properties of Linear transformations
Let B = {v1 , . . . , vn } ⊆ V , T : V → W be linear. Then:
• T takes linear combinations to linear combinations.
In particular, T (0) = 0.
• N(T ) is a subspace of V . Why? C(T ) is a subspace of W . Why?
• If Span(B) = V , is Span{T (v1 ), . . . , T (vn )} = W ?
Observe: Span{T (v1 ), . . . , T (vn )} = C(T ). Why?
Conclusion: (i) If dim (V ) = n, then dim (C(T )) ≤ n.
(ii) T is onto ⇔ Span{T (v1 ), . . . , T (vn )} = C(T ) = W .
• T (u) = T (v ) ⇔ u − v ∈ N(T ).
Conclusion: T is one-one ⇔ N(T ) = 0.
• If B ⊆ V is linearly independent, is {T (v1 ), . . . , T (vn )} ⊆ W linearly
independent?
Hint: a1 T (v1 ) + · · · + an T (vn ) = 0 ⇒ a1 v1 + · · · + an vn ∈ N(T ).
• If S : U → V , T : V → W are linear, then the composition
T ◦ S : U → W is linear. Exercise: Show that N(S) ⊆ N(T ◦ S). How
are C(T ◦ S) and C(T ) related?
H. Ananthnarayan D1- Lecture 13 1st February 2018 3/9
Isomorphism of vector spaces
A linear map T : V → W is an isomorphism if T is one-one and onto,
i.e., T is a linear bijection. Notation: V ≅ W .
Q: If T : V → W is an isomorphism, is T −1 : W → V linear?
Recall: T is one-one ⇔ N(T ) = 0 and T is onto ⇔ C(T ) = W .
Thus T is an isomorphism ⇔ N(T ) = 0 and C(T ) = W .
Example: If V is the subspace of convergent sequences in R∞ , then
L : V → R given by L(x1 , x2 , . . .) = lim_{n→∞} xn is linear.
What is N(L)? C(L)? Is L one-one or onto?
Exercise: Given A ∈ Mm×n , let T (x) = Ax for x ∈ Rn .
Then T is an isomorphism ⇔ m = n and A is invertible.
Exercise: In the previous examples, identify linear maps which are
one-one, and those which are onto.
Example: S([a b; c d]) = (a, b, c, d)T is an isomorphism since
N(S) = 0 and C(S) = R4 . Thus M2×2 ≅ R4 . What is S −1 ?
H. Ananthnarayan D1- Lecture 13 1st February 2018 4/9
Linear Maps and Basis

• Consider S : M2×2 → R4 given by S([a b; c d]) = (a, b, c, d)T .
Recall that {e11 , e12 , e21 , e22 } is a basis of M2×2 such that
A = [a b; c d] = ae11 + be12 + ce21 + de22 .
Observe that S(e11 ) = e1 , S(e12 ) = e2 , S(e21 ) = e3 , S(e22 ) = e4 .
Thus, S(A) = aS(e11 ) + bS(e12 ) + cS(e21 ) + dS(e22 )
= ae1 + be2 + ce3 + de4 = (a, b, c, d)T .
General case:
If {v1 , . . . , vn } is a basis of V , T : V → W is linear, v ∈ V , then
v = a1 v1 + · · · + an vn ⇒ T (v ) = a1 T (v1 ) + · · · + an T (vn ). Why? Thus,
T is determined by its action on a basis.

H. Ananthnarayan D1- Lecture 13 1st February 2018 5/9


Finite-dimensional Vector Spaces

Important Observation:
Let dim (V ) = n, and B = {v1 , . . . , vn } be a basis of V .
Define T : V → Rn by T (vi ) = ei .
e.g., If v = v1 + vn , then T (v ) =?
If v = 3v2 − 5v3 , then T (v ) =?
If v = a1 v1 + · · · + an vn , then T (v ) =?
Thus T (v ) = [v ]B .
What is N(T )? What is C(T )?
Conclusion: If dim (V) = n, then V ≅ Rn .
What is the isomorphism?
How many such isomorphisms can you construct?
Exercise: Find 3 isomorphisms each from P3 and M2×2 to R4 .

H. Ananthnarayan D1- Lecture 13 1st February 2018 6/9


Linear maps from Rn to Rm
Example: T (e1 ) = (3, 1)T, T (e2 ) = (2, −1)T, T (e3 ) = (−5, 0)T
defines a linear map T : R3 → R2 .
If x = (x1 , x2 , x3 )T , then T (x) = T (x1 e1 + x2 e2 + x3 e3 ) =
x1 T (e1 ) + x2 T (e2 ) + x3 T (e3 ) = x1 (3, 1)T + x2 (2, −1)T + x3 (−5, 0)T, i.e.,
T (x) = Ax, where A = [3 2 −5; 1 −1 0]. Question: A∗j = T (ej ).
General case: If T : Rn → Rm is linear, then
for x = (x1 , . . . , xn )T in Rn ,
T (x) = x1 T (e1 ) + · · · + xn T (en ) = Ax,
where A = [T (e1 ) · · · T (en )] ∈ Mm×n , i.e., A∗j = T (ej ).
Defn. A is called the standard matrix of T . Thus
Linear transformations from Rn to Rm
are in one-one correspondence with m × n matrices.
Q: Can you imitate this if V and W are not Rn and Rm ? Think!
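The recipe A∗j = T (ej ) is easy to carry out numerically for the example map above; the sketch below is an added illustration assuming NumPy.

```python
# Added sketch: build the standard matrix of the example map T : R^3 -> R^2
# by stacking the images T(e_j) as columns.
import numpy as np

def T(x):
    x1, x2, x3 = x
    return np.array([3 * x1 + 2 * x2 - 5 * x3, x1 - x2])   # T(e1)=(3,1), T(e2)=(2,-1), T(e3)=(-5,0)

A = np.column_stack([T(e) for e in np.eye(3)])   # columns are T(e_j)
print(A)                                         # [[ 3.  2. -5.] [ 1. -1.  0.]]

x = np.array([1.0, 2.0, 3.0])
print(np.allclose(T(x), A @ x))                  # True: T(x) = Ax
```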
H. Ananthnarayan D1- Lecture 13 1st February 2018 7/9
Matrix Associated to a Linear Map: Example
S : P2 → P1 given by S(a0 + a1 x + a2 x 2 ) = a1 + 4a2 x is linear.
Question: Is there a matrix associated to S?
Expected size: 2 × 3. Why?
I DEA : Construct an associated linear map R3 → R2 .
Use coordinate vectors! Fix bases B = {1, x, x 2 } of P2 , and C = {1, x}
of P1 to do this.
Identify f = a0 + a1 x + a2 x 2 ∈ P2 with [f ]B = (a0 , a1 , a2 )T ∈ R3 ,
and S(f ) ∈ P1 with [S(f )]C = (a1 , 4a2 )T ∈ R2 .
The associated linear map S' : R3 → R2 is defined by
S' (a0 , a1 , a2 )T = (a1 , 4a2 )T , i.e., S' ([f ]B ) = [S(f )]C , i.e.,
S' is defined by S' (e1 ) = (0, 0)T , S' (e2 ) = (1, 0)T , S' (e3 ) = (0, 4)T ⇒
the standard matrix of S' is A = [0 1 0; 0 0 4].
Q: How is A related to S?
Observe: A∗1 =[S(1)]C , A∗2 = [S(x)]C , A∗3 = [S(x 2 )]C .

H. Ananthnarayan D1- Lecture 13 1st February 2018 8/9


Matrix Associated to a Linear Map
Example: The matrix of S(a0 + a1 x + a2 x^2) = a1 + 4a2 x, w.r.t. the
bases B = {1, x, x^2} of P2 and C = {1, x} of P1 is A = [0 1 0; 0 0 4],
and A∗1 = [S(1)]C , A∗2 = [S(x)]C , A∗3 = [S(x^2)]C .
General Case: If T : V → W is linear, then the matrix of T w.r.t. the
ordered bases B = {v1 , . . . , vn } of V , and C = {w1 , . . . , wm } of W ,
denoted [T ]^B_C , is
A = ([T (v1 )]C · · · [T (vn )]C ) ∈ Mm×n .
Example: Projection onto the line x1 = x2 :
P(x1 , x2 )T = ((x1 + x2)/2, (x1 + x2)/2)T has standard matrix [1/2 1/2; 1/2 1/2].
This is the matrix of P w.r.t. the standard basis.
Q: What is [P]^B_B where B = {(1, 1)T , (−1, 1)T }?
Conclusion: The matrix of a transformation depends on the chosen
basis. Some are better than others!
H. Ananthnarayan D1- Lecture 13 1st February 2018 9/9
Summary: Linear transformations

Let T : V → W be linear.
• The nullspace of T , N(T ), is a subspace of V .
Its range, C(T ) is a subspace of W .
• T is one-one ⇔ N(T ) = 0, and T is onto ⇔ C(T ) = W . In particular,
T is an isomorphism ⇔ N(T ) = 0 and C(T ) = W .
• If B = {v1 , . . . , vn } is a basis of V , and v = a1 v1 + · · · + an vn , then
T (v ) = a1 T (v1 ) + · · · + an T (vn ). Thus,
T is determined by its action on a basis.
In particular, T : V → Rn defined by T (vi ) = ei , i.e., T (v ) = [v ]B , is an
isomorphism. Thus, if dim (V) = n, then V ≅ Rn .
• Given A ∈ Mm×n , T (x) = Ax defines a linear map
T : Rn → Rm . Conversely, if T : Rn → Rm is linear, then T (x) = Ax,
where A = [T (e1 ) · · · T (en )] ∈ Mm×n , the standard matrix of T .

H. Ananthnarayan D1- Lecture 14 5th February 2018 3 / 14


Matrix Associated to a Linear Map: Example
S : P2 → P1 given by S(a0 + a1 x + a2 x 2 ) = a1 + 4a2 x is linear.
Question: Is there a matrix associated to S?
Expected size: 2 × 3. Why?
I DEA : Construct an associated linear map R3 → R2 .
Use coordinate vectors! Fix bases B = {1, x, x 2 } of P2 , and C = {1, x}
of P1 to do this.
Identify f = a0 + a1 x + a2 x 2 ∈ P2 with [f ]B = (a0 , a1 , a2 )T ∈ R3 ,
and S(f ) ∈ P1 with [S(f )]C = (a1 , 4a2 )T ∈ R2 .
The associated linear map S' : R3 → R2 is defined by
S' (a0 , a1 , a2 )T = (a1 , 4a2 )T , i.e., S' ([f ]B ) = [S(f )]C , i.e.,
S' is defined by S' (e1 ) = (0, 0)T , S' (e2 ) = (1, 0)T , S' (e3 ) = (0, 4)T ⇒
the standard matrix of S' is A = [0 1 0; 0 0 4].
Q: How is A related to S?
Observe: A∗1 =[S(1)]C , A∗2 = [S(x)]C , A∗3 = [S(x 2 )]C .

H. Ananthnarayan D1- Lecture 14 5th February 2018 4 / 14


Matrix Associated to a Linear Map
Example: The matrix of S(a0 + a1 x + a2 x^2) = a1 + 4a2 x, w.r.t. the
bases B = {1, x, x^2} of P2 and C = {1, x} of P1 is A = [0 1 0; 0 0 4],
and A∗1 = [S(1)]C , A∗2 = [S(x)]C , A∗3 = [S(x^2)]C .
General Case: If T : V → W is linear, then the matrix of T w.r.t. the
ordered bases B = {v1 , . . . , vn } of V , and C = {w1 , . . . , wm } of W ,
denoted [T ]^B_C , is
A = ([T (v1 )]C · · · [T (vn )]C ) ∈ Mm×n .
Example: Projection onto the line x1 = x2 :
P(x1 , x2 )T = ((x1 + x2)/2, (x1 + x2)/2)T has standard matrix [1/2 1/2; 1/2 1/2].
This is the matrix of P w.r.t. the standard basis.
Q: What is [P]^B_B where B = {(1, 1)T , (−1, 1)T }?
Conclusion: The matrix of a transformation depends on the chosen
basis. Some are better than others!
H. Ananthnarayan D1- Lecture 14 5th February 2018 5 / 14
Eigenvalues and Eigenvectors: Motivation
• Solve the differential equation for u: du/dt = 3u.
The solution is u(t) = c e^{3t}, c ∈ R.
With initial condition u(0) = 2, the solution is u(t) = 2e^{3t}.
• Consider the system of linear 1st order differential equations (ODE)
with constant coefficients:
du1 /dt = 4u1 − 5u2 ,
du2 /dt = 2u1 − 3u2 ,
How does one find the solution?
• Write the system in matrix form du/dt = Au,
where u(t) = (u1 (t), u2 (t))T, A = [4 −5; 2 −3].
• Assuming the solution is u(t) = (u1 (t), u2 (t))T = e^{λt} v = (e^{λt} x, e^{λt} y)T, where
v = (x, y)T ∈ R2 , we need to find λ and v .
H. Ananthnarayan D1 - Lecture 16 8th February 2018 13 / 20
Eigenvalues and Eigenvectors: Definition
We have u1' = 4u1 − 5u2 , u2' = 2u1 − 3u2 , where
u1 (t) = e^{λt} x, u2 (t) = e^{λt} y:
λ e^{λt} x = 4e^{λt} x − 5e^{λt} y,
λ e^{λt} y = 2e^{λt} x − 3e^{λt} y.
Cancelling e^{λt}, we get
Eigenvalue problem: Find λ and v = (x, y )T satisfying
4x − 5y = λx,
2x − 3y = λy .
In the matrix form, it is Av = λv .
This equation has two unknowns, λ and v .
If there exists a λ such that Av = λv has a non-zero solution v ,
then λ is called an eigenvalue of A and all nonzero v satisfying
Av = λv are called eigenvectors of A associated to λ.
Q: Given A n × n, how does one find its eigenvalues and eigenvectors?
H. Ananthnarayan D1 - Lecture 16 8th February 2018 14 / 20
Eigenvalues and Eigenvectors: Solving Ax = λx
• Write Av = λv as (A − λI)v = 0.
• λ is an eigenvalue of A
⇔ there is a nonzero v in the nullspace of A − λI
⇔ N(A − λI) ≠ 0, i.e., dim (N(A − λI)) ≥ 1,
⇔ A − λI is singular
⇔ det(A − λI) = 0.
• det(A − λI) is a polynomial in the variable λ of degree n. Hence it
has at most n roots ⇒ A has at most n eigenvalues.
• det(A − λI) is called the characteristic polynomial of A.
• If λ is an eigenvalue of A, then the nullspace of A − λI is called the
eigenspace of A associated to eigenvalue λ.
 
• λ = 0 is an eigenvalue of A ⇔ det(A) = 0 ⇔ A is singular.
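The recipe above (roots of the characteristic polynomial, then null spaces of A − λI) can be carried out symbolically for the matrix from the ODE example; this is an added sketch assuming SymPy.

```python
# Added sketch: eigenvalues and eigenvectors of A = [4 -5; 2 -3] via det(A - lambda*I).
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[4, -5],
               [2, -3]])

charpoly = (A - lam * sp.eye(2)).det()
print(sp.factor(charpoly))                     # (lambda - 2)*(lambda + 1)

for ev in sp.solve(charpoly, lam):             # eigenvalues -1 and 2
    v = (A - ev * sp.eye(2)).nullspace()[0]    # an eigenvector: any non-zero multiple also works
    print(ev, v.T)                             # -1 with (1, 1), 2 with (5/2, 1)
```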
 

H. Ananthnarayan D1 - Lecture 16 8th February 2018 15 / 20


Eigenvalues and Eigenvectors: Example
To summarise: An eigenvalue of A is a root of its characteristic
polynomial, and any non-zero vector
in the corresponding eigenspace is an associated eigenvector.
Recall: The ODE system we want to solve is
u10 = 4u1 − 5u2 , u1 (0) = 8, u20 = 2u1 − 3u2 , u2 (0) = 5.
The solutions are u1 (t) = e^{λt} x, u2 (t) = e^{λt} y, where (x, y)T is a
solution of:
[4 −5; 2 −3] (x, y)T = λ (x, y)T (Av = λv )
The characteristic polynomial of A is
det(A − λI) = det([4 − λ, −5; 2, −3 − λ])
= (4 − λ)(−3 − λ) + 10
= λ^2 − λ − 2 = (λ + 1)(λ − 2)
The eigenvalues of A are λ1 = −1, λ2 = 2.
H. Ananthnarayan D1 - Lecture 16 8th February 2018 16 / 20
Eigenvalues and Eigenvectors: Example
Eigenvectors v1 and v2 associated to λ1 = −1 and λ2 = 2 respectively,
are in:
N(A − λ1 I) = N(A + I), and N(A − λ2 I) = N(A − 2I).
Solving (A + I)v = 0, i.e., [5 −5; 2 −2] (x, y)T = 0, we get
N(A + I) = {(y, y)T | y ∈ R} and v1 = (1, 1)T is an eigenvector
associated to λ1 = −1.
Similarly, solving (A − 2I)v = 0 gives N(A − 2I) = {(5y/2, y)T | y ∈ R}.
In particular, v2 = (5, 2)T is an eigenvector associated to λ2 = 2.
Thus, the system du/dt = Au has two special solutions
e^{−t} v1 and e^{2t} v2 .

H. Ananthnarayan D1 - Lecture 16 8th February 2018 17 / 20


Reading Slide - Complete Solution to ODE

Note: When two functions satisfy du/dt = Au, then so do


their linear combinations.
Complete solution: u(t) = c1 e^{−t} v1 + c2 e^{2t} v2 ,
i.e. (u1 (t), u2 (t))T = c1 e^{−t} (1, 1)T + c2 e^{2t} (5, 2)T,
i.e. u1 (t) = c1 e^{−t} + 5c2 e^{2t}, u2 (t) = c1 e^{−t} + 2c2 e^{2t}.
If we put initial conditions (IC) u1 (0) = 8 and u2 (0) = 5, then
c1 + 5c2 = 8, c1 + 2c2 = 5 ⇒ c1 = 3, c2 = 1.
Hence the solution of the original ODE system with the given IC is
u1 (t) = 3e^{−t} + 5e^{2t}, u2 (t) = 3e^{−t} + 2e^{2t}.
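The solution can be sanity-checked numerically by comparing a finite-difference derivative of u(t) with Au(t); the sketch below is an added illustration assuming NumPy.

```python
# Added sketch: check that u(t) = 3e^{-t}(1,1) + e^{2t}(5,2) satisfies du/dt = Au, u(0) = (8,5).
import numpy as np

A = np.array([[4.0, -5.0],
              [2.0, -3.0]])

def u(t):
    return 3 * np.exp(-t) * np.array([1.0, 1.0]) + np.exp(2 * t) * np.array([5.0, 2.0])

print(u(0.0))                                        # [8. 5.]: the initial condition
for t in (0.0, 0.5, 1.0):
    du_dt = (u(t + 1e-6) - u(t - 1e-6)) / 2e-6       # central difference approximation
    print(np.allclose(du_dt, A @ u(t), rtol=1e-4))   # True at each sampled time
```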

H. Ananthnarayan D1 - Lecture 16 8th February 2018 18 / 20


Examples

In some cases it is easy to find the eigenvalues.
Example: A = [3 0; 0 2] is diagonal. Characteristic polynomial:
(3 − λ)(2 − λ).
Eigenvalues: λ1 = 3, λ2 = 2.
Eigenvectors: (A − 3I)v1 = 0 ⇒ Av1 = 3v1 ⇒ v1 = e1
Similarly, an eigenvector associated to λ2 is v2 = e2
Further, R2 has a basis consisting of eigenvectors of A: {e1 , e2 }.
General case: If A is a diagonal matrix with diagonal entries
λ1 , · · · , λn , then
Eigenvalues: λ1 , · · · , λn Eigenvectors: e1 , · · · , en ,
which form a basis for Rn .

H. Ananthnarayan D1 - Lecture 16 8th February 2018 19 / 20


Examples

Example: Projection onto the line x = y : P = [1/2 1/2; 1/2 1/2].
v1 = (1, 1)T projects onto itself ⇒ λ1 = 1 with eigenvector v1 .
v2 = (1, −1)T ↦ 0 ⇒ λ2 = 0 with eigenvector v2 .
Further, {v1 , v2 } is a basis of R2 .
Q: Do a collection of eigenvectors always form a basis of Rn ?

H. Ananthnarayan D1 - Lecture 16 8th February 2018 20 / 20


Summary: Determinants

Let A and B be n × n, and c a scalar.


det(A + B) ≠ det(A) + det(B), and det(cA) = c^n det(A).
det(AB) = det(A)det(B).
det(A) = det(AT ).
If A is orthogonal, i.e., AAT = I, then det(A) = ±1.
If A = [aij ] is triangular, then det(A) = a11 · · · ann
A is invertible ⇔ det(A) ≠ 0. If this happens, then det(A−1 ) = 1/det(A)
If A and B are similar, i.e., B = P −1 AP for an invertible matrix P,
then det(B) = det(A)
If A is invertible, and d1 , . . . , dn are the pivots of A, then det(A) =
±(d1 · · · dn )

H. Ananthnarayan D1 - Lecture 17 12th February 2018 3/9


Summary: Eigenvalues and Characteristic Polynomial
Let A be n × n.
1 The characteristic polynomial of A is det(A − λI) (of degree n) and
its roots are the eigenvalues of A.
2 For each eigenvalue λ, the associated eigenspace is N(A − λI).
To find it, solve (A − λI)v = 0. Any non-zero vector in N(A − λI) is
an eigenvector associated to λ.
3 If A is a diagonal matrix with diagonal entries λ1 , · · · , λn , then its
eigenvalues are λ1 , · · · , λn with associated eigenvectors
e1 , · · · , en respectively.
4 Write det(A − λI) = (λ1 − λ) · · · (λn − λ) and expand.
Trace of A = a11 + · · · + ann (sum of diagonal entries)
= λ1 + . . . + λn (sum of eigenvalues)
det(A) = λ1 · · · λn (product of eigenvalues)

H. Ananthnarayan D1 - Lecture 17 12th February 2018 4/9


Examples
Example: Projection onto the line x = y : P = [1/2 1/2; 1/2 1/2].
v1 = (1, 1)T projects onto itself ⇒ λ1 = 1 with eigenvector v1 .
v2 = (1, −1)T ↦ 0 ⇒ λ2 = 0 with eigenvector v2 .
Further, {v1 , v2 } is a basis of R2 .
Q: Do a collection of eigenvectors always form a basis of Rn ?
A: No! Example: For c ∈ R, let A = [c 1; 0 c].
Characteristic Polynomial: det(A − λI) = (c − λ)^2.
Eigenvalues: λ = c.
Eigenvectors: (A − cI)v = 0 ⇒ v = (1, 0)T or a multiple.
The eigenspace of A is 1-dimensional ⇒ R2 has no basis of eigenvectors
of A.
Q: What is the advantage of a basis of Rn consisting of eigenvectors?
H. Ananthnarayan D1 - Lecture 17 12th February 2018 5/9
Similarity and Eigenvalues

Defn. Two n × n matrices A and B are similar, if P −1 AP = B for an


invertible matrix P.
Observe: If B = P −1 AP, then B n = P −1 An P for each n.
Theorem: If A and B are similar, then they have the same
characteristic polynomial.
In particular, they have the same eigenvalues, det(A) = det(B) and
Trace(A) = Trace(B).
Proof. Given: B = P −1 AP. Want to prove: det(A − λI) = det(B − λI).
Indeed, det(B − λI) = det(P −1 AP − λP −1 P)
= det(P −1 (A − λI)P) = det(A − λI). 
Observe: A − λI and B − λI are similar.
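The theorem can be checked numerically for the matrix A from the earlier example and a randomly chosen invertible P; this is an added sketch assuming NumPy.

```python
# Added sketch: B = P^{-1} A P has the same eigenvalues, determinant and trace as A.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, -5.0],
              [2.0, -3.0]])
P = rng.normal(size=(2, 2))             # generically invertible
B = np.linalg.inv(P) @ A @ P

print(np.sort(np.linalg.eigvals(A)))    # [-1.  2.]
print(np.sort(np.linalg.eigvals(B)))    # the same eigenvalues, up to round-off
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))
print(np.isclose(np.trace(A), np.trace(B)))
```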

H. Ananthnarayan D1 - Lecture 17 12th February 2018 6/9
