Exam 1 - Practice 2: © Harvard Math 21b
1. True or false. Please completely fill in the appropriate bubble; no explanation is necessary.
1. If the linear system A~x = ~b has exactly one solution, then the linear system A~x = ~c
   has exactly one solution for all vectors ~c. (False)

   Solution. The fact that A~x = ~b has exactly one solution tells us that there are no
   free variables in the system. So, A~x = ~c has at most one solution. However, it is
   still possible for A~x = ~c to be inconsistent. An example is

       A = [ 1 ] ,   ~b = [ 2 ] ,   ~c = [ 1 ] .
           [ 2 ]          [ 4 ]          [ 0 ]
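As a quick numerical sanity check (not part of the original solution), we can verify this counterexample with NumPy: A~x = ~b is consistent with a unique solution, while A~x = ~c leaves a nonzero least-squares residual.

```python
import numpy as np

A = np.array([[1.0], [2.0]])
b = np.array([2.0, 4.0])
c = np.array([1.0, 0.0])

# A x = b: least squares finds x = 2 with zero residual -> consistent, unique
x_b, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
consistent_b = np.allclose(A @ x_b, b)

# A x = c: the least-squares "solution" does not actually hit c -> inconsistent
x_c, _, _, _ = np.linalg.lstsq(A, c, rcond=None)
consistent_c = np.allclose(A @ x_c, c)

print(consistent_b, consistent_c)  # True False
```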
3. The set S = { [x; y] : x is an integer } is closed under addition. (True)

   Solution. If we have two vectors [x; y] and [x′; y′] in S, that means x and x′ are
   integers. The sum [x + x′; y + y′] is then in S because x + x′ is an integer.
4. The set S = { [x; y] : x is an integer } is closed under scalar multiplication. (False)

   Solution. If we have a vector [x; y] in S, that means x is an integer. But if c is a
   scalar, [cx; cy] is not necessarily in S because there's no reason cx must be an
   integer. For example, [1; 1] ∈ S, but π[1; 1] ∉ S.
5. If the vectors ~v1, ~v2, ~v3, and ~v4 are linearly independent, then the vectors ~v2, ~v3, and
   ~v4 must be linearly independent as well. (True)

   Solution. Given any linear relation

       c2~v2 + c3~v3 + c4~v4 = ~0,                                    (1)

   we can think of it as a linear relation 0~v1 + c2~v2 + c3~v3 + c4~v4 = ~0. Since ~v1, ~v2, ~v3, ~v4
   are linearly independent, this linear relation must be trivial, i.e., c2 = c3 = c4 = 0.
   So, the only linear relations of the form (1) are trivial ones, which shows that ~v2, ~v3, ~v4
   are linearly independent.
6. If A is a 5 × 4 matrix of rank 4, the system A~x = ~e1 must be consistent. (False)

   Solution. A counterexample is

       A = [ 0 0 0 0 ]
           [ 1 0 0 0 ]
           [ 0 1 0 0 ]
           [ 0 0 1 0 ]
           [ 0 0 0 1 ].

   This matrix has rank 4, but the first entry of A~x is always 0, so A~x = ~e1 is
   inconsistent.
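We can confirm the counterexample numerically (a hedged sanity check, not part of the exam): the matrix has rank 4, and appending ~e1 as an extra column raises the rank, so ~e1 is outside the image.

```python
import numpy as np

# The 5x4 counterexample: rank 4, but e1 is not in the image.
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
e1 = np.array([1.0, 0, 0, 0, 0])

rank_A = np.linalg.matrix_rank(A)
# If the rank jumps when e1 is adjoined, e1 is not in im A.
rank_aug = np.linalg.matrix_rank(np.column_stack([A, e1]))

print(rank_A, rank_aug)  # 4 5
```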
7. The function T : R2 → R3 given by

       T [ x ] = (x − y) [  1 ]
         [ y ]           [  2 ]
                         [ −3 ]

   is a linear transformation. (True)

   Solution. There are two reasonable approaches here. You could verify that T
   satisfies the properties T(~x1 + ~x2) = T(~x1) + T(~x2) and T(k~x) = kT(~x), or you
   could just observe that T can also be written as

       T [ x ] = [  1 −1 ] [ x ]
         [ y ]   [  2 −2 ] [ y ].
                 [ −3  3 ]
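A numerical spot check of both observations (not part of the original solution): T agrees with the 3 × 2 matrix above, and additivity and homogeneity hold at randomly chosen inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[1.0, -1.0], [2.0, -2.0], [-3.0, 3.0]])

def T(v):
    # T(x, y) = (x - y) * (1, 2, -3)
    return (v[0] - v[1]) * np.array([1.0, 2.0, -3.0])

u, w = rng.standard_normal(2), rng.standard_normal(2)
k = 1.7

agrees_with_matrix = np.allclose(T(u), M @ u)
additive = np.allclose(T(u + w), T(u) + T(w))
homogeneous = np.allclose(T(k * u), k * T(u))
print(agrees_with_matrix, additive, homogeneous)  # True True True
```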
8. A linear transformation T : R2 → R2 that sends [1; 2] to [1; 4] must send [1; 4] to
   [1; 8]. (False)

   Solution. Since [1; 2] and [1; 4] are linearly independent, knowing what the
   transformation does to [1; 2] does not tell us anything about what it does to [1; 4].
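To make this concrete, here is a hedged NumPy sketch constructing two different linear maps that agree on [1; 2] but disagree on [1; 4] (the output vectors chosen for the second map are illustrative, not from the exam):

```python
import numpy as np

S = np.array([[1.0, 1.0], [2.0, 4.0]])       # columns: (1,2) and (1,4)
out1 = np.array([[1.0, 1.0], [4.0, 8.0]])    # sends (1,2)->(1,4) and (1,4)->(1,8)
out2 = np.array([[1.0, 0.0], [4.0, 0.0]])    # sends (1,2)->(1,4) and (1,4)->(0,0)

# A linear map is determined by its values on a basis: T_i = out_i * S^{-1}.
T1 = out1 @ np.linalg.inv(S)
T2 = out2 @ np.linalg.inv(S)

v, w = np.array([1.0, 2.0]), np.array([1.0, 4.0])
print(T1 @ v, T2 @ v)   # both (1, 4)
print(T1 @ w, T2 @ w)   # (1, 8) vs (0, 0)
```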
9. If P is the plane in R3 spanned by ~e1 and ~e2, then ~x − projP ~x is a scalar multiple
   of ~e3. (True)

   Solution. [Figure: a vector ~x, its projection projP ~x lying in the plane P, and the
   difference ~x − projP ~x standing perpendicular to P.] As the picture shows,
   ~x − projP ~x is always perpendicular to the plane P. So, in this case, ~x − projP ~x is
   perpendicular to span(~e1, ~e2), which indeed means that it must be a scalar multiple
   of ~e3.
11. If A is an n × n matrix and A = AA, then A must be either the zero matrix or the
    identity matrix. (False)

    Solution. A counterexample is the matrix of projection onto a proper nonzero
    subspace of Rn: projecting twice is the same as projecting once, so A = AA, but A is
    neither the zero matrix nor the identity matrix.
12. Suppose A and B are bases of Rn. Let A be the matrix whose columns are the vectors
    of A, and let B be the matrix whose columns are the vectors of B. Then there exists
    an invertible matrix S such that B = SA. (True)
Solution. Since A and B are bases of Rn , the matrices A and B are invertible (see
#2(a) on the worksheet “More on Bases of Rn , Matrix Products”). So, S = BA−1
makes the statement true.
14. The function T : R2 → R3 defined by

        T [ s ] = [ 1 ]     [ 4 ]     [ 7 ]
          [ t ]   [ 2 ] + s [ 5 ] + t [ 8 ]
                  [ 3 ]     [ 6 ]     [ 9 ]

    is a linear transformation. (False)

    Solution. T [0; 0] = [1; 2; 3] ≠ ~0, so T cannot be a linear transformation.
16. If a 3 × 3 matrix R represents reflection about a plane in R3, then there is an
    invertible matrix S such that

        R = S [ 0 1 0 ] S⁻¹.
              [ 1 0 0 ]
              [ 0 0 1 ]

    (True)

    Solution. Asking whether there is such an invertible matrix S is the same as asking
    whether there is a basis B = (~v1, ~v2, ~v3) of R3 such that the B-matrix of the
    reflection is

        [ 0 1 0 ]
        [ 1 0 0 ].
        [ 0 0 1 ]

    This would say that the reflection sends ~v1 to ~v2, ~v2 to ~v1, and ~v3 to ~v3. The
    question is whether we can find such a basis.
    The answer is yes: let ~v1 be any vector that is neither in the plane of reflection nor
    perpendicular to the plane of reflection, ~v2 be the reflection of ~v1 (i.e., ~v2 = R~v1),
    and ~v3 be a vector in the plane of reflection that is not coplanar with ~v1 and ~v2.
    [Figure: ~v1 above the plane of reflection, its mirror image ~v2 below it, and ~v3 lying
    in the plane.]
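The recipe can be checked numerically for a concrete reflection (a sanity-check sketch, not part of the exam); here we use reflection about the xy-plane, so R = diag(1, 1, −1).

```python
import numpy as np

# Reflection about the xy-plane in R^3.
R = np.diag([1.0, 1.0, -1.0])

# Following the recipe: v1 neither in the plane nor perpendicular to it,
# v2 = R v1, and v3 in the plane but not coplanar with v1 and v2.
v1 = np.array([1.0, 0.0, 1.0])
v2 = R @ v1
v3 = np.array([0.0, 1.0, 0.0])

S = np.column_stack([v1, v2, v3])
M = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

ok = np.allclose(R, S @ M @ np.linalg.inv(S))
print(ok)  # True
```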
17. The image of a matrix A is always the same as the image of rref(A). (False)

    Solution. A counterexample is the matrix

        A = [ 1 0 ],    rref(A) = [ 1 0 ].
            [ 2 0 ]               [ 0 0 ]

    The image of A is spanned by [1; 2], but the image of rref(A) is spanned by [1; 0].
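A numerical check of the counterexample (hedged, not part of the original solution): the vector [1; 2] lies in im A but not in the image of rref(A), as seen by comparing ranks of the augmented matrices.

```python
import numpy as np

A = np.array([[1.0, 0.0], [2.0, 0.0]])
R = np.array([[1.0, 0.0], [0.0, 0.0]])   # rref(A)

# (1, 2) spans im A; test membership by whether adjoining it raises the rank.
v = np.array([1.0, 2.0])
in_im_A = np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)
in_im_R = np.linalg.matrix_rank(np.column_stack([R, v])) == np.linalg.matrix_rank(R)
print(in_im_A, in_im_R)  # True False
```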
18. If A is a 3 × 3 matrix with [1; 2; 3] ∉ im A, then A cannot be invertible. (True)

    Solution. Saying that [1; 2; 3] is not in im A exactly means that the system
    A~x = [1; 2; 3] is inconsistent. Therefore, A cannot be invertible. (Remember that the
    definition of invertible says that A is invertible if A~x = ~y has a unique solution for
    every ~y in R3; here, this is false for ~y = [1; 2; 3].)
19. There is a matrix A whose kernel is the single vector [1; 1]. (False)

    Solution. The kernel of any matrix is a subspace of its domain, and the single vector
    [1; 1] is not a subspace of R2. (For one thing, every subspace of R2 must include
    [0; 0].)
20. There are infinitely many ways to express [1; 1] as a linear combination of [3; 7],
    [−1; 4], and [2; 6]. (True)

    Solution. Expressing [1; 1] as a linear combination of [3; 7], [−1; 4], and [2; 6] means
    writing

        c1 [ 3 ] + c2 [ −1 ] + c3 [ 2 ] = [ 1 ];
           [ 7 ]      [  4 ]      [ 6 ]   [ 1 ]

    we can rewrite this equation in matrix form as

        [ 3 −1 2 ] [ c1 ]   [ 1 ]
        [ 7  4 6 ] [ c2 ] = [ 1 ].
                   [ c3 ]

    The coefficient matrix has rank 2 (which we can see because its columns span R2), so
    the system is consistent with a free variable, and hence has infinitely many solutions.
    Note that you've seen the basic idea of this problem in Daily Problem Set 6, #6(e):
    since the vectors [3; 7], [−1; 4], and [2; 6] span R2 and are linearly dependent, there
    are infinitely many ways to express any vector in R2 as a linear combination of these
    three vectors.
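The "particular solution plus kernel" structure can be exhibited numerically (a hedged sketch, not part of the original solution): one solution comes from least squares, and adding any multiple of a kernel vector gives another.

```python
import numpy as np

M = np.array([[3.0, -1.0, 2.0],
              [7.0,  4.0, 6.0]])
target = np.array([1.0, 1.0])

# One particular solution (exact, since rank M = 2 makes the system consistent).
c0 = np.linalg.lstsq(M, target, rcond=None)[0]

# The kernel of M is 1-dimensional; the last right-singular vector spans it.
_, _, Vt = np.linalg.svd(M)
k = Vt[-1]

# Every c0 + t*k is a further way to write (1, 1) as a combination.
for t in (0.0, 1.0, -2.5):
    assert np.allclose(M @ (c0 + t * k), target)
print("many solutions verified")
```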
2. Consider the system

       x − 2kz = 0
       x + 2y + 6z = 2
       2z − kx = 1

   of equations in x, y, and z.
(a) For k = 2, the system has exactly one solution. Find this solution, and check that it
    actually is a solution.

    Solution. The linear system can be expressed by the augmented matrix

        [  1  0  −2k | 0 ]
        [  1  2   6  | 2 ].
        [ −k  0   2  | 1 ]
We plug in k = 2 and row-reduce:

    [  1 0 −4 | 0 ]          →  [ 1 0 −4 | 0 ]
    [  1 2  6 | 2 ]  −(I)       [ 0 2 10 | 2 ]  ÷2
    [ −2 0  2 | 1 ]  +2(I)      [ 0 0 −6 | 1 ]

    →  [ 1 0 −4 | 0 ]
       [ 0 1  5 | 1 ]
       [ 0 0 −6 | 1 ]  ÷(−6)

    →  [ 1 0 −4 |   0  ]  +4(III)
       [ 0 1  5 |   1  ]  −5(III)
       [ 0 0  1 | −1/6 ]

    →  [ 1 0 0 | −2/3 ]
       [ 0 1 0 | 11/6 ]
       [ 0 0 1 | −1/6 ]

So, the solution is (x, y, z) = (−2/3, 11/6, −1/6). To check that this is correct, we
simply plug x = −2/3, y = 11/6, z = −1/6 into the original equations (with k = 2):

    −2/3 − 2 · 2 · (−1/6) = 0  ✓
    −2/3 + 2 · (11/6) + 6 · (−1/6) = 2  ✓
    2 · (−1/6) − 2 · (−2/3) = 1  ✓
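The k = 2 case can also be checked in one line with a numerical solver (a sanity check, not part of the exam):

```python
import numpy as np

k = 2.0
A = np.array([[1.0, 0.0, -2 * k],
              [1.0, 2.0,  6.0],
              [-k,  0.0,  2.0]])
b = np.array([0.0, 2.0, 1.0])

sol = np.linalg.solve(A, b)
print(sol)  # approximately (-2/3, 11/6, -1/6)
```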
(b) For what value(s) of k does the system not have exactly one solution?

    Solution. Now, let's try to solve the general system by row reducing:

        [  1 0 −2k | 0 ]         →  [ 1 0   −2k   | 0 ]
        [  1 2   6 | 2 ]  −(I)      [ 0 2  2k + 6 | 2 ]  ÷2
        [ −k 0   2 | 1 ]  +k(I)     [ 0 0 2 − 2k² | 1 ]

        →  [ 1 0   −2k   | 0 ]
           [ 0 1  k + 3  | 1 ]
           [ 0 0 2 − 2k² | 1 ]

    We are not done row-reducing yet, but what happens next depends on k:

    • If 2 − 2k² ≠ 0, then we can continue row-reducing the matrix to get exactly one
      solution.

    So, the values of k for which the system does not have exactly one solution are the
    values of k such that 2 − 2k² = 0, which are k = ±1.
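An equivalent check (not part of the original solution) uses determinants: expanding the coefficient matrix along its second column gives det = 2(2 − 2k²), which vanishes exactly at k = ±1.

```python
import numpy as np

def coeff_matrix(k):
    return np.array([[1.0, 0.0, -2 * k],
                     [1.0, 2.0,  6.0],
                     [-k,  0.0,  2.0]])

# det = 2(2 - 2k^2): zero exactly when k = +/-1.
for k in (-1.0, 1.0):
    assert abs(np.linalg.det(coeff_matrix(k))) < 1e-12
for k in (0.0, 2.0, 3.5):
    assert abs(np.linalg.det(coeff_matrix(k)) - 2 * (2 - 2 * k**2)) < 1e-9
print("det check passed")
```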
(c) For the value(s) of k you found in (b), which (if any) yield a system with no
    solutions? Which (if any) yield a system with infinitely many solutions?

    Solution. For k = ±1, the last row of the matrix above reads [ 0 0 0 | 1 ], i.e., the
    equation 0 = 1. So both k = 1 and k = −1 yield a system with no solutions, and no
    value of k yields a system with infinitely many solutions.
We've circled the leading ones; from that, we see that the first, third, and fifth
variables are leading variables, while the second and fourth are free. So, the solutions
of A~x = ~0 are

    [ −3s − 2t ]     [ −3 ]     [ −2 ]
    [     s    ]     [  1 ]     [  0 ]
    [    −5t   ] = s [  0 ] + t [ −5 ],
    [     t    ]     [  0 ]     [  1 ]
    [     0    ]     [  0 ]     [  0 ]

and a basis of ker A is

    [ −3 ]   [ −2 ]
    [  1 ]   [  0 ]
    [  0 ] , [ −5 ].
    [  0 ]   [  1 ]
    [  0 ]   [  0 ]
(b) Let ~z = [1; 0; 0; 0; 5]. Calculate A~z.

    Solution. We simply multiply:

        A~z = [  2  6  0  4  0 ] [ 1; 0; 0; 0; 5 ] = [  2 ]
              [ −3 −9  1 −1  0 ]                     [ −3 ].
              [  2  6 −1 −1  1 ]                     [  7 ]
(c) Is ~b = [2; −3; 7] in im A?

    Solution. Yes, ~b is in im A, because ~b can be expressed as A~z (where ~z is the vector
    given in (b)).
(d) Solve the system A~x = ~b. (Give all solutions.) Here, ~b is the same vector as in (c).

    Solution. Since ~b is in im A, the system A~x = ~b is consistent. In this case, if we can
    find one solution ~x1, then the solution set of A~x = ~b is simply ~x1 + ker A. (This is
    what Daily Problem Set 7, #3 and #1 on the worksheet "Image and Kernel of a
    Linear Transformation" said.) But we know that ~z from (b) is a solution, because
    A~z = ~b. So, the complete solution set is ~z + ker A, or

        { [ 1 ]     [ −3 ]     [ −2 ]            }
        { [ 0 ]     [  1 ]     [  0 ]            }
        { [ 0 ] + s [  0 ] + t [ −5 ] : s, t ∈ R }.
        { [ 0 ]     [  0 ]     [  1 ]            }
        { [ 5 ]     [  0 ]     [  0 ]            }

    (Of course, you could also just solve the system A~x = ~b using Gauss-Jordan.)
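A hedged numerical check of the full solution set (not part of the original solution): every vector ~z + s·k1 + t·k2 should satisfy A~x = ~b.

```python
import numpy as np

A = np.array([[ 2.0,  6.0,  0.0,  4.0, 0.0],
              [-3.0, -9.0,  1.0, -1.0, 0.0],
              [ 2.0,  6.0, -1.0, -1.0, 1.0]])
b = np.array([2.0, -3.0, 7.0])

z  = np.array([1.0, 0.0,  0.0, 0.0, 5.0])   # particular solution
k1 = np.array([-3.0, 1.0,  0.0, 0.0, 0.0])  # kernel basis vector
k2 = np.array([-2.0, 0.0, -5.0, 1.0, 0.0])  # kernel basis vector

for s, t in [(0, 0), (1, 0), (0, 1), (-2, 3.5)]:
    assert np.allclose(A @ (z + s * k1 + t * k2), b)
print("solution set verified")
```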
(e) True or false: For every vector ~c in R3, the system A~x = ~c is consistent. Explain
    your reasoning.

    Solution. True. If we look back at our row reduction in (a), we see that

        rref A = [ 1 3 0 2 0 ]
                 [ 0 0 1 5 0 ],
                 [ 0 0 0 0 1 ]

    so the first, third, and fifth columns of A form a basis of im A. Therefore,
    dim(im A) = 3, so im A must be all of R3 (after all, im A is a 3-dimensional subspace
    of R3, and the only such subspace is R3). So, any vector ~c ∈ R3 is in im A, which is
    the same as saying that A~x = ~c is consistent.
4. In each part, determine how many linear transformations T : R3 → R2 there are with
   the given properties. Explain your reasoning.

   (a) T [1; 2; 3] = [1; 3],  T [−3; 1; −2] = [2; 4],  T [−1; 5; 4] = [4; 9].

       Solution. First, let's determine whether the vectors ~v1 = [1; 2; 3],
       ~v2 = [−3; 1; −2], ~v3 = [−1; 5; 4] are linearly independent. We do this by looking
       for linear relations
       c1~v1 + c2~v2 + c3~v3 = ~0, i.e., by solving [ ~v1 ~v2 ~v3 ] [c1; c2; c3] = ~0. Let's
       do that using Gauss-Jordan:

           [ 1 −3 −1 | 0 ]          →  [ 1 −3 −1 | 0 ]
           [ 2  1  5 | 0 ]  −2(I)      [ 0  7  7 | 0 ]  ÷7
           [ 3 −2  4 | 0 ]  −3(I)      [ 0  7  7 | 0 ]

           →  [ 1 −3 −1 | 0 ]  +3(II)
              [ 0  1  1 | 0 ]
              [ 0  7  7 | 0 ]  −7(II)

           →  [ 1 0 2 | 0 ]
              [ 0 1 1 | 0 ]
              [ 0 0 0 | 0 ]

       This shows that the vectors are linearly dependent; in fact, ~v3 = 2~v1 + ~v2. Any
       linear transformation would then have to satisfy T(~v3) = 2T(~v1) + T(~v2), but
       2[1; 3] + [2; 4] = [4; 10] ≠ [4; 9]. So there are no linear transformations with the
       given properties.
5. Let T : R2 → R2 be the linear transformation defined as follows: T (~x) is the vector obtained by
rotating ~x counter-clockwise by 45◦ , projecting that onto y = 3x, and then rotating clockwise by 45◦ .
(a) Find the matrix of T . You may leave your answer as a product of matrices and inverses of
matrices.
Solution. If we let R be the matrix of rotation by 45◦ counter-clockwise and P be the matrix
of projection onto y = 3x, then the matrix of T is R−1 P R. The rotation matrix R is given by
The rotation matrix R is given by

    R = [ √2/2  −√2/2 ]
        [ √2/2   √2/2 ]

(see #4 on the worksheet "Introduction to Linear Transformations"). To find the
matrix P of projection onto y = 3x, let's use coordinates: the line is spanned by
~w = [1; 3], and proj(~x) = ((~x · ~w)/(~w · ~w)) ~w, so applying this to ~e1 and ~e2 gives
the columns of

    P = (1/10) [ 1 3 ]
               [ 3 9 ].
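The product R⁻¹PR can then be evaluated numerically (a hedged sketch, assuming the standard projection-matrix formula for the line y = 3x); since T is a rotated projection, the result should itself be a symmetric, idempotent matrix.

```python
import numpy as np

c = np.sqrt(2) / 2
R = np.array([[c, -c], [c, c]])              # rotate 45 deg counter-clockwise
P = np.array([[1.0, 3.0], [3.0, 9.0]]) / 10  # projection onto y = 3x

T = np.linalg.inv(R) @ P @ R
print(T)
```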
(b) Is T projection onto a line in R2 ? Justify your answer. In addition, if T is a projection, specify
the line of projection by giving a basis of the line.
Solution. There are several ways to approach this; here is one. The image of any projection is
a line, so let’s first think about the image of T . If we start with any vector ~x in R2 , rotate it
counter-clockwise by 45◦ , project that onto y = 3x, and then rotate clockwise by 45◦ , we end up
with a vector on the line obtained by rotating y = 3x clockwise by 45◦ ; this is shown below for a
few different vectors ~x:
    [Figure: three panels showing a few vectors ~x being rotated 45° counter-clockwise,
    projected onto y = 3x, and then rotated 45° clockwise.]
So, the image of T is the line obtained by rotating y = 3x clockwise by 45◦ ; let’s call this line V .
To decide whether T is projection onto V , we only need to see what it does to a basis of R2 ; let’s
use a basis (~v1 , ~v2 ) where ~v1 is in the line V and ~v2 is perpendicular to V . Then, we can see that
T (~v1 ) = ~v1 and T (~v2 ) = ~0:
    [Figure: four panels tracking ~v1 (along V) and ~v2 (perpendicular to V) through the
    rotation, projection, and rotation; ~v1 returns to itself and ~v2 is sent to ~0.]
From this, we can see that T must be projection onto the line V. To find a basis of V,
we just need to find one nonzero vector in V. We can get such a vector by starting with
a nonzero vector in y = 3x, like [1; 3], and rotating 45° clockwise. The matrix of 45°
clockwise rotation
is

    [  √2/2  √2/2 ]
    [ −√2/2  √2/2 ],

so the rotation of [1; 3] clockwise by 45° is

    [  √2/2  √2/2 ] [ 1 ]   [ 2√2 ]
    [ −√2/2  √2/2 ] [ 3 ] = [  √2 ].

Thus, a basis of V is [2√2; √2]. (A slightly simpler basis is [2; 1].)
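A numerical confirmation of the basis argument (hedged, assuming the standard projection-matrix formula for y = 3x): the vector [2; 1] is fixed by T, and the perpendicular vector [1; −2] is sent to ~0.

```python
import numpy as np

c = np.sqrt(2) / 2
R = np.array([[c, -c], [c, c]])              # rotate 45 deg counter-clockwise
P = np.array([[1.0, 3.0], [3.0, 9.0]]) / 10  # projection onto y = 3x
T = np.linalg.inv(R) @ P @ R

v1 = np.array([2.0, 1.0])    # spans the line V
v2 = np.array([1.0, -2.0])   # perpendicular to V

fixes_v1 = np.allclose(T @ v1, v1)
kills_v2 = np.allclose(T @ v2, 0)
print(fixes_v1, kills_v2)  # True True
```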
6. (a) Let A be a 3 × 3 matrix for which im(A) = span( [3; 2; 0], [−1; 0; 2] ). What is
       rank(A)? Give an example of such a matrix A.

       Solution. rank(A) = dim(im A) = 2 because the two vectors spanning im A are
       clearly linearly independent. An example is

           A = [ 3 −1 0 ]
               [ 2  0 0 ].
               [ 0  2 0 ]

       (We know this works because im A is the span of the columns of A.)
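A one-line numerical check of the rank (not part of the original solution):

```python
import numpy as np

A = np.array([[3.0, -1.0, 0.0],
              [2.0,  0.0, 0.0],
              [0.0,  2.0, 0.0]])
r = np.linalg.matrix_rank(A)
print(r)  # 2
```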
    (b) Let B be a 3 × 3 matrix for which ker(B) = span( [1; 0; 2], [0; 1; −1] ). What is
        rank(B)? Give an example of such a matrix B.

        Solution. By the rank-nullity theorem, rank(B) = 3 − dim(ker B) = 3 − 2 = 1.
        To come up with an example, let's think about what we know. We know that

            B [1; 0; 2] = ~0  and  B [0; 1; −1] = ~0.

        If B is written in terms of its columns as B = [ ~v1 ~v2 ~v3 ], then we can rewrite
        these equations as saying

            ~v1 + 2~v3 = ~0  and  ~v2 − ~v3 = ~0.

        So, ~v1 = −2~v3 and ~v2 = ~v3. We can pick ~v3 to be any nonzero vector, and then
        the two other columns are determined. So, an example is

            B = [ −2 1 1 ]
                [  0 0 0 ].
                [  0 0 0 ]
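A quick numerical check of this example (not part of the original solution): the matrix has rank 1 and annihilates both spanning vectors of the required kernel.

```python
import numpy as np

B = np.array([[-2.0, 1.0, 1.0],
              [ 0.0, 0.0, 0.0],
              [ 0.0, 0.0, 0.0]])
k1 = np.array([1.0, 0.0, 2.0])
k2 = np.array([0.0, 1.0, -1.0])

rank_B = np.linalg.matrix_rank(B)
print(rank_B, np.allclose(B @ k1, 0), np.allclose(B @ k2, 0))  # 1 True True
```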
(c) Is it possible to choose A and B so that rank(AB) = 2? Briefly justify your answer.
Solution. No. The key is that ker B is contained in ker(AB). (You showed this in Weekly
Problem Set 4, #4.) This means that dim(ker AB) ≥ dim(ker B) = 2. By the rank-nullity
theorem, rank(AB) = 3 − dim(ker AB), and this must be ≤ 1.
If this seems like magic, here is a thought process you might use to come up with this. We know
that we can think of AB as a composition of the linear transformations represented by A and B.
Schematically, we can represent those linear transformations like this:
    [Diagram: R3 mapping to R3 under B, then to R3 under A, with im B drawn inside
    the middle copy of R3 and im A inside the last.]
We know from our work above that dim(im B) = 1, and dim(im A) = 2, so we’ve drawn im B as a
bit smaller than im A (and both smaller than R3 ). We want to think about the composition AB,
which we can represent like this:
    [Diagram: the same picture, now with a red circle inside im A marking im AB, the
    image of im B under A.]
Here, the red circle represents im AB. The question is asking whether dim(im AB) can be 2. From
our diagram, the intuitive answer is no; after all, im AB cannot be “bigger” than im B, since it is
the image of im B under A. So, dim(im AB) should be ≤ 1.
Once you have this intuitive idea, the next step is to justify it. The key to our intuitive argument
was “comparing” AB and B (notice that A ended up not being that important in our intuition),
so it makes sense for us to use the fact that ker B is contained in ker(AB).
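Using the example matrices from (a) and (b), the bound can be observed directly (a hedged check, not part of the original solution): rank(AB) ≤ rank(B) = 1, so rank(AB) = 2 is impossible.

```python
import numpy as np

A = np.array([[3.0, -1.0, 0.0],
              [2.0,  0.0, 0.0],
              [0.0,  2.0, 0.0]])
B = np.array([[-2.0, 1.0, 1.0],
              [ 0.0, 0.0, 0.0],
              [ 0.0, 0.0, 0.0]])

# rank(AB) can never exceed rank(B) = 1.
r = np.linalg.matrix_rank(A @ B)
print(r)  # 1
```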
7. Let ~v1 = [1; 2] and ~v2 = [0; 1]. Let B be the basis (~v1, ~v2) of R2.

   (a) Find the B-coordinates of ~v = [5; 6].
       Solution. We want to write ~v = c1~v1 + c2~v2 for some c1, c2. That is, we want

           [ 5 ] = c1 [ 1 ] + c2 [ 0 ] = [    c1    ].
           [ 6 ]      [ 2 ]      [ 1 ]   [ 2c1 + c2 ]

       So c1 = 5 and 2c1 + c2 = 6, giving c2 = −4; the B-coordinates are [~v]B = [5; −4].
   (b) Suppose ~w is a vector whose B-coordinates are [2; 1]. What are the standard
       coordinates of ~w?

       Solution. Saying that [~w]B = [2; 1] exactly means that ~w = 2~v1 + ~v2, which we
       can calculate is [2; 5].
   (c) The matrix A = [ 1 1 ] defines a linear transformation T(~x) = A~x. What is the
                      [ 3 4 ]
       B-matrix of T? Please simplify your answer.

       Solution. We know a few methods of finding the B-matrix of T. In this solution,
       we'll construct it column by column. The columns of the B-matrix of T are
       [T(~v1)]B and [T(~v2)]B. We have

           T(~v1) = A~v1 = [ 1 1 ] [ 1 ] = [  3 ] = 3~v1 + 5~v2
                           [ 3 4 ] [ 2 ]   [ 11 ]

           T(~v2) = A~v2 = [ 1 1 ] [ 0 ] = [ 1 ] = ~v1 + 2~v2
                           [ 3 4 ] [ 1 ]   [ 4 ]

       Thus, [T(~v1)]B = [3; 5] and [T(~v2)]B = [1; 2], so the B-matrix of T is

           [ 3 1 ]
           [ 5 2 ].
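Another of the known methods, S⁻¹AS with S the change-of-basis matrix, gives the same answer and is easy to check numerically (a sanity check, not part of the original solution):

```python
import numpy as np

A = np.array([[1.0, 1.0], [3.0, 4.0]])
S = np.array([[1.0, 0.0], [2.0, 1.0]])   # columns: v1 = (1,2), v2 = (0,1)

# The B-matrix of T is S^{-1} A S.
B_matrix = np.linalg.inv(S) @ A @ S
print(B_matrix)  # [[3, 1], [5, 2]]
```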
   (d) Is there a different basis B such that the B-matrix of T is [ 2 −2 ]? Explain
                                                                  [ 1 −1 ]
       briefly. (Here, T is the linear transformation defined in (c).)

       Solution. No; the image of a linear transformation with this B-matrix must be
       1-dimensional, but the image of T is span( [1; 3], [1; 4] ) = R2.
       Why must the image of a linear transformation S with B-matrix [ 2 −2 ] be
                                                                     [ 1 −1 ]
       1-dimensional? Let's think about possible outputs S(~x) of such a linear
       transformation. If the basis B is (~w1, ~w2), then we can write any ~x as a linear
       combination of ~w1 and ~w2, say ~x = c1~w1 + c2~w2. Then,

           S(~x) = c1 S(~w1) + c2 S(~w2).

       The fact that the B-matrix of S is [ 2 −2 ] exactly says that S(~w1) = 2~w1 + ~w2
                                          [ 1 −1 ]
       and S(~w2) = −2~w1 − ~w2. So,

           S(~x) = c1 (2~w1 + ~w2) + c2 (−2~w1 − ~w2)
                 = (c1 − c2)(2~w1 + ~w2).

       Every output is therefore a multiple of the single vector 2~w1 + ~w2.
8. (a) The set U of ~x in R4 such that A~x = [1; 3].

       Solution. This is not a subspace of R4 because it does not contain ~0. (After all,
       A~0 = ~0 ≠ [1; 3].)
   (b) The set V of ~x in R4 such that A~x = B~x.

       Solution. This is a subspace of R4. One way to justify this is to notice that
       A~x = B~x ⇐⇒ (A − B)~x = ~0. In other words, the set V is simply ker(A − B),
       and we know that the kernel of any linear transformation is a subspace.
       Alternatively, we can show directly that V is closed under addition and scalar
       multiplication:
       • To show that V is closed under addition, let ~x, ~y ∈ V. We want to show that
         ~x + ~y ∈ V, which is the same as showing that A(~x + ~y) = B(~x + ~y). We have:

             A(~x + ~y) = A~x + A~y    (matrix algebra)

         Since ~x, ~y ∈ V, we know that A~x = B~x and A~y = B~y. So, we can rewrite the
         previous line as:

             = B~x + B~y
             = B(~x + ~y)    (more matrix algebra)
       • To show that V is closed under scalar multiplication, let ~x ∈ V and let k be a
         scalar. Then, A(k~x) = kA~x = kB~x = B(k~x), so k~x ∈ V.

       We are also asked to find a basis of this subspace. As we observed earlier, this
       subspace is simply ker(A − B), so it's easy to find a basis. We simply solve
       (A − B)~x = ~0 using Gauss-Jordan:
           [ 1 −1 1 3 | 0 ]          →  [ 1 −1  1   3 | 0 ]
           [ 6 −3 0 3 | 0 ]  −6(I)      [ 0  3 −6 −15 | 0 ]  ÷3

           →  [ 1 −1  1  3 | 0 ]  +(II)
              [ 0  1 −2 −5 | 0 ]

           →  [ 1 0 −1 −2 | 0 ]
              [ 0 1 −2 −5 | 0 ]

       The third and fourth variables are free, say x3 = s and x4 = t; then x1 = s + 2t
       and x2 = 2s + 5t,
       which shows that a basis of ker(A − B) is

           [ 1 ]   [ 2 ]
           [ 2 ]   [ 5 ]
           [ 1 ] , [ 0 ].
           [ 0 ]   [ 1 ]
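A quick numerical check (not part of the original solution) that both basis vectors really lie in ker(A − B), with A − B read off from the row reduction:

```python
import numpy as np

# A - B, as it appears in the Gauss-Jordan computation above.
D = np.array([[1.0, -1.0, 1.0, 3.0],
              [6.0, -3.0, 0.0, 3.0]])
k1 = np.array([1.0, 2.0, 1.0, 0.0])
k2 = np.array([2.0, 5.0, 0.0, 1.0])

ok = np.allclose(D @ k1, 0) and np.allclose(D @ k2, 0)
print(ok)  # True
```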
   (c) The set W of vectors [x1; x2; x3; x4] in R4 such that x1² = x3².

       Solution. W is not a subspace of R4 because it is not closed under addition. For
       example, [1; 0; −1; 0] and [1; 0; 1; 0] are both in W, but their sum [2; 0; 0; 0] is
       not in W.
9. Let ~a = [1; 2; 3; 2; 3] and ~b = [2; 4; 5; 1; 3]. Let W be the set of vectors ~x in R5
   which are perpendicular to both ~a and ~b.
• If ~x1 , ~x2 are in W , that means (by definition of W ) that ~x1 and ~x2 are both perpendicular to
both ~a and ~b; that is, ~a · ~x1 = 0, ~b · ~x1 = 0, ~a · ~x2 = 0, and ~b · ~x2 = 0. Adding the first and
third equations, ~a · (~x1 + ~x2 ) = 0. Adding the second and fourth equations, ~b · (~x1 + ~x2 ) = 0.
So, ~x1 + ~x2 is perpendicular to both ~a and ~b, which shows that ~x1 + ~x2 is in W . Therefore,
W is closed under addition.
   In other words,

       W = ker [ 1 2 3 2 3 ];
               [ 2 4 5 1 3 ]

   as usual, we solve the corresponding system using Gauss-Jordan:

       [ 1 2 3 2 3 | 0 ]          →  [ 1 2  3  2  3 | 0 ]
       [ 2 4 5 1 3 | 0 ]  −2(I)      [ 0 0 −1 −3 −3 | 0 ]  ÷(−1)

       →  [ 1 2 3 2 3 | 0 ]  −3(II)
          [ 0 0 1 3 3 | 0 ]

       →  [ 1 2 0 −7 −6 | 0 ]
          [ 0 0 1  3  3 | 0 ]

   The second, fourth, and fifth variables are free, so setting x2 = r, x4 = s, x5 = t
   gives x1 = −2r + 7s + 6t and x3 = −3s − 3t, and a basis of W is

       [ −2 ]   [  7 ]   [  6 ]
       [  1 ]   [  0 ]   [  0 ]
       [  0 ] , [ −3 ] , [ −3 ].
       [  0 ]   [  1 ]   [  0 ]
       [  0 ]   [  0 ]   [  1 ]