
Knapp Study Group

Week 3 Problems

Eric Tao
June 16, 2023

1 Quick Checks
1. Consider the differential equation 5y′′ + 7y′ − 6y = 0, where y : R → R is infinitely differentiable. The
solutions to this equation form the eigenspace of a certain linear operator. Identify the vector space in
question, the linear operator in question, and the eigenvalue. (There is more than one possible answer
here, but they’re all the same idea.)
2. If A is an n × n matrix of real numbers, consider the following equation:

det(2A) = 2 det A.

Is it always true? If it is, give a brief justification. If it is not, identify when exactly it is true.
3. a) Find the characteristic polynomial, eigenvalues, and eigenvectors of
\[
\begin{pmatrix}
1 & 1 & 2 & 0 \\
-1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0
\end{pmatrix},
\]
considering it as a matrix with real entries. Is this matrix diagonalizable?


b) Find the characteristic polynomial, eigenvalues, and eigenvectors of
\[
\begin{pmatrix}
1 & 1 & 2 & 0 \\
-1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0
\end{pmatrix},
\]
considering it as a matrix with complex entries. Is this matrix diagonalizable?

4. What is wrong with the following statement? “If A is a matrix of real numbers and x is a vector with
dimension corresponding to the number of columns of A, then Ax = 0 has a nontrivial solution if and
only if the determinant of A is zero.”
5. a) If
\[
A = \begin{pmatrix}
1 & 0 & 1 \\
0 & 3 & 3 \\
3 & 1 & 2
\end{pmatrix},
\]
then calculate the determinant of A by doing cofactor expansion along the first row.

b) Find the classical adjoint of A.
c) Calculate the entry in the first row and the first column of A multiplied by the classical adjoint
of A. What do you notice, comparing your calculations from the first part of the question to
your calculations from the second and third parts of the question?
d) Now, calculate the determinant of
\[
A' = \begin{pmatrix}
1 & 0 & 1 \\
1 & 0 & 1 \\
3 & 1 & 2
\end{pmatrix}
\]
by doing cofactor expansion along the first row (although you should know what the determinant
is equal to before you even do any computations!).
e) Calculate the entry in the first row and the second column of A multiplied by the classical adjoint
of A. What do you notice, comparing your calculations from the fourth part of the question to
your calculations from the fifth part of the question?
f) Based on what has been demonstrated over the course of this question, convince yourself that A
multiplied by its classical adjoint equals (det A)I, so that if det A ≠ 0, then the classical adjoint of A
is (det A)A⁻¹; this proves that if the determinant of a matrix is invertible, the matrix is invertible
(you do not have to write anything down for this part of the question). If you need a reference in
the textbook, this is Proposition 2.38. A small sanity-check computation is included after this list.
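As a sanity check on the identity behind part (f) (this 2 × 2 computation is my own illustration, not part of the problem or the textbook): if
\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad
\operatorname{adj} A = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix},
\]
then
\[
A\,(\operatorname{adj} A) = \begin{pmatrix} ad - bc & 0 \\ 0 & ad - bc \end{pmatrix} = (\det A)\, I,
\]
so whenever det A ≠ 0, dividing by det A exhibits an inverse of A.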

2 Problems

6. We say that a sequence of vector spaces with linear maps between them is exact if the image of each
map equals the kernel of the next. An exact sequence of five vector spaces where the first and last vector
spaces are zero is called a short exact sequence. For example,

\[
0 \xrightarrow{\;\alpha\;} A \xrightarrow{\;\beta\;} B \xrightarrow{\;\gamma\;} C \xrightarrow{\;\delta\;} 0
\]

is a short exact sequence if im α = ker β, im β = ker γ, and im γ = ker δ. While this may seem like a
pointless definition now (because as the last part of this question will show, short exact sequences are
indeed somewhat trivial for vector spaces), we’ll get a ton of mileage out of it when we study groups
and modules, so we might as well see it now.
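As a concrete example (my own illustration, not part of the problem): if T : V → W is any linear map, then
\[
0 \longrightarrow \ker T \xrightarrow{\;\iota\;} V \xrightarrow{\;T\;} \operatorname{im} T \longrightarrow 0,
\]
where ι is the inclusion, is a short exact sequence: exactness at ker T says that ι is injective, exactness at V says that the image of ι (namely ker T) is the kernel of T, and exactness at im T says that T maps V onto im T.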

a) Prove that there is only one possible choice for what α can be. What does this imply about β (use
the exactness of the sequence at A)?
b) Prove that there is only one possible choice for what δ can be. What does this imply about γ (use
the exactness of the sequence at C)?
c) Prove that in fact, B ≅ A ⊕ C. In other words, the exact sequence splits. That is, there is only one
way to combine two vector spaces into another vector space. We’ll see later that this is not true
for groups and modules, and a great deal of our time will go towards understanding how A and C
can combine to form different structures on B.

7. Let V = Mn×n (Q) and consider the map L : V → V which takes a matrix to its transpose.

a) Check that L is an automorphism of V (i.e. it is a linear map from V to itself with a linear inverse).
b) Is L diagonalizable? If it is, then for each eigenspace V_i of L, give the formula for the projection
map π_i : V → V_i such that ∑_i π_i = id (in other words, give a decomposition of V into a direct sum
of the eigenspaces of L). If it is not, then prove it is not.

8. Let P1 (R) be the vector space of polynomials with real coefficients and degree at most one. Does there
exist p ∈ P1 (R) such that
\[
\int_{-1}^{1} (p - \cos x)\, q \, dx = 0
\]
for all q ∈ P1 (R)? If so, find it. If not, prove your answer.
9. Prove that if 1 + 1 ≠ 0 (as is true in Q, R, and C at least), the following characterizations of alternating
multilinear functions are equivalent:
1. f is a multilinear function such that if any two of the inputs are the same, then f outputs zero.
2. f is a multilinear function such that if any two inputs are switched, the sign of f flips.
3. f is a multilinear function such that if a permutation is applied on the inputs, the output of f is
multiplied by the sign of the permutation.
Later, we’ll see examples of algebraic objects where 1 + 1 = 0, and these three statements will not
be equivalent in that context, which is why we picked the first one to be the definition of an alternating
multilinear function.
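To preview why the hypothesis matters (this small example is my own, not one from the text): take k = F_2, the field with two elements, and let f : k × k → k be multiplication, f(x, y) = xy. This f is bilinear, and since −1 = 1 in F_2, the identity f(y, x) = −f(x, y) holds (both sides equal xy), so f satisfies the second characterization; but f(1, 1) = 1 ≠ 0, so it fails the first.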
10. If A, B are n×n matrices of real numbers, do AB and BA always have the same characteristic polynomial?
If they do, then prove it. If not, give a counterexample. (Hint: consider first the case when either A or
B is invertible.)
11. Corollary 2.37 introduced a type of matrix known as a Vandermonde matrix, which shows up ubiquitously
in applications like Lagrange interpolation and more (the discrete Fourier transform, for example, is a
very special type of Vandermonde matrix). Let’s define our notation for this question. Let k be either
C, R, or Q, and let r_0, . . . , r_n ∈ k. The corresponding Vandermonde matrix is then
\[
V(r_0, \ldots, r_n) = \bigl(r_i^{\,j}\bigr)_{0 \le i,\, j \le n},
\]
which is composed of n + 1 rows and n + 1 columns.
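For concreteness (this particular display is my own illustration, not part of the problem), when n = 2 the matrix is
\[
V(r_0, r_1, r_2) = \begin{pmatrix}
1 & r_0 & r_0^2 \\
1 & r_1 & r_1^2 \\
1 & r_2 & r_2^2
\end{pmatrix},
\]
with rows indexed by i and columns by the exponent j.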
a) When is V(r_0, . . . , r_n) invertible?
b) Now, consider the vector space V of functions from Z≥0 to k. Let r_0, . . . , r_n ∈ k as before, and
define f_i ∈ V such that
\[
f_i(m) = r_i^{\,m}
\]
for every m ∈ Z≥0. If V(r_0, . . . , r_n) is invertible, prove that {f_i}_{0≤i≤n} is linearly independent.

12. a) Make sure that you understand the definitions of external direct sum and external direct product
and that you can draw out the universal properties that they satisfy. You do not have to write
anything for this part. (A paraphrase of the two definitions is included after this problem for reference.)
b) If {V_i}_{i∈I} is a collection of R-vector spaces indexed by I, prove that if I is finite, then
\[
\bigoplus_{i \in I} V_i \;\cong\; \prod_{i \in I} V_i.
\]

c) Prove that if I is countably infinite, then the previous statement fails. (In fact, it fails whenever I
is infinite, but I am only asking you to prove the case where I is countably infinite.) You may use
the results of question 11.
d) If V has a countably infinite basis, prove that the dimension of the dual of V is uncountable and
thus V and its dual are not isomorphic.
Note: This statement highlights one way in which infinite-dimensional vector spaces are often pathological.
However, infinite-dimensional vector spaces in the wild often have a topology on them which
lets us define infinite sums and recover many of the nice properties of finite-dimensional vector
spaces. For example, a great deal of mathematics and physics relies on the fact that a certain type of
infinite-dimensional vector space known as a Hilbert space has a nice relationship to its continuous dual
(this statement is known as the Riesz representation theorem).
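For reference in problem 12 (a paraphrase in my own words; the textbook’s formulation may differ in details): given a family {V_i}_{i∈I} of R-vector spaces, the external direct product ∏_{i∈I} V_i consists of all tuples (v_i)_{i∈I} with v_i ∈ V_i, with componentwise addition and scalar multiplication, while the external direct sum ⊕_{i∈I} V_i is the subspace of those tuples in which all but finitely many of the v_i are zero.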
