QUANTUM NOTES: The Linear Algebra Brush Up You Just Needed
Budapest Semesters in Mathematics / Aquincum Institute of Technology
Preface
Over the years, what started as a handout of just a couple of pages kept getting steadily
longer and longer. So let me also use this preface to thank BSM student Jaclyn R.
Sanders, who gave a hand in writing up Chapter 3 of these notes.
Contents

2 Projections
  2.1 Geometric description
  2.2 Elementary properties
  2.3 Algebraic characterization

3 Spectral calculus
  3.1 Eigenspaces and diagonalizability
  3.2 Independence of eigenspaces
  3.3 Spectral projections
  3.4 Spectral decomposition
  3.5 Computing the spectral decomposition
  3.6 A first application
  3.7 Spectral calculus: functions of operators
  3.8 Further applications
Chapter 1

Linear maps and their matrices
v = c1 v1 + . . . + cn vn .
We shall say that the scalars c1 , . . . , cn are the coordinates of v in the basis B and write
(v)B ≡ (c1 , . . . , cn )ᵀ
(an n-tuple, thought of as a column vector).
Let V be an n-dimensional vector space over the field K (where for us K is either R or C)
and B = (b1 , . . . , bn ) a basis of V . Then the map
V ∋ v ↦ (v)B ∈ Kⁿ
is evidently a linear isomorphism between V and Kⁿ ; in what follows, we shall denote this
map by ιB . One may say that a basis of V allows us to identify V with Kⁿ . However, it
is important to note that this identification depends on the chosen basis: in general, the
n-tuple (v)B ∈ Kⁿ representing the vector v ≠ 0 will change as we replace our basis B by
another basis.
For two bases of the same vector space V , say B = (b1 , . . . , bn ) and F = (f1 , . . . , fn ), one
introduces the matrix of the change of basis B → F:
SF,B ≡ ( (b1 )F | . . . | (bn )F ),
i.e. SF,B is the n × n matrix whose k-th column lists the coordinates of the vector bk in the basis F.
Proof. Since both sides of the claimed equation depend linearly on v, it is enough to check
the equality for the cases when v ranges through the vectors of a basis. However, if v = bk ,
then (v)B is the n-tuple ek ∈ Kⁿ whose entries are all zero except the k-th one, which is
equal to 1 (indeed, bk equals 1 times itself plus zero times the other basis vectors of B).
It is elementary to check that if M is a matrix with n columns, then
M ek = the k-th column of M.
SF,B SB,F = In = SB,F SF,B , that is,
SF,B⁻¹ = SB,F .
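The relations above can be checked numerically. Below is a minimal numpy sketch using two arbitrarily chosen bases of R² (an illustrative choice, not an example from the text), written as matrices whose columns are the basis vectors; it uses that the coordinate map ιB amounts to solving B (v)B = v.

```python
import numpy as np

# Two bases of R^2, stored as columns (illustrative choice, not from the text).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns b1, b2
F = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # columns f1, f2

# The k-th column of S_{F,B} is (b_k)_F = solve(F, b_k); stacking all columns:
S_FB = np.linalg.solve(F, B)
S_BF = np.linalg.solve(B, F)

# S_{F,B} S_{B,F} = I = S_{B,F} S_{F,B}, i.e. S_{F,B}^{-1} = S_{B,F}.
assert np.allclose(S_FB @ S_BF, np.eye(2))
assert np.allclose(S_BF @ S_FB, np.eye(2))

# Change of coordinates: (v)_F = S_{F,B} (v)_B for any v.
v = np.array([3.0, -1.0])
assert np.allclose(S_FB @ np.linalg.solve(B, v), np.linalg.solve(F, v))
```

The final assertion checks the coordinate-change rule (v)F = SF,B (v)B on a sample vector, which is exactly the statement whose proof is sketched above.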
is called a (linear) subspace of V . (For brevity, we shall often drop the word “linear”
when it does not lead to confusion.)
Example. The nontrivial proper subspaces of R² are precisely the lines of R² passing through the origin.
It is rather clear that for two subspaces of the same vector space V — say W and U —
the intersection W ∩ U is again a subspace of V but the union W ∪ U in general is not so.
Instead, one usually works with the sum of subspaces:
W + U ≡ {w + u| w ∈ W, u ∈ U }
i.e. W + U is the set of vectors that can be written as the sum of a vector from W and
a vector from U . It is straightforward to check that unlike W ∪ U , the sum W + U is a
subspace. Moreover, since the zero vector 0 ∈ V is an element of both W and U and every
vector is the sum of itself and 0, we see that both W and U are contained in W + U :
W, U ⊂ W + U.
Actually it is easy to see that W + U is precisely the smallest linear subspace containing
both W and U .
Proposition 1.2.1. dim(W + U ) + dim(W ∩ U ) = dim(W ) + dim(U ).
Proof. Let B be a basis of W ∩ U . Since any set of linearly independent vectors can be
completed to a basis, one can find sets F and K such that B together with F forms a
basis of W , while B together with K forms a basis of U . The claim then follows, since
easy arguments show that B, F, K altogether form a basis of W + U .
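Proposition 1.2.1 can be sanity-checked numerically. A small sketch with numpy, using two illustrative subspaces of R⁴ given by spanning vectors (dimensions are computed as matrix ranks; the specific vectors are an assumption for illustration, not taken from the exercises):

```python
import numpy as np

def dim_span(vectors):
    """Dimension of the span of the given vectors (listed as rows), via matrix rank."""
    return int(np.linalg.matrix_rank(np.array(vectors, dtype=float)))

# Two subspaces of R^4 (illustrative spanning sets).
W = [[1, 0, 0, 0], [0, 1, 0, 0]]   # span{e1, e2}
U = [[0, 1, 0, 0], [0, 0, 1, 0]]   # span{e2, e3}

dim_W, dim_U = dim_span(W), dim_span(U)
dim_sum = dim_span(W + U)            # W + U is spanned by the union of the spanning sets
dim_cap = dim_W + dim_U - dim_sum    # forced by Proposition 1.2.1

assert (dim_W, dim_U, dim_sum) == (2, 2, 3)
assert dim_cap == 1                  # and indeed W ∩ U = span{e2} is 1-dimensional
```

Here the proposition predicts dim(W ∩ U) = 2 + 2 − 3 = 1, which matches the visible intersection span{e2}.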
A pair of subspaces W, U ⊂ V such that
• W ∩ U = {0},
• W + U = V
is said to be complementary.
Example. Any pair of lines of R² that intersect in the origin only is complementary.
Note that by proposition (1.2.1), in finite dimensions two subspaces W and U in V are
complementary if and only if W ∩ U = {0} and dim(W ) + dim(U ) = dim(V ). On the other
hand, the following characterization (whose proof is left to the reader) holds in infinite
dimensions as well.
6 CHAPTER 1. LINEAR MAPS AND THEIR MATRICES
Proposition 1.2.2. Two subspaces W, U of the vector space V are complementary if and
only if for every vector v ∈ V there exist a unique w ∈ W and a unique u ∈ U such that v = w + u.
We shall finish this section with the important observation that for every subspace there
exists another subspace so that the two are complementary. The reader should try to give
a formal proof of this statement by using the (more widely known) fact that every linearly
independent set can be completed to be a basis.
is said to be a linear map. When dealing with linear maps, one usually drops the brackets
and simply writes Av instead of A(v) to emphasize its similarity with multiplication. Note
that a linear map must take the zero vector into the zero vector.
Examples. Multiplication by a scalar, that is, the map v ↦ λv, is a linear map. (In
particular, the identity map v ↦ v is a linear map.) A rotation of R² about the origin is
also a linear map. On the other hand, a translation, i.e. a map of the form v ↦ v + a where
a is a (fixed) nonzero vector, is not a linear map.
We shall list some well-known elementary properties of linear maps which, though im-
portant, are easy to derive from definitions. (And so we shall leave their proofs to the
reader.)
• Composition of linear maps is again a linear map. Note that when A and B are
linear, one usually writes AB instead of A ◦ B, as if it were a multiplication.
• Lin(V, W ), i.e. the set of linear maps from V to W , is a vector space, as linear
combinations of linear maps from V to W are again linear maps from V to W .
Let V, W be vector spaces over the same field and A ∈ Lin(V, W ). Choose a subspace
U ⊂ V so that U and Ker(A) are complementary in V (as was mentioned, such a subspace
always exists). Every vector in V is of the form u + x for some u ∈ U and x ∈ Ker(A) and
A(u + x) = Au + Ax = Au + 0 = Au
Thus by restricting A onto U , its image does not shrink. On the other hand, the kernel of
this restriction is U ∩ Ker(A) = {0}, and hence this restriction is injective; so actually
it is a linear bijection between U and Im(A). Then, by applying Proposition 1.2.1, we get
the following conclusion: dim(Ker(A)) + dim(Im(A)) = dim(V ).
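The resulting dimension count can be illustrated numerically. A numpy sketch, using a randomly generated rank-deficient 4 × 4 matrix (the random construction is an assumption for illustration, not an example from the text); the nullity is read off from the number of numerically zero singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 4x4 matrix of rank 2: the product of a random 4x2 and a random 2x4 matrix.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))

rank = int(np.linalg.matrix_rank(A))  # dim Im(A)
# dim Ker(A): count the (numerically) zero singular values of A.
nullity = int(sum(s < 1e-10 for s in np.linalg.svd(A, compute_uv=False)))

assert rank == 2 and nullity == 2
assert rank + nullity == 4            # dim Im(A) + dim Ker(A) = dim V
```

The construction guarantees rank 2 (generically), so the kernel must pick up the remaining 2 dimensions.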
Lin(V ) ≡ Lin(V, V ).
In case of Kn (i.e. the space of n-tuples), we already have a concrete manner of handling
linear maps, since every element of Lin(Kn ) is a multiplication by a (uniquely determined)
n × n matrix. So when V is a “generic” vector space, we may begin by fixing a basis B of
V . Then by looking at vectors of V through their coordinates, an “abstract” linear map
A : V → V becomes a linear map between n-tuples; that is, an n × n matrix (A)B , which
we shall call the matrix of A in basis B. To put it with formulas: (A)B is the matrix for
which
(A)B (v)B = (Av)B (v ∈ V ).
As was already noted, when a matrix is applied to the n-tuple ek whose elements are all
zero except its k-th element which is 1, the result is simply the k-th column of the matrix.
Thus the k-th column of (A)B is
(Abk )B ,
where bk is the k-th element of the basis B. The above formula gives an actual way of
computing the matrix of a linear map in a given basis.
(XY )B (v)B = (XY v)B = (X(Y v))B = (X)B (Y v)B = (X)B (Y )B (v)B ,
showing that
(XY )B = (X)B (Y )B ,
where on the left-hand side we have the matrix of the composition XY ≡ X ◦ Y , whereas
on the right-hand side we have the (matrix) product of two matrices.
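Both the column-by-column recipe for (A)B and the product rule (XY)B = (X)B (Y)B can be verified numerically. A numpy sketch, with an illustrative (randomly generated, invertible) basis B of R³ and random maps X, Y; all choices are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Columns of B form a basis of R^3; adding 3I keeps it safely invertible.
B = rng.standard_normal((n, n)) + 3 * np.eye(n)

def matrix_in_basis(X, B):
    """(X)_B: its k-th column is (X b_k)_B, i.e. solve(B, X @ b_k)."""
    return np.linalg.solve(B, X @ B)

X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))

# (XY)_B = (X)_B (Y)_B
assert np.allclose(matrix_in_basis(X @ Y, B),
                   matrix_in_basis(X, B) @ matrix_in_basis(Y, B))
```

Note that `matrix_in_basis` is literally the formula of the text: solving B c = X bk for every basis vector at once.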
However, the very same linear map in the standard basis E = (e1 , e2 ) of R² , where
e1 = (1, 0)ᵀ and e2 = (0, 1)ᵀ ,
is given by the matrix of the 45° rotation,
( cos 45°   − sin 45° )   ( 1/√2   −1/√2 )
( sin 45°     cos 45° ) = ( 1/√2    1/√2 ).
By inspection, the two matrices have not a single common entry. Nevertheless, they share
certain features. For example, the 8th power of both matrices is the identity matrix. This
is of course something we should have expected, since by applying a 45° rotation 8 times,
we get back where we were: R⁸ = id, so in any basis F we have that
((R)F )⁸ = (R⁸)F = (id)F = I.
In general, for any A ∈ Lin(V ) and any two bases B and F, combining (A)F (v)F = (Av)F
with (v)F = SF,B (v)B (valid for all v ∈ V ) gives (A)F SF,B (v)B = SF,B (A)B (v)B ,
showing that
(A)F = S(A)B S⁻¹ ,
where S = SF,B = SB,F⁻¹ . So for example, using that the determinant of a product of square
matrices is the product of the respective determinants and that scalars — unlike matrices
— do commute,
det((A)F ) = det(S) det((A)B ) det(S⁻¹ ) = det((A)B ).
So the determinant of the matrix of A is independent of the chosen basis; hence it makes
sense to define the determinant of A to be the determinant of the matrix of A in some
basis:
det(A) := det((A)B ).
Similarly, we define the trace of A to be the trace of its matrix in some basis. Again, the
definition is good since this value is actually independent of the choice of basis:
Tr((A)F ) = Tr(S((A)B S⁻¹ )) = Tr(((A)B S⁻¹ )S) = Tr((A)B ).
Here we have used the well-known fact that the trace of the product of two matrices is
independent of their order:
Tr(M N ) = Tr(N M ).
(Note though that in the case of more than 2 matrices the order can matter: in general
Tr(XY Z) ≠ Tr(XZY ) — this is why we looked at the product S(A)B S⁻¹ as the product
of the two terms S and (A)B S⁻¹ .)
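The basis-independence of the determinant and the trace can be checked numerically. A numpy sketch with a random map and a random (invertible) change-of-basis matrix; both are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # invertible change-of-basis matrix

A_new = S @ A @ np.linalg.inv(S)   # the same map, in another basis

# det and trace are basis-independent quantities...
assert np.isclose(np.linalg.det(A_new), np.linalg.det(A))
assert np.isclose(np.trace(A_new), np.trace(A))

# ...the latter because Tr(MN) = Tr(NM) for any two square matrices:
M, N = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert np.isclose(np.trace(M @ N), np.trace(N @ M))
```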
We finish this section by noting some further invariants. First, the number of inde-
pendent columns in the matrix of A is easily seen to be nothing else than the dimension
of Im(A) (i.e. the rank of A); hence this is also an invariant (that is, basis-independent)
quantity. Second, if the matrix-function q is an invariant quantity, then so is M ↦ q(M ⁿ )
for any n ∈ N. (E.g. the trace of the square is also an invariant quantity.)
Exercises
E 1.1. Consider the linear subspaces U, W ⊂ R⁴ defined as U := Span{u, u′ } and W :=
Span{w, w′ }, where
u = (1, 2, 3, 4)ᵀ , u′ = (1, 1, 1, 1)ᵀ , w = (4, 1, 2, 1)ᵀ , w′ = (3, 2, 3, 3)ᵀ .
U := {x ∈ R⁴ | x1 + x4 = 3x2 − 2x3 = 0},
W := {x ∈ R⁴ | x1 + x2 + x3 + 6x4 = 3x3 + x4 = 0}.
form a basis in V3 , and that the derivation is a linear map from V3 to V3 . Write down the
matrix of the derivation in the given basis.
E 1.5. Let A, B : V → W be two linear maps, and suppose that dim(Im(A)) = dim(Im(B)) =
dim(Im(A + B)) = 1. Prove that either Im(A) = Im(B) or Ker(A) = Ker(B).
E 1.6. Let V be a finite dimensional vector space, and A ∈ Lin(V ) a linear map. Prove that
Ker(A) and Im(A) are complementary if and only if Ker(A² ) = Ker(A), which in turn holds
if and only if Im(A² ) = Im(A).
E 1.7. Given that both
( 2 4 0 )       ( 4  5 x )
( 0 6 2 )  and  ( 2  2 y )
( 4 0 2 )       ( 2 −1 z )
are matrices (in different bases) of the same linear map, determine the possible values of
x, y and z.
Chapter 2
Projections
Of course every column of this matrix is a multiple of the single column vector u, and the
rank of this matrix is 1; this is to say that Im(P ) = U . In particular, it cannot be inverted
and its determinant must be zero — and indeed this is so. However, we might wonder
about its trace value, which is precisely 1:
2/18 − 5/18 + 21/18 = 1.
Is this just some coincidence, or is there more to it? To answer the question, we shall make
use of the invariance of the trace value with respect to a change of basis. The matrix of P
in the standard basis was not really “nice”: it was full of values particular to the actual
case. However, suppose we pick a basis of U — in our case, since U is one dimensional,
the single vector u1 := u will do — and a basis (w1 , w2 ) of W (now consisting of 2 vectors,
since in our case dim(W ) = 2). Using that U and W are complementary, it is easy to show
that putting together the 2 bases results in a basis F := (u1 , w1 , w2 ) of the full space. Then,
since P u1 = u1 whereas P w1 = P w2 = 0, we have that
        ( 1 0 0 )
(P )F = ( 0 0 0 ).
        ( 0 0 0 )
This is why in our case we found the trace to be equal to one, and in general, by the above
sketched argument we can draw the following consequence.
Corollary 2.2.1. Let U and W be complementary subspaces, and P be the projection onto
U along W . Then Tr(P ) = dim(U ) = dim(Im(P )), i.e. the trace of P equals its rank.
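The corollary can be illustrated numerically by building a projection exactly as in the argument above: pick a basis of U, a basis of W, and conjugate the diagonal matrix (P)F back to the standard basis. The subspaces below are an illustrative choice, not the text's example:

```python
import numpy as np

# A 1-dimensional U = span{u1} and a 2-dimensional W = span{w1, w2} in R^3.
u1 = np.array([1.0, 2.0, 0.0])
w1 = np.array([0.0, 1.0, 1.0])
w2 = np.array([1.0, 0.0, 1.0])

F = np.column_stack([u1, w1, w2])   # the combined basis (u1, w1, w2); det(F) = 3 != 0
D = np.diag([1.0, 0.0, 0.0])        # (P)_F, as in the text
P = F @ D @ np.linalg.inv(F)        # matrix of P in the standard basis

assert np.allclose(P @ P, P)               # P is a projection
assert np.isclose(np.trace(P), 1.0)        # Tr(P) = dim(U) = 1
assert np.linalg.matrix_rank(P) == 1       # ... which equals the rank of P
```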
We finish this section by making just one more remark regarding projections. A pair
of complementary subspaces — say U and W — give rise to 2 projections: the one onto U
along W and the one onto W along U . Applying the first one on a vector v = u + w with
u ∈ U, w ∈ W would give u, whereas the second one would give w: thus the sum of the
two projections is the identity map. Hence if P is the projection onto U along W , then
the projection onto W along U is I − P .
v = P z = P 2 z = P (P z) = P v = 0,
showing that U ∩ W = {0}. Thus U and W are complementary and the decomposition we
started with: v = P v + (v − P v) is precisely the decomposition into the sum of a vector
from U and a vector from W .
Recall that the projection onto U along W would take v into the first term of this
decomposition; that is into P v. However, this is exactly what P does; hence P is the
projection onto U along W .
Note that in general — unlike with projections — for a linear map A ∈ Lin(V ) the
subspaces Im(A) and Ker(A) need not be complementary. For example, with the matrix
of A being
( 0 1 )
( 0 0 ),
rather than complementarity, we have Im(A) = Ker(A).
Exercises
E 2.1. Suppose A ∈ Lin(V ) is such that An = A for some natural number n. For each of
the following statements decide whether it follows:
• A is a projection.
E 2.2. Compute the matrix (in the standard basis of R4 ) of the projection onto
U = {x ∈ R4 : x1 + x2 = x3 + x4 = 0}
along
W = {x ∈ R4 : x1 − x3 = x1 + x2 + x4 = 0}.
Chapter 3

Spectral calculus
A nonzero vector v ∈ V satisfying
Av = λv
for some scalar λ is called an eigenvector of A. More precisely, in the above
situation we would say that the scalar λ is an eigenvalue of A, and that v
is an eigenvector associated to the eigenvalue λ. The set of eigenvalues of A is called
the spectrum of A and denoted by the symbol Sp(A). (Note however that in infinite
dimensional spaces by spectrum we do not mean the set containing the eigenvalues only.)
Suppose E = (e1 , . . . , en ) is a basis of V , and consider the matrix of A in the basis E.
It is easy to see that this matrix is diagonal if and only if the vectors e1 , . . . , en are all
eigenvectors of A. Thus A is diagonalizable (i.e. there exists a basis E such that the matrix
(A)E is diagonal) if and only if the eigenvectors of A span the full space V (since from a
spanning set of vectors one can always choose a basis).
Rewriting the equation Av = λv we have (A − λI)v = 0; thus λ is an eigenvalue of
A if and only if the operator A − λI is non-injective, which is equivalent to saying that its
determinant is zero. Thus the eigenvalues of A can be found by solving the equation
det(A − λI) = 0.
Vλ ≡ {v ∈ V | Av = λv}
is called the eigenspace of A associated to the eigenvalue λ. Using the notion of eigen-
spaces, diagonalizability can be expressed as the condition
∑λ∈Sp(A) Vλ = V.
(Note that here — and everywhere after where the summation symbol is used for linear
subspaces — the summation symbol refers to the usual addition operation between linear
subspaces.)
Let us see now some examples. Consider the linear operators A and B on C² and C³
respectively, given by the multiplication of the following matrices:
( 1 1 )        ( 1 1 1 )
( 0 1 )  and   ( 1 1 1 ).
               ( 1 1 1 )
We have that
det(A − λI) = det ( 1−λ   1  ) = (1 − λ)² ,
                  (  0   1−λ )
hence A has a single eigenvalue: λ = 1. On the other hand, by a straightforward computation
Av = (+1)v if and only if v = (z, 0)ᵀ for some z ∈ C. Hence its only eigenspace is
one-dimensional; its eigenvectors do not span C² and so A is not a diagonalizable operator.
By simple computation we find that B has two eigenvalues; Sp(B) = {0, 3}. Solving
the corresponding eigen-equations we get that for example the vectors
(1, −1, 0)ᵀ ,  (1, 0, −1)ᵀ ,  and  (1, 1, 1)ᵀ
are all eigenvectors for B. Since they are linearly independent, they must span C3 and
hence we find that B is diagonalizable.
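The claims about B are quickly verified with numpy: the three listed vectors are indeed eigenvectors (for the eigenvalues 0, 0 and 3), they are linearly independent, and in the basis they form the matrix of B is diagonal.

```python
import numpy as np

B = np.ones((3, 3))   # the all-ones matrix from the example

v1 = np.array([1.0, -1.0, 0.0])   # eigenvector for eigenvalue 0
v2 = np.array([1.0, 0.0, -1.0])   # eigenvector for eigenvalue 0
v3 = np.array([1.0, 1.0, 1.0])    # eigenvector for eigenvalue 3

assert np.allclose(B @ v1, 0 * v1)
assert np.allclose(B @ v2, 0 * v2)
assert np.allclose(B @ v3, 3 * v3)

# They are linearly independent, hence a basis F of C^3, and (B)_F is diagonal:
F = np.column_stack([v1, v2, v3])
assert np.linalg.matrix_rank(F) == 3
assert np.allclose(np.linalg.solve(F, B @ F), np.diag([0.0, 0.0, 3.0]))
```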
Though it can be deduced also by considering the characteristic polynomial, the above
fact also shows that the number of eigenvalues of A is less than or equal to the dimension
of V . However, we now also see that in case of equality A must be diagonalizable. Note
that this is a sufficient but not a necessary condition of diagonalizability; see the operator
B in the example of the previous section (or just consider that I has only a single eigenvalue
— namely the value 1 — but is of course diagonalizable).
Nevertheless, already this is enough to conclude that diagonalizable maps form a dense
set in Lin(V ); so in some sense “almost all” operators are diagonalizable. A matrix picked in
some random sense will have a random characteristic polynomial with the roots randomly
distributed on the complex plane: the chance that any two would coincide is zero. So from
an engineer’s point of view it is enough to discuss how to deal with diagonalizable maps.
Happily this is also the case from the physicist's point of view, because the types of maps
they are interested in (e.g. self-adjoint operators and unitary operators) all turn out to
be automatically diagonalizable. So in these notes we shall mainly be concerned with the
case of diagonalizable maps, and though we acknowledge that one can give matrices which
are not diagonalizable, we view this as some kind of “pathological phenomenon” of little interest.
The just proved statement also shows that eigenspaces of an operator are always inde-
pendent in the sense formulated below. (The details on how it follows from Prop. 3.2.1 are
left as an exercise.)
Corollary 3.2.2. For any λ ∈ Sp(A) we have that
Vλ ∩ ∑µ∈Sp(A):µ≠λ Vµ = {0}.
On the other hand, if we assume that A is diagonalizable, then we also have that
Vλ + ∑µ∈Sp(A):µ≠λ Vµ = ∑µ∈Sp(A) Vµ = V.
Thus, together they mean that for any λ ∈ Sp(A), the eigenspace Vλ and the “rest” are
complementary; more precisely, that
Vλ   and   ∑µ∈Sp(A):µ≠λ Vµ
are complementary subspaces.
Proof. The first claim is especially simple to justify. Indeed, if µ = λ then Pλ Pµ = Pµ² = Pµ ,
since we are dealing with projections. On the other hand, if λ ≠ µ then
Im(Pµ ) = Vµ ⊂ ∑α∈Sp(A):α≠λ Vα = Ker(Pλ ),
showing that in this case Pλ Pµ = 0. As for the second claim, suppose vλ ∈ Vλ . Then
Pλ vλ = vλ , and in general Pµ vλ = 0 for µ ≠ λ, since
Vλ ⊂ ∑α∈Sp(A):α≠µ Vα = Ker(Pµ ).
Thus if vλ ∈ Vλ , then
∑µ∈Sp(A) Pµ vλ = Pλ vλ = vλ = Ivλ ,
i.e. the linear operator I − ∑µ∈Sp(A) Pµ sends every eigenvector of A into the zero vector.
Since eigenvectors of A span the full space, the previous fact means that I − ∑µ∈Sp(A) Pµ = 0,
which is exactly what was claimed. The third claim can be shown in a similar manner;
here we shall not dwell further on details.
A = ∑λ∈H λPλ ,
where H is a finite set of scalar values and the Pλ are nonzero operators acting on V
satisfying the properties
• Pλ Pµ = δλ,µ Pµ ,
• ∑λ∈H Pλ = I.
Proof. Suppose λ ∈ Sp(A) and v ≠ 0 is such that Av = λv. Then taking an α ∈ H,
applying Pα on λv and using the assumed relation Pα Pµ = δα,µ Pα , we get that
λPα v = Pα Av = ∑µ∈H µPα Pµ v = αPα v,
implying that either λ = α or Pα v = 0. However, we also have that the nonzero vector v
can be expressed as
v = Iv = ∑α∈H Pα v.
Thus Pα v cannot be the zero vector for all α ∈ H, and so λ must be an element of H. Since
Pα v = 0 for all α ≠ λ, we further have that Pλ v = ∑α∈H Pα v = v and so v ∈ Im(Pλ ).
Now consider a value µ ∈ H. Since Pµ is assumed to be a nonzero projection, there
must exist a nonzero vector in its image v ∈ Im(Pµ ), v ≠ 0, and we have that Pµ v = v and
in general
Pα v = Pα Pµ v = δα,µ Pµ v = δα,µ v.
It follows that
Av = ∑α αPα v = ∑α αδα,µ v = µv,
p(x) = c0 + c1 x + c2 x² + . . . + ck xᵏ .
That is, if we know the spectral decomposition of A, we may use it for efficiently computing
polynomials of A.
If Sp(A) contains only a single element — say λ — then in order to have a spectral
decomposition, we must have A = λPλ where Pλ = I. That is, in this case A is either
a multiple of the identity or it does not have a spectral decomposition (i.e. it is not
diagonalizable).
What if Sp(A) contains more than just a single element; how do we find now the spectral
projection Pλ for a λ ∈ Sp(A)? Consider the polynomial p given by the formula
p(x) = ∏µ∈Sp(A):µ≠λ (x − µ)/(λ − µ).
It is easy to see that p(µ) = 0 for all µ ∈ Sp(A), µ ≠ λ, whereas p(λ) = 1. Thus
∏µ∈Sp(A):µ≠λ (A − µI)/(λ − µ) = p(A) = ∑α p(α)Pα = Pλ ,
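The product formula above turns directly into code. A numpy sketch, using the 2 × 2 matrix with spectrum {2, 5} that appears again in the applications section, and checking all the defining properties of a spectral decomposition:

```python
import numpy as np
from functools import reduce

A = np.array([[3.0, 2.0],
              [1.0, 4.0]])
spectrum = [2.0, 5.0]

def spectral_projection(A, lam, spectrum):
    """P_lam = product over mu in Sp(A), mu != lam, of (A - mu*I)/(lam - mu)."""
    n = A.shape[0]
    factors = [(A - mu * np.eye(n)) / (lam - mu)
               for mu in spectrum if mu != lam]
    return reduce(lambda X, Y: X @ Y, factors, np.eye(n))

P2 = spectral_projection(A, 2.0, spectrum)
P5 = spectral_projection(A, 5.0, spectrum)

assert np.allclose(P2 @ P2, P2) and np.allclose(P5 @ P5, P5)  # projections
assert np.allclose(P2 @ P5, np.zeros((2, 2)))                 # P_lam P_mu = 0
assert np.allclose(P2 + P5, np.eye(2))                        # they sum to I
assert np.allclose(2 * P2 + 5 * P5, A)                        # A = sum lam * P_lam
```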
Note that in the above example the trace of each spectral projection is 1, meaning that
they are projections onto one-dimensional subspaces. In fact, it could not be otherwise,
since we have 3 eigenvalues and we are in a 3-dimensional space. If for example B were
still acting on C³ and were still diagonalizable but had only 2 eigenvalues, then one of the
spectral projections would have trace 1 while the other would have trace equal to 2.
Although it is not the only way, as we shall see now, one can also use spectral decomposition
to find a non-recursive expression for such a sequence. We begin by collecting two
consecutive members of the sequence into a vector:
vn := (an , an+1 )ᵀ .
M = λ+ P+ + λ− P−
Using what we have learned about polynomials of a diagonalizable operator, we have that
M ⁿ = λ+ⁿ P+ + λ−ⁿ P− .
Thus
vn = (λ+ⁿ P+ + λ−ⁿ P− ) (0, 1)ᵀ
can be computed by substituting in the above expression the matrices of P+ , P− and the
actual values of λ+ , λ− . By executing the substitution we get that the first entry of the
vector vn — which is exactly the nth term of the Fibonacci sequence — is
an = (1/√5) ( ((1 + √5)/2)ⁿ − ((1 − √5)/2)ⁿ ) .
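The closed form just derived can be checked against the recursion itself. A short Python sketch (here the transfer matrix is taken to be M = [[0, 1], [1, 1]] with v0 = (0, 1)ᵀ, i.e. a0 = 0, a1 = 1 — an assumption consistent with the formula above, since the matrix itself is not shown in this chunk of the text):

```python
import math

sqrt5 = math.sqrt(5)
lam_plus, lam_minus = (1 + sqrt5) / 2, (1 - sqrt5) / 2

def fib_closed(n):
    """The non-recursive expression a_n = (lam_+^n - lam_-^n) / sqrt(5)."""
    return round((lam_plus ** n - lam_minus ** n) / sqrt5)

# Compare with the recursion a_0 = 0, a_1 = 1, a_{n+1} = a_n + a_{n-1}:
a, b = 0, 1
for n in range(20):
    assert fib_closed(n) == a
    a, b = b, a + b
```

The `round` only absorbs floating-point error; the expression itself is an exact integer for every n.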
• (f g)(A) = f (A)g(A),
Proof. It is straightforward to check both claims; here we shall only show the second one.
We have that
f (A)g(A) = (∑λ f (λ)Pλ )(∑µ g(µ)Pµ ) = ∑λ,µ f (λ)g(µ)Pλ Pµ
= ∑λ,µ f (λ)g(µ)δλ,µ Pλ = ∑λ f (λ)g(λ)Pλ = ∑λ (f g)(λ)Pλ = (f g)(A).
The previous proposition allows us to answer some further natural questions about our
definition of f (A). For example, when A is invertible, for a positive integer k we usually
introduce A⁻ᵏ as the k-th power of the inverse of A (which is of course nothing else than
the inverse of Aᵏ ). On the other hand, the scalar valued function
f : x ↦ x⁻ᵏ
have to do with f (A), which is defined via the spectral calculus (rather than an infinite
sum)? To answer, consider the polynomial pn defined by the formula
pn (x) = ∑k=0…n ck xᵏ .
Since pn is not an infinite power series anymore but just a polynomial, we have that
pn (A) = ∑k=0…n ck Aᵏ . On the other hand, if for every λ ∈ Sp(A)
∃ lim n→∞ pn (λ) = f (λ),
then pn (A) = ∑λ pn (λ)Pλ converges to
∑λ f (λ)Pλ = f (A)
in every topology in which addition of operators and multiplication of operators by scalars
is continuous. So for example, if A is a diagonalizable matrix, then every entry of the
matrix valued series
I + A + (1/2)A² + (1/3!)A³ + (1/4!)A⁴ + . . .
converges to the respective entry of e^A (where this latter is defined by spectral calculus).
By a similar argument it is also easy to show that in this case the matrix valued function
t ↦ e^{tA} is (entry-wise) differentiable, and that for the (again, entry-wise) derivative we
have that d/dt e^{tA} = e^{tA} A.
We conclude this section by one more important observation. Suppose H1 , H2 , . . . , Hk
are nonempty, disjoint subsets of Sp(A), where A is a diagonalizable operator with spectral
decomposition A = ∑λ λPλ . Then setting
Qj := ∑λ∈Hj Pλ   (j = 1, 2, . . . , k)
Indeed, the second relation is rather trivial, while the first one follows since if i ≠ j then
Pλ Pµ = 0 for all λ ∈ Hi and µ ∈ Hj (as Hi ∩ Hj = ∅), whereas for i = j we have
Qj² = ∑λ∈Hj ∑µ∈Hj Pλ Pµ = ∑λ,µ∈Hj Pλ Pµ = ∑λ,µ∈Hj δµ,λ Pλ = ∑λ∈Hj Pλ = Qj .
where
Q4 = (P−2 + P2 ), Q9 = P3 , Q25 = P5 .
It is also straightforward now to show the following statement, which we shall leave as an
exercise to prove.
x(t) = ea(t−t0 ) r.
So how about a similar problem but with more variables — say this one:
d/dt x1 (t) = 3x1 (t) + 2x2 (t),   x1 (t0 ) = r1 ,
d/dt x2 (t) = x1 (t) + 4x2 (t),   x2 (t0 ) = r2 .
What is now the pair of functions t ↦ (x1 (t), x2 (t))?
It is now easy to show that, similarly to the case with one variable, the solution of the
system is
(x1 (t), x2 (t))ᵀ = e^{M (t−t0 )} (r1 , r2 )ᵀ ,
where
M = ( 3 2 )
    ( 1 4 )
and the matrix e^{M (t−t0 )} is defined via spectral calculus. So indeed: such functions of
matrices can naturally occur.
To finish, let us compute the solution in an explicit manner. We have already found
the spectral decomposition of this matrix in an earlier section, so now we can immediately
write down the exponential in question:
e^{M (t−t0 )} = e^{2(t−t0 )} P2 + e^{5(t−t0 )} P5 ,
where
P2 = (  2/3  −2/3 )     P5 = ( 1/3  2/3 )
     ( −1/3   1/3 ),          ( 1/3  2/3 ).
Thus by substitution we finally get that the solution is given by the formulas
x1 (t) = e^{2(t−t0 )} ( (2/3)r1 − (2/3)r2 ) + e^{5(t−t0 )} ( (1/3)r1 + (2/3)r2 ),
x2 (t) = e^{2(t−t0 )} ( −(1/3)r1 + (1/3)r2 ) + e^{5(t−t0 )} ( (1/3)r1 + (2/3)r2 ).
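The explicit solution can be verified numerically: it must satisfy the initial condition and the differential equation x′(t) = M x(t). A numpy sketch (the derivative is checked with a symmetric difference quotient at an arbitrarily chosen time and initial vector):

```python
import numpy as np

M = np.array([[3.0, 2.0],
              [1.0, 4.0]])
P2 = (M - 5 * np.eye(2)) / (2 - 5)
P5 = (M - 2 * np.eye(2)) / (5 - 2)

def x(t, t0, r):
    """x(t) = e^{M(t - t0)} r, with the exponential via spectral calculus."""
    return (np.exp(2 * (t - t0)) * P2 + np.exp(5 * (t - t0)) * P5) @ r

t0, r = 0.0, np.array([1.0, -2.0])

# Initial condition: x(t0) = r.
assert np.allclose(x(t0, t0, r), r)

# The ODE x'(t) = M x(t), checked numerically at t = 0.7:
t, h = 0.7, 1e-6
derivative = (x(t + h, t0, r) - x(t - h, t0, r)) / (2 * h)
assert np.allclose(derivative, M @ x(t, t0, r), rtol=1e-5)
```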
Exercises
E 3.1. Suppose the sum of each row in a square-matrix is 1. Does this reveal something
about its eigenvalues and eigenvectors? Find all eigenvalues and eigenspaces of the matrix
(  0      2 − i   −1 + i )
( 2 + i    3      −4 − i )
( −1 − i  −4 + i    6    ).
E 3.2. Let A be a linear map such that A2 = A3 . Show that A is diagonalizable if and
only if A is a projection. Give an example for a linear map A that satisfies A2 = A3 but
is not a projection.
E 3.4. Use spectral calculus to find (at least one) matrix X satisfying the equation
X⁻¹ + 10X = ( 1  6 )
             ( 8 −1 ).
E 3.5. Let A, B ∈ Lin(V ) be diagonalizable. Prove that there exist two functions f, g and a
diagonalizable C ∈ Lin(V ) such that A = f (C), B = g(C), if and only if AB = BA.