
OPERATOR THEORY ON HILBERT SPACE

Class notes

John Petrovic
Contents

Chapter 1. Hilbert space 1

1.1. Definition and Properties 1

1.2. Orthogonality 3

1.3. Subspaces 7

1.4. Weak topology 9

Chapter 2. Operators on Hilbert Space 13

2.1. Definition and Examples 13

2.2. Adjoint 15

2.3. Operator topologies 17

2.4. Invariant and Reducing Subspaces 20

2.5. Finite rank operators 22

2.6. Compact Operators 23

2.7. Normal operators 27

Chapter 3. Spectrum 31

3.1. Invertibility 31

3.2. Spectrum 34

3.3. Parts of the spectrum 38

3.4. Spectrum of a compact operator 40

3.5. Spectrum of a normal operator 43



Chapter 4. Invariant subspaces 47

4.1. Compact operators 47

4.2. Line integrals 49

4.3. Invariant subspaces for compact operators 52

4.4. Normal operators 56

Chapter 5. Spectral radius algebras 64

5.1. Compact operators 64


CHAPTER 1

Hilbert space

1.1. Definition and Properties

In order to define Hilbert space H we need to specify several of its features. First, it is a complex vector

space — the field of scalars is C (complex numbers). [See Royden, p. 217.] Second, it is an inner product

space. This means that there is a complex valued function hx, yi defined on H × H with the properties that, for

all x, y, z ∈ H and α, β ∈ C:

(a) hαx + βy, zi = αhx, zi + βhy, zi; it is linear in the first argument;

(b) hx, yi = hy, xi; it is Hermitian symmetric;

(c) hx, xi ≥ 0; it is non-negative;

(d) hx, xi = 0 iff x = 0; it is positive.

In every inner product space it is possible to define a norm as $\|x\| = \langle x, x\rangle^{1/2}$.

Exercise 1.1.1. Prove that this is indeed a norm.

Finally, Hilbert space is complete in this norm (meaning: in the topology induced by this norm).

Example 1.1.1. $\mathbb{C}^n$ is an inner product space with $\langle x, y\rangle = \sum_{k=1}^n x_k\overline{y_k}$ and, consequently, the norm $\|x\| = \big(\sum_{k=1}^n |x_k|^2\big)^{1/2}$. Completeness: if $\{x^{(k)}\}_{k=1}^\infty$ is a Cauchy sequence in $\mathbb{C}^n$ (here $x^{(k)} = (x_1^{(k)}, x_2^{(k)}, \dots, x_n^{(k)})$) then so is $\{x_m^{(k)}\}_{k=1}^\infty$ for any fixed $m$, $1 \le m \le n$, and $\mathbb{C}$ is complete.

Example 1.1.2. Let $\mathcal{H}_0$ denote the collection of all complex sequences, i.e. functions $a : \mathbb{N} \to \mathbb{C}$, characterized by the fact that $a_n \ne 0$ for only finitely many positive integers $n$. Define the inner product on $\mathcal{H}_0$ by $\langle a, b\rangle = \sum_n a_n\overline{b_n}$. The space $\mathcal{H}_0$ is not complete in the induced norm. Indeed, the sequence $\{a^{(k)}\}_{k\in\mathbb{N}}$, defined by $a_n^{(k)} = 1/2^n$ if $n \le k$ and $a_n^{(k)} = 0$ if $n > k$, is a Cauchy sequence, but not convergent.

Example 1.1.3. Let $\ell^2$ denote the collection of all complex sequences $a = \{a_n\}_{n=1}^\infty$ such that $\sum_{n=1}^\infty |a_n|^2$ converges. Define the inner product on $\ell^2$ by $\langle a, b\rangle = \sum_{n=1}^\infty a_n\overline{b_n}$. Suppose that $\{a^{(k)}\}_{k=1}^\infty$ is a Cauchy sequence in $\ell^2$. Then so is $\{a_n^{(k)}\}_{k=1}^\infty$ for each $n$, hence there exists $a_n = \lim_{k\to\infty} a_n^{(k)}$. First we show that $a \in \ell^2$. Indeed, choose $K$ so that for $k \ge K$ we have $\|a^{(k)} - a^{(K)}\| \le 1$. Then, using Minkowski's Inequality for sequences (see Royden, p. 122), for any $N \in \mathbb{N}$,
$$\Big(\sum_{n=1}^N |a_n|^2\Big)^{1/2} \le \Big(\sum_{n=1}^N |a_n - a_n^{(K)}|^2\Big)^{1/2} + \Big(\sum_{n=1}^N |a_n^{(K)}|^2\Big)^{1/2} = \lim_{k\to\infty}\Big(\sum_{n=1}^N |a_n^{(k)} - a_n^{(K)}|^2\Big)^{1/2} + \Big(\sum_{n=1}^N |a_n^{(K)}|^2\Big)^{1/2} \le \limsup_{k\to\infty}\|a^{(k)} - a^{(K)}\| + \|a^{(K)}\| \le 1 + \|a^{(K)}\|.$$
Thus $a = \{a_n\} \in \ell^2$. Moreover, $\{a^{(k)}\}$ converges to $a$, i.e. $\lim_{k\to\infty}\|a - a^{(k)}\| = 0$. Let $\epsilon > 0$ and choose $M$ so that $k, j \ge M$ implies that $\|a^{(k)} - a^{(j)}\| < \epsilon$. For such $k \ge M$ and any $N$, we have
$$\sum_{n=1}^N |a_n - a_n^{(k)}|^2 = \lim_{j\to\infty}\sum_{n=1}^N |a_n^{(j)} - a_n^{(k)}|^2 \le \limsup_{j\to\infty}\|a^{(j)} - a^{(k)}\|^2 \le \epsilon^2.$$
Since $N$ is arbitrary, it follows that $\|a - a^{(k)}\| \le \epsilon$ and, therefore, $\ell^2$ is a Hilbert space.

Example 1.1.4. The space $L^2$ of functions $f : X \to \mathbb{C}$ such that $\int_X |f|^2\,d\mu < \infty$ (where $X$ is usually $[0,1]$ and $\mu$ Lebesgue measure). The inner product is defined by $\langle f, g\rangle = \int_X f\overline{g}\,d\mu$, and $L^2$ is complete by the Riesz–Fischer Theorem (see Royden, p. 125).

Example 1.1.5. The space $H^2$. Let $X = \mathbb{T}$ (the unit circle) and $\mu$ the normalized Lebesgue measure on $\mathbb{T}$. The Hardy space $H^2$ consists of those functions in $L^2(\mathbb{T})$ such that $\langle f, e^{int}\rangle = 0$ for $n = -1, -2, \dots$.

Some important facts.

Proposition 1.1.1 (Parallelogram Law). $\|x+y\|^2 + \|x-y\|^2 = 2\|x\|^2 + 2\|y\|^2$.

Proposition 1.1.2 (Polarization Identity). $4\langle x, y\rangle = \langle x+y, x+y\rangle - \langle x-y, x-y\rangle + i\langle x+iy, x+iy\rangle - i\langle x-iy, x-iy\rangle$.

Exercise 1.1.2. Prove Propositions 1.1.1 and 1.1.2.
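Exercise 1.1.2 can at least be sanity-checked numerically; the following sketch (NumPy, random complex vectors) verifies both identities up to rounding error.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

inner = lambda u, v: np.sum(u * np.conj(v))
nsq = lambda u: inner(u, u).real                       # ||u||^2

# Parallelogram Law
print(np.isclose(nsq(x + y) + nsq(x - y), 2 * nsq(x) + 2 * nsq(y)))

# Polarization Identity
rhs = nsq(x + y) - nsq(x - y) + 1j * nsq(x + 1j * y) - 1j * nsq(x - 1j * y)
print(np.isclose(4 * inner(x, y), rhs))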



Problem 1. Let k · k be a norm on Banach space X , and define hx, yi as in Polarization Identity. Assuming

that the norm satisfies the Parallelogram Law, prove that hx, yi defines an inner product.

1.2. Orthogonality

In Linear Algebra a basis of a vector space is defined as a minimal spanning set. In Hilbert space such a

definition is not very practical. It is hard to speak of minimality when a basis can be infinite. In fact, a basis can be uncountable, so if $\{e_i\}_{i\in I}$ is such a basis, what is the meaning of $\sum_{i\in I} x_ie_i$?

Definition 1.2.1. An orthonormal subset of Hilbert space H is a set E such that (a) kek = 1, for all e ∈ E;

(b) if e1 , e2 ∈ E and e1 6= e2 then he1 , e2 i = 0. An orthonormal basis in H is a maximal orthonormal set. We use

abbreviations o.n.s. and o.n.b. for orthonormal set and orthonormal basis, respectively.

Theorem 1.2.1. Every Hilbert space has an orthonormal basis.

Proof. Let e be a unit vector in H. Then E = {e} is an orthonormal set. Let M be the collection of all

orthonormal sets in H that contain E. By the Hausdorff Maximal Principle (Royden, p.25) there exists a maximal

chain C of such orthonormal sets, partially ordered by inclusion. Let N be the union of all elements of C. Then

N is a maximal orthonormal set, hence a basis of H. 

If the set {e} is replaced by any orthonormal set, the same proof yields a stronger result.

Theorem 1.2.2. Every orthonormal set in Hilbert space can be extended to an orthonormal basis.

Example 1.2.1. For k ∈ N, let ek denote the sequence with only one non-zero entry, lying in the kth position

and equal to 1. The set {ek }k∈N is an o.n.b. for `2 . (If a vector x ∈ `2 is orthogonal to all ek , then each of its

components is zero, so x = 0.)

Example 1.2.2. The set {e1 , e3 , e5 , . . . } is an orthonormal set in `2 but not a basis.

 
Example 1.2.3. The set $\big\{\frac{1}{\sqrt{2\pi}}, \frac{\cos t}{\sqrt{\pi}}, \frac{\sin t}{\sqrt{\pi}}, \frac{\cos 2t}{\sqrt{\pi}}, \frac{\sin 2t}{\sqrt{\pi}}, \dots\big\}$ is an o.n.b. in $L^2(-\pi, \pi)$.

Example 1.2.4. The set $\big\{\frac{1}{\sqrt{2\pi}}e^{int} : n \in \mathbb{Z}\big\}$ is another o.n.b. in $L^2(-\pi, \pi)$.

In Linear Algebra, if $\{e_i\}_{i\in I}$ is an o.n.b. then every vector $x$ can be written as $\sum_{i\in I}\langle x, e_i\rangle e_i$. In Hilbert space our first task is to make sense of this sum, since the index set $I$ need not be countable.

Theorem 1.2.3 (Bessel's Inequality). Let $\{e_i\}_{i=1}^k$ be an o.n.s. in $\mathcal{H}$, and let $x \in \mathcal{H}$. Then $\sum_{i=1}^k |\langle x, e_i\rangle|^2 \le \|x\|^2$.

Proof. If we write $x_i = \langle x, e_i\rangle$, then
$$0 \le \Big\|x - \sum_{i=1}^k x_ie_i\Big\|^2 = \Big\langle x - \sum_{i=1}^k x_ie_i,\, x - \sum_{i=1}^k x_ie_i\Big\rangle = \|x\|^2 - 2\,\mathrm{Re}\Big\langle x, \sum_{i=1}^k x_ie_i\Big\rangle + \Big\langle\sum_{i=1}^k x_ie_i, \sum_{j=1}^k x_je_j\Big\rangle$$
$$= \|x\|^2 - 2\,\mathrm{Re}\sum_{i=1}^k \overline{x_i}\langle x, e_i\rangle + \sum_{i=1}^k\sum_{j=1}^k x_i\overline{x_j}\langle e_i, e_j\rangle = \|x\|^2 - 2\,\mathrm{Re}\sum_{i=1}^k \overline{x_i}\,x_i + \sum_{i=1}^k x_i\overline{x_i} = \|x\|^2 - \sum_{i=1}^k |x_i|^2. \qquad\square$$

Corollary 1.2.4. Let $E = \{e_i\}_{i\in I}$ be an o.n.s. in $\mathcal{H}$, and let $x \in \mathcal{H}$. Then $\langle x, e_i\rangle \ne 0$ for at most a countable number of $i \in I$.

Proof. Let $x \in \mathcal{H}$ be fixed and let $E_n = \{e_i : |x_i| \ge 1/n\}$. If $e_{i_1}, e_{i_2}, \dots, e_{i_k} \in E_n$ then, by Bessel's Inequality,
$$\|x\|^2 \ge \sum_{j=1}^k |x_{i_j}|^2 \ge k/n^2.$$
So, for each $n \in \mathbb{N}$, $E_n$ is a finite set, and $\{e_i : \langle x, e_i\rangle \ne 0\} = \bigcup_n E_n$ is at most countable. $\square$

In view of Corollary 1.2.4, expressions like $\sum\langle x, e_i\rangle e_i$ turn out to be the usual infinite series. Our next task is to establish their convergence. The following lemma will be helpful in this direction.

Lemma 1.2.5. If $\{x_i\}_{i\in\mathbb{N}}$ is a sequence of complex numbers and $\{e_i\}_{i\in\mathbb{N}}$ is an o.n.s. in $\mathcal{H}$, then the series $\sum_{i\in\mathbb{N}} x_ie_i$ and $\sum_{i\in\mathbb{N}} |x_i|^2$ are equiconvergent.

Proof. Let $s_m$ and $\sigma_m$ denote the partial sums of $\sum_{i\in\mathbb{N}} x_ie_i$ and $\sum_{i\in\mathbb{N}} |x_i|^2$, respectively. Then
$$\|s_m - s_n\|^2 = \Big\|\sum_{i=n+1}^m x_ie_i\Big\|^2 = \Big\langle\sum_{i=n+1}^m x_ie_i, \sum_{j=n+1}^m x_je_j\Big\rangle = \sum_{i=n+1}^m |x_i|^2 = |\sigma_m - \sigma_n|,$$
so one sequence of partial sums is Cauchy iff the other is, i.e., the series are equiconvergent. $\square$

Now we can establish the convergence of $\sum_{i\in I}\langle x, e_i\rangle e_i$. We will use the notation $x_i = \langle x, e_i\rangle$ for the Fourier coefficients of $x \in \mathcal{H}$ relative to the fixed basis $\{e_i\}_{i\in I}$.

Corollary 1.2.6 (Parseval's Identity). Let $\{e_i\}_{i\in I}$ be an o.n.s. in $\mathcal{H}$, and let $x \in \mathcal{H}$. Then the series $\sum_{i\in I} x_ie_i$ and $\sum_{i\in I} |x_i|^2$ converge and $\|\sum_{i\in I} x_ie_i\|^2 = \sum_{i\in I} |x_i|^2$.

Proof. Since only a countable number of terms in each series is non-zero, we can rearrange them and consider the series $\sum_{i=1}^\infty x_ie_i$ and $\sum_{i=1}^\infty |x_i|^2$. The latter series converges by Bessel's Inequality, and Lemma 1.2.5 implies that the former series converges too. Moreover, their partial sums $s_m$ and $\sigma_m$ satisfy $\|s_m\|^2 = \sigma_m$, so the last assertion of the corollary follows by letting $m$ go to $\infty$. $\square$

Now we are in a position to show that, in Hilbert space, every o.n.b. indeed spans $\mathcal{H}$. Of course, the minimality is a direct consequence of the definition.

Theorem 1.2.7. Let $E = \{e_i\}_{i\in I}$ be an o.n.b. in $\mathcal{H}$. Then, for each $x \in \mathcal{H}$, $x = \sum_{i\in I} x_ie_i$, where $x_i = \langle x, e_i\rangle$.

Proof. Let $x_i = \langle x, e_i\rangle$ and $y = x - \sum_{i\in I} x_ie_i$ (well defined since the series converges). Then $\langle y, e_k\rangle = \langle x, e_k\rangle - \langle\sum_{i\in I} x_ie_i, e_k\rangle = 0$, for each $k \in I$, so $y \perp E$. If $y \ne 0$, then $E \cup \{y/\|y\|\}$ is an o.n.s., contradicting the maximality of $E$, so $y = 0$. $\square$
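In $\mathbb{C}^n$ these statements are easy to test numerically. The sketch below (NumPy) builds an orthonormal basis from the QR factorization of a random matrix, computes the Fourier coefficients $x_i = \langle x, e_i\rangle$, and checks Parseval's Identity and the expansion $x = \sum_i x_ie_i$.

import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(A)               # columns of Q form an o.n.b. of C^n
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

coeffs = np.array([np.vdot(Q[:, i], x) for i in range(n)])   # x_i = <x, e_i>
print(np.isclose(np.sum(np.abs(coeffs) ** 2), np.linalg.norm(x) ** 2))  # Parseval
print(np.allclose(Q @ coeffs, x))                            # x = sum_i x_i e_i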

The following is the analogue of a well known Linear Algebra fact. We use notation card I for the cardinal

number of the set I.

Theorem 1.2.8. Any two orthonormal bases {ei }i∈I and {fj }j∈J in H have the same cardinal number.

Proof. We will assume that both cardinal numbers are infinite. If either of them is finite, one knows from Linear Algebra that the other one is finite and equal to the first. Let $j \in J$ be fixed and let $I_j = \{i \in I : \langle f_j, e_i\rangle \ne 0\}$. By Corollary 1.2.4, $I_j$ is at most countable. Further, $\bigcup_{j\in J} I_j = I$. Indeed, if $i_0 \in I \setminus \bigcup_{j\in J} I_j$ then $\langle f_j, e_{i_0}\rangle = 0$ for all $j \in J$, so it would follow that $e_{i_0} = 0$. Since $\mathrm{card}\,I_j \le \aleph_0$ we see that $\mathrm{card}\,I \le \mathrm{card}\,J\cdot\aleph_0 = \mathrm{card}\,J$. Similarly, $\mathrm{card}\,J \le \mathrm{card}\,I$. By the Cantor–Bernstein Theorem (see, e.g., "Proofs from the Book", p. 90), $\mathrm{card}\,I = \mathrm{card}\,J$. $\square$

Definition 1.2.2. The dimension of Hilbert space H, denoted by dim H, is the cardinal number of a basis

of H.

In this course we will assume that dim H ≤ ℵ0 .

Exercise 1.2.1. If H is an infinite dimensional Hilbert space, then H is separable iff dim H = ℵ0 . [Given a

countable basis, use rational coefficients. Given a countable dense set, approximate each element of a basis close

enough to exclude all other basis elements.]

Next, we want to address the question: when can we identify two Hilbert spaces? We need a vector space

isomorphism (i.e., a linear bijection) that preserves the inner product.

Definition 1.2.3. If H and K are Hilbert spaces, an isomorphism is a linear surjection U : H → K such

that, for all x, y ∈ H, hU x, U yi = hx, yi. In this situation we say that H and K are isomorphic.

Exercise 1.2.2. Prove that hU x, U yi = hx, yi for all x, y ∈ H iff kU xk = kxk for all x ∈ H. Conclude that a

Hilbert space isomorphism is injective.

Theorem 1.2.9. Every separable Hilbert space of infinite dimension is isomorphic to `2 . Every Hilbert space

of finite dimension n is isomorphic to Cn .

Proof. We will assume that $\mathcal{H}$ is an infinite dimensional Hilbert space and leave the finite dimensional case as an exercise. Since $\mathcal{H}$ is separable, there exists an o.n.b. $\{e_n\}_{n=1}^\infty$. For $x \in \mathcal{H}$, let $x_i = \langle x, e_i\rangle$ and $U(x) = (x_1, x_2, x_3, \dots)$. By Parseval's Identity, the series $\sum_{i=1}^\infty |x_i|^2$ converges, so the sequence $(x_1, x_2, x_3, \dots)$ belongs to $\ell^2$. Thus $U$ is well-defined, linear (because the inner product is linear in the first argument), and isometric: $\|Ux\|^2 = \sum_{i=1}^\infty |x_i|^2 = \|x\|^2$. Finally, if $(y_1, y_2, y_3, \dots) \in \ell^2$ then $\sum_{i=1}^\infty |y_i|^2$ converges so, by Lemma 1.2.5, $\sum_{n=1}^\infty y_ne_n$ converges and $U(\sum_{n=1}^\infty y_ne_n) = (y_1, y_2, y_3, \dots)$. Thus, $U$ is surjective and the theorem is proved. $\square$

Exercise 1.2.3. Prove that every Hilbert space of finite dimension n is isomorphic to Cn .

Problem 2. Let H be a separable Hilbert space and M a subspace of H. Prove that M is a separable

Hilbert space.

Problem 3. The Haar system $\{\varphi_{m,n}\}$, $m \in \mathbb{N}$, $1 \le n \le 2^m$, is defined as:
$$\varphi_{m,n}(x) = \begin{cases} 2^{m/2}, & \text{if } \dfrac{n-1}{2^m} \le x \le \dfrac{n-1/2}{2^m},\\[4pt] -2^{m/2}, & \text{if } \dfrac{n-1/2}{2^m} \le x \le \dfrac{n}{2^m},\\[4pt] 0, & \text{if } x \notin \Big[\dfrac{n-1}{2^m}, \dfrac{n}{2^m}\Big]. \end{cases}$$
Prove that this system is an o.n.b. of $L^2[0,1]$.
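A discretized sanity check of the orthonormality claim in Problem 3 (a sketch only: it samples each $\varphi_{m,n}$ for small $m$ on a fine grid and approximates the $L^2[0,1]$ inner products by Riemann sums).

import numpy as np

N = 2 ** 12                      # grid size
t = (np.arange(N) + 0.5) / N     # midpoints of [0, 1]

def haar(m, n, x):
    # phi_{m,n}: +2^{m/2} on [(n-1)/2^m, (n-1/2)/2^m), -2^{m/2} on [(n-1/2)/2^m, n/2^m)
    lo, mid, hi = (n - 1) / 2 ** m, (n - 0.5) / 2 ** m, n / 2 ** m
    return 2 ** (m / 2) * ((x >= lo) & (x < mid)) - 2 ** (m / 2) * ((x >= mid) & (x < hi))

fs = [haar(m, n, t) for m in range(1, 4) for n in range(1, 2 ** m + 1)]
G = np.array([[np.mean(f * g) for g in fs] for f in fs])   # approximate Gram matrix
print(np.allclose(G, np.eye(len(fs))))                     # pairwise orthonormal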

1.3. Subspaces

Example 1.3.1. Let H = L2 [0, 1] and let G be a measurable subset of [0, 1]. Denote by L2 (G) the set of

functions in L2 that vanish outside of G. Then L2 (G) is a closed subspace of H. Further, if f ∈ L2 (G) and

g ∈ L2 (Gc ), then hf, gi = 0.

Definition 1.3.1. If M is a closed subspace of the Hilbert space H, then the orthogonal complement of M,

denoted M⊥ , is the set of vectors in H orthogonal to every vector in M.

Exercise 1.3.1. Prove that M⊥ is a closed subspace of H.

Theorem 1.3.1. Let M be a closed subspace of Hilbert space H, and let x ∈ H. Then there exist unique

vectors y in M and z in M⊥ so that x = y + z.


Proof. Let $\{e_i\}_{i\in I}$ and $\{f_j\}_{j\in J}$ be orthonormal bases for $\mathcal{M}$ and $\mathcal{M}^\perp$, respectively. Their union is an o.n.b. of $\mathcal{H}$, so $x = \sum_{i\in I}\langle x, e_i\rangle e_i + \sum_{j\in J}\langle x, f_j\rangle f_j$, and we define $y = \sum_{i\in I}\langle x, e_i\rangle e_i$, $z = \sum_{j\in J}\langle x, f_j\rangle f_j$. Then $y \in \mathcal{M}$, $z \in \mathcal{M}^\perp$, and $x = y + z$.

Suppose now that $x = y_1 + z_1 = y_2 + z_2$, where $y_1, y_2 \in \mathcal{M}$ and $z_1, z_2 \in \mathcal{M}^\perp$. Then $y_1 - y_2 = z_2 - z_1$ belongs to both $\mathcal{M}$ and $\mathcal{M}^\perp$, so $\langle y_1 - y_2, y_1 - y_2\rangle = 0$ and it follows that $y_1 = y_2$, and consequently $z_1 = z_2$. $\square$

Definition 1.3.2. In the situation described in Theorem 1.3.1 we say that H is the orthogonal direct sum of

M and M⊥ , and we write H = M ⊕ M⊥ . When z = x + y with x ∈ M and y ∈ M⊥ we often write z = x ⊕ y.

Theorem 1.3.1 allows us to define a map P : H → M by P x = y. It is called the orthogonal projection of H

onto M, and it is denoted by PM . Here are some of its properties.

Theorem 1.3.2. Let M be a closed subspace of Hilbert space H and let P be the orthogonal projection on

M. Then:

(a) P is a linear transformation;

(b) kP xk ≤ kxk, for all x ∈ H;

(c) P 2 = P ;

(d) Ker P = M⊥ and Ran P = M.

Proof. Let $\{e_i\}_{i\in I}$ and $\{f_j\}_{j\in J}$ be orthonormal bases for $\mathcal{M}$ and $\mathcal{M}^\perp$, respectively, and let $Q = I - P$ be the orthogonal projection onto $\mathcal{M}^\perp$. If $x', x'' \in \mathcal{H}$ and $\alpha', \alpha'' \in \mathbb{C}$, then $P(\alpha'x' + \alpha''x'') = \sum_{i\in I}\langle\alpha'x' + \alpha''x'', e_i\rangle e_i = \alpha'Px' + \alpha''Px''$, so (a) holds.

(b) If $x \in \mathcal{H}$, then $x = Px + Qx$ and $Px \perp Qx$. Therefore, $\|x\|^2 = \|Px\|^2 + \|Qx\|^2 \ge \|Px\|^2$.

(c) If $y \in \mathcal{M}$ then $Py = y$. Now, for any $x \in \mathcal{H}$, $Px \in \mathcal{M}$, so $P^2x = P(Px) = Px$.

(d) If $Px = 0$ then $x = Qx \in \mathcal{M}^\perp$. If $x \in \mathcal{M}^\perp$ then $Qx = x$, so $Px = 0$. The other assertion is obvious. $\square$

Problem 4. Prove that PM x is the unique point in M that is nearest to x, meaning that kx − PM xk =

inf{kx − hk : h ∈ M}.
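A finite dimensional sketch of Theorem 1.3.2 and Problem 4 (NumPy; $\mathcal{M}$ is taken to be the column space of a random matrix): it builds $P_\mathcal{M}$ from an orthonormal basis of $\mathcal{M}$ and checks $P^2 = P$, $\|Px\| \le \|x\|$, and the nearest-point property.

import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
Q, _ = np.linalg.qr(B)                      # o.n.b. of M = Ran B
P = Q @ Q.conj().T                          # orthogonal projection onto M

x = rng.standard_normal(6) + 1j * rng.standard_normal(6)
print(np.allclose(P @ P, P))                                  # P^2 = P
print(np.linalg.norm(P @ x) <= np.linalg.norm(x) + 1e-12)     # ||Px|| <= ||x||

# nearest point: ||x - Px|| <= ||x - h|| for a random h in M
h = Q @ (rng.standard_normal(3) + 1j * rng.standard_normal(3))
print(np.linalg.norm(x - P @ x) <= np.linalg.norm(x - h))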

Problem 5. In L2 [0, 1] find the orthogonal complement to the subspace consisting of:

(a) all polynomials in x;

(b) all polynomials in x2 ;

(c) all polynomials in x with the free term equal to 0;

(d) all polynomials in x with the sum of coefficients equal to 0.

Problem 6. If M and N are subspaces of Hilbert space that are orthogonal to each other, then the sum

M + N = {x + y : x ∈ M, y ∈ N } is a subspace. Show that the theorem is not true if M and N are either:

closed but not orthogonal or orthogonal but not closed.

1.4. Weak topology

Read Royden, page 236–238.

Example 1.4.1. Consider the sequence of functions $\{\cos nt\}_{n\in\mathbb{N}}$ in $L^1[0, 2\pi]$. It is easy to see that this sequence is not convergent. However, for any function $f \in L^\infty$, $\int_0^{2\pi} f(t)\cos nt\,dt \to 0$ as $n \to \infty$. Since $L^\infty$ is the dual space of $L^1$, we say that $\cos nt \to 0$ weakly, and we write $w\text{-}\lim_n \cos nt = 0$.

Example 1.4.2. Consider the sequence of functions $\{\cos nt\}_{n\in\mathbb{N}}$ in $L^\infty[0, 2\pi]$. Notice that, while not a convergent sequence, if $f \in L^1$ then $\int_0^{2\pi} f(t)\cos nt\,dt \to 0$ as $n \to \infty$. Since $L^\infty$ is the dual space of $L^1$, we say that $\cos nt \to 0$ in the weak$^*$ topology.

In a Banach space X it is useful to consider three topologies: the norm topology, induced by the norm; weak

topology — the smallest topology in which all bounded linear functionals on X are continuous; weak∗ topology

(meaningful when X is the dual space of Y so that Y ⊂ X ∗ ) — the smallest topology in which some bounded

linear functionals on X are continuous (those that can be identified as elements of Y). In order to discuss these

topologies (and understand their role), we need to find out what bounded linear functionals on Hilbert space H

look like.
Theorem 1.4.1 (Riesz Representation Theorem). If $L$ is a bounded linear functional on $\mathcal{H}$, then there is a unique vector $y \in \mathcal{H}$ such that $L(x) = \langle x, y\rangle$ for every $x \in \mathcal{H}$. Moreover, $\|L\| = \|y\|$.

Proof. Assuming that such a $y$ exists, we can write it as $y = \sum_{i\in\mathbb{N}} y_ie_i$ relative to a fixed o.n.b. $\{e_i\}_{i\in\mathbb{N}}$. Then $y_i = \langle y, e_i\rangle = \overline{\langle e_i, y\rangle} = \overline{L(e_i)}$. Therefore, we define $y = \sum_{i\in\mathbb{N}} \overline{L(e_i)}\,e_i$, and all that remains is to prove the convergence of this series. Let $s_n = \sum_{i=1}^n \overline{L(e_i)}\,e_i$. Then $L(s_n) = \sum_{i=1}^n \overline{L(e_i)}\,L(e_i) = \|s_n\|^2$, so $\|s_n\|^2 \le \|L\|\,\|s_n\|$, from which it follows that $\|s_n\| \le \|L\|$. Thus $\sum_{i=1}^\infty |L(e_i)|^2$ converges and, by Lemma 1.2.5, the series defining $y$ converges. Since $L$ and $\langle\cdot, y\rangle$ are continuous and agree on every finite linear combination of the $e_i$, we get $L(x) = \langle x, y\rangle$ for all $x \in \mathcal{H}$. $\square$

We see that if $L \in \mathcal{H}^*$, the dual space of $\mathcal{H}$, then $L = L_y$, where $L_y(x) = \langle x, y\rangle$. The mapping $\Phi : \mathcal{H} \to \mathcal{H}^*$ defined by $\Phi(y) = L_y$ is a norm preserving surjection. It is conjugate linear: $\Phi(\alpha_1y_1 + \alpha_2y_2) = \overline{\alpha_1}\,\Phi(y_1) + \overline{\alpha_2}\,\Phi(y_2)$. Nevertheless, we identify $\mathcal{H}^*$

with H. Consequently, H is reflexive (i.e., H∗∗ = H) so the weak∗ and weak topologies on H coincide. Therefore,

we will work with 2 topologies: weak and norm induced. The absence of a qualifier will always mean that it is

the latter.

Exercise 1.4.1. Prove that the weak topology is weaker than the norm topology, i.e., if G is a weakly open

set then G is an open set.

Example 1.4.3. If {en }n∈N is an orthonormal sequence in H then w − lim en = 0 but the sequence is not

convergent.

Exercise 1.4.2. Prove that the Hilbert space norm is continuous but not weakly continuous.

The following result shows why weak topology is important. [See Royden, p. 237]

Theorem 1.4.2 (Banach-Alaoglu). The unit ball {x ∈ H : kxk ≤ 1} in Hilbert space H is weakly compact.

Remark 1.4.1. The unit ball B1 of H is NOT compact (assuming that H is infinite dimensional). Reason:

if {en }n∈N is an o.n.b. then the set {e1 , e2 , e3 , . . . } is closed but not totally bounded, hence not compact.

Exercise 1.4.3. Prove that if a bounded set in H is weakly closed then it is weakly compact.

In spite of the fact that the weak topology is weaker than the norm topology, some of the standard results

remain true.

Theorem 1.4.3. A weakly convergent sequence is bounded.

Proof. Suppose that xn is a weakly convergent sequence. Then, for any y ∈ H, the sequence hxn , yi is a

convergent sequence of complex numbers, which implies that it is bounded. In other words, for any y ∈ H there

exists C = C(y) > 0 such that |hxn , yi| ≤ C. This means that, for each n ∈ N, xn can be viewed as a bounded

linear functional on H. By the Uniform Boundedness Principle (Royden, p. 232), these functionals are uniformly

bounded, i.e., there exists M > 0 such that, for all n ∈ N, kxn k ≤ M . 

Although a weakly convergent sequence need not be convergent, there are situations when it is.

Theorem 1.4.4. If $\{x_n\}_{n\in\mathbb{N}}$ is a weakly convergent sequence in a compact set $K$ then it is convergent.

Proof. Since $\{x_n\}_{n\in\mathbb{N}} \subset K$, it has an accumulation point $x'$ and a subsequence $\{x'_n\}$ converging to $x'$. If $\{x_n\}$ had another accumulation point $x''$, there would be another subsequence $\{x''_n\}$ converging to $x''$. It would follow that $w\text{-}\lim x'_n = x'$ and $w\text{-}\lim x''_n = x''$. Since $\{x_n\}$ is weakly convergent this implies that $x' = x''$, so $\{x_n\}$ has only one accumulation point, namely the limit. $\square$

By definition, the weak topology $\mathcal{W}$ is the smallest one in which every bounded linear functional $L$ on $\mathcal{H}$ is continuous. This means that, for any such $L$ and any open set $G$ in the complex plane, $L^{-1}(G) \in \mathcal{W}$. Since open disks form a base of the usual topology in $\mathbb{C}$, it suffices to require that $L^{-1}(G) \in \mathcal{W}$ for each open disk $G$. Notice that $x \in L^{-1}(G)$ iff $L(x) \in G$, so if $G = \{z : |z - z_0| < r\}$ and $z_0 = L(x_0)$ then $x \in L^{-1}(G)$ iff $|L(x - x_0)| < r$. Now the Riesz Representation Theorem implies that $L^{-1}(G) = \{x \in \mathcal{H} : |\langle x - x_0, y\rangle| < r\}$ for some $y \in \mathcal{H}$. We conclude that a subbase of $\mathcal{W}$ consists of the sets $W = W(x_0; y, r) = \{x \in \mathcal{H} : |\langle x - x_0, y\rangle| < r\}$.

Exercise 1.4.4. Prove that a bounded linear functional L is continuous in a topology T iff L−1 (G) ∈ T for

every open disk G.



Problem 7. Prove that a subspace of Hilbert space is closed iff it is weakly closed.

Problem 8. Prove that Hilbert space is weakly complete.

Problem 9. Let {xn }n∈N be a sequence in Hilbert space with the property that kxn k = 1, for all n, and

hxm , xn i = c, if m 6= n. Prove that {xn }n∈N is weakly convergent.

Problem 10. Find the weak closure of the unit sphere in Hilbert space.
CHAPTER 2

Operators on Hilbert Space

“Nobody, except topologists, is interested in problems about Hilbert space; the people who work in Hilbert

space are interested in problems about operators”.

Paul Halmos

2.1. Definition and Examples

Read Section 10.2 in Royden’s book. Operator always means linear and bounded. The algebra of all bounded

linear operators on H is denoted by L(H).

Example 2.1.1. Let $\mathcal{H} = \mathbb{C}^n$ and $A = [a_{ij}]$ an $n\times n$ matrix. The operator of multiplication by $A$ is linear and bounded. Indeed, for $x = (x_1, x_2, \dots, x_n)$ and $M = \big(\sum_{i=1}^n\sum_{j=1}^n |a_{ij}|^2\big)^{1/2}$, the Cauchy–Schwarz inequality gives
$$\|Ax\|^2 = \sum_{i=1}^n\Big|\sum_{j=1}^n a_{ij}x_j\Big|^2 \le \sum_{i=1}^n\Big(\sum_{j=1}^n |a_{ij}|^2\Big)\Big(\sum_{j=1}^n |x_j|^2\Big) = M^2\|x\|^2,$$
so $\|A\| \le M$.
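A quick numerical comparison (NumPy, random matrix) of the operator norm $\|A\|$ with the bound $M = \big(\sum_{i,j}|a_{ij}|^2\big)^{1/2}$ from the estimate above.

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

op_norm = np.linalg.norm(A, 2)        # ||A|| = largest singular value
M = np.linalg.norm(A, 'fro')          # (sum_{i,j} |a_ij|^2)^{1/2}
print(op_norm <= M + 1e-12, op_norm, M)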

Example 2.1.2. Let $\mathcal{H} = \ell^2$ and $A = [a_{ij}]_{i,j=1}^\infty$, where $a_{ij} = c_i$ if $i = j$ and $a_{ij} = 0$ if $i \ne j$. We call such a matrix diagonal and denote it by $\mathrm{diag}(c_1, c_2, \dots)$, or $\mathrm{diag}(c_n)$. The operator $A$ (or, more precisely, the operator of multiplication by $A$) is bounded iff $c = (c_1, c_2, \dots) \in \ell^\infty$ (i.e., when $c$ is a bounded sequence). Indeed, let $x = (x_1, x_2, \dots) \in \ell^2$, so $Ax = (c_1x_1, c_2x_2, \dots)$ and $\|Ax\|^2 = \sum_{i=1}^\infty |c_ix_i|^2$. If $|c_i| \le M$, $i \in \mathbb{N}$, then $\|Ax\|^2 \le M^2\sum_{i=1}^\infty |x_i|^2 = M^2\|x\|^2$, so $A$ is bounded. On the other hand, if $c \notin \ell^\infty$, then for each $n$ there exists $i_n$ so that $|c_{i_n}| \ge n$. Then $\|Ae_{i_n}\| = \|c_{i_n}e_{i_n}\| \ge n \to \infty$ and $A$ is unbounded.

Remark 2.1.1. It is extremely hard to decide, in general, whether an operator A is bounded just by studying

its matrix [hAej , ei i]∞


i,j=1 .


Example 2.1.3. Let H = `2 and let S be the unilateral shift, defined by S(x1 , x2 , . . . ) = (0, x1 , x2 , . . . ).

Notice that kS(x1 , x2 , . . . )k2 = 02 + |x1 |2 + |x2 |2 + · · · = kxk2 so kSk = 1. In fact, S is an isometry, hence

injective, but it is not surjective!

Example 2.1.4 (Multiplication on $L^2$). Let $h$ be a measurable function and define $M_hf$, for $f \in L^2$, by $(M_hf)(t) = h(t)f(t)$. If $h \in L^\infty$ (essentially bounded functions; see Royden, p. 118), then
$$\|M_hf\|^2 = \int |hf|^2 \le \|h\|_\infty^2\int |f|^2 = \|h\|_\infty^2\|f\|^2,$$
so $M_h$ is a bounded operator on $L^2$ and $\|M_h\| \le \|h\|_\infty$. On the other hand, for $\epsilon > 0$, there exists a set $C \subset [0,1]$ of positive measure so that $|h(t)| \ge \|h\|_\infty - \epsilon$ for $t \in C$. If $f = \chi_C$ then
$$\|M_hf\|^2 = \int |hf|^2 = \int_C |h|^2 \ge (\|h\|_\infty - \epsilon)^2\mu(C) = (\|h\|_\infty - \epsilon)^2\|f\|^2,$$
and it follows that $\|M_h\| \ge \|h\|_\infty - \epsilon$. We conclude that $\|M_h\| = \|h\|_\infty$ and $M_h$ is bounded iff $h \in L^\infty$.

Example 2.1.5 (Integral operators on $L^2$). Let $K : [0,1]\times[0,1] \to \mathbb{C}$ be measurable and square integrable with respect to planar Lebesgue measure. We define the operator $T_K$ by $(T_Kf)(x) = \int_0^1 K(x,y)f(y)\,dy$. Now
$$\|T_Kf\|^2 = \int_0^1 |T_Kf(x)|^2\,dx = \int_0^1\Big|\int_0^1 K(x,y)f(y)\,dy\Big|^2dx \le \int_0^1\Big(\int_0^1 |K(x,y)f(y)|\,dy\Big)^2dx$$
$$\le \int_0^1\Big(\int_0^1 |K(x,y)|^2\,dy\Big)\Big(\int_0^1 |f(y)|^2\,dy\Big)dx = \|f\|^2\int_0^1\!\!\int_0^1 |K(x,y)|^2\,dy\,dx.$$
Therefore, $T_K$ is bounded and $\|T_K\| \le \big\{\int_0^1\!\int_0^1 |K(x,y)|^2\,dy\,dx\big\}^{1/2}$.
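A discretized sketch of this estimate (NumPy): the kernel $K(x,y) = x + y$ (an arbitrary choice) is sampled on an $N\times N$ grid, $T_K$ becomes the matrix $\frac{1}{N}[K(x_i, y_j)]$, and its operator norm is compared with the square-integral bound.

import numpy as np

N = 200
x = (np.arange(N) + 0.5) / N
K = np.add.outer(x, x)              # kernel K(x, y) = x + y sampled on the grid
T = K / N                           # (T f)(x_i) ~ (1/N) sum_j K(x_i, y_j) f(y_j)

op_norm = np.linalg.norm(T, 2)
hs_bound = np.sqrt(np.sum(np.abs(K) ** 2) / N ** 2)   # ~ (double integral of |K|^2)^{1/2}
print(op_norm <= hs_bound + 1e-12, op_norm, hs_bound)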

Example 2.1.6 (Weighted shifts). Let H = `2 and let {cn }n∈N be a bounded sequence of complex numbers.

A weighted shift W on `2 is defined by W (x1 , x2 , . . . ) = (0, c1 x1 , c2 x2 , . . . ). It can be written as W = S diag(cn )

so it is a bounded operator and kW k = kdiag(cn )k.

In some situations it is useful to have an alternate formula for the operator norm. In what follows we will

use notation B1 for the closed unit ball of H, i.e. B1 = {x ∈ H : kxk ≤ 1}.

Proposition 2.1.1. Let T be linear operator on Hilbert space. Then kT k = sup{|hT x, yi| : x, y ∈ B1 }.

Proof. Let $\alpha$ denote the supremum above, and let us assume that $T \ne 0$ (otherwise there is nothing to prove). Clearly, for $x, y \in B_1$, $|\langle Tx, y\rangle| \le \|T\|$, so $\alpha \le \|T\|$. In the other direction,
$$\alpha \ge \sup\Big\{\Big|\Big\langle Tx, \frac{Tx}{\|Tx\|}\Big\rangle\Big| : x \in B_1,\ Tx \ne 0\Big\} = \sup\{\|Tx\| : x \in B_1,\ Tx \ne 0\} = \|T\|,$$
and the proof is complete. $\square$

2.2. Adjoint

In Linear Algebra we learn that the column space of a matrix $A = [a_{ij}]_{i,j=1}^n$ and the null space of its transpose $A^T$ are orthogonal complements in $\mathbb{R}^n$. In $\mathbb{C}^n$, $A^T$ needs to be replaced by $A^* = [\overline{a_{ji}}]_{i,j=1}^n$. In this situation,
$$(2.1)\qquad \langle Ax, y\rangle = \langle x, A^*y\rangle.$$

Exercise 2.2.1. Prove that, if A is an n × n matrix and x, y ∈ Cn , then hAx, yi = hx, A∗ yi.
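Exercise 2.2.1 in NumPy terms: $A^*$ is the conjugate transpose, and the sketch below checks $\langle Ax, y\rangle = \langle x, A^*y\rangle$ for random data.

import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

inner = lambda u, v: np.sum(u * np.conj(v))   # <u, v>, linear in the first argument
A_star = A.conj().T
print(np.isclose(inner(A @ x, y), inner(x, A_star @ y)))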

Example 2.2.1. Let $h \in L^\infty$ and let $M_h$ be the operator of multiplication on $L^2$. Then $(M_h)^* = M_{\overline{h}}$.

The following result will show that a relation (2.1) is available for any operator.

Proposition 2.2.1. If T is an operator on H then there exists a unique operator S on H such that hT x, yi =

hx, Syi, for all x, y ∈ H.

Proof. Let $y \in \mathcal{H}$ be fixed. Then $\varphi(x) = \langle Tx, y\rangle$ is a bounded linear functional on $\mathcal{H}$. By the Riesz Representation Theorem there exists a unique $z \in \mathcal{H}$ such that $\varphi(x) = \langle x, z\rangle$, for all $x \in \mathcal{H}$. Define $Sy = z$. Then $\langle Tx, y\rangle = \langle x, Sy\rangle$. To show that $S$ is linear, let $Sy_1 = z_1$, $Sy_2 = z_2$, and let $x \in \mathcal{H}$. Then
$$\langle x, S(\alpha_1y_1 + \alpha_2y_2)\rangle = \langle Tx, \alpha_1y_1 + \alpha_2y_2\rangle = \overline{\alpha_1}\langle Tx, y_1\rangle + \overline{\alpha_2}\langle Tx, y_2\rangle = \overline{\alpha_1}\langle x, Sy_1\rangle + \overline{\alpha_2}\langle x, Sy_2\rangle = \langle x, \alpha_1Sy_1 + \alpha_2Sy_2\rangle.$$
By the uniqueness part of the Riesz Representation Theorem, $S$ is linear. That $S$ is unique can be deduced by contradiction: if $\langle x, Sy\rangle = \langle x, S'y\rangle$ for all $x, y \in \mathcal{H}$ then $\langle x, Sy - S'y\rangle = 0$ for all $x$, which implies that $Sy - S'y = 0$ for all $y$, hence $S = S'$. Finally, $S$ is bounded: $\|Sy\|^2 = \langle Sy, Sy\rangle = \langle TSy, y\rangle \le \|TSy\|\,\|y\| \le \|T\|\,\|Sy\|\,\|y\|$, so $\|Sy\| \le \|T\|\,\|y\|$ and $\|S\| \le \|T\|$. $\square$

Definition 2.2.1. If T ∈ L(H) then the adjoint of T , denoted T ∗ , is the unique operator on H satisfying

hT x, yi = hx, T ∗ yi, for all x, y ∈ H.

Here are some of the basic properties of the involution $T \mapsto T^*$.

Proposition 2.2.2.

(a) $I^* = I$;

(b) $T^{**} = (T^*)^* = T$;

(c) $\|T^*\| = \|T\|$;

(d) $(\alpha_1T_1 + \alpha_2T_2)^* = \overline{\alpha_1}T_1^* + \overline{\alpha_2}T_2^*$;

(e) $(T_1T_2)^* = T_2^*T_1^*$;

(f) if $T$ is invertible then so is $T^*$ and $(T^*)^{-1} = (T^{-1})^*$;

(g) $\|T\|^2 = \|T^*T\|$.

Proof. The assertion (a) is obvious and (b) follows from $\langle x, T^{**}y\rangle = \langle T^*x, y\rangle = \overline{\langle y, T^*x\rangle} = \overline{\langle Ty, x\rangle} = \langle x, Ty\rangle$. It was shown in the proof of Proposition 2.2.1 that $\|T^*\| \le \|T\|$, so $\|T\| = \|T^{**}\| \le \|T^*\| \le \|T\|$ and (c) follows from (b). We leave (d) as an exercise and notice that $\langle x, (T_1T_2)^*y\rangle = \langle T_1T_2x, y\rangle = \langle T_2x, T_1^*y\rangle = \langle x, T_2^*T_1^*y\rangle$ establishes (e). As a consequence of (a) and (e), $T^*(T^{-1})^* = (T^{-1}T)^* = I$ and $(T^{-1})^*T^* = (TT^{-1})^* = I$, which is (f). Finally, $\|T^*T\| \le \|T^*\|\,\|T\| = \|T\|^2$ and, to prove the opposite inequality, let $\epsilon > 0$ and let $x$ be a unit vector such that $\|Tx\| \ge \|T\| - \epsilon$. Then $\|T^*T\| \ge \|T^*Tx\| \ge \langle T^*Tx, x\rangle = \|Tx\|^2 \ge (\|T\| - \epsilon)^2$, and (g) is proved. $\square$

Example 2.2.2. Let $\mathcal{H} = \ell^2$ and let $S$ be the unilateral shift (see Example 2.1.3). Then $S^*(x_1, x_2, \dots) = (x_2, x_3, \dots)$. The operator $S^*$ is called the backward shift.

Example 2.2.3. Let $T_K$ be the integral operator on $L^2$ (see Example 2.1.5). Then $(T_K)^* = T_{K^*}$, where $K^*(x, y) = \overline{K(y, x)}$.

We now give the Hilbert space formulation of the relation with which we have opened this section.

Theorem 2.2.3. If T is an operator on Hilbert space H then Ker T = (Ran T ∗ )⊥ .

Proof. Let x ∈ Ker T and let y ∈ Ran T ∗ . Then there exists z ∈ H such that y = T ∗ z. Therefore

hx, yi = hx, T ∗ zi = hT x, zi = 0 so x ∈ (Ran T ∗ )⊥ . In the other direction, if x ∈ (Ran T ∗ )⊥ and z ∈ H, then

hT x, zi = hx, T ∗ zi = 0. Taking z = T x we see that T x = 0, and the proof is complete. 

We notice that, for T ∈ L(H) and x, y ∈ H, the expression hT x, yi is a form that is linear in the first and

conjugate linear in the second argument. It turns out that this is sufficient for a polarization identity.

Proposition 2.2.4 (Second Polarization Identity).
$$4\langle Tx, y\rangle = \langle T(x+y), x+y\rangle - \langle T(x-y), x-y\rangle + i\langle T(x+iy), x+iy\rangle - i\langle T(x-iy), x-iy\rangle.$$

Exercise 2.2.2. Prove Second Polarization Identity.

2.3. Operator topologies

In this section we take a look at the algebra L(H). It has three useful topologies which lead to 3 different

types of convergence.

Definition 2.3.1. A sequence of operators $T_n \in \mathcal{L}(\mathcal{H})$ converges uniformly (or in norm) to an operator $T$ if $\|T_n - T\| \to 0$, $n \to \infty$. A sequence of operators $T_n \in \mathcal{L}(\mathcal{H})$ converges strongly to an operator $T$ if $\|T_nx - Tx\| \to 0$, $n \to \infty$, for all $x \in \mathcal{H}$. A sequence of operators $T_n \in \mathcal{L}(\mathcal{H})$ converges weakly to an operator $T$ if $\langle T_nx - Tx, y\rangle \to 0$, $n \to \infty$, for any $x, y \in \mathcal{H}$.

It follows from the definition that the weak topology is the weakest of the three, while the norm topology (a.k.a. the uniform topology) is the strongest. Are they different?

Proposition 2.3.1. The operator norm is continuous with respect to the uniform topology but discontinuous

with respect to the strong and weak topologies.

Proof. The first assertion is a consequence of the inequality $|\|A\| - \|B\|| \le \|A - B\|$. To prove the other two, let $\{e_n\}_{n\in\mathbb{N}}$ be an o.n.b. of $\mathcal{H}$, $\mathcal{H}_n = \bigvee_{k\ge n} e_k$, and $P_n = P_{\mathcal{H}_n}$. Then $P_n \to 0$ strongly (hence weakly), since $\|P_nx\|^2 = \sum_{k\ge n} |x_k|^2 \to 0$. However, $\|P_n\| = 1$, which does not converge to 0. $\square$

Example 2.3.1. We say that an operator $T$ is a rank one operator if there exist $u, v \in \mathcal{H}$ so that $Tx = \langle x, v\rangle u$. We use the notation $T = u \otimes v$. Let $T_n = e_n \otimes e_1$. Then $\langle T_nx, y\rangle = x_1\overline{y_n} \to 0$, while $T_nx = x_1e_n$ is not a convergent sequence (for $x_1 \ne 0$). Thus, the weak and strong topologies are different.

Example 2.3.2. The involution T 7→ T ∗ is continuous in uniform topology. (kTn∗ − T ∗ k = kTn − T k). Also,

it is continuous in the weak topology, because

|h(Tn∗ − T ∗ )x, yi| = |hx, (Tn − T )yi| = |h(Tn − T )y, xi| .

However, it is not continuous in the strong topologies. Counterexample: let S be the unilateral shift, and

Tn = (S ∗ )n . Then Tn → 0 strongly but {Tn∗ } is not a strongly convergent sequence. Indeed, for any x =
P∞
(x1 , x2 , . . . ) ∈ H, kTn xk2 = k(xn+1 , xn+2 , . . . )k2 = k=n |xk |2 → 0, as n → ∞. On the other hand, for x = e1 ,

Tn∗ x = S n e1 = en , which is not a convergent sequence.

An operator T ∈ L(H) is a continuous mapping when H is given the strong topology. We will write, following

Halmos, (s→s). One may ask about the other types of continuity.

Theorem 2.3.2. The three types of continuity (s→s), (w→w), and (s→w) are all equivalent.

Proof. Suppose that $T$ is continuous, and let $W$ be a weakly open neighborhood of $Tx_0$ in $\mathcal{H}$. We will show that $T^{-1}(W)$ is weakly open. It suffices to prove this assertion in the case when $W$ belongs to the subbase of the weak topology. To that end, let $W = W(Tx_0; y, r) = \{x \in \mathcal{H} : |\langle x - Tx_0, y\rangle| < r\}$. Then $z \in T^{-1}(W) \Leftrightarrow Tz \in W \Leftrightarrow |\langle Tz - Tx_0, y\rangle| < r \Leftrightarrow |\langle z - x_0, T^*y\rangle| < r$. We see that $z \in T^{-1}(W)$ iff $z \in W(x_0; T^*y, r)$, so $T^{-1}(W) = W(x_0; T^*y, r)$, which is a weakly open set; thus (s→s) implies (w→w).

The implication (w→w) ⇒ (s→w) is trivial, so we concentrate on the implication (s→w) ⇒ (s→s). To that end, suppose that $T$ is not continuous. Then it is unbounded, so there exists a sequence $\{x_n\}_{n\in\mathbb{N}}$ of unit vectors such that $\|Tx_n\| \ge n^2$, $n \in \mathbb{N}$. Clearly, $x_n/n \to 0$ and the assumption (s→w) implies that $Tx_n/n$ converges weakly to 0. By Theorem 1.4.3 the sequence $\{Tx_n/n\}$ is bounded, which contradicts the fact that $\|Tx_n/n\| \ge n$. $\square$

The fact that every operator in L(H) is weakly continuous has an interesting consequence.

Corollary 2.3.3. If T is a linear operator on H then T (B1 ) is closed.

Proof. Banach-Alaoglu Theorem established that B1 is weakly compact so, by Theorem 2.3.2, T (B1 ) is

weakly compact, hence weakly closed, hence norm closed. 

Exercise 2.3.1. Prove that if F is a closed and bounded set in H then T (F ) is closed.

At the end of this section we consider a situation that occurs quite frequently.

Theorem 2.3.4. Let M be a linear manifold that is dense in Hilbert space H. Every bounded linear trans-

formation T : M → H can be uniquely extended to a bounded linear transformation T̂ : H → H. In addition, the

operator norm of T equals kT̂ k.

Proof. Let $x \in \mathcal{H}$. Then there exists a sequence $\{x_n\}_{n\in\mathbb{N}} \subset \mathcal{M}$ converging to $x$. Since $\{x_n\}_{n\in\mathbb{N}}$ is also a Cauchy sequence, for every $\epsilon > 0$ there exists $N \in \mathbb{N}$ such that $m, n \ge N \Rightarrow \|x_m - x_n\| < \epsilon/\|T\|$. It follows that, for $m, n \ge N$, $\|Tx_m - Tx_n\| < \epsilon$, so $\{Tx_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence, hence convergent, and there exists $y = \lim_{n\to\infty} Tx_n$. We will define $\hat{T}x = y$, i.e., $\hat{T}(\lim x_n) = \lim Tx_n$.



First we need to establish that the definition is independent of the sequence {xn }n∈N . If {x0n }n∈N is another

sequence converging to x, we form the sequence (x1 , x01 , x2 , x02 , . . . ) which also converges to x. By the previous,

the sequence (T x1 , T x01 , T x2 , T x02 , . . . ) must converge, and therefore, both of the subsequences {T xn }n∈N and

{T x0n }n∈N must have the same limit.

Notice that, if xn → x, the continuity of the norm implies that kT̂ xk = k lim T xn k = lim kT xn k ≤

lim kT kkxn k = kT kkxk so kT̂ k ≤ kT k. Since the other inequality is obvious we see that kT̂ k = kT k. In particular,

T̂ is a bounded operator. Also, T̂ (αx + βy) = T̂ (α lim xn + β lim yn ) = T̂ (lim(αxn + βyn )) = lim T (αxn + βyn ) =

lim(αT xn + βT yn ) = α lim T xn + β lim T yn = αT̂ x + β T̂ y, so T̂ is linear.

Finally, suppose that T1 and T2 are two continuous extensions of T , and let x ∈ H. If xn → x, the continuity

implies that both T1 xn → T1 x and T2 xn → T2 x. If xn ∈ M then T1 xn = T2 xn , so T1 x = T2 x. Therefore, the

extension is unique, and the proof is complete. 

For example, let $\mathcal{M}$ be the dense linear manifold $\mathcal{H}_0 \subset \ell^2$ of finitely supported sequences from Example 1.1.2, let $c \in \ell^\infty$, and define $T : \mathcal{H}_0 \to \ell^2$ by $T(x_1, x_2, \dots) = (c_1x_1, c_2x_2, \dots)$. Then $T$ is bounded on $\mathcal{H}_0$, and its unique bounded extension $\hat{T}$ is the diagonal operator $\mathrm{diag}(c_n)$ of Example 2.1.2.

2.4. Invariant and Reducing Subspaces

When $\mathcal{M}$ is a closed subspace of $\mathcal{H}$, we can always write $\mathcal{H} = \mathcal{M}\oplus\mathcal{M}^\perp$. Relative to this decomposition, any operator $T$ acting on $\mathcal{H}$ can be written as a $2\times 2$ matrix with operator entries
$$(2.2)\qquad T = \begin{bmatrix} X & Y \\ Z & W \end{bmatrix}.$$
It is sometimes convenient to consider only the initial space or the target space as a direct sum. In such a situation we will use a $1\times 2$ or $2\times 1$ matrix. Thus $[\,X\ \ Y\,]$ will describe an operator $T : \mathcal{M}\oplus\mathcal{M}^\perp \to \mathcal{H}$; if $f \in \mathcal{M}$ and $g \in \mathcal{M}^\perp$ then $[\,X\ \ Y\,]\begin{bmatrix} f \\ g\end{bmatrix} = Xf + Yg$.

A subspace M is invariant for T if, for any x ∈ M, T x ∈ M. It is reducing for T if both M and M⊥ are

invariant for T .

Example 2.4.1. The subspace (0) consisting of zero vector only is an invariant subspace for any operator T .

Also, H is an invariant subspace for any operator T . Because they are invariant for every operator they are called

trivial. A big open problem in Operator theory is whether every operator has a non-trivial invariant subspace.

Example 2.4.2. If $\mathcal{M}$ is a closed subspace of $\mathcal{H}$ and $T_1$ is an operator on $\mathcal{M}$ with values in $\mathcal{M}$, then the operator $T = T_1 \oplus 0$, defined by $Tx = T_1x$ if $x \in \mathcal{M}$ and $Tx = 0$ if $x \in \mathcal{M}^\perp$, is an operator in $\mathcal{L}(\mathcal{H})$. However, if $T_1$ maps $\mathcal{M}$ into $\mathcal{H}$ and $\mathcal{M}$ is not invariant for $T_1$, the same definition ($Tx = T_1x$ for $x \in \mathcal{M}$, $Tx = 0$ for $x \in \mathcal{M}^\perp$) describes the $1\times 2$ operator $[\,T_1\ \ 0\,]$.

Proposition 2.4.1. If T is an operator on Hilbert space H, and P = PM is the projection onto the closed

subspace M, then the following are equivalent:

(a) M is invariant for T ;

(b) P T P = T P ;

(c) Z = 0 in (2.2).

Proof. It is not hard to see that the matrix for $P$ is $\begin{bmatrix} I & 0 \\ 0 & 0\end{bmatrix}$, so $PTP - TP = \begin{bmatrix} 0 & 0 \\ -Z & 0\end{bmatrix}$. This establishes (b) ⇔ (c). Since $\begin{bmatrix} f \\ g\end{bmatrix} \in \mathcal{M}$ iff $g = 0$, we see that $T\begin{bmatrix} f \\ 0\end{bmatrix} = \begin{bmatrix} Xf \\ Zf\end{bmatrix} \in \mathcal{M}$ for all $f \in \mathcal{M}$ iff $Z = 0$, so (a) ⇔ (c). $\square$

Example 2.4.3. Let S be the unilateral shift, n ∈ N, and M = ∨k≥n ek . Then SM = ∨k≥n+1 ek ⊂ M.

Proposition 2.4.2. If T is an operator on Hilbert space H, and P = PM then the following are equivalent:

(a) M is reducing for T ;

(b) P T = T P ;

(c) Y, Z = 0 in (2.2);

(d) M is invariant for T and T ∗ .

Proof. Since $PT - TP = \begin{bmatrix} 0 & Y \\ -Z & 0\end{bmatrix}$, we see that (b) ⇔ (c). Further, the matrix for $T^*$ is $\begin{bmatrix} X^* & Z^* \\ Y^* & W^*\end{bmatrix}$ so, by Proposition 2.4.1, $\mathcal{M}$ is invariant for $T$ and $T^*$ iff $Z = Y^* = 0$, and (c) ⇔ (d). In order to prove that (a) ⇔ (d) it suffices to show that $\mathcal{M}$ is invariant for $T^*$ iff $\mathcal{M}^\perp$ is invariant for $T$. By Proposition 2.4.1, $\mathcal{M}$ is invariant for $T^*$ iff $Y^* = 0$ (iff $Y = 0$). On the other hand, $T\begin{bmatrix} 0 \\ g\end{bmatrix} = \begin{bmatrix} Yg \\ Wg\end{bmatrix} \in \mathcal{M}^\perp$ for all $g$ iff $Yg = 0$ for all $g$. $\square$

Exercise 2.4.1. Prove that the matrix for $T^*$ is $\begin{bmatrix} X^* & Z^* \\ Y^* & W^*\end{bmatrix}$.

Example 2.4.4. Let $T = M_h$, let $E \subset [0,1]$, $m(E) > 0$, and let $\mathcal{M} = L^2(E)$. If $f \in \mathcal{M}$ then $Tf = hf \in \mathcal{M}$. Also, $T^* = M_{\overline{h}}$ and $T^*f = \overline{h}f \in \mathcal{M}$, so $\mathcal{M}$ is reducing for $T$.

Example 2.4.5. Let $S$ be the unilateral shift, $n \in \mathbb{N}$, $n \ge 2$, and $\mathcal{M} = \bigvee_{k\ge n} e_k$. Then $\mathcal{M}$ is invariant for $S$ but not reducing, since $e_n \in \mathcal{M}$ but $S^*e_n = e_{n-1} \notin \mathcal{M}$.

2.5. Finite rank operators

The closest relatives of finite matrices are the finite rank operators.

Definition 2.5.1. An operator T is a finite rank operator if its range is finite dimensional. We denote the

set of finite rank operators by F.

Example 2.5.1. If T is a rank one operator u ⊗ v (see Example 2.3.1) then the range of u ⊗ v is the one

dimensional subspace spanned by u, so u ⊗ v ∈ F.

The rank one operators turn out to be the building blocks out of which finite rank operators are made.

Proposition 2.5.1. If $T$ is a linear operator on $\mathcal{H}$ then $T$ belongs to $\mathcal{F}$ iff there exist vectors $u_1, u_2, \dots, u_n$ and $v_1, v_2, \dots, v_n$ such that $Tx = \sum_{i=1}^n \langle x, v_i\rangle u_i$.

Proof. Suppose that $\mathrm{Ran}\,T$ is of finite dimension $n$, and let $e_1, e_2, \dots, e_n$ be an o.n.b. of $\mathrm{Ran}\,T$. Then $Tx = \sum_{i=1}^n \langle Tx, e_i\rangle e_i = \sum_{i=1}^n \langle x, T^*e_i\rangle e_i$. We leave the converse as an exercise. $\square$

Exercise 2.5.1. Prove that if there exist vectors $u_1, u_2, \dots, u_n$, $v_1, v_2, \dots, v_n$ such that $Tx = \sum_{i=1}^n \langle x, v_i\rangle u_i$, for all $x \in \mathcal{H}$, then $\mathrm{Ran}\,T$ is of dimension at most $n$.

Exercise 2.5.2. Prove that if $T = \sum u_i \otimes v_i$ then $T^* = \sum v_i \otimes u_i$.
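A sketch of Proposition 2.5.1 and Exercise 2.5.2 in $\mathbb{C}^m$ (NumPy): $u\otimes v$ corresponds to the matrix $uv^*$, a sum of $n$ such terms has rank at most $n$, and the adjoint swaps the roles of the $u_i$ and $v_i$.

import numpy as np

rng = np.random.default_rng(5)
m, n = 7, 3
U = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))   # columns u_1,...,u_n
V = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))   # columns v_1,...,v_n

# T x = sum_i <x, v_i> u_i, i.e. T = sum_i u_i v_i^*
T = sum(np.outer(U[:, i], V[:, i].conj()) for i in range(n))
print(np.linalg.matrix_rank(T) <= n)                                  # finite rank
T_adj = sum(np.outer(V[:, i], U[:, i].conj()) for i in range(n))      # sum_i v_i (x) u_i
print(np.allclose(T.conj().T, T_adj))                                 # T^* = sum v_i (x) u_i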

The next theorem summarizes some very important properties of the class F.

Theorem 2.5.2. The set F is a minimal ∗ -ideal in L(H).

Here the star means that F is closed under the operation of taking adjoints.

Proof. It is obvious that F is a subspace of L(H). Furthermore, if T ∈ F and A ∈ L(H), then Ran T A ⊂
Pn
Ran T so T A ∈ F. Also, if T is of finite rank, then according to Proposition 2.5.1, T = i=1 ui ⊗ vi so
Pn
T∗ = i=1 vi ⊗ ui . It follows that T ∗ ∈ F and the same is true of T ∗ A∗ , for any A ∈ L(H). Consequently, AT

is of finite rank, and F is a ∗ -ideal. To see that it is minimal, it suffices to show that, if J is a non-zero ideal,

then J contains all rank one operators. Let T ∈ J, T 6= 0. Then there exists vectors x, y, such that kyk = 1 and

y = T x. Let u ⊗ v be a rank one operator. Since J is an ideal, it contains the product (u ⊗ y)T (x ⊗ v) which

equals u ⊗ v. 

A finite rank operator is a generalization of a finite matrix. What happens when we take the closure of F in

some topology?

Exercise 2.5.3. Prove that the strong closure of F is L(H). [Hint: Prove that Pn → I strongly.] Conclude

that the weak closure of F is also L(H).

2.6. Compact Operators

Exercise 2.5.3 established that the strong closure of F is L(H). Therefore, we consider the norm topology.

Definition 2.6.1. An operator T in L(H) is compact if it is the limit of a sequence of finite rank operators.

We denote the set of compact operators by K.

Example 2.6.1. Let T = diag(cn ) as in Example 2.1.2, with limn→∞ cn = 0. Then T is compact. Reason:

take Tn = diag(c1 , c2 , . . . , cn , 0, 0, . . . ). Then Tn ∈ F and kT − Tn k = sup{|ck | : k ≥ n + 1} → 0. It follows that T

is compact.
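A numerical sketch of Example 2.6.1 with $c_k = 1/k$ on a large but finite grid (NumPy): the truncations $T_n$ are finite rank and $\|T - T_n\| = \sup_{k>n}|c_k| \to 0$.

import numpy as np

N = 1000                                 # truncation of l^2 for the experiment
c = 1.0 / np.arange(1, N + 1)
T = np.diag(c)

for n in [5, 50, 500]:
    Tn = np.diag(np.concatenate([c[:n], np.zeros(N - n)]))   # finite rank truncation
    print(n, np.linalg.norm(T - Tn, 2), c[n])                # norm equals sup_{k>n}|c_k| = 1/(n+1)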

Example 2.6.2. Let $T = T_K$ as in Example 2.1.5. If $K \in L^2([0,1]\times[0,1])$ then $T_K$ is compact. We will point out several different sequences in $\mathcal{F}$ that all converge to $T_K$.

We start with a function theoretic approach: simple functions are dense in $L^2$ (Royden, p. 128), and a similar proof establishes that simple functions are dense in $L^2([0,1]\times[0,1])$. Since a simple function is a linear combination of characteristic functions of rectangles, $\chi_{[a,b]\times[c,d]}(x,y) = \chi_{[a,b]}(x)\chi_{[c,d]}(y)$, it follows that $K(x,y)$ is the $L^2$ limit of functions of the form $K_n(x,y) = \sum_{i=1}^n f_i(x)g_i(y)$, so $T_K$ is the norm limit of the $T_{K_n}$, which are all finite rank operators.

Exercise 2.6.1. Verify that $T_{K_n} \in \mathcal{F}$ if $K_n(x,y) = \sum_{i=1}^n f_i(x)g_i(y)$.

Our second approach exploits the fact that $L^2$ is a Hilbert space. If $\{e_j\}_{j\in\mathbb{N}}$ is an o.n.b. of $L^2$ we can, for a fixed $y$, write $K(x,y) = \sum_{j=1}^\infty k_j(y)e_j(x)$. Now define $K_N(x,y) = \sum_{j=1}^N k_j(y)e_j(x)$ and notice that $T_{K_N} \to T_K$ as $N \to \infty$.

Exercise 2.6.2. Verify that TKN ∈ F and that limN →∞ TKN = TK , if KN (x, y) is as above.

Our last method is based on the matrix for $T_K$. Let $k_{ij} = \langle T_Ke_j, e_i\rangle$, with $\{e_n\}_{n\in\mathbb{N}}$ an o.n.b. of $L^2[0,1]$. First we notice that, for any $f \in L^2$, $\sum_k |\langle f, e_k\rangle|^2 = \|\sum_k\langle f, e_k\rangle e_k\|^2 = \|f\|^2$. Therefore,
$$\sum_{j=1}^\infty |k_{ij}|^2 = \sum_{j=1}^\infty |\langle e_j, T_K^*e_i\rangle|^2 = \sum_{j=1}^\infty |\langle T_K^*e_i, e_j\rangle|^2 = \|T_K^*e_i\|^2 = \int_0^1\Big|\int_0^1 \overline{K(x,y)}\,e_i(x)\,dx\Big|^2dy = \int_0^1\Big|\int_0^1 K(x,y)\,\overline{e_i(x)}\,dx\Big|^2dy.$$
It follows that, for any $n \in \mathbb{N}$,
$$\sum_{i=1}^n\sum_{j=1}^\infty |k_{ij}|^2 = \sum_{i=1}^n\int_0^1\big|\langle K(\cdot,y), e_i\rangle\big|^2\,dy \le \int_0^1\sum_{i=1}^\infty\big|\langle K(\cdot,y), e_i\rangle\big|^2\,dy = \int_0^1\!\!\int_0^1 |K(x,y)|^2\,dx\,dy,$$
so the series $\sum_{i=1}^\infty\sum_{j=1}^\infty |k_{ij}|^2$ converges. Operators whose matrices satisfy this condition are called Hilbert–Schmidt operators. The Hilbert–Schmidt norm is defined as $\|T_K\|_2 = \big\{\sum_{i=1}^\infty\sum_{j=1}^\infty |k_{ij}|^2\big\}^{1/2}$, and it satisfies the inequality $\|A\| \le \|A\|_2$. Hilbert–Schmidt operators are compact because we can define $T_n$ to be the operator whose matrix consists of the first $n$ rows of the matrix of $T_K$, with the remaining entries 0. Then each $T_n \in \mathcal{F}$ and $\|T_n - T_K\| \to 0$. Indeed, $\mathrm{Ran}\,T_n \subset \bigvee\{e_1, e_2, \dots, e_n\}$, and $\|T_K - T_n\|^2 \le \|T_K - T_n\|_2^2 = \sum_{i=n+1}^\infty\sum_{j=1}^\infty |k_{ij}|^2 \to 0$, $n \to \infty$.

Exercise 2.6.3. Prove that the Hilbert–Schmidt norm is indeed a norm and that, for any Hilbert–Schmidt operator $T$, $\|T\| \le \|T\|_2$.

Next we consider some of the properties of compact operators. The first one follows directly from the

definition.

Theorem 2.6.1. The set K is the smallest closed ∗ -ideal in L(H).

The following result reveals the motivation for calling these operators compact.

Theorem 2.6.2. An operator T in L(H) is compact iff it maps the closed unit ball of H into a compact set.

Proof. Suppose that $K$ is compact and let $\{y_n\}_{n\in\mathbb{N}}$ be a sequence in $K(B_1)$. We will show that there exists a subsequence of $\{y_n\}$ that converges to an element of $K(B_1)$. Notice that, for every $n \in \mathbb{N}$, $y_n = Kx_n$, and $x_n$ belongs to the weakly compact set $B_1$. Thus, there exists a subsequence $\{x_{n_k}\}$ converging weakly to $x \in B_1$, and it suffices to show that $Kx_{n_k}$ converges to $Kx$. Let $\{K_n\}$ be a sequence in $\mathcal{F}$ that converges to $K$. For any $m \in \mathbb{N}$, $K_m(B_1)$ is a bounded and closed set (by Corollary 2.3.3) that is contained in a finite dimensional subspace of $\mathcal{H}$, so it is compact. By Theorem 1.4.4, $\{K_mx_{n_k}\}_{k\in\mathbb{N}}$ converges to $K_mx$. Now, let $\epsilon > 0$. Then there exists $N \in \mathbb{N}$ such that $\|K - K_N\| < \epsilon/3$. Further, with $N$ fixed, there exists $k_0 \in \mathbb{N}$ so that, for $k \ge k_0$, $\|K_Nx_{n_k} - K_Nx\| < \epsilon/3$. Therefore, for $k \ge k_0$,
$$\|Kx_{n_k} - Kx\| \le \|(K - K_N)x_{n_k}\| + \|K_N(x_{n_k} - x)\| + \|(K_N - K)x\| < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon.$$
Thus, $y_{n_k} = Kx_{n_k}$ is a convergent subsequence converging to $Kx \in K(B_1)$, so $K(B_1)$ is a compact set.

Suppose now that $K(B_1)$ is compact and let $n \in \mathbb{N}$. Notice that $\bigcup_{y\in K(B_1)} B(y, 1/n)$ is an open covering of the compact set $K(B_1)$, so there exist vectors $x_1^{(n)}, x_2^{(n)}, \dots, x_k^{(n)} \in B_1$ so that $\bigcup_{i=1}^k B(Kx_i^{(n)}, 1/n)$ is a covering of $K(B_1)$. Let $\mathcal{H}_n$ be the span of $Kx_1^{(n)}, Kx_2^{(n)}, \dots, Kx_k^{(n)}$ and $P_n$ the orthogonal projection on $\mathcal{H}_n$. Finally, let $K_n = P_nK$. Clearly, $K_n \in \mathcal{F}$. Let $\epsilon > 0$, and choose $N > 1/\epsilon$. If $n \ge N$ and $\|x\| \le 1$, then $\|Kx - K_nx\| = \|Kx - P_nKx\|$. Since $P_nKx$ is the point in $\mathcal{H}_n$ closest to $Kx$, it follows that $\|Kx - K_nx\| \le \min_{1\le i\le k}\|Kx - Kx_i^{(n)}\| < 1/n < \epsilon$. Thus $K_n \to K$ and the proof is complete. $\square$

Remark 2.6.1. In many texts the characterization of compact operators, established in Theorem 2.6.2, is

taken to be the definition of a compact operator.

Exercise 2.6.4. Prove that if F is a closed and bounded set in H and T is a compact operator in L(H) then

T (F ) is a compact set.

There is another characterization of compact operators:

Proposition 2.6.3. If T is a linear operator on H then T is compact iff it maps every weakly convergent

sequence into a convergent sequence. In this situation, if w − lim xn = x then lim T xn = T x.

Proof. Suppose first that $T$ is compact and let $w\text{-}\lim x_n = x$. By Theorem 1.4.3, there exists $M > 0$ such that, for all $n \in \mathbb{N}$, $\|x_n\| \le M$. Therefore, $Tx_n/M \in T(B_1)$, which is compact by Theorem 2.6.2. Since $T$ is weakly continuous, $w\text{-}\lim Tx_n/M = Tx/M$, so Theorem 1.4.4 implies that $\lim Tx_n = Tx$.

In order to establish the converse, we will demonstrate that T (B1 ) is compact by showing that every sequence

in T (B1 ) has a convergent subsequence. Let {yn }n∈N ⊂ T (B1 ). Then yn = T xn , for xn ∈ B1 , so the Banach–

Alaoglu Theorem implies that {xn } has a weakly convergent subsequence {xnk } and, by assumption, {T xnk } is

a (strongly) convergent subsequence of {T xn }. 

Example 2.6.3. We have seen in Example 2.6.1 that if $T = \mathrm{diag}(c_n)$ and $c_n \to 0$, then $T$ is compact. The converse is also true: if $\{e_n\}$ is the o.n.b. which makes $T$ diagonal, then $Te_n \to 0$ (because $w\text{-}\lim e_n = 0$ and $T$ is compact), so $|c_n| = \|c_ne_n\| \to 0$.

It is useful to know that compactness is inherited by the parts of an operator.

Theorem 2.6.4. Suppose that $T$ is a compact operator on Hilbert space $\mathcal{H} = \mathcal{M}\oplus\mathcal{M}^\perp$ and that, relative to this decomposition, $T = \begin{bmatrix} X & Y \\ Z & W\end{bmatrix}$. Then each of the operators $X, Y, Z, W$ is compact.

Proof. Let $\{T_n\}$ be a sequence of finite rank operators that converges to $T$. Write, for each $n \in \mathbb{N}$, $T_n = \begin{bmatrix} X_n & Y_n \\ Z_n & W_n\end{bmatrix}$. Then all the operators $X_n, Y_n, Z_n, W_n \in \mathcal{F}$ and they converge to $X, Y, Z, W$, respectively. $\square$

Exercise 2.6.5. Prove that $X_n, Y_n, Z_n, W_n \in \mathcal{F}$ and that they converge to $X, Y, Z, W$, respectively. [Consider the projections $P_1 = P_\mathcal{M}$ and $P_2 = P_{\mathcal{M}^\perp}$ and notice that, for example, $P_1TP_2 = \begin{bmatrix} 0 & Y \\ 0 & 0\end{bmatrix}$, so $\|Y_n - Y\| \le \|T_n - T\|$ and $\mathrm{Ran}\,Y_n \subset \mathrm{Ran}\,P_1T_nP_2$, the latter being finite dimensional.]

2.7. Normal operators

Definition 2.7.1. If T is an operator on Hilbert space H then:

(a) T is normal if T T ∗ = T ∗ T ;

(b) T is self-adjoint (or Hermitian) if T = T ∗ ;

(c) T is positive if hT x, xi ≥ 0 for all x ∈ H;

(d) T is unitary if T T ∗ = T ∗ T = I.

Example 2.7.1. Let $T = \mathrm{diag}(c_n)$. Then $T^* = \mathrm{diag}(\overline{c_n})$, so $T$ is normal. Also, $T = T^*$ iff $c_n \in \mathbb{R}$, $n \in \mathbb{N}$, and $T$ is positive iff $c_n \ge 0$, $n \in \mathbb{N}$. Finally, $T^*T = \mathrm{diag}(|c_n|^2)$, so $T$ is unitary iff $|c_n| = 1$, $n \in \mathbb{N}$.

Exercise 2.7.1. Let T = Mh on L2 . Prove that T is normal and that it is: self-adjoint iff h(x) ∈ R, a.e.;

positive iff h(x) ≥ 0 a.e.; unitary iff |h(x)| = 1 a.e..

The relationship between T and T ∗ that defines each of these classes allows us to establish some of their

significant properties.

Proposition 2.7.1. An operator $T$ on Hilbert space $\mathcal{H}$ is self-adjoint iff $\langle Tx, x\rangle$ is real for every $x \in \mathcal{H}$.

Proof. If $T = T^*$ then $\langle Tx, x\rangle = \langle x, T^*x\rangle = \langle x, Tx\rangle = \overline{\langle Tx, x\rangle}$, so $\langle Tx, x\rangle \in \mathbb{R}$. On the other hand, if $\langle Tx, x\rangle$ is real for every $x \in \mathcal{H}$ then the Second Polarization Identity implies that $\langle Tx, y\rangle = \overline{\langle Ty, x\rangle} = \langle x, Ty\rangle$, so $T = T^*$. $\square$

Exercise 2.7.2. Prove that $\langle Tx, x\rangle \in \mathbb{R}$ for all $x$ implies that $\langle Tx, y\rangle = \overline{\langle Ty, x\rangle}$.

Corollary 2.7.2. If P is a positive operator on Hilbert space H then P is self-adjoint.

Example 2.7.2. If P is the orthogonal projection on a subspace M of Hilbert space H, then P is a positive

operator. Indeed, if z ∈ H write z = x + y relative to H = M ⊕ M⊥ . By Theorem 1.3.2, P z = x and P y = 0, so

hP z, zi = hx, x + yi = kxk2 ≥ 0.

Combining Theorem 1.3.2 and Example 2.7.2 we see that every projection is a positive idempotent. In fact,

the converse is also true.

Theorem 2.7.3. If T is an idempotent self-adjoint operator then T is a projection on M = {x ∈ H : T x = x}.

Proof. Let z ∈ H and write it as z = T z + (z − T z). Now T (T z) = T z so T z ∈ M. Also, z − T z ∈ M⊥ .

Indeed, if x ∈ M, then hx, z − T zi = hx, zi − hx, T zi = hx, zi − hT x, zi = 0. 

By Proposition 2.1.1, the norm of every operator T in L(H) can be computed by considering the supremum

of the values of its bilinear form hT x, yi. The next result shows that, when T is self adjoint, it suffices to consider

only some pairs of x, y ∈ B1 .

Proposition 2.7.4. If $T$ is a self-adjoint operator on Hilbert space $\mathcal{H}$ then $\|T\| = \sup\{|\langle Tx, x\rangle| : \|x\| = 1\}$.

Proof. Clearly, $|\langle Tx, x\rangle| \le \|T\|\,\|x\|^2$, so if we denote by $\alpha$ the supremum above, we have that $\alpha \le \|T\|$. To prove that $\alpha = \|T\|$, we use the Second Polarization Identity and notice that, in view of the assumption $T = T^*$ and Proposition 2.7.1, $4\,\mathrm{Re}\langle Tx, y\rangle = \langle T(x+y), x+y\rangle - \langle T(x-y), x-y\rangle$. Moreover, using the Parallelogram Law, and assuming that $x$ and $y$ are unit vectors, we obtain $4\,\mathrm{Re}\langle Tx, y\rangle \le \alpha\|x+y\|^2 + \alpha\|x-y\|^2 = \alpha(2\|x\|^2 + 2\|y\|^2) = 4\alpha$. When $x$ is selected so that $\|Tx\| \ne 0$ and $y = Tx/\|Tx\|$, we obtain $\mathrm{Re}\langle Tx, y\rangle = \|Tx\| \le \alpha$, so $\|T\| \le \alpha$. $\square$

Exercise 2.7.3. Prove that the product of two self-adjoint operators is self-adjoint iff the operators commute.

Remark 2.7.1. If we write A = (T + T ∗ )/2 and B = (T − T ∗ )/2i then the operators A, B are self-adjoint

and T = A + iB. We call them the real part and the imaginary part of T .

Proposition 2.7.5. If T is an operator on Hilbert space H then the following are equivalent.

(a) T is a normal operator;

(b) kT xk = kT ∗ xk for all x ∈ H;

(c) the real and imaginary part of T commute.

Proof. Notice that kT xk2 − kT ∗ xk2 = h(T ∗ T − T T ∗ )x, xi. If T is normal then the right side is 0, so (a)

implies (b). If (b) is true, then the left side is 0, for all x. Since T ∗ T − T T ∗ is self-adjoint, Proposition 2.7.4

implies that its norm is 0, so (b) implies (a). A calculation shows that, if A and B are the real and imaginary

part of T , resp., then AB − BA = (T ∗ T − T T ∗ )/2i so (a) is equivalent to (c). 

In Definition 1.2.3 we introduced the concept of a Hilbert space isomorphism. Since it preserves the inner product ($\langle Ux, Uy\rangle = \langle x, y\rangle$), it preserves the norm, and hence both weak and strong topologies. Therefore, if $U : \mathcal{H} \to \mathcal{K}$, we do not distinguish between an operator $T \in \mathcal{L}(\mathcal{H})$ and $UTU^{-1} \in \mathcal{L}(\mathcal{K})$, and we say that they are unitarily equivalent. Since, by Definition 2.7.1, an operator $T$ is unitary iff $TT^* = T^*T = I$, we should check that $UU^* = U^*U = I$.

Exercise 2.7.4. Verify that U U ∗ = IK and U ∗ U = IH .

Notice that both equalities need to be verified, because it is quite possible for one to hold but not the other.

Example: the unilateral shift S satisfies S ∗ S = I 6= SS ∗ .

Exercise 2.7.5. Prove that T is an isometry iff T ∗ T = I.

Exercise 2.7.3 asserts that the product of two self-adjoint operators is itself self-adjoint iff the operators commute. What if self-adjoint is replaced by normal? If $M$, $N$ are commuting normal operators, their product is normal if $MN$ commutes with $N^*M^*$, and it looks like we need the additional assumption that $M$ commutes with $N^*$ (which also gives that $M^*$ commutes with $N$). When an operator $T$ commutes with both $N$ and $N^*$ we say that $T$ doubly commutes with $N$. When $N$ is normal we can establish an even stronger result.

Theorem 2.7.6 (Fuglede–Putnam Theorem). Suppose that $M$, $N$ are normal operators and $T \in \mathcal{L}(\mathcal{H})$ intertwines $M$ and $N$: $MT = TN$. Then $M^*T = TN^*$.

Proof. Let $\lambda$ be a complex number, and denote $A = \overline{\lambda}M$, $B = \overline{\lambda}N$. Notice that $AT = TB$, so $A^2T = A(AT) = A(TB) = (AT)B = (TB)B = TB^2$, and inductively $A^kT = TB^k$, for $k \in \mathbb{N}$. It follows that, if we denote the exponential function by $\exp(z)$, $\exp(A)T = T\exp(B)$. It is not hard to see that $\exp(-A)\exp(A) = I$, so
$$T = \exp(-A)\,T\exp(B).$$
If we denote $U_1 = \exp(A^* - A)$ and $U_2 = \exp(B - B^*)$, then both $U_1, U_2$ are unitary operators. Indeed, $U_1^* = \big[\sum(A^* - A)^n/n!\big]^* = \sum(A - A^*)^n/n! = \exp(A - A^*) = U_1^{-1}$, and similarly for $U_2$. Since $M$ and $N$ are normal, $A$ commutes with $A^*$ and $B$ with $B^*$, so $\exp(A^*) = \exp(A^* - A)\exp(A)$ and $\exp(-B^*) = \exp(B - B^*)\exp(-B)$. Now we have that $\exp(A^*)\,T\exp(-B^*) = U_1\big(\exp(A)\,T\exp(-B)\big)U_2 = U_1TU_2$ and $\|\exp(A^*)\,T\exp(-B^*)\| = \|T\|$. We conclude that
$$\|\exp(\lambda M^*)\,T\exp(-\lambda N^*)\| = \|T\|$$
for all $\lambda \in \mathbb{C}$. Now $f(\lambda) = \exp(\lambda M^*)\,T\exp(-\lambda N^*)$ is an entire bounded function, hence a constant. Therefore, $f'(0) = 0$. On the other hand, $f'(\lambda) = M^*\exp(\lambda M^*)\,T\exp(-\lambda N^*) + \exp(\lambda M^*)\,T\exp(-\lambda N^*)(-N^*)$, so $f'(0) = M^*T - TN^*$, and the theorem is proved. $\square$

Exercise 2.7.6. Prove that exp(−T ) exp(T ) = I for any operator T ∈ L(H).

Corollary 2.7.7. The product of two normal operators is itself normal iff the operators commute.

Exercise 2.7.7. Prove Corollary 2.7.7.


CHAPTER 3

Spectrum

3.1. Invertibility

In Linear Algebra we learn that each of the properties of being invertible, injective, or surjective implies the other two. Things are very different in infinite dimensional Hilbert space.

Example 3.1.1. Let T = diag(1/n). It is easy to see that Ker T = (0) so T is injective. However, it is not

surjective, because its range does not contain the sequence (1, 1/2, 1/3, . . . ) ∈ `2 .

Exercise 3.1.1. Prove that $T = \mathrm{diag}(1/n)$ is injective but $(1, 1/2, 1/3, \dots) \notin \mathrm{Ran}\,T$.

Example 3.1.2. The backward shift S ∗ (see Example 2.2.2) is surjective: given (y1 , y2 , . . . ) ∈ `2 we have that

S ∗ (0, y1 , y2 , . . . ) = (y1 , y2 , . . . ). On the other hand S ∗ e1 = 0 so S ∗ is not injective. Also, S ∗ S(x1 , x2 , x3 , . . . ) =

S ∗ (0, x1 , x2 , . . . ) = (x1 , x2 , x3 , . . . ), so S ∗ S = I. However, SS ∗ (x1 , x2 , x3 , . . . ) = S(x2 , x3 , . . . ) = (0, x2 , x3 , . . . )

so SS ∗ 6= I.

We say that an operator T is left invertible if there exists an operator L ∈ L(H) such that LT = I. It is right

invertible if there exists an operator R such that T R = I. Therefore, the unilateral shift S is left invertible, while

S ∗ is right invertible. Since S is injective, it is tempting to jump to the conclusion that an operator is injective

iff it is left invertible.

Example 3.1.3. The Volterra integral operator $V$ is defined on $L^2$ by $Vf(x) = \int_0^x f(t)\,dt$. Since this is an integral operator $T_K$ with $K = \chi_E(x,y)$, where $E = \{(x,y) : y \le x\}$ and $\chi_E \in L^2$, $V$ is a compact operator, so it cannot be left invertible. Yet, $V$ is injective, since $Vf = 0$ implies that $f = 0$.

Exercise 3.1.2. Prove that the Volterra integral operator V is injective.



Exercise 3.1.3. Prove that the range of the Volterra integral operator V is a dense linear manifold in H.
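A discretized sketch of the Volterra operator (NumPy): on an $N$-point grid $V$ becomes a lower triangular averaging matrix; its singular values decay toward 0 (consistent with compactness, so $V$ is not bounded below), while its smallest singular value is still positive (consistent with injectivity). For the continuous operator the singular values are known to be $2/((2k-1)\pi)$, which the discretization approximates.

import numpy as np

N = 300
V = np.tril(np.ones((N, N))) / N        # (V f)(x_i) ~ (1/N) * sum_{j <= i} f(y_j)

s = np.linalg.svd(V, compute_uv=False)
print(s[:3])                            # approximately 2/pi, 2/(3 pi), 2/(5 pi)
print(s[-1] > 0, s[-1])                 # positive but tiny: injective, not bounded below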

Instead of injectivity, another condition plays a major role in the questions about invertibility.

Definition 3.1.1. An operator T ∈ L(H) is bounded below if there exists α > 0 such that kT xk ≥ αkxk, for

all x ∈ H.

Example 3.1.4. Let T = diag(cn ). Then T is bounded below iff |cn | ≥ α > 0, n ∈ N.

An immediate consequence of this property concerns the range of the operator.

Theorem 3.1.1. If an operator T on Hilbert space H is bounded below then its range is a closed subset of H.

Proof. Let yn be a sequence of vectors in Ran T converging to y. Then yn = T xn for some xn ∈ H, so

kyn − ym k = kT xn − T xm k ≥ αkxn − xm k. Since {yn } is a Cauchy sequence, the same is true of {xn }. Let

x = lim xn . Then T xn → T x, i.e. yn → T x. Thus y = T x ∈ Ran T , and Ran T is closed. 

Example 3.1.3 shows that the injectivity is not sufficient to guarantee the left invertibility. The next result

gives the correct necessary and sufficient conditions.

Theorem 3.1.2. Let T be an operator in L(H). The following are equivalent:

(a) T is left invertible;

(b) Ker T = (0) and Ran T is closed;

(c) T is bounded below.

Proof. If LT = I then kxk = kLT xk ≤ kLkkT xk, so T is bounded below with α = 1/kLk, and (a) ⇒ (c).

Clearly, if T is bounded below it must be injective, and the fact that its range is closed is Theorem 3.1.1, so (c)

implies (b). If (b) is true then, by the Open Mapping Theorem (Royden, p.230), there exists a bounded linear

operator L1 : Ran T → H, such that L1 T = I. If we define L = [ L1 0 ] relative to H = Ran T ⊕ (Ran T )⊥ (see

Example 2.4.2), then L ∈ L(H) and LT = I. 



Exercise 3.1.4. Prove that the operator matrix T = [ I A ; 0 I ] (acting on H ⊕ H) is bounded below for any operator A ∈ L(H).

A similar characterization is available for surjectivity. The most efficient approach seems to be based on the

observation that T is right invertible iff T ∗ is left invertible. In order to continue in this direction we need the

following result, which is significant on its own.

Theorem 3.1.3. The operator T has closed range iff the range of T ∗ is closed.

Proof. Since T ∗∗ = T it suffices to prove one of the two implications. To that end, let Ran T be closed, and

let xn be a sequence of vectors such that T ∗ xn converges to y. We will show that y ∈ Ran T ∗ . Since Ran T is closed

we can write H = Ran T ⊕ Ker T ∗ . If relative to this decomposition xn = x0n ⊕ x00n , then T ∗ xn = T ∗ x0n so, without

loss of generality, we may assume that the sequence xn belongs to Ran T . The convergence of T ∗ xn implies

the weak convergence so, for any z ∈ H, hT ∗ xn , zi → hy, zi. It follows that hxn , T zi → hy, zi and, moreover,

that hxn , wi converges for any w ∈ H. Indeed, if we write w = w1 ⊕ w2 , where w1 ∈ Ran T (so w1 = T z1 )

and w2 ∈ Ker T ∗ , (so hxn , w2 i = 0), we see that {xn } is a weakly convergent sequence. If w − lim xn = x then

w − lim T ∗ xn = T ∗ x. On the other hand, T ∗ xn converges to y, so y = T ∗ x ∈ Ran T ∗ . 

Now we can deliver the promised characterizations of surjectivity.

Theorem 3.1.4. Let T be an operator in L(H). The following are equivalent:

(a) T is right invertible;

(b) T ∗ is bounded below.

(c) T is surjective.

Proof. The equivalence of (a) and (b) follows from Theorem 3.1.2 applied to T ∗ . Further, T R = I implies

that T R is surjective. Since Ran T R ⊂ Ran T , T is surjective and (a) implies (c). Finally, let T be surjective.

This implies that Ker T ∗ = (0) and also, via Theorem 3.1.3, that Ran T ∗ is closed. Applying Theorem 3.1.2 we

see that T ∗ is left invertible and the result follows by taking adjoints. 

We close this section with a sufficient condition for invertibility that is of quite a different nature.

Theorem 3.1.5. If T is an operator on Hilbert space H and kI − T k < 1 then T is invertible.

Proof. Let α = 1 − kI − T k ∈ (0, 1]. If x ∈ H, then kT xk = kx − (1 − T )xk ≥ kxk − k1 − T kkxk = αkxk

so T is bounded below. Suppose now that the range of T is not dense in H. Then there exists y ∈ H such that

d = inf{ky − xk : x ∈ Ran T } > 0. It follows that there exists x ∈ Ran T such that (1 − α)ky − xk < d. (Obvious

if α = 1, otherwise β = 1/(1 − α) > 1 so there exists x such that ky − xk < βd.) Notice that x + T (y − x) ∈ Ran T

so d ≤ ky − x − T (y − x)k ≤ kI − T kky − xk < d, which is a contradiction, so T has dense range. Since T is also bounded below, its range is closed by Theorem 3.1.1, hence Ran T = H and T is invertible. 

Second proof: Set A = I − T , so that kAk < 1. The series I + A + A2 + A3 + . . . converges in the operator norm, and it is easy to verify that (I − A)(I + A + A2 + A3 + . . . ) = I, i.e., T = I − A is invertible. 

Exercise 3.1.5. Prove that, if kT k < 1, the series ∑∞n=0 T n converges uniformly (i.e., in the operator norm).

Exercise 3.1.6. Verify that, if kT k < 1, (I − T )−1 = ∑∞n=0 T n .

3.2. Spectrum

A complex number λ belongs to the spectrum of an operator T (notation: λ ∈ σ(T )) if T − λI is not

invertible. The complement of σ(T ) is called the resolvent set of T and is denoted by ρ(T ). The spectral radius

of T , r(T ) = sup{|λ| : λ ∈ σ(T )}. While it is more pedantic to write λI, it is customary to omit the identity and

write just λ for the operator λI. As usual, the interest in the spectrum of a linear operator T is motivated by the

finite dimensional case. In that situation, λ ∈ σ(T ) iff λ is an eigenvalue of T , and eigenvalues play an essential

role in the structure theory via the Jordan form. As we will see, the situation is quite different in the infinite

dimensional Hilbert space.

Example 3.2.1. Let T = diag(cn ). If λ = cn for some n, then T − λ has non-trivial kernel (containing en ) so

the spectrum contains the whole diagonal. Is there more? If T = diag(1/n) then T is not invertible so 0 belongs

to the spectrum of T , although it is not one of the diagonal entries and not an eigenvalue. What about the

sequence {cn } = (1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 4/5, . . . )? The operator T = diag(cn ) is not invertible, but neither

is T − 1, so both 0 and 1 belong to the spectrum of T . Should we include limit points of the sequence as well?

The truth is, we cannot address the problem before we establish some essential properties of the spectrum.

Proposition 3.2.1. If T is an operator on Hilbert space H then r(T ) ≤ kT k.

Proof. If |λ| > kT k then kT /λk < 1. By Theorem 3.1.5, the operator I − T /λ is invertible, hence so is T − λ = −λ(I − T /λ), and λ ∉ σ(T ). 

Example 3.2.2. Let S ∗ be the backward shift on `2 (see Example 2.2.2). If |λ| < 1 then the sequence

u = {λn } is in `2 and it is an eigenvector of S ∗ , i.e., S ∗ u = λu so S ∗ − λ has non-trivial kernel and is not

invertible. Consequently, the spectrum of S ∗ contains the open unit disk. On the other hand, kS ∗ k = kSk = 1

so, by Proposition 3.2.1, σ(S ∗ ) is contained in the closed unit disk.

Example 3.2.2 raises once again the question whether the spectrum must contain its boundary points.

Theorem 3.2.2. If T is an operator on Hilbert space H then σ(T ) is a non-empty compact set.

Proof. Proposition 3.2.1 shows that the spectrum of T is bounded. To show that it is closed, we will show

that ρ(T ) is open. Let λ0 ∈ ρ(T ) so that T − λ0 is invertible. Since

k1 − (T − λ0 )−1 (T − λ)k = k(T − λ0 )−1 [T − λ0 − (T − λ)] k = k(T − λ0 )−1 k|λ − λ0 |

we see that k1 − (T − λ0 )−1 (T − λ)k < 1 if |λ − λ0 | is sufficiently small. By Theorem 3.1.5, for such λ the operator

(T − λ0 )−1 (T − λ) is invertible so the same is true of T − λ. Consequently ρ(T ) is open. 

Our next goal is to show that the spectrum of a bounded operator cannot be empty. In order to do that, let

x, y ∈ H, and consider the complex-valued function F (λ) = h(T − λ)−1 x, yi defined for λ ∈ ρ(T ).

Proposition 3.2.3. The function F is analytic in ρ(T ) ∪ {∞}.

Proof. Let λ0 ∈ ρ(T ). Write

T − λ = (T − λ0 ) − (λ − λ0 ) = (T − λ0 ) [I − (T − λ0 )−1 (λ − λ0 )]

and notice that if |λ − λ0 | is sufficiently small, then k(T − λ0 )−1 (λ − λ0 )k < 1. By Exercise 3.1.6, we can write

(T − λ)−1 = (T − λ0 )−1 ∑∞n=0 (T − λ0 )−n (λ − λ0 )n .

Therefore, the function F (λ) = ∑∞n=0 h(T − λ0 )−n−1 x, yi(λ − λ0 )n is analytic in a neighborhood of λ0 . As for

λ0 = ∞, we consider the function

(3.1) G(λ) = F (1/λ) = h(T − 1/λ)−1 x, yi

at λ = 0. Since T − 1/λ = −(1 − λT )/λ, for λ 6= 0, Theorem 3.1.5 and Exercise 3.1.6 show that, for λ sufficiently

small (but different from 0), the operator T − 1/λ is invertible and G(λ) = −λ ∑∞n=0 hT n x, yiλn is analytic at 0. Furthermore, F (∞) = G(0) = 0. If the spectrum of T were empty, F would be an entire function that is

bounded, hence by Liouville’s Theorem, a constant. Since F (∞) = 0 it would follow that F is identically zero for any x, y ∈ H, which is impossible. (Take λ ∈ ρ(T ) and x = (T − λ)y with y 6= 0; then F (λ) = kyk2 6= 0.) Thus σ(T ) is non-empty. 

Now we can return to Example 3.2.2 and conclude that the spectrum of S ∗ is the closed unit disk. What

about σ(S)?

Exercise 3.2.1. A complex number λ belongs to σ(T ) iff λ̄ ∈ σ(T ∗ ).

Exercise 3.2.2. Given a non-empty compact set F ⊂ C, show that there exists an operator T ∈ L(H) such

that σ(T ) = F .

Example 3.2.3. The spectrum of the unilateral shift S is the closed unit disk. However, S has no eigenvalues.

Theorem 3.2.4 (Spectral mapping theorem). Let T ∈ L(H) and let p be a polynomial. Then σ(p(T )) =

p(σ(T )).

Proof. Suppose that λ0 ∈ σ(T ), and write p(λ) − p(λ0 ) = (λ − λ0 )q(λ). Then p(T ) − p(λ0 ) = (T − λ0 )q(T )

and it is not hard to see that the operator A = p(T ) − p(λ0 ) cannot be invertible. Otherwise, we would have that

T − λ0 has both the left inverse A−1 q(T ) and the right inverse q(T )A−1 . Thus p(λ0 ) ∈ σ(p(T )), and we obtain

that p(σ(T )) ⊂ σ(p(T )).

To prove the converse, let λ0 ∈ σ(p(T )), and let λ1 , λ2 , . . . , λn be the roots of p(λ) = λ0 . Then p(T ) − λ0 =

α(T − λ1 )(T − λ2 ) . . . (T − λn ) for some non-zero complex number α. Since p(T ) − λ0 is not invertible there

exists j, 1 ≤ j ≤ n, such that T − λj is not invertible. For this j, λj ∈ σ(T ) and p(λj ) = λ0 , so λ0 ∈ p(σ(T )).

Consequently, σ(p(T )) ⊂ p(σ(T )) and the theorem is proved. 
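A small example showing how a polynomial can collapse the spectrum (a 2 × 2 illustration only): for T = diag(1, −1) and p(λ) = λ2 ,
\[
\sigma(T) = \{1, -1\}, \qquad p(T) = T^2 = I, \qquad
\sigma(p(T)) = \{1\} = p(\sigma(T)) .
\]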

Exercise 3.2.3. Let X and T be operators in L(H), and suppose that X is invertible. Then σ(X −1 T X) =

σ(T ).

In many instances it is quite hard to determine the spectrum of an operator. However, it may be possible to

determine its spectral radius, using the next result.

Theorem 3.2.5 (Spectral Radius Formula). Let T ∈ L(H). Then r(T ) = limn→∞ kT n k1/n .

Proof. By the Spectral Mapping Theorem, σ(T n ) = [σ(T )]n so [r(T )]n = r(T n ) ≤ kT n k. Thus, r(T ) ≤ kT n k1/n and r(T ) ≤ lim inf n→∞ kT n k1/n . In order to prove the converse we consider the function G(λ) defined

by (3.1) for λ 6= 0 and 1/λ ∈ ρ(T ). For such λ, G is analytic by Proposition 3.2.3 and it can be represented
by the convergent series −λ ∑∞n=0 λn hT n x, yi. Thus, the sequence λn hT n x, yi must be bounded. That means

that for each y, the sequence of bounded linear functionals {λn T n x} is bounded at y, i.e., there exists C(y) such

that |hλn T n x, yi| ≤ C(y). By the Uniform Boundedness Principle, the sequence {λn T n x} is uniformly bounded.

This means that, for each x, there exists C(x), such that kλn T n xk ≤ C(x). Applying the Uniform Boundedness

Principle once again, we obtain M > 0 such that |λ|n kT n k ≤ M , n ∈ N. It follows that |λ|kT n k1/n ≤ M 1/n and

|λ| lim supn→∞ kT n k1/n ≤ 1. Since this is true for any λ such that 1/λ ∈ ρ(T ) it holds all the more whenever

1/|λ| > r(T ). It follows that lim supn→∞ kT n k1/n ≤ r(T ) and the theorem is proved. 
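As an illustration of the possible gap between r(T ) and kT k (a simple 2 × 2 example, not needed for the proof):
\[
T = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}, \qquad \|T\| = 2, \qquad T^2 = 0,
\]
so kT n k1/n = 0 for n ≥ 2 and r(T ) = limn kT n k1/n = 0, in agreement with σ(T ) = {0}.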

3.3. Parts of the spectrum

A combination of Theorems 3.1.2 and 3.1.4 establishes that an operator is invertible iff it is bounded below and has dense range.

Definition 3.3.1. A complex number λ belongs to the approximate point spectrum σapp (T ) of a linear

operator T if T − λ is not bounded below. It belongs to the compression spectrum σcomp (T ) of T if the closure

of Ran (T − λ) is a proper subspace of H. Finally, it belongs to σp (T ) — the point spectrum of T , if it is an

eigenvalue of T .

Remark 3.3.1. There is more than one classification of the parts of the spectrum. The residual spectrum is

σcomp (T ) − σp (T ), and the continuous spectrum is σ(T ) − (σcomp(T ) ∪ σp (T )). The left spectrum consists of those

complex numbers λ such that T − λ is not left invertible, and similarly for the right spectrum.

Example 3.3.1. Let T = diag(cn ). First we notice that T is invertible iff the sequence {cn } is invertible.

Indeed, if cn dn = 1, and dn ∈ `∞ , define T −1 = diag(dn ). If T is invertible, then T −1 en = en /cn so 1/|cn | =

kT −1 en k ≤ kT −1 k shows that 1/cn ∈ `∞ . Therefore, λ ∈ σ(T ) iff the sequence {cn − λ} is not invertible, which is true iff inf n |cn − λ| = 0, i.e., iff λ belongs to the closure of {cn }. Thus σ(T ) is the closure of the diagonal.


What are the parts of the spectrum of diag(cn )? Suppose that k(T − λ)xk ≥ αkxk for all x. Then ∑∞n=1 |(cn − λ)xn |2 ≥ α2 ∑∞n=1 |xn |2 . By taking x = en we obtain that |cn − λ| ≥ α, for all n ∈ N, which means that cn − λ is invertible and, hence, λ ∉ σ(T ). This shows that σ(T ) ⊂ σapp (T ) and therefore σ(T ) = σapp (T ).
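For a concrete diagonal operator the parts of the spectrum can be listed explicitly; for T = diag(1/n) (a routine check along the lines above):
\[
\sigma(T) = \{0\} \cup \{1/n : n \in \mathbb{N}\}, \qquad
\sigma_p(T) = \sigma_{comp}(T) = \{1/n : n \in \mathbb{N}\}, \qquad
\sigma_{app}(T) = \sigma(T),
\]
so 0 lies in the continuous spectrum: T is injective with dense, non-closed range, while kT e_n k → 0.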

The previous example is a special case of a more general result.

Theorem 3.3.1. If T is a normal operator then σ(T ) = σapp (T ).

Proof. By Proposition 2.7.5, taking into account that T − λ is normal, for any x ∈ H, k(T − λ)xk = k(T ∗ − λ̄)xk, so λ ∈ σp (T ) iff λ̄ ∈ σp (T ∗ ). Also, λ̄ ∈ σp (T ∗ ) ⇔ Ker (T ∗ − λ̄) 6= (0) ⇔ Ran (T ∗ − λ̄)∗ = Ran (T − λ) is not dense ⇔ λ ∈ σcomp (T ). Conclusion: σcomp (T ) ⊂ σp (T ) ⊂ σapp (T ). Since σ(T ) = σapp (T ) ∪ σcomp (T ) the result follows. 

Remark 3.3.2. The proof of Theorem 3.3.1 established that a complex number λ belongs to σp (T ∗ ) iff λ̄ ∈ σcomp (T ).

Exercise 3.3.1. If T ∈ L(H) then σ(T ) = σapp (T ) ∪ σcomp (T ).

Since the spectrum is the union of two parts, it is interesting that its boundary is always in the same one.

Theorem 3.3.2. The boundary of the spectrum is included in the approximate point spectrum.

Proof. Let λ ∈ ∂σ(T ). The spectrum of T is closed so λ ∈ σ(T ), which means that either λ ∈ σapp (T )

(in which case there is nothing to prove) or λ ∈ σcomp (T ). In the latter case there exists a non-zero vector f orthogonal to Ran (T − λ). Let {λn } ⊂ ρ(T ) such that λn → λ. Since T − λn is invertible, we can define unit vectors fn = (T − λn )−1 f /k(T − λn )−1 f k. Now

k(T − λ)fn k2 ≤ k(T − λ)fn k2 + k(T − λn )fn k2 = k(λ − λn )fn k2 = |λ − λn |2 → 0

where we have used the fact that (T − λn )fn is a multiple of f , while (T − λ)fn ∈ Ran (T − λ) is orthogonal to f , hence to (T − λn )fn . Consequently,

λ ∈ σapp (T ). 

Example 3.3.2. We have seen in Example 3.2.3 that the spectrum of the unilateral shift S is D− . By

Exercise 3.2.1 the same is true of σ(S ∗ ). Since S is an isometry, 0 cannot be an eigenvalue of S. (Sx = 0 implies kxk = kSxk = 0.) If λ 6= 0 then S(x1 , x2 , . . . ) = λ(x1 , x2 , . . . ) leads to 0 = λx1 and xn = λxn+1 , n ∈ N, and we see that x = 0. Therefore, σp (S) is empty and, by Remark 3.3.2, so is σcomp (S ∗ ).

The equation S ∗ x = λx leads to xn+1 = λxn , n ∈ N, and thus to x = x1 (1, λ, λ2 , . . . ). Therefore, x is a

non-zero vector in `2 iff |λ| < 1. Consequently, σp (S ∗ ) = σcomp (S) = D.

By Theorem 3.3.2, the approximate point spectra of S and S ∗ include the unit circle T. For S that is all

because, if |λ| < 1 then kSx − λxk ≥ |kSxk − kλxk| = (1 − |λ|)kxk so S − λ is bounded below. On the other

hand, the approximate point spectrum always includes the eigenvalues, so σapp (S ∗ ) = D− .
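It may help to collect the conclusions of this example in one place (a summary only; D denotes the open unit disk and ∂D the unit circle):
\begin{align*}
\sigma(S) &= \sigma(S^*) = D^-, \\
\sigma_p(S) = \emptyset, \quad \sigma_p(S^*) &= D, \\
\sigma_{app}(S) = \partial D, \quad \sigma_{app}(S^*) &= D^-, \\
\sigma_{comp}(S) = D, \quad \sigma_{comp}(S^*) &= \emptyset .
\end{align*}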

Theorem 3.3.3. Suppose that M is a closed subspace of Hilbert space H, and that, relative to H = M ⊕ M⊥ , T = [ T1 0 ; 0 T2 ] = T1 ⊕ T2 . Then σ(T ) = σ(T1 ) ∪ σ(T2 ).

Proof. If T − λ is not invertible then T1 − λ and T2 − λ cannot both be invertible, so σ(T ) ⊂ σ(T1 ) ∪ σ(T2 ).

On the other hand, if either T1 or T2 is not bounded below, say kT1 xn k → 0, then kT (xn ⊕ 0)k → 0, so

σapp (T1 )∪σapp (T2 ) ⊂ σ(T ). The corresponding inclusion for the compression spectra can be obtained by switching

to the adjoints and using Remark 3.3.2. 

Problem 11. Suppose that H = M1 ⊕ M2 ⊕ . . . and that relative to this decomposition T = diag(Tn ) is a

diagonal matrix with operator entries T1 , T2 , . . . . Is it true that σ(T ) = (∪σ(Tn ))− ?

3.4. Spectrum of a compact operator

In this section we take a more detailed look at compact operators and their spectra.

Theorem 3.4.1. Let T be a compact operator, let λ be a non-zero complex number, and suppose that T − λ

is not bounded below. Then λ ∈ σp (T ).

Proof. Let {xn } be a sequence of unit vectors such that k(T − λ)xn k → 0, n → ∞. Since B1 is weakly

compact, {xn } has a weakly convergent subsequence {xnk }, so the compactness of T implies that {T xnk } is a

convergent sequence. Let x = limk T xnk . Notice that kxk ≥ kλxnk k − k(T − λ)xnk k → |λ| so x is a non-zero

vector. Moreover, k(T − λ)xk ≤ k(T − λ)(T xnk − x)k + k(T − λ)T xnk k → 0 so λ ∈ σp (T ). 

Theorem 3.4.1 established that the non-zero points in the approximate point spectrum are eigenvalues. Our

goal is to prove a similar inclusion for the compression spectrum. We start with the following result.

Theorem 3.4.2. Let T be a compact operator and let λ be a non-zero complex number. Then Ran (T − λ) is

closed.

Proof. First we show that, if Ran T is closed, it must be finite dimensional. Indeed, if we denote by T1 the

restriction of T to its initial space (Ker T )⊥ , then T1 is an injective linear transformation from (Ker T )⊥ onto Ran T , hence invertible (its inverse is bounded by the Open Mapping Theorem). Let B be the intersection of the closed ball of radius 1/kT1−1 k and Ran T . Now, if y ∈ B then y = T1 x, for some x ∈ (Ker T )⊥ , so x = T1−1 y. Since kyk ≤ 1/kT1−1 k it follows that kxk ≤ kT1−1 kkyk ≤ 1, i.e., x ∈ B1 ∩ (Ker T )⊥ . We conclude that B is contained in the compact set T (B1 ∩ (Ker T )⊥ ), so B must be compact, and hence Ran T is finite dimensional.

Next we observe that Ker (T − λ) must be finite dimensional. Reason: Ker (T − λ) is invariant for T and

the restriction of T to Ker (T − λ) is a compact operator with range Ker (T − λ). (If x ∈ Ker (T − λ) write

x = (1/λ)[T x − (T − λ)x] = T (x/λ) ∈ T (Ker (T − λ)).)

Finally, we prove the theorem. Let S be the restriction of T − λ to Ker (T − λ)⊥ . Notice that Ran S =

Ran (T − λ) so it suffices to show that Ran S is closed. By Theorem 3.1.2 we will accomplish this goal by

establishing that S is bounded below. However, if S is not bounded below then Theorem 3.4.1 shows that

(T − λ)x = 0 for some nonzero vector x in Ker (T − λ)⊥ . This is impossible, so Ran S is closed and the proof is

complete. 

Before we can proceed we need this technical result.

Lemma 3.4.3. Let T be a compact operator and let {λn } be a sequence of complex numbers. Suppose that

there exists a nested sequence of distinct subspaces M1 ( M2 ( M3 ( . . . such that (T − λn )Mn+1 ⊂ Mn .

Then λn converges to 0.

Proof. Let {en } be a sequence of unit vectors such that e1 ∈ M1 and en+1 ∈ Mn+1 ⊖ Mn . Clearly, this is an orthonormal system. Moreover, for n ∈ N, (T − λn )en+1 ∈ Mn , so h(T − λn )en+1 , en+1 i = 0, which implies that kT en+1 k ≥ |hT en+1 , en+1 i| = |h(T − λn )en+1 , en+1 i + hλn en+1 , en+1 i| = |λn |. Since T is compact and w − lim en = 0 it follows that limn T en = 0 so limn λn = 0. 

Theorem 3.4.1 shows that if λ ∈ σ(T ) then either λ = 0, or λ ∈ σp (T ), or T − λ is bounded below (hence

injective) but not surjective. By Theorem 3.1.4, T − λ not being surjective is the same as (T − λ)∗ not being

bounded below. Since T ∗ is also compact and (T − λ)∗ = T ∗ − λ̄, another application of Theorem 3.4.1 allows us to conclude that λ̄ ∈ σp (T ∗ ). The next result shows that there is even less variation in the spectrum of a compact operator.

Theorem 3.4.4. Let T be a compact operator and let λ be a non-zero complex number. Then λ ∈ σp (T ) iff λ̄ ∈ σp (T ∗ ).

Proof. Clearly, it suffices to prove either direction. Suppose that λ ∈ σp (T ). By Theorem 3.4.2, the range

of T − λ is closed. We will show that it must be a proper subspace of H. Suppose to the contrary that T − λ is

surjective, and denote Mn = Ker (T − λ)n . Since λ is an eigenvalue of T we can inductively define a sequence

{xn } of nonzero vectors such that (T − λ)xn = xn−1 , with x0 = 0. Clearly xn belongs to Mn but not to Mn−1 ,

and (T − λ)Mn+1 ⊂ Mn , so Lemma 3.4.3 implies that the constant sequence λ, λ, λ, . . . converges to 0, which

contradicts the assumption that λ 6= 0. Therefore, Ran (T − λ) is a proper (closed) subspace of H, so Ker (T − λ)∗ = (Ran (T − λ))⊥ 6= (0) and λ̄ ∈ σp (T ∗ ). 

To summarize, the spectrum of a compact operator consists of the point spectrum and, possibly, 0. On the

infinite dimensional Hilbert space, 0 must be in the spectrum because, if a compact operator T were invertible, then the identity I = T T −1 would be compact, contradicting the conclusions of Example 2.6.1. Thus we have

a corollary.

Corollary 3.4.5. The spectrum of a compact operator consists of 0 and its eigenvalues.

It is reasonable to ask about the location of the eigenvalues.

Theorem 3.4.6. For any C > 0 there is a finite number of linearly independent eigenvectors of a compact

operator corresponding to eigenvalues λ such that |λ| ≥ C.

Proof. Suppose to the contrary that there is an infinite sequence {xn } of linearly independent unit vectors and a sequence of eigenvalues λn of T , |λn | ≥ C, so that T xn = λn xn . Let Mn = ∨nk=1 xk ; by linear independence, M1 ( M2 ( . . . . If x ∈ Mn+1 then x = c1 x1 + · · · + cn+1 xn+1 , so (T − λn+1 )x = c1 (λ1 − λn+1 )x1 + · · · + cn (λn − λn+1 )xn ∈ Mn . Applying Lemma 3.4.3 (to the subspaces Mn and the numbers λn+1 ) we obtain that λn → 0, which contradicts |λn | ≥ C. 

Corollary 3.4.7. If λ is a non-zero eigenvalue of a compact operator T , then the nullspace of T − λ is a

finite dimensional subspace.



Corollary 3.4.8. The spectrum of a compact operator T is at most countable, and its only possible accumulation point is zero.

Remark 3.4.1. If T = diag(cn ) where c1 = 1 and cn = 0 for n ≥ 2, then T is compact, and σ(T ) = {0, 1} so

it has no accumulation points.

Last remark raises a question: can a compact operator have a one-point spectrum? Since compact operators

are never invertible, the single point is necessarily 0, so the question can be reformulated as: are there compact

quasinilpotent operators? (An operator T is quasinilpotent if σ(T ) = {0}.) In finite dimensions, a quasinilpotent

operator is nilpotent, i.e. there exists a positive integer N such that T N = 0. This need not be the case in infinite

dimensional Hilbert space.

Example 3.4.1. Let W be a weighted shift (see Example 2.1.6) with weight sequence {1/n}n∈N . It is compact following Example 2.6.1. Since W en = (1/n)en+1 it follows that W k en = 1/[n(n + 1) . . . (n + k − 1)] en+k . This shows that W k is the product of S k and diag(1/[n(n + 1) . . . (n + k − 1)]). Since S k is an isometry, kW k k = supn 1/[n(n + 1) . . . (n + k − 1)] = 1/k!. Now r(W ) = limk kW k k1/k = limk (1/k!)1/k = 0. Therefore, W is a compact quasinilpotent operator.
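The limit limk (1/k!)1/k = 0 used above can be checked with a crude estimate (one of several possible): since k! ≥ (k/2)k/2 ,
\[
\Big(\frac{1}{k!}\Big)^{1/k} \le \Big(\frac{2}{k}\Big)^{1/2} \longrightarrow 0 .
\]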

3.5. Spectrum of a normal operator

At first glance, normal operators appear to be too diverse to fit one description. Before we can correct this misconception, we will need to make a thorough study of this class, and some of its prominent subclasses.

Theorem 3.5.1. (a) If T is a unitary operator then σ(T ) is a subset of the unit circle. (b) If T is a self-

adjoint operator then σ(T ) is a subset of the real axis. (c) If T is a positive operator then σ(T ) is a subset of the

non-negative real axis. (d) If T is a non-trivial projection then σ(T ) = {0, 1}.

Proof. All operators listed are normal, so by Theorem 3.3.1, it suffices to prove assertions (a) – (d) with

σapp (T ) instead of σ(T ). To that end, we will prove that, if λ does not belong to the appropriate set, then T − λ

is bounded below.

(a) If T is unitary and |λ| 6= 1, then kT x − λxk ≥ | kT xk − kλxk | = |1 − |λ|| kxk, so T − λ is bounded below.

(b) Let λ = α + iβ. Then kT x − λxk2 = kT x − αxk2 − 2RehT x − αx, iβxi + kiβxk2 . If α, β are real numbers

and T = T ∗ we have that hT x − αx, xi ∈ R by Proposition 2.7.1, and it follows that RehT x − αx, iβxi = 0.

Therefore, kT x − λxk2 ≥ |β|2 kxk2 , so β 6= 0 implies that T − λ is bounded below.

(c) If T ≥ 0 then T is self-adjoint, so σ(T ) ⊂ R. Notice that kT x − λxk2 = kT xk2 − 2RehT x, λxi + kλxk2 . If

λ < 0 then hT x, λxi = λhT x, xi ≤ 0 (by definition of a positive operator) so kT x − λxk2 ≥ |λ|2 kxk2 and T − λ is bounded

below.

(d) If T is a non-trivial projection then neither T nor I − T (the projection on the orthogonal complement

of the range of T ) can be invertible, so {0, 1} ⊂ σ(T ). If λ ∉ {0, 1}, a calculation shows that (1/(λ(1 − λ)))T − 1/λ is the inverse of T − λ. 

Exercise 2.7.1 asserts that the operator of multiplication by an L∞ function is a normal operator. In addition,

it showed that Mh belongs to one of the important subclasses iff its (essential) range belonged to a specific subset

of the complex plane. On the other hand, Theorem 3.5.1 showed that for a general normal operator, a membership

in each of the mentioned subclasses implies the analogous behavior of its spectrum. This is no coincidence. First

we need a proposition.

Proposition 3.5.2. Let T = Mh on L2 . Then the following are equivalent:

(a) Ran T is dense;

(b) h(x) 6= 0 a.e.;

(c) T is injective;

(d) T ∗ is injective.

Proof. Let A = {x : h(x) = 0}. Suppose that µ(A) 6= 0 and let f = χA . For any g ∈ L2 , hT g, f i = ∫ hg f dµ = ∫A hg dµ = 0, so f is a non-zero function that is orthogonal to Ran T . Thus (a) implies (b). Next, if T f = 0 then h(x)f (x) = 0 a.e., so assuming (b) we see that f = 0, and (c) follows. Notice that if T ∗ f = 0 then h̄(x)f (x) = 0 a.e., hence h(x)f (x) = 0 a.e., so T f = 0 and (c) implies (d). Finally, the implication (d) ⇒ (a) is a direct consequence of Theorem 2.2.3. 

Recall that the essential range of a function h ∈ L∞ (X, µ) is the set of all complex numbers z such that the measure of Eε = {x ∈ X : |h(x) − z| < ε} is different from zero for all ε > 0.

Theorem 3.5.3. Let T = Mh on L2 . Then σ(T ) is the essential range of h.

Proof. Notice that Mh − λ is a multiplication by h − λ. Let us denote by A = {x : h(x) 6= λ}, B = {x :

h(x) = λ}, and define a function g(x) = 1/(h(x) − λ) if x ∈ A and g(x) = 0 if x ∈ B.

Suppose first that λ ∈ ρ(T ). By Proposition 3.5.2, µ(B) = 0. Thus, g(x) = 1/(h(x) − λ) a.e. and Mg Mh−λ =

Mh−λ Mg = I. Since the assumption is that Mh−λ is invertible, the operator Mg is bounded, and by Example 2.1.4,

g ∈ L∞ . The estimate |g(x)| ≤ M a.e. implies that |h(x) − λ| ≥ 1/M a.e. so µ(E1/M ) = 0 and λ is not in the

essential range of h.

Conversely, if λ is not in the essential range of h, then there exists ε0 > 0 such that µ(Eε0 ) = 0. Consequently, |h(x) − λ| ≥ ε0 a.e., whence |g(x)| ≤ 1/ε0 a.e., and Mg is a bounded operator. This shows that Mh−λ is invertible

and the proof is complete. 
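A standard illustration (with X = [0, 1] and Lebesgue measure, assumed only for this example): let h(x) = x and T = Mh on L2 [0, 1]. Then σ(Mh ) is the essential range of h, namely [0, 1], yet Mh has no eigenvalues, since
\[
(x-\lambda) f(x) = 0 \ \text{a.e.} \;\Longrightarrow\; f = 0 \ \text{a.e.}
\]
(the set {x : x = λ} has measure zero). Thus σp (Mh ) = ∅ while every λ ∈ [0, 1] belongs to σapp (Mh ), in accordance with Theorem 3.3.1.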

Proposition 3.2.1 established that r(T ) ≤ kT k. For normal operators more can be said, and the following

result paves the way to that goal.

Proposition 3.5.4. If T is a normal operator then kT n k = kT kn , n ∈ N.

Proof. First we notice that, in view of Proposition 2.7.5, for n ∈ N,

kT n xk2 = hT n x, T n xi = hT ∗ T n x, T n−1 xi ≤ kT ∗ T n xkkT n−1 xk = kT n+1 xkkT n−1 xk ≤ kT n+1 kkT n−1 kkxk2

so kT n k2 ≤ kT n+1 kkT n−1 k.

Now we prove the assertion of the proposition using induction. We will assume that kT k 6= 0, otherwise the

theorem is trivially correct. It is easy to see that the statement is valid for n = 0 and n = 1. Suppose that it is

true for n. Then

kT k2n = (kT kn )2 = kT n k2 ≤ kT n+1 kkT n−1 k ≤ kT n+1 kkT kn−1



and, dividing both sides by kT kn−1 , it follows that kT kn+1 ≤ kT n+1 k. Since the opposite inequality is obvious,

the theorem is proved. 

Corollary 3.5.5. If T is a normal operator then r(T ) = kT k.

Proof. By Proposition 3.5.4, kT n k1/n = kT k for every n ∈ N, so Theorem 3.2.5 gives r(T ) = limn→∞ kT n k1/n = kT k. 
CHAPTER 4

Invariant subspaces

4.1. Compact operators

We have seen that the spectrum of a compact operator consists of the eigenvalues and 0 which may be but is

not necessarily an eigenvalue. Furthermore, each of the eigenspaces E(λ) = Ker (T − λ), corresponding to λ 6= 0,

is finite dimensional. The situation is especially pleasant when T is self-adjoint, in addition to being compact.

One of the benefits of this additional hypothesis concerns the eigenspaces.

Proposition 4.1.1. If T is a compact, self-adjoint operator on Hilbert space, and if λ, µ are two different

eigenvalues of T , then the corresponding eigenspaces E(λ), E(µ) are mutually orthogonal.

Proof. If T x = λx and T y = µy, then λhx, yi = hT x, yi = hx, T yi = µhx, yi, since µ ∈ R. Given that λ 6= µ

it follows that hx, yi = 0. 

Proposition 4.1.1 shows that H can be written as a direct sum M ⊕ M⊥ , where M = ⊕n∈N E(λn ), the

orthogonal direct sum of all eigenspaces. When T is self-adjoint, the subspace M⊥ is just a mirage.

Theorem 4.1.2. If T is a compact, self-adjoint operator on Hilbert space H, and σp (T ) = {λi }i∈I , then H =

⊕i∈I E(λi ).

Proof. Let M = ⊕i∈I E(λi ) and suppose that M 6= H. Notice that M is invariant for T = T ∗ , so M⊥ is

also reducing for T . Let T1 be the restriction of T to M⊥ . Then σ(T1 ) ⊂ σ(T ) by Theorem 3.3.3. Since T1 is

compact, if λ 6= 0 is in its spectrum it must be an eigenvalue. However, the corresponding eigenvectors would

also be eigenvectors of T and, as such, would belong to M. It follows that T1 must be quasinilpotent. On the

other hand T1 is normal which would necessitate that its norm and spectral radius are equal, so T1 = 0 which

means that M⊥ ⊂ E(0) ⊂ M. The obtained contradiction shows that H = ⊕i∈I E(λi ). 
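A finite dimensional illustration of Proposition 4.1.1 and Theorem 4.1.2 (every operator on C2 is compact, so the results apply): let
\[
T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = T^*, \qquad
\sigma_p(T) = \{1, -1\}, \qquad
E(1) = \operatorname{span}\tfrac{1}{\sqrt2}\begin{pmatrix}1\\1\end{pmatrix}, \quad
E(-1) = \operatorname{span}\tfrac{1}{\sqrt2}\begin{pmatrix}1\\-1\end{pmatrix}.
\]
The two eigenspaces are mutually orthogonal and C2 = E(1) ⊕ E(−1).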

Remark 4.1.1. Each eigenspace E(λ) is reducing for a self-adjoint operator so, relative to the decomposition

H = ⊕i∈I E(λi ), T can be represented as diag(Ti ), where Ti is an operator mapping E(λi ) into itself, and σ(Ti )

is a singleton {λi }. In addition, regardless of whether T is self-adjoint or not, each eigenspace is hyperinvariant

for T . This means that it is invariant for any operator that commutes with T . Indeed, if A commutes with T ,

then T − λ annihilates Ax together with x.

When T is not self-adjoint, the situation is much more complicated. The eigenspaces need not be mutually

orthogonal any more. The eigenvectors do not necessarily span H. In fact, there are compact operators without

eigenvalues, (so they are necessarily quasinilpotent). Still, we can see some of the structure remaining. The

eigenspaces are hyperinvariant (if there are any), although they need not be reducing. Since all operators on Cn

are compact, it is instructive to look at finite matrices.

Example 4.1.1. Let T = [ 1 1 ; 0 1 ] acting on C2 . Then σ(T ) = {1} and E(1) = C ⊕ (0), which is not invariant for T ∗ ; moreover, the span of the eigenvectors of T is not equal to C2 .

Example 4.1.2. Let T = [ 2 1 ; 0 3 ] acting on C2 . The eigenvalues of T are 2 and 3, with corresponding eigenvectors (1, 0) and (1, 1), and they are not mutually orthogonal.

When T has eigenvalues, it must have a non-trivial invariant subspace. What about the case of a compact

quasinilpotent operator?

Example 4.1.3. Let T be the Volterra-type integral operator with kernel K, i.e., T f (x) = ∫₀ˣ K(x, y)f (y) dy.

It is compact (Example 2.6.2) and has no eigenvalues different from 0. Indeed, let λ ∈ σp (T ), λ 6= 0, and let f ∈ L2 be a corresponding eigenfunction. Define g(x) = ∫₀ˣ |f (y)|2 dy. Clearly, g is a non-decreasing, absolutely continuous function and g′(x) = |f (x)|2 a.e. Let a = sup{x ∈ [0, 1] : g(x) = 0}. (Since g(0) = 0 such a number exists.) Now, for a.e. x,
|λf (x)|2 = |T f (x)|2 = |∫₀ˣ K(x, y)f (y) dy|2 ≤ (∫₀ˣ |K(x, y)|2 dy)(∫₀ˣ |f (y)|2 dy),

so |λ|2 g′(x)/g(x) ≤ ∫₀ˣ |K(x, y)|2 dy for a.e. x ∈ (a, 1). By integrating the last inequality we obtain

|λ|2 ln g(x) |ₐ¹ ≤ ∫ₐ¹ ∫₀ˣ |K(x, y)|2 dy dx ≤ kKk2 ,

where kKk denotes the L2 norm of the kernel, which is a contradiction since ln g(1) = ln kf k2 and kKk are finite, but limx→a+ ln g(x) = −∞.

This example shows that there are many compact quasinilpotent operators. For the Volterra-type integral

operators we can exhibit some invariant subspaces.

Theorem 4.1.3. Let T be a Volterra-type integral operator with kernel K, let a ∈ [0, 1], and let Ma = {f ∈

L2 : f (x) = 0 when x ≤ a}. Then Ma is a subspace of L2 that is invariant for T .

Exercise 4.1.1. Prove Theorem 4.1.3.

A deep result in the theory of integral operators is that every compact quasinilpotent operator is unitarily

equivalent to an operator of the form as in Example 4.1.3. Consequently every compact operator (quasinilpotent

or not) has an invariant subspace. As we will demonstrate, there is a way to prove an even stronger theorem.

(See Theorem 4.3.2 below.)

4.2. Line integrals

In this section we make a brief detour, by considering line integrals of functions of a complex variable with

values in L(H).

Example 4.2.1. Let T ∈ L(H) and consider the function ρ(λ) = (T − λ)−1 defined for λ ∈ ρ(T ). This

function is known as the resolvent of T .

Let C be a curve in the complex plane. We will assume that it is parametrized by a continuous function

γ : [0, 1] → C and that it is rectifiable, which means that γ is a function of bounded variation. Suppose that S is

a function defined and continuous on C, with values in L(H). Let P be a partition of [0, 1]: 0 = t0 < t1 < t2 <

· · · < tn = 1 and, for 1 ≤ k ≤ n let t∗k ∈ [tk−1 , tk ]. Then we have a partition of C with points γi = γ(ti ) and

intermediate points γi∗ = γ(t∗i ). Let us denote ∆γi = γi − γi−1 and consider the sum

∑nk=1 S(γk∗ ) ∆γk .
It can be shown that, as the partition is refined, these sums converge to a unique operator, which we denote by ∫C S(γ) dγ. Moreover, if T is an operator that commutes with each S(γ), then T commutes with ∫C S(γ) dγ.

Example 4.2.2. Let T ∈ L(H), and let C be a curve in ρ(T ) defined by γ = γ(t). For every λ ∈ ρ(T ), the
function ρ(λ) is a continuous function (in the uniform topology), so we can consider ∫C ρ(γ) dγ.

What happens when the curve C is replaced by a curve C 0 that is not far from C?

Theorem 4.2.1. Let C0 be a rectifiable curve in the resolvent set of T , and let C1 be a rectifiable curve homotopic to C0 within ρ(T ). Then ∫C0 ρ(γ) dγ = ∫C1 ρ(γ) dγ.

Remark 4.2.1. All these facts can be established following the same procedures as in the case when the

integrand is a complex-valued function. [See Conway.]

Now we turn to operators. Example 4.2.2 showed that the operator ∫C ρ(γ) dγ is well defined. It turns out

that this operator has some interesting properties.

Theorem 4.2.2. Let C be a simple closed rectifiable curve in ρ(T ). Then the operator
(4.1) P = − (1/(2πi)) ∫C ρ(λ) dλ

is a projection (not necessarily orthogonal) that commutes with every operator that commutes with T . Conse-

quently, the subspaces Ran P and Ker P are both invariant for T .

Proof. Let C 0 be a simple closed rectifiable curve in ρ(T ) that lies inside C and is homotopic to C. Then
(2πi)2 P 2 = ∫C ρ(γ) dγ ∫C 0 ρ(λ) dλ = ∫C ∫C 0 ρ(γ)ρ(λ) dλ dγ.

A calculation shows that ρ(γ)ρ(λ) = [ρ(γ) − ρ(λ)](γ − λ)−1 . Thus we have that

(2πi)2 P 2 = ∫C ρ(γ) ∫C 0 (γ − λ)−1 dλ dγ − ∫C 0 ρ(λ) ∫C (γ − λ)−1 dγ dλ = 0 − 2πi ∫C 0 ρ(λ) dλ = (2πi)2 P.

So, P 2 = P , and it follows from the definition of the integral and ρ(λ), that if A commutes with T then A

commutes with P .

Finally, if y ∈ Ran P , then T y = T P y = P T y so T y ∈ Ran P . Similarly, if x ∈ Ker P , then 0 = T P x = P T x

so T x ∈ Ker P . 
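A finite dimensional sanity check of (4.1) (the matrix and the curve are chosen only for illustration): let T = diag(1, 3) and let C be the circle |λ − 1| = 1, which lies in ρ(T ) and encloses only the point 1 of σ(T ). Then ρ(λ) = diag((1 − λ)−1 , (3 − λ)−1 ) and
\[
P = -\frac{1}{2\pi i}\int_C \rho(\lambda)\, d\lambda
  = \operatorname{diag}\!\Big( -\frac{1}{2\pi i}\int_C \frac{d\lambda}{1-\lambda},\;
                               -\frac{1}{2\pi i}\int_C \frac{d\lambda}{3-\lambda} \Big)
  = \operatorname{diag}(1, 0),
\]
the spectral projection onto the eigenspace for the eigenvalue 1, in agreement with Theorem 4.2.3 below.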

Exercise 4.2.1. Verify that ρ(γ)ρ(λ) = [ρ(γ) − ρ(λ)](γ − λ)−1 .

Theorem 4.2.2 required that the closed curve C lies in ρ(T ), but made no reference to the spectrum of

T . Consequently, we may have a part of the spectrum inside C and a part outside. In that case we obtain a

decomposition of T .

Theorem 4.2.3. Let T be an operator in L(H), let C be a simple closed rectifiable curve in ρ(T ), let P be

the projection defined in (4.1), and let T 0 and T 00 be the restrictions of T to Ran P and Ker P , respectively. Then

T = T 0 + T 00 , the spectrum of T 0 is precisely the subset of σ(T ) inside C, and the spectrum of T 00 is precisely the

subset of σ(T ) outside C.

Proof. Since ρ(λ) commutes with P , for any λ ∈ ρ(T ), the subspaces Ran P and Ker P are invariant for

ρ(λ). Let ρ0 (λ) and ρ00 (λ) denote the restrictions of ρ(λ) to these subspaces. If we denote by I 0 and I 00 the

identity operators on these subspaces, then ρ0 (λ)(λI 0 − T 0 ) = I 0 and ρ00 (λ)(λI 00 − T 00 ) = I 00 . Therefore, if λ ∈ ρ(T )

then λ must belong to both ρ(T 0 ) and ρ(T 00 ). In the other direction, if λ ∈ ρ(T 0 ) ∩ ρ(T 00 ) then there exist

operators A0 and A00 such that A0 (λI 0 − T 0 ) = I 0 and A00 (λI 00 − T 00 ) = I 00 . Now we can define, for any x ∈ H,

Ax = A0 P x + A00 (I − P )x. It is not hard to see that the restrictions of A to Ran P and Ker P are precisely A0

and A00 , and that A(λI − T )x = x when x belongs to either Ran P or Ker P . It follows that A(λI − T )x = x

holds for all x ∈ H, so λ ∈ ρ(T ). We conclude that λ ∈ σ(T ) iff λ ∈ σ(T 0 ) or λ ∈ σ(T 00 ).

Suppose now that λ lies outside of C. We will show that λ ∈ ρ(T 0 ), which is true iff there exists an operator A0

acting on Ran P and satisfying A0 (λI 0 − T 0 ) = I 0 . Actually, we will show that there exists an operator A ∈ L(H)

that commutes with T and A(λI − T ) = P . To that end, we notice that

(T − λI)ρ(γ) = (T − λI)(T − γI)−1 = (T − γI)(T − γI)−1 + (γ − λ)(T − γI)−1 = I + (γ − λ)(T − γI)−1 .

Therefore,

(4.2) (T − λI) (1/(2πi)) ∫C ρ(γ)(γ − λ)−1 dγ = ((1/(2πi)) ∫C (γ − λ)−1 dγ) I + (1/(2πi)) ∫C ρ(γ) dγ = 0 − P = −P.

On the other hand, if λ lies inside of C, then the integral in (4.2) equals I − P , so the restriction to Ker P yields

I 00 . Once again, this shows that λI 00 − T 00 is invertible. 

4.3. Invariant subspaces for compact operators

In Section 4.1 we have discovered that every compact operator on Hilbert space has an invariant subspace.

What more is there to say? For one thing, if λ is an eigenvalue of T , then E(λ) is hyperinvariant. Thus, it is

natural to ask whether a compact quasinilpotent operator always has a hyperinvariant subspace.

Before we address this question, let us take a look at the set of all operators that commute with T . It is

called the commutant of T , it is denoted by {T }0 , and it is an algebra. The last statement means that {T }0 is

closed under sums, products, and multiplication by scalars.

Exercise 4.3.1. Prove that {T }0 is an algebra.

Definition 4.3.1. A subalgebra of L(H) is transitive if it is weakly closed, unital (containing the identity

operator), and has only the trivial invariant subspaces.

Example 4.3.1. The algebra L(H) is transitive. It is clearly weakly closed and unital. If L(H) had a non-

trivial invariant subspace M, then we could pick non-zero vectors x ∈ M⊥ and y ∈ M, and consider the rank

one operator T = x ⊗ y. This would lead to a contradiction, since y ∈ M but T y = (x ⊗ y)y = hy, yix is a non-zero vector in M⊥ .

A big open problem in operator theory is whether L(H) is the only transitive algebra. This is true when H

is finite dimensional.

Theorem 4.3.1 (Burnside’s Theorem). Let H be a finite dimensional vector space of dimension larger than

1. If A is a transitive algebra of linear transformations on H, then A = L(H).

Proof. We will show that A contains a rank one operator. Let T0 ∈ A be an operator with minimal non-zero rank d. If d > 1, choose x1 and x2 so that the vectors T0 x1 , T0 x2 are linearly independent, and then choose A ∈ A so

that AT0 x1 = x2 . (Such an operator A exists, otherwise {AT0 x1 : A ∈ A} would be a subspace of H, invariant for

A.) Then T0 AT0 x1 (= T0 x2 ) and T0 x1 are linearly independent, and T0 AT0 − λT0 is not a zero transformation for

any λ ∈ C. On the other hand, there exists a complex number λ0 such that the restriction of T0 A − λ0 to Ran T0

is not invertible. Therefore, T0 AT0 − λ0 T0 ∈ A has rank less than d and greater than 0, contradicting the minimality

of d. Hence d = 1.

If T0 = x ⊗ y, we will show that A contains all rank one operators. Let u ⊗ v be a rank one operator. Once

again, there must be an operator A1 ∈ A such that A1 x = u. Notice that the algebra A∗ = {A∗ : A ∈ A} is also

transitive. Therefore, there exists an operator A2 ∈ A such that A∗2 y = v. Then A1 T0 A2 = u ⊗ v so A contains

all rank one operators and, hence, all finite rank operators, i.e. L(H). 

Exercise 4.3.2. Prove that if A is a subalgebra of L(H) and x ∈ H, then Ax = {Ax : A ∈ A} is a subspace

of H, invariant for A.

Exercise 4.3.3. Prove that A is transitive iff A∗ is transitive.

Theorem 4.3.2 (Lomonosov’s Theorem). Let A be a non-scalar operator on Hilbert space that commutes with a non-zero compact operator. Then A has a nontrivial hyperinvariant subspace.

The proof of this result uses a fixed point theorem.

Theorem 4.3.3. Let F be a non-empty, compact, convex subset of Hilbert space H, and let T be a linear operator in L(H) with the property that T (F ) ⊂ F . Then there exists p ∈ F such that T p = p.

Proof. For every n ∈ N, let Tn = (1 + T + T 2 + · · · + T n−1 )/n. The set Tn (F ) is convex, (Exercise 4.3.4),

and compact, as the image of a compact set under a continuous map. Also, Tn (F ) ⊂ F , because if x ∈ F then

T k x ∈ F , 0 ≤ k ≤ n − 1, and F is convex. Further, for any m, n ∈ N, Tm (Tn (F )) ⊂ Tm (F ) ∩ Tn (F ), which shows that the family {Tn (F )}n∈N has the finite intersection property. Since they are all compact subsets of the compact set F , they have a non-empty intersection, i.e., there exists p ∈ ∩{Tn (F ) : n ∈ N}. We will show that T p = p.

Suppose, to the contrary, that T p 6= p. Then there exists α > 0 such that kT p − pk ≥ α. Since F is a bounded

set, there exists M > 0 such that kxk ≤ M , for x ∈ F . Let n be a positive integer satisfying n > 2M/α. Since

p ∈ Tn (F ), there exists xn ∈ F such that p = Tn xn and, therefore,

T p − p = (T − 1)Tn xn = (T − 1)[(1 + T + T 2 + · · · + T n−1 )/n]xn = [(T n − 1)/n]xn .

Then α ≤ kT p − pk = k[(T n − 1)/n]xn k ≤ (kT n xn k + kxn k)/n ≤ 2M/n, which contradicts the choice of n. 

Exercise 4.3.4. Prove that if C is a convex set in Hilbert space H and T ∈ L(H), then T (C) is a convex set.

Now we can prove the result which is frequently referred to as the Lomonosov’s Lemma.

Theorem 4.3.4. If A is a transitive subalgebra of L(H) and if K is a non-zero compact operator in L(H),

then there exists an operator A ∈ A and a non-zero vector x ∈ H such that AKx = x.

Proof. Without loss of generality we will assume that kKk = 1. As we have already noticed, it suffices to

consider the case when K is quasinilpotent. Let x0 be a vector in H such that kKx0 k > 1 and notice that this

implies that kx0 k > 1, so the closed ball B(x0 , 1) does not contain 0. Let D be the image under K of the closed

ball B(x0 , 1). By Exercise 2.6.4, D is a compact set. In addition, it is convex, by Exercise 4.3.4 and it does not

contain 0. Indeed, for any x ∈ B(x0 , 1), kKxk ≥ kKx0 k − kK(x − x0 )k > 1 − kx − x0 k ≥ 0.

For an operator T ∈ A, consider the set UT = {y ∈ H : kT y − x0 k < 1}. Notice that UT = T −1 ({z :

kz − x0 k < 1}) so it is an open set. Moreover, every non-zero vector y belongs to UT , for some T ∈ A. Indeed, A

is transitive so the linear manifold {T y : T ∈ A} must be dense in H and, hence, there exists T ∈ A such that

kT y − x0 k < 1, which means that y ∈ UT . Thus, ∪T ∈A UT is a covering of H − {0}, and all the more of D. As

established earlier, D is a compact set, so there exist operators T1 , T2 , . . . , Tn ∈ A such that D ⊂ ∪ni=1 UTi . This

means that, for any y ∈ D, there exists Ti , 1 ≤ i ≤ n, such that kTi y − x0 k < 1.

Now, for each j, 1 ≤ j ≤ n, and y ∈ D, we define αj (y) = max{0, 1 − kTj y − x0 k}. Notice that each αj is
continuous on D, 0 ≤ αj ≤ 1, and ∑nj=1 αj (y) > 0, for all y ∈ D. Furthermore, αj (y) 6= 0 iff kTj y − x0 k < 1.

Define, for y ∈ D and 1 ≤ j ≤ n,

βj (y) = αj (y) / ∑ni=1 αi (y),

and notice that each βj is continuous on D, 0 ≤ βj ≤ 1, and ∑nj=1 βj (y) = 1, for all y ∈ D. Also, βj (y) 6= 0 iff αj (y) 6= 0 iff kTj y − x0 k < 1. Finally, let Ψ : D → H be defined by Ψ(y) = ∑nj=1 βj (y)Tj y. It is easy to see that

Ψ is continuous on D. We will show that Ψ(D) ⊂ B(x0 , 1). Let y ∈ D. Then

kΨ(y) − x0 k = k∑nj=1 βj (y)Tj y − ∑nj=1 βj (y)x0 k ≤ ∑nj=1 βj (y)kTj y − x0 k ≤ 1

so Ψ(y) ∈ B(x0 , 1) and Ψ(D) ⊂ B(x0 , 1). If we define Φ : B(x0 , 1) → H by Φ(y) = Ψ(Ky), then Φ is a continuous map of B(x0 , 1) into itself whose range is contained in the compact set Ψ(D). Since B(x0 , 1) is a closed, bounded, convex set, Schauder’s fixed point theorem shows that Φ has a fixed point p ∈ B(x0 , 1), hence non-zero. Now we define the operator A = ∑nj=1 βj (Kp)Tj , which is in A. Finally, AKp = ∑nj=1 βj (Kp)Tj Kp = Ψ(Kp) = Φ(p) = p. 
j=1

Now we can prove Lomonosov’s theorem.

Proof of Lomonosov’s Theorem. Let K be a non-zero compact operator that commutes with A and suppose, to the contrary, that the commutant {A}0 is transitive. By Theorem 4.3.4, there exists an operator T ∈ {A}0 and a non-zero vector x such that T Kx = x. In other words, the compact operator T K has 1 as an eigenvalue. Let E(1) denote the corresponding eigenspace, which is finite dimensional by Corollary 3.4.7. Since A commutes

with T K, the subspace E(1) is invariant for A as well. The restriction of A to E(1) must have an eigenvalue λ

and, since E(1) is invariant for A, we see that λ is an eigenvalue for A (not just the restriction). Let M denote

the eigenspace of A corresponding to λ, i.e., M = {x ∈ H : Ax = λx}. Being an eigenspace, it is hyperinvariant

for A. It is not (0), so it remains to notice that it is not H because A 6= λ. 



4.4. Normal operators

We have seen in Exercise 2.7.1 that a multiplication operator Mh on L2 is a normal operator. In this section

we will show that, in a sense, every normal operator is a multiplication by an essentially bounded function.

Example 4.4.1. Let T = [ a 0 ; 0 b ], with a, b ∈ C. Then T T ∗ = T ∗ T . Let X = {1, 2} and let µ be the counting measure on X. Notice that L2 (X, µ) is the collection of all functions f : X → C with norm (∫X |f |2 dµ)1/2 = (|f (1)|2 + |f (2)|2 )1/2 . Since this is the Euclidean norm, we see that L2 (X, µ) is just C2 . Finally, let h be a function on X, h(1) = a, h(2) = b. Then T can be identified with Mh on L2 (X, µ) = C2 .

Remark 4.4.1. A similar construction can be made for the case when T is an n × n diagonal matrix,

T = diag(cn ).

Example 4.4.2. Let T = diag(cn ), with cn ∈ C for all n ∈ N. Let X = N and µ({n}) = 1/2n . Then (X, µ)

is a finite measure space. Further, let h : X → C be defined by h(n) = cn . Then T can be identified with the

operator Mh on L2 (X, µ).

The last example shows the danger of going through the motions. What does it mean “can be identified”?

While it is easy to see that T f = Mh f for any sequence f , their domains are not the same. Namely, T acts on

`2 but Mh acts on L2 (X, µ), and these two spaces are not the same. For example, the sequence (1, 1, 1, . . . ) belongs to L2 (X, µ) but not to `2 . However, these two spaces are isomorphic. Let U : L2 (X, µ) → `2 be defined by U (f ) = (f (1)/√2, f (2)/√22 , f (3)/√23 , . . . ). It is easy to verify that U is a bounded, injective, and surjective linear map so, by the Open Mapping Principle, it is an isomorphism. Moreover, if f ∈ L2 (X, µ), then

U −1 T U (f ) = U −1 T (f (1)/√2, f (2)/√22 , f (3)/√23 , . . . ) = U −1 (c1 f (1)/√2, c2 f (2)/√22 , c3 f (3)/√23 , . . . )
= U −1 (h(1)f (1)/√2, h(2)f (2)/√22 , h(3)f (3)/√23 , . . . ) = hf,

so T is unitarily equivalent to Mh .

Exercise 4.4.1. Prove that the map U : L2 (X, µ) → `2 , constructed in Example 4.4.2, is an isometric

isomorphism.

Notice that in Examples 4.4.1 and 4.4.2 the measure was defined on each of the pieces. What happens if

pieces are not that obvious? How do we define a piece?

Definition 4.4.1. A vector ξ is cyclic for an operator T if the set {p(T )ξ : p is a polynomial} is dense in H.

An operator T is cyclic if it has a cyclic vector.

Example 4.4.3. Let T = S, the unilateral shift. The vector ξ = e1 is cyclic for S. If x ∈ `2 , x = (x1 , x2 , . . . )
then x can be approximated by truncated sequences (x1 , x2 , . . . , xn , 0, 0, . . . ) = x1 e1 + · · · + xn en = (x1 + x2 S + · · · + xn S n−1 )e1 = p(S)e1 .

Example 4.4.4. Let {. . . , e−2 , e−1 , e0 , e1 , e2 , . . . } be an o.n.b. of H, and let T be the bilateral shift: T en =

en+1 , n ∈ Z. Then ξ = e0 is not a cyclic vector for T , because {p(T )e0 }− = ∨∞k=0 ek . However, T ∗ en = en−1 , n ∈ Z, so we need to replace polynomials in T by polynomials in T and T ∗ , i.e., f (T ) = ∑ni,j=0 cij T i T ∗j . If the set

{f (T )ξ : f is a polynomial in T, T ∗ } is dense in H, we say that e0 is a star-cyclic vector for T .

Before we proceed, we revisit the Stone–Weierstrass Theorem [Bartle, p. 184]. Although it is proved under

the assumption that K is a compact subset of Rp , the same proof is valid when K is a compact set in C. Also,

we will rephrase it using the following terminology. We will say that an algebra A of functions separates points on

K if, for any two distinct points x, y ∈ K there is a function f ∈ A such that f (x) 6= f (y). If for each x ∈ K

there is a function g ∈ A such that g(x) 6= 0, we say that A vanishes at no point of K.

Theorem 4.4.1 (Stone–Weierstrass Theorem). Let A be an algebra of continuous, real-valued functions on

a compact set K in C. If A separates points on K and if A vanishes at no point of K, then the uniform closure B of A consists of all real-valued continuous functions on K.

The Stone–Weierstrass Theorem deals only with real-valued functions of complex variable. Now we extend

it to complex-valued functions. We will require that A be self-adjoint, meaning that if f ∈ A then f̄ ∈ A.



Theorem 4.4.2. Let A be a self-adjoint algebra of continuous, complex functions on a compact set K in C.

If A separates points on K and if A vanishes at no point of K, then the uniform closure B of A consists of all

complex continuous functions on K.

Proof. Let f = u + iv be a continuous function on K, and let AR denote the set of all real-valued functions in A. Since u and v are continuous real-valued functions on K, it suffices to show that every such function lies in the uniform closure of AR . Since AR is clearly an algebra, the result will follow from the Stone–Weierstrass Theorem,

once we show that AR separates points on K and vanishes at no point of K.

Suppose that z1 , z2 are distinct points in K. By assumption, A separates points on K so it contains a function

f such that f (z1 ) 6= f (z2 ). Also, A vanishes at no point of K, so it contains two functions g, h such that g(z1 ) 6= 0,

h(z2 ) 6= 0. Then, the function

F (z) = [f (z)g(z) − f (z2 )g(z)] / [f (z1 )g(z1 ) − f (z2 )g(z1 )]

belongs to A and has the property that F (z1 ) = 1, F (z2 ) = 0. Notice that, if F = u + iv ∈ A, then F̄ ∈ A and u = (F + F̄ )/2 ∈ AR . Clearly, u(z1 ) = 1, u(z2 ) = 0 so AR separates points on K.

Let z0 ∈ K. Then there exists a function G ∈ A such that G(z0 ) 6= 0. Let λ be a complex number such that

λG(z0 ) > 0 and notice that H = Re(λG) is a function in AR such that H(z0 ) > 0. Thus, AR vanishes at no point

of K and the proof is complete. 
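The self-adjointness assumption cannot be dropped (a standard cautionary example, with K taken to be the unit circle): the algebra of polynomials in z alone separates points of K and vanishes at no point of K, yet its uniform closure does not contain z̄. Indeed, for every polynomial p,
\[
2\pi = \Big| \int_0^{2\pi} \big( e^{-i\theta} - p(e^{i\theta}) \big)\, e^{i\theta}\, d\theta \Big|
\le 2\pi \, \| \bar z - p \|_\infty ,
\]
so kz̄ − pk∞ ≥ 1 for every polynomial p in z.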

Now we are ready to establish a stronger connection between normal operators and operators of multiplication.

Theorem 4.4.3. Let T be a normal operator in L(H) with a star-cyclic vector ξ. Then there exists a finite

measure µ on σ(T ), a bounded function h : σ(T ) → C, and an isomorphism U : L2 (σ(T ), µ) → H such that

U −1 T U f (x) = h(x)f (x) for a.e. x ∈ σ(T ) and all f ∈ L2 (σ(T ), µ).

Proof. Let A be the algebra of complex-valued polynomials in z and z̄ (for f ∈ A, f (T ) is obtained by substituting T for z and T ∗ for z̄). For f ∈ A we define L(f ) = hf (T )ξ, ξi.

Clearly, L is a linear functional and it is bounded on A. Indeed, |L(f )| = |hf (T )ξ, ξi| ≤ kf (T )ξkkξk ≤ kf (T )kkξk2 .

Further, T is normal, so f (T ) is also normal and, by Corollary 3.5.5, kf (T )k = r(f (T )) = sup{|λ| : λ ∈ σ(f (T ))}.

Finally, by the Spectral Mapping Theorem, λ ∈ σ(f (T )) iff λ = f (µ), for some µ ∈ σ(T ). Thus, kf (T )k =

sup{|f (µ)| : µ ∈ σ(T )} = kf k∞ . We conclude that |L(f )| ≤ kf k∞ kξk2 , so L is bounded on A. By Theorem 4.4.2,

A is dense in C(σ(T )) so we can extend L to a bounded linear functional on C(σ(T )). If f is a non-negative function

in C(σ(T )), then so is √f , and √f can be approximated uniformly by a sequence fn ∈ A. It follows that f can be approximated by the sequence fn f̄n and, by the continuity of L, L(f ) = lim L(fn f̄n ) = lim hfn (T )fn (T )∗ ξ, ξi = lim kfn (T )∗ ξk2 ≥ 0.

Thus, L is positive, and by the Riesz Representation Theorem [Royden, p. 352] there exists a finite positive measure µ on σ(T ) such that hf (T )ξ, ξi = ∫ f dµ. Now define the operator U on A by U (f ) = f (T )ξ. Since |f |2 = f f̄ we have that ∫ |f |2 dµ = hf̄ (T )f (T )ξ, ξi = hf (T )∗ f (T )ξ, ξi = kf (T )ξk2 = kU (f )k2 . That way, U is an isometry on A. Further, A is dense in L2 (µ) because it is dense in C(σ(T )), and the latter set is dense in L2 ([Rudin, Theorem 3.14]).

Therefore, by Theorem 2.3.4, U can be extended to an isometry U : L2 (σ(T ), µ) → H. Since ξ is star-cyclic, the

set {f (T )ξ : f ∈ A} is dense in H so the range of U is dense. Since U is bounded below its range is closed so U

is surjective.

Finally, if we denote by f˜(z) the function zf (z), then U −1 T U (f ) = U −1 T f (T )ξ = U −1 f˜(T )ξ = f˜ so T can

be identified with Mz on L2 (σ(T ), µ). 

What if T does not have a star-cyclic vector?

Theorem 4.4.4. Let T be a normal operator in L(H). Then there exists a compact set X, a finite measure µ

on X, a bounded function h : X → C, and an isomorphism U : L2 (X, µ) → H such that U −1 T U f (x) = h(x)f (x)

for a.e. x ∈ X and all f ∈ L2 (X, µ).

Proof. Let x1 be a non-zero vector and let M1 be the closed linear span of {f (T )x1 : f ∈ A}. If M1 = H

then x1 is a star-cyclic vector for T and Theorem 4.4.3 applies. If M1 6= H there exists a non-zero vector x2 ∈ M⊥
1.

Notice that M1 is invariant (hence reducing) for T and T ∗ , so the same is true of M⊥
1 . Now, either the closed

linear span of {f (T )x2 : f ∈ A} equals M⊥


1 , in which case T = T1 ⊕ T2 and both T1 and T2 are star-cyclic, or

we continue the process. Applying the Hausdorff Maximal Principle, we obtain a decomposition of H relative to

which T = diag(Ti ) and each of the operators on the diagonal is star-cyclic. By Theorem 4.4.3, for each i there

exists a finite measure space (Xi , µi ), a bounded function hi ∈ L∞ (Xi , µi ), and a unitary operator Ui : L2 (Xi , µi ) → Mi , such that Ui−1 Ti Ui = Mhi . Next we define X to be the disjoint union of the Xi and µ a measure on X so that µ = µi on Xi (rescaling each µi if necessary, we may assume that ∑i µi (Xi ) < ∞, so that µ is finite). Finally, we define a function h so that h = hi on Xi and a unitary operator U = diag(Ui ). Then T can be identified with Mh on L2 (X, µ), i.e., U −1 T U = Mh . 

We will now introduce a very important concept.

Definition 4.4.2. If X is a set, Ω a σ-algebra of subsets of X, and H is Hilbert space, a spectral measure

for (X, Ω, H) is a function E : Ω → L(H) such that

(a) for each ∆ in Ω, E(∆) is a projection;

(b) E(∅) = 0 and E(X) = 1;

(c) E(∆1 ∩ ∆2 ) = E(∆1 )E(∆2 );


(d) if {∆i }i∈I are pairwise disjoint sets in Ω, then E(∪i∈I ∆i ) = ∑i∈I E(∆i ).

Example 4.4.5. Let X = N, let Ω be the set of all subsets of N, and let {en }n∈N be an o.n.b. of H. For ∆ ⊂ N,

define E(∆) to be the projection onto the span ∨n∈∆ en . Properties (a) and (b) of Definition 4.4.2 are obvious.

Since E(∆)ei is either ei or 0, depending on whether i belongs to ∆ or not, we see that E(∆1 )E(∆2 )ei = 0 unless
i ∈ ∆1 ∩ ∆2 , in which case it equals ei . Thus, for x = ∑ xi ei , E(∆1 )E(∆2 )x = ∑i∈∆1 ∩∆2 xi ei = E(∆1 ∩ ∆2 )x, and (c) holds as well. Finally, if {∆i }i∈I are pairwise disjoint sets in Ω, and ∆ = ∪i∈I ∆i , writing x = ∑n∈N xn en , we have that E(∆)x = ∑i∈∆1 xi ei + ∑i∈∆2 xi ei + · · · = E(∆1 )x + E(∆2 )x + · · · .

Example 4.4.6. If X is a set, Ω a σ-algebra of subsets of X, and µ a measure on Ω, let H = L2 (X, µ), and

define, for ∆ ∈ Ω and f ∈ L2 , E(∆)f = χ∆ f . Then, E is a spectral measure.

Exercise 4.4.2. Verify that E in Example 4.4.6 is a spectral measure.

We will now show that the equality U −1 T U = Mh , established in Theorem 4.4.3, can be extended in the

following manner. Suppose that F is a bounded function on σ(T ). Then we can define F (T ) = U MF ◦h U −1 since,

for x ∈ H, U −1 x ∈ L2 and MF ◦h U −1 x is also in L2 , so U MF ◦h U −1 x is well defined.



Theorem 4.4.5. Let T be a normal operator on Hilbert space H. The mapping F 7→ F (T ) is an

algebra homomorphism from L∞ (σ(T ), µ) to L(H).

Exercise 4.4.3. Prove Theorem 4.4.5.

Remark 4.4.2. The homomorphism F 7→ F (T ) is called a functional calculus.

Example 4.4.6 shows that a spectral measure can be defined using multiplication by characteristic functions.

We present a variation on this theme.

Theorem 4.4.6. If T is a normal operator on Hilbert space, ∆ is a measurable subset of σ(T ), and F = χ∆ ,

then the mapping E defined by E(∆) = F (T ) = U MF ◦h U −1 is a spectral measure.

Exercise 4.4.4. Prove Theorem 4.4.6.

Exercise 4.4.5. What is E when T = diag(cn )?

Let x, y ∈ H and denote by f = U −1 x and g = U −1 y. Since U is a surjective isometry, U −1 = U ∗ so f = U ∗ x

and g = U ∗ y. If F = χ∆ then, by definition, hE(∆)x, yi = hF (T )x, yi = hU MF ◦h U −1 x, yi = hMF ◦h f, gi =


∫ (F ◦ h)f ḡ dµ. On the other hand, E is the spectral measure of T , so ∆ 7→ hE(∆)x, yi also defines a measure ν(∆). It

is often called the scalar spectral measure of T .

Exercise 4.4.6. Verify that ν is a measure.

Now, hE(∆)x, yi is equal to ∫ χ∆ dν as well as to ∫ (F ◦ h)f ḡ dµ, so we have the equality

(4.3) ∫ (F ◦ h)f ḡ dµ = ∫ F dν

whenever F is a characteristic function. Since every simple function is a linear combination of characteristic

functions, it is not hard to see that (4.3) remains true when F is a simple function. Further, every bounded

function can be approximated by simple functions so, by relying on Lebesgue Dominated Convergence Theorem,

we obtain that (4.3) holds for any bounded function F . In particular, if F (λ) = λ, we obtain that hT x, yi =
∫ λ dν = ∫ λ dhE(λ)x, yi. Since this is true for all x, y ∈ H, we can write T = ∫ λ dE(λ) or

(4.4) T = ∫ λ dE.

More generally, since (4.3) holds for any bounded function F , it follows that, for any such function,

(4.5) F (T ) = ∫ F (λ) dE.
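When the spectrum is finite the integrals in (4.4) and (4.5) are just finite sums over the eigenvalues: T = ∑λ λ E({λ}) and F (T ) = ∑λ F (λ) E({λ}). The following sketch checks this for an arbitrarily chosen 3 × 3 normal matrix with a repeated eigenvalue; it is an illustration only.

    import numpy as np

    lam = np.array([2.0, 2.0, -1.0 + 1j])                        # spectrum {2, -1+i}
    U, _ = np.linalg.qr(np.array([[1, 2, 0],
                                  [0, 1, 1j],
                                  [1, 0, 1]], dtype=complex))    # a unitary
    T = U @ np.diag(lam) @ U.conj().T                            # a normal matrix

    def E(point):
        """Spectral projection E({point}): sum of u_k u_k* over eigenvalues at point."""
        cols = [U[:, k:k + 1] for k in range(3) if np.isclose(lam[k], point)]
        return sum(c @ c.conj().T for c in cols)

    spectrum = [2.0, -1.0 + 1j]
    assert np.allclose(sum(z * E(z) for z in spectrum), T)                   # (4.4)
    F = lambda z: z**3 - 2 * z
    assert np.allclose(sum(F(z) * E(z) for z in spectrum),
                       U @ np.diag(F(lam)) @ U.conj().T)                     # (4.5)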

Theorem 4.4.6 established that to every normal operator there corresponds a spectral measure. The following

result shows how essential this measure is for the operator.

Theorem 4.4.7. If T is a normal operator and E the associated spectral measure, then an operator A com-

mutes with T iff A commutes with E(∆) for every Borel set ∆ ⊂ σ(T ).

Proof. Let x, y ∈ H, and let F be a bounded function on σ(T ). Then

hAF (T )x, yi = hF (T )x, A∗ yi = ∫ F (λ) dhE(λ)x, A∗ yi, and
hF (T )Ax, yi = ∫ F (λ) dhE(λ)Ax, yi.

If A and T commute, the Fuglede–Putnam Theorem implies that A commutes with T ∗ , hence with F (T ), for any

bounded function F . In particular, by taking F = χ∆ , we obtain that hE(∆)x, A∗ yi = hE(∆)Ax, yi or, equiv-

alently that hAE(∆)x, yi = hE(∆)Ax, yi. Since this holds for all x, y ∈ H it follows that A commutes with

E(∆).

Conversely, if A commutes with E(∆), then hE(∆)x, A∗ yi = hAE(∆)x, yi = hE(∆)Ax, yi. Since hAT x, yi = ∫ λ dhE(λ)x, A∗ yi and hT Ax, yi = ∫ λ dhE(λ)Ax, yi, we obtain that hAT x, yi = hT Ax, yi for all x, y ∈ H. Thus

AT = T A and the proof is complete. 
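The theorem is easy to test numerically in finite dimensions: for a normal matrix with finite spectrum, an operator commutes with T exactly when it commutes with every spectral projection. The sketch below checks both directions on one commuting and one non-commuting example; the matrices are arbitrary.

    import numpy as np

    U, _ = np.linalg.qr(np.array([[1.0, 1.0], [1.0, -2.0]]))
    E1, E3 = U[:, :1] @ U[:, :1].T, U[:, 1:] @ U[:, 1:].T        # E({1}), E({3})
    T = 1.0 * E1 + 3.0 * E3                                      # normal, spectrum {1, 3}

    def commutes(X, Y):
        return np.allclose(X @ Y, Y @ X)

    A_good = 2 * E1 - 5 * E3                     # a function of T, so it commutes with T
    A_bad = np.array([[0.0, 1.0], [0.0, 0.0]])   # does not commute with T
    for A in (A_good, A_bad):
        assert commutes(A, T) == (commutes(A, E1) and commutes(A, E3))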

Theorem 4.4.7 has an important consequence that concerns the existence of hyperinvariant subspaces.

Corollary 4.4.8. If T is a normal operator in L(H), and E is its spectral measure, then Ran E(∆) is a hyperinvariant subspace for T , for any Borel set ∆ ⊂ σ(T ). Consequently, if T is not a scalar multiple of the identity, then T has a non-trivial hyperinvariant subspace.

Exercise 4.4.7. Prove Corollary 4.4.8.


CHAPTER 5

Spectral radius algebras

5.1. Compact operators

In Section 4.3 we have shown that every compact operator is contained in an algebra, namely its commutant,

that is not transitive. Are there other algebras that would contain a given operator and still have an invariant

subspace? We will show that the answer is affirmative. Let us denote the class of quasinilpotent operators as Q.

The following is a direct consequence of Theorem 4.3.4.

Proposition 5.1.1. Let A be a unital subalgebra of L(H) and let K be a nonzero compact operator in L(H). If AK ∈ Q for each A ∈ A, then A has a n. i. s.

Proof. If A is transitive, by Theorem 4.3.4 there exists A ∈ A such that 1 ∈ σp (AK), so AK ∉ Q. 

Our goal is to find an algebra A with the property stated in Proposition 5.1.1. Let A ∈ L(H). For m ∈ N,

define

(5.1) dm = m/(1 + m r(A)), and Rm = (∑_{n=0}^∞ dm^{2n} A^{∗n} A^n)^{1/2}.

Exercise 5.1.1. Prove that the series in (5.1) converges uniformly and, for each m ∈ N, Rm is invertible

with ||Rm^{-1}|| ≤ 1.

If A is an operator in L(H) and Rm is as in (5.1), we associate with A the collection

BA = {T ∈ L(H) : supm ||Rm T Rm^{-1}|| < ∞}.

Exercise 5.1.2. Show that BA is an algebra.
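In finite dimensions the operators Rm and the defining condition for BA can be computed directly, which is a convenient way to build intuition. The Python sketch below truncates the series in (5.1) (the tail is small because dm r(A) < 1) and estimates supm kRm T Rm^{-1}k over a few values of m; it is a heuristic illustration only, and all helper names are ad hoc.

    import numpy as np

    def spectral_radius(A):
        return max(abs(np.linalg.eigvals(A)))

    def R(A, m, terms=200):
        """Truncated finite-dimensional stand-in for R_m from (5.1)."""
        d = m / (1 + m * spectral_radius(A))
        S = np.zeros(A.shape, dtype=complex)
        An = np.eye(A.shape[0], dtype=complex)
        for n in range(terms):                   # d * r(A) < 1, so the tail is negligible
            S += d**(2 * n) * An.conj().T @ An
            An = An @ A
        w, V = np.linalg.eigh(S)                 # S is Hermitian and S >= I
        return V @ np.diag(np.sqrt(w)) @ V.conj().T

    def sup_RTRinv(A, T, ms=(1, 2, 4, 8, 16, 32)):
        """max over a few m of ||R_m T R_m^{-1}||; a bounded value suggests T is in B_A."""
        return max(np.linalg.norm(R(A, m) @ T @ np.linalg.inv(R(A, m)), 2) for m in ms)

For example, if T is a polynomial in A, the value returned by sup_RTRinv(A, T) stays bounded as more values of m are included, in line with Proposition 5.1.2 and Corollary 5.1.3 below.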

We will show that BA contains all operators that commute with A. In fact, we can prove a stronger result.

Proposition 5.1.2. Suppose A is a nonzero operator, B is a power bounded operator commuting with A,

and T is an operator for which AT = BT A. Then T ∈ BA .

An operator T is power bounded if there exists C > 0 such that kT n k ≤ C, for all n ∈ N. For example, if

kT k ≤ 1, then T is power bounded.

Proof. It is easy to verify that A2 T = B 2 T A2 . Using induction one can prove that An T = B n T An , for

every n ∈ N. The operator B is power bounded so there is a constant C such that kB n k ≤ C, for each n ∈ N.

For any vector x ∈ H and any positive integer m, we have that


(5.2) kRm xk^2 = hRm x, Rm xi = hRm^2 x, xi = ∑_{n=0}^∞ dm^{2n} hA^{∗n} A^n x, xi = ∑_{n=0}^∞ dm^{2n} hA^n x, A^n xi = ∑_{n=0}^∞ dm^{2n} kA^n xk^2.

On the other hand, kA^n T Rm^{-1} xk = kB^n T A^n Rm^{-1} xk ≤ CkT k kA^n Rm^{-1} xk, so we obtain that

kRm T Rm^{-1} xk^2 = ∑_{n=0}^∞ dm^{2n} kA^n T Rm^{-1} xk^2 ≤ C^2 kT k^2 ∑_{n=0}^∞ dm^{2n} kA^n Rm^{-1} xk^2 = C^2 kT k^2 kRm Rm^{-1} xk^2 = C^2 kT k^2 kxk^2.

Thus T ∈ BA . 

From this we deduce an easy consequence.

Corollary 5.1.3. Let T be an operator such that AT = λT A for some complex number λ with |λ| ≤ 1.

Then T ∈ BA . In particular BA contains the commutant of A.

Example 5.1.1. If u and v are unit vectors then Bu⊗v = {T ∈ L(H) : v is an eigenvector for T ∗ }. Let A = u ⊗ v, where u and v are unit vectors. One knows that r(u ⊗ v) = |hu, vi|. A calculation shows that, for n ∈ N, A^n = hu, vi^{n−1} u ⊗ v and A^{∗n} A^n = r^{2n−2} v ⊗ v. Therefore,

Rm^2 = I + (∑_{n=1}^∞ dm^{2n} r^{2n−2}) v ⊗ v = I + (dm^2/(1 − dm^2 r^2)) v ⊗ v.

Let λm = (1 + dm^2/(1 − dm^2 r^2))^{1/2} for every m ∈ N. Notice that λm → ∞ as m → ∞. Indeed, if r > 0 then dm → 1/r, so dm^2 r^2 → 1; if A is quasinilpotent (r = 0), then dm = m and λm = (1 + m^2)^{1/2}. If we denote by M the one dimensional space spanned by v then, relative to H = M ⊕ M⊥ , the matrices of Rm and Rm^{-1} are

Rm = [ λm 0 ; 0 1 ]   and   Rm^{-1} = [ 1/λm 0 ; 0 1 ].

If T is an arbitrary operator, say T = [ X Y ; Z W ], then

Rm T Rm^{-1} = [ λm 0 ; 0 1 ] [ X Y ; Z W ] [ 1/λm 0 ; 0 1 ] = [ X λm Y ; Z/λm W ],

and it is easy to see that supm kRm T Rm^{-1}k < ∞ if and only if Y = 0. This means that M⊥ is invariant for T or,

equivalently, that M is invariant for T ∗ , and this is true iff v is an eigenvector for T ∗ .
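The dichotomy in this example is easy to observe numerically, since Rm is available in closed form: Rm = I + (λm − 1) v ⊗ v. In the sketch below (an illustration only, with arbitrarily chosen vectors and matrices) the norm kRm T Rm^{-1}k stays bounded when the corner Y vanishes and grows like λm otherwise.

    import numpy as np

    u = np.array([1.0, 2.0, 2.0]) / 3.0          # unit vector
    v = np.array([0.0, 0.0, 1.0])                # unit vector; M = span{v}
    r = abs(np.vdot(u, v))                       # r(u (x) v) = |<u, v>| = 2/3

    def Rm(m):
        d = m / (1 + m * r)
        lam = np.sqrt(1 + d**2 / (1 - d**2 * r**2))
        return np.eye(3) + (lam - 1) * np.outer(v, v)            # I + (lambda_m - 1) v (x) v

    T_good = np.array([[1.0, 2.0, 7.0],
                       [0.0, 3.0, 8.0],
                       [0.0, 0.0, 6.0]])         # T* v = 6 v, i.e. the corner Y is 0
    T_bad = T_good.copy()
    T_bad[2, 0] = 1.0                            # now v is not an eigenvector of T*

    for T, label in ((T_good, "Y = 0"), (T_bad, "Y != 0")):
        vals = [np.linalg.norm(Rm(m) @ T @ np.linalg.inv(Rm(m)), 2) for m in (1, 10, 100)]
        print(label, np.round(vals, 2))          # bounded in the first case, growing in the second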

Exercise 5.1.3. Prove that r(u ⊗ v) = |hu, vi|.

Now we define QA = {T ∈ L(H) : kRm T Rm^{-1}k → 0}.

Theorem 5.1.4. QA is a two sided ideal in BA and every operator in QA is quasinilpotent. Furthermore, if

A is quasinilpotent, then A ∈ QA .

Proof. Let T ∈ QA and let X ∈ BA . Then kRm T XRm^{-1}k ≤ kRm T Rm^{-1}k kRm XRm^{-1}k → 0, so QA is a right ideal. Since the same estimate holds for XT we see that QA is a two sided ideal in BA . On the other hand, r(T ) = r(Rm T Rm^{-1}) ≤ kRm T Rm^{-1}k, which shows that if T ∈ QA then it must be quasinilpotent. Finally, if A ∈ Q then r(A) = 0 and dm = m. Using (5.2) we see that

kRm ARm^{-1} xk^2 = ∑_{n=0}^∞ m^{2n} kA^{n+1} Rm^{-1} xk^2 = (1/m^2) ∑_{n=0}^∞ m^{2n+2} kA^{n+1} Rm^{-1} xk^2
= (1/m^2) [ −kRm^{-1} xk^2 + ∑_{n=0}^∞ m^{2n} kA^n Rm^{-1} xk^2 ] = (1/m^2) [ kxk^2 − kRm^{-1} xk^2 ] ≤ kxk^2/m^2,

from which it follows that kRm ARm^{-1}k ≤ 1/m → 0, m → ∞. 
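The final estimate kRm ARm^{-1}k ≤ 1/m can be observed directly for a nilpotent matrix, where r(A) = 0, dm = m, and the series for Rm^2 terminates. A small sketch (illustration only):

    import numpy as np

    A = np.diag([1.0, 1.0], k=1)                 # 3x3 Jordan block with eigenvalue 0; A^3 = 0
    for m in (1, 2, 5, 10, 100):
        S = np.eye(3) + m**2 * A.T @ A + m**4 * (A @ A).T @ (A @ A)   # R_m^2, exactly
        w, V = np.linalg.eigh(S)
        Rm = V @ np.diag(np.sqrt(w)) @ V.T       # R_m = S^{1/2}
        val = np.linalg.norm(Rm @ A @ np.linalg.inv(Rm), 2)
        print(m, round(val, 4), 1 / m)           # val <= 1/m, as the proof shows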

Remark 5.1.1. The ideal QA need not contain every quasinilpotent operator in BA . Indeed, if A is the unilateral forward shift, then A^{∗n} A^n = I for every n, so Rm^2 = (1 − dm^2)^{−1} I and Rm is a scalar multiple of the identity. Since every operator commutes with a scalar multiple of the identity, it follows that BA = L(H). On the other hand, kRm T Rm^{-1}k = kT k for any T in L(H), so QA = (0).

The following result justifies our interest in QA .

Theorem 5.1.5. If QA 6= (0) and there exists a nonzero compact operator in BA , then BA has a n. i. s.

Proof. Let K be a nonzero compact operator in BA . Without loss of generality we may assume that QK = 0

for every Q ∈ QA . Indeed, if QK 6= 0 for some Q ∈ QA , then QK is a compact quasinilpotent operator with the

property that BA QK ⊂ Q and the result follows from Proposition 5.1.1.

Let Q be a fixed nonzero operator in QA and let T be an arbitrary operator in BA . Then QT ∈ QA and,

hence, QT K = 0. Since K 6= 0 there is a nonzero vector z in the range of K. Clearly, QT z = 0 so T z ∈ ker Q for

all T ∈ BA . Naturally, the closure of the subspace {T z : T ∈ BA } is an invariant subspace for BA . It is nonzero

since z 6= 0 and the identity operator is in BA . Finally, it is not H since it is contained in the kernel of a nonzero

operator Q. 

From Theorem 5.1.5 we deduce some easy consequences.

Corollary 5.1.6. Suppose that A is a quasinilpotent operator, B is a power bounded operator commuting

with A, and K is a nonzero compact operator satisfying AK = BKA. Then BA has a n. i. s.

Proof. By Proposition 5.1.2, K is in BA . Since A ∈ Q, Theorem 5.1.4 shows that A ∈ QA . The result then

follows from Theorem 5.1.5. 

Corollary 5.1.7. Suppose that A is a quasinilpotent operator, λ is a complex number, and K is a nonzero

compact operator satisfying AK = λKA. Then either BA or BA∗ has a n. i. s. In any case, A has a proper

hyperinvariant subspace.

Proof. If |λ| ≤ 1, Corollary 5.1.6 implies that BA has a n. i. s. For |λ| > 1, taking adjoints in AK = λKA gives A∗ K ∗ = (1/λ̄)K ∗ A∗ , so the same argument shows that BA∗ has a n. i. s. If M is such a subspace then it is hyperinvariant for A∗ . It follows that M⊥ is a proper hyperinvariant subspace for A. 

Now we arrive at the main result of this section.

Theorem 5.1.8. Let K be a nonzero compact operator on the separable, infinite dimensional Hilbert space

H. Then BK has a n. i. s.

Proof. We will show that QK 6= (0). The result will then follow from Theorem 5.1.5. Of course, if K is

quasinilpotent, Theorem 5.1.4 shows that K ∈ QK . Therefore, for the rest of the proof, we will assume that

r(K) > 0.

Notice that x ⊗ y ∈ QK iff kRm (x ⊗ y)Rm^{-1}k → 0, where Rm is defined by (5.1) with A = K. However, kRm (x ⊗ y)Rm^{-1}k = kRm xk kRm^{-1} yk, so it suffices to exhibit a rank one operator x ⊗ y with supm kRm xk < ∞ and limm kRm^{-1} yk = 0. A vector y with the desired

property is supplied by the following lemma.

Lemma 5.1.9. Suppose that K is a compact operator and r(K) > 0. Then there exists a unit vector v such that Rm^{-1} v → 0, m → ∞.

Proof. Let λ be a complex number in σ(K) such that |λ| = r(K). Then λ̄ ∈ σ(K ∗ ) and, since K and K ∗ are compact and λ ≠ 0, there are unit vectors u and v for which Ku = λu and K ∗ v = λ̄v. An easy calculation shows that K (u ⊗ v) = (u ⊗ v) K, so that u ⊗ v ∈ {K}′ ⊂ BK . It then follows that supm kRm uk kRm^{-1} vk < ∞. On the other hand, a straightforward calculation shows that kRm uk → ∞, m → ∞. Since supm kRm uk kRm^{-1} vk < ∞ it must follow that kRm^{-1} vk → 0. 

Exercise 5.1.4. Prove that kRm uk → ∞, m → ∞.

So it remains to provide a nonzero vector x with the property that

(5.3) supm kRm xk < ∞.

To that end, it suffices for x to satisfy

(5.4) lim supn kK^n xk^{1/n} < r(K).

Indeed, (5.4) implies that the power series ∑_{n=0}^∞ kK^n xk^2 z^n has radius of convergence bigger than 1/r^2 and, consequently, the series ∑_n kK^n xk^2/r^{2n} converges. Since

kRm xk^2 = ∑_{n=0}^∞ (m/(1 + mr))^{2n} kK^n xk^2

and {m/(1 + mr)} is an increasing sequence converging to 1/r, we see that (5.4) implies (5.3).
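For a diagonal operator K = diag(cn ) the operator Rm is diagonal with entries (1 − dm^2 |ck |^2)^{−1/2}, so the implication (5.4) ⇒ (5.3) can be watched numerically: a vector supported on a small eigenvalue keeps kRm xk bounded, while an eigenvector for a peripheral eigenvalue does not (compare also the proof of Proposition 5.1.10 below). A short sketch, with arbitrarily chosen data:

    import numpy as np

    c = np.array([1.0, 0.25])                    # K = diag(c), r(K) = 1
    x_small = np.array([0.0, 1.0])               # limsup ||K^n x||^{1/n} = 0.25 < r(K)
    x_peripheral = np.array([1.0, 0.0])          # limsup ||K^n x||^{1/n} = 1 = r(K)
    for m in (1, 10, 100, 1000):
        d = m / (1 + m * max(abs(c)))            # d_m = m / (1 + m r(K)) < 1/r(K)
        Rm = np.diag(1 / np.sqrt(1 - d**2 * np.abs(c)**2))       # closed form for diagonal K
        print(m, np.linalg.norm(Rm @ x_small), np.linalg.norm(Rm @ x_peripheral))
    # The first column stays near 1.03; the second grows without bound as m increases.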

It is not hard to see that, if K has an eigenvalue λ with the property that |λ| < r(K), then any eigenvector corresponding to λ satisfies (5.4). Thus we may assume that every eigenvalue of K has modulus r(K). Since the nonzero points of σ(K) are eigenvalues of the compact operator K and can accumulate only at 0, there are then only finitely many of them, all on the circle |λ| = r(K); as H is infinite dimensional, 0 ∈ σ(K), so 0 is an isolated point of σ(K). Let Γ be a positively oriented circle around the origin such that 0 is the only element of σ(K) inside the circle, and let

P = −(1/(2πi)) ∫_Γ (K − λI)^{-1} dλ.

By Theorem 4.2.2, P is a projection that commutes with K, and the restriction K0 of K to the invariant subspace
Ran P is quasinilpotent. It follows that, if x is a unit vector in Ran P , then kK^n xk^{1/n} = kK0^n xk^{1/n} ≤ kK0^n k^{1/n} → 0. This completes the proof of the theorem. 

Exercise 5.1.5. Prove that if u ⊗ v is a rank one operator, then ku ⊗ vk = kukkvk.

As mentioned earlier, the presence of proper invariant subspaces for BK (K compact) is an advancement in invariant subspace theory only if BK differs from {K}′ . We do not know at the present time if BK can equal {K}′ for a compact nonzero operator K on an infinite dimensional space. We do know that the answer is no if K has positive spectral radius.

Proposition 5.1.10. Let K be a compact operator on an infinite dimensional Hilbert space such that r(K) > 0. Then BK ≠ {K}′ .

Proof. Notice that the vectors x and y obtained in the proof of Theorem 5.1.8 satisfy (5.3) and K ∗ y = λ̄y, with |λ| = r(K). Since it was established that x ⊗ y ∈ BK , it suffices to prove that K(x ⊗ y) ≠ (x ⊗ y)K. Since (x ⊗ y)K = x ⊗ (K ∗ y) = λ(x ⊗ y) and K(x ⊗ y) = (Kx) ⊗ y, the two products are equal only if Kx = λx, and Kx ≠ λx is a simple consequence of (5.3). 
