Potential Scattering on the Real Line

Siu-Hung Tang and Maciej Zworski


1.1 Free Hamiltonian

We start with a systematic analysis of the simplest free scattering problem on $\mathbb{R}$, namely when the quantum Hamiltonian is given by
\[ H_0 = D_x^2 = -\partial_x^2, \qquad D_x = \frac{1}{i}\partial_x. \]
Its eigenequation is
\[ (H_0 - \lambda^2)u = 0 \tag{1.1} \]
with general solution given by
\[ u = A e^{i\lambda x} + B e^{-i\lambda x} \tag{1.2} \]
where $A$ and $B$ are arbitrary constants. Also, the Schrödinger equation
\[ (i\partial_t - H_0)v = 0, \qquad v|_{t=0} = v_0 \tag{1.3} \]
is solved by
\[ v(t,x) = \frac{1}{(4\pi i t)^{1/2}}\int e^{i(x-y)^2/4t}\, v_0(y)\, dy. \tag{1.4} \]
In particular, when $v_0 = u$ with $u$ as in (1.2), we have
\[ v(t,x) = A e^{i\lambda x - i\lambda^2 t} + B e^{-i\lambda x - i\lambda^2 t}. \tag{1.5} \]
Take $\lambda > 0$ and write the phases as
\[ \lambda(x - \lambda t) \quad\text{and}\quad -\lambda(x + \lambda t). \]
We can think of the first term in (1.5) as a wave moving to the right and the second term as a wave moving to the left. Consequently, we call
\[ A e^{i\lambda x} \ \text{for } x < 0, \qquad B e^{-i\lambda x} \ \text{for } x > 0 \]
the incoming terms and
\[ A e^{i\lambda x} \ \text{for } x > 0, \qquad B e^{-i\lambda x} \ \text{for } x < 0 \]
the outgoing terms.

Figure 1: the plane wave $e^{i\lambda x}$ moving to the right and $e^{-i\lambda x}$ moving to the left along the $x$-axis.

Hence, the solution $e_+(x,\lambda) := e^{i\lambda x}$ of (1.1) is incoming for $x < 0$ and outgoing for $x > 0$. Similarly, the solution $e_-(x,\lambda) := e^{-i\lambda x}$ of (1.1) has the opposite property. We call them plane waves. Clearly, there exists no solution of (1.1) with only incoming terms or only outgoing terms for $\lambda \neq 0$. However, for $\lambda = 0$, the solution $u \equiv 1$ is considered both incoming and outgoing. Although the physical intuition underlying our incoming and outgoing convention makes sense only for $\lambda > 0$, we will take the same convention for all $\lambda \in \mathbb{C}$. Note that this amounts to a convention of taking the square root of the energy $\lambda^2$.
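The propagator formula (1.4) can be checked numerically against the exact free evolution of a Gaussian, for which $e^{-itH_0}e^{-x^2/2} = (1+2it)^{-1/2}e^{-x^2/2(1+2it)}$. The following sketch evaluates (1.4) by quadrature and compares; the grid, the sample points, and the tolerance are illustrative choices, not part of the text.

```python
import cmath

def propagate(v0, t, x, a=-12.0, b=12.0, n=4801):
    # v(t,x) = (4*pi*i*t)^(-1/2) * integral of e^{i(x-y)^2/(4t)} v0(y) dy, formula (1.4)
    h = (b - a) / (n - 1)
    s = 0j
    for k in range(n):
        y = a + k * h
        w = h if 0 < k < n - 1 else h / 2          # trapezoid weights
        s += w * cmath.exp(1j * (x - y) ** 2 / (4 * t)) * v0(y)
    return s / cmath.sqrt(4j * cmath.pi * t)        # principal branch of the square root

def exact(t, x):
    # closed-form free evolution of the Gaussian v0(x) = exp(-x^2/2)
    return cmath.exp(-x ** 2 / (2 * (1 + 2j * t))) / cmath.sqrt(1 + 2j * t)

v0 = lambda y: cmath.exp(-y ** 2 / 2)
err = max(abs(propagate(v0, 1.0, x) - exact(1.0, x)) for x in (-1.3, 0.0, 0.7, 2.5))
```

Since the Gaussian decays rapidly, truncating the integral to $[-12,12]$ costs nothing, and the trapezoid rule on the smooth, resolved oscillatory integrand is extremely accurate.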
Next we consider the equation
\[ (H_0 - \lambda^2)u = f \qquad\text{for } f \in C_0^\infty(\mathbb{R}). \tag{1.6} \]
It has a unique outgoing and a unique incoming solution, given by
\[ u_\pm(x,\lambda) = \pm\frac{i}{2\lambda}\int f(y)\, e^{\pm i\lambda|x-y|}\, dy, \tag{1.7} \]
where the plus sign gives the outgoing solution and the minus sign gives the incoming one. Note that the uniqueness follows from the fact that there is no incoming or outgoing solution to the eigenequation (1.1), as mentioned above.

An alternative characterization of the outgoing or incoming solution $u_\pm$ of (1.6) is that
\[ u_+(\cdot,\lambda) \in L^2(\mathbb{R}) \ \text{for } \operatorname{Im}\lambda > 0 \qquad\text{and}\qquad u_-(\cdot,\lambda) \in L^2(\mathbb{R}) \ \text{for } \operatorname{Im}\lambda < 0. \]
Also, we have
\[ u_+(x,\lambda) = u_-(x,-\lambda) = \overline{u_-(x,\bar\lambda)}. \tag{1.8} \]
We can now define the outgoing resolvent of $H_0$, $R_0(\lambda)$, by
\[ u_+ = R_0(\lambda)f \qquad\text{for } f \in C_0^\infty(\mathbb{R}), \]
and $R_0(-\lambda)$ is then, by (1.8), the incoming resolvent, which satisfies
\[ u_- = R_0(-\lambda)f \qquad\text{for } f \in C_0^\infty(\mathbb{R}). \]
Note that (1.7) implies
\[ R_0(\lambda)(x,y) = \frac{i}{2\lambda}\, e^{i\lambda|x-y|}. \tag{1.9} \]
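As a concrete consistency check, one can verify on a grid that the kernel (1.9) inverts $H_0 - \lambda^2$ and that $R_0(\lambda)f$ is outgoing. The sketch below uses an (assumed) Gaussian right-hand side, which is effectively compactly supported; grid sizes and tolerances are illustrative.

```python
import cmath

lam = 2.0
a, b, n = -8.0, 8.0, 801
h = (b - a) / (n - 1)
xs = [a + k * h for k in range(n)]
f = [cmath.exp(-x * x).real for x in xs]        # smooth, effectively compactly supported

# u(x) = (i/2*lam) * integral of e^{i*lam*|x-y|} f(y) dy : the outgoing solution (1.7)/(1.9)
u = [1j / (2 * lam) * h * sum(cmath.exp(1j * lam * abs(x - y)) * fy
                              for y, fy in zip(xs, f)) for x in xs]

# residual of (H_0 - lam^2) u = f, i.e. -u'' - lam^2 u - f, by central differences
res = max(abs(-(u[k - 1] - 2 * u[k] + u[k + 1]) / h**2 - lam**2 * u[k] - f[k])
          for k in range(1, n - 1))

# outgoing behavior: u(x) is proportional to e^{i*lam*x} to the right of supp f
r5 = u[650] / cmath.exp(1j * lam * xs[650])      # x = 5
r7 = u[750] / cmath.exp(1j * lam * xs[750])      # x = 7
```

The two ratios agree because, to the right of the support of $f$, the kernel reduces to $e^{i\lambda(x-y)}$ and $u$ is a pure outgoing wave.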
Clearly, the outgoing resolvent $R_0(\lambda)$ is bounded on $L^2(\mathbb{R})$ for $\operatorname{Im}\lambda > 0$. Moreover, its norm is given by
\[ \| R_0(\lambda) \|_{L^2 \to L^2} = \frac{1}{d(\mathbb{R}_+, \lambda^2)}, \tag{1.10} \]
where $d(\mathbb{R}_+, \lambda^2)$ denotes the distance between $\lambda^2$ and the positive real axis $\mathbb{R}_+$. This follows from standard facts in the spectral theory of self-adjoint operators. In our case, it can be seen directly from the Plancherel theorem. Indeed, for $f \in C_0^\infty(\mathbb{R})$ we have
\[ R_0(\lambda)f = \mathcal{F}^{-1}\Big( \frac{1}{\xi^2 - \lambda^2}\, \mathcal{F}f \Big), \]
where $\mathcal{F}$ is the Fourier transform. Since
\[ \big\| M_{(\xi^2 - \lambda^2)^{-1}} \big\|_{L^2 \to L^2} = \sup_{\xi \in \mathbb{R}} \frac{1}{|\xi^2 - \lambda^2|} = \frac{1}{d(\mathbb{R}_+, \lambda^2)}, \]
with $M_g(u) = gu$ denoting the multiplication operator, (1.10) follows.

Also, by (1.9), $R_0(\lambda)$ as an operator
\[ R_0(\lambda) : L^2_{\rm comp}(\mathbb{R}) \to L^2_{\rm loc}(\mathbb{R}) \]
has a meromorphic extension to $\mathbb{C}$ with a simple pole at $\lambda = 0$. In fact, we have
\[ R_0(\lambda) = \frac{P}{\lambda} + Q(\lambda) \]
with $(Pf)(x) = \frac{i}{2}\int_{\mathbb{R}} f(y)\,dy$ for $f \in L^2_{\rm comp}(\mathbb{R})$, and $Q(\lambda) : L^2_{\rm comp}(\mathbb{R}) \to L^2_{\rm loc}(\mathbb{R})$ entire in $\lambda \in \mathbb{C}$.

Formally, $P = \psi \otimes \psi$, where $\psi(x) = e^{i\pi/4}/\sqrt{2}$ can be regarded as an outgoing solution to (1.1) at $\lambda = 0$.
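The norm identity (1.10) can be seen concretely: the sketch below (the particular $\lambda$ and the grid are arbitrary choices) compares $\sup_{\xi}|\xi^2-\lambda^2|^{-1}$ with $1/d(\mathbb{R}_+,\lambda^2)$.

```python
lam = 1.0 + 0.5j                                 # a spectral parameter with Im > 0
z = lam * lam                                    # the energy lam^2
d = abs(z.imag) if z.real >= 0 else abs(z)       # distance from lam^2 to R_+
# sup over real xi of the Fourier multiplier 1/|xi^2 - lam^2| from the Plancherel argument
sup = max(1.0 / abs((k * 1e-4) ** 2 - z) for k in range(50001))
```

Here $\lambda^2 = 0.75 + i$ has positive real part, so the distance to $\mathbb{R}_+$ is $|\operatorname{Im}\lambda^2| = 1$, attained by the multiplier at $\xi^2 = \operatorname{Re}\lambda^2$.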
The spectral decomposition of $H_0$ can be given by the Fourier transform as follows (here we use the same symbol for an operator and for its Schwartz kernel):
\[ H_0(x,y) = D_x^2(x,y) = \frac{1}{2\pi}\int \lambda^2\, e^{i\lambda(x-y)}\, d\lambda =: \int_0^\infty \lambda^2\, dE_\lambda(x,y). \tag{1.11} \]
Thus the spectral measure $dE_\lambda$ is given by
\[ dE_\lambda(x,y) = \frac{1}{2\pi}\big( e^{i\lambda(x-y)} + e^{-i\lambda(x-y)} \big)\, d\lambda = \frac{1}{2\pi}\big( e_+(x,\lambda)\overline{e_+(y,\lambda)} + e_-(x,\lambda)\overline{e_-(y,\lambda)} \big)\, d\lambda \tag{1.12} \]
\[ = \frac{\lambda}{\pi i}\big( R_0(\lambda) - R_0(-\lambda) \big)(x,y)\, d\lambda, \]
where the last equality follows from (1.9). Note that (1.12) is a special case of a general result in functional analysis, namely Stone's formula, and of the spectral decomposition in terms of generalized eigenfunctions. To see this, we have to clarify the convention. Write
\[ H_0 = \int_0^\infty z\, dE_z = \int_0^\infty \lambda^2\, dE_\lambda \qquad (z = \lambda^2) \]
as usual (note that $H_0$ is a nonnegative operator). Thus $dE_z = dE_\lambda$. In our incoming and outgoing convention, for $\lambda > 0$, $R_0(\lambda) = R_0(z + i0)$ and $R_0(-\lambda) = R_0(z - i0)$. Hence the usual Stone formula,
\[ dE_z = \frac{1}{2\pi i}\big( R_0(z + i0) - R_0(z - i0) \big)\, dz, \]
coincides with our formula (1.12), since $dz = 2\lambda\, d\lambda$.
Next we mention the connection with the wave equation:
\[ (D_t^2 - D_x^2)U = f. \tag{1.13} \]
It can be solved by using the advanced and retarded fundamental solutions $E_\pm$, which are characterized uniquely by the following support property:
\[ (D_t^2 - D_x^2)E_\pm(t,x,y) = \delta_0(t)\,\delta_y(x), \qquad E_\pm(t,x,y) = 0 \ \text{ for } \pm t < 0. \tag{1.14} \]
Indeed, we have
\[ E_\pm(t,x,y) = \begin{cases} -\tfrac{1}{2} & \pm t > |x-y| \\ 0 & \text{otherwise.} \end{cases} \]
Their relation with the outgoing and incoming resolvents $R_0(\pm\lambda)$ is then given by
\[ E_\pm(t,x,y) = -\frac{1}{2\pi}\int_{\mathbb{R}} R_0(\pm\lambda)(x,y)\, e^{-i\lambda t}\, d\lambda. \tag{1.15} \]
Finally, we remark that our incoming and outgoing convention is motivated by the Schrödinger equation and is different from the wave equation approach.
1.2 Perturbed Hamiltonian and Distorted Plane Waves

We now consider the perturbed Hamiltonian on $\mathbb{R}$:
\[ H_V = D_x^2 + V \qquad\text{for } V \in L^\infty_{\rm comp}(\mathbb{R}). \]
We would like to establish the same results we had for $H_0$ in section 1.1.

We will first show the existence of the resolvent $R_V(\lambda)$, which satisfies, for $f \in C_0^\infty(\mathbb{R})$,
\[ (H_V - \lambda^2)R_V(\lambda)f = f \tag{1.16} \]
with $R_V(\lambda)f$ being outgoing. Here, the meaning of outgoing (or incoming) can be taken as that of section 1.1, since $R_V(\lambda)f$ solves (1.1) for large $|x|$.

Theorem 1.1 The operator
\[ R_V(\lambda) : L^2_{\rm comp}(\mathbb{R}) \to L^2_{\rm loc}(\mathbb{R}) \]
satisfying (1.16) exists as a meromorphic family of operators for $\lambda \in \mathbb{C}$, and it has no pole for $\lambda \in \mathbb{R}\setminus\{0\}$.
Proof. For $\operatorname{Im}\lambda > 0$, by applying the operator $H_V - \lambda^2$ to the free resolvent, we get
\[ (D_x^2 + V - \lambda^2)R_0(\lambda) = I + V R_0(\lambda). \tag{1.17} \]
By (1.10), we have
\[ \| V R_0(\lambda) \|_{L^2 \to L^2} \ll 1 \qquad\text{for } \operatorname{Im}\lambda \gg 0. \]
Thus, for $\operatorname{Im}\lambda \gg 0$,
\[ (I + V R_0(\lambda))^{-1} = \sum_{k=0}^\infty \big( -V R_0(\lambda) \big)^k : L^2(\mathbb{R}) \to L^2(\mathbb{R}) \]
exists and is holomorphic in $\lambda$.

For $\operatorname{Im}\lambda \gg 0$, let
\[ R_V(\lambda) = R_0(\lambda)\big( I + V R_0(\lambda) \big)^{-1}. \tag{1.18} \]
Clearly, $R_V(\lambda)$ is a holomorphic family of bounded operators on $L^2(\mathbb{R})$ which satisfies (1.16).

We need to show that, as operators $R_V(\lambda) : L^2_{\rm comp}(\mathbb{R}) \to L^2_{\rm loc}(\mathbb{R})$, $R_V(\lambda)$ continues meromorphically from $\operatorname{Im}\lambda \gg 0$ to the whole complex plane. This is the same as extending
\[ \rho\, R_V(\lambda)\, \rho : L^2(\mathbb{R}) \to L^2(\mathbb{R}) \]
meromorphically to $\mathbb{C}$ for any $\rho \in C_0^\infty(\mathbb{R})$, where $\rho$ also denotes the corresponding multiplication operator. Note that once the meromorphic continuation is established, it does not depend on $\rho$, in the following sense:
\[ \text{if } \rho_1 \in C_0^\infty(\mathbb{R}) \text{ with } \rho_1 \equiv 1 \text{ on } \operatorname{supp}\rho, \text{ then } \rho\,(\rho_1 R_V(\lambda)\rho_1)\,\rho = \rho\, R_V(\lambda)\, \rho. \tag{1.19} \]
In fact, this is obviously true for $\operatorname{Im}\lambda \gg 0$, and then for $\lambda \in \mathbb{C}$ by meromorphic continuation.

Next we observe that, for $\operatorname{Im}\lambda \gg 0$ and $\rho \in C_0^\infty(\mathbb{R})$ with $\rho V \equiv V$,
\[ R_V(\lambda) = R_0(\lambda)\big( I + V R_0(\lambda)\rho \big)^{-1}\big( I - V R_0(\lambda)(1-\rho) \big) \tag{1.20} \]
by (1.18) and the equality
\[ \big( I + V R_0(\lambda) \big)^{-1} = \big( I + V R_0(\lambda)\rho \big)^{-1}\big( I - V R_0(\lambda)(1-\rho) \big), \]
which follows from the factorization $I + VR_0(\lambda) = (I + VR_0(\lambda)(1-\rho))(I + VR_0(\lambda)\rho)$ together with the fact that $VR_0(\lambda)(1-\rho)$ is nilpotent, $(VR_0(\lambda)(1-\rho))^2 = 0$, since $(1-\rho)V = 0$.

Hence, the meromorphic continuation of $R_V(\lambda)$ is reduced to that of $(I + V R_0(\lambda)\rho)^{-1}$. This in turn follows from the analytic Fredholm theory in the Appendix, once we note that $V R_0(\lambda)\rho$ is a meromorphic family of compact operators on $L^2(\mathbb{R})$: indeed $R_0(\lambda)\rho : L^2(\mathbb{R}) \to H^2_{\rm loc}(\mathbb{R})$ for $\lambda \in \mathbb{C}\setminus\{0\}$, $V : L^2(\mathbb{R}) \to L^2(\mathbb{R})$ is bounded with compactly supported range when composed on the left, and Rellich's theorem gives compactness.
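For a small potential the Neumann series behind (1.18) can be run numerically: iterate $u \mapsto R_0(\lambda)(f - Vu)$, whose fixed point is $u = R_V(\lambda)f$, and then check the equation $(H_V - \lambda^2)u = f$ by finite differences. The grid, the Gaussian $V$ and $f$, and the tolerances below are illustrative assumptions; the contraction holds because $\sup|V|$ is small.

```python
import cmath, math

lam = 2.0
a, b, n = -6.0, 6.0, 401
h = (b - a) / (n - 1)
xs = [a + k * h for k in range(n)]
V = [0.3 * math.exp(-x * x) for x in xs]        # small potential: the series converges
f = [math.exp(-x * x) for x in xs]

def R0(g):
    """Free resolvent applied by quadrature with the kernel (1.9)."""
    return [1j / (2 * lam) * h * sum(cmath.exp(1j * lam * abs(x - y)) * gy
                                     for y, gy in zip(xs, g)) for x in xs]

u, delta = [0j] * n, 1.0
for _ in range(14):                              # Neumann/Born iteration
    unew = R0([fk - Vk * uk for fk, Vk, uk in zip(f, V, u)])
    delta = max(abs(p - q) for p, q in zip(unew, u))
    u = unew

# residual of (H_V - lam^2) u = f, i.e. -u'' + (V - lam^2) u - f
res = max(abs(-(u[k - 1] - 2 * u[k] + u[k + 1]) / h**2 + (V[k] - lam**2) * u[k] - f[k])
          for k in range(1, n - 1))
```

The quantity `delta` measures the convergence of the iteration, while `res` checks that its limit solves the perturbed equation.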
To complete the proof of Theorem 1.1 it remains to show that $R_V(\lambda)$ has no pole on $\mathbb{R}\setminus\{0\}$. This will come in several steps. We start with

Proposition 1.1 If $R_V(\lambda)$ has a pole at $\lambda = \lambda_0 \neq 0$ and we write
\[ R_V(\lambda) = \frac{P_N}{(\lambda-\lambda_0)^N} + \frac{P_{N-1}}{(\lambda-\lambda_0)^{N-1}} + \cdots + \frac{P_1}{\lambda-\lambda_0} + Q(\lambda) \tag{1.21} \]
for $\lambda$ near $\lambda_0$, where $Q(\lambda)$ is holomorphic at $\lambda_0$, then every $u \in P_N(L^2_{\rm comp}(\mathbb{R}))$ is an outgoing solution of $(H_V - \lambda_0^2)u = 0$.
Proof. First, note that the expansion (1.21) follows from the meromorphy of $R_V(\lambda)$. Next, by applying the operator $(\lambda - \lambda_0)^N(H_V - \lambda^2)$ to $R_V(\lambda)$ and then putting $\lambda = \lambda_0$, we see that
\[ (H_V - \lambda_0^2)P_N \equiv 0. \]
It remains to show that elements of $P_N(L^2_{\rm comp}(\mathbb{R}))$ are always outgoing. For this, we let $\rho \in C_0^\infty(\mathbb{R})$ with $\rho V \equiv V$. By (1.20), we can write
\[ \big( I + V R_0(\lambda)\rho \big)^{-1} = \frac{\widetilde{P}_N}{(\lambda-\lambda_0)^N} + \cdots + \frac{\widetilde{P}_1}{\lambda-\lambda_0} + \widetilde{Q}(\lambda) \]
for $\lambda$ near $\lambda_0$, where $\widetilde{P}_j, \widetilde{Q}(\lambda) : L^2(\mathbb{R}) \to L^2(\mathbb{R})$, $j = 1, \ldots, N$, and $\widetilde{Q}(\lambda)$ is holomorphic at $\lambda_0$. Note that $\widetilde{P}_N = -V R_0(\lambda_0)\rho\, \widetilde{P}_N$, so the range of $\widetilde{P}_N$ consists of compactly supported functions. Then (1.20) also implies that
\[ P_N\big( L^2_{\rm comp}(\mathbb{R}) \big) \subset R_0(\lambda_0)\widetilde{P}_N\big( L^2(\mathbb{R}) \big) \subset R_0(\lambda_0)\big( L^2_{\rm comp}(\mathbb{R}) \big), \]
which means that elements of $P_N(L^2_{\rm comp}(\mathbb{R}))$ are outgoing. □

Next, we prove that there is no outgoing solution to $(H_V - \lambda^2)u = 0$ for $\lambda \in \mathbb{R}\setminus\{0\}$. This follows from the following proposition.
Proposition 1.2 Suppose $(H_V - \lambda^2)u = 0$ for $\lambda \in \mathbb{R}\setminus\{0\}$, and let
\[ u(x) = \begin{cases} A_+ e^{i\lambda x} + B_- e^{-i\lambda x} & \text{for } x \gg 0 \\ A_- e^{-i\lambda x} + B_+ e^{i\lambda x} & \text{for } x \ll 0. \end{cases} \]
Then
\[ |A_+|^2 + |A_-|^2 = |B_+|^2 + |B_-|^2. \tag{1.22} \]
Proof. Since $\lambda$ is real, $\bar u$ also satisfies the equation. Thus the Wronskian of $u$ and $\bar u$,
\[ W(u, \bar u) = \det\begin{pmatrix} u & \bar u \\ u' & \bar u' \end{pmatrix} = \begin{cases} 2i\lambda\big( |B_-|^2 - |A_+|^2 \big) & \text{for } x \gg 0 \\ 2i\lambda\big( |A_-|^2 - |B_+|^2 \big) & \text{for } x \ll 0, \end{cases} \]
is constant. (1.22) follows immediately. □
Proposition 1.3 Suppose $u \in L^\infty(\mathbb{R})$ satisfies $(D_x^2 + W)u = 0$ for $W \in L^\infty(\mathbb{R})$, and $\operatorname{supp} u \subset [0,\infty)$. Then $u \equiv 0$.

Proof. Fix $h > 0$ and let $v = e^{-x/h}u$. Then the boundedness and the support property of $u$ imply that $v \in L^2(\mathbb{R})$. Now we have
\[ \| e^{-x/h}(hD_x)^2 e^{x/h} v \|_{L^2} = \| (h^2 D_x^2 - 2ihD_x - 1)v \|_{L^2} = \| (h\xi - i)^2 \hat v \|_{L^2} \quad\text{(by the Plancherel formula)} \]
\[ \geq \| \hat v \|_{L^2} \quad\text{(since } |h\xi - i| \geq 1 \text{ for } \xi \in \mathbb{R}) \qquad = \| v \|_{L^2}. \]
Changing back to $u$, we get
\[ \| e^{-x/h} u \|_{L^2} \leq \| e^{-x/h} h^2 D_x^2 u \|_{L^2} = \| e^{-x/h} h^2 W u \|_{L^2} \leq \| W \|_{L^\infty}\, h^2\, \| e^{-x/h} u \|_{L^2}. \]
Taking $h^2 < \| W \|_{L^\infty}^{-1}$, we obtain $e^{-x/h}u \equiv 0$, i.e. $u \equiv 0$. □
Now we can finish the proof of Theorem 1.1. In fact, assume that there is a pole of $R_V(\lambda)$ at $\lambda_0 \in \mathbb{R}\setminus\{0\}$. Proposition 1.1 implies that there is a nonzero outgoing solution $u$ of the equation $(H_V - \lambda_0^2)u = 0$. Proposition 1.2 then implies that $u$ vanishes outside some compact set in $\mathbb{R}$: since $u$ is outgoing, $B_+ = B_- = 0$, and then (1.22) gives $A_+ = A_- = 0$. But Proposition 1.3 says that $u \equiv 0$, which is a contradiction. □

Remark. Proposition 1.2 is the only part of the proof of Theorem 1.1 which is simpler in dimension 1; all the remaining parts work in higher dimensions.
With the construction of the resolvent $R_V(\lambda)$ in place, we can now define and obtain the distorted plane waves which constitute the continuous spectrum of $H_V$.

Proposition 1.4 For $\lambda \in \mathbb{R}\setminus\{0\}$, there exist unique solutions $e_\pm(x,\lambda)$ to
\[ (H_V - \lambda^2)u = 0 \tag{1.23} \]
satisfying $e_\pm(x,\lambda) = e^{\pm i\lambda x} + \text{outgoing terms}$.

Proof. For $\lambda \in \mathbb{R}\setminus\{0\}$, put
\[ e_\pm(x,\lambda) = e^{\pm i\lambda x} - R_V(\lambda)\big( V e^{\pm i\lambda x} \big), \tag{1.24} \]
which makes sense because of Theorem 1.1. Clearly, $e_\pm(x,\lambda)$ satisfies the equation (1.23), and the last term in (1.24) is outgoing thanks to (1.20). Uniqueness again follows from the nonexistence of outgoing solutions to (1.23) proved in Theorem 1.1. □
To establish the analogue of (1.12), we use a standard ODE technique to represent $R_V(\lambda)$ in terms of $e_\pm(x,\lambda)$. First recall that, in general, the fundamental solution of the ODE
\[ \big( a\partial_x^2 + b(x) \big)E(x,y) = \delta_y(x) \]
can be written in terms of any two linearly independent solutions $\varphi_j$, $j = 1,2$, of
\[ \big( a\partial_x^2 + b(x) \big)\varphi_j = 0, \qquad j = 1,2. \]
Namely,
\[ E(x,y) = -\frac{1}{aW}\big( \varphi_1(x)\varphi_2(y)(x-y)^0_+ + \varphi_1(y)\varphi_2(x)(x-y)^0_- \big), \tag{1.25} \]
where $W = \det\begin{pmatrix} \varphi_1 & \varphi_2 \\ \varphi_1' & \varphi_2' \end{pmatrix}$ is the Wronskian of $\varphi_1, \varphi_2$, and
\[ (x-y)^0_\pm = \begin{cases} 1 & \pm(x-y) > 0 \\ 0 & \text{otherwise} \end{cases} \]
denote the Heaviside functions.
We now apply (1.25) to the operator $H_V - \lambda^2$ for $\lambda \in \mathbb{R}\setminus\{0\}$ and the solutions $\varphi_1 = e_+$, $\varphi_2 = e_-$. Note that, for fixed $y$, both $e_+(x,\lambda)(x-y)^0_+$ and $e_-(x,\lambda)(x-y)^0_-$ are outgoing, and hence (1.25) gives us the outgoing resolvent.

To write down $R_V(\lambda)(x,y)$ explicitly, we need to compute the Wronskian of $e_+$ and $e_-$. First observe that, by the characterizing properties of $e_\pm$, we can write
\[ e_+(x,\lambda) = \begin{cases} T_+(\lambda)e^{i\lambda x} & \text{for } x \gg 0 \\ e^{i\lambda x} + R_+(\lambda)e^{-i\lambda x} & \text{for } x \ll 0, \end{cases} \qquad e_-(x,\lambda) = \begin{cases} e^{-i\lambda x} + R_-(\lambda)e^{i\lambda x} & \text{for } x \gg 0 \\ T_-(\lambda)e^{-i\lambda x} & \text{for } x \ll 0. \end{cases} \tag{1.26} \]
Computing the Wronskian $W$ of $e_+$ and $e_-$ for large positive $x$ and large negative $x$ respectively, we get
\[ W = -2i\lambda\, T_+(\lambda) = -2i\lambda\, T_-(\lambda). \]
In particular,
\[ T_+(\lambda) = T_-(\lambda) =: T(\lambda). \tag{1.27} \]
Figure 2: the distorted plane wave $e_+(x,\lambda)$: incident wave $e^{i\lambda x}$ and reflected wave $R_+(\lambda)e^{-i\lambda x}$ to the left of the support of $V$, transmitted wave $T(\lambda)e^{i\lambda x}$ to the right; and $e_-(x,\lambda)$: incident wave $e^{-i\lambda x}$ and reflected wave $R_-(\lambda)e^{i\lambda x}$ to the right, transmitted wave $T(\lambda)e^{-i\lambda x}$ to the left.
$T(\lambda)$ is called the transmission coefficient and $R_\pm(\lambda)$ the reflection coefficients. Now we can write down the expression for $R_V(\lambda)$ in terms of $e_\pm$ from (1.25). For $\lambda \in \mathbb{R}\setminus\{0\}$, we have
\[ R_V(\lambda)(x,y) = -\frac{1}{2i\lambda\, T(\lambda)}\big( e_+(x,\lambda)e_-(y,\lambda)(x-y)^0_+ + e_+(y,\lambda)e_-(x,\lambda)(x-y)^0_- \big). \tag{1.28} \]
This implies the following useful asymptotics:
\[ R_V(\lambda)(r,y) = \frac{i}{2\lambda}\, e^{i\lambda r}\, e_-(y,\lambda) \qquad\text{for } r \gg 0. \tag{1.29} \]
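The identities (1.22) and (1.27) can be tested numerically for a piecewise-constant potential. The sketch below (the two-step potential is an arbitrary example, not from the text) propagates $(u, u')$ across the potential with 2×2 matrices and extracts $T_\pm, R_\pm$ from the expansions (1.26); it then checks $T_+ = T_-$ and the flux relations $|T|^2 + |R_\pm|^2 = 1$.

```python
import cmath

def coeffs(lam, steps):
    """T_+, R_+, T_-, R_- for a piecewise-constant V, given as [(V_j, d_j), ...],
    occupying [0, L], via (u, u') propagation matrices (each has determinant 1)."""
    L = sum(d for _, d in steps)
    m11, m12, m21, m22 = 1, 0, 0, 1
    for Vj, dj in steps:
        k = cmath.sqrt(lam * lam - Vj)
        c, s = cmath.cos(k * dj), cmath.sin(k * dj) / k
        a11, a12, a21, a22 = c, s, -k * k * s, c   # solution map across one step
        m11, m12, m21, m22 = (a11 * m11 + a12 * m21, a11 * m12 + a12 * m22,
                              a21 * m11 + a22 * m21, a21 * m12 + a22 * m22)
    il = 1j * lam
    P, N = il * m11 - m21, il * m22 + lam * lam * m12
    # e_+ : u = e^{i lam x} + R_+ e^{-i lam x} (x<0),  T_+ e^{i lam x} (x>L)
    Rp = (N - P) / (N + P)
    Tp = cmath.exp(-il * L) * (m11 * (1 + Rp) + il * m12 * (1 - Rp))
    # e_- : u = T_- e^{-i lam x} (x<0),  e^{-i lam x} + R_- e^{i lam x} (x>L)
    Tm = 2 * il * cmath.exp(-il * L) / (N + P)
    Rm = ((m11 - il * m12) * Tm - cmath.exp(-il * L)) * cmath.exp(-il * L)
    return Tp, Rp, Tm, Rm

lam = 1.7
Tp, Rp, Tm, Rm = coeffs(lam, [(4.0, 0.8), (-1.0, 1.3)])
```

The equality $T_+ = T_-$ drops out of the constancy of the Wronskian (the propagation matrices have determinant one), exactly as in the derivation of (1.27).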
The spectral decomposition of $H_V$ is now given by

Theorem 1.2 Let $e_\pm$ be given by Proposition 1.4. Then
\[ \delta(x-y) = \frac{1}{2\pi}\int_0^\infty \big( e_+(x,\lambda)\overline{e_+(y,\lambda)} + e_-(x,\lambda)\overline{e_-(y,\lambda)} \big)\, d\lambda + \sum_{j=1}^N e_j(x)e_j(y) \]
and
\[ H_V(x,y) = \frac{1}{2\pi}\int_0^\infty \lambda^2 \big( e_+(x,\lambda)\overline{e_+(y,\lambda)} + e_-(x,\lambda)\overline{e_-(y,\lambda)} \big)\, d\lambda + \sum_{j=1}^N E_j\, e_j(x)e_j(y), \tag{1.30} \]
where $E_j = \lambda_j^2$, $j = 1,\ldots,N$, the $\lambda_j$'s are the poles of $R_V(\lambda)$ with $\operatorname{Im}\lambda > 0$, and $(H_V - E_j)e_j = 0$ with $\| e_j \|_{L^2} = 1$.
Proof. According to the Appendix, Theorem 1.1 and the boundedness of $V$, $H_V$ acting on $C_0^\infty(\mathbb{R})$ has a self-adjoint extension on $L^2(\mathbb{R})$ whose spectrum consists of finitely many negative eigenvalues (with multiplicity) and a continuous part $[0,\infty)$. Hence
\[ H_V = \sum_{j=1}^N E_j\, e_j \otimes e_j + \int_0^\infty z\, dE_z = \sum_{j=1}^N E_j\, e_j \otimes e_j + \int_0^\infty \lambda^2\, dE_\lambda, \tag{1.31} \]
where the $E_j$'s are the eigenvalues of $H_V$ and the $e_j$'s are the corresponding normalized eigenfunctions, i.e. $(H_V - E_j)e_j = 0$, $\| e_j \|_{L^2} = 1$. (In the second equality, we used the substitution $z = \lambda^2$.) In our convention of taking the square root of $z$, $E_j = \lambda_j^2$, where the $\lambda_j$'s are the poles of $R_V(\lambda)$ with $\operatorname{Im}\lambda > 0$. To compute $dE_\lambda$ we use Stone's formula:
\[ dE_\lambda = \frac{\lambda}{\pi i}\big( R_V(\lambda) - R_V(-\lambda) \big)\, d\lambda. \tag{1.32} \]
We want to express the right-hand side in terms of the distorted plane waves $e_\pm$. For this we use
\[ (D_x^2 + V - \lambda^2)R_V(\lambda)(x,y) = \delta(x-y) \]
and the symmetry of $R_V(\lambda)(x,y)$ with respect to $x, y$, to write, for fixed $x, y$ and large $r$,
\[ \big( R_V(\lambda) - R_V(-\lambda) \big)(x,y) = \int_{-r}^r \Big[ R_V(-\lambda)(x,y')\, D_{y'}^2 R_V(\lambda)(y',y) - \big( D_{y'}^2 R_V(-\lambda)(x,y') \big) R_V(\lambda)(y',y) \Big]\, dy' \]
\[ = \Big[ R_V(-\lambda)(x,y')\, \partial_{y'} R_V(\lambda)(y',y) - \big( \partial_{y'} R_V(-\lambda)(x,y') \big) R_V(\lambda)(y',y) \Big]_{y'=-r}^{y'=r} \]
\[ = \frac{i}{2\lambda}\big( e_+(x,\lambda)\overline{e_+(y,\lambda)} + e_-(x,\lambda)\overline{e_-(y,\lambda)} \big) \tag{1.33} \]
by (1.29). Theorem 1.2 now follows by putting (1.33) into (1.31) and (1.32). □
As at the end of section 1.1, we can also consider the relation to the wave equation. The advanced and retarded fundamental solutions $E_\pm$ are again characterized by
\[ \big( D_t^2 - (D_x^2 + V) \big)E_\pm(t,x,y) = \delta_0(t)\,\delta_y(x), \qquad E_\pm(t,x,y) = 0 \ \text{ for } \pm t < 0. \]
Again we have
\[ E_\pm(t,x,y) = -\frac{1}{2\pi}\int_{\mathbb{R}} R_V(\pm\lambda)(x,y)\, e^{-i\lambda t}\, d\lambda. \]
We have more or less established in the perturbed case all the results we proved in section 1.1. We end this section by discussing a class of intertwining operators $A_\pm$ satisfying $H_V A_\pm = A_\pm H_0$, which will be useful later. More precisely, we want to find distributions $A_\pm(x,y)$ satisfying
\[ \big( D_x^2 + V(x) \big)A_\pm(x,y) = D_y^2 A_\pm(x,y), \qquad A_\pm(x,y) = \delta(x-y) \ \text{ for } \pm x \gg 0. \tag{1.34} \]
To show their existence, we will first construct solutions of the stationary equation, for $\lambda \in \mathbb{R}$,
\[ \big( D_x^2 + V - \lambda^2 \big)\varphi_\pm(x,\lambda) = 0, \qquad \varphi_\pm(x,\lambda) = e^{\pm i\lambda x} \ \text{ for } \pm x \gg 0. \tag{1.35} \]

Figure 3: the solution $\varphi_+(x,\lambda)$, equal to $e^{i\lambda x}$ to the right of the support of $V$, and $\varphi_-(x,\lambda)$, equal to $e^{-i\lambda x}$ to the left of the support of $V$.
Lemma 1.1 There exist unique solutions $\varphi_\pm(x,\lambda)$ to (1.35) and, for fixed $x$, $\varphi_\pm(x,\lambda)$ are tempered functions of $\lambda \in \mathbb{R}$.

Proof. Put
\[ \varphi_\pm(x,\lambda) = \frac{1}{T(\lambda)}\, e_\pm(x,\lambda), \tag{1.36} \]
where $T(\lambda)$ is defined in (1.26) and (1.27). Then $\varphi_\pm(x,\lambda)$ are meromorphic in $\lambda \in \mathbb{C}$. We claim that $\varphi_\pm(x,\lambda)$ are holomorphic on $\mathbb{R}$. Indeed, for $\lambda \in \mathbb{R}\setminus\{0\}$, $T(\lambda) \neq 0$, as otherwise $e_\pm(x,\lambda)$ would be identically zero by Proposition 1.3. Next, suppose $\varphi_\pm(x,\lambda)$ has a pole of order $m > 0$ at $\lambda = 0$. Then $\lambda^m\varphi_\pm(x,\lambda)$ is holomorphic for $\lambda$ near $0$. Let
\[ \varphi_\pm^0(x) = \lim_{\lambda\to 0} \lambda^m\varphi_\pm(x,\lambda); \]
then $\varphi_\pm^0(x)$ is a solution of
\[ (D_x^2 + V)\varphi_\pm^0(x) = 0, \qquad \varphi_\pm^0(x) = 0 \ \text{ for } \pm x \gg 0. \]
Hence $\varphi_\pm^0 \equiv 0$ by Proposition 1.3, which is a contradiction. Although we do not need this fact in the proof of the lemma, we remark that the above argument actually shows that $\varphi_\pm(x,\lambda)$ is defined and holomorphic for $\lambda \in \mathbb{C}$.

As $\varphi_\pm(x,\lambda)$ clearly satisfy (1.35), it remains to verify the temperedness of $\varphi_\pm(x,\lambda)$ as $|\lambda| \to \infty$. For that, recall by (1.9) that
\[ \big| \big( V R_0(\lambda)\rho \big)(x,y) \big| \lesssim \frac{1}{|\lambda|} \qquad\text{for } |\lambda| \gg 0, \]
where $\rho \in C_0^\infty(\mathbb{R})$ with $\rho V \equiv V$; also from (1.20) we have, for $|\lambda| \gg 0$,
\[ R_V(\lambda)V = R_0(\lambda)\sum_{k=0}^\infty \big( -V R_0(\lambda)\rho \big)^k\, V. \]
(1.24) then shows that
\[ | e_\pm(x,\lambda) | \leq C \qquad\text{for } |\lambda| \gg 0, \ \lambda \in \mathbb{R}. \tag{1.37} \]
Moreover, we have
\[ | T(\lambda) |^{-1} \leq C \qquad\text{for } |\lambda| \gg 0. \]
This follows from a similar argument, as
\[ T(\lambda) = 1 - e^{-i\lambda x}\, R_V(\lambda)\big( V e^{i\lambda\,\cdot} \big)(x) \qquad\text{for } x \gg 0, \]
which implies
\[ T(\lambda) = 1 + O\Big( \frac{1}{|\lambda|} \Big) \qquad\text{as } |\lambda| \to \infty. \tag{1.38} \]
This completes the proof of our lemma. □
Proposition 1.5 There exist unique solutions $A_\pm(x,y)$ to (1.34). Moreover, they satisfy the following properties:

(a) $\operatorname{supp} A_\pm(x,y) \subset \{ (x,y) \in \mathbb{R}^2 : \pm x \leq \pm y \}$;

(b) $\partial_y A_-(x,y) = X(y-x) + Y(x+y)$ for $x \gg 0$ (and similarly for $A_+$ for $x \ll 0$).

Here $X, Y$ are distributions with compact support, with
\[ \operatorname{supp} X \subset [-2(b-a), 0], \qquad \operatorname{supp} Y \subset [2a, 2b], \tag{1.39} \]
where $[a,b] = \operatorname{ch\,supp}V$, the convex hull of the support of $V$.

Proof. Rewriting (1.34) slightly, we have
\[ \Big( D_x^2 - \big( D_y^2 - V(x) \big) \Big)A_\pm(x,y) = 0, \qquad A_\pm(x,y) = \delta(x-y) \ \text{ for } \pm x \gg 0. \]
Thus $A_\pm(x,y)$ satisfies the wave equation with $x$ taking the place of time (this choice is dictated by the forcing condition imposed). The uniqueness part then follows from the energy estimates for the wave equation proved in the Appendix.

For the existence part, we put
\[ A_\pm(x,y) = \frac{1}{2\pi}\int \varphi_\pm(x,\lambda)\, e^{\mp i\lambda y}\, d\lambda, \tag{1.40} \]
which is well defined thanks to Lemma 1.1. Now $A_\pm(x,y)$ satisfies (1.34) because of (1.35). (a) is then a direct consequence of the energy estimates; that $\partial_y A_-(x,y)$ is of the form given in (b) for $x \gg 0$ is simply because it satisfies the wave equation there:
\[ \big( D_x^2 - D_y^2 \big)\partial_y A_-(x,y) = 0 \qquad\text{for } x \gg 0. \]
The support properties of $X$ and $Y$ can now be seen from Figure 4, which shows the support of $\partial_y A_-(x,y)$ (the region enclosed by the thick lines), with the supports of $X(y-x)$ and $Y(x+y)$ indicated. □
Figure 4: the support of $\partial_y A_-(x,y)$ in the $(x,y)$-plane, enclosed by the lines $x = y$ and $x = -y + 2a$; one has $\partial_y A_-(x,y) = \delta'(x-y)$ for $x < a$, while beyond the potential the supports of $X$ and $Y$ lie in the strips $y - x \in [-2(b-a), 0]$ and $x + y \in [2a, 2b]$, with $\operatorname{ch\,supp}V = [a,b]$.

Remark. A direct construction of the intertwining kernels $A_\pm(x,y)$ is important in inverse problems.
1.3 Scattering Matrix and Wave Operators

We start with the definition of the scattering matrix associated to the Hamiltonian
\[ H_V = D_x^2 + V, \qquad V \in L^\infty_{\rm comp}(\mathbb{R}), \]
of section 1.2. Any solution $u$ of
\[ \big( D_x^2 + V - \lambda^2 \big)u = 0 \tag{1.41} \]
has the expansion
\[ u(x,\lambda) = \begin{cases} A_+ e^{i\lambda x} + B_- e^{-i\lambda x} & \text{for } x \gg 0 \\ A_- e^{-i\lambda x} + B_+ e^{i\lambda x} & \text{for } x \ll 0. \end{cases} \tag{1.42} \]

Figure 5: the solution $u(x,\lambda)$ near the support of $V$, with incoming waves $B_+ e^{i\lambda x}$ (from the left) and $B_- e^{-i\lambda x}$ (from the right) and outgoing waves $A_+ e^{i\lambda x}$ (to the right) and $A_- e^{-i\lambda x}$ (to the left).

The scattering matrix is then defined to be the operator $S(\lambda) : \mathbb{C}^2 \to \mathbb{C}^2$ which maps the incoming coefficients to the outgoing coefficients, i.e.
\[ S(\lambda) : \begin{pmatrix} B_+ \\ B_- \end{pmatrix} \longmapsto \begin{pmatrix} A_+ \\ A_- \end{pmatrix}. \tag{1.43} \]
Theorem 1.3 The matrix $S(\lambda)$ is meromorphic for $\lambda \in \mathbb{C}$, and its poles with $\operatorname{Im}\lambda > 0$ correspond to the square roots of the eigenvalues of $H_V$. In the notation of Proposition 1.5 we have
\[ S(\lambda) = \frac{1}{\widehat{X}(\lambda)} \begin{pmatrix} i\lambda & \widehat{Y}(\lambda) \\ \widehat{Y}(-\lambda) & i\lambda \end{pmatrix}, \tag{1.44} \]
where $\widehat{X}$ denotes the Fourier transform of $X$, and
\[ S(\lambda)S(\lambda)^* = S(\lambda)\, J\, S(-\lambda)\, J = I \qquad\text{with } J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \tag{1.45} \]
(the first equality for $\lambda \in \mathbb{R}$).
Proof. To find $S(\lambda)$, we use the two linearly independent solutions $\varphi_\pm(x,\lambda)$ of (1.41) given by Lemma 1.1. Write
\[ \varphi_-(x,\lambda) = \begin{cases} A(\lambda)e^{i\lambda x} + B(\lambda)e^{-i\lambda x} & \text{for } x \gg 0 \\ e^{-i\lambda x} & \text{for } x \ll 0 \end{cases} \]
and
\[ \varphi_+(x,\lambda) = \begin{cases} e^{i\lambda x} & \text{for } x \gg 0 \\ C(\lambda)e^{i\lambda x} + D(\lambda)e^{-i\lambda x} & \text{for } x \ll 0, \end{cases} \]
where $A(\lambda), \ldots, D(\lambda)$ satisfy, for $\lambda \in \mathbb{R}$,
\[ A(-\lambda) = \overline{A(\lambda)}, \ \ldots, \ D(-\lambda) = \overline{D(\lambda)} \tag{1.46} \]
and the unitarity conditions
\[ |A(\lambda)|^2 + 1 = |B(\lambda)|^2, \qquad |C(\lambda)|^2 + 1 = |D(\lambda)|^2. \tag{1.47} \]
Writing a general solution as $u = \alpha\varphi_- + \beta\varphi_+$ and comparing coefficients in (1.42), the definition (1.43) of the scattering matrix gives
\[ S(\lambda) = \begin{pmatrix} 1/C(\lambda) & A(\lambda)/B(\lambda) \\ D(\lambda)/C(\lambda) & 1/B(\lambda) \end{pmatrix} \tag{1.48} \]
(note that $B(\lambda) = C(\lambda)$, as both are equal to the Wronskian of $\varphi_-$ and $\varphi_+$ divided by $2i\lambda$). From (1.40) and Proposition 1.5(b), we find, for $\lambda \in \mathbb{R}$,
\[ i\lambda\,\varphi_-(x,\lambda) = \widehat{X}(\lambda)\,\varphi_+(x,-\lambda) + \widehat{Y}(\lambda)\,\varphi_+(x,\lambda). \tag{1.49} \]
Using (1.49) we can express $A(\lambda), \ldots, D(\lambda)$ in terms of $\widehat{X}(\lambda)$ and $\widehat{Y}(\lambda)$. A simple calculation gives
\[ A(\lambda) = \frac{\widehat{Y}(\lambda)}{i\lambda}, \qquad B(\lambda) = \frac{\widehat{X}(\lambda)}{i\lambda}, \qquad C(\lambda) = \frac{\widehat{X}(\lambda)}{i\lambda}, \qquad D(\lambda) = \frac{\widehat{Y}(-\lambda)}{i\lambda}. \tag{1.50} \]
In the calculation, we have used the fact that $X$ and $Y$ are real and the unitarity condition $|A(\lambda)|^2 + 1 = |B(\lambda)|^2$, which implies
\[ \widehat{X}(\lambda)\widehat{X}(-\lambda) = \lambda^2 + \widehat{Y}(\lambda)\widehat{Y}(-\lambda). \tag{1.51} \]
Putting (1.50) into (1.48) we get (1.44). In particular, $S(\lambda)$ is meromorphic on $\mathbb{C}$, as both $X$ and $Y$ are compactly supported distributions. Suppose $S(\lambda)$ has a pole at $\lambda_0$ with $\operatorname{Im}\lambda_0 > 0$; then $\widehat{X}(\lambda)$ has a zero at $\lambda_0$, thus $B(\lambda)$ has a zero at $\lambda_0$, and $\varphi_-(x,\lambda_0)$ becomes an eigenfunction of (1.41), i.e. $\lambda_0$ is a square root of an eigenvalue of $H_V$. Finally, the relations (1.45) can be checked directly by using (1.44) and (1.51). □
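The algebra behind (1.45) can also be checked numerically: one only needs values of $\widehat{X}(\pm\lambda)$, $\widehat{Y}(\pm\lambda)$ satisfying the unitarity relation (1.51) and the reality symmetry. The synthetic values below are arbitrary choices subject to those two constraints, not data from an actual potential.

```python
import cmath, math

lam = 1.3
Y1 = 0.4 - 0.9j                                        # synthetic value of Y^(lam)
Y2 = Y1.conjugate()                                    # Y^(-lam) = conj(Y^(lam)), Y real
X1 = cmath.exp(0.7j) * math.sqrt(lam**2 + abs(Y1)**2)  # enforces (1.51): X1*conj(X1) = lam^2+|Y1|^2
X2 = X1.conjugate()                                    # X^(-lam)

S1 = [[1j * lam / X1, Y1 / X1], [Y2 / X1, 1j * lam / X1]]      # S(lam), formula (1.44)
S2 = [[-1j * lam / X2, Y2 / X2], [Y1 / X2, -1j * lam / X2]]    # S(-lam)
J = [[0, 1], [1, 0]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

I1 = mm(S1, dag(S1))            # should be I : S(lam) S(lam)^* = I
I2 = mm(mm(S1, J), mm(S2, J))   # should be I : S(lam) J S(-lam) J = I
dev = max(abs(I1[i][j] - (i == j)) + abs(I2[i][j] - (i == j))
          for i in range(2) for j in range(2))
```

Both identities hold to machine precision for any phase of $\widehat{X}(\lambda)$, which reflects that (1.45) only uses (1.51) and the conjugation symmetry.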
The scattering matrix $S(\lambda)$ also relates the incoming and outgoing distorted plane waves $e_\pm(x,\lambda)$ defined in Proposition 1.4.

Proposition 1.6 We have the following functional equation for the distorted plane waves $e_\pm(x,\lambda)$:
\[ S(\lambda)^t\, J \begin{pmatrix} e_+(x,-\lambda) \\ e_-(x,-\lambda) \end{pmatrix} = \begin{pmatrix} e_+(x,\lambda) \\ e_-(x,\lambda) \end{pmatrix}, \qquad\text{where } J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \tag{1.52} \]
Proof. We first recall the relations between $\varphi_\pm$ and $e_\pm$ from (1.36):
\[ \varphi_+(x,\lambda) = \frac{1}{T(\lambda)}e_+(x,\lambda) = \frac{\widehat{X}(\lambda)}{i\lambda}e_+(x,\lambda), \qquad \varphi_-(x,\lambda) = \frac{1}{T(\lambda)}e_-(x,\lambda) = \frac{\widehat{X}(\lambda)}{i\lambda}e_-(x,\lambda), \tag{1.53} \]
where we have used
\[ T(\lambda) = \frac{i\lambda}{\widehat{X}(\lambda)}, \tag{1.54} \]
which can be obtained, for example, by comparing $e_\pm(x,\lambda)$ and $\varphi_\pm(x,\lambda)$ for $\pm x \gg 0$ and using (1.50). For later use, we also record
\[ R_\pm(\lambda) = \frac{\widehat{Y}(\mp\lambda)}{\widehat{X}(\lambda)}, \tag{1.55} \]
which by (1.44) implies that
\[ S(\lambda) = \begin{pmatrix} T(\lambda) & R_-(\lambda) \\ R_+(\lambda) & T(\lambda) \end{pmatrix}. \tag{1.56} \]
Now, putting (1.53) into (1.49) we obtain
\[ i\lambda\, e_-(x,\lambda) = \widehat{Y}(\lambda)\, e_+(x,\lambda) - \widehat{X}(-\lambda)\, e_+(x,-\lambda). \tag{1.57} \]
Then, we compute
\[ S(\lambda)^t J \begin{pmatrix} e_+(x,-\lambda) \\ e_-(x,-\lambda) \end{pmatrix} = \frac{1}{\widehat{X}(\lambda)} \begin{pmatrix} \widehat{Y}(-\lambda) & i\lambda \\ i\lambda & \widehat{Y}(\lambda) \end{pmatrix} \begin{pmatrix} e_+(x,-\lambda) \\ e_-(x,-\lambda) \end{pmatrix} = \frac{1}{\widehat{X}(\lambda)} \begin{pmatrix} \widehat{Y}(-\lambda)e_+(x,-\lambda) + i\lambda\, e_-(x,-\lambda) \\ i\lambda\, e_+(x,-\lambda) + \widehat{Y}(\lambda)e_-(x,-\lambda) \end{pmatrix} = \begin{pmatrix} e_+(x,\lambda) \\ e_-(x,\lambda) \end{pmatrix}, \]
where in the last equality we have used (1.57) (and (1.57) with $\lambda$ replaced by $-\lambda$) together with (1.51). □
Our notion of incoming and outgoing behavior was motivated by the Schrödinger equation (see section 1.1), while the above definition of the scattering matrix is purely stationary. Now we would like to connect it back to the dynamical point of view. Recall that if $H$ is a self-adjoint operator, then the initial value problem
\[ (i\partial_t - H)v = 0, \qquad v|_{t=0} = u \tag{1.58} \]
is solved by the one-parameter unitary group $e^{-itH}$, i.e. $v(t) = e^{-itH}u$.

We want to compare the free and perturbed evolutions corresponding to the self-adjoint operators $H_0$ and $H_V$ respectively. First, we want to show that for any initial data $u \in L^2(\mathbb{R})$ orthogonal to the space of eigenfunctions of $H_V$, there exist $u_\pm \in L^2(\mathbb{R})$ such that
\[ e^{-itH_V}u - e^{-itH_0}u_\pm \to 0 \qquad\text{as } t \to \pm\infty. \]
This is given by the following classical theorem, whose proof does not depend on the space dimension. Hence, we present the general case in our simple setting for $V \in L^2_{\rm comp}(\mathbb{R}^n)$.
Theorem 1.4 Let $H_V = -\Delta + V$, where $V \in L^2_{\rm comp}(\mathbb{R}^n)$. If $u \in L^2(\mathbb{R}^n)$, the following limits exist:
\[ W_\pm u = \lim_{t\to\pm\infty} e^{itH_V}e^{-itH_0}u. \tag{1.59} \]
Also, we have
\[ W_\pm H_0 = H_V W_\pm \tag{1.60} \]
and
\[ \| W_\pm u \|_{L^2} = \| u \|_{L^2}, \tag{1.61} \]
i.e. $W_\pm$ are partial isometries intertwining the operators $H_0$ and $H_V$.
Proof. We first prove the existence of $W_\pm$. Let
\[ U(t) = e^{itH_V}e^{-itH_0}. \]
Since $e^{itH_V}$ and $e^{-itH_0}$ are unitary operators on $L^2(\mathbb{R}^n)$ (which follows from the spectral theorem for self-adjoint operators), $U(t)$ is also unitary, i.e.
\[ \| U(t)w \|_{L^2(\mathbb{R}^n)} = \| w \|_{L^2(\mathbb{R}^n)} \qquad\text{for any } w \in L^2(\mathbb{R}^n). \]
Hence, by a standard density argument, it suffices to prove the existence of the limits in (1.59) for $u$ in a dense subset of $L^2(\mathbb{R}^n)$. We take
\[ \mathcal{D} = \{ u \in L^2(\mathbb{R}^n) : \hat u \in C_0^\infty(\mathbb{R}^n\setminus\{0\}) \}. \]
$\mathcal{D}$ is dense in $L^2(\mathbb{R}^n)$ because $C_0^\infty(\mathbb{R}^n\setminus\{0\})$ is dense in $L^2(\mathbb{R}^n)$ and the Fourier transform is unitary on $L^2(\mathbb{R}^n)$. Now, for $u \in \mathcal{D}$,
\[ \frac{d}{dt}\big( e^{itH_V}e^{-itH_0}u \big) = ie^{itH_V}(H_V - H_0)e^{-itH_0}u = ie^{itH_V}Ve^{-itH_0}u. \]
Thus
\[ U(s)u = u + i\int_0^s e^{itH_V}Ve^{-itH_0}u\, dt, \]
and $W_\pm$ exist if
\[ \int_{-\infty}^\infty \| e^{itH_V}Ve^{-itH_0}u \|_{L^2(\mathbb{R}^n)}\, dt = \int_{-\infty}^\infty \| Ve^{-itH_0}u \|_{L^2(\mathbb{R}^n)}\, dt < \infty \tag{1.62} \]
for all $u \in \mathcal{D}$.

Take $u \in \mathcal{D}$; then there exist $0 < r < R$ such that $r < |\xi| < R$ on $\operatorname{supp}\hat u$. Let $u_t(x) = (e^{-itH_0}u)(x)$. Then (1.62) follows if, for some constant $C$ which may depend on $u$,
\[ |u_t(x)| \leq \frac{C}{|t|^2} \tag{1.63} \]
for $x \in \operatorname{supp}V$ and $|t|$ sufficiently large, since then $\| Ve^{-itH_0}u \|_{L^2} \leq \| V \|_{L^2}\sup_{x\in\operatorname{supp}V}|u_t(x)|$, which is integrable in $t$. To see (1.63), we apply integration by parts to
\[ u_t(x) = e^{-itH_0}u(x) = \frac{1}{(2\pi)^n}\int_{r<|\xi|<R} e^{ix\cdot\xi - it|\xi|^2}\, \hat u(\xi)\, d\xi, \]
using the operator
\[ L = \sum_j \frac{x_j - 2t\xi_j}{i|x-2t\xi|^2}\,\partial_{\xi_j}, \qquad L\, e^{ix\cdot\xi - it|\xi|^2} = e^{ix\cdot\xi - it|\xi|^2}, \]
twice:
\[ u_t(x) = \frac{1}{(2\pi)^n}\int_{r<|\xi|<R} e^{ix\cdot\xi - it|\xi|^2}\, \big( {}^t\!L \big)^2 \hat u(\xi)\, d\xi. \]
We obtain (1.63) once we observe that, as $r < |\xi| < R$,
\[ \frac{1}{|x - 2t\xi|} \leq \frac{C}{|t|} \]
for $x$ in a fixed compact set and $|t|$ sufficiently large. As we have proved the existence of $W_\pm$, (1.61) is clear, as they are strong limits of unitary operators.
Finally, (1.60) can be seen as follows. For any $s \in \mathbb{R}$,
\[ e^{isH_V}W_\pm e^{-isH_0} = \lim_{t\to\pm\infty} e^{i(s+t)H_V}e^{-i(s+t)H_0} = W_\pm. \]
Thus
\[ 0 = \frac{1}{i}\,\partial_s\big( e^{isH_V}W_\pm e^{-isH_0} \big) = H_V W_\pm - W_\pm H_0, \]
which is (1.60). □

The operators $W_\pm$ defined in Theorem 1.4 are called wave operators. They can be defined in situations of much greater generality. To illustrate this, we present the following simple example.
Example. Let H
0
= D
x
, H
V
= D
x
+V , V C

0
(R). Then e
itH
0
u(x) = u(x +t) and
since H
V
= e
iF
H
0
e
iF
where F
t
= V , we have
e
itH
V
u(x) = e
iF
e
itH
0
e
iF
u(x)
= e
iF(x)
e
iF(x+t)
u(x +t)
Thus,
W

u(x) = lim
t
e
itH
V
e
itH
0
u(x)
= lim
t
e
iF(x)
e
iF(x)
e
iF(x+t)
u(x)
= e
i(F()F(x))
u(x)
Now, using the same wave operators, we can dene the scattering operator by
o = W

+
W

(1.64)
In our simple example above, o is simply the multiplication operator by the constant
e
i
R
R
V (x)dx
.
The following theorem relates the scattering operator to the scattering matrix dened in
(1.43).
Theorem 1.5 The scattering operator o : L
2
(R) L
2
(R) is unitary and is given by the
Fourier multiplier
o =

S() (1.65)
20
where : L
2
(R) (L
2
([0, ))
2
is dened by
(u) =

u()
u()

(1.66)
Proof. We rst prove (1.65). Take any v C

0
(R), let u = (

S())v. Thus

u()
u()

= S()

v()
v()

(1.67)
and u S(R). We need to prove ov = u or equivalently W
+
u = W

v. Let
w(x) =
1
2

e
+
(x, ) v() + e

(x, ) v()

d . (1.68)
We claim that
W

v = w . (1.69)
Indeed, we have
e
itH
V
w(x) =
1
2


0
[e
+
(x, ) v() + e

(x, ) v()] e
it
2
d
= e
itH
0
v(x) +
1
2


0
[f
+
(x, ) v() + f

(x, ) v()] e
it
2
d
where f

(x, ) = e

(x, ) e
ix
= R
V
()(V e
ix
) are outgoing and holomorphic in
C : Re > 0, Im > 0 as shown in Proposition 1.4. Now, deforming the contour
of integration in the last integral from R
+
to
+
= + i : > 0, we see that the last
integral tends to 0 in L
2
(R) as t once we observe the bounds
|f

( , )|
L
2 C e
CIm
, [ v()[ C e
CIm
(1.70)
on
+
. The rst estimate comes from the bound
|R
V
()|
L
2
L
2
1
Im
2
()
on
+
by the Spectral Theorem and the second estimate comes from the fact that v is compactly
supported. Thus, our claim is proved.
Next, we let
w(x) =
1
2


0
[e
+
(x, ) u() + e

(x, ) u()]d (1.71)


21
and we can prove by similar argument that
W
+
u = w .
Note that the sign switch comes from the fact that e

(x, ) e
ix
is incoming and hence
we have to deform the contour of integration into the lower half plane. we also remark
that although the second estimate in (1.70) may not hold for u which is only known to be
Schwartz a priori, (1.71) can still be obtained by an approximation argument.
(1.65) will follow if we can prove w = w. Now
w(x) =
1
2


0
(e
+
(x, ), e

(x, ))

u()
u()

d
=
1
2


0
(e
+
(x, ), e

(x, ))S()
1
J

u()
u()

d by (1.52)
=
1
2


0
(e
+
(x, ), e

(x, ))

v()
v()

d by (1.67)
= w(x) .
Finally, the unitarity of o follows from (1.65) and the unitarity of S() and .
Remark. A dierent proof not specic to dimension 1 will be given in Chapter 2. In
the more general context, the multiplier we used is related to the spectral decomposition of
the free Laplacian in section 1.1, namely,
dE
0

0
()
0
()d
with

0
()u = ( u(), u()) .
Since o commutes with H
0
= D
2
x
, we have formally
o =


0
S()dE
0

and o =

0
S()
0
.
As an application, we give the weak asymptotic completeness of the wave operators.
Proposition 1.7 We have Ran W
+
= Ran W

.
Proof. This is a consequence of the unitarity of o and can be seen clearly from the
following diagram.
22

r
r
r
r
r
r
r
r
r

r
r
r
r
r
r
r
r
r
L
2
(R) L
2
(R) L
2
(R) L
2
(R) L
2
(R)
E E E E
W

+
W
+
W

E E
o o

= I
y
W

y = x
x
W

x = y

q
r
r
r
r
r
r
r
r
rj

B
Figure 6
Take any x Ran W
+
, the existence and uniqueness of x is clear from Figure 6. Now
we have
W
+
W

+
x = x
which implies x = x since W
+
is a partial isometry. Thus Ran W
+
Ran W

. Similarly,
by considering o

o = I instead of oo

= I , we get Ran W

Ran W
+
.
In fact, the range of W
+
(or W

) is characterized as the orthogonal complement of the


eigenfunctions of H
V
. More precisely,
Proposition 1.8 We have
(Ran W

= Ker W

= Span

(x, i

E
k
) (1.72)
where the E
k
s are eigenvalues of H
V
.
Proof. The fact that

(x, i

E
k
) gives an eigenfunction corresponding to the eigen-
value E
k
is explained in the proof of Theorem 1.3. Note that all the eigenvalues of H
V
are
simple because of the uniqueness theorem of ordinary dierential equation, see Proposition
1.3.
The rst equality in (1.72) is a standard fact, we only need to prove the second equality
there.
23
First, we show Span

(x, i

E
k
) (Ran W

. Of course, we only need to show it


for Ran W

in view of Proposition 1.7. For this, it suces to check 'w(x),

(x, i

E
k
)` =
0 for all w(x) of the form
w(x) =
1
2


0
[e
+
(x, ) v() + e

(x, ) v()]d v C

0
(R) ,
see (1.68),(1.69). This in turn follows from, for > 0
'e

(x, )

(x, i

E
k
)` =
1
E
k
'e

(x, ), H
V

(x, i

E
k
)`
=
1
E
k
'H
V
e

(x, ),

(x, i

E
k
)`
=

2
E
k
'e

(x, ),

(x, i

E
k
)`
implying 'e

(x, ),

(x, i

E
k
)` = 0 as

2
E
k
= 1. It remains to show (Ran W


Span

(x, i

E
k
) or equivalently (Span

(x, i

E
k
)

Ran W

. By the spectral
decomposition of H
V
, Theorem 1.2, a generic element of (Span

(x, i

E
k
)

is of the
form
w(x) =
1
2


0
[e
+
(x, )

f
+
() + e

(x, )

f

()] d
where

() =

(y, )f(y)dy , f C

0
(R) .
Note that

f

() L
2
(R
+
) as e

(y, ) is uniformly bounded by (1.37). We can then nd


g L
2
(R) such that g() =

f
+
() and g() =

f

() for > 0. Thus w(x) Ran W

and this completes the proof of Proposition 1.8.


1.4 Resonances
We have seen in Theorem 1.2 that the poles of the outgoing resolvent R
V
() in the upper
half plane Im > 0 corresponds to the eigenvalues of H
V
. In Theorem 1.1, we saw that
R
V
() as operator
R
V
() : L
2
comp
(R) L
2
loc
(R)
has a meromorphic continuation to C. Its poles are some most important objects in scatter-
ing theory and are called resonances or scattering poles. We will provide more motivation
and justication later. We start with some preliminaries on multiplicity.
24
Proposition 1.9 If the multiplicity of a pole
0
= 0 of R
V
() is dened by
m
R
(
0
) = rank
1
2i

0
R
V
()d , (1.73)
then
m
R
(
0
) =
1
2i


X
t
()

X()
d
= the order of vanishing of

X() at
0
(1.74)
where X c
t
(R) is dened in Proposition 1.5.
Proof. Recall that

(x, ) are a pair of linearly independent solutions to (H


V

2
)u = 0
dened in Lemma 1.1.
Apply (1.25) to the operator H
V

2
and use
1
=
+
(x, ) and
2
=

(x, ), we
get
R
V
()(x, y) =
1
2

X()
(
+
(x, )

(x,)(x y)
0
+
+
+
(y, )

(x, )(x y)
0

) . (1.75)
Using (1.49), namely
i
+
(x, ) =

X()
+
(x, ) +

Y ()
+
(x, )
we obtain from (1.75) and (1.73) that
m
R
(
0
) = rank Res


Y ()
2i

X()

+
(, )
+
(, )

.
Now suppose the order of vanishing for

X() at
0
is k + 1, then the residue of

Y ()
2i

X()

+
(, )
+
(, )
at
0
is given by, as

Y () and
+
(x, ) are entire,
1
2ik!

k

Y ()


+
(, )
+
(, )

=
0
=

=0

+
(, )


j
1
,j
2
0,j
1
+j
2
+=k
c
j
1
,j
2
,

j
1

Y ()

j
2


+
(, )

=
0
for some nonzero constants c
j
1
,j
2
,
. The above operator is of rank k + 1 because
25
(i)
j

+
(x, )[
=
0
, j = 0, 1, 2, . . . k, are linearly independent functions in x
since
j

+
(x, )[
=
0
= (ix)
j
e
i
0
x
for x >> 0
and
(ii) the coecient of
j

+
(x, )[
=
0
, j = 0, . . . k, since

Y (
0
) = 0 by the
unitarity relation (1.51).
This nishes the proof of Proposition 1.9.
Theorem 1.3 shows that the poles of the resolvent coincides with the poles of the scattering
matrix. For matrix-valued meromorphic function (or more generally, for operator valued
meromorphic function), the natural notion of multiplicity of poles is given by
m
S
(
0
) =
1
2i

0
tr(S()
1
S
t
())d (1.76)
which is the same as the order of the poles of det S() because
(det S())
t
det S()
= tr(S()
1
S
t
()) . (1.77)
Proposition 1.10 The denition of multiplicities given by (1.73) and (1.76) are related by
m
S
() = m
R
() m
R
() (1.78)
Proof. Using (1.44), we compute
det S() =
X()

X()
(1.79)
Proposition 1.10 then follows from Proposition 1.9.
Remark. If Im < 0, m
R
() = 0 only for
2
being an eigenvalue of H
V
. With a slight
abuse of terminology, we say that the poles of S() and R
V
() coincide with multiplicities.
We will now give some motivation for the study of resonances. We start with the time delay operator. For simplicity, write $\chi_r(x) = 1\!\!1_{|x|<r}$. For $f\in C_0^\infty(\mathbb R)$, put
\[
S_r(f) = \int_{-\infty}^{\infty} \|\chi_r\, e^{-itH_V}W_-f\|^2\,dt
\quad\text{and}\quad
S_r^0(f) = \int_{-\infty}^{\infty}\|\chi_r\, e^{-itH_0}f\|^2\,dt
\]
where $W_-$ is one of the wave operators studied in Section 1.3.

Define an operator $\widetilde T_r$ by its corresponding quadratic form
\[
\langle \widetilde T_r f, f\rangle = S_r(f) - S_r^0(f)\,;
\]
then the time delay operator $\widetilde T$ is given by
\[
\langle \widetilde T f, f\rangle = \lim_{r\to\infty}\langle \widetilde T_r f, f\rangle\,. \tag{1.80}
\]
The next proposition guarantees the existence of the limit. Here we note that, by the definition of $W_-$,
\[
e^{-itH_V}W_-f \sim e^{-itH_0}f \quad\text{as } t\to-\infty\,.
\]
Hence $W_-f$ and $f$ evolve in the same way under the perturbed and the free propagation, respectively, for large negative times.

Proposition 1.11 (Eisenbud-Wigner Formula) The operator $\widetilde T$ given by (1.80) exists and
\[
\widetilde T = \Phi^*\, T(\lambda)\,\Phi \tag{1.81}
\]
where $T(\lambda) = 2i\,S(\lambda)^*\frac{dS}{d\lambda}(\lambda)$ and $\Phi$ is as in Theorem 1.5.
Outline of proof. (1.81) is equivalent to, for $f\in C_0^\infty(\mathbb R)$,
\[
\begin{pmatrix}\widehat{\widetilde Tf}(\lambda)\\ \widehat{\widetilde Tf}(-\lambda)\end{pmatrix}
= 2i\,S(\lambda)^*S'(\lambda)\begin{pmatrix}\hat f(\lambda)\\ \hat f(-\lambda)\end{pmatrix} \tag{1.82}
\]
which in turn is equivalent to
\begin{align*}
\langle\widetilde Tf, f\rangle
&= \int (\widetilde Tf)(x)\,\overline{f(x)}\,dx\\
&= \frac{1}{2\pi}\int \widehat{\widetilde Tf}(\lambda)\,\overline{\hat f(\lambda)}\,d\lambda
\quad\text{by the Plancherel formula}\\
&= \frac{1}{2\pi}\int_0^\infty\Big(\widehat{\widetilde Tf}(\lambda)\overline{\hat f(\lambda)} + \widehat{\widetilde Tf}(-\lambda)\overline{\hat f(-\lambda)}\Big)d\lambda \tag{1.83}\\
&= \frac{1}{2\pi}\int_0^\infty \Big\langle \begin{pmatrix}\widehat{\widetilde Tf}(\lambda)\\ \widehat{\widetilde Tf}(-\lambda)\end{pmatrix},
\begin{pmatrix}\hat f(\lambda)\\ \hat f(-\lambda)\end{pmatrix}\Big\rangle\,d\lambda\\
&= \frac{1}{2\pi}\int_0^\infty \Big\langle 2i\,S(\lambda)^*S'(\lambda)\begin{pmatrix}\hat f(\lambda)\\ \hat f(-\lambda)\end{pmatrix},
\begin{pmatrix}\hat f(\lambda)\\ \hat f(-\lambda)\end{pmatrix}\Big\rangle\,d\lambda
\end{align*}
for $f\in C_0^\infty(\mathbb R)$. Now, using the expression for $S(\lambda)$ in terms of the transmission and reflection coefficients given in (1.56), we get
\[
S(\lambda)^*S'(\lambda) =
\begin{pmatrix}
\overline{T(\lambda)}T'(\lambda) + \overline{R_+(\lambda)}R_+'(\lambda) & \overline{T(\lambda)}R_-'(\lambda) + \overline{R_+(\lambda)}T'(\lambda)\\[2pt]
\overline{R_-(\lambda)}T'(\lambda) + \overline{T(\lambda)}R_+'(\lambda) & \overline{R_-(\lambda)}R_-'(\lambda) + \overline{T(\lambda)}T'(\lambda)
\end{pmatrix}
=: \begin{pmatrix} a & b\\ c & d\end{pmatrix} \tag{1.84}
\]
Thus, we need to prove that $\lim_{r\to\infty}(S_r(f) - S_r^0(f))$ exists and
\[
\lim_{r\to\infty}\big(S_r(f) - S_r^0(f)\big)
= \frac{1}{2\pi}\int_0^\infty (2i)\Big( a\,\hat f(\lambda)\overline{\hat f(\lambda)} + d\,\hat f(-\lambda)\overline{\hat f(-\lambda)} + b\,\hat f(-\lambda)\overline{\hat f(\lambda)} + c\,\hat f(\lambda)\overline{\hat f(-\lambda)}\Big)\,d\lambda\,. \tag{1.85}
\]
To compute $S_r(f) - S_r^0(f)$, we use the formula for $W_-$ given by (1.68) (and (1.69)). We get
\begin{align*}
S_r(f) &= \int_{-\infty}^\infty\|\chi_r\,e^{-itH_V}W_-f\|^2\,dt
= \int_{-\infty}^\infty\|\chi_r\,W_-e^{-itH_0}f\|^2\,dt\\
&= \int_{-\infty}^\infty\frac{1}{2\pi}\int_{-r}^r\Big|\int_0^\infty\big(e_+(x,\lambda)\,\widehat{e^{-itH_0}f}(\lambda) + e_-(x,\lambda)\,\widehat{e^{-itH_0}f}(-\lambda)\big)\,d\lambda\Big|^2dx\,dt
\quad\text{by (1.68)}\\
&= \int_{-\infty}^\infty\frac{1}{2\pi}\int_{-r}^r\Big|\int_0^\infty e^{-it\lambda^2}\big(e_+(x,\lambda)\hat f(\lambda) + e_-(x,\lambda)\hat f(-\lambda)\big)\,d\lambda\Big|^2dx\,dt\\
&= \frac{1}{2\pi}\int_{-r}^r\int_{-\infty}^\infty\Big|\mathcal F_t\Big(\int_0^\infty e^{-it\lambda^2}\big(e_+(x,\lambda)\hat f(\lambda) + e_-(x,\lambda)\hat f(-\lambda)\big)\,d\lambda\Big)(\mu)\Big|^2d\mu\,dx
\quad\text{by the Plancherel formula}\\
&= \frac{1}{2\pi}\int_{-r}^r\int_0^\infty\big|e_+(x,\sqrt\mu)\hat f(\sqrt\mu) + e_-(x,\sqrt\mu)\hat f(-\sqrt\mu)\big|^2\,d\mu\,dx\\
&= \frac{1}{2\pi}\int_{-r}^r\int_0^\infty 2\lambda\,\big|e_+(x,\lambda)\hat f(\lambda) + e_-(x,\lambda)\hat f(-\lambda)\big|^2\,d\lambda\,dx\\
&= \frac{1}{2\pi}\int_{-r}^r\int_0^\infty 2\lambda\Big(e_+(x,\lambda)\overline{e_+(x,\lambda)}\,\hat f(\lambda)\overline{\hat f(\lambda)} + e_-(x,\lambda)\overline{e_-(x,\lambda)}\,\hat f(-\lambda)\overline{\hat f(-\lambda)}\\
&\hspace{3.2cm} + e_-(x,\lambda)\overline{e_+(x,\lambda)}\,\hat f(-\lambda)\overline{\hat f(\lambda)} + e_+(x,\lambda)\overline{e_-(x,\lambda)}\,\hat f(\lambda)\overline{\hat f(-\lambda)}\Big)\,dx\,d\lambda\,. \tag{1.86}
\end{align*}
Similarly, we have
\begin{align*}
S_r^0(f) &= \int_{-\infty}^\infty\|\chi_r\,e^{-itH_0}f\|^2\,dt \tag{1.87}\\
&= \frac{1}{2\pi}\int_{-r}^r\int_0^\infty 2\lambda\Big(\hat f(\lambda)\overline{\hat f(\lambda)} + \hat f(-\lambda)\overline{\hat f(-\lambda)} + e^{-2i\lambda x}\hat f(-\lambda)\overline{\hat f(\lambda)} + e^{2i\lambda x}\hat f(\lambda)\overline{\hat f(-\lambda)}\Big)\,dx\,d\lambda\,.
\end{align*}
Thus, to prove (1.85), we need
\[
\lim_{r\to\infty}\int\Big(\int_{-r}^r2\lambda\big(e_-(x,\lambda)\overline{e_-(x,\lambda)} - 1\big)\,dx\Big)\hat f(-\lambda)\overline{\hat f(-\lambda)}\,d\lambda
= \int_0^\infty(2i)\big[\overline{T(\lambda)}T'(\lambda) + \overline{R_-(\lambda)}R_-'(\lambda)\big]\hat f(-\lambda)\overline{\hat f(-\lambda)}\,d\lambda
\]
and
\[
\lim_{r\to\infty}\int\Big(\int_{-r}^r2\lambda\big(e_-(x,\lambda)\overline{e_+(x,\lambda)} - e^{-2i\lambda x}\big)\,dx\Big)\hat f(-\lambda)\overline{\hat f(\lambda)}\,d\lambda
= \int_0^\infty(2i)\big[\overline{T(\lambda)}R_-'(\lambda) + \overline{R_+(\lambda)}T'(\lambda)\big]\hat f(-\lambda)\overline{\hat f(\lambda)}\,d\lambda\,.
\]
These can be proved by exactly the same type of computation as we use to prove the Birman-Krein formula in Section 1.5. Since we are going to give a detailed exposition there, we will not carry out the computation here; indeed, the first equality above follows directly from the computation there.

This finishes our outline of the proof of Proposition 1.11.
Another motivation for the study of resonances is the fact that, in a weaker sense, resonances replace eigenvalues in expansions in modes (eigenfunctions). We recall that if we have $H_V = D_x^2 + V$ on $[a,b]$ with Dirichlet (or Neumann) boundary conditions, then the problem
\[
(H_V - \lambda^2)u = 0 \ \text{ on } (a,b)\,,\qquad u(a) = u(b) = 0
\]
has a discrete set of solutions $(i\sqrt{-E_k}, v_k)$, $(\lambda_j, u_j)$ with $E_N \le \dots \le E_1 < 0 < \lambda_0^2 \le \lambda_1^2 \le \dots$ and $\int_a^b|u_j|^2\,dx = \int_a^b|v_k|^2\,dx = 1$. If we consider the wave equation
\[
(D_t^2 - H_V)w = 0 \ \text{ on } \mathbb R\times(a,b)\,,\qquad
w(0,x) = w_0(x)\,,\quad \partial_t w(0,x) = w_1(x) \ \text{ on } [a,b]\,,\qquad
w(t,a) = w(t,b) = 0 \ \text{ on } \mathbb R\,,
\]
then
\[
w(t,x) = \sum_{k=1}^N \cosh\big(t\sqrt{-E_k}\big)\,a_k v_k(x)
+ \sum_{k=1}^N \frac{\sinh\big(t\sqrt{-E_k}\big)}{\sqrt{-E_k}}\,b_k v_k(x)
+ \sum_{j=0}^\infty \cos(t\lambda_j)\,c_j u_j(x)
+ \sum_{j=0}^\infty \frac{\sin(t\lambda_j)}{\lambda_j}\,d_j u_j(x) \tag{1.88}
\]
where
\[
a_k = \int_a^b w_0(x)\,\overline{v_k(x)}\,dx\,,\quad
b_k = \int_a^b w_1(x)\,\overline{v_k(x)}\,dx\,,\qquad
c_j = \int_a^b w_0(x)\,\overline{u_j(x)}\,dx\,,\quad
d_j = \int_a^b w_1(x)\,\overline{u_j(x)}\,dx\,.
\]
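The expansion (1.88) can be checked numerically in the free case $V = 0$ on $[0,\pi]$, where the Dirichlet modes are $u_j(x) = \sqrt{2/\pi}\sin((j+1)x)$ with $\lambda_j = j+1$ (a standard example of mine, not taken from the notes). For $w_0 = 0$ and $w_1 = \sin 2x$ the series collapses to the exact solution $\tfrac12\sin(2t)\sin(2x)$:

```python
import numpy as np

# Normal-mode solution of the wave equation on [0, pi], V = 0, Dirichlet BCs.
# Modes: u_j(x) = sqrt(2/pi) sin((j+1) x), eigenvalues lambda_j = j + 1.
x = np.linspace(0.0, np.pi, 40001)
dx = x[1] - x[0]
integral = lambda f: np.sum((f[1:] + f[:-1]) / 2) * dx   # trapezoid rule

w0 = np.zeros_like(x)            # initial displacement
w1 = np.sin(2 * x)               # initial velocity

t = 0.7
w = np.zeros_like(x)
for j in range(10):              # the series (1.88); only j = 1 survives here
    lam = j + 1
    u_j = np.sqrt(2 / np.pi) * np.sin(lam * x)
    c = integral(w0 * u_j)
    d = integral(w1 * u_j)
    w += np.cos(t * lam) * c * u_j + np.sin(t * lam) / lam * d * u_j

exact = np.sin(2 * t) * np.sin(2 * x) / 2   # exact solution for this data
assert np.max(np.abs(w - exact)) < 1e-4
```

Only the $d_j$ series contributes since $w_0 \equiv 0$, and orthogonality kills every mode but $j = 1$.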
We now give an analogue of (1.88) for problems on open domains, involving resonances. In its proof we need the following

Lemma 1.2 Suppose $V\in L^\infty_{\mathrm{comp}}(\mathbb R)$. Then for any $\chi\in C_0^\infty(\mathbb R)$ satisfying $\chi V = V$, there are constants $A'$, $C$, $T$ depending on the support of $\chi$ such that
\[
\|\chi R_V(\lambda)\chi\|_{L^2\to L^2} \le \frac{C}{|\lambda|}\,e^{T|\operatorname{Im}\lambda|} \tag{1.89}
\]
for $\operatorname{Im}\lambda \ge -A'\log\langle\lambda\rangle$ and $|\lambda|$ sufficiently large. Here $A'$ is a constant depending only on the support of $V$. In particular there are only finitely many resonances in the region
\[
\operatorname{Im}\lambda \ge -A\log\langle\lambda\rangle
\]
for any $A$.

Proof. First, note the following obvious estimate for the free resolvent:
\[
\|\chi R_0(\lambda)\chi\|_{L^2\to L^2}\le \frac{1}{|\lambda|}\,e^{T|\operatorname{Im}\lambda|} \tag{1.90}
\]
for some constant $T$ depending on the support of $\chi$; see (1.9). Since we have, from (1.20),
\[
R_V(\lambda)\chi = R_0(\lambda)\chi\,\big(I + VR_0(\lambda)\chi_1\big)^{-1}
\]
where $\chi_1\in C_0^\infty(\mathbb R)$ is any function satisfying $\chi_1\chi = \chi_1$, (1.89) holds in the region where
\[
\|VR_0(\lambda)\chi_1\|_{L^2\to L^2}\le \frac12\,.
\]
Our lemma clearly follows from (1.90). Note that the constant $A'$ does not depend on $\chi$, as we can choose $\chi_1$ with support as close to that of $V$ as we like.
Remark. Here we draw a consequence of Lemma 1.2. Since
\[
(D_x^2 + V - \lambda^2)\,\chi R_V(\lambda)\chi = \chi^2 I + [D_x^2,\chi]\,R_V(\lambda)\chi
\]
we have
\[
D_x^2\big(\chi R_V(\lambda)\chi\big) = \chi^2 I + \big((D_x^2\chi) + 2(D_x\chi)D_x\big)R_V(\lambda)\chi - \chi V R_V(\lambda)\chi + \lambda^2\chi R_V(\lambda)\chi\,.
\]
Thus
\begin{align*}
\|\chi R_V(\lambda)\chi\|_{L^2\to H^2} &\le C\,\|D_x^2\,\chi R_V(\lambda)\chi\|_{L^2\to L^2}\\
&\le C\Big(1 + \|\chi_1 R_V(\lambda)\chi\|_{L^2\to L^2} + \|D_x\chi\|_{L^\infty}\|\chi_1 R_V(\lambda)\chi\|_{L^2\to H^1} + (1+\lambda^2)\|\chi R_V(\lambda)\chi\|_{L^2\to L^2}\Big)
\end{align*}
where $\chi_1\in C_0^\infty(\mathbb R)$ with $\chi_1\chi = \chi$. If $\|D_x\chi\|_{L^\infty}$ is small, we have, for large $\lambda$,
\[
\|\chi R_V(\lambda)\chi\|_{L^2\to H^2}\le C\,|\lambda|\,e^{T'|\operatorname{Im}\lambda|} \tag{1.91}
\]
in the region $\operatorname{Im}\lambda\ge -A'\log\langle\lambda\rangle$.
Now we can state the analogue of (1.88).

Theorem 1.6 Suppose $w(t,x)$ is the solution of
\[
(D_t^2 - H_V)w(t,x) = 0 \ \text{ on } \mathbb R\times\mathbb R\,,\qquad
w(0,x) = w_0(x)\,,\quad \partial_t w(0,x) = w_1(x) \ \text{ on } \mathbb R\,, \tag{1.92}
\]
where $w_0\in H^1_{\mathrm{comp}}(\mathbb R)$, $w_1\in L^2_{\mathrm{comp}}(\mathbb R)$ with $\operatorname{supp}w_0,\ \operatorname{supp}w_1\subset\{|x|<R\}$. Then, for any $A>0$,
\[
w(t,x) = \sum_{\operatorname{Im}\lambda>0}\operatorname{Res}\big[(iR_V(\lambda)w_1 + \lambda R_V(\lambda)w_0)\,e^{-i\lambda t}\big]
+ \sum_{0>\operatorname{Im}\lambda\ge -A}\operatorname{Res}\big[(iR_V(\lambda)w_1 + \lambda R_V(\lambda)w_0)\,e^{-i\lambda t}\big] + E_A(t) \tag{1.93}
\]
where $E_A(t)$ satisfies the estimate
\[
\|1\!\!1_{|x|<K}\,E_A(t)\|\le C_{K,R}\,e^{-(A-\epsilon)(t-T')}\big(\|w_0\|_{H^1} + \|w_1\|_{L^2}\big) \tag{1.94}
\]
for any $\epsilon>0$ and some constant $T'$, with $K$ sufficiently large.
Remarks. 1. For any $A>0$, the sum in (1.93) is finite because of Lemma 1.2.

2. The first term on the right-hand side of (1.93) corresponds to the eigenvalues of $H_V$ and can be written as
\[
\sum_{k=1}^N\Big(a_k\cosh\big(t\sqrt{-E_k}\big)\,\phi(x, i\sqrt{-E_k}) + b_k\sinh\big(t\sqrt{-E_k}\big)\,\phi(x, i\sqrt{-E_k})\Big)
\]
where $E_k$, $k=1,\dots,N$, are the eigenvalues of $H_V$ and $\phi(x,i\sqrt{-E_k})$ the corresponding eigenfunctions, as explained in Theorem 1.3; the $a_k$'s and $b_k$'s are given by
\[
a_k = \frac{1}{\|\phi(\cdot,i\sqrt{-E_k})\|_{L^2}^2}\int w_0(x)\,\phi(x,i\sqrt{-E_k})\,dx\,,\qquad
b_k = \frac{1}{\sqrt{-E_k}\,\|\phi(\cdot,i\sqrt{-E_k})\|_{L^2}^2}\int w_1(x)\,\phi(x,i\sqrt{-E_k})\,dx\,.
\]
Proof of Theorem 1.6. For simplicity, we assume that $H_V$ has no negative eigenvalues, as their contribution to (1.93) is clear. Also, we will only consider (1.92) with $w_0\equiv 0$, as the proof below clearly works in the case $w_1\equiv 0$ if we replace $\frac{\sin t\sqrt\mu}{\sqrt\mu}$ by $\cos t\sqrt\mu$ in the formula for $w(t,x)$. The general case is then obtained by taking linear combinations.

With the above simplifications understood, by the Spectral Theorem the solution of (1.92) can be written as
\[
w(t) = \int_0^\infty \frac{\sin t\sqrt\mu}{\sqrt\mu}\,dE_\mu(w_1)\,.
\]
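In finite dimensions the same spectral formula can be checked directly. In the sketch below (my assumptions: a discrete Dirichlet Laplacian stands in for $H_V$, and $\partial_t^2$ is approximated by a difference quotient) we verify that $w(t) = \frac{\sin(t\sqrt H)}{\sqrt H}w_1$ solves $w'' = -Hw$ with $w(0)=0$, $w'(0)=w_1$:

```python
import numpy as np

# Finite-dimensional check of w(t) = (sin(t sqrt(H))/sqrt(H)) w1
# for a positive definite matrix H (discrete Dirichlet Laplacian here).
n = 50
H = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # positive definite
mu, U = np.linalg.eigh(H)                            # H = U diag(mu) U^T, mu > 0
w1 = np.sin(np.linspace(0, 3, n))                    # some initial velocity

def w(t):
    # spectral calculus applied on the eigenbasis of H
    return U @ (np.sin(t*np.sqrt(mu)) / np.sqrt(mu) * (U.T @ w1))

t, h = 0.9, 1e-3
wtt = (w(t+h) - 2*w(t) + w(t-h)) / h**2              # second difference in t
assert np.allclose(wtt, -H @ w(t), atol=1e-4)        # w'' = -H w
assert np.allclose(w(0.0), 0.0)                      # w(0) = 0
assert np.allclose((w(h) - w(-h)) / (2*h), w1, atol=1e-4)   # w'(0) = w1
```

This is exactly the finite-dimensional analogue of the spectral integral above, with $dE_\mu$ replaced by projections onto eigenvectors.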
Using Stone's Formula to write $dE_\mu$ in terms of $R_V(\lambda)$, we get
\begin{align*}
w(t) &= \frac{1}{\pi i}\int_0^\infty \sin(t\lambda)\,\big(R_V(\lambda) - R_V(-\lambda)\big)w_1\,d\lambda\\
&= \frac{1}{\pi i}\int_0^\infty \frac{e^{it\lambda} - e^{-it\lambda}}{2i}\,\big(R_V(\lambda) - R_V(-\lambda)\big)w_1\,d\lambda \tag{1.95}\\
&= \frac{1}{\pi i}\cdot\frac{1}{2i}\Big(\int_{-\infty}^\infty e^{it\lambda}R_V(\lambda)w_1\,d\lambda - \int_{-\infty}^\infty e^{-it\lambda}R_V(\lambda)w_1\,d\lambda\Big)\,.
\end{align*}
Now, as $R_V(\lambda)$ is holomorphic in the upper half-plane, we can deform the contour of integration of the first term on the right-hand side of (1.95) to the contour illustrated in Figure 7.

[Figure 7: the contour $\operatorname{Im}\lambda = \pm\operatorname{Re}\lambda + M$ in the $\lambda$-plane.]

By letting $M\to\infty$, we can eliminate it from (1.95).

Next, for $K$ large enough, we can choose $\chi\in C_0^\infty(\mathbb R)$ with $\chi V = V$, $1\!\!1_{|x|<K}\,\chi = 1\!\!1_{|x|<K}$, such that (1.89) holds; we then have, for $|x|<K$,
\[
w(t) = \frac{1}{2\pi i}\int e^{-i\lambda t}\,\big(i\,\chi R_V(\lambda)\chi\big)w_1\,d\lambda\,.
\]
By (1.91), we can deform the contour of integration to the contour illustrated in Figure 8,

[Figure 8: the contour $\operatorname{Im}\lambda = -A - \log\langle\lambda\rangle$ in the $\lambda$-plane.]

and we have
\[
w(t) = \sum_{0>\operatorname{Im}\lambda\ge -A}\operatorname{Res}\big((iR_V(\lambda)w_1)\,e^{-i\lambda t}\big) + E_{A,\chi}(t)
\]
with
\[
\|E_{A,\chi}(t)\|_{H^1}\le C_\chi\,e^{-(A-\epsilon)(t-T')}\,\|w_1\|_{L^2}\,.
\]
Letting $K$ and the support of $\chi$ go to infinity, we obtain (1.93) and (1.94).
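The contour-deformation mechanism above, where each pole of the integrand contributes a residue $\sim e^{-i\lambda_0 t}$, can be seen in a scalar toy version (a single pole standing in for $R_V(\lambda)$; the specific pole, time, and tolerances are my illustrative choices):

```python
import numpy as np

# Toy contour deformation: for Im(lam0) < 0 and t > 0,
#   (1/2pi) \int_R e^{-i lam t}/(lam - lam0) d lam = -i e^{-i lam0 t},
# the residue picked up by closing the contour in the lower half-plane.
lam0, t = -0.5j, 3.0
lam = np.linspace(-500.0, 500.0, 1_000_001)
f = np.exp(-1j*lam*t) / (lam - lam0)
integral = np.sum((f[1:] + f[:-1]) / 2) * (lam[1] - lam[0]) / (2*np.pi)

expected = -1j * np.exp(-1j*lam0*t)      # residue theorem prediction: -i e^{-1.5}
assert abs(integral - expected) < 1e-3
```

The slowly decaying tail of the integrand is why the truncation has to be taken large; in the text the analogous tail control comes from the resolvent bounds (1.89) and (1.91).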
Comparison of Theorem 1.6 with the normal mode expansion (1.88) shows that resonances are the natural analogue of eigenvalues for scattering problems. Now, it is classical that for $H_V$ on $[a,b]$, with either Dirichlet or Neumann boundary conditions, we have
\[
\#\{j : \lambda_j\le r\} = \frac{b-a}{\pi}\,r + O(1)\,.
\]
There is also an analogous result for resonances.

Theorem 1.7 Let $m_R(\lambda)$ be the multiplicity of resonances given by (1.73). Then
\[
\sum_{|\lambda|\le r}m_R(\lambda) = \frac{2\,|\mathrm{ch\,supp}\,V|}{\pi}\big(r + o(1)\big) \tag{1.96}
\]
where $\mathrm{ch\,supp}\,V$ is the convex hull of the support of $V$.

Proof. By Proposition 1.9, Theorem 1.7 is a statement about the distribution of zeros of the entire function $\widetilde X(\lambda)$. To prove it, we need the following generalization, to distributions, of a classical result of Titchmarsh.

Lemma 1.3 If $u\in\mathcal E'(\mathbb R)$, then
\[
N_{\hat u}(r) = \frac{|\mathrm{ch\,supp}\,u|}{\pi}\big(r + o(1)\big) \tag{1.97}
\]
where
\[
N_f(r) = \sum_{|z|\le r}\frac{1}{2\pi i}\oint_z \frac{f'(w)}{f(w)}\,dw
\]
is the counting function of the zeros of $f$.
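Lemma 1.3 can be illustrated with $u = 1\!\!1_{[0,L]}$ (my own example, not from the notes): $\hat u(\lambda) = (e^{i\lambda L}-1)/(i\lambda)$ vanishes exactly at $\lambda = 2\pi k/L$, $k\in\mathbb Z\setminus\{0\}$, so $N_{\hat u}(r) = 2\lfloor rL/2\pi\rfloor \approx (L/\pi)\,r$, matching $|\mathrm{ch\,supp}\,u| = L$:

```python
import numpy as np

# Zeros of the Fourier transform of u = 1_{[0, L]}:
#   u^(lambda) = (e^{i lambda L} - 1)/(i lambda),  zeros at lambda = 2 pi k / L.
# Lemma 1.3 predicts N(r) ~ (|ch supp u| / pi) r = (L/pi) r.
L = 2.5
for r in [10.0, 100.0, 1000.0]:
    zeros_count = 2 * int(np.floor(r * L / (2*np.pi)))   # +/- pairs with |2 pi k/L| <= r
    predicted = (L / np.pi) * r
    assert abs(zeros_count - predicted) <= 2             # off by at most one pair
```

The same linear growth of zero counting, applied to $\widetilde X$, is what drives the proof of Theorem 1.7.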
Proof. Recall that the classical theorem of Titchmarsh gives Lemma 1.3 when $u\in L^1_{\mathrm{comp}}(\mathbb R)$. We need to extend it to $u\in\mathcal E'(\mathbb R)$. First, observe that Titchmarsh's theorem implies that for $u,v\in L^1_{\mathrm{comp}}(\mathbb R)$ we have
\[
\mathrm{ch\,supp}(u*v) = \mathrm{ch\,supp}\,u + \mathrm{ch\,supp}\,v\,. \tag{1.98}
\]
In fact, since $\widehat{u*v} = \hat u\,\hat v$, (1.97) implies
\[
|\mathrm{ch\,supp}(u*v)| = |\mathrm{ch\,supp}\,u| + |\mathrm{ch\,supp}\,v|
\]
and we easily have
\[
\mathrm{ch\,supp}(u*v)\subset \mathrm{ch\,supp}\,u + \mathrm{ch\,supp}\,v\,.
\]
Thus, (1.98) follows. Now (1.98) can be generalized easily to compactly supported distributions. To see this, let $u,v\in\mathcal E'(\mathbb R)$ and let $\varphi_\epsilon\in C_0^\infty(\mathbb R)$ be supported in $(-\epsilon,\epsilon)$. Then
\[
\mathrm{ch\,supp}(u*\varphi_\epsilon) + \mathrm{ch\,supp}(v*\varphi_\epsilon)
= \mathrm{ch\,supp}\big((u*v)*(\varphi_\epsilon*\varphi_\epsilon)\big)\subset \mathrm{ch\,supp}(u*v) + (-2\epsilon, 2\epsilon)\,.
\]
By letting $\epsilon\to 0$, we obtain
\[
\mathrm{ch\,supp}\,u + \mathrm{ch\,supp}\,v\subset \mathrm{ch\,supp}(u*v)\,.
\]
As the reverse inclusion is clearly true, we obtain (1.98) for $u,v\in\mathcal E'(\mathbb R)$.

Now, to see (1.97) for $u\in\mathcal E'(\mathbb R)$, we apply Titchmarsh's theorem to $\varphi\in C_0^\infty(\mathbb R)$ and $u*\varphi\in C_0^\infty(\mathbb R)$, so that
\[
\lim_{r\to\infty}\frac{N_{\hat u}(r)}{r}
= \lim_{r\to\infty}\frac{N_{\widehat{u*\varphi}}(r) - N_{\hat\varphi}(r)}{r}
= \frac{|\mathrm{ch\,supp}(u*\varphi)| - |\mathrm{ch\,supp}\,\varphi|}{\pi}
= \frac{|\mathrm{ch\,supp}\,u|}{\pi}
\]
where we have used (1.98) in the last equality.
Going back to the proof of Theorem 1.7, it is now clear that it suffices to show that
\[
\mathrm{ch\,supp}\,X = [-2(b-a),\, 0] \quad\text{where } [a,b] = \mathrm{ch\,supp}\,V\,. \tag{1.99}
\]
Assume the contrary, that is, for some $\delta_-,\delta_+\ge 0$, $\delta_-+\delta_+>0$, we have
\[
\mathrm{ch\,supp}\,X = [-2(b-a)+\delta_-,\, -\delta_+]\,.
\]
We first need the following

Lemma 1.4 Suppose that $V\in L^\infty_{\mathrm{comp}}(\mathbb R)$. Then $X - \delta_0 \in C([-2(b-a), 0])$.

That immediately shows that $\delta_+ = 0$, and to obtain a contradiction we assume that $\delta_->0$. Recall the unitarity relation,
\[
\widetilde X(\lambda)\,\widetilde X(-\lambda) = \lambda^2 + \widetilde Y(\lambda)\,\widetilde Y(-\lambda)\,.
\]
The density of zeros of the left-hand side is given by $2c/\pi$, where $c = 2(b-a) - \delta_-$. Since the convex hull of the support of $Y*\check Y$ is the same as $\mathrm{ch\,supp}\,Y + \mathrm{ch\,supp}\,\check Y$, it follows that
\[
\mathrm{ch\,supp}\,Y = [d_-, d_+] = [2a, 2b]\,.
\]
Indeed, suppose that $d_- > 2a$. Then $\partial_y A_-(x,y)$ must vanish for $|x - d_-/2| < d_-/2 - y$ by causality. But that means that $V(x)\,\delta(x-y) = 0$ for $x < d_-/2$, with $d_-/2 > a$, which contradicts $\mathrm{ch\,supp}\,V = [a,b]$. A similar argument using $A_+$ shows that $d_+ = 2b$. This is incompatible with $\delta_->0$.

Thus, our proof of Theorem 1.7 is complete.
[Figure 9: supports in the $(x,y)$-plane: $\mathrm{supp}\,X\subset[-2(b-a)+\delta_-,\,-\delta_+]$, $\mathrm{supp}\,Y\subset[2a+\delta_-,\,2b-\delta_+]$, with the lines $x=-y$ and $x=y+2a$, and $\partial_y A_-(x,y) = \delta'(x-y)$ for $x<a$.]
Finally, we discuss an important characterization of resonances which comes from the method of complex scaling. It is particularly clear in the case of a compactly supported potential on $\mathbb R$. To introduce it, we first review the restriction of holomorphic differential operators on $\mathbb C$ to smooth curves.

Let $\partial_z = \frac12(\partial_x - i\partial_y)$. If $\Gamma\subset\mathbb C$ is a smooth curve and $u\in C^\infty(\Gamma)$, we define
\[
(\partial_z|_\Gamma)\,u = (z'(t))^{-1}\,\partial_t u
\]
where $\Gamma = \{z(t)\}$ is some parameterization of $\Gamma$. Clearly, our definition of $\partial_z|_\Gamma$ is independent of the choice of the parameterization.

With this preliminary, by regarding $D_x^2$ as the holomorphic differential operator $D_z^2$ on $\mathbb C$, we can restrict the operator $H_V$ to any curve $\Gamma\subset\mathbb C$ with the property that
\[
\Gamma\cap\mathbb R\supset \operatorname{supp} V\,.
\]
The curve $\Gamma$ inherits a measure from the Lebesgue measure on $\mathbb C$. We define $L^2(\Gamma)$ using this measure.

Theorem 1.8 Fix $0<\theta<\frac\pi2$. Assume $\operatorname{supp}V\subset\{|x|<R\}$. Let $\Gamma_\theta$ be a curve in $\mathbb C$ satisfying
\[
\Gamma_\theta\cap\{|z|\le R\} = [-R,R] \quad\text{and}\quad
\Gamma_\theta\cap\{|z|\ge 2R,\ \operatorname{Re}z\ge 0\} = e^{i\theta}\,[2R,\infty)\,.
\]

[Figure 10: the curve $\Gamma_\theta$ in the $z$-plane, equal to $[-R,R]$ on $\{|z|\le R\}$ and rotated by $e^{i\theta}$ beyond $|z| = 2R$.]

Put $H_\theta \overset{\mathrm{def}}{=} H_V|_{\Gamma_\theta}$. Then for $-2\theta<\arg z<0$ (recall that $z = \lambda^2$ is the spectral parameter), the spectrum of $H_\theta$ coincides with the resonances of $H_V$, in agreement with multiplicity.
Proof. For simplicity, we write $\Gamma$ for $\Gamma_\theta$ below. First we want to show that the spectrum of $H_\theta$ is discrete or, in other words, that $(H_\theta-\lambda^2)^{-1}$ is meromorphic for $-\theta+\epsilon<\arg\lambda<\pi-\theta$, where $\epsilon$ is any positive number. As in the proof of Theorem 1.1, this is the same as showing that $(I + V(D_z^2|_\Gamma - \lambda^2)^{-1})^{-1}$ is meromorphic for $-\theta+\epsilon<\arg\lambda<\pi-\theta$. By the analytic Fredholm theory from the Appendix, this will follow from the compactness of the operator $V(D_z^2|_\Gamma-\lambda^2)^{-1}$ on $L^2(\Gamma)$ for $-\theta+\epsilon<\arg\lambda<\pi-\theta$ and the bound
\[
\|V(D_z^2|_\Gamma - \lambda^2)^{-1}\chi\|_{L^2\to L^2}\le \frac{C}{|\lambda|}\quad\text{for } \operatorname{Im}\lambda>0 \tag{1.100}
\]
for any $\chi\in C_0^\infty(\mathbb R)$ with $\chi V\equiv V$, which will guarantee the existence of $(I + V(D_z^2|_\Gamma-\lambda^2)^{-1})^{-1}$ for large $\lambda$ with $\operatorname{Im}\lambda>0$ by a Neumann series argument.

Now, by (1.9) we have
\[
\big((D_z^2|_\Gamma - \lambda^2)^{-1}u\big)(x) = \frac{i}{2\lambda}\int_\Gamma e^{i\lambda((x-y)^2)^{1/2}}\,u(y)\,dy
\]
where the branch of the square root is chosen to be positive on the positive real axis.

For $|y|\gg 0$ on $\Gamma$, we have
\[
y = e^{i\theta}r\quad\text{for } r>0\,,
\]
thus $((x-y)^2)^{1/2}\sim e^{i\theta}r$ for $|x|<R$ and $r\gg 0$. Hence, for $-\theta<\arg\lambda<\pi-\theta$ and $|x|<R$, we have
\[
\big|e^{i\lambda((x-y)^2)^{1/2}}\big|\sim e^{-r|\lambda|\sin(\theta+\arg\lambda)}\,,
\]
which decays exponentially as $r\to\infty$. From this, we see that $V(D_z^2|_\Gamma - \lambda^2)^{-1}$ defines a compact operator on $L^2(\Gamma)$. Next, to see (1.100), we simply observe that
\[
V(D_z^2|_\Gamma - \lambda^2)^{-1}\chi = V(D_x^2 - \lambda^2)^{-1}\chi\quad\text{if } \operatorname{supp}\chi\subset\{|x|<R\}\,,
\]
and (1.100) then follows from (1.90).
Having now established that the spectrum of $H_\theta$ is discrete for $-\theta<\arg\lambda<\pi-\theta$, we have to show that it coincides with the resonance set of $H_V$ there. To make the argument clear, we will assume the eigenvalue or resonance in question is simple; the argument for higher multiplicities follows the same lines.

Now, suppose $\lambda_0$ is a simple resonance of $H_V$, which means that $\widetilde X(\lambda)$ has a simple zero at $\lambda_0$ and hence (see (1.50))
\[
\phi(x,\lambda_0) = \begin{cases} C\,e^{i\lambda_0x} & x\gg 0\,,\\ e^{-i\lambda_0x} & x\ll 0\,.\end{cases}
\]
For $|x|\ge R$, $\phi(x,\lambda_0)$ clearly continues analytically in $x$ to $\mathbb C$ and satisfies $(D_z^2-\lambda_0^2)\,\phi(z,\lambda_0) = 0$ there. This means that $(H_\theta-\lambda_0^2)(\phi|_{\Gamma_\theta}) = 0$. Note also that
\[
e^{i\lambda_0 z}\big|_{\Gamma_\theta\cap\{|z|\ge R,\ \operatorname{Re}z\ge 0\}}\in L^2\big(\Gamma_\theta\cap\{|z|\ge R,\ \operatorname{Re}z\ge 0\}\big)\,,
\]
thus $\phi|_{\Gamma_\theta}\in L^2(\Gamma_\theta)$ and it is an eigenfunction of $H_\theta$ with eigenvalue $\lambda_0^2$. The eigenvalue is simple: otherwise the solutions of $(H_\theta-\lambda_0^2)^2u = 0$, $u\in L^2(\Gamma_\theta)$, would give a solution of $(H_V-\lambda_0^2)^2v = 0$ with
\[
v = \begin{cases} A\,e^{i\lambda_0x} & x\gg 0\,,\\ B\,e^{-i\lambda_0x} & x\ll 0\,,\end{cases}
\]
contradicting the simplicity of the resonance (compare with the proof of Proposition 1.9).

Similarly, a simple eigenvalue of $H_\theta$ corresponds to a simple resonance of $H_V$.
1.5 Trace Formulae

To motivate the trace formulae, we again consider the Dirichlet realization of $H_V$ on a compact interval $[a,b]$, denoting the corresponding self-adjoint operator by $H_V^D$. The spectrum of $H_V^D$ is discrete, $E_N<E_{N-1}<\dots<E_1<0<\lambda_0^2\le\lambda_1^2\le\dots$. Then for $f\in\mathcal S(\mathbb R)$ we have
\[
\operatorname{tr}f(H_V^D) = \sum_{j=0}^\infty f(\lambda_j^2) + \sum_{k=1}^N f(E_k) \tag{1.101}
\]
and
\[
\operatorname{tr}f(H_V^D) = \int_0^\infty g(\lambda)\,\frac{dN}{d\lambda}(\lambda)\,d\lambda + \sum_{k=1}^N f(E_k) \tag{1.102}
\]
where $N(\lambda) = \#\{\lambda_j^2 : \lambda_j^2\le\lambda^2\}$ is the counting function of the positive eigenvalues and $g$ is the even function defined by $g(\lambda) = f(\lambda^2)$.

In this section we prove the scattering-theoretic analogues of (1.101) and (1.102). Our purpose is to present the simple one-dimensional case as a preparation for the results in higher dimensions. In higher dimensions, trace formulae link the geometry of the scatterer with the spectral and scattering data. More precisely, the trace of $f(H_V)$ can be related to dynamical information. We will see this later.

Although for $H_V^D$, (1.101) and (1.102) are essentially the same once we observe that
\[
\frac{dN}{d\lambda}(\lambda) = \sum_{j=0}^\infty\delta(\lambda-\lambda_j)\,,
\]
their scattering analogues are nevertheless quite different. We start with the analogue of (1.102).
Theorem 1.9 (Birman-Krein Formula) Suppose $f\in\mathcal S(\mathbb R)$ and $g(\lambda) = f(\lambda^2)$. Then $f(H_V)-f(H_0)$ is a trace class operator and
\[
\operatorname{tr}\big(f(H_V)-f(H_0)\big) = \int_0^\infty g(\lambda)\,\frac{d\sigma}{d\lambda}(\lambda)\,d\lambda + \sum_{k=1}^N f(E_k) + \frac{\nu}{2}\,g(0) \tag{1.103}
\]
where $\nu = 1$ if $\widetilde X(0)\neq 0$ and $\nu = 0$ if $\widetilde X(0) = 0$; also,
\[
\sigma(\lambda) = \frac{1}{2\pi i}\log\det S(\lambda)\quad\text{with } \sigma(0) = 0\,. \tag{1.104}
\]
Proof. To see the trace class property, we apply the Helffer-Sjöstrand formula from the Appendix and the resolvent identity to get
\[
f(H_V) - f(H_0) = \frac{1}{\pi}\int_{\mathbb C}\bar\partial_z\tilde f(z)\,(H_V-z)^{-1}V(H_0-z)^{-1}\,dz\,,\qquad f\in\mathcal S(\mathbb R)\,. \tag{1.105}
\]
Recall that $\tilde f$ is an almost analytic extension of $f$ and satisfies $|\bar\partial\tilde f(z)|\le C_N\,|\operatorname{Im}z|^N\langle z\rangle^{-N}$ for all $N\in\mathbb N$.

Since $H_0(H_0-z)^{-1} = I + z(H_0-z)^{-1}$, we have
\[
\|(H_0-z)^{-1}\|_{L^2\to H^2} \sim \|H_0(H_0-z)^{-1}\|_{L^2\to L^2} = \|I + z(H_0-z)^{-1}\|_{L^2\to L^2}\le 1 + \frac{|z|}{|\operatorname{Im}z|}
\]
which implies
\[
V(H_0-z)^{-1} = O\Big(\frac{|z|}{|\operatorname{Im}z|}\Big) : L^2\to H^2_{\mathrm{comp}}([-R,R]) \tag{1.106}
\]
where $\operatorname{supp}V\subset[-R,R]$.

The results in the Appendix imply that $V(H_0-z)^{-1}$ is of trace class and
\[
\|(H_V-z)^{-1}V(H_0-z)^{-1}\|_{\mathrm{tr}}\le \frac{C\langle z\rangle}{|\operatorname{Im}z|^2}\,. \tag{1.107}
\]
Combining (1.105), (1.106) and (1.107), we have
\[
f(H_V)-f(H_0)\in\mathcal L^1\big(L^2(\mathbb R), L^2(\mathbb R)\big)\,.
\]
Now we can prove (1.103). For simplicity, we assume that $H_V$ has no negative eigenvalues (as their contribution is quite clear). Thus, by Theorem 1.2, we can write
\[
f(H_V)(x,x) = \frac{1}{2\pi}\int_0^\infty\big(e_+(x,\lambda)\overline{e_+(x,\lambda)} + e_-(x,\lambda)\overline{e_-(x,\lambda)}\big)\,g(\lambda)\,d\lambda\,, \tag{1.108}
\]
and we also have
\[
f(H_0)(x,x) = \frac{1}{2\pi}\int_0^\infty 2\,g(\lambda)\,d\lambda\,. \tag{1.109}
\]
Using Lidskii's Theorem from the Appendix together with (1.108) and (1.109), we get
\begin{align*}
\operatorname{tr}\big(f(H_V)-f(H_0)\big)
&= \lim_{r\to\infty}\int_{-r}^r\big(f(H_V)(x,x) - f(H_0)(x,x)\big)\,dx\\
&= \frac{1}{2\pi}\lim_{r\to\infty}\int_0^\infty\Big(\int_{-r}^r\big[e_+(x,\lambda)\overline{e_+(x,\lambda)} + e_-(x,\lambda)\overline{e_-(x,\lambda)} - 2\big]dx\Big)\,g(\lambda)\,d\lambda \tag{1.110}\\
&= \frac{1}{4\pi}\lim_{r\to\infty}\int_{\mathbb R}\Big(\int_{-r}^r\big[e_+(x,\lambda)\overline{e_+(x,\lambda)} + e_-(x,\lambda)\overline{e_-(x,\lambda)} - 2\big]dx\Big)\,g(\lambda)\,d\lambda\,.
\end{align*}
To eliminate the integration in $x$, we apply the following "reduction to the boundary" trick; the resulting formula (1.114) is known as the Maass-Selberg Formula. We have
\[
(H_V-\lambda^2)\,e_\pm(x,\lambda) = 0\,. \tag{1.111}
\]
Differentiating with respect to $\lambda$, we get
\[
(H_V-\lambda^2)\,\partial_\lambda e_\pm(x,\lambda) = 2\lambda\,e_\pm(x,\lambda)\,. \tag{1.112}
\]
Hence, for $\lambda\neq 0$,
\begin{align*}
e_\pm(x,\lambda)\overline{e_\pm(x,\lambda)}
&= \frac{1}{2\lambda}\big((H_V-\lambda^2)\partial_\lambda e_\pm(x,\lambda)\big)\,\overline{e_\pm(x,\lambda)}
- \frac{1}{2\lambda}\big(\partial_\lambda e_\pm(x,\lambda)\big)\,\overline{(H_V-\lambda^2)e_\pm(x,\lambda)}\\
&= \frac{1}{2\lambda}\Big(D_x^2\big(\partial_\lambda e_\pm(x,\lambda)\big)\,\overline{e_\pm(x,\lambda)}
- \big(\partial_\lambda e_\pm(x,\lambda)\big)\,\overline{D_x^2 e_\pm(x,\lambda)}\Big) \tag{1.113}
\end{align*}
where we have used (1.111) and (1.112) in the first equality. Thus,
\begin{align*}
\int_{-r}^r e_\pm(x,\lambda)\overline{e_\pm(x,\lambda)}\,dx
&= -\frac{1}{2\lambda}\int_{-r}^r\Big(\partial_x^2\big(\partial_\lambda e_\pm(x,\lambda)\big)\,\overline{e_\pm(x,\lambda)}
- \partial_\lambda e_\pm(x,\lambda)\,\partial_x^2\overline{e_\pm(x,\lambda)}\Big)\,dx\\
&= -\frac{1}{2\lambda}\int_{-r}^r\partial_x\Big(\partial_x\big(\partial_\lambda e_\pm(x,\lambda)\big)\,\overline{e_\pm(x,\lambda)}
- \partial_\lambda e_\pm(x,\lambda)\,\partial_x\overline{e_\pm(x,\lambda)}\Big)\,dx\\
&= \frac{1}{2\lambda}\Big[\big(\partial_\lambda e_\pm(x,\lambda)\big)\big(\partial_x\overline{e_\pm(x,\lambda)}\big)
- \big(\partial_x\partial_\lambda e_\pm(x,\lambda)\big)\,\overline{e_\pm(x,\lambda)}\Big]_{x=-r}^{x=r}\,. \tag{1.114}
\end{align*}
Putting this into (1.110), we get
\[
\operatorname{tr}\big(f(H_V)-f(H_0)\big)
= \frac{1}{8\pi}\lim_{\substack{\epsilon\to0\\ r\to\infty}}\int_{\mathbb R\setminus(-\epsilon,\epsilon)}
\Big(\sum_\pm\Big[\big(\partial_\lambda e_\pm(x,\lambda)\big)\big(\partial_x\overline{e_\pm(x,\lambda)}\big)
- \big(\partial_x\partial_\lambda e_\pm(x,\lambda)\big)\,\overline{e_\pm(x,\lambda)}\Big]_{x=-r}^{x=r} - 8r\lambda\Big)\,\frac{g(\lambda)}{\lambda}\,d\lambda\,. \tag{1.115}
\]
Now, as we know the behavior of $e_\pm(x,\lambda)$ for $|x|\gg 0$ precisely, the remainder of the proof amounts to a direct, but quite tedious, computation.
First, we record the formulae for $e_\pm(x,\lambda)$ and their first derivatives for $|x|\gg 0$. In the following formulae, as a rule, the upper row gives the behavior for $x\gg 0$ and the lower row that for $x\ll 0$:
\begin{align*}
e_+(x,\lambda) &= \begin{cases} T(\lambda)\,e^{i\lambda x}\\ e^{i\lambda x} + R_+(\lambda)\,e^{-i\lambda x}\end{cases}
&
e_-(x,\lambda) &= \begin{cases} e^{-i\lambda x} + R_-(\lambda)\,e^{i\lambda x}\\ T(\lambda)\,e^{-i\lambda x}\end{cases}\\[4pt]
\partial_x e_+(x,\lambda) &= \begin{cases} i\lambda T(\lambda)\,e^{i\lambda x}\\ i\lambda e^{i\lambda x} - i\lambda R_+(\lambda)\,e^{-i\lambda x}\end{cases}
&
\partial_x e_-(x,\lambda) &= \begin{cases} -i\lambda e^{-i\lambda x} + i\lambda R_-(\lambda)\,e^{i\lambda x}\\ -i\lambda T(\lambda)\,e^{-i\lambda x}\end{cases}\\[4pt]
\partial_\lambda e_+(x,\lambda) &= \begin{cases} \big(T'(\lambda) + ixT(\lambda)\big)e^{i\lambda x}\\ ixe^{i\lambda x} + \big(R_+'(\lambda) - ixR_+(\lambda)\big)e^{-i\lambda x}\end{cases}
&
\partial_\lambda e_-(x,\lambda) &= \begin{cases} -ixe^{-i\lambda x} + \big(R_-'(\lambda) + ixR_-(\lambda)\big)e^{i\lambda x}\\ \big(T'(\lambda) - ixT(\lambda)\big)e^{-i\lambda x}\end{cases} \tag{1.116}
\end{align*}
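The computation below leans on the unitarity relations $|T(\lambda)|^2 + |R_\pm(\lambda)|^2 = 1$ on the real axis. These can be checked numerically for a piecewise-constant potential using $2\times2$ transfer matrices (a standard construction; this particular potential is only an illustration of mine):

```python
import numpy as np

# Transmission/reflection coefficients of a piecewise-constant potential via
# 2x2 transfer matrices, plus a check of |T|^2 + |R_+|^2 = |T|^2 + |R_-|^2 = 1.
def scatter(lam):
    xs = [0.0, 1.0, 1.5]                              # interface locations
    ks = [lam,                                        # wave numbers per region
          np.sqrt(complex(lam**2 - 3.0)),             # V = 3 on (0, 1)
          np.sqrt(complex(lam**2 + 2.0)),             # V = -2 on (1, 1.5)
          lam]
    def E(k, x):                                      # (u, u') of e^{+-ikx} at x
        return np.array([[np.exp(1j*k*x),            np.exp(-1j*k*x)],
                         [1j*k*np.exp(1j*k*x), -1j*k*np.exp(-1j*k*x)]])
    M = np.eye(2, dtype=complex)
    for x, kl, kr in zip(xs, ks[:-1], ks[1:]):
        M = np.linalg.solve(E(kr, x), E(kl, x)) @ M   # match (u, u') across x
    Rp = -M[1, 0] / M[1, 1]                           # incidence from the left
    T = M[0, 0] + M[0, 1] * Rp
    Rm = M[0, 1] / M[1, 1]                            # incidence from the right
    return T, Rp, Rm

T, Rp, Rm = scatter(1.7)
assert abs(abs(T)**2 + abs(Rp)**2 - 1) < 1e-10
assert abs(abs(T)**2 + abs(Rm)**2 - 1) < 1e-10
```

Since $\det M = 1$ for such matrices, the transmission coefficient is the same from both sides, matching the single $T(\lambda)$ appearing in (1.116).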
Now, by using (1.116), we compute
\begin{align*}
&\sum_\pm\Big[\big(\partial_\lambda e_\pm(x,\lambda)\big)\big(\partial_x\overline{e_\pm(x,\lambda)}\big)\Big]_{x=-r}^{x=r}\\
&\quad= \big(T'(\lambda)+irT(\lambda)\big)e^{i\lambda r}\,\overline{i\lambda T(\lambda)e^{i\lambda r}}\\
&\qquad- \big({-ir}\,e^{-i\lambda r}+(R_+'(\lambda)+irR_+(\lambda))e^{i\lambda r}\big)\,\overline{\big(i\lambda e^{-i\lambda r}-i\lambda R_+(\lambda)e^{i\lambda r}\big)}\\
&\qquad+ \big({-ir}\,e^{-i\lambda r}+(R_-'(\lambda)+irR_-(\lambda))e^{i\lambda r}\big)\,\overline{\big({-i\lambda}\,e^{-i\lambda r}+i\lambda R_-(\lambda)e^{i\lambda r}\big)}\\
&\qquad- \big(T'(\lambda)+irT(\lambda)\big)e^{i\lambda r}\,\overline{\big({-i\lambda}\,T(\lambda)e^{i\lambda r}\big)}\\
&\quad= -i\lambda\Big(2\,T'(\lambda)\overline{T(\lambda)} + R_+'(\lambda)\overline{R_+(\lambda)} + R_-'(\lambda)\overline{R_-(\lambda)}\Big) + 4r\lambda\\
&\qquad+ \text{terms of the form } h(r,\lambda)\,e^{\pm2i\lambda r}\text{ with smooth, tempered }h\,,
\end{align*}
where we have multiplied out and used the unitarity relations $|T|^2 + |R_\pm|^2 = 1$. Note that
\[
\lim_{\substack{\epsilon\to0\\ r\to\infty}}\int_{\mathbb R\setminus(-\epsilon,\epsilon)} h(r,\lambda)\,e^{\pm2i\lambda r}\,\frac{g(\lambda)}{\lambda}\,d\lambda = 0
\]
as $g\in\mathcal S(\mathbb R)$. Thus,
\begin{align*}
&\frac{1}{8\pi}\lim_{\substack{\epsilon\to0\\ r\to\infty}}\int_{\mathbb R\setminus(-\epsilon,\epsilon)}\Big(\sum_\pm\Big[\big(\partial_\lambda e_\pm(x,\lambda)\big)\big(\partial_x\overline{e_\pm(x,\lambda)}\big)\Big]_{x=-r}^{x=r} - 4r\lambda\Big)\,\frac{g(\lambda)}{\lambda}\,d\lambda\\
&\qquad= \frac{1}{8\pi i}\int_{\mathbb R}\Big(2\,T'(\lambda)\overline{T(\lambda)} + R_+'(\lambda)\overline{R_+(\lambda)} + R_-'(\lambda)\overline{R_-(\lambda)}\Big)\,g(\lambda)\,d\lambda
= \frac14\int_{\mathbb R}\frac{d\sigma}{d\lambda}\,g(\lambda)\,d\lambda\,. \tag{1.117}
\end{align*}
Here, we have used the expression for $S(\lambda)$ given in (1.56) to compute $\frac{d\sigma}{d\lambda}$. Next, we look at
\begin{align*}
&\lim_{\epsilon\to0}\int_{\mathbb R\setminus(-\epsilon,\epsilon)}\Big({-\sum_\pm}\Big[\big(\partial_x\partial_\lambda e_\pm(x,\lambda)\big)\,\overline{e_\pm(x,\lambda)}\Big]_{x=-r}^{x=r} - 4r\lambda\Big)\,\frac{g(\lambda)}{\lambda}\,d\lambda\\
&\quad= \lim_{\epsilon\to0}\int_{\mathbb R\setminus(-\epsilon,\epsilon)}\Big({-\sum_\pm}\,\partial_\lambda\Big[\big(\partial_x e_\pm(x,\lambda)\big)\,\overline{e_\pm(x,\lambda)}\Big]_{x=-r}^{x=r}
+ \sum_\pm\Big[\big(\partial_x e_\pm(x,\lambda)\big)\,\partial_\lambda\overline{e_\pm(x,\lambda)}\Big]_{x=-r}^{x=r} - 4r\lambda\Big)\,\frac{g(\lambda)}{\lambda}\,d\lambda\\
&\quad=: (\mathrm I) + (\mathrm{II})
\end{align*}
where the splitting into two terms corresponds to the two summation signs, the $-4r\lambda$ being grouped with the second. By a change of variables, $\lim_{r\to\infty}\frac{1}{8\pi}(\mathrm{II})$ is exactly the same as the term computed in (1.117). Thus, so far we obtain
\[
\operatorname{tr}\big(f(H_V)-f(H_0)\big) = \frac12\int_{\mathbb R}g(\lambda)\,\frac{d\sigma}{d\lambda}\,d\lambda + \frac{1}{8\pi}\lim_{r\to\infty}(\mathrm I)\,. \tag{1.118}
\]
Finally, we have to compute
\[
\frac{1}{8\pi}\lim_{r\to\infty}(\mathrm I)
= -\frac{1}{8\pi}\lim_{\substack{\epsilon\to0\\ r\to\infty}}\int_{\mathbb R\setminus(-\epsilon,\epsilon)}
\partial_\lambda\Big(\sum_\pm\Big[\big(\partial_x e_\pm(x,\lambda)\big)\,\overline{e_\pm(x,\lambda)}\Big]_{x=-r}^{x=r}\Big)\,\frac{g(\lambda)}{\lambda}\,d\lambda\,.
\]
Using (1.116), we get
\[
\sum_\pm\Big[\big(\partial_x e_\pm(x,\lambda)\big)\,\overline{e_\pm(x,\lambda)}\Big]_{x=-r}^{x=r}
= i\lambda\Big[\big(R_+(\lambda)+R_-(\lambda)\big)e^{2i\lambda r} - \big(\overline{R_+(\lambda)}+\overline{R_-(\lambda)}\big)e^{-2i\lambda r}\Big]\,.
\]
As before, if we integrate a term of the form
\[
i\Big[\big(R_+'(\lambda)+R_-'(\lambda)\big)e^{2i\lambda r} - \big(\overline{R_+'(\lambda)}+\overline{R_-'(\lambda)}\big)e^{-2i\lambda r}\Big]\,g(\lambda)
\]
in $\lambda$ and let $r\to\infty$, we get zero. Hence
\begin{align*}
\frac{1}{8\pi}\lim_{\substack{\epsilon\to0\\ r\to\infty}}(\mathrm I)
&= -\frac{1}{8\pi i}\lim_{\substack{\epsilon\to0\\ r\to\infty}}\int_{\mathbb R\setminus(-\epsilon,\epsilon)}
\Big[\big(R_+(\lambda)+R_-(\lambda)\big)e^{2i\lambda r} - \big(\overline{R_+(\lambda)}+\overline{R_-(\lambda)}\big)e^{-2i\lambda r}\Big]\,\frac{g(\lambda)}{\lambda}\,d\lambda\\
&= -\frac{1}{4\pi i}\lim_{\substack{\epsilon\to0\\ r\to\infty}}\int_{\mathbb R\setminus(-\epsilon,\epsilon)}
\big(R_+(\lambda)+R_-(\lambda)\big)e^{2i\lambda r}\,\frac{g(\lambda)}{\lambda}\,d\lambda\,. \tag{1.119}
\end{align*}
In computing (1.119), we may assume that $g$ is entire, by approximating $f$ by Schwartz functions with compactly supported Fourier transforms, and deform the contour of integration as shown in Figure 11.

[Figure 11: the real axis with a small semicircle around $\lambda = 0$ in the $\lambda$-plane.]

This is admissible as $R_+(\lambda)$ and $R_-(\lambda)$ are holomorphic in the upper half-plane and $e^{2i\lambda r}$ decays exponentially there when $r>0$. Hence
\[
\frac{1}{8\pi}\lim_{\substack{\epsilon\to0\\ r\to\infty}}(\mathrm I)
= -\frac14\operatorname{Res}_{\lambda=0}\Big(\big(R_+(\lambda)+R_-(\lambda)\big)e^{2i\lambda r}\,\frac{g(\lambda)}{\lambda}\Big)
= \begin{cases} 0 & \text{if } \widetilde X(0) = 0\,,\\[4pt]
-\dfrac14\,\dfrac{\widetilde Y(0)}{\widetilde X(0)}\,g(0) & \text{if } \widetilde X(0)\neq 0\,,
\end{cases} \tag{1.120}
\]
as $R_\pm(\lambda) = \widetilde Y(\pm\lambda)/\widetilde X(\lambda)$.

Finally, we obtain (1.103) by putting (1.120) into (1.118) and observing that $\frac{d\sigma}{d\lambda}$ is an even function and that $\widetilde Y(0) = -\widetilde X(0)$ by (1.49) and (1.51).
Remark. $\sigma(\lambda)$, defined in (1.104), is called the scattering phase; it is a natural analogue of the eigenvalue counting function $N(\lambda)$.

We now give the analogue of (1.101) in terms of resonances.

Theorem 1.10 (Poisson Formula) Let $f(\lambda)$ be a function such that if $g(\lambda) = f(\lambda^2)$, we have $\hat g(t)\in t^3C_0^\infty([0,\infty))$. Then
\[
\operatorname{tr}\big(f(H_V)-f(H_0)\big) = \frac12\sum_{\lambda\in\mathbb C}m_R(\lambda)\,g(\lambda) + \Big(\frac12 - m_R(0)\Big)g(0)
\]
where
\[
m_R(0) = \begin{cases} 1 & \text{if } \widetilde X(0) = 0\,,\\ 0 & \text{if } \widetilde X(0)\neq 0\,.\end{cases}
\]
Proof. By using Theorem 1.9, we need to show that
\[
\int g(\lambda)\,\frac{d\sigma}{d\lambda}(\lambda)\,d\lambda
= \sum_{\lambda\in\mathbb C}m_R(\lambda)\,g(\lambda) - 2\sum_{k=1}^N f(E_k) - m_R(0)\,g(0)
\]
where we have used the evenness of $g$ and of $\frac{d\sigma}{d\lambda}$.

Recall that
\[
\det S(\lambda) = \frac{\widetilde X(-\lambda)}{\widetilde X(\lambda)}\,.
\]
Then, by using Theorem 1.7 and the Hadamard factorization, we have
\[
\widetilde X(\lambda) = e^{a_0 + a_1\lambda}\,P(\lambda)\quad\text{where}\quad P(\lambda) = \prod_{j}\Big(1 - \frac{\lambda}{\lambda_j}\Big)e^{\lambda/\lambda_j}\,,
\]
the $\lambda_j$'s being the resonances. Let $G$ be a function such that $G' = g$. Note that such a $G$ exists and is unique, as $\hat g(0) = 0$. Then we have
\[
\int g(\lambda)\,\frac{d\sigma}{d\lambda}\,d\lambda
= -\int G(\lambda)\,\frac{d^2\sigma}{d\lambda^2}\,d\lambda
= -\frac{1}{2\pi i}\int G(\lambda)\sum_j\Big(\frac{1}{(\lambda-\lambda_j)^2} - \frac{1}{(\lambda+\lambda_j)^2}\Big)\,d\lambda\,.
\]
Since $\hat G\in t^2C_0^\infty([0,\infty))$, $|G(\lambda)|\le \frac{C}{|\lambda|^2}$ for $\operatorname{Im}\lambda\le 0$. Again, by using the estimate on the number of resonances from Theorem 1.7, we can deform the contour of integration into the lower half-plane. We get
\[
\int g(\lambda)\,\frac{d\sigma}{d\lambda}\,d\lambda
= \sum_{\lambda\in\mathbb C_-\setminus\{0\}}m_R(\lambda)\,g(\lambda) - \sum_{\lambda\in\mathbb C_+}m_R(\lambda)\,g(\lambda)
= \sum_{\lambda\in\mathbb C}m_R(\lambda)\,g(\lambda) - 2\sum_{k=1}^N f(E_k) - m_R(0)\,g(0)\,,
\]
which completes the proof.