Quantum Mechanics: A Mathematical Introduction
ANDREW LARKOSKI
SLAC NATIONAL ACCELERATOR LABORATORY
Copyright 2023. This publication is in copyright. Subject to statutory exception and
to the provisions of relevant collective licensing agreements, no reproduction of any
part may take place without the written permission of Cambridge University Press.
Contents

2 Linear Algebra
3 Hilbert Space
5 Quantum Mechanical Example: The Infinite Square Well
10 Approximation Techniques
Exercises
2.1 (a) The 2 × 2 and 3 × 3 discrete derivative matrices can actually be read off from the general form provided there:
$$D_{2\times 2} = \begin{pmatrix} 0 & \frac{1}{2\Delta x} \\ -\frac{1}{2\Delta x} & 0 \end{pmatrix} , \tag{2.1}$$
$$D_{3\times 3} = \begin{pmatrix} 0 & \frac{1}{2\Delta x} & 0 \\ -\frac{1}{2\Delta x} & 0 & \frac{1}{2\Delta x} \\ 0 & -\frac{1}{2\Delta x} & 0 \end{pmatrix} . \tag{2.2}$$
(c) For the 2 × 2 derivative matrix, the eigenvector equation can be expressed as
$$\begin{pmatrix} 0 & \frac{1}{2\Delta x} \\ -\frac{1}{2\Delta x} & 0 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \pm\frac{i}{2\Delta x}\begin{pmatrix} a \\ b \end{pmatrix} , \tag{2.8}$$
for some numbers a, b. This linear equation requires that
$$b = \pm i a , \tag{2.9}$$
and so the eigenvectors can be expressed as
$$\vec v_1 = a\begin{pmatrix} 1 \\ i \end{pmatrix} , \qquad \vec v_2 = a\begin{pmatrix} 1 \\ -i \end{pmatrix} . \tag{2.10}$$
To ensure that they are unit normalized, we require that
$$\vec v_1^{\,*}\cdot\vec v_1 = a^2\begin{pmatrix} 1 & -i \end{pmatrix}\begin{pmatrix} 1 \\ i \end{pmatrix} = 2a^2 , \tag{2.11}$$
or that $a = 1/\sqrt{2}$ (ignoring a possible overall complex phase). Thus, the normalized eigenvectors are
$$\vec v_1 = \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ i \end{pmatrix} , \qquad \vec v_2 = \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ -i \end{pmatrix} . \tag{2.12}$$
Note that these are mutually orthogonal:
$$\vec v_1^{\,*}\cdot\vec v_2 = \frac12\begin{pmatrix} 1 & -i \end{pmatrix}\begin{pmatrix} 1 \\ -i \end{pmatrix} = \frac{1-1}{2} = 0 . \tag{2.13}$$
For the 3 × 3 derivative matrix, we will first determine the eigenvector corresponding to 0 eigenvalue, where
$$\begin{pmatrix} 0 & \frac{1}{2\Delta x} & 0 \\ -\frac{1}{2\Delta x} & 0 & \frac{1}{2\Delta x} \\ 0 & -\frac{1}{2\Delta x} & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} . \tag{2.14}$$
Performing the matrix multiplication, we find that
$$\frac{1}{2\Delta x}\begin{pmatrix} b \\ c - a \\ -b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} . \tag{2.15}$$
This then enforces that b = 0 and c = a. That is, the normalized eigenvector with 0 eigenvalue is (again, up to an overall complex phase)
$$\vec v_1 = \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} . \tag{2.16}$$
Next, the eigenvectors for the non-zero eigenvalues satisfy
$$\begin{pmatrix} 0 & \frac{1}{2\Delta x} & 0 \\ -\frac{1}{2\Delta x} & 0 & \frac{1}{2\Delta x} \\ 0 & -\frac{1}{2\Delta x} & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \pm\frac{i}{\sqrt2\,\Delta x}\begin{pmatrix} a \\ b \\ c \end{pmatrix} , \tag{2.17}$$
or that
$$b = \pm\sqrt2\, i\, a , \qquad c = \pm\frac{i}{\sqrt2}\, b = -a . \tag{2.19}$$
Then, the other two eigenvectors are
$$\vec v_2 = a\begin{pmatrix} 1 \\ \sqrt2\, i \\ -1 \end{pmatrix} , \qquad \vec v_3 = a\begin{pmatrix} 1 \\ -\sqrt2\, i \\ -1 \end{pmatrix} . \tag{2.20}$$
All three eigenvectors, $\vec v_1, \vec v_2, \vec v_3$, are mutually orthogonal. For example,
$$\vec v_2^{\,*}\cdot\vec v_3 = \frac14\begin{pmatrix} 1 & -\sqrt2\, i & -1 \end{pmatrix}\begin{pmatrix} 1 \\ -\sqrt2\, i \\ -1 \end{pmatrix} = \frac{1 - 2 + 1}{4} = 0 . \tag{2.23}$$
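These eigenvalues and eigenvectors are easy to cross-check numerically. A minimal NumPy sketch (not part of the original solution; the grid spacing `dx` is an arbitrary illustrative value):

```python
import numpy as np

dx = 0.1  # arbitrary grid spacing; the eigenvalues scale like 1/dx

# central-difference derivative matrices of Eqs. (2.1) and (2.2)
D2 = np.array([[0.0, 1.0], [-1.0, 0.0]]) / (2 * dx)
D3 = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 1.0], [0.0, -1.0, 0.0]]) / (2 * dx)

# purely imaginary eigenvalues: +/- i/(2 dx), and 0, +/- i/(sqrt(2) dx)
ev2 = np.sort(np.linalg.eigvals(D2).imag)
ev3 = np.sort(np.linalg.eigvals(D3).imag)
assert np.allclose(ev2, [-1 / (2 * dx), 1 / (2 * dx)])
assert np.allclose(ev3, [-1 / (np.sqrt(2) * dx), 0, 1 / (np.sqrt(2) * dx)])

# the normalized eigenvectors of Eq. (2.12) are orthonormal
v1 = np.array([1, 1j]) / np.sqrt(2)
v2 = np.array([1, -1j]) / np.sqrt(2)
assert np.isclose(np.vdot(v1, v1).real, 1) and np.isclose(abs(np.vdot(v1, v2)), 0)
```

Note that `np.vdot` conjugates its first argument, matching the complex dot product used in Eqs. (2.11) and (2.13).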
and so
$$M\vec v = \sum_{n=0}^\infty \frac{\Delta x^n}{n!}\, D^n\vec v = \sum_{n=0}^\infty \frac{(\lambda\,\Delta x)^n}{n!}\,\vec v = e^{\lambda\,\Delta x}\,\vec v . \tag{2.26}$$
(e) Now, we are asked to determine the matrix form of the exponentiated 2 × 2 and 3 × 3 derivative matrices. Let's start with the 2 × 2 matrix and note that we can write
$$M_{2\times2} = e^{\Delta x\, D_{2\times2}} = \sum_{n=0}^\infty \frac{\Delta x^n}{n!}\begin{pmatrix} 0 & \frac{1}{2\Delta x} \\ -\frac{1}{2\Delta x} & 0 \end{pmatrix}^n = \sum_{n=0}^\infty \frac{1}{2^n\, n!}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}^n . \tag{2.27}$$
So, the problem is reduced to establishing properties of the matrix $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. Note that the first few powers of the matrix are:
$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}^0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I , \tag{2.28}$$
$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}^1 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} , \tag{2.29}$$
$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}^2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = -\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = -I . \tag{2.30}$$
Now, note that even powers of the matrix take the form
$$\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}^0 = I , \tag{2.34}$$
$$\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}^2 = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix} - 2I , \tag{2.35}$$
$$\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}^4 = -2\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix} + 4I , \tag{2.36}$$
and so the general result takes the form
$$\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}^{2n} = (-2)^{n-1}\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix} + (-2)^n\, I , \tag{2.37}$$
for n = 1, 2, . . . . Odd powers of the matrix take the form
$$\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}^1 = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} , \tag{2.38}$$
$$\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}^3 = -2\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} , \tag{2.39}$$
$$\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}^5 = 4\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} . \tag{2.40}$$
The general form is then
$$\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}^{2n+1} = (-2)^n\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} , \tag{2.41}$$
for n = 0, 1, 2, . . . . Putting these results together, the 3 × 3 exponentiated matrix is
$$M_{3\times3} = I + \sum_{n=1}^\infty \frac{(-2)^n}{2^{2n}(2n)!}\left[-\frac12\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix} + I\right] + \sum_{n=0}^\infty \frac{(-2)^n}{2^{2n+1}(2n+1)!}\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} \tag{2.42}$$
$$= \frac12\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix} + \frac12\begin{pmatrix} 1 & 0 & -1 \\ 0 & 2 & 0 \\ -1 & 0 & 1 \end{pmatrix}\sum_{n=0}^\infty \frac{(-1)^n}{\sqrt2^{\,2n}(2n)!} + \frac{1}{\sqrt2}\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}\sum_{n=0}^\infty \frac{(-1)^n}{\sqrt2^{\,2n+1}(2n+1)!}$$
$$= \frac12\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix} + \frac{\cos\frac{1}{\sqrt2}}{2}\begin{pmatrix} 1 & 0 & -1 \\ 0 & 2 & 0 \\ -1 & 0 & 1 \end{pmatrix} + \frac{\sin\frac{1}{\sqrt2}}{\sqrt2}\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} .$$
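Both closed forms can be checked against a direct matrix exponential. A sketch with NumPy, assuming SciPy's `scipy.linalg.expm` is available; for the 2 × 2 case, since J² = −I the series sums to cos(1/2) I + sin(1/2) J:

```python
import numpy as np
from scipy.linalg import expm  # assumes SciPy is available

dx = 0.3  # the products dx*D carry no dx dependence, so any value works
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
K = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 1.0], [0.0, -1.0, 0.0]])
D2, D3 = J / (2 * dx), K / (2 * dx)

# 2x2: J^2 = -I, so J acts like the imaginary unit
M2 = np.cos(0.5) * np.eye(2) + np.sin(0.5) * J
assert np.allclose(expm(dx * D2), M2)

# 3x3: the closed form of Eq. (2.42)
A = np.array([[1.0, 0, 1], [0, 0, 0], [1, 0, 1]])
B = np.array([[1.0, 0, -1], [0, 2, 0], [-1, 0, 1]])
s = 1 / np.sqrt(2)
M3 = A / 2 + np.cos(s) * B / 2 + np.sin(s) * K / np.sqrt(2)
assert np.allclose(expm(dx * D3), M3)
```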
2.2 If we instead defined the derivative matrix through the standard asymmetric difference, the derivative matrix would take the form
$$D = \begin{pmatrix} \ddots & & & & \\ \cdots & -\frac{1}{\Delta x} & \frac{1}{\Delta x} & 0 & \cdots \\ \cdots & 0 & -\frac{1}{\Delta x} & \frac{1}{\Delta x} & \cdots \\ \cdots & 0 & 0 & -\frac{1}{\Delta x} & \cdots \\ & & & & \ddots \end{pmatrix} . \tag{2.43}$$
Such a matrix has no non-zero entries below the diagonal, and so its characteristic equation is rather trivial, for any number of grid points. Note that
$$\det(D_{n\times n} - \lambda I) = \det\begin{pmatrix} \ddots & & & & \\ \cdots & -\frac{1}{\Delta x} - \lambda & \frac{1}{\Delta x} & 0 & \cdots \\ \cdots & 0 & -\frac{1}{\Delta x} - \lambda & \frac{1}{\Delta x} & \cdots \\ \cdots & 0 & 0 & -\frac{1}{\Delta x} - \lambda & \cdots \\ & & & & \ddots \end{pmatrix} = \left(-\frac{1}{\Delta x} - \lambda\right)^n = 0 . \tag{2.44}$$
Thus, there is but a single eigenvalue, λ = −1/∆x.
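Because the matrix is exactly upper triangular, a numerical eigenvalue routine returns its diagonal entries directly. A short NumPy illustration (not from the original text):

```python
import numpy as np

dx, n = 0.25, 6
# forward-difference matrix of Eq. (2.43): -1/dx on the diagonal, +1/dx just above it
D = (-np.eye(n) + np.eye(n, k=1)) / dx

# upper triangular, so every eigenvalue is the repeated diagonal entry -1/dx
ev = np.linalg.eigvals(D)
assert np.allclose(ev, -1 / dx)
```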
2.3 (a) We are asked to express the quadratic polynomial p(x) as a linear combination of Legendre polynomials. Matching coefficients of x, we find
$$d_1 = \sqrt{\frac23}\, b . \tag{2.49}$$
Finally, matching coefficients of x⁰, we have
$$d_0\,\frac{1}{\sqrt2} - d_2\,\sqrt{\frac58} = d_0\,\frac{1}{\sqrt2} - \frac{a}{3} = c , \tag{2.50}$$
or that
$$d_0 = \sqrt2\, c + \frac{\sqrt2}{3}\, a . \tag{2.51}$$
Then, we can express the polynomial as the linear combination
$$p(x) = \left(\sqrt2\, c + \frac{\sqrt2}{3}\, a\right) P_0(x) + \sqrt{\frac23}\, b\, P_1(x) + \frac13\sqrt{\frac85}\, a\, P_2(x) . \tag{2.52}$$
(b) Let's now act on this vector with the derivative matrix we constructed:
$$\frac{d}{dx}\,p(x) \;\to\; \begin{pmatrix} 0 & \sqrt3 & 0 \\ 0 & 0 & \sqrt{15} \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} \sqrt2\, c + \frac{\sqrt2}{3}\, a \\ \sqrt{\frac23}\, b \\ \frac13\sqrt{\frac85}\, a \end{pmatrix} = \begin{pmatrix} \sqrt2\, b \\ \sqrt{\frac83}\, a \\ 0 \end{pmatrix} . \tag{2.54}$$
The action of this matrix on the polynomial as expressed as a vector in Legendre polynomial space is
$$M\begin{pmatrix} \sqrt2\, c + \frac{\sqrt2}{3}\, a \\ \sqrt{\frac23}\, b \\ \frac13\sqrt{\frac85}\, a \end{pmatrix} = \begin{pmatrix} \sqrt2\, c + \frac{\sqrt2}{3}\, a \\ \sqrt{\frac23}\, b \\ \frac13\sqrt{\frac85}\, a \end{pmatrix} + \Delta x\begin{pmatrix} \sqrt2\, b \\ \sqrt{\frac83}\, a \\ 0 \end{pmatrix} + \frac{\Delta x^2}{2}\begin{pmatrix} \sqrt8\, a \\ 0 \\ 0 \end{pmatrix} ,$$
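The expansion coefficients and the derivative matrix can be verified numerically by evaluating both sides on a set of sample points. A NumPy sketch (not part of the original text; `a`, `b`, `c` are arbitrary):

```python
import numpy as np

a, b, c = 1.7, -0.4, 2.2  # arbitrary quadratic p(x) = a x^2 + b x + c

# orthonormal Legendre polynomials on [-1, 1], as in Eqs. (3.38)-(3.40)
P0 = lambda x: 1 / np.sqrt(2) + 0 * x
P1 = lambda x: np.sqrt(3 / 2) * x
P2 = lambda x: np.sqrt(5 / 8) * (3 * x**2 - 1)

# expansion coefficients from Eq. (2.52)
d = np.array([np.sqrt(2) * c + np.sqrt(2) / 3 * a,
              np.sqrt(2 / 3) * b,
              np.sqrt(8 / 5) * a / 3])

x = np.linspace(-1, 1, 11)
p = d[0] * P0(x) + d[1] * P1(x) + d[2] * P2(x)
assert np.allclose(p, a * x**2 + b * x + c)

# derivative matrix in this basis, with entries sqrt(3) and sqrt(15) as in Eq. (2.54)
D = np.array([[0, np.sqrt(3), 0], [0, 0, np.sqrt(15)], [0, 0, 0]])
dp = D @ d  # should be the coefficient vector of p'(x) = 2 a x + b
assert np.allclose(dp[0] * P0(x) + dp[1] * P1(x) + dp[2] * P2(x), 2 * a * x + b)
```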
where we have
$$\frac{d}{dx}F(x) = f(x) , \qquad \frac{d}{dx}G(x) = g(x) , \tag{2.63}$$
and c is an arbitrary constant. Linearity requires that this equal the sum of their anti-derivatives:
$$\int dx\, f(x) + \int dx\, g(x) = F(x) + G(x) + 2c . \tag{2.64}$$
Note that each integral picks up a constant c. So, the only way that anti-
differentiation can be linear is if the integration constant c = 0. Note also
that integration, i.e., the area under a curve, is independent of the integration
constant.
Next, we must also demand that multiplication by a constant is simple for linearity. That is, for some constant c, we must have that
$$\int dx\, c\, f(x) = c\, F(x) , \tag{2.65}$$
where we restrict to the domain x ∈ [0, ∞), as mentioned above. Now, let's change variables to y = log x, so that y ∈ (−∞, ∞), and the integral becomes
$$\int_0^\infty dx\, e^{-i(\lambda_1-\lambda_2)\log x} = \int_{-\infty}^\infty dy\, e^{y}\, e^{-i(\lambda_1-\lambda_2)y} . \tag{2.72}$$
Note the extra factor ey in the integrand; this is the Jacobian of the change
of variables x = ey , and so dx = ey dy. If this Jacobian were not there, then
the integral would be exactly like we are familiar with from Fourier trans-
forms. However, with it there, this integral is not defined. We will fix it in
later chapters.
(c) A function g(x) with a Taylor expansion about 0 can be expressed as
$$g(x) = \sum_{n=0}^\infty a_n x^n . \tag{2.73}$$
We can write this in a very nice way in terms of the trace and the determinant
of M:
(c) With the determinant equal to 1 and the trace equal to 0, the eigenvalues are
$$\lambda = \frac{0 \pm \sqrt{0 - 4}}{2} = \pm i , \tag{2.81}$$
which are clearly not real-valued.
2.7 (a) The other elements of the matrix M we need in the ⃗u-vector basis are
$$\vec u_1^{\,\intercal} M \vec u_1 = \begin{pmatrix} \cos\theta & \sin\theta \end{pmatrix}\begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}\begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} = M_{11}\cos^2\theta + M_{21}\cos\theta\sin\theta + M_{12}\cos\theta\sin\theta + M_{22}\sin^2\theta , \tag{2.82}$$
$$\vec u_2^{\,\intercal} M \vec u_1 = \begin{pmatrix} -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}\begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} = -M_{11}\cos\theta\sin\theta + M_{21}\cos^2\theta - M_{12}\sin^2\theta + M_{22}\cos\theta\sin\theta , \tag{2.83}$$
$$\vec u_2^{\,\intercal} M \vec u_2 = \begin{pmatrix} -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}\begin{pmatrix} -\sin\theta \\ \cos\theta \end{pmatrix} = M_{11}\sin^2\theta - M_{21}\cos\theta\sin\theta - M_{12}\cos\theta\sin\theta + M_{22}\cos^2\theta . \tag{2.84}$$
(b) In exercise 2.6, we had shown that the characteristic equation for a 2 × 2
matrix can be written as
$$\det(M - \lambda I) = \lambda^2 - (\operatorname{tr} M)\,\lambda + \det M = 0 , \tag{2.85}$$
where tr M is the sum of the diagonal elements of M. In the ⃗v-vector basis,
this characteristic equation is
$$\lambda^2 - (\operatorname{tr} M)\,\lambda + \det M = 0 = \lambda^2 - (M_{11} + M_{22})\,\lambda + (M_{11}M_{22} - M_{12}M_{21}) . \tag{2.86}$$
Now, in the⃗u-vector basis, we can calculate the trace and determinant. First,
the trace:
$$\operatorname{tr} M = \vec u_1^{\,\intercal} M \vec u_1 + \vec u_2^{\,\intercal} M \vec u_2 = M_{11}\cos^2\theta + M_{21}\cos\theta\sin\theta + M_{12}\cos\theta\sin\theta + M_{22}\sin^2\theta \tag{2.87}$$
$$\qquad + M_{11}\sin^2\theta - M_{21}\cos\theta\sin\theta - M_{12}\cos\theta\sin\theta + M_{22}\cos^2\theta = M_{11} + M_{22} ,$$
exactly the same as in the ⃗v basis. The determinant is
$$\det M = \left(\vec u_1^{\,\intercal} M \vec u_1\right)\left(\vec u_2^{\,\intercal} M \vec u_2\right) - \left(\vec u_1^{\,\intercal} M \vec u_2\right)\left(\vec u_2^{\,\intercal} M \vec u_1\right) . \tag{2.88}$$
Plugging in the explicit values for the matrix elements, one finds the identical
value of the determinant in either the ⃗v-vector or ⃗u-vector basis. Therefore,
the characteristic equation is independent of basis.
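This basis independence is quick to confirm numerically for a random matrix and rotation angle. A NumPy sketch (not from the original solution):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2))   # arbitrary 2x2 matrix
th = 0.7                       # arbitrary rotation angle
u1 = np.array([np.cos(th), np.sin(th)])
u2 = np.array([-np.sin(th), np.cos(th)])

# matrix elements in the rotated basis, as in Eqs. (2.82)-(2.84)
Mu = np.array([[u1 @ M @ u1, u1 @ M @ u2],
               [u2 @ M @ u1, u2 @ M @ u2]])

# trace and determinant (hence the characteristic equation) are unchanged
assert np.isclose(np.trace(Mu), np.trace(M))
assert np.isclose(np.linalg.det(Mu), np.linalg.det(M))
```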
2.8 (a) Verifying orthonormality is straightforward. For m ̸= n, we have
$$\int_0^{2\pi} d\theta\, f_m^*(\theta)\, f_n(\theta) = \frac{1}{2\pi}\int_0^{2\pi} d\theta\, e^{i(n-m)\theta} = -\frac{i}{2\pi(n-m)}\left(e^{2\pi i(n-m)} - 1\right) = 0 , \tag{2.89}$$
$$\hat D f_\lambda(\theta) = -i\frac{d}{d\theta}\, f_\lambda(\theta) = \lambda\, f_\lambda(\theta) . \tag{2.91}$$
The solutions are
$$f_\lambda(\theta) = c\, e^{i\lambda\theta} , \tag{2.92}$$
and periodicity on the circle requires
$$e^{2\pi i\lambda} = 1 . \tag{2.94}$$
because the integral of cosine over its entire domain is 0, and we used the double angle formula. Essentially the same calculation follows for sine. For orthogonality, we will again just show one explicit calculation, the product of cosine and sine:
$$\frac{1}{\pi}\int_0^{2\pi} d\theta\, \cos(n\theta)\sin(m\theta) = \frac{1}{2\pi}\int_0^{2\pi} d\theta\, \left[\sin((n+m)\theta) - \sin((n-m)\theta)\right] = 0 , \tag{2.96}$$
using the angle addition formulas and noting again that the sum and difference of two integers is still an integer, and the integral of sine over its domain is 0. Finally, the normalized basis element for n = 0 is just the constant, $1/\sqrt{2\pi}$, as identified in the exponential form.
(d) We are asked to determine the matrix elements of D̂ in the sinusoidal basis. To do this, we sandwich the operator between the basis elements and integrate over the domain. First, note that cos–cos or sin–sin matrix elements vanish. For example,
$$\frac{1}{\pi}\int_0^{2\pi} d\theta\, \cos(m\theta)\,\hat D\cos(n\theta) = \frac{in}{\pi}\int_0^{2\pi} d\theta\, \cos(m\theta)\sin(n\theta) = 0 . \tag{2.97}$$
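These orthonormality integrals can be spot-checked by numerical quadrature. A sketch assuming SciPy's `scipy.integrate.quad` (the particular mode numbers are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    # complex inner product on [0, 2*pi]
    re = quad(lambda t: (np.conj(f(t)) * g(t)).real, 0, 2 * np.pi)[0]
    im = quad(lambda t: (np.conj(f(t)) * g(t)).imag, 0, 2 * np.pi)[0]
    return re + 1j * im

# normalized exponential basis elements f_n of Eq. (2.89)
f = lambda n: (lambda t: np.exp(1j * n * t) / np.sqrt(2 * np.pi))

assert abs(inner(f(2), f(2)) - 1) < 1e-8   # unit normalized
assert abs(inner(f(2), f(5))) < 1e-8       # orthogonal for m != n

# cos-sin orthogonality of Eq. (2.96)
cs = quad(lambda t: np.cos(3 * t) * np.sin(4 * t) / np.pi, 0, 2 * np.pi)[0]
assert abs(cs) < 1e-8
```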
Exercises
3.1 (a) If the matrix M is normal, then MM† = M† M. The Hermitian conjugate of
M is
$$M^\dagger = \begin{pmatrix} a^* & c^* \\ b^* & d^* \end{pmatrix} . \tag{3.1}$$
Then, for the matrix M to be normal, we must require that every element of
MM† and M† M are equal. Therefore, we must enforce that
$$a^* c + b^* d = a c^* + b d^* . \tag{3.4}$$
To show that this matrix is not normal, we will just focus on the 11 entry of the products with its Hermitian conjugate. Note that
$$MM^\dagger = \begin{pmatrix} M_{11} & M_{12} & M_{13} & \cdots \\ 0 & M_{22} & M_{23} & \cdots \\ 0 & 0 & M_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}\begin{pmatrix} M_{11}^* & 0 & 0 & \cdots \\ M_{12}^* & M_{22}^* & 0 & \cdots \\ M_{13}^* & M_{23}^* & M_{33}^* & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} = \begin{pmatrix} |M_{11}|^2 + |M_{12}|^2 + |M_{13}|^2 + \cdots & \cdots \\ \vdots & \ddots \end{pmatrix} . \tag{3.7}$$
By contrast, the product of the matrices in the other order is
$$M^\dagger M = \begin{pmatrix} M_{11}^* & 0 & 0 & \cdots \\ M_{12}^* & M_{22}^* & 0 & \cdots \\ M_{13}^* & M_{23}^* & M_{33}^* & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}\begin{pmatrix} M_{11} & M_{12} & M_{13} & \cdots \\ 0 & M_{22} & M_{23} & \cdots \\ 0 & 0 & M_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} = \begin{pmatrix} |M_{11}|^2 & \cdots \\ \vdots & \ddots \end{pmatrix} . \tag{3.8}$$
Therefore, for a general upper triangular matrix it is not true that
$$|M_{11}|^2 = |M_{11}|^2 + |M_{12}|^2 + |M_{13}|^2 + \cdots , \tag{3.9}$$
and so upper triangular matrices are generically not normal.
3.2 (a) We want to calculate the exponentiated matrix
$$A = e^{i\phi\sigma_3} = \sum_{n=0}^\infty \frac{(i\phi)^n}{n!}\,\sigma_3^n . \tag{3.10}$$
Note that this sum can be split into even and odd components, noting that
$$\sigma_3^2 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I . \tag{3.11}$$
Thus, we have
$$\sum_{n=0}^\infty \frac{(i\phi)^n}{n!}\,\sigma_3^n = I\sum_{n=0}^\infty \frac{(i\phi)^{2n}}{(2n)!} + \sigma_3\sum_{n=0}^\infty \frac{(i\phi)^{2n+1}}{(2n+1)!} = I\cos\phi + i\sigma_3\sin\phi \tag{3.12}$$
$$= \begin{pmatrix} \cos\phi + i\sin\phi & 0 \\ 0 & \cos\phi - i\sin\phi \end{pmatrix} = \begin{pmatrix} e^{i\phi} & 0 \\ 0 & e^{-i\phi} \end{pmatrix} .$$
(b) Now, we are asked to evaluate the exponentiated matrix
$$B = e^{i\phi(\sigma_1 + \sigma_3)} = \sum_{n=0}^\infty \frac{(i\phi)^n}{n!}\,(\sigma_1 + \sigma_3)^n . \tag{3.13}$$
Note that
$$(\sigma_1 + \sigma_3)^2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = 2I . \tag{3.14}$$
Therefore, this sum again splits into even and odd parts, where
$$\sum_{n=0}^\infty \frac{(i\phi)^n}{n!}\,(\sigma_1+\sigma_3)^n = I\sum_{n=0}^\infty \frac{(i\phi)^{2n}\,2^n}{(2n)!} + (\sigma_1+\sigma_3)\sum_{n=0}^\infty \frac{(i\phi)^{2n+1}\,2^n}{(2n+1)!} \tag{3.15}$$
$$= I\sum_{n=0}^\infty \frac{(i\sqrt2\,\phi)^{2n}}{(2n)!} + \frac{\sigma_1+\sigma_3}{\sqrt2}\sum_{n=0}^\infty \frac{(i\sqrt2\,\phi)^{2n+1}}{(2n+1)!} = I\cos\sqrt2\,\phi + i\,\frac{\sigma_1+\sigma_3}{\sqrt2}\,\sin\sqrt2\,\phi .$$
This can be written in usual matrix form, but we won’t do that here.
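Both Pauli-matrix exponentials can be compared directly against a numerical matrix exponential. A sketch assuming SciPy's `scipy.linalg.expm`; the angle φ is arbitrary:

```python
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
phi = 0.8   # arbitrary angle
I = np.eye(2)

# Eq. (3.12): exp(i phi s3) = diag(e^{i phi}, e^{-i phi})
assert np.allclose(expm(1j * phi * s3),
                   np.diag([np.exp(1j * phi), np.exp(-1j * phi)]))

# Eq. (3.15): exp(i phi (s1+s3)) = cos(sqrt2 phi) I + i sin(sqrt2 phi)(s1+s3)/sqrt2
r2 = np.sqrt(2)
closed = np.cos(r2 * phi) * I + 1j * np.sin(r2 * phi) * (s1 + s3) / r2
assert np.allclose(expm(1j * phi * (s1 + s3)), closed)
```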
3.3 (a) To verify that the rotation matrix is unitary, we just multiply its Hermitian
conjugate with itself:
$$M M^\dagger = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} \cos^2\theta + \sin^2\theta & -\cos\theta\sin\theta + \cos\theta\sin\theta \\ -\cos\theta\sin\theta + \cos\theta\sin\theta & \cos^2\theta + \sin^2\theta \end{pmatrix} = I , \tag{3.16}$$
and so M is indeed unitary.
(b) We are now asked to determine which of the Pauli matrices are exponenti-
ated to generate this rotation matrix. First, note that each element of this
rotation matrix is real, but the form of the exponent is iθ σ j , which is in
general complex. To ensure that exclusively real-valued matrix elements are
produced, the Pauli matrix must have only imaginary elements and there is
only one Pauli matrix for which that is true:
$$\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} . \tag{3.17}$$
Then, its exponentiation is
$$e^{i\theta\sigma_2} = \sum_{n=0}^\infty \frac{(i\theta)^n}{n!}\,\sigma_2^n . \tag{3.18}$$
Let's focus on the first entry. For this to equal the first entry of ⃗v, we must have the exponential phase factor reduce to $e^{i\theta_1}$. So, we can set
The exact same idea follows for the second entries, and so
Inserting these expressions into the matrix product above, we then find the
reduced linear equation that
$$\begin{pmatrix} \cos v \\ \sin v \end{pmatrix} = \begin{pmatrix} |a|\cos u + |b|\sin u \\ |c|\cos u + |d|\sin u \end{pmatrix} . \tag{3.26}$$
Then,
by the normalization of the kets. Note however that this operator is not unitary,
for
However, the fix to make the operator unitary is pretty simple: we just keep adding
outer products of vectors that are mutually orthogonal and orthogonal to |u⟩ until
they form a complete basis of the Hilbert space. Except for the fact that this basis
includes |u⟩, it is otherwise very weakly constrained, so we won’t write down an
explicit form.
3.6 (a) The sum of the outer products of these vectors is
$$|v_1\rangle\langle v_1| + |v_2\rangle\langle v_2| = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \end{pmatrix} + \begin{pmatrix} e^{i\phi_1}\sin\theta \\ e^{i\phi_2}\cos\theta \end{pmatrix}\begin{pmatrix} e^{-i\phi_1}\sin\theta & e^{-i\phi_2}\cos\theta \end{pmatrix} \tag{3.37}$$
$$= \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} \sin^2\theta & e^{i(\phi_1-\phi_2)}\sin\theta\cos\theta \\ e^{-i(\phi_1-\phi_2)}\sin\theta\cos\theta & \cos^2\theta \end{pmatrix} .$$
This clearly does not equal the identity matrix and therefore does not satisfy
the completeness relation for general θ . However, it does equal the identity
matrix if θ = 0, for which the off-diagonal entries vanish, and so does not
depend on the phases ϕ1 , ϕ2 .
(b) Recall that the first three Legendre polynomials are
$$P_0(x) = \frac{1}{\sqrt2} , \tag{3.38}$$
$$P_1(x) = \sqrt{\frac32}\, x , \tag{3.39}$$
$$P_2(x) = \sqrt{\frac58}\left(3x^2 - 1\right) . \tag{3.40}$$
And so, indeed, the outer product of Legendre polynomials P0 (x)P0 (y) +
P1 (x)P1 (y) + P2 (x)P2 (y) acts as an identity matrix.
3.7 (a) We are asked to calculate the expectation value of the Hamiltonian Ĥ on the time-dependent state. First, the time-dependent state is
$$|\psi(t)\rangle = \alpha_1\, e^{-i\frac{E_1t}{\hbar}}|1\rangle + \alpha_2\, e^{-i\frac{E_2t}{\hbar}}|2\rangle . \tag{3.46}$$
Then, the action of the Hamiltonian on this linear combination of energy eigenstates is
$$\hat H|\psi(t)\rangle = \alpha_1 E_1\, e^{-i\frac{E_1t}{\hbar}}|1\rangle + \alpha_2 E_2\, e^{-i\frac{E_2t}{\hbar}}|2\rangle , \tag{3.47}$$
because Ĥ|1⟩ = E₁|1⟩, for example. Then, the expectation value is, assuming orthonormality of the energy eigenstates,
$$\langle\psi(t)|\hat H|\psi(t)\rangle = \left(\alpha_1^*\, e^{i\frac{E_1t}{\hbar}}\langle1| + \alpha_2^*\, e^{i\frac{E_2t}{\hbar}}\langle2|\right)\left(\alpha_1 E_1\, e^{-i\frac{E_1t}{\hbar}}|1\rangle + \alpha_2 E_2\, e^{-i\frac{E_2t}{\hbar}}|2\rangle\right) = |\alpha_1|^2 E_1 + |\alpha_2|^2 E_2 . \tag{3.48}$$
This is independent of time.
(b) In complete generality, we can express Ô as a linear combination of outer
products:
Ô = a|1⟩⟨1| + b|1⟩⟨2| + c|2⟩⟨1| + d|2⟩⟨2| , (3.49)
for some complex numbers a, b, c, d. In this form, we can act it on |1⟩ to find:
Ô|1⟩ = a|1⟩ + c|2⟩ = |1⟩ − |2⟩ , (3.50)
(b) We can isolate the probability for measuring a muon neutrino by taking the inner product with the time-evolved electron neutrino state. The probability amplitude is
$$\langle\nu_\mu|\nu_e(T)\rangle = \left(-\sin\theta\,\langle\nu_1| + \cos\theta\,\langle\nu_2|\right)\left(\cos\theta\, e^{-i\frac{E_1T}{\hbar}}|\nu_1\rangle + \sin\theta\, e^{-i\frac{E_2T}{\hbar}}|\nu_2\rangle\right) \tag{3.57}$$
$$= \sin\theta\cos\theta\left(e^{-i\frac{E_2T}{\hbar}} - e^{-i\frac{E_1T}{\hbar}}\right) = e^{-i\frac{(E_2+E_1)T}{2\hbar}}\,\sin\theta\cos\theta\left(e^{i\frac{(E_1-E_2)T}{2\hbar}} - e^{-i\frac{(E_1-E_2)T}{2\hbar}}\right)$$
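Squaring this amplitude gives the standard two-flavor oscillation probability, which is easy to confirm numerically. A NumPy sketch (not from the original text; all parameter values are arbitrary):

```python
import numpy as np

th, E1, E2, hbar, T = 0.6, 1.0, 2.3, 1.0, 1.7  # arbitrary units

# amplitude of Eq. (3.57), computed directly in the mass basis
nu_e = np.array([np.cos(th), np.sin(th)])    # |nu_e> = cos th |nu_1> + sin th |nu_2>
nu_mu = np.array([-np.sin(th), np.cos(th)])
evolved = nu_e * np.exp(-1j * np.array([E1, E2]) * T / hbar)
amp = np.vdot(nu_mu, evolved)

# |amp|^2 should equal sin^2(2 th) sin^2((E2 - E1) T / (2 hbar))
prob = (np.sin(2 * th) ** 2) * np.sin((E2 - E1) * T / (2 * hbar)) ** 2
assert np.isclose(abs(amp) ** 2, prob)
```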
Exercises
4.1 This question has numerous interpretations. First, the variance of a non-
Hermitian operator isn’t even well-defined. To be interpretable as a property of a
probability distribution, the variance should be a real number, but even the expec-
tation value of a non-Hermitian operator ⟨Ĉ⟩ is not real. So, in some sense this is
a non-starter because eigenvalues of non-Hermitian operators do not correspond
to physical observables.
However, a more useful way to interpret the uncertainty principle is right-to-
left, as a consequence of a non-trivial commutation relation. If two operators
Ĉ and D̂ have a non-zero commutation relation, this means that it is impossi-
ble for them to be simultaneously diagonalizable. So, the eigenstates of Ĉ and
D̂ must be different if their commutation relation is non-zero. Correspondingly,
if their eigenspaces are distinct, then an eigenstate of one operator is necessarily a non-trivial linear combination of the other's eigenstates. So there is still an "uncertainty" in the decomposition of any vector in terms of eigenstates of Ĉ and D̂. A
probabilistic interpretation is lost (or at the very least not obvious), but a lack of
commutation means that there is no basis in which both operators take a simple
form.
4.2 (a) Recall that the 3 × 3 derivative matrix we constructed earlier is
$$D = \begin{pmatrix} 0 & \frac{1}{2\Delta x} & 0 \\ -\frac{1}{2\Delta x} & 0 & \frac{1}{2\Delta x} \\ 0 & -\frac{1}{2\Delta x} & 0 \end{pmatrix} . \tag{4.1}$$
(b) The position operator X has eigenvalues of the positions on the grid, and in position space those are its diagonal entries:
$$X = \begin{pmatrix} x_0 & 0 & 0 \\ 0 & x_1 & 0 \\ 0 & 0 & x_2 \end{pmatrix} . \tag{4.3}$$
(c) Now, let's calculate the commutator of the position and momentum operators on this grid. First, we have the product
$$XP = -i\hbar\begin{pmatrix} x_0 & 0 & 0 \\ 0 & x_1 & 0 \\ 0 & 0 & x_2 \end{pmatrix}\begin{pmatrix} 0 & \frac{1}{2\Delta x} & 0 \\ -\frac{1}{2\Delta x} & 0 & \frac{1}{2\Delta x} \\ 0 & -\frac{1}{2\Delta x} & 0 \end{pmatrix} = -i\hbar\begin{pmatrix} 0 & \frac{x_0}{2\Delta x} & 0 \\ -\frac{x_1}{2\Delta x} & 0 & \frac{x_1}{2\Delta x} \\ 0 & -\frac{x_2}{2\Delta x} & 0 \end{pmatrix} . \tag{4.4}$$
Similarly,
$$PX = -i\hbar\begin{pmatrix} 0 & \frac{1}{2\Delta x} & 0 \\ -\frac{1}{2\Delta x} & 0 & \frac{1}{2\Delta x} \\ 0 & -\frac{1}{2\Delta x} & 0 \end{pmatrix}\begin{pmatrix} x_0 & 0 & 0 \\ 0 & x_1 & 0 \\ 0 & 0 & x_2 \end{pmatrix} = -i\hbar\begin{pmatrix} 0 & \frac{x_1}{2\Delta x} & 0 \\ -\frac{x_0}{2\Delta x} & 0 & \frac{x_2}{2\Delta x} \\ 0 & -\frac{x_1}{2\Delta x} & 0 \end{pmatrix} . \tag{4.5}$$
With uniform grid spacing, $x_{n+1} - x_n = \Delta x$, the commutator is then
$$[X, P] = i\hbar\begin{pmatrix} 0 & \frac12 & 0 \\ \frac12 & 0 & \frac12 \\ 0 & \frac12 & 0 \end{pmatrix} . \tag{4.7}$$
This is a bit strange from the point of view of the canonical commutation relation, because we would expect that the commutator is [X, P] = iħ I, proportional to the identity matrix. The result is close to that form, and it makes more sense when taking the limit ∆x → 0.
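The matrix form of the commutator in Eq. (4.7) can be confirmed directly. A NumPy sketch (not from the original solution; the grid values and ħ are arbitrary choices):

```python
import numpy as np

dx, hbar = 0.5, 1.0
x = np.array([0.0, dx, 2 * dx])   # uniform grid x0, x1, x2
X = np.diag(x)
D = np.array([[0, 1, 0], [-1, 0, 1], [0, -1, 0]]) / (2 * dx)
P = -1j * hbar * D

comm = X @ P - P @ X
# Eq. (4.7): i hbar times the matrix with entries 1/2 just off the diagonal
expected = 1j * hbar * np.array([[0, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0]])
assert np.allclose(comm, expected)
```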
24 4 Axioms of Quantum Mechanics and Their Consequences
As ∆x decreases, there are never any non-zero entries on the diagonal, but
immediately off the diagonal are factors of 1/2. Relative to the size of the
matrix, these factors of 1/2 get closer and closer to being on the diagonal.
Continuous-dimensional spaces like position space are a bit strange because
infinitesimals can be defined, and so as the grid spacing ∆x → 0, we expect
that these off-diagonal entries merge and sum into the identity matrix.
4.5 (a) For the exponential to make sense, the quantity in the exponent must be unitless. Therefore, the units of the prefactor for the position operator must have dimensions of inverse length:
$$\left[\sqrt{\frac{2\pi c}{\hbar}}\,\right] = L^{-1} . \tag{4.9}$$
that is, mass per unit time. Therefore, c quantifies a flow or current of mass
of the system, in a similar way that electric current quantifies the flow of
electric charge per unit time.
(b) To calculate the commutation relation of the operators X̂ and P̂, we need to use a test function f(x) to ensure that the manipulations we perform use the linearity of the derivative. We then have
$$[\hat X, \hat P]\, f(x) = \left[e^{i\sqrt{\frac{2\pi c}{\hbar}}\,x},\; e^{\sqrt{\frac{2\pi\hbar}{c}}\frac{d}{dx}}\right] f(x) \tag{4.11}$$
$$= e^{i\sqrt{\frac{2\pi c}{\hbar}}\,x}\, e^{\sqrt{\frac{2\pi\hbar}{c}}\frac{d}{dx}} f(x) - e^{\sqrt{\frac{2\pi\hbar}{c}}\frac{d}{dx}}\, e^{i\sqrt{\frac{2\pi c}{\hbar}}\,x} f(x)$$
$$= e^{i\sqrt{\frac{2\pi c}{\hbar}}\,x}\, f\!\left(x + \sqrt{\frac{2\pi\hbar}{c}}\right) - e^{i\sqrt{\frac{2\pi c}{\hbar}}\left(x + \sqrt{\frac{2\pi\hbar}{c}}\right)} f\!\left(x + \sqrt{\frac{2\pi\hbar}{c}}\right) .$$
On the second line, we have expressed the momentum and position operators in position space, and then on the third line, note that the exponentiated derivative operator is the translation operator. Further, note that the second remaining exponential factor simplifies:
$$e^{i\sqrt{\frac{2\pi c}{\hbar}}\left(x + \sqrt{\frac{2\pi\hbar}{c}}\right)} = e^{i\sqrt{\frac{2\pi c}{\hbar}}\,x + 2\pi i} = e^{i\sqrt{\frac{2\pi c}{\hbar}}\,x} , \tag{4.12}$$
The only way that a translated function can equal itself up to a multiplicative
factor is if that function is periodic, and so can be expressed as a complex
exponential. Let’s write
$$f_\lambda(x) = e^{ibx} , \tag{4.18}$$
for some real constant b. This must satisfy
$$\lambda\, e^{ibx} = e^{ib\left(x + \sqrt{\frac{2\pi\hbar}{c}}\right)} , \tag{4.19}$$
or that
$$\lambda = e^{ib\sqrt{\frac{2\pi\hbar}{c}}} . \tag{4.20}$$
Through the periodicity of the imaginary exponential, we can limit b to the range
$$0 \le b < \sqrt{\frac{2\pi c}{\hbar}} . \tag{4.21}$$
That is, eigenvalues of the operator P̂ lie on the unit circle, as expected
because P̂ is a unitary operator.
(e) Because the commutator [X̂, P̂] = 0, the eigenstates of X̂ are identical to the
eigenstates of P̂.
(f) This may seem like we have created position and momentum operators that
commute, and therefore have somehow skirted the Heisenberg uncertainty
principle. However, as we observed in part (c), the fact that these exponentiated position and momentum operators are unitary means that the position or momentum appearing in each exponent is only defined modulo 2π. Therefore, the uncertainty has migrated from the non-zero commutator into an uncertainty in
the operators themselves. A given eigenvalue for X̂, for example, does not
define a unique position, and in fact is consistent with a countable infinity
of positions!
4.4 First, let's write down the relevant Taylor expansions for the exponentiated matrices, to third order. We have
$$e^{i\hat A} = I + i\hat A - \frac{\hat A^2}{2} - i\frac{\hat A^3}{6} + \cdots , \tag{4.22}$$
$$e^{i\hat B} = I + i\hat B - \frac{\hat B^2}{2} - i\frac{\hat B^3}{6} + \cdots , \tag{4.23}$$
$$e^{i(\hat A+\hat B)} = I + i(\hat A+\hat B) - \frac{(\hat A+\hat B)^2}{2} - i\frac{(\hat A+\hat B)^3}{6} + \cdots . \tag{4.24}$$
Then, to third order in the matrices  and B̂, the product of the exponentials is
$$e^{i\hat A}e^{i\hat B} = \left(I + i\hat A - \frac{\hat A^2}{2} - i\frac{\hat A^3}{6} + \cdots\right)\left(I + i\hat B - \frac{\hat B^2}{2} - i\frac{\hat B^3}{6} + \cdots\right) \tag{4.25}$$
$$= I + i(\hat A + \hat B) - \frac{\hat A^2}{2} - \frac{\hat B^2}{2} - \hat A\hat B - i\frac{\hat A^3}{6} - i\frac{\hat B^3}{6} - i\frac{\hat A\hat B^2}{2} - i\frac{\hat A^2\hat B}{2} + \cdots .$$
Now, we can associate terms and complete squares and cubes. Note that
$$(\hat A+\hat B)^3 = \hat A^3 + \hat B^3 + \hat A\hat B^2 + \hat B\hat A^2 + \hat A\hat B\hat A + \hat B\hat A\hat B + \hat A^2\hat B + \hat B^2\hat A , \tag{4.27}$$
so that
$$e^{i\hat A}e^{i\hat B} - e^{i(\hat A+\hat B)} = -\frac{[\hat A,\hat B]}{2} - \frac{i}{6}\left(2\hat A\hat B^2 + 2\hat A^2\hat B - \hat A\hat B\hat A - \hat B\hat A\hat B - \hat B\hat A^2 - \hat B^2\hat A\right) + \cdots . \tag{4.28}$$
Thus, at least through cubic order in the Taylor expansion, all of the terms by which the product of exponentiated matrices differs from the exponential of the sum of the matrices can be expressed in terms of commutators of the matrices  and B̂. Note that if [Â, B̂] = 0, then all of these residual differences vanish.
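The leading −[Â, B̂]/2 behavior can be checked numerically by scaling the matrices by a small parameter ε, so that the difference is dominated by its ε² term. A sketch assuming SciPy's `scipy.linalg.expm` (random matrices and ε are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))

# for A -> eps A, B -> eps B, Eq. (4.28) says the difference is
# -eps^2 [A, B]/2 plus cubic-in-eps corrections
eps = 1e-3
diff = expm(1j * eps * A) @ expm(1j * eps * B) - expm(1j * eps * (A + B))
leading = -0.5 * eps**2 * (A @ B - B @ A)

# the residual is down by one more power of eps
assert np.linalg.norm(diff - leading) < 0.05 * np.linalg.norm(leading)
```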
4.5 (a) If the operator Ô acts as
Ô|n⟩ = |n + 1⟩ , (4.30)
By the assumed completeness of the winding number states, this final outer
product is the identity matrix. Hence, Ô† Ô = I, which indeed proves that Ô
is unitary.
(b) As a unitary operator, the eigenvalue equation for Ô is
βn = e−inθ . (4.39)
(c) The winding states are eigenstates of the Hamiltonian, so we can express the Hamiltonian in outer product form as
$$\hat H = \sum_{n=-\infty}^\infty |n|\, E_0\, |n\rangle\langle n| . \tag{4.41}$$
With this form, we can just calculate the commutator directly. We have
$$\hat H\hat O = \sum_{m=-\infty}^\infty\sum_{n=-\infty}^\infty |m|\, E_0\, |m\rangle\langle m|n+1\rangle\langle n| = \sum_{m=-\infty}^\infty\sum_{n=-\infty}^\infty |m|\, E_0\, |m\rangle\langle n|\,\delta_{m,n+1} = \sum_{n=-\infty}^\infty |n+1|\, E_0\, |n+1\rangle\langle n| . \tag{4.42}$$
$$e^{-i\frac{\hat H t}{\hbar}}|\psi\rangle = \sum_{n=-\infty}^\infty e^{-in\theta}\, e^{-i\frac{|n| E_0 t}{\hbar}}|n\rangle = \sum_{n=-\infty}^\infty e^{-in\left(\theta + \operatorname{sgn}(n)\frac{E_0 t}{\hbar}\right)}|n\rangle , \tag{4.45}$$
Using this, we can then further use the saturated Heisenberg uncertainty
principle. The saturated uncertainty principle enforces the relationship that
$$\sigma_x^2\,\sigma_p^2 = \frac{\hbar^2}{4} . \tag{4.48}$$
Next, the variance in position, say, is just the square of the vector in the
Cauchy–Schwarz inequality:
for some function f . This can only be satisfied for the expectation value on
a general state if the function f is linear:
f (x) = a + bx , (4.64)
for some constants a, b. This can be argued for a large class of functions by
assuming it has a Taylor expansion about x = 0 and noting that it is not true
in general that
for n > 1. If this were true then, for example, the variance would always vanish. Thus, if the derivative of the potential must be linear, the potential must be quadratic, and so the potential takes the form
Note that $\langle\hat x^{2n-1}\rangle$ weights larger values of position more than does $\langle\hat x\rangle^{2n-1}$, and so we expect that
$$\left\langle\frac{dV(\hat x)}{d\hat x}\right\rangle \ge \frac{dV(\langle\hat x\rangle)}{d\langle\hat x\rangle} . \tag{4.68}$$
(c) Now, we flip the interpretation and are asked to find the state |ψ ⟩ on which
⟨x̂2n−1 ⟩ = ⟨x̂⟩2n−1 . This can only be true if there is a unique value of x̂ for
which the state has a non-zero value; namely it is a position eigenstate. That
is, |ψ ⟩ = |x⟩, which is also not in the Hilbert space. Therefore, for any state
on the Hilbert space, it is actually impossible for ⟨x̂2n−1 ⟩ = ⟨x̂⟩2n−1 .
4.8 (a) The integral of the exponential probability distribution is
$$1 = N\int_0^\infty dx\, e^{-\lambda x} = \frac{N}{\lambda} . \tag{4.69}$$
Therefore, the normalization constant N = λ .
(b) The expectation value of x can be calculated through explicit integration,
but we will do another approach here. Let’s instead take the derivative of the
normalization of the probability distribution with respect to the parameter
λ . We have
$$0 = \frac{d}{d\lambda}\,1 = \frac{d}{d\lambda}\,\lambda\int_0^\infty dx\, e^{-\lambda x} = \int_0^\infty dx\, e^{-\lambda x} - \lambda\int_0^\infty dx\, x\, e^{-\lambda x} . \tag{4.70}$$
Note that the first integral is 1/λ and the second integral is exactly the
expectation value ⟨x⟩. Therefore, we have that
$$0 = \frac{1}{\lambda} - \langle x\rangle , \tag{4.71}$$
or that
$$\langle x\rangle = \frac{1}{\lambda} . \tag{4.72}$$
(c) The second moment can be calculated in exactly the same way, just through
taking the second derivative of the normalization integral. We have
$$0 = \frac{d^2}{d\lambda^2}\,\lambda\int_0^\infty dx\, e^{-\lambda x} = \frac{d}{d\lambda}\int_0^\infty dx\, e^{-\lambda x} - \frac{d}{d\lambda}\,\lambda\int_0^\infty dx\, x\, e^{-\lambda x}$$
$$= -\int_0^\infty dx\, x\, e^{-\lambda x} - \int_0^\infty dx\, x\, e^{-\lambda x} + \lambda\int_0^\infty dx\, x^2\, e^{-\lambda x} = -2\,\frac{\langle x\rangle}{\lambda} + \langle x^2\rangle . \tag{4.73}$$
Therefore, we find that
$$\langle x^2\rangle = \frac{2}{\lambda^2} . \tag{4.74}$$
The variance of the exponential distribution is then
$$\sigma^2 = \langle x^2\rangle - \langle x\rangle^2 = \frac{1}{\lambda^2} , \tag{4.75}$$
and then the standard deviation is
$$\sigma = \frac{1}{\lambda} . \tag{4.76}$$
The area of the distribution within one standard deviation of the expectation value is
$$\lambda\int_0^{2/\lambda} dx\, e^{-\lambda x} = 1 - e^{-2} \simeq 0.864665 . \tag{4.77}$$
This is indeed greater than 1/2 and so most of the distribution is within 1
standard deviation.
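These moments and the one-standard-deviation probability can all be confirmed by numerical integration. A sketch assuming SciPy's `scipy.integrate.quad` (the rate λ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

lam = 1.3  # arbitrary rate parameter
moment = lambda n: quad(lambda x: lam * x**n * np.exp(-lam * x), 0, np.inf)[0]

assert abs(moment(0) - 1) < 1e-8            # normalization, Eq. (4.69)
assert abs(moment(1) - 1 / lam) < 1e-8      # Eq. (4.72)
assert abs(moment(2) - 2 / lam**2) < 1e-8   # Eq. (4.74)

# probability within one standard deviation of the mean: the interval [0, 2/lam]
p = quad(lambda x: lam * np.exp(-lam * x), 0, 2 / lam)[0]
assert abs(p - (1 - np.exp(-2))) < 1e-8
```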
(d) Now, let’s calculate this generalized spread about the mean. We have
$$\langle(x - \langle x\rangle)^n\rangle = \lambda\int_0^\infty dx\, (x - \langle x\rangle)^n\, e^{-\lambda x} . \tag{4.78}$$
Let's now make the change of variables y = x − ⟨x⟩, so that the integral becomes
$$\langle(x - \langle x\rangle)^n\rangle = \lambda\int_{-1/\lambda}^\infty dy\, y^n\, e^{-\lambda(y + \langle x\rangle)} = \lambda\, e^{-1}\int_{-1/\lambda}^\infty dy\, y^n\, e^{-\lambda y} . \tag{4.79}$$
Exercises
(d) To include time dependence, we just need to augment the wavefunction with exponential phase factors with the appropriate energy eigenvalues. We have
$$\psi(x,t) = \frac{1}{\sqrt{10}}\left[e^{-i\frac{E_1t}{\hbar}}\psi_1(x) + 3\,e^{-i\frac{E_2t}{\hbar}}\psi_2(x)\right] . \tag{5.7}$$
The expectation value of the energy or the Hamiltonian is identical when time dependence is included, again because the wavefunction is expressed in terms of energy eigenstates. However, the expectation value of the position will change in time. To evaluate this, let's first just look at the wavefunction times its complex conjugate:
$$\psi^*(x,t)\,\psi(x,t) = \frac{1}{\sqrt{10}}\left[e^{i\frac{E_1t}{\hbar}}\psi_1^*(x) + 3\,e^{i\frac{E_2t}{\hbar}}\psi_2^*(x)\right]\frac{1}{\sqrt{10}}\left[e^{-i\frac{E_1t}{\hbar}}\psi_1(x) + 3\,e^{-i\frac{E_2t}{\hbar}}\psi_2(x)\right] \tag{5.8}$$
$$= \frac{1}{10}\left[|\psi_1(x)|^2 + 9|\psi_2(x)|^2 + 3\,e^{-i\frac{(E_2-E_1)t}{\hbar}}\psi_1^*(x)\psi_2(x) + 3\,e^{i\frac{(E_2-E_1)t}{\hbar}}\psi_1(x)\psi_2^*(x)\right] .$$
where
$$\Delta\langle\hat x(t)\rangle = -\frac{3}{5}\left(1 - \cos\frac{(E_2 - E_1)t}{\hbar}\right)\int_0^a dx\, x\, \psi_1(x)\psi_2(x) = \frac{16a}{15\pi^2}\left(1 - \cos\frac{(E_2 - E_1)t}{\hbar}\right) . \tag{5.11}$$
Thus, the time-dependent expectation value of the position on this state is
$$\langle\hat x(t)\rangle = \frac{a}{2} - \frac{16a}{15\pi^2}\cos\frac{(E_2 - E_1)t}{\hbar} . \tag{5.12}$$
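The closed form for ⟨x̂(t)⟩ can be checked by integrating x against the probability density of Eq. (5.8) directly. A sketch assuming SciPy's `scipy.integrate.quad` and sinusoidal infinite-well eigenfunctions (units with a = ħ = m = 1 are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

a, hbar, m = 1.0, 1.0, 1.0
E = lambda n: n**2 * np.pi**2 * hbar**2 / (2 * m * a**2)
psi = lambda n, x: np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

def x_expect(t):
    # |psi(x,t)|^2 from Eq. (5.8), integrated against x
    w = np.exp(-1j * (E(2) - E(1)) * t / hbar)
    dens = lambda x: (abs(psi(1, x))**2 + 9 * abs(psi(2, x))**2
                      + 6 * (w * psi(1, x) * psi(2, x)).real) / 10
    return quad(lambda x: x * dens(x), 0, a)[0]

t = 0.37  # arbitrary time
closed = a / 2 - 16 * a / (15 * np.pi**2) * np.cos((E(2) - E(1)) * t / hbar)
assert abs(x_expect(t) - closed) < 1e-6
```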
36 5 Quantum Mechanical Example: The Infinite Square Well
5.2 We will first verify that this state is normalized. Note that
$$\psi^*(x)\,\psi(x) = \frac{e^{-i\frac{px}{\hbar}}}{\sqrt a}\,\frac{e^{i\frac{px}{\hbar}}}{\sqrt a} = \frac{1}{a} , \tag{5.13}$$
independent of position x. Then, the nth moment of this distribution on the infinite square well is
$$\langle\hat x^n\rangle = \int_0^a dx\, \psi^*(x)\, x^n\, \psi(x) = \frac1a\int_0^a dx\, x^n = \frac{a^n}{n+1} . \tag{5.14}$$
The normalization corresponds to n = 0, for which we do indeed find the value 1, so this is unit normalized. The variance is the difference between the n = 2 moment and the square of the n = 1 moment:
$$\sigma_x^2 = \langle\hat x^2\rangle - \langle\hat x\rangle^2 = a^2\left(\frac13 - \frac14\right) = \frac{a^2}{12} . \tag{5.15}$$
On the other hand, for the variance of momentum on this state, note that this state is an eigenstate of the momentum operator:
$$\hat p|\psi\rangle = p|\psi\rangle \quad\Rightarrow\quad -i\hbar\frac{d}{dx}\frac{e^{i\frac{px}{\hbar}}}{\sqrt a} = p\,\frac{e^{i\frac{px}{\hbar}}}{\sqrt a} . \tag{5.16}$$
The variance of momentum therefore vanishes, and the uncertainty product is
$$\sigma_x^2\,\sigma_p^2 = 0 < \frac{\hbar^2}{4} . \tag{5.17}$$
However, this state is actually not in the Hilbert space of the infinite square well, for the following reason. In earlier chapters, we showed that the momentum operator is Hermitian only if all states in the Hilbert space vanish at the boundaries of the physical space. This momentum eigenstate is non-zero at the boundaries of the infinite square well, and therefore violates this property. That is, if this state were allowed in the Hilbert space, then momentum would not be Hermitian, and therefore momentum would also not be observable! Our derivation of the Heisenberg uncertainty principle required that the operators were Hermitian, so this result does not contradict what we had proved earlier.
5.3 (a) First, the energy eigenvalues are just
$$E_n = \frac{n^2\pi^2\hbar^2}{2mL^2} , \tag{5.18}$$
where L is the width of the well. Here, L = π, and so the energy eigenvalues are
$$E_n = \frac{n^2\hbar^2}{2m} . \tag{5.19}$$
That is,
$$\beta = -\alpha\, e^{-i\frac{\pi p_n}{\hbar}} . \tag{5.22}$$
The allowed momenta are then
$$p_n = n\hbar , \tag{5.25}$$
Taking the inner product with the bra ⟨ψm| isolates the coefficient βm:
$$\langle\psi_m|\psi\rangle = \sum_{n=1}^\infty \beta_n\,\langle\psi_m|\psi_n\rangle = \beta_m . \tag{5.31}$$
$$= -i\,\frac{4\sqrt2\,\hbar}{\pi^2 m}\left(-\frac{1}{\sqrt2} + \frac{1}{\sqrt2} + \frac{1}{\sqrt2} - \frac{1}{\sqrt2} + \cdots\right) .$$
This sum actually vanishes. The sine factor just oscillates between $+\frac{1}{\sqrt2}$ and $-\frac{1}{\sqrt2}$, and so the sum keeps canceling itself at higher terms in the series.
From the Schrödinger equation, the time derivative is determined by the Hamiltonian, which, for the infinite square well, is just the squared momentum operator:
$$\hat H = \frac{\hat p^2}{2m} \qquad\text{so}\qquad i\hbar\frac{d}{dt}|\psi\rangle = \hat H|\psi\rangle . \tag{5.39}$$
The momentum operator is a spatial derivative, and at t = 0, the initial wavefunction is piecewise constant over the well. This is then projected onto a uniform wavefunction on the well. The second derivatives are 0 almost everywhere, except at the points where the initial wavefunction changes value from 0 to $\sqrt{2/\pi}$, but the values of the spatial second derivatives there are opposite of one another. (Roughly, about the point x = −π/4, the wavefunction is "concave-up", and around x = π/4, it is "concave-down.") Once projected onto the uniform wavefunction, these second derivatives cancel, rendering the time derivative 0 at t = 0.
(e) Let’s first consider the expectation value of momentum on this state. Recall
from Ehrenfest’s theorem that the time dependence of the expectation values
are
$$\frac{d\langle\hat p\rangle}{dt} = \frac{i}{\hbar}\,\langle[\hat H, \hat p]\rangle . \tag{5.40}$$
In position space, the momentum operator is
$$\hat p = -i\hbar\,\frac{d}{dx} . \tag{5.42}$$
So, even the ground state of this infinite square well is relativistic. One thing to note is that if the pion is traveling near the speed of light, its kinetic energy is clearly not as simple as $m_\pi v^2/2$. However, setting v = c establishes an energy at which the pion is definitely relativistic, and at which the non-relativistic expression for the energy is no longer applicable.
5.6 (a) The matrix elements of the momentum operator are
$$(\hat p)_{mn} = -2i\,\frac{\hbar}{a}\,\frac{mn}{m^2 - n^2}\left(1 - (-1)^{m+n}\right) . \tag{5.59}$$
The matrix elements of its square are then
$$(\hat p^2)_{mn} = -\frac{4\hbar^2}{a^2}\, mn\sum_{l=1}^\infty \frac{l^2\left(1 - (-1)^{m+l}\right)\left(1 - (-1)^{l+n}\right)}{(m^2 - l^2)(l^2 - n^2)} .$$
(b) Note that terms in the sum vanish if either (−1)m+l = 1 or (−1)l+n = 1. So,
to simply ensure that every term in the sum vanishes, we just need to force
one of these relationships for every value of l. This can be accomplished if
m and n differ by an odd number, call it k. If n = m + k, then either m + l is
even and n + l is odd, or vice-versa, for every value of l. Thus,
( p̂2 )m(m+k) = 0 , (5.61)
if k is odd.
If instead n = m + k and k is even, the sum can be expressed as
$$\sum_{l=1}^\infty \frac{l^2\left(1 - (-1)^{m+l}\right)\left(1 - (-1)^{l+m+k}\right)}{(m^2 - l^2)(l^2 - (m+k)^2)} = \sum_{l=1}^\infty \frac{l^2\left(1 - (-1)^{m+l}\right)^2}{(m^2 - l^2)(l^2 - (m+k)^2)} = \sum_{l=1}^\infty \frac{2l^2\left(1 - (-1)^{m+l}\right)}{(m^2 - l^2)(l^2 - n^2)} . \tag{5.62}$$
This follows by distributing the product and noting that (−1)2(m+l) = 1. Fur-
ther, note that the value of 1 − (−1)m+l is either 0 or 2, and so every non-zero
term in the sum has the same sign, at least from this factor.
Now, let's partial fraction the denominators. Note that
$$\frac{1}{m^2 - l^2} = \frac{1}{2l}\left(\frac{1}{m - l} - \frac{1}{m + l}\right) , \tag{5.63}$$
and similar for the other denominator factor. Then, the sum becomes
$$\sum_{l=1}^\infty \frac{2l^2\left(1 - (-1)^{m+l}\right)}{(m^2 - l^2)(l^2 - n^2)} = \frac12\sum_{l=1}^\infty \left(1 - (-1)^{m+l}\right)\left(\frac{1}{m - l} - \frac{1}{m + l}\right)\left(\frac{1}{n + l} - \frac{1}{n - l}\right) , \tag{5.64}$$
where the values of l that are summed over are either just odd or just even, depending on m. If m is odd, then l is even and we can write l = 2k, and the sum takes the form
$$\sum_{k=1}^\infty \left(a_{2k+m} - a_{2k-m}\right) , \tag{5.72}$$
where $a_i$ is the placeholder for the corresponding term in the sum. Note that by the telescoping nature, each first term $a_{2k+m}$ cancels against the second term $a_{2k'-m}$ with $k' = k + m$, so everything cancels between the first terms and the second. The only sum that remains is just of the second term, up to k = m:
$$\sum_{k=1}^\infty \left(a_{2k+m} - a_{2k-m}\right) = -\sum_{k=1}^m a_{2k-m} = -\sum_{k=1}^m \frac{1}{2k - m} . \tag{5.73}$$
Let's see what this evaluates to for some low values of m. If m = 1, we find

    −∑_{k=1}^1 1/(2k − 1) = −1 .   (5.74)
If m = 3, we find

    −∑_{k=1}^3 1/(2k − 3) = 1 − 1 − 1/3 = −1/3 .   (5.75)
We could keep going to larger m, but this suggests that it evaluates to −1/m.
Let’s prove this with induction. We have already verified low m; let’s now
assume it is true for m and prove it is true for m + 2. We consider the sum
    −∑_{k=1}^{m+2} 1/(2k − m − 2) = −∑_{k=1}^{m+1} 1/(2k − m − 2) − 1/(2(m+2) − m − 2)   (5.76)
        = −∑_{k=1}^{m+1} 1/(2k − m − 2) − 1/(m + 2)
        = −∑_{k=1}^{m+1} 1/(2(k − 1) − m) − 1/(m + 2)
        = −∑_{k=0}^{m} 1/(2k − m) − 1/(m + 2)
        = 1/m − ∑_{k=1}^{m} 1/(2k − m) − 1/(m + 2) = −1/(m + 2) ,

which completes the induction. Thus the sum evaluates to −1/m if m is odd.
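As a numerical sanity check of the telescoping result above, we can sum the series directly. This is a sketch, not from the text: the placeholder a_i = 1/i and the cutoff kmax are choices made only for this check.

```python
# Check: for odd m, sum_{k>=1} (a_{2k+m} - a_{2k-m}) with a_i = 1/i
# should equal -1/m, as the telescoping argument predicts.

def telescoped_sum(m, kmax=200_000):
    """Partial sum of sum_k (1/(2k+m) - 1/(2k-m)); kmax is an arbitrary cutoff."""
    return sum(1.0 / (2 * k + m) - 1.0 / (2 * k - m) for k in range(1, kmax + 1))

for m in (1, 3, 5, 7):
    s = telescoped_sum(m)
    # partial-sum error is of order m/kmax, far below the tolerance used here
    assert abs(s - (-1.0 / m)) < 1e-3, (m, s)
```

The uncancelled tail of the partial sum shrinks like m/kmax, so the agreement with −1/m improves steadily as the cutoff grows.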
Next, if m is even, then l must be odd in the sum and can be written as l = 2k − 1, and the sum takes the form

    ∑_{k=1}^∞ (a_{2k−1+m} − a_{2k−1−m}) = −∑_{k=1}^m a_{2k−1−m} = −∑_{k=1}^m 1/(2k − 1 − m) .   (5.78)
For m = 4, we have

    −∑_{k=1}^4 1/(2k − 1 − 4) = 1/3 + 1 − 1 − 1/3 = 0 .   (5.80)
This suggests that this sum simply vanishes! Let’s again prove this with
induction, assuming it is true for m. The sum for m + 2 is
    −∑_{k=1}^{m+2} 1/(2k − 1 − m − 2) = −∑_{k=1}^{m+1} 1/(2(k − 1) − 1 − m) − 1/(2(m+2) − 1 − m − 2)   (5.81)
        = −∑_{k=0}^{m} 1/(2k − 1 − m) − 1/(m + 1)
        = 1/(m + 1) − ∑_{k=1}^{m} 1/(2k − 1 − m) − 1/(m + 1) = 0 ,
which proves the induction step.
Inputting these results into Eq. 5.70 proves that all off-diagonal terms in the
matrix representation of p̂2 vanish.
(c) Let’s now consider just the ( p̂2 )11 element. From our earlier partial fraction-
ing, this term can be expressed as
    (p̂²)₁₁ = (4h̄²/a²) ∑_{k=1}^∞ [1/(2k − 1) + 1/(2k + 1)]²   (5.82)
        = (4h̄²/a²) ∑_{k=1}^∞ [1/(2k − 1)² + 1/(2k + 1)² + 2/((2k − 1)(2k + 1))]
        = (4h̄²/a²) [π²/8 + ∑_{k=1}^∞ 1/(2(k+1) − 1)² + ∑_{k=1}^∞ (1/(2k − 1) − 1/(2k + 1))]
        = (4h̄²/a²) [π²/8 + π²/8 − 1 + ∑_{k=1}^∞ (1/(2k − 1) − 1/(2k + 1))]
        = (4h̄²/a²) [π²/8 + π²/8 − 1 + 1]
        = π²h̄²/a² .
In these expressions, we used our knowledge about telescoping series and the
given value of the series presented in the exercise. This is indeed the value of
the squared momentum in the first energy eigenstate.
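The series that drives this result can be checked numerically. A minimal sketch (the cutoff is an arbitrary choice for the check): the sum of [1/(2k−1) + 1/(2k+1)]² should converge to π²/4, so that (p̂²)₁₁ = (4h̄²/a²)·(π²/4) = π²h̄²/a².

```python
import math

# Partial sum of sum_{k>=1} (1/(2k-1) + 1/(2k+1))^2, expected to approach pi^2/4.
S = sum((1.0 / (2 * k - 1) + 1.0 / (2 * k + 1)) ** 2 for k in range(1, 200_001))
assert abs(S - math.pi ** 2 / 4) < 1e-3  # tail beyond the cutoff is ~1/kmax
```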
(d) The solution to this problem is explicitly worked out in J. Prentis and B. Ty,
“Matrix mechanics of the infinite square well and the equivalence proofs of
Schrödinger and von Neumann,” Am. J. Phys. 82, 583 (2014).
5.7 (a) Let's now calculate the matrix elements of the position operator in the energy eigenstate basis. We have

    (x̂)_mn = ⟨ψ_m|x̂|ψ_n⟩ = (2/a) ∫₀^a dx sin(mπx/a) x sin(nπx/a)   (5.83)
        = −[4mn (1 − (−1)^{m+n}) / (π²(m² − n²)²)] a .
We have just presented the result from doing the integral, which is a standard
exercise in integration by parts.
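A numerical integration makes for a quick check of the quoted result. This is a sketch, not from the text: a = 1 and the trapezoid step count are choices made only for this check.

```python
import math

# Compare the closed form (x)_mn = -4mn(1-(-1)^(m+n)) a / (pi^2 (m^2-n^2)^2),
# for m != n, against direct numerical integration (trapezoid rule, a = 1).

def x_mn_numeric(m, n, a=1.0, N=50_000):
    h = a / N
    f = lambda x: (2 / a) * math.sin(m * math.pi * x / a) * x * math.sin(n * math.pi * x / a)
    return h * (0.5 * f(0) + sum(f(i * h) for i in range(1, N)) + 0.5 * f(a))

def x_mn_formula(m, n, a=1.0):
    return -4 * m * n * (1 - (-1) ** (m + n)) * a / (math.pi ** 2 * (m ** 2 - n ** 2) ** 2)

for m, n in [(1, 2), (2, 3), (1, 4), (2, 4)]:
    assert abs(x_mn_numeric(m, n) - x_mn_formula(m, n)) < 1e-6
```

Note that the pair (2, 4) checks the vanishing of the matrix element when m + n is even.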
(b) To evaluate the commutator of position and momentum, we need to take their matrix product in two orders. First,

    (x̂p̂)_mn = ∑_{l=1}^∞ (x̂)_ml (p̂)_ln = (8ih̄/π²) mn ∑_{l=1}^∞ l² (1 − (−1)^{m+l})(1 − (−1)^{l+n}) / [(m² − l²)²(l² − n²)] .   (5.84)
elements must vanish, which seems to be at odds with one another. Our intu-
ition for the trace as the sum of diagonal elements of a matrix works well for
finite-dimensional matrices, just like our intuition for finite sums. However,
we know that infinite sums or series can have very surprising properties and
the trace of the commutator of position and momentum is an infinite sum.
So, for any finite n the diagonal element is ([x̂, p̂])_nn = ih̄, but the "infinite" diagonal element is an enormous negative number that ensures that the trace vanishes.
5.8 (a) The normalization of this wavefunction is

    1 = N² ∫₀^a dx x²(a − x)² = N² a⁵/30 .   (5.92)
Therefore, the normalized wavefunction is

    ζ₁(x) = √(30/a⁵) x(a − x) .   (5.93)
(b) The overlap of this state with the ground state of the infinite square well is

    ⟨ζ₁|ψ₁⟩ = (√60/a³) ∫₀^a dx x(a − x) sin(πx/a) = 8√15/π³ .   (5.94)
Evaluating this integral is a standard exercise in integration by parts, but we suppress the details here. The fraction of ζ₁(x) that is described by the ground state of the infinite square well is then the square of this:

    |⟨ζ₁|ψ₁⟩|² = 960/π⁶ ≃ 0.998555 .   (5.95)
As illustrated in the picture, these wavefunctions are very, very similar!
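The overlap can be verified numerically. A minimal sketch, with a = 1 chosen for the check:

```python
import math

# Overlap <zeta_1|psi_1> = sqrt(60) * integral_0^1 x(1-x) sin(pi x) dx  (a = 1),
# computed with the trapezoid rule and compared to 8*sqrt(15)/pi^3.
N = 20_000
h = 1.0 / N
f = lambda x: x * (1 - x) * math.sin(math.pi * x)
integral = h * (0.5 * f(0) + sum(f(i * h) for i in range(1, N)) + 0.5 * f(1))
overlap = math.sqrt(60) * integral

assert abs(overlap - 8 * math.sqrt(15) / math.pi ** 3) < 1e-7
assert abs(overlap ** 2 - 960 / math.pi ** 6) < 1e-7  # ~0.9986, nearly unity
```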
(c) We can rearrange the Hamiltonian's eigenvalue equation to solve for the potential. We then find

    V(x) = (h̄²/2m) (1/ζ₁(x)) d²ζ₁(x)/dx² + E₁ .   (5.96)

The term involving the second derivative of the wavefunction becomes

    (1/ζ₁(x)) d²ζ₁(x)/dx² = (1/(x(a − x))) d²/dx² [x(a − x)] = −2/(x(a − x)) .   (5.97)

Therefore, the potential is

    V(x) = −(h̄²/m) 1/(x(a − x)) + E₁ .   (5.98)

By setting

    E₁ = 4h̄²/(ma²) ,   (5.99)
we can plot this potential and compare to the infinite square well. This is
shown in Fig. 5.1.
Fig. 5.1 Comparison of the new potential (solid black) to the infinite square well. The new potential diverges at the points
x = 0, a, which is also where the wavefunctions must vanish.
(d) What we know about the first excited state is that it is orthogonal to the ground state. Further, the ground state is symmetric about the center of the well, so to easily ensure that the first excited state is orthogonal, we just make it anti-symmetric on the well. So, we expect that the first excited state wavefunction takes the form
ζ2 (x) = Nx(a − x)(a − 2x) , (5.100)
for some normalization constant N (that we won’t evaluate here). Plugging
this into the eigenvalue equation for the Hamiltonian with the potential
established above, the first excited state energy is
    E₂ = −(h̄²/2m) (1/ζ₂(x)) d²ζ₂(x)/dx² + V(x) = E₁ + (2h̄²/m) 1/(x(a − x)) .   (5.101)
Note that this ζ₂(x) is actually not an eigenstate, because the "energy" E₂ for this potential is position dependent. However, we can estimate the excited state energy level by noting that the minimum value of the position-dependent piece is at x = a/2, where

    1/(x(a − x))|_{x=a/2} = 4/a² .   (5.102)
So, the second energy eigenvalue is approximately

    E₂ ≃ E₁ + 8h̄²/(ma²) .   (5.103)

For comparison, in the infinite square well, note that

    E₂ = E₁ + 3π²h̄²/(2ma²) ≃ E₁ + 14.8 h̄²/(ma²) .   (5.104)
6 Quantum Mechanical Example: The Harmonic Oscillator
Exercises
6.1 (a) The inner product of two coherent states as represented by the action of the raising operator on the harmonic oscillator ground state is

    ⟨ψ_λ|ψ_η⟩ = e^{−|λ|²/2} e^{−|η|²/2} ⟨ψ₀|e^{λ* â} e^{η â†}|ψ₀⟩ .   (6.1)

Now, in the basis of the raising operator, â is a derivative and so its exponential is the translation operator. That is,

    e^{λ* â} e^{η â†} = e^{η(â† + λ*)} = e^{λ* η} e^{η â†} .   (6.2)

Then, the inner product of the two coherent states is

    ⟨ψ_λ|ψ_η⟩ = e^{−|λ|²/2} e^{−|η|²/2} e^{λ* η} ⟨ψ₀|e^{η â†}|ψ₀⟩ = e^{−|λ|²/2} e^{−|η|²/2} e^{λ* η} .   (6.3)
For any finite values of λ , η this is non-zero, and so two generic coherent
states are not orthogonal to one another.
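The closed form for the coherent-state inner product can be checked against the number-basis expansion. A sketch (the particular λ, η values and the truncation are arbitrary choices for the check):

```python
import cmath, math

# Check <psi_lambda|psi_eta> = exp(-|lam|^2/2 - |eta|^2/2 + conj(lam)*eta)
# by summing the number-basis coefficients c_n(z) = e^{-|z|^2/2} z^n / sqrt(n!).
lam, eta = 0.7 + 0.3j, -0.4 + 1.1j

def coeffs(z, nmax=60):
    c, out = math.exp(-abs(z) ** 2 / 2), []
    for n in range(nmax):
        out.append(c)
        c *= z / math.sqrt(n + 1)  # recursion c_{n+1} = c_n * z/sqrt(n+1)
    return out

inner = sum(a.conjugate() * b for a, b in zip(coeffs(lam), coeffs(eta)))
closed = cmath.exp(-abs(lam) ** 2 / 2 - abs(eta) ** 2 / 2 + lam.conjugate() * eta)
assert abs(inner - closed) < 1e-10
assert abs(inner) > 0  # generic coherent states are not orthogonal
```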
(b) Recall that the nth energy eigenstate of the harmonic oscillator can be expressed through the action of the raising operator on the ground state:

    |ψ_n⟩ = [(â†)ⁿ/√n!] |ψ₀⟩ .   (6.4)

A coherent state with eigenvalue λ can then be expanded in energy eigenstates as

    |ψ_λ⟩ = e^{−|λ|²/2} e^{λ â†}|ψ₀⟩ = e^{−|λ|²/2} ∑_{n=0}^∞ [λⁿ(â†)ⁿ/n!] |ψ₀⟩   (6.5)
        = e^{−|λ|²/2} ∑_{n=0}^∞ (λⁿ/√n!) |ψ_n⟩ .
n=0
Then, the integral representing the sum over all coherent states is

    ∫_{−∞}^∞ dRe(λ) ∫_{−∞}^∞ dIm(λ) |ψ_λ⟩⟨ψ_λ|   (6.6)
        = ∫_{−∞}^∞ dRe(λ) ∫_{−∞}^∞ dIm(λ) e^{−|λ|²} ∑_{n,m=0}^∞ [λⁿ(λ*)ᵐ/√(n! m!)] |ψ_n⟩⟨ψ_m| .
6.2 (a) Demanding that the ground state wavefunction is normalized, we have

    1 = N² ∫_{−∞}^∞ dx e^{−mωx²/h̄} = N² √(πh̄/(mω)) .   (6.12)

Therefore, the L²-normalized ground state wavefunction of the harmonic oscillator is

    ψ₀(x) = (mω/(πh̄))^{1/4} e^{−mωx²/(2h̄)} .   (6.13)
(b) We had determined that the first excited state wavefunction is

    ψ₁(x) = N √(2mω/h̄) x e^{−mωx²/(2h̄)} .   (6.14)
The second excited state is defined through the action of the raising operator as

    ψ₂(x) = [(â†)²/√2] ψ₀(x) = (â†/√2) ψ₁(x)   (6.15)
        = (1/√2) [−√(h̄/2mω) d/dx + √(mω/2h̄) x] N √(2mω/h̄) x e^{−mωx²/(2h̄)}
        = (N/√2) (2mωx²/h̄ − 1) e^{−mωx²/(2h̄)} .
The third excited state is

    ψ₃(x) = [(â†)³/√3!] ψ₀(x) = (â†/√3) ψ₂(x)   (6.16)
        = (1/√3) [−√(h̄/2mω) d/dx + √(mω/2h̄) x] (N/√2) (2mωx²/h̄ − 1) e^{−mωx²/(2h̄)}
        = (N/√6) √(2mω/h̄) (2mωx³/h̄ − 3x) e^{−mωx²/(2h̄)} .
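These wavefunctions can be checked for orthonormality by numerical integration. A sketch with h̄ = m = ω = 1 (so N = π^{−1/4}), with the integration window and grid chosen only for the check:

```python
import math

# psi_0 .. psi_3 of the harmonic oscillator in units hbar = m = omega = 1.
Nc = math.pi ** -0.25
psi = [
    lambda x: Nc * math.exp(-x * x / 2),
    lambda x: Nc * math.sqrt(2) * x * math.exp(-x * x / 2),
    lambda x: Nc / math.sqrt(2) * (2 * x * x - 1) * math.exp(-x * x / 2),
    lambda x: Nc / math.sqrt(6) * math.sqrt(2) * (2 * x ** 3 - 3 * x) * math.exp(-x * x / 2),
]

def inner(f, g, L=10.0, n=4000):
    # Riemann sum over [-L, L]; the Gaussian tails are negligible beyond L.
    h = 2 * L / n
    return h * sum(f(-L + i * h) * g(-L + i * h) for i in range(n + 1))

for i in range(4):
    for j in range(4):
        assert abs(inner(psi[i], psi[j]) - (1.0 if i == j else 0.0)) < 1e-6
```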
6.3 (a) The matrix elements of the raising operator â† in the basis of energy eigenstates are found in the usual way, from sandwiching with the eigenstates. That is,

    (â†)_mn = ⟨ψ_m|â†|ψ_n⟩ = ⟨ψ₀| (âᵐ/√m!) â† ((â†)ⁿ/√n!) |ψ₀⟩ .   (6.17)

As established in this chapter, this matrix element is non-zero only if m = n + 1, so there are an equal number of raising and lowering operators. In this case, note that

    [âⁿ⁺¹/√(n+1)!] [(â†)ⁿ⁺¹/√n!] = (1/√(n!(n+1)!)) dⁿ⁺¹/d(â†)ⁿ⁺¹ (â†)ⁿ⁺¹ = (n+1)!/√(n!(n+1)!) = √(n+1) .   (6.18)
(b) Note that the lowering operator is just the Hermitian conjugate of the raising operator, and so

    â = (â†)† = ∑_{n=0}^∞ √(n+1) |ψ_n⟩⟨ψ_{n+1}| .   (6.21)
We had already established the expressions for the position and momentum operators in terms of the raising and lowering operators, and so we can write:

    x̂ = √(h̄/2mω) (â + â†) = √(h̄/2mω) ∑_{n=0}^∞ √(n+1) (|ψ_n⟩⟨ψ_{n+1}| + |ψ_{n+1}⟩⟨ψ_n|) ,   (6.22)

    p̂ = i√(mh̄ω/2) (â† − â) = i√(mh̄ω/2) ∑_{n=0}^∞ √(n+1) (|ψ_{n+1}⟩⟨ψ_n| − |ψ_n⟩⟨ψ_{n+1}|) .   (6.23)
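These ladder-operator expansions can be checked by building truncated matrices and computing the canonical commutator. A sketch with h̄ = m = ω = 1; the truncation dimension D is an arbitrary choice, and the last diagonal entry of the commutator is spoiled by truncation (as the trace discussion in exercise 5.7 anticipates):

```python
import math

# Truncated matrix representations of x and p from the ladder expansions.
D = 12
x = [[0j] * D for _ in range(D)]
p = [[0j] * D for _ in range(D)]
for n in range(D - 1):
    r = math.sqrt((n + 1) / 2.0)            # sqrt(n+1) * sqrt(hbar/2m w) with units = 1
    x[n][n + 1] = x[n + 1][n] = r
    p[n][n + 1], p[n + 1][n] = -1j * r, 1j * r

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(D)) for j in range(D)] for i in range(D)]

xp, px = matmul(x, p), matmul(p, x)
for n in range(D - 1):                       # skip the truncation-spoiled corner
    assert abs((xp[n][n] - px[n][n]) - 1j) < 1e-12   # [x, p] = i hbar
```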
(c) The raising operator has exclusively non-zero entries just off the diagonal,
so its determinant is 0. The determinant is basis-independent, so in any basis
in which the raising operator is expressed, its determinant is 0.
(d) The general Laurent expansion of a function of the raising operator does
not exist. Because its determinant is 0, the inverse of ↠does not exist, so
for g(↠) to exist, we must assume that it is analytic; i.e., it has a Taylor
expansion.
6.4 (a) Using the result of Eq. 6.118, we can note that the only effect of time evo-
lution of a coherent state is to modify its eigenvalue under the lowering
operator as λ → λ e−iω t . This remains a general complex number for all
time, and one can simply replace the value of λ in the beginning of the anal-
ysis in section 6.4 to demonstrate that neither the variances of position nor
momentum are modified by including time dependence. Therefore, we still
find that
    σ_x² = h̄/(2mω) ,  σ_p² = mh̄ω/2 .   (6.24)
Further, the Heisenberg uncertainty principle is saturated for a coherent
state for all time.
(b) For a classical harmonic oscillator, if you start at rest from position x = ∆x, then the time-dependent position is sinusoidal: x(t) = ∆x cos(ωt).
From the results of the beginning of section 6.4, the time-dependent expec-
tation value of position on a coherent state is
    ⟨x̂⟩ = √(h̄/2mω) (λ e^{−iωt} + λ* e^{iωt}) .   (6.27)
If we demand that at time t = 0 this is ∆x, we have that
    ∆x = √(h̄/2mω) (λ + λ*) .   (6.28)
Correspondingly, the time-dependent expectation value of momentum is
    ⟨p̂⟩ = i√(mh̄ω/2) (λ* e^{iωt} − λ e^{−iωt}) .   (6.29)
At time t = 0, we assume that the particle is at rest, and so we enforce
that λ = λ ∗ , or that the eigenvalue of the lowering operator is initially
real-valued. Then,
    ∆x = √(2h̄/mω) λ ,   (6.30)
and
    ⟨x̂⟩ = √(h̄/2mω) λ (e^{−iωt} + e^{iωt}) = √(2h̄/mω) λ cos(ωt) = ∆x cos(ωt) .   (6.31)
The expectation value of momentum is

    ⟨p̂⟩ = i√(mh̄ω/2) λ (e^{iωt} − e^{−iωt}) = −√(2mh̄ω) λ sin(ωt) = −mω∆x sin(ωt) .   (6.32)
These expectation values are identical to the classical oscillation!
(c) A coherent state of the raising operator would satisfy the eigenvalue equation

    â†|χ⟩ = η|χ⟩ .   (6.33)

Assuming completeness of the energy eigenstates of the harmonic oscillator, this can be expanded as

    |χ⟩ = ∑_{n=0}^∞ β_n |ψ_n⟩ = ∑_{n=0}^∞ β_n [(â†)ⁿ/√n!] |ψ₀⟩ .   (6.34)
Further, note that there is no contribution to the ground state |ψ0 ⟩ on the
left, so we must require that β0 = 0. However, with the recursion relation,
this then implies that β1 = 0, β2 = 0, etc. So, there is actually no state on
the Hilbert space of the harmonic oscillator that can satisfy the eigenvalue
equation for the raising operator. The reason for this is essentially the same
as why we must forbid negative eigenvalues of the Hamiltonian. If there were one eigenstate of the raising operator, then there would have to be arbitrarily many of them, with ever more negative expectation values of energy.
6.5 (a) Let's just evaluate the product Â†Â with the information given:

    Â†Â = [−(i/√2m) p̂ + W(x̂)] [(i/√2m) p̂ + W(x̂)]   (6.37)
        = p̂²/2m + (i/√2m) [W(x̂), p̂] + W(x̂)² .
Now, recall that

    [W(x̂), p̂] = ih̄ dW/dx̂ ,   (6.38)

so we have

    Â†Â = p̂²/2m − (h̄/√2m) dW(x̂)/dx̂ + W(x̂)² = Ĥ − E₀ = p̂²/2m + V(x̂) − E₀ .   (6.39)

Here we used the fact that the Hamiltonian is the sum of the kinetic and potential energies. We can then rearrange and solve for the potential:

    V(x̂) = −(h̄/√2m) dW(x̂)/dx̂ + W(x̂)² + E₀ .   (6.40)
(b) To evaluate the commutator [Â, Â†], we need the product in the opposite ordering as well:

    ÂÂ† = [(i/√2m) p̂ + W(x̂)] [−(i/√2m) p̂ + W(x̂)]   (6.42)
        = p̂²/2m − (i/√2m) [W(x̂), p̂] + W(x̂)²
        = p̂²/2m + (h̄/√2m) dW(x̂)/dx̂ + W(x̂)² .
The commutator is then

    [Â, Â†] = h̄ √(2/m) dW(x̂)/dx̂ .   (6.43)
W (x̂) = α x̂ , (6.47)
for some constant α. Plugging this into the expression for the potential, we have

    W(x̂)² − (h̄/√2m) dW(x̂)/dx̂ + E₀ = α²x̂² − h̄α/√2m + h̄ω/2 = (mω²/2) x̂² .   (6.48)

Demanding that the constant terms cancel each other sets α to be

    α = ω √(m/2) .   (6.49)
2
Note that this is also consistent with the value of α from matching the
quadratic terms.
Further, note that with this superpotential, the operators Â and Â† for the harmonic oscillator are proportional to the raising and lowering operators â and â†. The anti-commutator of these operators is

    {Â, Â†} = p̂²/m + 2W(x̂)² = p̂²/m + 2(mω²/2) x̂² = 2Ĥ .   (6.50)
So, we didn’t need the anti-commutator for the harmonic oscillator because
it is proportional to the Hamiltonian itself.
(e) The potential of the infinite square well is 0 in the well, and so the superpotential satisfies:

    −(h̄/√2m) dW(x̂)/dx̂ + W(x̂)² + E₀ = 0 .   (6.51)

The ground state energy of the infinite square well is

    E₀ = π²h̄²/(2ma²) ,   (6.52)
We can determine the action of the number operator N̂ on this state, noting that

    N̂|n⟩ = n|n⟩ .   (6.61)

That is,

    N̂|θ⟩ = β₀ ∑_{n=0}^∞ e^{inθ} N̂|n⟩ = β₀ ∑_{n=0}^∞ n e^{inθ} |n⟩ = −i (d/dθ) |θ⟩ .   (6.62)
    c_θ = c₀ e^{−inθ} ,   (6.65)
(b) We just showed that, in the phase eigenstate basis, the number operator is a derivative. Therefore, we immediately know that the commutator is [N̂, θ̂] = −i.
Now, this is a bit quick because it ignores the n = 0 subtlety. On the state
|0⟩, the phase θ is ill-defined, or, the representation of the number operator
as a derivative breaks down. Specifically, on |0⟩, the phase can be anything
and has no relationship to the properties of the number operator state. Cor-
respondingly, the variances of the number and phase operators on this state
are completely unrelated: |0⟩ is an eigenstate of the number operator and
so has 0 variance, while it is flat or uniform in the eigenstates of the phase
operator. As such the product of variances can be 0, and there exists no
non-trivial uncertainty relation.
6.7 (a) To prove that f (↠) is not unitary, all we have to do is to show that there
is one counterexample. For concreteness, let’s just take f (↠) = ↠and the
state |ψ ⟩ = |ψ1 ⟩, the first excited state of the harmonic oscillator. Then, we
have
    f(â†)|ψ⟩ = â†|ψ₁⟩ = (â†)²|ψ₀⟩ = √2 |ψ₂⟩ ,   (6.68)

a factor of √2 larger than the second excited state.
tains normalization, but in general ↠does not, as exhibited in this example.
Therefore, a general operator of the form of an analytic function of the
raising operator f (↠) is not unitary.
(b) We would like to construct a unitary operator exclusively from the raising
and lowering operators that leaves the action on the ground state of the
harmonic oscillator unchanged. To do this, we will start from the knowl-
edge that we can always write a unitary operator Û as the exponential of a
Hermitian operator T̂ :
Û = eiT̂ . (6.69)
(c) Now, for the operator that generates coherent states from acting on the
ground state. Again, the eigenvalue equation for the coherent state |χ ⟩ is
â|χ ⟩ = λ |χ ⟩ . (6.72)
Let’s now express the coherent state as a unitary operator Û acting on the
ground state, where
where we have used â|ψ0 ⟩ = 0. Matching terms at each order in n, this then
implies the recursion relation
    [[â, (iT̂)ⁿ⁺¹]/(n+1)!] |ψ₀⟩ = λ [(iT̂)ⁿ/n!] |ψ₀⟩ .   (6.75)
By linearity of all of the operators, this rearranges to
The commutator therefore acts exactly like a derivative, which suggests that
T̂ is formed from the raising operator, as established earlier:
T̂ ∼ −iλ ↠, (6.77)
however, this is not Hermitian. This can be fixed easily, by including the
appropriate factor of â:
T̂ = −iλ ↠+ iλ ∗ â . (6.78)
We can prove the recursion relation with induction. First, it works if n = 0, for which

    [â, T̂]|ψ₀⟩ = â (−iλ â† + iλ* â) |ψ₀⟩ = −iλ |ψ₀⟩ .   (6.79)
Now, assuming it is true for n − 1, for n we have

    â T̂ⁿ⁺¹ |ψ₀⟩ = ([â, T̂] T̂ⁿ + T̂ â T̂ⁿ) |ψ₀⟩ = (−iλ T̂ⁿ + T̂(−iλ n T̂ⁿ⁻¹)) |ψ₀⟩ = −iλ(n+1) T̂ⁿ |ψ₀⟩ ,   (6.80)
which is what we wanted to prove. Therefore the unitarized operator that
produces coherent states from the ground state is:
    Û = e^{iT̂} = e^{λâ† − λ*â} .   (6.81)
This is typically called the displacement operator.
6.8 (a) This slightly displaced state |ψ_ϵ⟩ is still normalized, and so we must enforce

    ⟨ψ_ϵ|ψ_ϵ⟩ = 1 = (⟨ψ| + ϵ⟨ϕ|)(|ψ⟩ + ϵ|ϕ⟩)   (6.82)
        = ⟨ψ|ψ⟩ + ϵ(⟨ψ|ϕ⟩ + ⟨ϕ|ψ⟩) + O(ϵ²)
        = 1 + ϵ(⟨ψ|ϕ⟩ + ⟨ϕ|ψ⟩) + O(ϵ²) .
Therefore, for this equality to hold at least through linear order in ϵ , we must
demand that |ψ ⟩ and |ϕ ⟩ are orthogonal:
⟨ψ |ϕ ⟩ = 0 . (6.83)
(b) To take this derivative, we need to evaluate the variances on the state |ψϵ ⟩
through linear order in ϵ . Here, we will present the most general solutions
with non-zero values for the expectation values. First, for two Hermitian
operators Â, B̂, we have (ignoring terms beyond linear order in ϵ )
    ⟨ψ_ϵ|Â|ψ_ϵ⟩⟨ψ_ϵ|B̂|ψ_ϵ⟩ = [⟨ψ|Â|ψ⟩ + ϵ(⟨ϕ|Â|ψ⟩ + ⟨ψ|Â|ϕ⟩)] [⟨ψ|B̂|ψ⟩ + ϵ(⟨ϕ|B̂|ψ⟩ + ⟨ψ|B̂|ϕ⟩)]   (6.84)
        = ⟨Â⟩⟨B̂⟩ + 2ϵ⟨Â⟩Re(⟨ϕ|B̂|ψ⟩) + 2ϵ⟨B̂⟩Re(⟨ϕ|Â|ψ⟩) ,

where ⟨Â⟩ = ⟨ψ|Â|ψ⟩, for example. Then, in the derivative, the first term cancels, leaving

    lim_{ϵ→0} (1/ϵ) [⟨ψ_ϵ|Â|ψ_ϵ⟩⟨ψ_ϵ|B̂|ψ_ϵ⟩ − ⟨ψ|Â|ψ⟩⟨ψ|B̂|ψ⟩] = 2⟨Â⟩Re(⟨ϕ|B̂|ψ⟩) + 2⟨B̂⟩Re(⟨ϕ|Â|ψ⟩) .   (6.85)
(c) Now, inserting this into the expression for the derivative and demanding that it vanishes for arbitrary |ϕ⟩ enforces the constraint

    (⟨Â⟩B̂ + ⟨B̂⟩Â)|ψ⟩ = λ|ψ⟩ .   (6.86)
We note that this is an eigenvalue equation because we had established that
|ψ ⟩ and |ϕ ⟩ are orthogonal. So for the derivative to vanish, all we need is
that the action of the  and B̂ operators produces something proportional
to |ψ ⟩ again.
Now, we can insert the corresponding position and momentum operators in
for  and B̂. The operators are
 = (x̂ − ⟨x̂⟩)2 , B̂ = ( p̂ − ⟨ p̂⟩)2 . (6.87)
Note also that

    ⟨Â⟩ = σ_x² ,  ⟨B̂⟩ = σ_p² = h̄²/(4σ_x²) ,   (6.88)
using the saturated uncertainty principle. The eigenvalue equation can then be expressed as

    [σ_p²(x̂ − ⟨x̂⟩)² + σ_x²(p̂ − ⟨p̂⟩)²]|ψ⟩ = λ|ψ⟩ .   (6.89)

The eigenvalue λ can be determined by using the saturated uncertainty principle, noting that

    ⟨ψ| σ_p²(x̂ − ⟨x̂⟩)² + σ_x²(p̂ − ⟨p̂⟩)² |ψ⟩ = 2σ_x²σ_p² = h̄²/2 = λ⟨ψ|ψ⟩ = λ .   (6.90)

Then, the eigenvalue is

    λ = h̄²/2 .   (6.91)
This is a familiar equation that we might want to factorize. Let's call

    b̂ = σ_p(x̂ − ⟨x̂⟩) + iσ_x(p̂ − ⟨p̂⟩) ,  b̂† = σ_p(x̂ − ⟨x̂⟩) − iσ_x(p̂ − ⟨p̂⟩) .   (6.92)

Now, note that the product b̂†b̂ is

    b̂†b̂ = [σ_p(x̂ − ⟨x̂⟩) − iσ_x(p̂ − ⟨p̂⟩)][σ_p(x̂ − ⟨x̂⟩) + iσ_x(p̂ − ⟨p̂⟩)]   (6.93)
        = σ_p²(x̂ − ⟨x̂⟩)² + σ_x²(p̂ − ⟨p̂⟩)² + iσ_xσ_p[x̂, p̂]
        = σ_p²(x̂ − ⟨x̂⟩)² + σ_x²(p̂ − ⟨p̂⟩)² − h̄²/2 .
Note that up to a rescaling, the operators b̂, b̂† are the familiar raising and
lowering operators, just translated by their expectation values. Then, the
eigenvalue equation can be expressed as
    (b̂†b̂ + h̄²/2)|ψ⟩ = (h̄²/2)|ψ⟩ ,   (6.94)
or simply b̂|ψ ⟩ = 0. That is, this eigenvalue equation is exactly the coherent
state equation, translated in space away from ⟨x̂⟩ = 0 and in momentum
away from ⟨ p̂⟩ = 0. The solutions of this eigenvalue equation are therefore
the coherent states with appropriate expectation values.
(d) This becomes very clear when we consider ⟨x̂⟩ = ⟨p̂⟩ = 0. In this case, the eigenvalue equation reduces to

    (σ_p² x̂² + σ_x² p̂²)|ψ⟩ = (h̄²/2)|ψ⟩ .   (6.95)

This is exactly of the form of the eigenvalue equation for the Hamiltonian of the harmonic oscillator. We can verify this by dividing everything by 2mσ_x²:

    [σ_p²/(2mσ_x²) x̂² + p̂²/2m]|ψ⟩ = [h̄²/(8mσ_x⁴) x̂² + p̂²/2m]|ψ⟩ = [h̄²/(4mσ_x²)]|ψ⟩ .   (6.96)
7 Quantum Mechanical Example: The Free Particle

Exercises
7.1 (a) The lowering operator can be expressed in the momentum basis as

    â = (i/√(2mh̄ω)) p̂ + √(mω/2h̄) x̂ = (i/√(2mh̄ω)) p + i√(mh̄ω/2) d/dp .   (7.1)

Acting on the wavefunction in momentum space, we have

    â g(p) = [(i/√(2mh̄ω)) p + i√(mh̄ω/2) d/dp] g(p) = λ g(p) ,   (7.2)

or in differential form

    dg/g = [−iλ√(2/(mh̄ω)) − p/(mh̄ω)] dp .   (7.4)
Further, bound states can only exist if V₀ < 0, so we can replace V₀ = −|V₀|, where

    m|V₀| − p² − p√(2m|V₀| − p²) cot[(a/h̄)√(2m|V₀| − p²)] = 0 .   (7.15)

That is,

    tan[(a/h̄)√(2m|V₀| − p²)] = p√(2m|V₀| − p²) / (m|V₀| − p²) .   (7.16)
Tangent diverges whenever its argument is an odd multiple of π /2, so we
would expect in general that there are many momentum solutions p to this
equation. However, for a very shallow potential, |V0 | → 0, it’s less clear.
To see what happens for a shallow potential, note that in this limit the argument of tangent is also very small, so we can approximate

    lim_{|V₀|→0} tan[(a/h̄)√(2m|V₀| − p²)] = (a/h̄)√(2m|V₀| − p²) .   (7.17)
Then, in this limit, the bound state solutions satisfy

    (a/h̄)√(2m|V₀| − p²) = p√(2m|V₀| − p²) / (m|V₀| − p²) ,   (7.18)

or that

    m|V₀| − p² = (h̄/a) p .   (7.19)
Now, this is just a quadratic equation so we know how to explicitly solve it,
but I want to do something a bit different here. We are considering the limit
in which |V0 | → 0, and that can be imposed by rescaling |V0 | → λ |V0 |, and
then taking λ → 0, but keeping |V0 | fixed. We are also assuming that mass m
and width a are just finite parameters, so they do not scale with λ . However,
the bound state momentum p must scale with λ , so that the particle indeed
stays in the well. A priori, we do not know this scaling, so we will just make
the replacement p → λ β p, for some β > 0. With the introduction of λ , the
equation becomes

    mλ|V₀| − λ^{2β} p² = (h̄/a) λ^β p .   (7.20)
Now, in the λ → 0 limit, we can just keep terms that could possibly contribute at leading order. So, we ignore the p² term and have

    mλ|V₀| = (h̄/a) λ^β p ,   (7.21)

which requires that β = 1. Then, the (magnitude of the) bound state momentum is

    p = ma|V₀|/h̄ .   (7.22)
Regardless of the shallowness of the potential, this bound state always exists.
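The shallow-well estimate can be checked by solving the full transcendental equation numerically. A sketch (h̄ = m = a = 1, with the well depth, bracketing interval, and tolerance chosen only for this check):

```python
import math

# Solve tan(sqrt(2|V0| - p^2)) = p*sqrt(2|V0| - p^2)/(|V0| - p^2) by bisection
# (units hbar = m = a = 1) and compare to the shallow-well estimate p = |V0|.
V0 = 0.01  # shallow well depth

def f(p):
    q = math.sqrt(2 * V0 - p * p)
    return math.tan(q) - p * q / (V0 - p * p)

lo, hi = 1e-9, math.sqrt(V0) - 1e-9   # f(lo) > 0, f(hi) < 0 brackets the root
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
p_root = 0.5 * (lo + hi)
assert abs(p_root - V0) / V0 < 0.05   # within a few percent of p = m a |V0|/hbar
```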
(c) Now, if instead |V₀| → ∞, we expect that there is a solution to the bound state equation whenever tangent diverges. That is, whenever

    (a/h̄)√(2m|V₀| − p²) = (2n − 1)π/2 ,   (7.23)

for n = 1, 2, 3, . . . . This can be rearranged to produce
A = H1 + iH2 , (7.26)
or that
This is the equation for a circle centered at (xn , yn ) = (0, 1) with radius 1.
(c) The interaction matrix for the narrow potential is

    M̂ = − [(maV₀/h̄)/(p + i maV₀/h̄)] ( 1 1 ; 1 1 ) .   (7.31)
Now, let's verify that these two eigenvalues lie on the Argand diagram. Indeed, a zero eigenvalue satisfies x_n² + (y_n − 1)² = 1, but it is only a single point on the circle, (x_n, y_n) = (0, 0). For the non-trivial eigenvalue, we can write it in a more compact form, where
    λ = −2p₀p/(p² + p₀²) + i 2p₀²/(p² + p₀²) ,   (7.33)

where

    p₀ = maV₀/h̄ .   (7.34)
This eigenvalue can indeed take any value on the Argand circle for p ∈
(−∞, ∞).
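The circle condition can be spot-checked directly. A sketch; the value of p₀ and the sample momenta are arbitrary choices for the check:

```python
# Check that lambda(p) = -2 p0 p/(p^2+p0^2) + 2i p0^2/(p^2+p0^2)
# lies on the circle centered at (0, 1) with radius 1 for any real p.
p0 = 1.7  # stands in for m a V0 / hbar
for p in [-10.0, -2.0, -0.3, 0.0, 0.5, 3.0, 40.0]:
    lam = complex(-2 * p0 * p, 2 * p0 ** 2) / (p ** 2 + p0 ** 2)
    assert abs(lam.real ** 2 + (lam.imag - 1) ** 2 - 1.0) < 1e-12
```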
7.4 (a) L²-normalization for this wave packet in momentum space is

    ∫_{−∞}^∞ dp |g(p)|² = ∫_{−∞}^∞ dp (1/√(πσ_p²)) e^{−(p−p₀)²/σ_p²} .   (7.35)
    g_T(p) = [p/(p + i maV₀/h̄)] [e^{ipx₀/h̄}/(πσ_p²)^{1/4}] e^{−(p−p₀)²/(2σ_p²)} .   (7.45)
Then, using the result for the Fourier transform of the initial wave packet, we then have

    ψ_R(x) = (maV₀/h̄²) √(σ_p/(√π h̄)) ∫₀^∞ dx₀ e^{−i(x+x₀)p₀/h̄ − maV₀x₀/h̄² − (x+x₀)²σ_p²/(2h̄²)} .   (7.47)
√
Then, with p = 2mE where the potential is zero, the translation operator
across the potential is
√ √ √
(x1 −a) 2mE a 2m(E−V0 ) x0 2mE
Û(x0 , x1 ) = ei h̄ ei h̄ e−i h̄ . (7.53)
(c) Taking this δ-function potential limit, we can simplify the translation operator in a few ways. First, for fixed and finite energy E, aE → 0 and so

    lim_{a→0, V₀→∞} Û(x₀, x₁) = lim_{a→0, V₀→∞} e^{i(x₁−x₀)√(2mE)/h̄} e^{−a√(2m(V₀−E))/h̄}   (7.54)
        = e^{i(x₁−x₀)√(2mE)/h̄} .
This would seem to suggest that the δ -function potential has no effect on
the translation operator of a free particle. However, we know this cannot be
true from our analysis of the S-matrix for this system.
So, let's try another route. Let's go back to the differential equation for the translation operator and actually take the second derivative:

    d²/d∆x² Û(x, x+∆x) = −[p̂(x+∆x)²/h̄²] Û(x, x+∆x) .   (7.55)
We will consider x = 0 and ∆x = a, for very small displacement a, so that we
focus right on the region where the potential is. Right around the potential,
the squared momentum for a fixed energy E state is
(c) With this insight, let's determine the Hausdorff dimension of the trajectory of a quantum mechanical particle with 0 expectation value of momentum, ⟨p̂⟩ = 0. With this expectation value, the saturated Heisenberg uncertainty principle is

    ⟨p̂²⟩σ_x² = h̄²/4 .   (7.68)
Note that the variance of position σ_x² is a measure of the squared step size ∆x² away from the mean position, so we set σ_x² = ∆x². Further, the characteristic squared momentum depends on the spatial and temporal step sizes:

    ⟨p̂²⟩ = m² ∆x²/∆t² .   (7.69)

Then, the Heisenberg uncertainty principle becomes

    m (∆x/∆t) ∆x = h̄/2 .   (7.70)
Next, let's multiply and divide by the total number of steps N. We then have

    m (N∆x/(N∆t)) ∆x = m (l/T) ∆x = h̄/2 ,   (7.71)

using the relationship T = N∆t and that the total, resolution-dependent length is l = N∆x. Thus, this can be rearranged into

    (h̄/2m) T ≡ L = l ∆x .   (7.72)
Everything on the left is independent of resolution ∆x, and so comparing to
the definition of the Hausdorff length, we find that the Hausdorff dimension
of the trajectory of a quantum mechanical particle is D = 2. That is, the
trajectory is an area-filling curve.
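This scaling can be illustrated with a simulated Brownian trajectory. The simulation below is an illustrative sketch, not from the text: the seed, path length, strides, and tolerance are all arbitrary choices.

```python
import random

# A random-walk trajectory has Hausdorff dimension 2: the measured length l
# at resolution dx scales like 1/dx, so the product l*dx should be roughly
# resolution independent.
random.seed(5)
n = 2 ** 18
path = [0.0]
for _ in range(n):
    path.append(path[-1] + random.gauss(0.0, 1.0))

products = []
for stride in (16, 64, 256):
    pts = path[::stride]
    l = sum(abs(b - a) for a, b in zip(pts, pts[1:]))            # coarse length
    dx = (sum((b - a) ** 2 for a, b in zip(pts, pts[1:])) / len(pts)) ** 0.5  # rms step
    products.append(l * dx)

# l*dx should agree across resolutions to within statistical fluctuations
assert max(products) / min(products) < 1.3
```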
(d) With a non-zero expectation value of momentum, the variance is now

    σ_p² = ⟨p̂²⟩ − ⟨p̂⟩² = ⟨p̂²⟩ − p₀² .   (7.73)

Just as earlier, the expectation value of the squared momentum is determined by the magnitude of the fluctuations:

    ⟨p̂²⟩ = m² ∆x²/∆t² = m² l²/T² .   (7.74)
With the saturated Heisenberg uncertainty principle, note that the variance of momentum is

    σ_p² = h̄²/(4σ_x²) = h̄²/(4∆x²) .   (7.75)

Then, we have that

    σ_p² = h̄²/(4∆x²) = ⟨p̂²⟩ − p₀² = m²l²/T² − p₀² ,   (7.76)
    S = A_R = e^{iϕ} ,   (7.81)

where A_R is the reflection amplitude, and we used our result from part (a) to write it as an effective phase. ϕ is just some overall, x-independent phase, irrelevant for fixed-momentum scattering, so we can ignore the overall phase factor. That is, we can consider

    ψ_i(x) = 2 cos(px/h̄ − ϕ/2) .   (7.83)
In the region of finite potential, we can express the momentum state as

    ψ_II(x) = α e^{ix√(2m(E−V₀))/h̄} + β e^{−ix√(2m(E−V₀))/h̄} ,   (7.84)

where the energy E is set by the momentum in the region with no potential:

    E = p²/(2m) .   (7.85)
Because of the infinite potential barrier at x = 0, we must enforce that the momentum state vanishes there (as usual, by Hermiticity of momentum). This then enforces that β = −α, and so

    ψ_II(x) = α (e^{ix√(2m(E−V₀))/h̄} − e^{−ix√(2m(E−V₀))/h̄}) = 2iα sin[x√(2m(E−V₀))/h̄] .   (7.86)

Now, we can match the momentum states at x = −a:

    ψ_I(−a) = ψ_II(−a)  →  cos(ap/h̄ + ϕ/2) = −iα sin[(a/h̄)√(p² − 2mV₀)] .   (7.87)
There are some interesting limits to study. First, if p → 0, then the argument of arctangent diverges, corresponding to a value of π/2. Thus the phase ϕ in this limit is

    lim_{p→0} ϕ = −π ,   (7.92)
8 Rotations in Three Dimensions

Exercises
8.1 (a) To show that the determinant of a rotation matrix is 1, we make the following observations. The determinant is basis-independent, and is just the product of the eigenvalues of the matrix. For the 2 × 2 matrix we are studying here, there are therefore two eigenvalues λ₁, λ₂, for which

    det U = λ₁λ₂ .   (8.1)
which is just the sum of the eigenvalues of the logarithm of U. The sum of the eigenvalues of a matrix is its trace, and therefore

    log det U = tr log U = (i/h̄) tr(θ_x Ŝ_x + θ_y Ŝ_y + θ_z Ŝ_z) .   (8.4)

The spin operators Ŝ_i are proportional to the Pauli matrices, and all Pauli matrices have 0 trace:

    tr σ_i = 0 ,   (8.5)
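The determinant claim can be checked numerically. A sketch using the closed form exp(i(θ/2) n̂·σ) = cos(θ/2) I + i sin(θ/2) n̂·σ, which holds because (n̂·σ)² = I for a unit vector n̂; the random sampling is an arbitrary choice for the check:

```python
import math, random

# det U = 1 for U = exp(i theta n.sigma / 2), checked for random axes/angles.
random.seed(1)
for _ in range(20):
    th = random.uniform(0, 2 * math.pi)
    nx, ny, nz = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / r, ny / r, nz / r
    c, s = math.cos(th / 2), math.sin(th / 2)
    # U = c*I + i*s*(n.sigma), written out entry by entry
    U = [[c + 1j * s * nz, 1j * s * (nx - 1j * ny)],
         [1j * s * (nx + 1j * ny), c - 1j * s * nz]]
    det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
    assert abs(det - 1.0) < 1e-12
```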
A similar calculation follows for ⃗v₂. Next, we are asked to fix the phases ξ₁, ξ₂, ξ₃, ξ₄ so that ⃗v₁ and ⃗v₂ are orthogonal. That is,

    0 = ⃗v₁†⃗v₂ = (e^{−iξ₁} cos θ , e^{−iξ₂} sin θ) · (−e^{iξ₃} sin θ , e^{iξ₄} cos θ)   (8.8)
        = (−e^{−i(ξ₁−ξ₃)} + e^{−i(ξ₂−ξ₄)}) sin θ cos θ .
Now, we can establish the unitarity of the matrix formed from the vectors ⃗v₁, ⃗v₂ by matrix multiplication. We multiply

    U†U = ( ⃗v₁† ; ⃗v₂† ) ( ⃗v₁  ⃗v₂ ) = ( ⃗v₁†⃗v₁  ⃗v₁†⃗v₂ ; ⃗v₂†⃗v₁  ⃗v₂†⃗v₂ ) = ( 1 0 ; 0 1 ) ,   (8.11)
(e) Unit normalization of the vector ⃗v₁ in this complex Cartesian expression implies that

    ⃗v₁†⃗v₁ = 1 = (a₁₁ − ib₁₁)(a₁₁ + ib₁₁) + (a₂₁ − ib₂₁)(a₂₁ + ib₂₁) = a₁₁² + b₁₁² + a₂₁² + b₂₁² .   (8.19)
To evaluate this, we can pick specific values for i, j, k and note that the result is completely invariant to permutations of i, j, k. First, if i = j = k, every term in the sum has a factor of ϵ_{iil} = 0, and so we find 0. If j = k but i ≠ j, then the sum reduces to
(b) If the Lie bracket is the commutator, then the cyclic sum of the nested commutators is

    [Â, [B̂, Ĉ]] + [Ĉ, [Â, B̂]] + [B̂, [Ĉ, Â]]   (8.24)
        = [Â, B̂Ĉ − ĈB̂] + [Ĉ, ÂB̂ − B̂Â] + [B̂, ĈÂ − ÂĈ]
        = ÂB̂Ĉ − ÂĈB̂ − B̂ĈÂ + ĈB̂Â + ĈÂB̂ − ĈB̂Â − ÂB̂Ĉ + B̂ÂĈ + B̂ĈÂ − B̂ÂĈ − ĈÂB̂ + ÂĈB̂
        = 0 ,

because all terms cancel pairwise.
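The pairwise cancellation can be spot-checked with random matrices. A sketch; the dimension and random seed are arbitrary choices for the check:

```python
import random

# Spot-check the Jacobi identity [A,[B,C]] + [C,[A,B]] + [B,[C,A]] = 0
# for random 3x3 real matrices.
random.seed(7)
D = 3
rnd = lambda: [[random.uniform(-1, 1) for _ in range(D)] for _ in range(D)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(D)) for j in range(D)] for i in range(D)]

def comm(X, Y):
    XY, YX = mul(X, Y), mul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(D)] for i in range(D)]

A, B, C = rnd(), rnd(), rnd()
J1, J2, J3 = comm(A, comm(B, C)), comm(C, comm(A, B)), comm(B, comm(C, A))
J = [[J1[i][j] + J2[i][j] + J3[i][j] for j in range(D)] for i in range(D)]
assert all(abs(J[i][j]) < 1e-12 for i in range(D) for j in range(D))
```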
8.3 First, note that the trace of the Casimir of a D-dimensional representation is

    tr(C_R I_D) = D C_R = tr[(L̂_x^{(R)})² + (L̂_y^{(R)})² + (L̂_z^{(R)})²] .   (8.25)
By the definition of the lowering operator, the state L̂− |ℓ, m⟩ ∝ |ℓ, m − 1⟩, when
m > −ℓ. Therefore, the highest state in the sum on the left is |ℓ, ℓ − 1⟩, while the
highest state on the right is |ℓ, ℓ⟩. Therefore, we must enforce that λ βℓ = 0. There
are two possibilities. If we set βℓ = 0, then the recursive nature of this relationship
sets βℓ−1 = βℓ−2 = · · · = β−ℓ+1 = 0. Thus, the coherent state would simply be the
state with the lowest angular momentum eigenvalue, |ψ ⟩ = |ℓ, −ℓ⟩ with λ = 0. By
contrast, if we fix λ = 0, then we again find this state. Therefore, the only coherent
state of angular momentum is the state of the lowest value of z-component.
Note that in the case of the harmonic oscillator coherent states, there is no
upper limit to the energy eigenvalue, so there was no issue with the matching of
states in the coherent state eigenvalue equation. The finite number of angular
momentum states forbids the existence of coherent states.
8.5 (a) The Hamiltonian for a spin-1/2 particle immersed in a magnetic field was

    Ĥ = (eB₀/m_e) Ŝ_z .   (8.32)

The commutator of the Hamiltonian and the spin operator Ŝ_x is then

    [Ĥ, Ŝ_x] = (eB₀/m_e) [Ŝ_z, Ŝ_x] = i (eh̄B₀/m_e) Ŝ_y .   (8.33)
Therefore, the uncertainty principle that the energy and x-component of angular momentum satisfy is

    σ_E² σ_{S_x}² ≥ |⟨[Ĥ, Ŝ_x]⟩/(2i)|² = (e²h̄²B₀²/(4m_e²)) |⟨Ŝ_y⟩|² .   (8.34)
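The commutator that sets this bound is easy to verify with explicit matrices. A sketch with h̄ = 1 and S_i = σ_i/2:

```python
# Verify [S_z, S_x] = i hbar S_y for spin-1/2 (hbar = 1).
sx = [[0, 0.5], [0.5, 0]]
sy = [[0, -0.5j], [0.5j, 0]]
sz = [[0.5, 0], [0, -0.5]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

zx, xz = mul(sz, sx), mul(sx, sz)
for i in range(2):
    for j in range(2):
        assert abs((zx[i][j] - xz[i][j]) - 1j * sy[i][j]) < 1e-12
```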
(b) When the expectation value of the y-component of angular momentum van-
ishes, ⟨Ŝy ⟩ = 0, the lower bound in the uncertainty principle is 0. Having an
expectation value of 0 means that the state |ψ ⟩ is in a linear combination of
up and down spin with coefficients of equal magnitude:
|ψ⟩ = (1/√2) ( e^{iϕ} |↑_y⟩ + e^{−iϕ} |↓_y⟩ ) , (8.35)

for some phase ϕ. In terms of eigenstates of Ŝ_z, we can express eigenstates of Ŝ_y as

|↑_y⟩ = (1/√2) ( |↑⟩ + i|↓⟩ ) , |↓_y⟩ = (1/√2) ( |↑⟩ − i|↓⟩ ) . (8.36)
The state |ψ⟩ in terms of the eigenstates of Ŝ_z is then

|ψ⟩ = (1/2) ( (e^{iϕ} + e^{−iϕ}) |↑⟩ + i (e^{iϕ} − e^{−iϕ}) |↓⟩ ) = cos ϕ |↑⟩ − sin ϕ |↓⟩ . (8.37)
8.6 To prove the Schouten identity, we will work in the basis of eigenstates of the spin
operator Ŝz . First, we can always rotate the z-axis to lie along the direction of one
of the spins. So we can, without loss of generality, choose |ψ ⟩ = |↑⟩. For the other
spinors, we can write
|ρ ⟩ = a1 | ↑⟩ + b1 | ↓⟩ , (8.38)
|χ ⟩ = a2 | ↑⟩ + b2 | ↓⟩ , (8.39)
|η ⟩ = a3 | ↑⟩ + b3 | ↓⟩ , (8.40)
for some complex coefficients a_i, b_i. The two products of inner products of spinors
on the left of the Schouten identity are then

⟨ψ|η⟩⟨χ|ρ⟩ = a₃ (a₂* a₁ + b₂* b₁) , ⟨ψ|ρ⟩⟨χ|η⟩ = a₁ (a₂* a₃ + b₂* b₃) . (8.41)
Then, we need to determine how the Pauli matrix acts on spinors. Note that

iσ₂ = ( 0 1 ; −1 0 ) . (8.42)

(⟨ρ|)* iσ₂ |η⟩ ⟨ψ| iσ₂ (|χ⟩)* = (a₁b₃ − b₁a₃) b₂* . (8.44)
Putting this together with the other spinor product on the right side of the
equation, we find
⟨ψ|η⟩⟨χ|ρ⟩ + (⟨ρ|)* iσ₂ |η⟩ ⟨ψ| iσ₂ (|χ⟩)* = a₃ (a₂* a₁ + b₂* b₁) + (a₁b₃ − b₁a₃) b₂*
  = a₁ (a₂* a₃ + b₂* b₃) = ⟨ψ|ρ⟩⟨χ|η⟩ . (8.45)
The trace of the commutator is 0 by the cyclic property of the trace. By the
orthogonality of  and B̂, the trace of their product is 0, and so this requires
that
iα trÂ2 = 0 . (8.47)
(c) Let’s first construct the matrices that implement rotation by π /2 about the
x and y axes. We have
U_x(π/2) = ( 1 0 0 ; 0 0 −1 ; 0 1 0 ) , U_y(π/2) = ( 0 0 1 ; 0 1 0 ; −1 0 0 ) . (8.58)
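These matrices can be spot-checked against the general rotation matrices R_x(θ) and R_y(θ) evaluated at θ = π/2 (a check added here, not part of the printed solution):

```python
import numpy as np

theta = np.pi / 2
Rx = np.array([[1, 0, 0],
               [0, np.cos(theta), -np.sin(theta)],
               [0, np.sin(theta), np.cos(theta)]])
Ry = np.array([[np.cos(theta), 0, np.sin(theta)],
               [0, 1, 0],
               [-np.sin(theta), 0, np.cos(theta)]])

Ux = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
Uy = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])

print(np.allclose(Rx, Ux), np.allclose(Ry, Uy))  # True True
print(Ux @ np.array([0, 1, 0]))  # the y-axis is rotated onto the z-axis: [0 0 1]
```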
9 The Hydrogen Atom

Exercises
∂L_k/∂r_l = Σ_{j=1}^{3} ϵ_{ljk} p_j , ∂L_k/∂p_l = Σ_{j=1}^{3} ϵ_{jlk} r_j ,

because

Σ_{j,l=1}^{3} ϵ_{jlk} r_l r_j = Σ_{j,l=1}^{3} ϵ_{ljk} p_j p_l = 0 , (9.14)
∂H/∂r_l = ∂V/∂r_l = (r_l/r) ∂V/∂r = −(e²/4πϵ₀) (r_l/r) ∂/∂r (1/r) = (e²/4πϵ₀) (r_l/r³) , (9.15)

where we have inserted the explicit expression for the Hamiltonian. Then,
the Poisson bracket of the Hamiltonian and this component of the Laplace–
Runge–Lenz vector is
{H, A_k} = Σ_{l=1}^{3} [ (e²/4πϵ₀) (r_l/r³) (2r_k p_l − r_l p_k − ⃗r·⃗p δ_{kl})/m_e (9.17)
    − (p_l/m_e) ( (e²/4πϵ₀) (r_k r_l − r² δ_{kl})/r³ + (|⃗p|² δ_{kl} − p_k p_l)/m_e ) ]
  = −(1/m_e²) ( |⃗p|² p_k − p_k Σ_{l=1}^{3} p_l² )
    + (e²/4πϵ₀ m_e) (1/r³) ( Σ_{l=1}^{3} (r_k r_l p_l − p_k r_l²) − ⃗r·⃗p r_k + r² p_k )
  = 0 ,

where

⃗r·⃗p = Σ_{l=1}^{3} r_l p_l . (9.18)
{A_i, A_j} = Σ_{k=1}^{3} [ ( (e²/4πϵ₀) (r_i r_k − r² δ_{ik})/r³ + (|⃗p|² δ_{ik} − p_i p_k)/m_e ) (2r_j p_k − r_k p_j − ⃗r·⃗p δ_{jk})/m_e (9.19)
    − ( (e²/4πϵ₀) (r_j r_k − r² δ_{jk})/r³ + (|⃗p|² δ_{jk} − p_j p_k)/m_e ) (2r_i p_k − r_k p_i − ⃗r·⃗p δ_{ik})/m_e ]
  = (e²/4πϵ₀ m_e) (1/r³) (r_i p_j − r_j p_i) ( 3r² − Σ_{k=1}^{3} r_k² )
    − (1/m_e²) (r_i p_j − r_j p_i) ( 3|⃗p|² − 2 Σ_{k=1}^{3} p_k² )
  = −(2/m_e) (r_i p_j − r_j p_i) ( |⃗p|²/2m_e − e²/4πϵ₀ r ) = −(2/m_e) H (r_i p_j − r_j p_i) ,
the Hamiltonian for the hydrogen atom. Now, all that remains is to interpret
ri p j − r j pi . This looks like a component of angular momentum if i ̸= j, so
we must enforce that. Note that
Σ_{k=1}^{3} ϵ_{ijk} L_k = Σ_{k,l,m=1}^{3} ϵ_{ijk} ϵ_{lmk} r_l p_m = Σ_{l,m=1}^{3} ( δ_{il} δ_{jm} − δ_{im} δ_{jl} ) r_l p_m (9.20)
  = r_i p_j − r_j p_i ,
where we used the result established in the chapter for the sum over a product
of totally anti-symmetric objects. Using this result, we then find
{A_i, A_j} = −(2/m_e) H Σ_{k=1}^{3} ϵ_{ijk} L_k , (9.21)
exactly as claimed.
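Both brackets can be verified symbolically; the sketch below (with k standing in for e²/4πϵ₀, an abbreviation introduced here) checks that {H, A_x} = 0 and that {A_x, A_y} = −(2H/m_e)L_z:

```python
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z', real=True)
me, k = sp.symbols('m_e k', positive=True)   # k stands in for e^2/(4 pi eps0)

r = sp.Matrix([x, y, z])
p = sp.Matrix([px, py, pz])
rmag = sp.sqrt(r.dot(r))

H = p.dot(p) / (2 * me) - k / rmag           # hydrogen Hamiltonian
L = r.cross(p)                               # angular momentum
A = p.cross(L) / me - k * r / rmag           # classical Laplace-Runge-Lenz vector

def pb(f, g):
    # canonical Poisson bracket {f, g}
    return sum(sp.diff(f, q) * sp.diff(g, pq) - sp.diff(f, pq) * sp.diff(g, q)
               for q, pq in zip([x, y, z], [px, py, pz]))

print(sp.simplify(pb(H, A[0])))                          # 0: A is conserved
print(sp.simplify(pb(A[0], A[1]) + 2 * H * L[2] / me))   # 0: {A_i,A_j} = -(2H/m_e) eps_ijk L_k
```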
9.3 (a) Because angular momentum is conserved, the two particles’ trajectories
must lie in a plane and must orbit with a common angular frequency ω .
Then, the total energy of the system can be expressed as
E = (1/2) m₁ r₁² ω² + (1/2) m₂ r₂² ω² + (n/|n|) k rⁿ , (9.22)
where r is the distance between the masses m1 and m2 and r1 , r2 is their
respective distance from their common orbiting point. Note that r1 + r2 = r.
Given this expression for the potential energy, we can also write down New-
ton’s second law. Note that the magnitude of the conservative force from the
potential is
F = dU(r)/dr = |n| k r^{n−1} . (9.23)

Then, Newton's second law for each of the two masses reads
Note that this argument is independent of what the operator Ô actually is.
With this observation and the fact that the Hamiltonian can be expressed as
kinetic plus potential operators
Ĥ = K̂ +V (r̂) , (9.34)
we require that
or that
Let’s first consider the commutator of the potential with Ô. In position
space, the operator is
Ô = −ih̄ ( x ∂/∂x + y ∂/∂y + z ∂/∂z ) . (9.37)
The commutator is then
[Ô, V(r̂)] = −ih̄ (n/|n|) k ( x ∂/∂x + y ∂/∂y + z ∂/∂z ) (x² + y² + z²)^{n/2} . (9.38)
Note that the derivative is
x ∂/∂x (x² + y² + z²)^{n/2} = n x² (x² + y² + z²)^{n/2 − 1} , (9.39)

and so the sum of the derivatives is

( x ∂/∂x + y ∂/∂y + z ∂/∂z ) (x² + y² + z²)^{n/2} = n (x² + y² + z²)^{n/2} . (9.40)
Therefore, the commutator is
Note that the constant term in the operator Ô from re-ordering the posi-
tion and momentum operators does not affect the commutator. Using these
results, the expectation values of the commutators are related as
That is,
⟨K̂⟩ = (n/2) ⟨V(r̂)⟩ , (9.45)
exactly the classical virial theorem, by Ehrenfest’s theorem.
(c) What makes the operator Ô useful for this task is the following observation.
In position space, the operator contains a term of the form
Ô ⊃ −ih̄ x ∂/∂x = −ih̄ ∂/∂(log x) . (9.46)
Thus, the job of the operator Ô is to count the powers of x that appear in
the potential, for example. The fact that the potential and kinetic energies
are related by a factor of n/2 is a reflection of the n powers of position in the
potential and 2 powers of momentum in the kinetic energy.
(d) On the ground state of the hydrogen atom, recall that the potential is
V(r̂) = −(e²/4πϵ₀) (1/r̂) . (9.47)

Its expectation value on the ground state wavefunction is

⟨V(r̂)⟩ = −(e²/4πϵ₀) (4/a₀³) ∫₀^∞ dr r² (1/r) e^{−2r/a₀} = −(e²/4πϵ₀) (1/a₀) = −m_e e⁴/(4πϵ₀)² h̄² . (9.48)
Also, recall that the energy of the ground state is
E₀ = −m_e e⁴/2(4πϵ₀)² h̄² , (9.49)
and so the expectation value of the kinetic energy must be their difference
⟨K̂⟩ = E₀ − ⟨V(r̂)⟩ = m_e e⁴/2(4πϵ₀)² h̄² . (9.50)
Thus, we see that
⟨K̂⟩ = (1/2) |⟨V(r̂)⟩| , (9.51)
exactly as predicted classically for a 1/r potential and Ehrenfest’s theorem.
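A numerical check of this 1/r virial relation on the hydrogen ground state, in units where a₀ = 1 and e²/4πϵ₀ = 1 (a choice made for this sketch):

```python
import numpy as np

r = np.linspace(1e-6, 60.0, 400_000)
dr = np.diff(r)
trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * dr))  # trapezoid rule

u2 = 4 * r**2 * np.exp(-2 * r)   # |R_10|^2 r^2 with R_10(r) = 2 e^{-r}
norm = trap(u2)                  # should be 1
V = trap(-u2 / r)                # <V> = -<1/r> in these units
K = -0.5 - V                     # <K> = E0 - <V> with E0 = -1/2

print(abs(norm - 1) < 1e-4, abs(V + 1) < 1e-4, abs(K - 0.5) < 1e-4)  # True True True
```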
(e) For the harmonic oscillator, recall that its eigenenergies were
E_n = (n + 1/2) h̄ω , (9.52)
for n = 0, 1, 2, . . . . The potential of the harmonic oscillator is
V(x̂) = (mω²/2) x̂² = (mω²/2)(h̄/2mω)(↠+ â)² = (h̄ω/4)(↠+ â)² . (9.53)
The expectation value on an energy eigenstate is then
⟨ψ_n|V(x̂)|ψ_n⟩ = (h̄ω/4n!) ⟨ψ₀|âⁿ (↠+ â)² (â†)ⁿ|ψ₀⟩ (9.54)
  = (h̄ω/4n!) ⟨ψ₀|âⁿ ( â†â + ââ† ) (â†)ⁿ|ψ₀⟩
  = (h̄ω/4n!) ⟨ψ₀|âⁿ ( [â†, â] + 2ââ† ) (â†)ⁿ|ψ₀⟩
  = (h̄ω/2n!) ⟨ψ₀|â^{n+1} (â†)^{n+1}|ψ₀⟩ − (h̄ω/4n!) ⟨ψ₀|âⁿ (â†)ⁿ|ψ₀⟩
  = (h̄ω/2)(n + 1) − h̄ω/4 = (h̄ω/2)(n + 1/2) .
This is exactly half of the energy eigenvalue, and so we find that, for the
harmonic oscillator, ⟨V(x̂)⟩ = ⟨K̂⟩ = E_n/2, consistent with the virial theorem for n = 2.
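The diagonal matrix elements of V(x̂) can be confirmed with truncated ladder-operator matrices (h̄ = ω = 1 here, by assumption):

```python
import numpy as np

N = 60  # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator: <n-1|a|n> = sqrt(n)
V = 0.25 * (a + a.T) @ (a + a.T)             # V = (hbar*omega/4)(a + a†)^2

# <n|V|n> = (1/2)(n + 1/2), i.e. half of the energy eigenvalue E_n
print(all(np.isclose(V[n, n], 0.5 * (n + 0.5)) for n in range(10)))  # True
```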
for two constants a and b. The eigenvalue equation then implies that

−(h̄²/2m) ( b(b + 1)/r² − 2(b + 1)/ar + 1/a² ) − k/rⁿ = E . (9.58)
Immediately, it is clear that if n > 2, there is no way to make this relationship con-
sistent; the k/rn term dominates as r → 0. If n = 2, we must enforce a cancellation
of the two terms that scale like r−2 :
−(h̄²/2m) b(b + 1) = k . (9.59)
However, we must also enforce that the single term that scales like r−1 vanishes by
itself, or that b = −1. These requirements are inconsistent, and so only for n < 2
can there possibly be bound states.
9.5 (a) In evaluating the ground state and first excited state of the hydrogen atom,
for ℓ = 0, we had constructed specific forms for their ansatze. In particular,
these wavefunctions took the form
for constants a0 , a1 , b0 , b1 . So, this suggests that for the nth energy eigen-
state, the prefactor of the exponential suppression at large r, is an nth order
polynomial:
ψ_n(r) = e^{−r/a_n} Σ_{i=0}^{n} c_i r^i , (9.61)
as expected.
(c) The first excited state wavefunction is again
ψ_{2,0,0}(r) = (1/√(8π a₀³)) ( 1 − r/2a₀ ) e^{−r/2a₀} . (9.67)
Let’s first determine the ψ1,1,0 (⃗r) wavefunction. The operator Âz is, in
position space,
Â_z = −i (h̄/m_e) p̂_z − (e²/4πϵ₀) (ẑ/r̂) = −(h̄²/m_e) ( ∂/∂z + (1/a₀)(z/r) ) . (9.68)
The derivative acting on the wavefunction is
∂/∂z ψ_{2,0,0}(r) = (∂r/∂z) (d/dr) ψ_{2,0,0}(r) ∝ (z/r) ( r/4a₀² − 1/a₀ ) e^{−r/2a₀} . (9.69)
Then, the wavefunction of interest is
ψ_{1,1,0}(⃗r) ∝ Â_z ψ_{2,0,0}(r) ∝ (z/r) ( r/4a₀² − 1/a₀ ) e^{−r/2a₀} + (1/a₀)(z/r) ( 1 − r/2a₀ ) e^{−r/2a₀}
  ∝ −(z/a₀) e^{−r/2a₀} . (9.70)
which is on the bluer end of the visible spectrum. It is unlikely that the electron is
in an extremely high energy state because any errant radiation will knock it out
of its bound state. So, on the other end, the transition from the second to the first
excited state would produce a photon with wavelength of
λ_{3→2} ≈ 1240/(13.6/2² − 13.6/3²) nm ≈ 656 nm , (9.76)
on the redder end of the spectrum. Indeed, hydrogen’s strongest spectral lines are
red, with many lines at the blue end of the spectrum, but much weaker brightness.
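These wavelength estimates follow from λ ≈ 1240 eV·nm / ΔE; a two-line check for the two transitions discussed:

```python
# lambda ≈ 1240 eV·nm / Delta-E for hydrogen transitions n_i -> n_f
for ni, nf in [(3, 2), (4, 2)]:
    dE = 13.6 * (1 / nf**2 - 1 / ni**2)   # photon energy in eV
    print(ni, "->", nf, round(1240 / dE), "nm")  # 656 nm (red), then 486 nm (blue)
```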
9.7 (a) Recall that the Biot-Savart law is
⃗B(⃗r) = (μ₀ I/4π) ∫ d⃗ℓ × (⃗r − ⃗ℓ)/|⃗r − ⃗ℓ|³ , (9.77)
where the integral runs over the current loop that produces the magnetic moment.
As such, a point on the circle can be expressed as the vector
⃗ℓ = r (cos ωt, sin ωt) , (9.78)
where r is the radius and ω is the angular frequency of the orbit. The
differential distance is
d⃗ℓ = rω (− sin ω t, cos ω t) dt , (9.79)
which we note is orthogonal to the position vector ℓ. In each full orbit, an
electric charge e travels around, and therefore the current is
I = eω/2π . (9.80)
With these results, the magnitude of the magnetic field at the center of the
loop is
|⃗B(⃗0)| = (μ₀ eω/8π²) (r²ω/r³) ∮ dt , (9.81)
and the integral runs over the total period T = 2π /ω of the orbit. Then, the
magnetic field is
|⃗B(⃗0)| = (μ₀ eω/4π) (1/r) . (9.82)
Now, for a particle of mass me traveling with angular frequency ω around a
circle of radius r, its angular momentum is
L = me r 2 ω . (9.83)
Then, the magnetic field, in terms of angular momentum, is
|⃗B(⃗0)| = (μ₀/4π) (e/m_e r³) L = (μ₀/4π) (2/r³) |⃗m| , (9.84)
where ⃗m is the magnetic dipole moment. Thus, the magnetic dipole moment
is
|⃗m| = (e/2m_e) L , (9.85)
and the gyromagnetic moment is easily read off,
|γ_class| = e/2m_e . (9.86)
When the magnitude is removed, we pick up a minus sign because the
electron is negatively charged.
(b) We consider the external magnetic field along the z-axis as in the Zeeman
effect example, ⃗B = B0 ẑ. This magnetic field only affects the energy through
the z-component of angular momentum of the electron as it orbits the
proton. The new energy operator Û that represents this effect is
Û = −⃗m · ⃗B = (eB₀/2m_e) L̂_z , (9.87)
(c) Doing the same exercise for T̂z and Ŝz , the states that are produced are
T̂z |n, ℓ, m⟩ = α |n, ℓ, m⟩ + β |n − 1, ℓ + 1, m⟩ , (9.94)
and Ŝz produces the orthogonal linear combination.
(d) To determine the form of the operators T̂i and Ŝi in position space, we
can first simplify the expression for the Laplace–Runge–Lenz operator Âk .
Noting that the angular momentum operator is
L̂_k = Σ_{i,j=1}^{3} ϵ_{ijk} r̂_i p̂_j , (9.95)
we note that
Σ_{i,j=1}^{3} ϵ_{ijk} ( p̂_i L̂_j − L̂_i p̂_j ) = Σ_{i,j,l,m=1}^{3} ϵ_{ijk} ( ϵ_{lmj} p̂_i r̂_l p̂_m − ϵ_{lmi} r̂_l p̂_m p̂_j ) (9.96)
  = Σ_{i,l,m=1}^{3} ( δ_{im} δ_{kl} − δ_{il} δ_{km} ) p̂_i r̂_l p̂_m + Σ_{j,l,m=1}^{3} ( δ_{km} δ_{jl} − δ_{kl} δ_{jm} ) r̂_l p̂_m p̂_j
  = Σ_{i=1}^{3} ( p̂_i r̂_k p̂_i − p̂_i r̂_i p̂_k + r̂_i p̂_k p̂_i − r̂_k p̂_i² )
  = −ih̄ Σ_{i=1}^{3} ( 1 − δ_{ik} ) p̂_k = −2ih̄ p̂_k .
10 Approximation Techniques

Exercises
or for
ϵ < (E₁ − E₀)/2 . (10.8)
That is, the perturbation had better be less than the difference in energy
between the two states to honestly be a “perturbation.”
10.2 (a) The normalization of the wavefunction is
1 = N² ∫_{−a}^{a} dx (a² − x²)² = (16a⁵/15) N² . (10.9)
That is,
N = √(15/16a⁵) . (10.10)
(b) In position space, the Hamiltonian is
Ĥ = −(h̄²/2m) d²/dx² + (mω²/2) x² , (10.11)
and then its action on this wavefunction is
Ĥ ψ(x; a) = ( −(h̄²/2m) d²/dx² + (mω²/2) x² ) N (a² − x²) (10.12)
  = N ( h̄²/m + (mω²/2) x² (a² − x²) ) .
Then, the expectation value of the Hamiltonian on this wavefunction is
⟨ψ|Ĥ|ψ⟩ = (15/16a⁵) ∫_{−a}^{a} dx (a² − x²) ( h̄²/m + (mω²/2) x² (a² − x²) ) (10.13)
  = mω²a²/14 + 5h̄²/4ma² .
(c) Now, we want to determine the value of a that minimizes this expression.
Taking the derivative and demanding that it vanish, we find
(d/da) ( mω²a²/14 + 5h̄²/4ma² ) = 0 = mω²a/7 − 5h̄²/2ma³ , (10.14)

which has the solution

a² = √(35/2) (h̄/mω) . (10.15)
Inserting this into the expression for the expectation value, we find the
variational estimate of the ground state energy to be
⟨ψ|Ĥ|ψ⟩_min = ( mω²a²/14 + 5h̄²/4ma² )|_{a² = √(35/2) h̄/mω} (10.16)
  = ( (1/14)√(35/2) + (5/4)√(2/35) ) h̄ω = √(5/14) h̄ω
  ≈ 0.5976 h̄ω ,
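A brute-force scan over the variational parameter reproduces both the minimizing a² and the bound ≈ 0.5976 h̄ω (units h̄ = m = ω = 1, an assumption for this check):

```python
import numpy as np

a = np.linspace(0.5, 5.0, 200_001)
E = a**2 / 14 + 5 / (4 * a**2)      # <H>(a) in units hbar = m = omega = 1
i = np.argmin(E)

print(np.isclose(a[i]**2, np.sqrt(35 / 2), atol=1e-3))  # True: a^2 -> sqrt(35/2)
print(np.isclose(E[i], np.sqrt(5 / 14), atol=1e-6))     # True: E_min -> sqrt(5/14) ≈ 0.5976
```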
If this is orthogonal to the ansatz for the first excited state, we must find
∫₀¹ dx ψ₁(x; α) ψ₂(x; β) ∝ ∫₀¹ dx x^{α+β} (1 − x)^{α+β} (1 − 2x) = 0 , (10.18)
Ĥ ψ₂(x; β) = −N (h̄²/2m) (d²/dx²) x^β (1 − x)^β (1 − 2x) (10.22)
  = N (h̄²/2m) β x^{β−2} (1 − x)^{β−2} (1 − 2x) ( 1 − β + 2(1 + 2β)(1 − x)x ) .
≈ 39.9736 (h̄²/2m) .

The exact expression for the energy of the first excited state would be

E₂ = 4π² (h̄²/2m) ≈ 39.4784 (h̄²/2m) , (10.27)

so our estimate is only very slightly above the true result.
(e) The precise answer to this problem depends on the form of the ansatz made
for the second excited state wavefunction, so we will not present a solution
here. However, the fundamental property that this state must satisfy is that
it is orthogonal to both the ground state and the first excited state. An
example of a form of a wavefunction that is orthogonal to the first two
energy eigenstates is
ψ2 (x; γ ) ≃ x(1 − x)(x − γ )(1 − γ − x) , (10.28)
where γ is a parameter that can be fixed to ensure orthogonality with the
ground state ansatz. Note also that this form of a wavefunction has two
nodes exclusive of the well’s boundaries, exactly as expected for the second
excited state.
10.5 (a) With the ansatz expression for the inverse Hamiltonian, the product that
we must demand produces the identity operator is
Ĥ Ĥ⁻¹ = I = h̄ω (1 + â†â) (1/h̄ω) Σ_{n=0}^{∞} β_n (â†)ⁿ âⁿ (10.29)
  = Σ_{n=0}^{∞} β_n (â†)ⁿ âⁿ + Σ_{n=0}^{∞} β_n â†â (â†)ⁿ âⁿ
  = Σ_{n=0}^{∞} β_n (â†)ⁿ âⁿ + Σ_{n=0}^{∞} β_n ↠( (â†)ⁿ â + [â, (â†)ⁿ] ) âⁿ
  = Σ_{n=0}^{∞} ( (1 + n) β_n (â†)ⁿ âⁿ + β_n (â†)^{n+1} â^{n+1} ) .
Then, demanding that all terms with positive powers of raising and lowering operators cancel, we must enforce the recursion relation

(1 + n) β_n + β_{n−1} = 0 , (10.30)

or that

β_n = −β_{n−1}/(1 + n) , (10.31)

with β₀ = 1. The solution of this recursion relation is

β_n = (−1)ⁿ/(n + 1)! . (10.32)
Then, the inverse Hamiltonian of the harmonic oscillator is

Ĥ⁻¹ = (1/h̄ω) Σ_{n=0}^{∞} ( (−1)ⁿ/(n + 1)! ) (â†)ⁿ âⁿ . (10.33)
because ⟨χ |↠= ⟨χ |λ ∗ .
(c) We would like to now evaluate the ratio of expectation values
⟨χ|Ĥ⁻¹|χ⟩ / ⟨χ|Ĥ⁻¹Ĥ⁻¹|χ⟩ . (10.35)
Let’s first evaluate the numerator expectation value. We have
⟨χ|Ĥ⁻¹|χ⟩ = (1/h̄ω) Σ_{n=0}^{∞} ( (−1)ⁿ/(n + 1)! ) ⟨χ|(â†)ⁿ âⁿ|χ⟩ = (1/h̄ω) Σ_{n=0}^{∞} ( (−1)ⁿ/(n + 1)! ) |λ|^{2n}
  = (1/h̄ω) (1 − e^{−|λ|²})/|λ|² . (10.36)
Now, let’s evaluate the denominator with the hint. We have
⟨χ|Ĥ⁻¹Ĥ⁻¹|χ⟩ = Σ_{n=0}^{∞} ⟨χ|Ĥ⁻¹|ψ_n⟩⟨ψ_n|Ĥ⁻¹|χ⟩ = (1/(h̄ω)²) Σ_{n=0}^{∞} (1/(n + 1)²) |⟨χ|ψ_n⟩|² . (10.37)

Now, the normalized energy eigenstate is

|ψ_n⟩ = ( (â†)ⁿ/√n! ) |ψ₀⟩ , (10.38)
where |ψ0 ⟩ is the ground state of the harmonic oscillator. We can now
evaluate their inner product straightforwardly as
⟨ψ_n|χ⟩ = (e^{−|λ|²/2}/√n!) ⟨ψ₀|âⁿ e^{λâ†}|ψ₀⟩ = (e^{−|λ|²/2}/√n!) λⁿ ⟨ψ₀|e^{λâ†}|ψ₀⟩ = e^{−|λ|²/2} λⁿ/√n! . (10.40)

Putting this into the sum, the denominator becomes

⟨χ|Ĥ⁻¹Ĥ⁻¹|χ⟩ = (e^{−|λ|²}/(h̄ω)²) Σ_{n=0}^{∞} |λ|^{2n}/((n + 1)!(n + 1)) , (10.41)
as claimed.
(d) To determine the λ → 0 limit, we can use l’Hôpital’s rule. The derivative of
the numerator evaluated at |λ |2 = 0 is
(d/d|λ|²) ( e^{|λ|²} − 1 )|_{|λ|²=0} = 1 . (10.46)
From part (b), the initial expectation value of the coherent state with λ = 1
would have been
So, indeed, we see that the power method improved the lower bound
estimate of the ground state of the harmonic oscillator.
10.5 (a) From the formalism of perturbation theory, the first-order correction to
the nth energy eigenstate is the expectation value
where |ψn ⟩ is the nth energy eigenstate of the harmonic oscillator. Using
the raising and lowering operators, we can express this expectation value
as
∆E_n = λ (h̄²/4m²ω²) ⟨ψ_n|(â + â†)⁴|ψ_n⟩ . (10.51)
Now, we can evaluate this by noting that the action of the raising and
lowering operators is
â†|ψ_n⟩ = ( (â†)^{n+1}/√n! ) |ψ₀⟩ = √(n + 1) |ψ_{n+1}⟩ , (10.52)
â|ψ_n⟩ = (d/dâ†) ( (â†)ⁿ/√n! ) |ψ₀⟩ = √n |ψ_{n−1}⟩ .
Because the expectation value is a diagonal element of x̂4 , there will only
be non-zero contributions when the number of raising and lowering oper-
ators are equal. That is, contributions like â(↠)3 necessarily vanish on this
expectation value. So, we only need to consider products of raising and
lowering operators that arise from expanding the product that have the
same number of both:
(â + ↠)4 ⊃ â2 (↠)2 + â↠â↠+ â(↠)2 â + ↠â↠â + ↠(â)2 ↠+ (↠)2 â2 .
(10.53)
With these results, the matrix element of four of the terms is easily
evaluated:
⟨ψ_n| â↠â↠+ â(â†)² â + â†â â†â + ↠â² ↠|ψ_n⟩ = (2n + 1) ⟨ψ_n| â↠+ â†â |ψ_n⟩ = (2n + 1)² . (10.55)
Adding the remaining two contributions, ⟨ψ_n|â²(â†)² + (â†)²â²|ψ_n⟩ = (n + 1)(n + 2) + n(n − 1) = 2n² + 2n + 2, the complete matrix element is

⟨ψ_n|x̂⁴|ψ_n⟩ = (h̄²/4m²ω²) ( 6n² + 6n + 3 ) . (10.57)
Therefore, the correction to the nth energy eigenstate from the anharmonic
oscillator is
∆E_n = λ (h̄²/4m²ω²) ( 6n² + 6n + 3 ) . (10.58)
4m2 ω 2
(b) With λ < 0, there will be some energy level n for which En + ∆En < 0.
Assuming that |λ | is very small, this n will be very large, so we will approx-
imate the harmonic oscillator energy level as En ≈ nh̄ω , and the correction
as
∆E_n ≈ λ (3h̄²n²/2m²ω²) . (10.59)
Then, if ∆En ≈ −En , we have
|λ| (3h̄²n²/2m²ω²) ≈ nh̄ω , (10.60)

or that

n ≈ 2m²ω³/3h̄|λ| . (10.61)
3h̄λ
(c) The correction to the ground state wavefunction |ψ0 ⟩ is
|∆ψ₀⟩ = −(λ/h̄ω) Σ_{n=1}^{∞} ( ⟨ψ_n|x̂⁴|ψ₀⟩/n ) |ψ_n⟩ . (10.62)
Due to the form of the quartic position operator x̂4 in terms of raising and
lowering operators, only the states n = 2, 4 can contribute to the sum. Thus,
this simplifies to
|∆ψ₀⟩ = −(λ/h̄ω) Σ_{n=1}^{∞} ( ⟨ψ_n|x̂⁴|ψ₀⟩/n ) |ψ_n⟩ (10.63)
  = −(λh̄/4m²ω³) ( ( ⟨ψ₄|(â†)⁴|ψ₀⟩/4 ) |ψ₄⟩ + ( ⟨ψ₂|â(â†)³ + â†â(â†)² + (â†)²ââ†|ψ₀⟩/2 ) |ψ₂⟩ )
  = −(λh̄/4m²ω³) ( (√6/2) |ψ₄⟩ + 3√2 |ψ₂⟩ ) .
(|ψ0 ⟩ + |∆ψ0 ⟩)† T̂ (|ψ0 ⟩ + |∆ψ0 ⟩) = ⟨ψ0 |T̂ |ψ0 ⟩ + ⟨ψ0 |T̂ |∆ψ0 ⟩ + ⟨∆ψ0 |T̂ |ψ0 ⟩ .
(10.64)
With T̂ = x̂ and the modification only involving |ψ2 ⟩ and |ψ4 ⟩, the expec-
tation value of x̂ on the modified state still vanishes. So, the variance of the
ground state is still determined by the second moment: σx2 = ⟨ψ |x̂2 |ψ ⟩. Fur-
ther, the contribution from the state |ψ4 ⟩ is 0 on the expectation value of
x̂2 because x̂2 can only relate energy eigenstates that differ by 2h̄ω . There-
fore, the change in the variance of the ground state from the anharmonic
oscillator is
∆σ_x² = ⟨ψ₀|x̂²|∆ψ₀⟩ + ⟨∆ψ₀|x̂²|ψ₀⟩ = −(3√2 λh̄/4m²ω³) ( ⟨ψ₂|x̂²|ψ₀⟩ + ⟨ψ₀|x̂²|ψ₂⟩ ) (10.65)
  = −(3√2 λh̄²/8m³ω⁴) ( ⟨ψ₂|(â + â†)²|ψ₀⟩ + ⟨ψ₀|(â + â†)²|ψ₂⟩ )
  = −(3√2 λh̄²/8m³ω⁴) ( ⟨ψ₂|(â†)²|ψ₀⟩ + ⟨ψ₀|â²|ψ₂⟩ ) = −3λh̄²/2m³ω⁴ .
This is negative, and so the ground state wavefunction of the anharmonic
oscillator is narrower than the ground state of the harmonic oscillator. This
makes sense because the anharmonic oscillator’s potential is narrower than
the harmonic oscillator (see Fig. 10.7).
10.6 The solutions to this problem vary greatly depending on what system the student
chooses to study. We refer to T. Sulejmanpasic and M. Ünsal, "Aspects of
perturbation theory in quantum mechanics: The BenderWu Mathematica
package," Comput. Phys. Commun. 228, 273–289 (2018) [arXiv:1608.08256
[hep-th]], for their worked examples and instructions to use the program.
10.7 (a) The first thing we need to do for the Bohr-Sommerfeld condition is to
establish what the classical turning points are. The classical turning points
106 10 Approximation Techniques
for a given energy En correspond to the positions where the kinetic energy
is 0, so the total energy is just potential:
E_n = (mω²/2) x² , (10.66)

or that

x_{min,max} = ∓√(2E_n/mω²) . (10.67)
mω 2
Then, the Bohr-Sommerfeld condition for the harmonic oscillator is
(1/h̄) ∫_{−√(2E_n/mω²)}^{√(2E_n/mω²)} dx′ √( 2m (E_n − (mω²/2) x′²) ) (10.68)
  = (2/h̄) ∫₀^{√(2E_n/mω²)} dx′ √( 2m (E_n − (mω²/2) x′²) ) = nπ .
En = nh̄ω , (10.72)
which differs from the correct energy eigenvalues of the harmonic oscillator
by the ground state energy, h̄ω /2.
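The left-hand integral in the quantization condition can be evaluated numerically; with m = ω = h̄ = 1 (an assumption for this check) it indeed equals πE for any energy E:

```python
import numpy as np

E = 1.7                                   # any positive trial energy
x = np.linspace(0.0, np.sqrt(2 * E), 2_000_001)
p = np.sqrt(np.clip(2 * (E - x**2 / 2), 0.0, None))   # sqrt(2m(E - V)) with m = 1

# factor 2 for the symmetric interval; trapezoid rule on the half-interval
integral = 2 * float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(x)))
print(abs(integral - np.pi * E) < 1e-4)   # True: the action integral is pi * E
```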
(b) With the Maslov correction, all we need to change is the value of the
quantized integral, to
π E_n/h̄ω = (n + 1/2) π , (10.73)
E_n = k|x|^α , (10.75)

or that

x_{min,max} = ∓(E_n/k)^{1/α} . (10.76)
Then, the quantization condition is
(2/h̄) ∫₀^{(E_n/k)^{1/α}} dx′ √( 2m(E_n − k x′^α) ) = (n + 1/2) π , (10.77)
by the symmetry about x = 0. Let’s now change variables to
k x′^α = E_n u , (10.78)

and so

x′ = (E_n/k)^{1/α} u^{1/α} . (10.79)

The integration measure is

dx′ = (1/α) (E_n/k)^{1/α} u^{1/α − 1} du , (10.80)
so the quantization becomes
(2/h̄) ∫₀^{(E_n/k)^{1/α}} dx′ √( 2m(E_n − k x′^α) ) = (2√(2mE_n)/h̄α) (E_n/k)^{1/α} ∫₀¹ du u^{1/α − 1} (1 − u)^{1/2}
  = (2√(2mE_n)/h̄α) (E_n/k)^{1/α} Γ(1/α)Γ(3/2)/Γ(1/α + 3/2)
  = (n + 1/2) π , (10.81)
where we have used the fact that the integral that remains is in the form of
the Euler Beta function. First, for general α , the energies scale with n for
large n like
E_n^{1/2 + 1/α} ∝ n , (10.82)

or that

E_n ∝ n^{2α/(2+α)} . (10.83)
Next, note that as α → ∞, Γ(1/α) = α Γ(1 + 1/α) → α, and so
lim_{α→∞} Γ(1/α)Γ(3/2)/( α Γ(1/α + 3/2) ) = 1 . (10.85)
In this limit, the quantization condition becomes
(2/h̄) √(2mE_n) = nπ , (10.86)
and we have turned off the Maslov correction because α → ∞ is a singular
limit. The solution of this equation is
E_n = n²π²h̄²/(2m · 2²) . (10.87)
As α → ∞, the potential diverges beyond |x| = 1 and is 0 for −1 < x <
1. Thus, this is an infinite square well with width a = 2, and the Bohr-
Sommerfeld quantization condition predicts that exactly correctly.
10.8 (a) The power method would predict that ground state is approximately
proportional to
|1⟩ ≃ Ĥ N |χ ⟩ . (10.88)
To estimate the ground state energy, we take the expectation value of the
Hamiltonian on this state and then ensure it is properly normalized. So,
the ground state energy is bounded as
E₁ ≤ ⟨χ|Ĥ^N Ĥ Ĥ^N|χ⟩/⟨χ|Ĥ^N Ĥ^N|χ⟩ = ⟨χ|Ĥ^{2N+1}|χ⟩/⟨χ|Ĥ^{2N}|χ⟩ . (10.89)
From the given form of the state |χ ⟩, the action of the Hamiltonian is
Ĥ^{2N+1}|χ⟩ = Σ_{n=1}^{∞} β_n Ĥ^{2N+1}|n⟩ = −( m_e e⁴/2(4πϵ₀)²h̄² )^{2N+1} Σ_{n=1}^{∞} ( β_n/n^{4N+2} ) |n⟩ . (10.90)
The value of the sum that remains is π⁴/90, and so the expectation value is

⟨χ|Ĥ|χ⟩ = −( m_e e⁴/2(4πϵ₀)²h̄² ) (π²/15) ≈ −0.65794 ( m_e e⁴/2(4πϵ₀)²h̄² ) , (10.96)

which is indeed larger than the ground state energy, as expected.
(c) From the result of part (a), the estimate of the ground state energy after N
applications of the power method on this state is
E₁ ≤ −( m_e e⁴/2(4πϵ₀)²h̄² ) ( Σ_{n=1}^{∞} n^{−(4N+4)} )/( Σ_{n=1}^{∞} n^{−(4N+2)} ) = −( m_e e⁴/2(4πϵ₀)²h̄² ) ζ(4N + 4)/ζ(4N + 2) . (10.97)
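The ζ-function ratio can be tabulated with partial sums to see how quickly the bound approaches the true ground state energy, measured in units of −m_e e⁴/(2(4πϵ₀)²h̄²):

```python
import math

def zeta(s, terms=100_000):
    # partial sum of the Riemann zeta function (adequate for s >= 2)
    return sum(n**-s for n in range(1, terms + 1))

# N = 0 reproduces the estimate pi^2/15 ≈ 0.65794 of the unpowered state
print(abs(zeta(4) / zeta(2) - math.pi**2 / 15) < 1e-4)  # True
for N in (1, 2, 3):
    print(N, zeta(4 * N + 4) / zeta(4 * N + 2))  # ratio tends to 1 (the ground state)
```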
In the expectation value, |⃗p̂|²/2m_e is just the unperturbed kinetic energy and, acting on the
unperturbed ground state, we can use the fact that

|⃗p̂|²/2m_e = E₀ + (e²/4πϵ₀) (1/r̂) . (10.106)
Then, the expectation value of the relativistic correction is
⟨ψ|Ĥ_rel|ψ⟩ = −(1/2m_e c²) ⟨ψ| ( E₀ + (e²/4πϵ₀)(1/r̂) )² |ψ⟩ (10.107)
  = −(1/2m_e c²) ( E₀² + (e²/2πϵ₀) E₀ ⟨ψ|(1/r̂)|ψ⟩ + (e⁴/(4πϵ₀)²) ⟨ψ|(1/r̂²)|ψ⟩ ) .
On the ground state, the expectation values are
⟨ψ|(1/r̂)|ψ⟩ = (4/a₀³) ∫₀^∞ dr r e^{−2r/a₀} = 1/a₀ , (10.108)
⟨ψ|(1/r̂²)|ψ⟩ = (4/a₀³) ∫₀^∞ dr e^{−2r/a₀} = 2/a₀² .
Then, the expectation value of the relativistic correction is
⟨ψ|Ĥ_rel|ψ⟩ = −(1/2m_e c²) ( E₀² + (e²/2πϵ₀)(E₀/a₀) + 2e⁴/(4πϵ₀)²a₀² ) (10.109)
  = −(E₀²/2m_e c²) (1 − 4 + 8) = −5E₀²/2m_e c² ,
where I have used the expression for the ground state energy.
(c) As observed in the previous part, the lowest-order expansion of the power
method exactly reproduces the first-order correction to the ground state
energy from perturbation theory.
11 The Path Integral
Exercises
11.1 (a) For a free particle that travels a distance x in time T , its velocity is
v = x/T . (11.1)
Further, the energy E of the particle in terms of momentum is
E = p²/2m , (11.2)
and so the difference is
px − ET = pvT − (p²/2m) T = (p²/m) T − (p²/2m) T = (p²/2m) T , (11.3)
which is indeed the kinetic energy times the elapsed time; the classical
action of a free particle.
(b) In terms of the angular velocity ω , the elapsed angle over time T is
θ = ωT , (11.4)
L = Iω , (11.5)
where T is the total elapsed time. We can change the integration variable
to the time x′ = vt = mp t, where v is the velocity and p is the momentum of
the particle. In general, the momentum varies with position or time, but, as
with the original WKB approximation, we assume that variation is small,
so the integration measure is
dx′ = (p/m) dt . (11.9)

Then, the wavefunction is approximately

ψ(x, t) ≈ exp( (i/h̄) ∫₀^T dt (2m(E − V))/m − i ET/h̄ ) ψ(x₀) , (11.10)
because the momentum at fixed energy is p = √(2m(E − V)). This can be
rearranged to:
ψ(x, t) ≈ exp( (i/h̄) ∫₀^T dt ( 2(E − V) − E ) ) ψ(x₀) (11.11)
  = exp( (i/h̄) ∫₀^T dt (E − 2V) ) ψ(x₀)
  = exp( (i/h̄) ∫₀^T dt (K − V) ) ψ(x₀) ,
where we have used that the total energy is the sum of the kinetic and poten-
tial energies: E = K +V . Thus, what appears in the exponent is indeed the
classical action, the time integral of the kinetic minus the potential energy.
The general exact expression for the wavefunction at any later time requires
the infinity of integrals of the path integral, but the WKB approximation
assumes that the potential varies slowly enough that those infinity of inte-
grals just produce an overall normalization, and only the first “step” of the
path integral is needed.
11.2 (a) The Euler-Lagrange equation from a Lagrangian L is
∂L/∂x − (d/dt) ∂L/∂ẋ = 0 . (11.12)

For the harmonic oscillator's Lagrangian, the necessary derivatives are

∂L/∂x = −mω²x , −(d/dt) ∂L/∂ẋ = −(d/dt) mẋ = −mẍ . (11.13)
Therefore, the Euler-Lagrange equations are
−mω 2 x − mẍ = 0 , (11.14)
as claimed.
(b) Simplifying the equations of motion, we have
ẍ + ω 2 x = 0 , (11.15)
for which the solutions are linear combinations of sines and cosines:
x(t) = a cos(ω t) + b sin(ω t) , (11.16)
Of course, the equality is only formal, because both sides are infinite.
11.4 (a) The harmonic oscillator path integral is
Z = e^{i S[x_class]/h̄} √(m/2πih̄T) √(ωT/sin(ωT)) . (11.32)
Taking the ω → 0 limit of the right-most square-root factor produces
lim_{ω→0} √(ωT/sin(ωT)) = lim_{ω→0} √(ωT/ωT) = 1 . (11.33)
= ( ∫ [dx] exp( (i/h̄) ∫ dt ( (m/2) ẋ² − (mω²/2) x² ) ) )³
= Z_{1D}³ .
That is, the path integral of the three-dimensional harmonic oscillator is just
the cube of the path integral for the one-dimensional harmonic oscillator, because
the action is a sum of three identical one-dimensional actions and the integrals factorize.
11.6 (a) For the free-particle action, we need the first time derivative of the trajec-
tory. For this trajectory, we have
ẋ = (2/T)(x′ − x_i) Θ(T/2 − t) + (2/T)(x_f − x′) Θ(T − t) Θ(t − T/2) , (11.41)
T T
where the Heaviside theta function Θ(x) is 1 if x > 0 and 0 if x < 0. The
action of this trajectory is therefore
S[x] = (m/2) ∫₀^T dt ( (4/T²)(x′ − x_i)² Θ(T/2 − t) + (4/T²)(x_f − x′)² Θ(T − t) Θ(t − T/2) )
  = (m/T) ( (x′ − x_i)² + (x_f − x′)² ) , (11.42)
T
because the integrand is time independent (except for the Heaviside func-
tions). This is just a parabola in the intermediate point x′ with a minimum
where
(d/dx′) ( (x′ − x_i)² + (x_f − x′)² ) = 0 = 2(x′ − x_i) − 2(x_f − x′) , (11.43)

or that

x′ = (x_f + x_i)/2 . (11.44)
2
At this point, the trajectory of the particle is a straight line for all time
0 < t < T , as expected for the physical, classical free particle.
(b) The time derivative of this trajectory is now
ẋ = (x_f − x_i)/T + (πA/T) cos(πt/T) . (11.45)
T T T
The action of this trajectory is then
S[x] = (m/2) ∫₀^T dt ( (x_f − x_i)/T + (πA/T) cos(πt/T) )² (11.46)
  = (m/2) ∫₀^T dt ( (x_f − x_i)²/T² + (π²A²/T²) cos²(πt/T) )
  = (m/2) ( (x_f − x_i)²/T + π²A²/2T ) .
2 T 2T
This is again a parabola, this time in the amplitude A, and it is clearly
minimized when A = 0, when there is no sinusoidal oscillation on the
straight-line trajectory.
ẋ = (x_f − x_i)/T + (nπ/T)(x_f − x_i) cos(nπt/T) . (11.47)
T T T
The action is then
S[x] = (m/2) ∫₀^T dt ( (x_f − x_i)/T + (nπ/T)(x_f − x_i) cos(nπt/T) )² (11.48)
  = (m/2) ∫₀^T dt ( (x_f − x_i)²/T² + (n²π²/T²)(x_f − x_i)² cos²(nπt/T) )
  = (m/2) ( (x_f − x_i)²/T + (n²π²/2T)(x_f − x_i)² ) ,
2 T 2T
yet again a parabola, this time in the frequency n. And again, the action is
minimized when n = 0, eliminating the sinusoidal oscillation on top of the
classical trajectory.
(d) Repeating the exercise for the harmonic oscillator with the same model tra-
jectories is a very different analysis than for the free particle. The biggest
difference is that there is no limit of the parameter in these model tra-
jectories for which it reduces to the classical trajectory of the harmonic
oscillator, so the trajectory that minimizes the action will be some approx-
imation to the classical trajectory. Here, we will just present one of these
calculations; the kinked trajectory from part (a). We have already cal-
culated the time integral of the kinetic energy. The time integral of the
potential energy in the action is
S[x] ⊃ −(mω²/2) ∫₀^T dt ( ( x_i + (2t/T)(x′ − x_i) )² Θ(T/2 − t) (11.49)
    + ( x′ + (2/T)(t − T/2)(x_f − x′) )² Θ(T − t) Θ(t − T/2) )
  = −(mω²T/12) ( x_f² + x_i² + (x_f + x_i)x′ + 2x′² ) .
Then, the complete value of the action for this harmonic oscillator trajec-
tory is
S[x] = (m/T) ( (x′ − x_i)² + (x_f − x′)² ) − (mω²T/12) ( x_f² + x_i² + (x_f + x_i)x′ + 2x′² ) . (11.50)
dS[x]/dx′ = 0 = −(2m/T)(x_f + x_i − 2x′) − (mω²T/12)(x_f + x_i + 4x′) , (11.51)

or that

x′ = ( (1 + ω²T²/24)/(1 − ω²T²/12) ) (x_f + x_i)/2 . (11.52)
What is interesting about this point is that it is always at a position larger
than (xi + x f )/2 as long as the frequency is non-zero, ω > 0. Further, this
trajectory is only a minimum of the action (for this parametrization) if the
second derivative with respect to x′ is positive. The second derivative of the
action is
d2 m ′ ′ 2
mω 2 T 2 ′ ′2
(x − xi ) + (x f − x ) −
2 2
x f + xi + (x f + xi )x + 2x
dx′2 T 12
4m mω 2 T
= − > 0. (11.53)
T 3
This inequality can only be satisfied if
ω² < 12/T² . (11.54)

So, for sufficiently long times or sufficiently high frequencies, this trajectory is not even
a reasonable approximation to the classical trajectory.
11.7 (a) Let’s first consider the path integral of the particle that passes through the
upper slit. We can use the formalism established in example 11.2 to do this.
First, in time τ , the particle travels from the origin to the upper slit, located
at ⃗xm = (d, s/2). The path integral for this is
Z₁ = √(m/2πih̄τ) e^{i (m/2)(d² + (s/2)²)/h̄τ} , (11.55)
where we note that the squared distance that the particle traveled was
⃗x_f² = d² + s²/4 . (11.56)
Then, the particle travels from the slit to the point on the screen ⃗x f = (d +
l, h) in time T − τ . Note that the squared distance traveled now is
(⃗x f −⃗xm )2 = (l, h − s/2)2 = l 2 + (h − s/2)2 . (11.57)
This path integral is
Z₂ = √(m/2πih̄(T − τ)) e^{i (m/2)(l² + (h − s/2)²)/h̄(T − τ)} . (11.58)
Thus, the path integral for the whole trajectory from the origin to the screen
is
Z_top slit = Z₁ Z₂ = e^{i (m/2)(l² + (h − s/2)²)/h̄(T − τ) + i (m/2)(d² + (s/2)²)/h̄τ} (m/2πih̄) (1/√(τ(T − τ))) . (11.59)
The path integral for the particle passing through the bottom slit is almost
identical, we just replace s/2 → −s/2 to the position of the bottom slit.
Then,
Z_bottom slit = Z₁ Z₂ = e^{i (m/2)(l² + (h + s/2)²)/h̄(T − τ) + i (m/2)(d² + (s/2)²)/h̄τ} (m/2πih̄) (1/√(τ(T − τ))) . (11.60)
(b) We can then sum together the two path integrals of the particle passing
through the top and bottom slits to get the total path integral Z:
Z = Z_top slit + Z_bottom slit (11.61)
  = e^{i (m/2)(d² + (s/2)²)/h̄τ} (m/2πih̄) (1/√(τ(T − τ))) ( e^{i (m/2)(l² + (h − s/2)²)/h̄(T − τ)} + e^{i (m/2)(l² + (h + s/2)²)/h̄(T − τ)} )
  = e^{i (m/2)(d² + (s/2)²)/h̄τ + i (m/2)(l² + h² + (s/2)²)/h̄(T − τ)} (m/2πih̄) (1/√(τ(T − τ))) ( e^{−i m hs/2h̄(T − τ)} + e^{i m hs/2h̄(T − τ)} )
  = e^{i (m/2)(d² + (s/2)²)/h̄τ + i (m/2)(l² + h² + (s/2)²)/h̄(T − τ)} (m/πih̄) (1/√(τ(T − τ))) cos( mhs/2h̄(T − τ) ) .
That is, the probability amplitude varies sinusoidally with the distance h
from the center of the screen. The center of the screen is bright (h = 0),
and then there is a dark spot, then a light one, and so on, as |h| increases away from 0.
This analysis is restricted to a fixed time T − τ for the particle to travel
from the slit to the screen; in general we would need to integrate over τ,
because we perform no measurement to determine the time τ at which the
particle passes through the slit. However, that integral would just modulate
the amplitude of the interference pattern on the screen, with no effect on
its broad features.
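As a quick numerical illustration of this fringe structure, here is a sketch in Python using made-up unit values (m = ħ = 1, slit separation s = 2, time of flight T − τ = 1; none of these come from the problem):

```python
import math

# Hypothetical unit values for illustration only.
m, hbar, s, T_minus_tau = 1.0, 1.0, 2.0, 1.0

def intensity(h):
    # |Z|^2 is proportional to cos^2 of the phase m*s*h / (2*hbar*(T - tau)).
    return math.cos(m * s * h / (2 * hbar * T_minus_tau)) ** 2

print(intensity(0.0))  # 1.0: the center of the screen is bright

# The first dark fringe sits where the argument reaches pi/2,
# i.e. at h = pi * hbar * (T - tau) / (m * s).
h_dark = math.pi * hbar * T_minus_tau / (m * s)
print(intensity(h_dark) < 1e-20)  # True: dark spot
```

The fringe spacing shrinks as the slit separation s grows, as expected for two-slit interference.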
11.8 (a) The Euler-Lagrange equations that we would find for this Lagrangian
are simply augmented by accounting for the spatial derivative of the
wavefunction:

\frac{\partial L}{\partial \psi^*} - \frac{\partial}{\partial t}\frac{\partial L}{\partial \dot{\psi}^*} - \frac{\partial}{\partial x}\frac{\partial L}{\partial \psi^{*\prime}} = 0 . (11.62)

From the given Lagrangian, the derivatives are:

\frac{\partial L}{\partial \psi^*} = i\hbar\dot{\psi} - V(x)\psi , \qquad \frac{\partial L}{\partial \dot{\psi}^*} = 0 , \qquad \frac{\partial}{\partial x}\frac{\partial L}{\partial \psi^{*\prime}} = \frac{\partial}{\partial x}\left( -\frac{\hbar^2}{2m}\psi' \right) = -\frac{\hbar^2}{2m}\psi'' . (11.63)

Then, the Euler-Lagrange equations are

\frac{\partial L}{\partial \psi^*} - \frac{\partial}{\partial t}\frac{\partial L}{\partial \dot{\psi}^*} - \frac{\partial}{\partial x}\frac{\partial L}{\partial \psi^{*\prime}} = 0 = i\hbar\dot{\psi} - V(x)\psi + \frac{\hbar^2}{2m}\psi'' , (11.64)
or that

i\hbar\dot{\psi} = -\frac{\hbar^2}{2m}\psi'' + V(x)\psi , (11.65)

exactly the Schrödinger equation in position space.
(b) The variation of this Lagrangian with respect to ψ produces the Euler-
Lagrange equations:

\frac{\partial L}{\partial \psi} - \frac{\partial}{\partial t}\frac{\partial L}{\partial \dot{\psi}} - \frac{\partial}{\partial x}\frac{\partial L}{\partial \psi'} = 0 . (11.66)

The derivatives are now

\frac{\partial L}{\partial \psi} = -V(x)\psi^* , \qquad \frac{\partial}{\partial t}\frac{\partial L}{\partial \dot{\psi}} = i\hbar\dot{\psi}^* , \qquad \frac{\partial}{\partial x}\frac{\partial L}{\partial \psi'} = -\frac{\hbar^2}{2m}\psi^{*\prime\prime} . (11.67)

The Euler-Lagrange equation for ψ* is then

\frac{\partial L}{\partial \psi} - \frac{\partial}{\partial t}\frac{\partial L}{\partial \dot{\psi}} - \frac{\partial}{\partial x}\frac{\partial L}{\partial \psi'} = 0 = -V(x)\psi^* - i\hbar\dot{\psi}^* + \frac{\hbar^2}{2m}\psi^{*\prime\prime} , (11.68)

or that

-i\hbar\dot{\psi}^* = -\frac{\hbar^2}{2m}\psi^{*\prime\prime} + V(x)\psi^* , (11.69)
which is clearly the complex conjugate of the Schrödinger equation.
(c) If ψ is an energy eigenstate, then it necessarily satisfies the Schrödinger
equation. In that case, we can re-write the action using integration by parts
as

S[\psi] = \int_0^T dt \int_{-\infty}^{\infty} dx \left( i\hbar\psi^*\dot{\psi} - \frac{\hbar^2}{2m}\psi^{*\prime}\psi' - V(x)\psi^*\psi \right) (11.70)
       = \int_0^T dt \int_{-\infty}^{\infty} dx \left( -\frac{\hbar^2}{2m}\frac{\partial}{\partial x}\left( \psi^*\psi' \right) + i\hbar\psi^*\dot{\psi} + \frac{\hbar^2}{2m}\psi^*\psi'' - V(x)\psi^*\psi \right)
       = -\frac{\hbar^2}{2m} \int_0^T dt \int_{-\infty}^{\infty} dx\, \frac{\partial}{\partial x}\left( \psi^*\psi' \right) ,

where we have eliminated the terms that vanish assuming the Schrödinger
equation. Then, by the fundamental theorem of calculus, the action is

S[\psi] = -\frac{\hbar^2}{2m} \int_0^T dt \int_{-\infty}^{\infty} dx\, \frac{\partial}{\partial x}\left( \psi^*\psi' \right) = -\frac{\hbar^2}{2m} \int_0^T dt \left[ \psi^*\psi' \right]_{x=-\infty}^{x=\infty} = 0 . (11.71)
Now, with the assumption that ψ is an energy eigenstate wavefunction, it
must be L²-normalizable, and therefore it and its first derivative must vanish
as x → ±∞. So, the action on an energy eigenstate is 0.
12 The Density Matrix
Exercises
12.1 (a) For two independent systems A and B, their combined density matrix is the
tensor product of their individual density matrices:
ρAB = ρA ⊗ ρB . (12.1)
Inserting this into the definition of the Rényi entropy, we have

S_{AB}^{(\alpha)} = \frac{\log \text{tr}\, \rho_{AB}^{\alpha}}{1-\alpha} = \frac{\log \text{tr}\, (\rho_A \otimes \rho_B)^{\alpha}}{1-\alpha} = \frac{\log \text{tr}\, (\rho_A^{\alpha} \otimes \rho_B^{\alpha})}{1-\alpha} (12.2)
 = \frac{\log \text{tr}\, \rho_A^{\alpha} + \log \text{tr}\, \rho_B^{\alpha}}{1-\alpha}
 = S_A^{(\alpha)} + S_B^{(\alpha)} ,

so indeed Rényi entropies for independent systems are additive, saturating subadditivity.
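This additivity is easy to confirm numerically; here is a sketch with numpy, using made-up diagonal density matrices (any valid ρ_A and ρ_B would do):

```python
import numpy as np

def renyi(rho, alpha):
    # S^(alpha) = log tr(rho^alpha) / (1 - alpha)
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]   # drop numerically zero eigenvalues
    return np.log(np.sum(eigs ** alpha)) / (1 - alpha)

# Made-up density matrices for two independent systems.
rho_A = np.diag([0.7, 0.3])
rho_B = np.diag([0.5, 0.25, 0.25])
rho_AB = np.kron(rho_A, rho_B)  # joint density matrix of the independent pair

alpha = 1.6
lhs = renyi(rho_AB, alpha)
rhs = renyi(rho_A, alpha) + renyi(rho_B, alpha)
print(abs(lhs - rhs) < 1e-10)   # True: the Rényi entropies add
```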
(b) Again, the density matrix that demonstrated violation of subadditivity for
Rényi entropies was

\rho_{12} = \frac{1}{2} |\uparrow_1\uparrow_2\rangle\langle\uparrow_1\uparrow_2| + \frac{1}{4} |\uparrow_1\downarrow_2\rangle\langle\uparrow_1\downarrow_2| + \frac{1}{4} |\downarrow_1\uparrow_2\rangle\langle\downarrow_1\uparrow_2| . (12.3)

Its partial traces are

\rho_1 = \text{tr}_2\, \rho_{12} = \langle\uparrow_2|\rho_{12}|\uparrow_2\rangle + \langle\downarrow_2|\rho_{12}|\downarrow_2\rangle = \frac{3}{4} |\uparrow_1\rangle\langle\uparrow_1| + \frac{1}{4} |\downarrow_1\rangle\langle\downarrow_1| , (12.4)
\rho_2 = \text{tr}_1\, \rho_{12} = \langle\uparrow_1|\rho_{12}|\uparrow_1\rangle + \langle\downarrow_1|\rho_{12}|\downarrow_1\rangle = \frac{3}{4} |\uparrow_2\rangle\langle\uparrow_2| + \frac{1}{4} |\downarrow_2\rangle\langle\downarrow_2| .
All of these density matrices are diagonal, so calculating the Rényi
entropies is simple. For the combined system, we find

S_{12}^{(\alpha)} = \frac{\log \text{tr}\, \rho_{12}^{\alpha}}{1-\alpha} = \frac{\log\left( \frac{1}{2^{\alpha}} + \frac{1}{4^{\alpha}} + \frac{1}{4^{\alpha}} \right)}{1-\alpha} = \frac{\log\left( \frac{1}{2^{\alpha}} + \frac{2}{4^{\alpha}} \right)}{1-\alpha} . (12.5)

The entropies of the two subsystems are equal to each other and are

S_1^{(\alpha)} = S_2^{(\alpha)} = \frac{\log \text{tr}\, \rho_1^{\alpha}}{1-\alpha} = \frac{\log\left( \frac{3^{\alpha}}{4^{\alpha}} + \frac{1}{4^{\alpha}} \right)}{1-\alpha} . (12.6)

Saturating the subadditivity inequality means that

S_{12}^{(\alpha)} = S_1^{(\alpha)} + S_2^{(\alpha)} , (12.7)
or that

\log\left( \frac{1}{2^{\alpha}} + \frac{2}{4^{\alpha}} \right) = 2 \log\left( \frac{3^{\alpha}}{4^{\alpha}} + \frac{1}{4^{\alpha}} \right) . (12.8)
Exponentiating both sides, we then have

\frac{1}{2^{\alpha}} + \frac{2}{4^{\alpha}} = \left( \frac{3^{\alpha}}{4^{\alpha}} + \frac{1}{4^{\alpha}} \right)^2 = \frac{9^{\alpha}}{16^{\alpha}} + 2\,\frac{3^{\alpha}}{16^{\alpha}} + \frac{1}{16^{\alpha}} . (12.9)

Rearranging, after multiplying both sides by 16^α, this is

8^{\alpha} + 2 \cdot 4^{\alpha} = 9^{\alpha} + 2 \cdot 3^{\alpha} + 1 . (12.10)
This is a transcendental equation whose solution α cannot be
expressed in closed form. Nevertheless, we can approximate it by plotting
both sides of the equation and looking for where the curves intersect. The
approximate value of α at which this equality holds is α ≈ 1.6.
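The root can also be located by bisection; a sketch (note that α = 1 solves eq. (12.10) trivially, since 8 + 8 = 9 + 6 + 1, so the bracket is chosen away from 1):

```python
# Bisection for the nontrivial solution of 8^a + 2*4^a = 9^a + 2*3^a + 1.
def f(a):
    return 8**a + 2 * 4**a - 9**a - 2 * 3**a - 1

lo, hi = 1.3, 2.0          # f(1.3) > 0 and f(2.0) < 0 bracket the root
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(round(lo, 2))        # 1.6
```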
(c) Let's consider the density matrix of the form

\rho_{12} = (1-2\epsilon) |\uparrow_1\uparrow_2\rangle\langle\uparrow_1\uparrow_2| + \epsilon |\uparrow_1\downarrow_2\rangle\langle\uparrow_1\downarrow_2| + \epsilon |\downarrow_1\uparrow_2\rangle\langle\downarrow_1\uparrow_2| , (12.11)

for some ϵ ∈ [0, 1/2]. The Rényi entropy of this density matrix, and the sum
of the entropies of its two partial traces, are
(d) With an appropriate choice of basis, the most general density matrix of two
spins is diagonal, with entries a, b, c, d. Its Rényi entropy is

S_{12}^{(\alpha)} = \frac{1}{1-\alpha} \log\left( a^{\alpha} + b^{\alpha} + c^{\alpha} + d^{\alpha} \right) . (12.19)

The sum of the Rényi entropies of the partial traces is then

S_1^{(\alpha)} + S_2^{(\alpha)} = \frac{1}{1-\alpha} \left[ \log\left( (a+b)^{\alpha} + (c+d)^{\alpha} \right) + \log\left( (a+c)^{\alpha} + (b+d)^{\alpha} \right) \right] . (12.20)
Now, let's consider ϵ very small, so that we can Taylor expand both sides of
this inequality to lowest order in ϵ, assuming that α < 1 and that 1 − α is much
larger than ϵ. Taylor expanding, we find that the second term of the expansion is positive as long as 3 − 2^{1+α} > 0, or

\alpha < \frac{\log 3}{\log 2} - 1 \approx 0.585 . (12.26)

Therefore, for sufficiently small ϵ, this inequality is satisfied and sub-
additivity is violated for α ≲ 0.585. This example can be extended and
generalized to establish violation of subadditivity for all Rényi entropies
with 0 < α < 1.
12.2 (a) If the systems A, B, and C are all independent, then the entropy of the joint
system ABC, for example, is the sum of the individual entropies:

S_{vN}^{ABC} = S_{vN}^{A} + S_{vN}^{B} + S_{vN}^{C} . (12.27)

Using this result, the strong subadditivity condition for independent sys-
tems just reduces to

S_{vN}^{ABC} + S_{vN}^{B} = S_{vN}^{A} + 2 S_{vN}^{B} + S_{vN}^{C} \leq S_{vN}^{AB} + S_{vN}^{BC} = S_{vN}^{A} + 2 S_{vN}^{B} + S_{vN}^{C} , (12.28)

which is saturated: both sides are equal.
Next, if ABC is pure, then the entropies of B and AC are equal. So, the
inequality becomes

S_{vN}^{AC} \leq S_{vN}^{AB} + S_{vN}^{BC} . (12.30)

Then, continuing with this argument, the entropies of AB and C are equal,
as are those of BC and A. Then,

S_{vN}^{AC} \leq S_{vN}^{AB} + S_{vN}^{BC} = S_{vN}^{C} + S_{vN}^{A} , (12.31)

but this is just the condition of subadditivity for systems A, C, and AC.
(c) First, checking the normalization of this pure state, we have

\langle\psi|\psi\rangle = \frac{1}{3} \left( \langle\uparrow_1|\langle\downarrow_2|\langle\downarrow_3| + \langle\downarrow_1|\langle\uparrow_2|\langle\downarrow_3| + \langle\downarrow_1|\langle\downarrow_2|\langle\uparrow_3| \right) \left( |\uparrow_1\rangle|\downarrow_2\rangle|\downarrow_3\rangle + |\downarrow_1\rangle|\uparrow_2\rangle|\downarrow_3\rangle + |\downarrow_1\rangle|\downarrow_2\rangle|\uparrow_3\rangle \right)
 = \frac{1}{3} \left( \langle\uparrow_1|\uparrow_1\rangle\langle\downarrow_2|\downarrow_2\rangle\langle\downarrow_3|\downarrow_3\rangle + \langle\downarrow_1|\downarrow_1\rangle\langle\uparrow_2|\uparrow_2\rangle\langle\downarrow_3|\downarrow_3\rangle + \langle\downarrow_1|\downarrow_1\rangle\langle\downarrow_2|\downarrow_2\rangle\langle\uparrow_3|\uparrow_3\rangle \right)
 = 1 , (12.32)

as expected.
The reduced density matrices are
where we have ignored the completely 0 rows and columns. The eigenvalues
of this matrix are 2/3, 1/3 and 0. Therefore, its von Neumann entropy is
S_{vN}^{12} = -\frac{1}{3} \log\frac{1}{3} - \frac{2}{3} \log\frac{2}{3} = \log 3 - \frac{2}{3} \log 2 . (12.39)
By the simple, symmetric form of the initial pure state, the entropy of the
system 23 is identical:

S_{vN}^{23} = S_{vN}^{12} = \log 3 - \frac{2}{3} \log 2 . (12.40)

Then, the inequality we find is

\log 3 - \frac{2}{3} \log 2 \leq 2 \left( \log 3 - \frac{2}{3} \log 2 \right) , (12.41)

or that

0 \leq \log 3 - \frac{2}{3} \log 2 \approx 0.636514 , (12.42)
which is true.
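The entropy of the 12 subsystem can also be verified directly by constructing the state and partial tracing numerically; a sketch with numpy:

```python
import numpy as np

# Build the three-spin state (|udd> + |dud> + |ddu>)/sqrt(3).
up = np.array([1.0, 0.0])
dn = np.array([0.0, 1.0])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

psi = (kron3(up, dn, dn) + kron3(dn, up, dn) + kron3(dn, dn, up)) / np.sqrt(3)
rho = np.outer(psi, psi)

# Partial trace over spin 3: view rho as a (2,2,2) x (2,2,2) tensor and
# contract the third index pair.
rho12 = np.einsum('abkcdk->abcd', rho.reshape(2, 2, 2, 2, 2, 2)).reshape(4, 4)

p = np.linalg.eigvalsh(rho12)
p = p[p > 1e-12]                  # eigenvalues 2/3 and 1/3 survive
S12 = -np.sum(p * np.log(p))
print(np.isclose(S12, np.log(3) - 2 / 3 * np.log(2)))  # True, ~0.636514
```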
12.3 (a) Just considering the first three energy eigenstates, the partition function for
the hydrogen atom is

Z \approx e^{-\beta E_0} + 4 e^{-\beta E_1} + 9 e^{-\beta E_2} , (12.43)

where E_0 is the ground-state energy and E_1 and E_2 are the energies of the first and second excited
states. The factors of 4 and 9 are the degeneracies of the excited states: there
are 4 orthogonal states on the Hilbert space that each have energy E_1,
for example. Additionally, from the structure of the energy spectrum of
hydrogen, we know that

E_1 = \frac{E_0}{4} , \qquad E_2 = \frac{E_0}{9} . (12.44)

Thus, the partition function is approximately

Z \approx e^{-\beta E_0} + 4 e^{-\beta E_0/4} + 9 e^{-\beta E_0/9} . (12.45)
Demanding that 50% of the hydrogen is in the ground state means that the
Boltzmann factor for the ground state is half of the partition function:

\frac{1}{2} = \frac{e^{-\beta E_0}}{Z} \approx \frac{e^{-\beta E_0}}{e^{-\beta E_0} + 4 e^{-\beta E_0/4} + 9 e^{-\beta E_0/9}} = \frac{1}{1 + 4 e^{3\beta E_0/4} + 9 e^{8\beta E_0/9}} , (12.46)

or that

1 \approx 4 e^{3\beta E_0/4} + 9 e^{8\beta E_0/9} . (12.47)

For right now, we will measure β in units of the inverse ground-state energy,
β = −x/E_0, for some unitless number x (remember that E_0 < 0). Then, the
equation to solve for 50% of the hydrogen in the ground state is

1 \approx 4 e^{-3x/4} + 9 e^{-8x/9} . (12.48)
To approximately solve for x here, we can plot both sides of the equa-
tion and then see where they intersect. We find that x ≈ 3, or that the
temperature T at which half of the hydrogen is in the ground state is

k_B T \approx -\frac{E_0}{3} \approx \frac{13.6}{3}\ \text{eV} \approx 4.5\ \text{eV} . (12.49)

This is a temperature of about 50,000 K, a bit less than our naive estimate
of the temperature of recombination from chapter 9. If you include more
and more energy eigenstates in the evaluation of the partition function, this
temperature further decreases.
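The intersection point of eq. (12.48) can also be found by bisection; a sketch:

```python
import math

# Bisection for 1 = 4 exp(-3x/4) + 9 exp(-8x/9), with x = -beta*E0.
def g(x):
    return 4 * math.exp(-3 * x / 4) + 9 * math.exp(-8 * x / 9) - 1

lo, hi = 1.0, 10.0        # g(1) > 0 and g(10) < 0 bracket the root
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
x = lo
print(round(x, 2))               # ~3.0, consistent with the estimate above
print(round(13.6 / x, 2), "eV")  # k_B T = |E0|/x, roughly 4.5 eV
```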
(b) By the ratio test, an infinite series converges if the ratio of successive terms a_n is
less than 1 as n → ∞:

\lim_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right| < 1 . (12.50)
For the Planck–Larkin partition function, we will first simplify the terms
in the large-n limit. Note that the leading contribution of the term a_n for
n → ∞ is

a_n = n^2 \left( e^{-\beta E_0/n^2} - 1 + \beta \frac{E_0}{n^2} \right) = n^2 \left( 1 - \beta \frac{E_0}{n^2} + \frac{\beta^2 E_0^2}{2 n^4} + \cdots - 1 + \beta \frac{E_0}{n^2} \right) = \frac{\beta^2 E_0^2}{2 n^2} + \mathcal{O}(n^{-4}) . (12.51)
Then, in the limit that n → ∞, the ratio test produces

\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = \lim_{n\to\infty} \frac{n^2}{(n+1)^2} = \lim_{n\to\infty} \left( 1 - \frac{2}{n} + \cdots \right) < 1 , (12.52)

and so the Planck–Larkin partition function indeed converges. (We tech-
nically used a small extension to the ratio test developed by Gauss.)
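The 1/n² falloff of the terms, eq. (12.51), can be checked numerically, taking an illustrative value βE₀ = −1 (any fixed value works; `math.expm1` avoids catastrophic cancellation for small arguments):

```python
import math

u = -1.0  # illustrative value of beta*E0

def a(n):
    # a_n = n^2 (e^{-u/n^2} - 1 + u/n^2); expm1 keeps the tiny difference accurate
    return n**2 * (math.expm1(-u / n**2) + u / n**2)

for n in (10, 100, 1000):
    print(n, a(n) / (u**2 / (2 * n**2)))  # ratio -> 1 as n grows
```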
(c) As T → 0, β → ∞. The bound-state energies of hydrogen are all negative,
E_0 < 0, and so the partition function is exponentially dominated
by the n = 1 term:

\lim_{\beta\to\infty} Z_{\text{P-L}} = \lim_{\beta\to\infty} \sum_{n=1}^{\infty} n^2 \left( e^{-\beta E_0/n^2} - 1 + \beta \frac{E_0}{n^2} \right) \to e^{-\beta E_0} . (12.53)

That is, as β → ∞, the only state that is occupied is the ground state of
hydrogen.
(d) In the high-temperature limit, T → ∞, β → 0, so we can Taylor expand each
term in the sum of the partition function. We have

\lim_{\beta\to 0} Z_{\text{P-L}} = \lim_{\beta\to 0} \sum_{n=1}^{\infty} n^2 \left( e^{-\beta E_0/n^2} - 1 + \beta \frac{E_0}{n^2} \right) = \sum_{n=1}^{\infty} \frac{\beta^2 E_0^2}{2 n^2} = \frac{\pi^2}{12} \beta^2 E_0^2 . (12.54)
In the high-temperature limit, electrons should not be bound into hydro-
gen, but should instead be free particles. There is a continuous infinity of
free, scattering states, and so the probability that any bound state is occu-
pied should be 0. This seems to be somewhat represented by this partition
function, through the fact that it vanishes. However, taking a naive ratio
to determine the occupancy of the ground state in this limit is very strange.
The ratio of the n = 1 term to the full partition function should be a meas-
ure of the occupancy of the ground state, and with this prescription we
have

\frac{\beta^2 E_0^2 / 2}{\pi^2 \beta^2 E_0^2 / 12} = \frac{6}{\pi^2} \approx 0.608 , (12.55)
which is very large. If instead we took the ratio of the naive ground state
Boltzmann factor with this partition function, we get an even stranger
result. Note that

\lim_{\beta\to 0} e^{-\beta E_0} = 1 , (12.56)

and so

\lim_{\beta\to 0} \frac{e^{-\beta E_0}}{Z_{\text{P-L}}} \to \infty . (12.57)
Z_{\text{path}} = \langle x_f | e^{-i\frac{T\hat{H}}{\hbar}} | x_i \rangle , (12.58)

the transition amplitude from an initial position x_i to a final position x_f
over time T. Now, we note that the partition function has no complex num-
bers in it, so let's eliminate the i in the exponent by complexifying time. We
will set τ = iT so that

Z_{\text{path}} \to \langle x_f | e^{-\frac{\tau\hat{H}}{\hbar}} | x_i \rangle . (12.59)
Now, let's take the product of the complexified path integral with its
complex conjugate:

Z_{\text{path}} Z_{\text{path}}^{\dagger} \to \langle x_f | e^{-\frac{\tau\hat{H}}{\hbar}} | x_i \rangle \langle x_f | e^{-\frac{\tau\hat{H}}{\hbar}} | x_i \rangle^{\dagger} = \langle x_f | e^{-\frac{\tau\hat{H}}{\hbar}} | x_i \rangle \langle x_i | e^{-\frac{\tau\hat{H}}{\hbar}} | x_f \rangle . (12.60)
The path integral encodes transition amplitudes between pairs of initial
and final points, so we should think of it as a matrix. Matrix multiplication
means that we need to integrate over the intermediate rows and columns:

Z_{\text{path}} Z_{\text{path}}^{\dagger} = \int dx_f\, \langle x_i | e^{-\frac{\tau\hat{H}}{\hbar}} | x_f \rangle \langle x_f | e^{-\frac{\tau\hat{H}}{\hbar}} | x_i \rangle \equiv \langle x | e^{-\frac{2\tau\hat{H}}{\hbar}} | x \rangle , (12.61)

where we have used the fact that the position basis is complete, and
removed the subscript i in the final expression because it holds for any posi-
tion x. This describes the transition amplitude from an initial position x
back to itself around a complex-time loop of circumference 2τ. To turn
this into the trace of the exponentially suppressed Hamiltonian operator, we
simply integrate or sum over all possible positions x. That is, the partition
function corresponds to the sum over all positions of the complex-time
loops of circumference 2τ. We identify this circumference (in units of ħ) with the inverse
temperature: β = 2τ/ħ.
(b) First, note that the partition function is a sum of Boltzmann factors over
energy eigenstates:

Z_{\text{part}} = \sum_n e^{-\beta E_n} . (12.62)

Next, we had expressed the path integral as a sum over energy eigenstate
wavefunctions, too, where

Z_{\text{path}} = \sum_n e^{-i\frac{T E_n}{\hbar}} \psi_n(x_f) \psi_n^*(x_i) , (12.63)
where the elapsed time is T, the initial position is x_i, and the final position is
x_f. As demonstrated in part (a), we just need to use completeness relations
a couple of times. First, if we complexify time, iT = τ, then the path integral
is

Z_{\text{path}} \to \sum_n e^{-\frac{\tau E_n}{\hbar}} \psi_n(x_f) \psi_n^*(x_i) . (12.64)
Next, the product of the path integral and its complex conjugate is

Z_{\text{path}} Z_{\text{path}}^{\dagger} \to \int dx_f \sum_{m,n} e^{-\frac{\tau E_n}{\hbar}} e^{-\frac{\tau E_m}{\hbar}} \psi_m(x_i) \psi_m^*(x_f) \psi_n(x_f) \psi_n^*(x_i) = \sum_n e^{-\frac{2\tau E_n}{\hbar}} |\psi_n(x_i)|^2 , (12.65)

where we have used the orthonormality of the energy eigenstate wavefunctions.
Then, integrating over position x_i renders the product of the path integral
and its complex conjugate a sum, or trace, of Boltzmann factors over all
energy eigenstates.
(c) The harmonic oscillator's path integral is, from the results of chapter 11,

Z_{\text{path}} = \sqrt{\frac{m}{2\pi i\hbar T}} \sqrt{\frac{\omega T}{\sin(\omega T)}}\, e^{i\frac{S[x_{\text{class}}]}{\hbar}} . (12.67)

The product of the path integral with its complex conjugate is then

Z_{\text{path}} Z_{\text{path}}^{\dagger} = \sqrt{\frac{m}{2\pi i\hbar T} \frac{\omega T}{\sin(\omega T)}} \sqrt{\frac{m}{2\pi (-i)\hbar T} \frac{\omega T}{\sin(\omega T)}} = \frac{m\omega}{2\pi\hbar \sin(\omega T)} . (12.68)
Now, if we complexify time, setting

T = i\frac{\tilde{T}}{2} , (12.69)

the product of path integrals becomes

Z_{\text{path}} Z_{\text{path}}^{\dagger} \to \frac{m\omega}{2\pi\hbar \sin\left( i\frac{\omega\tilde{T}}{2} \right)} = \frac{m\omega}{2\pi i\hbar \sinh\left( \frac{\omega\tilde{T}}{2} \right)} , (12.70)
Z = \sum_n e^{-\beta E_n} . (12.72)

The entropy is then

S = -\sum_n p_n \log p_n = -\frac{1}{Z} \sum_n e^{-\beta E_n} \log\frac{e^{-\beta E_n}}{Z} = \frac{1}{Z} \sum_n e^{-\beta E_n} \left( \beta E_n + \log Z \right) = \beta\langle E\rangle + \log Z , (12.73)

because

\frac{1}{Z} \sum_n e^{-\beta E_n} = 1 , (12.74)

by definition.
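This identity is easy to confirm numerically for any made-up spectrum; a sketch:

```python
import math

# Check S = beta*<E> + log Z for hypothetical energy levels.
E = [0.0, 0.5, 1.3, 2.0]    # made-up spectrum, not from the text
beta = 1.7

Z = sum(math.exp(-beta * En) for En in E)
p = [math.exp(-beta * En) / Z for En in E]

S_direct = -sum(pn * math.log(pn) for pn in p)
E_avg = sum(pn * En for pn, En in zip(p, E))
print(abs(S_direct - (beta * E_avg + math.log(Z))) < 1e-12)  # True
```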
(c) The exponential factor in the path integral that contains the energy of the
particle, in analogy to the Boltzmann factor, is of the form

e^{-\beta E_n} \to e^{-i\frac{t E_n}{\hbar}} , (12.75)

for some elapsed time t. Then, with this generous analogy, we can identify
the inverse temperature in the path integral with

\beta \to i\frac{t}{\hbar} , (12.76)
or that inverse temperature is proportional to elapsed time. Let’s see if this
makes any sense with the free particle partition function derived in the
previous chapter. Recall that the path integral is
Z = \sqrt{\frac{m}{2\pi i\hbar T}}\, e^{i\frac{m}{2}\frac{(x_f - x_i)^2}{\hbar T}} , (12.77)
for total elapsed time T in traveling from position x_i to x_f. The expectation
value of the energy, within this analogy, would be

\langle E\rangle \to -\frac{\hbar}{i} \frac{d\log Z}{dT} = i\hbar \frac{d}{dT}\left( i\frac{m}{2}\frac{(x_f - x_i)^2}{\hbar T} - \frac{1}{2}\log\frac{2\pi i\hbar T}{m} \right) (12.78)
 = i\hbar \left( -i\frac{m}{2}\frac{(x_f - x_i)^2}{\hbar T^2} - \frac{1}{2T} \right) = \frac{m}{2}\frac{(x_f - x_i)^2}{T^2} - \frac{i\hbar}{2T} .
This result is quite fascinating. Note that the ratio in the first term is just
the squared velocity v² of the classical free particle:

v^2 = \frac{(x_f - x_i)^2}{T^2} , (12.79)

and so the first term is exactly the expected kinetic energy of a classical, free
particle. The second term spoils this interpretation, but it is proportional
to ħ, and so seems to be due exclusively to quantum-mechanical effects.
12.6 (a) To find the pure states of two spin-1/2 particles that produce the maximal
entropy of entanglement, we just write the most general pure state and take
the partial trace. This most general state is

|\psi\rangle = a|\uparrow_1\uparrow_2\rangle + b|\uparrow_1\downarrow_2\rangle + c|\downarrow_1\uparrow_2\rangle + d|\downarrow_1\downarrow_2\rangle , (12.80)

for some complex numbers a, b, c, d. Taking the partial trace of particle 2
to generate the density matrix of particle 1, we have

\rho_1 = \langle\uparrow_2|\psi\rangle\langle\psi|\uparrow_2\rangle + \langle\downarrow_2|\psi\rangle\langle\psi|\downarrow_2\rangle (12.81)
 = \left( |a|^2 + |b|^2 \right) |\uparrow_1\rangle\langle\uparrow_1| + \left( |c|^2 + |d|^2 \right) |\downarrow_1\rangle\langle\downarrow_1| + \left( a c^* + b d^* \right) |\uparrow_1\rangle\langle\downarrow_1| + \left( a^* c + b^* d \right) |\downarrow_1\rangle\langle\uparrow_1| .
This is implicitly two equations, for the phase and the magnitude, but let's
just focus on the equation for the magnitude. We must require that 0 ≤ χ, η ≤ π/2 so that the phases encode all sign
information. On this restricted range both cosine and sine are invertible,
and so this can only hold true if χ = η. In this case, the original state can
be expressed as

|\psi\rangle = a|\uparrow_1\uparrow_2\rangle + b|\uparrow_1\downarrow_2\rangle + c|\downarrow_1\uparrow_2\rangle + d|\downarrow_1\downarrow_2\rangle (12.89)
 = e^{i\alpha}\cos\theta\cos\chi\, |\uparrow_1\uparrow_2\rangle + e^{i\beta}\cos\theta\sin\chi\, |\uparrow_1\downarrow_2\rangle + e^{i\gamma}\sin\theta\cos\chi\, |\downarrow_1\uparrow_2\rangle + e^{i\delta}\sin\theta\sin\chi\, |\downarrow_1\downarrow_2\rangle
 = e^{i\alpha}\cos\theta\, |\uparrow_1\rangle \left( \cos\chi\, |\uparrow_2\rangle + e^{i(\beta-\alpha)}\sin\chi\, |\downarrow_2\rangle \right) + e^{i\gamma}\sin\theta\, |\downarrow_1\rangle \left( \cos\chi\, |\uparrow_2\rangle + e^{i(\delta-\gamma)}\sin\chi\, |\downarrow_2\rangle \right) .
Note that the vanishing-determinant requirement enforced that the phases
satisfy β − α = δ − γ. Further, note that the state cos χ |↑_2⟩ + e^{i(β−α)} sin χ |↓_2⟩
is just a definite spin pointed along an axis defined by the angle χ and the
phase β − α. Let's call this spin up along the χ axis: |↑_{2,χ}⟩. Then, the state
can be expressed as

|\psi\rangle = \left( e^{i\alpha}\cos\theta\, |\uparrow_1\rangle + e^{i\gamma}\sin\theta\, |\downarrow_1\rangle \right) |\uparrow_{2,\chi}\rangle . (12.91)
The state in parentheses is again a definite spin state along an axis defined by the angle θ and phase
difference γ − α. Let's call this spin up along the θ axis: |↑_{1,θ}⟩. Finally, if
the partial-traced state is pure, then the initial state takes the form
|ψ⟩ = |ψ_{12}⟩|↑_{3,θ}⟩, where |↑_{3,θ}⟩ is the state of particle 3 pointed up along the θ axis, and |ψ_{12}⟩
is some pure state of particles 1 and 2. We can continue this logic. Now,
let’s assume that particles 1 and 3 form a pure state after partial tracing.
With this already separable form, particle 2 must also be separated:

|\psi\rangle = |\uparrow_{1,\chi}\rangle |\uparrow_{2,\eta}\rangle |\uparrow_{3,\theta}\rangle , (12.95)

which is a completely separable state of three spins pointed along three
different axes. Thus, entanglement is monogamous: only two particles can be
maximally entangled at a time.
12.7 (a) For the detected state |11⟩, where each detector is hit by an identical pho-
ton, there are two possible paths that the photons can take. First, both
photons can be transmitted through the beam splitter and then hit the
detectors. We can assume that the probability amplitude for such a path is
some value A. Alternatively, the two photons could instead have both reflected off
of the beam splitter. The probability of reflecting or transmitting is identi-
cal, and so the absolute value of the probability amplitude for this double
reflection is still |A|. However, each photon now picks up a relative phase
of π/2 as compared to the transmitted photons. Therefore, the probability
amplitude for generating the reflected contribution to |11⟩ is

A_{11} \supset e^{i\pi/2} e^{i\pi/2} A = -A , (12.96)

because there are two photons that each pick up a phase of π/2. Thus,
when the transmitted and reflected amplitudes are summed coherently, we
actually find that the probability for the state |11⟩ is 0.
(b) By contrast, the states |20⟩ and |02⟩ each have a unique path that can con-
tribute: one photon is transmitted and the other is reflected. Therefore,
quantum mechanically, they are both observed. This is exactly the opposite
of the measurements that would be made classically.
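This interference can be sketched by tracking two-photon amplitudes through the beam splitter, with each reflection contributing a factor of e^{iπ/2} = i; the dictionary bookkeeping of creation-operator monomials below is just for illustration:

```python
import math

s = 1 / math.sqrt(2)
# Represent each output operator as {(n_a, n_b): coefficient} for the
# monomial a_dag^n_a b_dag^n_b.
a_out = {(1, 0): s, (0, 1): 1j * s}   # a_dag -> (a_dag + i b_dag)/sqrt(2)
b_out = {(1, 0): 1j * s, (0, 1): s}   # b_dag -> (i a_dag + b_dag)/sqrt(2)

# Input a_dag b_dag |0>: multiply the two transformed operators,
# adding exponents of monomials and multiplying coefficients.
state = {}
for ea, ca in a_out.items():
    for eb, cb in b_out.items():
        key = (ea[0] + eb[0], ea[1] + eb[1])
        state[key] = state.get(key, 0) + ca * cb

print(abs(state[(1, 1)]))                      # 0.0: the |11> amplitude cancels
print(abs(state[(2, 0)]), abs(state[(0, 2)]))  # ~0.5 each: |20> and |02> survive
```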
12.8 (a) The first thing we will do is to recall that the partition function of the
harmonic oscillator is

Z = \frac{1}{2\sinh\frac{\beta\hbar\omega}{2}} . (12.97)

Then, the thermal average of the Hamiltonian in the harmonic oscillator
is

\langle\langle \hat{H} \rangle\rangle_{\beta} = \langle E\rangle = -\frac{1}{Z}\frac{dZ}{d\beta} = -2\sinh\left( \frac{\beta\hbar\omega}{2} \right) \frac{d}{d\beta}\frac{1}{2\sinh\frac{\beta\hbar\omega}{2}} = \frac{\hbar\omega}{2}\coth\left( \frac{\beta\hbar\omega}{2} \right) . (12.98)
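A quick numerical check of this closed form against a direct Boltzmann sum, in units where ħω = 1 and with a made-up inverse temperature:

```python
import math

beta = 0.7                  # made-up inverse temperature
levels = range(200)         # the sum converges quickly; 200 terms is plenty

# Direct Boltzmann average over E_n = n + 1/2 (units hbar*omega = 1).
Z = sum(math.exp(-beta * (n + 0.5)) for n in levels)
E_avg = sum((n + 0.5) * math.exp(-beta * (n + 0.5)) for n in levels) / Z

closed_form = 0.5 / math.tanh(beta / 2)    # (1/2) coth(beta/2)
print(abs(E_avg - closed_form) < 1e-9)     # True
```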
(b) To show that the thermal expectation values of the momentum and posi-
tion are 0, it is sufficient to note that both x̂ and p̂ are some linear
combinations of the raising and lowering operators ↠, â. In the evaluation
of the orthonormalization of the eigenstates of the harmonic oscillator, we
had established that
⟨ψn |↠|ψn ⟩ = ⟨ψn |â|ψn ⟩ = 0 . (12.99)
= \frac{\hbar^4}{4} \left( -\hat{S}_b \hat{S}_{b'} \left( \hat{S}_{\bar{b}} \hat{S}_{\bar{b}'} - \hat{S}_{\bar{b}'} \hat{S}_{\bar{b}} \right) + \hat{S}_{b'} \hat{S}_b \left( \hat{S}_{\bar{b}} \hat{S}_{\bar{b}'} - \hat{S}_{\bar{b}'} \hat{S}_{\bar{b}} \right) \right)
= -\frac{\hbar^4}{4} [\hat{S}_b, \hat{S}_{b'}] [\hat{S}_{\bar{b}}, \hat{S}_{\bar{b}'}] .
(b) This partially entangled state is

|\psi\rangle = \sqrt{w}\, |\uparrow_b \downarrow_{\bar{b}}\rangle + \sqrt{1-w}\, |\downarrow_b \uparrow_{\bar{b}}\rangle . (12.125)

As observed in eq. 12.121 on page 325 of the textbook, the operator Ĉ only
contributes to the expectation value on the state |ψ⟩ if there is a spin-flip.
A spin-flip correlates the two entangled states in |ψ⟩, and their correlation
is proportional to the product of their relative amplitudes, \sqrt{w(1-w)}. So, all
we need to do to account for this difference from the calculation in the text
is to renormalize the result of equation 12.124 by 2\sqrt{w(1-w)}. The factor
of 2 ensures that this rescaling equals 1 when w = 1 − w = 1/2, and so

\langle\psi|\hat{C}|\psi\rangle = 2\sqrt{w(1-w)}\, \hbar^2 \cos(\phi - \pi/4) . (12.126)
At the optimized angle between spin measurements, ϕ = π/4, violation of
Bell's inequalities is the statement that

\langle\psi|\hat{C}|\psi\rangle = 2\sqrt{w(1-w)}\, \hbar^2 > \frac{\hbar^2}{\sqrt{2}} . (12.127)

This is satisfied if

2 w (1-w) > \frac{1}{4} , (12.128)

or that

\frac{1}{2}\left( 1 - \frac{1}{\sqrt{2}} \right) < w < \frac{1}{2}\left( 1 + \frac{1}{\sqrt{2}} \right) . (12.129)
(c) Indeed, there are entangled states that do not violate Bell's inequalities. In
this case, this happens if either

0 < w \leq \frac{1}{2}\left( 1 - \frac{1}{\sqrt{2}} \right) , \qquad \text{or} \qquad \frac{1}{2}\left( 1 + \frac{1}{\sqrt{2}} \right) \leq w < 1 . (12.130)
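As a closing numerical check, the boundary values w = (1 ± 1/√2)/2 indeed saturate the bound of eq. (12.127) (working in units of ħ²):

```python
import math

# At w = (1 +/- 1/sqrt(2))/2, we have w(1-w) = 1/8, so
# 2*sqrt(w(1-w)) = 1/sqrt(2): exactly the Bell bound.
for sign in (+1, -1):
    w = 0.5 * (1 + sign / math.sqrt(2))
    lhs = 2 * math.sqrt(w * (1 - w))            # in units of hbar^2
    print(abs(lhs - 1 / math.sqrt(2)) < 1e-12)  # True
```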