
Prof. Dr. A. Komnik, G. Schober                                      1.5.2012

Condensed Matter Theory


Summer term 2012

Solutions to Sheet No. 1

Solution to Problem 1: Poisson distribution


The exercise introduces an apparent paradox in the Drude model by comparing the following two situations:

#1: Assume that an electron scatters at $t = 0$. What is the average time $\langle t_f \rangle$ of the following scattering event?

#2: Let $t = 0$ be an arbitrary observation time. What is the average time difference $\langle T \rangle = \langle t_f - t_\ell \rangle$ between the last ($t_\ell < 0$) and the following ($t_f > 0$) scattering event?

At first one might think that both questions should receive the same answer; however, this is not the case: assuming a Poisson distribution of scattering events, the answer to #1 is $\langle t_f \rangle = \tau$, while the answer to #2 is $\langle T \rangle = 2\tau$, as we will show below.

Consider first situation #1, i.e. we assume that an electron scatters at $t = 0$. We first calculate the probability that the electron does not scatter in the time interval $[0, t]$. Splitting the interval into infinitesimal intervals of length $dt$ (i.e. $N = t/dt$ such intervals), this probability is given by

\[
  \lim_{dt \to 0} \left( 1 - \frac{dt}{\tau} \right)^{t/dt}, \qquad (1)
\]

since $(1 - dt/\tau)$ is the probability that the electron does not scatter in one infinitesimal time interval. This equals

\[
  \lim_{dt \to 0} \left[ \left( 1 - \frac{dt}{\tau} \right)^{-\tau/dt} \right]^{-t/\tau} = e^{-t/\tau}, \qquad (2)
\]

where the limit of the expression in brackets has been evaluated as follows:
\[
  \lim_{N \to \infty} \left( 1 - \frac{1}{N} \right)^{-N} = \lim_{N \to \infty} \left( 1 + \frac{1}{N-1} \right)^{N} = e. \qquad (3)
\]
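As a quick numerical aside (not part of the original solution), the limit in Equations (1)-(3) can be checked directly; the values of $t$, $\tau$ and the step sizes $dt$ below are arbitrary illustrative choices.

```python
import math

# Check that (1 - dt/tau)**(t/dt) approaches exp(-t/tau) as dt -> 0.
# The values of tau, t and the step sizes dt are arbitrary illustrative choices.
tau, t = 1.0, 2.0
for dt in (1e-1, 1e-2, 1e-3, 1e-4):
    approx = (1.0 - dt / tau) ** (t / dt)
    print(f"dt = {dt:.0e}:  {approx:.6f}   (exact: {math.exp(-t / tau):.6f})")
```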
Now let P (t)dt be the probability that the next scattering after t = 0 occurs
in the infinitesimal time interval [t, t + dt]. This equals the probability that
the next scattering does not occur in [0, t] times the probability that it does
occur in [t, t + dt], and hence is given by
\[
  P(t)\,dt = e^{-t/\tau}\,\frac{dt}{\tau}. \qquad (4)
\]


The probability distribution is therefore given by

\[
  P(t) = \frac{e^{-t/\tau}}{\tau}, \qquad (5)
\]
which is shown in Figure 1. In particular we see that
• $\int_0^\infty P(t)\,dt = 1$,

• the most probable time of the next scattering is $t = 0$, and

• the average time of the next scattering event is

\[
  \langle t \rangle = \int_0^\infty t\, P(t)\,dt = \tau. \qquad (6)
\]

A brief numerical check of these properties follows below.
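The check is a minimal sketch, assuming exponentially distributed waiting times as derived above; the random seed and sample size are arbitrary choices.

```python
import numpy as np

# Draw exponential waiting times with mean tau and check <t> = tau.
rng = np.random.default_rng(0)          # arbitrary seed
tau = 1.0
t = rng.exponential(scale=tau, size=1_000_000)
print("sample mean <t>:", t.mean())                           # should be close to tau = 1
print("fraction with t < 0.1*tau:", (t < 0.1 * tau).mean())   # the density is largest near t = 0
```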

[Plot of the probability distribution $P(t) = e^{-t/\tau}/\tau$ for $\tau = 1$.]

Figure 1: Probability distribution for the time of a scattering event.

Notice that we have not used the fact that a scattering event actually occurs at
$t = 0$; hence $t = 0$ may just as well be an arbitrary observation time. In other words,
for the Poisson distribution the time of the next scattering event does not
depend on the scattering events that have occurred in the past.
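This memorylessness can also be illustrated numerically; the following sketch (with arbitrary choices of $\tau$, $s$ and sample size) conditions on waiting times that already exceed some time $s$ and checks that the remaining waiting time still has mean $\tau$.

```python
import numpy as np

# Memorylessness: given that no scattering occurred up to time s, the remaining
# waiting time beyond s is again exponential with mean tau.
rng = np.random.default_rng(1)          # arbitrary seed
tau, s = 1.0, 2.0                       # arbitrary choices
t = rng.exponential(scale=tau, size=1_000_000)
remaining = t[t > s] - s
print("mean remaining waiting time:", remaining.mean())  # should be close to tau
```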

Next consider situation #2. Of course we can argue as follows: according to the previous calculation, the average time of the following scattering event is given by $\langle t_f \rangle = \tau$. Similarly, the average time of the last scattering event is $\langle t_\ell \rangle = -\tau$. Hence the average time between the last and the following scattering event is $\langle T \rangle = \langle t_f \rangle - \langle t_\ell \rangle = 2\tau$.
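The two averages can also be compared in a small Monte Carlo experiment (a sketch with arbitrary parameters, not part of the original solution): generate a long train of Poisson scattering times, pick random observation times, and measure both the forward waiting time and the full gap containing each observation.

```python
import numpy as np

# Monte Carlo comparison of <t_f> and <T> = <t_f - t_l> for a Poisson event train.
rng = np.random.default_rng(2)                      # arbitrary seed
tau, n_events, n_obs = 1.0, 2_000_000, 100_000      # arbitrary sizes
times = np.cumsum(rng.exponential(scale=tau, size=n_events))
obs = rng.uniform(times[0], times[-1], size=n_obs)  # random observation times
idx = np.searchsorted(times, obs, side="right")     # first event after each observation
t_f = times[idx] - obs                              # forward waiting time
t_l = obs - times[idx - 1]                          # time elapsed since the last event
print("<t_f>      :", t_f.mean())                   # should be close to tau
print("<t_f - t_l>:", (t_f + t_l).mean())           # full gap, should be close to 2*tau
```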


We will now calculate the probability distribution of $T$ and rederive this result from it. Let $P(t_\ell, t_f)$ be the joint probability distribution that the previous scattering occurs at $t_\ell$ and the following one at $t_f$. Obviously,

\[
  P(t_\ell, t_f) = \frac{e^{-|t_\ell|/\tau}}{\tau}\,\frac{e^{-t_f/\tau}}{\tau}. \qquad (7)
\]
With $T = t_f - t_\ell$, the joint probability that the last scattering occurs at $t_\ell$ and the time between the two scattering events is $T$ is given by

\[
  \tilde P(t_\ell, T) = P(t_\ell, T + t_\ell) = \frac{e^{-|t_\ell|/\tau}}{\tau}\,\frac{e^{-(T + t_\ell)/\tau}}{\tau}. \qquad (8)
\]

Since $t_\ell < 0$, this is just

\[
  \tilde P(t_\ell, T) = \frac{e^{-T/\tau}}{\tau^2}. \qquad (9)
\]
As we are only interested in the probability distribution of $T$, we can integrate over all possible values of $t_\ell$ (where $t_\ell > -T$ must hold since $t_f > 0$):

\[
  \tilde P(T) = \int_{-T}^{0} \tilde P(t_\ell, T)\, dt_\ell = \frac{T\, e^{-T/\tau}}{\tau^2}. \qquad (10)
\]

This is the probability distribution for the time between the last and the following scattering event (see Figure 2). In particular we see that

• $\int_0^\infty \tilde P(T)\,dT = 1$,

• the most probable time difference between the last and the following scattering event is $T = \tau$, and

• the average time between these two scattering events is

\[
  \langle T \rangle = \int_0^\infty T\, \tilde P(T)\,dT = 2\tau. \qquad (11)
\]

A numerical check of these properties follows below.
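The sketch below uses numerical integration with scipy (assumed to be available); $\tau = 1$ is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import quad

# Check normalization and mean of P~(T) = T*exp(-T/tau)/tau**2.
tau = 1.0
P = lambda T: T * np.exp(-T / tau) / tau**2
norm, _ = quad(P, 0, np.inf)
mean, _ = quad(lambda T: T * P(T), 0, np.inf)
print("normalization:", norm)   # should be 1
print("<T>          :", mean)   # should be 2*tau = 2
```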

In summary, Problem 1 (a) is shown by Equation (2), (b) by Equation (4), (c) and (d) by Equation (6) (since all electrons have the same probability distribution, we can calculate the average over all electrons from the probability distribution of a single electron), and (e) by Equation (11).



[Plot of the probability distribution $\tilde P(T) = T\, e^{-T/\tau}/\tau^2$ for $\tau = 1$.]

Figure 2: Probability distribution for the time difference between the last
and the following scattering event.

Solution to Problem 2: Sommerfeld expansion


Consider the integral

\[
  I = \int_{-\infty}^{\infty} g(\epsilon)\, n_F(\epsilon)\, d\epsilon \qquad (12)
\]

with $n_F(\epsilon) = 1/[e^{\beta(\epsilon-\mu)} + 1]$ and $\beta = 1/T$. We will show the more general result

\[
  I = \int_{-\infty}^{\mu} g(\epsilon)\, d\epsilon + \sum_{n=1}^{\infty} \left( 2 - 2^{2-2n} \right) \zeta(2n)\, \frac{g^{(2n-1)}(\mu)}{\beta^{2n}}, \qquad (13)
\]

where $g^{(2n-1)}(\mu)$ denotes the $(2n-1)$-th derivative of $g$ evaluated at $\mu$, and

\[
  \zeta(n) = \sum_{k=1}^{\infty} \frac{1}{k^n} \qquad (14)
\]

is the Riemann $\zeta$-function. For the first terms, in particular, we obtain with $\zeta(2) = \pi^2/6$ and $\zeta(4) = \pi^4/90$,

\[
  I = \int_{-\infty}^{\mu} g(\epsilon)\, d\epsilon + \frac{\pi^2}{6}\, T^2 g'(\mu) + \frac{7\pi^4}{360}\, T^4 g'''(\mu) + \ldots \qquad (15)
\]
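As an illustration (not part of the original solution), Equation (15) can be checked numerically at low temperature for a hypothetical test function, here $g(\epsilon) = \sqrt{\epsilon}$ for $\epsilon > 0$ and zero otherwise; the values of $\mu$ and $T$ are arbitrary, and scipy is assumed to be available.

```python
import numpy as np
from scipy.integrate import quad

# Compare the integral I with the first terms of the Sommerfeld expansion (15)
# for the test function g(eps) = sqrt(eps), eps > 0 (an arbitrary choice).
mu, T = 1.0, 0.05                     # arbitrary low-temperature parameters
beta = 1.0 / T
n_F = lambda e: 1.0 / (np.exp(beta * (e - mu)) + 1.0)
g = lambda e: np.sqrt(e)

numeric, _ = quad(lambda e: g(e) * n_F(e), 0, mu + 50 * T)   # tail beyond is negligible
leading = 2.0 / 3.0 * mu**1.5                                # integral of g from 0 to mu
first = np.pi**2 / 6 * T**2 * 0.5 * mu**-0.5                 # (pi^2/6) T^2 g'(mu)
second = 7 * np.pi**4 / 360 * T**4 * (3.0 / 8.0) * mu**-2.5  # (7 pi^4/360) T^4 g'''(mu)
print("numerical integral  :", numeric)
print("Sommerfeld expansion:", leading + first + second)
```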

To prove Equation (13) we define the function

\[
  G(\epsilon) = \int_{-\infty}^{\epsilon} g(\epsilon')\, d\epsilon', \qquad (16)
\]


such that $G'(\epsilon) = g(\epsilon)$. We then use partial integration (the boundary term vanishes, since $n_F(\epsilon) \to 0$ for $\epsilon \to +\infty$ and $G(\epsilon) \to 0$ for $\epsilon \to -\infty$),

\[
  I = \int_{-\infty}^{\infty} \frac{dG(\epsilon)}{d\epsilon}\, n_F(\epsilon)\, d\epsilon = \int_{-\infty}^{\infty} G(\epsilon) \left( -\frac{dn_F(\epsilon)}{d\epsilon} \right) d\epsilon. \qquad (17)
\]
Next we expand $G(\epsilon)$ in a Taylor series around $\mu$,

\[
  G(\epsilon) = G(\mu) + \sum_{n=1}^{\infty} \frac{1}{n!}\, G^{(n)}(\mu)\, (\epsilon - \mu)^n, \qquad (18)
\]

and expand the Fermi function as a geometric series: if $\epsilon < \mu$,

\[
  \frac{1}{e^{\beta(\epsilon-\mu)} + 1} = \sum_{k=0}^{\infty} (-1)^k\, e^{k\beta(\epsilon-\mu)}, \qquad (19)
\]

and if $\epsilon > \mu$,

\[
  \frac{1}{e^{\beta(\epsilon-\mu)} + 1} = \frac{e^{-\beta(\epsilon-\mu)}}{1 + e^{-\beta(\epsilon-\mu)}} = \sum_{k=1}^{\infty} (-1)^{k-1}\, e^{-k\beta(\epsilon-\mu)}. \qquad (20)
\]

This gives for the derivative: if $\epsilon < \mu$,

\[
  -\frac{dn_F(\epsilon)}{d\epsilon} = \sum_{k=1}^{\infty} (-1)^{k+1}\, (k\beta)\, e^{k\beta(\epsilon-\mu)}, \qquad (21)
\]

and if $\epsilon > \mu$,

\[
  -\frac{dn_F(\epsilon)}{d\epsilon} = \sum_{k=1}^{\infty} (-1)^{k+1}\, (k\beta)\, e^{-k\beta(\epsilon-\mu)}. \qquad (22)
\]
Putting (18), (21) and (22) into (17) yields

\[
  I = G(\mu) + \sum_{n=1}^{\infty} \frac{1}{n!}\, G^{(n)}(\mu) \sum_{k=1}^{\infty} (-1)^{k+1} \left[ \int_{-\infty}^{\mu} (\epsilon-\mu)^n\, (k\beta)\, e^{k\beta(\epsilon-\mu)}\, d\epsilon + \int_{\mu}^{\infty} (\epsilon-\mu)^n\, (k\beta)\, e^{-k\beta(\epsilon-\mu)}\, d\epsilon \right]. \qquad (23)
\]

Substituting $x = k\beta(\epsilon - \mu)$, we obtain

\[
  I = G(\mu) + \sum_{n=1}^{\infty} \frac{1}{n!}\, G^{(n)}(\mu) \sum_{k=1}^{\infty} (-1)^{k+1}\, \frac{1}{(k\beta)^n} \left[ \int_{-\infty}^{0} x^n\, e^{x}\, dx + \int_{0}^{\infty} x^n\, e^{-x}\, dx \right]. \qquad (24)
\]

The term in brackets equals $n!\, [(-1)^n + 1]$, and hence we are left with

\[
  I = G(\mu) + 2 \sum_{n\ \mathrm{even}} \frac{G^{(n)}(\mu)}{\beta^n} \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^n}. \qquad (25)
\]
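The value of the bracket can be verified numerically for a few values of $n$ (a small sketch, assuming scipy is available):

```python
import math
from scipy.integrate import quad

# The bracket in Eq. (24) should equal n!*((-1)**n + 1): zero for odd n, 2*n! for even n.
for n in range(1, 7):
    neg, _ = quad(lambda x: x**n * math.exp(x), -math.inf, 0)
    pos, _ = quad(lambda x: x**n * math.exp(-x), 0, math.inf)
    print(n, round(neg + pos, 6), math.factorial(n) * ((-1)**n + 1))
```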


Consider the alternating sum on the right-hand side. We have

\[
  \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^n} = \sum_{k\ \mathrm{odd}} \frac{1}{k^n} - \sum_{k\ \mathrm{even}} \frac{1}{k^n} = \sum_{k=1}^{\infty} \frac{1}{k^n} - 2 \sum_{k\ \mathrm{even}} \frac{1}{k^n}, \qquad (26)
\]

and further,

\[
  \sum_{k\ \mathrm{even}} \frac{1}{k^n} = \sum_{k=1}^{\infty} \frac{1}{(2k)^n} = \frac{1}{2^n} \sum_{k=1}^{\infty} \frac{1}{k^n}, \qquad (27)
\]

hence

\[
  \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^n} = \left( 1 - 2^{1-n} \right) \sum_{k=1}^{\infty} \frac{1}{k^n} = \left( 1 - 2^{1-n} \right) \zeta(n). \qquad (28)
\]
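Numerically, identity (28) can be checked by comparing a truncated alternating sum against $(1 - 2^{1-n})$ times a truncated $\zeta$-sum; the value of $n$ and the cutoff below are arbitrary choices.

```python
# Check Eq. (28) for one value of n by truncating both sums (arbitrary choices).
n, kmax = 4, 100_000
alternating = sum((-1) ** (k + 1) / k**n for k in range(1, kmax + 1))
zeta_n = sum(1.0 / k**n for k in range(1, kmax + 1))
print(alternating, (1 - 2 ** (1 - n)) * zeta_n)
```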

Inserting (28) into (25), we thus obtain

\[
  I = G(\mu) + \sum_{n\ \mathrm{even}} \frac{G^{(n)}(\mu)}{\beta^n} \left( 2 - 2^{2-n} \right) \zeta(n), \qquad (29)
\]

or equivalently,

\[
  I = \int_{-\infty}^{\mu} g(\epsilon)\, d\epsilon + \sum_{n=1}^{\infty} \frac{g^{(2n-1)}(\mu)}{\beta^{2n}} \left( 2 - 2^{2-2n} \right) \zeta(2n), \qquad (30)
\]

which is the assertion.
