Statistical Mechanics
Girish S Setlur
Department of Physics
Indian Institute of Technology Guwahati
Guwahati, Assam 781039
Ludwig Eduard Boltzmann (February 20, 1844 – September 5,
1906) was an Austrian physicist and philosopher whose greatest
achievement was in the development of statistical mechanics, which
explains and predicts how the properties of atoms (such as mass,
charge, and structure) determine the physical properties of matter
(such as viscosity, thermal conductivity, and diffusion).
Source: Wikipedia
History
Historically, Thermodynamics was the conceptual precursor to Statistical Mechanics (source: Stephen Wolfram “A New Kind of Science”)
▪ Antiquity - 1600 AD : Notions of heat and temperature widely accepted. It was believed at the time that heat was associated with the motion of
microscopic constituents of matter.
▪ 1700 - 1840 : Later, the incorrect notion that heat was instead a fluid-like substance became more popular.
▪ 1850 : Experiments of James Joule and others showed that heat is a form of energy.
▪ Sadi Carnot had explained the relation between heat and energy. This was important in the development of steam engines.
▪ In 1850 Rudolf Clausius and William Thomson (Lord Kelvin) formulated the First Law, the idea that total energy is conserved. The Second Law of Thermodynamics, the idea that heat cannot be completely converted to work, was also formulated.
▪ In 1738 Daniel Bernoulli had pointed out that gases consist of molecules in motion. Clausius revived this idea in 1857.
▪ In 1860, James Clerk Maxwell derived the expected distribution of molecular speeds in a gas by taking into account molecular collisions.
▪ In 1872 Ludwig Boltzmann constructed an equation that he thought could describe the detailed time evolution of a gas regardless of whether it was
in equilibrium or not.
▪ In the 1860s, Clausius had introduced entropy as a ratio of heat to temperature, and had stated the Second Law in terms of the increase of this
quantity.
▪ Boltzmann then showed that his equation implied the so-called H Theorem, which states that a quantity equal to the entropy in equilibrium must always increase with time. Since molecular collisions were assumed reversible, his derivation could be run in reverse, and would then imply the opposite of the Second Law. Boltzmann made the implicit assumption that the motion was uncorrelated before each collision but not after, which imposes irreversibility.
▪ In 1900 Gibbs introduced the notion of an ensemble - a collection of many possible states of a system, each assigned a certain probability. He
argued that if the time evolution of a single state were to visit all other states in the ensemble - the so-called ergodic hypothesis - then averaged over
a sufficiently long time a single state would behave in a way that was typical of the ensemble. Gibbs also gave qualitative arguments that entropy
would increase if it were measured in a "coarse-grained" way in which nearby states were not distinguished.
Zeroth Law, First Law and Second Law
Zeroth Law: If A and B are in equilibrium with a third C, they
are also in equilibrium with one another.
First Law: The total energy of an isolated system is conserved. A process is called "adiabatic" when no heat flows in or out of the system and no mass enters or exits the system.
Second Law:
Clausius statement: No process is possible whose sole result is the transfer of
heat from a colder to a hotter body.
Kelvin statement: No process is possible whose sole result is the complete conversion of heat into work.
EQUIVALENCE OF KELVIN AND CLAUSIUS FORMS
OF THE SECOND LAW
(Figure: composite engine diagram with HOT and COLD reservoirs exchanging work W, used to demonstrate the equivalence.)
• For example, a hand of N playing cards from a standard deck that has N1 black
cards (spades and clubs) and the rest red (hearts and diamonds) is a macrostate.
We could ask how many ways there are in which such a hand can be formed. The
logarithm of this number would then be the entropy of the system.
• Imagine a staircase with M steps and N equally heavy people standing on it. A
macrostate could be described by the potential energy U of all the people. We
could ask how many ways there are in which such a gathering of people may be
formed. The logarithm of this number would then be the entropy of the system.
• Instead of people (who are all distinct individuals even though they weigh the
same) we could look at identical marbles. Now the entropy is going to be different
since the marbles are all indistinguishable.
What we have covered so far…
• Prerequisites for the course.
• A historical timeline of notable events and milestones in the development
of the subject.
• Zeroth Law, First Law and the Clausius and Kelvin form of the Second
Law of thermodynamics.
• The equivalence of the Clausius and the Kelvin forms of the Second Law.
• Meaning of microstates and macrostates.
• Boltzmann’s definition of entropy.
Is entropy a measure of the “disorder” in the system?
This lecture aims to illustrate how entropy is a measure of disorder.
(Figure: ORDERED vs. DISORDERED card arrangements, connected by Shuffle and Unshuffle arrows, ending in a SHUFFLED deck.)
Playing cards example
A standard deck of playing cards has two colors – black and red.
There are 26 distinct cards of each color.
Imagine I have a hand which has N cards.
There are C(26, N1) = 26! / ((26 − N1)! N1!) ways of getting N1 black cards in hand.

There are C(26, N − N1) = 26! / ((26 + N1 − N)! (N − N1)!) ways of getting N − N1 red cards in hand.

Hence the entropy of the macrostate is

S(N, N1) = Log[ 26!/((26 − N1)! N1!) × 26!/((26 + N1 − N)! (N − N1)!) ]
The entropy S(N, N1) is a function of the macrostate described by the two macroscopic
quantities (N, N1). Note that each macrostate corresponds to many possible microstates.
For example with 10 cards in hand, we may plot the entropy versus the
number of black cards.
We can see that the entropy peaks when there are 5 black cards and 5 red cards in
hand. The entropy is minimum (but not zero) when all the cards are black or none of
the cards are black. The entropy initially increases with the number of black cards
and then falls as the number of black cards approaches the total number of cards.
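This plot is easy to reproduce; a minimal sketch in Python (the `entropy` helper and variable names are ours, not from the lecture):

```python
from math import comb, log

# Entropy of the macrostate (N, N1): the log of the number of hands with
# N1 black cards and N - N1 red cards, drawn from 26 cards of each color.
def entropy(N, N1):
    return log(comb(26, N1) * comb(26, N - N1))

N = 10
S = [entropy(N, N1) for N1 in range(N + 1)]
peak = max(range(N + 1), key=lambda N1: S[N1])
print(peak)          # 5: the entropy peaks at 5 black and 5 red cards
print(S[0] > 0)      # True: the all-red hand still has nonzero entropy
```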
We may define a type of thermodynamic potential, generically called "temperature", as the inverse of the slope of this plot. This temperature is sometimes positive and sometimes negative, a common occurrence in systems where the entropy is not a monotonic function of its independent variable. (Note: in the plot below the y-axis is 1/Temperature.)
Prerequisites for the course
• A preparation in advanced calculus which includes multivariable
integration (Stokes and Gauss theorems), complex variables (residues, the method
of steepest descent), combinatorics (permutations, combinations) and so on.
• Basic programming in some language.
• Knowledge of Classical Mechanics at the level of Goldstein, and
knowledge of Quantum Mechanics at the level of Griffiths.
• Above all, a desire to learn about this important subject and a willingness to
remedy gaps in one's prerequisites as and when they are found.
Staircase example
• A microstate is described by n1 people (all distinct individuals) standing
on the first step, n2 people standing on the second step, and so on until the
last, M-th step. With each person of weight w and each step of height h, the
macrostate (U, N) constrains the occupations through

n1 + n2 + … + nM = N

w h n1 + 2 w h n2 + … + M w h nM = U
• These two equations are a special case of what are sometimes known as Diophantine
equations. These are called Frobenius equations (not to be confused with the Frobenius
method of solving ordinary differential equations with non-constant coefficients).
• Historical aside: An algorithm for solving Diophantine equations with two integer
unknowns 𝑛1 , 𝑛2 :
𝜖1 𝑛1 + 𝜖2 𝑛2 = 𝑈
where 𝜖1 , 𝜖2 and U are integers was invented by Āryabhaṭa (476–550 CE). This method
was given the name Kuṭṭaka by his successor Bhāskara I (c. 600 – c. 680).
A wasteful way of finding the entropy is to list all the solutions of these two simultaneous
Diophantine equations and count how many there are. There are many codes available in
languages such as Python and Mathematica to do this. But we really don't need to list all the
microstates; we just want a quick way of counting how many there are. For this there is a
separate analytical method which will be described later. If N and U are small numbers, these
solutions can be enumerated explicitly, which is what we do in the next few slides.
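For small systems the brute-force enumeration is a few lines of code; a sketch in Python for the distinguishable-people case treated on the next few slides (the function name is ours):

```python
from itertools import product
from collections import Counter

# Brute-force enumeration: each of `num_people` distinguishable people
# independently stands on one of `num_steps` steps; the energy of a
# configuration is w*h times the sum of the occupied step numbers.
def count_microstates(num_people, num_steps):
    counts = Counter()
    for config in product(range(1, num_steps + 1), repeat=num_people):
        counts[sum(config)] += 1     # energy in units of w*h
    return counts

print(dict(count_microstates(3, 2)))   # {3: 1, 4: 3, 5: 3, 6: 1}
```

This reproduces the counts worked out by hand below: one microstate each at U = 3wh and 6wh, three each at 4wh and 5wh.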
Take a specific example with 2 steps and 3 people.
𝑛1 + 𝑛2 = 3
All three people can be on the first step, or one person can be on the first step
and two can be on the second, two can be on the first and one can be on the
second or all three can be on the second.
In the first case the potential energy is U = 3 w h.
In the second case the potential energy is U = w h + 2 × 2 w h = 5 w h.
In the third case the potential energy is U = 2 w h + 2 w h = 4 w h.
In the fourth case the potential energy is U = 3 × 2 w h = 6 w h.
In the staircase example, all people are distinct individuals. This means there are
3 different ways in which the potential energy can be U = 4 𝑤 ℎ or 5 𝑤 ℎ .
This means the entropy versus potential energy would be similar to the earlier
example: it has regions with positive as well as negative temperature.
To make this example interesting and simple, we take three marbles and three
steps. The potential energy of the microstate is
𝑤 ℎ 𝑛1 +2 𝑤 ℎ 𝑛2 + 3 𝑤 ℎ 𝑛3 = 𝑈
𝑛1 + 𝑛2 + 𝑛3 = 3
(Plot: Entropy versus U for U = 3, 4, 5, 6, 7, 8, 9.)
Marbles on an endless staircase
In the earlier examples, the temperature was positive sometimes and negative
sometimes. The main reason for this is there is a maximum energy of the system in each
of these examples. This is why now we consider another example - an infinite staircase.
Consider 10 identical marbles.
𝑛1 + 𝑛2 + 𝑛3 + … = 10
𝑤 ℎ 𝑛1 +2 𝑤 ℎ 𝑛2 + 3 𝑤 ℎ 𝑛3 + 4 𝑤 ℎ 𝑛4 + … = 𝑈
𝑛1 , 𝑛2 , 𝑛3 , … … = 0,1
Generating Function Method
The generating function method is an analytical method for counting the
number of solutions of these Diophantine equations without actually
listing all the possible solutions explicitly.
Z[n] = δ_{U − Σ_{j=1}^∞ j n_j , 0} δ_{N − Σ_{j=1}^∞ n_j , 0}

Note that the term e^{S(U,N)} = Σ_{n} Z[n] represents the total number
of ways in which marbles can be arranged on an infinite staircase such
that the total number of marbles is N and the total energy is U.
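As a concrete check, the generating function can be expanded on a computer and the coefficient of x^N q^U read off; a sketch in Python (the truncation scheme and names are ours), reproducing the three-marble, three-step counts from the earlier slides:

```python
from collections import defaultdict

# Coefficient extraction from the generating function
#   Z(x, q) = prod_{j=1}^{M} 1/(1 - x * q**j)
# where the coefficient of x**N * q**U counts microstates with N marbles
# and total energy U (in units of w*h).  The truncated series is stored
# as a dict mapping (number of marbles, energy) -> coefficient.
def generating_function_counts(M, Nmax, Umax):
    series = {(0, 0): 1}
    for j in range(1, M + 1):            # one factor per step
        new = defaultdict(int)
        for (n, u), c in series.items():
            k = 0                        # k marbles placed on step j
            while n + k <= Nmax and u + k * j <= Umax:
                new[(n + k, u + k * j)] += c
                k += 1
        series = dict(new)
    return series

Z = generating_function_counts(M=3, Nmax=3, Umax=9)
counts = dict(sorted((u, c) for (n, u), c in Z.items() if n == 3))
print(counts)  # {3: 1, 4: 1, 5: 2, 6: 2, 7: 2, 8: 1, 9: 1}
```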
Using the integral representation of the Kronecker delta, δ_{m,0} = ∫_0^{2π} (dθ/2π) e^{i θ m}, this means

e^{S(U,N)} = ∫_0^{2π} (dθ/2π) e^{i θ U} ∫_0^{2π} (dφ/2π) e^{i φ N} Σ_{n} Π_{j=1}^∞ (e^{−i θ j} e^{−i φ})^{n_j}

Here the result that a sum of products is the product of sums has been used (with y_j = e^{−i θ j} e^{−i φ} x_j):

Sum of products: Σ_{n1=0}^∞ Σ_{n2=0}^∞ … y1^{n1} y2^{n2} y3^{n3} … = (1/(1 − y1)) (1/(1 − y2)) (1/(1 − y3)) …

Product of sums: Π_{j=1}^∞ Σ_{n_j=0}^∞ y_j^{n_j} = Π_{j=1}^∞ 1/(1 − y_j)
G[x] = ∫_0^{2π} (dθ/2π) e^{i θ U} ∫_0^{2π} (dφ/2π) e^{i φ N} Π_{j=1}^∞ 1/(1 − e^{−i θ j} e^{−i φ} x_j)
If the heights of the steps are uneven, say ε_j instead of j (j = 1, 2, 3, …), this answer is modified to

G[x] = ∫_0^{2π} (dθ/2π) e^{i θ U} ∫_0^{2π} (dφ/2π) e^{i φ N} Π_{j=1}^∞ 1/(1 − e^{−i θ ε_j} e^{−i φ} x_j)
• The number of microstates with total energy and total number of particles fixed
can be obtained from G[x] by setting all the x-values to unity.
e^{S(U,N)} ≡ G[x = 1] = ∫_0^{2π} (dθ/2π) e^{i θ U} ∫_0^{2π} (dφ/2π) e^{i φ N} Π_{j=1}^∞ 1/(1 − e^{−i θ ε_j} e^{−i φ})
This is a formal and very general answer to the question: what is the entropy of a
system of N identical bosons distributed over energy levels ε_j in such a way that
the total energy of the system is U? But this is hard to compute analytically.
However, it is possible to do so numerically if one is willing to make some
approximations. It is also possible to compute this analytically if we agree that the
number of steps is small and finite.
For example, for 3 equidistant steps,
∫_0^{2π} (dφ/2π) e^{i φ N} Π_{j=1,2,3} 1/(1 − e^{−i θ j} e^{−i φ}) = e^{−i N θ} (−1 + e^{−i (N+1) θ}) (−1 + e^{−i (N+2) θ}) / ((−1 + e^{−i θ})² (1 + e^{−i θ}))
Finally,
e^{S(U,N)} = (1/4) (−1)^{−3N} ( −((−1)^U + (−1)^{3N} (3 + 6N − 2U)) Θ[−3 − 3N + U]
+ (−1)^{N+U} (−Θ[−2 − 2N + U] + Θ[−1 − 2N + U] + (−1)^N Θ[−N + U])
+ (−1)^{3N} ((1 + 4N − 2U) Θ[−2 − 2N + U] + (−1 + 4N − 2U) Θ[−1 − 2N + U] + (3 − 2N + 2U) Θ[−N + U]) )

where Θ is the unit step function.
On the right we see the entropy versus energy for the three marble, three step system obtained using the above formula. The answer is the same as the one obtained earlier by explicitly enumerating all the solutions. (Plot: entropy versus U, U = 3 to 9.)
Intensive and Extensive Quantities
Thermodynamic macro-variables such as temperature (yet to be properly
defined), total internal energy, volume, entropy, number of particles,
pressure (yet to be properly defined) and so on may be classified as
intensive or extensive depending upon how they behave if one doubles,
triples, or in general scales by an integer factor the quantities that are
manifestly extensive. The manifestly extensive quantities are the total
number of particles, the total internal energy (this is not at all obvious when the subsystems
interact amongst themselves, hence it is clear only for ideal systems) and the total volume. Any
other quantity that behaves like these three upon "ballooning" the size
of the system is called extensive. Quantities that do not change at all
upon scaling the extensive quantities are called intensive. Fortunately
we don't have to deal with intermediate cases in most practical
applications, so we don't have to name them either.
In the past few lectures we have spent a lot of effort deriving explicit formulas for the entropy of a system
in terms of manifestly extensive quantities such as the total internal energy and the total number of
particles. The natural question to ask now is whether the entropy is an extensive function like the total
internal energy and number of particles. This is not easy to answer given the complicated dependence of the
entropy on the number of particles and the total internal energy. But we can still run some simulations to
see what we get. Specifically, let us focus on identical marbles on an infinite staircase with uniform heights
(later we will see that this corresponds to identical bosons each subject to a spring type of force).
Specifically, imagine we plot

S(λ U, λ N) / S(U, N)   versus λ

for fixed U, N. This plot is going to be a straight line if the entropy is extensive. To be sure, we expect this to
be valid, if at all, only for large values of U, N. Unfortunately for large values of U, N it is not possible to list
all the solutions and count them, so we have to rely on our analytical formula obtained using the generating
function method. Hence we now focus on making approximations. We have seen that,
N identical bosons: e^{S(U,N)} = ∫_0^{2π} (dθ/2π) e^{i θ U} ∫_0^{2π} (dφ/2π) e^{i φ N} Π_{j=1}^∞ 1/(1 − e^{−i θ ε_j} e^{−i φ})

N identical fermions: e^{S(U,N)} = ∫_0^{2π} (dθ/2π) e^{i θ U} ∫_0^{2π} (dφ/2π) e^{i φ N} Π_{j=1}^∞ (1 + e^{−i θ ε_j} e^{−i φ})
More compactly we may write,

N identical quantum particles: e^{S(U,N)} = ∫_0^{2π} (dθ/2π) e^{i θ U} ∫_0^{2π} (dφ/2π) e^{i φ N} f_q(θ, φ)

where f_q(θ, φ) = Π_{j=1}^∞ (1 + q e^{−i θ ε_j} e^{−i φ})^q and q = +1 (fermions) ; q = −1 (bosons)
Let us write U = N u and h_q(θ, φ) = (i/N) Log[f_q(θ, φ)]. This means f_q(θ, φ) = e^{−i N h_q(θ, φ)} and

e^{S(U,N)} = ∫_0^{2π} (dθ/2π) ∫_0^{2π} (dφ/2π) e^{i N (φ + θ u − h_q(θ, φ))} ≡ ∫_0^{2π} (dθ/2π) ∫_0^{2π} (dφ/2π) e^{N w_q(θ, φ)}

w_q(θ, φ) = i (φ + θ u − h_q(θ, φ))
By the method of steepest descent, an integral of the form ∫ dz e^{N g(z)} is, for large N, dominated by the saddle point:

R ∝ e^{N g(z*)} ; where g′(z*) = 0

Hence,

e^{S(U,N)} ≈ e^{N w_q(θ*, φ*)}

where

(∂/∂θ) w_q(θ, φ) |_{(θ,φ) = (θ*,φ*)} = (∂/∂φ) w_q(θ, φ) |_{(θ,φ) = (θ*,φ*)} = 0
This means (leaving out additive constants unimportant in the thermodynamic limit),

S(U, N) = S(N u, N) = N w_q(θ*, φ*)
This shows that for large number of particles in the thermodynamic limit, the entropy of a collection
of quantum particles is extensive provided we can convince ourselves that 𝑤𝑞 𝜃∗ , 𝜑∗ is intensive
in the thermodynamic limit.
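For the marbles on the infinite staircase, the extensivity ratio described above can also be probed by exact counting at modest sizes, since microstates with N identical marbles and total energy U (steps 1, 2, 3, …) are just partitions of U into exactly N parts; a sketch in Python (the recursion and names are ours):

```python
from functools import lru_cache
from math import log

# Microstates of N identical marbles on steps 1, 2, 3, ... with total
# energy U (in units of w*h) are partitions of U into exactly N parts.
@lru_cache(maxsize=None)
def partitions(U, N):
    if N == 0:
        return 1 if U == 0 else 0
    if U < N:
        return 0
    # either one part equals 1 (drop it), or all parts are >= 2
    # (subtract 1 from each of the N parts)
    return partitions(U - 1, N - 1) + partitions(U - N, N)

def S(U, N):
    return log(partitions(U, N))

U, N = 40, 10
ratios = [S(k * U, k * N) / (k * S(U, N)) for k in (1, 2, 3, 4)]
print(ratios)  # first entry is exactly 1.0 by construction
```

A ratio that stays roughly constant as k grows is the signature of (approximate) extensivity at these small sizes.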
Implications of extensivity of entropy
From thermodynamics we know that (note that in this course, we are measuring temperature in
energy units, which is the same as setting Boltzmann's constant k_B to unity),

1/T = ∂S/∂U , p/T = ∂S/∂V , −μ/T = ∂S/∂N
The "Fundamental Relation" of Thermodynamics
From the earlier statements we may write down the following identity (the Euler relation, a consequence of extensivity):

S = β U + β p V − N β μ , where β = 1/T

Extensive quantities: S, U, V, N
Intensive quantities: p, T, μ
Entropy of a free Bose and Fermi gas
• The marbles on a staircase example, where there is no restriction on
how many marbles can occupy a given step, is an example of a free
Bose gas, i.e. a gas of bosons that do not interact with each other.
The reason is that bosons which do interact with each other
will have the energy of each level, which we have been calling ε_j,
also depend on all the occupation numbers n1, n2, …. This is because the
energy per particle of each level will depend on the presence or
absence of other bosons if they are interacting with each other. In
this course we shall assume that the ε_j are given constants.
w_q(θ, φ) = i (φ + θ u − h_q(θ, φ))

We may relate θ* and φ* to the absolute (or thermodynamic) temperature T and the chemical potential μ as follows:

1/T = ∂S/∂U = (∂/∂U) [N w_q(θ*, φ*)] = i N (∂/∂U) (φ* + θ* U/N − (q i/N) Σ_j Log(1 + q e^{−i θ* ε_j} e^{−i φ*}))
This means,

1/T = i N (∂φ*/∂U) + i U (∂θ*/∂U) + i θ* − i Σ_j (∂φ*/∂U) / (e^{i θ* ε_j} e^{i φ*} + q) − i Σ_j ε_j (∂θ*/∂U) / (e^{i θ* ε_j} e^{i φ*} + q)

q = +1 (fermions) ; q = −1 (bosons)

The saddle point conditions N = Σ_j 1/(e^{i θ* ε_j} e^{i φ*} + q) and U = Σ_j ε_j/(e^{i θ* ε_j} e^{i φ*} + q) make the terms proportional to ∂θ*/∂U and ∂φ*/∂U cancel, leaving 1/T = i θ*. An identical computation with ∂S/∂N gives −μ/T = i φ*.
These two relations relate the total energy and total number of particles of an ideal Fermi or Bose gas in the thermodynamic limit to the
temperature and chemical potential of the system. These two relations may be formally inverted to express the temperature and chemical
potential in terms of the energy and number of particles. We may rewrite the above equations as follows.
U = Σ_j ε_j ⟨n_j⟩ , N = Σ_j ⟨n_j⟩ , with ⟨n_j⟩ = 1/(e^{β(ε_j − μ)} + q)

Here ⟨n_j⟩ has the meaning of the average number of particles in energy level j.
When q = +1 (fermions) this formula for ⟨n_j⟩ is called the Fermi-Dirac (FD) distribution.
When q = −1 (bosons) the formula for ⟨n_j⟩ is called the Bose-Einstein (BE) distribution.
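The two distributions are one line of code each; a quick numerical sketch (the function name is ours):

```python
from math import exp

def occupation(eps, beta, mu, q):
    """Average occupation <n> of a level with energy eps.
    q = +1 gives Fermi-Dirac, q = -1 gives Bose-Einstein."""
    return 1.0 / (exp(beta * (eps - mu)) + q)

# Fermi-Dirac: at eps = mu the occupation is exactly 1/2
print(occupation(1.0, beta=10.0, mu=1.0, q=+1))   # 0.5
# Bose-Einstein: the occupation grows without bound as eps -> mu from above
print(occupation(1.01, beta=10.0, mu=1.0, q=-1))
```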
Finally we do justice to the subtitle of this section and write down the entropy of an ideal Fermi and Bose gas in the thermodynamic limit.

S(U, N) = N w_q(θ*, φ*) = i N (φ* + θ* u − (i/N) Log[f_q(θ*, φ*)])

S(U, N) = −N β μ + β U + q Σ_j Log(1 + q e^{−β ε_j} e^{β μ})
The implication in the above formula is that one inverts the relations for U and N in terms of
β, μ and inserts the result in the above formula to get the entropy of an ideal Bose and Fermi gas purely as a function of the total energy, the
number of particles and the known energy levels ε_j.
Equation of State of an Ideal Bose and Fermi Gas
We may now combine the fundamental relation of thermodynamics and the above formula for entropy
𝑆 = −𝑁 𝛽 𝜇 + 𝛽 𝑈 + 𝛽 𝑝 𝑉
S = −N β μ + β U + q Σ_j Log(1 + q e^{−β ε_j} e^{β μ})

Comparing the two expressions, β p V = q Σ_j Log(1 + q e^{−β ε_j} e^{β μ}).
For historical reasons, the relation between pressure, volume, number of particles and temperature is called
“The Equation of State”. The implication of this statement is – the chemical potential is written in terms of
the total number of particles and possibly, the volume of the system.
We know from thermodynamics that the entropy of a gas depends on the total energy, volume and the number of
particles. For a quantum ideal gas we just derived a formula for the entropy in the thermodynamic limit.
There we see the total energy and the total number of particles quite prominently, but the volume is
conspicuous by its absence. We wish to convince the audience that the volume is actually hidden in the sum over the index j.
Ideal quantum gas with no external forces: Note that we had defined an ideal gas to be one where the molecules
or particles do not interact among themselves but perhaps with external objects and forces such as walls of the
container and maybe any other external force one may choose to apply. For example, if the particles are electrically
charged, an application of an electric field would still make it an ideal gas just as it does when the only forces are
due to the walls of the container. However we now restrict ourselves to the special case where the only forces
acting are the ones due to the walls which prevent the particles from escaping a cubical volume 𝐿 × 𝐿 × 𝐿. Since
these are quantum particles, they obey the wave mechanics of Schrodinger. A wavefunction of a single particle in such
a box is ψ(x, y, z) ∝ sin(k_x x) sin(k_y y) sin(k_z z), with (n_x, n_y, n_z = 1, 2, 3, …),

E = ℏ² k²/(2m) ; k = k_x x̂ + k_y ŷ + k_z ẑ ; k_x = n_x π/L , k_y = n_y π/L , k_z = n_z π/L

The sum over the index j is now nothing but the sum over the three integers n_x, n_y, n_z:

Σ_j (…) = Σ_{n_x, n_y, n_z = 1,2,3,…} (…)
Note that in the formulas for the entropy, the number of particles and the total internal energy, the energy level ε_j appears
only as the product β ε_j. For the case of otherwise free particles trapped in a box,

β ε_j = β (ℏ² π²/(2 m L²)) (n_x² + n_y² + n_z²)
Now we prove a theorem that gives us license to replace summations over the integers 𝑛𝑥 , 𝑛𝑦 , 𝑛𝑧 by
integrations in the thermodynamic limit.
Consider an even function f(λ n) of an integer n = 0, ±1, ±2, …, i.e. f(−λ n) = f(λ n), and a real number λ > 0. Define (also consider only those functions f(λ n) for which J(λ) is finite)

J(λ) = λ Σ_{n=1}^∞ f(λ n)

Write β ε_j = (λ n_x)² + (λ n_y)² + (λ n_z)²

where λ² = β ℏ² π²/(2 m L²). In terms of c0 = 2 λ L, i.e. c0² = 2 β ℏ² π²/m,

c0³ β p = q (2λ Σ_{n_x=1,2,…}) (2λ Σ_{n_y=1,2,…}) (2λ Σ_{n_z=1,2,…}) Log(1 + q e^{−(λ n_x)²} e^{−(λ n_y)²} e^{−(λ n_z)²} e^{β μ})

By the theorem below, each 2λ Σ_{n=1}^∞ becomes ∫_{−∞}^{∞} in the thermodynamic limit (λ → 0), so

p = (q/(c0³ β)) ∫ d³R Log(1 + q e^{−R²} e^{β μ})

S(U, V, N) = V (u β + p β − ρ β μ)

q = +1 (fermions) ; q = −1 (bosons)
Theorem: Lim_{λ→0} J(λ) = Lim_{λ→0} λ Σ_{n=1}^∞ f(λ n) = (1/2) ∫_{−∞}^{∞} f(x) dx
Proof: It is easiest to prove this by invoking the Fourier transform. Any well behaved function
(that is continuous for all values of x, and vanishes rapidly at x = ±∞) can be written in
terms of simple functions such as trigonometric or exponential functions as follows:

f(x) = ∫_{−∞}^{∞} (dk/2π) g(k) e^{i k x}

And Fourier's theorem guarantees that this way of writing a function is unique, where the
coefficients g(k) are given by

g(k) = ∫_{−∞}^{∞} dx f(x) e^{−i k x}
Hence,

J(λ) = λ Σ_{n=1}^∞ f(λ n) = λ Lim_{M→∞} Σ_{n=1}^M ∫_{−∞}^{∞} (dk/2π) g(k) e^{i k λ n}

or, summing the geometric series,

J(λ) = λ Lim_{M→∞} ∫_{−∞}^{∞} (dk/2π) g(k) e^{i k (1+M) λ/2} sin(k M λ/2)/sin(k λ/2)

but f(−x) = f(x) means g(−k) = g(k). Hence,

J(λ) = λ Lim_{M→∞} ∫_{−∞}^{∞} (dk/2π) g(k) cos(k (1+M) λ/2) sin(k M λ/2)/sin(k λ/2)

or

J(λ) = λ ∫_{−∞}^{∞} (dk/2π) g(k) (π k λ/(2 sin(k λ/2))) Lim_{M→∞} (sin(k M λ)/(π k λ))

But

Lim_{M→∞} sin(k M λ)/(π k λ) = δ(k λ) = Dirac Delta Function

To be sure, other possibilities are there, e.g. k λ = ±π/2, ±3π/2, … and M odd, but these are inconsistent
with the final requirement that λ → 0. Hence,

Lim_{λ→0} J(λ) = λ ∫_{−∞}^{∞} (dk/2π) g(k) (π k λ/(2 sin(k λ/2))) δ(k λ) = (1/2) g(0) = (1/2) ∫_{−∞}^{∞} f(x) dx
In the classical limit ℏ → 0 at fixed density ρ = N/V, we need

Lim_{ℏ→0} ρ = Lim_{ℏ→0} (2 β ℏ² π²/m)^{−3/2} ∫ d³R′ / (e^{R′²} e^{−β μ} + q) < ∞

For this to stay finite as the prefactor diverges, e^{−β μ} must itself diverge, with

e^{−β μ} = ∫ d³R′ e^{−R′²} / (ρ (2 β ℏ² π²/m)^{3/2}) = π^{3/2} / (ρ (2 β ℏ² π²/m)^{3/2})

This means,

Lim_{ℏ→0} ρ = Lim_{ℏ→0} (2 β ℏ² π²/m)^{−3/2} ∫ d³R′ / (e^{R′²} π^{3/2}/(ρ (2 β ℏ² π²/m)^{3/2}) + q) = ρ ∫ d³R′ e^{−R′²}/π^{3/2} = ρ < ∞
From this we may derive the famous Maxwell-Boltzmann distribution of
classical particles.

Maxwell-Boltzmann distribution: since e^{−β μ} → ∞ in this limit, the q in the
denominator may be dropped, and both the FD and BE distributions reduce to ⟨n_j⟩ ≈ e^{β μ} e^{−β ε_j}.
Classical Ideal Gas: Phase space method
In the earlier slides, we showed how the Maxwell-Boltzmann
distribution of a classical ideal gas may be derived by first deriving the
distribution for a quantum ideal gas, where simple combinatorics gives
the right answer, and then taking the classical limit by sending Planck's
constant to zero. This is clever, no doubt, but somewhat circuitous. A
more direct approach would be to learn how to count the actual
microstates of a classical gas, which lives in the phase space described by
the pair of position and momentum coordinates.
(Figure: the phase space of a single molecule, with position 0 ≤ x ≤ L and momentum p; a grid is superimposed and a point (x, p) marks the state of the molecule.)
To facilitate counting, we have superimposed a grid on the region of phase space where the molecule can be present. A given molecule can
be in any of the small pieces at any given time (the area of each small piece has units of angular momentum, and it is
really tiny, so it is tempting to equate the area of this piece to the smallest angular momentum imaginable, namely
Planck's constant h). Imagine I divide the interval (0, L) into small pieces of size a each, such that L = N_L a. The
momentum can also be divided into small pieces of size h′/a, so that any point in phase space may be written as (x, p) ≡
(a n_x, (h′/a) n_p), where 0 ≤ n_x < N_L and −∞ ≤ n_p ≤ ∞ are integers and h′ → 0 is the area of one pixel.
The energy of each molecule is E_p = p²/(2m) = (h′²/(2 m a²)) n_p². We want to count how many ways
there are of filling the grid with molecules, with no more than one molecule per square, so that
the total energy of all the molecules is

Σ_{i=1}^N E_{p,i} = U

where N is the total number of molecules. The way to count the number of ways of doing this
is to write (U = (h′²/(2 m a²)) ω, where ω is the sum of squares of integers),

e^{S(U,L,N)} = Σ_{ {n_{x,i}, n_{p,i}} } δ_{ω, Σ_{i=1}^N n_{p,i}²} Π_{i ≠ j} (1 − δ_{n_{x,i}, n_{x,j}} δ_{n_{p,i}, n_{p,j}})

Here the factor δ_{ω, Σ_{i=1}^N n_{p,i}²} enforces the idea that the total energy of each of the
microstates is U.
The other factor Π_{i ≠ j} (1 − δ_{n_{x,i}, n_{x,j}} δ_{n_{p,i}, n_{p,j}}) is there to ensure that if two particles are
different, then the two particles are on different pixels. In general, of course, it is very hard to
evaluate e^{S(U,L,N)} explicitly. However, we note that the device of dividing the phase space into
pixels was merely for convenience; otherwise we would not be able to count the number of
microstates, since the phase space is a continuum. For a classical gas we have to finally set h′ →
0. This means that U = (h′²/(2 m a²)) ω is going to tend to zero unless ω also tends to infinity to
compensate. When ω is really large, the solutions of ω = Σ_{i=1}^N n_{p,i}² are all really close together
(there is a huge number of solutions between any two randomly selected solutions). This
means there is no harm in pretending that the quantities n_{p,i} are continuously distributed
from −∞ to ∞, except that they should also obey the energy constraint. Furthermore, we don't
have to be careful in making sure that more than one particle does not occupy a pixel, since
the number of such possibilities is minuscule compared to the opposite situation when ω is huge. Hence we may write,
Now we write,

δ(ω − Σ_{i=1}^N n_{p,i}²) = ∫_{−∞}^{∞} (dτ/2π) e^{i τ (ω − Σ_{i=1}^N n_{p,i}²)}

Performing the Gaussian integrals over the n_{p,i} and then the τ integral, and noting that N_L = L/a and ω = 2 m a² U/h′², one finds

S(U, L, N) = N Log(L/a) + (N/2) Log(4π m a² U/(N h′²))
(Figure: a box divided into compartment "1", with U1, V1, N1, and compartment "2", with U − U1, V − V1, N − N1.)
Denote the number of ways in which you can rearrange the microstates of the overall system so that the total energy is U etc.
as Ω(U, V, N). It is clear that this is nothing but the product of the number of ways in which we can rearrange the
microstates in the compartment labelled "1" (such that its total energy is U1 etc.) and the number of ways in which we can
rearrange the microstates in the compartment labelled "2" (such that its total energy is U − U1 etc.). In other words,

Ω(U, V, N) = Ω1(U1, V1, N1) Ω2(U − U1, V − V1, N − N1)

S(λ U, λ L, λ N) = λ S(U, L, N)
Sackur-Tetrode formula
In three dimensions, the energy of each molecule is

E_p = (p_x² + p_y² + p_z²)/(2m) = (h′²/(2 m a²)) (n_{p_x}² + n_{p_y}² + n_{p_z}²)
If we assume our volume has sides 𝐿𝑥 = 𝑁𝐿𝑥 𝑎 , 𝐿𝑦 = 𝑁𝐿𝑦 𝑎, 𝐿𝑧 = 𝑁𝐿𝑧 𝑎
Also, from the quantum calculation,

p = Lim_{ℏ→0} (q/(c0³ β)) ∫ d³R Log(1 + q e^{−R²} e^{β μ}) = ρ/β

Comparing with the result obtained from the phase space method we are
forced to conclude that the two agree only if the pixel area h′ is identified, up to a
numerical factor, with Planck's constant h. This means the size of each pixel is proportional to Planck's constant even
though we are talking about a classical gas.
Equation of state of an ideal classical gas:

p V = N T (i.e. p V = N k_B T in conventional units)
What are temperature, pressure and chemical potential?
In thermodynamics, temperature is formally defined as,

1/T = ∂S/∂U
Even though the energy U1 is not fixed, we could find the energy U1 = U1* that makes the overall number of microstates, viz.
Ω1(U1) Ω2(U − U1), a maximum. This means

(∂/∂U1) [Ω1(U1) Ω2(U − U1)] |_{U1 = U1*} = 0

which is the statement that

(∂S1/∂U1)|_{U1 = U1*} = (∂S2/∂U2)|_{U2 = U2* = U − U1*} , i.e. 1/T1 = 1/T2

This means T1 = T2.
This shows that the most probable state, i.e. the configuration of microstates that has energy U1*, is the one that
ensures that the slope of entropy versus internal energy is equal for both the system and the reservoir. This means
temperature is the quantity that equalizes when a system is in thermal contact with a reservoir, i.e. when
energy exchange is allowed.
Suppose the part of the wall that is common to the system and the reservoir is movable; then not only the energy U1 but also the
volume V1 fluctuates, since if the partition moves, energy also gets redistributed as work is done. In this case we have to
maximize Ω1(U1, V1) Ω2(U − U1, V − V1) over both U1 and V1.
Because energy freely redistributes between the system and the reservoir, as usual the temperatures of the two become the same.
However now in addition we also see that the slope of the entropy versus volume also
becomes the same for the system and the reservoir (left as an exercise to the audience). Thermodynamics tells us that this
quantity is nothing but the ratio of the pressure to the temperature.
p1/T1 = (∂S1/∂V1)|_{V1 = V1*} = (∂S2/∂V2)|_{V2 = V2* = V − V1*} = p2/T2
But since the temperatures are already equal, this means the pressure on the wall on the system side is the same as the pressure
on the wall of the reservoir side.
𝑇1 = 𝑇2 and 𝑝1 = 𝑝2
Given that U ≫ U1, we may write

S2(U − U1) ≈ S2(U) − U1 S2′(U) = S2(U) − U1/T

We denote S1(U1*) − U1*/T by −β F, where F (sometimes also denoted by A) is
called the Helmholtz free energy:

F = U1* − T S1(U1*)

Hence,

Ω_net(U) = e^{S2(U)} e^{−β F}
Similarly, when there are movable partitions, the Gibbs free energy determines
the thermodynamics of the system.
By construction, the slope of this straight line is the slope of the curve S1(U1) versus U1 at U1 = U1*.
S1(U1) ≈ S1(U1*) + (U1 − U1*) S1′(U1*) + (1/2) (U1 − U1*)² S1′′(U1*)

Note that this expression is not extensive. From this it is clear that S1(U1) + S2(U − U1) goes through a maximum at U1 =
U1* provided S1′(U1*) = 1/T and S1′′(U1*) < 0. Note that 1/S1′′(U1*) is an extensive quantity. Hence we may write,

S1′′(U1*) = −1/(T1² N1 c_V) < 0

where c_V > 0 is called the specific heat (per particle) at constant volume (since we don't allow the volume of the system to
change).
Hence we may write the number of ways in which we can rearrange the microstates of the system so that its energy is U1
and the reservoir so that its energy is U − U1 as

ω_net(U1) = e^{S2(U)} e^{−β F} e^{−(U1 − U1*)²/(2 T² N1 c_V)}
The total number of ways we can ensure that the combined system has energy U with no other restriction is,

Ω_net(U) = ∫ dU1 ω_net(U1) = e^{S2(U)} e^{−β F} (2π T² N1 c_V)^{1/2} ≈ e^{S2(U)} e^{−β F}

The last approximation is valid since we work with systems with a large number of particles, which means U1* ≡ N1 u1* ≫
(2 T² N1 c_V)^{1/2}. We may also express these ideas in terms of probabilities. Suppose you randomly select a system from a
collection of systems plus the corresponding reservoirs (known as an ensemble). The chance that the energy of the system is
between U1 and U1 + dU1 is proportional to dU1 and also to ω_net(U1). Thus the properly normalized probability that the
energy of a system chosen randomly from an ensemble of systems plus reservoirs is between U1 and U1 + dU1 is

P(U1) dU1 = e^{−(U1 − U1*)²/(2 T² N1 c_V)} dU1 / (2π T² N1 c_V)^{1/2}
From this we may compute the root mean square deviation of the energy U1 from the
most probable energy U1*. The answer comes out as

⟨(U1 − U1*)²⟩^{1/2}/U1* = const./(N1)^{1/2}

where "const." is an intensive quantity. We can see that this ratio is vanishingly small for
large systems, N1 ≫ 1.
Recall that in the case of quantum ideal gases we employed the saddle point method, where
we claimed that in the limit of large N the integral to be evaluated may be approximated by
the value of the integrand at the saddle point,

e^{S(U,N)} ≈ e^{N w_q(θ*, φ*)}

While this is true for large N, it is instructive to see what the deviations from this result are
going to be upon including fluctuations. Since by definition the saddle point is one where the
first derivatives vanish, we find,
e^{N w_q(θ, φ)} ≈ e^{N w_q(θ*, φ*) + (N/2)(θ − θ*)² w_θθ(θ*, φ*) + (N/2)(φ − φ*)² w_φφ(θ*, φ*) + N (θ − θ*)(φ − φ*) w_θφ(θ*, φ*)}

Recall that 1/T = i θ* and −μ/T = i φ*.
Recall,

w_q(θ, φ) = i φ + i θ u + (q/N) Σ_{j=1}^∞ Log(1 + q e^{−i θ ε_j} e^{−i φ})

This means

w_θφ = (∂²/∂θ ∂φ) w_q(θ, φ) |_{(θ,φ) = (θ*,φ*)} = (q/N) Σ_{j=1}^∞ ε_j n_j (n_j − q)

w_φφ = (∂²/∂φ²) w_q(θ, φ) |_{(θ,φ) = (θ*,φ*)} = (q/N) Σ_{j=1}^∞ n_j (n_j − q)

w_θθ = (∂²/∂θ²) w_q(θ, φ) |_{(θ,φ) = (θ*,φ*)} = (q/N) Σ_{j=1}^∞ ε_j² n_j (n_j − q)
This means that the entropy of a finite quantum system deviates from
extensivity in the following precise manner: performing the Gaussian integrals over the fluctuations,

S(U, N) ≈ N w_q(θ*, φ*) − Log(2π N) − (1/2) Log(w_θθ w_φφ − w_θφ²)

so the corrections grow only logarithmically with N.
Specific heat at constant pressure is defined as the change in enthalpy per unit change in temperature.

Classical Ideal Gas: Note that the internal energy of a classical ideal gas is (3/2) N T. This means the specific heat at constant
volume is C_V = (3/2) N. The enthalpy is H = U + p V = (3/2) N T + N T = (5/2) N T. This means C_P = (5/2) N. It so happens that
while these results are peculiar to an ideal gas, the difference between the two is quite generally shown to be N. In terms
of specific heats per particle,

c_P − c_V = 1
Using Mathematica commands such as (set 𝑧 = 𝑒 𝛽𝜇 ),
we get ;
Fugacity expansion: The formal answers in terms of the PolyLog function are not very illuminating. The alternative is to evaluate the integrals approximately by performing expansions, typically low-temperature or high-temperature expansions. In the present case it is more convenient to expand in powers or inverse powers of the fugacity $0 < e^{\beta\mu} = z < \infty$. Consider the integral ($n = 0,1$)
Fermions: $q = +1$,

Small z expansion:

where

The total energy and the equation of state then take the form of series in the fugacity (equivalently, in the density),

$$U = \frac{3}{2}\, p V; \qquad pV = N T\left(1 + \frac{N\lambda^3}{2^{5/2}\, V} + \dots\right)$$

where $\lambda$ is the thermal de Broglie wavelength. The above represent the semi-classical expansions for the energy and equation of state of an ideal Fermi gas.
Large z expansion: Write 𝑧 = 𝑒 β μ and R = ξ β μ so that,
When , the dominant contribution comes from
. Hence we may write,
In the last two integrals the dominant contribution comes from the region near
𝜉 = 1. Hence we may write 𝜉 = 1 + 𝜉 ′ ,
$$I_1 = \int_0^{\infty} d\xi\, \frac{\xi^{\,n+\frac12}}{z^{-(\xi-1)} + 1} = \int_{-1}^{\infty} d\xi'\, \frac{(1+\xi')^{\,n+\frac12}}{z^{-\xi'} + 1} \approx \int_{-\infty}^{\infty} d\xi'\, \frac{(1+\xi')^{\,n+\frac12}}{z^{-\xi'} + 1}$$

which is then evaluated by expanding $(1+\xi')^{\,n+\frac12}$ in powers of $\xi'$.
Similarly,

Therefore, collecting the terms (the pieces involving $\mathrm{Log}\,4$ cancel between the two contributions),

$$g_{n1} \approx \frac{4\pi}{3+2n}\,(\beta\mu)^{\frac32+n} + \frac{(2n+1)\,\pi^{3}}{6}\,\frac{(\beta\mu)^{\frac32+n}}{\mathrm{Log}^{2}[z]}$$
When the density of fermions is fixed (independent of temperature), the chemical potential has to be adjusted and made to depend on temperature so that this is achieved. Given that the large z expansion is the same as a low temperature expansion, we may write,

$$\mu = \mu_F\,(1 + x); \qquad x \ll 1$$

EQUATION OF STATE:

$$p V = \frac{2}{3}\, U = \frac{2}{5}\, N \mu_F \left(1 + \frac{5\pi^2}{12}\left(\frac{T}{\mu_F}\right)^2\right)$$
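The low-temperature structure of these results can be verified against direct numerical integration. The sketch below checks the standard Sommerfeld result for the occupancy integral, $\int_0^\infty \sqrt{\epsilon}\, d\epsilon/(e^{(\epsilon-\mu)/T}+1) \approx \frac{2}{3}\mu^{3/2}\left(1 + \frac{\pi^2}{8}(T/\mu)^2\right)$; the numerical values chosen are illustrative.

```python
# Sommerfeld check: at T << mu the Fermi integral is its T = 0 value
# plus a small (T/mu)^2 correction.
import math
from scipy.integrate import quad

mu, T = 1.0, 0.05
exact, _ = quad(lambda e: math.sqrt(e) / (math.exp((e - mu) / T) + 1.0), 0.0, 10.0)
sommerfeld = (2.0 / 3.0) * mu**1.5 * (1.0 + (math.pi**2 / 8.0) * (T / mu) ** 2)
print(exact, sommerfeld)
```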
Bosons: $q = -1$,

$$n_j = \frac{1}{e^{\beta\epsilon_j}\, z^{-1} - 1} > 0; \qquad \mathrm{Log}[z] \in \mathbb{R} \text{ for each } \beta > 0,\ \epsilon_j \geq 0, \text{ implies } 0 \leq z < 1.$$

$$g_{n1} = 2^{-n-\frac12}\,\pi\; z\left(z + 2^{\,n+\frac32}\right)\Gamma\!\left(n + \tfrac32\right)$$
OR
The above discussion was the high temperature limit or the semi-classical limit.
We now turn to the low temperature limit. A naïve expansion in powers of 1 − 𝑧
does not work.
The divergence is coming from small values of 𝑅, alternatively when the energy
𝜖𝑗 = 0. The extreme case of 𝑧 → 1 is quite instructive. In this limit we may
separate 𝜖𝑗 = 0 from the rest.
$$N = \frac{1}{z^{-1} - 1} + \sum_{\epsilon_j \neq 0} \frac{1}{e^{\beta\epsilon_j}\, z^{-1} - 1}$$

When $\epsilon_j \neq 0$, we may replace the sum over $\epsilon_j$ by an integration so that,

Here $0 < \frac{1}{z^{-1}-1} \equiv N_0 < N$ is known as the condensate. Note that in the limit $z \to 1$, $N_0$ becomes huge (macroscopic) instead of being of order unity. Note that the volume $V$ in the next term is also macroscopic. Hence unless $z \to 1$ we are justified in neglecting $N_0$.
Expanding the integral in powers of $1-z$ we get,

$$N = \frac{1}{z^{-1} - 1} + \frac{V}{c_0^3} \int d^3R\; \frac{1}{e^{R^2}\, z^{-1} - 1} = \frac{1}{z^{-1} - 1} + \frac{V}{c_0^3}\,\pi^{\frac32}\left(\zeta\!\left(\tfrac32\right) + \dots\right)$$
From this it is clear that in the region 0 < 1 − 𝑧 ≪ 1 the most important terms
are,
The condensate fraction is defined as $0 < f_0 = \frac{N_0}{N} < 1$.

The condensate fraction is where is called the thermal wavelength. We may also write this as,

where

$$\zeta\!\left(\tfrac32\right) \approx 2.612$$
The total energy is (since $\epsilon_j = 0$ does not contribute to the energy)

$$U = \frac{V}{\beta\, c_0^3} \int d^3R\; \frac{R^2}{e^{R^2}\, z^{-1} - 1} \approx \frac{V}{\beta\, c_0^3}\left(\frac32\,\pi^{\frac32}\,\zeta\!\left(\tfrac52\right) - \frac32\,\pi^{\frac32}\,\zeta\!\left(\tfrac32\right)(1-z) + 2\pi^2\,(1-z)^{\frac32} + \dots\right)$$

The enthalpy is $H = U + pV = \frac{5}{3}\, U$.
Next we consider the situation where the temperature is larger than the condensate
temperature (𝑇 > 𝑇𝐶 ). This means the number of bosons in the condensate is of the
order of unity or zero. Close to but larger than the condensate temperature
( 0 < 𝑇 − 𝑇𝐶 ≪ 𝑇𝐶 ) the formulas do not involve the condensate.
Note that since in Bose gas 𝜇 ≤ 0 this means 0 < 𝑧 ≤ 1
Now ;
This method works well for dilute systems, i.e. $w \ll 1$. In terms of the condensation temperature,

$$w = \left(\frac{T_c}{T}\right)^{\frac32} \zeta\!\left(\tfrac32\right)$$
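A small numerical sketch of the quantities involved below $T_c$ (the function names are illustrative): the condensate fraction $f_0 = 1 - (T/T_c)^{3/2}$, and the constant $\zeta(3/2) \approx 2.612$, obtained here by direct summation with an integral tail correction:

```python
# Condensate fraction of the ideal Bose gas below T_c, plus a numerical
# estimate of zeta(3/2), the constant that fixes T_c at a given density.

def condensate_fraction(T, Tc):
    """f0 = N0/N = 1 - (T/Tc)^(3/2) below Tc, zero above."""
    return 1.0 - (T / Tc) ** 1.5 if T < Tc else 0.0

M = 200000
zeta32 = sum(n ** -1.5 for n in range(1, M + 1)) + 2.0 / M ** 0.5  # tail ~ int
print(round(zeta32, 4))               # ~ 2.6124
print(condensate_fraction(0.5, 1.0))  # 1 - 2^(-3/2) ~ 0.6464
```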
The previous analysis was for z close to 0. For z close to unity the answers are as follows.
The general relation is,
We may write the equation of state for an ideal Fermi gas as:
Virial Expansion
Let us now denote the ratio $\frac{pV}{NT} \equiv \xi$. This is known as the compressibility factor. We now list expressions we have derived for this ratio for the ideal classical gas, and the ideal Bose and Fermi gases.
where $\rho = \frac{N}{V}$ and the “virial coefficients” $A_0(T), A_1(T), A_2(T)$, etc. depend on temperature in some complicated way in general. However for classical non-ideal gases it is customary to expand these as a series in inverse powers of temperature, so that

$$A_n(T) = \sum_{m=0}^{\infty} \frac{a_n^m}{T^m}$$

For a general non-ideal gas, listing all these numbers $a_n^m$ amounts to knowing the equation of state fully. In practice approximations are called for and one makes do with only a small number of these coefficients.
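For the quantum ideal gases the compressibility factor can be written as a ratio of the functions introduced earlier, $\xi = f_{5/2}(z)/f_{3/2}(z)$ for fermions and $g_{5/2}(z)/g_{3/2}(z)$ for bosons. A short sketch (series summation, valid for $|z| < 1$) showing that quantum statistics push $\xi$ above 1 for fermions and below 1 for bosons:

```python
# Compressibility factor xi = pV/(NT) for the ideal quantum gases,
# via the defining series of f_nu (Fermi) and g_nu (Bose).

def f(nu, z, terms=200):   # f_nu(z) = sum_k (-1)^(k+1) z^k / k^nu
    return sum((-1) ** (k + 1) * z**k / k**nu for k in range(1, terms))

def g(nu, z, terms=200):   # g_nu(z) = sum_k z^k / k^nu
    return sum(z**k / k**nu for k in range(1, terms))

z = 0.5
xi_fermi = f(2.5, z) / f(1.5, z)
xi_bose = g(2.5, z) / g(1.5, z)
print(xi_fermi, xi_bose)   # fermions: > 1, bosons: < 1, classical gas: = 1
```

The effective "statistical repulsion" of fermions raises the pressure above the classical value at the same density and temperature, while Bose statistics lower it.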
Chandrasekhar Limit of a White Dwarf Star
A series of papers published between 1931 and 1935 had its beginning on a trip from India
to England in 1930, where the Indian physicist S. Chandrasekhar worked on the calculation
of the statistics of a degenerate Fermi gas. In these papers, Chandrasekhar solved
the hydrostatic equation together with the nonrelativistic Fermi gas equation of state and also
treated the case of a relativistic Fermi gas, giving rise to the value of the limit that goes by
his name.
where $s(\theta,\varphi) = \frac{1}{V}\sum_{j=1}^{\infty} \log\!\left(1 + e^{-i\theta\epsilon_j}\, e^{-i\varphi}\right)$ is an intensive quantity.

Hence,

$$e^{S(U,V,N)} = \int_0^{2\pi} \frac{d\theta}{2\pi}\, e^{i\theta U} \int_0^{2\pi} \frac{d\varphi}{2\pi}\, e^{i\varphi N}\, e^{2 V s(\theta,\varphi)}$$
Differentiating $e^{S(U,V,N)}$ with respect to $U$ (and then with respect to $V$) we get

$$\frac{\partial S}{\partial U}\, e^{S(U,V,N)} = \int_0^{2\pi} \frac{d\theta}{2\pi}\, i\theta\, e^{i\theta U} \int_0^{2\pi} \frac{d\varphi}{2\pi}\, e^{i\varphi N}\, e^{2Vs(\theta,\varphi)} \approx i\theta_*\, e^{i\theta_* U}\, e^{i\varphi_* N}\, e^{2Vs(\theta_*,\varphi_*)}$$

$$\frac{\partial S}{\partial V}\, e^{S(U,V,N)} = \int_0^{2\pi} \frac{d\theta}{2\pi}\, e^{i\theta U} \int_0^{2\pi} \frac{d\varphi}{2\pi}\, e^{i\varphi N}\, 2 s(\theta,\varphi)\, e^{2Vs(\theta,\varphi)} \approx 2 s(\theta_*,\varphi_*)\, e^{i\theta_* U}\, e^{i\varphi_* N}\, e^{2Vs(\theta_*,\varphi_*)}$$
Dividing one by the other we get the equation of state,
The assumption now is that these quantities change from point to point inside the star. So we may write . Now we consider the situation where the star is very dense, so that $\hbar k_F \gg m c$. This is the ultra-relativistic limit, where the electrons are moving close to the speed of light. This means,

The gravitational pull at radius $r$ can be written as follows. A shell of thickness $dr$ and area $dA$ that has mass $dm(r) = 2 m_n\, \rho(r)\, dA\, dr$ experiences a gravitational force from the sphere below it as shown. Note that due to charge neutrality the number of positive charges (protons) per unit volume is equal to the number of electrons per unit volume. We also assume that the number of neutrons per unit volume is equal to the number of protons per unit volume; hence the mass of the shell is mostly from the mass of the nucleons, which is $m_n$ per nucleon.

$$dF(r) = \frac{G\, M(r)\, dm(r)}{r^2}; \qquad M(r) = \int_0^r 2 m_n\, \rho(r')\, 4\pi r'^2\, dr'$$
The gravitational force per unit area pressing down on the surface at radius r is,
The equation for the pressure versus density is known as the polytropic
equation. For an ultra relativistic Fermi gas we saw that it is,
This means,

$$\frac{1}{r^{2}}\,\frac{d}{dr}\!\left(\frac{r^{2}}{\rho(0)}\, P'(r)\right) \approx -4\pi G\,\rho(0)$$

with, for the ultra-relativistic equation of state and $\rho(r) = \rho(0)\,\Theta^{3}(r)$,

$$P'(r) = \frac{c\hbar}{12\pi^{2}}\left(3\pi^{2}\rho(0)\right)^{\frac43}\, 4\,\Theta^{3}(r)\,\Theta'(r)$$
This means,

$$\Theta'(0) = 0 \quad \text{or} \quad U'(0) = 0$$

Thus the equations to be solved are

$$\frac{1}{\xi^{2}}\,\frac{d}{d\xi}\!\left(\xi^{2}\,\frac{dU(\xi)}{d\xi}\right) = -U^{3}(\xi); \qquad U(0) = 1; \quad U'(0) = 0$$

It can be shown that $U(\xi_1) = 0$ for $\xi_1 = 6.89685$ and $\xi_1^{2}\, U'(\xi_1) = -2.01824$. Thus the radius of the white dwarf is

This means the mass of the star is,

$$M = \int_0^{R} 4\pi r^{2}\rho(r)\, dr = 4\pi\alpha^{3}\rho(0)\int_0^{\xi_1}\xi^{2}\, U^{3}(\xi)\, d\xi = -4\pi\alpha^{3}\rho(0)\int_0^{\xi_1}\frac{d}{d\xi}\!\left(\xi^{2}\frac{dU(\xi)}{d\xi}\right)d\xi = -4\pi\alpha^{3}\rho(0)\,\xi_1^{2}\, U'(\xi_1)$$
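The quoted constants $\xi_1 = 6.89685$ and $\xi_1^2 U'(\xi_1) = -2.01824$ can be reproduced by integrating the Lane-Emden equation numerically; a sketch with SciPy (the series $U \approx 1 - \xi^2/6$ near the origin is used to step off the coordinate singularity at $\xi = 0$):

```python
# Integrate (1/xi^2) d/dxi (xi^2 U') = -U^3 with U(0) = 1, U'(0) = 0 and
# locate the first zero xi_1 of U.
from scipy.integrate import solve_ivp

def rhs(xi, y):
    u, v = y                      # v = dU/dxi
    return [v, -u**3 - 2.0 * v / xi]

def surface(xi, y):               # event: U crosses zero (the stellar surface)
    return y[0]
surface.terminal = True

xi0 = 1e-6                        # series start: U ~ 1 - xi^2/6, U' ~ -xi/3
sol = solve_ivp(rhs, (xi0, 20.0), [1.0 - xi0**2 / 6.0, -xi0 / 3.0],
                events=surface, dense_output=True, rtol=1e-10, atol=1e-12)
xi1 = sol.t_events[0][0]
slope = sol.sol(xi1)[1]
print(xi1, xi1**2 * slope)        # ~ 6.8968 and ~ -2.0182
```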
Or
The reason why this is the upper limit for the mass of a stable white dwarf is as follows. Recall that we used the ultra-relativistic approximation to arrive at this result, so it is valid for stars where the density at the center is very high. For densities much lower than a certain value, which we denote by $\rho_c = 1.96 \times 10^6\ \mathrm{g/cc}$ (the value at which the Fermi momentum equals $mc$), we may write down the answer for the mass, which now grows as the density at the center grows.
Thermodynamics of radiation
In 1900 Max Planck correctly guessed that the intensity 𝑑𝐼(𝜈)
of radiation emitted by a black body at temperature T between
frequencies 𝜈 and 𝜈 + 𝑑𝜈 is given by,
The wavevector is $\boldsymbol{k} = \left(\frac{n_x \pi}{L}, \frac{n_y \pi}{L}, \frac{n_z \pi}{L}\right)$ and $\omega_{\boldsymbol k} = c\,|\boldsymbol k|$. Furthermore, for photons there are two polarization states $s = 1, 2$.
The entropy of photons is given by,
As usual we approximate,

$$e^{S(U)} = \int_0^{2\pi} \frac{d\theta}{2\pi}\, e^{i Y(\theta)} \approx e^{i Y(\theta_*)}; \qquad Y'(\theta_*) = 0$$

Recall that quite generally $i\theta_* = \frac{1}{T}$. This means,

$$U = \sum_{\boldsymbol k} \frac{\hbar\omega_{\boldsymbol k}}{2}\, \coth\!\left(\frac{\hbar\omega_{\boldsymbol k}}{2T}\right) = \sum_{\boldsymbol k} \frac{\hbar\omega_{\boldsymbol k}}{e^{\hbar\omega_{\boldsymbol k}/T} - 1} + \frac12 \sum_{\boldsymbol k} \hbar\omega_{\boldsymbol k}$$
The Poynting vector is $\boldsymbol S = \frac{c}{4\pi}\, \boldsymbol E \times \boldsymbol H = \frac{c}{4\pi}\, \boldsymbol E \times (\hat k \times \boldsymbol E) = \frac{c}{4\pi}\, \hat k\, \boldsymbol E^2$.

The energy (minus the zero point energy) density is $u = \frac{1}{8\pi}\left(\boldsymbol E^2 + \boldsymbol B^2\right) = \frac{1}{4\pi}\, \boldsymbol E^2$ (in the time averaged sense).

Hence the Poynting vector for a given $\boldsymbol k$ is,

$$\boldsymbol S_{\boldsymbol k} = c\, \hat k\, u = \frac{2 c\, \hat k}{V}\; \frac{\hbar\omega_{\boldsymbol k}}{e^{\hbar\omega_{\boldsymbol k}/T} - 1}$$
The intensity due to all wavelengths moving in all directions is,

$$I = \sum_{\boldsymbol k} \hat k \cdot \boldsymbol S_{\boldsymbol k} = \frac{2c}{V} \sum_{\boldsymbol k} \frac{\hbar\omega_{\boldsymbol k}}{e^{\hbar\omega_{\boldsymbol k}/T} - 1} = \frac{2c}{(2\pi)^3} \int_0^{\infty} \frac{\hbar\omega_{\boldsymbol k}}{e^{\hbar\omega_{\boldsymbol k}/T} - 1}\; 4\pi k^2\, dk$$

But $\omega_{\boldsymbol k} = 2\pi\nu = c k$. Hence,

$$I = \frac{8\pi}{c^2} \int_0^{\infty} \frac{h\nu}{e^{h\nu/T} - 1}\; \nu^2\, d\nu$$

The intensity in the frequency interval $\nu$ and $\nu + d\nu$ is,

$$I(\nu)\, d\nu = \frac{8\pi h}{c^2}\; \frac{\nu^3\, d\nu}{e^{h\nu/T} - 1}$$
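Integrating $I(\nu)$ over all frequencies produces the $T^4$ (Stefan-Boltzmann) law; the only nontrivial ingredient is the dimensionless integral $\int_0^\infty x^3/(e^x - 1)\, dx = \pi^4/15$, which is easy to confirm numerically:

```python
# Verify int_0^inf x^3/(e^x - 1) dx = pi^4/15, the integral behind the
# Stefan-Boltzmann T^4 law.
import math
from scipy.integrate import quad

val, _ = quad(lambda x: x**3 / math.expm1(x), 0.0, 80.0)
print(val, math.pi**4 / 15.0)
```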
The power radiated per unit area is,

$$\frac{P}{A} = \sum_{\hat z \cdot \boldsymbol k > 0} \hat z \cdot \boldsymbol S_{\boldsymbol k} = \frac{2c}{(2\pi)^3} \int_0^{\infty} k^2\, dk\; \frac{\hbar c k}{e^{\hbar c k/T} - 1} \int_{\hat z \cdot \hat k > 0} d\Omega\, \cos(\theta)$$

Or,
[Photograph: Subrahmanyan Chandrasekhar]
$$\Delta S = \left((2M)^2 - 2M^2\right)\pi\left(\frac{c\, l_P}{\hbar}\right)^{2} = 2M^2\,\pi\left(\frac{c\, l_P}{\hbar}\right)^{2} > 0$$

The number of microstates is $\Omega = 2^{n_B}$, where $n_B$ is the number of bits. But the number of microstates is also $\Omega = e^S$. Hence $n_B = \frac{S}{\log(2)}$. Hence the change in the number of bits is $\Delta n_B = \frac{\Delta S}{\log(2)} \sim 10^{77}$ bits are destroyed.
The thermal radiation near the event horizon produces particle-hole pairs; one of them falls in and the other escapes to infinity. The rate of this process is given by the rate at which energy is being emitted by the horizon (which is a black body type radiation). The total radiation emitted, which is assumed to be converted to matter by pair production, is,

$$c^2\, \frac{dM}{dt} = \frac{dE}{dt} = -\sigma T^4 A; \qquad A = \frac{16\pi G^2}{c^4}\, M^2; \qquad T = \frac{\hbar c^3}{8\pi G M}$$
This gives,
$$F(T,V,N) = -N T\left(\mathrm{Log}\!\left[n_Q\, \frac{V - Nb}{N}\right] + 1\right) - \frac{a N^2}{V}$$

where

$$n_Q = \left(\frac{m T}{2\pi\hbar^2}\right)^{\frac32}$$
From this it is possible to plot isotherms, i.e. plots of pressure versus volume keeping the temperature fixed. From the plots it is easy to see that there is a special (critical) point at which the isotherm becomes flat, i.e.

$$V = V_{cr} = 3 b N, \qquad T = T_{cr} = \frac{8a}{27b}, \qquad p = p_{cr} = \frac{a}{27 b^2}$$

We may now choose to measure volume, temperature and pressure as multiples of these critical values, so that in general we write

$$\left(p_r + \frac{3}{V_r^2}\right)\left(V_r - \frac13\right) = \frac83\, T_r$$
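The critical values quoted above are exactly where the isotherm has a horizontal inflection, $dp/dV = d^2p/dV^2 = 0$. A symbolic check with SymPy:

```python
# Verify the van der Waals critical point: dp/dV and d2p/dV2 vanish at
# V = 3bN, T = 8a/(27b), and the pressure there equals a/(27 b^2).
import sympy as sp

T, V, N, a, b = sp.symbols("T V N a b", positive=True)
p = N * T / (V - N * b) - a * N**2 / V**2   # vdW equation of state

crit = {V: 3 * b * N, T: sp.Rational(8, 27) * a / b}
d1 = sp.simplify(sp.diff(p, V).subs(crit))
d2 = sp.simplify(sp.diff(p, V, 2).subs(crit))
p_cr = sp.simplify(p.subs(crit))
print(d1, d2, p_cr)   # 0, 0, a/(27*b**2)
```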
[Phase diagram: liquid and gas phases separated by the coexistence region.]
Definition:
𝐺 =𝑈 +𝑝𝑉 −𝑇𝑆
From Extensivity:
𝑇𝑆 =𝑈+𝑝𝑉 − 𝜇𝑁
Hence,
𝐺= 𝜇𝑁
Differential relations:
𝑇 𝑑𝑆 = 𝑑𝑈 + 𝑝 𝑑𝑉 − 𝜇 𝑑𝑁
From this we define

$$g = \frac{G}{N}, \qquad s = \frac{S}{N}, \qquad v = \frac{V}{N}$$

or

$$d\mu = dg = -s\, dT + v\, dp$$
Clausius-Clapeyron equation:

$$\frac{dp}{dT} = \frac{s_l - s_g}{v_l - v_g}$$
http://www.pmaweb.caltech.edu/~mcc/Ph127/b/Lecture3.pdf
https://www.thoughtco.com/definition-of-triple-point-604674
Landau Diamagnetism
Diamagnetism refers to the property that charged particles, chiefly electrons in a metal, acquire a net magnetic moment when subjected to a uniform magnetic field, due to circulating currents that are set up as a result of the magnetic field. This magnetic moment is proportional to the applied magnetic field, and the proportionality constant is known as the diamagnetic susceptibility.
$$\mathcal N = \frac{\phi}{\phi_0}; \qquad \phi = A\,|H|; \qquad \phi_0 = \frac{hc}{2e}$$

where $A$ is the area of the sample.
The entropy of the system is given by 𝑗 = 1,2, … , 𝒩; 𝑛𝐿 = 0,1,2, …
As usual we write,
This means
$$e^{S(U,N)} = \sum_{\{m_{j,k_z,n_L} = 0,1\}} \int_0^{2\pi} \frac{d\theta}{2\pi}\, e^{i\theta\left(U - \sum_{j,k_z,n_L} \epsilon_{k_z,n_L}\, m_{j,k_z,n_L}\right)} \int_0^{2\pi} \frac{d\varphi}{2\pi}\, e^{i\varphi\left(N - \sum_{j,k_z,n_L} m_{j,k_z,n_L}\right)}$$

$$e^{S(U,N)} = \int_0^{2\pi} \frac{d\theta}{2\pi} \int_0^{2\pi} \frac{d\varphi}{2\pi}\; e^{i\theta U}\, e^{i\varphi N} \prod_{j,k_z,n_L} \left(1 + e^{-i\left(\theta\,\epsilon_{k_z,n_L} + \varphi\right)}\right)$$

or

$$e^{S(U,N)} = \int_0^{2\pi} \frac{d\theta}{2\pi} \int_0^{2\pi} \frac{d\varphi}{2\pi}\; e^{i N w(\theta,\varphi)} \approx e^{i N w(\theta_*,\varphi_*)}$$

$$w(\theta,\varphi) = \theta\, u + \varphi - \frac{i}{N} \sum_{j,k_z,n_L} \log\!\left(1 + e^{-i\left(\theta\,\epsilon_{k_z,n_L} + \varphi\right)}\right)$$

$$0 = w_\theta(\theta_*,\varphi_*) = u - \frac1N \sum_{j,k_z,n_L} \frac{\epsilon_{k_z,n_L}}{1 + e^{i\left(\theta_*\,\epsilon_{k_z,n_L} + \varphi_*\right)}}; \qquad 0 = w_\varphi(\theta_*,\varphi_*) = 1 - \frac1N \sum_{j,k_z,n_L} \frac{1}{1 + e^{i\left(\theta_*\,\epsilon_{k_z,n_L} + \varphi_*\right)}}$$

Or

$$U = \sum_{j,k_z,n_L} \frac{\epsilon_{k_z,n_L}}{1 + e^{\beta\left(\epsilon_{k_z,n_L} - \mu\right)}}; \qquad N = \sum_{j,k_z,n_L} \frac{1}{1 + e^{\beta\left(\epsilon_{k_z,n_L} - \mu\right)}}$$
The entropy is given by,
Since we are working with the canonical formalism we have to evaluate
Helmholtz’s free energy which is
Note that the electrons are assumed to not have an intrinsic magnetic moment
so that the free energy is going to be an even function of H. Hence 𝐹 ′ 0 = 0.
The non-zero quantity 𝐹 ′′ 0 ∝ 𝜒𝑚 is called Landau’s diamagnetic
susceptibility.
A version of the Euler-MacLaurin formula is,
Set,
Set,
Note that $F(H) = N\,\mu(H) + \Omega(H)$, and note that

$$N = -\frac{d\Omega(H)}{d\mu(H)} = 2TV\,\frac{m}{h^2}\left(R(\mu(H)) - \frac16\,(\hbar\omega_c)^2\, R''(\mu(H))\right)$$

$$\Omega'(H) \equiv -\mu'(H)\; 2TV\,\frac{m}{h^2}\left(R(\mu(H)) - \frac16\,(\hbar\omega_c)^2\, R''(\mu(H))\right) - 2TV\,\frac{m}{h^2}\left(-\frac13\right)\hbar\omega_c\,\frac{\hbar e}{mc}\; R'(\mu(H))$$

hence,

$$\Omega'(H) \equiv -\mu'(H)\, N + 2TV\,\frac{m}{h^2}\,\frac13\,\hbar\omega_c\,\frac{\hbar e}{mc}\; R'(\mu(H))$$

or

$$F'(H) = N\,\mu'(H) + \Omega'(H) = 2TV\,\frac{m}{h^2}\,\frac13\,\hbar\omega_c\,\frac{\hbar e}{mc}\; R'(\mu(H))$$
$$R(\mu(0)) = \int_0^{\infty} dk_z\, \log\!\left(1 + e^{-\beta\left(\frac{\hbar^2 k_z^2}{2m} - \mu(0)\right)}\right)$$

$$R'(\mu(0)) = \beta \int_0^{\infty} \frac{dk_z}{1 + e^{\beta\left(\frac{\hbar^2 k_z^2}{2m} - \mu(0)\right)}}$$

$$F''(0) = \frac{2V}{(2\pi)^2}\,\frac{e^2}{3 m c^2} \int_0^{\infty} \frac{dk_z}{1 + e^{\beta\left(\frac{\hbar^2 k_z^2}{2m} - \mu(0)\right)}} = \frac{2V}{(2\pi)^2}\,\frac{e^2\, k_F}{3 m c^2} \quad \text{at zero temperature}$$
The magnetization is defined as the change in free energy per unit volume per unit change in magnetic field. In the linear regime, $\frac{F(H)}{V} = -M H$, $M = -\frac1V F'(H)$. In the present case,

$$M = -\frac1V\, F'(H) = -\frac1V\, H\, F''(0) = -\frac{2}{(2\pi)^2}\,\frac{e^2 k_F}{3 m c^2}\; H$$

Landau's diamagnetic susceptibility is $\chi_m$, where

$$M = \chi_m H; \qquad \chi_m = -\frac{2}{(2\pi)^2}\,\frac{e^2 k_F}{3 m c^2}$$
Derivation of canonical and grand canonical ensembles
We have seen that the entropy is given by,
$$e^{S(U,N)} = \sum_{\{n_i\}} \delta_{0,\, U - \sum_i \epsilon_i n_i}\; \delta_{0,\, N - \sum_i n_i}$$

The canonical ensemble is the Laplace transform of this with respect to U.
Classical case: For general classical molecules with a magnetic moment it is,

$$H = -\frac{e}{2mc} \sum_{i=1}^{N} \boldsymbol J_i \cdot \boldsymbol H$$

Assume $\boldsymbol H$ is in the z-direction. This means,
or
Set 𝑥 = 𝛽 𝑔 𝜇𝐵 𝑗 𝐻
Here
𝑆𝑖 = ±1
To evaluate this we use the transfer matrix method. First note that due to periodic
boundary conditions we may write,
This allows us to write down the transfer matrix. The transfer matrix denoted by 𝑇 is
a 2 × 2 matrix with elements 𝑇1,1 , 𝑇1,−1, 𝑇−1,1 𝑎𝑛𝑑 𝑇−1,−1. Let 𝑚, 𝑛 = ±1. Then we
set,
$$T_{m,n} = e^{\,2\beta J\, m n + \frac{\beta h}{2}(m + n)}$$

$$T = \begin{pmatrix} e^{2\beta J + \beta h} & e^{-2\beta J} \\ e^{-2\beta J} & e^{2\beta J - \beta h} \end{pmatrix}$$
The eigenvalues are,

$$\lambda_1 = \frac12\, e^{-\beta(h+2J)}\left(e^{4J\beta}\left(1 + e^{2h\beta}\right) + \sqrt{4\, e^{2h\beta} + e^{8J\beta} + e^{4(h+2J)\beta} - 2\, e^{2(h+4J)\beta}}\right)$$

and

$$\lambda_2 = \frac12\, e^{-\beta(h+2J)}\left(e^{4J\beta}\left(1 + e^{2h\beta}\right) - \sqrt{4\, e^{2h\beta} + e^{8J\beta} + e^{4(h+2J)\beta} - 2\, e^{2(h+4J)\beta}}\right)$$

$$\langle S_{tot}\rangle = \frac1\beta\,\frac{\partial}{\partial h} \log Z = N\,\frac1\beta\,\frac{\partial}{\partial h} \log \lambda_1 = N\, \frac{e^{4J\beta}\left(-1 + e^{2h\beta}\right)}{\sqrt{4\, e^{2h\beta} + e^{8J\beta} + e^{4(h+2J)\beta} - 2\, e^{2(h+4J)\beta}}}$$
$$\langle S_m S_{m+r}\rangle = \langle (S_m S_{m+1})(S_{m+1} S_{m+2})(S_{m+2} S_{m+3}) \dots (S_{m+r-1} S_{m+r})\rangle \quad (\text{using } S_i^2 = 1)$$
Mean Field Solution of the Ising model
The earlier example was an exact solution of the 1D Ising model. Unfortunately this method does not generalize easily to higher dimensions. Indeed only in two dimensions, and that too without a magnetic field, may we write down the exact solution using the same method; this is known as Onsager's solution. But typically we want to see if we can find an approximate solution that works reasonably well. It so happens that the method I am going to describe, the mean field solution, works well especially in higher dimensions (i.e. 3 dimensions).

The mean (or average) field solution of the Ising model is the assertion that it is legitimate to think of the Ising model with magnetic field as just a collection of independent spins interacting with an effective field called the mean-field. Thus we take the liberty to write,
$$H = -J \sum_{\langle i,j\rangle} S_i S_j - h \sum_i S_i \approx -h_{eff} \sum_i S_i$$

Note that the average is independent of the index due to translational symmetry. Hence,

$$\langle S_k\rangle = \frac1N\,\frac{\partial}{\partial(\beta h_{eff})}\,\mathrm{Log}\, Z = \frac{\partial}{\partial(\beta h_{eff})}\,\mathrm{Log}\left(e^{\beta h_{eff}} + e^{-\beta h_{eff}}\right) = \tanh(\beta h_{eff})\,.$$
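The self-consistency condition $\langle S\rangle = \tanh(\beta h_{eff})$ becomes solvable once $h_{eff}$ is specified; the sketch below assumes the conventional choice $h_{eff} = q J \langle S\rangle + h$ for $q$ nearest neighbours ($q$, $J$ and the starting guess here are illustrative):

```python
# Fixed-point iteration for the mean-field magnetization
#   m = tanh( beta (q J m + h) ).
import math

def mean_field_m(beta, J, h, q, m0=0.5, iters=500):
    m = m0
    for _ in range(iters):
        m = math.tanh(beta * (q * J * m + h))
    return m

# below the mean-field transition (beta q J > 1): spontaneous magnetization
print(mean_field_m(beta=1.0, J=1.0, h=0.0, q=4))
# above it (beta q J < 1): the iteration collapses to m = 0
print(mean_field_m(beta=0.2, J=1.0, h=0.0, q=4))
```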
OR,
$$Z = \mathrm{Tr}\, e^{\beta J \sum_{i=1}^{N} \delta_{\sigma_i,\sigma_{i+1}}} = \sum_{[\sigma]} T_{\sigma_1\sigma_2}\, T_{\sigma_2\sigma_3}\, T_{\sigma_3\sigma_4} \dots T_{\sigma_N\sigma_1} = \mathrm{Tr}(T^N); \qquad T_{\sigma_1\sigma_2} = e^{\beta J\, \delta_{\sigma_1,\sigma_2}}$$

The transfer matrix is,

$$Z = \lambda_1^N + (q-1)\,\lambda_2^N \approx \lambda_1^N$$

The correlation function is $\langle \delta_{\sigma_i,\sigma_j}\rangle = e^{-\frac{|i-j|}{\xi}}$. The correlation length is $\xi^{-1} = \mathrm{Log}\!\left(\frac{\lambda_1}{\lambda_2}\right)$.
Debye and Einstein Theory of Specific Heat
Specific heat is a familiar notion. It is the amount of heat (Joules) required to raise
the temperature of a given material of mass 1 gram by one degree. Water has high
specific heat (4.18 J/gram/degree) compared to iron (0.450 J/gram/degree) .
The Debye model is a solid-state equivalent of Planck's law of black body radiation,
where one treats electromagnetic radiation as a photon gas. The Debye model treats
atomic vibrations as phonons in a box (the box being the solid). Most of the
calculation steps are identical as both are examples of a massless Bose gas with
linear dispersion relation.
Source:wiki
Consider a cube of side L. Since there are nodes at the boundaries we must have,

$$\frac{\lambda_{min}}{2} = a = \frac{L}{N^{1/3}}$$

This is the same as saying $n_{x,max} = n_{y,max} = n_{z,max} = N^{1/3}$. The total energy is (there are two transverse and one longitudinal polarization states),

$$U = 3 \int_{\varphi=0}^{\pi/2} d\varphi \int_{\theta=0}^{\pi/2} \sin\theta\, d\theta \int_{n=0}^{N^{1/3}} dn\; n^2\, \frac{E_n}{e^{E_n/T} - 1}$$
Hence,
Define,
At high temperatures,
$$\langle n\rangle = \frac{1}{e^{\epsilon/T} - 1}$$
The specific heat is,
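In the Debye model the specific heat reduces to a single dimensionless integral. A sketch (writing $t = T/\Theta_D$; the function name is an assumption) that reproduces both limits, the Dulong-Petit value $3Nk$ at high temperature and the $T^3$ law at low temperature:

```python
# Debye heat capacity in units of 3Nk:
#   C/(3Nk) = 3 t^3 * int_0^{1/t} x^4 e^x / (e^x - 1)^2 dx,   t = T/Theta_D
import math
from scipy.integrate import quad

def debye_C_over_3Nk(t):
    val, _ = quad(lambda x: x**4 * math.exp(x) / math.expm1(x) ** 2, 0.0, 1.0 / t)
    return 3.0 * t**3 * val

print(debye_C_over_3Nk(10.0))                            # ~ 1 (Dulong-Petit)
print(debye_C_over_3Nk(0.02) / debye_C_over_3Nk(0.01))   # ~ 8 = 2^3 (T^3 law)
```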
Tutorials
1. Imagine 2N labelled sites on a line. At each site there is a spin that can point
up or down. If it points up its energy is +1 and if it points down its energy is
− 1. Find the entropy of the system if the total energy is 0. Also find the
canonical partition function at temperature T and the average energy.
Ans.
The total energy is $U = \sum_{i=1}^{2N} \sigma_i$, where $\sigma_i = \pm1$. The total energy is zero if $N$ sites are up and the remaining are down. We just have to find the number of ways in which $N$ sites can be populated with up spins from a total number of $2N$ sites. Hence

$$S(U = 0) = \mathrm{Log}\!\left[\frac{(2N)!}{N!\, N!}\right].$$

The canonical partition function is,

$$Z = \mathrm{Tr}\left(e^{-\beta U}\right) = \mathrm{Tr}\left(e^{-\beta \sum_{i=1}^{2N} \sigma_i}\right) = \left(2\cosh(\beta)\right)^{2N}$$

The average energy is,
Since the corners are labelled the number of microstates with this energy is,

The entropy is $S\!\left(U = \frac{U_0}{2}, N = 2\right) = \mathrm{Log}(12)$.

When all the sites are occupied the potential energy is a maximum. This is given by,

$$U_{max} = U_0\left(12 + \frac{12}{\sqrt2} + \frac{4}{\sqrt3}\right).$$

The number of microstates with energy $U$ and particle number $N$ is the same as the number of microstates with energy $U_{max} - U$ and number of particles $8 - N$. The entropy is given by $S = \log\Omega$.
Euler Maclaurin formula
The Euler-Maclaurin formula tells us how to replace a discrete sum by an integral, which is typically easier to evaluate. The version we employed in this course is,
$$\sum_{i=0}^{\infty} f(i) \approx \int_0^{\infty} f(x)\, dx + \frac12\, f(0) - \frac1{12}\, f'(0) + \frac1{720}\, f'''(0) - \dots$$
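The formula is easy to test on a case where everything is known in closed form, $f(x) = e^{-x}$, for which $\sum_{i=0}^{\infty} e^{-i} = 1/(1 - e^{-1})$:

```python
# Euler-Maclaurin check with f(x) = exp(-x):
#   integral = 1, f(0) = 1, f'(0) = -1, f'''(0) = -1
import math

exact = 1.0 / (1.0 - math.exp(-1.0))
approx = 1.0 + 0.5 + 1.0 / 12.0 - 1.0 / 720.0
print(exact, approx)   # 1.58198..., 1.58194...
```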
We could use Fourier analysis to prove this. Write the Fourier transform and
its inverse as follows,
$$\sum_{i=0}^{N} f(i) = \int_{-\infty}^{\infty} \cos\!\left(\frac{kN}{2}\right)\csc\!\left(\frac k2\right)\sin\!\left(\frac{(N+1)k}{2}\right) g(k)\, \frac{dk}{2\pi} + i \int_{-\infty}^{\infty} \csc\!\left(\frac k2\right)\sin\!\left(\frac{kN}{2}\right)\sin\!\left(\frac{(N+1)k}{2}\right) g(k)\, \frac{dk}{2\pi}$$

Taking the limit $N \to \infty$ we get (replacing the rapid oscillations by their averages, $\overline{\cos^2\frac{kN}{2}} = \overline{\sin^2\frac{kN}{2}} = \frac12$, $\overline{\sin\frac{kN}{2}\cos\frac{kN}{2}} = 0$)

$$\sum_{i=0}^{\infty} f(i) = \frac12 \int_{-\infty}^{\infty} g(k)\, \frac{dk}{2\pi} + \frac i2 \int_{-\infty}^{\infty} \cot\!\left(\frac k2\right) g(k)\, \frac{dk}{2\pi}$$
Note that,

$$f'(0) = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\, (i k)\, g(k); \qquad f'''(0) = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\, (i k)^3\, g(k)$$
Saddle point method
Imagine I want to calculate the integral
$$I(N) = \int_{x_1}^{x_2} dx\; g(x)\, e^{N f(x)}$$

where $N \to \infty$ is large. The idea is to try and write a series for $I(N)$ in powers of $\frac1N$, when it is assumed that the function $f(x)$ has a maximum at $x_1 < x_0 < x_2$. This means,

$$f'(x_0) = 0, \qquad f''(x_0) < 0$$

The contribution to $I(N)$ is dominated by contributions from $x$ close to $x_0$. Hence we may write,

Set $\sqrt{N}\,(x - x_0) = y$, where,

$$y_1 = \sqrt{N}\,(x_1 - x_0) \to -\infty; \qquad y_2 = \sqrt{N}\,(x_2 - x_0) \to \infty$$

Hence,
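The leading term of the resulting series is $I(N) \approx g(x_0)\, e^{N f(x_0)} \sqrt{2\pi/(N\,|f''(x_0)|)}$. A quick numerical illustration with $f(x) = -(x-1)^2$, $g(x) = 1$ on $(0, 2)$, so that $x_0 = 1$ and $f''(x_0) = -2$:

```python
# Saddle-point estimate versus direct quadrature for I(N) = int e^{N f(x)} dx.
import math
from scipy.integrate import quad

N = 50.0
exact, _ = quad(lambda x: math.exp(-N * (x - 1.0) ** 2), 0.0, 2.0)
saddle = math.sqrt(2.0 * math.pi / (N * 2.0))  # g(x0) e^{N f(x0)} sqrt(2pi/(N|f''|))
print(exact, saddle)
```

Already at $N = 50$ the endpoint corrections are exponentially small and the two numbers coincide to high accuracy, which is why the method is so effective for thermodynamic ($N \to \infty$) quantities.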
Tutorial Questions
Q. Consider a gas of N identical atoms in a spherical harmonic trap described by the Hamiltonian,
$$H = \sum_{i=1}^{N}\left(\frac{p_i^2}{2m} + \frac{K}{2}\, r_i^2\right)$$
The average energy is

$$U = -\frac{\partial}{\partial\beta} \log Z = \frac{e^{-\beta\varepsilon}}{2 + e^{-\beta\varepsilon}}\, N\varepsilon = p\, N\varepsilon.$$

Here $p = \frac{e^{-\beta\varepsilon}}{2 + e^{-\beta\varepsilon}}$ is the probability that a given molecule is standing up. The heat capacity is,

$$C(T) = \frac{dU}{dT} = \frac{2\, e^{\beta\varepsilon}\, N\, (\beta\varepsilon)^2}{\left(1 + 2\, e^{\beta\varepsilon}\right)^2}$$

e) The largest energy at a positive temperature is at $\beta = 0$,

$$U_{max} = \frac13\, N\varepsilon$$
Molecular oxygen has a net magnetic spin $\vec S$ of unity, i.e. $S_z$ is quantized to $-1$, $0$, or $+1$. The Hamiltonian for an ideal gas of $N$ such molecules in a magnetic field is,
b) Calculate the Gibbs free energy $G(T,B)$ and show that for small $B$,

$$G(B) = G(0) - \frac{N \mu^2\, s(s+1)}{6T}\, B^2 + \dots$$
c) Calculate the zero field susceptibility $\chi = \frac{\partial M_z}{\partial B}\Big|_{B=0}$ and show that it satisfies Curie's Law

$$\chi = \frac{c}{T}$$

d) Show that $C_B - C_M = c\, \frac{B^2}{T^2}$, where $C_B$ and $C_M$ are heat capacities at constant $B$ and $M$ respectively.
$$G(T,B) = -T N\, \mathrm{Log}\!\left[\cosh(s\beta\mu B) + \coth\!\left(\frac{\beta\mu B}{2}\right)\sinh(s\beta\mu B)\right] \approx -N T \log(2s+1) - \frac{N\mu^2\, s(s+1)}{6T}\, B^2 + \dots$$
Langmuir isotherms: An ideal gas of particles is in contact with the surface of a catalyst.
(a) Show that the chemical potential of the gas particles is related to their temperature and pressure via
$$\mu = T \log\!\left(\frac{a\, P}{T^{5/2}}\right)$$ where $a$ is a constant.
(b) If there are M distinct adsorption sites on the surface, and each adsorbed particle gains an energy 𝜖 upon adsorption, calculate the grand partition
function for the two dimensional gas with a chemical potential µ.
(c) In equilibrium, the gas and surface particles are at the same temperature and chemical potential. Show that the fraction of occupied surface sites is
then given by 𝑓(𝑇, 𝑃) = 𝑃/(𝑃 + 𝑃0 𝑇 ). Find 𝑃0 𝑇 .
(d) In the grand canonical ensemble, the particle number N is a random variable.
Calculate its characteristic function < exp −𝑖𝑘𝑁 > in terms of G(𝛽μ),
and hence show that
$$\langle N^m\rangle_c = -T^{m-1}\, \frac{\partial^m G}{\partial\mu^m}\Big|_T$$
where G is the grand potential.
(e) Using the characteristic function, show that
$$\langle N^2\rangle_c = T\, \frac{\partial\langle N\rangle}{\partial\mu}\Big|_T$$

and hence that

$$\frac{\langle N^2\rangle_c}{\langle N\rangle_c^2} = \frac{1-f}{M\, f}$$
a) We may calculate the grand partition function of a classical ideal gas as follows.

$$Q = \sum_{N=0}^{\infty} \frac{e^{\beta\mu N}}{N!}\, Z(T,1)^N = \mathrm{Exp}\!\left(e^{\beta\mu}\, Z(T,1)\right)$$

$$\langle N\rangle = \frac1\beta\, \frac{\partial \log Q}{\partial\mu} = \frac1\beta\, \frac{\partial}{\partial\mu}\, e^{\beta\mu}\, Z(T,1) = e^{\beta\mu}\, Z(T,1)$$
or,

$$Z(T,1) = \frac{V}{h^3} \int d^3p\; e^{-\frac{\beta p^2}{2m}} = \frac{V}{h^3}\left(2\pi m T\right)^{\frac32}$$

$$N = e^{\beta\mu}\, \frac{V}{h^3}\left(2\pi m T\right)^{\frac32}$$

or,

$$p = e^{\beta\mu}\, \frac{T}{h^3}\left(2\pi m T\right)^{\frac32}$$

or,

$$\mu = T \log\!\left(\frac{a\, P}{T^{5/2}}\right); \qquad a = \frac{h^3}{\left(2\pi m\right)^{\frac32}}$$
b) The partition function of defects is,
This means,
$$f(T,P) = \frac{P}{P + P_0(T)}$$
d)
By definition,
$$\langle N^m\rangle_c = i^m\, \frac{d^m}{dk^m}\, \mathrm{Log}\langle e^{-i k N}\rangle\Big|_{k=0} = -T^{m-1}\, \frac{\partial^m G}{\partial\mu^m}$$
e) $\langle N^2\rangle_c = -T\, G''(\mu)$. But,

$$\langle N\rangle_c = \frac{d}{d(\beta\mu)}\, \mathrm{Log}\, Q = M\, \frac{d}{d(\beta\mu)}\, \mathrm{Log}\!\left[1 + e^{s}\right] = M\, \frac{e^{s}}{1 + e^{s}} = M f$$

or,

$$\frac{\langle N^2\rangle_c}{\langle N\rangle_c^2} = \frac{1-f}{M\, f}$$
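The fluctuation ratio is just binomial statistics: $M$ independent adsorption sites, each occupied with probability $f$, give $\langle N\rangle = Mf$ and $\langle N^2\rangle_c = Mf(1-f)$. A direct enumeration over a small surface confirms the formula ($M$ and $f$ below are illustrative):

```python
# Enumerate all occupation patterns of M sites, each occupied with
# probability f, and compare <N^2>_c / <N>^2 against (1 - f)/(M f).
import itertools

M, f = 6, 0.3
mean = second = 0.0
for occ in itertools.product((0, 1), repeat=M):
    w = 1.0
    for o in occ:                 # probability of this pattern
        w *= f if o else (1.0 - f)
    n = sum(occ)
    mean += w * n
    second += w * n * n
var = second - mean**2
print(var / mean**2, (1.0 - f) / (M * f))   # both equal 7/18
```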
Polar rods: Consider rod shaped molecules with moment of inertia I, and a
dipole moment μ. The contribution of the rotational degrees of freedom to the
Hamiltonian is given by
where E is an external electric field. (φ ∈ [0, 2π], θ ∈ [0, π] are the azimuthal
and polar angles, and 𝑝𝜑 , 𝑝𝜃 are their conjugate momenta.)
(a) Calculate the contribution of the rotational degrees of freedom of each
dipole to the classical partition function.
(b) Obtain the mean polarization P = (μ cos θ), of each dipole.
(c) Find the zero–field polarizability
(e) Sketch the rotational heat capacity per dipole.
a) Integrate over the canonical momenta to get,
$$Z_{partial} = \frac{1}{h^2} \int_{-\infty}^{\infty} dp_\theta \int_{-\infty}^{\infty} dp_\varphi\; e^{-\beta H_{rot}} = \frac{2\pi I}{\beta}\, \sin\theta\; e^{\beta E \mu \cos\theta}$$
$$U = \frac2\beta - \mu E \coth(\beta\mu E); \qquad C(T) = 2 - \frac{\left(\frac{E\mu}{T}\right)^2}{\sinh^2\!\left(\frac{E\mu}{T}\right)}$$
One dimensional polymer: Consider a polymer formed by connecting N disc
shaped molecules into a one dimensional chain. Each molecule can align either
along its long axis (of length 2a) or short axis (length a). The energy of the
monomer aligned along its shorter axis is higher by 𝜀,
i.e. the total energy is 𝐻 = 𝜀 𝑈, where 𝑈 is the number of monomers standing
up.
(a) Calculate the partition function, 𝑍(𝑇, 𝑁), of the polymer.
(b) Find the relative probabilities for a monomer to be aligned along its short or
long axis.
(c) Calculate the average length, < 𝐿 𝑇, 𝑁 >𝑐 , of the polymer.
(d) Obtain the variance < 𝐿 𝑇, 𝑁 2 >𝑐
(e) What does the central limit theorem say about the probability distribution for
the length 𝐿(𝑇, 𝑁)?
a) The partition function is,
$$Z(T,N) = \sum_{U=0}^{N} e^{-\beta\varepsilon U} = \frac{e^{\beta\varepsilon} - e^{-N\beta\varepsilon}}{e^{\beta\varepsilon} - 1}$$
b)

$$p_{long} = \frac{1}{1 + e^{-\beta\varepsilon}}; \qquad p_{short} = \frac{e^{-\beta\varepsilon}}{1 + e^{-\beta\varepsilon}}$$
c) The average length is

$$\langle L(T,N)\rangle_c = N\, \frac{2a + a\, e^{-\beta\varepsilon}}{1 + e^{-\beta\varepsilon}} = N \langle x\rangle, \qquad \langle x^2\rangle = \frac{(2a)^2 + a^2\, e^{-\beta\varepsilon}}{1 + e^{-\beta\varepsilon}}$$

d) $L(T,N) = x_1 + x_2 + x_3 + \dots + x_N$. The variance is

$$\Delta L^2 = N\langle x^2\rangle + N(N-1)\langle x\rangle^2 - N^2\langle x\rangle^2 = N\left(\langle x^2\rangle - \langle x\rangle^2\right)$$
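Treating the $N$ monomers as independent two-state units (length $2a$ with zero energy, length $a$ with energy cost $\varepsilon$, consistent with parts (b)-(d)), the formulas for $\langle L\rangle$ and $\Delta L^2$ can be confirmed by brute-force enumeration for a small chain; the numbers below are illustrative:

```python
# <L> and Var(L) for the two-state polymer: enumeration versus the formulas
#   <L> = N <x>,  Var(L) = N (<x^2> - <x>^2).
import itertools
import math

N, a, beta, eps = 8, 1.0, 1.0, 0.7
lengths = (2 * a, a)        # index 0: long axis, index 1: short axis (costs eps)
energies = (0.0, eps)

Z = mean = second = 0.0
for conf in itertools.product((0, 1), repeat=N):
    w = math.exp(-beta * sum(energies[c] for c in conf))
    L = sum(lengths[c] for c in conf)
    Z += w
    mean += w * L
    second += w * L * L
mean /= Z
var = second / Z - mean**2

x = (2 * a + a * math.exp(-beta * eps)) / (1.0 + math.exp(-beta * eps))
x2 = ((2 * a) ** 2 + a**2 * math.exp(-beta * eps)) / (1.0 + math.exp(-beta * eps))
print(mean, N * x)            # equal
print(var, N * (x2 - x * x))  # equal
```

This independence of the monomers is also exactly what part (e) is after: by the central limit theorem the distribution of $L$ becomes Gaussian for large $N$, with the mean and variance computed here.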