Statistical Physics


College of Natural and Computational Science

Department of Physics
Statistical Physics (phys4201)
Module of Statistical mechanics
The term statistical mechanics means the same as statistical physics.
 Statistical mechanics provides a theoretical bridge that takes you from the micro world to
the macro world.
 Statistical mechanics attempts to derive the macroscopic properties of an object
from the properties of its microscopic constituents and the interactions among them.
 Statistical physics considers systems of a large number of entities (particles) such as
atoms, molecules, spins, etc.
1. Features of Macroscopic Systems and basic probability concept
1.1. Macroscopic and microscopic systems

Macro state and Microstate
A macro state of a thermodynamic system is described by a few thermodynamic
variables, such as P, V, T and E, S etc. for a gas system. These quantities are the measure
of collective behaviors of atoms in the gas. A microstate, however, specifies the system
with the physical quantities of the constituent particles. For a single particle, we cannot
speak of P, V, and T. But we can specify its physical state of motion. In classical
mechanics, the state of a single particle is specified by its position and momentum at a
moment, (r; p). For N particles in a box, a microstate is specified by a set of values of
their positions and momenta (r1, r2, · · ·, rN; p1, p2, · · ·, pN). Since each of these
vectors has three components, the full set of values spans a 6N-dimensional space. This is
the phase space of the N-particle system. In quantum mechanics, we use wave functions
to describe the physical state of a single particle, and these wave functions are usually
specified by a set of quantum numbers. For example, a state of an electron in a hydrogen
atom is specified by a set of quantum numbers (n, l, m, σ), where n is the principal
quantum number, l the orbital angular momentum quantum number, m the z-component (magnetic)
quantum number, and σ = ±1/2 the spin quantum number.
1.2. Equilibrium state and fluctuations
A thermodynamic state of the system is called an equilibrium state if the
thermodynamic variables characterizing the properties of the system, such as
pressure, temperature, and volume, do not experience any spatial and temporal
variation. The equilibrium state is also referred to as thermodynamic equilibrium.
The small random variations of a system's properties with time about their mean equilibrium values
are called fluctuations.
Thermodynamic system
We take the following definitions:
• Thermodynamic system: a quantity of fixed mass under investigation,

• Surroundings: everything external to the system,
• System boundary: interface separating system and surroundings, and
• Universe: combination of system and surroundings. The system, surroundings, and
system boundary for a universe are shown for a potato-shaped system in Fig. 1.2A. We
allow two important interactions between the system and its surroundings:

Figure 1.2A: Sketch of a universe composed of a system, its surroundings, and the system
boundary.
• Heat can cross into the system (our potato can get hot), and
• Work can cross out of the system (our potato can expand). Now, the system boundaries
can change, for example the potato might expand on heating, but we can still distinguish
the system and the surroundings.

• Microcanonical ensemble: isolated system; E, V, N fixed.
• Canonical ensemble: closed system; T, V, N fixed, but E fluctuates.
• Grand canonical ensemble: open system; T, V and μ held constant.
A system refers to any parts of the universe being studied.
There are three types of thermodynamic systems. Based on the possible heat and matter
transfer, they are classified as open, closed, and isolated systems.
What is a Closed System?
• A closed system is one which does not exchange matter with the surroundings, but it
does exchange energy. It is in thermal contact with the surroundings, which we
idealize as a "heat bath". Thus, a closed system in thermal equilibrium
is characterized by T, V and N. The system is not isolated: although no matter crosses the
boundary, energy can be transferred as heat. For example, heat is transferred from a stove to the
water in a closed pot, and heat is also transferred from the pot to the
surroundings.

Isolated Systems

Neither matter nor heat can be transferred to or from the surroundings: an isolated system prevents
both heat and matter from crossing its boundary.

Open system

An open system is one that freely allows both energy and matter to be transferred into and out of
the system.

The Ensemble Distribution


Consider a system A in thermal contact with a system A′, where A′ is much larger than
A, and assume the systems A and A′ have reached equilibrium. We consider the case in
which the interaction between systems A and A′ is so small that we can consider the
states of system A independent of those of system A′. Let r be a particular microstate of A
corresponding to an energy E_r. Although A can exchange energy with A′, the combined system A + A′
is isolated. Since system A + A′ is isolated, the total energy is fixed, so the sum of the
energies of A and A′ must always equal the total energy.

Thus the probability of occurrence of the state r is given by the canonical distribution

P_r = e^(−βE_r) / Σ_r e^(−βE_r),  with β = 1/(kT′)

where T′ is the temperature of the large system (heat reservoir) A′.
1.3. Reversible and irreversible processes
Reversible and Irreversible Processes: In the reversible process, the system and
surroundings can be restored to the initial state from the final state while in the Irreversible
process this cannot be done. Change is the only constant we see around us every day in the
world. Ice melts to form water, which evaporates into water vapor, paper burns and becomes
smoke and ash, plants and animals grow up, grow old and die etc.
In our daily life, we can see these changes and categorize them as reversible and
irreversible; for example, water vapor can be condensed to water and frozen back to ice.
However, most such processes, such as the rusting of iron, paper burning, the growth of plants, etc.,
are irreversible. Some examples of nearly reversible processes are the following:
1. Frictionless relative motion.
2. Expansion and compression of spring.
3. Polytropic expansion or compression of a gas.
4. Isothermal expansion or compression.
Some more examples of irreversible processes are:
1. Relative motion with friction
2. Combustion
3. Diffusion
4. Free expansion
5. Heat transfer
6. Plastic deformation

7. All spontaneous processes (or naturally occurring processes) are thermodynamically
irreversible
 Cooling down of a cup of tea
 Spreading of a drop of ink in water
 The flow of water down a hill.
 Mixing of two gases
1.4. Properties of systems in equilibrium
If a system is at equilibrium, its temperature, pressure, and volume are necessarily
constant; all interactions between such a system and its surroundings can be severed
without changing any of the properties of the system.
1.5. Elementary relations among probabilities

1.6 Binomial Distribution


What is Binomial Distribution?
The binomial distribution is a common probability distribution that models the probability of
obtaining one of two outcomes in a fixed number of independent trials. It summarizes the
number of trials in which each trial has the same chance of attaining one specific outcome.

Applications of Binomial distribution
• The mean, variance, and standard deviation: for N trials with success probability p (and q = 1 − p), the mean number of successes is Np, the variance is Npq, and the standard deviation is √(Npq).
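As an illustration (not part of the original module), here is a short Python sketch of these formulas; the trial count N, success probability p, and the sample value k = 3 are arbitrary example numbers.

```python
from math import comb, sqrt

def binomial_pmf(k, N, p):
    """Probability of exactly k successes in N independent trials."""
    return comb(N, k) * p**k * (1 - p)**(N - k)

N, p = 10, 0.3          # example values (assumed for illustration)
q = 1 - p

mean = N * p            # mean number of successes
variance = N * p * q    # variance
std_dev = sqrt(variance)

print(binomial_pmf(3, N, p))    # P(exactly 3 successes) ~ 0.267
print(mean, variance, std_dev)  # 3.0 2.1 1.449...
```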

1.7. Mean values and Calculation of mean values for spin system
Mean values for a single spin. The magnetic moment of a spin is such that its
component in the up direction has probability p of being equal to +μ0 and
probability q = 1 − p of being equal to −μ0. Then calculate the mean moment, the mean square moment,
and the dispersion of the moment.

Mean moment:
⟨μ⟩ = Σ_i P_i μ_i = p μ0 + (1 − p)(−μ0) = μ0 (p − 1 + p) = μ0 (2p − 1), then
⟨μ⟩ = μ0 (2p − 1)

Mean square moment:
⟨μ²⟩ = p μ0² + (1 − p) μ0² = μ0²

Dispersion:
⟨(μ − ⟨μ⟩)²⟩ = ⟨μ²⟩ − ⟨μ⟩²

where q = 1 − p, then
⟨(μ − ⟨μ⟩)²⟩ = μ0² [1 − (2p − 1)²] = 4 p q μ0²

1. Find the partition function for a single dipole (ideal two-state paramagnet);
2. Find the probability that the dipole is in
a) the “up” state;
b) the “down” state;
3. Check that these probabilities add up to 1.
4. Find the average energy of the dipole;
5. Find the total energy for a sample of N dipoles;
6. Find the average energy of the sample;
7. Find the average value of the dipole’s magnetic moment along the direction of B;
8. Find the total magnetization of the sample.

Solution
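The worked solution that followed appears to have been lost in extraction. The Python sketch below is my own illustration, with assumed values for the moment μ0, the field B, the temperature T and the sample size N; it uses the standard two-state results Z = 2 cosh(βμ0B), Ē = −μ0B tanh(βμ0B) and ⟨μ_z⟩ = μ0 tanh(βμ0B).

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
mu  = 9.274e-24         # dipole moment, J/T (Bohr magneton, example value)
B   = 1.0               # magnetic field, T (example value)
T   = 300.0             # temperature, K (example value)
N   = 6.022e23          # number of dipoles in the sample (example value)

beta = 1.0 / (k_B * T)
x = beta * mu * B

Z      = 2 * math.cosh(x)              # 1. single-dipole partition function
P_up   = math.exp(+x) / Z              # 2a. probability of the "up" state
P_down = math.exp(-x) / Z              # 2b. probability of the "down" state
print(P_up + P_down)                   # 3. check: prints 1.0

E_avg  = -mu * B * math.tanh(x)        # 4. average energy of one dipole
E_tot  = N * E_avg                     # 5. total energy of the sample
print(E_tot / N)                       # 6. average energy per dipole of the sample
mu_avg = mu * math.tanh(x)             # 7. average moment along B
M      = N * mu_avg                    # 8. total magnetization of the sample
print(mu_avg, M)
```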

1.8. Continuous probability distributions
• Continuous Probability Distribution: assigns a probability density to individual points;
the probability of a range of values is obtained by integrating the density function over that range.
Rules governing continuous distributions:

f(y) ≥ 0 for all y,  and  ∫ f(y) dy = 1

where the integral is taken over the entire range of y.
Continuous Probability Distributions
 Uniform Probability Distribution
 Normal Probability Distribution
 Exponential Probability Distribution
2.2. Statistical ensemble
If we are considering a collection of particles with macroscopic properties, like energy,
volume, chemical potential, then the collection of such particles is considered as an
assembly. Further, this collection of a large number of non-interacting, independent
assemblies is known as an ensemble or statistical ensemble. The members of an ensemble
are referred to as elements or assemblies. These elements are identical in macroscopic
properties, such as energy, volume and number of particles, and differ in their microscopic properties, i.e., elements have different
position and momentum coordinates. In other words, we can say that an ensemble is
defined as a collection of a large number of assemblies which are identical in
macroscopic properties but differ in microscopic properties. Thus, it can be viewed as
numerous copies of a system or a probability distribution defining the state of the system.
Assembly: A system of N identical particles.
Ensemble: a collection of microstates that appear the same macroscopically/
thermodynamically. The entities may be single particles or they may themselves be
identical assemblies of particles.
2.3. Statistical postulates
The various methods of statistical mechanics are applied to discuss some average or most
probable properties of large assemblies of electrons, atoms, molecules etc. Before the
advent of quantum mechanics, Maxwell and Boltzmann applied statistical methods with the
help of classical physics (the classical case). These methods are collectively known as
Classical Statistics or Maxwell-Boltzmann (MB) Statistics. These statistics proved
to be successful in explaining the pressure, temperature, etc. of gaseous systems, but they
could not explain some experimental results, such as the energy distribution of black-body
radiation and the specific heat at low temperature. To explain such phenomena,
Bose and Einstein and Fermi and Dirac made use of new statistics based on the
newly discovered quantum theory. The new statistics are known as Quantum Statistics
and can be divided into the following two groups (the quantum case):
(i) Bose-Einstein (BE) statistics
(ii) Fermi-Dirac (FD) statistics

Maxwell-Boltzmann (MB) Statistics. The basic postulates of MB statistics are:


(i) The associated particles are distinguishable.
(ii) Each energy state can contain any number of particles.
(iii) Total number of particles in the entire system is constant.
(iv) Total energy of all the particles in the entire system is constant.
(v) Particles are spinless. Examples: gas molecules at high temperature and low
pressure.
Bose-Einstein (BE) Statistics. The basic postulates of BE statistics are:
(i) The associated particles are identical and indistinguishable.
(ii) Each energy state can contain any number of particles.
(iii) The total energy and the total number of particles of the entire system are constant.
(iv) The particles have zero or integral spin, i.e. 0, ℏ, 2ℏ, etc.,
where ℏ is the unit of spin.
(v) The wave function of the system is symmetric under the positional exchange of
any two particles. Examples: photons, phonons, all mesons.
Fermi-Dirac (FD) Statistics. The basic postulates of FD statistics are:

(i) Particles are identical and indistinguishable.
(ii) Total energy and total number of particles of the entire system is constant
(iii) Particles have half-integral spin, i.e., 1/2 ℏ, 3/2 ℏ, 5/2 ℏ, etc.
(iv) Particles obey Pauli’s exclusion principle, i.e. no two particles in a single system
can have the same value for each of the four quantum numbers. In other words, a single
energy state can contain at best a single particle with appropriate spin.
(v) The wave function of the system is anti-symmetric under the positional exchange of
any two particles. Examples: electrons, protons, neutrons, all hyperons (Λ, Σ, Ξ, Ω), etc.;
these are known as fermions. (Particles obeying BE statistics, such as photons, phonons and
mesons, are known as bosons.) [Note: Symmetric and anti-symmetric wave
functions. Suppose the allowed wave function for an n-particle system is
ψ(1,2,3,…,r,s,…,n), where the integers within the argument of ψ represent the
coordinates of the n particles relative to some fixed origin. Now, if we interchange the
positions of any two particles, say r and s, the resulting wave function becomes
ψ(1,2,3,…,s,r,…,n). The wave function ψ is said to be symmetric when
ψ(1,2,3,…,r,s,…,n) = ψ(1,2,3,…,s,r,…,n) and anti-symmetric when ψ(1,2,3,…,r,s,…,n)
= − ψ(1,2,3,…,s,r,…,n).]
Example: Consider a gas of two particles, say A and B. Assume that each
particle can be in any one of three possible quantum states (s = 1, 2, 3). Then distribute these
particles according to
A) Maxwell-Boltzmann Statistics
B) Bose-Einstein Statistics
C) Fermi-Dirac Statistics
Soln
A) Maxwell-Boltzmann Statistics

Each (distinguishable) particle can be placed independently in any of the three states, so there
are 3 × 3 = 9 possible states for the whole gas: 3 with both particles in the same state and 6
with the particles in different states.

B) Bose-Einstein Statistics

There are now three distinct ways of placing the particles in the same state. There are
three distinct ways of placing the particles in different states. Hence there exist a total of
3 +3 = 6 possible states for the whole gas.
C) Fermi-Dirac Statistics

There exist now only a total of 3 possible states for the whole gas.
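A small Python sketch (my own illustration) that enumerates the states of this two-particle, three-level example and reproduces the counts 9, 6 and 3:

```python
from itertools import product, combinations_with_replacement, combinations

states = [1, 2, 3]

# Maxwell-Boltzmann: particles A and B are distinguishable -> ordered pairs
mb = list(product(states, repeat=2))
# Bose-Einstein: indistinguishable, multiple occupancy allowed -> unordered pairs with repetition
be = list(combinations_with_replacement(states, 2))
# Fermi-Dirac: indistinguishable, at most one particle per state -> unordered pairs without repetition
fd = list(combinations(states, 2))

print(len(mb), len(be), len(fd))   # 9 6 3

same = lambda pairs: sum(1 for a, b in pairs if a == b)
diff = lambda pairs: sum(1 for a, b in pairs if a != b)
for name, pairs in [("MB", mb), ("BE", be), ("FD", fd)]:
    print(name, "same state:", same(pairs), "different states:", diff(pairs))
```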
Then, writing ξ for the ratio of the probability that the two particles are found in the same state
to the probability that they are found in different states, we have for the three cases
ξ(MB) = 3/6 = 1/2,  ξ(BE) = 3/3 = 1,  ξ(FD) = 0/3 = 0.

Thus in the BE case there is a greater relative tendency for particles to bunch together in the
same state than in classical statistics. On the other hand, in the FD case there is a greater relative
tendency for particles to remain apart in different states than there is in classical statistics. It is
one of the most fundamental facts of nature that elementary particles, atoms, and molecules are
divided into two groups called bosons and fermions.
2.4. Probability calculations
What is Probability? Probability can be defined as the ratio of the number of favorable
outcomes to the total number of outcomes of an event. The probability is classified into
theoretical probability and experimental probability. The following terms in probability
help in a better understanding of the concepts of probability.
Experiment: A trial or an operation conducted to produce an outcome is called an
experiment.
Sample Space: All the possible outcomes of an experiment together constitute a sample
space. For example, the sample space of tossing a coin is head and tail.
Favorable Outcome: An event that has produced the desired result or expected event is
called a favorable outcome.
Trial: A trial denotes one performance of a random experiment.
Random Experiment: An experiment that has a well-defined set of outcomes is called a
random experiment. For example, when we toss a coin, we know that we will get a head
or a tail, but we are not sure which one will appear.
Event: A set of outcomes of a random experiment (a subset of the sample space) is called an event.
Equally Likely Events: Events that have the same chances or probability of occurring
are called equally likely events. The outcome of one event is independent of the other.
For example, when we toss a coin, there are equal chances of getting a head or a tail.
If P (E) represents the probability of an event E, then, we have,
 P (E) = 0 if and only if E is an impossible event (never occurs).
 P (E) = 1 if and only if E is a certain event.
 0 ≤ P (E) ≤ 1.
Suppose we are given two events, A and B. Then P(A) > P(B) if
and only if event A is more likely to occur than event B.
2.5. Number of states accessible to a macroscopic system

2.6. Distribution of energy between macroscopic systems

For an ideal gas

2.7. Thermal equilibrium, Temperature and Heat
Two systems are in thermal equilibrium when they have the same temperature.
Temperature is a measure of the hotness or coldness of an object. It is a quantity that
describes the system or the object whether or not it is in equilibrium.
Heat, symbol Q and unit joule (J), is the spontaneous flow of energy into or out of a system
caused by a difference in temperature between the system and its surroundings, or between two
objects whose temperatures are different; i.e., heat transfer can occur only when there is a
temperature difference within a medium or between media.
2.8. Heat transfer
Heat may be transferred from one place to another in three ways:
 conduction
 convection
 radiation
Conduction

Conduction is most obvious in solids. All liquids (except mercury) and gases are very poor
conductors of heat. When a solid heats up, its particles gain kinetic energy and increase the
energy with which they vibrate. Metals are all good conductors of heat especially copper,
aluminum and silver, because they have free electrons which are easily able to transfer heat
energy.
Convection
Convection is the transfer of heat by the movement of the heated particles themselves. This can
only take place in liquids and gases because in solids the particles are not able to move from their
fixed positions. When a liquid or gas is heated, it expands and becomes less dense.
Radiation
Radiation is the way we receive heat energy from the sun. It does not require a medium for its
transmission (i.e. it can travel through empty space) and is in the form of electromagnetic energy
waves which travel in the same way as light or radio waves. When these energy waves fall on a
body, the energy may be:
 absorbed
 transmitted
 reflected
When radiant energy is absorbed the body will rise in temperature.

Ideal gas

3. Microscopic Theory and Macroscopic Measurements


3.1. Determination of the absolute temperature
Temperature measures the kinetic energy per molecule due to random motion.

Absolute temperature, also called thermodynamic temperature, is the temperature of
an object on a scale where 0 is taken as absolute zero. The absolute temperature scale is the
Kelvin scale, and absolute temperature is measured in kelvins (K).

3.2. Work, internal energy and heat


Work
When a system interacts with its surroundings, it can exchange energy in two ways:
heat and work are the two possible ways of transferring energy from one system to another.
Work is a non-spontaneous energy transfer into or out of a system due to a force acting
through a displacement. Work takes many forms: moving a piston, stirring, or running
an electric current through a resistance. Work depends on the path followed; it is
a path function and hence not a property of the system.
Internal Energy
Internal Energy, symbol U, is defined as the energy associated with the random,
disordered motion of the microscopic components-atoms and molecules.
Any bulk kinetic energy of the system due to its motion through space is not included in
its internal energy. Internal energy includes kinetic energy of translation, rotation, and
vibration of molecules, potential energy with in molecules, and potential energy between
molecules. Examples
 The molecule as a whole can move in x, y and z directions with respective components
of velocities and hence possesses translation kinetic energy.
 There can be rotation of molecule about its center of mass and then the kinetic energy
associated with rotation is called rotational energy.

 In addition, the bond length undergoes change, and the energy associated with this is called
vibrational energy.
 The electrons move around the nucleus and possess a certain energy that is called
electronic energy.
Heat
Heat is the spontaneous flow of energy into or out of a system caused by a difference in
temperature between the system and its surroundings, or between two objects whose
temperatures are different. Another aspect of this definition of heat is that a body never
contains heat. Rather, heat can be identified only as it crosses the boundary. Thus, heat is
a transient phenomenon.
3.4. Heat capacity
Suppose that a body absorbs an amount of heat ΔQ, and its temperature consequently rises
by ΔT. The usual definition of the heat capacity, or specific heat, of the body is

C ≡ ΔQ / ΔT

If the body consists of ν moles of some substance then the molar specific heat (i.e., the
specific heat of one mole of this substance) is defined as

c ≡ (1/ν) ΔQ / ΔT

In the limit that the amount of absorbed heat becomes infinitesimal, we obtain

c = (1/ν) dQ/dT

In classical thermodynamics, it is usual to define two molar specific heats. Firstly, the
molar specific heat at constant volume, denoted

c_V = (1/ν) (dQ/dT)_V

and, secondly, the molar specific heat at constant pressure, denoted

c_P = (1/ν) (dQ/dT)_P

Consider the molar specific heat at constant volume of an ideal gas. Because dV = 0,
no work is done by the gas on its surroundings and the first law of thermodynamics
reduces to

dQ = dE

It follows that

c_V = (1/ν) (∂E/∂T)_V

Since E is a function of T only, we can write

dE = ν c_V dT

According to the first law of thermodynamics,

dQ = dE + p dV

The equation of state of an ideal gas, p V = ν R T, implies that if the volume changes by dV,
the temperature changes by dT, and the pressure remains constant, then

p dV = ν R dT

The previous equations can be combined to give

dQ = ν c_V dT + ν R dT

Now, by definition,

c_P = (1/ν) (dQ/dT)_P

So we obtain, for an ideal gas,

c_P = c_V + R

The ratio of the two specific heats, c_P/c_V, is conventionally denoted γ. We have

γ ≡ c_P/c_V = 1 + R/c_V
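A short Python check of these relations for a monatomic ideal gas (c_V = 3R/2 is taken from the equipartition result derived later in the module); the numbers are only illustrative.

```python
R = 8.314            # gas constant, J mol^-1 K^-1

c_V = 1.5 * R        # monatomic ideal gas: 3 translational degrees of freedom
c_P = c_V + R        # relation derived above
gamma = c_P / c_V

print(c_V, c_P, gamma)   # 12.47, 20.79, 1.666...
```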

3.5. Entropy
Entropy is a property of a system.
The second law of thermodynamics may be stated as: the entropy of the
universe can never decrease.

“The entropy of an isolated system can only increase.” For an isolated system,
ΔS = 0 for reversible processes,
ΔS > 0 for irreversible processes,
ΔS < 0: the process is impossible.
Entropy in the reversible process
Entropy is a measure of the disorder of a state. Entropy can be defined using the macroscopic
concepts of heat and temperature.

Entropy can also be defined in terms of the number of microstates, W, of a macrostate
whose entropy is S:

S = k ln W

where W is the number of microstates and k is Boltzmann's constant.

The entropy of the universe increases in all real processes. This is another statement of
the second law of thermodynamics. The change in entropy in an arbitrary reversible
process is

dS = dQ/T,  or  ΔS = ∫ dQ/T

This is called Clausius' theorem.

Example: isothermal expansion of an ideal gas. From the first law of thermodynamics, dQ = dE + P dV,
where dE = 0 at constant temperature (the internal energy of an ideal gas depends only on T). Then

dQ = P dV  at constant temperature.

So dS = dQ/T = P dV/T, where PV = RT for one mole of ideal gas, and then

dS = R dV/V

Then integrate both sides as the system changes from volume V1 to V2:

ΔS = R ln(V2/V1)
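As a numerical illustration of ΔS = R ln(V2/V1) (volumes chosen arbitrarily), doubling the volume of one mole of ideal gas at constant temperature gives:

```python
import math

R = 8.314           # J mol^-1 K^-1
V1, V2 = 1.0, 2.0   # initial and final volumes (only the ratio matters)

delta_S = R * math.log(V2 / V1)   # entropy change for isothermal expansion of 1 mol
print(delta_S)                    # about 5.76 J K^-1
```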

3.6. Intensive and extensive parameters


The state variables may be classified into two groups: the intensive variables and the
extensive variables. Intensive variables are independent of the size or mass of the system.
Examples include pressure, electric field, magnetic field, specific heat, temperature,
specific enthalpy, magnetic moment, density, surface tension, e.m.f. (E), etc.
Extensive variables are proportional to the mass or the size of the system. Examples
include volume, internal energy, Area, Charge, length etc.

4. Thermodynamics
4.1. Laws of thermodynamics and Basic statistical relations
Zeroth law of thermodynamics
Consider three systems A, B, and C as shown in Fig. 4.1A below. If system A and system B
are brought into thermal contact and thereafter achieve thermal equilibrium, and the
same holds for system A and system C, then the same will also hold true for system B
and system C. This experimental observation is summarized in the zeroth law, which states that
if each of two systems is in thermal equilibrium with a third, they are in equilibrium with
one another.

All the systems referred to the above and any other in equilibrium with them possess a
common property called temperature (T). The temperature of a system is a property
that determines whether or not that system is in thermal equilibrium with other
systems.

Figure 4.1A. Systems in thermal equilibrium


Walls that prevent thermal interactions are called adiabatic, and a system enclosed by an
adiabatic wall is called an isolated system. An isolated system cannot exchange heat with
its surroundings, though work may be done on it. Any changes it undergoes are called
adiabatic changes. Walls that allow thermal interaction between the system and the

surrounding are called diathermal, and two systems separated by a diathermal wall are
said to be in contact. In other words two systems are in thermal contact if heating one of
them results in macroscopic changes in the other. For example, you are aware that if we
place two metal containers of water in physical contact, and heat one container, the water
in both containers becomes hotter. We say that the two containers are in thermal contact.
First law of thermodynamics
The first law is a statement of conservation of energy.
It introduces the internal energy E as a state function.
It states that the heat added to (or extracted from) a system equals the sum of the
change in internal energy and the work done by the system. In mathematical form,

dQ = dE + dW

Second law of thermodynamics


There are many ways to state the second law of thermodynamics. One statement is as
follows:
• Second law of thermodynamics: The entropy of an isolated system can never decrease
with time. Entropy as a measure of the randomness (or disorder) of a system, with high
randomness corresponding to high entropy. Low randomness or low disorder often
corresponds to low entropy.
Clausius statement: Clausius gives a more precise statement of the second law:
• Second law of thermodynamics: Heat cannot itself pass from a colder to a hotter body.
The Clausius formulation of the second law is easy to understand in engineering terms
and is illustrated schematically in Fig. 4.1B. Note that air conditioners move heat from
cold regions to hot regions, but that work input is required.

Figure 4.1B: Schematic of the Clausius statement of the second law of thermodynamics.

Kelvin-Planck statement: Another statement of the second law is Kelvin-Planck
statement.
• Second law of thermodynamics: It is impossible for any system to operate in a
thermodynamic cycle and deliver a net amount of work to its surroundings while
receiving an energy transfer by heat from a single thermal reservoir.

Figure 4.1C: Schematic of the Kelvin-Planck statement of the second law of


thermodynamics.
The Kelvin-Planck formulation of the second law is easy to understand in engineering
terms and is illustrated schematically in Fig. 4.1C. For the schematic of Fig. 4.1C, the
first law, neglecting changes in kinetic and potential energy, states that U2 − U1 = Q −
W. But we have specified that the process is a cycle, so U1 = U2, and thus the first law
gives Q = W. The second law, for this scenario, requires that positive Q cannot be
delivered; that is, for an engine in contact with a single thermal reservoir, Q ≤ 0 and W ≤ 0.
In informal language, the Kelvin-Planck statement says
• You can turn all the work into heat, but
• You cannot turn all the heat into work.
Third law of thermodynamics
“Every substance has a finite positive entropy, but at the absolute zero of temperature the
entropy may become zero, and does so become in the case of perfect crystalline
substances.” Because entropy is a thermodynamic property, it can be used to
help determine the state. That is, we can say the following:

As the temperature falls to absolute zero, the entropy of any pure crystalline substance tends to a
universal constant, which may be taken to be zero: S → 0 as T → 0.

The entropy at temperature T can be written as

S(T) − S(0) = ∫ (from 0 to T) C dT′/T′

But for this integral to remain finite, the heat capacity C must itself tend to zero as T → 0.

Because S tends to a constant as T → 0, any process carried out at absolute zero takes place with no
change of entropy in equilibrium:

S = constant

4.2. Statistical calculation of thermodynamic quantities


Consider a system of gas consisting of N identical monatomic molecules, each of mass m, enclosed
in a container of volume V. Let r_i be the position vector of the i-th molecule and p_i its
momentum. The total energy of the system is

E = Σ_i p_i²/(2m) + U(r_1, …, r_N)

If the gas is sufficiently dilute (an ideal gas), the interaction between the gas molecules
becomes negligible, so U ≈ 0.
If the gas molecules are treated in the classical approximation, the partition function is

Z = (1/h^(3N)) ∫ e^(−βE) d³r_1 … d³r_N d³p_1 … d³p_N = ζ^N

where ζ is the partition function of a single molecule,

ζ = (1/h³) ∫ d³r ∫ e^(−βp²/2m) d³p
  = (V/h³) ∫ e^(−βp_x²/2m) dp_x ∫ e^(−βp_y²/2m) dp_y ∫ e^(−βp_z²/2m) dp_z

for the dilute gas, since ∫ d³r = V when U ≈ 0 (d³r is the volume element of 3-D space).
Using the Gaussian integral ∫ e^(−ax²) dx = √(π/a), each momentum integral gives (2πm/β)^(1/2), so

ζ = V (2πm/(βh²))^(3/2)

is the partition function for a single molecule, and therefore

ln Z = N ln ζ = N [ ln V − (3/2) ln β + (3/2) ln(2πm/h²) ]

Now calculate the mean energy, the pressure, the specific heat at constant
volume C_V, and the entropy S.
Solution. A) The mean energy of the system is given by

Ē = −∂(ln Z)/∂β = (3/2) N/β

Ē = (3/2) N k T

B) Pressure

p̄ = (1/β) ∂(ln Z)/∂V = N/(βV), so

p̄ = N k T / V,  i.e.  p̄ V = N k T

C) Specific heat at constant volume

C_V = (∂Ē/∂T)_V = (3/2) N k

D) Entropy (S)

S = k (ln Z + βĒ)
  = N k [ ln V − (3/2) ln β + (3/2) ln(2πm/h²) ] + (3/2) N k
  = N k [ ln V + (3/2) ln T + σ ]

where σ = (3/2) ln(2πmk/h²) + 3/2 is a constant independent of N, V and T.
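The differentiations above can be checked symbolically; the sketch below (my own, using sympy, with k, m and h kept as symbols) reproduces Ē = (3/2)N/β, p̄ = N/(βV) and C_V = (3/2)Nk from ln Z.

```python
import sympy as sp

N, V, beta, m, h, k, T = sp.symbols('N V beta m h k T', positive=True)

# ln Z for the classical monatomic ideal gas, Z = zeta**N (derived above)
lnZ = N * (sp.log(V) - sp.Rational(3, 2) * sp.log(beta)
           + sp.Rational(3, 2) * sp.log(2 * sp.pi * m / h**2))

E_mean = -sp.diff(lnZ, beta)                      # mean energy  = -d(lnZ)/d(beta)
p_mean = sp.diff(lnZ, V) / beta                   # pressure     = (1/beta) d(lnZ)/dV
C_V    = sp.diff(E_mean.subs(beta, 1/(k*T)), T)   # C_V = d(mean energy)/dT

print(sp.simplify(E_mean))   # 3*N/(2*beta)
print(sp.simplify(p_mean))   # N/(V*beta)
print(sp.simplify(C_V))      # 3*N*k/2
```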

4.3. Thermodynamic potential


The thermodynamic potentials are the internal energy, the enthalpy, the Helmholtz free energy
and the Gibbs free energy, represented by U, H, F and G respectively.
Internal energy

dU = T dS − P dV

S and V are the two independent parameters and U is the dependent
parameter.

Enthalpy

H = U + PV,  dH = T dS + V dP

S and P are the two independent parameters and H is the dependent parameter.

Helmholtz free energy is a thermodynamic potential that measures the useful work
obtainable from a closed thermodynamic system at constant temperature and volume.

F = U − TS,  dF = −S dT − P dV

T and V are the two independent parameters and F is the dependent parameter.
The Gibbs free energy is a way to predict, for example, the direction of a chemical
reaction, unlike entropy, which only takes into account the accessible states.

G = U − TS + PV = H − TS,  dG = −S dT + V dP

T and P are the two independent parameters and G is the dependent parameter.


4.4. Gibbs-Duhem’s and Maxwell’s relations
Gibbs-Duhem’s Relations

For a system containing a single species of particle, the internal energy is an extensive function
of S, V and N, so Euler's theorem gives

U = TS − PV + μN

where μ is the chemical potential. Taking the differential,

dU = T dS + S dT − P dV − V dP + μ dN + N dμ

But the fundamental thermodynamic relation states that

dU = T dS − P dV + μ dN

Comparing the two expressions term by term,

S dT − V dP + N dμ = 0

This equation is called the Gibbs-Duhem relation.


Maxwell’s relations
From the internal energy,

dU = T dS − P dV

it immediately follows that

T = (∂U/∂S)_V  and  −P = (∂U/∂V)_S

But the mixed second derivatives of U are equal, ∂²U/∂V∂S = ∂²U/∂S∂V.
Hence, we deduce that

(∂T/∂V)_S = −(∂P/∂S)_V

This is known as the first of the Maxwell relations.

From the enthalpy, dH = T dS + V dP, we obtain in the same way

(∂T/∂P)_S = (∂V/∂S)_P

This is called the second Maxwell relation.

From the Helmholtz free energy, dF = −S dT − P dV,

(∂S/∂V)_T = (∂P/∂T)_V

This is called the third Maxwell relation.

From the Gibbs free energy, dG = −S dT + V dP,

(∂S/∂P)_T = −(∂V/∂T)_P

which is called the fourth Maxwell relation.
4.5. Response functions
The two most important response functions of a system are the heat capacities at constant volume
and constant pressure,

C_V = T (∂S/∂T)_V = (∂E/∂T)_V   and   C_P = T (∂S/∂T)_P = (∂H/∂T)_P

the isothermal compressibility (at constant T),

κ_T = −(1/V)(∂V/∂P)_T

the adiabatic compressibility,

κ_S = −(1/V)(∂V/∂P)_S

and the thermal expansion coefficient,

α = (1/V)(∂V/∂T)_P

We expect the specific heats and the compressibilities to be positive, i.e.
C_V, C_P > 0 and κ_T, κ_S > 0.
Let us derive the relationship among the response functions,
considering the number of particles to be kept constant.
Regarding S as a function of T and V,

dS = (∂S/∂T)_V dT + (∂S/∂V)_T dV

Divide both sides by dT at constant pressure and multiply both sides by T:

T (∂S/∂T)_P = T (∂S/∂T)_V + T (∂S/∂V)_T (∂V/∂T)_P

C_P = C_V + T (∂S/∂V)_T (∂V/∂T)_P

Using Maxwell's relation and the properties of state functions,

(∂S/∂V)_T = (∂P/∂T)_V  and  (∂P/∂T)_V = −(∂P/∂V)_T (∂V/∂T)_P

Since −(∂P/∂V)_T = 1/(V κ_T) and (∂V/∂T)_P = V α,

(∂P/∂T)_V = α/κ_T

Therefore

C_P − C_V = T V α²/κ_T  …….(1)

Similarly one can show that

C_P/C_V = κ_T/κ_S  …….(2)

From equations (1) and (2) we can get, since C_V = C_P κ_S/κ_T,

C_P (1 − κ_S/κ_T) = T V α²/κ_T

and by rearranging,

κ_T − κ_S = T V α²/C_P

This is the response-function relation.
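A quick symbolic check (my own, using sympy) that C_P − C_V = TVα²/κ_T holds for one mole of ideal gas, where α = 1/T and κ_T = 1/P:

```python
import sympy as sp

R, T, P = sp.symbols('R T P', positive=True)

V = R * T / P                 # equation of state for one mole of ideal gas
alpha   = sp.diff(V, T) / V   # thermal expansion coefficient  (1/V)(dV/dT)_P
kappa_T = -sp.diff(V, P) / V  # isothermal compressibility    -(1/V)(dV/dP)_T

lhs = T * V * alpha**2 / kappa_T   # should equal C_P - C_V = R for an ideal gas
print(sp.simplify(alpha), sp.simplify(kappa_T), sp.simplify(lhs))   # 1/T, 1/P, R
```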

4.6. Condition for equilibrium


“Equilibrium” - state of balance
Conditions for equilibrium
1. Thermal equilibrium: Two systems are in thermal equilibrium when they have the
same temperature.
2. Mechanical equilibrium: This occurs when the system experiences no unbalanced
forces.
3. Chemical equilibrium: Occurs when no chemical reaction occurs
So, Thermodynamic (complete) equilibrium: Attained when the system experiences
thermal, mechanical and chemical equilibriums.
Consider an isolated system composed of two subsystems which are separated from each
other by a rigid, diathermal wall, i.e. the two systems can only exchange energy in the
form of heat. Thus, for the two systems we have dV = dN = 0. Let a net amount of heat flow from
system 1 to system 2, and let dQ1 be the heat added to system 1 and dQ2 the heat added
to system 2. Since the total system is isolated,
dQ2 = − dQ1
The change in entropy of the total system is
dS = dS1 + dS2 = dQ1/T1 + dQ2/T2 = dQ1 (1/T1 − 1/T2)

For a system in equilibrium, dS = 0, which requires

T1 = T2. Thus, for a system to be in equilibrium with respect to energy exchange, without
particle exchange or volume change, T1 must equal T2.
Show that

4.7. Thermodynamics of phase transitions


What are the States of Matter?

5. Simple Applications of Statistical mechanics
The equipartition energy theorem
Equipartition of energy theorem says that energy is distributed on average equally among all
energetically accessible degrees of freedom, such as those associated with molecular translations,
molecular rotations, bond vibrations, and electronic motion. According to the law of
equipartition of energy, for any dynamic system in thermal equilibrium, the total energy for the
system is equally divided among the degrees of freedom. The kinetic energy of a single molecule
along the x-axis, the y-axis, and the z-axis is given as

(1/2) m v_x², along the x-axis
(1/2) m v_y², along the y-axis
(1/2) m v_z², along the z-axis

When the gas is at thermal equilibrium, the average kinetic energy is

⟨(1/2) m v_x²⟩ = (1/2) k T, along the x-axis
⟨(1/2) m v_y²⟩ = (1/2) k T, along the y-axis
⟨(1/2) m v_z²⟩ = (1/2) k T, along the z-axis

According to the kinetic theory of gases, the average translational kinetic energy of a molecule is given by

⟨ε⟩ = (3/2) k T

Degrees of freedom: the number of independent ways in which a molecule can move or store energy.
A molecule free to move in the x, y and z directions has 3 translational degrees of freedom.

The mean energy of a molecule is Ē = (f/2) k T, where f is the number of degrees of freedom.

Monatomic gas molecules have 3 degrees of freedom.

Diatomic gas molecules have 5 degrees of freedom.

Triatomic gas molecules have 6 degrees of freedom when non-linear.

Triatomic gas molecules have 7 degrees of freedom when linear.
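A small illustrative sketch (my own) that turns the degree-of-freedom counts above into mean energies per molecule and molar heat capacities via Ē = (f/2)kT; the temperature is an arbitrary example value.

```python
R = 8.314            # gas constant, J mol^-1 K^-1
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # example temperature, K

dof = {"monatomic": 3, "diatomic": 5, "triatomic (non-linear)": 6, "triatomic (linear)": 7}

for gas, f in dof.items():
    E_per_molecule = 0.5 * f * k_B * T   # equipartition: (f/2) k T per molecule
    c_V = 0.5 * f * R                    # molar heat capacity at constant volume
    print(f"{gas}: f={f}, E={E_per_molecule:.2e} J, c_V={c_V:.1f} J mol^-1 K^-1")
```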

5.1. Partition function and its properties
All important thermodynamic quantities associated with a system can be expressed in terms
of its partition function Z. Let us investigate how the partition function is related to
thermodynamic quantities.

Z ≡ Σ_r e^(−βE_r)     (1)

For a system A in thermal contact with a heat reservoir A′, the partition function of system A is written as

Z = Σ_r e^(−βE_r)     (2)

the partition function of system A′ is written as

Z′ = Σ_s e^(−βE′_s)     (3)

and the partition function of the combined system A⁰ = A + A′ takes the form Z⁰ = Z Z′.

It is clear that we can perform statistical thermodynamic calculations using the
partition function Z instead of the more direct approach in which we use the density of
states Ω. The partition function approach is advantageous because it is an unrestricted
sum of Boltzmann factors over all accessible states, irrespective of their energy, whereas
the density of states is a restricted sum over all states whose energies lie in some narrow
range. Thus, it is generally easier to derive statistical thermodynamic results using Z
rather than Ω; put another way, it is easier to calculate virtually any piece of statistical
information from the partition function.
Remarks on the partition function (z):

(4)

(5)
Systems in the canonical ensemble are distributed over their accessible states in accordance
with the canonical distribution. The probability of finding the system in state r is

P_r = e^(−βE_r) / Z     (6)

The mean energy is then given by

Ē = Σ_r P_r E_r = (1/Z) Σ_r E_r e^(−βE_r)     (7)

Since Σ_r E_r e^(−βE_r) = −∂Z/∂β, the mean energy can then be written as

Ē = −∂(ln Z)/∂β     (8)
If a system A consists of two weakly interacting parts A′ and A′′ which are in states r and s
respectively, then the state of A can be specified by the pair (r, s) with corresponding energy E_rs
given by:

E_rs = E′_r + E′′_s     (9)

The partition function of the total system A is then:

Z = Σ_(r,s) e^(−β(E′_r + E′′_s)) = ( Σ_r e^(−βE′_r) ) ( Σ_s e^(−βE′′_s) )     (10)

Z = Z′ Z′′     (11)

Therefore, for a system consisting of distinct non-interacting or weakly interacting parts, the partition
function is simply the product of the partition functions of the parts.
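The factorization Z = Z′Z′′ can be checked numerically on a toy example; below, two non-interacting two-level subsystems are combined, with arbitrary level energies and β = 1 (my own illustration).

```python
import math
from itertools import product

beta = 1.0
E1 = [0.0, 1.0]      # energy levels of part A'  (example values)
E2 = [0.0, 2.5]      # energy levels of part A'' (example values)

Z1 = sum(math.exp(-beta * e) for e in E1)
Z2 = sum(math.exp(-beta * e) for e in E2)

# partition function of the combined system: sum over all pairs of states (r, s)
Z_total = sum(math.exp(-beta * (e1 + e2)) for e1, e2 in product(E1, E2))

print(Z_total, Z1 * Z2)   # the two numbers agree
```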
5.3. Gibbs paradox
The Gibbs paradox comes in two main forms: the thermodynamic version and the statistical
mechanical version. In the statistical version, the entropy obtained from Z = ζ^N for the classical
ideal gas is

S = N k [ ln V + (3/2) ln T + σ ]

where σ is a constant independent of N, V and T. This expression is not extensive: doubling the
system (N → 2N, V → 2V) does not simply double S.

This equation is not a correct representation of the entropy of the system. Gibbs' paradox is a
consequence of the following fact: before the removal of the partition a molecule of each
subsystem could only be found in a volume V′, while removing the partition allows the molecules to
diffuse throughout the whole volume V = 2V′, which increases the entropy of the
whole system, since the process is irreversible. The correct partition function Z, which does
take into account the essential indistinguishability of the molecules and resolves the difficulty of
the Gibbs paradox, is

Z = ζ^N / N!

But then

ln Z = N ln ζ − ln N!

Using Stirling's formula, ln N! ≈ N ln N − N,

ln Z = N ln ζ − N ln N + N

So the correct formula for the entropy, as an extensive

parameter, is  S = N k [ ln(V/N) + (3/2) ln T + σ + 1 ]
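A numerical illustration (my own, with the constant σ and units chosen arbitrarily) of why the N! correction matters: without it, doubling N and V does not double S; with it, S is extensive.

```python
import math

def S_wrong(N, V, T, sigma=0.0, k=1.0):
    """Entropy from Z = zeta**N (no N! correction): not extensive."""
    return N * k * (math.log(V) + 1.5 * math.log(T) + sigma)

def S_correct(N, V, T, sigma=0.0, k=1.0):
    """Entropy from Z = zeta**N / N!: extensive."""
    return N * k * (math.log(V / N) + 1.5 * math.log(T) + sigma + 1.0)

N, V, T = 1000, 50.0, 300.0   # example values in arbitrary units
print(S_wrong(2*N, 2*V, T) - 2 * S_wrong(N, V, T))     # = 2*N*k*ln(2): spurious increase
print(S_correct(2*N, 2*V, T) - 2 * S_correct(N, V, T)) # = 0.0: extensive
```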

5.4. Validity of the classical approximation


An approximate criterion for the validity of this classical description can be obtained by
appealing to the Heisenberg uncertainty principle,

Δq Δp ≳ ℏ

This relates the uncertainties Δq and Δp introduced by quantum effects in any attempt at a
simultaneous specification of a position q and corresponding momentum p of a particle. Consider
a system of gas molecules with mean momentum p̄ per molecule and mean separation R̄ between
molecules. By the Heisenberg uncertainty principle, the motion of the molecules can be
described by classical mechanics if

λ̄ ≪ R̄

where λ̄ is the mean de Broglie wavelength, which is defined as

λ̄ = h/p̄

Assume each molecule in the system lies at the center of a little cube of side R̄, such that the
whole volume of the system is subdivided into N cells of volume R̄³.

Hence the mean intermolecular separation R̄ is approximately given by R̄ ≈ (V/N)^(1/3). The mean
energy of a molecule in the gas at temperature T is

p̄²/(2m) ≈ (3/2) k T,  so  p̄ ≈ (3 m k T)^(1/2)

Hence an approximate criterion for the validity of the classical approximation becomes:

λ̄ = h/(3 m k T)^(1/2) ≪ (V/N)^(1/3)
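A numerical check of the criterion for helium at room temperature and 1 atm (all input values assumed for illustration):

```python
import math

h   = 6.626e-34        # Planck constant, J s
k_B = 1.381e-23        # Boltzmann constant, J/K
m   = 6.646e-27        # mass of a helium atom, kg
T   = 300.0            # temperature, K
P   = 1.013e5          # pressure, Pa

n = P / (k_B * T)                        # number density N/V
R_bar = n ** (-1.0 / 3.0)                # mean intermolecular separation (V/N)^(1/3)
p_bar = math.sqrt(3 * m * k_B * T)       # mean momentum from (3/2)kT kinetic energy
lam   = h / p_bar                        # mean de Broglie wavelength

print(lam, R_bar, lam / R_bar)   # lam ~ 7e-11 m << R_bar ~ 3e-9 m -> classical description valid
```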

5.5. Proof of equipartition

5.6. Simple applications
Mean kinetic energy of a molecule in a gas: Consider a system of gas at temperature T.
Consider a molecule in the system which has mass m and center-of-mass momentum p = mv. The
translational kinetic energy of the molecule is then

ε = p²/(2m) = (1/(2m)) (p_x² + p_y² + p_z²)

According to the law of equipartition of energy, for any dynamic system in thermal equilibrium
we can write the mean kinetic energy as

⟨ε⟩ = 3 × (1/2) k T = (3/2) k T

For an ideal monatomic gas the entire energy is kinetic, so the mean energy per mole of gas
is simply

Ē = N_A (3/2) k T = (3/2) R T

The molar specific heat at constant volume becomes

c_V = (∂Ē/∂T)_V = (3/2) R

Brownian motion: Suppose a particle of mass m is immersed in a liquid at
temperature T. What can we say about the motion of this particle? Let us consider only the x-
component of the center-of-mass velocity of the particle. By symmetry, we can
write ⟨v_x⟩ = 0.
By the equipartition theorem we have:

(1/2) m ⟨v_x²⟩ = (1/2) k T,  so  ⟨v_x²⟩ = k T/m

This shows that for a particle of large mass m the dispersion in velocity is negligible and
hence the particle appears to be at rest. On the other hand, a particle of small mass exhibits an
appreciable dispersion in velocity, and the velocity fluctuations can be clearly observed. Thus, a small
particle immersed in a liquid undergoes perpetual motion in a random manner. This
phenomenon was first observed by Brown and is hence called "Brownian motion."
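A numerical illustration (particle masses assumed for the example) of how the velocity dispersion √(kT/m) shrinks with mass:

```python
import math

k_B = 1.381e-23   # Boltzmann constant, J/K
T = 300.0         # temperature, K

for label, m in [("gas molecule (~5e-26 kg)", 5e-26),
                 ("pollen grain (~1e-15 kg)", 1e-15),
                 ("dust speck   (~1e-9 kg) ", 1e-9)]:
    v_rms_x = math.sqrt(k_B * T / m)   # sqrt(<v_x^2>) from equipartition
    print(f"{label}: {v_rms_x:.3e} m/s")
```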
Harmonic oscillator: The energy of a one-dimensional harmonic oscillator is given by

E = p²/(2m) + (1/2) κ x²

Since both terms on the right-hand side are quadratic, the equipartition theorem yields:

Ē = 2 × (1/2) k T = k T

In quantum mechanics the possible energy levels of the harmonic oscillator are

E_n = (n + 1/2) ħω,  n = 0, 1, 2, …

and the mean energy of the oscillator at temperature T is

Ē = ħω/2 + ħω/(e^(βħω) − 1)     (3)

Case kT ≫ ħω: this is the case for a system at high temperature. In this case the thermal energy kT is large
compared to the separation ħω between energy levels. This is also the case where the classical
approximation is valid. By Taylor's series we can write

e^(βħω) ≈ 1 + βħω

Eqn. (3) then becomes

Ē ≈ ħω/2 + 1/β ≈ k T

in agreement with the classical result.

Case kT ≪ ħω: in this case the system is at low temperature, and hence the separation between energy levels is
much larger than the thermal energy kT. Thus, the concept of discrete energy levels is
important, and the classical approximation is not valid. Since e^(βħω) ≫ 1,

Ē ≈ ħω/2 + ħω e^(−βħω)     (5)

Eqn. (5) shows that as T → 0, the mean energy of the system Ē approaches the zero-point (ground-level)
energy ħω/2.
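A numerical sketch of Ē(T) for a single oscillator (the level spacing ħω is an arbitrary example value), showing the two limits above:

```python
import math

hbar_omega = 1.0e-21   # level spacing, J (example value, ~72 K in temperature units)
k_B = 1.381e-23        # Boltzmann constant, J/K

def mean_energy(T):
    """Quantum mean energy of one harmonic oscillator at temperature T."""
    x = hbar_omega / (k_B * T)
    return hbar_omega / 2 + hbar_omega / math.expm1(x)

for T in [1.0, 10.0, 100.0, 1e4, 1e6]:      # from kT << hbar*omega to kT >> hbar*omega
    print(T, mean_energy(T), k_B * T)       # high T: mean energy approaches kT
print(hbar_omega / 2)                       # low T limit: zero-point energy
```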

5.7. Specific heat of solids


Consider a mole of a simple solid containing N_A atoms (Avogadro's number).
The atoms are free to vibrate about their equilibrium positions (lattice vibrations). Each atom is
specified by three position coordinates and three momentum coordinates, so the energy is

E = Σ_i [ p_i²/(2m) + (1/2) κ_i q_i² ],  i = 1, 2, …, 3N_A

where q_i measures the displacement from equilibrium and κ_i is an effective spring constant.

If the system is at high temperature, where classical mechanics is applicable, the mean energy
value of each independent quadratic term is equal to (1/2) k T, so we can write for the mean energy

Ē = 3N_A × 2 × (1/2) k T = 3 N_A k T = 3 R T

But C_V = (∂Ē/∂T)_V, where C_V is the molar specific heat at constant volume, so

C_V = 3 N_A k = 3 R ≈ 25 J mol⁻¹ K⁻¹

This implies that the molar specific heat is the same for all simple solids at higher temperature; this
is called the Dulong-Petit law. However, the third law
of thermodynamics requires that C_V must approach zero as T approaches zero. Thus, the result
above does not hold at low temperature. The behavior of C_V at all temperatures can be approximated
by assuming all atoms in the solid vibrate with the same angular frequency ω, hence κ_i
is given by κ_i = m ω²
for each atom, and a mole of solid can be considered as an assembly of 3N_A independent
one-dimensional harmonic oscillators. Thus, the total mean energy is given by 3N_A times that of a
single oscillator:

Ē = 3 N_A [ ħω/2 + ħω/(e^(βħω) − 1) ]

and the molar specific heat becomes

C_V = (∂Ē/∂T)_V = 3 R (θ_E/T)² e^(θ_E/T) / (e^(θ_E/T) − 1)²

where θ_E = ħω/k is the Einstein temperature.
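A short sketch of the Einstein heat capacity (the Einstein temperature is an arbitrary example value), showing the approach to the Dulong-Petit value 3R at high T and to 0 at low T:

```python
import math

R = 8.314          # gas constant, J mol^-1 K^-1
theta_E = 240.0    # Einstein temperature, K (example value)

def c_V_einstein(T):
    """Einstein-model molar heat capacity at temperature T."""
    x = theta_E / T
    return 3 * R * x**2 * math.exp(x) / math.expm1(x)**2

for T in [10, 50, 100, 300, 1000]:
    print(T, c_V_einstein(T))
print(3 * R)   # Dulong-Petit limit, ~24.9 J mol^-1 K^-1
```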

5.8. General calculation of magnetism


Consider a system of N non-interacting atoms at absolute temperature T placed in an external
magnetic field. If the total magnetic field on an atom is H, say along z direction, the magnetic
energy of the atom is given by:

ε = −μ·H = −μ_z H

where μ is the magnetic moment of the atom, which is proportional to the angular momentum J of
the atom and is given by μ = g μ_B J, where g is the g-factor and μ_B the Bohr magneton.

5.9. Maxwell’s velocity distribution

5.10. Number of molecule striking a surface
For a system of dilute gas enclosed in a container, we want to estimate the number of
molecules that strike a unit area of the wall of the container per unit time. Consider an
element of area dA of the wall of the container, and consider molecules with velocity
between v and v + dv. Those molecules travel a displacement v dt in an infinitesimal
time dt. Let the velocity of the molecules make an angle θ with the z-axis, where the z-axis is chosen
to be normal to the wall. Hence only the molecules which lie within the infinitesimal
cylinder of volume dA v dt cos θ will strike the area dA within the time interval dt, where dA is the
cross-sectional area of the cylinder and v dt cos θ is the length of the cylinder. The number of
molecules per unit volume with velocity between v and v + dv is f(v) d³v. Hence the
number of molecules which strike the area dA of the wall in time dt is given by

[f(v) d³v] [dA v cos θ dt]     (1)

Dividing eqn. (1) by the area dA and the time interval dt yields the number of molecules with
velocity between v and v + dv which strike a unit area of the wall per unit time,

Φ(v) d³v = f(v) v cos θ d³v

The total number of molecules which strike a unit area of the wall per unit time, Φ₀, can
be obtained by summing over all values of v such that v_z > 0:

Φ₀ = ∫ f(v) v cos θ d³v  (over v_z > 0)

where, in spherical coordinates, d³v = v² dv sin θ dθ dφ, so

Φ₀ = ∫ dv ∫ dθ ∫ dφ  f(v) v³ sin θ cos θ  (v from 0 to ∞, θ from 0 to π/2, φ from 0 to 2π)

The integration over φ gives 2π, while the integral over θ yields the value 1/2.
Hence Φ₀ = π ∫ f(v) v³ dv.
This can be expressed in terms of the mean speed, which is given by

v̄ = (1/n) ∫ f(v) v d³v = (4π/n) ∫ f(v) v³ dv

since the integration over the angles θ and φ is just the total solid angle 4π about a
point. It can therefore also be written

Φ₀ = (1/4) n v̄

From the equation of state for an ideal gas,

p̄ V = N k T,  so  n = N/V = p̄/(k T)

where p̄ is the mean pressure. Using the Maxwell-distribution result v̄ = (8kT/(πm))^(1/2),

Φ₀ = (1/4) n v̄ = (p̄/(k T)) (1/4) (8 k T/(π m))^(1/2)

Φ₀ = p̄ / (2π m k T)^(1/2)

Therefore Φ₀ is directly proportional to p̄.
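A numerical illustration of Φ₀ = (1/4) n v̄ = p̄/√(2πmkT) for nitrogen at room temperature and 1 atm (input values assumed):

```python
import math

k_B = 1.381e-23     # Boltzmann constant, J/K
m   = 4.65e-26      # mass of an N2 molecule, kg
T   = 300.0         # temperature, K
p   = 1.013e5       # pressure, Pa

n     = p / (k_B * T)                           # number density
v_bar = math.sqrt(8 * k_B * T / (math.pi * m))  # mean speed from the Maxwell distribution
flux1 = 0.25 * n * v_bar                        # (1/4) n v_bar
flux2 = p / math.sqrt(2 * math.pi * m * k_B * T)

print(flux1, flux2)   # both ~ 2.9e27 molecules per m^2 per second
```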
5.11. Effusion
Suppose a small hole is made in the wall of a container. Let the hole be sufficiently small
that the disturbance of the equilibrium condition inside the container due to the hole
is negligible. In this case the number of molecules which emerge through the hole is the
same as the number of molecules which would strike the area occupied by the hole. This
process of molecules emerging through such a small hole is called effusion. The number
of molecules with velocity in the range between v and v + dv which emerge through a small hole of
area A per unit time is

A Φ(v) d³v = A f(v) v cos θ d³v

Consider a system of gas enclosed in a container which is divided into two parts by a
partition containing a small hole. Let each part of the container be maintained at a different
temperature, T1 and T2.

Fig. 5.12: A container divided into two parts by a partition containing a small hole. The
gas in the two parts is at different temperatures and pressures.
For a sufficiently small hole, where effusion can take place, the equilibrium condition requires
that the mass of gas on each side remain constant with time. That is, the number of
molecules which pass per second through the hole in both directions must be the same.
Thus we can write at equilibrium

n1 v̄1 = n2 v̄2

Since the mean speed (and, similarly, the most probable speed) of the molecules varies as √T, and
n = p̄/(kT),

p1/√T1 = p2/√T2

This means that even at equilibrium the pressures of the two parts are not equal; rather, the pressure
must be higher in the part of the container which is maintained at the higher temperature.
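A quick numerical check (temperatures and the reference pressure chosen arbitrarily) of the effusion equilibrium condition p1/√T1 = p2/√T2:

```python
import math

T1, T2 = 300.0, 600.0     # temperatures of the two parts, K (example values)
p1 = 1.0e5                # pressure in part 1, Pa (example value)

p2 = p1 * math.sqrt(T2 / T1)   # pressure required in the hotter part at effusion equilibrium
print(p2)                      # ~1.41e5 Pa: higher pressure on the hotter side
print(p1 / math.sqrt(T1), p2 / math.sqrt(T2))   # equal, as required
```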
5.12. Pressure and momentum
Consider a system of gas enclosed in a container. We want to calculate the mean force F
exerted by the gas on a small element of area dA of the container wall. This can be
obtained by calculating the mean net momentum delivered to the element of the wall per
unit time by the impinging molecules. If we consider an element of area dA lying inside
the gas at an infinitesimal distance from the wall, the mean rate of change of momentum of
the wall element can be obtained from the difference between the mean molecular
momentum crossing dA per unit time from left to right and that crossing from right to left.

The mean number of molecules with velocity between v and v + dv which cross the
area dA within a time interval dt is, from (1),

f(v) d³v dA v_z dt

The mean momentum transported across dA per unit time by molecules whose velocities
lie in this range can be obtained by multiplying this expression by p = m v, the
momentum of each molecule.

6. Quantum Statistics of Ideal Gases and System of Interaction Particles
6.1. Formulation of statistical problems

(1)

(2)

(3)

(4)

(5)

6.3. The quantum distribution functions

Fermi-Dirac distribution

n̄_r = 1 / (e^(β(ε_r − μ)) + 1)

is called the Fermi-Dirac distribution, where ε_r is the energy of single-particle state r and μ is the
chemical potential.
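A small sketch (my own, in illustrative units with energy measured relative to the chemical potential) of the Fermi-Dirac and Bose-Einstein mean occupation numbers:

```python
import math

def n_FD(eps, mu, T, k_B=1.0):
    """Fermi-Dirac mean occupation number of a state of energy eps."""
    return 1.0 / (math.exp((eps - mu) / (k_B * T)) + 1.0)

def n_BE(eps, mu, T, k_B=1.0):
    """Bose-Einstein mean occupation number (requires eps > mu)."""
    return 1.0 / (math.exp((eps - mu) / (k_B * T)) - 1.0)

T, mu = 1.0, 0.0
for eps in [0.5, 1.0, 2.0, 5.0]:
    print(eps, n_FD(eps, mu, T), n_BE(eps, mu, T))
# For eps - mu >> k_B*T both tend to the classical Boltzmann factor exp(-(eps-mu)/(k_B*T)).
```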


6.8. Quantum statistics in the classical limit
The quantum statistics of ideal gases can be summarized by the mean occupation numbers

n̄_r = 1 / (e^(α + βε_r) ± 1)

with the upper sign for FD and the lower sign for BE statistics. The parameter α is to be determined
by the condition

Σ_r n̄_r = N

The partition function of an ideal gas obeying either of the two statistics satisfies

ln Z = α N ± Σ_r ln(1 ± e^(−α − βε_r))

Let us now calculate the magnitude of e^(−α) in some limiting cases.
Case 1: a gas at a given temperature when its concentration is made sufficiently low. Then

n̄_r ≪ 1 for all states r, which requires e^(α + βε_r) ≫ 1.

Case 2: Consider a gas with some fixed number of particles when its temperature is made
sufficiently large, i.e. β is made sufficiently small.
As β decreases, the number of terms with an appreciable value of e^(−βε_r) increases, so more states
contribute significantly to the sum Σ_r n̄_r = N; to keep this sum fixed, e^(−α) must become small, so
that again n̄_r ≪ 1.

In either case, therefore, n̄_r ≪ 1 in the classical limit, and

n̄_r ≈ e^(−α) e^(−βε_r)

The parameter α then follows from

Σ_r n̄_r = e^(−α) Σ_r e^(−βε_r) = N,  so  n̄_r = N e^(−βε_r) / Σ_r e^(−βε_r)

which is just the Maxwell-Boltzmann distribution. In this limit the partition function reduces to

ln Z = N ln( Σ_r e^(−βε_r) ) − N ln N + N

From the Stirling approximation, ln N! ≈ N ln N − N, this is equivalent to

Z = ζ^N / N!,  where ζ = Σ_r e^(−βε_r)

which is the corrected classical (Maxwell-Boltzmann) partition function.
