SciPost Physics Lecture Notes Submission

Quantum Thermodynamics
Patrick P. Potts

Department of Physics and Swiss Nanoscience Institute, University of Basel,


Klingelbergstrasse 82, 4056 Basel, Switzerland

patrick.potts@unibas.ch

arXiv:2406.19206v1 [quant-ph] 27 Jun 2024

Abstract

The theory of quantum thermodynamics investigates how the concepts of heat, work, and temperature can be carried over to the quantum realm, where fluctuations and randomness are fundamentally unavoidable. These lecture notes provide an introduction to the thermodynamics of small quantum systems. It is illustrated how the laws of thermodynamics emerge from quantum theory and how open quantum systems can be modeled by Markovian master equations. Quantum systems that are designed to perform a certain task, such as cooling or generating entanglement, are considered. Finally, the effect of fluctuations on the thermodynamic description is discussed.

Contents
1 Introduction 3
1.1 Basic concepts 4
1.1.1 Linear algebra 4
1.1.2 The density matrix 6
1.1.3 Second quantization 8
1.1.4 Exact and inexact differentials 9
1.2 Macroscopic thermodynamics 10
1.3 Information theory 14

2 Thermodynamic equilibrium 17
2.1 The grand-canonical ensemble 17
2.1.1 Subsystem of an isolated system 18
2.1.2 Jaynes’ maximum entropy principle 20
2.1.3 Complete passivity 21
2.2 Equivalence of ensembles in the thermodynamic limit 22

3 The laws of thermodynamics 23


3.1 The general scenario 23
3.2 Entropy production 24
3.3 The first law of thermodynamics 26
3.4 The second law of thermodynamics 26
3.5 The zeroth law of thermodynamics 27
3.6 The third law of thermodynamics 27

4 Markovian master equations 28


4.1 Nakajima-Zwanzig superoperators 29


4.2 Born-Markov approximations 30


4.3 Example: equilibration of a quantum dot 31
4.3.1 The master equation 32
4.3.2 Solving the master equation 34
4.3.3 Energy flows and the first law 35
4.3.4 Entropy and the second law 35
4.4 Obtaining GKLS form 36
4.4.1 The secular approximation 36
4.4.2 The singular-coupling limit 38
4.4.3 The unified GKLS master equation 40
4.5 Heat and work in quantum master equations 40
4.6 Example: A double quantum dot 41
4.6.1 The secular approximation 43
4.6.2 The singular-coupling limit 43

5 Quantum thermal machines 44


5.1 A quantum dot heat engine 44
5.1.1 Solving the master equation 45
5.1.2 The first law 46
5.1.3 The second law 47
5.1.4 Refrigeration 48
5.2 Entanglement generator 50
5.2.1 Entanglement 50
5.2.2 The master equation 51
5.3 Absorption refrigerator 54
5.3.1 The master equation 55
5.3.2 Figures of merit 57
5.3.3 Perturbation theory 58
5.3.4 Coherence-enhanced cooling 59

6 Fluctuations 60
6.1 Fluctuation theorems for an isolated system 61
6.1.1 The two-point measurement scheme 61
6.1.2 The backward experiment 62
6.1.3 Fluctuation theorems 63
6.2 Fluctuation theorems for the general scenario 64
6.2.1 Forward trajectories 65
6.2.2 Backward trajectories 66
6.2.3 Fluctuation theorems 67
6.3 Full counting statistics 67
6.3.1 Counting particles 68
6.3.2 Example: transport through a quantum dot 70
6.3.3 Counting heat and work 72
6.4 The thermodynamic uncertainty relation 72
6.4.1 Current and current noise 73
6.4.2 Application: heat engine 73

7 Summary 74

8 Outlook 74
8.1 The thermodynamics of information 75


8.2 Quantum thermo-kinetic uncertainty relation 75


8.3 Finite-size environments 76
8.4 Dissipative phase transitions 76
8.5 Thermodynamics of multiple conserved quantities 76

References 77

1 Introduction
By investigating concepts such as heat, work, and temperature, the theory of thermodynamics
was a driving force in the industrial revolution, enabling the improvement and development
of technologies that reshaped society such as steam engines and refrigerators [1]. Quantum
thermodynamics investigates heat, work, temperature, as well as related concepts in quantum
systems. In analogy to macroscopic thermodynamics, this endeavor promises to enable the de-
velopment and improvement of novel quantum- and nano-technologies. Furthermore, a good
understanding of the thermodynamics of quantum systems is key for keeping the energetic
footprint of any scalable quantum technology in check [2].
As the concepts investigated by quantum thermodynamics are very general, the theory is
of relevance for essentially all scenarios involving open quantum systems. This makes the
field extremely broad, including well established topics such as thermoelectricity [3, 4], inves-
tigating how an electrical current arises from a temperature gradient, as well as very recent
investigations into the energetics of single qubit gates [5]. The broad nature of the field im-
plies that the quantum thermodynamics community brings together people with very different
backgrounds.
These lecture notes are meant to provide an introduction to this diverse and fast-growing
field, discussing fundamental concepts and illustrating them with simple examples. After a
short introduction to basic concepts, macroscopic thermodynamics and information theory
will be reviewed in Sec. 1. Section 2 will address the topic of thermodynamic equilibrium.
This will allow for understanding why a system may often be described by very few parame-
ters, such as temperature and chemical potential, and it will give you a physical understanding
of these fundamental parameters. Section 3 will then discuss how the laws of thermodynam-
ics emerge from quantum mechanics, connecting these very general laws to the microscopic
behavior of quantum systems. Applying the laws of thermodynamics to quantum systems will
allow you to make general predictions on what happens when small systems are placed into
contact with thermal reservoirs. This may improve your physical intuition for how systems in
and out of equilibrium behave, which is an extremely valuable skill as a physicist. In Sec. 4, we
will study quantum master equations as a tool to describe open quantum systems. This is an
extremely useful and versatile tool that can be employed to model many physical systems, e.g.,
quantum dots, NV centers, and optical cavities. Section 5 will then investigate quantum ther-
mal machines, exploring some of the tasks that can be achieved with open quantum systems.
Such tasks include the production of work, refrigeration, and the creation of entanglement.
Finally, in Sec. 6, we will consider fluctuations which become important in nano-scale systems.
You will learn how thermodynamic considerations may result in equalities and inequalities that
can be understood as extensions of the laws of thermodynamics, determining the behavior of
fluctuations around average values.


Further reading: In 2022, a textbook on quantum stochastic thermodynamics by Strasberg
was published [6]. While similar in spirit to the material discussed in this course, the book has
a stronger focus on fluctuations which are only covered in Sec. 6 in this course. Another good
resource is the book published in 2019 by Deffner and Campbell [7]. Compared to these lecture
notes, it has a stronger focus on information-theoretic concepts and thermodynamic cycles. A
slightly older textbook on the topic was published in 2009 by Gemmer, Michel, and Mahler [8].
While being a valuable resource for many of the concepts, it predates a number of important
works that are central to how the topic is presented here. In addition, in 2019 a book giving
a snapshot of the field was published, providing numerous short reviews on different topics
by over 100 contributing authors [9]. Furthermore, a number of excellent reviews focusing
on different aspects of quantum thermodynamics are provided by Refs. [3, 4, 10–14]. These
resources are complemented by the focus issue in Ref. [15].

1.1 Basic concepts


In this section, we introduce some basic concepts that are used throughout the lecture notes.
We set ħ = 1 throughout. Note that this section should mainly act as a reminder (with the
possible exception of Sec. 1.3) and to set the notation. If a concept is completely new to
you, I suggest reading up on it in the respective references, which provide a far more detailed
introduction.

1.1.1 Linear algebra


Because linear algebra provides the mathematical description of quantum mechanics, we re-
view some basic concepts here. In a format accessible for physicists, a more detailed introduc-
tion can be found in most textbooks on quantum mechanics, see, e.g., Sec. 1 in the book by
Shankar [16] or Sec. 2.1 in the book by Nielsen and Chuang [17].

Vectors in Hilbert spaces: A (pure) quantum state can be described by a vector in a complex
Hilbert space H, which is a vector space equipped with an inner product. Such vectors may
be denoted by
|ψ〉 , |ϕ〉 , a |ψ〉 + b |ϕ〉 , (1)
where a and b are complex numbers. For reasons that will become clear later, we also call a
vector a ket. Any vector can be written using a set of linearly independent basis vectors |j〉 as

|ψ〉 = ∑_j ψ_j |j〉. (2)

Linear operators: Vectors can be manipulated by operators. A linear operator Â : H → H′
maps a vector from a Hilbert space, H, onto another vector, potentially in another Hilbert
space H′, i.e., Â|ψ〉 is a vector in H′ for any |ψ〉 ∈ H. Furthermore, a linear operator obeys

Â(a |ψ〉 + b |ϕ〉) = a Â|ψ〉 + b Â|ϕ〉. (3)

Dual vectors: For each vector, a dual vector (an element of the dual space H∗ ) exists given
by the Hermitian conjugate of the original vector

〈ψ| = |ψ〉† . (4)

A dual vector is also called a bra. The notation introduced in Eqs. (1) and (4) is called Dirac
or bra-ket notation.

4
SciPost Physics Lecture Notes Submission

Inner product: The inner product between two vectors |ψ〉 and |ϕ〉 is denoted by

〈ψ|ϕ〉 ∈ C, (5)

and is a complex number. Note that this makes any dual vector a linear operator mapping
vectors onto the Hilbert space of complex numbers, i.e., 〈ψ| : H → C. With the inner product,
we may define an orthonormal basis | j〉 as one fulfilling

〈i| j 〉 = δi, j , (6)

where δi, j denotes the Kronecker delta. Using Eq. (2), we may express any vector through an
orthonormal basis. The inner product between two vectors may then be evaluated as

〈ψ|ϕ〉 = (∑_j ψ*_j 〈j|)(∑_k ϕ_k |k〉) = ∑_j ψ*_j ϕ_j. (7)

Composite systems: When describing a composite system that is made of multiple subsys-
tems, a tensor product space can be used. In the case of two subsystems (a bi-partite system),
the total Hilbert space is then given by H = HA ⊗ HB , where HA and HB are the Hilbert spaces
describing subsystems A and B, and ⊗ denotes the Kronecker product.
An orthonormal basis can be constructed from orthonormal bases of the subsystems as

|a, b〉 = |a〉 ⊗ |b〉 , (8)

where |a〉 and |b〉 are the basis states of the subsystems.
Sometimes it is necessary to discard one of the subsystems. This is achieved through the
partial trace. For an operator acting on the composite system Ĉ : H → H, the partial trace
over the subsystem B is defined as

Tr_B{Ĉ} = ∑_{a,a′,b} |a〉〈a′| 〈a, b|Ĉ|a′, b〉, (9)

which is an operator that acts on subsystem A. In particular, if we consider a pure quantum
state describing the composite system |Ψ〉, then the partial trace Tr_B{|Ψ〉〈Ψ|} is the reduced
(possibly mixed) state of subsystem A. The reduced state fully describes subsystem A if no
information on subsystem B can be obtained.
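Numerically, the partial trace of Eq. (9) is conveniently evaluated by reshaping the composite operator into a four-index tensor and contracting the two B indices. A minimal sketch, assuming NumPy is available (the function name is ours):

```python
import numpy as np

def partial_trace_B(C, dim_A, dim_B):
    """Trace out subsystem B of an operator C on H_A (x) H_B, Eq. (9)."""
    # Reshape so that C[a, b, a', b'] = <a, b| C |a', b'>.
    C = C.reshape(dim_A, dim_B, dim_A, dim_B)
    # Sum over b = b', leaving an operator on subsystem A.
    return np.einsum('abcb->ac', C)

# Example: for the Bell state |Psi> = (|00> + |11>)/sqrt(2), the reduced
# state of subsystem A is the maximally mixed state 1/2.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(psi, psi.conj())
rho_A = partial_trace_B(rho_AB, 2, 2)
```

This illustrates the statement above: even though the composite state is pure, the reduced state of a subsystem may be mixed.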

Row and column vectors: Every complex vector space of finite dimension n is isomorphic
to Cn . What this means is that as long as we consider finite-dimensional Hilbert spaces, we
may use conventional row and column vectors. For instance, a set of orthonormal basis vectors
may be identified by column vectors with the j -th entry equal to one and all others equal to
zero. General vectors can then be expressed through Eq. (2), i.e.,

|j〉 ≡ (0, ⋯, 0, 1, 0, ⋯)ᵀ (with the 1 in the j-th entry),   〈j| ≡ (0, ⋯, 0, 1, 0, ⋯),
|ψ〉 ≡ (ψ_0, ψ_1, ψ_2, ⋯)ᵀ,   〈ψ| ≡ (ψ*_0, ψ*_1, ψ*_2, ⋯).   (10)
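The identification in Eq. (10) can be made concrete with arrays; a small sketch, assuming NumPy (the dimension and coefficients are our own example):

```python
import numpy as np

# Orthonormal basis kets |j> as columns of the identity matrix, Eq. (10).
dim = 3
kets = [np.eye(dim)[:, j] for j in range(dim)]

# A general ket, its bra (the conjugate transpose, Eq. (4)), and the
# inner product of Eq. (7): <psi|psi> = sum_j |psi_j|^2.
psi = np.array([1.0, 2.0j, -1.0])
bra_psi = psi.conj().T
norm_sq = bra_psi @ psi
```

The orthonormality of Eq. (6) is then simply the statement that distinct columns of the identity matrix have vanishing dot product.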


Matrices and the resolution of the identity: An important operator is the identity operator
1 defined by
1 |ψ〉 = |ψ〉 ∀ |ψ〉 . (11)
Using any orthonormal basis, the identity may be written

1 = ∑_j |j〉〈j|. (12)

This equation is called the resolution of the identity and it is heavily used in deriving various
results in quantum mechanics. With its help, we may express an operator in the basis of |j〉 as

Â = ∑_{j,k} |j〉〈j| Â |k〉〈k| = ∑_{j,k} A_{jk} |j〉〈k|, (13)

where A_{jk} = 〈j| Â |k〉 are the matrix elements of Â. Indeed, with the help of the row and
column vectors in Eq. (10), we may identify

    ⎛ A_00  A_01  ⋯ ⎞          ⎛ A*_00  A*_10  ⋯ ⎞
Â ≡ ⎜ A_10  A_11  ⋯ ⎟ ,    † ≡ ⎜ A*_01  A*_11  ⋯ ⎟ ,    (14)
    ⎝  ⋮     ⋮    ⋱ ⎠          ⎝  ⋮      ⋮    ⋱ ⎠

and we recover the usual prescription for the Hermitian conjugate of matrices.

Superoperators: We may consider operators as vectors in a different vector space, the so-
called Liouville space. The Hilbert-Schmidt inner product between two operators  and B̂
is defined as Tr{† B̂}. With this inner product, Liouville space is itself a Hilbert space. An
operator that acts on a vector in Liouville space is called a superoperator to distinguish it
from operators that act on states |ψ〉. Superoperators thus act on operators the same way
as operators act on states. Throughout these lecture notes, superoperators are denoted by
calligraphic symbols, e.g., L, while operators can be identified by their hat, e.g., Â (with a
few exceptions such as the identity operator 1).
As discussed above, we may write a vector in Liouville space as a column vector. Starting
from the matrix representation of an operator Â, c.f. Eq. (14), this can be achieved by stacking
its columns. Any superoperator may then be written as a matrix. This procedure is called
vectorization (see Ref. [18] for more details) and it can be highly useful when using numerics
to compute the behavior of open quantum systems.
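The column-stacking vectorization described above can be sketched numerically; the key identity is that the superoperator X̂ ↦ ÂX̂B̂ becomes the matrix B̂ᵀ ⊗ Â in Liouville space. A minimal sketch, assuming NumPy (names and dimensions are ours):

```python
import numpy as np

def vec(X):
    """Column-stacking vectorization: operator -> Liouville-space vector."""
    return X.flatten(order='F')  # Fortran order stacks the columns

rng = np.random.default_rng(0)
A, X, B = (rng.standard_normal((3, 3)) for _ in range(3))

# The superoperator X -> A X B as a matrix acting on vec(X).
S = np.kron(B.T, A)
lhs = S @ vec(X)
rhs = vec(A @ X @ B)
```

Since every term of a master equation is of this product form, this identity is what turns the full equation of motion into a single matrix acting on the vectorized density matrix.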

1.1.2 The density matrix


The density matrix describes the state of a quantum system and is thus a central object through-
out these lecture notes and in quantum theory in general. A more detailed introduction can
be found in the book by Nielsen and Chuang [17], see Sec. 2.4.
The state of a quantum system is described by a positive semi-definite operator with unit
trace acting on the Hilbert space H. Positive semi-definite means that all eigenvalues of the
density matrix are larger or equal to zero and unit trace means that the eigenvalues add up to
one. The density matrix may be written as

ρ̂ = ∑_j p_j |ψ_j〉〈ψ_j|, with 〈ψ_j|ψ_j〉 = 1, p_j ≥ 0, ∑_j p_j = 1. (15)

We may interpret the density matrix as the system being in the pure state |ψ_j〉 with probability
p_j. If multiple p_j are non-zero, this describes a scenario of incomplete knowledge.
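A minimal numerical illustration of Eq. (15), assuming NumPy: we build a qubit density matrix from an ensemble of our own choosing and check unit trace and positive semi-definiteness.

```python
import numpy as np

# The system is |0> with probability 3/4 and |+> with probability 1/4
# (an example of our own choosing), mixed as in Eq. (15).
ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.75 * np.outer(ket0, ket0) + 0.25 * np.outer(plus, plus)

# Any valid state must have unit trace and non-negative eigenvalues.
evals = np.linalg.eigvalsh(rho)
```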


Measurements and expectation values: In quantum mechanics, any measurement can be
described using a positive operator valued measure (POVM) [17]. A POVM is a set of positive
semi-definite operators M̂_j that obey ∑_j M̂_j = 1. The index j labels the outcome of the
measurement, which is obtained with probability p_j = Tr{M̂_j ρ̂} for a quantum state ρ̂. Note
that the positive semi-definiteness ensures p_j ≥ 0, since 〈ψ|M̂_j|ψ〉 ≥ 0 for any |ψ〉. The fact
that the M̂_j sum to the identity ensures that ∑_j p_j = 1.
Of particular interest are projective measurements, where M̂_j² = M̂_j, i.e., the operators M̂_j
are projectors. We may create a projective measurement from any operator Â = ∑_j a_j |j〉〈j|
by using the eigenbasis of the operator to define M̂_j = |j〉〈j|. In this case, we say that a
projective measurement of the operator Â gives the value a_j with probability p_j = 〈j|ρ̂|j〉.
The average value of the measurement outcomes is given by

〈Â〉 ≡ Tr{Â ρ̂} = ∑_j a_j p_j. (16)

We also call this the average value of Â. After a projective measurement with outcome a j ,
the system is collapsed into the state | j 〉〈 j |. Repeating the same projective measurement twice
thus necessarily gives the same outcome. While projective measurements are widely used in
quantum theory, they represent an idealization that strictly speaking cannot be implemented
in the laboratory as they would violate the third law of thermodynamics [19].
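The statistics of such a projective measurement can be sketched numerically; the state and eigenvalues below are made-up examples of our own (NumPy assumed):

```python
import numpy as np

# Observable A = sum_j a_j |j><j| measured in its own eigenbasis.
a = np.array([-1.0, 1.0])                   # eigenvalues a_j (our example)
rho = np.array([[0.7, 0.2], [0.2, 0.3]])    # a qubit state (our example)

# Outcome probabilities p_j = <j|rho|j> are the diagonal elements of rho,
# and the average of Eq. (16) is sum_j a_j p_j.
p = np.real(np.diag(rho))
avg = np.sum(a * p)
```

Note that the coherences (off-diagonal elements of ρ̂) do not affect the statistics of a measurement in this basis; they only matter when measuring in a rotated basis.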

Classical mixture vs quantum superposition: Let us consider the toss of a coin, where we
identify with |0〉 the outcome tails and with |1〉 the outcome heads. The outcome of such a
coin toss (before observation) is described by the density matrix

ρ̂ = (|0〉〈0| + |1〉〈1|)/2 = 1/2, (17)

which is proportional to the identity matrix and is also called the maximally mixed state
because each outcome is equally likely. Such a mixture of states is completely classical and merely reflects
a scenario of incomplete knowledge, in this case the outcome of the coin toss.
We now compare the state in Eq. (17) to a quantum superposition provided by the pure
state |+〉 = (|0〉 + |1〉)/√2,

ρ̂ = |+〉〈+| = (|0〉〈0| + |1〉〈1| + |0〉〈1| + |1〉〈0|)/2. (18)
In contrast to the maximally mixed state, this quantum state is pure and we thus have perfect
knowledge of the system. Performing a projective measurement on this state in the basis |±〉
will with certainty result in +. However, the state describes a coherent superposition of the
states |0〉 and |1〉. This coherent superposition is distinguished from a mixture by the off-
diagonal elements |0〉〈1| and |1〉〈0| (also called coherences).
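The difference between the mixture of Eq. (17) and the superposition of Eq. (18) shows up both in the outcome statistics in the |±〉 basis and in the purity Tr{ρ̂²}; a small sketch, assuming NumPy:

```python
import numpy as np

# Maximally mixed state, Eq. (17), vs coherent superposition, Eq. (18).
rho_mix = np.eye(2) / 2
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_plus = np.outer(plus, plus)

# Probability of outcome "+" when measuring in the |+>, |-> basis:
# certain for the pure state, 1/2 for the mixture.
p_plus_pure = plus @ rho_plus @ plus
p_plus_mix = plus @ rho_mix @ plus

# Purity Tr{rho^2}: 1 for a pure state, 1/2 for the maximally mixed state.
purity_pure = np.trace(rho_plus @ rho_plus)
purity_mix = np.trace(rho_mix @ rho_mix)
```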

Time evolution of an isolated system: The time-evolution of a system that is isolated from
its environment is given by

ρ̂(t) = Û(t) ρ̂(0) Û†(t),   Û(t) = T exp(−i ∫₀ᵗ dt′ Ĥ(t′)), (19)

where Ĥ(t) is the Hamiltonian of the system and T denotes the time-ordering operator. The
time-ordered exponential appearing in Eq. (19) can be written as

T exp(−i ∫₀ᵗ dt′ Ĥ(t′)) = lim_{δt→0} e^{−iδt Ĥ(t)} e^{−iδt Ĥ(t−δt)} e^{−iδt Ĥ(t−2δt)} ⋯ e^{−iδt Ĥ(2δt)} e^{−iδt Ĥ(δt)} e^{−iδt Ĥ(0)},
(20)


such that the time argument in the Hamiltonian on the right-hand side increases from the
right to the left of the expression. Each exponential in the product can be understood as the
time-evolution by the infinitesimal time-step δt [20].
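The product formula of Eq. (20) can be checked numerically by multiplying small time steps with later times acting from the left; the two-level Hamiltonian below is a toy example of our own (NumPy assumed):

```python
import numpy as np

def U_step(H, dt):
    """exp(-i dt H) for a Hermitian H, via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * dt * w)) @ V.conj().T

# Toy time-dependent Hamiltonian H(t) = sigma_x + t * sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda t: sx + t * sz

def U_ordered(t, steps=2000):
    """Trotterized time-ordered exponential, Eq. (20)."""
    dt = t / steps
    U = np.eye(2, dtype=complex)
    for n in range(steps):
        U = U_step(H(n * dt), dt) @ U   # later times multiply from the left
    return U

U = U_ordered(1.0)
```

Unitarity of the result is automatic, since each infinitesimal factor is unitary; the discretization error vanishes as the number of steps grows.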
Note that in quantum mechanics, a system that is isolated from its environment is denoted
as a closed system. This can be confusing in the field of quantum thermodynamics because
in thermodynamics, a closed system traditionally refers to a system that can exchange energy
but cannot exchange matter with its environment. To avoid confusion, I use the term isolated
system throughout these lecture notes.

1.1.3 Second quantization


Many scenarios considered in this course feature the transport of electrons or photons through
a quantum system. Such scenarios are most conveniently described using the formalism of
second quantization, see Sec. 1 in the book by Bruus and Flensberg [21] for a more detailed
introduction.
We first introduce the commutator and anti-commutator, which play important roles for
bosons and fermions respectively:

Commutator: [Â, B̂] = ÂB̂ − B̂Â,   Anti-commutator: {Â, B̂} = ÂB̂ + B̂Â. (21)

Bosons: For a single bosonic mode, a central object is the creation operator ↠, which creates
a boson in this mode. Its Hermitian conjugate â denotes the annihilation operator, removing
a boson from the mode. These operators obey the canonical commutation relations

[â, ↠] = 1, [â, â] = [↠, ↠] = 0. (22)

The state without any bosons is denoted the vacuum state |0〉 and by definition it is annihilated
by the annihilation operator, â |0〉 = 0. The state with n bosons (also called a Fock state) can
be written with the help of the creation operator as

|n〉 ≡ (â†)ⁿ/√(n!) |0〉,   〈n|m〉 = δ_{n,m}. (23)
With these definitions, we find the action of the creation and annihilation operators on the
Fock states

â |n〉 = √n |n−1〉,   â† |n〉 = √(n+1) |n+1〉,   â†â |n〉 = n |n〉. (24)

The operator n̂ ≡ ↠â is called the number operator.
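The operators of Eq. (24) have a simple matrix representation once Fock space is truncated at N levels; note that the truncation necessarily spoils [â, â†] = 1 in the highest level. A sketch, assuming NumPy:

```python
import numpy as np

# Truncated (N-level) bosonic annihilation operator: a|n> = sqrt(n)|n-1>,
# so a has sqrt(1), ..., sqrt(N-1) on its first superdiagonal.
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

# Number operator a† a with eigenvalues 0, 1, ..., N-1.
n_op = a.T @ a

# Commutator [a, a†]: equals 1 except in the last (truncated) level.
comm = a @ a.T - a.T @ a
```

The same matrices are what one feeds into a numerical master-equation solver when modeling, e.g., an optical cavity with a photon-number cutoff.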


When dealing with multiple bosonic modes, we introduce subscripts on the annihilation
operators â j . The canonical commutation relations then read

[â_j, â†_k] = δ_{j,k},   [â_j, â_k] = [â†_j, â†_k] = 0. (25)

The number states can be written as

|n₀, n₁, ⋯〉 = |n〉 ≡ ∏_j (â†_j)^{n_j}/√(n_j!) |0〉,   â†_j â_j |n〉 = n_j |n〉. (26)


Fermions: For a single fermionic mode, the creation and annihilation operators are de-
noted as ĉ † and ĉ respectively. In contrast to bosonic operators, they obey canonical anti-
commutation relations

{ĉ, ĉ † } = 1, {ĉ, ĉ} = {ĉ † , ĉ † } = 0. (27)

The latter relation directly implies that ĉ 2 = (ĉ † )2 = 0. A fermionic mode may thus at most
be occupied by a single fermion. This is known as the Pauli exclusion principle. Just like for
bosons, the vacuum state |0〉 is annihilated by the annihilation operator, ĉ |0〉 = 0. The Fock
states can then be written as

|1〉 = ĉ † |0〉 , ĉ |1〉 = |0〉 , ĉ † ĉ |n〉 = n |n〉 , (28)

where n = 0, 1 and ĉ † ĉ is called the number operator just like for bosons. Note that due
to the Pauli exclusion principle, the occupied state |1〉 is annihilated by the creation operator
ĉ † |1〉 = 0, implying a symmetry between creation and annihilation operators. Indeed, we may
call ĉ † the annihilation operator for a hole which has the vacuum state |1〉.
Multiple modes are denoted by an index and obey the canonical anti-commutation rela-
tions
{ĉ_j, ĉ†_k} = δ_{j,k},   {ĉ_j, ĉ_k} = {ĉ†_j, ĉ†_k} = 0. (29)

Note that operators for different modes anti-commute, implying a sign change when exchanging
two fermions. Just as for bosons, the number states can be written as

|n₀, n₁, ⋯〉 = |n〉 ≡ ∏_j (ĉ†_j)^{n_j} |0〉,   ĉ†_j ĉ_j |n〉 = n_j |n〉. (30)

Note however that due to the anti-commutation relations, the order in which the ĉ†_j act on the
vacuum in the last expression matters.
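For a single fermionic mode, ĉ = |0〉〈1| is just a 2×2 matrix, and the relations of Eqs. (27) and (28) can be verified directly. A sketch, assuming NumPy (representing multiple modes faithfully would additionally require Jordan-Wigner sign strings, not shown here):

```python
import numpy as np

# Single fermionic mode in the basis {|0>, |1>}: c maps |1> to |0>.
c = np.array([[0.0, 1.0],
              [0.0, 0.0]])
cd = c.T                       # creation operator c†

# Canonical anti-commutator {c, c†} = 1, Eq. (27).
acomm = c @ cd + cd @ c
```

The nilpotency ĉ² = (ĉ†)² = 0 is the Pauli exclusion principle in matrix form: there is no state with two fermions in one mode.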

1.1.4 Exact and inexact differentials


Exact and inexact differentials play an important role in thermodynamics. They are introduced
in most textbooks on thermodynamics, see for instance App. A of the book by Hentschke [22].
We recall that the differential of a differentiable function f (x , y) of two variables is de-
noted by
d f (x , y) = ∂ x f (x , y)d x + ∂ y f (x , y)d y, (31)
where ∂ x denotes the partial derivative with respect to x , keeping y fixed. The integral of
such a differential along any curve γ is determined solely by its endpoints

∫_γ d f(x, y) = f(x_f, y_f) − f(x_i, y_i), (32)

where the subscript i ( f ) denotes the initial (final) values of the curve γ.
We now consider a differential form

d g (x , y) = a(x , y)d x + b(x , y)d y. (33)

We call this an exact differential if and only if

∂ y a(x , y) = ∂ x b(x , y). (34)

In that case, we may write a(x, y) = ∂_x g(x, y) and b(x, y) = ∂_y g(x, y), and g(x, y) is
a differentiable function. An exact differential is therefore a differential form that can be

written as the differential of a function. We call the differential form in Eq. (33) an inexact
differential if it cannot be written as the differential of a function. In this case, ∂ y a(x , y) ̸=
∂ x b(x , y). Importantly, the integral of an inexact differential generally does depend on the
path of integration and not just its endpoints.
As an example, we consider the exact differential

d z = yd x + x d y, (35)

which is the differential of the function z(x , y) = x y. The differential in Eq. (35) can be
written as the sum of two inexact differentials

d¯q = yd x , d¯w = x d y, (36)

where d¯ denotes an inexact differential. We may now integrate these differentials along two
different curves with the same endpoints

γ1 : (0, 0) → (1, 0) → (1, 1), γ2 : (0, 0) → (0, 1) → (1, 1). (37)

Integrals along these paths may be evaluated using

∫_{γ1} d g(x, y) = ∫₀¹ dx a(x, 0) + ∫₀¹ dy b(1, y),
∫_{γ2} d g(x, y) = ∫₀¹ dy b(0, y) + ∫₀¹ dx a(x, 1),   (38)

where we made use of Eq. (33). A simple calculation results in

∫_{γ2} d¯q = ∫_{γ1} d¯w = 1,   ∫_{γ1} d¯q = ∫_{γ2} d¯w = 0. (39)

As expected, we recover

∫_{γ1} dz = ∫_{γ2} dz = z(1, 1) − z(0, 0) = 1. (40)
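The path dependence of the inexact differentials, and the path independence of their sum, can be reproduced by numerically integrating the differential forms along the two curves of Eq. (37). A sketch, assuming NumPy (names are ours):

```python
import numpy as np

def line_integral(points, f_a, f_b, steps=10001):
    """Integrate a(x,y) dx + b(x,y) dy along straight segments between points."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        t = np.linspace(0.0, 1.0, steps)
        dt = t[1] - t[0]
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        integrand = f_a(x, y) * (x1 - x0) + f_b(x, y) * (y1 - y0)
        total += np.sum((integrand[:-1] + integrand[1:]) / 2) * dt  # trapezoid rule
    return total

# dq = y dx and dw = x dy, written as a(x,y) dx + b(x,y) dy.
a_q, b_q = (lambda x, y: y), (lambda x, y: 0 * x)
a_w, b_w = (lambda x, y: 0 * x), (lambda x, y: x)

gamma1 = [(0, 0), (1, 0), (1, 1)]
gamma2 = [(0, 0), (0, 1), (1, 1)]
```

Evaluating each differential along each path reproduces the path dependence: the individual integrals differ between γ1 and γ2, while the integral of dz = d¯q + d¯w is 1 on both.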

1.2 Macroscopic thermodynamics


Here we briefly summarize some important aspects of the traditional theory of thermodynam-
ics which applies at macroscopic scales, where fluctuations may safely be neglected. There are
many textbooks on this topic, see for instance the books by Callen [23] or Hentschke [22].
Among physical theories, thermodynamics takes a rather peculiar role because it is generally
valid for all systems (at the macroscopic scale) and it does not provide numerical results
but rather general equalities and inequalities which constrain physically allowed processes.
Indeed, the theory of thermodynamics has been called the village witch among physical theo-
ries [12]. The main assumption in thermodynamics is that we can describe a macroscopic body
by very few variables such as temperature T , pressure p, volume V, internal energy U, and
entropy S, see Fig. 1. Given these macroscopic variables, thermodynamics provides relations
and constraints between them.


Figure 1: A gas in a container of volume V, at temperature T, with pressure p. The
internal energy of the gas is given by U and its entropy by S. Using a piston, the
volume of the gas can be changed and work W can be performed on the system.
Energy may also be exchanged with the environment in the form of heat Q. Note
that the temperature TB and pressure pB of the environment may differ from the
temperature and pressure of the system.

The first law of thermodynamics: The first law of thermodynamics may be written in the
form
dU = d¯W − d¯Q. (41)
The first law states that energy is conserved and that energy changes come in two forms:
heat d¯Q and work d¯W . Quite generally, work is a useful form of energy that is mediated
by macroscopic degrees of freedom. This is in contrast to heat, which is a more inaccessible
form of energy mediated by microscopic degrees of freedom. Note that these quantities are
inexact differentials, i.e., they may depend on the path taken when changing thermodynamic
variables. Here we use a sign convention where work is positive when increasing the energy
of the body while heat is positive when it flows out of the body and into the environment.
The first law of thermodynamics prevents a perpetuum mobile of the first kind, i.e., a
machine that produces the work it needs to run itself indefinitely. In the presence of any type
of losses, such a machine needs to produce work out of nothing, violating Eq. (41). An example
would be a lamp that is powered by a solar cell, illuminated only by the lamp itself, see Fig. 2.

Figure 2: Perpetuum mobile of the first kind. A lamp illuminates a solar panel which
in turn powers the lamp. Without any hidden energy source, such a device cannot
work because of unavoidable losses. At long times, we may set dU = 0. Further-
more, we split the work d¯W = d¯WL − d¯WS, where WL denotes the work exerted
on the lamp by the solar panel and WS denotes the work exerted on the solar panel
by the lamp. In the presence of losses within the solar panel (any other loss mech-
anism results in similar conclusions), it provides strictly less work to the lamp than
it received, i.e., d¯WL < d¯WS. By Eq. (41), this would necessitate another source of
energy, either in the form of work or in the form of heat with d¯Q < 0.

Work may come in different forms. For instance, electrical work (also called chemical
work) is produced when charges are moved from a region with lower voltage to a region with


higher voltage, which is what happens when charging a battery. While this type of work will
be the focus of later parts of these lecture notes, here we focus on the traditional example of
mechanical work which is produced by changing the volume of a gas using, e.g., a piston as
illustrated in Fig. 1. In this case, we know from classical mechanics that work is given by d¯W =
−pB d V, where pB denotes the external pressure exerted by the piston. Furthermore, the heat
flowing into the body can be related to the entropy change of the surrounding environment
d¯Q = T_B dS_B, where the subscript B stands for "bath". This relation follows from describing
the environment as being in thermal equilibrium at temperature TB and we will revisit this
later. We stress that in these expressions for heat and work, the quantities of the surrounding
environment appear (note that d V = −d VB ).

The second law of thermodynamics: The second law of thermodynamics is provided by
the Clausius inequality [24]

dΣ = dS + dS_B = dS + ∑_α d¯Q_α/T_α ≥ 0, (42)

where we included multiple reservoirs at respective temperatures T_α that may exchange heat
d¯Q_α with the body. The second law states that the total entropy, which is the sum of the
entropy of the body S and the entropy of the environment S_B = ∑_α S_α, can only increase.
This change in total entropy is called entropy production Σ.

Figure 3: Perpetuum mobile of the second kind. A boat moves across the sea, being
powered by the heat absorbed from the surrounding water. To move in the water, the
boat needs to perform work against the friction force, i.e., d¯W < 0. At long times, we
may neglect any changes in the internal energy or the entropy of the system, i.e., dS =
dU = 0. The work required to move against the friction force must then be provided
by heat coming from the environment, requiring d¯Q < 0. Since the environment
only has a single temperature, this violates the second law of thermodynamics given
in Eq. (42) which requires d¯Q/T ≥ 0.

The second law of thermodynamics prevents a perpetuum mobile of the second kind, i.e.,
a machine that runs indefinitely by extracting heat out of an equilibrium environment. In
this case, there is only one temperature T describing the environment. Furthermore, for long
times we may neglect U as well as S since they remain finite while the produced work −W
increases indefinitely. The first law then requires W = Q which when inserted in Eq. (42)
provides W /T ≥ 0, preventing any production of work. An example would be given by a boat
that moves (performing work against the friction force of the water) by extracting heat from
the surrounding water, see Fig. 3. Note that dissipating work (i.e., turning work into heat,
W = Q ≥ 0) is perfectly allowed by the laws of thermodynamics and indeed is what happens
when an incandescent light bulb is glowing.



Figure 4: Working principle of a heat engine. Heat from a hot reservoir Qh is partly
converted into work W and partly dissipated into the cold reservoir Qc . The dissipa-
tion ensures that the entropy production is positive, as required by the second law
of thermodynamics, and limits the efficiency to be below the Carnot efficiency, see
Eq. (44).

In addition to this example, the second law of thermodynamics provides important limita-
tions on physically allowed processes. Consider a heat engine, where the temperature gradi-
ent between two thermal reservoirs at temperatures Th > Tc is exploited to produce work, see
Fig. 4. Let us further consider a long-time scenario, where any changes in the system may be
neglected compared to the heat and work which continuously increase. The first and second
laws of thermodynamics then read

W = Q_h + Q_c,   Q_h/T_h + Q_c/T_c ≥ 0. (43)

These equations have a few remarkable consequences. First, for W = 0 it is straightforward
to show that −Q_h = Q_c ≥ 0, i.e., heat flows out of the hot and into the cold reservoir. Second,
either Qh , Qc , or both have to be positive, i.e., it is not possible to extract heat simultaneously
from both reservoirs. This is equivalent to the argument preventing a perpetuum mobile of
the second kind. A heat engine, defined by −W > 0 will thus require an influx of heat from
the hot reservoir Qh < 0, providing the necessary energy, as well as a heat outflux into the
cold reservoir Qc > 0 ensuring the increase in entropy. The second law of thermodynamics
then implies the well-known result that the efficiency of such a heat engine is upper bounded
by the Carnot efficiency
η ≡ (−W )/(−Qh ) ≤ 1 − Tc /Th , (44)
where the efficiency can be understood as the ratio of the desired output (−W ) to the resource
that is consumed (−Qh ).
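As a worked example of these bounds (a minimal Python sketch; the temperatures, the heat input, and the function names are illustrative and not from the text), the largest work compatible with Eq. (43) is obtained when the second law is saturated:

```python
# Maximum work extractable from heat drawn out of a hot reservoir,
# per the first and second laws. All numbers are illustrative.

def carnot_efficiency(T_hot, T_cold):
    """Upper bound on the efficiency, Eq. (44): eta <= 1 - Tc/Th."""
    return 1.0 - T_cold / T_hot

def max_work(Q_in, T_hot, T_cold):
    """Largest -W compatible with W = Qh + Qc and Qh/Th + Qc/Tc >= 0.

    Sign convention as in the text: Q_in = -Qh > 0 is the heat drawn
    from the hot reservoir. The least heat that must be dumped into
    the cold reservoir (the reversible case) is Q_in * Tc/Th.
    """
    Q_cold = Q_in * T_cold / T_hot   # minimal dissipated heat
    return Q_in - Q_cold             # extracted work -W

eta_c = carnot_efficiency(500.0, 300.0)   # hot at 500 K, cold at 300 K
W_max = max_work(1000.0, 500.0, 300.0)    # from 1000 J of input heat
print(eta_c, W_max)  # 0.4 and 400.0
```

Note that W_max / 1000 equals the Carnot efficiency, as it must for a reversible engine.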

Thermodynamic processes and cycles: In a thermodynamic process, the state of a body is changed by, e.g., changing its volume V or pressure p. Such changes can be applied under different conditions. For instance, in an adiabatic process, the state is changed without exchanging heat with the environment. Similarly, processes at constant temperature T or entropy S are called isothermal and isentropic processes respectively. Note that the word adiabatic
is used differently in thermodynamics and in quantum mechanics. In quantum mechanics, it
refers to processes that are sufficiently slow such that a system in its ground state remains in its
ground state throughout the process. For this reason, I will try to avoid the word adiabatic. An
important class of thermodynamic processes are reversible processes, which obey Σ = 0. This
generally requires that the body is in thermal equilibrium with its environment at all times.
Reversible processes therefore have to be very slow and are always an idealization of any real
process. As we will see below, reversible processes are very useful for deriving thermodynamic
relations.


By combining different thermodynamic processes, we can design thermodynamic cycles. After a completed cycle, the body is in the same state, but since d¯Q and d¯W are inexact
differentials, we generally have
∮ dU = 0 ,   ∮ d¯Q = ∮ d¯W ̸= 0. (45)

Thus, it is possible to turn heat into work in a thermodynamic cycle. Prominent examples
of thermodynamic cycles are the Otto cycle, providing the underlying principle behind car
engines, and the Carnot engine, which is an idealized, reversible engine that reaches the Carnot
efficiency given in Eq. (44).

Maxwell relations: Let us consider the first law of thermodynamics for a reversible process.
As the body is in equilibrium with the environment throughout the process, we may identify
T = TB , p = pB and because dΣ = 0, we have dS = −dSB . The first law thus reads

dU = T dS − p dV = ∂S U dS + ∂V U dV. (46)

Since this is an exact differential, the last expression holds no matter if the process is reversible
or not. However, the identification of heat and work with the first and second term on the
right-hand side is only justified for reversible processes. Since U is a smooth function, partial
derivatives with respect to different variables commute and we find the Maxwell relation

∂S ∂V U = ∂V T = −∂S p. (47)

Other Maxwell relations may be obtained similarly from other thermodynamic potentials such
as the free energy F = U −T S. The free energy plays a particularly important role as it captures
the capacity of a body to do work (in contrast to U capturing the capacity to provide heat and
work). The differential of the free energy reads

dF = −S dT − p dV = ∂T F dT + ∂V F dV, (48)

from which we may derive the Maxwell relation

−∂ T ∂V F = ∂V S = ∂ T p. (49)

Equations of state: In order to obtain quantitative predictions, the theory of thermodynamics has to be supplemented by equations of state. For instance, a monatomic ideal gas with N atoms obeys the relations
U = (3/2) N kB T ,   pV = N kB T , (50)
where kB denotes the Boltzmann constant. With the help of these equations, we may then de-
rive non-trivial quantitative results such as the change in entropy when changing temperature
and volume

S(T, V ) − S(T0 , V0 ) = N kB ln[ (T /T0 )^(3/2) (V /V0 ) ]. (51)
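This entropy change can be cross-checked numerically: integrating dS = (CV /T )dT + (N kB /V )dV, with CV = (3/2)N kB and ∂T p = N kB /V following from Eq. (50) and the Maxwell relation (49), reproduces the closed-form logarithm. A minimal sketch (kB set to 1; the values of N, T, and V are illustrative):

```python
import math

# Numerically integrate dS = (C_V/T) dT + (N kB/V) dV for a monatomic
# ideal gas and compare with the closed-form logarithm. kB = 1 and all
# parameter values are illustrative.

kB, N = 1.0, 10.0
T0, V0, T1, V1 = 300.0, 1.0, 600.0, 3.0

def delta_S_closed(T0, V0, T1, V1):
    return N * kB * math.log((T1 / T0) ** 1.5 * (V1 / V0))

def delta_S_numeric(T0, V0, T1, V1, steps=100_000):
    # integrate first along T at fixed V0, then along V at fixed T1
    dS = 0.0
    dT, dV = (T1 - T0) / steps, (V1 - V0) / steps
    for i in range(steps):
        T = T0 + (i + 0.5) * dT          # midpoint rule
        dS += 1.5 * N * kB / T * dT
    for i in range(steps):
        V = V0 + (i + 0.5) * dV
        dS += N * kB / V * dV
    return dS

print(delta_S_closed(T0, V0, T1, V1), delta_S_numeric(T0, V0, T1, V1))
```

The two results agree to high accuracy, illustrating that dS is an exact differential (the integration path does not matter).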

1.3 Information theory


As we will see below, entropy is closely related to concepts from information theory. Here we
provide a brief introduction to these concepts, for a more detailed discussion see for instance
the original paper by Shannon [25] or the book by Cover and Thomas [26].


Self-information: Let us consider a random variable X with outcomes x1 , · · · , xn occurring with probabilities p1 , · · · , pn . Note that in this section, we use a different symbol for the random variable X and its outcomes xj , as is customary in mathematics. The self-information or surprisal of an outcome xj is defined as

I j = − ln p j . (52)

It quantifies the information that is gained when observing outcome x j . The self-information
has the following properties:
i) The less probable an event is, the more surprising it is and the more information we gain

lim_{pj →1} Ij = − ln 1 = 0 ,   lim_{pj →0} Ij = ∞. (53)

ii) The information of uncorrelated events is additive. Let p^X_j and p^Y_k denote the probabilities
that the independent random variables X and Y take on the values xj and yk respectively.
It then holds that

p_{j,k} = p^X_j p^Y_k ,   I_{j,k} = − ln p_{j,k} = − ln p^X_j − ln p^Y_k = I^X_j + I^Y_k , (54)

where p_{j,k} denotes the joint probability of observing outcomes xj and yk .


We note that with Eq. (52), information is quantified in nats since we are using the natural
logarithm. To quantify information in bits, one should use the logarithm of base two instead.

Shannon entropy: The Shannon entropy is defined as the average self-information


H(X ) = Σ_j pj Ij = − Σ_j pj ln pj . (55)

It quantifies the average information gain when observing the random variable X , or, equiv-
alently, the lack of information before observation. The practical importance of the Shannon
entropy is highlighted by the noiseless coding theorem which states that storing a random vari-
able taken from a distribution with Shannon entropy H requires at least H nats on average (if
H is defined using log2 , then it requires at least H bits). Any further compression results in
loss of information. This theorem can be illustrated using an example taken from Nielsen and
Chuang [17]:
Consider a source that produces the symbols A, B, C , and D with respective probabilities
1/2, 1/4, 1/8, and 1/8. If we do not use any compression, we need two bits of storage per
symbol, e.g.,
A → 00, B → 01, C → 10, D → 11. (56)
However, the symbols provided by the source may be encoded using fewer bits per symbol on
average by taking into account the probability distribution of the source. The basic idea is to
use fewer bits for the more frequent symbols. For instance, we may use the following encoding

A → 0, B → 10, C → 110, D → 111. (57)

Note that we cannot use B → 1 because then we can no longer determine if 110 means C or
if it means BBA. With this compression, we find the average number of bits per symbol to be
1/2 · 1 + 1/4 · 2 + 1/8 · 3 + 1/8 · 3 = 7/4. This equals the Shannon entropy (in bits)
H = −(1/2) log2 (1/2) − (1/4) log2 (1/4) − (1/8) log2 (1/8) − (1/8) log2 (1/8) = 7/4. (58)
It turns out that any compression algorithm that uses fewer bits results in the loss of information.
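The counting in this example can be reproduced in a few lines (a Python sketch; the code is ours, not part of the original text). The prefix code is the one given in Eq. (57):

```python
import math

# Shannon entropy (in bits) and average codeword length for the
# four-symbol source from the text, using the prefix code of Eq. (57).

probs = {"A": 1 / 2, "B": 1 / 4, "C": 1 / 8, "D": 1 / 8}
code = {"A": "0", "B": "10", "C": "110", "D": "111"}

H_bits = -sum(p * math.log2(p) for p in probs.values())
avg_len = sum(probs[s] * len(code[s]) for s in probs)

print(H_bits, avg_len)  # both equal 7/4 = 1.75
```

The average code length exactly matches the Shannon entropy here because every probability is a power of two; in general the entropy is only a lower bound.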


Von Neumann entropy: The quantum version of the Shannon entropy is given by the von
Neumann entropy

SvN [ρ̂] = −Tr{ρ̂ ln ρ̂} = − Σ_j pj ln pj , (59)

where ρ̂ = Σ_j pj | j 〉〈 j |. As a generalization of the Shannon entropy to quantum systems, it
describes the lack of knowledge we have over a quantum system given a state ρ̂. The von
Neumann entropy is minimized for pure states and maximized for the maximally mixed state
given by the identity matrix

SvN [|ψ〉〈ψ|] = 0, SvN [1/d] = ln d, (60)

where d denotes the dimension of the Hilbert space.
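Equation (59) can be evaluated from the spectrum of ρ̂. A minimal numerical sketch checking both limits of Eq. (60) (the function name s_vn is ours):

```python
import numpy as np

# Von Neumann entropy from the eigenvalues of a density matrix,
# Eq. (59), checked on a pure state and on the maximally mixed
# state of Eq. (60).

def s_vn(rho):
    """S_vN = -Tr(rho ln rho), computed from the spectrum of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]          # 0 ln 0 -> 0 by convention
    return float(-np.sum(p * np.log(p)))

d = 4
psi = np.zeros(d)
psi[0] = 1.0
pure = np.outer(psi, psi)          # |psi><psi|
mixed = np.eye(d) / d              # maximally mixed state 1/d

print(s_vn(pure), s_vn(mixed))    # 0 and ln 4
```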

Relative entropy: The relative entropy, also known as the Kullback-Leibler divergence [27] is
useful to quantify information when a random variable is described by an erroneous distribu-
tion. For two distributions {p j } and {q j }, the relative entropy is defined as
DKL [{pj }||{qj }] = Σ_j pj ln(pj /qj ). (61)

By definition, DKL ≥ 0 where equality is only achieved if the two distributions are the same. To
shed some light onto the interpretation of this quantity, let us assume that a random variable
X is distributed according to {p j } but we describe it by the erroneous distribution {q j }. The
information gain that we attribute to the outcome x j then reads − ln q j . On average, this gives
− Σ_j pj ln qj = − Σ_j pj ln pj + Σ_j pj ln(pj /qj ) = H(X ) + DKL [{pj }||{qj }]. (62)

The left-hand side of this equation provides the information we assign to the random variable
on average, the assumed information. The right-hand side provides the actual average infor-
mation, given by the Shannon entropy, plus the relative entropy. The actual information is thus
never larger than the assumed information, and the discrepancy is quantified by the rela-
tive entropy. We can think of the relative entropy as an information loss due to the erroneous
description.
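The identity in Eq. (62) is easily verified numerically (the two distributions below are illustrative):

```python
import math

# Check Eq. (62): the assumed information -sum p ln q equals
# H(X) + D_KL(p||q), and D_KL >= 0 quantifies the discrepancy.

p = [0.5, 0.3, 0.2]            # actual distribution
q = [0.25, 0.25, 0.5]          # erroneous description

H = -sum(pi * math.log(pi) for pi in p)
D_kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
assumed = -sum(pi * math.log(qi) for pi, qi in zip(p, q))

print(assumed, H + D_kl)       # equal
```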

Quantum relative entropy: The generalization of the relative entropy to the quantum sce-
nario is defined as [28]
S[ρ̂||σ̂] = Tr{ρ̂ ln ρ̂ − ρ̂ ln σ̂}. (63)
Just like its classical counterpart, it is always non-negative, S[ρ̂||σ̂] ≥ 0, and it quantifies the
discrepancy between the assumed and the actual lack of information when a system in state
ρ̂ is erroneously described by the state σ̂.

Mutual information: The mutual information is a measure that characterizes the mutual
dependence of two random variables. It tells us how much information one random variable
has on the other and it is defined as

IMI (X ; Y ) = DKL [{p_{j,k} }||{p^X_j p^Y_k }] = H(X ) + H(Y ) − H(X , Y ). (64)

To extend this definition to the quantum regime, we consider a bi-partite system in a Hilbert
space H = HA ⊗ HB . The quantum mutual information in a quantum state ρ̂AB is defined as

IMI [A; B] = S[ρ̂AB ||ρ̂A ⊗ ρ̂B ] = SvN [ρ̂A ] + SvN [ρ̂B ] − SvN [ρ̂AB ], (65)


where we introduced ρ̂A = TrB {ρ̂AB } and ρ̂B = TrA {ρ̂AB }. Since the quantum relative entropy
is a non-negative quantity, so is the mutual information. Furthermore, the mutual information
is bounded from above by a corollary of the Araki–Lieb inequality [29]

IMI [A; B] ≤ 2 min{SvN [ρ̂A ], SvN [ρ̂B ]} ≤ 2 ln min{dA , dB }, (66)

where dα denotes the dimension of the Hilbert space Hα . The last inequality follows from the
upper bound on the von Neumann entropy given in Eq. (60).
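As an illustration, for a two-qubit Bell state the reduced states are maximally mixed while the global state is pure, so Eq. (65) gives IMI = 2 ln 2, saturating the first inequality in Eq. (66). A numerical sketch (the partial traces are taken with einsum; function names are ours):

```python
import numpy as np

# Quantum mutual information, Eq. (65), for the Bell state
# (|00> + |11>)/sqrt(2): reduced states are maximally mixed and the
# global state is pure, so I_MI = ln 2 + ln 2 - 0 = 2 ln 2.

def s_vn(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_ab = np.outer(bell, bell)

# partial traces over the two 2x2 tensor factors
rho4 = rho_ab.reshape(2, 2, 2, 2)     # indices [a, b, a', b']
rho_a = np.einsum("ikjk->ij", rho4)   # trace out B
rho_b = np.einsum("kikj->ij", rho4)   # trace out A

I_mi = s_vn(rho_a) + s_vn(rho_b) - s_vn(rho_ab)
print(I_mi, 2 * np.log(2))            # equal
```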

2 Thermodynamic equilibrium
Equilibrium states may be characterized by a few intensive variables such as temperature T ,
chemical potential µ, and pressure p. In equilibrium, no net currents flow and the system
is stationary, i.e., observables become time-independent. Note the similarity to the main as-
sumption in macroscopic thermodynamics, see Sec. 1.2. This is no coincidence. Indeed, in
macroscopic thermodynamics, systems are assumed to be in local equilibrium (i.e., can be
described by a thermal state with variables that may differ from the environment).

Figure 5: System exchanging energy and particles with the environment.

2.1 The grand-canonical ensemble


Throughout these lecture notes, we consider a system that may exchange both energy and
particles with its environment as sketched in Fig. 5. In equilibrium, such a system is described
by the grand-canonical Gibbs state

ρ̂G = e^{−β(Ĥ−µ N̂)}/Z ,   Z = Tr{e^{−β(Ĥ−µ N̂)} } , (67)

where β = 1/kB T denotes the inverse temperature, Ĥ is the Hamiltonian, N̂ the particle-
number operator, and Z is called the partition function. For equilibrium states, we may gen-
erally identify the relevant quantities in macroscopic thermodynamics as average values

U ≡ 〈Ĥ〉, S ≡ kB SvN [ρ̂]. (68)

Because the Gibbs state is of central importance for much that follows, we now discuss three
different ways of motivating why it provides an adequate description for thermodynamic equi-
librium.
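For the smallest nontrivial example, a single fermionic mode with Ĥ = εn̂ and N̂ = n̂, the Gibbs state of Eq. (67) is diagonal and its mean occupation equals the Fermi function. A minimal sketch (kB = 1; the values of ε, µ, and T are illustrative):

```python
from math import exp, log

# Grand-canonical Gibbs state, Eq. (67), for one fermionic mode:
# occupation n in {0, 1}, energies E = eps*n, particle number N = n.
# kB = 1 and all parameter values are illustrative.

eps, mu, T = 1.0, 0.4, 0.5
beta = 1.0 / T

energies = [0.0, eps]
numbers = [0.0, 1.0]

weights = [exp(-beta * (E - mu * n)) for E, n in zip(energies, numbers)]
Z = sum(weights)
p = [w / Z for w in weights]                    # diagonal of rho_G

U = sum(pi * E for pi, E in zip(p, energies))   # U = <H>, cf. Eq. (68)
S = -sum(pi * log(pi) for pi in p)              # S_vN in units of kB
n_mean = sum(pi * n for pi, n in zip(p, numbers))

fermi = 1.0 / (exp(beta * (eps - mu)) + 1.0)
print(n_mean, fermi)   # equal: the Fermi-Dirac occupation
```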


2.1.1 Subsystem of an isolated system


The first motivation considers the system of interest to be a small part of a very large “super-
system” that is described as an isolated system, see Fig. 6. This motivation for the Gibbs state
can be found in many textbooks, see e.g., Landau & Lifshitz [30] or Schwabl [31]. We start by
assuming that the supersystem has a fixed energy and particle number given by Etot and Ntot
respectively. In this case, it is described by the so-called micro-canonical ensemble
ρ̂tot = Σ_{E,N} ptot (E, N ) |E, N 〉〈E, N | , (69)

where ptot (E, N) is nonzero only if the energy and particle number are close to Etot and Ntot ,
i.e., for Etot ≤ E < Etot + δE and Ntot ≤ N < Ntot + δN. For energies and particle numbers
within these shells, each state is equally likely in the microcanonical ensemble, i.e.,

ptot (E, N ) = Θ(E − Etot )Θ(N − Ntot )Θ(Etot + δE − E)Θ(Ntot + δN − N ) / [Ω(Etot , Ntot )δEδN ] , (70)

where Ω(Etot , Ntot ) denotes the density of states and Θ(x ) the Heaviside theta function which
is equal to one for x > 0 and zero otherwise. Being a macroscopic supersystem, the number
of states contributing to the microcanonical ensemble is assumed to be large

Ω(Etot , Ntot )δEδN ≫ 1. (71)

At the same time, for energy and particle numbers to be well defined, we require δE ≪ Etot
and δN ≪ Ntot . This also justifies the assumption that the density of states is constant in the
energy and particle number shells that contribute to the micro-canonical ensemble. For future
reference, we note that the von Neumann entropy of the microcanonical ensemble is given by
the logarithm of the number of contributing states [see also Eq. (60)]

SvN (ρ̂tot ) = ln[Ω(Etot , Ntot )δEδN]. (72)

Figure 6: System of interest as a small part of a large, isolated “supersystem”. The
energy and particle number in the supersystem are fixed to Etot and Ntot respectively.
The reduced state of the system can then be shown to be the Gibbs state.

Since our system of interest is part of the supersystem, we may write the Hamiltonian and
particle-number operator as

Ĥtot = ĤS + ĤB + V̂, N̂tot = N̂S + N̂B , (73)


where the subscript B stands for “bath” and describes the remainder of the supersystem. We
now make a weak coupling approximation that is crucial to obtain the Gibbs state: we assume
that V̂ is sufficiently weak such that we may neglect the coupling energy and the energy and
particle eigenstates of the supersystem approximately factorize

|E, N〉 ≃ |ES, NS〉 ⊗ |EB , NB 〉 . (74)

The density matrix in Eq. (69) may then be written as


ρ̂tot = Σ_{ES ,NS} Σ_{EB ,NB} ptot (ES + EB , NS + NB ) |ES , NS 〉〈ES , NS | ⊗ |EB , NB 〉〈EB , NB | . (75)

We are interested in the reduced state of the system obtained by taking the partial trace over
the bath
ρ̂S = TrB {ρ̂tot } = Σ_{EB ,NB} 〈EB , NB | ρ̂tot |EB , NB 〉 = Σ_{ES ,NS} pS (ES , NS ) |ES , NS 〉〈ES , NS | . (76)

The probability for the system to be in a state with energy ES and particle number NS may be
written as
pS (ES , NS ) = Σ_{EB ,NB} ptot (ES + EB , NS + NB ) = ΩB (Etot − ES , Ntot − NS )δEδN / [Ω(Etot , Ntot )δEδN ] , (77)

where the numerator in the last expression denotes the number of states in the bath, i.e., the
number of terms in the sum Σ_{EB ,NB} ptot (ES + EB , NS + NB ) which are non-vanishing.
In analogy to Eq. (72), the entropy of the bath is given by

SB (E, N) = kB ln[ΩB (E, N)δEδN]. (78)

We note that the factor of kB appears here because this is the thermodynamic, not the von
Neumann entropy. Since the system is considered to be small, we may expand the bath entropy
around Etot and Ntot to obtain

SB (Etot − ES , Ntot − NS ) ≃ SB (Etot , Ntot ) − ES ∂E SB (E, Ntot )|E=Etot − NS ∂N SB (Etot , N )|N =Ntot
= SB (Etot , Ntot ) − (ES − µNS )/T , (79)
where we identified ∂E SB (E, N) = 1/T and ∂ N SB (E, N) = −µ/T . Inserting the last expression
into Eq. (77) results in

pS (ES , NS ) = [ΩB (Etot , Ntot )δEδN / Ω(Etot , Ntot )δEδN ] e^{−β(ES −µNS )} = e^{−β(ES −µNS )}/Z , (80)

where the last equality follows from the normalization of the state. Since this is exactly the
probability for the Gibbs state given in Eq. (67), we have shown that a small subsystem of a
large and isolated supersystem is described by the Gibbs state if the supersystem is described
by the microcanonical ensemble (i.e., has well defined energy and particle number) and if the
system is weakly coupled to the remainder of the supersystem.
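The counting argument can be illustrated with a toy bath of M two-level units of unit splitting, for which ΩB (E) = M!/[E!(M − E)!]. Equation (77) then yields system probabilities that approach Boltzmann factors for a large bath. A sketch (the bath size and total energy are illustrative):

```python
from math import comb, exp, log

# Toy version of Sec. 2.1.1: a system exchanging energy with a bath
# of M two-level units (each of energy 0 or 1). The number of bath
# states at energy E is Omega_B(E) = C(M, E), and Eq. (77) gives
# p_S(E_S) proportional to Omega_B(E_tot - E_S). For a large bath
# this reproduces the Boltzmann factor e^{-beta E_S}.

M, E_tot = 1000, 300

def p_ratio(E_S):
    """p_S(E_S) / p_S(0) from pure state counting."""
    return comb(M, E_tot - E_S) / comb(M, E_tot)

# effective inverse temperature beta = dS_B/dE ~ ln[(M - E)/E]
beta = log((M - E_tot) / E_tot)

for E_S in (1, 2, 3):
    print(E_S, p_ratio(E_S), exp(-beta * E_S))  # nearly equal
```

The agreement improves as M grows, in line with the expansion of the bath entropy in Eq. (79).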


2.1.2 Jaynes’ maximum entropy principle


The second motivation for the Gibbs state that we discuss is based on Jaynes’ maximum entropy
principle [32, 33]. The basic idea behind this principle is that an adequate description of the
physical system should only encode the information that we actually have. As discussed above,
equilibrium states may be described by intensive variables such as temperature and chemical
potential. Equivalently, they may be described by the conjugate extensive quantities provided
by the average energy Ē and the average particle number N̄. Following Jaynes, we take Ē and
N̄ as the prior data, i.e., the knowledge that we have about the state. Jaynes’ crucial insight
was that in order to avoid encoding any other knowledge in the state, the entropy, a quantifier
for lack of knowledge, should be maximized. Mathematically, the Gibbs state is then obtained
by maximizing the von Neumann entropy under the constraints

Ē = 〈Ĥ〉, N̄ = 〈 N̂〉. (81)

To perform this maximization, we start by considering states of the form


ρ̂ = Σ_n pn |En , Nn 〉〈En , Nn | ,   Ĥ |En , Nn 〉 = En |En , Nn 〉 ,   N̂ |En , Nn 〉 = Nn |En , Nn 〉 . (82)

This form may be motivated by demanding that equilibrium states do not change in time,
which implies [see also Eq. (19)]

Û(t )ρ̂ Û † (t ) = ρ̂ ⇔ [Ĥ, ρ̂] = 0, (83)

and that there are no coherent superpositions of states with different number of particles,
which implies [ N̂, ρ̂] = 0. The latter assumption may be motivated by a particle superselection
rule [34] and it does not apply when considering scenarios with a vanishing chemical potential
µ = 0. We now want to maximize the von Neumann entropy
SvN [ρ̂] = − Σ_n pn ln pn , (84)

under the following three constraints


Σ_n pn = 1 ,   Σ_n pn En = Ē ,   Σ_n pn Nn = N̄ , (85)

where the first constraint simply ensures the normalization of the state. Such constrained
maximization problems can be solved by the method of Lagrange multipliers [35]. To this
end, we define the function
L = − Σ_n pn ln pn − λ0 (Σ_n pn − 1) − λ1 (Σ_n pn En − Ē ) − λ2 (Σ_n pn Nn − N̄ ) . (86)

The solution to the maximization procedure is the distribution {pn } that obeys

∂L/∂pn = − ln pn − 1 − λ0 − λ1 En − λ2 Nn = 0. (87)

We now identify
λ1 = β, λ2 = −βµ, e λ0 +1 = Z. (88)
The solution of Eq. (87) then reads

pn = e^{−β(En −µNn )}/Z , (89)
Z


recovering the Gibbs state given in Eq. (67). The Lagrange multipliers or, equivalently, the
quantities β, µ, and Z are fully determined by the constraints given in Eq. (85).
Introducing the grand potential which plays a similar role to the free energy

ΦG = 〈Ĥ〉 − kB T SvN [ρ̂] − µ〈 N̂〉, (90)

we may write Eq. (86) as


L = −βΦG − λ0 (Σ_n pn − 1) + β(Ē − µN̄ ). (91)

The last term may be dropped as it does not depend on pn . The last equation then implies
that maximizing the von Neumann entropy given the average energy and average particle
number is mathematically equivalent to minimizing the grand potential (the only constraint
left, corresponding to the Lagrange multiplier λ0 , ensures the normalization of the state).
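The maximization can also be carried out numerically. The sketch below is simplified to the canonical case (µ = 0): the multiplier β is fixed by the energy constraint via bisection, and the resulting distribution of Eq. (89) has larger entropy than a competitor with the same normalization and mean energy (the spectrum, Ē, and the competitor are illustrative):

```python
from math import exp, log

# Jaynes' principle, simplified to a single energy constraint:
# fix <E> = E_bar, determine beta by bisection, and compare the
# entropy of the Gibbs distribution with another distribution
# satisfying the same constraints. Spectrum and E_bar illustrative.

E = [0.0, 1.0, 2.0, 3.0]
E_bar = 1.2

def gibbs(beta):
    w = [exp(-beta * e) for e in E]
    Z = sum(w)
    return [x / Z for x in w]

def mean_energy(p):
    return sum(pi * e for pi, e in zip(p, E))

def entropy(p):
    return -sum(pi * log(pi) for pi in p if pi > 0)

# bisection: the mean energy decreases monotonically with beta
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(gibbs(mid)) > E_bar:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
p_gibbs = gibbs(beta)

# competitor: shift along v = (1, -2, 1, 0), which preserves both
# the normalization and the mean energy exactly
v = [1.0, -2.0, 1.0, 0.0]
p_alt = [pi + 0.03 * vi for pi, vi in zip(p_gibbs, v)]

print(entropy(p_gibbs), entropy(p_alt))  # Gibbs entropy is larger
```

The entropy deficit of the competitor is exactly DKL [{p_alt}||{p_gibbs}] > 0, so the Gibbs distribution is the unique maximizer.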

2.1.3 Complete passivity


The final motivation we provide for the Gibbs state is based on the notion of passive states
[36, 37]. These are states from which no work can be extracted. To define passive states, we
consider an isolated system that obeys the time-evolution given in Eq. (19). For an isolated
system with a time-dependent Hamiltonian, we interpret the total energy change of the system
as work. This can be motivated by considering the time-dependence of the Hamiltonian as
mediated by a classical field describing a macroscopic degree of freedom. In analogy to a
piston compressing a gas, the energy change mediated by this macroscopic degree of freedom
should be considered as work. We now consider a cyclic work protocol of duration τ, where
the Hamiltonian is changed over time Ĥ(t ) such that

Ĥ ≡ Ĥ(0) = Ĥ(τ). (92)

The work extracted during this protocol reads

Wex = Tr{Ĥ ρ̂} − Tr{Ĥ Û ρ̂ Û † }, (93)

where ρ̂ denotes the initial state of the system and the unitary Û ≡ Û(τ) is determined by
the time-dependent Hamiltonian, see Eq. (19). We note that with a suitable Hamiltonian, any
unitary operator can be generated.
A passive state can now be defined as a state τ̂ obeying

Tr{Ĥ(τ̂ − Û τ̂Û † )} ≤ 0 ∀ Û, (94)

which implies that no work can be extracted from passive states, no matter how the time-
dependent Hamiltonian is designed. Passive states are of the form
τ̂ = Σ_n pn |En 〉〈En | ,   with E0 ≤ E1 ≤ E2 ≤ · · · and p0 ≥ p1 ≥ p2 ≥ · · · (95)

Remarkably, taking multiple copies of a passive state may result in a state that is no longer
passive. This suggests the definition of completely passive states [36]: a state τ̂ is completely
passive iff
Tr{Ĥn (τ̂n − Û τ̂n Û † )} ≤ 0 ∀ Û, n, (96)
where
Ĥn = Ĥ ⊕ Ĥ ⊕ · · · ⊕ Ĥ, τ̂n = τ̂ ⊗ τ̂ ⊗ · · · ⊗ τ̂. (97)


Here we introduced the Kronecker sum as Â ⊕ B̂ = Â ⊗ 1 + 1 ⊗ B̂. This implies that it is
not possible to extract work from completely passive states even if we take multiple copies
and let them interact during the work protocol. Remarkably, the only completely passive
states are Gibbs states in the canonical ensemble (assuming no degeneracy in the ground
state) [36, 38, 39]
ρ̂c = e^{−β Ĥ}/Tr{e^{−β Ĥ} } . (98)
In complete analogy, the grand-canonical Gibbs state is completely passive with respect
to Ĥ − µ N̂, i.e., the average 〈Ĥ − µ N̂〉 cannot be lowered by any unitary, even when tak-
ing multiple copies. If we further assume a particle-superselection rule [34], preventing the
creation of coherent superpositions of different particle numbers as long as µ ̸= 0, then the
grand-canonical Gibbs state in Eq. (67) is completely passive since [Û, N̂] = 0.
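For states diagonal in the energy basis, the optimal cyclic unitary amounts to a permutation of populations, and the extractable work (often called the ergotropy) is obtained by sorting: the largest population is placed on the lowest level. The sketch below uses an illustrative three-level example that is passive on its own while two copies of it are not:

```python
# Ergotropy sketch for Sec. 2.1.3: a passive state that is not
# completely passive. The spectrum and populations are illustrative.

def ergotropy(energies, probs):
    """Max work extractable by a cyclic unitary from a state diagonal
    in the energy basis: actual energy minus the energy of the
    population-sorted ("passive") rearrangement."""
    actual = sum(p * e for p, e in zip(probs, energies))
    passive = sum(p * e for p, e in
                  zip(sorted(probs, reverse=True), sorted(energies)))
    return actual - passive

E1 = [0.0, 1.0, 3.0]
p1 = [0.5, 0.3, 0.2]      # decreasing with energy: passive, Eq. (95)

# two copies: energies add (Kronecker sum), probabilities multiply
E2 = [a + b for a in E1 for b in E1]
p2 = [a * b for a in p1 for b in p1]

print(ergotropy(E1, p1))  # 0: a single copy is passive
print(ergotropy(E2, p2))  # > 0: two copies are not
```

The violation arises because the pair state with energy 1 + 1 = 2 has probability 0.09, less than the probability 0.10 of the pair states with energy 0 + 3 = 3.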

2.2 Equivalence of ensembles in the thermodynamic limit


We have already encountered the grand-canonical, the canonical, as well as the micro-canonical
ensemble. Here we discuss these ensembles in a bit more detail and argue that they all be-
come equivalent in the thermodynamic limit, which is the limit of large systems relevant for
macroscopic thermodynamics.

Microcanonical ensemble

ρ̂M = [1/(Ω(Ē, N̄ )δEδN )] Σ_{Ē≤E<Ē+δE, N̄≤N <N̄+δN} |E, N 〉〈E, N | , (99)

where Ω(Ē, N̄) denotes the density of states that we have encountered in Eq. (71). In the
microcanonical ensemble, both the particle number and the energy are fixed, where fixed
means that all states within an energy (particle number) shell of thickness δE (δN) contribute.
Within this shell, the state is assumed to be completely mixed, i.e., all states have equal weight.
As such, the state maximizes the von Neumann entropy for fixed Ē and N̄.

Canonical ensemble

ρ̂c = Σ_{N̄≤N <N̄+δN} Σ_E (e^{−βE}/Zc ) |E, N 〉〈E, N | = e^{−β ĤN̄ }/Tr{e^{−β ĤN̄ } } , (100)

where Ĥ N̄ denotes the Hamiltonian projected onto the relevant particle-number subspace
ĤN̄ = Σ_{N̄≤N <N̄+δN} |N 〉〈N | Ĥ |N 〉〈N | ,   |N 〉 = Σ_E |E, N 〉 . (101)

In the canonical ensemble, only the particle number is fixed. The average energy 〈Ĥ〉 is de-
termined by the temperature of the environment. The canonical ensemble is the adequate
equilibrium state when energy (but not particles) can be exchanged with the environment.
The canonical Gibbs state maximizes the von Neumann entropy for a fixed particle number N̄
and an average energy given by 〈Ĥ〉 = Ē. Furthermore, the canonical Gibbs state minimizes
the free energy
F = 〈Ĥ〉 − kB T SvN [ρ̂], (102)
when the particle number N̄ is fixed.


Grand-canonical ensemble The grand-canonical Gibbs state is given in Eq. (67). As dis-
cussed above, it maximizes the von Neumann entropy for given average values of energy and
particle number. This is equivalent to minimizing the grand potential given in Eq. (90). The
grand-canonical ensemble is the adequate description when both energy and particles may be
exchanged with the environment. The average energy and particle number are then deter-
mined by the temperature and chemical potential of the environment respectively.

Thermodynamic limit and equivalence of ensembles The thermodynamic limit is the limit
of large systems which formally can be defined as

V → ∞ ,   Ē/V = const. ,   N̄/V = const. (103)
This can be achieved by setting V ∝ x , Ē ∝ x , N̄ ∝ x , and letting x → ∞. In the ther-
modynamic limit, all ensembles become equivalent [40] because relative fluctuations around
average values vanish. Here, we illustrate this for the canonical ensemble. In this ensemble,
the probability for the system to take on a certain energy obeys

p(E) ∝ e −β E Ω(E, N̄). (104)

As it turns out, this function becomes highly peaked in the thermodynamic limit. This can be
seen from the relative fluctuations

(〈Ĥ 2 〉 − 〈Ĥ〉2 )/〈Ĥ〉2 = −∂β 〈Ĥ〉/〈Ĥ〉2 = kB T 2 C /〈Ĥ〉2 ∝ 1/x , (105)

where we introduced the heat capacity

C = ∂ T 〈Ĥ〉, (106)

which quantifies the change of energy upon a change of temperature. The last proportionality
in Eq. (105) follows because the heat capacity is an extensive quantity and is thus proportional
to x for large systems. Equation (105) implies that in the thermodynamic limit, the relative
fluctuations vanish. The probability distribution for the system to take on a given relative
energy may then be approximated as a Gaussian distribution (see Ref. [23] for a discussion on
higher moments)
p(E/Ē) ≃ [Ē/√(2πkB T 2 C )] e^{−(E/Ē−1)2 /(2kB T 2 C /Ē 2 )} , (107)
which tends to a Dirac delta distribution δ(E/Ē − 1) in the thermodynamic limit.
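The scaling in Eq. (105) can be made concrete for N independent two-level systems, where the mean energy and the energy variance are both proportional to N. A minimal sketch (kB = 1, unit level splitting, illustrative β):

```python
from math import exp

# Relative energy fluctuations, Eq. (105), for N independent
# two-level systems in the canonical ensemble: mean and variance
# of the energy are both extensive, so their ratio scales as 1/N.
# kB = 1, level splitting 1, beta illustrative.

beta = 1.0

def relative_fluctuations(N):
    p1 = exp(-beta) / (1 + exp(-beta))   # excited-state population
    mean = N * p1                         # <H>
    var = N * p1 * (1 - p1)               # <H^2> - <H>^2
    return var / mean ** 2

r10, r100, r1000 = (relative_fluctuations(N) for N in (10, 100, 1000))
print(r10, r100, r1000)   # decreases by a factor of 10 at each step
```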

3 The laws of thermodynamics


In this section, we discuss how the laws of thermodynamics emerge in a quantum mechanical
framework.

3.1 The general scenario


The general scenario we consider consists of a system coupled to multiple reservoirs which are
in local thermal equilibrium as sketched in Fig. 7. This is described by the Hamiltonian
Ĥtot (t ) = ĤS (t ) + Σ_α (Ĥα + V̂α ) , (108)


where the system is labeled by the subscript S and the reservoirs are labeled by the index α.
The term V̂α denotes the coupling between the system and reservoir α. Because the system
and reservoirs together comprise an isolated system, the time evolution of the total density
matrix is given by

ρ̂tot (t ) = Û(t )ρ̂tot (0)Û † (t ) ⇔ ∂ t ρ̂tot (t ) = −i[Ĥtot (t ), ρ̂tot (t )], (109)

where the time-evolution operator is given by


Û(t ) = T exp[−i ∫_0^t d t ′ Ĥtot (t ′ )] , (110)

with T denoting the time-ordering operator (see Sec. 1.1.2) and we allow the system Hamilto-
nian to be time-dependent while we assume the reservoir Hamiltonians as well as the coupling
terms to be time-independent.

Figure 7: General scenario. A system can exchange both energy and particles with
N thermal reservoirs, described by the temperatures Tα and the chemical potentials
µα .

As seen in Sec. 1.2, the energy flows between the system and the reservoirs are of cru-
cial importance in thermodynamics. This motivates considering the mean energy change of
reservoir α

∂ t 〈Ĥα 〉 = Tr{Ĥα ∂ t ρ̂tot } = ∂ t 〈Ĥα − µα N̂α 〉 + µα ∂ t 〈 N̂α 〉 = Jα + Pα . (111)

Here we divided the energy change into a part that we call the heat current Jα and a part that
we call the power Pα that enters reservoir α. Throughout these notes, the sign convention is
such that energy flows are positive when flowing toward the body that their index refers to.
To motivate the separation of energy flows into heat and work, we introduce the concept of
entropy production in the quantum regime.

3.2 Entropy production


The standard extension of entropy to the quantum regime is the von Neumann entropy. How-
ever, under unitary time-evolution, the von Neumann entropy is constant and thus

∂ t SvN [ρ̂tot ] = −∂ t Tr{ρ̂tot (t ) ln ρ̂tot (t )} = 0. (112)

The von Neumann entropy of the total system can thus not tell us anything about how energy
flows between the system and the reservoirs. To make progress, we consider an effective
description based on local thermal equilibrium [i.e., the reservoirs are described by the Gibbs
state given in Eq. (67)]


• True description: ρ̂tot (t )

• Effective description: ρ̂S (t ) ⊗α τ̂α

where ρ̂S(t ) = TrB {ρ̂tot (t )} and we denote the Gibbs state describing reservoir α as

τ̂α = e^{−βα (Ĥα −µα N̂α )}/Zα . (113)

The effective description thus contains all information on the system but neglects any
changes to the reservoirs as well as the correlations that may build up between system and
reservoirs. Such an effective description is often the best one can do in an experiment, where
one might have control over the microscopic degrees of freedom of the system only.
We now consider the quantum relative entropy between the true state ρ̂tot (t ) and our
effective description
S [ρ̂tot ||ρ̂S ⊗α τ̂α ] = SvN [ρ̂S ] + Σ_α βα 〈Ĥα − µα N̂α 〉 + Σ_α ln Zα − SvN [ρ̂tot ] , (114)

where averages are with respect to the total state 〈X̂ 〉 = Tr{X̂ ρ̂tot (t )} and we used ln τ̂α =
−βα (Ĥα − µα N̂α ) − ln Zα as well as ln(Â ⊗ B̂) = ln(Â) ⊗ 1 + 1 ⊗ ln(B̂). As discussed in Sec. 1.3,
Eq. (114) may be interpreted as the information that is lost due to our effective description.
We now introduce the entropy production rate [41]
Σ̇ = kB ∂ t S [ρ̂tot ||ρ̂S ⊗α τ̂α ] = kB ∂ t SvN [ρ̂S ] + Σ_α Jα /Tα . (115)

The entropy production rate can thus be interpreted as the rate at which information is lost
by our local equilibrium description, due to the buildup of correlations between system and
environment as well as changes to the reservoirs. Note that it is not guaranteed to be posi-
tive. Finite size effects as well as non-Markovian dynamics can result in a negative entropy
production rate (a backflow of information from the reservoirs). However, as we will see later,
for infinitely large and memoryless reservoirs, the entropy production rate is ensured to be
positive at all times as information is irretrievably lost when one can only access the system
alone.
Note that Eq. (115) motivates the interpretation of Jα as a heat flow, such that the entropy
production associated to reservoir α is given by the usual expression for reservoirs which re-
main in thermal equilibrium. Interestingly, we may refine our effective description by using
time-dependent temperatures and chemical potentials [42, 43]

τ̂α (t ) = e^{−βα (t )(Ĥα −µα (t ) N̂α )}/Zα (t ) , (116)

such that
Tr{Ĥα ρ̂tot (t )} = Tr{Ĥα τ̂α (t )}, Tr{ N̂α ρ̂tot (t )} = Tr{ N̂α τ̂α (t )}. (117)
In this case, one may show that
kB ∂ t SvN [τ̂α (t )] = Jα (t )/Tα (t ) , (118)
and the entropy production rate reduces to
X
Σ̇ = kB ∂ t S [ρ̂tot ||ρ̂S α τ̂α (t )] = kB ∂ t SvN [ρ̂S] + kB ∂ t SvN [τ̂α (t )]
N
α
X J (t ) (119)
α
= kB ∂ t SvN [ρ̂S] + .
α Tα (t )


3.3 The first law of thermodynamics


To derive the first law of thermodynamics, we consider the change in the average energy of
system and reservoirs
 
∂ t 〈Ĥtot (t )〉 = Tr{[∂ t Ĥtot (t )]ρ̂tot } + Tr{Ĥtot (t )∂ t ρ̂tot } = ∂ t 〈ĤS (t )〉 + Σ_α ∂ t 〈Ĥα 〉 + Σ_α ∂ t 〈V̂α 〉 . (120)

Using Eq. (109), the last term on the first line can be shown to vanish. With the help of the last
equation and Eq. (111), we may then derive the first law of thermodynamics (note that the
energy flows are defined to be positive when they enter the location corresponding to their
index)
∂ t 〈ĤS (t )〉 = PS (t ) − Σ_α [Jα (t ) + Pα (t )] − Σ_α ∂ t 〈V̂α 〉 ,   PS ≡ 〈∂ t ĤS 〉 , (121)

where PS denotes the power entering the system due to some external classical drive that
renders ĤS time-dependent. The term due to the coupling energy 〈V̂α 〉 can be neglected when
the coupling is weak, which is a common assumption for open quantum systems. Relaxing
this assumption and considering the thermodynamics of systems that are strongly coupled to
the environment is an exciting ongoing avenue of research [44–47].

3.4 The second law of thermodynamics


Let us consider an initial state which is a product state of the form
ρ̂tot(0) = ρ̂S(0) ⊗α τ̂α, (122)

i.e., we assume our effective description to be exact at t = 0. In this case, Eq. (114) can be
written as
Σ ≡ kB S[ρ̂tot(t) || ρ̂S(t) ⊗α τ̂α] = kB SvN[ρ̂S(t)] − kB SvN[ρ̂S(0)] + Σ_α Qα/Tα, (123)

where the heat is defined as

Qα = Tr{(Ĥα − µα N̂α) ρ̂tot(t)} − Tr{(Ĥα − µα N̂α) ρ̂tot(0)}, (124)

and we used that S[ρ̂tot(0) || ρ̂S(0) ⊗α τ̂α] = 0. Since it is expressed as a quantum relative
entropy, we have

Σ ≥ 0. (125)
From an information point of view, this inequality tells us that if our effective description is
true at t = 0, then it can only be worse at later times. To understand how our description
becomes worse, we follow Refs. [41, 48] and write the entropy production as
Σ = kB IMI[S; B] + kB S[ρ̂B(t) || ⊗α τ̂α], (126)

where we introduced the reduced state of the environment ρ̂B (t ) = TrS{ρ̂tot (t)} and the
mutual information between system and environment is given by [c.f. (65)]

IMI [S; B] = SvN [ρ̂S(t )] + SvN [ρ̂B (t )] − SvN [ρ̂tot (t )]. (127)

The first term in Eq. (126) describes the correlations between the system and the environment
while the second term describes the departure of the environment from the initial product of


Gibbs states. As was recently shown, the departure of the environment from its initial state
provides the dominant contribution to the entropy production [48]. This can be seen from
the upper bound of the mutual information IMI[S; B] ≤ 2 ln dS [c.f. Eq. (66)], where dS denotes
the dimension of the Hilbert space of the system. In the long-time limit, or under steady state
conditions, the mutual information therefore saturates and can no longer contribute to Σ.
Any entropy production due to persisting heat currents, see Eq. (123), can thus be associated
with the environment departing further and further from the initial state with well defined
temperatures and chemical potentials.
As mentioned above, the entropy production rate Σ̇ is not always guaranteed to be positive
(i.e., Σ is not necessarily a monotonically increasing function of time). However, at short times,
Eq. (125) together with Σ(0) = 0 ensures that the entropy production rate is positive. Furthermore, as we will
see in the next section, the entropy production is positive for infinitely large and memory-less
reservoirs which couple weakly to the system. We note that with a time-dependent effective
description, see Eq. (116), Eqs. (123) and (125) still hold.

3.5 The zeroth law of thermodynamics


For completeness, we will also briefly mention the zeroth and the third law of thermodynamics,
although they will play a less important role throughout these notes. The zeroth law is usually
phrased as: If two systems are both in equilibrium with a third one, then they are in equilibrium
with each other. Being equipped with definitions for temperature and chemical potential, the
zeroth law implies: Systems in equilibrium with each other have the same temperature and
chemical potential.
When a small system is coupled to a large equilibrium reservoir, then the reduced system
state is expected to tend to [47]

lim_{t→∞} ρ̂S(t) = TrB{e^{−β(Ĥtot − µN̂tot)}} / Tr{e^{−β(Ĥtot − µN̂tot)}}  −→(weak coupling)  e^{−β(ĤS − µN̂S)}/Z. (128)

As expected, in equilibrium, the system is thus characterized by the same temperature and
chemical potential as the reservoir. We note that equilibrium may only be reached if Ĥtot is
time-independent and if there is only a single temperature and a single chemical potential
characterizing the environment. The exact range of validity of the first equality in Eq. (128)
is a subject of ongoing research [47].

3.6 The third law of thermodynamics


The formulation of the third law of thermodynamics that we consider here is known as Nernst’s
unattainability principle [49]. In its modern formulation, it reads: It is impossible to cool a
system to its ground state (or create a pure state) without diverging resources. These resources
may be time, energy (in the form of work), or control complexity [50]. When a diverging
amount of work is available, one may cool a system to the ground state by increasing the
energy of all excited states by an infinite amount. Then, the equilibrium state at any finite
temperature reduces to the ground state which can thus be obtained by equilibration with
any thermal reservoir. When an infinite amount of time is available, this process may be
performed reversibly, reducing the work cost to a finite amount. Loosely speaking, infinite
control complexity allows one to parallelize this cooling process and reach the ground state
using only a finite amount of time and work [50].
Existing proofs of the third law of thermodynamics for quantum systems [50, 51] use the
framework of the resource theory of thermodynamics [13] and are not directly applicable to
the scenario considered in these notes.


4 Markovian master equations


In this section, we consider Markovian master equations as a description for the reduced state
of the system ρ̂S. Markovianity implies that no memory effects of the environment are taken
into account. For instance, a particle that is emitted from the system will not be reabsorbed
by the system before losing all memory of the emission event. In principle, memory effects
are always present but if the coupling between system and environment is weak, these effects
can often safely be ignored. There are numerous references that discuss Markovian master
equations going substantially beyond these notes, see for instance Refs. [52–56].

Figure 8: Time evolution of total and reduced density matrices. The total density
matrix evolves unitarily, according to Eq. (109). The reduced density matrix at time
t can either be obtained by tracing out the environment from ρ̂tot (t ), or by evolving
ρ̂S(0) by the universal dynamical map Et,0.

As illustrated in Fig. 8, the time-evolution of the reduced density matrix can be described
by a universal dynamical map (UDM). A UDM E is a linear map which transforms a density
matrix into another density matrix. Furthermore, it is independent of the density matrix it acts
upon. In its most general form, a UDM can be written as
E ρ̂ = Σ_j K̂j ρ̂ K̂j†,   Σ_j K̂j† K̂j = 1,

where the operators K̂ j are called Kraus operators.
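As a quick numerical illustration (not taken from the notes), one can check these Kraus-operator properties for the amplitude-damping channel, a standard textbook example of a UDM; the operators K̂0, K̂1 and the parameter p below are illustrative choices:

```python
import numpy as np

# Amplitude-damping channel with decay probability p (illustrative example)
p = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - p)]], dtype=complex)
K1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]], dtype=complex)

# Completeness relation: sum_j K_j^dag K_j = 1
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

# Apply the map E[rho] = sum_j K_j rho K_j^dag to a valid density matrix
rho = np.array([[0.25, 0.1], [0.1, 0.75]], dtype=complex)
rho_out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# The output is again a density matrix: unit trace and non-negative eigenvalues
assert np.isclose(np.trace(rho_out).real, 1.0)
assert np.all(np.linalg.eigvalsh(rho_out) >= -1e-12)
```

The completeness relation is exactly what guarantees trace preservation here, since Tr{Σ_j K̂j ρ̂ K̂j†} = Tr{ρ̂ Σ_j K̂j† K̂j} = Tr{ρ̂}.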


We say that a system obeys Markovian time-evolution if it is described by a divisible UDM,
i.e.,
ρ̂(t) = Et,t0 ρ̂ = Et,t1 Et1,t0 ρ̂,
for any intermediate time t0 < t1 < t. Furthermore, we note that a differential equation is
a Markovian master equation (i.e., results in Markovian time-evolution of a density matrix) if
and only if it can be written in the form

∂t ρ̂(t) = −i[Ĥ(t), ρ̂(t)] + Σ_k γk(t) [L̂k(t) ρ̂(t) L̂k†(t) − ½{L̂k†(t) L̂k(t), ρ̂(t)}]
  = −i[Ĥ(t), ρ̂(t)] + Σ_k γk(t) D[L̂k(t)] ρ̂(t), (129)

where Ĥ(t ) is Hermitian, γk (t ) ≥ 0, and the operators L̂k are referred to as Lindblad jump
operators. For a proof, see Ref. [53]. This form of the master equation is also called GKLS
form, after Gorini, Kossakowski, and Sudarshan [57], and Lindblad [58], who considered the
time-independent case. In the rest of this chapter, we provide detailed derivations for Markovian
master equations that describe our general scenario introduced in Sec. 3.1.
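The structure of Eq. (129) can be made concrete with a minimal numerical sketch (illustrative toy operators and rates, not from the notes; ħ = 1): evaluating the right-hand side of a GKLS equation and checking that it preserves the trace and Hermiticity of the density matrix.

```python
import numpy as np

def dissipator(L, rho):
    """D[L]rho = L rho L^dag - (1/2){L^dag L, rho}, cf. Eq. (129)."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def gkls_rhs(H, jumps, rates, rho):
    """Right-hand side of the GKLS master equation (hbar = 1)."""
    drho = -1j * (H @ rho - rho @ H)
    for g, L in zip(rates, jumps):
        drho = drho + g * dissipator(L, rho)
    return drho

# Toy qubit example with illustrative operators and rates
H = np.diag([0.0, 1.0]).astype(complex)
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # lowering operator
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
drho = gkls_rhs(H, [sm, sm.conj().T], [0.5, 0.1], rho)

# The GKLS structure preserves trace and Hermiticity of the density matrix
assert abs(np.trace(drho)) < 1e-12
assert np.allclose(drho, drho.conj().T)
```

Trace preservation follows because the commutator is traceless and the two terms of each dissipator cancel under the trace; positivity of the rates γk(t) is what additionally guarantees complete positivity.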


4.1 Nakajima-Zwanzig superoperators


To derive Markovian master equations, we use the Nakajima-Zwanzig projection operator ap-
proach [59, 60] following Ref. [61]. To this end, we introduce the superoperators
P ρ̂tot = TrB{ρ̂tot} ⊗α τ̂α = ρ̂S ⊗α τ̂α,   Q = 1 − P. (130)

Note that these are projectors, as P² = P. Further, note that we are interested in the time-
evolution of P ρ̂tot(t), which provides us with an effective description of the form discussed in
Sec. 3.2. We consider the general scenario discussed in Sec. 3.1, i.e., the Hamiltonian is given
by Eq. (108), but we move to an interaction picture (which we denote by a tilde instead of a
hat)
ρ̃tot (t ) = Û0† (t )ρ̂tot (t )Û0 (t ), (131)
determined by the unitary operator
Û0(t) = T e^{−i ∫_0^t dt′ [ĤS(t′) + Σ_α Ĥα]}, (132)
with T denoting time-ordering. In the interaction picture, the time-evolution of the total
density matrix is determined by
∂t ρ̃tot(t) = −i Σ_α [Ṽα(t), ρ̃tot(t)] = V(t) ρ̃tot(t), (133)

where we used Eq. (109) as well as


∂t Û0(t) = −i [ĤS(t) + Σ_α Ĥα] Û0(t),   ∂t Û0†(t) = i Û0†(t) [ĤS(t) + Σ_α Ĥα], (134)

and the coupling Hamiltonian in the interaction picture is given by Ṽα (t ) = Û0† (t )V̂α Û0 (t ), in
analogy to Eq. (131). Finally, we have expressed the commutator in Eq. (133) with the help of
the superoperator V . In the following, we will assume PV (t )P = 0. This is not a restriction
as it can always be ensured by adding some terms to ĤS and subtracting them from V̂α , as we
will see below. We can then write
∂t P ρ̃tot(t) = P V(t) ρ̃tot(t) = P V(t) Q ρ̃tot(t),
∂t Q ρ̃tot(t) = Q V(t) ρ̃tot(t) = Q V(t) P ρ̃tot(t) + Q V(t) Q ρ̃tot(t), (135)
where we used P + Q = 1. The formal solution to the second equation is given by
Qρ̃tot(t) = G(t, 0) Qρ̃tot(0) + ∫_0^t ds G(t, s) Q V(s) P ρ̃tot(s), (136)

where we introduced the propagator


G(t, s) = T e^{∫_s^t Q V(τ) dτ}. (137)
We now assume factorizing initial conditions
ρ̃tot(0) = ρ̂S(0) ⊗α τ̂α, (138)

such that P ρ̃tot (0) = ρ̃tot (0) and Qρ̃tot (0) = 0. Inserting Eq. (136) into Eq. (135), we find
∂t P ρ̃tot(t) = ∫_0^t ds P V(t) G(t, s) Q V(s) P ρ̃tot(s). (139)

We note that this expression is still exact (for the given initial conditions). Since it explicitly
depends on P ρ̃tot at previous times, it contains memory effects and does not constitute a
Markovian master equation.


4.2 Born-Markov approximations


We now make a weak coupling approximation. If the couplings between system and reservoirs
are proportional to a small parameter r ≪ 1, i.e., V̂α ∝ r, the propagator in Eq. (137) obeys G(t, s) = 1 + O(r),
resulting in
∂t P ρ̃tot(t) = P ∫_0^t ds V(t) V(s) P ρ̃tot(s) + O(r³), (140)
where we again used PV (t )P = 0. The last equation implies
∂t ρ̃S(t) = −∫_0^t ds Σ_α TrB{[Ṽα(t), [Ṽα(t − s), ρ̃S(t − s) ⊗α τ̂α]]}, (141)

where we substituted s → t −s and we made use of TrB {Ṽδ (t )Ṽγ (s )ρ̃S(s)⊗α τ̂α } = 0 for δ ̸= γ.
This is similar to the assumption PV (t )P = 0 and can always be ensured by an appropriate
redefinition of the terms in the Hamiltonian as shown below. We note that Eq. (141) is often
obtained by assuming that ρ̃tot (t ) = ρ̃S(t )⊗α τ̂α at all times, the so-called Born approximation.
Here we do not make such an assumption. In agreement with the discussion in the previous
section, we consider ρ̃S(t ) ⊗α τ̂α to be an effective description, which only keeps track of the
system and neglects changes in the environment state as well as correlations between system
and environment.
In addition to the weak coupling approximation, we now make a Markov approximation.
To this end, we assume that the integrand in Eq. (141) decays on a time-scale τB (the bath-
correlation time, more on this below). If this time-scale is short enough, which is the case for
large, memory-less environments, we can assume ρ̃S(t − s ) to approximately remain constant
and replace its time-argument in Eq. (141) by t . Furthermore, using the same argumentation,
we can extend the integral to infinity obtaining
∂t ρ̃S(t) = −∫_0^∞ ds Σ_α TrB{[Ṽα(t), [Ṽα(t − s), ρ̃S(t) ⊗α τ̂α]]}. (142)

This equation is Markovian, i.e., it is local in time and does not depend explicitly on the initial
conditions. However, it is not in GKLS form and does not in general preserve the positivity of
the density matrix, i.e., eigenvalues may become negative. The approximations that result in
Eq. (142) are usually called the Born-Markov approximations. For a more formal application
of these approximations, see Refs. [62,63]. Note that under the Born-Markov approximations,
the effect induced by different reservoirs is additive.
To make progress, we write the coupling Hamiltonian in the general form
V̂α = Σ_k Ŝα,k ⊗ B̂α,k = Σ_k Ŝα,k† ⊗ B̂α,k†, (143)

where we used the Hermiticity of V̂α in the second equality. We note that the operators Ŝα,k
and B̂α,k are not necessarily Hermitian. Inserting Eq. (143) into Eq. (142), we find after some
algebra
∂t ρ̃S(t) = Σ_α Σ_{k,k′} ∫_0^∞ ds { C^α_{k,k′}(s) [S̃α,k′(t − s) ρ̃S(t) S̃α,k†(t) − S̃α,k†(t) S̃α,k′(t − s) ρ̃S(t)]
  + C^α_{k,k′}(−s) [S̃α,k′(t) ρ̃S(t) S̃α,k†(t − s) − ρ̃S(t) S̃α,k†(t − s) S̃α,k′(t)] }, (144)
where we introduced the bath-correlation functions
C^α_{k,k′}(s) = Tr{B̃α,k†(s) B̂α,k′ τ̂α}, (145)


and we used [C^α_{k,k′}(s)]* = C^α_{k′,k}(−s). These bath-correlation functions are usually peaked
(s )]∗ = Ckα′ ,k (−s ). These bath-correlation functions are usually peaked
around s = 0 and decay over the time-scale τB (indeed, this is how τB is defined). If this
time-scale is short, the integrand in Eq. (144) decays quickly and the Markov assumption
performed above is justified. Note that it is important that this approximation is made in the
interaction picture, where ρ̃S varies slowly (in the Schrödinger picture, ρ̂S tends to oscillate
with frequencies given by the differences of the eigenvalues of ĤS).
While Eq. (144) is in general still not in GKLS form, it happens to reduce to GKLS form
when C^α_{k,k′}(s) ∝ δ_{k,k′} and

[Ŝα,k, ĤS] = ωα,k Ŝα,k  ⇒  S̃α,k(t) = e^{−iωα,k t} Ŝα,k, (146)
which implies that Ŝα,k are ladder operators for the Hamiltonian ĤS, removing the energy
ωα,k from the system. Before we consider such an example in detail, we briefly justify two
assumptions made in the derivation above. To this end, we note that if

Tr{B̂α,k τ̂α} = 0, (147)

then it is straightforward to show that

P V(t) P = 0,   Tr{Ṽδ(t) Ṽγ(s) ρ̃S(t) ⊗α τ̂α} = 0,   ∀ δ ̸= γ. (148)
In case Eq. (147) is not true, we may define

B̂′α,k = B̂α,k − Tr{B̂α,k τ̂α}, (149)

which do fulfill Tr{B̂′α,k τ̂α} = 0. Defining

V̂′α = Σ_k Ŝα,k ⊗ B̂′α,k,   ĤS′(t) = ĤS(t) + Σ_{α,k} Tr{B̂α,k τ̂α} Ŝα,k, (150)

we have Ĥtot(t) = ĤS′(t) + Σ_α (Ĥα + V̂′α), i.e., the same total Hamiltonian but now in a form
such that Eq. (147) holds, ensuring our assumptions given in Eq. (148).

4.3 Example: equilibration of a quantum dot


We now consider an example provided by a spinless, single-level quantum dot tunnel-coupled
to a single fermionic reservoir, see Fig. 9. In this case, Eq. (144) happens to already be in GKLS
form. The Hamiltonian of system and environment is then given by
Ĥtot = ĤS + ĤB + V̂, (151)

Figure 9: Spinless, single-level quantum dot tunnel-coupled to a fermionic reservoir.
The reservoir is characterized by a temperature T and a chemical potential µ. The
coupling between the reservoir and the quantum dot is given by κ. In the steady
state, the quantum dot is in equilibrium with the reservoir.


with

ĤS = ϵd d̂†d̂,   ĤB = Σ_q ϵq ĉq† ĉq,   V̂ = Σ_q (gq d̂ ĉq† − gq* d̂† ĉq). (152)

Here the reservoir is modeled as a collection of non-interacting fermions and the coupling
Hamiltonian describes tunneling of single electrons between the system and the reservoir (the
minus sign arises from the fermionic anti-commutation relations).
We note that for fermions, there is no tensor product structure because operators on the
environment and on the system may anti-commute. Strictly speaking, the derivation in the
last section is thus not valid. However, using a Jordan-Wigner transform, one can map the
fermionic system onto a spin system where such a tensor-product structure is provided [54].
After tracing out the reservoir, the spin operators can then be replaced by fermionic operators
again. For the tunneling Hamiltonian that we often use to describe the system-environment
coupling for fermions, this procedure is equivalent to replacing the tensor product by the usual
product between fermionic operators in the derivation.

4.3.1 The master equation

Comparing the coupling Hamiltonian to Eq. (143), we may write V̂ = Ŝ0 B̂0 + Ŝ1 B̂1 with
Ŝ0 = d̂,   Ŝ1 = d̂†,   B̂0 = Σ_q gq ĉq†,   B̂1 = −Σ_q gq* ĉq. (153)

We further find

[d̂, ĤS] = ϵd d̂ ⇒ S̃0 (t ) = e −iϵd t d̂, S̃1 (t ) = e iϵd t d̂ † = S̃0† (t ), (154)

and similarly
B̃0(t) = Σ_q gq e^{iϵq t} ĉq†,   B̃1(t) = −Σ_q gq* e^{−iϵq t} ĉq. (155)
These expressions result in the bath-correlation functions


C0,0(s) = Σ_q |gq|² e^{−iϵq s} Tr{ĉq ĉq† τ̂B} = Σ_q |gq|² e^{−iϵq s} [1 − nF(ϵq)]
  = ∫_{−∞}^{∞} dω e^{−iωs} ρ(ω) [1 − nF(ω)],
C1,1(s) = Σ_q |gq|² e^{iϵq s} Tr{ĉq† ĉq τ̂B} = Σ_q |gq|² e^{iϵq s} nF(ϵq)
  = ∫_{−∞}^{∞} dω e^{iωs} ρ(ω) nF(ω), (156)

where we introduced the Fermi-Dirac occupation


nF(ω) = 1 / (e^{(ω−µ)/(kB T)} + 1), (157)

and made use of Tr{ĉq† ĉq ′ τ̂B } = 0 for q ̸= q ′ . Furthermore, it is easy to show that C0,1 (s ) =
C1,0 (s ) = 0 since Tr{(ĉq† )2 τ̂B } = Tr{(ĉq )2 τ̂B } = 0. Finally, in Eq. (156) we introduced the
spectral density
ρ(ω) = Σ_q |gq|² δ(ϵq − ω), (158)


which will be treated as a continuous function. This is justified whenever the summands in
Eq. (156) are sufficiently smooth such that the sums can be thought of as Riemann sums that
approximate integrals of smooth functions.
Using Eq. (154), we may re-write the master equation in Eq. (144) as

∂t ρ̃S(t) = ∫_{−∞}^{∞} ds [C0,0(s) e^{iϵd s} D[d̂] ρ̃S(t) + C1,1(s) e^{−iϵd s} D[d̂†] ρ̃S(t)]
  − (1/2) ∫_{−∞}^{∞} ds sign(s) [C0,0(s) e^{iϵd s} − C1,1(s) e^{−iϵd s}] [d̂†d̂, ρ̃S(t)], (159)

where
D[Â]ρ̂ = Â ρ̂ Â† − ½ {Â†Â, ρ̂}. (160)
With the help of the bath-correlation functions given in Eq. (156), we may evaluate the
integrals over s to find the transition rates

∫_{−∞}^{∞} ds C0,0(s) e^{iϵd s} = ∫_{−∞}^{∞} dω ∫_{−∞}^{∞} ds e^{−i(ω−ϵd)s} ρ(ω) [1 − nF(ω)] = κ[1 − nF(ϵd)],
∫_{−∞}^{∞} ds C1,1(s) e^{−iϵd s} = ∫_{−∞}^{∞} dω ∫_{−∞}^{∞} ds e^{i(ω−ϵd)s} ρ(ω) nF(ω) = κ nF(ϵd), (161)

where we used

∫_{−∞}^{∞} ds e^{−i(ω−ϵd)s} = 2π δ(ω − ϵd), (162)

and introduced κ ≡ 2πρ(ϵd).


We furthermore find the so-called Lamb shift

(1/2) ∫_{−∞}^{∞} ds sign(s) [C0,0(s) e^{iϵd s} − C1,1(s) e^{−iϵd s}]
  = i Im ∫_0^∞ ds [C0,0(s) e^{iϵd s} − C1,1(s) e^{−iϵd s}] = i P ∫_{−∞}^{∞} dω ρ(ω)/(ϵd − ω), (163)

where we made use of [C_{j,j}(s)]* = C_{j,j}(−s) as well as the identity


lim_{t→∞} ∫_0^t ds e^{iωs} = π δ(ω) + i P(1/ω), (164)

with P denoting the Cauchy principal value [i.e., P(1/ω) is equal to 1/ω except at ω = 0,
where the principal value vanishes].
Finally, inserting Eqs. (161) and (163) back into Eq. (159), we find the Markovian master
equation in the Schrödinger picture

∂t ρ̂S(t) = −i[ĤS, ρ̂S(t)] + e^{−iĤS t} [∂t ρ̃S(t)] e^{iĤS t}
  = −i[ϵ̄d d̂†d̂, ρ̂S(t)] + κ[1 − nF(ϵd)] D[d̂] ρ̂S(t) + κ nF(ϵd) D[d̂†] ρ̂S(t), (165)

where the renormalized dot energy reads



ϵ̄d = ϵd + P ∫_{−∞}^{∞} dω ρ(ω)/(ϵd − ω). (166)


The reservoir thus has two effects: Through the dissipative part of the master equations, it
describes electrons entering (D[d̂ † ]) and leaving (D[d̂]) the quantum dot. Note that to enter
the dot, electrons in the reservoir have to be available, corresponding to the factor nF while to
leave the dot, empty states have to be available corresponding to the factor 1−nF . In addition,
the energy level of the quantum dot is renormalized.
Note that when taking the ratio between the rates of entering and leaving the quantum
dot, we obtain the Boltzmann factor

e^{β(ϵd − µ)} = κ[1 − nF(ϵd)] / (κ nF(ϵd)). (167)
This condition is known as local detailed balance and it generally holds for transition rates that
are induced by reservoirs in thermal equilibrium. It ensures that in equilibrium, the system
state tends to a Gibbs state, see also Eq. (173) below.
As discussed above, the Markovian approximation is justified if the bath-correlation func-
tions decay on a time scale which is much faster than the time over which ρ̃S varies. In the
limiting case where both ρ(ω) as well as nF (ω) are independent of ω, the bath-correlation
functions become proportional to a Dirac delta function and the environment becomes truly
memoryless (i.e., τB → 0). In practice, it is sufficient for τB to be much shorter than any rel-
evant time-scale of the system. In energy, this translates to the condition that the functions
ρ(ω) as well as nF (ω) are flat around the relevant energies of the system. For the present
system ρ̃S changes on the time-scale 1/κ. The Markov approximation is then valid as long as
κτB ≪ 1. In energy space, this requires ρ(ω) as well as nF (ω) to be approximately constant
in the interval ω ∈ [ϵd −κ, ϵd +κ]. The spectral density depends on the details of the reservoir
and may or may not fulfill this condition depending on the specific scenario. For the Fermi-
Dirac occupation to be sufficiently flat for the Markov approximation, the following condition
has to hold
κ ≪ max{kB T, |ϵd − µ|}. (168)
At low temperatures, the Fermi-Dirac occupation becomes a step function. Therefore, the
Markovian approximation is not justified at low temperatures if the dot level ϵd is close to
the chemical potential. For a more detailed discussion on the validity of the Born-Markov
approximations, see appendix B.1 of Ref. [64].

4.3.2 Solving the master equation


To solve the master equation in Eq. (165), we write the density matrix of the system as

ρ̂S(t ) = p0 (t )|0〉〈0| + p1 (t )|1〉〈1|, p0 (t ) + p1 (t ) = 1, (169)

where |0〉 denotes the empty state and |1〉 the full state. Here we used the fact that we cannot
have a superposition of states with a different number of electrons in the system due to a
particle superselection rule [34]. Using these basis states, the fermionic operators can be cast
into

d̂ = |0〉〈1| , d̂ † = |1〉〈0| , d̂ † d̂ = |1〉〈1| , p1 (t ) = Tr{d̂ † d̂ ρ̂S(t )}. (170)

The master equation in Eq. (165) can then be reduced to

∂ t p1 (t ) = −κ[1 − nF (ϵd )]p1 (t ) + κnF (ϵd )p0 (t ) = −κ[p1 (t ) − nF (ϵd )]. (171)

This equation shows that a full dot is emptied with rate κ[1 − nF (ϵd )] whereas an empty dot
is filled with rate κnF (ϵd ). The solution to this differential equation reads

p1 (t ) = p1 (0)e −κt + nF (ϵd )(1 − e −κt ). (172)


The occupation probability thus exponentially goes toward the equilibrium value nF (ϵd ). The
time-scale with which this happens is given by 1/κ. In equilibrium, the system is described by
the thermal state with temperature and chemical potential equal to those of the reservoir, as
demanded by the zeroth law of thermodynamics, and we find
lim_{t→∞} ρ̂S(t) = e^{−β(ϵd − µ) d̂†d̂} / Tr{e^{−β(ϵd − µ) d̂†d̂}}. (173)
This result is closely related to the local detailed balance condition in Eq. (167).
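The rate equation (171) and its solution (172) are easy to verify numerically; the parameter values below are arbitrary illustrative choices (κ sets the unit of rate, nF is dimensionless):

```python
import numpy as np

# Illustrative parameters: rate kappa, equilibrium occupation nF, initial occupation p1_0
kappa, nF, p1_0 = 1.0, 0.3, 0.9
dt, steps = 1e-3, 5000  # integrate up to t = 5/kappa

# Euler integration of Eq. (171): dp1/dt = -kappa * (p1 - nF)
p1 = p1_0
for _ in range(steps):
    p1 += dt * (-kappa * (p1 - nF))

# Compare with the analytic solution, Eq. (172)
t = steps * dt
p1_analytic = p1_0 * np.exp(-kappa * t) + nF * (1 - np.exp(-kappa * t))
assert abs(p1 - p1_analytic) < 1e-3

# After a few multiples of 1/kappa, the dot has relaxed to the equilibrium value nF
assert abs(p1 - nF) < 0.01
```

This makes the equilibration time-scale 1/κ explicit: the deviation from nF(ϵd) shrinks by a factor e with every interval 1/κ.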

4.3.3 Energy flows and the first law


In addition to the state of the quantum dot, we are interested in the energy flow between the
dot and the reservoir. To this end, we consider the change in the energy of the system

∂ t 〈ĤS〉 = Tr{ϵd d̂ † d̂∂ t ρ̂S} = (ϵd − µ)∂ t 〈d̂ † d̂〉 + µ∂ t 〈d̂ † d̂〉 = −JB (t ) − PB (t ). (174)

To identify the heat current and the power, we used that the change in the number of electrons
in the system is minus the change in the number of electrons in the reservoir, more on this in
Sec. 4.5. Using p1 = 〈d̂ † d̂〉, we find for the heat current and power that flow into the reservoir

JB (t ) = −(ϵd − µ)∂ t 〈d̂ † d̂〉 = (ϵd − µ)κe −κt [p1 (0) − nF (ϵd )],
(175)
PB (t ) = −µ∂ t 〈d̂ † d̂〉 = µκe −κt [p1 (0) − nF (ϵd )].

We thus find that if the dot starts out in a non-equilibrium state, there is an exponentially
decreasing energy flow which can be divided into power and heat. The power flows into
the reservoir whenever p1 (t ) > nF (ϵd ). The heat flow additionally depends on the sign of
ϵd − µ: electrons entering the reservoir above the chemical potential (ϵd − µ > 0) heat up the
reservoir, while electrons entering below the chemical potential (ϵd − µ < 0) cool it down.
This can be understood intuitively by noting that at zero temperature, all states below the
chemical potential are occupied while states above the chemical potential are empty. Electrons
entering the reservoir below the chemical potential thus bring the reservoir closer to the zero-
temperature distribution.
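With the explicit expressions in Eq. (175), the first law for the dot, ∂t⟨ĤS⟩ = −JB(t) − PB(t), can be checked numerically at all times; all parameter values below are illustrative:

```python
import numpy as np

# Illustrative parameters (energies in arbitrary units)
eps_d, mu, kappa, nF, p1_0 = 1.5, 1.0, 1.0, 0.3, 0.9
delta0 = p1_0 - nF

for t in np.linspace(0.0, 5.0, 11):
    decay = kappa * np.exp(-kappa * t) * delta0   # = -dp1/dt, from Eq. (172)
    JB = (eps_d - mu) * decay                     # heat flow into the reservoir, Eq. (175)
    PB = mu * decay                               # power into the reservoir, Eq. (175)
    dE_S = -eps_d * decay                         # d<H_S>/dt = eps_d * dp1/dt
    # First law, Eq. (174): the system loses exactly what flows into the bath
    assert np.isclose(dE_S, -(JB + PB))
```

The split of the energy current into heat and work is determined entirely by the chemical potential µ, in line with the discussion above.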

4.3.4 Entropy and the second law


For the second law, we need to consider the entropy of the quantum dot given by
SvN[ρ̂S(t)] = −p0(t) ln p0(t) − p1(t) ln p1(t),   ∂t SvN[ρ̂S(t)] = −ṗ1(t) ln[p1(t)/(1 − p1(t))], (176)
where the dot denotes the time-derivative and we used p0 + p1 = 1 to compute the derivative.
The entropy production rate given in Eq. (115) can then be expressed as

Σ̇(t) = kB ∂t SvN[ρ̂S(t)] + JB(t)/T = kB [JB(t)/(ϵd − µ)] [β(ϵd − µ) + ln(p1(t)/(1 − p1(t)))], (177)

where we used −ṗ1 = JB /(ϵd − µ) which follows from Eq. (175). Using the equality in
Eq. (167), we find

Σ̇(t) = kB [JB(t)/(ϵd − µ)] ln{p1(t)[1 − nF(ϵd)] / ([1 − p1(t)] nF(ϵd))}
  = kB κ (p1(t)[1 − nF(ϵd)] − [1 − p1(t)] nF(ϵd)) ln{p1(t)[1 − nF(ϵd)] / ([1 − p1(t)] nF(ϵd))} ≥ 0, (178)


where we used JB = (ϵd − µ)κ(p1 − nF), which follows from Eqs. (171) and (175). The positivity
of Σ̇ can be shown by writing it in the form (x − y)(ln x − ln y) ≥ 0, which is ensured to be
positive because the logarithm is a monotonically increasing function. Using the solution in
Eq. (172), we can explicitly write the entropy production rate as

Σ̇(t) = kB κ e^{−κt} δ0 ln[(e^{−κt} δ0 + nF)(1 − nF) / ((1 − nF − e^{−κt} δ0) nF)], (179)

where we introduced δ0 = p1 (0) − nF and we suppressed the argument of the Fermi-Dirac


occupation for ease of notation. We thus find that for this Markovian master equation, the
entropy production rate is indeed always positive, as anticipated above, and exponentially
decreases in time such that in equilibrium, no entropy is produced as expected.
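A numerical sweep (illustrative parameters, with kB = 1) confirms that the entropy production rate of Eq. (179) is positive for initial occupations both above and below nF:

```python
import numpy as np

kappa, nF = 1.0, 0.3  # illustrative values; kB set to 1

for p1_0 in [0.05, 0.5, 0.95]:            # initial occupations below and above nF
    delta0 = p1_0 - nF
    for t in np.linspace(0.0, 8.0, 50):
        d = np.exp(-kappa * t) * delta0
        # Entropy production rate of Eq. (179), in units of kB
        sigma_dot = kappa * d * np.log((d + nF) * (1 - nF) / ((1 - nF - d) * nF))
        assert sigma_dot >= -1e-12        # positive at all times, vanishing as d -> 0
```

Positivity holds because the prefactor and the logarithm always carry the same sign, which is the (x − y)(ln x − ln y) ≥ 0 structure noted above.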

4.4 Obtaining GKLS form


We now return to Eq. (144). As mentioned above, this equation is not in GKLS form. In this
section, we consider additional approximations that bring it into GKLS form. To this end, we
first write X
j
S̃α,k (t ) = e −iω j t Ŝα,k , (180)
j

i.e., we write the system operators in Eq. (144) in a Fourier series. While this can always be
done, it is particularly simple when the system Hamiltonian is time-independent. In this case,
we may introduce its eigenstates as ĤS|Ea⟩ = Ea|Ea⟩. We may then find the operators Ŝ^j_{α,k} by
multiplying Ŝα,k from the left and the right by resolved identities,

Ŝα,k = Σ_{a,b} |Ea⟩⟨Ea| Ŝα,k |Eb⟩⟨Eb|  ⇒  [Ŝ^j_{α,k}, ĤS] = ωj Ŝ^j_{α,k}, (181)

where Ŝ^j_{α,k} collects all terms |Ea⟩⟨Ea| Ŝα,k |Eb⟩⟨Eb| with the same gap ωj = Eb − Ea.
With the help of Eq. (180), we may cast Eq. (144) into

∂t ρ̃S(t) = Σ_{α,k,k′} Σ_{j,j′} e^{i(ωj − ωj′)t} Γ^α_{k,k′}(ωj′) [Ŝ^{j′}_{α,k′} ρ̃S(t) (Ŝ^j_{α,k})† − (Ŝ^j_{α,k})† Ŝ^{j′}_{α,k′} ρ̃S(t)] + H.c., (182)

with

Γ^α_{k,k′}(ω) ≡ ∫_0^∞ ds e^{iωs} C^α_{k,k′}(s) = ½ γ^α_{k,k′}(ω) + i ∆^α_{k,k′}(ω), (183)

where γ^α_{k,k′}(ω) and ∆^α_{k,k′}(ω) are both real.
In the remainder of this section, we will consider a time-independent Hamiltonian ĤS, as
well as bath-correlation functions that obey C^α_{k,k′} ∝ δ_{k,k′} for simplicity. This will simplify the
∝ δk,k ′ for simplicity. This will simplify the
notation as well as the derivation of the laws of thermodynamics for the master equations we
consider.

4.4.1 The secular approximation


The secular approximation is the most common approach for obtaining a master equation in
GKLS form and can be found in many text-books (see for instance Ref. [52]). Let τS denote the
time scale over which ρ̃S changes (remember that the Born-Markov approximations require
τS ≫ τB ). The secular approximation is justified if

|ω j − ω j ′ |τS ≫ 1 ∀ j ̸= j ′ . (184)


In this case, we may drop all terms in Eq. (182) with j ̸= j′ because they average out over a
time-scale shorter than τS due to the oscillating term exp[i(ωj − ωj′)t].
Going back to the Schrödinger picture, we then find the GKLS master equation
∂t ρ̂S(t) = −i[ĤS + ĤLS, ρ̂S(t)] + Σ_{α,k,j} γ^α_k(ωj) D[Ŝ^j_{α,k}] ρ̂S(t), (185)

where γαk (ω j ) = γαk,k (ω j ) and the Lamb-shift Hamiltonian is given by


ĤLS = Σ_{α,k,j} ∆^α_k(ωj) (Ŝ^j_{α,k})† Ŝ^j_{α,k}, (186)

with ∆αk (ω j ) = ∆αk,k (ω j ). We note that from Eq. (181), it follows that [ĤS, ĤLS] = 0.
Due to Eq. (184), the secular approximation works well for systems which have no small
gaps in ĤS. As we will see, thermal machines may consist of multiple parts that are weakly
coupled and have small energy gaps induced by the weak coupling. In such systems, the
secular approximation breaks down. Note that in order to obtain the jump operators Ŝ^j_{α,k}, the
Hamiltonian ĤS needs to be diagonalized first. This implies that already obtaining the master
equation may be a formidable task.
We note that in the secular approximation, for a non-degenerate system Hamiltonian, the
populations decouple from the coherences. Indeed, the dissipative part of Eq. (185) describes
classical jumps between the eigenstates of ĤS. Concretely, this means that in the energy-
eigenbasis, the off-diagonal terms of the density matrix tend to zero and the dynamics can
be described by a classical rate equation involving only the populations. While this may be a
good approximation, there are many situations of interest where coherences between energy
eigenstates are important in the presence of thermal reservoirs and the secular approximation
is no longer justified [65–70].
A particularly appealing feature of the secular approximation is the fact that the laws of
thermodynamics are ensured to hold [10]:

0th law: If all reservoirs are at the same inverse temperature β and chemical potential µ,
then the steady state of Eq. (185) reduces to the Gibbs state

ρ̂S(t) −→(t→∞) e^{−β(ĤS − µN̂S)} / Tr{e^{−β(ĤS − µN̂S)}}. (187)


In equilibrium, the system is thus described by the same temperature and chemical potential
as the environment.
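As a minimal numerical sketch of this statement, consider a qubit whose excitation and decay rates obey local detailed balance (the bosonic rate form below is an illustrative assumption, not from the notes); its steady-state populations reproduce the Gibbs state of Eq. (187) with µ = 0:

```python
import numpy as np

# Qubit with gap omega0 coupled to one reservoir at inverse temperature beta (mu = 0).
# The thermal rate form below (Bose occupation nB) is an illustrative assumption.
omega0, beta, kappa = 1.0, 2.0, 0.1
nB = 1.0 / (np.exp(beta * omega0) - 1.0)
gamma_down, gamma_up = kappa * (nB + 1.0), kappa * nB

# Local detailed balance: gamma_down / gamma_up = e^{beta omega0}
assert np.isclose(gamma_down / gamma_up, np.exp(beta * omega0))

# Populations obey dp1/dt = -gamma_down p1 + gamma_up (1 - p1); steady state:
p1_ss = gamma_up / (gamma_up + gamma_down)

# Gibbs state of Eq. (187) for the qubit: p1 = e^{-beta omega0} / (1 + e^{-beta omega0})
p1_gibbs = np.exp(-beta * omega0) / (1.0 + np.exp(-beta * omega0))
assert np.isclose(p1_ss, p1_gibbs)
```

The agreement follows solely from the ratio of rates, which is why local detailed balance is the key ingredient behind the zeroth law in this setting.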

1st law: Writing the master equation in Eq. (185) as


X
∂ t ρ̂S(t ) = −i[ĤS + ĤLS, ρ̂S(t )] + Lα ρ̂S(t ), (188)
α

we find the first law as

∂t⟨ĤS⟩ = −Σ_α [Jα(t) + Pα(t)], (189)
with
  
Jα(t) = −Tr{(ĤS − µα N̂S) Lα ρ̂S(t)},   Pα(t) = −µα Tr{N̂S Lα ρ̂S(t)}. (190)

These definitions are to be compared with the definitions for the power and heat current in
the general scenario, c.f. Eq. (111). There, we defined heat and work using the changes in the


energy and particle numbers in the reservoirs. When using a master equation for the reduced
system state, we no longer have access to the properties of the reservoirs. However, we may
still infer heat and work because the term Lα in the master equation describes the exchange
of energy and particles with reservoir α. An increase in the particle number due to Lα implies
a decrease of the particle number in the reservoir α by the same amount, and similarly for
energy. This is discussed in more detail below, see Sec. 4.5.

2nd law: The second law of thermodynamics also follows from Eq. (185) and it may be
shown that [71, 72]
Σ̇ = kB ∂t SvN[ρ̂S(t)] + Σ_α Jα(t)/Tα ≥ 0. (191)
While we focused on a time-independent system Hamiltonian here, the secular approxi-
mation may analogously be applied for a time-dependent Hamiltonian and the laws of ther-
modynamics continue to hold in this case, see Ref. [10] and references therein.

4.4.2 The singular-coupling limit


The singular-coupling limit [52, 54] is another popular approach to obtain a master equation
in GKLS form. It is justified when all Bohr frequencies ω j are close to each other, i.e.,

|ω j − ω j ′ |τB ≪ 1 ∀ j , j ′. (192)

In this case, we may write


S̃α,k(t − s) = Σ_j e^{−iωj(t−s)} Ŝ^j_{α,k} ≃ e^{iωα,k s} Σ_j e^{−iωj t} Ŝ^j_{α,k} = e^{iωα,k s} S̃α,k(t), (193)

where ωα,k has to be chosen such that |ωα,k − ω j |τB ≪ 1 for all j . Because of Eq. (192),
one may for instance choose ωα,k to equal the average of all ω j . Equation (193) is justified
for all s ≲ τB , i.e., exactly the values of s which are relevant in the integral of Eq. (144). As
a consequence of Eq. (193), we may replace Γkα (ω j ′ ) by Γkα (ωα,k ) in Eq. (182) which, in the
Schrödinger picture, results in the GKLS master equation
∂t ρ̂S(t) = −i[ĤS + ĤLS, ρ̂S(t)] + Σ_{α,k} γ^α_k(ωα,k) D[Ŝα,k] ρ̂S(t),    (194)

where the Lamb-shift Hamiltonian reads

ĤLS = Σ_{α,k} ∆^α_k(ωα,k) Ŝ†α,k Ŝα,k,    (195)

which does not necessarily commute with ĤS.


Note that the jump operators Ŝα,k entering Eq. (194) are the operators that enter the cou-
pling Hamiltonian V̂α . This implies that, in contrast to the secular approximation, the system
Hamiltonian does not need to be diagonalized in order to write down the master equation.
Since we are often interested in systems that are not explicitly solvable, this is very helpful.
We further note that the singular-coupling limit is always justified for a perfectly Markovian
environment, i.e., an environment where τB → 0. More generally, the singular-coupling limit
is justified when γαk (ω j ) ≃ γαk (ωα,k ) for all j . This is often the case in quantum-optical systems,
which is why the singular-coupling limit is widely applied in this community.
Showing that the laws of thermodynamics hold in the singular-coupling limit is a bit more
difficult than in the secular approximation. We first need to introduce a thermodynamic Hamil-
tonian ĤTD , which is obtained by rescaling the gaps of ĤS as ω j → ωα,k . The thermodynamic
Hamiltonian is then used to compute the internal energy of the system. As argued in Ref. [64],
the mistake we make by replacing ĤS with ĤTD in the thermodynamic bookkeeping is smaller
than the resolution of heat that the master equation in Eq. (194) provides. Within the accuracy
of our model, the replacement is thus completely justified and in many cases it can be
compared to neglecting the system-bath coupling in the thermodynamic bookkeeping. For the
thermodynamic Hamiltonian, we find

[ĤTD, ĤS] = [ĤTD, ĤLS] = 0,    [Ŝα,k, ĤTD] = ωα,k Ŝα,k.    (196)

The last equality implies that a jump Ŝα,k ρ̂S Ŝ†α,k reduces the internal energy by ωα,k.
With the help of the thermodynamic Hamiltonian, the laws of thermodynamics may be
shown to hold [64]:

0th law: If all reservoirs are at the same inverse temperature β and chemical potential µ,
then the steady state of Eq. (194) reduces to the Gibbs state

ρ̂S(t) −→ e^{−β(ĤTD − µN̂S)} / Tr{e^{−β(ĤTD − µN̂S)}}    for t → ∞.    (197)


In equilibrium, the system is thus described by the same temperature and chemical potential
as the environment.

1st law: Writing the master equation in Eq. (194) as

∂t ρ̂S(t) = −i[ĤS + ĤLS, ρ̂S(t)] + Σα Lα ρ̂S(t),    (198)

we find the first law as

∂t 〈ĤTD〉 = −Σα [Jα(t) + Pα(t)],    (199)

with

Jα(t) = −Tr{(ĤTD − µα N̂S) Lα ρ̂S(t)},    Pα(t) = −µα Tr{N̂S Lα ρ̂S(t)}.    (200)

2nd law: The second law of thermodynamics also follows from Eq. (194) and it may be
shown that

Σ̇ = kB ∂t SvN[ρ̂S(t)] + Σα Jα(t)/Tα ≥ 0.    (201)
As for the secular approximation, the results from this section may be extended to a time-
dependent Hamiltonian and the laws of thermodynamics continue to hold in this case [64].
We note that, in particular in thermodynamic contexts, there are often both positive and
negative Bohr frequencies, with each S̃α,k (t ) containing only frequencies of one sign. In this
case, we may perform the singular-coupling limit for the positive and negative Bohr frequencies
separately, and the master equation in Eq. (194) remains valid if Eq. (192) is only respected
for frequencies ω j and ω j ′ of the same sign. Below, we will discuss an example where this is
the case, see Sec. 4.6.


4.4.3 The unified GKLS master equation


Finally, we briefly mention an approach to obtain GKLS form for systems where neither the
secular approximation nor the singular-coupling limit is justified [64,73]. The problem is that
for some values of j and j ′ , we may find |ω j −ω j ′ |τS ≲ 1, rendering the secular approximation
inapplicable, while for other values of j and j ′ , we may have |ω j − ω j ′ |τB ≳ 1, such that the
singular-coupling limit may not be applied. The solution for this problem exploits the fact that
τB ≪ τS, otherwise the Born-Markov approximations are not justified in the first place. This
inequality implies that for all values of j and j ′ , we either have |ω j − ω j ′ |τS ≫ 1 or we have
|ω j − ω j ′ |τB ≪ 1. One may then perform the following approximations to reach GKLS form:

1. Drop all terms with j and j ′ such that |ω j − ω j ′ |τS ≫ 1, in analogy to the secular
approximation.

2. Perform a singular-coupling limit on the remaining cross terms with ω j ̸= ω j ′ .

The resulting master equation also obeys the laws of thermodynamics for an appropriately
chosen thermodynamic Hamiltonian. The thermodynamic consistency of this approach was
recently also shown using the method of full counting statistics [74].
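The two approximation steps amount to a simple decision rule for each pair of Bohr frequencies. The helper below is our own illustrative sketch (the function name and the sharp cutoffs standing in for the ≪ and ≫ conditions are assumptions):

```python
def classify_pair(w1, w2, tau_B, tau_S):
    """Decide how the unified GKLS construction treats the cross term
    between Bohr frequencies w1 and w2 (sharp cutoffs are illustrative)."""
    gap = abs(w1 - w2)
    if gap * tau_S >= 1.0:   # fast oscillation: drop, as in the secular step
        return "drop"
    if gap * tau_B <= 1.0:   # unresolvable splitting: merge, singular step
        return "merge"
    return "neither"         # cannot occur when tau_B << tau_S

# tau_B << tau_S guarantees every pair falls into one of the two classes
assert classify_pair(1.0, 1.001, tau_B=0.01, tau_S=100.0) == "merge"
assert classify_pair(1.0, 2.0, tau_B=0.01, tau_S=100.0) == "drop"
```

With τB ≪ τS, the "neither" branch is unreachable, mirroring the argument above.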

4.5 Heat and work in quantum master equations


Here we briefly connect the definitions of heat and work we use for master equations, as
introduced in Eqs. (190) and (200), to the definitions introduced for the general scenario, see
Eqs. (111). We first consider power. The conservation of the total number of particles results
in

∂t 〈N̂S〉 = −Σα ∂t 〈N̂α〉,    (202)

where the averages are taken with respect to the exact, total density matrix ρ̂tot (t ). In a master
equation written in the form of Eq. (198), we only have access to the left-hand side of the last
equation, which is given by
∂t 〈N̂S〉 = Σα Tr{N̂S Lα ρ̂S(t)}.    (203)

Comparing Eq. (203) to Eq. (202), we may infer

Pα (t ) = µα ∂ t 〈 N̂α 〉 = −µα Tr{ N̂SLα ρ̂S(t )}, (204)

connecting the definition in Eqs. (111) to Eqs. (190) and (200). Since the contributions of
the different reservoirs to the Liouvillean are additive, we may infer the change of particle
number induced by each reservoir and, by exploiting the conservation of particles, we may
infer ∂ t 〈 N̂α 〉 even though we only have access to the reduced system state.
A similar analysis can be done for energy. From total energy conservation (assuming as
above a time-independent Hamiltonian), we have
∂t 〈ĤS〉 = −Σα ∂t (〈Ĥα〉 + 〈V̂α〉) ≃ −Σα ∂t 〈Ĥα〉,    (205)

where we neglected the energy stored in the coupling between system and environment be-
cause we assume it to be small in order to derive a Markovian master equation. From a master
equation written in the form of Eq. (198), we may write the left-hand side of Eq. (205) as
∂t 〈ĤS〉 = Σα Tr{ĤS Lα ρ̂S(t)}.    (206)


which leads us to identify


∂ t 〈Ĥα 〉 ≃ −Tr{ĤSLα ρ̂S(t )} (207)
and therefore to define the heat current as

Jα′ = −Tr{(ĤS − µα N̂S) Lα ρ̂S(t)} ≃ ∂t 〈Ĥα〉 − µα ∂t 〈N̂α〉.    (208)

While this is indeed the appropriate heat current for master equations in the secular approxi-
mation, see Eq. (190), we introduced a different definition in Eq. (200) based on a thermody-
namic Hamiltonian. To understand why, it is necessary to appreciate that Markovian master
equations rely on assumptions on time-scales. In particular, the Markov approximation ne-
glects the time-evolution of the system during the bath-correlation time, see Eq. (142). This
neglects the finite life-time of the particles in the system. Through the time-energy uncertainty
relation, this neglects the energy broadening of the states in the system. As a consequence,
we no longer know the exact energy at which particles are exchanged with the environment.
In the singular-coupling limit, this is exacerbated because we also neglect small differences in
the Bohr frequencies of the system Hamiltonian, see Eq. (193).
The fact that we no longer know the exact energy at which particles enter the environment
implies that we lose resolution in the energy exchanged with the environment. As a result, a
naive application of Eq. (208) can result in violations of the laws of thermodynamics, see for
instance Ref. [75]. It is important to stress however that any violations should remain absent
or negligibly small as long as the approximations that result in the master equation are well
justified [76, 77].
It is however desirable to have a framework that mathematically ensures the laws of ther-
modynamics. This can be obtained by introducing the thermodynamic Hamiltonian to quan-
tify the internal energy of the system [64] which results in the definition for the heat currents
[c.f. Eq. (200)]

Jα ≡ −Tr{(ĤTD − µα N̂S) Lα ρ̂S(t)} ≃ ∂t 〈Ĥα〉 − µα ∂t 〈N̂α〉.    (209)

Importantly, the use of the thermodynamic Hamiltonian should only slightly affect the values
of the heat currents, i.e., ∂ t 〈ĤS〉 ≃ ∂ t 〈ĤTD 〉 and Jα′ ≃ Jα , such that both definitions of the heat
current are acceptable. Should the two definitions result in substantial differences, then the
approximations that went into the master equation are most likely no longer justified.
Since we will mainly consider master equations in the singular-coupling limit below, and
since we prefer to have a framework that ensures the laws of thermodynamics exactly and not
only approximately, we will use the heat current given in Eq. (209) and the power given in
Eq. (204).

4.6 Example: A double quantum dot


We now consider an example provided by a double quantum dot, coupled to two fermionic
reservoirs. We again consider spinless electrons, such that each dot can host at most one
electron. Furthermore, we consider two dots in series, such that each dot is coupled to one of
the reservoirs, see Fig. 10. The total Hamiltonian that describes this scenario is given by

Ĥtot = ĤS + ĤL + ĤR + V̂L + V̂R, (210)

with the system Hamiltonian

ĤS = ϵ d̂L† d̂L + ϵ d̂R† d̂R + g (d̂L† d̂R + d̂R† d̂L ) = (ϵ + g )d̂+† d̂+ + (ϵ − g )d̂−† d̂− , (211)


Figure 10: Double quantum dot coupled to two fermionic reservoirs. The reservoirs
are characterized by temperatures Tα and chemical potentials µα . The couplings
between the reservoirs and the double dot are given by κα , and g denotes the inter-
dot tunnel coupling. A temperature bias (TL ̸= TR) and/or a chemical potential bias
(µL ̸= µR) may result in particle and heat transport through the double dot in the
steady state.

where we chose the on-site energies ϵ of the two dots to be equal for simplicity and we intro-
duced the eigenmodes
d̂± = (d̂R ± d̂L)/√2,    d̂R = (d̂+ + d̂−)/√2,    d̂L = (d̂+ − d̂−)/√2.    (212)
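As a quick numerical check (with illustrative values for ϵ and g), the single-particle Hamiltonian of Eq. (211) indeed has eigenvalues ϵ ± g with the equal-weight eigenmodes of Eq. (212):

```python
import numpy as np

eps, g = 1.0, 0.25                  # placeholder on-site energy and coupling
h = np.array([[eps, g], [g, eps]])  # single-particle Hamiltonian, (L, R) basis
vals, vecs = np.linalg.eigh(h)      # eigenvalues returned in ascending order

assert np.allclose(vals, [eps - g, eps + g])
# eigenmodes are equal-weight superpositions of the two dots, Eq. (212)
assert np.allclose(np.abs(vecs), 1 / np.sqrt(2))
```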
The reservoirs are again modeled by a collection of non-interacting electrons that are tunnel-
coupled to the respective dots
Ĥα = Σq ϵα,q ĉ†α,q ĉα,q,    V̂α = Σq gα,q d̂α ĉ†α,q − Σq g*α,q d̂†α ĉα,q,    (213)
q q

with α = L, R. As in the example of a single quantum dot, we may write the coupling Hamil-
tonian as V̂α = Ŝα,0 B̂α,0 + Ŝα,1 B̂α,1 and we find

S̃R,0(t) = e^{iĤS t} d̂R e^{−iĤS t} = e^{−i(ϵ+g)t} d̂+/√2 + e^{−i(ϵ−g)t} d̂−/√2,
S̃L,0(t) = e^{iĤS t} d̂L e^{−iĤS t} = e^{−i(ϵ+g)t} d̂+/√2 − e^{−i(ϵ−g)t} d̂−/√2.    (214)
Comparing these expressions to Eq. (180), we may read off the frequencies ω j = ϵ ± g , as
well as the corresponding operators Ŝ^j_{α,0}. Similarly, we find S̃α,1 (t ) = S̃†α,0 (t ), which involve
the frequencies ω j = −(ϵ ± g ).
The bath-correlation functions are obtained in analogy to the single-dot case in Sec. 4.3
and read
C^α_{0,0}(s) = ∫_{−∞}^{∞} dω e^{−iωs} ρα(ω)[1 − nFα(ω)],    C^α_{1,1}(s) = ∫_{−∞}^{∞} dω e^{iωs} ρα(ω) nFα(ω),    (215)
with the Fermi-Dirac occupation
nFα(ϵ) = 1/(e^{(ϵ−µα)/(kB Tα)} + 1).    (216)

Furthermore, from Eq. (183) we find the relevant transition rates

γ^α_0(ω) = κα [1 − nFα(ω)],    γ^α_1(−ω) = κα nFα(ω),    (217)


where κα = 2πρα (ω), as well as the energy shifts


∆^α_0(ω) = P ∫_{−∞}^{∞} dω′ ρα(ω′)[1 − nFα(ω′)]/(ω − ω′),    ∆^α_1(−ω) = −P ∫_{−∞}^{∞} dω′ ρα(ω′) nFα(ω′)/(ω − ω′).    (218)
Note the different signs in the arguments of the functions with different subscripts. As we
will see below, these functions will be evaluated at frequencies of opposite signs. From the
bath-correlation functions, we conclude that the Born-Markov approximations are justified
when
ρα (ϵ ± g ± κ) ≃ ρα (ϵ ± g ), nFα (ϵ ± g ± κ) ≃ nFα (ϵ ± g ), (219)
where κ = max{κL , κR}.

4.6.1 The secular approximation


Having identified all the quantities appearing in Eq. (185), we find the GKLS master equation
in the secular approximation
∂t ρ̂S(t) = −i[Σ_{σ=±} ϵ̄σ d̂†σ d̂σ, ρ̂S(t)] + Σ_{α=L,R} Σ_{σ=±} (κα/2) { nFα(ϵσ) D[d̂†σ] + [1 − nFα(ϵσ)] D[d̂σ] } ρ̂S(t),    (220)

with the energies



ϵ± = ϵ ± g,    ϵ̄± = ϵ± + P ∫_{−∞}^{∞} dω [ρL(ω) + ρR(ω)]/(ϵ ± g − ω).    (221)

The master equation in the secular approximation describes classical hopping into and out
of the delocalized eigenmodes described by the operators d̂± . Any coherences between these
eigenmodes decay. We may thus interpret the secular master equation as a classical master
equation. However, note that the eigenmodes themselves describe electrons in a coherent
superposition between the two dots.
Evaluating Eq. (190), we find the heat flow and power
Jα = Σ_{σ=±} (ϵσ − µα)(κα/2) [〈d̂†σ d̂σ〉 − nFα(ϵσ)],    Pα = µα Σ_{σ=±} (κα/2) [〈d̂†σ d̂σ〉 − nFα(ϵσ)].    (222)

From Eq. (184), we conclude that the secular approximation is justified when
1/τS = κα ≪ g = |ω j − ω j ′ |/2.    (223)
This implies that for strong coupling between the dots, the secular approximation can safely be
applied. However, once g becomes comparable to either coupling κα , it is no longer justified.
In this case, one should apply the singular-coupling limit.

4.6.2 The singular-coupling limit


In the singular-coupling limit, we make the replacement

S̃α,0 (t − s ) ≃ e iϵs S̃α,0 (t ), S̃α,1 (t − s ) ≃ e −iϵs S̃α,1 (t ). (224)

This implies that the frequencies appearing in Eq. (194) are ωα,0 = ϵ and ωα,1 = −ϵ. We note
that there is no restriction on |ωα,0 − ωα,1 | because S̃α,0 only involves the frequencies ϵ ± g


while S̃α,1 only involves the frequencies −(ϵ ± g ). With the substitution in Eq. (224), we find
the GKLS master equation
∂t ρ̂S(t) = −i[ĤS + ĤLS, ρ̂S(t)] + Σ_{α=L,R} κα { nFα(ϵ) D[d̂†α] + [1 − nFα(ϵ)] D[d̂α] } ρ̂S(t),    (225)

with the Lamb-shift Hamiltonian



ĤLS = ∆L d̂†L d̂L + ∆R d̂†R d̂R,    ∆α = P ∫_{−∞}^{∞} dω ρα(ω)/(ϵ − ω).    (226)

The master equation in the singular-coupling limit is also known as a local master equation,
because the jump operators act locally on the left and right quantum dots. In contrast to the
secular master equation (also denoted global master equation), the populations and coher-
ences do not decouple and we may find coherence and even entanglement between the two
quantum dots [70]. We note that the local master equation may also be obtained heuristically
by first setting g = 0, deriving the master equation as in Sec. 4.3, and then reinstating g in
the Hamiltonian. For certain scenarios, this heuristic approach may result in master equations
that violate the laws of thermodynamics [75]. For this reason, it is recommended to perform
the singular-coupling approximation as outlined above. We then obtain a thermodynamically
consistent framework with the thermodynamic Hamiltonian

ĤTD = ϵ (d̂†L d̂L + d̂†R d̂R).    (227)

In the thermodynamic bookkeeping, we thus neglect the coupling between the dots, in analogy
to how we neglect the system-bath coupling energy.
Evaluating Eq. (200), we find the heat flow and power

Jα = κα (ϵ − µα) [〈d̂†α d̂α〉 − nFα(ϵ)],    Pα = κα µα [〈d̂†α d̂α〉 − nFα(ϵ)].    (228)

In this approximation, each electron that enters reservoir α carries the heat ϵ − µα and
performs the chemical work µα , resulting in the simple relation Jα /Pα = (ϵ − µα )/µα .
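As a consistency check of this master equation, one can build the Liouvillian numerically and verify that, in the steady state, the particles entering from one reservoir leave through the other. The sketch below (neglecting the Lamb shift; all parameter values are placeholders) represents the two fermionic modes by a Jordan-Wigner construction:

```python
import numpy as np

sm = np.array([[0, 1], [0, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

# Jordan-Wigner representation of the two fermionic modes
dL = np.kron(sm, I2)
dR = np.kron(sz, sm)

eps, g = 1.0, 0.2                                  # placeholder energies
HS = eps * (dL.conj().T @ dL + dR.conj().T @ dR) \
    + g * (dL.conj().T @ dR + dR.conj().T @ dL)    # Eq. (211)
NS = dL.conj().T @ dL + dR.conj().T @ dR

def D(L, rho):
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

# (kappa, Fermi occupation, chemical potential) of each reservoir
params = {"L": (0.1, 0.7, 0.5), "R": (0.15, 0.2, -0.5)}
dops = {"L": dL, "R": dR}

def L_alpha(a, rho):                               # local dissipator, Eq. (225)
    kappa, nF, _ = params[a]
    return kappa * nF * D(dops[a].conj().T, rho) \
        + kappa * (1 - nF) * D(dops[a], rho)

def rhs(rho):                                      # full generator of Eq. (225)
    out = -1j * (HS @ rho - rho @ HS)
    for a in params:
        out = out + L_alpha(a, rho)
    return out

# Liouvillian as a 16x16 matrix; its null vector is the steady state
basis = [np.outer(np.eye(4)[i], np.eye(4)[j]) for i in range(4) for j in range(4)]
Lmat = np.array([rhs(E).reshape(-1) for E in basis]).T
vals, vecs = np.linalg.eig(Lmat)
rho_ss = vecs[:, np.argmin(np.abs(vals))].reshape(4, 4)
rho_ss = rho_ss / np.trace(rho_ss)

# Particle conservation: what enters from L leaves through R
IL = np.trace(NS @ L_alpha("L", rho_ss)).real
IR = np.trace(NS @ L_alpha("R", rho_ss)).real
assert abs(IL + IR) < 1e-10
```

Since [N̂S, ĤS] = 0, the unitary part does not contribute to ∂t 〈N̂S〉, and the two dissipative particle currents must cancel in the steady state.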
The singular-coupling limit is justified when ρα (ϵ ± g ) ≃ ρα (ϵ) as well as
nFα (ϵ ± g ) ≃ nFα (ϵ). The second condition is obeyed when

g ≪ max{kB Tα , |ϵ − µα |}. (229)

Note that g ≪ κ is not required for the singular-coupling limit to be justified. The secular
approximation and the singular-coupling limit may thus be justified at the same time. Indeed,
since τS ≫ τB this is what we expect from Eqs. (184) and (192).

5 Quantum thermal machines


In this section, we consider quantum thermal machines, i.e., machines that use reservoirs
in local equilibrium to perform a useful task such as converting heat into work or producing
entanglement. While in local equilibrium, these reservoirs have different temperatures and/or
chemical potentials such that together, they describe an out-of-equilibrium scenario.

5.1 A quantum dot heat engine


The first machine we consider is a simplified version of the heat engine that was implemented
experimentally in Ref. [78]. In contrast to a quantum dot coupled to a single reservoir, where


the only thing that happens is thermalization, we will find heat flows in the steady state and
we will see how heat can be converted into work and how work can be used to refrigerate. The
system we consider is a spinless, single-level quantum dot tunnel-coupled to two heat baths

Ĥtot = ĤS + Ĥc + Ĥh + V̂c + V̂h , (230)

with
ĤS = ϵd d̂†d̂,    Ĥα = Σq ϵα,q ĉ†α,q ĉα,q,    V̂α = d̂ Σq gα,q ĉ†α,q − d̂† Σq g*α,q ĉα,q,    (231)

where α =c, h labels the reservoirs according to their temperatures Tc ≤ Th . Just as for the
quantum dot coupled to a single reservoir, Eq. (144) is already in GKLS form. Since the terms
in the master equation corresponding to different reservoirs are additive, we find

∂ t ρ̂S = −i[ĤS, ρ̂S] + Lc ρ̂S + Lh ρ̂S, (232)

with
Lα ρ̂ = κα [1 − nFα (ϵd )]D[d̂]ρ̂ + κα nFα (ϵd )D[d̂ † ]ρ̂, (233)
where nFα is the Fermi-Dirac occupation with temperature Tα and chemical potential µα , see
Eq. (216). Here we neglected the renormalization of ϵd which is given by a straightforward
generalization of Eq. (166).

5.1.1 Solving the master equation


The master equation can easily be solved by considering
∂t p1 = Tr{d̂†d̂ ∂t ρ̂S} = −Σ_{α=c,h} κα { [1 − nFα(ϵd)] p1 − nFα(ϵd) p0 } = −γ(p1 − n̄),    (234)

where

γ = κc + κh,    n̄ = [κc nFc(ϵd) + κh nFh(ϵd)] / (κc + κh).    (235)
Comparing to Eq. (171), we find that the quantum dot behaves just like a quantum dot coupled
to a single heat bath with coupling strength γ and mean occupation n̄. The solution thus reads

p1 (t ) = p1 (0)e −γt + n̄(1 − e −γt ). (236)
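A minimal numerical check of this solution (rates and occupations are placeholder values) integrates the rate equation with Euler steps and compares with the closed form of Eq. (236):

```python
import math

kc, kh = 0.5, 0.3                    # couplings to the two reservoirs
nc, nh = 0.1, 0.6                    # Fermi occupations at eps_d
gamma = kc + kh                      # total rate, Eq. (235)
nbar = (kc * nc + kh * nh) / gamma   # mean occupation, Eq. (235)

t, dt = 10.0, 1e-4
p1 = 0.9                             # initial occupation p1(0)
for _ in range(int(round(t / dt))):
    p1 += -gamma * (p1 - nbar) * dt  # dp1/dt = -gamma (p1 - nbar), Eq. (234)

p1_exact = 0.9 * math.exp(-gamma * t) + nbar * (1 - math.exp(-gamma * t))
assert abs(p1 - p1_exact) < 1e-3     # agrees with Eq. (236)
```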

Figure 11: Quantum dot heat engine. A quantum dot is coupled both to a hot reser-
voir with temperature Th and chemical potential µh as well as a cold reservoir with
temperature Tc and chemical potential µc . A temperature bias, Th > Tc , allows for
a particle current against a chemical potential bias (µh ̸= µc , for electrons, this cor-
responds to a voltage bias). Thereby, heat is converted into chemical work.


5.1.2 The first law


From the master equation, we find the first law

∂ t 〈ĤS〉 = ϵd Tr{d̂ † d̂ Lc ρ̂S} + ϵd Tr{d̂ † d̂ Lh ρ̂S} = −Jc − Pc − Jh − Ph , (237)

where the power and heat currents are defined in agreement with Eq. (190)

Pα = −µα Tr{d̂ † d̂ Lα ρ̂S}, Jα = −(ϵd − µα )Tr{d̂ † d̂ Lα ρ̂S}. (238)

Explicitly, we find
Pα = µα κα e^{−γt} [p1(0) − n̄] + µα κα [n̄ − nFα(ϵd)],    Jα = [(ϵd − µα)/µα] Pα.    (239)

Just as for a single reservoir, there is a transient term in the power which decreases exponen-
tially in time. In contrast to the single reservoir case, there is now also a time-independent
term which remains in steady state.
In the steady state, the observables of the system do not change. We can use this fact to
draw a number of conclusions without using the explicit solutions for the power and the heat
currents. In particular, since the left-hand side of Eq. (237) vanishes, we find

Tr{d̂ † d̂ Lc ρ̂S} = −Tr{d̂ † d̂ Lh ρ̂S}. (240)

From this, using Eqs. (238), follows

P = Pc + Ph = −(Jc + Jh ), (241)

which is nothing but the first law, as well as

η = P/(−Jh) = (µc − µh)/(ϵd − µh) = 1 − (ϵd − µc)/(ϵd − µh),    (242)

where we introduced the efficiency η which is given by the ratio between the power (the
output of the heat engine) and the heat current from the hot reservoir (the input of the heat
engine). Using the explicit solution for the power in Eq. (239), we find
P = [κc κh/(κc + κh)] (µc − µh) [nFh(ϵd) − nFc(ϵd)].    (243)

This quantity vanishes at zero voltage (µc = µh ), as well as at the stopping voltage where
nFh (ϵd ) = nFc (ϵd ), see also Fig. 13 b).
Let us now consider under which conditions the system acts as a heat engine, i.e., heat from
the hot reservoir is converted into power. From Eq. (243), we can identify different regimes
depending on the signs of P, Jc , and Jh . These regimes are illustrated in Figs. 12 and 13 (a).
For µc ≥ µh (µc ≤ µh ), we find that the quantum dot acts as a heat engine for large positive
(negative) values of ϵd . In both cases, we find from Eq. (243) that power is positive as long as

(ϵd − µc)/(ϵd − µh) ≥ Tc/Th    ⇒    η ≤ 1 − Tc/Th = ηC.    (244)

We thus find that the efficiency is bounded from above by the Carnot efficiency as long as the
power output is non-negative.
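These expressions are easily evaluated. The snippet below uses the parameter values quoted in Fig. 13 (units where kB = 1 and κc = 1) and confirms that, for positive power, the efficiency respects the bound of Eq. (244):

```python
import math

def nF(e, mu, T):
    """Fermi-Dirac occupation, Eq. (216), with kB = 1."""
    return 1.0 / (math.exp((e - mu) / T) + 1.0)

kc = kh = 1.0
Tc, Th = 0.3, 0.8
mu_c, mu_h = 1.0, 0.0
eps_d = 2.0                                   # parameters of Fig. 13

# Steady-state power, Eq. (243), and efficiency, Eq. (242)
P = kc * kh / (kc + kh) * (mu_c - mu_h) * (nF(eps_d, mu_h, Th) - nF(eps_d, mu_c, Tc))
eta = (mu_c - mu_h) / (eps_d - mu_h)
eta_C = 1 - Tc / Th                           # Carnot efficiency

assert P > 0                                  # heat-engine regime
assert 0 < eta <= eta_C                       # second-law bound, Eq. (244)
```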


Figure 12: Different regimes determined by the signs of P, Jc , and Jh . For µc ≥ µh , ϵd


increases from left to right (illustrated by the cartoons in the upper row). For µc ≤
µh , we find the exact same regimes but ϵd decreases from left to right (illustrated
by the cartoons in the lower row). In the regime to the far left, heat flows out of
the hot reservoir and into the cold reservoir and power is dissipated. If ϵd lies in
between the chemical potentials, both reservoirs are heated up by the dissipated
power. Refrigeration is obtained in the blue shaded regime, where heat is extracted
from the cold reservoir and dumped into the hot reservoir. Finally, in the red shaded
regime the quantum dot acts as a heat engine where a heat flow from hot to cold
drives a charge flow against the external voltage bias.

5.1.3 The second law


We first make some general statements about the second law in a two-terminal setup. These
are very similar to the statements made in Sec. 1.2. The entropy production rate is given by
Jc Jh
Σ̇ = kB ∂ t SvN [ρ̂S] + + ≥ 0. (245)
Tc Th
In the steady state, the first term vanishes and we immediately find that at least one of the
heat currents has to be positive. This implies that it is impossible to cool down all reservoirs
at the same time (in the steady state). Furthermore, for equal temperatures (Tc = Th ), we
find P = −(Jc + Jh ) ≤ 0 which implies that it is not possible to convert heat into work with
reservoirs at a single temperature. This is known as the Kelvin-Planck statement of the second
law. Finally, we can use the first law in Eq. (241) to eliminate Jc , resulting in
Σ̇ = (P/Tc) (ηC − η)/η    ⇒    0 ≤ η ≤ ηC ,    (246)
where the last inequality holds for P ≥ 0. The fact that the efficiency is upper bounded by the
Carnot efficiency is thus a direct consequence of the second law.
In our system, the entropy of the quantum dot is given in Eq. (176). Using
∂t p1 = −Jc/(ϵd − µc) − Jh/(ϵd − µh),    (247)

we can write the entropy production rate as

Σ̇ = kB Σ_{α=c,h} κα ( p1 [1 − nFα(ϵd)] − [1 − p1] nFα(ϵd) ) ln( p1 [1 − nFα(ϵd)] / ([1 − p1] nFα(ϵd)) ) ≥ 0,    (248)



Figure 13: Performance of the quantum dot heat engine. (a) Power and heat currents
as a function of ϵd . The regimes illustrated in Fig. 12 can clearly be identified. (b)
“Lasso" diagram. Along the curve, the voltage bias is increased in the direction of
the arrow. For µc = µh , both the power as well as the efficiency vanishes. Increasing
µc then results in a finite power and efficiency, until the power vanishes again at
the stopping voltage, where the efficiency reaches ηC . Such plots are called Lasso
diagrams because the curve usually goes back to the origin at the stopping voltage,
see also Fig. 14. Parameters: κc = κh , ϵd = 2κc , kB Tc = 0.3κc , kB Th = 0.8κc ,
µc = κc , µh = 0.

which is positive since each term in the sum is positive in complete analogy to Eq. (178). In
the steady state, we find
Σ̇ = kB [κc κh/(κc + κh)] [nFh(ϵd) − nFc(ϵd)] [βc(ϵd − µc) − βh(ϵd − µh)] ≥ 0,    (249)

which vanishes at the Carnot point, where nFh (ϵd ) = nFc (ϵd ), η = ηC , and both the power as
well as the heat currents vanish. We stress that while an equilibrium situation (i.e., Tc = Th and
µc = µh ) ensures nFh (ϵd ) = nFc (ϵd ), the Carnot point can also be reached out of equilibrium.
The interplay between power and efficiency is illustrated in Fig. 13. We find that at maximum
power, the efficiency reaches above 60% of the Carnot efficiency. Similar values were
found experimentally in Ref. [78]. As mentioned above, the Carnot efficiency is obtained at
the stopping voltage. This is a consequence of the fact that there is only a single energy at
which transport happens. At the stopping voltage, all transport is blocked implying that both
the charge as well as the heat currents vanish. This implies that there is no dissipation (Σ̇ = 0)
and the efficiency takes on the Carnot value (see also Ref. [79]). In reality, as well as in the
experiment of Ref. [78], this ideal filtering effect is spoiled by the broadening of the energy
level which is neglected in the Markovian master equation, see Fig. 14. Including this energy
broadening, one finds that at some energies, charges move from hot to cold while at other
energies, they move in the other direction. At the stopping voltage, this results in a vanishing
of power but still a net heat current, such that the efficiency vanishes. In this case, Fig. 13 (b)
takes on the shape of a lasso.

5.1.4 Refrigeration
As discussed above, the quantum dot can also act as a refrigerator in the regime where P < 0,
Jh > 0, and Jc < 0. In this case, electrical power is used to reverse the natural flow of heat,
resulting in a heat flow out of the cold reservoir and into the hot reservoir. The efficiency of


Figure 14: Effect of level broadening. At the Carnot point, where nFh (ϵd ) = nFc (ϵd ),
our Markovian treatment predicts P = Jh = 0 and η = ηC . This occurs because the
quantum dot is approximated as being a perfect energy filter and no transport occurs
at the Carnot point as illustrated in the left panel. In reality, the energy level of the
quantum dot is broadened. Above ϵd , nFh (ϵ) > nFc (ϵ) and we get a particle current
from hot to cold. Below ϵd , nFh (ϵ) < nFc (ϵ) resulting in a particle current from cold to
hot. These particle currents cancel, resulting in P = 0. However, since the particles
above and below ϵd have a different energy, their contribution to the heat current
does not cancel resulting in −Jh > 0 which implies η = 0. If the Markovian approxi-
mation is justified, Jh is small. However, it remains finite which always implies η = 0
at the stopping voltage.

this process is usually characterized by the coefficient of performance (COP)


ηCOP = (−Jc)/(−P),    (250)
where we left the minus signs to stress that this performance quantifier is relevant in the regime
where both P as well as Jc are negative under our sign convention. We can use the first law
in Eq. (241) to eliminate Jh and write the entropy production rate as

Σ̇ = (−Jc/Th) (η^C_COP − ηCOP)/(η^C_COP ηCOP)    ⇒    0 ≤ ηCOP ≤ η^C_COP ,    (251)
where the second inequality holds for Jc ≤ 0 and we introduced the Carnot value for the COP
η^C_COP = Tc/(Th − Tc).    (252)
We note that as Tc → Th , η^C_COP diverges. This reflects the fact that in principle, it is possible
to move heat in between two reservoirs with equal temperature without investing any work.
In our system, we find from Eqs. (238) and (240)
ηCOP = (ϵd − µc)/(µc − µh).    (253)
From this, we find that ηCOP vanishes when ϵd = µc and takes on the Carnot value at
ϵd = (µc Th − µh Tc)/(Th − Tc), which is exactly the point where the regime of the refrigerator meets the regime
of the heat engine, see Fig. 12. Interestingly, both the COP as well as the efficiency reach
their maximum value at this point, where no transport takes place. The Carnot point is often
called the point of reversibility. At this point nothing happens but it can be seen as the limit of
converting heat into work infinitely slowly and without wasting any energy (thus, reversibly).
Equivalently, taking the limit from the other side, it can be seen as the limit of cooling the cold
reservoir reversibly.
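The meeting point of the two regimes can be verified with a few lines (temperatures and chemical potentials are arbitrary placeholders): at ϵd = (µc Th − µh Tc)/(Th − Tc), the COP of Eq. (253) equals the Carnot value of Eq. (252).

```python
Tc, Th = 0.3, 0.8
mu_c, mu_h = 1.0, 0.0

eps_d = (mu_c * Th - mu_h * Tc) / (Th - Tc)  # boundary of the two regimes
cop = (eps_d - mu_c) / (mu_c - mu_h)         # COP, Eq. (253)
cop_C = Tc / (Th - Tc)                       # Carnot value, Eq. (252)
assert abs(cop - cop_C) < 1e-12
```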


5.2 Entanglement generator


In this section, we consider a thermal machine that uses a temperature gradient in order to
produce entanglement between two quantum dots. The original idea goes back to Ref. [80],
where qubits instead of quantum dots are considered. Here we focus on the scenario investi-
gated in Ref. [70], i.e., a double quantum dot coupled to two fermionic reservoirs, just like in
Sec. 4.6 (see Fig. 10).

5.2.1 Entanglement
Before we consider the thermal machine itself, we provide a brief introduction to entanglement
which is one of the most intriguing features of quantum mechanics. For more information, see
the book by Nielsen and Chuang [17]. To consider entanglement, we require a bi-partite
Hilbert space H = HA ⊗ HB , where HA denotes the Hilbert space of Alice and HB is Bob’s
Hilbert space. A quantum state on this bi-partite Hilbert space is then said to be a product state
if it reads ρ̂ = ρ̂A ⊗ ρ̂B . This corresponds to the scenario where Alice and Bob have access to
their respective states ρ̂A and ρ̂B , without any correlations between them. A classical mixture
of product states is called a separable state
ρ̂ = Σj pj ρ̂^j_A ⊗ ρ̂^j_B,    pj ≥ 0,    Σj pj = 1.    (254)

Such a state contains classical correlations. For instance, it can describe the rather trivial
example where Alice and Bob buy fruit together. With equal probabilities, they buy two apples
or two oranges. If Alice has an apple, we know Bob has an apple as well and similarly for
oranges. Obviously, no fruits are entangled in this example.
A state is entangled iff it is not separable, i.e., if it cannot be written in the form of Eq. (254).
In this case, the correlations between Alice and Bob are no longer classical in nature. Entangle-
ment may thus be seen as a form of correlation that goes beyond classical correlations. Here
we illustrate this with two simple examples. First, we consider the state

ρ̂cl = (1/2)(|00〉〈00| + |11〉〈11|) = (1/2) |0〉〈0| ⊗ |0〉〈0| + (1/2) |1〉〈1| ⊗ |1〉〈1| ,    (255)
which is evidently separable. It corresponds to the apple-orange scenario above, where |00〉
could denote an apple for both Alice and Bob. As an example for an entangled state, we
consider one of the Bell states
|Φ+〉 = (|00〉 + |11〉)/√2,    |Φ+〉〈Φ+| = (1/2)(|00〉〈00| + |11〉〈11| + |00〉〈11| + |11〉〈00|).    (256)
This state cannot be written in the form of Eq. (254) due to the off-diagonal elements. Indeed,
it is the maximally entangled state for two qubits.
The amount of entanglement can be quantified by the entanglement of formation [81].
Loosely speaking, it is determined by the number of Bell states that are required to prepare the
given state using only local operations and classical communication (LOCC). For two qubits,
the entanglement of formation is a number between zero and one, where zero is obtained for
separable states and one for Bell states.

Concurrence: Determining if a given state is entangled is in general a highly non-trivial


task. However, for two qubits the low dimensionality of the problem considerably facilitates
the task. A common measure for entanglement in this scenario is the concurrence [82, 83],


which is monotonically related to the entanglement of formation. Just as the latter, the con-
currence ranges from zero, obtained for separable states, to one, reached for Bell states. The
concurrence of a state ρ̂ can be computed with the help of the auxiliary state

ρ̃ = (σ̂ y ⊗ σ̂ y ) ρ̂ ∗ (σ̂ y ⊗ σ̂ y ),    σ̂ y = −i |0〉〈1| + i |1〉〈0| ,    (257)

where the star denotes complex conjugation in the computational basis (|0〉 , |1〉). Let λ j de-
note the eigenvalues of ρ̂ ρ̃ in decreasing order, i.e., λ1 ≥ λ2 ≥ λ3 ≥ λ4 . The concurrence
may then be written as
C[ρ̂] = max{0, √λ1 − √λ2 − √λ3 − √λ4} . (258)

This quantity lies between zero and one, where C [ρ̂] = 0 holds for separable states and
C [ρ̂] > 0 for entangled states, with unity reached only for Bell states.
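The recipe of Eqs. (257) and (258) is straightforward to implement numerically. The following sketch (in Python with NumPy; the function name and the chosen states are our own) computes the concurrence of the separable state of Eq. (255) and of the Bell state of Eq. (256):

```python
import numpy as np

def concurrence(rho):
    """Concurrence of a two-qubit state, Eqs. (257)-(258)."""
    sy = np.array([[0, -1j], [1j, 0]])        # sigma_y = -i|0><1| + i|1><0|
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy          # complex conjugation in the computational basis
    lam = np.sort(np.linalg.eigvals(rho @ rho_tilde).real)[::-1]
    lam = np.sqrt(np.clip(lam, 0.0, None))    # clip tiny negative values from round-off
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# classical mixture of Eq. (255): separable
rho_cl = np.zeros((4, 4)); rho_cl[0, 0] = rho_cl[3, 3] = 0.5
# Bell state |Phi+> of Eq. (256): maximally entangled
phi = np.zeros(4); phi[0] = phi[3] = 1/np.sqrt(2)
rho_bell = np.outer(phi, phi)

print(concurrence(rho_cl))    # ≈ 0 (separable)
print(concurrence(rho_bell))  # ≈ 1 (Bell state)
```

The clipping of small negative eigenvalues only guards against numerical round-off; for a valid state, the eigenvalues of ρ̂ρ̃ are non-negative.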

Fermionic entanglement: The discussion above assumed that the Hilbert space has the
tensor-product structure H = HA ⊗ HB . For fermions, this is not the case because fermionic
operators corresponding to Alice and Bob anti-commute. This implies that any definition for
entanglement needs to be reconsidered for fermions [84]. To make progress, we denote by
|00〉 the vacuum state, where no fermion is present. Using the creation operators on the dots
d̂α† , we may then define the states

|10〉 = d̂L† |00〉 , |01〉 = d̂R† |00〉 , |11〉 = d̂L† d̂R† |00〉 = −d̂R† d̂L† |00〉 . (259)

Note the minus sign in the last equation, which forces us to associate to the state |11〉 a spe-
cific order for the fermionic operators. It turns out that in our system, we may compute the
concurrence as if these states were two-qubit states. This is the case because we will only find
a single off-diagonal element corresponding to a coherent superposition of a single electron
between the two modes (i.e., between |01〉 and |10〉). In general, more care has to be taken
when evaluating the entanglement of fermionic systems (even for two modes) [84].

5.2.2 The master equation


We will use the master equation in the singular-coupling limit (local master equation) given
in Eq. (225), neglecting any Lamb-shift. For convenience, we reproduce this equation here

∂ t ρ̂S(t ) = −i[ĤS, ρ̂S(t )] + LL ρ̂S(t ) + LRρ̂S(t ), (260)

with the local dissipators

Lα = κα nFα (ϵ)D[d̂α† ] + κα [1 − nFα (ϵ)]D[d̂α ], (261)

and the Hamiltonian


ĤS = ϵ(d̂L† d̂L + d̂R† d̂R) + g(d̂L† d̂R + d̂R† d̂L). (262)
As discussed in Sec. 4.6, this restricts our analysis to g ≪ max{kB Tα , |ϵ−µα |}. For a discussion
of the entanglement generator outside of this regime, see Ref. [70].
Since this master equation is bi-linear and contains one annihilation and one creation
operator per term, the time-derivatives ∂ t 〈d̂α† d̂β 〉 form a closed set of equations. Furthermore,
the quantities 〈d̂α† d̂β 〉 completely determine the quantum state of the system at all times (given
that this is true for the initial state). The quantum state is said to be Gaussian. Expectation
values including more than two annihilation or creation operators can then always be reduced
using Wick’s theorem [85, 86]. In particular, for Gaussian states it holds that

〈d̂i† d̂j† d̂k d̂l〉 = 〈d̂i† d̂l〉〈d̂j† d̂k〉 − 〈d̂i† d̂k〉〈d̂j† d̂l〉. (263)


For more information on solving master equations for Gaussian processes, see Ref. [18].
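Wick's theorem can also be checked numerically on a small example. The sketch below (our own construction; it assumes a Jordan-Wigner-type matrix representation of two fermionic modes and a thermal state of a quadratic Hamiltonian, which is Gaussian) verifies Eq. (263):

```python
import numpy as np
from scipy.linalg import expm

# Two fermionic modes in a Jordan-Wigner-type construction (our own choice)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # |0><1|
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
d1 = np.kron(sm, I2)
d2 = np.kron(sz, sm)                      # string operator ensures {d1, d2} = 0

# Gaussian (thermal) state of a quadratic Hamiltonian with a coherence-generating coupling
e1, e2, g, beta = 1.0, 1.5, 0.4, 0.7
h = e1*d1.T@d1 + e2*d2.T@d2 + g*(d1.T@d2 + d2.T@d1)
rho = expm(-beta*h); rho /= np.trace(rho)
avg = lambda op: np.trace(op @ rho)       # expectation value in the Gaussian state

lhs = avg(d1.T @ d2.T @ d2 @ d1)          # <d1^dag d2^dag d2 d1>
rhs = avg(d1.T@d1)*avg(d2.T@d2) - avg(d1.T@d2)*avg(d2.T@d1)
print(np.allclose(lhs, rhs))              # Wick's theorem, Eq. (263): True
```

Since all matrices here are real, the transpose plays the role of the Hermitian conjugate.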
From the master equation in Eq. (260), one may derive

∂t v⃗ = A v⃗ + b⃗ , (264)

with

v⃗ = (〈d̂L† d̂L〉, 〈d̂R† d̂R〉, 〈d̂L† d̂R〉, 〈d̂R† d̂L〉)ᵀ ,   b⃗ = (κL nFL(ϵ), κR nFR(ϵ), 0, 0)ᵀ ,

A =
⎛ −κL      0        −i g            i g       ⎞
⎜  0      −κR        i g           −i g       ⎟
⎜ −i g     i g   −(κL + κR)/2        0        ⎟
⎝  i g    −i g        0        −(κL + κR)/2   ⎠ .   (265)

At steady state, we may set the LHS of Eq. (264) to zero and we find

0 = A v⃗ss + b⃗   ⇒   v⃗ss = −A⁻¹ b⃗ . (266)
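This linear system is easily solved numerically. As a sanity check of Eqs. (264)-(266), the following NumPy sketch (parameter values are illustrative, not from the text) builds A and b⃗, solves for the steady state, and compares the population imbalance and the coherence 〈d̂L† d̂R〉 with what the analytic steady state of Eq. (269) implies:

```python
import numpy as np

# illustrative parameter values (not from the text)
kL, kR, g = 1.0, 0.8, 0.3
nL, nR = 0.9, 0.1                  # Fermi occupations n_F^L(eps), n_F^R(eps)

kbar = (kL + kR)/2
A = np.array([[-kL,      0, -1j*g,  1j*g],
              [0,      -kR,  1j*g, -1j*g],
              [-1j*g, 1j*g, -kbar,     0],
              [1j*g, -1j*g,     0, -kbar]])
b = np.array([kL*nL, kR*nR, 0, 0], dtype=complex)

v_ss = -np.linalg.solve(A, b)      # Eq. (266): v_ss = -A^{-1} b

# population imbalance and coherence implied by the analytic steady state, Eq. (269)
print(np.isclose((v_ss[0] - v_ss[1]).real, kL*kR*(nL - nR)/(kL*kR + 4*g**2)))        # True
print(np.isclose(v_ss[2].imag, -2*g*kL*kR*(nL - nR)/((4*g**2 + kL*kR)*(kL + kR))))   # True
```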

In order to connect the averages in v⃗ to the state of the double quantum dot, we write
ρ̂S = ∑_{nL,nR,nL′,nR′} ρ^{nL,nR}_{nL′,nR′} |nL, nR〉〈nL′, nR′| . (267)

With the basis states given in Eq. (259), the matrix elements may be cast into

ρ^{nL,nR}_{nL′,nR′} = 〈nL, nR| ρ̂S |nL′, nR′〉 = Tr{ (d̂L†)^{nL′} (d̂R†)^{nR′} |00〉〈00| (d̂R)^{nR} (d̂L)^{nL} ρ̂S }
= 〈 (d̂L†)^{nL′} (d̂R†)^{nR′} (1 − d̂L† d̂L)(1 − d̂R† d̂R) (d̂R)^{nR} (d̂L)^{nL} 〉 . (268)


Here we used |00〉〈00| = (1 − d̂L† d̂L)(1 − d̂R† d̂R), which may be verified by applying this operator to all basis states. Note that due to the fermionic anti-commutation relations, and because
there is no coherence between states with a different total number of electrons, the only aver-
ages required to compute the density matrix that are not of the form 〈d̂α† d̂β 〉 can be reduced
by Eq. (263).
A lengthy but straightforward calculation then results in the steady state


ρ̂ss = [1/(4g² + κLκR)] { κLκR [e^{−βL(ϵ−µL) d̂L† d̂L}/ZL] [e^{−βR(ϵ−µR) d̂R† d̂R}/ZR] + 4g² e^{−β̄(ϵ−µ̄)(d̂L† d̂L + d̂R† d̂R)}/Z̄²
+ i [2g κLκR (nFL − nFR)/(κL + κR)] (d̂L† d̂R − d̂R† d̂L) } , (269)

where we omitted the argument of the Fermi functions for brevity and the barred quantities
are defined by the equation

n̄ = 1/(e^{β̄(ϵ−µ̄)} + 1) = [κL nFL(ϵ) + κR nFR(ϵ)]/(κL + κR) , (270)

and we abbreviated the partition functions


Zα = Tr{e^{−βα(ϵ−µα) d̂α† d̂α}} , Z̄ = Tr{e^{−β̄(ϵ−µ̄) d̂α† d̂α}} , (271)

where in the last equality, the choice of α does not matter. Note that the terms in the first line
of Eq. (269) are both product states. Thus, it is the second line, including the coherence factor
d̂L† d̂R − d̂R† d̂L = |10〉〈01| − |01〉〈10|, which is responsible for any entanglement we might get.


Limiting cases: It is instructive to consider the steady state in some limiting cases. First,
we consider an equilibrium scenario where βL = βR = β̄ and µL = µR = µ̄. In this case, the
second line of Eq. (269) vanishes and the first two terms become proportional to each other.
We thus find, as expected from Eq. (197)

ρ̂ss = e^{−β̄(ĤTD − µ̄ N̂S)}/Z , (272)
with the thermodynamic Hamiltonian [64] and particle-number operator

ĤTD = ϵ(d̂L† d̂L + d̂R† d̂R) , N̂S = d̂L† d̂L + d̂R† d̂R . (273)

Note that this is a product state due to the additive nature of ĤTD . In the singular-coupling
master equation considered here, the thermal state does not exhibit any entanglement. This
is different at large coupling g , where the secular approximation should be used instead [70].
Second, we consider the limit where κL , κR ≫ g . In this case, only the first term in
Eq. (269) is relevant. This term describes two quantum dots, each in equilibrium with the
reservoir it couples to. The state is thus a product of Gibbs states with the local reservoir
temperatures and chemical potentials.
The third case we consider is the case where κL = 0. In this case, we find that only the
second term in Eq. (269) survives with β̄ = βR and µ̄ = µR. The system thus equilibrates with
the right reservoir, taking the form of Eq. (272).
Finally, we consider the limit g ≫ κL , κR. Just as in the last case, the second term in
Eq. (269) dominates and the system equilibrates to the average occupation n̄ with the state
reducing to Eq. (272). In all these limiting cases, no entanglement is found as the term in-
volving coherence always drops out.

Concurrence: To quantify the entanglement in Eq. (269), we consider the concurrence. The
steady state given in Eq. (269) is of the form

ρ̂ss =
⎛ p0   0    0    0  ⎞
⎜ 0    pL   α    0  ⎟
⎜ 0    α∗   pR   0  ⎟
⎝ 0    0    0    pd ⎠ ,  (274)

where the basis states are |00〉, |10〉, |01〉, |11〉. For such a state, the concurrence reduces to
C[ρ̂ss] = max{0, 2|α| − 2√(p0 pd)}. From Eq. (269), we find

2|α| − 2√(p0 pd) = [2κLκR/(4g² + κLκR)] { 2|g| |nFR − nFL|/(κL + κR)
− √( [(1 − nFL)(1 − nFR) + (4g²/κLκR)(1 − n̄)²] [nFL nFR + (4g²/κLκR) n̄²] ) } . (275)

We thus find that entanglement is indeed generated. Interestingly, the entanglement is gener-
ated due to reservoirs which are out of equilibrium, as can be inferred from the term |nFR − nFL |.
This is a surprising result because usually, any coupling to the environment only reduces co-
herence and entanglement in the system. This is because entanglement is monogamous, which
means that if A is strongly entangled with B, it cannot at the same time be strongly entangled
with C [87]. When a system couples to its environment, this results in entanglement between
the system and the reservoirs, which generally reduces the inter-system entanglement. Here
we encounter a different behavior, where an out-of-equilibrium environment induces entan-
glement within a system.


Heat current: It is instructive to consider the heat current, which is given by

JR = −Tr{(ĤTD − µR N̂S) LR ρ̂ss} = (ϵ − µR) 4g² κLκR (nFL − nFR) / [(κL + κR)(4g² + κLκR)] , (276)

with the thermodynamic Hamiltonian given in Eq. (273). Noting the similarity of this expres-
sion with the first term in the concurrence, see Eq. (275), we find that the system is entangled
when the heat current exceeds a critical value [88]
|JR| ≥ |ϵ − µR| |g| [2κLκR/(4g² + κLκR)] √( [(1 − nFL)(1 − nFR) + (4g²/κLκR)(1 − n̄)²] [nFL nFR + (4g²/κLκR) n̄²] ) . (277)
When we can trust the master equation we employ to be a valid description, the heat current
may thus serve as an indicator for entanglement.

Maximal entanglement: The concurrence in Eq. (275) can be maximized over all parameters, which results in

C[ρ̂ss] = (√5 − 1)/4 ≃ 0.309. (278)

This value is obtained by the parameters nFL = 1, nFR = 0, κL = κR, g/κL = (√5 − 1)/4.
The Fermi functions should thus be maximally different, which results in the largest current
and off-diagonal density matrix element. The couplings to the reservoirs should be equal
and interestingly, the ratio between the interdot coupling and the system-bath coupling is
determined by the golden ratio ϕ. Indeed, we find
ϕ ≡ (1 + √5)/2 = κL/(2g) = 1/(2C) = |α|/(2√(p0 pd)) . (279)

As shown in Ref. [89], the steady state for these parameters is not only entangled but can
also be used for quantum teleportation. The entanglement is further enhanced when includ-
ing electron-electron interactions, which may even result in nonlocal states that violate Bell
inequalities [70].
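These optimal parameters are easy to verify directly from Eq. (275). The sketch below (our own function name; parameter values are the ones quoted above) confirms both the maximal value (√5 − 1)/4 and that the golden-ratio coupling is a local maximum in g:

```python
import numpy as np

def conc(g, kL, kR, nL, nR):
    """Concurrence of the steady state, C = max{0, 2|alpha| - 2 sqrt(p0 pd)}, Eq. (275)."""
    nbar = (kL*nL + kR*nR)/(kL + kR)
    x = 4*g**2/(kL*kR)
    root = np.sqrt(((1 - nL)*(1 - nR) + x*(1 - nbar)**2)*(nL*nR + x*nbar**2))
    val = 2*kL*kR/(4*g**2 + kL*kR)*(2*abs(g)*abs(nR - nL)/(kL + kR) - root)
    return max(0.0, val)

kappa = 1.0
g_opt = (np.sqrt(5) - 1)/4*kappa                  # golden-ratio condition, Eq. (279)
C_max = conc(g_opt, kappa, kappa, 1.0, 0.0)
print(C_max)                                       # ≈ 0.309 = (sqrt(5) - 1)/4

# the golden-ratio coupling is a local maximum in g
print(C_max > conc(0.9*g_opt, kappa, kappa, 1.0, 0.0))  # True
print(C_max > conc(1.1*g_opt, kappa, kappa, 1.0, 0.0))  # True
```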

5.3 Absorption refrigerator


As a final example of a thermal machine, we consider a quantum absorption refrigerator, which
is a refrigerator that uses heat as its energy source [14, 90, 91]. Quantum absorption refriger-
ators have been implemented experimentally, using trapped ions [92] and very recently using
superconducting qudits [93]. The system we consider here consists of three two-level systems
(qubits), coupled to three reservoirs respectively, see Fig. 15, with temperatures Tc ≤ Tr ≤ Th .
Here the subscript r denotes an intermediate room temperature. The basic principle behind
the absorption refrigerator is that the tendency of heat to flow from the hot reservoir to the
room reservoir is exploited to drive a heat current from the cold reservoir to the room reser-
voir. Thereby, a hot thermal reservoir is used to cool down the coldest reservoir. At the same
time, the qubit coupled to the cold reservoir is cooled below Tc . The absorption refrigerator
may thereby achieve two goals:

1. Cooling the cold reservoir.

2. Cooling the qubit coupled to the cold reservoir.


Which of these goals is more relevant may depend on the specific circumstances. As we see
below, different figures of merit can be used to describe the performance of the refrigerator in
reaching the different goals.
We stress that in contrast to the previous examples, we do not consider quantum dots
where fermions are exchanged with the environment. Instead, we consider qubits that can
exchange photons with the environment. We therefore do not have chemical potentials in this
section (since µ = 0 for photons).

Figure 15: Sketch of a three-qubit absorption refrigerator. Three two-level systems


(qubits) are coupled to three thermal reservoirs at temperatures Tc ≤ Tr ≤ Th . A heat
flow from the hot reservoir to the room reservoir induces a heat flow from the cold
reservoir to the room reservoir, thereby extracting heat from the coldest reservoir.
At the same time, the qubit coupled to the cold reservoir is cooled to a temperature
below Tc .

5.3.1 The master equation


The three qubits are described by the Hamiltonian

ĤS = Ĥ0 + Ĥint , (280)

where the first term describes the individual qubits

Ĥ0 = ϵc σ̂c† σ̂c + ϵh σ̂h† σ̂h + ϵr σ̂r† σ̂r , (281)

and the second term describes their interaction


Ĥint = g(σ̂r† σ̂c σ̂h + σ̂c† σ̂h† σ̂r) = g (|001〉〈110| + |110〉〈001|) . (282)

Here we introduced the lowering operators

σ̂c = |0〉〈1| ⊗ 1₂ ⊗ 1₂ , σ̂h = 1₂ ⊗ |0〉〈1| ⊗ 1₂ , σ̂r = 1₂ ⊗ 1₂ ⊗ |0〉〈1| . (283)

From the interaction Hamiltonian in Eq. (282), we may anticipate the cooling mechanism:
Two excitations, one in the hot qubit and one in the cold qubit, are turned into an excitation in
the room qubit by the term σ̂r† σ̂c σ̂h . This excitation is then dissipated into the room reservoir.
The hot qubit is then re-excited by the hot reservoir, preparing to remove any excitation in the


cold qubit coming from the cold reservoir. The interaction in Eq. (282) is most effective when
it is in resonance, i.e., when the states |001〉 and |110〉 have the same energy. We thus demand

ϵr = ϵc + ϵ h . (284)

We note that the interaction in Eq. (282) would be unphysical for fermions, where it would
imply that one fermion is turned into two fermions.
We describe the coupling to the environment using the master equation in the singular-
coupling limit

∂ t ρ̂S(t ) = −i[ĤS, ρ̂S(t )] + Lc ρ̂S(t ) + Lh ρ̂S(t ) + Lr ρ̂S(t ), (285)

with the dissipators

Lα = κ̃α nBα D[σ̂α† ] + κ̃α (nBα + 1)D[σ̂α ] = κα nFα D[σ̂α† ] + κα (1 − nFα )D[σ̂α ], (286)

where the Bose-Einstein occupation and the Fermi-Dirac occupation are given by

nBα = 1/(e^{ϵα/kB Tα} − 1) , nFα = 1/(e^{ϵα/kB Tα} + 1) . (287)
This master equation can be derived using reservoirs described by noninteracting bosons (with
µ = 0)

Ĥα = ∑_q ϵα,q â†α,q âα,q , V̂α = ∑_q gα,q (â†α,q σ̂α + σ̂α† âα,q) . (288)

This naturally results in the dissipators given by the first equality in Eq. (286), including the
Bose-Einstein occupations. In the second equality in Eq. (286), the dissipators are written in
terms of the Fermi-Dirac occupations by using

κα = κ̃α nBα/nFα , (nBα + 1)/nBα = (1 − nFα)/nFα = e^{ϵα/kB Tα} . (289)
nF nBα nFα

The last equality ensures that the ratio between the rates of absorbing and emitting energy is
given by the Boltzmann factor. This is known as local detailed balance and it ensures that the
equilibrium state is a Gibb’s state. Note that while κ̃α is determined by the reservoir’s spectral
density and is temperature independent, κα depends on temperature Tα .

Heat currents: From the master equation, we may extract the heat currents that enter the
different reservoirs. As discussed in Sec. 4.4.2, the singular-coupling limit requires the use
of a thermodynamic Hamiltonian for a consistent thermodynamic bookkeeping. The correct
choice for the thermodynamic Hamiltonian depends on the details of the Hamiltonian. Given
the resonance condition in Eq. (284), the appropriate thermodynamic Hamiltonian is given
by ĤTD = Ĥ0 , i.e., we neglect the interaction between the qubits in the thermodynamic book-
keeping. The heat currents then read

Jα = −Tr{Ĥ0 Lα ρ̂S} = −ϵα κα (nFα − 〈σ̂α† σ̂α〉) . (290)

Similarly to what we found for the systems based on quantum dots, the signs of the heat currents are determined by which occupation is larger, the occupation of the reservoir nFα or the occupation of the qubit 〈σ̂α† σ̂α〉.
To make progress, it is instructive to consider the time-evolution of the qubit occupation

∂t 〈σ̂α† σ̂α〉 = Tr{σ̂α† σ̂α ∂t ρ̂S(t)} = −i〈[σ̂α† σ̂α , Ĥint]〉 + κα (nFα − 〈σ̂α† σ̂α〉) . (291)


The second term on the right-hand side can be identified as −Jα /ϵα by comparing to Eq. (290).
The first term can be related to the following quantity
I ≡ 2g Im〈σ̂r† σ̂c σ̂h 〉 = i〈[σ̂c† σ̂c , Ĥint ]〉 = i〈[σ̂h† σ̂h , Ĥint ]〉 = −i〈[σ̂r† σ̂r , Ĥint ]〉. (292)

In the steady state, we have ∂ t 〈σ̂α† σ̂α 〉 = 0. From Eqs. (291) and (292), we may then infer
Jc = −ϵc I, Jh = −ϵh I, Jr = ϵr I. (293)
This remarkably simple relation between the heat currents arises due to the simple structure
of the Hamiltonian. Heat may only traverse the system by exchanging an excitation in the
room qubit with two excitations, one in the cold and one in the hot qubit.

The laws of thermodynamics: In the steady state, Eq. (292) allows us to express the laws
of thermodynamics in a particularly simple form. The first law of thermodynamics may be
written as
Jc + Jh + Jr = I(ϵr − ϵc − ϵh ) = 0, (294)
which holds because of the resonance condition in Eq. (284). Obviously, the first law still holds
if this resonance condition is not met. However, in that case the thermodynamic Hamiltonian
should be chosen differently which would modify Eq. (290).
The second law of thermodynamics reads
Jc/Tc + Jh/Th + Jr/Tr = I [(ϵc + ϵh)/Tr − ϵc/Tc − ϵh/Th] ≥ 0. (295)
In contrast to the first law, we may not verify the second law without explicitly solving the
master equation. However, knowing that it holds, the second law can tell us when the device
operates as a refrigerator. From Eq. (292), we see that the cold reservoir is cooled when I ≥ 0.
From the second law, we can infer that this is the case if the term in brackets in Eq. (295) is
positive. This results in the condition for cooling
ϵc/ϵh ≤ Tc(Th − Tr)/[Th(Tr − Tc)] ≡ η_C^COP , (296)

where η_C^COP denotes the Carnot value for the coefficient of performance of an absorption refrigerator, see also the next subsection.

5.3.2 Figures of merit


As discussed above, we may pursue two distinct goals with the absorption refrigerator: Cooling
the cold reservoir (goal 1), or cooling the qubit coupled to the cold reservoir (goal 2). For these
two goals, we introduce different figures of merit.

Goal 1: For this goal, the desired output is a large heat current out of the cold reservoir. The
natural figure of merit is then the coefficient of performance (COP)
η^COP ≡ (−Jc)/(−Jh) = ϵc/ϵh ≤ η_C^COP . (297)

As for the heat engine in Sec. 5.1, the coefficient of performance can be understood as the de-
sired output divided by the consumed resource, which in this case is the heat provided by the
hot reservoir. For the absorption refrigerator under consideration, the COP is determined only
by the qubit energy splittings, due to Eq. (292). As shown above, the second law of thermo-
dynamics provides an upper bound on the COP, under the assumption that the device indeed
operates as a refrigerator. This bound diverges when Tr → Tc , as heat can in principle be
moved in between two reservoirs of the same temperature without consuming any resources.


Goal 2: When cooling the qubit coupled to the cold reservoir, the natural figure of merit is
its occupation. It may be quantified by an effective temperature
θ = ϵc / ( kB ln[(1 − 〈σ̂c† σ̂c〉)/〈σ̂c† σ̂c〉] )   ⇔   〈σ̂c† σ̂c〉 = 1/(e^{ϵc/kB θ} + 1) . (298)

From Eq. (291), we find (in the steady state)

〈σ̂c† σ̂c 〉 = nFc − I/κc > 0, (299)

which depends on the quantity I encountered before. The occupation may asymptotically
reach zero in the limit
g/κr → 0 ,  κr/κc = κh/κc → ∞ ,  ϵh/(kB Th) → 0 ,  ϵr/(kB Tr) → ∞ . (300)
These conditions are very demanding and can obviously never be reached exactly. This can
be understood as an implication of the third law of thermodynamics, which prevents cooling
a system down to the ground state.

5.3.3 Perturbation theory


As we have seen above, the quantity I determines most of the observables that we are inter-
ested in. In the model we consider, I can be calculated analytically [this is how the limit in
Eq. (300) was obtained]. However, this is a rather tedious calculation. Here we consider a
perturbative calculation, which is simpler but nevertheless provides some insight. To this end,
we consider the interacting part of the Hamiltonian Ĥint [c.f. Eq. (282)] as a small perturbation
to the dynamics. This is justified, as long as g ≪ κc + κh + κr . We then write
ρ̂S = ρ̂S^(0) + ρ̂S^(1) + O(g²), (301)

where ρ̂S^(j) is proportional to g^j. We note that the Hamiltonian is already written as the sum
of a g-independent part, Ĥ0, and the interaction Hamiltonian, which is linear in g. We then
write the master equation in Eq. (285) order by order in g, starting with the zeroth order

∂t ρ̂S^(0)(t) = −i[Ĥ0, ρ̂S^(0)(t)] + Lc ρ̂S^(0)(t) + Lh ρ̂S^(0)(t) + Lr ρ̂S^(0)(t). (302)

This master equation describes three independent qubits, each coupled to its respective ther-
mal reservoir. The steady state is thus simply a product of three Gibbs states (at different
temperatures) and can be written as

ρ̂^(0) = ⎛ 1 − nFc   0  ⎞ ⊗ ⎛ 1 − nFh   0  ⎞ ⊗ ⎛ 1 − nFr   0  ⎞ .  (303)
        ⎝   0     nFc ⎠   ⎝   0     nFh ⎠   ⎝   0     nFr ⎠

The master equation to first order may be written as


(1) (0) (1) (1) (1) (1)
∂ t ρ̂S (t ) = −i[Ĥint , ρ̂S (t )] − i[Ĥ0 , ρ̂S (t )] + Lc ρ̂S (t ) + Lh ρ̂S (t ) + Lr ρ̂S (t ). (304)

The first term on the right-hand side gives


[Ĥint, ρ̂S^(0)(t)] = g δn (|001〉〈110| − |110〉〈001|) , (305)

where we abbreviated

δn = nFc nFh (1 − nFr ) − (1 − nFc )(1 − nFh )nFr . (306)


Note that δn provides the population of the state |110〉 minus the population of the state
|001〉 at g = 0. The term in Eq. (305) acts like a source term in the differential equation in
Eq. (304). This suggests the ansatz
ρ̂S^(1) = x |001〉〈110| + x∗ |110〉〈001| . (307)

This ansatz is Hermitian and trace-less, which is important to keep the density matrix in
Eq. (301) Hermitian and with trace one. With this ansatz, we find
[Ĥ0, ρ̂S^(1)] = (ϵr − ϵc − ϵh) x |001〉〈110| − (ϵr − ϵc − ϵh) x∗ |110〉〈001| , (308)

and
Lj ρ̂S^(1) = −(κj/2) ρ̂S^(1) . (309)
These identities can be inserted into Eq. (304) to obtain

ρ̂S^(1) = − [2i g δn/(κc + κh + κr)] (|001〉〈110| − |110〉〈001|), (310)
which allows us to determine
I = 2g Im〈σ̂r† σ̂c σ̂h〉 = 2g Im〈110| ρ̂S |001〉 = 4g² δn/(κc + κh + κr) . (311)

From this expression, we find that for small g , the heat currents are quadratic in g . Further-
more, we obtain cooling whenever δn > 0. One may show that this inequality is equivalent
to ϵc/ϵh < η_C^COP. This perturbative calculation thus reproduces the cooling condition for all values of g.
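The perturbative result can also be compared against a brute-force steady state of the master equation (285). The sketch below (our own construction; occupations and rates are illustrative values) builds the Liouvillian as a matrix acting on the vectorized density matrix, extracts its null space, and checks the quantity I against Eq. (311) in the regime g ≪ κα:

```python
import numpy as np
from scipy.linalg import null_space

# three qubits ordered (c, h, r); |110> has index 6, |001> has index 1
sm = np.array([[0.0, 1.0], [0.0, 0.0]])     # |0><1|
I2, I8 = np.eye(2), np.eye(8)
def emb(op, pos):
    ops = [I2, I2, I2]; ops[pos] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])
s = {a: emb(sm, i) for i, a in enumerate("chr")}

# illustrative parameters; occupations chosen directly, resonance eps_r = eps_c + eps_h
eps = {"c": 1.0, "h": 2.0, "r": 3.0}
kap = {"c": 0.1, "h": 0.1, "r": 0.1}
nF = {"c": 0.3, "h": 0.4, "r": 0.2}
g = 1e-3

H = sum(eps[a]*s[a].T @ s[a] for a in "chr")
H = H + g*(s["r"].T @ s["c"] @ s["h"] + s["c"].T @ s["h"].T @ s["r"])   # Eq. (282)

def sop(A, B):                               # superoperator for rho -> A rho B (row-major vec)
    return np.kron(A, B.T)

L = -1j*(sop(H, I8) - sop(I8, H))            # unitary part of the Liouvillian
for a in "chr":                              # dissipators, Eq. (286)
    for J in (np.sqrt(kap[a]*nF[a])*s[a].T, np.sqrt(kap[a]*(1 - nF[a]))*s[a]):
        JdJ = J.T @ J
        L += sop(J, J.T) - 0.5*(sop(JdJ, I8) + sop(I8, JdJ))

rho = null_space(L)[:, 0].reshape(8, 8)
rho = rho/np.trace(rho)                      # unique steady state, normalized

I_exact = 2*g*np.imag(rho[0b110, 0b001])     # I = 2g Im<110|rho|001>, Eq. (292)
dn = nF["c"]*nF["h"]*(1 - nF["r"]) - (1 - nF["c"])*(1 - nF["h"])*nF["r"]
I_pert = 4*g**2*dn/(kap["c"] + kap["h"] + kap["r"])   # Eq. (311)
print(abs(I_exact/I_pert - 1) < 1e-2)        # True for g << kappa
```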

5.3.4 Coherence-enhanced cooling


Finally, we consider a feature of the absorption refrigerator that exploits quantum coherence in
the transient dynamics. To this end, we assume that we have a means to turn the refrigerator
on and off. In an implementation based on a superconducting circuit, this can for instance
be achieved using a magnetic field [94]. When the refrigerator is off, the qubits will tend
to the state given in Eq. (303), i.e., each qubit will thermalize with its respective reservoir.
When the refrigerator is turned on, the effective temperature θ defined in Eq. (298) will start
to oscillate, see Fig. 16. These oscillations will eventually damp out towards the steady-state
value. Interestingly however, θ may dip below its steady-state value. If the refrigerator is
switched off when θ reaches its first minimum, it can stay below its steady-state value for
a significant amount of time if the couplings to the environment, κ j , are sufficiently small.
Quantum coherence thus allows for better cooling. This effect was first discussed in Ref. [95]
and recently verified experimentally [92].
The damped oscillations in Fig. 16 are a consequence of the interplay between the unitary
and the dissipative part in the master equation. The unitary part results in oscillations between
the states |110〉 and |001〉, while the dissipative part induces an exponential time-dependence
towards the steady state. To get some insight into the oscillations, we isolate the effect of the
Hamiltonian. To this end, we consider the von Neumann equation

∂t ρ̂S(t) = −i[Ĥint, ρ̂S(t)]
= −i g x(t)(|001〉〈110| − |110〉〈001|) − g y(t)(|110〉〈110| − |001〉〈001|), (312)

where
x(t) = 〈110| ρ̂S(t) |110〉 − 〈001| ρ̂S(t) |001〉 , y(t) = 2 Im〈110| ρ̂S(t) |001〉 . (313)


Figure 16: Coherence-enhanced cooling. The temperature in resonator c, θ , is shown


as a function of time. At t = 0, the refrigerator is switched on. The temperature then
oscillates before it reaches its steady-state value (blue, solid line). Switching the
refrigerator off at the time the temperature is at its first minimum (dashed vertical
line) allows for θ to remain below the steady-state value for a substantial amount of
time (green, dashed line) as illustrated by the grey shading.

In Eq. (312), we dropped Ĥ0 because it commutes both with Ĥint and with the initial
state, which we choose to be given by Eq. (303). Equation (312) can be cast into the coupled
differential equations

∂ t x (t ) = −2g y(t ), ∂ t y(t ) = 2g x (t ), (314)

with the initial conditions x (t = 0) = δn and y(t = 0) = 0. These equations may easily be
solved, resulting in

x (t ) = δn cos(2g t ), y(t ) = δn sin(2g t ). (315)

The solution to Eq. (312) then reads

ρ̂S(t) = ρ̂S(t = 0) − δn sin²(g t)(|110〉〈110| − |001〉〈001|) − (iδn/2) sin(2g t)(|001〉〈110| − |110〉〈001|). (316)
These oscillations are known as (generalized) Rabi-oscillations and they appear whenever two
states are coherently coupled to each other. From the quantum state, we may easily obtain the
time-dependent occupation of the cold qubit

〈σ̂c† σ̂c〉 = nFc − δn sin²(g t). (317)

We note that δn ≤ nFc , ensuring that the population remains positive at all times.
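The oscillation of Eq. (317) can be reproduced by direct matrix exponentiation of Ĥint. A minimal sketch (illustrative occupations; helper names are our own):

```python
import numpy as np
from scipy.linalg import expm

# three qubits ordered (c, h, r); interaction as in Eq. (282)
sm = np.array([[0.0, 1.0], [0.0, 0.0]]); I2 = np.eye(2)
def emb(op, pos):
    ops = [I2, I2, I2]; ops[pos] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])
sc, sh, sr = emb(sm, 0), emb(sm, 1), emb(sm, 2)

g = 0.5
Hint = g*(sr.T @ sc @ sh + sc.T @ sh.T @ sr)

# initial product of thermal qubit states, Eq. (303); occupations chosen freely
nc, nh, nr = 0.3, 0.4, 0.2
rho0 = np.kron(np.kron(np.diag([1 - nc, nc]), np.diag([1 - nh, nh])), np.diag([1 - nr, nr]))
dn = nc*nh*(1 - nr) - (1 - nc)*(1 - nh)*nr        # Eq. (306)

# cold-qubit occupation under evolution by Hint alone (H0 commutes with Hint and rho0)
ts = np.linspace(0.0, 4.0, 9)
occ = [np.trace(sc.T @ sc @ expm(-1j*Hint*t) @ rho0 @ expm(1j*Hint*t)).real for t in ts]
print(np.allclose(occ, nc - dn*np.sin(g*ts)**2))  # Eq. (317): True
```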

6 Fluctuations
In this section, we explore a key difference between macroscopic systems and small quantum systems: fluctuations. While in macroscopic systems fluctuations can generally be neglected because they are small compared to mean values (c.f. Sec. 2.2), fluctuations
do play an important role in small, nano-scale systems. The laws of thermodynamics derived


in Sec. 3 only include average values and they hold irrespective of the relevance of fluctua-
tions. In this section, we discuss new laws that describe the behavior of heat and work in
the presence of fluctuations. These new laws include fluctuation theorems [6, 96, 97] and
thermodynamic uncertainty relations [98]. While fluctuation theorems generalize the second
law of thermodynamics to fluctuating systems, thermodynamic uncertainty relations provide
a trade-off between a signal-to-noise ratio and entropy production, and they do not have any
counterpart in macroscopic systems.

6.1 Fluctuation theorems for an isolated system


Before we consider the general scenario introduced in Sec. 3.1, we focus on an isolated system
that undergoes unitary time-evolution governed by a time-dependent Hamiltonian ĤS(t ). We
consider an initial state that is diagonal in the instantaneous energy eigenbasis, i.e.,
ρ̂S(0) = ∑_n pn |En^0〉〈En^0| , ĤS(t) = ∑_n En^t |En^t〉〈En^t| . (318)
In this case, the time-evolution operator is given by
ÛS(t) = T e^{−i ∫₀ᵗ dt′ ĤS(t′)} . (319)
Since the system is isolated, no heat is exchanged with the environment. However, due to
the time-dependence of the Hamiltonian, work will be performed on the system. The average
work performed on the system after time τ reads (see also Sec. 3.3)

 
WS = ∫₀^τ dt Tr{[∂t ĤS(t)] ρ̂S(t)} . (320)

6.1.1 The two-point measurement scheme


In order to go beyond average values, we introduce measurements according to the so-called
two-point measurement scheme [99]:
1. Projectively measure the system energy at t = 0. This results in an outcome En0 .
2. Let the system evolve for time τ according to the time-evolution in Eq. (319).
3. Projectively measure the system energy at t = τ. This results in an outcome Em^τ.
The outcomes of the measurements define a trajectory as the system evolved from state |En^0〉 → |Em^τ〉. The change in energy for such a trajectory is given by Wm←n = Em^τ − En^0. Since there
is no heat exchange, we interpret this energy change as work. The probability for observing a
trajectory corresponds to the joint probability of measuring the initial and final energies and
it reads
p(m ← n) = pn pm|n = pn |〈Em^τ|ÛS(τ)|En^0〉|² , (321)

where pn is the initial probability of finding the system in state |En^0〉 and pm|n denotes the conditional probability of finding the system in state |Em^τ〉 at time τ given that it started out in state |En^0〉 at t = 0.

The average energy change may then be evaluated as


〈Wm←n〉 = ∑_{n,m} p(m ← n) Wm←n = Tr{ ∑_m Em^τ |Em^τ〉〈Em^τ| ÛS(τ) ∑_n pn |En^0〉〈En^0| ÛS†(τ) }
− Tr{ ÛS(τ) ∑_n pn En^0 |En^0〉〈En^0| ÛS†(τ) } (322)
= Tr{ĤS(τ) ρ̂S(τ)} − Tr{ĤS(0) ρ̂S(0)} = ∫₀^τ dt ∂t Tr{ĤS(t) ρ̂S(t)} = WS.


This further justifies the interpretation of Wm←n as the work performed on the system during the trajectory |En^0〉 → |Em^τ〉, i.e., in a single experimental run when En^0 and Em^τ are measured. Note that we use the same brackets to denote averages over trajectories 〈f(n, m)〉 = ∑_{n,m} p(m ← n) f(n, m) as we do for ensemble averages 〈Ô〉 = Tr{Ô ρ̂}. The object that is averaged over usually makes it clear which average is meant.


Having defined the work along trajectories allows us to investigate the fluctuations in the
performed work. For instance, we may be interested in the higher moments of work defined
as

〈W^k_{m←n}〉 = ∑_{n,m} p(m ← n) W^k_{m←n} . (323)

6.1.2 The backward experiment


In fluctuation theorems, time-reversal plays an important role. For this reason, we introduce
the time-reversal operator Θ̂ [6, 100]. Time-reversal in quantum mechanics is described by
an anti-unitary operator. If we define |ã〉 = Θ̂ |a〉 and | b̃〉 = Θ̂ |b〉, the defining properties of
anti-unitarity are

〈 b̃|ã〉 = 〈b|a〉∗ = 〈a|b〉 , Θ̂ (α |ψ〉 + β |φ〉) = α∗ Θ̂ |ψ〉 + β ∗ Θ̂ |φ〉 . (324)

Note that these relations imply that the time-reversal operator anti-commutes with the imag-
inary unit: Θ̂i = −i Θ̂. Being anti-unitary, some rules that we take for granted do not apply to
the time-reversal operator. For instance, the cyclic invariance of the trace has to be adapted as

Tr{Θ̂ ÂΘ̂−1 } = Tr{Â}∗ . (325)

Another useful identity we will employ reads

〈ã|Â|b̃〉 = 〈b|Θ̂⁻¹ Â† Θ̂|a〉. (326)

We note that in some works, a daggered time-reversal operator appears. Here we follow
Refs. [6,100] and avoid such a notation, using instead the inverse of the time-reversal operator
Θ̂−1 . Furthermore, we always consider the time-reversal operator (and its inverse) to act on
the right.
With the help of the time-reversal operator, we define a backward experiment. In this
experiment, the time-evolution is determined by the time-reversed Hamiltonian
H̃S(t) = Θ̂ ĤS(τ − t) Θ̂⁻¹ , ŨS(t) = T e^{−i ∫₀ᵗ dt′ H̃S(t′)} . (327)

The work protocol discussed above, where time-evolution is governed by ĤS(t ) is denoted
as the forward experiment. In addition to transforming the Hamiltonian by the time-reversal
operator, the time argument is inverted in Eq. (327). Any external parameters that are changed
during the forward experiment thus have their time-dependence reversed in the backward
experiment. For instance, if an electric field is ramped up in the forward experiment, it is
ramped down in the backward experiment. A special role is played by external magnetic
fields. In the presence of such fields we have H̃S(B⃗, t) = Θ̂ ĤS(−B⃗, τ − t) Θ̂⁻¹. The reason for
this is that if we included the charges that create the magnetic field in the description, the time-
reversal operator would reverse the momenta of these charges which changes the sign of the
resulting magnetic fields. The time evolutions along the forward and backward experiments are linked by the so-called microreversibility condition

Θ̂ÛS(τ)Θ̂−1 = ŨS−1 (τ) = ŨS† (τ), (328)


which may be derived from Eq. (327) and the properties of the time-reversal operator. As an
initial state for the backward experiment, we consider a state that is diagonal in the basis of
the time-reversed Hamiltonian at t = 0
ρ̃S(0) = ∑_m q_m Θ̂|Em^τ〉〈Em^τ|Θ̂⁻¹ , H̃S(t) = ∑_m Em^{τ−t} Θ̂|Em^{τ−t}〉〈Em^{τ−t}|Θ̂⁻¹ . (329)

The backward experiment is then defined similarly to the forward experiment:


1. Projectively measure the system energy at t = 0. This results in an outcome Em^τ.

2. Let the system evolve for time τ according to the time-evolution in Eq. (327).

3. Projectively measure the system energy at t = \tau. This results in an outcome E_n^0.


The outcomes of the measurements again define a trajectory |\tilde{E}_m^\tau\rangle = \hat{\Theta}|E_m^\tau\rangle \to \hat{\Theta}|E_n^0\rangle = |\tilde{E}_n^0\rangle.
The probability for this trajectory reads
\tilde{p}(n \leftarrow m) = q_m\,|\langle\tilde{E}_n^0|\tilde{U}_S(\tau)|\tilde{E}_m^\tau\rangle|^2 = q_m\,|\langle E_m^\tau|\hat{\Theta}^{-1}\tilde{U}_S^\dagger(\tau)\hat{\Theta}|E_n^0\rangle|^2 = q_m\,|\langle E_m^\tau|\hat{U}_S(\tau)|E_n^0\rangle|^2,  (330)

where we employed Eqs. (326) and (328).

6.1.3 Fluctuation theorems


We may now derive fluctuation theorems by taking the ratios of observing time-reversed tra-
jectories in the forward and backward experiment

\frac{\tilde{p}(n \leftarrow m)}{p(m \leftarrow n)} = \frac{q_m}{p_n}.  (331)

Due to microreversibility, the transition probabilities drop out and we are left with the ratio
of initial probabilities. This innocuous expression is actually very powerful. To see this, let us
consider thermal initial states

\hat{\rho}_S(0) = \frac{e^{-\beta\hat{H}_S(0)}}{Z_0}, \qquad \tilde{\rho}_S(0) = \hat{\Theta}\,\frac{e^{-\beta\hat{H}_S(\tau)}}{Z_\tau}\,\hat{\Theta}^{-1},  (332)

where the partition function reads


Z_t = \sum_n e^{-\beta E_n^t} = e^{-\beta F_t},  (333)

with Ft denoting the free energy for a thermal state at inverse temperature β and Hamiltonian
ĤS(t ), see Eq. (102). For thermal states, the probabilities reduce to
p_n = e^{-\beta(E_n^0 - F_0)}, \qquad q_m = e^{-\beta(E_m^\tau - F_\tau)}.  (334)

Crooks fluctuation theorem: Plugging Eq. (334) into Eq. (331) results in

\frac{\tilde{p}(n \leftarrow m)}{p(m \leftarrow n)} = e^{-\beta(W_{m\leftarrow n} - \Delta F)},  (335)

with ∆F = Fτ − F0 . This relation is known as Crooks fluctuation theorem [101]. It is an


instance of a detailed fluctuation theorem because it involves the probabilities for single tra-
jectories.


Jarzynski relation: Multiplying Eq. (335) with p(m ← n) and summing over all n and m,
we find the Jarzynski relation [102]
\sum_{n,m}\tilde{p}(n \leftarrow m) = \sum_{n,m}p(m \leftarrow n)\,e^{-\beta(W_{m\leftarrow n} - \Delta F)} \;\Rightarrow\; \big\langle e^{-\beta W_{m\leftarrow n}}\big\rangle = e^{-\beta\Delta F}.  (336)

This relation is known as an integral fluctuation theorem since it involves an average rather
than individual trajectories. The Jarzynski relation is remarkable because it relates an equi-
librium quantity, the difference in equilibrium free energies on the right hand side, to an out-
of-equilibrium quantity, the work performed in the forward experiment that appears on the
left-hand side. Importantly, there is no requirement of remaining in or close to equilibrium
during the experiment. Since equilibrium free energies are difficult to measure, the Jarzynski
relation has been used to determine free energy landscapes, in particular in single-molecule
pulling experiments [103]. In practice, evaluating the average on the left-hand side may be
challenging as trajectories with exponentially small probabilities may contribute [104].
Finally, we may apply Jensen’s inequality to the Jarzynski relation. Jensen’s inequality
states that for a convex function, f(a x_1 + (1-a)x_2) \le a f(x_1) + (1-a)f(x_2), the inequality

\langle f(X)\rangle \ge f(\langle X\rangle),  (337)

holds for a random variable X . Using this inequality, we find

e^{-\beta\Delta F} = \big\langle e^{-\beta W_{m\leftarrow n}}\big\rangle \ge e^{-\beta\langle W_{m\leftarrow n}\rangle} \;\Rightarrow\; \langle W_{m\leftarrow n}\rangle = W_S \ge \Delta F.  (338)

The final inequality states that the work performed on the system always exceeds the difference
in equilibrium free energies. This inequality is equivalent to the second law of thermodynam-
ics, when both the initial and final states are thermal states (which can always be achieved by
letting the system equilibrate after the experiment). Thus, we find that the Jarzynski relation
and the Crooks fluctuation theorem generalize the second law of thermodynamics, as they
constrain not only the average value of the performed work but also its fluctuations.
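As a concrete sketch of how the two-point measurement scheme can be explored numerically, the following snippet (all Hamiltonians and parameter values are illustrative assumptions, not taken from the text) linearly ramps a two-level Hamiltonian and verifies the Jarzynski relation of Eq. (336):

```python
import numpy as np

# Hypothetical two-level work protocol: H(t) interpolates between H0 and H1.
beta, tau, steps = 1.0, 1.0, 1000
H0 = np.array([[0.0, 0.3], [0.3, 1.0]])
H1 = np.array([[0.0, 0.8], [0.8, 2.0]])

def expmih(H, dt):
    """exp(-i H dt) for a Hermitian matrix H via its eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * dt)) @ v.conj().T

# Time-ordered propagator built as a product of short-time steps
U = np.eye(2, dtype=complex)
dt = tau / steps
for k in range(steps):
    t = (k + 0.5) * dt
    U = expmih((1 - t / tau) * H0 + (t / tau) * H1, dt) @ U

E0, V0 = np.linalg.eigh(H0)            # eigenbasis at t = 0
E1, V1 = np.linalg.eigh(H1)            # eigenbasis at t = tau
p = np.exp(-beta * E0)
p /= p.sum()                           # thermal initial probabilities p_n

# Transition probabilities |<E_m^tau|U|E_n^0>|^2 and work W = E_m^tau - E_n^0
T = np.abs(V1.conj().T @ U @ V0) ** 2
W = E1[:, None] - E0[None, :]
lhs = np.sum(T * p[None, :] * np.exp(-beta * W))           # <exp(-beta W)>
rhs = np.exp(-beta * E1).sum() / np.exp(-beta * E0).sum()  # exp(-beta dF)
print(lhs, rhs)
```

Since the agreement only relies on the unitarity of the propagator, it holds for arbitrarily fast driving; fluctuations enter only through the spread of the individual work values W_{m\leftarrow n}.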

6.2 Fluctuation theorems for the general scenario


We now return to the general scenario introduced in Sec. 3.1. To define trajectories, we write
the initial state [c.f. Eq. (122)] as
\hat{\rho}_{\rm tot}(0) = \hat{\rho}_S(0)\bigotimes_\alpha\hat{\tau}_\alpha = \sum_{i,\vec{k}} p_0(i,\vec{k})\,|\psi_i(0),\vec{k}\rangle\langle\psi_i(0),\vec{k}|,  (339)

with the states


|\psi_i(0),\vec{k}\rangle = |\psi_i(0)\rangle\bigotimes_\alpha|E^\alpha_{k_\alpha}, N^\alpha_{k_\alpha}\rangle,  (340)

where

\hat{H}_\alpha|E^\alpha_{k_\alpha}, N^\alpha_{k_\alpha}\rangle = E^\alpha_{k_\alpha}|E^\alpha_{k_\alpha}, N^\alpha_{k_\alpha}\rangle, \qquad \hat{N}_\alpha|E^\alpha_{k_\alpha}, N^\alpha_{k_\alpha}\rangle = N^\alpha_{k_\alpha}|E^\alpha_{k_\alpha}, N^\alpha_{k_\alpha}\rangle.  (341)

The sub- and superscripts may be confusing here. The eigenvalues of \hat{H}_\alpha are labeled by E^\alpha_j and the vector \vec{k} has elements k_\alpha. The probabilities in Eq. (339) read
p_0(i,\vec{k}) = p_i(0)\prod_\alpha\frac{e^{-\beta_\alpha(E^\alpha_{k_\alpha} - \mu_\alpha N^\alpha_{k_\alpha})}}{Z_\alpha}.  (342)

Finally, we introduced the eigenvalues and eigenstates of the reduced state of the system
through

\hat{\rho}_S(t) = \mathrm{Tr}_B\{\hat{\rho}_{\rm tot}(t)\} = \sum_i p_i(t)\,|\psi_i(t)\rangle\langle\psi_i(t)|.  (343)


6.2.1 Forward trajectories


We now introduce trajectories by

1. Projectively measure all reservoir energies and particle numbers {Ĥα , N̂α } and the sys-

tem in its eigenbasis |\psi_i(0)\rangle. This results in outcomes: i, \vec{k}.

2. Let the system evolve for time τ according to the time-evolution operator Û(τ) given in
Eq. (110).

3. Projectively measure all reservoir energies and particle numbers {Ĥα , N̂α } and the sys-
tem in its eigenbasis |\psi_j(\tau)\rangle. This results in outcomes: j, \vec{l}.

We denote the corresponding trajectory by \gamma: |\psi_i(0),\vec{k}\rangle \to |\psi_j(\tau),\vec{l}\rangle. The probability to observe such a trajectory is given by

P(\gamma) = p_0(i,\vec{k})\,\big|\langle\psi_j(\tau),\vec{l}\,|\hat{U}(\tau)|\psi_i(0),\vec{k}\rangle\big|^2.  (344)

We may now define various thermodynamic quantities along these trajectories.

Stochastic heat: The stochastic heat exchanged with reservoir α is defined as

Q_\alpha(\gamma) = \big(E^\alpha_{l_\alpha} - \mu_\alpha N^\alpha_{l_\alpha}\big) - \big(E^\alpha_{k_\alpha} - \mu_\alpha N^\alpha_{k_\alpha}\big).  (345)

The average of the stochastic heat reduces to

\langle Q_\alpha(\gamma)\rangle = \sum_\gamma P(\gamma)Q_\alpha(\gamma) = \mathrm{Tr}\big\{\big(\hat{H}_\alpha - \mu_\alpha\hat{N}_\alpha\big)[\hat{\rho}_{\rm tot}(\tau) - \hat{\rho}_{\rm tot}(0)]\big\} = Q_\alpha,  (346)

which is equal to the definition for heat introduced in Eq. (124). In Eq. (346), the sum over all trajectories is given by \sum_\gamma = \sum_{i,j,\vec{k},\vec{l}} and the equality may be proven using the identities

\hat{H}_\alpha = \sum_{j,\vec{l}}E^\alpha_{l_\alpha}|\psi_j(\tau),\vec{l}\rangle\langle\psi_j(\tau),\vec{l}|, \qquad \hat{N}_\alpha = \sum_{j,\vec{l}}N^\alpha_{l_\alpha}|\psi_j(\tau),\vec{l}\rangle\langle\psi_j(\tau),\vec{l}|, \qquad \mathbb{1} = \sum_{j,\vec{l}}|\psi_j(\tau),\vec{l}\rangle\langle\psi_j(\tau),\vec{l}|.  (347)

Stochastic (chemical) work: Analogously, the stochastic chemical work is defined as

W_\alpha(\gamma) = \mu_\alpha\big(N^\alpha_{l_\alpha} - N^\alpha_{k_\alpha}\big).  (348)

Like the stochastic heat, its average reduces to the expected value



\langle W_\alpha(\gamma)\rangle = \mu_\alpha\,\mathrm{Tr}\big\{\hat{N}_\alpha[\hat{\rho}_{\rm tot}(\tau) - \hat{\rho}_{\rm tot}(0)]\big\}.  (349)

Stochastic entropy: Following Ref. [105], we may use the self-information introduced in
Sec. 1.3 to define a stochastic system entropy change

∆S(γ) = − ln p j (τ) + ln pi (0), (350)

which averages to the change in von Neumann entropy

〈∆S(γ)〉 = SvN [ρ̂S(τ)] − SvN [ρ̂S(0)]. (351)


Stochastic entropy production: Finally, we can introduce the stochastic entropy production
along a trajectory as
\Sigma(\gamma) = k_B\Delta S(\gamma) + \sum_\alpha\frac{Q_\alpha(\gamma)}{T_\alpha},  (352)
which averages to the entropy production introduced in Eq. (123), 〈Σ(γ)〉 = Σ as can be easily
shown using Eqs. (346) and (351). Importantly, while the average entropy production Σ is
always nonnegative, the stochastic entropy production Σ(γ) can become negative.
We note that in contrast to Sec. 6.1, we do not define a stochastic version of the work
associated to the time-dependence of the Hamiltonian. The reason for this is that if the initial
state does not commute with the Hamiltonian, there is no unique way to define stochastic
work that obeys all desired properties [106, 107].

6.2.2 Backward trajectories


To derive fluctuation theorems, we need to introduce backward trajectories. For these, we
consider the initial state

\tilde{\rho}_{\rm tot}(0) = \hat{\Theta}\hat{\rho}_S(\tau)\bigotimes_\alpha\hat{\tau}_\alpha\hat{\Theta}^{-1} = \sum_{j,\vec{l}}p_\tau(j,\vec{l})\,\hat{\Theta}|\psi_j(\tau),\vec{l}\rangle\langle\psi_j(\tau),\vec{l}|\hat{\Theta}^{-1},  (353)

where p_\tau(j,\vec{l}) is obtained from Eq. (342) by replacing p_i(0) \to p_j(\tau), i.e.,

p_\tau(j,\vec{l}) = p_j(\tau)\prod_\alpha\frac{e^{-\beta_\alpha(E^\alpha_{l_\alpha} - \mu_\alpha N^\alpha_{l_\alpha})}}{Z_\alpha}.  (354)

Note that the initial state of the backward trajectory is quite different from what we considered
in Sec. 6.1. For the system state, we consider the (time-reversed) final state of the forward
experiment. The reservoirs on the other hand are initialized in the same Gibbs states as in
the beginning of the forward experiment. This is consistent with the paradigm that we have
control over the system, but not over the environment which we only describe by constant
temperatures and chemical potentials.
This state is then evolved by the operator
\tilde{U}(t) = \mathcal{T}e^{-i\int_0^t dt'\,\hat{\Theta}\hat{H}_{\rm tot}(t')\hat{\Theta}^{-1}},  (355)

which obeys the micro-reversibility condition

Θ̂Û(τ)Θ̂−1 = Ũ † (τ). (356)

The backward trajectories are now defined by


1. Projectively measure all reservoir energies and particle numbers {Ĥα , N̂α } and the sys-
tem in its eigenbasis Θ̂|ψ j (τ)〉. This results in outcomes: j, ⃗l.

2. Let the system evolve for time τ according to the time-evolution operator Ũ(τ) given in
Eq. (355).

3. Projectively measure all reservoir energies and particle numbers {Ĥα , N̂α } and the sys-

tem in its eigenbasis \hat{\Theta}|\psi_i(0)\rangle. This results in outcomes: i, \vec{k}.
We denote the corresponding trajectory by \tilde{\gamma}: \hat{\Theta}|\psi_j(\tau),\vec{l}\rangle \to \hat{\Theta}|\psi_i(0),\vec{k}\rangle. The probability to observe such a trajectory is given by, using Eq. (326),

\tilde{P}(\tilde{\gamma}) = p_\tau(j,\vec{l})\,\big|\langle\psi_j(\tau),\vec{l}\,|\hat{\Theta}^{-1}\tilde{U}^\dagger(\tau)\hat{\Theta}|\psi_i(0),\vec{k}\rangle\big|^2 = p_\tau(j,\vec{l})\,\big|\langle\psi_j(\tau),\vec{l}\,|\hat{U}(\tau)|\psi_i(0),\vec{k}\rangle\big|^2.  (357)


6.2.3 Fluctuation theorems


A detailed fluctuation theorem may now be obtained by taking the ratio of probabilities for
time-reversed trajectories in the forward and in the backward experiment

\frac{\tilde{P}(\tilde{\gamma})}{P(\gamma)} = \frac{p_\tau(j,\vec{l})}{p_0(i,\vec{k})} = e^{-\sum_\alpha\frac{Q_\alpha(\gamma)}{k_B T_\alpha} + \ln\frac{p_j(\tau)}{p_i(0)}} = e^{-\Sigma(\gamma)/k_B}.  (358)

As for the isolated system, c.f. Eq. (335), the transition probabilities drop out and the right-
hand side is determined solely by the initial probabilities. The Boltzmann factors in Eq. (342)
result in the stochastic heat, c.f. Eq. (345), and together with the initial probabilities of the system,
the exponent adds up to the stochastic entropy production, c.f. Eq. (352).
Equation (358) provides a generalization of the second law of thermodynamics to nano-scale systems, where fluctuations matter. It is instructive to consider what happens if Eq. (358) is applied to a macroscopic process, for instance a glass shattering. In such a process, which we denote by the trajectory \gamma, roughly 10^{23} degrees of freedom are involved (Avogadro constant), resulting in an entropy production of the order of 10^{23} k_B. The probability to observe the time-reversed process \tilde{\gamma} is thus exponentially suppressed by an extremely large number. It is thus
safe to assume that macroscopic processes with negative entropy production do not happen as
their probabilities are virtually zero. For a nano-scale object, the entropy production along a
trajectory may be as low as a few kB , and indeed we may observe the time-reversed trajectory.
This has interesting consequences for the thermodynamic arrow of time. For macroscopic
systems, such an arrow of time is given by the fact that entropy is produced. For nano-scale
systems, fluctuations prevent us from identifying the direction of time with certainty: if you
see a video of a fluctuating system, you may not with certainty identify if the video was played
forward or backward. Recently, machine learning was employed to learn the arrow of time
and the algorithm rediscovered the fluctuation theorem as the underlying thermodynamic
principle [108].
From the detailed fluctuation theorem, we may derive the Crooks fluctuation theorem by
summing over all trajectories that have the same entropy production

\frac{\tilde{P}(-\Sigma)}{P(\Sigma)} = e^{-\Sigma/k_B}, \qquad P(\Sigma) = \sum_\gamma\delta(\Sigma - \Sigma(\gamma))P(\gamma), \qquad \tilde{P}(\Sigma) = \sum_\gamma\delta(\Sigma - \Sigma(\gamma))\tilde{P}(\tilde{\gamma}).  (359)

Furthermore, an integral fluctuation theorem can be derived

\big\langle e^{-\Sigma(\gamma)/k_B}\big\rangle = 1,  (360)

from which the second law, 〈Σ(γ)〉 ≥ 0, follows through Jensen’s inequality in complete anal-
ogy to Eqs. (336) and (338). This implies that the fluctuation theorem in Eq. (358) contains
strictly more information than the second law, and therefore can be understood as a general-
ization thereof.
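The integral fluctuation theorem can be verified explicitly in a minimal model. The sketch below (the model and all parameter values are illustrative assumptions, not taken from the text) couples a system qubit to a single thermal "reservoir" qubit through an energy-conserving partial swap, enumerates all 16 trajectories, and checks that P(\gamma)e^{-\Sigma(\gamma)/k_B} sums to one:

```python
import numpy as np

# Assumed model: system qubit + one "reservoir" qubit with equal splitting E,
# coupled by an energy-conserving partial swap (kB = 1 throughout).
beta, theta, E = 0.7, 1.2, 1.0
pS = np.array([0.8, 0.2])                 # diagonal initial system state p_i(0)
pB = np.exp(-beta * np.array([0.0, E]))
pB /= pB.sum()                            # Gibbs state of the reservoir qubit

c, s = np.cos(theta), np.sin(theta)       # partial swap on |00>,|01>,|10>,|11>
U = np.array([[1, 0, 0, 0],
              [0, c, -s, 0],
              [0, s, c, 0],
              [0, 0, 0, 1]], dtype=complex)

rho_tau = U @ np.kron(np.diag(pS), np.diag(pB)) @ U.conj().T
rhoS_tau = rho_tau.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out bath
p_tau, V = np.linalg.eigh(rhoS_tau)       # final measurement basis |psi_j(tau)>

basis = np.eye(2)
avg = 0.0
for i in range(2):
    for k in range(2):
        for j in range(2):
            for l in range(2):
                amp = np.kron(V[:, j], basis[l]).conj() @ U @ np.kron(basis[i], basis[k])
                P = pS[i] * pB[k] * abs(amp) ** 2    # trajectory probability
                Q = E * (l - k)                      # heat into the reservoir
                Sigma = np.log(pS[i]) - np.log(p_tau[j]) + beta * Q  # Sigma/kB
                avg += P * np.exp(-Sigma)
print(avg)   # equals 1 up to numerical precision
```

The average equals one for any unitary and any initial system distribution, as long as the final system measurement is performed in the eigenbasis of \hat{\rho}_S(\tau); the second law \langle\Sigma(\gamma)\rangle \ge 0 then follows via Jensen's inequality.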

6.3 Full counting statistics


In the last subsection, we defined fluctuations along single trajectories for the general scenario.
Here we show how we can extract these fluctuations from an approximate description based
on Markovian master equations. To this end, we consider the method of full counting statistics
(FCS) which allows us to calculate probability distributions for heat and work, as well as their
associated moments and cumulants. A more detailed introduction can be found in Refs. [18,
54].


6.3.1 Counting particles


We start by considering a system that can exchange particles with a fermionic reservoir as
illustrated in Fig. 17. In addition, it might be coupled to other reservoirs. We are interested
in the net number of particles n that entered the fermionic reservoir after time t . Whenever
a fermion enters the reservoir, n is increased by one. When a fermion leaves the reservoir
(enters the system), n is reduced by one. The quantity n is thus a stochastic variable that is
different in each experimental run. We consider the master equation

∂ t ρ̂S(t ) = L̃ρ̂S(t ) + κ(1 − nF )D[ĉ]ρ̂S(t ) + κnF D[ĉ † ]ρ̂S(t ) ≡ Lρ̂S(t ), (361)

where L̃ denotes the time-evolution due to the Hamiltonian as well as all reservoirs except
the one we are interested in and the annihilation operator ĉ destroys a fermion in the system.
Now we identify which terms in the master equation change the values of n. To this end, we
write out the superoperators
D[\hat{c}^\dagger]\hat{\rho} = \hat{c}^\dagger\hat{\rho}\hat{c} - \frac{1}{2}\{\hat{c}\hat{c}^\dagger,\hat{\rho}\}, \qquad D[\hat{c}]\hat{\rho} = \hat{c}\hat{\rho}\hat{c}^\dagger - \frac{1}{2}\{\hat{c}^\dagger\hat{c},\hat{\rho}\}.  (362)
The term ĉ † ρ̂ĉ increases the number of fermions in the system by one. This fermion comes
from the reservoir, thus this term decreases n by one. Similarly, the term ĉ ρ̂ĉ † increases n by
one.
We now introduce the n-resolved density matrix ρ̂S(n) as the density matrix if n fermions
entered the reservoir. From this quantity, we may recover the full density matrix as

\hat{\rho}_S = \sum_{n=-\infty}^{\infty}\hat{\rho}_S(n).  (363)

Having identified the terms in the master equation which change n, we may write the time-
evolution of the n-resolved density matrix as

∂ t ρ̂S(n) = L0 ρ̂S(n) + J+ ρ̂S(n − 1) + J− ρ̂S(n + 1), (364)

where the superoperators that increase and decrease n are written as

J+ ρ̂ = κ(1 − nF )ĉ ρ̂ĉ † , J− ρ̂ = κnF ĉ † ρ̂ĉ, (365)

Figure 17: Illustration of full counting statistics. A quantum system can exchange
particles with a reservoir. The net number of particles that enter the reservoir during
time t is denoted by n(t ). Whenever a particle enters the reservoir, n is increased by
one. Whenever a particle leaves the reservoir (and enters the system) n is decreased
by one.


and L_0 denotes the time-evolution that does not change n, such that L = L_0 + J_+ + J_-. Summing Eq. (364) over n, we thus recover the master equation in Eq. (361).
From the n-resolved density matrix, we can obtain the probability that n fermions entered
the reservoir during time t as
p(n) = Tr{ρ̂S(n)}. (366)
Often we are interested in the moments of this distribution

\langle n^k\rangle = \sum_{n=-\infty}^{\infty}n^k p(n),  (367)

with the first moment 〈n〉 being the average.


Keeping the information on n comes at a price. Comparing Eq. (364) to Eq. (361), we turned a single master equation into infinitely many coupled master equations, one for each \hat{\rho}_S(n). This problem can be simplified by Fourier transforming

\hat{\rho}_S(\chi) = \sum_{n=-\infty}^{\infty}e^{in\chi}\hat{\rho}_S(n),  (368)

where χ is known as the counting field. Fourier transforming Eq. (364), we find

\partial_t\hat{\rho}_S(\chi) = L_0\sum_{n=-\infty}^{\infty}e^{in\chi}\hat{\rho}_S(n) + J_+\sum_{n=-\infty}^{\infty}e^{in\chi}\hat{\rho}_S(n-1) + J_-\sum_{n=-\infty}^{\infty}e^{in\chi}\hat{\rho}_S(n+1)
= L_0\hat{\rho}_S(\chi) + e^{i\chi}J_+\hat{\rho}_S(\chi) + e^{-i\chi}J_-\hat{\rho}_S(\chi) \equiv L(\chi)\hat{\rho}_S(\chi).  (369)

This equation has the formal solution

ρ̂S(χ) = e L(χ)t ρ̂S(χ, t = 0). (370)

To find the initial condition, we note that we start counting at t = 0, implying that n = 0 at
t = 0 and therefore

ρ̂S(n, t = 0) = ρ̂S(t = 0)δn,0 ⇔ ρ̂S(χ, t = 0) = ρ̂S(t = 0). (371)

The trace of the χ-dependent density matrix provides the characteristic function

\Lambda(\chi) \equiv \sum_{n=-\infty}^{\infty}e^{in\chi}p(n) = \mathrm{Tr}\{\hat{\rho}_S(\chi)\}, \qquad \Lambda(0) = \sum_{n=-\infty}^{\infty}p(n) = 1.  (372)

In the field of FCS, the characteristic function is often called the moment-generating function
since the moments of n can be obtained by taking derivatives as

〈n k 〉 = (−i)k ∂χk Λ(χ)|χ=0 . (373)

Note however that in probability theory, the moment-generating function is the Laplace trans-
form of the probability distribution. A particularly useful quantity is the cumulant-generating
function, given by the logarithm of the characteristic function

S(χ) = ln Λ(χ), 〈〈n k 〉〉 = (−i)k ∂χk S(χ)|χ=0 , (374)

which produces the cumulants 〈〈n k 〉〉 upon differentiation. The first cumulant is equal to the
mean
\langle\langle n\rangle\rangle = -i\partial_\chi S(\chi)|_{\chi=0} = -i\,\frac{\partial_\chi\Lambda(\chi)}{\Lambda(\chi)}\bigg|_{\chi=0} = \langle n\rangle.  (375)


Figure 18: Quantum dot in the high bias regime. Due to a large voltage bias particles
can only enter the dot from the left reservoir and leave to the right reservoir. We are
interested in the net number of particles that enter the right reservoir.

The second cumulant is equal to the variance


\langle\langle n^2\rangle\rangle = -\partial_\chi^2 S(\chi)|_{\chi=0} = -\frac{\partial_\chi^2\Lambda(\chi)}{\Lambda(\chi)}\bigg|_{\chi=0} - \left(\frac{-i\,\partial_\chi\Lambda(\chi)}{\Lambda(\chi)}\right)^2\bigg|_{\chi=0} = \langle n^2\rangle - \langle n\rangle^2.  (376)

The third cumulant 〈〈n 3 〉〉 is related to the skewness of the distribution and the fourth cumulant
to its kurtosis (“tailedness”).
Often, we are interested in the long-time behavior of n. In this case, it is useful to use the
spectral decomposition to write

e^{L(\chi)t} = \sum_j e^{\nu_j t}P_j,  (377)

where \nu_j denote the eigenvalues of the Liouvillean L(\chi) and the projectors P_j can be obtained
from its eigenvectors. The characteristic function may then be written as
\Lambda(\chi) = \sum_j e^{\nu_j t}\,\mathrm{Tr}\{P_j\hat{\rho}_S(t=0)\} \xrightarrow{t\to\infty} e^{\nu_{\rm max}t}\,\mathrm{Tr}\{P_{\rm max}\hat{\rho}_S(t=0)\},  (378)

where νmax is the χ-dependent eigenvalue of the Liouvillean with the largest real part. Simi-
larly, the cumulant generating function may be written as
S(\chi) \xrightarrow{t\to\infty} \nu_{\rm max}t + \ln\mathrm{Tr}\{P_{\rm max}\hat{\rho}_S(t=0)\} \simeq \nu_{\rm max}t.  (379)

The cumulant generating function is thus fully determined by the eigenvalue of the Liouvillean
with the largest real part. Furthermore, all cumulants become linear in time, since S(χ) ∝ t .

6.3.2 Example: transport through a quantum dot


As an example, we return to the quantum dot coupled to two fermionic reservoirs, see Fig. 18,
as discussed in Sec. 5.1 in the context of a heat engine. Here we consider the so-called large
bias regime for simplicity, where µL −εd , εd −µR ≫ kB T . In this case, nFL ≃ 1 and nFR ≃ 0, such
that particles can only enter the dot from the left reservoir and leave to the right reservoir. We
want to describe the statistics of the net number of particles that enter the right reservoir. This
system was experimentally investigated in Ref. [109], where the tunneling electrons could be
observed and counted one by one.
The master equation for this scenario is given by

∂ t ρ̂S = κL D[d̂ † ]ρ̂S + κRD[d̂]ρ̂S = Lρ̂S. (380)

Since we only count the particles entering the right reservoir, we may introduce the counting
field by the replacement
κRd̂ ρ̂Sd̂ † → e iχ κRd̂ ρ̂Sd̂ † . (381)


This results in the counting-field dependent master equation

\partial_t\hat{\rho}_S(\chi) = L\hat{\rho}_S(\chi) + \kappa_R\big(e^{i\chi} - 1\big)\hat{d}\hat{\rho}_S(\chi)\hat{d}^\dagger.  (382)

Noting that the density matrix remains diagonal, and denoting the diagonal elements by p0 (χ)
and p1 (χ), we may cast this master equation into
    
\partial_t\begin{pmatrix}p_0(\chi)\\ p_1(\chi)\end{pmatrix} = \underbrace{\begin{pmatrix}-\kappa_L & \kappa_R e^{i\chi}\\ \kappa_L & -\kappa_R\end{pmatrix}}_{L(\chi)}\begin{pmatrix}p_0(\chi)\\ p_1(\chi)\end{pmatrix},  (383)

where L(χ) denotes the Liouvillean (acting on a vector instead of on a matrix). Its eigenvalues
can easily be calculated and read
\nu_\pm = -\frac{\kappa_L + \kappa_R}{2} \pm \frac{1}{2}\sqrt{(\kappa_L - \kappa_R)^2 + 4e^{i\chi}\kappa_L\kappa_R}.  (384)
The cumulant generating function in the long-time limit is thus [c.f. Eq. (379)]
S(\chi)/t = \nu_+ = -\frac{\kappa_L + \kappa_R}{2} + \frac{1}{2}\sqrt{(\kappa_L - \kappa_R)^2 + 4e^{i\chi}\kappa_L\kappa_R},  (385)
resulting in the average
\langle n\rangle = -i\partial_\chi S(\chi)|_{\chi=0} = \frac{\kappa_L\kappa_R}{\kappa_L + \kappa_R}\,t = \langle I\rangle t,  (386)
κL + κR

where 〈I〉 denotes the average particle current, see also the power given in Eq. (243). The
variance is given by
\langle\langle n^2\rangle\rangle = -\partial_\chi^2 S(\chi)|_{\chi=0} = \langle n\rangle\,\frac{\kappa_L^2 + \kappa_R^2}{(\kappa_L + \kappa_R)^2}.  (387)
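These expressions can be cross-checked by differentiating \nu_+ numerically; the snippet below is a sketch with arbitrary illustrative rates (not values from the text):

```python
import numpy as np

kL, kR = 2.0, 1.0  # arbitrary illustrative rates

def nu_plus(chi):
    # largest eigenvalue of L(chi), Eq. (384)
    return -(kL + kR) / 2 + 0.5 * np.sqrt((kL - kR) ** 2 + 4 * np.exp(1j * chi) * kL * kR)

# cumulants per unit time from finite-difference derivatives of S(chi)/t
h = 1e-4
c1 = (-1j * (nu_plus(h) - nu_plus(-h)) / (2 * h)).real              # <<n>>/t
c2 = (-(nu_plus(h) - 2 * nu_plus(0) + nu_plus(-h)) / h ** 2).real   # <<n^2>>/t

print(c1, kL * kR / (kL + kR))                      # average current, Eq. (386)
print(c2 / c1, (kL ** 2 + kR ** 2) / (kL + kR) ** 2)  # Fano factor, Eq. (387)
```

The ratio c2/c1 is the Fano factor; for \kappa_L = \kappa_R it reaches its minimum 1/2, while for very asymmetric couplings it approaches 1 (Poissonian transport).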
Transport becomes particularly simple if one of the couplings far exceeds the other, for
instance if κL ≫ κR. In this case, the cumulant generating function reduces to

S(\chi) = \kappa_R t\big(e^{i\chi} - 1\big).  (388)

Note that it is the smaller of the couplings that features in the cumulant generating function.
This is the case because transport at long times is dominated by bottlenecks. In this case,
the bottleneck is the smaller coupling to the right reservoir. The cumulants obtained from
Eq. (388) are all equal and read 〈〈n k 〉〉 = κR t . This is the hallmark of Poissonian transport,
where each particle is independent of the others. Indeed, we may write the characteristic
function as

\Lambda(\chi) = e^{-\kappa_R t}\exp\big(\kappa_R t\,e^{i\chi}\big) = e^{-\kappa_R t}\sum_{n=0}^{\infty}\frac{(\kappa_R t)^n}{n!}e^{in\chi},  (389)
from which we may read off, with the help of Eq. (372),

p(n) = e^{-\kappa_R t}\,\frac{(\kappa_R t)^n}{n!},  (390)
which is indeed a Poisson distribution.
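The Poissonian limit can also be probed by direct stochastic (Gillespie-type) simulation of the jump process. The sketch below uses illustrative rates with \kappa_L \gg \kappa_R (not values from the text) and samples the number of electrons transferred to the right reservoir:

```python
import numpy as np

rng = np.random.default_rng(7)
kL, kR, t_end, runs = 50.0, 1.0, 20.0, 4000

counts = []
for _ in range(runs):
    t, occupied, n = 0.0, False, 0
    while True:
        rate = kR if occupied else kL   # next jump: out to the right or in from the left
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            break
        if occupied:
            n += 1                      # electron leaves to the right reservoir
        occupied = not occupied
    counts.append(n)

counts = np.array(counts)
print(counts.mean(), counts.var())      # both close to kR * t_end
```

The sampled mean and variance are both close to \kappa_R t, and the Fano factor approaches (\kappa_L^2 + \kappa_R^2)/(\kappa_L + \kappa_R)^2 \simeq 1, the hallmark of (nearly) Poissonian transport dominated by the bottleneck \kappa_R.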


6.3.3 Counting heat and work


So far, we focused on counting particles. We now illustrate how the same technique can be
applied to evaluate the moments and cumulants of heat and power exchanged with the envi-
ronment. To this end, we consider the singular-coupling master equation given in Eq. (194)
which is here restated for convenience
\partial_t\hat{\rho}_S(t) = -i[\hat{H}_S, \hat{\rho}_S(t)] + \sum_{\alpha,k}\gamma_{\alpha,k}(\omega_{\alpha,k})D[\hat{S}_{\alpha,k}]\hat{\rho}_S(t) = L\hat{\rho}_S(t),  (391)

where the Hamiltonian may include a Lamb shift. The jump operators obey

[Ŝα,k , ĤTD ] = ωα,k Ŝα,k , [Ŝα,k , N̂S] = nα,k Ŝα,k , (392)

where the thermodynamic Hamiltonian is introduced in Sec. 4.4.2. From these relations, we

may conclude that a jump \hat{S}_{\alpha,k}\hat{\rho}_S\hat{S}_{\alpha,k}^\dagger reduces the particle number in the system by n_{\alpha,k} and the internal energy by \omega_{\alpha,k}.
We are interested in the joint distribution of the heat and work exchanged with the different
reservoirs P({Qα , Wα }). This distribution can be obtained from a characteristic function
\Lambda(\{\chi_\alpha, \lambda_\alpha\}) = \prod_\alpha\int dQ_\alpha\,e^{iQ_\alpha\chi_\alpha}\int dW_\alpha\,e^{iW_\alpha\lambda_\alpha}\,P(\{Q_\alpha, W_\alpha\}) = \mathrm{Tr}\{\hat{\rho}_S(\{\chi_\alpha, \lambda_\alpha\})\}.  (393)

The counting field dependent density matrix can again be obtained by a modified master equa-
tion
∂ t ρ̂S({χα , λα }) = L({χα , λα })ρ̂S({χα , λα }), (394)
where the counting field dependent Liouvillean is obtained from the Liouvillean L in Eq. (391)
by the replacement

\hat{S}_{\alpha,k}\hat{\rho}_S\hat{S}_{\alpha,k}^\dagger \to e^{i\chi_\alpha(\omega_{\alpha,k} - \mu_\alpha n_{\alpha,k})}e^{i\lambda_\alpha\mu_\alpha n_{\alpha,k}}\,\hat{S}_{\alpha,k}\hat{\rho}_S\hat{S}_{\alpha,k}^\dagger.  (395)

In contrast to Eq. (369), the counting fields are weighted by the heat and work associated to
each jump. While Eq. (395) provides a simple recipe, we note that we can rigorously obtain
L({χα , λα }) by introducing counting fields on the unitary evolution with Ĥtot and subsequently
tracing out the environment using Born-Markov approximations [64].
We also note that the fluctuation theorem in Eq. (358) may be derived from the master
equation using the method of FCS. A trajectory γ is then determined by outcomes of initial
and final measurements on the system, as well as the times and types of all jumps that occur
[18, 97].
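As an illustrative sketch of this recipe (the model and all parameter values are assumptions, not taken from the text), consider a qubit with splitting \omega coupled to a hot and a cold bosonic reservoir, with a counting field attached to the jumps that exchange heat with the hot reservoir. The first cumulant obtained from the tilted Liouvillean reproduces the steady-state heat current computed directly from the populations:

```python
import numpy as np

# Assumed model: qubit of splitting w, hot/cold bosonic baths (hbar = kB = 1)
w = 1.0
gh, gc, nh, nc = 1.0, 0.5, 2.0, 0.1   # couplings and Bose occupations

up = gh * nh + gc * nc                # total excitation rate
dn = gh * (nh + 1) + gc * (nc + 1)    # total decay rate

def L_chi(chi):
    """Tilted rate matrix acting on (p0, p1): emission into the hot bath
    carries exp(i chi w), absorption from it carries exp(-i chi w)."""
    return np.array([[-up, gc * (nc + 1) + gh * (nh + 1) * np.exp(1j * chi * w)],
                     [gc * nc + gh * nh * np.exp(-1j * chi * w), -dn]])

def nu_max(chi):
    ev = np.linalg.eigvals(L_chi(chi))
    return ev[np.argmax(ev.real)]

# first cumulant of the heat exchanged with the hot bath, per unit time
h = 1e-5
Q_fcs = (-1j * (nu_max(h) - nu_max(-h)) / (2 * h)).real

# direct steady-state calculation of the same heat current
p1 = up / (up + dn)
p0 = 1 - p1
Q_direct = w * (gh * (nh + 1) * p1 - gh * nh * p0)
print(Q_fcs, Q_direct)   # agree; negative here: the hot bath loses heat on average
```

Higher derivatives of \nu_{\rm max} with respect to \chi give the heat noise and higher cumulants in the same way.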

6.4 The thermodynamic uncertainty relation


The thermodynamic uncertainty relation (TUR) is an inequality that bounds fluctuations in a
current by the entropy production [98, 110]

\frac{\langle\langle I^2\rangle\rangle}{\langle I\rangle^2} \ge \frac{2k_B}{\dot{\Sigma}}.  (396)

Here 〈〈I 2 〉〉 and 〈I〉 denote the steady-state current fluctuations and average current respec-
tively (see below), and Σ̇ denotes the entropy production rate.
The TUR can be understood as an upper bound on the signal-to-noise ratio \langle I\rangle^2/\langle\langle I^2\rangle\rangle, set by the dissipation as expressed through the entropy production rate: to achieve a high signal-to-noise ratio, a sufficient amount of entropy must be produced. Equivalently, the TUR


implies that it is not possible to simultaneously achieve a high current, low fluctuations, and
low dissipation.
In contrast to fluctuation theorems, the TUR has a smaller range of validity. It applies
to classical, Markovian systems. It may therefore be violated in quantum-coherent systems,
see for instance Refs. [70, 111–114]. Markovian quantum systems thus have the potential
to achieve higher signal-to-noise ratios for a given amount of dissipation than their classical
counterparts. We note however that the TUR may also be violated in classical, non-Markovian
systems [115].

6.4.1 Current and current noise


The average and the noise of a current enter the TUR. Here we briefly connect these quantities
to the cumulants obtained from FCS as discussed in Sec. 6.3. Let n count the number of trans-
ferred quanta (e.g., particles or photons exchanged with a reservoir). The current cumulants
are then defined as the time-derivatives of the cumulants of n
\langle\langle I^k\rangle\rangle = \partial_t\langle\langle n^k\rangle\rangle \xrightarrow{t\to\infty} \frac{\langle\langle n^k\rangle\rangle}{t},  (397)
where we used that all cumulants become linear in time in the long-time limit. The first cumu-
lant of the current is simply the average current 〈I〉, while the second cumulant characterizes
the noise of the current.
The relation between current and the counting variable n can be motivated by introducing
a stochastic current, which is the time-derivative of n, i.e.,
n(t) = \int_0^t dt'\,I(t').  (398)

Since n typically increases or decreases by one if a particle is exchanged, the stochastic current
I(t ) consists of a series of Dirac deltas at these times. With this definition, one may show
\langle n\rangle = \int_0^t dt'\,\langle I(t')\rangle, \qquad \lim_{t\to\infty}\langle\langle n^2\rangle\rangle = t\int_{-\infty}^{\infty}d\tau\,\langle\delta I(\tau)\delta I(0)\rangle = t\,S_{\delta I},  (399)

where δI(t ) = I(t ) − 〈I(t )〉 denotes the deviation of the current from its mean and SδI is
known as the zero-frequency power spectrum. More information on noise and fluctuations
can be found for instance in Refs. [18, 116].
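As a minimal sanity check of Eq. (396) (using an assumed classical toy model, not taken from the text), consider a biased jump process with forward rate k_+ and backward rate k_- obeying local detailed balance, for which the current, the noise, and the entropy production rate are all known in closed form:

```python
import numpy as np

# assumed toy model: biased jump process with local detailed balance (kB = 1)
def tur_sides(kp, km):
    I = kp - km                          # average current <I>
    noise = kp + km                      # zero-frequency current noise <<I^2>>
    Sdot = (kp - km) * np.log(kp / km)   # entropy production rate
    return noise / I ** 2, 2.0 / Sdot    # left- and right-hand side of Eq. (396)

for rates in [(3.0, 1.0), (10.0, 0.1), (1.5, 1.4)]:
    lhs, rhs = tur_sides(*rates)
    print(rates, lhs >= rhs)             # the TUR holds for every rate pair
```

For these rates the TUR reduces to the elementary inequality (k_+ + k_-)\ln(k_+/k_-) \ge 2(k_+ - k_-), which becomes tight close to equilibrium (k_+ \approx k_-), in line with the fluctuation-dissipation theorem.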

6.4.2 Application: heat engine


As an illustrative example, let us consider the TUR applied to a heat engine operating between
a hot and a cold reservoir, as discussed in Sec. 5.1. In this case, Eq. (396) may be re-written
as [117]
P\,\frac{\eta}{\eta_C - \eta}\,\frac{k_B T_c}{\langle\langle P^2\rangle\rangle} \le \frac{1}{2},  (400)
where P denotes the average output power, η the efficiency and ηC the Carnot efficiency
[c.f. Eq. (242)]. The TUR thus implies that we cannot at the same time achieve high power, an
efficiency close to the Carnot efficiency, and low fluctuations in the output power (sometimes
called high constancy). In other words, a high output power at efficiencies close to the Carnot
efficiency is only possible if fluctuations are large.
Equation (400) may be obtained from Eq. (396) by using the expression for the entropy
production rate in Eq. (246) together with the identities
\langle I\rangle = \frac{P}{\mu_c - \mu_h}, \qquad \langle\langle I^2\rangle\rangle = \frac{\langle\langle P^2\rangle\rangle}{(\mu_c - \mu_h)^2},  (401)


which relate the power and its fluctuations to the particle current. The TUR thus implies that
the well-known trade-off between high power and an efficiency close to the Carnot efficiency is
actually a trade-off between three quantities, including the power fluctuations. This sheds light
onto previous works that found finite power at Carnot efficiency close to a phase transition,
where fluctuations diverge [118, 119].
Interesting additional applications of the TUR include the estimation of efficiencies of
molecular motors [120], as well as investigating the precision of biological clocks [121].

7 Summary
These lecture notes provide an introduction to the vast and growing field of quantum thermo-
dynamics. In particular, the following key questions were addressed:

What is thermodynamic equilibrium? This question is addressed in Sec. 2, where we introduced the Gibbs state as well as the concepts of temperature and chemical potential. To highlight the relevance of the Gibbs state, we provided three motivations: the maximum entropy principle, considering a small part of a system with well defined energy and particle number, as well as the concept of complete passivity.

How do the laws of thermodynamics emerge from quantum mechanics? In Sec. 3, we tackled this key question that lies at the heart of quantum thermodynamics and we introduced
the general scenario that was used throughout the lecture notes. We discussed how energy
conservation results in the first law of thermodynamics, we introduced the concept of entropy
and entropy production, and we discussed how the second law of thermodynamics can be
understood in terms of loss of information.

How can we model open quantum systems? In Sec. 4 we derived Markovian master equa-
tions that provide a powerful tool to produce approximations for observables in the general
scenario. We discussed how these equations can be derived in a thermodynamically consistent
way and highlighted their limitations.

What can we do with open quantum systems? Section 5 introduced three different quan-
tum thermal machines, a heat engine, an entanglement generator, and a refrigerator. These
devices illustrate how open quantum systems coupled to multiple thermal reservoirs can be
used to perform useful tasks.

How do small systems differ from large ones? In Sec. 6, we focused on a key difference
between microscopic and macroscopic systems: fluctuations. We illustrated how they become
relevant and how they result in new thermodynamic rules including fluctuation theorems,
which generalize the second law of thermodynamics, and the thermodynamic uncertainty re-
lation.

8 Outlook
While a number of topics in the field of quantum thermodynamics are addressed in these lecture notes, there is much more that has not been touched upon. Here I provide a subjective selection
notes, there is much more that has not been touched upon. Here I provide a subjective selection
of exciting research avenues that build on the concepts discussed in these notes.


8.1 The thermodynamics of information


Already in the 19th century, J. C. Maxwell realized, by considering a thought experiment,
that having access to microscopic degrees of freedom allows for performing tasks that are
otherwise forbidden by the second law of thermodynamics [122]. Maxwell considered a “neat-
fingered being”, commonly referred to as Maxwell’s demon, that can see individual particles
in a gas. Using a trap-door, the demon could separate fast from slow particles and thereby
create a temperature gradient out of equilibrium without performing any work, seemingly
reducing entropy and thus violating the second law of thermodynamics. More generally, using
measurement and feedback control, thermal fluctuations can be rectified which may result
in negative entropy production. This seeming contradiction of the laws of thermodynamics
is resolved by appreciating that the demon itself is a physical system. In order to function,
it needs to produce entropy. Including this contribution to the entropy production, the laws
of thermodynamics are restored. Experimental progress has enabled the implementation of
Maxwell’s thought experiment in the laboratory, both in classical [123,124], and more recently
in quantum systems [125–127].
Interestingly, to understand the thermodynamics of feedback-controlled systems, it is not
required to model the physical implementation of feedback control (i.e., the demon). In-
stead, it is sufficient to include the information obtained by the demon in the thermodynamic
bookkeeping [12, 128–130]. Since feedback-controlled systems exploit fluctuations (different
measurement outcomes result in different feedback protocols), and since the fluctuation the-
orem provides a generalization of the second law of thermodynamics to fluctuating systems,
information can be incorporated in the thermodynamic bookkeeping by accordingly modified
fluctuation theorems [131–135]. A better understanding of the thermodynamics of measure-
ment and feedback promises to shed light onto how such systems can be exploited for emerging
nano- and quantum technologies, similar to how a better understanding of thermodynamics
allowed for the refinement of steam engines and refrigerators in the industrial revolution.

8.2 Quantum thermo-kinetic uncertainty relation


The TUR discussed in Sec. 6.4 shows how entropy production bounds the signal-to-noise ratio
of any current. In addition to the TUR, another bound exists known as the kinetic uncertainty
relation (KUR) [136]. The KUR looks very similar to the TUR, but instead of the entropy production rate, it features the dynamical activity, i.e., the average rate of transitions between different states.
While the TUR is tight close to equilibrium (a consequence of the fluctuation-dissipation theorem), it becomes very loose far from equilibrium because the entropy production rate increases in this regime. In contrast, the KUR provides a relevant bound far from equilibrium and can
also be applied to irreversible dynamics, such as predator-prey models. Very recently, a tighter
version of the KUR was discovered which is known as the clock uncertainty relation (CUR),
due to its connection to the problem of time estimation [137].
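The difference between the two bounds can be made explicit in a minimal example — a biased random walk, a standard classical toy model not treated in these notes. The net number of jumps has mean (Γ₊ − Γ₋)t and variance (Γ₊ + Γ₋)t, the total entropy production is σ = (Γ₊ − Γ₋) ln(Γ₊/Γ₋) t, and the dynamical activity is K = (Γ₊ + Γ₋)t. The TUR bounds the squared signal-to-noise ratio by σ/2, the KUR by K:

```python
import numpy as np

def bounds_biased_walk(gp, gm, t=1.0):
    """Biased random walk with forward rate gp and backward rate gm
    (gp > gm > 0). Returns the squared signal-to-noise ratio of the
    net current together with the TUR and KUR bounds (units k_B = 1)."""
    mean = (gp - gm) * t                      # mean net number of jumps
    var = (gp + gm) * t                       # variance of the net number of jumps
    snr2 = mean**2 / var                      # squared signal-to-noise ratio
    sigma = (gp - gm) * np.log(gp / gm) * t   # total entropy production
    activity = (gp + gm) * t                  # dynamical activity
    # TUR: snr2 <= sigma / 2;  KUR: snr2 <= activity
    return snr2, sigma / 2, activity
```

Close to equilibrium (Γ₊ ≈ Γ₋) the TUR bound is nearly saturated while the KUR bound is loose; far from equilibrium the KUR bound becomes the tighter of the two.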
Both the TUR as well as the KUR hold for classical Markovian systems and can be violated
in systems described by a quantum Markovian master equation (see for instance Ref. [70]).
This has motivated efforts to find similar bounds that constrain signal-to-noise ratios in
quantum systems. Over the last few years, a number of extensions of the TUR and KUR to
the quantum regime were discovered, see for instance Refs. [138–142]. These works provide
insights into the fascinating avenue of research that aims at understanding the fundamental
limitations on precision in open quantum systems, as well as the differences between quantum
and classical stochastic systems.


8.3 Finite-size environments


In quantum thermodynamics, the environment is often treated as large and memoryless, be-
ing well described by fixed temperatures and chemical potentials. This is however not always
the case. Indeed in recent experiments, temperature fluctuations were measured [143], and
a single two-level system was used to purify the state of its environment [144–146], resulting
in enhanced coherence times. There is no unique approach to describing the thermodynamics
of such scenarios. Indeed, multiple approaches exist, including the extended micro-canonical
master equation [147], where the energy of the environment is treated dynamically and clas-
sical correlations between system and environment are taken into account. Furthermore, a
thermodynamic description of energy exchanges between arbitrary quantum systems was
recently put forward [148].
In Ref. [149], a single-level quantum dot coupled to a finite-size reservoir that can be de-
scribed by a fluctuating temperature was investigated. It was found that the thermodynamics
of the quantum dot is described by an entropic temperature instead of the actual, fluctuat-
ing temperature. In particular, the quantum dot equilibrates to the entropic temperature, in
analogy to the constant temperature case considered in Sec. 4.3.
There still remain many open questions related to finite-size reservoirs. For instance, when
can they be described using a fluctuating temperature and when is a non-equilibrium de-
scription needed? This research direction is closely related to understanding systems that are
strongly coupled to their environment [44–47].

8.4 Dissipative phase transitions


Phase transitions are a central concept in macroscopic thermodynamics. A phase transition
denotes a drastic change in a system's properties upon changing an external variable. Common
examples include the transition of liquid water to ice at zero degrees Celsius, or the onset of
magnetization in a ferromagnet. These phase transitions occur as temperature is changed. In
quantum systems, phase transitions may occur at zero temperature, as quantum fluctuations
result in a drastic change of the ground-state properties. These phase transitions are termed
quantum phase transitions.
Phase transitions are heavily exploited in technological applications, including refriger-
ators and sensors, e.g., single-photon detectors [150]. Indeed, phase transitions are very
promising for sensing applications due to the drastic change that can be induced by a small
perturbation.
More recently, dissipative phase transitions between out-of-equilibrium steady states have
been investigated [151]. These transitions, occurring far from equilibrium, exhibit rich be-
havior and can be exploited for sensing applications [152]. The thermodynamics of these
transitions remains rather unexplored and provides a promising research avenue.

8.5 Thermodynamics of multiple conserved quantities


Throughout these lecture notes, we considered the grand canonical (and sometimes the canon-
ical) ensemble. In this ensemble, there are two conserved quantities, energy and particle
number. One can imagine adding additional conserved quantities, for instance angular mo-
mentum or any other observable that commutes with the total Hamiltonian. This motivates
the extension of the thermal state to the so-called generalized Gibbs state [153]
\hat{\rho}_G = \frac{e^{-\beta\left(\hat{H} - \sum_k \mu_k \hat{Q}_k\right)}}{Z}, \qquad (402)
where Q̂_k denotes the conserved charges. In this case, thermodynamic forces corresponding to
different conserved quantities can be traded off against each other, similar to how the heat


engine in Sec. 5.1 exploits a temperature gradient to overcome a voltage bias. In contrast to a
heat engine, which performs one useful task, hybrid thermal machines perform multiple useful
tasks at the same time [154]. For instance, a three-terminal device can be used to produce
work and cool down the coldest reservoir at the same time.
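Eq. (402) can be illustrated numerically. The following sketch uses an arbitrarily chosen two-qubit Hamiltonian and a single conserved charge, the excitation number; these choices are illustrative and not taken from these notes. Since the charge commutes with the Hamiltonian, both are diagonal in a common basis and the matrix exponential reduces to an elementwise exponential:

```python
import numpy as np

# Two qubits with H = omega * (n1 + n2) and conserved charge Q = n1 + n2
# (the excitation number); both are diagonal in the computational basis.
omega, beta, mu = 1.0, 0.5, 0.2

n = np.array([0, 1])                        # single-qubit occupations
n_tot = (n[:, None] + n[None, :]).ravel()   # eigenvalues of n1 + n2: 0, 1, 1, 2
H = np.diag(omega * n_tot.astype(float))
Q = np.diag(n_tot.astype(float))

# Generalized Gibbs state of Eq. (402): rho_G = exp(-beta (H - mu Q)) / Z.
weights = np.exp(-beta * (omega - mu) * n_tot)
Z = weights.sum()                           # generalized partition function
rho_G = np.diag(weights / Z)

mean_Q = np.trace(rho_G @ Q)                # average conserved charge
```

One can check that Q commutes with H, that tr ρ̂_G = 1, and that the mean charge is tuned by the chemical potential μ, in analogy to the particle number in the grand-canonical ensemble.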
Of particular interest is the study of thermodynamics when the conserved quantities no
longer commute, as is the case for instance with the different components of angular momen-
tum. In this case, established concepts in thermodynamics are challenged and many fascinat-
ing questions are raised [155].

Acknowledgements
These lecture notes were created for a Master course at the University of Basel where they
are accompanied by exercise sheets (available upon request). They are based on a previous
set of notes prepared for an 8-hour course at Lund University. I thank all participants of these
courses whose feedback has been crucial for the development of these notes. I also thank Peter
Samuelsson for providing the opportunity to develop my own course at Lund University. Many
thanks also to the teaching assistants in Basel including Marcelo Janovitch, Matteo Brunelli,
Kacper Prech, and Aaron Daniel. Special thanks to Aaron Daniel for illustrating Figs. 1, 2, 3, 5,
6, 7, 15, and 17, and to Marcelo Janovitch for illustrating Fig. 4. Special thanks also to Aaron
Daniel and Nicolas Hunziker for pointing out the appearance of the golden ratio in Sec. 5.2.
I acknowledge funding from the Swiss National Science Foundation (Eccellenza Professorial
Fellowship PCEFP2_194268).

References
[1] P. Weinberger, The discovery of thermodynamics, Philos. Mag. 93(20), 2576 (2013),
doi:10.1080/14786435.2013.784402.

[2] A. Auffèves, Quantum technologies need a quantum energy initiative, PRX Quantum 3,
020101 (2022), doi:10.1103/PRXQuantum.3.020101.

[3] J. P. Pekola, Towards quantum thermodynamics in electronic circuits, Nat. Phys. 11(2),
118 (2015), doi:10.1038/nphys3169.

[4] G. Benenti, G. Casati, K. Saito and R. Whitney, Fundamental aspects of steady-state
conversion of heat to work at the nanoscale, Phys. Rep. 694, 1 (2017),
doi:10.1016/j.physrep.2017.05.008.

[5] J. Stevens, D. Szombati, M. Maffei, C. Elouard, R. Assouly, N. Cottet, R. Dassonneville,
Q. Ficheux, S. Zeppetzauer, A. Bienfait, A. N. Jordan, A. Auffèves et al., Energetics of a
single qubit gate, Phys. Rev. Lett. 129, 110601 (2022),
doi:10.1103/PhysRevLett.129.110601.

[6] P. Strasberg, Quantum Stochastic Thermodynamics: Foundations and Selected
Applications, Oxford University Press, ISBN 9780192895585,
doi:10.1093/oso/9780192895585.001.0001 (2022).

[7] S. Deffner and S. Campbell, Quantum Thermodynamics, Morgan & Claypool Publishers,
ISBN 978-1-64327-658-8, doi:10.1088/2053-2571/ab21c6 (2019).


[8] J. Gemmer, M. Michel and G. Mahler, Quantum Thermodynamics: Emergence of Ther-
modynamic Behavior Within Composite Quantum Systems, vol. 784 of Lecture Notes in
Physics, Springer (Berlin), doi:10.1007/978-3-540-70510-9 (2009).

[9] F. Binder, L. A. Correa, C. Gogolin, J. Anders and G. Adesso, eds., Thermodynamics in
the Quantum Regime, Springer, doi:10.1007/978-3-319-99046-0 (2019).

[10] R. Kosloff, Quantum thermodynamics: A dynamical viewpoint, Entropy 15(6), 2100
(2013), doi:10.3390/e15062100.

[11] S. Vinjanampathy and J. Anders, Quantum thermodynamics, Contemp. Phys. 57(4),
545 (2016), doi:10.1080/00107514.2016.1201896.

[12] J. Goold, M. Huber, A. Riera, L. del Rio and P. Skrzypczyk, The role of quantum infor-
mation in thermodynamics – a topical review, J. Phys. A: Math. Theor. 49(14), 143001
(2016), doi:10.1088/1751-8113/49/14/143001.

[13] M. Lostaglio, An introductory review of the resource theory approach to thermodynamics,
Rep. Prog. Phys. 82(11), 114001 (2019), doi:10.1088/1361-6633/ab46e5.

[14] M. T. Mitchison, Quantum thermal absorption machines: refrigerators, engines and clocks,
Contemp. Phys. (2019), doi:10.1080/00107514.2019.1631555.

[15] J. Anders and M. Esposito, Focus on quantum thermodynamics, New J. Phys. 19(1),
010201 (2017), doi:10.1088/1367-2630/19/1/010201.

[16] R. Shankar, Principles of Quantum Mechanics, Springer New York, NY, 2nd edn.,
doi:10.1007/978-1-4757-0576-8 (1994).

[17] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th
Anniversary Edition, Cambridge University Press, doi:10.1017/CBO9780511976667
(2010).

[18] G. T. Landi, M. J. Kewming, M. T. Mitchison and P. P. Potts, Current fluctuations
in open quantum systems: Bridging the gap between quantum continuous measurements
and full counting statistics, PRX Quantum 5, 020201 (2024),
doi:10.1103/PRXQuantum.5.020201.

[19] Y. Guryanova, N. Friis and M. Huber, Ideal Projective Measurements Have Infinite Resource
Costs, Quantum 4, 222 (2020), doi:10.22331/q-2020-01-13-222.

[20] R. R. Puri, Mathematical Methods of Quantum Optics, Springer, doi:10.1007/978-3-
540-44953-9 (2001).

[21] H. Bruus and K. Flensberg, Many-Body Quantum Theory in Condensed Matter Physics:
An Introduction, Oxford University Press (2004).

[22] R. Hentschke, Thermodynamics, Springer Berlin, Heidelberg, doi:10.1007/978-3-642-
36711-3 (2013).

[23] H. B. Callen, Thermodynamics and an Introduction to Thermostatistics, Wiley, New York
(1985).

[24] R. Clausius, Ueber verschiedene für die anwendung bequeme formen der hauptgle-
ichungen der mechanischen wärmetheorie, Annalen der Physik 201(7), 353 (1865),
doi:10.1002/andp.18652010702.


[25] C. E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 27(3), 379
(1948), doi:10.1002/j.1538-7305.1948.tb01338.x.

[26] T. M. Cover and J. A. Thomas, Elements of information theory, John wiley & Sons, Inc.,
Hoboken, New Jersey, 2nd edn., doi:10.1002/047174882X (2006).

[27] S. Kullback and R. A. Leibler, On information and sufficiency, Ann. Math. Stat. 22, 79
(1951).

[28] V. Vedral, The role of relative entropy in quantum information theory, Rev. Mod. Phys.
74, 197 (2002), doi:10.1103/RevModPhys.74.197.

[29] H. Araki and E. H. Lieb, Entropy inequalities, Commun. Math. Phys. 18(2), 160 (1970),
doi:10.1007/BF01646092.

[30] L. D. Landau and E. M. Lifshitz, Course of Theoretical Physics, Vol. 5: Statistical Physics,
Part 1, Pergamon Press, Oxford, doi:10.1016/C2009-0-24487-4 (1959).

[31] F. Schwabl, Statistical Mechanics, Springer Berlin, Heidelberg, 2nd edn., doi:10.1007/3-
540-36217-7 (2006).

[32] E. T. Jaynes, Information theory and statistical mechanics, Phys. Rev. 106, 620 (1957),
doi:10.1103/PhysRev.106.620.

[33] E. T. Jaynes, Information theory and statistical mechanics. II, Phys. Rev. 108, 171 (1957),
doi:10.1103/PhysRev.108.171.

[34] S. D. Bartlett, T. Rudolph and R. W. Spekkens, Reference frames, superselection
rules, and quantum information, Rev. Mod. Phys. 79, 555 (2007),
doi:10.1103/RevModPhys.79.555.

[35] G. B. Arfken, H. J. Weber and F. E. Harris, Mathematical Methods for Physicists, Elsevier
Inc., 7th edn., doi:10.1016/C2009-0-30629-7 (2012).

[36] W. Pusz and S. L. Woronowicz, Passive states and KMS states for general quantum systems,
Commun. Math. Phys. 58(3), 273 (1978), doi:10.1007/BF01614224.

[37] M. Lostaglio, D. Jennings and T. Rudolph, Thermodynamic resource theories, non-
commutativity and maximum entropy principles, New J. Phys. 19(4), 043008 (2017),
doi:10.1088/1367-2630/aa617f.

[38] I. M. Bassett, Alternative derivation of the classical second law of thermodynamics, Phys.
Rev. A 18, 2356 (1978), doi:10.1103/PhysRevA.18.2356.

[39] P. Skrzypczyk, R. Silva and N. Brunner, Passivity, complete passivity, and virtual temper-
atures, Phys. Rev. E 91, 052133 (2015), doi:10.1103/PhysRevE.91.052133.

[40] H. Touchette, Equivalence and nonequivalence of ensembles: Thermodynamic, macrostate,
and measure levels, J. Stat. Phys. 159(5), 987 (2015), doi:10.1007/s10955-015-1212-2.

[41] M. Esposito, K. Lindenberg and C. V. den Broeck, Entropy production as correlation
between system and reservoir, New J. Phys. 12(1), 013013 (2010), doi:10.1088/1367-
2630/12/1/013013.


[42] P. Strasberg, M. G. Díaz and A. Riera-Campeny, Clausius inequality for finite
baths reveals universal efficiency improvements, Phys. Rev. E 104, L022103 (2021),
doi:10.1103/PhysRevE.104.L022103.

[43] P. Strasberg and A. Winter, First and second law of quantum thermodynamics: A consistent
derivation based on a microscopic definition of entropy, PRX Quantum 2, 030202 (2021),
doi:10.1103/PRXQuantum.2.030202.

[44] P. Strasberg and M. Esposito, Stochastic thermodynamics in the strong coupling regime:
An unambiguous approach based on coarse graining, Phys. Rev. E 95, 062101 (2017),
doi:10.1103/PhysRevE.95.062101.

[45] A. Rivas, Strong coupling thermodynamics of open quantum systems, Phys. Rev. Lett.
124, 160601 (2020), doi:10.1103/PhysRevLett.124.160601.

[46] P. Talkner and P. Hänggi, Colloquium: Statistical mechanics and thermodynamics
at strong coupling: Quantum and classical, Rev. Mod. Phys. 92, 041002 (2020),
doi:10.1103/RevModPhys.92.041002.

[47] A. S. Trushechkin, M. Merkli, J. D. Cresser and J. Anders, Open quantum system
dynamics and the mean force Gibbs state, AVS Quantum Sci. 4(1), 012301 (2022),
doi:10.1116/5.0073853.

[48] K. Ptaszyński and M. Esposito, Entropy production in open systems: The predom-
inant role of intraenvironment correlations, Phys. Rev. Lett. 123, 200603 (2019),
doi:10.1103/PhysRevLett.123.200603.

[49] W. Nernst, Sitzungsberichte der königlich preussischen Akademie der Wissenschaften,
chap. Über die Beziehung zwischen Wärmeentwicklung und maximaler Arbeit
bei kondensierten Systemen, p. 933, Berlin,
https://archive.org/details/mobot31753002089495/page/933/ (1906).

[50] P. Taranto, F. Bakhshinezhad, A. Bluhm, R. Silva, N. Friis, M. P. E. Lock, G. Vitagliano,
F. C. Binder, T. Debarba, E. Schwarzhans, F. Clivaz and M. Huber, Landauer vs. Nernst:
What is the true cost of cooling a quantum system?, PRX Quantum 4, 010332 (2023),
doi:10.1103/PRXQuantum.4.010332.

[51] L. Masanes and J. Oppenheim, A general derivation and quantification of the third law
of thermodynamics, Nat. Commun. 8, 14538 (2017), doi:10.1038/ncomms14538.

[52] H.-P. Breuer and F. Petruccione, The theory of open quantum systems, Oxford University
Press, doi:10.1093/acprof:oso/9780199213900.001.0001 (2002).

[53] A. Rivas and S. F. Huelga, Open Quantum Systems: An Introduction, SpringerBriefs in
Physics, Springer Berlin Heidelberg, doi:10.1007/978-3-642-23354-8 (2012).

[54] G. Schaller, Open Quantum Systems Far from Equilibrium, Springer, doi:10.1007/978-
3-319-03877-3 (2014).

[55] H. Carmichael, An Open Systems Approach to Quantum Optics, Springer,
doi:10.1007/978-3-540-47620-7 (1991).

[56] F. Campaioli, J. H. Cole and H. Hapuarachchi, Quantum master equations: Tips and
tricks for quantum optics, quantum computing, and beyond, PRX Quantum 5, 020202
(2024), doi:10.1103/PRXQuantum.5.020202.


[57] V. Gorini, A. Kossakowski and E. C. G. Sudarshan, Completely positive dynamical semi-
groups of N-level systems, J. Math. Phys. 17(5), 821 (1976), doi:10.1063/1.522979.

[58] G. Lindblad, On the generators of quantum dynamical semigroups, Commun. Math. Phys.
48(2), 119 (1976), doi:10.1007/BF01608499.

[59] S. Nakajima, On quantum theory of transport phenomena: Steady diffusion, Prog. Theor.
Phys. 20(6), 948 (1958), doi:10.1143/PTP.20.948.

[60] R. Zwanzig, Ensemble method in the theory of irreversibility, J. Chem. Phys. 33(5), 1338
(1960), doi:10.1063/1.1731409.

[61] A. Rivas, A. D. K. Plato, S. F. Huelga and M. B. Plenio, Markovian master equa-
tions: a critical study, New J. Phys. 12(11), 113032 (2010), doi:10.1088/1367-
2630/12/11/113032.

[62] E. B. Davies, Markovian master equations, Commun. Math. Phys. 39(2), 91 (1974),
doi:10.1007/BF01608389.

[63] E. B. Davies, Markovian master equations. II, Math. Ann. 219(2), 147 (1976),
doi:10.1007/BF01351898.

[64] P. P. Potts, A. A. S. Kalaee and A. Wacker, A thermodynamically consistent Markovian
master equation beyond the secular approximation, New J. Phys. 23(12), 123013 (2021),
doi:10.1088/1367-2630/ac3b2f.

[65] P. P. Hofer, M. Perarnau-Llobet, L. D. M. Miranda, G. Haack, R. Silva, J. B. Brask
and N. Brunner, Markovian master equations for quantum thermal machines: local
versus global approach, New J. Phys. 19(12), 123037 (2017), doi:10.1088/1367-
2630/aa964f.

[66] M. T. Mitchison and M. B. Plenio, Non-additive dissipation in open quantum networks out
of equilibrium, New J. Phys. 20(3), 033005 (2018), doi:10.1088/1367-2630/aa9f70.

[67] J. O. González, L. A. Correa, G. Nocerino, J. P. Palao, D. Alonso and G. Adesso, Testing
the validity of the ‘local’ and ‘global’ GKLS master equations on an exactly solvable model,
Open Syst. Inf. Dyn. 24(04), 1740010 (2017), doi:10.1142/S1230161217400108.

[68] S. Seah, S. Nimmrichter and V. Scarani, Refrigeration beyond weak internal coupling,
Phys. Rev. E 98, 012131 (2018), doi:10.1103/PhysRevE.98.012131.

[69] G. Kiršanskas, M. Franckié and A. Wacker, Phenomenological position and energy
resolving Lindblad approach to quantum kinetics, Phys. Rev. B 97, 035432 (2018),
doi:10.1103/PhysRevB.97.035432.

[70] K. Prech, P. Johansson, E. Nyholm, G. T. Landi, C. Verdozzi, P. Samuelsson and P. P. Potts,
Entanglement and thermokinetic uncertainty relations in coherent mesoscopic transport,
Phys. Rev. Res. 5, 023155 (2023), doi:10.1103/PhysRevResearch.5.023155.

[71] H. Spohn and J. L. Lebowitz, Irreversible thermodynamics for quantum sys-
tems weakly coupled to thermal reservoirs, Adv. Chem. Phys. 38, 109 (1978),
doi:10.1002/9780470142578.ch2.

[72] R. Alicki, The quantum open system as a model of the heat engine, J. Phys A: Math. Gen.
12(5), L103 (1979), doi:10.1088/0305-4470/12/5/007.


[73] A. Trushechkin, Unified Gorini-Kossakowski-Lindblad-Sudarshan quantum master
equation beyond the secular approximation, Phys. Rev. A 103, 062226 (2021),
doi:10.1103/PhysRevA.103.062226.

[74] M. Gerry and D. Segal, Full counting statistics and coherences: Fluctuation symmetry
in heat transport with the unified quantum master equation, Phys. Rev. E 107, 054115
(2023), doi:10.1103/PhysRevE.107.054115.

[75] A. Levy and R. Kosloff, The local approach to quantum transport may violate the
second law of thermodynamics, EPL 107(2), 20004 (2014), doi:10.1209/0295-
5075/107/20004.

[76] T. Novotný, Investigation of apparent violation of the second law of thermodynamics in
quantum transport studies, EPL 59(5), 648 (2002), doi:10.1209/epl/i2002-00174-3.

[77] A. S. Trushechkin and I. V. Volovich, Perturbative treatment of inter-site couplings
in the local description of open quantum networks, EPL 113(3), 30005 (2016),
doi:10.1209/0295-5075/113/30005.

[78] M. Josefsson, A. Svilans, A. M. Burke, E. A. Hoffmann, S. Fahlvik, C. Thelander, M. Leijnse
and H. Linke, A quantum-dot heat engine operating close to the thermodynamic effi-
ciency limits, Nat. Nanotechnol. 13(10), 920 (2018), doi:10.1038/s41565-018-0200-5.

[79] N. Brunner, N. Linden, S. Popescu and P. Skrzypczyk, Virtual qubits, virtual tem-
peratures, and the foundations of thermodynamics, Phys. Rev. E 85, 051117 (2012),
doi:10.1103/PhysRevE.85.051117.

[80] J. B. Brask, G. Haack, N. Brunner and M. Huber, Autonomous quantum thermal ma-
chine for generating steady-state entanglement, New J. Phys. 17(11), 113029 (2015),
doi:10.1088/1367-2630/17/11/113029.

[81] C. H. Bennett, D. P. DiVincenzo, J. A. Smolin and W. K. Wootters, Mixed-
state entanglement and quantum error correction, Phys. Rev. A 54, 3824 (1996),
doi:10.1103/PhysRevA.54.3824.

[82] S. Hill and W. K. Wootters, Entanglement of a pair of quantum bits, Phys. Rev. Lett. 78,
5022 (1997), doi:10.1103/PhysRevLett.78.5022.

[83] W. K. Wootters, Entanglement of formation of an arbitrary state of two qubits, Phys. Rev.
Lett. 80, 2245 (1998), doi:10.1103/PhysRevLett.80.2245.

[84] N. Friis, A. R. Lee and D. E. Bruschi, Fermionic-mode entanglement in quantum informa-
tion, Phys. Rev. A 87, 022338 (2013), doi:10.1103/PhysRevA.87.022338.

[85] G. C. Wick, The evaluation of the collision matrix, Phys. Rev. 80, 268 (1950),
doi:10.1103/PhysRev.80.268.

[86] L. G. Molinari, Notes on Wick’s theorem in many-body theory,
http://arxiv.org/abs/1710.09248.

[87] V. Coffman, J. Kundu and W. K. Wootters, Distributed entanglement, Phys. Rev. A 61,
052306 (2000), doi:10.1103/PhysRevA.61.052306.

[88] S. Khandelwal, N. Palazzo, N. Brunner and G. Haack, Critical heat current for operat-
ing an entanglement engine, New J. Phys. 22(7), 073039 (2020), doi:10.1088/1367-
2630/ab9983.


[89] J. Bohr Brask, F. Clivaz, G. Haack and A. Tavakoli, Operational nonclassicality in minimal
autonomous thermal machines, Quantum 6, 672 (2022), doi:10.22331/q-2022-03-22-
672.

[90] N. Linden, S. Popescu and P. Skrzypczyk, How small can thermal machines
be? the smallest possible refrigerator, Phys. Rev. Lett. 105, 130401 (2010),
doi:10.1103/PhysRevLett.105.130401.

[91] M. T. Mitchison and P. P. Potts, Physical Implementations of Quantum Absorption Refrig-
erators, p. 149, Springer International Publishing, doi:10.1007/978-3-319-99046-0_6
(2018).

[92] G. Maslennikov, S. Ding, R. Hablutzel, J. Gan, A. Roulet, S. Nimmrichter, J. Dai,
V. Scarani and D. Matsukevich, Quantum absorption refrigerator with trapped ions, Nat.
Commun. 10, 202 (2019), doi:10.1038/s41467-018-08090-0.

[93] M. A. Aamir, P. J. Suria, J. A. M. Guzmán, C. Castillo-Moreno, J. M. Epstein, N. Y. Halpern
and S. Gasparinetti, Thermally driven quantum refrigerator autonomously resets super-
conducting qubit, http://arxiv.org/abs/2305.16710.

[94] P. P. Hofer, M. Perarnau-Llobet, J. B. Brask, R. Silva, M. Huber and N. Brunner, Au-
tonomous quantum refrigerator in a circuit QED architecture based on a Josephson junc-
tion, Phys. Rev. B 94, 235420 (2016), doi:10.1103/PhysRevB.94.235420.

[95] M. T. Mitchison, M. P. Woods, J. Prior and M. Huber, Coherence-assisted single-shot
cooling by quantum absorption refrigerators, New J. Phys. 17(11), 115013 (2015),
doi:10.1088/1367-2630/17/11/115013.

[96] U. Seifert, Stochastic thermodynamics, fluctuation theorems and molecular machines,
Rep. Prog. Phys. 75(12), 126001 (2012), doi:10.1088/0034-4885/75/12/126001.

[97] G. Manzano, J. M. Horowitz and J. M. R. Parrondo, Quantum fluctuation theorems for
arbitrary environments: Adiabatic and nonadiabatic entropy production, Phys. Rev. X 8,
031037 (2018), doi:10.1103/PhysRevX.8.031037.

[98] J. M. Horowitz and T. R. Gingrich, Thermodynamic uncertainty relations constrain non-
equilibrium fluctuations, Nat. Phys. 16(1), 15 (2020), doi:10.1038/s41567-019-0702-6.

[99] P. Talkner, E. Lutz and P. Hänggi, Fluctuation theorems: Work is not an observable, Phys.
Rev. E 75, 050102 (2007), doi:10.1103/PhysRevE.75.050102.

[100] J. J. Sakurai and J. Napolitano, Modern Quantum Mechanics, Cambridge University
Press, 3rd edn., doi:10.1017/9781108587280 (2020).

[101] G. E. Crooks, Path-ensemble averages in systems driven far from equilibrium, Phys. Rev.
E 61, 2361 (2000), doi:10.1103/PhysRevE.61.2361.

[102] C. Jarzynski, Nonequilibrium equality for free energy differences, Phys. Rev. Lett. 78,
2690 (1997), doi:10.1103/PhysRevLett.78.2690.

[103] G. Hummer and A. Szabo, Free energy profiles from single-molecule pulling experiments,
Proc. Natl. Acad. Sci. U.S.A. 107, 21441 (2010), doi:10.1073/pnas.1015661107.

[104] C. Jarzynski, Rare events and the convergence of exponentially averaged work values,
Phys. Rev. E 73, 046105 (2006), doi:10.1103/PhysRevE.73.046105.


[105] U. Seifert, Entropy production along a stochastic trajectory and an integral fluctuation
theorem, Phys. Rev. Lett. 95, 040602 (2005), doi:10.1103/PhysRevLett.95.040602.

[106] M. Perarnau-Llobet, E. Bäumer, K. V. Hovhannisyan, M. Huber and A. Acin, No-go theo-
rem for the characterization of work fluctuations in coherent quantum systems, Phys. Rev.
Lett. 118, 070601 (2017), doi:10.1103/PhysRevLett.118.070601.

[107] K. V. Hovhannisyan and A. Imparato, Energy conservation and fluctuation theorem are
incompatible for quantum work, Quantum 8, 1336 (2024), doi:10.22331/q-2024-05-
06-1336.

[108] A. Seif, M. Hafezi and C. Jarzynski, Machine learning the thermodynamic arrow of time,
Nat. Phys. 17(1), 105 (2021), doi:10.1038/s41567-020-1018-2.

[109] S. Gustavsson, R. Leturcq, B. Simovič, R. Schleser, T. Ihn, P. Studerus, K. Ensslin, D. C.
Driscoll and A. C. Gossard, Counting statistics of single electron transport in a quantum
dot, Phys. Rev. Lett. 96, 076605 (2006), doi:10.1103/PhysRevLett.96.076605.

[110] T. R. Gingrich, J. M. Horowitz, N. Perunov and J. L. England, Dissipation
bounds all steady-state current fluctuations, Phys. Rev. Lett. 116, 120601 (2016),
doi:10.1103/PhysRevLett.116.120601.

[111] K. Ptaszyński, Coherence-enhanced constancy of a quantum thermoelectric generator,
Phys. Rev. B 98, 085425 (2018), doi:10.1103/PhysRevB.98.085425.

[112] B. K. Agarwalla and D. Segal, Assessing the validity of the thermodynamic
uncertainty relation in quantum systems, Phys. Rev. B 98, 155438 (2018),
doi:10.1103/PhysRevB.98.155438.

[113] A. A. S. Kalaee, A. Wacker and P. P. Potts, Violating the thermodynamic un-
certainty relation in the three-level maser, Phys. Rev. E 104, L012103 (2021),
doi:10.1103/PhysRevE.104.L012103.

[114] A. Rignon-Bret, G. Guarnieri, J. Goold and M. T. Mitchison, Thermodynam-
ics of precision in quantum nanomachines, Phys. Rev. E 103, 012133 (2021),
doi:10.1103/PhysRevE.103.012133.

[115] P. Pietzonka, Classical pendulum clocks break the thermodynamic uncertainty relation,
Phys. Rev. Lett. 128, 130606 (2022), doi:10.1103/PhysRevLett.128.130606.

[116] A. A. Clerk, M. H. Devoret, S. M. Girvin, F. Marquardt and R. J. Schoelkopf, Introduction
to quantum noise, measurement, and amplification, Rev. Mod. Phys. 82, 1155 (2010),
doi:10.1103/RevModPhys.82.1155.

[117] P. Pietzonka and U. Seifert, Universal trade-off between power, efficiency, and
constancy in steady-state heat engines, Phys. Rev. Lett. 120, 190602 (2018),
doi:10.1103/PhysRevLett.120.190602.

[118] M. Polettini, G. Verley and M. Esposito, Efficiency statistics at all times:
Carnot limit at finite power, Phys. Rev. Lett. 114, 050601 (2015),
doi:10.1103/PhysRevLett.114.050601.

[119] M. Campisi and R. Fazio, The power of a critical heat engine, Nat. Commun. 7, 11895
(2016), doi:10.1038/ncomms11895.


[120] P. Pietzonka, A. C. Barato and U. Seifert, Universal bound on the efficiency of molecular
motors, J. Stat. Mech.: Theor. Exp. 2016(12), 124004 (2016), doi:10.1088/1742-
5468/2016/12/124004.

[121] R. Marsland, W. Cui and J. M. Horowitz, The thermodynamic uncertainty rela-
tion in biochemical oscillations, J. R. Soc. Interface 16(154), 20190098 (2019),
doi:10.1098/rsif.2019.0098.

[122] J. C. Maxwell, Theory of Heat, Longmans, Green, and Co. (1871).

[123] V. Serreli, C.-F. Lee, E. R. Kay and D. A. Leigh, A molecular information ratchet, Nature
445(7127), 523 (2007), doi:10.1038/nature05452.

[124] S. Toyabe, T. Sagawa, M. Ueda, E. Muneyuki and M. Sano, Experimental demonstration
of information-to-energy conversion and validation of the generalized Jarzynski equality,
Nat. Phys. 6(12), 988 (2010), doi:10.1038/nphys1821.

[125] N. Cottet, S. Jezouin, L. Bretheau, P. Campagne-Ibarcq, Q. Ficheux, J. Anders, A. Auf-
fèves, R. Azouit, P. Rouchon and B. Huard, Observing a quantum Maxwell demon at work,
Proc. Natl. Acad. Sci. U.S.A. 114(29), 7561 (2017), doi:10.1073/pnas.1704827114.

[126] Y. Masuyama, K. Funo, Y. Murashita, A. Noguchi, S. Kono, Y. Tabuchi, R. Yamazaki,
M. Ueda and Y. Nakamura, Information-to-work conversion by Maxwell’s demon in a
superconducting circuit quantum electrodynamical system, Nat. Commun. 9(1), 1291
(2018), doi:10.1038/s41467-018-03686-y.

[127] M. Naghiloo, J. J. Alonso, A. Romito, E. Lutz and K. W. Murch, Information gain
and loss for a quantum Maxwell’s demon, Phys. Rev. Lett. 121, 030604 (2018),
doi:10.1103/PhysRevLett.121.030604.

[128] K. Maruyama, F. Nori and V. Vedral, Colloquium: The physics of Maxwell’s demon and
information, Rev. Mod. Phys. 81, 1 (2009), doi:10.1103/RevModPhys.81.1.

[129] T. Sagawa, Thermodynamics of Information Processing in Small Systems, Prog. Theor.
Phys. 127(1), 1 (2012), doi:10.1143/PTP.127.1.

[130] J. M. R. Parrondo, J. M. Horowitz and T. Sagawa, Thermodynamics of information, Nat.
Phys. 11, 131 (2015), doi:10.1038/nphys3230.

[131] T. Sagawa and M. Ueda, Generalized Jarzynski equality under nonequilibrium feedback
control, Phys. Rev. Lett. 104, 090602 (2010), doi:10.1103/PhysRevLett.104.090602.

[132] T. Sagawa and M. Ueda, Nonequilibrium thermodynamics of feedback control, Phys. Rev.
E 85, 021104 (2012), doi:10.1103/PhysRevE.85.021104.

[133] P. P. Potts and P. Samuelsson, Detailed fluctuation relation for arbitrary
measurement and feedback schemes, Phys. Rev. Lett. 121, 210603 (2018),
doi:10.1103/PhysRevLett.121.210603.

[134] T. Yada, N. Yoshioka and T. Sagawa, Quantum fluctuation theorem under quantum
jumps with continuous measurement and feedback, Phys. Rev. Lett. 128, 170601 (2022),
doi:10.1103/PhysRevLett.128.170601.

[135] K. Prech and P. P. Potts, Quantum fluctuation theorem for arbitrary measurement and
feedback schemes, http://arxiv.org/abs/2306.12281.


[136] I. D. Terlizzi and M. Baiesi, Kinetic uncertainty relation, J. Phys. A: Math. Theor. 52(2),
02LT03 (2018), doi:10.1088/1751-8121/aaee34.

[137] K. Prech, G. T. Landi, F. Meier, N. Nurgalieva, P. P. Potts, R. Silva and M. Mitchison,
Optimal time estimation and the clock uncertainty relation for stochastic processes, In
preparation.

[138] G. Guarnieri, G. T. Landi, S. R. Clark and J. Goold, Thermodynamics of preci-
sion in quantum nonequilibrium steady states, Phys. Rev. Res. 1, 033021 (2019),
doi:10.1103/PhysRevResearch.1.033021.

[139] Y. Hasegawa, Quantum thermodynamic uncertainty relation for continuous measurement,
Phys. Rev. Lett. 125, 050601 (2020), doi:10.1103/PhysRevLett.125.050601.

[140] Y. Hasegawa, Thermodynamic uncertainty relation for general open quantum systems,
Phys. Rev. Lett. 126, 010602 (2021), doi:10.1103/PhysRevLett.126.010602.

[141] H. J. D. Miller, M. H. Mohammady, M. Perarnau-Llobet and G. Guarnieri, Thermody-
namic uncertainty relation in slowly driven quantum heat engines, Phys. Rev. Lett. 126,
210603 (2021), doi:10.1103/PhysRevLett.126.210603.

[142] T. Van Vu and K. Saito, Thermodynamics of precision in Markovian open quantum dy-
namics, Phys. Rev. Lett. 128, 140602 (2022), doi:10.1103/PhysRevLett.128.140602.

[143] B. Karimi, F. Brange, P. Samuelsson and J. P. Pekola, Reaching the ultimate energy reso-
lution of a quantum detector, Nat. Commun. 11(1), 367 (2020), doi:10.1038/s41467-
019-14247-2.

[144] T. Nakajima, A. Noiri, K. Kawasaki, J. Yoneda, P. Stano, S. Amaha, T. Otsuka, K. Takeda,
M. R. Delbecq, G. Allison, A. Ludwig, A. D. Wieck et al., Coherence of a driven electron
spin qubit actively decoupled from quasistatic noise, Phys. Rev. X 10, 011060 (2020),
doi:10.1103/PhysRevX.10.011060.

[145] M. Spiecker, P. Paluch, N. Gosling, N. Drucker, S. Matityahu, D. Gusenkova, S. Günzler,
D. Rieger, I. Takmakov, F. Valenti, P. Winkel, R. Gebauer et al., Two-level system
hyperpolarization using a quantum Szilard engine, Nat. Phys. 19(9), 1320 (2023),
doi:10.1038/s41567-023-02082-8.

[146] G. N. Nguyen, C. Spinnler, M. R. Hogg, L. Zhai, A. Javadi, C. A. Schrader, M. Erbe,
M. Wyss, J. Ritzmann, H.-G. Babin, A. D. Wieck, A. Ludwig et al., Enhanced electron-spin
coherence in a GaAs quantum emitter, Phys. Rev. Lett. 131, 210805 (2023),
doi:10.1103/PhysRevLett.131.210805.

[147] A. Riera-Campeny, A. Sanpera and P. Strasberg, Quantum systems correlated with a finite
bath: Nonequilibrium dynamics and thermodynamics, PRX Quantum 2, 010340 (2021),
doi:10.1103/PRXQuantum.2.010340.

[148] C. Elouard and C. Lombard Latune, Extending the laws of thermodynamics for
arbitrary autonomous quantum systems, PRX Quantum 4, 020309 (2023),
doi:10.1103/PRXQuantum.4.020309.

[149] S. V. Moreira, P. Samuelsson and P. P. Potts, Stochastic thermodynamics of a quantum
dot coupled to a finite-size reservoir, Phys. Rev. Lett. 131, 220405 (2023),
doi:10.1103/PhysRevLett.131.220405.


[150] C. M. Natarajan, M. G. Tanner and R. H. Hadfield, Superconducting nanowire single-photon
detectors: physics and applications, Supercond. Sci. Technol. 25(6), 063001 (2012),
doi:10.1088/0953-2048/25/6/063001.

[151] F. Minganti, A. Biella, N. Bartolo and C. Ciuti, Spectral theory of Liouvillians
for dissipative phase transitions, Phys. Rev. A 98, 042118 (2018),
doi:10.1103/PhysRevA.98.042118.

[152] R. Di Candia, F. Minganti, K. V. Petrovnin, G. S. Paraoanu and S. Felicetti, Critical
parametric quantum sensing, npj Quantum Inf. 9(1), 23 (2023),
doi:10.1038/s41534-023-00690-z.

[153] Y. Guryanova, S. Popescu, A. J. Short, R. Silva and P. Skrzypczyk, Thermodynamics of
quantum systems with multiple conserved quantities, Nat. Commun. 7(1), 12049 (2016),
doi:10.1038/ncomms12049.

[154] G. Manzano, R. Sánchez, R. Silva, G. Haack, J. B. Brask, N. Brunner and P. P. Potts,
Hybrid thermal machines: Generalized thermodynamic resources for multitasking, Phys.
Rev. Res. 2, 043302 (2020), doi:10.1103/PhysRevResearch.2.043302.

[155] S. Majidy, W. F. Braasch, A. Lasek, T. Upadhyaya, A. Kalev and N. Yunger Halpern,
Noncommuting conserved charges in quantum thermodynamics and beyond, Nat. Rev.
Phys. 5(11), 689 (2023), doi:10.1038/s42254-023-00641-9.
