2406.19206v1
Quantum Thermodynamics
Patrick P. Potts
patrick.potts@unibas.ch
Abstract
The theory of quantum thermodynamics investigates how the concepts of heat, work, and temperature can be carried over to the quantum realm, where fluctuations and randomness are fundamentally unavoidable. These lecture notes provide an introduction to the thermodynamics of small quantum systems. It is illustrated how the laws of thermodynamics emerge from quantum theory and how open quantum systems can be modeled by Markovian master equations. Quantum systems that are designed to perform a certain task, such as cooling or generating entanglement, are considered. Finally, the effect of fluctuations on the thermodynamic description is discussed.

arXiv:2406.19206v1 [quant-ph] 27 Jun 2024
Contents
1 Introduction 3
1.1 Basic concepts 4
1.1.1 Linear algebra 4
1.1.2 The density matrix 6
1.1.3 Second quantization 8
1.1.4 Exact and inexact differentials 9
1.2 Macroscopic thermodynamics 10
1.3 Information theory 14
2 Thermodynamic equilibrium 17
2.1 The grand-canonical ensemble 17
2.1.1 Subsystem of an isolated system 18
2.1.2 Jaynes’ maximum entropy principle 20
2.1.3 Complete passivity 21
2.2 Equivalence of ensembles in the thermodynamic limit 22
SciPost Physics Lecture Notes Submission
6 Fluctuations 60
6.1 Fluctuation theorems for an isolated system 61
6.1.1 The two-point measurement scheme 61
6.1.2 The backward experiment 62
6.1.3 Fluctuation theorems 63
6.2 Fluctuation theorems for the general scenario 64
6.2.1 Forward trajectories 65
6.2.2 Backward trajectories 66
6.2.3 Fluctuation theorems 67
6.3 Full counting statistics 67
6.3.1 Counting particles 68
6.3.2 Example: transport through a quantum dot 70
6.3.3 Counting heat and work 72
6.4 The thermodynamic uncertainty relation 72
6.4.1 Current and current noise 73
6.4.2 Application: heat engine 73
7 Summary 74
8 Outlook 74
8.1 The thermodynamics of information 75
References 77
1 Introduction
By investigating concepts such as heat, work, and temperature, the theory of thermodynamics
was a driving force in the industrial revolution, enabling the improvement and development
of technologies that reshaped society such as steam engines and refrigerators [1]. Quantum
thermodynamics investigates heat, work, temperature, as well as related concepts in quantum
systems. In analogy to macroscopic thermodynamics, this endeavor promises to enable the de-
velopment and improvement of novel quantum- and nano-technologies. Furthermore, a good
understanding of the thermodynamics of quantum systems is key for keeping the energetic
footprint of any scalable quantum technology in check [2].
As the concepts investigated by quantum thermodynamics are very general, the theory is
of relevance for essentially all scenarios involving open quantum systems. This makes the
field extremely broad, including well established topics such as thermoelectricity [3, 4], inves-
tigating how an electrical current arises from a temperature gradient, as well as very recent
investigations into the energetics of single qubit gates [5]. The broad nature of the field im-
plies that the quantum thermodynamics community brings together people with very different
backgrounds.
These lecture notes are meant to provide an introduction to this diverse and fast-growing
field, discussing fundamental concepts and illustrating them with simple examples. Section 1 gives a short introduction to basic concepts and then reviews macroscopic thermodynamics and information theory. Section 2 will address the topic of thermodynamic equilibrium.
This will allow for understanding why a system may often be described by very few parame-
ters, such as temperature and chemical potential, and it will give you a physical understanding
of these fundamental parameters. Section 3 will then discuss how the laws of thermodynam-
ics emerge from quantum mechanics, connecting these very general laws to the microscopic
behavior of quantum systems. Applying the laws of thermodynamics to quantum systems will
allow you to make general predictions on what happens when small systems are placed into
contact with thermal reservoirs. This may improve your physical intuition for how systems in
and out of equilibrium behave, which is an extremely valuable skill as a physicist. In Sec. 4, we
will study quantum master equations as a tool to describe open quantum systems. This is an
extremely useful and versatile tool that can be employed to model many physical systems, e.g.,
quantum dots, NV centers, and optical cavities. Section 5 will then investigate quantum ther-
mal machines, exploring some of the tasks that can be achieved with open quantum systems.
Such tasks include the production of work, refrigeration, and the creation of entanglement.
Finally, in Sec. 6, we will consider fluctuations which become important in nano-scale systems.
You will learn how thermodynamic considerations may result in equalities and inequalities that
can be understood as extensions of the laws of thermodynamics, determining the behavior of
fluctuations around average values.
Vectors in Hilbert spaces: A (pure) quantum state can be described by a vector in a complex
Hilbert space H, which is a vector space equipped with an inner product. Such vectors may
be denoted by
|ψ〉 , |ϕ〉 , a |ψ〉 + b |ϕ〉 , (1)
where a and b are complex numbers. For reasons that will become clear later, we also call a
vector a ket. Any vector can be written using a set of linearly independent basis vectors | j 〉 as
|ψ〉 = ∑_j ψ_j | j〉 . (2)
Dual vectors: For each vector, a dual vector (an element of the dual space H∗) exists, given by the Hermitian conjugate of the original vector,

〈ψ| , 〈ϕ| , a∗ 〈ψ| + b∗ 〈ϕ| . (4)

A dual vector is also called a bra. The notation introduced in Eqs. (1) and (4) is called Dirac or bra-ket notation.
Inner product: The inner product between two vectors |ψ〉 and |ϕ〉 is denoted by
〈ψ|ϕ〉 ∈ C, (5)
and is a complex number. Note that this makes any dual vector a linear operator mapping
vectors onto the Hilbert space of complex numbers, i.e., 〈ψ| : H → C. With the inner product,
we may define an orthonormal basis | j〉 as one fulfilling

〈i| j〉 = δ_{i,j} , (6)

where δ_{i,j} denotes the Kronecker delta. Using Eq. (2), we may express any vector through an orthonormal basis. The inner product between two vectors may then be evaluated as

〈ψ|ϕ〉 = ∑_j ψ∗_j 〈 j| ∑_k ϕ_k |k〉 = ∑_j ψ∗_j ϕ_j . (7)
Composite systems: When describing a composite system that is made of multiple subsys-
tems, a tensor product space can be used. In the case of two subsystems (a bi-partite system),
the total Hilbert space is then given by H = HA ⊗ HB , where HA and HB are the Hilbert spaces
describing subsystems A and B, and ⊗ denotes the Kronecker product.
An orthonormal basis can be constructed from orthonormal bases of the subsystems as

|a, b〉 ≡ |a〉 ⊗ |b〉 , (8)

where |a〉 and |b〉 are the basis states of the subsystems.
Sometimes it is necessary to discard one of the subsystems. This is achieved through the
partial trace. For an operator acting on the composite system Ĉ : H → H, the partial trace
over the subsystem B is defined as
Tr_B{Ĉ} = ∑_{a,a′,b} |a〉〈a′| 〈a, b| Ĉ |a′, b〉 . (9)
Row and column vectors: Every complex vector space of finite dimension n is isomorphic
to Cn . What this means is that as long as we consider finite-dimensional Hilbert spaces, we
may use conventional row and column vectors. For instance, a set of orthonormal basis vectors
may be identified by column vectors with the j -th entry equal to one and all others equal to
zero. General vectors can then be expressed through Eq. (2), i.e.,
| j〉 ≡ (0, ··· , 0, 1, 0, ··· )^T , 〈 j| ≡ (0, ··· , 0, 1, 0, ··· ) , ⇒ |ψ〉 ≡ (ψ_0 , ψ_1 , ψ_2 , ··· )^T , 〈ψ| ≡ (ψ∗_0 , ψ∗_1 , ψ∗_2 , ··· ) , (10)

where T denotes the transpose and the single nonzero entry of | j〉 and 〈 j| sits at the j-th position.
Matrices and the resolution of the identity: An important operator is the identity operator
1 defined by
1 |ψ〉 = |ψ〉 ∀ |ψ〉 . (11)
Using any orthonormal basis, the identity may be written
1 = ∑_j | j〉〈 j| . (12)
This equation is called the resolution of the identity and it is heavily used in deriving various
results in quantum mechanics. With its help, we may express an operator in the basis of | j 〉 as
Â = ∑_j | j〉〈 j| Â ∑_k |k〉〈k| = ∑_{j,k} A_{jk} | j〉〈k| , (13)
where A_{jk} = 〈 j| Â |k〉 are the matrix elements of Â. Indeed, with the help of the row and column vectors in Eq. (10), we may identify Â with the matrix that has entry A_{jk} in row j and column k,

Â ≡ (A_{jk}) , † ≡ (A∗_{kj}) , (14)

and we recover the usual prescription for the Hermitian conjugate of matrices.
Superoperators: We may consider operators as vectors in a different vector space, the so-
called Liouville space. The Hilbert-Schmidt inner product between two operators  and B̂
is defined as Tr{† B̂}. With this inner product, Liouville space is itself a Hilbert space. An
operator that acts on a vector in Liouville space is called a superoperator to distinguish it
from operators that act on states |ψ〉. Superoperators thus act on operators the same way
as operators act on states. Throughout these lecture notes, superoperators are denoted by calligraphic symbols, e.g., L, while operators can be identified by their hat, e.g., Â (with a few exceptions such as the identity operator 1).
As discussed above, we may write a vector in Liouville space as a column vector. Starting
from the matrix representation of an operator Â, c.f. Eq. (14), this can be achieved by stacking
its columns. Any superoperator may then be written as a matrix. This procedure is called
vectorization (see Ref. [18] for more details) and it can be highly useful when using numerics
to compute the behavior of open quantum systems.
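The column-stacking prescription can be sketched numerically. In the following numpy snippet (all matrices are random, illustrative operators), the superoperator ρ̂ → Âρ̂B̂ becomes the matrix B̂^T ⊗ Â acting on the vectorized state, which is the standard vectorization identity:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
d = 3  # Hilbert-space dimension (arbitrary choice for this demo)

# Random matrices playing the roles of A-hat, B-hat, and an operator rho
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

def vec(X):
    """Vectorize an operator by stacking its columns."""
    return X.flatten(order="F")

# The superoperator rho -> A rho B becomes the matrix (B^T kron A) in Liouville space
S = np.kron(B.T, A)

lhs = vec(A @ rho @ B)
rhs = S @ vec(rho)
print(np.allclose(lhs, rhs))  # the two routes agree
```

Any Liouville-space superoperator built from left and right multiplications can be assembled this way, which is what makes vectorized master equations amenable to standard linear-algebra routines.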
The density matrix: A general quantum state, including situations of incomplete knowledge, is described by the density matrix

ρ̂ = ∑_j p_j |ψ_j〉〈ψ_j| , (15)

where the probabilities obey p_j ≥ 0 and ∑_j p_j = 1. We may interpret the density matrix as the system being in the pure state |ψ_j〉 with probability p_j. If multiple p_j are non-zero, this describes a scenario of incomplete knowledge.
by using the eigenbasis of the operator to define M̂ j = | j 〉〈 j|. In this case, we say that a
projective measurement of the operator  gives the value a j with probability p j = 〈 j|ρ̂| j 〉.
The average value of the measurement outcomes is given by
〈Â〉 ≡ Tr{Âρ̂} = ∑_j a_j p_j . (16)
We also call this the average value of Â. After a projective measurement with outcome a j ,
the system is collapsed into the state | j 〉〈 j |. Repeating the same projective measurement twice
thus necessarily gives the same outcome. While projective measurements are widely used in
quantum theory, they represent an idealization that strictly speaking cannot be implemented
in the laboratory as they would violate the third law of thermodynamics [19].
Classical mixture vs quantum superposition: Let us consider the toss of a coin, where we
identify with |0〉 the outcome tails and with |1〉 the outcome heads. The outcome of such a
coin toss (before observation) is described by the density matrix
ρ̂ = (|0〉〈0| + |1〉〈1|)/2 = 1/2 , (17)

which is proportional to the identity matrix and is also called the maximally mixed state because each outcome is equally likely. Such a mixture of states is completely classical and merely reflects a scenario of incomplete knowledge, in this case the outcome of the coin toss.
We now compare the state in Eq. (17) to a quantum superposition provided by the pure state |+〉 = (|0〉 + |1〉)/√2,

ρ̂ = |+〉〈+| = (|0〉〈0| + |1〉〈1| + |0〉〈1| + |1〉〈0|)/2 . (18)
In contrast to the maximally mixed state, this quantum state is pure and we thus have perfect
knowledge of the system. Performing a projective measurement on this state in the basis |±〉
will with certainty result in +. However, the state describes a coherent superposition of the
states |0〉 and |1〉. This coherent superposition is distinguished from a mixture by the off-
diagonal elements |0〉〈1| and |1〉〈0| (also called coherences).
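The difference between the two states can be made explicit with a few lines of numpy (the measurement probabilities below follow the projective-measurement rule p = 〈+|ρ̂|+〉 discussed above):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

# Classical mixture, Eq. (17): maximally mixed state
rho_mix = 0.5 * (np.outer(ket0, ket0.conj()) + np.outer(ket1, ket1.conj()))
# Coherent superposition, Eq. (18): pure state |+><+|
rho_plus = np.outer(plus, plus.conj())

# The coherences (off-diagonal elements) distinguish the two states
print(rho_mix)   # diagonal: no coherences
print(rho_plus)  # off-diagonal elements equal 1/2

# Probability of outcome + in a projective measurement in the |+/-> basis
p_mix = np.real(plus.conj() @ rho_mix @ plus)
p_plus = np.real(plus.conj() @ rho_plus @ plus)
print(p_mix, p_plus)  # 0.5 for the mixture, 1.0 for the superposition
```

The mixture gives either |±〉 outcome with probability 1/2, while the pure superposition gives + with certainty, which is exactly the role of the coherences.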
Time evolution of an isolated system: The time-evolution of a system that is isolated from
its environment is given by
ρ̂(t) = Û(t) ρ̂(0) Û†(t) , Û(t) = T e^{−i ∫_0^t dt′ Ĥ(t′)} , (19)

where Ĥ(t) is the Hamiltonian of the system and T denotes the time-ordering operator. The time-ordered exponential appearing in Eq. (19) can be written as

T e^{−i ∫_0^t dt′ Ĥ(t′)} = lim_{δt→0} e^{−iδt Ĥ(t)} e^{−iδt Ĥ(t−δt)} e^{−iδt Ĥ(t−2δt)} ··· e^{−iδt Ĥ(2δt)} e^{−iδt Ĥ(δt)} e^{−iδt Ĥ(0)} , (20)
such that the time argument in the Hamiltonian on the right-hand side increases from the
right to the left of the expression. Each exponential in the product can be understood as the
time-evolution by the infinitesimal time-step δt [20].
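The product formula in Eq. (20) can be evaluated directly on a computer. The sketch below (with an illustrative time-dependent qubit Hamiltonian and ħ = 1) builds the evolution from short-time exponentials, with later times acting from the left:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    # An illustrative time-dependent Hamiltonian
    return sx + 0.5 * t * sz

def expm_herm(A, factor):
    """exp(factor * A) for Hermitian A via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(factor * w)) @ V.conj().T

def time_ordered_U(t_final, steps):
    """Approximate the time-ordered exponential by a product of short-time steps."""
    dt = t_final / steps
    U = np.eye(2, dtype=complex)
    for n in range(steps):
        U = expm_herm(H(n * dt), -1j * dt) @ U  # later times act from the left
    return U

U1 = time_ordered_U(1.0, 2000)
U2 = time_ordered_U(1.0, 4000)
print(np.allclose(U1 @ U1.conj().T, np.eye(2)))  # unitarity of the evolution
print(np.linalg.norm(U1 - U2))  # shrinks as the step size decreases
```

Each factor is unitary, so the product is unitary for any step size; refining δt only improves the approximation to the time-ordered exponential.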
Note that in quantum mechanics, a system that is isolated from its environment is denoted
as a closed system. This can be confusing in the field of quantum thermodynamics because
in thermodynamics, a closed system traditionally refers to a system that can exchange energy
but cannot exchange matter with its environment. To avoid confusion, I use the term isolated
system throughout these lecture notes.
Bosons: For a single bosonic mode, a central object is the creation operator ↠, which creates
a boson in this mode. Its Hermitian conjugate â denotes the annihilation operator, removing
a boson from the mode. These operators obey the canonical commutation relations
The state without any bosons is denoted the vacuum state |0〉 and by definition it is annihilated
by the annihilation operator, â |0〉 = 0. The state with n bosons (also called a Fock state) can
be written with help of the creation operator as
|n〉 ≡ (â†)^n/√(n!) |0〉 , 〈n|m〉 = δ_{n,m} . (23)
With these definitions, we find the action of the creation and annihilation operators on the
Fock states
â |n〉 = √n |n − 1〉 , ↠|n〉 = √(n + 1) |n + 1〉 , ↠â |n〉 = n |n〉 . (24)
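These relations can be checked with explicit matrices in a truncated Fock space (the truncation dimension below is an arbitrary demo choice; truncation is an approximation to the infinite-dimensional bosonic mode):

```python
import numpy as np

N = 40  # truncation of the infinite-dimensional Fock space (demo choice)

# Annihilation operator in the Fock basis: <n-1| a |n> = sqrt(n)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

n_op = adag @ a  # number operator

# Check Eq. (24) on the Fock state |5>
ket5 = np.zeros(N); ket5[5] = 1.0
print(np.allclose(a @ ket5, np.sqrt(5) * np.eye(N)[:, 4]))  # a|5> = sqrt(5)|4>
print(np.allclose(n_op @ ket5, 5 * ket5))                   # a†a|5> = 5|5>

# The commutator [a, a†] = 1 holds except in the last row/column,
# which is an artifact of the truncation
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))
```

The failure of the commutation relation in the last basis state is a useful reminder that numerical truncations must be chosen large enough for the states one actually populates.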
Fermions: For a single fermionic mode, the creation and annihilation operators are denoted as ĉ† and ĉ respectively. In contrast to bosonic operators, they obey canonical anti-commutation relations

{ĉ, ĉ†} ≡ ĉĉ† + ĉ†ĉ = 1 , {ĉ, ĉ} = {ĉ†, ĉ†} = 0 . (25)

The latter relation directly implies that ĉ² = (ĉ†)² = 0. A fermionic mode may thus at most be occupied by a single fermion. This is known as the Pauli exclusion principle. Just like for
bosons, the vacuum state |0〉 is annihilated by the annihilation operator, ĉ |0〉 = 0. The Fock states can then be written as

|n〉 = (ĉ†)^n |0〉 , (26)

where n = 0, 1 and ĉ†ĉ is called the number operator just like for bosons. Note that due
to the Pauli exclusion principle, the occupied state |1〉 is annihilated by the creation operator
ĉ † |1〉 = 0, implying a symmetry between creation and annihilation operators. Indeed, we may
call ĉ † the annihilation operator for a hole which has the vacuum state |1〉.
Multiple modes are denoted by an index and obey the canonical anti-commutation rela-
tions
{ĉ j , ĉk† } = δ j,k , {ĉ j , ĉk } = {ĉ †j , ĉk† } = 0. (29)
Note that operators for different modes anti-commute, implying a sign change when exchang-
ing two fermions. Just as for bosons, the number states can be written as
|n_0, n_1, ···〉 = |n〉 ≡ ∏_j (ĉ†_j)^{n_j} |0〉 , ĉ†_j ĉ_j |n〉 = n_j |n〉 . (30)
Note however that due to the anti-commutation relations, the order in which the ĉ†_j act on the vacuum in the last expression matters.
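The single-mode relations can be verified with explicit 2×2 matrices. For two modes, a Jordan-Wigner-type sign factor is one standard construction (an assumption of this sketch, not spelled out in the text) that makes operators of different modes anti-commute:

```python
import numpy as np

# Single fermionic mode in the basis {|0>, |1>}: c|1> = |0>, c|0> = 0
c = np.array([[0, 1], [0, 0]], dtype=complex)
cdag = c.conj().T
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])  # Jordan-Wigner string operator

def anticomm(A, B):
    return A @ B + B @ A

# Single-mode anti-commutation relations
print(np.allclose(anticomm(c, cdag), np.eye(2)))  # {c, c†} = 1
print(np.allclose(c @ c, 0))                      # c² = 0 (Pauli exclusion)

# Two modes as in Eq. (29): c1 = c ⊗ 1, c2 = Z ⊗ c
c1 = np.kron(c, I2)
c2 = np.kron(Z, c)
print(np.allclose(anticomm(c1, c2), 0))            # different modes anti-commute
print(np.allclose(anticomm(c1, c2.conj().T), 0))   # {c1, c2†} = 0
```

Without the Z factor, the two single-mode operators would commute rather than anti-commute, which is precisely the sign change under fermion exchange mentioned above.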
where the subscript i ( f ) denotes the initial (final) values of the curve γ.
We now consider a differential form

d z = a(x, y) d x + b(x, y) d y . (33)

We call it an exact differential if it can be
written as the differential of a function. We call the differential form in Eq. (33) an inexact differential if it cannot be written as the differential of a function. In this case, ∂_y a(x, y) ≠ ∂_x b(x, y). Importantly, the integral of an inexact differential generally depends on the path of integration and not just on its endpoints.
As an example, we consider the exact differential
d z = yd x + x d y, (35)
which is the differential of the function z(x, y) = x y. The differential in Eq. (35) can be written as the sum of two inexact differentials, y d x and x d y, where d̄ denotes an inexact differential. We may now integrate these differentials along two different curves, γ_1 and γ_2, that share the endpoints (0, 0) and (1, 1).
As expected, we recover
∫_{γ_1} d z = ∫_{γ_2} d z = z(1, 1) − z(0, 0) = 1 . (40)
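Path dependence can be made tangible numerically. The sketch below (the two parametrized curves are illustrative choices with the stated endpoints) integrates y dx and x dy separately and shows that only their sum, the exact differential dz, is path-independent:

```python
import numpy as np

def integrate(path, n=100000):
    """Integrate y dx, x dy, and dz = y dx + x dy along a path s -> (x(s), y(s))."""
    s = np.linspace(0.0, 1.0, n)
    x, y = path(s)
    dx, dy = np.diff(x), np.diff(y)
    xm, ym = (x[:-1] + x[1:]) / 2, (y[:-1] + y[1:]) / 2  # midpoint rule
    W1 = np.sum(ym * dx)    # integral of y dx (inexact)
    W2 = np.sum(xm * dy)    # integral of x dy (inexact)
    return W1, W2, W1 + W2  # the sum corresponds to the exact differential dz

gamma1 = lambda s: (s, s**2)   # curve y = x^2 from (0,0) to (1,1)
gamma2 = lambda s: (s**2, s)   # curve x = y^2 from (0,0) to (1,1)

print(integrate(gamma1))  # approximately (1/3, 2/3, 1)
print(integrate(gamma2))  # approximately (2/3, 1/3, 1)
```

The two pieces change value when the path changes, but their sum always equals z(1, 1) − z(0, 0) = 1, mirroring how heat and work are path-dependent while the internal energy is a state function.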
The first law of thermodynamics: The first law of thermodynamics may be written in the
form
dU = d̄W − d̄Q . (41)

The first law states that energy is conserved and that energy changes come in two forms: heat d̄Q and work d̄W. Quite generally, work is a useful form of energy that is mediated by macroscopic degrees of freedom. This is in contrast to heat, which is a less accessible form of energy mediated by microscopic degrees of freedom. Note that these quantities are inexact differentials, i.e., they may depend on the path taken when changing thermodynamic variables. Here we use a sign convention where work is positive when it increases the energy of the body, while heat is positive when it flows out of the body and into the environment.
The first law of thermodynamics prevents a perpetuum mobile of the first kind, i.e., a
machine that produces the work it needs to run itself indefinitely. In the presence of any type
of losses, such a machine needs to produce work out of nothing, violating Eq. (41). An example
would be a lamp that is powered by a solar cell, illuminated only by the lamp itself, see Fig. 2.
Figure 2: Perpetuum mobile of the first kind. A lamp illuminates a solar panel which
in turn powers the lamp. Without any hidden energy source, such a device cannot
work because of unavoidable losses. At long times, we may set dU = 0. Furthermore, we split the work d̄W = d̄W_L − d̄W_S, where W_L denotes the work exerted on the lamp by the solar panel and W_S denotes the work exerted on the solar panel by the lamp. In the presence of losses within the solar panel (any other loss mechanism results in similar conclusions), it provides strictly less work to the lamp than it receives, i.e., d̄W_L < d̄W_S. By Eq. (41), this would necessitate another source of energy, either in the form of work or in the form of heat with d̄Q < 0.
Work may come in different forms. For instance, electrical work (also called chemical
work) is produced when charges are moved from a region with lower voltage to a region with
higher voltage, which is what happens when charging a battery. While this type of work will
be the focus of later parts of these lecture notes, here we focus on the traditional example of
mechanical work which is produced by changing the volume of a gas using, e.g., a piston as
illustrated in Fig. 1. In this case, we know from classical mechanics that work is given by d̄W = −p_B d V, where p_B denotes the external pressure exerted by the piston. Furthermore, the heat flowing into the body can be related to the entropy change of the surrounding environment, d̄Q = T_B dS_B, where the subscript B stands for "bath". This relation follows from describing the environment as being in thermal equilibrium at temperature T_B and we will revisit it later. We stress that in these expressions for heat and work, the quantities of the surrounding environment appear (note that d V = −d V_B).
Figure 3: Perpetuum mobile of the second kind. A boat moves across the sea, being
powered by the heat absorbed from the surrounding water. To move in the water, the
boat needs to perform work against the friction force, i.e., d̄W < 0. At long times, we may neglect any changes in the internal energy or the entropy of the system, i.e., dS = dU = 0. The work required to move against the friction force must then be provided by heat coming from the environment, requiring d̄Q < 0. Since the environment only has a single temperature, this violates the second law of thermodynamics given in Eq. (42), which requires d̄Q/T ≥ 0.
The second law of thermodynamics prevents a perpetuum mobile of the second kind, i.e.,
a machine that runs indefinitely by extracting heat out of an equilibrium environment. In
this case, there is only one temperature T describing the environment. Furthermore, for long
times we may neglect U as well as S since they remain finite while the produced work −W
increases indefinitely. The first law then requires W = Q which when inserted in Eq. (42)
provides W /T ≥ 0, preventing any production of work. An example would be given by a boat
that moves (performing work against the friction force of the water) by extracting heat from
the surrounding water, see Fig. 3. Note that dissipating work (i.e., turning work into heat,
W = Q ≥ 0) is perfectly allowed by the laws of thermodynamics and indeed is what happens
when an incandescent light bulb is glowing.
Figure 4: Working principle of a heat engine. Heat from a hot reservoir Qh is partly
converted into work W and partly dissipated into the cold reservoir Qc . The dissipa-
tion ensures that the entropy production is positive, as required by the second law
of thermodynamics, and limits the efficiency to be below the Carnot efficiency, see
Eq. (44).
In addition to this example, the second law of thermodynamics provides important limita-
tions on physically allowed processes. Consider a heat engine, where the temperature gradi-
ent between two thermal reservoirs at temperatures Th > Tc is exploited to produce work, see
Fig. 4. Let us further consider a long-time scenario, where any changes in the system may be
neglected compared to the heat and work which continuously increase. The first and second
laws of thermodynamics then read
W = Q_h + Q_c , Q_h/T_h + Q_c/T_c ≥ 0 . (43)
Thus, it is possible to turn heat into work in a thermodynamic cycle. Prominent examples
of thermodynamic cycles are the Otto cycle, providing the underlying principle behind car
engines, and the Carnot engine, which is an idealized, reversible engine that reaches the Carnot
efficiency given in Eq. (44).
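The constraint in Eq. (43) can be explored numerically. In the sketch below, the signs follow the convention of Fig. 4 (Q_h < 0 for heat leaving the hot bath, W < 0 for produced work); all temperatures and sampled heats are illustrative. Every allowed realization stays below the Carnot efficiency:

```python
import numpy as np

Th, Tc = 500.0, 300.0           # reservoir temperatures (illustrative)
eta_carnot = 1 - Tc / Th        # Carnot efficiency, 0.4 for these values

rng = np.random.default_rng(1)
Qh = -rng.uniform(0.1, 1.0, 100000)        # Qh < 0: heat flows out of the hot bath
# Second law, Eq. (43): Qh/Th + Qc/Tc >= 0  =>  Qc >= -Qh*Tc/Th
Qc = -Qh * Tc / Th + rng.uniform(0.0, 0.5, Qh.size)
W = Qh + Qc                      # first law; W < 0 means work is produced
eta = -W / -Qh                   # produced work per heat drawn from the hot bath
print(eta.max(), eta_carnot)     # the sampled efficiencies never exceed Carnot
```

Efficiencies approach the Carnot value only when the entropy-production inequality in Eq. (43) is (nearly) saturated, i.e., for (nearly) reversible operation.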
Maxwell relations: Let us consider the first law of thermodynamics for a reversible process.
As the body is in equilibrium with the environment throughout the process, we may identify
T = TB , p = pB and because dΣ = 0, we have dS = −dSB . The first law thus reads
dU = T dS − p d V = ∂_S U dS + ∂_V U d V . (46)
Since this is an exact differential, the last expression holds no matter if the process is reversible
or not. However, the identification of heat and work with the first and second term on the
right-hand side is only justified for reversible processes. Since U is a smooth function, partial
derivatives with respect to different variables commute and we find the Maxwell relation
∂S ∂V U = ∂V T = −∂S p. (47)
Other Maxwell relations may be obtained similarly from other thermodynamic potentials such
as the free energy F = U −T S. The free energy plays a particularly important role as it captures
the capacity of a body to do work (in contrast to U capturing the capacity to provide heat and
work). The differential of the free energy reads
d F = −S d T − p d V = ∂_T F d T + ∂_V F d V , (48)

yielding the Maxwell relation

−∂_T ∂_V F = ∂_V S = ∂_T p . (49)
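Maxwell relations are easy to check numerically. The sketch below uses an illustrative free energy F(T, V) = −T ln V (an ideal-gas-like form with all constants dropped, an assumption of this example) and central finite differences to verify ∂_V S = ∂_T p:

```python
import numpy as np

# Illustrative free energy F(T, V) = -T*log(V); then S = log(V), p = T/V
F = lambda T, V: -T * np.log(V)

h = 1e-5
def S(T, V):   # S = -dF/dT (central finite difference)
    return -(F(T + h, V) - F(T - h, V)) / (2 * h)
def p(T, V):   # p = -dF/dV
    return -(F(T, V + h) - F(T, V - h)) / (2 * h)

T0, V0 = 2.0, 3.0
dS_dV = (S(T0, V0 + h) - S(T0, V0 - h)) / (2 * h)
dp_dT = (p(T0 + h, V0) - p(T0 - h, V0)) / (2 * h)
print(dS_dV, dp_dT)  # both close to 1/V0
```

For this free energy both sides equal 1/V, and the same finite-difference check works for any smooth thermodynamic potential.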
I_j = − ln p_j . (52)

It quantifies the information that is gained when observing outcome x_j. The self-information has the following properties:

i) The less probable an event is, the more surprising it is and the more information we gain when observing it.

ii) The information of uncorrelated events is additive. Let p_j^X and p_k^Y denote the probabilities that the independent random variables X and Y take on the values x_j and y_k respectively. It then holds that

I_{j,k} = − ln(p_j^X p_k^Y) = I_j^X + I_k^Y . (53)
Shannon entropy: The Shannon entropy is the average self-information,

H(X) = −∑_j p_j ln p_j . (55)

It quantifies the average information gain when observing the random variable X, or, equivalently, the lack of information before observation. The practical importance of the Shannon entropy is highlighted by the noiseless coding theorem, which states that storing a random variable taken from a distribution with Shannon entropy H requires at least H nats on average (if H is defined using log_2, then it requires at least H bits). Any further compression results in loss of information. This theorem can be illustrated using an example taken from Nielsen and Chuang [17]:
Consider a source that produces the symbols A, B, C , and D with respective probabilities
1/2, 1/4, 1/8, and 1/8. If we do not use any compression, we need two bits of storage per
symbol, e.g.,
A → 00, B → 01, C → 10, D → 11. (56)
However, the symbols provided by the source may be encoded using fewer bits per symbol on average by taking into account the probability distribution of the source. The basic idea is to use fewer bits for the more frequent symbols. For instance, we may use the encoding

A → 0, B → 10, C → 110, D → 111. (57)

Note that we cannot use B → 1 because then we could no longer determine if 110 means C or if it means BBA. With this compression, we find the average number of bits per symbol to be 1/2 · 1 + 1/4 · 2 + 1/8 · 3 + 1/8 · 3 = 7/4. This equals the Shannon entropy (in bits)
1/2 · 1 + 1/4 · 2 + 1/8 · 3 + 1/8 · 3 = 7/4. This equals the Shannon entropy (in bits)
H = −(1/2) log_2(1/2) − (1/4) log_2(1/4) − (1/8) log_2(1/8) − (1/8) log_2(1/8) = 7/4 . (58)
It turns out that any compression algorithm that uses fewer bits results in the loss of information.
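The bookkeeping of this example fits in a few lines. The prefix-free code below is an illustrative choice matching the stated average length of 7/4 bits per symbol:

```python
import math

# Source symbols and probabilities from the example above
probs = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}
code  = {"A": "0", "B": "10", "C": "110", "D": "111"}  # prefix-free code

H_bits = -sum(p * math.log2(p) for p in probs.values())   # Shannon entropy in bits
avg_len = sum(probs[s] * len(code[s]) for s in probs)     # average codeword length
print(H_bits, avg_len)  # both equal 7/4 = 1.75
```

The average code length matches the entropy exactly here because all probabilities are powers of 1/2; for general distributions the entropy is only a lower bound on the achievable average length.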
Von Neumann entropy: The quantum version of the Shannon entropy is given by the von Neumann entropy

S_vN[ρ̂] = −Tr{ρ̂ ln ρ̂} = −∑_j p_j ln p_j , (59)

where p_j denote the eigenvalues of ρ̂. It describes the lack of knowledge we have over a quantum system given a state ρ̂. The von Neumann entropy is minimized for pure states and maximized for the maximally mixed state given by the identity matrix divided by the Hilbert-space dimension d,

0 = S_vN[|ψ〉〈ψ|] ≤ S_vN[ρ̂] ≤ S_vN[1/d] = ln d . (60)
Relative entropy: The relative entropy, also known as the Kullback-Leibler divergence [27] is
useful to quantify information when a random variable is described by an erroneous distribu-
tion. For two distributions {p_j} and {q_j}, the relative entropy is defined as

D_KL[{p_j}||{q_j}] = ∑_j p_j ln(p_j/q_j) . (61)
By definition, DKL ≥ 0 where equality is only achieved if the two distributions are the same. To
shed some light onto the interpretation of this quantity, let us assume that a random variable
X is distributed according to {p j } but we describe it by the erroneous distribution {q j }. The
information gain that we attribute to the outcome x_j then reads − ln q_j. On average, this gives

−∑_j p_j ln q_j = −∑_j p_j ln p_j + ∑_j p_j ln(p_j/q_j) = H(X) + D_KL[{p_j}||{q_j}] . (62)
The left-hand side of this equation provides the information we assign to the random variable
on average, the assumed information. The right-hand side provides the actual average infor-
mation, given by the Shannon entropy, plus the relative entropy. The actual information is thus smaller than the assumed information whenever the two distributions differ, and the discrepancy is quantified by the relative entropy. We can think of the relative entropy as an information loss due to the erroneous description.
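The identity in Eq. (62) is easy to verify for concrete numbers (the two distributions below are illustrative; the erroneous one is simply uniform):

```python
import math

p = [0.5, 0.25, 0.125, 0.125]   # actual distribution
q = [0.25, 0.25, 0.25, 0.25]    # erroneous (uniform) description

H = -sum(pj * math.log(pj) for pj in p)                    # actual average information
assumed = -sum(pj * math.log(qj) for pj, qj in zip(p, q))  # assumed average information
D = sum(pj * math.log(pj / qj) for pj, qj in zip(p, q))    # relative entropy, Eq. (61)

print(assumed, H + D)  # the two sides of Eq. (62) agree
print(D >= 0)          # non-negativity of the relative entropy
```

Here the assumed information exceeds the actual Shannon entropy by exactly D_KL, the information lost by describing p with q.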
Quantum relative entropy: The generalization of the relative entropy to the quantum sce-
nario is defined as [28]
S[ρ̂||σ̂] = Tr{ρ̂ ln ρ̂ − ρ̂ ln σ̂}. (63)
Just as its classical counterpart, it is always non-negative, S[ρ̂||σ̂] ≥ 0, and it quantifies the discrepancy between the assumed and the actual lack of information when a system in state ρ̂ is erroneously described by the state σ̂.
Mutual information: The mutual information is a measure that characterizes the mutual
dependence of two random variables. It tells us how much information one random variable
has on the other and it is defined as
I_MI(X; Y) = D_KL[{p_{j,k}}||{p_j^X p_k^Y}] = H(X) + H(Y) − H(X, Y) . (64)
To extend this definition to the quantum regime, we consider a bi-partite system in a Hilbert
space H = HA ⊗ HB . The quantum mutual information in a quantum state ρ̂AB is defined as
I_MI[A; B] = S[ρ̂_AB || ρ̂_A ⊗ ρ̂_B] = S_vN[ρ̂_A] + S_vN[ρ̂_B] − S_vN[ρ̂_AB] , (65)
where we introduced ρ̂A = TrB {ρ̂AB } and ρ̂B = TrA {ρ̂AB }. Since the quantum relative entropy
is a non-negative quantity, so is the mutual information. Furthermore, the mutual information is bounded from above by a corollary of the Araki–Lieb inequality [29],

I_MI[A; B] ≤ 2 min{S_vN[ρ̂_A], S_vN[ρ̂_B]} ≤ 2 min{ln d_A, ln d_B} , (66)

where d_α denotes the dimension of the Hilbert space H_α. The last inequality follows from the upper bound on the von Neumann entropy given in Eq. (60).
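The quantum mutual information of Eq. (65) can be computed directly; the Bell state used below is an illustrative choice of a maximally entangled two-qubit state:

```python
import numpy as np

def svn(rho):
    """Von Neumann entropy -Tr[rho ln rho] from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]  # discard numerical zeros
    return float(-np.sum(w * np.log(w)))

# Two-qubit Bell state (|00> + |11>)/sqrt(2)
psi = np.zeros(4); psi[0] = psi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(psi, psi)

# Partial traces over B and A (reshape into a 2x2x2x2 tensor first)
t = rho_AB.reshape(2, 2, 2, 2)
rho_A = np.einsum("ikjk->ij", t)
rho_B = np.einsum("kikj->ij", t)

I_MI = svn(rho_A) + svn(rho_B) - svn(rho_AB)
print(I_MI, 2 * np.log(2))  # the Bell state saturates the bound 2 min{ln dA, ln dB}
```

For this pure, maximally entangled state the reduced states are maximally mixed, so the mutual information reaches the upper bound of 2 ln 2; for a product state it would vanish.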
2 Thermodynamic equilibrium
Equilibrium states may be characterized by a few intensive variables such as temperature T ,
chemical potential µ, and pressure p. In equilibrium, no net currents flow and the system
is stationary, i.e., observables become time-independent. Note the similarity to the main as-
sumption in macroscopic thermodynamics, see Sec. 1.2. This is no coincidence. Indeed, in
macroscopic thermodynamics, systems are assumed to be in local equilibrium (i.e., can be
described by a thermal state with variables that may differ from the environment).
2.1 The grand-canonical ensemble

In the grand-canonical ensemble, thermodynamic equilibrium is described by the Gibbs state

ρ̂_G = e^{−β(Ĥ−μN̂)}/Z , Z = Tr{e^{−β(Ĥ−μN̂)}} , (67)
where β = 1/kB T denotes the inverse temperature, Ĥ is the Hamiltonian, N̂ the particle-
number operator, and Z is called the partition function. For equilibrium states, we may generally identify the relevant quantities in macroscopic thermodynamics as average values, e.g., U = Tr{Ĥρ̂_G} and N = Tr{N̂ρ̂_G}. Because the Gibbs state is of central importance for much that follows, we now discuss three different ways of motivating why it provides an adequate description for thermodynamic equilibrium.
2.1.1 Subsystem of an isolated system

The supersystem is described by the microcanonical ensemble,

ρ̂_tot = ∑_{E,N} p_tot(E, N) |E, N〉〈E, N| , (69)

where p_tot(E, N) is nonzero only if the energy and particle number are close to E_tot and N_tot, i.e., for E_tot ≤ E < E_tot + δE and N_tot ≤ N < N_tot + δN. For energies and particle numbers
within these shells, each state is equally likely in the microcanonical ensemble, i.e.,

p_tot(E, N) = Θ(E − E_tot) Θ(E_tot + δE − E) Θ(N − N_tot) Θ(N_tot + δN − N) / [Ω(E_tot, N_tot) δE δN] , (70)

where Ω(E_tot, N_tot) denotes the density of states and Θ(x) the Heaviside theta function, which is equal to one for x > 0 and zero otherwise. Being a macroscopic supersystem, the number of states contributing to the microcanonical ensemble is assumed to be large,

Ω(E_tot, N_tot) δE δN ≫ 1 . (71)
At the same time, for energy and particle numbers to be well defined, we require δE ≪ Etot
and δN ≪ N_tot. This also justifies assuming that the density of states is constant within the energy and particle number shells that contribute to the microcanonical ensemble. For future reference, we note that the von Neumann entropy of the microcanonical ensemble is given by the logarithm of the number of contributing states [see also Eq. (60)],

S_vN[ρ̂_tot] = ln[Ω(E_tot, N_tot) δE δN] . (72)
Since our system of interest is part of the supersystem, we may write the Hamiltonian and particle-number operator as

Ĥ_tot = Ĥ_S + Ĥ_B + V̂ , (73)
N̂_tot = N̂_S + N̂_B , (74)
where the subscript B stands for "bath" and describes the remainder of the supersystem. We now make a weak-coupling approximation that is crucial to obtain the Gibbs state: we assume that V̂ is sufficiently weak such that we may neglect the coupling energy, and the energy and particle eigenstates of the supersystem approximately factorize,

|E, N〉 ≃ |E_S, N_S〉 ⊗ |E_B, N_B〉 , E = E_S + E_B , N = N_S + N_B . (75)
We are interested in the reduced state of the system obtained by taking the partial trace over
the bath
ρ̂_S = Tr_B{ρ̂_tot} = ∑_{E_B,N_B} 〈E_B, N_B| ρ̂_tot |E_B, N_B〉 = ∑_{E_S,N_S} p_S(E_S, N_S) |E_S, N_S〉〈E_S, N_S| . (76)
The probability for the system to be in a state with energy ES and particle number NS may be
written as
p_S(E_S, N_S) = ∑_{E_B,N_B} p_tot(E_S + E_B, N_S + N_B) = [Ω_B(E_tot − E_S, N_tot − N_S) δE δN] / [Ω(E_tot, N_tot) δE δN] , (77)
where Ω_B denotes the density of states of the bath. Introducing the bath entropy S_B(E, N) = k_B ln[Ω_B(E, N) δE δN], we may write

p_S(E_S, N_S) = e^{S_B(E_tot − E_S, N_tot − N_S)/k_B} / [Ω(E_tot, N_tot) δE δN] . (78)

We note that the factor of k_B appears here because this is the thermodynamic, not the von Neumann entropy. Since the system is considered to be small, we may expand the bath entropy around E_tot and N_tot to obtain
S_B(E_tot − E_S, N_tot − N_S) ≃ S_B(E_tot, N_tot) − E_S ∂_E S_B(E, N_tot)|_{E=E_tot} − N_S ∂_N S_B(E_tot, N)|_{N=N_tot} = S_B(E_tot, N_tot) − (E_S − μN_S)/T , (79)
where we identified ∂_E S_B(E, N) = 1/T and ∂_N S_B(E, N) = −μ/T. Inserting the last expression into Eq. (77) results in

p_S(E_S, N_S) = e^{[S_B(E_tot, N_tot) − (E_S − μN_S)/T]/k_B} / [Ω(E_tot, N_tot) δE δN] = e^{−β(E_S − μN_S)}/Z , (80)

where the last equality follows from the normalization of the state. Since this is exactly the
probability for the Gibbs state given in Eq. (67), we have shown that a small subsystem of a
large and isolated supersystem is described by the Gibbs state if the supersystem is described
by the microcanonical ensemble (i.e., has well defined energy and particle number) and if the
system is weakly coupled to the remainder of the supersystem.
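The Gibbs state of Eq. (67) is easy to construct explicitly for a small system. The sketch below (with illustrative parameter values and k_B = 1) does so for a single fermionic mode of energy ε and recovers the familiar Fermi-Dirac occupation:

```python
import numpy as np

# Gibbs state of a single fermionic mode: H = eps * c†c, N = c†c
beta, mu, eps = 2.0, 0.3, 1.0   # illustrative values (k_B = 1)

H = np.diag([0.0, eps])   # Hamiltonian in the occupation basis {|0>, |1>}
N = np.diag([0.0, 1.0])   # particle-number operator

w = np.exp(-beta * (np.diag(H) - mu * np.diag(N)))  # Boltzmann weights
Z = w.sum()                                         # partition function
rho_G = np.diag(w / Z)                              # Gibbs state, Eq. (67)

n_avg = np.trace(rho_G @ N)
fermi = 1.0 / (np.exp(beta * (eps - mu)) + 1.0)  # Fermi-Dirac occupation
print(n_avg, fermi)  # the two agree
```

For larger systems the same construction applies with matrix exponentials of Ĥ − μN̂; here everything is diagonal because Ĥ and N̂ commute and share the occupation basis.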
This form may be motivated by demanding that equilibrium states do not change in time,
which implies [see also Eq. (19)]
and that there are no coherent superpositions of states with different numbers of particles,
which implies [ N̂, ρ̂] = 0. The latter assumption may be motivated by a particle superselection
rule [34] and it does not apply when considering scenarios with a vanishing chemical potential
µ = 0. We now want to maximize the von Neumann entropy
$$S_{\rm vN}[\hat\rho] = -\sum_n p_n \ln p_n, \tag{84}$$
where the first constraint simply ensures the normalization of the state. Such constrained
maximization problems can be solved by the method of Lagrange multipliers [35]. To this
end, we define the function
$$\mathcal{L} = -\sum_n p_n \ln p_n - \lambda_0\Big(\sum_n p_n - 1\Big) - \lambda_1\Big(\sum_n p_n E_n - \bar E\Big) - \lambda_2\Big(\sum_n p_n N_n - \bar N\Big). \tag{86}$$
The solution to the maximization procedure is the distribution {pn } that obeys
$$\frac{\partial \mathcal{L}}{\partial p_n} = -\ln p_n - 1 - \lambda_0 - \lambda_1 E_n - \lambda_2 N_n = 0. \tag{87}$$
We now identify
$$\lambda_1 = \beta, \qquad \lambda_2 = -\beta\mu, \qquad e^{\lambda_0+1} = Z. \tag{88}$$
The solution of Eq. (87) then reads
$$p_n = \frac{e^{-\beta(E_n - \mu N_n)}}{Z}, \tag{89}$$
recovering the Gibbs state given in Eq. (67). The Lagrange multipliers or, equivalently, the
quantities β, µ, and Z are fully determined by the constraints given in Eq. (85).
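The constrained maximization can also be checked numerically. In the sketch below (the four-level toy spectrum and the values of β and µ are hypothetical), all distributions satisfying the constraints of Eq. (85) form a line through the Gibbs state, and the entropy is maximal exactly there:

```python
import numpy as np

# Hypothetical four-level toy spectrum: energies E_n and particle numbers N_n.
E = np.array([0.0, 0.5, 1.0, 1.5])
N = np.array([0.0, 1.0, 1.0, 2.0])
beta, mu = 1.0, 0.3

# Gibbs distribution of Eq. (89).
w = np.exp(-beta * (E - mu * N))
p_gibbs = w / w.sum()

def entropy(p):
    return -np.sum(p * np.log(p))

# Distributions sharing the normalization, mean energy, and mean particle
# number of p_gibbs form a line p_gibbs + t*v, where v spans the null
# space of the constraint matrix [1; E; N] (the constraints of Eq. (85)).
A = np.vstack([np.ones(4), E, N])
v = np.linalg.svd(A)[2][-1]        # A @ v = 0 up to rounding

S0 = entropy(p_gibbs)
for t in np.linspace(-0.05, 0.05, 11):
    p = p_gibbs + t * v            # still obeys all three constraints
    assert entropy(p) <= S0 + 1e-12
print("the Gibbs state maximizes the constrained entropy")
```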
Introducing the grand potential which plays a similar role to the free energy
The last term may be dropped as it does not depend on pn . The last equation then implies
that maximizing the von Neumann entropy given the average energy and average particle
number is mathematically equivalent to minimizing the grand potential (the only constraint
left, corresponding to the Lagrange multiplier λ0 , ensures the normalization of the state).
where ρ̂ denotes the initial state of the system and the unitary Û ≡ Û(τ) is determined by
the time-dependent Hamiltonian, see Eq. (19). We note that with a suitable Hamiltonian, any
unitary operator can be generated.
A passive state can now be defined as a state τ̂ obeying
which implies that no work can be extracted from passive states, no matter how the time-
dependent Hamiltonian is designed. Passive states are of the form
$$\hat\tau = \sum_n p_n\, |E_n\rangle\langle E_n|, \quad \text{with} \quad E_0 \le E_1 \le E_2 \le \cdots, \qquad p_0 \ge p_1 \ge p_2 \ge \cdots. \tag{95}$$
Remarkably, taking multiple copies of a passive state may result in a state that is no longer
passive. This suggests the definition of completely passive states [36]: a state τ̂ is completely
passive iff
$$\mathrm{Tr}\big\{\hat H_n\big(\hat\tau_n - \hat U \hat\tau_n \hat U^\dagger\big)\big\} \le 0 \quad \forall\, \hat U, n, \tag{96}$$
where
$$\hat H_n = \hat H \oplus \hat H \oplus \cdots \oplus \hat H, \qquad \hat\tau_n = \hat\tau \otimes \hat\tau \otimes \cdots \otimes \hat\tau. \tag{97}$$
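The construction in Eq. (95) is straightforward to implement. The sketch below (qutrit spectrum and populations are hypothetical) builds the passive state associated with a given state and Hamiltonian; the energy gap Tr{Ĥρ̂} − Tr{Ĥτ̂} is the maximal work extractable by a unitary, often called the ergotropy:

```python
import numpy as np

# Build the passive state tau of Eq. (95): eigenvalues of rho sorted in
# decreasing order, paired with the energies sorted in increasing order.
def passive_state(rho, H):
    E, V = np.linalg.eigh(H)                     # energies ascending
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]   # populations descending
    return V @ np.diag(p) @ V.conj().T

# Hypothetical qutrit example with an inverted population.
H = np.diag([0.0, 1.0, 2.0])
rho = np.diag([0.1, 0.3, 0.6])
tau = passive_state(rho, H)

# No unitary can lower the energy of tau further; the gap
# Tr{H rho} - Tr{H tau} is the maximal extractable work.
W = np.trace(H @ (rho - tau)).real
print(np.round(W, 10))  # 1.0 for this example
```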
Microcanonical ensemble
$$\hat\rho_M = \frac{1}{\Omega(\bar E,\bar N)\,\delta E\,\delta N} \sum_{\substack{\bar E \le E < \bar E+\delta E \\ \bar N \le N < \bar N+\delta N}} |E,N\rangle\langle E,N|, \tag{99}$$
where Ω(Ē, N̄) denotes the density of states that we have encountered in Eq. (71). In the
microcanonical ensemble, both the particle number and the energy are fixed, where fixed
means that all states within an energy (particle number) shell of thickness δE (δN) contribute.
Within this shell, the state is assumed to be completely mixed, i.e., all states have equal weight.
As such, the state maximizes the von Neumann entropy for fixed Ē and N̄.
Canonical ensemble
$$\hat\rho_c = \sum_{\bar N \le N < \bar N+\delta N} \sum_E \frac{e^{-\beta E}}{Z_c}\, |E,N\rangle\langle E,N| = \frac{e^{-\beta \hat H_{\bar N}}}{\mathrm{Tr}\{e^{-\beta \hat H_{\bar N}}\}}, \tag{100}$$
where Ĥ N̄ denotes the Hamiltonian projected onto the relevant particle-number subspace
$$\hat H_{\bar N} = \sum_{\bar N \le N < \bar N+\delta N} |N\rangle\langle N|\, \hat H\, |N\rangle\langle N|, \qquad |N\rangle\langle N| = \sum_E |E,N\rangle\langle E,N|. \tag{101}$$
In the canonical ensemble, only the particle number is fixed. The average energy 〈Ĥ〉 is de-
termined by the temperature of the environment. The canonical ensemble is the adequate
equilibrium state when energy (but not particles) can be exchanged with the environment.
The canonical Gibbs state maximizes the von Neumann entropy for a fixed particle number N̄
and an average energy given by 〈Ĥ〉 = Ē. Furthermore, the canonical Gibbs state minimizes
the free energy
F = 〈Ĥ〉 − kB T SvN [ρ̂], (102)
when the particle number N̄ is fixed.
Grand-canonical ensemble The grand-canonical Gibbs state is given in Eq. (67). As dis-
cussed above, it maximizes the von Neumann entropy for given average values of energy and
particle number. This is equivalent to minimizing the grand potential given in Eq. (90). The
grand-canonical ensemble is the adequate description when both energy and particles may be
exchanged with the environment. The average energy and particle number are then deter-
mined by the temperature and chemical potential of the environment respectively.
Thermodynamic limit and equivalence of ensembles The thermodynamic limit is the limit
of large systems which formally can be defined as
$$V \to \infty, \qquad \frac{\bar E}{V} = \text{const.}, \qquad \frac{\bar N}{V} = \text{const.} \tag{103}$$
This can be achieved by setting V ∝ x , Ē ∝ x , N̄ ∝ x , and letting x → ∞. In the ther-
modynamic limit, all ensembles become equivalent [40] because relative fluctuations around
average values vanish. Here, we illustrate this for the canonical ensemble. In this ensemble,
the probability for the system to take on a certain energy obeys
As it turns out, this function becomes highly peaked in the thermodynamic limit. This can be
seen from the relative fluctuations
$$\frac{\langle \hat H^2\rangle - \langle \hat H\rangle^2}{\langle \hat H\rangle^2} = -\frac{1}{\langle \hat H\rangle^2}\,\partial_\beta \langle \hat H\rangle = \frac{k_B T^2}{\langle \hat H\rangle^2}\, C \propto \frac{1}{x}, \tag{105}$$
where we introduced the heat capacity
$$C = \partial_T \langle \hat H\rangle, \tag{106}$$
which quantifies the change of energy upon a change of temperature. The last proportionality
in Eq. (105) follows because the heat capacity is an extensive quantity and is thus proportional
to x for large systems. Equation (105) implies that in the thermodynamic limit, the relative
fluctuations vanish. The probability distribution for the system to take on a given relative
energy may then be approximated as a Gaussian distribution (see Ref. [23] for a discussion on
higher moments)
$$p(E/\bar E) \simeq \frac{\bar E}{\sqrt{2\pi k_B T^2 C}}\, \exp\!\left[-\frac{(E/\bar E - 1)^2}{2 k_B T^2 C/\bar E^2}\right], \tag{107}$$
which tends to a Dirac delta distribution δ(E/Ē − 1) in the thermodynamic limit.
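For independent subsystems, the 1/x scaling of Eq. (105) can be made explicit. The sketch below (gap and temperature are hypothetical) uses x independent two-level systems, for which both the mean and the variance of the energy are extensive:

```python
import numpy as np

# Canonical ensemble of x independent two-level systems (gap delta):
# <H> = x * delta * p and Var(H) = x * delta**2 * p * (1 - p),
# with p the thermal excitation probability of a single system.
def relative_fluctuations(x, delta=1.0, beta=1.0):
    p = 1.0 / (1.0 + np.exp(beta * delta))
    mean = x * delta * p
    var = x * delta**2 * p * (1 - p)
    return var / mean**2

r = [relative_fluctuations(x) for x in (10, 100, 1000)]
# Relative fluctuations shrink as 1/x, as in Eq. (105):
print(np.isclose(r[0] / r[1], 10.0))   # True
print(np.isclose(r[1] / r[2], 10.0))   # True
```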
where the system is labeled by the subscript S and the reservoirs are labeled by the index α.
The term V̂α denotes the coupling between the system and reservoir α. Because the system
and reservoirs together comprise an isolated system, the time evolution of the total density
matrix is given by
with T denoting the time-ordering operator (see Sec. 1.1.2) and we allow the system Hamilto-
nian to be time-dependent while we assume the reservoir Hamiltonians as well as the coupling
terms to be time-independent.
Figure 7: General scenario. A system can exchange both energy and particles with
N thermal reservoirs, described by the temperatures Tα and the chemical potentials
µα .
As seen in Sec. 1.2, the energy flows between the system and the reservoirs are of cru-
cial importance in thermodynamics. This motivates considering the mean energy change of
reservoir α
$$\partial_t \langle \hat H_\alpha\rangle = \mathrm{Tr}\big\{\hat H_\alpha\, \partial_t \hat\rho_{\rm tot}\big\} = \partial_t\langle \hat H_\alpha - \mu_\alpha \hat N_\alpha\rangle + \mu_\alpha\, \partial_t\langle \hat N_\alpha\rangle = J_\alpha + P_\alpha. \tag{111}$$
Here we divided the energy change into a part that we call the heat current Jα and a part that
we call the power Pα that enters reservoir α. Throughout these notes, the sign convention is
such that energy flows are positive when flowing toward the body that their index refers to.
To motivate the separation of energy flows into heat and work, we introduce the concept of
entropy production in the quantum regime.
The von Neumann entropy of the total system can thus not tell us anything about how energy
flows between the system and the reservoirs. To make progress, we consider an effective
description based on local thermal equilibrium [i.e., the reservoirs are described by the Gibbs
state given in Eq. (67)]
where ρ̂S(t ) = TrB {ρ̂tot (t )} and we denote the Gibbs state describing reservoir α as
where averages are with respect to the total state 〈X̂ 〉 = Tr{X̂ ρ̂tot (t )} and we used ln τ̂α =
−βα (Ĥα − µα N̂α ) − ln Zα as well as ln(Â ⊗ B̂) = ln(Â) ⊗ 1 + 1 ⊗ ln(B̂). As discussed in Sec. 1.3,
Eq. (114) may be interpreted as the information that is lost due to our effective description.
We now introduce the entropy production rate [41]
$$\dot\Sigma = k_B\, \partial_t S\Big[\hat\rho_{\rm tot}\,\Big\|\, \hat\rho_S \textstyle\bigotimes_\alpha \hat\tau_\alpha\Big] = k_B\, \partial_t S_{\rm vN}[\hat\rho_S] + \sum_\alpha \frac{J_\alpha}{T_\alpha}. \tag{115}$$
The entropy production rate can thus be interpreted as the rate at which information is lost
by our local equilibrium description, due to the buildup of correlations between system and
environment as well as changes to the reservoirs. Note that it is not guaranteed to be posi-
tive. Finite size effects as well as non-Markovian dynamics can result in a negative entropy
production rate (a backflow of information from the reservoirs). However, as we will see later,
for infinitely large and memoryless reservoirs, the entropy production rate is ensured to be
positive at all times as information is irretrievably lost when one can only access the system
alone.
Note that Eq. (115) motivates the interpretation of Jα as a heat flow, such that the entropy
production associated to reservoir α is given by the usual expression for reservoirs which re-
main in thermal equilibrium. Interestingly, we may refine our effective description by using
time-dependent temperatures and chemical potentials [42, 43]
Using Eq. (72), the last term on the first line can be shown to vanish. With the help of the last
equation and Eq. (111), we may then derive the first law of thermodynamics (note that the
energy flows are defined to be positive when they enter the location corresponding to their
index)
$$\partial_t \langle \hat H_S(t)\rangle = P_S(t) - \sum_\alpha\big[J_\alpha(t) + P_\alpha(t)\big] - \sum_\alpha \partial_t\langle \hat V_\alpha\rangle, \qquad P_S \equiv \langle \partial_t \hat H_S\rangle, \tag{121}$$
where PS denotes the power entering the system due to some external classical drive that
renders ĤS time-dependent. The term due to the coupling energy 〈V̂α 〉 can be neglected when
the coupling is weak, which is a common assumption for open quantum systems. Relaxing
this assumption and considering the thermodynamics of systems that are strongly coupled to
the environment is an exciting ongoing avenue of research [44–47].
i.e., we assume our effective description to be exact at t = 0. In this case, Eq. (114) can be
written as
$$\Sigma \equiv k_B\, S\Big[\hat\rho_{\rm tot}(t)\,\Big\|\, \hat\rho_S(t) \textstyle\bigotimes_\alpha \hat\tau_\alpha\Big] = k_B\, S_{\rm vN}[\hat\rho_S(t)] - k_B\, S_{\rm vN}[\hat\rho_S(0)] + \sum_\alpha \frac{Q_\alpha}{T_\alpha}, \tag{123}$$
and we used that S[ρ̂tot (0)||ρ̂S(0) ⊗α τ̂α ] = 0. Since it is expressed as a quantum relative entropy, we have
$$\Sigma \ge 0. \tag{125}$$
From an information point of view, this inequality tells us that if our effective description is
true at t = 0, then it can only be worse at later times. To understand how our description
becomes worse, we follow Refs. [41, 48] and write the entropy production as
$$\Sigma = k_B\, I_{\rm MI}[S;B] + k_B\, S\Big[\hat\rho_B(t)\,\Big\|\, \textstyle\bigotimes_\alpha \hat\tau_\alpha\Big], \tag{126}$$
α
where we introduced the reduced state of the environment ρ̂B (t ) = TrS{ρ̂tot (t)} and the
mutual information between system and environment is given by [c.f. (65)]
$$I_{\rm MI}[S;B] = S_{\rm vN}[\hat\rho_S(t)] + S_{\rm vN}[\hat\rho_B(t)] - S_{\rm vN}[\hat\rho_{\rm tot}(t)]. \tag{127}$$
The first term in Eq. (126) describes the correlations between the system and the environment
while the second term describes the departure of the environment from the initial product of
Gibbs states. As was recently shown, the departure of the environment from its initial state
provides the dominant contribution to the entropy production [48]. This can be seen from
the upper bound of the mutual information IMI [S; B] ≤ 2 ln dS [c.f. Eq. (66)], where dS denotes
the dimension of the Hilbert space of the system. In the long-time limit, or under steady state
conditions, the mutual information therefore saturates and can no longer contribute to Σ.
Any entropy production due to persisting heat currents, see Eq. (123), can thus be associated
with the environment departing further and further from the initial state with well defined
temperatures and chemical potentials.
As mentioned above, the entropy production rate Σ̇ is not always guaranteed to be positive
(i.e., Σ is not necessarily a monotonically increasing function of time). However, at small times,
Eq. (125) ensures that the entropy production rate is also positive. Furthermore, as we will
see in the next section, the entropy production is positive for infinitely large and memory-less
reservoirs which couple weakly to the system. We note that with a time-dependent effective
description, see Eq. (116), Eqs. (123) and (125) still hold.
As expected, in equilibrium, the system is thus characterized by the same temperature and
chemical potential as the reservoir. We note that equilibrium may only be reached if Ĥtot is
time-independent and if there is only a single temperature and a single chemical potential
characterizing the environment. The exact range of validity of the first equality in Eq. (128)
is a subject of ongoing research [47].
Figure 8: Time evolution of total and reduced density matrices. The total density
matrix evolves unitarily, according to Eq. (109). The reduced density matrix at time
t can either be obtained by tracing out the environment from ρ̂tot (t ), or by evolving
ρ̂S(0) by the universal dynamical map E_{t,0}.
As illustrated in Fig. 8, the time-evolution of the reduced density matrix can be described
by a universal dynamical map (UDM). A UDM E is a linear map which transforms a density
matrix into another density matrix. Furthermore, it is independent of the density matrix it acts
upon. In its most general form, a UDM can be written as
$$\mathcal{E}\hat\rho = \sum_j \hat K_j\, \hat\rho\, \hat K_j^\dagger, \qquad \sum_j \hat K_j^\dagger \hat K_j = \mathbb{1},$$
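A concrete example of a UDM in Kraus form is the single-qubit amplitude-damping channel (a standard textbook channel, used here only as an illustration; the decay probability γ is a hypothetical choice):

```python
import numpy as np

# A minimal UDM in Kraus form: single-qubit amplitude damping with
# decay probability gamma.
gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

# Completeness relation: sum_j K_j^dag K_j = 1.
print(np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2)))  # True

def apply_udm(rho):
    # The map is linear and independent of the density matrix it acts upon.
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|
rho_out = apply_udm(rho)
print(np.isclose(np.trace(rho_out).real, 1.0))  # True: trace preserved
```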
$$\begin{aligned}
\partial_t \hat\rho(t) &= -i[\hat H(t), \hat\rho(t)] + \sum_k \gamma_k(t)\Big(\hat L_k(t)\hat\rho(t)\hat L_k^\dagger(t) - \tfrac{1}{2}\big\{\hat L_k^\dagger(t)\hat L_k(t),\, \hat\rho(t)\big\}\Big) \\
&= -i[\hat H(t), \hat\rho(t)] + \sum_k \gamma_k(t)\, \mathcal{D}[\hat L_k(t)]\hat\rho(t),
\end{aligned}\tag{129}$$
where Ĥ(t ) is Hermitian, γk (t ) ≥ 0, and the operators L̂k are referred to as Lindblad jump
operators. For a proof, see Ref. [53]. This form of the master equation is also called the GKLS
form, after Gorini, Kossakowski, and Sudarshan [57], and Lindblad [58], who considered the
time-independent case. In the rest of this chapter, we provide detailed derivations for Markovian
master equations that describe our general scenario introduced in Sec. 3.1.
Note that these are projectors, as P² = P. Further, note that we are interested in the time-evolution
of P ρ̂tot (t), which provides us with an effective description of the form discussed in
Sec. 3.2. We consider the general scenario discussed in Sec. 3.1, i.e., the Hamiltonian is given
by Eq. (108), but we move to an interaction picture (which we denote by a tilde instead of a
hat)
ρ̃tot (t ) = Û0† (t )ρ̂tot (t )Û0 (t ), (131)
determined by the unitary operator
$$\hat U_0(t) = \mathcal{T} e^{-i\int_0^t dt'\, \big[\hat H_S(t') + \sum_\alpha \hat H_\alpha\big]}, \tag{132}$$
with T denoting time-ordering. In the interaction picture, the time-evolution of the total
density matrix is determined by
$$\partial_t \tilde\rho_{\rm tot}(t) = -i\sum_\alpha \big[\tilde V_\alpha(t),\, \tilde\rho_{\rm tot}(t)\big] = \mathcal{V}(t)\,\tilde\rho_{\rm tot}(t), \tag{133}$$
and the coupling Hamiltonian in the interaction picture is given by Ṽα (t ) = Û0† (t )V̂α Û0 (t ), in
analogy to Eq. (131). Finally, we have expressed the commutator in Eq. (133) with the help of
the superoperator V . In the following, we will assume PV (t )P = 0. This is not a restriction
as it can always be ensured by adding some terms to ĤS and subtracting them from V̂α , as we
will see below. We can then write
$$\begin{aligned}
\partial_t \mathcal{P}\tilde\rho_{\rm tot}(t) &= \mathcal{P}\mathcal{V}(t)\tilde\rho_{\rm tot}(t) = \mathcal{P}\mathcal{V}(t)\mathcal{Q}\tilde\rho_{\rm tot}(t), \\
\partial_t \mathcal{Q}\tilde\rho_{\rm tot}(t) &= \mathcal{Q}\mathcal{V}(t)\tilde\rho_{\rm tot}(t) = \mathcal{Q}\mathcal{V}(t)\mathcal{P}\tilde\rho_{\rm tot}(t) + \mathcal{Q}\mathcal{V}(t)\mathcal{Q}\tilde\rho_{\rm tot}(t),
\end{aligned}\tag{135}$$
where we used P + Q = 1. The formal solution to the second equation is given by
$$\mathcal{Q}\tilde\rho_{\rm tot}(t) = \mathcal{G}(t,0)\,\mathcal{Q}\tilde\rho_{\rm tot}(0) + \int_0^t ds\, \mathcal{G}(t,s)\,\mathcal{Q}\mathcal{V}(s)\,\mathcal{P}\tilde\rho_{\rm tot}(s), \tag{136}$$
where we assume a factorized initial state, such that P ρ̃tot (0) = ρ̃tot (0) and Qρ̃tot (0) = 0. Inserting Eq. (136) into Eq. (135), we find
$$\partial_t \mathcal{P}\tilde\rho_{\rm tot}(t) = \int_0^t ds\, \mathcal{P}\mathcal{V}(t)\,\mathcal{G}(t,s)\,\mathcal{Q}\mathcal{V}(s)\,\mathcal{P}\tilde\rho_{\rm tot}(s). \tag{139}$$
We note that this expression is still exact (for the given initial conditions). Since it explicitly
depends on P ρ̃tot at previous times, it contains memory effects and does not constitute a
Markovian master equation.
where we substituted s → t −s and we made use of TrB {Ṽδ (t )Ṽγ (s )ρ̃S(s)⊗α τ̂α } = 0 for δ ̸= γ.
This is similar to the assumption PV (t )P = 0 and can always be ensured by an appropriate
redefinition of the terms in the Hamiltonian as shown below. We note that Eq. (141) is often
obtained by assuming that ρ̃tot (t ) = ρ̃S(t )⊗α τ̂α at all times, the so-called Born approximation.
Here we do not make such an assumption. In agreement with the discussion in the previous
section, we consider ρ̃S(t ) ⊗α τ̂α to be an effective description, which only keeps track of the
system and neglects changes in the environment state as well as correlations between system
and environment.
In addition to the weak coupling approximation, we now make a Markov approximation.
To this end, we assume that the integrand in Eq. (141) decays on a time-scale τB (the bath-
correlation time, more on this below). If this time-scale is short enough, which is the case for
large, memory-less environments, we can assume ρ̃S(t − s ) to approximately remain constant
and replace its time-argument in Eq. (141) by t . Furthermore, using the same argumentation,
we can extend the integral to infinity obtaining
$$\partial_t \tilde\rho_S(t) = -\int_0^\infty ds \sum_\alpha \mathrm{Tr}_B\Big\{\Big[\tilde V_\alpha(t),\, \Big[\tilde V_\alpha(t-s),\, \tilde\rho_S(t) \textstyle\bigotimes_\alpha \hat\tau_\alpha\Big]\Big]\Big\}. \tag{142}$$
This equation is Markovian, i.e., it is local in time and does not depend explicitly on the initial
conditions. However, it is not in GKLS form and does not in general preserve the positivity of
the density matrix, i.e., eigenvalues may become negative. The approximations that result in
Eq. (142) are usually called the Born-Markov approximations. For a more formal application
of these approximations, see Refs. [62,63]. Note that under the Born-Markov approximations,
the effect induced by different reservoirs is additive.
To make progress, we write the coupling Hamiltonian in the general form
$$\hat V_\alpha = \sum_k \hat S_{\alpha,k} \otimes \hat B_{\alpha,k} = \sum_k \hat S_{\alpha,k}^\dagger \otimes \hat B_{\alpha,k}^\dagger, \tag{143}$$
where we used the Hermiticity of V̂α in the second equality. We note that the operators Ŝα,k
and B̂α,k are not necessarily Hermitian. Inserting Eq. (143) into Eq. (142), we find after some
algebra
$$\begin{aligned}
\partial_t \tilde\rho_S(t) = \sum_\alpha \sum_{k,k'} \int_0^\infty ds\, \Big\{ & C_{k,k'}^\alpha(s)\Big[\tilde S_{\alpha,k'}(t-s)\,\tilde\rho_S(t)\,\tilde S_{\alpha,k}^\dagger(t) - \tilde S_{\alpha,k}^\dagger(t)\,\tilde S_{\alpha,k'}(t-s)\,\tilde\rho_S(t)\Big] \\
+\ & C_{k,k'}^\alpha(-s)\Big[\tilde S_{\alpha,k'}(t)\,\tilde\rho_S(t)\,\tilde S_{\alpha,k}^\dagger(t-s) - \tilde\rho_S(t)\,\tilde S_{\alpha,k}^\dagger(t-s)\,\tilde S_{\alpha,k'}(t)\Big]\Big\},
\end{aligned}\tag{144}$$
where we introduced the bath-correlation functions
$$C_{k,k'}^\alpha(s) = \mathrm{Tr}\Big\{\tilde B_{\alpha,k}^\dagger(s)\, \hat B_{\alpha,k'}\, \hat\tau_\alpha\Big\}, \tag{145}$$
and we used [C^α_{k,k′}(s)]* = C^α_{k′,k}(−s). These bath-correlation functions are usually peaked
around s = 0 and decay over the time-scale τB (indeed, this is how τB is defined). If this
time-scale is short, the integrand in Eq. (144) decays quickly and the Markov assumption
performed above is justified. Note that it is important that this approximation is made in the
interaction picture, where ρ̃S varies slowly (in the Schrödinger picture, ρ̂S tends to oscillate
with frequencies given by the differences of the eigenvalues of ĤS).
While Eq. (144) is in general still not in GKLS form, it happens to reduce to GKLS form
when C^α_{k,k′}(s) ∝ δ_{k,k′} and
which implies that Ŝα,k are ladder operators for the Hamiltonian ĤS, removing the energy
ωα,k from the system. Before we consider such an example in detail, we briefly justify two
assumptions made in the derivation above. To this end, we note that if
$$\mathrm{Tr}\big\{\hat B_{\alpha,k}\, \hat\tau_\alpha\big\} = 0, \tag{147}$$
then it is straightforward to show that
$$\mathcal{P}\mathcal{V}(t)\mathcal{P} = 0, \qquad \mathrm{Tr}\big\{\tilde V_\delta(t)\,\tilde V_\gamma(s)\, \tilde\rho_S(t) \textstyle\bigotimes_\alpha \hat\tau_\alpha\big\} = 0 \quad \forall\, \delta \neq \gamma. \tag{148}$$
In case Eq. (147) is not true, we may define
$$\hat B'_{\alpha,k} = \hat B_{\alpha,k} - \mathrm{Tr}\big\{\hat B_{\alpha,k}\, \hat\tau_\alpha\big\}, \tag{149}$$
which do fulfill Tr{B̂′_{α,k} τ̂α } = 0. Defining
$$\hat V'_\alpha = \sum_k \hat S_{\alpha,k} \otimes \hat B'_{\alpha,k}, \qquad \hat H'_S(t) = \hat H_S(t) + \sum_{\alpha,k} \mathrm{Tr}\big\{\hat B_{\alpha,k}\, \hat\tau_\alpha\big\}\, \hat S_{\alpha,k}, \tag{150}$$
we have Ĥtot (t) = Ĥ′S (t) + Σα (Ĥα + V̂′α ), i.e., the same total Hamiltonian but now in a form
such that Eq. (147) holds, ensuring our assumptions given in Eq. (148).
with
$$\hat H_S = \epsilon_d\, \hat d^\dagger \hat d, \qquad \hat H_B = \sum_q \epsilon_q\, \hat c_q^\dagger \hat c_q, \qquad \hat V = \sum_q \big(g_q\, \hat d\, \hat c_q^\dagger - g_q^*\, \hat d^\dagger \hat c_q\big). \tag{152}$$
Here the reservoir is modeled as a collection of non-interacting fermions and the coupling
Hamiltonian describes tunneling of single electrons between the system and the reservoir (the
minus sign arises from the fermionic anti-commutation relations).
We note that for fermions, there is no tensor product structure because operators on the
environment and on the system may anti-commute. Strictly speaking, the derivation in the
last section is thus not valid. However, using a Jordan-Wigner transform, one can map the
fermionic system onto a spin system where such a tensor-product structure is provided [54].
After tracing out the reservoir, the spin operators can then be replaced by fermionic operators
again. For the tunneling Hamiltonian that we often use to describe the system-environment
coupling for fermions, this procedure is equivalent to replacing the tensor product by the usual
product between fermionic operators in the derivation.
Comparing the coupling Hamiltonian to Eq. (143), we may write V̂ = Ŝ0 B̂0 + Ŝ1 B̂1 with
$$\hat S_0 = \hat d, \qquad \hat S_1 = \hat d^\dagger, \qquad \hat B_0 = \sum_q g_q\, \hat c_q^\dagger, \qquad \hat B_1 = -\sum_q g_q^*\, \hat c_q. \tag{153}$$
We further find
and similarly
$$\tilde B_0(t) = \sum_q g_q\, e^{i\epsilon_q t}\, \hat c_q^\dagger, \qquad \tilde B_1(t) = -\sum_q g_q^*\, e^{-i\epsilon_q t}\, \hat c_q, \tag{155}$$
and made use of Tr{ĉ†_q ĉ_q′ τ̂B } = 0 for q ≠ q′. Furthermore, it is easy to show that C0,1 (s) =
C1,0 (s) = 0 since Tr{(ĉ†_q )² τ̂B } = Tr{(ĉ_q )² τ̂B } = 0. Finally, in Eq. (156) we introduced the
spectral density
$$\rho(\omega) = \sum_q |g_q|^2\, \delta(\epsilon_q - \omega), \tag{158}$$
which will be treated as a continuous function. This is justified whenever the summands in
Eq. (156) are sufficiently smooth such that the sums can be thought of as Riemann sums that
approximate integrals of smooth functions.
Using Eq. (154), we may re-write the master equation in Eq. (144) as
$$\begin{aligned}
\partial_t \tilde\rho_S(t) = & \int_{-\infty}^{\infty} ds\, \Big[C_{0,0}(s)\, e^{i\epsilon_d s}\, \mathcal{D}[\hat d]\tilde\rho_S(t) + C_{1,1}(s)\, e^{-i\epsilon_d s}\, \mathcal{D}[\hat d^\dagger]\tilde\rho_S(t)\Big] \\
& - \frac{1}{2}\int_{-\infty}^{\infty} ds\, \mathrm{sign}(s)\Big[C_{0,0}(s)\, e^{i\epsilon_d s} - C_{1,1}(s)\, e^{-i\epsilon_d s}\Big]\big[\hat d^\dagger\hat d,\, \tilde\rho_S(t)\big],
\end{aligned}\tag{159}$$
where
$$\mathcal{D}[\hat A]\hat\rho = \hat A\,\hat\rho\, \hat A^\dagger - \frac{1}{2}\big\{\hat A^\dagger \hat A,\, \hat\rho\big\}. \tag{160}$$
With the help of the bath-correlation functions given in Eq. (156), we may evaluate the inte-
grals over s to find the transition rates
$$\begin{aligned}
\int_{-\infty}^{\infty} ds\, C_{0,0}(s)\, e^{i\epsilon_d s} &= \int_{-\infty}^{\infty} d\omega \int_{-\infty}^{\infty} ds\, e^{-i(\omega-\epsilon_d)s}\, \rho(\omega)\big[1 - n_F(\omega)\big] = \kappa\big[1 - n_F(\epsilon_d)\big], \\
\int_{-\infty}^{\infty} ds\, C_{1,1}(s)\, e^{-i\epsilon_d s} &= \int_{-\infty}^{\infty} d\omega \int_{-\infty}^{\infty} ds\, e^{i(\omega-\epsilon_d)s}\, \rho(\omega)\, n_F(\omega) = \kappa\, n_F(\epsilon_d),
\end{aligned}\tag{161}$$
with κ ≡ 2πρ(ϵd ), where we used
$$\int_{-\infty}^{\infty} ds\, e^{-i(\omega-\epsilon_d)s} = 2\pi\delta(\omega - \epsilon_d), \tag{162}$$
with P denoting the Cauchy principal value [i.e., P(1/ω) is equal to 1/ω except at ω = 0,
where the principal value vanishes].
Finally, inserting Eqs. (161) and (163) back into Eq. (159), we find the Markovian master
equation in the Schrödinger picture
The reservoir thus has two effects: Through the dissipative part of the master equation, it
describes electrons entering (D[d̂ † ]) and leaving (D[d̂]) the quantum dot. Note that to enter
the dot, electrons in the reservoir have to be available, corresponding to the factor nF while to
leave the dot, empty states have to be available corresponding to the factor 1−nF . In addition,
the energy level of the quantum dot is renormalized.
Note that when taking the ratio between the rates of entering and leaving the quantum
dot, we obtain the Boltzmann factor
$$e^{\beta(\epsilon_d - \mu)} = \frac{\kappa\big[1 - n_F(\epsilon_d)\big]}{\kappa\, n_F(\epsilon_d)}. \tag{167}$$
This condition is known as local detailed balance and it generally holds for transition rates that
are induced by reservoirs in thermal equilibrium. It ensures that in equilibrium, the system
state tends to a Gibbs state, see also Eq. (173) below.
As discussed above, the Markovian approximation is justified if the bath-correlation func-
tions decay on a time scale which is much faster than the time over which ρ̃S varies. In the
limiting case where both ρ(ω) as well as nF (ω) are independent of ω, the bath-correlation
functions become proportional to a Dirac delta function and the environment becomes truly
memoryless (i.e., τB → 0). In practice, it is sufficient for τB to be much shorter than any rel-
evant time-scale of the system. In energy, this translates to the condition that the functions
ρ(ω) as well as nF (ω) are flat around the relevant energies of the system. For the present
system ρ̃S changes on the time-scale 1/κ. The Markov approximation is then valid as long as
κτB ≪ 1. In energy space, this requires ρ(ω) as well as nF (ω) to be approximately constant
in the interval ω ∈ [ϵd −κ, ϵd +κ]. The spectral density depends on the details of the reservoir
and may or may not fulfill this condition depending on the specific scenario. For the Fermi-
Dirac occupation to be sufficiently flat for the Markov approximation, the following condition
has to hold
κ ≪ max{kB T, |ϵd − µ|}. (168)
At low temperatures, the Fermi-Dirac occupation becomes a step function. Therefore, the
Markovian approximation is not justified at low temperatures if the dot level ϵd is close to
the chemical potential. For a more detailed discussion on the validity of the Born-Markov
approximations, see appendix B.1 of Ref. [64].
where |0〉 denotes the empty state and |1〉 the full state. Here we used the fact that we cannot
have a superposition of states with a different number of electrons in the system due to a
particle superselection rule [34]. Using these basis states, the fermionic operators can be cast
into
∂ t p1 (t ) = −κ[1 − nF (ϵd )]p1 (t ) + κnF (ϵd )p0 (t ) = −κ[p1 (t ) − nF (ϵd )]. (171)
This equation shows that a full dot is emptied with rate κ[1 − nF (ϵd )] whereas an empty dot
is filled with rate κnF (ϵd ). The solution to this differential equation reads
The occupation probability thus relaxes exponentially toward the equilibrium value nF (ϵd ). The
time-scale with which this happens is given by 1/κ. In equilibrium, the system is described by
the thermal state with temperature and chemical potential equal to those of the reservoir, as
demanded by the zeroth law of thermodynamics, and we find
$$\lim_{t\to\infty} \hat\rho_S(t) = \frac{e^{-\beta(\epsilon_d-\mu)\hat d^\dagger \hat d}}{\mathrm{Tr}\big\{e^{-\beta(\epsilon_d-\mu)\hat d^\dagger \hat d}\big\}}. \tag{173}$$
This result is closely related to the local detailed balance condition in Eq. (167).
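The relaxation described by Eqs. (171) and (172), together with the local detailed balance condition (167), can be checked with a few lines of numerics (all parameter values are hypothetical):

```python
import numpy as np

# Rate equation (171) for the dot occupation, integrated with Euler
# steps and compared to the analytic relaxation toward n_F(eps_d).
kB_T, eps_d, mu, kappa = 1.0, 0.8, 0.2, 0.5   # hypothetical parameters
beta = 1.0 / kB_T
nF = 1.0 / (np.exp(beta * (eps_d - mu)) + 1.0)

# Local detailed balance, Eq. (167): rate out / rate in = exp[beta (eps_d - mu)].
ratio = (kappa * (1 - nF)) / (kappa * nF)
print(np.isclose(ratio, np.exp(beta * (eps_d - mu))))  # True

dt, p1 = 1e-4, 0.9                    # initial occupation p1(0) = 0.9
for step in range(int(10.0 / dt)):    # integrate up to t = 10
    p1 += -kappa * (p1 - nF) * dt

# Analytic solution (172): p1(t) = nF + exp(-kappa t) [p1(0) - nF].
p1_exact = nF + np.exp(-kappa * 10.0) * (0.9 - nF)
print(np.isclose(p1, p1_exact, atol=1e-4))  # True
```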
∂ t 〈ĤS〉 = Tr{ϵd d̂ † d̂∂ t ρ̂S} = (ϵd − µ)∂ t 〈d̂ † d̂〉 + µ∂ t 〈d̂ † d̂〉 = −JB (t ) − PB (t ). (174)
To identify the heat current and the power, we used that the change in the number of electrons
in the system is minus the change in the number of electrons in the reservoir, more on this in
Sec. 4.5. Using p1 = 〈d̂ † d̂〉, we find for the heat current and power that flow into the reservoir
JB (t ) = −(ϵd − µ)∂ t 〈d̂ † d̂〉 = (ϵd − µ)κe −κt [p1 (0) − nF (ϵd )],
(175)
PB (t ) = −µ∂ t 〈d̂ † d̂〉 = µκe −κt [p1 (0) − nF (ϵd )].
We thus find that if the dot starts out in a non-equilibrium state, there is an exponentially
decreasing energy flow which can be divided into power and heat. The power flows into
the reservoir whenever p1 (t ) > nF (ϵd ). The heat flow additionally depends on the sign of
ϵd − µ: electrons entering the reservoir above the chemical potential (ϵd − µ > 0) heat up the
reservoir, while electrons entering below the chemical potential (ϵd − µ < 0) cool it down.
This can be understood intuitively by noting that at zero temperature, all states below the
chemical potential are occupied while states above the chemical potential are empty. Electrons
entering the reservoir below the chemical potential thus bring the reservoir closer to the zero-
temperature distribution.
$$\dot\Sigma(t) = k_B\, \partial_t S_{\rm vN}[\hat\rho_S(t)] + \frac{J_B(t)}{T} = \frac{k_B\, J_B(t)}{(\epsilon_d - \mu)}\left[\beta(\epsilon_d-\mu) + \ln\frac{p_1(t)}{1-p_1(t)}\right], \tag{177}$$
where we used −ṗ1 = JB /(ϵd − µ) which follows from Eq. (175). Using the equality in
Eq. (167), we find
$$\begin{aligned}
\dot\Sigma(t) &= \frac{k_B\, J_B(t)}{(\epsilon_d-\mu)}\, \ln\frac{p_1(t)\big[1-n_F(\epsilon_d)\big]}{\big[1-p_1(t)\big]\, n_F(\epsilon_d)} \\
&= k_B\, \kappa\, \Big(p_1(t)\big[1-n_F(\epsilon_d)\big] - \big[1-p_1(t)\big]\, n_F(\epsilon_d)\Big)\, \ln\frac{p_1\big[1-n_F(\epsilon_d)\big]}{\big[1-p_1\big]\, n_F(\epsilon_d)} \ge 0,
\end{aligned}\tag{178}$$
where we used JB = (ϵd − µ)κ(p1 − nF ), which follows from Eqs. (171) and (175). The positivity
of Σ̇ can be shown by writing it in the form (x − y)(ln x − ln y) ≥ 0, which is ensured to be
positive because the logarithm is a monotonically increasing function. Using the solution in
Eq. (172), we can explicitly write the entropy production rate as
$$\dot\Sigma(t) = k_B\, \kappa\, e^{-\kappa t}\delta_0\, \ln\frac{\big(e^{-\kappa t}\delta_0 + n_F\big)\big(1-n_F\big)}{\big(1 - n_F - e^{-\kappa t}\delta_0\big)\, n_F}, \tag{179}$$
where δ0 ≡ p1 (0) − nF (ϵd ).
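The positivity of the entropy production rate along the relaxation can be verified directly from Eqs. (172) and (178) (parameter values are hypothetical; we set kB = 1):

```python
import numpy as np

# Entropy production rate along the relaxation of the dot, Eq. (178):
# Sigma_dot = kappa (x - y) ln(x/y), with x = p1 (1 - nF) and
# y = (1 - p1) nF, which is nonnegative term by term.
kappa, nF, p1_0 = 0.5, 0.3, 0.9
delta0 = p1_0 - nF

t = np.linspace(0.0, 20.0, 500)
p1 = nF + np.exp(-kappa * t) * delta0      # solution (172)
x = p1 * (1 - nF)
y = (1 - p1) * nF
sigma_dot = kappa * (x - y) * np.log(x / y)

print(bool(np.all(sigma_dot >= 0)))               # True: second law holds
print(np.isclose(sigma_dot[-1], 0.0, atol=1e-3))  # True: vanishes in equilibrium
```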
i.e., we write the system operators in Eq. (144) in a Fourier series. While this can always be
done, it is particularly simple when the system Hamiltonian is time-independent. In this case,
we may introduce its eigenstates as ĤS |Ea 〉 = Ea |Ea 〉. We may then find the operators Ŝ^j_{α,k} by
multiplying Ŝα,k from the left and the right by resolved identities
$$\hat S_{\alpha,k} = \sum_{a,b} \underbrace{|E_a\rangle\langle E_a|\, \hat S_{\alpha,k}\, |E_b\rangle\langle E_b|}_{\hat S_{\alpha,k}^j} \quad\Rightarrow\quad \big[\hat S_{\alpha,k}^j,\, \hat H_S\big] = \omega_j\, \hat S_{\alpha,k}^j, \tag{181}$$
with ωj = Eb − Ea .
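The decomposition of Eq. (181) can be implemented by grouping the matrix elements of Ŝα,k in the energy eigenbasis by their Bohr frequency. The sketch below (the three-level Hamiltonian and the coupling operator are hypothetical) checks that the components sum back to the original operator and obey the ladder property:

```python
import numpy as np

# Decompose a system operator S into eigenoperators S^j of H_S,
# grouping matrix elements by Bohr frequency omega_j = E_b - E_a, Eq. (181).
def frequency_components(H, S, tol=1e-9):
    E, V = np.linalg.eigh(H)
    S_eig = V.conj().T @ S @ V              # S in the energy eigenbasis
    comps = {}
    d = len(E)
    for a in range(d):
        for b in range(d):
            if abs(S_eig[a, b]) < tol:
                continue
            key = round((E[b] - E[a]) / tol) * tol  # bin equal frequencies
            M = np.zeros((d, d), dtype=complex)
            M[a, b] = S_eig[a, b]
            comps[key] = comps.get(key, 0) + V @ M @ V.conj().T
    return comps

# Hypothetical three-level system and coupling operator.
H = np.diag([0.0, 1.0, 2.5])
S = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)

comps = frequency_components(H, S)
print(np.allclose(sum(comps.values()), S))  # True: components sum back to S
for w, Sj in comps.items():
    # Each component is a ladder operator: [S^j, H_S] = omega_j S^j.
    assert np.allclose(Sj @ H - H @ Sj, w * Sj)
```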
With the help of Eq. (180), we may cast Eq. (144) into
$$\partial_t \tilde\rho_S(t) = \sum_{\alpha,k,k'} \sum_{j,j'} e^{i(\omega_j - \omega_{j'})t}\, \Gamma_{k,k'}^\alpha(\omega_{j'}) \Big[\hat S_{\alpha,k'}^{j'}\, \tilde\rho_S(t)\, \big(\hat S_{\alpha,k}^j\big)^\dagger - \big(\hat S_{\alpha,k}^j\big)^\dagger\, \hat S_{\alpha,k'}^{j'}\, \tilde\rho_S(t)\Big] + \text{H.c.}, \tag{182}$$
with
$$\Gamma_{k,k'}^\alpha(\omega) \equiv \int_0^\infty ds\, e^{i\omega s}\, C_{k,k'}^\alpha(s) = \frac{1}{2}\gamma_{k,k'}^\alpha(\omega) + i\Delta_{k,k'}^\alpha(\omega), \tag{183}$$
where γ^α_{k,k′}(ω) and ∆^α_{k,k′}(ω) are both real.
In the remainder of this section, we will consider a time-independent Hamiltonian ĤS, as
well as bath-correlation functions that obey C^α_{k,k′} ∝ δ_{k,k′} for simplicity. This will simplify the
notation as well as the derivation of the laws of thermodynamics for the master equations we
consider.
|ω j − ω j ′ |τS ≫ 1 ∀ j ̸= j ′ . (184)
In this case, we may drop all terms in Eq. (182) with j ≠ j′ because they average out over a
time-scale shorter than τS due to the oscillating term exp[i(ωj − ωj′ )t].
Going back to the Schrödinger picture, we then find the GKLS master equation
$$\partial_t \hat\rho_S(t) = -i\big[\hat H_S + \hat H_{\rm LS},\, \hat\rho_S(t)\big] + \sum_{\alpha,k,j} \gamma_k^\alpha(\omega_j)\, \mathcal{D}\big[\hat S_{\alpha,k}^j\big]\hat\rho_S(t), \tag{185}$$
with ∆αk (ω j ) = ∆αk,k (ω j ). We note that from Eq. (181), it follows that [ĤS, ĤLS] = 0.
Due to Eq. (184), the secular approximation works well for systems which have no small
gaps in ĤS. As we will see, thermal machines may consist of multiple parts that are weakly
coupled and have small energy gaps induced by the weak coupling. In such systems, the
secular approximation breaks down. Note that in order to obtain the jump operators Ŝ^j_{α,k}, the
Hamiltonian ĤS needs to be diagonalized first. This implies that already obtaining the master
equation may be a formidable task.
We note that in the secular approximation, for a non-degenerate system Hamiltonian, the
populations decouple from the coherences. Indeed, the dissipative part of Eq. (185) describes
classical jumps between the eigenstates of ĤS. Concretely, this means that in the energy-
eigenbasis, the off-diagonal terms of the density matrix tend to zero and the dynamics can
be described by a classical rate equation involving only the populations. While this may be a
good approximation, there are many situations of interest where coherences between energy
eigenstates are important in the presence of thermal reservoirs and the secular approximation
is no longer justified [65–70].
A particularly appealing feature of the secular approximation is the fact that the laws of
thermodynamics are ensured to hold [10]:
0th law: If all reservoirs are at the same inverse temperature β and chemical potential µ,
then the steady state of Eq. (185) reduces to the Gibbs state
$$\hat\rho_S(t) \xrightarrow{\;t\to\infty\;} \frac{e^{-\beta(\hat H_S - \mu \hat N_S)}}{\mathrm{Tr}\big\{e^{-\beta(\hat H_S - \mu \hat N_S)}\big\}}. \tag{187}$$
In equilibrium, the system is thus described by the same temperature and chemical potential
as the environment.
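The zeroth law can be illustrated at the level of the resulting rate equation: if the jump rates between energy eigenstates obey local detailed balance, the stationary distribution is the Gibbs state. A sketch for a hypothetical three-level system coupled to a single bosonic reservoir (energies and the base rate are illustrative choices; particle exchange is omitted for simplicity):

```python
import numpy as np

# Three-level rate equation with pairwise local detailed balance:
# rate_up / rate_down = exp(-beta * omega) for each transition.
E = np.array([0.0, 1.0, 2.3])
beta, gamma0 = 0.7, 1.0

d = len(E)
W = np.zeros((d, d))              # W[a, b]: jump rate from state b to a
for a in range(d):
    for b in range(d):
        if a == b:
            continue
        omega = E[b] - E[a]       # energy released in the jump b -> a
        n = 1.0 / np.expm1(beta * abs(omega))   # Bose occupation
        W[a, b] = gamma0 * (n + 1 if omega > 0 else n)
W -= np.diag(W.sum(axis=0))       # conservation of probability

# Steady state: null vector of the generator W.
vals, vecs = np.linalg.eig(W)
p_ss = np.real(vecs[:, np.argmin(np.abs(vals))])
p_ss /= p_ss.sum()

p_gibbs = np.exp(-beta * E) / np.exp(-beta * E).sum()
print(np.allclose(p_ss, p_gibbs))  # True: the steady state is the Gibbs state
```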
These definitions are to be compared with the definitions for the power and heat current in
the general scenario, c.f. Eq. (111). There, we defined heat and work using the changes in the
energy and particle numbers in the reservoirs. When using a master equation for the reduced
system state, we no longer have access to the properties of the reservoirs. However, we may
still infer heat and work because the term Lα in the master equation describes the exchange
of energy and particles with reservoir α. An increase in the particle number due to Lα implies
a decrease of the particle number in the reservoir α by the same amount, and similarly for
energy. This is discussed in more detail below, see Sec. 4.5.
2nd law: The second law of thermodynamics also follows from Eq. (185) and it may be
shown that [71, 72]
$$\dot{\Sigma} = k_B \partial_t S_{\mathrm{vN}}[\hat{\rho}_S(t)] + \sum_\alpha \frac{J_\alpha(t)}{T_\alpha} \ge 0. \qquad (191)$$
While we focused on a time-independent system Hamiltonian here, the secular approxi-
mation may analogously be applied for a time-dependent Hamiltonian and the laws of ther-
modynamics continue to hold in this case, see Ref. [10] and references therein.
$$|\omega_j - \omega_{j'}|\tau_B \ll 1 \quad \forall\, j, j', \qquad (192)$$
where ωα,k has to be chosen such that |ωα,k − ω j |τB ≪ 1 for all j . Because of Eq. (192),
one may for instance choose ωα,k to equal the average of all ω j . Equation (193) is justified
for all s ≲ τB , i.e., exactly the values of s which are relevant in the integral of Eq. (144). As
a consequence of Eq. (193), we may replace Γkα (ω j ′ ) by Γkα (ωα,k ) in Eq. (182) which, in the
Schrödinger picture, results in the GKLS master equation
$$\partial_t \hat{\rho}_S(t) = -i[\hat{H}_S + \hat{H}_{\mathrm{LS}}, \hat{\rho}_S(t)] + \sum_{\alpha,k} \gamma^\alpha_k(\omega_{\alpha,k})\,\mathcal{D}[\hat{S}_{\alpha,k}]\hat{\rho}_S(t), \qquad (194)$$
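The structure of Eq. (194) is straightforward to implement numerically. The following minimal sketch (Python with NumPy; all parameter values are illustrative assumptions) implements the dissipator D[L̂]ρ̂ = L̂ρ̂L̂† − ½{L̂†L̂, ρ̂} for a single fermionic level coupled to one reservoir and checks that the corresponding Gibbs state is annihilated by the generator, as the 0th law requires.

```python
import numpy as np

def dissipator(L, rho):
    """GKLS dissipator D[L]rho = L rho L^dag - {L^dag L, rho}/2."""
    Ld = L.conj().T
    return L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)

# Single fermionic level coupled to one reservoir (hypothetical parameters)
eps, mu, kT, kappa = 1.0, 0.2, 0.5, 0.1
nF = 1.0 / (np.exp((eps - mu) / kT) + 1.0)    # Fermi-Dirac occupation

d = np.array([[0.0, 1.0], [0.0, 0.0]])        # annihilation op in basis (|0>, |1>)
H = eps * d.conj().T @ d

def liouvillian(rho):
    """Right-hand side of the GKLS master equation for this model."""
    return (-1j * (H @ rho - rho @ H)
            + kappa * nF * dissipator(d.conj().T, rho)
            + kappa * (1 - nF) * dissipator(d, rho))

# 0th law: the Gibbs state exp(-beta(H - mu N))/Z is stationary
w = np.exp(-(eps - mu) / kT)
rho_gibbs = np.diag([1.0, w]) / (1.0 + w)
print(np.max(np.abs(liouvillian(rho_gibbs))))  # ≈ 0 (machine precision)
```

The stationarity follows from detailed balance between the excitation rate κn_F and the decay rate κ(1 − n_F).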
Hamiltonian is then used to compute the internal energy of the system. As argued in Ref. [64],
the mistake we make by replacing ĤS with ĤTD in the thermodynamic bookkeeping is smaller
than the resolution in heat that the master equation in Eq. (194) provides. Within the accuracy of our model, the replacement is thus fully justified and, in many cases, it amounts to neglecting the system-bath coupling in the thermodynamic bookkeeping. For the
thermodynamic Hamiltonian, we find
0th law: If all reservoirs are at the same inverse temperature β and chemical potential µ,
then the steady state of Eq. (194) reduces to the Gibbs state
$$\hat{\rho}_S(t) \xrightarrow{t\to\infty} \frac{e^{-\beta(\hat{H}_{\mathrm{TD}}-\mu\hat{N}_S)}}{\mathrm{Tr}\{e^{-\beta(\hat{H}_{\mathrm{TD}}-\mu\hat{N}_S)}\}}. \qquad (197)$$
In equilibrium, the system is thus described by the same temperature and chemical potential
as the environment.
with
$$J_\alpha(t) = -\mathrm{Tr}\left\{(\hat{H}_{\mathrm{TD}} - \mu_\alpha \hat{N}_S)\mathcal{L}_\alpha\hat{\rho}_S(t)\right\}, \qquad P_\alpha(t) = -\mu_\alpha \mathrm{Tr}\left\{\hat{N}_S\mathcal{L}_\alpha\hat{\rho}_S(t)\right\}. \qquad (200)$$
2nd law: The second law of thermodynamics also follows from Eq. (194) and it may be
shown that
$$\dot{\Sigma} = k_B \partial_t S_{\mathrm{vN}}[\hat{\rho}_S(t)] + \sum_\alpha \frac{J_\alpha(t)}{T_\alpha} \ge 0. \qquad (201)$$
As for the secular approximation, the results from this section may be extended to a time-
dependent Hamiltonian and the laws of thermodynamics continue to hold in this case [64].
We note that in particular in thermodynamic contexts, it is often the case that there are
positive and negative Bohr frequencies, with each S̃α,k (t ) containing only frequencies of one
sign. We may then perform the singular-coupling limit for the positive and negative Bohr frequencies separately, and the master equation in Eq. (194) remains valid if Eq. (192) is respected only for frequencies ω_j and ω_{j′} of the same sign. Below, we will discuss
an example where this is the case, see Sec. 4.6.
1. Drop all terms with j and j ′ such that |ω j − ω j ′ |τS ≫ 1, in analogy to the secular
approximation.
The resulting master equation also obeys the laws of thermodynamics for an appropriately
chosen thermodynamic Hamiltonian. The thermodynamic consistency of this approach was
recently also shown using the method of full counting statistics [74].
where the averages are taken with respect to the exact, total density matrix ρ̂tot (t ). In a master
equation written in the form of Eq. (198), we only have access to the left-hand side of the last
equation, which is given by
$$\partial_t \langle \hat{N}_S \rangle = \sum_\alpha \mathrm{Tr}\{\hat{N}_S \mathcal{L}_\alpha \hat{\rho}_S(t)\}, \qquad (203)$$
connecting the definitions in Eq. (111) to Eqs. (190) and (200). Since the contributions of the different reservoirs to the Liouvillian are additive, we may infer the change of particle
number induced by each reservoir and, by exploiting the conservation of particles, we may
infer ∂ t 〈 N̂α 〉 even though we only have access to the reduced system state.
A similar analysis can be done for energy. From total energy conservation (assuming as
above a time-independent Hamiltonian), we have
$$\partial_t\langle\hat{H}_S\rangle = -\sum_\alpha \partial_t\langle\hat{H}_\alpha + \hat{V}_\alpha\rangle \simeq -\sum_\alpha \partial_t\langle\hat{H}_\alpha\rangle, \qquad (205)$$
where we neglected the energy stored in the coupling between system and environment be-
cause we assume it to be small in order to derive a Markovian master equation. From a master
equation written in the form of Eq. (198), we may write the left-hand side of Eq. (205) as
$$\partial_t\langle\hat{H}_S\rangle = \sum_\alpha \mathrm{Tr}\{\hat{H}_S\mathcal{L}_\alpha\hat{\rho}_S(t)\}, \qquad (206)$$
While this is indeed the appropriate heat current for master equations in the secular approxi-
mation, see Eq. (190), we introduced a different definition in Eq. (200) based on a thermody-
namic Hamiltonian. To understand why, it is necessary to appreciate that Markovian master
equations rely on assumptions about time scales. In particular, the Markov approximation neglects the time evolution of the system during the bath-correlation time, see Eq. (142), and thereby the finite lifetime of the particles in the system. By the time-energy uncertainty relation, this amounts to neglecting the energy broadening of the states in the system. As a consequence, we no longer know the exact energy at which particles are exchanged with the environment.
In the singular-coupling limit, this is exacerbated because we also neglect small differences in
the Bohr frequencies of the system Hamiltonian, see Eq. (193).
The fact that we no longer know the exact energy at which particles enter the environment
implies that we lose resolution in the energy exchanged with the environment. As a result, a
naive application of Eq. (208) can result in violations of the laws of thermodynamics, see for
instance Ref. [75]. It is important to stress however that any violations should remain absent
or negligibly small as long as the approximations that result in the master equation are well
justified [76, 77].
It is however desirable to have a framework that mathematically ensures the laws of ther-
modynamics. This can be obtained by introducing the thermodynamic Hamiltonian to quan-
tify the internal energy of the system [64] which results in the definition for the heat currents
[c.f. Eq. (200)]
Importantly, the use of the thermodynamic Hamiltonian should only slightly affect the values
of the heat currents, i.e., ∂ t 〈ĤS〉 ≃ ∂ t 〈ĤTD 〉 and Jα′ ≃ Jα , such that both definitions of the heat
current are acceptable. Should the two definitions result in substantial differences, then the
approximations that went into the master equation are most likely no longer justified.
Since we will mainly consider master equations in the singular-coupling limit below, and
since we prefer to have a framework that ensures the laws of thermodynamics exactly and not
only approximately, we will use the heat current given in Eq. (209) and the power given in
Eq. (204).
ĤS = ϵ d̂L† d̂L + ϵ d̂R† d̂R + g (d̂L† d̂R + d̂R† d̂L ) = (ϵ + g )d̂+† d̂+ + (ϵ − g )d̂−† d̂− , (211)
Figure 10: Double quantum dot coupled to two fermionic reservoirs. The reservoirs
are characterized by temperatures Tα and chemical potentials µα . The couplings
between the reservoirs and the double dot are given by κα , and g denotes the inter-
dot tunnel coupling. A temperature bias (TL ̸= TR) and/or a chemical potential bias
(µL ̸= µR) may result in particle and heat transport through the double dot in the
steady state.
where we chose the on-site energies ϵ of the two dots to be equal for simplicity and we intro-
duced the eigenmodes
$$\hat{d}_\pm = \frac{\hat{d}_R \pm \hat{d}_L}{\sqrt{2}}, \qquad \hat{d}_R = \frac{\hat{d}_+ + \hat{d}_-}{\sqrt{2}}, \qquad \hat{d}_L = \frac{\hat{d}_+ - \hat{d}_-}{\sqrt{2}}. \qquad (212)$$
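The eigenmodes of Eq. (212) are simply the eigenvectors of the single-particle block of Ĥ_S. A quick numerical check (Python; the values of ϵ and g are arbitrary assumptions) confirms the eigenvalues ϵ ± g:

```python
import numpy as np

eps, g = 1.0, 0.3                      # arbitrary example values
h = np.array([[eps, g], [g, eps]])     # single-particle block in the (L, R) basis

vals, vecs = np.linalg.eigh(h)         # eigh returns eigenvalues in ascending order
print(vals)                            # ≈ [eps - g, eps + g] = [0.7, 1.3]
# The columns of vecs are the (anti)symmetric combinations of Eq. (212),
# with all components of magnitude 1/sqrt(2).
```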
The reservoirs are again modeled by a collection of non-interacting electrons that are tunnel-
coupled to the respective dots
$$\hat{H}_\alpha = \sum_q \epsilon_{\alpha,q}\hat{c}^\dagger_{\alpha,q}\hat{c}_{\alpha,q}, \qquad \hat{V}_\alpha = \sum_q \left(g_{\alpha,q}\hat{d}_\alpha\hat{c}^\dagger_{\alpha,q} - g^*_{\alpha,q}\hat{d}^\dagger_\alpha\hat{c}_{\alpha,q}\right), \qquad (213)$$
with α = L, R. As in the example of a single quantum dot, we may write the coupling Hamil-
tonian as V̂α = Ŝα,0 B̂α,0 + Ŝα,1 B̂α,1 and we find
$$\tilde{S}_{R,0}(t) = e^{i\hat{H}_S t}\hat{d}_R e^{-i\hat{H}_S t} = e^{-i(\epsilon+g)t}\frac{\hat{d}_+}{\sqrt{2}} + e^{-i(\epsilon-g)t}\frac{\hat{d}_-}{\sqrt{2}},$$
$$\tilde{S}_{L,0}(t) = e^{i\hat{H}_S t}\hat{d}_L e^{-i\hat{H}_S t} = e^{-i(\epsilon+g)t}\frac{\hat{d}_+}{\sqrt{2}} - e^{-i(\epsilon-g)t}\frac{\hat{d}_-}{\sqrt{2}}. \qquad (214)$$
Comparing these expressions to Eq. (180), we may read off the frequencies $\omega_j = \epsilon \pm g$, as well as the corresponding operators $\hat{S}^j_{\alpha,0}$. Similarly, we find $\tilde{S}_{\alpha,1}(t) = \tilde{S}^\dagger_{\alpha,0}(t)$, which involves the frequencies $\omega_j = -(\epsilon \pm g)$.
The bath-correlation functions are obtained in analogy to the single-dot case in Sec. 4.3
and read
$$C^\alpha_{0,0}(s) = \int_{-\infty}^{\infty} d\omega\, e^{-i\omega s}\rho_\alpha(\omega)\left[1 - n_F^\alpha(\omega)\right], \qquad C^\alpha_{1,1}(s) = \int_{-\infty}^{\infty} d\omega\, e^{i\omega s}\rho_\alpha(\omega)\, n_F^\alpha(\omega), \qquad (215)$$
with the Fermi-Dirac occupation
$$n_F^\alpha(\epsilon) = \frac{1}{e^{(\epsilon-\mu_\alpha)/k_B T_\alpha} + 1}. \qquad (216)$$
$$\gamma^\alpha_0(\omega) = \kappa_\alpha\left[1 - n_F^\alpha(\omega)\right], \qquad \gamma^\alpha_1(-\omega) = \kappa_\alpha n_F^\alpha(\omega), \qquad (217)$$
The master equation in the secular approximation describes classical hopping into and out-of
the delocalized eigenmodes described by the operators d̂± . Any coherences between these
eigenmodes decay. We may thus interpret the secular master equation as a classical master
equation. However, note that the eigenmodes themselves describe electrons being in a coher-
ent superposition between the two dots.
Evaluating Eq. (190), we find the heat flow and power
$$J_\alpha = \sum_{\sigma=\pm}(\epsilon_\sigma - \mu_\alpha)\frac{\kappa_\alpha}{2}\left[\langle\hat{d}^\dagger_\sigma\hat{d}_\sigma\rangle - n_F^\alpha(\epsilon_\sigma)\right], \qquad P_\alpha = \mu_\alpha\sum_{\sigma=\pm}\frac{\kappa_\alpha}{2}\left[\langle\hat{d}^\dagger_\sigma\hat{d}_\sigma\rangle - n_F^\alpha(\epsilon_\sigma)\right]. \qquad (222)$$
From Eq. (184), we conclude that the secular approximation is justified when
$$\frac{1}{\tau_S} = \frac{\kappa_\alpha}{2} \ll g = \frac{|\omega_j - \omega_{j'}|}{2}. \qquad (223)$$
This implies that for strong coupling between the dots, the secular approximation can safely be
applied. However, once g becomes comparable to either coupling κα , it is no longer justified.
In this case, one should apply the singular-coupling limit.
This implies that the frequencies appearing in Eq. (194) are ωα,0 = ϵ and ωα,1 = −ϵ. We note
that there is no restriction on |ωα,0 − ωα,1 | because S̃α,0 only involves the frequencies ϵ ± g
while S̃α,1 only involves the frequencies −(ϵ ± g ). With the substitution in Eq. (224), we find
the GKLS master equation
$$\partial_t\hat{\rho}_S(t) = -i[\hat{H}_S + \hat{H}_{\mathrm{LS}}, \hat{\rho}_S(t)] + \sum_{\alpha=L,R}\kappa_\alpha\left\{n_F^\alpha(\epsilon)\mathcal{D}[\hat{d}^\dagger_\alpha] + [1 - n_F^\alpha(\epsilon)]\mathcal{D}[\hat{d}_\alpha]\right\}\hat{\rho}_S(t). \qquad (225)$$
The master equation in the singular-coupling limit is also known as a local master equation,
because the jump operators act locally on the left and right quantum dots. In contrast to the
secular master equation (also denoted global master equation), the populations and coher-
ences do not decouple and we may find coherence and even entanglement between the two
quantum dots [70]. We note that the local master equation may also be obtained heuristically
by first setting g = 0, deriving the master equation as in Sec. 4.3, and then reinstating g in
the Hamiltonian. For certain scenarios, this heuristic approach may result in master equations
that violate the laws of thermodynamics [75]. For this reason, it is recommended to perform
the singular-coupling approximation as outlined above. We then obtain a thermodynamically
consistent framework with the thermodynamic Hamiltonian
In the thermodynamic bookkeeping, we thus neglect the coupling between the dots, in analogy
to how we neglect the system-bath coupling energy.
Evaluating Eq. (200), we find the heat flow and power
In this approximation, each electron that enters reservoir α carries the heat ϵ − µα and the
power µα , resulting in the simple relation Jα /Pα = (ϵ − µα )/µα .
The singular-coupling limit is justified when $\rho_\alpha(\epsilon \pm g) \simeq \rho_\alpha(\epsilon)$ as well as $n_F^\alpha(\epsilon \pm g) \simeq n_F^\alpha(\epsilon)$. The second condition is obeyed when
Note that g ≪ κ is not required for the singular-coupling limit to be justified. The secular
approximation and the singular-coupling limit may thus be justified at the same time. Indeed,
since τS ≫ τB this is what we expect from Eqs. (184) and (192).
the only thing that happens is thermalization, we will find heat flows in the steady state and
we will see how heat can be converted into work and how work can be used to refrigerate. The
system we consider is a spinless, single-level quantum dot tunnel-coupled to two heat baths
with
$$\hat{H}_S = \epsilon_d \hat{d}^\dagger\hat{d}, \qquad \hat{H}_\alpha = \sum_q \epsilon_{\alpha,q}\hat{c}^\dagger_{\alpha,q}\hat{c}_{\alpha,q}, \qquad \hat{V}_\alpha = \sum_q\left(g_{\alpha,q}\hat{d}\,\hat{c}^\dagger_{\alpha,q} - g^*_{\alpha,q}\hat{d}^\dagger\hat{c}_{\alpha,q}\right), \qquad (231)$$
where α =c, h labels the reservoirs according to their temperatures Tc ≤ Th . Just as for the
quantum dot coupled to a single reservoir, Eq. (144) is already in GKLS form. Since the terms
in the master equation corresponding to different reservoirs are additive, we find
with
$$\mathcal{L}_\alpha\hat{\rho} = \kappa_\alpha\left[1 - n_F^\alpha(\epsilon_d)\right]\mathcal{D}[\hat{d}]\hat{\rho} + \kappa_\alpha n_F^\alpha(\epsilon_d)\mathcal{D}[\hat{d}^\dagger]\hat{\rho}, \qquad (233)$$
where nFα is the Fermi-Dirac occupation with temperature Tα and chemical potential µα , see
Eq. (216). Here we neglected the renormalization of ϵd which is given by a straightforward
generalization of Eq. (166).
where
$$\gamma = \kappa_c + \kappa_h, \qquad \bar{n} = \frac{\kappa_c n_F^c(\epsilon_d) + \kappa_h n_F^h(\epsilon_d)}{\kappa_c + \kappa_h}. \qquad (235)$$
Comparing to Eq. (171), we find that the quantum dot behaves just like a quantum dot coupled
to a single heat bath with coupling strength γ and mean occupation n̄. The solution thus reads
Figure 11: Quantum dot heat engine. A quantum dot is coupled both to a hot reser-
voir with temperature Th and chemical potential µh as well as a cold reservoir with
temperature Tc and chemical potential µc . A temperature bias, Th > Tc , allows for
a particle current against a chemical potential bias (µh ̸= µc , for electrons, this cor-
responds to a voltage bias). Thereby, heat is converted into chemical work.
where the power and heat currents are defined in agreement with Eq. (190)
Explicitly, we find
$$P_\alpha = \mu_\alpha\kappa_\alpha e^{-\gamma t}\left[p_1(0) - \bar{n}\right] + \mu_\alpha\kappa_\alpha\left[\bar{n} - n_F^\alpha(\epsilon_d)\right], \qquad J_\alpha = \frac{\epsilon_d - \mu_\alpha}{\mu_\alpha}P_\alpha. \qquad (239)$$
Just as for a single reservoir, there is a transient term in the power which decreases exponen-
tially in time. In contrast to the single reservoir case, there is now also a time-independent
term which remains in steady state.
In the steady state, the observables of the system do not change. We can use this fact to
draw a number of conclusions without using the explicit solutions for the power and the heat
currents. In particular, since the left-hand side of Eq. (237) vanishes, we find
P = Pc + Ph = −(Jc + Jh ), (241)
$$\eta = \frac{P}{-J_h} = \frac{\mu_c - \mu_h}{\epsilon_d - \mu_h} = 1 - \frac{\epsilon_d - \mu_c}{\epsilon_d - \mu_h}, \qquad (242)$$
where we introduced the efficiency η which is given by the ratio between the power (the
output of the heat engine) and the heat current from the hot reservoir (the input of the heat
engine). Using the explicit solution for the power in Eq. (239), we find
$$P = \frac{\kappa_c\kappa_h}{\kappa_c + \kappa_h}(\mu_c - \mu_h)\left[n_F^h(\epsilon_d) - n_F^c(\epsilon_d)\right]. \qquad (243)$$
This quantity vanishes at zero voltage (µc = µh ), as well as at the stopping voltage where
nFh (ϵd ) = nFc (ϵd ), see also Fig. 13 b).
Let us now consider under which conditions the system acts as a heat engine, i.e., heat from
the hot reservoir is converted into power. From Eq. (243), we can identify different regimes
depending on the signs of P, J_c, and J_h. These regimes are illustrated in Figs. 12 and 13(a).
For µc ≥ µh (µc ≤ µh ), we find that the quantum dot acts as a heat engine for large positive
(negative) values of ϵd . In both cases, we find from Eq. (243) that power is positive as long as
$$\frac{\epsilon_d - \mu_c}{\epsilon_d - \mu_h} \ge \frac{T_c}{T_h} \quad\Rightarrow\quad \eta \le 1 - \frac{T_c}{T_h} = \eta_C. \qquad (244)$$
We thus find that the efficiency is bounded from above by the Carnot efficiency as long as the
power output is non-negative.
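The bound of Eq. (244) can be verified numerically from Eqs. (242) and (243). The sketch below (Python; temperatures and chemical potentials chosen as in Fig. 13, couplings hypothetical) scans the level position ϵ_d and checks that η ≤ η_C whenever the power output is positive.

```python
import numpy as np

# Hypothetical couplings; temperatures and potentials as in Fig. 13 (kB = 1)
kc, kh = 1.0, 1.0
Tc, Th = 0.3, 0.8
mu_c, mu_h = 1.0, 0.0

def nF(e, T, mu):
    """Fermi-Dirac occupation, Eq. (216)."""
    return 1.0 / (np.exp((e - mu) / T) + 1.0)

eta_C = 1 - Tc / Th                    # Carnot efficiency
for eps_d in np.linspace(1.1, 5.0, 40):
    # Steady-state power, Eq. (243)
    P = kc * kh / (kc + kh) * (mu_c - mu_h) * (nF(eps_d, Th, mu_h) - nF(eps_d, Tc, mu_c))
    if P > 0:                                  # heat-engine regime
        eta = (mu_c - mu_h) / (eps_d - mu_h)   # efficiency, Eq. (242)
        assert eta <= eta_C + 1e-9             # Carnot bound, Eq. (244)
```

For these parameters, the power changes sign at ϵ_d = 1.6, where the two Fermi functions cross and η reaches η_C = 0.625.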
Figure 13: Performance of the quantum dot heat engine. (a) Power and heat currents
as a function of ϵd . The regimes illustrated in Fig. 12 can clearly be identified. (b)
"Lasso" diagram. Along the curve, the voltage bias is increased in the direction of the arrow. For µc = µh, both the power and the efficiency vanish. Increasing
µc then results in a finite power and efficiency, until the power vanishes again at
the stopping voltage, where the efficiency reaches ηC . Such plots are called Lasso
diagrams because the curve usually goes back to the origin at the stopping voltage,
see also Fig. 14. Parameters: κc = κh , ϵd = 2κc , kB Tc = 0.3κc , kB Th = 0.8κc ,
µc = κc , µh = 0.
which is positive since each term in the sum is positive in complete analogy to Eq. (178). In
the steady state, we find
$$\dot{\Sigma} = k_B\frac{\kappa_c\kappa_h}{\kappa_c + \kappa_h}\left[n_F^h(\epsilon_d) - n_F^c(\epsilon_d)\right]\left[\beta_c(\epsilon_d - \mu_c) - \beta_h(\epsilon_d - \mu_h)\right] \ge 0, \qquad (249)$$
which vanishes at the Carnot point, where nFh (ϵd ) = nFc (ϵd ), η = ηC , and both the power as
well as the heat currents vanish. We stress that while an equilibrium situation (i.e., Tc = Th and
µc = µh ) ensures nFh (ϵd ) = nFc (ϵd ), the Carnot point can also be reached out of equilibrium.
The interplay between power and efficiency is illustrated in Fig. 13. We find that at maximum power, the efficiency reaches above 60% of the Carnot efficiency. Similar values were found experimentally in Ref. [78]. As mentioned above, the Carnot efficiency is obtained at
the stopping voltage. This is a consequence of the fact that there is only a single energy at
which transport happens. At the stopping voltage, all transport is blocked implying that both
the charge as well as the heat currents vanish. This implies that there is no dissipation (Σ̇ = 0)
and the efficiency takes on the Carnot value (see also Ref. [79]). In reality, as well as in the
experiment of Ref. [78], this ideal filtering effect is spoiled by the broadening of the energy
level which is neglected in the Markovian master equation, see Fig. 14. Including this energy
broadening, one finds that at some energies, charges move from hot to cold while at other
energies, they move in the other direction. At the stopping voltage, this results in a vanishing
of power but still a net heat current, such that the efficiency vanishes. In this case, Fig. 13 (b)
takes on the shape of a lasso.
5.1.4 Refrigeration
As discussed above, the quantum dot can also act as a refrigerator in the regime where P < 0,
Jh > 0, and Jc < 0. In this case, electrical power is used to reverse the natural flow of heat,
resulting in a heat flow out of the cold reservoir and into the hot reservoir. The efficiency of
Figure 14: Effect of level broadening. At the Carnot point, where nFh (ϵd ) = nFc (ϵd ),
our Markovian treatment predicts P = Jh = 0 and η = ηC . This occurs because the
quantum dot is approximated as being a perfect energy filter and no transport occurs
at the Carnot point as illustrated in the left panel. In reality, the energy level of the
quantum dot is broadened. Above ϵd , nFh (ϵ) > nFc (ϵ) and we get a particle current
from hot to cold. Below ϵd , nFh (ϵ) < nFc (ϵ) resulting in a particle current from cold to
hot. These particle currents cancel, resulting in P = 0. However, since the particles
above and below ϵd have a different energy, their contribution to the heat current
does not cancel, resulting in −J_h > 0, which implies η = 0. If the Markovian approximation is justified, J_h is small. However, it remains finite, which implies η = 0 at the stopping voltage.
$$\dot{\Sigma} = \frac{-J_c}{T_h}\,\frac{\eta_C^{\mathrm{COP}} - \eta^{\mathrm{COP}}}{\eta_C^{\mathrm{COP}}\,\eta^{\mathrm{COP}}} \quad\Rightarrow\quad 0 \le \eta^{\mathrm{COP}} \le \eta_C^{\mathrm{COP}}, \qquad (251)$$
where the second inequality holds for $J_c \le 0$ and we introduced the Carnot value for the COP
$$\eta_C^{\mathrm{COP}} = \frac{T_c}{T_h - T_c}. \qquad (252)$$
We note that as $T_c \to T_h$, $\eta_C^{\mathrm{COP}}$ diverges. This reflects the fact that, in principle, it is possible
to move heat in between two reservoirs with equal temperature without investing any work.
In our system, we find from Eqs. (238) and (240)
$$\eta^{\mathrm{COP}} = \frac{\epsilon_d - \mu_c}{\mu_c - \mu_h}. \qquad (253)$$
From this, we find that $\eta^{\mathrm{COP}}$ vanishes when $\epsilon_d = \mu_c$ and takes on the Carnot value at $\epsilon_d = (\mu_c T_h - \mu_h T_c)/(T_h - T_c)$, which is exactly the point where the regime of the refrigerator meets the regime
of the heat engine, see Fig. 12. Interestingly, both the COP as well as the efficiency reach
their maximum value at this point, where no transport takes place. The Carnot point is often
called the point of reversibility. At this point nothing happens but it can be seen as the limit of
converting heat into work infinitely slowly and without wasting any energy (thus, reversibly).
Equivalently, taking the limit from the other side, it can be seen as the limit of cooling the cold
reservoir reversibly.
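A quick numerical check of Eq. (253) (Python; the parameter values are arbitrary assumptions) confirms that the COP reaches the Carnot value of Eq. (252) exactly at ϵ_d = (µ_c T_h − µ_h T_c)/(T_h − T_c):

```python
# Arbitrary example parameters (kB = 1)
Tc, Th = 0.3, 0.8
mu_c, mu_h = 1.0, 0.0

# Level position where refrigerator and heat-engine regimes meet
eps_star = (mu_c * Th - mu_h * Tc) / (Th - Tc)

cop = (eps_star - mu_c) / (mu_c - mu_h)   # Eq. (253)
print(cop, Tc / (Th - Tc))                # both ≈ 0.6 for these values
```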
5.2.1 Entanglement
Before we consider the thermal machine itself, we provide a brief introduction to entanglement
which is one of the most intriguing features of quantum mechanics. For more information, see
the book by Nielsen and Chuang [17]. To consider entanglement, we require a bi-partite
Hilbert space H = HA ⊗ HB , where HA denotes the Hilbert space of Alice and HB is Bob’s
Hilbert space. A quantum state on this bi-partite Hilbert space is then said to be a product state
if it reads ρ̂ = ρ̂A ⊗ ρ̂B . This corresponds to the scenario where Alice and Bob have access to
their respective states ρ̂A and ρ̂B , without any correlations between them. A classical mixture
of product states is called a separable state
$$\hat{\rho} = \sum_j p_j\,\hat{\rho}_A^j \otimes \hat{\rho}_B^j, \qquad p_j \ge 0, \qquad \sum_j p_j = 1. \qquad (254)$$
Such a state contains classical correlations. For instance, it can describe the rather trivial
example where Alice and Bob buy fruit together. With equal probabilities, they buy two apples or two oranges. If Alice has an apple, we know Bob has an apple as well, and similarly for
oranges. Obviously, no fruits are entangled in this example.
A state is entangled iff it is not separable, i.e., if it cannot be written in the form of Eq. (254).
In this case, the correlations between Alice and Bob are no longer classical in nature. Entangle-
ment may thus be seen as a form of correlation that goes beyond classical correlations. Here
we illustrate this with two simple examples. First, we consider the state
$$\hat{\rho}_{\mathrm{cl}} = \frac{1}{2}\left(|00\rangle\langle 00| + |11\rangle\langle 11|\right) = \frac{1}{2}|0\rangle\langle 0|\otimes|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1|\otimes|1\rangle\langle 1|, \qquad (255)$$
which is evidently separable. It corresponds to the apple-orange scenario above, where |00〉
could denote an apple for both Alice and Bob. As an example for an entangled state, we
consider one of the Bell states
$$|\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right), \qquad |\Phi^+\rangle\langle\Phi^+| = \frac{1}{2}\left(|00\rangle\langle 00| + |11\rangle\langle 11| + |00\rangle\langle 11| + |11\rangle\langle 00|\right). \qquad (256)$$
This state cannot be written in the form of Eq. (254) due to the off-diagonal elements. Indeed,
it is the maximally entangled state for two qubits.
The amount of entanglement can be quantified by the entanglement of formation [81].
Loosely speaking, it is determined by the number of Bell states that are required to prepare the
given state using only local operations and classical communication (LOCC). For two qubits,
the entanglement of formation is a number between zero and one, where zero is obtained for
separable states and one for Bell states.
which is monotonically related to the entanglement of formation. Just as the latter, the con-
currence ranges from zero, obtained for separable states, to one, reached for Bell states. The
concurrence of a state ρ̂ can be computed with the help of the auxiliary state
$$\tilde{\rho} = (\hat{\sigma}_y \otimes \hat{\sigma}_y)\,\hat{\rho}^*\,(\hat{\sigma}_y \otimes \hat{\sigma}_y), \qquad (257)$$
where the star denotes complex conjugation in the computational basis (|0⟩, |1⟩). Let λ_j de-
note the eigenvalues of ρ̂ ρ̃ in decreasing order, i.e., λ1 ≥ λ2 ≥ λ3 ≥ λ4 . The concurrence
may then be written as
$$C[\hat{\rho}] = \max\left\{0, \sqrt{\lambda_1} - \sqrt{\lambda_2} - \sqrt{\lambda_3} - \sqrt{\lambda_4}\right\}. \qquad (258)$$
This quantity lies between zero and one, where C [ρ̂] = 0 holds for separable states and
C [ρ̂] > 0 for entangled states, with unity reached only for Bell states.
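The recipe of Eq. (258), together with the standard spin-flipped auxiliary state $\tilde{\rho} = (\hat{\sigma}_y \otimes \hat{\sigma}_y)\hat{\rho}^*(\hat{\sigma}_y \otimes \hat{\sigma}_y)$, can be sketched in a few lines (Python with NumPy). Applied to the two example states above, it yields zero for the classical mixture and one for the Bell state:

```python
import numpy as np

def concurrence(rho):
    """Two-qubit concurrence, Eq. (258)."""
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    rho_tilde = Y @ rho.conj() @ Y                 # spin-flipped auxiliary state
    lam = np.linalg.eigvals(rho @ rho_tilde).real  # eigenvalues of rho rho~
    lam = np.sqrt(np.clip(np.sort(lam)[::-1], 0, None))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Separable mixture of Eq. (255)
rho_cl = np.zeros((4, 4)); rho_cl[0, 0] = rho_cl[3, 3] = 0.5
# Bell state of Eq. (256)
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(phi, phi)

print(concurrence(rho_cl), concurrence(rho_bell))  # ≈ 0.0 and 1.0
```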
Fermionic entanglement: The discussion above assumed that the Hilbert space has the
tensor-product structure H = HA ⊗ HB . For fermions, this is not the case because fermionic
operators corresponding to Alice and Bob anti-commute. This implies that any definition for
entanglement needs to be reconsidered for fermions [84]. To make progress, we denote by
|00〉 the vacuum state, where no fermion is present. Using the creation operators on the dots
d̂α† , we may then define the states
$$|10\rangle = \hat{d}^\dagger_L|00\rangle, \qquad |01\rangle = \hat{d}^\dagger_R|00\rangle, \qquad |11\rangle = \hat{d}^\dagger_L\hat{d}^\dagger_R|00\rangle = -\hat{d}^\dagger_R\hat{d}^\dagger_L|00\rangle. \qquad (259)$$
Note the minus sign in the last equation, which forces us to associate to the state |11〉 a spe-
cific order for the fermionic operators. It turns out that in our system, we may compute the
concurrence as if these states were two-qubit states. This is the case because we will only find
a single off-diagonal element corresponding to a coherent superposition of a single electron
between the two modes (i.e., between |01〉 and |10〉). In general, more care has to be taken
when evaluating the entanglement of fermionic systems (even for two modes) [84].
〈d̂i† d̂ †j d̂k d̂l 〉 = 〈d̂i† d̂l 〉〈d̂ †j d̂k 〉 − 〈d̂i† d̂k 〉〈d̂ †j d̂l 〉. (263)
For more information on solving master equations for Gaussian processes, see Ref. [18].
From the master equation in Eq. (260), one may derive
$$\partial_t \vec{v} = A\vec{v} + \vec{b}, \qquad (264)$$
with
$$\vec{v} = \begin{pmatrix} \langle\hat{d}^\dagger_L\hat{d}_L\rangle \\ \langle\hat{d}^\dagger_R\hat{d}_R\rangle \\ \langle\hat{d}^\dagger_L\hat{d}_R\rangle \\ \langle\hat{d}^\dagger_R\hat{d}_L\rangle \end{pmatrix}, \qquad A = \begin{pmatrix} -\kappa_L & 0 & -ig & ig \\ 0 & -\kappa_R & ig & -ig \\ -ig & ig & -\frac{\kappa_L+\kappa_R}{2} & 0 \\ ig & -ig & 0 & -\frac{\kappa_L+\kappa_R}{2} \end{pmatrix}, \qquad \vec{b} = \begin{pmatrix} \kappa_L n_F^L(\epsilon) \\ \kappa_R n_F^R(\epsilon) \\ 0 \\ 0 \end{pmatrix}. \qquad (265)$$
At steady state, we may set the left-hand side of Eq. (264) to zero and we find
$$0 = A\vec{v}_{\mathrm{ss}} + \vec{b} \quad\Rightarrow\quad \vec{v}_{\mathrm{ss}} = -A^{-1}\vec{b}. \qquad (266)$$
In order to connect the averages in v⃗ to the state of the double quantum dot, we write
$$\hat{\rho}_S = \sum_{n_L,n_R,n_L',n_R'} \rho^{\,n_L,n_R}_{\,n_L',n_R'}\,|n_L,n_R\rangle\langle n_L',n_R'|. \qquad (267)$$
With the basis states given in Eq. (259), the matrix elements may be cast into
$$\rho^{\,n_L,n_R}_{\,n_L',n_R'} = \langle n_L,n_R|\hat{\rho}_S|n_L',n_R'\rangle = \mathrm{Tr}\left\{(\hat{d}^\dagger_L)^{n_L'}(\hat{d}^\dagger_R)^{n_R'}|00\rangle\langle 00|\,\hat{d}_R^{\,n_R}\hat{d}_L^{\,n_L}\,\hat{\rho}_S\right\} = \left\langle (\hat{d}^\dagger_L)^{n_L'}(\hat{d}^\dagger_R)^{n_R'}\left(1 - \hat{d}^\dagger_L\hat{d}_L\right)\left(1 - \hat{d}^\dagger_R\hat{d}_R\right)\hat{d}_R^{\,n_R}\hat{d}_L^{\,n_L}\right\rangle. \qquad (268)$$
Here we used $|00\rangle\langle 00| = (1 - \hat{d}^\dagger_L\hat{d}_L)(1 - \hat{d}^\dagger_R\hat{d}_R)$, which may be verified by applying this operator to all basis states. Note that due to the fermionic anti-commutation relations, and because
there is no coherence between states with a different total number of electrons, the only aver-
ages required to compute the density matrix that are not of the form 〈d̂α† d̂β 〉 can be reduced
by Eq. (263).
A lengthy but straightforward calculation then results in the steady state
$$\hat{\rho}_{\mathrm{ss}} = \frac{1}{4g^2 + \kappa_L\kappa_R}\Bigg\{\kappa_L\kappa_R\,\frac{e^{-\beta_L(\epsilon-\mu_L)\hat{d}^\dagger_L\hat{d}_L}}{Z_L}\frac{e^{-\beta_R(\epsilon-\mu_R)\hat{d}^\dagger_R\hat{d}_R}}{Z_R} + 4g^2\,\frac{e^{-\bar{\beta}(\epsilon-\bar{\mu})(\hat{d}^\dagger_L\hat{d}_L+\hat{d}^\dagger_R\hat{d}_R)}}{\bar{Z}^2} - i\,\frac{2g\kappa_L\kappa_R(n_F^L - n_F^R)}{\kappa_L + \kappa_R}\left(\hat{d}^\dagger_L\hat{d}_R - \hat{d}^\dagger_R\hat{d}_L\right)\Bigg\}, \qquad (269)$$
where we omitted the argument of the Fermi functions for brevity and the barred quantities
are defined by the equation
where in the last equality, the choice of α does not matter. Note that the terms in the first line
of Eq. (269) are both product states. Thus, it is the second line, including the coherence factor
d̂L† d̂R − d̂R† d̂L = |10〉〈01| − |01〉〈10|, which is responsible for any entanglement we might get.
Limiting cases: It is instructive to consider the steady state in some limiting cases. First,
we consider an equilibrium scenario where βL = βR = β̄ and µL = µR = µ̄. In this case, the
second line of Eq. (269) vanishes and the first two terms become proportional to each other.
We thus find, as expected from Eq. (197)
$$\hat{H}_{\mathrm{TD}} = \epsilon\left(\hat{d}^\dagger_L\hat{d}_L + \hat{d}^\dagger_R\hat{d}_R\right), \qquad \hat{N}_S = \hat{d}^\dagger_L\hat{d}_L + \hat{d}^\dagger_R\hat{d}_R. \qquad (273)$$
Note that this is a product state due to the additive nature of ĤTD . In the singular-coupling
master equation considered here, the thermal state does not exhibit any entanglement. This
is different at large coupling g , where the secular approximation should be used instead [70].
Second, we consider the limit where κL , κR ≫ g . In this case, only the first term in
Eq. (269) is relevant. This term describes two quantum dots, each in equilibrium with the
reservoir it couples to. The state is thus a product of Gibbs states with the local reservoir
temperatures and chemical potentials.
The third case we consider is the case where κL = 0. In this case, we find that only the
second term in Eq. (269) survives with β̄ = βR and µ̄ = µR. The system thus equilibrates with
the right reservoir, taking the form of Eq. (272).
Finally, we consider the limit g ≫ κL , κR. Just as in the last case, the second term in
Eq. (269) dominates and the system equilibrates to the average occupation n̄ with the state
reducing to Eq. (272). In all these limiting cases, no entanglement is found as the term in-
volving coherence always drops out.
Concurrence: To quantify the entanglement in Eq. (269), we consider the concurrence. The
steady state given in Eq. (269) is of the form
$$\hat{\rho}_{\mathrm{ss}} = \begin{pmatrix} p_0 & 0 & 0 & 0 \\ 0 & p_L & \alpha & 0 \\ 0 & \alpha^* & p_R & 0 \\ 0 & 0 & 0 & p_d \end{pmatrix}, \qquad (274)$$
where the basis states are |00⟩, |10⟩, |01⟩, |11⟩. For such a state, the concurrence reduces to $C[\hat{\rho}_{\mathrm{ss}}] = \max\{0, 2|\alpha| - 2\sqrt{p_0 p_d}\}$. From Eq. (269), we find
We thus find that entanglement is indeed generated. Interestingly, the entanglement is gener-
ated due to reservoirs which are out of equilibrium, as can be inferred from the term |nFR − nFL |.
This is a surprising result because usually, any coupling to the environment only reduces co-
herence and entanglement in the system. This is because entanglement is monogamous, which
means that if A is strongly entangled with B, it cannot at the same time be strongly entangled
with C [87]. When a system couples to its environment, this results in entanglement between
the system and the reservoirs, which generally reduces the inter-system entanglement. Here
we encounter a different behavior, where an out-of-equilibrium environment induces entan-
glement within a system.
$$J_R = -\mathrm{Tr}\left\{(\hat{H}_{\mathrm{TD}} - \mu_R\hat{N}_S)\mathcal{L}_R\hat{\rho}_{\mathrm{ss}}\right\} = (\epsilon - \mu_R)\,\frac{4g^2\kappa_L\kappa_R(n_F^L - n_F^R)}{(\kappa_L + \kappa_R)(4g^2 + \kappa_L\kappa_R)}, \qquad (276)$$
with the thermodynamic Hamiltonian given in Eq. (273). Noting the similarity of this expres-
sion with the first term in the concurrence, see Eq. (275), we find that the system is entangled
when the heat current exceeds a critical value [88]
$$|J_R| \ge |\epsilon - \mu_R|\,|g|\,\frac{2\kappa_L\kappa_R}{4g^2 + \kappa_L\kappa_R}\sqrt{\left[(1 - n_F^L)(1 - n_F^R) + \frac{4g^2}{\kappa_L\kappa_R}(1-\bar{n})^2\right]\left[n_F^L n_F^R + \frac{4g^2}{\kappa_L\kappa_R}\bar{n}^2\right]}. \qquad (277)$$
When we can trust the master equation we employ to be a valid description, the heat current
may thus serve as an indicator for entanglement.
Maximal entanglement: The concurrence in Eq. (275) can be maximized over all parameters, which results in
$$C[\hat{\rho}_{\mathrm{ss}}] = \frac{\sqrt{5} - 1}{4} \simeq 0.309. \qquad (278)$$
This value is obtained for the parameters $n_F^L = 1$, $n_F^R = 0$, $\kappa_L = \kappa_R$, and $g/\kappa_L = (\sqrt{5} - 1)/4$.
The Fermi functions should thus be maximally different, which results in the largest current
and off-diagonal density matrix element. The couplings to the reservoirs should be equal
and interestingly, the ratio between the interdot coupling and the system-bath coupling is
determined by the golden ratio ϕ. Indeed, we find
$$\phi \equiv \frac{1 + \sqrt{5}}{2} = \frac{\kappa_L}{2g} = \frac{1}{2C} = \frac{|\alpha|}{2\sqrt{p_0 p_d}}. \qquad (279)$$
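These numbers are easy to verify by reading off the populations p_0, p_d and the coherence α from Eq. (269) at the stated optimal parameters (Python sketch; the expressions for p_0 and p_d below are obtained from the diagonal elements of the two product-state terms in Eq. (269)):

```python
import numpy as np

# Optimal parameters reported for Eq. (278)
nFL, nFR = 1.0, 0.0
kL = kR = 1.0
g = (np.sqrt(5) - 1) / 4 * kL

nbar = (kL * nFL + kR * nFR) / (kL + kR)
w = kL * kR / (4 * g**2 + kL * kR)       # weight of the local-Gibbs product term

# Diagonal elements (empty and doubly occupied) and coherence of Eq. (269)/(274)
p0 = w * (1 - nFL) * (1 - nFR) + (1 - w) * (1 - nbar) ** 2
pd = w * nFL * nFR + (1 - w) * nbar**2
alpha = 2 * g * kL * kR * (nFL - nFR) / ((kL + kR) * (4 * g**2 + kL * kR))

C = max(0.0, 2 * abs(alpha) - 2 * np.sqrt(p0 * pd))
print(C, (np.sqrt(5) - 1) / 4)               # both ≈ 0.309
print(abs(alpha) / (2 * np.sqrt(p0 * pd)))   # golden ratio ≈ 1.618
```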
As shown in Ref. [89], the steady state for these parameters is not only entangled but can
also be used for quantum teleportation. The entanglement is further enhanced when includ-
ing electron-electron interactions, which may even result in nonlocal states that violate Bell
inequalities [70].
Which of these goals is more relevant may depend on the specific circumstances. As we see
below, different figures of merit can be used to describe the performance of the refrigerator in
reaching the different goals.
We stress that in contrast to the previous examples, we do not consider quantum dots
where fermions are exchanged with the environment. Instead, we consider qubits that can
exchange photons with the environment. We therefore do not have chemical potentials in this
section (since µ = 0 for photons).
From the interaction Hamiltonian in Eq. (282), we may anticipate the cooling mechanism:
Two excitations, one in the hot qubit and one in the cold qubit, are turned into an excitation in
the room qubit by the term σ̂r† σ̂c σ̂h . This excitation is then dissipated into the room reservoir.
The hot qubit is then re-excited by the hot reservoir, preparing to remove any excitation in the
cold qubit coming from the cold reservoir. The interaction in Eq. (282) is most effective when
it is in resonance, i.e., when the states |001〉 and |110〉 have the same energy. We thus demand
ϵr = ϵc + ϵh. (284)
We note that the interaction in Eq. (282) would be unphysical for fermions, where it would
imply that one fermion is turned into two fermions.
We describe the coupling to the environment using the master equation in the singular-
coupling limit
Lα = κ̃α nBα D[σ̂α† ] + κ̃α (nBα + 1)D[σ̂α ] = κα nFα D[σ̂α† ] + κα (1 − nFα )D[σ̂α ], (286)
where the Bose-Einstein occupation and the Fermi-Dirac occupation are given by
nBα = 1/(e^{ϵα/(kB Tα)} − 1), nFα = 1/(e^{ϵα/(kB Tα)} + 1). (287)
This master equation can be derived using reservoirs described by noninteracting bosons (with
µ = 0)
Ĥα = Σ_q ϵα,q â†α,q âα,q, V̂α = Σ_q gα,q (â†α,q σ̂α + σ̂α† âα,q). (288)
This naturally results in the dissipators given by the first equality in Eq. (286), including the
Bose-Einstein occupations. In the second equality in Eq. (286), the dissipators are written in
terms of the Fermi-Dirac occupations by using
κα = κ̃α (2nBα + 1), nFα = nBα/(2nBα + 1), nFα/(1 − nFα) = nBα/(nBα + 1) = e^{−ϵα/(kB Tα)}. (289)
The last equality ensures that the ratio between the rates of absorbing and emitting energy is
given by the Boltzmann factor. This is known as local detailed balance and it ensures that the
equilibrium state is a Gibbs state. Note that while κ̃α is determined by the reservoir’s spectral
density and is temperature independent, κα depends on temperature Tα .
Heat currents: From the master equation, we may extract the heat currents that enter the
different reservoirs. As discussed in Sec. 4.4.2, the singular-coupling limit requires the use
of a thermodynamic Hamiltonian for a consistent thermodynamic bookkeeping. The correct
choice for the thermodynamic Hamiltonian depends on the details of the Hamiltonian. Given
the resonance condition in Eq. (284), the appropriate thermodynamic Hamiltonian is given
by ĤTD = Ĥ0 , i.e., we neglect the interaction between the qubits in the thermodynamic book-
keeping. The heat currents then read
Jα = ϵα κα (〈σ̂α† σ̂α〉 − nFα). (290)
Similarly to what we found for the systems based on quantum dots, the signs of the heat currents are determined by which occupation is larger, the occupation of the reservoir nFα or the
occupation of the qubit 〈σ̂α† σ̂α 〉.
To make progress, it is instructive to consider the time-evolution of the qubit occupation
∂t 〈σ̂α† σ̂α〉 = Tr{σ̂α† σ̂α ∂t ρ̂S(t)} = −i〈[σ̂α† σ̂α, Ĥint]〉 + κα (nFα − 〈σ̂α† σ̂α〉). (291)
The second term on the right-hand side can be identified as −Jα /ϵα by comparing to Eq. (290).
The first term can be related to the following quantity
I ≡ 2g Im〈σ̂r† σ̂c σ̂h 〉 = i〈[σ̂c† σ̂c , Ĥint ]〉 = i〈[σ̂h† σ̂h , Ĥint ]〉 = −i〈[σ̂r† σ̂r , Ĥint ]〉. (292)
In the steady state, we have ∂ t 〈σ̂α† σ̂α 〉 = 0. From Eqs. (291) and (292), we may then infer
Jc = −ϵc I, Jh = −ϵh I, Jr = ϵr I. (293)
This remarkably simple relation between the heat currents arises due to the simple structure
of the Hamiltonian. Heat may only traverse the system by exchanging an excitation in the
room qubit with two excitations, one in the cold and one in the hot qubit.
The laws of thermodynamics: In the steady state, Eq. (292) allows us to express the laws
of thermodynamics in a particularly simple form. The first law of thermodynamics may be
written as
Jc + Jh + Jr = I(ϵr − ϵc − ϵh ) = 0, (294)
which holds because of the resonance condition in Eq. (284). Obviously, the first law still holds
if this resonance condition is not met. However, in that case the thermodynamic Hamiltonian
should be chosen differently which would modify Eq. (290).
The second law of thermodynamics reads
Jc/Tc + Jh/Th + Jr/Tr = I [(ϵc + ϵh)/Tr − ϵc/Tc − ϵh/Th] ≥ 0. (295)
In contrast to the first law, we may not verify the second law without explicitly solving the
master equation. However, knowing that it holds, the second law can tell us when the device
operates as a refrigerator. From Eq. (293), we see that the cold reservoir is cooled when I ≥ 0.
From the second law, we can infer that this is the case if the term in brackets in Eq. (295) is
positive. This results in the condition for cooling
ϵc/ϵh ≤ Tc(Th − Tr)/(Th(Tr − Tc)) ≡ η_C^COP, (296)
where η_C^COP denotes the Carnot value for the coefficient of performance of an absorption refrigerator, see also the next subsection.
Goal 1: For this goal, the desired output is a large heat current out of the cold reservoir. The
natural figure of merit is then the coefficient of performance (COP)
η^COP ≡ −Jc/(−Jh) = ϵc/ϵh ≤ η_C^COP. (297)
As for the heat engine in Sec. 5.1, the coefficient of performance can be understood as the de-
sired output divided by the consumed resource, which in this case is the heat provided by the
hot reservoir. For the absorption refrigerator under consideration, the COP is determined only
by the qubit energy splittings, due to Eq. (293). As shown above, the second law of thermodynamics provides an upper bound on the COP, under the assumption that the device indeed
operates as a refrigerator. This bound diverges when Tr → Tc , as heat can in principle be
moved between two reservoirs of the same temperature without consuming any resources.
Goal 2: When cooling the qubit coupled to the cold reservoir, the natural figure of merit is
its occupation. It may be quantified by an effective temperature
θ = ϵc / (kB ln[(1 − 〈σ̂c† σ̂c〉)/〈σ̂c† σ̂c〉]) ⇔ 〈σ̂c† σ̂c〉 = 1/(e^{ϵc/(kB θ)} + 1). (298)
In the steady state, Eq. (291) yields
〈σ̂c† σ̂c〉 = nFc − I/κc, (299)
which depends on the quantity I encountered before. The occupation may asymptotically reach zero in the limit
g/κr → 0, κr/κc = κh/κc → ∞, ϵh/(kB Th) → 0, ϵr/(kB Tr) → ∞. (300)
These conditions are very demanding and can obviously never be reached exactly. This can
be understood as an implication of the third law of thermodynamics, which prevents cooling
a system down to the ground state.
where ρ̂S^(j) is proportional to g^j. We note that the Hamiltonian is already written as the sum of a g-independent part, Ĥ0, and the interaction Hamiltonian, which is linear in g. We then write the master equation in Eq. (285) order by order in g, starting with the zeroth order
∂t ρ̂S^(0)(t) = −i[Ĥ0, ρ̂S^(0)(t)] + Lc ρ̂S^(0)(t) + Lh ρ̂S^(0)(t) + Lr ρ̂S^(0)(t). (302)
This master equation describes three independent qubits, each coupled to its respective ther-
mal reservoir. The steady state is thus simply a product of three Gibbs states (at different
temperatures) and can be written as
where we abbreviated
Note that δn provides the population of the state |110〉 minus the population of the state
|001〉 at g = 0. The term in Eq. (305) acts like a source term in the differential equation in
Eq. (304). This suggests the ansatz
ρ̂S^(1) = x |001〉〈110| + x∗ |110〉〈001| . (307)
This ansatz is Hermitian and traceless, which is important to keep the density matrix in
Eq. (301) Hermitian and with trace one. With this ansatz, we find
[Ĥ0, ρ̂S^(1)] = (ϵr − ϵc − ϵh) x |001〉〈110| − (ϵr − ϵc − ϵh) x∗ |110〉〈001| , (308)
and
Lj ρ̂S^(1) = −(κj/2) ρ̂S^(1). (309)
These identities can be inserted into Eq. (304) to obtain
ρ̂S^(1) = −[2i g δn/(κc + κh + κr)] (|001〉〈110| − |110〉〈001|), (310)
which allows us to determine
I = 2g Im〈σ̂r† σ̂c σ̂h〉 = 2g Im〈110| ρ̂S |001〉 = 4g² δn/(κc + κh + κr). (311)
From this expression, we find that for small g , the heat currents are quadratic in g . Further-
more, we obtain cooling whenever δn > 0. One may show that this inequality is equivalent to ϵc/ϵh < η_C^COP. This perturbative calculation thus reproduces the cooling condition for all values of g.
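The equivalence between δn > 0 and the cooling condition in Eq. (296) can be checked numerically. The sketch below assumes that δn is the g = 0 population difference between |110〉 and |001〉 obtained from the product of Gibbs states, i.e., δn = nFc nFh (1 − nFr) − (1 − nFc)(1 − nFh) nFr (with kB = 1); the sampled parameter ranges are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def n_F(eps, T):
    """Qubit occupation of Eq. (287) with k_B = 1."""
    return 1.0 / (np.exp(eps / T) + 1.0)

def delta_n(eps_c, eps_h, T_c, T_r, T_h):
    """Population of |110> minus that of |001> at g = 0 (product of Gibbs states)."""
    eps_r = eps_c + eps_h  # resonance condition, Eq. (284)
    nc, nh, nr = n_F(eps_c, T_c), n_F(eps_h, T_h), n_F(eps_r, T_r)
    return nc * nh * (1 - nr) - (1 - nc) * (1 - nh) * nr

for _ in range(10000):
    T_c, T_r, T_h = np.sort(rng.uniform(0.1, 10.0, 3))  # T_c < T_r < T_h
    eps_c, eps_h = rng.uniform(0.1, 5.0, 2)
    eta_C = T_c * (T_h - T_r) / (T_h * (T_r - T_c))     # Eq. (296)
    # Cooling (delta_n > 0) if and only if eps_c/eps_h is below the Carnot COP.
    assert (delta_n(eps_c, eps_h, T_c, T_r, T_h) > 0) == (eps_c / eps_h < eta_C)
```

The equivalence follows analytically by rewriting δn > 0 as a condition on the Boltzmann factors of the three qubits.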
where
x (t ) = 〈110| ρ̂S(t ) |110〉 , y(t ) = 2Im 〈110| ρ̂S(t ) |001〉 . (313)
In Eq. (312), we dropped Ĥ0 because it commutes both with Ĥint and with the initial state, which we choose to be given by Eq. (303). Equation (312) can be cast into the coupled
differential equations
with the initial conditions x (t = 0) = δn and y(t = 0) = 0. These equations may easily be
solved, resulting in
We note that δn ≤ nFc , ensuring that the population remains positive at all times.
6 Fluctuations
In this section, we explore a key difference between macroscopic systems and small quantum systems: fluctuations. While in macroscopic systems fluctuations can generally be neglected because they are small compared to mean values (c.f. Sec. 2.2), fluctuations
do play an important role in small, nano-scale systems. The laws of thermodynamics derived
in Sec. 3 only include average values and they hold irrespective of the relevance of fluctua-
tions. In this section, we discuss new laws that describe the behavior of heat and work in
the presence of fluctuations. These new laws include fluctuation theorems [6, 96, 97] and
thermodynamic uncertainty relations [98]. While fluctuation theorems generalize the second
law of thermodynamics to fluctuating systems, thermodynamic uncertainty relations provide
a trade-off between a signal-to-noise ratio and entropy production, and they do not have any
counterpart in macroscopic systems.
This further justifies the interpretation of Wm←n as the work performed on the system during the trajectory |En^0〉 → |Em^τ〉, i.e., in a single experimental run when En^0 and Em^τ are measured. Note that we use the same brackets to denote averages over trajectories, 〈f(n, m)〉 = Σ_{n,m} p(m ← n) f(n, m), as we do for ensemble averages, 〈Ô〉 = Tr{Ô ρ̂}.
Note that these relations imply that the time-reversal operator anti-commutes with the imaginary unit: Θ̂ i = −i Θ̂. Because it is anti-unitary, some rules that we take for granted do not apply to the time-reversal operator. For instance, the cyclic invariance of the trace has to be adapted as
We note that in some works, a daggered time-reversal operator appears. Here we follow
Refs. [6,100] and avoid such a notation, using instead the inverse of the time-reversal operator
Θ̂−1 . Furthermore, we always consider the time-reversal operator (and its inverse) to act on
the right.
With the help of the time-reversal operator, we define a backward experiment. In this
experiment, the time-evolution is determined by the time-reversed Hamiltonian
H̃S(t) = Θ̂ ĤS(τ − t) Θ̂⁻¹, ŨS(t) = T e^{−i ∫₀ᵗ dt′ H̃S(t′)}. (327)
The work protocol discussed above, where time-evolution is governed by ĤS(t ) is denoted
as the forward experiment. In addition to transforming the Hamiltonian by the time-reversal
operator, the time argument is inverted in Eq. (327). Any external parameters that are changed
during the forward experiment thus have their time-dependence reversed in the backward
experiment. For instance, if an electric field is ramped up in the forward experiment, it is
ramped down in the backward experiment. A special role is played by external magnetic
fields. In the presence of such fields we have H̃S(B⃗, t) = Θ̂ ĤS(−B⃗, τ − t) Θ̂⁻¹. The reason for
this is that if we included the charges that create the magnetic field in the description, the time-
reversal operator would reverse the momenta of these charges which changes the sign of the
resulting magnetic fields. The time-evolution along the forward and backward experiments
are linked by the so-called microreversibility condition
which may be derived from Eq. (327) and the properties of the time-reversal operator. As an
initial state for the backward experiment, we consider a state that is diagonal in the basis of
the time-reversed Hamiltonian at t = 0
ρ̃S(0) = Σ_m q_m Θ̂|Em^τ〉〈Em^τ| Θ̂⁻¹, H̃S(t) = Σ_m Em^{τ−t} Θ̂|Em^{τ−t}〉〈Em^{τ−t}| Θ̂⁻¹. (329)
2. Let the system evolve for time τ according to the time-evolution in Eq. (327).
p̃(n ← m)/p(m ← n) = q_m/p_n. (331)
Due to microreversibility, the transition probabilities drop out and we are left with the ratio
of initial probabilities. This innocuous expression is actually very powerful. To see this, let us
consider thermal initial states
with Ft denoting the free energy for a thermal state at inverse temperature β and Hamiltonian
ĤS(t ), see Eq. (102). For thermal states, the probabilities reduce to
p_n = e^{−β(En^0 − F0)}, q_m = e^{−β(Em^τ − Fτ)}. (334)
Crooks fluctuation theorem: Plugging Eq. (334) into Eq. (331) results in
p̃(n ← m)/p(m ← n) = e^{−β(Wm←n − ∆F)}, (335)
Jarzynski relation: Multiplying Eq. (335) with p(m ← n) and summing over all n and m,
we find the Jarzynski relation [102]
Σ_{n,m} p̃(n ← m) = Σ_{n,m} p(m ← n) e^{−β(Wm←n − ∆F)} ⇒ 〈e^{−β Wm←n}〉 = e^{−β∆F}. (336)
This relation is known as an integral fluctuation theorem since it involves an average rather
than individual trajectories. The Jarzynski relation is remarkable because it relates an equi-
librium quantity, the difference in equilibrium free energies on the right hand side, to an out-
of-equilibrium quantity, the work performed in the forward experiment that appears on the
left-hand side. Importantly, there is no requirement of remaining in or close to equilibrium
during the experiment. Since equilibrium free energies are difficult to measure, the Jarzynski
relation has been used to determine free energy landscapes, in particular in single-molecule
pulling experiments [103]. In practice, evaluating the average on the left-hand side may be
challenging as trajectories with exponentially small probabilities may contribute [104].
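The Jarzynski relation can be verified numerically for the two-point measurement scheme. The following sketch (not from the notes; the random Hamiltonians, the random unitary, and the value of β are arbitrary choices, with ħ = kB = 1) shows that Eq. (336) holds exactly for any unitary evolution, because the transition probabilities |〈m_τ|Û|n_0〉|² sum to one.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.7
d = 4

def random_hermitian(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

H0, Htau = random_hermitian(d), random_hermitian(d)  # initial and final Hamiltonians
E0, V0 = np.linalg.eigh(H0)    # E_n^0 and eigenvectors |n_0>
Et, Vt = np.linalg.eigh(Htau)  # E_m^tau and eigenvectors |m_tau>

# A random unitary time evolution (from a QR decomposition).
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# TPM transition probabilities p(m <- n) = |<m_tau| U |n_0>|^2.
P_trans = np.abs(Vt.conj().T @ U @ V0) ** 2

Z0, Zt = np.exp(-beta * E0).sum(), np.exp(-beta * Et).sum()
p_n = np.exp(-beta * E0) / Z0        # thermal initial probabilities, Eq. (334)
W = Et[:, None] - E0[None, :]        # work W_{m<-n} = E_m^tau - E_n^0

lhs = np.sum(p_n[None, :] * P_trans * np.exp(-beta * W))  # <exp(-beta W)>
dF = -(np.log(Zt) - np.log(Z0)) / beta                    # F_tau - F_0
assert np.isclose(lhs, np.exp(-beta * dF))                # Jarzynski, Eq. (336)
```

The assertion holds for any choice of Hamiltonians and unitary, mirroring the fact that the relation requires no proximity to equilibrium during the protocol.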
Finally, we may apply Jensen’s inequality to the Jarzynski relation. Jensen’s inequality states that for a convex function, f(a x1 + (1 − a) x2) ≤ a f(x1) + (1 − a) f(x2), which implies the inequality
〈f(x)〉 ≥ f(〈x〉). (337)
Applied to Eq. (336) with the convex function e^{−βx}, this yields
e^{−β∆F} = 〈e^{−β Wm←n}〉 ≥ e^{−β〈Wm←n〉} ⇒ 〈Wm←n〉 ≥ ∆F. (338)
The final inequality states that the work performed on the system always exceeds the difference
in equilibrium free energies. This inequality is equivalent to the second law of thermodynam-
ics, when both the initial and final states are thermal states (which can always be achieved by
letting the system equilibrate after the experiment). Thus, we find that the Jarzynski relation and the Crooks fluctuation theorem generalize the second law of thermodynamics, as they
constrain not only the average value of the performed work but also its fluctuations.
The sub- and superscripts may be confusing here. The eigenvalues of Ĥα are labeled by E_j^α and the vector k⃗ has elements kα. The probabilities in Eq. (339) read
p0(i, k⃗) = pi(0) Π_α e^{−βα (E^α_{kα} − µα N^α_{kα})}/Zα. (342)
Finally, we introduced the eigenvalues and eigenstates of the reduced state of the system through
ρ̂S(t) = TrB{ρ̂tot(t)} = Σ_i pi(t) |ψi(t)〉〈ψi(t)| . (343)
1. Projectively measure all reservoir energies and particle numbers {Ĥα, N̂α} and the system in its eigenbasis |ψi(0)〉. This results in outcomes: i, k⃗.
2. Let the system evolve for time τ according to the time-evolution operator Û(τ) given in
Eq. (110).
3. Projectively measure all reservoir energies and particle numbers {Ĥα , N̂α } and the sys-
tem in its eigenbasis |ψ j (τ)〉. This results in outcomes: j, ⃗l.
Stochastic entropy: Following Ref. [105], we may use the self-information introduced in
Sec. 1.3 to define a stochastic system entropy change
Stochastic entropy production: Finally, we can introduce the stochastic entropy production
along a trajectory as
Σ(γ) = kB ∆S(γ) + Σ_α Qα(γ)/Tα, (352)
which averages to the entropy production introduced in Eq. (123), 〈Σ(γ)〉 = Σ as can be easily
shown using Eqs. (346) and (351). Importantly, while the average entropy production Σ is
always nonnegative, the stochastic entropy production Σ(γ) can become negative.
We note that in contrast to Sec. 6.1, we do not define a stochastic version of the work
associated to the time-dependence of the Hamiltonian. The reason for this is that if the initial
state does not commute with the Hamiltonian, there is no unique way to define stochastic
work that obeys all desired properties [106, 107].
where pτ ( j , ⃗l) is obtained from Eq. (342) by replacing p j (0) → p j (τ), i.e.,
pτ(j, ⃗l) = pj(τ) Π_α e^{−βα (E^α_{lα} − µα N^α_{lα})}/Zα. (354)
Note that the initial state of the backward trajectory is quite different from what we considered
in Sec. 6.1. For the system state, we consider the (time-reversed) final state of the forward
experiment. The reservoirs on the other hand are initialized in the same Gibbs states as in
the beginning of the forward experiment. This is consistent with the paradigm that we have
control over the system, but not over the environment which we only describe by constant
temperatures and chemical potentials.
This state is then evolved by the operator
Ũ(t) = T e^{−i ∫₀ᵗ dt′ Θ̂ Ĥtot(t′) Θ̂⁻¹}, (355)
2. Let the system evolve for time τ according to the time-evolution operator Ũ(τ) given in
Eq. (355).
3. Projectively measure all reservoir energies and particle numbers {Ĥα, N̂α} and the system in its eigenbasis Θ̂|ψi(0)〉. This results in outcomes: i, k⃗.
We denote the corresponding trajectory by γ̃ : Θ̂|ψj(τ), ⃗l〉 → Θ̂|ψi(0), k⃗〉. The probability to observe such a trajectory is given by, using Eq. (326),
P̃(γ̃) = pτ(j, ⃗l) |〈ψj(τ), ⃗l| Θ̂⁻¹ Ũ†(τ) Θ̂ |ψi(0), k⃗〉|² = pτ(j, ⃗l) |〈ψj(τ), ⃗l| Û(τ) |ψi(0), k⃗〉|². (357)
P̃(γ̃)/P(γ) = pτ(j, ⃗l)/p0(i, k⃗) = e^{−Σ_α Qα(γ)/(kB Tα) + ln[pj(τ)/pi(0)]} = e^{−Σ(γ)/kB}. (358)
As for the isolated system, c.f. Eq. (335), the transition probabilities drop out and the right-
hand side is determined solely by the initial probabilities. The Boltzmann factors in Eq. (342)
result in the stochastic heat, c.f. Eq. (345), and together with the initial probabilities of the system,
the exponent adds up to the stochastic entropy production, c.f. Eq. (352).
Equation (358) provides a generalization of the second law of thermodynamics to nano-
scale systems, where fluctuations matter. It is instructive to consider what happens if Eq. (358) is applied to a macroscopic process, for instance a glass shattering. In such a process, which we denote by the trajectory γ, roughly 10²³ degrees of freedom are involved (Avogadro constant), resulting in an entropy production of the order of 10²³ kB. The probability to observe the time-
reversed process γ̃ is thus exponentially suppressed by an extremely large number. It is thus
safe to assume that macroscopic processes with negative entropy production do not happen as
their probabilities are virtually zero. For a nano-scale object, the entropy production along a
trajectory may be as low as a few kB , and indeed we may observe the time-reversed trajectory.
This has interesting consequences for the thermodynamic arrow of time. For macroscopic
systems, such an arrow of time is given by the fact that entropy is produced. For nano-scale
systems, fluctuations prevent us from identifying the direction of time with certainty: if you see a video of a fluctuating system, you cannot identify with certainty whether the video was played
forward or backward. Recently, machine learning was employed to learn the arrow of time
and the algorithm rediscovered the fluctuation theorem as the underlying thermodynamic
principle [108].
From the detailed fluctuation theorem, we may derive the Crooks fluctuation theorem by
summing over all trajectories that have the same entropy production
P̃(−Σ)/P(Σ) = e^{−Σ/kB}, P(Σ) = Σ_γ δ(Σ − Σ(γ)) P(γ), P̃(Σ) = Σ_γ δ(Σ − Σ(γ)) P̃(γ). (359)
Integrating Eq. (359) over all values of Σ results in the integral fluctuation theorem
〈e^{−Σ(γ)/kB}〉 = 1, (360)
from which the second law, 〈Σ(γ)〉 ≥ 0, follows through Jensen’s inequality in complete anal-
ogy to Eqs. (336) and (338). This implies that the fluctuation theorem in Eq. (358) contains
strictly more information than the second law, and therefore can be understood as a general-
ization thereof.
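The integral fluctuation theorem can be verified exactly in a toy model: a discrete-time two-state system in contact with a single reservoir, with a per-step transition matrix obeying local detailed balance. Enumerating all trajectories reproduces 〈e^{−Σ(γ)/kB}〉 = 1 to machine precision. This is a sketch under those assumptions; the energies, rates, initial distribution, and number of steps are arbitrary choices.

```python
import itertools
import numpy as np

kB, T = 1.0, 1.0
E = np.array([0.0, 1.0])  # state energies

# Per-step transition matrix T_step[b, a] (probability a -> b) obeying local
# detailed balance: T_step[1, 0] / T_step[0, 1] = exp(-(E[1] - E[0]) / (kB * T)).
up = 0.3 * np.exp(-(E[1] - E[0]) / (kB * T))
down = 0.3
T_step = np.array([[1 - up, down], [up, 1 - down]])

p0 = np.array([0.9, 0.1])  # nonequilibrium initial distribution
n_steps = 6
p_tau = np.linalg.matrix_power(T_step, n_steps) @ p0  # final distribution

total = 0.0
for path in itertools.product([0, 1], repeat=n_steps + 1):
    prob = p0[path[0]]
    Q = 0.0  # heat delivered to the reservoir along the trajectory
    for a, b in zip(path, path[1:]):
        prob *= T_step[b, a]
        Q += E[a] - E[b]  # energy lost by the system enters the bath
    # Stochastic entropy change and entropy production, in analogy to Eq. (352).
    dS = np.log(p0[path[0]]) - np.log(p_tau[path[-1]])
    sigma = kB * dS + Q / T
    total += prob * np.exp(-sigma / kB)

assert abs(total - 1.0) < 1e-10  # <exp(-Sigma/kB)> = 1, c.f. Eq. (360)
```

The sum equals one because each term equals the probability of the corresponding backward trajectory, mirroring the derivation of Eq. (358).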
where L̃ denotes the time-evolution due to the Hamiltonian as well as all reservoirs except
the one we are interested in and the annihilation operator ĉ destroys a fermion in the system.
Now we identify which terms in the master equation change the values of n. To this end, we
write out the superoperators
D[ĉ†]ρ̂ = ĉ† ρ̂ ĉ − ½{ĉ ĉ†, ρ̂}, D[ĉ]ρ̂ = ĉ ρ̂ ĉ† − ½{ĉ† ĉ, ρ̂}. (362)
The term ĉ † ρ̂ĉ increases the number of fermions in the system by one. This fermion comes
from the reservoir, thus this term decreases n by one. Similarly, the term ĉ ρ̂ĉ † increases n by
one.
We now introduce the n-resolved density matrix ρ̂S(n) as the density matrix given that n fermions entered the reservoir. From this quantity, we may recover the full density matrix as
ρ̂S = Σ_{n=−∞}^{∞} ρ̂S(n). (363)
Having identified the terms in the master equation which change n, we may write the time-evolution of the n-resolved density matrix as
∂t ρ̂S(n) = L0 ρ̂S(n) + J+ ρ̂S(n − 1) + J− ρ̂S(n + 1), (364)
Figure 17: Illustration of full counting statistics. A quantum system can exchange
particles with a reservoir. The net number of particles that enter the reservoir during
time t is denoted by n(t ). Whenever a particle enters the reservoir, n is increased by
one. Whenever a particle leaves the reservoir (and enters the system) n is decreased
by one.
and L0 denotes the time-evolution that does not change n such that L = L0 + J+ + J−. Summing Eq. (364) over n, we thus recover the master equation in Eq. (361).
From the n-resolved density matrix, we can obtain the probability that n fermions entered
the reservoir during time t as
p(n) = Tr{ρ̂S(n)}. (366)
Often we are interested in the moments of this distribution
〈n^k〉 = Σ_{n=−∞}^{∞} n^k p(n), (367)
which can conveniently be obtained from the Fourier transform of the n-resolved density matrix,
ρ̂S(χ) = Σ_{n=−∞}^{∞} e^{inχ} ρ̂S(n), (368)
where χ is known as the counting field. Fourier transforming Eq. (364), we find
∂t ρ̂S(χ) = L0 Σ_{n=−∞}^{∞} e^{inχ} ρ̂S(n) + J+ Σ_{n=−∞}^{∞} e^{inχ} ρ̂S(n − 1) + J− Σ_{n=−∞}^{∞} e^{inχ} ρ̂S(n + 1)
= L0 ρ̂S(χ) + e^{iχ} J+ ρ̂S(χ) + e^{−iχ} J− ρ̂S(χ) ≡ L(χ) ρ̂S(χ). (369)
To find the initial condition, we note that we start counting at t = 0, implying that n = 0 at t = 0 and therefore
ρ̂S(n, t = 0) = δ_{n,0} ρ̂S(t = 0) ⇒ ρ̂S(χ, t = 0) = ρ̂S(t = 0). (370)
The trace of the χ-dependent density matrix provides the characteristic function
∞
X ∞
X
inχ
Λ(χ) ≡ e p(n) = Tr{ρ̂S(χ)}, Λ(0) = p(n) = 1. (372)
n=−∞ n=−∞
In the field of FCS, the characteristic function is often called the moment-generating function since the moments of n can be obtained by taking derivatives as
〈n^k〉 = (−i∂χ)^k Λ(χ)|_{χ=0}. (373)
Note however that in probability theory, the moment-generating function is the Laplace transform of the probability distribution. A particularly useful quantity is the cumulant-generating function, given by the logarithm of the characteristic function,
S(χ) = ln Λ(χ), (374)
which produces the cumulants 〈〈n k 〉〉 upon differentiation. The first cumulant is equal to the
mean
〈〈n〉〉 = −i∂χ S(χ)|_{χ=0} = −i [∂χ Λ(χ)/Λ(χ)]|_{χ=0} = 〈n〉. (375)
Figure 18: Quantum dot in the high bias regime. Due to a large voltage bias particles
can only enter the dot from the left reservoir and leave to the right reservoir. We are
interested in the net number of particles that enter the right reservoir.
The second cumulant provides the variance,
〈〈n²〉〉 = −∂χ² S(χ)|_{χ=0} = 〈n²〉 − 〈n〉². (376)
The third cumulant 〈〈n³〉〉 is related to the skewness of the distribution and the fourth cumulant to its kurtosis (“tailedness”).
Often, we are interested in the long-time behavior of n. In this case, it is useful to use the
spectral decomposition to write
e^{L(χ)t} = Σ_j e^{νj t} P_j, (377)
where νj denote the eigenvalues of the Liouvillean L(χ) and the projectors Pj can be obtained
from its eigenvectors. The characteristic function may then be written as
Λ(χ) = Σ_j e^{νj t} Tr{Pj ρ̂S(t = 0)} → e^{νmax t} Tr{Pmax ρ̂S(t = 0)} (t → ∞), (378)
where νmax is the χ-dependent eigenvalue of the Liouvillean with the largest real part. Simi-
larly, the cumulant generating function may be written as
S(χ) → νmax t + ln Tr{Pmax ρ̂S(t = 0)} ≃ νmax t (t → ∞). (379)
The cumulant generating function is thus fully determined by the eigenvalue of the Liouvillean
with the largest real part. Furthermore, all cumulants become linear in time, since S(χ) ∝ t .
Since we only count the particles entering the right reservoir, we may introduce the counting
field by the replacement
κRd̂ ρ̂Sd̂ † → e iχ κRd̂ ρ̂Sd̂ † . (381)
Noting that the density matrix remains diagonal, and denoting the diagonal elements by p0 (χ)
and p1 (χ), we may cast this master equation into
∂t (p0(χ), p1(χ))ᵀ = L(χ) (p0(χ), p1(χ))ᵀ, L(χ) = [[−κL, κR e^{iχ}], [κL, −κR]], (383)
where L(χ) denotes the Liouvillean (acting on a vector instead of on a matrix). Its eigenvalues
can easily be calculated and read
ν± = −(κL + κR)/2 ± (1/2)√[(κL − κR)² + 4 e^{iχ} κL κR]. (384)
The cumulant generating function in the long-time limit is thus [c.f. Eq. (379)]
S(χ)/t = ν+ = −(κL + κR)/2 + (1/2)√[(κL − κR)² + 4 e^{iχ} κL κR], (385)
resulting in the average
〈n〉 = −i∂χ S(χ)|_{χ=0} = [κL κR/(κL + κR)] t = 〈I〉 t, (386)
where 〈I〉 denotes the average particle current, see also the power given in Eq. (243). The
variance is given by
〈〈n²〉〉 = −∂χ² S(χ)|_{χ=0} = 〈n〉 (κ²L + κ²R)/(κL + κR)². (387)
Transport becomes particularly simple if one of the couplings far exceeds the other, for
instance if κL ≫ κR. In this case, the cumulant generating function reduces to
S(χ) = κR t (e^{iχ} − 1). (388)
Note that it is the smaller of the couplings that features in the cumulant generating function.
This is the case because transport at long times is dominated by bottlenecks. In this case,
the bottleneck is the smaller coupling to the right reservoir. The cumulants obtained from
Eq. (388) are all equal and read 〈〈n k 〉〉 = κR t . This is the hallmark of Poissonian transport,
where each particle is independent of the others. Indeed, we may write the characteristic
function as
Λ(χ) = exp[κR t (e^{iχ} − 1)] = e^{−κR t} Σ_{n=0}^{∞} [(κR t)^n/n!] e^{inχ}, (389)
from which we may read off, with the help of Eq. (372),
p(n) = e^{−κR t} (κR t)^n/n!, (390)
which is indeed a Poisson distribution.
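The analytic results in Eqs. (385)–(387) can be cross-checked by diagonalizing L(χ) numerically and differentiating the cumulant generating function by finite differences; a minimal sketch (the rate values are arbitrary choices):

```python
import numpy as np

kL, kR = 2.0, 0.5  # couplings to the left and right reservoirs

def nu_max(chi):
    """Eigenvalue of L(chi) in Eq. (383) with the largest real part."""
    L = np.array([[-kL, kR * np.exp(1j * chi)],
                  [kL, -kR]])
    ev = np.linalg.eigvals(L)
    return ev[np.argmax(ev.real)]

# Cumulants from numerical derivatives of S(chi)/t = nu_max(chi), c.f. Eq. (379).
h = 1e-4
first = (-1j) * (nu_max(h) - nu_max(-h)) / (2 * h)                 # <n>/t
second = (-1j) ** 2 * (nu_max(h) - 2 * nu_max(0) + nu_max(-h)) / h ** 2  # <<n^2>>/t

mean_rate = kL * kR / (kL + kR)                 # Eq. (386)
fano = (kL ** 2 + kR ** 2) / (kL + kR) ** 2     # Fano factor from Eq. (387)

assert abs(first.real - mean_rate) < 1e-4
assert abs(second.real / first.real - fano) < 1e-3
```

Pushing κL ≫ κR in this sketch drives the Fano factor toward one, recovering the Poissonian bottleneck limit of Eq. (388).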
where the Hamiltonian may include a Lamb shift. The jump operators obey
where the thermodynamic Hamiltonian is introduced in Sec. 4.4.2. From these relations, we may conclude that a jump Ŝα,k ρ̂S Ŝ†α,k reduces the particle number in the system by nα,k and the internal energy by ωα,k.
We are interested in the joint distribution of the heat and work exchanged with the different
reservoirs P({Qα , Wα }). This distribution can be obtained from a characteristic function
Λ({χα, λα}) = Π_α ∫ dQα e^{iQα χα} ∫ dWα e^{iWα λα} P({Qα, Wα}) = Tr{ρ̂S({χα, λα})}. (393)
The counting field dependent density matrix can again be obtained by a modified master equa-
tion
∂ t ρ̂S({χα , λα }) = L({χα , λα })ρ̂S({χα , λα }), (394)
where the counting field dependent Liouvillean is obtained from the Liouvillean L in Eq. (391)
by the replacement
Ŝα,k ρ̂S Ŝ†α,k → e^{iχα (ωα,k − µα nα,k)} e^{iλα µα nα,k} Ŝα,k ρ̂S Ŝ†α,k. (395)
In contrast to Eq. (369), the counting fields are weighted by the heat and work associated to
each jump. While Eq. (395) provides a simple recipe, we note that we can rigorously obtain
L({χα , λα }) by introducing counting fields on the unitary evolution with Ĥtot and subsequently
tracing out the environment using Born-Markov approximations [64].
We also note that the fluctuation theorem in Eq. (358) may be derived from the master
equation using the method of FCS. A trajectory γ is then determined by outcomes of initial
and final measurements on the system, as well as the times and types of all jumps that occur
[18, 97].
The thermodynamic uncertainty relation (TUR) reads
〈〈I²〉〉/〈I〉² ≥ 2kB/Σ̇. (396)
Here 〈〈I 2 〉〉 and 〈I〉 denote the steady-state current fluctuations and average current respec-
tively (see below), and Σ̇ denotes the entropy production rate.
The TUR can be understood as an upper bound on the signal-to-noise ratio 〈I〉²/〈〈I²〉〉, set by the dissipation as expressed through the entropy production rate: to achieve a high signal-to-noise ratio, a sufficient amount of entropy must be produced. Equivalently, the TUR
implies that it is not possible to simultaneously achieve a high current, low fluctuations, and
low dissipation.
In contrast to fluctuation theorems, the TUR has a smaller range of validity. It applies
to classical, Markovian systems. It may therefore be violated in quantum-coherent systems,
see for instance Refs. [70, 111–114]. Markovian quantum systems thus have the potential
to achieve higher signal-to-noise ratios for a given amount of dissipation than their classical
counterparts. We note however that the TUR may also be violated in classical, non-Markovian
systems [115].
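As a minimal illustration (a classical model not treated in these notes), consider a biased random walk with forward rate k+ and backward rate k−, for which FCS gives S(χ)/t = k+(e^{iχ} − 1) + k−(e^{−iχ} − 1), so that 〈I〉 = k+ − k−, 〈〈I²〉〉 = k+ + k−, and local detailed balance implies Σ̇ = kB(k+ − k−) ln(k+/k−). The TUR then reduces to (k+ + k−) ln(k+/k−)/(k+ − k−) ≥ 2, which the sketch below checks over random rates:

```python
import numpy as np

rng = np.random.default_rng(2)
kB = 1.0

for _ in range(10000):
    kp, km = rng.uniform(0.01, 10.0, 2)  # forward and backward jump rates
    mean_I = kp - km                      # average current
    noise_I = kp + km                     # zero-frequency current noise <<I^2>>
    entropy_rate = kB * (kp - km) * np.log(kp / km)  # always nonnegative
    # TUR, Eq. (396), in dimensionless form: <<I^2>> * Sigma_dot / (kB <I>^2) >= 2.
    assert noise_I * entropy_rate / (kB * mean_I ** 2) >= 2 * (1 - 1e-9)
```

Equality is approached as k+ → k−, i.e., in the linear-response regime, consistent with the trade-off between current, noise, and dissipation discussed above.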
Since n typically increases or decreases by one when a particle is exchanged, the stochastic current I(t) consists of a series of Dirac deltas at these times. With this definition, one may show
〈n〉 = ∫₀ᵗ dt′ 〈I(t′)〉, lim_{t→∞} 〈〈n²〉〉 = t ∫_{−∞}^{∞} dτ 〈δI(τ) δI(0)〉 = t SδI, (399)
where δI(t ) = I(t ) − 〈I(t )〉 denotes the deviation of the current from its mean and SδI is
known as the zero-frequency power spectrum. More information on noise and fluctuations
can be found for instance in Refs. [18, 116].
which relate the power and its fluctuations to the particle current. The TUR thus implies that
the well-known trade-off between high power and an efficiency close to the Carnot efficiency is
actually a trade-off between three quantities, including the power fluctuations. This sheds light
onto previous works that found finite power at Carnot efficiency close to a phase transition,
where fluctuations diverge [118, 119].
Interesting additional applications of the TUR include the estimation of efficiencies of
molecular motors [120], as well as investigating the precision of biological clocks [121].
7 Summary
These lecture notes provide an introduction to the vast and growing field of quantum thermo-
dynamics. In particular, the following key questions were addressed:
How can we model open quantum systems? In Sec. 4 we derived Markovian master equa-
tions that provide a powerful tool to produce approximations for observables in the general
scenario. We discussed how these equations can be derived in a thermodynamically consistent
way and highlighted their limitations.
What can we do with open quantum systems? Section 5 introduced three different quan-
tum thermal machines, a heat engine, an entanglement generator, and a refrigerator. These
devices illustrate how open quantum systems coupled to multiple thermal reservoirs can be
used to perform useful tasks.
How do small systems differ from large ones? In Sec. 6, we focused on a key difference
between microscopic and macroscopic systems: fluctuations. We illustrated how they become
relevant and how they result in new thermodynamic rules including fluctuation theorems,
which generalize the second law of thermodynamics, and the thermodynamic uncertainty re-
lation.
8 Outlook
While a number of topics in the field of quantum thermodynamics are addressed in these lecture notes, there is much more that has not been touched upon. Here I provide a subjective selection
of exciting research avenues that build on the concepts discussed in these notes.
engine in Sec. 5.1 exploits a temperature gradient to overcome a voltage bias. In contrast to a
heat engine, which performs one useful task, hybrid thermal machines perform multiple useful
tasks at the same time [154]. For instance, a three-terminal device can be used to produce
work and cool down the coldest reservoir at the same time.
Of particular interest is the study of thermodynamics when the conserved quantities no
longer commute, as is the case for instance with the different components of angular momen-
tum. In this case, established concepts in thermodynamics are challenged and many fascinat-
ing questions are raised [155].
Acknowledgements
These lecture notes were created for a Master course at the University of Basel where they
are accompanied by exercise sheets (available upon request). They are based on a previous
set of notes prepared for an 8-hour course at Lund University. I thank all participants of these
courses whose feedback has been crucial for the development of these notes. I also thank Peter
Samuelsson for providing the opportunity to develop my own course at Lund University. Many
thanks also to the teaching assistants in Basel including Marcelo Janovitch, Matteo Brunelli,
Kacper Prech, and Aaron Daniel. Special thanks to Aaron Daniel for illustrating Figs. 1, 2, 3, 5,
6, 7, 15, and 17, and to Marcelo Janovitch for illustrating Fig. 4. Special thanks also to Aaron
Daniel and Nicolas Hunziker for pointing out the appearance of the golden ratio in Sec. 5.2.
I acknowledge funding from the Swiss National Science Foundation (Eccellenza Professorial
Fellowship PCEFP2_194268).
References
[1] P. Weinberger, The discovery of thermodynamics, Philos. Mag. 93(20), 2576 (2013),
doi:10.1080/14786435.2013.784402.
[2] A. Auffèves, Quantum technologies need a quantum energy initiative, PRX Quantum 3,
020101 (2022), doi:10.1103/PRXQuantum.3.020101.
[3] J. P. Pekola, Towards quantum thermodynamics in electronic circuits, Nat. Phys. 11(2),
118 (2015), doi:10.1038/nphys3169.
[7] S. Deffner and S. Campbell, Quantum Thermodynamics, Morgan & Claypool Publishers,
ISBN 978-1-64327-658-8, doi:10.1088/2053-2571/ab21c6 (2019).
[12] J. Goold, M. Huber, A. Riera, L. del Rio and P. Skrzypczyk, The role of quantum infor-
mation in thermodynamics – a topical review, J. Phys. A: Math. Theor. 49(14), 143001
(2016), doi:10.1088/1751-8113/49/14/143001.
[14] M. T. Mitchison, Quantum thermal absorption machines: refrigerators, engines and clocks,
Contemp. Phys. (2019), doi:10.1080/00107514.2019.1631555.
[15] J. Anders and M. Esposito, Focus on quantum thermodynamics, New J. Phys. 19(1),
010201 (2017), doi:10.1088/1367-2630/19/1/010201.
[16] R. Shankar, Principles of Quantum Mechanics, Springer New York, NY, 2nd edn.,
doi:10.1007/978-1-4757-0576-8 (1994).
[17] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th
Anniversary Edition, Cambridge University Press, doi:10.1017/CBO9780511976667
(2010).
[19] Y. Guryanova, N. Friis and M. Huber, Ideal Projective Measurements Have Infinite Resource
Costs, Quantum 4, 222 (2020), doi:10.22331/q-2020-01-13-222.
[21] H. Bruus and K. Flensberg, Many-Body Quantum Theory in Condensed Matter Physics:
An Introduction, Oxford University Press (2004).
[24] R. Clausius, Ueber verschiedene für die Anwendung bequeme Formen der Hauptglei-
chungen der mechanischen Wärmetheorie, Annalen der Physik 201(7), 353 (1865),
doi:10.1002/andp.18652010702.
[25] C. E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 27(3), 379
(1948), doi:10.1002/j.1538-7305.1948.tb01338.x.
[26] T. M. Cover and J. A. Thomas, Elements of information theory, John Wiley & Sons, Inc.,
Hoboken, New Jersey, 2nd edn., doi:10.1002/047174882X (2006).
[27] S. Kullback and R. A. Leibler, On information and sufficiency, Ann. Math. Stat. 22, 79
(1951).
[28] V. Vedral, The role of relative entropy in quantum information theory, Rev. Mod. Phys.
74, 197 (2002), doi:10.1103/RevModPhys.74.197.
[29] H. Araki and E. H. Lieb, Entropy inequalities, Commun. Math. Phys. 18(2), 160 (1970),
doi:10.1007/BF01646092.
[30] L. D. Landau and E. M. Lifshitz, Course of Theoretical Physics, Vol. 5: Statistical Physics,
Part 1, Pergamon Press, Oxford, doi:10.1016/C2009-0-24487-4 (1959).
[31] F. Schwabl, Statistical Mechanics, Springer Berlin, Heidelberg, 2nd edn., doi:10.1007/3-
540-36217-7 (2006).
[32] E. T. Jaynes, Information theory and statistical mechanics, Phys. Rev. 106, 620 (1957),
doi:10.1103/PhysRev.106.620.
[33] E. T. Jaynes, Information theory and statistical mechanics. II, Phys. Rev. 108, 171 (1957),
doi:10.1103/PhysRev.108.171.
[35] G. B. Arfken, H. J. Weber and F. E. Harris, Mathematical Methods for Physicists, Elsevier
Inc., 7th edn., doi:10.1016/C2009-0-30629-7 (2012).
[36] W. Pusz and S. L. Woronowicz, Passive states and KMS states for general quantum systems,
Commun. Math. Phys. 58(3), 273 (1978), doi:10.1007/BF01614224.
[38] I. M. Bassett, Alternative derivation of the classical second law of thermodynamics, Phys.
Rev. A 18, 2356 (1978), doi:10.1103/PhysRevA.18.2356.
[39] P. Skrzypczyk, R. Silva and N. Brunner, Passivity, complete passivity, and virtual temper-
atures, Phys. Rev. E 91, 052133 (2015), doi:10.1103/PhysRevE.91.052133.
[43] P. Strasberg and A. Winter, First and second law of quantum thermodynamics: A consistent
derivation based on a microscopic definition of entropy, PRX Quantum 2, 030202 (2021),
doi:10.1103/PRXQuantum.2.030202.
[44] P. Strasberg and M. Esposito, Stochastic thermodynamics in the strong coupling regime:
An unambiguous approach based on coarse graining, Phys. Rev. E 95, 062101 (2017),
doi:10.1103/PhysRevE.95.062101.
[45] A. Rivas, Strong coupling thermodynamics of open quantum systems, Phys. Rev. Lett.
124, 160601 (2020), doi:10.1103/PhysRevLett.124.160601.
[48] K. Ptaszyński and M. Esposito, Entropy production in open systems: The predom-
inant role of intraenvironment correlations, Phys. Rev. Lett. 123, 200603 (2019),
doi:10.1103/PhysRevLett.123.200603.
[51] L. Masanes and J. Oppenheim, A general derivation and quantification of the third law
of thermodynamics, Nat. Commun. 8, 14538 (2017), doi:10.1038/ncomms14538.
[52] H.-P. Breuer and F. Petruccione, The theory of open quantum systems, Oxford University
Press, doi:10.1093/acprof:oso/9780199213900.001.0001 (2002).
[54] G. Schaller, Open Quantum Systems Far from Equilibrium, Springer, doi:10.1007/978-
3-319-03877-3 (2014).
[56] F. Campaioli, J. H. Cole and H. Hapuarachchi, Quantum master equations: Tips and
tricks for quantum optics, quantum computing, and beyond, PRX Quantum 5, 020202
(2024), doi:10.1103/PRXQuantum.5.020202.
[58] G. Lindblad, On the generators of quantum dynamical semigroups, Commun. Math. Phys.
48(2), 119 (1976), doi:10.1007/BF01608499.
[59] S. Nakajima, On quantum theory of transport phenomena: Steady diffusion, Prog. Theor.
Phys. 20(6), 948 (1958), doi:10.1143/PTP.20.948.
[60] R. Zwanzig, Ensemble method in the theory of irreversibility, J. Chem. Phys. 33(5), 1338
(1960), doi:10.1063/1.1731409.
[62] E. B. Davies, Markovian master equations, Commun. Math. Phys. 39(2), 91 (1974),
doi:10.1007/BF01608389.
[63] E. B. Davies, Markovian master equations. II, Math. Ann. 219(2), 147 (1976),
doi:10.1007/BF01351898.
[66] M. T. Mitchison and M. B. Plenio, Non-additive dissipation in open quantum networks out
of equilibrium, New J. Phys. 20(3), 033005 (2018), doi:10.1088/1367-2630/aa9f70.
[68] S. Seah, S. Nimmrichter and V. Scarani, Refrigeration beyond weak internal coupling,
Phys. Rev. E 98, 012131 (2018), doi:10.1103/PhysRevE.98.012131.
[72] R. Alicki, The quantum open system as a model of the heat engine, J. Phys. A: Math. Gen.
12(5), L103 (1979), doi:10.1088/0305-4470/12/5/007.
[74] M. Gerry and D. Segal, Full counting statistics and coherences: Fluctuation symmetry
in heat transport with the unified quantum master equation, Phys. Rev. E 107, 054115
(2023), doi:10.1103/PhysRevE.107.054115.
[75] A. Levy and R. Kosloff, The local approach to quantum transport may violate the
second law of thermodynamics, EPL 107(2), 20004 (2014), doi:10.1209/0295-
5075/107/20004.
[79] N. Brunner, N. Linden, S. Popescu and P. Skrzypczyk, Virtual qubits, virtual tem-
peratures, and the foundations of thermodynamics, Phys. Rev. E 85, 051117 (2012),
doi:10.1103/PhysRevE.85.051117.
[80] J. B. Brask, G. Haack, N. Brunner and M. Huber, Autonomous quantum thermal ma-
chine for generating steady-state entanglement, New J. Phys. 17(11), 113029 (2015),
doi:10.1088/1367-2630/17/11/113029.
[82] S. Hill and W. K. Wootters, Entanglement of a pair of quantum bits, Phys. Rev. Lett. 78,
5022 (1997), doi:10.1103/PhysRevLett.78.5022.
[83] W. K. Wootters, Entanglement of formation of an arbitrary state of two qubits, Phys. Rev.
Lett. 80, 2245 (1998), doi:10.1103/PhysRevLett.80.2245.
[85] G. C. Wick, The evaluation of the collision matrix, Phys. Rev. 80, 268 (1950),
doi:10.1103/PhysRev.80.268.
[87] V. Coffman, J. Kundu and W. K. Wootters, Distributed entanglement, Phys. Rev. A 61,
052306 (2000), doi:10.1103/PhysRevA.61.052306.
[88] S. Khandelwal, N. Palazzo, N. Brunner and G. Haack, Critical heat current for operat-
ing an entanglement engine, New J. Phys. 22(7), 073039 (2020), doi:10.1088/1367-
2630/ab9983.
[89] J. Bohr Brask, F. Clivaz, G. Haack and A. Tavakoli, Operational nonclassicality in minimal
autonomous thermal machines, Quantum 6, 672 (2022), doi:10.22331/q-2022-03-22-
672.
[90] N. Linden, S. Popescu and P. Skrzypczyk, How small can thermal machines
be? the smallest possible refrigerator, Phys. Rev. Lett. 105, 130401 (2010),
doi:10.1103/PhysRevLett.105.130401.
[99] P. Talkner, E. Lutz and P. Hänggi, Fluctuation theorems: Work is not an observable, Phys.
Rev. E 75, 050102 (2007), doi:10.1103/PhysRevE.75.050102.
[101] G. E. Crooks, Path-ensemble averages in systems driven far from equilibrium, Phys. Rev.
E 61, 2361 (2000), doi:10.1103/PhysRevE.61.2361.
[102] C. Jarzynski, Nonequilibrium equality for free energy differences, Phys. Rev. Lett. 78,
2690 (1997), doi:10.1103/PhysRevLett.78.2690.
[103] G. Hummer and A. Szabo, Free energy profiles from single-molecule pulling experiments,
Proc. Natl. Acad. Sci. U.S.A. 107, 21441 (2010), doi:10.1073/pnas.1015661107.
[104] C. Jarzynski, Rare events and the convergence of exponentially averaged work values,
Phys. Rev. E 73, 046105 (2006), doi:10.1103/PhysRevE.73.046105.
[105] U. Seifert, Entropy production along a stochastic trajectory and an integral fluctuation
theorem, Phys. Rev. Lett. 95, 040602 (2005), doi:10.1103/PhysRevLett.95.040602.
[107] K. V. Hovhannisyan and A. Imparato, Energy conservation and fluctuation theorem are
incompatible for quantum work, Quantum 8, 1336 (2024), doi:10.22331/q-2024-05-
06-1336.
[108] A. Seif, M. Hafezi and C. Jarzynski, Machine learning the thermodynamic arrow of time,
Nat. Phys. 17(1), 105 (2021), doi:10.1038/s41567-020-1018-2.
[115] P. Pietzonka, Classical pendulum clocks break the thermodynamic uncertainty relation,
Phys. Rev. Lett. 128, 130606 (2022), doi:10.1103/PhysRevLett.128.130606.
[117] P. Pietzonka and U. Seifert, Universal trade-off between power, efficiency, and
constancy in steady-state heat engines, Phys. Rev. Lett. 120, 190602 (2018),
doi:10.1103/PhysRevLett.120.190602.
[119] M. Campisi and R. Fazio, The power of a critical heat engine, Nat. Commun. 7, 11895
(2016), doi:10.1038/ncomms11895.
[120] P. Pietzonka, A. C. Barato and U. Seifert, Universal bound on the efficiency of molecular
motors, J. Stat. Mech.: Theor. Exp. 2016(12), 124004 (2016), doi:10.1088/1742-
5468/2016/12/124004.
[123] V. Serreli, C.-F. Lee, E. R. Kay and D. A. Leigh, A molecular information ratchet, Nature
445(7127), 523 (2007), doi:10.1038/nature05452.
[128] K. Maruyama, F. Nori and V. Vedral, Colloquium: The physics of Maxwell’s demon and
information, Rev. Mod. Phys. 81, 1 (2009), doi:10.1103/RevModPhys.81.1.
[131] T. Sagawa and M. Ueda, Generalized Jarzynski equality under nonequilibrium feedback
control, Phys. Rev. Lett. 104, 090602 (2010), doi:10.1103/PhysRevLett.104.090602.
[132] T. Sagawa and M. Ueda, Nonequilibrium thermodynamics of feedback control, Phys. Rev.
E 85, 021104 (2012), doi:10.1103/PhysRevE.85.021104.
[134] T. Yada, N. Yoshioka and T. Sagawa, Quantum fluctuation theorem under quantum
jumps with continuous measurement and feedback, Phys. Rev. Lett. 128, 170601 (2022),
doi:10.1103/PhysRevLett.128.170601.
[135] K. Prech and P. P. Potts, Quantum fluctuation theorem for arbitrary measurement and
feedback schemes, http://arxiv.org/abs/2306.12281.
[136] I. D. Terlizzi and M. Baiesi, Kinetic uncertainty relation, J. Phys. A: Math. Theor. 52(2),
02LT03 (2018), doi:10.1088/1751-8121/aaee34.
[140] Y. Hasegawa, Thermodynamic uncertainty relation for general open quantum systems,
Phys. Rev. Lett. 126, 010602 (2021), doi:10.1103/PhysRevLett.126.010602.
[142] T. Van Vu and K. Saito, Thermodynamics of precision in Markovian open quantum dy-
namics, Phys. Rev. Lett. 128, 140602 (2022), doi:10.1103/PhysRevLett.128.140602.
[143] B. Karimi, F. Brange, P. Samuelsson and J. P. Pekola, Reaching the ultimate energy reso-
lution of a quantum detector, Nat. Commun. 11(1), 367 (2020), doi:10.1038/s41467-
019-14247-2.
[147] A. Riera-Campeny, A. Sanpera and P. Strasberg, Quantum systems correlated with a finite
bath: Nonequilibrium dynamics and thermodynamics, PRX Quantum 2, 010340 (2021),
doi:10.1103/PRXQuantum.2.010340.