Theoretical Molecular Biophysics


biological and medical physics, biomedical engineering

For further volumes:
http://www.springer.com/series/3740
The fields of biological and medical physics and biomedical engineering are broad, multidisciplinary and
dynamic. They lie at the crossroads of frontier research in physics, biology, chemistry, and medicine. The
Biological and Medical Physics, Biomedical Engineering Series is intended to be comprehensive, covering a
broad range of topics important to the study of the physical, chemical and biological sciences. Its goal is to
provide scientists and engineers with textbooks, monographs, and reference works to address the growing
need for information.
Books in the series emphasize established and emergent areas of science including molecular, membrane,
and mathematical biophysics; photosynthetic energy harvesting and conversion; information processing;
physical principles of genetics; sensory communications; automata networks, neural networks, and cellu-
lar automata. Equally important will be coverage of applied aspects of biological and medical physics and
biomedical engineering such as molecular electronic components and devices, biosensors, medicine, imag-
ing, physical principles of renewable energy production, advanced prostheses, and environmental control and
engineering.
Editor-in-Chief:
Elias Greenbaum, Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA
Editorial Board:
Masuo Aizawa, Department of Bioengineering, Tokyo Institute of Technology, Yokohama, Japan
Olaf S. Andersen, Department of Physiology, Biophysics & Molecular Medicine, Cornell University, New York, USA
Robert H. Austin, Department of Physics, Princeton University, Princeton, New Jersey, USA
James Barber, Department of Biochemistry, Imperial College of Science, Technology and Medicine, London, England
Howard C. Berg, Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts, USA
Victor Bloomfield, Department of Biochemistry, University of Minnesota, St. Paul, Minnesota, USA
Robert Callender, Department of Biochemistry, Albert Einstein College of Medicine, Bronx, New York, USA
Britton Chance, Department of Biochemistry/Biophysics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
Steven Chu, Lawrence Berkeley National Laboratory, Berkeley, California, USA
Louis J. DeFelice, Department of Pharmacology, Vanderbilt University, Nashville, Tennessee, USA
Johann Deisenhofer, Howard Hughes Medical Institute, The University of Texas, Dallas, Texas, USA
George Feher, Department of Physics, University of California, San Diego, La Jolla, California, USA
Hans Frauenfelder, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
Ivar Giaever, Rensselaer Polytechnic Institute, Troy, New York, USA
Sol M. Gruner, Cornell University, Ithaca, New York, USA
Judith Herzfeld, Department of Chemistry, Brandeis University, Waltham, Massachusetts, USA
Mark S. Humayun, Doheny Eye Institute, Los Angeles, California, USA
Pierre Joliot, Institute de Biologie Physico-Chimique, Fondation Edmond de Rothschild, Paris, France
Lajos Keszthelyi, Institute of Biophysics, Hungarian Academy of Sciences, Szeged, Hungary
Robert S. Knox, Department of Physics and Astronomy, University of Rochester, Rochester, New York, USA
Aaron Lewis, Department of Applied Physics, Hebrew University, Jerusalem, Israel
Stuart M. Lindsay, Department of Physics and Astronomy, Arizona State University, Tempe, Arizona, USA
David Mauzerall, Rockefeller University, New York, New York, USA
Eugenie V. Mielczarek, Department of Physics and Astronomy, George Mason University, Fairfax, Virginia, USA
Markolf Niemz, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
V. Adrian Parsegian, Physical Science Laboratory, National Institutes of Health, Bethesda, Maryland, USA
Linda S. Powers, University of Arizona, Tucson, Arizona, USA
Earl W. Prohofsky, Department of Physics, Purdue University, West Lafayette, Indiana, USA
Andrew Rubin, Department of Biophysics, Moscow State University, Moscow, Russia
Michael Seibert, National Renewable Energy Laboratory, Golden, Colorado, USA
David Thomas, Department of Biochemistry, University of Minnesota Medical School, Minneapolis, Minnesota, USA
P.O.J. Scherer
Sighart F. Fischer

Theoretical
Molecular Biophysics

With 178 Figures

Professor Dr. Philipp Scherer
Professor Dr. Sighart F. Fischer
Technical University München, Physics Department T38
85748 München, Germany
E-mail: philipp.scherer@ph.tum.de, sighart.fischer@ph.tum.de

Biological and Medical Physics, Biomedical Engineering ISSN 1618-7210


ISBN 978-3-540-85609-2 e-ISBN 978-3-540-85610-8
DOI 10.1007/978-3-540-85610-8
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2010930050

© Springer-Verlag Berlin Heidelberg 2010


This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or
parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in
its current version, and permission for use must always be obtained from Springer. Violations are liable to
prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant protective laws and
regulations and therefore free for general use.

Cover design: eStudio Calamar Steinen

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Preface

Biophysics deals with biological systems, such as proteins, which fulfill a vari-
ety of functions in establishing living systems. While the biologist uses mostly
a phenomenological description, the physicist tries to find the general con-
cepts to classify the materials and dynamics which underlie specific processes.
The phenomena span a wide range, from elementary processes, which can
be induced by light excitation of a molecule, to communication of living sys-
tems. Thus, different methods are appropriate to describe these phenomena.
From the point of view of the physicist, this may be Continuum Mechanics to
deal with membranes, Hydrodynamics to deal with transport through vessels,
Bioinformatics to describe evolution, Electrostatics to deal with aspects of
binding, Statistical Mechanics to account for temperature and to learn about
the role of the entropy, and last but not least Quantum Mechanics to under-
stand the electronic structure of the molecular systems involved. As can be
seen from the title, Molecular Biophysics, this book will focus on systems
for which sufficient information on the molecular level is available. Compared
to crystallized standard materials studied in solid-state physics, the biological
systems are characterized by very big unit cells containing proteins with thou-
sands of atoms. In addition, there is always a certain amount of disorder, so
that the systems can be classified as complex. Surprisingly, functions like
a photocycle or the folding of a protein are highly reproducible, indicating a
paradoxical situation in relation to the concept of maximum entropy production.
It may seem that a proper selection in view of the large diversity of phe-
nomena is difficult, but exactly this is also the challenge taken up within this
book. We try to provide basic concepts, applicable to biological systems or
soft matter in general. These include entropic forces, phase separation, coop-
erativity and transport in complex systems, like molecular motors. We also
provide a detailed description for the understanding of elementary processes
like electron, proton, and energy transfer and show how nature is making use
of them, for instance, in photosynthesis. Prerequisites for the reader are a basic
understanding of Mechanics, Electrostatics, Quantum Mechanics, and Statistics.
The book is therefore aimed at graduate students who want to specialize in the
field of Biophysics. As we try to derive all equations in detail, it may also be
useful to physicists or chemists who are interested in applications of Statistical
Mechanics or Quantum Chemistry to biological systems. This book is the outcome
of a course presented by the authors as a basic element of the newly established
graduate program "Biophysics" in the Physics Department of the Technische
Universität München.
The authors thank Dr. Florian Dufey and Dr. Robert Raupp-Kossmann for
their contributions during the early stages of the evolving manuscript.

Garching, May 2010
Philipp Scherer
Sighart Fischer
Contents

Part I Statistical Mechanics of Biopolymers

1 Random Walk Models for the Conformation . . . . . . . . . . . . . . . 3


1.1 The Freely Jointed Chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 Entropic Elasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Force–Extension Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Two-Component Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.1 Force–Extension Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.2 Two-Component Model with Interactions . . . . . . . . . . . . . 11
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2 Flory–Huggins Theory for Biopolymer Solutions . . . . . . . . . . . 19


2.1 Monomeric Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Polymeric Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3 Phase Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.1 Stability Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.2 Critical Coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.3.3 Phase Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

Part II Protein Electrostatics and Solvation

3 Implicit Continuum Solvent Models . . . . . . . . . . . . . . . . . . . . . . . 37


3.1 Potential of Mean Force . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2 Dielectric Continuum Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3 Born Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4 Charges in a Protein . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5 Generalized Born Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

4 Debye–Hückel Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1 Electrostatic Shielding by Mobile Charges . . . . . . . . . . . . . . . . . . 45
4.2 1–1 Electrolytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3 Charged Sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.4 Charged Cylinder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.5 Charged Membrane (Gouy–Chapman Double Layer) . . . . . . . . . 52
4.6 Stern Modification of the Double Layer . . . . . . . . . . . . . . . . . . . . . 57
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

5 Protonation Equilibria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.1 Protonation Equilibria in Solution . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.2 Protonation Equilibria in Proteins . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.2.1 Apparent pKa Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.2.2 Protonation Enthalpy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.2.3 Protonation Enthalpy Relative to the Uncharged State . 68
5.2.4 Statistical Mechanics of Protonation . . . . . . . . . . . . . . . . . 69
5.3 Abnormal Titration Curves of Coupled Residues . . . . . . . . . . . . 70
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

Part III Reaction Kinetics

6 Formal Kinetics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
6.1 Elementary Chemical Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
6.2 Reaction Variable and Reaction Rate . . . . . . . . . . . . . . . . . . . . . . 75
6.3 Reaction Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.3.1 Zero-Order Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.3.2 First-Order Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.3.3 Second-Order Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.4 Dynamical Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.5 Competing Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.6 Consecutive Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.7 Enzymatic Catalysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6.8 Reactions in Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.8.1 Diffusion-Controlled Limit . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.8.2 Reaction-Controlled Limit . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

7 Kinetic Theory: Fokker–Planck Equation . . . . . . . . . . . . . . . . . . 87


7.1 Stochastic Differential Equation for Brownian Motion . . . . . . . . 87
7.2 Probability Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
7.3 Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.3.1 Sharp Initial Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
7.3.2 Absorbing Boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
7.4 Fokker–Planck Equation for Brownian Motion . . . . . . . . . . . . . . . 93

7.5 Stationary Solution to the Fokker–Planck Equation . . . . . . . . . . 94


7.6 Diffusion in an External Potential . . . . . . . . . . . . . . . . . . . . . . . . . 96
7.7 Large Friction Limit: Smoluchowski Equation . . . . . . . . . . . . . . . 98
7.8 Master Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

8 Kramers’ Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101


8.1 Kramers’ Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
8.2 Kramers’ Calculation of the Reaction Rate . . . . . . . . . . . . . . . . . 102

9 Dispersive Kinetics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107


9.1 Dichotomous Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
9.1.1 Fast Solvent Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . 110
9.1.2 Slow Solvent Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . 111
9.1.3 Numerical Example (Fig. 9.3) . . . . . . . . . . . . . . . . . . . . . . . 112
9.2 Continuous Time Random Walk Processes . . . . . . . . . . . . . . . . . . 112
9.2.1 Formulation of the Model . . . . . . . . . . . . . . . . . . . . . . . . . . 112
9.2.2 Exponential Waiting Time Distribution . . . . . . . . . . . . . . 114
9.2.3 Coupled Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
9.3 Power Time Law Kinetics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

Part IV Transport Processes

10 Nonequilibrium Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 125


10.1 Continuity Equation for the Mass Density . . . . . . . . . . . . . . . . . . 125
10.2 Energy Conservation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
10.3 Entropy Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
10.4 Phenomenological Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
10.5 Stationary States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

11 Simple Transport Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133


11.1 Heat Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
11.2 Diffusion in an External Electric Field . . . . . . . . . . . . . . . . . . . . . 134
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

12 Ion Transport Through a Membrane . . . . . . . . . . . . . . . . . . . . . . . 139


12.1 Diffusive Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
12.2 Goldman–Hodgkin–Katz Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
12.3 Hodgkin–Huxley Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

13 Reaction–Diffusion Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147


13.1 Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
13.2 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
13.3 Fitzhugh–Nagumo Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

Part V Reaction Rate Theory

14 Equilibrium Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155


14.1 Arrhenius Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
14.2 Statistical Interpretation of the Equilibrium Constant . . . . . . . . 157

15 Calculation of Reaction Rates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159


15.1 Collision Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
15.2 Transition State Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
15.3 Comparison Between Collision Theory and Transition State
Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
15.4 Thermodynamical Formulation of TST . . . . . . . . . . . . . . . . . . . . . 165
15.5 Kinetic Isotope Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
15.6 General Rate Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
15.6.1 The Flux Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

16 Marcus Theory of Electron Transfer . . . . . . . . . . . . . . . . . . . . . . . 173


16.1 Phenomenological Description of ET . . . . . . . . . . . . . . . . . . . . . . . 173
16.2 Simple Explanation of Marcus Theory . . . . . . . . . . . . . . . . . . . . . . 175
16.3 Free Energy Contribution of the Nonequilibrium
Polarization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
16.4 Activation Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
16.5 Simple Model Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
16.5.1 Charge Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
16.5.2 Charge Shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
16.6 The Energy Gap as the Reaction Coordinate . . . . . . . . . . . . . . . . 187
16.7 Inner-Shell Reorganization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
16.8 The Transmission Coefficient for Nonadiabatic
Electron Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190

Part VI Elementary Photophysics

17 Molecular States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195


17.1 Born–Oppenheimer Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
17.2 Nonadiabatic Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197

18 Optical Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201


18.1 Dipole Transitions in the Condon Approximation . . . . . . . . . . . . 201
18.2 Time Correlation Function Formalism . . . . . . . . . . . . . . . . . . . . . . 202
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203

19 The Displaced Harmonic Oscillator Model . . . . . . . . . . . . . . . . . 205


19.1 The Time Correlation Function in the Displaced Harmonic
Oscillator Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
19.2 High-Frequency Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
19.3 The Short-Time Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . 207

20 Spectral Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209


20.1 Dephasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
20.2 Gaussian Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
20.2.1 Long Correlation Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
20.2.2 Short Correlation Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
20.3 Markovian Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217

21 Crossing of Two Electronic States . . . . . . . . . . . . . . . . . . . . . . . . . 219


21.1 Adiabatic and Diabatic States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
21.2 Semiclassical Treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
21.3 Application to Diabatic ET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
21.4 Crossing in More Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227

22 Dynamics of an Excited State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229


22.1 Green’s Formalism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
22.2 Ladder Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
22.3 A More General Ladder Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
22.4 Application to the Displaced Oscillator Model . . . . . . . . . . . . . . . 240
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242

Part VII Elementary Photoinduced Processes

23 Photophysics of Chlorophylls and Carotenoids . . . . . . . . . . . . . 247


23.1 MO Model for the Electronic States . . . . . . . . . . . . . . . . . . . . . . . . 248
23.2 The Free Electron Model for Polyenes . . . . . . . . . . . . . . . . . . . . . . 249
23.3 The LCAO Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
23.4 Hückel Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
23.5 Simplified CI Model for Polyenes . . . . . . . . . . . . . . . . . . . . . . . . . . 253
23.6 Cyclic Polyene as a Model for Porphyrins . . . . . . . . . . . . . . . . . . . 253
23.7 The Four Orbital Model for Porphyrins . . . . . . . . . . . . . . . . . . . . 254
23.8 Energy Transfer Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257

24 Incoherent Energy Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259


24.1 Excited States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
24.2 Interaction Matrix Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
24.3 Multipole Expansion of the Excitonic Interaction . . . . . . . . . . . . 262
24.4 Energy Transfer Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
24.5 Spectral Overlap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
24.6 Energy Transfer in the Triplet State . . . . . . . . . . . . . . . . . . . . . . . 268

25 Coherent Excitations in Photosynthetic Systems . . . . . . . . . . . 269


25.1 Coherent Excitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
25.1.1 Strongly Coupled Dimers . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
25.1.2 Excitonic Structure of the Reaction Center . . . . . . . . . . . 273
25.1.3 Circular Molecular Aggregates . . . . . . . . . . . . . . . . . . . . . . 274
25.1.4 Dimerized Systems of LH2 . . . . . . . . . . . . . . . . . . . . . . . . . . 280
25.2 Influence of Disorder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
25.2.1 Symmetry-Breaking Local Perturbation . . . . . . . . . . . . . . 285
25.2.2 Periodic Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
25.2.3 Diagonal Disorder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
25.2.4 Off-Diagonal Disorder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293

26 Ultrafast Electron Transfer Processes
   in the Photosynthetic Reaction Center . . . . . . . . . . . . . . . . . . . . 297

27 Proton Transfer in Biomolecules . . . . . . . . . . . . . . . . . . . . . . . . . . . 301


27.1 The Proton Pump Bacteriorhodopsin . . . . . . . . . . . . . . . . . . . . . . 301
27.2 Born–Oppenheimer Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
27.3 Nonadiabatic Proton Transfer (Small Coupling) . . . . . . . . . . . . . 305
27.4 Strongly Bound Protons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
27.5 Adiabatic Proton Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308

Part VIII Molecular Motor Models

28 Continuous Ratchet Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311


28.1 Transport Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
28.2 Chemical Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
28.3 The Two-State Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
28.3.1 The Chemical Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
28.3.2 The Fast Reaction Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
28.3.3 The Fast Diffusion Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
28.4 Operation Close to Thermal Equilibrium . . . . . . . . . . . . . . . . . . . 319
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320

29 Discrete Ratchet Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323


29.1 Linear Model with Two Internal States . . . . . . . . . . . . . . . . . . . . . 324

Part IX Appendix

A The Grand Canonical Ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . 329


A.1 Grand Canonical Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
A.2 Connection to Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . 331

B Time Correlation Function of the Displaced Harmonic


Oscillator Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
B.1 Evaluation of the Time Correlation Function . . . . . . . . . . . . . . . . 333
B.2 Boson Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
B.2.1 Derivation of Theorem 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
B.2.2 Derivation of Theorem 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
B.2.3 Derivation of Theorem 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
B.2.4 Derivation of Theorem 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . 336

C The Saddle Point Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339

Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Part I

Statistical Mechanics of Biopolymers


1
Random Walk Models for the Conformation

In this chapter, we study simple statistical models for the entropic forces which
are due to the large number of conformations characteristic for biopolymers
like DNA or proteins (Fig. 1.1).

Fig. 1.1. Conformation of a protein. The relative orientation of two successive protein residues can be described by three angles (Ψ, Φ, ω). For a real protein, the ranges of these angles are restricted by steric interactions, which are neglected in simple models

1.1 The Freely Jointed Chain


We consider a three-dimensional chain consisting of M units. The configuration can be described by a point in a 3(M+1)-dimensional space (Fig. 1.2):

  (r_0, r_1, \ldots, r_M).   (1.1)

The M bond vectors

  b_i = r_i - r_{i-1}   (1.2)

have a fixed length |b_i| = b and are randomly oriented. This can be described by a distribution function:

  P(b_i) = \frac{1}{4\pi b^2}\,\delta(|b_i| - b).   (1.3)

Fig. 1.2. Freely jointed chain with bond length b

Since the different units are independent, the joint probability distribution factorizes

  P(b_1, \ldots, b_M) = \prod_{i=1}^{M} P(b_i).   (1.4)

There is no excluded volume interaction between any two monomers. Obviously, the end-to-end distance

  R = \sum_{i=1}^{M} b_i   (1.5)

has an average value of \langle R \rangle = 0 since

  \langle R \rangle = \Bigl\langle \sum_i b_i \Bigr\rangle = M \int b_i\, P(b_i)\, d^3 b_i = 0.   (1.6)

The second moment is

  \langle R^2 \rangle = \Bigl\langle \Bigl(\sum_i b_i\Bigr)\Bigl(\sum_j b_j\Bigr) \Bigr\rangle = \sum_{i,j} \langle b_i b_j \rangle = \sum_i \langle b_i^2 \rangle + \sum_{i \ne j} \langle b_i \rangle \langle b_j \rangle = M b^2.   (1.7)

1.1.1 Entropic Elasticity

The distribution of the end-to-end vector is

  P(R) = \int P(b_1, \ldots, b_M)\, \delta\Bigl(R - \sum_i b_i\Bigr)\, d^3 b_1 \cdots d^3 b_M.   (1.8)

This integral can be evaluated by replacing the δ-function by the Fourier integral

  \delta(R) = \frac{1}{(2\pi)^3} \int e^{-ikR}\, d^3 k   (1.9)

which gives

  P(R) = \frac{1}{(2\pi)^3} \int d^3 k\, e^{-ikR} \prod_{i=1}^{M} \int \frac{1}{4\pi b^2}\, \delta(|b_i| - b)\, e^{ik b_i}\, d^3 b_i.   (1.10)

The inner integral can be evaluated in polar coordinates:

  \int \frac{1}{4\pi b^2}\, \delta(|b_i| - b)\, e^{ik b_i}\, d^3 b_i
    = \int_0^{2\pi} d\phi \int_0^{\infty} b_i^2\, db_i\, \frac{1}{4\pi b^2}\, \delta(|b_i| - b) \int_0^{\pi} \sin\theta\, d\theta\, e^{ik b_i \cos\theta}.   (1.11)

The integral over θ gives

  \int_0^{\pi} \sin\theta\, d\theta\, e^{ik b_i \cos\theta} = \frac{2 \sin k b_i}{k b_i}   (1.12)

and hence

  \int \frac{1}{4\pi b^2}\, \delta(|b_i| - b)\, e^{ik b_i}\, d^3 b_i = 2\pi \int_0^{\infty} db_i\, \delta(b_i - b)\, b_i^2\, \frac{1}{4\pi b^2}\, \frac{2 \sin k b_i}{k b_i} = \frac{\sin k b}{k b},   (1.13)

and finally we have

  P(R) = \frac{1}{(2\pi)^3} \int d^3 k\, e^{-ikR} \left( \frac{\sin k b}{k b} \right)^M.   (1.14)

The function

  \left( \frac{\sin k b}{k b} \right)^M   (1.15)

has a very sharp maximum at kb = 0. For large M, it can be approximated quite accurately by a Gaussian:

  \left( \frac{\sin k b}{k b} \right)^M \approx e^{-M k^2 b^2 / 6}   (1.16)

which gives

  P(R) \approx \frac{1}{(2\pi)^3} \int d^3 k\, e^{-ikR}\, e^{-(M/6) k^2 b^2} = \left( \frac{3}{2\pi b^2 M} \right)^{3/2} e^{-3R^2/(2b^2 M)}.   (1.17)

We consider R as a macroscopic variable. The free energy is (no internal degrees of freedom, E = 0)

  F = -T S = -k_B T \ln P(R) = k_B T\, \frac{3 R^2}{2 b^2 M} + \mathrm{const}.   (1.18)

The quadratic dependence on R is very similar to a Hookean spring. For a potential energy

  V = \frac{k_s}{2} x^2,   (1.19)

the probability distribution of the coordinate is

  P(x) = \sqrt{\frac{k_s}{2\pi k_B T}}\, e^{-k_s x^2 / 2 k_B T}   (1.20)

which gives a free energy of

  F = -k_B T \ln P = \mathrm{const} + \frac{k_s x^2}{2}.   (1.21)

By comparison, the apparent spring constant is

  k_s = \frac{3 k_B T}{M b^2}.   (1.22)
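As a numerical illustration (an added sketch, not part of the original text; it assumes NumPy is available, and the helper name is hypothetical), the result ⟨R²⟩ = M b² and the entropic spring constant (1.22) can be checked by direct Monte Carlo sampling of freely jointed chains:

  import numpy as np

  def sample_end_to_end(M, b=1.0, n_chains=20000, seed=0):
      """Sample end-to-end vectors R of freely jointed chains with M bonds of length b."""
      rng = np.random.default_rng(seed)
      # isotropic bond orientations: cos(theta) uniform in [-1, 1], phi uniform in [0, 2*pi)
      cos_t = rng.uniform(-1.0, 1.0, size=(n_chains, M))
      phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_chains, M))
      sin_t = np.sqrt(1.0 - cos_t**2)
      bonds = b * np.stack((sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t), axis=-1)
      return bonds.sum(axis=1)                     # shape (n_chains, 3)

  M, b = 100, 1.0
  R = sample_end_to_end(M, b)
  print("measured <R^2> =", np.mean(np.sum(R**2, axis=1)), "  theory M*b^2 =", M * b**2)
  # apparent entropic spring constant (1.22), in units of k_B*T
  print("k_s/(k_B T) = 3/(M b^2) =", 3.0 / (M * b**2))

The sampled mean squared end-to-end distance should agree with M b² within the statistical error of the simulation.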

1.1.2 Force–Extension Relation

We consider now a chain with one fixed end and an external force acting in x-direction at the other end [1] (Fig. 1.3).
The projection of the ith segment onto the x-axis has a length of (Fig. 1.4)

  b_i = -b \cos\theta \in [-b, b].   (1.23)

We discretize the continuous range of b_j by dividing the interval [−b, b] into n bins of width Δb = 2b/n corresponding to the discrete values l_i, i = 1, \ldots, n.

Fig. 1.3. Freely jointed chain with external force

Fig. 1.4. Projection of the bond vector

The chain members are divided into n groups according to their bond projections b_j. The number of units in each group is denoted by M_i so that

  \sum_{i=1}^{n} M_i = M   (1.24)

and the end-to-end length is

  \sum_{i=1}^{n} l_i M_i = L.   (1.25)

The probability distribution is

  P(\theta, \phi)\, d\theta\, d\phi = \frac{\sin\theta\, d\theta\, d\phi}{4\pi}.   (1.26)

Since we are only interested in the probability of the l_i, we integrate over φ

  P(\theta)\, d\theta = \frac{\sin\theta\, d\theta}{2}   (1.27)

and transform variables to have

  P(l)\, dl = P(-b\cos\theta)\, d(-b\cos\theta) = \frac{1}{2b}\, d(-b\cos\theta) = \frac{1}{2b}\, dl.   (1.28)

The canonical partition function is

  Z(L, M, T) = \sum_{\{M_i\},\, \sum_i M_i l_i = L} \frac{M!}{\prod_j M_j!} \prod_i z_i^{M_i} = \sum_{\{M_i\},\, \sum_i M_i l_i = L} M! \prod_i \frac{z_i^{M_i}}{M_i!}.   (1.29)

The z_i = z are the independent partition functions of the single units which we assume as independent of i. The degeneracy factor M!/\prod_i M_i! counts the number of microstates for a certain configuration {M_i}. The summation is only over configurations with a fixed end-to-end length. This makes the evaluation rather complicated. Instead, we introduce a new partition function by considering an ensemble with fixed force and fluctuating length

  \Delta(\kappa, M, T) = \sum_L Z(L, M, T)\, e^{\kappa L / k_B T}.   (1.30)

Approximating the logarithm of the sum by the logarithm of the maximum term, we see that

  -k_B T \ln \Delta = -k_B T \ln Z - \kappa L   (1.31)

gives the Gibbs free enthalpy¹

¹ −κL corresponds to +pV.

Fig. 1.5. Force–extension relation (L/(Mb) as a function of κb/k_B T)

  G(\kappa, M, T) = F - \kappa L.   (1.32)

In this new ensemble, the summation over L simplifies the partition function:

  \Delta = \sum_{\{M_i\}} e^{\kappa \sum_i M_i l_i / k_B T}\, M! \prod_i \frac{z^{M_i}}{M_i!} = \sum_{\{M_i\}} M! \prod_i \frac{\left(z e^{\kappa l_i / k_B T}\right)^{M_i}}{M_i!} = \Bigl( \sum_i z e^{\kappa l_i / k_B T} \Bigr)^{M} = \xi(\kappa, T)^M.   (1.33)

Now returning to a continuous distribution of l_i = -b\cos\theta, we have to evaluate

  \xi = \int_{-b}^{b} P(l)\, dl\, z\, e^{\kappa l / k_B T} = z\, \frac{\sinh t}{t}   (1.34)

with (Fig. 1.5)

  t = \frac{\kappa b}{k_B T}.   (1.35)

From

  dG = -S\, dT - L\, d\kappa,   (1.36)

we find

  L = -\left( \frac{\partial G}{\partial \kappa} \right)_T = M k_B T \frac{\partial}{\partial \kappa} \ln\!\left( \frac{z}{\kappa b / k_B T} \sinh \frac{\kappa b}{k_B T} \right)
    = M k_B T \left( -\frac{1}{\kappa} + \frac{b}{k_B T} \coth \frac{\kappa b}{k_B T} \right) = M b\, \mathcal{L}\!\left( \frac{\kappa b}{k_B T} \right),

with the Langevin function

  \mathcal{L}(x) = \coth(x) - \frac{1}{x}.   (1.37)
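The force–extension relation L/(Mb) = 𝓛(κb/k_B T) is easy to evaluate numerically. The following sketch (an added illustration, assuming NumPy; the function name is hypothetical) tabulates the Langevin function and checks the small-force limit 𝓛(x) ≈ x/3, which reproduces the Hookean spring constant of Sect. 1.1.1:

  import numpy as np

  def langevin(x):
      """Langevin function L(x) = coth(x) - 1/x; the small-x branch uses the Taylor limit x/3."""
      x = np.asarray(x, dtype=float)
      with np.errstate(divide="ignore", invalid="ignore"):
          val = 1.0 / np.tanh(x) - 1.0 / x
      return np.where(np.abs(x) < 1e-6, x / 3.0, val)

  t = np.linspace(-20.0, 20.0, 9)          # t = kappa*b/(k_B T)
  for ti, Li in zip(t, langevin(t)):
      print(f"t = {ti:6.1f}   L/(M b) = {Li:8.4f}")
  print("L(0.01)/0.01 =", float(langevin(0.01)) / 0.01, "(expected 1/3 for small forces)")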

1.2 Two-Component Model


A one-dimensional random walk model can be applied to a polymer chain
which is composed of two types of units (named α and β), which may intercon-
vert. This model is for instance applicable to the dsDNA → SDNA transition
or the α-Helix → random coil transition of proteins which show up as a
plateau in the force–extension curve [1, 2].
We follow the treatment given in [1] which is based on an explicit evalu-
ation of the partition function. Alternatively, the two-component model can
be mapped onto a one-dimensional Ising model, which can be solved by the
transfer matrix method [3, 4]. We assume that of the overall M units, Mα are
in the α-configuration and M − Mα are in the β-configuration. The lengths
of the two conformers are lα and lβ , respectively (Fig. 1.6).

1.2.1 Force–Extension Relation

The total length is given by

  L = M_\alpha l_\alpha + (M - M_\alpha) l_\beta = M_\alpha (l_\alpha - l_\beta) + M l_\beta.   (1.38)

The number of configurations with length L is given by

  \Omega(L) = \Omega\!\left( M_\alpha = \frac{L - M l_\beta}{l_\alpha - l_\beta} \right) = \frac{M!}{\left( \frac{L - M l_\beta}{l_\alpha - l_\beta} \right)!\, \left( \frac{M l_\alpha - L}{l_\alpha - l_\beta} \right)!}.   (1.39)

From the partition function

  Z = z_\alpha^{M_\alpha} z_\beta^{M - M_\alpha}\, \Omega = z_\alpha^{(L - M l_\beta)/(l_\alpha - l_\beta)}\, z_\beta^{(M l_\alpha - L)/(l_\alpha - l_\beta)}\, \Omega,   (1.40)

application of Stirling's approximation gives for the free energy

  F = -k_B T \ln Z
    = -k_B T\, \frac{L - M l_\beta}{l_\alpha - l_\beta} \ln z_\alpha - k_B T\, \frac{M l_\alpha - L}{l_\alpha - l_\beta} \ln z_\beta - k_B T (M \ln M - M)
      - k_B T \left[ -\frac{L - M l_\beta}{l_\alpha - l_\beta} \ln \frac{L - M l_\beta}{l_\alpha - l_\beta} + \frac{L - M l_\beta}{l_\alpha - l_\beta} - \frac{M l_\alpha - L}{l_\alpha - l_\beta} \ln \frac{M l_\alpha - L}{l_\alpha - l_\beta} + \frac{M l_\alpha - L}{l_\alpha - l_\beta} \right].   (1.41)

Fig. 1.6. Two-component model

Fig. 1.7. Force–extension relation for the two-component model (κ(l_α − l_β)/k_B T as a function of δ for z_β/z_α = 0.2, 1, 5)

The derivative of the free energy gives the force–extension relation

  \kappa = \frac{\partial F}{\partial L} = \frac{k_B T}{l_\alpha - l_\beta} \ln \frac{M l_\beta - L}{L - M l_\alpha} + \frac{k_B T}{l_\alpha - l_\beta} \ln \frac{z_\beta}{z_\alpha}.   (1.42)

This can be written as a function of the fraction of segments in the α-configuration

  \delta = \frac{M_\alpha}{M} = \frac{L - M l_\beta}{M (l_\alpha - l_\beta)}   (1.43)

in the somewhat simpler form (Fig. 1.7)

  \frac{\kappa (l_\alpha - l_\beta)}{k_B T} = \ln \frac{\delta}{1 - \delta} + \ln \frac{z_\beta}{z_\alpha}.   (1.44)

The mean extension for zero force is obtained by solving κ(L) = 0:

  \bar{L}_0 = M\, \frac{z_\alpha l_\alpha + z_\beta l_\beta}{z_\alpha + z_\beta},   (1.45)

  \delta_0 = \frac{\bar{L}_0 - M l_\beta}{M (l_\alpha - l_\beta)} = \frac{z_\alpha}{z_\alpha + z_\beta}.   (1.46)

Taylor series expansion around \bar{L}_0 gives the linearized force–extension relation:

  \kappa = \frac{\partial F}{\partial L} = \frac{k_B T}{M (l_\alpha - l_\beta)^2}\, \frac{(z_\alpha + z_\beta)^2}{z_\alpha z_\beta}\, (L - \bar{L}_0) + \cdots
    \approx \frac{k_B T}{l_\alpha - l_\beta}\, \frac{(z_\alpha + z_\beta)^2}{z_\alpha z_\beta}\, (\delta - \delta_0)
    = \frac{k_B T}{l_\alpha - l_\beta}\, \frac{1}{\delta_0 (1 - \delta_0)}\, (\delta - \delta_0).   (1.47)
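The shape of Fig. 1.7 follows directly from (1.44). The short sketch below (an added illustration, assuming NumPy; function names are hypothetical) evaluates the reduced force for several ratios z_β/z_α together with the zero-force fraction δ₀ from (1.46):

  import numpy as np

  def force_two_component(delta, z_ratio):
      """Reduced force kappa*(l_a - l_b)/(k_B T) from (1.44); z_ratio = z_beta/z_alpha."""
      return np.log(delta / (1.0 - delta)) + np.log(z_ratio)

  delta = np.linspace(0.05, 0.95, 7)
  for z_ratio in (0.2, 1.0, 5.0):
      d0 = 1.0 / (1.0 + z_ratio)       # delta_0 = z_a/(z_a + z_b), cf. (1.46)
      print(f"z_b/z_a = {z_ratio}: delta_0 = {d0:.3f}, reduced force =",
            np.round(force_two_component(delta, z_ratio), 2))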

1.2.2 Two-Component Model with Interactions

We now consider additional interactions between neighboring units. We introduce the interaction energies w_{\alpha\alpha}, w_{\alpha\beta}, w_{\beta\beta} for the different pairs of neighbors and the numbers N_{\alpha\alpha}, N_{\alpha\beta}, N_{\beta\beta} of such interaction terms. The total interaction energy is then

  W = N_{\alpha\alpha} w_{\alpha\alpha} + N_{\alpha\beta} w_{\alpha\beta} + N_{\beta\beta} w_{\beta\beta}.   (1.48)

The numbers of pair interactions are not independent of the numbers of units M_α, M_β. Consider insertion of an additional α-segment into a chain. Figure 1.8 counts the possible changes in interaction terms. In any case, insertion of an α-segment increases the expression 2N_{\alpha\alpha} + N_{\alpha\beta} by 2. Similarly, insertion of an extra β-segment increases 2N_{\beta\beta} + N_{\alpha\beta} by 2 (Fig. 1.9).

Fig. 1.8. Insertion of an α-segment. 2N_{αα} + N_{αβ} increases by 2

  insertion        ΔM_α   ΔN_αα   ΔN_αβ   ΔN_ββ   Δ(2N_αα + N_αβ)
  ···α|α···         +1     +1                          +2
  ···α|β···         +1     +1                          +2
  ···β|α···         +1     +1                          +2
  ···β|β···         +1             +2      −1          +2

Fig. 1.9. Insertion of a β-segment. 2N_{ββ} + N_{αβ} increases by 2

  insertion        ΔM_β   ΔN_αα   ΔN_αβ   ΔN_ββ   Δ(2N_ββ + N_αβ)
  ···α|α···         +1     −1      +2                  +2
  ···α|β···         +1                     +1          +2
  ···β|α···         +1                     +1          +2
  ···β|β···         +1                     +1          +2

This shows that there are linear relationships of the form

  2N_{\alpha\alpha} + N_{\alpha\beta} = 2M_\alpha + c_1,
  2N_{\beta\beta} + N_{\alpha\beta} = 2M_\beta + c_2.   (1.49)

The two constants depend on the boundary conditions, as can be seen from an inspection of the shortest possible chains with two segments. They are zero for periodic boundaries and will therefore be neglected in the following since the numbers M_α and M_β are much larger (Fig. 1.10).

Fig. 1.10. Determination of the constants. c_1 and c_2 are zero for periodic boundary conditions and in the range −2, …, 0 for open boundaries

  chain              N_α   N_β   N_αα   N_αβ   N_ββ   2N_αα+N_αβ−2N_α   2N_ββ+N_αβ−2N_β
  α α (open)          2     0     1      0      0          −2                 0
  β β (open)          0     2     0      0      1           0                −2
  α β (open)          1     1     0      1      0          −1                −1
  α α (periodic)      2     0     2      0      0           0                 0
  β β (periodic)      0     2     0      0      2           0                 0
  α β (periodic)      1     1     0      2      0           0                 0

We substitute

  N_{\alpha\alpha} = M_\alpha - \frac{1}{2} N_{\alpha\beta},   (1.50)

  N_{\beta\beta} = M_\beta - \frac{1}{2} N_{\alpha\beta},   (1.51)

  w = w_{\alpha\alpha} + w_{\beta\beta} - 2 w_{\alpha\beta}   (1.52)

to obtain the interaction energy

  W = w_{\alpha\alpha}\left(M_\alpha - \frac{1}{2} N_{\alpha\beta}\right) + w_{\beta\beta}\left(M_\beta - \frac{1}{2} N_{\alpha\beta}\right) + w_{\alpha\beta} N_{\alpha\beta}
    = w_{\alpha\alpha} M_\alpha + w_{\beta\beta}(M - M_\alpha) - \frac{w}{2} N_{\alpha\beta}.   (1.53)

The canonical partition function is

  Z(M_\alpha, T) = z_\alpha^{M_\alpha} z_\beta^{M_\beta} \sum_{N_{\alpha\beta}} g(M_\alpha, N_{\alpha\beta})\, e^{-W(N_{\alpha\beta})/k_B T}
    = \left(z_\alpha e^{-w_{\alpha\alpha}/k_B T}\right)^{M_\alpha} \left(z_\beta e^{-w_{\beta\beta}/k_B T}\right)^{M - M_\alpha} \sum_{N_{\alpha\beta}} g(M_\alpha, N_{\alpha\beta})\, e^{N_{\alpha\beta} w / 2 k_B T}.   (1.54)

The degeneracy factor g will be evaluated in the following.


The chain can be divided into blocks containing only α-segments (α-
blocks) or only β-segments (β-blocks). The number of boundaries between
α-blocks and β-blocks obviously is given by Nαβ . Let Nαβ be an odd number.
Then there are (Nαβ +1)/2 blocks of each type. (We assume that Nαβ , Mα , Mβ
are large numbers and neglect small differences of order 1 for even Nαβ .) In
each α-block, there is at least one α-segment. The remaining Mα −(Nαβ +1)/2
α-segments have to be distributed over the (Nαβ + 1)/2 α-blocks (Fig. 1.11).
Therefore, we need the number of possible ways to arrange Mα − (Nαβ +
1)/2 segments and (Nαβ − 1)/2 walls which is given by the number of ways

  ααββαβ   ααβαββ   αββααβ   αβααββ
  ββααβα   ββαβαα   βααββα   βαββαα

Fig. 1.11. Degeneracy factor g. The possible eight configurations are shown for M = 6, M_α = 3, M_β = 3, and N_{αβ} = 3

Fig. 1.12. Distribution of three segments over three blocks. This is equivalent to the arrangement of three segments and two border lines

to distribute the (Nαβ − 1)/2 walls over the total of Mα − 1 objects which is
given by (Fig. 1.12)
(Mα − 1)! Mα !
Nαβ −1 N +1
≈ Nαβ N
. (1.55)
( 2 )!(Mα − αβ2 )! ( 2 )!(Mα − 2αβ )!
The same consideration for the β-segments gives another factor of
(M − Mα )!
N Nαβ
. (1.56)
( 2αβ )!(M − Mα − 2 )!

Finally, there is an additional factor of 2 because the first block can be of


either type. Hence for large numbers, we find
(Mα )! (M − Mα )!
g(Mα , Nαβ ) = 2 Nαβ N N Nαβ
. (1.57)
( 2 )!(Mα − 2αβ )! ( 2αβ )!(M − Mα − 2 )!
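For short chains the block-counting argument can be verified by brute force. The following sketch (an added illustration; the function names are hypothetical, and exact enumeration is only feasible for small M) counts all α/β sequences with given M_α and N_{αβ} and compares with the block formula, whose large-number limit is (1.57); for M = 6, M_α = 3, N_{αβ} = 3 it reproduces the eight configurations of Fig. 1.11:

  from itertools import product
  from math import comb

  def count_enumeration(M, M_alpha, N_ab):
      """Count alpha/beta sequences of length M (open chain) with M_alpha alpha-segments
      and exactly N_ab unlike nearest-neighbor pairs."""
      count = 0
      for seq in product("ab", repeat=M):
          if seq.count("a") != M_alpha:
              continue
          if sum(x != y for x, y in zip(seq, seq[1:])) == N_ab:
              count += 1
      return count

  def count_blocks(M, M_alpha, N_ab):
      """Block-counting result for odd N_ab: (N_ab+1)/2 blocks of each type,
      a factor 2 for the type of the first block."""
      half = (N_ab - 1) // 2
      return 2 * comb(M_alpha - 1, half) * comb(M - M_alpha - 1, half)

  for (M, Ma, Nab) in [(6, 3, 3), (10, 5, 3), (12, 6, 5)]:
      print(M, Ma, Nab, count_enumeration(M, Ma, Nab), count_blocks(M, Ma, Nab))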

We look for the maximum summand of Z as a function of N_{αβ}. The corresponding number will be denoted as N^*_{αβ} and is determined from the condition

  0 = \frac{\partial}{\partial N_{\alpha\beta}} \ln\!\left( g(M_\alpha, N_{\alpha\beta})\, e^{w N_{\alpha\beta} / 2 k_B T} \right) = \frac{w}{2 k_B T} + \frac{\partial}{\partial N_{\alpha\beta}} \ln g(M_\alpha, N_{\alpha\beta}).   (1.58)

Stirling's approximation gives

  0 = \frac{w}{2 k_B T} + \frac{1}{2} \ln\!\left( M_\alpha - \frac{N^*_{\alpha\beta}}{2} \right) + \frac{1}{2} \ln\!\left( M - M_\alpha - \frac{N^*_{\alpha\beta}}{2} \right) - \ln \frac{N^*_{\alpha\beta}}{2}   (1.59)

or

  0 = \frac{w}{k_B T} + \ln \frac{\left( M_\alpha - \frac{N^*_{\alpha\beta}}{2} \right)\left( M - M_\alpha - \frac{N^*_{\alpha\beta}}{2} \right)}{\left( \frac{N^*_{\alpha\beta}}{2} \right)^2}.   (1.60)

Taking the exponential gives

  \left( M_\alpha - \frac{N^*_{\alpha\beta}}{2} \right)\left( M - M_\alpha - \frac{N^*_{\alpha\beta}}{2} \right) = \left( \frac{N^*_{\alpha\beta}}{2} \right)^2 e^{-w/k_B T}.   (1.61)

Introducing the relative quantities

  \delta = \frac{M_\alpha}{M}, \qquad \gamma = \frac{N_{\alpha\beta}}{2M},   (1.62)

we have to solve the quadratic equation

  (\delta - \gamma)(1 - \delta - \gamma) = \gamma^2 e^{-w/k_B T}.   (1.63)

The solutions are

  \gamma = \frac{-1 \pm \sqrt{(1 - 2\delta)^2 + 4 e^{-w/k_B T}\, \delta(1 - \delta)}}{2\left( e^{-w/k_B T} - 1 \right)}.   (1.64)

Series expansion in w/k_B T gives

  \gamma = \frac{k_B T}{2w} + \frac{1}{4} + \frac{w}{24 k_B T} + \cdots \pm \left( -\frac{k_B T}{2w} - \frac{1}{4} + \delta(1 - \delta) + \left[ (\delta - \delta^2)^2 - \frac{1}{24} \right] \frac{w}{k_B T} + \cdots \right).   (1.65)

The − alternative diverges for w → 0, whereas the + alternative

  \gamma = \delta(1 - \delta) + (\delta - \delta^2)^2\, \frac{w}{k_B T} - \frac{1}{2}\left[ \delta^2(1 - \delta)^2 - 4\delta^3(1 - \delta)^3 \right] \left( \frac{w}{k_B T} \right)^2 + \cdots   (1.66)

approaches the value

  \gamma_0 = \delta - \delta^2,   (1.67)

which is the only solution for the case w = 0:

  (\delta - \gamma_0)(1 - \delta - \gamma_0) = \gamma_0^2 \;\rightarrow\; \delta(1 - \delta) - \gamma_0 = 0.   (1.68)

For N^*_{αβ}, we obtain approximately

  N^*_{\alpha\beta} = 2M \left[ \delta(1 - \delta) + (\delta - \delta^2)^2\, \frac{w}{k_B T} + \cdots \right].   (1.69)
kB T
Let us now apply the maximum term method, which approximates the
logarithm of a sum by the logarithm of the maximum summand:
F = −kB T ln Z(Mα , T )
≈ −kB T Mα ln zα − kB T (M − Mα ) ln zβ + Mα wαα + (M − Mα )wββ


wNαβ
−kB T ln g(Mα , Nαβ )− . (1.70)
2
The force–length relation is now obtained from

L−Mlβ
∂F ∂F ∂ lα −lβ 1 ∂F
κ= = =
∂L ∂Mα ∂L lα − lβ ∂Mα

1 ∂
= −kB T ln zα + kB T ln zβ + wαα − wββ − kB T ln g
lα − lβ ∂Mα
∗ ∗
1 ∂N αβ ∂ wN αβ
+ ∗ −kB T ln g − . (1.71)
lα − lβ ∂Mα ∂Nαβ 2

The last part vanishes due to the definition of Nαβ . Now using Stirling’s
formula, we find
N∗
∂ Mα M − Mα − 2αβ δ 1−δ−γ
ln g = ln + ln ∗ = ln + ln
∂Mα M − Mα Mα −
N αβ 1−δ δ−γ
2
(1.72)
and substituting γ, we have finally
(lα − lβ ) zβ e−wββ /kB T (1 − δ)
κ = ln −w /k T
− ln
kB T zα e αα B δ
2
w w
+(2δ − 1) + δ(3δ − 2δ 2 − 1) . (1.73)
kB T kB T
Linearization now gives
zα e−wαα /kB T
δ0 = ,
zα e−wαα /kB T + zβ e−wββ /kB T
kB T 1
κ= (δ − δ0 )
lα − lβ δ0 (1 − δ0 )
2
w w
+ (2δ0 − 1) + (3δ02 − δ0 − 2δ03 )
kB T kB T
 2 
w w
+ 2 + (6δ0 − 6δ02 − 1) (δ − δ0 ). (1.74)
kB T kB T
16 1 Random Walk Models for the Conformation

Fig. 1.13. Force–length relation for the interacting two-component model. Dashed curves: exact results for w/k_B T = 0, ±2, ±5. Solid curves: series expansion (1.73) which gives a good approximation for |w/k_B T| < 2

For negative w, a small force may lead to much larger changes in length than in the absence of interactions. This explains, for example, how in proteins huge channels may open although the acting forces are quite small. In the case of myoglobin, this is how the penetration of oxygen into the protein becomes possible (Fig. 1.13).
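The effect of the interaction is easy to see numerically. The sketch below (an added illustration, assuming NumPy; function and parameter names are hypothetical) evaluates the series expansion (1.73) for several values of w/k_B T; for negative w the curve around δ₀ flattens, which is the softening discussed above (compare the solid curves of Fig. 1.13):

  import numpy as np

  def kappa_reduced(delta, w_kT, z_ratio=1.0, dw=0.0):
      """Reduced force kappa*(l_a - l_b)/(k_B T) from the expansion (1.73).
      z_ratio = z_beta/z_alpha, dw = (w_bb - w_aa)/(k_B T)."""
      return (np.log(z_ratio) - dw
              - np.log((1.0 - delta) / delta)
              + (2.0 * delta - 1.0) * w_kT
              + delta * (3.0 * delta - 2.0 * delta**2 - 1.0) * w_kT**2)

  delta = np.linspace(0.1, 0.9, 9)
  for w_kT in (-2.0, 0.0, 2.0):
      print("w/kT =", w_kT, np.round(kappa_reduced(delta, w_kT), 2))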

Problems

1.1. Gaussian Polymer Model
The simplest description of a polymer is the Gaussian polymer model which considers a polymer to be a series of particles joined by Hookean springs.

(a) The vector connecting monomers n−1 and n obeys a Gaussian distribution with average zero and variance

  \langle (r_n - r_{n-1})^2 \rangle = b^2.

Determine the distribution function P(r_n − r_{n−1}) explicitly.
(b) Assume that the distance vectors r_n − r_{n−1} are independent and calculate the distribution of end-to-end vectors P(r_N − r_0).
(c) Consider now a polymer under the action of a constant force κ in x-direction. The potential energy of a conformation is given by

  V = \sum_{n=1}^{N} \frac{f}{2} (r_n - r_{n-1})^2 - \kappa (x_N - x_0)

and the probability of this conformation is

  P \sim e^{-V/k_B T}.

Determine the effective spring constant f.
(d) Find the most probable configuration by searching for the minimum of the energy

  \frac{\partial V}{\partial x_n} = \frac{\partial V}{\partial y_n} = \frac{\partial V}{\partial z_n} = 0.

Calculate the length of the polymer for the most probable configuration (according to the maximum term method, the average value coincides with the most probable value in the thermodynamic limit).

1.2. Three-Dimensional Polymer Model
Consider a model of a polymer in three-dimensional space consisting of N links of length b. The connection of the links i and i+1 is characterized by the two angles φ_i and θ_i. The vector r_i can be obtained from the vector r_1 through application of a series of rotation matrices²

  r_i = R_1 R_2 \cdots R_{i-1} r_1

with

  r_1 = \begin{pmatrix} 0 \\ 0 \\ b \end{pmatrix},

  R_i = R_z(\phi_i) R_y(\theta_i) = \begin{pmatrix} \cos\phi_i & \sin\phi_i & 0 \\ -\sin\phi_i & \cos\phi_i & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\theta_i & 0 & -\sin\theta_i \\ 0 & 1 & 0 \\ \sin\theta_i & 0 & \cos\theta_i \end{pmatrix}.

Calculate the mean square of the end-to-end distance \langle (\sum_{i=1}^{N} r_i)^2 \rangle for the following cases:
(a) The averaging ⟨···⟩ includes averaging over all angles φ_i and θ_i.
(b) The averaging ⟨···⟩ includes averaging over all angles φ_i while the angles θ_i are held fixed at a common value θ.
(c) The averaging ⟨···⟩ includes averaging over all angles φ_i while the angles θ_i are held fixed at either θ_{2i+1} = θ_a or θ_{2i} = θ_b depending on whether the number of the link is odd or even.
(d) How large must N be, so that it is a good approximation to keep only terms which are proportional to N?
(e) What happens in the second case if θ is chosen as very small (wormlike chain)?
Hint: Show first that after averaging over the φ_i, the only terms of the matrix which have to be taken into account are the elements (R_i)_{33}. The appearing summations can be expressed as geometric series.

² The rotation matrices act in the laboratory fixed system (xyz). Transformation into the coordinate system of the segment (x′y′z′) changes the order of the matrices. For instance, r_2 = R(y, θ_1) R(z, φ_1) r_1 = R(z, φ_1) R(y′, θ_1) R^{-1}(z, φ_1) R(z, φ_1) r_1 = R(z, φ_1) R(y′, θ_1) r_1 (this is sometimes discussed in terms of active and passive rotations).

1.3. Two-Component Model
We consider the two-component model of a polymer chain which consists of M segments of two different types α, β (internal degrees of freedom are neglected). The number of configurations with length L is given by the binomial distribution:

  \Omega(L, M, T) = \frac{M!}{M_\alpha! (M - M_\alpha)!}, \qquad L = M_\alpha l_\alpha + (M - M_\alpha) l_\beta.

(a) Make use of the asymptotic expansion³ of the logarithm of the Γ-function

  \ln \Gamma(z + 1) = (\ln z - 1) z + \ln\sqrt{2\pi} + \frac{1}{2} \ln z + \frac{1}{12 z} + O(z^{-3}), \qquad N! = \Gamma(N + 1)

to calculate the leading terms of the force–extension relation which is obtained from

  \kappa = \frac{\partial}{\partial L} \left( -k_B T \ln \Omega(L, M, T) \right).

Discuss the error of Stirling's approximation for M = 1,000 and l_β/l_α = 2.
(b) Now switch to an ensemble with constant force κ. The corresponding partition function is

  Z(\kappa, M, T) = \sum_L e^{\kappa L / k_B T}\, \Omega(L, M, T).

Calculate the first two moments of the length

  \overline{L} = -\frac{\partial}{\partial \kappa}\left( -k_B T \ln Z \right) = Z^{-1} k_B T \frac{\partial Z}{\partial \kappa}, \qquad
  \overline{L^2} = Z^{-1} (k_B T)^2 \frac{\partial^2 Z}{\partial \kappa^2},

and discuss the relative uncertainty \sigma = \sqrt{\overline{L^2} - \overline{L}^2}\,/\,\overline{L}. Determine the maximum of σ.

³ Several asymptotic expansions can be found in [5].
2
Flory–Huggins Theory for Biopolymer Solutions

In the early 1940s, Flory [6] and Huggins [7], both working independently, developed a theory based upon a simple lattice model that could be used to understand the nonideal nature of polymer solutions. We consider a lattice model where the lattice sites are chosen to be the size of a solvent molecule and where all lattice sites are occupied by one molecule [1].

2.1 Monomeric Solution


As the simplest example, consider the mixing of a low-molecular-weight solvent (component α) with a low-molecular-weight solute (component β). The solute molecule is assumed to be the same size as a solvent molecule and therefore every lattice site is occupied by one solvent molecule or by one solute molecule at a given time (Fig. 2.1).

Fig. 2.1. Two-dimensional Flory–Huggins lattice

The increase in entropy ΔS_m due to mixing of solvent and solute is given by

  \Delta S_m = k_B \ln \Omega = k_B \ln \frac{N!}{N_\alpha! N_\beta!},   (2.1)

where N = N_α + N_β is the total number of lattice sites. Using Stirling's approximation leads to

  \Delta S_m = k_B (N \ln N - N - N_\alpha \ln N_\alpha + N_\alpha - N_\beta \ln N_\beta + N_\beta)
             = k_B (N \ln N - N_\alpha \ln N_\alpha - N_\beta \ln N_\beta)
             = -k_B N_\alpha \ln \frac{N_\alpha}{N} - k_B N_\beta \ln \frac{N_\beta}{N}.   (2.2)

Inserting the volume fractions

  \phi_\alpha = \frac{N_\alpha}{N_\alpha + N_\beta}, \qquad \phi_\beta = \frac{N_\beta}{N_\alpha + N_\beta},   (2.3)

the mixing entropy can be written in the well-known form

  \Delta S_m = -N k_B (\phi_\alpha \ln \phi_\alpha + \phi_\beta \ln \phi_\beta).   (2.4)


Neglecting boundary effects (or using periodic b.c.), the number of nearest
neighbor pairs is (c is the coordination number)
c
Nnn = N . (2.5)
2
These are divided into
Nα c N φ2α c Nβ c N φ2β c
Nαα = φα = , Nββ = φβ = ,
2 2 2 2
Nαβ = N φα φβ c. (2.6)
The average interaction energy is
1 1
w= N cφ2α wαα + N cφ2β wββ + N cφα φβ wαβ (2.7)
2 2
which after the substitution
1
wαβ = (wαα + wββ − w) (2.8)
2
becomes
1
w = − N cφα φβ w
2
1 1
= + N cφα (φα wαα + φβ wαα ) + N cφβ (φβ wββ + φα wββ ) (2.9)
2 2
and since φα + φβ = 1
1 1
w = − N cφα φβ w + N c(φα wαα + φβ wββ ). (2.10)
2 2
Now the partition function is
N!
Z = zαNα zβ β e−Nα cwαα /2kB T e−Nβ cwββ /2kB T eNα Nβ cw/2N kB T
N
Nα !Nβ !
N!
= (zα e−cwαα /2kB T )Nα (zβ e−cwββ /2kB T )Nβ eNα Nβ cw/2N kB T .
Nα !Nβ !
(2.11)
Fig. 2.2. Free energy change ΔF_m/N k_B T of a binary mixture with interaction
The free energy is

  F = -k_B T \ln Z = -N_\alpha k_B T \ln z_\alpha - N_\beta k_B T \ln z_\beta + N_\alpha \frac{c}{2} w_{\alpha\alpha} + N_\beta \frac{c}{2} w_{\beta\beta} - \frac{N_\alpha N_\beta c w}{2N} + N k_B T (\phi_\alpha \ln \phi_\alpha + \phi_\beta \ln \phi_\beta).   (2.12)

For the pure solvent, the free energy is

  F(N_\alpha = N, N_\beta = 0) = -N_\alpha k_B T \ln z_\alpha + N_\alpha \frac{c}{2} w_{\alpha\alpha}   (2.13)

and for the pure solute

  F(N_\alpha = 0, N_\beta = N) = -N_\beta k_B T \ln z_\beta + N_\beta \frac{c}{2} w_{\beta\beta}.   (2.14)

Hence the change in free energy is

  \Delta F_m = -\frac{N_\alpha N_\beta c w}{2N} + N k_B T (\phi_\alpha \ln \phi_\alpha + \phi_\beta \ln \phi_\beta)   (2.15)

with the energy change (van Laar heat of mixing)

  \Delta E_m = -\frac{N_\alpha N_\beta c w}{2N} = -N \phi_\alpha \phi_\beta \frac{c w}{2} = N k_B T \chi \phi_\alpha \phi_\beta.   (2.16)

The last equation defines the Flory interaction parameter (Fig. 2.2):

  \chi = -\frac{c w}{2 k_B T}.   (2.17)

For χ > 2, the free energy has two minima and two stable phases exist. This is seen from solving

  0 = \frac{\partial \Delta F}{\partial \phi_\alpha} = N k_B T \frac{\partial}{\partial \phi_\alpha} \Bigl( \chi \phi_\alpha (1 - \phi_\alpha) + \phi_\alpha \ln \phi_\alpha + (1 - \phi_\alpha) \ln(1 - \phi_\alpha) \Bigr)
    = N k_B T \left( \chi (1 - 2\phi_\alpha) + \ln \frac{\phi_\alpha}{1 - \phi_\alpha} \right).   (2.18)

This equation has as one solution φ_α = 1/2. This solution becomes unstable for χ > 2 as can be seen from the sign change of the second derivative

  \frac{\partial^2 \Delta F}{\partial \phi_\alpha^2} = N k_B T\, \frac{1 - 2\chi \phi_\alpha + 2\chi \phi_\alpha^2}{\phi_\alpha - \phi_\alpha^2} \;\xrightarrow{\;\phi_\alpha = 1/2\;}\; N k_B T (4 - 2\chi).   (2.19)
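The appearance of the two minima can be checked numerically. The sketch below (an added illustration, assuming NumPy; function names are hypothetical) evaluates Δf(φ_α) = χφ_α(1−φ_α) + φ_α ln φ_α + (1−φ_α)ln(1−φ_α) and its curvature at φ_α = 1/2; the curvature changes sign at χ = 2, where the single minimum splits into two:

  import numpy as np

  def f_mix(phi, chi):
      """Reduced mixing free energy Delta F_m/(N k_B T) of the symmetric binary mixture."""
      return chi * phi * (1.0 - phi) + phi * np.log(phi) + (1.0 - phi) * np.log(1.0 - phi)

  def curvature(phi, chi):
      """Second derivative 1/(phi*(1-phi)) - 2*chi, cf. (2.19)."""
      return 1.0 / (phi * (1.0 - phi)) - 2.0 * chi

  phi = np.linspace(0.01, 0.99, 199)
  for chi in (1.0, 2.0, 2.5):
      f = f_mix(phi, chi)
      n_minima = np.sum((f[1:-1] < f[:-2]) & (f[1:-1] < f[2:]))
      print(f"chi = {chi}: curvature at 1/2 = {curvature(0.5, chi):+.2f}, local minima found: {n_minima}")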

2.2 Polymeric Solution


Now, consider N_β polymer molecules which consist of M units and hence occupy a total of M N_β lattice sites. The volume fractions are (Fig. 2.3)

  \phi_\alpha = \frac{N_\alpha}{N_\alpha + M N_\beta}, \qquad \phi_\beta = \frac{M N_\beta}{N_\alpha + M N_\beta}   (2.20)

and the number of lattice sites is

  N = N_\alpha + M N_\beta.   (2.21)

The entropy change is given by

  \Delta S = \Delta S_m + \Delta S_d = k_B \ln \Omega(N_\alpha, N_\beta).   (2.22)

It consists of the mixing entropy and a contribution due to the different conformations of the polymers (disorder entropy). The latter can be eliminated by subtracting the entropy for N_α = 0:

  \Delta S_m = \Delta S - \Delta S_d = k_B \ln \frac{\Omega(N_\alpha, N_\beta)}{\Omega(0, N_\beta)}.   (2.23)

Fig. 2.3. Lattice model for a polymer

In the following, we will calculate Ω(N_α, N_β) in an approximate way. We use a mean-field method where one polymer after the other is distributed over the lattice, taking into account only the available volume but not the configuration of all the other polymers.

Fig. 2.4. Available positions. The second segment can be placed onto four positions, the third segment onto three positions. For the following segments, it is assumed that they can also be placed onto three positions (a). Configurations like (b) are neglected

Under these conditions, Ω factorizes

  \Omega = \frac{1}{N_\beta!} \prod_{i=1}^{N_\beta} \nu_i,   (2.24)

where ν_i counts the number of possibilities to put the ith polymer onto the lattice. It will be calculated by adding one segment after the other and counting the number of possible ways:

  \nu_{i+1} = \prod_{s=1}^{M} n_s^{i+1}.   (2.25)

The first segment of the (i+1)th polymer molecule can be placed onto

  n_1^{i+1} = N - iM   (2.26)

lattice sites (Fig. 2.4).
The second segment has to be placed on a neighboring position. Depending on the coordination number of the lattice, there are c possible neighboring sites. But only a fraction

  f = 1 - \frac{iM}{N}   (2.27)

of these is unoccupied. Hence for the second segment, we have

  n_2^{i+1} = c \left( 1 - \frac{iM}{N} \right).   (2.28)

For the third segment, only c − 1 neighboring positions are available

  n_3^{i+1} = (c - 1) \left( 1 - \frac{iM}{N} \right).   (2.29)

For the following segments r = 4 … M, we assume that the number of possible sites is the same as for the third segment. This introduces some error since for some configurations the number is reduced due to the excluded volume. This error, however, is small compared to the crudeness of the whole model. Multiplying all the factors, we have

  \nu_{i+1} = (N - iM)\, c\, (c-1)^{M-2} \left( 1 - \frac{iM}{N} \right)^{M-1} \approx (N - iM)^M \left( \frac{c-1}{N} \right)^{M-1}   (2.30)

and the entropy is

  \Delta S = k_B \ln \left[ \frac{1}{N_\beta!} \prod_{i=1}^{N_\beta} (N - iM)^M \left( \frac{c-1}{N} \right)^{M-1} \right]
    = -k_B N_\beta \ln N_\beta + k_B N_\beta + k_B N_\beta (M-1) \ln \frac{c-1}{N} + k_B M \sum_{i=1}^{N_\beta} \ln(N - iM).   (2.31)

The sum will be approximated by an integral

  \sum_{i=1}^{N_\beta} \ln(N - iM) \approx \int_0^{N_\beta} \ln(N - Mx)\, dx = \left[ \left( x - \frac{N}{M} \right)\left( \ln(N - Mx) - 1 \right) \right]_0^{N_\beta}
    = -\frac{N_\alpha}{M} (\ln N_\alpha - 1) + \frac{N}{M} (\ln N - 1).   (2.32)

Finally, we get

  \Delta S = -k_B N_\beta \ln N_\beta + k_B N_\beta + k_B N_\beta (M-1) \ln \frac{c-1}{N} + k_B (N \ln N - N + N_\alpha - N_\alpha \ln N_\alpha).   (2.33)

The disorder entropy is obtained by substituting N_α = 0 and N = M N_β:

  \Delta S_d = \Delta S(N_\alpha = 0) = -k_B N_\beta \ln N_\beta + k_B N_\beta + k_B N_\beta (M-1) \ln \frac{c-1}{M N_\beta} + k_B (M N_\beta \ln M N_\beta - M N_\beta).   (2.34)

Fig. 2.5. Mixing entropy. ΔS_m/N k_B is shown as a function of φ_β for M = 1, 2, 10, 100

The mixing entropy is given by the difference (Fig. 2.5):

ΔSm = ΔS − ΔSd
= kB (N ln N − N + Nα − Nα ln Nα − M Nβ ln M Nβ + M Nβ )
+kB Nβ (M − 1)(ln M Nβ − ln N )
= kB (N ln N − Nα ln Nα − M Nβ ln M Nβ + M Nβ ln M Nβ
−M Nβ ln N − Nβ ln M Nβ + Nβ ln N )

Nα M Nβ
= −kB Nα ln + Nβ ln = −kB Nα ln φα − kB Nβ ln φβ
N N

φβ
= −N kB φα ln φα + ln φβ . (2.35)
M
In comparison with the expression for a solution of molecules without internal flexibility, we obtain an extra contribution to the entropy of
\[ -N_\beta k_B \ln\frac{M N_\beta}{N} + N_\beta k_B \ln\frac{N_\beta}{N} = -N_\beta k_B \ln M. \tag{2.36} \]
N N
Next, we calculate the change of energy due to mixing, ΔEm . wαα is the
interaction energy between nearest-neighbor solvent molecules, wββ between
nearest-neighbor polymer units (not chemically bonded), and wαβ between
one solvent molecule and one polymer unit. The probability that any site
is occupied by a solvent molecule is φα and by a polymer unit is φβ . We
introduce an effective coordination number c which takes into account that a
solvent molecule has c neighbors, whereas a polymer segment interacts only
with c-2 other molecules. Then
Nα Nβ
Nαα = cφα , Nββ = M cφβ , (2.37)
2 2
Nαβ = cφα Nβ . (2.38)

In the pure polymer φβ = 1 and Nββ = M cNβ /2, whereas in the pure solvent
Nαα = cNα/2 . The energy change is
\[ \Delta E_m = c w_{\alpha\alpha} \varphi_\alpha \frac{N_\alpha}{2} + w_{\beta\beta} M c \varphi_\beta \frac{N_\beta}{2} + w_{\alpha\beta} M c \varphi_\alpha N_\beta - w_{\alpha\alpha} c \frac{N_\alpha}{2} - w_{\beta\beta} M c \frac{N_\beta}{2} \]
\[ = -c w_{\alpha\alpha} \varphi_\beta \frac{N_\alpha}{2} - c M w_{\beta\beta} \varphi_\alpha \frac{N_\beta}{2} + w_{\alpha\beta} M c \varphi_\alpha N_\beta \]
\[ = \left( w_{\alpha\beta} - \frac{w_{\alpha\alpha} + w_{\beta\beta}}{2} \right) c N \varphi_\alpha \varphi_\beta = -\frac{w}{2}\, c N \varphi_\alpha \varphi_\beta = N k_B T \chi \varphi_\alpha \varphi_\beta, \tag{2.39} \]
2
with the Flory interaction parameter
\[ \chi = -\frac{wc}{2 k_B T}. \tag{2.40} \]
For the change in free energy, we find (Figs. 2.6–2.8)
\[ \frac{\Delta F_m}{N k_B T} = \frac{\Delta E_m}{N k_B T} - \frac{\Delta S_m}{N k_B} = \varphi_\alpha \ln\varphi_\alpha + \frac{\varphi_\beta}{M} \ln\varphi_\beta + \chi \varphi_\alpha \varphi_\beta. \tag{2.41} \]
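Equation (2.41) is straightforward to evaluate numerically. The following Python sketch (an illustration only; the function and variable names are assumptions, not from the text) computes ΔF_m/(N k_B T) over a grid of compositions, which is how curves such as those in Figs. 2.6–2.8 can be generated.

```python
import numpy as np

def mixing_free_energy(phi_beta, M, chi):
    """Flory-Huggins free energy of mixing per lattice site, Eq. (2.41),
    in units of k_B*T. phi_beta may be a scalar or an array in (0, 1)."""
    phi_alpha = 1.0 - phi_beta
    return (phi_alpha * np.log(phi_alpha)
            + phi_beta / M * np.log(phi_beta)
            + chi * phi_alpha * phi_beta)

# reproduce the qualitative behaviour of Fig. 2.8 (chi = 2, several chain lengths)
phi = np.linspace(1e-4, 1 - 1e-4, 999)
for M in (1, 2, 10, 1000):
    f = mixing_free_energy(phi, M, chi=2.0)
    print(f"M = {M:5d}: minimum of dF_m/(N k_B T) = {f.min():.4f}")
```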

2.3 Phase Transitions


2.3.1 Stability Criterion
In equilibrium, the free energy (if volume is constant) has a minimum value.
Hence, a homogeneous system becomes unstable and separates into two phases

Fig. 2.6. Free energy as a function of solute concentration. ΔF_m/(N k_B T) is shown for M = 1,000 and χ = 0.1, 0.5, 1.0, 2.0

Fig. 2.7. Free energy as a function of solute concentration. ΔF_m/(N k_B T) is shown for χ = 1.0 and M = 1, 2, 10, 1,000

Fig. 2.8. Free energy as a function of solute concentration. ΔF_m/(N k_B T) is shown for χ = 2.0 and M = 1, 2, 10, 1,000

if the free energy of the two-phase system is lower, that is, the following
equation can be fulfilled:

\[ \Delta F_m(\varphi_\beta, N) > \Delta F_m(\varphi'_\beta, N') + \Delta F_m(\varphi''_\beta, N - N'). \tag{2.42} \]

But since ΔFm = N Δfm (φβ ) is proportional to N , this condition becomes

\[ N \Delta f_m(\varphi_\beta) > N' \Delta f_m(\varphi'_\beta) + (N - N') \Delta f_m(\varphi''_\beta). \tag{2.43} \]

Since the total numbers Nα and Nβ are conserved, we have

\[ N \varphi_\beta = N' \varphi'_\beta + (N - N') \varphi''_\beta \tag{2.44} \]

or
\[ N' = N\,\frac{\varphi''_\beta - \varphi_\beta}{\varphi''_\beta - \varphi'_\beta}, \qquad N - N' = N\,\frac{\varphi_\beta - \varphi'_\beta}{\varphi''_\beta - \varphi'_\beta}. \tag{2.45} \]

Fig. 2.9. Stability criterion for the free energy

But since N' as well as (N − N') should be positive numbers, there are two possible cases:
\[ \varphi''_\beta - \varphi_\beta > 0, \quad \varphi_\beta - \varphi'_\beta > 0, \quad \varphi''_\beta - \varphi'_\beta > 0, \tag{2.46} \]
\[ \varphi''_\beta - \varphi_\beta < 0, \quad \varphi_\beta - \varphi'_\beta < 0, \quad \varphi''_\beta - \varphi'_\beta < 0, \tag{2.47} \]
which means that either φ'_β < φ_β < φ''_β or φ'_β > φ_β > φ''_β. By renaming, we can always choose the order
\[ \varphi'_\beta < \varphi_\beta < \varphi''_\beta. \tag{2.48} \]

The stability criterion becomes (Fig. 2.9)
\[ \Delta f_m(\varphi_\beta) > \frac{\varphi''_\beta - \varphi_\beta}{\varphi''_\beta - \varphi'_\beta}\, \Delta f_m(\varphi'_\beta) + \frac{\varphi_\beta - \varphi'_\beta}{\varphi''_\beta - \varphi'_\beta}\, \Delta f_m(\varphi''_\beta), \tag{2.49} \]
which can be written with the abbreviations
\[ h' = \varphi_\beta - \varphi'_\beta, \qquad h'' = \varphi''_\beta - \varphi_\beta \tag{2.50} \]
as
\[ \frac{\Delta f(\varphi_\beta - h') - \Delta f(\varphi_\beta)}{h'} + \frac{\Delta f(\varphi_\beta + h'') - \Delta f(\varphi_\beta)}{h''} < 0, \tag{2.51} \]
but that means the curvature has to be negative or locally
\[ \frac{\partial^2 \Delta f(\varphi_\beta)}{\partial \varphi_\beta^2} < 0. \tag{2.52} \]

2.3.2 Critical Coupling

Instabilities appear above a certain value of χ = χc (M ). To find this critical


value, we have to look for the metastable case with

\[ 0 = \frac{\partial^2}{\partial \varphi_b^2} \Delta f(\varphi_b) = \frac{1}{1 - \varphi_b} + \frac{1}{M \varphi_b} - 2\chi. \tag{2.53} \]
In principle, the critical χ value can be found from solving this quadratic
equation for φb and looking for real roots in the range 0 ≤ φβ ≤ 1. Here,
however, a simpler strategy can be applied. At the boundaries of the interval
[0, 1] of possible φb -values, the second derivative is positive

\[ \frac{\partial^2}{\partial \varphi_b^2} \Delta f(\varphi_b) \to \frac{1}{M \varphi_b} > 0 \quad \text{for } \varphi_b \to 0, \tag{2.54} \]
\[ \frac{\partial^2}{\partial \varphi_b^2} \Delta f(\varphi_b) \to \frac{1}{1 - \varphi_b} > 0 \quad \text{for } \varphi_b \to 1. \tag{2.55} \]
Hence, we look for a minimum of the second derivative, that is, we solve

\[ 0 = \frac{\partial^3}{\partial \varphi_b^3} \Delta f(\varphi_b) = \frac{1}{(1 - \varphi_b)^2} - \frac{1}{M \varphi_b^2}. \tag{2.56} \]

This gives immediately


\[ \varphi_{bc} = \frac{1}{1 + \sqrt{M}}. \tag{2.57} \]
Above the critical point, the minimum of the second derivative is negative.
Hence we are looking for a solution of

\[ \frac{\partial^2}{\partial \varphi_b^2} \Delta f(\varphi_b) = \frac{\partial^3}{\partial \varphi_b^3} \Delta f(\varphi_b) = 0. \tag{2.58} \]
Inserting φ_bc into the second derivative gives
\[ 0 = 1 + \frac{1}{M} + \frac{2}{\sqrt{M}} - 2\chi, \tag{2.59} \]
which determines the critical value of χ

\[ \chi_c = \frac{(1 + \sqrt{M})^2}{2M} = \frac{1}{2} + \frac{1}{\sqrt{M}} + \frac{1}{2M}. \tag{2.60} \]
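A short numerical check of (2.57) and (2.60); the function name and the chosen chain lengths are illustrative assumptions.

```python
import numpy as np

def critical_point(M):
    """Critical composition and interaction parameter, Eqs. (2.57) and (2.60)."""
    phi_c = 1.0 / (1.0 + np.sqrt(M))
    chi_c = 0.5 + 1.0 / np.sqrt(M) + 1.0 / (2.0 * M)
    return phi_c, chi_c

for M in (1, 10, 100, 1000):
    phi_c, chi_c = critical_point(M)
    # for M = 1 this reproduces the symmetric-mixture result phi_bc = 1/2, chi_c = 2
    print(f"M = {M:5d}: phi_bc = {phi_c:.4f}, chi_c = {chi_c:.4f}")
```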

From dF = −SdT + μα dNα + μβ dNβ , we obtain the change of the chemical


potential as

\[ \Delta\mu_\alpha = \mu_\alpha - \mu_\alpha^0 = \left.\frac{\partial \Delta F}{\partial N_\alpha}\right|_{N_\beta, T} = \left( \frac{\partial N}{\partial N_\alpha} \frac{\partial}{\partial N} + \frac{\partial \varphi_\beta}{\partial N_\alpha} \frac{\partial}{\partial \varphi_\beta} \right) N \Delta f(\varphi_\beta) \]
\[ = \Delta f(\varphi_\beta) - \varphi_\beta \Delta f'(\varphi_\beta) = k_B T \left( \ln(1 - \varphi_\beta) + \left(1 - \frac{1}{M}\right)\varphi_\beta + \chi \varphi_\beta^2 \right). \tag{2.61} \]
M

Now the derivatives of Δμ_α are
\[ \frac{\partial}{\partial \varphi_\beta} \Delta\mu_\alpha = -\varphi_\beta \Delta f''(\varphi_\beta), \tag{2.62} \]
\[ \frac{\partial^2}{\partial \varphi_\beta^2} \Delta\mu_\alpha = -\Delta f''(\varphi_\beta) - \varphi_\beta \Delta f'''(\varphi_\beta). \tag{2.63} \]

Hence, the critical point can also be found by solving


∂2 ∂
Δμα = Δμα = 0. (2.64)
∂φ2β ∂φβ

Employing the ideal gas approximation

μ = kB T ln(p) + const, (2.65)

this gives for the vapor pressure (Figs. 2.10 and 2.11)

\[ \frac{p_\alpha}{p_\alpha^0} = e^{(\mu_\alpha - \mu_\alpha^0)/k_B T} = (1 - \varphi_\beta)\, e^{\chi \varphi_\beta^2 + (1 - 1/M)\varphi_\beta} \tag{2.66} \]
and since the exponential is a monotonous function, another condition for the
critical point is
\[ \frac{\partial}{\partial \varphi_\beta} \frac{p_\alpha}{p_\alpha^0} = \frac{\partial^2}{\partial \varphi_\beta^2} \frac{p_\alpha}{p_\alpha^0} = 0. \tag{2.67} \]

2.3.3 Phase Diagram

In the simple Flory–Huggins theory, the interaction parameter is proportional


to 1/T . Hence we can write it as

Fig. 2.10. Vapor pressure of a binary mixture with interaction (M = 1)



Fig. 2.11. Vapor pressure of a polymer solution with interaction for M = 1,000, χ =
0.5, 0.532, 0.55

\[ \chi = \frac{T_0 \chi_0}{T} \tag{2.68} \]
and discuss the free energy change as a function of φβ and T :

\[ \Delta F = N k_B T \left( (1 - \varphi_\beta) \ln(1 - \varphi_\beta) + \frac{\varphi_\beta}{M} \ln\varphi_\beta \right) + N k_B T_0 \chi_0\, \varphi_\beta (1 - \varphi_\beta). \tag{2.69} \]

The turning points follow from



\[ 0 = \frac{\partial^2}{\partial \varphi_\beta^2} \Delta F = N k_B \left( \frac{T}{1 - \varphi_\beta} + \frac{T}{M \varphi_\beta} - 2 T_0 \chi_0 \right) \tag{2.70} \]
as
\[ \varphi_{\beta,\mathrm{TP}} = \frac{1}{2} + \frac{T(1 - M) \pm \sqrt{T^2 (M - 1)^2 + 4 T_0 \chi_0 M \left( T_0 \chi_0 M - T(M + 1) \right)}}{4 T_0 \chi_0 M}. \tag{2.71} \]

This defines the spinodal curve which separates the unstable from the metastable region (Fig. 2.12).
What is the minimum free energy of a two-phase system? The free energy
takes the form

ΔF = ΔF 1 + ΔF 2 = N 1 kB T Δf (φ1β ) + N 2 kB T Δf (φ2β ) (2.72)

with
\[ N^j = N_\alpha^j + M N_\beta^j, \qquad \varphi_\beta^j = \frac{M N_\beta^j}{N^j}. \tag{2.73} \]

Fig. 2.12. Spinodal

The minimum free energy can be found from the condition that exchange of
solvent or solute molecules between the two phases does not change the free
energy, that is, the chemical potentials in the two phases are the same:
 
\[ 0 = \mathrm{d}\Delta F = \left( \frac{\partial \Delta F^1}{\partial N_\alpha^1} - \frac{\partial \Delta F^2}{\partial N_\alpha^2} \right) \mathrm{d}N_\alpha + \left( \frac{\partial \Delta F^1}{\partial N_\beta^1} - \frac{\partial \Delta F^2}{\partial N_\beta^2} \right) \mathrm{d}N_\beta, \tag{2.74} \]

0 = μ1α − μ2α = μ1β − μ2β (2.75)


or    
∂ 2 ∂
0 = Δf − 1
Δf 1
− Δf − φβ 2 Δf ,
φ1β 2 2
(2.76)
∂φ1β ∂φβ
   
∂ ∂
0 = Δf + (1 − φβ ) 1 Δf − Δf + (1 − φβ ) 2 Δf .
1 1 1 2 2 2
(2.77)
∂φβ ∂φβ
From the difference of these two equations, we find
\[ \frac{\partial \Delta f^1}{\partial \varphi_\beta^1} = \frac{\partial \Delta f^2}{\partial \varphi_\beta^2} \tag{2.78} \]

and hence the slope of the free energy has to be the same for both phases.
Inserting into the first equation then gives
\[ \Delta f^1 - \Delta f^2 = (\varphi_\beta^1 - \varphi_\beta^2)\, \Delta f', \tag{2.79} \]
which shows that the concentrations of the two phases can be found by the
well-known “common tangent” construction (Fig. 2.13).
These so-called binodal points give the border to the stable one-phase
region. Between spinodal and binodal the system is metastable. It is stable in
relation to small fluctuations since the curvature of the free energy is positive.
It is, however, unstable against larger-scale fluctuations (Fig. 2.14).

Fig. 2.13. Common tangent construction

Fig. 2.14. Phase diagram

Problems
2.1. Osmotic Pressure of a Polymer Solution
A polymer solution is in equilibrium with the pure solvent, μ_α(P, T) = μ_α^0(P', T).

Calculate the osmotic pressure Π = P − P' for the Flory–Huggins model of a polymer solution. The difference of the chemical potential of the solvent

\[ \mu_\alpha(P, T) - \mu_\alpha^0(P, T) = \left.\frac{\partial \Delta F_m}{\partial N_\alpha}\right|_{N_\beta, T} \]

can be obtained from the free energy change


\[ \Delta F_m = N k_B T \left( \varphi_\alpha \ln\varphi_\alpha + \frac{\varphi_\beta}{M} \ln\varphi_\beta + \chi \varphi_\alpha \varphi_\beta \right) \]

and the osmotic pressure is given by

\[ \mu_\alpha^0(P', T) - \mu_\alpha^0(P, T) = -\Pi\, \frac{\partial \mu_\alpha^0}{\partial P}. \]
Taylor series expansion gives the virial expansion of the osmotic pressure
as a series of powers of φβ . Truncate after the quadratic term and discuss
qualitatively the dependence on the interaction parameter χ (and hence also
on temperature).

2.2. Polymer Mixture


Consider a mixture of two types of polymers with chain lengths M 1 and
M 2 and calculate the mixing free energy change ΔFm . Determine the crit-
ical values φc and χc . Discuss the phase diagram for the symmetrical case
M1 = M2 .
Part II

Protein Electrostatics and Solvation


3
Implicit Continuum Solvent Models

Since an explicit treatment of all solvent atoms and ions is not possible in
many cases, the effect of the solvent on the protein has to be approximated
by implicit models which replace the large number of dynamic solvent modes
by a continuous medium [8–10].

3.1 Potential of Mean Force


In solution, a protein occupies a conformation γ with the probability given
by the Boltzmann factor

\[ P(X, Y) = \frac{e^{-U(X,Y)/k_B T}}{\int \mathrm{d}X\, \mathrm{d}Y\, e^{-U(X,Y)/k_B T}}, \tag{3.1} \]

where X stands for the coordinates of the protein (including the protonation
state) and Y stands for the coordinates of the solvent. The potential energy
can be formally split into three terms:

U (X, Y ) = Uprot (X) + Usolv (Y ) + Uprot,solv (X, Y ). (3.2)

The mean value of a physical quantity which depends only on the protein
coordinates Q(X) is
 
\[ \langle Q \rangle = \int \mathrm{d}X\, \mathrm{d}Y\, Q(X)\, P(X, Y) = \int \mathrm{d}X\, Q(X)\, P(X), \tag{3.3} \]

where we define a reduced probability distribution for the protein



\[ P(X) = \int \mathrm{d}Y\, P(X, Y), \tag{3.4} \]

which is represented by introducing the potential of mean force W (x) as



\[ P(X) = \frac{e^{-W(X)/k_B T}}{\int \mathrm{d}X\, e^{-W(X)/k_B T}} \tag{3.5} \]
with

\[ e^{-W(X)/k_B T} = e^{-U_{\mathrm{prot}}(X)/k_B T} \int \mathrm{d}Y\, e^{-(U_{\mathrm{solv}}(Y) + U_{\mathrm{prot,solv}}(X,Y))/k_B T} = e^{-(U_{\mathrm{prot}}(X) + \Delta W(X))/k_B T}. \tag{3.6} \]

ΔW accounts implicitly but exactly for the solvent's effect on the protein.

3.2 Dielectric Continuum Model


In the following, we discuss implicit solvent models, which treat the solvent
as a dielectric continuum. In response to the partial charges of the protein
qi , polarization of the medium produces an electrostatic reaction potential φR
(Fig. 3.1).
If the medium behaves linearly (no dielectric saturation), the reaction
potential is proportional to the charges:

\[ \varphi_i^R = \sum_j f_{ij} q_j. \tag{3.7} \]

Let us now switch on the charges adiabatically by introducing a factor

qi → qi λ, 0 < λ < 1. (3.8)

The change of the free energy is


\[ \mathrm{d}F = \sum_i \varphi_i q_i\, \mathrm{d}\lambda = \sum_{i \neq j} \frac{q_i q_j \lambda}{4\pi\varepsilon r_{ij}}\, \mathrm{d}\lambda + \sum_{ij} f_{ij} q_j q_i \lambda\, \mathrm{d}\lambda \tag{3.9} \]

and thermodynamic integration gives the change of free energy due to Coulom-
bic interactions:
\[ \Delta F_{\mathrm{elec}} = \int_0^1 \lambda\, \mathrm{d}\lambda \left( \sum_{i \neq j} \frac{q_i q_j}{4\pi\varepsilon r_{ij}} + \sum_{ij} f_{ij} q_j q_i \right) = \frac{1}{2} \sum_{i \neq j} \frac{q_i q_j}{4\pi\varepsilon r_{ij}} + \frac{1}{2} \sum_{ij} f_{ij} q_j q_i. \tag{3.10} \]

The first part is a property of the protein and hence included in Uprot . The
second part is the mean force potential:
\[ \Delta W_{\mathrm{elec}} = \frac{1}{2} \sum_{ij} f_{ij} q_i q_j. \tag{3.11} \]

Fig. 3.1. Charged, polar, and polarizable groups in proteins. Biological macro-
molecules contain chemical compounds with certain electrostatic properties. These
are often modeled using localized electric multipoles (partial charges, dipoles, etc.)
and polarizabilities

3.3 Born Model


The Born model [11] is a simple continuum model to calculate the solvation
free energy of an ion. Consider a point charge q in the center of a cavity with
radius a (the so-called Born radius of the ion). The dielectric constant is ε1
inside the sphere and ε2 outside (Fig. 3.2).

Fig. 3.2. Born model for ion solvation

The electrostatic potential is given outside the sphere by
\[ \Phi = \frac{q}{4\pi\varepsilon_2 r} \tag{3.12} \]
and inside by
\[ \Phi = \Phi_0 + \frac{q}{4\pi\varepsilon_1 r}. \tag{3.13} \]
From the continuity of the potential at the cavity surface, we find the reaction potential
\[ \Phi_0 = \frac{q}{4\pi a}\left( \frac{1}{\varepsilon_2} - \frac{1}{\varepsilon_1} \right). \tag{3.14} \]
Since there is only one charge, we have

\[ f_{1,1} = \frac{1}{4\pi a}\left( \frac{1}{\varepsilon_2} - \frac{1}{\varepsilon_1} \right) \tag{3.15} \]

and the solvation energy is given by the famous Born formula:



\[ \Delta W_{\mathrm{elec}} = \frac{q^2}{8\pi a}\left( \frac{1}{\varepsilon_2} - \frac{1}{\varepsilon_1} \right). \tag{3.16} \]
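As a rough numerical illustration of (3.16) in SI units (reading 1/(4πε) as 1/(4πε_0ε_r)), the sketch below estimates the solvation energy of a monovalent ion. The Born radius of 0.2 nm and the permittivities are assumed example values, not data from the text.

```python
import math

E0 = 8.8541878128e-12        # vacuum permittivity (F/m)
E_CHARGE = 1.602176634e-19   # elementary charge (C)

def born_solvation_energy(q, a, eps_r1, eps_r2):
    """Born solvation energy, Eq. (3.16), in joules.
    q: charge (C), a: Born radius (m), eps_r1/eps_r2: relative permittivities
    inside the cavity and of the surrounding medium."""
    return q**2 / (8.0 * math.pi * E0 * a) * (1.0 / eps_r2 - 1.0 / eps_r1)

# assumed example: a monovalent ion with Born radius 0.2 nm transferred
# from vacuum (eps_r = 1) into water (eps_r = 80)
dW = born_solvation_energy(E_CHARGE, 0.2e-9, 1.0, 80.0)
print(f"Born solvation energy: {dW:.3e} J = {dW / E_CHARGE:.2f} eV")
```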

3.4 Charges in a Protein


The continuum model can be applied to a more general charge distribution
in a protein. An important example is an ion pair within a protein (εr = 2)
surrounded by water (εr = 80). We study an idealized model where the protein
is represented by a sphere (Fig. 3.3).
We will first treat a single charge within the sphere. A system of charges
can then be treated by superposition of the individual contributions.
Using polar coordinates (r, θ, ϕ), the potential of a system with axial
symmetry (no dependence on ϕ)can be written with the help of Legendre
polynomials as
∞
φ= (An rn + Bn r−(n+1) )Pn (cos θ). (3.17)
n=0

Fig. 3.3. Ion pair in a protein

The general solution can be written as the sum of a special solution and a
harmonic function. The special solution is given by the multipole expansion:

q 1  s
n

q
= Pn (cos θ). (3.18)
4πε1 |r − s+ | 4πε1 r n=0 r

Since the potential has to be finite at large distances, outside it has the form


φ2 = Bn r−(n+1) Pn (cos θ) (3.19)
n=0

and inside the potential is given by



 qsn −(n+1)
φ1 = (An rn + r )Pn (cos θ). (3.20)
n=0
4πε1

At the boundary, we have two conditions


qsn −(n+1)
φ1 (R) = φ2 (R) → Bn R−(n+1) = An Rn + R , (3.21)
4πε1
∂ ∂
ε1 φ1 (R) = ε2 φ2 (R) = 0, (3.22)
∂r ∂r
ε2 qsn −(n+2)
→− (n + 1)Bn R−(n+2) = nAn Rn−1 − (n + 1) R , (3.23)
ε1 4πε1
from which the coefficients can be easily determined:
qsn −1−2n (ε1 − ε2 )(n + 1)
An = R ,
4πε1 nε1 + (n + 1)ε2
qsn (2n + 1)ε1
Bn = . (3.24)
4πε1 nε1 + (n + 1)ε2
The potential inside the sphere is
q
φ1 = + φR (3.25)
4πε1 |r − s+ |

with the reaction potential


∞
qsn −1−2n (ε1 − ε2 )(n + 1) n
φR = R r Pn (cos θ) (3.26)
n=0
4πε1 nε1 + (n + 1)ε2

and the electrostatic energy is given (without the infinite self-energy) by



1 q  qsn −1−2n (ε1 − ε2 )(n + 1) n
qφ(s, cos θ = 1) = R s , (3.27)
2 2 n=1 4πε1 nε1 + (n + 1)ε2

which for ε2
ε1 is approximately

q2 1 1 1  s
2n q2 1 1 1 1
− =− − . (3.28)
2 4πR ε2 ε1 R 2 4πR ε1 ε2 1 − s2 /R2

Consider now two charges ±q at symmetric positions ±s. The reaction


potentials of the two charges add up to

φR = φR R
+ + φ− (3.29)

and the electrostatic free energy is given by

−q 2 1 1 1 1
+ qφR (s) + qφR (s) + (−q)φR R
+ (−s) + (−q)φ− (−s). (3.30)
4πε1 (2s) 2 + 2 − 2 2

By comparison we find

q2 1 q2 1 1 1 1
f++ = qφR (s) = − − , (3.31)
2 2 +
2 4πR ε1 ε2 1 − s2 /R2

(−q)2 1 q2 1 1 1 1
f−− = (−q)φR − (−s) = − − , (3.32)
2 2 2 4πR ε1 ε2 1 − s2 /R2
(−q)q 1
f−+ = (−q)φR + (−s)
2 2

(−q)  qsn −1−2n (ε1 − ε2 )(n + 1)
= R (−s)n
2 n=1 4πε1 nε1 + (n + 1)ε2
∞ s
2n
−q 2 1 1 1 
≈ − (−)n
2 4πR ε2 ε1 n=0 R
2

(−q ) 1 1 1 1
=− − , (3.33)
2 4πR ε1 ε2 s /R2 + 1
2

q(−q) 1 (−q)q
f+− = qφR − (s) = f−+ , (3.34)
2 2 2

and finally the solvation energy is


∞
q 2 sn −1−2n (ε1 − ε2 )(n + 1) n
Welec = R (s − (−s)n ), (3.35)
n=1
4πε 1 nε 1 + (n + 1)ε 2


1 1 1 1 1 1 1 1
Welec ≈ −q 2
− − (−q )
2

4πR ε1 ε2 1 − s /R
2 2 4πR ε1 ε2 s /R2 + 1
2
2

(2qs) 1 1 1
=− − . (3.36)
8πR ε1 ε2 1 − s4 /R4

If the extension of the system of charges in the protein is small compared


to the radius s R, the multipole expansion of the reaction potential con-
verges rapidly. Since the total charge of the ion pair is zero, the monopole
contribution(n = 0)
(1) Q2 1 1
Welec = − (3.37)
8πR ε2 ε1
(which is nothing but the Born energy (3.16)) vanishes and the leading term
is the dipole contribution1 (n = 1)

(2) p2 (ε1 − ε2 )
Welec = . (3.38)
4πε1 R3 ε1 + 2ε2

3.5 Generalized Born Models


The generalized Born model approximates the protein as a sphere with a
dielectric constant different from that of the solvent [12, 13].
Still and coworkers [13] proposed an approximate expression for the
solvation free energy of an arbitrary distribution of N charges:

\[ W_{\mathrm{elec}} = \frac{1}{8\pi}\left( \frac{1}{\varepsilon_2} - \frac{1}{\varepsilon_1} \right) \sum_{i,j=1}^{N} \frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij}, a_{ij})} \tag{3.39} \]
with the smooth function
\[ f_{\mathrm{GB}} = \sqrt{ r_{ij}^2 + a_i a_j \exp\left( -\frac{r_{ij}^2}{2 a_i a_j} \right) }, \tag{3.40} \]

where the effective Born radius ai accounts for the effect of neighboring solute
atoms. Several methods have been developed to calculate appropriate values
of the ai [13–16].

1
This has been associated with several names (Bell, Onsager, and Kirkwood).

Expression (3.39) interpolates between the Born energy of a total charge Nq at short distances
\[ W_{\mathrm{elec}} \to \frac{1}{8\pi}\left( \frac{1}{\varepsilon_2} - \frac{1}{\varepsilon_1} \right) \frac{N^2 q^2}{a} \quad \text{for } r_{ij} \to 0 \tag{3.41} \]
and the sum of the individual Born energies plus the change in Coulombic energy
\[ W_{\mathrm{elec}} \to \frac{1}{8\pi}\left( \frac{1}{\varepsilon_2} - \frac{1}{\varepsilon_1} \right) \left( \sum_i \frac{q_i^2}{a_i} + \sum_{i \neq j} \frac{q_i q_j}{r_{ij}} \right) \quad \text{for } r_{ij} \gg \sqrt{a_i a_j} \tag{3.42} \]

for a set of well-separated charges. It gives a reasonable description in many


cases without the computational needs of a full electrostatics calculation.
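A minimal sketch of how (3.39) and (3.40) can be evaluated for a given set of charges is shown below. The coordinates, charges, and effective Born radii are made-up example values, the prefactor follows the units of the text, and published generalized Born variants differ in the numerical factor inside the exponent of f_GB.

```python
import numpy as np

def f_gb(r, a_i, a_j):
    """Smooth interpolating function as written in Eq. (3.40).
    (Other generalized Born variants use a different factor in the exponent.)"""
    return np.sqrt(r**2 + a_i * a_j * np.exp(-r**2 / (2.0 * a_i * a_j)))

def gb_solvation_energy(coords, charges, born_radii, eps1, eps2):
    """Generalized Born solvation energy, Eq. (3.39); the i = j terms
    reduce to the individual Born energies q_i^2 / a_i."""
    w = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            r = np.linalg.norm(coords[i] - coords[j])
            w += charges[i] * charges[j] / f_gb(r, born_radii[i], born_radii[j])
    return (1.0 / (8.0 * np.pi)) * (1.0 / eps2 - 1.0 / eps1) * w

# illustrative two-charge example (all numbers are assumptions, not from the text)
coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
charges = np.array([+1.0, -1.0])
born_radii = np.array([1.5, 1.5])
print(gb_solvation_energy(coords, charges, born_radii, eps1=2.0, eps2=80.0))
```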
4
Debye–Hückel Theory

Mobile charges like Na+ or Cl− ions are very important for the functioning of biomolecules. In a stationary state, their concentration depends on the local electrostatic field which is produced by the charge distribution of the biomolecule, which in turn depends for instance on the protonation state and on the conformation. In this chapter, we present continuum models to describe the interaction between a biomolecule and surrounding mobile charges [17–20].

4.1 Electrostatic Shielding by Mobile Charges


We consider a fully dissociated (strong) electrolyte containing Ni mobile ions
of the sort i = 1 . . . with charges Zi e per unit volume. The charge density of
the mobile charges is given by the average numbers of ions per volume:

mob (r) = Zi eN i (r). (4.1)
i

The electrostatic potential is given by the Poisson equation:

εΔφ(r) = −(r) = −mob (r) − fix (r). (4.2)

Debye and Hückel [21] used Boltzmann’s theorem to determine the mobile
charge density. Without the presence of fixed charges, the system is neutral:

0 = 0mob = Zi eNi0 (4.3)
i

and the constant value of the potential can be chosen as zero:

φ0 = 0. (4.4)

The fixed charges produce a change of the potential. The electrostatic energy
of an ion of sort i is

Wi = Zi eφ(r) (4.5)
and the density of such ions is given by a Boltzmann distribution:

Ni (r) e−Zi eφ(r)/kB T


= (4.6)
Ni0 e−Zi eφ0 /kB T
or
Ni (r) = Ni0 e−Zi eφ(r)/kB T . (4.7)
The total mobile charge density is
\[ \rho_{\mathrm{mob}}(r) = \sum_i Z_i e N_i^0\, e^{-Z_i e\varphi(r)/k_B T} \tag{4.8} \]
and we obtain the Poisson–Boltzmann equation:
\[ \varepsilon \Delta\varphi(r) = -\sum_i Z_i e N_i^0\, e^{-Z_i e\varphi(r)/k_B T} - \rho_{\mathrm{fix}}(r). \tag{4.9} \]

If the solution is very dilute, we can expect that the ion–ion interaction is much smaller than the thermal energy:
\[ Z_i e\varphi \ll k_B T \tag{4.10} \]
and linearize the Poisson–Boltzmann equation:
\[ \varepsilon \Delta\varphi(r) = -\rho_{\mathrm{fix}}(r) - \sum_i Z_i e N_i^0 \left( 1 - \frac{Z_i e}{k_B T}\varphi(r) + \cdots \right). \tag{4.11} \]
The first summand vanishes due to electroneutrality and we find finally
\[ \Delta\varphi(r) - \kappa^2 \varphi(r) = -\frac{1}{\varepsilon}\rho_{\mathrm{fix}}(r) \tag{4.12} \]
with the inverse Debye length
\[ \lambda_{\mathrm{Debye}}^{-1} = \kappa = \sqrt{ \frac{e^2}{\varepsilon k_B T} \sum_i N_i^0 Z_i^2 }. \tag{4.13} \]
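For orientation, the Debye length (4.13) can be evaluated in SI units (ε = ε_rε_0). The following sketch uses assumed illustrative values (roughly physiological saline, 150 mM NaCl at room temperature); neither the numbers nor the function name come from the text.

```python
import math

E0 = 8.8541878128e-12       # vacuum permittivity (F/m)
KB = 1.380649e-23           # Boltzmann constant (J/K)
E_CHARGE = 1.602176634e-19  # elementary charge (C)
NA = 6.02214076e23          # Avogadro constant (1/mol)

def debye_length(molar_concentrations, charge_numbers, eps_r=78.5, T=298.15):
    """Debye screening length of Eq. (4.13) in meters.
    molar_concentrations: bulk concentrations N_i^0 in mol/l,
    charge_numbers: corresponding Z_i."""
    ionic_sum = sum(c * 1000.0 * NA * z**2          # convert mol/l -> 1/m^3
                    for c, z in zip(molar_concentrations, charge_numbers))
    kappa2 = E_CHARGE**2 * ionic_sum / (eps_r * E0 * KB * T)
    return 1.0 / math.sqrt(kappa2)

print(f"Debye length: {debye_length([0.15, 0.15], [+1, -1]) * 1e9:.2f} nm")
```

For the assumed 150 mM 1–1 electrolyte this gives roughly 0.8 nm, the familiar order of magnitude for screening in physiological solutions.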

4.2 1–1 Electrolytes


If there are only two types of ions with charges Z1,2 = ±1 (also in semicon-
ductor physics), the Poisson–Boltzmann equation can be written as

e e2 e
Δφ(r) + N 0 (e−eφ(r)/kB T − eeφ(r)/kB T ) = − fix (r) (4.14)
kB T εkB T εkB T

which, after substitution


 = e
φ(r) φ(r), (4.15)
kB T
takes the form
 − κ2 sinh(φ(r))
 e
Δφ(r) =− fix (r) (4.16)
εkB T
 = f (r ) = f (κr)
and with the scaled radius vector r = κr → φ(r)
e
Δf (r ) − sinh(f (r )) = − fix (κ−1 r ). (4.17)
κ2 εkB T

4.3 Charged Sphere


We consider a spherical protein (radius R) with a charged sphere (radius a)
in its center (Fig. 4.1).
For a spherical problem, we only have to consider the radial part of the
Laplacian and the linearized Poisson–Boltzmann equation becomes outside
the protein:
1 d2
(rφ(r)) − κ2 φ(r) = 0, (4.18)
r dr2
which has the solution
c1 e−κr + c2 eκr
φ2 (r) = . (4.19)
r
Since the potential should vanish at large distances, we have c2 = 0. Inside
the protein (a < r < R), solution of the Poisson equation gives
Q
φ1 (r) = c3 + . (4.20)
4πε1 r
At the boundary, we have the conditions

c1 e−κR Q
φ1 (R) = φ2 (R) → c3 = − , (4.21)
R 4πε1 R
−κR
∂ ∂ Q e e−κR
ε1 φ1 (R) = ε2 φ2 (R) → − = c 1 ε 2 − − κ , (4.22)
∂r ∂r 4πR2 R2 R




Fig. 4.2. Charged sphere in an electrolyte. The potential φ(r) is shown for a = 0.1,
R = 1.0

which determine the constants:


\[ c_1 = \frac{Q e^{\kappa R}}{4\pi\varepsilon_2 (1 + \kappa R)} \tag{4.23} \]
and
\[ c_3 = -\frac{Q}{4\pi\varepsilon_1 R} + \frac{Q}{4\pi\varepsilon_2 R (1 + \kappa R)}. \tag{4.24} \]
Together, we find the potential inside the sphere
\[ \varphi_1(r) = \frac{Q}{4\pi\varepsilon_1}\left( \frac{1}{r} - \frac{1}{R} \right) + \frac{Q}{4\pi\varepsilon_2 R (1 + \kappa R)} \tag{4.25} \]
and outside (Fig. 4.2)
\[ \varphi_2(r) = \frac{Q}{4\pi\varepsilon_2 (1 + \kappa R)}\, \frac{e^{-\kappa(r-R)}}{r}. \tag{4.26} \]

The ion charge density is given by

mob (r) = ε2 Δφ2 = ε2 κ2 φ2 ; (4.27)

hence, the ion charge at distances between r and r + dr is given by


Q
κ re−κ(r−R) dr. (4.28)
(1 + κR)

This function has a maximum at rmax = 1/κ and decays exponentially at


larger distances (Fig. 4.3).

Fig. 4.3. Charge density around the charged sphere for a = 0.1, R = 1.0

Let the charge be concentrated on the surface of the inner sphere. Then we have
\[ \varphi_1(a) = \frac{Q}{4\pi\varepsilon_1}\left( \frac{1}{a} - \frac{1}{R} \right) + \frac{Q}{4\pi\varepsilon_2 R (1 + \kappa R)}. \tag{4.29} \]
Without the medium (ε₂ = ε₁, κ = 0), the potential would be
\[ \varphi_1^0(a) = \frac{Q}{4\pi\varepsilon_1 a}. \tag{4.30} \]
Hence, the solvation energy is
\[ W = \frac{1}{2} Q \varphi^R = \frac{1}{2} Q\left( \varphi_1(a) - \varphi_1^0(a) \right) = \frac{Q^2}{8\pi R}\left( \frac{1}{\varepsilon_2 (1 + \kappa R)} - \frac{1}{\varepsilon_1} \right), \tag{4.31} \]
which for κ = 0 is given by the well-known Born formula:
\[ W = -\frac{Q^2}{8\pi R}\left( \frac{1}{\varepsilon_1} - \frac{1}{\varepsilon_2} \right). \tag{4.32} \]
For a → R, ε₁ = ε₂ this becomes the solvation energy of an ion in solution:
\[ \Delta G_{\mathrm{sol}} = W = -\frac{Q^2}{8\pi\varepsilon}\, \frac{\kappa}{1 + \kappa R}. \tag{4.33} \]
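The closed-form results (4.25), (4.26), and (4.31) are easy to evaluate numerically. The sketch below does so in the reduced units of the figure; R = 1.0 matches Fig. 4.2, while the dielectric constants and κ are assumed illustrative values.

```python
import numpy as np

def potential(r, Q, R, eps1, eps2, kappa):
    """Potential of a charge Q at the center of a protein sphere of radius R
    (dielectric constant eps1) in an electrolyte (eps2, inverse Debye length
    kappa), Eqs. (4.25) and (4.26)."""
    inside = Q / (4 * np.pi * eps1) * (1 / r - 1 / R) \
             + Q / (4 * np.pi * eps2 * R * (1 + kappa * R))
    outside = Q / (4 * np.pi * eps2 * (1 + kappa * R)) * np.exp(-kappa * (r - R)) / r
    return np.where(r < R, inside, outside)

def solvation_energy(Q, R, eps1, eps2, kappa):
    """Solvation energy of Eq. (4.31)."""
    return Q**2 / (8 * np.pi * R) * (1 / (eps2 * (1 + kappa * R)) - 1 / eps1)

r = np.array([0.5, 1.0, 2.0, 4.0])
print(potential(r, Q=1.0, R=1.0, eps1=2.0, eps2=80.0, kappa=1.0))
print(solvation_energy(Q=1.0, R=1.0, eps1=2.0, eps2=80.0, kappa=1.0))
```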

4.4 Charged Cylinder


Next, we discuss a cylinder of radius a and length l a carrying the net
charge N e uniformly distributed on its surface:
Ne
σ= . (4.34)
2πal

Fig. 4.4. Charged cylinder model

Outside the cylinder, this charge distribution is equivalent to a linear distribu-


tion of charges along the axis of the cylinder with a one-dimensional density
(Fig. 4.4):
Ne e
= 2πaσ = . (4.35)
l b

For the general case of a charged cylinder in an ionic solution, we have to


restrict the discussion to the linearized Poisson–Boltzmann equation which
becomes outside the cylinder

1 d d
r φ(r) = κ2 φ(r). (4.36)
r dr dr

Substitution r → x = κr gives the equation

d2 1 d
φ(x) + φ(x) − φ(x) = 0. (4.37)
dx2 x dx
Solutions of this equation are the modified Bessel functions of order zero
denoted as I0 (x) and K0 (x). For large values of x,

lim I0 (x) = ∞, lim K0 (x) = 0, (4.38)


x→∞ x→∞

and hence the potential in the outer region has the form

φ(r) = C1 K0 (κr). (4.39)

Inside the cylinder surface, the electric field is given by Gauss’ theorem:

2πrlε1 E(r) = N e, (4.40)

dφ(r) Ne
E(r) = − = , (4.41)
dr 2πε1 rl
and hence the potential inside is
Ne
φ(r) = C2 − ln r. (4.42)
2πε1 l

The boundary conditions are


Ne
φ(a) = C1 K0 (κa) = C2 − ln a, (4.43)
2πε1 l
dφ(a) Ne dφ(a)
ε1 =− = ε2 = ε2 C1 (−κK1 (κa)), (4.44)
dr 2πal dr
from which we find
Ne
C1 = (4.45)
2πalε2 κK1 (κa)
and
N e K0 (κa) Ne
C2 = + ln a. (4.46)
2πalε2 κ K1 (κa) 2πε1 l
The potential is then outside
N e K0 (κr)
φ(r) = (4.47)
2πalε2 κ K1 (κa)
and inside
N e K0 (κa) Ne Ne
φ(r) = + ln a − ln r. (4.48)
2πalε2 κ K1 (κa) 2πε1 l 2πε1 l
For small κa → 0, we use the asymptotic behavior of the Bessel functions
2
K0 (x) → ln − γ + ··· , γ = 0.577 · · · ,
x
1
K1 (x) → + ··· (4.49)
x
to have approximately
Ne Ne
C1 ≈ κa = ,
2πalε2 κ 2πlε2

Ne 2 Ne
C2 ≈ ln −γ + ln a. (4.50)
2πlε2 κa 2πε1 l
The potential outside is
Ne κ

φ(r) = − γ + ln + ln r (4.51)
2πlε2 2
and inside is

Ne 2 Ne Ne
φ(r) = ln −γ + ln a − ln r
2πlε2 κa 2πε1 l 2πε1 l

Ne γ 1 κ 1 1 1
= − − ln + − ln a − ln r . (4.52)
2πl ε2 ε2 2 ε1 ε2 ε1
Outside the potential consists of the potential of the charged line (ln r) and
an additional contribution from the screening of the ions (Fig. 4.5).

Fig. 4.5. Potential of a charged cylinder with unit radius a = 1 for κ = 0.05. Solid
curve: K0 (κr)/(κK1 (κ)). Dashed curve: approximation by − ln(κ/2) − ln r − γ

Fig. 4.6. Goüy–Chapman double layer

4.5 Charged Membrane (Goüy–Chapman Double Layer)


We approximate the surface charge of a membrane by a thin layer charged
with a homogeneous charge distribution (Fig. 4.6).
Goüy [22, 23] and Chapman [24] derived the potential similar to Debye–
Hückel theory. For a 1–1 electrolyte (e.g., NaCl), the one-dimensional Poisson–
Boltzmann equation has the form (with transformed variables as above)
d2
f (x) − sinh(f (x)) = g(x), (4.53)
dx2
where the source term g(x) = −(e/κ2 εkB T )(x/κ) has the character of a
δ-function centered at x = 0. Consider an area A of the membrane and
integrate along the x-axis:
  +0
dA (x)dx = σ0 A, (4.54)
−0
  +0   +0
e
dA dxg(x) = − 2
dA κdx (x )
−0 κ εkB T −0
e
=− σ0 A. (4.55)
κεkB T
Hence we identify
eσ0
(x) = σ0 δ(x), g(x) = − δ(x). (4.56)
κεkB T
The Poisson–Boltzmann equation can be solved analytically in this simple
case. But first, we study the linearized homogeneous equation:

d2
f (x) − f (x) = 0 (4.57)
dx2
with the solution
f (x) = f0 e±x (4.58)
or going back to the potential
kB T
φ(x) = f0 e±κx = φ0 e±κx . (4.59)
e
The membrane potential is related to the surface charge density. Let us assume
that on the left side (x < 0), the medium has a dielectric constant of ε1 and
on the right side ε. Since in one dimension the field in a dielectric medium
does not decay, we introduce a shielding constant κ1 on the left side and take
the limit κ1 → 0 to remove contributions not related to the membrane charge.
The potential then is given by

φ0 e−κx x > 0,
φ(x) = (4.60)
φ0 eκ1 x x < 0.

The membrane potential φ0 is determined from the boundary condition:


dφ dφ
ε (+0) − ε1 (−0) = −σ0 , (4.61)
dx dx
which gives
− εκφ0 − ε1 κ1 φ0 = −σ0 . (4.62)
In the limit κ1 → 0, we find
σ0
φ0 = . (4.63)
εκ
For x < 0 the potential is constant and for x > 0 the charge density is
(Figs. 4.7 and 4.8)
d2 φ(x)
(x) = −ε = −σ0 κe−κx , (4.64)
dx2

Fig. 4.7. Electrolytic double layer. (a) The potential is continuous and decreases
exponentially in the electrolyte. (b) The charge density has a sharp peak on the
membrane which is compensated by the exponentially decreasing charge in the
electrolyte

Fig. 4.8. Charge density of counter- and co-ions. For an exponentially decaying
potential Φ(r) (full curve), the density of counter- and co-ions (dashed curves) and
the total charge density (dash-dotted curve) are shown

which adds up to a total net charge per unit area of


 ∞
(x)dx = −σ0 . (4.65)
0

Hence, the system is neutral and behaves like a capacity of


σ0 A A
= εκA = ε . (4.66)
φ0 LDebye

The solution of the nonlinear homogeneous equation can be found multiplying


the equation with df (x)/dx:
df d2 f df
2
= sinh(f ) (4.67)
dx dx dx

and rewriting this as


2
1 d df d
= cosh(f ), (4.68)
2 dx dx dx

which can be integrated


2
df
= 2 [cosh(f ) + C] . (4.69)
dx

The constant C is determined by the asymptotic behavior1


df
lim f (x) = lim =0 (4.70)
x→∞ x→∞ dx
and obviously has the value C = −1. Making use of the relation
2
f
cosh(f ) − 1 = 2 sinh , (4.71)
2

we find
d f (x)
f (x) = ±2 sinh . (4.72)
dx 2
Separation of variables then gives
df
= ±dx (4.73)
2 sinh(f /2)

with the solution


x C
f (x) = 2 ln ± tanh + . (4.74)
2 2
For x > 0, only the plus sign gives a physically meaningful expression. The
constant C is generally complex-valued. It can be related to the potential at
the membrane surface:

f (0)/2 1 + ef (0)/2
C = 2 arctanh(e ) = ln . (4.75)
1 − ef (0)/2

For f (0) > 0, the argument of the logarithm becomes negative. Hence, we
replace in (4.74) C by C + iπ to have (Fig. 4.9)

x C iπ x C
f (x) = 2 ln tanh + + = −2 ln tanh + , (4.76)
2 2 2 2 2

1
We consider only solutions for the region x > 0 in the following.

Fig. 4.9. One-dimensional Poisson–Boltzmann equation. The solutions of the full


(full curves) and the linearized equation (broken curves) are compared for (a) B =
−1 and (b) B = −5

where C is now given by



−f (0)/2 1 + e−f (0)/2
C = 2 arctanh(e ) = ln . (4.77)
1 − e−f (0)/2
The integration constant C is again connected to the surface charge density
by
dφ σo
(0) = − (4.78)
dx ε
and from
d kB T d kB T 
φ(x) = f (κx) = κf (κx), (4.79)
dx e dx e
we find
σ0 kB T 
=− κf (0). (4.80)
ε e

Now the derivative is in the case f (0) > 0 given by



 x C 1
f (x) = tanh + − &x C
' (4.81)
2 2 tanh 2 + 2

and especially
 C 1
f (0) = tanh − (4.82)
2 tanh(C/2)
and we have to solve the equation
1 eσ0
t− =− = B, (4.83)
t kB T κε

which yields2 √
B B2 + 4
t= + , (4.84)
2 2
 √ 
B B2 + 4
C = 2arctanh + . (4.85)
2 2

4.6 Stern Modification of the Double Layer


Real ions have a finite radius R. Therefore, they cannot approach the mem-
brane closer than R and the ion density has a maximum possible value, which
is reached when the membrane is occupied by an ion layer. To account for
this, Stern [25] extended the Goüy–Chapman model by an additional ion
layer between the membrane and the diffusive ion layer (Fig. 4.10).

Fig. 4.10. Stern modification of the double layer

2
The second root leads to imaginary values.

Within the Stern layer of thickness d, there are no charges and the potential
drops linearly from the membrane potential φM to a value φ0 :
φM − φ0
φ(x) = φM − x, 0 < x < d. (4.86)
d
In the diffusive Goüy–Chapman layer, the potential decays exponentially

φ(x) = φ0 e−κ(x−d) . (4.87)

Assuming the same dielectric constant for both layers, we have to fulfill the
boundary condition
dφ φM − φ0
(d) = − = −κφ0 (4.88)
dx d
and hence
φM
φ0 = . (4.89)
1 + κd
The total ion charge in the diffusive layer now is
\[ q_{\mathrm{diff}} = \int_d^\infty A\,\rho(x)\,\mathrm{d}x = -A\varepsilon\kappa^2 \varphi_0 \int_d^\infty e^{-\kappa(x-d)}\,\mathrm{d}x = -A\varepsilon\kappa\varphi_0 = -A\,\frac{\varepsilon\kappa}{1 + \kappa d}\,\varphi_M \tag{4.90} \]
and the capacity is
\[ C = \frac{-q_{\mathrm{diff}}}{\varphi_M} = \frac{A\varepsilon}{d + \kappa^{-1}}, \tag{4.91} \]
which is just the capacity of the two layers in series:
\[ \frac{1}{C} = \frac{d}{A\varepsilon} + \frac{\lambda_{\mathrm{Debye}}}{A\varepsilon} = \frac{1}{C_{\mathrm{Stern}}} + \frac{1}{C_{\mathrm{diff}}}. \tag{4.92} \]
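Equations (4.91) and (4.92) amount to two capacitors in series. A small sketch with assumed illustrative SI numbers (not taken from the text):

```python
EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)

def double_layer_capacitance(area, eps, d, debye_length):
    """Capacitance of the Stern-modified double layer, Eqs. (4.91)/(4.92):
    the Stern layer and the diffuse Gouy-Chapman layer act in series."""
    c_stern = eps * area / d
    c_diff = eps * area / debye_length
    c_total = 1.0 / (1.0 / c_stern + 1.0 / c_diff)
    return c_stern, c_diff, c_total

# assumed values: eps = 78.5*eps0, Stern layer 0.3 nm, Debye length 0.8 nm, area 1 um^2
c_s, c_d, c_t = double_layer_capacitance(1e-12, 78.5 * EPS0, 0.3e-9, 0.8e-9)
print(f"C_Stern = {c_s:.3e} F, C_diff = {c_d:.3e} F, C_total = {c_t:.3e} F")
```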

Problems
4.1. Membrane Potential
Consider a dielectric membrane in an electrolyte with an applied voltage V. The membrane (region II, dielectric constant ε_m) extends from x = 0 to x = L and is surrounded on both sides by the electrolyte (regions I and III, dielectric constant ε_w, inverse Debye length κ).

Solve the linearized Poisson–Boltzmann equation

d2
φ(x) = κ2 (φ(x) − φ(0) )
dx2
with boundary conditions

φ(0) (I) = φ(−∞) = 0,


φ(0) (III) = φ(∞) = V,

and determine the voltage difference

ΔV = φ(L) − φ(0).

Calculate the charge density mob (x) and the integrated charge on both sides
of the membrane. What is the capacity of the membrane?

4.2. Ionic Activity


The chemical potential of an ion with charge Ze is given in terms of the
activity a by μ = μ0 + kB T ln a. Assume that the deviation from the ideal
behavior μideal = μ0 + kB T ln c is due to electrostatic interactions only. Then
for an ion with radius R, Debye–Hückel theory gives

Z 2 e2 κ
μ − μideal = kB T ln a − kB T ln c = Δ(μα − μ0α )Gsolv = − .
8πε (1 + κR)

For a 1–1 electrolyte, calculate the mean activity coefficient



c
 c c a+ a−
γ± = γ+ γ− =
c+ c−

and discuss the limit of extremely dilute solutions (Debye–Hückel limiting


law).
5
Protonation Equilibria

Some of the amino acid residues building a protein can be in different


protonation states and therefore differently charged states (Fig. 5.1).

Fig. 5.1. Functional groups which can be in different protonation states (aspartic acid, glutamic acid, arginine, lysine, histidine)

In this chapter, we discuss the dependence of the free energy of a protein on


the electrostatic interactions of its charged residues. We investigate the chem-
ical equilibrium between a large number of different protein conformations
and the dependence on the pH value [26].

5.1 Protonation Equilibria in Solution


We consider a dilute aqueous solution containing N titrable molecules
(Fig. 5.2) (i.e., which can be in two different protonation states
0 = deprotonated, 1 = protonated). For one molecule, we have for fixed
protonation state
G0 = F0 + pV0 = −kB T ln z0 + pV0 , (5.1)
G1 = F1 + pV1 = −kB T ln z1 + pV1 , (5.2)

Fig. 5.2. Titration in solution

and the free enthalpy difference between protonated and deprotonated form
of the molecule is
z1
G1 − G0 = −kB T ln + pΔV. (5.3)
z0

In the following, the volume change will be neglected.1 If we now put such
N molecules into the solution, the number of protonated molecules can fluc-
tuate by exchanging protons with the solvent. Removal of one proton from
the solvent costs a free enthalpy of

ΔG = −μH+ , (5.4)

where μH+ is the chemical potential of the proton in solution.2 Hence, we


come to the grand canonical partition function

\[ \Xi = \sum_M Z_M\, e^{\mu M/k_B T}, \tag{5.5} \]
where the partition function for a fixed number M of protonated molecules is given by
\[ Z_M = \frac{N!}{M!(N-M)!}\, z_1^M z_0^{N-M} \tag{5.6} \]
if the N molecules cannot be distinguished. Hence we find
\[ \Xi = \sum_M \frac{N!}{M!(N-M)!} \left(z_1 e^{\mu/k_B T}\right)^M z_0^{N-M} = \left(z_0 + z_1 e^{\mu/k_B T}\right)^N. \tag{5.7} \]

1
At atmospheric pressure, the mechanic work term pΔV is very small. We pre-
fer to discuss the free enthalpy G in the following since experimentally usually
temperature and pressure are constant.
2
We omit the index H+ in the following.

Fig. 5.3. Henderson–Hasselbalch titration curve. The protonation degree (5.9) is shown for k_B T = 1, 2, 5

The average number of protonated molecules can be found from
\[ \overline{M} = \frac{\partial}{\partial(\mu/k_B T)} \ln\Xi = N\, \frac{z_1 e^{\mu/k_B T}}{z_0 + z_1 e^{\mu/k_B T}} \tag{5.8} \]
and the protonation degree, that is, the fraction of protonated molecules, is (Fig. 5.3)
\[ \frac{\overline{M}}{N} = \frac{1}{1 + (z_0/z_1)\, e^{-\mu/k_B T}} = \frac{1}{1 + e^{(G_1 - G_0 - \mu)/k_B T}}. \tag{5.9} \]

In physical chemistry [27, 28], the following quantities are usually introduced:
• The activity of a species ai is defined by

μi = μ0i + kB T ln ai (5.10)

in analogy to μi = μ0i + kB T ln pi /p0i for the ideal gas. For very dilute
solutions, it can be approximated by
ci
μi = kB T ln Ni − kB T ln zi = μ0i + kB T ln , (5.11)
c0
where (0) indicates the standard state (usually p0 = 1 atm, c0 = 1 mol/l).
• The concentration of protons is measured by the pH value3

μH+ c(H+ )
pH = − log10 aH+ = − ≈ − log10 . (5.12)
kB T ln(10) c0
• The standard reaction enthalpy of an acid–base equilibrium

AH  A− + H+ , (5.13)
3
The standard enthalpy of formation for a proton is zero per definition.

where  
0= νi μi = ΔG0r + kB T νi ln ai , (5.14)

a(A− )a(H+ )
Ka = e−ΔGr /kB T =
0
, (5.15)
a(AH)
is measured by the pKa value:

\[ \mathrm{p}K_a = -\log_{10}(K_a) = \frac{1}{\ln(10)}\, \frac{\Delta G_r^0}{k_B T} = -\log_{10}\frac{a(A^-)\,a(H^+)}{a(AH)} \approx -\log_{10}\left( \frac{\frac{c(A^-)}{c_0}\,\frac{c(H^+)}{c_0}}{\frac{c(AH)}{c_0}} \right), \tag{5.16} \]
which is usually simply written as4
\[ \mathrm{p}K_a = -\log_{10}\frac{c(A^-)\,c(H^+)}{c(AH)}, \tag{5.17} \]
which together with (5.12) gives the Henderson–Hasselbalch equation:
\[ \mathrm{pH} - \mathrm{p}K_a = \log_{10}\frac{c(A^-)}{c(AH)}. \tag{5.18} \]

The standard reaction enthalpy of the acid–base equilibrium is (with the


approximation (5.11))

ΔG0r = μ0HA + μ0 − + μ0H+


M
= −(kB T ln L − kB T ln z1 ) + (kB T ln L − kB T ln z0 )
z0
= −kB T ln = −(G1 − G0 ). (5.19)
z1
In the language of physical chemistry, the protonation degree (5.9) is
given by
\[ \frac{c(AH)}{c(AH) + c(A^-)} = \frac{1}{1 + 10^{(\mathrm{pH} - \mathrm{p}K_a)}}. \tag{5.20} \]
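Equation (5.20) gives the familiar sigmoidal titration curve. A minimal sketch (the pK_a of 4 is an assumed example value, roughly that of a carboxylic acid):

```python
import numpy as np

def protonation_degree(pH, pKa):
    """Fraction of protonated molecules according to Eq. (5.20)."""
    return 1.0 / (1.0 + 10.0**(pH - pKa))

for pH in np.arange(2.0, 7.0, 1.0):
    print(f"pH = {pH:.1f}: protonated fraction = {protonation_degree(pH, 4.0):.3f}")
```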

5.2 Protonation Equilibria in Proteins


In the protein, there are additional steric and electrostatic interactions with
other groups of the protein, which contribute to the energies of the titrable
site (Fig. 5.4).

4
But you have to be aware that all concentrations have to be taken in units of the
standard concentration c0 . The argument of a logarithm should be dimensionless.

Fig. 5.4. Titration in a protein. Electrostatic interactions with fixed charges


(charged residues and background charges) and mobile charges (ions) have to be
taken into account

Fig. 5.5. Thermodynamic cycle. The protonation enthalpy of a titrable group in the protein (PMH) differs from that of a model compound in solution (RMH) [29]

5.2.1 Apparent pKa Values

The pKa of a titrable group depends on the interaction with background


charges, with all the other residues and with the solvent which contains dipolar
molecules and free moving ions. As a consequence, the pKa value is different
from that of a model compound containing the titrable group in solution
(Fig. 5.5).
The difference of protonation enthalpies

ΔΔG = ΔGprot − ΔGsolv = ΔG0 − ΔG1 (5.21)

can be divided into three parts:

ΔΔG = ΔΔE + pΔΔV − T ΔΔS. (5.22)



In the following, the volume change will be neglected. The pKa value of a
group in the protein
1 1
pKaprot = ΔGprot = (ΔGsolv + ΔΔG)
kB T ln(10) kB T ln(10)
ΔΔG
= pKamodel + (5.23)
kB T ln(10)

is called the apparent pKa of the group in the protein. It depends on


the mutual interactions of the titrable groups and hence on the pH value.
Therefore, titration of groups in a protein cannot be described by a simple
Henderson–Hasselbalch equation.

5.2.2 Protonation Enthalpy

The amino acids forming a protein can be enumerated by their appearance


in the primary structure and are therefore distinguishable. The protonation
state of a protein with N titrable sites will be described by the protonation
vector:

1 if group i is protonated,
s = (s1 , s2 , . . . , sS ) with si = (5.24)
0 if group i is deprotonated.

The number of protonation states

Npstates = 2N

can be very big for real proteins.5 Proteins are very flexible and have a
large number of configurations. These will be denoted by the symbol γ which
summarizes all the orientational angles between neighboring residues of the
protein. The apparent pKa values will in general depend on this configura-
tion vector, since for instance distances between the residues depend on the
configuration.
The enthalpy change by protonating the ith residue is denoted as

G(s1 , . . . , si−1 , 1i , si+1 , . . . , sN , γ) − G(s1 , . . . , si−1 , 0i , si+1 , . . . , γ)


 1,s 0,s
= ΔGi,intr + (Ei,j j − Ei,j j ) (5.25)
j=i

with the Coulombic interaction between two residues i, j in the protonation


states si , sj (Fig. 5.6):
s ,s
Ei,ji j (γ). (5.26)

5
We do not take into account that some residues can be in more than two
protonation states.

Fig. 5.6. Protonation of one residue. The protonation state of the ith residue
changes together with its charge. Electrostatic interactions are divided into an
intrinsic part and the interactions with all other residues

The so-called intrinsic protonation energy ΔGi,intr is the protonation


energy of residue i if there are no Coulombic interactions with other residues,
that is, if all other residues are in their neutral state. It can be estimated
from the model energy (5.3) and (5.19), taking into account all remaining
interactions with background charges6 and the different solvation energies,
which can be calculated using Born models (3.16) and (3.39) or by solving
the Poisson–Boltzmann equation (4.9) and (4.12):

ΔGi,intr ≈ ΔGi,solv + ΔEi,bg + ΔEi,Born . (5.27)

Let us now calculate the protonation enthalpy of a protein with S of its N


titrable residues protonated. The contributions from the intrinsic enthalpy
changes can be written as 
si ΔGi,intr . (5.28)
i

The Coulombic interactions are divided into pairs of protonated residues


 1,1
si sj Ei,j , (5.29)
i<j

pairs of unprotonated residues


 0,0
 0,0  0,0
 0,0
(1 − si )(1 − sj )Ei,j = Ei,j + si sj Ei,j − si Ei,j , (5.30)
i<j i<j i<j i=j

and interactions between one protonated and one unprotonated residue


 1,0 0,1


1,0 0,1

si (1 − sj )Ei,j + (1 − si )sj Ei,j =− si sj Ei,j + Ei,j


i<j i<j
 1,0
+ si Ei,j . (5.31)
i=j

6
That is, the charge distribution of the protein backbone and the nontitrable
residues.

Fig. 5.7. Protonation states. Protonation state s_i, protonation of the neutral state s_i^0, charge q_i, and charge of the non-neutral state q_i^c are correlated:

                   AH    A− + H+    BH+    B + H+
  s                 1        0        1        0
  s0                1        1        0        0
  q  = s − s0       0       −1       +1        0
  qc = 1 − 2s0     −1       −1        1        1

Summing up these contributions and subtracting the Coulombic interactions


of the fully deprotonated protein, we find the enthalpy change
⎛ ⎞
  1,0

ΔG(s) = si ⎝ΔGi,intr + Ei,j − Ei,j ⎠ +
0,0
si sj Wi,j (5.32)
i j=i i<j

with the interaction parameter


Wij = Eij (1, 1) − Eij (1, 0) − Eij (0, 1) + Eij (0, 0). (5.33)
In fact for each pair i, j, only one of the summands is nonzero.

5.2.3 Protonation Enthalpy Relative to the Uncharged State


In the literature, the enthalpy is often taken relative to a reference state (s0i )
where all titrable residues are in their uncharged state. As is illustrated in
Fig. 5.7, the formal dimensionless charge of a residue is given by
qi = si − s0i .
The enthalpy change relative to the uncharged reference state is
ΔG(s) − ΔG(s0 )

N 
N 
N
1,0 0,0
= (si − s0i )ΔGi,intr + (si − s0i ) (Ei,j − Ei,j )
i=1 i=1 j=i,j=1
1  & '
+ Wi,j (si − s0i )(sj − s0j ) + s0i sj + si s0j − 2s0i s0j . (5.34)
2 i
j=i

Note that Wij = Wji and therefore the last sum can be simplified
1  & '
Wi,j (si − s0i )(sj − s0j ) + s0i sj + si s0j − 2s0i s0j
2 i
j=i
1 
= Wi,j (qi qj + 2(si − s0i )s0j ). (5.35)
2 i
j=i

Consider now the expression


N 
N
1,0 0,0
ΔW = qi (Ei,j − Ei,j + s0j Wi,j ). (5.36)
i=1 j=i,j=1

For each residue j = i, there are only two alternatives7


1,0 0,0
s0j = 0 → Ei,j − Ei,j = 0, (5.37)
1,0 0,0 1,1 0,1
s0j = 1 → Ei,j − Ei,j + Wi,j = Ei,j − Ei,j = 0, (5.38)
and hence we have
ΔW = 0, (5.39)

N
1
ΔG(s) − ΔG(s0 ) = qi ΔGi,intr + Wi,j qi qj . (5.40)
2
si =s0i i=j

5.2.4 Statistical Mechanics of Protonation

The partition function for a specific total charge


 
Q= qi = (si − s0i ) (5.41)

is given by
 
Z(Q) = e−ΔG/kB T

{s, qi =Q} γ
   
= e −[ (si −s0i )ΔGi,intr + 12 Wi,j qi qj ]/kB T
. (5.42)

{s, qi =Q} γ

We come to the grand canonical partition function by introducing the factor



(si −s0i )/kB T
eμQ/kB T = eμ (5.43)

and summing over all possible charge states of the protein



Ξ= Z(Q)eμQ/kB T
Q
  
= e −[ (si −s0i )(ΔGi,intr (γ)−μ)+ 12 Wi,j (γ)qi qj ]/kB T
. (5.44)
γ,s

7
The Coulombic interaction vanishes if one of the residues is in the neutral state.

With the approximation (5.27) and (5.3), the partition function


 
 (i) qi
z1
Z(Q) ≈ (i)
Zconf (5.45)
 z0
{s, qi =Q}

becomes the product of one factor which relates to the internal degrees of
freedom which are usually assumed to be configuration-independent8 and a
second factor which depends only on the configurational degrees of freedom:
  
e−[ (si −si )(ΔEi,bg +ΔEi,Born )+ 2
0 1
Wi,j qi qj ]/kB T
Zconf (s) = . (5.46)
γ

5.3 Abnormal Titration Curves of Coupled Residues


Let us consider a simple example of a model protein with only two titrable
sites of the same type. The free enthalpies of the four possible states are
(Fig. 5.8)
1,1 0,0
ΔG(AH, AH) = ΔG1,intr + ΔG2,intr + E1,2 − E1,2 ,
ΔG(A− , AH) − ΔG(AH, AH) = −ΔG1,intr ,
ΔG(AH, A− ) − ΔG(AH, AH) = −ΔG2,intr ,
ΔG(A− , A− ) − ΔG(AH, AH) = −ΔG1,intr − ΔG2,intr + W12
0,0
= −ΔG2,intr − ΔG1,intr + E1,2 . (5.47)

Fig. 5.8. Abnormal titration curves. Two interacting residues with neutral proto-
nated states (AH), ΔG1,intr = ΔG2,intr = 1.0, W = −5, 0, 5, 10

8
Protonation-dependent degrees of freedom can be important in certain cases [30].

The grand partition function is

\[ \Xi = 1 + e^{-(-\Delta G_{1,\mathrm{intr}} + \mu)/k_B T} + e^{-(-\Delta G_{2,\mathrm{intr}} + \mu)/k_B T} + e^{-(-\Delta G_{2,\mathrm{intr}} - \Delta G_{1,\mathrm{intr}} + 2\mu + W)/k_B T}. \tag{5.48} \]
The average protonation values are
\[ \langle s_1 \rangle = \frac{1 + e^{-(-\Delta G_{2,\mathrm{intr}} + \mu)/k_B T}}{\Xi}, \qquad \langle s_2 \rangle = \frac{1 + e^{-(-\Delta G_{1,\mathrm{intr}} + \mu)/k_B T}}{\Xi}. \tag{5.49} \]
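The following sketch evaluates (5.48) and (5.49) numerically. The parameter values correspond to one of the cases shown in Fig. 5.8 (W = 5); the function name and the choice of units (k_B T = 1) are assumptions made for illustration.

```python
import numpy as np

def two_site_protonation(mu, dG1, dG2, W, kT=1.0):
    """Average protonation <s1>, <s2> of two coupled titrable sites,
    Eqs. (5.48) and (5.49). mu is the proton chemical potential."""
    w_1dep = np.exp(-(-dG1 + mu) / kT)                 # only site 1 deprotonated
    w_2dep = np.exp(-(-dG2 + mu) / kT)                 # only site 2 deprotonated
    w_both = np.exp(-(-dG1 - dG2 + 2 * mu + W) / kT)   # both sites deprotonated
    Xi = 1.0 + w_1dep + w_2dep + w_both                # Eq. (5.48); fully protonated state has weight 1
    s1 = (1.0 + w_2dep) / Xi                           # Eq. (5.49)
    s2 = (1.0 + w_1dep) / Xi
    return s1, s2

for mu in (-10.0, -5.0, 0.0, 5.0, 10.0):
    s1, s2 = two_site_protonation(mu, 1.0, 1.0, 5.0)
    print(f"mu = {mu:+6.1f}: <s1> = {s1:.3f}, <s2> = {s2:.3f}")
```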

Problems
5.1. Abnormal Titration Curves
Consider a simple example of a model protein with only two titrable sites
of the same type. Determine the relative free enthalpies of the four possible
states.

The four possible states are (B, B) + 2H+, (B, BH+) + H+, (BH+, B) + H+, and (BH+, BH+).

From the grand canonical partition function (the number of protons is not
fixed), calculate the protonation degree for both sites and discuss them as a
function of the interaction energy W .
Part III

Reaction Kinetics
6
Formal Kinetics

In this chapter, we discuss the phenomenological description of elementary


chemical reactions and photophysical processes with the help of rate equations.

6.1 Elementary Chemical Reactions


The basic steps of chemical reactions can be divided into several classes of
elementary reactions. They can be photoinduced or thermally activated, may
involve the transfer of an electron or proton, and are accompanied by struc-
tural changes, like breaking and forming bonds or at least a reorganization of
bond lengths and angles (Figs. 6.1 and 6.2).
All elementary reactions are reversible. There is a dynamical equilibrium
between forward and back reaction, which are independent, for instance

H2 + J2  2HJ. (6.1)

6.2 Reaction Variable and Reaction Rate


We consider a general stoichiometric equation for the reaction of several
species1
\[ \sum_i \nu_i A_i = 0 \tag{6.2} \]

and define a reaction variable x based on the concentration of the species Ai


by
ci = ci,0 + νi x (6.3)
1
The stoichiometric coefficients νi are positive for products and negative for educts.
This is the conventional definition. Products and educts can be exchanged at least
in principle, since the back reaction is always possible.

Fig. 6.1. Elementary reactions without bond reformation: absorption (A + hν → A*), energy transfer (A* + B → A + B*), charge separation (D*A → D+A−), and charge transfer (D+A−B → D+A B−). Bonds are conserved; only bond lengths and angles reorganize

Fig. 6.2. Elementary reactions with bond reformation: bimolecular proton transfer (A–H + B → A− + B–H+), bond breaking and formation via an activated complex (H–Cl + Br–Br → H–Br + Br–Cl), and unimolecular isomerization. Bonds are broken or formed

as
ci − ci,0
x= (6.4)
νi
and the reaction rate as its time derivative
dx 1 dci
r= = . (6.5)
dt νi dt

6.3 Reaction Order


The progress of a chemical reaction can be frequently described by a simple
rate expression such as

\[ r = k\, c_1^{n_1} c_2^{n_2} \cdots = k \prod_{i \in \mathrm{educts}} c_i^{n_i} \tag{6.6} \]

with the rate constant k. For such a system, the exponent2 of the ith term is
called the order of the reaction with respect to this substance and the sum of
all the exponents is called the overall reaction order.
2
For more complicated reactions, the exponents need not be integers. For simple
reactions, they are given by the stoichiometric coefficients ni = |νi | of the educts
(the products for the back reaction).

6.3.1 Zero-Order Reactions

Zero-order reactions proceed at the same rate regardless of concentration. The


rate expression for a reaction of this type is
dc
= k0 (6.7)
dt
which can be integrated
c = c0 + k0 t. (6.8)
Zero-order reactions appear when the determining factor is an outside source
of energy (light) or when the reaction occurs on the surface of a catalyst.

6.3.2 First-Order Reactions

First-order reactions describe the decay of an excited state, for instance a


radioactive decay
A∗ → A. (6.9)
The rate expression is
dcA∗ dcA
=− = −kcA∗ , (6.10)
dt dt
which gives an exponential decay

cA∗ = cA∗ (0)e−kt (6.11)

with a constant half-period


ln(2)
τ1/2 = . (6.12)
k

6.3.3 Second-Order Reactions

A second-order reaction between two different substances

A+B → · · · (6.13)

obeys the equations


dcA dcB
= = −k2 cA cB , (6.14)
dt dt
which can be written down using the reaction variable x and the initial
concentrations a, b as
cA = a − x, cB = b − x, (6.15)
dx
= k2 (a − x)(b − x). (6.16)
dt
This can be integrated to give

\[ \frac{1}{a - b} \ln\frac{b(a - x)}{a(b - x)} = k_2 t. \tag{6.17} \]

If two molecules of the same type react with each other, we have instead
dcA
− = −k2 c2A (6.18)
dt
which gives an algebraic decay
1
cA (t) = 1 , (6.19)
k2 t + a

where the half-period now depends on the initial concentration


1
τ1/2 = . (6.20)
k2 a
An example is exciton–exciton annihilation in the light harvesting complex of
photosynthesis:
A∗ + A∗ → A+A. (6.21)

6.4 Dynamical Equilibrium


We consider a first-order reaction together with the back reaction

k1

A B. (6.22)

k−1

The reaction variable of the back reaction will be denoted by y. The concen-
trations are

cA (t) = a − x + y, (6.23)
cB (t) = b + x − y, (6.24)

and the reaction rates are


dx
= k1 cA = k1 (a − x + y), (6.25)
dt
dy
= k−1 cB = k−1 (b + x − y). (6.26)
dt
Introducing an overall reaction variable

z =x−y (6.27)

and the equilibrium value


k1 a − k−1 b
s= , (6.28)
k1 + k−1
we have
dz
= k1 a − k−1 b − (k1 − k−1 )z = (k1 + k−1 )(s − z) (6.29)
dt
which for z(0) = 0 has the solution

z = s(1 − e−(k1 +k−1 )t ). (6.30)

The reaction approaches the equilibrium with a rate constant k1 + k−1 . In


equilibrium, z = s and dz/dt = 0. The equilibrium concentrations are
k−1
cA = a − s = (a + b) , (6.31)
k1 + k−1
k1
cB = b + s = (a + b) , (6.32)
k1 + k−1
and the equilibrium constant is
cA k−1
K= = . (6.33)
cB k1

6.5 Competing Reactions


If one species decays via separate independent channels (fluorescence, electron
transfer, radiationless transitions, etc.), the rates are additive:
dcA
= −(k1 + k2 + · · · )cA . (6.34)
dt

6.6 Consecutive Reactions


We consider a chain consisting of two first-order reactions3

k1 k2
A → B → C. (6.35)

The reaction variables are denoted by x, y and the initial concentrations by


a, b, c. The concentrations are

cA = a − x, (6.36)

3
With negligible back reactions

cB = b + x − y, (6.37)
cC = c + y, (6.38)
and their time derivatives are

dcA dx
=− = −k1 cA = −k1 (a − x), (6.39)
dt dt
dcB dx dy
= − = k1 cA − k2 cB = k1 a − k2 b + (k2 − k1 )x − k2 y, (6.40)
dt dt dt
dcC dy
= = k2 cB = k2 (b + x − y). (6.41)
dt dt
The first equation gives an exponential decay:

cA = ae−k1 t . (6.42)

Integration of
\[ \frac{\mathrm{d}c_B}{\mathrm{d}t} + k_2 c_B = k_1 a\, e^{-k_1 t} \tag{6.43} \]
gives the concentration of the intermediate state:
\[ c_B = \frac{k_1 a}{k_2 - k_1}\, e^{-k_1 t} + \left( b - \frac{k_1 a}{k_2 - k_1} \right) e^{-k_2 t}. \tag{6.44} \]
If at time zero only the species A is present, the concentration of B has a maximum at
\[ t_{\max} = \frac{1}{k_1 - k_2} \ln\frac{k_1}{k_2} \tag{6.45} \]
with the value
\[ c_{B,\max} = a\, \frac{k_1}{k_1 - k_2} \left[ \exp\left( \frac{k_2}{k_1 - k_2} \ln\frac{k_2}{k_1} \right) - \exp\left( \frac{k_1}{k_1 - k_2} \ln\frac{k_2}{k_1} \right) \right]. \tag{6.46} \]
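The analytic results (6.44)–(6.46) are easy to check numerically; the rate constants used below are assumed example values, not data from the text.

```python
import numpy as np

def intermediate_concentration(t, a, k1, k2, b=0.0):
    """Concentration of the intermediate B for A -> B -> C, Eq. (6.44)."""
    return k1 * a / (k2 - k1) * np.exp(-k1 * t) \
        + (b - k1 * a / (k2 - k1)) * np.exp(-k2 * t)

k1, k2, a = 1.0, 0.3, 1.0                        # assumed illustrative values
t_max = np.log(k1 / k2) / (k1 - k2)              # Eq. (6.45)
cB_max = a * k1 / (k1 - k2) * (np.exp(k2 / (k1 - k2) * np.log(k2 / k1))
                               - np.exp(k1 / (k1 - k2) * np.log(k2 / k1)))  # Eq. (6.46)
print(f"t_max = {t_max:.3f}")
print(f"c_B(t_max) = {intermediate_concentration(t_max, a, k1, k2):.3f} "
      f"(analytic maximum {cB_max:.3f})")
```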

6.7 Enzymatic Catalysis

Enzymatic catalysis is very important for biochemical reactions. It can be


described schematically by formation of an enzyme–substrate complex fol-
lowed by decomposition into enzyme and product:

k1
E + S  ES  E + P. (6.47)
k−1

We consider the limiting case of negligible k−2 k2 and large concentration


of substrate cS
cE . Then we have to solve the equations
dcS
≈ −k1 cE c0S + k−1 cES ,
dt
dcE
≈ −k1 cE c0S + (k−1 + k2 )cES ,
dt
dcES
≈ k1 cE c0S − (k−1 + k2 )cES ,
dt
dcP
= k2 cES . (6.48)
dt
First, we solve the equations for dcE /dt and dcES /dt:

d cES −k−1 − k2 k1 c0S cES
= . (6.49)
dt cE k−1 + k2 −k1 c0S cE

The matrix has one eigenvalue λ = 0 corresponding to a stationary solution:


 k1 c0 
−k−1 − k2 k1 c0S S 0
k−1 +k2 = . (6.50)
k−1 + k2 −k1 c0S 1 0

The stationary concentration of the ES complex is
\[ c_{ES}^{\mathrm{stat}} = \frac{k_1}{k_{-1} + k_2}\, c_S c_E = \frac{c_S c_E}{K_M} \tag{6.51} \]
with the Michaelis constant
\[ K_M = \frac{k_{-1} + k_2}{k_1}. \tag{6.52} \]
The second eigenvalue relates to the time constant for reaching the stationary state:
\[ \begin{pmatrix} -k_{-1} - k_2 & k_1 c_S^0 \\ k_{-1} + k_2 & -k_1 c_S^0 \end{pmatrix} \begin{pmatrix} 1 \\ -1 \end{pmatrix} = -(k_1 c_S^0 + k_{-1} + k_2) \begin{pmatrix} 1 \\ -1 \end{pmatrix}. \tag{6.53} \]

For the initial conditions

cES (0) = cP (0) = 0, (6.54)

we find
\[ c_{ES}(t) = \frac{c_E^0}{1 + \frac{K_M}{c_S^0}} \left( 1 - e^{-(k_1 c_S^0 + k_{-1} + k_2)t} \right), \qquad c_E(t) = \frac{c_E^0}{1 + \frac{c_S^0}{K_M}} \left( 1 + \frac{c_S^0}{K_M}\, e^{-(k_1 c_S^0 + k_{-1} + k_2)t} \right). \tag{6.55} \]

Fig. 6.3. Michaelis–Menten kinetics. The reaction rate r grows linearly as c_S k_2 c_{E,tot}/K_M for small substrate concentration and saturates at r_max = k_2 c_{E,tot}

The stationary state is stable, since any deviation will decrease exponentially.
The overall rate of the enzyme-catalyzed reaction is given by the rate of
product formation:
dcP dcS
r= =− = k2 cES (6.56)
dt dt
and with the total concentration of enzyme

cE,tot = cE + cES , (6.57)

we have
c E cS (cE,tot − cES )cS
cES = = (6.58)
KM KM
and hence
cE,tot cS
cES = . (6.59)
cS + K M
The overall reaction rate is given by the Michaelis–Menten equation (Fig. 6.3)
\[ r = \frac{k_2 c_{E,\mathrm{tot}}\, c_S}{K_M + c_S}, \tag{6.60} \]
\[ \frac{r}{r_{\max}} = \frac{c_S}{c_S + K_M}, \qquad r_{\max} = k_2 c_{E,\mathrm{tot}}. \tag{6.61} \]
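A minimal sketch evaluating the Michaelis–Menten rate (6.60); the values of K_M, k_2, and the total enzyme concentration are assumed examples.

```python
def michaelis_menten_rate(c_S, k2, c_E_tot, K_M):
    """Overall rate of the enzyme-catalyzed reaction, Eq. (6.60)."""
    return k2 * c_E_tot * c_S / (K_M + c_S)

# assumed illustrative parameters: K_M = 0.2, k2 = 1, c_E_tot = 1
for c_S in (0.05, 0.2, 1.0, 5.0):
    r = michaelis_menten_rate(c_S, k2=1.0, c_E_tot=1.0, K_M=0.2)
    print(f"c_S = {c_S:4.2f}: r/r_max = {r:.3f}")   # saturates towards r_max = k2*c_E_tot
```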

6.8 Reactions in Solutions


In solutions, the reacting molecules approach each other by diffusive motion,
forming a reactive complex within a solvent cage which has a lifetime of
typically 100 ps (Fig. 6.4).

Fig. 6.4. Formation of a reactive complex

Fig. 6.5. Transition from the diffusion-controlled limit to the reaction-controlled


limit. r = k2 c{AB} is calculated numerically for cA (0) = cB (0) = 1, k2 = 1: (a) k1 =
k−1 = 0.1, (b) k1 = k−1 = 1.0, and (c) k1 = k−1 = 10

Formally, this can be described by an equilibrium between the free reac-


tants A and B and a reactive complex {AB}
k1 k2
A+B  {A B} → C. (6.62)
k−1
The concentrations obey the equations (Fig. 6.5)
dcA dcB
= = −k1 cA cB + k−1 c{AB} ,
dt dt
dcC
= k2 c{AB} ,
dt
dc{AB}
= k1 cA cB − (k−1 + k2 )c{AB} . (6.63)
dt
Let us consider two limiting cases.

6.8.1 Diffusion-Controlled Limit


If the reaction rate k2 is large compared to k±1 , we find for the stationary
solution approximately

k2 c{AB} ≈ k1 cA cB (6.64)
and hence for the overall reaction rate
dcC
= k2 c{AB} ≈ k1 cA cB . (6.65)
dt
The reaction rate is determined by the formation of the reactive complex.

6.8.2 Reaction-Controlled Limit

If, on the other hand, k_2 ≪ k_{±1}, an equilibrium between reactants and reactive complex will be established
\[ A + B \rightleftharpoons \{AB\}, \qquad \frac{c_{\{AB\}}}{c_A c_B} = K = \frac{k_1}{k_{-1}}. \tag{6.66} \]
Now, the overall reaction rate
dcC
= k2 c{AB} = k2 KcA cB (6.67)
dt
is determined by the reaction rate k2 and the constant of the diffusion
equilibrium.

Problems
6.1. pH Dependence of Enzyme Activity
Consider an enzymatic reaction where the substrate can be in two protonation
states
S− + H+  HS
and the enzyme reacts only with the deprotonated form

E + S−  ES− → E + P.

Calculate the reaction rate as a function of the proton concentration cH+

6.2. Polymerization at the End of a Polymer


Consider the multiple equilibrium between the monomer M and the i-mer iM

c2M = Kc2M ,

c3M = KcM c2M ,


..
.
ciM = KcM c(i−1)M ,

where the equilibrium constant K is assumed to be independent of the degree


of polymerization. Calculate the concentration of the i-mer ciM and the mean
degree of polymerization 
iciM
i = i .
i ciM

6.3. Primary Salt Effect


Consider the reaction of two ionic species A and B with charges ZA,B e which
are in equilibrium with an activated complex X (with charge (ZA + ZB )e)
which decays into the products C and D:

k1
A+B  X → C + D.

The equilibrium constant is


aX
K= ,
aA aB
where the activities are given by the Debye–Hückel approximation

Zi2 e2 κ e2 
μi = μ0i + kB T ln ci − , κ2 = Ni Zi2 .
8π kB T
Calculate the reaction rate
dcC
r= .
dt
7
Kinetic Theory: Fokker–Planck Equation

In this chapter, we consider a model system (protein) interacting with a


surrounding medium which is only taken implicitly into account. We are inter-
ested in the dynamics on a time scale slower than the medium fluctuations.
The interaction with the medium is described approximately as the sum of
an average force and a stochastic force [31].

7.1 Stochastic Differential Equation for Brownian


Motion
The simplest example describes one-dimensional Brownian motion of a big
particle in a sea of small particles. The average interaction leads to damping
of the motion which is described in terms of a velocity-dependent damping
term:
dv(t)
= −γv(t). (7.1)
dt
This equation alone leads to exponential relaxation v = v(0)e−γt which is not
compatible with thermodynamics, since the average kinetic energy should be
(m/2)v 2 = kB T /2 in equilibrium. Therefore, we add a randomly fluctuating
force which represents the collisions with many solvent molecules during a
finite time interval τ . The result is the Langevin equation
dv(t)
= −γv(t) + F (t) (7.2)
dt
with the formal solution
 t

−γt
v(t) = v0 e + eγ(t −t) F (t )dt . (7.3)
0

The average of the stochastic force has to be zero because the equation of
motion for the average velocity should be
88 7 Kinetic Theory: Fokker–Planck Equation

dv(t)
= −γv(t). (7.4)
dt
We assume that many collisions occur during τ and therefore forces at different
times are not correlated:

F (t)F (t ) = Cδ(t − t ). (7.5)

The velocity correlation function is


   
t t
 −γ(t+t )
v(t)v(t ) = e v02 + dt1 dt2 e γ(t1 +t2 )
F (t1 )F (t2 ) . (7.6)
0 0

Without losing generality, we assume t > t and substitute t2 = t1 + s to find



v(t)v(t ) = v02 e−γ(t+t )
 t  t −t1

+e−γ(t+t ) dt1 ds eγ(2t1 +s) F (t1 )F (t1 + s)
0 −t1
 t

−γ(t+t )
= v02 e−γ(t+t ) +e dt1 e2γt1 C
0
  e 2γt
−1
= v02 e−γ(t+t ) + e−γ(t+t ) C. (7.7)

The exponential terms vanish very quickly and we find
 C
v(t)v(t ) → e−γ|t −t| . (7.8)

Now, C can be determined from the average kinetic energy as

mv 2  kB T mC 2γkB T
= = →C= . (7.9)
2 2 2 2γ m
The mean square displacement of a particle starting at x0 with velocity v0 is
(  2 )  t  t
t
(x(t) − x(0))2  = dt1 v(t1 ) = v(t1 )v(t2 )dt1 dt2
0 0 0
 t t
2 −γ(t1 +t2 ) kB T −γ|t1 −t2 |
= v0 e + e dt (7.10)
0 0 m

and since  t 2
t
−γ(t1 +t2 ) 1 − e−γt
e dt1 dt2 = (7.11)
0 0 γ
7.2 Probability Distribution 89

and
 t t  t  t1
−γ|t1 −t2 |
e dt1 dt2 = 2 dt1 e−γ(t1 −t2 ) dt2
0 0 0 0
2 2
= t − 2 (1 − e−γt ), (7.12)
γ γ
we obtain

kB T (1 − e−γt )
2
2kB T 2kB T
(x(t) − x(0))  = v0 −
2 2
+ t− (1 − e−γt ).
m γ2 mγ mγ 2
(7.13)

If we had started with an initial velocity distribution for the stationary state

v02  = kB T /m, (7.14)

then the first term in (7.12) would vanish. For very large times, the leading
term is1
(x(t) − x(0))2  = 2Dt (7.15)
with the diffusion coefficient
kB T
D= . (7.16)

7.2 Probability Distribution


Now, we discuss the probability distribution W (v). The time evolution can be
described as

W (v, t + τ ) = P (v, t + τ |v  , t)W (v  , t)dv  . (7.17)

To derive an expression for the differential ∂W (v, t)/∂t, we need the transition
probability P (v, t + τ |v  , t) for small τ . Introducing Δ = v − v  , we expand the
integrand in a Taylor series

P (v, t + τ |v  , t)W (v  , t) = P (v, t + τ |v − Δ, t)W (v − Δ, t)


∞ n
(−1)n n ∂
= Δ
n=0
n! ∂v
×P (v + Δ, t + τ |v, t)W (v, t). (7.18)

Inserting this into the integral gives


∞ n 
(−1)n ∂
W (v, t + τ ) = Δn P (v + Δ, t + τ |v, t)dΔ W (v, t)
n=0
n! ∂v
(7.19)
1
This is the well-known Einstein result for the diffusion constant D.
90 7 Kinetic Theory: Fokker–Planck Equation

and assuming that the moments exist which are defined by

Mn (v  , t, τ ) = (v(t + τ ) − v(t))n |v(t)=v



= (v − v  )n P (v, t + τ |v  , t)dv, (7.20)

we find n
∞
(−1)n ∂
W (v, t + τ ) = Mn (v, t, τ )W (v, t). (7.21)
n=0
n! ∂v
Expanding the moments into a Taylor series
1 1
Mn (v, t, τ ) = Mn (v, t, 0) + D(n) (v, t)τ + · · · , (7.22)
n! n!
we have finally2

 n

W (v, t + τ ) − W (v, t) = − D(n) (v, t)W (v, t)τ + · · · , (7.23)
1
∂v

which gives the equation of motion for the probability distribution3


∞ n
∂W (v, t)  ∂
= − D(n) (v, t)W (v, t). (7.24)
∂t 1
∂v

If this expansion stops after the second term,4 the general form of the one-
dimensional Fokker–Planck equation results

∂W (v, t) ∂ (1) ∂ 2 (2)
= − D (v, t) + 2 D (v, t) W (v, t) (7.25)
∂t ∂v ∂x

7.3 Diffusion
Consider a particle performing a random walk in one dimension due to
collisions. We use the stochastic differential equation5
dx
= v0 + f (t), (7.26)
dt
where the velocity has a drift component v0 and a fluctuating part f (t) with

f (t) = 0, f (t)f (t ) = qδ(t − t ). (7.27)


2
The zero-order moment does not depend on τ .
3
This is known as the Kramers–Moyal expansion.
4
It can be shown that this is the case for all Markov processes.
5
This is a so-called Wiener process.
7.3 Diffusion 91

The formal solution is simply


 t
x(t) − x(0) = v0 t + f (t )dt . (7.28)
0

The first moment


 τ
M1 (x0 , t, τ ) = x(t + τ ) − x(t)|x(t)=x0 = v0 τ + f (t )dt (7.29)
0

gives
D(1) = v0 . (7.30)
The second moment is

M2 (x0 , t, τ )
 τ  τ  τ
= v02 τ 2 + v0 τ f (t )dt + f (t1 )f (t2 )dt1 dt2 . (7.31)
0 0 0

The second term vanishes and the only linear term in τ comes from the double
integral
 τ τ  τ  τ −t1
f (t1 )f (t2 )dt1 dt2 = dt1 qδ(t )dt = qτ. (7.32)
0 0 0 −t1

Hence
q
D(2) = (7.33)
2
and the corresponding Fokker–Planck equation is the diffusion equation

∂W (x, t) ∂W (x, t) ∂ 2 W (x, t)


= −v0 +D (7.34)
∂t ∂x ∂x2
with the diffusion constant D = D(2) .

7.3.1 Sharp Initial Distribution

We can easily find the solution for a sharp initial distribution W (x, 0) =
δ(x − x0 ) by taking the Fourier transform:
 ∞
* (k, t) =
W dx W (x, t) e−ikx . (7.35)
−∞

We obtain the algebraic equation

* (k, t)
∂W * (k, t)
= (−Dk 2 + iv0 k)W (7.36)
∂t
92 7 Kinetic Theory: Fokker–Planck Equation

0.5

0.4

0.3

0.2

0.1

0
–4
–2 1
0 2
2
x 4 3 t
6
8 4
10
12 5

Fig. 7.1. Solution (7.38) of the diffusion equation for sharp initial conditions

which is solved by
+ ,
* (k, t) = W
W *0 exp (−Dk 2 + iv0 k)t + ikx0 . (7.37)

Inverse Fourier transformation then gives6


 
1 (x − x0 − v0 t)2
W (x, t) = √ exp − , (7.38)
4πDt 4Dt

which is a Gaussian distribution centered at xc = x0 + v0 t with a variance of


(x − xc )2  = 4Dt (Fig. 7.1).

7.3.2 Absorbing Boundary

Consider a particle from species A which can undergo a chemical reaction


with a particle from species B at position xA = 0

A+B → AB. (7.39)

If the reaction rate is very fast, then the concentration of A vanishes at x = 0


which gives an additional boundary condition

6
with the proper normalization factor
7.4 Fokker–Planck Equation for Brownian Motion 93

a b
1
2
0.9

1 0.8
W(x,t)

0.7

CA(t)
0
0.6
–1 0.5
0.4
–2
0.3
–2 0 2 0 2 4 6 8 10
x t

Fig. 7.2. Solution from the mirror principle. (a) The probability density distribution
(7.42) at t = 0.25, 0.5, 1, 2, 5 and (b) the decay of the total concentration (7.43) are
shown for 4D = 1

W (x = 0, t) = 0. (7.40)

Starting again with a localized particle at time zero with

W (x, 0) = δ(x − x0 ), v0 = 0, (7.41)

the probability distribution



1 (x−x0 )2 (x+x0 )2
W (x, t) = √ e− 4Dt − e− 4Dt (7.42)
4πDt
is a solution which fulfills the boundary conditions. This solution is similar
to the mirror principle known from electrostatics. The total concentration of
species A in solution is then given by (Fig. 7.2)
 ∞
x0
CA (t) = dx W (x, t) = erf √ . (7.43)
0 4Dt

7.4 Fokker–Planck Equation for Brownian Motion


For Brownian motion, we have from the formal solution
 τ
v(τ ) = v0 (1 − γτ + · · · ) + (1 + γ(t1 − τ ) + · · · )F (t1 )dt1 . (7.44)
0

The first moment7

M1 (v0 , t, τ ) = v(τ ) − v(0) = −γτ v0 + · · · (7.45)

7
Here and in the following, we use F (t) = 0.
94 7 Kinetic Theory: Fokker–Planck Equation

gives
D(1) (v, t) = −γv. (7.46)
The second moment follows from

(v(τ ) − v0 )2 
 τ  τ
= (v0 γτ )2 + (1 + γ(t1 + t2 − 2τ · · · ))F (t1 )F (t2 )dt1 dt2 . (7.47)
0 0

The double integral gives


 τ  τ −t1
2γkB T 
dt1 dt (1 + γ(2t1 + t − 2τ + · · · )) δ(t )
0 −t1 m
 τ
2γkB T
= dt1 (1 + γ(2t1 − 2τ + · · · ))
0 m
2γkB T
= τ + ··· (7.48)
m
and we have
γkB T
. D(2) = (7.49)
m
The higher moments have no contributions linear in τ and the resulting
Fokker–Planck equation is
∂W (v, t) ∂ γkB T ∂ 2
= γ (vW (v, t)) + W (v, t). (7.50)
∂t ∂v m ∂v 2

7.5 Stationary Solution to the Focker–Planck Equation


The Fokker–Planck equation can be written in the form of a continuity
equation:
∂W (v, t) ∂
= − S(v, t) (7.51)
∂t ∂v
with the probability current

γkT mv ∂
S(v, t) = − W (v, t) + W (v, t) . (7.52)
m kB T ∂v
The probability current has to vanish for a stationary solution (with open
boundaries −∞ < v < ∞)
∂ mv
W (v, t) = − W (v, t), (7.53)
∂v kB T
which has the Maxwell distribution as its solution

m
e−mv /2kB T .
2
Wstat (v, t) = (7.54)
2πkB T
7.5 Stationary Solution to the Focker–Planck Equation 95

Therefore, we conclude that the Fokker–Planck equation describes systems


that reach thermal equilibrium, starting from a nonequilibrium distribution.
In the following, we want to look at the relaxation process itself. We start
with
∂W (v, t) ∂W (v, t) ∂ 2 W (v, t)
= γW (v, t) + γv +D (7.55)
∂t ∂v ∂v 2
and introduce the new variables

ρ = veγt , y(ρ, t) = W (ρe−γt , t) (7.56)

which transform the differentials according to


∂W ∂y ∂ρ ∂y
= = eγt ,
∂v ∂ρ ∂v ∂ρ
2 2
∂ W ∂ y
= e2γt 2 ,
∂v 2 ∂ρ
∂W ∂y ∂y ∂ρ ∂y ∂y
= + = + γρ . (7.57)
∂t ∂t ∂ρ ∂t ∂t ∂ρ
This leads to the new differential equation:

∂y ∂2y
= γy + De2γt 2 . (7.58)
∂t ∂ρ
To solve this equation, we introduce new variables again

y = χeγt , (7.59)

which results in
∂χ ∂2χ
= De2γt 2 . (7.60)
∂t ∂ρ
Now, we introduce a new time scale
1 2γt
θ= (e − 1), (7.61)

dθ = e2γt dt, (7.62)


satisfying the initial condition θ(t = 0) = 0. Finally, we have to solve a
diffusion equation
∂χ ∂2χ
=D 2 (7.63)
∂θ ∂ρ
which gives (7.38)

1 (ρ − ρ0 )2
χ(ρ, θ, ρ0 ) = √ exp − . (7.64)
4πDθ 4πDθ
96 7 Kinetic Theory: Fokker–Planck Equation

After back substitution of all variables, we find


 
m m (v − v0 e−γt )2
W (v, t) = exp − . (7.65)
2πkB T (1 − e−2γt ) 2kB T (1 − e−2γt )
This solution shows that the system behaves initially like
 
1 (v − v0 )2
W (v, t) ≈ √ exp − (7.66)
4πDt 4Dt
and relaxes to the Maxwell distribution with a time constant Δt = 1/2γ.

7.6 Diffusion in an External Potential


We consider motion of a particle under the influence of an external (mean)
force K(x) = −(d/dx)U (x). The stochastic differential equations for position
and velocity are
ẋ = v, (7.67)
1
v̇ = −γv + K(x) + F (t). (7.68)
m
We will calculate the moments for the Kramers–Moyal expansion. For small
τ , we have
 τ
Mx = x(τ ) − x(0) = v(t)dt = v0 τ + · · · ,
 τ
0

1
Mv = v(τ ) − v(0) = −γv(t) + K(x(t)) + F (t) dt
0 m

1
= −γv0 + K(x0 ) τ + · · · ,
m
 τ τ
Mxx = (x(τ ) − x(0))2  = v(t1 )v(t2 )dt1 dt2 = v02 τ 2 + · · · ,
0 0
Mvv = (v(τ ) − v(0))2 
2  τ τ
1
= −γv0 + K(x0 ) τ 2 + F (t1 )F (t2 )dt1 dt2
m 0 0
2γkB T
= τ + ··· (7.69)
m
The drift and diffusion coefficients are
D(x) = v,
1
D(v) = −γv + K(x),
m
D(xx) = 0,
γkB T
D(vv) = , (7.70)
m
7.6 Diffusion in an External Potential 97

which leads to the Klein–Kramers equation


- .
∂W (x, v, t) ∂ ∂ (v) ∂2
= − D(x) − D + 2 D(vv) W (x, v, t),
∂t ∂x ∂v ∂v
- .
∂W (x, v, t) ∂ ∂ K(x) γkB T ∂ 2
= − v+ (γv − )+ W (x, v, t).
∂t ∂x ∂v m m ∂v 2
(7.71)

This equation can be divided into a reversible and an irreversible part:


∂W
= (Lrev + Lirrev )W, (7.72)
∂t
- . - .
∂ 1 ∂U ∂ ∂ γkB T ∂ 2
Lrev = −v + , Lirrev = γv + . (7.73)
∂x m ∂x ∂v ∂v m ∂v 2
The reversible part corresponds to the Liouville operator for a particle moving
in the potential without friction
- .
∂H ∂ ∂H ∂ p2
L= − , H= + U (x). (7.74)
∂x ∂p ∂p ∂x 2m
Obviously
LH = 0 (7.75)
and
 
H
Lirrev exp −
kB T
 /  2 0
H mv γkB T mv m
= exp − γ − γv + − = 0. (7.76)
kB T kB T m kB T kB T

Therefore, the Klein–Kramers equation has the stationary solution

Wstat (x, v, t) = Z −1 e−(mv


2
/2 +U (x))/kB T
(7.77)

with  
dvdx e−(mv
2
/2+U (x))/kB T
Z= . (7.78)

The Klein–Kramers equation can be written in the form of a continuity


equation:
∂ ∂ ∂
W = − Sx − Sv (7.79)
∂t ∂x ∂v
with the probability current
Sx = vW, (7.80)
- .
1 ∂U γkB T ∂W
Sv = − γv + W− . (7.81)
m ∂x m ∂v
98 7 Kinetic Theory: Fokker–Planck Equation

7.7 Large Friction Limit: Smoluchowski Equation


For large friction constant γ, we may neglect the second derivative with respect
to time and obtain the stochastic differential equation
1 1
ẋ = K(x) + F (t) (7.82)
mγ γ
and the corresponding Fokker–Planck equation is the Smoluchowski equation:
- .
∂W (x, t) 1 ∂ kB T ∂ 2
= − K(x) + W (x, t), (7.83)
∂t mγ ∂x mγ ∂x2
which can be written with the mean force potential U (x) as
- .
∂W (x, t) 1 ∂ ∂ ∂U
= kB T + W (x, t). (7.84)
∂t mγ ∂x ∂x ∂x

7.8 Master Equation


The master equation is a very general linear equation for the probability
density. If the variable x takes on only integer values, it has the form
∂Wn 
= (wm→n Wm − wn→m Wn ) , (7.85)
∂t m

where Wn is the probability to find the integer value n and wm→n is the
transition probability. For continuous x, the summation has to be replaced by
an integration

∂W (x, t)
= (wx →x W (x , t) − wx→x W (x, t)) dx . (7.86)
∂t
The Fokker–Planck equation is a special form of the master equation with

∂ ∂2
wx →x = − D(1) (x) + 2 D(2) (x) δ(x − x ). (7.87)
∂x ∂x
So far, we have discussed only Markov processes where the change of proba-
bility at time t only depends on the probability at time t. If memory effects
are included, the generalized Master equation results.

Problems
7.1. Smoluchowski Equation
Consider a one-dimensional random walk. At times tn = nΔt, a particle at
position xj = jΔx jumps either to the left side j − 1 with probability wj− or
7.8 Master Equation 99

to the right side j + 1 with probability wj+ = 1 − wj− . The probability to find
a particle at site j at the time n + 1 is then given by
+ −
Pn+1,j = wj−1 Pn,j−1 + wj+1 Pn,j+1 .

Show that in the limit of small Δx, Δt, the probability distribution P (t, x)
obeys a Smoluchowski equation

w−j +
wj

x
j−1 j j+1

7.2. Eigenvalue Solution to the Smoluchowski Equation


Consider the one-dimensional Smoluchowski equation
- .
∂W (x, t) 1 ∂ ∂ ∂U ∂
= kB T + W (x, t) = − S(x)
∂t mγ ∂x ∂x ∂x ∂x

for a harmonic potential


mω 2 2
U (x) = x .
2
Show that the probability current can be written as

kB T −U(x)/kT ∂ U(x)/kB T
S(x, t) = − e e W (x, t)
mγ ∂x

and that the Fokker–Planck operator can be written as


- .
1 ∂ ∂ ∂U kB T ∂ −U (x)/kB T ∂ U (x)/kB T
LFP = kB T + = e e
mγ ∂x ∂x ∂x mγ ∂x ∂x

and can be transformed into a hermitian operator by

L = eU(x)/2kB T LFP e−U(x)/2kB T .

Solve the eigenvalue problem

Lψn (x) = λn ψn (x)

and use the function ψ0 (x) to construct a special solution

W (x, t) = eλ0 t e−U(x)/2kB T ψ0 (x).


100 7 Kinetic Theory: Fokker–Planck Equation

7.3. Diffusion Through a Membrane


A

kA km

kB km

A membrane with M pore proteins separates two half-spaces A and B. An ion


X may diffuse through M pore proteins in the membrane from A to B or vice
versa. The rate constants for the formation of the ion-pore complex are kA
and kB , respectively, while km is the constant for the decay of the ion–pore
complex independent of the side to which the ion escapes. Let PN (t) denote
the probability that there are N ion–pore complexes at time t. The master
equation for the probability is

dPN (t)
= − [(kA + kB )(M − N ) + 2km N ] PN (t)
dt
+ (kA + kB )(M − N + 1)PN −1 (t) + 2km (N + 1)PN +1 (t). (7.88)

Calculate mean and variance of the number of complexes



N= N PN ,
N

2
σ2 = N 2 − N
and the mean diffusion current
dNB dNA
JAB = − ,
dt dt
where NA,B is the number of ions in the upper of lower half-space.
8
Kramers’ Theory

Kramers [32] used the concept of Brownian motion to describe motion of


particles over a barrier as a model for chemical reactions in solution. The prob-
ability distribution of a particle moving in an external potential is described
by the Klein–Kramers equation (7.71):
- .
∂W (x, v, t) ∂ ∂ K(x) γkB T ∂ 2
= − v+ γv − + W (x, v, t)
∂t ∂x ∂v m m ∂v 2
∂ ∂
= − Sx − Sv (8.1)
∂x ∂v
and the rate of the chemical reaction is related to the probability current Sx
across the barrier.

8.1 Kramers’ Model


A particle in a stable minimum A has to reach the transition state B by
diffusive motion and then converts to the product C. The minimum well and
the peak of the barrier are both approximated by parabolic functions (Fig. 8.1)
m 2
UA = ω (x − x0 )2 , (8.2)
2 a
m
U ∗ = Ea − ω ∗2 x2 . (8.3)
2
Without the chemical reaction, the stationary solution is

Wstat = Z −1 exp {−H/kB T } . (8.4)

We assume that the perturbation due to the reaction is small and make the
following ansatz (Fig. 8.2)

W (x, v) = Wstat y(x, v), (8.5)


102 8 Kramers’ Theory

U(x)

A B C

U*
UA
x0 0 reaction coordinate x

Fig. 8.1. Kramers’ model

Wstat
y

x0 0

Fig. 8.2. Ansatz function. The stationary solution of a harmonic well is multiplied
with the function y(x, v) which switches from 1 to 0 at the saddle point

where the partition function is approximated by the harmonic oscillator


expression
2πkB T
Z= . (8.6)
mωa
The probability distribution should fulfill two boundary conditions:
1. In the minimum of the A-well, the particles should be in thermal equilib-
rium and therefore
 
H
W (x, v) = Z −1 exp − → y = 1 if x ≈ x0 . (8.7)
kB T

2. On the right side of the barrier (x > 0), all particles A are converted into
C and there will be no particles of the A species here

W (x, v) = 0 → y = 0 if x > 0. (8.8)

8.2 Kramers’ Calculation of the Reaction Rate

Let us now insert the ansatz function into the Klein–Kramers equation. For
the reversible part, we have
   
H H
Lrev exp − y(x, v) = exp − Lrev y(x, v) (8.9)
kB T kB T
8.2 Kramers’ Calculation of the Reaction Rate 103

and for the irreversible part


- .  
∂ γkB T ∂ 2 H
γv + exp − y(x, v)
∂v m ∂v 2 kB T
  - . 
H ∂ γkB T ∂ 2
= exp − −γv + y . (8.10)
kB T ∂v m ∂v 2
Note the subtle difference. The operator (∂/∂v)v is replaced by −v(∂/∂v).
Together, we have the following equation for y
∂y 1 ∂U ∂y ∂y γkB T ∂ 2 y
0 = −v + − γv + , (8.11)
∂x m ∂x ∂v ∂v m ∂v 2
which becomes in the vicinity of the top
∂y ∂y ∂y γkB T ∂ 2 y
0 = −v − ω ∗2 x − γv + . (8.12)
∂x ∂v ∂v m ∂v 2
Now, we make a transformation of variables

(x, v) → (η, ξ) = (x, v − λx). (8.13)

With
∂ ∂ ∂ ∂ ∂
x = η, v = ξ + λη, = −λ , = , (8.14)
∂x ∂η ∂ξ ∂v ∂ξ
(8.12) is transformed to

∂ ∂ γkB T ∂ 2
0 = (ξ + λη) y + ((λ − γ)ξ + (λ2 − λγ − ω 2 )η) y + y. (8.15)
∂η ∂ξ m ∂ξ 2
Now choose
γ γ2
λ= + + ω ∗2 (8.16)
2 4
to have the simplified equation
∂ ∂ γkB T ∂ 2
0 = (ξ + λη) y + ((λ − γ)ξ) y + y (8.17)
∂η ∂ξ m ∂ξ 2
which obviously has solutions which depend only on ξ and obey
∂ γkB T ∂ 2
ξ Φ(ξ) = − Φ(ξ). (8.18)
∂ξ m(λ − γ) ∂ξ 2

The general solution of this equation is1


 % 
m(λ − γ)
Φ(ξ) = C1 + C2 erf ξ . (8.19)
2γkB T
1
Note that λ − γ is always positive. This would not be true for the second root.
104 8 Kramers’ Theory

Now, we impose the boundary condition Φ → 0 for x → ∞, which means


ξ → −∞. From this, we find
C1 = −C2 erf(−∞) = C2 . (8.20)
Together, we have for the probability density
W (x, v)

 m 2 # % $
mωa v + U (x) m(λ − γ)
= C2 exp − 2 1 + erf (v − λx)
2πkB T kB T 2γkB T
(8.21)
and the flux over the barrier is approximately

S = dv vW (0, v)
 # % $
mωa −U (x∗ )/kB T −mv 2 /2kB T m(λ − γ)
= C2 e dv ve 1 + erf v
2πkB T 2γkB T

mωa −U (0)/kB T 2kB T γ
= C2 e 1− . (8.22)
2πkB T m λ
In the A-well, we have approximately
# mωa2 (x−xa )2
$
mv 2
mωa 2 +
W (x, v) ≈ 2C2 exp − 2
. (8.23)
2πkB T kB T
The total concentration is approximately
 
[A] = dx dvWA (x, v) = 2C2 . (8.24)

Hence, we find
ωa −U(0)/kB T γ
e
S = [A] 1− . (8.25)
2π λ
The square root can be written as
! γ 1 γ2
! !− + ∗2
!1 − γ ! 2 4 +ω
" 1 =" 1
γ2 γ2
γ
2 + 4 + ω ∗2 γ
2 + 4 +ω
∗2

! 1 2
!
! − γ + γ 2 + ω ∗2
! 2 4
=!" 2
2

− γ4 + γ4 + ω ∗2
1
2
− γ2 + γ4 + ω ∗2
= (8.26)
ω∗
8.2 Kramers’ Calculation of the Reaction Rate 105

and finally, we arrive at Kramers’ famous result


 
S ωa −U(0)/kB T γ γ2 ∗2
k= = e − + +ω . (8.27)
[A] 2πω ∗ 2 4

The bracket is Kramers’ correction to the escape rate. In the limit of high
friction, series expansion in γ −1 gives
ωa ω ∗ −U(0)/kB T
k = γ −1 e . (8.28)

In the limit of low friction, the result is
ωa −U(0)/kB T
k= e . (8.29)

9
Dispersive Kinetics

In this chapter, we consider the decay of an optically excited state of a


donor molecule in a fluctuating medium. The fluctuations are modeled by
time-dependent decay rates k (electron transfer), k−1 (back reaction), kda
(deactivation by fluorescence or radiationless transitions), and kcr (charge
recombination to the ground state). (Fig. 9.1)
The time evolution is described by the system of rate equations:
d
W (D∗ ) = −k(t)W (D∗ ) + k−1 (t)W (D+ A− ) − kda W (D∗ ),
dt
d
W (D+ A− ) = k(t)W (D∗ ) − k−1 (t)W (D+ A− ) − kcr W (D+ A− ), (9.1)
dt
which has to be combined with suitable equations describing the dynamics of
the environment. First, we will apply a very simple dichotomous model [33].

k
D*
D+A−
k −1
kda
kcr

DA

Fig. 9.1. Electron transfer in a fluctuating medium. The rates are time dependent

9.1 Dichotomous Model


The fluctuations of the rates are modeled by random jumps between two
different configurations (±) of the environment which modulate the val-
ues of the rates. The probabilities of the two states are determined by the
108 9 Dispersive Kinetics

master equation:

d W (+) −α β W (+)
= , (9.2)
dt W (−) α −β W (−)

which has the general solution:

W (+) = C1 + C2 e−(α+β)t ,
α
W (−) = C1 − C2 e−(α+β)t . (9.3)
β
Obviously, the equilibrium values are
β α
Weq (+) = , Weq (−) = (9.4)
α+β α+β
and the correlation function is (with Q± = ±1)

Q(t)Q(0) = Weq (+)(P (+, t|+, 0) − P (−, t|+, 0))


+Weq (−)(P (−, t|−, 0) − P (+, t|−, 0))
= (Weq (+) − Weq (−))2 + 4Weq (+)Weq (−)e−(α+β)t
= Q2 + (Q2  − Q2 )e−(α+β)t . (9.5)

Combination of the two systems of equations (9.1) and (9.2) gives the equation
of motion
d
W = AW (9.6)
dt
for the four-component state vector
⎛ ⎞
W (D∗ , +)
⎜ W (D∗ , −) ⎟
W=⎜ ⎝ W (D+ A− , +) ⎠
⎟ (9.7)
W (D+ A− , −)

with the rate matrix


⎛ ⎞
−α − k + − kda β +
k−1 0
⎜ α −β − k − − kda 0 −
k−1 ⎟
A=⎜ ⎝
⎟.

k +
0 −α − k−1 − kcr
+
β
− −
0 k α −β − k−1 − kcr
(9.8)

Generally, the solution of this equation can be expressed by using the left and
right eigenvectors and the eigenvalues λ of the rate matrix which obey

ARν = λν Rν , (9.9)

Lν A = λν Lν . (9.10)
9.1 Dichotomous Model 109

α
α D+A– (−)

D*(+) D*(–)
+−
k D A (+)
β

Fig. 9.2. Gated electron transfer

For initial values W(0), the solution is given by1


4
(Lν · W(0))
W(t) = Rν eλν t . (9.11)
ν=1
(L ν · Rν )

In the following, we consider a simplified case of gated electron transfer with


±
kda = kcr = k−1 = k − = 0 (Fig. 9.2). Then the rate matrix becomes
⎛ ⎞
−α − k + β 0 0
⎜ α −β 0 0 ⎟
⎜ ⎟. (9.12)
⎝ k+ 0 −α β ⎠
0 0 α −β
As initial values, we chose
⎛ ⎞ ⎛ β ⎞
Weq (+) α+β
⎜ Weq (−) ⎟ ⎜ α ⎟
W0 = ⎜

⎟ = ⎜ α+β
⎠ ⎝ 0
⎟.
⎠ (9.13)
0
0 0

There is one eigenvalue λ1 = 0 corresponding to the eigenvectors:


⎛ ⎞
0
⎜0⎟ & '
R1 = ⎜ ⎟
⎝ β ⎠ , L1 = 1 1 1 1 . (9.14)
α
4
This reflects simply conservation of ν=1 Wν in this special case. The
contribution of the zero eigenvector is
⎛ ⎞ ⎛ ⎞
0 0
L1 · W 0 1 ⎜ ⎟ ⎜
⎜0⎟=⎜ 0 ⎟
⎟.
R1 = ⎝ ⎠ ⎝ (9.15)
L1 · R1 α+β β Weq (+) ⎠
α Weq (−)
1
In the case of degenerate eigenvalues, linear combinations of the corresponding
vectors can be found such that Lν · Lν  = 0 for ν = ν  .
110 9 Dispersive Kinetics

A second eigenvalue λ2 = −(α + β) corresponds to the equilibrium in the final


state D+ A− where no further reactions take place
⎛ ⎞
0
⎜ 0 ⎟ & '
R2 = ⎜ ⎟
⎝ 1 ⎠ , L2 = α −β α −β . (9.16)
−1
The contribution of this eigenvalue is
L2 · W 0
R2 = 0 (9.17)
L2 · R2
since we assumed equilibrium in the initial state. The remaining two eigen-
values are
α + β + k 1
λ3,4 = − ± (α + β + k)2 − 4βk (9.18)
2 2
and the resulting decay will be in general biexponential. We consider two
limits.

9.1.1 Fast Solvent Fluctuations

In the limit of small k, we expand the square root to find


α+β α+β k α−β k
λ3,4 = − ± − ± + ··· . (9.19)
2 2 2 α+β 2
One of the eigenvalues is
α
λ3 = −(α + β) − k + ··· (9.20)
α+β
In the limit of k → 0, the corresponding eigenvectors are
⎛ ⎞
1
⎜ −1 ⎟ & '
R3 = ⎜ ⎟
⎝ −1 ⎠ , L3 = α −β 0 0 (9.21)
1
and will not contribute significantly. The second eigenvalue
β
λ4 = − k + · · · = −Weq (+)k (9.22)
α+β
is given by the average rate. The eigenvectors are
⎛ ⎞
β
⎜ α ⎟ & '
R4 = ⎜ ⎟
⎝ −β ⎠ , L4 = 1 1 0 0 (9.23)
−α
9.1 Dichotomous Model 111

and the contribution to the dynamics is


⎛ ⎞
β
(L4 · W0 ) 1 ⎜ ⎟
⎜ α ⎟ e−λ4 t .
R4 eλ4 t = (9.24)
(L4 · R4 ) α + β −β ⎠

−α

The total time dependence is approximately given by


⎛ ⎞
Weq (+)e−λ4 t
⎜ Weq (−)e−λ4 t ⎟
W(t) = ⎜ ⎟
⎝ Weq (+)(1 − e−λ4 t ) ⎠ . (9.25)
Weq (−)(1 − e−λ4 t )

9.1.2 Slow Solvent Fluctuations

In the opposite limit, we expand the square root for small k −1 to find
α+β k 1& '
λ3,4 = − − ± k + (α − β) + 2αβk −1 + · · · , (9.26)
2 2 2
αβ
λ3 = −β + + ··· , (9.27)
k
⎛ ⎞
0
⎜ 1 ⎟ & '
R3 = ⎜ ⎟
⎝ 0 ⎠ , L3 = α k 0 0 , (9.28)
−1
λ4 = −k − α + · · · , (9.29)
⎛ ⎞
k
⎜ −α ⎟ & '
R4 = ⎜ ⎟
⎝ −k ⎠ , L4 = 1 0 0 0 (9.30)
α
and the time evolution is approximately
⎛ ⎞
β
e−kt
α+β

⎜ ⎟
⎜ α
e−βt − βk e−kt ⎟
W(t) = ⎜ ⎟.
α+β
⎜ ⎟ (9.31)

β
(1 − e−kt )
α+β

α
α+β 1 − e−βt + βk e−kt

This corresponds to an inhomogeneous situation. One part of the ensemble is


in a favorable environment and decays with the fast rate k. The rest has to
wait for a suitable fluctuation which appears with a rate of β.
112 9 Dispersive Kinetics

9.1.3 Numerical Example (Fig. 9.3)

a 10 0 b

10 –1
occupation of the initial state

10 –2

10 –3
c 10 0 d

10 –1

10 –2

10 –3
0 5 10 0 5 10
time

Fig. 9.3. Nonexponential decay. Numerical solutions of (9.12) are shown for α = 0.1,
β = 0.9, (a) k = 0.2, (b) k = 2, (c) k = 5, and (d) k = 10. Dotted curves show the
two components of the initial state and solid curves show the total occupation of
the initial state

9.2 Continuous Time Random Walk Processes


Diffusive motion can be modeled by random walk processes along a one-
dimensional coordinate.

9.2.1 Formulation of the Model

The fluctuations of the coordinate X(t) are described as random jumps [34,
35]. The time intervals between the jumps (waiting time) and the coordinate
changes are random variables with independent distribution functions:

ψ(tn+1 − tn ) and f (Xn+1 , Xn ). (9.32)

The probability that no jump happened in the interval 0 . . . t is given by the


survival function:
 t  ∞
Ψ0 (t) = 1 − dt ψ(t ) = dt ψ(t ) (9.33)
0 t

and the probability of finding the walker at position X at time t is given by


(Fig. 9.4)
9.2 Continuous Time Random Walk Processes 113

X
X(t)

0 t1 t2 t3 t4 t

Fig. 9.4. Continuous time random walk

 ∞
P (X, t) = P (X, 0) ψ(t )dt
t
 t  ∞

+ dt dX  ψ(t − t )f (X, X  )P (X  , t ). (9.34)
0 −∞

Two limiting cases are well known from the theory of collisions. The correlated
process with

f (X, X  ) = f (X − X  ) (9.35)

corresponds to weak collisions. It includes normal diffusion processes as a


special case. For instance if we chose

ψ(tn+1 − tn ) = δ(tn+1 − tn − Δt) (9.36)

and

f (X − X  ) = pδ(X − X  − ΔX) + (1 − p)δ(X − X  + ΔX), (9.37)

we have

P (X, t + Δt) = pP (X − ΔX, t) + qP (X + ΔX, t), p+q =1 (9.38)

and in the limit Δt → 0 and ΔX → 0, Taylor expansion gives

∂P ∂P ∂2P
P (X, t) + Δt + · · · = P (X, t) + (q − p)ΔX + ΔX 2 + · · · (9.39)
∂t ∂X ∂X 2
The leading terms constitute a diffusion equation

∂P ΔX ∂P ΔX 2 ∂ 2 P
P = (q − p) + (9.40)
∂t Δt ∂X Δt ∂X 2
with drift velocity (q − p)(ΔX/Δt) and diffusion constant ΔX 2 /Δt.
The uncorrelated process, on the other hand, with

f (X, X  ) = f (X) (9.41)


114 9 Dispersive Kinetics

corresponds to strong collisions. This kind of process can be analyzed analyt-


ically and will be applied in the following.
The (normalized) stationary distribution Φeq of the uncorrelated process
obeys
 ∞  t  ∞
   
Φeq (X) = Φeq (X) ψ(t )dt + f (X) dt ψ(t − t ) dX  φeq (X  )
t 0 −∞
 ∞  t
= Φeq (X) ψ(t )dt + f (X) dt ψ(t ), (9.42)
t 0

which shows that


f (X) = Φeq (X). (9.43)

9.2.2 Exponential Waiting Time Distribution

Consider an exponential distribution of waiting times


 ∞

ψ(t) = τ −1 e−t/τ , Ψ0 (t) = τ −1 e−t /τ dt = e−t/τ . (9.44)
t

It can be obtained from a Poisson process which corresponds to the master


equation:
dPn
= −τ −1 Pn + τ −1 Pn−1 , n = 0, 1, 2, . . . (9.45)
dt
with the solution
(t/τ )n −t/τ
Pn (0) = δn,0 , Pn (t) = e (9.46)
n!
if we identify the survival function with the probability to be in the initial
state P0

Ψ (t) = P0 (t) = e−t/τ . (9.47)

The general uncorrelated process (9.34) becomes for an exponential distribu-


tion
 t 

P (X, t) = P (X, 0)e−t/τ + dt τ −1 e−(t−t )/τ dX  f (X, X  )P (X  , t ).
0
(9.48)

Laplace transformation gives



1 τ −1
P̃ (X, s) = P (X, 0) + dX  f (X, X  )P̃ (X  , s), (9.49)
s + τ −1 s + τ −1
9.2 Continuous Time Random Walk Processes 115

which can be simplified



(s + τ −1 )P̃ (X, s) = P (X, 0) + τ −1 dX  f (X, X  )P̃ (X  , s). (9.50)

Back transformation gives



d
+ τ −1 P (X, t) = τ −1 dX  f (X, X  )P (X  , t) (9.51)
dt
and finally

∂ 1 1
P (X, t) = − P (X, t) + dX  f (X, X  )P (X  , t), (9.52)
∂t τ τ
which is obviously a Markovian process, since it involves only the time t.
For the special case of an uncorrelated process with exponential waiting time
distribution, the motion can be described by

P (X, t) = LP (X, t), (9.53)
∂t
1
LP (X, t) = − (P (X, t) − φeq (X)P (t)) . (9.54)
τ

9.2.3 Coupled Equations

Coupling of motion along the coordinate X with the reactions gives the
following system of equations [36, 37]:
∂ & '
P (X, t) = −k(X) + L1 − τ1−1 P (X, t) + k−1 (X)C(X, t),
∂t
∂ & '
C(X, t) = −k−1 (X) + L2 − τ2−1 C(X, t) + k(X)P (X, t), (9.55)
∂t
where P (X, t)ΔX and C(X, t)ΔX are the probabilities of finding the system
in the electronic state D∗ and D+ A− , respectively; L1,2 are operators describ-
−1
ing the motion in the two states and the rates τ1,2 account for depopulation
via additional channels. For the uncorrelated Markovian process (9.54), the
rate equations take the form

∂ P (X, t)
∂t C(X, t)

k(X) + τ1−1 + τ −1 −k−1 (X) P (X, t)
=−
−k(X) k−1 (X) + τ2−1 + τ −1 C(X, t)

φ1 (X) P (t)
+τ −1 , (9.56)
φ2 (X) C(t)
116 9 Dispersive Kinetics

which can be written in matrix notation as



R(X, t) = −A(X)R(X, t) + τ −1 B(X)R(t). (9.57)
∂t
Substitution
R(X, t) = exp {−A(X)U(X, t)} (9.58)
gives
 

− A(X) R(X, t) + exp −A(X) U(X, t)
∂t
= −A(X)R(X, t) + τ −1 B(X)R(t), (9.59)

U(X, t) = τ −1 exp {A(X)t} B(X)R(t). (9.60)
∂t
Integration gives
 t
U(X, t) = U(X, 0) + τ −1 exp {A(X)t } B(X)R(t )dt , (9.61)
0

R(X, t) = exp(−A(X)t)R(X, 0)
 t
−1
+τ exp(A(X)(t − t))B(X)R(t )dt (9.62)
0

and the total populations obey the integral equation:

R(t) = exp(−At)R(0)
 t
+ τ −1 exp(A(t − t))BR(t )dt , (9.63)
0

which can be solved with the help of a Laplace transformation:


 ∞
R̃(s) = e−st R(t)dt, (9.64)
0
 ∞
e−st exp(−At)dt = (s + A)−1 , (9.65)
0

 ∞  t
−st
e dt exp(A(t − t))BR(t )dt
0 0
−1
= (s + A) BR̃(s). (9.66)

The Laplace transform of the integral equation

R̃(s) = (s + A)−1 R(0) + τ −1 (s + A)−1 BR̃(s) (9.67)


9.2 Continuous Time Random Walk Processes 117

is solved by
4 5−1
R̃(s) = 1 − τ −1 (s + A)−1 B (s + A)−1 R(0). (9.68)

We assume that initially, the system is in the initial state D∗ and the motion
is equilibrated:
φ1 (X)
R(X, 0) = . (9.69)
0
For simplicity, we treat here only the case of τ12 → ∞. Then we have

k + τ −1 −k−1
A= , (9.70)
−k k−1 + τ −1


−1 1 s + τ −1 + k−1 k−1
(s + A) =
(s + τ )(s + τ −1 + k + k−1 )
−1 k s + τ −1 + k
(9.71)

and with the abbreviations


−1
1
α= 1+ (k + k−1 ) (9.72)
s + τ −1

and 
f (X)1,2 = φ1,2 (X)f (X)dX, (9.73)

we find

(s + A)−1 R(0)


 −1   1 
s+τ −1 − (s+τ −1 )2 αk1
1
φ1 α α (s + τ −1 ) − k
= =
(s+τ −1 )2 αk1
1
(s + τ −1 )2 k
(9.74)

as well as

(s + A)−1 B
 
−1 − (s+τ −1 )2 αk1 (s+τ −1 )2 αk−1 2
1 1 1
= s+τ (9.75)
(s+τ −1 )2 αk1 s+τ −1 − (s+τ −1 )2 αk−1 2
1 1 1

and the final result becomes


1
R̃(s) =
s(s2 + τ −1 (s + αk1 + αk−1 2 ))

s(s + τ −1 ) − sαk1 + τ −1 αk−1 2
× . (9.76)
(s + τ −1 )αk1
118 9 Dispersive Kinetics

Let us discuss the special case of thermally activated electron transfer.


Here
αk1 , αk−1 2 1 (9.77)
and the decay of the initial state is approximately given by

(s + τ −1 ) + τ −1 s−1 αk−1 2 s + K2
P (s) = −1 −1
= 2
s + τ s (1 + αk1 + αk−1 2 )
2 s + s(K1 + K2 )
(9.78)

with
  
k−1 (X)
K2 = τ −1 1 + s−1 dXφ2 (X) (9.79)
1 + s+τ1 −1 (k(X) + k−1 (X))

k−1 (X)
≈ dXφ2 (X) , (9.80)
1 + τ (k(X) + k−1 (X))

k(X)
K1 = dXφ1 (X) . (9.81)
1 + τ (k(X) + k−1 (X))

This can be visualized as the result of a simplified kinetic scheme


d
P  = −K1 P  + K2 C, (9.82)
dt
d
C = K1 P  − K2 C (9.83)
dt
with the Laplace transform

sP̃ − P (0) = −K1 P̃ + K2 C̃, (9.84)


sC̃ − C(0) = K1 P̃ + K2 C̃, (9.85)

which has the solution


sP0 + K2 (P0 + C0 ) sC0 + K1 (C0 + P0 )
P = , C= . (9.86)
s(s + K1 + K2 ) s(s + K1 + K2 )

In the time domain, we find

K2 + K1 e−(K1 +K2 )t K1
P (t) = , C(t) = (1 − e−(K1 +K2 )t ). (9.87)
K1 + K2 K1 + K2
Let us now consider the special case that the back reaction is negligible and
k(X) = kΘ(X). Here, we have

s(s + τ −1 ) − sαk1
P̃ (s) = , (9.88)
s(s2 + τ −1 (s + αk1 ))
9.3 Power Time Law Kinetics 119

k(X)

φ(X)
−1
τ

X
k

Fig. 9.5. Slow solvent limit

  ∞
k(X) k
αk1 = dXφ1 (X) k(X)
= φ1 (X) k
dX
1+ s+τ −1 0 1+ s+τ −1
 ∞  0
s + τ −1
= bk , b= φ1 (X)dX, a = 1−b = φ1 (X)dX, (9.89)
k + s + τ −1 0 −∞
−1
(s + τ −1 − bk k+s+τ
s+τ
−1 ) s + τ −1 + k(1 − b)
P̃ (s) = = . (9.90)
(s2 + τ −1 (s + bk s+τ −1
k+s+τ −1 )) s2 + s(τ −1 + k) + bkτ −1
Inverse Laplace transformation gives a biexponential behavior

(μ+ + k(1 − 2b))e−t(k+μ− )/2 − (μ− + k(1 − 2b))e−t(k+μ+ )/2


P (t) = (9.91)
μ+ − μ−

with 
μ± = τ −1 ± k 2 + τ −2 + 2kτ −1 (1 − 2b). (9.92)
If the fluctuations are slow τ −1 k, then

k 2 + τ −2 + 2kτ −1 (1 − 2b) = k + (1 − 2b)τ −1 + · · · , (9.93)

μ+ = k + 2(1 − b)τ −1 + · · · , μ− = −k + 2bτ −1 + · · · , (9.94)


and the two time constants are approximately (Fig. 9.5)

k + μ+ k + μ−
= k + ··· , = bτ −1 + · · · (9.95)
2 2

9.3 Power Time Law Kinetics


The last example can be generalized to describe the power time law as
observed for CO rebinding in myoglobin at low temperatures. The protein
motion is now modeled by a more general uncorrelated process.2
2
A much more detailed discussion is given in [37].
120 9 Dispersive Kinetics

We assume that the rate k is negligible for X < 0 and very large for
X > 0. Consequently, only jumps X < 0 → X > 0 are considered. Then the
probability obeys the equation

P (X, t)|X<0
 ∞  0  t
= P (X, 0) ψ(t )dt + dX  dt ψ(t − t )f (X)P (X  , t )
t −∞ 0
 t  0
 
= φeq (X)Ψ0 (t) + φeq (X) dt ψ(t − t ) dX  P (X  , t ) (9.96)
0 −∞
 ∞
1 − ψ̃(s)
Ψ0 (t) = ψ(t )dt , Ψ˜0 (s) = (9.97)
t s
and the total occupation of inactive configurations is
 0  t
  
P< (t) = dXφeq (X) Ψ0 (t) + dt ψ(t − t )P< (t )
−∞ 0
 t
  
= a Ψ0 (t) + dt ψ(t − t )P< (t ) . (9.98)
0

Laplace transformation gives


P̃< (s) = a Ψ̃0 (s) + ψ̃(s)P̃< (s) (9.99)

with  0
a= dXφeq (X) (9.100)
−∞0
and the decay of the initial state is given by

aΨ̃0 (s) 1
P̃< (s) = = . (9.101)
1 − aψ̃(s) s + 1−a
aΨ ˜(s) 0

For a simple Poisson process (9.44) with


1
Ψ̃0 = , (9.102)
s + τ −1
this gives
a
P̃< (s) = , (9.103)
s + (1 − a)τ −1
which reproduces the exponential decay found earlier in the slow solvent limit
(9.95)
P< (t) = ae−t (1−a)/τ . (9.104)
The long-time behavior is given by the asymptotic behavior for s → 0. As
P< (t) → 0 for t → ∞, this is also the case for P̃< (s) in the limit s → 0.
Hence, the asymptotic behavior must be
9.3 Power Time Law Kinetics 121

aΨ̃0 (s)
P̃< (s) ≈ → 0, s → 0, (9.105)
1−a
a
P< (t) → Ψ0 (t), t → ∞. (9.106)
1−a
To describe a power time law at long times
P< (t) → t−β , t → ∞, (9.107)
P̃< (s) → sβ−1 , s → 0, (9.108)
the waiting time distribution has to be chosen as
1
Ψ0 (t) ∼ , t → ∞,
(zt)β
which implies
Ψ̃0 (s) ∼ z −β sβ−1 , (9.109)
−1
where z is the characteristic time for reaching the asymptotics. Finally, we
find
1 1
P̃< (s) ∼ 1−a β 1−β = . (9.110)
s+ a z s
s(1 + (z̃/s)β )
In the time domain, this corresponds to the Mittag–Leffler function3

 (−1)l (z̃t)βl
P< (t) = = Eβ (−(z̃t)β ) (9.111)
Γ (βl + 1)
l=0

which can be approximated by the simpler function:


1
. (9.112)
1 + (t/τ )β

Problems
9.1. Dichotomous Model for Dispersive Kinetics
k(X)

P(D*,X)
α
P(D*,−) P(D*,+)
β
X
k− k+
α
P(D+A−,−) P(D+A−,+)
β
3
which has also been discussed for nonexponential relaxation in inelastic solids and
dipole relaxation processes corresponding to Cole–Cole spectra.
122 9 Dispersive Kinetics

Consider the following system of rate equations


⎛ ⎞ ⎛ ⎞⎛ ⎞
P (D∗ , +) −k+ − α β 0 0 P (D∗ , +)
d ⎜
⎜ P (D

, −) ⎟
⎟=⎜
⎜ α −k− − β 0 0 ⎟ ⎜
⎟ ⎜ P (D

, −) ⎟ ⎟
⎝ + −
dt P (D A , +) ⎠ ⎝ k+ 0 −α β ⎠ ⎝ P (D A , +) ⎠
+ −

P (D+ A− , −) 0 k− α −β P (D+ A− , −).

Determine the eigenvalues of the rate matrix M . Calculate the left and right
eigenvectors approximately for the two limiting cases:
(a) Fast fluctuations k± α, β. Show that the initial state decays with an
average rate.
(b) Slow fluctuations k±
α, β. Show that the decay is nonexponential.
Part IV

Transport Processes
10
Nonequilibrium Thermodynamics

Biological systems are far from thermodynamic equilibrium. Concentration


gradients and electrostatic potential differences are the driving forces for
diffusive currents and chemical reactions (Fig. 10.1).

energy
JE

JA

A+B C JC
JB

JS
entropy

Fig. 10.1. Open nonequilibrium system. Energy and particles are exchanged with
the environment. Chemical reactions change the concentrations and consume or
release energy. Entropy is produced by irreversible processes like diffusion and heat
transport

In this chapter, we consider a system composed of n species k = 1, . . . , n


which can undergo r chemical reactions j = 1, . . . , r. We assume that the
system is locally in equilibrium and that all thermodynamic quantities are
locally well defined.

10.1 Continuity Equation for the Mass Density


We introduce the partial mass densities
mk
k = N k = ck m k , (10.1)
V
126 10 Nonequilibrium Thermodynamics

the total density


 1  M
= k = mk N k = , (10.2)
V V
k k

and the mass fraction


mk mk N k k
xk = Nk =  = . (10.3)
M k m k N k 

From the conservation of mass, we have



d d ∂
M k = mk N k = k dV
dt dt V ∂t
 r 
=− k vk dA + mk νkj rj dV (10.4)
(V ) j=1 V

which can also be expressed in the form of a continuity equation

∂ r
k = −div(k vk ) + mk νkj rj
∂t j=1

r
= −div(mk Jk ) + mk νkj rj (10.5)
j=1

with the diffusion fluxes Jk . For the total mass density



= k , (10.6)
k

we have
∂ 
 r
 = −div k v k + mk νkj rj . (10.7)
∂t j=1 k

Due to conservation of mass for each reaction, the last term vanishes

mk νkj = 0 (10.8)
k

and with the center of mass velocity


1
v= k v k , (10.9)

k

we have

 = −div(v). (10.10)
∂t
10.2 Energy Conservation 127

10.2 Energy Conservation


We define the specific values of enthalpy, entropy, and volume h, s, −1 ,
respectively, by
H = h  V,
S = s  V,
V = −1  V. (10.11)
The differential of the enthalpy is

dH = T dS + V dp + μk dNk , (10.12)
k

where μk is a generalized chemical potential, which also includes the potential


energy of electrostatic or gravitational fields which are assumed to be time-
independent. For the specific quantities, we find
V (hd +  dh) = T V (sd +  ds) + V dp
 V
+ μk (xk d +  dxk ) (10.13)
mk
k

and if we combine all terms with d,


 
 μk  xk
 dh − T  ds − dp −  dxk = Ts + μk − h d. (10.14)
mk mk
k k

The right-hand side can be written as


 
1 
TS + μk Nk − H d, (10.15)
V
k

which vanishes in equilibrium. Hence, we have for the specific quantities


 1
dh = T ds + −1 dp + μk dxk . (10.16)
mk
k

In the following, we consider a resting solvent without convection. We neglect


the center of mass motion and its contribution to the total energy. The dif-
fusion currents are taken relative to the solvent.1 For constant pressure, we
have
∂(h) ∂(s)  μk ∂k
=T + (10.17)
∂t ∂t mk ∂t
k
and the enthalpy obeys the continuity equation
∂(h)
= −div(Jh ) (10.18)
∂t
with the enthalpy flux Jh .
1
A more general discussion can be found in [38].
128 10 Nonequilibrium Thermodynamics

10.3 Entropy Production


The entropy change 
dS = d s dV (10.19)
V

has contributions from the local entropy production dSi and from exchange
with the surroundings dSe :

dS = dSi + dSe . (10.20)

The local entropy production can only be positive:

dSi ≥ 0, (10.21)

whereas entropy exchange with the surroundings can be positive or negative


(Fig. 10.2).
We denote by σ the entropy production per volume:

dSi
= σ dV. (10.22)
dt V

The entropy exchange is connected with the total entropy flux:



dSe
=− Js dA. (10.23)
dt (V )

Together, we have
∂(s)
= −div(Js ) + σ, (10.24)
∂t
σ ≥ 0. (10.25)
From (10.17) the time derivative of the entropy per volume is

∂(s) 1 ∂(h) 1  μk ∂k


= − (10.26)
∂t T ∂t T mk ∂t
k

dSe

dSi

dSe

Fig. 10.2. Entropy change. The local entropy production dSi is always positive,
whereas entropy exchange with the environment dSe can have both signs
10.3 Entropy Production 129

and inserting (10.5) and (10.26)


⎛ ⎞
1  μk ⎝ r
∂(s) 1
= − div(Jh ) − −div(mk Jk ) + mk νkj rj ⎠ , (10.27)
∂t T T mk j=1
k

which can be written as


 
∂(s) 1  μk 1
= −div Jh + Jk + Jh grad
∂t T T T
k
 μ
1 
k
− Jk grad + Aj rj (10.28)
T T j
k

with the chemical affinities



Aj = − μk νkj . (10.29)
k

From comparison of (10.28) and (10.24), we find the entropy flux


 
1 
Js = Jh − μk Jk (10.30)
T
k

and the entropy production


 μ
1 
1 k
σ = Jh grad − Jk grad + Aj rj
T T T j
k
  
 1 1 1 
= Jh − μk Jk grad − Jk grad(μk ) + Aj rj .
T T T j
k k
(10.31)

We define the heat flux as2


Jq = T Js . (10.32)
The entropy production is a bilinear form
 
σ = Kq Jq + Kk Jk + Kj rj (10.33)
k j

of the fluxes
Jq , Jk , rj (10.34)
2
The definition of the heat flux is not unique in the case of simultaneous diffusion
 transport. Our definition is in analogy to the relation T dS = dH −
and heat
V dp − k μk dNk for an isobaric system. With this convention, the diffusion flux
does not depend on the temperature gradient explicitly.
130 10 Nonequilibrium Thermodynamics

and the thermodynamic forces



1
Kq = grad ,
T
1
Kk = − grad(μk ),
T
1
Kj = Aj . (10.35)
T
Instead of entropy production, alternatively the rate of energy dissipation can
be considered which follows from multiplication of (10.31) with T
1  
T σ = − Jq grad(T ) − Jk grad(μk ) + Aj rj . (10.36)
T j
k

10.4 Phenomenological Relations


In equilibrium, all fluxes and thermodynamic forces vanish

T = const, μk = const, Aj = 0. (10.37)

The entropy production is also zero and entropy has its maximum value. If
the deviation from equilibrium is small, the fluxes3 can be approximated as
linear functions of the forces:

Jα = Lα,β Kβ , (10.38)
β

where the so-called Onsager coefficients Lα,β are material constants.4 As


Onsager has shown, the matrix of coefficients is symmetric Lα,β = Lβ,α . The
entropy production has the approximate form
 
σ= Jα Kα = Lα,β Kα Kβ . (10.39)
α α,β

According to the second law of thermodynamics, σ ≥ 0 and therefore the


matrix of Onsager coefficients is positive definite.

10.5 Stationary States


Under certain circumstances,5 stationary states are characterized by a min-
imum of entropy production, which is compatible with certain external
conditions. Consider a system with n independent fluxes and thermodynamic
3
Only a set of independent fluxes should be used here.
4
Fluxes with different tensorial character are not coupled.
5
Especially the Onsager coefficients have to be constants.
10.5 Stationary States 131

T T+ΔT

Jq

Fig. 10.3. Combined diffusion and heat transport

forces. If the forces K1 , . . . , Km are kept fixed and a stationary state is reached,
then the fluxes corresponding to the remaining forces vanish:

Jα = 0, α = m + 1, . . . , n (10.40)

and the entropy production is minimal since


∂σ
= Jα , α = m + 1, . . . , n. (10.41)
∂Kα |K1 ,...,Km

A simple example is the coupling of diffusion and heat transport between two
reservoirs with fixed temperatures T and T + ΔT (Fig. 10.3).
In a stationary state, the diffusion current must vanish Jd = 0 since any dif-
fusion would lead to time-dependent concentrations. The entropy production
is
σ = Jq Kq + Jd Kd . (10.42)
If the force
1 ΔT
Kq = grad ≈− 2 (10.43)
T T
is fixed by keeping the temperature distribution fixed, then in the stationary
state there is only transport of heat but not of mass. The entropy production
is

σ = Lqq Kq2 + Ldd Kd2 + 2Lqd Kq Kd


2
Lqq Ldd − L2qd 2 Lqd
= Kq + Ldd Kd + Kq , (10.44)
Ldd Ldd

where due to the positive definiteness of Lij

Ldd > 0, Lqq > 0, LddLqq − L2qd > 0. (10.45)

As a function of Kd , the entropy production is minimal for


Lqd 1
0 = Kd + Kq = Jd (10.46)
Ldd Ldd
132 10 Nonequilibrium Thermodynamics

hence in the stationary state. Obviously, the chemical potential adjusts such
that
1 Lqd ΔT
Kd = − Δμ = , (10.47)
T Ldd T 2
which leads to a concentration gradient6 :
Δc Lqd ΔT
≈− . (10.48)
c Ldd kB T 2

Problems

10.1. Entropy Production by Chemical Reactions

Consider an isolated homogeneous system with T = const and μk = const but


nonzero chemical affinities

Aj = − μk νkj
k

and determine the rate of entropy increase.

6
This is connected to the Seebeck effect.
11
Simple Transport Processes

11.1 Heat Transport


Consider a system without chemical reactions and diffusion and with constant
chemical potentials.1 The thermodynamic force is

1 1
Kq = grad ≈ − 2 gradT (11.1)
T To
and the phenomenological relation is known as Fourier’s law:
Jq = −κgradT. (11.2)
The entropy flux is
1 1
Js = Jq = Jh (11.3)
T T
and the energy dissipation is
1 κ 2
T σ = − Jq gradT = (gradT ) . (11.4)
T T
From the continuity equation for the enthalpy, we find
∂(h) ∂T
= cp = κΔT (11.5)
∂t ∂t
and hence the diffusion equation for heat transport
∂T κ
= ΔT. (11.6)
∂t cp
For stationary heat transport in one dimension, the temperature gradient is
constant, hence also the heat flux. The entropy flux, however, is coordinate
dependent due to the local entropy production (Fig. 11.1).
1 κ
divJs = −Jq grad(T ) = 2 (gradT )2 = σ. (11.7)
T2 T
1
We neglect the temperature dependence of the chemical potential here.
134 11 Simple Transport Processes

Jq

T1 T2
JS

Fig. 11.1. Heat transport

11.2 Diffusion in an External Electric Field


Consider an ionic solution at constant temperature without chemical reac-
tions. The thermodynamic force is
1
Kk = − grad(μk ). (11.8)
T
For a dilute solution, we have2

μk = μ0k + kB T ln(ck ) + Zk eΦ (11.9)

and hence the thermodynamic force


kB 1
Kk = − grad(ck ) − grad (Zk eΦ) . (11.10)
ck T
The entropy production is given by
 kB T
Tσ = Jk − grad ck − Zk e grad Φ . (11.11)
ck
k

The phenomenological relations have the general form



Jk = Lk,k Kk , (11.12)
k

where the interaction mobilities Lk,k vanish for very small ion concen-
trations, whereas the direct mobilities Lk,k are only weakly concentration
dependent [39, 40]. Inserting the forces, we find
 kB 1 
Jk = − Lk,k gradck − grad(Φ) Lk,k Zk e, (11.13)
ck T
k  k

which for small ion concentrations can be written more simply by introducing
the diffusion constant as
ck Zk eDk
Jk = −Dk grad ck − grad Φ, (11.14)
kB T
2
The concentration (particles per volume) is ck = k /mk = xk /mk .
11.2 Diffusion in an External Electric Field 135

which is known as the Nernst–Planck equation. This equation can be under-


stood in terms of motion in a viscous medium (7.1). For the motion of a mass
point, we have
mv̇ = F − mγv. (11.15)
In a stationary state, the velocity is
1
v= F (11.16)

and the particle current
c cD
J = cv = F= F, (11.17)
mγ kB T
where we used the Einstein relation
kB T
D= . (11.18)

From comparison with the Nernst–Planck equation, we find

kB T cZeD
F=− Dgradc + grad Φ
cD kB T
kB T
=− gradc − grad ZeΦ. (11.19)
c
The charge current is

Zk eck Dk kB T
Jel = Zk eJk = − gradck + gradZk eΦ
kB T ck
(Zk e)2 ck Dk
= −Zk eDk grad ck − grad Φ. (11.20)
kB T
The prefactor of the potential gradient is connected to the electrical conduc-
tivity:
(Zk e)2 ck Dk
Gk = . (11.21)
kB T
Together with the continuity equation
∂ck
= −div Jk , (11.22)
∂t
we arrive at a Smoluchowski-type equation

∂ck ck Zk eD
= div Dk gradck + gradΦ . (11.23)
∂t kB T
136 11 Simple Transport Processes

− +

− grad(c)

− grad(Φ)

Fig. 11.2. Diffusion in an electric field

In equilibrium, we have
μk = const (11.24)
and hence
kB T ln ck + Zk eΦ = const (11.25)
or
ck = c0k e−Zk eΦ/kB T . (11.26)
This is sometimes described as an equilibrium of the currents due to concen-
tration gradient on the one side and to the electrical field on the other side
(Fig. 11.2).
The Nernst–Planck equation has to be solved together with the Poisson–
Boltzmann equation for the potential

div(εgrad Φ) = − Zk eck . (11.27)

In one dimension, the Poisson–Nernst–Planck equations are


dc ZecD dΦ
J = −D − , (11.28)
dx kB T dx
d d 
 Φ=− Zk eck . (11.29)
dx dx

Problems
11.1. Synthesis of ATP from ADP
In mitochondrial membranes, ATP is synthesized according to

out  ATP + H2 O + 2Hin


ADP + POH + 2H+ +
11.2 Diffusion in an External Electric Field 137

where (in) and (out) refer to inside and outside of the mitochondrial mem-
brane.
Express the equilibrium constant K as a function of the potential difference

Φin − Φout (11.30)

and the proton concentration ratio

c(H+ +
in )/c(Hout ). (11.31)
12
Ion Transport Through a Membrane

12.1 Diffusive Transport


We can draw a close analogy between diffusion and reaction on the one side
and electronic circuit theory on the other side. We regard the membrane as a
resistance for the current of diffusion and the difference in chemical potential
as the driving force (Fig. 12.1):

μ = μ0 + kB T ln c, (12.1)

μ − μ0
c = exp . (12.2)
kB T
Assuming linear variation of the chemical potential over the thickness a of the
membrane, we have
x
μ(x) = μinside + Δμ. (12.3)
a
The diffusion current is
∂μ Δμ
J = −C = −C (12.4)
∂x a
and the constant is determined from the diffusion equation in the center of
the membrane:

∂c ∂ ∂ kB T ∂c ∂ ∂
0= = − (J) = − −C = D(x) c
∂t ∂x ∂x c ∂x ∂x ∂x
D
→C = c (12.5)
kB T
with
μinside + (Δμ/2) μoutside + μinside √
c = exp = exp = cinside coutside . (12.6)
kB T 2kB T
140 12 Ion Transport Through a Membrane

cinside coutside

Fig. 12.1. Transport across a membrane

J
μoutside μinside

μ0 μ0

Fig. 12.2. Equivalent circuit for the diffusion through a membrane

Finally, we have (Fig. 12.2)

DcΔμ Δμ
J =− =− . (12.7)
akB T R

If we include a gradient of the electrostatic potential, then the equations


become
μ = μ0 + kB T ln c + ZeΦ, (12.8)
μ − μ0 − ZeΦ
c = exp , (12.9)
kB T
∂μ Δμ kB T Δ ln c + ZeΔΦ
J = −C = −C = −C , (12.10)
∂x a a
and the electric current is

(Ze)2 kB T ln c
I = ZeJ = − Δ + Δφ
R Ze
= −g(VNernst − V ) (12.11)

with the Nernst potential


kB T coutside
VNernst = ln (12.12)
Ze cinside
and the membrane potential (Fig. 12.3)

V = Φinside − Φoutside . (12.13)


12.2 Goldman–Hodgkin–Katz Model 141

I
inside

g VN
V

outside

Fig. 12.3. Equivalent circuit for the electric current

μoutside μinside
R R
Jm
C

μ0 μ0

Fig. 12.4. Equivalent circuit with capacity

The transport through biological membranes occurs often via the formation
of pore–ligand complexes. The number of these complexes depends on the
chemical potential of the ligands and needs time to build up. It may therefore
be more realistic to associate also a capacity Cm with the membrane, which
we define in analogy to the electronic capacity via the change of the potential
(Fig. 12.4)
dμm
Jm = Cm . (12.14)
dt
The change of the potential is
dμm kB T dc
= (12.15)
dt c dt
and the current is from the continuity equation
Jm dc
− divJ = = . (12.16)
a dt
Hence, the capacity is
ac
Cm = . (12.17)
kB T

12.2 Goldman–Hodgkin–Katz Model


In the rest state of a neuron, the potassium concentration is higher in the
interior whereas there are more sodium and chlorine ions outside. The con-
centration gradients have to be produced by (energy-consuming) ion pumps.
142 12 Ion Transport Through a Membrane

membrane
outside JNa+ inside
JK+

JCl−

charge
density

potential Φm

0 d x

Fig. 12.5. Coupled ionic fluxes

Let us first consider only one type of ions and constant temperature. From
(11.25), we have
kB T
Φ+ ln ck = const. (12.18)
Zk e
Usually, the potential is defined as zero on the outer side and
kB T ck,outside
Vk = Φinside − Φoutside = ln (12.19)
Zk e ck,inside

is the Nernst potential (12.12) of the ion species k.


We now want to calculate the potential for a steady state with more than
one ionic species present. If we take the ionic species Na+ , K+ , and Cl−
which are most important in nerve excitation that will define the so-called
Goldman–Hodgkin–Katz model [41] (Fig. 12.5).
We want to calculate the dependence of the fluxes Jk on the concentrations
ck,in and ck,out . To that end, we multiply the Nernst–Planck (11.28) equation
by eyk where
Φ
yk = Zk e (12.20)
kB T
to get

dck dyk d
Jk eyk = −Dk eyk + ck = −Dk (ck eyk ). (12.21)
dx dx dx

This can be integrated over the thickness d of the membrane


12.2 Goldman–Hodgkin–Katz Model 143
 d  d

d
Jk eyk dx = −D (ck eyk )dx = −D ck,in eZk eΦm /kB T − ck,out .
0 0 dx
(12.22)

We assume a linear variation of the potential across the membrane


x
Φ(x) = Φm . (12.23)
d
This is an approximation which is consistent with the condition of electroneu-
trality. On the boundary between liquid and membrane, the first derivative
of Φ will be discontinuous corresponding to a surface charge distribution.1
Assuming a stationary state with ∂J/∂x = 0, we can evaluate the left-hand
side of (12.22):
kB T d Zk eΦm /kB T

Jk e − 1 = −D ck (d)eZk eΦm /kB T − ck (0) , (12.24)


Zk eΦm
which gives the current

D Zk eΦm ck,in eZk eΦm /kB T − ck,out


Jk = − . (12.25)
d kB T eZk eΦm /kB T − 1
In a stationary state, the total charge current has to vanish

I= Zk eJk = 0 (12.26)
k

and we find
e2 Φm  ck,in eZk eΦm /kB T − ck,out
Dk Zk2 = 0. (12.27)
kB T d eZk eΦm /kB T − 1
With ZNa+ = ZK+ = 1, ZCl− = −1 and
eΦm
ym = , b m = ey m , (12.28)
kB T
we have
cNa,in bm − cNa,out cK+ ,in bm − cK+ ,out
DNa+ + DK+ (12.29)
bm − 1 bm − 1
−1
cCl,in bm − cCl,out
+DCl− = 0. (12.30)
b−1
m −1

Multiplication with (bm − 1) gives

DNa+ (cNa,in bm − cNa,out ) + DK+ (cK+ ,in bm − cK,out ) (12.31)


−bm DCl− (CCl,in b−1
m − cCl,out ) = 0 (12.32)
1
We do not consider a possible variation of  here.
144 12 Ion Transport Through a Membrane

and we find
DNa+ cNa,out + DK+ cK,out + DCl− cCl,in
bm = , (12.33)
DNa+ cNa,in + DK+ cK,in + DCl− cCl,out
and finally
kB T D + cNa,out + DK+ cK,out + DCl− cCl,in
Φm = ln Na . (12.34)
e DNa+ cNa,in + DK+ cK,in + DCl− cCl,out
This formula should be compared with the Nernst equation:
kB T ck,outside
Vk = Φinside − Φoutside = ln . (12.35)
Zk e ck,inside
The ionic contributions appear weighted with their mobilities. The Nernst
equation is obtained if the membrane is permeable for only one ionic species.

12.3 Hodgkin–Huxley Model


In 1952, Hodgkin and Huxley won the Nobel prize for their quantitative
description of the squid giant axon dynamics [42].
They thought of the axon membrane as an electrical circuit. They assumed
independent currents of sodium and potassium, a capacitive current, and a
catch-all leak current. The total current is the sum of these (Fig. 12.6):

Iext = IC + INa + IK + IL . (12.36)

The capacitive current is


dV
IC = Cm . (12.37)
dt
For the ionic currents, we have (12.11)

Ik = gk (V − Vk ), (12.38)

where gk is the channel conductance which depends on the membrane poten-


tial and on time, and Vk is the specific Nernst potential (12.12). The macro-
scopic current relates to a large number of ion channels which are controlled

Vinside
Im INa IK ILeak
Iext
Cm

Voutside

Fig. 12.6. Hodgkin–Huxley model


12.3 Hodgkin–Huxley Model 145

by gates which can be in either permissive or nonpermissive state. Hodgkin


and Huxley considered a simple rate process for the transition between the
two states with voltage-dependent rate constants

α(V )
closed  open. (12.39)
β(V )

The variable p which denotes the number of open gates obeys a first-order
kinetics:
dp
= α(1 − p) − βp. (12.40)
dt
Hodgkin and Huxley introduced a gating particle n describing the potas-
sium conductance (Fig. 12.7) and two gating particles for sodium (m being
activating and h being inactivating). They used the specific equations

gK = gk p4n , (12.41)

gNa = g Na p3m ph , (12.42)

K+

membrane membrane

n−gates

inactive state open state

Fig. 12.7. Potassium gates

40
potential (mV)

0
– 40
– 80
1.0
probability

0.5

0.0
0 100 200 300 400 500
time (ms)

Fig. 12.8. Hodgkin–Huxley model. Equation (12.43) is solved numerically [43]. Top:
excitation by two short rectangular pulses Iext (t) (solid ) and membrane potential
V (t) (dashed ). Bottom: fraction of open gates pm (full ), pn (dashed ), and ph (dash-
dotted )
146 12 Ion Transport Through a Membrane

40

potential (mV)
0
– 40
–80
1.0
probability

0.5

0.0
0 100 200 300 400 500
time (ms)

Fig. 12.9. Refractoriness of the Hodgkin–Huxley model. A second pulse cannot


excite the system during the refractory period. Details as in Fig. 12.8

which gives finally a highly nonlinear differential equation:


dV
Iext (t) = Cm + gk p4n (V − VK ) + gNa p3m ph (V − VNa ) + gL (V − VL ). (12.43)
dt
This equation has to be solved numerically. It describes many electrical
properties of the neuron-like threshold and refractory behavior, repetitive
spiking, and temperature dependence (Figs. 12.8 and 12.9). Later, we will
discuss a simplified model with similar properties in more detail (Sect. 13.3).
13
Reaction–Diffusion Systems

Reaction–diffusion systems are described by nonlinear equations and show the


formation of structures. Static as well as dynamic patterns found in biology
can be very realistically simulated [44].

13.1 Derivation

We consider coupling of diffusive motion and chemical reactions. For constant


temperature and total density, we have the continuity equation
∂ 
ck = −div Jk + νkj rj (13.1)
∂t
j

together with the phenomenological equation


 
Jk = − Lkj grad k ln cj = − Dkj grad cj , (13.2)
j j

which we combine to get


∂  
ck = Dkj Δcj + νkj rj , (13.3)
∂t j j

where the reaction rates are nonlinear functions of the concentrations:



νkj rj = Fk ({ci }). (13.4)
j

We assume for the diffusion coefficients the simplest case that the different
species diffuse independently (but possibly with different diffusion constants)
148 13 Reaction–Diffusion Systems

and that the diffusion fluxes are parallel to the corresponding concentration
gradients. We write the diffusion–reaction equation in matrix form
⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞
c1 D1 c1 F1 ({c})
∂ ⎜ . ⎟ ⎜ .. ⎟ ⎜ .. ⎟ ⎜ .. ⎟
⎝ .. ⎠ = ⎝ . ⎠Δ⎝ . ⎠ + ⎝ . ⎠ (13.5)
∂t
cN DN cN FN ({c})

or briefly

c = DΔc + F(c). (13.6)
∂t

13.2 Linearization
Since a solution of the nonlinear equations is generally possible only numeri-
cally, we investigate small deviations from an equilibrium solution c0 1 with

c0 = 0, (13.7)
∂t
Δc0 = 0. (13.8)
Obviously, the equilibrium corresponds to a root

F(c0 ) = 0. (13.9)

We linearize the equations by setting

c = c0 + ξ (13.10)

and expand around the equilibrium



∂ ∂F 
ξ = DΔξ + F(c0 + ξ) = DΔξ + ξ. (13.11)
∂t ∂c c0

Denoting the matrix of derivatives by M0 , we can discuss several types of


instabilities:
Spatially constant solutions. For a spatially constant solution, we have

ξ = M0 ξ (13.12)
∂t
with the formal solution
ξ = ξ 0 exp(M0 t). (13.13)
Depending on the eigenvalues of M , there can be exponentially growing
or decaying solutions, oscillating solutions, and exponentially growing or
decaying solutions.
1
We assume tacitly that such a solution exists.
13.3 Fitzhugh–Nagumo Model 149

Plane waves. Plane waves are solutions of the linearized problem. Using the
Ansatz
ξ = ξ0 ei(ωt−kx) (13.14)
gives
iωξ = −k 2 Dξ + M0 ξ = Mk ξ. (13.15)
For a stable plane wave solution, λ = iω is an eigenvalue of

Mk = M0 − k 2 D (13.16)

with
(λ) ≤ 0. (13.17)
If there are purely imaginary eigenvalues for some k, they correspond to stable
solutions which are spatially inhomogeneous and lead to formation of certain
patterns.

13.3 Fitzhugh–Nagumo Model


The Hodgkin–Huxley model (12.43) describes characteristic properties like
the threshold behavior and the refractory period where the membrane is not
excitable. Since the system of equations is rather complicated, a simpler model
was developed by Fitzhugh (1961) and Nagumo (1962) which involves only
two variables u, v (membrane potential and recovery variable):

u3
u̇ = u − − v + I(t), (13.18)
3
v̇ = (u + a − bv). (13.19)
The standard parameter values are

a = 0.7, b = 0.8,  = 0.08. (13.20)

Nagumo studied an electronic circuit with a tunnel diode which is described


by rather similar equations (Fig. 13.1):

I = C V̇ + Idiode (V ) + ILR , (13.21)


V − E − RILR
V = RILR + LI˙LR + E → I˙LR = . (13.22)
L
The Fitzhugh–Nagumo model can be used to describe the excitation propa-
gation along the membrane. After rescaling the original equations and adding
diffusive terms, we obtain the reaction–diffusion equations:
3
∂ u u − u3 − v + I(t) 1 u
= + Δ . (13.23)
∂t v (u − bv + a) δ v
150 13 Reaction–Diffusion Systems

R
C
V
tunnel L
diode
E

Fig. 13.1. Nagumo circuit

2
v

–1
–3 –2 –1 0 1 2
u

Fig. 13.2. Nullclines of the Fitzhugh–Nagumo equations. The solutions of (13.24)


and (13.25) are shown for different values of the current I = 0, 1, 2, 3

The evolution of the FN model can be easily represented in a two-dimensional


uv-plot. The u-nullcline and the v-nullcline are defined by
u3
u̇ = 0 → v = u − + I0 , (13.24)
3
a+u
v̇ = 0 → v = . (13.25)
b
The intersection of the nullclines is an equilibrium since here u̇ = v̇ = 0
(Fig. 13.2).
Consider small deviations from the equilibrium values
u = ueq + μ v = veq + η. (13.26)
The linearized equations are
μ̇ = (1 − u2eq )μ − η, (13.27)
η̇ = (μ − bη). (13.28)
Obviously, instability has to be expected for u2eq < 1.
13.3 Fitzhugh–Nagumo Model 151

The matrix of derivatives



1 − u2eq −1
M0 = (13.29)
 −b

has the eigenvalues


1 1

λ= 1 − u2eq − b ± (1 − u2eq + b)2 − 4 . (13.30)


2
The square root is zero for
1

ueq = ± 1 + b ± 2 . (13.31)

This defines a region around ueq = ±1 where the eigenvalues have a nonzero
imaginary part (Fig. 13.3).
For standard parameters, we distinguish the regions
1 √
I : |ueq | > 1 + b + 2 M  = 1.277,
II : 0.706 < |ueq | < 1.277,
1

III : |ueq | < 1 + b − 2  = 0.706. (13.32)

Within region II, the real part of both eigenvalues is (Fig. 13.4)
1
λ = (1 − u2eq − b) (13.33)
2
and since √ √
1 + b − 2  < u2eq < 1 + b + 2 , (13.34)
we have
√ √
1 − u2eq − b − 2  + 2b < 0 < 1 − u2eq − b + 2  + 2b, (13.35)

0.3
I II III II I
0.2

0.1
Im(λ)

–0.1

–0.2

–2 –1 0 1 2
ueq

Fig. 13.3. Imaginary part of the eigenvalues


152 13 Reaction–Diffusion Systems

III
II II
0

Re(λ) I I

–1

–2

–3
–2 –1 0 1 2
ueq

Fig. 13.4. Real part of the eigenvalues

√ √
λ + b −  < 0 < λ + b + , (13.36)

|λ + b| < . (13.37)
For standard parameters,

− 0.35 < λ < 0.22. (13.38)

Oscillating instabilities appear in region II if



 < b →  < b, (13.39)

which is the case for standard parameters. Outside region II instabilities


appear if at least the larger of the two real eigenvalues is positive or
1
(1 − u2eq + b)2 − 4 > u2eq − 1 + b. (13.40)

This is the case if the right-hand side is negative or

u2eq < 1 − b (13.41)

hence in the center of region III if b < 1. For standard parameters 1 −


b = 0.936 and according to (13.32), the whole region III is instable. If the
right-hand side of (13.40) is positive, we can take the square

(1 − u2eq + b)2 − 4 > (u2eq − 1 + b)2 , (13.42)


which simplifies to
1
u2eq < 1 − . (13.43)
b
This is false for b < 1 and hence for standard parameters the whole region I
is stable.
Part V

Reaction Rate Theory


14
Equilibrium Reactions

In this chapter, we study chemical equilibrium reactions. In thermal equilib-


rium of forward and backward reactions, the overall reaction rate vanishes and
the ratio of the rate constants gives the equilibrium constant, which usually
shows an exponential dependence on the inverse temperature.1

14.1 Arrhenius Law


Reaction rate theory goes back to Arrhenius who in 1889 investigated the
temperature-dependent rates of inversion of sugar in the presence of acids.
Empirically, a temperature dependence is often observed of the form

k(T ) = Ae−Ea /kB T (14.1)

with the activation energy Ea . Considering a chemical equilibrium (Fig. 14.1)

k
A  B, (14.2)
k

this gives for the equilibrium constant


k
K= (14.3)
k
and
Ea − Ea
ln K = ln k − ln k  = ln A − ln A − . (14.4)
kB T

1
An overview over the development of rate theory during the past century is given
in [45].
156 14 Equilibrium Reactions

reactants products
Ea

E’a
ΔH0

reaction coordinate

Fig. 14.1. Arrhenius law

In equilibrium, the thermodynamic forces vanish

T = const, (14.5)

A= μk νk = 0. (14.6)
k

For dilute solutions with

μk = μ0k + kB T ln ck , (14.7)

we have
 
μ0k νk + kB T νk ln ck = 0, (14.8)
k k

which gives the van’t Hoff relation for the equilibrium constant
  0
μ νk ΔG0
ln(Kc ) = νk ln ck = − k k = − . (14.9)
kB T kB T
k

The standard reaction free energy can be divided into an entropic and an
energetic part:

ΔG0 −ΔH 0 ΔS 0
− = + . (14.10)
kB T kB T k
Since volume changes are not important at atmospheric pressure, the free
reaction enthalpy gives the activation energy difference

Ea − Ea = ΔH 0 . (14.11)

A catalyst can only change the activation energies but never the difference
ΔH 0 .
14.2 Statistical Interpretation of the Equilibrium Constant 157

14.2 Statistical Interpretation of the Equilibrium


Constant
The chemical potential can be obtained as

∂F ∂ ln Z
μk = = −kB T . (14.12)
∂Nk T,V,N  ∂Nk T,V,N 
k k

Using the approximation of the ideal gas, we have


 z Nk
k
Z= (14.13)
Nk !
and 
ln Z ≈ Nk ln zk − Nk ln Nk + Nk , (14.14)
k

which gives the chemical potential


zk
μk = −kB T ln . (14.15)
Nk
Let us consider a simple isomerization reaction

A  B. (14.16)

The partition functions for the two species are (Fig. 14.2)
 
zA = e−n (A)/kB T , zB = e−n (B)/kB T . (14.17)
n=0,1,... n=0,1,...

In equilibrium,
μB − μA = 0, (14.18)

ε3(B)

ε2(B)
ε3(A)
ε1(B)
ε2(A)
ε0(B)
ε1(A)
Δε0
ε0(A)
A B

Fig. 14.2. Statistical interpretation of the equilibrium constant


158 14 Equilibrium Reactions
zB zA
− kB T ln = −kB T ln , (14.19)
NB NA
zB NB
= = (NB /V )(NA /V )−1 = Kc , (14.20)
zA NA
 
e−n (B)/kB T e−(n (B)−0 (B))/kB T
e−Δ/kB T .
n=0,1,... n=0,1,...
Kc =  −n (A)/kB T
=  −(n (A)−0 (A))/kB T
n=0,1,... e n=0,1,... e
(14.21)

This is nothing but a thermal distribution over all energy states of the system.
15
Calculation of Reaction Rates

The activated behavior of the reaction rate can be understood from a sim-
ple model of two colliding atoms (Fig. 15.1). In this chapter, we discuss the
connection to transition state theory, which takes into account the internal
degrees of freedom of larger molecules and explains not only the activation
energy, but also the prefactor of the Arrhenius law [27, 28, 46].

15.1 Collision Theory


The Arrhenius expression consists of two factors. The exponential gives the
number of reaction partners with enough energy and the prefactor gives the
collision frequency times the efficiency.
We consider the collision of two spherical particles A and B in a coordinate
system, where B is fixed and A is moving with the relative velocity

vr = v A − vB . (15.1)

The two particles collide if the distance between their centers is smaller than
the sum of the radii:
d = rA + rB . (15.2)
During a time interval dt, the number of possible collision partners is

cB πd2 vr dt (15.3)

and the number of collisions within a volume V is

Ncoll /V = cA cB π(rA + rB )2 vr dt. (15.4)

Assuming independent Maxwell distributions for the velocities


3  
m ma va2 mb vb2
f (va vb )d3 va d3 vb = exp − − d3 va d3 vb , (15.5)
2πkB T 2kB T 2kB T
160 15 Calculation of Reaction Rates

vr

d = ra +rb
vr dt

Fig. 15.1. Collision of two particles

we have a Maxwell distribution also for the relative velocities


3/2  
μ μvr2
3
f (vr )d vr = exp − d3 vr (15.6)
2πkB T 2kB T

with the reduced mass


MA MB
μ= . (15.7)
MA + MB
The average relative velocity then is
%
8kB T
vr = (15.8)
πμ

and the collision frequency is


%
dNcoll 8kB T
= ca cb πd2 . (15.9)
V dt πμ

If both particles are of the same species, we have instead



dNcoll 1 2 2 16kB T
= ca πd , (15.10)
V dt 2 πM
since each collision involves two molecules. Since each collision will not
lead to a chemical reaction, we introduce the reaction cross section σ(E)
which depends on the kinetic energy of the relative velocity. As a simple
approximation, we use the reaction cross section of hard spheres:

0 if E < E0 ,
σ(E) = (15.11)
π(ra + rb )2 if E > E0 ,

where E0 is the minimum energy necessary for a reaction to occur (Fig. 15.2).
The distribution of relative kinetic energy can be determined as follows.
From the one-dimensional Maxwell distribution

m
e−mv /2kB T dv,
2
f (v)dv = (15.12)
2πkB T
15.1 Collision Theory 161

σ(E)

Eo E

Fig. 15.2. Hard-sphere approximation to the reaction cross section

we find the distribution of relative kinetic energy for one particle as1

ma −Ea /kB T dEa 1
f (Ea )dEa = 2 e √ =√ e−Ea /kB T dEa .
2πkB T 2mEa πkB T Ea
(15.13)
The joint distribution for the two particles is
f (Ea , Eb )dEa dEb

1 1
= √ e−Ea /kB T dEa √ e−Eb /kB T dEb (15.14)
πkB T Ea πkB T Eb
and the distribution of the total kinetic energy is given by
 E
f (E) = f (Ea , E − Ea )dEa
0
 E
1 1
=  e−Ea /kB T −(E−Ea )/kB T dEa
πkB T 0 Ea (E − Ea )
 E
1 −E/kB T dEa
= e  . (15.15)
πkB T 0 Ea (E − Ea )
This integral can be evaluated by substitution
Ea = E sin2 θ, dEa = 2E sin θ cos θ dθ,
 E  π/2  π/2
dEa 2E sin θ cos θdθ
 = √ = 2dθ = π (15.16)
0 Ea (E − Ea ) 0 E sin2 θE cos2 θ 0

and we find
1 −E/kB T
f (E) = e . (15.17)
kB T
Now
 ∞  ∞
f (E)σ(E)dE = f (E)πd2 dE = πd2 e−E0 /kB T (15.18)
0 E0

1
There is a factor of 2 since v 2 = (−v)2 .
162 15 Calculation of Reaction Rates

and we have finally


%
2 8kB T −E0 /kB T
r = ca cb πd e . (15.19)
πμ

For nonspherical molecules, the reaction rate depends on the relative orienta-
tion. Therefore, a so-called steric factor p is introduced:
%
8kB T −E0 /kB T
r = ca cb pπd2 e . (15.20)
πμ

There is no general way to calculate p and often it has to be estimated.


Together, p and d2 give an effective molecular diameter2

d2eff = pd2 . (15.21)

15.2 Transition State Theory

According to transition state theory [47, 48], the reactants form an equilib-
rium amount of an activated complex which decomposes to yield the products
(Figs. 15.3 and 15.4).
The reaction path [49] leads over a saddle point called the transition state.
According to TST, the reaction of two substances A and B is written

K k1
A+B  [A−B]‡ → C+D . (15.22)
reactants activated complex products

activated complex
reactants products

reaction coordinate

Fig. 15.3. Transition state theory. The reactants are in chemical equilibrium with
the activated complex which can decompose into the products

2
The quantity πd2 /4 is equivalent to the collision cross section used in connection
with nuclear reactions.
15.2 Transition State Theory 163

A+BC
RAB

[A−B−C]
R*AB

AB+C

R*BC RBC

Fig. 15.4. Potential energy contour diagram. The potential energy for the reaction
A+BC → [A−B−C]‡ → AB+C is shown schematically as a function of the distances
RAB and RBC . In general, three coordinates are needed to give the relative positions
of the nuclei, but if atom A approaches the molecule BC, the direction of minimum
potential energy is along the line of centers for the reaction. Arrows indicate the
reaction coordinate. The activated complex corresponds to a saddle point of the
potential energy surface

If all molecules behave as ideal gases, the equilibrium constant K may be


calculated as
c[A−B]‡ z[A−B]‡ −ΔE0 /kB T
Kc = = e , (15.23)
cA cB zA zB
where ΔE0 is the difference in zero-point energies of the reactants and the acti-
vated complex. The activated complex may be considered a normal molecule,
except that one of its vibrational modes is so loose that it allows immedi-
ate decomposition into the products. The corresponding contribution to the
partition function is
1 kB T
zω→0 = → . (15.24)
1− e−ω/kB T ω
Thus, the equilibrium constant becomes

kB T z ‡ −ΔE0 /kB T
K= e , (15.25)
ω zA zB

where z ‡ differs from z[A−B]‡ by the contribution of the unique vibrational


mode.
If ω/2π is considered the decomposition frequency, then the rate of
decomposition is

dcC ω kB T z ‡ −ΔE0 /kB T


r= = c[A−B]‡ = e cA cB = k2 cA cB . (15.26)
dt 2π 2π zA zB
164 15 Calculation of Reaction Rates

If not every activated complex decays into products, the expression for the
rate constant has to be modified by a transmission coefficient κ:

kB T z ‡ −ΔE0 /kB T
k2 = κ e . (15.27)
2π zA zB
In most cases, κ is close to 1. Important exceptions are the formation of
a diatomic molecule, since the excess energy can only be eliminated by
third-body collisions or radiation and reactions that involve changes from
one type of electronic state to another – for instance in certain cis–trans
isomerizations.

15.3 Comparison Between Collision Theory


and Transition State Theory
For comparison, we apply transition state theory to the reaction of two spher-
ical particles. For the reactants, only translational degrees of freedom are
relevant giving the partition function

(2πmkB T )3/2
zT = . (15.28)
h3
For the activated complex, we consider the rotation around the center of mass.
The moment of inertia is
mA mB
I = (rA + rB )2 (15.29)
mA + mB
and the partition function is

8π 2 IkB T
zR = . (15.30)
h2
The rate expression from TST is

kB T (2π(mA + mB )kB T )3/2 h3 2


k= 3/2 3/2
8π kB T
h (2πkB T )3 mA mB
(rA + rB )2 mA mB −ΔE0 /kB T
× e
h2 mA + mB

 mA + mB
= 8πkB T (rA + rB )2 e−ΔE0 /kB T , (15.31)
mA mB
which is the same result as obtained from collision theory.
We consider now a bimolecular reaction. Each reactant has three trans-
lational, three rotational, and 3N − 6 vibrational3 degrees of freedom. The
3
3N − 5 for a collinear molecule
15.4 Thermodynamical Formulation of TST 165

partition functions are


3 3 3NA −6
zA = zT zR zV ,
3 3 3NB −6
zB = zT zR zV ,
3 3 3NA +3NB −7
z ‡ = zT zR zV . (15.32)

The reaction rate from TST is


kB T z ‡ −E0 /kB T kB T zV5
−E0 /kB T
kTST = e ≈ 3 z3 e . (15.33)
h zA zB h zT R

For the rigid sphere model


3
zA = zB = zT , z ‡ = zT
3 2
zR (15.34)

hence the rate constant is


2
kB T zR −E0 /kB T
krigid = 3 e . (15.35)
h zT

From comparison, we see that the steric factor


5
zV
p≈ (15.36)
zR

which is typically in the order of 10−5 .

15.4 Thermodynamical Formulation of TST


Consider again the reaction

A+B  [A−B] → products (15.37)

with the equilibrium constant


cAB‡
Kc = . (15.38)
cA cB

The TST rate expression (15.25) gives4

kB T
k2 = Kc (15.39)
2π

4
The activated complex is treated as a normal molecule with the exception of the
special mode.
166 15 Calculation of Reaction Rates

and with (14.9) we have for an ideal solution

kB T −ΔG0‡ /kB T kB T −ΔH 0‡ /kB T ΔS 0‡ /kB


k2 = e = e e . (15.40)
2π 2π
The temperature dependence of the rate constant is

d ln k2 1 ∂ ΔG0‡
= − . (15.41)
dT T ∂T kB T
Now for ideal gases or solutions, the chemical potential has the form
z
μ = kB T ln c − kB T ln (15.42)
V
hence  zi
ΔG0‡ = −kB T νi ln (15.43)
V
and
∂ ΔG0‡  ∂  Ui
=− νi ln zi = νi = ΔU 0‡ . (15.44)
∂T kB T ∂T kB T 2
Comparison with the Arrhenius law

d Ea Ea
ln A − = (15.45)
dT kB T kB T 2

shows that the activation energy is

Ea = kB T + ΔU 0‡ ≈ kB T + ΔH 0‡ (ideal solutions). (15.46)

For a bimolecular reaction in solution, we find from


kB T ΔS 0‡ /kB −ΔH 0‡ /kB T kB T ΔS 0‡ /kB −(Ea −kB T )/kB T
rTST = e e = e e (15.47)
h h
that the steric factor is determined by the entropy of formation of the activated
complex.

15.5 Kinetic Isotope Effects


There are two origins of the kinetic isotope effect. The first is quantum
mechanical tunneling through the potential barrier.
It is only important at very low temperatures and for reactions involving
very light atoms. For the simplest model case of a square barrier, the tunneling
probability is approximately
# $
2m(V0 − E)
P = exp −2 a . (15.48)
2
15.5 Kinetic Isotope Effects 167

ln(k)

V0

E tunneling
activated

a 1/kBT

Fig. 15.5. Tunneling through a rectangular barrier. Left: the tunneling probability
depends on the height and width of the barrier. Right: at low temperatures, tunneling
dominates (dashed ). At higher temperatures, activated behavior is observed (dash-
dotted )

activated complex
ln(k)

Eo(H)
Eo(D)
reactants
Ea(H)

Ea(D)
Eo(H)
Eo(D)
1/kBT

Fig. 15.6. Isotope dependence of the activation energy

The tunneling probability depends on the mass m but is independent of


temperature (Fig. 15.5).
The second origin of isotope effects is the difference in activation energy
for reactions involving different isotopes. Vibrational frequencies and therefore
vibrational zero-point energies depend on the reduced mass μ of the vibrating
system: %
 f
ω = . (15.49)
2π μ
Since the transition state is usually more loosely bound than the reactants,
the vibrational levels are more closely spaced. Therefore, the activation energy
is higher for the heavier isotopomer (a normal isotope effect). The maximum
isotope effect is obtained when the bond involving the isotope is completely
broken in the transition state. Then, the difference in activation energies is
simply the difference in zero-point energies of the reactants (Fig. 15.6).
168 15 Calculation of Reaction Rates

15.6 General Rate Expressions

The potential energy surface has a saddle point in the transition state region.
The surface S ∗ , passing through that saddle point along the direction of steep-
est ascent, defines the border separating the reactants from the products in
quite a natural way. We introduce the so-called intrinsic reaction coordinate
(IRC) qr as the path of steepest descent from the cole connecting reactants
and products. We set qr = 0 for the points on the surface S ∗ . The instanta-
neous rate r(t) is given by the flux of systems that cross the surface S ∗ at time
t and are on the product side at t → ∞. This definition allows for multiple
crossings of the surface S ∗ . In general, however, it will depend on how exactly
the surface S ∗ is chosen. Only if the fluxes become stationary, this dependence
vanishes.

15.6.1 The Flux Operator

Classically, the flux is the product of the number of systems passing through
the surface S ∗ and their velocity vr normal to that surface. As we defined the
reaction coordinate qr to be normal to the surface S ∗ , the velocity is
dqr
vr = (15.50)
dt
and the classical flux is
   
pr
F  = dn−1 q dn p vr W (q, p, t) = dn q dn p δ(qr ) W (q, p, t).
mr
(15.51)

Here, W (q, p, t) is the phase-space distribution and mr is the reduced mass


corresponding to the coordinate qr . We may therefore define the classical flux
operator
pr
F (q, p) = δ(qr ) . (15.52)
mr
In the quantum mechanical case, we have to modify this definition since p
and q do not commute. It can be shown that we have to use the symmetrized
product
1 1
F = (pr δ(qr ) + δ(qr )pr ) = {pr , δ(qr )} (15.53)
2mr 2mr
with
 ∂
pr = . (15.54)
i ∂qr
The projection operator on to the systems on the product side is simply given
by
P = θ(qr ). (15.55)
15.6 General Rate Expressions 169

However, we need the projector P (t) on that states that at some time t in the
future are on the product side. If H is the system Hamiltonian, then

P (t) = eiHt/ θ(qr )eiHt/ (15.56)

and that part of the density matrix ρ(t) that will end in the product side in
the far future is

lim P (t − t)ρ(t)P (t − t). (15.57)
t →∞

The rate is given by

r(t) = lim tr (P (t − t)ρ(t)P (t − t)F ) . (15.58)


t →∞

The time dependence of the density matrix is given by the van Neumann
equation
dρ(t)
i = [H, ρ(t)]. (15.59)
dt
In thermal equilibrium,

ρeq = Q−1 exp (−βH) , Q = tr (exp(−βH)) (15.60)

is time independent as [exp(−βH), H] = 0. It can be shown that for t → ∞,


the density matrix ρeq and the projector P (t) commute5 :

lim P (t − t)ρeq P (t − t) = lim P (t )ρeq . (15.61)


t →∞ t →∞

The rate will then be also independent of t:


 

r = Q−1 lim tr e−βH eiHt / θ(qr ) e−iHt / F . (15.62)


t →∞

This expression can be modified such that the limit operation does not appear
explicitly any more. To that end, we note that the trace vanishes for t =06
and

 t = ∞
−1
r = Q tr e −βH iHt /
e θ(qr ) e −iHt /
F  
t =0
 ∞

d −βH iHt / −iHt /


= Q−1 e e θ(qr ) e F dt . (15.63)
0 dt
5
Asymptotically, the states on the product side will leave the reaction zone with
positive momentum pr , so that we may replace θ(qr ) with θ(pr ). Again asymptot-
ically, these states are eigenfunctions of the Hamiltonian, so that P (t = ∞) and
exp(−βH) commute.
6
This follows from time inversion symmetry: the trace has to be symmetric with
respect to that operation. Time inversion changes pr → −pr . The operators ρ and
qr are not affected. So the trace has to be symmetric and antisymmetric at the
same time.
170 15 Calculation of Reaction Rates

For the derivative, we find


d iHt /  i  
e θ(qr ) e−iHt / = eiHt / [H, θ(qr )] e−iHt / . (15.64)
dt 
Since only the kinetic energy p2r /2mr does not commute with θ(qr ), the
commutator is
- 2 .
pr  ∂ pr ∂ pr
[H, θ(qr )] = , θ(qr ) = θ(qr ) − θ(qr )
2mr i ∂qr 2mr ∂qr 2mr

 pr pr ∂ ∂ pr
= δ(qr ) + θ(qr ) − θ(qr )
i 2mr 2mr ∂qr ∂qr 2mr

 pr pr 
= δ(qr ) + δ(qr ) = F. (15.65)
i 2mr 2mr i
Therefore, we can express the reaction rate as an integral over the flux–flux
correlation function
 ∞  ∞
& '
r = Q−1 tr e−βH F (t)F (0) dt = F (t)F (0)dt. (15.66)
0 0

Ultimately, this expression for the reaction rate has been used a lot in
numerical computations. It is quite general and may easily be extended to
nonadiabatic reactions.

Problems
15.1. Transition State Theory
v

[AB]
A+B
δx

Instead of using a vibrational partition function to describe the motion of the


activated complex over the reaction barrier, we can also use a translational
partition function. We consider all complexes lying within a distance δx of the
barrier to be activated complexes. Use the translational partition function for
a particle of mass m in a box of length δx to obtain the TST rate constant.
Assume that the average velocity of the particles moving over the barrier is7

‡ 1 2kB T
v = .
2 πm‡
7
The particle moves to the left or right side with equal probability.
15.6 General Rate Expressions 171

15.2. Harmonic Transition State Theory


For systems such as solids, which are well described as harmonic oscillators
around stationary points, the harmonic form of TST is often a good approx-
imation, which can be used to evaluate the TST rate constant, which can
then be written as the product of the probability of finding the system in the
transition state and the average velocity at the transition state:

kTST = vδ(x − x‡ ).

For a one-dimensional model, assume that the transition state is an exit point
from the parabola at some position x = x‡ with energy ΔE = mω 2 x‡2 /2 and
evaluate the TST rate constant

ΔE

x x
16
Marcus Theory of Electron Transfer

In 1992, Rudolph Marcus received the Nobel Prize for chemistry. His theory
is currently the dominant theory of electron transfer in chemistry. Originally,
Marcus explained outer-sphere electron transfer, introducing reorganization
energy and electronic coupling to relate the thermodynamic transition state
to nonequilibrium fluctuations of the medium polarization [50–52].

16.1 Phenomenological Description of ET


We want to describe the rate of electron transfer (ET) between two species, D
and A, in solution. If D and A are neutral, this is a charge separation process:

D + A → D+ + A − . (16.1)

If the charge is transferred from one molecule to the other, this is a charge
resonance process
D− + A → D + A− (16.2)
or
D + A+ → D+ + A. (16.3)
In the following, we want to treat all these kinds of processes together. There-
fore from now on, the symbols D and A implicitly have the meaning DZD and
AZA and the general reaction scheme

DZD + AZA → DZD +1 + AZA −1 (16.4)

will be simply denoted as

D + A → D+ + A − . (16.5)

Marcus theory is a phenomenological theory, in which the assumption is made,


that ET proceeds in the following three steps:
174 16 Marcus Theory of Electron Transfer

1. The two species approach each other by diffusive motion and form the
so-called encounter complex:

kdif

D + A  [D − A] . (16.6)
k−dif

2. In the activated complex, ET takes place

ket
‡ ‡
[D − A]  [D A− ] .
+
(16.7)
k−et

The actual ET is much faster than the motion of the nuclei. The nuclear
conformation of the activated complex and the solvent polarization do not
change. According to the Franck–Condon principle, the two states [D − A]‡
and [D+ A− ]‡ have to be isoenergetic.
3. After the transfer, the polarization returns to a new equilibrium and the
products separate
ksep

[D A− ] → D+ + A− .
+
(16.8)

The overall reaction rate is


d
r= c − = ksep c[D+ A− ]‡ . (16.9)
dt A
We want to calculate the observed quenching rate
d
r=− cD = kq cD cA . (16.10)
dt
We consider the stationary state
d d
0= c ‡ = c + − ‡. (16.11)
dt [D−A] dt [D A ]
We deduce that

0 = kdif cD cA + k−et c[D+ A− ]‡ − (k−dif + ket )c[D−A]‡ , (16.12)

0 = ket c[D−A]‡ − (k−et + ksep )c[D+ A− ]‡ . (16.13)


Eliminating the concentration of the encounter complex from these two
equations, we find
ket
c[D+ A− ]‡ = c ‡
k−et + ksep [D−A]
ket kdif
= cD cA (16.14)
k−dif k−et + k−dif ksep + ket ksep
16.2 Simple Explanation of Marcus Theory 175

and the observed quenching rate is


kdif
kq = k−dif k−dif k−et
. (16.15)
1+ ket + ket ksep

The reciprocal quenching rate can be written as


1 1 k−dif k−dif k−et
= + + . (16.16)
kq kdif kdif ket kdif ket ksep
There are some interesting limiting cases:
• If ket
k−dif and ksep
k−et , then the observed quenching rate is given
by the diffusion rate kq = kdif and the ET rate is limited by the formation
of the activated complex.
• If the probability of ET is very small ket k−dif and k−et kdif , then
the quenching rate is given by kq = ket kdif /k−dif = ket Kdif .
• If diffusion is not important as in solids or proteins and in the absence of
further decay channels, the quenching rate is given by kq ≈ ket .

16.2 Simple Explanation of Marcus Theory


The TST expression for the ET rate is
ωN ‡
ket = κet e−ΔG /kB T , (16.17)

where κet describes the probability of a transition between the two elec-
tronic states DA and D+ A− , and ωN is the effective frequency of the nuclear
motion. We consider a collective reaction coordinate Q which summarizes the
polarization coordinates of the system (Fig. 16.1).
Initially, the polarization is in equilibrium with the reactants and for small
polarization fluctuations, series expansion of the Gibbs free energy gives
a 2
Gr (Q) = G(0)
r + Q . (16.18)
2

Gr (Q)
Gp(Q)

ER
ΔG
ΔG

Q =0 Qs Q1 Q

Fig. 16.1. Displaced oscillator model


176 16 Marcus Theory of Electron Transfer

ER −ΔG

activationless

activated inverted
ln(k)

Fig. 16.2. Marcus parabola

If the polarization change induced by the ET is not too large (no dielectric
saturation), the potential curve for the final state will have the same curvature
but is shifted to a new equilibrium value Q1 :
a a
p + (Q − Q1 ) = Gr + ΔG + (Q − Q1 ) .
Gp (Q) = G(0) 2 (0) 2
(16.19)
2 2
In the new equilibrium, the final state is stabilized by the reorganization
energy1
a
ER = −(Gp (Q1 ) − Gp (0)) = Q21 . (16.20)
2

The transition between the two states is only possible if due to some thermal
fluctuations the crossing point is reached which is at
a 2
2 Q1+ ΔG ER + ΔG
Qs = = √ . (16.21)
aQ1 2aER
The corresponding activation energy is

a 2 (ΔG + ER )2
ΔG‡ = Qs = (16.22)
2 4ER
hence the rate expression becomes

ωN (ΔG + ER )2
ket = κet exp − . (16.23)
2π 4ER kB T
If we plot the logarithm of the rate as a function of the driving force ΔG,
we obtain the famous Marcus parabola which has a maximum at ΔG = −ER
(Fig. 16.2).
1
Sometimes the reorganization energy is defined as the negative quantity Gp (Q1 )−
Gp (0).
16.3 Free Energy Contribution of the Nonequilibrium Polarization 177

16.3 Free Energy Contribution of the Nonequilibrium


Polarization
In Marcus theory, the solvent is treated in a continuum approximation. The
medium is characterized by its static dielectric constant εst and its optical
dielectric constant εop = ε0 n2 . The displacement D is generated by the charge
distribution :
div D = . (16.24)
In equilibrium, it is related to the electric field by

D = ε0 E + P = εE. (16.25)

Here P is the polarization, which is nothing else but the influenced dipole
density. It consists of two parts [53]:
1. The optical or electronical polarization Pop . It is due to the induced dipole
moments due to the response of the molecular electrons to the applied field.
At optical frequencies, the electrons still move fast enough to follow the
electric field adiabatically.
2. The inertial polarization Pin which is due to the reorganization of the
solvent molecules (reorientation of the dipoles, changes of molecular geom-
etry). This part of the polarization corresponds to the static dielectric
constant. The nuclear motion of the solvent molecules cannot follow the
rapid oscillations at optical frequencies.
The total frequency-dependent polarization is

P(ω) = Pop (ω) + Pin (ω) = D(ω) − ε0 E(ω) = (ε(ω) − ε0 )E(ω). (16.26)

In the static limit, this becomes

Pop + Pin = (εst − ε0 )E (16.27)

and at optical frequencies

Pop = (εop − ε0 )E. (16.28)

Hence the contribution of the inertial polarization is in the static limit

Pin = (εst − εop )E. (16.29)

In general, rot D = 0 and the calculation of the field for given charge distribu-
tion is difficult. Therefore, we use a model of spherical ions behaving as ideal
conductors with the charge distributed over the surface. Then in our case, the
displacement is normal to the surface and rot D = 0 and the displacement is
the same as it would be generated by the charge distribution  in vacuum.
The corresponding electric field in vacuum would be ε−1 0 D and the field in
178 16 Marcus Theory of Electron Transfer

the medium can be expressed as


1 1 1
E= (D − P) = D − (Pop + Pin )
ε0 ε0 ε0
1 1
= (D − Pin ) − (εop − ε0 )E,
ε0 ε0
1
E= (D − Pin ), (16.30)
εop
where we assumed that the optical polarization reacts instantaneously to
changes of the charge distribution or to fluctuations of the inertial polar-
ization. In equilibrium, (16.29) gives
1
E= (D − (εst − εop )E),
εop
1
E= D. (16.31)
εst
If the polarization is in equilibrium with the field (1/ε0 )D produced by the
charge distribution of either AB or A+ B− , the free energy G of the two configu-
rations will, in general, not be the same. We designate these two configurations
as I and II, respectively. Marcus calculated the free energy of a nonequilibrium
polarization for which the free energies of the reaction complexes [AB]‡ and
[A+ B− ]‡ become equal. The contribution to the free energy is
   
1
Gel = d3 r E dD = d3 r (D − Pin )dD, (16.32)
εop
which can be divided into two contributions:
1. The energy without inertial polarization

1 D2
Gel,0 = d3 r . (16.33)
εop 2
2. The contribution of the inertial polarization

1
− d3 rPin dD. (16.34)
εop
In equilibrium, we have

1 1 1
Pin = (εst − εop )E = (εst − εop ) D = εop − D (16.35)
εst εop εst
and we can express the free energy as a function of Pin
 2
eq 1 3 1 Pin
Gel = Gel,0 − 1 d r . (16.36)
εop − εst
1 2 εop
16.3 Free Energy Contribution of the Nonequilibrium Polarization 179

(2)
P

Peq
(1)

DI/II D*I/II D

Fig. 16.3. Calculation of the free energy change

Now, we have to calculate the free energies of the activated complexes. Con-
sider a thermal fluctuation of the inertial polarization. We will do this in two
steps (Fig. 16.3).
In the first step, it is assumed that the polarization is always in equilib-
rium with the charge distribution. However, this charge density will be chosen
such that the nonequilibrium polarization P‡in of the activated complex is
generated. The electric fields are D∗ and E∗ :

1 ‡ 1 1
Pin = − D∗ . (16.37)
εop εop εst

The corresponding free energy is


  D∗   D∗
1 1
ΔG1el = d3 r D dD − d3 r Pin dD
εop D εop D

1 P‡2 − P2
= G∗el,0 − Gel,0 − d3 r in 2 in . (16.38)
1
εop − 1
εst
2εop

In the second step, the electric field of the charge distribution will be reduced
to the equilibrium value while keeping the nonequilibrium polarization fixed.2
Then this fixed polarization acts as an additional displacement:
1 1
E= (D − P‡in ) − Pop , (16.39)
ε0 ε0
where the optical polarization depends on the electric field according to (16.28)

1 ‡ εop
E = (D − Pin ) − − 1 E. (16.40)
ε0 ε0

2 ‡
We tacitly assume that this is possible. In general, the polarization Pin will also
modify the charge distribution on the reactants. If, however, the distance between
the two ions is large in comparison with the radii, these changes can be neglected.
Marcus calls this the point-charge approximation.
180 16 Marcus Theory of Electron Transfer

Hence, if we treat the optical polarization as following instantaneously, the


slow fluctuations of the inertial polarization
1
E= (D − P‡in ), (16.41)
εop

which means the inertial polarization is shielded by the optical polarization


it creates. We may now calculate the change in free energy as
  D
1
ΔG2el = (D − P‡in )dD
d3 r
εop D∗

∗ 1 ‡
= Gel,0 − Gel,0 − d3 r P (D − D∗ ). (16.42)
εop in

If we substitute D∗ from (16.37), we get


 
1 ‡ 1 P‡in Pin − P‡in
P (D − D∗ ) = (16.43)
εop in 1
εop − 1
εst
εop εop

and the total free energy change due to the polarization fluctuation is

G‡el − Geq 1
el = ΔGel + ΔGel
2

1 P‡2 − P2
=− 1 d3 r in 2 in
εop − εst
1 2εop

1 1
− 1 d3 r 2 P‡in (Pin,eq − P‡in )
εop − εst
1 εop
  2
1 3 1 P‡in − Pin,eq
= 1 d r . (16.44)
εop − εst
1 2 εop

16.4 Activation Energy


The free energy G‡el is a functional of the polarization fluctuation

δP = P‡in − Pin . (16.45)

If we minimize the free energy

δG‡el = 0, (16.46)

we find that
δP = 0, G‡el = Geq
el . (16.47)
16.4 Activation Energy 181

Marcus asks now, for which polarizations P‡in of the two states I and II become
isoenergetic, i.e.
ΔU ‡ = UII‡ − UI‡ = 0. (16.48)
We want to rephrase the last condition in terms of the free energy

G = U − T S + pV ≈ U − T S. (16.49)

We conclude that
ΔG‡ = G‡II − G‡I = −T ΔS ‡ . (16.50)
Here (Fig. 16.4)
 2
1 1 δPI
G‡I = 3
d r + Geq eq ∞
el,I (R) − Gel,I (∞) + GI ,
1
εop − 1
εst
2 εop
 2
1 1 δPII
G‡II = d3 r + Geq eq ∞
el,II (R) − Gel,II (∞) + GII . (16.51)
1
εop − 1
εst
2 εop

The quantity
ΔG∞ = G∞ ∞
II − GI = ΔGvac + ΔGsolv,∞ (16.52)
is the difference of free energies for an ideal solution of A and B on the
one side and A+ and B− on the other side at infinite distance R → ∞.
Usually, it is determined experimentally. It contains all the information on
the internal structure of the ions. The entropy difference is the difference of
internal entropies of the reactants since the contribution from the polarization
drops out due to P‡in,I = P‡in,II . Usually, this entropy difference is small. The
electrostatic energies Geqel,I/II are composed of the polarization contribution
(the solvation free energy) and the mutual interaction of the reactants which
will be considered later. The reaction free energy is

ΔGeq = ΔG∞ + ΔGeq eq


el (R) − ΔGel (∞). (16.53)

It can be easily seen that the condition ΔG‡ = 0 will be fulfilled for infinitely
many polarizations P‡in . We will therefore look for that one which minimizes
simultaneously the free energy of the transition state G‡I,II . We introduce a
Lagrange parameter m to impose the isoenergeticity condition:
6 7
δ G‡I (P‡in ) + m(G‡II (P‡in ) − G‡I (P‡in )) = 0. (16.54)

The variation is with respect to P‡in or, equivalently with respect to δPI for
fixed change of the inertial polarization:

ΔPin = δPI − δPII = (P‡in − PI,in ) − (P‡in − PII,in ) = PII,in − PI,in . (16.55)
182

R=∞ R=∞ R
∞ eq eq
ΔGsolv,I GI (R) – GI (∞)
A B eq ¹ ¹
AI AI B,I B GI (R) GI(R,P )
Gvac A B,I B
Gvac A G∞ G∞

ΔGvac ΔG∞ ΔG0


ΔG#

∞ eq
Geq
16 Marcus Theory of Electron Transfer

ΔGsolv,II II (R) – GII (∞)


A,II B,II A,II B,II A+ B− Geq (R) ¹ ¹
A+ Gvac Gvac B− A+ G∞ G∞ B− II GII(R,P )

Fig. 16.4. Free energy differences


16.4 Activation Energy 183

#
GI/II
GIeq
δ PI#
δ PII GIIeq
#
Pin,I Pin Pin,II Pin

ΔPin
m 1−m

Fig. 16.5. Variation of the nonequilibrium polarization

We have to solve
6 7
δ (1 − m)G‡I (PI,in + δPI ) + mG‡II (PII,in − ΔPin + δPI,in ) = 0. (16.56)

Due to the quadratic dependence of the free energies, we find the condition

[(1 − m)δPI + m(δPI − ΔPin )] = 0 (16.57)

and hence the solution (Fig. 16.5)

δP‡I = mΔPin . (16.58)

The Lagrange parameter m is determined by inserting this solution into


the isoenergicity condition:

− T ΔS = ΔG‡ (P‡in )
= G‡II (Pin,II + (m − 1)ΔPin ) − G‡I (Pin,I + mΔPin )
 2
1 1 ΔPin
= ΔGeq + ((m − 1)2 − m2 ) 1 d3
r
εop − εst
1 2 εop
= ΔGeq + (1 − 2m)λ, (16.59)

where

λ = GII (Pin,II − ΔPin ) − GII (Pin,II )


 2 
1 1 ΔPin 1 1 1 2
= 1 3
d r = − d3 r (ΔD)
εop − 1 2
εst
ε op ε op ε st 2
(16.60)
184 16 Marcus Theory of Electron Transfer

is the reorganization energy. Solving for m we find

ΔGeq + T ΔS ‡ + λ
m= . (16.61)

The free energy of activation is given by
 2
1 1 ΔPin
ΔGa = G‡I − Geq
I = m
2
d3 r
1
εop − 1
εst
2 εop
(ΔGeq + T ΔS ‡ + λ)2
= m2 λ = . (16.62)

Finally, we insert this into the TST rate expression to find
 
ωN ωN (ΔGeq + T ΔS ‡ + λ)2
ket = κet e−ΔGa /kB T = κet exp − . (16.63)
2π 2π 4λkB T

16.5 Simple Model Systems


We will now calculate reorganization energy and free energy differences explic-
itly for a model system consisting of two spherically reactants. We assume that
the charge distribution of the reactants is not influenced by mutual Coulombic
interaction or changes of the medium polarization (we apply the point-charge
approximation again) (Fig. 16.6).
We have already calculated the solvation energy at large distances 43

(1) Q2 Q2B 1 1
Welec = − A − − (16.64)
8πaA 8πaB ε0 εst
from which we find the polarization energy at large distances
2
1 QA Q2B
Geq
el (R = ∞) = + . (16.65)
8πεst aA aB
We now consider a finite distance R. If we take the origin of the coordi-
nate system to coincide with the center of the reactant A, then the dielectric
displacement is

1 r r−R
DI/II (r) = QA,I/II 3 + QB,I/II (16.66)
4π r |r − R|3

aA aB

Fig. 16.6. Simple donor–acceptor geometry


16.5 Simple Model Systems 185

and the polarization free energy



D2
Geq
el,I/II = d3 r (16.67)
2εst
contains two types of integrals. The first is
 3
d r
I1 = . (16.68)
r4
The range of this integral is outside the spheres A and B. As the integrand
falls off rapidly with distance, we may extend the integral over the volume of
the other sphere. Introducing polar coordinates, we find
 ∞
dr 4π
I1 ≈ 4π 2 = . (16.69)
a r a
The second kind of integral is

r(r − R)
I2 = d3 r . (16.70)
r3 |r − R|3

Again we extend the integration range and use polar coordinates


 π  ∞
r2 − rR cos θ
I2 ≈ 2π sin θ dθ r2 dr 3 2 . (16.71)
0 0 r (r + R2 − 2rR cos θ)3/2

The integral over r gives


 ∞ ∞
r − R cos θ 1 
dr = − √  = 1 (16.72)
0 (r + R − 2rR cos θ)
2 2 3/2
r + R − 2rR cos θ 0
2 2 R

and the integral over θ then gives


 π
2π sin θ 4π
I2 = dθ = . (16.73)
0 R R
Together, we have the polarization free energy
 2
1 1 QA Q2B 1 2QA QB
Geqel = d3
rD 2
 = + + (16.74)
2εst 8πεst aA aB 8πεst R
QA QB
= Geq (∞) +
4πεst R
and the difference

e2 1 + 2ZA 1 − 2ZB ZB − ZA − 1
Geq
el,II − Geq
el,I = + +2 . (16.75)
8πεst aA aB R
186 16 Marcus Theory of Electron Transfer

Similarly, the reorganization energy is



1 1 1
λ= − d3 r (ΔD)2
2 εop εst

1 1 1 ΔQ2A ΔQ2B 2ΔQA ΔQB
= − + + . (16.76)
8π εop εst aA aB R

Now since ΔQA = −ΔQB = e, we can write this as



e2 1 1 1 1 2
λ= − + − . (16.77)
8π εop εst aA aB R

We consider here the most important special cases.

16.5.1 Charge Separation

If an ion pair is created during the ET reaction (QA,I = QB,I = 0, QA,II = e,


QB,II = −e), we have

Q2A Q2B 2QA QB 0 (I)
+ + = . (16.78)
aA aB R
1
aA + 1
aB − 2
R (II)

The free energy is


Geq
el,I = 0,

e2 1 1 2 e2
Geq
el,II = + − = Geq
el,II (∞) − (16.79)
8πεst aA aB R 4πεst R
and the activation energy becomes
e2
(ΔG∞ + T ΔS ‡ + λ − 4πεst R )
2
ΔGa = . (16.80)

If the process is photoinduced, the free energy at large distance can be
expressed in terms of the ionization potential of the donor, the electron affinity
of the acceptor, and the energy of the optical transition (Fig. 16.7)

ΔG∞ = EA(A) − (IP(D) − ω). (16.81)

16.5.2 Charge Shift

For a charge shift reaction of the type QA,I = QB,II = −e, QB,I = QA,II = 0,

e2 e2
Geq
I = , Geq
II = , (16.82)
8πεst aA 8πεst aB
16.6 The Energy Gap as the Reaction Coordinate 187

ΔG
+
D

IP(D*)

D*
IP(D)

hω A
EA(A)

D A

Fig. 16.7. Photoinduced charge separation. At large distance, the energy difference
ΔG∞ is given by EA(A) − IP(D∗ ) = EA(A) − (IP(D) − ω)

and for the second type QA,I = QB,II = 0, QB,I = QA,II = e,

e2 e2
Geq
I = , Geq
II = . (16.83)
8πεst aB 8πεst aA
The activation energy now does not contain a Coulombic term

(ΔG∞ + T ΔS ‡ + λ)2
ΔGa = . (16.84)

16.6 The Energy Gap as the Reaction Coordinate


We want to construct one single reaction coordinate which summarizes the
polarization fluctuations. To that end, we consider the energy gap

ΔU ‡ = ΔG‡ + T ΔS ‡ ≈ G‡II (P‡in ) − G‡I (P‡in )


= G‡II (PII,in − ΔPin + δPI,in ) − G‡I (PI,in + δPI,in )
 2
1 1 3 (δPI,in − ΔPin )
= ΔGeq + 1 d r
εop − εst
1 2 εop
 2
1 1 δPI,in
− 1 d3 r
εop − 1 2
εst
εop

1 1 ΔP2in − 2δPI,in ΔPin
= ΔGeq + 1 d3 r
εop − εst
1 2 ε2op
= ΔGeq + λ − δU (t) (16.85)

with the thermally fluctuating energy



1 δPI,in ΔPin
δU (t) = d3 r . (16.86)
1
εop − 1
εst
ε2op
188 16 Marcus Theory of Electron Transfer

ΔU

λ λ
I
Geq
ΔGeq
II
Geq
0 2λ δU

Fig. 16.8. Energy gap as reaction coordinate

In the equilibrium of the reactants,

δU = 0, ΔU = ΔGeq + λ (I) (16.87)

and in the equilibrium of the products (Fig. 16.8)

δU = 2λ, ΔU = ΔGeq − λ (II). (16.88)

The free energies now take the form


1
GI = Geq
I + (δU )2 , (16.89)

1
GII = Geq
II + (δU − 2λ)2 . (16.90)

The free energy fluctuations in the reactant state are given by
1
GI − Geq
I = δU 2 . (16.91)

If we identify this with the thermal average kB T /2 of a one-dimensional
harmonic motion, we have

δU 2  = 2kB T λ. (16.92)

The fluctuations of the energy gap can for instance be investigated with
molecular dynamics methods. They can be modeled by diffusive motion in
a harmonic potential U (Q) = Q2 /4λ:
 2 
∂ ∂ 1 ∂ ∂U
W (Q, t) = −DE W + W − κδ(Q − λ)W, (16.93)
∂t ∂Q2 kB T ∂Q ∂Q
where the diffusion constant is
2kB T λ
DE = (16.94)
τL
and τL is the longitudinal relaxation time of the medium.
16.8 The Transmission Coefficient for Nonadiabatic Electron Transfer 189

16.7 Inner-Shell Reorganization

Intramolecular modes can also contribute to the reorganization. Low-frequency


modes (ω < kB T ) which can be treated classically can be taken into account
by adding an inner-shell contribution to the reorganization energies:

λ = λouter + λinner . (16.95)

Often C–C stretching modes at ω ≈ 1,400 cm−1 change their equilibrium


positions during the ET process. They can accept the excess energy in the
inverted region and reduce the activation energy. Often the Franck–Condon
progression of one dominant mode ωv is used to analyze experimental data
in the inverted region. Since ωv
kB T , thermal occupation is negligible.
For a harmonic oscillator, the relative intensity of the 0 → n transition3 is

gr2n −gr2
FC(0, n) = e (16.96)
n!
and the sum over all transitions gives the rate expression
∞ 2n

ω −g2 g (ΔG + ER + nωv )2
k= κel e exp − , (16.97)
2π n=0
n! 4ER kB T

where g 2 ωv is the partial reorganization energy of the stretching mode


(Figs. 16.9 and 16.10).

16.8 The Transmission Coefficient for Nonadiabatic


Electron Transfer
The transmission coefficient κ will be considered later in more detail. For adi-
abatic electron transfer, it approaches unity. For nonadiabatic transfer (small
crossing probability), it depends on the electronic coupling matrix element Vet .
The quantum mechanical rate expression for nonadiabatic transfer becomes
in the high-temperature limit (ω kB T for all coupling modes)

2πVet2 e−(ΔG+λ) /4λkB T


2

k= √ . (16.98)
 4πλkB T

3
A more detailed discussion follows later.
190 16 Marcus Theory of Electron Transfer

activated region inverted region


n=0
3 3
2 2
1 1

n=0

n’ = 0 n’ = 0

Fig. 16.9. Accepting mode. In the inverted region, vibrations can accept the excess
energy

1
electron transfer rate k

0.1

0.01

0 0.5 1 1.5 2
ΔG

Fig. 16.10. Inclusion of a high-frequency mode progression. Equation (16.97) is


evaluated for typical values of ωv = 0.2 eV, g = 1, and λ = 0.5 eV (broken curve).
The full curves show the contributions of different 0 → n transitions

Problems

16.1. Marcus Cross Relation


(a) Calculate the activation energy for the self-exchange reaction

kAA
A− + A → A + A−

in the harmonic model


a 2 a
GR (Q) = Q , GP (Q) = (Q − Q1 )2 .
2 2
16.8 The Transmission Coefficient for Nonadiabatic Electron Transfer 191

(b) Show that for the cross reaction

k
A+D → A− + D+

the reorganization energy is given by the average of the reorganization


energies for the two self-exchange reactions:
λAA + λDD
λ=
2
and the rate k can be expressed as

k = kAA kDD Keq f

where kAA and kDD are the rate constants of the self-exchange reactions,
Keq is the equilibrium constant of the cross reaction and f is a factor,
which is usually close to unity.
Part VI

Elementary Photophysics
17
Molecular States

Biomolecules have a large number of vibrational degrees of freedom, which


are more or less strongly coupled to transitions between different electronic
states. In this chapter, we discuss the vibronic states on the basis of the
Born–Oppenheimer separation and the remaining nonadiabatic coupling of
the adiabatic states. In the following, we do not consider translational or rota-
tional motion which is essentially hindered for Biomolecules in the condensed
phase.

17.1 Born–Oppenheimer Separation


In molecular physics, usually the Born–Oppenheimer separation of electronic
(r) and nuclear motion (Q) is used. The molecular Hamiltonian (without
considering spin or relativistic effects) can be written as

H = TN (Q) + Tel (r) + VN (Q) + VeN (Q, r) + Vee (r) (17.1)

with kinetic energy


 2 ∂ 2  2 ∂ 2
TN = − , Tel = − (17.2)
j
2mj ∂Q2j 2me ∂rk2

and Coulombic interaction

V = VN (Q) + VeN (Q, r) + Vee (r)


 Z j Z j  e2  Z j e2  e2
= − + . (17.3)

4πε|Rj − Rj  | 4πε|Rj − rk | 
4πε|rk − rk |
j,j j,k k,k

The Born–Oppenheimer wave function is a product

ψ(r, Q)χ(Q), (17.4)


196 17 Molecular States

where the electronic part depends parametrically on the nuclear coordi-


nates. The nuclear masses are much larger than the electronic mass. There-
fore, the kinetic energy of the nuclei is neglected for the electronic motion.
The electronic wave function is obtained approximately, from the eigenvalue
problem,

(Tel + V )ψs (r, Q) = Es (Q)ψs (r, Q), (17.5)

which has to be solved separately for each set of nuclear coordinates. Using
now the Born–Oppenheimer product ansatz, we have

Hψs (r, Q)χs (Q) = TN ψs (r, Q)χs (Q) + Es (Q)ψs (r, Q)χs (Q)
= ψs (r, Q)(TN + Es (Q))χs (Q)
 
 2 ∂ 2 ψs (r, Q) ∂χs (Q) ∂ψs
− χs (Q) + . (17.6)
j
2mj ∂Q2j ∂Qj ∂Qj

The sum constitutes the so-called nonadiabatic interaction Vnad . If it is neglec-


ted in lowest order, the nuclear wave function is a solution of the eigenvalue
problem:
(TN + Es (Q))χs (Q) = Eχs (Q). (17.7)
Often the potential energy Es (Q) can be expanded around the equilibrium
configuration Q0s 1
1 ∂2
Es (Q) = Es (Q0s ) + (Qj − Qj0s )(Qj  − Qj  0s ) Es + · · · (17.8)
2  ∂Qj ∂Qj 
j,j

Within the harmonic approximation, the matrix of mass-weighted second


derivatives
1 ∂2
Djj  = √ (17.9)
mj mj  ∂Qj ∂Qj 

is diagonalized 
Djj  urj = ωr2 urj (17.10)
j

and the nuclear motion becomes a superposition of independent normal mode


vibrations with amplitudes qr and frequencies ωr 2
1 
Qj − Qj0s = √ qr urj . (17.11)
mj r

After transformation to normal mode coordinates, the eigenvalue problem


(17.7) decouples according to
1
which will be different for different electronic states s in general.
2
These quantities will be different for different electronic states s of course.
17.2 Nonadiabatic Interaction 197
 ωr2 2 2 ∂ 2

Es + TN = Es(0) + q − (17.12)
r
2 r 2 ∂qr2

and the nuclear wave function factorizes



χs = χs,r,n(r) (qr ). (17.13)
r

The total energy of a molecular state is



1
Es0 + ωr(s) n(r) + . (17.14)
r
2

The vibrations form a very dense manifold of states for biological molecules.
This is often schematically represented in a Jablonsky diagram3 (Figs. 17.1–
17.4)

17.2 Nonadiabatic Interaction


The nonadiabatic interaction couples the adiabatic electronic states.4 If we
expand the wave function as a linear combination of the adiabatic wave
functions, which form a complete basis at each configuration Q

Ψ= Cs ψs (r, Q)χs (Q), (17.15)
s

we have to consider the nonadiabatic matrix elements5

E .
. .
. .
.
. S2
. T2
.
.
S1 .
.
T1
.
.
.

S0

Fig. 17.1. Jablonsky diagram

3
which usually also shows electronic states of higher multiplicity.
4
Sometimes called channels.
5
Without a magnetic field, the molecular wave functions can be chosen real-valued.
198 17 Molecular States

O CH3 CH3
H CH3
2 3 H
CH3 1
I II 4
N N H
H
H H H
CH3 N N
IV III CH3
H V
H
CH2 H
O O
CH2 O
phytyl O O CH3

Fig. 17.2. Bacteriopheophytine-b

300

0.1 /cm–1
number of normal modes

250

200

150

100

50

0
0 1000 2000 3000 4000
normal mode frequency (cm–1)

Fig. 17.3. Distribution of normal mode frequencies. Normal modes for


bacteriopheophytine-b were calculated quantum chemically. The phytyl tail was
replaced by a methyl group. The cumulative frequency distribution (solid ) is almost
linear at small frequencies corresponding to a constant density of 0.1 cm−1 normal
modes (dashed )
20
10

16
10
DOS (1/cm–1)

12
10

8
10

4
10

0
10 0 1000 2000 3000 4000
frequency (cm–1)

Fig. 17.4. Density of vibrational states. The density of vibrational states including
overtones and combination tones was evaluated for bacteriopheophytine-b with a
simple counting algorithm
17.2 Nonadiabatic Interaction 199

nad
Vs,s = dr dQ χs (Q)ψs (r, Q) Vnad ψs (r, Q)χs (Q)
 
 2  
∂2
=− dQ χs (Q) χs (Q) dr ψs (r, Q) ψs (r, Q)
j
2mj ∂Q2j
 2  
∂ ∂
− dQ χs (Q) χs (Q) dr ψs (r, Q) ψs (r, Q) .
2mj ∂Qj ∂Qj
(17.16)

The first term is generally small. If we evaluate the second at an equilibrium


configuration Q(0) of the state s and neglect the dependence on Q6 we have
 2 
nad,el ∂
 ≈ −Vs,s
nad
Vs,s dQ χs (Q) χs (Q) , (17.17)
2mj ∂Qj

nad,el (0) ∂ (0)
Vs,s = dr ψs (r, Q ) ψs (r, Q ) . (17.18)
∂Qj
Within the harmonic approximation, the gradient operator can be expressed
as
1 ∂  
r ∂ ωr +
√ = uj = r
uj (b − br ) (17.19)
mj ∂Qj r
∂qr r
2 r
and the nonadiabatic matrix element becomes
 
nad,el
 2 ω r  
 ,n(r  ),s,n(r) = −Vs,s
Vsnad dq1 dq2 · · · χs r n(r )
2 2  r
⎛ ⎞
  

×⎝ χs,r ,n(r ) ⎠ n(r) + 1χs,r,n(r)+1 + n(r)χs,r,n(r)−1 . (17.20)


r  =r

This expression simplifies considerably if the mixing of normal modes in the


state s can be neglected (the so-called parallel mode approximation). Then
the overlap integral factorizes into a product of Franck–Condon factors:
⎛ ⎞

  2
ωr ⎝  
nad,el
 ,n(r  ),s,n (r) = −Vs,s
Vsnad FCsr s (n , n)⎠
2 2  r =r

√  √ 

× nr + 1FCsr s (nr , nr + 1) + nr FCsr s (nr nr − 1) (17.21)

with 

FCrs s (nr nr ) = dqr χs r,n (r) (qr ) χs,r,n(r) (qr ). (17.22)

6
This is known as the Condon approximation.
18
Optical Transitions

We consider allowed optical transitions between two electronic states of a


molecule, for instance the ground state (g) and an excited singlet state (e).
Within the adiabatic approximation (17.13), we take into account transitions
between the manifolds of Born–Oppenheimer vibronic states

ψg (r; Q)χg (Q) → ψe (r; Q)χe (Q). (18.1)

Assuming that the occupation of initial states is given by a canonical distri-


bution P (χg ), the probability for dipole transitions in an oscillating external
field
E0 exp(iω0 t) (18.2)
is given by the golden rule expression
2π  2
k= P (χg ) |ψg χg |E0 er| ψe χe | δ(ωeg − ω0 ). (18.3)
 χ
g

18.1 Dipole Transitions in the Condon Approximation


The matrix element of the dipole operator er can be simplified by performing
the integration over the electronic coordinates:
 

dQχg (Q)χe (Q) drψg∗ (r; Q)erψe (r; Q)

= dQχ∗g (Q)χe (Q) Mge (Q). (18.4)

The dipole moment function Meg (Q) is expanded as a series with respect to
the nuclear coordinates around the equilibrium configuration
∂Meg
Meg (Q) = Meg (Qeq ) + (Q − Qeq ) + · · · (18.5)
∂Q
202 18 Optical Transitions

If its equilibrium value does not vanish for symmetry reasons, the dipole
moment can be approximated by neglecting all higher-order terms (this is
known as Condon approximation)

Meg (Q) ≈ μeg = Meg (Qeq ). (18.6)

The transition rate becomes a product of an electronic factor and an overlap


integral of the nuclear wave functions which is known as Franck–Condon factor
(in fact, the Franck–Condon-weighted density of states)
2π  2
k= |E0 μeg |2 P (χg ) |χg (Q)|χe (Q)| δ(ωeg − ω0 ) (18.7)
 χ ,χ
g e


= |E0 μeg |2 FCF(ω0 ). (18.8)


18.2 Time Correlation Function Formalism


In the following, we rewrite the transition rate within the time correlation
function (TCF) formalism.
The unperturbed molecular Hamiltonian is

H0 = |ψe (He + ω00 )ψe | + |ψg Hg ψg |, (18.9)

where we introduced the Hamiltonians for the nuclear motion in the two states:

Hg(e) = TN + Eg(e) (Q) (18.10)

with
Hg(e) χg(e) = ωg(e) χg(e) (18.11)
1
and ω00 is the pure electronic, so-called “0–0” transition energy. The energy
of the transition
|ψg χg  → |ψe χe  (18.12)
is
ωeg = ω00 + ωe − ωg . (18.13)
The interaction between molecule and radiation field is within the Condon
approximation2
H  = |ψe (−E0 μeg )ψg | + h.c. (18.14)
After replacing the δ-function by a Fourier integral
 ∞
1
δ(ω) = eiωt dt, (18.15)
2π −∞
1
Including the zero-point energy difference.
2
We consider here only the nondiagonal term.
18.2 Time Correlation Function Formalism 203

(18.7) becomes
 
|E0 μeg |2
k= dt P (χg )|χg |χe |2 ei(ωe −ωg +ω00 −ω0 )t , (18.16)
2 χg χe

which can be written as a thermal average over the vibrations of the ground
state
|E0 μeg |2
   e−βHg 
−iωg t
k= dt χ g e |χ e eiωe t χe |χg ei(ω00 −ω0 )t
2 χg χe
Q g

2  8 9
|E0 μeg |
= dt e−i(ω0 −ω00 )t e−itHg / eitHe / (18.17)
 2 g

|E0 μeg |2
= dt e−i(ω0 −ω00 )t F (t) (18.18)
2
with the correlation function
8 9
F (t) = e−itHg / eitHe / (18.19)
g

which will be evaluated for some simple models in the following chapter. From
comparison with (18.8), we find that the Franck–Condon-weighted density of
states is given by

1
FCF(ω) = dte−i(ω0 −ω0 0) F (t). (18.20)
2π

Problems
18.1. Absorption Spectrum
Within the Born–Oppenheimer approximation, the rate for optical transi-
tions of a molecule from its ground state to an electronically excited state is
proportional to

α(ω) = Pi |i|μ|f |2 δ(+ωf − ωi − ω).
i,f

Show that this golden rule expression can be formulated as the Fourier integral
of the dipole moment correlation function

1
dt eiωt μ(0)μ(t)
2π
and that within the Condon approximation, this reduces to the correlation
function of the nuclear motion.
19
The Displaced Harmonic Oscillator Model

We would like now to discuss a more specific model for the transition between
the vibrational manifolds. We apply the harmonic approximation for the
nuclear motion which is described by Hamiltonians

Hg,e = ωr(g,e) br(g,e)+ br(g,e) (19.1)
r

and discuss the model of displaced oscillators.

19.1 The Time Correlation Function in the Displaced


Harmonic Oscillator Approximation
In a very simplified model, we neglect mixing of the normal modes (parallel
mode approximation) and frequency changes (Fig. 19.1).
This leads to the “displaced harmonic oscillator” model (DHO) with

Hg = ωr b+
r br , (19.2)
r
  
He = ωr (b+
r + gr )(br + gr ) = Hg + gr ωr (b†r + br ) + gr2 ωr .
r r r
(19.3)

In this approximation, the normal mode frequencies and eigenvectors are


the same for both electronic states but the equilibrium positions are shifted
relative to each other. The correlation function (18.19)
8 9

F (t) = e−(it/)Hg e(it/)He = Q−1 tr e−Hg /kB T e−(it/)Hg e(it/)He (19.4)


g

with
Q = tr(e−Hg /kB T ) (19.5)
206 19 The Displaced Harmonic Oscillator Model

hω00

Fig. 19.1. Displaced oscillator model

factorizes in the parallel mode approximation:



F (t) = Fr (t) (19.6)
r

−ωr b†r br /kB T −itωr b†r br itωr (b†r +gr )(br +gr )
Fr (t) = Q−1
r tr e e e
8 † †
9
= e−itωr br br eitωr (br +gr )(br +gr ) . (19.7)

As shown in the appendix, this can be evaluated as


& '
Fr (t) = exp gr2 (eiωt − 1)(nr + 1) + (e−iωt − 1)nr (19.8)

with the average phonon numbers


1
nr = . (19.9)
eωr /kB t −1
Expression (19.7) contains phonon absorption (positive frequencies) and emis-
sion processes (negative frequencies). We discuss two important limiting
cases.

19.2 High-Frequency Modes


In the limit ωr
kT , the average phonon number
1
nr = (19.10)
eω/kB T −1
is small and the correlation function becomes
& '
Fr (t) → exp gr2 (eiωt − 1) . (19.11)

Expansion of Fr (t) as a power series of gr2 gives


 g 2j
e−gr ei(jωr )t
2
r
Fr (t) = (19.12)
j
j!
19.3 The Short-Time Approximation 207

a b
0.80

0.60
f (hΔω)

0.40

0.20

0.00
–400 0 400 – 400 0 400
Δhω (cm–1)

Fig. 19.2. Progression of a low-frequency mode. The Fourier transform of (19.8) is


shown for typical values of ω = 50 cm−1 , 1, Er = 50 cm−1 and (a) kT = 10 cm−1 ,
(b) kT = 200 cm−1 . A small damping was introduced to obtain finite linewidths

which corresponds to a progression of transitions 0 → j ωr with Franck-


Condon factors (Fig. 19.2)

gr2j −gr2
F C(0, j) = e . (19.13)
j!

19.3 The Short-Time Approximation


To obtain a smooth approximation to the spectrum
 ∞ 
1
f (Δω) = dt e−iΔωt Fr (t), (19.14)
2π −∞ r

we expand the oscillating functions

ωr2 t2
eiωr t = 1 + iωr t − + ··· (19.15)
2
and find
 # $
1 ∞  
ωr2 t2
−iΔω
f (Δω) = dt e exp − gr2 iωr t
+ 1) gr2 (2n̄r
2π −∞ r r
2
%   2 2 
1 2π (Δω − g ωr )
=  2 2 exp −  2 2 r r .
2π g ω
r r r (2n̄ r + 1) 2 ( g ω
r r r (2n̄ r + 1))

(19.16)
208 19 The Displaced Harmonic Oscillator Model

f( hΔω)
Δ

Er hΔω

Fig. 19.3. Gaussian envelope

With the definition of the reorganization energy



Er = gr2 ωr (19.17)
r

and the width 


Δ2 = gr2 (ωr )2 (2n̄r + 1), (19.18)
r

it takes the form


 
1 (Δω − Er )2
f (Δω) = √ exp − . (19.19)
Δ 2π 2Δ2

In the high-temperature limit (all modes can be treated classically), the width
becomes (Fig. 19.3)

Δ2 → 2gr2 ωr kB T = 2Er kB T (19.20)

and
1 (Δω − Er )2
f (Δω) → √ exp − . (19.21)
4πEr kB T 4Er kB T
20
Spectral Diffusion

Electronic excitation energies of a chromophore within a protein environment


are not static quantities but fluctuate in time. This can be directly observed
with the methods of single molecule spectroscopy. If instead an ensemble aver-
age is measured, then the relative time scales of measurement and fluctuations
determine if an inhomogeneous distribution is observed or if the fluctuations
lead to a homogeneous broadening. In this chapter, we discuss simple mod-
els [54–56], which are capable of describing the transition between these two
limiting cases.

20.1 Dephasing
We study a semiclassical model of a two-state system. Due to interaction with
the environment, the energies of the two states are fluctuating quantities. The
system is described by a time-dependent Hamiltonian

E1 (t) 0
H0 = (20.1)
0 E2 (t)
and the time evolution of the unperturbed system
  t 
1
ψ(t) = exp H0 dt ψ(0) = U0 (t)ψ(0) (20.2)
i 0
can be described with the help of a propagator
  t   (1/i)  t E (t)dt 
1 e 0 1
U0 (t) = exp H0 dt = t . (20.3)
i 0 e(1/i) 0 E2 (t)dt

Optical transitions are induced by the interaction operator



0 μeiωt
H = . (20.4)
μe−iωt 0
210 20 Spectral Diffusion

We make the following ansatz

ψ(t) = U0 (t)φ(t) (20.5)

and find
d d
(H0 + H  )ψ = i ψ = H0 U0 (t)φ + iU0 (t) φ. (20.6)
dt dt
Hence
d
φ = U0 (−t)H  U0 (t)φ.
i (20.7)
dt
The product of the operators gives
 t 
 0 μeiωt−(i/) 0 (E2 −E1 )dt
U0 (−t)H U0 (t) = t (20.8)
μe−iωt+(i/) 0 (E2 −E1 )dt 0

and (20.7) can be written in the form



d 0 μ(t)eiωt
i φ = , (20.9)
dt μ(t)∗ e−iωt 0

where the time-dependent dipole moment μ(t) obeys the differential equation
d 1
μ(t) = (E2 (t) − E1 (t))μ(t)
dt i
= −iω21 (t)μ(t) = −i(ω21  + δω(t))μ(t). (20.10)

Starting from the initial condition



1
φ(t = 0) = , (20.11)
0

we find in lowest order



1
φ(t) = t (20.12)
1
i 0 dt e−iωt μ∗ (t)

and the transition probability is given by


 t  t
1  
P (t) = 2 dt dt eiω(t −t ) μ(t )μ∗ (t )
 0 0
 
|μ0 |2 t  t  iω(t −t ) −iω21 (t −t ) −i  t δω(t )dt
= 2 dt dt e e e t .
 0 0
(20.13)

The ensemble average gives for stationary fluctuations


 
|μ0 |2 t t   iω(t −t ) −iω21 (t −t )
P (t) = 2 dt dt e e F (t − t ) (20.14)
 0 0
20.2 Gaussian Fluctuations 211

with the dephasing function


  t 
 
F (t) = exp −i δω(t )dt . (20.15)
0

With the help of the Fourier transformation


 ∞

F (t) = e−iω t F̂ (ω)dω  ,
−∞

the transition probability becomes


  t t
|μ0 |2 ∞     
P (t) = 2 dω  dt dt ei(ω−ω )(t −t ) e−iω21 (t −t ) F̂ (ω)
 −∞ 0 0
2  ∞
|μ0 | 2(1 − cos((ω − ω  − ω21 )t)
= 2 dω  F̂ (ω) , (20.16)
 −∞ (ω − ω  − ω21 )2

where the quotient approximates a δ-function for longer times

2(1 − cos((ω − ω  − ω21 )t)


→ 2πtδ(ω − ω  − ω21 ) (20.17)
(ω − ω  − ω21 )2

and finally the golden rule expression is obtained in the form

P (t) 2π|μ0 |2
→ F̂ (ω − ω21 ). (20.18)
t 2

20.2 Gaussian Fluctuations


For Gaussian fluctuations, the dephasing function can be approximated by a
cumulant expansion. Consider the expression ln(eiλx ). Series expansion with
respect to λ gives

ixeiλx  λ2 −x2 eλx eλx  − ixeλx 2


ln(eiλx ) = λ + + ··· (20.19)
eiλx  2 eλx 2

Setting now λ = 1 gives


1
ln(eix ) = ix − (x2  − x2 )
2
1
+ (−ix3  − 3−x2 ix + 2ix3 ) − · · · (20.20)
6
which for a distribution with zero mean simplifies to
1 2 1
ln(eix ) = x  + x3  + · · · (20.21)
2 6
212 20 Spectral Diffusion

Especially for a Gaussian distribution


1
√ e−x /2Δ ,
2 2
W (x) = (20.22)
Δ 2π
all cumulants except the first and second order vanish. This can be seen from
 ∞
eix W (x)dx = e−Δ /2 .
2
e  =
ix
(20.23)
−∞

If for instance the frequency fluctuations are described by diffusive  t motion


in a harmonic potential, then the distribution of the integral 0 δω(t)dt is
Gaussian. Finally, we find
 (  2 )
t
1  
F (t) = exp − δω(t )dt
2 0
 t  t 
1    
= exp − dt dt δω(t )δω(t )
2 0 0
 
1 t  t 
= exp − dt dt δω(t − t )δω(0) . (20.24)
2 0 0

We assume that in the simplest case, the correlation of the frequency fluctu-
ations decays exponentially

δω(t)δω(0) = Δ2 e−t/τc . (20.25)

Then integration gives


F (t) = exp −Δ2 τc2 (e−|t|/τc − 1 + |t|/τc ) . (20.26)

Let us look at the limiting cases.

20.2.1 Long Correlation Time

The limit of long correlation time corresponds to the inhomogeneous case


where δω(t)δω(0) = Δ2 is constant. We expand the exponential for t τc :

t2
e−t/τc − 1 + t/τc = + ··· (20.27)
2τc2

F (t) = e−Δ
2 2
t /2
(20.28)
and the transition probability is
 
|μ0 |2 t  t  i(t −t )(ω−ω21 ) −Δ2 (t −t )2 /2
P (t) = 2 dt dt e e . (20.29)
 0 0
20.2 Gaussian Fluctuations 213

100

10 –1
F(t)

10 –2

10 –3
–10 –5 0 5 10
time t

Fig. 20.1. Time correlation function of the Kubo model. F (t) from (20.26) is shown
for Δ = 1 and several values of τc . For large correlation times, the Gaussian
exp(−Δ2 t2 /2) is approximated whereas for short correlation times the correla-
tion function becomes approximately exponential and the width increases with
τ −1 . Correspondingly, the Fourier transform becomes sharp in this case (motional
narrowing)

Since F (t) is symmetric, we find


 
|μ0 |2 t  t  i(t −t )(ω−ω21 ) −Δ2 (t −t )2 /2
P (t) = dt dt e e
42 −t −t
 ∞
|μ0 |2
dt eit(ω−ω21 ) e−Δ t /2
2 2
≈ 2
t (20.30)
2 −∞

and the lineshape has the form of a Gaussian (Fig. 20.1)



P (t) (ω − ω21 )2
lim ∼ exp − . (20.31)
t 2Δ2

20.2.2 Short Correlation Time


For very short correlation time τc t, we approximate
& '
F (t) = exp −Δ2 τc |t| = exp (−|t|/T2) (20.32)
with the dephasing time
T2 = (Δ2 τc )−1 . (20.33)
The lineshape has now the form of a Lorentzian
 ∞
2T2−1
dt eit(ω−ω21 ) e−|t|/T2 = −2 . (20.34)
−∞ T2 + (ω − ω21 )2
This result shows the motional narrowing effect when the correlation time is
short or the motion very fast.
214 20 Spectral Diffusion

20.3 Markovian Modulation


Another model which can be analytically investigated describes the frequency
fluctuations by a Markovian random walk. We discuss the simplest case of a
dichotomous process (9.2), that is, the oscillator frequency switches randomly
between two values ω± . This is, for instance, relevant for NMR spectra of a
species which undergoes a chemical reaction

A+B  AB, (20.35)

where the NMR resonance frequencies depend on the chemical environment.


For a dichotomous Markovian process switching between two states X =
±, we have for the conditional transition probability

P (X, t + τ |X0 t0 ) = P (X, t + τ |+, t)P (+, t|X0 t0 )


+P (X, t + τ |−, t)P (−, t|X0 t0 ). (20.36)

For small time increment τ , linearization gives

P (+, t + τ |+, t) = 1 − ατ,


P (−, t + τ |−, t) = 1 − βτ,
P (−, t + τ |+, t) = ατ,
P (+, t + τ |−, t) = βτ, (20.37)

hence
P (+, t + τ | + t0 )
= P (+, t + τ |+, t)P (+, t| + t0 ) + P (+, t + τ |−, t)P (−, t| + t0 )
= (1 − ατ )P (+, t|+, t0 ) + βτ P (−, t|+, t0 ) (20.38)

and we obtain the differential equation



P (+, t|+, t0 ) = −αP (+, t|+, t0 ) + βP (−, t|+, t0 ). (20.39)
∂t
Similarly, we derive

P (−, t|−, t0 ) = −βP (−, t|−, t0 ) + αP (+, t|−, t0 ),
∂t

P (−, t|+, t0 ) = −βP (−, t|+, t0 ) + αP (+, t|+, t0 ),
∂t

P (+, t|−, t0 ) = −αP (+, t|−, t0 ) + βP (−, t|−, t0 ). (20.40)
∂t
In a stationary system, the probabilities depend only on t − t0 and the
differential equations can be written as a matrix equation

P(t) = A P(t) (20.41)
∂t
20.3 Markovian Modulation 215

with
P (+, t|+, 0) P (+, t|−, 0)
P(t) = (20.42)
P (−, t|+, 0) P (−, t|−, 0)
and the rate matrix
−α β
A= . (20.43)
α −β
Let us define the quantity
8  t  9
Q(X, t|X  , 0) = e−i 0 ω(t )dt , (20.44)
X,X 

where the average is taken under the conditions that the system is in state X
at time t and in state X  at time 0. Then we find
8 t   9
Q(X, t + τ |X  , 0) = e−i 0 ω(t )dt eiω(t)τ
X,X 

= Q(X, t + τ |+, t)Q(+, t|X , 0)
+Q(X, t + τ |−, t)Q(−, t|X  , 0). (20.45)

Expansion for small τ gives


 
Q(+, t + τ |+, t) = e−iω+ τ +,+ = 1 − (iω+ + α)τ,
 
Q(−, t + τ |−, t) = e−iω− τ −,− = 1 − (iω− + β)τ,
 
Q(−, t + τ |+, t) = e−iω+ τ −,+ = ατ,
 
Q(+, t + τ |−, t) = e−iω− τ +,− = βτ, (20.46)

and for the time derivatives


∂ 1
Q(+, t|+, 0) = lim [(−iω+ − α)τ Q(+, t|+, 0) + βτ Q(−, t|+, 0)]
∂t τ →0 τ
= −(iω+ + α)Q(+, t|+, 0) + βQ(−, t|+, 0),

Q(−, t|−, 0) = −(iω− − +β)Q(−, t|−, 0) + αQ(+, t|−, 0),
∂t
∂ 1
Q(+, t|−, 0) = lim [(−iω+ − α)τ Q(+, t|−, 0) + βτ Q(−, t|−, 0)]
∂t τ →0 τ
= −(iω+ + α)Q(+, t|−, 0) + βQ(−, t|−, 0),

Q(−, t|+, 0) = −(iω− + β)Q(−, t|+, 0) + αQ(+, t|+, 0), (20.47)
∂t
or in matrix notation

Q = (−iΩ + A)Q (20.48)
∂t
with
Q(+, t|+, 0) Q(+, t|−, 0)
Q= (20.49)
Q(−, t|+, 0) Q(−, t|−, 0)
216 20 Spectral Diffusion

and
ω+
Ω= . (20.50)
ω−
Equation (20.48) is formally solved by

Q(t) = exp {(−iΩ + A)t} . (20.51)

For a steady-state system, we have to average over the initial state and to sum
over the final states to obtain the dephasing function. This can be expressed
as
   
β β
F (t) = (1, 1)Q(t) α+β
α = (1, 1) exp {(−iΩ + A)t} α+β
α . (20.52)
α+β α+β

Laplace transformation gives


  
∞ β
F(s) = e −st
F (t)dt = (1, 1)(s + iΩ − A) −1 α+β
α , (20.53)
0 α+β

which can be easily evaluated to give


1
F (s) =
(s + iω1 )(s + iω2 ) + (α + β)s + i(αω2 + βω1 )
 β 
s + iω2 − β −β
×(1, 1) α+β
−α s + iω1 − α α
α+β

s + (α + β) + i βωα+β
2 +αω1

= . (20.54)
(s + iω1 )(s + iω2 ) + (α + β)s + i(αω2 + βω1 )

Since the dephasing function generally obeys the symmetry F (−t) = F (t)∗ ,
the lineshape is obtained from the real part
 ∞  ∞  ∞
iωt iωt −iωt ∗
2π F̂ (ω) = e F (t)dt = e F (t)dt + e F (t)dt
−∞ 0 0

= 2 F (−iω)
αβ (ω1 − ω2 )2
=2
2 .
α+β
(ω − ω1 )2 (ω − ω2 )2 + (α + β)2 ω − αω2 +βω1
α+β

(20.55)

Let us introduce the average frequency


αω2 + βω1
ω= (20.56)
α+β
20.3 Markovian Modulation 217

60
a b c
50
40
30
20
10
0
0 1 2 0 1 2 0 1 2
frequency ω

Fig. 20.2. Motional narrowing. The lineshape (20.55) is evaluated for ω1 = 1.0,
ω2 = 1.5 and α = β =(a) 0.02, (b) 0.18, and (c) 1.0

and the correlation time of the dichotomous process

τc = ωc−1 = (α + β)−1 . (20.57)

In the limit of slow fluctuations (ωc → 0), two sharp resonances appear at
ω = ω1,2 with relative weights given by the equilibrium probabilities

P (+) = β/(α + β), P (−) = α/(α + β). (20.58)

With increasing ωc , the two resonances become broader and finally merge
into one line. For further increasing ωc the resonance, which is now centered
at ω, becomes very narrow (this is known as the motional narrowing effect)
(Fig. 20.2).

Problems
20.1. Motional Narrowing
Discuss the poles of the lineshape function and the motional narrowing effect
(20.55) for the symmetrical case α = β = (2τc )−1 and ω1,2 = ω ± Δω/2.
21
Crossing of Two Electronic States

21.1 Adiabatic and Diabatic States


We consider the avoided crossing of two electronic states along a nuclear
coordinate Q. The two states are described by adiabatic wave functions
ad
ψ1,2 (r, Q) (21.1)

and the wave function

ψ1ad (r, Q)χ1 (Q) + ψ2ad (r, Q)χ2 (Q) (21.2)

will be also written in matrix notation [57] as



& ad ad
' χ1 (Q)
ψ1 (r, Q) ψ2 (r, Q) . (21.3)
χ2 (Q)

We will assume that the electronic wave functions are chosen real-valued (no
magnetic field). From the adiabatic energies

dr ψsad (Tel + V )ψsad ad
 = δs,s Es (Q) (21.4)

and the matrix elements of the nuclear kinetic energy


 2
2 ∂
− ad
dr ψs (r, Q) ψsad
 (r, Q)
2m ∂Q2
2 nad ∂ 2 nad,(2) 2 ∂ 2
=− Vs,s (Q) − Vs,s (Q) − (21.5)
2m ∂Q 2m 2m ∂Q2
with the first- and second-order nonadiabatic coupling matrix elements

2 nad 2 ∂ ad
− Vs,s (Q) = − dr ψsad (r, Q) ψs (r, Q) , (21.6)
2m 2m ∂Q
220 21 Crossing of Two Electronic States


2 nad,(2) 2 ∂ 2 ad
− Vs,s (Q) = − drψsad (r, Q) ψ  (r, Q) , (21.7)
2m 2m ∂Q2 s
we construct the Hamiltonian for the nuclear wave function
2 ∂ 2 2 nad ∂ 2 nad,(2)
H(Q) = − + E ad
(Q) − V (Q) − V  (Q) (21.8)
2m ∂Q2 2m ∂Q 2m s,s
with the matrices
- ad . / nad nad
0
E1 (Q) V11 (Q) V12 (Q)
E ad (Q) = , V nad
(Q) = ,
E2ad (Q) V21nad nad
(Q) V22 (Q)
⎡ ⎤
nad,(2) nad,(2)
V (Q) V (Q)
V nad,(2) (Q) = ⎣ ⎦.
11 12
(21.9)
nad,(2) nad,(2)
V21 (Q) V22 (Q)
The nuclear part of the wave function is determined by the eigenvalue
equation:
χ1 (Q) χ1 (Q)
H(Q) =E . (21.10)
χ2 (Q) χ2 (Q)
Now for real-valued functions, we have

nad ∂ ad
Vss (Q) = dr ψsad (r, Q) ψ
∂Q s
 
∂ ∂ ad
=  (r, Q) −
dr ψsad (r, Q)ψsad dr ψs ψsad

∂Q ∂Q
& '
= − Vsnad s (Q) (21.11)

and the matrix V nad is antisymmetric1


nad

0 V12
V nad = . (21.12)
−V12
nad
0

We want to define a new basis of so-called diabatic electronic states for


which the nonadiabatic interaction is simplified. Therefore, we introduce a
transformation with a unitary matrix

c11 c12
C= (21.13)
c21 c22

and form the linear combinations


& ' & '
ψ dia = ψ1dia (r, Q) ψ2dia (r, Q) = ψ1ad (r, Q) ψ2ad (r, Q) C = ψ ad C. (21.14)

nad ∗
 = −(Vs ,s ) .
1 nad
For complex wave functions instead Vs,s
21.1 Adiabatic and Diabatic States 221

The nuclear kinetic energy transforms into



2 ∂ 2 dia
− dr ψ dia † (r, Q) ψ (Q)
2m ∂Q2
 
2 ∂ 2 ψ ad ad ∂ C
2
∂2
=− dr C † ψ ad † C + ψ + ψ ad
C
2m ∂Q2 ∂Q2 ∂Q2
ad ad

∂ψ ∂C ∂ψ ∂ ∂C ∂
+2 +2 C + 2ψ ad (21.15)
∂Q ∂Q ∂Q ∂Q ∂Q ∂Q

2 ∂ 2
C ∂ 2
∂C
=− C † V nad,(2) C + C † 2
+ 2
+ 2C † V nad
2m ∂Q ∂Q ∂Q

∂ ∂C ∂
+ 2C † V nad C + 2C † . (21.16)
∂Q ∂Q ∂Q
The gradient operator vanishes if we chose C to be a solution of
∂C
= −V nad (Q)C. (21.17)
∂Q
This can be formally solved by
 ∞ 
C = exp V nad (Q)dQ . (21.18)
Q

For antisymmetric V nad , the exponential function can be easily evaluated2 to


give
 ∞
cos ζ(Q) sin ζ(Q) nad
C= , ζ(Q) = V12 (Q)dQ. (21.19)
− sin ζ(Q) cos ζ(Q) Q

Due to (21.17), the last two summands in (21.16) cancel and



2 ∂ 2 dia
− dr ψ dia † (r, Q) ψ (Q)
2m ∂Q2
 
2 † nad,(2)
2
†∂ C ∂2 † nad ∂C
=− C V C+C + + 2C V . (21.20)
2m ∂Q2 ∂Q2 ∂Q
Applying (21.17) once more, we find that
∂2C ∂C
V nad,(2) C + + 2V nad
∂Q2 ∂Q
nad
∂V ∂C ∂C
= V nad,(2) C − C − V nad + 2V nad
∂Q ∂Q ∂Q
nad
∂V
= V nad,(2) C − C − (V nad )2 C. (21.21)
∂Q

2
for instance by a series expansion
222 21 Crossing of Two Electronic States

The derivative of the nonadiabatic coupling matrix is


  2 
∂V nad ∂ψ ∂ψ  ∂ ψ
= dr + dr ψ
∂Q ∂Q ∂Q ∂Q2
= (V nad )T V nad + V nad,(2) = −(V nad )2 + V nad,(2) (21.22)
and therefore (21.21) vanishes. Together, we have finally

2 ∂ 2 dia 2 ∂ 2
− dr ψ dia † (r, Q) ψ (Q) = − . (21.23)
2m ∂Q2 2m ∂Q2
From
 
dr ψ dia † (Tel + V )ψ dia = dr C † ψ ad † (Tel + V )ψ ad C, (21.24)

we find the matrix elements of the electronic Hamiltonian


 = C † E ad (Q)C
H

cos2 ζE1ad + sin2 ζE2ad sin ζ cos ζ(E1ad − E2ad )
= . (21.25)
sin ζ cos ζ(E1ad − E2ad ) cos2 ζE2ad + sin2 ζE1ad
At the crossing point
(cos2 ζ0 − sin2 ζ0 )(E1ad − E2ad ) = 0, (21.26)
which implies
1
cos2 ζ0 = sin2 ζ0 = . (21.27)
2
Expanding the sine and cosine functions around the crossing point, we have
E1 +E2 & '
 ≈ 2 + (E&2 − E1 )(ζ − ζ0 ) + · ·'· (E1 − E2 ) 12 + (ζ − ζ0 )2 + · · ·
H .
(E1 − E2 ) 12 + (ζ − ζ0 )2 + · · · E1 +E2
2
− (E2 − E1 )(ζ − ζ0 ) + · · ·
(21.28)
Expanding furthermore the matrix elements
Δ
E1ad = Ē + + g1 (Q − Q0 ) + · · · , (21.29)
2
Δ
E2ad = E − + g2 (Q − Q0 ) + · · · , (21.30)
2
ζ − ζ0 = −(Q − Q0 )V12
nad
(Q0 ), (21.31)
where Δ is the splitting of the adiabatic energies at the crossing point, the
interaction matrix becomes

 g1 + g2 −(Q − Q0 )V12nad
(Q0 )Δ Δ + g1 −g2
(Q − Q0 )
H =E+ Q+ g1 −g2
2 2 + ···
2 (Q − Q0 ) (Q − Q0 )V12 (Q0 )Δ
Δ nad
2 2 +
(21.32)
We see that in the diabatic basis, the interaction is given by half the splitting
of the adiabatic energies at the crossing point (Fig. 21.1).
21.2 Semiclassical Treatment 223

E1ad
E1dia

Q

Edia
2
Ead
2

Fig. 21.1. Curve crossing

21.2 Semiclassical Treatment


Landau [58] and Zener [59] investigated the curve-crossing process treating
the nuclear motion classically by introducing a trajectory

Q(t) = Q0 + vt, (21.33)

where they assumed a constant velocity in the vicinity of the crossing point.
They investigated the time-dependent model Hamiltonian
1 ∂ΔE
E1 (Q(t)) V − 2 ∂t t V
H(t) = = 1 ∂ΔE (21.34)
V E2 (Q(t)) V 2 ∂t t

and solved the time-dependent Schrödinger equation



∂ c1 c
i = H(t) 1 , (21.35)
∂t c2 c2

which reads explicitely


∂c1
i = E1 (t)c1 + V c2 ,
∂t
∂c2
i = E2 (t) + V c1 . (21.36)
∂t
Substituting

E1 (t)dt
c1 = a1 e(1/i) ,

(1/i) E2 (t)dt
c2 = a 2 e , (21.37)

the equations simplify


∂a1 
i = V e(1/i) (E2 (t)−E1 (t)dt a2 ,
∂t
∂a2 
i = V e−(1/i) (E2 (t)−E1 (t)dt a1 . (21.38)
∂t
224 21 Crossing of Two Electronic States

P
high velocity

low velocity

Fig. 21.2. Velocity dependence of the transition. At low velocities, the electronic
wave function follows the nuclear motion adiabatically corresponding to a transition
between the diabatic states. At high velocities, the probability for this transition
(21.42) becomes slow. Full : diabatic states and dashed : adiabatic states

Let us consider the limit of small V and calculate the transition probability
in lowest order. From the initial condition a1 (−∞) = 1, a2 (−∞) = 0, we get
 t
∂ΔE t2
(E2 (t ) − E1 (t ))dt =
, (21.39)
0 ∂t 2
 ∞   %
1 1 ∂ΔE t2 V 2π
a2 (∞) ≈ V exp − dt = (21.40)
i −∞ i ∂t 2 i −i(∂ΔE/∂t)
and the transition probability is

2πV 2
P12 = |a2 (∞)|2 = . (21.41)
|∂ΔE/∂t|

Landau and Zener calculated the transition probability for arbitrary coupling
strength (Fig. 21.2)

2πV 2
P12 = 1 − exp −
LZ
. (21.42)
|∂ΔE/∂t|

21.3 Application to Diabatic ET

If we describe the diabatic potentials as displaced harmonic oscillators

mω 2 2 mω 2
E1 (Q) = Q , E2 (Q) = ΔG + (Q − Q1 )2 , (21.43)
2 2
the energy gap is with

Q1 = , (21.44)
mω 2
mω 2 2 √
E2 − E1 = ΔG + (Q1 − 2Q1 Q) = ΔG + λ − ω 2mλQ. (21.45)
2
21.4 Crossing in More Dimensions 225

1 (1−P12)P12
(1−P12)P12
2
1−P12

P12

Fig. 21.3. Multiple crossing. The probability of crossing during one period of the
oscillation is 2P12 (1 − P12 ) if the two crossings are independent

The time derivative of the energy gap is


∂(E2 (Q) − E1 (Q)) √ ∂Q
= −ω 2mλ (21.46)
∂t ∂t
and the average velocity is
  ∞
|v|e−mv /2kT dV
2
∂Q −∞ 2kT
=  ∞ = . (21.47)
∂t e −mv 2 /2kT dV mπ
−∞

The probability of curve crossing during one period of the oscillation is given
by 2P12 (1 − P12 ) ≈ 2P12 (see Fig. 21.3).
Together, this gives a transmission coefficient
2πV 2 2πV 2 2π 1
κel = 2  √ 1  =

√ (21.48)
 ω 4πλkT
 ω 2λ 2kT 
π 

and a rate of
2πV 2 1
k= √ e−ΔGa /kT . (21.49)
 4πλkT

21.4 Crossing in More Dimensions


The definition of a diabatic basis is not so straightforward in the general case
of more nuclear degrees of freedom [60]. Generalization of (21.17) gives
∂C
= −Vjnad (Q)C. (21.50)
∂Qj
Now consider
∂ ∂
 (Q) −
V nad V nad
 (Q)
∂Qi ss ,j ∂Qj ss ,i

∂ ad ∂ ad
= dr ψs (r, Q) ψs (r, Q)
∂Qi ∂Qj

∂ ad ∂ ad
− dr ψs (r, Q) ψs (r, Q) . (21.51)
∂Qj ∂Qi
226 21 Crossing of Two Electronic States

We assume completeness of the states ψsad and insert



ψsad (r)ψsad (r ) = δ(r − r ) (21.52)
s

to obtain
 
∂ ad ∂ ad 
dr dr ψs (r, Q) ψsad ad 
 (r)ψs (r ) ψs (r , Q)
∂Qi ∂Qj
s

∂ ad ad  ∂ ad
− ad
ψ (r, Q) ψs (r)ψs (r ) ψ  (r, Q) , (21.53)
∂Qj s ∂Qi s
which can be written as
 & '
 ,i Vs ,s ,j − Vs,s ,j Vs ,s ,i .
nad nad nad nad
Vs,s (21.54)
s =s,s

If only two states are involved, then this sum vanishes


∂ ∂
V nad (Q) − V nad (Q) = 0 (21.55)
∂Qi 1,2,j ∂Qj 1,2,i

and a potential function exists, which provides a solution to (21.50). For more
than two states, an integrability condition like (21.55) is not fulfilled and only
part of the nonadiabatic coupling can be eliminated.3 Often a crude diabatic
basis is used, which refers to a fixed configuration near the crossing point and
does not depend on the nuclear coordinates at all. In a diabatic representation,
the Hamiltonian of a two-state system has the general form

H11 (Qi ) H12 (Qi )
H= . (21.56)
H12 (Qi )† H22 (Qi )

In one dimension a crossing of the corresponding adiabatic curves (the


eigenvalues of H) is very unlikely, since it occurs only if the two conditions

H11 (Q) − H22 (Q) = H12 (Q) = 0 (21.57)

are fulfilled simultaneously.4 In two dimensions, this is generally the case in


a single point, which leads to a so-called conical intersection. This type of
curve crossing is very important for ultrafast transitions. In more than two
dimensions, crossings appear at higher-dimensional surfaces. The terminology
of conical intersections is also used here (Fig. 21.4).
3
This is similar to a vector field v in three dimensions which can be divided into
a longitudinal or irrotational part with rot v = 0 and a rotational part with
div v = 0.
4
If the two states are of different symmetry, then H12 = 0 and a crossing is possible
in one dimension.
21.4 Crossing in More Dimensions 227

Ead

Q2

Q1

Fig. 21.4. Conical intersection

Problems
21.1. Crude Adiabatic Model
Consider the crossing of two electronic states along a coordinate Q. As basis
functions, we use two coordinate-independent electronic wave functions which
diagonalize the Born–Oppenheimer Hamiltonian at the crossing point Q0 :

(Tel + V (Q0 ))ϕ1,2 = e1,2 ϕ1,2 .

Use the following ansatz functions

Ψ1 (r, Q) = (cos ζ(Q)ϕ1 (r) − sin ζ(Q)ϕ2 (r))χ1 (Q),

Ψ2 (r, Q) = (sin ζ(Q)ϕ1 (r) + cos ζ(Q)ϕ2 (r))χ2 (Q),


which can be written in more compact form

c s χ1
(Ψ1 , Ψ2 ) = (ϕ1 , ϕ2 ) .
−s c χ2

The Hamiltonian is partitioned as

H = TN + Tel + V (r, Q0 ) + ΔV (r, Q).

Calculate the matrix elements of the Hamiltonian



H11 H12
H21 H22

c −s ϕ1 2 ∂ 2 & 1 2' c s
= − + T + V (Q ) + ΔV ϕ , ϕ ,
ϕ†2 −s c
el 0
s c 2m ∂Q2
(21.58)

where χ(Q) and ζ(Q) depend on the coordinate Q whereas the basis func-
tions ϕ1,2 do not. Chose ζ(Q) such that Tel + V is diagonalized. Evaluate the
nonadiabatic interaction terms at the crossing point Q0 .
22
Dynamics of an Excited State

In this chapter, we would like to describe the dynamics of an excited state |s
which is prepared, for example, by electronic excitation, due to absorption of
radiation. This state is an eigenstate of the diabatic Hamiltonian with energy
Es0 . Close in energy to |s is a manifold of other states {|l}, which are not
populated during the short-time excitation, since from the ground state only
the transition |0 → |s is optically allowed. The l-states are weakly coupled
to a continuum of bath states1 and therefore have a finite lifetime (Fig. 22.1).

E
V

|s> |> Γ

|0>

Fig. 22.1. Dynamics of an excited state. The excited state |s decays into a manifold
of states |l which are weakly coupled to a continuum

The bath states will not be considered explicitly. Instead, we use a non-
Hermitian Hamiltonian for the subspace spanned by |s and {|l}. We assume
that the Hamiltonian is already diagonal2 with respect to the manifold {|l},
which has complex energies3

1
For instance, the field of electromagnetic radiation.
2
For non-Hermitian operators, we have to distinguish left eigenvectors and right
eigenvectors.
3
This is also known as the damping approximation.
230 22 Dynamics of an Excited State

G+

E
G−

Fig. 22.2. Integration contour for G± (t)

Γl
El0 = l − i . (22.1)
2
This describes the exponential decay ∼ e−Γl t of the l-states into the continuum
states.
Thus, the model Hamiltonian takes the form
⎛ 0 ⎞
Es Vs1 · · · VsL
⎜ V1s E10 ⎟
⎜ ⎟
H = H0 + V = ⎜ . .. ⎟. (22.2)
⎝ .. . ⎠
VLs EL

22.1 Green’s Formalism

The corresponding Green’s operator or resolvent [61] is defined as


 1
G(E) = (E − H)−1 = |n n|. (22.3)
n
E − En

For a Hermitian Hamiltonian, the poles of the Green’s operator are on the
real axis and the time evolution operator (the so-called propagator) is defined
by (Fig. 22.2)

 = G+ (t) − G− (t),
G(t) (22.4)
 ∞±i
1
G± (t) = e−(iE/)t G(E)dE. (22.5)
2πi −∞±i


G(t) is given by an integral, which encloses all the poles En . Integration
between two poles does not contribute, since the integration path can be
taken to be on the real axis and the two contributions vanish. The integration
22.1 Green’s Formalism 231

G+

E
−G−

 = G+ (t) − G− (t)
Fig. 22.3. Integration contour for G(t)

t<0

G+

E
t>0

Fig. 22.4. Deformation of the integration contour for G+ (t)

along a small circle around a pole gives 2πi times the residual value which is
given by e−iEn t/ (Fig. 22.3).
Hence, we find

 = t
G(t) |ne−(iE/)t n| = exp H . (22.6)
i

For times t < 0, the integration path for G+ can be closed in the upper half of
the complex plane where the integrand e−(iE/)t G(E) vanishes exponentially
for large |E| (Fig. 22.4).
Hence
G+ (t) = 0 for t < 0. (22.7)
We shift the integration path for times t > 0 into the lower half of the complex
plane, where again the integrand vanishes exponentially and we have to sum
over all residuals to find

G+ (t) = G(t) for t > 0. (22.8)

 is the time evolution operator for t > 0.4 There are


Hence, G+ (t) = Θ(t)G(t)
additional interactions for times t < 0 which prepare the initial state
4 
Similarly, we find G− (t) = (Θ(t − 1))G(t) is the time evolution operator for
negative times.
232 22 Dynamics of an Excited State

G+
E

Fig. 22.5. Integration contour for a non-Hermitian Hamiltonian

ψ(t = 0) = |s. (22.9)

The integration contour for a non-Hermitian Hamiltonian can be chosen as the


real axis for G+ (t), which now becomes the Fourier transform of the resolvent
(Fig. 22.5).
Dividing the Hamiltonian in the diagonal part H 0 and the interaction V ,
we have
(G0 )−1 = E − H 0 , (22.10)
G−1 = E − H = (G0 )−1 − V. (22.11)
Multiplication from the left with G0 and from the right with G gives the
Dyson equation:
G = G0 + G0 V G. (22.12)
We iterate that once more

G = G0 + G0 V (G0 + G0 V G) = G0 + G0 V G0 + G0 V G0 V G (22.13)

and project on the state |s

s|G|s = s|G0 |s + s|G0 V G0 |s + s|G0 V G0 V G|s. (22.14)

Now, G0 is diagonal and V is nondiagonal. Therefore

s|G0 V G0 |s = s|G0 |ss|V |ss|G0 |s = 0 (22.15)

and

s|G0 V G0 V G|s = s|G0 |ss|V |ll|G0 |ll|V |ss|G|s, (22.16)



Gss = G0ss + G0ss Vsl G0ll Vls Gss , (22.17)
l

G0ss 1
Gss =  = , (22.18)
1 − Gss l Vsl Gll Vls
0 0 E − Es0 − Rs
with the level shift operator
 |Vsl |2
Rs = . (22.19)
E − l + i(Γl /2)
l
22.2 Ladder Model 233

The poles of the Green’s function Gss are given by the implicit equation
 |Vsl |2
Ep = Es0 + . (22.20)
Ep − El0
l

Generally, the Green’s function is meromorphic and can be represented as


 Ap
Gss (E) = , (22.21)
p
E − Ep

where the residuals are defined by

Ap = lim Gss (E)(E − Ep ). (22.22)


E→Ep

The probability of finding the system still in the state |s at time t > 0 is


Ps (t) = |s|G(t)|s|2 ss (t)|2 ,
= |G (22.23)

where the propagator is the Fourier transform of the Green’s function:


 ∞ 
 1
G+ss (t) = e(E/i)t Gss (E)dE = θ(t) Ap e(Ep /i)t . (22.24)
2πi −∞ p

22.2 Ladder Model


We now want to study a simplified model which can be solved analytically.
The energy of |s is set to zero. The manifold {|l} consists of infinitely equally
spaced states with equal width
Γ
El0 = α + lΔ − i (22.25)
2
and the interaction Vsl = V is taken independent of l. With this simplification,
the poles are solutions of


l=∞
V2
Ep = , (22.26)
Ep − α − lΔ + i(Γ/2)
l=−∞

which can be written using the identity5



 1
cot(z) = (22.27)
z − lπ
l=−∞

5
cot(z) has single poles at z = lπ with residues limz→lπ cot(z)(z − lπ) = 1.
234 22 Dynamics of an Excited State

as
V 2π π Γ
Ep = cot Ep − α + i . (22.28)
Δ Δ 2
For the following discussion, it is convenient to measure all energy quantities
in units of π/Δ and define

 = απ/Δ,
α Γ = Γ π/Δ, (22.29)

p = Ep π/Δ,
E V = V π/Δ, (22.30)
to have # $
  iΓp   Γ
Ep = Ep −
r
= V cot Ep − α
2
+i , (22.31)
2 2
which can be split into real and imaginary part:

Epr pr − α
sin(2(E ))
= , (22.32)
V 2 *p − Γ) − cos(2(E
cosh(Γ r − α
p ))

Γp sinh(Γp − Γ)


− = . (22.33)
2V 2 *p − Γ) − cos(2(E
cosh(Γ r − α
p ))
We now have to distinguish two limiting cases.

The Small Molecule Limit

For small molecules, the density of state 1/Δ is small. If we keep the lifetime
of the l-states fixed, then Γ is small in this limit. The poles are close to the real
axis.6 We approximate the real part by solving the real equation (Fig. 22.6)

E (0) = V 2 cot E (0) − α


 (22.34)
p p

and consider first the solution which is closest to zero. Expanding around E p(0) ,
we find
 
   iΓ
Ep + ξ = V cot Ep + ξ − α
(0) 2 (0)
+
2
 


2 iΓ
= V cot E
2 p − α
(0)
 − V 1 + cot E
2 p − α
(0)
 ξ+ ,
2
(22.35)

6
We assume that α = 0.
22.2 Ladder Model 235

cot (x-1)
0

–4 –2 0 2 4
x

Fig. 22.6. Small molecule limit. The figure shows the graphical solution of
x = cot(x − 1)

which simplifies to
 

2 iΓ
ξ = −V 1 + cot E2
 −α
(0)
 ξ+ (22.36)
p
2

and has the solution



 (0)2
V 2 1 + Vp 4
E
i i
ξ = − Γp = − Γ , (22.37)
2 2  p(0)2
1+V  2 E
1 + V 4

which in the case V small but V > E


p(0) gives

Γp ≈ V 2 Γ, (22.38)

V2
Γp ≈ π 2
Γ. (22.39)
Δ2
The other poles are approximately undisturbed
pr = α
E  + lπ, (22.40)
 r2
V 2 +
E p

*p = Γ 2
Γ V
 r2
→ Γ (22.41)
1 + V 2 +
E p
2
V
236 22 Dynamics of an Excited State

and the residuum of the Green’s function follows from


1

  p + dE − α 
(Ep + dE) − V cot E
2 + iΓ
2
1
=
,
p + dE)
 − V 2 cot E
p − α   2
(E + iΓ
2 + V 2 dE(1
 + cot(E
p − α
+ iΓ
2 ) ) + ···
1 1 1

2 = , (22.42)
 1 + V 2 1 + cot E p − α    2
dE V + E p
dE + iΓ
2

which shows that only contributions from poles close to zero are important.

The Statistical Limit

Consider now the limit Γp > 1 and V > 1. This condition is usually fulfilled
in biomolecules (Fig. 22.7).
If the poles are far away enough from the real axis, then on the real axis
the cotangent is very close to −i and

V 2π π Γ V 2π
Rs = cot Ep − α + i ≈ −i . (22.43)
Δ Δ 2 Δ

Thus
1
Gss (E) = (22.44)
E + i(V 2 π/Δ)

4 4
2 2
0 0
−2 −2
−4 −4

−4 −4 −4 −4
−2 −2
−2 −2
0 0 0 0
Re(z) Im(z) 2 Re(z)
Im(z) 2 2 2
4 4 4 4

Fig. 22.7. Complex cotangent function. Left: imaginary part and right: real part
of cot(z + 3i) are shown together with (z) and (z), respectively
22.3 A More General Ladder Model 237

and the initial state decays as

| = e−kt
2
P (t) = |e(−iV π/Δ)t 2
(22.45)

with the rate


2πV 2 2πV 2
k= = ρ, (22.46)
Δ 
where the density of states is
1
ρ= . (22.47)
Δ

22.3 A More General Ladder Model


We now consider a more general model where the spacing of the states is not
constant and the matrix elements are different. The Hamiltonian has the form
⎛ ⎞
Es Vs1 · · · VsL
⎜ V1s E1 ⎟
⎜ ⎟
H=⎜ . .. ⎟. (22.48)
⎝ .. . ⎠
VLs EL

We take the energies relative to the lowest l-state (Fig. 22.8)

Es = E0 + ΔE, El = E0 + l . (22.49)

We start from the golden rule expression


 2πV 2  2πV 2
k= sl
δ(Es − El ) = sl
δ(ΔE − l ), (22.50)
 
l l

where the density of final states is given by


 
ρ(Es ) = δ(Es − El ) = δ(ΔE − l ). (22.51)
l l

Es

ΔE εl
Eo

Fig. 22.8. Generalized ladder model


238 22 Dynamics of an Excited State

We represent the δ-function by a Fourier integral:


 ∞
1
δ(ω) = e−iωt dt, (22.52)
2π −∞
 ∞
E 1
ω= → δ(E) = e(E/i)t dt, (22.53)
 2π −∞
V2  ∞
k= sl
e(ΔE−l /i)t dt. (22.54)
2 −∞
l

Interchanging integration and summation, we find


 ∞  2  
Vsl ΔE − l
k= dt exp t . (22.55)
−∞ 2 i
l

We introduce
V2
sl (il /)t
z= e (22.56)
2
l

and write the rate as


 ∞
k= dt e−(i/)ΔE t+ln(z) . (22.57)

This integral will now be evaluated approximately using the saddle point
(p. 339) method.7 The saddle point equation for the ladder model becomes

i d ln z(ts )
− ΔE + = 0. (22.58)
 dt
From the derivative
d ln z 1  Vsl2 il itl /
= e , (22.59)
dt z 2 
l

we find
1  Vsl2 itl /
ΔE = l e . (22.60)
z 2
l

Since ΔE is real, we look for a saddle point on the imaginary axis and write
its
= −β, (22.61)

where the new variable β is real. The saddle point equation now has a
quasithermodynamic meaning:

1  Vsl2 −βl
ΔE = l e = l , (22.62)
z 2
l
7
Also known as method of steepest descent or Laplace method.
22.3 A More General Ladder Model 239

where β plays the role of 1/kB T and gl = Vsl2 /2 that of a degeneracy factor.
The second derivative relates to the width of the energy distribution:
   2
d2 z
d2 ln z d dz dz
= dt − dt
dt 2
2
=
dt dt z z z
 2
1  −2l −βl 1  il −β
= gl 2 e − 2 gl e
z  z 
l l
 2 8 9
  2
=− + . (22.63)
 2 
We approximate now the integral by the contribution around the saddle point:
 ∞ %
−(i/)ΔE t+ln(z) βΔE 2π2
k= dt e = z(ts )e
∞   − 2
2
%
V2 2π
sl −β(l −ΔE)
= e . (22.64)
 2  − 2
l

For comparison, let us consider the simplified model with Vsl = V and l =
lΔ. Then

V 2  −βlΔ V 2 1
z= 2 e = 2 . (22.65)
  1 − e−βΔ
l=0
The saddle point equation is
∂ Δ
ΔE = − ln z = βΔ (22.66)
∂β e −1
which determines the “temperature”

1 Δ
β= ln 1 + (22.67)
Δ ΔE
1
≈ if Δ ΔE. (22.68)
ΔE
Then
Δ
l  = ≈ ΔE, (22.69)
−1
eΔ/ΔE
∂2 Δ2 eΔ/ΔE
2l  − 2 = ln z = & '2 ≈ Δe ,
2
(22.70)
∂β 2 eΔ/ΔE − 1
V2 1 V 2 ΔE
z(ts ) = ≈ , (22.71)
2 1 − e−Δ/ΔE 2 Δ
and the rate becomes

V 2 ΔE 1 2π2 2πV 2 1 1
k= 2 e = e , (22.72)
 Δ Δe2 Δ 2π

where the numerical value of e/ 2π = 1.08
240 22 Dynamics of an Excited State

22.4 Application to the Displaced Oscillator Model

We now want to discuss a displaced oscillator model (p. 205) for the transition
between the vibrational manifolds of two electronic states.
2π 2
k= V F CD(ΔE)

 
V2 ∞ (−it/)ΔE (it/)Hf −(it/)Hi V2 ∞
= 2 dt e e e = 2 dt e(−it/)ΔE F (t),
 −∞  −∞
(22.73)

where Hf is the Hamiltonian of the nuclear vibrations in the final state and
the correlation function F (t) = exp(g(t)) for independent displaced oscillators
is taken from (31.12). The rate becomes

V2 ∞
k= 2 dt e(−it/)ΔE
 −∞
# $
 4 5
× exp gr2 (nr + 1)(eiωr t − 1) + nr (e−iωr t − 1) .
r
(22.74)

The short-time expansion gives


 
V 2 2π2 (ΔE − Er )2
k= 2 exp − . (22.75)
 Δ2 2Δ2

If all modes can be treated classically ωr kB T , the phonon number is


n̄r = kB T /ωr and

Δ2 ≈ 2kB T gr2 ωr = 2kB T Er (22.76)
r

which gives for the rate in the classical limit the Marcus expression
 
2πV 2 1 (ΔE − Er )2
k= exp − . (22.77)
 4πkB T Er 4Er kB T

The saddle point equation is


i  4 5
ΔE = gr2 iωr (nr + 1)eiωr t − iωr nr e−iωr t
 r
 4 5
=i gr2 ωr (nr + 1)e−iωr t − nr eiωr t , (22.78)
r
22.4 Application to the Displaced Oscillator Model 241

which we solve approximately by linearization



iΔE = i gr2 ωr [1 − (2n̄r + 1)iωr t + · · · ]
r
t
= iEr − Δ2 + ··· , (22.79)

ΔE − Er
ts = −i . (22.80)
Δ2
At the saddle point, we have
its its 
− ΔE + g(ts ) = − ΔE + gr2 [(nr + 1)(iωr ts ) − nr iωr ts + · · · ]
  r
its  Δ2 t2
=− ΔE + i gr2 ωr ts − 2 s
 r
 2
its −(Er − ΔE)2
=− (ΔE − Er ) − Δ2
 2Δ4
(ΔE − Er )2
=− . (22.81)
2Δ2
The second derivative is
 4 5 Δ2
− gr2 ωr2 (nr + 1)e−iωr t + nr eiωr t = − 2 + · · · (22.82)
r


and finally the rate again is


 
V 2 2π2 (ΔE − Er )2
k= 2 exp − . (22.83)
 Δ2 2Δ2

At low temperatures kB T ω, the saddle point equation simplifies to



ΔE = gr2 ωr eiωr t . (22.84)
r

To solve this equation, we introduce an average frequency (major accepting


modes) ω with  
gr2 ωr = gr2 
ω = S
ω, (22.85)
r r
 2 
r gr ωr
=
ω , S= gr2 , (22.86)
S r

which, after insertion into the saddle point equation, gives the approximation

ωeits ω
ΔE = S (22.87)
242 22 Dynamics of an Excited State

0
10

10–3

10–6
rate k
10–9

10–12

10–15

10–18
0 5 10 15
energy gap Δ

Fig. 22.9. Energy gap law. The rate (22.90) is shown as a function of the energy
gap in units of the average frequency for S = 0.2, 0.5, 1, 2

with the solution


1 ΔE
its = ln . (22.88)
 S
ω ω
The second derivative is
 1
− gr2 ωr2 eiωr ts ≈ −S ω 
 2 eln(ΔE/Sω) = − ΔE ω (22.89)
r


and the rate is


 
V2 2π ΔE 1 ΔE ΔE
k= 2 exp − ln +S −S
 
ΔE ω  ω
 S ω S
ω
 - .
2
V 2π ΔE ΔE
= exp −S − ln −1 . (22.90)
 ΔEω 
ω S
ω

The dependence on ΔE is known as the energy gap law (Fig. 22.9)

Problems
22.1. Ladder Model
Solve the time evolution for the ladder model approximately
⎛ ⎞
0 V ··· V
⎜ V E1 ⎟
⎜ ⎟
H =⎜ . .. ⎟ , Ej = α + (j − 1)Δ.
⎝ .. . ⎠
V En
22.4 Application to the Displaced Oscillator Model 243

First derive an integral equation for C0 (t) only by substitution. Then replace
the sum by integration over ω = j(Δ/) and extend the integration over the
whole real axis. Replace the integral by a δ-function and show that the initial
state decays exponentially with a rate

2πV 2
k= .
Δ
Part VII

Elementary Photoinduced Processes


23
Photophysics of Chlorophylls and Carotenoids

Chlorophylls and carotenoids are very important light receptors. Both classes
of molecules have a large π-electron system which is delocalized over many
conjugated bonds and is responsible for strong absorption bands in the visible
region (Figs. 23.1 and 23.2).

O CH3 CH3
H CH3
2 3 H
CH3 1
I II 4
N N H
H Mg H
CH3 N N
IV III CH3
H
H V
CH2 H
O O
CH2 O
phytyl O O
74 CH3

Fig. 23.1. Structure of bacteriochlorophyll-b [62]. This is the principal green pig-
ment of the photosynthetic bacterium Rhodopseudomonas viridis. Dashed curves
indicate the delocalized π-electron system. Chlorophylls have an additional double
bond between positions 3 and 4. Variants have different side chains at positions 2
and 3

Fig. 23.2. Structure of β-carotene. The basic structure of the carotenoids is a


conjugated chain made up of isoprene units. Variants have different end groups
248 23 Photophysics of Chlorophylls and Carotenoids

23.1 MO Model for the Electronic States

The electronic wave functions of larger molecules are usually described by


introducing one-electron wave functions φ(r) and expanding the wave function
in terms of one or more Slater determinants. Singlet ground states can be in
most cases described quite sufficiently by one determinant representing a set
of doubly occupied orbitals:

|S0  = |φ1↑ φ1,↓ · · · φnocc,↑ φnocc,↓ |


 
 φ1↑ (r1 ) φ1↓ (r1 ) · · · φnocc,↓ (r1 ) 

 φ1↑ (r2 ) φ1↓ (r2 ) · · · φnocc,↓ (r2 ) 

1  . .. .. 
=   .. . .  . (23.1)

(2Nocc )!  

 
 φ1↑ (r2N ) φ1↓ (r2N ) φ (r
nocc,↓ 2Nocc ) 
occ occ

Excited states can be described as a linear combination of excited electronic


configurations. The lowest excited state can often be reasonably approxi-
mated as the transition of one electron from the highest occupied orbital
φnocc (HOMO) to the lowest unoccupied orbital φnocc+1 (LUMO). The singlet
excitation is given by
1
|S1  = √ (|φ1↑ φ1,↓ · · · φnocc,↑ φnocc+1,↓ | − |φ1↑ φ1,↓ · · · φnocc,↓ φnocc+1,↑ |).
2
(23.2)
The molecular orbitals can be determined with the help of more or less
sophisticated methods (Fig. 23.3).

energy

unoccupied
orbitals
LUMO

HOMO
occupied
orbitals

Fig. 23.3. HOMO–LUMO transition. The singlet ground state is approximated


by a Slater determinant of doubly occupied molecular orbitals. The lowest singlet
excitation can be described by promotion of an electron from the highest occupied
to the lowest unoccupied orbital for most chromophores
23.2 The Free Electron Model for Polyenes 249

23.2 The Free Electron Model for Polyenes

We approximate a polyene with N double bonds by a one-dimensional box of


length L = (2N + 1)d.1 The orbitals of the free electron model

2 ∂ 2
H=− + V (r) (23.3)
2me ∂x2
have to fulfill the boundary condition φ(0) = φ(L) = 0. Therefore, they are
given by
2 πsx
φs (x) = sin (23.4)
L L
with energies (Fig. 23.4)
 2
2 L
2 πs
2 πsx
2 1 πs
Es = sin dx = . (23.5)
L 0 2me L L 2me L

Since there are 2N π-electrons, the energy of the lowest excitation is estimated
as

ΔE = En+1 − En
π 2 2 π 2 2
= ((N + 1)2
− N 2
) = . (23.6)
2me d2 (2N + 1)2 2me d2 (2N + 1)

V(x)
s=2 s=1

H d H
C C C C C C C C
H H H H H H H H

Fig. 23.4. Octatetraene. Top: geometry-optimized structure. Middle: potential


energy and lowest two eigenfunctions of the free electron model. Bottom: linear
model with equal bond lengths d

1
The end of the box is one bond length behind the last C-atom.
250 23 Photophysics of Chlorophylls and Carotenoids

The transition dipole matrix element for the singlet–singlet transition is


  
1
μ = |φnocc↑ φnocc↓ | (−er)| √ (|φnocc↑ φnocc+1↓ | − |φnocc↓ φnocc+1↑ |)
2
√ 
= − 2e dr φnocc (r)rφnocc+1 (r)
√ 2 L (N + 1)πx N πx
= − 2e sin x sin dx
L 0 L L
√ √
2 2e 4L2 N (N + 1) 8 2ed n(N + 1)
= = , (23.7)
L π 2 (2N + 1)2 π2 2N + 1

which grows with increasing length of the polyene. The absorption coefficient
is proportional to
N 2 (N + 1)2
α ∼ μ2 ΔE ∼ , (23.8)
(2N + 1)3
which depends close to linear on the number of double bonds N .

23.3 The LCAO Approximation


The molecular orbitals are usually expanded in a basis of atomic orbitals

φ(r) = Cs ϕs (r), (23.9)
s

where the atomic orbitals are centered on the nuclei and the coefficients are
determined from diagonalization of a certain one-electron Hamiltonian (for
instance, Hartree–Fock, Kohn–Sham, semiempirical approximations such as
AM1)
Hψ = Eψ. (23.10)
Inserting the LCAO wave function gives
 
Cs Hϕs (r) = E Cs ϕs (r) (23.11)

and projection on one of the atomic orbitals ϕs gives a generalized eigenvalue
problem:
 
0 = d3 r ϕs (r) Cs (H − E)ϕs (r)
s

= Cs (Hs s − ESs s ). (23.12)
s
23.4 Hückel Approximation 251

β β β β β β β

C C C C C C C C

Fig. 23.5. Hückel model for polyenes

23.4 Hückel Approximation

The Hückel approximation [63] is a very simple LCAO model for the π-
electrons. It makes the following approximations (Fig. 23.5):
• The diagonal matrix elements Hss = α have the same value (Coulomb
integral) for all carbon atoms.
• The overlap of different pz -orbitals is neglected Sss = δss .
• The interaction between bonded atoms is Hss = β (resonance integral)
and has the same value for all bonds.
• The interaction between nonbonded atoms is zero Hss = 0.
The Hückel matrix for a linear polyene has the form of a tridiagonal matrix
⎛ ⎞
α β
⎜β α β ⎟
⎜ ⎟
⎜ .. .. .. ⎟
H =⎜ . . . ⎟ (23.13)
⎜ ⎟
⎝ β α β ⎠
β α

and can be easily diagonalized with the eigenvectors (Fig. 23.6)



2 2N
rsπ
φr = sin ϕs , r = 1, 2, . . . , 2N. (23.14)
2N + 1 s=1 2N + 1

The eigenvalues are



Er = α + 2β cos . (23.15)
2N + 1
The symmetry of the molecule is C2h (Fig. 23.7). All π-orbitals are antisym-
metric with respect to the vertical reflection σh . With respect to the C2
rotation, they have alternating a- or b-symmetry. Since σh × C2 = i, the
orbitals are of alternating au - and bg -symmetry. The lowest transition energy
is in the Hückel model

(N + 1)π Nπ
ΔE = EN +1 − EN = 2β cos − cos , (23.16)
2N + 1 2N + 1
252 23 Photophysics of Chlorophylls and Carotenoids

LUMO au

HOMO bg

au

bg

au

Fig. 23.6. Hückel orbitals for octatetraene

C2h E C2 σ i
Ag 1 1 1 1
Au 1 1 –1 –1
Bg 1 –1 –1 1
Bu 1 –1 1 –1

Fig. 23.7. Character table of the symmetry group C2h . All π-orbitals are antisym-
metric with respect to the reflection σ and are therefore of au - or bg -symmetry

which can be simplified with the help of (x = π/(2N + 1))

cos(N + 1)x − cos N x


1
= (ei(N +1)x + e−i(N +1)x − eiN x − e−iN x )
2
1 i(N +1/2)x ix/2 1
= e (e − e−ix/2 ) + e−i(N +1/2)x (e−ix/2 − eix/2 )
2 2
1 x
= −2 sin N+ x sin (23.17)
2 2

to give2
π 1
ΔE = −4β sin ∼ for large n. (23.18)
4N + 2 N
This approximation can be improved if the bond length alternation is taken
into account, by using two alternating β-values [64]. The resulting energies
are 
Ek = α ± β 2 + β 2 + 2ββ  cos k, (23.19)

2
β is a negative quantity.
23.6 Cyclic Polyene as a Model for Porphyrins 253

S3 Ag

S2 Bu

S1 Ag

Ag Ag Ag Bu S0 Ag

Fig. 23.8. Simplified CI model for linear polyenes

where the k-values are solutions of


β sin(N + 1)k + β  sin N k = 0. (23.20)
In the Hückel model, the lowest excited state has the symmetry au × bg = Bu
and is strongly allowed. However, it has been well established that for longer
polyenes, the lowest excited singlet is an optically forbidden totally symmetric
Ag state [65,66]. Such a low-lying dark state has been observed experimentally
also for carotenoids [67] and can be only understood if correlation effects are
taken into account [68].

23.5 Simplified CI Model for Polyenes


In a very simple model, Kohler [69] treats only transitions from the bg -HOMO
into the au -LUMO and bg -LUMO+1 orbitals. The HOMO–LUMO+1 transi-
tion as well as the double HOMO–LUMO transition are both of Ag -symmetry
and can therefore interact. If the interaction is strong enough, then the low-
est excited state will be of Ag -symmetry and therefore optically forbidden
(Fig. 23.8).

23.6 Cyclic Polyene as a Model for Porphyrins


For a cyclic polyene with N carbon atoms the Hückel matrix has the form
⎛ ⎞
α β β
⎜β α β ⎟
⎜ ⎟
⎜ ⎟
H = ⎜ ... ... ..
. ⎟ (23.21)
⎜ ⎟
⎝ β α β ⎠
β β α
with eigenvectors

1  iks
N
2π 2π 2π
φk = √ e ϕs , k = 0, , 2 , . . . , (N − 1) (23.22)
N s=1 N N N
254 23 Photophysics of Chlorophylls and Carotenoids

3 4 3 4

α β α β

2−
N N
2 5 2 5
2+
NH HN N Mg N
1 6 1 6
N N

δ γ δ γ

8 7 8 7

Fig. 23.9. Free base porphin and Mg-porphin. The bond character was assigned on
the basis of the bond lengths from HF-optimized structures

and eigenvalues
Ek = α + 2β cos k. (23.23)
This can be used as a model for the class of porphyrin molecules [70, 71]
(Fig. 23.9).
For the metal–porphyrin, there are in principle two possibilities for the
assignment of the essential π-system, an inner ring with 16 atoms or an outer
ring with 20 atoms, both reflecting the D4h -symmetry of the molecule. Since
it is not possible to draw a unique chemical structure, we have to count the
π-electrons. The free base porphin is composed of 14 H-atoms (of which the
peripheral ones are not shown), 20 C-atoms and 4 N-atoms which provide
a total of 14 + 4 × 20 + 5 × 4 = 114 valence electrons. There are 42 σ-
bonds (including those to the peripheral H-atoms) and two lonely electron
pairs involving a total of 88 electrons. Therefore, the number of π-electrons is
114 − 88 = 26. Usually, it is assumed that the outer double bonds are not so
strongly coupled to the π-system and a model with 18 π-electrons distributed
over the N = 16-ring is used. For the metal porphin, the number of H-atoms
is only 12 but there are four lonely electron pairs instead of two. Therefore,
the number of non-π valence electrons is the same as for the free base porphin.
The total number of valence electrons is again 114 since the Mg atom formally
donates two electrons.

23.7 The Four Orbital Model for Porphyrins


Gouterman [72–74] introduced the four orbital model which considers only
the doubly degenerate HOMO (k = ±4 × 2π/N ) and LUMO (k = ±5 ×
2π/N ) orbitals. There are four HOMO–LUMO transitions (Fig. 23.10). Their
intensities are given by
23.7 The Four Orbital Model for Porphyrins 255

9
−8 8
−7 7
−6 6

−5 5

−4 4

−3 3
−2 2
−1 1
k=0

Fig. 23.10. Porphin orbitals


1  −ik s iks
μ= e e ϕ∗s (r)(−er)ϕs (r)d3 r (23.24)
N 
ss

which is approximately3
⎛ & '⎞
cos &s × 2π
1  −ik s iks N'
μ= e e δss (−eR) ⎝ sin s × π
N
⎠, (23.25)
N 
ss 0

where the z-component vanishes and the perpendicular components have


circular polarization4

1 i −eR  i(k−k ±2π/N )s −eR
μ √ , ±√ , 0 = √ e = √ δk−k ,±2π/N . (23.26)
2 2 2N s 2

The selection rule is


k − k  = ±2π/N. (23.27)
Hence, two of the HOMO–LUMO transitions are allowed and circularly
polarized:
2π 2π
4 →5 ,
N N
2π 2π
−4 → −5 . (23.28)
N N
If configuration interaction is taken into account, these four transitions split
into two degenerate excited states of E-symmetry. The higher one carries
most of the intensity and corresponds to the Soret band in the UV. The
3
Neglecting differential overlaps and assuming a perfect circular arrangement.
4
For an average R = 3 Å, this gives a total intensity of 207 Debye2 which is
comparable to the 290 Debye2 from a HF/CI calculation.
256 23 Photophysics of Chlorophylls and Carotenoids

a b c

H y H y H
H H H
H x H x H
H H H

eV
y 8.6D
x y
2.0D (E) x x 10D
5 x,y
10D (E) x 8.1D
y 8.0D x 1.0D
4

x 0.5D
3 y 4.0D x 1.4D
0.9D (E)
y 7D

Fig. 23.11. Electronic structure of porphins and porphin derivatives. 631G-HF/CI


calculations were performed with GAMESS [132]. Numbers give transition dipoles
in Debyes. Dashed lines indicate very weak or dipole-forbidden transitions. (a) Mg-
porphin has two allowed transitions of E-symmetry, corresponding to the weak
Q-band at lower and the strong B-band at higher energy. (b) In Mg-chlorin
(dihydroporphin), the double bond 1–2 is saturated similar to chlorophylls. The
Q-band splits into the lower and stronger Qy -band and the weaker Qx -band.
(c) In Mg-tetrahydroporphin, double bonds 1–2 and 5–6 are saturated similar to
bacteriochlorophyll. The Qy -band is shifted to lower energies

lower one is very weak and corresponds to the Q-band in the visible [75]. As
a result of ab initio calculations, the four orbitals of the Gouterman model
stay approximately separated from the rest of the MO orbitals [76]. If the
symmetry is disturbed as for chlorin, the Q-band splits into the lower Qy -band
which gains large intensity and the higher weak Qx -band. Figure 23.11 shows
calculated spectra for Mg-porphin, Mg-chlorin, and Mg-tetrahydroporphin.

23.8 Energy Transfer Processes


Carotenoids and chlorophylls are very important for photosynthetic sys-
tems [77]. Chlorophyll molecules with different absorption maxima are used to
harvest light energy and to direct it to the reaction center where the special
pair dimer has the lowest absorption band of the chlorophylls. Carotenoids
can act as additional light-harvesting pigments in the blue region of the
spectrum [78, 79] and transfer energy to chlorophylls which absorb at longer
wavelengths. Carotenoids are also important as triplet quenchers to prevent
23.8 Energy Transfer Processes 257

Chlorophyll Chlorophyll Chlorophyll


dimer
B

S0

Fig. 23.12. Chlorophyll–chlorophyll energy transfer. Energy is transferred via a


sequence of chromophores with decreasing absorption maxima

Chlorophyll Carotenoid
B
Bu
Ag
Q

T
T
S0 S0

Fig. 23.13. Carotenoid–chlorophyll energy transfer. Carotenoids absorb light in


the blue region of the spectrum and transfer energy to chlorophylls which absorb
at longer wavelengths (solid arrows). Carotenoid triplet states are below the lowest
chlorophyll triplet and therefore important triplet quenchers (dashed arrow )

the formation of triplet oxygen and for dissipation of excess energy [80]
(Figs. 23.12 and 23.13).

Problems
23.1. Polyene with Bond Length Alternation

C
C C

C C

C C
C

Consider a cyclic polyene with 2N -carbon atoms with alternating bond


lengths.
258 23 Photophysics of Chlorophylls and Carotenoids

The Hückel matrix has the form


⎛ ⎞
α β β
⎜ β α β ⎟
⎜ ⎟
⎜ . . ⎟

H=⎜ β  .. .. ⎟.

⎜ .. ⎟
⎝ . α β⎠
β β α

(a) Show that the eigenvectors can be written as

c2n = eikn , c2n−1 = ei(kn+χ) .

(b) Determine the phase angle χ and the eigenvalues for β = β 


(c) We want now to find the eigenvectors of a linear polyene. Therefore, we
use the real-valued functions

c2n = sin kn, c2n−1 = sin(kn + χ), n = 1, . . . , N.

with the phase angle as in (b).


We now add two further carbon atoms with indices 0 and 2N + 1. The
first of these two obviously has no effect since c0 = sin(0 × k) = 0. For
the atom 2N + 1, we demand that the wave function again vanishes which
restricts the possible k-values:

0 = c2N +1 = sin((N + 1)k + χ) = (eiχ+i(N +1)k ).

For these k-values, the cyclic polyene with 2N + 2 atoms is equivalent to


the linear polyene with 2N -atoms as the off-diagonal interaction becomes
irrelevant. Show that the k-values obey the equation

β sin((N + 1)k) + β  sin(N k) = 0.

(d) Find a similar treatment for a linear polyene with odd number of C-atoms.
24
Incoherent Energy Transfer

We consider the transfer of energy from an excited donor molecule D∗ to an


acceptor molecule A
D∗ + A → D + A∗ . (24.1)

24.1 Excited States


We assume that the optical transitions of both molecules can be described by
the transition between the highest occupied (HOMO) and lowest unoccupied
(LUMO) molecular orbitals. The Hartree–Fock ground state of one molecule
can be written as a Slater determinant of doubly occupied molecular orbitals:

|HF = |φ1↑ φ1↓ · · · φHO↑ φHO↓ |. (24.2)

Promotion of an electron from the HOMO to the LUMO creates a singlet or


a triplet state both of which are linear combinations of the four determinants:

| ↑↓ = |φ1↑ φ1↓ · · · φHO↑ φLU↓ |,


| ↓↑ = |φ1↑ φ1↓ · · · φHO↓ φLU↑ |,
| ↑↑ = |φ1↑ φ1↓ · · · φHO↑ φLU↑ |,
| ↓↓ = |φ1↑ φ1↓ · · · φHO↓ φLU↓ |. (24.3)

Obviously, the last two determinants are the components of the triplet state
with Sz = ±1:

|1, +1 = | ↑↑,


|1, −1 = | ↓↓. (24.4)

Linear combination of the first two determinants gives the triplet state with
Sz = 0:
1
|1, 0 = √ (| ↑↓ + | ↓↑) (24.5)
2
260 24 Incoherent Energy Transfer

and the singlet state


1
|0, 0 = √ (| ↑↓ − | ↓↑). (24.6)
2
Let us now consider the states of the molecule pair DA. The singlet ground
state is
|φ1↑ φ1↓ · · · φHO,D,↑ φHO,D,↓ φHO,A,↑ φHO,A,↓ |, (24.7)
which will simply be denoted as

|1 DA = |D↑ D↓ A↑ A↓ |. (24.8)

The excited singlet state of the donor is

|¹D*A⟩ = (1/√2) (|D*↑ D↓ A↑ A↓| − |D*↓ D↑ A↑ A↓|) (24.9)

and the excited state of the acceptor is

|¹DA*⟩ = (1/√2) (|D↑ D↓ A*↑ A↓| − |D↑ D↓ A*↓ A↑|). (24.10)

24.2 Interaction Matrix Element


The interaction responsible for energy transfer is the Coulombic electron–
electron interaction:
V_ee = e² / (4πε|r₁ − r₂|). (24.11)
With respect to the basis of molecular orbitals, its matrix elements are denoted
as

V(φ₁φ₁′ φ₂φ₂′) = ∫ d³r₁ d³r₂ φ*_{1σ}(r₁) φ*_{2σ′}(r₂) [e²/(4πε|r₁ − r₂|)] φ_{1′σ}(r₁) φ_{2′σ′}(r₂). (24.12)
The transfer interaction

V_if = ⟨¹D*A|V_ee|¹DA*⟩
     = (1/2) ⟨D*↑ D↓ A↑ A↓ |V_ee| D↑ D↓ A*↑ A↓⟩
     − (1/2) ⟨D*↑ D↓ A↑ A↓ |V_ee| D↑ D↓ A*↓ A↑⟩
     − (1/2) ⟨D*↓ D↑ A↑ A↓ |V_ee| D↑ D↓ A*↑ A↓⟩
     + (1/2) ⟨D*↓ D↑ A↑ A↓ |V_ee| D↑ D↓ A*↓ A↑⟩ (24.13)

Fig. 24.1. Intramolecular energy transfer. Left: the excitonic or Förster mech-
anism [81, 82] depends on transition intensities and spectral overlap. Right: the
exchange or Dexter mechanism [83] depends on the overlap of the wave functions

consists of four summands. The first one has two contributions,

(1/2) ⟨D*↑ D↓ A↑ A↓ |V_ee| D↑ D↓ A*↑ A↓⟩ = (1/2) V(D*D AA*) − (1/2) V(D*A* AD), (24.14)

where the first part is the excitonic interaction and the second part is the
exchange interaction (Fig. 24.1).
The second summand

−(1/2) ⟨D*↑ D↓ A↑ A↓ |V_ee| D↑ D↓ A*↓ A↑⟩ = (1/2) V(D*D AA*) (24.15)

has no exchange contribution due to the spin orientations. The two remaining
summands are just mirror images of the first two. Altogether, the interaction
for singlet energy transfer is

⟨¹D*A|V_ee|¹DA*⟩ = 2V(D*D AA*) − V(D*A* AD). (24.16)

In the triplet case,

V_if = ⟨³D*A|V_ee|³DA*⟩
     = (1/2) ⟨D*↑ D↓ A↑ A↓ |V_ee| D↑ D↓ A*↑ A↓⟩
     + (1/2) ⟨D*↑ D↓ A↑ A↓ |V_ee| D↑ D↓ A*↓ A↑⟩
     + (1/2) ⟨D*↓ D↑ A↑ A↓ |V_ee| D↑ D↓ A*↑ A↓⟩
     + (1/2) ⟨D*↓ D↑ A↑ A↓ |V_ee| D↑ D↓ A*↓ A↑⟩
     = −V(D*A* AD). (24.17)

Here, energy can only be transferred by the exchange coupling [83]. Since
this involves the overlap of electronic wave functions, it is important at small
distances. In the singlet state, the excitonic interaction [81, 82] allows for
energy transfer also at larger distances.

Fig. 24.2. Multipole expansion. For large intermolecular distance R = |R1 − R2 |,


the distance of the electrons r1 − r2 can be expanded as a Taylor series with respect
to (r1 − R1 )/R and (r2 − R2 )/R

24.3 Multipole Expansion of the Excitonic Interaction

We will now apply a multipole expansion to the excitonic matrix element


(Fig. 24.2)

V_exc = 2V(D*D AA*)
      = 2 ∫ d³r₁ d³r₂ φ*_{D*}(r₁) φ*_A(r₂) [e²/(4πε|r₁ − r₂|)] φ_D(r₁) φ_{A*}(r₂). (24.18)

We take the position of the electrons relative to the centers of the molecules,

r_{1,2} = R_{1,2} + ρ_{1,2}, (24.19)

and expand the Coulombic interaction

1/|r₁ − r₂| = 1/|R₁ − R₂ + ρ₁ − ρ₂| (24.20)

using the Taylor series

1/|R + ρ| = 1/|R| − Rρ/|R|³ + (1/2) (3|R|(Rρ)² − |R|³ρ²)/|R|⁶ + ··· (24.21)

for
R = R₁ − R₂,   ρ = ρ₁ − ρ₂. (24.22)

The zero-order term vanishes due to the orthogonality of the orbitals. The
first-order term gives

−2 [e²/(4πε|R|³)] R ∫ d³ρ₁ d³ρ₂ φ*_{D*}(r₁) φ_D(r₁) (ρ₁ − ρ₂) φ*_A(r₂) φ_{A*}(r₂) (24.23)

and also vanishes due to the orthogonality. The second-order term gives the
leading contribution:

[e²/(4πε|R|⁶)] ∫ d³ρ₁ d³ρ₂ φ*_{D*}(r₁) φ_D(r₁) [3|R|(Rρ₁ − Rρ₂)² − |R|³(ρ₁ − ρ₂)²] φ*_A(r₂) φ_{A*}(r₂). (24.24)

Only the integrals over mixed products of ρ₁ and ρ₂ are not zero. They can
be expressed with the help of the transition dipoles:

μ_D = ⟨¹D|er|¹D*⟩ = √2 ∫ d³ρ₁ φ*_{D*}(ρ₁) eρ₁ φ_D(ρ₁),
μ_A = ⟨¹A|er|¹A*⟩ = √2 ∫ d³ρ₂ φ*_A(ρ₂) eρ₂ φ_{A*}(ρ₂). (24.25)

Finally,¹ the leading term of the excitonic interaction is the dipole–dipole term:

V_exc = [e²/(4πε|R|⁵)] (|R|² μ_D μ_A − 3(Rμ_D)(Rμ_A)). (24.26)

This is often written with an orientation-dependent factor K as

V_exc = (K/|R|³) |μ_D| |μ_A|. (24.27)
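As an illustration (not from the text), the following Python sketch evaluates the point-dipole coupling (24.26)/(24.27) for two transition dipoles; the conversion constant, dipole magnitudes, and geometry are example values.

import numpy as np

DEBYE2_PER_A3_IN_CM = 5040.0   # approximate conversion: 1 Debye^2/Angstrom^3 in cm^-1 (vacuum)

def dipole_coupling_cm(mu_D, mu_A, R_vec, eps_r=1.0):
    """Point-dipole coupling (24.26)/(24.27); mu in Debye, R_vec in Angstrom, result in cm^-1."""
    R = np.linalg.norm(R_vec)
    n = R_vec / R
    kappa = mu_D @ mu_A - 3 * (n @ mu_D) * (n @ mu_A)   # = K |mu_D||mu_A| in Debye^2
    return DEBYE2_PER_A3_IN_CM * kappa / (eps_r * R**3)

# two parallel 6-Debye dipoles, 9 Angstrom apart, perpendicular to the connecting vector (K = 1)
mu = np.array([0.0, 0.0, 6.0])
print(dipole_coupling_cm(mu, mu, np.array([9.0, 0.0, 0.0])))   # ~ +250 cm^-1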

24.4 Energy Transfer Rate

We consider excitonic interaction of two molecules. The Hamilton operator is


divided into the zero-order Hamiltonian

H₀ = |D*A⟩ H_{D*A} ⟨D*A| + |DA*⟩ H_{DA*} ⟨DA*| (24.28)

and the interaction operator

H′ = |D*A⟩ V_exc ⟨DA*| + |DA*⟩ V_exc ⟨D*A|. (24.29)

We neglect electron exchange between the two molecules and apply the Con-
don approximation. Then taking energies relative to the electronic ground
state |DA, we have

H_{D*A} = ħω_{D*} + H_{D*} + H_A,
H_{DA*} = ħω_{A*} + H_D + H_{A*}, (24.30)

with electronic excitation energies ħω_{D*(A*)} and nuclear Hamiltonians H_D,
H_{D*}, H_A, H_{A*}. Now, consider once more Fermi's golden rule for the transition
between vibronic states |i⟩ → |f⟩:
1
Local dielectric constant and local field correction [84, 85] will be taken into
account implicitly by considering the transition dipoles as effective values.

k = (2π/ħ) Σ_{i,f} P_i |⟨i|H′|f⟩|² δ(ħω_f − ħω_i)
  = (1/ħ²) Σ_{i,f} P_i ∫ ⟨i|H′|f⟩ ⟨f|H′|i⟩ e^{i(ω_f − ω_i)t} dt (24.31)
  = (1/ħ²) ∫ Σ_{i,f} P_i ⟨i|H′|f⟩ e^{iω_f t} ⟨f|H′ e^{−iω_i t}|i⟩ dt
  = (1/ħ²) ∫ Σ_i ⟨i| Q⁻¹ e^{−H₀/k_BT} H′ e^{itH₀/ħ} H′ e^{−itH₀/ħ} |i⟩ dt
  = (1/ħ²) ∫ dt ⟨H′(0) H′(t)⟩. (24.32)
Initially, only the donor is excited. Then the average is restricted to the
vibrational states of D∗ A:

k = (1/ħ²) ∫ dt ⟨H′(0) H′(t)⟩_{D*A}. (24.33)

With the transition dipole operators

μ̂_D = |D⟩ μ_D ⟨D*| + |D*⟩ μ_D ⟨D|,
μ̂_A = |A⟩ μ_A ⟨A*| + |A*⟩ μ_A ⟨A|, (24.34)

the rate becomes



k = (K²/ħ²R⁶) ∫ dt ⟨μ̂_D(0) μ̂_A(0) μ̂_D(t) μ̂_A(t)⟩_{D*A}.

Here, we assumed that the orientation does not change on the relevant time
scale. Since each of the dipole operators acts only on one of the molecules, we
have

k = (1/ħ²)(K²/|R|⁶) ∫ dt ⟨μ̂_D(0) μ̂_D(t) μ̂_A(0) μ̂_A(t)⟩_{D*A}
  = (1/ħ²)(K²/|R|⁶) ∫ dt ⟨μ̂_D(0) μ̂_D(t)⟩_{D*A} ⟨μ̂_A(0) μ̂_A(t)⟩_{D*A}
  = (1/ħ²)(K²/|R|⁶) ∫ dt ⟨μ̂_D(0) μ̂_D(t)⟩_{D*} ⟨μ̂_A(0) μ̂_A(t)⟩_A. (24.35)

24.5 Spectral Overlap


The two factors are related to the acceptor absorption and donor fluorescence
spectra. Consider optical transitions between the singlet states |1 D∗  → |1 D
and |1 A →1 A∗ . The number of fluorescence photons per time is given by the

Einstein coefficient for spontaneous emission2


A_{D*D} = [2ω³_{D*D}/(3ε₀hc³)] |μ_D|² (24.36)

with the donor transition dipole moment (24.25)

μ_D = √2 ∫ d³r φ_{D*} (er) φ_D. (24.37)

The frequency-resolved total fluorescence is then

I_e(ω) = A_{D*D} g_e(ω) = [2ω³/(3ε₀hc³)] |μ_D|² g_e(ω) (24.38)

with the normalized lineshape function g_e(ω).
The absorption cross section can be expressed with the help of the Einstein
coefficient for absorption B^ω_{AA*} as [86]

σ_a(ω) = B^ω_{AA*} ħω g_a(ω)/c. (24.39)

The Einstein coefficients for absorption and emission are related by

B^ω_{AA*} = [π²c³/(ħω³_{A*A})] A_{A*A} = (1/3)(π/(ħ²ε₀)) |μ_A|²

and therefore (Fig. 24.3)

σ_a(ω) = (1/3)(π/(ħcε₀)) |μ_A|² ω g_a(ω).

In the following, we discuss absorption and emission in terms of the
modified spectra:

α(ω) = (3cε₀ħ/π) σ_a(ω)/ω = |μ_A|² g_a(ω)
     = Σ_{i,f} P_i |⟨i|μ̂_A|f⟩|² δ(ω_f − ω_i − ω)
     = (1/2π) ∫ dt e^{−iωt} Σ_{i,f} ⟨i| (e^{−H/k_BT}/Q) μ̂_A |f⟩ e^{iω_f t} ⟨f| μ̂_A e^{−iω_i t} |i⟩
     = (1/2π) ∫ dt e^{−iωt} ⟨μ̂_A(0) μ̂_A(t)⟩_A
2
A detailed discussion of the relationships between absorption cross section and
Einstein coefficients is found in [86].

Fig. 24.3. Absorption and emission. Within the displaced oscillator model, the
0 → 0 transition energy is located halfway between the absorption (α) and emission
maximum (η)

and³

η(ω) = (3ε₀hc³/2ω³) I_e(ω) = |μ_D|² g_e(ω)
     = Σ_{i,f} P_f |⟨f|μ̂_D|i⟩|² δ(ω_f − ω_i − ω)
     = (1/2π) ∫ dt e^{−iωt} Σ_f ⟨f| e^{iω_f t} μ̂_D (e^{−H/k_BT}/Q) |i⟩ e^{−iω_i t} ⟨i|μ̂_D|f⟩
     = (1/2π) ∫ dt e^{−iωt} ⟨μ̂_D(t) μ̂_D(0)⟩_{D*}
     = (1/2π) ∫ dt e^{iωt} ⟨μ̂_D(0) μ̂_D(t)⟩_{D*}. (24.40)

Within the Condon approximation,

⟨μ̂_A(0) μ̂_A(t)⟩_A = e^{iω_{A*}t} |μ_A|² ⟨e^{itH_{A*}/ħ} e^{−itH_A/ħ}⟩_A = e^{iω_{A*}t} |μ_A|² F_A(t) (24.41)

and therefore the lineshape function is the Fourier transform of the correlation
function:

g_a(ω) = (1/2π) ∫ dt e^{i(ω−ω_{A*})t} F_A(t). (24.42)

Similarly,

⟨μ̂_D(0) μ̂_D(t)⟩_{D*} = e^{−iω_{D*}t} |μ_D|² ⟨e^{itH_D/ħ} e^{−itH_{D*}/ħ}⟩_{D*} = e^{−iω_{D*}t} |μ_D|² F_{D*}(t) (24.43)
3
Note the sign change which appears since we now have to average over the excited
state vibrations.

and

g_e(ω) = (1/2π) ∫ dt e^{i(ω−ω_{D*})t} F_{D*}(t). (24.44)
With the inverse Fourier integrals

⟨μ̂_A(0) μ̂_A(t)⟩_A = ∫ dω e^{iωt} α(ω),
⟨μ̂_D(0) μ̂_D(t)⟩_{D*} = ∫ dω e^{−iωt} η(ω), (24.45)
(24.35) becomes

k = (1/ħ²)(K²/|R|⁶) ∫ dt ∫ dω e^{iωt} η(ω) ∫ dω′ e^{−iω′t} α(ω′)
  = (1/ħ²)(K²/|R|⁶) ∫ dω dω′ η(ω) α(ω′) 2π δ(ω − ω′)
  = (2π/ħ²)(K²/|R|⁶) ∫ dω η(ω) α(ω)
  = (2π/ħ²)(K²/|R|⁶) |μ_A|² |μ_D|² ∫ dω g_e(ω) g_a(ω). (24.46)

This is the famous rate expression for Förster [81] energy transfer which
involves the spectral overlap of donor emission and acceptor absorption
[87, 88]. For optimum efficiency of energy transfer, the maximum of the accep-
tor absorption should be at longer wavelength than the maximum of the donor
emission (Fig. 24.4).
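A small numerical sketch of (24.46) (an illustration, not part of the text): the overlap integral ∫ g_e g_a dω is evaluated for two area-normalized Gaussian lineshapes; the peak positions and widths are arbitrary example values.

import numpy as np

def gaussian(w, w0, sigma):
    """Area-normalized lineshape g(w)."""
    return np.exp(-(w - w0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

w = np.linspace(10000, 16000, 4001)     # wavenumber grid (cm^-1)
g_e = gaussian(w, 12200, 300)           # donor emission lineshape (example values)
g_a = gaussian(w, 12500, 300)           # acceptor absorption lineshape (example values)
overlap = np.trapz(g_e * g_a, w)        # spectral overlap entering (24.46), in cm
print(overlap)

# the Foerster rate (24.46) then scales as k ~ (2 pi / hbar^2) K^2 |mu_D|^2 |mu_A|^2 * overlap / R^6,
# i.e. with the familiar 1/R^6 distance dependence.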
Fig. 24.4. Energy transfer and spectral overlap. Efficient energy transfer requires
good spectral overlap of donor emission and acceptor absorption. This is normally
only the case for transfer between unlike molecules

24.6 Energy Transfer in the Triplet State


Consider now energy transfer in the triplet state. Here, the transitions are
optically not allowed. We consider a more general interaction operator which
changes the electronic state of both molecules simultaneously and can be
written as4
H′ = |D*⟩|A⟩ V_x ⟨A*|⟨D| + h.c. (24.47)
We start from the rate expression (24.32)

k = (1/ħ²) ∫ dt ⟨H′(0) H′(t)⟩. (24.48)
The thermal average has to be taken over the initially populated state |D*A⟩:

k = (1/ħ²) ∫ dt ⟨H′(0) H′(t)⟩_{D*A}.

In the static case (V_x = const),

k = (1/ħ²) ∫ dt ⟨H′ e^{itH_{DA*}/ħ} H′ e^{−itH_{D*A}/ħ}⟩_{D*A}
  = (1/ħ²) ∫ dt e^{i(ω_{A*}−ω_{D*})t} ⟨H′ e^{it(H_D+H_{A*})/ħ} H′ e^{−it(H_{D*}+H_A)/ħ}⟩_{D*A}
  = (V_x²/ħ²) ∫ dt e^{i(ω_{A*}−ω_{D*})t} ⟨e^{itH_D/ħ} e^{−itH_{D*}/ħ}⟩_{D*} ⟨e^{itH_{A*}/ħ} e^{−itH_A/ħ}⟩_A
  = (V_x²/ħ²) ∫ dt e^{it(ω_{A*}−ω_{D*})} F_{D*}(t) F_A(t) (24.49)

with the correlation functions

F_A(t) = ⟨e^{itH_{A*}/ħ} e^{−itH_A/ħ}⟩_A, (24.50)
F_{D*}(t) = ⟨e^{itH_D/ħ} e^{−itH_{D*}/ħ}⟩_{D*}. (24.51)
D∗
Introducing lineshape functions (24.42) and (24.44) similar to the excitonic
case, the rate becomes

k = (V_x²/ħ²) ∫ dt e^{it(ω_{A*}−ω_{D*})} ∫ dω′ e^{−i(ω′−ω_{D*})t} g_e(ω′) ∫ dω e^{i(ω−ω_{A*})t} g_a(ω)
  = (V_x²/ħ²) ∫ dω dω′ g_e(ω′) g_a(ω) 2π δ(ω − ω′)
  = (2πV_x²/ħ²) ∫ dω g_a(ω) g_e(ω), (24.52)

which is very similar to the Förster expression (24.46). The excitonic interac-
tion is replaced by the exchange coupling matrix element and the overlap of
the optical spectra is replaced by the overlap of the Franck–Condon-weighted
densities of states.
4
We assume that the wave function of the pair can be factorized approximately.
25
Coherent Excitations in Photosynthetic Systems

Fig. 25.1. Energy transfer in bacterial photosynthesis

Photosynthetic units of plants and bacteria consist of antenna complexes


and reaction centers. Rings of closely coupled chlorophyll chromophores form
the light-harvesting complexes which transfer the incoming photons very effi-
ciently and rapidly to the reaction center where the photon energy is used
to create an ion pair. In this chapter, we concentrate on the properties of
strongly coupled chromophore aggregates (Fig. 25.1).

25.1 Coherent Excitations


If the excitonic coupling is large compared to fluctuations of the excitation
energies, a coherent excitation of two or more molecules can be generated.

25.1.1 Strongly Coupled Dimers

Let us consider a dimer consisting of two strongly coupled molecules A and B


as in the reaction center of photosynthesis. The two excited states

|A∗ B, |AB∗  (25.1)

are mixed due to the excitonic interaction. The eigenstates are given by the
eigenvectors of the matrix

( E_{A*B}   V       )
( V         E_{AB*} ). (25.2)

For a symmetric dimer, the diagonal energies have the same value and the
eigenvectors can be characterized as symmetric or antisymmetric,

( E_{A*B}  V       ) ( ∓1/√2 )                    ( ∓1/√2 )
( V        E_{A*B} ) (  1/√2 )  =  (E_{A*B} ∓ V)  (  1/√2 ). (25.3)

The two excitonic states are split by 2V .1 The transition dipoles of the two
dimer bands are given by
μ_± = (1/√2)(μ_A ± μ_B) (25.4)

and the intensities are given by

|μ_±|² = (1/2)(μ_A² + μ_B² ± 2 μ_A μ_B). (25.5)

For a symmetric dimer, μ_A² = μ_B² = μ² and

|μ_±|² = μ²(1 ± cos α), (25.6)

where α denotes the angle between μA and μB . In case of an approximately


C2 -symmetric structure, the components of μA and μB are furthermore related
by symmetry operations (Fig. 25.2).
If we choose the C₂-axis along the z-axis, we have

(μ_Bx, μ_By, μ_Bz)ᵀ = (−μ_Ax, −μ_Ay, μ_Az)ᵀ (25.7)
¹ This is known as Davydov splitting.
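As a numerical illustration (not from the text), the following Python sketch diagonalizes the 2×2 exciton Hamiltonian (25.2) for a slightly asymmetric dimer and evaluates the band intensities (25.5); the site energies, coupling, and dipole angle are example values only.

import numpy as np

# assumed example parameters: site energies and coupling (cm^-1), angle between transition dipoles
E_A, E_B, V, alpha = 12000.0, 12100.0, 600.0, np.deg2rad(139.0)

H = np.array([[E_A, V],
              [V,   E_B]])
energies, C = np.linalg.eigh(H)          # exciton energies and mixing coefficients

mu_A = np.array([1.0, 0.0])              # unit transition dipoles in their common plane
mu_B = np.array([np.cos(alpha), np.sin(alpha)])
for E, c in zip(energies, C.T):
    mu = c[0] * mu_A + c[1] * mu_B       # transition dipole of the exciton state
    print(f"E = {E:8.1f} cm^-1, relative intensity |mu|^2 = {mu @ mu:.3f}")

# for E_A = E_B this reproduces the Davydov splitting 2V and the intensities 1 +/- cos(alpha)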

Fig. 25.2. The “special pair” dimer. Top: the nearly C2 -symmetric molecular
arrangement for the reaction center of Rhodopseudomonas viridis (molekel graphics
[89]). The transition dipoles of the two molecules (arrows) are essentially antiparallel.
Bottom: the lower exciton component carries most of the oscillator strength

and therefore

(1/√2)(μ_A + μ_B) = (1/√2) (0, 0, 2μ_Az)ᵀ,
(1/√2)(μ_A − μ_B) = (1/√2) (2μ_Ax, 2μ_Ay, 0)ᵀ, (25.8)
which shows that the transition to the state |+ is polarized along the sym-
metry axis whereas the transition to |− is polarized perpendicularly. For the
special pair dimer, interaction with internal charge transfer states |A+ B−  and
|A− B+  has to be considered. In the simplest model, the following interaction
matrix elements are important (Fig. 25.3).
The local excitation A∗ B is coupled to the CT state A+ B− by transferring
an electron between the two LUMOs

Fig. 25.3. Extended dimer model. The optical excitations A∗ B and AB∗ are coupled
to the ionic states A+ B− and A− B+

⟨A*B|H|A⁺B⁻⟩ = (1/2) ⟨(A*↑A↓ − A*↓A↑) B↑B↓ |H| (B*↑A↓ − B*↓A↑) B↑B↓⟩ = H_{A*,B*} = U_L (25.9)

and to the CT state A⁻B⁺ by transferring an electron between the HOMOs

⟨A*B|H|A⁻B⁺⟩ = (1/2) ⟨(A*↑A↓ − A*↓A↑) B↑B↓ |H| (A*↑B↓ − A*↓B↑) A↑A↓⟩ = −H_{A,B} = U_H. (25.10)

Similarly, the second local excitation couples to the CT states by

⟨AB*|H|A⁺B⁻⟩ = (1/2) ⟨(B*↑B↓ − B*↓B↑) A↑A↓ |H| (B*↑A↓ − B*↓A↑) B↑B↓⟩ = −H_{A,B} = U_H, (25.11)

⟨AB*|H|A⁻B⁺⟩ = (1/2) ⟨(B*↑B↓ − B*↓B↑) A↑A↓ |H| (A*↑B↓ − A*↓B↑) A↑A↓⟩ = H_{A*,B*} = U_L. (25.12)

The interaction of the four states is summarized by the matrix

H = [ E_{A*B}   V         U_L        U_H
      V         E_{AB*}   U_H        U_L
      U_L       U_H       E_{A⁺B⁻}   0
      U_H       U_L       0          E_{A⁻B⁺} ]. (25.13)
Again for a symmetric dimer, E_{A*B} = E_{AB*} and E_{A⁺B⁻} = E_{A⁻B⁺}, and the
interaction matrix can be simplified by transforming to symmetrized basis
functions with the transformation matrix

Fig. 25.4. Dimer states

S = (1/√2) [  1   1   0   0
             −1   1   0   0
              0   0   1   1
              0   0  −1   1 ]. (25.14)

The transformation gives

S⁻¹ H S = [ E_* − V     0           U_L − U_H   0
            0           E_* + V     0           U_L + U_H
            U_L − U_H   0           E_CT        0
            0           U_L + U_H   0           E_CT      ], (25.15)

where the states of different symmetry are decoupled (Fig. 25.4):

H_+ = ( E_* + V     U_L + U_H )        H_− = ( E_* − V     U_L − U_H )
      ( U_L + U_H   E_CT      ),             ( U_L − U_H   E_CT      ). (25.16)

25.1.2 Excitonic Structure of the Reaction Center

The reaction center of bacterial photosynthesis consists of six chromophores.


Two bacteriochlorophyll molecules (PL , PM ) form the special pair dimer.
Another bacteriochlorophyll (BL ) and a bacteriopheophytine (HL ) act as the
acceptors during the first electron transfer steps. Both have symmetry-related
counterparts (BM , HM ) which are not directly involved in the charge separa-
tion process. Due to the short neighbor distances (11–13 Å), delocalization of
the optical excitation has to be considered to understand the optical spec-
tra. Whereas the dipole–dipole approximation is not applicable to the strong
coupling of the dimer chromophores, it has been used to estimate the remain-
ing excitonic interactions in the reaction center. The strongest couplings are
expected for the pairs PL BL , BL HL , PM BM , BM HM due to favorable distances
and orientational factors (Fig. 25.5; Table 25.1).
Starting from a system with full C2 -symmetry the excitations again can
be classified as symmetric or antisymmetric. The lowest dimer band interacts

[Layout of the six chromophores: PM and PL (7.4 Å apart) at the top, BM and BL in the middle, HM and HL at the bottom; the other center–center distances shown are 10.8, 10.5, 12.9, 12.5, 10.6, and 10.7 Å.]

Fig. 25.5. Transition dipoles of the reaction center Rps. viridis [90–93]. Arrows
show the transition dipoles of the isolated chromophores. Center–center distances
(as defined by the nitrogen atoms) are given in Å

Table 25.1. Excitonic couplings for the reaction center of Rps. viridis

        HM      BM      PM      PL      BL      HL
HM       –      160      33      −9     −11       6
BM      160      –     −168     −37      29     −11
PM       33    −168      –      770     −42      −7
PL       −9     −37     770      –     −189      35
BL      −11      29     −42    −189      –      167
HL        6     −11      −7      35     167       –

The matrix elements were calculated with the dipole–dipole approximation for
μ² = 45 Debye² for bacteriochlorophyll and 30 Debye² for bacteriopheophytine [94].
All values in cm⁻¹.

with the antisymmetric combinations of B and H excitations. Due to the


differences of excitation energies, this leads only to a small amount of state
mixing. For the symmetric states, the situation is quite different as the upper
dimer band is close to the B excitation (Fig. 25.6).
If the symmetry is disturbed by structural differences or interactions with
the protein, excitations of different symmetry character interact. Qualitatively,
we expect that the lowest excitation is essentially the lower dimer band, the
highest band2 reflects absorption from the pheophytines and in the region of
the B absorption, we expect mixtures of the B∗ excitations and the upper
dimer band.

25.1.3 Circular Molecular Aggregates


We consider now a circular aggregate of N -chromophores as it is found in the
light-harvesting complexes of photosynthesis [96–98] (Figs. 25.7 and 25.8).
We align the CN -symmetry axis along the z-axis. The position of the nth
molecule is
R_n = R (cos(2πn/N), sin(2πn/N), 0)ᵀ,   n = 0, 1, . . . , N − 1, (25.17)
2
in the Qy region

[Level scheme of the Q_y excitations between 10,000 and 16,000 cm⁻¹: P*(−) lowest, followed by B*(±) and P*(+), with H*(±) and the broad PCT*(±) states at higher energy.]
Fig. 25.6. Excitations of the reaction center. The absorption spectrum of Rps.
viridis in the Qy region is assigned on the basis of symmetric excitonic excita-

tions [95]. The charge transfer states PCT are very broad and cannot be observed
directly

Fig. 25.7. Circular aggregate

which can also be written with the help of a rotation matrix

S_N = [ cos(2π/N)   −sin(2π/N)   0
        sin(2π/N)    cos(2π/N)   0
        0            0           1 ] (25.18)

as

R_n = S_N^n R_0,   R_0 = (R, 0, 0)ᵀ. (25.19)
Similarly, the transition dipoles are given by

μ_n = S_N^n μ_0. (25.20)

The component parallel to the symmetry axis is the same for all monomers,

μ_{n,z} = μ_∥, (25.21)

Fig. 25.8. Light-harvesting complex. The light-harvesting complex from Rhodopseu-


domonas acidophila (structure 1kzu [99–103] from the protein data bank [104, 105])
consists of a ring of 18 closely coupled (shown in red and blue) and another ring of 9
less strongly coupled bacteriochlorophyll molecules (green). Rasmol graphics [106]

whereas for the component in the perpendicular plane

( μ_{n,x} )   ( cos(2πn/N)   −sin(2πn/N) ) ( μ_{0,x} )
( μ_{n,y} ) = ( sin(2πn/N)    cos(2πn/N) ) ( μ_{0,y} ). (25.22)

We describe the orientation of μ_0 in the x–y plane by the angle φ:

( μ_{0,x} )        ( cos φ )
( μ_{0,y} ) = μ_⊥  ( sin φ ). (25.23)

Then we have (Fig. 25.9)

(μ_{n,x}, μ_{n,y}, μ_{n,z})ᵀ = (μ_⊥ cos(φ + 2πn/N), μ_⊥ sin(φ + 2πn/N), μ_∥)ᵀ. (25.24)

Fig. 25.9. Orientation of the transition dipoles

We denote the local excitation of the nth molecule by

|n⟩ = |A₀ A₁ · · · A*_n · · · A_{N−1}⟩. (25.25)

Due to the symmetry of the system, the excitonic interaction is invariant


against the SN rotation and therefore

⟨m|V|n⟩ = ⟨m − n|V|0⟩ = ⟨0|V|n − m⟩. (25.26)

Without a magnetic field, the coupling matrix elements can be chosen real
and depend only on |m − n|. Within the dipole–dipole approximation, we
have furthermore3

V_{|m−n|} = ⟨m|V|n⟩ (25.27)
          = [e²/(4πε|R_{mn}|⁵)] (|R_{mn}|² μ_m μ_n − 3(R_{mn}μ_m)(R_{mn}μ_n)). (25.28)

The interaction matrix has the form (Table 25.2)

H = [ E_0     V_1     V_2     ···     V_2     V_1
      V_1     E_1     V_1     ···     V_3     V_2
      V_2     V_1     ⋱                       ⋮
      ⋮                       ⋱               V_2
      V_2     V_3                     ⋱       V_1
      V_1     V_2     ···     V_2     V_1     E_{N−1} ]. (25.29)

The excitonic wave functions are easily constructed as


|k⟩ = (1/√N) Σ_{n=0}^{N−1} e^{ikn} |n⟩ (25.30)

3
A more realistic description based on a semiempirical INDO/S method is given
by [107].

Table 25.2. Excitonic couplings

|m − n| V|m−n| (cm−1 )
1 −352.5
2 −43.4
3 −12.6
4 −5.1
5 −2.6
6 −1.5
7 −0.9
8 −0.7
9 −0.6
The matrix elements are
calculated with the dipole–
dipole approximation (25.27)
for R = 26.5Å, φ0 = 66◦ , and
μ2 = 37 Debye2

with

k = (2π/N) l,   l = 0, 1, . . . , N − 1, (25.31)

⟨k′|H|k⟩ = (1/N) Σ_{n=0}^{N−1} Σ_{n′=0}^{N−1} e^{ikn} e^{−ik′n′} ⟨n′|H|n⟩
         = (1/N) Σ_{n=0}^{N−1} Σ_{n′=0}^{N−1} e^{ikn} e^{−ik′n′} H_{|n−n′|}. (25.32)

We substitute

m = n − n′ (25.33)

to get

⟨k′|H|k⟩ = (1/N) Σ_{n=0}^{N−1} Σ_{m=n−N+1}^{n} e^{i(k−k′)n + ik′m} H_{|m|}
         = δ_{k,k′} Σ_{m=0}^{N−1} e^{ik′m} H_{|m|}
         = δ_{k,k′} (E_0 + 2V_1 cos k + 2V_2 cos 2k + · · ·). (25.34)

For even N , the lowest and highest states are not degenerate whereas for all
of the other states (Fig. 25.10)

E_k = E_{N−k} = E_{−k}. (25.35)
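The dispersion relation (25.34) can be checked numerically; the following Python sketch (an illustration, not from the text) builds the circulant matrix (25.29) from the couplings of Table 25.2, with the common site energy set to zero as a reference, and compares its eigenvalues with E₀ + Σ_m 2V_m cos(mk).

import numpy as np

N = 18
V = {1: -352.5, 2: -43.4, 3: -12.6, 4: -5.1, 5: -2.6, 6: -1.5, 7: -0.9, 8: -0.7, 9: -0.6}
E0 = 0.0                                      # common site energy (assumed reference value)

H = np.zeros((N, N))                          # circulant exciton Hamiltonian of the ring
for n in range(N):
    for m in range(n + 1, N):
        d = min(m - n, N - (m - n))           # chromophore separation along the ring
        H[n, m] = H[m, n] = V[d]
np.fill_diagonal(H, E0)

E_exact = np.sort(np.linalg.eigvalsh(H))
k = 2 * np.pi * np.arange(N) / N
E_analytic = E0 + sum(2 * Vm * np.cos(m * k) for m, Vm in V.items() if m < N // 2) \
             + V[N // 2] * np.cos((N // 2) * k)   # the antipodal coupling enters only once
print(np.allclose(E_exact, np.sort(E_analytic)))  # True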


Fig. 25.10. Exciton dispersion relation. The figure shows the case of negative
excitonic interaction for which the k = 0 state is the lowest in energy

The transition dipoles of the k-states are given by

μ_k = (1/√N) Σ_{n=0}^{N−1} e^{ikn} μ_n = (1/√N) Σ_{n=0}^{N−1} e^{ikn} S_N^n μ_0
    = (1/√N) Σ_{n=0}^{N−1} e^{ikn} ( μ_x cos(2πn/N) + μ_y sin(2πn/N),
                                      μ_y cos(2πn/N) − μ_x sin(2πn/N),  μ_z )ᵀ. (25.36)

For the z-component, we have (Fig. 25.11)

(1/√N) Σ_{n=0}^{N−1} μ_z e^{ikn} = √N μ_z δ_{k,0}. (25.37)

For the component in the x, y-plane, we introduce complex polarization
vectors corresponding to circularly polarized light:

μ_{k,±} = (1/√2, ±i/√2, 0) μ_k
        = (1/√N) Σ_{n=0}^{N−1} e^{ikn} (1/√2, ±i/√2, 0)
          [ cos(2πn/N)  −sin(2πn/N)  0 ;  sin(2πn/N)  cos(2πn/N)  0 ;  0  0  1 ] (μ_x, μ_y, μ_z)ᵀ
        = (1/√N) Σ_{n=0}^{N−1} e^{ikn} ( (1/√2) e^{±i2πn/N}, ±(i/√2) e^{±i2πn/N}, 0 ) (μ_x, μ_y, μ_z)ᵀ
        = (1/√N) Σ_{n=0}^{N−1} e^{i(k±2π/N)n} (1/√2, ±i/√2) (μ_x, μ_y)ᵀ
        = √N δ_{k,∓2π/N} μ_±   with   μ_± = (1/√2)(μ_x ± iμ_y) = (1/√2) μ_⊥ e^{±iφ}. (25.38)


Fig. 25.11. Allowed optical transitions. The figure shows the case of negative
excitonic interaction where the lowest three exciton states carry intensity

Linearly polarized light with a polarization vector

(cos Θ, sin Θ, 0) = (1/√2) e^{iΘ} (1/√2, −i/√2, 0) + (1/√2) e^{−iΘ} (1/√2, i/√2, 0) (25.39)

couples to a linear combination of the k = ±2π/N states since

μ_{k,Θ} = √(N/2) μ_⊥ (e^{i(Φ−Θ)} δ_{k,−2π/N} + e^{−i(Φ−Θ)} δ_{k,2π/N}). (25.40)

The absorption strength does not depend on the polarization angle Θ:

|μ_{−2π/N,Θ}|² + |μ_{2π/N,Θ}|² = N μ_⊥². (25.41)

25.1.4 Dimerized Systems of LH2


The light-harvesting complex LH2 of Rps. acidophila contains a ring of nine
weakly coupled chlorophylls and another ring of nine stronger coupled chloro-
phyll dimers. The two units forming a dimer will be denoted as α, β and the
number of dimers as N (Fig. 25.12).
The transition dipole moments are

μ_{n,α} = μ (sin θ_α cos(2πn/N − ν + φ_α), sin θ_α sin(2πn/N − ν + φ_α), cos θ_α)ᵀ
        = μ S_N^n R_z(−ν + φ_α) (sin θ_α, 0, cos θ_α)ᵀ,

μ_{n,β} = μ (sin θ_β cos(2πn/N + ν + φ_β), sin θ_β sin(2πn/N + ν + φ_β), cos θ_β)ᵀ
        = μ S_N^n R_z(+ν + φ_β) (sin θ_β, 0, cos θ_β)ᵀ. (25.42)


Fig. 25.12. BChl arrangement for the LH2 complex of Rps. acidophila [99–103].
Arrows represent the transition dipole moments of the BChl molecules as calcu-
lated on the 631G/HF-CI level. Distances are with respect to the center of the four
nitrogen atoms. Rα = 26.1 Å, Rβ = 26.9 Å, 2ν = 20.7◦ , φα = 70.5◦ , φβ = −117.7◦ ,
θα = 83.7◦ , and θβ = 80.3◦

For a circle with C_{2N}-symmetric positions (2ν = π/N, θ_α = θ_β) and
alternating transition dipole directions (φ_β = π + φ_α), we find the relation

μ_{n,β} = R_z(2ν + φ_β − φ_α) μ_{n,α} = R_z(π/N + π) μ_{n,α}. (25.43)

The experimental value of

2ν + φ_β − φ_α = 208.9° (25.44)

is in fact very close to the value of 180° + 20° (or π + π/N).
We have to distinguish the following interaction matrix elements between
two monomers in different unit cells (Figs. 25.13; Table 25.3):
Vα,α,|m| , Vα,β,m = Vβ,α,−m , Vα,β,−m = Vβ,α,m , Vβ,β,|m| . (25.45)
The interaction matrix of one dimer is

( E_α     V_dim )
( V_dim   E_β   ). (25.46)
The wave function has to be generalized as

|k, s⟩ = (1/√N) Σ_{n=0}^{N−1} e^{ikn} (C_{sα}|nα⟩ + C_{sβ}|nβ⟩), (25.47)

⟨k′s′|H|ks⟩ = (1/N) Σ_{n,n′} e^{i(kn−k′n′)} ( C*_{s′α} C_{sα} H_{αα|n−n′|}
            + C*_{s′β} C_{sβ} H_{ββ|n−n′|} + C*_{s′α} C_{sβ} H_{αβ(n−n′)} + C*_{s′β} C_{sα} H_{βα(n−n′)} ),

Fig. 25.13. Couplings in the dimerized ring

Table 25.3. Excitonic couplings for the LH2 complex of Rps. acidophila

n Vαβn Vβαn Vααn Vββn


0 356.1 356.1
1 13.3 324.5 −49.8 −35.0
2 2.8 11.8 −6.1 −4.0
3 1.0 2.4 −1.8 −1.0
4 0.7 0.9 −0.9 −0.4
The matrix elements were calculated within the
dipole–dipole approximation for the structure shown
in Fig. 25.12 for μ2 = 37 Debye2 . All values in cm−1

            = (1/N) Σ_{n=0}^{N−1} Σ_{m=0}^{N−1} (C*_{s′α}, C*_{s′β}) e^{i(k−k′)n + ik′m}
              ( H_{αα|m|}   H_{αβ m} ) ( C_{sα} )
              ( H_{βα m}    H_{ββ|m|} ) ( C_{sβ} )
            = δ_{k,k′} (C*_{s′α}, C*_{s′β})
              ( Σ_m e^{ikm} H_{αα|m|}   Σ_m e^{ikm} H_{αβ m} ) ( C_{sα} )
              ( Σ_m e^{ikm} H_{βα m}    Σ_m e^{ikm} H_{ββ m} ) ( C_{sβ} ). (25.48)

The coefficients C_{sα} and C_{sβ} are determined by diagonalization of the matrix

H_k = ( E_α + 2V_{αα,1} cos k + ···                         V_dim + e^{ik} V_{αβ,1} + e^{−ik} V_{αβ,−1} + ···
        V_dim + e^{−ik} V_{αβ,1} + e^{ik} V_{αβ,−1} + ···   E_β + 2V_{ββ,1} cos k + ··· ). (25.49)

If we consider only interactions between nearest neighbors, this simplifies to

H_k = ( E_α                V_dim + e^{−ik} W )
      ( V_dim + e^{ik} W   E_β              )     with W = V_{αβ,−1}. (25.50)

In the following, we discuss the limit E_α = E_β. The general case is discussed
in the Problems section. The eigenvectors of

H_k = ( E_α                V_dim + e^{−ik} W )
      ( V_dim + e^{ik} W   E_α              )

are given by

(1/√2) (1, ±e^{iχ})ᵀ, (25.51)

where the angle χ is chosen such that

V + e^{ik} W = U(k) e^{iχ} (25.52)

with

U(k) = sign(V) |V + e^{ik} W|

and the eigenvalues are

E_{k,±} = E_α ± U(k) = E_α ± sign(V) √(V_dim² + W² + 2V_dim W cos k). (25.53)
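A short numerical illustration (not from the text): for each k the 2×2 Bloch matrix (25.50) can be diagonalized directly; the couplings below are the nearest-neighbor values of Table 25.3, and the common site energy is set to zero as a reference.

import numpy as np

N = 9
E_alpha = E_beta = 0.0                     # common site energy (assumed reference value)
V_dim, W = 356.1, 324.5                    # cm^-1; W = V_alpha-beta,-1 = V_beta-alpha,1 (Table 25.3)

for kk in 2 * np.pi * np.arange(N) / N:
    Hk = np.array([[E_alpha, V_dim + np.exp(-1j * kk) * W],
                   [V_dim + np.exp(1j * kk) * W, E_beta]])
    E = np.linalg.eigvalsh(Hk)
    U = np.sqrt(V_dim**2 + W**2 + 2 * V_dim * W * np.cos(kk))   # compare with (25.53)
    assert np.allclose(np.sort(E), np.sort([E_alpha - U, E_alpha + U]))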

The transition dipoles follow from

μ_{k,±} = (1/√(2N)) Σ_{n=0}^{N−1} e^{ikn} (μ_{n,α} ± e^{iχ} μ_{n,β})
        = (μ/√(2N)) Σ_{n=0}^{N−1} e^{ikn} S_N^n [ (sin θ_α cos(−ν+φ_α), sin θ_α sin(−ν+φ_α), cos θ_α)ᵀ
          ± e^{iχ} (sin θ_β cos(ν+φ_β), sin θ_β sin(ν+φ_β), cos θ_β)ᵀ ]. (25.54)

Similar selection rules as for the simple ring system follow for the first factor:

(0, 0, 1) μ_{k,±} = (μ/√(2N)) Σ_{n=0}^{N−1} e^{ikn} (cos θ_α ± e^{iχ} cos θ_β)
                  = δ_{k,0} √(N/2) μ (cos θ_α ± cos θ_β), (25.55)

(1/√2, i/√2, 0) μ_{k,±} = (1/√(2N)) Σ_{n=0}^{N−1} e^{i(k±2π/N)n} (1, i, 0) (μ_{α,1} ± e^{iχ} μ_{β,1})
                        = δ_{k,∓2π/N} √(N/2) μ (sin θ_α e^{i(φ_α−ν)} ± e^{iχ} sin θ_β e^{i(φ_β+ν)}), (25.56)

(1/√2, −i/√2, 0) μ_{k,±} = (1/√(2N)) Σ_{n=0}^{N−1} e^{i(k±2π/N)n} (1, −i, 0) (μ_{α,1} ± e^{iχ} μ_{β,1})
                         = δ_{k,∓2π/N} √(N/2) μ (sin θ_α e^{i(φ_α−ν)} ∓ e^{iχ} sin θ_β e^{i(φ_β+ν)}). (25.57)
The second factor determines the distribution of intensity among the + and −
states.
In the limit of 2N equivalent molecules, V = W, 2ν + φ_β − φ_α = π + π/N,
and θ_α = θ_β, and we have

V + e^{ik} W = (1 + e^{ik}) V = e^{ik/2} (e^{−ik/2} + e^{+ik/2}) V = e^{ik/2} 2V cos(k/2), (25.58)

hence

χ = k/2,   U(k) = 2V cos(k/2), (25.59)

and (25.56) becomes

(1/√2, i/√2, 0) μ_{k,±} = δ_{k,∓2π/N} √(N/2) μ sin θ e^{i(φ_α−ν)} (1 ∓ e^{i(k/2+π/N)}),

which is zero for the upper case (+ states) and

√(2N) δ_{k,2π/N} μ sin θ e^{i(φ_α−ν)} (25.60)

for the (−) states. Similarly, (25.57) becomes

(1/√2)(μ_{k,x} − iμ_{k,y}) = √(2N) δ_{k,+2π/N} μ sin θ e^{−i(φ_α−ν)} (25.61)

for the − states and zero for the + states. (For positive V, the + states are
higher in energy than the − states.) For the z-component, the selection rule
of k = 0 implies (Fig. 25.14)

μ_{z,+} = √(2N) δ_{k,0} μ cos θ,   μ_{z,−} = 0. (25.62)

In the LH2 complex, the transition dipoles of the two dimer halves are
nearly antiparallel. The oscillator strengths of the N molecules are concen-
trated in the degenerate next to lowest transitions k = ±2π/N . This means
that the lifetime of the optically allowed states will be reduced compared
to the radiative lifetime of a monomer. In the LH2 complex, the transi-
tion dipoles have only a very small z-component. Therefore in a perfectly

Fig. 25.14. Dispersion relation of the dimer ring

symmetric structure, the lowest (k = 0) state is almost forbidden and has


a longer lifetime than the optically allowed k = ±2π/N states. Due to the
degeneracy of the k = ±2π/N states, the absorption of photons coming along
the symmetry axis does not depend on the polarization.

25.2 Influence of Disorder

So far, we considered a perfectly symmetrical arrangement of the chro-


mophores. In reality, there exist deviations due to the protein environment and
to low-frequency nuclear motion which leads to variations of the site energies,
the coupling matrix elements, and the transition dipoles. Experimentally, a
pair of mutually perpendicular states has been observed which were assigned
to the strong k = ±2π/N states [96, 108]. This can be only understood if the
degeneracy is lifted by some symmetry-breaking perturbation which gives as
zero-order eigenstates linear combinations of the k = ±2π/N states (25.40).
In the following, we neglect the effects of dimerization and study a ring of
N -equivalent chromophores.

25.2.1 Symmetry-Breaking Local Perturbation

We consider perturbation of the symmetry by a specific local interaction, for


instance due to an additional hydrogen bond at one site. We assume that the
excitation energy at the site n0 = 0 is modified by a small amount δE. The
Hamiltonian

H = Σ_{n=0}^{N−1} |n⟩ E₀ ⟨n| + |0⟩ δE ⟨0| + Σ_{n=0}^{N−1} Σ_{n′=0, n′≠n}^{N−1} |n⟩ V_{nn′} ⟨n′| (25.63)

is transformed to the basis of k-states (25.30) to give

⟨k′|H|k⟩ = (1/N) Σ_{n,n′} e^{−ik′n′+ikn} ⟨n′|H|n⟩
         = (1/N) Σ_n e^{i(k−k′)n} E₀ + δE/N + (1/N) Σ_{n′≠n} e^{−ik′n′+ikn} V_{nn′}
         = δ_{k,k′} E_k + δE/N. (25.64)
Obviously, the energy of all k-states is shifted by the same amount:

⟨k|H|k⟩ = E_k + δE/N. (25.65)

The degeneracy of the pairs |±k⟩ is removed due to the interaction

⟨k|H|−k⟩ = δE/N. (25.66)
The zero-order eigenstates are the linear combinations

|k_±⟩ = (1/√2)(|k⟩ ± |−k⟩) (25.67)

with zero-order energies (Fig. 25.15)

E(k) = E_k + δE/N (nondegenerate states),
E(k_−) = E_k,   E(k_+) = E_k + 2δE/N (degenerate states). (25.68)
Only the symmetric states are affected by the perturbation. The interaction
between degenerate pairs is given by the matrix elements


Fig. 25.15. Local perturbation of the symmetry. This figure shows the case of
δE > 0

⟨k′_+|H|k_+⟩ = δ_{k,k′} (E_k + 2δE/N),
⟨k′_−|H|k_−⟩ = δ_{k,k′} E_k,
⟨k′_+|H|k_−⟩ = 0, (25.69)

and the coupling to the nondegenerate k = 0 state is

⟨0|H|k_+⟩ = √2 δE/N,
⟨0|H|k_−⟩ = 0. (25.70)

Optical transitions to the |1_±⟩ states are linearly polarized with respect to
perpendicular axes. First-order intensity is transferred from the |1_+⟩ state to
the |0⟩ and the other |k_+⟩ states, and hence the |1_−⟩ state absorbs more strongly
than the |1_+⟩ state, which is approximately

|1_+⟩ + [√2 δE/(N(E_0 − E_1))] |0⟩ + [2δE/(N(E_2 − E_1))] |2_+⟩ + · · · (25.71)

Its intensity is reduced by a factor of

1 / [ 1 + (δE²/N²) ( 2/(E_0 − E_1)² + 4/(E_2 − E_1)² + · · · ) ]. (25.72)

25.2.2 Periodic Modulation

The local perturbation (25.64) has Fourier components

H′(Δk) = ⟨k|H′|k + Δk⟩ = (1/N) Σ_n e^{inΔk} δE_n (25.73)

for all possible values of Δk. In this section, we discuss the most impor-
tant Fourier components separately. These are the Δk = ±4π/N components
which mix the k = ±2π/N states and the Δk = ±2π/N components which
couple the k = ±2π/N to the k = 0, ±4π/N states and thus redistribute the
intensity of the allowed states.
A general modulation of the diagonal energies with Fourier components
Δk = ±2π/N is given by

δEn = δE cos(χ0 + 2nπ/N ) (25.74)

and its matrix elements are

H′(Δk) = (e^{iχ₀} δE/2) δ_{Δk,−2π/N} + (e^{−iχ₀} δE/2) δ_{Δk,2π/N}. (25.75)

Transformation to linear combinations of the degenerate pairs

|k, +⟩ = (1/√2)(e^{iχ₀} |−k⟩ + e^{−iχ₀} |k⟩),   k > 0,
|k, −⟩ = (1/√2)(−e^{−iχ₀} |−k⟩ + e^{iχ₀} |k⟩),   k > 0 (25.76)

leads to two decoupled sets of states with

⟨0|H′|k, +⟩ = (δE/√2) cos(2χ₀) δ_{k,2π/N},
⟨0|H′|k, −⟩ = 0,
⟨k, −|H′|k′, +⟩ = 0,
⟨k, −|H′|k′, −⟩ = (δE/2) cos χ₀ (δ_{k′−k,2π/N} + δ_{k′−k,−2π/N}),
⟨k, +|H′|k′, +⟩ = (δE/2) cos(3χ₀) (δ_{k′−k,2π/N} + δ_{k′−k,−2π/N}).
The |k = 2π/N, ±⟩ states are linearly polarized (Fig. 25.16):

μ_{2π/N,+,Θ} = (e^{iχ₀}/√2) μ_{−2π/N,Θ} + (e^{−iχ₀}/√2) μ_{2π/N,Θ} = √N μ_⊥ cos(Φ + χ₀ − Θ),
μ_{2π/N,−,Θ} = −(e^{−iχ₀}/√2) μ_{−2π/N,Θ} + (e^{iχ₀}/√2) μ_{2π/N,Θ} = i√N μ_⊥ sin(Φ + χ₀ − Θ). (25.77)

Let us now consider a C2 -symmetric modulation [109]

δEn = δE cos(χ0 + 4πn/N ) (25.78)


Fig. 25.16. C1 -symmetric perturbation. Modulation of the diagonal energies by the


perturbation δEn = δE cos(χ0 + 2πn/N ) does not mix the |k, + and |k, − states.
In first order, the allowed k = ±2π/N states split into two states |k = 2π/N, ±
with mutually perpendicular polarization and intensity is transferred to the |k = 0
and |k = 4π/N, ± states


Fig. 25.17. C2 -symmetric perturbation. Modulation of the diagonal energies by


the perturbation δEn = δE cos(χ0 + 4πn/N ) splits the |k = ±2π/N  states in zero
order into a pair with equal intensity and mutually perpendicular polarization

with matrix elements

⟨k|H′|k′⟩ = (δE/2) (e^{iχ₀} δ_{k′−k,−4π/N} + e^{−iχ₀} δ_{k′−k,4π/N}). (25.79)
The |k = ±2π/N⟩ states are mixed to give the zero-order states

|2π/N, +⟩ = (1/√2)(|−2π/N⟩ + e^{iχ₀} |2π/N⟩),
|2π/N, −⟩ = (1/√2)(|−2π/N⟩ − e^{iχ₀} |2π/N⟩), (25.80)
which are again linearly polarized (Fig. 25.17).
A perturbation of this kind could be due to an elliptical deformation of
the ring [109], but also to interaction with a static electric field, which is given
up to second order by
δE_n = −p_n E + (1/2) Eᵗ α_n E, (25.81)
where the permanent dipole moments are given by an expression similar⁴ to
(25.24),

p_n = S_N^n p_0 = (p_⊥ cos(Φ + 2πn/N), p_⊥ sin(Φ + 2πn/N), p_∥)ᵀ, (25.82)
and the polarizabilities transform as
α_n = S_N^n α_0 S_N^{−n}. (25.83)

With the field vector

E = (E_⊥ cos ξ, E_⊥ sin ξ, E_∥)ᵀ, (25.84)
4
The angle Φ is different for permanent and transition dipoles.

we find

−p_n E = −p_⊥ E_⊥ cos(Φ + 2πn/N − ξ) (25.85)

and

(1/2) Eᵗ α_n E = (1/2) (Eᵗ S_N^n) α_0 (S_N^{−n} E)
             = (E_⊥²/2) {α_xx cos²(2πn/N − ξ) + α_yy sin²(2πn/N − ξ)
               + 2α_xy sin(2πn/N − ξ) cos(2πn/N − ξ)}
             + E_∥ E_⊥ {α_xz cos(2πn/N − ξ) + α_yz sin(2πn/N − ξ)}
             + (E_∥²/2) α_zz. (25.86)
2
The quadratic term has Fourier components Δk = ±4π/N and can
therefore act as an elliptic perturbation.
Let us assume that the polarizability is induced by the coupling to charge
transfer states like in the special pair dimer. Due to the molecular arrange-
ment, the polarizability will be zero along the CN -symmetry axis and will
have a maximum along a direction in the xy-plane. We denote the angle of
this direction by η. The polarizability α0 simplifies to
⎛ ⎞
α⊥ cos2 η α⊥ sin η cos η 0
α0 = ⎝ α⊥ sin η cos η α⊥ sin2 η 0 ⎠ (25.87)
0 0 0

and the second-order term


(1/2) Eᵗ α_n E = (E_⊥² α_⊥/4) {1 + cos(2η − 2ξ + 4πn/N) + cos(−2η − 2ξ + 4πn/N)} (25.88)

now has only Fourier components Δk = 0, ±4π/N .

25.2.3 Diagonal Disorder

Let us now consider a static distribution of site energies [110–112]. The
Hamiltonian

H = Σ_{n=0}^{N−1} |n⟩ (E₀ + δE_n) ⟨n| + Σ_{n=0}^{N−1} Σ_{n′=0, n′≠n}^{N−1} |n⟩ V_{nn′} ⟨n′| (25.89)

contains energy shifts δE_n which are assumed to have a Gaussian distribution
function:

P(δE_n) = (1/(Δ√π)) exp(−δE_n²/Δ²). (25.90)

Transforming to the delocalized states, the Hamiltonian becomes

⟨k′|H|k⟩ = (1/N) Σ_{n,n′} e^{−ik′n′+ikn} ⟨n′|H|n⟩
         = (1/N) Σ_n e^{i(k−k′)n} (E₀ + δE_n) + (1/N) Σ_{n′≠n} e^{−ik′n′+ikn} V_{nn′}
         = δ_{k,k′} E_k + (1/N) Σ_n e^{i(k−k′)n} δE_n (25.91)

and due to the disorder the k-states are mixed. The energy change of a non-
degenerate state (for instance k = 0) is in lowest-order perturbation theory
given by the diagonal matrix element
δE_k = (1/N) Σ_n δE_n (25.92)

as the average of the local energy fluctuations. Obviously, the average is zero

⟨δE_k⟩ = ⟨δE_n⟩ = 0 (25.93)

and

⟨δE_k²⟩ = (1/N²) Σ_{n,n′} ⟨δE_n δE_{n′}⟩. (25.94)

If the fluctuations of different sites are uncorrelated,

⟨δE_n δE_{n′}⟩ = δ_{n,n′} ⟨δE²⟩, (25.95)

the width of the k-state

⟨δE_k²⟩ = ⟨δE²⟩/N

is smaller by a factor 1/√N.⁵ If the fluctuations are fully correlated, on the
other hand, the k-states have the same width as the site energies.
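A quick numerical illustration of this motional (exchange) narrowing (not part of the text): sample uncorrelated Gaussian site energies, diagonalize the ring Hamiltonian, and compare the spread of the lowest (k = 0) level with the site-energy width; the ring size, disorder width, and nearest-neighbor coupling below are example values.

import numpy as np

rng = np.random.default_rng(0)
N, sigma_site, V1 = 18, 100.0, -352.5          # assumed example values (cm^-1)

def ring_hamiltonian(dE):
    H = np.diag(dE)
    for n in range(N):
        H[n, (n + 1) % N] = H[(n + 1) % N, n] = V1   # nearest-neighbor coupling only
    return H

lowest = [np.linalg.eigvalsh(ring_hamiltonian(rng.normal(0.0, sigma_site, N)))[0]
          for _ in range(2000)]
print(np.std(lowest), sigma_site / np.sqrt(N))   # disorder-averaged width vs. sigma/sqrt(N)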
For the pairs of degenerate states ±k, we have to consider the secular
matrix (Δk = k − k′ = 2k)

H₁ = ( (1/N) Σ_n δE_n              (1/N) Σ_n e^{inΔk} δE_n )
     ( (1/N) Σ_n e^{−inΔk} δE_n    (1/N) Σ_n δE_n          ), (25.96)

which has the eigenvalues

δE_{±k} = (1/N) Σ_n δE_n ± (1/N) √( Σ_n e^{inΔk} δE_n · Σ_{n′} e^{−in′Δk} δE_{n′} ). (25.97)

5
This is known as exchange or motional narrowing.

Obviously, the average is not affected,

⟨(δE_{+k} + δE_{−k})/2⟩ = 0, (25.98)

and the width is given by

⟨(δE_{+k}² + δE_{−k}²)/2⟩ = (1/N²) Σ_{nn′} ⟨δE_n δE_{n′}⟩ + (1/N²) Σ_{nn′} e^{i(n−n′)Δk} ⟨δE_n δE_{n′}⟩
                          = 2 ⟨δE²⟩/N. (25.99)

25.2.4 Off-Diagonal Disorder

Consider now fluctuations also of the coupling matrix elements, for instance
due to fluctuations of orientation and distances. The Hamiltonian contains an
additional perturbation

H′_{kk′} = (1/N) Σ_{nn′} e^{−ik′n′+ikn} δV_{nn′}. (25.100)

For uncorrelated fluctuations with the properties⁶

⟨δV_{nn′}⟩ = 0, (25.101)
⟨δV_{nn′} δV_{mm′}⟩ = (δ_{nm} δ_{n′m′} + δ_{nm′} δ_{n′m} − δ_{nm} δ_{n′m′} δ_{nn′}) ⟨δV²_{|n−n′|}⟩, (25.102)

it follows

⟨H′_{kk′}⟩ = 0 (25.103)

⟨|H′_{kk′}|²⟩ = (1/N²) Σ_{n n′ m m′} e^{ik(n−m)+ik′(m′−n′)} ⟨δV_{nn′} δV_{mm′}⟩
             = (1/N²) Σ_{nn′} ⟨δV²_{nn′}⟩ + (1/N²) Σ_{nn′} e^{i(k+k′)(n−n′)} ⟨δV²_{nn′}⟩
             = ⟨δE²⟩/N + (1/N) Σ_{m≠0} ⟨δV²_{|m|}⟩ [1 + cos(m(k + k′))]. (25.104)

If the dominant contributions come from the fluctuation of site energies and
nearest-neighbor couplings, this simplifies to

⟨|H′_{kk′}|²⟩ = (1/N) (⟨δE²⟩ + 2⟨δV²_{±1}⟩ (1 + cos(k + k′))). (25.105)
N
6
We assume here that the fluctuation amplitudes obey the CN -symmetry.

For the nondegenerate states, the width is

⟨δE_k²⟩ = (⟨δE²⟩ + 4⟨δV²_{±1}⟩)/N (25.106)

and for the degenerate pairs the eigenvalues of the secular matrix become

δE_{±k} = H′_{kk} ± |H′_{k,−k}|. (25.107)

Again the average of the two is not affected,

⟨(δE_{+k} + δE_{−k})/2⟩ = 0, (25.108)

whereas the width now is given by

⟨(δE_{+k}² + δE_{−k}²)/2⟩ = ⟨H′²_{kk}⟩ + ⟨H′²_{k,−k}⟩
                          = (2/N) (⟨δE²⟩ + ⟨δV²_{±1}⟩ (3 + cos(2k))). (25.109)
Off-diagonal disorder has a similar effect as diagonal disorder, but it influences
the optically allowed k = ±2π/N states stronger than the states in the center
of the exciton band. More sophisticated investigations, including also partially
correlated disorder and disorder of the transition dipoles, can be found in the
literature [110–115].

Problems
25.1. Photosynthetic Reaction Center
The “special pair” in the photosynthetic reaction center of Rps. viridis is
a dimer of two bacteriochlorophyll molecules whose centers of mass have a
distance of 7 Å. The transition dipoles of the two molecules include an angle
of 139◦.

Calculate energies and intensities of the two dimer bands from a simple
exciton model
H = ( −Δ/2   V   )
    (  V     Δ/2 )
as a function of the energy difference Δ and the interaction V . The Hamilto-
nian is represented here in a basis spanned by the two localized excited states
|A∗ B and |B∗ A.

25.2. Light-Harvesting Complex


The circular light-harvesting complex of the bacterium Rps. acidophila con-
sists of nine bacteriochlorophyll dimers in a C9 -symmetric arrangement. The
two subunits of a dimer are denoted as α and β. The exciton Hamiltonian
with nearest-neighbor and next-to-nearest-neighbor interactions only is (with
the index n taken as modulo 9)


H = Σ_{n=1}^{9} { E_α |n;α⟩⟨n;α| + E_β |n;β⟩⟨n;β|
    + V_dim (|n;α⟩⟨n;β| + |n;β⟩⟨n;α|)
    + V_{βα,1} (|n;α⟩⟨n−1;β| + |n;β⟩⟨n+1;α|)
    + V_{αα,1} (|n;α⟩⟨n+1;α| + |n;α⟩⟨n−1;α|)
    + V_{ββ,1} (|n;β⟩⟨n+1;β| + |n;β⟩⟨n−1;β|) }.

Transform the Hamiltonian to delocalized states

|k;α⟩ = (1/3) Σ_{n=1}^{9} e^{ikn} |n;α⟩,   |k;β⟩ = (1/3) Σ_{n=1}^{9} e^{ikn} |n;β⟩,

k = 2πl/9,   l = 0, ±1, ±2, ±3, ±4.


(a) Show that states with different k-values do not interact:

⟨k′;α(β)|H|k;α(β)⟩ = 0 if k ≠ k′.

(b) Find the matrix elements

H_αα(k) = ⟨k;α|H|k;α⟩,   H_ββ(k) = ⟨k;β|H|k;β⟩,   H_αβ(k) = ⟨k;α|H|k;β⟩.

(c) Solve the eigenvalue problem

( H_αα(k)    H_αβ(k) ) ( C_α )                ( C_α )
( H*_αβ(k)   H_ββ(k) ) ( C_β )  = E_{1,2}(k)  ( C_β ).

(d) The transition dipole moments are given by

μ_{n,α} = μ (sin θ cos(φ_α − ν + nφ), sin θ sin(φ_α − ν + nφ), cos θ)ᵀ,
μ_{n,β} = μ (sin θ cos(φ_β + ν + nφ), sin θ sin(φ_β + ν + nφ), cos θ)ᵀ,

with ν = 10.3°, φ_α = −112.5°, φ_β = 63.2°, and θ = 84.9°.


Determine the optically allowed transitions from the ground state and
calculate the relative intensities.

25.3. Exchange Narrowing


Consider excitons in a ring of chromophores with uncorrelated diagonal dis-
order. Show that in lowest order, the distribution function of Ek is Gaussian.
Hint. Write the distribution function as
 
P(δE_k = X) = ∫ dδE₁ dδE₂ · · · P(δE₁) P(δE₂) · · · δ(X − (1/N) Σ_n δE_n)

and replace the δ-function with a Fourier integral.


26
Ultrafast Electron Transfer Processes
in the Photosynthetic Reaction Center

In this chapter, we discuss a model for ultrafast electron transfer processes in


biological systems where the density of vibrational states is very high. The
vibrational modes which are coupled to the electronic transition1 are treated
within the model of displaced parallel (17.2) harmonic oscillators (Chap. 19).
This gives the following approximation to the diabatic potential surfaces
E_i = E_i⁰ + Σ_r (ω_r²/2) q_r²,
E_f = E_f⁰ + Σ_r (ω_r²/2) (q_r − Δq_r)², (26.1)

and the energy gap

E_f − E_i = ΔE + Σ_r (ω_r Δq_r)²/2 − Σ_r ω_r² Δq_r q_r. (26.2)

The total reorganization energy is the sum of all partial reorganization energies,

E_R = Σ_r (ω_r Δq_r)²/2 = Σ_r g_r² ħω_r, (26.3)

with the vibronic coupling strength

g_r = √(ω_r/2ħ) Δq_r. (26.4)
The golden rule expression gives the rate for nonadiabatic electron transfer in
analogy to (18.8) (Fig. 26.1)

1
Which have different equilibrium positions in the two states |i and |f .

Fig. 26.1. Electron transfer processes in the reaction center of bacterial photosynthesis.
After photoexcitation of the special pair dimer (red), an electron is transferred
to a bacteriopheophytine (blue) within a few picoseconds via another bacteriochlorophyll
(green). The electron is then transferred on a longer time scale to the quinone Q_A
and finally to Q_B via a nonheme ferrous ion. The electron hole in the special pair
is filled by an electron from the hemes of the cytochrome c (orange). After a second
excitation of the dimer, another electron is transferred via the same route to
the semiquinone to form the quinol, which then diffuses freely within the membrane.
This figure was created with Rasmol [106] based on the structure of Rps.
viridis [90–93] from the protein data bank [104, 105]
 
k_et = (2πV²/ħ) Σ_{n_r} Σ_{n_r′} P({n_r}) Π_r (FC_r(n_r, n_r′))² δ(ΔE + Σ_r ħω_r (n_r − n_r′))
     = (2πV²/ħ) FCD(ΔE), (26.5)

where the Franck–Condon-weighted density of states FCD(ΔE) can be
expressed with the help of the time correlation formalism (18.2) as

FCD(ΔE) = (1/2πħ) ∫ dt e^{−itΔE/ħ} F(t) (26.6)

with

F(t) = exp{ Σ_r g_r² [ (e^{iω_r t} − 1)(n̄_r + 1) + (e^{−iω_r t} − 1) n̄_r ] }. (26.7)

Quantum chemical calculations for the reaction center [116–119] put the elec-
tronic coupling in the range of V = 5–50 cm−1 for the first step P∗ → P+ B−
and V = 15–120 cm−1 for the second step P+ B− → P+ H− . For a rough esti-
mate, we take a reorganization energy of 2,000 cm−1 and a maximum of the
Franck–Condon-weighted density of states FCD ≈ 1/2,000 cm−1 yielding an
electronic coupling of V = 21 cm−1 for the first and V = 42 cm−1 for the
second step in quite good agreement with the calculations.
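The connection between (26.6), (26.7) and this rough estimate can be made concrete numerically; the following Python sketch (an illustration, with assumed parameters: a single effective mode of 100 cm⁻¹, E_R = 2000 cm⁻¹, T ≈ 300 K, and an ad-hoc Gaussian broadening) evaluates FCD(ΔE) by Fourier transforming F(t) and then the rate (26.5) for V = 21 cm⁻¹.

import numpy as np

# single effective mode; all energies in cm^-1 (illustrative values, not from the text)
omega, E_R, kT, V, Gamma = 100.0, 2000.0, 208.5, 21.0, 50.0
g2 = E_R / omega                        # g^2 from (26.3) for one mode
nbar = 1.0 / np.expm1(omega / kT)       # thermal occupation of the mode

t = np.linspace(-1.0, 1.0, 2**17)       # "time" conjugate to energy in cm^-1
F = np.exp(g2 * ((np.exp(1j*omega*t) - 1)*(nbar + 1) + (np.exp(-1j*omega*t) - 1)*nbar))
F = F * np.exp(-0.5 * (Gamma * t)**2)   # Gaussian broadening to smooth the discrete spectrum

dE = E_R                                # FCD is maximal near |Delta E| = E_R
FCD = np.real(np.trapz(np.exp(-1j * dE * t) * F, t)) / (2.0 * np.pi)   # eq. (26.6), in cm

hbar = 5.31e-12                         # hbar in cm^-1 * s
print(FCD, 2.0*np.pi * V**2 / hbar * FCD)   # FCD of order 1/2000 cm^-1, k_et of order 10^11 s^-1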
27
Proton Transfer in Biomolecules

Intraprotein proton transfer is perhaps the most fundamental flux in the bio-
sphere [120]. It is essential for such important processes as photosynthesis,
respiration, and ATP synthesis [121]. Within a protein, protons appear only
in covalently bound states. Here, proton transfer is essentially the motion
along a hydrogen bond, for instance peptide or DNA H-bonds (Fig. 27.1) or
the more complicated pathways in proton-pumping proteins.

Fig. 27.1. Proton transfer in peptide H-bonds (a) and in DNA H-bonds (b)

The energy barrier for this reaction depends strongly on the conformation
of the reaction complex, both on its geometry and on its protonation
state. Due to the small mass of the proton, the proton transfer process has
to be described on the basis of quantum mechanics.

27.1 The Proton Pump Bacteriorhodopsin


Rhodopsin proteins collect the photon energy in the process of vision. Since
the end of the 1950s, it has been recognized that the photoreceptor molecule
retinal undergoes a structural change, the so-called cis–trans isomerization
upon photoexcitation (G. Wald, R. Granit, H.K. Hartline, Nobel prize for

Fig. 27.2. X-ray structure of bR. The most important residues and structural waters
are shown, together with possible pathways for proton transfer. Numbers show O–O
and O–N distances in Å. Coordinates are from the structure 1c3w [122] in the protein
data bank [104, 105]. Molekel graphics [89]

medicine 1967). Rhodopsin is not photostable and its spectroscopy remains


difficult. Therefore, much more experimental work has been done on its
bacterial analogue bacteriorhodopsin, which performs a closed photocycle
(Figs. 27.2–27.4).
Bacteriorhodopsin is the simplest known natural photosynthetic system.
Retinal is covalently bound to a lysine residue forming the so-called protonated
Schiff base. This form of the protein absorbs sunlight very efficiently over a
large spectral range (480–360 nm).
After photoinduced isomerization, a proton is transferred from the Schiff
base to the negatively charged ASP85 (L → M ). This induces a large blue
shift. In wild-type bR, under physiological conditions, a proton is released to
the extracellular medium on the same time scale of 10 μs. The proton release
group is believed to consist of GLU194, GLU204, and some structural waters.
During the M -state, a large rearrangement on the cytoplasmatic side appears,
which is not seen spectroscopically and which induces the possibility to repro-
tonate the Schiff base from ASP96 subsequently (M → N ). ASP96 is then
reprotonated from the cytoplasmatic side and the retinal reisomerizes ther-
mally to the all trans-configuration (N → O). Finally, a proton is transferred
from ASP85 via ARG82 to the release group and the ground state bR is
recovered.

[Photocycle scheme: BR570 → (light, 4 ps) → J/K590 → (1 μs) → L550 → (10 μs) → M410 → (5 ms) → N560 → (5 ms) → O640 → (5 ms) → BR570. The proton transfer steps involve the Schiff base, ASP85, ASP96, ARG82, and the proton release group (PRG) on the extracellular side; ASP96 reprotonates the Schiff base from the cytoplasmic side.]
Fig. 27.3. The photocycle of bacteriorhodopsin [123]. Different states are labeled
by the corresponding absorption maximum (nm)

Fig. 27.4. Photoisomerization of bR

27.2 Born–Oppenheimer Separation


Protons move on a faster time scale than the heavy nuclei but are slower than
the electrons. Therefore, we use a double Born–Oppenheimer approximation
for the wave function1
1
The Born–Oppenheimer approximation and the nonadiabatic corrections to it are
discussed more systematically in Sect. 17.1.

ψ(r, RP , Q)χ(Rp , Q)Φ(Q). (27.1)

The protonic wave function χ depends parametrically on the coordinates Q of


the heavy atoms and the electronic wave function ψ depends parametrically
on all nuclear coordinates. The Hamiltonian consists of the kinetic energy
contributions and a potential energy term:

H = Tel + Tp + TN + V. (27.2)

The Born–Oppenheimer approximation leads to the following hierarchy of


equations. First, all nuclear coordinates (Rp , Q) are fixed and the electronic
wave function of the state s is obtained from

(Tel + V (r; Rp , Q))ψs (r; Rp , Q) = Eel,s (Rp , Q)ψs (r; Rp , Q). (27.3)

In the second step, only the heavy atoms (Q) are fixed but the action of the
kinetic energy of the proton on the electronic wave function is neglected. Then
the wave function of proton state n obeys

(Tp + Eel,s (Rp ; Q))χs,n (Rp ; Q) = εs,n (Q)χs,n (Rp ; Q). (27.4)

The electronic energy Eel (Rp , Q) plays the role of a potential energy surface
for the nuclear motion. It is shown schematically in Fig. 27.5 for one proton

Fig. 27.5. Proton transfer potential. The figure shows schematically the potential
which, as a function of the proton coordinate x, has two local minima corresponding
to the structures · · · N–H · · · O · · · and · · · N · · · H–O · · · and which is modulated by
a coordinate y representing the heavy atoms

Fig. 27.6. Proton tunneling. The double well of the proton along the H-bond is
shown schematically for different configurations of the reaction complex

coordinate (for instance, the O–H bond length) and one promoting mode of
the heavy atoms which modulates the energy gap between the two minima.
For fixed positions Q of the heavy nuclei, Eel (Rp , Q) has a double-well
structure as shown in Fig. 27.6. The rate of proton tunneling depends on the
configuration and is most efficient if the two localized states are in resonance.
Finally, the wave function of the heavy nuclei in state α is the approximate
solution of
(TN + εs,n (Q))Φs,n,α (Q) = Es,n,α Φs,n,α (Q). (27.5)

27.3 Nonadiabatic Proton Transfer (Small Coupling)


Initial and final states (s, n) and (s , m) are stabilized by the reorganization
of the heavy nuclei. The transfer occurs whenever fluctuations bring the two
states in resonance (Fig. 27.7).
We use the harmonic approximation

ε_{s,n}(Q) = ε^{(0)}_{s,n} + (a/2)(Q − Q_s^{(0)})² + · · · = ε_{s,0}(Q) + (ε_{s,n} − ε_{s,0}), (27.6)
similar to nonadiabatic electron transfer theory (Chap. 16). The activation
free energy for the (s, 0) → (s′, 0) transition is given by

ΔG‡₀₀ = (ΔG⁰ + E_R)²/(4E_R) (27.7)
and for the transition (s, n) → (s′, m), we have approximately

ΔG‡_{nm} = ΔG‡₀₀ + (ε_{s′m} − ε_{s′0}) − (ε_{sn} − ε_{s0})
         = (ΔG⁰ + E_R + ε_{s′m} − ε_{s′0} − ε_{sn} + ε_{s0})²/(4E_R). (27.8)
The partial rate k_{nm} is given in analogy to the ET rate (16.23) by

k_{nm} = κ_{nm} (ω_P/2π) exp(−ΔG‡_{nm}/kT). (27.9)

Fig. 27.7. Harmonic model for proton transfer

The interaction matrix element will be calculated using the BO approximation
and treating the heavy nuclei classically:

V_{sn,s′m} = ∫ dr dR_p ψ_s(r, R_p, Q) χ_{sn}(R_p, Q) V χ_{s′m}(R_p, Q) ψ_{s′}(r, R_p, Q)
           = ∫ dR_p [ ∫ dr ψ_s(r, R_p, Q) V ψ_{s′}(r, R_p, Q) ] χ_{sn}(R_p, Q) χ_{s′m}(R_p, Q)
           = ∫ dR_p V^el_{ss′}(R_p, Q) χ_{sn}(R_p, Q) χ_{s′m}(R_p, Q). (27.10)

The electronic matrix element

V^el_{ss′} = ∫ ψ_s(r, R_p, Q) V ψ_{s′}(r, R_p, Q) dr (27.11)

is expanded around the equilibrium position of the proton R_p*,

V^el_{ss′}(R_p, Q) ≈ V^el_{ss′}(R_p*, Q) + · · · , (27.12)

and we have approximately

V_{sn,s′m} ≈ V^el_{ss′}(R_p*, Q) S^{nm}_{ss′}(Q) (27.13)

with the overlap integral of the protonic wave functions (Franck–Condon
factor)

S^{nm}_{ss′}(Q) = ∫ χ_{sn} χ_{s′m} dR_p. (27.14)
27.4 Strongly Bound Protons


The frequencies of O–H or N–H bond vibrations are typically in the range of
3,000 cm−1 which is much larger than thermal energy. If, on the other hand,
the barrier for proton transfer is on the same order, only the transition between
the vibrational ground states is involved. For small electronic matrix element
V el , the situation is very similar to electron transfer described by Marcus

theory (Chap. 16) and the reaction rate is given by Borgis and Hynes [124] for
symmetrical proton transfer (E_R ≫ ΔG⁰, ΔG‡₀₀ = E_R/4):

k = (2πV₀²/ħ) √(1/(4πkT E_R)) exp(−E_R/(4kT)). (27.15)

In the derivation of (27.15), the Condon approximation has been used which
corresponds to application of (27.12) and approximation of the overlap integral
(27.14) by a constant value. However, for certain modes (which influence the
distance of the two potential minima), the modulation of the coupling matrix
element can be much stronger than in the case of electron transfer processes
due to the larger mass of the proton. The dependence on the transfer distance
(0) (0)
δQ = Qs − Qs is approximately exponential:

Vs0s 0 ∼ e−αδQ , (27.16)

where typically α ≈ 25–35 Å⁻¹, whereas for electron transfer processes typical
values are around α ≈ 0.5–1 Å⁻¹. Borgis and Hynes [124] found the following
result for the rate when the low-frequency substrate mode with frequency Ω
and reduced mass M is thermally excited:

k = (2π⟨V²⟩/ħ) √(1/(4πkT · 4ΔG_tot)) exp(−ΔG_tot/kT) χ_SC, (27.17)

where the average coupling square can be expanded as

⟨V²⟩ = V₀² exp{ (ħα²/(2MΩ)) [ 4kT/(ħΩ) + ħΩ/(3kT) + O((ħΩ/kT)²) ] }. (27.18)

The activation energy is

ΔG_tot = ΔG‡ + ħ²α²/(8M) (27.19)
and the additional factor χ_SC is given by

χ_SC = exp[ γ_SC (1 − γ_SC kT/(4ΔG_tot)) ] (27.20)

with

γ_SC = αD/(MΩ²), (27.21)
where

D = (∂ΔH/∂Q)(Q₀) (27.22)

measures the modulation of the energy difference by the promoting mode Ω.

27.5 Adiabatic Proton Transfer

If V^el is large and the tunnel factor approaches unity (for s ≠ s′), then the
application of the low-friction result (8.29) gives

k_{nm} = (ω/2π) exp(−ΔG‡,ad_{nm}/kT), (27.23)

where the activation energy has to be corrected by the tunnel splitting:

ΔG‡,ad_{nm} = ΔG‡_{nm} − (1/2) Δε^p_{nm}. (27.24)
Part VIII

Molecular Motor Models


28
Continuous Ratchet Models

A particular class of proteins (molecular motors) can move linearly along its
designated track, against an external force, by using chemical energy. The
movement of a single protein within a cell is subject to thermal agitation
from its environment and is, therefore, a Brownian motion with drift.
The following discussion is based on the pioneering work by Jülicher and
Prost [125–128].

28.1 Transport Equations

Treating the center of mass of the motor, as subject to Brownian motion


within a periodic potential, we apply a Smoluchowski equation with periodic
boundary conditions:

∂W(x,t)/∂t = (k_BT/mγ) ∂²W(x,t)/∂x² + (1/mγ) ∂/∂x [W(x,t) ∂U/∂x]
           = (k_BT/mγ) ∂²W(x,t)/∂x² − (1/mγ) ∂/∂x [F(x) W(x,t)], (28.1)

U(x + L) = U(x), (28.2)

which can be written in terms of the probability flux:

∂W(x,t)/∂t = −∂S(x,t)/∂x, (28.3)

S(x,t) = −(k_BT/mγ) ∂W(x,t)/∂x + (1/mγ) F(x) W(x,t). (28.4)
By comparison with the continuity equation for the density

∂ρ/∂t = −div(ρv), (28.5)

we find

∂W/∂t = −∂S/∂x = −∂(Wv)/∂x, (28.6)

hence the drift velocity at x is

v_d(x) = S(x)/W(x) (28.7)

and the average velocity of systems in the interval [0, L] is

⟨v⟩ = ∫₀ᴸ W(x) v_d(x) dx / ∫₀ᴸ W(x) dx = ∫₀ᴸ S(x) dx / ∫₀ᴸ W(x) dx. (28.8)

Equation (28.6) shows that for a stationary solution, the flux has to be constant.
However, the stationary distribution is easily found by integration and has the form

W_st = e^{−U(x)/k_BT} [ C − (mγS/k_BT) ∫₀ˣ e^{U(x′)/k_BT} dx′ ]. (28.9)
From the periodic boundary condition, we find

W_st(L) = e^{−U(0)/k_BT} [ C − (mγS/k_BT) ∫₀ᴸ e^{U(x′)/k_BT} dx′ ] = W_st(0) = e^{−U(0)/k_BT} C, (28.10)

which shows that in a stationary state, there is zero flux (Fig. 28.1),

S = 0, (28.11)

and no average velocity


⟨v⟩ = 0. (28.12)
If an external force is applied to the system

U (x) → U (x) − xFext , (28.13)

Fig. 28.1. Periodic potential with external force



then the stationary solution is

W(x) = e^{(xF−U(x))/k_BT} [ C − (mγS/k_BT) ∫₀ˣ e^{(U(x′)−x′F)/k_BT} dx′ ]. (28.14)

Assuming periodicity for W(x),

W(0) = W(L),

C e^{−U(0)/k_BT} = e^{(LF−U(0))/k_BT} [ C − (mγS/k_BT) ∫₀ᴸ e^{(U(x′)−x′F)/k_BT} dx′ ], (28.15)

there will be a net flux which is a function of the external force:

S = (k_BT/mγ) W(0) (e^{LF/k_BT} − 1) / ∫₀ᴸ e^{(U(x)−U(0)−(x−L)F)/k_BT} dx = S(F). (28.16)

From

S_st = −(k_BT/mγ) ∂W_st(x)/∂x − (1/mγ) W_st(x) ∂/∂x (U(x) − xF_ext), (28.17)

we find

mγS/W(x) = −∂/∂x (k_BT ln W(x) + U + U_ext), (28.18)

which can be interpreted in terms of a chemical potential [129]

μ(x) = k_BT ln W(x) + U(x) + U_ext(x) (28.19)

as

∂μ(x)/∂x = −mγS/W(x) = −mγ v_d(x), (28.20)
where the right-hand side gives the energy which is dissipated by a particle
moving in a viscous medium.

28.2 Chemical Transitions


We assume that the chemistry of the motor can be described by a number
m of discrete states or conformations i = 1, . . . , m. Many biochemical models
focus on a model with four states (Fig. 28.2).
Transitions between these states are fast compared to the motion of the
motor and will be described by chemical kinetics

i \;\underset{k_{ji}}{\overset{k_{ij}}{\rightleftharpoons}}\; j.   (28.21)

Fig. 28.2. Chemical cycle of the four-state model

The motion in each conformation will be described by a Smoluchowski equation in a corresponding potential U_i(x), which is assumed to be periodic and asymmetric,

U_i(x+L) = U_i(x), \qquad U_i(x) \ne U_i(-x),   (28.22)

and where the potential of an external force may be added

U_i \to U_i(x)-xF_{ext}.   (28.23)

Together, we have the equations

\frac{\partial}{\partial t}W_i(x,t)+\frac{\partial}{\partial x}S_i(x,t) = \sum_{j\ne i}\bigl(k_{ji}(x)W_j(x,t)-k_{ij}(x)W_i(x,t)\bigr),   (28.24)

S_i(x,t) = -\frac{k_BT}{m\gamma}\frac{\partial}{\partial x}W_i(x,t)-\frac{1}{m\gamma}W_i(x,t)\frac{\partial}{\partial x}U_i(x).   (28.25)

28.3 The Two-State Model


We discuss in the following a simple two-state model which has only a small
number of parameters, but which is very useful to understand the general
behavior. The model is a system of two coupled differential equations:
\frac{\partial}{\partial t}W_1+\frac{\partial}{\partial x}S_1 = -k_{12}W_1+k_{21}W_2,

\frac{\partial}{\partial t}W_2+\frac{\partial}{\partial x}S_2 = -k_{21}W_2+k_{12}W_1.   (28.26)

28.3.1 The Chemical Cycle

The energy source is the hydrolysis reaction

ATP → ADP+P. (28.27)



Fig. 28.3. Simplified chemical cycle

Fig. 28.4. Hierarchy of potentials

The motor protein undergoes a chemical cycle which can be divided into four states. In the simplified two-state model, this cycle has to be contracted into two substeps. One option is to combine ATP binding and hydrolysis (Fig. 28.3),

M+ATP \;\underset{\alpha_2}{\overset{\alpha_1}{\rightleftharpoons}}\; M{-}ADP{-}P,   (28.28)

as well as the release of ADP and P,

M+ADP+P \;\underset{\beta_2}{\overset{\beta_1}{\rightleftharpoons}}\; M{-}ADP{-}P.   (28.29)

Since each cycle consumes the energy from hydrolysis of one ATP, we consider a sequence of states characterized by the motor state and the number of total cycles (Fig. 28.4)

\cdots M_{1,0}\;\underset{\alpha_2}{\overset{\alpha_1}{\rightleftharpoons}}\;M_{2,1}\;\underset{\beta_1}{\overset{\beta_2}{\rightleftharpoons}}\;M_{1,1}\;\underset{\alpha_2}{\overset{\alpha_1}{\rightleftharpoons}}\;M_{2,2}\cdots   (28.30)
α2 β1 α2
The potential energies Us,n and Us,n+m only differ by a vertical shift
corresponding to the consumption of m energy units

Δμ = μATP − μADP − μP (28.31)



and therefore

Us,n = Us − nΔμ. (28.32)

Detailed balance of the chemical reactions implies

\frac{\alpha_1}{\alpha_2} = e^{(U_{1,n}-U_{2,n+1})/k_BT} = e^{(U_1-U_2+\Delta\mu)/k_BT},   (28.33)

\frac{\beta_1}{\beta_2} = e^{(U_{1,n}-U_{2,n})/k_BT} = e^{(U_1-U_2)/k_BT}.   (28.34)
The time evolution of the motor is described by a hierarchy of equations

\frac{\partial}{\partial t}W_{1,n}+\frac{\partial}{\partial x}S_{1,n} = -(\alpha_1+\beta_1)W_{1,n}+\alpha_2 W_{2,n+1}+\beta_2 W_{2,n},

\frac{\partial}{\partial t}W_{2,n}+\frac{\partial}{\partial x}S_{2,n} = -(\alpha_2+\beta_2)W_{2,n}+\beta_1 W_{1,n}+\alpha_1 W_{1,n-1}.   (28.35)

Obviously, we can sum over the hierarchy

W_s = \sum_n W_{s,n}, \qquad S_s = \sum_n S_{s,n}   (28.36)

to obtain the two-state model:


\frac{\partial}{\partial t}W_1+\frac{\partial}{\partial x}S_1 = -(\alpha_1+\beta_1)W_1+(\alpha_2+\beta_2)W_2,   (28.37)

\frac{\partial}{\partial t}W_2+\frac{\partial}{\partial x}S_2 = -(\alpha_2+\beta_2)W_2+(\alpha_1+\beta_1)W_1,   (28.38)

where the transition rates are superpositions:

k_{21} = \alpha_2+\beta_2,   (28.39)

k_{12} = \alpha_1+\beta_1 = \alpha_2\,e^{(U_1-U_2+\Delta\mu)/k_BT}+\beta_2\,e^{(U_1-U_2)/k_BT}.   (28.40)

If Δμ = 0, then these rates obey the detailed balance

\frac{k_{12}(x)}{k_{21}(x)} = e^{-(U_2-U_1)/k_BT},   (28.41)

which indicates that the system is not chemically driven and is only subject to thermal fluctuations. The steady state is then again given by

W_i = N\,e^{-U_i(x)/k_BT}   (28.42)

for which the flux is zero. For Δμ > 0, the system is chemically driven and spontaneous motion with ⟨v⟩ ≠ 0 can occur. However, for a symmetric system with U_{1,2}(−x) = U_{1,2}(x), the steady-state distributions are also symmetric
and the velocity

\langle v\rangle = -\frac{1}{m\gamma}\int_0^L dx\left(W_1\frac{\partial U_1}{\partial x}+W_2\frac{\partial U_2}{\partial x}\right)   (28.43)

vanishes, since the forces are antisymmetric functions (Fig. 28.5).

Fig. 28.5. Symmetric system

The ATP hydrolysis rate

r = \int_0^L dx\,(\alpha_1 W_1-\alpha_2 W_2),   (28.44)

on the other hand, is generally not zero in this case since α_{1,2} are symmetric functions. Therefore, two conditions have to be fulfilled for spontaneous motion to occur: detailed balance has to be broken (Δμ ≠ 0) and the system must have polar symmetry.
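Both requirements can be checked with a small Brownian dynamics simulation of a two-state (flashing) ratchet. The sketch below is a minimal illustration with hypothetical parameters: the particle switches stochastically between an asymmetric potential U₁ and a flat potential U₂ at constant rates (which do not satisfy (28.41), so detailed balance is broken), and a nonzero mean drift appears; with a symmetric U₁ the drift vanishes within the statistical error:

```python
import numpy as np

rng = np.random.default_rng(1)
kT, mgamma, L = 1.0, 1.0, 1.0
dt, nsteps, nwalkers = 1e-4, 100_000, 500
k12 = k21 = 50.0                     # constant switching rates (assumed)

def force1(x):
    # force -dU1/dx of an asymmetric ("polar") periodic potential
    # U1(x) = sin(2 pi x/L) + 0.25 sin(4 pi x/L)
    return -(2*np.pi/L)*(np.cos(2*np.pi*x/L) + 0.5*np.cos(4*np.pi*x/L))

x = np.zeros(nwalkers)
state = np.ones(nwalkers, dtype=int)     # 1: potential U1, 2: flat potential U2
for _ in range(nsteps):
    F = np.where(state == 1, force1(x), 0.0)
    x += F/mgamma*dt + np.sqrt(2*kT/mgamma*dt)*rng.standard_normal(nwalkers)
    flip = rng.random(nwalkers) < np.where(state == 1, k12, k21)*dt
    state = np.where(flip, 3 - state, state)

print("mean drift velocity:", x.mean()/(nsteps*dt))
```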

28.3.2 The Fast Reaction Limit


If the chemical reactions are faster than the diffusive motion, then there is
always a local equilibrium
W1 (x, t) k21 (x)
= (28.45)
W2 (x, t) k12 (x)
and the total probability W = W1 + W2 obeys the equation

∂ kB T ∂ 2 1 ∂ ∂
W = W+ (U1 W1 + U2 W2 )
∂t mγ ∂x2 mγ ∂x ∂x

kB T ∂ 2 1 ∂ ∂ k21 k12
= W+ U1 + U2 W ,
mγ ∂x2 mγ ∂x ∂x k12 + k21 k12 + k21
(28.46)
which describes motion in the average potential.
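For x-independent rates, the bracket in (28.46) is just the derivative of the weighted potential U_eff = (k₂₁U₁ + k₁₂U₂)/(k₁₂ + k₂₁). A short sketch with assumed potentials and rates makes the averaging explicit:

```python
import numpy as np

# fast-reaction limit (28.46): effective force is the rate-weighted average force
x = np.linspace(0.0, 1.0, 400)
U1 = np.sin(2*np.pi*x) + 0.25*np.sin(4*np.pi*x)   # assumed asymmetric potential
U2 = np.zeros_like(x)                             # assumed flat potential
k12, k21 = 2.0, 1.0                               # assumed x-independent rates

dUeff = (k21*np.gradient(U1, x) + k12*np.gradient(U2, x))/(k12 + k21)
Ueff = np.cumsum(dUeff)*(x[1] - x[0])             # integrate the averaged force
print("effective barrier height:", Ueff.max() - Ueff.min())
```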

28.3.3 The Fast Diffusion Limit


If the diffusive motion is faster than the chemical reactions, the motor can work
as a Brownian ratchet. In the time between chemical reactions, equilibrium is
established in both states

Fig. 28.6. Brownian ratchet

Fig. 28.7. Fast diffusion limit far from equilibrium

Wst0 = c1,2 e−U1,2 (x)/kB T . (28.47)


If the barrier height is large and the equilibrium distribution is confined around
the minima, the diffusion process can be approximated by a simpler rate
process. Every ATP consumption corresponds to a transition to the second
state. Then the system moves to one of the two neighboring minima with
probabilities p2 and 1 − p2 , respectively. From here it makes a transition back
to the first state. Finally, it moves right or left with probability p1 and 1 − p1 ,
respectively (Fig. 28.6).
During this cycle, the system proceeds in a forward direction with probability (Fig. 28.7)

p_+ = p_1 p_2,   (28.48)

backwards with probability

p_- = (1-p_1)(1-p_2),   (28.49)

or it returns to its initial position with probability

p_0 = p_2(1-p_1)+p_1(1-p_2).   (28.50)

The average displacement per consumed ATP is

\langle x\rangle \approx L(p_+-p_-) = L(p_1+p_2-1)   (28.51)

and the average velocity is

\langle v\rangle = r\langle x\rangle.   (28.52)

In the presence of an external force, the efficiency can thus be estimated as

\eta = \frac{-F_{ext}}{\Delta\mu}\langle x\rangle = \frac{-F_{ext}L}{\Delta\mu}(p_1+p_2-1).   (28.53)
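The stepping estimates (28.48)–(28.53) are easily evaluated; the sketch below uses hypothetical numbers (step length, hydrolysis rate, splitting probabilities, load) purely for illustration:

```python
def ratchet_step_model(p1, p2, L, r, F_ext, dmu):
    """Fast-diffusion (Brownian ratchet) estimates, (28.48)-(28.53)."""
    p_plus = p1*p2                        # forward step per consumed ATP
    p_minus = (1 - p1)*(1 - p2)           # backward step per consumed ATP
    x_mean = L*(p_plus - p_minus)         # = L*(p1 + p2 - 1), (28.51)
    v_mean = r*x_mean                     # (28.52)
    eta = -F_ext*x_mean/dmu               # (28.53)
    return x_mean, v_mean, eta

# hypothetical values: 8 nm step, 100 ATP/s, 1 pN load, dmu = 20 kT at 300 K
x, v, eta = ratchet_step_model(p1=0.9, p2=0.8, L=8e-9, r=100.0,
                               F_ext=-1e-12, dmu=20*4.14e-21)
print(f"<x> = {x*1e9:.2f} nm per ATP, <v> = {v*1e9:.1f} nm/s, eta = {eta:.1%}")
```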

28.4 Operation Close to Thermal Equilibrium


For constant T and μ_i, the local heat dissipation is given by

T\sigma(x) = \sum_k\left(F_{ext}-\frac{\partial U_k}{\partial x}\right)S_k+r(x)\,\Delta\mu.   (28.54)

In a stationary state, the total heat dissipation is

\Pi = \int T\sigma(x)\,dx = F_{ext}\langle v\rangle+r\,\Delta\mu,   (28.55)

since

\int S\,\frac{\partial U}{\partial x}\,dx = -\int U(x)\,\frac{\partial S}{\partial x}\,dx = 0.   (28.56)

Under the action of an external force F_ext, the velocity and the hydrolysis rate are functions of the external force and Δμ:

v = v(F_{ext},\Delta\mu), \qquad r = r(F_{ext},\Delta\mu).   (28.57)

Close to equilibrium, linearization gives

v = \lambda_{11}F_{ext}+\lambda_{12}\Delta\mu, \qquad r = \lambda_{21}F_{ext}+\lambda_{22}\Delta\mu.   (28.58)

The rate of energy dissipation is given by mechanical plus chemical work,

\Pi = F_{ext}v+r\Delta\mu = \lambda_{11}F_{ext}^2+\lambda_{22}\Delta\mu^2+(\lambda_{12}+\lambda_{21})F_{ext}\Delta\mu,   (28.59)

which must always be positive due to the second law of thermodynamics. This is the case if

\lambda_{11}>0, \qquad \lambda_{22}>0, \qquad \lambda_{11}\lambda_{22}-\lambda_{12}\lambda_{21}>0.   (28.60)

The efficiency of the motor can be defined as

\eta = \frac{-F_{ext}v}{r\Delta\mu} = \frac{-\lambda_{11}F_{ext}^2-\lambda_{12}F_{ext}\Delta\mu}{\lambda_{22}\Delta\mu^2+\lambda_{21}F_{ext}\Delta\mu}

or shorter

\eta = -\frac{\lambda_{11}a^2+\lambda_{12}a}{\lambda_{22}+\lambda_{21}a}, \qquad a = \frac{F_{ext}}{\Delta\mu}.   (28.61)

It vanishes for zero force but also for v = 0 and has a maximum at (Figs. 28.8 and 28.9)

a = \frac{\sqrt{1-(\lambda_{12}\lambda_{21}/\lambda_{11}\lambda_{22})}-1}{\lambda_{21}/\lambda_{22}}, \qquad \eta_{max} = \frac{(1-\sqrt{1-\Lambda})^2}{\Lambda}, \qquad \Lambda = \frac{\lambda_{12}\lambda_{21}}{\lambda_{11}\lambda_{22}}.   (28.62)
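The position and height of this maximum are easy to verify numerically; the sketch below scans η(a) for assumed Onsager coefficients (with λ₁₂ = λ₂₁ and satisfying (28.60)) and compares with the closed-form result (28.62):

```python
import numpy as np

l11, l22, l12 = 1.0, 1.0, 0.8     # assumed Onsager coefficients
l21 = l12
Lam = l12*l21/(l11*l22)

a = np.linspace(-l12/l11, 0.0, 100001)     # between stall force (v = 0) and F = 0
eta = -(l11*a**2 + l12*a)/(l22 + l21*a)

a_max = (np.sqrt(1.0 - Lam) - 1.0)/(l21/l22)
eta_max = (1.0 - np.sqrt(1.0 - Lam))**2/Lam
print("numerical maximum :", eta.max(), "at a =", a[eta.argmax()])
print("formula (28.62)   :", eta_max, "at a =", a_max)
```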

Fig. 28.8. Velocity–force relation (velocity ⟨v⟩/Δμ versus external force F/Δμ)

Fig. 28.9. Efficiency of the motor (efficiency η versus external force F/Δμ)

Problems

28.1. Deviation from Equilibrium
Make use of the detailed balance conditions

\frac{\alpha_1}{\alpha_2} = e^{(\Delta\mu-\Delta U)/k_BT}, \qquad \frac{\beta_1}{\beta_2} = e^{-\Delta U/k_BT},

and calculate the quantity

\Omega = \frac{k_{12}}{k_{21}}-e^{-\Delta U/k_BT} = \frac{\alpha_1+\beta_1}{\alpha_2+\beta_2}-e^{-\Delta U/k_BT}.

Use the approximation

\Delta\mu = \Delta\mu^0+k_BT\ln\frac{C(\mathrm{ATP})}{C(\mathrm{ADP})\,C(\mathrm{P})}

and express Ω in terms of the equilibrium constant

K_{eq} = e^{\Delta\mu^0/k_BT}.

Consider the limiting cases

\Delta\mu \ne 0 \;\text{ but }\; \Delta\mu \ll k_BT

and

\Delta\mu \gg k_BT.
29
Discrete Ratchet Models

Simplified motor models describe the motion of the protein as a hopping


process between positions xn combined with a cyclical transition between
internal motor states:

M_{1,x_n}\rightleftharpoons M_{2,x_n}\cdots\rightleftharpoons M_{N,x_n}\rightleftharpoons M_{1,x_{n+1}}\cdots   (29.1)

For example, the four-state model of the ATP hydrolysis can be easily trans-
lated into such a scheme (Fig. 29.1).

Fig. 29.1. Discrete motor model



29.1 Linear Model with Two Internal States


Fisher [130] considers a linear periodic process with two internal states:

\cdots M_{1,x_n}\;\underset{\alpha_2}{\overset{\alpha_1}{\rightleftharpoons}}\;M_{2,x_n}\;\underset{\beta_2}{\overset{\beta_1}{\rightleftharpoons}}\;M_{1,x_{n+1}}\cdots   (29.2)

The master equation is

\frac{d}{dt}P_{1n} = -(\alpha_1+\beta_2)P_{1n}+\alpha_2 P_{2n}+\beta_1 P_{2,n-1},

\frac{d}{dt}P_{2n} = -(\alpha_2+\beta_1)P_{2n}+\alpha_1 P_{1n}+\beta_2 P_{1,n+1},   (29.3)

and the stationary solution corresponds to the zero eigenvalue of the matrix

\begin{pmatrix}-\alpha_1-\beta_2 & \alpha_2+\beta_1\\ \alpha_1+\beta_2 & -\alpha_2-\beta_1\end{pmatrix}.   (29.4)

With the help of the left and right eigenvectors

L_0 = (1\;\;1), \qquad R_0 = \begin{pmatrix}\alpha_2+\beta_1\\ \alpha_1+\beta_2\end{pmatrix},   (29.5)

we find

P_{1,st} = \frac{\alpha_2+\beta_1}{\alpha_1+\beta_2+\alpha_2+\beta_1}, \qquad P_{2,st} = \frac{\alpha_1+\beta_2}{\alpha_1+\beta_2+\alpha_2+\beta_1}   (29.6)

and the stationary current is

S = \alpha_1 P_{1,st}-\alpha_2 P_{2,st} = \frac{\alpha_1\beta_1-\alpha_2\beta_2}{\alpha_1+\beta_2+\alpha_2+\beta_1}.   (29.7)

With the definition of

\omega = \frac{\alpha_2\beta_2}{\alpha_1+\alpha_2+\beta_1+\beta_2}   (29.8)

and

\Gamma = \frac{\alpha_1\beta_1}{\alpha_2\beta_2} = e^{\Delta\mu/k_BT},   (29.9)

the current can be written as

S = (\Gamma-1)\,\omega.   (29.10)

Consider now application of an external force corresponding to an additional potential energy

U_n = -n\,d\,F_{ext}.   (29.11)

Assuming that the motion corresponds to the transition M_{2,n}\rightleftharpoons M_{1,n+1}, the corresponding rates will become dependent on the force (Fig. 29.2)

\beta_1 = \beta_1^0\,e^{\Theta Fd/k_BT}, \qquad \beta_2 = \beta_2^0\,e^{-(1-\Theta)Fd/k_BT}.   (29.12)

Fig. 29.2. Influence of the external force. The external force changes the activation barriers, which become approximately ΔG₁(F) = ΔG₁(0) − x₁F and ΔG₂(F) = ΔG₂(0) + x₂F

Fig. 29.3. Velocity–force relation

Θ is the so-called splitting parameter. The force-dependent current is

S = \left(e^{\Delta\mu/k_BT+Fd/k_BT}-1\right)\frac{\alpha_2\beta_2^0\,e^{-(1-\Theta)Fd/k_BT}}{\alpha_1+\alpha_2+\beta_1^0 e^{\Theta Fd/k_BT}+\beta_2^0 e^{-(1-\Theta)Fd/k_BT}},   (29.13)

which vanishes for the stalling force:

F_{stal} = -\frac{\Delta\mu}{d}.   (29.14)

Close to equilibrium, we expand (Fig. 29.3)

\Gamma-1 = e^{\Delta\mu/k_BT+Fd/k_BT}-1 = e^{(1-F/F_{stal})\Delta\mu/k_BT}-1 \approx (1-F/F_{stal})\,\frac{\Delta\mu}{k_BT},   (29.15)

\omega \approx \omega_0\left(1-\frac{Fd\,\bigl(\beta_1^0+(\alpha_1+\alpha_2)(1-\Theta)\bigr)}{k_BT\,(\alpha_1+\alpha_2+\beta_1^0+\beta_2^0)}\right).   (29.16)
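The stall condition (29.14) follows immediately from (29.13), since the current is proportional to e^{(Δμ+Fd)/k_BT} − 1. A minimal sketch with assumed rate constants (chosen consistently with (29.9)) illustrates the velocity–force behaviour:

```python
import numpy as np

kT, d = 1.0, 1.0
a1, a2, b1_0, b2_0, theta = 5.0, 1.0, 4.0, 1.0, 0.5   # assumed rate constants

def current(F):
    """Stationary current S = (Gamma-1)*omega, (29.10)/(29.13),
    with force-dependent rates (29.12)."""
    b1 = b1_0*np.exp(theta*F*d/kT)
    b2 = b2_0*np.exp(-(1.0 - theta)*F*d/kT)
    gamma = a1*b1/(a2*b2)                  # = exp((dmu + F d)/kT)
    omega = a2*b2/(a1 + a2 + b1 + b2)
    return (gamma - 1.0)*omega

dmu = kT*np.log(a1*b1_0/(a2*b2_0))          # (29.9) at F = 0
F_stal = -dmu/d                             # stalling force (29.14)
for F in (0.0, 0.5*F_stal, F_stal, 1.5*F_stal):
    print(f"F = {F:+.3f}   S = {current(F):+.5f}")
# S > 0 without load, vanishes at the stalling force, and reverses beyond it.
```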
Part IX

Appendix
A
The Grand Canonical Ensemble

Consider an ensemble of M systems which can exchange energy as well as particles with a reservoir. For large M, the total number of particles N_tot = M⟨N⟩ and the total energy E_tot = M⟨E⟩ have well-defined values since the relative widths decrease as

\frac{\sqrt{\langle N_{tot}^2\rangle-\langle N_{tot}\rangle^2}}{\langle N_{tot}\rangle} \sim \frac{1}{\sqrt{M}}, \qquad \frac{\sqrt{\langle E_{tot}^2\rangle-\langle E_{tot}\rangle^2}}{\langle E_{tot}\rangle} \sim \frac{1}{\sqrt{M}}.   (A.1)

Hence also the average number and energy can be assumed to have well-defined values (Fig. A.1).

Fig. A.1. Ensemble of systems exchanging energy and particles with a reservoir

A.1 Grand Canonical Distribution


We distinguish different microstates j which are characterized by the num-
ber of particles Nj and the energy Ej . The number of systems in a certain
microstate j will be denoted by nj . The total number of particles and the
total energy of the ensemble in a macrostate with nj systems in the state j
are

N_{tot} = \sum_j n_j N_j, \qquad E_{tot} = \sum_j n_j E_j   (A.2)

and the number of systems is

M = \sum_j n_j.   (A.3)

The number of possible representations is given by a multinomial coefficient:

W(\{n_j\}) = \frac{M!}{\prod_j n_j!}.   (A.4)

From Stirling's formula, we have

\ln W \approx \ln(M!)-\sum_j (n_j\ln n_j-n_j).   (A.5)

We search for the maximum of (A.5) under the restraints imposed by (A.2) and (A.3). To this end, we use the method of undetermined factors (Lagrange method) and consider the variation

\delta\left(\ln W-\alpha\Bigl(\sum_j n_j-M\Bigr)-\beta\Bigl(\sum_j n_j E_j-E_{tot}\Bigr)-\gamma\Bigl(\sum_j n_j N_j-N_{tot}\Bigr)\right) = \sum_j \delta n_j\,(-\ln n_j-\alpha-\beta E_j-\gamma N_j) = 0.   (A.6)

Since the n_j now can be varied independently, we find

n_j = \exp(-\alpha-\beta E_j-\gamma N_j).   (A.7)

The unknown factors α, β, and γ have to be determined from the restraints. First we have

\sum_j n_j = e^{-\alpha}\sum_j e^{-\beta E_j-\gamma N_j} = M.   (A.8)

With the grand canonical partition function

\Xi = \sum_j e^{-\beta E_j-\gamma N_j},   (A.9)

the probability of a certain microstate is given by

P(E_j,N_j) = \frac{n_j}{\sum_j n_j} = \frac{e^{-\beta E_j-\gamma N_j}}{\Xi}   (A.10)

and further

e^{-\alpha} = \frac{M}{\Xi}.   (A.11)

From

\sum_j n_j E_j = \frac{M}{\Xi}\sum_j E_j\,e^{-\beta E_j-\gamma N_j} = E_{tot},   (A.12)

we find the average energy per system

E = \frac{\sum_j n_j E_j}{M} = -\frac{\partial}{\partial\beta}\ln\Xi   (A.13)

and similarly from

\sum_j n_j N_j = \frac{M}{\Xi}\sum_j N_j\,e^{-\beta E_j-\gamma N_j} = N_{tot}   (A.14)

the average particle number of a system

N = \frac{\sum_j n_j N_j}{M} = -\frac{\partial}{\partial\gamma}\ln\Xi.   (A.15)

Equations (A.13) and (A.15) determine the parameters β, γ implicitly and then α follows from (A.11).

A.2 Connection to Thermodynamics


Entropy is given by

S = -k\sum P(E_j,N_j)\ln P(E_j,N_j) = -k\sum P(E_j,N_j)\,(-\beta E_j-\gamma N_j-\ln\Xi) = k\beta E+k\gamma N+k\ln\Xi.   (A.16)

From thermodynamics, the Duhem–Gibbs relation is known, which states for the free enthalpy

G = U-TS+pV = \mu N,

where U = E and S, V, N are the thermodynamic averages. Solving for the entropy, we have

S = \frac{U+pV-\mu N}{T}   (A.17)

and comparison with (A.16) shows that

\beta = \frac{1}{k_BT}, \qquad \gamma = -\frac{\mu}{k_BT},   (A.18)

k_BT\ln\Xi = pV.   (A.19)

The summation over microstates j with energy E_j and N_j particles can be replaced by a double sum over energies E_i(N) and particle number N to give

\Xi = \sum_{E,N} e^{-\beta(E(N)-\mu N)},

which can be written as a sum over canonical partition functions with different particle numbers

\Xi = \sum_N e^{\beta\mu N}\sum_E e^{-\beta E(N)} = \sum_N e^{\beta\mu N}\,Q(N).   (A.20)
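As a small numerical illustration of (A.20) (a hypothetical example, not taken from the text): for M independent binding sites with site energy ε, the canonical partition functions are Q(N) = (M choose N) e^{−βNε}, and the grand canonical average occupation reproduces the Langmuir isotherm:

```python
import numpy as np
from math import comb

def grand_canonical_occupation(M, eps, mu, beta):
    """<N> from Xi = sum_N e^{beta mu N} Q(N) with Q(N) = C(M,N) exp(-beta N eps)."""
    N = np.arange(M + 1)
    Q = np.array([comb(M, n) for n in N])*np.exp(-beta*eps*N)
    weights = np.exp(beta*mu*N)*Q
    return (N*weights).sum()/weights.sum()

M, eps, mu, beta = 10, -1.0, -0.5, 1.0
N_avg = grand_canonical_occupation(M, eps, mu, beta)
langmuir = M/(1.0 + np.exp(beta*(eps - mu)))   # exact result for independent sites
print(N_avg, langmuir)
```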
B
Time Correlation Function of the Displaced
Harmonic Oscillator Model

In the following, we evaluate the time correlation function of the displaced harmonic oscillator model (19.7)

f_r(t) = \left\langle e^{-it\omega_r b_r^\dagger b_r}\, e^{it\omega_r (b_r^\dagger+g_r)(b_r+g_r)}\right\rangle.   (B.1)

B.1 Evaluation of the Time Correlation Function


To proceed, we need some theorems which will be derived afterwards.
Theorem 1. A displacement of the oscillator potential energy minimum can be formulated as a canonical transformation

\omega_r(b_r^\dagger+g_r)(b_r+g_r) = e^{-g_r(b_r^\dagger-b_r)}\,\omega_r b_r^\dagger b_r\,e^{g_r(b_r^\dagger-b_r)}.   (B.2)

With the help of this relation, the single mode correlation function becomes

F_r(t) = \left\langle e^{-it\omega_r b_r^\dagger b_r}\,e^{-g_r(b_r^\dagger-b_r)}\,e^{it\omega_r b_r^\dagger b_r}\,e^{g_r(b_r^\dagger-b_r)}\right\rangle.   (B.3)

The first three factors can be interpreted as another canonical transformation. To this end, we apply the following theorem.

Theorem 2. The time-dependent boson operators are given by

e^{-i\omega t\,b^\dagger b}\,b^\dagger\,e^{i\omega t\,b^\dagger b} = b^\dagger e^{-i\omega t}, \qquad e^{-i\omega t\,b^\dagger b}\,b\,e^{i\omega t\,b^\dagger b} = b\,e^{i\omega t}.   (B.4)

We find

F_r(t) = \left\langle \exp\bigl(-g_r(b_r^\dagger e^{-i\omega_r t}-b_r e^{i\omega_r t})\bigr)\,\exp\bigl(g_r(b_r^\dagger-b_r)\bigr)\right\rangle.   (B.5)

The two exponentials can be combined due to the following theorem.

Theorem 3. If the commutator of two operators A and B is a c-number, then

e^{A+B} = e^A\,e^B\,e^{-(1/2)[A,B]}.   (B.6)

The commutator is

-g_r^2\,[\,b_r^\dagger e^{-i\omega_r t}-b_r e^{i\omega_r t},\;b_r^\dagger-b_r\,] = -g_r^2\,(e^{-i\omega_r t}-e^{i\omega_r t})   (B.7)

and we have

F_r(t) = \exp\left(-\frac{1}{2}g_r^2(e^{-i\omega_r t}-e^{i\omega_r t})\right)\left\langle \exp\bigl(-g_r b_r^\dagger(e^{-i\omega_r t}-1)+g_r b_r(e^{i\omega_r t}-1)\bigr)\right\rangle.   (B.8)

The remaining average is easily evaluated due to the following theorem.

Theorem 4. For a linear combination of b_r and b_r^\dagger, the second-order cumulant expansion is exact:

\left\langle e^{\mu b^\dagger+\tau b}\right\rangle = e^{\frac{1}{2}\langle(\mu b^\dagger+\tau b)^2\rangle} = e^{\mu\tau(\langle b^\dagger b\rangle+1/2)}.   (B.9)

The average square is

g_r^2\left\langle\bigl(b_r^\dagger(e^{-i\omega_r t}-1)-b_r(e^{i\omega_r t}-1)\bigr)^2\right\rangle = -g_r^2(2-e^{i\omega_r t}-e^{-i\omega_r t})\left\langle b_r^\dagger b_r+b_r b_r^\dagger\right\rangle = -g_r^2(2-e^{i\omega_r t}-e^{-i\omega_r t})(2n_r+1)   (B.10)

with the average phonon number¹

n_r = \frac{1}{e^{\beta\omega_r}-1}.   (B.11)

Finally we have

F_r(t) = \exp\left(-\frac{1}{2}g_r^2(2-e^{i\omega_r t}-e^{-i\omega_r t})(2n_r+1)-\frac{1}{2}g_r^2(e^{-i\omega_r t}-e^{i\omega_r t})\right) = \exp\left(g_r^2\bigl[(e^{i\omega_r t}-1)(n_r+1)+(e^{-i\omega_r t}-1)n_r\bigr]\right).   (B.12)
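Equation (B.12) is straightforward to evaluate; the sketch below (assumed coupling g_r and reduced units) computes the single-mode correlation function and checks the limit F_r(0) = 1:

```python
import numpy as np

def F_mode(t, omega, g, beta):
    """Single-mode correlation function (B.12) with phonon number (B.11)."""
    n = 1.0/np.expm1(beta*omega)                 # n_r = 1/(e^{beta omega} - 1)
    return np.exp(g**2*((np.exp(1j*omega*t) - 1.0)*(n + 1.0)
                        + (np.exp(-1j*omega*t) - 1.0)*n))

print(F_mode(0.0, omega=1.0, g=0.8, beta=2.0))   # -> (1+0j)
print(F_mode(np.linspace(0.0, 6.0, 4), omega=1.0, g=0.8, beta=2.0))
```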

B.2 Boson Algebra


B.2.1 Derivation of Theorem 1

Consider the following unitary transformation


A = e^{-g(b^\dagger-b)}\,b^\dagger b\,e^{g(b^\dagger-b)}   (B.13)

¹ In the following, we use the abbreviation β = 1/k_B T.

and make a series expansion

A = A(0)+g\,\frac{dA}{dg}+\frac{1}{2}g^2\,\frac{d^2A}{dg^2}+\cdots   (B.14)

The derivatives are

\frac{dA}{dg}\Big|_{g=0} = [b^\dagger b,\,b^\dagger-b] = b^\dagger[b,b^\dagger]+[b^\dagger,-b]b = b+b^\dagger,

\frac{d^2A}{dg^2}\Big|_{g=0} = [[b^\dagger b,\,b^\dagger-b],\,b^\dagger-b] = [b+b^\dagger,\,b^\dagger-b] = 2,

\frac{d^nA}{dg^n}\Big|_{g=0} = 0 \quad\text{for } n\ge 3,   (B.15)

and the series is finite:

A = b^\dagger b+g(b^\dagger+b)+g^2 = (b^\dagger+g)(b+g).   (B.16)

Hence for any of the normal modes

\omega_r(b_r^\dagger+g_r)(b_r+g_r) = e^{-g_r(b_r^\dagger-b_r)}\,\omega_r b_r^\dagger b_r\,e^{g_r(b_r^\dagger-b_r)}.   (B.17)

B.2.2 Derivation of Theorem 2

Consider

A = e^{\tau b^\dagger b}\,b\,e^{-\tau b^\dagger b} \quad\text{with}\quad \tau = -i\omega t.   (B.18)

Make again a series expansion:

\frac{dA}{d\tau} = [b^\dagger b,\,b] = -b,   (B.19)

\frac{d^2A}{d\tau^2} = [b^\dagger b,\,-b] = b, \quad\text{etc.},   (B.20)

A = b\left(1-\tau+\frac{\tau^2}{2}-\cdots\right) = b\left(1+i\omega t+\frac{(i\omega t)^2}{2}+\cdots\right) = b\,e^{i\omega t}.   (B.21)

Hermitian conjugation gives

e^{-i\omega t\,b^\dagger b}\,b^\dagger\,e^{i\omega t\,b^\dagger b} = b^\dagger e^{-i\omega t}.   (B.22)

B.2.3 Derivation of Theorem 3

Consider the operator

f(\tau) = e^{-B\tau}\,e^{-A\tau}\,e^{(A+B)\tau}   (B.23)

as a function of the c-number τ. Differentiation gives

\frac{df(\tau)}{d\tau} = e^{-B\tau}(-A-B)e^{-A\tau}e^{(A+B)\tau}+e^{-B\tau}e^{-A\tau}(A+B)e^{(A+B)\tau} = e^{-B\tau}\,[e^{-A\tau},B]\,e^{(A+B)\tau}.   (B.24)

Now if the commutator [A,B] is a c-number, then

[A^n,B] = A[A^{n-1},B]+[A,B]A^{n-1} = \cdots = n[A,B]A^{n-1}   (B.25)

and therefore

[e^{-A\tau},B] = \sum_{n=0}^{\infty}\frac{(-\tau)^n}{n!}[A^n,B] = \sum_n\frac{(-\tau)^n}{(n-1)!}A^{n-1}[A,B] = -\tau[A,B]\,e^{-A\tau}   (B.26)

and (B.24) gives

\frac{df(\tau)}{d\tau} = -\tau[A,B]\,e^{-B\tau}e^{-A\tau}e^{(A+B)\tau} = -\tau[A,B]\,f(\tau),   (B.27)

which is for the initial condition f(0) = 1 solved by

f(\tau) = \exp\left(-\frac{\tau^2}{2}[A,B]\right).   (B.28)

Substituting τ = 1 finally gives

e^{-B}\,e^{-A}\,e^{A+B} = e^{-(1/2)[A,B]}.   (B.29)

B.2.4 Derivation of Theorem 4

This derivation is based on [131]. For one single oscillator, consider the linear combination

\mu b^\dagger+\tau b = A+B,   (B.30)

[A,B] = -\mu\tau.   (B.31)

Application of (B.29) gives

\left\langle e^{\mu b^\dagger+\tau b}\right\rangle = \left\langle e^{\mu b^\dagger}e^{\tau b}\right\rangle e^{\mu\tau/2}   (B.32)

and after exchange of A and B

\left\langle e^{\mu b^\dagger+\tau b}\right\rangle = \left\langle e^{\tau b}e^{\mu b^\dagger}\right\rangle e^{-\mu\tau/2}.   (B.33)

Combination of the last two equations gives

\left\langle e^{\tau b}e^{\mu b^\dagger}\right\rangle = \left\langle e^{\mu b^\dagger}e^{\tau b}\right\rangle e^{\mu\tau}.   (B.34)

Using the explicit form of the averages, we find

Q^{-1}\,\mathrm{tr}\left(e^{-\beta\omega b^\dagger b}\,e^{\tau b}\,e^{\mu b^\dagger}\right) = Q^{-1}\,\mathrm{tr}\left(e^{-\beta\omega b^\dagger b}\,e^{\mu b^\dagger}\,e^{\tau b}\right)e^{\mu\tau}   (B.35)

and due to the cyclic invariance of the trace operation, the right-hand side becomes

Q^{-1}\,\mathrm{tr}\left(e^{\tau b}\,e^{-\beta\omega b^\dagger b}\,e^{\mu b^\dagger}\right)e^{\mu\tau},   (B.36)

which can be written as

Q^{-1}\,\mathrm{tr}\left(e^{-\beta\omega b^\dagger b}\,e^{+\beta\omega b^\dagger b}\,e^{\tau b}\,e^{-\beta\omega b^\dagger b}\,e^{\mu b^\dagger}\right)e^{\mu\tau} = \left\langle e^{+\beta\omega b^\dagger b}\,e^{\tau b}\,e^{-\beta\omega b^\dagger b}\,e^{\mu b^\dagger}\right\rangle e^{\mu\tau}.   (B.37)

Application of (B.4) finally gives the relation

\left\langle \exp(\tau b)\exp(\mu b^\dagger)\right\rangle = \left\langle \exp(\tau e^{-\beta\omega}b)\exp(\mu b^\dagger)\right\rangle e^{\mu\tau},   (B.38)

which can be iterated to give

\left\langle \exp(\tau b)\exp(\mu b^\dagger)\right\rangle = \left\langle \exp(\tau e^{-2\beta\omega}b)\exp(\mu b^\dagger)\right\rangle e^{\mu\tau(1+e^{-\beta\omega})} = \cdots = \left\langle \exp(\tau e^{-(n+1)\beta\omega}b)\exp(\mu b^\dagger)\right\rangle e^{\mu\tau(1+e^{-\beta\omega}+\cdots+e^{-n\beta\omega})}   (B.39)

and in the limit n → ∞

\left\langle \exp(\tau b)\exp(\mu b^\dagger)\right\rangle = \left\langle \exp(\mu b^\dagger)\right\rangle \exp\left(\frac{\mu\tau}{1-e^{-\beta\omega}}\right) = \exp\left(\frac{\mu\tau}{1-e^{-\beta\omega}}\right),   (B.40)

since only the zero-order term of the expansion of the exponential gives a nonzero contribution to the average. With the average number of vibrations

\tilde n = \frac{1}{e^{\beta\omega}-1},   (B.41)

we have

\left\langle \exp(\tau b)\exp(\mu b^\dagger)\right\rangle = \exp\bigl(\mu\tau(\tilde n+1)\bigr)   (B.42)

and finally

\left\langle e^{\mu b^\dagger+\tau b}\right\rangle = e^{\mu\tau(\tilde n+1/2)}.   (B.43)

The average of the square is

\left\langle(\mu b^\dagger+\tau b)^2\right\rangle = \mu\tau\left\langle b^\dagger b+b\,b^\dagger\right\rangle = \mu\tau(2\tilde n+1),   (B.44)

which shows the validity of the theorem.
C
The Saddle Point Method

The saddle point method is an asymptotic method to calculate integrals of the type

\int_{-\infty}^{\infty} e^{n\phi(x)}\,dx   (C.1)

for large n. If the function φ(x) has a maximum at x₀, then the integrand also has a maximum there which becomes very sharp for large n. Then the integral can be approximated by expanding the exponent around x₀,

n\phi(x) = n\phi(x_0)+\frac{1}{2}\frac{d^2(n\phi(x))}{dx^2}\Big|_{x_0}(x-x_0)^2+\cdots,   (C.2)

as a Gaussian integral

\int_{-\infty}^{\infty} e^{n\phi(x)}\,dx \approx e^{n\phi(x_0)}\sqrt{\frac{2\pi}{n|\phi''(x_0)|}}.   (C.3)
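A familiar check of (C.3) is Stirling's formula (a sketch, not from the text): writing n! = ∫₀^∞ x^n e^{−x} dx = n^{n+1} ∫₀^∞ e^{n(ln u − u)} du, the exponent φ(u) = ln u − u has its maximum at u₀ = 1 with φ(u₀) = −1 and φ''(u₀) = −1, so (C.3) gives n! ≈ √(2πn) n^n e^{−n}:

```python
import numpy as np
from math import factorial

def laplace_factorial(n):
    """Saddle-point (Laplace) estimate of n! from (C.3) with phi(u) = ln(u) - u."""
    phi0, phi2 = -1.0, -1.0                       # phi(1) and phi''(1)
    return n**(n + 1)*np.exp(n*phi0)*np.sqrt(2.0*np.pi/(n*abs(phi2)))

for n in (5, 10, 20):
    exact, approx = factorial(n), laplace_factorial(n)
    print(n, exact, f"{approx:.6e}", f"relative error {abs(approx - exact)/exact:.2%}")
```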

The method can be extended to integrals in the complex plane

\int_C e^{n\phi(z)}\,dz = \int_C e^{n\,\Re\phi(z)}\,e^{in\,\Im\phi(z)}\,dz.   (C.4)

If the integration contour is deformed such that the imaginary part is constant, then the Laplace method is applicable and gives

\int_C e^{n\phi(z)}\,dz = e^{in\,\Im\phi(z_0)}\int_{C'} e^{n\,\Re\phi(z)}\,dz.   (C.5)

The contour C′ and the expansion point are determined from

\phi'(z_0) = 0,   (C.6)

\phi(z) = u(z)+iv(z) = u(z)+iv(z_0).   (C.7)



Now consider the imaginary part as a function in R². The gradient is

\nabla v(x,y) = \left(\frac{\partial v}{\partial x},\frac{\partial v}{\partial y}\right).   (C.8)

For an analytic function φ(x+iy), we have

u(x+dx+i(y+dy))+iv(x+dx+i(y+dy))-\bigl(u(x+iy)+iv(x+iy)\bigr) = \frac{\partial(u+iv)}{\partial x}dx+\frac{\partial(u+iv)}{\partial y}dy,   (C.9)

which can be written as

\frac{d\phi}{dz}\,dz = \frac{d\phi}{dz}(dx+i\,dy)   (C.10)

only if

i\,\frac{\partial(u+iv)}{\partial x} = \frac{\partial(u+iv)}{\partial y}   (C.11)

or

\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}.   (C.12)

Hence the gradient of the imaginary part

\nabla v(x,y) = \left(-\frac{\partial u}{\partial y},\frac{\partial u}{\partial x}\right)   (C.13)

is perpendicular to the gradient of the real part

\nabla u(x,y) = \left(\frac{\partial u}{\partial x},\frac{\partial u}{\partial y}\right),   (C.14)

which gives the direction of steepest descent. The method is known as saddle point method since a maximum of the real part always is a saddle point, which can be seen from the expansion

\phi(z) = \phi(z_0)+\frac{1}{2}\phi''(z_0)(z-z_0)^2+\cdots   (C.15)

We substitute

dz^2 = dx^2-dy^2+2i\,dx\,dy   (C.16)

and find for the real part

\Re\phi(z) = \Re\phi(z_0)+\frac{1}{2}\bigl[\Re\phi''(z_0)(dx^2-dy^2)-2\,\Im\phi''(z_0)\,dx\,dy\bigr] = \Re\phi(z_0)-\frac{1}{2}(dx,dy)\begin{pmatrix}-\Re\phi'' & \Im\phi''\\ \Im\phi'' & \Re\phi''\end{pmatrix}\begin{pmatrix}dx\\ dy\end{pmatrix}.   (C.17)

The eigenvalues of the matrix are

\pm\sqrt{(\Re\phi'')^2+(\Im\phi'')^2} = \pm|\phi''|.   (C.18)
Solutions

Problems of Chap. 1

1.1. Gaussian Polymer Model


(a) Δri = ri − ri−1 , i = 1, . . . , N,
 
1 27 3(Δri )2
P (Δri ) = 3 exp −
b 8π 3 2b2
is the normalized probability function with
 ∞  ∞
3
P (Δri )d Δri = 1, Δr2i P (Δri )d3 Δri = b2 .
−∞ −∞


N
(b) rN − r0 = Δri ,
i=1
 

N
P Δri = R
i=1

   
∞ ∞ 
N 
N
= d Δr1 · · ·
3 3
d ΔrN P (Δri )δ R− Δri
−∞ −∞ i=1 i=1

    N
∞ ∞
1 1 27
= d3 keikR d Δr1 · · ·
3 3
d ΔrN
(2π)3 −∞ −∞ b3 8π 3

  
3Δr2
× exp − 2i − ikΔri
2b

1
= d3 keikR
(2π)3
342 Solutions
/   0N

1 27 3Δr2
× d Δr exp − 2 − ikΔr
3
b3 8π 3 −∞ 2b

1 N b2 2
= d ke
3 ikR
exp − k
(2π)3 6
 
1 27 3R2
= 3 exp − .
b 8π 3 N 3 2N b2
   
1 f
(c) exp − Δri − κ
2
Δri
kB T 2
  
1 f
= exp − Δr2i − κΔri
i
kB T 2

f 3 3kB T
= 2 →f = .
2kB T 2b b2

(d) xN − x0 = , yN − y0 = zN − z0 = 0,
f
N κb2
L = xN − x0 = .
3kB T
1.2. Three-Dimensional Polymer Model
(a) N b 2 .
2 1 + x 2x(x2 − 1)
(b) b N + with x = cos θ
1−x (1 − x)2
1 + cos x
≈ N b2 .
1 − cos x
(1 + cos θ1 )(1 + cos θ2 )
(c) N b2 .
1 − cos θ1 cos θ2
(d) N
a/b with the coherence length
1 + cos x
a=b .
(1 − cos x) cos(x/2)

4 4 8
(e) N b 2
−1 −b 2
− 4 .
θ2 θ2 θ
1.3. Two-Component Model

1 M lα − L
(a) κ = −kB T ln
lα − lβ L − M lβ

1 1 lα − lβ lα − lβ
−kB T + + − .
2M lα − 2L 2M lβ − L 12(L − M lβ )2 12(M lα − L)2
Solutions 343

The exact solution can be written with the digamma function Ψ which is
well known by algebra programs as

1 L − M lβ M lα − L
κ = −kB T −Ψ +1 +Ψ .
lα − lβ lα − lβ lα − lβ
The error of the asymptotic expansion is largest for L ≈ M lα or L ≈
M lβ . The following table compares the relative errors of the Stirling’s
approximation and the higher-order asymptotic expansion for M = 1,000
and lβ /lα = 2:

L/lα Stirling Asympt. expansion


1,000.2 0.18 0.13
1,000.5 0.11 0.009
1,001 0.065 0.00094
1,005 0.019 2.5×10−6


M
(b) Z(κ, M, T ) = eκlα /kB T + eκlβ /kB T ,

lα eκlα /kB T + lβ eκlβ /kB T


L=M ,
eκlα /kB T + eκlβ /kB T
2
2 lα − lβ
L2 = L + M eκ(lα +lβ )/kB T ,
eκlα /kB T + eκlβ /kB T
2
2 κ(lα +lβ )/kB T lα − lβ
σ = Me ,
eκlα /kB T + eκlβ /kB T
σ 1
∼√ ,
L N
∂σ lα eκlα /kB T + lβ eκlβ /kB T
=0 for (lα + lβ ) = 2 ,
∂κ eκlα /kB T + eκlβ /kB T
hence for
κ=0
∂ 2 σ2 M
(κ = 0) = − (lα − lβ )2 < 0 → maximum
∂κ2 k(kB T )2
also a maximum of σ since the square root is monotonous.
344 Solutions

Problems of Chap. 2

2.1. Osmotic Pressure of a Polymer Solution



1
μα (P, T ) − μα (P, T ) = kB T ln(1 − φβ ) + 1 −
0 2
φβ + χφβ ,
M
∂μ0α (P, T )
μ0α (P  , T ) − μ0α (P, T ) = μα (P, T ) − μ0α (P, T ) = −Π
,
∂P
0 −1
∂μα (P, T ) 1
Π=− kB T ln(1 − φβ ) + 1 − φβ + χφ2β .
∂P M
For the pure solvent,
G
μ0α = ,

dG = −S dT + V dP + μ0α (P, T )dN,

∂μ0α  V
= ,
∂P T,Nα Nα

Nα 1 2 1 3 1
Π=− kB T −φβ − φβ − φβ + · · · + 1 − 2
φβ + χφβ
V 2 3 M

Nα kB T 1 1
= φβ + − χ φ2β + · · · ,
V M 2
χ0 T 0
χ= ,
T
high T :
1
− χ > 0, Π > 0 good solvent
2
low T:
Π < 0 bad solvent, possibly phase separation.
2.2. Polymer Mixture

φ1 φ2
ΔF = N kB T ln φ1 + ln φ2 + χφ1 φ2 ,
M1 M2
1
φ2,c =  ,
1+ M2 /M1

1  
1 1
χc = M1 + M2 √ + √ ,
2 M2 M1 M1 M2
Solutions 345

symmetric case
1
φc =
2
2
χc = can be small, demixing possible.
M

Problems of Chap. 4
4.1. Membrane Potential

W
ΦI = Beκx , ΦII = B 1 + κx , ΦIII = V − Be−κ(x−L) ,
M
V
B= ,
2 + (w /M )κL
Q/A = W κB per area A,
Q/A W κ 1
C/A = = = .
V 2 + (W /M )κL (2/w κ) + (L/M )
4.2. Ion Activity
a Z 2 e2 κ
kB T ln γ c = kB T ln =− ,
c 8π 1 + κR
1 Z 2 e2 κ
c
ln γ+ c
= ln γ− =− ,
kB T 8π 1 + κR

1 Z 2 e2 κ κ
c
ln γ± =− + ,
kB T 16π 1 + κR+ 1 + κR−
κ → 0 for dilute solution,
1 Z 2 e2 κ
c
ln γ± →− .
kB T 8π

Problems of Chap. 5
5.1. Abnormal Titration Curve
ΔG(B, B) = 0,
ΔG(BH+ , B) = ΔG1,int ,
ΔG(B, BH+ ) = ΔG2,int ,
ΔG(BH+ , BH+ ) = ΔG1,int + ΔG2,int + W1,2 ,
346 Solutions

Ξ = 1 + e−β(ΔG1,int−μ) + e−β(ΔG2,int−μ) + e−β(ΔG1,int+ΔG2,int −W +2μ) ,


e−β(ΔG1,int−μ) + e−β(ΔG1,int+ΔG2,int −W +2μ)
s1 = ,
Ξ
e−β(ΔG2,int−μ) + e−β(ΔG1,int+ΔG2,int −W +2μ)
s2 = .
Ξ

Problems of Chap. 6
6.1. pH Dependence of Enzyme Activity
r 1
= ,
rmax 1 + (1 + (cH+ /K))(KM /cS )
c H+ c S −
K= , cs = cS− + cHS .
cHS
6.2. Polymerization at the End of a Polymer
(KcM )i
ciM = KcM c(i−1)M = · · · = ,
K
∞
i(KcM )i 1
i = i=1
∞ = for KcM < 1.
i=1 (Kc M )i 1 − KcM
6.3. Primary Salt Effect
r = k1 cX ,
 
cX Z A Z B e2 κ
K= exp − ,
cA cB 4πkB T
 
Z A Z B e2 κ
cX = KcA cB exp .
4πkB T

Problems of Chap. 7
7.1. Smoluchowski Equation
P (t + Δt, x) = eΔt(∂/∂t) P (t, x),
w± (x ± Δx)P (t, x ± Δx) = e±Δx(∂/∂x) w± (x)P (t, x),
eΔt(∂/∂t) P (t, x) = eΔx(∂/∂x) w+ (x)P (t, x) + e−Δx(∂/∂x) w− (x)P (t, x),

P (t, x) + Δt P (t, x) + · · · = (w+ (x) + w− (x))P (x, t)
∂t
∂ Δx2 ∂ 2
+Δx (w+ (x)−w− (x))P (x, t)+ (w+ (x) + w− (x))P (x, t)+ · · · ,
∂x 2 ∂x2
Solutions 347

∂ Δx ∂ Δx2 ∂ 2
P (t, x) = (w+ (x) − w− (x))P (x, t) + P (x, t) + · · · ,
∂t Δt ∂x Δt ∂x2
Δx2 kB T +
D= , K(x) = − (w (x) − w− (x)).
Δt Δx
7.2. Eigenvalue Solution to the Smoluchowski Equation

kB T −U/kB T ∂ U/kB T
− e e W
mγ ∂x

kB T −U/kB T U/kB T ∂W 1 ∂U
=− e e +e U/kB T
W
mγ ∂x kB T ∂x
kB T ∂W 1 ∂U
=− − W = S,
mγ ∂x mγ ∂x

∂ ∂ kB T −U/kB T ∂ U/kB T
LFP W = − S = e e W ,
∂x ∂x mγ ∂x
kB T ∂ −U/kB T ∂ U/kB T
LFP = e e ,
mγ ∂x ∂x
kB T ∂ −U/kB T ∂ U/2kB T
L = eU/2kB T LFP e−U/2kB T = eU/2kB T e e ,
mγ ∂x ∂x

∂ ∂ kB T U/2kB T
LH = eU/2kB T − e−U/kB T − e = L,
∂x ∂x mγ
since
kB T
= constant.

For an eigenfunction ψ of L, we have
λψ = Lψ = eU/2kB T LFP e−U/2kB T ψ.
Hence

LFP e−U/2kB T ψ = λ e−U/2kB T ψ


gives an eigenfunction of the Fokker–Planck operator to the same eigenvalue
λ. A solution of the Smoluchowski equation is then given by
W (x, t) = eλt e−U/2kB T ψ(x).
The Hermitian operator is very similar to the harmonic oscillator in second
quantization

kB T ∂ −U/2kB T ∂ U/2kB T
L= eU/2kB T e e−U/2kB T e
mγ ∂x ∂x

kB T ∂ 1 ∂U ∂ 1 ∂U
= − +
mγ ∂x 2kB T ∂x ∂x 2kB T ∂x
348 Solutions

kB T ∂ mω 2 ∂ mω 2
= − x + x
mγ ∂x 2kB T ∂x 2kB T
⎛ % ⎞ ⎛ % ⎞
2 2 2
ω ⎝ kB T ∂ 1 mω ⎠ ⎝ kB T ∂ 1 mω ⎠
=− − x + x
γ mω 2 ∂x 2 kB T mω 2 ∂x 2 kB T

ω2 ∂ 1 ∂ 1 ω2 +
=− − ξ + ξ =− b b
γ ∂ξ 2 ∂ξ 2 γ
with Boson operators
b+ b − bb+ = 1.
From comparison with the harmonic oscillator, we know the eigenvalues
ω2
λn = − n, n = 0, 1, 2, . . .
γ
The ground state obeys

∂ 1 ∂U
aψ0 = + ψ0 = 0
∂x 2kB T ∂x
with the solution
ψ0 = e−U(x)/2kB T .
This corresponds to the stationary solution of the Smoluchowski equation:
%
mω 2 −U (x)/kB T
W = e .
2πkB T

7.3. Diffusion Through a Membrane


kAB = kA + kB ,

dN 
M
dPN 
M 
N
0= = N =− kAB M N PN + (kAB − 2km )N 2 PN
dt dt
N =1 N =1 N =1


M 
M
+ kAB M N PN −1 − kAB (N − 1)N PN −1
N =2 N =2


M−1
+ 2km N (N + 1)PN +1
N =1

≈ −kAB M N + (kAB − 2km )N 2 + kAB M (1 + N ) − kAB (N 2 + N )


+2km (N 2 − N )
Solutions 349

= kAB M − kAB N − 2km N ,


kA + kB
N =M ,
kA + kB + 2km

dN 2  dPN 
M 
N
0= = N2 =− kAB M N 2 PN + (kAB − 2km )N 3 PN
dt dt
N =1 N =1


M 
M
+ kAB M N PN −1 −
2
kAB (N − 1)N 2 PN −1
N =2 N =2


M−1
+ 2km N 2 (N + 1)PN +1
N =1

≈ −kAB M N 2 + (kAB − 2km )N 3 + kAB M (N 2 + 2N + 1)


−kAB (N 3 + 2N 2 + N ) + 2km (N 3 − 2N 2 + N )
= kAB M (2N + 1) − kAB (2N 2 + N) + 2km (−2N 2 + N )
= kAB M + N (2kAB M − kAB + 2km ) − N 2 (2kAB + 4km )

kAB M + (2kAB M − kAB + 2km )M kABk+2k


AB
m
N2 =
2kAB + 4km
2
2kAB km kAB
= 2
M+ M 2,
(kAB + 2km ) (kAB + 2km )2
and the variance is
2 2km kAB 2km 2
N2 − N = 2
M= N .
(kAB + 2km ) kAB M
The diffusion current from A → B is
dNA dNB 
J= − = (−kA (M − N )PN + km N PN )
dt dt
N

− (−kB (M − N )PN + km N PN )
N

= (kB − kA )(M − N )PN = (kB − kA )(M − N ).
N
350 Solutions

Problems of Chap. 9
9.1. Dichotomous Model
λ1 = 0,
⎛ ⎞
0
⎜0⎟ (L1 P0 ) 1
L1 = ( 1 1 1 1), R1 = ⎜ ⎟
⎝β ⎠, = ,
(L1 R1 ) α+β
α
λ2 = −(α + β),
⎛ ⎞
0
⎜ 0 ⎟ (L2 P0 )
L2 = ( α −β α −β), R2 = ⎜ ⎟
⎝ −1 ⎠ , = 0,
(L2 R2 )
1
k− + k+ + α + β
λ3,4 = −
2
1 
± (α + β)2 + (k+ − k− )2 + 2(β − α)(k− − k+ ),
2
Fast fluctuations:
α β
λ3 = − k− − k+ + O(k 2 ),
α+β α+β
⎛ ⎞
β
⎜ α ⎟
L3 ≈ (1, 1, 0, 0), R3 ≈ ⎜ ⎟
⎝ −β ⎠ ,
−α

(L3 P0 ) 1
≈ ,
(L3 R3 ) α+β
α β
λ4 = −(α + β) − k+ − k− + O(k 2 ),
α+β α+β
⎛ ⎞
1
⎜ −1 ⎟ (L4 P0 )
L4 ≈ (−α, β, 0, 0), R4 ≈ ⎜ ⎟
⎝ 1 ⎠ , (L4 R1 ) ≈ 0,
−1
⎛ ⎞ ⎛ β

0 α+β
⎜ 0 ⎟ ⎜ α+β ⎜ α ⎟
⎟ λ3 t
P(t) ≈ ⎜
⎝ α+β
β ⎠ ⎟ + ⎜ β ⎟e → P (D∗ ) = eλ3 t .
⎝ − α+β ⎠
α
α+β − α+β
α
Solutions 351

Slow fluctuations:
λ3 ≈ −k+ − α,
⎛ ⎞
k+ − k−
⎜ −α ⎟
L3 ≈ (k+ − k− , −β, 0, 0), R3 ≈ ⎜ ⎟
⎝ −(k+ − k− ) ⎠ ,
α
L3 P0 β 1
≈ ,
L3 R3 α + β k+ − k−
λ4 ≈ −k− − β,
⎛ ⎞
β
⎜ k+ − k− ⎟ L4 P0 α 1
L4 ≈ (α, k+ − k− , 0, 0), R4 ≈ ⎜⎝
⎟,
⎠ L4 R4 ≈ α + β k+ −k− ,
−β
−(k+ − k− )
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
0 1 0
⎜ 0 ⎟ β ⎜ ⎟ ⎜ ⎟
P(t) ≈ ⎜ β ⎟ ⎜ 0 ⎟ −(k+ +α)t + α ⎜ 1 ⎟ e−(k− +β)t ,
⎝ α+β ⎠ + α + β ⎝ −1 ⎠ e α+β ⎝ 0 ⎠
α
α+β
0 −1

β α −(k− +β)t
P (D∗ ) ≈ e−(k+ +α)t + e .
α+β α+β

Problems of Chap. 10
10.1. Entropy Production

0 = dH = T dS + V dp + μk dNk ,
k
  
T dS = − μk dNk = − μk νkj dξj = Aj dξj ,
k j k j

dS  Aj
= rj .
dt j
T

Problems of Chap. 11
11.1. ATP Synthesis
At chemical equilibrium,

0=A=− νk μk
352 Solutions

= μ0 (ADP) + kB T ln c(ADP) + μ0 (POH) + kB T ln c(POH)


+
+2kB T ln c(Hout ) + 2eΦout
−μ0 (ATP) − kB T ln c(ATP) − μ0 (H2 O) − kB T ln c(H2 O)
−2kB T ln c(Hin
+
) − 2eΦin ,
kB T ln K = −ΔG0 = μ0 (ADP) − μ0 (ATP) + μ0 (POH) − μ0 (H2 O)
+
c(ATP)c(H2 O)c2 (Hin )
= kB T ln + + 2e(Φin − Φout )
c(ADP)c(POH)c2 (Hout )
+
c(ATP)c(H2 O) c(Hin )
= kB T ln + 2kB T ln + + 2e(Φin − Φout ).
c(ADP)c(POH) c(Hout )

Problems of Chap. 15
15.1. Transition State Theory
K‡ k‡

A+B  [AB] → P,

k = k ‡ c[AB]‡ = k ‡ K ‡ cA cB ,

c[AB]‡ q[AB]‡ −ΔH ‡ /kB T q[AB] ‡ ‡

K = = e = qx e−ΔH /kB T ,
cA cB qA qB qA qB

2πmkB T
qx = δx,
h

v‡ 1 kB T
k‡ = = ,
δx δx 2πm
√ ‡
1 kB T 2πmkB T q[AB]‡ −ΔH ‡ /kB T
k= δx e cA cB
δx 2πm h qA qB

kB T q[AB]‡ −ΔH ‡ /kB T
= e cA cB .
h qA qB
15.2. Harmonic Transition State Theory
 ∞ −mω2 x2 /2k T
‡ kB T −∞ e B
δ(x − x‡ )
k = vδ(x − x ) = ∞
2πm −∞
e−mω x /2kB T
2 2

2 ‡2
kB T e−mω x /2kB T
= 
2πm 2πkB T /mω 2
ω −ΔE/kB T
= e .

Solutions 353

Problems of Chap. 16

16.1. Marcus Cross Relation


A+A− → A− +A, λA = 2ΔE(Aeq → A−
eq ),

D+D+ → D+ +D, λD = 2ΔE(Deq → Deq


+
),
A+D → A− +D+ , λAD = ΔE(Aeq → A−
eq ) + ΔE(Deq → Deq ) =
+ λA +λD
2 ,

ωA −λA /4kB T
kA = e ,

ωB −λB /4kB T
kB = e ,

KAD = e−ΔG/kB T ,
 
ωAD (λAD + ΔG)2
kAD = exp −
2π 4λAD kB T
 
ωAD λA + λD ΔG ΔG2
= exp − − −
2π 8kB T 2kB T 4λAD kB T
%  
 2
ωAD ΔG2
= kA kB KAD exp − .
ωA ωD 4λAD kB T

Problems of Chap. 18
18.1. Absorption Spectrum
    −βH 
1 e
α= dt eiωt i| e−iωi t μf eiωf t f μi
2π Q
i,f

1
= dt eiωt e−iHt/ μeiHt/ μ
2π

1
= dt eiωt μ(0)μ(t)
2π

|μeg |2
≈ dt eiωt e−iHg t/ eiHe t/ g .
2π
354 Solutions

Problems of Chap. 20

20.1. Motional Narrowing


(s + iω1 )(s + iω2 ) + (α + β)(s + iω)

Δω Δω
=− Ω+ Ω− − iωc Ω,
2 2
Ω = ω − ω,
Δω 2
Ω2 − + iωc Ω = 0,
4
2
iωc Δω 2 ω2
Ω+ = − c.
2 4 4
For ωc Δω, the poles are approximately at
iωc Δω
Ωp = − ±
2 2
and two lines are observed centered at the unperturbed frequencies ω ± Δω/2
and with their width determined by ωc . For ωc = Δω, the two poles coincide at
iωc
Ωp = −
2
and a single line at the average frequency ω appears. For ωc
Δω, one pole
approaches zero according to
Δω 2
Ωp = −i ,
4ωc
which corresponds to a sharp line at the average frequency ω. The other pole
approaches infinity as
Ωp = −iωc .
It contributes a broad line at ω which vanishes in the limit of large ωc
(Fig. 32.1).

Problems of Chap. 21

21.1. Crude Adiabatic Model



∂C −s c ∂ζ † ∂C 0 1 ∂ζ
= , C = ,
∂Q −c −s ∂Q ∂Q −1 0 ∂Q
2 2
∂2C −s c ∂ ζ ∂ζ
= −C ,
∂Q2 −c −s ∂Q2 ∂Q
Solutions 355

Im

−Δω/2 Δω/2
Re

Fig. 32.1. Poles of the lineshape function

2
∂2C 0 1 ∂2ζ ∂ζ
C† = − ,
∂Q2 −1 0 ∂Q2 ∂Q

∂2 2
†∂ C † ∂C ∂ ∂2
dr C † Φ† ΦC = C + 2C +
∂Q2 ∂Q2 ∂Q ∂Q ∂Q2
2
∂2 0 1 ∂2ζ ∂ζ 0 1 ∂ζ ∂
= + − + 2 ,
∂Q2 −1 0 ∂Q2 ∂Q −1 0 ∂Q ∂Q

dr C † Φ† (Tel + V0 + ΔV )ΦC

  
† † † † E(Q) − ΔE(Q)
2 V (Q)
= C EC + C dr Φ ΔV Φ, C=C C,
V (Q) E(Q) + ΔE(Q)
2

dr C † Φ† H Φ C
  2
2 ∂ 2 E(Q) − ΔE(Q) V (Q) 2 ∂ζ
=− + C† 2 C+
2m ∂Q2 V (Q) E(Q)+ ΔE(Q)
2
2m ∂Q

2 0 1 ∂2ζ ∂ζ ∂
− +2 ,
2m −1 0 ∂Q2 ∂Q ∂Q
V (Q)
cos ζ sin ζ =  ,
4V (Q)2 + ΔE(Q)2
ΔE(Q)
cos ζ 2 − sin ζ 2 =  ,
4V (Q)2 + ΔE(Q)2
∂ ∂ζ 2V ΔE ∂ζ
(cs)2 = 2cs(c2 − s2 ) = ,
∂Q ∂Q 4V 2 + ΔE 2 ∂Q
356 Solutions

∂ ∂ V2
(cs)2 =
∂Q ∂Q 4V 2 + ΔE 2

2V ∂V V2 ∂ΔE ∂V
= − 2ΔE + 8V ,
4V 2 + ΔE 2 ∂Q (4V 2 + ΔE 2 )2 ∂Q ∂Q
∂ζ ΔE ∂V V ∂ΔE 1 ∂V
= − ≈ ,
∂Q 4V 2 + ΔE 2 ∂Q 4V 2 + ΔE 2 ∂Q ΔE ∂Q
2 2
∂2ζ ΔE ∂∂QV2 − V ∂∂QΔE
2 ∂V
∂ΔE
∂ΔE 2ΔE ∂Q + 8V ∂Q
∂V
= − ΔE − V
∂Q2 4V 2 + ΔE 2 ∂Q ∂Q (4V 2 + Δe2 )2
1 ∂2V 2 ∂ΔE ∂V
≈ 2
− ,
ΔE ∂Q ΔE 2 ∂Q ∂Q

 ≈ −  ∂ + E − 4V + ΔE
2 2 2 2
H √
2m ∂Q2 E + 4V 2 + ΔE 2
2
2 1 ∂V
+
2m ΔE 2 ∂Q

2 0 1 1 ∂2V 2 ∂ΔE ∂V 2 ∂V ∂
− − + .
2m −1 0 ΔE ∂Q2 ΔE 2 ∂Q ∂Q ΔE ∂Q ∂Q

Problems of Chap. 22
22.1. Ladder Model

n
iĊ0 = V Cj ,
j=1

iĊj = Ej Cj + V C0 ,

Cj = uj e(Ej /i)t ,

iu̇j e(Ej /i)t = V C0 ,


 t
V 
uj = e−(Ej /i)t C0 (t )dt ,
i 0
 t
V 
Cj = ei(Ej /)(t−t ) C0 (t )dt ,
i 0
Ej = α + j ∗ Δω,

V  V 2  t i(jΔω+α/)(t−t )
n
Ċ0 = Cj = − 2 e C0 (t )dt ,
i j=1  0
Solutions 357
α
ω = jΔω + ,


  ∞
  dω 2π
ei(jΔω+α/)(t−t ) Δj → eiω(t−t ) = δ(t − t ),
j=−∞ −∞ Δω Δω

2πV 2 2πV 2
Ċ0 = − C0 = − ρ(E)C0 ,
Δω 
1 1
ρ(E) = = .
Δω ΔE

Problems of Chap. 23
23.1. Hückel Model with Alternating Bonds

(a) αeikn + βei(kn+χ) + β  ei(kn+k+χ) = eikn α + βeiχ + β  ei(k+χ) ,


αei(kn+χ) + β  ei(kn−k) + βeikn = ei(kn+χ) α + β  e−i(k+χ) + βe−iχ .

(b) βeiχ + β  ei(k+χ) = β  e−i(k+χ) + βe−iχ ,



2
β  e−ik + β β  −ik/2
e + βeik/2 β  e−ik/2 + βeik/2
e2iχ =  ik = e−ik  ik/2 = e−ik 2 ,
β e +β βe + βe−ik/2 β + β 2 + 2ββ  cos k

β  e−ik/2 + βeik/2
eiχ = ±e−ik/2  ,
β 2 + β 2 + 2ββ  cos k
ββ  e−ik + β 2 + β 2 + ββ  eik
λ = α + βeiχ + β  ei(k+χ) = α ± 
β 2 + β 2 + 2ββ  cos k

=α± β 2 + β 2 + 2ββ  cos k.
 

β  e−ik/2 + βeik/2
−ik/2
(c) 0= e iχ+i(N +1)k
=  ±e  e i(N +1)k
β 2 + β 2 + 2ββ  cos k
 
β  eiN k + βei(N +1)k
= ±  ,
β 2 + β 2 + 2ββ  cos k
0 = β  sin(N k) + β sin(N + 1)k.
(d) For a linear polyene with 2N − 1 carbon atoms, use again eigenfunctions
c2n = sin(kn) = (eikn ),
c2n−1 = sin(kn + χ) = (ei(kn+χ) ),
and chose the k-values such that
(eiN k ) = sin(N k) = 0.
358 Solutions

Problems of Chap. 25
25.1. Special Pair Dimer

c −s −Δ/2 V c s
s c V Δ/2 −s c
is diagonalized if
(c2 − s2 )V = csΔ
or with c = cos χ, s = sin χ
2V
tan(2χ) = ,
Δ
1
c2 − s2 = cos 2χ =  ≥ 0,
1 + (4V 2 /Δ2 )
1
2cs = sin(2χ) =  ≥ 0.
1 + (Δ2 /4V 2 )
The eigenvalues are (Fig. 32.2)

Δ 2
E± = ± (c − s ) + 2csV
2
2
⎛ ⎞
Δ 1 1 
= ±⎝ 1 +V 1 ⎠ = ±1 Δ2 + 4V 2 .
2 1 + 4V 2 1+ Δ2 2
Δ2 4V 2

The transition dipoles are


μ+ = sμa + cμb ,
μ− = cμa − sμb ,

6.0

4.0
relative energies E / 2V

2.0

0.0

–2.0

–4.0

–6.0
0 2 4 6 8 10
disorder Δ /2V

Fig. 32.2. Energy splitting of the two dimer bands


Solutions 359

2.0

relative intensities
1.0

0.0
0 2 4 6 8 10
disorder Δ/2V

Fig. 32.3. Intensities of the two dimer bands

and the intensities (for |μa | = |μb | = μ)


 
cos α
|μ± |2 = μ2 (1 ± 2cs cos α) = μ2 1± 
1 + (Δ2 /4V 2 )
with (Fig. 32.3)
cos α = −0.755.

25.2. LH2
1  −ikn
|n; α = e |k; α,
3
k


9 
9
1  −i(k−k )n
Eα |n; αn; α| = Eα e |k; αk  ; α|
n=1 n=1
9 
k,k

= δk,k Eα |k; αk; α|,


9 
9
1  −i(k−k )n
Eα |n; βn; β| Eα e |k; βk  ; β|
n=1 n=1
9 
k,k

= δk,k Eα |k; βk; β|,


9 
9
1  −i(k−k )n
Vdim |n; αn; β| = Vdim e |k; αk  ; β|
n=1 n=1
9 
k,k

= δk,k Vdim |k; αk; β|,


360 Solutions


9 
9
1  −i(k−k )n
Vβα,1 |n; αn − 1; β| = Vβα,1 e−ik e |k; αk  ; β|
n=1 n=1
9 
k,k

= δk,k Vβα,1 e−ik |k; αk; β|,


9 
9
1  −i(k−k )n
Vβα,1 |n; βn + 1; α| = Vβα,1 eik e |k; βk  ; α|
n=1 n=1
9 
k,k

= δk,k Vβα,1 eik |k; βk; α|,


9
Vαα,1 (|n; αn + 1; α| + h.c.)
n=1


9
1  −i(k−k )n
= Vαα,1 eik e |k; αk  ; α| + h.c.
n=1
9 
k,k

= δk,k 2Vαα,1 cos k|k; αk; α|,


Hαα (k) = Eα + 2Vαα1 cos k,
Hββ (k) = Eβ + 2Vββ1 cos k,

Hαβ (k) = Vdim + e−ik Vβα,1 ,


Hβα (k) = Vdim + eik Vβα,1 ,

Eα + 2Vαα1 cos k V + e−ik W
H(k) =
V + eik W Eβ + 2Vββ1 cos k

−Δk /2 V + e−ik W
= Ek + .
V + eik W Δk /2
Perform a canonical transformation with

c −se−iχ
S= .
seiχ c
S † H S becomes diagonal if
c2 (V + eik W ) − s2 e2iχ (V + e−ik W ) + cseiχ Δk = 0.
Chose χ such that
V + eik W = sign V |V + eik W |eiχ = U (k)eiχ
and solve
(c2 − s2 )U + csΔk = 0
Solutions 361

k=0

Eα+Eβ
2

k=0

Fig. 32.4. Energy levels of LH2

by1

U 1 |U/Δ|
c2 − s2 = −sign  , cs =  .
Δ 1 + (4U 2 /Δ2 ) 1 + (4U 2 /Δ2 )
The eigenvalues are (Fig. 32.4)
1 2
E± (k) = E k ± sign V Δ + 4U 2
2
Eα + Eβ
= + (Vαα1 + Vββ1 ) cos k
2
% 2
Eα − Eβ + 2(Vαα1 − Vββ1 ) cos k
± sign V + V 2 + W 2 + 2V W cos k
2

1  ikn 1  ikn
μk,+ = c e μnα + seiχ e μnβ
3 n 3 n

1  ikn n 1  ikn n
=c e S9 μ0α + seiχ e S9 Rz (2ν + φβ − φα )μ0,α
3 n 3 n
⎛ ⎞
9 n − sin 9 n
cos 2π 2π
1  ikn ⎜ ⎟
= e ⎝ sin 2π n cos 2π n ⎠
3 n 9 9
1
⎛ ⎛ ⎛⎞⎞ ⎞
cos  − sin  sin θ
× ⎝c + seiχ ⎝ sin  cos  ⎠⎠ μ0 ⎝ 0 ⎠ .
1 cos θ


1
The sign is chosen such that for |Δ| |U |, the solution becomes c = s = 1/ 2.
362 Solutions

The sum over n again gives the selection rules


k = 0 z-polarization,

k=± circular xy-polarization.
9
The second factor gives for z-polarization
μ = 3μ0 (c + seiχ ) cos θ,
|μ|2 = 9μ20 (1 + 2cs cos χ) cos2 θ,
with
cos2 θ ≈ 0.008
and for polarization in the xy-plane

c + seiχ cos 
μ = 3μ0 sin θ ,
seiχ sin 

|μ|2 = 9μ20 sin2 θ(1 + 2 cos  cs cos χ),


with
sin2 θ ≈ 0.99, cos  ≈ −0.952.
The intensities of the (k, −) states are
|μz |2 = 9μ20 (1 − 2cs cos χ) cos2 θ,

|μ⊥ |2 = 9μ20 sin2 θ(1 − 2 cos  cs cos χ).

25.3. Exchange Narrowing


 
δEn
P (δEk = X) = dδE1 dδE2 · · · P (δE1 )P (δE2 ) · · · δ X − ,
N
  
1 1
dt δE1 dδE2 · · · √ e−δE1 /Δ · · · eit(X− δEn /N )
2 2
P (δEk = X) =
2π Δ π
  N
1 1 −δE12 /Δ2 −itδE1 /N
= dt e itX
√ dt δE1 e
2π Δ π

1
dt eitX e−Δ t /4N
2 2
=


N −X 2 N/Δ2
= √ e . (32.19)
πΔ
Solutions 363

Problems of Chap. 28

28.1. Deviation from Equilibrium


α2
Ω = e−ΔU/kB T (1 − eΔμ/kB T ) ,
α2 + β2

(Δμ0 −ΔU (x))/kB T α2 (x) C(ATP)
Ω(x) = e Keq − .
α2 (x) + β2 (x) C(ADP)C(P)
For Δμ = 0 but Δμ kB T , Ω becomes a linear function of Δμ
α2 (x) Δμ
Ω(x) → −e−ΔU (x)/kB T ,
α2 (x) + β2 (x) kB T
whereas in the opposite limit Δμ
kB T it becomes proportional to the
concentration ratio:
α2 (x) C(ATP)
Ω(x) → −e−ΔU (x)/kB T
0
eΔμ /kB T .
α2 (x) + β2 (x) C(ADP)C(P)
References

1. T.L. Hill, An Introduction to Statistical Thermodynamics (Dover, New York,


1986)
2. S. Cocco, J.F. Marko, R. Monasson, arXiv:cond-mat/0206238v1
3. C. Storm, P.C. Nelson, Phys. Rev. E 67, 51906 (2003)
4. K.K. Mueller-Niedebock, H.L. Frisch, Polymer 44, 3151 (2003)
5. C. Leubner, Eur. J. Phys. 6, 299 (1985)
6. P.J. Flory, J. Chem. Phys. 10, 51 (1942)
7. M.L. Huggins, J. Phys. Chem. 46, 151 (1942)
8. M. Feig, C.L. Brooks III, Curr. Opin. Struct. Biol. 14, 217 (2004)
9. B. Roux, T. Simonson, Biophys. Chem. 78, 1 (1999)
10. A. Onufriev, Annu. Rep. Comput. Chem. 4, 125 (2008)
11. M. Born, Z. Phys. 1, 45 (1920)
12. D. Bashford, D. Case, Annu. Rev. Phys. Chem. 51, 29 (2000)
13. W.C. Still, A. Tempczyk, R.C. Hawley, T. Hendrickson, JACS 112, 6127 (1990)
14. G.D. Hawkins, C.J. Cramer, D.G. Truhlar, Chem. Phys. Lett. 246, 122 (1995)
15. M. Schaefer, M. Karplus, J. Phys. Chem. 100, 1578 (1996)
16. I.M. Wonpil, M.S. Lee, C.L. Brooks III, J. Comput. Chem. 24, 1691 (2003)
17. F. Fogolari, A. Brigo, H. Molinari, J. Mol. Recognit. 15, 377 (2002)
18. A.I. Shestakov, J.L. Milovich, A. Noy, J. Colloid Interface Sci. 247, 62 (2002)
19. B. Lu, D. Zhang, J.A. McCammon, J. Chem. Phys. 122, 214102 (2005)
20. P. Koehl, Curr. Opin. Struct. Biol. 16, 142 (2006)
21. P. Debye, E. Hueckel, Phys. Z. 24, 305 (1923)
22. G. Goüy, Comt. Rend. 149, 654 (1909)
23. G. Goüy, J. Phys. 9, 457 (1910)
24. D.L. Chapman, Philos. Mag. 25, 475 (1913)
25. O. Stern, Z. Elektrochem. 30, 508 (1924)
26. A.-S. Yang, M. Gunner, R. Sampogna, K. Sharp, B. Honig, Proteins 15, 252
(1993)
27. P.W. Atkins, Physical Chemistry (Freeman, New York, 2006)
28. W.J. Moore, Basic Physical Chemistry (Prentice-Hall, New York, 1983)
29. M. Schaefer, M. Sommer, M. Karplus, J. Phys. Chem. B 101, 1663 (1997)
30. R.A. Raupp-Kossmann, C. Scharnagl, Chem. Phys. Lett. 336, 177 (2001)
31. H. Risken, The Fokker–Planck Equation (Springer, Berlin, 1989)
32. H.A. Kramers, Physica 7, 284 (1941)

33. P.O.J. Scherer, Chem. Phys. Lett. 214, 149 (1993)


34. E.W. Montroll, H. Scher, J. Stat. Phys. 9, 101 (1973)
35. E.W. Montroll, G.H. Weiss, J. Math. Phys. 6, 167 (1965)
36. A.A. Zharikov, P.O.J. Scherer, S.F. Fischer, J. Phys. Chem. 98, 3424 (1994)
37. A.A. Zharikov, S.F. Fischer, Chem. Phys. Lett. 249, 459 (1995)
38. S.R. de Groot, P. Mazur, Irreversible Thermodynamics (Dover, New York,
1984)
39. D.G. Miller, Faraday Discuss. Chem. Soc. 64, 295 (1977)
40. R. Paterson, Faraday Discuss. Chem. Soc. 64, 304 (1977)
41. D.E. Goldman, J. Gen. Physiol. 27, 37 (1943)
42. A.L. Hodgkin, A.F. Huxley, J. Physiol. 117, 500 (1952)
43. J. Kenyon, How to solve and program the Hodgkin–Huxley equa-
tions (http://134.197.54.225/department/Faculty/kenyon/Hodgkin&Huxley/
pdfs/HH.Program.pdf)
44. J. Vreeken, A friendly introduction to reaction–diffusion systems, Internship
Paper, AILab, Zurich, July 2002
45. E. Pollak, P. Talkner, Chaos 15, 026116 (2005)
46. F.T. Gucker, R.L. Seifert, Physical Chemistry (W.W. Norton, New York, 1966)
47. S. Glasstone, K.J. Laidler, H. Eyring, The Theory of Rate Processes (McGraw-
Hill, New York, 1941)
48. K.J. Laidler, Chemical Kinetics, 3rd edn. (Harper & Row, New York, 1987)
49. G.A. Natanson, J. Chem. Phys. 94, 7875 (1991)
50. R.A. Marcus, Annu. Rev. Phys. Chem. 15, 155 (1964)
51. R.A. Marcus, N. Sutin, Biochim. Biophys. Acta 811, 265 (1985)
52. R.A. Marcus, Angew. Chem. Int. Ed. Engl. 32, 1111 (1993)
53. A.M. Kuznetsov, J. Ulstrup, Electron Transfer in Chemistry and Biology
(Wiley, Chichester, 1998), p. 49
54. P.W. Anderson, J. Phys. Soc. Jpn. 9, 316 (1954)
55. R. Kubo, J. Phys. Soc. Jpn. 6, 935 (1954)
56. R. Kubo, in Fluctuations, Relaxations and Resonance in Magnetic Systems,
ed. by D. ter Haar (Plenum, New York, 1962)
57. T.G. Heil, A. Dalgarno, J. Phys. B 12, 557 (1979)
58. L.D. Landau, Phys. Z. Sowjetun. 1, 88 (1932)
59. C. Zener, Proc. R. Soc. Lond. A 137, 696 (1932)
60. E.S. Kryachko, A.J.C. Varandas, Int. J. Quant. Chem. 89, 255 (2001)
61. E.N. Economou, Green’s Functions in Quantum Physics (Springer, Berlin,
1978)
62. H. Scheer, W.A. Svec, B.T. Cope, M.H. Studler, R.G. Scott, J.J. Katz, JACS
29, 3714 (1974)
63. A. Streitwieser, Molecular Orbital Theory for Organic Chemists (Wiley, New
York, 1961)
64. J.E. Lennard-Jones, Proc. R. Soc. Lond. A 158, 280 (1937)
65. B.S. Hudson, B.E. Kohler, K. Schulten, in Excited States, ed. by E.C. Lin
(Academic, New York, 1982), pp. 1–95
66. B.E. Kohler, C. Spangler, C. Westerfield, J. Chem. Phys. 89, 5422 (1988)
67. T. Polivka, J.L. Herek, D. Zigmantas, H.-E. Akerlund, V. Sundstrom, Proc.
Natl. Acad. Sci. USA 96, 4914 (1999)
68. B. Hudson, B. Kohler, Synth. Met. 9, 241 (1984)
69. B.E. Kohler, J. Chem. Phys. 93, 5838 (1990)

70. W.T. Simpson, J. Chem. Phys. 17, 1218 (1949)


71. H. Kuhn, J. Chem. Phys. 17, 1198 (1949)
72. M. Gouterman, J. Mol. Spectrosc. 6, 138 (1961)
73. M. Gouterman, J. Chem. Phys. 30, 1139 (1959)
74. M. Gouterman, G.H. Wagniere, L.C. Snyder, J. Mol. Spectrosc. 11, 108 (1963)
75. C. Weiss, The Porphyrins, vol. III (Academic, New York, 1978), p. 211
76. D. Spangler, G.M. Maggiora, L.L. Shipman, R.E. Christofferson, J. Am. Chem.
Soc. 99, 7470 (1977)
77. B.R. Green, D.G. Durnford, Annu. Rev. Plant Physiol. Plant Mol. Biol. 47,
685 (1996)
78. H.A. Frank et al., Pure Appl. Chem. 69, 2117 (1997)
79. R.J. Cogdell et al., Pure Appl. Chem. 66, 1041 (1994)
80. H. van Amerongen, R. van Grondelle, J. Phys. Chem. B 105, 604 (2001)
81. T. Förster, Ann. Phys. 2, 55 (1948)
82. T. Förster, Disc. Faraday Trans. 27, 7 (1965)
83. D.L. Dexter, J. Chem. Phys. 21, 836 (1953)
84. F.J. Kleima, M. Wendling, E. Hofmann, E.J.G. Peterman, R. van Grondelle,
H. van Amerongen, Biochemistry 39, 5184 (2000)
85. T. Pullerits, M. Chachisvilis, V. Sundström, J. Phys. Chem. 100, 10787 (1996)
86. R.C. Hilborn, Am. J. Phys. 50, 982 (1982), revised 2002
87. S.H. Lin, Proc. R. Soc. Lond. A 335, 51 (1973)
88. S.H. Lin, W.Z. Xiao, W. Dietz, Phys. Rev. E 47, 3698 (1993)
89. MOLEKEL 4.0, P. Fluekiger, H.P. Luethi, S. Portmann, J. Weber, Swiss
National Supercomputing Centre CSCS, Manno Switzerland, 2000
90. J. Deisenhofer, H. Michel, Science 245, 1463 (1989)
91. J. Deisenhofer, O. Epp, K. Miki, R. Huber, H. Michel, Nature 318, 618 (1985)
92. J. Deisenhofer, O. Epp, K. Miki, R. Huber, H. Michel, J. Mol. Biol. 180, 385
(1984)
93. H. Michel, J. Mol. Biol. 158, 567 (1982)
94. E.W. Knapp, P.O.J. Scherer, S.F. Fischer, BBA 852, 295 (1986)
95. P.O.J. Scherer, S.F. Fischer, in Chlorophylls, ed. by H. Scheer (CRC, Boca
Raton, 1991), pp. 1079–1093
96. R.J. Cogdell, A. Gall, J. Koehler, Q. Rev. Biophys. 39, 227 (2006)
97. M. Ketelaars et al., Biophys. J. 80, 1591 (2001)
98. M. Matsushita et al., Biophys. J. 80, 1604 (2001)
99. K. Sauer, R.J. Cogdell, S.M. Prince, A. Freer, N.W. Isaacs, H. Scheer,
Photochem. Photobiol. 64, 564 (1996)
100. A. Freer, S. Prince, K. Sauer, M. Papitz, A. Hawthorntwaite-Lawless, G.
McDermott, R. Cogdell, N.W. Isaacs, Structure 4, 449 (1996)
101. M.Z. Papiz, S.M. Prince, A. Hawthorntwaite-Lawless, G. McDermott, A. Freer,
N.W. Isaacs, R.J. Cogdell, Trends Plant Sci. 1, 198 (1996)
102. G. McDermott, S.M. Prince, A. Freer, A. Hawthorntwaite-Lawless, M. Papitz,
R. Cogdell, Nature 374, 517 (1995)
103. N.W. Isaacs, R.J. Cogdell, A. Freer, S.M. Prince, Curr. Opin. Struct. Biol. 5,
794 (1995)
104. E.E. Abola, F.C. Bernstein, S.H. Bryant, T.F. Koetzle, J. Weng, in Crys-
tallographic Databases – Information Content, Software Systems, Scientific
Applications, ed. by F.H. Allen, G. Bergerhoff, R. Sievers (Data Commission
of the International Union of Crystallography, Cambridge, 1987), p. 107

105. F.C. Bernstein, T.F. Koetzle, G.J.B. Williams, E.F. Meyer Jr., M.D. Brice,
J.R. Rodgers, O. Kennard, T. Shimanouchi, M. Tasumi, J. Mol. Biol. 112, 535
(1977)
106. R. Sayle, E.J. Milner-White, Trends Biochem. Sci. 20, 374 (1995)
107. Y. Zhao, M.-F. Ng, G.H. Chen, Phys. Rev. E 69, 032902 (2004)
108. A.M. van Oijen, M. Ketelaars, J. Köhler, T.J. Aartsma, J. Schmidt, Science
285, 400 (1999)
109. C. Hofmann, T.J. Aartsma, J. Köhler, Chem. Phys. Lett. 395, 373 (2004)
110. S.E. Dempster, S. Jang, R.J. Silbey, J. Chem. Phys. 114, 10015 (2001)
111. S. Jang, R.J. Silbey, J. Chem. Phys. 118, 9324 (2003)
112. K. Mukai, S. Abe, Chem. Phys. Lett. 336, 445 (2001)
113. R.G. Alden, E. Johnson, V. Nagarajan, W.W. Parson, C.J. Law, R.G. Cogdell,
J. Phys. Chem. B 101, 4667 (1997)
114. V. Novoderezhkin, R. Monshouwer, R. van Grondelle, Biophys. J. 77, 666
(1999)
115. M.K. Sener, K. Schulten, Phys. Rev. E 65, 31916 (2002)
116. A. Warshel, S. Creighton, W.W. Parson, J. Phys. Chem. 92, 2696 (1988)
117. M. Plato, C.J. Winscom, in The Photosynthetic Bacterial Reaction Center, ed.
by J. Breton, A. Vermeglio (Plenum, New York, 1988), p. 421
118. P.O.J. Scherer, S.F. Fischer, Chem. Phys. 131, 115 (1989)
119. L.Y. Zhang, R.A. Friesner, Proc. Natl. Acad. Sci. USA 95, 13603 (1998)
120. M. Gutman, Structure 12, 1123 (2004)
121. P. Mitchell, Biol. Rev. Camb. Philos. Soc. 41, 445 (1966)
122. H. Luecke, H.-T. Richter, J.K. Lanyi, Science 280, 1934 (1998)
123. R. Neutze et al., BBA 1565, 144 (2002)
124. D. Borgis, J.T. Hynes, J. Chem. Phys. 94, 3619 (1991)
125. F. Juelicher, in Transport and Structure: Their Competitive Roles in Biophysics
and Chemistry, ed. by S.C. Müller, J. Parisi, W. Zimmermann. Lecture Notes
in Physics (Springer, Berlin, 1999)
126. A. Parmeggiani, F. Juelicher, A. Ajdari, J. Prost, Phys. Rev. E 60, 2127 (1999)
127. F. Jülicher, A. Ajdari, J. Prost, Rev. Mod. Phys. 69, 1269 (1997)
128. F. Jülicher, J. Prost, Progr. Theor. Phys. Suppl. 130, 9 (1998)
129. H. Qian, J. Math. Chem. 27, 219 (2000)
130. M.E. Fisher, A.B. Kolomeisky, arXiv:cond-mat/9903308v1
131. N.D. Mermin, J. Math. Phys. 7, 1038 (1966)
132. M.W. Schmidt, K.K. Baldridge, J.A. Boatz, S.T. Elbert, M.S. Gordon, J.J.
Jensen, S. Koseki, N. Matsunaga, K.A. Nguyen, S. Su, T.L. Windus, M. Dupuis,
J.A. Montgomery J. Comput. Chem. 14, 1347 (1993)
Index

activation, 155 dephasing, 209


Arrhenius, 155 dephasing function, 211
ATP, 314 detailed balance, 316
diabatic states, 219
bacteriorhodopsin, 301
dichotomous, 107
binary mixture, 21
dielectric continuum, 38
binodal, 32
diffusion, 90
Boltzmann, 46
dimer, 270
Born, 40, 49
dipole, 43
Born energy, 43, 44
dipole moment, 201
Born radius, 43
Born–Oppenheimer, 195, 201, disorder, 285
303 disorder entropy, 24
Brownian motion, 87, 93, 311 dispersive kinetics, 107
Brownian ratchet, 318 displaced harmonic oscillator, 205
displaced oscillator model, 240
carotenoids, 247 double layer, 52, 57
Chapman, 52 double well, 305
charge separation, 173
charged cylinder, 49 Einstein coefficient, 265
charged sphere, 48 electrolyte, 45
chemical potential, 127 electron transfer rate, 176, 184,
chlorophylls, 247 189
collision, 159 elementary reactions, 75
common tangent, 32
energy gap law, 242
Condon, 201
energy transfer, 259
contribution, 127
energy transfer rate, 267
coordination number, 25
entropic elasticity, 4
correlated process, 113
entropy production, 128–130
critical coupling, 28
enzymatic catalysis, 80
ctrw, 112
equilibrium constant, 155, 157, 163
cumulant expansion, 211
excited state, 229
Debye, 45 excitonic interaction, 261
Debye length, 46 external force, 6

Fitzhugh–Nagumo, 149 molecular aggregates, 274


Flory, 19 molecular motors, 311
Flory parameter, 21, 26 motional narrowing, 213
Fokker–Planck, 87 multipole expansion, 41, 262
force–extension relation, 8, 10, 15
Franck–Condon, 189 Nagumo, 150
Franck-Condon factor, 202, 207 Nernst equation, 144
free electron model, 249 Nernst potential, 140, 142
freely jointed chain, 3 Nernst–Planck equation, 135
Förster, 267 nonadiabatic interaction, 196, 197
nonequilibrium, 125
Gaussian, 213
Goüy, 52 octatetraene, 252
Onsager, 130
optical transitions, 201
heat of mixing, 21
Henderson–Hasselbalch, 64 pKa , 66
Huggins, 19 parallel mode approximation, 199
Hückel, 45, 251 phase diagram, 30, 33
point, 222
ideal gas, 30 Poisson, 114
implicit model, 37 Poisson–Boltzmann equation, 46
interaction, 11 polarization, 177
interaction energy, 20, 25 polyenes, 249
ion pair, 40 polymer solutions, 19
isotope effect, 166 power time law, 119
Prost, 311
Juelicher, 311 protonation equilibria, 61
Klein–Kramers equation, 97 ratchet, 311
Kramers, 101 reaction order, 76
Kramers–Moyal expansion, 96 reaction potential, 38
reaction rate, 75
ladder model, 233, 237 reaction variable, 75
Langevin function, 8 reorganization energy, 186, 189, 208
lattice model, 19
LCAO, 250 saddle point, 339
level shift, 232 Schiff base, 302
LH2, 280 Slater determinant, 248, 259
Lorentzian, 213 Smoluchowski, 98, 311
solvation energy, 43, 49
Marcus, 173 spectral diffusion, 209
master equation, 98, 107 spinodal, 31
maximum term, 7, 15 stability criterion, 26, 28
Maxwell, 94, 159 state crossing, 219
mean force, 37, 38 stationary, 130
membrane potential , 140 Stern, 57
metastable, 28
Michaelis Menten, 82 time correlation function, 202, 205
mixing entropy, 19, 25 titration, 62, 63, 65, 70

transition state, 162 van Laar, 21


tunneling, 305 van’t Hoff, 156
two-component model, 9
uncorrelated process, 113 waiting time, 112
