Physical Chemistry

You might also like

Download as docx, pdf, or txt
Download as docx, pdf, or txt
You are on page 1of 28

Chemistry, Physical

By Seymour Z. Lewin

I INTRODUCTION

Chemistry, Physical, field of science that applies the laws of physics to elucidate the properties of chemical substances and
clarify the characteristics of chemical phenomena. The term physical chemistry is usually applied to the study of the physical
properties of substances, such as vapor pressure, surface tension, viscosity, refractive index, density, and crystallography, as well
as to the study of the so-called classical aspects of the behavior of chemical systems, such as thermal properties, equilibria, rates
of reactions, mechanisms of reactions, and ionization phenomena (see Chemical Reaction; Heat; Heat Transfer; Ionization). In its
more theoretical aspects, physical chemistry attempts to explain spectral properties of substances in terms of fundamental
quantum theory; the interaction of energy with matter; the nature of chemical bonding; the relationships correlating the number
and energy states of electrons in atoms and molecules with the observable properties shown by these systems; and the electrical,
thermal, and mechanical effects of individual electrons and protons on solids and liquids.

II HISTORICAL DEVELOPMENT

The earliest phase of the development of physical chemistry as a specialized field of study was devoted to investigating the
problem of chemical affinities, or the widely varying extents and degrees of vigor with which various substances react with each
other. Common examples are the easy corrosion of iron compared to gold, and the fact that oxygen supports combustion but
nitrogen does not.

A The 19th Century

It was first assumed that rapid reactions were those that proceeded to completion. It was soon realized, however, that rate and
completeness are determined independently: the degree of completeness of a reaction is governed by its so-called equilibrium constant, a concept
introduced in 1864 by the Norwegian chemists Cato Maximilian Guldberg and Peter Waage, whereas the rate of a reaction is
determined by the intimacy of contact between the reactants, the presence or absence of a catalyst (see Catalysis), and other
variables.

The British chemist John Dalton proposed his atomic theory in 1803 and it was placed on a firm footing in 1811 when the Italian
physicist Amedeo Avogadro made clear the distinction between atoms and molecules of elementary substances. About the same
time, the concepts of heat, energy, work, and temperature began to be clarified and made more precise. The first law of
thermodynamics, according to which heat and work are mutually exactly interconvertible, was first clearly stated by the German
physicist Julius Robert von Mayer in 1842. The second law of thermodynamics, according to which spontaneous processes occur
with an increase in the degree of disorder in the system, was enunciated by the German mathematical physicist Rudolf Julius
Emanuel Clausius and the British mathematician and physicist William Thomson, later Lord Kelvin, in 1850-51. See Atom;
Thermodynamics.

These developments made it possible to begin to interpret the properties of gases, which represent the simplest states of matter, in
terms of the behavior of their individual molecules. In the period 1860-75, Clausius, the Austrian physicist Ludwig Boltzmann,
and the British physicist James Clerk Maxwell showed how to account for the ideal gas law in terms of a kinetic theory of matter
(see Gases). From this beginning have flowed all the subsequent insights into the kinetics of reactions and the laws of chemical
equilibrium.

Important contributions to the field of physical chemistry were made toward the end of the 18th century by the French chemist
Comte Claude Louis Berthollet, who studied the rate and reversibility of reactions, and the Anglo-American physicist Benjamin
Thompson who attempted to deduce the mechanical equivalent of heat. In 1824 the French physicist Nicolas Léonard Sadi
Carnot published his studies of the correlation between heat and work, which established him as the founder of modern
thermodynamics, and in 1836 the Swedish chemist Jöns Jakob Berzelius assessed the role played by catalysts in accelerating
reactions. The application of the first and second laws of thermodynamics to heterogenous substances in 1875 by the American
mathematical physicist Josiah Willard Gibbs and his discovery of the phase rule laid down the theoretical basis of physical
chemistry. The German physical chemist Walther Hermann Nernst, who in 1906 enunciated the third law of thermodynamics,
also made a lasting contribution to the study of physical properties, molecular structures, and reaction rates.

The Dutch physical chemist Jacobus Hendricus van't Hoff, generally regarded as the father of chemical kinetics, initiated the
foundation of stereochemistry in 1874 with his work on optically active carbon compounds and three-dimensional and
asymmetrical molecular structures. Three years later, he related thermodynamics to chemical reactions and developed a method
for establishing the order of reactions. In 1889 the Swedish chemist Svante August Arrhenius investigated the speeding of
chemical reactions with increase in temperature and enunciated the theory of electrolytic dissociation, known as Arrhenius's
theory.

B The 20th Century

The development of chemical kinetics has continued into the 20th century with the contributions to the study of molecular
structures, reaction rates, and chain reactions by physical chemists such as Irving Langmuir of the United States, Jens Anton
Christiansen of Denmark, Michael Polanyi of Great Britain, and Nikolay Semenov of the Soviet Union, and important basic
research continues today.

In 1923 the American chemist Gilbert Newton Lewis further clarified the principles of chemical thermodynamics enunciated by
Gibbs.

The great watershed period for the development of physical chemistry was 1900-30. In 1900 the German physicist Max Planck
had proposed that energy in certain systems is “quantized,” or occurs in discrete units or packages, just as matter occurs in
discrete units, the atoms. In 1913, the Danish physicist Niels Bohr showed how the concept of quantization served fully to
explain the spectrum of atomic hydrogen. In 1925-26, the Austrian physicist Erwin Schrödinger and the German physicist
Werner Heisenberg developed the picture of the wave function, a mathematical expression incorporating the wave-particle
duality of electrons, and showed how to calculate useful properties from this formula. From these beginnings, modern concepts
of the structures of atoms and the nature of the bindings between atoms have evolved.

III SUBDIVISIONS OF PHYSICAL CHEMISTRY

The main subdivisions of the study of physical chemistry are chemical thermodynamics; chemical kinetics; the gaseous state; the
liquid state; solutions; the solid state; electrochemistry; colloid chemistry; photochemistry; and statistical thermodynamics.

A Chemical Thermodynamics

This branch studies energy in its various forms as related to matter. It examines the ways in which the internal energy, degree of
organization or order, and ability to do useful work are related to temperature, heat absorbed or evolved, change of state (for
instance, from liquid to gas, gas to liquid, or solid to liquid), work done on or by the system in the form of the flow of electrical
currents, formation of surfaces and changes in surface tension, changes in volume or pressure, and formation or disappearance of
chemical species.

B Chemical Kinetics

This field studies the rates of chemical processes as a function of the concentration of the reacting species, of the products of the
reaction, of catalysts and inhibitors, of various solvent media, of temperature, and of all other variables that can affect the
reaction rate. It is an essential part of the study of chemical kinetics to seek to relate the precise fashion in which the reaction rate
varies with time to the molecular nature of the rate-controlling intermolecular collision involved in generating the reaction
products. Most reactions involve a series of stepwise processes, the sum of which corresponds to the overall, observed reaction
proportions (or stoichiometry) in which the reactants combine and the products form; only one of these steps, however, is
generally the rate-controlling one, the others being much faster. By determining the nature of the rate-controlling process from
the mathematical analysis of the reaction kinetics and by investigating how the reaction conditions (for instance, solvent, other
species, and temperature) affect this step, or cause some other process to become the rate-controlling one, the physical chemist
can deduce the mechanism of a reaction.

C The Gaseous State

This branch is concerned with the study of the properties of gases, in particular, the law that interrelates the pressure, volume,
temperature, and quantity of a gas. This law is expressed in mathematical form as the “equation of state” of the gas. For an ideal
gas (that is, a hypothetical gas consisting of molecules whose dimensions are negligibly small, and which do not exert forces of
attraction or repulsion on each other), the equation of state has the simple formula: PV = nRT, where P is pressure, V is volume,
n is the number of moles of the substance, R is a constant, and T is the absolute (or Kelvin) temperature. For real gases, the
equation of state is more complicated, containing additional variables due to the effects of the finite sizes and force fields of the
molecules. Mathematical analysis of the equations of state of real gases permits the physical chemist to deduce much about the
relative sizes of molecules, as well as the strengths of the forces they exert on each other.
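
As a rough illustration (a sketch, not part of the article; the value of R is the familiar one in liter-atmospheres per
mole-kelvin), the ideal-gas equation of state can be solved for pressure in a few lines of Python:

# Sketch: solve the ideal-gas equation of state PV = nRT for pressure.
R = 0.082057  # gas constant, L·atm/(mol·K)

def ideal_gas_pressure(n, volume, temperature):
    """Pressure in atmospheres for n moles in a volume (liters) at a temperature (kelvins)."""
    return n * R * temperature / volume

# One mole occupying 22.414 L at 273.15 K exerts about 1 atm.
print(ideal_gas_pressure(1.0, 22.414, 273.15))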

D The Liquid State

This field studies the properties of liquids, in particular, the vapor pressure, boiling point, heat of vaporization, heat capacity,
volume per mole, viscosity, compressibility, and the manner in which these properties are affected by the temperature and
pressure at which they are measured and by the chemical nature of the substance itself.

E Solutions

This branch studies the special properties that arise when one substance is dissolved in another. In particular, it investigates the
solubility of substances and how it is affected by the chemical nature of both the solute and the solvent. It also involves the study
of the electrical conductivity and colligative properties (the boiling point, freezing point, vapor pressure, and osmotic pressure) of
solutions of electrolytes, which are substances that yield ions when dissolved in a polar solvent such as water.

F The Solid State

This branch deals with the study of the internal structure, on a molecular and atomic scale, of solids, and the elucidation of the
physical properties of solids in terms of this structure. This includes the mathematical analysis of the diffraction patterns
produced when a beam of X rays is directed at a crystal. By using this method, physical chemists have gained valuable insights
into the packing arrangements adopted by various types of ions and atoms. They have also learned the symmetries and
crystallographies of most solid substances as well as their cohesive forces, heat capacities, melting points, and optical properties.
See Crystal.

G Electrochemistry

This branch is concerned with the study of chemical effects produced by the flow of electric currents across interfaces (as at the
boundary between an electrode and a solution) and, vice versa, the electrical effects produced by the displacement or transport of
ions across boundaries or within gases, liquids, or solids (see Electricity). Measurements of electrical conductivity in liquids yield
insights into ionization equilibria and the properties of ions. In solids, such measurements provide information about the states of
the electrons in crystal lattices and in insulators, semiconductors, and metallic conductors. Measurements of voltages (electric
potentials) yield knowledge of the concentrations of ionic species and of the driving forces of reactions that involve the gain or
loss of electrons from a variety of reactants. See Electrochemistry.

H Colloid Chemistry

This branch studies the nature and effects of surfaces and interfaces on the macroscopic properties of substances. These studies
involve the investigation of surface tension, interfacial tension (the tension that exists in the plane of contact between a liquid and
a solid, or between two liquids), wetting and spreading of liquids on solids, adsorption of gases or of ions in solution on solid
surfaces, Brownian motion of suspended particles, emulsification, coagulation, and other properties of systems in which tiny
particles are immersed in a fluid medium. See Colloid.

I Photochemistry

This branch concerns the study of the effects resulting from the absorption of electromagnetic radiation by substances, as well as
the ability of substances to emit electromagnetic radiation when energized in various ways. When X radiation interacts with
matter, electrons may be ejected from their places in the interiors of atoms, ions, or molecules, and measuring the energies of
these electrons reveals much about the nature of the electron arrangement within the atom, ion, or molecule. Similarly,
investigation of the absorption of ultraviolet and visible light discloses the structure of the valence, or binding electrons (see
Ultraviolet Radiation); absorption of infrared radiation provides information about the vibrational motions and binding forces
within molecules; and absorption of microwaves permits scientists to deduce the nature of the rotational motions of molecules,
and from this the exact geometries (internuclear distances) of the molecules. The study of the interaction of electromagnetic
radiation with matter, when that interaction does not result in chemical changes, is often designated as spectrochemistry, and the
term photochemistry is then used only for those interactions that produce chemical changes. Examples of photochemistry (that is,
light-induced) reactions are the fading of dyes when exposed to sunlight, the generation of vitamin D in the human skin by
sunlight, and the formation of ozone in the upper atmosphere by the ultraviolet radiation from the sun. See Photochemistry.

J Statistical Thermodynamics and Mechanics

This branch is concerned with the calculation of the internal energy, degree of order or organization (entropy), ability to do useful
work (free energy), and other properties, such as the equations of state of gases, the vapor pressures of liquids, the molecular
shapes adopted by polymer chains, and electrical conductivities of ionic solutions. These calculations are based on a model of the
individual molecule or ion and the mathematical techniques of statistical analysis, which permit the mutual interactions of large
numbers of randomly arranged particles to be evaluated. See Statistics.

IV CURRENT STATUS

Physical chemistry and chemical physics are vigorously active fields of research in chemistry today. Electrochemistry, colloid
chemistry, and photochemistry are of great importance in many phases of modern industry. The current computer and
communication revolutions, for example, could not have occurred without the special chemicals, crystals, and devices developed
in the course of research in these branches of physical chemistry.

In the area of fundamental research, as distinguished from applied research, the greatest emphasis today is on the theoretical
analysis of spectra of all kinds, ranging from the X-ray region of the electromagnetic spectrum to the radio-wave region (see
Spectroscopy; Spectrum). Emphasis is also placed on the application of quantum and wave mechanics to elucidate the principles
of molecular binding and structure. Valuable insights into these questions have been gained by studying the properties of
substances under conditions of both extremely high and extremely low temperatures and pressures as well as under the influence
of strong electrical, magnetic, and electromagnetic fields.

Chemical Reaction
By James Arthur Campbell

I INTRODUCTION

Chemical Reaction, process by which atoms or groups of atoms are redistributed, resulting in a change in the molecular
composition of substances. An example of a chemical reaction is formation of rust (iron oxide), which is produced when oxygen
in the air reacts with iron.

The products obtained from a given set of reactants, or starting materials, depend on the conditions under which a chemical
reaction occurs. Careful study, however, shows that although products may vary with changing conditions, some quantities
remain constant during any chemical reaction. These constant quantities, called the conserved quantities, include the number of
each kind of atom present, the electrical charge, and the total mass.

II CHEMICAL SYMBOLS

In order to discuss the nature of chemical reactions, certain basic facts about chemical symbols, nomenclature, and the writing of
formulas must first be understood. All substances are made up of some combination of atoms of the chemical elements. Rather
than full names, scientists identify elements with one- or two-letter symbols. Some common elements and their symbols are
carbon, C; oxygen, O; nitrogen, N; hydrogen, H; chlorine, Cl; sulfur, S; magnesium, Mg; aluminum, Al; copper, Cu; silver, Ag;
gold, Au; and iron, Fe.

Most chemical symbols are derived from the letters in the name of the element, most often in English, but sometimes in German,
French, Latin, or Russian. The first letter of the symbol is capitalized, and the second (if any) is lowercase. Symbols for some
elements known from ancient times come from earlier, usually Latin, names: for example, Cu from cuprum (copper), Ag from
argentum (silver), Au from aurum (gold), and Fe from ferrum (iron). The same set of symbols in referring to chemicals is used
universally. The symbols are written in Roman letters regardless of language.

Symbols for the elements may be used merely as abbreviations for the name of the element, but they are used more commonly in
formulas and equations to represent a fixed relative quantity of the element. Often the symbol stands for one atom of the element.
Atoms, however, have fixed relative weights, called atomic weights, so the symbols often stand for one atomic weight of the
element.

The atomic weights (atomic wt.) of the elements (see Elements, Chemical) are average atomic weights of the elements as they
occur in nature. Every chemical element consists of atoms the weights of which vary because of varying numbers of neutrons in
their nuclei. Atoms of the same element that differ in weight are called isotopes of the element. An isotope's weight may be
indicated by a superscript to the left of the abbreviation that indicates the total number of nucleons (protons plus neutrons) in the
nucleus. The symbols ²³⁵U and ²³⁸U, for example, represent two uranium isotopes of weight 235 and 238. The symbols ¹H, ²H, and
³H represent three hydrogen isotopes of weights 1, 2, and 3. If no isotopic weight is indicated, the mean (weighted average)
atomic weight is indicated. All of these weights are in atomic mass units (amu). One amu is defined as 1/12 of the mass of a ¹²C
atom, the most common isotope of carbon. See Atom.

An electrically neutral atom has equal numbers of protons and electrons. Electrically charged atoms and groups of atoms are
called ions. When an atom is electrically charged—that is, when it has lost or gained one or more electrons, and thereby become
an ion—that state may be indicated by a superscript to the right of the symbol, as in H+, Mg++, or Cl-. The symbol H+ indicates a
singly positive hydrogen ion, Mg++ a doubly positive magnesium ion, and Cl- a singly negative chlorine ion. See Ionization.

The atomic number of an element is equal to the number of protons in the nucleus of an atom of the element. All isotopes of a
particular element have the same number of protons in their nuclei. The atomic number is sometimes indicated by a lower-left
subscript. The symbol U3+ represents a uranium ion of triply positive charge (that is, an atom that has lost 3 electrons), with 92
protons and 146 neutrons (238 nucleons - 92 protons = 146 neutrons) in its nucleus, which is surrounded by 89 electrons (92 - 3 =
89).
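
This bookkeeping is simple arithmetic; a minimal Python sketch (the function and its name are illustrative, not from the article):

def particle_counts(nucleons, atomic_number, charge=0):
    """Protons, neutrons, and electrons of an isotope or ion."""
    protons = atomic_number
    neutrons = nucleons - atomic_number
    electrons = atomic_number - charge  # lost electrons give a positive charge
    return protons, neutrons, electrons

# The uranium ion above: 238 nucleons, atomic number 92, charge 3+.
print(particle_counts(238, 92, 3))  # (92, 146, 89)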

III CHEMICAL FORMULAS

An individual atom can be represented by the symbol of the element, with the charge and mass of the atom indicated when
appropriate. Most substances, however, are compound, in that they are composed of combinations of atoms. The formula for
water, H2O, indicates that two atoms of hydrogen are present for every atom of oxygen. The formula shows that water is
electrically neutral, and it also indicates (because the atomic weights are H = 1.01, O = 16.00) that 2.02 unit weights of hydrogen
will combine with 16.00 unit weights of oxygen to produce 18.02 unit weights of water. Because the relative weights remain
constant, the weight units can be expressed in pounds, tons, kilograms, or any other unit so long as each weight is expressed in
the same unit as the other two.
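
Because the symbols stand for fixed relative weights, a formula weight is a weighted sum; a minimal Python sketch using the
atomic weights quoted above:

ATOMIC_WEIGHTS = {"H": 1.01, "O": 16.00, "C": 12.01}

def formula_weight(composition):
    """Weight of a formula given as {element: atom count}; water is {'H': 2, 'O': 1}."""
    return sum(ATOMIC_WEIGHTS[element] * count for element, count in composition.items())

print(formula_weight({"H": 2, "O": 1}))  # 18.02 for H2O
print(formula_weight({"C": 1, "O": 2}))  # 44.01 for CO2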

Similarly, the formula for carbon dioxide is CO2; for gasoline, C8H18; for oxygen, O2; and for candle wax, CH2. The subscripts in
each case (with a 1 understood if no subscript is given) show the relative number of atoms of each element in the substance. CO2
has 1 C for every 2 Os, and CH2 has 1 C for every 2 Hs. But why write O2 and C8H18 rather than simply O and C4H9, which show
the same atomic and weight ratios? Experiments show that atmospheric oxygen consists not of single atoms (O) but of molecules
made up of pairs of atoms (O2); molecules of gasoline consist of carbon and hydrogen in the ratio C8 to H18 rather than any other
combinations of carbon atoms and hydrogen atoms. The formulas of atmospheric oxygen and gasoline are examples of molecular
formulas. Water consists of H2O molecules, and carbon dioxide consists of CO2 molecules. Thus, H2O and CO2 are molecular
formulas. Candle wax (CH2), on the other hand, is not made up of molecules each containing 1 carbon atom and 2 hydrogen
atoms. It actually consists of very long chains of carbon atoms, with most of the carbon atoms bonded to 2 hydrogen atoms in
addition to being bonded to 2 neighboring carbon atoms in the chain. Such formulas, which give the correct relative atomic
composition but do not give the molecular formula, are called empirical formulas.

All formulas that are multiples of simpler ratios can be assumed to represent molecules: The formulas N2, H2, H2O2, and C2H6
represent nitrogen gas, hydrogen gas, hydrogen peroxide, and ethane. However, formulas that show the simplest possible atomic
ratios must be assumed to be empirical unless evidence exists to the contrary. The formulas NaCl and Fe2O3, for example, are
empirical; the former represents sodium chloride (table salt) and the latter iron oxide (rust), but no single molecules of NaCl or
Fe2O3 are present.

IV NAMING INORGANIC COMPOUNDS

All organic and inorganic compounds can be given systematic names based on the elementary composition and often the
structure of the substance. See Chemistry, Organic.

Binary inorganic compounds contain two different elements and are written with the more metallic (more electrically positive)
element first. Such compounds are named by taking the name of the first element followed by the main part of the name of the
second, more negative, element combined with the suffix -ide: NaCl, sodium chloride; CaS, calcium sulfide; MgO, magnesium
oxide; SiN, silicon nitride. When the atomic ratio differs from 1:1, a prefix to the name often makes this clear: CS2, carbon
disulfide; GeCl4, germanium tetrachloride; SF6, sulfur hexafluoride; NO2, nitrogen dioxide; N2O4, dinitrogen tetraoxide.

Many groups of elements occur so often as ions that they are given names: nitrate, NO3-; sulfate, SO42-; and phosphate, PO43-. The
suffix -ate usually indicates the presence of oxygen. The positive ion, NH4+, is called ammonium, as in NH4Cl, ammonium
chloride, or (NH4)3PO4, ammonium phosphate.

Rules for naming more complicated compounds exist, but many compounds have been given trivial names—for example,
Na2B4O7·10 H2O, borax—or proprietary names—F(CF2)nF, Teflon. These nonsystematic names may be convenient in some
usages but they are often difficult to interpret.

The accompanying table lists names and formulas of the most common polyatomic inorganic ions. They form compounds by
combining in such a way that the net charge for the entire molecule is zero. The sum of the charges on the positive ions equals the
sum of the charges on the negative ions. When formed from water solutions, the compounds (termed hydrates) often contain
water molecules, as does borax, the systematic name of which is disodium tetraborate decahydrate—a good example of the
advantages and disadvantages of trivial names.
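
The zero-net-charge rule amounts to a small calculation: the smallest whole-number ratio of ions follows from the least common
multiple of the two charges. A minimal Python sketch (ion charges are given as magnitudes):

from math import gcd

def neutral_ratio(cation_charge, anion_charge):
    """Smallest (cation count, anion count) giving zero net charge."""
    lcm = cation_charge * anion_charge // gcd(cation_charge, anion_charge)
    return lcm // cation_charge, lcm // anion_charge

print(neutral_ratio(2, 3))  # (3, 2): Ca2+ and PO43- combine as Ca3(PO4)2
print(neutral_ratio(1, 3))  # (3, 1): NH4+ and PO43- combine as (NH4)3PO4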

In the table, the suffix -ite indicates fewer oxygen atoms than in the corresponding -ate ion, with the prefix hypo- used with the
suffix -ite indicating still fewer. The prefix per- indicates more oxygen, or less negative charge, than the corresponding -ate ion.
The chlorine oxyanions illustrate the whole series: perchlorate, ClO4-; chlorate, ClO3-; chlorite, ClO2-; and hypochlorite, ClO-.

V CHEMICAL EQUATIONS

Chemical symbols and formulas are used to describe chemical reactions; they denote substances having one set of formulas
changing into substances having another set of formulas. Consider the chemical reaction in which methane, or natural gas
(formula CH4), burns in oxygen (O2) to form carbon dioxide (CO2) and water (H2O). If we assume that only these four
substances are involved, the formulas (used mainly as abbreviations for names) would be stated:

CH4 + O2 gives CO2 + H2O

Because atoms are conserved in chemical reactions, however, the same numbers of atoms must appear on both sides of the
equation. Therefore, the reaction might be expressed as

1 CH4 + 2 O2 gives 1 CO2 + 2 H2O

Chemists substitute an arrow for “gives” and delete all the “1's” to get the balanced chemical equation:

CH4 + 2 O2 → CO2 + 2 H2O

Electrical charges and numbers of each kind of atom are conserved.

Balanced chemical equations are balanced not only with respect to charge and numbers of each kind of atom but also with respect
to weight, or, more correctly, to mass. The periodic table (see Periodic Law) lists these atomic weights: C = 12.01, H = 1.01, O =
16.00. So we can identify each atomic symbol with an appropriate mass:

CH4 = 12.01 + 4(1.01) = 16.05   2 O2 = 4(16.00) = 64.00
CO2 = 12.01 + 2(16.00) = 44.01   2 H2O = 4(1.01) + 2(16.00) = 36.04

Thus, 16.05 atomic mass units (amu) of CH4 react with 64.00 amu of O2 to produce 44.01 amu of CO2 plus 36.04 amu of H2O. Or
1 mole of methane reacts with 2 moles of oxygen to produce 1 mole of carbon dioxide plus 2 moles of water. The total mass on
each side of the equation is conserved:

16.05 + 64.00 = 80.05 = 44.01 + 36.04

Thus charge, atoms, and mass are all conserved.
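
Such conservation checks are mechanical enough to automate; a minimal Python sketch (the data representation is ours) that
tallies atoms and mass on each side of the methane equation:

from collections import Counter

ATOMIC_WEIGHTS = {"C": 12.01, "H": 1.01, "O": 16.00}

# Each side is a list of (coefficient, {element: atoms per formula unit}).
reactants = [(1, {"C": 1, "H": 4}), (2, {"O": 2})]          # CH4 + 2 O2
products = [(1, {"C": 1, "O": 2}), (2, {"H": 2, "O": 1})]   # CO2 + 2 H2O

def tally(side):
    atoms, mass = Counter(), 0.0
    for coefficient, formula in side:
        for element, count in formula.items():
            atoms[element] += coefficient * count
            mass += coefficient * count * ATOMIC_WEIGHTS[element]
    return atoms, round(mass, 2)

print(tally(reactants))  # C: 1, H: 4, O: 4 and 80.05 amu
print(tally(products))   # the same counts and mass: conserved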

VI CHEMICAL BONDING

When two or more atoms are brought close enough, an attractive force between the electrons of individual atoms and the nuclei
of one or more of the other atoms can result. If this force is large enough to keep the atoms together, a chemical bond is said to be
formed. All chemical bonds result from the simultaneous attraction of one or more electrons by more than one nucleus.

A Types of Bonds

If the bonded atoms are of metallic elements, the bond is said to be metallic. The electrons are shared between the atoms but are
able to move through the solid to give electrical and thermal conductivity, luster, malleability, and ductility. See Metals.

If the bonded atoms are nonmetals and identical (as in N2 or O2), the electrons are shared equally between the two atoms, and the
bond is called nonpolar covalent. If the atoms are nonmetals but differ (as in nitric oxide, NO), the electrons are shared unequally
and the bond is called polar covalent—polar because the molecule has a positive and a negative electric pole much like the north
and south poles of a magnet, and covalent because the atoms share electrons between them, even though unequally. These
substances are not electrical conductors, nor do they have luster, ductility, or malleability.

When a molecule of a substance contains atoms of both metals and nonmetals, the electrons are more strongly attracted to the
nonmetals, which become negatively charged ions; the metals become positively charged ions. The ions then attract their
opposites in charge, forming ionic bonds. Ionic substances conduct electricity when they are in the liquid state or in water
solutions, but not in the crystalline state, because individual ions are too large to move freely through the crystal.

Symmetrical sharing of electrons gives either metallic or nonpolar covalent bonds; unsymmetrical sharing gives polar covalent
bonds; electron transfer gives ionic bonds. The tendency for unequal distribution of electrons between pairs of atoms generally
increases as they are farther apart in the periodic table.

For the formation of stable ions and of covalent bonds, the most common pattern is for each atom to achieve the same total
number of electrons as the noble gas—Group 18 (or VIIIa)—element closest to it in the periodic table (see Noble Gases). The
metals in Groups 1 (or Ia) and 11 (or Ib) of the periodic table tend to lose one electron to form singly positive ions; those in
Groups 2 (or IIa) and 12 (or IIb) tend to lose two electrons to form doubly positive ions; and similarly for Groups 3 (or IIIb) and
13 (or IIIa). Likewise, the halogens, Group 17 (or VIIa), tend to gain one electron to form singly negative ions, and elements of
Group 16 (or VIa) to form doubly negative ions. As the net charge on an ion increases, however, the ion becomes less stable with
respect to sharing electrons with other atoms, so most large apparent charges (as in MnO2, +4 and -2, respectively) would be
minimized by covalent sharing of electrons.

Covalent bonds form when both atoms lack the number of electrons in the nearest noble gas atom. Neutral chlorine atoms, for
example, have one less electron per atom than do argon atoms (17 versus 18). When two chlorine atoms form a covalent bond
sharing two electrons (one from each atom), both achieve the argon number of 18, Cl:Cl. It is common to represent a shared pair
of electrons by a straight line between the atom symbols: Cl:Cl is written Cl–Cl.

Similarly, atomic nitrogen is three electrons short of the neon number (ten), but each nitrogen can get the neon number if six
electrons are shared between them: N:::N or N≡N. This is called a triple bond. Sulfur, in the same way, can achieve the argon
number by sharing four electrons in a double bond, S::S or S=S. In carbon dioxide, both the carbon (with six of its own
electrons) and oxygen (with eight) achieve the neon number (ten) by sharing with double bonds: O=C=O. In all these bonding
formulas, only the shared electrons are shown.

B Valence

In most atoms, many of the electrons are so firmly attracted to their own nucleus that they can have no appreciable interaction
with other nuclei. Only those electrons on the “outside” of an atom can interact with two or more nuclei. These are called valence
electrons.

The number of valence electrons in an atom is indicated by the atom's periodic table family (or group) number, using only the
older Roman numeral designation. Thus we have one valence electron for elements in Groups 1 (or Ia) and 11 (or Ib). There are
two valence electrons for elements in Groups 2 (or IIa) and 12 (or IIb), and four for elements in Groups 4 (or IVb) and 14 (or
IVa). Each of the noble gas elements except helium (that is, neon, argon, krypton, xenon, and radon) has eight valence
electrons. Elements in families (groups) near the noble gases tend to react to form noble gas sets of eight valence electrons. This
is known as the Lewis Rule of Eight, which was enunciated by the American chemist Gilbert N. Lewis.

The exception, helium (He), has a set of two valence electrons. Elements near helium tend to acquire a valence set of two:
hydrogen by gaining one electron, lithium by losing one, and beryllium by losing two electrons. Hydrogen typically shares its
single electron with one electron from another atom to form a single bond, as in hydrogen chloride, HCl. The chlorine,
originally with seven valence electrons, now has eight. These valence electrons can be shown as H:Cl, with the three unshared
pairs on the chlorine understood. The structures of N2 and CO2 may now be expressed as :N:::N: or :N≡N: and as O::C::O or
O=C=O. These so-called Lewis structures
show noble gas valence electron sets of eight for each atom. Probably 80 percent of all covalent compounds can be reasonably
represented by Lewis electron structures. The remainder, especially those containing elements in the central region of the periodic
table, often cannot be described in terms of noble gas structures.

C Resonance

An interesting extension of Lewis structures, called resonance, is found, for example, in nitrate ions, NO3-. Each N originally has
five valence electrons, each O has six, plus one for the negative charge, or a total of 24 (5 + [3 × 6] + 1 = 24) electrons for four
atoms. This is only an average of six electrons per atom, so covalent sharing must occur if the Lewis Rule of Eight is to apply. It
is known that the nitrogen atom takes a central position surrounded by the three oxygen atoms, which can give an acceptable
Lewis structure, except that there are three possible structures. Actually only one structure is observed. Each Lewis resonance
structure suggests that two bonds should be single and one double. Experiments have shown, however, that all the bonds are
actually identical in every respect, with properties intermediate between those observed for single and double bonds in other
compounds. Modern theory suggests that a structure of localized, Lewis-type, shared electron bonds gives the general shape and
symmetry of the molecule plus a set of delocalized electrons (shown by dotted lines) that are shared over the whole molecule.

D Types of Chemical Reactions


An understanding of reaction mechanisms can be gained from a study of ionic and covalent bonding. One kind of reaction, ion
matching, is easy to understand as due to the pairing (or dissociation) of ions to form (or dissociate) neutral ionic substances, as
in Ag+ + Cl- ⇋ AgCl, or 3 Ca2+ + 2 PO43- ⇋ Ca3(PO4)2, where the double arrow (instead of an equal sign) emphasizes the two
possible directions of reaction. Covalent single bond changes in which both electrons come from (or go to) one reactant are called
acid-base reactions, as in H+ + :NH3 ⇋ NH4+. A pair of electrons from the base enters an empty electron orbital of the
acid to form the covalent bond (see Acids and Bases). Covalent single bond changes in which one bonding electron comes from
(or goes to) each reactant are called free radical reactions, as in H· + ·H ⇋ H–H.

Sometimes reactants gain and lose electrons, as in oxidation-reduction, or redox, reactions: 2 Fe2+ + Br2 ⇋ 2 Fe3+ + 2 Br-. Thus, in
an oxidation-reduction reaction, one reactant is oxidized (loses one or more electrons) and the other reactant is reduced (gains one
or more electrons). Common examples of redox reactions involving oxygen are the rusting of metals such as iron (in which case
the metals are oxidized by atmospheric oxygen), combustion, and the metabolic reactions associated with respiration. An
example of a redox reaction that does not involve atmospheric oxygen is the reaction that produces electricity in the lead storage
battery: Pb + PbO2 + 4H+ + 2SO42- = 2PbSO4 + 2H2O.

The joining of two groups is also called addition; their separation is called decomposition. Multiple addition involving many
identical molecules is called polymerization. See Polymer.

E Chemical Energetics

Energy is conserved in chemical reactions. If stronger bonds form in the products than are broken in the reactants, heat is released
to the surroundings, and the reaction is termed exothermic. If stronger bonds break than are formed, heat must be absorbed from
the surroundings, and the reaction is endothermic. Because strong bonds are more apt to form than weak bonds, spontaneous
exothermic reactions are common—for example, the combustion of carbon-containing fuels with air to give CO2 and H2O, both
of which possess strong bonds. Spontaneous endothermic reactions, however, are also well known; the dissolving of salt in water
is one example.

Endothermic reactions are always associated with the spreading, or the dissociation, of molecules. This can be measured as an
increase in the entropy of the system. The net effect of the tendency for strong bonds to form and the tendency of molecules and
ions to spread out, or dissociate, can be measured as the change in free energy of the system. All spontaneous changes at constant
pressure and temperature involve an increase in free energy, with a large increase in bond strength, or a large increase in
spreading out, or both. See Chemistry, Physical; Thermodynamics.

F Chemical Rates and Mechanisms

Some reactions, such as explosions, occur rapidly. Other reactions, such as rusting, take place slowly. Chemical kinetics, the
study of reaction rates, shows that three conditions must be met at the molecular level if a reaction is to occur: The molecules
must collide; they must be positioned so that the reacting groups are together in a transition state between reactants and products;
and the collision must have enough energy to form the transition state and convert it into products.

Fast reactions occur when these three criteria are easy to meet. If even one is difficult, however, the reaction is typically slow,
even though the change in free energy permits a spontaneous reaction.

Rates of reaction increase in the presence of catalysts, substances that provide a new, faster reaction mechanism but are
themselves regenerated so that they can continue the process (see Catalysis). Mixtures of hydrogen and oxygen gases at room
temperature do not explode. But the introduction of powdered platinum leads to an explosion as the platinum surface becomes
covered with adsorbed oxygen. The platinum atoms stretch the bonds of the O2 molecules, weakening them and lowering the
activation energy. The oxygen atoms then react rapidly with hydrogen molecules, colliding with them, forming water, and
regenerating the catalyst. The steps by which a reaction occurs are called the reaction mechanism.

Rates of reaction can be changed not only by catalysts but also by changes in temperature and by changes in concentrations.
Raising the temperature increases the rate by increasing the kinetic energy of the molecules of the reactants, thereby increasing
the likelihood of transition states being achieved. Increasing the concentration can increase the reaction rate by increasing the rate
of molecular collisions.
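
The temperature effect is commonly quantified by the Arrhenius relation k = A exp(-Ea/RT), with Ea the activation energy
discussed above; a minimal Python sketch with assumed, illustrative values:

import math

R = 8.314  # gas constant, J/(mol·K)

def arrhenius(A, Ea, T):
    """Rate constant for pre-exponential factor A and activation energy Ea (J/mol)."""
    return A * math.exp(-Ea / (R * T))

Ea = 50000.0  # an assumed, typical activation energy of 50 kJ/mol
ratio = arrhenius(1e13, Ea, 308.0) / arrhenius(1e13, Ea, 298.0)
print(ratio)  # about 1.9: a 10-degree rise nearly doubles this rate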

G Chemical Equilibrium

As a reaction proceeds, the concentration of the reactants usually decreases as they are used up. The rate of reaction will,
therefore, decrease as well. Simultaneously, the concentrations of the products increase, so it becomes more likely that they will
collide with one another to reform the initial reactants. Eventually, the decreasing rate of the forward reaction becomes equal to
the increasing rate of the reverse reaction, and net change ceases. At this point the system is said to be at chemical equilibrium.
Forward and reverse reactions occur at equal rates.
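
The approach to equilibrium can be illustrated by a toy step-by-step simulation of a reversible reaction A ⇋ B (the rate
constants are invented for the illustration):

kf, kr = 0.10, 0.05  # illustrative forward and reverse rate constants
a, b = 1.0, 0.0      # start with pure A

for _ in range(200):
    forward, reverse = kf * a, kr * b
    a += reverse - forward
    b += forward - reverse

print(round(a, 4), round(b, 4))            # ~0.3333 and ~0.6667
print(round(kf * a, 4), round(kr * b, 4))  # forward and reverse rates now equal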

Changes in systems at chemical equilibrium are described by Le Châtelier's principle, named after the French scientist Henri
Louis Le Châtelier: Any attempt to change a system at equilibrium causes it to react so as to minimize the change. Raising the
temperature causes endothermic reactions to occur; lowering the temperature leads to exothermic reactions. Raising the pressure
favors reactions that lower the volume, and vice versa. Increasing any concentration favors reactions using up the added material;
decreasing any concentration favors reactions forming that material. See Gases.

VII CHEMICAL SYNTHESIS

The principal goals of synthetic chemistry are to create new chemical substances and to develop better, less-expensive methods
for the synthesis of known substances. Sometimes simply purifying naturally occurring substances is sufficient either to obtain an
important chemical or to increase use of that chemical as a starting material for other syntheses. For instance, the pharmaceutical
industry often depends, for the source of starting materials in the synthesis of important medicines, upon the complicated organic
chemicals found in crude oil. More commonly, especially for rare or expensive naturally occurring substances, it is necessary to
synthesize the substance from less-expensive or more-available raw materials.

One task of synthetic chemistry, then, is to produce additional amounts of substances already found in nature. Examples are the
recovery of copper metal from its ores and the syntheses of certain naturally occurring medicines (such as aspirin) and vitamins
(such as ascorbic acid—vitamin C). A second task is to synthesize materials not found in nature, such as steel, plastics, ceramics
(space shuttle tiles, for example) and adhesives.

Some 11 million chemical compounds are now cataloged with the Chemical Abstracts Service in Columbus, Ohio; about 2000
new ones are synthesized every day. Some 6000 are in commercial production, with new compounds coming into the market at
the rate of about 300 per year. Each new compound is tested not only for its benefits and intended use, but also for any potentially
harmful effects on humans and the environment before it is allowed to go into the market. Determining toxicity is made difficult
and expensive by the wide variance in toxic dose levels among humans, plants, and animals and by the difficulty of measuring
the effects of long-term exposure.

Synthetic chemistry was not developed as a sophisticated and highly rigorous science until well into the 20th century. Until then,
the synthesis of a substance was often first accomplished by accident, and the uses of these new materials were limited. The
sketchy theoretical ideas prior to the turn of the century also limited chemists' ability to develop systematic approaches to
synthesis. In contrast, it is now possible to design new chemical substances to fill specific needs (for example, medicines,
structural materials, or fuels), to synthesize in the laboratory almost any substance found in nature, to invent and prepare new
compounds, and even to predict, based on sophisticated computer modeling, either the properties of a “target” molecule or its
long-term effects in medicine or in the environment.

Much of the recent progress in synthesis rests on the ability of scientists to determine the detailed structure of a range of
substances and to understand the correlations between a molecule's structure and its properties, or structure-activity relationships.
In fact, the likely structures and properties of a series of target molecules can now be modeled ahead of their synthesis, giving
scientists a better understanding of the types of substances most needed for a given purpose. Modern penicillin drugs are
synthetic modifications of the substance first observed in nature by the British bacteriologist Alexander Fleming. More than 1000
human diseases have been identified as stemming from molecular deficiencies, and many can be treated by remedying that
deficiency using synthetic pharmaceuticals. Much of the search for new fuels and for methods of using solar energy is based on
the study of the molecular properties of synthetic materials. One of the most recent accomplishments of this type is the
fabrication of superconductors based on the structure of complicated inorganic ceramic materials, such as YBa2Cu3O7 and other
structurally similar materials.

It is now possible to synthesize hormones, enzymes, and genetic material identical to that found in living systems, thereby
increasing the possibility of treating the root causes of human illness by genetic engineering. This has been made easier in recent
years by computer-assisted design of syntheses and by the powerful modeling capabilities of modern computers.

One of the most successful recent developments in synthetic biochemistry has been the routine use of simple living systems, such
as yeasts, bacteria, and molds, to produce important substances. The biochemical synthesis of biological materials is now
possible. Escherichia coli bacteria, for example, are used to produce human insulin. Yeasts are also used to produce alcohol, and
molds are used to produce penicillin.

Heat (physics)
By Richard S. Thorsen

I INTRODUCTION

Heat (physics), in physics, transfer of energy from one part of a substance to another, or from one body to another by virtue of a
difference in temperature. Heat is energy in transit; it always flows from a substance at a higher temperature to the substance at a
lower temperature, raising the temperature of the latter and lowering that of the former substance, provided the volume of the
bodies remains constant. Heat does not flow from a lower to a higher temperature unless another form of energy transfer, work, is
also present. See also Power.

Until the beginning of the 19th century, the effect of heat on the temperature of a body was explained by postulating the existence
of an invisible substance or form of matter termed caloric. According to the caloric theory of heat, a body at a high temperature
contains more caloric than one at a low temperature; the former body loses some caloric to the latter body on contact, increasing
that body's temperature while lowering its own. Although the caloric theory successfully explained some phenomena of heat
transfer, experimental evidence was presented by the American-born British physicist Benjamin Thompson in 1798 and by the
British chemist Sir Humphry Davy in 1799 suggesting that heat, like work, is a form of energy in transit. Between 1840 and 1849
the British physicist James Prescott Joule, in a series of highly accurate experiments, provided conclusive evidence that heat is a
form of energy in transit and that it can cause the same changes in a body as work.

II TEMPERATURE

The sensation of warmth or coldness of a substance on contact is determined by the property known as temperature. Although it
is easy to compare the relative temperatures of two substances by the sense of touch, it is impossible to evaluate the absolute
magnitude of the temperatures by subjective reactions. Adding heat to a substance, however, not only raises its temperature,
causing it to impart a more acute sensation of warmth, but also produces alterations in several physical properties, which may be
measured with precision. As the temperature varies, a substance expands or contracts, its electrical resistivity (see Resistance)
changes, and in the gaseous form, it exerts varying pressure. The variation in a standard property usually serves as a basis for an
accurate numerical temperature scale (see below).

Temperature depends on the average kinetic energy of the molecules of a substance, and according to kinetic theory (see Gases;
Thermodynamics), energy may exist in rotational, vibrational, and translational motions of the particles of a substance.
Temperature, however, depends only on the translational molecular motion. Theoretically, the molecules of a substance would
exhibit no activity at the temperature termed absolute zero. See Molecule.

III TEMPERATURE SCALES

Five different temperature scales are in use today: the Celsius scale, known also as the Centigrade scale, the Fahrenheit scale, the
Kelvin scale, the Rankine scale, and the international thermodynamic temperature scale (see Thermometer). The Celsius scale,
with a freezing point of 0° C and a boiling point of 100° C, is widely used throughout the world, particularly for scientific work,
although it was superseded officially in 1950 by the international temperature scale. In the Fahrenheit scale, used in English-
speaking countries for purposes other than scientific work and based on the mercury thermometer, the freezing point of water is
defined as 32° F and the boiling point as 212° F (see Mercury). In the Kelvin scale, the most commonly used thermodynamic
temperature scale, zero is defined as the absolute zero of temperature, that is, -273.15° C, or -459.67° F. Another scale employing
absolute zero as its lowest point is the Rankine scale, in which each degree of temperature is equivalent to one degree on the
Fahrenheit scale. The freezing point of water on the Rankine scale is 492° R, and the boiling point is 672° R.
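
Because all four scales are linear functions of one another, conversion is simple arithmetic; a minimal Python sketch:

def celsius_to_fahrenheit(c): return c * 9 / 5 + 32
def celsius_to_kelvin(c):     return c + 273.15
def celsius_to_rankine(c):    return (c + 273.15) * 9 / 5

for c in (0, 100):  # the freezing and boiling points of water
    print(celsius_to_fahrenheit(c), celsius_to_kelvin(c), celsius_to_rankine(c))
# 32.0 273.15 491.67, then 212.0 373.15 671.67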

In 1933 scientists of 31 nations adopted a new international temperature scale with additional fixed temperature points, based on
the Kelvin scale and thermodynamic principles. The international scale is based on the property of electrical resistivity, with
platinum wire as the standard for temperature between -190° and 660° C. Above 660° C, to the melting point of gold, 1063° C, a
standard thermocouple, which is a device that measures temperature by the amount of voltage produced between two wires of
different metals, is used; beyond this point temperatures are measured by the so-called optical pyrometer, which uses the intensity
of light of a wavelength emitted by a hot body for the purpose.

In 1954 the triple point of water—that is, the point at which the three phases of water (vapor, liquid, and ice) are in equilibrium—
was adopted by international agreement as 273.16 K. The triple point can be determined with greater precision than the freezing
point and thus provides a more satisfactory fixed point for the absolute thermodynamic scale. In cryogenics, or low-temperature
research, temperatures as low as 0.003 K have been produced by the demagnetization of paramagnetic materials. Momentary
high temperatures estimated to be greater than 100,000,000 K have been achieved by nuclear explosions (see Nuclear Weapons).

IV HEAT UNITS

Heat is measured in terms of the calorie, defined as the amount of heat necessary to raise the temperature of 1 g of water at a
pressure of 1 atm from 15° to 16° C. This unit is sometimes called the small or gram calorie to distinguish it from the large
calorie, or kilocalorie, equal to 1000 cal, which is used in nutrition studies. In mechanical engineering practice in the United
States and the United Kingdom, heat is measured in British thermal units, or Btu (see British Thermal Unit). One Btu is the
quantity of heat required to raise the temperature of 1 lb of water 1° F and is equal to 252 cal. Mechanical energy can be
converted into heat by friction, and the mechanical work necessary to produce 1 cal is known as the mechanical equivalent of
heat. It is equal to 4.1855 × 10⁷ ergs/cal, or 778 ft-lb per Btu. According to the law of conservation of energy, all the mechanical
energy expended to produce heat by friction appears as energy in the objects on which the work is performed. This fact was first
conclusively proven in a classic experiment performed by Joule, who heated water in a closed vessel by means of rotating paddle
wheels and found that the rise in water temperature was proportional to the work expended in turning the wheels.
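
The unit relationships quoted above chain together directly; for example, a sketch converting British thermal units to joules
using the article's figures:

CALORIES_PER_BTU = 252.0     # from the text
JOULES_PER_CALORIE = 4.1855  # mechanical equivalent of heat

def btu_to_joules(btu):
    return btu * CALORIES_PER_BTU * JOULES_PER_CALORIE

print(btu_to_joules(1.0))  # about 1054.7 J per Btu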

If heat is converted into mechanical energy, as in an internal-combustion engine, the law of conservation of energy also applies.
In any engine, however, some energy is always lost or dissipated in the form of heat because no engine is perfectly efficient. See
Horsepower.

V LATENT HEAT

A number of physical changes are associated with the change of temperature of a substance. Almost all substances expand in
volume when heated and contract when cooled. The behavior of water between 0° and 4° C (32° and 39° F) constitutes an
important exception to this rule. The phase of a substance refers to its occurrence as either a solid, liquid, or gas, and phase
changes in pure substances occur at definite temperatures and pressures (see Phase Rule). The process of changing from solid to
gas is referred to as sublimation, from solid to liquid as melting, and from liquid to vapor as vaporization. If the pressure is
constant, these processes occur at constant temperature. The amount of heat required to produce a change of phase is called latent
heat, and hence, latent heats of sublimation, melting, and vaporization exist (see Distillation; Evaporation). If water is boiled in
an open vessel at a pressure of 1 atm, the temperature does not rise above 100° C (212° F), no matter how much heat is added.
The heat that is absorbed without changing the temperature of the water is the latent heat; it is not lost but is expended in
changing the water to steam and is then stored as energy in the steam; it is again released when the steam is condensed to form
water (see Condensation). Similarly, if a mixture of water and ice in a glass is heated, its temperature will not change until all the
ice is melted. The latent heat absorbed is used up in overcoming the forces holding the particles of ice together and is stored as
energy in the water. To melt 1 g of ice, 79.7 cal are needed, and to convert 1 g of water to steam at 100° C, 540 cal are needed.
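
The corresponding bookkeeping can be sketched in a few lines of Python, using the figures above together with the specific heat
of water (1 cal/g/° C, from the next section); the function name is ours:

HEAT_OF_MELTING = 79.7        # cal/g, ice to water at 0° C
HEAT_OF_VAPORIZATION = 540.0  # cal/g, water to steam at 100° C
SPECIFIC_HEAT_WATER = 1.0     # cal/(g·°C)

def ice_to_steam(grams):
    """Total calories to melt ice at 0° C, warm the water, and boil it at 100° C."""
    return grams * (HEAT_OF_MELTING + SPECIFIC_HEAT_WATER * 100.0 + HEAT_OF_VAPORIZATION)

print(ice_to_steam(1.0))  # 719.7 cal per gram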

VI SPECIFIC HEAT

The measure of the amount of heat required to raise the temperature of a unit mass of a substance by one degree
is known as its specific heat. If the heating process occurs while the substance is maintained at a constant volume or is subjected to a
constant pressure, the measure is referred to as the specific heat at constant volume or at constant pressure. The latter is always
larger than, or at least equal to, the former for each substance. Because 1 cal causes a rise of 1° C in 1 g of water, the specific heat
of water is 1 cal/g/° C. In the case of water and other approximately incompressible substances, it is not necessary to distinguish
between the constant-volume and constant-pressure specific heats, as they are approximately equal. Generally, the two specific
heats of a substance depend on the temperature.
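
In symbols, the heat q absorbed by a mass m of specific heat c warming through a temperature change ΔT is q = mcΔT; a
one-function Python sketch:

def heat_absorbed(mass_g, specific_heat, delta_t):
    """q = m * c * dT, in calories when c is in cal/(g·°C)."""
    return mass_g * specific_heat * delta_t

print(heat_absorbed(250.0, 1.0, 80.0))  # 20000 cal to warm 250 g of water by 80° C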

VII TRANSFER OF HEAT

The physical methods by which energy in the form of heat can be transferred between bodies are conduction and radiation. A
third method, which also involves the motion of matter, is called convection. Conduction requires physical contact between the
bodies or portions of bodies exchanging heat, but radiation does not require contact or the presence of any matter between the
bodies. Convection occurs when a liquid or gas is in contact with a solid body at a different temperature and is always
accompanied by the motion of the liquid or gas. The science dealing with the transfer of heat between bodies is called heat
transfer.

Heat Transfer

I INTRODUCTION

Heat Transfer, in physics, process by which energy in the form of heat is exchanged between bodies or parts of the same body at
different temperatures. Heat is generally transferred by convection, radiation, or conduction. Although these three processes can
occur simultaneously, it is not unusual for one mechanism to overshadow the other two. Heat, for example, is transferred by
conduction through the brick wall of a house, the surfaces of high-speed aircraft are heated by convection, and the earth receives
heat from the sun by radiation. See also Energy; Heat; Temperature.

II CONDUCTION

This is the only method of heat transfer in opaque solids. If the temperature at one end of a metal rod is raised by heating, heat is
conducted to the colder end, but the exact mechanism of heat conduction in solids is not entirely understood. It is believed,
however, to be partially due to the motion of free electrons in the solid matter, which transport energy if a temperature difference
is applied. This theory helps to explain why good electrical conductors also tend to be good heat conductors (see Conductor,
Electrical). Although the phenomenon of heat conduction had been observed for centuries, it was not until 1822 that the French
mathematician Jean Baptiste Joseph Fourier gave it precise mathematical expression in what is now regarded as Fourier's law of
heat conduction. This physical law states that the rate at which heat is conducted through a body per unit cross-sectional area is
proportional to the negative of the temperature gradient existing in the body.
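
For one-dimensional flow through a flat slab, Fourier's law reads q = -kA(dT/dx); a minimal Python sketch (the conductivity is
an assumed handbook-style figure for copper):

def conduction_rate(k, area, t_hot, t_cold, thickness):
    """Heat flow in watts through a slab, SI units throughout."""
    gradient = (t_cold - t_hot) / thickness  # negative when heat flows hot-to-cold
    return -k * area * gradient

K_COPPER = 385.0  # W/(m·K), assumed
print(conduction_rate(K_COPPER, 0.01, 100.0, 20.0, 0.5))  # about 616 W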

The proportionality factor is called the thermal conductivity of the material. Materials such as gold, silver, and copper have high
thermal conductivities and conduct heat readily, but materials such as glass and asbestos have values of thermal conductivity
hundreds and thousands of times smaller, conduct heat poorly, and are referred to as insulators (see Insulation). In engineering
applications it is frequently necessary to establish the rate at which heat will be conducted through a solid if a known temperature
difference exists across the solid. Sophisticated mathematical techniques are required to establish this, especially if the process
varies with time, the phenomenon being known as transient-heat conduction. With the aid of analog and digital computers, these
problems are now being solved for bodies of complex geometry. See Computer.
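
As a concrete illustration of such a calculation, the following minimal Python sketch (an added example with assumed, arbitrary values for the diffusivity, grid, and boundary conditions; it is not taken from the article) advances a one-dimensional transient-conduction problem with an explicit finite-difference scheme:

    import numpy as np

    alpha = 1.0e-4          # thermal diffusivity in m^2/s (assumed value)
    dx, dt = 0.01, 0.2      # grid spacing (m) and time step (s); stable since alpha*dt/dx**2 <= 0.5
    T = np.full(101, 20.0)  # 1 m rod, initially at a uniform 20 degrees C
    T[0] = 100.0            # one end suddenly held at 100 degrees C

    for step in range(5000):
        # discrete form of the heat equation dT/dt = alpha * d2T/dx2,
        # which follows from Fourier's law plus conservation of energy
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[-1] = T[-2]       # insulated far end: zero temperature gradient

    print(T[::10])          # approximate temperature profile along the rod

Each run prints the temperature profile as heat gradually penetrates the rod.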

III CONVECTION

Conduction occurs not only within a body but also between two bodies if they are brought into contact, and if one of the
substances is a liquid or a gas, then fluid motion will almost certainly occur. This process of conduction between a solid surface
and a moving liquid or gas is called convection. The motion of the fluid may be natural or forced. If a liquid or gas is heated, its
mass per unit volume generally decreases. If the liquid or gas is in a gravitational field, the hotter, lighter fluid rises while the
colder, heavier fluid sinks. This kind of motion, due solely to nonuniformity of fluid temperature in the presence of a
gravitational field, is called natural convection (see Gravitation). Forced convection is achieved by subjecting the fluid to a pressure gradient and thereby forcing motion to occur according to the laws of fluid mechanics.

If, for example, water in a pan is heated from below, the liquid closest to the bottom expands and its density decreases; the hot
water as a result rises to the top and some of the cooler fluid descends toward the bottom, thus setting up a circulatory motion.
Similarly, in a vertical gas-filled chamber, such as the air space between two window panes in a double-glazed, or Thermopane,
window, the air near the cold outer pane will move down and the air near the inner, warmer pane will rise, leading to a
circulatory motion.

The heating of a room by a radiator depends less on radiation than on natural convection currents, the hot air rising upward along the wall and cooler air returning to the radiator along the floor. Because of the tendencies of hot air to rise and of
cool air to sink, radiators should be placed near the floor and air-conditioning outlets near the ceiling for maximum efficiency.
Natural convection is also responsible for the rising of the hot water and steam in natural-convection boilers (see Boiler) and for
the draft in a chimney. Convection also determines the movement of large air masses above the earth, the action of the winds,
rainfall, ocean currents, and the transfer of heat from the interior of the sun to its surface.

IV RADIATION

This process is fundamentally different from both conduction and convection in that the substances exchanging heat need not be
in contact with each other. They can, in fact, be separated by a vacuum. Radiation is a term generally applied to all kinds of
electromagnetic-wave phenomena (see Electromagnetic Radiation). Some radiation phenomena can be described in terms of
wave theory (see Wave Motion), and others can be explained in terms of quantum theory. Neither theory, however, completely
explains all experimental observations. The German-born American physicist Albert Einstein conclusively demonstrated (1905) the quantized behavior of radiant energy in his classic analysis of the photoelectric effect. Even before Einstein's work the quantized
nature of radiant energy had been postulated, and the German physicist Max Planck used quantum theory and the mathematical
formalism of statistical mechanics to derive (1900) a fundamental law of radiation (see Statistics). The mathematical expression
of this law, called Planck's distribution, relates the intensity or strength of radiant energy emitted by a body to the temperature of
the body and the wavelength of radiation. This distribution gives the maximum amount of radiant energy that can be emitted by a body at a particular temperature. Only an ideal body (a blackbody) emits such radiation according to Planck's law. Real bodies emit at a
somewhat reduced intensity. The contribution of all frequencies to the radiant energy emitted by a body is called the emissive
power of the body, the amount of energy emitted by a unit surface area of a body per unit of time. As can be shown from Planck's
law, the emissive power of a surface is proportional to the fourth power of the absolute temperature. The proportionality factor is
called the Stefan-Boltzmann constant after two Austrian physicists, Joseph Stefan and Ludwig Boltzmann, who, in 1879 and
1884, respectively, discovered the fourth power relationship for the emissive power. According to Planck's law, all substances
emit radiant energy merely by virtue of having a positive absolute temperature. The higher the temperature, the greater the
amount of energy emitted. In addition to emitting, all substances are capable of absorbing radiation. Thus, although an ice cube is
continuously emitting radiant energy, it will melt if an incandescent lamp is focused on it because it will be absorbing a greater
amount of heat than it is emitting.
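
In symbols (a standard statement, not quoted from this article), the fourth-power relationship is E = σT⁴, where E is the emissive power of a blackbody, T is the absolute temperature, and σ, the Stefan-Boltzmann constant, is about 5.67 × 10⁻⁸ W/(m²·K⁴). A blackbody surface at 300 K (roughly room temperature), for example, radiates about 5.67 × 10⁻⁸ × 300⁴ ≈ 460 watts per square meter.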

Opaque surfaces can absorb or reflect incident radiation. Generally, dull, rough surfaces absorb more heat than bright, polished
surfaces, and bright surfaces reflect more radiant energy than dull surfaces. In addition, good absorbers are also good emitters;
good reflectors, or poor absorbers, are poor emitters. Thus, cooking utensils generally have dull bottoms for good absorption and
polished sides for minimum emission to maximize the net heat transfer into the contents of the pot. Some substances, such as
gases and glass, are capable of transmitting large amounts of radiation. It is experimentally observed that the absorbing,
reflecting, and transmitting properties of a substance depend upon the wavelength of the incident radiation. Glass, for example,
transmits large amounts of short-wavelength (visible and near-ultraviolet) radiation, but is a poor transmitter of long-wavelength (infrared)
radiation (see Infrared Radiation; Ultraviolet Radiation). A consequence of Planck's distribution is that the wavelength at which
the maximum amount of radiant energy is emitted by a body decreases as the temperature increases. Wien's displacement law,
named after the German physicist Wilhelm Wien, is a mathematical expression of this observation and states that the wavelength
of maximum energy, expressed in micrometers (millionths of a meter), multiplied by the Kelvin temperature of the body is equal to a constant, about 2,898. Most of the energy radiated by the sun, therefore, is characterized by short wavelengths. This fact, together
with the transmitting properties of glass mentioned above, explains the greenhouse effect. Radiant energy from the sun is
transmitted through the glass and enters the greenhouse. The energy emitted by the contents of the greenhouse, however, which
emit primarily at infrared wavelengths, is not transmitted out through the glass. Thus, although the air temperature outside the
greenhouse may be low, the temperature inside the greenhouse will be much higher because there is a sizable net heat transfer
into it.
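
As a worked example (not from the original text), the surface of the sun is at roughly 5,800 K, so Wien's law gives a wavelength of maximum emission of about 2,898/5,800 ≈ 0.5 micrometer, in the visible part of the spectrum; a room-temperature object at 300 K instead peaks near 2,898/300 ≈ 10 micrometers, deep in the infrared.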

In addition to heat transfer processes that result in raising or lowering temperatures of the participating bodies, heat transfer can
also produce phase changes such as the melting of ice or the boiling of water. In engineering, heat transfer processes are usually
designed to take advantage of these phenomena. In the case of space capsules reentering the atmosphere of the earth at very high
speed, a heat shield that melts in a prescribed manner by the process called ablation is provided to prevent overheating of the
interior of the capsule. Essentially, the frictional heating produced by the atmosphere is used to melt the heat shield and not to
raise the temperature of the capsule (see Friction).

Thermodynamics
I INTRODUCTION

Thermodynamics, field of physics that describes and correlates the physical properties of macroscopic systems of matter and
energy. The principles of thermodynamics are of fundamental importance to all branches of science and engineering.

A central concept of thermodynamics is that of the macroscopic system, defined as a geometrically isolable piece of matter in
coexistence with an infinite, unperturbable environment. The state of a macroscopic system in equilibrium can be described in
terms of such measurable properties as temperature, pressure, and volume, which are known as thermodynamic variables. Many
other variables (such as density, specific heat, compressibility, and the coefficient of thermal expansion) can be identified and
correlated, to produce a more complete description of an object and its relationship to its environment.

When a macroscopic system moves from one state of equilibrium to another, a thermodynamic process is said to take place.
Some processes are reversible and others are irreversible. The laws of thermodynamics, discovered in the 19th century through
painstaking experimentation, govern the nature of all thermodynamic processes and place limits on them.

II ZEROTH LAW OF THERMODYNAMICS

The vocabulary of empirical sciences is often borrowed from daily language. Thus, although the term temperature appeals to
common sense, its meaning suffers from the imprecision of nonmathematical language. A precise, though empirical, definition of
temperature is provided by the so-called zeroth law of thermodynamics as explained below.

When two systems are in thermal equilibrium with each other, they share a certain property. This property can be measured and a definite numerical value ascribed to it. A consequence of this fact is the zeroth law of thermodynamics, which states that when each of two systems is in thermal equilibrium with a third, the first two must also be in thermal equilibrium with each other. This shared property of equilibrium is
the temperature.

If any such system is placed in contact with an infinite environment that exists at some certain temperature, the system will
eventually come into equilibrium with the environment—that is, reach the same temperature. (The so-called infinite environment
is a mathematical abstraction called a thermal reservoir; in reality the environment need only be large relative to the system being
studied.)

Temperatures are measured with devices called thermometers (see Thermometer). A thermometer contains a substance with
conveniently identifiable and reproducible states, such as the normal boiling and freezing points of pure water. If a graduated
scale is marked between two such states, the temperature of any system can be determined by having that system brought into
thermal contact with the thermometer, provided that the system is large relative to the thermometer.

III FIRST LAW OF THERMODYNAMICS

The first law of thermodynamics gives a precise definition of heat, another commonly used concept.

When an object is brought into contact with a relatively colder object, a process takes place that brings about an equalization of
temperatures of the two objects. To explain this phenomenon, 18th-century scientists hypothesized that a substance more
abundant at higher temperature flowed toward the region at a lower temperature. This hypothetical substance, called “caloric,”
was thought to be a fluid capable of moving through material media. The first law of thermodynamics instead identifies caloric,
or heat, as a form of energy. It can be converted into mechanical work, and it can be stored, but is not a material substance. Heat,
measured originally in terms of a unit called the calorie, and work and energy, measured in ergs, were shown by experiment to be
totally equivalent. One calorie is equivalent to 4.186 × 10⁷ ergs, or 4.186 joules.

The first law, then, is a law of energy conservation. It states that, because energy cannot be created or destroyed—setting aside
the later ramifications of the equivalence of mass and energy (see Nuclear Energy)—the amount of heat transferred into a system
plus the amount of work done on the system must result in a corresponding increase of internal energy in the system. Heat and
work are mechanisms by which systems exchange energy with one another.
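
Written symbolically (a standard formulation consistent with the statement above, not quoted from the article), the first law reads ΔU = Q + W, where ΔU is the change in the internal energy of the system, Q is the heat transferred into the system, and W is the work done on the system.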

In any machine, the work delivered must be paid for by energy supplied; no machine can exist that performs work without consuming energy. Such a hypothetical machine (one that would perform work indefinitely with no input of energy) is termed a “perpetual-motion machine of the first kind.” Since the energy accounting must include heat (and, in a broader sense, chemical, electrical, nuclear, and other forms of energy as well), the law of energy conservation rules out the possibility of such a machine ever being invented. The first law is therefore sometimes stated as a principle that precludes the existence of perpetual-motion machines of the first kind.

IV SECOND LAW OF THERMODYNAMICS

The second law of thermodynamics gives a precise definition of a property called entropy. Entropy can be thought of as a
measure of how close a system is to equilibrium; it can also be thought of as a measure of the disorder in the system. The law
states that the entropy—that is, the disorder—of an isolated system can never decrease. Thus, when an isolated system achieves a
configuration of maximum entropy, it can no longer undergo change: It has reached equilibrium. Nature, then, seems to “prefer”
disorder or chaos. It can be shown that the second law stipulates that, in the absence of work, heat cannot be transferred from a
region at a lower temperature to one at a higher temperature.

The second law poses an additional condition on thermodynamic processes. It is not enough to conserve energy and thus obey the
first law. A machine that would deliver work while violating the second law is called a “perpetual-motion machine of the second
kind,” since, for example, energy could then be continually drawn from a cold environment to do work in a hot environment at no
cost. The second law of thermodynamics is sometimes given as a statement that precludes perpetual-motion machines of the
second kind.

V THERMODYNAMIC CYCLES

All important thermodynamic relations used in engineering are derived from the first and second laws of thermodynamics. One
useful way of discussing thermodynamic processes is in terms of cycles—processes that return a system to its original state after
a number of stages, thus restoring the original values for all the relevant thermodynamic variables. Because the internal energy of a system depends solely on these variables, it cannot change over a complete cycle. Thus, the total net heat transferred to the system must equal the total net work delivered by the system.

An ideal cycle would be performed by a perfectly efficient heat engine—that is, all the heat would be converted to mechanical
work. The 19th-century French scientist Nicolas Léonard Sadi Carnot, who conceived a thermodynamic cycle that is the basic
cycle of all heat engines, showed that such an ideal engine cannot exist. Any heat engine must expend some fraction of its heat
input as exhaust. The second law of thermodynamics places an upper limit on the efficiency of engines; that upper limit is less
than 100 percent. The limiting case is now known as a Carnot cycle.
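
The limit can be made quantitative (a standard result, stated here for illustration): a Carnot engine operating between a hot reservoir at absolute temperature Th and a cold reservoir at Tc has the maximum possible efficiency, 1 − Tc/Th. An engine running between 600 K and 300 K, for example, can convert at most 1 − 300/600 = 50 percent of its heat input into work, however well it is built.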

VI THIRD LAW OF THERMODYNAMICS

The second law suggests the existence of an absolute temperature scale that includes an absolute zero of temperature. The third
law of thermodynamics states that absolute zero cannot be attained by any procedure in a finite number of steps. Absolute zero
can be approached arbitrarily closely, but it can never be reached.

VII MICROSCOPIC BASIS OF THERMODYNAMICS


The recognition that all matter is made up of molecules provided a microscopic foundation for thermodynamics. A thermodynamic system consisting of a pure substance can be described as a collection of like molecules, each with its individual
motion describable in terms of such mechanical variables as velocity and momentum. At least in principle, it should therefore be
possible to derive the collective properties of the system by solving equations of motion for the molecules. In this sense,
thermodynamics could be regarded as a mere application of the laws of mechanics to the microscopic system.

Objects of ordinary size—that is, ordinary on the human scale—contain immense numbers (on the order of 10²⁴) of molecules.
Assuming the molecules to be spherical, each would need three variables to describe its position and three more to describe its
velocity. Describing a macroscopic system in this way would be a task that even the largest modern computer could not manage.
A complete solution of these equations, furthermore, would tell us where each molecule is and what it is doing at every moment.
Such a vast quantity of information would be too detailed to be useful and too transient to be important.

Statistical methods were devised therefore to obtain averages of the mechanical variables of the molecules in a system and to
provide the gross features of the system. These gross features turn out to be, precisely, the macroscopic thermodynamic variables.
The statistical treatment of molecular mechanics is called statistical mechanics, and it anchors thermodynamics to mechanics.

Viewed from the statistical perspective, temperature represents a measure of the average kinetic energy of the molecules of a
system. Increases in temperature reflect increases in the vigor of molecular motion. When two systems are in contact, energy is
transferred between molecules as a result of collisions. The transfer will continue until uniformity is achieved, in a statistical
sense, which corresponds to thermal equilibrium. The kinetic energy of the molecules also corresponds to heat and—together
with the potential energy arising from interaction between molecules—makes up the internal energy of a system.

The conservation of energy, a well-known law of mechanics, translates readily to the first law of thermodynamics, and the
concept of entropy translates into the extent of disorder on the molecular scale. By assuming that all combinations of molecular
motion are equally likely, statistical mechanics shows that the more disordered the state of an isolated system, the more combinations
can be found that could give rise to that state, and hence the more frequently it will occur. The probability of the more disordered
state occurring overwhelms the probability of the occurrence of all other states. This probability provides a statistical basis for
definitions of both equilibrium state and entropy.
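
This connection is captured by Boltzmann’s relation (a standard result not printed in this article): S = k ln W, where S is the entropy of a state, W is the number of equally likely molecular combinations that give rise to it, and k, Boltzmann’s constant, is about 1.38 × 10⁻²³ joule per kelvin.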

Finally, temperature can be reduced by taking energy out of a system, that is, by reducing the vigor of molecular motion.
Absolute zero corresponds to the state of a system in which all its constituents are at rest. This is, however, a notion from
classical physics. In terms of quantum mechanics, residual molecular motion will exist even at absolute zero. An analysis of the
statistical basis of the third law goes beyond the scope of the present discussion.

Quantum Theory
I INTRODUCTION

Quantum Theory, in physics, description of the particles that make up matter and of how they interact with each other and with energy. Quantum theory explains, in principle, how to calculate what will happen in any experiment involving physical or biological systems, and thus provides a way to understand how our world works. The name “quantum theory” comes from the fact that the theory
describes the matter and energy in the universe in terms of single indivisible units called quanta (singular quantum). Quantum
theory is different from classical physics. Classical physics is an approximation of the set of rules and equations in quantum
theory. Classical physics accurately describes the behavior of matter and energy in the everyday universe. For example, classical
physics explains the motion of a car accelerating or of a ball flying through the air. Quantum theory, on the other hand, can
accurately describe the behavior of the universe on a much smaller scale, that of atoms and smaller particles. The rules of
classical physics do not explain the behavior of matter and energy on this small scale. Quantum theory is more general than
classical physics, and in principle, it could be used to predict the behavior of any physical, chemical, or biological system.
However, explaining the behavior of the everyday world with quantum theory is too complicated to be practical.

Quantum theory not only specifies new rules for describing the universe but also introduces new ways of thinking about matter
and energy. The tiny particles that quantum theory describes do not have defined locations, speeds, and paths like objects
described by classical physics. Instead, quantum theory describes positions and other properties of particles in terms of the
chances that the property will have a certain value. For example, it allows scientists to calculate how likely it is that a particle will
be in a certain position at a certain time.

Quantum description of particles allows scientists to understand how particles combine to form atoms. Quantum description of
atoms helps scientists understand the chemical and physical properties of molecules, atoms, and subatomic particles. Quantum
theory enabled scientists to understand the conditions of the early universe, how the Sun shines, and how atoms and molecules
determine the characteristics of the material that they make up. Without quantum theory, scientists could not have developed
nuclear energy or the electric circuits that provide the basis for computers.

Quantum theory describes all of the fundamental forces—except gravitation—that physicists have found in nature. The forces that quantum theory describes are the electromagnetic (comprising the electrical and the magnetic), the weak, and the strong. Physicists often refer to these forces as
interactions, because the forces control the way particles interact with each other. Interactions also affect spontaneous changes in
isolated particles.

II WAVES AND PARTICLES

One of the striking differences between quantum theory and classical physics is that quantum theory describes energy and matter
both as waves and as particles. The type of energy physicists study most often with quantum theory is light. Classical physics
considers light to be only a wave, and it treats matter strictly as particles. Quantum theory acknowledges that both light and
matter can behave like waves and like particles.

It is important to understand how scientists describe the properties of waves in order to understand how waves fit into quantum
theory. A familiar type of wave occurs when a rope is tied to a solid object and someone moves the free end up and down. Waves
travel along the rope. The highest points on the rope are called the crests of the waves. The lowest points are called troughs. One
full wave consists of a crest and a trough. The distance from crest to crest or from trough to trough—or from any point on one
wave to the identical point on the next wave—is called the wavelength. The frequency of the waves is the number of waves per
second that pass by a given point along the rope.

If waves traveling down the rope hit the stationary end and bounce back, like water waves bouncing against a wall, two waves on
the rope may meet each other, hitting the same place on the rope at the same time. These two waves will interfere, or combine
(see Interference). If the two waves exactly line up—that is, if the crests and troughs of the waves line up—the waves interfere
constructively. This means that the trough of the combined wave is deeper and the crest is higher than those of the waves before
they combined. If the two waves are offset by exactly half of a wavelength, the trough of one wave lines up with the crest of the
other. This alignment creates destructive interference—the two waves cancel each other out and a momentary flat spot appears on
the rope. See also Wave Motion.

A Light as a Wave and as a Particle

Like classical physics, quantum theory sometimes describes light as a wave, because light behaves like a wave in many
situations. Light is not a vibration of a solid substance, such as a rope. Instead, a light wave is made up of a vibration in the
intensity of the electric and magnetic fields that surround any electrically charged object.

Like the waves moving along a rope, light waves travel and carry energy. The amount of energy depends on the frequency of the
light waves: the higher the frequency, the higher the energy. The frequency of a light wave is also related to the color of the light.
For example, blue light has a higher frequency than that of red light. Therefore, a beam of blue light has more energy than an
equally intense beam of red light has.

Unlike classical physics, quantum theory also describes light as a particle. Scientists revealed this aspect of light behavior in
several experiments performed during the early 20th century. In one experiment, physicists discovered an interaction between
light and particles in a metal. They called this interaction the photoelectric effect. In the photoelectric effect, a beam of light
shining on a piece of metal makes the metal emit electrons. The light adds energy to the metal’s electrons, giving them enough
energy to break free from the metal. If light was made strictly of waves, each electron in the metal could absorb many continuous
waves of light and gain more and more energy. Increasing the intensity of the light, or adding more light waves, would add more
energy to the emitted electrons. Shining a more and more intense beam of light on the metal would make the metal emit electrons
with more and more energy.

Scientists found, however, that shining a more intense beam of light on the metal just made the metal emit more electrons. Each
of these electrons had the same energy as that of electrons emitted with low-intensity light. The electrons could not be interacting
with waves, because adding more waves did not add more energy to the electrons. Instead, each electron had to be interacting
with just a small piece of the light beam. These pieces were like little packets of light energy, or particles of light. The size, or
energy, of each packet depended only on the frequency, or color, of the light—not on the intensity of the light. A more intense
beam of light just had more packets of light energy, but each packet contained the same amount of energy. Individual electrons
could absorb one packet of light energy and break free from the metal. Increasing the intensity of the light added more packets of
energy to the beam and enabled a greater number of electrons to break free—but each of these emitted electrons had the same
amount of energy. Scientists could only change the energy of the emitted electrons by changing the frequency, or color, of the
beam. Changing from red light to blue light, for example, increased the energy of each packet of light. In this case, each emitted
electron absorbed a bigger packet of light energy and had more energy after it broke free of the metal. Using these results,
physicists developed a model of light as a particle, and they called these light particles photons.

In 1922 American physicist Arthur Compton discovered another interaction, now called the Compton effect, that reveals the
particle nature of light. In the Compton effect, light collides with an electron. The collision knocks the electron off course and
changes the frequency, and therefore energy, of the light. The light affects the electron in the same way a particle with
momentum would: It bumps the electron and changes the electron’s path. The light is also affected by the collision as though it were a particle, in that its energy and momentum change.

Momentum is a quantity that can be defined for all particles. For light particles, or photons, momentum depends on the
frequency, or color, of the photon, which in turn depends on the photon’s energy. The energy of a photon is equal to a constant
number, called Planck’s constant, times the frequency of the photon. Planck’s constant is named for German physicist Max
Planck, who first proposed the relationship between energy and frequency. The accepted value of Planck’s constant is 6.626 × 10⁻³⁴ joule-second. This number is very small—written out, it is a decimal point followed by 33 zeroes, followed by the digits 6626. The energy of a single photon is therefore very small.
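
The following short Python sketch (an added illustration, not part of the original article; the two wavelengths are assumed typical values for red and blue light) evaluates the photon energy E = hf and momentum p = h/λ:

    # Photon energy and momentum from Planck's relations E = h*f and p = h/lambda.
    h = 6.626e-34   # Planck's constant, joule-seconds
    c = 3.0e8       # speed of light, m/s

    for color, wavelength in [("red", 700e-9), ("blue", 450e-9)]:
        f = c / wavelength   # frequency in hertz
        print(f"{color}: E = {h * f:.2e} J, p = {h / wavelength:.2e} kg m/s")

As the text explains, the blue photon comes out with more energy (about 4.4 × 10⁻¹⁹ joule) than the red one (about 2.8 × 10⁻¹⁹ joule).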

The dual nature of light seems puzzling because we have no everyday experience with wave-particle duality. Waves are everyday
phenomena; we are all familiar with waves on a body of water or on a vibrating rope. Particles, too, are everyday objects—
baseballs, cars, buildings, and even people can be thought of as particles. But to our senses, there are no everyday objects that are
both waves and particles. Scientists increasingly find that the rules that apply to the world we see are only approximations of the
rules that govern the unseen world of light and subatomic particles.

B Matter as Waves and Particles

In 1923 French physicist Louis de Broglie suggested that all particles—not just photons—have both wave and particle properties.
He calculated that every particle has a wavelength (represented by λ, the Greek letter lambda) equal to Planck’s constant (h)
divided by the momentum (p) of the particle: λ = h/p. Electrons, atoms, and all other particles have de Broglie wavelengths. The
momentum of an object depends on its speed and mass, so the faster and heavier an object is, the larger its momentum (p) will be.
Because Planck’s constant (h) is an extremely tiny number, the de Broglie wavelength (h/p) of any visible object is exceedingly
small. In fact, the de Broglie wavelength of anything much larger than an atom is smaller than the size of one of its atoms. For
example, the de Broglie wavelength of a baseball moving at 150 km/h (90 mph) is 1.1 × 10⁻³⁴ m (3.6 × 10⁻³⁴ ft). The diameter of a hydrogen atom (the simplest and smallest atom) is about 5 × 10⁻¹¹ m (about 2 × 10⁻¹⁰ ft), more than 100 billion trillion times larger than the de Broglie wavelength of the baseball. The de Broglie wavelengths of everyday objects are so tiny that the wave nature
of these objects does not affect their visible behavior, so their wave-particle duality is undetectable to us.
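
A minimal Python sketch (an added example; the baseball mass and the electron speed are assumed round values) makes the contrast concrete by evaluating λ = h/p for both objects:

    # de Broglie wavelength: lambda = h / (m * v)
    h = 6.626e-34                      # Planck's constant, J*s

    m_ball, v_ball = 0.145, 150 / 3.6  # a 145 g baseball at 150 km/h
    m_e, v_e = 9.109e-31, 1.0e6        # an electron at an assumed 10^6 m/s

    for name, m, v in [("baseball", m_ball, v_ball), ("electron", m_e, v_e)]:
        print(name, h / (m * v), "m")

The baseball’s wavelength reproduces the 1.1 × 10⁻³⁴ m quoted above, while the electron’s comes out near 7 × 10⁻¹⁰ m, comparable to atomic dimensions, which is why wave behavior matters for electrons but not for baseballs.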

De Broglie wavelengths become important when the mass, and therefore momentum, of particles is very small. Particles the size
of atoms and electrons have demonstrable wavelike properties. One of the most dramatic and interesting demonstrations of the
wave behavior of electrons comes from the double-slit experiment. This experiment consists of a barrier set between a source of
electrons and an electron detector. The barrier contains two slits, each about the width of the de Broglie wavelength of an
electron. On this small scale, the wave nature of electrons becomes evident, as described in the following paragraphs.

Scientists can determine whether the electrons are behaving like waves or like particles by comparing the results of double-slit
experiments with those of similar experiments performed with visible waves and particles. To establish how visible waves
behave in a double-slit apparatus, physicists can replace the electron source with a device that creates waves in a tank of water.
The slits in the barrier are about as wide as the wavelength of the water waves. In this experiment, the waves spread out
spherically from the source until they hit the barrier. The waves pass through the slits and spread out again, producing two new
wave fronts with centers as far apart as the slits are. These two new sets of waves interfere with each other as they travel toward
the detector at the far end of the tank.

The waves interfere constructively in some places (adding together) and destructively in others (canceling each other out). The
most intense waves—that is, those formed by the most constructive interference—hit the detector at the spot opposite the
midpoint between the two slits. These strong waves form a peak of intensity on the detector. On either side of this peak, the
waves destructively interfere and cancel each other out, creating a low point in intensity. Further out from these low points, the
waves are weaker, but they constructively interfere again and create two more peaks of intensity, smaller than the large peak in
the middle. The intensity then drops again as the waves destructively interfere. The intensity of the waves forms a symmetrical
pattern on the detector, with a large peak directly across from the midpoint between the slits and alternating low points and
smaller and smaller peaks on either side.

To see how particles behave in the double-slit experiment, physicists replace the water with marbles. The barrier slits are about
the width of a marble, as the point of this experiment is to allow particles (in this case, marbles) to pass through the barrier. The
marbles are put in motion and pass through the barrier, striking the detector at the far end of the apparatus. The results show that
the marbles do not interfere with each other or with themselves like waves do. Instead, the marbles strike the detector most
frequently in the two points directly opposite each slit.

When physicists perform the double-slit experiment with electrons, the detection pattern matches that produced by the waves, not
the marbles. These results show that electrons do have wave properties. However, if scientists run the experiment using a barrier
whose slits are much wider than the de Broglie wavelength of the electrons, the pattern resembles the one produced by the
marbles. This shows that tiny particles such as electrons behave as waves in some circumstances and as particles in others.

C Uncertainty Principle

Before the development of quantum theory, physicists assumed that, with perfect equipment in perfect conditions, measuring any
physical quantity as accurately as desired was possible. Quantum mechanical equations show that accurate measurement of both
the position and the momentum of a particle at the same time is impossible. This rule is called Heisenberg’s uncertainty principle
after German physicist Werner Heisenberg, who derived it from other rules of quantum theory. The uncertainty principle means
that as physicists measure a particle’s position with more and more accuracy, the momentum of the particle becomes less and less
precise, or more and more uncertain, and vice versa.

Heisenberg formally stated his principle by describing the relationship between the uncertainty in the measurement of a particle’s
position and the uncertainty in the measurement of its momentum. Heisenberg said that the uncertainty in position (represented
by Δx) times the uncertainty in momentum (represented by Δp) must be greater than a constant number equal to Planck’s constant (h) divided by 4π (π is a constant approximately equal to 3.14). Mathematically, the uncertainty principle can be written as Δx Δp > h/4π. This relationship means that as a scientist measures a particle’s position more and more accurately—so the
uncertainty in its position becomes very small—the uncertainty in its momentum must become large to compensate and make this
expression true. Likewise, if the uncertainty in momentum, Δp, becomes small, Δx must become large to make the expression
true.
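
To give a sense of scale, this illustrative Python sketch (with an assumed confinement distance, not a value from the article) computes the smallest momentum uncertainty allowed when an electron’s position is known to about one atomic diameter:

    import math

    h = 6.626e-34        # Planck's constant, J*s
    delta_x = 1.0e-10    # position uncertainty: about one atomic diameter, m (assumed)

    delta_p = h / (4 * math.pi * delta_x)  # minimum allowed by delta_x * delta_p > h/(4*pi)
    m_e = 9.109e-31                        # electron mass, kg
    print(delta_p, "kg m/s, i.e. a velocity uncertainty of", delta_p / m_e, "m/s")

The resulting velocity uncertainty is several hundred kilometers per second, which is why the uncertainty principle dominates physics at the atomic scale yet goes unnoticed for everyday objects.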

One way to understand the uncertainty principle is to consider the dual wave-particle nature of light and matter. Physicists can
measure the position and momentum of an atom by bouncing light off of the atom. If they treat the light as a wave, they have to
consider a property of waves called diffraction when measuring the atom’s position. Diffraction occurs when waves encounter an
object—the waves bend around the object instead of traveling in a straight line. If the length of the waves is much shorter than
the size of the object, the bending of the waves just at the edges of the object is not a problem. Most of the waves bounce back
and give an accurate measurement of the object’s position. If the length of the waves is close to the size of the object, however,
most of the waves diffract, making the measurement of the object’s position fuzzy. Physicists must bounce shorter and shorter
waves off an atom to measure its position more accurately. Using shorter wavelengths of light, however, increases the uncertainty
in the measurement of the atom’s momentum.

Light carries energy and momentum, because of its particle nature (described in the Compton effect). Photons that strike the atom
being measured will change the atom’s energy and momentum. The fact that measuring an object also affects the object is an
important principle in quantum theory. Normally the effect is so small it does not matter, but on the small scale of atoms, it becomes important. The bump to the atom increases the uncertainty in the measurement of the atom’s momentum. Light with more energy and momentum will knock the atom harder and create more uncertainty. The momentum of light is equal to Planck’s
constant divided by the light’s wavelength, or p = h/λ. Physicists can increase the wavelength to decrease the light’s momentum
and measure the atom’s momentum more accurately. Because of diffraction, however, increasing the light’s wavelength increases
the uncertainty in the measurement of the atom’s position. Physicists most often use the uncertainty principle that describes the
relationship between position and momentum, but a similar and important uncertainty relationship also exists between the
measurement of energy and the measurement of time.

III PROBABILITY AND WAVE FUNCTIONS

Quantum theory gives exact answers to many questions, but it can only give probabilities for some values. A probability is the
likelihood of an answer being a certain value. Probability is often represented by a graph, with the highest point on the graph
representing the most likely value and the lowest representing the least likely value. An example is the graph showing the likelihood of finding the electron of a hydrogen atom at a certain distance from the nucleus. With the nucleus of the atom at the left of the graph, the probability of finding the electron very near the nucleus is very low. The probability rises to a definite peak, marking the distance at which the electron is most likely to be.

Scientists use a mathematical expression called a wave function to describe the characteristics of a particle that are related to time
and space—such as position and velocity. The wave function helps determine the probability of these aspects being certain
values. The wave function of a particle is not the same as the wave suggested by wave-particle duality. A wave function is strictly
a mathematical way of expressing the characteristics of a particle. Physicists usually represent these types of wave functions with
the Greek letter psi, Ψ. The wave function of the electron in the ground state of a hydrogen atom is Ψ(r) = (1/√(πa³)) e^(−r/a), where r is the distance of the electron from the nucleus. The symbol π and the letter e in this equation represent constant numbers derived from mathematics. The letter a is also a constant number called the Bohr radius for the hydrogen atom. The square of a wave function, or a wave function multiplied by
itself, is equal to the probability density of the particle that the wave function describes. The probability density of a particle
gives the likelihood of finding the particle at a certain point.

The wave function described above does not depend on time. An isolated hydrogen atom does not change over time, so leaving
time out of the atom’s wave function is acceptable. For particles in systems that change over time, physicists use wave functions
that depend on time. This lets them calculate how the system and the particle’s properties change over time. Physicists represent a
time-dependent wave function with Ψ(t), where t represents time.

The wave function for a single atom can only reveal the probability that an atom will have a certain set of characteristics at a
certain time. Physicists call the set of characteristics describing an atom the state of the atom. The wave function cannot describe
the actual state of the atom, just the probability that the atom will be in a certain state.

The wave functions of individual particles can be added together to create a wave function for a system, so quantum theory
allows physicists to examine many particles at once. The rules of probability state that probabilities and actual values match
better and better as the number of particles (or dice thrown, or coins tossed, whatever the probability refers to) increases.
Therefore, if physicists consider a large group of atoms, the wave function for the group of atoms provides information that is
more definite and useful than that provided by the wave function of a single atom.

An example of a wave function for a single atom is one that describes an atom that has absorbed some energy. The energy has
boosted the atom’s electrons to a higher energy level, and the atom is said to be in an excited state. It can return to its normal
ground state (or lowest energy state) by emitting energy in the form of a photon. Scientists call the wave function of the initial
excited state Ψi (with “i” indicating it is the initial state) and the wave function of the final ground state Ψf (with “f” representing the final state). To describe the atom’s state over time, they multiply Ψi by some function, a(t), that decreases with time, because the chances of the atom being in this excited state decrease with time. They multiply Ψf by some function, b(t), that increases
with time, because the chances of the atom being in this state increase with time. The physicist completing the calculation chooses a(t) and b(t) based on the characteristics of the system. The complete wave function for the transition is the following:

Ψ = a(t) Ψi + b(t) Ψf.

At any time, the rules of probability state that the probability of finding a single atom in either state is found by multiplying the
coefficient of its wave function (a(t) or b(t)) by itself. For one atom, this does not give a very satisfactory answer. Even though
the physicist knows the probability of finding the atom in one state or the other, whether or not reality will match probability is
still far from certain. This idea is similar to rolling a pair of dice. There is a 1 in 6 chance that the roll will add up to seven, which
is the most likely sum. Each roll is random, however, and not connected to the rolls before it. It would not be surprising if ten
rolls of the dice failed to yield a sum of seven. However, the actual number of times that seven appears matches probability better
as the number of rolls increases. For one million or one billion rolls of the dice, one of every six rolls would almost certainly add
up to seven.

Similarly, for a large number of atoms, the probabilities become approximate percentages of atoms in each state, and these
percentages become more accurate as the number of atoms observed increases. For example, if the square of a(t) at a certain time
is 0.2, then in a very large sample of atoms, 20 percent (0.2) of the atoms will be in the initial state and 80 percent (0.8) will be in
the final state.
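
A small Python sketch (an added illustration; the 0.2 and 0.8 probabilities follow the example above) shows how the observed fractions settle toward these probabilities as the number of atoms grows:

    import random

    p_initial = 0.2  # square of a(t): the chance an atom is still in the excited state
    for n_atoms in (10, 1000, 100000):
        excited = sum(random.random() < p_initial for _ in range(n_atoms))
        print(n_atoms, "atoms:", excited / n_atoms, "found in the initial state")

With 10 atoms the measured fraction can easily stray far from 0.2, but with 100,000 atoms it rarely differs from 0.2 by more than a fraction of a percent.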

One of the most puzzling results of quantum mechanics is the effect of measurement on a quantum system. Before a scientist
measures the characteristics of a particle, its characteristics do not have definite values. Instead, they are described by a wave
function, which gives the characteristics only as fuzzy probabilities. In effect, the particle does not exist in an exact location until
a scientist measures its position. Measuring the particle fixes its characteristics at specific values, effectively “collapsing” the
particle’s wave function. The particle’s position is no longer fuzzy, so the wave function that describes it has one high, sharp
peak of probability. If the position of a particle has just been measured, the graph of its probability density shows a single tall, narrow spike at the measured position and is essentially zero everywhere else.

In the 1930s physicists proposed an imaginary experiment to demonstrate how measurement causes complications in quantum
mechanics. They imagined a system that contained two particles with opposite values of spin, a property of particles that is
analogous to angular momentum. The physicists can know that the two particles have opposite spins by setting the total spin of
the system to be zero. They can measure the total spin without directly measuring the spin of either particle. Because they have
not yet measured the spin of either particle, the spins do not actually have defined values. They exist only as fuzzy probabilities.
The spins only take on definite values when the scientists measure them.

In this hypothetical experiment the scientists do not measure the spin of each particle right away. They send the two particles,
called an entangled pair, off in opposite directions until they are far apart from each other. The scientists then measure the spin of
one of the particles, fixing its value. Instantaneously, the spin of the other particle becomes known and fixed. It is no longer a
fuzzy probability but must be the opposite of the other particle, so that their spins will add to zero. It is as though the first particle
communicated with the second. This apparent instantaneous passing of information from one particle to the other would violate
the rule that nothing, not even information, can travel faster than the speed of light. The two particles do not, however,
communicate with each other. Physicists can instantaneously know the spin of the second particle because they set the total spin
of the system to be zero at the beginning of the experiment. In 1997 Austrian researchers performed an experiment similar to the
hypothetical experiment of the 1930s, confirming the effect of measurement on a quantum system.

IV THE QUANTUM ATOM

The first great achievement of quantum theory was to explain how atoms work. Physicists found explaining the structure of the
atom with classical physics to be impossible. Atoms consist of negatively charged electrons bound to a positively charged
nucleus. The nucleus of an atom contains positively charged particles called protons and may contain neutral particles called
neutrons. Protons and neutrons are about the same size but are much larger and heavier than electrons are. Classical physics
describes a hydrogen atom as an electron orbiting a proton, much as the Moon orbits Earth. By the rules of classical physics, the
electron has a property called inertia that makes it want to continue traveling in a straight line. The attractive electrical force of
the positively charged proton overcomes this inertia and bends the electron’s path into a circle, making it stay in a closed orbit.
The classical theory of electromagnetism says that charged particles (such as electrons) radiate energy when they bend their
paths. If classical physics applied to the atom, the electron would radiate away all of its energy. It would slow down and its orbit
would collapse into the proton within a fraction of a second. However, physicists know that atoms can be stable for centuries or
longer.

Quantum theory gives a model of the atom that explains its stability. It still treats atoms as electrons surrounding a nucleus, but
the electrons do not orbit the nucleus like moons orbiting planets. Quantum mechanics gives the location of an electron as a
probability instead of pinpointing it at a certain position. Even though the position of an electron is uncertain, quantum theory
prohibits the electron from being at some places. The easiest way to describe the differences between the allowed and prohibited
positions of electrons in an atom is to think of the electron as a wave. The wave-particle duality of quantum theory allows
electrons to be described as waves, using the electron’s de Broglie wavelength.

If the electron is described as a continuous wave, its motion may be described as that of a standing wave. Standing waves occur
when a continuous wave occupies one of a set of certain distances. These distances enable the wave to interfere with itself in such
a way that the wave appears to remain stationary. Plucking the string of a musical instrument sets up a standing wave in the string
that makes the string resonate and produce sound. The length of the string, or the distance the wave on the string occupies, is
equal to a whole or half number of wavelengths. At these distances, the wave bounces back at either end and constructively
interferes with itself, which strengthens the wave. Similarly, an electron wave occupies a distance around the nucleus of an atom, or a circumference, that enables it to travel a whole number of wavelengths before looping back on itself. The electron wave therefore constructively interferes with itself and remains stable.

An electron wave cannot occupy a circumference that does not hold a whole number of wavelengths. In such a case, the wave would interfere with itself in a complicated way and would become unstable.

An electron has a certain amount of energy when its wave occupies one of the allowed circumferences around the nucleus of an
atom. This energy depends on the number of wavelengths in the circumference, and it is called the electron’s energy level.
Because only certain circumferences, and therefore energy levels, are allowed, physicists say that the energy levels are quantized.
This quantization means that the energies of the levels can only take on certain values.
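
In symbols (a standard result consistent with this picture, though not printed in the article), the standing-wave condition for a circular path of radius r is 2πr = nλ, where n is a whole number. Combining this with de Broglie’s relation λ = h/p reproduces, for hydrogen, the quantized energy levels En = −13.6/n² electron volts first obtained in Bohr’s model of the atom.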

The regions of space in which electrons are most likely to be found are called orbitals. Orbitals look like fuzzy, three-dimensional
shapes. More than one orbital, meaning more than one shape, may exist at certain energy levels. Electron orbitals are also
quantized, meaning that only certain shapes are allowed in each energy level. The quantization of electron orbitals and energy
levels in atoms explains the stability of atoms. An electron in an energy level that allows only one wavelength is at the lowest
possible energy level. An atom with all of its electrons in their lowest possible energy levels is said to be in its ground state.
Unless it is affected by external forces, an atom will stay in its ground state forever.

The quantum theory explanation of the atom led to a deeper understanding of the periodic table of the chemical elements. The
periodic table of elements is a chart of the known elements. Scientists originally arranged the elements in this table in order of
increasing atomic number (which is equal to the number of protons in the nuclei of each element’s atoms) and according to the
chemical behavior of the elements. They grouped elements that behave in a similar way together in columns. Scientists found that
elements that behave similarly occur in a periodic fashion according to their atomic number. For example, a family of elements
called the noble gases all share similar chemical properties. The noble gases include neon, xenon, and argon. They do not react
easily with other elements and are almost never found in chemical compounds. The atomic numbers of the noble gases increase
from one element to the next in a periodic way. They belong to the same column at the far right edge of the periodic table.

Quantum theory showed that an element’s chemical properties have little to do with the nucleus of the element’s atoms, but
instead depend on the number and arrangement of the electrons in each atom. An atom has the same number of electrons as
protons, making the atom electrically neutral. The arrangement of electrons in an atom depends on two important parts of
quantum theory. The first is the quantization of electron energy, which limits the regions of space that electrons can occupy. The
second part is a rule called the Pauli exclusion principle, first proposed by Austrian-born Swiss physicist Wolfgang Pauli.

The Pauli exclusion principle states that no two electrons in an atom can have exactly the same set of characteristics. These characteristics include orbital, direction of rotation (called spin), and direction of orbit. Each energy level in an atom has a set
number of ways these characteristics can combine. The number of combinations determines how many electrons can occupy an
energy level before the electrons have to start filling up the next level.

An atom is the most stable when it has the least amount of energy, so its lowest energy levels fill with electrons first. Each energy
level must be filled before electrons begin filling up the next level. These rules, and the rules of quantum theory, determine how
many electrons an atom has in each energy level, and in particular, how many it has in its outermost level. Using the quantum
mechanical model of the atom, physicists found that all the elements in the same column of the periodic table also have the same
number of electrons in the outer energy level of their atoms. Quantum theory shows that the number of electrons in an atom’s
outer level determines the atom’s chemical properties, or how it will react with other atoms.

The number of electrons in an atom’s outer energy level is important because atoms are most stable when their outermost energy
level is filled, which is the case for atoms of the noble gases. Atoms imitate the noble gases by donating electrons to, taking
electrons from, or sharing electrons with other atoms. If an atom’s outer energy level is only partially filled, it will bond easily
with atoms that can help it fill its outer level. Atoms that are missing the same number of electrons from their outer energy level
will react similarly to fill their outer energy level.

Quantum theory also explains why different atoms emit and absorb different wavelengths of light. An atom stores energy in its
electrons. An atom with all of its electrons at their lowest possible energy levels has its lowest possible energy and is said to be in
its ground state. One of the ways atoms can gain more energy is to absorb light in the form of photons, or particles of light. When
a photon hits an atom, one of the atom’s electrons absorbs the photon. The photon’s energy makes the electron jump from its
original energy level up to a higher energy level. This jump leaves an empty space in the original inner energy level, making the
atom less stable. The atom is now in an excited state, but it cannot store the new energy indefinitely, because atoms always seek
their most stable state. When the atom releases the energy, the electron drops back down to its original energy level. As it does,
the electron releases a photon.

Quantum theory defines the possible energy levels of an atom, so it defines the particular jumps that an electron can make
between energy levels. The difference between the old and new energy levels of the electron is equal to the amount of energy the
atom stores. Because the energy levels are quantized, atoms can only absorb and store photons with certain amounts of energy.
The photon’s energy is related to its frequency, or color. As the frequency of photons increases, their energy increases. Atoms
can only absorb certain amounts of energy, so only certain frequencies of light can excite atoms. Likewise, atoms only emit
certain frequencies of light when they drop to their ground state. The different frequencies available to different atoms help
astronomers, for example, determine the chemical makeup of a star by observing which wavelengths are especially weak or
strong in the star’s light. See also Spectroscopy.
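
As an added illustration (using the standard hydrogen formula En = −13.6/n² electron volts, with assumed level numbers), the following Python sketch computes the wavelength of the photon emitted in one such jump:

    # Photon wavelength for a hydrogen electron dropping from level n_hi to n_lo,
    # using E_n = -13.6 eV / n^2 and lambda = h*c / delta_E.
    h = 6.626e-34    # Planck's constant, J*s
    c = 3.0e8        # speed of light, m/s
    eV = 1.602e-19   # joules per electron volt

    def wavelength(n_hi, n_lo):
        delta_E = 13.6 * eV * (1 / n_lo**2 - 1 / n_hi**2)  # photon energy in joules
        return h * c / delta_E                              # wavelength in meters

    print(wavelength(3, 2))  # about 6.6e-7 m: the red Balmer line of hydrogen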

V DEVELOPMENT OF QUANTUM THEORY

The development of quantum theory began with German physicist Max Planck’s proposal in 1900 that matter can emit or absorb
energy only in small, discrete packets, called quanta. This idea introduced the particle nature of light. In 1905 German-born
American physicist Albert Einstein used Planck’s work to explain the photoelectric effect, in which light hitting a metal makes
the metal emit electrons. In 1911 the New Zealand-born British physicist Ernest Rutherford demonstrated that atoms consist of electrons bound to a nucleus.
In 1913 Danish physicist Niels Bohr proposed that classical mechanics could not explain the structure of the atom and developed
a model of the atom with electrons in fixed orbits. Bohr’s model of the atom proved difficult to apply to all but the simplest
atoms.

In 1923 French physicist Louis de Broglie suggested that matter could be described as a wave, just as light could be described as
a particle. The wave model of the electron allowed Austrian physicist Erwin Schrödinger to develop a mathematical method of
determining the probability that an electron will be at a particular place at a certain time. Schrödinger published his theory of
wave mechanics in 1926. Around the same time, German physicist Werner Heisenberg developed a way of calculating the
characteristics of electrons that was quite different from Schrödinger’s method but yielded the same results. Heisenberg’s method
was called matrix mechanics.
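
A small worked example shows how the two formulations meet. If Schrödinger’s equation for a particle trapped in a one-dimensional box is written on a grid of points, it becomes a matrix whose eigenvalues are the allowed energies, exactly the kind of object matrix mechanics manipulates. The Python sketch below makes the illustrative assumption of natural units (ħ = mass = box length = 1), in which the exact quantized energies are n²π²/2.

# Sketch: quantized energies of a particle in a 1-D box, found by
# turning the Schrodinger equation into a finite-difference matrix
# and diagonalizing it (wave and matrix mechanics agreeing).
import numpy as np

N = 500                       # number of interior grid points
dx = 1.0 / (N + 1)            # grid spacing, box length = 1
# Hamiltonian -(1/2) d^2/dx^2 by central differences:
main = np.full(N, 1.0 / dx ** 2)
off = np.full(N - 1, -0.5 / dx ** 2)
hamiltonian = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

print(np.linalg.eigvalsh(hamiltonian)[:3])        # lowest three levels
print([(n * np.pi) ** 2 / 2 for n in (1, 2, 3)])  # exact: ~4.93, 19.74, 44.41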

In 1925 Austrian-born Swiss physicist Wolfgang Pauli formulated the Pauli exclusion principle, which states that no two electrons
in an atom can occupy the same quantum state and which allowed physicists to work out the electronic structure of atoms for the
first time. In 1926 Heisenberg and two of his colleagues, German physicists
Max Born and Ernst Pascual Jordan, published a theory that combined the principles of quantum theory with the classical theory
of light (called electrodynamics). Heisenberg made another important contribution in 1927 when he introduced the uncertainty
principle, which holds that certain pairs of quantities, such as a particle’s position and momentum, cannot both be known to
unlimited precision.
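
In quantitative form the principle reads Δx · Δp ≥ ħ/2, where Δx and Δp are the uncertainties in position and momentum and ħ is the reduced Planck constant. A small illustrative calculation (the 0.1-nanometer confinement length is an assumed, typical atomic size) shows what this implies for an electron inside an atom.

# Sketch: minimum momentum and velocity spread of an electron
# confined to an atom-sized region, from delta_x * delta_p >= hbar/2.
HBAR = 1.055e-34          # reduced Planck constant, J*s
M_ELECTRON = 9.109e-31    # electron mass, kg

delta_x = 1e-10                    # assumed confinement: ~0.1 nm
delta_p = HBAR / (2 * delta_x)     # minimum momentum uncertainty
delta_v = delta_p / M_ELECTRON     # corresponding velocity spread
print(delta_v)   # ~5.8e5 m/s: confinement alone forces high speeds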

Since these first breakthroughs in quantum mechanical research, physicists have focused on testing and refining quantum theory,
further connecting the theory to other theories, and finding new applications. In 1928 British physicist Paul Dirac refined the
theory that combined quantum theory with electrodynamics. He developed a model of the electron that was consistent with both
quantum theory and Einstein’s special theory of relativity, and in doing so he created a theory that came to be known as quantum
electrodynamics, or QED. In the late 1940s Japanese physicist Tomonaga Shin’ichirō and American physicists Richard
Feynman and Julian Schwinger each independently improved the scientific community’s understanding of QED and made it an
experimentally testable theory that successfully predicted or explained the results of many experiments.

VI CURRENT RESEARCH AND APPLICATIONS

At the turn of the 21st century, physicists were still finding new problems to study with quantum theory and new applications for
it, and this research will probably continue for many decades. Quantum theory is, formally, a complete theory: in principle the
behavior of any physical system can be calculated from quantum mechanics, although many problems are far too complicated to
solve in practice. The attempts to find a quantum explanation of gravitation and a unified description of all the forces in nature
are promising and active areas of research. Researchers also ask why quantum theory describes nature as well as it does; they
may never find an answer, but the effort continues. Physicists likewise study the complicated region of overlap between classical
physics and quantum mechanics and work on practical applications of quantum mechanics.

Studying the intersection of quantum theory and classical physics requires a theory that can predict how quantum systems behave
as they grow larger, or as the number of particles involved approaches the scale at which classical physics applies. The
mathematics involved is extremely difficult, but physicists continue to make progress, and the steadily increasing power of
computers should help with these calculations.

New research in quantum theory also promises fresh applications and improvements to existing ones. One of the most potentially
powerful applications is quantum computing, in which scientists exploit the behavior of subatomic particles to perform
calculations. Because a quantum bit can exist in a superposition of states, a quantum computer could, in principle, explore many
possible answers to a problem at the same time, carrying out enormous numbers of calculations in parallel. For suitable problems,
this ability could make quantum computers thousands or even millions of times faster than current computers. Advances in
quantum theory also hold promise for the fields of optics, chemistry, and atomic theory.
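
The core idea can be illustrated with an ordinary classical simulation. A single quantum bit, or qubit, is described by two amplitudes whose squares give the probabilities of reading 0 or 1; a Hadamard gate turns a definite 0 into an equal superposition of both. The Python toy below is only a sketch of that bookkeeping, not a real quantum computer.

# Sketch: simulating one qubit. A Hadamard gate creates an equal
# superposition of 0 and 1; measurement picks one outcome at random
# with probability equal to the squared amplitude.
import math
import random

H_GATE = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
          [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Apply a 2x2 gate to a two-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def measure(state):
    """Return 0 or 1 with probability given by the squared amplitude."""
    return 0 if random.random() < state[0] ** 2 else 1

state = apply(H_GATE, [1.0, 0.0])   # start in the definite state 0
counts = [0, 0]
for _ in range(10000):
    counts[measure(state)] += 1
print(counts)   # roughly [5000, 5000]: an equal superposition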

Microsoft ® Encarta ® Reference Library 2004. © 1993-2003 Microsoft Corporation. All rights reserved.
