
Routledge Revivals

A Companion to the Physical Sciences

First published in 1989, this dictionary of the whole field of the physical
sciences is an invaluable guide through the changing terminology and
practices of scientific research. Arranged alphabetically, it traces how the
meanings of scientific terms have changed over time. It covers a wide range
of topics including voyages, observations, magnetism and pendulums, and
central subjects such as atom, valency and energy. There are also entries on
more abstract terms such as hypothesis, theory, induction, deduction,
falsification and paradigm, emphasizing that while science is more than
‘organized common sense’ it is not completely different from other
activities. Science’s lack of innocence is also recognized in headings like
pollution and weapons.
This book will be a useful resource to students interested in the history of
science.
A Companion to the Physical Sciences

David Knight
First published in 1989
by Routledge

This edition first published in 2016 by Routledge


2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business


© 1989 David Knight

The right of David Knight to be identified as author of this work has been asserted by him in
accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by
any electronic, mechanical, or other means, now known or hereafter invented, including
photocopying and recording, or in any information storage or retrieval system, without permission in
writing from the publishers.

Publisher’s Note
The publisher has gone to great lengths to ensure the quality of this reprint but points out that some
imperfections in the original copies may be apparent.

Disclaimer
The publisher has made every effort to trace copyright holders and welcomes correspondence from
those they have been unable to contact.

A Library of Congress record exists under LC control number: 89157798

ISBN 13: 978-1-138-64314-7 (hbk)


ISBN 13: 978-1-315-62947-6 (ebk)

A Companion to the Physical Sciences


David Knight
First published in 1989 by
Routledge
11 New Fetter Lane, London EC4P 4EE
29 West 35th Street, New York NY 10001

© 1989 David Knight

Typeset by Columns of Reading


Printed in Great Britain by TJ Press (Padstow) Ltd
Padstow, Cornwall

All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by
any electronic, mechanical, or other means, now known or hereafter invented, including
photocopying and recording, or in any information storage or retrieval system, without permission in
writing from the publishers.

British Library Cataloguing in Publication Data


Knight, David, 1936 Nov. 30-
A companion to the physical sciences.

1. Physical sciences
I. Title

500.2
Library of Congress Cataloging in Publication Data
Available on request

ISBN 0-415-00901-4
Contents

Introduction

A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Z

Index
Introduction

This book is addressed to anybody interested in the place of the physical
sciences in our culture: and it is written in the belief that fundamental
explanation in human affairs is historical. Historians of science do not fit
easily into any specialised department of knowledge, but the questions aired
here are those with which most of them are concerned. They involve the
careers of scientists, their beliefs and those of society about science, as well
as the origin and development of scientific theories. Because the life
sciences were written up in Peter and Jean Medawar’s Aristotle to Zoos
(OUP, Oxford, 1985) they are not systematically discussed here; though
many of our topics are as relevant to natural history as to what used to be
called natural philosophy. Much physical science is obscure, and to master
it requires skills in mathematics and experiment; but the lives and minds of
scientists are not substantially different from those of the rest of us. Science
is a human activity, and much of it can be understood by those who do not
need to practise it; it is a great mistake to see it as a mystery, or indeed as
something which will solve all a nation’s problems. Many of the entries
here thus discuss scientists as much as science; and like the demonstration-
lectures of the nineteenth century, the book is written in the hope that
entertainment and instruction go together.
A lexicographer is a harmless drudge, but this is not simply a Dictionary
of the History of Science; there is already one, edited by W. F. Bynum, E. J.
Browne and R. Porter (Macmillan, London, 1981), just as there are
dictionaries of the various sciences in which exact definitions of technical
terms are to be found. The entries here are more discursive than in most
dictionaries, and the book is intended as much for browsing in as for rapid
consultation. Nor is it biographical: for that, see T. I. Williams (ed.), A
Biographical Dictionary of Scientists (A & C Black, London, 1969) and the
multi-volume C. C. Gillispie (ed.), Dictionary of Scientific Biography
(Charles Scribner’s Sons, New York, 1970–85). For those who want more
information, there are citations at the end of some articles to recent
writings. Those, and I hope there will be many, who become interested in
the history of science will find that there are national societies concerned
with it; that in Britain publishes the British Journal for the History of
Science (BJHS for short), and that in the USA, Isis. The Society for the
History of Alchemy and Chemistry publishes Ambix; and the Newcomen
Society for the History of Technology publishes its Transactions. In
addition, there are journals like History of Science and Annals of Science
not associated with a society.
No book is unalloyed fun to write, as the original idea has to be veiled in
words; but this one has been more fun than most. I am very grateful for the
support of colleagues in the University of Durham, which has committed
itself to the History of Science as a field of study; of my friends especially
in the British Society for the History of Science; and of my family who
have as usual been very long-suffering. None of them should be blamed for
the result.
Absorption means swallowing up; we may be absorbed in a book, but in
science it can be used of heat entering a body, or of digestion. The
absorption of hydrogen by the metal palladium is a process of some interest,
and adsorption on the surface of a catalyst may be a crucial part of a reaction in
chemistry. An absorption spectrum is one in which relatively cool atoms
of a gas absorb light at the same frequencies at which, if hot, they would
emit it: the classic case is the spectrum of the Sun, in which Fraunhofer in
the early nineteenth century saw dark lines. In 1859 these were explained
by Kirchhoff and Bunsen as due to absorption in the outer layers of the Sun,
the chemical composition of which could therefore be determined.

An abstract is a summary of a paper. It may appear at the head of the
paper, so that readers can decide whether they want to read the whole thing
or not; or it may be published in a separate journal which abstracts from
many primary journals and is thus a guide to published research. The need
for such secondary journals emerged by the nineteenth century, when there
was too much being published for anybody to be able to read everything.
Abstracts may be prepared by the author, for whom it is a useful discipline,
or by an editor; and they may be in a different language from the original.
Those interested by an abstract may consult the original publication, or
write to the author for an offprint.

An academy of sciences is a body of eminent scientists, generally
appointed by a government and given salaries; in this it differs from a
voluntary association or a learned society. The first academies were
informal bodies in Renaissance Italy, generally under the patronage of an
aristocrat and enduring until he died or lost interest: the modern kind began
in France in 1666 with Louis XIV and his minister Colbert. They had
watched with interest the foundation of the Royal Society of London, given
its first Royal Charter in 1662; but Charles II could only give it goodwill
and a handsome mace, having no money to spare for science. It therefore
depended upon a large membership paying subscriptions, of whom only a
few were really committed to research and none could follow science as a
profession.
Louis XIV wanted to promote, use and control science, and the Parisian
Academy was a department of state; the academicians were a kind of civil
servant. They were paid, and they were expected to undertake utilitarian
researches when ordered to do so. They also acted as a scientific
authority, being officially consulted about new discoveries or
inventions. There was a set number of academicians, so many for each
science, which in time, as sciences developed and proliferated, led to
difficulties. The Academy was associated with an observatory and a
laboratory; and at first informally but then officially with a journal in
which its work was published. It developed a network of correspondents, in
the French provinces and abroad; and by the end of the eighteenth century it
was the envy of those in other countries. It was abolished after the French
Revolution, but soon refounded as the Institut, reformed but performing the
same functions.
This French model proved attractive to governments such as those of
eighteenth-century Prussia and Russia, and nineteenth-century Japan,
anxious to modernise and to impose progress from above; in contrast to the
voluntary societies which were the norm in Britain and North America. In
more recent times, much of the administration of science has come within
the scope of academies, which have sponsored voyages of discovery,
museums, international conferences, and also research institutions,
in this case in uneasy relations with universities. With the growth of
science, bodies such as the Royal Society have assumed much more of the
character of an academy, which they lacked into the nineteenth century: its
policy-making Council only acquired a majority of active scientists in the
1820s under Humphry Davy’s presidency. Academies played a part in the
separation of science from other activities in that specialisation of which
one aspect has been the appearance of ‘two cultures’.
R. Hahn (1971) The Anatomy of a Scientific Institution, California University Press, Berkeley.

Acceleration is a vector quantity, having direction as well as size: it means
the rate of change of velocity with respect to time. The ancient Greeks
could handle uniform motion: but it was not until the fourteenth century that
the mathematicians of Merton College, Oxford (Swineshead, Heytesbury
and Bradwardine), succeeded in dealing with what they called uniformly
non-uniform motion. They worked out a ‘mean-speed rule’, according to
which the distance covered by a body subject to this kind of motion would
be the same as that covered if it had been going uniformly at the mean
speed. This was presented geometrically; and although it seems equivalent
to our laws of uniform acceleration, they had no idea of velocity at a point,
or acceleration at an instant. By the 1630s, Galileo had these concepts:
perceiving that falling bodies were uniformly accelerated, or would be if
there were no air-resistance (he was unworried by loose empirical fit); and
also defining acceleration as rate of change of motion with regard to time,
rather than to distance as some contemporaries had supposed. All that
remained was to distinguish the scientific term ‘velocity’ from the ordinary
term ‘speed’, which does not include direction.
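A minimal numerical check of the mean-speed rule, in modern terms; the speeds, acceleration and duration below are arbitrary illustrative values:

```python
# Merton mean-speed rule: a body uniformly accelerated from speed v0
# to speed v1 over time t covers the same distance as one moving
# uniformly at the mean speed (v0 + v1)/2.
v0, a, t = 2.0, 3.0, 4.0                  # initial speed, acceleration, duration
v1 = v0 + a * t                           # final speed under uniform acceleration

s_accelerated = v0 * t + 0.5 * a * t**2   # distance under uniform acceleration
s_mean_speed = 0.5 * (v0 + v1) * t        # distance at the uniform mean speed

assert abs(s_accelerated - s_mean_speed) < 1e-9
print(s_accelerated, s_mean_speed)        # both 32.0
```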

Accommodation was the term used to explain how a statement in the Bible
could conflict with modern science. Religion was about how to go to
heaven, not how the heavens go; and while the inspired authors could not
lie, they were not teaching physics and their message was therefore
accommodated to the understandings of their immediate audience. This
point was strongly urged by Galileo.

Accuracy is what we all want in science. The search for accuracy
continually brings one up against the limitations of apparatus and of
manipulation; and there is no point in taking immense pains over one part
of an experiment if another cannot be done without the possibility of large
errors. Thus in the 1780s Lavoisier in his chemical work did some
weighing, which could be done with great accuracy: assayers and
goldsmiths had long required very good balances. He also did some
volumetric analyses, where the accuracy was very much less: measurement
of volumes of gases and liquids was speedy but rough. When these steps
were all part of the same investigation, then there was no point in taking too
much trouble over the weighing; and Lavoisier was mistaken in calculating
his results as though all processes were as good as the weighing. It is the
weakest link which determines the accuracy of an experiment; and
Lavoisier’s figures are therefore delusive.
Since the middle of the nineteenth century, accuracy has been quantified; we
expect the scientist to estimate it, and to state it as ±5% or whatever it
may be. It can also be expressed in ‘significant figures’. If the atomic
weight of chlorine is given as 35.454, then we take it that only the last
figure is in doubt; that is, that the analyses are accurate to about one part in 30,000.
This was a convention only rather slowly appreciated by chemists in the
last century, and only nowadays beginning to be appreciated by
archaeologists using dating methods based on radioactivity. One can
estimate the sensitivity of a galvanometer or of a balance; and practice
improves one’s use of a burette and pipette for handling liquids; but the
biggest change since the early nineteenth century has been the rise of
statistics, which enables a result more accurate than any individual
observation to be reached.
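That last point can be made concrete with a small simulation, assuming (as the statistical treatment does) that the errors are random; the spread of a single measurement below is invented for illustration:

```python
import random
import statistics

# Simulate repeated measurements of a true value with random error:
# the mean of n observations is typically more accurate than any one
# of them, its standard error shrinking as 1/sqrt(n).
random.seed(1)
true_value = 35.453          # e.g. the atomic weight of chlorine
sigma = 0.05                 # spread of a single measurement (assumed)

measurements = [random.gauss(true_value, sigma) for _ in range(100)]
mean = statistics.mean(measurements)
standard_error = statistics.stdev(measurements) / len(measurements) ** 0.5

print(f"single-measurement spread ~ {sigma}")
print(f"mean = {mean:.4f}, standard error ~ {standard_error:.4f}")
```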

Acid meant something having a sour taste, and from the seventeenth
century with indicators like cabbage-juice it came to mean something which
produced a colour change, especially with litmus-paper which it turns pink.
This has now been quantified in terms of pH, a measure of the concentration of
hydrogen ions (strictly, −log₁₀[H⁺]), where 7 is neutral and lesser values correspond
to increasing acidity shown on universal indicator papers. The strong
mineral acids (sulphuric, nitric and our hydrochloric) became standard
reagents from the seventeenth century.
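The pH convention amounts to one line of arithmetic; a sketch, with illustrative concentrations in mol per litre:

```python
import math

def pH(hydrogen_ion_concentration):
    """pH = -log10 of the hydrogen-ion concentration in mol per litre."""
    return -math.log10(hydrogen_ion_concentration)

print(pH(1e-7))   # 7.0 -> neutral (pure water)
print(pH(1e-3))   # 3.0 -> acidic: lesser values mean greater acidity
```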
For Lavoisier in the 1780s, all acids were compounds of oxygen – hence
its name. This theory was demolished by Davy, who had no coherent
notion to put in its place; but by the mid-nineteenth century, notably in
Laurent’s Chemical Method (English ed. 1855) the idea gained ground that
acids contained hydrogen replaceable by a metal. With the twentieth
century, acids became proton donors or electron acceptors: there has been
steady conceptual change, with an uneasy relationship between common
usage and chemistry.
J. H. Brooke, in S. Forgan (ed.) (1980) Science and the Sons of Genius, Science Reviews, London,
pp. 121–75.

Action and reaction are equal and opposite, as anybody who has thumped
a table will know and as Newton’s laws of motion state: here it means
force, but in 1744 Pierre Maupertuis (the first head of the Prussian
Academy of Sciences) defined action as the product of mass, velocity and
path length. With this went the principle of least action: that nature in
producing changes uses the minimum quantity of action. He believed that
this was evidence for the wisdom of God. The principle had been anticipated
a century earlier by Pierre Fermat in explaining the path of light going
from one medium to another and being refracted: his conclusion was that
the ray went by the route taking the least time.
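Fermat's least-time reasoning can be replayed numerically: minimise the travel time over all crossing points of the boundary, and the minimum satisfies Snell's law. The geometry and speeds below are illustrative:

```python
import math

# A ray goes from A, in a medium where light travels at speed v1, to B
# in a medium with speed v2, crossing the boundary at horizontal
# position x. Minimising total travel time gives the refracted path.
v1, v2 = 1.0, 0.7            # speeds in the two media (illustrative)
A = (0.0, 1.0)               # start, height 1 above the boundary
B = (1.0, -1.0)              # end, depth 1 below the boundary

def travel_time(x):
    t1 = math.hypot(x - A[0], A[1]) / v1   # time in the upper medium
    t2 = math.hypot(B[0] - x, B[1]) / v2   # time in the lower medium
    return t1 + t2

# Crude minimisation over candidate crossing points on the boundary
x_best = min((i / 10000 for i in range(10001)), key=travel_time)

# Snell's law holds at the minimum: sin(theta1)/v1 == sin(theta2)/v2
sin1 = x_best / math.hypot(x_best, A[1])
sin2 = (B[0] - x_best) / math.hypot(B[0] - x_best, B[1])
print(sin1 / v1, sin2 / v2)   # approximately equal
```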
In general, an action is something done by somebody or something. In
science, the question of how things act upon one another is very important:
in the seventeenth century the mechanical principles of the ‘New
Philosophy’ required pushes and pulls as the cause of motion. Newton’s
physics, on the other hand, involved action at a distance: he could provide
no mechanical explanation of gravity; how, for example, the Earth exerts
a force upon the Moon across void space. The theory worked so well that
its uneasy foundations were generally ignored; and action at a distance was
also invoked in electricity and magnetism, although ether was
postulated as a medium for the light waves whose existence seemed proved
by experiment. But Faraday by the 1840s had come to doubt the possibility
of action at a distance, and of solid atoms surrounded by vacuum; and
postulated instead a field in which atoms were point centres of force. This
new understanding of matter and space transformed physics in the later
nineteenth century and since.

Administration is an activity much looked down upon by those active in
research when times are good; but in hard times good administrators are
essential. As science has become bigger in the twentieth century, with
large teams of experts at work upon great projects, so the need for
administrators has become greater; and many eminent scientists in fact
spend the second half of their professional life in administration, directing
a laboratory or museum, or working for the government. Such posts can be
a way of kicking upstairs someone who has run out of fresh ideas; a
phenomenon not uncommon among scientists, perhaps especially in the
mathematical field. Such a state does not necessarily go with great
organising ability; but it seems worse to divert into paperwork those who
could be doing fundamental research.
As scientific institutions grew from the invisible colleges of the
early days into formal associations and academies, so they required
Presidents, Treasurers and Secretaries. At first these were generally
honorary posts; but the French Academy of Sciences had a salaried
Permanent Secretary in the late seventeenth century in the person of
Fontenelle, a playwright and author of science fiction and popular science.
The secretary’s duties of keeping minutes and records and perhaps editing
or sub-editing a journal might be more smoothly and efficiently done by a
full-time officer of the society; and particularly if there were a building to
look after, employees to be hired, and laboratory supplies to be ordered, an
administrator became essential. By the nineteenth century, the Royal
Society and the Royal Institution in London had full-time administrators.
Such people might, like Fontenelle, have little more than a general interest
in science; running a scientific society is not altogether different from other
administrative tasks.
Presidents of societies, on the other hand, are normally full-time
scientists who have to cut down on other activities during their period of
office. Davy, when President of the Royal Society in the 1820s, saw himself
as an adult eagle teaching younger ones to fly higher into the sun; though in
practice he was not always as generous in spirit as this might imply.
Another kind of limited-term administration might be the organising of an
international conference. In the same way, being Head of Department
or Dean in a university or other educational institution will divert a
scientist for a longer or shorter time from research and teaching; and this
kind of administration may in effect become a full-time job. In a career, it is
hard to reverse the drift towards paperwork; especially as this tends to
attract increased prestige and salary.

Affinity means relationship by marriage, as against kinship which is blood-
relationship; there used to be in churches Tables of Kindred and Affinity
which were lists of those one could not legally marry. In biology, the term
affinity is used (improperly) to imply family resemblance, in contrast to
analogy which means outward similarity. In chemistry the metaphor is
more exact; a substance which readily combines with another is said to
show affinity for it. Newton in his Principia (1687) demonstrated the
existence of the attractive force of gravity; arguing that every particle
of matter in the universe attracts every other particle according to the
inverse square law. What men of science in the eighteenth century then
called ‘the attraction of gravitation’ was thus universal; it applied to
everything material.
There were two other kinds of attraction, and Newton and his followers
saw some analogies between them and gravity. One was ‘the attraction of
cohesion, or of aggregation’: this held like particles together in the liquid
and solid states. The other was ‘the attraction of affinity’, which bound
unlike particles or atoms into what would come to be called a molecule.
Neither of these forces was universal; and indeed as though to emphasise
the metaphor, chemical attraction was often called ‘elective affinity’. At the
beginning of the nineteenth century, Goethe wrote a famous novel, Elective
Affinities, in which in contrast to a straightforward chemical reaction, the
formation and breakdown of human relationships is exhilarating and
agonising.
At just this time chemists were beginning to struggle towards some
explanation of affinity: Davy in England and Berzelius in Sweden
proposed that electricity and chemical affinity were ultimately the same, or
somehow manifestations of some one power. Since electricity was a polar
phenomenon, positive or negative, then substances would only display
affinity if they were oppositely charged; and chemistry might be quantified
by measuring these affinities. This had been the Newtonian dream of the
eighteenth century: chemists knew that iron has more affinity for sulphuric
acid than copper has, because an iron nail dipped into copper sulphate will
dissolve and precipitate copper; but such experiments were hard to quantify.
Measurements of the heat given out in chemical reactions, another guide to
affinity, were also at that time hard to reproduce exactly; and in the event
chemistry was quantified in terms of weights rather than forces.
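A modern descendant of those affinity tables is the electrochemical series, in which iron's greater 'affinity' appears as a more negative electrode potential; a sketch with rounded standard values:

```python
# A metal with a more negative standard electrode potential displaces
# one with a less negative potential from solution: so an iron nail
# dipped in copper sulphate dissolves and precipitates copper.
standard_potential_volts = {"Zn": -0.76, "Fe": -0.44, "Cu": +0.34}  # vs. hydrogen

def displaces(metal_a, metal_b):
    """True if metal_a will displace metal_b from a solution of its salt."""
    return standard_potential_volts[metal_a] < standard_potential_volts[metal_b]

print(displaces("Fe", "Cu"))   # True: iron dissolves, copper precipitates
print(displaces("Cu", "Fe"))   # False
```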
A hundred years after Goethe, Davy and Berzelius it became possible to
give a clearer account of affinity in terms of electrons shared or
transferred in chemical reactions; and the metaphorical implications of the
old term disappeared in the electronic theory of valency of G. N. Lewis
(1916). The word is much less used in modern chemistry than it was in the
past, especially as a number of extra interactions are now inferred, such as
hydrogen bonding and the surface attraction, or adsorption, involved in
some catalysis.

Aliphatic compounds in organic chemistry are distinguished from
aromatic ones, which contain a benzene ring. The atoms of carbon are
arranged in straight or branching chains in these series of compounds,
which include the hydrocarbons which make up oils, and carbohydrates
such as starch and sugars.

Analogy was for Davy the very soul of science: by which he meant that
creative reasoning in the sciences is analogical. We can see this in his
own work, where he recognised in 1810 the analogies between the greenish
gas from sea-salt and oxygen, and therefore recognised the former as an
element which he named Chlorine. Clearly the analogies here are
incomplete; breathing the two substances has very different results. It was
only in the realm of chemical behaviour that there were analogies; one
needs knowledge, a paradigm, as well as common sense to perceive
relevant analogies, and ignore surface ones. Similarly in biology, the
Tasmanian wolf showed analogies with our wolf, and William Swainson in
the 1830s put them in the same group in his classification: but already
Cuvier, looking beneath the skin at the mode of reproduction, had seen that
the Tasmanian wolf as a marsupial had no relevant analogies with dogs. To
the shepherd it has, just as chlorine to the coroner has analogies with
arsenic; the interests of the scientist are not necessarily those of
everyman.
Davy had an easier task when in France in 1813 he was shown a curious
product of sea-weed which formed a violet gas when heat was applied to it.
The analogies between what he called Iodine and chlorine are much closer
than those between chlorine and oxygen, and we might feel that anybody
coming from chlorine to this new substance would have spotted them. All
that analogy means is going from the known and familiar to the unknown;
and the step may be long or short. It cannot be guaranteed: analogies may
be misleading as they were to Swainson, and as analogies between heat and
fluids were in the caloric theory; though even here the analogy led
somewhere, and as Bacon said, error is better than confusion. We progress
from analogy to analogy; and those left behind in the past look crude, as no
doubt ours will to our grandchildren.
From chlorine to iodine is a straightforward analogy; the more interesting
are a borrowing from a distinct science or region of knowledge. This goes
on all the time, and ideas from fashionable or successful fields are applied
in more backward regions; the role of fashion in science should never be
underestimated. When Davy was President of the Royal Society, an
unknown young man, John Herapath, submitted a paper suggesting that the
behaviour of gases could be understood by analogy with that of billiard-
balls. A large number of absolutely hard particles, colliding with each other
and with the walls of their container would, he believed, behave like a gas.
Davy was unimpressed with this analogy, probably because he did not
believe in absolutely hard atoms; he may also have seen the mathematical
weaknesses in Herapath’s treatment: but a generation later the kinetic
theory of gases, in a more sophisticated version, became one of the most
exciting bits of physics. Mechanics, with statistics, had been used in the
explanation of the gas laws.
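In modern dress, Herapath's analogy gives the pressure of a gas from its molecular motion, P = Nm⟨v²⟩/3V; the figures below are rough values for a mole of nitrogen at room conditions:

```python
# Kinetic theory in its simplest form: N hard particles of mass m with
# mean square speed <v^2> in a container of volume V exert pressure
# P = N * m * <v^2> / (3 * V). All values are approximate.
N = 6.022e23          # number of molecules (one mole)
m = 4.65e-26          # mass of one nitrogen molecule, kg
v_sq = 2.6e5          # mean square speed, m^2/s^2, near room temperature
V = 0.0224            # volume of one mole at standard conditions, m^3

P = N * m * v_sq / (3 * V)
print(f"pressure ~ {P:.3e} Pa")   # close to atmospheric (~1.0e5 Pa)
```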
The analogies are clearly incomplete; the molecules do not have colours
like billiard-balls, but they do have a definite size, and this can be allowed
for in accounting for the failure of the gas laws to fit actual gases
exactly, in Van der Waals’ equation.
that one would work from analogies towards truth; now we might see
through a glass darkly, but in the future face to face. Thus the successes of
the wave analogy for light, and the billiard-ball analogy for gases,
indicated that light was really a wave motion and molecules really elastic
particles. The physics of the twentieth century has undermined this
confidence. Einstein showed that the generation of electricity when light
fell onto potassium was best explained by an analogy with bullets; the small
shot of red light does not produce an effect, while the big bullets from the
violet end of the spectrum knock out electrons easily enough. Thus to
explain light, we need different analogies in analysing different
experiments.
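The bullet analogy can be put in numbers with the photon energy E = hf; the work function of potassium, about 2.3 electronvolts, is an approximate value assumed here:

```python
# Only photons carrying more energy than the metal's work function
# eject electrons: red 'small shot' fails where violet 'bullets' succeed.
h = 6.626e-34            # Planck's constant, J s
c = 3.0e8                # speed of light, m/s
eV = 1.602e-19           # joules per electronvolt
work_function = 2.3      # potassium, in eV (approximate)

for colour, wavelength_nm in [("red", 700), ("violet", 400)]:
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    ejects = energy_eV > work_function
    print(f"{colour}: photon energy {energy_eV:.2f} eV -> ejects electrons: {ejects}")
```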
The same problem emerged with matter. Electrons seemed to be
particles, but in the 1920s it turned out that they could be diffracted as light
is at the edge of a knife; thus showing analogies with waves. It seems that
we cannot have one theory to account for all these properties, and must be
content with analogy; we know which to apply in any situation, and an
ultimate explanation eludes us. Scepticism is not new; but usually
scientists have avoided extreme versions of it, and many hope now for some
way out of the impasse of quantum theory. In the 1830s, Baden Powell
(mathematician, and father of the defender of Mafeking) suggested that
analogy could guarantee induction: he recognised the great mass of
scientific knowledge, so that inductive reasoning was not a matter of
isolated generalisings about white swans or black ravens, but was part of a
great network worthy of reasonable faith. To the extreme sceptic, this will
not do; but unless one believes in the guidance of analogy, one would
hardly undertake any science.
D. M. Knight (1986) ‘William Swainson: Naturalist, author and illustrator’, Archives of Natural
History, 13, 275–90.

Analysis is the process of breaking something up into small parts. It can be
used in philosophy, perhaps in opposition to holism: analytical
philosophers, following Descartes’ method, closely examine the small steps
used in reasoning. It is also used in mathematics, especially of the
techniques used by Lagrange and Laplace around 1800 which involved the
differential and integral calculus; their methods and notation were belatedly
imported into Britain, notably by a group of Cambridge men calling
themselves the Analytical Society, in the 1810s. But the most important use
of the term is in chemistry, where it describes the process of decomposing
a compound body to determine what it is made of, in what proportions, and
how arranged.
The classic way of doing this by the sixteenth century was
distillation. In this way alcohol had been separated from wine in
medieval Sicily; and as the active component it was called the ‘spirit’ of
wine. Usually the process was less interesting, and resulted in a watery and
an oily layer, with a sticky, earthy residue called the ‘caput mortuum’ left
behind in the retort: these were interpreted according to the theories of four
elements (Earth, Air, Fire and Water) or three principles (Mercury,
Sulphur and Salt), all of which differed from the ordinary things called by
those names.
Robert Boyle in the 1660s noted that the distillation of such things as
gold does not produce any products; and argued that corpuscles were the
fundamental building blocks of chemistry. But nobody knew how they were
arranged; and in the 1780s Lavoisier made analysis the crucial process in
chemistry, with the elements being substances which could not be analysed.
Volta’s electric battery proved a powerful analytical agent in the hands of
Davy, Berzelius and others in the early years of the nineteenth century; then
in the 1830s Liebig perfected the apparatus for organic analysis; and from
the 1860s Bunsen and Kirchhoff’s spectrum analysis became the speediest
and cleanest method for inorganic chemists.
Analysis is first qualitative, following a standard route to eliminate
possibilities, and showing what is present; and then quantitative, indicating
in what proportions the elements are there: this depends on Lavoisier’s
principle of conservation of mass. The problem is that the empirical
formula thus reached may not be unique to one compound (ammonium
cyanate and urea have the same one, and so do benzene and acetylene);
chemical analysis is not capable of revealing structure. Chemists therefore
also need synthesis, emphasised by Berthelot from 1860 as the
complementary process: in which the compound is built up preferably from
its elements, in a ‘total synthesis’, but otherwise from simpler compounds
of known structure. When analysis has been followed by synthesis, we
ought to have real knowledge.
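The limitation is easy to demonstrate: from mass composition alone, benzene and acetylene are indistinguishable. A sketch, with rounded atomic weights:

```python
from math import gcd

# Quantitative analysis gives mass proportions, from which only the
# empirical formula follows: benzene (C6H6) and acetylene (C2H2)
# both reduce to CH.
atomic_weight = {"C": 12.011, "H": 1.008}

def empirical_formula(mass_percent):
    # moles of each element per 100 g of compound
    moles = {el: pct / atomic_weight[el] for el, pct in mass_percent.items()}
    smallest = min(moles.values())
    ratios = {el: round(n / smallest) for el, n in moles.items()}
    divisor = gcd(*ratios.values())
    return {el: r // divisor for el, r in ratios.items()}

benzene = {"C": 92.26, "H": 7.74}      # mass per cent in C6H6
acetylene = {"C": 92.26, "H": 7.74}    # identical composition in C2H2
print(empirical_formula(benzene))      # {'C': 1, 'H': 1}
print(empirical_formula(acetylene))    # {'C': 1, 'H': 1}
```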

Apparatus is exceedingly important in the sciences, although philosophers
sometimes seem to forget this in concentrating upon theories, laws and
hypotheses. When we visit a great museum we see the pieces of apparatus
which have been used in classic experiments: the mirror of a telescope,
retorts and test-tubes, or a pendulum, perhaps. These ‘ideas made
brass’, or usually in chemistry ‘glass’, have made discoveries possible. In
science there is frequently a conceptual frontier, where a Lavoisier or an
Einstein is needed to make contemporaries look at facts in a new way; but
they needed to support their new perspective with experiment, involving
careful weighing on the one hand, and observation of a solar eclipse on the
other. There are also technical frontiers, where it is clear that better
apparatus could disclose what was going on. Astronomical discovery has
thus waited upon the improvement of telescopes, and bacteriological upon
microscopes: while in the nineteenth century the discharge of electricity
through gases, a line of research leading to the electron, depended upon
the improvement of air-pumps. As they got closer to a vacuum, so Faraday,
Crookes and then J. J. Thomson could investigate the positive column and
then the cathode rays.
Over the years, more and more apparatus has become available ready-
made. The goldsmith’s trade meant that good balances were made and
could be bought from an early date; and no scientist since the
Renaissance has expected to have to make his own. The accuracy of the
specially made balance was higher than that of anything home-made, and
measurements with balances could be relied upon. Astronomers like Tycho
Brahe in the sixteenth century, Galileo in the seventeenth, and William
Herschel in the eighteenth, did make their own instruments; but a hundred
years after Galileo had turned the telescope towards the skies, they were
commercially available and some instrument-makers did very well out of
making them. Only when someone like Herschel wanted something much
bigger and better than was commercially available did he have to make his
own.
James Watt began as an instrument-maker, and did an apprenticeship; and
this did represent a route into a scientific career. His example can remind us
that science and technology are connected in a complex web; each needs
the other, and technology is not merely applied science, but may determine
what science is possible. Watt’s contemporary, Lavoisier, devoted the last
section of his textbook to chemical apparatus; in the belief that if anybody
carried out his experiments exactly as he had done he must come to the
same conclusions. Chemical apparatus in the last years of the eighteenth
century presented great problems. The glass used was far from inert, even
when it was clear rather than green or brown; and it needed to be thick for
mechanical strength, and thin so as not to crack when heated. Different
vessels were joined together by pasty ‘lutes’ of various compositions which
were packed into the joint and believed to be inert; corks were too likely to
crack the necks of flasks to be widely used. The chemist bought sheets of
‘bibulous paper’ and cut out circles for filtering, using odd bits to soak in
litmus or some other vegetable dye as an indicator of acidity.
Skill in glass-blowing remained an essential part of the chemist’s
equipment right through the nineteenth century; a physicist might be ham-
handed, like the applied mathematician Poisson, but a chemist had to be
good with his hands, and indeed able to think with his fingers. But
apparatus became smaller: Lavoisier had described a blast furnace as part of
the equipment of a chemical laboratory, but by the early nineteenth century
W. H. Wollaston did his painstaking analyses with a spirit lamp. In 1827
Faraday published his only book, on Chemical Manipulation; which gives a
splendid feeling of what laboratory life was like at that period. His great
axiom was that nothing should be wasted; even test-tubes had to be made,
and his classic isolation of benzene from whale oil was done by fractional
distillation in a zig-zag glass tube he had made himself. His introduction
to research had come when Davy was injured in an explosion; safety
precautions played a small part in the chemistry of the day.
The first electric batteries had been a ‘pile’ of discs of two different
metals, perhaps coins, in acidulated water; then in scientific institutions
bigger ones were specially made. The widespread use of electric
telegraphs from the 1840s meant that standard apparatus for the study of
electricity could be bought; and as this science moved from a qualitative
into a quantitative phase accurate measurements became essential. Newton
had gone out and bought a prism when beginning his work on light; but by
the nineteenth century optical apparatus was much superior, especially if it
came from an eminent maker such as Fraunhofer. His prisms were good
enough for the black lines in the spectrum to be perceived as real rather
than due to strains in the glass; and from the 1860s the spectroscope, first
with prisms and then with diffraction gratings, became an essential piece of
apparatus in physics and in chemistry. Understanding electricity and
spectra led to the new world of quantum physics of the twentieth century.
Chemists now get Pyrex glassware made to fit together, but ingenuity in
adapting apparatus is still essential in any experimental science; and on the
frontier the scientist still has to make up the apparatus required, and usually
within a tight budget. Faraday’s advice is still relevant.
M. Faraday (1974) Chemical Manipulation (1827 ed.), reprint, Science Reviews, London.
G.L.E. Turner (1983) Nineteenth-century Scientific Instruments, Sotheby, London.

Aromatic compounds are those which contain a benzene ring. Benzene
was first isolated by Faraday in the distillation of whale oil; but its
structure remained a mystery. August Kekulé was in the 1850s struggling
towards a theory of valency, assigning different combining powers to the
various atoms of the elements: and perhaps when travelling on the top of a
London bus, or perhaps when gazing into the fire in a reverie, he thought of
snakes biting their own tails (an alchemical image) and applied this idea in
suggesting that the six carbon and six hydrogen atoms in the benzene
molecule were arranged in a ring. Just how the bonds were made was
obscure, and he had to postulate alternate double and single bonds; this
gave no reason for the great stability of the arrangement. Linus Pauling in
the 1930s suggested that there is resonance between the various possible
ring structures, none of which exactly represents the outcome which has
lower energy than any of them, and thus more stability. More recent
treatments are in terms of molecular orbitals, the electrons being seen as
not attached to particular carbon atoms.
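The molecular-orbital point can be sketched with the simple Hückel model (a later, standard textbook treatment, not Pauling's own calculation): the six π electrons of the ring gain roughly 2|β| of stability over three isolated double bonds.

```python
import math

# Simple Hückel treatment of benzene's six-orbital pi ring: orbital
# energies are alpha + 2*beta*cos(2*pi*k/6), k = 0..5. Beta is
# negative, so a larger coefficient of beta means lower (more stable)
# energy; the list below holds those coefficients, most bound first.
coefficients = sorted((2 * math.cos(2 * math.pi * k / 6) for k in range(6)),
                      reverse=True)
print([round(c) for c in coefficients])   # [2, 1, 1, -1, -1, -2]

pi_benzene = 2 * sum(coefficients[:3])    # six electrons fill three bonding orbitals
pi_localised = 6 * 1.0                    # three isolated C=C bonds at alpha + beta
print(pi_benzene - pi_localised)          # ~2.0: extra stability, in units of |beta|
```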

An association is unlike an academy because its members are not
appointed; it is generally an open group. Down to the nineteenth century,
scientific societies were either academies or clubs; but in Germany after the
Napoleonic Wars ended in 1815 Lorenz Oken and others formed an
association open to all interested in science. Because Germany was still a
collection of separate states, some very small, the association was to meet
in a different city each year: and different states came to compete in their
universities, museums and laboratories as in their opera houses.
This German model was taken up in Britain, where in 1831 the British
Association for the Advancement of Science (BAAS) was inaugurated in
York; and then later in the USA, which like Germany had no single cultural
centre. In Britain the BAAS drew on provincial pride; it never met in
London in the nineteenth century, and cities competed to be host. The
meetings gave a great boost to popular science; lectures were delivered
sometimes to enormous audiences which included women, and local people
as well as eminent scientists (a word coined at the BAAS) described their
research. There were also great dinners and receptions. The addresses by
the Presidents were very widely reported; they described recent advances in
science, and usually called for more public support for science and
technology.
Very soon two classes, like officers and men, appeared in the BAAS;
probably some such development is unavoidable. The association came
under the authority of gentlemen chiefly from Cambridge; while the
provincials furnished data, and subscriptions to finance research.
Sometimes one could move from the amateur to the professional status, for
science was becoming a profession in the nineteenth century; and the
BAAS was prominent among bodies pressing for further scientific
education. It set up committees to report on the state of things in particular
disciplines, to promote research, and to stimulate government to support
science better. It became a good example of a voluntary society acting as a
powerful pressure group in Victorian Britain; many must have owed their
first exposure to science to its meetings, and many provincial colleges and
museums received a great boost from its visits. Science could also be
entertainment; the great men of the day came by train and could be seen and
heard, and there might also be a great row proving that science was not an
inhuman and dispassionate affair. The coming of television has reduced the
importance of the meetings of the BAAS and similar associations, but their
open character in an era of increasing specialisation is still of value.
S. G. Kohlstedt (1976) The Formation of the American Scientific Community, Illinois University
Press, Chicago.
R. MacLeod and P. Collins (eds) (1981) The Parliament of Science, Science Reviews, London.
J. Morrell and A. Thackray (1981) Gentlemen of Science, Oxford University Press, Oxford.

Astrology is the science of predicting human affairs from celestial
observations. It is very old, and is found in both the East and the West; at
first the affairs of states or kings, but then those of individuals, were
connected to the aspects of stars, planets and comets. Astrology was
developed against a background in which man, the little world or
microcosm, was seen as a model of the great world or macrocosm; so that
events in one must be parallel to those in the other. A web of
correspondences and influences linked everything together; and spirits
might be responsible for both planets and people, opening possibilities for
natural magic. In the West there were anxieties about determinism in
astrology, for it seemed to undermine the free choice between good and
evil essential to religion; but astrologers argued that their science
disclosed only tendencies about people, who are influenced but not
determined by the stars.
It might be supposed that astrology would not stand up to
falsification: but in the past as in the present many astrological forecasts
were vague enough not to be readily falsified. When they did turn out
wrong, the client if still alive and indignant did not give up astrology, but
changed his astrologer; while the astrologer would claim that the failure
showed the need for more expenditure on astrological research. We are
familiar with failures in medical diagnoses and in weather forecasts; and yet
we do not abandon medicine and meteorology. Astrology was, after all, a
science of some utility; the astrologer performed a counselling service,
closely connected with contemporary medicine, careers advice and
marriage guidance.
Only towards the end of the seventeenth century did scepticism about
astrology generally come to prevail among the educated. In its opening
years the great astronomer Johannes Kepler was employed by the Emperor
Rudolph II in Prague as astrologer, though he regarded these duties as
tiresome; and his English contemporary Thomas Harriot was suspected of
the serious offence of casting James I’s horoscope to see if he would come
to a sudden end on 5th November 1605 in the Gunpowder Plot. Galileo, on
the other hand, thought there was nothing in it; and his world view, in
which planets were large lumps of matter following the same laws as
terrestrial bodies, had no room for occult action at a distance. Descartes also
popularised a mechanical world view. When in 1687 Newton published his
Principia Mathematica, astronomy had a theory in which there was no
place for astrological forces; and facts or correlations which might seem to
support astrology became irrelevant, merely accidental, in the absence of
any plausible explanation in the new paradigm.
W. Lilly (1985) Christian Astrology (1647 ed.), reprint, Regulus, Exeter, afterwords by P. Curry and
G. Cornelius.
J. D. North (1986) Horoscopes and History, Secker & Warburg, London.
K. Thomas (1971) Religion and the Decline of Magic, Weidenfeld & Nicolson, London.
Astronomy has always been the paradigm science, the first to acquire a
deductive theory based upon a model founded in mathematics. Because
the heavens form a gigantic clock, the ‘fixed stars’ going around us each
day, while the Sun, Moon and planets move with respect to them,
astronomy was vital for the calendar; and the discovery that the possibility
of eclipses could be predicted was of great importance. It seems that in
Babylonia astronomers were satisfied to produce tables and arithmetical
series with which predictions could be made; whereas in ancient Greece a
theory or an explanation was demanded.
Plato, in the fourth century BC, urged astronomers to theorise rather than
to go on gazing up into heaven; and Eudoxus duly came up with an
ingenious system of spheres centred upon the Earth which accounted for the
planets’ wanderings. His system seems to have been simply applied geometry; but
Aristotle took it seriously, and proposed that there were real spheres of
crystalline quintessence which carried the heavenly bodies. Further
observation showed that the Moon and Venus changed their distance from
the Earth; and by the time of Ptolemy, second century AD, a new version of
this system, with epicycles or wheels within wheels, had superseded
Eudoxus’ where accuracy was needed. This was very elastic; extra
epicycles could be added to take account of any newly-observed motions. It
was implausible as a piece of dynamics, and Ptolemy claimed no more than
that it ‘saved the appearances’ or fitted the facts.
Copernicus in 1543 revised Ptolemy, putting the Sun at the centre but
using the epicycles; he needed just as many, but his system was simpler in
that the order of the planets was unambiguous; the longer they took to go
round the Sun, the further they were from it. His publisher’s anonymous
preface indicated that this was another saving of appearances; and certainly
it defied common sense. It also seemed open to falsification: the Earth’s
motion in an orbit should produce apparent motions, parallax, among the
fixed stars; but none could be seen at that period, and Copernicus made the
‘unscientific’ suggestion that the stars must be immensely further away than
the few million miles previously supposed. He turned out to be right, as
Bessel showed in the 1830s.
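In modern units the parallax that Bessel finally measured gives the distance directly; his value for 61 Cygni, roughly 0.31 seconds of arc, is used below:

```python
# A star with parallax p arcseconds lies at roughly 1/p parsecs;
# small-angle geometry gives about 206,265 astronomical units
# (Earth-Sun distances) per parsec.
AU_PER_PARSEC = 206265
parallax_arcsec = 0.31            # Bessel's 61 Cygni value, approximately

distance_parsecs = 1 / parallax_arcsec
distance_au = distance_parsecs * AU_PER_PARSEC
print(f"~{distance_parsecs:.1f} parsecs, "
      f"~{distance_au:.2e} times the Earth-Sun distance")
```

So the nearer stars lie hundreds of thousands of times further off than the Sun, vindicating Copernicus's 'unscientific' suggestion.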
Galileo was convinced of the reality of Copernicus’ system, and prepared
to defy the powers-that-be in justifying it. With his telescope from 1610 he
observed that Jupiter had moons, so that the Earth could not be the centre of
all motions; and that Venus showed phases like the Moon, proving that she
must go round the Sun. He also groped his way towards the idea of inertia
in making sense of the physics of a moving Earth. His contemporary
Kepler gave up the epicycles, and established that planets go in ellipses
with the Sun at one focus. None of this quite amounted to a proof of
Copernicus’ theory, as the Inquisitors pointed out; but it made it far more
plausible than the old view. When in 1687 Newton published his Principia
with his theory of gravity, a scientific revolution was completed.
Newton’s friend and supporter Edmund Halley brought comets within the
realm of law in the early eighteenth century; and he also suggested that the
diameter of the Earth’s orbit could be computed if measurements were
made from places at a distance from each other when Venus passed across
the face of the Sun. This only happens twice in a century, and he knew that
he would be dead before it was due, in 1761 and 1769. Voyages were duly
organised, Captain Cook being sent on his first great expedition to Tahiti
and on to New Zealand and New South Wales, to observe the Transits of
Venus; data collected at the great observatories in Paris, Greenwich and
elsewhere enabled navigators to compute the latitude and longitude of
remote islands. The Sun’s distance from us was then calculated with
reasonable accuracy.
Huygens had in the 1690s estimated the distance of a star, comparing its
brightness with that of the Sun. A century later, William Herschel detected
the ‘proper motion’ of the Sun among the fixed stars, and also showed that
some stars were paired, going around each other under gravity. He and
Laplace (famous for his demonstration that the solar system was stable
with no need for God to sustain it with miracles) proposed that nebulae
were solar systems in the making; and the evolutionary history of stars has
excited astronomers ever since. In the 1860s the dark lines in the spectrum
of the Sun, and then of stars, were interpreted as due to the presence of
chemical elements there; making possible celestial chemistry.
Helium was thus identified by Lockyer in the Sun before being isolated
on Earth; but the source of energy in stars was still mysterious until
radioactivity was discovered in the 1890s, and Einstein showed how
matter can be converted into energy with his famous equation, E = mc².
Closer study of the spectral lines also indicated that they were slightly
shifted from where they were in terrestrial spectra, the shift being greater
for more-distant galaxies: this was interpreted as the effect of their motion
away from us, and as indicating that the universe is expanding. The start of
this expansion, the Big Bang, can be dated by extrapolating back; and
astronomy thus seems to bring us to the moment of creation.
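Two of these quantitative points in rough figures, taking a round modern value of about 70 km/s per megaparsec for the expansion rate (an assumed figure; measured values vary):

```python
# First, E = m c^2: the energy released if one gram of matter is
# wholly converted.
c = 3.0e8                          # speed of light, m/s
m = 1.0e-3                         # one gram, in kg
print(f"E = {m * c**2:.1e} joules")            # ~9e13 J

# Second, extrapolating the expansion back: 1/H0 gives a
# characteristic age for the universe.
H0 = 70 * 1000 / 3.086e22          # s^-1 (70 km/s/Mpc; 1 Mpc = 3.086e22 m)
seconds_per_year = 3.156e7
print(f"1/H0 ~ {1 / H0 / seconds_per_year:.1e} years")   # ~1.4e10 years
```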
Thus throughout its history, astronomy which deals with the most remote
objects, and where experiment is rarely possible so that observation has
to be relied upon, has been in the forefront, and has been concerned also
with matters of great moment in philosophy and religion. It has also
tended to become a ‘normal science’ concerned with routine observations
and plotting of star-charts; and these tensions make it fascinating. It is
perhaps a pity nevertheless that philosophers of science, and indeed
scientists, tend to assume that all sciences will follow the road which
astronomy has taken.
I. B. Cohen (1985) The Birth of a New Physics, 2nd ed., Penguin, Harmondsworth.
G. J. Toomer (1984) Ptolemy’s Almagest, Duckworth, London.

An atom means something which cannot be cut; a particle, as Newton put
it, so hard as never to wear away or break into pieces. The idea that
everything we see might be made up of atoms having much simpler
properties goes back to the ancient Greeks; a fragment of Democritus, the
‘laughing philosopher’, goes: ‘in appearance there are colours, tastes and
smells; in reality there are atoms and the void’. Democritus, Epicurus and
Lucretius, the Roman poet whose De Rerum Natura was the masterpiece of
ancient atomism, were drawn to the idea because of its materialism; the
Gods had not made the atoms, and did not control them. This made it
suspect to Pagans and Christians; and the emphasis on chance did not go
well with the evidence of order and design apparent in the Cosmos, and
especially in living creatures.
In the Renaissance, Lucretius’ poem was much admired for its style, but
its content was generally unacceptable. Nevertheless, Aristotle’s idea that
change was the result of different forms being imposed on matter seemed
by the seventeenth century to be purely verbal; and atomism a viable
alternative, promising real explanations. In France, Pierre Gassendi
argued that atoms could have been created by God, who could destroy them;
so that the atomist need not be a materialist. In England, Boyle suggested
the term corpuscle to emphasise this; and Ralph Cudworth, a Cambridge
theologian associated with Newton, proposed Moses as the real originator
of the atomic theory. By 1700, atomism was respectable, and was held by
most natural philosophers: but Descartes, because he could not believe in
the vacuum, empty space left in creation, was not a true atomist; neither was
Leibniz, who did not believe that God would create lots of particles
identical to each other when He could have made them all a little different.
The atoms were imagined to have only the primary qualities: shape, size,
weight. Secondary qualities (colours, tastes and so on) were the result of
different configurations of atoms interacting with sensitive organisms; and
the task of the man of science was to account for secondary qualities in
terms of primary ones. Newton’s work on light was an example of this;
but it did not prove possible to use atomic theory to give more than general
explanations in principle in the seventeenth and eighteenth centuries. The
Newtonian hope of quantifying the microsphere in terms of forces as had
been done for the Solar System was premature: but it did in the eighteenth
century lead to some interesting models of force-centre atoms, the best-
known being that of R. J. Boscovich.
With Dalton in the opening years of the nineteenth century the atomic
theory entered chemistry; and he used it to explain why elements combine
in definite quantities by weight, and why for example carbon dioxide
contains just twice as much oxygen as carbon monoxide. Earlier atomists
had thought in terms of all atoms being made of the same stuff; but for
Dalton the atoms of each element were identical to each other but different
from those of every other element. He therefore needed over thirty
fundamental particles, and the number grew rapidly towards ninety with the
discovery of new elements as the century went on.
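Dalton's argument from carbon's two oxides can be put in modern figures (atomic weights rounded):

```python
# Law of multiple proportions: for a fixed mass of carbon, the masses
# of oxygen in carbon monoxide and carbon dioxide stand in a simple
# whole-number ratio.
C, O = 12.011, 15.999

oxygen_per_gram_carbon_in_CO = O / C         # CO: one oxygen atom per carbon
oxygen_per_gram_carbon_in_CO2 = 2 * O / C    # CO2: two oxygen atoms per carbon

ratio = oxygen_per_gram_carbon_in_CO2 / oxygen_per_gram_carbon_in_CO
print(ratio)   # exactly 2.0: carbon dioxide contains twice the oxygen
```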
To contemporaries, Dalton’s theory seemed speculative and
unsophisticated; but chemists found it increasingly useful in making sense
of reactions and structures. Laurent in the middle of the century
proposed a hypothetico-deductive method for chemistry; deductions from a
hypothetical structure were tested by experiment, and if they survived
falsification could be accepted. Then after 1860 the international
conference at Karlsruhe produced agreement on relative atomic weights,
and from the 1870s the Periodic Table of Mendeleev provided a
classification of elements, confirmed by his predictions of new ones
later discovered. In organic chemistry the atomic theory was essential to
understanding. Chemists might admit that their ‘chemical atoms’ were
probably not fundamental, but few doubted that they existed.
Those who did were closer to physics; and Wilhelm Ostwald hoped
down to the beginning of our century that chemistry might be based on
thermodynamics rather than on shaky Daltonian foundations. But by the
1890s the success of the kinetic theory of gases had led most physicists to
accept the particulate structure of matter; though new evidence indicated
that Dalton’s atoms were complex. The spectrum of every element showed
a pattern of lines more complicated than the vibrations of a billiard-ball;
and in 1897 J. J. Thomson ‘discovered’ the electron, the first sub-atomic
particle. He thought this might be the fundamental corpuscle, but since then
the list has grown longer again: he envisaged the atom as a plum-pudding,
with negatively charged electrons embedded in positive material, but the
experiments of Rutherford indicated a positive nucleus with circling
electrons. The mass of the atom is concentrated in the nucleus. The idea of
an atom made up of smaller particles, and divisible in radioactivity,
came in surprisingly easily; perhaps because by 1900 scientists were no
longer worried by what the Greek word ‘atom’ meant.
Rutherford’s model, as extended by Bohr who incorporated quantum
theory into it, explained the Periodic Table and thus brought together the
chemical and physical atomic theories which had diverged since Dalton’s
time. Aston showed that Dalton had been wrong in assuming that the atoms
of elements were identical; most exist in various isotopes differing in
weight; indeed Dalton’s is a magnificent example of a theory which was
very fertile though later radically falsified. His achievement had been to
convert atomism from an unfalsifiable world-view, not very different from a
just-so story, into a testable piece of science.
K. Fujii (1986) ‘The Berthollet-Proust controversy and Dalton’s atomic theory, 1800–1820’, BJHS,
19, 177–200.
D. M. Knight (1978) The Transcendental Part of Chemistry, Dawson, Folkestone.
A. J. Rocke (1984) Chemical Atomism in the Nineteenth Century, Ohio State University Press, Columbus.

Authority is a poor reason for accepting anything, and the Royal Society’s
motto ‘Nullius in verba’ implies that nobody’s word for anything is
sufficient. The only authority in science should be experiment and the
reasoning based upon it through induction and deduction; such at least
was the hope of the founders of modern science in the seventeenth century,
who believed that their predecessors had been too deferential to the
opinions of Aristotle, Ptolemy and Galen. But academies such as the Royal
Society in due course came themselves to function as authorities; and the
views of great scientists, or associations of them, must carry great weight.
Science has therefore come over the years to be associated with dogma just
as much as with open-mindedness; indeed scientists are often accused of
closed-mindedness when they refuse to drop everything and investigate
flying saucers, telepathy, Noah’s Ark or acupuncture.
One good reason for accepting authority is that life is too short for us to
attempt to test everything for ourselves. Unless we accept a division of
labour in which those with particular skills exercise them, and we fall in
with their conclusions, we should all have to revert to nasty, short and
brutish lives in caves. Scientists are well aware of error and fraud, and of
the difficulty of establishing truth, or even that plausible theory which is
necessary to convert a mass of facts into science. The literature of
science is full enough of mistakes and misinterpretations to make one very
doubtful of any claims made by those whose status is uncertain. This is
why the teaching of science is dogmatic, closer to the catechism than to
discussion; argument in modern science, where there is so much to learn,
comes only at an advanced stage. One might conclude from this that science
alone provides an incomplete education for life, a sphere of probabilities;
and must be augmented by study of the humanities, where authority is much
less prominent, or anyway by critical study of the history and philosophy of
science.
In the sciences, the teacher or supervisor can come to play the role of a
father. The eminent German professor of the nineteenth century would
publish his students’ work in his journal, and would find posts for them;
he would hope to preside over a school of research, where his authority
would prevail. This might involve factual claims, but more importantly
would involve a world-view or paradigm in which certain problems and
methods of investigation were seen as important, and others as unimportant
or wrongheaded. Where, as in nineteenth-century chemistry, there were a
number of such schools in various countries, each produced useful
information; and striking progress came when a student from one worked
for a time in another and had his eyes opened to other authorities. This
dialectical progress, produced by the clash of opposed views, is
characteristic of much science.
M. P. Crosland (ed.) (1975) The Emergence of Science in Western Europe, Macmillan, London.
H. T. Pledge (1939) Science Since 1500, HMSO, London, reprinted 1966, has charts of masters and
pupils.

An axiom is an assumption which lies at the basis of a deductive system.
Ideally it should be self-evident, undoubted by any reasonable person, as
the axioms of Euclid’s geometry were supposed to be: for example, that if A
is greater than B, and B is greater than C, then A is greater than C.
Assumptions less certain than this, such as that parallel lines never meet,
were described as postulates; but this distinction is not always easy to draw.
Proof, in geometry or any deduction, involves demonstration that the
proposition in question is a consequence of the assumptions. The process is
therefore one of unfolding what is already present in axioms and
definitions; and to philosophers of a strongly empirical bent, deductive
proof is merely tautological – it does not tell us anything that we did not
know.
This idea seems implausible when applied to Euclid or to Archimedes,
who proved surprising propositions about circles, cylinders and cones
which must indeed have been implicit in the axioms but were not evident to
lesser mortals. And Archimedes in his work on the lever and on
hydrostatics crossed the boundary between mathematics and physics.
Anybody who accepted his axioms about rods or fluids could hardly doubt
his conclusions; and experiment seems redundant in the face of
geometrical demonstration, like trying to prove Pythagoras’ theorem by
making measurements on diagrams. The axiomatic model for physics has
remained exceedingly attractive ever since: merely empirical
generalisations do not seem to be real science until they are shown to be
the consequence of theoretical propositions.
Archimedes’ proofs depended upon his adopting a model of rigid rods,
point masses and incompressible fluids: these are not things which we
actually meet with in the world, but they are abstractions which can be
handled mathematically. The certainty of the proofs involving them only
applies to this abstract world; if a fluid is of such a character, then it must
by necessity behave in certain ways. What has to be tested is how closely
the consequences of the model fit the phenomena; and this is where
experiment has to come in.
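Archimedes’ law of the lever shows the pattern: granted his idealised rigid, weightless rods, two weights W1 and W2 balance about a fulcrum exactly when

\[ W_1 d_1 = W_2 d_2 , \]

where d1 and d2 are their distances from it; whether real levers conform is then the question for experiment.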
In physics and chemistry, theories of atoms or particles involve
axiomatic reasoning, which seems central to the idea of explanation in
these sciences; but Darwinian theory similarly begins with axioms, that
there is a struggle for existence, variation between organisms, and hence
natural selection. So throughout the sciences, deduction from axioms and
testing of consequences is an important method.

Black in the sciences does not mean exactly what it does in ordinary life:
black bodies are those which perfectly absorb and radiate energy, while
black holes are like sinks into which matter disappears. In the late
nineteenth century, the wave theory of light was triumphant everywhere:
all phenomena could, it seemed, be accounted for in terms of light waves.
Indeed the theory was taken as one of the great examples of how physics
could bring law and understanding into complicated fields, through
introducing unobservable entities; in this case the ether and its undulations.
One anomaly, or cloud in the otherwise bright sky of classical physics,
was the radiation from black bodies; which did not conform to expectations
derived from the wave theory. Planck suggested at the end of the century
that if the energy came off not continuously but in discrete packets of a size
governed by the frequency ν multiplied by a constant h, then the equations
would fit the facts; and this was the beginning of quantum theory. It was
taken seriously by Einstein, in his work on the photoelectric effect in 1905;
and ever since then wave-particle dualism has been a feature of physics,
and the wave theory of light has been seen as incomplete.
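In modern notation Planck’s suggestion reads

\[ E = h\nu , \qquad h \approx 6.6 \times 10^{-34}\,\mathrm{J\,s} , \]

radiation of frequency ν being emitted and absorbed only in whole quanta hν; the minuteness of h is why the graininess had escaped notice.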

The printed book was an essential feature of the beginning of modern
science; it meant that large and accurate editions first of ancient and then
of modern texts could be put into rapid circulation, and that science could
be done – as by Copernicus, Franklin, Dalton and Mendeleev – by those far
from metropolitan centres. With the invention of the scientific journal in
the 1660s, the importance of books in science began to decrease somewhat;
but they still form an important part of the literature. Popular science
is one genre in which books have always been prominent; and science is
learned from textbooks, which present the accepted paradigm at different
levels of the education system. On the frontiers, revolutionary ideas have
frequently been expounded in books rather than articles in journals; there is
not sufficient room in a short paper. Thus Darwin’s theory attracted no
attention when published as an article, but in a book its importance was
evident. The monograph describing current research and understanding in a
field can, especially when written by one responsible for much of it, be of
extreme value.
E. L. Eisenstein (1979) The Printing Press as an Agent of Change, Cambridge University Press,
Cambridge.
L. Febvre and H. J. Martin (1976) The Coming of the Book, Verso, London.

Caloric was the name given to the supposed fluid responsible for the
sensations of heat. Lavoisier in 1790 regarded it as a chemical element,
weightless but material. He envisaged changes of state, from solid to
liquid to gas, as chemical combinations of the substance concerned with
caloric: the ‘bases’ of elements like hydrogen, not then liquefied, were
therefore thought to be unknown. This belief was based on the discovery by
Joseph Black that a large and definite quantity of heat, ‘latent heat’, was
absorbed when ice melted to water at the same temperature, or water turned
into steam. This caloric was supposed to be chemically combined, while
that which just made the water warmer was mixed with it. Heat evolved in
chemical reactions came from previously-combined caloric. Evidence that
heat could be produced by friction weakened this theory, but in 1824 it lay
behind Sadi Carnot’s derivation of the second law of thermodynamics
based on the analogy of the flow of caloric and that of water through
machines. Error in science can lead to progress.
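Indeed Black’s measurements stand, whatever the explanation: in modern units about 334 joules are absorbed per gram of melting ice, and about 2260 per gram of boiling water, with no rise of temperature; so the heat needed to change the state of a mass m is

\[ Q = mL , \]

with L the latent heat of fusion or of vaporisation.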

Catalyst is one of the few technical terms of nineteenth-century chemistry
to enter ordinary language: it was coined from the Greek by Berzelius, and
means an agent which promotes a chemical reaction without itself
undergoing change. The term is sometimes used of outside sources of
energy, such as light: thus the reaction between hydrogen and chlorine can
be described as catalysed by light, but the term is not generally applied to
heat, that universal accelerator of chemical change.
Usually the word is applied to substances; and catalysis may be
heterogeneous or homogeneous. A heterogeneous catalyst is in a different
phase from the reactants. Thus in the course of his research on the safety
lamp for miners, Davy found that methane and the oxygen in the air will
react at a low temperature in the presence of a coil of platinum wire, which
glows gently during the process; normally the reaction is explosive.
Platinum, especially in a finely divided state so that a great deal of surface
is exposed, has turned out to be a very effective catalyst for a whole series
of reactions; the mechanism is still not really clear 170 years on from
Davy’s experiment, but adsorption of the reactants upon the surface of the
metal where they can be brought into intimate contact must somehow be
responsible. Platinum is particularly but not uniquely valuable as a
heterogeneous catalyst.
Homogeneous catalysts are in the same phase as the reactants; and the
classic one is sulphuric acid, which converts alcohols into ethers. It was
supposed in the early nineteenth century that the acid formed a part of the
ether, which was called ‘sulphuric ether’; but A. W. Williamson in London
worked out the mechanism of the reaction in which the acid in effect
removes water. With the alcohol the acid forms an intermediate compound;
and Williamson worked out how to make various ethers starting with
different alcohols and making the reaction go in stages. In such cases the
catalyst does react, but is then eliminated; and this was one of the first
reaction mechanisms to be elucidated, in the middle of the last century.
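Williamson’s two-stage scheme may be written, in modern formulae and taking ethyl alcohol as the example:

\[ \mathrm{C_2H_5OH} + \mathrm{H_2SO_4} \rightarrow \mathrm{C_2H_5HSO_4} + \mathrm{H_2O} \]
\[ \mathrm{C_2H_5HSO_4} + \mathrm{C_2H_5OH} \rightarrow \mathrm{(C_2H_5)_2O} + \mathrm{H_2SO_4} \]

The acid is consumed in the first stage and returned in the second; overall, two molecules of alcohol yield one of ether and one of water.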

Cause is a term which we often use, usually in the context of some
unpleasant happening: we ask about the cause of death at an inquest, or of
an accident at an inquiry, but we do not usually ask about the causes of
successful births or happy landings. In those cases where a cause is to be
determined, there are usually a multiplicity of conditions which might be
responsible: the judge will try to pin down one of them. Which one may be
culturally determined; our ancestors who believed in astrology would have
allowed causes of things which we would not.
In science, ‘cause’ may mean the underlying changes in matter and
energy which bring something about; or, in a weaker sense, that one out of
the set of sufficient and necessary conditions which we have been varying
experimentally; or it may be used in a still weaker sense, as by David Hume
in the eighteenth century, to mean that which has so far always been found
to precede the effect. Hume believed that there could be no necessity
about the idea of causation; the world was a black box and its inner
workings, if any, were concealed from us. We could have no certainty
about the powers which produced the phenomena we observed; all we could
see were regularities, which led us to expect what would come next.
Prediction would always be a matter of probability, never based upon
certain knowledge. If we cannot know causes, then laws are the basis for
science; but in a world of contingency these could not be relied upon, for
induction is itself uncertain.
Scientists down to the beginning of our century generally believed in
the laboratory (whatever they might say in the study) that we could
discover mechanisms, and find real causes; that we were not dealing with
mere regularities. Thus the atomic theory, and the wave theory of light
showed what was really happening in chemistry and in optics. When the
numbers of things were too great to count, and the agents were too small to
see, as in the kinetic theory of gases where properties are explained in
terms of molecules dashing hither and thither and colliding, then it may be
necessary to fall back on statistical explanation. If one mass of gas is hotter
than another, then its molecules will on average be going faster; though
some will be going very slowly, or even be stationary. In this sense, motion
is the cause of heat.
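The kinetic theory makes this statistical sense of cause quantitative: the absolute temperature T measures the mean kinetic energy of the molecules,

\[ \tfrac{1}{2} m \overline{v^2} = \tfrac{3}{2} k T , \]

k being Boltzmann’s constant; any single molecule may have almost any speed, but the average is strictly fixed.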
But with quantum theory, phenomena like radioactivity seemed to
obey a law but not to have a causal explanation. There seemed to be no
difference between the radium atom which disintegrated, and those which
did not; and yet the rate of decay was absolutely regular, and uninfluenced
by circumstances. In modern physics, therefore, it seemed that only a weak
sense of ‘cause’ was applicable; we cannot know what is going on behind
the veil. Not everybody is happy with this; Einstein for example was not:
but so far there is not a causal version of quantum mechanics which is
generally satisfactory.

Certainty is what people have always sought for. Truth is the object of
knowledge, whereas probability is within the sphere of mere opinion; or
so it seemed down to about a hundred years ago. With the Reformation of
the sixteenth century (and indeed with schisms before that) the authority
of the Church as a source of certainty, guaranteed by God, collapsed.
Different branches of it held opposite doctrines; all coherence was lost. But
another hope lay in philosophy. Ancient philosophers, nearer than we to
the Golden Age, knew truth; and the hope of the Renaissance was that their
writings could be accurately restored and thus bring us certainty. But the
philosophers of Antiquity had disagreed amongst themselves, and the
carefully edited texts now showed that this disagreement was genuine and
not due to medieval misunderstanding. In the early seventeenth century
came the attempt at a new method; the rise of modern science, based on
experiment and mathematics. This general picture was in the mind of
Comte in the 1830s when he framed his idea of knowledge as progressing
from the theological through the metaphysical to the positive stage.
Bacon hoped for certainty through induction, the careful generalising
from instances observed without preconceptions. His New Atlantis
described an island ruled by a benevolent academy of sciences, with a
hierarchy going from fact-collectors up to those creating wide-ranging
theory. But the problem is that induction can never bring certainty. It
depends upon the assumptions that the future will resemble the past, and
that we have had a fair sample of events to reason upon. We might have
perceived a merely chance correlation, like that between the death-rate in
Hyderabad and the membership of the American Union of Machinists,
which went up and down in parallel for a number of years. Inductive
science depends upon moral certainty; as the great Bishop Butler put it in
the eighteenth century, probability is the guide to life: here, as in courts of
law, we depend on ‘moral certainty’, which means high probability.
Bacon’s contemporary Galileo sought certainty through intuition and
deduction; he believed that the Book of Nature was written in the language
of mathematics, and that the underlying laws of nature must be of simple
numerical form. He got the law of falling bodies right; but unfortunately his
intuition led him to explain the cause of the tides as the revolution of the
Earth each day on its axis. He therefore expected one high tide every 24
hours; and was very proud of thus demonstrating the Earth’s motion. We
know, and the Venerable Bede knew, that tides do not work like that; only
someone living in Venice, where there are no tides, could have proposed
such an idea. Galileo’s intuition was not wholly reliable; certainty cannot be
achieved this way.
During the eighteenth century, phlogiston, which had been assumed in
chemistry as the component making things flammable, was shown by
Lavoisier to be an unnecessary hypothesis; but Lavoisier himself proposed
that all acids contain oxygen, which turned out to be false. In the nineteenth
century, the success of the wave theory of light ensured that scientists
believed in an ether, which was something that could carry waves; after
all, one can have waves in the sea, but not in nothing. Many even believed
more firmly in the ether than in solid matter, which might be no more than
whirlpools in the continuous sea of impalpable ether. The ether was, like
phlogiston, shown to be unnecessary, as Einstein demonstrated that light
sometimes behaves more like a stream of particles, photons, than like
waves.
There is no reason to believe that parts of our science will not go the way
of these supposed certainties of the past; error and even fraud cannot be
excluded from science, anyway in the short run – and in the long run we
shall all be dead. And yet most science is certain enough. Formulae of
chemical compounds which have been determined by analysis followed
by synthesis; hypotheses which have led to detailed and unexpected
predictions and survived attempts at falsification; theories which are fertile in
countless ways; constants which are the result of converging but distinct
lines of experiment and reasoning; these things all give us moral certainty,
and we should not perhaps expect more. The trouble is that this kind of
probability cannot be quantified in the way that nineteenth-century
mathematicians succeeded in quantifying some areas of statistics.
Within a deductive system, such as Euclid’s geometry, one can have
proof of the theorems: but the certainty of these proofs tells us nothing
about the world, only that the results are contained within the axioms. There
are such axioms within the sciences also, which the scientist takes for
granted: these are the principles, such as those of the conservation of
matter and of energy which form a background against which
explanation of change is possible. They are deeply entrenched within
science, but they are not certain. They form part of the dogma which is
prominent in scientific education; and may even be conditions, like some
principle of uniformity, of doing science at all.
In our century, the ‘uncertainty principle’ has been added to these, with
the coming of quantum mechanics; it states that we can never know both
the position and the velocity of a sub-atomic particle with complete
accuracy. It might seem that this really marks the end of the search for
certainty which began with Bacon and Galileo: but there are many things
about which we can be sure; such as that half of a sample of radium will
have decayed in a definite time. The underlying laws seem statistical rather
than causal; and this might make certainty in science even more like a
chimaera than it was before.
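Both points can be given exact modern form. Heisenberg’s principle bounds the joint precision of position and momentum,

\[ \Delta x \, \Delta p \geq \frac{\hbar}{2} , \]

while radioactive decay obeys the statistical law \( N(t) = N_0 e^{-\lambda t} \), whose half-life \( t_{1/2} = (\ln 2)/\lambda \) is perfectly definite even though no single atom’s fate can be foretold.
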
Chance has often been set against law. It is the realm of the disorderly and
the unpredictable. This was one of the objections levelled against the
ancient theory of atoms, because chance collisions were supposed to lie
behind all phenomena. A world of contingency like this would not be
conducive to serious science, which requires uniformity and cause. All
connections between things cannot for the scientist be just accidental; and
moreover it seemed that the coherence of the solar system and of animals
proved design, and could not be the result of mere chance. As Einstein put
it, God does not play dice.
And yet by the nineteenth century chance and law were in part
reconciled. Statistics and probability involved laws of chance, and
important branches of physics came to depend upon them. And Darwin in
his theory of evolution used the haphazard mechanism of natural selection,
where chance was operating according to law to produce development. The
old determinism of Laplace had to give way to a different paradigm, in
which small-scale phenomena in the organic and the inorganic realms are
not completely predictable. This neither proves that we are free or
determined, nor that there is or is not a Designer; but it does indicate that
our idea of a law of nature has to be refined. What we can say is that
science is still incompatible with the belief that anything might happen; it is
only lawful chance that has a place in it.

A charge is initially a load; and might mean what a furnace or a weapon
would hold. In electricity, it was used to mean the quantity which a
condenser, an ‘electric’, contained of the supposed electric fluid. At the end
of the eighteenth century, Coulomb established that electric charges attract
and repel each other according to an inverse square law, like gravity.
Following the work of Faraday and others on electrolysis, the electron was
first postulated by Stoney and Helmholtz as an atom of electricity, carrying
unit charge; the ratio of its charge to its mass was measured by J. J. Thomson
in his work on cathode rays in 1897, and the charge itself determined by
Millikan in the early years of
our century.
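In modern (SI) notation Coulomb’s result is that the force between charges q1 and q2 a distance r apart is

\[ F = \frac{q_1 q_2}{4 \pi \varepsilon_0 r^2} , \]

repulsive between like charges and attractive between unlike, in exact formal analogy with Newton’s law of gravitation.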

Chemistry is the science of the different kinds of matter. As such, it has a
very long history which begins with the first experiments with dyes and
metals. The root ‘chem’ in the names alchemy and chemistry is supposed to
come from ancient Egyptian, referring to the black earth of that country; but
it may be from ancient Greek or Chinese words connected with metals.
Through most of its history, chemistry has been strongly concerned with
utility through technology and through pharmacy. In popular usage, the
pharmacist is still the chemist; but since before 1800 the two have been
distinct, chemistry having its own theoretical structure which orders an
enormous mass of facts.
For the alchemist, base metals like lead or iron might, in the womb of the
earth or in the apparatus of the adept, grow into noble metals like gold or
silver; Boyle and Newton in the seventeenth century believed this to be
possible, and the unscrupulous could profit from the foolish prepared to
invest in schemes of gold-making. More certain returns in money and in
understanding came through metallurgy, and the associated theory of
phlogiston; in which the ore or calx was seen as fundamental, and the
metal or regulus a compound of this with phlogiston, the flammable
principle, coming usually from charcoal. These ideas had their medical
connections: the alchemists’ ‘philosophers’ stone’ would lengthen lives and
bring enlightenment; while phlogiston was the source of fever.
Lavoisier in the 1780s emphasised conservation of mass, urging
chemists to make their science quantitative by weighing as the assayers had
done for centuries. The calx is heavier than the metal: so he interpreted
burning as combination with oxygen; and took as simple substances or
elements those bodies (like the metals) that could not be decomposed,
which Dalton in the early 1800s identified as having distinct kinds of atoms.
Analysis became the crucial process in the science; and new elements
were rapidly added to the list. From the 1860s the elements were arranged
into the Periodic Table, a classification which proved to be an
immense help to the memory.
Chemistry was the first science in which laboratory teaching was
required; at first from about 1800 for medical students in progressive
universities, and then following the example of Liebig at Giessen in the
1830s for research. The chemist had to be able to make much of his own
apparatus, and had to be skilled in using it; chemistry was an experimental
science. The problem was to make sense of all the experimental data. Thus
it was clear that eight parts by weight of oxygen combined with one of
hydrogen to make water; but whether water was HO as Dalton thought, or
H2O as Avogadro and others might have put it, could not be settled.
Different textbooks of the mid-nineteenth century differed over the
formulae of simple compounds like alcohol.
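The arithmetic of the impasse is simple: taking hydrogen as 1, the measured eight parts of oxygen to one of hydrogen make oxygen’s atomic weight 8 if water is HO, but 16 if it is H2O, each oxygen atom then being matched by two of hydrogen:

\[ \mathrm{HO}: \ \mathrm{O} = 8 ; \qquad \mathrm{H_2O}: \ \mathrm{O} = 16 . \]

Equivalent weights were experimental facts; atomic weights were relative to an assumed formula.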
Laurent in France suggested that chemists should follow the route of
hypothesis and deduction; thinking of a plausible structure for a
compound, working out consequences of it, and testing them. A result of
this strategy was Kekulé’s work on the benzene ring, making sense of
aromatic chemistry. In 1860 an international conference was called at
Karlsruhe to fix atomic weights and formulae. By 1870 chemists could feel
that they were on top of their material; and the new emphasis was on
synthesis, the making of compounds. This might, as with indigo, be
valuable in commerce; but it was also the best way of confirming structures,
especially in the exciting sector of organic chemistry, concerned with the
compounds of carbon.
This was a time of rapid expansion of education at all levels, in which
chemistry was especially important, being practical and useful and
somehow occupying a central place in the sciences. The hope in physiology
was that chemists would provide an explanation of the processes of life;
while the complexities of spectrum analysis brought chemists on to the
boundaries of physics. In schools and universities chemistry became the
dominant science: and it had become since Liebig a German science, in that
the German universities were the centres of excellence for research, while
German entrepreneurs had taken the long view and successfully built up a
science-based industry, based upon synthetic dyes. In the academic and the
business world, chemistry was becoming a profession.
Chemical theory after Lavoisier never had the resonance of Darwinian
evolution and ‘the struggle for existence’ or of thermodynamics, with the
prospect of the world running down. Chemistry has been a more workaday
science, and chemical imagery applied to humans has an entertaining and
reductive effect: evenings may be dissolved in aqueous alcohol, but the
imagination is not usually set on fire by chemical diction. Similarly, while
chemistry seemed about 1800 the fundamental science, concerned with the
forces which modify matter, by 1900 this role had passed to physics, the
science of energy and its changes. With the new atomic models of
Rutherford and Bohr in the 1910s chemistry was in a sense reduced to
physics. The Periodic Table was explained in terms of electronic orbits,
and chemical reactions in terms of the giving and sharing of electrons. In
the event, this reduction did not make very much difference; the properties
of compounds cannot in practice be calculated in advance from physical
data, and laboratory work remains essential.
Thus chemistry today is the essential service science in the academic and
industrial spheres. Mathematics and physical methods of analysis have
transformed the nineteenth-century picture of advanced cookery done in
test-tubes; but chemistry is still basically an experimental science, where
manipulative technique is essential, and where colours, smells and even
tastes are still important in diagnosis. But underlying these things there has
been built up over two hundred years that body of theory which turns facts
into science.
R. F. Bud and G. K. Roberts (1984) Science Versus Practice, Manchester University Press,
Manchester.
C. A. Russell, N. G. Coley and G. K. Roberts (1977) Chemists by Profession, Open University Press,
Milton Keynes.

Classification is something we do all the time, whenever we apply a
general term to something. If we didn’t classify, we would never make
sense of anything; though no doubt our fondness for classifying, especially
people, can be harmful. In the sciences, classification is sometimes seen as
an awkward ‘natural history stage’ through which all have to pass; but in
physics the classifying and naming of fundamental particles, and in
chemistry the Periodic Table are of great importance. Classification is
based upon analogy, and involves theory, explicit or implicit: it can be
convenient or inconvenient, natural or artificial; and sometimes plain
wrong, as when Lavoisier put heat and light among the chemical elements.

Colour brings much interest into our world. The question has been for over
two thousand years how much it is ‘really’ there and how much it is added
by us. Democritus believed that in appearance there were colours, tastes and
smells, while in reality there were atoms and the void. Galileo took up this
idea, distinguishing primary qualities, really present in things, and
secondary qualities which resulted from their interaction with a sensitive
observer. Locke made this distinction a very important part of his
philosophy. The job of the scientist was to account for the secondary
qualities in terms of the primary ones.
Newton showed how the spectrum can be understood as the
decomposition of white light into its constituent colours; an astonishing
discovery, because it had been generally supposed that colours were
modifications of whiteness rather than whiteness a mixture of colours. With
the wave theory of light colours seemed to be reduced in principle to
differences in wavelength; a real reduction of a secondary quality to a
primary one, to a number. In fact, things were more complicated; and the
great chemist Wilhelm Ostwald spent much time trying to develop a
classification of colours, finding he needed three dimensions to plot them.
Later systems of standard colours start from his system. Complementary
colours are those which together form white light.
John Dalton was severely colour-blind, which is an unusual condition;
he was sent by his mother to buy some grey thread, and came back with
scarlet which he thought matched what she wanted. He was the first to
describe colour blindness scientifically. His contemporary Thomas Young
developed a three-colour theory of vision, which was improved by
Helmholtz in the middle of the nineteenth century, and is still the basis of
theories of colour perception. Goethe, another contemporary of Dalton, was
highly critical of the Newtonian colour-theory; and in his Farbenlehre
pointed out how one can see colours in black and white diagrams, and also
how red in paintings comes forward and blue recedes; he is perhaps a
founder of psychological optics.

Complementary colours are those which, like red and green, together
make up white light; complementary angles together make up a right angle;
and complementary theories or models, like those of waves and
particles in quantum physics, together provide an account of all the
phenomena.

Conservation is a word ordinarily used of ancient monuments or beautiful
landscapes; meaning the preservation of what is seen as valuable or
important culturally. It can be passive, or active: which involves restoration
of damage or erosion, an attempt to reverse the decay inevitable with the
passage of time. In this sense it is important in the sciences in museums and
libraries. Instruments need skilful restoration and conservation if we are
to understand how they worked; and the skills of the chemist are
particularly important in this kind of work. Apparatus which survives from
the past is a valuable guide to standards of accuracy, and helps us to see
what it was like to be a scientist in an earlier century; and it needs active
conservation and study rather than just being shut up in a glass case and
left there.
The same is true of the books and manuscripts of past science; and last
week’s science is past, so some of these archives will be very recent. The
first problem is to decide what is really valuable and weed out the rest. This is a task requiring
great boldness, for whenever we look at collections of books and
manuscripts we wish that they were more complete; and sometimes it seems
that with great perversity somebody has thrown away all that we would
want to see. But because there is just too much material for anybody to keep
everything, weeding has to be done. In the history of science there has in
recent years been increasing interest in institutions, as we move from
studying the great heroes towards looking at the normal, or away from great
discoveries towards careers, and science as a profession. This is a move
like that of botanists going from the collection of hitherto undescribed
species to the analysis of a flora. Reports of Treasurers and lists of members
may therefore be as interesting as laboratory notebooks; and may need
active rather than passive conservation.
This is especially the case for nineteenth-century papers, for from about
the 1830s cheap paper made from wood-pulp or esparto grass began to
replace the rag paper previously used. Along with steam presses and
machine-made binding-cases, this led to cheap books which were a great
feature of the dissemination of scientific and other learning. But the paper
was bleached, and acid was left in it; so that while earlier paper will last
indefinitely, much of that from the century after 1830 (and in cheap
publications nowadays) goes brown and brittle, and will eventually crumble
away. Its conservation is a problem; which can be alleviated by
microfilming, which has been done for some important archives. This also
has the useful effect of reducing the number of those who actually handle
the manuscripts; for as the Grand Canyon is subject to tourist erosion, so
any papers do get worn by use. Leather bindings, on the other hand, are
kept in good condition by handling; and there are things which can only be
learned by looking closely at papers rather than microfilms: conservation
must not mean locking in a strongroom to which all access is denied.
Conservation has another use more important in scientific theory; where
it is used in the explanation of change. If everything were in flux, then the
world would present us with one damned thing after another in an
incomprehensible sequence. It is only against a pattern of permanence that
we can make sense of changes. Thus in chemistry, the theory of atoms
involves particles which are unaltered in reactions but are simply
rearranged; atoms are conserved, and reaction mechanisms can therefore be
worked out, avoiding merely verbal explanations like the famous ‘virtus
dormitiva’ of opium. The idea of permanent atoms goes back to the ancient
Greeks; but its implications were not fully appreciated until Lavoisier in the
1770s clearly expressed the principle of conservation of matter. The mass
of the reactants must be equal to that of the products, for matter cannot be
created or destroyed. Goldsmiths doing assays had used very accurate
balances, but Lavoisier made the balance central to chemistry; and found
himself having to undertake tricky tasks such as weighing gases as he
reinterpreted combustion as combination with oxygen rather than emission
of phlogiston.
In the middle of the nineteenth century, a new principle was recognised
which united a whole series of sciences into physics, which came
thereupon to be generally recognised as the fundamental science. The
reduction of all the rest to physics seemed possible and desirable, especially
to those drawn to materialism. The principle was conservation of energy.
Force had featured in Newtonian mechanics; Volta had in 1799 discovered
that chemical affinity could be transformed into electricity, and twenty
years later Oersted showed that electricity and magnetism were
interconvertible. Helmholtz urged that heat, light, mechanical force,
electricity and magnetism were all forms of energy, quantitatively
connected; and the programme for classical physics became the finding of
the exchange-rates between them.
With Einstein’s work on relativity at the beginning of the twentieth
century came the idea that mass and energy were themselves
interconvertible; a process actually seen in radioactivity. So the two
principles were united into one, the conservation of mass-energy. In
quantum mechanics, other properties such as spin are conserved.
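The equivalence is expressed in the most famous of equations,

\[ E = m c^2 , \]

where the enormous factor \( c^2 \) explains how a minute loss of mass in radioactive decay can release so much energy.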
Conservation principles are not simply empirical, readily subject to
falsification; when for instance some sub-atomic interactions seemed to
falsify conservation of mass-energy, a new particle, the neutrino, was
postulated to account for the divergence. It was at the time undetectable,
and would seem to the outsider ad hoc; but to make sense of the world we
need conservation principles, and they are not lightly given up.

Contingency is the opposite of necessity. A contingent world is one
which could have been otherwise, one possible world among others. There
have always been some whose ideal of science comes from geometry;
where proof is crucial, and where given the axioms and careful deduction,
it amounts to a demonstration that the theorem could not have been
otherwise. Mathematics for Galileo, and clear and distinct ideas for
Descartes, led to laws of nature which were in effect necessary truths.
Another attitude to laws is that we are seeking those which happen to
hold here; rather as one might investigate the legal system of a country.
Laws are then contingent; and experiment is very important because it can
indicate which laws do in fact apply. Now that we have seen that there are
various geometries, and that Newton’s laws of motion cannot be necessary
truths because they are not true, it is harder for us to believe in necessity:
and contingency is compatible with chance or determinism.
Induction is a way to these contingent truths; and it depends upon
recognition that certain characteristics, which need not logically go
together, do go together: ravens are black, gases expand uniformly with
temperature. A contingent world does not exclude order, or the possibility
of design: it is perfectly possible to believe that God, like a king, has chosen
certain laws, and that it is the task of the scientist to find them out.

A convention is a formal or implicit agreement to behave in a certain way.
In science there are conventions about symbols and signs, sometimes
ratified by international congresses, which are needed if we are to be
sure what we mean. But there are those who hold that all scientific
knowledge is a matter of convention: that truth is unattainable, and
perhaps explanation socially determined. Osiander, writing the preface to
Copernicus’ great book in 1543, urged that hypotheses in astronomy were
to be used rather than believed, and therefore judged by their convenience
rather than their plausibility. But Galileo and
most scientists since have hoped for more than this, believing that some
certainty can come from appropriate method.
It seemed by the late nineteenth century that light was a wave motion
and that energy was conserved; proof of these ideas came from
experiment and theory, and yet both turned out to be incomplete. By the
1920s it was clear that we had to accept conservation of mass-energy, and
wave-particle dualism. Niels Bohr’s ‘Copenhagen’ interpretation of
quantum mechanics was that one should not seek for some explanation of
what was going on at the atomic level, looking for causes, but should rest
content with systems of equations which would give the right answers.
Thus a kind of conventionalism came back into physics, although many
were and remain unhappy with it. The deeper conventionalism in which all
science is a social construction is deeply unattractive to scientists, for whom
metals and birds behave in the same ways whatever the preconceptions of
those studying them. The historian of science is torn, seeing certain kinds of
questions coming up and certain answers seeming appropriate in certain
places and times; but recognising in science and technology some kind of
real progress. The problem is that the content and the context of science
cannot be completely separated, and the role of convention in it is often
inadequately appreciated; especially perhaps by philosophers for whom the
sciences sometimes provide a model of rationality rather than a very human
activity.

Coordinates determine the position of a body in space, and also in time
with the theory of relativity. We are now all used to this because of
reading maps; but in the seventeenth century it was a new idea. We call the
system of x, y and z axes Cartesian because Descartes in his Geometry of
1637 (one of the treatises to which his famous Discourse on Method
(method) was a preface) used them to express algebra in terms of geometry.
He showed that between these two branches of mathematics complete
translation was possible: algebraic equations could be plotted as lines
passing through points having definite coordinates, and geometrical figures
expressed in algebraic terms. This meant that algebra became respectable,
instead of convenient but doubtful; and that speedy algebraic solutions of
problems could often replace clumsy geometrical ones. The differential and
integral calculus, which Newton saw in terms of ‘fluxions’ or gradients and
areas, became possible following this coordinate geometry of Descartes.
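A simple instance shows the two-way translation: the points at a distance r from the origin, a circle to the geometer, satisfy the equation

\[ x^2 + y^2 = r^2 , \]

while every straight line answers to an equation \( y = mx + c \); reasoning about the figure and manipulation of the equation must always agree.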
Galileo, following medieval predecessors, had expressed time by a line in
deriving the law of falling bodies; but it was with Einstein that the idea of
time as a dimension, like height, breadth and length, came into physics.
Some physicists would like us to get accustomed to many dimensions
because the facts seem to be expressed best and most elegantly that way;
and the coordinates become increasingly complicated as one moves towards
ten dimensions instead of Descartes’ three.
Scientists like to use scientific language in non-scientific contexts; this is
indeed a characteristic form of scientific humour, as for example when a
dumbfounded person is reduced to flocculent precipitate. On a visit to the
USA in 1884, William Thomson (Lord Kelvin) gave a series of seminars at
the new Johns Hopkins University at Baltimore; and referred to the group
of professors who participated as his ‘coordinates’; but this usage never
became general.
Lord Kelvin (1904) Baltimore Lectures, Cambridge University Press, Cambridge.

Corpuscle was the term used by Robert Boyle in the mid-seventeenth
century to describe the fundamental particles of matter. He was unhappy
about the term atom, because of its association with materialism and
atheism, and because it implied that even God could not split one: but in fact
his ‘Corpuscular Philosophy’ was a version of the atomism of Antiquity.
Boyle did not believe that the elements (Earth, Air, Fire and Water) were
truly fundamental; and his famous book, The Sceptical Chymist (1661) was
directed to showing that neither all these things nor the sulphur, salt and
mercury of the alchemists were to be found in all substances. Instead he
believed that everything was composed of corpuscles, which were all made
of the same stuff; namely matter. They had none of the secondary qualities,
such as colour, taste or smell: these were the result of our interactions with
material objects consisting of corpuscles in various arrangements or
‘textures’.
Corpuscles might differ in shape, size or weight; but it was not necessary
to suppose this. Some arrangements or clusters of them were very stable,
and formed ‘primary mixts’ such as iron or gold which could not be broken
up in ordinary chemical reactions. But Boyle, and Newton too, saw no
reason in principle why the corpuscles arranged into iron should not be
rearranged into gold, which was denser and more stable. Boyle therefore
took pains to get the law against practising alchemy (passed centuries
earlier in anxiety about inflation) repealed; and he and Newton tried to
achieve transmutations.
Boyle tried to use his ‘mechanical philosophy’ involving corpuscles to
explain not only chemical changes but also the effects of electricity and
magnetism: though he could only achieve explanations in principle, and
not in detail, he thought this was more promising than what seemed to him
the purely verbal explanations of the Aristotelians. By the nineteenth
century, the terms atom, particle and molecule had come to replace
corpuscle in scientific literature; but when J. J. Thomson in 1897 showed
that the cathode rays could be deflected by both electric and magnetic
fields, he described them as a stream of corpuscles. He was thereby
claiming that the particles whose existence the experiment seemed to
establish (later called electrons) were the fundamental building-blocks of
the world, in continuity with his great predecessors two hundred years
before.

Crystal is often used to mean rock-crystal, or silica, which was taken as the
most transparent of substances. Hooke in 1665 recognised that different
substances form crystals of different shapes: and the microscope with which
he was a pioneer revealed that metals for example have a crystalline
structure. He tried to work out how different arrangements of spherical
atoms might yield different crystal forms; an idea little noted until William
Hyde Wollaston in 1813 revived it, employing also spheroidal atoms in
models preserved at the Science Museum in London. But because it turned
out that the same substance might be found in different crystalline forms
(dimorphism) and different substances in the same form (isomorphism),
Wollaston’s instrument for measuring crystal angles, the goniometer,
turned out not to be a substitute for chemical analysis.
In the opening years of the nineteenth century, René Haüy in Paris was
the great authority on crystals; he had dropped a crystal of calcite (CaCO3)
and seen that the fragments were of the same form as the original; and went
on to demonstrate by cleavage how complex forms are made up of simple
unit cells. Geometrical analysis of crystals was taken further by William
Whewell in Cambridge, and then by W. H. Miller, whose Treatise on
Crystallography was published in 1839 and from whom modern methods of
describing crystals, the Miller Indices, derive.
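A Miller index records a face by the reciprocals of its intercepts on the crystal axes:

\[ \text{intercepts } \frac{a}{h}, \ \frac{b}{k}, \ \frac{c}{l} \ \longrightarrow \ (hkl) , \]

so that a face parallel to two axes and cutting the third at unit distance is (100); every possible face is thus reduced to a triple of small whole numbers.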
Whewell believed that any atomic theory worthy of the name must
ultimately be based on crystal forms. Meanwhile, mineralogy was a
territory claimed by chemistry but threatened by geometry: chemical
analysis seemed to be more reliable, but geometry somehow more
fundamental. Calcite, in which there is double refraction, had long
interested those working on light; one ray was found to be polarised, which
was explained by the wave theory as having all its vibrations in one plane.
Pasteur knew that natural tartrates rotated the plane of polarisation, while
synthetic ones did not; and found that the test-tube sample contained
equal quantities of crystals which were mirror-images of one another. In
1874 van’t Hoff proposed that such lack of symmetry was due to molecular
asymmetry, different atoms being arranged about a central carbon atom at
the four corners of a tetrahedron.
At last atomic structure and crystal form had been connected, and further
connexions were made following the discovery of X-rays, which form
diffraction patterns which are a guide to structure. The diffraction of
electrons from a crystal of nickel proved that they have a wave character,
and was thus important in quantum physics. Crystals are not only
beautiful, but have also been very significant in science.

Deduction is the process of drawing conclusions from axioms or
hypotheses. It is generally contrasted with induction, where one goes
from facts to generalisations. Because there are strict rules in the logic of
deduction, it has always appealed to those who seek for certainty in
science, like Descartes in the seventeenth century. Geometry, in which all
sorts of surprising and useful propositions could be proved from a few
axioms, seemed the model for all real science; and demonstration and
proof were what Newton sought in his method of ‘deduction from the
facts’. The problem is that the results are only as certain as the axioms, so
that outside pure mathematics deduction cannot be infallible; but much
scientific progress has come through forming hypotheses and testing
consequences deduced from them.
K. R. Popper (1963) Conjectures and Refutations, Routledge & Kegan Paul, London.

Demonstration in our day has come usually to mean political protest; but
in the sciences it has a long history, being used in various other senses. The
word comes from the Latin, meaning to show; and in geometry the QED at
the end of theorems is the indication that the proof required has now been
displayed. Geometrical demonstration is indeed a scientific ideal: from the
axioms by rigorous deduction a particular conclusion can be shown to be
inevitable. In physics this kind of reasoning goes back to Archimedes. His
demonstration of the law of the lever does not depend upon any
experiments; but the reader who has accepted the propositions about rigid
weightless rods and point masses cannot then escape the conclusions. The
only problem is to find how well they fit our world of actual levers and
awkwardly shaped objects: and science of this kind depends upon the
appropriateness of the theoretical model to actual states of affairs. At this
stage the demonstration is less rigorous; but predictions that are verified
can bring us close to certainty. Thus the prediction by Adams and Leverrier
that a new planet in a specified orbit was responsible for irregularities in
the motion of Uranus was verified with the observation of Neptune; in
confirmation at a deeper level of Newton’s theory of gravity. Leverrier
then went on to predict another planet, Vulcan, nearer the Sun than Mercury
and making it wobble: but nothing was to be seen there, and it was only
with Einstein’s theory of relativity that an explanation could be found
for the wobbles. In time, falsification may thus befall theories which
seemed confirmed: empirical science is provisional, and geometrical
demonstration an ideal strictly applicable only in mathematics and logic.
Demonstration is also used in a looser sense. In the early days of the
Royal Society, Robert Hooke was employed to devise and perform
experiments suggested by the Fellows during discussions. At academies
and associations, a newly-invented instrument (like Newton’s reflecting
telescope) might be demonstrated, or some experiment done which
illuminated a theory. By the early eighteenth century, lectures on the ‘New
Philosophy’ (what we would call experimental science) had become a
recognised form of intellectual entertainment. To know a little about
astronomy, air pressure and mechanics was essential for anybody who
wished to keep up with what was going on; even for conversations in salons
or coffee-houses. Courses in elementary science also began in universities.
These lectures were accompanied by demonstrations with specially-made
apparatus; which differed from that used in research because it was not
devised for discovery.
Ordinary apparatus is made to be taken apart and reassembled as
occasion requires; it can be used for different purposes, and the creative
experimentalist like Davy or Faraday will be reluctant to throw anything
away and will go in for inspired misuse of glassware. Indeed discovery may
depend upon extensions in the use of apparatus, just as great poetry depends
upon words used in new ways. Instruments made for demonstrations are
different. They are handsome rather than workaday; they are collectable in a
way that test-tubes are not. Gleaming in hardwood and brass, they were
made to show one single effect as clearly as possible. The young King
George III had a set of such instruments in the 1760s, now in the Science
Museum in London; which were intended to go with a course in science
taught originally at Leiden by ’sGravesande, and illustrated in his textbook.
He was a Newtonian, but demonstrated Newton’s physics for his classes
using machinery rather than mathematics.
Lectures accompanied by demonstrations were given by itinerant men of
science who toured in the eighteenth and early nineteenth centuries. They
would show air-pumps, demonstrating how a bird in the receiver will
collapse as the pressure is reduced, and probably revive when air is
readmitted; and they would have pendulums, blocks with pulleys, and
electrical machines to produce great sparks when the handle was turned.
Some instruments were scaled-down and simplified versions of machinery
used in shipbuilding, land-draining, or architecture; Glasgow University
had a miniature steam-engine of Newcomen’s primitive design, which
James Watt was called upon to repair, and which set him thinking about
improvements. At the museums in Utrecht and Leiden, there are collections
of apparatus used in lectures.
By the early nineteenth century, institutions like the Royal Institution
in London and Literary and Philosophical Societies in other cities offered
programmes of demonstration lectures, which became very fashionable. At
the Royal Institution, the emphasis shifted towards the popularisation of
current research. Thus Davy demonstrated the properties of potassium, and
Faraday his discoveries in electricity and magnetism. In such up-to-date
science there were as yet no elegant instruments available; so the staff had
to become expert in making apparatus which would show the effect desired
to a large audience, some of them at a fair distance from the lecturer. The
occasion was theatrical; and those who witnessed it were unlikely
themselves ever to do any scientific experiments, though some might be
fired with enthusiasm.
Justus von Liebig, beginning at Giessen in the 1830s the laboratory
teaching of chemistry which was to have a profound effect on scientific
education, was struck with the high level of demonstration lecturing in
Britain; but realised that this went with a lack of training for the
profession of science. He believed that serious students must learn by
actually handling apparatus themselves; that scientific knowledge was
acquired by doing rather than watching experiments. This principle has by
our time become generally accepted, and the demonstration has become less
prominent than it was in its heyday; but experiments done by the inexpert
often do not clearly demonstrate Boyle’s Law or Ohm’s Law as they are
meant to do, and there is still scope for the elegant demonstration in the
lecture-room or on television.

Design in nature has been urged as proof of the existence of God, the first
cause. We do seem to find order in nature; or perhaps a belief in order is a
necessary condition for anybody trying to do science. A world of mere
chance or contingency would have no law in it as ours seems to have. But
then the appearance of our world seems as unlikely, given the laws of
physics, as that monkeys would type out Shakespeare; so it would seem that
there must be a designer behind it. This is not a rigorous proof even of
Natural religion: and since moral order in the universe is not apparent, the
designer is not necessarily the personal God of Jews and Christians.
R. Dawkins (1986) The Blind Watchmaker, Longman, London.
J. C. Polkinghorne (1986) One World, SPCK, London.

Determinism is the belief that, given some initial state, then certain
succeeding states will inevitably follow. This is sometimes thought to be the
same as belief in a world governed by law; but this is not so, for laws like
the second law of thermodynamics limit what can happen but do not entail
some one state of things. In nature as in life, laws allow freedom, within
limits. Aristotle was a determinist, especially as interpreted by the medieval
Arab commentator Averroës; and this was one of the great problems when
his work was rediscovered in the West in the twelfth century AD.
Aristotle’s writings were condemned in Paris and in Oxford in the thirteenth
century, and only slowly came to be accepted following the work of
Albertus Magnus and Thomas Aquinas. It was in part because the union of
Christian doctrine and Aristotelian philosophy was uneasy that Galileo
caused such alarm, especially among the Dominican order to which
Albertus and Thomas had belonged. God’s freedom to act in the universe
must not be questioned.
Newton was anxious to allow God such freedom, and believed that
interventions were sometimes necessary; religion and science for him
were allies. So they were for most of his contemporaries; but increasingly in
the eighteenth century the idea gained ground that God would have created
a world in which everything would have been foreseen, and no sudden
adjustments to meet new circumstances would be necessary. The existence
of such a God seemed to be demonstrated as Newtonian physics showed
the simplicity and harmony of the world. Deism, or belief in a First Cause,
the God of the Sabbath day who had been resting ever since the creation,
went with determinism; and its great triumph came when Laplace in the
opening years of the nineteenth century proved that wobbles in the orbits
of the planets were self-correcting, and that Newton need not have
invoked God to sort them out from time to time. If, said Laplace, a Being
knew the position and velocity of every particle of matter in the universe,
then the future and the past would be present before His eyes.
The later development of physics did not fit so well with this view, which
alarmed contemporaries with its apparent materialism. Maxwell in 1859
introduced statistical rather than causal explanation into physics,
with his work on the dynamical theory of gases; and in the twentieth
century with quantum theory such reasoning became the norm. In the
Indeterminacy Principle of Heisenberg, we cannot know precisely both the position and the velocity of small particles: the world is more open-ended than
Laplace had expected, although the success of scientific predictions
shows that it is not a place where anything goes.

A diagram is an illustration in which all attempts at realism are given up in
favour of conveying explanation. Apparatus may often be shown in diagrammatic form; so, in technology, are drawings of engines and buildings, except perhaps those destined for the boardroom. Diagrams in
natural history may show a typical flower or bird, or an ideal column of
strata in geology; and in chemistry a reaction may be explained in a
typographical diagram.

A dimension is a measurement initially of height, length or breadth. With
Descartes in the 1620s these were expressed algebraically in terms of
coordinates along the x, y and z axes; and space was envisaged as a great
three-dimensional grid extending to infinity in all directions. By the mid-
nineteenth century, the term was also used to express mass, length and time; dimensional analysis was employed by Helmholtz in his work on conservation of energy. To Einstein in relativity theory from 1905 time
was a dimension like height, length and breadth; he saw us living in a four-
dimensional space-time continuum. Before him, and since, others had
imagined what a world of fewer or more dimensions might be like: Lineland,
Flatland, and then universes with many dimensions in which it might be
possible to restore that explanation in terms of causes which seemed lost
in quantum physics.
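A standard modern illustration of what dimensional analysis can do (a textbook example, not one drawn from the history above): if the period t of a pendulum can depend only on its length l and the acceleration of gravity g, the only combination of these quantities with the dimension of time is

\[
t \;\propto\; \sqrt{\frac{l}{g}}, \qquad \text{since} \quad \Big[\sqrt{l/g}\,\Big] = \left(\frac{\mathrm{L}}{\mathrm{L\,T^{-2}}}\right)^{1/2} = \mathrm{T}.
\]

Reasoning of this kind fixes the form of a law up to a dimensionless constant (here 2π), which must come from experiment or from fuller theory.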

Discovery means uncovering something which had hitherto been
concealed. This finding of things, causes or cures has come to seem the
characteristic feature of the sciences: so that we ask of some eminent
scientist what he or she discovered, although it could be that their work was
the reinterpretation of facts already known, the enunciation of a
conservation principle, or the establishment of a school of research.
But it is certain that in advancing a career in science, a discovery is a great
help to the scientist; and preferably an interesting one, for many
discoveries are relatively humdrum and foreseen within the paradigm
generally accepted.
One case widely discussed is the discovery of oxygen in the last quarter
of the eighteenth century. The Swedish apothecary Scheele heated mercury
calx (mercuric oxide to us) and isolated a sample of an air better at
supporting breathing and combustion than ordinary air. He informed his
patron Bergman of it, but Bergman did not get around to publishing the
news; so it did not become public knowledge and therefore really science.
Meanwhile Joseph Priestley, a Unitarian minister and political radical, had
done the same experiment; and he did publish his results, being a believer
in rapid publication rather than in waiting until research was completed, and
hoping that early and informal writing up would encourage others to enter
the field. Both Scheele and Priestley believed that burning was a matter of
discharging phlogiston; the eminently respirable air they had obtained
would absorb more than ordinary air, and was thus called ‘dephlogisticated
air’. Though calling it air, they came to recognise that it was a distinct
species and not just a modification of ordinary air; indeed that it was a
component of ordinary air, which could no longer be seen as an element.
Lavoisier repeated their experiments, and reinterpreted them: he believed
that combustion was a matter of combination with this part of the air, which
was also responsible for acidity; and that phlogiston was a non-entity. His
work is often taken as the beginning of modern chemistry; and with his
new theory he undertook the task of naming the gas ‘oxygen’. If we had to
decide which of the three was the discoverer, we might be puzzled; for
science is a matter of interpreted data, and those who isolated the substance
misinterpreted it. There is no clear answer, and probably most give the prize
to the candidate from their own nation.
Some discoveries are a matter of prediction; for example the planet
Neptune, where Adams in England and Leverrier in France both worked out
from wobbles in the orbit of Uranus where a new planet must be. That was
one of the many discoveries which have been made separately and
independently by different scientists. This phenomenon is a feature of the
history of science, and gives an impersonal aspect to the sciences; a
discovery seems to be in the air, so that if one person does not make it
another will, and scientific progress becomes inevitable or a matter of
necessity. This perhaps applies to discoveries within a paradigm, rather
than to the revolutionary reinterpretations like Lavoisier’s, which some
might be reluctant to call discovery.
We can collect pure oxygen in a test-tube, and we can send a rocket to
Neptune; they are both material objects. Whether hypothetical entities,
whose existence is postulated or inferred rather than directly observed, are
discovered or invented is open to doubt. One such is the electron. Faraday
in the 1830s found that definite quantities of electricity were associated
with definite quantities of matter in electrolysis; and in a memorial
lecture on Faraday in 1881 Helmholtz remarked that if matter is composed
of atoms, then so must electricity be. Faraday had also been interested in the
passage of electricity through gases; and his work was taken up by William
Crookes, who had made his name with the discovery of thallium. Better
apparatus, in this case vacuum pumps, enabled him to demonstrate the
curious properties of the cathode rays emitted in straight lines, and the
same whatever metal the cathode was made of. He believed that he was
seeing matter behaving oddly, in a fourth state, at such low pressures; but
his work was extended and reinterpreted by J. J. Thomson in 1897, who
saw the rays as a stream of minute particles, carrying a negative charge.
He identified them with the atoms of electricity of Helmholtz; and believed
that they were a fundamental constituent of all matter. This is generally
taken to mark the discovery of the electron.
Lavoisier was wrong in supposing that oxygen was a constituent of all
acids, and Thomson wrong in thinking he had proved that electrons were
particles in the same way that billiard-balls are: for Davy showed that the
acid from common salt, our hydrochloric acid, contains no oxygen, while
Davisson and Germer found that electrons could be diffracted like light
waves. But science is provisional, and subject to falsification, and we
cannot expect that any inferences will fit all subsequent observations; if
all facts are theory-loaded, then all discoveries may require reinterpretation
in time, and we do not need to demote the heroes of the past.
Some discoveries have been made by chance; Priestley said that his work
was in this category, and that of Roentgen also fits it. He was not the first to find that photographic plates in the neighbourhood of cathode-ray tubes became fogged; but he was the first to investigate the effect, to Crookes' chagrin,
and he found a new kind of radiation, the X-rays. It has been well said that
accidental discoveries come only to those well prepared for them; and
whether a discovery is a consequence of a definite prediction, or an
opportunistic following up of an anomaly noticed in the laboratory, it is
never really a matter of chance. The would-be discoverer must keep his wits
about him.
M. Pollock (ed.) (1983) Common Denominators in Art and Science, Aberdeen University Press,
Aberdeen.

Distillation is an important technique for separation and purification in
chemistry. It seems to have been invented by the Arabs in medieval Sicily;
and was first used to produce brandy: a surprising achievement for a
teetotal civilisation. The solution or mixture is heated in an alembic or
retort, and the vapour passes through a condenser and is collected. This
became by the seventeenth century the standard method of analysis; in the
nineteenth it was the basis of Liebig’s teaching laboratory at Giessen
which became a great centre of chemical education. Fractional distillation
involves collecting various distillates which come over at different
temperatures; and distillation under reflux means having a condenser
mounted vertically so that the distillate returns to the retort.

Dogma is a doctrine given such authority that to depart from it is heresy,
putting one outside the believing community and its tradition. It has
generally been agreed that in science dogma has no place, and the motto of
the Royal Society since its foundation in 1662 has been ‘Nullius in Verba’,
which essentially means ‘Rely on no authority’. On the other hand, the
teaching of science has tended to be dogmatic and the role of referees and
editors has encouraged orthodoxy. We may think of the scientist as the
open-minded person in the white coat; but in many cases the innovator has
had a rough ride; those like William Harvey who live to see their ideas
generally accepted are, as he recognised, lucky.
Thomas Kuhn, writing on philosophy of science, drew attention to the
role of dogma in science, and suggested that it had a positive function. He
noted that science textbooks differed in layout and presentation, but very
little in approach or content; whereas those in History or Literature might
be very different. Teachers in these fields would indeed try to present
conflicting views to their students in a way those teaching physics or
chemistry would not. In science, the student might be shielded from the
argument which is a feature of research until after graduation. Some
concerned with education saw this as deplorable, and the Nuffield
Syllabuses in Britain for example are based upon the ‘heuristic’ principle
that the student must make the discoveries of Lavoisier or Helmholtz over
again.
The problem is that life is short, and that the student does not have
indefinite time; and may also not be as clever as the original discoverers. So
any heuristic system has to be guided, for otherwise the learners would have
to learn through error; and from this their teachers, like Dostoyevsky's
Grand Inquisitor, seek to preserve them. Therefore, for Kuhn, students learn
‘normal science’, a paradigm which is carefully passed on from one
generation to another. Their education is rather like the drill given to
recruits in the army; when they have learned to react immediately to the
shouts of the sergeant on the parade-ground, they will probably also do it in
the heat of battle. In the same way, the recruit to science will not make the
mistakes of his fathers if he is well-drilled in the paradigm which brought
them eventual success.
From time to time the paradigm breaks down through accumulated
anomalies, and a scientific revolution occurs; but these are rare, though the
great names we remember are usually those of the revolutionaries. For the
great number of normal scientists, dogma is not perhaps a bad description
of their world-view and approach to science; though one might prefer to
label it a proper scepticism about wild ideas.

Dynamics is the study of motion and the forces responsible for it. It is thus
distinct, as a part of mechanics, from statics, the study of equilibrium; and
from kinematics, the study of motions without reference to forces. Ancient
Greeks had worked in both these last fields, their great triumphs being
Archimedes’ work on the lever and Ptolemy’s system of epicycles which
fitted the orbits of the planets; but they had not got far in dynamics. At
Merton College, Oxford, in the thirteenth century, Heytesbury, Swineshead
and Bradwardine worked out the rule for ‘uniformly nonuniform motion’,
or what we would call uniform acceleration: and in Paris Oresme and
Buridan developed a theory of impetus to explain how bodies (like arrows)
go on moving even when there is nothing pushing or pulling them. These
insights were developed by Galileo into his law of fall, and his idea of
circular inertia.
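In modern notation (a reconstruction, not the medieval language), the Merton rule says that a body uniformly accelerated from speed v₀ to v covers the same distance as one moving uniformly at the mean speed throughout:

\[
s = \tfrac{1}{2}\,(v_0 + v)\,t ,
\]

from which Galileo's law of fall follows for a body dropped from rest, \( s = \tfrac{1}{2} g t^{2} \): the distance grows as the square of the time.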
Aristotle’s theory of the planetary motions was that planets were held on
crystalline spheres centred upon the Earth. There were about four for each
planet, which were connected by crystalline gearing and driven by the
Prime Mover, God or Love, on the outside. This scheme, which was based
on an abstract mathematical model of Eudoxus’, could not account for the
changes in size of Venus and the Moon, best explained in terms of changes
in distance; and so it was replaced by Ptolemy’s ‘big wheel’ system of
epicycles.
The forces which could make planets go in such paths are unimaginable;
and this seems to have been a major feeling in Kepler’s mind when he
adopted Copernicus’ view that the Sun must be the centre, and that planets
must go in simple orbits around it. He showed that they go in an ellipse.
Dynamical thinking lay behind this triumph of the new astronomy; though
the nature of the forces was still unclear until Newton’s work on gravity
later in the seventeenth century.
In the nineteenth century, Ampère’s work appeared to Faraday rather like
Ptolemy’s had to Kepler; his equations fitted the facts of electricity and
magnetism but the forces behind the phenomena seemed unintelligible. With
his idea of the field he brought dynamical explanation into
electromagnetism. Concern with the forces which underlie what we observe
is a necessary spur to good physics and chemistry.

An editor of a journal can be a person of considerable power and
influence. Authors submit their papers, and the editor consults a referee
and perhaps a member of the editorial board which is now characteristic of
most journals. A journal may take a party line or support a particular dogma
or paradigm; where there are different traditions in different countries or
sciences the journal may reflect them, particularly if the editor is partisan.
In the past, ambitious professors like Liebig edited a journal in which their
students could publish their research; and certainly ready access to
publication is very important in science, which is ‘public knowledge’.
Recognising a valuable paper to be extracted from an unpromising draft is
an important part of an editor’s craft, which is a kind of intellectual
midwifery; encouragement from an editor is very valuable for a young
scholar. Some books also have editors; sourcebooks which reprint papers
which have appeared elsewhere, and symposia which contain papers
perhaps read at a conference, or lectures delivered in a series: all these
things call for tact and firmness from their editor, who is most of the time a
harmless drudge, but whose decisions are sometimes high-handed and
always final.

Education in science is a difficult matter. To some people, the developed
sciences seem to show such a clear logical structure that it is best to learn
them as systems of demonstrated truth: naturally mistakes have been made
in the past, but these should not be dwelt upon unless they have value as a
dreadful warning. To others this seems a dauntingly abstract route into
understanding what is a fully human activity, leading to provisional but
highly probable knowledge. They seek to place the student in the place of
whoever made the discovery or framed the theory; and take seriously
Einstein’s idea that science is a free creation of the human mind. The
difficulty here is that one is unlikely to be faced with a class of Einsteins;
and that relativity for example has to be discovered by all of them by the
end of next week because there are other things on the syllabus. Pure
‘heurism’ is impossible; but students can perhaps be given some of the
excitement of discovering for themselves.
Often they are not; and Thomas Kuhn suggested that this is not just a
sign of poor teaching. For him elementary science is a matter of inculcating
dogma, making students think in terms of a paradigm. Although the Royal
Society’s motto means ‘Accept nothing on authority’, students must take a
great deal on trust: they are being drilled, like recruits in the Army. There
are so many ways of going wrong, of focusing upon irrelevant facts, that
the student must be protected from error and directed along the straight
and narrow way. Laboratory work is not a matter of discovery, but of
learning manipulation; the time for originality is way off.
Since most of those learning science will never do original research,
they never get the opportunity to use their judgement; if Kuhn’s picture is
right, then it is very important that those learning sciences also study
Humanities. In the early nineteenth century, when there was very little
formal scientific education, numerous lectures usually accompanied by
demonstrations were given at various social levels. These had to be
palatable in order to attract and keep an audience. Some very eminent
scientists like Faraday turned out to be exceedingly good at this; and
others less famous found that they could make a career by lecturing, or by
writing books of popular science. A few people who went to such
lectures were stimulated, like Faraday himself by hearing Davy, to take up
science; but most wanted intellectual entertainment. As courses in science
at schools and universities were developed over the century,
accompanied by textbooks, so the excitement waned; the stress was on
communicating information, which is necessary for anybody entering a
profession connected with science. Ever since then, the problem of
marrying excitement and information has been with the science teacher,
who may emphasise utility or hypothesis, or may make judicious use of
history.

Efficiency is something now demanded of scientists doing research,
which is probably hopeless for the most creative and innovative parts of
science; but it is less controversially applied to engines. For scientific
purposes, it measures the work done against that possible; but in the
practical world of technology it involves rather less theory.
The first steam engines of the eighteenth century, built on the principles
of Thomas Newcomen, were exceedingly inefficient as seen from a later
perspective; but as they were built on top of coal mines to pump out water, their fuel costs were immaterial; without the engine the mine could not be
worked, and coal at the pit head was cheap. Where engines were used away
from mines, as at York Buildings in London to distribute water, investors in
this new technology could find they were in for a nasty shock.
During the eighteenth century, the engines were improved by adjusting
the relative sizes of parts and so on, and their efficiency, measured by the
bushels of water pumped per hundredweight of the best Newcastle coals
consumed, increased greatly. James Watt’s engines, with a separate
condenser so that the cylinder was not heated and cooled on each stroke,
were so efficient that they were economical enough to be used away from
coalfields. Cornish engineers, remote from coal, used high pressures to
improve efficiency further; locomotive as well as stationary steam engines
were thus possible and economical by the early nineteenth century.
There was a question whether there was a limit to the efficiency of such
engines; or whether one could hope for one which would pump a great lake
dry on one lump of coal. This question was answered by Sadi Carnot in
1824, with his abstract analysis of heat engines. He showed that the range
of temperature over which the engine worked, between the source (the boiler) and the sink (the condenser), set a maximum value on its efficiency; only with a sink at absolute zero could one get 100 per cent efficiency, and even then
one would not be getting something for nothing: a certain amount of
chemical energy, as stored in coal, can only yield a certain amount of
mechanical work, according to definite laws. Thermodynamics, which came
into being a generation after Carnot’s pioneering study as a deductive
science of heat and work, thus opened the way for scientific estimations of
efficiency. And with the turbines of the later nineteenth and twentieth
centuries, thermodynamic efficiency as well as sheer cost is taken into
account.
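Carnot's limit can be stated compactly in the modern notation of thermodynamics (later than his own caloric treatment): with source and sink temperatures measured on the absolute scale,

\[
\eta_{\max} \;=\; 1 - \frac{T_{\mathrm{sink}}}{T_{\mathrm{source}}},
\]

so that 100 per cent efficiency would indeed require a sink at absolute zero.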

Electricity was observed in Antiquity, and our word comes from the Greek
for amber. When rubbed, amber will attract little scraps of paper and such
things; and down to the eighteenth century electricity seemed a weak force,
suitable for parlour-tricks rather than for real work. It was found that glass
can also be made electric by rubbing; but that while two electrified glass
rods, or amber ones, will repel each other, a glass and amber rod will be
attracted, and when they get near enough there will be a spark between
them. People spoke of two opposite electricities, vitreous and resinous. In
the mid-eighteenth century, Musschenbroek in Leiden invented the first
condenser in which electric charge could be stored, the ‘Leiden Jar’ (a
glass vessel coated with foil); a series of them was called an ‘electric
battery’. The King of France had his boredom relieved one day by seeing a
line of grave monks all given an electric shock; and the fringe medicine of
the later eighteenth century involved various electric and magnetic
treatments.
There was still no real theory until Benjamin Franklin in Philadelphia,
then on the outskirts of the ‘civilised’ world, proposed that electricity was a
single weightless fluid. Rubbed glass is plus or positive because it has accumulated more fluid; while rubbed amber is minus or negative because part of its fluid has been removed. Metals, called non-electrics, would conduct
the fluid from one place to another. Franklin’s views were published in a
series of letters from 1747 on, collected together in a book in 1774; and in 1752 he made his reputation in science with his experiment of
collecting electricity from thunder-clouds with a kite: an experiment fatal to
some who tried to repeat it. This demonstrated that electricity was a
powerful agent, responsible for lightning; which could now therefore be
conducted harmlessly away.
Franklin’s theory was not fully accepted in France, where two fluids,
positive and negative, were preferred; when a spark passes through paper,
there are burrs on both sides of it which seem to show motion both ways.
Coulomb showed that electrical attraction follows an inverse-square law
like gravity, thus bringing mathematics into the science. Priestley in
England suggested that electricity was a more fundamental science than
mechanics, and would penetrate beneath the surface of things; and at the
end of the century Schelling in Germany argued that polar forces underlie
matter, and that apparent rest is really dynamic equilibrium.
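Coulomb's inverse-square law, mentioned above, may be stated in modern form (the constant k depending on the system of units chosen):

\[
F \;=\; k\,\frac{q_1 q_2}{r^{2}},
\]

where q₁ and q₂ are the charges and r the distance between them; the formal parallel with Newton's law of gravitation is what brought mathematics into the science.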
Meanwhile Galvani had made frogs’ legs twitch using two different
metals; he believed that organic material was essential but Volta in 1799
generated electricity by what he thought was the mere contact of different
metals in water. His ‘pile’ of metal plates generated the first electric current;
but it was not at first clear that this ‘galvanism’ was the same as
‘Franklinic’ electricity, and final proof only came from Faraday in the
1830s. Davy in 1806 demonstrated that chemical action rather than contact
is necessary, and argued that chemical affinity was electrical; neither he
nor his pupil Faraday believed in an electric fluid, but saw electricity as
force rather than matter. In their day, electricity became a part of
chemistry; it was only with the principle of conservation of energy that
electricity became a part of physics.
In the 1820s, following Oersted’s demonstration of electromagnetism,
Ampère developed equations to account for what was going on; but these, based on Newtonian analogy, proved of no use in predicting new phenomena. Faraday, ignorant and suspicious of mathematics but with a
qualitative notion of ‘lines of force’ spreading through space, made
discoveries which astonished contemporaries and led ultimately to new
technology: and from the 1850s Maxwell developed Faraday's idea mathematically, showing in the 1860s that light waves were electromagnetic disturbances.
Electricity was thus by the late nineteenth century the fundamental
science which Priestley and Schelling had believed it would be; and it was
also being applied, first in the telegraph which transformed
communications and then in lighting and in the preparation of new metals
like aluminium. But the question of the nature of electricity and its relation
to matter was still not fully answered. Faraday had believed that matter was
no more than centres of force; this made him unwilling to believe in atoms.
But in 1897 J. J. Thomson ‘discovered’ the electron, seemingly
demonstrating that cathode rays were a stream of negatively charged
particles. Electricity was thus not a weightless fluid flowing from plus to
minus, but a stream of electrons flowing from minus to plus.
Later work showed that the electron is not a particle like a little billiard-
ball, and that energy and mass are interconvertible. So we are back to a
view closer to that of Faraday. Electrons surround the nucleus of the atom in orbitals governed by quantum numbers, which means that these oppositely-charged entities do not simply rush together and neutralise each other; but how far electrons and some other fundamental particles have mass is unclear. We can see that those who believed that electricity was
matter, a sort of fluid, and force both had some truth on their side; and
were certainly right in suggesting that electricity was a fundamental clue in
understanding the universe.

The electron, as a hypothetical atom of electricity, was named by the
Irish physicist Johnstone Stoney in the nineteenth century; it had in a sense
been observed in every phenomenon of electricity since the world began.
It became a central feature of physics about 1900; there were distinct
routes which led to this, all beginning with Faraday.
In the 1830s, Faraday began his classic work on the passage of electricity
through solutions. He took up experiments begun by Volta in 1799 and extended by Davy, Faraday's patron and teacher, notably in 1806 and the
following years. Davy had suggested that chemical affinity was electrical;
he had shown that an electric current would decompose water into oxygen
and hydrogen and nothing else, and he had then gone on to isolate new
elements including Potassium and Sodium by passing an electric current
through their hydroxides in a state of fusion. He had the idea that a given
quantity of electricity would yield a given quantity of metal: but until Ohm
in the 1820s made clear the difference between potential (voltage) and
intensity (current), and their relationship in his law, it was very difficult to
quantify electricity; and Davy’s mind was qualitative.
Faraday in his laws of electrolysis demonstrated that electricity passed
through a solution will liberate metals in a quantity proportional to their
equivalent or atomic weights; these not being clearly distinguished at this
time. He did not believe in atoms, seeing matter as ultimately force; and so
he did not develop these laws into a theory. Stoney and Helmholtz, after
his death, realised that if matter be atomic, and if definite quantities are
liberated by definite quantities of electricity, then electricity too must be
atomic. It will combine with atoms of matter just as other atoms of matter
do, in definite small ratios. Helmholtz’s view, expressed in a memorial
lecture on Faraday delivered in London, was widely read.
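In modern notation (worked out after Faraday's death), his laws of electrolysis say that a quantity of electricity Q liberates a mass

\[
m \;=\; \frac{Q}{F}\cdot\frac{M}{z},
\]

where M is the molar weight of the substance, z the charge on its ion, and F the constant now named after Faraday, about 96,500 coulombs per mole of unit charges.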
Faraday was also interested in the passage of electricity through gases.
This required low pressures; and in the discharge-tube Faraday saw a dark
space (now named after him) develop as the pressure was reduced and the
light column which had filled it began to break up. Technology was in the
mid-century unable to supply a better air-pump, and Faraday’s research
stopped here; but his great admirer William Crookes took up this work in
the last quarter of the century. With better pumps, he was able to get lower
pressures; he followed the break-up of the column around the positive
electrode, and saw a space (named after him) spread from the negative
electrode, or cathode, until it filled the tube, which glowed with a greenish
colour.
He believed that rays were coming from the cathode; and he shifted his
anode so that it was to one side, and any rays would fall onto the end of the
tube. When this was coated with zinc sulphide, it glowed brightly when the
current was flowing; and when Crookes put a Maltese cross into the tube, it
cast a sharp shadow. This established that the rays were going in straight
lines; and Crookes then put a little windmill in the way, and found that the
rays made it turn. Further ingenious experiments showed that the rays were
deflected by a magnetic field. Crookes did not know what they were, but
borrowed from an early lecture of Faraday’s the idea that they were
revealing a ‘fourth state of matter’, simpler than the gaseous state as that is
simpler than the liquid. The rays were thus composed of entities of the
size of molecules, illuminated by the passage of electricity.
Germans believed, following Hertz’s discovery of radio waves, that
cathode rays must be electromagnetic radiation: waves and not particles.
But J. J. Thomson thought that they were a stream of charged particles, and
set out to prove it by deflecting them with an electric as well as a magnetic
field; with an even better air-pump, he succeeded in demonstrating this. He
compared Crookes to an explorer wandering around in new territory, while
he was a scientist with a definite theory to test; which while not entirely
fair is substantially true. His set-up was particularly ingenious, because he
deflected his rays with a magnetic field and then brought them back to the
zero point with an electric field applied at right-angles. Knowing the
strengths of the fields, he could calculate (without having to worry about
the dimensions of the apparatus) the ratio of mass to charge of the
particles. He found that if they had the same charge as a hydrogen ion, they must be well over a thousand times lighter; and he called them corpuscles.
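Thomson's balancing argument can be reconstructed in modern symbols (not his own notation): when the electric field E and magnetic field B are adjusted so that the two deflections cancel, the electric force equals the magnetic force and the speed of the particles is fixed; the curvature of the path under the magnetic field alone then gives the ratio of charge to mass:

\[
eE = evB \;\Rightarrow\; v = \frac{E}{B}; \qquad r = \frac{mv}{eB} \;\Rightarrow\; \frac{e}{m} = \frac{E}{B^{2}r},
\]

where r is the radius of curvature of the deflected beam.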
By 1900, these had been identified with the electrons, which as well as being atoms of electricity were perceived to be components of all matter. Studies
of radioactivity showed that the β-rays were identical to cathode rays;
and Rutherford in his atomic model invoked a nucleus of positive material
encircled by electrons in orbits. Bohr invoked the quantum theory to give
stability to this model, which given the rules of classical physics would
have been utterly unstable; and succeeded in explaining the lines in
spectra quantitatively as produced by electrons jumping between permitted
orbits.
Then in the 1920s Davisson and Germer showed that when a beam of
electrons falls on a single crystal of a metal, it is diffracted as is light when
it falls on a knife-edge or a grating; the beam of electrons was thus
analogous to a wave. An electron was clearly not a particle in the same way
as a billiard-ball is; it has analogies with both waves and particles. This
makes it difficult to understand what is going on at the sub-atomic level,
though predictions can be made with great success; some physicists are
unworried but others feel that the use of wave and particle models which
between them cover all the phenomena is rather messy and will be
superseded.
I. Falconer (1987) ‘Corpuscles, electrons and cathode rays’, BJHS, 20, 241–76.
R. H. Kargon (1982) The Rise of Robert Millikan, Cornell University Press, Ithaca.

Element means something basic to a discipline, so that Euclid’s Elements
of Geometry was the fundamental treatise; but it has usually meant the
ultimate components of material things. From Antiquity there were
supposed to be four elements: Earth, Air, Fire and Water, associated with
the qualities hot, dry, cold and wet. Thus Earth was cold and dry, Fire hot
and dry, and so on. It was possible to transmute these elements one into
another, in for example growing a tree nourished by water alone; this
yielded wood, which could be burned emitting fire and (dirty) air, and
leaving behind earthy ashes. Ordinary earth and water were mostly
composed of their element. For William Gilbert in 1600, the element Earth
must be magnetic because the planet Earth was a great magnet; we still
sometimes talk of diamonds of the first water, because being transparent
they were supposed to be made of that element. The ‘humours’ which
characterised people were also connected with the elements. Earth and
Water were heavy, and fell to their natural place downwards; while Air and
Fire were light, and thus smoke went upwards. In the sixteenth century,
chemists following Paracelsus posited three ‘principles’, Sulphur, Salt and
Mercury, a mixture of which composed all bodies; these were the bearers of
chemical properties, and their relation to the four elements was uncertain.
Robert Boyle in his Sceptical Chymist of 1661 came out against the three
Principles and the four Elements, in favour of a theory that matter was
composed of corpuscles, small particles which were very like atoms
except that they might in principle be divided by God. Boyle believed that
stable arrangements of these particles (all composed of the same stuff)
made up the metals and other substances which chemists could not analyse
further. In principle, Boyle was prepared to recognise the possibility of
deeper-level rearrangements which might lead to base metals being
transmuted into gold. The problem was that the atoms were hypothetical;
and in his book, Elements of Chemistry (English translation 1790) meant to
provide a new basis for the science, Lavoisier wrote that atomism was a
metaphysical doctrine and that chemists should take as their fundamental
units the ‘Simple Substances’ which persisted through reactions. These
soon came to be called Elements in place of the original four. Their
definition is curiously negative, depending upon their resistance to analysis:
it may be put more positively by saying that in every chemical reaction, an
element yields heavier products. The list depended upon the techniques of
analysis available at a particular time: thus Davy, with the newly-invented electric battery and his theory that chemical affinity depended upon electricity, added various new elements including Potassium, Sodium and Chlorine to the number in the years after 1806.
Davy’s contemporary John Dalton proposed that these elements or simple
substances were composed of atoms. Earlier atomic theories involved
particles all of the same kind, but Dalton had different kinds of atom for
each element. All atoms of an element were however identical, and were to
be characterised by their relative weights. Dalton thus supposed that there
were some thirty irreducibly different building blocks in the world; some of
them very similar, like sodium and potassium. This offended the ideas of
simplicity held by many; and there even seemed to be good chemical
analogies between elements and compound radicals which indicated that
elements might not be ultimately simple: thus the salts of potassium and of
ammonium, NH4, are very similar. William Prout in 1815/16 suggested that
all the elements might be composed of hydrogen atoms, or of something
even smaller, in different arrangements. Despite evidence that the atomic
weights of many elements were not exact multiples of that of hydrogen, this
hypothesis remained tantalisingly attractive to chemists throughout the
nineteenth century, as the number of elements grew towards ninety.
Following the international conference at Karlsruhe in 1860 came
agreement on atomic weights; and various authors, of whom Mendeleev
was the most notable, succeeded in grouping the elements into a system,
called the Periodic Table because similar elements came up regularly and
periodically if they were listed in order of increasing atomic weight. This
was the heyday of Darwinism, and chemists like William Crookes saw an
evolutionary significance behind this classification. All the elements
seemed to be descendants of hydrogen, or of the still-hypothetical helium,
identified only in the spectrum of the Sun. The ‘Rare Earth’ elements like lanthanum, then found only in Sweden, seemed the inorganic analogues of the
marsupials of Australia.
Chemists devoted much time to the exact determination of atomic
weights, which seemed to be constants of nature characteristic of the
elements. But work on radioactive decay by Rutherford and Soddy in the
opening years of our century led to the prediction that lead formed in this
‘sub-atomic chemical change’ would have a different atomic weight from
ordinary lead; and Theodore Richards found it was so. This was the first
observation of an isotope; and the work of H. G. Moseley on X-ray spectra
of the elements confirmed that it was the charge on the nucleus rather than
the atomic weight which characterised elements. Prout was not as far wrong
as he seemed to have been, for elements were composed of protons and
electrons like hydrogen: Lavoisier and Dalton had tried to separate
chemistry from physics with its own theory of matter, but after a century
the two sciences had again converged.
W. H. Brock (1985) From Protyle to Proton: William Prout and the Nature of Matter, Adam Hilger,
Bristol.
D. M. Knight (1967) Atoms and Elements, Hutchinson, London.

An ellipse is the closed curve made by slicing through a cone, or by holding
a pencil in a loop of string passing round two fixed pins, the ‘foci’. Kepler’s
first law in astronomy is that planets in their orbits follow this curve,
with the Sun at one focus. Newton explained this law as the result of
universal gravity but also showed that it was not strictly true; the different
planets attract each other and thus distort each other’s orbits. The existence
of Neptune was inferred in the mid-nineteenth century from the wobbles in
the orbit of Uranus.
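In the coordinate geometry of Descartes, an ellipse centred on the origin with semi-axes a and b (a > b) may be written

\[
\frac{x^{2}}{a^{2}} + \frac{y^{2}}{b^{2}} = 1, \qquad \text{with foci at } (\pm c,\,0), \quad c^{2} = a^{2} - b^{2};
\]

the string-and-pins construction works because the sum of the distances from the two foci to any point on the curve is constant, equal to 2a.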

Energy is capacity for doing work; and down to the middle of the nineteenth century the word was used only in an unscientific way, of people. During the
eighteenth century, weightless fluids were postulated to account for heat
(caloric), chemical activity (phlogiston), electricity and magnetism;
while light was also believed to be a stream of particles. Various
analogies connected these sciences, but there was no real link between
them, until about 1800 when there came a spate of discoveries of
interconnections.
Coulomb showed that electricity followed an inverse-square law like
gravity; Volta that when two different metals were immersed in water and
connected up, electricity was generated; Davy that an electric current is a
powerful agent of chemical analysis; and W. Herschel that beyond the red
end of the solar spectrum there were rays of radiant heat. A little later
Oersted showed that an electric current affects a magnet; and Mary
Shelley’s fictional Dr Frankenstein had a successful research programme
based on the idea that electricity lay behind life.
The German philosopher Schelling in his ‘Naturphilosophie’ saw all
force as one; and the real world as based on equilibrium of polar forces,
rather than on solid lumps of matter. Oersted was influenced by this way
of thinking; but other contemporaries, like Davy, looked back two centuries
to the more down-to-earth philosophy of Francis Bacon, for whom heat
was the effect of matter in motion. In the 1830s Faraday quantified the
relationship between electricity and chemical affinity in his laws of
electrolysis, showing that definite quantities of electricity produce definite
quantities of chemical change; but absolute measurement of quantities of
electricity or chemical affinity was hardly possible at this time. What
could be measured were mechanical work, like that done by the falling of a
weight in a clock, and heat; and Joule in a classic experiment developed in
the early 1840s showed how much mechanical work produced a definite
quantity of heat.
Faraday and others had seen a ‘correlation of forces’, and in Faraday’s
case this idea lay behind his work in electromagnetism and his discovery
that the plane of polarisation of a beam of light is rotated by a magnetic
field; but he, like many chemists, was suspicious of mathematics and
mathematicians, and of any attempt to reduce electricity to mechanics.
Shortly before Joule, Mayer had published a calculation of the mechanical
equivalent of heat; he had noticed, as a ship’s doctor, that a patient bled in
the East Indies had redder blood than in Europe, and concluded that this
was because he did not have to use up oxygen to maintain his body-
temperature in the Tropics. Comparing the specific heat of a gas at constant
pressure (when it expands as it gets hotter, doing work against the
atmosphere) and at constant volume, he computed how much work
produced a quantity of heat; but his paper was disregarded. Joule’s
experiment was much more direct, but he too was ignored at first and was
fortunate to attract the attention of William Thomson, the future Lord
Kelvin, who worked with him, especially on gases.
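Mayer's computation can be put in modern symbols (a later reconstruction, not his own notation): the extra heat a gas absorbs at constant pressure, over that absorbed at constant volume, is spent entirely in the work of expansion against the atmosphere, so that per mole

\[
C_p - C_v = \frac{R}{J},
\]

with the specific heats measured in heat units and the gas constant R in mechanical units; the ratio J is the mechanical equivalent of heat.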
By the 1840s the weightless fluids had been abandoned: light was seen as
a wave motion, heat as motion of particles, and Faraday’s idea of electric
and magnetic fields with lines of force had begun, in the hands of
Thomson, to reach applied mathematicians. Thomson’s friend Helmholtz
put together the ideas of correlation of forces and of quantitative
equivalence. In both learned and popular lectures he urged that a new and
fundamental discovery had been made; that a quantity, energy (though at
first translated into English as ‘force’) was conserved in all changes. Since
mechanical work is expressed in the dimensions of mass times length
squared divided by time squared, then energy whether electrical, chemical
or magnetic must also be expressed in these dimensions. The units of these
sciences must all be expressible in terms of grammes, centimetres and
seconds: and the determination of these values became one of the great
tasks of the scientists of the second half of the nineteenth century, the
epoch of classical physics which took over parts of what had hitherto been
chemistry.
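In the centimetre–gramme–second units of classical physics this requirement reads, in modern dimensional notation,

\[
[E] = \mathrm{M\,L^{2}\,T^{-2}}, \qquad 1\ \mathrm{erg} = 1\ \mathrm{g\,cm^{2}\,s^{-2}},
\]

and the same dimensions must attach to heat, to electrical energy and to chemical energy if all are to be interconvertible.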
The principle of conservation of energy became the First Law of
thermodynamics, the quantitative and deductive science of the
transformations of heat and work which was one of the great triumphs of
the nineteenth century. Although there had been steam engines since the
early eighteenth century, their working had not really been understood; and
eventually this understanding led to much greater efficiency, particularly
in the steam turbines of the later nineteenth century. At the beginning of our
century, with Einstein’s work leading to his famous equation connecting
energy with mass, E = mc², conservation of energy was subsumed into
that of mass-energy; in radioactivity matter is transformed into energy.
The old opposition between those who saw matter or force as the
underlying reality has thus been transcended; though perhaps at the expense
of clarity.

Engine, down to the eighteenth century, meant any kind of machine. People
were fascinated by clockwork, and most elaborate mechanisms were made
first for cathedrals and then for the wealthy and the powerful; and by the
later seventeenth century watches were common. Pendulum clocks,
following the work of Galileo, Hooke and Huygens, were by this time very
accurate; and were used in observatories where time could be measured
with more accuracy than angles. On a larger scale, wind, water and animal
power drove ‘engines’ for grinding corn, fulling wool and pumping water,
as had happened for hundreds of years.
The great change came in the early eighteenth century, when Thomas
Newcomen built the first practicable steam-engine in 1712. This was,
strictly speaking, driven by the atmosphere, rather than by steam: the steam
in the cylinder was condensed by a spray of cold water, giving a partial
vacuum, and in the working stroke atmospheric pressure drove the piston
down. By later standards the engine was very wasteful, but used at a pithead
where coal was dirt cheap it was dependable and kept deep mines dry
enough to work, making the Industrial Revolution possible. Its power was
not at first much more than could be got from water or wind; and the
eighteenth century also saw great improvements to water-wheel design in
France and in England. Engine came to mean something which provides
power to do work.
With James Watt came the first attempts to understand what was going
on in the steam-engine, rather than just to tinker with and improve a good
basic design as other engineers had done. Watt saw how wasteful it was to
heat up and cool down the cylinder at each stroke, and added a separate
condenser; this introduced problems because the piston had to fit better,
leather and cold water no longer being an adequate seal. Here Matthew
Boulton at the Soho works in Birmingham, used to precision work, came in;
and his partnership with Watt brought the steam-engine out of the
coalfields, and first to the tin-mines of Cornwall and then by the end of the
century to the textile-mills around Manchester, and to other industries.
Sadi Carnot, reflecting on France’s defeat in the Napoleonic Wars, put it
down to steam-engines, to Britain’s industrial power and wealth. He
founded thermodynamics, providing an explanation of how steam-engines
worked in the most abstract and mathematical terms. This was no use to
engineers, especially Cornishmen, in the 1820s improving efficiency by
adopting higher pressures, and working towards reliable locomotives; but
as the science of physics came into being in the 1840s with conservation
of energy, so scientists learned important lessons about the
interconversion of heat and work from steam-engines. Technology here
preceded science; but then applied science was involved in the steam-
turbine and in later engine-designs.
D. Landes (1986) The Unbound Prometheus, new ed., Cambridge University Press, Cambridge.

Entropy is a measure of disorder. In closed systems, without energy
coming in from outside, entropy increases with the passage of time: this is
one way of stating the second law of thermodynamics. It may not be
apparent to anyone living with a baby, but living creatures increase order
and therefore decrease entropy; they transform food and oxygen into their
body structures. But this apparent defiance of the second law of thermodynamics
cannot go on for ever; in the end it catches up with us and we die. In the
long run, entropy increases.
In the eighteenth century, Joseph Black in Glasgow wondered why the
snow did not melt as soon as the temperature rose above freezing; he did
some experiments on the melting of ice, and found that a great deal of
heat was required to turn ice into water at the same temperature; almost as
much as then to bring the water to boiling point. Then to turn boiling water
into steam required a great deal more heat still. He called these quantities of
heat that bring about a change of state, ‘latent heats’; and he saw water as
a chemical compound of heat and ice. In warm water, some heat was
combined or latent, and some was mixed with the water; more of the latter
would make the mixture hotter.
Lavoisier regarded heat as a chemical element, and called it caloric;
gases like oxygen were for him unknown bases combined with caloric, and
only when we had solid oxygen would we really know what the element
was like. In his chemistry, caloric was absorbed and given off in reactions
just like oxygen, but it was weightless; with light it appeared as a separate
group in his system of classification.
Then in 1824 Sadi Carnot, who had received at the Ecole Polytechnique
the best scientific training available in the world, published an essay on the
steam-engine; or rather, on the motive power of fire. James Watt had
worked closely with Black, but steam-engines depended little upon recent
science; Watt’s eventual success depended upon Boulton’s high technology
at Birmingham. Carnot looked at engines in a thoroughly abstract way,
taking up his father’s analysis of water-wheels. The higher the source of
water is above the sink where it flows out, the more power can be got from
the wheel: no water is used up in driving the wheel, but some may be
wasted in splashing, and some of the available power is lost in overcoming
friction. Carnot imagined an engine in which caloric flowed from a source
at a high temperature (the boiler) to a sink at a low temperature (the
condenser); and in place of a wheel, acted on a cylinder of gas.
His great innovation was to think of the process as a cycle. A quantity of caloric Q enters the cylinder at the source at temperature T, causing the gas to expand; this stage is ‘isothermal’, the temperature being constant. The gas is then removed from the source and its expansion continued; in this ‘adiabatic’ stage its temperature must fall, since no heat is getting in or out, and it falls to T′, the temperature of the sink. The cylinder is moved to the sink, and Q is transferred to the sink isothermally at T′; the gas is then removed and compressed adiabatically until its temperature has risen to T, when it can be brought back to the source and the cycle repeated. Carnot showed, using the Gas Laws of Boyle and Charles, that work (quantity W) would be got out of such a cycle.
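Translated out of the caloric language into modern thermodynamic terms (a later reconstruction), the perfect cycle takes in heat Q at temperature T, gives up a smaller quantity Q′ at T′, and delivers the difference as work:

\[
\frac{Q}{T} = \frac{Q'}{T'}, \qquad W = Q - Q' = Q\left(1 - \frac{T'}{T}\right),
\]

the equality of Q/T at source and sink being what marks the cycle as reversible, with no gain of entropy.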
Carnot was careful to ensure that his cycle was reversible; if it were run
backwards, then work W would be needed to get the heat from sink to
source, just as it is needed to get water uphill. A refrigerator, taking heat
from cool things and giving it out into a warm room, needs power. Carnot
argued that no engine could be imagined more efficient than his cycle,
given the impossibility of ‘perpetual motion’. There was therefore a limit to
what could be got out of steam-engines as there was for water-wheels; it
depended on the temperature difference, T–T′.
Engineers already knew that using high-pressure boilers, where the water
boils at a higher temperature, led to higher efficiency provided there was
no explosion; and Carnot’s abstract analysis had little appeal to practical
men at the time. The caloric theory was also passing out of fashion as the
older view that heat was the motion of particles made a come-back in a new
guise with measurements of the mechanical equivalent of heat; so Carnot’s
analysis looked out of date to those involved in physics. But it did impress
William Thomson (later Lord Kelvin), who was also working with Joule on
energy and on the expansion and compression of gases. He believed both in
conservation of energy, and in Carnot’s cycle; and yet if heat is actually
converted into work in engines, Carnot’s analysis must be wrong. Rudolf
Clausius was at the same time wrestling with the same problem; and he and
Thomson both perceived that in caloric there were two quantities which
should be separated and redefined: the quantity of heat, or energy, and its
unavailability, or entropy.
In the perfect reversible cycle imagined by Carnot, there is no gain of
entropy; nothing is wasted. In any actual machine, entropy will increase. In
the nineteenth century, increasing entropy was seen as a sign that the
universe was running down, and would come to a ‘heat-death’ in which
everything was tepid and no heat available for work. Although calculations
indicated that it was good for several million years, and the discovery of
radioactivity greatly increased the figure, this idea contributed to fin-de-siècle gloom: instead of design, the modern man of 1900 might see a
purposeless engine gradually coming to a halt.
S. Carnot (1986) Reflections on the Motive Power of Fire, ed. and trans, by R. Fox, Manchester
University Press, Manchester.

An equation is a set of numbers and symbols in which those on the two
sides of the equality sign, =, are equal. It has long been important in
mathematics, where equations set out in much the same notation as we now
use are found from the seventeenth century. Descartes showed that figures
in geometry can be expressed in terms of an equation: thus x + 2y = 5
represents a straight line. His interest was partly in solving mathematical
problems elegantly and rapidly, and partly in establishing that if algebra
could be fully translated into geometry then it would share in the certainty
and prestige associated with that branch of mathematics where proof was
rigorous. He thus made equations respectable. Similar in appearance to
equations are inequalities, where the sign < (less than) or > (greater than)
replaces =.
Equations are now just as important in chemistry, where the number of
atoms on the two sides must balance and the quantities involved in the
reaction are thus indicated. Dalton proposed some formulae and
structures for compounds implying equations; and other authors at the
beginning of the nineteenth century tried to express the course of reactions
using geometry and algebra. But since there was no agreement over atomic
weights, the atomic theory at first brought no certainty; and it was not
until Auguste Laurent’s Chemical Method came out (English version, 1855)
that we find equations which we can recognise. Thereafter their usefulness
became apparent and during the second half of the century they became a
standard part of chemistry.
Chemical reactions are not just equalities; they go one way or another, or
reach equilibrium. An arrow may be used instead of the equality sign to
indicate direction; and reversibility is indicated by a double arrow.
Above the arrow the conditions for the reaction, such as ‘high pressure’,
may be written.
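A standard modern example (not one discussed above) is the reversible synthesis of ammonia, with the conditions noted beside the arrows:

\[
\mathrm{N_2 + 3H_2 \;\rightleftharpoons\; 2NH_3} \qquad (\text{high pressure, catalyst}),
\]

where the two nitrogen and six hydrogen atoms balance on each side.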

Equilibrium is a state of balance, in which we all hope to find ourselves. In
the sciences, we may distinguish static equilibrium as in the lever or
balance from dynamic, in which constant flux produces apparent rest, as in
some reactions in chemistry. Here two processes going in opposite
directions at the same rate give an impression of rest. Some, like Oersted in
the early nineteenth century, have seen static equilibrium as really dynamic,
and force as more fundamental than matter; and certainly in modern
physics the distinction between energy and matter, which seemed so
straightforward a hundred years ago, has become blurred.
The balance of nature which forms the subject-matter of ecology is a
dynamic equilibrium; and because the conditions are always changing,
except perhaps at the bottom of the sea where coelacanths can go on
flourishing, so the position of equilibrium changes too, and some species
become extinct. The gross unbalancing for which mankind is responsible
through pollution, lack of adequate values in science and other
spheres, and just through biological success, might well upset the whole
equilibrium and prove disastrous; though past experience seems to show
that sleepwalking may turn out better than making long-term plans in our
complex and undetermined world.
When a chemical equilibrium is disturbed, Le Chatelier’s principle tells us
that it changes so as to counteract the changing conditions; prediction of
the outcome of changes is thus possible. The possible changes, or degrees
of freedom, in an equilibrium involving several phases are governed by the
Phase Rule. The idea of equilibria in the physical sciences is closely
connected with that of reversibility.

Error is a part of all human life. Some scientists, like William Hyde
Wollaston in the early nineteenth century, have taken enormous pains to
avoid it; he was known in his circle as the Pope because he was believed
infallible. But he was over-cautious, and his work in chemistry was less
important than it would have been if he had been bolder in conjecture. He
and his contemporaries realised that no experiment was exact, and that all
measurement involved error; but they felt that there was little that could be
done about it, except to improve one’s skills in manipulation and to use
tried and accepted methods. Thomas Thomson and Berzelius in the late
1820s had a long argument about the accuracy of determinations of the
relative weights of atoms, in connection with Prout’s hypothesis, that they
were exact multiples of that of hydrogen. Thomson selected those results
which confirmed this hypothesis as his best, which to Berzelius seemed
prejudiced and unscientific: the quarrel only subsided when other analysts
confirmed Berzelius’ results, and accepted his principle that one starts from
experimental results rather than from a hypothesis which is to be tested.
This principle is not of universal application: in astronomy in the 1840s, for example, the planet Neptune was detected following calculations from the wobbles in the orbit of Uranus, a prediction made from theory rather than from unprejudiced data.
Astronomers at the end of the eighteenth century had been troubled by
the inevitable inaccuracies in observing phenomena, such as the transit of
Venus across the Sun. How was the true result to be known from a mass of
erroneous ones? Gauss plotted observations, and found that they lay on a
curve which we now call a Gaussian distribution, but which was then called
the error curve. From it he could fix upon the true result, rather than just
taking a mean value. Producing truth out of error in this quantitative way
was one of the great achievements of the nineteenth century; we now take it
for granted in statistics, which has been extended far beyond this
concern with individual observations and plays a crucial role in quantum
physics and in the social sciences.
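A minimal sketch of this procedure in Python (anachronistic, and with invented numbers): under the Gaussian assumption the best estimate of a directly measured quantity is the arithmetic mean of the observations, with an uncertainty that shrinks as they accumulate; Gauss’s method of least squares generalises this to indirect observations.

    import numpy as np

    rng = np.random.default_rng(0)
    true_value = 9.6                  # a hypothetical quantity being observed
    readings = true_value + rng.normal(0.0, 0.3, size=50)  # fifty error-laden observations

    estimate = readings.mean()        # best estimate under the Gaussian error curve
    uncertainty = readings.std(ddof=1) / np.sqrt(readings.size)
    print(f"estimate = {estimate:.3f} +/- {uncertainty:.3f}")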
Errors which are more alarming to those who hope for infallible science
are theories which have met falsification, involving entities such as
caloric, ether and phlogiston. These stories are good cautionary tales for
anybody seriously tempted by scientism, for all these things commanded
the assent of experts. Science is a human activity, and it is provisional;
neither implicit faith nor complete scepticism is an appropriate attitude to it.
A forthcoming issue of Philosophia is to be devoted to Error.

Ether is a member of a group of compounds having the general formula R–O–R′, where R and R′ are alkyl radicals: in the ether used as an anaesthetic, they are both ethyl, C₂H₅. This compound is made from
alcohol and sulphuric acid, and was called ‘sulphuric ether’ in the mid-
nineteenth century: the mechanism of the reaction in which it is formed was
one of the first to be elucidated by A. W. Williamson, who showed that the
acid acts as a dehydrating agent.
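In modern notation the overall reaction, with the sulphuric acid serving only to remove water, may be written:

\[
2\,\mathrm{C_2H_5OH} \;\xrightarrow{\mathrm{H_2SO_4}}\; \mathrm{(C_2H_5)_2O} + \mathrm{H_2O}.
\]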
In physics, ether is a medium for the transmission of light waves, or for
forces such as gravity. Newton showed in 1687 that ethers cannot explain
Kepler’s laws of the orbits of planets, and had to fall back on action at a
distance. But the success of the wave theory of light in the nineteenth
century made everybody believe in ether even when Michelson and Morley
found themselves unable to detect the Earth’s movement through it: they
concluded that the Earth must carry the ether with it. Einstein’s relativity
theory made ether unnecessary; and it is absent from modern science.
G. Cantor and J. Hodge (eds) (1981) Conceptions of Ether, Cambridge University Press, Cambridge.

Evolution means an unfolding, and down to 1859 when Darwin’s Origin of Species was published, it meant change according to a plan, such as the
unfolding of a bud into a flower, or the growth of an embryo from a single
cell. Darwin therefore avoided the word in his book, using ‘development’
instead, because he saw natural selection, law and chance, rather than any
plan as lying behind the appearance of new species. He was also impressed
by the way natural selection could lead to the loss of higher characters as
well as to progress: the barnacles, for example, are a kind of degenerated
shrimp, which in middle life lose the capacity to swim about; some males
also have become mere parasites on the females. These changes fitted the
species for a new niche in which they have prospered, even though it looks
to us like going down in the world.
Most of Darwin’s contemporaries did believe in progress, seeing natural
selection as a progressive force; and were happy to use the term evolution
for the process he described. By the 1870s when Darwin’s theory had
become the paradigm for biologists, evolutionary explanation also began to
appeal seriously to astronomers and to chemists. Even in the pre-Darwinian
popular evolutionary text Vestiges of the Natural History of Creation of
1844, anonymous but by Robert Chambers of Edinburgh, the world’s
history was described as beginning with a ‘fire mist’ of hot gas, from which
suns and planets had formed as it swirled in great whirlpools. This idea was
developed from the speculative ‘nebular hypothesis’ of Laplace, the great
astronomer of the early nineteenth century, who sought to bring order and
determinism into astronomy; it acquired some apparent plausibility when
in the 1840s the great reflecting telescope of Lord Rosse in Ireland revealed
the existence of spiral nebulae.
Inorganic evolution became more plausible when from the 1860s the
chemical elements were classified in the Periodic Table; and this array of
families looked rather like those of animals and plants which Darwin had
accounted for. It seemed possible that all the elements with which
chemistry had to deal were in some sense descendants of hydrogen, the
lightest element; or perhaps of helium, the new element which Norman
Lockyer had identified from lines in the spectrum of the sun but which had
not yet been isolated on earth. Lockyer himself was fascinated by such
ideas, and tried to work out the life history of a typical star.
His contemporary, William Crookes, also delighted in evolutionary
speculations, suggesting that the lanthanide metals, apparently found only in Sweden and all very like each other, were the inorganic equivalent of the
mammals of Australia. He even suggested a kind of mechanism involving
swings of a cosmic pendulum which produced the elements. Twentieth-
century theories of the atom do indeed involve inorganic evolution, of
elements and also of stars and galaxies, following the Big Bang; but the
question of whether there was a plan, or whether law and chance are
combined as in natural selection, is still a matter of debate.
M. Ruse (1986) Taking Darwin Seriously, Basil Blackwell, Oxford.

Experiment was in the seventeenth century distinguished gradually from experience. The word means systematic testing, in which most factors are
held constant so that definite answers can be got: Bacon called it ‘putting
Nature to the question’, meaning cross-examination or even torture.
Some scientists, like Galileo, usually work from a hypothesis or theory,
and use experiment to test whether they are on the right lines. Thus Galileo
rolled things down an inclined plane to test his law of fall; he found that
they did indeed take twice as long to go four times as far. He claimed
complete accuracy for this experiment, which cannot be true because
rolling bodies do not exactly follow the law, as his contemporary Mersenne
found: the agreement was good enough for one who believed that
mathematics was the language of nature, and that minor divergences from
simple laws were not worth bothering about too much. This might now
seem close to fraud; but before statistics people often picked out
confirming instances and ignored others.
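In modern notation (anachronistic for Galileo, who reasoned with ratios), the law of fall makes distance grow as the square of the time, so that a body does indeed take twice as long to go four times as far:

\[
s = \tfrac{1}{2}gt^2, \qquad \frac{s_2}{s_1} = 4 \;\Rightarrow\; \frac{t_2}{t_1} = 2.
\]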
Others, like Priestley or Faraday, think with their hands, and develop
their ideas in the course of their experiments, which are thus a dialogue
with nature: one can see this in Priestley’s Experiments on Air, and in
Faraday’s Diary. Much chemistry has been done this way; and the first
presentation of uncooked experimental data is supposed to be in Boyle’s
work on the spring of air in the early 1660s.
‘Crucial experiments’ for Bacon were those which indicated which way
to go; but the term has come to mean those which led to confirmation or
falsification of a theory. The problem is that they were not always seen
as such when first done; and that no experiment can entail or overthrow a
theory because ad hoc adaptations can, sometimes rightly, be made.
D. Gooding and F. James (eds) (1985) Faraday Rediscovered, Macmillan, London. R. Harré (1983)
Great Scientific Experiments, Phaidon, Oxford.

Explanation is one of the aims of science, at any rate in the popular mind.
At the beginning of the seventeenth century, Galileo set out a programme of
explaining secondary qualities (such as colours, tastes and smells) in terms
of particles of matter and their motion. The problems were clearly
expressed by William Thomson (Lord Kelvin) in the late nineteenth
century. The spring of gases is very simple; they all obey Boyle’s law, so
that the product of pressure and volume is constant (at moderate pressures
and not-too-low temperatures). In the mid-nineteenth century, the behaviour
of gases was explained by the kinetic theory of Clerk Maxwell and Rudolf
Clausius, involving elastic molecules in constant collision with each other
and the walls of their container. This led to some surprising predictions;
for example that the viscosity of a gas was independent of the pressure.
This was confirmed by experiment: the theory therefore was generally
accepted. But while the elasticity of gases is very simple, because all follow
a simple law, that of solids is very complicated. For atoms in the nineteenth
century it seemed even worse than complicated, because the bouncing of a
rubber ball was explained in terms of its parts being squeezed together and
separating under repulsive forces; while atoms, having no parts, could not
be elastic. Since the distinction between atoms and molecules was not
clearly made, and some gases do in fact have molecules composed of a
single atom, this was a real difficulty.
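In modern symbols (a summary supplied here): Boyle’s law, and the elementary kinetic-theory expression for viscosity, whose independence of pressure follows because the mean free path varies inversely with the density:

\[
pV = \text{const. (fixed temperature)}; \qquad \eta \approx \tfrac{1}{3}\,\rho\,\bar{v}\,\lambda, \quad \lambda \propto \frac{1}{\rho}, \quad \text{so } \eta \text{ does not depend on } p.
\]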
Thomson was worried at the prospect of explaining something simple
and well-understood in terms of something complicated; and was thus
unhappy about the kinetic theory. He strove with the idea that atoms were
whirlpools in the ether, or were mere point centres of force; but these were
not altogether easy to understand either. He retained the idea that the
scientist ought to explain; but the new physics of quantum theory and
relativity which was coming in at the end of his life and the beginning of
our century made this more problematic. His younger contemporary Ernst
Mach had argued that the task of science was simply economy of thought.
The search for truth or explanation was hopeless; and theories were to be
judged as more or less convenient ways of grouping facts. For Mach, no
experiment ever entailed a theory. The shape of the Earth, flattened at the
Poles, had been explained since the early eighteenth century as an effect of
its rotation; and thus as evidence for the Copernican system of astronomy.
For Mach, it could equally for all we knew be the result of curious forces
set up by the stars in their daily revolution around the Earth. We adopt the
view of Copernicus rather than of Aristotle and Ptolemy because it is
simpler; but the discovery of new facts might tilt the balance back again.
Mach’s agnostic attitude to explanation is not unlike that of Osiander, the
publisher of Copernicus’ great book of 1543, who in the anonymous
preface urged that theories should be judged only by their convenience, and
not by their plausibility. This was also the line which the Inquisitors tried to
get Galileo to take. It is now generally called instrumentalism, because a
theory is seen as an instrument; like any piece of apparatus, it is judged
by whether it works, and given up when someone invents a better one. The
sciences, on this view, are not concerned with explanation, but with
classification and prediction.
The idea of elastic molecules is not then an explanation, but a model. We
can imagine a kind of three-dimensional game of billiards with perfectly-
elastic balls, and we can handle the mathematics involved in predicting the
outcome of collisions when the numbers of balls are enormous. We can then
even make allowances for the size of the balls, and slight attractive forces
between them, and make the behaviour of our model fit even better the
behaviour of real gases, which do not exactly obey the gas laws. But there
will be features of the model which are irrelevant, at any rate so far; in
billiards the colours of the balls are important, but not in the kinetic theory.
Even after years of usefulness, the model may well fail and be replaced
with another showing more powerful analogies with the phenomena.
Mach’s view was welcome to those struggling to make sense of the new
world of quanta. Light seemed to have been explained as a wave motion in
the ether; and yet in the photoelectric effect it seemed to be a stream of
particles or photons. Similarly, the cathode rays which J. J. Thomson in
1897 had explained as a stream of corpuscles or electrons were a quarter
of a century later found to undergo diffraction like waves. According to the
‘Copenhagen interpretation’ of Niels Bohr, one should not worry about this;
the mistake was to assume that electrons were either waves or particles. We
cannot know, and do not need to, what they are like in themselves; it is
enough for us to know what equations to use to make predictions in any
particular case. This did not satisfy Einstein, who wanted to understand and
explain; but in quantum physics this is still an elusive goal, and the
proposed explanations seem to involve accounting for the relatively
straightforward in terms of the complex or the incomprehensible.
Instrumentalism does not seem to be enough to satisfy many scientists;
and the popular view has advantages over the more sophisticated. Despite
the risks involved in seeking explanations, the restless and speculative
intellect will not long be satisfied with less.

Fact, which to us means something given, comes from a Latin word
meaning something made. While to common sense it seems that facts and theories are quite distinct, and perhaps that science is a quantity of authenticated facts, to the more sophisticated the line is hard to draw. What
we observe is a phenomenon; but what we are interested in when doing
science is how it fits into a pattern or order. We need therefore to get
behind the phenomena and we find that a fact is more than a phenomenon;
it has to be significant. To the baby, we are told, the world is a blooming, buzzing confusion; as we get older we learn to separate messages from
background noise, and this is what happens in the sciences too.
To know what is significant we have to have a theory or a hypothesis.
Thus to Galileo tidal observations going back to the Venerable Bede were
not significant; he had worked out (in tideless Venice) that the Earth’s
rotation would produce a tide every twenty-four hours, and refused to
accept the archaic and occult idea that the Moon might draw the sea across
thousands of miles of empty space. He was wrong; and so may other
scientists be who refuse to take seriously evidence (of flying saucers,
perhaps) which does not fit in with their world-view or paradigm. But
unless we have some way of deciding what is relevant we cannot do
science; and changes in science produce changes in what is seen as a fact.
We cannot just be wide-eyed and open-minded.
Scepticism like Galileo’s led eminent men of science in the eighteenth
century to deny that meteoric stones really fell out of the sky, or that sea-
shells could be embedded high in mountains; but these errors were in the
end put right, while the same sceptical spirit had banished astrology and
other alleged bodies of facts. In the nineteenth century, it was a fact that
fever was the consequence of spending nights ashore in tropical harbours;
to us, the fact is that the relevant diseases are carried by mosquitoes. Facts
are not immune to correction, which happens when different aspects are
seen as significant.
Those who emphasise the place of facts in science usually see
explanation as a matter of bringing facts under a law rather than of
seeking a theory. The law connects observables rather than getting behind
the surface. But scientists are generally very suspicious of ‘purely-empirical
generalisations’; and demand of a law that it has some analogy with others
and some explanatory function. It seems as though what we require in a
science is both authenticated phenomena and appropriate ideas; what makes
most scientists reject parapsychology, for example, is that it does not seem
to have the latter. There is a good deal of dogma in science; but if there were
not, it would be confusion.

Falsification is the process of testing which is crucial in Karl Popper’s view of science. Induction can never lead to certainty, because after a
thousand white swans the next one might turn out to be black. So science
must proceed by conjectures and refutations: consequences of a hypothesis
are worked out, and tested. A negative instance shows the conjecture to be
false; and science consists of propositions which could have been falsified,
but have so far not been. It thus has a provisional character. While this fits
much science, Copernicus for example disregarded the apparent
falsification of his theory, the absence of stellar parallax; the genius can
perhaps break rules! Some aspects of science are perhaps better explained
in terms of Thomas Kuhn’s paradigms.
K. R. Popper (1959) The Logic of Scientific Discovery, Hutchinson, London.

Fathers in the sciences, as in other activities, can be very important, because the sciences can be strongly hereditary occupations, as the Bernoullis,
Herschels, Becquerels, Huxleys and Wollastons show. But when the young
Gay-Lussac at the beginning of the nineteenth century showed great
promise at the Ecole Polytechnique, his teacher Berthollet said that he
would be his ‘Father in Science’. Berthollet duly worked for his protégé’s promotion and career, which was very successful: but the relationship perhaps showed the problems of excessive deference, in that Gay-Lussac was reluctant to assert the
elementary nature of chlorine against the belief of Berthollet that it was an
oxide. Some ‘sons’ in science as elsewhere, rebel against their fathers; as
Faraday did against Davy, asserting his independence in the early 1820s:
but even here the style of one’s mentor, and the paradigms learned during
one’s earliest research, cannot easily be escaped from. Most scientists are
proud of their ancestry; and particularly in chemistry can sometimes trace
it back through several generations, perhaps back to Gay-Lussac. The
German professor of the nineteenth century reckoned to look after his
numerous sons in science, finding them posts and getting their research
published in journals; and the system still goes on.
M. P. Crosland (1978) Gay-Lussac, Cambridge University Press, Cambridge.

A field is a piece of ground, perhaps where a battle has been fought; and it
has come to mean a piece of intellectual territory: one of the sciences, or a
small part of one as the process of specialisation has gone on. But in
science, and especially in physics, it has come to have a technical
meaning, as an area within the range of some agent or force: this usage is
found in Faraday’s Experimental Researches in Electricity (paragraph 2252)
in 1845, which may be its earliest use in print, though he uses it casually
and without strict definition.
In studying electricity and magnetism, and speculating about light
and gravity, Faraday was becoming convinced that the medium in which
he saw lines of force everywhere was crucial, and that atoms were simply
centres of force. He believed that the Newtonian picture of bodies acting on
one another at a distance across void space was wrong; and that fields of
force gave the right idea. To the astonishment of contemporaries he
succeeded in affecting a beam of polarised light by passing it through a
magnetic field; its plane of polarisation was rotated. Other experiments on
magnetic and electrical induction could also be explained using the idea of
fields.
In the hands of Clerk Maxwell, Faraday’s intuitions were given
mathematical form, and field theory became one of the great pillars of late
nineteenth-century physics, and indeed of science since then. Electrical,
magnetic and gravitational fields form part of the ordinary vocabulary of
scientists; though laymen may continue to see electricity as a kind of
juice, or fluid, in terms which Faraday made obsolete.

A fluid may be a liquid or a gas; it merely means something which can flow, and therefore contrasts with the solid state. In the eighteenth century,
heat and electricity were supposed to be a kind of fluid; and so was
phlogiston. ‘Aeriform’ or ‘elastic fluid’ was the term sometimes used to
distinguish the gaseous state; in modern chemistry, ‘fluid’ would
generally mean liquid.

Force is something of which we have an intuitive sense whenever we push or pull. Newton’s second law states that force is the product of mass and the acceleration to which it is subjected. To the surprise of most of those
learning elementary physics, work is only done when a force moves through
a distance; just holding up a weight does not involve work in the scientific
sense. The word ‘force’ has a rather loose sense, with anthropomorphic
overtones; and when it became fully quantified in physics, it was renamed
energy.
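In modern symbols (a gloss supplied here): the second law, and the definition of work for a force acting through a distance s along its own direction:

\[
F = ma, \qquad W = Fs;
\]

if the force moves through no distance, no work is done, however heavy the weight held up.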
Following Leibniz, d’Alembert in the mid-eighteenth century used the
idea of ‘vis viva’, living force, to express the product of mass and the
velocity squared, mv²; which represents twice what we would call ‘kinetic
energy’ and is conserved in some interactions, whereas Newton’s
momentum, mv, is conserved in others. At the same period, the surgeon and
physiologist John Hunter postulated a vital force, which for instance
stopped us digesting our own insides during life, though this can happen
after death. Other kinds of forces were believed to be responsible for
electricity and magnetism; in this case, polar forces, having two opposite
forms.
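In modern terms (supplied here for comparison):

\[
\text{vis viva} = mv^2 = 2 \times \tfrac{1}{2}mv^2 = 2E_k, \qquad \text{momentum} = mv.
\]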
The philosopher Schelling believed that all forces were ultimately one;
and like Priestley that forces were more fundamental than matter. In the
opening years of the nineteenth century all kinds of connections between
forces were shown: electricity and chemical affinity, by Galvani, Volta
and Davy, for example, and heat and light by William Herschel. In 1820
Oersted made the discovery that a discontinuous electric current will
generate a magnetic field. Faraday entered this field, and in due course
came up with the idea that all forces are ‘correlated’. This was close to the
idea of conservation, but not quite the same because it lacked the
quantitative basis that Joule and Mayer brought to it with their
measurements of the mechanical equivalent of heat. The new idea of
energy also contained the more abstract notion that not merely could all the
forces be interconverted, like different currencies, but also that there was
something more abstract underlying them all, like money. These things
were first clearly expressed by Helmholtz.
T. S. Kuhn (1977) The Essential Tension, Chicago University Press, Chicago, pp. 66–104.

A formula in physics or in chemistry is a set of symbols which represents an event or a structure. Empirical formulae are those derived solely from
the data of analysis, while rational ones are an attempt to indicate the real
arrangement of atoms. Formulae may be set out so as to give different
degrees of structural information; and they may express a family of
compounds if R, symbolising a radical, is used rather than any particular
atomic symbols.
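An example (supplied here, not from the text): the formulae of acetic acid at increasing levels of structural information, and the family formula of the fatty acids:

\[
\mathrm{CH_2O}\;(\text{empirical}), \quad \mathrm{C_2H_4O_2}\;(\text{molecular}), \quad \mathrm{CH_3COOH}\;(\text{rational}), \quad \mathrm{R{-}COOH}\;(\text{family}).
\]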

Fraud is a problem in the sciences, which depend upon public knowledge and reliability; and indeed one of the arguments for scientific education is
that it is a training in accuracy and honesty. Certain frauds, such as
Piltdown Man which purported to be the remains of an early human, are
well known; this one was pretty generally accepted for nearly half a
century, and the credulity of eminent Darwinians of our century can be
contrasted with the scepticism about alleged human remains shown by their
nineteenth-century predecessors whose theory was different. Here the
problem of authority, unavoidable in science, entered in; eminent persons
had given their blessing to the error.
In more recent years, chemists of the 1960s and 1970s were drawn into a
wild-goose chase after ‘Polywater’, an alleged form of ordinary water
which had curious properties. There was some hope that it might be useful
as a weapon, and money was therefore available for research on it; grant-
giving bodies supported work on polywater, journals and their referees
were uncritical, and it was some years before the bubble burst. Here there
was some fraud, but probably a great deal more wishful thinking and
opportunism; and examples such as these do not really show that science
has become untrustworthy. But they do remind us that it is a human activity,
and not a royal road to truth; personalities, pre-judgements and paradigms,
and careerism, are features of it as of everything else we do, and vigilance
is the only hope. We can take comfort that in both these cases truth did turn
out to be mighty, and prevailed, so that scientific method was in the end
vindicated.
Charles Babbage, in his book on the Decline of Science of 1830,
described different categories of fraud. He distinguished the cook, who selects those results which best fit his case, from the trimmer, who merely adjusts his figures to tidy them; either can even be done unawares. In Babbage’s day the great example of
this was in chemistry. Thomas Thomson of Glasgow set his students to do
analyses and determine the relative weights of the atoms of the elements.
He was a convert to the view of Prout that all elements were polymers of
hydrogen, and that their atomic weights must be exact multiples of
hydrogen’s. This guided him in the selection of what were the best analyses,
which duly then confirmed his hypothesis. He was denounced by
Berzelius in Sweden, who claimed to do analyses without preconceptions;
and whose accuracy was in fact much greater. Thomson claimed that
science required theory; but other analysts confirmed Berzelius’ results.
This kind of controversy is much more common than actual charges of
complete fraud; and it is not immediately apparent where right lies.
Falsification in science is often a complicated business, made much
more difficult by fraud; but fraud is only possible because most scientists
are trustworthy.
F. Franks (1981) Polywater, MIT Press, Cambridge, Mass.
D. M. Knight (ed.) (1970) Classical Scientific Papers, Chemistry: 2nd Series, Mills & Boon,
London, reprints Thomson’s and Berzelius’ papers.
A. Kohn (1986) False Prophets, Basil Blackwell, Oxford.

A gas is a substance in a state in which it fills its container: when cooled
and compressed it will generally turn first into a liquid and then into a
solid. Gases which can be liquefied by pressure alone, like carbon dioxide
at room temperature, are strictly called vapours. All gases follow the same
laws with respect to heat and pressure, unlike solids and liquids; the
gaseous is therefore the simplest state. Different gases diffuse through each
other and thus mix gradually. Since the mid-nineteenth century, the kinetic
theory of gases has become generally accepted: this sees them as
composed of molecules in rapid motion, constantly colliding with each
other and the walls of the container, and makes quantitative predictions
which have been verified. Equal volumes of all gases under the same
conditions contain equal numbers of molecules.
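These regularities are summed up in the modern ideal-gas law (a gloss supplied here), from which the principle about equal volumes follows at once, since n depends only on p, V and T:

\[
pV = nRT.
\]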

Geology formed the part of Natural History devoted to the mineral kingdom. About 1800 it began to emerge as a distinct science, as biology was separated from it by Lamarck and Treviranus, and as it began to develop
theory. In Napoleonic Paris, Cuvier reconstructed the fossil mammals found
in the quarries of Montmartre, bringing to life a past fauna; and soon
afterwards in England the first dinosaurs were discovered by Buckland and
by Mantell. It became possible to give relative dates to strata, from the
fossils they contained, and to form an ideal ‘geological column’ of every
stratum in sequence; this can never be found in any one real location.
Recent strata contained fossils very similar to living forms, and as one went
deeper they became more and more different. Geologists competed to
identify and name ‘systems’ of strata, such as the Cambrian and the
Devonian rocks, tending to believe that there were real discontinuities
between such systems, where some great catastrophe had destroyed life. In
the strata were the organic remains of former worlds.
It became impossible to believe that all this history could have been
fitted in to the six thousand years which seemed to be required by a literal
reading of Genesis; and even by the 1820s geology had led to a new
conception of time as Copernican astronomy had expanded ideas of space.
Then from 1830 Charles Lyell proposed that past changes should be
explained exclusively in terms of causes acting at the present day; believing
that his predecessors had been prodigal of violence because they had been
parsimonious of time. He invoked millions of years; and his work greatly
impressed the young Darwin. Because Lyell insisted upon the recent
creation of mankind, there being then no human fossils known, he aroused
no great antagonism and was chosen as Professor at the Church-supported
King’s College, London. Geology was prominent in the British
Association, and with the Geological Survey became a profession.
Darwin’s evolutionary theory depended on slow changes; but soon after
1859 when he published it William Thomson (Lord Kelvin) thought of a
way of determining ages absolutely. He computed the age of the Sun,
assuming it to be composed of the best coal and also fuelled by meteorites
and by collapse under gravity; and the age of the Earth, assuming it to be
gradually cooling down from a fluid state. Both calculations gave fifty to
one hundred million years; much too short for geologists, who did their best
to ignore this intervention from thermodynamics.
Kelvin had not known of radioactivity, which leads to different
assumptions and confirmed the long periods of Darwinism; and his mistake
led to geologists taking no notice of physics for many years. Only in the
1950s did evidence from magnetism and other physical measurements
become really important, with the sudden general acceptance of Continental
Drift and the theory of Plate Tectonics; making Geophysics vital to
understanding the rocks.
A. Hallam (1973) A Revolution in the Earth Sciences, Oxford University Press, Oxford.
M. J. S. Rudwick (1985) The Great Devonian Controversy, Chicago University Press, Chicago.

Geometry is that part of mathematics which deals with lines, planes and
volumes. It was put into the form of a deductive system by Euclid in the
third century BC, but had existed in a less organised form for a long time.
The ancient Greeks believed they had got it from the Egyptians, who had
developed it either because they needed to measure their lands (which is
what ‘geometry’ means) after the annual floods of the Nile, as Herodotus
suggested; or because their priests were a leisure class who could go in for
abstract thought, as Aristotle supposed. Geometry has always had these two
aspects: it is a necessary part of the training for the profession of architect,
surveyor or engineer; and it is a system of pure deduction, in which the
theorems can be worked out from the axioms without any kind of
experiment, and in which the aim is rigorous proof. In Antiquity, other
branches of mathematics lacked this rigour; but Descartes in 1637
published his demonstration that geometry and algebra can be completely
translated one into the other. Thereafter, the economy of algebraic proof has
been generally preferred; but Newton in his Principia (1687) used
geometrical methods throughout.
Euclid’s fifth postulate was, in effect, that through a point outside a given line only one parallel to it can be drawn; later geometers showed that this could not be proved from the other axioms, though it is perhaps not self-evident in the way that they are: for example, if
a>b, and b>c, then a>c. To Kant working on his philosophy at the end of
the eighteenth century it seemed that geometry was ‘synthetic a priori’,
because its propositions fitted the real world when tested, and yet could be
worked out by pure thought. But even in Kant’s time, alternative geometries
to Euclid’s were being worked out by Gauss and others; using a different
fifth postulate they got consistent but different systems; in which for
instance there are more or less than 180 degrees in a triangle.
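A concrete modern illustration (not from the text): on a sphere of radius R, where the ‘straight lines’ are great circles, the angles of a triangle of area A exceed two right angles by the spherical excess:

\[
\alpha + \beta + \gamma = \pi + \frac{A}{R^2}.
\]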
Nobody seriously supposed in the nineteenth century that a non-
Euclidean geometry might fit the world; and Euclid’s had the longest run of
any textbook in history. But at the beginning of the twentieth century, Henri Poincaré, as part of his idea that convention was the basis of science,
suggested that space might be non-Euclidean – but that because of the way
our minds work we would go on using Euclidean geometry to describe it.
Einstein at just the same time proposed seriously that space was non-
Euclidean: that light did not follow a Euclidean straight line in the
neighbourhood of massive gravitating bodies. In order to keep a simple
physics in which light went in straight lines and at constant speed, he
proposed to use a different geometry. The link between measuring the world
and inventing a deductive system had been broken, and Kant’s category of
the synthetic a priori was no longer necessary. To find which geometry to
use we have to do empirical tests, and make judgements about simplicity.

God is the creator of the world, imposing order and design upon it, in a
tradition coming both from Platonic philosophy and from Judaeo–
Christian religion. Man being in God’s image could hope to understand His
work; and this impetus lay behind the Scientific Revolution of the
seventeenth century, along with the hope of the improvement of man’s
estate through technology. If all events are merely the result of chance and
contingency, then there is little point in spending one’s life looking for
laws; so the existence of God gave a guarantee that science was worthwhile.
Conversely, the evidence for design seemed proof of God’s existence: until
in the eighteenth century Hume argued forcibly that it fell far short of that;
and Laplace later found that in astronomy he ‘had no need of that
hypothesis’. Developed science no longer needs the underpinning required
in its early days: but many scientists do feel that an explanation of the
order of things is needed, and invoke God as the First Cause; though this
God is still some way from the Father worshipped by Christians, who works
through providence and miracles, and imposes moral order.

Government has, with the passage of time, become the most important
patron of science. This has transformed the nature of science, as it has
ceased to be something like a harmless hobby and has become a profession,
with close links to industry and the military. In these fields secrecy is all
important, and the old ideal of open communication, at international
conferences and in journals, has been somewhat modified; science is no
longer ‘public knowledge’. But because science of even the most pure and
recondite kind is expected to have applications in technology, governments
have also been prepared to support it; and without such patronage the
progress in science in the last century or so would have been impossible.
Archimedes is supposed to have constructed secret weapons to defend
Syracuse against the besieging Roman army; but it was about 1600 that the
idea that knowledge is power began to gain ground, notably in the writings
of Francis Bacon. In the succeeding century, various academies were
founded with various degrees of royal support; but scientific research was
still fairly cheap for the most part. The great exception was astronomy
where telescopes and a team of observers were required; and state-
supported observatories were set up, first in Paris and in Greenwich. The
hope here was that they would produce data enabling the longitude to be
found at sea; navigation would then become a science, and long-distance
trade would increase. By the 1760s this was achieved, and voyages like
those of Bougainville and Cook were planned at government expense.
Astronomical observations were made for mapping, and materials valuable
in natural history and ethnography were collected for museums, in what
became a series of expeditions, of which that of HMS Beagle was one.
Pendulums were also taken to high latitudes for gravitational measurements,
and the Earth’s magnetism was investigated.
Darwin learned much of his science on the voyage, for there was then still
very little formal education in the sciences. In Paris, the Ecole
Polytechnique was founded shortly after the Revolution of 1789; here at
government expense research and teaching were carried on together. In the
German universities of the nineteenth century a similar pattern emerged;
and this was followed in Britain and the USA in the second half of the
century, despite the worries of exponents of ‘laissez-faire’. In Britain, grants
had been made available for research through the Royal Society and the
British Association; and in the 1880s came the first regular grants to
British universities, making a career in science possible. While the role of
the wealthy patron or foundation cannot be forgotten, and some have been
extremely good at picking winners, the bulk of finance for science, and
indeed for other disciplines in higher education, has to come from
governments; which oscillate between viewing such expenditure as costs or
as investment.

Gravity meant weight, but since the work of Newton it has meant that
power by which all particles of matter attract each other, with a force
proportional to the product of their masses divided by the square of their
distance apart. Throughout human history, it had been known that
unsupported bodies fall to the ground; but Galileo, in his working towards
the notion of inertia, propounded the law that falling is uniform
acceleration. He probably never did any experiments from the Tower of
Pisa, but argued that in the absence of air-resistance everything would fall at
the same speed. This was later verified by Boyle with his air-pump.
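In modern notation (a gloss supplied here) Newton’s law reads:

\[
F = G\,\frac{m_1 m_2}{r^2},
\]

where G is the universal gravitational constant.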
For Aristotle, the cause of falling had been that heavy bodies sought their
natural place, the centre of the Earth, which formed a compact sphere at the
centre of the Universe. For Galileo, this was no explanation; and because
the Earth was moving about the Sun, it was not merely verbal but wrong.
But he had no better explanation to offer; and for the motion of planets, he
fell back on the almost Aristotelian idea that circular motion required no
force to maintain it. Against this, Descartes argued that bodies left to
themselves go in straight lines or remain at rest.
Galileo thought it occult, reminiscent of astrology, to suggest that the
distant Moon, for example, could affect the sea and cause tides. But for
Newton, if the Earth did not attract the Moon, it would have gone off in a
straight line; and in 1665 he worked out the law of gravity, which he
formally published after a great deal more work in his Principia of 1687.
He showed that the orbits of the planets are as predicted in his theory,
from which Galileo’s law of falling also follows. He was perplexed about
the cause of gravity; Descartes had proposed whirlpools of ether to carry
the planets, but Newton showed that this would not work. Nevertheless he
was very reluctant to suppose that gravity was inherent in matter, and toyed
with ethers himself. He disliked the idea that matter could act where it is
not, across void space; and his work upset those devoted to mechanical
explanation, for whom the world was a big clock in which all the wheels
and springs should be revealed by science.
A spinning Earth would, according to Newton, become flattened at the
poles; and a French expedition under Maupertuis to Lapland showed in the
1730s that indeed it was. On another French voyage, to Peru, la Condamine
found that the Andes pulled his plumb-line out of true; showing that gravity
was not simply a pull towards the centre of the Earth, but really attraction of
particles. At the end of the eighteenth century, Cavendish, using a torsion
balance, demonstrated this in the laboratory; while William Herschel
showed that double stars move about each other under gravity, taking the
law outside the solar system; in the nineteenth century it was used to predict
the planet Neptune because of the wobbles it produced in the orbit of
Uranus.
Still gravity could not be explained; and unlike light its propagation
seemed instantaneous. Einstein interpreted it in terms of the curvature of
space; modern physicists search for gravitons; perhaps we just have to
accept it.
D. Gjertsen (1986) The Newton Handbook, Routledge & Kegan Paul, London.

Hardness is measured according to what will scratch the surface being
investigated; this provides a relative measurement. But atoms were
supposed to be absolutely hard, so as never to wear away or break in pieces,
as Newton put it. Such particles could not bounce, because that would
involve parts coming together and separating; so elasticity and hardness
were seen as opposite qualities, and it was not until well into the nineteenth
century that a coherent kinetic theory of gases appeared.

Harmony is the playing of musical notes originally in succession but then together so that they sound agreeable. Pythagoras found that simple ratios
underlay them, and music down to the eighteenth century was studied in
universities as a branch of mathematics. ‘Harmony’ is used in a general
way to describe the order of things; and from Antiquity was used
specifically with reference to the planets. In their orbits they were
supposed to make music too fine for the mortal ear to hear; probably in
reference to the idea that simple mathematical ratios, like those behind
chords, must be the key to understanding them. This Pythagorean idea
fascinated Kepler in his search from the 1590s for the laws of planetary
orbits; and even when he had proved that they move in ellipses at definite
speeds, he continued to seek for, and even to publish in Harmonices Mundi,
1619, the music of the spheres.

Heat is something we feel when we touch a poker that has been in the fire,
for example. To work out its cause was one of the earliest tasks of science;
and both Galileo and Bacon, in the early seventeenth century, identified
motion as the cause of heat. But they meant something different by this
phrase.
Galileo revived the old distinction made by Greek atomists between
primary qualities, which reside in things, and secondary qualities, which
depend on an observer. Heat was a secondary quality, produced by the
motions of minute particles which formed a kind of fluid, a substance.
Thus along with other atoms or corpuscles, there were particles of heat. A
hundred and fifty years on, this theory was made more definite by
Lavoisier, who named the fluid caloric. This was for him a kind of
chemical element, which could form definite compounds: ice and caloric
thus form water. All liquids and gases are thus compounds of caloric;
which can also be mixed with them, so that hot water contains more than
cold.
Bacon, who investigated ‘hot’ spicy dishes as well as pokers, saw heat as
the motion of the particles of which bodies are made up, and not as another
thing. In the 1790s, his theory seemed old-fashioned; but Benjamin
Thompson (Count Rumford) observed that indefinite quantities of heat
could be produced by friction. This would be very odd if it were a
substance, but not if it is simply motion of particles. Carnot arrived at the
second law of thermodynamics in the 1820s using the caloric theory; but by
the 1840s many scientists were coming to see heat as a form of energy
rather than as an element. In Rumford’s experiment, mechanical work was
being transformed into heat; and James Joule made a clockwork
arrangement in which water was heated by churning it with paddles driven
by the falling of a weight. He could then compute just how much work was
equivalent to a definite quantity of heat; and on the strength of this, he,
Helmholtz, and others announced the conservation of energy.
Heat derived from chemical energy, notably when anything burned, first
attracted attention, leading to the notions of phlogiston, and of caloric with
which Lavoisier had replaced it. He had also demonstrated, in his work on
oxygen, that the ‘vital heat’ of life is akin to combustion. In the early
nineteenth century, heat became the concern of those working in what was
coming to be called physics; who distinguished the conduction, convection
and radiation of heat. With Joule and Helmholtz the study of heat ceased to
be a distinct science and became a branch of physics, a science of which
energy was the central idea.

A hierarchy is an arrangement in order of people or objects. The term may be used to describe the Bishops, Priests and Deacons of a church; and by
extension, the various levels of those who make science their profession,
with members of academies at the top, through university professors, to
ordinary practitioners and teachers. In a hierarchy, the various levels ought
to be distinct: thus in a church, a distinct public ordination accompanies
each jump in status. In science, this is not the case, and the term can only
function as a metaphor; but in view of the way science can be a kind of
religion, and the scientific community supports a paradigm, it can be
illuminating.
More seriously, it has a use in discussions of classification, a process
carried on throughout the sciences; the view that there is a ‘natural history
stage’ through which sciences all pass in a kind of adolescence cannot stand
up, in days when physicists are concerned with classifying fundamental
particles. It is one thing simply to name and to place things; but it is more
to place them in an unambiguous order, with clear limits. Such a natural
system cannot in all cases be achieved, and it seems that often we just have
to use artificial methods of classifying, chosen for their convenience rather
than their truth to nature. In a well-established system, such as the
Periodic Table of chemical elements, there is something hierarchical: the
elements fall into various groups which are on different logical levels. Thus
sodium is a metal, in the alkali metals family, and in the second horizontal
series.
When we classify things we have made such as sciences it is easier to
impose an order. Most people at any time have some idea of a hierarchy of
sciences; but different sciences have been perceived as the most
fundamental by different observers at different times. In the seventeenth
century, it was mechanics; about 1800, chemistry, the science of forces;
then biology, as evolution seemed the crucial idea; and then physics.
Philosophers such as Hegel and Whewell worked out an order in which the
various sciences all had their distinct appropriate fundamental idea: this
made any attempt at reduction futile, because the facts of chemistry could
not for example be fully expressed in the language of physics, which was
on a different level. The problem is that discoveries have often been made
on the frontiers of sciences which ought to be so far apart in the hierarchy
that they had no common frontier. We feel intuitively that there must be a
hierarchy, that full explanation in biology or psychology cannot be given
in terms of physics; but actually to draw one up is a great problem.
R.-P. Horstmann and M. J. Petry (eds) (1986) Hegel’s Philosophie der Natur, Klett-Cotta, Stuttgart.

History is what this whole book is about; for science has a very long one,
in which it has changed character enormously. Its subject matter, apparatus
and institutions have evolved into something very different from what
they were in Antiquity, or indeed in the time of Galileo. Historians of
science used to be mostly scientists, active or retired; but now, far behind
science, the history of science has become a kind of profession. Instead of
just looking at the ancestry of ideas now current, historians began to flirt
with philosophy of science, and to write case-studies of interesting
episodes demonstrating induction or deduction, paradigms or
falsification. Because we live in an untidy world, these rarely fitted the
ideal very well unless ‘rationally reconstructed’; and the serious historian
draws the line at that, because it is the particularity of the past which makes
it enthralling.
Historians of science therefore deserted philosophers, and turned to
social historians; looking at the life and work of ordinary as well as great
scientists, and at scientific academies and associations, metropolitan and
provincial. Scientists rather than science became their province; and models
from sociology, particularly that of ‘marginal men’ seemed illuminating. It
became possible to write history of science with the science left out; for
science was, after all, an activity not completely different from other ways
of spending time, like the law or shopkeeping. Hierarchy and status
became very important notions.
But unfortunately none of these flirtations aroused much excitement
among practitioners of other disciplines. Historians of science rather than
scientists seemed to be the marginal men; and unfortunately became less
ready to risk the wide ranging synthesis as they concentrated on
manuscript material. But for scientists, philosophers, social historians and
sociologists, the history of science is a fascinating territory, with many
paths leading into it; the search for the context of past science, which then
formed the context of more recent science, is a fascinating and never-ending
quest, and the sources are abundant and varied.
D. M. Knight (1975) Sources for the History of Science, Cambridge University Press, Cambridge.
H. Kragh (1987) An Introduction to the Historiography of Science, Cambridge University Press,
Cambridge.

Homology in biology means underlying resemblance: it is applied for example to the way that all mammals have the same bone structure though
they differ enormously in form. In chemistry, radicals are described as
forming ‘homologous series’ of compounds, in which for example a methyl
group, CH₃, may be replaced by ethyl, C₂H₅, or propyl, C₃H₇, in hydrocarbons, alcohols, ethers or fatty acids.
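The members of such a series differ step by step by CH₂; the alkyl radicals, for instance, follow the general formula (in modern notation):

\[
\mathrm{C}_n\mathrm{H}_{2n+1}: \quad \mathrm{CH_3}\;(n=1), \quad \mathrm{C_2H_5}\;(n=2), \quad \mathrm{C_3H_7}\;(n=3), \;\ldots
\]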

Hypothesis used to mean something like data. In his Principia, 1687, Newton used the term to include Kepler’s laws of planetary motion; and
Captain Clerke, who took over after Captain Cook’s death in Hawaii in
1779, referred to the expedition’s observations as a hypothesis for
astronomers to work upon. In geometry, the term meant the assumptions
made at the outset of a proof. But by 1700 the term had come usually to
imply guesswork. In later editions of Principia, Newton changed his
heading from ‘Hypotheses’ to ‘Rules of Reasoning’ and ‘Phenomena’; and
added the famous remark, ‘Hypotheses non fingo’, I feign no hypotheses.
Newton had in mind attempts at the explanation of gravity, involving
hypothetical ethers; and argued that his laws of mechanics, and his
interpretation of his experiments in optics, were sufficient for the purpose
of science. He believed that his conclusions were ‘deduced from the
phenomena’ and therefore had a certainty which was absent from any
science which began with hypothetical entities. But in the ‘Queries’ which
he added to later editions of his Opticks Newton himself toyed with a
theory of matter involving atoms; he had to recognise a place for such
hypotheses in some parts of science, hoping no doubt that eventually such
scaffolding might be taken down.
Bacon at the beginning of the seventeenth century had proposed a
cautious method for the sciences based upon induction; whereas
Descartes’ bolder method involved hypotheses and deductions from them,
following the model of geometry. Generalisation from a mass of facts
cannot lead to powerful theories like that of Newton, or to very general
principles like conservation of energy; and in chemistry in the nineteenth
century attempts to work out the structure of molecules from the data of
analysis seemed hopeless. Laurent suggested that chemists ought to work
from a hypothesis, deducing consequences which could then be tested by
experiment. This was what Kekulé did in suggesting that benzene and other
aromatic compounds had a ring structure; and his prediction that there
would be three compounds of the formula C₆H₄X₂ was verified. Thus a
hypothetico-deductive approach to science became accepted as a proper
one, and even perhaps as the fundamental scientific method; for example in
Popper’s philosophy of science, based on attempted falsification of
hypotheses.
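To make Kekulé’s prediction concrete (a modern gloss): the three isomers of C₆H₄X₂ allowed by the ring are the 1,2- (ortho), 1,3- (meta) and 1,4- (para) disubstituted benzenes; and exactly three were indeed found.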
For Popper, no logic of discovery is possible; how a scientist comes
up with a hypothesis is the concern only of a biographer or psychologist.
What makes the hypothesis scientific is that it is testable, so that it could be
falsified. If it were thus proved wrong, then it would be dropped; but the
sciences are a mass of hypotheses which have so far passed the tests to
which they have been subjected. Popper reminds us that there is no final
proof in the sciences, and that scientific knowledge is provisional; there is
no royal road to truth.
For most of us, the process of discovery is one of the most interesting and
exciting parts of science and its history. Anecdotes, about Galileo and the
Tower of Pisa or Newton and the apple, cling around the great names,
though their reliability is not much greater than the stories of miracles
surrounding the lives of early saints; but we want to understand what made
great scientists tick. While scientific discovery cannot be reduced to an
automatic process of following rules, we feel that there ought to be some
kind of logic about it. And Newton, with his Rules of Reasoning, seems to
have thought this way too; though he was perhaps unhelpful in saying that
he made his great discoveries by thinking hard and long about them.
Scientific hypotheses are generally based upon analogy, the recognition
of similarity. Thus Davy confronted with a curious violet substance from
seaweed saw its analogies with the green gas which he had just established
as an element, chlorine; and duly named it iodine. He could engage in a
race with the great French chemist Gay-Lussac although he was an alien in
Napoleon’s Paris armed only with a travelling laboratory, because he did
not need to perform all the systematic experiments necessary to somebody
who had not recognised the analogies.
Some hypotheses involve more complex analogies, of more or less
completeness, involving a physical or ideal model. Thus the dynamical or
kinetic theory of gases depended upon the analogy of particles and
billiard-balls or other elastic spheres. Deductions made from this model,
such as Maxwell’s that the viscosity of a gas is independent of its pressure,
were confirmed; and a series of other independent successes gave rise to
confidence that the analogy must be real.
The logic of analogy, and of the formation of hypotheses, is not tight; and
no definite rules can be laid down for it. But this does not mean that there is
no logic at all; and the paradigm accepted among scientists can guide
towards helpful analogies. The success of hypotheses which unified masses
of facts into a coherent structure leads to them being called theories; and
has meant that in the twentieth century we have become much more tolerant
of hypotheses than Newton was. But what we do demand of them is that
they are testable; hypotheses which are not can be described as ‘ad hoc’,
invented to save an awkward situation, or as metaphysical, and scientists
may use such things but would hate to admit it.

Illustration is an important part of science, because a good picture can be
worth a thousand words. There are also some points which can be better
made in visual language than in ordinary languages like English or German;
in this respect illustration can be a bit like mathematics, which is the
language of much of physics.
The most obvious role for illustrations is to show what things look like,
and here scientific illustration is part of topographical art. A museum or an
academy may be shown just as a cathedral or a palace might be; and indeed
we can learn something about the public image of science from its
buildings, which may be splendid, functional or ramshackle. Gothic,
classical and modernist styles have been chosen for housing the sciences
to make claims of wisdom, power and up-to-dateness. But on the whole it is
the inside of laboratories and observatories which is probably more
interesting to most of us. We can understand much more about experiments
of the past if we can see how and where they were done; and a picture of a
laboratory is much more informative than an inventory. Given a series of
illustrations of actual or ideal laboratories over a period of time, we can see
how demands and potentialities changed; and the same is true of lecture
theatres, which were an important feature of public science. Textbooks are
a good source of such pictures.
Apparatus is something much better depicted than described, and here
again we can see changes over time. The trade catalogue is especially valuable
because it shows us how pieces of apparatus became standard equipment,
rather than something assembled in the heat of research. It also tells us
about prices, and thus enables us to make sense of budgets from the past.
Some scientists, like Faraday, were scornful of much
ready-made apparatus; Faraday's Chemical Manipulation of 1827 tells us and
shows us how to make do and mend, constructing elaborate systems for
fractional distillation out of a piece of glass tubing for example. His
book, like other chemistry books of the time, was illustrated with little
pictures in the text: some of these are representational, suggesting the three
dimensions of the apparatus (some contemporaries show hands holding
glassware), while others are conventional, two-dimensional and
diagrammatic. Gradually through the nineteenth century, illustrations in the
physical sciences became less representational and more schematic.
In Faraday’s book, the illustrations were wood-engravings. This
technique, where the picture is engraved on the end-grain of boxwood, had
been perfected by Thomas Bewick around 1800, for his famous pictures of
birds, and vignettes of rural Northumberland. What is to print black stands
up from the block, which can therefore be printed with metal type – the
boxwood being very durable. This provided a cheap way of reproducing
small pictures, the box being a small tree; and one finds that some blocks
were either copied or directly passed on to be used in a succession of books.
For bigger illustrations, wood blocks could be bolted together and this was
done in journalism; but in science copper plates were generally used down
to about 1830. They were engraved, or etched with acid; and then inked,
wiped and printed from, on to damp paper under pressure. This was
expensive, and could not be combined at all easily with type; so the
illustrations all appeared separately, often at the back of the book or paper,
rather than in the text. Lavoisier’s Elements of Chemistry of 1790 has
handsome illustrations of apparatus done on copper, but less convenient
than Faraday’s.
In the early years of the nineteenth century, there came a new technique:
lithography, invented by Alois Senefelder in Bavaria. A drawing is made on a
stone with a wax crayon; the stone is then wetted, and inked with an oil-
based ink; this adheres only to the wax, being repelled by the water, and a
print can therefore be taken from the stone. This was much cheaper than
copper-plate, about one-third the price; and once the process had been
developed to give detail as fine as was required, it came to replace
engraving for scientific purposes. In geology this step was particularly
important because it meant that from the 1820s books and papers could be
much more profusely illustrated; and points about the succession of strata
could be much more clearly made. In natural history, the accurate depiction
of animals, plants and fossils was essential and was done by many artists of
great skill like the Bauer brothers and Edward Lear, who managed to
combine scientific and aesthetic values in producing works of art which
pass the test of time. Their successors still achieve this.
In technology illustration is essential, from the handsome picture of an
engine to go in the boardroom to the working drawings from which ships
or machinery were and are constructed. In the nineteenth century, these
became much more detailed, and the scope for the craftsman was
correspondingly reduced. The drawing office became the hub of the works;
and the drawings became also more schematic, with conventions of colour
to indicate materials for example. Descriptive geometry, invented in France
at the end of the eighteenth century, became the key to technical drawing,
unambiguously showing structure.
Crystals belong to natural history and to chemistry, and during the
nineteenth century pictures of them gave way to diagrams which showed
their facets more clearly. Dalton had suggested some arrangements of atoms
about 1807, and chemists struggling to arrive at molecular structure needed
diagrams. This was especially true of those trying to account for isomers
and isomorphs, and then from the 1860s with the rise of structural organic
chemistry. The ring structures proposed by Kekulé for benzene and other
aromatic compounds are impossible to follow without a picture; and in our
century the same is true of the resonance energy of compounds whose
formulae can be shown in more than one way. Illustration is still with us.
K. Baynes and F. Pugh (1981) The Art of the Engineer, Lund Humphries, Guildford.
A. Ellenius (ed.) (1985) 'The natural sciences and the arts: An international symposium', Acta Universitatis Upsaliensis, 22.
S. Forgan (1986) ‘Context, image and function: A preliminary enquiry into the architecture of
scientific societies’, BJHS, 19, 89–113.
M. J. S. Rudwick (1976) ‘A visual language for geology’, History of Science, 14, 149–95.

Induction is the method of generalising from particular cases. It can be
used rigorously in mathematics; but it was proposed by Francis Bacon in
the early seventeenth century as the way forward in the empirical sciences.
He saw deduction as simply unfolding conclusions implicit in the axioms,
rather than leading to anything new; and as essentially dependent on
authority. Science was to be founded not upon dogma or hypotheses but
upon facts. One problem is logical: we can never be sure that we have got
a fair sample of facts in what is after all an infinite series. Induction can
never lead to absolute certainty. J. S. Mill in the nineteenth century tried to
draw up rules for inductive inferences which would separate causal
connections from merely accidental ones, but without the assumption that
we live in a world of law we cannot be confident in any generalisation.
A more serious question for the scientist, who will probably be content
with very high probability rather than certainty, is whether the method of
Bacon and Mill is ever used, or ever could be. There seems to be a logical
leap in going from instances to laws; the accumulation of data will never
lead of itself to real science, where facts are organised. Millions of apples
had been seen to fall before Newton, but their behaviour did not entail his
universal law of gravity. Some kind of intuition seems necessary for such
a discovery: we can move logically down the hierarchy of scientific laws
and theories to facts, but not up it.
If we form a generalisation, like Boyle’s law of the elasticity of gases,
then a further instance of it is not much help in its confirmation; but a
negative instance, a falsification, would seem much more powerful. One
failure discredits a law, where a thousand successes are no real proof of it.
Karl Popper makes science depend for its progress not on induction but on
conjecture which is testable; laws which have survived many attempts to
falsify them become generally (but provisionally) accepted, while those
falsified are dropped. In fact Boyle’s law is often falsified, but is useful in
many ordinary cases and in theoretical treatments of ‘ideal gases’; scientists
do not behave as logicians wish they would.
There do seem to be cases where a simple inductive procedure is used in
the sciences, especially in complex fields like public health where there is a
lot of data but little theory, and probably a multiplicity of causes. Scientists
seek by experiment to move on from this stage towards one where
powerful theories, remote from brute facts, lead to detailed predictions.
Bacon had got part of the story, but different sciences at different times
require different methods.
The word induction is also used in electricity and magnetism, where a
charge or pole brought near will induce electricity or magnetism in a
suitable body; a phenomenon investigated in the 1830s by Joseph Henry and
by Faraday.
E. J. Lowe (1987) 'What is the "problem of induction"?', Philosophy, 62, 325–40.
J. R. Milton (1987) 'Induction before Hume', BJPS, 38, 49–74.

Inertia is the resistance bodies put up to any force changing their velocity;
it is the characteristic of matter, expressed in terms of what we call
Newton’s first law of motion: ‘All bodies continue in a state of rest or
uniform motion unless a force acts on them’. This is not very plausible, and
its adoption was one of those triumphs of science over common sense: we
see everything slow down and come to a halt unless a force (provided by
the engine of a car or train, for example) keeps it going. In the physics of
Aristotle and his school all motion required a cause; and yet even in
Antiquity there were cases of motion that continued without a mover –
arrows and thrown stones, for instance, and objects falling to the ground.
Falling bodies were seen as moving, with increasing rapidity depending
on their weight, towards the centre of the Earth, the ‘natural place’ of all
heavy things; while light fiery sparks fly upward. In the sixth century AD,
John Philoponus in Alexandria dropped weights from a tower and found
that their speed was not proportional to their weight; so the exact law of
falling remained a mystery. Even more perplexing was the problem of the
motion of projectiles; but in the thirteenth and fourteenth centuries, Jean
Buridan and Nicole Oresme in Paris worked out a theory of 'impetus' to
explain it.
The bowstring or the sling imparted impetus to the missile, which was
gradually used up in its flight; when it was all gone, the projectile fell
straight to the ground. Similarly, a falling body gained impetus so that it
went faster and faster. This theory prevailed down to the early seventeenth
century, when Galileo was making sense of the motions of planets
following Copernicus’ view that they circle the Sun. If the centre of the
Earth was not the centre of the universe, and was moving, then there
seemed no good reason why it should be the natural place of all heavy
bodies; Galileo did not concern himself with the cause of falling, but
suggested that the law was that falling bodies were uniformly accelerated,
following an equation worked out three hundred years before by
mathematicians at Merton College, Oxford. This is a classic case of a
scientist avoiding a ‘why’ question and tackling a ‘how’ question.
For the motion of planets, Galileo had to give some explanation; he
believed that in the absence of friction, a ball bowled would go rolling right
around the Earth, and that similarly the Moon rolled around us, and we
around the Sun. This circular motion was thus inertial, requiring no force;
which pleased him, because he could not believe in forces acting at a
distance across void spaces. Descartes in his Principles of Philosophy
modified this idea, musing on the unchangeability of God; and concluded
that inertial motion must be straight-line rather than circular. Given this
principle as the basis of his physics, Newton had to account for the closed
orbits of the planets, which are ellipses and not circles; and came up
with his theory of gravity.

Institutions in science are organisations of all kinds in which science is
carried on, forwarded or popularised. If we think of science simply as the
advance of the understanding of nature, then the institutions may not seem
as interesting as the scientists who were associated with them: but in
recent years historians have become more conscious of science as a social
activity, and have therefore looked harder at how it was supported, its
status, and the way ideas gained currency. This approach can lead to
history of science with the science left out, because scientific institutions
have much in common with such bodies as clubs, churches or political
groupings: which reminds us that science is one human activity among
others, with objectives which overlap those of other activities.
The earliest institutions dedicated to science were learned societies;
academies, associations or invisible colleges which promoted
research and its dissemination, and might publish a journal. Such bodies
might have close relations with government, and often had a library, a
museum and a laboratory for the use of their members. They might also
have a theatre in which lectures would be given: this was the most notable
feature of the Royal Institution in London, for example, where Davy and
Faraday worked in the early nineteenth century. Some such institutions
might be essentially devoted to education, giving courses of lectures to
mechanics; while others were for those qualified to perform or to
understand research.
As science became something like a profession in the nineteenth
century, so there came the need for professional societies. In medicine there
had been the Royal Colleges of Physicians and of Surgeons since the
sixteenth century, and equivalent bodies for licensing practitioners in other
countries. In the eighteenth century, the engineers developed professional
institutions, with access generally through apprenticeship; the professional
body sets standards and also fees, and works to keep the unqualified out of
its territory. Such interests are rather different from those of learned
societies; and among chemists in Britain, for example, we find the learned
and the professional institutions separating in the later nineteenth century,
and coming together again only after a hundred years.
If we are to understand science at a time and place, it is essential to look
at its institutions as well as its theories; and some indeed would see these as
socially determined.
M. Hunter and P. B. Wood (1986) 'Towards Solomon's House', History of Science, 24, 49–108.
I. Inkster and J. Morrell (eds) (1983) Metropolis and Province, Hutchinson, London.

An instrument is a piece of apparatus probably used to make a
measurement. In the seventeenth century the telescope and the microscope
opened up new worlds to the scientific investigator; and by the nineteenth
century, sciences such as electricity depended upon instruments.
Instrumentalism is the idea that theories in science are a kind of
instrument or tool; to be judged not according to their supposed
approximation to truth, but simply for their utility. This extreme
empiricism makes explanation illusory, and economy of thought the main
aim of sciences: with prediction a major bonus. Such modesty is
sometimes in order, but most scientists seem to hope for more.

The first international conference was probably that called to set up the
metric system; but since it was held in France during the wars which
followed the Revolution of 1789 only delegates from allied or conquered
foreign countries came. Since the early nineteenth century, international
conferences have become an important part of scientific life: reading
published papers is no substitute for meeting other scientists engaged in
the same kind of work. Some famous meetings have been called to settle a
difficulty, like that at Karlsruhe in 1860 which was to agree upon atomic
weights in chemistry (and failed to do so); but others are held regularly or
occasionally to review progress and exchange ideas, often under the
auspices of national associations or academies. Frequent conferences
have meant that the distinct national styles or traditions in science which
were characteristic of earlier days have become much less prominent;
scientific education and practice have become international, and a subset
of English their language.

The first invisible college was a group to which Boyle belonged in the
1650s; they did not all meet together regularly, but kept in touch by
correspondence and visits. They formed a nucleus for the Royal Society,
which became visible when it got its charter in 1662, as the first enduring
academy of sciences. But even after the foundation of academies and
associations, those closely involved in research kept up informally with
others in the same field, forming correspondence networks; which might
sometimes turn into visible groups, like the Society of Arcueil in
Napoleonic France. Such groups (often international) are still very
important in science, because the number of people engaged with a
particular problem is often quite small, and the circulation of offprints
and preprints, correspondence, visits and occasional meetings is fruitful.
D. Crane (1972) Invisible Colleges, Chicago University Press, Chicago.
M. P. Crosland (1967) The Society of Arcueil, Heinemann, London.

Isomers are compounds which contain the same components in the same
proportions, but differ in properties. They may contain the same number of
atoms in different arrangements, like urea and ammonium cyanate
(CO(NH₂)₂ and NH₄CNO), or different numbers of atoms, like benzene and
ethyne (acetylene), C₆H₆ and C₂H₂. The term was coined by
Berzelius from the Greek in the 1820s.

Isomorphism is the phenomenon of different chemical compounds having
the same crystal structure: examples are potassium nitrate and calcium
carbonate, many potassium and ammonium salts, and the alums which often
feature in crystal-growing sets. Such similarity of form was, ever since its
discovery by Mitscherlich in the 1820s, supposed to be related to
similarity in structure; but in fact it was not until the 1860s that agreement
of formulae could be attained, following the international conference
at Karlsruhe.

Isotopes are atoms of the same element which differ in weight. It was
supposed by Dalton that all atoms of the same element are identical, and
during the nineteenth century a great deal of work was put into the
determination of relative atomic weights, which seemed important constants
of nature. Mendeleev’s Periodic Table relied heavily upon this data for
the ordering of elements.
Some chemists tried to make sense of these numbers: Prout in 1815/16
suggested that relative to hydrogen, they would all be whole numbers,
believing that probably all the elements were polymers of hydrogen. Exact
analysis overturned this hypothesis; but many numbers were in fact close
to whole numbers on this scale, and later chemists found this tantalising.
Also, they could not believe that elements so similar as potassium, rubidium
and caesium could be irreducibly different. William Crookes, trying to
separate out the 'rare earth' elements like lanthanum, came to the conclusion
that he was not dealing with distinct species, but with what he called ‘meta-
elements’ more like biological varieties. He expected them to evolve into
real elements; or at least that all the elements had evolved from a common
ancestor.
In the 1880s these ideas looked rather eccentric, like a throwback to
alchemy; but with the discovery of radioactivity in the last years of the
century, and the work of Rutherford and Soddy on what they called sub-
atomic chemical change, it became clear that atoms like species of living
creatures were not permanent. Theodore Richards at Harvard showed that
lead found in association with uranium had a different atomic weight from
‘ordinary’ lead; and at first the phenomenon was supposed to be confined to
the radioactive elements. But in the 1920s Aston at Cambridge devised a
mass spectrograph to separate isotopes (a term coined by Soddy) and found that
many elements had two or more such forms. Prout’s hypothesis could be
resurrected in a different form; the various isotopes all had almost whole-
number masses compared to that of hydrogen, but in nature they come in
mixtures which produce fractional numbers. The small differences from
whole numbers Aston called a packing fraction; it represents the energy of
formation of the atom.
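In modern symbols, the packing fraction for an isotope of measured mass M and mass number A (the nearest whole number) is

\[ f = \frac{M - A}{A}, \]

conventionally multiplied by 10⁴; by Einstein's E = mc², this small mass difference is a measure of the energy binding the nucleus together.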
W. H. Brock (1985) From Protyle to Proton: William Prout and the Nature of Matter, Adam Hilger,
Bristol.

The journal has become the most important vehicle for the publication of
research in the sciences; and searching through the literature is the
first thing the scientist has to do in any project. It was not always so.
Even after the invention of printing, books were for some two hundred
years the only form of scientific publication. The first journals were
associated with the academies of the 1660s, and their usefulness soon
became apparent.
Before that time, an experiment, observation or calculation was only
written up and published when it could form part of a book, or at least a
pamphlet. Authors might, like Copernicus, circulate a draft in manuscript
among a few associates: this process still continues within invisible
colleges, but the work thus disseminated does not become part of that
public and consensual knowledge which we call science. By the middle of
the seventeenth century, correspondence had become important for
circulating scientific information even across frontiers; Mersenne, a friar in
Paris, had a network of correspondents and passed on news of the latest
developments. The problem is how to break into such a network; and this
was where the journal came in.
The Paris Academy of Sciences from its earliest days was associated with
the Journal des Scavans, published in Paris and in Amsterdam (where there
was no censorship). This was a reviewing journal, in which progress in
various fields was described. This kind of journal is still very important in
science: eminent authors report on recent work and point to promising lines
of investigation. In the seventeenth century, and indeed down to about
1800, one might hope to read everything relevant to a topic; but the
problem was in finding it, and one might have to rely upon the review. For
anybody looking for the paradigm, or interested in the contemporary
reputation of a past scientist, reviewing journals are essential; and because
they could be subscribed to, or consulted in a library, journals widened
and blurred the boundaries of the scientific community.
The Royal Society of London was associated with another journal,
beginning slightly later in March 1665: the Philosophical Transactions.
This began from the correspondence of the society’s secretary, Henry
Oldenburg; and it then took the different form of a research journal,
publishing signed original papers. These were read by a referee, whose
job it was to check experiments and inferences: a system which has
continued for respectable journals ever since, and which should prevent
nonsense getting into print, though it has not always done so in the past.
Now when anybody did anything interesting, he could send a report of it to
Oldenburg without having to wait until he had enough to write a book; and
it would duly be published if authenticated. To our eyes, many of the early
papers are curious rather than scientific; but they tell us about the science of
the time, when wonders were perhaps more interesting than close study of
the ordinary course of things. The scientific paper went well with the
Baconian vision of science as a cumulative activity, in which all could add
their brick to the edifice.
These journals were joined by others as academies were set up in other
countries; so that through the eighteenth century there was a steady growth
in their number, and thus in the number of papers printed. But as academies
became more prestigious so they liked more polished papers, summing up a
research programme rather than describing work in progress; and their
stately volumes took a long time, often years, to come out. In France and
then in Britain, publishers began to bring out private journals which
would bring information before the world more quickly; the Philosophical
Magazine, begun by Alexander Tilloch at the end of the eighteenth century,
still survives. Such journals did not at first go in for very much refereeing;
and included reports of the meetings of societies and academies, and
reviews of books and of papers in other publications, especially foreign
ones. They also sometimes reprinted papers which had appeared in more
august but less accessible journals like the Philosophical Transactions.
All these were general journals, covering the whole of science; but by
about 1800 specialised journals began to appear. Some were associated with
a society, such as the Linnean Society of London which is concerned with
natural history; others were private, like Loudon’s Magazine of Natural
History. The specialised scientific societies of the nineteenth and twentieth
centuries also brought out their journals; and by the 1880s the Philosophical
Transactions was divided into two sections, A concerned with physical
sciences, and B with life sciences. By then it was no longer true that anyone
interested in science could be expected to read a paper in any field.
The great problem was to keep up. Abstracts began to be published in
the nineteenth century, so that the gist of a paper could be consulted in a
different publication, and the reader could decide whether to read the whole
thing. Papers, which at first had been very like letters or lectures, became
shorter and more formal as they were addressed to a professional
audience. Offprints were circulated among correspondents who might
have failed to consult the relevant issue of the journal in which they had
first appeared.
All journals seem to tend, if they survive, to become more formal. The
Royal Society from the 1830s began a new informal publication, its
Proceedings; where we can find the President’s address, the Treasurer’s
report, and so on; but this steadily became more like the Philosophical
Transactions. Private journals also became more dignified; but by the 1860s
chemists had a weekly newspaper, Crookes’ Chemical News. This was then
joined by Lockyer’s Nature, covering all science; in which many
fundamental discoveries have been first announced. We still have the
objective of rapid and reliable publication, which will be seen by everybody
interested; but there is so much being published that this requires a
hierarchy of journals.
A. J. Meadows (ed.) (1980) Development of Science Publishing in Europe, Elsevier, Amsterdam.
W. H. Brock and A. J. Meadows (1984) The Lamp of Learning, Taylor & Francis, London.

Kinematics is the study of motions without reference to forces, and is
therefore opposed to dynamics. Planets were, before Kepler, treated in
kinematic terms; nobody asked very seriously how they could possibly be
maintained in the epicyclic paths proposed for them. This may provide a
model for a science safely based upon phenomena and description rather
than on hypothesis and explanation; but most of us would be rather
unhappy with it.

The laboratory is the place where most science is done; though clearly
mathematics needs no apparatus, and geology for example also requires
field-work. In the early nineteenth century, laboratories were not necessarily
separate rooms; the great chemist Berzelius used his kitchen, and Anna his
cook was also his technician. But even then better-endowed scientists
had a laboratory at their disposal; and the basement laboratory at the Royal
Institution, for example, is preserved in the state in which Faraday used it.
Laboratories have in recent years come into use in disciplines such as
archaeology, partly for good reasons and partly to bring the prestige of real
science into what might otherwise seem unserious activities.
In the 1790s at the Ecole Polytechnique in Paris there seems to have been
some laboratory teaching; but for the next half-century lectures with
demonstration experiments were the usual form of scientific education
at all levels. Chemistry was the first science in which practical training was
seen as essential, no doubt because in the early nineteenth century it was
very much an experimental science, looked upon by applied
mathematicians as applied cookery. Practical classes were at first an ‘extra’,
for which a further fee was asked at the new universities of London and
Durham for example; but they were soon required, and Liebig at Giessen
pioneered graduate laboratory work for the PhD degree, also in the 1830s.
Twenty years later, laboratory teaching in physics began to come in as that
science emerged, with conservation of energy, into a mathematical and
experimental discipline; and a few schools, like the famous Queenwood
where Tyndall and Frankland taught, introduced laboratories. By the end of
the century, school laboratories were common.
Eminent scientists, like Crookes, might still do their research in private
laboratories at the turn of the century; it was only as science became
professional that institutional laboratories became the norm. The large
laboratories of the later twentieth century, where teams whose members
have different trainings work in ‘big science’, are a phenomenon for the
sociologist.
B. Latour and S. Woolgar (1979) Laboratory Life, Sage, Beverley Hills.

Language is very important in the sciences. The Royal Society in its early
days set up a committee to try to produce a precise language; and
Lavoisier’s reform of chemistry involved a new vocabulary because he
believed the old one incorporated error. Illustration is a kind of visual
language very important in science: and Galileo believed that the Book of
Nature was written in the language of mathematics, a belief in which many
physicists have followed him. Pictures and symbols are less rich than
ordinary language, where words call up associations and can be used in
jokes; but scientists are suspicious of word-play, at work at any rate, and
look for accuracy and precision.
Down to the middle of the seventeenth century, Latin was the language of
learning and therefore of science; but gradually the vernacular languages
came to replace it. By 1800, French was the best language for international
communication in science; then German took the lead; and in our century
English has come to occupy the place Latin had three or four hundred years
ago. Much science is easy to translate, because it is not nuanced; but all
publication is rhetorical, written to persuade, in some degree, and original
works of science like Darwin’s Origin of Species are difficult to put into a
foreign tongue: there has been, for example, a series of Japanese
translations since the nineteenth century, trying to get it right. With old
scientific texts, like Harvey’s, it is difficult for a modern translator not to
use words carrying implications Harvey could not have dreamed of; using
translations is essential, but it does demand an act of faith.
Old words like field or energy assume in scientific contexts a
constricted but definite meaning; while some new words like catalyst
(coined from the Greek) pass out into ordinary language and mean
something much vaguer than in chemistry. Outside the laboratory, and
sometimes inside it, some indefiniteness is an advantage, allowing us
flexibility.

Law was at first seen to imply a lawgiver, and to be obeyed only by rational
beings; we do not expect dogs to keep the law. At the foundation of a city or
a state, a wise man would choose a system of laws; and this pattern was
applied to God and his creation. There are different systems available for
legislators, and in the same way God had the option of creating a number of
possible worlds. The job of the scientist was to find out which one he had
in fact made; and scientific laws were thus a matter of contingency: they
could have been otherwise. Down to the seventeenth century, ‘natural law’
meant the law which ought to be applied to strangers or between states, in
contrast to the specific codes which governed citizens in their relations with
each other.
In the seventeenth century, various laws of nature were found. Galileo’s
discovery of the law of falling bodies, Snell’s of the law of refraction,
and Boyle’s of the law of gas pressure, are well-known examples. These
are simple quantitative relationships which seemed to apply universally.
They do not involve causes; and this makes appeal to laws attractive to
those suspicious of theories. One problem is that they do not usually apply
exactly. If Galileo had dropped weights from the Tower of Pisa, he would
have found that heavier ones got to the ground slightly sooner; but he knew
that there are often interfering effects that make the course of nature less
simple than it should have been. His law represented an ideal situation, with
no air resistance. At low temperatures, Boyle’s law does not at all describe
what happens; instead of being springy, the gas becomes a liquid; and the
ideal ‘perfect gas’, which always behaves as Boyle would expect, was
invented. Laws do not always clearly fit facts.
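The contrast can be put in symbols. For a fixed quantity of gas at constant temperature Boyle's law reads

\[ pV = \text{constant}, \]

which the ideal 'perfect gas' obeys at every temperature; a real gas is better described by an amended equation such as van der Waals's,

\[ \Bigl(p + \frac{an^2}{V^2}\Bigr)(V - nb) = nRT, \]

n being the quantity of gas, R the gas constant and T the temperature; the constants a and b allow for the attraction between molecules and for their size, and the equation does permit condensation to a liquid.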
We talk of weights or gases obeying a law, but to contemporaries of
Boyle, like John Ray the naturalist, this was impossible. Nature for him was
not ‘it’ but ‘she’, a power which carried out God’s will rather as Cardinal
Richelieu did that of Louis XIII. Like a King's minister, nature strove to
obey the law but might fall short; and the materials she had to work with
might sometimes let her down, as happens with a monstrous birth. So we
should not expect complete consistency, in this ‘biological’ view of things.
To Boyle, active in chemistry, this idea was unattractive: explanation
for him meant a mechanical account, in terms of matter and motion (and
particularly of particles), of what was going on. For him as a good
Protestant, there was no power between us and God; so that God had laid
down laws which matter must obey. He was concerned to leave room for
God to work miracles; but by the late eighteenth century the idea had
become general that there was some necessity behind the laws of nature.
God might be the First Cause, but could or would not interfere in the
creation, which was a sphere of determinism. Rather than liberty under the
law, this was an iron rule; which has only been relaxed somewhat in our
century, with quantum physics.

The lecture is underrated in our day because it is associated too much with
communicating information, and not enough with theatre. The real task of
the lecturer is to get across excitement; details can then be got from books
and journals. The medieval lecturer read out a standard text, and that is
what the word originally meant; though this became obsolete with the
invention of printing five hundred years ago, it is still sometimes done. In
science, the lecture can be particularly effective as a part of popular
science where it is probably associated with demonstration experiments
in the tradition of Faraday at the Royal Institution; and also with
communicating current research to students. It is a pity that in
universities it is often used to cover a syllabus already in textbooks; and
that at international conferences many speakers read out pre-circulated
papers rather than giving a brief lecture to get discussion going.
L. Stewart (1986) ‘Public lectures and private patronage in Newtonian England’, Isis, 77, 47–58.

A library is a necessary part of any scientific institution, because
consulting the literature is a vital part of research and teaching.
Associated with a museum or a laboratory will be standard books,
journals and abstracts where things can be rapidly looked up; while a
university requires textbooks as well. For the historian of science, and
for some kinds of scientist, old books and long runs of journals are
essential; and manuscripts such as notebooks and letters are of enormous
importance. The library will now also have access to electronic stores of
information: but written material is still crucial where reasoning is to be
followed rather than facts looked up, and this is the core of science.

Light has many striking, indeed illuminating, features which mean that it is
used in metaphorical as much as in literal senses; we speak of casting light
on some problem, and of seeing the light. In medieval England, Robert
Grosseteste and Roger Bacon at Oxford were associated with ‘light
metaphysics’ in which light was an emanation from God; this led them
towards science. But even in Antiquity, there was also a science of light:
Euclid for example had worked out the law of reflection, and one of his
definitions of a straight line is that it is the path of a ray of light.
With the seventeenth century came the study of refraction, culminating
in Newton’s Opticks (1704) and his conclusion that white light is composed
of rays of all the different colours which form the spectrum. Also in the
seventeenth century had come the first measurement of the velocity of light.
Galileo had tried having two men with lanterns some way apart: the first
exposed his light and the second exposed his as soon as he saw it; the first
man saw how long it took for the light to get there and back. All that this
showed was that light went extremely fast; and Descartes assumed that its
velocity was infinite, though later in his Dioptrique (1637) he said that it
went faster in water than in air. At the Paris Observatory, Roemer noticed
that the satellites of the planet Jupiter sometimes emerged from behind it
early and sometimes late, and that they were late when we were furthest
from Jupiter and early when we were nearest. He concluded that light took
a quarter of an hour or so to cross the Earth’s orbit.
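Roemer's order of magnitude is easily checked with modern figures: the diameter of the Earth's orbit is about 3.0 × 10¹¹ m, and light travels at about 3.0 × 10⁸ m per second, so the crossing takes

\[ t \approx \frac{3.0 \times 10^{11}}{3.0 \times 10^{8}} = 1000\ \text{s}, \]

or roughly seventeen minutes, in agreement with the 'quarter of an hour or so'.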
Newton supposed that light was a stream of particles, because a wave
would not go in a straight line; but he recognised wave-like properties in
some phenomena, like diffraction. In the early nineteenth century, Thomas
Young in England and then Fresnel and Arago more effectively in France,
urged that light was a wave motion and, using a model of transverse waves
like those of the sea, accounted for all that was known about light, and
made surprising predictions which were verified. By the end of the
century, light-waves and the ether which carried them were universally
accepted. Maxwell had shown that light was a form of electromagnetic
radiation, a form of energy rather than a thing.
But Einstein looking for an explanation of the photoelectric effect found
that he had to invoke particles of light, photons, in accordance with quantum
theory; blue photons carrying more energy than red ones. And since 1905
anybody concerned with light has had to recognise wave-particle dualism:
neither model will on its own account for all that is now known. Whether
we shall ever find a new theory which will return us to the happy
confidence of the Victorians remains to be seen; the future of light is
shrouded in darkness.
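The point about blue and red photons is quantitative: in modern symbols the energy of a photon of frequency ν and wavelength λ is

\[ E = h\nu = \frac{hc}{\lambda}, \]

h being Planck's constant; blue light at about 450 nm therefore carries roughly 1.4 times as much energy per photon as red light at about 650 nm.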
G. N. Cantor (1983) Optics After Newton, Manchester University Press, Manchester.

The liquid state is one in which a substance is mobile, and will fill the
bottom of its container. Liquids generally can be cooled so as to become
solid; and on the application of heat they become gases. In chemistry,
this state is very important because most ordinary reactions involve liquids.
Those which will not mix, like oil and water, constitute distinct phases in a
system.

Literature in the context of science has nothing to do with aesthetic
values, but simply means relevant publications. Anybody engaged in
research has to undertake a literature-search: science being public
knowledge, one begins with what is published. This will determine the
questions asked, for all knowledge is cumulative in the sense that new
contributions must be seen to fit in with what is known; and also suggest
appropriate techniques of experiment. The literature presents the paradigm,
most clearly in the textbook but then in research papers also.
Normally textbooks and works of popular science are not regarded as
part of the literature; but the reference book is very important. Scientists
generally consult books, whereas historians and philosophers read them
right through; and works which give the latest atomic weights or the
melting points of derivatives of organic compounds are very useful to the
researcher. He will also be able to look up standard methods of preparation,
useful formulae, and values for physical constants.
Then reviews are a valuable source. These discuss recent work in a
field, citing important papers; if the author and editor have done a good
job, reading a review is an excellent way of getting up to date. The review
may (as often in humanities) take the form of an essay on a recent
publication; but more usually it comes in special publications, and is a
formal survey by an authority. From the days of the first scientific
journals in the 1660s, this kind of paper has had a recognised role.
Abstracts are the next kind of research tool; the necessity for these
became apparent to enterprising publishers by the nineteenth century,
when too much was published for anybody to read it all and have time to do
anything else. They are short accounts, prepared by the author or an editor,
of recent papers; and indicate to the researcher which papers should be read
in full, and which can be neglected. Journals such as Chemical Abstracts
show the extent also of the discipline, and which are the most active parts of
it. As well as abstracts, there are now Citation Indexes, which indicate the
influence and thus perhaps the importance of papers; and can be useful in
sociology of science.
All these things get the scientist back in the end to the primary
publications, the articles in journals and the monographs in which research
is published. The problem may still be that whatever is published is out of
date; the literature is a guide to last year's science. Getting work into
print seems to have become slower in the twentieth century than it was
in the nineteenth. So anybody wanting to keep fully abreast of what is
going on must break into the invisible college of those active in the
field, getting ‘preprints’ of papers or circulating information electronically.

Magic and science inosculated far into the seventeenth century and cannot
easily be disentangled, especially in alchemy and astrology. Magic is
practical, being an attempt to use powers in nature for human ends; on the
whole it worked less well than technology, but this was not obvious in
1600. Judgements of what is real and what is pseudo in science are easy
only with hindsight: magic is not simply irrational.
The conjuror or sorcerer was a potent figure in the seventeenth century,
calling up and using evil spirits to work wickedness; and trials of witches
were a feature of the age, magic being a sphere open to women earlier than
orthodox science. It was fortunate that the Devil always abandoned his
associates once they were arrested, so they could do no harm to the powers
that be. But while to many the powers of evil seem to have been more
evident than those of good, there was no reason why all magic should be
black; to charges of necromancy Renaissance magi like John Dee would
retort that they communicated with angels and sought to benefit mankind.
There was good authority for magic, in texts of supposed ancient
wisdom believed older and therefore wiser than those of Greek
philosophers; but the coming of mechanical clocks provided a new model or
paradigm, and led to a different idea of cause, from those prevailing in
Antiquity. In a clock, weights drive gear wheels which drive the hands; and
in elaborate cathedral clocks such as that at Strasburg all sorts of other
exciting things were made to happen as the clock struck the hours, but all
by means of wheels, rods and springs. The clock could be mechanically
explained; and from the time particularly of Descartes, whose Discourse on
Method appeared in 1637, mechanical explanation became the ideal in
science. It was no longer enough to appeal to organic analogies, like the
parallel between man the microcosm and the world, the macrocosm: or to
mere conjunction, where A always seemed to be followed by B: real
science was not a matter of black boxes but of understanding the processes
of nature.
This went with at least a threat of materialism, and the expulsion of
spirit from science which became concerned with matter in its
interactions and transformations. The spirit of scepticism came into
science with Descartes, and has stayed there; so that scientists are
reluctant to investigate claims that spoons can be bent by non-mechanical
means, or that cures result from a miracle. The values taught by science
are perhaps more mundane than those of the magicians; but the capacity of
scientists to work good and evil would astonish the sorcerer.
O. Mayr (1986) Authority, Liberty and Automatic Machinery in Early Modern Europe, Johns
Hopkins University Press, Baltimore.
K. Thomas (1971) Religion and the Decline of Magic, Weidenfeld & Nicolson, London.

Magnetism is a word that comes from the city of Magnesia on the Meander
river in Asia Minor; where magnetic rocks were found. There were stories
in Antiquity of rocks which could pull all the nails out of boats; but the
discovery that a needle can be magnetised, and will then set itself north and
south, was made in China. With printing and gunpowder, which also have
Chinese ancestries, it was one of the discoveries picked upon in the early
seventeenth century by Francis Bacon as distinguishing modern times. It
made possible the voyages of discovery of the Renaissance, which Bacon
took as a symbol of scientific discovery which would follow the use of his
method.
The first major book on magnetism was William Gilbert’s De Magnete of
1600, which described numerous experiments, especially with a ‘terella’ or
little model of the Earth made from lodestone, magnetic iron ore. He found
that with ‘armature’ (steel shields fixed on their ends) cigar-shaped
lodestones were stronger. Magnetism was profitably investigated as a part
of primitive geophysics; but the phenomena eluded explanation. For
Descartes, magnets emitted streams of particles which brought back
pieces of iron; for Gilbert the magnet had a soul.
Oersted in 1820 proved that magnetism and electricity are
interrelated, showing that an electric current flowing in a wire affects a compass-
needle; and both forces had been shown to obey an inverse-square law,
like gravity. Faraday argued from 1845 that magnetism was best
understood as a field filled with lines of force surrounding the poles. He
believed that all the forces of nature were correlated: and from the late
1840s the principle of conservation of energy brought magnetism into
the new science of physics. Magnetic behaviour of fundamental particles
is very important in quantum physics.
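The inverse-square law can be given in symbols: for two charges q₁ and q₂ a distance r apart, Coulomb found in the 1780s that the force is

\[ F \propto \frac{q_1 q_2}{r^2}, \]

with an exactly similar expression for magnetic poles; the formal likeness to Newton's law of gravity encouraged the belief that the forces of nature were connected.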
In Faraday’s time, terrestrial magnetism was also a subject of intense
investigation and even international competition; associated especially with
Alexander von Humboldt, and with James Clark Ross who got to the North
Magnetic Pole, and very near the South one.

Manipulation means the skilful handling of things; and it is very important
in chemistry, a science heavily depending upon experiment. In 1827
Faraday published his only book, Chemical Manipulation (all the others are
collected papers) in which he gave instructions for performing all the
activities which the chemist in the laboratory has to do. This can still
profitably be read, even in these days of quickfit apparatus: his skills led to
discovery when he isolated the aromatic compound benzene by fractional
distillation of whale-oil, in a zig-zag tube he made himself. A ham-handed
chemist even now would be at a considerable disadvantage, and practice in
manipulation is very valuable.

Mass measures the resistance to force which a body puts up. In the physics
of Descartes in the early seventeenth century, matter was defined by its
extension; it was what took up space. Although Descartes had first clearly
stated the law of inertia, that a body continues in a state of rest or
uniform motion unless a force acts on it, he did not use it to characterise
matter. His ‘subtle matter’, a kind of ether, might have no mass; and in the
eighteenth century weightless fluids were invoked to account for
electricity, magnetism, heat and light.
The inertial principle was stated by Newton in his Principia of 1687 as
his first law of motion; and for him, as for the upholders of atoms in
Antiquity, mass rather than extension was the essential feature of matter.
Mass is different from weight, which measures the effect of gravity upon a
body. This will vary, slightly in different places on the Earth and greatly in a
spaceship or on the Moon, as Newton recognised. Mass on the other hand is
constant in Newtonian physics, and conservation of mass was the
cornerstone of Lavoisier’s chemistry: but in Einstein’s relativity it will
vary with the velocity of the body, and increase as this approaches the
velocity of light. Some fundamental particles may have no mass if at rest,
and owe it all to their motion; energy and mass being connected according
to the famous equation E = mc², where E is the energy, m the mass and c
the velocity of light.
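In modern symbols the variation is

\[ m = \frac{m_0}{\sqrt{1 - v^2/c^2}}, \]

m₀ being the rest mass; as v approaches c the mass increases without limit, which is why nothing with rest mass can be accelerated to the velocity of light.
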
The materialism which alarms moralists nowadays is belief that the more
things we own the happier we shall be; a modern and less joyful version of
the Epicurean ‘eat, drink and be merry for tomorrow we die’. Philosophical
materialism may entail this behaviour, but it means the belief that only
matter exists: there are no souls or spirits. Joseph Priestley, the
eighteenth-century chemist, believed that materialism was a part of
Christianity, properly understood; that the resurrection of the body rather
than the immortality of the soul was the correct doctrine, and that matter
sufficiently organised could think. Most contemporaries were horrified, and
connected this belief with Priestley’s support of the French Revolution and
other radical causes; he left Britain in the 1790s to take refuge in the USA,
but even there he found few supporters besides Jefferson.
The simplicity of the doctrine had attracted many of Priestley’s French
contemporaries; and in science it seemed that explanation in terms of
mechanisms was needed rather than any dependence on immaterial spirits.
John Hunter in Britain and Blumenbach in Germany found some kind of
vital spirit necessary in physiology; but the French and German
physiologists of the nineteenth century believed they could do without it.
The kidneys secrete urine, went one dictum, and the brain secretes thought.
They interpreted Wöhler's synthesis of the organic compound urea as a
disproof of vitalism: Wöhler himself had been interested in the experiment
as an example of the rearrangement of atoms, or isomerism.
In chemistry, Davy and others in the early nineteenth century came to
believe that it was force and not matter which was crucial. An electrical
charge would alter the chemical properties of metals; and chemical
affinity seemed electrical. Matter itself was inert and brutish, as Newton
had thought. This emphasis upon what came to be called energy
transformed ideas of the universe; which could no longer be seen simply as
an enormous clock. But it did not mean that chemistry disproved
materialism: as structures were better understood, so matter assumed a
place as important as energy in chemistry; while energy ceased to seem
anything very like spirit.
In our century, matter and energy in Einstein’s physics became
interconvertible; and while quantum theory seemed to be incompatible
with determinism, matter became something very different from a heap of
inert billiard-balls. Priestley’s vision of active matter is closer to ours than
Dalton’s; materialism may still be a useful world-view for the scientist,
though some would say that it is narrow and that some kind of religion is
vital for good science.

Mathematics, wrote Galileo, is the language of the Book of Nature. By that
he meant that simple mathematical relations lay behind all phenomena; and
also that only those with mathematics could really hope to get very far in
science. We can see him at work in deriving his law of fall: in which he
applied the ideas of medieval mathematicians on what they called
uniformly non-uniform motion, our uniform acceleration, to actual falling
bodies. His conclusion was that all bodies would fall at the same rate, and
that the distance they fell would be proportional to the square of the time.
Here, he took some pure mathematics and applied it to a particular problem
in physics; this is something which frequently happens, though it can also
go the other way round, as with Fourier’s series devised to solve a problem
in heat-flow but leading to interesting mathematics. Galileo also had to
idealise his problem: bodies do not all fall at the same rate, or accelerate
according to his law. Experiment therefore did not confirm it, and it could
not have been arrived at by generalising experience; but Galileo could
explain the divergences by invoking air resistance.
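The Merton result, the 'mean speed theorem', can be sketched in modern symbols: a body uniformly accelerated from rest covers the same distance as one moving steadily at half its final speed, so in time t, with acceleration a,

\[ s = \tfrac{1}{2}(at)\,t = \tfrac{1}{2}at^2, \]

the distance fallen being proportional to the square of the time, just as Galileo concluded.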
With the work of Galileo, Descartes, Newton and Leibniz, mathematics
acquired enormous prestige in the seventeenth century as a source of proof
and certainty; and the great advances in astronomy at that time would
have been impossible without it. Astronomy became the pattern which other
sciences were expected to follow, but in some cases this did not work out:
classification in botany and zoology depended on recognising analogy
rather than on numbers, and chemistry could not be quantified as Newton
had hoped in terms of forces. But with the passage of time, Galileo’s
vision of science has come closer to what goes on not only in physics but
also in chemistry, genetics and social sciences. Indeed it seems possible that
our society over-values mathematics and the quantitative.

Matter is the substratum which underlies all that we see. In Aristotle's
physics it had no positive characteristics, all of which depended upon it
being given some ‘form’; but to critics in the seventeenth century this
seemed to yield only verbal explanations. Most of those then interested in
science adopted some kind of theory of atoms or particles; which had
only the ‘primary’ qualities of shape, size and weight, and by their different
arrangements gave rise to things having colour, taste and smell. At that
time, this theory could not be tested or subjected to falsification any
more than the Aristotelian one; except that heavy metals like gold might be
expected to be more inert than lighter ones like tin, because their particles
would be closer together making it harder for acids to worm their way in.
Early atomists in Antiquity had preached materialism, the idea that we
are no more than matter and that our existence will end at death when our
atoms will be dispersed. In the eighteenth century, materialism seemed a
great threat to the whole fabric of society, because it removed all fear of a
Last Judgement; if one could get away with wickedness, why not do it?
Priestley, by way of contrast, tried to reconcile materialism with Unitarian
Christianity, arguing that the Resurrection of the Body (rather than the
Immortality of the Soul) was the true doctrine, and that by a miracle our
bodies would all be reconstituted when the Last Trumpet sounded.
Priestley believed that matter was active, full of powers, whereas Newton
and his contemporaries saw it as passive and ‘brutish’, incapable of even
holding together by itself, just a mass of billiard-balls. Our fundamental
particles are more like those supposed by Priestley, and matter has shown
itself a good deal more interesting as atomism has become a scientific
theory rather than the world-view which it was for Newton.

Measurement has great prestige, and seems to have been very important in
the history of science; where quantification is essential for confirmation, or
falsification, of hypotheses and theories. Important measurements go
back to remote Antiquity; Eratosthenes in Alexandria for example having
determined the circumference of the Earth with considerable accuracy in
the third century BC. In the eighteenth century, more precise determinations
of this quantity by Maupertuis in Finland and La Condamine in Ecuador
demonstrated that the Earth was flattened at the Poles, in accordance with
Newtonian theory. J. J. Thomson saw some of his predecessors as explorers,
working qualitatively and recording whatever they saw; while mature
science involved theory and quantitative method. Certainly mathematics
and measurement are the crucial features of physics: and in chemistry too
the measurable has come to prevail, as smells and tastes have given way in
analysis to physical properties like the spectrum.
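Eratosthenes' reasoning repays a line of arithmetic. On the traditional account, the noon Sun at Alexandria stood about 7.2° from the vertical when it was directly overhead at Syene, some 5000 stades to the south; 7.2° is one-fiftieth of a circle, so the circumference comes out as

\[ C \approx 50 \times 5000 = 250\,000\ \text{stades}, \]

close to the modern figure on most valuations of the stade.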
Outside the physical sciences the role of measurement is less certain,
because the measurable quantities in dealing with people may be less
important than the qualitative ones; and whether the social sciences should
seek to model themselves on physics is open to doubt. We do try to quantify
deprivation or affluence, and this can be of value to governments in
deciding upon policy; but such measurements are theory-laden to a degree
that those in chemistry are not, and the search for accuracy can in this field
land one in error: as attempts to measure intelligence have shown, where
we find fraud as well, and where it is not clear that there is any simple
quantity to be measured.
Sometimes limited accuracy in measurement has been an advantage as
when Kepler worked on the orbits of planets about 1600. His data from
Tycho Brahe’s observations was too good to permit him to fit circles to it,
so he worked on until he hit upon the ellipse; which a single planet would
indeed follow, but is not strictly the orbit of planets forming a system,
because they attract each other and produce wobbles. These wobbles were
only detected after Newton’s work on gravity, of which they were a
confirmation, as the telescope came into astronomy.

A special method is often supposed to be the great characteristic of
science; and to be teachable as a sort of general study. It is often indeed
supposed to be very simple: probably to be based upon induction, the
collection of numerous instances and cautious generalisation from them, as
described by Francis Bacon in the seventeenth century. There are cases
where this, or something like it, has been done; a favourite example in the
last century was the work of Dr Wells from Carolina on the dew, which was
praised by John Herschel in his Preliminary Discourse of 1830. But it does
not fit what happens in more sophisticated science.
Thus Newton was not the first to observe that unsupported bodies fall to
the ground, and his conclusion that the Moon was falling just as the apple
did was not based on numerous instances of falling moons. Rather it was a
deduction from a hypothesis about every particle of matter attracting
every other particle with a force inversely proportional to the square of the
distance. His method here was closer to that of Galileo, for whom the Book
of Nature was written in the language of mathematics; and Newton indeed
called his book Principia Mathematica, 1687. The key to truth was thus
mathematical simplicity; and his contemporaries were attracted to Newton’s
views because of this.
Another suggestion is therefore that science works by falsification, or
conjectures and refutations; this has been proposed by Karl Popper in our
century. Scientific propositions differ from others because they are testable,
and could be proved false; if they are, they are discarded, but otherwise they
survive and form the body of science – which thus has a provisional
character. The trouble is that some science is not readily falsifiable,
Darwinian evolution being an example, although it is clearly powerful and
fertile in leading to research. And some fundamental parts of science have
been deduced from untestable or false premisses: the law that bodies
continue at rest or in uniform motion unless a force acts on them was
deduced by Descartes from the unchangeability of God; and the second law
of thermodynamics by Carnot from the idea that heat is a fluid like water,
which will not flow spontaneously uphill.
Some therefore hold like Feyerabend that in the sciences there is no
method: that anything goes. Certainly in different sciences and at different
times, different methods have seemed appropriate; and just as the great
novelist or poet expands our ideas of what can be done with words, so the
greatest scientists perhaps expand our idea of scientific method. In
Thomas Kuhn’s terms, they bring in a new paradigm to which their
contemporaries are converted as was St Paul on the Damascus road. There
seems nowadays little reason to believe that there is anything like a single
and infallible scientific method; the sciences are thereby made more
interesting and exciting, and the gap between them and other activities
narrower.

A miracle is a wonder which promotes faith, an answer to prayer. It need
not involve anything inexplicable: we say ‘It was a miracle that a fishing
boat was passing so that everyone on the wrecked ship was rescued’; but it
ought to be good, revealing God, so that we would not call it a miracle had
nobody been saved. An element of unlikelihood is usually required if an
event is to be called a miracle, but it would be possible to talk of the daily
miracle of the sunrise.
It is the unlikelihood rather than the edifying character of miracles which
has, however, generally been seized upon: and some familiar ones – the
resurrection of the dead, or walking on the water – pass beyond
unlikelihood into incomprehensibility. The word ‘miracle’ in science and
in philosophy has thus come to mean an event which cannot be explained
in terms of the accepted regularities or laws of nature. Such an event might
imply some supernatural interference with the ordinary course of things, for
good or ill; for insurance purposes, floods or tempests may be called ‘Acts
of God’.
This usage could hardly develop before the seventeenth century, when
Bacon, Galileo and Descartes were among those who argued that the world
was a great machine or engine subject to inexorable regularity; and that the
job of science was to find the mechanisms within this great clock. In 1687
Newton published his great work: but he was later accused by Leibniz of
suggesting that the world was kept going by a ‘perpetual miracle’, because
his equations were no explanation of how the Sun and the Earth could
attract each other across void space. This seemed like magic to
contemporaries who were just rejoicing in having escaped the occult.
Newton’s theory of gravity was so powerful that it came to seem silly
to doubt it, even if ‘action at a distance’ was a mystery. But in the middle of
the eighteenth century, David Hume, although he believed we could never be
sure that one event was really the cause of another, argued that miracles
were so improbable that we should always be sceptical about any alleged
case. The passage of time made the miraculous story ever more implausible.
By 1800 Laplace could argue for determinism that a Being who knew the
position and velocity of every particle would know the past and the
future.
Romantics like Schelling and Coleridge argued against this mechanical
world-view, believing that force was primary and rejecting the idea that the
first morning of Creation wrote, what the last Day of Reckoning shall read.
Although we know the provisional character of scientific laws and
paradigms, in our Age of Science it is difficult to believe in miracles like
resurrection, except as a metaphor; but the majority of miracles, as
examples of Providence and cause for thanksgiving, are no real problem.

A model meant a reproduction, usually on a smaller scale, of something;
architects and engineers find them very helpful in analysing structures as
well as in showing ordinary people what the result of their work will look
like. The performance of ships and aeroplanes, for example, can be
predicted from experiments with models. In the sciences, one may use
models of this kind: Watt became interested in steam-engines when
mending a model used at Glasgow in classes on natural philosophy; while
Davy made a model volcano in which he put potassium, adding water to
make it erupt to the edification of his lecture-audience, because he
believed that this might be why real volcanoes erupted. But models may be
useful when they are not believed to display a perfect analogy with the
phenomena of nature; as bubbles on a tank of water can illustrate features of
metals.
William Thomson (Lord Kelvin), the doyen of classical physicists, said
in the later nineteenth century that he could never understand anything
unless he could make a model of it. His models were not generally exact
reproductions; thus he invented systems of wheels and piano-wire to
simulate the ether which was supposed to carry light-waves; while using
spinning-tops and universal joints he constructed an ‘atom’ which although
made entirely of rigid parts was elastic. Atomic models naturally have to be
on a bigger scale than the reality they are supposed to represent. Thomson
did not believe that ether or atoms were really like his models; but the
models gave him ideas which he could then test.
His friend Helmholtz was interested in the mathematics of vortex rings,
and Thomson became interested too; he made an apparatus to make
smoke-rings of ammonium chloride, and watched them bounce off each
other. It occurred to him that atoms might be smoke-rings in the frictionless
ether, so that they would go on for ever; and the young J. J. Thomson (no
relation), later to become famous for his work on the electron, published
his first paper on the vortex atom, indicating that vortices might be able to
combine as chemical elements do. Here the model was not at first specially
made in a physics laboratory, but when the first cigar was smoked; the
scientist must be an opportunist in choice of models.
From these nineteenth-century examples, it is but a step to the sense in
which philosophers of science now use the term ‘model’. The model need
no longer be made; using a model in science simply means imagining
something with which one is familiar (billiard-balls perhaps, or some
mathematical equations) in order to throw light on something with which
one is unfamiliar (the cause of pressure in gases, or the behaviour of
electrons). When the model works well, generates surprising predictions
which survive falsification, and coheres with other parts of science,
realists believe that it is a model of reality; sceptics believe that its
analogies will sooner or later break down, and that it is no more than part of
our temporary world-view, or paradigm.

Molecule in the eighteenth century meant a small particle; but from the
1820s, first with Gaudin in France, it came to be distinguished from atom
and to mean the smallest particle of a substance that exists in a free state.
By the 1860s, with Cannizzaro’s revival of Avogadro’s hypothesis that
equal volumes of all gases under the same conditions contain equal
numbers of molecules, it was accepted that molecules might be composed
of two or more atoms of the same element (though this conflicted with
views of chemical affinity), or of different elements. At the very end of
the nineteenth century, the molecules of Argon were shown to contain a
single atom.

A museum is a temple of the Muses, goddesses of learning; and the first
was that at Alexandria which flourished in the third century BC. It was
associated with research in mathematics and other fields of science. In
modern times, the museum grew out of the collections of all sorts of things,
or cabinets of curiosities, of the Renaissance. The early Royal Society, like
other academies, had such a collection, which included ethnographical
items, medical preparations and fossils; this was described in a book by
Nehemiah Grew, an early Fellow of the Society.
Sir Hans Sloane, who was President of the Society after Newton, left his
collections to the nation; and they became the nucleus of the British
Museum. This was organised by the end of the eighteenth century into
different departments; but such great museums always tend to become the
nation’s attic. The Smithsonian Institution in Washington DC began with a
bequest also, this time from an eccentric Englishman called Smithson.
During the nineteenth century, following the great Natural History Museum
in Paris, the British Museum built up its natural history collections, largely
at first through gifts and bequests; and in 1881 a new building at South
Kensington was put up to house these, away from the antiquities and works
of art. Richard Owen, the great opponent of Darwin, presided over it; and
gradually the system of having only part of the collection on show, the rest
being available for research, was adopted. The officers of the museum
became great experts in their fields, and the catalogues published were
important works on classification and not just guides to the galleries,
though these were published too.
Apparatus used for physics and chemistry, and equipment for
technology, was not a feature of the British Museum though it might well
have featured in a cabinet of curiosities. But in the nineteenth century, there
were many exhibitions devoted to such things; we remember the Great
Exhibition of 1851 only because it was the biggest and best. In its successor
of 1876 there was a special section of apparatus; and this became the
nucleus of the Science Museum at South Kensington. Such a museum is
unlike that devoted to natural history, because research in physics and
chemistry is not carried out there; historical research is indeed done, but the
collections of historic and representative apparatus may not be of interest to
the working scientist, though one would hope they might be. They are
probably very important in attracting the young; and the principle of letting
people turn handles and touch things is an old one, and distinguishes the
science museum from those devoted to unique works of art.

Naming in natural history is very important, because we need to know just
what we are talking about. Species and minerals are named after friends or
patrons; and in physics there are laws, and in chemistry reactions, which
bear the name of their discoverer: Boyle’s and Ohm’s laws, Grignard
reagents, Williamson’s ether synthesis. But all these are technically
‘trivial names’, and more significant has been the effort to find theory-free
words for things. Lavoisier and his associates tried to do this in the 1780s as
his oxygen theory came to replace that of phlogiston; but unfortunately the
word oxygen implied an idea of acidity that Davy in 1810 showed to be
untenable. Davy’s names for what we call chlorides and chlorates did not
catch on; but when Faraday in the 1830s wanted names for the charged
particles which he believed to carry the current in electrolysis, he turned to
William Whewell (a Cambridge polymath) who coined for him ‘ion’,
‘anode’ and ‘cathode’: terms which did not involve the older ideas which
Faraday believed enshrined in ‘poles’. Whewell had already coined the
word ‘scientist’; and the idea of making up new scientific terms from Greek
and Latin roots caught on generally. Giving something a name is the final
step in discovery.

Necessity means that things could not be otherwise. It applies to proof in
geometry, where if the axioms are accepted and the deduction is sound,
then the conclusion or demonstration must follow. Necessary truths are
those of logic or of mathematics: all bachelors are unmarried men, 2 + 2 =
4. These only apply if we are using current English, and the decimal system
of arithmetic; and they do not tell us anything about the world. They are
closer to rules for the correct use of symbols; which will depend on context.
In a university, ‘bachelor’ can correctly be applied to a married woman
who has got a BA or BSc; and in binary arithmetic or Boolean algebra the
laws of addition are different.
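A small illustration, in modern notation, makes the point concrete:

    decimal: 1 + 1 = 2
    binary: 1 + 1 = 10 (the numeral ‘10’ here meaning two)
    Boolean algebra: 1 + 1 = 1 (‘+’ read as ‘or’)

so what counts as a necessary truth depends on which rules for the symbols
are in force.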
Necessity can also be used to imply determinism; that in science we
find that certain causes inevitably produce certain effects, and that given
one state of affairs another is inevitable: if we pour sulphuric acid onto
chalk, we shall get carbon dioxide. If this were not to happen, we would
reckon that one of the bottles was wrongly labelled; but we can see that this
is not like a truth of logic, for many reactions depend on conditions. In
quantum physics, we do not seem to find such necessary links; and Hume in
the eighteenth century argued that we can never establish necessary causal
links. All our observations may just be of accidental connections; we can
never be completely certain that induction is sound, and that the future
will resemble the past.
Necessity was invoked by early atomists, and notably by Laplace in the
opening years of the nineteenth century when he postulated a Being who
knew the velocity and position of every particle of matter in the
Universe, and could therefore see the past and the future. It went with the
vision of the world as an enormous clock; but it does not go very well with
modern science. We can have laws of thermodynamics and atomic physics
without strict determinism; and we can doubt that any logical necessity
underlies empirical knowledge.

A nucleus is the central part of anything; for example of a cell in an animal
or plant, where the chromosomes are. In physics, it refers to that part of the
atom where the mass is concentrated and which is surrounded by
electrons in orbitals which occupy most of the volume of the atom. This
model was proposed by Rutherford at the beginning of this century in
response to his scattering experiments: he shot alpha rays at a very thin
sheet of gold foil, and found that most went straight through but a few were
deflected through big angles. This was not what would happen if atoms
were, as J. J. Thomson proposed, a ‘plum pudding’ of positive material with
electrons scattered through it. Such a barrier would simply have slowed or
slightly deflected particles shot at it. Atoms having a massive positive
nucleus and electrons in orbit, as planets go round the Sun, would behave
as in the experiment; the particles widely deflected being those which had
passed near a nucleus.
The first problem was that Rutherford’s model would be very unstable;
such an atom should radiate energy as the electrons spiralled rapidly into
the nucleus. But Bohr invoked quantum theory, with electrons occupying
orbits strictly specified by quantum numbers, to save the model; and
showed that the lines in the spectrum of hydrogen corresponded to
electrons moving between these permitted orbits. And Harry Moseley, also
working with Rutherford, confirmed from X-ray spectra of the elements
that the charge on the nucleus rather than the atomic weight is their critical
property: for hydrogen it was one, for helium two, and so on.
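In the familiar modern notation, Bohr’s permitted orbits give the
wavelengths of the hydrogen lines as

    1/\lambda = R\,(1/n_1^2 - 1/n_2^2),   R \approx 1.097 \times 10^7 per metre,

where n_1 and n_2 number the two orbits between which the electron
moves; and Moseley’s characteristic X-ray frequencies obey

    \sqrt{\nu} \propto (Z - \sigma),

where Z, the atomic number, measures the charge on the nucleus and
\sigma is a small screening constant.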
Another difficulty was that one would expect that a congeries of positive
particles would repel each other: and exactly what forces bind nuclei
together was a problem. In hydrogen, there is just one positive particle,
which was called a proton; but in other elements the mass of the nucleus
exceeds that of the protons, and in many there are isotopes, atoms having
different mass but the same nuclear charge and other properties. It was
supposed that there must also be some electrons in the nucleus; but in 1932,
Chadwick, again in Rutherford’s laboratory, detected the neutron, having
the same mass as the proton but no charge. For a short period it seemed as
though all matter might be made up of protons, neutrons and electrons only;
electrons by then being seen as not quite ordinary particles, but having wave
characters and occupying orbitals like a standing wave.
Since then the number of particles composing the nucleus has grown
rapidly, and so has the understanding of the forces involved; which can be
liberated in weapons or slowly in power stations generating electricity,
and in natural radioactive decay. In such cases some of the mass of the
original atoms is converted into energy. The electron was the first
‘fundamental particle’; the quark is now believed to be the most
fundamental of them, but all is not yet clear.
J. L. Heilbron (1974) H. G. J. Moseley, University of California Press, Berkeley.

Number was felt by Pythagoreans to be the secret of the order in the world;
and ever since, simple numerical relationships have been sought in the
belief that mathematics is the language of nature. The ancient Greeks,
Romans and Jews used letters to stand for numbers; this does not make
addition and subtraction easy, but an abacus made these operations rapid.
The Babylonians used a base of sixty in their astronomical work, and from
them we still have our sixty minutes in an hour, and in a degree; each
minute being divided into sixty seconds, that into sixty thirds, and so on. In
England we are familiar with a base of twelve, with eggs in dozens, twelve
inches to a foot, and until quite recently twelve pennies to a shilling: and a
base twenty with a ‘score’, as in three score years and ten, and in the French
system of counting with ‘quatre-vingt’. But the base ten has come to
displace its competitors in most fields, though computing has given a boost
to binary arithmetic.
Babylonians lacked a zero; and our ‘Arabic’ system of numbers came to
us in the Renaissance ultimately from the Hindus, but via the Arabs, and
was first taken up in book-keeping. It involved symbols for one to nine, and
zero; and the idea of ‘place value’. A number as large as you like can be
expressed because the value of a numeral depends on its place; whether it is
in the column of units, tens, hundreds and so on. By the eighteenth century
this was extended beyond the decimal point, so that fractions could be
expressed in the same way as whole numbers. Much modern science has
depended on quantification; which is the application of measurement and
numbers to phenomena; it is hard to imagine life without numbers.
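The convenience of place value can be shown in a line:

    365 = 3 \times 10^2 + 6 \times 10 + 5,   3.14 = 3 + 1 \times 10^{-1} + 4 \times 10^{-2},

each numeral contributing according to its column; a system of letters, by
contrast, needs a fresh symbol for every new power of ten.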

Observation is sometimes in popular science set against hypothesis; to
those who see induction as the primary method in science, it must be the
foundation of all satisfactory theory. The problem is, as Darwin put it: ‘all
observation must be for or against some view, if it is to be of any service’.
The scientist has to decide what factors are relevant; the time of day
matters in terrestrial magnetism, but not in chemistry. The most striking
observations are those which confirm, or lead to the falsification of,
some ideas. So in much science the observations come after the theory:
though sometimes careful observation of a phenomenon may lead to a great
discovery, as with X-rays when Roentgen observed that photographic plates
were fogged near cathode-ray tubes.
It is essential that observation should be honest, and this has indeed been
urged as a way science can teach values. As Charles Babbage wrote, ‘the
character of an observer, as of a woman, if suspected is destroyed’; we
might see Victorian sexism there, but fraud is always a risk in science and
honest reporting is something which scientific education strives to
inculcate. This does not mean that two observers will necessarily report just
the same things, for different features will strike them; but an agreed
scientific paradigm will indicate what is important, and experiment,
especially in the laboratory, is designed to isolate one or very few factors.
So in most cases the reports of two trained scientists will be very similar,
and in developed sciences their observations will usually be quantitative.
Outright fraud is probably very rare in science, though the crimes of
‘cooking’ and ‘trimming’ on Babbage’s list, where observations are
improved a little, will be familiar to undergraduates and no doubt happen at
more exalted levels: but the estimation and reduction of error in the cause
of accuracy are very important. No quantitative observation can be made
completely accurate; so they are repeated, and statistics applied if
necessary to find the correct value. To estimate error is very important, and
if the possible error in an experiment is about as large as the effect sought
then there is no point in doing it. Tycho Brahe, the sixteenth-century
astronomer, was probably the first systematically to estimate the error in his
instruments; the idea reached chemistry much later, and Lavoisier for
example in 1790 gave results implying (in the number of figures after the
decimal point) far greater accuracy than he could have attained.
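The standard modern treatment of repeated measurements is easily stated:
n readings x_1, …, x_n are averaged,

    \bar{x} = (x_1 + \cdots + x_n)/n,

and the uncertainty of the average is estimated as s/\sqrt{n}, where s is the
standard deviation of the individual readings; so quadrupling the number of
observations only halves the probable error.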
Observation through history has thus become increasingly sophisticated
and different from ordinary perception, just as science has diverged from
organised common sense.

An observatory is where observations, especially astronomical but also
of weather and terrestrial magnetism, are made. There are examples from
India and from Samarkand in central Asia, and from central America; all
over the world the making of calendars meant that observations were an
essential part of civilisation. Down to 1600, all observations were made
with the naked eye; and from Ptolemy’s Almagest, written in the second
century AD, we can learn about instruments used in Antiquity, while
archaeological evidence fills in the picture from this era. The last great
naked-eye observer was Tycho Brahe, who set up an observatory on the
Danish island of Hveen in the latter part of the sixteenth century, calling it
Uraniborg; his very accurate and consistent observations, in which he
estimated the error, provided the foundations for Kepler’s laws of the
orbits of planets.
With the coming of the telescope, new accuracy became possible; and
in the 1660s observatories were set up under state patronage in Paris and
at Greenwich near London. Here the great hope was that navigation would
be assisted, because celestial observations would enable the mariner to find
his longitude at sea; and with the publication of the Nautical Almanac from
the 1760s this became possible given careful measurements of the distance
apart of the Sun and the Moon. It became much easier with the
simultaneous perfection of the chronometer, so that local time could be
compared with that at Greenwich.
By this time, there were many observatories around the world; and
Cook’s first voyage, on which he discovered New South Wales, was part of
an international project to determine the Earth’s distance from the Sun by
following the Transit of Venus across the Sun from various different
observatories. Cook’s was Point Venus on Tahiti. During the nineteenth
century, observatories became important centres of scientific research in
optics and geophysics as well as astronomy, with large staffs and ongoing
research programmes. Experiments on the velocity of light and on the
density of the Earth were done, at Paris and at Greenwich: and bigger and
bigger telescopes, associated with photography and spectroscopy, brought
into being the new science of physical astronomy, in which the stars were
not simply points of light to be plotted but sources of energy to be
understood and classified, having their systems and lifetimes.

An offprint is a paper extracted from a journal and sent by the author to
others working in his field who might not otherwise have read it. Scientific
journals began in the 1660s, and at first there were very few of them;
anybody active in science could keep abreast of the literature. By the end
of the eighteenth century, this was ceasing to be true, and it is from about
1800 that we find the first offprints. At first these were repaginated, so that
they start at page 1 wherever in the journal they happened to have come;
but this makes citation difficult, and by the 1830s pagination was generally
preserved. Publishers were uneasy that circulation of offprints would
reduce the sales of journals, and usually stipulated that none should be sent
out, or perhaps even supplied to authors, until a month after publication. By
the later nineteenth century, offprints were generally supplied and were
important for authors anxious about priority. Some come free of charge,
but the sale of offprints to authors can be profitable to printers and
publishers; and in our time they may come from books too, where these are
symposia and different authors have written different chapters.

The orbit of a planet is the path it follows through the heavens; and Kepler
in 1609 showed that this is an ellipse. This model from astronomy was
taken up by Rutherford in his theory of the atom, which he supposed to
consist of a massive nucleus surrounded by electrons in circular orbits.
Niels Bohr modified this theory in the second decade of our century by
applying quantum theory to the orbits; and Sommerfeld allowed for
elliptical orbits of different but definite eccentricities. However, as it
became clear that the electron was not a particle rather like a billiard-ball,
but had a wave character also, the term ‘orbital’ was introduced in place of
orbit.

Order primarily evokes ideas of arrangement. Whence, Newton asked,
comes the order and beauty which we see in the world? and his answer was
that it was the design of God. King Alfonso of Castile, on studying
Ptolemy’s system of astronomy with its epicycles, is supposed to have said
that if God had consulted him about the creation, he would have
recommended something a good deal simpler. To contemporaries, it seemed
that Newton with his law of gravity had revealed how simple the world
really was; in place of the complexities of error, he had supplied the
simplicity of truth, generating real science. From a few simple laws, all
that we behold and wonder at followed; and when we call the eighteenth
century the Enlightenment, we echo this feeling that at last the order of
things was understood.
That century belonged not only to Newton, but to Linnaeus; who put in
order the enormous numbers of plants found in Europe and being brought
back on voyages from all over the world. His naming was based on the
number of sexual parts in the flower, which was artificial and was
superseded slowly after 1789 by the natural system of Jussieu, in which
many characteristics are taken into account: but Linnaeus’ double-barrelled
system of indicating genus and species has survived as an efficient method
of information-retrieval. Classification is a fundamental urge among
mankind, and the search for a natural method is based on the assumption
that there is a real order of things in nature, which we can find out: the
alternative assumption is that there is no order except what we impose for
some purpose or other, and that the old Chinese system of separating
animals into those we can eat, those we can’t, and those which belong to the
Emperor, is just as good as that of modern zoologists.
The genera of animals and plants are grouped within ‘families’, ‘orders’
and ‘kingdoms’, forming a hierarchy; and since Darwin’s theory of
development or evolution by natural selection was published in 1859, this
has been explained as a family tree. Whereas a system had previously been
seen as the occasion for wonder, the order is now accounted for in terms of
a theory, and even taken for granted: in the progress of science, a danger is
that the astonishing may be made dull, as Blake, Keats and other writers in
the Romantic Movement of the early nineteenth century feared. The
greatest scientists keep their youthful wonder longer than the rest of us.
In the physical sciences, Mendeleev’s Periodic Table of the elements
became accepted in the 1870s as fundamental to chemistry; where it
summed up great quantities of knowledge, and made possible prediction
of the properties of unknown compounds. In the twentieth century, this
system also was explained, in terms of the nuclear atom of Rutherford and
Bohr; but this reduction has not made it less valuable to chemists.
Radioactivity, quantum mechanics and nuclear physics have led to a
much more complex world-view than that of the Enlightenment; so that
understanding the order behind the world has become much more difficult
for most people. But while specialisation has been the most prominent
feature of the sciences in the last 150 years, as science has become a
profession, there have also been many great steps in unification. One was
Darwinism; another conservation of energy; and another relativity. The
theory that everything began with a Big Bang after which the universe has
been steadily and rapidly expanding gives us a new vision of order, no less
impressive than the more static Newtonian one. The very low probability
of a habitable Earth being generated this way impresses some astronomers
just as the order and beauty he saw impressed Newton.
A different sense of ‘order’ is found in mathematics: order of magnitude,
expressed in powers of ten. Ancient Greeks and Romans wrote their
numbers using letters; we are still familiar with this from Roman numbers
for dates, useful when distributors do not want us to recognise quite how
old a film is. A problem is that in counting, one would after a time run out
of numbers; if I means 1, X means 10, C means 100, and M means 1,000
(and other letters have been used for 5, 50 and 500) there will be numbers
too big to represent. Archimedes showed that his contemporaries need not
worry; a notation based upon powers would do the trick. X to the power X
is a big number; and X raised to that power again would be 1 and ten
thousand million noughts in our rather more convenient notation, based
upon place value and using zero.
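Taking X as 10, in our notation:

    X^X = 10^{10}, ten thousand million; and
    X^{(X^X)} = 10^{10^{10}}, a 1 followed by ten thousand million noughts,

a number far beyond any stock of letter-symbols, but written in a few
strokes with powers.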
In physics, getting the order of magnitude right is an important step in the
devising and improving of a theory; it marks the first stage of that
agreement with the facts which is required when a model is being tested,
and would generally count as surviving falsification. If we counted in
some number smaller than tens, then this test would be more exacting than
it is.
D. M. Knight (1981) Ordering the World, Burnett Books, London.

Organic means connected with living organisms; and it was believed down
to the middle of the nineteenth century that the chemistry of life was
different from that of dead matter because a vital force supervened. Thus
John Hunter, the great surgeon of the late eighteenth century, found that
after death the digestive juices sometimes attack the stomach, whereas in
life it contains them. Wöhler’s synthesis of urea was chiefly interesting as
a rearrangement of atoms; and it was not until the work of Berthelot,
published in 1860 in two massive volumes, that organic chemistry was
accepted as that branch dealing with the compounds of carbon, rather than
depending on any special forces.

Oxygen is the gas in the atmosphere which we breathe to sustain life, and
which combines with things when they burn. These ideas were first clearly
expressed by Lavoisier in the 1780s, and began what is often thought of as
the revolution in chemistry. Before that, respiration was generally thought
of as an air-cooling system which prevented the heart, the source of the vital
heat, from getting overheated; and combustion was believed to involve the
emission of phlogiston. Air had since Antiquity been regarded as an
element, in the sense of a constituent of all material bodies; and even in the
newer sense, of something which cannot be further analysed, it was in the
eighteenth century still thought of as an element. Samples of it might be
good or bad; and it might even be ‘fixed’ in compounds like limestone.
Scheele in Sweden isolated a sample of what we would call oxygen in
investigating the nature of fire; he heated various substances and collected
what came off. One of his samples proved very good for respiration; and he
believed that he had made air free from phlogiston. Priestley a little later
heated mercuric oxide and got a sample of gas, which he interpreted as
Scheele had; but his paper describing the experiment was published first.
He met Lavoisier in France and told him about his work; and Lavoisier
interpreted the experiment differently. Priestley probably saw his eminently
respirable air as a distinct species; but Lavoisier did so explicitly.
In Lavoisier’s chemistry, oxygen occupied a central place. It was the only
‘supporter of combustion’, and it was also (as its name implies) the
generator of acids. After Dalton’s theory of atoms was published in the
first decade of the nineteenth century, the relative weights of atoms were
often based on that of oxygen as standard (notably by Wollaston and
Berzelius) though Dalton had preferred hydrogen, as the lightest. But at just
this period Davy showed that caustic potash and caustic soda, the alkalies,
contained oxygen; and then that the acid from sea-salt does not. This strong
acid we call hydrochloric; and in chlorine Davy found another supporter of
combustion and generator of acids, so that oxygen had to share its throne.
Oxygen is still very important in chemistry; it is no longer the standard
for atomic weights, but the addition of oxygen, oxidation, and its removal,
reduction, have been generalised into two kinds of reaction. In the mid-
nineteenth century, an allotrope of oxygen, ozone, was discovered; and the
layer of this substance in the upper atmosphere is supposed to be very
important in absorbing otherwise-harmful radiation. A world without
oxygen would be a world without anything like us.

Paper prepared from rags made printed books economically possible; and
the word duly came to be applied to what was written on paper. In the
sciences, the paper has become the most important vehicle for making new
knowledge public. In the seventeenth century, the learned journal was
invented in France and in Britain; this contains short contributions, either a
review of recent work, or an announcement of a discovery or a new
theory. No longer did one have to wait until one had enough material for a
book; and the paper achieved a more certain circulation than a letter could
ever do.
The early papers were very like letters, though intended for general reading
and therefore somewhat impersonal, not taking for granted particularly
qualified readers. In the early journals, the papers were read (in full or
perhaps in abstract) at the society, association or academy to which they
were addressed; and were usually read by the Secretary rather than the
author. They were then referred to a committee which decided whether to
publish them, and would probably submit them to a referee: so that they
eventually appeared with a kind of stamp of approval, though the Royal
Society always added a disclaimer to the effect that publication did not
indicate official acceptance of all the views expressed.
Various benefactors bequeathed money for lectures; and these were
then often published as papers, having a rather more rhetorical character
than those which began as letters: though all scientific papers are designed
to persuade. The Croonian or Bakerian Lecturer at the Royal Society would
in the early nineteenth century be expansive, and the published versions of
Davy’s, for example, were noticed in the Edinburgh Review and other
interdisciplinary publications.
By the early nineteenth century there were also some private journals,
where space was at more of a premium; and gradually science was
beginning to split up into sciences, as specialised communities grew.
Formal scientific education in the second half of the nineteenth century
increased the trend towards specialisation; and as common assumptions and
background could be taken for granted, so editors required and authors
wrote papers which were much more terse. The literature of science
became much less literary.
This has gone on, so that now one would not read scientific papers for
fun. Moreover, the number of papers has so increased that nobody could
keep up with them all; the proliferation of journals has meant that published
material is not necessarily public knowledge, because the right people may
not have read it. Publication has also got slower than it could be in the last
century (Roentgen’s paper on X-rays being the record for speed), so that by
the time a paper appears it may be out of date. Scientists belong therefore
to invisible
colleges round which preprints are sent; or in which the information
circulates electronically, so that papers need no longer actually be on paper.

Paradigm means example, but in the writings of Thomas Kuhn it has come
to have a special significance. He noted the role of dogma in scientific
education, and came to believe that science was not just open-minded
application of organised common sense, but a system to be learned in order
to avoid error. A science is founded when somebody organises a large
number of facts with a theory or model; those working in this science fill
in the picture thus sketched, doing ‘normal science’ within the paradigm of
the founder. In time anomalies accumulate, and a revolution happens,
resulting in a new paradigm which cannot be fully translated into the
language of the old one. The revolutionaries are the great figures of the
history of science: Galileo, Newton, Lavoisier, Darwin, Marie Curie,
Einstein. They thought of a new way of looking at nature, rather than
simply discovering new phenomena. There is some ambiguity about just
what does constitute a paradigm, and the scheme fits actual episodes rather
loosely; but it is very suggestive.
Research comes to mean something a little like painting by numbers
rather than original discovery for 99 per cent of scientists; and there is
probably a good deal of truth in this caricature. The danger of Kuhn’s
perspective is that it encourages attention to a very few eminent
practitioners; its advantage is that it presents science as a fully human
activity with its social context. The scientific community accepts and passes
on the paradigm. In place of a method which inevitably leads to truth and
certainty, we get a picture of science as provisional creations of the
human mind in its quest for order. The problem seems to be that the
paradigm is less fixed and more suggestive than Kuhn’s analysis implies;
the quantum theory of the 1980s is extremely different from that of Planck,
though it has clearly developed from his insight. Such episodes as the
acceptance of Continental Drift in geology in the 1960s are illuminated
from a Kuhnian viewpoint.
I. B. Cohen (1985) Revolutions in Science, Harvard University Press, Cambridge, Mass.
T. S. Kuhn (1970) The Structure of Scientific Revolutions, 2nd ed., University of Chicago Press,
Chicago.

Parallax is the apparent motion of objects produced by the motion of the
observer. In astronomy, such changes among the fixed stars were sought in
confirmation or falsification of the Copernican view that the Earth
moves round the Sun: but could not be detected until the nineteenth century.
Copernicus said that this was because the stars were so far away that the
Earth’s motion in its orbit was insignificant; and time proved him right,
though this reply could be criticised as ad hoc by anybody with a
prescriptive philosophy of science.
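The geometry, in modern terms: a star at distance d shows a parallax
angle p, where

    \tan p = r/d,

with r the radius of the Earth’s orbit. For every star p is under one second
of arc, which is why it was not detected until Bessel measured it in 1838:
about 0.3 seconds of arc for the star 61 Cygni.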

A particle is a small lump of matter. The term has often been used by
those who did not want to commit themselves to such theory-laden terms
as atom or molecule; rather like corpuscle. Thus Descartes believed in
particles, though he was not strictly an atomist, because his particles filled
all space and were liable to wear. Those in the nineteenth century
developing the kinetic theory of gases similarly spoke of particles because
they were uncertain whether they were dealing with atoms as defined in
chemistry. Speculations at the same period about the ether which carried
the light waves also often involved particles, though some believed it to
be a continuum: invoking particles here meant subscribing to a particular
view, rather than avoiding one.
It is in this sense that the term ‘particle’ is often used in modern science.
In the nineteenth century, the period of ‘classical physics’, it seemed
possible to do crucial experiments to decide between two possible
explanations. Newton had suggested particles of light, in particular to
account for its going in straight lines: for light waves, invoked by his
contemporary Christiaan Huygens, would be expected to go round corners.
Until the opening years of the nineteenth century, Newton’s view was
generally held; but then Thomas Young made the discovery that when light
is passed through two nearby slits in a screen, the resulting beams interfere
as waves do when two pebbles are thrown into a pond.
This was at first seen as a curious anomaly, insufficient to upset an
established paradigm; but then Fresnel in France proposed a detailed theory
in which light was transverse vibrations, like the waves of the sea; this was
powerful enough to account for known phenomena, to make surprising
predictions like the existence of a bright spot in the centre of the shadow
of a sphere, and even to show how light generally goes in pretty straight
lines. Thus it seemed by about 1840 that light was not a stream of particles,
but a train of waves. This was taken for granted as something proved by
scientists, and as an example of progress from the time of Newton whose
view had been exploded.
Crucial experiments also seemed possible with other kinds of rays
discovered during the nineteenth century. Infra-red and ultra-violet radiation
showed such strong analogy with light that they must also be waves; while
Faraday’s laws of electrolysis led Helmholtz and Stoney to the view that
electricity must be particulate, a little lump of it attaching itself to each
atom of matter. The cathode rays investigated by Crookes and J. J.
Thomson were proved by 1897 to be composed of charged particles, which
could be deflected by electric and magnetic fields, cast sharp shadows, and
would turn a little windmill. They were by 1900 identified with the
electrons of Helmholtz and Stoney; Thomson had described them as
‘corpuscles’, and as fundamental constituents of all matter.
In the opening years of our century, all this was cast into doubt. Einstein
explained the photoelectric effect in terms of light particles, photons; using
quantum theory, and ascribing definite quantities of energy to photons
corresponding to different colours. Thus red ones have much less than
violet, and will not eject electrons from potassium no matter how bright the
red light is. To account for all the properties of light, it was now necessary
to subscribe to two inconsistent theories: Newton’s particles of light had
made a comeback.
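In modern notation the photon of frequency \nu carries energy

    E = h\nu,

with h Planck’s constant; an electron escapes the metal only if this exceeds
the work \phi needed to remove it, the fastest ejected electrons having
energy h\nu - \phi. For potassium \phi is roughly 2 electron-volts; a red
photon carries a little less than that, a violet one about 3, which is why no
brightness of red light will do.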
Then Louis de Broglie proposed that matter also might have its wave
character; and indeed it was found that electrons could be diffracted just
like light. This makes possible electron microscopy, just as Einstein’s work
underlies our light-meters. Wave-particle dualism has become an
inescapable part of twentieth-century science; particle is still used as a
general term when one does not want to specify something like atom or
electron, but the hope that crucial experiments can show that something is
definitely and always a particle has had to be given up: we live in a world
of both/and rather than either/or.

A pendulum, contemptuously called a swing-swang in the seventeenth
century, is if ‘simple’ a heavy bob at the end of a string, and if ‘compound’
a rigid arm moving on a pivot. Galileo is supposed to have noticed
chandeliers swaying in a cathedral, and that they kept in step whatever the
amplitude of their swings. In the 1660s Hooke began and Huygens
completed the application of pendulums to the accurate measurement of
time. A pendulum can also be used to measure gravity: the length of a
pendulum which beats once each second varies slightly in different places
because the Earth is not a perfect sphere; and geophysical voyages of the
nineteenth century spent some time in remote places swinging pendulums
to find the curvature of the Earth. The length of a ‘seconds pendulum’ in a
particular place has been proposed as a natural unit.
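The governing relation, in modern notation, is

    T = 2\pi \sqrt{l/g},

where T is the period of a small swing, l the length and g the local
acceleration of gravity. A seconds pendulum (half-period one second, so
T = 2 s) thus has l = g/\pi^2, about 0.994 metres where g is 9.81 m/s²;
and measuring the length that beats seconds gives g at that place.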

The Periodic table is the system of classification of the elements; it is
thus a central part of chemistry. Lavoisier had made the elements the
starting-point of chemistry in 1790, and Dalton in 1806 had suggested that
each element had its own irreducibly-different kind of atom with a
characteristic weight. The problem was to determine them. By experiment
it was shown that one unit of hydrogen and eight of oxygen compose water;
if its formula is HO, then the atom of oxygen weighs 8 times that of
hydrogen, but if it is H2O then the figure is 16. There was doubt right
through the science, and even different textbooks gave different formulae
for simple compounds.
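The arithmetic of the difficulty can be set out in a line: analysis gives 1 part
of hydrogen by weight to 8 of oxygen in water, so

    if HO, O = 8 (taking H = 1);   if H_2O, O = 2 \times 8 = 16;

the experiment alone cannot decide between them, which is why formulae
and atomic weights had to be settled together.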
In 1860 an international conference was called at Karlsruhe to try to
settle on agreed atomic weights; it was inconclusive, but the ideas of S.
Cannizzaro were afterwards accepted: he suggested a way, based on the
hypothesis of his countryman Avogadro, of going from volumes of gases
in a reaction to formulae. This brought agreement on atomic weights; and
meant that these numbers could now be involved in the classification of the
elements. An order based on numbers has a definiteness lacking in one
based only on chemical properties.
Various chemists attempted such a system, with some success; but the
one who staked his reputation on it was D. I. Mendeleev of St Petersburg.
He saw that when the elements are put in order of increasing atomic weight,
then similar ones recur at intervals or ‘periods’. At first his table had the
elements in vertical columns of increasing weight, so that the ‘families’
(like lithium, sodium, potassium; or fluorine, chlorine, bromine) were in
horizontal rows; later he turned it the other way into the form with which
we are familiar. What was most striking was that he noticed that sometimes
there were gaps; the next element did not fit into the next empty square, but
in with the family beyond. He could predict the properties of the
undiscovered elements in these gaps with what turned out to be surprising
accuracy: Scandium and Gallium were just what he had expected, and these
predictions gave a great boost to his theory.
Mendeleev had come to his Table as a teaching aid; he was writing what
became a classic textbook, and he sought to make sense of inorganic
chemistry. With the Table, the student did not simply need to memorise a
great number of facts, but could work out the nature of an element from
those of its neighbours, vertically, horizontally and diagonally. Mendeleev
found that Iodine and Tellurium came out of order, and boldly made
chemical properties take precedence over atomic weights, believing them to
be in error. This inversion was later explained in terms of isotopes; and the
Periodic Table itself found an explanation in the Bohr-Rutherford model
of the atom, in which the nuclei of the elements are surrounded by
characteristic numbers of electrons in orbits determined by quantum
theory.
D. M. Knight (1970) Classical Scientific Papers, Chemistry: 2nd Series, Mills & Boon, London,
reprints the early papers in facsimile.
J. W. van Spronsen (1969) The Periodic System of Chemical Elements, North Holland, Amsterdam.

A phase in chemistry is a physically distinct part of a system in
equilibrium: a different state of matter, or a separate liquid layer. In 1876
Josiah Willard Gibbs propounded the ‘phase rule’, a law connecting the
degrees of freedom (changes in temperature, pressure and so on), the
number of components and the number of phases present, which allows
prediction of the effects of changing conditions on equilibria.
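In its compact modern form the rule reads

    F = C - P + 2,

with F the degrees of freedom, C the number of components and P the
number of phases. For water alone (C = 1) with ice, liquid and vapour
together (P = 3), F = 0: the three can coexist only at a single ‘triple point’
of temperature and pressure.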
In astronomy, the Moon displays phases, waxing and waning from a
crescent to full and back again; and Galileo found with his telescope that
Venus also shows phases, proving that she goes round the Sun rather than
the Earth. Finally, waves are in or out of phase according as they reinforce
or cancel each other.

A phenomenon is what we observe. It is presumably the result of
something interacting with us, but philosophers following Kant urge that we
can know nothing for certain about things in themselves, and must be
content with phenomena. The problem is that our observations are theory-
laden; we see what we are looking for, and so the phenomena are not the
same for everybody. Most scientists again are not satisfied with laws that
link phenomena, but look for explanation.

Philosophy, or more strictly Natural Philosophy, was the old name for
science; and is still preserved in the titles of some university departments
and chairs, where it now tends to mean physics. Darwin on board HMS
Beagle in the 1830s was ‘Philos’ to his shipmates; the Royal Society’s
journal is called Philosophical Transactions, and the oldest surviving
private journal is the Philosophical Magazine; while the ‘literary and
philosophical societies’ of Manchester, Newcastle and other places were
devoted to science. Descriptive sciences were called ‘Natural History’, with
its three kingdoms, Animal, Vegetable and Mineral; but might well be
included under the general heading of Philosophy.
The label was not, however, accidental. The earliest philosophers of
Greece had been chiefly interested in speculative science; and in the
seventeenth century the great philosophers were concerned to establish a
scientific method which would lead to certainty. Descartes’ Discourse on
Method of 1637 was an introduction to three treatises, on geometry, optics
and meteorology, which were supposed to show the method in action.
Those whose chief interests were in the laboratory were expected
nevertheless to form a world view, and to be prepared to justify it in public.
This might, especially in Britain, involve argument about the existence and
wisdom of God. But by the 1820s there was a widespread feeling that
science should be based upon induction from facts and should exclude
speculation.
In 1833 the British Association for the Advancement of Science met in
Cambridge, and the poet S. T. Coleridge denounced the use of the terms
‘philosophy’ and ‘philosopher’ in the context of science, urging his hearers
to adopt different words. He saw philosophy in the new tradition of Kant as
including metaphysics, logic and ethics; a discipline in which the data and
the methods of the empirical sciences might be very important, but which
was distinct from them. William Whewell at the meeting suggested the
word scientist to describe all those present; and in 1840 he published his
Philosophy of the Inductive Sciences, which was the first book in English to
be explicitly devoted to what is now called the Philosophy of Science.
Whewell believed that science was a matter of imposing right ideas upon
accurate observations; he emphasised theory and deduction far more
than his contemporary, J. S. Mill; but both hoped for science which was
getting steadily nearer truth. In our century, philosophers of science have
been less confident. Whether descriptive or prescriptive (laying down rules
of method), they have emphasised the provisional character of science:
Popper’s method of falsification, Kuhn’s paradigms and Lakatos’
research programmes all involve probability rather than indubitable
knowledge. Their connection with day-to-day science is perhaps not very
close; there was a loss on both sides when science separated from
philosophy.

Phlogiston, from a Greek word meaning ‘flammable’, was the agent,
probably matter, responsible for combustion. Things which burnt well
contained much of it, and it was given off with the flames; something like
charcoal which left little ash was almost all phlogiston. Metals which will
burn, or iron which rusts, were thought to be compounds of their ‘calx’ or
ashes with phlogiston. Lavoisier’s experiments in the 1780s in which he
demonstrated that with mercury the calx weighs more than the metal, and
that phlogiston must (if it existed) have negative weight, weakened belief;
and set chemistry off on a new course in which reaction with oxygen
replaced emission of phlogiston as central in combustion. We no longer
believe that a chemical property is carried by a material component; but
there were ways in which phlogiston was understood as something like
chemical potential energy. Phlogiston is sometimes seen as a classic error,
but it gave shape to the science of the day and as Bacon said, error is better
than confusion; we should not be hasty in condemning our fathers.

The name physics comes from the Greek ‘phusis’ meaning nature; and the
science of physics as we know it is really a product of the nineteenth
century. In the 1850s the idea of conservation of energy brought together
into a coherent unity a number of sciences which had previously been
separate.
The most important of these was natural philosophy. This in the
seventeenth century had meant roughly what we mean by ‘science’, and by
the eighteenth century was often abbreviated to ‘philosophy’. But in learned
use it meant those branches of science to which mathematics had been
applied, in the manner of Galileo and Newton: astronomy, mechanics and
optics. In Scottish universities, the term ‘natural philosophy’ is still
preserved in the title of what are elsewhere physics departments.
In France in the late eighteenth century, the term ‘experimental physics’
came into use as a phrase to cover work in fields such as heat,
electricity and magnetism where there was not yet a fully mathematical
structure of theory. In the Paris Academy of Sciences posts were created for
experimental physicists; and in England Thomas Young used the term
‘physics’ for various experimental sciences in a famous series of lectures in
London at the beginning of the nineteenth century. Some of his ‘physics’ is
a part of ours, and some is not. In Britain, electricity was generally seen at
this time as a branch of chemistry (Faraday’s contemporaries saw him as a
chemist), while magnetism was associated with navigation or with geology.
France at the beginning of the nineteenth century was the leading
scientific nation; and was particularly strong in applied mathematics. In
1812 S. D. Poisson was elected to a physics chair at the First Class of the
Institut, as the Academy of Sciences was then described: he was a
mathematician, who had done important theoretical work in electricity but
was no experimentalist. His election has been taken to mark the realisation
that physics involves both experiment and mathematical theory, and thus
the transition to our modern understanding of the term.
Nevertheless ‘physics’ continued to mean generally those sciences
concerned with what some still saw as ‘imponderable fluids’ responsible
for heat, electricity and magnetism. The more modern view, gaining ground
as the century wore on, was that these were all manifestations of some one
underlying force; and that the workings of nature were to be understood in
terms of opposed or polar forces, like the north and south poles of magnets.
Oersted’s detection in 1820 of the magnetic field produced by an electric
current encouraged such beliefs; but in the 1820s and 1830s it did
not seem possible to quantify such things as electricity or heat so as to
know just how much of one produced a certain quantity of the other.
James Joule in a series of experiments in the 1840s warmed some water
in an insulated container by churning it with a clockwork system driven by
a falling weight. He could measure the mechanical work done by the weight
as it fell through a given distance, and also the heat generated in the water;
and thus determine the ‘mechanical equivalent of heat’. He and others saw
that not merely was a definite amount of work converted into a definite
quantity of heat, but that this was a manifestation of a more general process.
This was most clearly perceived by Helmholtz, one of the great polymaths
of the century, who stated the principle in a paper published in Berlin in
1847 and then in more popular manner in a public lecture at Königsberg. He
saw that if heat, light, mechanical work, electricity and so on were all
interconvertible, then they should be expressible in the same dimensions of
mass, length and time.
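In modern notation, which neither Joule nor Helmholtz used but which makes the point plain, the work done by the falling weight and the heat taken up by the water are

\[ W = mgh, \qquad Q = m_w c\,\Delta T, \]

both having the dimensions \(ML^2T^{-2}\); Joule’s figure corresponds to about 4.2 joules of work for each calorie of heat.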
In place of ‘force’ the term ‘energy’, precisely defined, came into use;
and the science of physics became the science of energy and its
transformations. It thus became the fundamental science; whereas at the
beginning of the century chemistry had seemed the science of forces, and
natural philosophy concerned with the less interesting sphere of billiard-
balls. The task of the physicist was to carry on the work of Joule and
determine the exchange rates between the various forms of energy; and to
some it seemed as though this would complete the science of physics.
This was not how things worked out. The study of the spectrum of gases
led to the idea of the atom being complex, which was confirmed in 1897
when J. J. Thomson identified the cathode rays as a stream of sub-atomic
particles, carrying a negative charge. At the same time Max Planck
explained the radiation from black bodies as coming in packets, or quanta,
of definite size rather than continuously. In 1905 Einstein indicated that the
photoelectric effect, a conversion of light to electricity, could only be
understood in terms of quanta of light interacting with a metal; this seemed
like a reversion to the old theory of light particles forming an imponderable
fluid. He also argued that space and time must be seen as relative rather
than absolute as in Newtonian mechanics.
The so-called ‘classical physics’ came to an end therefore just about the
turn of our century, but not as the completing of an enterprise. Physicists
have been concerned with sub-atomic particles, with wave-particle dualism,
and with cosmological questions ever since; and in the meantime their
science has become much more an affair of teams and groups. Joule worked
on his own, and did not make his living from science; Helmholtz was a
professor in Germany, with excellent university laboratories but still doing a
relatively cheap science, and much the same can be said of Rutherford at
Cambridge in the 1920s and 1930s; whereas since then fundamental
research in physics has been very expensive, and its links with high
technology very close.
P. M. Harman (1978) Energy, Force and Matter, Cambridge University Press, Cambridge.
R. W. Home (1983) ‘Poisson’s memoirs on electricity’, BJHS, 16, 239–59.
R. McCormmach (1982) Night Thoughts of a Classical Physicist, MIT Press, Cambridge, Mass.

A planet in Greek is a wanderer. In astronomy the planets were distinguished
from the ‘fixed stars’ which remain in the same arrangement. Their motions
along the ecliptic through the constellations called ‘Signs of the Zodiac’ are
important for the calendar and in astrology. In Antiquity there were seven
planets: the Moon, Mercury, Venus, the Sun, Mars, Jupiter and Saturn, all
supposed to be in orbit around the Earth. With the new system of
Copernicus, Kepler and Galileo, the Sun became the centre and the Earth a
planet; in the eighteenth century W. Herschel discovered Uranus, and
Neptune and Pluto have been subsequently added to the list.

Pollution is something for which scientists and technologists tend to
get blamed, like weapons; and for which they blame politicians, while
taking credit for remedies and labour-saving devices. Pollution is a fact of
life, and one species’ pollution is another’s food; but there is no doubt that
while all human activities produce some pollution, modern industry can do
it on a grand scale. Nuclear waste is a new kind of problem, more alarming
in some ways than chemical pollution: but all life involves risks of one kind
or another, and we should not delude ourselves into thinking that complete
safety or cleanliness is possible, or indeed desirable. In the long run, we’ll
all be dead whatever happens.
We can get some encouragement from reading what Victorian cities were
like. The Thames became so polluted with sewage, draining out into the
river at low tide and sloshing back with the incoming high tide, that its
water was found by Faraday to be opaque, and sittings of Parliament had to
be suspended. Leather working, involving rubbing skins with the faeces of
dogs, was a filthy trade; and early chemical industry poured forth
hydrochloric acid into the atmosphere and all sorts of nasty things into
rivers. Over cities there hung a heavy pall of soot, keeping out sunlight so
that rickets was common, and making industrial areas into a black country.
Our rivers and our air are much cleaner than they were; and the progress
has been brought about chiefly by chemists, but also by other scientists.
In this sphere as in others, the sciences are neither villain nor hero; the
polluting processes may well be a consequence of some scientific advance,
but the solution will also be found in the sciences. What we should never
expect, because the world is not like that, is to solve our problems once for
all; each solution brings its own problems, and ingenuity will be needed in
science and technology into the indefinite future.

Popular science is something which scientists regard with deep suspicion.
It should not appear on a ‘research profile’ or in a submission to a
promotions committee; and time spent in popularisation is seen as lost to
serious business. And yet the great scientists of the past, like Faraday and T.
H. Huxley, took it seriously and owed much of their contemporary
reputation to their popular lectures. Indeed only with the specialisation
which was such a feature of the nineteenth century, and with the rise of
formal scientific education, did popularising cease to be respectable. Even
now, in popular writing or lecturing (including radio and television) the
eminent can disclose their philosophy, and reveal their method, as they
cannot do in formal publications.
Ever since the seventeenth century the genre has included sermons
(especially in Britain) and lectures, which might be accompanied by
demonstrations. It also included works of literature: sometimes poems,
like Erasmus Darwin’s evolutionary Temple of Nature; sometimes
dialogues, like Jane Marcet’s Conversations on Chemistry; sometimes
novels, like Mary Shelley’s Frankenstein; and, usually, expository prose.
From the eighteenth century on, encyclopedias were an important vehicle of
popular science; Diderot and d’Alembert’s great Encyclopédie is
remembered chiefly for its anti-religious and anti-governmental tone, but its
articles on science and technology, with superb illustrations, were
very important. In nineteenth-century Britain, encyclopedias (the
Metropolitana, with book-length contributions; the Penny, with de
Morgan’s mathematical articles; the Britannica with its various editions;
and others) were a valuable source of popular science, and an indication of
how stiff a read our ancestors were prepared for in their self-education.
Gradually popularisers became separated from scientists writing for a
general audience; and the image of the populariser became that of someone
writing about what he or she does not really understand. Putting original
scientific ideas into verse, as Erasmus Darwin found, ensured that they
were not taken seriously; and his grandson’s ideas on evolution were duly
presented in prose, though accessible to the general reader. By the mid-
nineteenth century, the sciences had become more distinct and separate,
and there was a need for interpreters between, for example, physicists and
biologists: Mary Somerville was one of the ablest who performed this task
of high popularisation. Works of synthesis like hers need no apology, and
are not unworthy of the greatest intellects: indeed the sciences depend as
much on great generalisations as upon discoveries. Helmholtz presented
the idea of conservation of energy in popular lectures bringing together
previously distinct disciplines.
We need such synthetic works, which should not be confused with
textbooks or with vehicles for crude scientism; and we should not despise
anything which can arouse the enthusiasm of the young, as Jane Marcet’s
book aroused that of Faraday.
D. M. Knight (1986) ‘Accomplishment or dogma’, Ambix, 33, 94–8.
N. Reingold (1984) ‘MGM meets the atom bomb’, Wilson Quarterly, 8, 155–63.
S. Sheets-Pyenson (1985) ‘Popular science periodicals, 1820–1875’, Annals of Science, 42, 549–72.
M. Shortland (1987) ‘Screen memories: Psychiatry in the movies’, BJHS, 20, 421–52.

Power is capacity for action, for doing work: and it has come to be used in
the sciences in an increasingly quantified and definite sense. In the
seventeenth century it was what lay behind the rather loosely defined term,
force: matter was seen by Locke and Newton as having certain powers
associated with it, or given it, which would appear as forces of attraction or
repulsion. Chemistry came to be seen as the study of the powers which
modify matter; though at the end of the eighteenth century Lavoisier argued
that the chemist should mainly concern himself with weights.
The word is also used in mathematics, where three to the power two is
nine; but in the early nineteenth century, it came increasingly to be
associated with technology. James Watt introduced the idea of horsepower,
a rate of working of 550 foot-pounds per second, as a measure of what a
steam-engine or other prime mover could do. In the eighteenth century
about twelve horsepower could be expected from a windmill, water wheel,
or primitive steam-engine: but in the nineteenth century this figure soon
looked very small as steam-engines were made very large and much more
efficient; a process culminating in the turbines of Parsons in the later part of
the century. Faraday in his work on electricity and magnetism made
primitive motors, but not until after his death in 1867 did electric power
gradually become widely available; and in the last years of that
century, the internal-combustion engine provided a further source of power.
Between them, these two have displaced the steam engine, which now
chiefly appears as a turbine generating electricity. Modern civilisation
depends upon sources of power which are intimately connected with
science; which has also brought power to governments. Bacon was right
that knowledge is power, in more than one sense.
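For orientation, Watt’s horsepower can be converted into modern units (a conversion naturally not available to him):

\[ 1\ \text{horsepower} = 550\ \text{ft·lbf}\,\text{s}^{-1} \approx 550 \times 1.356\ \text{J}\,\text{s}^{-1} \approx 746\ \text{W}. \]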

Prediction is for some philosophers the main aim of science; which might
seem to make weather-forecasting the paradigm among the sciences. The
point is that from past instances, given the support of theory, one can
extrapolate into the future. In saying that hydrated copper sulphate
crystals are blue, the scientist makes a timeless remark – that they
always have been and always will be – and hence predicts that any that
you happen to meet will be blue. Predictions that bridges will not fall down
or that mixtures of gases will not explode or that there is no risk in some
minor surgery all arouse attention and excitement when they turn out
wrong: these are cases with more factors than the copper sulphate example,
and momentous predictions like these are clearly more interesting. They
also show that there is art as well as science in the affairs of life;
probability is the guide to life, and judgement is essential.
In the sciences, successful prediction is the most spectacular test of
theory. The famous cases, like the prediction of Neptune from wobbles in
the orbit of Uranus made independently by Adams and Le Verrier, of
Gallium and Germanium by Mendeleev, and of the positron by Dirac, were
the results of calculation from gravitation theory, the Periodic Table and
quantum theory; and they are taken to mean that the theory is close to truth.
In fact, erroneous or inadequate theories have been used to make successful
predictions. Sadi Carnot in his work on the second law of thermodynamics
predicted correctly that high-pressure steam-engines would have higher
efficiency; but his theory was that heat was a fluid, not converted to
work. Fresnel’s prediction that there would be a white spot in the centre of
the shadow of a sphere was based on his wave theory of light, and was
correct; but since Einstein’s work of 1905 the wave theory is recognised as
covering only some of the phenomena of light. Predictions are mundane or
spectacular, but they cannot confer certainty.

Pressure is the force exerted by a gas on the walls of its container. It is
also exerted by the atmosphere. Galileo noted that pumps will not suck
water up more than some thirty feet; he attributed this to the column
breaking under its own weight if longer than that, but his disciple Torricelli
saw that the atmosphere was pressing down and keeping the column up. He
found that using mercury, thirteen times as dense as water, the height was
about thirty inches. Pascal sent his brother-in-law up a mountain, the Puy de
Dôme, with a mercury barometer to verify that the pressure dropped as one
got higher; and with the invention of the air-pump Boyle established the law
that the volume of a quantity of gas diminishes as the pressure increases.
Barometers were used for weather reports, but their value in forecasting
was not really appreciated until Captain FitzRoy used them on HMS
Beagle. In the mid-nineteenth century, pressure was interpreted in the
kinetic theory of gases as the result of the impact of particles.
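In modern symbols (not Torricelli’s or Boyle’s), the barometric argument is that the atmosphere supports a column whose weight matches its pressure, \(P = \rho g h\); since mercury is about 13.6 times as dense as water,

\[ \frac{h_{\text{water}}}{h_{\text{Hg}}} = \frac{\rho_{\text{Hg}}}{\rho_{\text{water}}} \approx 13.6, \]

so some 0.76 m (thirty inches) of mercury balances about 10 m (over thirty feet) of water; and Boyle’s law states that \(pV\) is constant for a fixed quantity of gas at a fixed temperature.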

Principles in science are propositions which cannot be directly proved; but
which if assumed as axioms ensure that a great many observations fall
into place. Some are very general, like ‘Every effect must have a cause’; but
others have definite consequences, like the principle that mass-energy is
conserved. This could not be directly proved; and indeed when it suffered
apparent falsification, a new particle, the neutrino, was invented to save
the principle. Later, some direct evidence for the neutrino was forthcoming;
and indeed principles can become more and more probable, though their
truth cannot be established beyond all doubt. Ptolemaic astronomers added
epicycles to save appearances and keep the principle that the Sun goes
round us; and no doubt we sometimes do the same kind of thing.
Principles are thus related to conventions, in science as in life; where our
principles are also unprovable, and it is often a good idea before insisting
upon them to try replacing ‘principle’ by ‘prejudice’. But in the sciences,
they work more formally, bringing deduction and demonstration into
areas otherwise dominated by brute fact without order: we have to
remember that no experiment ever entails a theory, and that science
depends on a suitable choice of basic ideas or principles.
Principles, in early modern chemistry, meant something like elements.

Priority, getting there first, is very important to any scientist wanting to
make a career and achieve eminence. There are no second prizes, no silver
and bronze medals, in science; it is like the original Olympic Games, where
the winner took all. There have always been some scientists who have not
needed to bother to acquire a reputation; such as Henry Cavendish in the
late eighteenth century, who as a wealthy aristocrat was already eminent,
and having solved various problems in electricity to his own satisfaction
never got around to publishing them. But there have never been very many
such people; and the exponential expansion of the scientific community has
brought in countless numbers with a reputation to make.
It is a feature of the sciences that discoveries are often made
simultaneously; that is, that scientists, perhaps in different countries, come
up with something at the same time. It does happen in other fields too;
competing biographies or histories of some episode, for example, are the
bane of publishers. But in science it is more common and more acute.
Some cases are famous: John Couch Adams in England and Urbain Le
Verrier in France both predicted a new planet beyond Uranus and causing
wobbles in its orbit, working quite independently. Here both were working
in the same paradigm, Adams being one of the first generation at
Cambridge who had learned the new French mathematics introduced there
from about 1820. Sometimes the discoverers do not share a common world-
view, and it is then more difficult to say whether they discovered the same
thing and who got there first; this is the case with the discovery of oxygen,
with the theory of evolution, and with the principle of conservation of
energy.
Some discoveries are indeed published, but no notice is taken of them;
perhaps, as with Sadi Carnot’s little book announcing the second law of
thermodynamics, because parts of the theory seemed obsolete. His priority
was only recognised posthumously, which is not very much help in one’s
career. By the middle of the nineteenth century, there were too many
journals for everybody to read all of them; and to safeguard priority, and
to stake out claims to intellectual territory, rapid publication and wide
circulation of offprints were essential. It is with publication that
knowledge becomes science, and priority is not given to those who, for
whatever reason, had not got their work into print; so perfectionism can be
a handicap, and some publication of error is inevitable. For the historian,
priority is less interesting than it is for the scientist involved; history does
not consist in giving medals, but in trying to recreate the past.

A prize is offered regularly or occasionally by an academy or by other
institutions for scientific work. Sometimes a topic is set; the Paris
Academy did this in the eighteenth and nineteenth centuries to stimulate
work in a particular direction. Fresnel’s paper on the wave theory of light
was a notable winner of such a competition, but often the prize went to
something less original. Other prizes or medals, which may be associated
with a lecture by the winner, go to those who have done research in some
fairly wide field; and mark recognition by peers.

Probability, wrote Bishop Butler in a passage beloved of Mr Gladstone, is
the very guide to life. He sought for the analogy of religion; but in his day
probability, which seems a vague notion, was being brought within the
boundaries of mathematics. At first it was deductive probability, in the
study of games of chance: a coin can only fall ‘heads’ or ‘tails’, and we can
therefore deduce that there is a 50 per cent chance of either outcome. On
this basis we toss for who takes which end of the ground, has first innings,
and so on. For dice, it is a similar story: they have six faces, and therefore
there is one chance in six that they will fall with any particular face
upwards. If there is any very great divergence from these predictions, we
might suspect that the coin or the dice have been weighted to behave
unfairly.
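The arithmetic in such deductive cases is elementary: each face of a fair die has probability 1/6, and for independent throws the probabilities multiply,

\[ P(\text{six}) = \frac{1}{6}, \qquad P(\text{two sixes in a row}) = \frac{1}{6} \times \frac{1}{6} = \frac{1}{36}. \]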
We may assume, as Arbuthnot did in the early eighteenth century, that
other affairs of life follow such simple rules. He observed that slightly more
boy babies are born than girls; and assumed that it ought to be like coin-
tossing, and that God must therefore be interfering in the process, perhaps to
produce more males to fight wars. Deeper understanding of the processes of
reproduction perhaps makes us see this as naive; but we do use probability
to determine whether there is a problem worth further investigation.
Here it is often inductive probability. We cannot say without
observation how long the average life-span is, in order to investigate
whether that of cigarette-smokers is significantly shorter. In the early
nineteenth century Laplace tried to work out how big a jury ought to be, and
what majority was needed, so that most of the guilty would be convicted
and most of the innocent set free: but though his conclusions were sensible,
the assumptions he began with, about how often people told the truth, came
straight off the top of his head.
In the nineteenth century, the science of statistics came into being
with the growth of social data, and the demand by governments to know
more about their societies. It seemed that the problem of induction might be
solved if inductive inferences were given some kind of definite probability;
but here the kind of qualitative probability invoked by Butler seems to be
all there is. The theory of relativity is more probable than it was when
Einstein first proposed it; but we cannot give any figures to make this
statement more rigorous. The term probability covers a number of related
but different uses.
I. Hacking (1975) The Emergence of Probability, Cambridge University Press, Cambridge.
J. C. Kassler (1986) ‘The emergence of probability reconsidered’, Archives Internationales d’Histoire
des Sciences, 36, 17–44.

A profession is a conspiracy against the public, ensuring that we must
purchase essential services from a closed shop; or it is a group of selfless
and highly-trained people, governed by a higher moral code than ordinary
folk, who exclude quacks and charlatans in order to cherish and protect us
all. The oldest profession has always lacked that strict control over entry
and standards which characterises younger professions; and indeed
professional organisation is something undertaken by those who see their
talents being prostituted.
The traditional learned professions were the Church, the Law and
Medicine; in the medieval universities, which existed to train members of
these professions, the higher degrees were awarded in these fields. In
principle the unqualified were disqualified from practice; but in fact at the
lower levels this was impractical, and poorly trained curates, attorneys and
apothecaries looked after the needs of those who did not belong to the
higher social classes. By the eighteenth century, there seems to have been in
Britain something like a free market in medicine; as there was in religion,
with religious toleration meaning that dissenting ministers were competing
with the established clergy.
Those working in the sciences were often members of one of these
professions. Physicians, or the lower grade of surgeons who learned their
trade by apprenticeship, provided most zoologists as well as anatomists and
physiologists; the clergy with their hold on education were extremely
important, in mathematics and geology as well as in the study of butterflies
and plants which we are apt to associate with eccentric vicars; and lawyers
from Francis Bacon to William Grove, a pioneer in conservation of
energy, have also played their part. There were very few paid scientific
posts, especially in Britain and the USA where there was no academy; and
to work in science it was essential to earn a living elsewhere. There was
nothing ‘amateurish’ about the work on oxygen of Lavoisier the tax-man or
Priestley the Unitarian minister; but they could only devote some of their
time to their chemistry.
Scientific societies were usually open, and the amount of interest
members took was extremely variable; it was not until Davy was president
in the 1820s that the Royal Society’s governing Council had a majority of
active men with published papers. Earlier than this in France, men of
science had begun to think of themselves as a group distinct from others;
and within Napoleonic France, patronage began to be exercised by eminent
men of science rather than by grandees. Thus Berthollet was able to
promote his favourite pupil Gay-Lussac to academic posts and to a place in
the Academy. A scientific career was becoming a possibility; though in the
event, because he married for love and had a large family, Gay-Lussac had
to turn to consultancy work in technology to augment his salary. French
academics in the nineteenth century resorted to ‘le cumul’, holding more
than one post to the great damage of science there.
When in Britain in 1831 the British Association for the Advancement of
Science was set up, the Royal Society and then the specialised societies
became gradually more closed; only those with qualifications were
admitted. The Chemical Society of London, the first national chemical society in the
world, was a learned body; but many of its members were employed as
analysts, and in 1877 formed a professional group, a kind of white-collar
(or coat) craft union, the Institute of Chemistry. A hundred years later the
two bodies amalgamated; but such a union is uneasy. The professional
chemist here worked in industry or for the government, and sought to keep
certain jobs for chartered chemists; this is a narrow sense of ‘professional’,
as it was used in the nineteenth century in opposition to ‘liberal’; Jane
Marcet’s famous Conversations on Chemistry (1807) excluded such
professional matter, and Faraday in the 1830s gave up professional work to
concentrate on understanding the nature of electricity and magnetism.
When people talk about science as a profession, it is usually work like
this later research of Faraday’s that they have in mind. By the mid-
nineteenth century, it was just becoming possible to live by science, though
Faraday was never well-off. The word scientist was coined in 1833 to
describe those whose working life was spent in science: it implied
narrowness to some (like Faraday, who preferred to be called a
philosopher), but it marked the emergence of science as a liberal profession
of a kind, with self-regulation through control of publications by editors
and referees, and with expanding academic and industrial opportunities.
Whether scientists in universities, for example, would think of their
profession as ‘scientist’ or as ‘lecturer’ is doubtful; it has not become a
profession in quite the sense in which the three old ones are, but it is now a
way of life.
M. P. Crosland (1978) Gay-Lussac, Cambridge University Press, Cambridge.
S. G. Kohlstedt (1976) The Formation of the American Scientific Community, University of Illinois
Press, Urbana.
C. A. Russell, N. G. Coley and G. K. Roberts (1977) Chemists by Profession, Open University Press,
Milton Keynes.

Proof was what the ancient Greeks demanded in geometry. Getting the
right answer was not enough; demonstration was required that it could not
have been otherwise. Ancient geometers might, like Archimedes in his
Method, arrive at theorems by some more intuitive or practical route; but
a result only became a part of mathematics when formally proved. This meant
that it could be deduced from the axioms: in this ideal deductive system the
conclusions are in a sense tautologies, implicit in the assumptions made at
the start.
Until the nineteenth century produced a range of geometries,
Euclidean and non-Euclidean, geometry had seemed necessarily true. But
now the proofs only hold within their particular system. In ordinary life,
proof means convincing a jury beyond reasonable doubt; and in empirical
science it is not so different. Outside mathematics and logic, there is no
formal proof; but we have enough confidence in the laws of nature to think
it captious to doubt that unsupported bodies near the Earth’s surface will
fall. Science may not be a matter of complete certainty, and truth may be
unknowable; but even without geometrical proof, we can be sure enough.
But it is good for us to be reminded of the provisional character of all our
knowledge.
Proof is also, in its older sense of test (the proof of the pudding is in the
eating) used in printing; where proofs of a paper are sent to the author for
correction.

Property is used in chemistry in the old sense of a quality essential to
something. Thus the properties of gold are that it is yellow, dense, malleable
and so on. Sometimes one particular property may become diagnostic, as
with acids and litmus paper; or may become almost a definition, like an
atomic number in twentieth-century Periodic Tables of the elements. But
generally in the sciences, as in any attempt to get a natural system of
classification in dealing with natural kinds, we rely on a cluster of
properties rather than a single one in characterising things.

A publisher is one who brings out books and journals; and in the
eighteenth century this began to be separated from printing and bookselling.
Some academies and associations are their own publishers; while others
have close formal links with publishing-houses which may be state-run or
commercial, and which bring out recommended works. Some scientific
societies simply exist to publish a journal or a series of monographs, like
the Ray Society in natural history. But much of the literature of science
has been published by commercial publishers: for example in Britain,
Taylor & Francis have published the Philosophical Magazine since 1798;
Smith & Elder published Voyages; Macmillan began Nature over a hundred
years ago and published textbooks associated with it; while Van Voorst in
the nineteenth century published many important illustrated works on
natural history.
W. H. Brock and A. J. Meadows (1984) The Lamp of Learning, Taylor & Francis, London.
E. Eisenstein (1979) The Printing Press as an Agent of Change, Cambridge University Press,
Cambridge.
J. Glynn (1987) The Prince of Publishers, Allison & Busby, London.

Purity is an idea important in the human sciences; where its connections
with sex and with diet are familiar: eating fish on Fridays, abstaining from
pork, and distaste for promiscuity help to define groups, and we all live in
societies and groups.
In physical science, the idea is most important in chemistry. Chemical
analysis, and especially at first distillation, extracted from various
substances their active components; and down to the nineteenth century,
this was the most important part of chemistry. The matter responsible for
the activity of a drug, a dye, a substance used in tanning or in metallurgy
was isolated; and then perhaps it might be looked for in other natural
products, or even prepared by synthesis.
There was some doubt about the basic underlying material: in the
classical tradition there had been four elements, Earth, Air, Fire and Water,
which in various proportions made up everything; in Alchemy, there were
three principles, Sulphur, Salt and Mercury. All these were ‘ideal’, in that
ordinary earth or ordinary salt were not the elements, though the element
predominated in them. Pure elements could not indeed be made. Boyle
believed that the elements and principles were imaginary, and that matter
was made up of corpuscles, all ultimately the same.
Goldsmiths nevertheless refined gold and silver, and had a clear idea of
what pure metals would be; and gradually pharmacists worked their way
towards an idea of chemical purity in their products: though it seems to
have been only in the early nineteenth century that Dalton’s friends, the
Henrys, advertised their ‘magnesia’ for stomach upsets as pure, while their
rivals still used anecdotal advertisement.
The Spectroscope brought new accuracy into analysis; and the science of
the later nineteenth century required ever purer reagents. Thus early
investigations into the electrical properties of Selenium (the first
semiconductor) were hampered by its impurity, which meant that results
could not be repeated: and the place of the ‘rare earth’ (lanthanide) metals
in the Periodic Table could not be clearly determined because samples
were impure. We now take purity for granted; but in looking at the history
of chemistry we must remember that it is a relative term.

A quantum is a small definite quantity of something; and in physics it was
introduced by Max Planck at the end of the nineteenth century, and made
into a central and crucial part of our understanding of energy and matter
by Einstein and by Bohr in the first two decades of our century. Planck was
trying to find an explanation of the radiation from black bodies, which
did not fit existing theory. Light was believed to be a wave motion in the
ether, and it was therefore taken for granted that light energy would be
continuous; and it was perplexing that black-body radiation should be
anomalous. Planck found that if one assumed that energy came off in small
definite packets, as it were, whose size depended upon the frequency of the
radiation, ν, multiplied by a constant, h, then one got the right answer.
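In the notation which has stuck, the energy of each packet is

\[ E = h\nu, \]

where \(\nu\) is the frequency of the radiation and \(h\), Planck’s constant, is about \(6.6 \times 10^{-34}\) joule-seconds.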
He seems to have been uneasy about the physical implications of this
discovery, and prepared to treat it as simply an equation which gave the
right answer: but Einstein in 1905 analysed the photoelectric effect using
quantum theory and got the right answer there too. When light falls onto
certain metals, notably potassium, it will generate a current of
electricity; but red light, however bright, is not effective, while blue light,
however weak, produces the effect. Blue light is of higher frequency, so its
quanta are larger; and Einstein proposed light particles, or photons, of
energy hν. The powerful blue photons could knock an electron out of the
potassium, while shining red light upon it was like peppering an elephant
with small shot, ineffective.
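Einstein’s relation for the effect, in what is now its standard form, balances the photon’s energy against the work \(\phi\) needed to free an electron from the metal:

\[ h\nu = \phi + E_{\text{kinetic}}, \]

so that light below the threshold frequency \(\phi/h\) releases no electrons however intense it is.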
Meanwhile Rutherford was developing his model of the atom in which
electrons in orbits circle the massive nucleus as planets go round the
Sun. The problem here was that negatively charged bodies circling a
positively charged one should spiral into it; so that the life of Rutherford’s
atoms would be exceedingly short. Niels Bohr, who came to Cambridge to
work with J. J. Thomson but then transferred to Rutherford at Manchester,
suggested that the orbits might be stable if they corresponded to definite
energies, governed by quantum numbers. Electrons could not, like planets,
be at any distance from the nucleus; and the bright lines in the spectrum of
the elements corresponded to electrons jumping from one permitted orbit
to another. For hydrogen, where the lines had been shown by Balmer to
follow a mathematical series, this could be shown quantitatively.
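Balmer’s series for the visible lines of hydrogen, in the later notation using the Rydberg constant \(R\), is

\[ \frac{1}{\lambda} = R\left(\frac{1}{2^2} - \frac{1}{n^2}\right), \qquad n = 3, 4, 5, \ldots, \]

and Bohr’s theory yielded the measured value of \(R\) (about \(1.1 \times 10^7\ \text{m}^{-1}\)) from the electron’s charge and mass and Planck’s constant.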
Some spectral lines turned out not to be single, but to be split; this meant
that further quantum numbers were required to permit definite ellipticities,
to describe the spin of electrons, and to cover effects of magnetism. But
this old quantum theory still had an ‘ad hoc’ character; like the epicycles in
ancient astronomy, it worked, but it was hard to believe that it could really
represent truth about nature. Matter and energy seemed to have a dual
character, with characteristics of waves and particles; and for Bohr this was
as far as one could get. Theory was an instrument, a tool, to be judged
pragmatically; and one knew which equation to apply in any situation. The
fatal thing was to suppose that one could ask questions about what an
electron was; we only see phenomena and construct models, and muddles
are inevitable if we worry about how an electron gets from one orbit to
another, or how a photon knows it is faced with a double or a single slit and
modifies its behaviour accordingly.
Heisenberg’s uncertainty, or indeterminacy, principle indicated that
the realm of the atom is governed by statistics rather than by strict
causality. The disintegration of atoms which we observe as radioactivity
has a certain probability, strictly governed by law so that we can
determine just how long it will be before half of a sample has decayed; but
it is impossible to predict the splitting of a particular atom. Schrödinger’s
wave-mechanics brought more order into quantum theory, and in principle
it should be possible to deduce the properties of an element or even a
compound from Schrödinger’s wave-equation. But quantum mechanics still
seems to be unlike the rest of physics; and Einstein could never believe that
this statistical basis was enough. It is possible that at present unknown
‘hidden variables’ might bring causal explanation into this area; or it may
be that we shall have to rest content with Bohr’s ‘Copenhagen
interpretation’, which seemed more satisfying in a period when positivism
reigned in philosophy.
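The statistical law of decay mentioned above takes a simple form: if \(\lambda\) is the fixed probability per unit time that any single atom disintegrates, then for a large sample

\[ N(t) = N_0 e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}, \]

where \(t_{1/2}\) is the half-life; which atom goes next remains unpredictable.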
T. S. Kuhn (1987) Black-Body Theory and the Quantum Discontinuity, new ed., University of Chicago
Press, Chicago.

Radical means a root, and a political radical is one who hopes to get to the
root of social problems rather than just trim the branches. In chemistry, it
means a group of atoms which behave rather like an element, surviving
through series of reactions. A classic case was ammonium, which Davy
and Berzelius investigated: analysis using electrolysis of dry ammonium
compounds with a mercury cathode produced a frothy substance which they
called ammonium amalgam, seeing it as an alloy analogous to those which
potassium and calcium form with mercury. A radical composed of nitrogen
and hydrogen behaved very like a metal in its compounds, and seemed
rather like a metal when liberated; though in the end it turned out that the
product was a froth of ammonia in mercury.
In organic chemistry, very few elements are involved: carbon, hydrogen,
oxygen and nitrogen form the bulk of organic compounds. Structures are
composed of radicals, such as the alkyls (methyl, –CH3; ethyl, –C2H5, etc.),
hydroxyl –OH, carboxyl –COOH, and so on. Radicals like methyl are often
written R in general formulae, so that an alcohol may be expressed as ROH;
an ether with two different radicals may be written ROR’.
Davy and Berzelius had tried to get ammonium as a free radical, and
organic chemists have tried with others: Frankland in the mid-nineteenth
century prepared the first organo-metallic compounds in an experiment
designed to isolate a free radical. Since they have an unsatisfied valency,
they are unstable; and in general are only to be found as intermediates in
those reactions which proceed in stages.

Radioactivity was one of the great discoveries of the 1890s; it restored to
the Paris of the ‘naughty nineties’ some of its reputation as the centre of
science which it had enjoyed a century before. Until the 1890s, mysterious
and invisible rays had been the province of the mesmerist and the
charlatan; while the transformation of one element into another was the aim
of alchemy, again discredited in the nineteenth-century Age of Science. The
discoverer was Henri Becquerel, the son and grandson of eminent
scientists; who noticed the fogging of photographic plates kept near a
sample of uranium in 1896. This was soon after the discovery of X-rays in
the same manner, by Roentgen in Germany.
Marie Curie, one of the first great women in science, and her husband
Pierre isolated in minerals from the Joachimsthal in Austria two new
radioactive elements, radium and polonium; with these stronger sources the
phenomenon could be fully investigated. The Curies approached it from
chemistry, seeing it as a curious property of some substances; but its
elucidation turned out to require a different approach, from physics.
In 1898 Ernest Rutherford, who had come from New Zealand to England,
distinguished two components of the radiation: alpha rays were stopped
easily, for example by a sheet of paper; while beta rays were more
penetrating, and by 1900 were identified with the corpuscles, or
electrons, which J. J. Thomson had in 1897 found to compose the
cathode-rays. In 1908 Rutherford and Geiger proved that alpha rays were
made up of helium atoms which had lost two electrons and thus had a
double positive charge. In 1901 gamma radiation was recognised, as akin to
X-rays.
Rutherford went to Montreal, and there worked with Frederick Soddy, a
chemist; and in 1902–3 formed the theory that radioactivity was sub-
atomic chemical change. By emitting alpha and beta particles, an atom of
one element could change into that of another; and they worked out
successions of changes, at very different rates, by which this was
happening. This involved them in the idea that atoms of the same element
might differ in weight: naming of such atoms, as isotopes, and recognition
of how general the phenomenon is, came later with Aston, but Theodore
Richards at Harvard showed that lead associated with uranium, and
therefore probably produced from it, has a lower atomic weight than
‘ordinary’ lead.
Back in England, Soddy joined William Ramsay in London to follow the
first transmutation to be wholly above-board. They showed that in the
spectrum of gas in a vessel surrounding some radium there were no helium
lines; and that when it was left to stand helium lines began to appear. The element
was thus being generated in the process of the decay of the radium.
Rutherford also came back to England, to Manchester and then to
Cambridge: he and Ramsay fought for the country’s supply of radium for
their experiments, and this experience led Rutherford to the view that
‘chemist’ meant ‘damn fool’. Rutherford proposed that the atom was
composed of a massive nucleus surrounded by electrons in orbits; and that
therefore the nucleus was the seat of radioactive change. He found that the
rate of decay was constant, unaffected by temperature or chemical
combination; which meant that determining its cause was perplexing. He
also suggested that radioactive decay might be important as a source of
heat in the Earth, making possible a much longer time-scale for geology
than physicists had previously been able to allow. Rutherford also came to
recognise that in radioactive changes, mass is being transformed into
energy; and that this might be valuable in solving the forthcoming coal
crisis to which Ramsay had drawn attention, when fossil fuels would
eventually run out.
The dangers of radiation from radioactive substances were not at first
realised, and the precautions necessary were worked out by trial and error
after several of the pioneers had succumbed to cancers. Radium treatment
for cancers was the first positive use of radioactivity, followed by dating
techniques for archaeology based upon Rutherford’s discovery of the
constant rate of decay. The first use of radioactive energy was explosive, in
the atomic bombs of 1945; these weapons were developed regardless of
expense and mark the beginning of physics as ‘big science’ which cannot
be carried on without enormous investment. Rutherford’s Cavendish
Laboratory was cheap; the story was that anybody asking for string was
gruffly asked how many inches he needed; but by the time Rutherford died
in 1937 the sealing-wax and string era was over. Physics has become
alarming as well as expensive with the study of the nucleus; and has lost the
innocence it had in the early years of our century, when the study of
radioactivity was getting under way.
J. Hendry (ed.) (1984) Cambridge Physics in the Thirties, Adam Hilger, Bristol.

Rays are the lines followed by light or other forms of radiant energy. In
working out what happens in reflection or refraction, we make a ray
diagram, following the path through a lens, prism or raindrop; and we can
thus compute what will happen in optical apparatus. In the opening years
of the nineteenth century, William Herschel found that there were heating
rays beyond the red end of the visible spectrum; and Ritter and Wollaston
found chemically active rays beyond the violet. Light rays were therefore
just part of a family, some of which were invisible but detectable.
Faraday, working in the 1830s and 1840s on electricity and
magnetism, came to the conclusion that the fields associated with these
agents were filled with lines of force; and in a lecture, ‘Thoughts on ray-
vibrations’, he suggested that such lines filled all space. He believed that
magnetic fields must act upon light, and found that they did; and suggested
that light might be vibrations in the lines of force. This idea was given
mathematical form by Clerk Maxwell, and led to the detection by Hertz of
the first radio waves.
Faraday also investigated the passage of electricity through gases, and
saw how at low pressures a dark space forms around the cathode. His
experiments were continued by William Crookes, who found that at lower
pressures ‘cathode rays’ streamed from the negative electrode in straight
lines, casting shadows. By this time light was firmly believed to be a wave
motion; but in 1897 J. J. Thomson showed that the cathode rays were
composed of negatively-charged particles, which he called corpuscles
and later electrons. Rays were also emitted in radioactivity, in three
forms: alpha, which were positive particles (helium nuclei); beta, which
were electrons; and gamma, which were like X-rays, analogous to light.
Rays might therefore be waves or particles; but in the early twentieth
century light was shown to have a particle aspect, and then electrons to
have a wave character, so that rays partake of both.

Reaction, in Newton’s third law of motion, is equal and opposite to action;
as anybody who has fired a rifle will know, the kick given to the shoulder
corresponds to the push given to the bullet. In science, the term is chiefly
used in chemistry: where substances which act upon each other are said to
react, and any chemical change is called a reaction.
In mechanics, all bodies are subject to gravity; but chemical affinity is
elective. Some elements and compounds will react together, and others will
not: the ‘noble’ metals gold and platinum are unreactive, and the ‘inert’
gases neon and argon were until recently believed never to react chemically.
A rise in temperature of ten degrees Celsius doubles the rate of a reaction,
approximately, so many reactions will only go when the reactants are
heated: which is why Bunsen burners and test-tubes are such
characteristic features of the chemical laboratory.
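Taken at face value, the doubling rule compounds: a rise of \(\Delta T\) degrees multiplies the rate by roughly

\[ 2^{\Delta T/10}, \]

so that heating a mixture from 20 °C to 60 °C speeds it up some sixteen-fold; though this is a rule of thumb rather than an exact law.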
Reactions which go slowly can be followed, and their rate measured, so
that the mechanism can be worked out; often there is a series of stages, of
which the slowest is the rate-controlling step. Some reactions require a
catalyst: the first in which the stages were elucidated was the ether
synthesis of A. W. Williamson done in the middle of the nineteenth
century. Exothermic reactions are those in which heat is given out, and
endothermic those in which it is absorbed, usually yielding less-stable
products. Some reactions are reversible, and a change of circumstances will
make them go backwards; or they may stop at a position of equilibrium
rather than going to completion. Chain reactions are those which once
started, increase in speed, and may produce an explosion.

Reduction in chemistry is the process in which oxygen or its equivalent is
removed from a compound; as when an ore is reduced to the metal. It is
thus the converse of oxidation. In more theoretical terms, it has since come
to mean the process in which a metal is brought to a lower valency state, as
when a ferric salt (Fe+++) is reduced to a ferrous one (Fe++).
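In the electronic terms of twentieth-century chemistry, reduction is the gain of electrons, so that the change from ferric to ferrous may be written

\[ \mathrm{Fe^{3+}} + e^{-} \rightarrow \mathrm{Fe^{2+}}. \]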
In philosophy of science, reduction is the process of explanation of
the phenomena of one science in the language of another. Thus in the early
twentieth century, chemistry was reduced to physics: a model of the atom
derived by physicists from physical data (such as kinetic theory of
gases, spectrum analysis and X-rays) involving a nucleus and
electrons could account for the Periodic Table of the chemical
elements. Niels Bohr’s theory of electrons in elliptical orbits explained
how the elements formed families, such as the halogens (chlorine, bromine
and iodine), and as extended by G. N. Lewis explained how elements react
by transferring or sharing electrons. This did not make very much difference
to chemists, because experiment in the laboratory is still necessary to find
what will actually happen in real cases; the calculations are too
complicated, and the phenomena are explained in principle rather than in
detail.
This is often the way when reduction has happened; but it can be a
controversial process, as when anyone attempts to reduce psychology to
chemistry or physics. Some philosophers, such as Hegel, have believed in a
hierarchy of sciences, each having its appropriate level. Questions of
perception would then belong to psychology, and of sound or light waves to
physics; but one could not reduce perceptions of colour to wavelengths.
Partly because we can see that the hierarchies of Hegel, and such
contemporaries as Ampère in France and Whewell in Britain, no longer
express the way we see things, we find it difficult to envisage firm
boundaries between sciences; and easy to believe that some are more
fundamental than others. It is really only the prospect of mind being
reduced to matter which alarms us; and this is perhaps more a
philosophical question than a scientific one. We would still need categories
of mental events, as we still need chemical ones; genuine reduction and
unification in the sciences are more exciting than daunting.

A referee is asked by the editor of a journal to go through a paper
submitted, repeat the experiments, check the citations and recommend
whether or not it should be published. Referees may not be very good
players, but they know the rules, or paradigm. The system began with the
earliest journals; thus Robert Hooke was referee for Newton’s first paper on
light, and his criticisms led to an enduring quarrel. To avoid this kind of
thing, the referee is generally anonymous. There are in the past cases of
very original papers, for example J. J. Waterston’s on the theory that a gas
was composed of particles moving at speed, rejected because of a bad
report from a referee: the system works best where it is competence rather
than great originality which is in question.

Reflection occurs when a ray of light falls onto a suitable surface; and
reflection upon this phenomenon in Antiquity led by Euclid’s time to the law
that the angle of incidence is equal to the angle of reflection. In the
seventeenth century reflecting telescopes with large curved mirrors came
into use, Newton being the inventor of one kind, because reflection does
not, like refraction, split white light up into its constituent colours.

Refraction happens when a ray of light goes from one medium to
another, for example from air to water or glass. A stick dipping into water
looks bent because of refraction. In astronomy it is important because it
makes stars and planets near the horizon appear in the wrong places;
Ptolemy and Kepler tried to find the law of refraction, but it was the
Dutchman Snell who came up with it in 1621: the sine of the angle of
incidence divided by the sine of the angle of refraction is a constant, called
the refractive index. Newton demonstrated that light of different colours is
differently refracted, producing a spectrum with a prism. Most telescopes,
microscopes and spectacles depend on refraction.
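Snell’s law in symbols, with \(i\) and \(r\) the angles of incidence and refraction measured from the normal, is

\[ \frac{\sin i}{\sin r} = n; \]

for light passing from air into water \(n\) is about 1.33, so a ray striking the surface at 45° travels on at about 32° inside the water.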

Relativity is the theory propounded by Einstein from 1905 in first a
‘special’ and then a ‘general’ form, that space and time are not absolute, but
relative. They order coexistent and successive phenomena but have no
independent existence. Events in ‘frames of reference’ accelerating with
respect to each other cannot be dated according to any absolute scale; only
inside one frame can we unambiguously say that two events were
simultaneous, or that one came before the other. Light is the fastest signal
we can send, and its velocity is an important constant of nature: further, its
path is a straight line, even if this means that we must diverge from
Euclidean geometry; space can be described therefore as curved. Bodies
accelerated to velocities near that of light gain in mass and can therefore
never quite reach it. Einstein is supposed to have asked a porter whether
Crewe stopped at this train: we can no longer say simply that the Earth
moves round the Sun, but only that it is much more straightforward to
suppose that it does. If we do not, then we have to assume mysterious
forces emanating from the fixed stars; and therefore it is wise not to return
to the Ptolemaic system. Physics is nevertheless a more provisional
business than it had seemed: a change of viewpoint or paradigm may be
required if we are to keep it as simple as possible, and avoid multiplying
entities; and Einstein’s is a good example itself of such a revolution. His
theory did have empirical consequences; stars which were actually behind
the Sun according to Euclidean geometry appeared beside it during a total
eclipse, particles accelerated to very high velocities do increase in mass,
and the orbit of Mercury is better accounted for in his theory than in
Newton’s. In a sense, his work refuted that of Newton, as Newton falsified
Kepler’s laws; but it is probably best to see the older work as a special case
or approximate solution, rather than as false: after all, we go on using it for
all sorts of purposes – the dynamics of getting to the Moon were
Newtonian.
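The gain of mass with velocity mentioned above has, in the language of the special theory, the form

\[ m = \frac{m_0}{\sqrt{1 - v^2/c^2}}, \]

where \(m_0\) is the rest mass; as \(v\) approaches \(c\) the mass grows without limit, which is why the velocity of light can be approached but never reached.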
A. Pais (1982) Subtle is the Lord, Oxford University Press, Oxford.

Religion and science were in the late nineteenth century seen as
necessarily at strife. In Comte’s scheme, the ‘theological’ phase of thought,
in the individual and in history, gave way to the ‘metaphysical’, and then to
the ‘positive’ or scientific: and Huxley saw extinguished theologians about
the cradle of every science, like the serpents about that of Hercules. In place
of miracles and verbal explanation, scientists provided a value-free
account of what was going on in the world.
It was not always so; nor perhaps is it now. In the seventeenth century,
those who began the enterprise of modern science built upon the work of
medieval predecessors and were themselves generally pious men, though
not always orthodox. Thus Newton became a Unitarian, though he kept it
secret: but his belief in God was very important to him. Not only was God
the creator, but he intervened in the working of the world: and Newton
believed that God needed to straighten out the wobbles in planetary orbits
from time to time, and keep up the quantity of motion in the world. Many of
his contemporaries became Deists, seeing God as the craftsman who had
built a clock which then ran itself. In any case, if the world was created by
God, it had a plan which we could hope to understand, at least in part; so
there was a point in doing science, as there would not have been if
everything depended upon chance. Religion provided an underpinning.
Even Darwinian evolution could be understood as a long-term plan, and
Darwin seems to have come to it that way; but his religious faith became
very weak. The late nineteenth century was the high point of scientism;
science no longer needed an underpinning, but other activities tried to
model themselves upon science. The First World War, where poison gas and
other scientific devices were used as weapons, began to weaken the
benevolent image of science as the sole guarantee of progress; and quantum
physics indicated that determinism and mechanism were not enough.
Astronomy, with the idea of the ‘Big Bang’, seemed to point to a moment of
creation; while the probability of life developing by chance seemed
vanishingly small. The problem is that the First Cause of scientific
argument is not necessarily very like God the Father who consoles the
widow and the orphan.
T. Cosslett (1984) Science and Religion in the Nineteenth Century, Cambridge University Press,
Cambridge.
D. M. Knight (1986) The Age of Science, Basil Blackwell, Oxford.
C. A. Russell (1985) Cross-currents, IVF Press, London.

Research is the process of discovery in the sciences. In the past, it was
generally used in the plural; a scientist would in due course publish
Researches on some topic, as Davy did on Nitrous Oxide in 1800. Faraday’s
collected papers were similarly titled Experimental Researches in
Electricity. With the twentieth century, and perhaps as an indicator of
specialisation, the singular has come into fashion; and it is now used as the
antithesis (or occasionally the complement) of teaching or of
manufacturing. It is also sometimes contrasted with scholarship in the
Humanities, presumably in the belief that there is nothing new to discover
there; though many very important scientists have, like Lavoisier and
Darwin, been distinguished for making contemporaries look at familiar
facts in a new way rather than for specific discoveries. It would be odd not
to count as research the ponderings of such people.
Researches in the sciences may be done in the laboratory, where
experiment may be used as a test of a hypothesis or may (as with Faraday)
be itself a method of discovery; or may sometimes be done with pencil and
paper manipulating symbols of mathematics. At the Ecole Polytechnique in
Paris in the 1790s science was taught by those active in research, like the
chemist Berthollet; and since then in universities this has been the norm.
The problem is that as the sciences become more specialised, research into
some arcane and recondite topic becomes increasingly remote from the
interests of undergraduates; while in the sciences especially there is a strong
feeling, despite the existence of textbooks, that everything that is to be
learned must be taught. These pressures separate research and teaching;
which is a great pity, because only those who are learning themselves make
good teachers. The divide between research and manufacturing is also sad,
because pursuit of the fast buck and neglect of research do not lead to strong industry. As compared with that in the Humanities, scientific research (and
teaching) is very expensive; this is a new development since the nineteenth
century, but it makes the future look uncertain.

Resonance in physics is that phenomenon in acoustics where a string or fork tuned to a particular note will sound when the note is played. Strings
will also resonate when a note an octave above or below is played; as can
be shown on the piano when we hold down a key, play and release the
octave, and listen. The idea was extended into optics with the wave theory
of light, by G. G. Stokes and then by Kirchhoff in accounting for the dark
lines in the spectrum of the Sun. They suggested that cool atoms would
absorb radiation at the same frequencies which they would emit when hot;
and thus Kirchhoff and Bunsen inaugurated the chemistry of the stars.
In twentieth-century chemistry, the idea of resonance was applied to
structures by Linus Pauling. Aromatic compounds, benzene and its
derivatives, were more stable than anybody would have expected a ring of
six carbon atoms to be. The structure of the ring proposed in the 1860s by
August Kekulé involved alternate single and double bonds between the
atoms; which gives two formulae, where the double bonds are at the top left
or the top right of the hexagon. No different compounds corresponding to
these two structures could be found: and Pauling’s theory was that the true
structure corresponded to neither formula, but to something in between – it
was resonating between all the possible structures. It was possible to
calculate all the various structures which were participating in various
degrees; which included one with the ‘spare’ bonds not doubled but
pointing into the middle. Other compounds which could be given
alternative structures also proved to be more stable than one would expect;
and this was attributed to resonance energy. By the later 1950s this theory
was being recognised as unsatisfactory; and it was replaced by the idea of
molecular orbitals, in which electrons in compounds such as benzene
surround the molecule rather than a specific atom, and are thus in a state
of lower energy.

A retort is a piece of apparatus used in chemistry for distillation: it has a round body with a long tube projecting from its short neck, down
which the distillate will flow and from which it can be collected; cold water
may be needed to condense it. Distillation came to be seen as a fundamental
chemical process, in which the active component or the spirit of a natural
product was separated out from it by heat; and the retort, like the test-tube,
became a symbol of chemical activity.

Reversibility is a characteristic of some reactions in chemistry, and also of various processes important in thermodynamics. Copper gently heated
will react with air or oxygen to form a black oxide; strongly heated, the
oxide will yield copper and oxygen. Changing circumstances will determine
which way such a reaction goes, or where it reaches equilibrium.
In the analysis from which the second law of thermodynamics was later derived, Sadi Carnot imagined a
heat-engine in which a cylinder of perfect gas absorbed heat slowly and
reversibly from a source, and gave it out at a sink at lower temperature.
Since none was wasted, and each stage in the cycle was reversible, Carnot
could show that such a machine was the most efficient imaginable; actual
machines have to run fast, some waste is inevitable, and reversibility is only
sometimes important. Carnot’s engine run in reverse would function as a
refrigerator, taking heat from cool bodies and giving it out at a higher
temperature: and his work showed that energy is required for this process.
If one Carnot cycle were driving another in reverse, then no work would be
done and there would be no net result; but the real world is not like that.
Many processes, like those of the sausage factory, are irreversible; and in
closed systems the steady increase in entropy is an indication that not
everything can be reversed, and that time’s arrow has a direction.
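In modern notation, Carnot’s result can be stated in a line: a reversible engine working between absolute temperatures \(T_1\) (source) and \(T_2\) (sink) has the greatest possible efficiency,

\[ \eta = 1 - \frac{T_2}{T_1}. \]

An ideal engine running between 500 K and 300 K could thus convert at most 40 per cent of the heat it absorbs into work; real engines, being irreversible, always do worse.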

A review may mean two things; either a kind of journal devoted to essays
describing progress in some field, or an account of a book, a paper, or a
meeting in a periodical. The Journal des Sçavans, published in association
with the French Academy of Sciences, was the first scientific journal, and it
was essentially a review in the first sense. It filled a need which is still
there. The output of research papers is now enormous, and the reader
cannot plough through them all, but must rely upon abstracts and the
guidance of reviews. In nineteenth-century Britain there were a number of
competing reviews (the Edinburgh, the Quarterly, the Westminster, the
North British and more) which tried to cover science and the humanities,
and are valuable sources for the reception of scientific theories among the
general intellectual public. These were anonymous, but the names of the
authors were usually a poorly-kept secret; the power of the editors was
very great.
The Royal Society’s Philosophical Transactions was a research journal,
publishing signed articles, but including book reviews. These are still an
important feature of scientific periodicals; and by the end of the nineteenth
century the practice which originated in France, of signing such reviews,
had become the norm. Such reviews can, in the hands of a master, tell us
not only about the book but also about the state of a discipline.
Scepticism has always been an important part of science: the Royal
Society’s motto, ‘Nullius in Verba’, take nobody’s word for anything, would
be shared by other academies and associations. In philosophy,
scepticism is an old tradition; its extreme is the solipsist, who cannot
believe that anything exists outside himself. If we doubt the existence of an
external world or of any kind of order then we would not be able to do
science at all; so the scepticism of scientists is a rather different thing.
Outsiders grumble that scientists are unwilling to investigate the places
where flying saucers have been reported, or the bending of spoons by will-
power; they just get on with their boring old analyses or something. And
yet we are also prepared to laugh at the early Royal Society for making a circle of unicorn’s horn to see whether a spider put in the middle of it could escape, and for publishing a paper by an early President claiming that geese hatched out of barnacles. It is hard to get
the balance right; and scientists, despite all the surprising discoveries that
have been made, seem to have become increasingly sceptical about things
not found by qualified observers, preferably in a laboratory.
What is believed depends on the current paradigm, or the expectations
based on successful theory and generalisation. Thus it was hard about 1800
to accept that duck-billed platypuses were real and not made (like
mermaids) by sailors practising taxidermy on long voyages. It was then
harder to accept that the creature laid eggs; because in every way it seemed
to cut across the classification painfully constructed. It is curious that
radioactivity, with the suggestion that atoms of elements are decaying,
was much more easily accepted; this tells something about atomic theory in
the 1890s.
Within science we can find people being pig-headed, as when in the
1840s observatories were too busy to look for the planet Neptune
predicted by Adams; or credulous as they were about ‘polywater’ in the
1960s. Scepticism may seem a matter of keeping clear of fraud, magic,
miracles and spirits; but it is more complicated than that, and the Royal
Society’s motto is easier said than followed, especially in these days when
we have to take advice from experts.
F. Franks (1981) Polywater, MIT Press, Cambridge, Mass.
A school usually means an educational institution: either a collection of departments within a university, a component part of a Faculty; or else, and most often, a place where children are taught. The sciences did not
have much of a place in schools in this sense until into the nineteenth
century; at the higher levels the classics generally formed the major part of
the curriculum, and a ‘modern’ side where modern languages, modern
history and science were taught was uncommon. In the eighteenth century,
Dissenting Academies open to Nonconformists in England had a more
modern syllabus; and this formed a model for later schools. In France,
competitive examination for the Ecole Polytechnique meant that some
science had to be taught; and at Cambridge mathematics was the most
important degree-subject, for which again schools would prepare their
pupils. But only in the second half of the nineteenth century, when
universities began to offer degrees in the sciences, did schools begin to
adopt textbooks and their pupils begin to find themselves entered for
examinations.
In connection with the sciences, a school means also a group of people
associated together in their research. Usually a school will have its great
man, the rest being disciples of this father: in our century Rutherford’s
school at Cambridge in the 1920s and 1930s might be an example.
Sometimes as here, the school forms around a scientist of unusual ability;
but especially in the past, when science was fairly cheap, the genius might
work on his own like Faraday, and lesser mortals with more social gifts and
ambitions might form a school. There is too much talk about ‘centres of
excellence’ today, but what this cliché may draw attention to is the
existence of a school, where some set of problems is being investigated,
perhaps using some particular technique invented by the Father, and where
those who go can join in and contribute to the success of the enterprise. The
school is thus working within the paradigm of its founder; and it will in due
course be overtaken by events and innovation elsewhere unless the next
generation is unusually flexible. Scientists like to see themselves as the
‘sons’ and ‘grandsons’ of the eminent, and as members of some respected
school; and as they move to jobs elsewhere, their connections are often kept
up by correspondence, a valuable source for the historian.
S. T. Keith and P. K. Koch (1986) ‘Formation of a research school: Theoretical solid-state physics at
Bristol, 1930–54’, BJHS, 19, 19–44.
L. J. Klosterman (1985) ‘A research school of chemistry in the nineteenth century: J. B. Dumas and
his students’, Annals of Science, 42, 1–40.
J. A. Secord (1986) ‘The geological survey of Great Britain as a research school, 1839–1855’,
History of Science, 24, 223–75.

Science is a word which down to the middle of the nineteenth century meant any organised body of knowledge. It was opposed to practice, or
mere rule of thumb, and not to humanities; and the distinction between arts
and sciences then was used to separate practical activities from those for
which there was a body of theory. Equivalent words in other languages,
such as Wissenschaft and Nauk, are still used in this way: the English
language is unusual in distinguishing natural knowledge by the exclusive
term ‘science’. This narrowing seems to have begun with the setting up of
the British Association for the Advancement of Science in 1831, in which
the Sections covered disciplines which were thought to be developed and
empirical; geology got in, but archaeology and geography did not. By the
end of the nineteenth century, the Royal Society was a professional body,
no longer admitting the antiquarians or philologists who had been
prominent among its membership about 1800, when ‘amateur’ still meant
connoisseur.
University Faculties of Science still follow surprisingly closely the lead
of the BAAS a hundred and fifty years ago: and yet it is not immediately
obvious that botany and applied mathematics have so much in common,
and are so different from sociology and logic, that a great gulf exists
between science and other activities. The most important thing seems to be
that more money is available for science; and that the word ‘science’ has
come to have an almost magical significance in modern society.
It is sometimes urged that science has a method for arriving at truth, and
even (as by T. H. Huxley and W. K. Clifford a hundred years ago) that this
method is organised common sense. They had in mind induction, where
carefully-observed facts are generalised, and deduction, where
consequences of established truths are worked out. The problem is that this
does not seem to fit creative science, though it may work for the ‘normal
science’ done within a paradigm. Galileo admired Copernicus for his
defying common sense in proposing that the Earth went round the Sun.
Copernicus’ theory should have been testable, because the Earth’s motion in
its orbit would produce apparent changes in the positions of the fixed stars,
the Stellar parallax; but in his day these were undetectable, and he boldly
inferred that the stars were therefore much further away than anybody had
thought. The greatest scientists in imposing order on the world disregard
the rules drawn up for ordinary intellects.
They may then go wrong, as in the nineteenth century when the many
successes of the wave theory of light seemed to amount to a demonstration
of its truth. It is a long-standing feature of science that its theories are
provisional, but it was hoped that its empirical laws might be eternal; and
some scientists like Gay-Lussac devoted themselves to the search for
laws rather than theories. The great examples of such laws are those of
Boyle and of Ohm, relating gas pressures and volumes and electrical
currents and resistances respectively; but the trouble is that these do not fit
all cases, and represent ideal situations just as theoretical models do.
Scientists also distrust the purely empirical, the realm of happenstances and
coincidences, and look for satisfactory explanations; it is the change in
these over time which is one of the fascinations of the history of science.
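Both laws can be written in a line each. Boyle’s states that for a fixed quantity of gas at constant temperature

\[ pV = \text{constant}, \]

and Ohm’s that the current through a conductor is proportional to the potential difference across it,

\[ V = IR, \]

where \(R\) is the resistance. Real gases deviate from Boyle’s law as they approach liquefaction, and many conductors are not ‘ohmic’; the laws describe ideal situations, as remarked above.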
It has been urged by Karl Popper that science should be seen as a series
of conjectures and refutations. It is distinguished from other activities by its
reliance upon falsification. General propositions cannot be confirmed
beyond doubt, because the exception may not yet have been seen – as black
swans were not observed by Europeans until they reached Australia at the end of the seventeenth century. On the
other hand they can be falsified by a single counter-instance. Scientists put
forward hypotheses and they (or other scientists) do their best to falsify
them; those which cannot be falsified, like Freud’s notion of an Oedipus
Complex, are not science. Science consists of falsifiable but unfalsified
propositions; those which have survived many tests are worth a good deal
of trust.
The problem is that this does not fit Copernicus, for example, who
refused to see his elegant scheme as falsified, and added what we might call
an ‘ad hoc’ hypothesis about stellar distances to it, to save it. He was right;
and so it seems were those who invoked the neutrino, a minute particle, to
save the conservation principles of energy, momentum and spin in our
century. Darwinian evolution is probably impossible to falsify; great
theoretical schemes are not abandoned because of a single experiment
which turns out differently from what was expected. All that such a test
shows is that there is something wrong somewhere; an anomaly within the
science, and all sciences have these. Only when there is a rival theory, in
which what were anomalies are predicted, will the old one be given up.
Science is a highly conservative business.
Science is also an abstraction, in that in the last 150 years the great
feature has been specialisation; the various sciences do not have very
much in common, except perhaps a hope that empirical methods will bring
more knowledge. Bacon in the early seventeenth century remarked that
science is power; a hundred years ago science seemed to many simply a
force for good, which could even inculcate values such as truthfulness. We
see the dangers of uncontrolled science and technology. Science is a
provisional and human activity, not a royal road to truth; it is not
independent of political and other views; but in its higher reaches it is
thoroughly exciting and creative.

Sciences are distinct bodies of knowledge which have shifting and uneasy
frontiers between them, and which together constitute science. Thus
electricity was in the early nineteenth century generally seen as a part of
chemistry; but the new idea of conservation of energy brought it into
physics. The sciences are often thought of as a hierarchy, having different
status, with the hope perhaps of the ultimate reduction of them all to one:
in our day, this one would probably be physics, but about 1800 it would
have been chemistry, and at other times biology. To a reductionist, there is a
real if distant hope of unifying all science: and thus for example explaining
human behaviour in terms of electrons.
Eminent practitioners of philosophy of science such as Hegel and
William Whewell in the nineteenth century had a hierarchy of sciences
without reductionism: they saw each science as having its appropriate
fundamental idea, and thus as being complementary. Any attempt to reduce
one to another would distort the map of knowledge. The problem is that
sciences do change their paradigm over time, and that developments seem
to happen on the boundaries of sciences which were put far apart in these
schemes. The history of sciences is one of change and development, not of
continuous accumulation of data to complete a pattern already foreseen.
So reduction may happen, and indeed in modern theories of chemical
affinity, or valency, we do find some of the science reduced to physics.
But chemists are unhappy at the thought that calculations might replace
experiment; and in the sciences, textbooks, academies and institutions, and
journals all help to reinforce specialism and keep up the frontiers. By the
nineteenth century the old Natural Philosopher was becoming the
scientist by profession, specialising necessarily in one science because
life was too short to keep up in several. The process has gone on, so that a
scientist is not merely a chemist, but an organic or a physical chemist; and
within this great subdivision, further specialisation has become essential.
The outlooks and interests, and the methods of research, among
practitioners of different sciences are very different; and the very question
of whether Psychology or Economics, for example, are sciences is open.
There have been times when chemists and physicists, and physicists and
geologists, have ignored one another: Rutherford thought that chemist was
another word for damn fool, while Victorian physicists got the age of the
Earth hopelessly wrong. While specialism is unavoidably with us,
collaboration in teams seems to be a way of crossing the daunting but
artificial frontiers between sciences. We can have a shot at saying which is
the leading science at a particular time and place, but whether others will be
reduced to it remains to be seen.

Scientism is the belief that science is the only road to truth, and that its
methods must be employed in all other branches of thought and enquiry.
This idea was popular in the late nineteenth century and the first half of this
one, when scientists seemed to have great moral authority, and the
sciences seemed the essential feature of education; it often had strong
links with materialism, and was attractive in philosophy escaping from
religion. In our time, it is less plausible that science can supply us with all
the values we need.
M. Midgley (1985) Evolution as a Religion, Methuen, London.
A scientist’s life is devoted to science. There was no word until the 1830s
to describe such a person, and this was partly because there were so few of
them. The scientific community of the seventeenth and eighteenth centuries
was made up of doctors, lawyers, clergy, officers of the army and the navy,
and landed gentlemen; for all of whom science could not be very much
more than a hobby, pursued in their leisure time. They were amateurs, in the
old sense of those who loved the activity, because hardly anybody could
follow science as a profession. This does not mean that their work in
science was not of a high standard; but their education, and their social
relations, were those of their class and did not mark them out. There was no
particular group-feeling which separated men of science from others in ‘two
cultures’.
Even as late as the opening years of the nineteenth century, those
engaged in science often covered a very wide field. Thus Thomas Young is
remembered as a pioneer of the wave theory of light, but he also took the
first steps in the decipherment of the Rosetta Stone, wrote on chemical
affinity, first described astigmatism in the human eye, and in a famous
course of lectures on natural philosophy used ‘work’ and ‘energy’ in
something like their modern scientific sense. Specialism was beginning to
come, but there were many who were unhappy about it, and who enjoyed
working across the boundaries between disciplines some of which might
now be excluded from the circle of the sciences.
Academies did sometimes provide for and encourage specialisation, by
having a set number of places for practitioners of different sciences; and by
separating into distinct groups the antiquarians and the experimentalists;
and there were separate journals for research papers in science and in
literary studies, though some famous reviews (like the Edinburgh, which
attacked both Young and Wordsworth) covered both. Scientific clubs, like
the Royal Society, were open to all with an interest in science, and only a
minority of Fellows in Regency Britain were involved in research.
In 1831 the British Association for the Advancement of Science was
founded, and it was at the third annual meeting of this institution that the
word ‘scientist’ was coined by William Whewell: himself a cleric and moral
philosopher as well as an expert on mineralogy and tides. It was formed by
analogy with ‘artist’; but many scientists of the day, like Darwin and
Faraday, considered themselves as doing philosophy – that is, developing
and defending a world-view, rather than simply finding a few new facts or
inventing an engine. By the 1830s it was just becoming possible to live by
science, so that one’s research brought bread and butter: science could
begin to be a profession, rather than an activity of those practising another
profession, like medicine or law.
But there was no uniformity about the backgrounds of the first generation
of scientists. In France from the Revolution of 1789 there had been the
Ecole Polytechnique, devoted to science and engineering but becoming ever
more militarised, and other Hautes Ecoles; and there were university
courses in sciences, though the professors were poorly paid and resorted to
‘cumul’, holding a number of posts at once, to make ends meet. In Germany
after the Battle of Waterloo in 1815 the universities became very important,
and the ideals of Bildung, or character-formation, and Wissenschaft, or
knowledge for its own sake, were much stressed. By the 1830s Liebig had
built up in the little University of Giessen a research school in organic
chemistry, giving laboratory training to students for a PhD degree.
Gradually in France and especially in Germany there emerged a corps of
trained scientists, sharing a paradigm, and increasingly distinct from those
with arts degrees, though by no means philistines.
In the Anglo-Saxon world this came later. Faraday had been apprenticed
to a bookbinder; Darwin dropped out of medicine, and read for an ordinary
degree at Cambridge to prepare himself for life as a clergyman; Huxley was
trained on the job as a surgeon, at Charing Cross Hospital. It was only in the
second half of the century that a career pattern for a scientist became
available, with the great expansion in education at all levels after 1870.
Then the would-be scientist hoped to go to a university, and study a science;
and then perhaps to continue with academic research – British universities
started offering PhD degrees in the twentieth century, so before that one
went to Germany or didn’t bother with graduate qualifications – or else to
go into teaching, or set up as a consultant. There was, right through the
nineteenth century, a good living to be made for a few scientists, who had
often achieved fame through research or public lectures, by consultancy,
generally involving chemical analyses, or electricity.
By the end of the nineteenth century, science had become divided into an
increasing number of sciences, and the frontiers between them had become
fixed in academic institutions. Chemists looked askance at physicists, for
example; and one’s first loyalty was to one’s discipline rather than to
science. Nevertheless, there was and is much that the different sciences
have in common; and a kind of scientism, the belief that science can solve
all the problems of mankind, was taught to students of them all. The
greatest changes in the twentieth century were first, that women were able
to become scientists; there had been some before, like Caroline Herschel,
Jane Marcet and Mary Somerville in nineteenth-century Britain, but they
had not been able to play a full part in things; and second, that science was
taken up in countries such as Japan and India, so that scientists ceased to be
exclusively European males.

Simplicity, like elegance, is one of the criteria by which scientists judge theories. A simple theory is in Popper’s view more falsifiable and therefore
better than a complex one; so choice of a simple theory need have nothing
to do with the supposed simplicity of the universe. On the other hand, most
of us will sympathise with King Alphonso of Castile, who when the
Ptolemaic system of astronomy was explained to him said that if God had
asked him he would have recommended something simpler. Newtonians
rejoiced in the order, beauty and simplicity which Newton seemed to have
demonstrated, vindicating the Creator. Whatever may have been in God’s
mind, there is little doubt that people like simplicity and harmony, and
search for such things in nature.
In the early nineteenth century, there was great argument over the nature
of matter. Some, in the classical tradition of atomism, saw everything as
composed of identical atoms, in different arrangements; while others, like
Lavoisier and Dalton, saw a considerable number of distinct kinds of
elements – a number which rose towards ninety as the century wore on.
Thomas Beddoes, the first patron of Davy, remarked that this was not really
a matter of science, but a question of one’s taste in world-making: but one
can more seriously say that simplicity is not always obvious. To some it
will be simpler to have only one kind of atom, although these must be fixed
in complicated arrangements; to others, it is simpler to have many
irreducible elements. One could imagine those who found the wheels within
wheels of the Ptolemaic system simpler than the ellipses of Kepler. It would
be discouraging and perverse to urge scientists to look for complicated
theories, but simple ones (like the idea that light is waves) have turned out
to be false, and it may be that the world is complex and awkward.

A solid maintains its form, which may be a crystal or an amorphous lump, against the application of forces. It is one of the states of matter, and
will normally turn into a liquid and then a gas as heat is applied to it.
Joseph Black in eighteenth-century Scotland perceived that the snow did
not all melt as soon as the temperature rose above freezing; and inferred
that in any change of state ‘latent heat’ is absorbed or emitted. Heat is
always required to transform a solid into a liquid at the same temperature;
and Lavoisier saw this as a kind of chemical reaction, and heat (or caloric)
as a fluid and an element. We would distinguish a solid by the immobility
of its particles, which when it melts have enough energy to move about.
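Black’s insight can be put quantitatively: to melt a mass \(m\) of a solid at its melting point requires heat

\[ Q = mL, \]

where \(L\) is the latent heat of fusion; for ice, in modern units, \(L\) is about 334 kJ/kg, which is why snow lingers so long after a thaw has set in.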

The spectrum became a matter of great scientific interest with Newton’s work in the 1660s. The rainbow must always have attracted attention, and
Descartes in 1637 had published an analysis of it. Ever since lenses had
come into use as reading-glasses people had noticed coloured fringes
around images; but these were seen as something water-drops or glass did
to white light. Others had used prisms, but Newton seems to have been the
first to have his screen far enough from the prism for the sunlight to be
completely separated into what he considered to be seven colours. He
found that another prism would recombine them, but that light of one colour
passed through a second prism was not further changed. His conclusion was
that white light was a mixture of all the colours of the rainbow.
Glass in Newton’s time was imperfect, and any lines in the spectrum
were likely to be due to strains; but by the early nineteenth century
Fraunhofer was making glass good enough for him to map a whole series of
dark lines in the solar spectrum. Chemists doing analyses were also
interested in the colours different elements gave to flames: though the most
common seemed to be a brilliant orange. In 1860 a chemist and a physicist,
Bunsen and Kirchhoff, began using a spectroscope in analysis; using
samples heated in Bunsen’s newly-invented burner. They found that heated
elements in the form of a gas gave a spectrum of bright lines in a
characteristic pattern: the ubiquitous orange being due to sodium. They then
identified Fraunhofer’s lines as an absorption spectrum; relatively cool
atoms absorb light at the same frequencies that they emit it when hot, so the
dark lines indicate elements in the outer layer of the Sun.
In the following years, spectrum analysis became one of the most active
parts of the sciences, with chemists like William Crookes and astronomers
like Norman Lockyer making a reputation there. Using gratings, notably
those made by the American Rowland, meant that the absolute rather than
the relative frequency of the lines could be determined; and with this
quantitative information, efforts were made to make some sense of the
patterns. This was achieved for hydrogen by the Swiss teacher, Balmer, who
fitted a mathematical series to it. His series was given physical significance
by Bohr, who saw electrons circling around the nucleus in orbits fixed
according to quantum theory. As they moved from one permitted orbit to
another, so they emitted or absorbed light at a definite frequency. Spectra
thus became not merely a valuable method of chemical analysis, but a guide
to atomic structure; and spectra in the visible region were supplemented by
those in the infra-red and ultra-violet, and then in the X-ray region.
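Balmer’s series for the visible lines of hydrogen can be written, in the notation later introduced by Rydberg, as

\[ \frac{1}{\lambda} = R\left(\frac{1}{2^2} - \frac{1}{n^2}\right), \qquad n = 3, 4, 5, \ldots \]

where \(R\) is the Rydberg constant, about \(1.097 \times 10^7\ \mathrm{m}^{-1}\). On Bohr’s interpretation, a line of frequency \(\nu\) corresponds to an electron passing between permitted orbits of energies \(E_2\) and \(E_1\), with \(h\nu = E_2 - E_1\), \(h\) being Planck’s constant.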

Spin is a property of electrons and other sub-atomic particles. Two electrons could occupy the same orbit in Bohr’s atomic model, provided they had opposite spin: +½ and −½. Now they are not seen as little balls, but
as having a wave character also; nevertheless, this property called spin
remains important, and is subject to a conservation principle in atomic
disintegrations.

Spirit has been used in opposition to matter, and its existence is denied in
materialism. In the seventeenth century, brute matter was contrasted with
spirit in the revival of the theory of corpuscles or atoms. In the mid-
nineteenth century, Spiritualism came from America to Europe, and a
number of eminent scientists investigated the curious facts for which
the spirits of the dead seemed responsible. In the absence of any satisfactory
theory, psychical research gradually became less interesting; its
phenomena were also unpredictable and hard to reproduce, unlike for
example those of chemistry which happen any day and anywhere. In
chemistry, when distillation was invented in medieval Sicily it seemed
that the spirit was being extracted from the wine, leaving the body behind;
which is why we still refer to brandy and whisky as ‘spirits’.
J. Oppenheim (1985) The Other World, Cambridge University Press, Cambridge.

Standards in science may mean values, but are generally connected with
units or with purity. The Bureau of Standards in the USA is a body which
publishes and maintains standards; and similarly there are British Standards
for all kinds of materials. Measurement was very difficult until there were
standard units, first within a particular country and then internationally.
Thus in the eighteenth century, there were many rather different standard
inches and pints; and the US and Imperial Gallons still differ, because in the
USA the smaller ‘wine pint’ of 16 oz became the standard unit, rather than
the 20 oz pint used in Britain for other fluids and now standard.
Older standards were ‘artificial’, depending upon particular standard
objects; first perhaps the dimensions of a King, and then on measures kept
somewhere central. When in the nineteenth century, the Houses of
Parliament in London were burnt down, the standard Imperial measures
were destroyed with them; and had to be reconstructed from copies which
had luckily been deposited in other cities.
Scientists hoped for ‘natural’ standards; thus the inch might be defined
in terms of the length of a pendulum which at Greenwich beat seconds. In
France, the new ‘metric system’ set up after the Revolution of 1789 and
ratified by an international conference (only attended by
representatives of governments allied to France) was natural; the metre
being a fraction of the circumference of the Earth. But for practical
purposes a standard bar of platinum was needed; and more accurate
measurements since have shown that the bar does not exactly conform to
the definition. The bar is preferred, so the natural basis is lost: we cannot
manage to conduct affairs with provisional standards which will have to be
revised each generation.
From the 1850s the metric system was enforced in France, and from the
1870s it came into general use for scientific purposes in Britain and the
USA; only the historian of science still has to wrestle with ‘grains’ and
‘lines’, and we may rejoice that we no longer have to buy coal in
‘chaldrons’ which meant something different in Newcastle and in London.
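The ‘natural’ standards mentioned are easily stated. A pendulum beating seconds, with a period of two seconds, has length

\[ L = g\left(\frac{T}{2\pi}\right)^2 \approx 0.994\ \text{metres at Greenwich}; \]

and the metre was defined as one ten-millionth of the distance from pole to equator along the meridian through Paris. Because the Earth is not a perfect sphere, and was imperfectly surveyed, the platinum bar could not conform exactly to that definition.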

State in scientific contexts usually refers to whether a substance is solid, liquid or gas at the temperature and pressure in question. Faraday and
others imagined a fourth state of matter, which would be simpler than the
gaseous state. All gases obey the same laws of expansion and contraction
with heat, whereas solids and liquids all expand differently; so one could
say that gases are simpler, and in a fourth state some chemical differences
might in turn disappear. William Crookes in early experiments with a
cathode-ray tube thought he had got matter at very low pressures into such a
state; but since J. J. Thomson’s work of 1897 the explanation of these
experiments involves electrons rather than a new state of ordinary matter.
Outside scientific theory, the state means the government; and in
scientific institutions, in education, and in the funding of research,
governments everywhere have become increasingly important. Ever since
the Grand Duke of Tuscany appointed Galileo Court Mathematician and
Philosopher, governments have appreciated the prestige that may come
from supporting the sciences; but it is chiefly in the last hundred years that
science has become so ‘big’ that it requires state support. It is unfortunate
that so much of this goes into weapons.

Statistics has always had an intimate connection with the state, as Disraeli’s ‘lies, damn lies, and statistics’ reminds us. In his day ‘vital
statistics’ did not mean what it does today; it had no relation to the human
form divine, but meant birth, marriage and death rates. The governments of
the nineteenth century found themselves suddenly in possession of far more
information about those in their charge than ever before; and statistics
became, to a fact-oriented generation, the key to knowledge and thus to
governing.
John Graunt and William Petty had presented works on ‘political
arithmetic’ to the Royal Society in the 1660s; and Graunt had demonstrated
the constancy of deaths from various causes in London, bringing order out
of what seemed chance or contingency. But his data were poor;
christenings and funerals from parish records, with causes of death supplied
by untrained informants. When with the Revolution of 1688 new financial
institutions were created in Britain, Life Insurance began and needed a
sound statistical base. Halley the astronomer, finding that he could not rely
upon records at home, computed ‘life tables’ from a German city (Breslau)
where they were better. From the early eighteenth century, all over Europe,
actuarial calculation provided useful careers for those trained in
mathematics.
Halley had proposed that the Earth’s distance from the Sun, an important
measurement in astronomy, could be determined from observations made
at great distances apart of the passage of Venus across the Sun. This
happened after his death, in 1761 and 1769; and numerous observatories,
some set up on scientific voyages, followed the Transit. There was in fact
too much material; and when the distance was computed, different
astronomers preferred different observations, so that there was some
uncertainty about the right answer. Gauss found that observations scattered
about the correct value in what he called the ‘error curve’, and we call a
Gaussian distribution. His disciple, Quetelet, found that all sorts of things
besides errors, such as human heights, fell on this curve: his work, done in
Belgium in the 1840s, astonished contemporaries with its demonstration of
the predictability of human behaviour in the mass (especially murder rates),
and aroused anxieties about determinism. Since Gauss, the analysis of
experiments, especially complex ones where several variables are
concerned, has required statistics; but it took time for such mathematical
sophistication to spread through biology and the social sciences.
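Gauss’s ‘error curve’ is the familiar bell shape,

\[ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/2\sigma^2}, \]

where \(\mu\) is the true (or mean) value and \(\sigma\) measures the scatter: about two-thirds of observations fall within one \(\sigma\) of the mean, and some ninety-five per cent within two. It was this curve that Quetelet found fitted human heights as well as astronomical errors.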
These statistics have been based upon induction, trying to bring
certainty out of uncertain data. But there were cases where probability
could be worked out in advance by deduction; for example in games of
chance. De Moivre in the early eighteenth century did calculations of this
kind; and Laplace, the contemporary of Gauss, applied such calculations to
human affairs. Assuming that witnesses told the truth in a given proportion
of cases, and that juries made sensible decisions in another proportion, he
computed what majorities should be required for conviction, given that only
a certain proportion of innocent people should be convicted, and another
proportion of the guilty should be acquitted. Napoleon briefly made Laplace
Home Secretary, but soon concluded that this kind of approach to criminal
justice was not promising.
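A toy calculation in Laplace’s spirit shows the method. If each of three jurors independently reaches the right verdict with probability \(p = 0.8\), the majority is right with probability

\[ p^3 + 3p^2(1 - p) = 0.512 + 0.384 = 0.896; \]

enlarging the panel, or requiring a larger majority, trades the risk of convicting the innocent against that of acquitting the guilty. The numbers here are illustrative, not Laplace’s own.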
For coins, dice and roulette wheels, deductive probability is
appropriate; and when in the eighteenth century it was noted that more boys
were born than girls, this seemed a deviation from the ‘heads or tails’ model
requiring an explanation in terms of God’s providence, perhaps in providing
more soldiers. Similarly, Michell argued that more stars looked close
together than one would expect from a random distribution, and that
therefore some of them must really be paired. This conjecture was
confirmed when William Herschel observed some of them moving round
each other under gravity. Here we find a statistical argument (in this case a
rather dubious one) leading to a scientific discovery; which is something
we are accustomed to in medical research, for example on smoking.
In dealing with large numbers of people, one has to fall back on statistics;
and in physics Maxwell in 1859 introduced statistical explanation, in his
work on gases, which he supposed to be composed of elastic particles
rushing hither and thither. As the gas is heated, the particles will on average
move faster; but their velocities will fall on a Gaussian distribution, and
some in a mass of cool gas will be moving faster than some in a hot one.
Maxwell’s famous ‘demon’ could defy the second law of thermodynamics
because he was small enough to distinguish individual molecules; we
cannot.
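In modern notation, Maxwell’s distribution of molecular speeds in a gas at absolute temperature \(T\) is

\[ f(v) = 4\pi \left(\frac{m}{2\pi kT}\right)^{3/2} v^2\, e^{-mv^2/2kT}, \]

where \(m\) is the mass of a molecule and \(k\) is Boltzmann’s constant; the spread of the curve is what guarantees that some molecules in a cool gas outrun some in a hot one.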
A part of exact science was thus shown to depend on statistical laws; and
with radioactivity and the modern theory of atoms, regularity on the large
scale has been shown to depend on uncertainty at the subatomic level. We
are not so far here from Quetelet and his murder rates; and modern
government depends on statistical information and on opinion polls.
D. A. MacKenzie (1981) Statistics in Britain, 1865–1930, Edinburgh University Press, Edinburgh.
T. M. Porter (1985) ‘The mathematics of society: Variation and error in Quetelet’s statistics’, BJHS,
18, 51–69.

Status is very important in science as in other spheres. It may be that scientists are suspicious of authority, and it is undoubtedly true that the
opinions of some distinguished but elderly scientists are not taken seriously;
but in general there is some kind of consensus about ranking. One way in
which this has always tended to show itself is in a feeling that one acquires
a licence to speculate only by some solid work within the accepted
paradigm. The first pieces of research ought to be only moderately original,
acceptable to examiners and referees; only afterwards, when the game is
mastered, is some revolutionary work acceptable. In 1884, Arrhenius
submitted a thesis suggesting that strong electrolytes like sodium chloride,
NaCl, were dissociated into ions, Na⁺ and Cl⁻, in solution; weak
electrolytes being partially dissociated. This seemed absurd to
contemporaries, who knew how powerful was the reaction between
sodium and chlorine, and did not fully distinguish atoms from ions. The
thesis was also awkwardly on the boundary between chemistry and
physics: it was awarded a bare pass. Arrhenius was fortunate that his work
was taken up by Ostwald, a pioneer of physical chemistry, who enjoyed a
high status and could become his patron.
This kind of thing is not unique to the sciences; and what is perhaps
more interesting is the difference in status assigned to different sciences at
different times. We can perhaps look for the leading science at a given time:
the one which seems most fundamental, so that explanations in others are
couched in terms directly or indirectly from the leading one. Thus in
Antiquity, geometry had a high status; but probably the life sciences,
medicine with zoology and botany, were most important. With the invention
of mechanical clocks in the West, by the sixteenth century mechanical
analogies were coming to seem more persuasive than organic ones; and
by the seventeenth century, with Kepler, Galileo, Huygens and Newton,
astronomy was the leading science. Its status was challenged about 1800 by
chemistry, the science of matter and its forces; then chemistry gave place
to geology and biology, and after Darwin’s theory came out in 1859
evolutionary perspectives were sought everywhere. Conservation of
energy at about the same time brought physics into being, uniting various
separate sciences; and for the first half of the twentieth century physics
enjoyed the highest status. Now perhaps it has gone to molecular biology:
such changes are not merely fashion, but do introduce fertile ideas and
models across the frontiers of sciences and prevent stagnation; they give a
character to all the sciences at a particular time.
R. Paul (1984) ‘German academic science and the Mandarin ethos’, BJHS, 17, 1–29.
Structure may be connected with technology, for the theory of structures
is very important in engineering; where it came to replace rule of thumb,
based upon trial and error, by the end of the eighteenth century, notably in
France. Calculations could be made for buildings, bridges and ships so that
disasters should be less frequent: gradually the design drawing and the
blueprint restricted the craftsman’s freedom and brought about
standardisation and industrial discipline; and a new aesthetic in which unity
of design and repetition was admired rather than variety and individual
decoration.
In chemistry and physics, structure means the arrangement of atoms or
particles underlying the phenomena. Hooke in Micrographia (1665) saw
that different substances had different crystal forms, and tried to connect
this with different arrangements of spherical particles; his friend Boyle
believed that corpuscles of the same basic matter in different structures or
‘textures’ formed all substances. In contrast, Dalton at the beginning of the
nineteenth century saw each element as having a different kind of atom;
and suggested various structures for common substances like water and
carbon dioxide.
Only when Laurent introduced the method of deduction into chemistry
in the 1840s did it become possible to test hypothetical structures; and
organic chemistry became the most exciting part of the science as
analysis and synthesis led to structural formulae. Aromatic compounds
were seen as based on a hexagonal ring of carbon atoms, its stability later
explained in terms of resonance, and aliphatic compounds as having
straight or branching chains of carbon atoms.
Study of the spectrum of the elements at the same period indicated that
atoms too must have a structure, since they vibrate in so complicated a
manner. When in 1897, J. J. Thomson seemed to have discovered Boyle’s
corpuscles in the electron, investigation of atomic structure could get
under way; and soon afterwards X-ray crystallography in the hands of von
Laue and the Braggs opened the way for working out unambiguously how
atoms were placed in crystals. Now the structure of the nucleus and of the
galaxy, the smallest and the largest things, still preoccupies scientists.
A surface is the outside of anything, and is thus the province of geometry.
Many reactions in chemistry take place on surfaces: the more something
is ground up, or ‘triturated’, the greater will be its surface area: and finely-
divided platinum is a very powerful catalyst.

Symmetry is something we expect to find, an indication of order in the world; and yet departures from symmetry are really more interesting. Thus
Pasteur spotted asymmetry in the crystals of some tartrates; a discovery
which proved very important in understanding structure and valency. In
quantum physics some interactions are asymmetrical in that parity is not
conserved; which again has led to fruitful experiment and theory.

Synthesis is opposed to analysis, and means making something: the terms were used in medieval philosophy to mean the process of breaking down a
problem into its parts, and then putting them together again, which was seen
as scientific method. In something like this ideal sense, we speak of the
‘Newtonian synthesis’; meaning that Newton put together insights and
techniques derived from predecessors and contemporaries to form a new
world-view. This could be said of other great scientists too; and it is a way
of reminding us that nobody starts from scratch in creating science.
Chemistry was for long a science based on analysis. By distillation,
and then by electricity, the chemist sought to decompose substances and
find out their elements; and the discovery of an element in the nineteenth
century was a road to fame and fortune. But deciding upon structures was
a difficult matter; and some said that one must stick with ideal types. From
the 1860s Marcellin Berthelot argued that the time had come for a new
chemistry, based upon synthesis: ideally total synthesis, beginning from the
elements. By careful choice of reactions, one could be sure just how the
parts had come together; confirming and going beyond the analysis.
Berthelot, curiously enough, did not believe in atoms; but his successors
have developed his techniques to see how atoms and radicals fit together
in molecules.
Technology is the basis of our society, and it is often thought of as applied
science. Indeed it is because of their real or supposed utility that the
sciences get the funding from government which nowadays they need,
since sealing-wax and string will not get you very far in physics. The
relationship is often described as one in which discoveries are made by
scientists who have no end in view except the knowledge of nature; they
are simply devotees of truth, the modern equivalent of the medieval monk.
Behind them may lurk pure mathematicians, even more remote from the
real world as they work away absent-mindedly with pencil and paper. These
discoveries are then applied to devices of all kinds by technologists, so that
last year’s mathematics becomes this year’s science and next year’s
labour-saving gadget. The prophet of this era was Francis Bacon, who in the
early seventeenth century perceived that knowledge is power.
There is sufficient truth in this picture to make it a caricature; and some
cultures may be better at different stages on the way from an equation
through the laboratory to the pilot plant and the industrial application. But
it was a long time before such a pattern became anything like the rule; and
there is also a good deal of movement the other way. Much science depends
on increasingly sophisticated apparatus, available only through
technology; we can see this in the past, in astronomy, where discoveries
waited on telescopes, and in physics where J. J. Thomson’s discovery of the
electron in 1897 depended upon vacuum pumps. Ornithology took a great
leap with the invention of binoculars; while medicine and embryology were
held up at what we can call a ‘technical frontier’ until achromatic
microscopes were developed and improved in the nineteenth century.
Exciting fields of science have also been opened up by technological
advances or catastrophes. Thus James Watt as an Instrument Maker at the
University of Glasgow was set to repair the model steam-engine used in
lecture demonstrations; he observed how wasteful of heat it was and set
about inventing a better one. In successful partnership with Boulton in
Birmingham he built engines, joined the Lunar Society of scientists and
industrialists, and from steam moved on to fundamental chemical studies on
the composition of water. His improvements to the engine involved some of
the science of the day, but chiefly depended upon practical reasoning,
scientific only in that it was not the rule of thumb on which most early
industry depended. The steam-engine after Watt’s death continued to
stimulate physicists, in a process which led to the science of
thermodynamics. Watt’s concentration on the efficiency of a machine led
him and his successors into abstruse science.
Similarly, work on electric telegraphs and their failures led Wheatstone
and William Thomson (Lord Kelvin) into understanding of the nature of
electricity: but the converse is true of Faraday. His early work was
‘useful’: he analysed whale oil, discovering benzene in the process, he
investigated steels of various compositions, and he made optical glass of
very high refractive index. In the 1830s he resolved to drop these short-term
researches, and devote his life to understanding the nature of electricity and
magnetism. By the end of his life in the 1860s electromagnetism was still of
little apparent use; but from the 1880s the lives of all of us have been
transformed by the harnessing of electricity. Faraday in deciding to devote
himself to pure and long-term research most profoundly changed human
circumstances. Contemporaries were struck by his unworldliness, in giving
up the substantial fees available in consultancy work for an eminent analyst:
choice between science and technology may sometimes still involve fame
or fee.
Once Faraday’s Dynamo could be scaled up and generate power for a
town, electric light (with the new bulbs of Edison and Swan, who were
inventors rather than scientists) became a possibility; but then came
something not uncommon in the history of technology. Gas lighting was
enormously improved by the invention of the mantle. Early gas lights had a
smoky luminous flame, like a Bunsen burner with the air-hole shut; in the
mantle an almost colourless blue flame of gas and air played on a supporting material, suitably impregnated, which gave off a brilliant white light. The electric light thus did not have an easy passage, and did not
become general until well into the twentieth century.
The electrical industry fits the modern pattern, of technology arising from
pure science; and the more modern parts of the chemical industry are
similar. The first synthetic dye, ‘mauve’, was prepared by Perkin at the
Royal College of Chemistry when he was trying to synthesise a known and
quite different substance. He set up a works to make ‘aniline’ dyes from
coal tar, bright colours from blackness in what seemed a kind of alchemy;
but the industry soon passed to Germany where financiers did not expect so
quick a return as in Britain, and were prepared to support research.
In the eighteenth century, science was often seen as mere curiosity, on
which a mature person would not spend too much time; while the Baconian
improvement of man’s estate through industry was wholly desirable. Later,
science seemed estimable while technology with connections to
materialism, pollution and weapons made liberals uneasy. The
connections are very complex; but both are fully human activities.
D. Landes (1986) Unbound Prometheus, new ed., Cambridge University Press, Cambridge.
R. Porter (ed.) (1987) Man Masters Nature, BBC, London.

A telegraph is a way of sending messages over a long distance. It might be done with smoke and fire, or flags, or drums; but the electric telegraph of
the 1830s was the first application of the science of electricity in
technology, though electric shocks had been used in unorthodox medicine,
and lightning conductors had come into use following Franklin’s dangerous
experiment with a kite in a thunderstorm. Charles Wheatstone was the
leading scientist among the pioneers. He had begun as a maker of musical
instruments, and indeed is famous as the inventor of the concertina, a cousin of the accordion so essential to the music of central Europe; but he taught physics at
King’s College, London, and was a close associate of Faraday.
Electric telegraphy, in which impulses caused a needle to point at letters
on a dial, was very valuable for sending information along a railway faster
than a train could go; Wheatstone tried to measure how fast the impulses travelled, and concluded
that they went at much the same speed as light. This made time signals
possible, so that clocks at railway stations could be synchronised and
‘railway time’ used throughout Britain instead of local solar time. Public
attention was directed to the telegraph when it was used to catch a murderer
seen boarding a train, who found police waiting for him when he got off.
Cables were soon being laid across continents, and then under the ocean;
Werner Siemens and William Thomson (Lord Kelvin) being prominent in
these developments. Diplomacy was revolutionised, because ambassadors
and viceroys could be in touch with governments; when mail took months,
such men had had to take initiatives without consultation. Thought could
now be transmitted faster than a man on horseback could go; and believers
in progress dwelt much on this. At the beginning of our century, Marconi
developed ‘wireless telegraphy’, and the age of radio and television began.
B. Bowers (1975) Sir Charles Wheatstone, HMSO, London.

Telescopes were probably invented in Holland about 1600, a long time after
lenses had come into use for reading glasses. Galileo heard about them, and
may even have seen one; and then made his own. He, and Thomas Harriot
in England at the same time, pointed telescopes at the heavens; and
Galileo’s Starry Messenger of 1610 was the first book to describe what one
saw with this new instrument. He identified four moons of Jupiter, many
stars in the Milky Way, and mountains and craters on the Moon; and
interpreted his discoveries as proof that the Earth goes round the Sun.
Later he saw that Venus has phases like the Moon, and observed sunspots.
His book was a great success, and led to his return to his native Florence as
court mathematician and philosopher: later books, written in Italian instead
of Latin, got him into trouble with the Inquisition; evidence from the
telescope could not establish the truth of the Earth’s motion, though it could
make it more plausible.
Galileo could see things nobody before had seen; and clearly astronomy
would never be the same again. Early telescopes were not easy to use; glass
was of poor quality, magnifications were small, and images were fuzzy and
bordered by coloured fringes. Galileo’s had a convex objective, and a
concave eyepiece, like an opera-glass; Kepler suggested making both
convex, though this inverts the image; and Newton urged that to get clear
images telescopes ought to be based on reflection rather than
refraction, using a large curved mirror to collect light. The biggest
telescopes work on this principle; but they were not suitable for carrying
around the world.
By the 1660s telescopes had become essential equipment at observatories.
In the mid-eighteenth century, John Dollond invented the achromatic
lens system, in which compound lenses made of different kinds of glass
replaced single lenses and got rid of the extraneous colours; Captain
Cook and others could then set up observatories at
places like Tahiti, using refractors. In the nineteenth century, photography
increased the value of telescopes, for images could be studied at leisure; and
the spectrum of the stars could also be investigated.
I. B. Cohen (1986) The Birth of a New Physics, new ed., Penguin, Harmondsworth.
G. L. E. Turner (1983) Nineteenth-century Scientific Instruments, Sotheby, London.

A test-tube is a small glass tube closed at one end, and is a vital piece of
apparatus in the laboratory. Faraday in his Chemical Manipulation of
1827 gave instructions on how to make them, but even by that date they
could be bought. In the twentieth century they came to be made of Pyrex glass.
They are used for experiments in which things are dissolved, products of
distillation collected, and analyses performed; in popular science,
phrases like ‘test-tube babies’ make it stand for any laboratory procedure.

A textbook is written for instruction; and generally for a specific kind of
course at a particular level in a system of education, often with an
examination at the end of it. The most successful must have been Euclid’s
Elements; but in the modern world textbooks begin perhaps with
Sacrobosco (John of Holywood) On the Sphere, which set out Aristotelian
astronomy and was one of the great successes of early printing. In the
seventeenth century, Jacques Rohault wrote a standard textbook on the
physics of Descartes; the eighteenth-century English version of this (used
at Cambridge university) had footnotes by the Newtonian Samuel Clarke
bringing it up to date and sometimes contradicting the text.
Some great scientists wrote their own textbooks; Boerhaave at Leiden,
for example, did so after pirated versions of his lectures had begun to
circulate, and his was one of the most important chemistry books of the
eighteenth century. We should not confuse textbooks with popular
science, where writings can be more personal; textbooks present the
paradigm to students. As such, they are very interesting to anybody who
wants to know what was the received view; textbooks show what is
regarded as established knowledge, and are some way behind the frontier.
We can nevertheless see science in progress by comparing editions of the
same book; some have had an astonishing number.
A few textbooks mark a scientific revolution: Lavoisier wrote something
like a textbook to get across his views on oxygen, and Mendeleev worked
out the Periodic Table in the course of writing a textbook. When physics
came into being with conservation of energy in the mid-nineteenth
century, William Thomson and P. G. Tait wrote an advanced textbook to
present the discipline in the new way, A Treatise on Natural Philosophy,
1879; later volumes never appeared because they disagreed over the
appropriate mathematics. But such original approaches are the exception;
textbooks generally are dull but worthy, and do not bring high status to
their authors.

Theory is what makes sense of facts; and science consists of appropriate
theory and authenticated facts. The wider the range of a theory, that is the
more phenomena for which it can account, the more important it is. Some
scientists and philosophers see theories as a tool or instrument, to be
judged in terms of elegance and efficiency, rather than having any
connection with truth. In the late nineteenth century, Ernst Mach argued
that no state of affairs ever entails a theory, and that economy of thought was
the real object of theorising.
In contrast to this, G. G. Stokes, the great classical physicist, wrote in
1849 that ‘A well-established theory is not a mere aid to the memory, but it
professes to make us acquainted with the real processes of nature in
producing observed phenomena.’ This is the way most scientists, at least
when not doing philosophy, see it: they would hope to be finding out the
way things are. But quantum theory in the twentieth century, especially in
the ‘Copenhagen interpretation’ of Niels Bohr, goes with the
instrumentalism of Osiander and Mach: we have various equations, and
wave and particle models, but cannot form a clear picture of the real
processes going on. It may be that this is a temporary business, and that a
new theory will in due course provide an explanation of the kind Stokes
sought in his work on light.
Theories have a close connection with experiment and observation,
because they should be testable, or falsifiable, in principle at least; and
the best theories generally lead also to predictions, which when verified
are very important in persuading the scientific community that a theory is
on the right lines.
Thermodynamics is the science dealing with the relationship between
heat and work. It came into being in the middle of the nineteenth century,
long after the steam-engine had been in use; and ultimately led to greater
efficiency in engines, and to a more profound understanding of
chemistry and physics.
Thermodynamics was from the start based upon laws, for its founders
hoped to escape from the hypotheses which had characterised attempts to
explain heat; and it has always been an austere and mathematical field.
The first law was stated by Helmholtz, although since about 1800 several
people had been struggling towards a clear enunciation of it and had
probably grasped its essentials; it is the principle of conservation of
energy, that energy cannot be created or destroyed. What we see when coal
burns or an electric torch is switched on is the transformation of one kind of
energy, in these cases chemical energy, into another: here heat, and
electricity, which is in its turn transformed into light. There are definite
exchange rates between the forms of energy, and this first law transformed
the way different sciences were seen in the 1850s. It brought about a great
unification in which magnetism, electricity, heat and mechanics were
all grouped together into physics.
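The ‘exchange rates’ can be made concrete with Joule’s mechanical equivalent of heat, given here in modern units rather than in those of the 1850s:

    1 \text{ calorie} \approx 4.18 \text{ joules}

the heat that warms one gram of water through one degree Celsius corresponds to a fixed, definite quantity of mechanical work.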
The second law was worked out in 1824, before the first, by Sadi Carnot,
investigating whether there was a limit to the efficiency of steam-engines.
He proposed an ideal engine working frictionlessly with a perfect gas in the
cylinder; and he used the ideas of reversibility and of the Cycle. His
engine would run backwards, so that it could be used as a refrigerator; and
he showed that run forwards one would get work out of it, while run
backwards it would need to be driven. He used the caloric theory in which
heat was seen as a fluid; and concluded that just as water always runs
downhill, so heat will only flow spontaneously from a hot ‘source’ to a cold
‘sink’. In his ideal cycle, no caloric was lost; but in any actual machine, he
recognised that there would be losses. The highest possible efficiency was
governed by the gap between source and sink; the hotter the source and the
colder the sink, the greater the efficiency, but there was always a definite limit. The
second law may be stated as the proposition that to get heat from a cold
body to a warm one, work must be done.
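In modern notation, which Carnot himself did not have, the limit he found is written with both temperatures on the absolute scale:

    \eta_{\max} = 1 - \frac{T_{\text{sink}}}{T_{\text{source}}}

so an ideal engine working between a source at 400 K and a sink at 300 K could turn at most a quarter of the heat it draws into work.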
After the first law was generally accepted, the caloric theory became
untenable; heat is not a fluid, but a form of energy. William Thomson and
Rudolf Clausius proposed the concept of entropy, which measures the
unavailability of heat; so that in actual processes, the entropy always
increases. This is another way of stating the second law, and implies that
there is a definite direction to time; other laws of physics are timeless. In a
closed system the heat will gradually become less and less available until
everything is lukewarm and there are no sources and sinks differing in
temperature and capable of doing work; this may apply to the universe, and
is called the heat-death.
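In the later formulation, the entropy change when heat Q enters a body reversibly at absolute temperature T, and the second law itself, take the compact forms:

    \Delta S = Q / T, \qquad \Delta S_{\text{total}} \geq 0

with the inequality holding for any actual, irreversible process.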
Thomson, an admirer of Fourier and his analytical theory of heat where
the flow rather than the nature of heat was at issue, saw these laws as
hypothesis-free; and many contemporaries were keen to accept them as
ultimate rather than try to explain them, which would involve hypothesis.
Willard Gibbs, Ostwald and Nernst were among those who made
thermodynamics into a deductive science. But Maxwell in 1859 used
statistics to account for the behaviour of gases using the model of
particles colliding with each other and the walls of the vessel. He
invented a ‘demon’ so small that he could see individual molecules; and
gave him a frictionless trap-door to work. On each side of it was tepid gas;
but the demon let fast-moving particles through one way, and slow-moving
ones through the other way; and thus gradually got faster particles
accumulated on one side and slower on the other. This meant that he had
separated tepid gas into hot and cold portions without doing any work; in
defiance of the second law, which was therefore not absolute but statistical.
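The demon’s procedure is easily caricatured in a few lines of code; this is a toy sketch only, with the particle numbers, the speed scale and the threshold for ‘fast’ all arbitrary choices of ours rather than anything in Maxwell:

    import random
    from statistics import mean

    random.seed(1)
    # Tepid gas: particles with random speeds on each side of the trap-door.
    left = [random.uniform(0.0, 1.0) for _ in range(1000)]
    right = [random.uniform(0.0, 1.0) for _ in range(1000)]
    FAST = 0.5  # the demon's (arbitrary) dividing line between fast and slow

    for _ in range(20000):
        # The demon inspects a particle at the door on each side:
        # fast ones may pass left-to-right, slow ones right-to-left.
        if left:
            p = random.choice(left)
            if p > FAST:
                left.remove(p)
                right.append(p)
        if right:
            p = random.choice(right)
            if p < FAST:
                right.remove(p)
                left.append(p)

    print(mean(left), mean(right))  # the left side is now 'cold', the right 'hot'

No work has been done on the gas, and yet one side has warmed while the other has cooled.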
One response was to say that physics did not have to deal with demons
unless somebody actually produced one; but in fact the statistical basis of
physics has become clearer in the twentieth century with investigations of
radioactivity and the nucleus. So has the value of thermodynamics,
essential for example in understanding rates of chemical reactions; this
great triumph of ‘classical physics’ is still crucial. The second law,
discovered following an ‘erroneous’ hypothesis about heat, and involving
time and chance, is still a somewhat curious though fundamental principle.
M. N. Wise and C. Smith (1986) ‘Measurement, work and industry in Lord Kelvin’s Britain’,
Historical Studies in Physical and Biological Sciences, 17, 147–73.
Time is such an important feature of life that we all know what it is, as St
Augustine said, until we have to explain it. In the seventeenth century came the
discovery of the moons of Jupiter and of the pendulum; measured against
the Sun, these were somewhat irregular, but against sidereal time, based on
the apparent motions of the stars, they were regular. For Newton, the Sun
was a measure of vulgar time, while these others were dealing with absolute
time which flowed regularly. He was criticised by Leibniz, for whom time
was just the order of events and had no independent existence; and in
relativity absolute time has indeed gone. We have also replaced
pendulum clocks by the vibrations of quartz crystals or of atoms, as more
convenient and bringing greater regularity and simplicity into physics.
Early clocks had kept poor time, and needed frequent regulation using a
sundial; and pendulum clocks were not portable. In the eighteenth century
the invention of chronometers (first by John Harrison) meant that
navigators on voyages could conveniently find their longitude by
comparing local time with that of Greenwich. James Cook did this on his
second voyage, comparing the results with those derived from astronomical
phenomena (an eclipse if one was lucky; otherwise careful observations of
the distance between the Sun and Moon), and thus measuring time led to a
determination of place.
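The arithmetic behind this is simple: the Earth turns through 360 degrees in 24 hours, or 15 degrees an hour, so that

    \text{longitude} = 15^{\circ} \times \Delta t \qquad (\Delta t \text{ in hours, between Greenwich and local time})

a navigator whose chronometer reads three in the afternoon at local noon is therefore 45 degrees west of Greenwich.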
In the nineteenth century the telegraph brought time signals and a
uniform system of zones in place of the vulgar local times based on
sundials; railways demanded a precision of timing that coaches had not, and society
became much more time-conscious. In geology the fossil record demanded
an exceedingly long past; while thermodynamics seemed to provide a
measure of the passage of time in the increasing of entropy, and to point to
the end of everything as the universe ran down to a point where everything
was tepid and nothing happened. Contemplation of the tracts of time past
and to come produced a kind of gloom among some Victorian intellectuals,
because to see order in such a world was difficult; but those who embraced
scientism got a melancholy satisfaction from the feeling that the solar
system should see us out. Provided we control pollution and do not use
our weapons, we have a good many million years ahead of us; but perhaps
gloom is more appropriate to us than it was a hundred years ago.
D. Landes (1985) Revolution in Time, Harvard University Press, Cambridge, Mass.
O. Mayr (1986) Authority, Liberty and Automatic Machinery in Early Modern Europe, Johns
Hopkins University Press, Baltimore.

Toys and science do not at first sight go together, but the educational toy
has a long history and the apparatus of one generation, particularly that
used in lecture demonstrations, may become the toy of the next,
teaching scientific principles or perhaps just giving pleasure. There are
some particularly ingenious versions from the nineteenth century.
G. L. E. Turner (1987) ‘Presidential address: Scientific toys’, BJHS, 20, 377–98.

Tradition is an important part of science. Even the most revolutionary of
scientists have to engage with their predecessors: Descartes’ image of a
town-planner starting with empty territory just does not fit what he did, or
anybody else does. Scientists fit formally or informally into schools,
following some paradigm or thinking along the same kind of lines; this is
especially obvious where many eminent people have attended some
particular institution or laboratory and share a world-view and aims.
Science is a blend of tradition and innovation; and indeed innovation is only
possible against a background of tradition.
Tradition is very obvious in scientific education, which has been
described as inculcation of dogma. The student is not encouraged to think
original thoughts, but is drilled; being made to learn facts and theories,
hard to distinguish even for the sophisticated, until about the level of a
Bachelor’s degree. In the past, there were strong national traditions:
chemistry in eighteenth-century France, Germany and Britain covered
rather different (though overlapping) areas of knowledge, different
techniques were fashionable, and different explanations were preferred.
The same differences could exist on a smaller scale; in mid-nineteenth-century
London, students at King’s College got a course in which physical
chemistry was predominant, while at University College the emphasis was
on analysis.
Those outstanding scientists who have founded a school, becoming
fathers to whole generations, like Liebig in the last century and Rutherford
in ours, modify existing traditions and pass on a new one; in both these
cases, an experimental tradition. The sciences are more cosmopolitan now,
and national traditions may be little more important than institutional ones;
but frontiers between sciences are differently drawn in different places, and
contacts between experimentalists and theoreticians may or may not
happen. The study of spectra was immensely advanced in the mid-
nineteenth century when a physicist and a chemist, Kirchhoff and Bunsen,
got together in a partnership that was rare at that time: these sciences had
distinct traditions.
When we look at Harvey, Lavoisier or Einstein, we may be tempted to
ask what’s new; and to place them on pedestals as those who overthrew
error and combated ignorance. It is better to place them in the context of
the traditions available at their time, and to see how they modified them and
extended the possibilities for science.
M. P. Crosland (ed.) (1975) The Emergence of Science in Western Europe, Macmillan, London.
R. Porter (ed.) (1987) Man Masters Nature, BBC, London.

Transition means a change; the transition-temperature for a solid is that at
which it changes from one crystal form to another, for example. Within
any hierarchy there are transitions; and the principle of continuity, that
Nature makes no jumps, has been held by many scientists: for example
in discussing evolution, or in criticising theories of absolutely hard
atoms. The transitions or frontiers between sciences are clearly artificial,
but may be obstacles to communication or promising areas for discovery
depending on one’s temperament and position.
Transition metals are those like iron, chromium and manganese,
occupying the middle of the Periodic table. In these Elements,
Electrons are believed to be going into inner Orbits as we go through the
family; so the metals have similar Properties and variable Valency.

Truth is what many believe that the sciences aim at; and they mean by it
consistency with the facts of the world. Pilate’s question, ‘What is Truth?’,
they see, like Bacon, as the jest of a man who would not stay for an answer.
The problem is that facts are problematic; and selection of relevant facts
depends on the world-view, or in science the paradigm or the theory, held
by the observer. Science may aim at truth, but can we know that we have
got there? In the last century, the wave theory of light seemed so firmly
established that its truth was beyond question; whereas since Einstein’s
work of 1905 we have come to accept that it is only a half-truth. Well-
established theories, and even sciences (like classical physics), may face
falsification.
An alternative view is that truth describes coherence. It is therefore
subject to time and place: the wave theory of light was true a hundred years
ago, and phlogiston theory a hundred years before that. This fits with our
perception of change within science; but it makes science an intellectual
game rather than an earnest search in which at the end we shall see the
world as God sees it. Whether either view makes much difference in the
practical affairs of life is doubtful: if we can never be sure that we have
reached truth, we are in no worse a position than the judge and jury; and in
science we must no doubt do our best to avoid error. We can also
remember that falsification is possible, subject to ad hoc hypotheses
which may rightly be added, although confirmation never is. If science is a
search for truth, it is also a never-ending quest.
K. R. Popper (1972) Objective Knowledge, Oxford University Press, Oxford.

Type was a word much used in contexts of religion: where Joshua who
led his people into the Promised Land was, for example, a type of his
namesake Jesus. In the sciences a type is an actual or ideal entity to which
others are an approximation. Thus the first plant or animal of a species to be
described in the literature is the type-specimen, against which others
may be compared if it, or an illustration, survives. But especially in the
nineteenth century, the term was extended to higher levels of
classification: so that all crustacea, for example, were seen as
manifestations of one type, although crabs, lobsters and barnacles all look
rather different to the untrained eye. This led to a rather Platonic biology,
favoured in German Naturphilosophie and by Darwin’s great opponent,
Richard Owen; which became obsolete with general acceptance of
evolution. Nevertheless, Darwin’s ally Huxley taught physiology using the
crayfish as the type; students who had learned everything about crayfish
could then extend their ideas into other species.
In chemistry, ideal structures formed the basis for a theory of types,
made popular by Laurent and Gerhardt in the middle of the nineteenth
century. Laurent hoped that from imagined structures consequences could
be derived by deduction, which could confirm or falsify them. Gerhardt
was pessimistic about ever knowing structures; and for him the types, such
as that for water OH2, ammonia NH3, and methane CH4, were convenient
ways of grouping compounds. The success of Laurent’s programme in
structural organic chemistry, later extended and confirmed by X-ray work
on crystals, made this restricted theory of types also obsolete by the later
nineteenth century.
Uniformity of Nature is a basic assumption for all science. If
contingency ruled everywhere, and no prediction was possible, then
there would be no point in wasting one’s time looking for laws. In the
method of induction, uniformity is taken for granted, for otherwise there
would be no hope of extrapolating from past experience into the future.
Similarly, deduction depends upon there being a stable classification of
things, so that what is true of some members of a group will be true of all;
properties are constant.
This means that the idea functions as a principle; ‘Nature is Uniform’
might be a kind of text put up over the laboratory door. If an apparent
breach of the principle were to be observed, such as something falling
upwards or someone growing younger, we would not feel that this was an
interesting exception to the rule, but would look for some other explanation.
In its common use, the term miracle is taken to be a breach of uniformity;
and the successes of the sciences over the last three or four centuries mean
that miracles are now an embarrassment in religion rather than a support.
The argument for the existence of God from design was adapted, notably by
William Paley about 1800, to emphasise the foresight in planning a uniform
world rather than the interventions of Providence to correct injustices. That
science seems to work is not a proof of nature’s uniformity; it might just
mean that we had concentrated on uniformities and ignored the unexpected
and the novel; but on the whole it does seem to give strong support to the
principle.
In geology the principle was explicitly adopted in the famous book by
Charles Lyell, Principles of Geology, 1830–32. This was an attempt to
account for past changes in terms of causes acting at the present day; in
deliberate opposition to those who had invoked catastrophes such as Noah’s
Flood, which were scientifically inexplicable. This step is often seen as
making geology into a science. In fact catastrophists such as Cuvier in
France, and Lyell’s teacher Buckland in England, had used the principle of
uniformity as a confirmation of their views. The quick-frozen mammoths of
Siberia, the various past faunas of Montmartre, and the hyaena bones of
Yorkshire caves, all seemed to point to a series of disasters at intervals in
the past. Use of the uniformity principle does not in itself lead to any
particular version of science. In this case Lyell’s came to prevail, though
geologists found themselves soon afterwards having to incorporate the Ice
Age into their scheme: which seemed like a catastrophe, but could be
handled as something surprising and requiring explanation.
Uniformity is therefore a part of any conceivable paradigm in the
sciences; but it functions as something which cannot itself be tested, as a
principle, rather than as a proposition open to falsification.

Units, from unit meaning one, are what we express quantities in. There are
various national and international standards which have brought order
into what was in the past arbitrary. Conservation of energy brought the
realisation, especially in Helmholtz, that all manifestations of energy must
be expressible in the same dimensions of mass, length and time; and thus
connected the various units in which heat, electricity and mechanical
work had been expressed. Since the ‘metric system’ was set up in
Revolutionary France, and accepted at an international conference
attended only by France’s allies, it has gradually spread and has become the basis of
all the units of science.
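Helmholtz’s point can be put in modern dimensional notation: every form of energy, however it was first measured, reduces to the same combination of mass, length and time,

    [E] = M L^{2} T^{-2}

whether as kinetic energy ½mv², as work (force times distance), or as heat converted through Joule’s equivalent.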

The medieval university taught astronomy and geometry with arithmetic
and music as the ‘quadrivium’ in the philosophy faculty; coming after the
‘trivial’ subjects of grammar, logic and rhetoric, and leading on to the
higher faculties of theology, law and medicine. Basic science was thus an
essential part of higher education; but it was taught as a body of
knowledge, as dogma, rather than as something progressive. Universities
down to the nineteenth century were not training those who were going to
spend their lives in scientific research; science was not yet a profession,
but part of the common culture which those being prepared for the
professions – clergy, lawyers and doctors – should know.
In the seventeenth and eighteenth centuries, most research took place
outside universities; in academies, or under private patronage like
Priestley’s work on gases done when he was Librarian to Lord Shelburne.
The Universities of Leiden early in the eighteenth century and of Göttingen
later were centres of both research and teaching; but it was the Ecole
Polytechnique of Revolutionary Paris which in the 1790s closely combined
the two, so that distinguished researchers were teaching the latest science
and technology to undergraduate students. The University of Berlin,
planned by Wilhelm von Humboldt (brother of Alexander, the eminent
explorer and scientist), became in the 1810s the new model, in which the
increase of knowledge and the intellectual development of the students
were both crucial in all Faculties.
In Britain and America, the old pattern of universities as institutions
teaching well-established knowledge, mostly to those intending to become
clergymen, lasted well into the nineteenth century; but Edinburgh, London,
Manchester and Johns Hopkins came closer to the German model, and in
the second half of the century Oxford and Cambridge, Harvard and Yale all
began to conform to it also. Utility became less prominent than it had
been, and the new ideal was of a liberal education; but scientists urged that
the sciences would provide this, and that one need not get a classical or
mathematical training first and then take up science. Instead of being part of
the preliminary training for everybody, the sciences became disciplines to
be studied instead of, for example, Modern Languages, History, or Law.
The University became by the end of the century a centre where numerous
specialised activities were going on, and the chief centre for scientific research,
despite some challenges from separate research institutes.
Since then research has become increasingly expensive, and more and
more a matter of teamwork; and it has also got further again from under-
graduate teaching, which tends always to lapse into dogma. But separation
of research from teaching in different institutions does not in science or in
humanities seem to be the answer: universities have to be centres of
learning, and those who are not learning will not be good teachers; and
conversely there is nothing like teaching for clearing the head. We may
hope that the Berlin model will survive into the twenty-first century.
H. Becher (1986) ‘Voluntary science in Cambridge University to the 1850s’, BJHS, 19, 57–87.
C. E. McClelland (1980) State, Society, and University in Germany 1700–1914, Cambridge
University Press, Cambridge.
M. Sanderson (ed.) (1975) The Universities in the Nineteenth Century, Routledge & Kegan Paul,
London.

Utility is what governments have in mind when they support science
through an academy or a museum, or through education. Bacon early in the
seventeenth century urged that knowledge is power; but he also believed
that experiments of light must precede experiments of fruit. The
relationship of science and technology is a complex one, with influences
both ways; on apparatus as well as on engines and weapons. But it is
generally supposed that all discoveries will lead to something useful.
There is a story that Robert Peel, the statesman, asked Faraday what use his
new dynamo was; Faraday replied that he did not know, but ‘I wager that
one day your government will tax it.’
Faraday’s career can illuminate the question of utility. In the 1820s, after
isolating benzene in a pioneering piece of fractional distillation of
hydrocarbons, he undertook two pieces of research in applied science, on
steel and on optical glass. In India very superior steel called ‘wootz’ was
used for swords, and on behalf of the Royal Society Faraday investigated it
and other steels chemically. He also tried to make glass of very high
refractive index. These researches were inconclusive; and in the 1830s he
began the work in electricity and magnetism which at the time had no
obvious use but which has transformed his world into ours. It led to the
power station, the transformer and the electric motor, and also to the
cathode-ray tube. Where his generation depended on horses and steam-
engines, we rely on electric power and internal combustion engines, fuelled
by the products of the fractional distillation of crude oil.
To aim at utility in this case led to some moderately interesting results,
but it was only when Faraday dropped these researches in favour of
abstruse investigations which excited him that he made discoveries which
were to be of enormous utility. It would be rash to generalise from this
example, but in the long run the fruits of what we would call ‘pure’ science
may well be greater than those which seem more immediately useful. It is
sometimes suggested that British science has been particularly ivory-
towered; but this is not what struck foreign observers in the nineteenth
century, like the great chemist Liebig; he saw the English disease as a
matter of seeking immediate usefulness and profits. We should expect with
Bacon that a real search for understanding the world will in due course lead
to changing it; and for the better, we hope.
R. Fox (1984) ‘Science, industry and the social order in Mulhouse, 1798–1871’, BJHS, 17, 127–68.
J. Morrell (1985) ‘Wissenschaft in Worstedopolis: Public science in Bradford, 1800–1850’, BJHS,
18, 1–23.
A vacuum is a void space. Those ancient Greeks who believed in atoms
also postulated void; but Aristotle believed it impossible. In more modern
times, Descartes also thought that where there was space there must be
matter; but above barometer-tubes or air-pumps it must be very subtle
undetectable matter, a kind of ether. Atomists of the seventeenth century
like Boyle and Newton came to believe in the vacuum, especially because
there seemed to be nothing slowing up planets and comets in their orbits.
This meant for Newton that gravity must work across empty space; he had
no explanation for this, and was somewhat embarrassed about it. Forces
acting at a distance across voids worried Faraday in the mid-nineteenth
century also, and led him to his conception of the field.
Boyle was more interested in air-pumps than in astronomy; and was
convinced that he was getting close to a vacuum in his receiver. Bells
ringing there became inaudible as the pressure dropped, birds died, and
feathers fell as fast as bullets. To the philosopher, a vacuum means the
absolute absence of matter; to Boyle it could be better or worse depending
on the state of his pump that day. It is in Boyle’s sense that scientists tend
to use the word; and in the nineteenth century air-pumps were improved
until they became something like the high-vacuum pumps they were
claimed to be.
Under these conditions all sorts of interesting electrical phenomena were
observed, and the conduction of electricity through gases became,
following Faraday’s work, a major area of interest in the later nineteenth
century. Electric light bulbs, cathode-ray tubes, X-rays and then early
electronic apparatus were the fruit of this; with the discovery of the
electron in 1897 by J. J. Thomson being perhaps the most important
landmark. Attempts to attain a vacuum led to understanding of the nature of
matter.

Valency, or valence in the USA, is the combining power of an element.
Some, like iron, have more than one, and form different series of
compounds. The idea that each element has a definite valency was stated by
Couper and then by Kekulé, who in his work on aromatic compounds and
on structure generally revolutionised chemistry from the 1860s. From
1916 G. N. Lewis gave an explanation of valency and affinity in terms of
the giving, accepting and sharing of electrons between two or more
atoms; and this has been the basis of understanding ever since.
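Iron shows what variable valency means in practice: it forms two distinct series of salts, which on Lewis’s picture correspond to the atom giving up different numbers of electrons,

    \mathrm{FeCl_2} \ (\text{valency 2}) \qquad \mathrm{FeCl_3} \ (\text{valency 3})

the old ‘ferrous’ and ‘ferric’ compounds respectively.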

Values are something scientists are uneasy about. Science involves
certain moral characteristics if it is properly performed: honesty, openness,
readiness to admit error, and a capacity to rejoice in others’ successes in a
great enterprise. These things make the sciences an important component
of a liberal education; but clearly science is practised by people rather than
angels, and not all scientists live up to these ideals. Fraud and error and
persistence in paradigms are all frequently found, and some sociologists of
science see it as careerism: naked apes in search of a double helix.
From the seventeenth century, the idea was that knowledge of nature was
based on observation rather than authority, and was independent of
politics and religion. Cavaliers and Roundheads, Catholics and
Protestants, could all agree about chemistry or perhaps even astronomy.
Science was the search after truth, to be pursued dispassionately. But ideas
flowed back and forth between the sciences and other fields: the great
Encyclopédie of Diderot and d’Alembert alarmed the French government in
the mid-eighteenth century, Malthus’ Essay on Population of 1798 was an
explosive political document, and Darwin’s theory of evolution of 1859
brought ideas from political economy into biology. Historians of science
have increasingly detected religious and political beliefs lying behind even
such austere parts of science as Newton’s theory of gravity and Galton’s
statistics.
Scientists have continued to claim independence from such things for
science, while admitting that in arriving at theories people’s minds move in
mysterious ways: but the testing, the attempted falsification, is
objective. But the search for knowledge is itself ambiguous. Gentlemen in
the seventeenth century were warned of the danger of overdoing it; of
becoming narrow-minded, knowledgeable rather than wise. In our time
these dangers are more obvious, with research on weapons being probably
the most active sector of science, and with a good deal of other research
lacking profound interest or the promise of much utility. Philosophy of
science has concentrated on the logic of explanation, and sometimes
looked at discovery; but has not paid much attention to morality. Some,
like Nicholas Maxwell, hope that the course of science could be diverted so
that it aimed for wisdom rather than knowledge; others of us are gloomier
about human nature and uneasy about scientism, and hope rather that
science will be complemented (and perhaps enriched) by the different kinds
of thinking required in the Humanities.
N. Maxwell (1984) From Knowledge to Wisdom, Basil Blackwell, Oxford.

A vector is a quantity which has both size and direction: thus velocity is a
vector, while speed is a ‘scalar’ quantity. Two cars may be moving at the
same speed on a narrow winding road, but it makes a difference whether
their velocities are the same or opposite. The handling of quaternions and
vectors was an important part of nineteenth-century mathematics.
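In one dimension the distinction can be put symbolically: on that winding road the two cars may have

    v_1 = +50 \ \mathrm{km/h}, \quad v_2 = -50 \ \mathrm{km/h}, \qquad |v_1| = |v_2| = 50 \ \mathrm{km/h}

equal speeds (the magnitudes), but velocities of opposite sign, the sign standing for direction.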

Viscosity is a measure of the resistance a liquid or gas makes to motion
within it. G. G. Stokes in the last century came up with the law that the
heavier a body is, the faster it falls through viscous fluids; and Maxwell
deduced from the kinetic theory of gases, and showed by experiment, that
the viscosity of gases is independent of pressure. Generally, viscous fluids
are thick, like paint; they may also be ‘viscid’ if they are sticky, like treacle.
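Stokes’s result is now usually quoted as his drag law for a small sphere of radius r moving slowly at speed v through a fluid of viscosity η, from which the terminal velocity of a falling sphere follows:

    F = 6 \pi \eta r v, \qquad v_{\text{terminal}} = \frac{2 r^{2} g (\rho_{\text{sphere}} - \rho_{\text{fluid}})}{9 \eta}

it is really the density difference and the size, rather than bare weight, that set the rate of fall.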

Voyages have been a metaphor for scientific activity ever since Bacon used
on his title-page the arms of Columbus, showing ships going through the
Pillars of Hercules and out into the ocean of undiscovered truth.
Wordsworth saw Newton as a voyager through strange seas of thought,
alone; the scientist is a kind of Ancient Mariner, with a gripping and
extraordinary yarn for us. But voyages have also been an important part of
science itself, where many like Darwin learned their craft and made
observations impossible for stay-at-homes.
In the middle of the eighteenth century, La Condamine went with an
expedition to survey the frontier between Peru and Brazil, based on a line
drawn on the map by an earlier Pope; and he found both that the Earth was
flattened at the Poles, and that the Andes pulled his plumb-line out of true;
both observations verifying Newton’s theory of gravitation. Then in the
1760s came tables of the Moon’s motions, enabling longitude to be
calculated at sea; and shortly afterwards the first chronometers, providing a
direct comparison of local time with that at Greenwich or Paris, and thus
another determination of longitude. Accurate charting far from home was
now possible, and Bougainville and Cook took advantage of it. On his
voyages, Cook was accompanied by astronomers and naturalists; his
botanist, Joseph Banks, found plenty to do at Botany Bay, and after his
return to Britain was President of the Royal Society from 1778 until 1820,
and encouraged further scientific voyages.
Under French auspices, Alexander von Humboldt voyaged to tropical
America, making observations in all branches of natural history, as well as
political economy and anthropology; indeed he was a founder of scientific
geography. Darwin much admired his account of his travels, and saw
himself as extending Humboldt’s work further to the south. Voyages to the
far north also, in search of quicker passages to the East, were important for
geophysical measurements, of terrestrial magnetism and with pendulums:
observatories were set up, and under Humboldt’s influence international
cooperation produced great quantities of data; while Edward Sabine
became, like Banks, President of the Royal Society.
Cook’s voyages, and that of HMS Beagle and other survey ships, were
primarily concerned with charting coastlines; but by the second half of the
nineteenth century there was increasing interest in the deep seas, and the
voyage of HMS Challenger in 1872–76 marks the beginning of
oceanography. This ship carried a laboratory, and was specially adapted for
a scientific voyage. Earlier scientists had often been exasperated at the lack
of room on board ship, and at the way their time ashore was restricted
because the boats were often needed for survey work, and the naturalists
could not be allowed to wander away among suspicious or hostile natives. It
is striking how much work was done by those such as Sabine, Darwin,
Huxley and Joseph Hooker who made the best of it all.
J. F. W. Herschel (ed.) (1974) The Admiralty Manual of Scientific Enquiry (1851 ed.), reprinted,
Dawson, London, introduction by D. M. Knight.
A. Moyal (1986) A Bright and Savage Land, Collins, Sydney.
Wave motions we are familiar with from the sea. When we watch the
rollers coming in from the Atlantic Ocean, the movement of the molecules
of water is only up and down. The wave may travel from Cape Cod
to Land’s End, but the water does not. Water-waves are called transverse,
because the motion is at right-angles to the line in which the wave is going.
In the sea, the particles can only move up and down; but one can imagine
three-dimensional waves in which the vibrations were in all possible planes
at right-angles to the direction of propagation. Another sort of wave is the
longitudinal; here the particles vibrate in the line of propagation. Sound
waves are supposed to be of this kind. The molecules of air move back and
forth, none coming all the way from the source to our ear.
Light was more problematic. The waves of the sea will go round corners,
while light goes in straight lines and casts sharp shadows. Newton could not
believe therefore that it could be wave-motion, although he was aware of
some phenomena (like what we call ‘Newton’s Rings’, seen when a lens is
pressed onto a flat glass plate) which seemed to demand an explanation in
terms of waves. His contemporary, Christiaan Huygens, tried to develop a
wave theory, but it was unsatisfactory and did not catch on.
In the early nineteenth century, first Thomas Young, who turned his hand
to many things but finished few of them, and then A. J. Fresnel, sought to
demonstrate that a wave theory could explain all the phenomena of light.
The key was a model of transverse waves in three dimensions; and the
polarisation of light could be explained in terms of all of its vibrations being
in the same plane. Phenomena, like the fringes Young observed when light
was passed through two nearby slits in a card and fell onto a screen, could
be readily explained if light were waves: which interfere, just as those do
when two pebbles are thrown into a puddle. By 1840 it was accepted that
light was waves in the ether. The particle theory was to be revived in our
century in the work of Einstein; and we now have to live with wave/particle
dualism.
The distance between successive crests (or troughs) is the wavelength; the height of
crests gives the amplitude; and the number of oscillations per second gives
the frequency. The mathematics of wave-motions is elegant; and in
physics the study of waves is fundamental to the study of matter.
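The three quantities are tied together by one fundamental relation: the speed of any wave is its frequency multiplied by its wavelength,

    v = f \lambda

so that sound travelling at about 340 metres per second, sounding at 340 vibrations per second, has a wavelength of one metre.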
Any weapon can be much improved by the application of science.
Scientists like to feel that penicillin was a great achievement of science, and
the atomic bomb the work of politicians and generals; but the connection
between science and warfare goes back long before 1945. It may indeed be
that science lost its innocence then: in the last century T. H. Huxley could
say that science (unlike religion) had never done anyone any harm, but it
was not quite true then and now seems very implausible. This is not to say
that science had done more harm than good, but only that it is a human
activity with potential for good and ill and not simply beneficent.
Archimedes designed weapons to defend his native Syracuse against the
Roman army; Galileo was proud of his work on the path of projectiles,
which he showed to be a parabola; and the first President of the Royal
Society in the 1660s, Lord Brouncker, published papers on the recoil of
guns. The great chemist Lavoisier owed some of the unpopularity, which
led him to the guillotine, to his work on the improvement of gunpowder,
which involved collecting saltpetre from cellars and outhouses. His
research meant that French gunpowder, which had been inferior to British
in the Seven Years’ War, was superior by the War of American
Independence: the embattled farmers, Minute Men, could shoot further and
more reliably than the redcoats.
Despite these things, science about 1800 was not generally perceived as
an activity closely relevant to war. Davy was awarded a prize for his work
in electricity by the French Academy of Sciences, and in 1813 went to
Paris to collect it despite the Napoleonic Wars. Speeches were made about
how this proved that the sciences were above the wars of kings; Davy
seems to have seen his role as fighting in the cultural realm, proving that
chemistry was not a French science. He had backed a gunpowder
manufactory which actually managed to lose money during the war; but
otherwise his had in no sense been war work.
Sadi Carnot, the pioneer of thermodynamics, perceived that it was the
steam-engine, and thus Britain’s industrial power, which had won the war.
But technology was still barely a matter of applied science; and it was not
until the synthetic dye industry began in the middle of the century that
professionally trained scientists were in the forefront of technical
innovation. Out of this work came new explosives: the propellant
guncotton, and high explosives like nitro-glycerine.
By the Crimean War in the 1850s rifles were replacing muskets and
highly-visible red coats coming to seem a poor kind of uniform; but this
was a step owing little to science. On the other hand, Faraday was asked
about the possibility of smoking the Russians out of Kronstadt, their great
base in the Baltic, with chlorine; he felt it was impractical, but does not
seem to have believed it immoral. As artillery both on land and at sea was
improved, so mathematical skills were increasingly needed; and systems of
fire control gave opportunities for officers trained in the sciences. The first
institution in which teaching and research in the sciences were closely
combined was the Ecole Polytechnique in Paris, a military school; and this
example was followed, with less distinguished science, at Woolwich and
other training centres for officers.
The twentieth-century wars of peoples proved indeed more terrible than
those of kings; and have all closely involved scientists. Whereas Davy was
fêted in Paris, a century on in the Great War of 1914–18 all Germans were
expelled from the Royal Society and it was not until well into the 1920s that
Germans were able to play a full role in international conferences and
other activities of the international scientific community. In that war,
chlorine was indeed used with terrible results; and the war was even made
possible by Fritz Haber’s process for ‘fixing’ nitrogen from the air to make
nitrates, scraping cellars no longer being sufficient. Haber, who was
responsible for the chlorine campaign, was awarded a Nobel Prize after the
war; his process could be, and is, used in fertilisers but in fact down to 1918
it had provided explosives.
Since then the relationship between science and defence has got ever
closer; radar, machines for coding and for breaking codes, ballistic missiles,
and nuclear weapons were all results of World War Two, and have been
much improved since. Without the pressures of war we might never have
had penicillin; and certainly electronics has benefited from the no-expense-
spared kind of research associated with the military. Defending freedom and
democracy, in which science can probably best flourish anyway, is a good
thing to do; but there seems little doubt that defence-spending distorts
science and technology, and that most of the ‘spinoffs’ could have been
achieved rather more cheaply. The most interesting problems are not
usually the short-term ones whose solutions are demanded by government
agencies; and science would do better if it got the same money with fewer
strings attached – though this may be a remote hope!
Where involvement with weapons is very sad is that it brings in secrecy.
This has always been associated with technology, where innovation means
money and where industrial espionage is a very old profession; but
science has always been public knowledge, where publication brings
prestige and also speeds progress. Those studying the sciences should think
about such problems; otherwise their education may leave them as
unprepared for life as the legendary Irish girl arriving at Euston station.

Weather is always with us, especially in Britain; and it has always been
something where prediction is important. The first published weather
forecasts were made by Admiral FitzRoy, who had been Captain of HMS
Beagle on Darwin’s voyage; he relied on reports sent by telegraph from
coastguards and others. Attacks on the system for inaccuracy led in part to
his suicide in 1865. In observatories such as that at Kew more systematic
study had been going on, but scientists there did not feel the time was
ripe for publishing forecasts; the forecasts were indeed suspended, but
restarted by public demand from 1867 and fully restored ten years later.
Even inaccurate forecasts are felt to be better than nothing.
J. Burton (1986) ‘Robert FitzRoy and the early history of the Meteorological Office’, BJHS, 19,
147–76.
J. L. Davis (1984) ‘Weather forecasting … at the Paris Observatory, 1853–1878’, Annals of Science,
41, 359–82.

Weight is readily measured by the extension or compression of a spring; it
is a force. It varies from place to place: in a spaceship we might be
weightless, and on the Moon we would weigh much less than on Earth; and
it is thus distinguished from mass, the quantity of matter in anything.
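In symbols, weight is mass multiplied by the local strength of gravity, which is why the one varies while the other does not:

    W = mg, \qquad g_{\text{Moon}} \approx \tfrac{1}{6} \, g_{\text{Earth}}

an astronaut of 60 kg mass has that mass everywhere, but weighs only about a sixth as much on the Moon.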

Women, said the historian H. T. Buckle in a lecture at the Royal
Institution in 1858, think quicker than men, and are therefore more
accustomed to deduction. Since he thought that science would not
progress fast enough if left to the empiricism and induction natural to the
masculine mind, he believed that women played an essential role in the
sciences. But his examples of deductive thinkers were Newton and
Goethe; for him, women essentially provided support and encouragement
for bold thinking. He noted how many eminent men had remarkable
mothers.
There were examples which he might have used of women who had
made important contributions to science; though whether they were
particularly intuitive and deductive thinkers is an open question. Mme
Merian the entomologist, and Mme du Chatelet who translated Newton into
French, were eminent in science in previous centuries; and in the nineteenth
century the number of women prominent in the scientific community began
to rise. Caroline Herschel worked with her brother William on the frontier
of astronomy; and Jane Marcet wrote introductory books in the form of
conversations on chemistry, natural philosophy, and political economy.
Her most successful readers were Faraday and John Stuart Mill; all the
characters in her books are female, but she made no concessions to
frivolous ‘femininity’. Women came to lectures at the Royal Institution and
the British Association; when they were present, the subject was presented
in a more general and less narrowly professional manner.
At a higher level, Mary Somerville wrote books which aimed to
overcome the specialisation characteristic of the nineteenth century and
since; and which proved highly suggestive to some of those working
towards the conception of conservation of energy, for example. She was
very well-read, but considered herself unoriginal; no discovery is attached
to her name, but there can be originality in a synthesis in which existing
knowledge is put together, and there is no doubt of the power of her mind.
Other women distinguished themselves as mathematicians, or as
illustrators especially in botany; sometimes also as travellers, like Marianne
North, whose works are in a special gallery at Kew Gardens, and who was
one of a number of intrepid Victorian ladies who went exploring. But right
through the century women could not join most scientific societies or
academies, and could not give lectures or publish papers in scientific
journals. Opportunities for men to learn science formally were few in the
first sixty or seventy years of the century, so women were not at such a
disadvantage there as one might expect; but while there were few
opportunities for men to make a career, there were almost none for women.
It is only in the twentieth century that anything like equal opportunities
for women in science have become available; whether their minds are
quicker is uncertain, but past examples indicate that women can shine in
mathematics and physical sciences as well as in descriptive ones, and can
think empirically as well as intuitively.
L. Schiebinger (1987) ‘Maria Winkelmann at the Berlin Academy: A turning point for women in
science’, Isis, 78, 174–200.

Work is done when a force moves through a distance. Energy is capacity
for doing work. These quantities were not clearly defined until the
nineteenth century; before that they were words in common use, without an
exact sense in science, whose language often develops that way. It is odd
to those studying physics to find that no work in the technical sense is done
in merely holding something up; and counter-intuitive to the intellectual to
find that no work is involved in mere thought; but luckily scientific usage
does not eliminate ordinary language.
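The technical definition makes the oddity plain: work is force multiplied by the distance moved in the direction of the force,

    W = F d

and in holding a weight still the distance moved is zero, so that no work, in this sense, is done.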
X-rays were discovered by W. C. Roentgen in 1895 when he found a
mysterious radiation coming from a cathode-ray tube, which affected
photographic plates. When his wife’s hand was between the tube and the
plate, the photograph showed the bones of her hand. The rays attracted
enormous attention for their medical usefulness. They became important in
physics when in 1912 Max von Laue showed that they are diffracted in
passing through crystals, proving that they are waves, like light but of
very short wavelength. W. L. and W. H. Bragg worked out a method of
using X-rays to determine crystal structures; while H. Moseley used X-ray
spectra to find the atomic numbers of the elements.
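The Braggs’ method rests on a single relation, now called Bragg’s law, connecting the wavelength λ, the spacing d between planes of atoms in the crystal, and the angles θ at which reflection is strong:

    n \lambda = 2 d \sin \theta \qquad (n = 1, 2, 3, \ldots)

knowing λ, the spacing of the atomic planes can be read off from the observed angles.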
Zero, or nothing, is a very valuable symbol because it made possible a
place-value notation. The ancient Greeks, Romans and Hebrews had letters
for numbers; and while they could no doubt perform rapid calculations on
an abacus, simple arithmetic would have been very clumsy, and one could
worry about numbers too big to be expressed because letters would have
run out. The Hindus seem to have invented the numbers we call Arabic, in
which the value of a number depends on its place; the figure 2 may
represent hundreds, tens or units for example. Having a symbol for zero is
essential in such a system for writing 20, 202 and so on; and it is then easy
to see how the Arabic numbers superseded the Roman ones.
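The principle is easily shown in a few lines of code; a minimal sketch (the function is our own illustration, not a historical algorithm) of how place value turns a string of digits, zeros included, into a number:

    def place_value(digits):
        """Evaluate a string of decimal digits by place value."""
        total = 0
        for d in digits:
            total = total * 10 + int(d)  # shift everything up one place, then add the new digit
        return total

    print(place_value('202'))  # 202: the zero holds the tens place open
    print(place_value('22'))   # 22: without a symbol for zero, these two could not be told apart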
Zero is also used in heat, where the Absolute Zero is the lowest possible
temperature. This is not some barrier which has so far defeated even the
ingenuity of the scientist; trying to get below it would be rather like
trying to visit the white area round a map of the world. In both cases there is
no such place. All possible temperatures can be mapped on a scale
beginning at absolute zero and going upwards. Various interesting
phenomena happen at extremely low temperatures, such as
superconductivity in metals. It is the point at which the gas or liquid in a
thermometer would shrink to nothing, only of course it becomes solid
instead; and at which everything would come to a halt.
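On the modern scale the conversion, and the gas-law extrapolation that fixes the zero (an idealisation, since any real gas liquefies and freezes first), are:

    T(\mathrm{K}) = t(^{\circ}\mathrm{C}) + 273.15, \qquad V \propto T \ \Rightarrow \ V \to 0 \text{ as } T \to 0

absolute zero thus lies at about −273 °C, the temperature at which an ideal gas would occupy no volume at all.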

The Zodiac is a Greek term for the circle in the heavens, the Ecliptic,
inclined at about 23 degrees to the Equator, on which the Sun, Moon and
the planets travel against the background of the fixed stars. It is divided
into twelve parts, called after the ‘Signs’ or constellations by which they
can be identified. If we are told that ‘Mars is in Libra’ we know where to
look; and in astrology the position of the planets at the moment of our
birth, our horoscope, is believed to determine our character.

A zone is a belt, and as such it was used in astronomy. In chemistry,
zone-refining is a process in which a bar of a metal is so heated that the hot
belt passes from head to tail, carrying the impurities along with it: this
process can be used where a metal (like tungsten) has such a high melting
point that other methods of refining are impractical.
Index

Adams, J. C. (1819–92) 37, 40, 120, 122
Albertus Magnus (c. 1193–1280) 39
Ampère, A. M. (1775–1836) 44, 47, 132
Aquinas, T. (c. 1225–74) 39
Arago, D. F. J. (1786–1853) 90
Arbuthnot, J. (1667–1735) 123
Archimedes (c. 287–212 BC) 21, 36, 43, 70, 107, 125, 169
Aristotle (384–322 BC) 16, 18, 20, 39, 43, 61, 68, 71, 80, 95, 155, 165
Arrhenius, S. A. (1859–1927) 149f
Aston, F. W. (1877–1945) 84
Avogadro, A. (1776–1856) 28, 100, 113

Babbage, C. (1793–1871) 66, 104
Bacon, F. (1561–1626) 9, 25, 26, 52, 60, 70, 72f, 79, 85, 93, 97, 98, 115, 120, 124, 141, 152, 160,
164f, 167
Bacon, R. (c. 1214–92) 89
Balmer, J. J. (1825–98) 128, 146
Banks, J. (1743–1820) 167
Bauer, Franz (1758–1840) & Ferdinand (1760–1826) 78
Becquerel, A. H. (1852–1908) 64, 129
Beddoes, T. (1760–1808) 144
Bede (c. 673–735) 25, 63
Bergman, T. O. (1735–84) 40
Bernoulli, Jacques (1654–1705), Jean (1667–1748), and D. (1700–82) 64
Berthelot, P. E. M. (1827–1907) 11, 108, 151f
Berthollet, C. L. (1748–1822) 64, 124, 135
Berzelius, J. J. (1779–1848) 8, 11, 23, 58, 67, 86, 108, 128f
Bessel, F. W. (1784–1846) 16
Bewick, T. (1753–1828) 78
Black, J. (1728–99) 22, 55, 145
Blake, W. (1757–1827) 107
Blumenbach, J. F. (1752–1840) 94
Boerhaave, H. (1668–1738) 155
Bohr, N. (1885–1962) 19, 29, 33, 44, 62, 102, 106, 113, 127, 132, 146, 155
Boscovich, R. J. (1711–87) 18
Bougainville, L. A. (1729–1811) 70, 167
Boulton, M. (1728–1809) 54, 55, 152
Boyle, R. (1627–91) 11, 18, 27, 34, 38, 50, 56, 60, 61, 71, 79, 82, 88, 101, 140, 150f, 165
Bradwardine, T. (c. 1290–1349) 4, 43, 81
Bragg, W. H. (1862–1942) & W. L. (1890–1971) 151, 172
Brahe, Tycho (1546–1601) 12, 97, 104, 105
Broglie, L. C. V. M. (1875–1960) 112
Buckland, W. (1784–1856) 67, 166
Bunsen, R. W. (1811–99) 3, 11, 132, 145, 153, 160
Buridan, J. (c. 1300–58) 43, 80
Butler, J. (1692–1752) 122f

Cannizzaro, S. (1826–1910) 100, 113
Carnot, N. L. S. (1796–1832) 23, 46, 54, 55f, 73, 98, 120, 122, 137, 157, 169
Cavendish, H. (1731–1810) 71, 121
Chadwick, J. (1892–1974) 103
Chambers, R. (1802–71) 59
Charles, J. A. C. (1746–1823) 56
Clarke, S. (1675–1729) 155
Clausius, R. J. E. (1822–88) 56, 61, 157
Clifford, W. K. (1845–79) 140
Coleridge, S. T. (1772–1834) 99, 114
Comte, A. (1798–1857) 25, 134
Cook, J. (1728–79) 17, 70, 75, 105, 155, 158, 167
Copernicus, N. (1473–1543) 16, 22, 33, 44, 61f, 64, 80, 84, 111, 117, 140
Coulomb, C. A. (1736–1806) 27, 47, 52
Couper, A. S. (1831–92) 165
Crookes, W. (1832–1919) 11, 41, 42, 49, 51, 60, 83, 86, 87, 112, 131, 145, 147
Cudworth, R. (1617–88) 18
Curie, M. (1867–1934) 110, 129
Cuvier, G. (1769–1832) 9, 67, 162

d’Alembert, J. le R. (1717–83) 65, 118, 166
Dalton, J. (1766–1844) 18, 19, 22, 28, 30, 51, 57, 79, 83, 95, 108, 112, 144, 150
Darwin, C. R. (1809–82) 21, 22, 26, 28, 51, 59f, 66, 68, 70, 87, 97, 101, 104, 107, 114, 134, 135,
141, 143, 150, 161, 166, 167f, 171
Darwin, E. (1731–1802) 118f
Davisson, C. J. (1881–1958) 41, 49
Davy, H. (1778–1829) 4, 6, 7, 8, 9, 11, 12, 23, 37, 38, 41, 45, 47, 48, 51, 52, 64, 65, 76, 81, 94, 99,
101, 108, 109, 124, 128f, 135, 144, 169f
de Moivre, A. (1667–1754) 148
de Morgan, A. (1806–71) 119
Dee, J. (1527–1608) 92
Democritus (d. c. 361 BC) 17, 29
Descartes, R. (1596–1650) 10, 15, 18, 32, 33, 34, 36, 40, 56, 69, 71, 75, 81, 90, 92, 93, 95, 98, 111,
114, 145, 155, 165
Diderot, D. (1713–84) 118, 166
Dirac, P. A. M. (1902–84) 120
Disraeli, B. (1804–81) 147
Dollond, J. (1706–61) 155
du Châtelet, G. E. (1706–49) 171
Edison, T. A. (1847–1931) 153
Einstein, A. (1879–1955) 10, 11, 17, 22, 25, 26, 32, 37, 40, 44, 53, 59, 62, 69, 72, 90, 94, 95, 110,
112, 117, 120, 123, 127, 133, 160, 161, 169
Epicurus (342–270 BC) 18
Eratosthenes (c. 276–194 BC) 96
Euclid (fl. c. 290 BC) 21, 26, 50, 68, 89, 133, 155
Eudoxus (c. 408–355 BC) 16, 43

Faraday, M. (1791–1867) 11, 12, 27, 37, 38, 41, 44, 45, 47ff, 52, 60, 64f, 77f, 81, 86, 89, 93, 101,
111, 116, 118, 120, 124, 131, 135, 139, 143, 153, 154, 155, 164, 165, 170
Fermat, P. (1601–65) 6
Feyerabend, P. (1924– ) 98
FitzRoy, R. (1805–65) 121, 171
Fontenelle, B. le B. (1657–1757) 7
Fourier, J. B. J. (1768–1830) 95, 157
Frankland, E. (1825–99) 87, 129
Franklin, B. (1706–90) 22, 46f, 154
Fraunhofer, J. (1787–1826) 3, 13, 145
Fresnel, A. J. (1788–1827) 90, 111, 120, 122, 168
Freud, S. (1856–1939) 140

Galileo (1564–1642) 4, 5, 12, 15, 16, 25, 26, 29, 32, 33, 34, 39, 43, 54, 60, 61f, 63, 71, 72, 74, 80, 87,
88, 90, 95, 97, 98, 110, 112, 114, 115, 117, 120, 140, 147, 150, 154, 169
Galton, F. (1822–1911) 166
Galvani, L. (1737–98) 47, 65
Gassendi, P. (1592–1655) 18
Gaudin, M. A. A. (1804–80) 100
Gauss, C. F. (1777–1855) 58, 69, 148f
Gay-Lussac, J. L. (1778–1850) 64, 76, 124, 140
Geiger, H. (J.) W. (1882–1945) 129
Gerhardt, C. F. (1816–56) 161
Germer, L. H. (1896–1971) 41, 49
Gibbs, J. W. (1839–1903) 113f, 158
Gilbert, W. (1544–1603) 50, 93
Goethe, J. W. (1749–1832) 8, 30, 171
Graunt, J. (1620–74) 148
’sGravesande, W. J. (1688–1742) 37
Grew, N. (1641–1712) 100
Grignard, F. A. V. (1871–1935) 101
Grosseteste, R. (c. 1168–1253) 89
Grove, W. R. (1811–96) 124

Haber, F. (1868–1934) 170
Halley, E. (1656–1742) 16, 148
Harriot, T. (1560–1621) 15, 154
Harrison, J. (1693–1776) 158
Harvey, W. (1578–1657) 42, 87f, 160
Haüy, R. J. (1743–1822) 35
Hegel, G. W. F. (1770–1831) 74, 132, 141
Heisenberg, W. (1901–76) 39, 128
Helmholtz, H. L. F. (1821–94) 27, 30, 32, 40, 43, 48f, 53, 66, 73, 99, 111f, 116, 119, 157
Henry, J. (1797–1878) 80
Henry, W. (1774–1836) 127
Herapath, J. (1790–1868) 9
Herschel, C. L. (1750–1848) 64, 144, 171
Herschel, J. F. W. (1792–1871) 64, 97, 168
Herschel, W. (1738–1822) 12, 17, 52, 64, 65, 71, 117, 131
Hertz, H. R. (1857–94) 49, 131
Heytesbury, W. (fl. 1340) 4, 43, 81
Hooke, R. (1635–1703) 35, 37, 54, 112, 133, 150
Hooker, J. D. (1817–1911) 168
Humboldt, A. (1769–1859) 93, 163, 167
Hume, D. (1711–76) 24, 69, 98
Hunter, J. (1728–93) 65, 94, 108
Huxley, T. H. (1826–95) 64, 118, 134, 140, 143, 161, 168, 169
Huygens, C. (1629–95) 17, 54, 111, 112, 150, 168

Joule, J. P. (1818–89) 52, 56, 65, 73, 116
Jussieu, A. L. (1748–1836) 106

Kant, I. (1724–1804) 69
Keats, J. (1795–1821) 107
Kekulé, F. A. (1829–96) 13, 28, 76, 79, 136, 165
Kepler, J. (1571–1630) 15, 16, 44, 52, 59, 72, 75, 86, 97, 105, 106, 117, 133, 134, 144, 150, 155
Kirchhoff, G. R. (1824–87) 3, 11, 136, 145, 160
Kuhn, T. S. (1922– ) 42f, 45, 64, 98, 110, 115

la Condamine, C. M. (1701–74) 71, 96, 167
Lagrange, J. L. (1736–1813) 10
Lakatos, I. (1922–74) 115
Lamarck, J. B. (1744–1829) 67
Laplace, P. S. (1749–1827) 10, 17, 27, 39, 69, 99, 102, 123, 148
Laue, M. T. F. (1879–1960) 151, 172
Laurent, A. (1808–53) 6, 19, 28, 57, 76, 150, 161
Lavoisier, A. L. (1743–94) 5, 6, 11, 12, 22, 25, 27, 31, 40f, 43, 50, 55, 72f, 78, 87, 94, 101, 108, 110,
112, 115, 119, 124, 135, 144, 145, 156, 160, 169
le Chatelier, H. L. (1850–1936) 57
Lear, E. (1812–88) 78
Leibniz, G. W. (1646–1716) 18, 65, 95, 98
Leverrier, U. J. J. (1811–77) 37, 40, 120, 122
Lewis, G. N. (1875–1946) 8, 132, 166
Liebig, J. (1803–73) 11, 28, 38, 42, 44, 87, 143, 160, 165
Linnaeus, C. (1707–78) 85, 106
Locke, J. (1632–1704) 29, 119
Lockyer, J. N. (1836–1920) 17, 60, 86, 145
Loudon, J. C. (1783–1843) 85
Lucretius (c. 95–55 BC) 18
Lyell, C. (1797–1875) 68, 162

Mach, E. (1838–1916) 61f, 153
Malthus, T. R. (1766–1834) 166
Mantell, G. A. (1790–1852) 67
Marcet, J. (1769–1858) 118f, 124, 144, 171
Marconi, G. (1874–1937) 154
Maupertuis, P. L. M. (1697–1759) 6, 71, 96
Maxwell, J. C. (1831–79) 39, 47, 61, 65, 76, 90, 131, 149, 158, 167
Mayer, J. R. (1814–78) 53, 65
Mendeleev, D. I. (1834–1907) 19, 22, 51, 83, 107, 113, 120, 156
Merian, M. S. (1647–1717) 171
Mersenne, M. (1588–1648) 60, 84
Michell, J. (1724–93) 149
Michelson, A. A. (1852–1931) 59
Mill, J. S. (1806–73) 79, 115
Miller, W. H. (1801–80) 35
Millikan, R. A. (1868–1953) 27
Mitscherlich, E. (1794–1863) 83
Morley, E. W. (1838–1923) 59
Moseley, H. G. J. (1887–1915) 51, 102, 172
Musschenbroek, P. (1692–1761) 46

Nernst, H. W. (1864–1941) 158
Newcomen, T. (1663–1729) 38, 45, 54
Newton, I. (1642–1727) 6, 8, 13, 15, 16ff, 27, 29, 32, 34, 36f, 39, 44, 47, 52, 59, 64f, 69, 71f, 75, 79f,
90, 94, 95, 96, 97, 98, 106f, 110, 111, 115, 119, 133, 134, 144f, 150, 151, 155, 158, 165, 166, 168,
171
North, M. (1830–90) 172

Oersted, H. C. (1771–1851) 32, 47, 52, 57, 65, 93, 116
Ohm, G. S. (1789–1854) 38, 48, 101, 140
Oken, L. (1779–1851) 13
Oldenburg, H. (c. 1618–77) 85
Oresme, N. (c. 1320–82) 43, 80
Osiander, A. (1498–1552) 33, 62, 155
Ostwald, F. W. (1853–1932) 19, 30, 150, 158
Owen, R. (1804–92) 100, 161

Paley, W. (1743–1805) 162
Paracelsus (1493–1541) 50
Parsons, C. A. (1854–1931) 120
Pascal, B. (1623–62) 121
Pasteur, L. (1822–95) 35, 151
Pauling, L. (1901– ) 136
Perkin, W. H. (1838–1907) 153
Petty, W. (1623–87) 148
Philoponus, J. (fl. c. 550) 80
Planck, M. C. E. W. (1858–1947) 22, 110, 117, 127
Plato (c. 427–347 BC) 16, 161
Poincaré, H. (1854–1912) 69
Poisson, S. D. (1781–1840) 12, 116
Popper, K. (1902– ) 63, 76, 79, 97, 115, 140, 144
Powell, B. (1796–1860) 10
Priestley, J. (1733–1804) 40, 42, 47, 60, 65, 94, 96, 108, 124, 163
Prout, W. (1785–1850) 51, 58, 66, 83
Ptolemy (c. 100–170) 16, 20, 43f, 61, 105, 106, 121, 133, 134, 144
Pythagoras (c. 579–497 BC) 21, 72, 103

Quetelet, L. A. J. (1796–1874) 148

Ramsay, W. (1852–1916) 130
Ray, J. (1628–1705) 88
Richards, T. W. (1868–1928) 51, 130
Ritter, J. W. (1776–1810) 131
Roemer, O. C. (1644–1710) 90
Roentgen, W. C. (1845–1923) 42, 110, 129, 172
Rohault, J. (1620–75) 155
Ross, J. C. (1800–62) 93
Rosse, Lord [Parsons, W.] (1800–67) 59
Rowland, H. A. (1848–1901) 145
Rutherford, E. (1871–1937) 19, 29, 49, 51, 84, 102f, 106, 113, 117, 127, 129f, 139, 142, 160

Sacrobosco (d. 1256) 155
Scheele, C. W. (1742–86) 40, 108
Schelling, F. W. J. (1775–1854) 47, 52, 65, 99
Schrödinger, E. (1887–1961) 128
Shelley, M. (1797–1851) 52, 118
Siemens, E. W. (1816–92) 154
Sloane, H. (1660–1753) 100
Smithson, J. (1765–1829) 100
Snell, W. (1591–1626) 88, 133
Soddy, F. (1877–1956) 51, 84, 129f
Somerville, M. (1780–1872) 119, 144, 171f
Stokes, G. G. (1819–1903) 136, 155, 167
Stoney, G. J. (1826–1911) 27, 48, 111f
Swainson, W. (1789–1855) 9
Swan, J. W. (1828–1917) 153
Swineshead, R. (fl. 1350) 4, 43, 81

Tait, P. G. (1831–1901) 156
Thompson, B. [Count Rumford] (1753–1814) 73
Thomson, J. J. (1856–1940) 11, 19, 27, 35, 41, 47, 49, 62, 96, 99, 102, 112, 116, 127, 129, 131, 147,
150, 152, 165
Thomson, T. (1773–1852) 58, 66
Thomson, W. [Lord Kelvin] (1824–1907) 34, 53, 56, 61, 68, 99, 153, 154, 156, 157
Tilloch, A. (1759–1825) 85
Torricelli, E. (1608–47) 120
Treviranus, G. R. (1776–1837) 67
Tyndall, J. (1820–93) 87

Van der Waals, J. D. (1837–1923) 10
Van’t Hoff, J. H. (1852–1911) 35
Volta, A. (1745–1827) 11, 32, 48, 52, 65

Waterston, J. J. (1811–83) 133
Watt, J. (1736–1819) 12, 38, 46, 54, 99, 119, 152
Wells, W. C. (1757–1817) 97
Wheatstone, C. (1802–75) 154
Whewell, W. (1794–1866) 35, 74, 101, 115, 132, 141, 143
Williamson, A. W. (1824–1904) 23, 59, 101, 132
Wöhler, F. (1800–82) 94, 108
Wollaston, W. H. (1766–1828) 12, 35, 58, 64, 108, 131
Wordsworth, W. (1770–1850) 143, 167

Young, T. (1773–1829) 30, 90, 111, 115, 142, 143, 168
