David Knight, A Companion to the Physical Sciences (Routledge, 2015)
First published in 1989, this dictionary of the whole field of the physical
sciences is an invaluable guide through the changing terminology and
practices of scientific research. Arranged alphabetically, it traces how the
meanings of scientific terms have changed over time. It covers a wide range
of topics including voyages, observations, magnetism and pendulums, and
central subjects such as atom, valency and energy. There are also entries on
more abstract terms such as hypothesis, theory, induction, deduction,
falsification and paradigm, emphasizing that while science is more than
‘organized common sense’ it is not completely different from other
activities. Science’s lack of innocence is also recognized in headings like
pollution and weapons.
This book will be a useful resource for students interested in the history of
science.
A Companion to the Physical Sciences
David Knight
First published in 1989
by Routledge
The right of David Knight to be identified as author of this work has been asserted by him in
accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by
any electronic, mechanical, or other means, now known or hereafter invented, including
photocopying and recording, or in any information storage or retrieval system, without permission in
writing from the publishers.
Publisher’s Note
The publisher has gone to great lengths to ensure the quality of this reprint but points out that some
imperfections in the original copies may be apparent.
Disclaimer
The publisher has made every effort to trace copyright holders and welcomes correspondence from
those they have been unable to contact.
1. Physical sciences
I. Title
500.2
Library of Congress Cataloging in Publication Data
Available on request
ISBN 0-415-00901-4
Contents
Introduction
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Z
Index
Introduction
Accommodation was the term used to explain how a statement in the Bible
could conflict with modern science. Religion was about how to go to
heaven, not how the heavens go; and while the inspired authors could not
lie, they were not teaching physics and their message was therefore
accommodated to the understandings of their immediate audience. This
point was strongly urged by Galileo.
Acid meant something having a sour taste, and from the seventeenth
century with indicators like cabbage-juice it came to mean something which
produced a colour change, especially with litmus-paper which it turns pink.
This has now been quantified in terms of pH, a measure of the concentration of
hydrogen ions (strictly, -log[H+]), where 7 is neutral and lower values correspond
to increasing acidity, shown on universal indicator papers. The strong
mineral acids (sulphuric, nitric and our hydrochloric) became standard
reagents from the seventeenth century.
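In modern notation (an editorial addition, not part of the original entry) the pH scale reads:

$$\mathrm{pH} = -\log_{10}[\mathrm{H^+}], \qquad \mathrm{pH} < 7 \;\text{acidic}, \quad \mathrm{pH} = 7 \;\text{neutral}, \quad \mathrm{pH} > 7 \;\text{alkaline}.$$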
For Lavoisier in the 1780s, all acids were compounds of oxygen – hence
its name. This theory was demolished by Davy, who had no coherent
notion to put in its place; but by the mid-nineteenth century, notably in
Laurent’s Chemical Method (English ed. 1855) the idea gained ground that
acids contained hydrogen replaceable by a metal. With the twentieth
century, acids became proton donors or electron acceptors: there has been
steady conceptual change, with an uneasy relationship between common
usage and chemistry.
J. H. Brooke, in S. Forgan (ed.) (1980) Science and the Sons of Genius, Science Reviews, London,
pp. 121–75.
Action and reaction are equal and opposite, as anybody who has thumped
a table will know and as Newton’s laws of motion state: here it means
force, but in 1744 Pierre Maupertuis (the first head of the Prussian
Academy of Sciences) defined action as the product of mass, velocity and
path length. With this went the principle of least action: that nature in
producing changes uses the minimum quantity of action. He believed that
this was evidence for the wisdom of God. The principle had been anticipated
a century earlier by Pierre Fermat in explaining the path of light going
from one medium to another and being refracted: his conclusion was that
the ray went by the route taking the least time.
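In modern symbols (an editorial addition), Maupertuis’ action and Fermat’s principle of least time may be written:

$$S = \int m\,v \, ds \quad \text{(Maupertuis)}, \qquad \delta \int \frac{ds}{v} = 0 \quad \text{(Fermat)},$$

the second asserting that light travelling between two points takes the path for which the travel time is least.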
In general, an action is something done by somebody or something. In
science, the question of how things act upon one another is very important:
in the seventeenth century the mechanical principles of the ‘New
Philosophy’ required pushes and pulls as the cause of motion. Newton’s
physics, on the other hand, involved action at a distance: he could provide
no mechanical explanation of gravity; how, for example, the Earth exerts
a force upon the Moon across void space. The theory worked so well that
its uneasy foundations were generally ignored; and action at a distance was
also invoked in electricity and magnetism, although ether was
postulated as a medium for the light waves whose existence seemed proved
by experiment. But Faraday by the 1840s had come to doubt the possibility
of action at a distance, and of solid atoms surrounded by vacuum; and
postulated instead a field in which atoms were point centres of force. This
new understanding of matter and space transformed physics in the later
nineteenth century and since.
Analogy was for Davy the very soul of science: by which he meant that
creative reasoning in the sciences is analogical. We can see this in his
own work, where he recognised in 1810 the analogies between the greenish
gas from sea-salt and oxygen, and therefore recognised the former as an
element which he named Chlorine. Clearly the analogies here are
incomplete; breathing the two substances has very different results. It was
only in the realm of chemical behaviour that there were analogies; one
needs knowledge, a paradigm, as well as common sense to perceive
relevant analogies, and ignore surface ones. Similarly in biology, the
Tasmanian wolf showed analogies with our wolf, and William Swainson in
the 1830s put them in the same group in his classification: but already
Cuvier, looking beneath the skin at the mode of reproduction, had seen that
the Tasmanian wolf as a marsupial had no relevant analogies with dogs. To
the shepherd it does, just as, to the coroner, chlorine has analogies with
arsenic; the interests of the scientist are not necessarily those of
everyman.
Davy had an easier task when in France in 1813 he was shown a curious
product of sea-weed which formed a violet gas when heat was applied to it.
The analogies between what he called Iodine and chlorine are much closer
than those between chlorine and oxygen, and we might feel that anybody
coming from chlorine to this new substance would have spotted them. All
that analogy means is going from the known and familiar to the unknown;
and the step may be long or short. It cannot be guaranteed: analogies may
be misleading as they were to Swainson, and as analogies between heat and
fluids were in the caloric theory; though even here the analogy led
somewhere, and as Bacon said, error is better than confusion. We progress
from analogy to analogy; and those left behind in the past look crude, as no
doubt ours will to our grandchildren.
From chlorine to iodine is a straightforward analogy; the more interesting
ones involve borrowing from a distinct science or region of knowledge. This goes
on all the time, and ideas from fashionable or successful fields are applied
in more backward regions; the role of fashion in science should never be
underestimated. When Davy was President of the Royal Society, an
unknown young man, John Herapath, submitted a paper suggesting that the
behaviour of gases could be understood by analogy with that of billiard-
balls. A large number of absolutely hard particles, colliding with each other
and with the walls of their container would, he believed, behave like a gas.
Davy was unimpressed with this analogy, probably because he did not
believe in absolutely hard atoms; he may also have seen the mathematical
weaknesses in Herapath’s treatment: but a generation later the kinetic
theory of gases, in a more sophisticated version, became one of the most
exciting bits of physics. Mechanics, with statistics, had been used in the
explanation of the gas laws.
The analogies are clearly incomplete; the molecules do not have colours
like billiard-balls, but they do have a definite size, and this can be taken
into account in explaining why actual gases fail to obey the gas laws
exactly, as in Van der Waals’ equation (set out in modern form below). In the nineteenth century, it seemed
that one would work from analogies towards truth; now we might see
through a glass darkly, but in the future face to face. Thus the successes of
the wave analogy for light, and the billiard-ball analogy for gases,
indicated that light was really a wave motion and molecules really elastic
particles. The physics of the twentieth century has undermined this
confidence. Einstein showed that the generation of electricity when light
fell onto potassium was best explained by an analogy with bullets; the small
shot of red light does not produce an effect, while the big bullets from the
violet end of the spectrum knock out electrons easily enough. Thus to
explain light, we need different analogies in analysing different
experiments.
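The two results alluded to above have standard modern forms (an editorial addition): Van der Waals’ equation for one mole of gas, and Einstein’s photoelectric relation,

$$\Bigl(P + \frac{a}{V^2}\Bigr)(V - b) = RT, \qquad E_{\max} = h\nu - W,$$

where a and b correct for intermolecular attraction and molecular size, hν is the energy of a light quantum and W the energy needed to free an electron from the metal surface.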
The same problem emerged with matter. Electrons seemed to be
particles, but in the 1920s it turned out that they could be diffracted as light
is at the edge of a knife; thus showing analogies with waves. It seems that
we cannot have one theory to account for all these properties, and must be
content with analogy; we know which to apply in any situation, and an
ultimate explanation eludes us. Scepticism is not new; but usually
scientists have avoided extreme versions of it, and many hope now for some
way out of the impasse of quantum theory. In the 1830s, Baden Powell
(mathematician, and father of the defender of Mafeking) suggested that
analogy could guarantee induction: he recognised the great mass of
scientific knowledge, so that inductive reasoning was not a matter of
isolated generalisings about white swans or black ravens, but was part of a
great network worthy of reasonable faith. To the extreme sceptic, this will
not do; but unless one believes in the guidance of analogy, one would
hardly undertake any science.
D. M. Knight (1986) ‘William Swainson: Naturalist, author and illustrator’, Archives of Natural
History, 13, 275–90.
Authority is a poor reason for accepting anything, and the Royal Society’s
motto ‘Nullius in verba’ implies that nobody’s word for anything is
sufficient. The only authority in science should be experiment and the
reasoning based upon it through induction and deduction; such at least
was the hope of the founders of modern science in the seventeenth century,
who believed that their predecessors had been too deferential to the
opinions of Aristotle, Ptolemy and Galen. But academies such as the Royal
Society in due course came themselves to function as authorities; and the
views of great scientists, or associations of them, must carry great weight.
Science has therefore come over the years to be associated with dogma just
as much as with open-mindedness; indeed scientists are often accused of
closed-mindedness when they refuse to drop everything and investigate
flying saucers, telepathy, Noah’s Ark or acupuncture.
One good reason for accepting authority is that life is too short for us to
attempt to test everything for ourselves. Unless we accept a division of
labour in which those with particular skills exercise them, and we fall in
with their conclusions, we should all have to revert to nasty, short and
brutish lives in caves. Scientists are well aware of error and fraud, and of
the difficulty of establishing truth, or even that plausible theory which is
necessary to convert a mass of facts into science. The literature of
science is full enough of mistakes and misinterpretations to make one very
doubtful of any claims made by those whose status is uncertain. This is
why the teaching of science is dogmatic, closer to the catechism than to
discussion; argument in modern science, where there is so much to learn,
comes only at an advanced stage. One might conclude from this that science
alone provides an incomplete education for life, a sphere of probabilities;
and must be augmented by study of the humanities, where authority is much
less prominent, or anyway by critical study of the history and philosophy of
science.
In the sciences, the teacher or supervisor can come to play the role of a
father. The eminent German professor of the nineteenth century would
publish his students’ work in his journal, and would find posts for them;
he would hope to preside over a school of research, where his authority
would prevail. This might involve factual claims, but more importantly
would involve a world-view or paradigm in which certain problems and
methods of investigation were seen as important, and others as unimportant
or wrongheaded. Where, as in nineteenth-century chemistry, there were a
number of such schools in various countries, each produced useful
information; and striking progress came when a student from one worked
for a time in another and had his eyes opened to other authorities. This
dialectical progress, produced by the clash of opposed views, is
characteristic of much science.
M. P. Crosland (ed.) (1975) The Emergence of Science in Western Europe, Macmillan, London.
H. T. Pledge (1939) Science Since 1500, HMSO, London, reprinted 1966, has charts of masters and
pupils.
Certainty is what people have always sought for. Truth is the object of
knowledge, whereas probability is within the sphere of mere opinion; or
so it seemed down to about a hundred years ago. With the Reformation of
the sixteenth century (and indeed with schisms before that) the authority
of the Church as a source of certainty, guaranteed by God, collapsed.
Different branches of it held opposite doctrines; all coherence was lost. But
another hope lay in philosophy. Ancient philosophers, nearer than we to
the Golden Age, knew truth; and the hope of the Renaissance was that their
writings could be accurately restored and thus bring us certainty. But the
philosophers of Antiquity had disagreed amongst themselves, and the
carefully edited texts now showed that this disagreement was genuine and
not due to medieval misunderstanding. In the early seventeenth century
came the attempt at a new method; the rise of modern science, based on
experiment and mathematics. This general picture was in the mind of
Comte in the 1830s when he framed his idea of knowledge as progressing
from the theological through the metaphysical to the positive stage.
Bacon hoped for certainty through induction, the careful generalising
from instances observed without preconceptions. His New Atlantis
described an island ruled by a benevolent academy of sciences, with a
hierarchy going from fact-collectors up to those creating wide-ranging
theory. But the problem is that induction can never bring certainty. It
depends upon the assumptions that the future will resemble the past, and
that we have had a fair sample of events to reason upon. We might have
perceived a merely chance correlation, like that between the death-rate in
Hyderabad and the membership of the American Union of Machinists,
which went up and down in parallel for a number of years. Inductive
science depends upon moral certainty; as the great Bishop Butler put it in
the eighteenth century, probability is the guide to life: here, as in courts of
law, we depend on ‘moral certainty’, which means high probability.
Bacon’s contemporary, Galileo, sought certainty through intuition and
deduction; he believed that the Book of Nature was written in the language
of mathematics, and that the underlying laws of nature must be of simple
numerical form. He got the law of falling bodies right; but unfortunately his
intuition led him to explain the cause of the tides as the revolution of the
Earth each day on its axis. He therefore expected one high tide every 24
hours; and was very proud of thus demonstrating the Earth’s motion. We
know, and the Venerable Bede knew, that tides do not work like that; only
someone living in Venice, where there are no tides, could have proposed
such an idea. Galileo’s intuition was not wholly reliable; certainty cannot be
achieved this way.
During the eighteenth century, phlogiston, which had been assumed in
chemistry as the component making things flammable, was shown by
Lavoisier to be an unnecessary hypothesis; but Lavoisier himself proposed
that all acids contain oxygen, which turned out to be false. In the nineteenth
century, the success of the wave theory of light ensured that scientists
believed in an ether, which was something that could carry waves; after
all, one can have waves in the sea, but not in nothing. Many even believed
more firmly in the ether than in solid matter, which might be no more than
whirlpools in the continuous sea of impalpable ether. The ether was like
phlogiston shown to be unnecessary, as Einstein demonstrated that light
sometimes behaves more like a stream of particles, photons, than like
waves.
There is no reason to believe that parts of our science will not go the way
of these supposed certainties of the past; error and even fraud cannot be
excluded from science, anyway in the short run – and in the long run we
shall all be dead. And yet most science is certain enough. Formulae of
chemical compounds which have been determined by analysis followed
by synthesis; hypotheses which have led to detailed and unexpected
predictions and survived falsification; theories which are fertile in
countless ways; constants which are the result of converging but distinct
lines of experiment and reasoning; these things all give us moral certainty,
and we should not perhaps expect more. The trouble is that this kind of
probability cannot be quantified in the way that nineteenth-century
mathematicians succeeded in quantifying some areas of statistics.
Within a deductive system, such as Euclid’s geometry, one can have
proof of the theorems: but the certainty of these proofs tells us nothing
about the world, only that the results are contained within the axioms. There
are such axioms within the sciences also, which the scientist takes for
granted: these are the principles, such as those of the conservation of
matter and of energy which form a background against which
explanation of change is possible. They are deeply entrenched within
science, but they are not certain. They form part of the dogma which is
prominent in scientific education; and may even be conditions, like some
principle of uniformity, of doing science at all.
In our century, the ‘uncertainty principle’ has been added to these, with
the coming of quantum mechanics; it states that we can never know both
the position and the velocity of a sub-atomic particle with complete
accuracy. It might seem that this really marks the end of the search for
certainty which began with Bacon and Galileo: but there are many things
about which we can be sure; such as that half of a sample of radium will
have decayed in a definite time. The underlying laws seem statistical rather
than causal; and this might make certainty in science even more like a
chimaera than it was before.
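The statistical certainty about radium can be put in modern form (an editorial addition): a sample of N₀ atoms decays as

$$N(t) = N_0 e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda},$$

so that, with overwhelming probability, half the sample is gone after one half-life, although nothing can be said of any individual atom.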
Chance has often been set against law. It is the realm of the disorderly and
the unpredictable. This was one of the objections levelled against the
ancient theory of atoms, because chance collisions were supposed to lie
behind all phenomena. A world of contingency like this would not be
conducive to serious science, which requires uniformity and cause. For the
scientist, the connections between things cannot all be just accidental; and
moreover it seemed that the coherence of the solar system and of animals
proved design, and could not be the result of mere chance. As Einstein put
it, God does not play dice.
And yet by the nineteenth century chance and law were in part
reconciled. Statistics and probability involved laws of chance, and
important branches of physics came to depend upon them. And Darwin in
his theory of evolution used the haphazard mechanism of natural selection,
where chance was operating according to law to produce development. The
old determinism of Laplace had to give way to a different paradigm, in
which small-scale phenomena in the organic and the inorganic realms are
not completely predictable. This proves neither that we are free nor
determined, nor that there is or is not a Designer; but it does indicate that
our idea of a law of nature has to be refined. What we can say is that
science is still incompatible with the belief that anything might happen; it is
only lawful chance that has a place in it.
Colour brings much interest into our world. The question has been for over
two thousand years how much it is ‘really’ there and how much it is added
by us. Democritus believed that in appearance there were colours, tastes and
smells, while in reality there were atoms and the void. Galileo took up this
idea, distinguishing primary qualities, really present in things, and
secondary qualities which resulted from their interaction with a sensitive
observer. Locke made this distinction a very important part of his
philosophy. The job of the scientist was to account for the secondary
qualities in terms of the primary ones.
Newton showed how the spectrum can be understood as the
decomposition of white light into its constituent colours; an astonishing
discovery, because it had been generally supposed that colours were
modifications of whiteness rather than whiteness a mixture of colours. With
the wave theory of light, colours seemed to be reduced in principle to
differences in wavelength; a real reduction of a secondary quality to a
primary one, to a number. In fact, things were more complicated; and the
great chemist Wilhelm Ostwald spent much time trying to develop a
classification of colours, finding he needed three dimensions to plot them.
Later systems of standard colours start from his system.
John Dalton was completely colour-blind, which is an unusual condition;
he was sent by his mother to buy some grey thread, and came back with
scarlet which he thought matched what she wanted. He was the first to
describe colour blindness scientifically. His contemporary Thomas Young
developed a three-colour theory of vision, which was improved by
Helmholtz in the middle of the nineteenth century, and is still the basis of
theories of colour perception. Goethe, another contemporary of Dalton, was
highly critical of the Newtonian colour-theory; and in his Farbenlehre
pointed out how one can see colours in black and white diagrams, and also
how red in paintings comes forward and blue recedes; he is perhaps a
founder of psychological optics.
Complementary colours are those which, like red and green, together
make up white light; complementary angles together make up a right angle;
and complementary theories or models, like those of waves and
particles in quantum physics together provide an account of all the
phenomena.
Crystal is often used to mean rock-crystal, or silica, which was taken as the
most transparent of substances. Hooke in 1665 recognised that different
substances form crystals of different shapes: and the microscope with which
he was a pioneer revealed that metals for example have a crystalline
structure. He tried to work out how different arrangements of spherical
atoms might yield different crystal forms; an idea little noted until William
Hyde Wollaston in 1813 revived it, employing also spheroidal atoms in
models preserved at the Science Museum in London. But because it turned
out that the same substance might be found in different crystalline forms
(dimorphism) and different substances in the same form (isomorphism),
Wollaston’s instrument for measuring crystal angles, the goniometer,
turned out not to be a substitute for chemical analysis.
In the opening years of the nineteenth century, René Haüy in Paris was
the great authority on crystals; he had dropped a crystal of calcite (CaCO3)
and saw that the fragments were of the same form as the original; and went
on to demonstrate by cleavage how complex forms are made up of simple
unit cells. Geometrical analysis of crystals was taken further by William
Whewell in Cambridge, and then by W. H. Miller, whose Treatise on
Crystallography was published in 1839 and from whom modern methods of
describing crystals, the Miller Indices, derive.
Whewell believed that any atomic theory worthy of the name must
ultimately be based on crystal forms. Meanwhile, mineralogy was a
territory claimed by chemistry but threatened by geometry: chemical
analysis seemed to be more reliable, but geometry somehow more
fundamental. Calcite, in which there is double refraction, had long
interested those working on light; one ray was found to be polarised, which
was explained by the wave theory as having all its vibrations in one plane.
Pasteur knew that natural tartrates rotated the plane of polarisation, while
synthetic ones did not; and found that the test-tube sample contained
equal quantities of crystals which were mirror-images of one another. In
1874 van’t Hoff proposed that such lack of symmetry was due to molecular
asymmetry, different atoms being arranged differently about a carbon atom
to form a tetrahedron. [A diagram of the tetrahedral carbon atom appears here in the original.]
At last atomic structure and crystal form had been connected, and further
connexions were made following the discovery of X-rays, which form
diffraction patterns which are a guide to structure. The diffraction of
electrons from a crystal of nickel proved that they have a wave character,
and was thus important in quantum physics. Crystals are not only
beautiful, but have also been very significant in science.
Deduction is the process of drawing conclusions from axioms or
hypotheses. It is generally contrasted with induction, where one goes
from facts to generalisations. Because there are strict rules in the logic of
deduction, it has always appealed to those who seek for certainty in
science, like Descartes in the seventeenth century. Geometry, in which all
sorts of surprising and useful propositions could be proved from a few
axioms, seemed the model for all real science; and demonstration and
proof were what Newton sought in his method of ‘deduction from the
facts’. The problem is that the results are only as certain as the axioms, so
that outside pure mathematics deduction cannot be infallible; but much
scientific progress has come through forming hypotheses and testing
consequences deduced from them.
K. R. Popper (1963) Conjectures and Refutations, Routledge & Kegan Paul, London.
Demonstration in our day has come usually to mean political protest; but
in the sciences it has a long history, being used in various other senses. The
word comes from the Latin, meaning to show; and in geometry the QED at
the end of theorems is the indication that the proof required has now been
displayed. Geometrical demonstration is indeed a scientific ideal: from the
axioms by rigorous deduction a particular conclusion can be shown to be
inevitable. In physics this kind of reasoning goes back to Archimedes. His
demonstration of the law of the lever does not depend upon any
experiments; but the reader who has accepted the propositions about rigid
weightless rods and point masses cannot then escape the conclusions. The
only problem is to find how well they fit our world of actual levers and
awkwardly shaped objects: and science of this kind depends upon the
appropriateness of the theoretical model to actual states of affairs. At this
stage the demonstration is less rigorous; but predictions that are verified
can bring us close to certainty. Thus the prediction by Adams and Leverrier
that a new planet in a specified orbit was responsible for irregularities in
the motion of Uranus was verified with the observation of Neptune; in
confirmation at a deeper level of Newton’s theory of gravity. Leverrier
then went on to predict another planet, Vulcan, nearer the Sun than Mercury
and making it wobble: but nothing was to be seen there, and it was only
with Einstein’s theory of relativity that an explanation could be found
for the wobbles. In time, falsification may thus befall theories which
seemed confirmed: empirical science is provisional, and geometrical
demonstration an ideal strictly applicable only in mathematics and logic.
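Archimedes’ law of the lever, mentioned above, may be stated in modern symbols (an editorial addition): weights W₁ and W₂ on a rigid weightless rod balance about the fulcrum when

$$W_1 d_1 = W_2 d_2,$$

where d₁ and d₂ are their distances from it; the conclusion follows from the propositions alone, without experiment.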
Demonstration is also used in a looser sense. In the early days of the
Royal Society, Robert Hooke was employed to devise and perform
experiments suggested by the Fellows during discussions. At academies
and associations, a newly-invented instrument (like Newton’s reflecting
telescope) might be demonstrated, or some experiment done which
illuminated a theory. By the early eighteenth century, lectures on the ‘New
Philosophy’ (what we would call experimental science) had become a
recognised form of intellectual entertainment. To know a little about
astronomy, air pressure and mechanics was essential for anybody who
wished to keep up with what was going on; even for conversations in salons
or coffee-houses. Courses in elementary science also began in universities.
These lectures were accompanied by demonstrations with specially-made
apparatus; which differed from that used in research because it was not
devised for discovery.
Ordinary apparatus is made to be taken apart and reassembled as
occasion requires; it can be used for different purposes, and the creative
experimentalist like Davy or Faraday will be reluctant to throw anything
away and will go in for inspired misuse of glassware. Indeed discovery may
depend upon extensions in the use of apparatus, just as great poetry depends
upon words used in new ways. Instruments made for demonstrations are
different. They are handsome rather than workaday; they are collectable in a
way that test-tubes are not. Gleaming in hardwood and brass, they were
made to show one single effect as clearly as possible. The young King
George III had a set of such instruments in the 1760s, now in the Science
Museum in London; which were intended to go with a course in science
taught originally at Leiden by ’sGravesande, and illustrated in his textbook.
He was a Newtonian, but demonstrated Newton’s physics for his classes
using machinery rather than mathematics.
Lectures accompanied by demonstrations were given by itinerant men of
science who toured in the eighteenth and early nineteenth centuries. They
would show air-pumps, demonstrating how a bird in the receiver will
collapse as the pressure is reduced, and probably revive when air is
readmitted; and they would have pendulums, blocks with pulleys, and
electrical machines to produce great sparks when the handle was turned.
Some instruments were scaled-down and simplified versions of machinery
used in shipbuilding, land-draining, or architecture; Glasgow University
had a miniature steam-engine of Newcomen’s primitive design, which
James Watt was called upon to repair, and which set him thinking about
improvements. At the museums in Utrecht and Leiden, there are collections
of apparatus used in lectures.
By the early nineteenth century, institutions like the Royal Institution
in London and Literary and Philosophical Societies in other cities offered
programmes of demonstration lectures, which became very fashionable. At
the Royal Institution, the emphasis shifted towards the popularisation of
current research. Thus Davy demonstrated the properties of potassium, and
Faraday his discoveries in electricity and magnetism. In such up-to-date
science there were as yet no elegant instruments available; so the staff had
to become expert in making apparatus which would show the effect desired
to a large audience, some of them at a fair distance from the lecturer. The
occasion was theatrical; and those who witnessed it were unlikely
themselves ever to do any scientific experiments, though some might be
fired with enthusiasm.
Justus von Liebig, beginning at Giessen in the 1830s the laboratory
teaching of chemistry which was to have a profound effect on scientific
education, was struck with the high level of demonstration lecturing in
Britain; but realised that this went with a lack of training for the
profession of science. He believed that serious students must learn by
actually handling apparatus themselves; that scientific knowledge was
acquired by doing rather than watching experiments. This principle has by
our time become generally accepted, and the demonstration has become less
prominent than it was in its heyday; but experiments done by the inexpert
often do not clearly demonstrate Boyle’s Law or Ohm’s Law as they are
meant to do, and there is still scope for the elegant demonstration in the
lecture-room or on television.
Design in nature has been urged as proof of the existence of God, the first
cause. We do seem to find order in nature; or perhaps a belief in order is a
necessary condition for anybody trying to do science. A world of mere
chance or contingency would have no law in it as ours seems to have. But
then the appearance of our world seems as unlikely, given the laws of
physics, as that monkeys would type out Shakespeare; so it would seem that
there must be a designer behind it. This is not a rigorous proof even of
Natural religion: and since moral order in the universe is not apparent, the
designer is not necessarily the personal God of Jews and Christians.
R. Dawkins (1986) The Blind Watchmaker, Longman, London.
J. C. Polkinghorne (1986) One World, SPCK, London.
Determinism is the belief that, given some initial state, then certain
succeeding states will inevitably follow. This is sometimes thought to be the
same as belief in a world governed by law; but this is not so, for laws like
the second law of thermodynamics limit what can happen but do not entail
some one state of things. In nature as in life, laws allow freedom, within
limits. Aristotle was a determinist, especially as interpreted by the medieval
Arab commentator Averroës; and this was one of the great problems when
his work was rediscovered in the West in the twelfth century AD.
Aristotle’s writings were condemned in Paris and in Oxford in the thirteenth
century, and only slowly came to be accepted following the work of
Albertus Magnus and Thomas Aquinas. It was in part because the union of
Christian doctrine and Aristotelian philosophy was uneasy that Galileo
caused such alarm, especially among the Dominican order to which
Albertus and Thomas had belonged. God’s freedom to act in the universe
must not be questioned.
Newton was anxious to allow God such freedom, and believed that
interventions were sometimes necessary; religion and science for him
were allies. So they were for most of his contemporaries; but increasingly in
the eighteenth century the idea gained ground that God would have created
a world in which everything would have been foreseen, and no sudden
adjustments to meet new circumstances would be necessary. The existence
of such a God seemed to be demonstrated as Newtonian physics showed
the simplicity and harmony of the world. Deism, or belief in a First Cause,
the God of the Sabbath day who had been resting ever since the creation,
went with determinism; and its great triumph came when Laplace in the
opening years of the nineteenth century proved that wobbles in the orbits
of the planets were self-correcting, and that Newton need not have
invoked God to sort them out from time to time. If, said Laplace, a Being
knew the position and velocity of every particle of matter in the universe,
then the future and the past would be present before His eyes.
The later development of physics did not fit so well with this view, which
alarmed contemporaries with its apparent materialism. Maxwell in 1859
introduced statistical rather than causal explanation into physics,
with his work on the dynamical theory of gases; and in the twentieth
century with quantum theory such reasoning became the norm. In the
Indeterminacy Principle of Heisenberg, we cannot know both the position
and the velocity of small particles: the world is more open-ended than
Laplace had expected, although the success of scientific predictions
shows that it is not a place where anything goes.
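Heisenberg’s principle has the precise modern statement (an editorial addition):

$$\Delta x \, \Delta p \ge \frac{\hbar}{2},$$

where Δx and Δp are the uncertainties in position and momentum, and ħ is Planck’s constant divided by 2π.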
Dynamics is the study of motion and the forces responsible for it. It is thus
distinct, as a part of mechanics, from statics, the study of equilibrium; and
from kinematics, the study of motions without reference to forces. Ancient
Greeks had worked in both these last fields, their great triumphs being
Archimedes’ work on the lever and Ptolemy’s system of epicycles which
fitted the orbits of the planets; but they had not got far in dynamics. At
Merton College, Oxford, in the fourteenth century, Heytesbury, Swineshead
and Bradwardine worked out the rule for ‘uniformly nonuniform motion’,
or what we would call uniform acceleration: and in Paris Oresme and
Buridan developed a theory of impetus to explain how bodies (like arrows)
go on moving even when there is nothing pushing or pulling them. These
insights were developed by Galileo into his law of fall, and his idea of
circular inertia.
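In modern form (an editorial addition), the Merton rule and Galileo’s law of fall read:

$$s = \frac{(v_0 + v)}{2}\,t, \qquad s = \tfrac{1}{2} g t^2 \ \text{(fall from rest)},$$

the first stating that a uniformly accelerated body covers the same distance as one moving throughout at its mean speed.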
Aristotle’s theory of the planetary motions was that planets were held on
crystalline spheres centred upon the Earth. There were about four for each
planet, which were connected by crystalline gearing and driven by the
Prime Mover, God or Love, on the outside. This scheme, which was based
on an abstract mathematical model of Eudoxus’, could not account for the
changes in size of Venus and the Moon, best explained in terms of changes
in distance; and so it was replaced by Ptolemy’s ‘big wheel’ system of
epicycles.
The forces which could make planets go in such paths are unimaginable;
and this seems to have been a major feeling in Kepler’s mind when he
adopted Copernicus’ view that the Sun must be the centre, and that planets
must go in simple orbits around it. He showed that they go in an ellipse.
Dynamical thinking lay behind this triumph of the new astronomy; though
the nature of the forces was still unclear until Newton’s work on gravity
later in the seventeenth century.
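Kepler’s result has the modern form (an editorial addition): each planet moves on an ellipse with the Sun at one focus, in polar coordinates

$$r = \frac{a(1 - e^2)}{1 + e\cos\theta},$$

where a is the semi-major axis and e the eccentricity of the orbit.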
In the nineteenth century, Ampère’s work appeared to Faraday rather like
Ptolemy’s had to Kepler; his equations fitted the facts of electricity and
magnetism but the forces behind the phenomena seemed unintelligible. With
his idea of the field he brought dynamical explanation into
electromagnetism. Concern with the forces which underlie what we observe
is a necessary spur to good physics and chemistry.
An editor of a journal can be a person of considerable power and
influence. Authors submit their papers, and the editor consults a referee
and perhaps a member of the editorial board which is now characteristic of
most journals. A journal may take a party line or support a particular dogma
or paradigm; where there are different traditions in different countries or
sciences the journal may reflect them, particularly if the editor is partisan.
In the past, ambitious professors like Liebig edited a journal in which their
students could publish their research; and certainly ready access to
publication is very important in science, which is ‘public knowledge’.
Recognising a valuable paper to be extracted from an unpromising draft is
an important part of an editor’s craft, which is a kind of intellectual
midwifery; encouragement from an editor is very valuable for a young
scholar. Some books also have editors; sourcebooks which reprint papers
which have appeared elsewhere, and symposia which contain papers
perhaps read at a conference, or lectures delivered in a series: all these
things call for tact and firmness from their editor, who is most of the time a
harmless drudge, but whose decisions are sometimes high-handed and
always final.
Electricity was observed in Antiquity, and our word comes from the Greek
for amber. When rubbed, amber will attract little scraps of paper and such
things; and down to the eighteenth century electricity seemed a weak force,
suitable for parlour-tricks rather than for real work. It was found that glass
can also be made electric by rubbing; but that while two electrified glass
rods, or amber ones, will repel each other, a glass and amber rod will be
attracted, and when they get near enough there will be a spark between
them. People spoke of two opposite electricities, vitreous and resinous. In
the mid-eighteenth century, Musschenbroek in Leiden invented the first
condenser in which electric charge could be stored, the ‘Leiden Jar’ (a
glass vessel coated with foil); a series of them was called an ‘electric
battery’. The King of France had his boredom relieved one day by seeing a
line of grave monks all given an electric shock; and the fringe medicine of
the later eighteenth century involved various electric and magnetic
treatments.
There was still no real theory until Benjamin Franklin in Philadelphia,
then on the outskirts of the ‘civilised’ world, proposed that electricity was a
single weightless fluid. Rubbed glass is minus or negative because part of
its fluid has been removed; while rubbed amber is plus or positive because
it has accumulated more fluid. Metals, called non-electrics, would conduct
the fluid from one place to another. Franklin’s views were published in a
series of letters from 1747 on, collected together in a book in 1774; and in
1752 he had made his reputation in science with his experiment of
collecting electricity from thunder-clouds with a kite: an experiment fatal to
some who tried to repeat it. This demonstrated that electricity was a
powerful agent, responsible for lightning; which could now therefore be
conducted harmlessly away.
Franklin’s theory was not fully accepted in France, where two fluids,
positive and negative, were preferred; when a spark passes through paper,
there are burrs on both sides of it which seem to show motion both ways.
Coulomb showed that electrical attraction follows an inverse-square law
like gravity, thus bringing mathematics into the science. Priestley in
England suggested that electricity was a more fundamental science than
mechanics, and would penetrate beneath the surface of things; and at the
end of the century Schelling in Germany argued that polar forces underlie
matter, and that apparent rest is really dynamic equilibrium.
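Coulomb’s inverse-square law, which brought mathematics into the science, reads in modern form (an editorial addition):

$$F = k\,\frac{q_1 q_2}{r^2},$$

formally identical to Newton’s law of gravitation, with charges in place of masses.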
Meanwhile Galvani had made frogs’ legs twitch using two different
metals; he believed that organic material was essential but Volta in 1799
generated electricity by what he thought was the mere contact of different
metals in water. His ‘pile’ of metal plates generated the first electric current;
but it was not at first clear that this ‘galvanism’ was the same as
‘Franklinic’ electricity, and final proof only came from Faraday in the
1830s. Davy in 1806 demonstrated that chemical action rather than contact
is necessary, and argued that chemical affinity was electrical; neither he
nor his pupil Faraday believed in an electric fluid, but saw electricity as
force rather than matter. In their day, electricity became a part of
chemistry; it was only with the principle of conservation of energy that
electricity became a part of physics.
In the 1820s, following Oersted’s demonstration of electromagnetism,
Ampère developed equations to account for what was going on; but these,
based on Newtonian analogy, proved no use for prediction of new
phenomena. Faraday, ignorant and suspicious of mathematics but with a
qualitative notion of ‘lines of force’ spreading through space, made
discoveries which astonished contemporaries and led ultimately to new
technology: and from the 1850s Maxwell developed Faraday’s idea
mathematically, showing that light waves were electromagnetic disturbances.
Electricity was thus by the late nineteenth century the fundamental
science which Priestley and Schelling had believed it would be; and it was
also being applied, first in the telegraph which transformed
communications and then in lighting and in the preparation of new metals
like aluminium. But the question of the nature of electricity and its relation
to matter was still not fully answered. Faraday had believed that matter was
no more than centres of force; this made him unwilling to believe in atoms.
But in 1897 J. J. Thomson ‘discovered’ the electron, seemingly
demonstrating that cathode rays were a stream of negatively charged
particles. Electricity was thus not a weightless fluid flowing from plus to
minus, but a stream of electrons flowing from minus to plus.
Later work showed that the electron is not a particle like a little billiard-
ball, and that energy and mass are interconvertible. So we are back to a
view closer to that of Faraday. Electrons surround the nucleus of the atom
in orbitals governed by quantum numbers, which means that these
oppositely-charged entities do not simply rush together and neutralise each
other; but how far electrons and some other fundamental particles have a
mass is unclear. We can see that those who believed that electricity was
matter, a sort of fluid, and force both had some truth on their side; and
were certainly right in suggesting that electricity was a fundamental clue in
understanding the universe.
Energy is capacity for doing work; and down to the middle of the
nineteenth century the word was used in an unscientific way, of people. During the
eighteenth century, weightless fluids were postulated to account for heat
(caloric), chemical activity (phlogiston), electricity and magnetism;
while light was also believed to be a stream of particles. Various
analogies connected these sciences, but there was no real link between
them, until about 1800 when there came a spate of discoveries of
interconnections.
Coulomb showed that electricity followed an inverse-square law like
gravity; Volta that when two different metals were immersed in water and
connected up, electricity was generated; Davy that an electric current is a
powerful agent of chemical analysis; and W. Herschel that beyond the red
end of the solar spectrum there were rays of radiant heat. A little later
Oersted showed that an electric current affects a magnet; and Mary
Shelley’s fictional Dr Frankenstein had a successful research programme
based on the idea that electricity lay behind life.
The German philosopher Schelling in his ‘Naturphilosophie’ saw all
force as one; and the real world as based on equilibrium of polar forces,
rather than on solid lumps of matter. Oersted was influenced by this way
of thinking; but other contemporaries, like Davy, looked back two centuries
to the more down-to-earth philosophy of Francis Bacon, for whom heat
was the effect of matter in motion. In the 1830s Faraday quantified the
relationship between electricity and chemical affinity in his laws of
electrolysis, showing that definite quantities of electricity produce definite
quantities of chemical change; but absolute measurement of quantities of
electricity or chemical affinity was hardly possible at this time. What
could be measured were mechanical work, like that done by the falling of a
weight in a clock, and heat; and Joule in a classic experiment developed in
the early 1840s showed how much mechanical work produced a definite
quantity of heat.
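Joule’s mechanical equivalent of heat has the modern value (an editorial addition): about 4.2 joules of work yield one calorie of heat,

$$W = J\,Q, \qquad J \approx 4.2 \ \mathrm{J\,cal^{-1}},$$

so that a 1 kg weight must fall roughly 0.43 m to do the work that warms 1 g of water by 1 °C.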
Faraday and others had seen a ‘correlation of forces’, and in Faraday’s
case this idea lay behind his work in electromagnetism and his discovery
that the plane of polarisation of a beam of light is rotated by a magnetic
field; but he, like many chemists, was suspicious of mathematics and
mathematicians, and of any attempt to reduce electricity to mechanics.
Shortly before Joule, Mayer had published a calculation of the mechanical
equivalent of heat; he had noticed, as a ship’s doctor, that a patient bled in
the East Indies had redder blood than in Europe, and concluded that this
was because he did not have to use up oxygen to maintain his body-
temperature in the Tropics. Comparing the specific heat of a gas at constant
pressure (when it expands as it gets hotter, doing work against the
atmosphere) and at constant volume, he computed how much work
produced a quantity of heat; but his paper was disregarded. Joule’s
experiment was much more direct, but he too was ignored at first and was
fortunate to attract the attention of William Thomson, the future Lord
Kelvin, who worked with him, especially on gases.
By the 1840s the weightless fluids had been abandoned: light was seen as
a wave motion, heat as motion of particles, and Faraday’s idea of electric
and magnetic fields with lines of force had begun, in the hands of
Thomson, to reach applied mathematicians. Thomson’s friend Helmholtz
put together the ideas of correlation of forces and of quantitative
equivalence. In both learned and popular lectures he urged that a new and
fundamental discovery had been made; that a quantity, energy (though at
first translated into English as ‘force’) was conserved in all changes. Since
mechanical work is expressed in the dimensions of mass times length
squared divided by time squared, then energy whether electrical, chemical
or magnetic must also be expressed in these dimensions. The units of these
sciences must all be expressible in terms of grammes, centimetres and
seconds: and the determination of these values became one of the great
tasks of the scientists of the second half of the nineteenth century, the
epoch of classical physics which took over parts of what had hitherto been
chemistry.
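Helmholtz’s dimensional claim can be written compactly (an editorial addition):

$$[E] = M L^2 T^{-2},$$

so that in the centimetre–gramme–second system the unit of energy, the erg, is 1 g cm² s⁻².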
The principle of conservation of energy became the First Law of
thermodynamics, the quantitative and deductive science of the
transformations of heat and work which was one of the great triumphs of
the nineteenth century. Although there had been steam engines since the
early eighteenth century, their working had not really been understood; and
eventually this understanding led to much greater efficiency, particularly
in the steam turbines of the later nineteenth century. At the beginning of our
century, with Einstein’s work leading to his famous equation connecting
energy with mass, E = mc2, conservation of energy has been subsumed into
that of mass-energy; in radioactivity matter is transformed into energy.
The old opposition between those who saw matter or force as the
underlying reality has thus been transcended; though perhaps at the expense
of clarity.
Engine, down to the eighteenth century, meant any kind of machine. People
were fascinated by clockwork, and most elaborate mechanisms were made
first for cathedrals and then for the wealthy and the powerful; and by the
later seventeenth century watches were common. Pendulum clocks,
following the work of Galileo, Hooke and Huygens were by this time very
accurate; and were used in observatories where time could be measured
with more accuracy than angles. On a larger scale, wind, water and animal
power drove ‘engines’ for grinding corn, fulling wool and pumping water,
as had happened for hundreds of years.
The great change came in the early eighteenth century, when Thomas
Newcomen built the first practicable steam-engine in 1712. This was,
strictly speaking, driven by the atmosphere, rather than by steam: the steam
in the cylinder was condensed by a spray of cold water, giving a partial
vacuum, and in the working stroke atmospheric pressure drove the piston
down. By later standards the engine was very wasteful, but used at a pithead
where coal was dirt cheap it was dependable and kept deep mines dry
enough to work, making the Industrial Revolution possible. Its power was
not at first much more than could be got from water or wind; and the
eighteenth century also saw great improvements to water-wheel design in
France and in England. Engine came to mean something which provides
power to do work.
With James Watt came the first attempts to understand what was going
on in the steam-engine, rather than just to tinker with and improve a good
basic design as other engineers had done. Watt saw how wasteful it was to
heat up and cool down the cylinder at each stroke, and added a separate
condenser; this introduced problems because the piston had to fit better,
leather and cold water no longer being an adequate seal. Here Matthew
Boulton at the Soho works in Birmingham, used to precision work, came in;
and his partnership with Watt brought the steam-engine out of the
coalfields, and first to the tin-mines of Cornwall and then by the end of the
century to the textile-mills around Manchester, and to other industries.
Sadi Carnot, reflecting on France’s defeat in the Napoleonic Wars, put it
down to steam-engines, to Britain’s industrial power and wealth. He
founded thermodynamics, providing an explanation of how steam-engines
worked in the most abstract and mathematical terms. This was no use to
engineers, especially Cornishmen, in the 1820s improving efficiency by
adopting higher pressures, and working towards reliable locomotives; but
as the science of physics came into being in the 1840s with conservation
of energy, so scientists learned important lessons about the
interconversion of heat and work from steam-engines. Technology here
preceded science; but then applied science was involved in the steam-
turbine and in later engine-designs.
D. Landes (1986) The Unbound Prometheus, new ed., Cambridge University Press, Cambridge.
Error is a part of all human life. Some scientists, like William Hyde
Wollaston in the early nineteenth century, have taken enormous pains to
avoid it; he was known in his circle as the Pope because he was believed
infallible. But he was over-cautious, and his work in chemistry was less
important than it would have been if he had been bolder in conjecture. He
and his contemporaries realised that no experiment was exact, and that all
measurement involved error; but they felt that there was little that could be
done about it, except to improve one’s skills in manipulation and to use
tried and accepted methods. Thomas Thomson and Berzelius in the late
1820s had a long argument about the accuracy of determinations of the
relative weights of atoms, in connection with Prout’s hypothesis, that they
were exact multiples of that of hydrogen. Thomson selected those results
which confirmed this hypothesis as his best, which to Berzelius seemed
prejudiced and unscientific: the quarrel only subsided when other analysts
confirmed Berzelius’ results, and accepted his principle that one starts from
experimental results rather than from a hypothesis which is to be tested.
This principle is not of universal application: in astronomy in the 1840s,
for example, the planet Neptune was detected by starting from a hypothesis,
following calculations based on the wobbles of Uranus.
Astronomers at the end of the eighteenth century had been troubled by
the inevitable inaccuracies in observing phenomena, such as the transit of
Venus across the Sun. How was the true result to be known from a mass of
erroneous ones? Gauss plotted observations, and found that they lay on a
curve which we now call a Gaussian distribution, but which was then called
the error curve. From it he could fix upon the true result, rather than just
taking a mean value. Producing truth out of error in this quantitative way
was one of the great achievements of the nineteenth century; we now take it
for granted in statistics, which has been extended far beyond this
concern with individual observations and plays a crucial role in quantum
physics and in the social sciences.
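Gauss’s error curve has the familiar modern form (an editorial addition):

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\Bigl(-\frac{(x-\mu)^2}{2\sigma^2}\Bigr),$$

where μ is the true value sought and σ measures the scatter of the observations about it.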
Errors which are more alarming to those who hope for infallible science
are theories which have met falsification; involving entities such as
caloric, ether and phlogiston. These stories are good cautionary tales for
anybody seriously tempted by scientism, for all these things commanded
the assent of experts. Science is a human activity, and it is provisional;
neither implicit faith nor complete scepticism is an appropriate attitude to
it.
A forthcoming issue of Philosophia is to be devoted to Error.
Explanation is one of the aims of science, at any rate in the popular mind.
At the beginning of the seventeenth century, Galileo set out a programme of
explaining secondary qualities (such as colours, tastes and smells) in terms
of particles of matter and their motion. The problems were clearly
expressed by William Thomson (Lord Kelvin) in the late nineteenth
century. The spring of gases is very simple; they all obey Boyle’s law, so
that the product of pressure and volume is constant (at moderate pressures
and not-too-low temperatures). In the mid-nineteenth century, the behaviour
of gases was explained by the kinetic theory of Clerk Maxwell and Rudolf
Clausius, involving elastic molecules in constant collision with each other
and the walls of their container. This led to some surprising predictions;
for example that the viscosity of a gas was independent of the pressure.
This was confirmed by experiment: the theory therefore was generally
accepted. But while the elasticity of gases is very simple, because all follow
a simple law, that of solids is very complicated. For atoms in the nineteenth century the problem seemed worse than complicated: the bouncing of a rubber ball was explained in terms of its parts being squeezed together and separating under repulsive forces, while atoms, having no parts, could not be elastic. Since the distinction between atoms and molecules was not clearly made, and some gases do in fact have molecules composed of a single atom, this was a real difficulty.
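As a gloss in modern symbols (not the author's own notation): Boyle's law and the kinetic-theory estimate of viscosity are

\[ pV = \text{constant}, \qquad \eta \approx \tfrac{1}{3}\,\rho\,\bar{c}\,\lambda, \]

where \( \eta \) is the viscosity, \( \rho \) the density, \( \bar{c} \) the mean molecular speed and \( \lambda \) the mean free path. Since \( \lambda \) varies inversely as \( \rho \), the density cancels out, giving the surprising prediction, confirmed by experiment, that the viscosity is independent of pressure.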
Thomson was worried at the prospect of explaining something simple
and well-understood in terms of something complicated; and was thus
unhappy about the kinetic theory. He strove with the idea that atoms were
whirlpools in the ether, or were mere point centres of force; but these were
not altogether easy to understand either. He retained the idea that the
scientist ought to explain; but the new physics of quantum theory and
relativity which was coming in at the end of his life and the beginning of
our century made this more problematic. His younger contemporary Ernst
Mach had argued that the task of science was simply economy of thought.
The search for truth or explanation was hopeless; and theories were to be
judged as more or less convenient ways of grouping facts. For Mach, no
experiment ever entailed a theory. The shape of the Earth, flattened at the
Poles, had been explained since the early eighteenth century as an effect of
its rotation; and thus as evidence for the Copernican system of astronomy.
For Mach, it could equally for all we knew be the result of curious forces
set up by the stars in their daily revolution around the Earth. We adopt the
view of Copernicus rather than of Aristotle and Ptolemy because it is
simpler; but the discovery of new facts might tilt the balance back again.
Mach’s agnostic attitude to explanation is not unlike that of Osiander, the
publisher of Copernicus’ great book of 1543, who in the anonymous
preface urged that theories should be judged only by their convenience, and
not by their plausibility. This was also the line which the Inquisitors tried to
get Galileo to take. It is now generally called instrumentalism, because a
theory is seen as an instrument; like any piece of apparatus, it is judged
by whether it works, and given up when someone invents a better one. The
sciences, on this view, are not concerned with explanation, but with
classification and prediction.
The idea of elastic molecules is not then an explanation, but a model. We
can imagine a kind of three-dimensional game of billiards with perfectly-
elastic balls, and we can handle the mathematics involved in predicting the
outcome of collisions when the numbers of balls are enormous. We can then
even make allowances for the size of the balls, and slight attractive forces
between them, and make the behaviour of our model fit even better the
behaviour of real gases, which do not exactly obey the gas laws. But there
will be features of the model which are irrelevant, at any rate so far; in
billiards the colours of the balls are important, but not in the kinetic theory.
Even after years of usefulness, the model may well fail and be replaced
with another showing more powerful analogies with the phenomena.
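The allowances for molecular size and slight attraction were later codified by van der Waals; in his equation for one mole of gas (given here as a standard gloss, not the author's text),

\[ \left(p + \frac{a}{V^2}\right)(V - b) = RT, \]

the constant b corrects for the volume of the molecules themselves and a for their mutual attraction; setting both to zero recovers the ideal gas law.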
Mach’s view was welcome to those struggling to make sense of the new
world of quanta. Light seemed to have been explained as a wave motion in
the ether; and yet in the photoelectric effect it seemed to be a stream of
particles or photons. Similarly, the cathode rays which J. J. Thomson in 1897 had explained as a stream of corpuscles or electrons were some thirty years later found to undergo diffraction like waves. According to the
‘Copenhagen interpretation’ of Niels Bohr, one should not worry about this;
the mistake was to assume that electrons were either waves or particles. We
cannot know, and do not need to, what they are like in themselves; it is
enough for us to know what equations to use to make predictions in any
particular case. This did not satisfy Einstein, who wanted to understand and
explain; but in quantum physics this is still an elusive goal, and the
proposed explanations seem to involve accounting for the relatively
straightforward in terms of the complex or the incomprehensible.
Instrumentalism does not seem to be enough to satisfy many scientists;
and the popular view has advantages over the more sophisticated. Despite
the risks involved in seeking explanations, the restless and speculative
intellect will not long be satisfied with less.
Fact, which to us means something given, comes from a Latin word
meaning something made. While to common sense, it seems that facts and
theories are quite distinct, and perhaps that science is a quantity of
authenticated facts; to the more sophisticated the line is hard to draw. What
we observe is a phenomenon; but what we are interested in when doing
science is how it fits into a pattern or order. We need therefore to get
behind the phenomena and we find that a fact is more than a phenomenon;
it has to be significant. To the baby, we are told, the world is a blooming, buzzing confusion; as we get older we learn to separate messages from
background noise, and this is what happens in the sciences too.
To know what is significant we have to have a theory or a hypothesis.
Thus to Galileo tidal observations going back to the Venerable Bede were
not significant; he had worked out (in tideless Venice) that the Earth’s
rotation would produce a tide every twenty-four hours, and refused to
accept the archaic and occult idea that the Moon might draw the sea across
thousands of miles of empty space. He was wrong; and so may other
scientists be who refuse to take seriously evidence (of flying saucers,
perhaps) which does not fit in with their world-view or paradigm. But
unless we have some way of deciding what is relevant we cannot do
science; and changes in science produce changes in what is seen as a fact.
We cannot just be wide-eyed and open-minded.
Scepticism like Galileo’s led eminent men of science in the eighteenth
century to deny that meteoric stones really fell out of the sky, or that sea-
shells could be embedded high in mountains; but these errors were in the
end put right, while the same sceptical spirit had banished astrology and
other alleged bodies of facts. In the nineteenth century, it was a fact that
fever was the consequence of spending nights ashore in tropical harbours;
to us, the fact is that the relevant diseases are carried by mosquitos. Facts
are not immune to correction, which happens when different aspects are
seen as significant.
Those who emphasise the place of facts in science usually see
explanation as a matter of bringing facts under a law rather than of
seeking a theory. The law connects observables rather than getting behind
the surface. But scientists are generally very suspicious of ‘purely-empirical
generalisations’; and demand of a law that it has some analogy with others
and some explanatory function. It seems as though what we require in a
science is both authenticated phenomena and appropriate ideas; what makes
most scientists reject parapsychology, for example, is that it does not seem
to have the latter. There is a good deal of dogma in science; but without it there would be confusion.
A field is a piece of ground, perhaps where a battle has been fought; and it
has come to mean a piece of intellectual territory: one of the sciences, or a
small part of one as the process of specialisation has gone on. But in
science, and especially in physics, it has come to have a technical
meaning, as an area within the range of some agent or force: this usage is
found in Faraday’s Experimental Researches in Electricity (paragraph 2252)
in 1845, which may be its earliest use in print, though he uses it casually
and without strict definition.
In studying electricity and magnetism, and speculating about light
and gravity, Faraday was becoming convinced that the medium in which
he saw lines of force everywhere was crucial, and that atoms were simply
centres of force. He believed that the Newtonian picture of bodies acting on
one another at a distance across void space was wrong; and that fields of
force gave the right idea. To the astonishment of contemporaries he
succeeded in affecting a beam of polarised light by passing it through a
magnetic field; its plane of polarisation was rotated. Other experiments on
magnetic and electrical induction could also be explained using the idea of
fields.
In the hands of Clerk Maxwell, Faraday’s intuitions were given
mathematical form, and field theory became one of the great pillars of late
nineteenth-century physics, and indeed of science since then. Electrical,
magnetic and gravitational fields form part of the ordinary vocabulary of
scientists; though laymen may continue to see electricity as a kind of
juice, or fluid, in terms which Faraday made obsolete.
Geometry is that part of mathematics which deals with lines, planes and
volumes. It was put into the form of a deductive system by Euclid in the
third century BC, but had existed in a less organised form for a long time.
The ancient Greeks believed they had got it from the Egyptians, who had
developed it either because they needed to measure their lands (which is
what ‘geometry’ means) after the annual floods of the Nile, as Herodotus
suggested; or because their priests were a leisure class who could go in for
abstract thought, as Aristotle supposed. Geometry has always had these two
aspects: it is a necessary part of the training for the profession of architect,
surveyor or engineer; and it is a system of pure deduction, in which the
theorems can be worked out from the axioms without any kind of
experiment, and in which the aim is rigorous proof. In Antiquity, other
branches of mathematics lacked this rigour; but Descartes in 1637
published his demonstration that geometry and algebra can be completely
translated one into the other. Thereafter, the economy of algebraic proof has
been generally preferred; but Newton in his Principia (1687) used
geometrical methods throughout.
Euclid’s fifth postulate was that parallel lines never meet, and later
geometricians showed that this could not be proved from the axioms;
though it is perhaps not self-evident in the way that they are: for example, if
a>b, and b>c, then a>c. To Kant working on his philosophy at the end of
the eighteenth century it seemed that geometry was ‘synthetic a priori’,
because its propositions fitted the real world when tested, and yet could be
worked out by pure thought. But even in Kant’s time, alternative geometries
to Euclid’s were being worked out by Gauss and others; using a different
fifth postulate they got consistent but different systems; in which for
instance there are more or less than 180 degrees in a triangle.
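A standard illustration, added as a gloss: on a sphere of radius R, a triangle of area A has angles summing to

\[ \alpha + \beta + \gamma = \pi + \frac{A}{R^2}, \]

more than two right angles; on a saddle-shaped (hyperbolic) surface the sum falls short of two right angles; only on the flat Euclidean plane is it exactly 180 degrees.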
Nobody seriously supposed in the nineteenth century that a non-
Euclidean geometry might fit the world; and Euclid’s had the longest run of
any textbook in history. But at the beginning of the twentieth century, Henri Poincaré, as part of his idea that convention was the basis of science,
suggested that space might be non-Euclidean – but that because of the way
our minds work we would go on using Euclidean geometry to describe it.
Einstein at just the same time proposed seriously that space was non-
Euclidean: that light did not follow a Euclidean straight line in the
neighbourhood of massive gravitating bodies. In order to keep a simple
physics in which light went in straight lines and at constant speed, he
proposed to use a different geometry. The link between measuring the world
and inventing a deductive system had been broken, and Kant’s category of
the synthetic a priori was no longer necessary. To find which geometry to
use we have to do empirical tests, and make judgements about simplicity.
God is the creator of the world, imposing order and design upon it, in a
tradition coming both from Platonic philosophy and also Judaeo–
Christian religion. Man being in God’s image could hope to understand His
work; and this impetus lay behind the Scientific Revolution of the
seventeenth century, along with the hope of the improvement of man’s
estate through technology. If all events are merely the result of chance and
contingency, then there is little point in spending one’s life looking for
laws; so the existence of God gave a guarantee that science was worthwhile.
Conversely, the evidence for design seemed proof of God’s existence: until
in the eighteenth century Hume argued forcibly that it fell far short of that;
and Laplace later found that in astronomy he ‘had no need of that
hypothesis’. Developed science no longer needs the underpinning required
in its early days: but many scientists do feel that an explanation of the
order of things is needed, and invoke God as the First Cause; though this
God is still some way from the Father worshipped by Christians, who works
through providence and miracles, and imposes moral order.
Government has, with the passage of time, become the most important
patron of science. This has transformed the nature of science, as it has
ceased to be something like a harmless hobby and has become a profession,
with close links to industry and the military. In these fields secrecy is all
important, and the old ideal of open communication, at international
conferences and in journals, has been somewhat modified; science is no
longer ‘public knowledge’. But because science of even the most pure and
recondite kind is expected to have applications in technology, governments
have also been prepared to support it; and without such patronage the
progress in science in the last century or so would have been impossible.
Archimedes is supposed to have constructed secret weapons to defend
Syracuse against the besieging Roman army; but it was about 1600 that the
idea that knowledge is power began to gain ground, notably in the writings
of Francis Bacon. In the succeeding century, various academies were
founded with various degrees of royal support; but scientific research was
still fairly cheap for the most part. The great exception was astronomy, where telescopes and a team of observers were required; and state-supported observatories were set up, first in Paris and then at Greenwich. The
hope here was that they would produce data enabling the longitude to be
found at sea; navigation would then become a science, and long-distance
trade would increase. By the 1760s this was achieved, and voyages like
those of Bougainville and Cook were planned at government expense.
Astronomical observations were made for mapping, and materials valuable in natural history and ethnography were collected for museums, in what became a series of expeditions, of which that of HMS Beagle was one.
Pendulums were also taken to high latitudes for gravitational measurements,
and the Earth’s magnetism was investigated.
Darwin learned much of his science on the voyage, and there was still
very little formal education in the sciences. In Paris, the Ecole
Polytechnique was founded shortly after the Revolution of 1789; here at
government expense research and teaching were carried on together. In the
German universities of the nineteenth century a similar pattern emerged;
and this was followed in Britain and the USA in the second half of the
century, despite the worries of exponents of ‘laissez-faire’. In Britain, grants
had been made available for research through the Royal Society and the
British Association; and in the 1880s came the first regular grants to
British universities, making a career in science possible. While the role of
the wealthy patron or foundation cannot be forgotten, and some have been
extremely good at picking winners, the bulk of finance for science, and
indeed for other disciplines in higher education, has to come from
governments; which oscillate between viewing such expenditure as costs or
as investment.
Gravity meant weight, but since the work of Newton it has meant that
power by which all particles of matter attract each other, with a force
proportional to the product of their masses divided by the square of their
distance apart. Throughout human history, it had been known that
unsupported bodies fall to the ground; but Galileo, in his working towards
the notion of inertia, propounded the law that falling is uniform
acceleration. He probably never did any experiments from the Tower of
Pisa, but argued that in the absence of air-resistance everything would fall at
the same speed. This was later verified by Boyle with his air-pump.
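In modern symbols (a gloss on the two laws just described): Newton's attraction and Galileo's uniform acceleration are

\[ F = G\,\frac{m_1 m_2}{r^2}, \qquad s = \tfrac{1}{2}\,g t^2, \]

where G is the gravitational constant, r the distance between the two particles, s the distance fallen in time t, and g the local acceleration of free fall, the same for all bodies once air resistance is removed.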
For Aristotle, the cause of falling had been that heavy bodies sought their
natural place, the centre of the Earth, which formed a compact sphere at the
centre of the Universe. For Galileo, this was no explanation; and because
the Earth was moving about the Sun, it was not merely verbal but wrong.
But he had no better explanation to offer; and for the motion of planets, he
fell back on the almost Aristotelian idea that circular motion required no
force to maintain it. Against this, Descartes argued that bodies left to
themselves go in straight lines or remain at rest.
Galileo thought it occult, reminiscent of astrology, to suggest that the
distant Moon, for example, could affect the sea and cause tides. But for
Newton, if the Earth did not attract the Moon, it would have gone off in a
straight line; and in 1665 he worked out the law of gravity, which he
formally published after a great deal more work in his Principia of 1687.
He showed that the orbits of the planets are as predicted in his theory,
from which Galileo’s law of falling also follows. He was perplexed about
the cause of gravity; Descartes had proposed whirlpools of ether to carry
the planets, but Newton showed that this would not work. Nevertheless he
was very reluctant to suppose that gravity was inherent to matter, and toyed
with ethers himself. He disliked the idea that matter could act where it is
not, across void space; and his work upset those devoted to mechanical
explanation, for whom the world was a big clock in which all the wheels
and springs should be revealed by science.
A spinning Earth would, according to Newton, become flattened at the
poles; and a French expedition under Maupertuis to Lapland showed in the
1730s that indeed it was. On another French voyage, to Peru, la Condamine
found that the Andes pulled his plumb-line out of true; showing that gravity
was not simply a pull towards the centre of the Earth, but really attraction of
particles. At the end of the eighteenth century, Cavendish, using a torsion
balance, demonstrated this in the laboratory; while William Herschel
showed that double stars move about each other under gravity, taking the
law outside the solar system; in the nineteenth century it was used to predict
the planet Neptune because of the wobbles it produced in the orbit of
Uranus.
Still gravity could not be explained; and unlike light its propagation
seemed instantaneous. Einstein interpreted it in terms of the curvature of
space; modern physicists search for gravitons; perhaps we just have to
accept it.
D. Gjertsen (1986) The Newton Handbook, Routledge & Kegan Paul, London.
Hardness is measured according to what will scratch the surface being
investigated; this provides a relative measurement. But atoms were
supposed to be absolutely hard, so as never to wear away or break in pieces
as Newton put it. Such particles could not bounce, because that would
involve parts coming together and separating; so elasticity and hardness
were seen as opposite qualities, and it was not until well into the nineteenth
century that a coherent kinetic theory of gases appeared.
Heat is something we feel when we touch a poker that has been in the fire,
for example. To work out its cause was one of the earliest tasks of science;
and both Galileo and Bacon, in the early seventeenth century, identified
motion as the cause of heat. But they meant something different by this
phrase.
Galileo revived the old distinction made by Greek atomists between
primary qualities, which reside in things, and secondary qualities, which
depend on an observer. Heat was a secondary quality, produced by the
motions of minute particles which formed a kind of fluid, a substance.
Thus along with other atoms or corpuscles, there were particles of heat. A
hundred and fifty years on, this theory was made more definite by
Lavoisier, who named the fluid caloric. This was for him a kind of
chemical element, which could form definite compounds: ice and caloric
thus form water. All liquids and gases are thus compounds of caloric;
which can also be mixed with them, so that hot water contains more than
cold.
Bacon, who investigated ‘hot’ spicy dishes as well as pokers, saw heat as
the motion of the particles of which bodies are made up, and not as another
thing. In the 1790s, his theory seemed old-fashioned; but Benjamin
Thompson (Count Rumford) observed that indefinite quantities of heat
could be produced by friction. This would be very odd if it were a
substance, but not if it is simply motion of particles. Carnot arrived at the
second law of thermodynamics in the 1820s using the caloric theory; but by
the 1840s many scientists were coming to see heat as a form of energy
rather than as an element. In Rumford’s experiment, mechanical work was
being transformed into heat; and James Joule made a clockwork
arrangement in which water was heated by churning it with paddles driven
by the falling of a weight. He could then compute just how much work was
equivalent to a definite quantity of heat; and on the strength of this, he,
Helmholtz, and others announced the conservation of energy.
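The arithmetic behind Joule's paddle-wheel can be sketched in a few lines of code; the figures below are illustrative assumptions, not Joule's own data.

    # A minimal sketch of the paddle-wheel reasoning (illustrative values only).
    g = 9.81            # acceleration of free fall, m/s^2
    m_weight = 10.0     # falling weight, kg (assumed)
    h = 2.0             # height through which it falls, m (assumed)
    m_water = 1.0       # mass of water churned, kg (assumed)
    c_water = 4186.0    # modern specific heat of water, J/(kg K)

    work = m_weight * g * h                # mechanical work delivered, in joules
    delta_T = work / (m_water * c_water)   # resulting temperature rise, in kelvin
    print(f"{work:.1f} J of work warms the water by {delta_T:.4f} K")

The minute temperature rises such sums predict are why Joule needed unusually sensitive thermometers; his measured equivalent comes to about 4.2 joules per calorie in modern units.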
Heat derived from chemical energy, notably when anything burned, first
attracted attention, leading to the notions of phlogiston, and of caloric with
which Lavoisier had replaced it. He had also demonstrated, in his work on
oxygen, that the ‘vital heat’ of life is akin to combustion. In the early
nineteenth century, heat became the concern of those working in what was
coming to be called physics; who distinguished the conduction, convection
and radiation of heat. With Joule and Helmholtz the study of heat ceased to
be a distinct science and became a branch of physics, a science of which
energy was the central idea.
History is what this whole book is about; for science has a very long one,
in which it has changed character enormously. Its subject matter, apparatus
and institutions have evolved into something very different from what
they were in Antiquity, or indeed in the time of Galileo. Historians of
science used to be mostly scientists, active or retired; but now, long after science itself professionalised, the history of science has become a kind of profession. Instead of
just looking at the ancestry of ideas now current, historians began to flirt
with philosophy of science, and to write case-studies of interesting
episodes demonstrating induction or deduction, paradigms or
falsification. Because we live in an untidy world, these rarely fitted the
ideal very well unless ‘rationally reconstructed’; and the serious historian
draws the line at that, because it is the particularity of the past which makes
it enthralling.
Historians of science therefore deserted philosophers, and turned to
social historians; looking at the life and work of ordinary as well as great
scientists, and at scientific academies and associations, metropolitan and
provincial. Scientists rather than science became their province; and models
from sociology, particularly that of ‘marginal men’ seemed illuminating. It
became possible to write history of science with the science left out; for
science was, after all, an activity not completely different from other ways
of spending time, like the law or shopkeeping. Hierarchy and status
became very important notions.
But unfortunately none of these flirtations aroused much excitement
among practitioners of other disciplines. Historians of science rather than
scientists seemed to be the marginal men; and unfortunately became less
ready to risk the wide ranging synthesis as they concentrated on
manuscript material. But for scientists, philosophers, social historians and
sociologists, the history of science is a fascinating territory, with many
paths leading into it; the search for the context of past science, which then
formed the context of more recent science, is a fascinating and never-ending
quest, and the sources are abundant and varied.
D. M. Knight (1975) Sources for the History of Science, Cambridge University Press, Cambridge.
H. Kragh (1987) An Introduction to the Historiography of Science, Cambridge University Press,
Cambridge.
Inertia is the resistance bodies put up to any force changing their velocity;
it is the characteristic of matter, expressed in terms of what we call
Newton’s first law of motion: ‘All bodies continue in a state of rest or
uniform motion unless a force acts on them’. This is not very plausible, and
its adoption was one of those triumphs of science over common sense: we
see everything slow down and come to a halt unless a force (provided by
the engine of a car or train, for example) keeps it going. In the physics of
Aristotle and his school all motion required a cause; and yet even in
Antiquity there were cases of motion that continued without a mover –
arrows and thrown stones, for instance, and objects falling to the ground.
Falling bodies were seen as moving, with increasing rapidity depending
on their weight, towards the centre of the Earth, the ‘natural place’ of all
heavy things; while light fiery sparks fly upward. In the sixth century AD,
John Philoponus in Alexandria dropped weights from a tower and found
that their speed was not proportional to their weight; so the exact law of
falling remained a mystery. Even more perplexing was the problem of the
motion of projectiles; but in the thirteenth and fourteenth centuries, Jean Buridan and Nicole Oresme in Paris worked out a theory of 'impetus' to
explain it.
The bowstring or the sling imparted impetus to the missile, which was
gradually used up in its flight; when it was all gone, the projectile fell
straight to the ground. Similarly, a falling body gained impetus so that it
went faster and faster. This theory prevailed down to the early seventeenth
century, when Galileo was making sense of the motions of planets
following Copernicus’ view that they circle the Sun. If the centre of the
Earth was not the centre of the universe, and was moving, then there
seemed no good reason why it should be the natural place of all heavy
bodies; Galileo did not concern himself with the cause of falling, but
suggested that the law was that falling bodies were uniformly accelerated,
following an equation worked out three hundred years before by
mathematicians at Merton College, Oxford. This is a classic case of a
scientist avoiding a ‘why’ question and tackling a ‘how’ question.
For the motion of planets, Galileo had to give some explanation; he
believed that in the absence of friction, a ball bowled would go rolling right
around the Earth, and that similarly the Moon rolled around us, and we
around the Sun. This circular motion was thus inertial, requiring no force;
which pleased him, because he could not believe in forces acting at a
distance across void spaces. Descartes in his Principles of Philosophy
modified this idea, musing on the unchangeability of God; and concluded
that inertial motion must be straight-line rather than circular. Given this
principle as the basis of his physics, Newton had to account for the closed
orbits of the planets, which are ellipses and not circles; and came up
with his theory of gravity.
The first international conference was probably that called to set up the
metric system; but since it was held in France during the wars which
followed the Revolution of 1789 only delegates from allied or conquered
foreign countries came. Since the early nineteenth century, international
conferences have become an important part of scientific life: reading
published papers is no substitute for meeting other scientists engaged in
the same kind of work. Some famous meetings have been called to settle a
difficulty, like that at Karlsruhe in 1860 which was to agree upon atomic
weights in chemistry (and failed to do so); but others are held regularly or
occasionally to review progress and exchange ideas, often under the
auspices of national associations or academies. Frequent conferences
have meant that the distinct national styles or traditions in science which
were characteristic of earlier days have become much less prominent;
scientific education and practice have become international, and a subset
of English their language.
The first invisible college was a group to which Boyle belonged in the
1650s; they did not all meet together regularly, but kept in touch by
correspondence and visits. They formed a nucleus for the Royal Society,
which became visible when it got its charter in 1662, as the first enduring
academy of sciences. But even after the foundation of academies and
associations, those closely involved in research kept up informally with
others in the same field, forming correspondence networks; which might
sometimes turn into visible groups, like the Society of Arcueil in
Napoleonic France. Such groups (often international) are still very
important in science, because the number of people engaged with a
particular problem is often quite small, and the circulation of offprints
and preprints, correspondence, visits and occasional meetings is fruitful.
D. Crane (1972) Invisible Colleges, University of Chicago Press, Chicago.
M. P. Crosland (1967) The Society of Arcueil, Heinemann, London.
Isomers are compounds which contain the same components in the same
proportions, but differ in properties. They may contain the same number of
atoms in different arrangements, like urea and ammonium cyanate.
The laboratory is the place where most science is done; though clearly
mathematics needs no apparatus, and geology for example also requires
field-work. In the early nineteenth century, laboratories were not necessarily
separate rooms; the great chemist Berzelius used his kitchen, and Anna his
cook was also his technician. But even then better-endowed scientists
had a laboratory at their disposal; and the basement laboratory at the Royal
Institution, for example, is preserved in the state in which Faraday used it.
Laboratories have in recent years come into use in disciplines such as
archaeology, partly for good reasons and partly to bring the prestige of real
science into what might otherwise seem unserious activities.
In the 1790s at the Ecole Polytechnique in Paris there seems to have been
some laboratory teaching; but for the next half-century lectures with
demonstration experiments were the usual form of scientific education
at all levels. Chemistry was the first science in which practical training was
seen as essential, no doubt because in the early nineteenth century it was
very much an experimental science, looked upon by applied
mathematicians as applied cookery. Practical classes were at first an ‘extra’,
for which a further fee was asked at the new universities of London and
Durham for example; but they were soon required, and Liebig at Giessen
pioneered graduate laboratory work for the PhD degree also in the 1830s.
Twenty years later, laboratory teaching in physics began to come in as that
science emerged, with conservation of energy, into a mathematical and
experimental discipline; and a few schools, like the famous Queenwood where Tyndall and Frankland taught, introduced laboratories. By the end of
the century, school laboratories were common.
Eminent scientists, like Crookes, might still do their research in private
laboratories at the turn of the century; it was only as science became
professional that institutional laboratories became the norm. The large
laboratories of the later twentieth century, where teams whose members
have different trainings work in ‘big science’, are a phenomenon for the
sociologist.
B. Latour and S. Woolgar (1979) Laboratory Life, Sage, Beverly Hills.
Language is very important in the sciences. The Royal Society in its early
days set up a committee to try to produce a precise language; and
Lavoisier’s reform of chemistry involved a new vocabulary because he
believed the old one incorporated error. Illustration is a kind of visual
language very important in science: and Galileo believed that the Book of
Nature was written in the language of mathematics, a belief in which many
physicists have followed him. Pictures and symbols are less rich than
ordinary language, where words call up associations and can be used in
jokes; but scientists are suspicious of word-play, at any rate at work, and
look for accuracy and precision.
Down to the middle of the seventeenth century, Latin was the language of
learning and therefore of science; but gradually the vernacular languages
came to replace it. By 1800, French was the best language for international
communication in science; then German took the lead; and in our century
English has come to occupy the place Latin had three or four hundred years
ago. Much science is easy to translate, because it is not nuanced; but all
publication is rhetorical, written to persuade, in some degree, and original
works of science like Darwin’s Origin of Species are difficult to put into a
foreign tongue: there have been, for example, a series of Japanese
translations since the nineteenth century, trying to get it right. With old
scientific texts, like Harvey’s, it is difficult for a modern translator not to
use words carrying implications Harvey could not have dreamed of; using
translations is essential, but it does demand an act of faith.
Old words like field or energy assume in scientific contexts a
constricted but definite meaning; while some new words like catalyst
(coined from the Greek) pass out into ordinary language and mean
something much vaguer than in chemistry. Outside the laboratory, and
sometimes inside it, some indefiniteness is an advantage, allowing us
flexibility.
Law was at first seen to imply a lawgiver, and to be obeyed only by rational
beings; we do not expect dogs to keep the law. At the foundation of a city or
a state, a wise man would choose a system of laws; and this pattern was
applied to God and his creation. There are different systems available for
legislators, and in the same way God had the option of creating a number of
possible worlds. The job of the scientist was to find out which one he had
in fact made; and scientific laws were thus a matter of contingency: they
could have been otherwise. Down to the seventeenth century, ‘natural law’
meant the law which ought to be applied to strangers or between states, in
contrast to the specific codes which governed citizens in their relations with
each other.
In the seventeenth century, various laws of nature were found. Galileo’s
discovery of the law of falling bodies, Snell’s of the law of refraction,
and Boyle’s of the law of gas pressure, are well-known examples. These
are simple quantitative relationships which seemed to apply universally.
They do not involve causes; and this makes appeal to laws attractive to
those suspicious of theories. One problem is that they do not usually apply
exactly. If Galileo had dropped weights from the Tower of Pisa, he would
have found that heavier ones got to the ground slightly sooner; but he knew
that there are often interfering effects that make the course of nature less
simple than it should have been. His law represented an ideal situation, with
no air resistance. At low temperatures, Boyle’s law does not at all describe
what happens; instead of being springy, the gas becomes a liquid; and the
ideal ‘perfect gas’, which always behaves as Boyle would expect, was
invented. Laws do not always clearly fit facts.
We talk of weights or gases obeying a law, but to contemporaries of
Boyle, like John Ray the naturalist, this was impossible. Nature for him was
not ‘it’ but ‘she’, a power which carried out God’s will rather as Cardinal
Richelieu did that of Louis XIV. Like a King’s minister, nature strove to
obey the law but might fall short; and the materials she had to work with
might sometimes let her down, as happens with a monstrous birth. So we
should not expect complete consistency, in this ‘biological’ view of things.
To Boyle, active in chemistry, this idea was unattractive: explanation
for him meant a mechanical account, in terms of matter and motion (and
particularly of particles), of what was going on. For him as a good
Protestant, there was no power between us and God; so that God had laid
down laws which matter must obey. He was concerned to leave room for
God to work miracles; but by the late eighteenth century the idea had
become general that there was some necessity behind the laws of nature.
God might be the First Cause, but could or would not interfere in the
creation, which was a sphere of determinism. Rather than liberty under the
law, this was an iron rule; which has only been relaxed somewhat in our
century, with quantum physics.
The lecture is underrated in our day because it is associated too much with
communicating information, and not enough with theatre. The real task of
the lecturer is to get across excitement; details can then be got from books
and journals. The medieval lecturer read out a standard text, and that is
what the word originally meant; though this became obsolete with the
invention of printing five hundred years ago, it is still sometimes done. In
science, the lecture can be particularly effective as a part of popular
science where it is probably associated with demonstration experiments
in the tradition of Faraday at the Royal Institution; and also with
communicating current research to students. It is a pity that in
universities it is often used to cover a syllabus already in textbooks; and
that at international conferences many speakers read out pre-circulated
papers rather than giving a brief lecture to get discussion going.
L. Stewart (1986) ‘Public lectures and private patronage in Newtonian England’, Isis, 77, 47–58.
Light has many striking, indeed illuminating, features which mean that it is
used in metaphorical as much as in literal senses; we speak of casting light
on some problem, and of seeing the light. In medieval England, Robert
Grosseteste and Roger Bacon at Oxford were associated with ‘light
metaphysics’ in which light was an emanation from God; this led them
towards science. But even in Antiquity, there was also a science of light:
Euclid for example had worked out the law of reflection, and one of his
definitions of a straight line is that it is the path of a ray of light.
With the seventeenth century came the study of refraction, culminating
in Newton’s Opticks (1704) and his conclusion that white light is composed
of rays of all the different colours which form the spectrum. Also in the
seventeenth century had come the first measurement of the velocity of light.
Galileo had tried having two men with lanterns some way apart: the first
exposed his light and the second exposed his as soon as he saw it; the first
man saw how long it took for the light to get there and back. All that this
showed was that light went extremely fast; and Descartes assumed that its
velocity was infinite, though later in his Dioptrique (1637) he said that it
went faster in water than in air. At the Paris Observatory, Roemer noticed
that the satellites of the planet Jupiter sometimes emerged from behind it
early and sometimes late, and that they were late when we were furthest
from Jupiter and early when we were nearest. He concluded that light took
a quarter of an hour or so to cross the Earth’s orbit.
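The order of magnitude, as a modern gloss: the diameter of the Earth's orbit is about \(3 \times 10^{11}\) m, so a crossing time of roughly a quarter of an hour (call it \(10^3\) seconds) gives

\[ c \approx \frac{3\times 10^{11}\ \text{m}}{10^{3}\ \text{s}} = 3\times 10^{8}\ \text{m s}^{-1}, \]

close to the modern value, though Roemer himself published a delay rather than a speed.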
Newton supposed that light was a stream of particles, because a wave
would not go in a straight line; but he recognised wave-like properties in
some phenomena, like diffraction. In the early nineteenth century, Thomas
Young in England and then Fresnel and Arago more effectively in France,
urged that light was a wave motion and using a model of transverse waves,
like those of the sea, accounted for all that was known about light, and
made surprising predictions which were verified. By the end of the
century, light-waves and the ether which carried them were universally
accepted. Maxwell had shown that light was a form of electromagnetic
radiation, a form of energy rather than a thing.
But Einstein looking for an explanation of the photoelectric effect found
that he had to invoke particles of light, photons, in accordance with quantum
theory; blue photons carrying more energy than red ones. And since 1905 anybody concerned with light has had to recognise wave-particle dualism:
neither model will on its own account for all that is now known. Whether
we shall ever find a new theory which will return us to the happy
confidence of the Victorians remains to be seen; the future of light is
shrouded in darkness.
G. N. Cantor (1983) Optics After Newton, Manchester University Press, Manchester.
The liquid state is one in which a substance is mobile, and will fill the
bottom of its container. Liquids generally can be cooled so as to become
solid; and on the application of heat they become gases. In chemistry,
this state is very important because most ordinary reactions involve liquids.
Those which will not mix, like oil and water, constitute distinct phases in a
system.
Magnetism is a word that comes from the city of Magnesia on the Meander
river in Asia Minor; where magnetic rocks were found. There were stories
in Antiquity of rocks which could pull all the nails out of boats; but the
discovery that a needle can be magnetised, and will then set itself north and
south, was made in China. With printing and gunpowder, which also have
Chinese ancestries, it was one of the discoveries picked upon in the early
seventeenth century by Francis Bacon as distinguishing modern times. It
made possible the voyages of discovery of the Renaissance, which Bacon
took as a symbol of scientific discovery which would follow the use of his
method.
The first major book on magnetism was William Gilbert’s De Magnete of
1600, which described numerous experiments, especially with a ‘terella’ or
little model of the Earth made from lodestone, magnetic iron ore. He found
that with ‘armature’ (steel shields fixed on their ends) cigar-shaped
lodestones were stronger. Magnetism was profitably investigated as a part
of primitive geophysics; but the phenomena eluded explanation. For
Descartes, magnets emitted streams of particles which brought back
pieces of iron; for Gilbert the magnet had a soul.
Oersted in 1820 proved that magnetism and electricity are interrelated, showing that an electric current affects a compass-needle; and both forces had been shown to obey an inverse-square law,
like gravity. Faraday argued from 1845 that magnetism was best
understood as a field filled with lines of force surrounding the poles. He
believed that all the forces of nature were correlated: and from the late
1840s the principle of conservation of energy brought magnetism into
the new science of physics. Magnetic behaviour of fundamental particles
is very important in quantum physics.
In Faraday’s time, terrestrial magnetism was also a subject of intense
investigation and even international competition; associated especially with
Alexander von Humboldt, and with James Clark Ross who got to the North
Magnetic Pole, and very near the South one.
Mass measures the resistance to force which a body puts up. In the physics
of Descartes in the early seventeenth century, matter was defined by its
extension; it was what took up space. Although Descartes had first clearly
stated the law of inertia, that a body continues in a state of rest or
uniform motion unless a force acts on it, he did not use it to characterise
matter. His ‘subtle matter’, a kind of ether, might have no mass; and in the
eighteenth century weightless fluids were invoked to account for
electricity, magnetism, heat and light.
The inertial principle was stated by Newton in his Principia of 1687 as
his first law of motion; and for him, as for the upholders of atoms in
Antiquity, mass rather than extension was the essential feature of matter.
Mass is different from weight, which measures the effect of gravity upon a
body. This will vary, slightly in different places on the Earth and greatly in a
spaceship or on the Moon, as Newton recognised. Mass on the other hand is
constant in Newtonian physics, and conservation of mass was the
cornerstone of Lavoisier’s chemistry: but in Einstein’s relativity it will
vary with the velocity of the body, and increase as this approaches the
velocity of light. Some fundamental particles may have no mass if at rest,
and owe it all to their motion; energy and mass being connected according to the famous equation E = mc², where E is the energy, m the mass and c the velocity of light.
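A worked illustration, not in the original: with c about \(3 \times 10^8\) m/s,

\[ E = mc^2 \approx 1\ \text{kg} \times \left(3\times 10^{8}\ \text{m s}^{-1}\right)^2 = 9\times 10^{16}\ \text{J}, \]

so the complete conversion of a single kilogram of matter would yield some \(9 \times 10^{16}\) joules; even the minute mass changes of nuclear reactions therefore release enormous energies.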
The materialism which alarms moralists nowadays is belief that the more
things we own the happier we shall be; a modern and less joyful version of
the Epicurean ‘eat, drink and be merry for tomorrow we die’. Philosophical
materialism may entail this behaviour, but it means the belief that only
matter exists: there are no souls or spirits. Joseph Priestley, the
eighteenth-century chemist, believed that materialism was a part of
Christianity, properly understood; that the resurrection of the body rather
than the immortality of the soul was the correct doctrine, and that matter
sufficiently organised could think. Most contemporaries were horrified, and
connected this belief with Priestley’s support of the French Revolution and
other radical causes; he left Britain in the 1790s to take refuge in the USA,
but even there he found few supporters but Jefferson.
The simplicity of the doctrine had attracted many of Priestley’s French
contemporaries; and in science it seemed that explanation in terms of
mechanisms was needed rather than any dependence on immaterial spirits.
John Hunter in Britain and Blumenbach in Germany found some kind of
vital spirit necessary in physiology; but the French and German
physiologists of the nineteenth century believed they could do without it.
The kidneys secrete urine, went one dictum, and the brain secretes thought.
They interpreted Wohler’s synthesis of the organic compound urea as a
disproof: Wöhler had been interested in the experiment as an example of
the rearrangement of atoms, or isomerism.
In chemistry, Davy and others in the early nineteenth century came to
believe that it was force and not matter which was crucial. An electrical
charge would alter the chemical properties of metals; and chemical
affinity seemed electrical. Matter itself was inert and brutish, as Newton
had thought. This emphasis upon what came to be called energy
transformed ideas of the universe; which could no longer be seen simply as
an enormous clock. But it did not mean that chemistry disproved
materialism: as structures were better understood, so matter assumed a
place as important as energy in chemistry; while energy ceased to seem
anything very like spirit.
In our century, matter and energy in Einstein’s physics became
interconvertible; and while quantum theory seemed to be incompatible
with determinism, matter became something very different from a heap of
inert billiard-balls. Priestley’s vision of active matter is closer to ours than
Dalton’s; materialism may still be a useful world-view for the scientist,
though some would say that it is narrow and that some kind of religion is
vital for good science.
Measurement has great prestige, and seems to have been very important in
the history of science; where quantification is essential for confirmation, or
falsification, of hypotheses and theories. Important measurements go
back to remote Antiquity; Eratosthenes in Alexandria for example having
determined the circumference of the Earth with considerable accuracy in
the third century BC. In the eighteenth century, more precise determinations
of this quantity by Maupertuis in Lapland and la Condamine in Ecuador
demonstrated that the Earth was flattened at the Poles, in accordance with
Newtonian theory. J. J. Thomson saw some of his predecessors as explorers,
working qualitatively and recording whatever they saw; while mature
science involved theory and quantitative method. Certainly mathematics
and measurement are the crucial features of physics: and in chemistry too
the measurable has come to prevail, as smells and tastes have given way in
analysis to physical properties like the spectrum.
Outside the physical sciences the role of measurement is less certain,
because the measurable quantities in dealing with people may be less
important than the qualitative ones; and whether the social sciences should
seek to model themselves on physics is open to doubt. We do try to quantify
deprivation or affluence, and this can be of value to governments in
deciding upon policy; but such measurements are theory-laden to a degree
that those in chemistry are not, and the search for accuracy can in this field
land one in error: as attempts to measure intelligence have shown, where
we find fraud as well, and where it is not clear that there is any simple
quantity to be measured.
Sometimes limited accuracy in measurement has been an advantage as
when Kepler worked on the orbits of planets about 1600. His data from
Tycho Brahe’s observations was too good to permit him to fit circles to it,
so he worked on until he hit upon the ellipse; which a single planet would
indeed follow, but is not strictly the orbit of planets forming a system, because they attract each other and produce wobbles. These wobbles were only detected after Newton's work on gravity, of which they were a confirmation, as telescopic measurement came into astronomy.
Molecule in the eighteenth century meant a small particle; but from the
1820s, first with Gaudin in France, it came to be distinguished from atom
and to mean the smallest particle of a substance that exists in a free state.
By the 1860s, with Cannizzaro’s revival of Avogadro’s hypothesis that
equal volumes of all gases under the same conditions contain equal
numbers of molecules, it was accepted that molecules might be composed
of two or more atoms of the same element (though this conflicted with
views of chemical affinity), or of different elements. At the very end of the nineteenth century, the molecules of argon were shown to contain a single atom.
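Avogadro's hypothesis has a simple quantitative consequence, added here as a gloss: if equal volumes of gas at the same temperature and pressure hold equal numbers of molecules, then

\[ \frac{M_2}{M_1} = \frac{\rho_2}{\rho_1}, \]

the molecular weights M stand in the same ratio as the measured gas densities \( \rho \); weighing gases thus fixes relative molecular weights, which is how Cannizzaro brought order to the atomic weights of chemistry.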
Number was felt by Pythagoreans to be the secret of the order in the world;
and ever since, simple numerical relationships have been sought in the
belief that mathematics is the language of nature. The ancient Greeks,
Romans and Jews used letters to stand for numbers; this does not make
addition and subtraction easy, but an abacus made these operations rapid.
The Babylonians used a base of sixty in their astronomical work, and from
them we still have our sixty minutes in an hour, and in a degree; each
minute being divided into sixty seconds, that into sixty thirds, and so on. In
England we are familiar with a base of twelve, with eggs in dozens, twelve
inches to a foot, and until quite recently twelve pennies to a shilling: and a
base twenty with a ‘score’, as in three score years and ten, and in the French
system of counting with ‘quatre-vingt’. But the base ten has come to
displace its competitors in most fields, though computing has given a boost
to binary arithmetic.
Babylonians lacked a zero; and our ‘Arabic’ system of numbers came to
us in the Renaissance ultimately from the Hindus, but via the Arabs, and
was first taken up in book-keeping. It involved symbols for one to nine, and
zero; and the idea of ‘place value’. A number as large as you like can be
expressed because the value of a numeral depends on its place; whether it is
in the column of units, tens, hundreds and so on. By the eighteenth century
this was extended beyond the decimal point, so that fractions could be
expressed in the same way as whole numbers. Much modern science has
depended on quantification; which is the application of measurement and
numbers to phenomena; it is hard to imagine life without numbers.
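Place value is easily stated in code; a minimal sketch (the function name is mine):

    # Evaluate digits written most-significant-first in a given base.
    # In base ten, [4, 0, 7] means 4*100 + 0*10 + 7*1 = 407.
    def place_value(digits, base=10):
        value = 0
        for d in digits:
            value = value * base + d  # shift one place left, then add the digit
        return value

    print(place_value([4, 0, 7]))        # 407 in base ten
    print(place_value([1, 0, 1, 1], 2))  # 11: the same rule in binary

The same loop serves the Babylonian base of sixty or the binary base of computing; only the base changes, which is the whole point of place value.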
Observation is sometimes in popular science set against hypothesis; to
those who see induction as the primary method in science, it must be the
foundation of all satisfactory theory. The problem is, as Darwin put it: ‘all
observation must be for or against some view, if it is to be of any service’.
The scientist has to decide what factors are relevant; the time of day
matters in terrestrial magnetism, but not in chemistry. The most striking
observations are those which confirm, or lead to the falsification of,
some ideas. So in much science the observations come after the theory:
though sometimes careful observation of a phenomenon may lead to a great
discovery, as with X-rays when Roentgen observed that photographic plates
were fogged near cathode-ray tubes.
It is essential that observation should be honest, and this has indeed been
urged as a way science can teach values. As Charles Babbage wrote, ‘the
character of an observer, as of a woman, if suspected is destroyed’; we
might see Victorian sexism there, but fraud is always a risk in science and
honest reporting is something which scientific education strives to
inculcate. This does not mean that two observers will necessarily report just
the same things, for different features will strike them; but an agreed
scientific paradigm will indicate what is important, and experiment,
especially in the laboratory, is designed to isolate one or very few factors.
So in most cases the reports of two trained scientists will be very similar,
and in developed sciences their observations will usually be quantitative.
Outright fraud is probably very rare in science, though the crimes of
‘cooking’ and ‘trimming’ on Babbage’s list, where observations are
improved a little, will be familiar to undergraduates and no doubt happen at
more exalted levels: but error and its reduction and estimation in the cause
of accuracy is very important. No quantitative observation can be made completely accurate; so observations are repeated, and statistics applied if necessary to find the best value. To estimate error is very important, and
if the possible error in an experiment is about as large as the effect sought
then there is no point in doing it. Tycho Brahe, the sixteenth-century
astronomer, was probably the first systematically to estimate the error in his
instruments; the idea reached chemistry much later, and Lavoisier for
example in 1790 gave results implying (in the number of figures after the
decimal point) far greater accuracy than he could have attained.
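The modern routine can be sketched briefly; the readings below are invented for illustration, not any historical data.

    import statistics

    # Five repeated readings of the same quantity (made-up values).
    readings = [9.81, 9.79, 9.83, 9.80, 9.82]

    mean = statistics.mean(readings)   # best estimate of the value sought
    sd = statistics.stdev(readings)    # scatter of the individual readings
    sem = sd / len(readings) ** 0.5    # standard error of the mean
    print(f"result: {mean:.3f} +/- {sem:.3f}")

Quoting a result together with its estimated error, rather than with a spuriously long string of decimals, is exactly the lesson drawn above from Lavoisier's over-precise figures.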
Observation through history has thus become increasingly sophisticated
and different from ordinary perception, just as science has diverged from
organised common sense.
The orbit of a planet is the path it follows through the heavens; and Kepler
in 1609 showed that this is an ellipse. This model from astronomy was
taken up by Rutherford in his theory of the atom, which he supposed to
consist of a massive nucleus surrounded by electrons in circular orbits.
Niels Bohr modified this theory in the second decade of our century by
applying quantum theory to the orbits; and Sommerfeld allowed for
elliptical orbits of different but definite eccentricities. However, as it
became clear that the electron was not a particle rather like a billiard-ball,
but had a wave character also, the term ‘orbital’ was introduced in place of
orbit.
Oxygen is the gas in the atmosphere which we breathe to sustain life, and
which combines with things when they burn. These ideas were first clearly
expressed by Lavoisier in the 1780s, and began what is often thought of as
the revolution in chemistry. Before that, respiration was generally thought
of as an air-cooling system which prevented the heart, the source of the vital
heat, from getting overheated; and combustion was believed to involve the
emission of phlogiston. Air had since Antiquity been regarded as an
element, in the sense of a constituent of all material bodies; and even in the
newer sense, of something which cannot be further analysed, it was in the
eighteenth century still thought of as an element. Samples of it might be
good or bad; and it might even be ‘fixed’ in compounds like limestone.
Scheele in Sweden isolated a sample of what we would call oxygen in
investigating the nature of fire; he heated various substances and collected
what came off. One of his samples proved very good for respiration; and he
believed that he had made air free from phlogiston. Priestley a little later
heated mercuric oxide and got a sample of gas, which he interpreted as
Scheele had; but his paper describing the experiment was published first.
He met Lavoisier in France and told him about his work; and Lavoisier
interpreted the experiment differently. Priestley probably saw his eminently
respirable air as a distinct species only implicitly; Lavoisier did so explicitly.
In Lavoisier’s chemistry, oxygen occupied a central place. It was the only
‘supporter of combustion’, and it was also (as its name implies) the
generator of acids. After Dalton’s theory of atoms was published in the
first decade of the nineteenth century, the relative weights of atoms were
often based on that of oxygen as standard (notably by Wollaston and
Berzelius) though Dalton had preferred hydrogen, as the lightest. But at just
this period Davy showed that caustic potash and caustic soda, the alkalies,
contained oxygen; and then that the acid from sea-salt does not. This strong
acid we call hydrochloric; and in chlorine Davy found another supporter of
combustion and generator of acids, so that oxygen had to share its throne.
Oxygen is still very important in chemistry; it is no longer the standard
for atomic weights, but the addition of oxygen, oxidation, and its removal,
reduction, have been generalised into two kinds of reaction. In the mid-
nineteenth century, an allotrope of oxygen, ozone, was discovered; and the
layer of this substance in the upper atmosphere is supposed to be very
important in absorbing otherwise-harmful ultraviolet radiation. A world without
oxygen would be a world without anything like us.
Paper prepared from rags made printed books economically possible; and
the word duly came to be applied to what was written on paper. In the
sciences, the paper has become the most important vehicle for making new
knowledge public. In the seventeenth century, the learned journal was
invented in France and in Britain; this contains short contributions, either a
review of recent work, or an announcement of a discovery or a new
theory. No longer did one have to wait until one had enough material for a
book; and the paper achieved a more certain circulation than a letter could
ever do.
The early papers were very like letters intended for general reading:
somewhat impersonal, but not taking particularly qualified readers for
granted. In the early journals, the papers were read (in full or
perhaps in abstract) at the society, association or academy to which they
were addressed; and were usually read by the Secretary rather than the
author. They were then referred to a committee which decided whether to
publish them, and would probably submit them to a referee: so that they
eventually appeared with a kind of stamp of approval, though the Royal
Society always added a disclaimer to the effect that publication did not
indicate official acceptance of all the views expressed.
Various benefactors bequeathed money for lectures; and these were
then often published as papers, having a rather more rhetorical character
than those which began as letters: though all scientific papers are designed
to persuade. The Croonian or Bakerian Lecturer at the Royal Society would
in the early nineteenth century be expansive, and the published versions of
Davy’s, for example, were noticed in the Edinburgh Review and other
interdisciplinary publications.
By the early nineteenth century there were also some private journals,
where space was at more of a premium; and gradually science was
beginning to split up into sciences, as specialised communities grew.
Formal scientific education in the second half of the nineteenth century
increased the trend towards specialisation; and as common assumptions and
background could be taken for granted, so editors required and authors
wrote papers which were much more terse. The literature of science
became much less literary.
This has gone on, so that now one would not read scientific papers for
fun. Moreover, the number of papers has so increased that nobody could
keep up with them all; the proliferation of journals has meant that published
material is not necessarily public knowledge, because the right people may
not have read it. Publication has also become slower than it could be in the
last century (Roentgen's paper on X-rays, in print within days of the
discovery, holds the record for speed), so that by the time a
paper appears it is out of date. Scientists belong therefore to invisible
colleges round which preprints are sent; or in which the information
circulates electronically, so that papers need no longer actually be on paper.
Paradigm means example, but in the writings of Thomas Kuhn it has come
to have a special significance. He noted the role of dogma in scientific
education, and came to believe that science was not just open-minded
application of organised commonsense, but a system to be learned in order
to avoid error. A science is founded when somebody organises a large
number of facts with a theory or model; those working in this science fill
in the picture thus sketched, doing ‘normal science’ within the paradigm of
the founder. In time anomalies accumulate, and a revolution happens,
resulting in a new paradigm which cannot be fully translated into the
language of the old one. The revolutionaries are the great figures of the
history of science: Galileo, Newton, Lavoisier, Darwin, Marie Curie,
Einstein. They thought of a new way of looking at nature, rather than
simply discovering new phenomena. There is some ambiguity about just
what does constitute a paradigm, and the scheme fits actual episodes rather
loosely; but it is very suggestive.
Research comes to mean something a little like painting by numbers
rather than original discovery for 99 per cent of scientists; and there is
probably a good deal of truth in this caricature. The danger of Kuhn’s
perspective is that it encourages attention to a very few eminent
practitioners; its advantage is that it presents science as a fully human
activity with its social context. The scientific community accepts and passes
on the paradigm. In place of a method which inevitably leads to truth and
certainty, we get a picture of science as provisional creations of the
human mind in its quest for order. The problem seems to be that the
paradigm is less fixed and more suggestive than Kuhn’s analysis implies;
the quantum theory of the 1980s is extremely different from that of Planck,
though it has clearly developed from his insight. Such episodes as the
acceptance of Continental Drift in geology in the 1960s are illuminated
from a Kuhnian viewpoint.
I. B. Cohen (1985) Revolutions in Science, Harvard University Press, Cambridge, Mass.
T. S. Kuhn (1970) The Structure of Scientific Revolutions, 2nd ed., University of Chicago Press,
Chicago.
A particle is a small lump of matter. The term has often been used by
those who did not want to commit themselves to such theory-laden terms
as atom or molecule; rather like corpuscle. Thus Descartes believed in
particles, though he was not strictly an atomist, because his particles filled
all space and were liable to wear. Those in the nineteenth century
developing the kinetic theory of gases similarly spoke of particles because
they were uncertain whether they were dealing with atoms as defined in
chemistry. Speculations at the same period about the ether which carried
the light waves also often involved particles, though some believed it to
be a continuum: invoking particles here meant subscribing to a particular
view, rather than avoiding one.
It is in this sense that the term ‘particle’ is often used in modern science.
In the nineteenth century, the period of ‘classical physics’, it seemed
possible to do crucial experiments to decide between two possible
explanations. Newton had suggested particles of light, in particular to
account for its going in straight lines: for light waves, invoked by his
contemporary Christiaan Huygens, would be expected to go round corners.
Until the opening years of the nineteenth century, Newton’s view was
generally held; but then Thomas Young made the discovery that when light
is passed through two nearby slits in a screen, the resulting beams interfere
as waves do when two pebbles are thrown into a pond.
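
In modern textbook notation (not Young's own), the condition for the bright fringes he observed is simply that the path difference from the two slits be a whole number of wavelengths:

% standard two-slit interference condition: slit separation d,
% wavelength \lambda, bright fringe of order m at angle \theta
\[ d \sin\theta = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots \]
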
This was at first seen as a curious anomaly, insufficient to upset an
established paradigm; but then Fresnel in France proposed a detailed theory
in which light was transverse vibrations, like the waves of the sea; this was
powerful enough to account for known phenomena, to make surprising
predictions like the existence of a bright spot in the centre of the shadow
of a sphere, and even to show how light generally goes in pretty straight
lines. Thus it seemed by about 1840 that light was not a stream of particles,
but a train of waves. This was taken for granted as something proved by
scientists, and as an example of progress from the time of Newton whose
view had been exploded.
Crucial experiments also seemed possible with other kinds of rays
discovered during the nineteenth century. Infra-red and ultra-violet radiation
showed such strong analogy with light that they must also be waves; while
Faraday’s laws of electrolysis led Helmholtz and Stoney to the view that
electricity must be particulate, a little lump of it attaching itself to each
atom of matter. The cathode rays investigated by Crookes and J. J.
Thomson were proved by 1897 to be composed of charged particles, which
could be deflected by electric and magnetic fields, cast sharp shadows, and
would turn a little windmill. They were by 1900 identified with the
electrons of Helmholtz and Stoney; Thomson had described them as
‘corpuscles’, and as fundamental constituents of all matter.
In the opening years of our century, all this was cast into doubt. Einstein
explained the photoelectric effect in terms of light particles, photons: using
quantum theory, he ascribed definite quantities of energy to photons of
different colours. Thus red ones have much less energy than
violet, and will not eject electrons from potassium no matter how bright the
red light is. To account for all the properties of light, it was now necessary
to subscribe to two inconsistent theories: Newton’s particles of light had
made a comeback.
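
A back-of-envelope calculation in Python, with standard physical constants and typical textbook wavelengths (assumptions, not figures from this entry), shows why the colour and not the brightness of the light decides whether electrons are ejected.

h = 6.626e-34   # Planck's constant, J s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

for colour, wavelength in [("red", 700e-9), ("violet", 400e-9)]:
    energy = h * c / wavelength   # energy of one photon
    print(f"{colour}: {energy / eV:.2f} eV per photon")

# Potassium's work function is roughly 2.3 eV: a 1.8 eV red photon
# cannot eject an electron however intense the beam, while a 3.1 eV
# violet photon can.
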
Then Louis de Broglie proposed that matter also might have its wave
character; and indeed it was found that electrons could be diffracted just
like light. This makes possible electron microscopy, just as Einstein’s work
underlies our light-meters. Wave-particle dualism has become an
inescapable part of twentieth-century science; particle is still used as a
general term when one does not want to specify something like atom or
electron, but the hope that crucial experiments can show that something is
definitely and always a particle has had to be given up: we live in a world
of both/and rather than either/or.
Philosophy, or more strictly Natural Philosophy, was the old name for
science; and is still preserved in the titles of some university departments
and chairs, where it now tends to mean physics. Darwin on board HMS
Beagle in the 1830s was ‘Philos’ to his shipmates; the Royal Society’s
journal is called Philosophical Transactions, and the oldest surviving
private journal is the Philosophical Magazine; while the ‘literary and
philosophical societies’ of Manchester, Newcastle and other places were
devoted to science. Descriptive sciences were called ‘Natural History’, with
its three kingdoms, Animal, Vegetable and Mineral; but might well be
included under the general heading of Philosophy.
The label was not, however, accidental. The earliest philosophers of
Greece had been chiefly interested in speculative science; and in the
seventeenth century the great philosophers were concerned to establish a
scientific method which would lead to certainty. Descartes’ Discourse on
Method of 1637 was an introduction to three treatises, on geometry, optics
and meteorology, which were supposed to show the method in action.
Those whose chief interests were in the laboratory were expected
nevertheless to form a world view, and to be prepared to justify it in public.
This might, especially in Britain, involve argument about the existence and
wisdom of God. But by the 1820s there was a widespread feeling that
science should be based upon induction from facts and should exclude
speculation.
In 1833 the British Association for the Advancement of Science met in
Cambridge, and the poet S. T. Coleridge denounced the use of the terms
‘philosophy’ and ‘philosopher’ in the context of science, urging his hearers
to adopt different words. He saw philosophy in the new tradition of Kant as
including metaphysics, logic and ethics; a discipline in which the data and
the methods of the empirical sciences might be very important, but which
was distinct from them. William Whewell at the meeting suggested the
word scientist to describe all those present; and in 1840 he published his
Philosophy of the Inductive Sciences, which was the first book in English to
be explicitly devoted to what is now called the Philosophy of Science.
Whewell believed that science was a matter of imposing right ideas upon
accurate observations; he emphasised theory and deduction far more
than his contemporary, J. S. Mill; but both hoped for science which was
getting steadily nearer truth. In our century, philosophers of science have
been less confident. Whether descriptive or prescriptive (laying down rules
of method), they have emphasised the provisional character of science:
Popper’s method of falsification, Kuhn’s paradigms and Lakatos’
research programmes all involve probability rather than indubitable
knowledge. Their connection with day-to-day science is perhaps not very
close; there was a loss on both sides when science separated from
philosophy.
The name physics comes from the Greek ‘phusis’ meaning nature; and the
science of physics as we know it is really a product of the nineteenth
century. In the 1850s the idea of conservation of energy brought together
into a coherent unity a number of sciences which had previously been
separate.
The most important of these was natural philosophy. This in the
seventeenth century had meant roughly what we mean by ‘science’, and by
the eighteenth century was often abbreviated to ‘philosophy’. But in learned
use it meant those branches of science to which mathematics had been
applied, in the manner of Galileo and Newton: astronomy, mechanics and
optics. In Scottish universities, the term ‘natural philosophy’ is still
preserved in the title of what are elsewhere physics departments.
In France in the late eighteenth century, the term ‘experimental physics’
came into use as a phrase to cover work in fields such as heat,
electricity and magnetism where there was not yet a fully mathematical
structure of theory. In the Paris Academy of Sciences posts were created for
experimental physicists; and in England Thomas Young used the term
‘physics’ for various experimental sciences in a famous series of lectures in
London at the beginning of the nineteenth century. Some of his ‘physics’ is
a part of ours, and some is not. In Britain, electricity was generally seen at
this time as a branch of chemistry (Faraday’s contemporaries saw him as a
chemist), while magnetism was associated with navigation or with geology.
France at the beginning of the nineteenth century was the leading
scientific nation; and was particularly strong in applied mathematics. In
1812 S. D. Poisson was elected to a physics chair at the First Class of the
Institut, as the Academy of Sciences was then described: he was a
mathematician, who had done important theoretical work in electricity but
was no experimentalist. His election has been taken to mark the realisation
that physics involves both experiment and mathematical theory, and thus
the transition to our modern understanding of the term.
Nevertheless ‘physics’ continued to mean generally those sciences
concerned with what some still saw as ‘imponderable fluids’ responsible
for heat, electricity and magnetism. The more modern view, gaining ground
as the century wore on, was that these were all manifestations of some one
underlying force; and that the workings of nature were to be understood in
terms of opposed or polar forces, like the north and south poles of magnets.
Oersted’s detection in 1820 of the magnetic field produced by an electric
current encouraged such beliefs; but in the 1820s and 1830s it did
not seem possible to quantify such things as electricity or heat so as to
know just how much of one produced a certain quantity of the other.
James Joule in a series of experiments in the 1840s warmed some water
in an insulated container by churning it with a clockwork system driven by
a falling weight. He could measure the mechanical work done by the weight
as it fell through a given distance, and also the heat generated in the water;
and thus determine the ‘mechanical equivalent of heat’. He and others saw
that not merely was a definite amount of work converted into a definite
quantity of heat, but that this was a manifestation of a more general process.
This was most clearly perceived by Helmholtz, one of the great polymaths
of the century, who stated the principle in a paper published in Berlin in
1847 and then in more popular manner in a public lecture at Konigsberg. He
saw that if heat, light, mechanical work, electricity and so on were all
interconvertible, then they should be expressible in the same dimensions of
mass, length and time.
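
A rough numerical sketch of Joule's reasoning, in Python: the masses, heights and temperatures below are invented for illustration, and only the method of comparing mechanical work with heat produced comes from the text.

g = 9.81           # acceleration of gravity, m/s^2
weight_mass = 10.0 # kg, the falling weight (assumed)
drop = 2.0         # m, distance fallen (assumed)
water_mass = 0.5   # kg of water in the insulated container (assumed)
c_water = 4186.0   # specific heat of water, J/(kg K)

work = weight_mass * g * drop              # mechanical work done, joules
temp_rise = work / (water_mass * c_water)  # resulting warming of the water
print(f"work done: {work:.1f} J; water warms by {temp_rise:.3f} K")

# Work and heat reduce to the same dimensions, M L^2 T^-2, which is
# Helmholtz's point about a common measure for all forms of energy.
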
In place of ‘force’ the term ‘energy’, precisely defined, came into use;
and the science of physics became the science of energy and its
transformations. It thus became the fundamental science; whereas at the
beginning of the century chemistry had seemed the science of forces, and
natural philosophy concerned with the less interesting sphere of billiard-
balls. The task of the physicist was to carry on the work of Joule and
determine the exchange rates between the various forms of energy; and to
some it seemed as though this would complete the science of physics.
This was not how things worked out. The study of the spectrum of gases
led to the idea of the atom being complex, which was confirmed in 1897
when J. J. Thomson identified the cathode rays as a stream of sub-atomic
particles, carrying a negative charge. At the same time Max Planck
explained the radiation from black bodies as coming in packets, or quanta,
of definite size rather than continuously. In 1905 Einstein indicated that the
photoelectric effect, a conversion of light to electricity, could only be
understood in terms of quanta of light interacting with a metal; this seemed
like a reversion to the old theory of light particles forming an imponderable
fluid. He also argued that space and time must be seen as relative rather
than absolute as in Newtonian mechanics.
The so-called ‘classical physics’ came to an end therefore just about the
turn of our century, but not as the completing of an enterprise. Physicists
have been concerned with sub-atomic particles, with wave-particle dualism,
and with cosmological questions ever since; and in the meantime their
science has become much more an affair of teams and groups. Joule worked
on his own, and did not make his living from science; Helmholtz was a
professor in Germany, with excellent university laboratories but still doing a
relatively cheap science, and much the same can be said of Rutherford at
Cambridge in the 1920s and 1930s; whereas since then fundamental
research in physics has been very expensive, and its links with high
technology very close.
P. M. Harman (1978) Energy, Force and Matter, Cambridge University Press, Cambridge.
R. W. Home (1983) ‘Poisson’s memoirs on electricity’, BJHS, 16, 239–59.
R. McCormmach (1982) Night Thoughts of a Classical Physicist, Harvard University Press, Cambridge, Mass.
Power is capacity for action, for doing work: and it has come to be used in
the sciences in an increasingly quantified and definite sense. In the
seventeenth century it was what lay behind the rather loosely defined term,
force: matter was seen by Locke and Newton as having certain powers
associated with it, or given it, which would appear as forces of attraction or
repulsion. Chemistry came to be seen as the study of the powers which
modify matter; though at the end of the eighteenth century Lavoisier argued
that the chemist should mainly concern himself with weights.
The word is also used in mathematics, where three to the power two is
nine; but in the early nineteenth century, it came increasingly to be
associated with technology. James Watt introduced the idea of horsepower,
a rate of working of 550 foot-pounds per second, as a measure of what a
steam-engine or other prime mover could do. In the eighteenth century
about twelve horsepower could be expected from a windmill, water wheel,
or primitive steam-engine: but in the nineteenth century this figure soon
looked very small as steam-engines were made very large and much more
efficient; a process culminating in the turbines of Parsons in the later part of
the century. Faraday in his work on electricity and magnetism made
primitive motors, but not until after his death in 1867 did electric power
gradually become increasingly available; and in the last years of that
century, the internal-combustion engine provided a further source of power.
Between them, these two have displaced the steam engine, which now
chiefly appears as a turbine generating electricity. Modern civilisation
depends upon sources of power which are intimately connected with
science; which has also brought power to governments. Bacon was right
that knowledge is power, in more than one sense.
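
Watt's definition makes the conversion to the modern unit of power, the watt, a one-line calculation; the Python sketch below uses standard conversion factors.

FOOT = 0.3048          # metres per foot
POUND_FORCE = 4.44822  # newtons per pound-force

horsepower_in_watts = 550 * FOOT * POUND_FORCE  # joules per second
print(f"1 hp = {horsepower_in_watts:.1f} W")    # about 745.7 W

# The twelve horsepower of an eighteenth-century windmill is thus
# roughly 9 kW, small beside a late-Victorian Parsons turbine.
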
Prediction is for some philosophers the main aim of science; which might
seem to make weather-forecasting the paradigm among the sciences. The
point is that from past instances, given the support of theory, one can
extrapolate into the future. In saying that hydrated copper sulphate
crystals are blue, the scientist makes a timeless remark – that they
always have been and always will be blue – and hence predicts that any
you happen to meet will be blue. Predictions that bridges will not fall down
or that mixtures of gases will not explode or that there is no risk in some
minor surgery all arouse attention and excitement when they turn out
wrong: these are cases with more factors than the copper sulphate example,
and momentous predictions like these are clearly more interesting. They
also show that there is art as well as science in the affairs of life;
probability is the guide to life, and judgement is essential.
In the sciences, successful prediction is the most spectacular test of
theory. The famous cases, like the prediction of Neptune from wobbles in
the orbit of Uranus made independently by Adams and Leverrier, of
Gallium and Germanium by Mendeleev, and of the positron by Dirac, were
the results of calculation from gravitation theory, the Periodic Table and
quantum theory; and they are taken to mean that the theory is close to truth.
In fact, erroneous or inadequate theories have been used to make successful
predictions. Sadi Carnot in his work on the second law of thermodynamics
predicted correctly that high-pressure steam-engines would have higher
efficiency; but his theory was that heat was a fluid, not converted to
work. Fresnel’s prediction that there would be a white spot in the centre of
the shadow of a sphere was based on his wave theory of light, and was
correct; but since Einstein’s work of 1905 the wave theory is recognised as
covering only some of the phenomena of light. Predictions are mundane or
spectacular, but they cannot confer certainty.
Proof was what the ancient Greeks demanded in geometry. Getting the
right answer was not enough; demonstration was required that it could not
have been otherwise. Ancient geometers might, like Archimedes in his
method, get their theorems by some more intuitive or practical method; but
it only became a part of mathematics when formally proved. This meant
that it could be deduced from the axioms: in this ideal deductive system the
conclusions are in a sense tautologies, implicit in the assumptions made at
the start.
Until in the nineteenth century there was a range of geometries,
Euclidean and non-Euclidean, geometry had seemed necessarily true. But
now the proofs only hold within their particular system. In ordinary life,
proof means convincing a jury beyond reasonable doubt; and in empirical
science it is not so different. Outside mathematics and logic, there is no
formal proof; but we have enough confidence in the laws of nature to think
it captious to doubt that unsupported bodies near the Earth’s surface might
not fall. Science may not be a matter of complete certainty, and truth
unknowable; but even without geometrical proof, we can be sure enough.
But it is good for us to be reminded of the provisional character of all our
knowledge.
Proof is also, in its older sense of test (the proof of the pudding is in the
eating) used in printing; where proofs of a paper are sent to the author for
correction.
Property is used in chemistry in the old sense of a quality essential to
something. Thus the properties of gold are that it is yellow, dense, malleable
and so on. Sometimes one particular property may become diagnostic, as
with acids and litmus paper; or may become almost a definition, like an
atomic number in twentieth-century Periodic Tables of the elements. But
generally in the sciences, as in any attempt to get a natural system of
classification in dealing with natural kinds, we rely on a cluster of
properties rather than a single one in characterising things.
A publisher is one who brings out books and journals; and in the
eighteenth century this began to be separated from printing and bookselling.
Some academies and associations are their own publishers; while others
have close formal links with publishing-houses which may be state-run or
commercial, and which bring out recommended works. Some scientific
societies simply exist to publish a journal or a series of monographs, like
the Ray Society in natural history. But much of the literature of science
has been published by commercial publishers: for example in Britain,
Taylor & Francis have published the Philosophical Magazine since 1798;
Smith & Elder published Voyages; Macmillan began Nature over a hundred
years ago and published textbooks associated with it; while Van Voorst in
the nineteenth century published many important illustrated works on
natural history.
W. H. Brock and A. J. Meadows (1984) The Lamp of Learning, Taylor & Francis, London.
E. Eisenstein (1979) The Printing Press as an Agent of Change, Cambridge University Press,
Cambridge.
J. Glynn (1987) The Prince of Publishers, Allison & Busby, London.
Rays are the lines followed by light or other forms of radiant energy. In
working out what happens in reflection or refraction, we make a ray
diagram, following the path through a lens, prism or raindrop; and we can
thus compute what will happen in optical apparatus. In the opening years
of the nineteenth century, William Herschel found that there were heating
rays beyond the red end of the visible spectrum; and Ritter and Wollaston
found chemically active rays beyond the violet. Light rays were therefore
just part of a family, some of which were invisible but detectable.
Faraday, working in the 1830s and 1840s on electricity and
magnetism, came to the conclusion that the fields associated with these
agents were filled with lines of force; and in a lecture, ‘Thoughts on ray-
vibrations’, he suggested that such lines filled all space. He believed that
magnetic fields must act upon light, and found that they did; and suggested
that light might be vibrations in the lines of force. This idea was given
mathematical form by Clerk Maxwell, and led to the detection by Hertz of
the first radio waves.
Faraday also investigated the passage of electricity through gases, and
saw how at low pressures a dark space forms around the cathode. His
experiments were continued by William Crookes, who found that at lower
pressures ‘cathode rays’ streamed from the negative electrode in straight
lines, casting shadows. By this time light was firmly believed to be a wave
motion; but in 1897 J. J. Thomson showed that the cathode rays were
composed of negatively-charged particles, which he called corpuscles
and later electrons. Rays were also emitted in radioactivity, in three
forms: alpha, which were positive particles (helium nuclei); beta, which
were electrons; and gamma, which were like X-rays, analogous to light.
Rays might therefore be waves or particles; but in the early twentieth
century light was shown to have a particle aspect, and then electrons to
have a wave character, so that rays partake of both.
Reflection occurs when a ray of light falls onto a suitable surface; and
reflection upon this phenomenon in Antiquity led by Euclid’s time to the law
that the angle of incidence is equal to the angle of reflection. In the
seventeenth century reflecting telescopes with large curved mirrors came
into use, Newton being the inventor of one kind, because reflection does
not, like refraction, split white light up into its constituent colours.
A review may mean two things; either a kind of journal devoted to essays
describing progress in some field, or an account of a book, a paper, or a
meeting in a periodical. The Journal des Scavans, published in association
with the French Academy of Sciences, was the first scientific journal, and it
was essentially a review in the first sense. It filled a need which is still
there. The output of research papers is now enormous, and the reader
cannot plough through them all, but must rely upon abstracts and the
guidance of reviews. In nineteenth-century Britain there were a number of
competing reviews (the Edinburgh, the Quarterly, the Westminster, the
North British and more) which tried to cover science and the humanities,
and are valuable sources for the reception of scientific theories among the
general intellectual public. These were anonymous, but the names of the
authors were usually a poorly-kept secret; the power of the editors was
very great.
The Royal Society’s Philosophical Transactions was a research journal,
publishing signed articles, but including book reviews. These are still an
important feature of scientific periodicals; and by the end of the nineteenth
century the practice which originated in France, of signing such reviews,
had become the norm. Such reviews can, in the hands of a master, tell us
not only about the book but also about the state of a discipline.
Scepticism has always been an important part of science: the Royal
Society’s motto, ‘Nullius in Verba’, take nobody’s word for anything, would
be shared by other academies and associations. In philosophy,
scepticism is an old tradition; its extreme is the solipsist, who cannot
believe that anything exists outside himself. If we doubt the existence of an
external world or of any kind of order then we would not be able to do
science at all; so the scepticism of scientists is a rather different thing.
Outsiders grumble that scientists are unwilling to investigate the places
where flying saucers have been reported, or the bending of spoons by will-
power; they just get on with their boring old analyses or something. And
yet we are also prepared to laugh at the early Royal Society who did the
experiment of making a circle of unicorn’s horn and seeing whether a
spider put in the middle of it could escape; and for publishing a paper by an
early President claiming that geese hatched out of barnacles. It is hard to get
the balance right; and scientists, despite all the surprising discoveries that
have been made, seem to have become increasingly sceptical about things
not found by qualified observers, preferably in a laboratory.
What is believed depends on the current paradigm, or the expectations
based on successful theory and generalisation. Thus it was hard about 1800
to accept that duck-billed platypuses were real and not made (like
mermaids) by sailors practising taxidermy on long voyages. It was then
harder to accept that the creature laid eggs; because in every way it seemed
to cut across the classification painfully constructed. It is curious that
radioactivity, with the suggestion that atoms of elements are decaying,
was much more easily accepted; this tells something about atomic theory in
the 1890s.
Within science we can find people being pig-headed, as when in the
1840s observatories were too busy to look for the planet Neptune
predicted by Adams; or credulous as they were about ‘polywater’ in the
1960s. Scepticism may seem a matter of keeping clear of fraud, magic,
miracles and spirits; but it is more complicated than that, and the Royal
Society’s motto is easier said than followed, especially in these days when
we have to take advice from experts.
F. Franks (1981) Polywater, MIT Press, Cambridge, Mass.
A school usually means an educational institution, either a collection of
departments within a university, a component part of a Faculty; or else
and most often, a place where children are taught. The sciences did not
have much of a place in schools in this sense until into the nineteenth
century; at the higher levels the classics generally formed the major part of
the curriculum, and a ‘modern’ side where modern languages, modern
history and science were taught was uncommon. In the eighteenth century,
Dissenting Academies open to Nonconformists in England had a more
modern syllabus; and this formed a model for later schools. In France,
competitive examination for the Ecole Polytechnique meant that some
science had to be taught; and at Cambridge mathematics was the most
important degree-subject, for which again schools would prepare their
pupils. But only in the second half of the nineteenth century, when
universities began to offer degrees in the sciences, did schools begin to
adopt textbooks and their pupils begin to find themselves entered for
examinations.
In connection with the sciences, a school means also a group of people
associated together in their research. Usually a school will have its great
man, the rest being disciples of this father: in our century Rutherford’s
school at Cambridge in the 1920s and 1930s might be an example.
Sometimes as here, the school forms around a scientist of unusual ability;
but especially in the past, when science was fairly cheap, the genius might
work on his own like Faraday, and lesser mortals with more social gifts and
ambitions might form a school. There is too much talk about ‘centres of
excellence’ today, but what this cliché may draw attention to is the
existence of a school, where some set of problems is being investigated,
perhaps using some particular technique invented by the Father, and where
those who go can join in and contribute to the success of the enterprise. The
school is thus working within the paradigm of its founder; and it will in due
course be overtaken by events and innovation elsewhere unless the next
generation is unusually flexible. Scientists like to see themselves as the
‘sons’ and ‘grandsons’ of the eminent, and as members of some respected
school; and as they move to jobs elsewhere, their connections are often kept
up by correspondence, a valuable source for the historian.
S. T. Keith and P. K. Koch (1986) ‘Formation of a research school: Theoretical solid-state physics at
Bristol, 1930–54’, BJHS, 19, 19–44.
L. J. Klosterman (1985) ‘A research school of chemistry in the nineteenth century: J. B. Dumas and
his students’, Annals of Science, 42, 1–40.
J. A. Secord (1986) ‘The geological survey of Great Britain as a research school, 1839–1855’,
History of Science, 24, 223–75.
Sciences are distinct bodies of knowledge which have shifting and uneasy
frontiers between them, and which together constitute science. Thus
electricity was in the early nineteenth century generally seen as a part of
chemistry; but the new idea of conservation of energy brought it into
physics. The sciences are often thought of as a hierarchy, having different
status, with the hope perhaps of the ultimate reduction of them all to one:
in our day, this one would probably be physics, but about 1800 it would
have been chemistry, and at other times biology. To a reductionist, there is a
real if distant hope of unifying all science: and thus for example explaining
human behaviour in terms of electrons.
Eminent practitioners of philosophy of science such as Hegel and
William Whewell in the nineteenth century had a hierarchy of sciences
without reductionism: they saw each science as having its appropriate
fundamental idea, and thus as being complementary. Any attempt to reduce
one to another would distort the map of knowledge. The problem is that
sciences do change their paradigm over time, and that developments seem
to happen on the boundaries of sciences which were put far apart in these
schemes. The history of sciences is one of change and development, not of
continuous accumulation of data to complete a pattern already foreseen.
So reduction may happen, and indeed in modern theories of chemical
affinity, or valency, we do find some of the science reduced to physics.
But chemists are unhappy at the thought that calculations might replace
experiment; and in sciences textbooks, academies and institutions, and
journals all help to reinforce specialism and keep up the frontiers. By the
nineteenth century the old Natural Philosopher was becoming the
scientist by profession, specialising necessarily in one science because
life was too short to keep up in several. The process has gone on, so that a
scientist is not merely a chemist, but an organic or a physical chemist; and
within this great subdivision, further specialisation has become essential.
The outlooks and interests, and the methods of research, among
practitioners of different sciences are very different; and the very question
of whether Psychology or Economics for example are sciences is open.
There have been times when chemists and physicists, and physicists and
geologists, have ignored one another: Rutherford thought that chemist was
another word for damn fool, while Victorian physicists got the age of the
Earth hopelessly wrong. While specialism is unavoidably with us,
collaboration in teams seems to be a way of crossing the daunting but
artificial frontiers between sciences. We can have a shot at saying which is
the leading science at a particular time and place, but whether others will be
reduced to it remains to be seen.
Scientism is the belief that science is the only road to truth, and that its
methods must be employed in all other branches of thought and enquiry.
This idea was popular in the late nineteenth century and the first half of this
one, when scientists seemed to have great moral authority, and the
sciences seemed the essential feature of education; it often had strong
links with materialism, and was attractive in philosophy escaping from
religion. In our time, it is less plausible that science can supply us with all
the values we need.
M. Midgley (1985) Evolution as a Religion, Methuen, London.
A scientist’s life is devoted to science. There was no word until the 1830s
to describe such a person, and this was partly because there were so few of
them. The scientific community of the seventeenth and eighteenth centuries
was made up of doctors, lawyers, clergy, officers of the army and the navy,
and landed gentlemen; for all of whom science could not be very much
more than a hobby, pursued in their leisure time. They were amateurs, in the
old sense of those who loved the activity, because hardly anybody could
follow science as a profession. This does not mean that their work in
science was not of a high standard; but their education, and their social
relations, were those of their class and did not mark them out. There was no
particular group-feeling which separated men of science from others in ‘two
cultures’.
Even as late as the opening years of the nineteenth century, those
engaged in science often covered a very wide field. Thus Thomas Young is
remembered as a pioneer of the wave theory of light, but he also took the
first steps in the decipherment of the Rosetta Stone, wrote on chemical
affinity, first described astigmatism in the human eye, and in a famous
course of lectures on natural philosophy used ‘work’ and ‘energy’ in
something like their modern scientific sense. Specialism was beginning to
come, but there were many who were unhappy about it, and who enjoyed
working across the boundaries between disciplines some of which might
now be excluded from the circle of the sciences.
Academies did sometimes provide for and encourage specialisation, by
having a set number of places for practitioners of different sciences; and by
separating into distinct groups the antiquarians and the experimentalists;
and there were separate journals for research papers in science and in
literary studies, though some famous reviews (like the Edinburgh, which
attacked both Young and Wordsworth) covered both. Scientific clubs, like
the Royal Society, were open to all with an interest in science, and only a
minority of Fellows in Regency Britain were involved in research.
In 1831 the British Association for the Advancement of Science was
founded, and it was at the third annual meeting of this institution that the
word ‘scientist’ was coined by William Whewell: himself a cleric and moral
philosopher as well as an expert on mineralogy and tides. It was formed by
analogy with ‘artist’; but many scientists of the day, like Darwin and
Faraday, considered themselves as doing philosophy – that is, developing
and defending a world-view, rather than simply finding a few new facts or
inventing an engine. By the 1830s it was just becoming possible to live by
science, so that one’s research brought bread and butter: science could
begin to be a profession, rather than an activity of those practising another
profession, like medicine or law.
But there was no uniformity about the backgrounds of the first generation
of scientists. In France from the Revolution of 1789 there had been the
Ecole Polytechnique, devoted to science and engineering but becoming ever
more militarised, and other Hautes Ecoles; and there were university
courses in sciences, though the professors were poorly paid and resorted to
‘cumul’, holding a number of posts at once, to make ends meet. In Germany
after the Battle of Waterloo in 1815 the universities became very important,
and the ideals of Bildung, or character-formation, and Wissenschaft, or
knowledge for its own sake, were much stressed. By the 1830s Liebig had
built up in the little University of Giessen a research school in organic
chemistry, giving laboratory training to students for a PhD degree.
Gradually in France and especially in Germany there emerged a corps of
trained scientists, sharing a paradigm, and increasingly distinct from those
with arts degrees, though by no means philistines.
In the Anglo-Saxon world this came later. Faraday had been apprenticed
to a bookbinder; Darwin dropped out of medicine, and read an ordinary
degree at Cambridge to prepare himself for life as a clergyman; Huxley was
trained on the job as a surgeon, at Charing Cross Hospital. It was only in the
second half of the century that a career pattern for a scientist became
available, with the great expansion in education at all levels after 1870.
Then the would-be scientist hoped to go to a university, and study a science;
and then perhaps to continue with academic research – British universities
started offering PhD degrees in the twentieth century, so before that one
went to Germany or didn’t bother with graduate qualifications – or else to
go into teaching, or set up as a consultant. There was, right through the
nineteenth century, a good living to be made for a few scientists, who had
often achieved fame through research or public lectures, by consultancy,
generally involving chemical analyses, or electricity.
By the end of the nineteenth century, science had become divided into an
increasing number of sciences, and the frontiers between them had become
fixed in academic institutions. Chemists looked askance at physicists, for
example; and one’s first loyalty was to one’s discipline rather than to
science. Nevertheless, there was and is much that the different sciences
have in common; and a kind of scientism, the belief that science can solve
all the problems of mankind, was taught to students of them all. The
greatest changes in the twentieth century were first, that women were able
to become scientists; there had been some before, like Caroline Herschel,
Jane Marcet and Mary Somerville in nineteenth-century Britain, but they
had not been able to play a full part in things; and second, that science was
taken up in countries such as Japan and India, so that scientists ceased to be
exclusively European males.
Spirit has been used in opposition to matter, and its existence is denied in
materialism. In the seventeenth century, brute matter was contrasted with
spirit in the revival of the theory of corpuscles or atoms. In the mid-
nineteenth century, Spiritualism came from America to Europe, and a
number of eminent scientists investigated the curious facts for which
the spirits of the dead seemed responsible. In the absence of any satisfactory
theory, psychical research gradually became less interesting; its
phenomena were also unpredictable and hard to reproduce, unlike for
example those of chemistry which happen any day and anywhere. In
chemistry, when distillation was invented in medieval Sicily it seemed
that the spirit was being extracted from the wine, leaving the body behind;
which is why we still refer to brandy and whisky as ‘spirits’.
J. Oppenheim (1985) The Other World, Cambridge University Press, Cambridge.
Standards in science may mean values, but are generally connected with
units or with purity. The Bureau of Standards in the USA is a body which
publishes and maintains standards; and similarly there are British Standards
for all kinds of materials. Measurement was very difficult until there were
standard units, first within a particular country and then internationally.
Thus in the eighteenth century, there were many rather different standard
inches and pints; and the US and Imperial Gallons still differ, because in the
USA the smaller ‘wine pint’ of 16 oz became the standard unit, rather than
the 20 oz pint used in Britain for other fluids and now standard.
Older standards were ‘artificial’, depending upon particular standard
objects; first perhaps the dimensions of a King, and then on measures kept
somewhere central. When in the nineteenth century, the Houses of
Parliament in London were burnt down, the standard Imperial measures
were destroyed with them; and had to be reconstructed from copies which
had luckily been deposited in other cities.
Scientists hoped for ‘natural’ standards; thus the inch might be defined
in terms of the length of a pendulum which at Greenwich beat seconds. In
France, the new ‘metric system’ set up after the Revolution of 1789 and
ratified by an international conference (only attended by
representatives of governments allied to France) was natural; the metre
being one ten-millionth of the quadrant of the Earth’s meridian, from pole
to equator. But for practical
purposes a standard bar of platinum was needed; and more accurate
measurements since have shown that the bar does not exactly conform to
the definition. The bar is preferred, so the natural basis is lost: we cannot
manage to conduct affairs with provisional standards which will have to be
revised each generation.
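
Both ‘natural’ standards mentioned here can be computed from first principles; the Python sketch below uses the standard simple-pendulum formula and an assumed Greenwich-like value of gravity.

import math

g = 9.812  # local acceleration of gravity, m/s^2 (assumed)

# A pendulum 'beating seconds' swings one way each second: period T = 2 s.
T = 2.0
L = g * (T / (2 * math.pi)) ** 2   # from T = 2*pi*sqrt(L/g)
print(f"seconds pendulum: {L:.4f} m, or {L / 0.0254:.2f} inches")

# The revolutionary metre was one ten-millionth of the pole-to-equator
# quadrant, so the quadrant was meant to measure exactly 10,000 km.
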
From the 1850s the metric system was enforced in France, and from the
1870s it came into general use for scientific purposes in Britain and the
USA; only the historian of science still has to wrestle with ‘grains’ and
‘lines’, and we may rejoice that we no longer have to buy coal in
‘chaldrons’ which meant something different in Newcastle and in London.
Telescopes were probably invented in Holland about 1600, a long time after
lenses had come into use for reading glasses. Galileo heard about them, and
may even have seen one; and then made his own. He, and Thomas Harriot
in England at the same time, pointed telescopes at the heavens; and
Galileo’s Starry Messenger of 1610 was the first book to describe what one
saw with this new instrument. He identified four moons of Jupiter, many
stars in the Milky Way, and mountains and craters on the Moon; and
interpreted his discoveries as proof that the Earth goes round the Sun.
Later he saw that Venus has phases like the Moon, and observed sunspots.
His book was a great success, and led to his return to his native Florence as
court mathematician and philosopher: later books, written in Italian instead
of Latin, got him into trouble with the Inquisition; evidence from the
telescope could not establish the truth of the Earth’s motion, though it could
make it more plausible.
Galileo could see things nobody before had seen; and clearly astronomy
would never be the same again. Early telescopes were not easy to use; glass
was of poor quality, magnifications were small, and images were fuzzy and
bordered by coloured fringes. Galileo’s had a convex objective and a
concave eyepiece, like an opera-glass; Kepler suggested making both
convex, though this inverts the image; and Newton urged that to get clear
images telescopes ought to be based on reflection rather than
refraction, using a large curved mirror to collect light. The biggest
telescopes work on this principle; but they were not suitable for carrying
around the world.
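
In modern textbook terms (not Galileo's), the angular magnification of both lens designs depends only on the focal lengths of the two lenses:

% angular magnification of a simple two-lens telescope
\[ M = \frac{f_{\text{objective}}}{f_{\text{eyepiece}}} \]

A Galilean telescope, with its concave eyepiece, shows an erect image; Kepler's two convex lenses invert it, as noted above.
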
In the mid-eighteenth century, John Dollond introduced the achromatic
lens system, in which compound lenses made of different kinds of glass
replaced single lenses; this got rid of the extraneous colours. Telescopes
had become essential equipment at observatories by the 1660s; and in the
eighteenth century Captain Cook and others could set up observatories at
places like Tahiti, using refractors. In the nineteenth century, photography
increased the value of telescopes, for images could be studied at leisure; and
the spectrum of the stars could also be investigated.
I. B. Cohen (1986) The Birth of a New Physics, new ed., Penguin, Harmondsworth.
G. L. E. Turner (1983) Nineteenth-century Scientific Instruments, Sotheby, London.
A test-tube is a small glass tube closed at one end, and is a vital piece of
apparatus in the laboratory. Faraday in his Chemical Manipulation of
1827 gave instructions on how to make them, but even by that date they
could be bought. By the early twentieth century they were being made of heat-resistant borosilicate glass such as Pyrex.
They are used for experiments in which things are dissolved, products of
distillation collected, and analyses performed; in popular science,
phrases like ‘test-tube babies’ imply any laboratory procedure.
Toys and science do not at first sight go together, but the educational toy
has a long history and the apparatus of one generation, particularly that
used in lecture demonstrations, may become the toy of the next,
teaching scientific principles or perhaps just giving pleasure. There are
some particularly ingenious versions from the nineteenth century.
G. L. E. Turner (1987) ‘Presidential address: Scientific toys’, BJHS, 20, 377–98.
Truth is what many believe that the sciences aim at; and they mean by it
consistency with the facts of the world. Pilate’s question, ‘What is Truth?’,
they see like Bacon as jesting by a man who would not wait for an answer.
The problem is that facts are problematic; and selection of relevant facts
depends on the world-view, or in science the paradigm or the theory, held
by the observer. Science may aim at truth, but can we know that we have
got there? In the last century, the wave theory of light seemed so firmly
established that its truth was beyond question; whereas since Einstein’s
work of 1905 we have come to accept that it is only a half-truth. Well-
established theories, and even whole sciences (like classical physics), may
be falsified.
An alternative view is that truth describes coherence. It is therefore
subject to time and place: the wave theory of light was true a hundred years
ago, and phlogiston theory a hundred years before that. This fits with our
perception of change within science; but it makes science an intellectual
game rather than an earnest search in which at the end we shall see the
world as God sees it. Whether either view makes much difference in the
practical affairs of life is doubtful: if we can never be sure that we have
reached truth, we are in no worse a position than the judge and jury; and in
science we must no doubt do our best to avoid error. We can also
remember that falsification is possible (subject to ad hoc hypotheses,
which may rightly be added), whereas conclusive confirmation never is. If science is a
search for truth, it is also a never-ending quest.
K. R. Popper (1972) Objective Knowledge, Oxford University Press, Oxford.
Type was a word much used in contexts of religion: where Joshua who
led his people into the Promised Land was, for example, a type of his
namesake Jesus. In the sciences a type is an actual or ideal entity to which
others are an approximation. Thus the first plant or animal of a species to be
described in the literature is the type-specimen, against which others
may be compared if it, or an illustration, survive. But especially in the
nineteenth century, the term was extended to higher levels of
classification: so that all crustacea, for example, were seen as
manifestations of one type, although crabs, lobsters and barnacles all look
rather different to the untrained eye. This led to a rather Platonic biology,
favoured in German Naturphilosophie and by Darwin’s great opponent,
Richard Owen; which became obsolete with general acceptance of
evolution. Nevertheless, Darwin’s ally Huxley taught physiology using the
crayfish as the type; students who had learned everything about crayfish
could then extend their ideas into other species.
In chemistry, ideal structures formed the basis for a theory of types,
made popular by Laurent and Gerhardt in the middle of the nineteenth
century. Laurent hoped that from imagined structures consequences could
be derived by deduction, which could confirm or falsify them. Gerhardt
was pessimistic about ever knowing structures; and for him the types, such
as that for water OH2, ammonia NH3, and methane CH4, were convenient
ways of grouping compounds. The success of Laurent’s programme in
structural organic chemistry, later extended and confirmed by X-ray work
on crystals, made this restricted theory of types also obsolete by the later
nineteenth century.
Uniformity of Nature is a basic assumption for all science. If
contingency ruled everywhere, and no prediction was possible, then
there would be no point in wasting one’s time looking for laws. In the
method of induction, uniformity is taken for granted, for otherwise there
would be no hope of extrapolating from past experience into the future.
Similarly, deduction depends upon there being a stable classification of
things, so that what is true of some members of a group will be true of all;
properties are constant.
This means that the idea functions as a principle; ‘Nature is Uniform’
might be a kind of text put up over the laboratory door. If an apparent
breach of the principle were to be observed, such as something falling
upwards or someone growing younger, we would not feel that this was an
interesting exception to the rule, but would look for some other explanation.
In its common use, the term miracle is taken to be a breach of uniformity;
and the successes of the sciences over the last three or four centuries mean
that miracles are now an embarrassment in religion rather than a support.
The argument for the existence of God from design was adapted, notably by
William Paley about 1800, to emphasise the foresight in planning a uniform
world rather than the interventions of Providence to correct injustices. That
science seems to work is not a proof of nature’s uniformity; it might just
mean that we had concentrated on uniformities and ignored the unexpected
and the novel; but on the whole it does seem to give strong support to the
principle.
In geology the principle was explicitly adopted in the famous book by
Charles Lyell, Principles of Geology, 1830–32. This was an attempt to
account for past changes in terms of causes acting at the present day; in
deliberate opposition to those who had invoked catastrophes such as Noah’s
Flood, which were scientifically inexplicable. This step is often seen as
making geology into a science. In fact catastrophists such as Cuvier in
France, and Lyell’s teacher Buckland in England, had used the principle of
uniformity as a confirmation of their views. The quick-frozen mammoths of
Siberia, the various past faunas of Montmartre, and the hyaena bones of
Yorkshire caves, all seemed to point to a series of disasters at intervals in
the past. Use of the uniformity principle does not in itself lead to any
particular version of science. In this case Lyell’s came to prevail, though
geologists found themselves soon afterwards having to incorporate the Ice
Age into their scheme: which seemed like a catastrophe, but could be
handled as something surprising and requiring explanation.
Uniformity is therefore a part of any conceivable paradigm in the
sciences; but it functions as something which cannot itself be tested, as a
principle, rather than as a proposition open to falsification.
Units, from unit meaning one, are what we express quantities in. There are
various national and international standards which have brought order
into what was in the past arbitrary. Conservation of energy brought the
realisation, especially in Helmholtz, that all manifestations of energy must
be expressible in the same dimensions of mass, length and time; and thus
connected the various units in which heat, electricity and mechanical
work had been expressed. Since the ‘metric system’ was set up in
Revolutionary France, and accepted at an international conference
attended only by France’s allies, it has gradually spread and become the
basis of all the units of science.
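As a worked illustration of Helmholtz’s point (added here, not in the original text): kinetic energy and mechanical work, however differently measured, reduce to the same dimensions,

$$[\tfrac{1}{2}mv^{2}] = M\,(L\,T^{-1})^{2} = M\,L^{2}\,T^{-2} = (M\,L\,T^{-2})\,L = [\text{force} \times \text{distance}],$$

and Joule’s experiments showed that heat shares them too; hence a single unit, today the joule, serves for them all.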
A vector is a quantity which has both size and direction: thus velocity is a
vector, while speed is a ‘scalar’ quantity. Two cars may be moving at the
same speed on a narrow winding road, but it makes a difference whether
their velocities are the same or opposite. The handling of quaternions and
vectors was an important part of nineteenth-century mathematics.
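A worked illustration of the two cars (an editorial addition): if velocity along the road is counted positive in one direction and negative in the other,

$$v_1 = +50\ \text{km/h}, \qquad v_2 = -50\ \text{km/h}, \qquad v_1 - v_2 = 100\ \text{km/h},$$

so although the speeds are equal, the oncoming cars close at 100 km/h; two cars with the same velocity have a relative velocity of zero and never meet.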
Voyages have been a metaphor for scientific activity ever since Bacon used
on his title-page the arms of Columbus, showing ships going through the
Pillars of Hercules and out into the ocean of undiscovered truth.
Wordsworth saw Newton as a voyager through strange seas of thought,
alone; the scientist is a kind of Ancient Mariner, with a gripping and
extraordinary yarn for us. But voyages have also been an important part of
science itself, where many like Darwin learned their craft and made
observations impossible for stay-at-homes.
In the middle of the eighteenth century, La Condamine went with an
expedition to survey the frontier between Peru and Brazil, based on a line
drawn on the map by an earlier Pope; and he found both that the Earth was
flattened at the Poles, and that the Andes pulled his plumb-line out of true;
both observations verifying Newton’s theory of gravitation. Then in the
1760s came tables of the Moon’s motions, enabling longitude to be
calculated at sea; and shortly afterwards the first chronometers, providing a
direct comparison of local time with that at Greenwich or Paris, and thus
another determination of longitude. Accurate charting far from home was
now possible, and Bougainville and Cook took advantage of it. On his
voyages, Cook was accompanied by astronomers and naturalists; his
botanist, Joseph Banks, found plenty to do at Botany Bay, and after his
return to Britain was President of the Royal Society from 1778 until 1820,
and encouraged further scientific voyages.
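The arithmetic behind the chronometer method is simple (a worked example, added here): the Earth turns through 360° in 24 hours, that is 15° per hour, so a ship whose local noon falls when the chronometer shows two o’clock at Greenwich lies

$$15^{\circ}/\mathrm{h} \times 2\ \mathrm{h} = 30^{\circ}$$

west of Greenwich.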
With the permission of the Spanish Crown, Alexander von Humboldt voyaged to tropical
America, making observations in all branches of natural history, as well as
political economy and anthropology; indeed he was a founder of scientific
geography. Darwin much admired his account of his travels, and saw
himself as extending Humboldt’s work further to the south. Voyages to the
far north also, in search of quicker passages to the East, were important for
geophysical measurements of terrestrial magnetism, and of gravity with pendulums:
observatories were set up, and under Humboldt’s influence international
cooperation produced great quantities of data; while Edward Sabine
became, like Banks, President of the Royal Society.
Cook’s voyages, and those of HMS Beagle and other survey ships, were
primarily concerned with charting coastlines; but by the second half of the
nineteenth century there was increasing interest in the deep seas, and the
voyage of HMS Challenger in 1872–76 marks the beginning of
oceanography. This ship carried a laboratory, and was specially adapted for
a scientific voyage. Earlier scientists had often been exasperated at the lack
of room on board ship, and at the way their time ashore was restricted
because the boats were often needed for survey work, and the naturalists
could not be allowed to wander away among suspicious or hostile natives. It
is striking how much work was done by those such as Sabine, Darwin,
Huxley and Joseph Hooker who made the best of it all.
J. F. W. Herschel (ed.) (1974) The Admiralty Manual of Scientific Enquiry (1851 ed.), reprinted,
Dawson, London, introduction by D. M. Knight.
A. Moyal (1986) A Bright and Savage Land, Collins, Sydney.
Wave motions we are familiar with from the sea. When we watch the
rollers coming in from the Atlantic Ocean, the movement of the molecules
of water is only up and down. The wave may travel from Cape Cod
to Land’s End, but the water does not. Water-waves are called transverse,
because the motion is at right-angles to the line in which the wave is going.
In the sea, the particles can only move up and down; but one can imagine
three-dimensional waves in which the vibrations were in all possible planes
at right-angles to the direction of propagation. Another sort of wave is the
longitudinal; here the particles vibrate in the line of propagation. Sound
waves are supposed to be of this kind. The molecules of air move back and
forth, none coming all the way from the source to our ear.
Light was more problematic. The waves of the sea will go round corners,
while light goes in straight lines and casts sharp shadows. Newton therefore
could not believe that it was a wave-motion, although he was aware of
some phenomena (like what we call ‘Newton’s Rings’, seen when a lens is
pressed onto a flat glass plate) which seemed to demand an explanation in
terms of waves. His contemporary, Christiaan Huygens, tried to develop a
wave theory, but it was unsatisfactory and did not catch on.
In the early nineteenth century, first Thomas Young, who turned his hand
to many things but finished few of them, and then A. J. Fresnel, sought to
demonstrate that a wave theory could explain all the phenomena of light.
The key was a model of transverse waves in three dimensions; and the
polarisation of light could be explained in terms of all of its vibrations being
in the same plane. Phenomena like the fringes Young observed when light
was passed through two nearby slits in a card and fell onto a screen, could
be readily explained if light were waves: which interfere, just as those do
when two pebbles are thrown into a puddle. By 1840 it was accepted that
light was waves in the ether. The particle theory was to be revived in our
century in the work of Einstein; and we now have to live with wave/particle
dualism.
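In modern notation (an editorial addition), Young’s bright fringes appear wherever the paths from the two slits differ by a whole number of wavelengths,

$$d\sin\theta = m\lambda, \qquad m = 0, 1, 2, \ldots,$$

where d is the separation of the slits and θ the angle out to the screen: crest meets crest and the waves reinforce, while a half-wavelength difference gives darkness.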
The distance between crests or troughs is the wavelength; the height of
crests gives the amplitude; and the number of oscillations per second gives
the frequency. The mathematics of wave-motions is elegant; and in
physics the study of waves is fundamental to the study of matter.
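These quantities are tied together by a single relation (a worked example, added here): a wave advances one wavelength in each oscillation, so its speed is

$$v = f\lambda; \qquad \text{e.g. sound at } 340\ \mathrm{m/s},\ f = 440\ \mathrm{Hz} \;\Rightarrow\; \lambda = 340/440 \approx 0.77\ \mathrm{m}.$$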
Any weapon can be much improved by the application of science.
Scientists like to feel that penicillin was a great achievement of science, and
the atomic bomb the work of politicians and generals; but the connection
between science and warfare goes back long before 1945. It may indeed be
that science lost its innocence then: in the last century T. H. Huxley could
say that science (unlike religion) had never done anyone any harm, but it
was not quite true then and now seems very implausible. This is not to say
that science has done more harm than good, but only that it is a human
activity with potential for good and ill and not simply beneficent.
Archimedes designed weapons to defend his native Syracuse against the
Roman army; Galileo was proud of his work on the path of projectiles,
which he showed to be a parabola; and the first President of the Royal
Society in the 1660s, Lord Brouncker, published papers on the recoil of
guns. The great chemist Lavoisier owed some of the unpopularity, which
led him to the guillotine, to his work on the improvement of gunpowder,
which involved collecting saltpetre from cellars and outhouses. His
research meant that French gunpowder, which had been inferior to British
in the Seven Years’ War, was superior by the War of American
Independence: the embattled farmers, Minute Men, could shoot further and
more reliably than the redcoats.
Despite these things, science about 1800 was not generally perceived as
an activity closely relevant to war. Davy was awarded a prize for his work
in electricity by the French Academy of Sciences, and in 1813 went to
Paris to collect it despite the Napoleonic Wars. Speeches were made about
how this proved that the sciences were above the wars of kings; Davy
seems to have seen his role as fighting in the cultural realm, proving that
chemistry was not a French science. He had backed a gunpowder
manufactory which actually managed to lose money during the war; but
otherwise his work had in no sense been war work.
Sadi Carnot, the pioneer of thermodynamics, perceived that it was the
steam-engine, and thus Britain’s industrial power, which had won the war.
But technology was still barely a matter of applied science; and it was not
until the synthetic dye industry began in the middle of the century that
professionally trained scientists were in the forefront of technical
innovation. Out of this work came new explosives: the propellant
guncotton, and high explosives like nitro-glycerine.
By the Crimean War in the 1850s rifles were replacing muskets, and
highly-visible red coats were coming to seem a poor kind of uniform; but this
was a step owing little to science. On the other hand, Faraday was asked
about the possibility of smoking the Russians out of Kronstadt, their great
base in the Baltic, with chlorine; he felt it was impractical, but does not
seem to have believed it immoral. As artillery both on land and at sea was
improved, so mathematical skills were increasingly needed; and systems of
fire control gave opportunities for officers trained in the sciences. The first
institution in which teaching and research in the sciences were closely
combined was the Ecole Polytechnique in Paris, a military school; and this
example was followed, with less distinguished science, at Woolwich and
other training centres for officers.
The twentieth-century wars of peoples proved indeed more terrible than
those of kings; and have all closely involved scientists. Whereas Davy was
fêted in Paris, a century on, in the Great War of 1914–18, all Germans were
expelled from the Royal Society, and it was not until well into the 1920s that
Germans were able to play a full role in international conferences and
other activities of the international scientific community. In that war,
chlorine was indeed used with terrible results; and the war was even made
possible by Fritz Haber’s process for ‘fixing’ nitrogen from the air to make
nitrates, scraping cellars no longer being sufficient. Haber, who was
responsible for the chlorine campaign, was awarded a Nobel Prize after the
war; his process could be, and is, used to make fertilisers; but down to 1918
it had in fact provided explosives.
Since then the relationship between science and defence has got ever
closer; radar, machines for coding and for breaking codes, ballistic missiles,
and nuclear weapons were all results of World War Two, and have been
much improved since. Without the pressures of war we might never have
had penicillin; and certainly electronics has benefited from the no-expense-
spared kind of research associated with the military. Defending freedom and
democracy, in which science can probably best flourish anyway, is a good
thing to do; but there seems little doubt that defence-spending distorts
science and technology, and that most of the ‘spinoffs’ could have been
achieved rather more cheaply. The most interesting problems are not
usually the short-term ones whose solutions are demanded by government
agencies; and science would do better if it got the same money with fewer
strings attached – though this may be a remote hope!
What is particularly sad about involvement with weapons is that it brings in secrecy.
This has always been associated with technology, where innovation means
money and where industrial espionage is a very old profession; but
science has always been public knowledge, where publication brings
prestige and also speeds progress. Those studying the sciences should think
about such problems; otherwise their education may leave them as
unprepared for life as the legendary Irish girl arriving at Euston station.
Weather is always with us, especially in Britain; and it has always been
something where prediction is important. The first published weather
forecasts were made by Admiral FitzRoy, who had been Captain of HMS
Beagle on Darwin’s voyage; he relied on reports sent by telegraph from
coastguards and others. Attacks on the system for inaccuracy led in part to
his suicide in 1865. In observatories such as that at Kew more systematic
study had been going on, but scientists there did not feel the time was
ripe for publishing forecasts; after FitzRoy’s death the forecasts were indeed
suspended, but restarted by public demand from 1867 and fully restored ten years later.
Even inaccurate forecasts are felt to be better than nothing.
J. Burton (1986) ‘Robert FitzRoy and the early history of the Meteorological Office’, BJHS, 19,
147–76.
J. L. Davis (1984) ‘Weather forecasting … at the Paris Observatory, 1853–1878’, Annals of Science,
41, 359–82.
The Zodiac is a Greek term for the circle in the heavens, the Ecliptic,
inclined at about 23 degrees to the Equator, on which the Sun, Moon and
the planets travel against the background of the fixed stars. It is divided
into twelve parts, called after the ‘Signs’ or constellations by which they
can be identified. If we are told that ‘Mars is in Libra’ we know where to
look; and in astrology the position of the planets at the moment of our
birth, our horoscope, is believed to determine our character.
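The arithmetic of the signs (an editorial note): the twelve divisions are equal arcs of the Ecliptic,

$$360^{\circ} \div 12 = 30^{\circ} \ \text{per sign},$$

so Libra, the seventh sign, runs from 180° to 210° along the Ecliptic, measured from the First Point of Aries; ‘Mars is in Libra’ places the planet somewhere on that 30° arc.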
Faraday, M. (1791–1867) 11, 12, 27, 37, 38, 41, 44, 45, 47ff, 52, 60, 64f, 77f, 81, 86, 89, 93, 101,
111, 116, 118, 120, 124, 131, 135, 139, 143, 153, 154, 155, 164, 165, 170
Fermat, P. (1601–65) 6
Feyerabend, P. (1924– ) 98
FitzRoy, R. (1805–65) 121, 171
Fontenelle, B. le B. (1657–1757) 7
Fourier, J. B. J. (1768–1830) 95, 157
Frankland, E. (1825–99) 87, 129
Franklin, B. (1706–90) 22, 46f, 154
Fraunhofer, J. (1787–1826) 3, 13, 145
Fresnel, A. J. (1788–1827) 90, 111, 120, 122, 168
Freud, S. (1856–1939) 140
Galileo (1564–1642) 4, 5, 12, 15, 16, 25, 26, 29, 32, 33, 34, 39, 43, 54, 60, 61f, 63, 71, 72, 74, 80, 87,
88, 90, 95, 97, 98, 110, 112, 114, 115, 117, 120, 140, 147, 150, 154, 169
Galton, F. (1822–1911) 166
Galvani, L. (1737–98) 47, 65
Gassendi, P. (1592–1655) 18
Gaudin, M. A. A. (1804–80) 100
Gauss, C. F. (1777–1855) 58, 69, 148f
Gay-Lussac, J. L. (1778–1850) 64, 76, 124, 140
Geiger, H. (J.) W. (1882–1945) 129
Gerhardt, C. F. (1816–56) 161
Germer, L. H. (1896–1971) 41, 49
Gibbs, J. W. (1839–1903) 113f, 158
Gilbert, W. (1544–1603) 50, 93
Goethe, J. W. (1749–1832) 8, 30, 171
Graunt, J. (1620–74) 148
’sGravesande, W. J. (1688–1742) 37
Grew, N. (1641–1712) 100
Grignard, F. A. V. (1871–1935) 101
Grosseteste, R. (c. 1168–1253) 89
Grove, W. R. (1811–96) 124
Kant, I. (1724–1804) 69
Keats, J. (1795–1821) 107
Kekulé, F. A. (1829–96) 13, 28, 76, 79, 136, 165
Kepler, J. (1571–1630) 15, 16, 44, 52, 59, 72, 75, 86, 97, 105, 106, 117, 133, 134, 144, 150, 155
Kirchhoff, G. R. (1824–87) 3, 11, 136, 145, 160
Kuhn, T. S. (1922– ) 42f, 45, 64, 98, 110, 115