Reading Passages

TOPICS
Anthropology
1. Anthropology
2. A day in Samoa
3. The economic process in primitive societies
4. The early education of Manus children
5. Moral standards and social organization
6. Production in primitive societies
7. The rules of good fieldwork
8. The science of custom
9. Survival in the cage
10. Gestures
11. Regional signals
12. The voices of time

Biology
13. Evolution and natural selection
14. Banting and the discovery of insulin
15. The Galapagos
16. On the origin of species

Business
17. Brands Up
18. How to be a great manager
19. Derivatives - the beauty
20. Motives
21. Research & development
22. SONY
23. American and Japanese styles

Chemistry
24. Metallurgy: Making alloys
25. Electricity helps Chemistry: Electro-plating

Economics
26. The conventional wisdom
27. Markets
28. Investment
29. Barter
30. Productivity as a guide to wages
31. The failure of the classical theory of commercial policy

Education
32. The personal qualities of a teacher
33. Rousseau's Emile
34. The beginnings of scientific and technical education
35. Supposed mental faculties and their training
36. The concept of number
37. English in the primary school
38. Britain and Japan: Two roads to higher education
39. What types of students do you have to teach?
40. Spoon-fed feel lost at the cutting edge

Geology/Geography
41. The age of the earth
42. Oils
43. The use of land

History
44. The nature, object and purpose of history
45. The expansion of Western civilization
46. The career of Jenghis Khan
47. The trial and execution of Charles I
48. Peter the Great
49. The United States in 1790
50. Civilisation and history
51. Coal

Language
52. 'Primitiveness' in language
53. English in the fifteenth century
54. An international language
55. Language as symbolism
56. From word symbol to phoneme symbol

Law
57. Modern constitutions
58. The functions of government
59. Initiative and referendum
60. Law report
61. The legal character of international law
62. The law of negligence
63. The death penalty

Mathematics
64. On different degrees of smallness
65. Chance or probability

Philosophy
66. Definition and some of its difficulties
67. The subject matter of philosophy
68. What can we communicate?
69. Ethics
70. Aristotle's Ethics
71. The road to happiness
72. Logic
73. Inductive and deductive logic

Physics
74. The origin of the sun and the planets
75. Can life exist on the planets?
76. The theory of continuous creation
77. The creation of the universe
78. Atomic radiation and life
79. Marconi and the invention of radio
80. Particles or waves?
81. Matter, mass and energy
82. Structure of matter
83. The quantum theory of radiation
84. Footprints of the atom
85. Splitting the atom
86. The development of electricity
87. The discovery of x-rays

Politics
88. Crowds
89. Diplomacy
90. What future for Africa?
91. Nationalism
92. Democracy
93. Locke's political theory
94. The search for world order
95. The declaration of independence
96. The rights of man

Psychology
97. Society and intelligence
98. The pressure to conform
99. Learning to live with the computer
100. Forgetting
101. Adolescence
102. Body language
103. Distance regulation in animals
104. An observation and an explanation
105. Adaptive control of reading rate

Sociology
106. Rational and irrational elements in contemporary society
107. Social life in a provincial university
108. The menace of over-population
109. Changes in English social life after 1918
110. Scientific method in the social sciences
111. Shopping in Russia

Technology
112. Seduced by technology
113. Blowing hot and cold on British windmills
114. Direct uses of solar radiation
115. Industrial robots
116. Tomorrow's phone calls
117. Coal
118. The medium is the message
119. The development of electricity
120. The autonomous house

Twentieth Century Discovery
121. Discovery of insecticides and pesticides
122. The origin of life
123. The structure of matter
124. Distance in our solar system
125. Space travel

The Artificial World Around Us
126. What men are doing to things
127. The nose does not know
128. "This movie smells"
129. The artificial air
130. The odor-makers
131. The truth about tastes
132. Inside the flavor factory
133. Let us have "Nothing" to eat
134. What is happening to the steel age?
135. Diamonds of the laboratory
136. Who made this ruby?
137. The synthetic future

ANTHROPOLOGY
What Is Anthropology?
Anthropology is the study of humankind, especially of Homo sapiens, the biological species to which we
human beings belong. It is the study of how our species evolved from more primitive organisms; it is also the
study of how our species developed a mode of communication known as language and a mode of social life
known as culture. It is the study of how culture evolved and diversified. And finally, it is the study of how
culture, people, and nature interact wherever human beings are found.
This book is an introduction to general anthropology, which is an amalgam of four fields of study
traditionally found within departments of anthropology at major universities. The four fields are cultural
anthropology (sometimes called social anthropology), archaeology, anthropological linguistics, and physical
anthropology. The collaborative effort of these four fields is needed in order to study our species in
evolutionary perspective and in relation to diverse habitats and cultures.
Cultural anthropology deals with the description and analysis of the forms and styles of social life of past
and present ages. Its subdiscipline, ethnography, systematically describes contemporary societies and cultures.
Comparison of these descriptions provides the basis for hypotheses and theories about the causes of human
lifestyles.
Archaeology adds a crucial dimension to this endeavor. By digging up the remains of cultures of past
ages, archaeology studies sequences of social and cultural evolution under diverse natural and cultural
conditions. In the quest for understanding the present-day characteristics of human existence, for validating or
invalidating proposed theories of historical causation, the great temporal depth of the archaeological record is
indispensable.
Anthropological linguistics provides another crucial perspective: the study of the totality of languages
spoken by human beings. Linguistics attempts to reconstruct the
historical changes that have led to the formation of individual languages and families of languages. More
fundamentally, anthropological linguistics is concerned with the nature of language and its functions and the
way language influences and is influenced by other aspects of cultural life. Anthropological linguistics is
concerned with the origin of language and the relationship between the evolution of language and the
evolution of Homo sapiens. And finally, anthropological linguistics is concerned with the relationship between
the evolution of languages and the evolution and differentiation of human cultures.
Physical anthropology grounds the work of the other anthropological fields in our animal origins and our
genetically determined nature. Physical anthropology seeks to reconstruct the course of human evolution by
studying the fossil remains of ancient human and infrahuman species. Physical anthropology seeks to describe
the distribution of hereditary variations among contemporary populations and to sort out (classify,
categorize) and measure the relative contributions made by heredity, environment, and culture to human
biology.
Because of its combination of biological, archaeological, and ethnographic perspectives, general
anthropology is uniquely suited to the study of many problems of vital importance to the survival and well-
being of our species.
To be sure, disciplines other than anthropology are concerned with the study of human beings. Our
animal nature is the subject of intense research by biologists, geneticists, and physiologists. In medicine alone,
hundreds of additional specialists investigate the human body, and psychiatrists and psychologists, rank upon
rank, seek the essence of the human mind and soul. Many other disciplines examine our cultural, intellectual,
and aesthetic behavior. These disciplines include sociology, human geography, social psychology, political
science, economics, linguistics, theology, philosophy, musicology, art, literature, and architecture. There are
also many “area specialists,” who study the languages and life-styles of particular peoples, nations, or regions:
“Latin Americanists,” “Indianists,” “Sinologists,” and so on. In view of this profusion (abundance) of
disciplines that describe, explain, and interpret aspects of human life, what justification can there be for a single
discipline that claims to be the general science of the human species?
The Importance of General Anthropology
Research and publications are accumulating in each of the four fields of anthropology at an exponential
rate. Few anthropologists nowadays master more than one field. And anthropologists increasingly find
themselves working not with fellow anthropologists of another field but with members of entirely different
scientific or humanistic specialties. For example, cultural anthropologists interested in the relationship between
cultural practices and the natural environment may be obliged to pay closer attention to agronomy or ecology
than to linguistics. Physical anthropologists interested in the relationship between human and protohuman
fossils may, because of the importance of teeth in the fossil record, become more familiar with dentistry
journals than with journals devoted to ethnography or linguistics. Cultural anthropologists interested in the
relationship between culture and individual personality are sometimes more at home professionally with
psychiatrists and social psychologists than with the archaeologists in their own university departments. Hence,
many more than four fields are represented in the ongoing research of modern anthropology.
The specialized nature of most anthropological research makes it imperative that the general
significance of anthropological facts and theories be preserved. This is the task of general anthropology.
General anthropology does not pretend to survey the entire subject matter of physical, cultural,
archaeological, and linguistic anthropology. Much less does it pretend to survey the work of the legions of
scholars in other disciplines who also study the biological, linguistic, and cultural aspects of human existence.
Rather, it strives to achieve a particular orientation toward all the human sciences, disciplines, and fields.
Perhaps the best word for this orientation is ecumenical (worldwide). General anthropology does not teach all
that one must know in order to master the four fields or all that one must know in order to become an
anthropologist. Instead, general anthropology teaches how to evaluate facts and theories about human nature
and human culture by placing them in a total, universalist perspective. In the words of Frederica De Laguna:
“Anthropology is the only discipline that offers a conceptual schema for the whole context of human
experience…. It is like the carrying frame onto which may be fitted all the several subjects of a liberal education,
and by organizing the load, making it more wieldy and capable of being carried.” (1968, p. 475)
I believe that the importance of general anthropology is that it is panhuman, evolutionary, and
comparative. The previously mentioned disciplines are concerned with only a particular segment of human
experience or a particular time or phase of our cultural or biological development. But general anthropology is
systematically and uncompromisingly comparative. Its findings are never based upon the study of a single
population, race, “tribe,” class, or nation. General anthropology insists first and foremost that conclusions
based upon the study of one particular human group or civilization be checked against the evidence of other
groups or civilizations under both similar and different conditions. In this way the relevance of general
anthropology transcends the interests of any particular “tribe,” race, nation, or culture. In anthropological
perspective, all peoples and civilizations are fundamentally local and evanescent (fleeting, temporary). Thus
general anthropology is implacably opposed to the insularity and mental constriction of those who would have
themselves and none other represent humanity, stand at the pinnacle of progress, or be chosen by God or
history to fashion the world in their own image.
Therefore general anthropology is “relevant” even when it deals with fragments of fossils, extinct
civilizations, remote villages, or exotic customs. The proper study of humankind requires a knowledge of
distant as well as near lands and of remote as well as present times.
Only in this way can we humans hope to tear off the blinders of our local life-styles to look upon the
human condition without prejudice.
Because of its multidisciplinary, comparative, and diachronic perspective, anthropology holds the key
to many fundamental questions of recurrent and contemporary relevance. It lies peculiarly within the
competence of general anthropology to explicate our species’ animal heritage, to define what is distinctively
human about human nature, and to differentiate the natural and the cultural conditions responsible for
competition, conflict, and war. General anthropology is also strategically equipped to probe (search) the
significance of racial factors in the evolution of culture and in the conduct of contemporary human affairs.
General anthropology holds the key to an understanding of the origins of social inequality - of racism,
exploitation, poverty, and underdevelopment. Overarching all of general anthropology’s contributions is the
search for the causes of social and cultural differences and similarities. What is the nature of the determinism
that operates in human history, and what are the consequences of this determinism for individual freedom of
thought and action? To answer these questions is to begin to understand the extent to which we can increase
humanity’s freedom and well-being by conscious intervention in the processes of cultural evolution.

A Day in Samoa
The life of the day begins at dawn, or if the moon has shown until daylight, the shouts of the young
men may be heard before dawn from the hillside. Uneasy in the night, populous with ghosts, they shout lustily
to one another as they hasten with their work. As the dawn begins to fall among the soft brown roofs and the
slender palm trees stand out against a colourless, gleaming sea, lovers slip home from trysts beneath the palm
trees or in the shadow of beached canoes, that the light may find each sleeper in his appointed place. Cocks
crow, negligently, and a shrill-voiced bird cries from the breadfruit trees. The insistent roar of the reef seems
muted to an undertone for the sounds of a waking village. Babies cry, a few short wails before sleepy mothers
give them the breast. Restless little children roll out of their sheets and wander drowsily down to the beach to
freshen their faces in the sea. Boys, bent upon an early fishing, start collecting their tackle and go to rouse their
more laggard companions. Fires are lit, here and there, the white smoke hardly visible against the paleness of
the dawn. The whole village, sheeted and frowsy, stirs, rubs its eyes, and stumbles towards the beach. 'Talofa!'
'Talofa!' 'Will the journey start today?' 'Is it bonito fishing your lordship is going?' Girls stop to giggle over some
young ne'er-do-well who escaped during the night from an angry father's pursuit and to venture a shrewd
guess that the daughter knew more about his presence than she told. The boy who is taunted by another, who
has succeeded him in his sweetheart's favour, grapples with his rival, his foot slipping in the wet sand. From
the other end of the village comes a long-drawn-out, piercing wail. A messenger has just brought word of the
death of some relative in another village. Half-clad, unhurried women, with babies at their breasts or astride
their hips, pause in their tale of Losa's outraged departure from her father's house to the greater kindness in the
home of her uncle, to wonder who is dead. Poor relatives whisper their requests to rich relatives, men make
plans to set a fish-trap together, a woman begs a bit of yellow dye from a kinswoman, and through the village
sounds the rhythmic tattoo which calls the young men together. They gather from all parts of the village,
digging-sticks in hand, ready to start inland to the plantation. The older men set off upon their more lonely
occupations, and each household, reassembled under its peaked roof, settles down to the routine of the
morning. Little children, too hungry to wait for the late breakfast, beg lumps of cold taro which they munch
greedily. Women carry piles of washing to the sea or to the spring at the far end of the village, or set off inland
after weaving materials. The older girls go fishing on the reef, or perhaps set themselves to weaving a new set
of Venetian blinds.
In the houses, where the pebbly floors have been swept bare with a stiff, long-handled broom, the
women great with child and the nursing mothers sit and gossip with one another. Old men sit apart,
unceasingly twisting palm husk on their bare thighs and muttering old tales under their breath. The carpenters
begin work on the new house, while the owner bustles about trying to keep them in a good humour. Families
who will cook today are hard at work; the taro, yams, and bananas have already been brought from inland; the
children are scuttling back and forth, fetching sea water, or leaves to stuff the pig. As the sun rises higher in the
sky, the shadows deepen under the thatched roofs, the sand is burning to the touch, the hibiscus flowers wilt on
the hedges, and little children bid the smaller ones, 'Come out of the sun.' Those whose excursions have been
short return to the village, the women with strings of crimson jellyfish, or baskets of shellfish, the men with
coconuts, carried in baskets slung on a shoulder-pole. The women and children eat their breakfast, just hot from
the oven, if this is cook day, and the young men work swiftly in the midday heat, preparing the noon feast for
their elders.
It is high noon. The sand burns the feet of the little children, who leave their palm-leaf balls and their
pinwheels of frangipani blossoms to wither in the sun, as they creep into the shade of the houses. The women
who must go abroad carry great banana leaves as sunshades or wind wet cloths about their heads. Lowering a
few blinds against the slanting sun, all who are left in the village wrap their heads in sheets and go to sleep.
Only a few adventurous children may slip away for a swim in the shadow of a high rock, some industrious
woman continues with her weaving, or a close little group of women bend anxiously over a woman in labour.
The village is dazzling and dead; any sound seems oddly loud and out of place. Words have to cut through the
solid heat slowly. And then the sun gradually sinks over the sea.

A second time the sleeping people stir, roused perhaps by the cry of 'A boat!' resounding through the
village. The fishermen beach their canoes, weary and spent from the heat, in spite of the slaked lime on their
heads, with which they have sought to cool their brains and redden their hair. The brightly coloured fishes are
spread out on the floor, or piled in front of the houses until the women pour water over them to free them from
taboo. Regretfully, the young fishermen separate out the 'taboo fish', which must be sent to the chief, or proudly
they pack the little palm-leaf baskets with offerings of fish to take to their sweethearts. Men come home from the
bush, grimy and heavy laden, shouting as they come, greeted in a sonorous rising cadence by those who have
remained at home. They gather in the guest house for their evening kava drinking. The soft clapping of hands,
the high-pitched intoning of the talking chief who serves the kava echo through the village. Girls gather flowers
to weave into necklaces; children, lusty from their naps and bound to no particular task, play circular games in
the half shade of the late afternoon. Finally the sun sets, in a flame which stretches from the mountain behind to
the horizon on the sea; the last bather comes up from the beach, children straggle home, dark little figures
etched against the sky; lights shine in the houses, and each household gathers for its evening meal. The suitor
humbly presents his offering, the children have been summoned from their noisy play, perhaps there is an
honoured guest who must be served first, after the soft, barbaric singing of Christian hymns and the brief and
graceful evening prayer. In front of a house at the end of the village, a father cries out the birth of a son. In some
family circles a face is missing, in others little runaways have found a haven. Again quiet settles upon the
village, as first the head of the household, then the women and children, and last of all the patient boys, eat
their supper.
After supper the old people and the little children are bundled off to bed. If the young people have
guests, the front of the house is yielded to them. For day is the time for the councils of old men and the labours
of youth, and night is the time for lighter things. Two kinsmen, or a chief and his councillor, sit and gossip over
the day's events or make plans for the morrow. Outside a crier goes through the village announcing that the
communal breadfruit pit will be opened in the morning, or that the village will make a great fish-trap. If it is
moonlight, groups of young men, women by twos and threes, wander through the village, and crowds of
children hunt for land crabs or chase each other among the breadfruit trees. Half the village may go fishing by
torchlight, and the curving reef will gleam with wavering lights and echo with shouts of triumph or
disappointment, teasing words or smothered cries of outraged modesty. Or a group of youths may dance for
the pleasure of some visiting maiden.
Many of those who have retired to sleep, drawn by the merry music, will wrap their sheets about them
and set out to find the dancing. A white-clad, ghostly throng will gather in a circle about the gaily lit house, a
circle from which every now and then a few will detach themselves and wander away among the trees.
Sometimes sleep will not descend upon the village until long past midnight; then at last there is only the
mellow thunder of the reef and the whisper of lovers, as the village rests until dawn.
(From Coming of Age in Samoa, by Margaret Mead, 1928.)

The Economic Process in Primitive Societies


In our own economic system money gives a universal measure of values, a convenient medium of
exchange through which we can buy or sell almost anything, and also a standard by which payments at one
time can be expressed as commitments for the future. In a wider sense it allows for the measurement of services
against things, and promotes the flow of the economic process. In a primitive society without money we might
expect all this to be absent, yet the economic process goes on. There is a recognition of services, and payment is
made for them; there are means of absorbing people into the productive process, and values are expressed in
quantitative terms, measured by traditional standards.
Let us examine, to begin with, a situation of simple distribution such as occurs when an animal is killed
in a hunt. Do the hunters fall on the carcass and cut it to pieces, the largest piece to the strongest man? This is
hardly ever the case. The beast is normally divided according to recognized principles. Since the killing of an
animal is usually a co-operative activity one might expect to find it portioned out according to the amount of
work done by each hunter to obtain it. To some extent this principle is followed, but other people have their
rights as well. In many parts of Australia each person in the camp gets a share depending upon his or her
relation to the hunters. The worst parts may even be kept by the hunters themselves. In former times, at Alice
Springs, according to Palmer, when a kangaroo was killed the hunter had to give the left hind leg to his brother,
the tail to his father's brother's son, the loins and fat to his father-in-law, the ribs to his mother-in-law, the
forelegs to his father's younger sister, the head to his wife, and he kept for himself the entrails and the blood. In
different areas the portions assigned to such kinsfolk differ. When grumbles and fights occur, as they often do,
it is not because the basic principles of distribution are questioned, but because it is thought they are not being
properly followed. Though the hunter, his wife, and children seem to fare badly, this inequality is corrected by
their getting in their turn better portions from kills by other people. The result is a criss-cross set of exchanges
always in progress. The net result in the long run is substantially the same to each person, but through this
system the principles of kinship obligation and the morality of sharing food have been emphasized.
We see from this that though the principle that a person should get a reward for his labour is not
ignored, this principle is caught up into a wider set of codes which recognize that kinship ties, positions, or
privilege, and ritual ideas should be supported on an economic basis. As compared with our own society,
primitive societies make direct allowance for the dependants upon producers as well as for the immediate
producers themselves.
These same principles come out in an even more striking way in the feasts which are such an important
part of much primitive life. The people who produce the food, or who own it, deliberately often hand over the
best portions to others.
A feast may be the means of repaying the labour of others; of setting the seal on an important event,
such as initiation or marriage; or of cementing an alliance between groups. Prestige is usually gained by the
giver of the feast, but where personal credit and renown are linked most closely with the expenditure of wealth,
the giving of a feast is a step upon the ladder of social status. In the Banks Islands and other parts of Melanesia
such feasts are part of the ceremonial of attaining the various ranks of the men's society, which is an important
feature of native life. In Polynesia these graded feasts do not normally occur, but in Tikopia a chief is expected
to mark the progress of his reign by a feast every decade or so. The 'feasts of merit' of the Nagas of Assam are
not so much assertion against social competitors as means of gaining certain recognized ranks in the society.
(From Human Types, by Raymond Firth.)

The Early Education of Manus Children


For the first few months after he has begun to accompany his mother about the village the baby rides
quietly on her neck or sits in the bow of the canoe while his mother punts in the stern some ten feet away. The
child sits quietly, schooled by the hazards to which he has been earlier exposed. There are no straps, no baby
harness to detain him in his place. At the same time, if he should tumble overboard, there would be no tragedy.
The fall into the water is painless. The mother or father is there to pick him up. Babies under two and a half or
three are never trusted with older children or even with young people. The parents demand a speedy physical
adjustment from the child, but they expose him to no unnecessary risks. He is never allowed to stray beyond
the limits of safety and watchful adult care.
So the child confronts duckings, falls, dousings of cold water, or entanglements in slimy seaweed, but
he never meets with the type of accident which will make him distrust the fundamental safety of his world.
Although he himself may not yet have mastered the physical technique necessary for perfect comfort in the
water his parents have. A lifetime of dwelling on the water has made them perfectly at home there. They are
sure-footed, clear-eyed, quick handed. A baby is never dropped; his mother never lets him slip from her arms
or carelessly bumps his head against door post or shelf. In the physical care of the child she makes no clumsy
blunders. Her every move is a reassurance to the child, counteracting any doubts which he may have
accumulated in the course of his own less sure-footed progress. So thoroughly do Manus children trust their
parents that a child will leap from any height into an adult's outstretched arms, leap blindly and with complete
confidence of being safely caught.
Side by side with the parents' watchfulness and care goes the demand that the child himself should
make as much effort, acquire as much physical dexterity, as possible. Every gain a child makes is noted, and the
child is inexorably held to his past record. There are no cases of children who toddle a few steps, fall, bruise
their noses, and refuse to take another step for three months. The rigorous way of life demands that the
children be self-sufficient as early as possible. Until a child has learned to handle his own body, he is not safe in
the house, in a canoe or on the small islands. His mother or aunt is a slave, unable to leave him for a minute,
never free of watching his wandering steps. So every new proficiency is encouraged and insisted upon. Whole
groups of busy men and women cluster about the baby's first step, but there is no such delightful audience to
bemoan his first fall. He is set upon his feet gently but firmly and told to try again. The only way in which he
can keep the interest of his admiring audience is to try again. So self-pity is stifled and another step is
attempted.
As soon as the baby can toddle uncertainly he is put down into the water at low tide when parts of the
lagoon are high and others only a few inches under water. Here the baby sits and plays in the water or takes a
few hesitating steps in the yielding spongy mud. The mother does not leave his side, nor does she leave him
there long enough to weary him. As he grows older, he is allowed to wade about at low tide. His elders keep a
sharp lookout that he does not stray into deep water until he is old enough to swim. But the supervision is
unobtrusive. Mother is always there if the child gets into difficulties, but he is not nagged and plagued with
continual 'don'ts'. His whole play-world is so arranged that he is permitted to make small mistakes from which
he may learn better judgment and greater circumspection, but he is never allowed to make mistakes which are
serious enough to frighten him permanently or inhibit his activity. He is a tight-rope walker, learning feats
which we would count outrageously difficult for little children, but his tight-rope is stretched above a net of
expert parental solicitude.... Expecting children to swim at three, to climb about like young monkeys even
before that age, may look to us like forcing them; really it is simply a quiet insistence upon their exerting every
particle of energy and strength which they possess. Swimming is not taught: the small waders imitate their
slightly older brothers and sisters, and after floundering about in waist deep water begin to strike out for
themselves. Sure-footedness on land and swimming come almost together, so that the charm which is recited
over a newly delivered woman says, 'May you not have another child until this one can walk and swim.'
(From Growing up in New Guinea, by Margaret Mead.)

Moral Standards and Social Organization


What the anthropologist does in the study of moral systems is to examine for particular societies the
ideas of right and wrong that are held, and their social circumstances. Consideration of material from some of
the more primitive societies, and a contrast of it with Western patterns, will help to bring out some of the basic
moral aspects of social action.
A simple way of introducing this subject is to mention a personal experience. It concerns the morality of
giving, which has important problems in all human societies.
When I went to the isolated island of Tikopia I was dependent, as every anthropologist is, on the local
people for information and for guidance. This they gave, freely in some respects, but with reservation in others,
particularly on religious matters. Almost without exception, too, they showed themselves greedy for material
goods such as knives, fish-hooks, calico, pipes and tobacco, and adept at many stratagems for obtaining them.
In particular, they used the forms of friendship. They made me gifts in order to play upon the sense of
obligation thus aroused in me. They lured me to their houses by generous hospitality which it was difficult to
refuse, and then paraded their poverty before me. The result of a month or two of this was that I became
irritated and weary. My stocks of goods were not unlimited, and I did not wish to exhaust them in this casual
doling out to people from whom I got no special anthropological return. I foresaw the time when I would wish
to reward people for ethnographic data and help of a scientific kind and I would either have debased my
currency or exhausted it. Moreover I came to the conclusion that there was no such thing as friendship or
kindliness among these people. Everything they did for me seemed to be in expectation of some return. What
was worse, they were apt to ask for such return at the time, or even in advance of their service.
Then I began to reflect. What was this disinterested friendship and kindness which I expected to find?
Why, indeed, should these people do many services for me, a perfect stranger, without return? Why should
they be content to leave it to me to give them what I wanted rather than express their own ideas upon what
they themselves wanted? In our European society how far can we say disinterestedness goes? How far do we
use this term for what is really one imponderable item in a whole series of interconnected services and
obligations? A Tikopia, like anyone else, will help to pick a person up if he slips to the ground, bring him a
drink, or do many other small things without any mention of reciprocation. But many other services which
involve him in time and trouble he regards as creating an obligation. This is just what they do themselves. He
thinks it right to get a material reward, and right that he should be able to ask for it. Is he wrong in this? Was
my moral indignation at his self-seeking justified?
So I revised my procedure. At first I had expected a man to do me a service and wait, until, in my own
good time, I made him a freewill gift. Now I abandoned the pretence of disinterested friendliness. When a gift
was made to me or a service done, I went at once to my stores, opened them, and made the giver a present
roughly commensurate to the value of that received.
But more important than the change in my procedure was the change in my moral attitudes. I was no
longer indignant at the behaviour of these calculating savages, to whom friendship seemed to be expressed
only in material terms. It was pleasant and simple to adopt their method. If one was content to drop the search
for 'pure' or 'genuine' sentiments and accept the fact that to people of another culture, especially when they had
not known one long, the most obvious foundation of friendship was material reciprocity, the difficulties
disappeared. When the obligation to make a material return was dragged into the light, it did not inhibit the
development of sentiments of friendship, but fostered it.
What I have shown of the material elements of friendship in Tikopia is intelligible in a society where no
very clear-cut line is drawn between social service and economic service, where there is no sale or even barter
of goods, but only borrowing and exchange in friendly or ceremonial form. In European culture we demarcate
the sphere of business from that of friendship. The former insists on the rightness of obtaining the best bargain
possible, while the latter refuses to treat in terms of bargains at all. Yet there is an intermediate sphere. Business
has its social morality. Things are done 'as a favour', there are concepts of 'fair' prices, and sharp practice and
profiteering are judged as wrong. On the other hand, friendship does not necessarily ignore the material
aspects. 'One good turn deserves another' epitomizes regard for reciprocity which underlies many friendly
actions.
(From Elements of Social Organization, by Raymond Firth.)

Production in Primitive Societies


The exploitation of the natural resources of the environment constitutes the productive system of any
people, and the organization of this system in primitive society differs in several important respects from our
own. The first point which must be mentioned is the character of work. As we have said, most economic effort
in primitive society is devoted to the production of food. The activities involved in this have, quite apart from
the stimulus of real or potential hunger, a spontaneous interest lacking in the ordinary work of an office or
factory in contemporary civilization. This will become clear when we reflect that most of the food-getting
activities of primitive peoples, such as fishing, hunting and gardening, are recreations among ourselves. It does
not follow that primitive man takes an undiluted pleasure in such activities - much of the labour connected with
them is heavy, monotonous or hazardous. But they do possess an inherent interest lacking in most of the
economic labour in modern civilization, and much the same applies to primitive technology, in which the
craftsman himself creates an artefact, rather than being merely a human cog in the machinery of production.
The spontaneous interest of work under primitive conditions is reinforced by a number of social values
attached to it. Skill and industry are honoured and laziness condemned, a principle exemplified in the folk
songs and proverbs of the Maori. From childhood onwards the virtues of industry are extolled, as in the term
ihu puku, literally 'dirty nose', applied as a compliment to an industrious man because it implies that he is
continually occupied in cultivation with his face to the ground; on the other hand, the twin vices of greed and
laziness are condemned in the saying: 'Deep throat, shallow muscles'. Such social evaluations as these give
pride in successful and energetic work, and stimulate potential laggards to play their part in productive effort.
The interest of primitive work is increased, and its drudgery mitigated, by the fact that it is often co-
operative. Major undertakings, such as house-building or the construction of large canoes, usually require the
labour of more than one person. And even when the task concerned could be done individually, primitive
peoples often prefer collective labour. Thus in Hehe agriculture much of the cultivation is done individually or
by small family groups. But at the time of the annual hoeing of the ground, it is customary for a man to
announce that on a certain day his wife will brew beer. His relatives and neighbours attend, help with the
hoeing, and are rewarded with beer in the middle of the day and in the evening. This is not to be regarded as
payment, since casual visitors who have not helped with the hoeing may also take part in the beer drink. Under
this system, each man helps others and is helped by them in turn. From the purely economic point of view, the
system has no advantage, since each man could quite well hoe his own ground and the preparation of beer
adds substantially to the work involved. But the system does possess psychological advantages. The task of
hoeing might well appear endless if undertaken by each individual separately. Collective labour, and the
collateral activity of beer-drinking, changes a dreary task into a social occasion. The same principle applies to
collective labour in general in primitive society, and to the social activities of feasting, dancing and other forms
of collective enjoyment which frequently accompany it or mark its conclusion.
(From An Introduction to Social Anthropology, by Ralph Piddington.)

The Science of Custom


Anthropology is the study of human beings as creatures of society. It fastens its attention upon those
physical characteristics and industrial techniques, those conventions and values, which distinguish one
community from all others that belong to a different tradition.
The distinguishing mark of anthropology among the social sciences is that it includes for serious study
other societies than our own. For its purposes any social regulation of mating and reproduction is as significant
as our own, though it may be that of the Sea Dyaks, and have no possible historical relation to that of our
civilization. To the anthropologist, our customs and those of a New Guinea tribe are two possible social
schemes for dealing with a common problem, and in so far as he remains an anthropologist he is bound to
avoid any weighting of one in favour of the other. He is interested in human behaviour, not as it is shaped by
one tradition, our own, but as it has been shaped by any tradition whatsoever. He is interested in the great
gamut of custom that is found in various cultures, and his object is to understand the way in which these
cultures change and differentiate, the different forms through which they express themselves, and the manner
in which the customs of any peoples function in the lives of the individuals who compose them.
Now custom has not been commonly regarded as a subject of any great moment. The inner workings of
our own brains we feel to be uniquely worthy of investigation, but custom, we have a way of thinking, is
behaviour at its most commonplace. As a matter of fact, it is the other way round. Traditional custom, taken the
world over, is a mass of detailed behaviour more astonishing than what any one person can ever evolve in
individual actions no matter how aberrant. Yet that is a rather trivial aspect of the matter. The fact of first-rate
importance is the predominant role that custom plays in experience and belief, and the very great varieties it
may manifest.
No man ever looks at the world with pristine eyes. He sees it edited by a definite set of customs and
institutions and ways of thinking. Even in his philosophical probings he cannot go behind these stereotypes; his
very concepts of the true and the false will still have reference to his particular traditional customs. John Dewey
has said in all seriousness that the part played by custom in shaping the behaviour of the individual as over
against any way in which he can affect traditional custom, is as the proportion of the total vocabulary of his
mother tongue over against those words of his own baby talk that are taken up into the vernacular of his
family. When one seriously studies social orders that have had the opportunity to develop autonomously, the
figure becomes no more than an exact and matter-of-fact observation. The life-history of the individual is first and
foremost an accommodation to the patterns and standards traditionally handed down in his community. From
the moment of his birth the customs into which he is born shape his experience and behaviour. By the time he
can talk, he is the little creature of his culture, and by the time he is grown and able to take part in its activities,
its habits are his habits, its beliefs his beliefs, its impossibilities his impossibilities. Every child that is born into
his group will share them with him, and no child born into one on the opposite side of the globe will ever
achieve the thousandth part. There is no social problem it is more incumbent upon us to understand than this
of the role of custom. Until we are intelligent as to its laws and varieties, the main complicating facts of human
life must remain unintelligible.
(From Patterns of Culture, by Ruth Benedict.)

Survival in the Cage


Most casual visitors to zoos are convinced, as they stroll from cage to cage, that the antics of the inmates
are no more than an obliging performance put on solely for their entertainment. Unfortunately for our
consciences, however, this sanguine view of the contented, playful, caged animal could in many cases be hardly
farther from the truth. Recent research at London Zoo has amply demonstrated that many caged animals are in
fact facing a survival problem as severe as that of their cousins in the wild - a struggle to survive, simply, against
the monotony of their environment. Well fed, well housed, well cared for, and protected from its natural
enemies, the zoo animal in its super-Welfare State existence is bored, sometimes literally to death.
The extraordinary and subtle lengths to which some animals go to overcome this problem, and the
surprising behaviour patterns which arise as a result, were vividly described by Dr Desmond Morris (Curator
of Mammals, London Zoo) at a conference on 'The biology of survival' held in the rooms of the Zoological
Society. As he and other speakers pointed out, the problem of surviving in a monotonous and restricted
environment is not confined to the animal cage. Apart from the obvious examples of human prisoners or the
astronaut, the number of situations in which human beings have to face boredom and confinement for long
stretches is growing rather than decreasing. More to the point, many of the ways in which animals respond to
these conditions have striking analogies in many forms of obsessional or neurotic behaviour in humans: the
psychiatrist could well learn from the apes.
The animals which seem to react most strongly to this monotony are the higher 'non-specialists' - those
that do not rely on one or two highly developed adaptations or 'tricks' to survive in the wild. Normally seizing
every opportunity to exploit the chances and variety of their surroundings, they are constantly investigating
and exploring; in short, they are neophilic (loving the new). Most species seem to show a balance between this
active, curious behaviour and its opposite, or neophobic; but with the primates and members of the dog and cat
families, for example, the neophilic pattern is usually overwhelmingly predominant.
It is not surprising that when such species are placed in the highly non-variable environment of a zoo
cage, where there are few novel stimuli, they cannot accept - and indeed actually fight against - any kind of
enforced inactivity (apart from those times when they obviously just give up and relax). As Dr Morris has
remarked, how they do this is a great testimony to their ingenuity.
Observations by Dr Morris and the staff of London Zoo have revealed that there are probably five main
ways in which animals try to overcome their monotony. The first is to invent new motor patterns for
themselves - new exercises, gymnastics, and so forth. They may also try to increase the complexity of their
environment by creating new stimulus situations: many carnivores, such as the large cats, play with their food
as though it were a living animal, throwing up dead birds in the air, pursuing the carcass, and pouncing on it to
kill.
Alternatively the animal may increase the quantity of its reaction to normal stimuli. Hypersexuality is
one common response to this type of behaviour. A fourth method, akin to some kinds of obsessional behaviour
in man, is to increase the variability of its response to stimuli such as food. Many animals can be seen playing,
pawing, advancing, and retreating from their food before eating it: some even go further by regurgitating it
once eaten and then devouring it again, and so on. Lastly - and this kind of behaviour can most nearly be called
neurotic - is the development of full, normal responses to subnormal stimuli, such as the camel's expression of
sexual arousal when cigarette smoke is blown in its face, or the making of mother substitutes out of straw,
wood, and suchlike.
No one claims that the observations of animals under these conditions are anything but fragmentary.
But at least enough is now known about them to persuade zoologists that these bizarre behaviour patterns are
not just haphazard, neurotic responses but are genuine attempts to put back some kind of values into the
animal's surroundings, attempts which are beginning to show consistent patterns. It is also too early to say how
far studies of this sort can throw light on human behaviour under similar conditions (though as one zoologist
remarked, they do show that the best way of surviving a prison sentence is to turn oneself utterly neophobic
and take up an advanced course on economics). Yet there is a growing realization that the human environment
in the future will become more like that of the zoo animal rather than less, so that the kind of observations
mentioned above might well have a growing relevance.
Fairly recent studies of coalminers, for example, have shown that in spite of their phenomenally high
daily energy output they spend about seventeen hours sitting or lying down, and sedentary workers some
twenty to twenty-one hours. With the spread of automation and the growth of white collar workers these
figures are likely to increase. With astronauts, polar scientists, and long-range aircraft crews the problem
already exists: a recent study of Antarctic scientists produced the remarkable fact that on average they spent
only four per cent of their time in winter outside the confines of their living quarters. With this continued
confinement and the extreme uniformity of the outside environment many odd behaviour patterns were
developed.
(From an article by Gerald Leach in The Guardian, Tuesday, May 14th, 1963.)

GESTURES
A gesture is any action that sends a visual signal to an onlooker. To become a gesture, an act has to be
seen by someone else and has to communicate some piece of information to them. It can do this either because
the gesturer deliberately sets out to send a signal - as when he waves his hand - or it can do it only incidentally -
as when he sneezes. The hand-wave is a Primary Gesture, because it has no other existence or function. It is a
piece of communication from start to finish. The sneeze, by contrast, is a secondary, or Incidental Gesture. Its
primary function is mechanical and is concerned with the sneezer’s personal breathing problem. In its
secondary role, however, it cannot help but transmit a message to his companions, warning them that he may
have caught a cold.
Most people tend to limit their use of the term ‘gesture’ to the primary form - the hand-wave type - but
this misses an important point. What matters with gesturing is not what signals we think we are sending out,
but what signals are being received. The observers of our acts will make no distinction between our intentional
Primary Gestures and our unintentional, incidental ones. In some ways, our Incidental Gestures are the more
illuminating of the two, if only for the very fact that we do not think of them as gestures, and therefore do not
censor and manipulate them so strictly. This is why it is preferable to use the term ‘gesture’ in its wider
meaning as an ‘observed action’.
A convenient way to distinguish between Incidental and Primary Gestures is to ask the question:
Would I do it if I were completely alone? If the answer is No, then it is a Primary Gesture. We do not wave,
wink, or point when we are by ourselves; not, that is, unless we have reached the unusual condition of talking
animatedly to ourselves.
INCIDENTAL GESTURES
Mechanical actions with secondary messages
Many of our actions are basically non-social, having to do with problems of personal body care, body
comfort and body transportation; we clean and groom ourselves with a variety of scratchings, rubbings and
wipings; we cough, yawn and stretch our limbs; we eat and drink; we prop ourselves up in restful postures,
folding our arms and crossing our legs; we sit, stand, squat and recline, in a whole range of different positions;
we crawl, walk and run in varying gaits and styles. But although we do these things for our own benefit, we are
not always unaccompanied when we do them. Our companions learn a great deal about us from these
‘personal’ actions - not merely that we are scratching because we itch or that we are running because we are
late, but also, from the way we do them, what kind of personalities we possess and what mood we are in at the
time.
Sometimes the mood-signal transmitted unwittingly in this way is one that we would rather conceal, if
we stopped to think about it. Occasionally we do become self-consciously aware of the ‘mood broadcasts’ and
‘personality displays’ we are making and we may then try to check ourselves. But often we do not, and the
message goes out loud and clear.
For instance, if a student props his head on his hands while listening to a boring lecture, his head-on-
hands action operates both mechanically and gesturally. As a mechanical act, it is simply a case of supporting a
tired head - a physical act that concerns no one but the student himself. At the same time, though, it cannot help
operating as a gestural act, beaming out a visual signal to his companions, and perhaps to the lecturer himself,
telling them that he is bored.
In such a case his gesture was not deliberate and he may not even have been aware that he was
transmitting it. If challenged, he would claim that he was not bored at all, but merely tired. If he were honest -
or impolite - he would have to admit that excited attention easily banishes tiredness, and that a really
fascinating speaker need never fear to see a slumped, head-propped figure like his in the audience.
In the schoolroom, the teacher who barks at his pupils to ‘sit up straight’ is demanding, by right, the
attention-posture that he should have gained by generating interest in his lesson. It says a great deal for the
power of gesture-signals that he feels more ‘attended-to’ when he sees his pupils sitting up straight, even
though he is consciously well aware of the fact that they have just been forcibly un-slumped, rather than
genuinely excited by his teaching.
Many of our Incidental Gestures provide mood information of a kind that neither we nor our companions
become consciously alerted to. It is as if there is an underground communication system operating just below
the surface of our social encounters. We perform an act and it is observed. Its meaning is read, but not out loud.
We ‘feel’ the mood, rather than analyse it. Occasionally an action of this type becomes so characteristic of a
particular situation that we do eventually identify it - as when we say of a difficult problem: ‘That will make
him scratch his head’, indicating that we do understand the link that exists between puzzlement and the
Incidental Gesture of head-scratching. But frequently this type of link operates below the conscious level, or is
missed altogether.
Where the links are clearer, we can, of course, manipulate the situation and use our Incidental Gestures
in a contrived way. If a student listening to a lecture is not tired, but wishes to insult the speaker, he can
deliberately adopt a bored, slumped posture, knowing that its message will get across. This is a Stylized
Incidental Gesture - a mechanical action that is being artificially employed as a pure signal. Many of the
common ‘courtesies’ also fall into this category - as when we greedily eat up a plate of food that we do not want
and which we do not like, merely to transmit a suitably grateful signal to our hosts. Controlling our Incidental
Gestures in this way is one of the processes that every child must learn as it grows up and learns to adapt to the
rules of conduct of the society in which it lives.
EXPRESSIVE GESTURES
Biological gestures of the kind we share with other animals
Primary Gestures fall into six main categories. Five of these are unique to man, and depend on his
complex, highly evolved brain. The exception is the category I have called Expressive Gestures. These are gestures of
the type which all men, everywhere, share with one another, and which other animals also perform. They
include the important signals of Facial Expression, so crucial to daily human interaction.
All primates are facially expressive and among the higher species the facial muscles become
increasingly elaborate, making possible the performance of a whole range of subtly varying facial signals. In
man this trend reaches its peak, and it is true to say that the bulk of non-verbal signalling is transmitted by the
human face.
The human hands are also important, having been freed from their ancient locomotion duties, and are
capable, with their Manual Gesticulations, of transmitting many small mood changes by shifts in their postures
and movements, especially during conversational encounters. I am defining the word ‘gesticulation’, as distinct
from ‘gesture’, as a manual action performed unconsciously during social interactions, when the gesticulator is
emphasizing a verbal point he is making.
These natural gestures are usually spontaneous and very much taken for granted. Yes, we say, he made
a funny face. But which way did his eyebrows move? We cannot recall. Yes, we say, he was waving his arms
about as he spoke. But what shape did his fingers make? We cannot remember. Yet we were not inattentive. We
saw it all and our brains registered what we saw. We simply did not need to analyse the actions, any more than
we had to spell out the words we heard, in order to understand them. In this respect they are similar to the
Incidental Gestures of the previous category, but they differ, because here there is no mechanical function - only
signalling. This is the world of smiles and sneers, shrugs and pouts, laughs and winces, blushes and blanches,
waves and beckons, nods and glares, frowns and snarls. These are the gestures that nearly everyone performs
nearly everywhere in the world. They may differ in detail and in context from place to place, but basically they
are actions we all share. We all have complex facial muscles whose sole job it is to make expressions, and we all
stand on two feet rather than four, freeing our hands and letting them dance in the air evocatively as we
explain, argue and joke our way through our social encounters. We may have lost our twitching tails and our
bristling fur, but we more than make up for it with our marvellously mobile faces and our twisting, spreading,
fluttering hands.
In origin, our Expressive Gestures are closely related to our Incidental Gestures, because their roots also
lie in primarily non-communicative actions. The clenched fist of the gesticulator owes its origin to an intention
movement of hitting an opponent, just as the frown on the face of a worried man can be traced back to an
ancient eye-protection movement of an animal anticipating physical attack. But the difference is that in these
cases the link between the primary physical action and its ultimate descendant, the Expressive Gesture, has
been broken. Smiles, pouts, winces, gapes, smirks, and the rest, are now, for all practical purposes, pure
gestures and exclusively communicative in function.
Despite their worldwide distribution, Expressive Gestures are nevertheless subject to considerable
cultural influences. Even though we all have an evolved set of smiling muscles, we do not all smile in precisely
the same way, to the same extent, or on the same occasions. For example, all children may start out as easy-
smilers and easy-laughers, but a local tradition may insist that, as the youngsters mature, they must hide their
feelings, and their adult laughter may become severely muted as a result. These local Display Rules, varying
from place to place, often give the false impression that Expressive Gestures are local inventions rather than
modified, but universal, behaviour patterns.
MIMIC GESTURES
Gestures which transmit signals by imitation
Mimic Gestures are those in which the performer attempts to imitate, as accurately as possible, a
person, an object or an action. Here we leave our animal heritage behind and enter an exclusively human
sphere. The essential quality of a Mimic Gesture is that it attempts to copy the thing it is trying to portray. No
stylized conventions are applied. A successful Mimic Gesture is therefore understandable to someone who has
never seen it performed before. No prior knowledge should be required and there need be no set tradition
concerning the way in which a particular item is represented. There are four kinds of Mimic Gesture:
First, there is Social Mimicry, or ‘putting on a good face’. We have all done this. We have all smiled at a
party when really we feel sad, and perhaps looked sadder at a funeral than we feel, simply because it is
expected of us. We lie with simulated gestures to please others. This should not be confused with what
psychologists call ‘role-playing’. When indulging in Social Mimicry we deceive only others, but when role-
playing we deceive ourselves as well.
Second, there is Theatrical Mimicry - the world of actors and actresses, who simulate everything for our
amusement. Essentially it embraces two distinct techniques. One is the calculated attempt to imitate specifically
observed actions. The actor who is to play a general, say, will spend long hours watching films of military
scenes in which he can analyse every tiny movement and then consciously copy them and incorporate them
into his final portrayal. The other technique is to concentrate instead on the imagined mood of the character to
be portrayed, to attempt to take on that mood, and to rely upon it to produce, unconsciously, the necessary
style of body actions.
In reality, all actors use a combination of both these techniques, although in explaining their craft they
may stress one or other of the two methods. In the past, acting performances were usually highly stylized, but
today, except in pantomime, opera and farce, extraordinary degrees of realism are reached and the formal,
obtrusive audience has become instead a shadowy group of eavesdroppers. Gone are the actor’s asides, gone
are the audience participations. We must all believe that it is really happening. In other words, Theatrical
Mimicry has at last become as realistic as day-to-day Social Mimicry. In this respect, these first two types of
mimic activity contrast sharply with the third, which can be called Partial Mimicry.
In Partial Mimicry the performer attempts to imitate something which he is not and never can be, such
as a bird, or raindrops. Usually only the hands are involved, but these make the most realistic approach to the
subject they can manage. If a bird, they flap their ‘wings’ as best they can; if raindrops, they describe a
sprinkling descent as graphically as possible. Widely used Mimic Gestures of this kind are those which convert
the hand into a ‘gun’, an animal of some sort, or the foot of an animal; or those which use the movements of the
hand to indicate the outline shape of an object of some kind.
The fourth kind of Mimic Gesture can best be called Vacuum Mimicry, because the action takes place in
the absence of the object to which it is related. If I am hungry, for example, I can go through the motions of
putting imaginary food into my mouth. If I am thirsty, I can raise my hand as if holding an invisible glass, and
gulp invisible liquid from it.
The important feature of Partial Mimicry and Vacuum Mimicry is that, like Social and Theatrical
Mimicry, they strive for reality. Even though they are doomed to failure, they make an attempt. This means that
they can be understood internationally. In this respect they contrast strongly with the next two types of gesture,
which show marked cultural restrictions.

SCHEMATIC GESTURES
Imitations that become abbreviated or abridged
Schematic Gestures are abbreviated or abridged versions of Mimic Gestures. They attempt to portray
something by taking just one of its prominent features and then performing that alone. There is no longer any
attempt at realism.
Schematic Gestures usually arise as a sort of gestural shorthand because of the need to perform an
imitation quickly and on many occasions. Just as, in ordinary speech, we reduce the word ‘cannot’ to ‘can’t’, so
an elaborate miming of a charging bull is reduced simply to a pair of fingers jabbed in the air to represent its horns.
When one element of a mime is selected and retained in this way, and the other elements are reduced or
omitted, the gesture may still be easy to understand, when seen for the first time, but the stylization may go so
far that it becomes meaningless to those not ‘in the know’. The Schematic Gesture then becomes a local
tradition with a limited geographical range. If the original mime was complex and involved several distinctive
features, different localities may select different key features for their abridged versions. Once these different
forms of shorthand have become fully established in each region, then the people who use them will become
less and less likely to recognize the foreign forms. The local gesture becomes ‘the’ gesture, and there quickly
develops, in gesture communication, a situation similar to that found in linguistics. Just as each region has its
own verbal language, so it also has its own set of Schematic Gestures.
To give an example: the American Indian sign for a horse consists of a gesture in which two fingers of
one hand ‘sit astride’ the fingers of the other hand. A Cistercian monk would instead signal ‘horse’ by lowering
his head slightly and pulling at an imaginary tuft of hair on his forehead. An Englishman would probably
crouch down like a jockey and pull at imaginary reins. The Englishman’s version, being closer to a Vacuum
Mimic Gesture, might be understood by the other two, but their gestures, being highly schematic, might well
prove incomprehensible to anyone outside their groups.
Some objects, however, have one special feature that is so strongly characteristic of them that, even with
Schematic Gestures, there is little doubt about what is being portrayed. The bull, mentioned above, is a good
example of this. Cattle are nearly always indicated by their horns alone, and the two horns are always
represented by two digits. In fact, if an American Indian, a Hindu dancer, and an Australian Aborigine met,
they would all understand one another’s cattle signs, and we would understand all three of them. This does not
mean that the signs are all identical. The American Indian’s cattle sign would represent the bison, and the horns
of bison do not curve forward like those of domestic cattle, but inward, towards each other. The American
Indian’s sign reflects this, his hands being held to his temples and his forefingers being pointed inward. The
Australian Aborigine instead points his forefingers forward. The Hindu dancer also points forward, but rather
than using two forefingers up at the temples, employs the forefinger and little finger of one hand, held at waist
height. So each culture has its own variant, but the fact that horns are such an obvious distinguishing feature of
cattle means that, despite local variations, the bovine Schematic Gesture is reasonably understandable in most
cultures.
SYMBOLIC GESTURES
Gestures which represent moods and ideas
A Symbolic Gesture indicates an abstract quality that has no simple equivalent in the world of objects
and movements. Here we are one stage further away from the obviousness of the enacted Mimic Gesture.
How, for instance, would you make a silent sign for stupidity? You might launch into a full-blooded
Theatrical Mime of a drooling village idiot. But total idiocy is not a precise way of indicating the momentary
stupidity of a healthy adult. Instead, you might tap your forefinger against your temple, but this also lacks
accuracy, since you might do precisely the same thing when indicating that someone is brainy. All the tap does
is to point to the brain. To make the meaning more clear, you might instead twist your forefinger against your
temple, indicating ‘a screw loose’. Alternatively, you might rotate your forefinger close to your temple,
signalling that the brain is going round and round and is not stable.
Many people would understand these temple-forefinger actions, but others would not. They would
have their own local stupidity gestures, which we in our turn would find confusing, such as tapping the elbow
of the raised forearm, flapping the hand up and down in front of half-closed eyes, rotating a raised hand, or
laying one forefinger flat across the forehead.
The situation is further complicated by the fact that some stupidity signals mean totally different things
in different countries. To take one example, in Saudi Arabia stupidity can be signalled by touching the lower
eyelid with the tip of the forefinger. But this same action, in various other countries, can mean disbelief,
approval, agreement, mistrust, scepticism, alertness, secrecy, craftiness, danger, or criminality. The reason for
this apparent chaos of meanings is simple enough. By pointing to the eye, the gesturer is doing no more than
stress the symbolic importance of the eye as a seeing organ. Beyond that, the action says nothing, so that the
message can become either: ‘Yes, I see’, or ‘I can’t believe my eyes’, or ‘Keep a sharp look-out’, or ‘I like what I
see’, or almost any other seeing signal you care to imagine. In such a case it is essential to know the precise
‘seeing’ property being represented by the symbolism of the gesture in any particular culture.
So we are faced with two basic problems where Symbolic Gestures are concerned: either one meaning
may be signalled by different actions, or several meanings may be signalled by the same action, as we move
from culture to culture. The only solution is to approach each culture with an open mind and learn its Symbolic Gestures as one would its vocabulary.
As part of this process, it helps if a link can be found between the action and the meaning, but this is not
always possible. In some cases we simply do not know how certain Symbolic Gestures arose. It is clear that they
are symbolic because they now represent some abstract quality, but how they first acquired the link between
action and meaning has been lost somewhere in their long history. A good instance of this is the ‘cuckold’ sign
from Italy. This consists of making a pair of horns, either with two forefingers held at the temples, or with a
forefinger and little finger of one hand held in front of the body. There is little doubt about what the fingers are
meant to be: they are the horns of a bull. As such, they would rate as part of a Schematic Gesture. But they do
not send out the simple message ‘bull’. Instead they now indicate ‘sexual betrayal’. The action is therefore a
Symbolic Gesture and, in order to explain it, it becomes necessary to find the link between bulls and sexual
betrayal.
Historically, the link appears to be lost, with the result that some rather wild speculations have been
made. A complication arises in the form of the ‘horned hand’, also common in Italy, which has a totally
different significance, even though it employs the same motif of bull’s horns. The horned hand is essentially a
protective gesture, made to ward off imagined dangers. Here it is clear enough that it is the bull’s great power,
ferocity and masculinity that is being invoked as a symbolic aid to protect the gesturer. But this only makes it
even more difficult to explain the other use of the bull’s-horns gesture as a sign of a ‘pathetic’ cuckold.
A suggested explanation of this contradiction is that it is due to one gesture using as its starting point
the bull’s power, while the other - the cuckold sign - selects the bull’s frequent castration. Since the
domestication of cattle began, there have always been too many bulls in relation to cows. A good, uncastrated
bull can serve between 50 and 100 cows a year, so that it is only necessary to retain a small proportion of intact
bulls for breeding purposes. The rest are castrated, which renders them much more docile and easy to handle for beef production. In folk-lore, then, these impotent males must stand helplessly by, while the few sexually active bulls
‘steal their rightful females’; hence the symbolism of: bull = cuckold.
A completely different explanation once offered was that, when the cuckold discovers that his wife has
betrayed him, he becomes so enraged and jealous that he bellows and rushes violently about like a ‘mad bull’.
A more classical interpretation involves Diana the Huntress, who made horns into a symbol of male
downfall. Actaeon, another hunter, is said to have sneaked a look at her naked body when she was bathing.
This so angered her that she turned him into a horned beast and set his own hounds upon him, which promptly
killed and ate him.
Alternatively, there is the version dealing with ancient religious prostitutes. These ladies worshipped
gods who wore ‘horns of honour’ - that is, horns in their other role as symbols of power and masculinity - and
the gods were so pleased with the wives who became sacred whores that they transferred their godly horns on
to the heads of the husbands who had ordered their women to act in this role. In this way, the horns of honour
became the horns of ridicule.
As if this were not enough, it is also claimed elsewhere, and with equal conviction, that because stags
have horns (antlers were often called horns in earlier periods) and because most stags in the rutting season lose
their females to a few dominant males who round up large harems, the majority of ‘horned’ deer are unhappy
‘cuckolds’.
Finally, there is the bizarre interpretation that bulls and deer have nothing to do with it. Instead, it is
thought that the ancient practice of grafting the spurs of a castrated cockerel on to the root of its excised comb,
where they apparently grew and became ‘horns’, is the origin of the symbolic link between horns and cuckolds.
This claim is backed up by the fact that the German word for ‘cuckold’ (Hahnrei) originally meant
‘capon’.
If, after reading these rival claims, you feel that all you have really learned is the meaning of the phrase
‘cock-and-bull story’, you can be forgiven. Clearly, we are in the realm of fertile imagination rather than
historical record. But this example has been dealt with at length to show how, in so many cases, the true story
of the origin of a Symbolic Gesture is no longer available to us. Many other similarly conflicting examples are
known, but this one will suffice to demonstrate the general principle.
There are exceptions, of course, and certain of the Symbolic Gestures we make today, and take for
granted, can easily be traced to their origins. ‘Keeping your fingers crossed’ is a good example of this. Although
used by many non-Christians, this action of making the cross, using only the first and second fingers, is an
ancient protective device of the Christian church. In earlier times it was commonplace to make a more
conspicuous sign of the cross (to cross oneself) by moving the whole arm, first downwards and then sideways,
in front of the body, tracing the shape of the cross in the air. This can still be seen in some countries today in a
non-religious context, acting as a ‘good luck’ protective device. In more trivial situations it has been widely
replaced, however, by the act of holding up one hand to show that the second finger is tightly crossed over the
first, with the crossing movement of the arm omitted. Originally this was the secret version of ‘crossing oneself’
and was done with the hand in question carefully hidden from view. It may still be done in this secret way, as
when trying to protect oneself from the consequences of lying, but as a ‘good luck’ sign it has now come out
into the open. This development is easily explained by the fact that crossing the fingers lacks an obvious
religious character. Symbolically, the finger-crossing may be calling on the protection of the Christian God, but
the small finger action performed is so far removed from the priestly arm-crossing action that it can without
difficulty slide into everyday life as a casual wish for good fortune. Proof of this is that many people do not
even realize that they are demanding an act of Christian worship - historically speaking - when they shout out:
‘Keep your fingers crossed!’
TECHNICAL GESTURES
Gestures used by specialist minorities
Technical Gestures are invented by a specialist minority for use strictly within the limits of their
particular activity. They are meaningless to anyone outside the specialization and operate in such a narrow
field that they cannot be considered as playing a part in the mainstream of visual communication of any
culture.
Television-studio signals are a good example of Technical Gestures in use today. The studio
commentator we see on our screens at home is face to face with a ‘studio manager’. The manager is linked to
the programme director in the control room by means of headphones and conveys the director’s instructions to
the commentator by simple visual gestures. To warn the commentator that he will have to start speaking at any
moment, the manager raises a forearm and holds it stiffly erect. To start him speaking, he brings the forearm
swiftly down to point at the commentator. To warn him that he must stop speaking in a few seconds, the
manager rotates his forearm, as if it were the hand of a clock going very fast - ‘Time is running out fast.’ To ask
him to lengthen the speaking time and say more, he holds his hands together in front of his chest and pulls
them slowly apart, as if stretching something - ‘stretch it out.’ To tell the speaker to stop dead this instant, the
manager makes a slashing action with his hand across his throat - ‘Cut!’ There are no set rules laid down for
these signals. They grew up in the early days of television and, although the main ones listed here are fairly
widespread today, each studio may well have its own special variants, worked out to suit a particular
performer.
Other Technical Gestures are found wherever an activity prohibits verbal contact. Skindivers, for
instance, cannot speak to one another and need simple signals to deal with potentially dangerous situations. In
particular they need gestures for danger, cold, cramp and fatigue. Other messages, such as yes, no, good, bad,
up and down, are easily enough understood by the use of everyday actions and require no Technical Gestures
to make sense. But how could you signal to a companion that you had cramp? The answer is that you would
open and close one hand rhythmically - a simple gesture, but one that might nevertheless save a life.
Disaster can sometimes occur because a Technical Gesture is required from someone who is not a
specialist in a technical field. Suppose some holiday-makers take out a boat, and it sinks, and they swim to the
safety of a small, rocky island. Wet and frightened, they crouch there wondering what to do next, when to their
immense relief a small fishing-boat comes chugging towards them. As it draws level with the island, they wave
frantically at it. The people on board wave back, and the boat chugs on and disappears. If the stranded holiday-
makers had been marine ‘specialists’, they would have known that, at sea, waving is only used as a greeting. To
signal distress, they should have raised and lowered their arms stiffly from their sides. This is the accepted
marine gesture for ‘Help!’
Ironically, if the shipwrecked signallers had been marine experts and had given the correct distress
signal, the potential rescue boat might well have been manned by holiday-makers, who would have been
completely nonplussed by the strange actions and would probably have ignored them. When a technical sphere
is invaded by the non-technical, gesture problems always arise.
Firemen, crane-drivers, airport-tarmac signalmen, gambling-casino croupiers, dealers at auctions, and
restaurant staff, all have their own special Technical Gestures. Either because they must keep quiet, must be
discreet, or cannot be heard, they develop their own sets of signals. The rest of us can ignore them, unless we,
too, wish to enter their specialized spheres.
CODED GESTURES
Sign-language based on a formal system
Coded Gestures, unlike all others, are part of a formal system of signals. They interrelate with one
another in a complex and systematic way, so that they constitute a true language. The special feature of this
category is that the individual units are valueless without reference to the other units in the code. Technical
Gestures may be systematically planned, but, with them, each signal can operate quite independently of the
others. With Coded Gestures, by contrast, all the units interlock with one another on rigidly formulated
principles, like the letters and words in a verbal language.
The most important example is the Deaf-and-dumb Sign Language of hand signals, of which there is
both a one-handed and a two-handed version. Also, there is the Semaphore Language of arm signals, and the
Tic-tac Language of the race course. These all require considerable skill and training and belong in a totally
different world from the familiar gestures we employ in everyday life. They serve as a valuable reminder,
though, of the incredibly sensitive potential we all possess for visual communication. It makes it all the more
plausible to argue that we are all of us responding, with greater sensitivity than we may realize, to the ordinary
gestures we witness each day of our lives.
(From Manwatching by Desmond Morris.)

REGIONAL SIGNALS
The way signals change from country to country and district to district
A Regional Signal is one that has a limited geographical range. If a Norwegian, a Korean and a Masai
were marooned together on a desert island, they would easily be able to communicate their basic moods and
intentions to one another by their actions. All humanity shares a large repertoire of common movements,
expressions and postures. But there would also be misunderstandings. Each man would have acquired from his
own culture a special set of Regional Signals that would be meaningless to the others. If the Norwegian were
shipwrecked instead with a Swede and a Dane, he would find his task much easier, because their closer origins
would mean a greater share of these regional gestures, since localized actions, like many words, do not follow
precisely the present-day national boundaries.
This comparison of gestures with words is significant because it reveals immediately our state of
ignorance as regards gestural geography. We already know a great deal about linguistic maps, but we know far
too little about Gesture Maps. Ask a linguist to describe the distribution of any language you like to name and
he will be able to provide accurate, detailed information for you. Take any word, and he will be able to
demonstrate its spread from country to country. He can even present you with local dialect maps for some
parts of the world and show you, like Professor Higgins in Pygmalion, how slang expressions are limited to
certain small areas of big cities. But ask anyone for a world-wide gesture atlas, and you will be disappointed.
A start has already been made, however, and new field work is now beginning. Although this research
is only in its infancy, recent studies in Europe and around the Mediterranean are providing some valuable clues
about the way gestures change as one travels from locality to locality. For example, there is a simple gesture in
which the forefinger taps the side of the nose. In England most people interpret this as meaning secrecy or
conspiracy. The message is: 'Keep it dark, don't spread it around.' But as one moves down across Europe to
central Italy, the dominant meaning changes to become a helpful warning: 'Take care, there is danger - they are
crafty.' The two messages are related, because they are both concerned with cunning. In England it is we who
are cunning, by not divulging our secret. But in central Italy it is they who are cunning, and we must be warned
against them. The Nose Tap gesture symbolizes cunning in both cases, but the source of the cunning has
shifted.
This is an example of a gesture keeping the same form over a wide range, and also retaining the same
basic meaning, but nevertheless carrying a quite distinct message in two regions. The more gestures that are
mapped in the field, the more common this type of change is proving to be. Another instance is found in the
Eye Touch gesture, where the forefinger touches the face just below the eye and pulls the skin downwards,
opening the eye wider. In England and France this has the dominant meaning: 'You can't fool me - I see what
you are up to.' But in Italy this shifts to: 'Keep your eyes peeled - pay attention, he's a crook.' In other words the
basic meaning remains one of alertness, but it changes from 'I am alert' to 'You be alert'.
In both these cases, there is a small number of people in each region who interpret the gesture in its
other meaning. It is not an all-or-none situation, merely a shift in dominance of one message over the other.
This gives some idea of the subtlety of regional changes. Occasionally there is a total switch as one moves from
one district to the next, but more often than not the change is only a matter of degree.
Sometimes it is possible to relate the geography of modern Regional Signals to past historical events.
The Chin Flick gesture, in which the backs of the fingers are swept upwards and forwards against the
underside of the chin, is an insulting action in both France and northern Italy. There it means 'Get lost - you are
annoying me.' In southern Italy it also has a negative meaning, but the message it carries is no longer insulting.
It now says simply 'There is nothing' or 'No' or 'I cannot' or 'I don't want any'. This switch takes place between
Rome and Naples and gives rise to the intriguing possibility that the difference is due to a surviving influence
of ancient Greece. The Greeks colonized southern Italy, but stopped their northern movement between Rome
and Naples. Greeks today use the Chin Flick in the same way as the southern Italians. In fact, the distribution of
this, and certain other gestures, follows remarkably accurately the range of the Greek civilization at its zenith.
Our words and our buildings still display the mark of early Greek influence, so it should not be too surprising
if ancient Greek gestures are equally tenacious. What is interesting is why they did not spread farther as time
passed. Greek architecture and philosophy expanded farther and farther in their influences, but for some
reason, gestures like the Chin Flick did not travel so well. Many countries, such as England, lack them
altogether, and others, like France, know them only in a different role.
Another historical influence becomes obvious when one moves to North Africa. There, in Tunisia, the
Chin Flick gesture once again becomes totally insulting: a Tunisian gives a 'French' Chin Flick, rather than a
'Southern Italian' Chin Flick, despite the fact that France is more remote. The explanation, borne out by other
gesture links between France and Tunisia, is that the French colonial influence in Tunisia has left its imperial
mark even on informal body-language. The modern Tunisian is gesturally more French than any of his closer
neighbours who have not experienced the French presence.
This gives rise to the question as to whether gestures are generally rather conservative, compared with
other social patterns. One talks about the latest fashions in clothing, but one never hears of 'this season's crop of
new gestures'. There does seem to be a cultural tenacity about them, similar to the persistence found in much
folklore and in many children's games and rhymes. Yet new gestures do occasionally manage to creep in and
establish themselves. Two thousand years ago it was apparently the Greeks who were the 'gesturally virile'
nation. Today it is the British, with their Victory-sign and their Thumbs-up, and the Americans with their OK
Circle-sign. These have spread right across Europe and much of the rest of the world as well, making their first
great advance during the turmoil of the Second World War, and managing to cling on since then, even in the
gesture-rich countries of southern Europe. But these are exceptions. Most of the local signs made today are
centuries old and steeped in history.
(From Manwatching by Desmond Morris.)
THE VOICES OF TIME


Time talks. It speaks more plainly than words. The message it conveys comes through loud and clear.
Because it is manipulated less consciously, it is subject to less distortion than the spoken language. It can shout
the truth where words lie.
I was once a member of a mayor’s committee on human relations in a large city. My assignment was to
estimate what the chances were of non-discriminatory practices being adopted by the different city
departments. The first step in this project was to interview the department heads, two of whom were
themselves members of minority groups. If one were to believe the words of these officials, it seemed that all of
them were more than willing to adopt non-discriminatory labour practices. Yet I felt that, despite what they
said, in only one case was there much chance for a change. Why? The answer lay in how they used the silent
language of time and space.
Special attention had been given to arranging each interview. Department heads were asked to be
prepared to spend an hour or more discussing their thoughts with me. Nevertheless, appointments were
forgotten; long waits in outer offices (fifteen to forty-five minutes) were common, and the length of the
interview was often cut down to ten or fifteen minutes. I was usually kept at an impersonal distance during the
interview. In only one case did the department head come from behind his desk. These men had a position and
they were literally and figuratively sticking to it!
The implications of this experience (one which public-opinion pollsters might well heed) are quite
obvious. What people do is frequently more important than what they say. In this case the way these municipal
potentates handled time was eloquent testimony to what they inwardly believed, for the structure and meaning
of time systems, as well as the time intervals, are easy to identify. In regard to being late there are: “mumble
something” periods, slight apology periods, mildly insulting periods requiring full apology, rude periods, and
downright insulting periods. The psychoanalyst has long been aware of the significance of communication on
this level. He can point to the way his patients handle time as evidence of “resistances” and “transference.”
Different parts of the day, for example, are highly significant in certain contexts. Time may indicate the
importance of the occasion as well as on what level an interaction between persons is to take place. In the
United States if you telephone somebody very early in the morning, while he is shaving or having breakfast, the
time of the call usually signals a matter of utmost importance and extreme urgency. The same applies to calls
after 11.00 p.m. A call received during sleeping hours is apt to be taken as a matter of life and death, hence the
rude joke value of these calls among the young. Our realization that time talks is even reflected in such
common expressions as, "What time does the clock say?"
An example of how thoroughly these things are taken for granted was reported to me by John Useem,
an American social anthropologist, in an illuminating case from the South Pacific. The natives of one of the
islands had been having a difficult time getting their white supervisors to hire them in a way consistent with
their traditional status system. Through ignorance the supervisors had hired too many of one group and by so
doing had disrupted the existing balance of power among the natives. The entire population of the island was
seething because of this error. Since the Americans continued in their ignorance and refused to hire according
to local practice, the head men of the two factions met one night to discuss an acceptable reallocation of jobs.
When they finally arrived at a solution, they went en masse to see the plant manager and woke him up to tell
him what had been decided. Unfortunately it was then between two and three o’clock in the morning. They did
not know that it is a sign of extreme urgency to wake up Americans at this hour. As one might expect, the
American plant manager, who understood neither the local language nor the culture nor what the hullabaloo
was all about, thought he had a riot on his hands and called out the Marines. It simply never occurred to him
that the parts of the day have a different meaning for these people than they have for us.
On the other hand, plant managers in the United States are fully aware of the significance of a
communication made during the middle of the morning or afternoon that takes everyone away from his work.
Whenever they want to make an important announcement they will ask: “When shall we let them know?” In
the social world a girl feels insulted when she is asked for a date at the last minute by someone she doesn’t
know very well, and the person who extends an invitation to a dinner party with only three or four days’ notice
has to apologize. How different from the people of the Middle East with whom it is pointless to make an
appointment too far in advance, because the informal structure of their time system places everything beyond a
week into a single category of “future” in which plans tend to “slip off their minds.”
Advance notice is often referred to in America as “lead time,” an expression which is significant in a
culture where schedules are important. While it is learned informally, most of us are familiar with how it works
in our own culture, even though we cannot state the rules technically. The rules for lead time in other cultures,
however, have rarely been analysed. At the most they are known by experience to those who have lived abroad
for some time. Yet think how important it is to know how much time is required to prepare people, or for them
to prepare themselves, for things to come. Sometimes lead time would seem to be very extended. At other
times, as in the Middle East, any period longer than a week may be too long.
How troublesome differing ways of handling time can be is well illustrated by the case of an American
agriculturalist assigned to duty as an attaché of our embassy in a Latin country. After what seemed to him a
suitable period he let it be known that he would like to call on the minister who was his counterpart. For
various reasons, the suggested time was not suitable; all sorts of cues came back to the effect that the time was
not yet ripe to visit the minister. Our friend, however, persisted and forced an appointment which was
reluctantly granted. Arriving a little before the hour (the American respect pattern), he waited. The hour came
and passed; five minutes - ten minutes - fifteen minutes. At this point he suggested to the secretary that perhaps
the minister did not know he was waiting in the outer office. This gave him the feeling that he had done
something concrete and also helped to overcome the anxiety that was stirring inside him. Twenty minutes -
twenty-five minutes - thirty minutes - forty-five minutes (the insult period)!
He jumped up and told the secretary that he had been “cooling his heels” in an outer office for forty-
five minutes and he was “damned sick and tired” of this type of treatment. The message was relayed to the
minister, who said, in effect, “Let him cool his heels.” The attaché’s stay in the country was not a happy one.
The principal source of misunderstanding lay in the fact that in the country in question the five-minute
delay interval was not significant. Forty-five minutes, on the other hand, instead of being at the tail end of the
waiting scale, was just barely at the beginning. To suggest to an American’s secretary that perhaps her boss
didn’t know you were there after waiting sixty seconds would seem absurd, as would raising a storm about
“cooling your heels” for five minutes. Yet this is precisely the way the minister registered the protestations of
the American in his outer office! He felt, as usual, that Americans were being totally unreasonable.
Throughout this unfortunate episode the attaché was acting according to the way he had been brought
up. At home in the United States his responses would have been normal ones and his behaviour legitimate. Yet
even if he had been told before he left home this sort of thing would happen, he would have had difficulty not
feeling insulted after he had been kept waiting for forty-five minutes. If, on the other hand, he had been taught
the details of the local time system just as he should have been taught the local spoken language, it would have
been possible for him to adjust himself accordingly.
What bothers people in situations of this sort is that they don’t realize they are being subjected to
another form of communication, one that works part of the time with language and part of the time
independently of it. The fact that the message conveyed is couched in no formal vocabulary makes things
doubly difficult, because neither party can get very explicit about what is actually taking place. Each can only
say what he thinks is happening and how he feels about it. The thought of what is being communicated is what
hurts.

AMERICAN TIME
People of the Western world, particularly Americans, tend to think of time as something fixed in nature,
something around us from which we cannot escape; an ever present part of the environment, just like the air we
breathe. That it might be experienced in any other way seems unnatural and strange, a feeling which is rarely
modified even when we begin to discover how differently it is really handled by some other people. Within the
West itself certain cultures rank time much lower in over-all importance than we do. In Latin America, for
example, where time is treated rather cavalierly, one commonly hears the expression, “Our time or your time?”
“Hora americana, hora mejicana?”
As a rule, Americans think of time as a road or a ribbon stretching into the future, along which one
progresses. The road has segments or compartments which are best kept discrete (“one thing at a time”). People
who cannot schedule time are looked down upon as impractical. In at least some parts of Latin America, the
North American (their term for us) finds himself annoyed when he has made an appointment with somebody,
only to find a lot of other things going on at the same time. An old friend of mine of Spanish cultural heritage
used to run his business according to the “Latino” system. This meant that up to fifteen people were in his
office at the same time. Business which might have been finished in a quarter of an hour sometimes took a
whole day. He realized, of course, that the Anglo-Americans were disturbed by this and used to make some
allowance for them, a dispensation which meant that they spent only an hour or so in his office when they had
planned on a few minutes. The American concept of the discreteness of time and the necessity for scheduling
was at variance with this amiable and seemingly confusing Latin system. However, if my friend had adhered to
the American system he would have destroyed a vital part of his prosperity.
People who came to do business with him also came to find out things and to visit each other. The ten
to fifteen Spanish-Americans and Indians who used to sit around the office (among whom I later found myself
after I had learned to relax a little) played their own part in a particular type of communications network.
Not only do we Americans segment and schedule time, but we look ahead and are oriented almost
entirely toward the future. We like new things and are preoccupied with change. We want to know how to
overcome resistance to change. In fact, scientific theories and even some pseudo-scientific ones, which
incorporate a striking theory of change, are often given special attention.
Time with us is handled much like a material; we earn it, spend it, save it, waste it. To us it is somewhat
immoral to have two things going on at the same time. In Latin America it is not uncommon for one man to
have a number of simultaneous jobs which he either carries on from one desk or which he moves between,
spending a small amount of time on each.
While we look to the future, our view of it is limited. The future to us is the foreseeable future, not the
future of the South Asian that involves many centuries. Indeed, our perspective is so short as to inhibit the
operation of a good many practical projects, such as sixty- and one-hundred-year conservation works requiring
public support and public funds. Anybody who has worked in industry or in the government of the United
States has heard the following: “Gentlemen, this is for the long term! Five or ten years.”
For us a “long time” can be almost anything - ten or twenty years, two or three months, a few weeks, or
even a couple of days. The South Asian, however, feels that it is perfectly realistic to think of a “long time” in
terms of thousands of years or even an endless period. A colleague once described their conceptualization of
time as follows: “Time is like a museum with endless corridors and alcoves. You, the viewer, are walking
through the museum in the dark, holding a light to each scene as you pass it. God is the curator of the museum,
and only He knows all that is in it. One lifetime represents one alcove.”
The American’s view of the future is linked to a view of the past, for tradition plays an equally limited
part in American culture. As a whole, we push it aside or leave it to a few souls who are interested in the past
for some very special reason.
There are, of course, a few pockets, such as New England and the South, where tradition is emphasized.
But in the realm of business, which is the dominant model of United States life, tradition is equated with
experience, and experience is thought of as being very close to if not synonymous with know-how. Know-how
is one of our prized possessions, so that when we look backward it is rarely to take pleasure in the past itself
but usually to calculate the know-how, to assess the prognosis for success in the future.
Promptness is also valued highly in American life. If people are not prompt, it is often taken either as an
insult or as an indication that they are not quite responsible. There are those, of a psychological bent, who
would say that we are obsessed with time. They can point to individuals in American culture who are literally
time-ridden. And even the rest of us feel very strongly about time because we have been taught to take it so
seriously. We have stressed this aspect of culture and developed it to a point unequalled anywhere in the
world, except, perhaps, in Switzerland and North Germany. Many people criticize our obsessional handling of
time. They attribute ulcers and hypertension to the pressure engendered by such a system. Perhaps they are
right.
SOME OTHER CONCEPTS OF TIME
Even within the very borders of the United States there are people who handle time in a way which is
almost incomprehensible to those who have not made a major effort to understand it. The Pueblo Indians, for
example, who live in the Southwest, have a sense of time which is at complete variance with the clock-bound
habits of the ordinary American citizen. For the Pueblos events begin when the time is ripe and no sooner.
I can still remember a Christmas dance I attended some twenty-five years ago at one of the pueblos near
the Rio Grande. I had to travel over bumpy roads for forty-five miles to get there. At seven thousand feet the
ordeal of winter cold at one o’clock in the morning is almost unbearable. Shivering in the still darkness of the
pueblo, I kept searching for a clue as to when the dance would begin.
Outside everything was impenetrably quiet. Occasionally there was the muffled beat of a deep pueblo
drum, the opening of a door, or the piercing of the night’s darkness with a shaft of light. In the church where
the dance was to take place a few white towns-folk were huddled together on a balcony, groping for some clue
which would suggest how much longer they were going to suffer. “Last year I heard they started at ten
o’clock.” “They can’t start until the priest comes.” “There is no way of telling when they will start.” All this
punctuated by chattering teeth and the stamping of feet to keep up circulation.
Suddenly an Indian opened the door, entered, and poked up the fire in the stove. Everyone nudged his
neighbour: “Maybe they are going to begin now.” Another hour passed. Another Indian came in from outside,
walked across the nave of the church, and disappeared through another door. “Certainly now they will begin.
After all, it’s almost two o’clock.” Someone guessed they were just being ornery in the hope that the white men
would go away. Another had a friend in the pueblo and went to his house to ask when the dance would begin.
Nobody knew. Suddenly, when the whites were almost exhausted, there burst upon the night the deep sounds
of the drums, rattles, and low male voices singing. Without warning the dance had begun.
After years of performances such as this, no white man in his right mind will hazard a guess as to when
one of these ceremonial dances will begin. Those of us who have learned now know that the dance doesn’t start
at a particular time. It is geared to no schedule. It starts when “things” are ready!
As I pointed out, the white civilized Westerner has a shallow view of the future compared to the
Oriental. Yet set beside the Navajo Indians of northern Arizona, he seems a model of long-term patience. The
Navajo and the European-American have been trying to adjust their concepts of time for almost a hundred
years. So far they have not done too well. To the old-time Navajo time is like space - only the here and now is
quite real. The future has little reality to it.
An old friend of mine reared with the Navajo expressed it this way: “You know how the Navajo love
horses and how much they love to gamble and bet on horse races. Well, if you were to say to a Navajo, ‘My
friend, you know my quarter horse that won all the races at Flagstaff last Fourth of July?’ that Navajo would
eagerly say ‘yes, yes,’ he knew the horse; and if you were to say, ‘In the fall I am going to give you that horse,’
the Navajo’s face would fall and he would turn round and walk away. On the other hand, if you were to say to
him, ‘Do you see that old bag of bones I just rode up on? That old hay-bellied mare with the knock knees and
pigeon toes, with the bridle that’s falling apart and the saddle that’s worn out? You can have that horse, my
friend, it’s yours. Take it, ride it away now.’ Then the Navajo would beam and shake your hand and jump on
his new horse and ride away. Of the two, only the immediate gift has reality; a promise of future benefits is not
even worth thinking about.”
In the early days of the range control and soil conservation programs it was almost impossible to
convince the Navajo that there was anything to be gained from giving up their beloved sheep for benefits
which could be enjoyed ten or twenty years in the future. Once I was engaged in the supervision of the
construction of small earth dams and like everyone else had little success at first in convincing Navajo
workmen that they should work hard and build the dam quickly, so that there would be more dams and more
water for the sheep. The argument that they could have one dam or ten, depending on how hard they worked,
conveyed nothing. It wasn’t until I learned to translate our behaviour into their terms that they produced as we
knew they could.
The solution came about in this way. I had been discussing the problem with a friend, Lorenzo Hubbell,
who had lived on the reservation all his life. When there were difficulties I used to find it helpful to unburden
myself to him. Somewhere in his remarks there was always a key to the underlying patterns of Navajo life. As
we talked I learned that the Navajo understood and respected a bargain. I had some inkling of this when I
noticed how unsettled the Indians became when they were permitted to fall down on the job they had agreed to
do. In particular they seemed to be apprehensive lest they be asked to repay an unfulfilled obligation at some
future time. I decided to sit down with the Navajo crew and talk to them about the work. It was quite useless to
argue about the future advantages which would accrue from working hard; linear reasoning and logic were
meaningless. They did respond, however, when I indicated that the government was giving them money to get
out of debt, providing jobs near their families, and giving them water for their sheep. I stressed the fact that in
exchange for this, they must work eight hours every day. This was presented as a bargain. Following my
clarification the work progressed satisfactorily.
One of my Indian workmen inadvertently provided another example of the cultural conflict centring
around time. His name was “Little Sunday.” He was small, wiry, and winning. Since it is not polite to ask the
Navajo about their names or even to ask them what their name is, it was necessary to inquire of others how he
came to be named “Little Sunday.” The explanation was a revealing one.
In the early days of the white traders the Indians had considerable difficulty getting used to the fact that
we Europeans divided time into strange and unnatural periods instead of having a “natural” succession of days
which began with the new moon and ended with the old. They were particularly perplexed by the notion of the
week introduced by the traders and missionaries. Imagine a Navajo Indian living some forty or fifty miles from
a trading store that is a hundred miles north of the railroad deciding that he needs flour and maybe a little lard
for bread. He thinks about the flour and the lard, and he thinks about his friends and the fun he will have
trading, or maybe he wonders if the trader will give him credit or how much money he can get for the hide he
has. After riding horseback for a day and a half to two days he reaches the store all ready to trade. The store is
locked up tight. There are a couple of other Navajo Indians camped in the hogan built by the trader. They say
the trader is inside but he won’t trade because it’s Sunday. They bang on his door and he tells them, “Go away,
it’s Sunday,” and the Navajo says, “But I came from way up on Black Mesa, and I am hungry. I need some
food.” What can the trader do? Soon he opens the store and then all the Navajo pour in. One of the most
frequent and insistent Sunday visitors was a man who earned for himself the sobriquet “Big Sunday.” “Little
Sunday,” it turns out, ran a close second.
The Sioux Indians provide us with another interesting example of the differing views toward time. Not
so long ago a man who was introduced as the superintendent of the Sioux came to my office. I learned that he
had been born on the reservation and was a product of both Indian and white cultures, having earned his A.B.
at one of the Ivy League colleges.
During a long and fascinating account of the many problems which his tribe was having in adjusting to
our way of life, he suddenly remarked: “What would you think of a people who had no word for time? My
people have no word for ‘late’ or for ‘waiting’, for that matter. They don’t know what it is to wait or to be late.”
He then continued, “I decided that until they could tell the time and knew what time was they could never
adjust themselves to white culture. So I set about to teach them time. There wasn’t a clock that was running in
any of the reservation classrooms. So I first bought some decent clocks. Then I made the school buses start on
time, and if an Indian was two minutes late that was just too bad. The bus started at eight forty-two and he had
to be there.”
He was right of course. The Sioux could not adjust to European ways until they had learned the
meaning of time. The superintendent’s methods may have sounded a bit extreme, but they were the only ones
that would work. The idea of starting the buses off and making the drivers hold to a rigid schedule was a stroke
of genius; much kinder to the Indian, who could better afford to miss a bus on the reservation than lose a job in
town because he was late.
There is, in fact, no other way to teach time to people who handle it as differently from us as the Sioux.
The quickest way is to get very technical about it and to make it mean something. Later on these people can
learn the informal variations, but until they have experienced and then mastered our type of time they will
never adjust to our culture.
Thousands of miles away from the reservations of the American Indian we come to another way of
handling time which is apt to be completely unsettling to the unprepared visitor. The inhabitants of the atoll of
Truk in the Southwest Pacific treat time in a fashion that has complicated life for themselves as well as for
others, since it poses special problems not only for their civil and military governors and the anthropologists
recording their life but for their own chiefs as well.
Time does not heal on Truk! Past events stack up, placing an ever-increasing burden on the Trukese and
weighing heavily on the present. They are, in fact, treated as though they had just occurred. This was borne out
by something which happened shortly after the American occupation of the atoll at the end of World War II.
A villager arrived all out of breath at the military government headquarters. He said that a murder had
been committed in the village and that the murderer was running around loose. Quite naturally the military
governor became alarmed. He was about to dispatch M.P.s to arrest the culprit when he remembered that
someone had warned him about acting precipitously when dealing with “natives.” A little enquiry turned up
the fact that the victim had been “fooling around” with the murderer’s wife. Still more enquiry of a routine
type, designed to establish the place and date of the crime, revealed that the murder had not occurred a few
hours or even days ago, as one might expect, but seventeen years before. The murderer had been running
around loose in the village all this time.
A further example of how time does not heal on Truk is that of a land dispute that started with the
German occupation in the 1890s, was carried on down through the Japanese occupation, and was still current
and acrimonious when the Americans arrived in 1946.
Prior to Missionary Moses’ arrival on Uman in 1867 life on Truk was characterized by violent and
bloody warfare. Villages, instead of being built on the shore where life was a little easier, were placed on the
sides of mountains where they could be better protected. Attacks would come without notice and often without
apparent provocation. Or a fight might start if a man stole a coconut from a tree that was not his or waylaid a
woman and took advantage of her. Years later someone would start thinking about the wrong and decide that
it had not been righted. A village would be attacked again in the middle of the night.
When charges were brought against a chief for things he had done to his people, every little slight,
every minor graft would be listed; nothing would be forgotten. Damages would be asked for everything. It
seemed preposterous to us Americans, particularly when we looked at the lists of charges. “How could a chief
be so corrupt?” “How could the people remember so much?”
Though the Truk islanders carry the accumulated burden of time past on their shoulders, they show an
almost total inability to grasp the notion that two events can take place at the same time when they are any
distance apart. When the Japanese occupied Truk at the end of World War I they took Artie Moses, chief of the
island of Uman, to Tokyo. Artie was made to send a wireless message back to his people as a demonstration of
the wizardry of Japanese technology. His family refused to believe that he had sent it, that he had said anything
at all, though they knew he was in Tokyo. Places at a distance are very real to them, but people who are away
are very much away, and any interaction with them is unthinkable.
An entirely different handling of time is reported by the anthropologist Paul Bohannan for the Tiv, a
primitive people who live in Nigeria. Like the Navajo, they point to the sun to indicate a general time of day,
and they also observe the movement of the moon as it waxes and wanes. What is different is the way they use
and experience time. For the Tiv, time is like a capsule. There is time for visiting, for cooking, or for working;
and when one is in one of those times, one does not shift to another.
The Tiv equivalent of the week lasts five to seven days. It is not tied into periodic natural events, such as
the phases of the moon. The day of the week is named after the things which are being sold in the nearest
“market.” If we had the equivalent, Monday would be “automobiles” in Washington, D.C., “furniture” in
Baltimore, and “yard goods” in New York. Each of these might be followed by the days for appliances, liquor
and diamonds in the respective cities. This would mean that as you travelled about the day of the week would
keep changing, depending on where you were.
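To make the scheme concrete, here is a toy sketch in Python of the author's own hypothetical. The city-to-commodity pairings are his; the dictionary and function are illustrative inventions of this note, not anything drawn from Tiv ethnography.

    # Toy model of the hypothetical American version of the Tiv calendar:
    # each city names the day after the goods sold in its nearest market,
    # so the day's name changes as the traveller moves about.
    market_day = {
        "Washington, D.C.": "automobiles",
        "Baltimore": "furniture",
        "New York": "yard goods",
    }

    def name_of_monday(city):
        # Return what "Monday" would be called in the given city.
        return market_day.get(city, "unknown market")

    for city in market_day:
        print("In %s, Monday is '%s' day." % (city, name_of_monday(city)))

Travelling from Washington to New York on a single calendar day would thus change that day's name twice, which is exactly the point the passage makes.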
A requisite of our own temporal system is that the components must add up: Sixty seconds have to
equal one minute, sixty minutes one hour. The American is perplexed by people who do not do this. The
African specialist Henri Alexandre Junod, reporting on the Thonga, tells of a medicine man who had
memorized a seventy-year chronology and could detail the events of each and every year in sequence. Yet this
same man spoke of the period he had memorized as an “era” which he computed at “four months and eight
hundred years’ duration.” The usual reaction to this story and others like it is that the man was primitive, like a
child, and did not understand what he was saying, because how could seventy years possibly be the same as
eight hundred? As students of culture we can no longer dismiss other conceptualizations of reality by saying
that they are childlike. We must go much deeper. In the case of the Thonga, it seemed that a “chronology” is
one thing and an “era” something else quite different, and there is no relation between the two in operational
terms.
If these distinctions between European-American time and other conceptions of time seem to draw too
heavily on primitive peoples, let me mention two other examples - from cultures which are as civilized, if not as
industrialized, as our own. In comparing the United States with Iran and Afghanistan very great differences in
the handling of time appear. The American attitude toward appointments is an example. Once while in Tehran
I had the opportunity to observe some young Iranians making plans for a party. After plans were made to pick
up everyone at appointed times and places everything began to fall apart. People would leave messages that
they were unable to take so-and-so or were going somewhere else, knowing full well that the person who had
been given the message couldn’t possibly deliver it. One girl was left stranded on a street corner, and no one
seemed to be concerned about it. One of my informants explained that he himself had had many similar
experiences. Once he had made eleven appointments to meet a friend. Each time one of them had failed to
show up. The twelfth time they swore that they would both be there, that nothing would interfere. The friend
failed to arrive. After waiting for forty-five minutes my informant phoned his friend and found him still at
home. The following conversation is an approximation of what took place:
"Is that you, Abdul?” “Yes.” “Why aren’t you here? I thought we were to meet for sure.” “Oh, but it was
raining,” said Abdul with a sort of whining intonation that is very common in Farsi.
If present appointments are treated rather cavalierly, the past in Iran takes on a very great importance.
People look back on what they feel are the wonders of the past and the great ages of Persian culture. Yet the
future seems to have very little reality or certainty to it. Businessmen have been known to invest hundreds of
thousands of dollars in factories of various sorts without making the slightest plan as to how to use them. A
complete woollen mill was bought and shipped to Tehran before the buyer had raised enough money to erect
it, to buy supplies, or even to train personnel. When American teams of technicians came to help Iran’s
economy they constantly had to cope with what seemed to them to be an almost total lack of planning.
Moving east from Iran to Afghanistan, one gets farther afield from American time concepts. A few years
ago in Kabul a man appeared, looking for his brother. He asked all the merchants of the market place if they
had seen his brother and told him where he was staying in case his brother arrived and wanted to find him. The
next year he was back and repeated the performance. By this time one of the members of the American embassy
had heard about his inquiries and asked if he had found his brother. The man answered that he and his brother
had agreed to meet in Kabul, but neither of them had said what year.
Strange as some of these stories about the ways in which people handle time may seem, they become
understandable when correctly analysed. To do this adequately requires an adequate theory of culture. Before
we return to the subject of time again - in a much later chapter of this book - I hope that I will have provided
just such a theory. It will not only shed light on the way time is meshed with many other aspects of society but
will provide a key to unlock some of the secrets of the eloquent language of culture which speaks in so many
different ways.
(Edward T. Hall: The Silent Language published by Doubleday & Company, New York in 1959)
BIOLOGY
Evolution and Natural Selection
The idea of evolution was known to some of the Greek philosophers. By the time of Aristotle,
speculation had suggested that more perfect types had not only followed less perfect ones but actually had
developed from them. But all this was guessing; no real evidence was forthcoming. When, in modern times, the
idea of evolution was revived, it appeared in the writings of the philosophers - Bacon, Descartes, Leibniz and
Kant. Herbert Spencer was preaching a full evolutionary doctrine in the years just before Darwin's book was
published, while most naturalists would have none of it. Nevertheless a few biologists ran counter to the
prevailing view, and pointed to such facts as the essential unity of structure in all warm-blooded animals.
The first complete theory was that of Lamarck (1744-1829), who thought that modifications due to
environment, if constant and lasting, would be inherited and produce a new type. Though no evidence for such
inheritance was available, the theory gave a working hypothesis for naturalists to use, and many of the social
and philanthropic efforts of the nineteenth century were framed on the tacit assumption that acquired
improvements would be inherited.
But the man whose book gave both Darwin and Wallace the clue was the Reverend Robert Malthus
(1766-1834), sometime curate of Albury in Surrey. The English people were increasing rapidly, and Malthus
argued that the human race tends to outrun its means of subsistence unless the redundant individuals are
eliminated. This may not always be true, but Darwin writes:
In October 1838, I happened to read for amusement Malthus on Population, and being well prepared to
appreciate the struggle for existence which everywhere goes on, from long continued observation of the habits
of animals and plants, it at once struck me that, under these circumstances, favourable variations would tend to
be preserved, and unfavourable ones to be destroyed. The result of this would be the formation of new species.
Here then I had a theory by which to work.
Darwin spent twenty years collecting countless facts and making experiments on breeding and
variation in plants and animals. By 1844 he had convinced himself that species are not immutable, but worked
on to get further evidence. On 18 June 1858 he received from Alfred Russel Wallace a paper written in Ternate,
in the space of three days after reading Malthus's book. Darwin saw at once that Wallace had hit upon the
essence of his own theory. Lyell and Hooker arranged with the Linnaean Society to read on July 1st 1858
Wallace's paper together with a letter from Darwin and an abstract of his theory written in 1844. Then Darwin
wrote out an account of his labours, and on 24th November 1859 published his great book The Origin of Species.
In any race of plants or animals, the individuals differ from each other in innate qualities. Darwin
offered no explanation of these variations, but merely accepted their existence. When the pressure of numbers
or the competition for mates is great, any variation in structure which is of use in the struggle has 'survival
value', and gives its possessor an improved chance of prolonging life and leaving offspring. That variation
therefore tends to spread through the race by the elimination of those who do not possess it, and a new variety
or even species may be established. As Huxley said, this idea was wholly unknown till 1858. To him the
book was like a flash of lightning in the darkness. He wrote:
It did the immense service of freeing us from the dilemma - Refuse to accept the Creation hypothesis,
and what have you to propose that can be accepted by any cautious reasoner? In 1857 I had no answer ready,
and I do not think anyone else had. A year later we reproached ourselves with dullness for being perplexed
with such an enquiry. My reflection when I first made myself master of the central idea of the Origin was 'How
extremely stupid not to have thought of that!'
The hypothesis of natural selection may not be a complete explanation, but it led to a greater thing than
itself - an acceptance of the theory of organic evolution, which the years have but confirmed. Yet at first some
naturalists joined the opposition. To the many, who were unable to judge the biological evidence, the effect of
the theory of evolution seemed incredible as well as devastating, to run counter to common sense and to
overwhelm all philosophic and religious landmarks. Even the educated man, choosing between the Book of
Genesis and the Origin of Species, proclaimed with Disraeli that he was 'on the side of the Angels'.
Darwin himself took a modest view. While thinking that natural selection was the chief cause of
evolution, he did not exclude Lamarck's idea that characters acquired by long use or disuse might be inherited,
though no evidence seemed to be forthcoming. But about 1890 Weismann drew a sharp distinction between the
body (or soma) and the germ cells which it contains. Somatic cells can only reproduce cells like themselves, but
germ cells give rise not only to the germ cells of a new individual but to all the many types of cell in his body.
Germ cells descend from germ cells in a pure line of germ plasm, but somatic cells trace their origin to germ
cells. From this point of view, the body of each individual is an unimportant by-product of his parents' germ
cells. The body dies, leaving no offspring, but the germ plasms show an unbroken continuity. The products of
the germ cells are not likely to be affected by changes in the body. So Weismann's doctrine offered an
explanation of the apparent noninheritance of acquired characters.
The supporters of pure Darwinism came to regard the minute variations as enough to explain natural
selection and natural selection enough to explain evolution. But animal breeders and horticulturalists knew that
sudden large mutations occur, especially after crossing, and that new varieties might be established at once.
Then in 1900 forgotten work by Mendel was rediscovered and a new chapter opened.
In 1869 Darwin's cousin, Francis Galton, applied these principles to mental qualities. By searching books
of reference, Galton examined the inheritance of ability. For instance, he found that the chance of the son of a
judge showing great ability was about 500 times as high as that of a man taken at random, and for the judge's
father it was nearly as much. While no prediction can be made about individuals, on the average of large
numbers, the inheritance of ability is certain.
(From Chapter VIII of A Shorter History of Science by Sir W. C. Dampier.)
Banting and the Discovery of Insulin
While at the Medical School, Banting went into the library and looked at the November issue of Surgery,
Gynaecology and Obstetrics. The first article was entitled 'The relation of the Islets of Langerhans to Diabetes' by
Dr. Moses Barron of Minneapolis. Banting had to talk to his students next day on the functions of the pancreas,
so he took the journal home with him.
One paragraph in Barron's review of previous literature on the subject referred to the experiments on
tying the pancreatic ducts of rabbits made by Arnozen and Vaillard thirty-six years earlier. Banting had not
heard of these experiments before, but he knew that attempts to treat diabetes with extracts of the pancreas had
failed; and he wondered why.
A possible answer that occurred to him was that the hormone from the islets of Langerhans was
destroyed during the extraction of the pancreas. The question then was what might destroy it; and his thoughts
turned to the digestive ferment that the pancreas produced. He knew this was very powerful, so powerful that
it could break up and dissolve all sorts of protein foods including the toughest meats. Perhaps, during the
process of extraction, this ferment destroyed the vital hormone.
If that were so, Banting reasoned, the extraction ought to be delayed until the pancreas was no longer
producing this ferment. According to the experiments of Arnozen and Vaillard, this condition could be reached
by tying the pancreatic ducts. It was two o'clock in the morning of October 31, 1920, when he wrote in his small
black notebook: 'Tie off pancreas ducts of dogs. Wait six or eight weeks. Remove and extract.'
Although he did not know it, this was much the same idea that had come to Lydia de Witt fourteen
years earlier. But it was not for the idea alone that Banting deserves to be remembered; his greatness lay in the
way he put it into practice. He had to wait until the spring of 1921 before he could start work, and he filled in
the time by reading all the literature on the subject he could find. He still missed Lydia de Witt's work. At last
he was given his laboratory - small, hot and rather primitive - and his ten dogs. His assistant, Charles Best, was a
recent graduate in physiology and biochemistry who had been working under Macleod on sugars. They began
work on May 16.
Banting began by tying off the pancreatic ducts of a number of dogs, which was quite easy. Then he had
to remove the pancreas from other dogs to give them diabetes. The operation was not easy, and Banting's
training and ability as a surgeon proved invaluable. Even so, several dogs died before he evolved a suitable
technique.
On July 6 Banting and Best chloroformed two of the dogs whose pancreatic ducts they had tied seven
weeks earlier, and were disappointed to find that the pancreas had not degenerated as they had hoped. They
had not tied the ducts with the correct degree of tension needed - the margin of error was very small. And they
had only one week left to complete their work. Macleod was away in Europe, but an extension was granted by
the authorities, and the experiment was continued. On July 27 another duct-tied dog was chloroformed, and
when Banting operated he found that the pancreas had shrivelled to about one-third of its original size. It was
removed, chopped into pieces and mixed with saline; and a small amount of a filtered extract was injected into
one of the diabetic dogs. Within two hours its blood sugar had fallen considerably, and before long the dog
became conscious, rose, and wagged its tail.
The effect of the injection was so dramatic that Banting and Best could hardly believe it; but further
experiments made them sure that they had indeed found what they were looking for. They had succeeded in
extracting the anti-diabetic hormone secreted by the islets of Langerhans. They called it 'isletin'. It was some
time later that Macleod renamed it insulin, a word that had been suggested in 1910. Insulin did not cure
diabetes. After a while the dog relapsed, and further injections were needed to revive it again. But with regular
injections of insulin a dog with diabetes could live.
Banting and Best next succeeded in obtaining insulin by injecting secretin to stimulate the production of
the digestive ferment from the pancreas and exhaust the cells from which it came. This was a much quicker
method than tying the ducts and waiting several weeks; and although the practical results were disappointing,
its importance to the theory was considerable.
So far insulin had been extracted only in sufficient quantity for laboratory work, and already Banting
and Best were seeking means of getting larger supplies. They now obtained insulin from the pancreas of a
foetal calf - that is, a calf that had not yet been born. Nature, ever practical, does not supply digestive ferments
until a calf starts eating, so there was nothing to destroy the insulin during extraction. This new success enabled
Banting and Best to keep up an adequate supply of insulin for more extensive experiments. At the same time
they realized that if their work was to have practical results in medical treatment it would be necessary to get
much larger supplies. And they could only come from adult cattle in the slaughterhouse. The problem was to
find a means of extracting the insulin from the pancreas of an ordinary adult animal.
The problem was solved well enough to provide insulin for the first injections on human beings. Two
patients in Toronto General Hospital were chosen - a fourteen-year-old boy and a doctor, both very seriously ill
with diabetes: 'hopeless' cases. When treated with insulin - although still in a relatively impure form - they
improved at once. The boy is alive and well to-day.
'Research in medicine is specialized,' Banting said later, 'and as in all organized walks of life, a division
of labour is necessary. In consequence, a division of labour in the field of insulin took place.' Professor J. B.
Collip, a biochemist, was called in to produce a purer extract. He succeeded very quickly; and other workers
made it possible to obtain insulin on a really large scale. Before very long insulin injections became the standard
treatment for diabetes all over the world. They still are today.
Banting, only thirty-one years old, was suddenly famous. Although for some extraordinary reason he
was not knighted until 1934, he was awarded the Nobel Prize for Medicine in 1923, jointly with Macleod.
(From Chapter VIII of Great Discoveries in Modern Science by Patrick Pringle.)
The Galapagos Islands
After Tahiti the Galapagos were the most famous of all the tropical islands in the Pacific. They had been
discovered in 1535 by Fray Tomas de Berlanga, Bishop of Panama, and were now owned by Ecuador, 500 odd
miles away. Already in the 1830s some sixty or seventy whalers, mostly American, called there every year for
'refreshments'. They replenished their water tanks from the springs, they captured tortoises for meat (galapagos
is the Spanish word for giant tortoises), and they called for mail at Post Office Bay where a box was set up on
the beach. Every whaling captain took from it any letters which he thought he might be able to forward.
Herman Melville called in at the Galapagos aboard the Acushnet not long after the Beagle's visit, and the
'blighted Encantadas' are a part of the saga of the white whale. 'Little but reptile life is here found', wrote
Melville, 'the chief sound of life is a hiss'.
Apart from their practical uses there was nothing much to recommend the Galapagos; they were not
lush and beautiful islands like the Tahiti group, they were (and still are) far off the usual maritime routes,
circled by capricious currents, and nobody lived in them then except for a handful of political prisoners who
had been stranded there by the Ecuador government. The fame of the islands was founded upon one thing;
they were infinitely strange, unlike any other islands in the world. No one who went there ever forgot them.
For the Beagle this was just another port of call in a very long voyage, but for Darwin it was much more than
that, for it was here, in the most unexpected way - just as a man might have a sudden inspiration while he is
travelling in a car or a train - that he began to form a coherent view of the evolution of life on this planet. To put
it into his own words: 'Here, both in space and time, we seem to be brought somewhat near to that great fact -
that mystery of mysteries - the first appearance of new beings on this earth'.
The Beagle cruised for just over a month in the Galapagos, and whenever they reached an interesting
point FitzRoy dropped off a boatload of men to explore. On Narborough Island the turtles were coming in at
night to lay their eggs in the sand, thousands of them; they laid six eggs in each hole. On Charles Island there
was a penal settlement of two hundred convicts, who cultivated sugar-cane, bananas and corn on the high
ground. But the group that concerns us is the one that was put ashore on James Island. Here Darwin,
Covington, Bynoe and two sailors were landed with a tent and provisions, and FitzRoy promised to come back
and pick them up at the end of a week. Darwin visited other islands as well, but they did not differ very much
from James Island, and so we can conveniently group all his experiences into this one extraordinary week. They
set up their tent on the beach, laid out their bedding and their stores, and then began to look around them.
The marine lizards, on closer inspection, turned out to be miniature dragons, several feet in length, and
they had great gaping mouths with pouches under them and long flat tails; 'imps of darkness', Darwin called
them. They swarmed in thousands; everywhere Darwin went they scuttled away before him, and they were
even blacker than the forbidding black rocks on which they lived. Everything about these iguanas was odd.
They never went more than ten yards inland; either they sunned themselves on the shore or dived into the sea
where at once they became expert swimmers, holding their webbed feet close to their sides and propelling
themselves along with strong swift strokes of their tails. Through the clear water one could see them cruising
close to the bottom, and they could stay submerged for a very long time; a sailor threw one into the sea with a
heavy weight attached to it, and when he fished it up an hour later it was still alive and kicking. They fed on
seaweed, a fact that Darwin and Bynoe ascertained when with Bynoe's surgical instruments they opened one
up and examined the contents of its stomach. And yet, like some sailors, these marine beasts hated the sea.
Darwin took one by the tail and hurled it into a big pool that had been left in the rocks by the ebb-tide. At once
it swam back to the land. Again Darwin caught it and threw it back, and again it returned. No matter what he
did the animal simply would not stay in the sea, and Darwin was forced to conclude that it feared the sharks
there and instinctively, when threatened by anything, came ashore where it had no enemies. Their breeding
season was November, when they put on their courting colours and surrounded themselves with their harems.
The other creatures on the coast were also strange in different ways; flightless cormorants, penguins
and seals, both cold-sea creatures, unpredictably living here in these tropical waters, and a scarlet crab that
scuttled over the lizards' backs, hunting for ticks. Walking inland with Covington, Darwin arrived among some
scattered cactuses, and here two enormous tortoises were feeding. They were quite deaf and did not notice the
two men until they had drawn level with their eyes. Then they hissed loudly and drew in their heads. These
animals were so big and heavy that it was impossible to lift them or even turn them over on their sides - Darwin
and Covington tried - and they could easily bear the weight of a man. Darwin got aboard and found it a very
wobbly seat, but he in no way impeded the tortoise's progress; he calculated that it managed 60 yards in ten
minutes, or 360 yards an hour, which would be roughly four miles a day - 'allowing a little time for it to eat on
the road'.
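Darwin's tortoise arithmetic can be checked directly. The short Python sketch below reproduces his figures; the conclusion about a near twenty-hour walking day is an inference from those figures, not a statement in the passage.

    # Darwin's figures: 60 yards in ten minutes, 'roughly four miles a day'.
    YARDS_PER_MILE = 1760

    yards_per_hour = 60 * 6                  # 360 yards an hour, as he says
    yards_per_day = 4 * YARDS_PER_MILE       # four miles = 7,040 yards
    hours_walking = yards_per_day / yards_per_hour

    print(yards_per_hour)                    # 360
    print(round(hours_walking, 1))           # 19.6

At 360 yards an hour, four miles takes about 19.6 hours, so the estimate assumes the tortoise walks almost round the clock, stopping only 'a little time' to eat on the road.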
On the Origin of Species
INTRODUCTION
When on board H.M.S. Beagle as naturalist, I was much struck with certain facts in the distribution of
the organic beings inhabiting South America, and in the geological relations of the present to the past
inhabitants of that continent. These facts, as will be seen in the latter chapters of this volume, seemed to throw
some light on the origin of species - that mystery of mysteries, as it has been called by one of our greatest
philosophers. On my return home, it occurred to me, in 1837, that something might perhaps be made out on
this question by patiently accumulating and reflecting on all sorts of facts which could possibly have any
bearing on it. After five years' work I allowed myself to speculate on the subject, and drew up some short notes;
these I enlarged in 1844 into a sketch of the conclusions, which then seemed to me probable: from that period to
the present day I have steadily pursued the same object. I hope that I may be excused for entering on these
personal details, as I give them to show that I have not been hasty in coming to a decision.
My work is now (1859) nearly finished; but as it will take me many more years to complete it, and as my
health is far from strong, I have been urged to publish this abstract. I have more especially been induced to do
this, as Mr. Wallace, who is now studying the natural history of the Malay Archipelago, has arrived at almost
exactly the same general conclusions that I have on the origin of species. In 1858 he sent me a memoir on this
subject, with a request that I would forward it to Sir Charles Lyell, who sent it to the Linnean Society, and it is
published in the third volume of the Journal of that society. Sir C. Lyell and Dr. Hooker, who both knew of my
work - the latter having read my sketch of 1844 - honoured me by thinking it advisable to publish, with Mr.
Wallace's excellent memoir, some brief extracts from my manuscripts.
This abstract, which I now publish, must necessarily be imperfect. I cannot here give references and
authorities for my several statements; and I must trust to the reader reposing some confidence in my accuracy.
No doubt errors will have crept in, though I hope I have always been cautious in trusting to good authorities
alone. I can here give only the general conclusions at which I have arrived, with a few facts in illustration, but
which, I hope, in most cases will suffice. No one can feel more sensible than I do of the necessity of hereafter
publishing in detail all the facts, with references, on which my conclusions have been grounded; and I hope in a
future work to do this. For I am well aware that scarcely a single point is discussed in this volume on which
facts cannot be adduced, often apparently leading to conclusions directly opposite to those at which I have
arrived. A fair result can be obtained only by fully stating and balancing the facts and arguments on both sides
of each question; and this is here impossible.
I much regret that want of space prevents my having the satisfaction of acknowledging the generous
assistance which I have received from very many naturalists, some of them personally unknown to me. I
cannot, however, let this opportunity pass without expressing my deep obligations to Dr. Hooker, who, for the
last fifteen years, has aided me in every possible way by his large stores of knowledge and his excellent
judgment.
In considering the Origin of Species, it is quite conceivable that a naturalist, reflecting on the mutual
affinities of organic beings, on their embryological relations, their geographical distribution, geological
succession, and other such facts, might come to the conclusion that species had not been independently created,
but had descended, like varieties, from other species. Nevertheless, such a conclusion, even if well founded,
would be unsatisfactory, until it could be shown how the innumerable species inhabiting this world have been
modified, so as to acquire that perfection of structure and coadaptation which justly excites our admiration.
Naturalists continually refer to external conditions, such as climate, food, &c., as the only possible cause of
variation. In one limited sense, as we shall hereafter see, this may be true; but it is preposterous to attribute to
mere external conditions, the structure, for instance, of the woodpecker, with its feet, tail, beak, and tongue, so
admirably adapted to catch insects under the bark of trees. In the case of the mistletoe, which draws its
nourishment from certain trees, which has seeds that must be transported by certain birds, and which has
flowers with separate sexes absolutely requiring the agency of certain insects to bring pollen from one flower to
the other, it is equally preposterous to account for the structure of this parasite, with its relations to several
distinct organic beings, by the effects of external conditions, or of habit, or of the volition of the plant itself.
It is, therefore, of the highest importance to gain a clear insight into the means of modification and
coadaptation. At the commencement of my observations it seemed to me probable that a careful study of
domesticated animals and of cultivated plants would offer the best chance of making out this obscure problem.
Nor have I been disappointed; in this and in all other perplexing cases I have invariably found that our
knowledge, imperfect though it be, of variation under domestication, afforded the best and safest clue. I may
venture to express my conviction of the high value of such studies, although they have been very commonly
neglected by naturalists.
From these considerations, I shall devote the first chapter of this Abstract to Variation under
Domestication. We shall thus see that a large amount of hereditary modification is at least possible; and, what is
equally or more important, we shall see how great is the power of man in accumulating by his Selection
successive slight variations. I will then pass on to the variability of species in a state of nature; but I shall,
unfortunately, be compelled to treat this subject far too briefly, as it can be treated properly only by giving long
catalogues of facts. We shall, however, be enabled to discuss what circumstances are most favourable to
variation. In the next chapter the Struggle for Existence amongst all organic beings throughout the world,
which inevitably follows from the high geometrical ratio of their increase, will be considered. This is the
doctrine of Malthus, applied to the whole animal and vegetable kingdoms. As many more individuals of each
species are born than can possibly survive; and as, consequently, there is a frequently recurring struggle for
existence, it follows that any being, if it vary however slightly in any manner profitable to itself, under the
complex and sometimes varying conditions of life, will have a better chance of surviving, and thus be naturally
selected. From the strong principle of inheritance, any selected variety will tend to propagate its new and
modified form.
This fundamental subject of Natural Selection will be treated at some length in the fourth chapter; and
we shall then see how Natural Selection almost inevitably causes much Extinction of the less improved forms of
life, and leads to what I have called Divergence of Character. In the next chapter I shall discuss the complex and
little known laws of variation. In the five succeeding chapters, the most apparent and gravest difficulties in
accepting the theory will be given: namely, first, the difficulties of transitions, or how a simple being or a simple
organ can be changed and perfected into a highly developed being or into an elaborately constructed organ;
secondly, the subject of Instinct, or the mental powers of animals; thirdly, Hybridism, or the infertility of
species and the fertility of varieties when intercrossed; and fourthly, the imperfection of the Geological Record.
In the next chapter I shall consider the geological succession of organic beings throughout time; in the twelfth
and thirteenth, their geographical distribution throughout space; in the fourteenth, their classification or mutual
affinities, both when mature and in an embryonic condition. In the last chapter I shall give a brief recapitulation
of the whole work, and a few concluding remarks.
No one ought to feel surprise at much remaining as yet unexplained in regard to the origin of species
and varieties, if he make due allowance for our profound ignorance in regard to the mutual relations of the
many beings which live around us. Who can explain why one species ranges widely and is very numerous, and
why another allied species has a narrow range and is rare? Yet these relations are of the highest importance, for
they determine the present welfare and, as I believe, the future success and modification of every inhabitant of
this world. Still less do we know of the mutual relations of the innumerable inhabitants of the world during the
many past geological epochs in its history. Although much remains obscure, and will long remain obscure, I
can entertain no doubt, after the most deliberate study and dispassionate judgment of which I am capable, that
the view which most naturalists until recently entertained, and which I formerly entertained - namely, that each
species has been independently created - is erroneous. I am fully convinced that species are not immutable; but
that those belonging to what are called the same genera are lineal descendants of some other and generally
extinct species, in the same manner as the acknowledged varieties of any one species are the descendants of
that species. Furthermore, I am convinced that Natural Selection has been the most important, but not the
exclusive, means of modification.
(Introduction to On the Origin of Species by Charles Darwin, 1859)
BUSINESS
Brands Up
New Labour has proved a marketable package, but it may be that Tony Blair and his cabinet colleagues
should now go the whole hog and reinvent themselves as individual brands. A survey recently found that
consumers consider Heineken more trustworthy than the Prime Minister, and brands like BT and BMW are
better known than Gordon Brown and Jack Straw.
You know where you are with a brand. Invented in the 19th century to reassure consumers they were
getting the real McCoy, brands have long been the way shoppers navigate in a sea of unknowns. They are
beacons of consistency, badges of style. You know what you're getting when you buy Marmite, Tennents lager
or PG Tips. Or do you? Most of our best-known brands have become baubles for multinationals. Brands are
revenue streams, assets which are made to sweat and sold between international corporations for millions.
Marmite may have been a national institution since 1902 when it was invented as a product of the brewing
industry in Burton on Trent. But it's now owned by the New Jersey based American giant, CPC International -
as are Bovril, Pot Noodles and Hellman's Mayonnaise. Lea & Perrins is part of the French company, Danone,
and PG Tips isn't owned by Brooke Bond; it's part of the gigantic portfolio of margarine baron Unilever.
You think brands deliver consistency? Think again. Persil is owned by Unilever in Britain and by
Henkel in Germany. Persil is Omo in Spain and the Netherlands and Skip in France and Greece. Flora is Flora
in the UK, but it's Becel in France and the Netherlands, and it's Rama in Germany and Russia.
Names are misleading. Customers of the posh sounding Jeeves of Belgravia may be dismayed to know
it used to be owned by the down market chain Sketchley, until they sold it to a German shoe company called
Mr Minute. The QE2 sounds quintessentially British, but book a cruise and you'll be swelling the coffers of
Norwegian shipbuilders Kvaerner.
It's somehow equally disappointing to learn that the trendy new chain All Bar One is owned by boring
old Bass. All Bar One may be decently designed, but so it should be. Bass has a long way to go to make amends
for inflicting on the high street those fake Irish bars and eyesores, O'Neills. Don't, however, think you can get
away from Bass by drinking Grolsch, Carling, Hoopers Hooch, Caffreys or Britvic Soft Drinks - Bass owns the
lot. And if you round off your evening by staying in the Holiday Inn, you'll bump up its profits nicely.
We like to think brands mean something. Consumers don't buy products; they make style statements
and are often prepared to pay a premium for the privilege. “The amount of reliance placed on a brand is quite
high, but it's not very well justified,” says Robert East. You certainly may not have planned to benefit Barbie
makers, Mattel, when you bought Scrabble, nor profit the highly secretive Procter and Gamble when you
popped a Pringle or washed your hair with Vidal Sassoon. You may have known P&G owned Ariel, Tampax
and Pampers, but did you know it also owns Sunny Delight orange drink, Crest, Clearasil, Pantene Pro V,
Cover Girl, Max Factor and Hugo Boss?
You might think Virgin at least delivers on its promise. As brands go, it has more reason than most to
lay claim to certain values. It's inextricably associated with the founder, Richard Branson. Yet even Virgin is not
what it seems. In February, The Economist did an audit of the empire: Virgin owns less than 50 per cent of
Virgin Direct, Virgin Cola, Virgin Spirits, Virgin Cinema, Virgin Vie and the Virgin Clothing Company. Hardly
like a Virgin at all.
Does it matter? Should we worry about who owns what? “No,” says Nicholas Morgan, marketing
director for premium malt whiskies at United Distillers (owned by Diageo). “People buy a bottle of Bells or
Johnnie Walker. They don't think about United Distillers and we don't want them to. Information about the
company gets in the way. It's not good for anyone.”
Consultants Interbrand agree - but so they would. Inventors of the brand names Hobnobs and Mondeo,
Interbrand were the first people to say you can put a price on a brand, and write it on the balance sheet. “Most
consumers are not that bothered about who owns the brand, providing they get the service they want,” says its
brand evaluation director, Alex Batchelor. “It's getting the product right that matters. When ownership changes,
no one gives a stuff.”
But shoppers may care much more than he would like to think. Angry consumers have in the past
inflicted major boycotts on the products of Barclays, Shell and Nestlé when they didn't like what the companies
were up to. Batchebr at Interbrand and Morgan at United Distillers say no one is being misled - it's very easy to
find out who owns what. But not from the product it's not. Bylaw, all products need to have is an address you
can write to. There's no need to put any-thing about the parent company. So there's no way of knowing, when
you're buying a product of, say, Kraft Jacobs Suchard, that its parent company is the tobacco giant Philip
Morris. There's even less chance of knowing that the Chinese government owns part of Midland Bank and First
Direct (because the Hong Kong government bought an 8.9 per cent stake in the banks' parent company, HSBC).
Nor is it easy to unravel what ownership means at Rolls Royce after BMW bought the marque and VW
bought the factory. Ask either company exactly what is going on and both refer you to Germany.
Not caring about ownership is also not an argument you could get past a Manchester United supporter.
Man U is the sports brand par excellence. The All Blacks are one of many clubs on record as saying it's the brand
on which they want to model themselves. It's the best-known sports brand in the world - which is precisely
why Rupert Murdoch would like to buy it, much to the fans' dismay.
Andy Walsh, spokesman for the Manchester United Independent Supporters Association, will fight a
change of ownership tooth and nail. “We'd lose all our independence. It would no longer be Man U. A football
club generates a feeling of family, rather than a business, and in terms of the emotional attachment, who owns
it matters very much.”
There are other reasons why ownership matters. “Ownership is important in terms of public policy and
accountability,” says Robert East, professor of Marketing at Kingston University Business School. It's where
responsibility lies. Excluding own brands and fruit and veg, most supermarket products are made by just three
companies (four if you include booze). They are Unilever, P&G, Nestlé and Diageo. It's a stranglehold even the
supermarkets seem unaware of.
“Every week, we used to send a salesman to the supermarkets from each of our four main companies:
Van Den Berg Foods, Bird's Eye Walls, Lever Bros, and Elida Fabergé,” says a spokesman for Unilever. “The
supermarkets often had no idea they were all part of the same company.”
Ownership certainly matters to the companies. This year Guinness (which already owned United
Distillers), merged with Grand Metropolitan. The newly formed group came up with the unlovely name
Diageo. The regulators made it sell Dewar's whisky and Bombay gin. The two brands cost Bacardi-Martini
£1.15 billion.
Brands are the life-blood of companies. You can buy a familiar but floundering brand, as Unilever did
with Colman's mustard, re-market it, then through brand extension flog the brand to death. On the back of the
mustard (made in Norwich for seven generations), Unilever has now launched Colman's dry sauces, and
Colman's and Oxo condiments.
But Unilever is beginning to change its brand strategy. “We think people want to know who is
controlling what, and who's behind the things they buy,” says a spokesman, Stephen Milton. “They don't want
some faceless conglomerate, and we think it's a trend that will continue.” You can therefore expect to hear more
about Unilever - which is just as well, as you're likely to have plenty of its products in your home.
It's a high-risk strategy. Persil Power, accused of rotting your clothes, probably caused less of a hiccup
to Unilever's share price than New Coke did to Coca-Cola's. Being known to all your customers -
government/regulators, shareholders, trade and consumers - by the same name means you have to take great
care of it. A blip in one area, and the whole thing crashes down - as Virgin may find now it has put its name on
trains.
But it seems that the rewards can be greater. Whichever way they foster their brands, Nestlé, Unilever,
P&G, Diageo and the rest have a long way to go. Every time someone does a survey, the super-brands that
come out on top are the one-product or one-category companies, known by the names under which they trade.
You might not like them a great deal, but with Coca-Cola, McDonald's, Sony and Microsoft you do at least
know where you stand.
(From The Guardian November 5th 1998)
How to be a great manager
At the most general level, successful managers tend to have four characteristics:
 they take enormous pleasure and pride in the growth of their people;
 they are basically cheerful optimists - someone has to keep up morale when setbacks
occur;
 they don't promise more than they can deliver;
 when they move on from a job, they always leave the situation a little better than it was
when they arrived.
The following is a list of some essential tasks at which a manager must excel to be truly effective.
Great managers accept blame: When the big wheel from head office visits and expresses displeasure, the
great manager immediately accepts full responsibility. In everyday working life, the best managers are
constantly aware that they selected and should have developed their people. Errors made by team members are
in a very real sense their responsibility.
Great managers give praise: Praise is probably the most under-used management tool. Great managers are
forever trying to catch their people doing something right, and congratulating them on it. And when praise
comes from outside, they are swift not merely to publicise the fact, but to make clear who has earned it.
Managers who regularly give praise are in a much stronger position to criticise or reprimand poor performance.
If you simply comment when you are dissatisfied with performance, it is all too common for your words to be
taken as a straightforward expression of personal dislike.
Great managers make blue sky: Very few people are comfortable with the idea that they will be doing
exactly what they are doing today in 10 years' time. Great managers anticipate people's dissatisfaction.
Great managers put themselves about: Most managers now accept the need to find out not merely what
their team is thinking, but what the rest of the world, including their customers, is saying. So MBWA
(management by walking about) is an excellent thing, though it has to be distinguished from MBWAWP
(management by walking about - without purpose), where senior management wander aimlessly, annoying
customers, worrying staff and generally making a nuisance of themselves.
Great managers judge on merit: A great deal more difficult than it sounds. It's virtually impossible to
divorce your feelings about someone - whether you like or dislike them - from how you view their actions. But
suspicions of discrimination or favouritism are fatal to the smooth running of any team, so the great manager
accepts this as an aspect of the game that really needs to be worked on.
Great managers exploit strengths, not weaknesses, in themselves and in their people: Weak managers feel
threatened by other people's strengths. They also revel in the discovery of weakness and regard it as something
to be exploited rather than remedied. Great managers have no truck with this destructive thinking. They see
strengths, in themselves as well as in other people, as things to be built on, and weakness as something to be
accommodated, worked around and, if possible, eliminated.
Great managers make things happen: The old-fashioned approach to management was rather like the old-
fashioned approach to child-rearing: 'Go and see what the children are doing and tell them to stop it!' Great
managers have confidence that their people will be working in their interests and do everything they can to
create an environment in which people feel free to express themselves.
Great managers make themselves redundant: Not as drastic as it sounds! What great managers do is learn
new skills and acquire useful information from the outside world, and then immediately pass them on, to
ensure that if they were to be run down by a bus, the team would still have the benefit of the new information.
No one in an organisation should be doing work that could be accomplished equally effectively by someone
less well paid than themselves. So great managers are perpetually on the look-out for higher-level activities to
occupy their own time, while constantly passing on tasks that they have already mastered.
(From The Independent )
Derivatives - The Beauty
Financial markets have grown more volatile since exchange rates were freed in 1973. Interest rates and
exchange rates now fluctuate more rapidly than at any time since the Crash of 1929. At the same time,
companies' profit margins have been squeezed by the lowering of trade barriers and increased international
competition. The result is that companies worldwide have been forced to come to terms with their financial
risks. No longer can managers stick their heads in the sand and pretend that because their firms make cars, or
sell soap powders, they need only worry about this year's convertible or whether their new formula washes
whiter than Brand X. As many have found to their cost, ignoring interest-rate, currency or commodity risks can
hurt a company just as badly as the failure of a new product.
Derivatives offer companies the chance to reduce their financial risks - chiefly by transferring them to
someone (usually a bank) who is willing to assume and manage them. As they realize this, more and more
companies are using derivatives to hedge their exposures. America's General Accounting Office reported that
between 1989 and 1992 derivative volumes grew 145% to $12.1 trillion (in terms of the notional amount
represented). This does not include about $5.5 trillion of foreign-exchange forwards. Interest-rate risk was the
main risk hedged - at the end of 1992, interest-rate contracts accounted for 62% of total notionals, compared
with 37% for foreign exchange.
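A little arithmetic unpacks these GAO figures. The Python sketch below assumes, as is conventional, that "grew 145%" means growth relative to the 1989 base; the implied 1989 total is a derived figure, not one quoted in the passage.

    # End-1992 notional amount, excluding foreign-exchange forwards:
    notional_1992 = 12.1e12        # dollars
    growth = 1.45                  # 145% growth between 1989 and 1992

    implied_1989 = notional_1992 / (1 + growth)
    print(round(implied_1989 / 1e12, 1))          # ~4.9 trillion dollars in 1989

    # The end-1992 split of total notionals quoted in the passage:
    print(round(0.62 * notional_1992 / 1e12, 1))  # interest-rate contracts: ~7.5 trillion
    print(round(0.37 * notional_1992 / 1e12, 1))  # foreign exchange: ~4.5 trillion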
In the US companies can now be sued for not hedging their exposures. In 1992, the Indiana Court of
Appeal held that the directors of a grain elevator co-operative had breached their fiduciary duty by failing to
sell forward the co-op's grain to hedge against a drop in prices. Since 90% of the co-operative's operating
income came from grain sales, its shareholders argued that it was only prudent for the directors to have
protected the co-op from the huge losses it suffered (Brane v Roth, Indiana Court of Appeal). In another case,
shareholders sued Compaq Computers for violating securities laws by failing to disclose that it lacked adequate
mechanisms to hedge foreign-exchange risks.
Hedging does not necessarily remove all of a company's financial risk. When a firm hedges a financial
exposure, it is protecting itself against adverse market moves. If the markets move in what would normally be
the company's favour, the hedger could find itself in a position that combined the worst of both hedged and
unhedged worlds. For many firms, though, this is a worthwhile price to pay for ensuring stability or certainty
for some of their cashflows.
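The trade-off described in this last paragraph can be made concrete with a little arithmetic. The sketch below is illustrative only - the forward price, quantity and spot prices are invented, not taken from the passage - but it shows how a simple forward hedge locks in revenue in both the adverse and the favourable case:

```python
# Hypothetical sketch of the hedging trade-off: a firm that has sold its
# output forward receives the same revenue whichever way the market moves.
# All figures are invented for illustration.

forward_price = 100.0   # price locked in today per unit (assumed)
quantity = 1_000        # units to be sold at a future date (assumed)

for spot_at_delivery in (80.0, 120.0):   # adverse and favourable market moves
    unhedged = spot_at_delivery * quantity
    hedged = forward_price * quantity    # the forward fixes revenue regardless of spot
    print(f"spot {spot_at_delivery:6.1f}: unhedged {unhedged:10,.0f}, hedged {hedged:10,.0f}")

# The hedge protects against the fall to 80 but also forgoes the gain when
# the market rises to 120 - the 'worthwhile price' for certainty of cashflow.
```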
(From Managing derivative risks by Lillian Chew)

Motives
There is a multitude of psychological theories about what motivates man. Is the force inside man,
outside man, conditioned or not conditioned, goal directed or not goal directed? These are all very controversial
issues in academic research into what gets people to want to work. Most people in organizations are not
concerned with academic controversies and rely on their commonsense view of behaviour.
The simplest motivation theory suggests that man is motivated by a series of needs or motives. This
theory argues that some of the motives are inherited and some learnt: that some are primarily physiological,
while others are primarily psychological. Other theories dispute this account: at one extreme the behaviourists deny the existence of needs or motives, arguing that behaviour is a series of learned responses to stimuli, while at the other extreme systems theorists talk about all systems - individuals, groups, and organizations - having needs.
Motivation can be either a conscious or an unconscious process: the allocation of time and energy to
work in return for rewards. Both internal and external stimuli lead to action. Internalized values, hopes,
expectations, and goals affect the decision process of the individual, and thereby affect the resultant behaviour.
Motivation is not an 'engine' built inside an individual - as so many training managers believe. It is the
individual responding to a whole range of experiences, and responding as a totality, not as 'a need'. If we are
threatened by physical force, the stimulus for activity is external. If the hormone secretions in our bodies
operate effectively then we will wish to behave in physically satisfying ways. In both examples, some of the
force is inside the individual, while some of the stimulus is external. How the individual will respond, how
much energy he will expend, and how important are the consequences (rewards) are all factors which moderate
his motivation.
There have been many attempts to classify personal moderators in the decision process. The most
popular construct is the need, and categories of needs (e.g., body needs, safety needs, social needs, achievement
needs) dominate the literature. Goal categories, remarkably like need categories, are also popular (e.g., money,
status, power, friendship). Satisfaction theories are a variation of goal theories, but have produced even more controversial classifications (e.g., implicit and explicit rewards).
There is no space here to go into what is primarily an academic debate on theories of behaviour. I will
contend that people are motivated to realize the outcome of ends or goals. Where I use the term 'need', I do so
in the sense of ends or goals desired by the individual. I have difficulty in accepting a 'need' as a personality
construct. However, desiring or wanting an outcome does reveal something about a person, and 'need' can be
used to refer to that wanting. To many psychologists this view will be heresy, but I doubt if managers care what
the energy force is called (need, want, goal, etc.).
Organizational psychologists adopt hierarchies of goals or needs, along the lines suggested by Maslow,
McClelland, Ghiselli, and Likert. Maslow's need classifications are the most extensively used, mainly because
they seem to fit organizations rather than because they have been empirically verified. We have little data to
support the concept of a hierarchy of needs in which lower order needs are satisfied before higher
(hierarchically) order needs. However, while need hierarchies may be difficult to accept, there is a great deal of
data on the relevance of these needs or ends or goals for individuals working in organizations, and it is these
data which are of value to managers.
The managers' dilemma is that, while they must accept the individual differences that exist among their
staff, organizational (and particularly personnel) practices assume that such differences do not exist. The field
of organization theory has been - and still is - plagued by the conflict between the individual and the
organization. As the orientation of this book is towards organizations, it is important to deal with sameness or
similarities between people, while acknowledging differences within groups.
(From Managing people at work by John Hunt)

Research and Development


There are two kinds of research: research and development, and basic research. The purpose of research
and development is to invent a product for sale. Edison invented the first commercially successful light bulb,
but he did not invent the underlying science that made the light bulb possible. Edison at least understood the
science, though, which was the primary difference between inventing the light bulb and inventing fire. Basic
research is something else - ostensibly the search for knowledge for its own sake. Basic research provides the
scientific knowledge upon which R&D is later based. Sending telescopes into orbit or building superconducting
supercolliders is basic research. There is no way, for example, that the $1.5 billion Hubble space telescope is
going to lead directly to a new car or computer or method of solid waste disposal. That is not what it is for. If a
product ever results from basic research, it usually does so fifteen to twenty years later, following a later period
of research and development.
Nearly all companies do research and development, but only a few do basic research. The companies
that can afford to do basic research (and cannot afford not to) are ones that dominate their markets. Most basic
research in industry is done by companies that have at least a 50 percent market share. They have both the
greatest resources to spare for this type of activity and the most to lose if, by choosing not to do basic research,
they eventually lose their technical advantage over competitors. Such companies typically devote about 1
percent of sales each year to research intended not to develop specific products but to ensure that the company
remains a dominant player in its industry twenty years from now. It is cheap insurance, since failing to do basic
research guarantees that the next major advance will be owned by someone else.
The problem with industrial basic research, and what differentiates it from government basic research,
is the fact that its true product is insurance, not knowledge. If a researcher at the government-sponsored
Lawrence Livermore Lab comes up with some particularly clever new way to kill millions of people, there is no
doubt that his work will be exploited and that weapons using the technology will eventually be built. The
simple rule about weapons is that if they can be built, they will be built. But basic researchers in industry find
their work is at the mercy of the marketplace and their captains-of-industry bosses. If a researcher at General
Motors comes up with a technology that will allow cars to be built for $100 each, GM executives will quickly
move to bury the technology, no matter how good it is, because it threatens their current business, which is
based on cars that cost thousands of dollars each to build. Consumers would revolt if it became known that GM
was still charging high prices for cars that cost $100 each to build, so the better part of business valor is to stick
with the old technology since it results in more profit dollars per car produced.
In the business world, just because something can be built does not at all guarantee that it will be built,
which explains why RCA took a look at the work of George Heilmeier, a young researcher working at the company's research center in New Jersey, and quickly decided to stop work on Heilmeier's invention, the liquid
crystal display. RCA made this mid-1960s decision because LCDs might have threatened its then-profitable
business of building cathode ray picture tubes. Twenty-five years later, of course, RCA is no longer a factor in
the television market, and LCD displays - nearly all made in Japan - are everywhere.
(From Accidental empires by Robert X Cringeley)

SONY
I had decided during my first trip abroad in 1953 that our full name - Tokyo Tsushin Kogyo Kabushiki
Kaisha - was not a good name to put on a product. It was a tongue-twister. Even in Japan, we shortened it
sometimes to Totsuko, but when I was in the United States I learned that nobody could pronounce either name.
The English-language translation - Tokyo Telecommunications Engineering Company - was too clumsy. We
tried Tokyo Teletech for a while, but then we learned there was an American company using the name
Teletech.
It seemed to me that our company name didn't have a chance of being recognized unless we came up
with something ingenious. I also thought that whatever new name we came up with should serve double duty-
that is, it should be both our company name and our brand name. That way we would not have to pay double
the advertising cost to make both well known.
We tried a symbol for a while, an inverted pyramid inside a thin circle with small wedges cut from the
sides of the pyramid to give us a stylized letter "T." But for our first transistors and for our first transistor radio,
we wanted a brand name that was special and clever and that people would remember. We decided our
transistor radio would be the first consumer product available to the public with our new brand name on it.
I thought a lot about this when I was in the United States, where I noticed that many companies were
using three-letter logotypes, such as ABC, NBC, RCA, and AT&T. Some companies were also using just their
full name as their logo. This looked like something new to me. When I was a boy, I had learned to recognize the
names of imported automobiles by their symbols, the three-pointed star for Mercedes, the blue oval with Ford
in it, the Cadillac crown, the Pierce Arrow arrow, the Winged Victory of Rolls-Royce. Later, many car
companies began to use their names together with the symbol, like Chevrolet, Ford,
Buick, and others, and I could recognize their names even if I couldn't actually read them. I pondered
every possibility. Ibuka and I took a long time deciding on a name. We agreed we didn't want a symbol. The
name would be the symbol, and therefore it should be short, no more than four or five characters. All Japanese
companies have a company badge and a lapel pin, usually in the shape of the company symbol, but except for a
prominent few, such as the three diamonds of Mitsubishi, for example, it would be impossible for an outsider
to recognize them. Like the automobile companies that began relying less and less on symbols and more and
more on their names, we felt we really needed a name to carry our message. Every day we would write down
possibilities and discuss them whenever we had the time. We wanted a new name that could be recognized
anywhere in the world, one that could be pronounced the same in any language. We made dozens and dozens
of tries. Ibuka and I went through dictionaries looking for a bright name, and we came across the Latin word
sonus, meaning "sound." The word itself seemed to have sound in it. Our business was full of sound, so we
began to zero in on sonus. At that time in Japan borrowed English slang and nicknames were becoming
popular and some people referred to bright young and cute boys as "sonny," or "sonny-boys," and, of course,
"sunny" and "sonny" both had an optimistic and bright sound similar to the Latin root with which we were
working. And we also thought of ourselves as "sonny-boys" in those days. Unfortunately, the single word
"sonny" by itself would give us troubles in Japan because in the romanization of our language, the word
"sonny" would be pronounced "sohn-nee," which means to lose money. That was no way to launch a new
product. We pondered this problem for a little while and the answer struck me one day: why not just drop one
of the letters and make it "Sony"? That was it!
The new name had the advantage of not meaning anything but "Sony" in any language; it was easy to
remember, and it carried the connotations we wanted. Furthermore, as I reminded Ibuka, because it was
written in roman letters, people in many countries could think of it as being in their own language. All over the
world governments were spending money to teach people how to read English and use the roman alphabet,
including Japan. And the more people who learned English and the roman alphabet, the more people would
recognize our company and product name-at no cost to us.
We kept our old corporate name for some time after we began putting the Sony logotype on our
products. For our first product logo, we used a tall, thin sloping initial letter inside a square box, but I soon
realized that the best way to get name recognition would be to make the name as legible and simple as possible,
so we moved to the more traditional and simple capital letters that remain today. The name itself is the logo.
We managed to produce our first transistorized radio in 1955 and our first tiny "pocketable" transistor
radio in 1957. It was the world's smallest, but actually it was a bit bigger than a standard men's shirt pocket,
and that gave us a problem for a while, even though we never said which pocket we had in mind when we said
"pocketable." We liked the idea of a salesman being able to demonstrate how simple it would be to drop it into
a shirt pocket. We came up with a simple solution. We had some shirts made for our salesmen with slightly
larger than normal pockets, just big enough to slip the radio into.
The introduction of this proud achievement was tinged with disappointment that our first
transistorized radio was not the very first one on the market. An American company called Regency, supported
by Texas Instruments, and using TI transistors, put out a radio with the Regency brand name a few months
before ours, but the company gave up without putting much effort into marketing it. As the first in the field,
they might have capitalized on their position and created a tremendous market for their product, as we did. But
they apparently judged mistakenly that there was no future in this business and gave it up.
Our fine little radio carried our company's new brand name, Sony, and we had big plans for the future
of transistorized electronics and hopes that the success of our small "pocketable" radio would be a harbinger of
successes to come.
In June 1957, we put up our first billboard carrying the Sony name opposite the entrance to Tokyo's
Haneda International Airport, and at the end of the year we put up another in the heart of the Ginza district of
Tokyo. In January 1958 we officially changed our company name to Sony Corporation and were listed on the
Tokyo Stock Exchange that December.
We had registered the name Sony in one hundred and seventy countries and territories and in various
categories, not just electronics, in order to protect it from being used by others on products that would exploit
the similarity. But we soon learned that we had failed to protect ourselves from some entrepreneurs right at
home in Japan. One day we learned that somebody was selling "Sony" chocolate.
We were very proud of our new corporate name and I was really upset that someone would try to
capitalize on it. The company that picked up our name had used a completely different name on their products
before and only changed the name when ours became popular. They registered the name "Sony" for a line of
chocolates and snack foods and even changed their company trade name to Sony Foods. In their logo they used
the same type of letters we used.
In those days we sometimes used a small cartoon character called "Sonny Boy" in our advertising. The
character was actually called "Atchan," and was created by cartoonist Fuyuhiko Okabe of the Japanese
newspaper Asahi Shimbun. The bogus Sony chocolate merchants started using a similar cartoon. Seeing this
stuff on sale in major department stores made me sick with anger. We took the imposters to court and brought
famous people such as entertainers, newspapermen, and critics to confirm the damage that was being done to
us. One witness said he thought the appearance of Sony chocolate meant that the Sony Corporation was in
financial difficulty if it had to resort to selling chocolate instead of high-technology electronics. Another witness
said she had the impression that since Sony was really a technical company, the chocolate must be some kind of
synthetic. We were afraid that if these chocolates continued to fill the marketplace, it would completely destroy
the trust people had in our company.
I have always believed that a trademark is the life of an enterprise and that it must be protected boldly.
A trademark and a company name are not just clever gimmicks-they carry responsibility and guarantee the
quality of the product. If someone tries to get a free ride on the reputation and the ability of another who has
worked to build up public trust, it is nothing short of thievery. We were not flattered by this theft of our name.
Court cases take a long time in Japan, and the case dragged on for almost four years, but we won. And
for the first time in Japanese history, the court used the unfair competition law rather than patent or trademark
registration laws in granting us relief. The chocolate people had registered the name, all right, but only after our
name had become popular. In trying to prove that the name was open for anyone to use, their lawyers went to
the major libraries of the country to show that the name was in the public domain, but they were in for a shock.
They came away empty-handed because no matter what dictionaries they went to they could not find the word
Sony. We knew they would discover that; we had done it ourselves long before. The name is unique, and it is
ours.
On our thirty-fifth anniversary, we thought we should consider revising our trademark. Styles and
fashions were changing in clothing, in product design, and in virtually everything, so we thought that perhaps
we should consider changing the style of the letters of our name. We held an international competition, and we
received hundreds of suggestions, along with hundreds of pleas from our dealers not to change. After
reviewing all the suggestions, we decided not to make any changes. S O N Y still looked very good to us, and
we decided, as they say today, that there was no point in fixing something that was far from broken.
(From Made in Japan by Akio Morita)
American and Japanese Styles
Japanese attitudes toward work seem to be critically different from American attitudes. Japanese people
tend to be much better adjusted to the notion of work, any kind of work, as honourable. Nobody would look
down on a man who retires at age fifty-five or sixty and then, to keep earning money, takes a more menial job
than the one he left. I should mention that top-level executives usually have no mandatory retirement age, and
many stay on into their seventies and even their eighties.
At Sony we have mandatory retirement from the presidency at sixty-five, but to utilize their experience
and knowledge we keep former executives who have retired as consultants. We provide them with office space
and staff, so that they can work apart from the day-to-day affairs of the company, at Ibuka Hall, a building
located five minutes away from the headquarters building. From time to time, we ask them for advice and they
attend conferences and other events as representatives of Sony. Many of those people who retire from
managerial jobs find executive positions in smaller companies or subsidiary companies of Sony where their
managerial experience and skill are needed and valued.
Workers generally are willing to learn new skills. Japan has never devised a system like the American,
in which a person is trained to do one thing and then refuses to take a job doing anything else-and is even
supported by government funds while he looks for a job that suits his specific tastes. Because of Japan's special
situation, our people do not have that luxury. And our unemployment rate lately has not reached 3 percent.
One old style of management that is still being practiced by many companies in the United States and
by some in Japan is based on the idea that the company that is successful is the one that can produce the
conventional product most efficiently at cheaper cost. Efficiency, in this system, becomes a god. Ultimately, it
means that machinery is everything, and the ideal factory is a perfectly automated one, perhaps one that is
unmanned. This machinelike management is a management of dehumanization.
But technology has accelerated at an unparalleled pace in the past few decades and it has entailed
digesting new knowledge, new information, and different technologies. Today, management must be able to
establish new business ahead of its competitors, rather than pursue higher efficiency in manufacturing
conventional products. In the U.S. and Europe today, old-fashioned low-level jobs are being protected while the
new technologies are being neglected.
More important, an employee today is no longer a slave to machinery who is expected to repeat simple
mechanical operations like Charlie Chaplin in the film Modern Times. He is no longer a beast of burden who
works under the carrot-and-stick rule and sells his labour. After all, manual labour can be taken over by
machine or computer. Modern industry has to be brain-intensive and so does the employee. Neither machinery
nor animals can carry out brain-intensive tasks. In the late sixties, when integrated circuits had to be assembled
by hand, the deft fingers of Asian women were greatly in demand by U.S. companies. As the design of these
devices became more and more complicated, along came more sophisticated machinery, such as laser trimmers,
which required not deft fingers but agile minds and intelligence. And so this upgrading of the workers is
something that every country will have to be concerned about, and the idea of preserving old-fashioned jobs in
the modern era does not make sense. This means educating new employees and re-educating older employees
for new challenges.
That is not all. At Sony we at times have scientists participate in sales for a while because we don't want
our scientists to live in ivory towers. I have always felt they should know that we are in a very competitive
business and should have some experience in the front lines of the business. Part of the training program for
graduates who enter Sony as recruits fresh out of university includes a program where non-technical persons
undergo a month of training at a factory and technical persons work as salespeople in a Sony shop or
department store, selling our products.
Japanese labour practices are often called old-fashioned in today's world, and some say the old work
ethic is eroding in Japan as it has elsewhere, but I do not think this is inevitable. As I see it, the desire to work
and to perform well is not something unnatural that has to be imposed on people. I think all people get a sense
of satisfaction from accomplishing work that is challenging, when their work and role in the company are being
recognized. Managers abroad seem to overlook this. People in America, for example, have been conditioned to
a system in which a person sells his labour for a price. In a way, that's good because people cannot coast; they
know they have to work to earn their money or be fired. (I also think the way Americans make their children do
work to earn their allowance is a fine idea; in Japan we often just give the money without requiring anything of
our children.) In Japan we do take the risk of promising people job security, and then we have to keep
motivating them. Yet I believe it is a big mistake to think that money is the only way to compensate a person for
his work.
People need money, but they also want to be happy in their work and proud of it. So if we give a lot of
responsibility to a younger man, even if he doesn't have a title, he will believe he has a good future and will be
happy to work hard. In the United States, title and job and monetary incentives are all tied together. That is
why, if a young person has a big job, management thinks he has to have a big salary. But in Japan we
customarily give raises each year as employees get older and more experienced in the company. If we give an
unusually high salary to one person, we cannot continue to give him annual increases indefinitely. At some
point, his salary will have to level off, and at that point, he is likely to get discouraged. So we like to give the
same sort of raise to all. I think this keeps our people well motivated. This may be a Japanese trait, but I do not
think so.
I believe people work for satisfaction. I know that advertisements and commercials in the U.S. seem to
hold up leisure as the most satisfying goal in life, but it is not that way in Japan yet. I really believe there is such
a thing as company patriotism and job satisfaction - and that it is as important as money. It goes without saying
that you must pay good wages. But that also means, of course, that the company must not throw money away
on huge bonuses for executives or other frivolities but must share its fate with the workers. Japanese workers
seem to feel better about themselves if they get raises as they age, on an expectable curve. We have tried other
ways.
When we started our research laboratory, we had to go out and find researchers, and because these
people had more education and were, naturally, older than our normal new employees we decided they should
have higher wages, equivalent to U.S. salary levels. One suggested plan was to put them under short-term
contract, say three years, after which we would decide whether to renew or not. But before we decided on this
new pay scheme, I asked the new employees whether they would prefer the more common system of lower pay
to start, but with yearly increases, or the three-year contract at a much higher wage.
Not one of them asked for the American-level salary. Everyone opted for long-range security. That is
why I tell the Americans I meet that people don't work only for money. But often when I say it, they respond,
"Yes, I see, but how much do you pay the ones who really work hard?" Now this is an important point. When a
worker knows he will be getting a raise each year, he can feel so secure that he thinks there is no need to work
hard. Workers must be motivated to want to do a good job. We Japanese are, after all, human beings, with
much in common with people everywhere. Our evaluation system is complex and is designed to find really
capable persons, give them challenging jobs, and let them excel. It isn't the pay we give that makes the
difference-it is the challenge and the recognition they get on the job.
My eldest son, Hideo, may not be the best example of the typical Japanese worker, but he has an
interesting and, I think, typical view of work in Japan. He has studied in Britain and the United States, and all
his life he wanted to work for Sony. He went to work as an Artists and Repertory man at the CBS-Sony record
company on the urging of Norio Ohga. He and I felt that for him to come directly into Sony headquarters
would be wrong, because of the family connection and the overtones of nepotism. So he was proving himself at
CBS-Sony. He worked with foreign and local artists and became famous and successful in the record industry
in Japan. He worked very hard, from about noon until three or four o'clock in the morning, doing his regular
office business during the day and then dealing with musicians after they finished their work. Hideo doesn't
drink, and so it was hard for him to sit around the Tokyo discos and bars with these rock stars, drinking Coca-
Cola while they relaxed with whiskey in the wee small hours of the morning. But it was important for him to
do this, and although he could have gone on a long time resting on his laurels, he took stock of himself on his
thirtieth birthday and made a decision.
As he put it, "In the record business, there are many people in their late thirties and early forties
wearing jogging shoes and white socks and jeans and T-shirts to the office. I looked at those guys and said: I
don't want to be like that when I am forty or forty-five. This business is fine and I have been successful, and I
have no reason to leave it. If I keep this job, I thought, I might end up being a top officer of CBS-Sony, but I
didn't want to see myself at fifty coming into the office at one o'clock in the afternoon in jogging shoes and
white socks saying 'Good morning.' I felt I had to prove to myself after seven years in the record business that I
could work from nine to five, like ordinary people."
He was assigned to the Sony accounting division-quite a change, you might think, from the artistic side
of the record business-and some might have wondered whether he could make it or not, but I believed he
could. His attitude is very Japanese, despite his international upbringing:
"All jobs are basically the same. You have to apply yourself, whether you are a record A&R man, a
salesman on the street, or an accounting clerk. You get paid and you work one hundred percent to do the job at
hand. As an A&R man, I was interested and excited and happy, but naturally as long as you are satisfied with
your work and are using your energy, you will be happy. I was also very excited about the accounting division.
I found out something new every day, struggling with a whole bunch of invoices and the payment sheets, the
balance sheet, the profit and loss statement, and working with all those numbers. I began to get a broad picture
of the company, its financial position and what is happening day to day and which way the company is
heading. I discovered that that excitement and making music at the studio are the same thing."

In the late sixties a European Commission internal memo on Japan was leaked, and a great stir was
created because it referred to the Japanese as "workaholics" who live in "rabbit hutches." There is no doubt that
inadequate housing is a major problem in Japan, and nobody could deny that the Japanese are probably the
hardest working people in the world. We have many holidays in Japan, but only about the same number as the
United States. We do not give long summer vacations, even to our schoolchildren.
At Sony we were one of the first Japanese companies to close down our factory for one week in the
summer, so that everybody could take off at the same time. And we long ago instituted the five-day, forty-hour
week. The Japan Labour Standards Act still provides for a maximum forty-eight-hour workweek, though it is
soon to be revised downward, and the average workweek in manufacturing is now forty-three hours. But even
with up to twenty days of paid vacation a year, Japanese workers managed to take fewer days off and spend
more days on the job than workers in the United States and Europe.
It was only in 1983 that banks and financial institutions began to experiment with the five-day week,
closing one Saturday a month, and eventually the whole nation will move closer to the five-day week. Still,
International Labour Organization data show that Japanese work longer weeks and have fewer labour disputes
than workers in the U.S., the U.K., France, or West Germany. What I think this shows is that the Japanese
worker appears to be satisfied with a system that is not designed only to reward people with high pay and
leisure.
At Sony we learned that the problem with an employee who is accustomed to work only for the sake of
money is that he often forgets that he is expected to work for the group entity, and this self-centred attitude of
working for himself and his family to the exclusion of the goals of his co-workers and the company is not
healthy. It is management's responsibility to keep challenging each employee to do important work that he will
find satisfying and to work within the family. To do this, we often reorganize the work at Sony to suit the
talents and abilities of the workers.
I have sometimes referred to American companies as being structures like brick walls while Japanese
companies are more like stone walls. By that I mean that in an American company, the company's plans are all
made up in advance, and the framework for each job is decided upon. Then, as a glance at the classified section
of any American newspaper will show, the company sets out to find a person to fit each job. When an applicant
is examined, if he is found to be oversized or undersized for the framework, he will usually be rejected. So this
structure is like a wall built of bricks: the shape of each employee must fit in perfectly, or not at all.
In Japan recruits are hired, and then we have to learn how to make use of them. They are a highly
educated but irregular lot. The manager takes a good long look at these rough stones, and he has to build a wall
by combining them in the best possible way, just as a master mason builds a stone wall. The stones are
sometimes round, sometimes square, long, large, or small, but somehow the management must figure out how
to put them together. People also mature, and Japanese managers must also think of the shapes of these stones
as changing from time to time. As the business changes, it becomes necessary to refit the stones into different
places. I do not want to carry this analogy too far, but it is a fact that adaptability of workers and managements
has become a hallmark of Japanese enterprise.
When Japanese companies in declining or sunset industries change their line of business or add to it,
workers are offered retraining and, for the most part, they accept it eagerly. This sometimes requires a family
move to the new job, and Japanese families are, again, generally disposed to do this.
(From Made in Japan by Akio Morita)

CHEMISTRY
Metallurgy: Making Alloys
The majority of alloys are prepared by mixing metals in the molten state; then the mixture is poured
into metal or sand moulds and allowed to solidify. Generally the major ingredient is melted first; then the
others are added to it and should completely dissolve. For instance, if a plumber makes solder he may melt his
lead, add tin, stir, and cast the alloy into stick form. Some pairs of metals do not dissolve in this way. When this
is so it is unlikely that a useful alloy will be formed. Thus if the plumber were to add aluminium, instead of tin,
to the lead, the two metals would not dissolve - they would behave like oil and water. When cast, the metals
would separate into two layers, the heavy lead below and aluminium above.
One difficulty in making alloys is that metals have different melting points. Thus copper melts at
1,083°C, while zinc melts at 419°C and boils at 907°C. So, in making brass, if we just put pieces of copper and
zinc in a crucible and heated them above 1,083°C, both the metals would certainly melt. But at that high
temperature the liquid zinc would also boil away and the vapour would oxidize in the air. The method adopted
in this case is to heat first the metal having the higher melting point, namely the copper. When this is molten,
the solid zinc is added and is quickly dissolved in the liquid copper before very much zinc has boiled away.
Even so, in the making of brass, allowance has to be made for unavoidable zinc loss which amounts to about
one part in twenty of the zinc. Consequently, in weighing out the metals previous to alloying, an extra quantity
of zinc has to be added.
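The allowance works out as straightforward arithmetic. In the sketch below, the 60/40 brass composition is an assumed example; only the one-in-twenty loss figure comes from the passage:

```python
# A minimal sketch of the zinc allowance: if about one part in twenty of the
# zinc boils away, the weighed-out zinc must be scaled up so that the final
# brass still contains the intended amount. The 60/40 target is assumed.

target_zinc = 40.0           # kg of zinc wanted in the finished brass (assumed)
zinc_loss_fraction = 1 / 20  # 'about one part in twenty' of the zinc is lost

# Weigh out enough zinc that, after losing 1/20 of it, target_zinc remains.
zinc_to_weigh = target_zinc / (1 - zinc_loss_fraction)
print(f"weigh out {zinc_to_weigh:.2f} kg of zinc for {target_zinc} kg in the alloy")
# -> roughly 42.11 kg, i.e. about 2.1 kg extra
```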
Sometimes the making of alloys is complicated because the higher melting point metal is in the smaller
proportion. For example, one light alloy contains 92 per cent aluminium (melting point 660°C) with 8 per cent
copper (melting point 1,083°C). To manufacture this alloy it would be undesirable to melt the few pounds of
copper and add nearly twelve times the weight of aluminium. The metal would have to be heated so much to
persuade the large bulk of aluminium to dissolve that gases would be absorbed, leading to unsoundness. In
this, as in many other cases, the alloying is done in two stages. First an intermediate 'hardener alloy' is made,
containing 50 per cent copper and 50 per cent aluminium, which alloy has a melting point considerably lower
than that of copper and, in fact, below that of aluminium. Then the aluminium is melted and the correct
amount of the hardener alloy added; thus, to make 100 lb of the aluminium-copper alloy we should require 84 lb of aluminium to be melted first and 16 lb of hardener alloy to be added to it.
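The figures can be checked by arithmetic: the 16 lb of 50/50 hardener contributes 8 lb of copper and 8 lb of aluminium, so the 100 lb batch comes out at the target 92/8 composition:

```python
# The passage's own figures, verified: 16 lb of 50/50 Cu-Al hardener plus
# 84 lb of aluminium melted first gives a 100 lb batch at 92% Al / 8% Cu.

hardener = 16.0              # lb of 50/50 copper-aluminium hardener alloy
aluminium_direct = 84.0      # lb of aluminium melted first

copper = 0.5 * hardener                        # 8 lb Cu from the hardener
aluminium = aluminium_direct + 0.5 * hardener  # 84 + 8 = 92 lb Al
total = copper + aluminium                     # 100 lb

print(f"Al {100 * aluminium / total:.0f}%  Cu {100 * copper / total:.0f}%")
# -> Al 92%  Cu 8%, the target composition
```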
In a few cases, the melting point of the alloy can be worked out approximately by arithmetic. For
instance, if copper (melting point 1,083°C) is alloyed with nickel (melting point 1,454°C) a fifty-fifty alloy will
melt at about halfway between the two temperatures. Even in this case the behaviour of the alloy on melting is
not simple. A copper-nickel alloy does not melt or freeze at one fixed and definite temperature, but
progressively solidifies over a range of temperature. Thus, if a fifty-fifty copper-nickel alloy is liquefied and
then gradually cooled, it starts freezing at 1,312°C, and as the temperature falls, more and more of the alloy
becomes solid until finally at 1,248°C it has completely solidified. Except in certain special cases this 'freezing range' occurs in all alloys; it is not found in pure metals, in metallic or chemical compounds, or in certain special alloy compositions, referred to below, all of which melt and freeze at one definite temperature.
The alloying of tin and lead furnishes an example of one of these special cases. Lead melts at 327°C and
tin at 232°C. If lead is added to molten tin and the alloy is then cooled, the freezing point of the alloy is found to
be lower than the freezing points of both lead and tin (see figure 1). For instance, if a molten alloy containing 90
per cent tin and 10 per cent lead is cooled, the mixture reaches a temperature of 217°C before it begins to
solidify. Then, as the alloy cools further, it gradually changes from a completely fluid condition, through a stage
when it is like gruel, until it becomes as thick as porridge, and finally, at a temperature as low as 183°C, the
whole alloy has become completely solid. By referring to figure 1, it can be seen that with 80 per cent tin, the
alloy starts solidifying at 203°C, and finishes only when the temperature has fallen to 183°C (note the
recurrence of the 183°C).
What happens at the other end of the series, when tin is added to lead? Once again the freezing point is
lowered. An alloy with only 20 per cent tin and the remainder lead starts to freeze at 279°C and completes
solidification at the now familiar temperature of 183°C. One particular alloy, containing 62 per cent tin and 38
per cent lead, melts and solidifies entirely at 183°C. Obviously this temperature of 183°C and the 62/38 per cent
composition are important in the tin-lead alloy system. Similar effects occur in many other alloy systems and
the special composition which has the lowest freezing point of the series and which entirely freezes at that
temperature has been given a special name. The particular alloy is known as the 'eutectic' alloy and the freezing
temperature (183°C in the case of the tin-lead alloys) is called the eutectic temperature.
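The tin-lead figures scattered through the last two paragraphs can be gathered into one small sketch. The data below are exactly those quoted in the passage; the classification into fluid, pasty and solid follows its gruel-and-porridge description:

```python
# Tin-lead data from the passage: the temperature at which freezing starts
# for each quoted composition. Every alloy in the series finishes freezing
# at the 183 C eutectic temperature, and the 62/38 eutectic alloy melts and
# freezes entirely at that temperature.

FINISH = 183  # eutectic temperature, deg C, common to the whole series

# % tin -> temperature at which the molten alloy starts to solidify (deg C)
start_of_freezing = {90: 217, 80: 203, 62: 183, 20: 279}

def state(percent_tin, temperature):
    """Classify a tin-lead alloy using the passage's data (degrees C)."""
    start = start_of_freezing[percent_tin]
    if temperature > start:
        return "completely fluid"
    if temperature > FINISH:
        return "pasty - between gruel and porridge"
    return "completely solid"

print(state(90, 200))  # -> pasty: below the 217 start, above the 183 finish
print(state(62, 184))  # -> completely fluid: the eutectic stays liquid to 183
```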
By a careful choice of constituents, it is possible to make alloys with unusually low melting points. Such
a fusible alloy is a complex eutectic of four or five metals, mixed so that the melting point is depressed until the
lowest melting point possible from any mixture of the selected metals is obtained. A familiar fusible alloy,
known as Wood's metal, has a composition:
Bismuth 4 parts
Lead 2 parts
Tin 1 part
Cadmium 1 part
and its melting point is about 70°C; that is, less than the boiling point of water. Practical jokers have
frequently amused themselves by casting this fusible alloy into the shape of a teaspoon, which will melt when
used to stir a cup of hot tea.
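A quick conversion of the parts given above into percentages:

```python
# The Wood's metal recipe from the passage, converted from parts by weight
# to percentages.

parts = {"bismuth": 4, "lead": 2, "tin": 1, "cadmium": 1}
total = sum(parts.values())  # 8 parts in all

for metal, p in parts.items():
    print(f"{metal:8s} {100 * p / total:5.1f}%")
# -> bismuth 50.0%, lead 25.0%, tin 12.5%, cadmium 12.5%
```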
These low melting point alloys are regularly in use for more serious purposes, as for example, in
automatic anti-fire sprinklers installed in the ceilings of buildings. Each jet of the water sprinkler system
contains a piece of fusible alloy, so that if a fire occurs and the temperature rises sufficiently high, the alloy
melts and the water is released through the jets of the sprinkler.
(From Metals in the Service of Man by W. Alexander & A. Street.)

Electricity Helps Chemistry: Electro-plating


A liquid which is decomposed when an electric current passes through it is called an electrolyte. The
process is called electrolysis, and the two wires or plates dipping into the electrolyte are called electrodes. The
electrode which is connected to the positive terminal of the cell or battery is called the anode. The electrode
which is connected to the negative terminal of the battery is called the cathode.

Let us examine what happens when two copper electrodes are used in a solution of copper sulphate.
The circuit is shown in the diagram. The right-hand diagram shows the two copper electrodes dipping into the
copper sulphate solution contained in a glass jar. The current enters by the anode (+), passes through the
solution, enters the cathode (-), and then leaves the cathode as shown by the arrow. In the left-hand diagram, V
represents the glass vessel containing the copper sulphate (electrolyte), and the two electrodes are marked + for
the anode and - for the cathode. When the switch S is closed, the current flows from the + terminal of the battery
B in the direction of the arrow to the anode (+) of V, through the solution to the cathode (-), then round the
circuit through S back to the negative terminal of the battery B.
Before starting this experiment the weights of the two copper plates which are to be used for the anode
and cathode must be written down carefully for future reference. Next, place the anode and cathode in the
copper sulphate solution and connect them up to the battery B and switch S. The switch is then placed in the
'on' position and the current is allowed to flow through the circuit for about half an hour. The anode and
cathode are then removed and dried carefully in blotting paper before being weighed a second time.
You will find that a surprising thing has happened. The anode now weighs a few milligrams less than
before and the cathode weighs a few milligrams more than before. The weight lost by the anode is exactly equal to
the gain in weight by the cathode. In some strange way a few milligrams of copper have been removed from the
anode and carried through the electrolyte by the current and have finally become firmly attached to the
cathode. This is a most exciting discovery, for we have learned how to use an electric current to transfer tiny
particles of copper from the anode to the cathode.
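The size of the transfer can be estimated with Faraday's law of electrolysis, a standard result not stated in the passage (mass deposited = charge passed / Faraday constant, multiplied by molar mass / ionic charge). The 10 mA current below is an assumed figure, chosen only to match the 'few milligrams' scale of the half-hour experiment:

```python
# A sketch, using Faraday's law, of how much copper a small current moves
# from anode to cathode in half an hour. The current is an assumed figure.

FARADAY = 96_485      # coulombs per mole of electrons
M_COPPER = 63.5       # g/mol, molar mass of copper
Z_COPPER = 2          # the Cu2+ ion carries two charges

current = 0.010                 # amperes (assumed)
time = 30 * 60                  # half an hour, in seconds
charge = current * time         # coulombs passed through the electrolyte

mass_grams = (charge / FARADAY) * (M_COPPER / Z_COPPER)
print(f"{mass_grams * 1000:.1f} mg moved from anode to cathode")
# -> about 5.9 mg: 'a few milligrams', as the weighing experiment finds
```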
Nineteenth-century industry soon found out how to apply this exciting discovery to our everyday lives.
Scientists found that many other metals could be transferred from anode to cathode. The anode had to be made
of the metal which it was desired to transfer to the cathode, and the electrolyte had to be a suitable solution or
salt of the metal. Then the cathode always became plated with metal from the anode. Copper, silver, gold,
nickel, zinc and chromium can all be used in this process, which is called electro-plating. Electro-plating is used
widely in industry for a number of reasons. Firstly, it is used for decoration. Coatings of nickel, gold, silver or
chromium give a nice shiny appearance to articles and make them look much more expensive. Watch-cases and
cutlery are often plated with silver or gold to give them a smart appearance so that they become attractive to
intending buyers. Handlebars of bicycles and the shiny fittings of cars are also made attractive by means of
nickel and chromium plating.
This leads us to the second reason for electro-plating - as a protection against rust or corrosion. Iron and
steel corrode easily when exposed to the atmosphere. Car fittings and the shiny parts of bicycles are electro-
plated chiefly for this reason, so that they may stand up to the hard wear and tear of daily use. Zinc is formed
into a protective layer for iron sheets by the electroplating process which we now call galvanizing. Galvanized
iron sheets resist the effects of wind and weather much better than sheets made of iron. Tin is also used as a
protective agent. Sheets of thin iron are plated with tin and used for canning fruit and jam, and for all kinds of
'tin' cans used in industry and trade. We may sum up by saying that industry has used the process of electro-
plating first to protect metal surfaces which would otherwise corrode; and secondly to provide a beautiful and
attractive finish to useful articles. As a result, our bicycles and cars, our watches and cutlery, our building and
manufacturing materials last much longer and are much more pleasant to look at.
The process of electrolysis is used for the production of very pure specimens of metal. Most metals in
industrial use contain many impurities. About 1 million tons of refined copper are produced each year by
electrolysis. In this case the anode consists of crude copper and the cathode of thin sheets of pure copper. As the
current passes, pure copper from the anode passes over to the cathode, and all impurities fall off the anode as a
kind of mud. In this way pure copper is collected at one electrode and the muddy residue, which falls off the anode, sinks to the bottom of the vat and is periodically removed.
Aluminium is so widely used today that we can scarcely think of times when it was not available. Yet a
few years back it was a costly metal because no satisfactory method had been found of producing it
commercially. Aluminium ores are so common in nature that scientists and engineers made many attempts to
find a cheap and convenient method of refining them. The problem was finally solved by electrolysis, using a
carbon anode and aluminium ores, which had been melted at a temperature of about 1,000°C, as the electrolyte.
Aluminium is now plentiful and it is being put to fresh uses every day.
Electrolysis has an important industrial application in the printing trade, for it is often used to make the
'blocks' from which pictures and type are printed. A wax mould is first made of the printing block which is to
be reproduced. Since wax is a non-conductor of electricity it is dusted over with graphite so that the surface
becomes a conductor and can act as a cathode. This mould then becomes the cathode upon which copper or
chromium is deposited from the anode. When the wax is taken out of the electrolyte it is coated with a fine shell
of metal. The wax is removed by heating and the metal shell acts as a mould into which molten type metal can
be poured. Plates made in this way are very hard-wearing and can be used to print many thousands of copies
of newspapers, journals and magazines.
(From General Science by N. Ahmad, W. F. Hawkins and W. M. Zaki.)
ECONOMICS
THE CONVENTIONAL WISDOM
The first requirement for an understanding of contemporary economic and social life is a clear view of
the relation between events and the ideas which interpret them. For each of these has a life of its own, and
much as it may seem a contradiction in terms each is capable for a considerable period of pursuing an
independent course.
The reason is not difficult to discover. Economic life, like other social life, does not conform to a simple and
coherent pattern. On the contrary it often seems incoherent, inchoate, and intellectually frustrating. But one
must have an explanation or interpretation of economic behaviour. Neither man's curiosity nor his inherent ego
allows him to remain contentedly oblivious to anything that is so close to his life.
Because economic and social phenomena are so forbidding, or at least so seem, and because they yield
few hard tests of what exists and what does not, they afford to the individual a luxury not given by physical
phenomena. Within a considerable range he is permitted to believe what he pleases; he may hold whatever view of the world he finds most agreeable or otherwise to his taste.
As a consequence, in the interpretation of all social life there is a persistent and never-ending
competition between what is relevant and what is merely acceptable. In this competition, while a strategic
advantage lies with what exists, all tactical advantage is with the acceptable. Audiences of all kinds most
applaud what they like best. And in social comment the test of audience approval, far more than the test of
truth, comes to influence comment. The speaker or writer who addresses his audience with the proclaimed
intent of telling the hard, shocking facts invariably goes on to expound what the audience most wants to hear.
Just as truth ultimately serves to create a consensus, so in the short run does acceptability. Ideas come to
be organized around what the community as a whole or particular audiences find acceptable. And as the
laboratory worker devotes himself to discovering scientific verities, so the ghost writer and the public relations
man concern themselves with identifying the acceptable. If their clients are rewarded with applause, these
artisans are qualified in their craft. If not they have failed. However, by sampling audience reaction in advance,
or by pretesting speeches, articles, and other communications, the risk of failure can now be greatly minimized.
Numerous factors contribute to the acceptability of ideas. To a very large extent, of course, we associate
truth with convenience - with what most closely accords with self-interest and individual well-being or
promises best to avoid awkward effort or unwelcome dislocation of life. We also find highly acceptable what
contributes most to self-esteem. Speakers before the United States Chamber of Commerce rarely denigrate the
business man as an economic force. Those who appear before the AFL-CIO are prone to identify social progress
with a strong trade union movement. But perhaps most important of all, people approve most of what they best
understand. As just noted, economic and social behaviour are complex and mentally tiring. Therefore we
adhere, as though to a raft, to those ideas which represent our understanding. This is a prime manifestation of
vested interest. For a vested interest in understanding is more preciously guarded than any other treasure. It is
why men react, not infrequently with something akin to religious passion, to the defence of what they have so
laboriously learned. Familiarity may breed contempt in some areas of human behaviour, but in the field of
social ideas it is the touchstone of acceptability.
Because familiarity is such an important test of acceptability, the acceptable ideas have great stability.
They are highly predictable. It will be convenient to have a name for the ideas which are esteemed at any time
for their acceptability, and it should be a term that emphasizes this predictability. I shall refer to these ideas
henceforth as the conventional wisdom.
(From The Affluent Society by J. K. Galbraith)

MARKETS
A market is commonly thought of as a place where commodities are bought and sold. Thus fruit and
vegetables are sold wholesale at Covent Garden Market and meat is sold wholesale at Smithfield Market. But
there are markets for things other than commodities, in the usual sense. There are real estate markets, foreign
exchange markets, labour markets, short-term capital markets, and so on; there may be a market for anything
which has a price. And there may be no particular place to which dealings are confined. Buyers and sellers may
be scattered over the whole world and instead of actually meeting together in a market-place they may deal
with one another by telephone, telegram, cable or letter. Even if dealings are restricted to a particular place, the
dealers may consist wholly or in part of agents acting on instructions from clients far away. Thus agents buy
meat at Smithfield on behalf of retail butchers all over England; and brokers on the London Stock Exchange buy
and sell securities on instructions from clients all over the world. We must therefore define a market as any area
over which buyers and sellers are in such close touch with one another, either directly or through dealers, that
the prices obtainable in one part of the market affect the prices paid in other parts.
Modern means of communication are so rapid that a buyer can discover what price a seller is asking,
and can accept it if he wishes, although he may be thousands of miles away. Thus the market for anything is,
potentially, the whole world. But in fact things have, normally, only a local or national market.
This may be because nearly the whole demand is concentrated in one locality. These special local
demands, however, are of quite minor importance. The main reason why many things have not a world market
is that they are costly or difficult to transport.
The lower the value per ton of a good, the greater is the percentage addition made to its price by a fixed
charge per ton-mile for transport. Thus, if coal is £2 a ton and tin £200 a ton at the place of production, a given
transport charge forms a percentage of the price of coal a hundred times greater than of the price of tin. Hence
transport costs may restrict the market for goods with a low value per ton, even if, as is often the case, they are
carried at relatively low rates. It may be cheaper to produce, say, coal or iron ore at A than at B, but the cost of
transporting it from A to B may outweigh the difference in production costs, so that it is produced for local
consumption at B, and B does not normally form part of the market for the output of A. For example, coal is produced
much more cheaply in the United States than in Europe, but, owing to the cost of transporting coal by rail from
the inland mines to the Atlantic seaboard of the United States, American coal seldom finds its way to Europe.
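The hundredfold difference is easily verified. In the sketch below the £1-a-ton transport charge is an assumed figure; the £2 and £200 prices per ton are those given in the passage:

```python
# The passage's coal-and-tin comparison worked through: a fixed transport
# charge per ton adds a far larger percentage to the price of the cheap
# good. The 1-pound-per-ton charge is assumed for illustration.

transport_charge = 1.0                 # pounds per ton (assumed)
prices = {"coal": 2.0, "tin": 200.0}   # pounds per ton at place of production

for good, price in prices.items():
    markup = 100 * transport_charge / price
    print(f"{good}: transport adds {markup:.1f}% to the price")
# -> coal: 50.0%, tin: 0.5% - a hundredfold difference, as the passage says
```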
Sea transport, however, is very much cheaper than land transport. Hence commodities of this type
produced near a port can often be sent profitably quite long distances by sea. Thus Swedish iron ore comes by
sea from Narvik to the Ruhr, and British coal is exported to Canada and South America.
The markets for real estate are local. Soil has been transported from French vineyards to California, and
historic mansions have been demolished in Europe to be re-erected in the United States, but as a rule land and
buildings are not transported.
Some goods, like new bread and fresh cream and strawberries, must be consumed very soon after they
have been produced, and this restricts their sale to local markets. Other goods do not travel well. Thus many
local wines which cannot stand transport can be bought in the district more cheaply than similar wines which
have a wider market. The development of refrigeration, and of other devices which enable foodstuffs to be
preserved and transported, has greatly widened the market for such things as meat and fish and some kinds of
fruit. But such devices often transform the articles, from the standpoint of consumers, into a different
commodity. Condensed milk is not the same as fresh milk, and chilled meat or frozen butter has not the same
taste as fresh.
Many workers are reluctant to move to a different country, or even to a different part of their own
country, to get a higher wage. This should not be exaggerated. Before the war of 1914, over a million persons a
year emigrated overseas from Europe. Following it, there were considerable movements of population within
Great Britain away from the depressed areas towards the more prosperous South. Employers may take the
initiative. Thus girl textile workers have been engaged in Yorkshire to work in Australia, and during the inter-
war years French employers engaged groups of Poles and Italians to work in the coal-mines and steel-works of
France. Nevertheless labour markets are mainly local, or at any rate national.
Transport services by rail or tram are obviously local in that passengers or goods must travel between
points on the fixed track. A firm may charter, for example, a Greek ship rather than an English ship, if it is
cheaper, but low railway rates in Belgium are no help to the firm which wishes to send goods across Canada. In
the same way, such things as gas, water, and electricity, supplied by means of pipes or wires, cannot be sold to
places not connected with the system of pipes or wires.
(From Economics by Frederick Benham)
INVESTMENT
Of the various purposes which money serves, some essentially depend upon the assumption that its
real value is nearly constant over a period of time. The chief of these are those connected, in a wide sense, with
contracts for the investment of money. Such contracts - namely those which provide for the payment of fixed
sums of money over a long period of time - are the characteristic of what it is convenient to call the Investment
System, as distinct from the property system generally. Under this phase of capitalism, as developed during the
nineteenth century, many arrangements were devised for separating the management of property from its
ownership. These arrangements were of three leading types: (i) those in which the proprietor, while parting
with the management of his property, retained his ownership of it - i.e. of the actual land, buildings, and
machinery, or of whatever else it consisted in, this mode of tenure being typified by a holding of ordinary
shares in a joint-stock company; (ii) those in which he parted with the property temporarily, receiving a fixed
sum of money annually in the meantime, but regained his property eventually, as typified by a lease; and (iii)
those in which he parted with his real property permanently, in return either for a perpetual annuity fixed in
terms of money, or for a terminable annuity and the repayment of the principal in money at the end of the term,
as typified by mortgages, bonds, debentures, and preference shares. This third type represents the full
development of Investment.
Contracts to receive fixed sums of money at future dates (made without provision for possible changes
in the real value of money at those dates) must have existed as long as money has been lent and borrowed. In
the form of leases and mortgages, and also of permanent loans to Governments and to a few private bodies,
such as the East India Company, they were already frequent in the eighteenth century. But during the
nineteenth century they developed a new and increasing importance, and had, by the beginning of the
twentieth, divided the propertied classes into two groups - the 'business men' and the 'investors' - with partly
divergent interests. The division was not sharp as between individuals; for business men might be investors
also, and investors might hold ordinary shares; but the division was nevertheless real, and not the less
important because it was seldom noticed.
By the aid of this system the active business class could call to the aid of their enterprises not only their
own wealth but the savings of the whole community; and the professional and propertied classes, on the other
hand, could find an employment for their resources, which involved them in little trouble, no responsibility,
and (it was believed) small risk.
For a hundred years the system worked, throughout Europe, with an extraordinary success and
facilitated the growth of wealth on an unprecedented scale. To save and to invest became at once the duty and
the delight of a large class. The savings were seldom drawn on, and, accumulating at compound interest, made
possible the material triumphs which we now all take for granted. The morals, the politics, the literature, and
the religion of the age joined in a grand conspiracy for the promotion of saving. God and Mammon were
reconciled. Peace on earth to men of good means. A rich man could, after all, enter into the Kingdom of
Heaven-if only he saved. A new harmony sounded from the celestial spheres. 'It is curious to observe how,
through the wise and beneficent arrangement of Providence, men thus do the greatest service to the public,
when they are thinking of nothing but their own gain': so sang the angels.
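The 'compound interest' on which this passage turns is easily made concrete. A minimal sketch, assuming a 5 per cent rate of return (the rate is my assumption, not a figure of Keynes's):

    # Savings reinvested at 5% a year multiply roughly 130-fold in a century.
    principal, rate, years = 100.0, 0.05, 100
    print(f"£{principal:.0f} grows to £{principal * (1 + rate) ** years:,.0f} in {years} years")
    # £100 grows to £13,150 in 100 years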
The atmosphere thus created well harmonized the demands of expanding business and the needs of an
expanding population with the growth of a comfortable non-business class. But amidst the general enjoyment
of ease and progress, the extent to which the system depended on the stability of the money to which the
investing classes had committed their fortunes was generally overlooked; and an unquestioning confidence was
apparently felt that this matter would look after itself. Investments spread and multiplied, until, for
the middle classes of the world, the gilt-edged bond came to typify all that was most permanent and most
secure. So rooted in our day has been the conventional belief in the stability and safety of a money contract that,
according to English law, trustees have been encouraged to embark their trust funds exclusively in such
transactions, and are indeed forbidden, except in the case of real estate (an exception which is itself a survival of
the conditions of an earlier age) to employ them otherwise.
As in other respects, so also in this, the nineteenth century relied on the future permanence of its own
happy experiences and disregarded the warning of past misfortunes. It chose to forget that there is no historical
warrant for expecting money to be represented even by a constant quantity of a particular metal, far less by a
constant purchasing power. Yet Money is simply that which the State declares from time to time to be a good
legal discharge of money contracts. In 1914 gold had not been the English standard for a century or the sole
standard of any other country for half a century. There is no record of a prolonged war or a great social
upheaval which has not been accompanied by a change in the legal tender, but an almost unbroken chronicle in
every country which has a history, back to the earliest dawn of economic record, of a progressive deterioration
in the real value of the successive legal tenders which have represented money.
(From A Tract on Monetary Reform by J. M. Keynes)
BARTER
The existence of pure barter does not necessarily indicate a very primitive form of civilization. Often the
system survives long after the community has progressed considerably in other respects. This may be due to
conservatism, since primitive peoples are reluctant to change their trading methods, even though they be
sufficiently intelligent and advanced to adopt more convenient methods. In some cases there is prejudice
against the adoption of a monetary economy, though such prejudice is usually directed against the use of coins
rather than against primitive money. In many cases barter continues to be the principal method of trading long
after the adoption of some form of money, for the simple reason that there is not enough money to go round.
And a decline in the supply of money often causes a relapse into barter. Distrust in money has also been
responsible for reversion to the barter system; such distrust may have been caused by debasement or inflation.
In the light of the stock phrases used by many economists about the inconvenience of barter, it may
appear puzzling to the student that any community which was sufficiently advanced to realize the possibilities
of a monetary system should continue to practise such an inconvenient method. The explanation is that in a
primitive community barter is not nearly so inconvenient as it appears through modern eyes. Economists are
inclined to exaggerate its inconvenience because they look at it from the point of view of modern man. The
instances-real or imaginary-they quote are calculated to make their readers wonder how any community could
possibly have existed under barter except in extremely primitive conditions. Some of them seek to demonstrate
the absurdity of barter by describing the difficulties that would arise if our modern communities were to
attempt to practise it. It is, of course, easy for a lecturer to earn the laughter of his audience by telling them
about the pathetic efforts of some market gardener who has to find a barber in need of radishes before he can
have his hair cut. What the lecturer and his audience do not realize is that in a primitive community the grower
of radishes usually cuts his own hair, or has it cut by a member of his family or household; and that even in
primitive communities with barbers as an independent profession the barber and the gardener have a fair idea
about each other's requirements, and have no difficulty in suiting each other. If the barber does not happen to
require to-day any of the products the gardener is in a position to offer, he simply performs his service in return
for the future delivery of products he is expected to need sooner or later.
Even the genuine instances quoted by economists to illustrate the absurdity of barter are apt to be
misleading in their implication. There is, for instance, the well-known experience of Mlle. Zelie, singer at the
Théâtre Lyrique in Paris, who, in the course of a tour round the world, gave a concert on one of the Society
Islands, and received the fee of three pigs, twenty-three turkeys, forty-four chickens, five thousand coconuts
and considerable quantities of bananas, lemons and oranges, representing one-third of the box office takings. In
a letter published by Wolowski and quoted to boredom by economists ever since, she says that, although this
amount of livestock and vegetables would have been worth about four thousand francs in Paris, in the Society
Islands it was of very little use to her. Another much-quoted experience is that of Cameron in Tanganyika,
when in order to buy an urgently needed boat he first had to swap brass wire against cloth, then cloth against
ivory and finally ivory against the boat.
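Cameron's three swaps amount to finding a path through a network of acceptable exchanges. The toy sketch below (the table of swaps is hypothetical, modelled loosely on the passage) shows how such a chain can be found once the network is known:

    # Breadth-first search for the shortest chain of barter exchanges.
    from collections import deque

    # Edges "A -> B" mean A can be swapped for B; invented for illustration.
    swaps = {
        "brass wire": ["cloth"],
        "cloth": ["ivory"],
        "ivory": ["boat"],
    }

    def barter_chain(have, want):
        """Return the shortest sequence of goods leading from have to want."""
        queue, seen = deque([[have]]), {have}
        while queue:
            path = queue.popleft()
            if path[-1] == want:
                return path
            for nxt in swaps.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(" -> ".join(barter_chain("brass wire", "boat")))
    # brass wire -> cloth -> ivory -> boat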
What the economists quoting these and other similar instances do not appear to realize is that the
difficulties complained of are not inherent in the system of barter. They are largely anomalies arising from
sudden contact between two different civilizations. A native singer in the Society Islands would not have been
embarrassed at receiving payment in kind, since she would have known ways in which to dispose of her
takings, or store them for future use. Nor would a native of Tanganyika have found the system of barter
prevailing there at the time of Cameron's visit nearly so difficult as Cameron did. Knowing local conditions, he
would have been prepared for the difficulties, and, before embarking on a major capital transaction such as the
purchase of a boat, he would have made arrangements accordingly. In any case, the fact that the goods required
could not be obtained by a single transaction would not have worried him unduly. The majority of primitive
peoples enjoy bartering and bargaining, and the time lost in putting through three transactions instead of one
would not matter to them nearly as much as to modern man living at high speed, especially to an explorer in a
hurry to proceed on his journey. And while Cameron must have suffered a loss in each of the three
transactions, a local man with adequate time at his disposal and with a thorough knowledge of his market
would have chosen the right moment for effecting the necessary exchanges on terms reasonably advantageous
to him.
(From Primitive Money by Paul Einzig)
PRODUCTIVITY AS A GUIDE TO WAGES
Through defective presentation the Government has allowed the wages pause to be interpreted as
involving a substantial sacrifice by all concerned, and especially by those least able to afford it. The very reverse
is true. In fact, substantial benefits would accrue to everybody by maintaining existing wages levels, because if
this were done for a reasonable time, real earnings, or purchasing power, would improve. And if thereafter
reasonable restraint were exercised in asking for, or giving, wages increases, there would be a real hope of
recapturing the habit of price reduction and so of still further improving purchasing power.
It is not true that prosperity must be equated with 'gentle' inflation-whatever that means. And while lip
service is paid to the concept that inflation is caused by wage increases outstripping increases in productivity,
few people are willing to do something positive about it. There is already active opposition to the wages pause,
and no discernible determination by any sections of the community concerned with wage negotiations to
devise a logical wages policy.
It was, of course, unfortunate for the climate of industrial relations that the wages pause was presented
to both managements and workers without the most careful preparation and education. Too much emphasis
was, and still is, placed on actual wage rates and earnings, whereas what matters most is real earnings, or
purchasing power. Charts comparing, for example, wage rates, earnings, and profits are unrealistic unless we
add the line for real earnings. When this is added, and also the line for productivity, it becomes startlingly clear
that however great the total wage claim and the eventual settlement, real earnings will always follow much the
same course as productivity. When substantial claims are substantially met real earnings may for a little time
rise above the line of productivity. But eventually they come together again.
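The claim that real earnings track productivity can be illustrated with a stylised calculation. The sketch below is my own, not the author's model; it assumes a 2 per cent productivity trend and that any excess of money-wage growth over productivity is passed straight into prices:

    # Whatever the money-wage claim, real earnings end up growing at
    # roughly the rate of productivity under full cost pass-through.
    productivity_growth = 0.02  # assumed 2% a year
    for wage_growth in (0.02, 0.05, 0.10):
        inflation = wage_growth - productivity_growth  # excess costs passed into prices
        real_growth = (1 + wage_growth) / (1 + inflation) - 1
        print(f"wage claim {wage_growth:.0%}: real earnings grow {real_growth:.2%}")
    # wage claim 2%: real earnings grow 2.00%
    # wage claim 5%: real earnings grow 1.94%
    # wage claim 10%: real earnings grow 1.85%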
In spite of some sizeable variations profits have followed much the same course as weekly wage rates
though they have been almost consistently below them. Thus all the tough bargaining by the unions on the
basis of rising profits has only effectively raised real purchasing power by about the same factor as rising
productivity would in any case have achieved had wages merely kept pace with it.
Of course, it is easy to see what is wrong. Put in its simplest terms it is that we have lost the habit of
price reduction. Everyone wants it but few achieve it in the face of ever rising costs.
At present we are importing too much and exporting too little. The reasons for this are complex. Design,
salesmanship, after-sales service, and many other factors enter into it. What is certain is that it will be even
more difficult to sell abroad after the next round of wage claims has been met and incorporated into the price
of the product. What would the picture look like now if wage increases had more nearly been associated with
the increase in productivity? Certainly our products would have cost overseas and home customers fewer
pounds, shillings and pence without the purchasing power of the workers having been at all impaired.
It has long seemed to me that the alternative is a rationalized wage policy for industry, maintaining the
existing bargaining machinery, but avoiding the bitterness and acrimony that is engendered at present, and
also avoiding the time-wasting strikes and deflection of management from its principal purpose. It may be
argued that to base wage rates on a productivity index would weaken the trade unions. I do not think so. If
there were bargaining committees of trade unions and employers for each industry and they based their
negotiations annually on whatever was the national increase in productivity, then I feel quite sure that much
more time would be left for the more important things - such as raising productivity itself and improving
working conditions and the training of craftsmen to adequate standards.
If workers knew that every year wage increases would be automatically considered without their
having to demand them, then this would remove the present acrimonious preliminaries to negotiations and
would guarantee that the result could never be inflationary - that is, wage increases would be real increases in
terms of purchasing power. It would also give workers a more direct interest in raising productivity by relating
it to their wage packets. As wages would never exceed productivity, the cost of production could not rise from
this cause; in all probability it would fall. This would encourage price reduction and tend to give a lower retail
price index, so that real wages would increase, and any increase in the money rate would be worth still more.
This would solve the problem of the pensioners and other people with fixed incomes. Furthermore, I am
convinced that such a wage policy, when clearly understood, would create a new spirit of collaboration
between management, supervision, and workpeople.
(J. J. Gracie From an article in The Guardian, December 7th, 1961)
THE FAILURE OF THE CLASSICAL THEORY OF COMMERCIAL POLICY
Let us try to sort out in appropriate groups the various influences which have been responsible for these
developments.
If we are to preserve a proper sense of proportion, I have no doubt whatever that right at the top of the
list we must put the direct impact and influence of war. This is a matter which is so obvious that we are very
apt to forget it. Yet whatever may be the importance of the political and ideological tendencies which I shall
shortly proceed to discuss, we get the perspective wrong if we regard them as more important than the brute
disruptive effects of the military convulsions of our age. It was these convulsions which, by bursting the cake of
custom and compelling the supersession of the normal institutions of peace, created the states of mind in which
restrictive and disintegrating policies seemed legitimate. It may be said that if adequate measures had been
taken, the difficulties of disequilibrium would have been less; and that if fundamental attitudes had not been
disturbed by illiberal ideologies, the chances of applying appropriate measures would have been greater.
Doubtless there is truth in this. But we are not dealing with communities of angels whose errors are always
deliberate sins against the light. We must not expect too much of the human spirit under strain; and we
simplify history unduly if, in the explanation of the policies of our time, we do not allot to the shock of war
something like autonomous status.
For somewhat similar reasons I am disposed to list separately the influence of mass unemployment or
imminent financial crisis. Of course, unemployment and financial crises are not to be regarded as acts of God:
there are often occasions when they are to be attributed to wrong economic policies, in some cases perhaps
springing from the same ideologies as the overt resistance to liberal commercial policies. But here again, I think
we oversimplify if we make our history monistic. In the explanation of how this or that community came to
adopt policies of commercial restriction, we do well to treat unemployment and financial crisis as at least semi-
independent causes. After all, we know that, in such circumstances, commercial restrictions may actually have
a favourable influence for a time: unemployment may be diminished, a drain of gold or dollars arrested. And
experience shows that it is just at such times that liberal commercial policies are most in danger. Take, for
instance, the final abandonment of free trade by Great Britain in the early thirties. No one who lived through
the crisis of those days will be disposed to deny the influence of ideological factors. The advocacy of tariff
protection by Keynes, hitherto an outstanding free trader, had an impact which should not be underestimated.
But perhaps Keynes himself would not have gone that way had there not been a depression. And certainly his
influence would have been less if people had not felt themselves to be in a sort of earthquake in which all the
old guide posts and landmarks were irrelevant.
Having thus recognized the catastrophic elements in the evolution of policy, we may now go on to
examine the more persistent and slow-moving forces. And since we are proceeding all the time from the
simpler to the more complex, we may put next on our list the influence of producer interest. This is an influence
which I am sure should be disentangled from those which we have already examined. I know that it is
sometimes argued that it is only because of under-employment or financial dislocation that the pressure groups
are effective; and I willingly concede that in such situations they have, so to speak, very powerful allies. But I
am not willing to admit that it is only in such situations that they are successful. Producer interest is ceaselessly
active, seeking to protect itself against competition and the incidence of disagreeable change. The influence of
the agrarian interest in Europe which, while tending to keep down the real incomes of European consumers,
has wrought such havoc among agricultural producers overseas, has certainly not been confined to times of
general unemployment. Nor - to allot blame evenly all round - have the many abuses of the infant industry
argument on the part of manufacturing interests. Much attention nowadays is given to the alleged influence on
history of the struggles between different classes, conceived on a social basis. In my judgment, a more realistic
view would pay more attention to the struggles of different groups organized on a producer basis. These were
the first foes of Classical liberalism and they may very well be the last.
(From The Economist in the Twentieth Century, by Lionel Robbins)
EDUCATION
The Personal Qualities of a Teacher
Here I want to try to give you an answer to the question: What personal qualities are desirable in a
teacher? Probably no two people would draw up exactly similar lists, but I think the following would be
generally accepted.
First, the teacher's personality should be pleasantly live and attractive. This does not rule out people
who are physically plain, or even ugly, because many such have great personal charm. But it does rule out such
types as the over-excitable, melancholy, frigid, sarcastic, cynical, frustrated, and over-bearing: I would say too,
that it excludes all those of dull or purely negative personality. I still stick to what I said in my earlier book: that
school children probably 'suffer more from bores than from brutes'.
Secondly, it is not merely desirable but essential for a teacher to have a genuine capacity for sympathy-
in the literal meaning of that word; a capacity to tune in to the minds and feelings of other people, especially,
since most teachers are school teachers, to the minds and feelings of children. Closely related with this is the
capacity to be tolerant-not, indeed, of what is wrong, but of the frailty and immaturity of human nature which
induce people, and again especially children, to make mistakes.
Thirdly, I hold it essential for a teacher to be both intellectually and morally honest. This does not mean
being a plaster saint. It means that he will be aware of his intellectual strengths, and limitations, and will have
thought about and decided upon the moral principles by which his life shall be guided. There is no
contradiction in my going on to say that a teacher should be a bit of an actor. That is part of the technique of
teaching, which demands that every now and then a teacher should be able to put on an act - to enliven a lesson,
correct a fault, or award praise. Children, especially young children, live in a world that is rather larger than
life.
A teacher must remain mentally alert. He will not get into the profession if of low intelligence, but it is
all too easy, even for people of above-average intelligence, to stagnate intellectually and that means to
deteriorate intellectually. A teacher must be quick to adapt himself to any situation, however improbable (they
happen!) and able to improvise, if necessary at less than a moment's notice. (Here I should stress that I use 'he'
and 'his' throughout the book simply as a matter of convention and convenience.)
On the other hand, a teacher must be capable of infinite patience. This, I may say, is largely a matter of
self-discipline and self-training; we are none of us born like that. He must be pretty resilient; teaching makes
great demands on nervous energy. And he should be able to take in his stride the innumerable petty irritations
any adult dealing with children has to endure.
Finally, I think a teacher should have the kind of mind which always wants to go on learning. Teaching
is a job at which one will never be perfect; there is always something more to learn about it. There are three
principal objects of study: the subject, or subjects, which the teacher is teaching; the methods by which they can
best be taught to the particular pupils in the classes he is teaching; and -by far the most important-the children,
young people, or adults to whom they are to be taught. The two cardinal principles of British education today
are that education is education of the whole person, and that it is best acquired through full and active co-
operation between two persons, the teacher and the learner.
(From Teaching as a Career, by H. C. Dent.)
Rousseau's Emile
It is not intended, even if it were desirable, to give a running commentary on the Emile. The reader is
advised to study it for himself and to read in conjunction the account of the education of Julie's children
described in Part V of The New Heloise. All that space permits is a summary of some of the more important ideas
that Rousseau contributed to educational theory. Speaking of the Emile, Lord Morley described it as 'one of the
seminal books in the history of literature, and of such books the worth resides less in the parts than in the
whole. It touched the deeper things of character. It filled parents with a sense of the dignity and moment of
their task. It cleared away the accumulation of clogging prejudices and obscure inveterate usage, which made
education one of the dark formalistic arts. It admitted floods of light and air into the tightly closed nurseries
and schoolrooms. It effected the substitution of growth for mechanism . . . it was the charter of youthful
deliverance.'
It is the last sentence of this passage which expresses the most important influence that Rousseau
exercised upon education. One may justly hail him as the discoverer of the child. This is not to forget that
educational thinkers of the ancient world, the mediaeval period, and the Renaissance, had kindly, sympathetic,
and helpful ideas about the training of children. In their view the child did not come first. They fixed their eyes
upon what he was to be in the future and the curriculum they approved and the methods they recommended
were coloured by this attitude. Rousseau, on the other hand, emphasised that the prime factor to be considered
in education is the child and his present nature as a child. He wrote: 'Nature wants children to be children
before they are men. If we deliberately pervert this order, we shall get premature fruits which are neither ripe
nor well flavoured, and which soon decay. ... Childhood has ways of seeing, thinking, and feeling, peculiar to
itself; nothing can be more foolish than to substitute our ways for them.'
So important did this principle seem to him that he repeated it in the Emile. In the Preface to the Emile,
he wrote: 'We know nothing of childhood; and with our mistaken notions, the further we advance, the further
we go astray. The wisest writers devote themselves to what a man ought to know, without asking what a child
is capable of learning. They are always looking for the man in the child without considering what he is before
he becomes a man.... Begin thus by making a more careful study of your scholars, for it is clear that you know
nothing about them.' He carried out this advice in the Emile by expressing the view that education is a
continuous process which begins at birth and which can be divided into four consecutive stages. Thus Book I
deals with the infant; Book II with childhood; Book III with the pre-adolescent between the ages of twelve and
fifteen; and Book IV with adolescence.
These stages of development correspond to the progress made in the history of the human race. To all
intents and purposes, the infant is living at the animal level. The child can be compared with primitive man.
Boyhood is a period of self-sufficiency, whilst at adolescence the sex impulses ripen, social behaviour becomes
possible, and the young person is able to conduct his life according to moral principles.
It is an easy matter to criticise Rousseau's account of the child's development from the standpoint of
present-day child study, but it is essential to bear in mind that in the eighteenth century he was a pioneer in this
field. His exposition was rendered less effective by his adherence to a faculty psychology, but in his defence,
one could urge that this was the dominant view of his time. The continuity of the child's development may
seem to be broken by the emphasis placed upon the emergence of new faculties, for example, at adolescence,
the pupil appears to make a complete break with his former life. One may dispute Rousseau's statement that
the most dangerous period of human life lies between birth and the age of twelve, but when these partial
criticisms have been made, the fact remains that he concentrated attention upon the child and his nature rather
than upon the subject and the pupil's future occupation. This was one of the most revolutionary steps that
educational theory had so far taken.
(From A Short History of Educational Ideas, by S. J. Curtis & M. E. A. Boultwood.)
The Beginnings of Scientific and Technical Education
The higher instruction given to workers was mainly concerned at first with science. As early as 1760 a
professor at Glasgow, named Anderson, had begun to hold evening classes in science, which working men
were encouraged to attend. In his will he left an endowment for a chair of natural philosophy at the University.
Its first occupant was George Birkbeck (1776-1841), who held a degree in medicine. When he started his lectures
in 1799 he found it necessary to have a good deal of apparatus, and while this was being made under his
instructions he became acquainted with a number of Glasgow artisans. He found them so intelligent and so
eager to learn that he resolved to start a course of lectures and experiments in mechanics 'solely for persons
engaged in the practical exercise of the mechanical arts, men whose situation in early life has precluded the
possibility of acquiring even the smallest portion of scientific knowledge.' The lectures proved a great success.
After Birkbeck removed to London in 1804, the lectures were continued by the next occupant of the chair; and
finally, in 1823, the members of the class organised it into a 'Mechanics' Institute'. Its purpose was defined as
'instructing artisans in the scientific principles of arts and manufactures'.
Mechanics' institutes soon sprang up in many parts of the country. They were supported by
subscriptions from the members and by donations from sympathisers. By 1845 there were 610 institutions, with
102,050 members. They were naturally most popular in the manufacturing districts, such as London,
Lancashire, and Yorkshire; but there were a few successful institutes also in such rural centres as Lewes,
Basingstoke, Chichester and Lincoln. Each Institute usually included a library, reading-room, and museum of
models and apparatus. Lectures were provided on mathematics and its applications, and on natural and
experimental science and drawing. Sometimes literary subjects, such as English and foreign languages, were
included. Travelling lecturers and circulating boxes of books helped to keep the smaller institutes in touch with
one another.
The mechanics' institutes played an important part in English education, and yet they were only
partially successful. By 1850 two changes had become noticeable. Their membership consisted more of clerks
and apprentices and middle-class people than of working men, for whose benefit they had been founded; and,
as a corollary of this, their syllabuses had tended to change. There was less purely technical instruction and
more recreational activities and popular lectures. Discussions, debates, and even social courses tended to take
the place of ad hoc courses designed to help artisans. There were several reasons for this change. The artisans
and working classes had not yet received an elementary education, which would form an adequate foundation
on which to build a superstructure of technical education. Reference has already been made to the meagre
limits of education provided by the monitorial schools and other elementary schools. It must also be
remembered that some of the children of the poor hardly went to school at all and that the average length of
school life was in any case only one and a half or two years. Moreover, a great obstacle to the spread of
knowledge at this period was the high cost of newspapers, owing to the Government duty: from 1819 to 1836
there was a stamp duty of 4d. a copy. In a Poor Law Commissioners' Report of 1834 there occurs this passage:
'The dearness of newspapers in this country is an insurmountable obstacle to the education of the poor. I could
name twenty villages within a circuit of a few miles in which a newspaper is never seen from one year's end to
another.' Again, the fees for membership and classes in mechanics' institutes tended to be too high for those for
whom they were originally designed. At the London Mechanics' Institute in 1823 the annual subscription was
fixed at £1, and this seems to have been a fairly usual charge. In 1826 1,477 workmen paid this fee at the London
Institute: but it would be a rather high fee for people of that type even today, and it must have been much more
onerous in the Corn Law days, after the Napoleonic Wars, when wages generally were low. Thus the
mechanics' institutes tended to decline in importance and change in character. But some of them retained much
of their original character and were stimulated into new life by the development of technical education during
the second half of the nineteenth century. For example, the London Mechanics' Institute was the forerunner of
the present Birkbeck College, which caters for evening students but is a constituent part of the University of
London. In the broadest sense, the mechanics' institutes have laid the foundation for the development of our
modern technical schools and colleges.
(From A Short History of English Education, by H. C. Barnard.)
Supposed Mental Faculties and Their Training
The mind is commonly thought to consist of a number of 'Faculties'-such as memory, observation,
perception, reasoning, will, judgment, and so on, pretty much the same as those described by the phrenologists
who feel the 'bumps' on a man's head and describe his capacities according to the chart of the skull which they
have mapped out. It is supposed that a man has a good (or bad) memory in general, and that the exercise of the
memory, say on history dates or Latin verbs, will strengthen the memory as a whole; and that a training in the
observation of wild flowers will sharpen the whole 'faculty of observation' whatever is to be observed; or again
that mathematics, since it exercises the 'faculty of reasoning', equally improves reasoning about politics, social
problems, and religion.
This view as to mental faculties is still widely held by most educated people who have not studied
Psychology, and it still has a harmful influence on education in some ways. Let us consider this 'popular'
psychology carefully, discussing in this chapter the supposed intellectual 'Faculties'.
An example of error in popular views about the mind appears in the idea of a faculty of observation.
One often hears it said that we should train the observation of our pupils; and it is imagined that by training
them to observe certain things we are training them to observe anything and everything. A method of
instruction used in the Army, in which a man had to observe quickly and remember a score of various objects
in a tray, seems to have been based on this idea.
One of my students once gave a lesson on Botany in the presence of an inspector of schools. After the
lesson, the inspector said to her: 'Yes: that was an interesting lesson, but what I want to know is, are you
training the pupil's powers of observation? Would they, for example, be able to tell you the colour of the tie I
was wearing?' The inspector overlooked the fact that the more the pupils had concentrated their attention on
the lesson, the less would they be likely to notice the colour of his tie; and that the more interested they were in
the flowers studied, the less would they be likely to attend to him or his personal appearance. (I should like to
add that this incident occurred a good many years ago. Inspectors are better informed nowadays on
psychological matters.)
Observation, in fact, depends on interest and knowledge. If three friends travel abroad, one an architect,
another a botanist, and the third a stockbroker travelling with them only to take a 'cure' abroad, and interested
only in his health and moneymaking, then the architect is likely to notice the style of houses and other
buildings more than his friends do, because he is specially interested in them. The botanist will observe
especially the flowers and trees of the country more than his friends; and he will actually see more details
because he knows what to look for. Observation is guided by knowledge, and prompted by interest. We have,
however, no reason to suppose that the botanist, trained in such observation, or the architect, keenly observant
of the buildings, will be more observant than the stockbroker of the faces of the foreign people they meet, or the
dress of the women. Indeed, they are more likely to have their attention diverted by the objects of their special
interests. So training in the careful observation of the varied endings of Latin words, or of the changes in
chemical substances in experiments, will have no effect on the observation of pictures or the movements of the
stars.
These popular ideas about the mind and its faculties sometimes have the element of truth in them
which makes it all the harder for the psychologist to eliminate their exaggeration. For example, as to
observation: a careful training in observing plants under the microscope includes a training in method, in the
value of the precise description of what is actually seen (and not merely what one thinks should be there), and
so on: and a student with such experience will gain something from it if he turns to a similar study, e.g. zoology
or geology, especially when he also uses the microscope. Here we see that the adoption of an ideal of truth and
exactitude in such work, or a training in a method of procedure in one kind of work, may result in its
application in a similar kind of work, though it does not always do so.
(From Psychology and its Bearing on Education, by C. W. Valentine.)
The Concept of Number
The long struggle forward in children's thinking comes out very clearly in the development of their
number ideas-the part of Piaget's work which is now best known in this country. It offers a striking illustration
both of the nature of his discoveries and of the basic pattern of mental growth. We can watch how the child
starts from a level of utter confusion, without a notion of what number really means, even though he may be
able to count up to ten or twenty; a level where number is completely mixed up with size, shape, and
arrangement, or constantly shifts according to the way it is subdivided or added up. And we can see how, on
an average two years later, children declare of their own accord that a number must stay the same, whatever
you do with it, so long as you do not actually add to it or take away from it; or that whatever you have done
with it, you can always reverse this and get back to where you started from; or that you can always show it to
be the same by counting; and so on.
The following are a few examples of the ways in which Piaget's experiments bring out this pattern of
growth:
1. Each child was presented with two vessels of the same shape and size containing equal quantities of
coloured liquid. Then the contents of one of them were poured into (a) two similar but smaller vessels, (b)
several such, (c) a tall but narrow vessel, (d) a broad but shallow one. In each case the child was asked whether
the quantity of liquid was still the same as in the untouched vessel.
Piaget found that at a first stage, around 4-5 years, children took it for granted that the quantity of
liquid was now different - either more because the level was higher, or more because there were more glasses,
or less because the new vessel was narrower, or less because the levels in the two or more glasses were lower.
In other words, there was no idea of a constant quantity, independent of its changing forms; if its appearance
changed, the quantity changed and could become either more or less according to what aspect of the new
appearance caught the child's eye. At a second stage, at about 5½-6, children had reached a transitional phase,
in which they wavered uncertainly between the visual appearance and the dawning idea of conservation in
their minds. Thus the quantity of liquid might be regarded as still the same when it was poured into two
smaller glasses, but as greater when it was poured into three. Or as remaining the same if the difference in level
or cross-section in the new vessel was small, but not if it was larger. Or the child might try to allow for the
relation between cross section and level, and experiment uncertainly without reaching any clear conclusion. In
the third stage, 6½-8, children give the correct answers right away, either by reference to the height-width
relation, or by pointing out that the quantity has not been changed: 'It's only been poured out.'
2. As a check on these results, Piaget carried out a similar set of experiments, with beads instead of
liquids. In this way something closer to counting could be introduced (e.g. the child putting beads into a
container one by one as the experimenter did the same into another vessel). Also he could be asked to imagine
that the beads inside each vessel were arranged into the familiar shape of a necklace. The outcome was entirely
the same. At the first stage, the children thought that the quantity of beads would be either more or less,
according as the level looked higher, or the width greater, or there were more vessels, and this happened even
when a child had put one bead into his vessel for each one that the experimenter placed in his. At stage 2 there
is a struggle in the child's mind as before. This may show itself for example by his first going wrong when
comparing levels between a wider and a taller vessel; then correcting himself if asked to think in terms of the
necklaces; but when the beads are spread over two or more containers, still thinking that the necklace will be
longer. At stage 3 once more the children reply correctly and cannot be shaken, however the questions or the
experiments may be varied.
(From The Growth of Understanding in the Young Child, by Nathan Isaacs.)
English in the Primary School
The writing of their own stories was started from the very first, long before the children could read. The
thought came from them, and was interpreted by means of a picture. When the picture was finished, each one
told me what it was about, and I wrote for them a few words about their picture. I have kept a set of books all
by one child, between the ages of four and a half and seven years, which make a most valuable study of infant
progress.
Jeffery came to me at four, a bright little fellow with a mop of wonderful auburn curls and a gift of
romancing which one morning caused him to rush into school declaring that a sow had just bitten his head off
'down by the church'. He showed skill in handling crayons and writing tools straight away, so I made him a
sewn book and began on an illustrated version of 'The House that Jack Built', a rhyme which he knew. He drew
all the pictures, and after each one copied, in his fashion, the appropriate text. Next came a book called 'The
Farmer', in which he himself thought up a situation every day and drew a picture of it; then I came along, wrote
for him what he wanted to say, and he copied it. As soon as this was finished he demanded a new book in
which to write the 'story of my donkey-the one that ran away'. As he had never had a donkey, I recognized his
romantic imagination at work, and was delighted at the thought of it being put to such good use. This book was
full of the most wonderful, lively pictures and it was evident that Jeff was often considerably irked by the
necessity of having to wait for me to write down what he wanted to say. He was by this time only a month or
two over five years old, but was already beginning to read and I used to find him struggling valiantly on bits of
odd paper to write for himself the next part of the tale.
Then we reached the long summer holiday. After it, Jeff returned to me a much more mature little boy,
and though still only five and a half, was reading very well when we had been a month or so into the new term.
His new book was called 'Jeff the Brave Cowboy', and this time he wrote for himself on a piece of paper. I
corrected it, and the fair copy was made into his book. This book was really remarkable for the sudden leap
forward he made in his use of English. It was vigorous and economical, with adjectives and adverbs appearing
quite naturally in the right places, but only where they were needed for extra emphasis. The undoubted success
of 'The Brave Cowboy' put Jeff's feet firmly on the road to authorship. He was away on 'The Dog Fight' before
the previous one was really finished, because, having seen a dog fight on his way to school one morning, he
simply could not wait to start putting down his memory of it. This was a change from the purely imaginative
work of his previous creations and very good as English practice, for he discovered not only the value of
possessing powers of keen observation, but of knowing the language that allowed one to record what one had
noticed in detail.
It appeared that Jeff's mother, a glorious red-head like himself, had thrown a bucket of water over the
fighting dogs. While he was drawing a picture of this, giving his mother unmistakable hair, he was thinking
deeply about it, for he must have been wondering how to record the mood of the moment in words. He came to
me for advice. Although he was still only five years old, I explained to him the use of conversation marks,
thinking that that was what he had asked. He was not satisfied.
'Does that make it say how Mummy said it?' he asked. 'She didn't just say "Stop it". She said (miming
the throwing of the water and stamping his foot) "STOP IT!" '
After Christmas Jeff, now six, wanted to write a pirate story. After reading some pirate books, he started
on 'Jeff and the Flying Bird', this time writing his own book without any correction or other interference from
me.
One day (Jeff) stood on the prow of his ship, he lookted in his telscope.
What did he see ?
He saw a ship sailing across the sea to Jeffs ship.
Attack shouted Jeff.
At seven he wrote 'Jeff the Space Man', text in pencil but pictures in coloured inks. At the end of his first
session with his new book, he came to me to show it. I was sitting at my desk, he standing by the side of it. I
took the book from him, admired the picture, and read aloud:
Jeff pulled the lever and pressed the button. The space ship, swished up into the air and was gene
I happened to glance at the real Jeff, who was pressing with all his might an imaginary button on the
side of my desk, after which he held the pit of his stomach and gazed at the school ceiling.
'Gosh,' he said, in an awed voice, 'can't you just hear it?' I could.
'The ssspace ssship ssswissshed ....' Of course! After that Jeff went on exercising his imagination, his
observation, and his increasing command of English, in book after book after book. His last was a treatise on
freshwater fish and how to catch them. At eleven-plus his I.Q. showed him to be only of good average
intelligence. I mention this just to prove that he was not the brilliantly academic type of child from whom one
would expect this lively interest in English as a matter of course.
(From An Experiment in Education, by Sybil Marshall.)
Britain and Japan: Two roads to higher education
Britain and Japan are the two great pioneers of industrialism and therefore of the modern world. Britain
was the pioneer industrial nation of the Western, European-dominated world, Japan of the Eastern, non-
European world and, to many eyes, the hope of the Third World. The countries have always had much in common.
Both groups of islands off an initially more civilized and powerful continent, they had to defend themselves
from military and cultural conquest and to that end developed a powerful navy and an independence of mind
which made them increasingly different from their continental neighbours. Behind their sea defences they were
able to pursue their own ideals and ambitions which enabled them in the end to originate changes in industry
and society which, because they brought wealth and power, others tried to imitate. The British at the height of
their imperial power and economic domination recognized in the emerging Japanese a fellow pioneer and an
ally. They called her ‘the Britain of the East’ and in the 1902 Treaty were the first to recognize Japan as a world
power.
Yet the two countries took utterly different roads to industrialism, power and wealth. Britain, the first
industrial nation, evolved slowly without knowing - because nobody then knew what an Industrial Revolution
was - where she was going or what the end of modernization would be. Japan, coming late to economic growth
in a world which saw and felt only too clearly what the gains and dangers of industrialism were, adopted it
self-consciously and with explosive and revolutionary speed. And they still bear the marks of these differences
of origin, timing and approach. Britain had the first Industrial Revolution because she had the right kind of
society to generate it; but for that very reason she was not forced to change her society as much as later
developing countries, and she now has the wrong kind of society for sustaining a high and continuing rate of
economic growth. That does not mean that she has the wrong kind of society to provide a highly civilized and
comfortable life for her people. On the contrary, just as the British were pioneers of the industrial society
dominated by a gospel of work so they may now be the pioneers of the post-industrial society dedicated to the
gospels of leisure and welfare.
Japan on the other hand has astonished the world by the degree to which she was prepared to change
her society in order to industrialize, and the speed at which, in less than a hundred years, she transformed
herself from a feudal society of samurai, artisans and peasants into one of the most efficient industrial and
egalitarian meritocracies in the world. However, it must be said that Tokugawa Japan was no ordinary feudal
society, and had hidden advantages for industrial development which most feudal societies lack: one of the
most urbanized populations in the world, with a highly educated ruling class of efficient bureaucrats, large
numbers of skilled craftsmen and sophisticated merchants, and a more literate populace than most other
countries, even in the West. But the fact remains that the leaders of the Meiji Restoration were prepared to
abolish feudalism, establish equality before the law, and make everyone, rich or poor, samurai, worker or
peasant, contribute to the survival and development of the country.
If the British and Japanese roads to industrialism were different, their two roads to higher education
were even more different. The British educational road was far slower, more indirect and evolutionary even
than their road to industrial development and, indeed, in the early stages had little connection with economic
growth. The ancient universities, particularly in England as distinct from Scotland, had become little more than
finishing schools for young gentlemen, chiefly landowners’ sons and young clergymen. They did not conduct
research and not one of the major inventions of the early Industrial Revolution originated in a university.
Oxford and Cambridge were even less important to English society when the Industrial Revolution began than
they were over a century earlier - many of the ruling elite came to find little to interest them in the repetition of
classical Greek and Latin texts.
When Japan was beginning its great transformation under the Meiji, the main contribution of the British
universities to economic growth was still in the future. It may seem surprising that, in relation to industrial
development and modernization, British higher education in the late 19th century was no more advanced than
the new Japanese system. By 1900 university students in both Britain and Japan were less than one per cent of
the student age group. In both countries higher education was exclusively for the elite, but whereas in Britain
the elite graduates went predominantly into the home civil service, colonial government and the traditional
professions, in Japan they went not only into these but still more into industry and commerce and the newer
technological professions.
This was because Japanese higher education, like the whole modern education system, was created by
the Meiji reformers for the express purpose of modernizing Japan. Japan, contrary to popular belief in the West,
did not start from scratch. Under the Tokugawa there were higher schools and colleges based on Confucian
learning, no more out of touch with the needs of a traditional ruling elite than were Oxford and Cambridge. But
the difference was that the Meiji elite knew that higher education had to be changed, and changed radically if
Japan was to be transformed into a modern nation able to expel the barbarians and become a strong and
wealthy country. Under the Fundamental Code of Education of 1872 they set out to establish a modern system
of education with an elementary school within reach of every child, a secondary school in every school district,
and a university in each of eight academic regions. In the next forty years, Japanese higher education expanded
explosively. By 1922 there were 6 imperial and 20 non-imperial universities and 235 other higher institutions.
Moreover, the whole system was geared to industrialization and economic growth, to the production of
bureaucrats, managers, technologists and technicians. Whereas in Britain the sons of the elite at this stage
avoided industry, which was left to the largely self-educated, trained in industry itself, in Japan the sons of the
Shizoku, the ex-samurai who formed the majority of students in the universities, went indiscriminately into the
service of the state and of private industry.
Britain too began a remarkable expansion of higher education in the late 19th century. New universities,
more responsive to the scientific and industrial needs of their regions, came into existence in nearly all the great
cities which did not already have them: Manchester, Leeds, Liverpool, Birmingham, Bristol, Newcastle,
Nottingham, Sheffield, and so on. These new civic universities were much more dedicated to scientific and
technological research and had a provocative and stimulating effect on the older universities too, and Oxford
and Cambridge came to develop science and engineering and other modern subjects. Thus at the time Japan
was using higher education as an instrument of industrialization, Britain began to do the same.
The road remained substantially different, however. Unlike the Japanese, the great majority of British
managers never went to university. Some went to a variety of technical colleges which grew up to meet the
demand which the universities had so long neglected, but the great majority were trained on the job with the
help of evening schools where they learned to pass the examinations of the new professional bodies like the
Institution of Mechanical Engineers or the Institute of Chemistry.
Thus the British road to industrial higher education was largely a part-time road. Most modern
universities began as technical or other colleges, mostly for part-time students. This helps to explain why
Britain, with one of the smallest university systems amongst the advanced countries, could sustain a
competitive industrial economy, and even remain the world’s largest exporter of manufactured goods down to
the First World War.
During the 1960s the number of British universities nearly doubled, from 25 to 45, and in addition 30
polytechnics were formed from existing technical colleges. But British industry still depends to a larger extent
than any other advanced country on part-time education and training on the job.
Japan by contrast believes in full-time higher education, and has far larger numbers in universities and
colleges. Since the Second World War, initially under the stimulus of the American Occupation, the system has
grown from 64 universities and 352 other colleges with about 250,000 students in 1948 to 431 universities and
580 other colleges with nearly 2 million students in 1977, equal to 38 per cent of the age group. In terms of full-
time students Britain is still only on the threshold of mass higher education; Japan is already moving towards
universal higher education.
Most educationists still believe that if only the British would spend as much on education as the
Japanese they could achieve the same rate of economic growth. But perhaps too much influence is claimed for it
by educationists. Could it not be said that education is an effect rather than a cause - or rather, that it is an effect
before it can become a cause? It is an effect of the social values and social structure of the society which creates
and provides it.
In other words, the British and the Japanese took two different roads to higher education and to modern
industrialism because they were two different kinds of society; with different aims and ambitions, different
moral and social values, different principles of social connexion and of social structure.
In one sense the aims and objectives of the two societies were very similar. They both harnessed
personal ambition to the drive to wealth and power. The key to the British Industrial Revolution was social
ambition, ‘keeping up with the Joneses’, the desire to rise in society by making money and to confirm that rise
by spending money on conspicuous consumer goods. In a similar way, the Japanese of the Meiji Restoration
strove to become rich and powerful in order to expel the barbarians and restore the country’s independence.
Yet the two kinds of ambition were fundamentally different. The British landlords, farmers, industrialists and
workers were self-seeking and individualistic in their ambition, and national economic growth was a by-
product of their individual success. The Japanese samurai, merchants, artisans, and peasants strove to succeed,
but success was not measured as much by personal wealth, status and power, as by the admiration and
applause of one’s family, one’s colleagues and one’s fellow citizens. Individual prosperity was a by-product of
the group’s. The British (and Western) kind of ambition may be called ‘acquisitive individualism’ and the
Japanese kind ‘participative self-fulfilment’.
Acquisitive individualism in Britain has deep roots in English society. Even now the British are more
concerned with their individual share of the national cake than with increasing its size.
The Japanese by contrast have never been individualists in this sense. They have always put the group -
the family, the village, the feudal han, the nation - before the individual and his material gain. The individual
has found his reward in and through the group and in loyalty to its leader, who represents the group to the
outside world.
This ideal of participative self-fulfilment has deep roots in Japanese society and goes back to the nature
of the Japanese family, the ie. In Western terms, ie is best translated as ‘household’ rather than ‘family’, since it
was more open to newcomers such as sons-in-law than the Western family, and outgoing members who
married into another ie ceased to belong. Its major feature was that every member, however new, strove for
respect in the eyes of the household and received protection and loyalty in return. This was the origin of that
participative self-fulfilment, that striving for success in and through whichever group one came to belong to,
which is the secret of Japanese unselfish ambition and co-operative loyalty.
Yet there are limits to the group responsibility produced by the ie tradition. Because it was rooted in a
system of group rivalries which drew a sharp distinction between one’s own group and all the others - which is
why it is difficult, for example, to unite workers from different companies in the same trade union - there is less
sense of responsibility in Japan for the welfare of those who do not belong to the same group. That is why
welfare, social security, pensions, medical services and leisure facilities are mainly organized by the large
corporations for their own workers, and the state welfare system is still undeveloped compared with Britain
and Europe.
Britain, despite its acquisitive individualism, always had another tradition, an aristocratic tradition of
paternalism or ‘noblesse oblige’ which, oddly enough, remained enshrined in the older, aristocratic universities
of Oxford and Cambridge while acquisitive individualism was creating or capturing the newer universities of
the industrial society. This tradition found its way into the university settlement movement in the slums of
London and other great cities, into housing improvement and factory reform, into adult education for the
working class, into social work, even into British non-Marxist socialism, and into the welfare state. It was a
tradition which went beyond all groups, whether of family, trade, profession or class. It asked in effect, ‘who is
my neighbour?’ and it answered ‘any member of society who needs my help’. This is the hidden principle
which has saved Britain from the excesses of acquisitive individualism.
Although British trade unions, employers and professional bodies today fight each other for a bigger
share of the cake regardless of what happens to the cake as a whole, there is a gentleman’s agreement,
stemming from that other, more gentlemanly tradition, that the welfare of the poor, the disabled, the elderly,
the sick and the unemployed comes first. For the same reason, economic growth comes second. Welfare, leisure,
a clean environment and a civilized social life are now more important acquisitions to the British than a larger
post-tax income. Acquisitive individualism has shifted its ground, from material possessions to the non-
material goods of health, education for pleasure, an enjoyable environment and a more leisurely and
pleasurable life.
Britain and Japan took two different roads to higher education and to industrialism because they were
two very different societies with different social structures and ideals. If the British could borrow some of their
unselfish self-fulfilment and co-operative efficiency from the Japanese and the Japanese some of their concern
for social welfare and public amenity from the British, perhaps East and West could at last meet in mutual
understanding and each have the best of both worlds.
Enrolments in universities and colleges in Britain and Japan as percentage of the student age group

Year     Britain    Japan
1885     (1.0)      0.5
1900     1.2        0.9
1920     -          1.8
1924     2.7        -
1930     -          3.1
1938     2.7        -
1940     -          4.0
1955     6.1        8.7
1960     8.3        10.2
1965     8.9        15.9
1970     13.8       18.7
1974     14.0       27.9
1977     -          33.9
1979     13.9       -
Note:
The British figures include full-time advanced students in universities, teacher training colleges and further education; the Japanese figures include those in universities, two-year colleges and higher technical colleges (but exclude higher vocational schools).

What type of student do you have to teach?

Most lecturers try to help students develop their understanding. But understanding a foreign language
is not the same as understanding why someone is upset or understanding electromagnetism or understanding
history. It is not to be expected therefore that the same teaching methods will be appropriate to these different
kinds of understanding.
Most forms of understanding are expressed by concepts which differ from everyday ones. For example,
we all know that suitcases get heavier the longer you carry them, but in science this is described in terms of
constant weight plus increasing fatigue. The concept “weight” is introduced and laid alongside the
commonsense concept of “heaviness”. Similarly we all know that time passes quickly when we are absorbed
and slowly when we are bored, but science tells us that this is an illusion; time really ticks away at a steady rate.
Note that conceptual change should not be the aim, as is sometimes suggested, since people still also need their
common sense. The aim is to add new sets of concepts and to explain when to use which set.
But “understanding” is not the only kind of learning which students need to master. Skills, for example, are learned quite differently: instruction, demonstration and error-correction are the key teaching activities - quite different from those needed to reach understanding - while practice is the main learning activity.
Students also have to memorise information and be able to recall it when required, as well as acquire
several other kinds of learning (such as know-how and attitudes and values) each of which calls for different
teaching methods. So learning-centred teaching includes a conscious matching of teaching methods to the
intended kind of learning.
While good teaching involves, among other things, helping students to achieve their chosen learning
goals, the picture is further complicated by the different learning styles adopted by different groups of
students.
Many ways of categorising and modelling students as learners have been suggested, of which the
following are as useful as any, particularly in connection with understanding. (Differences between learners’
natural learning styles are not so significant when skills are being taught, since the appropriate style is
determined more by the activity involved than by students’ natural capabilities.)
Some students are “holists”, which means they like to take an overview of a subject first and then fill in
the details and concepts in their own way.
Others are “serialists” who like to follow a logical progression of a subject, beginning at the beginning.
Educational researcher Gordon Pask structured some teaching materials in both a holist and a serialist manner,
and then tested previously-sorted cohorts of students using them. He found that the best performance of those
who were mismatched (i.e. holist students with serialist material, and vice versa) was worse than the worst
performance of those who were matched to the learning materials.
This seems to imply, for example, that educational textbooks - which are naturally serialist in character -
should include signposts, summaries, alternative explanations of difficult concepts, explanatory figure captions,
a glossary of terms, a good index, etc, to help holist students find their own way through them. Similarly
projects, which are naturally holist in character, since they are usually specified in terms of a final goal, can
cause problems for serialists, who may therefore need step-by-step guidance.
Another group of students are “visualisers” whose learning is helped by the inclusion of diagrams,
pictures, flow-charts, films, etc. Others are “verbalisers” and prefer to listen, read, discuss, argue, attend
tutorials and write during their conceptual development. And some are “doers” and find that overt practical
activity is best. The saying that “to hear is to forget, to see is to remember, but to do is to understand” is only
true for “doers”. With a typical mix of students, attempts should be made to cater for each preferred style.
It is well known nowadays that for the development of “understanding” and for the memorisation of
information it is important that students adopt a “deep approach” to their learning, rather than a “surface
approach”. The deep approach refers to an intention to develop their understanding and to challenge ideas,
while the “surface approach” is the intention to memorise information and to follow instructions. Although
students are naturally inclined towards one approach rather than the other - often with a new subject the
inclination is towards the surface approach - this can vary from subject to subject and can usually be changed
by the teaching they receive. Overloading, for example, will encourage the surface approach; stimulating
interest may encourage the deep approach. Given the deep approach, even good lectures can make a
considerable contribution to students’ “understanding”.
Recently the need to encourage the deep approach in students has been allowed to dominate the choice
of teaching method, sometimes at the expense of effective teaching. Constructivism in science teaching, for
example, in which students are encouraged to devise their own explanations of phenomena, certainly tends to
encourage the deep approach, but it can also leave students with misconceptions. Similarly, though problem-
based learning is usually popular with students, it teaches “know-how” rather than “understanding”, unless
explicit conceptual guidance is also given.
The fact that students have different preferred learning styles also has important implications for course
evaluation through feedback. It often seems to be assumed that students are a homogeneous bunch and that
therefore a majority opinion condemning a certain aspect of a course justifies changing it for the future. But this
can well be a mistake. If a course is well matched, say, to “holist verbalisers” it is unlikely to be found very
helpful to “serialist visualisers”. In other words, feedback is likely to reveal as much about the students as
about the course or lecturer, and can be quite misleading unless it is properly analysed in terms of the preferred
learning styles of the particular cohort of students.
Indeed, student feedback about the teaching of “understanding” can, in any case, be quite misleading,
since students cannot be expected to judge what has been helpful to them until much of the necessary
conceptual development has occurred. Only after “the penny has dropped” is such feedback likely to be
reliable. Similarly, favourable feedback about the necessary but tedious practising of important “skills” cannot
normally be expected.
These considerations are all aspects of learning-centred teaching, with which all lecturers should, in due
course, become familiar. Innovation in education without taking these matters into consideration is at best
cavalier, at worst irresponsible, for it is the students who suffer from teachers’ ill-founded experiments.
John Sparkes, Times Higher Education Supplement, February 6th, 1998.

Spoon-fed feel lost at the cutting edge

Before arriving at university students will have been powerfully influenced by their school’s approach
to learning particular subjects. Yet this is only rarely taken into account by teachers in higher education,
according to new research carried out at Nottingham University, which could explain why so many students
experience problems making the transition.
Historian Alan Booth says there is a growing feeling on both sides of the Atlantic that the shift from
school to university-style learning could be vastly improved. But little consensus exists about who or what is at
fault when the students cannot cope. “School teachers commonly blame the poor quality of university teaching,
citing factors such as large first-year lectures, the widespread use of inexperienced postgraduate tutors and the
general lack of concern for students in an environment where research is dominant in career progression,” Dr
Booth said.
Many university tutors on the other hand claim that the school system is failing to prepare students for
what will be expected of them at university. A-level history in particular is seen to be teacher-dominated,
creating a passive dependency culture.
But while both sides are bent on attacking each other, little is heard during such exchanges from the
students themselves, according to Dr Booth, who has devised a questionnaire to test the views of more than 200
first-year history students at Nottingham over a three-year period. The students were asked about their
experience of how history is taught at the outset of their degree programme. It quickly became clear that
teaching methods in school were pretty staid.
About 30 per cent of respondents claimed to have made significant use of primary sources (though few felt very confident in handling them), and this had mostly been in connection with project work. Only 16 per cent had
used video/audio; 2 per cent had experienced field trips and less than 1 per cent had engaged in role-play.
Dr Booth found students and teachers were frequently restricted by the assessment style which remains
dominated by exams. These put obstacles in the way of more adventurous teaching and active learning, he said.
Of the students in the survey just 13 per cent felt their A-level course had prepared them very well for work at
university. Three-quarters felt it had prepared them fairly well.
One typical comment sums up the contrasting approach: “At A-level we tended to be spoon-fed with
dictated notes and if we were told to do any background reading (which was rare) we were told exactly which
pages to read out of the book”.
To test this further the students were asked how well they were prepared in specific skills central to
degree level history study. The answers reveal that the students felt most confident at taking notes from
lectures and organising their notes. They were least able to give an oral presentation and there was no great
confidence in contributing to seminars, knowing how much to read, using primary sources and searching for
texts. Even reading and taking notes from a book were often problematic. Just 6 per cent of the sample said they
felt competent at writing essays, the staple A-level assessment activity.
The personal influence of the teacher was paramount. In fact individual teachers were the centre of
students’ learning at A level with some 86 per cent of respondents reporting that their teachers had been more
influential in their development as historians than the students’ own reading and thinking.
The ideal teacher turned out to be someone who was enthusiastic about the subject; a good clear
communicator who encouraged discussion. The ideal teacher was able to develop students’ involvement and
independence. He or she was approachable and willing to help. The bad teacher, according to the survey,
dictates notes and allows no room for discussion. He or she makes students learn strings of facts; appears
uninterested in the subject and fails to listen to other points of view.
No matter how poor the students judged their preparedness for degree-level study, however, there was
a fairly widespread optimism that the experience would change them significantly, particularly in terms of
their open mindedness and ability to cope with people.
But it was clear, Dr Booth said, that the importance attached by many departments to third-year
teaching could be misplaced. “Very often tutors regard the third year as the crucial time, allowing
postgraduates to do a lot of the earlier teaching. But I am coming to the conclusion that the first year at
university is the critical point of intervention”.
Alison Utley, Times Higher Education Supplement, February 6th, 1998.
GEOLOGY/GEOGRAPHY
The Age of the Earth
The age of the earth has aroused the interest of scientists, clergy, and laymen. The first scientists to
attack the problem were physicists, basing their estimates on assumptions that are not now generally accepted.
G. H. Darwin calculated that 57 million years had elapsed since the moon was separated from the earth, and
Lord Kelvin estimated that 20 - 40 million years were needed for the earth to cool from a molten condition to its
present temperature. Although these estimates were much greater than the 6,000 years decided upon some two
hundred years earlier from a Biblical study, geologists thought the earth was much older than 50 or 60 million
years. In 1899 the physicist Joly calculated the age of the ocean from the amount of sodium contained in its
waters. Sodium is dissolved from rocks during weathering and carried by streams to the ocean. Multiplying the
volume of water in the ocean by the percentage of sodium in solution, the total amount of sodium in the ocean
is determined as 16 quadrillion tons. Dividing this enormous quantity by the annual load of sodium
contributed by streams gives the number of years required to deposit the sodium at the present rate. This
calculation has been checked by Clark and by Knopf with the resulting figure in round numbers of
1,000,000,000 years for the age of the ocean. This is to be regarded as a minimum age for the earth, because not all the sodium carried by streams is now in the ocean and the rate of deposition has not been constant. The
great beds of rock salt (sodium chloride), now stored as sedimentary rocks on land, were derived by
evaporation of salt once in the ocean. The annual contribution of sodium by streams is higher at present than it
was in past geological periods, for sodium is now released from sedimentary rocks more easily than it was from
the silicates of igneous rocks before sedimentary beds of salt were common. Also, man mines and uses tons of
salt that are added annually to the streams. These considerations indicate that the ocean and the earth have
been in existence much longer than 1,000,000,000 years, but there is no quantitative method of deciding how
much the figure should be increased.
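Joly's method reduces to a single division. As a minimal sketch of the arithmetic (in Python; the annual sodium load is not stated in the passage and is back-calculated here so as to agree with the billion-year figure attributed to Clark and Knopf):

    # Joly's "sodium clock": age = total sodium in the ocean / annual input.
    # The 16 quadrillion tons is quoted in the passage; the annual load of
    # 16 million tons is an assumed figure, chosen to be consistent with
    # the 1,000,000,000-year result attributed to Clark and Knopf.
    total_sodium_tons = 16e15   # sodium dissolved in the ocean
    annual_load_tons = 16e6     # assumed yearly delivery by streams
    age_years = total_sodium_tons / annual_load_tons
    print(f"Minimum age of the ocean: {age_years:.1e} years")  # ~1.0e+09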
Geologists have attempted to estimate the length of geologic time from the deposition of sedimentary
rocks. This method of measuring time was recognized about 450 B.C. by the Greek historian Herodotus after
observing deposition by the Nile and realizing that its delta was the result of repetitions of that process.
Schuchert has assembled fifteen such estimates of the age of the earth ranging from 3 to 1,584 million years with
the majority falling near 100 million years. These are based upon the known thicknesses of sedimentary rocks
and the average time required to deposit one foot of sediment. The thicknesses as well as the rates of deposition
used by geologists in making these estimates vary. Recently Schuchert has compiled for North America the
known maximum thicknesses of sedimentary rocks deposited since the beginning of Cambrian time and found
them to be 259,000 feet, about 50 miles. This thickness may be increased as other information accumulates, but
the real difficulty with the method is to decide on a representative rate of deposition, because modern streams
vary considerably in the amount of sediment deposited. In past geological periods the amount deposited may
have varied even more, depending on the height of the continents above sea level, the kind of sediment
transported, and other factors. But even if we knew exact values for the thickness of Pre-Cambrian and Post-Cambrian rocks and for the average rate of deposition, the figure so obtained would not give us the full
length of time involved. At many localities the rocks are separated by periods of erosion called unconformities,
during which the continents stood so high that the products of erosion were carried beyond the limits of the
present continents and “lost intervals” of unknown duration were recorded in the depositional record. It is also
recognized that underwater breaks or diastems caused by solution due to acids in sea water and erosion by
submarine currents may have reduced the original thickness of some formations. Geologists appreciated these
limitations and hoped that a method would be discovered which would yield convincing evidence of the vast
time recorded in rocks.
Unexpected help came from physicists studying the radioactive behavior of certain heavy elements
such as uranium, thorium, and actinium. These elements disintegrate with the evolution of heat and give off
particles at a constant rate that is not affected by high temperatures and great pressures. Helium gas is
liberated, radium is one of the intermediate products, and the stable end product is lead with an atomic weight
different from ordinary lead. Eight stages have been established in the radium disintegration series, in which
elements of lower atomic weights are formed at a rate which has been carefully measured. Thus, uranium with
an atomic weight of 238 is progressively changed by the loss of positively charged helium atoms each having an
atomic weight of 4 until there is formed a stable product, uranium lead with an atomic weight of 206. Knowing
the uranium-lead ratio and the rate at which atomic disintegration proceeds, it is possible to determine the time
when the uranium mineral crystallized and the age of the rock containing it. By this method the oldest rock,
which is of Archeozoic age, is 1,850,000,000 years old, while those of the latest Cambrian are 450,000,000 years
old. Allowing time for the deposition of the early Cambrian formations, the beginning of the Paleozoic is
estimated in round numbers at 500,000,000 years ago. This method dates the oldest intrusive rock thus far
found to contain radioactive minerals. But even older rocks occur on the earth’s surface, for they existed when
these intrusions penetrated them. How much time should be assigned to them, we have no accurate way of
judging. Recently attention has centered upon the radioactivity of the isotopes of potassium, which
disintegrate into calcium with an atomic weight of 40 instead of 40.08 of ordinary calcium. On this basis A. K.
Brewer has calculated the age of the earth at not more than 2,500,000,000 years, but there is some question whether
this method has the same order of accuracy as the uranium-lead method. Geologists are satisfied with the time
values now allotted by physicists for the long intervals of mountain-making, erosion, and deposition by which
the earth gradually reached its present condition.
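In modern notation the uranium-lead relationship described above is the first-order decay law, from which the age follows as t = ln(1 + Pb/U) / lambda, where lambda is ln 2 divided by the half-life. The sketch below (Python) is one hedged illustration: the U-238 half-life used is the modern value and the sample ratio is invented for the example, neither figure coming from the passage.

    import math

    # Age of a uranium mineral from its lead-206 / uranium-238 atom ratio,
    # via the first-order decay law. The half-life is the modern U-238
    # value and the 0.33 ratio is illustrative only (both are assumptions).
    HALF_LIFE_U238 = 4.47e9                     # years
    DECAY_CONST = math.log(2) / HALF_LIFE_U238  # lambda, per year

    def age_from_ratio(pb_per_u):
        """Crystallization age in years for a given Pb/U atom ratio."""
        return math.log(1.0 + pb_per_u) / DECAY_CONST

    # A ratio of about 0.33 reproduces, roughly, the 1,850,000,000-year
    # age quoted for the oldest Archeozoic rock.
    print(f"{age_from_ratio(0.33):.2e} years")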
DIVISIONS OF GEOLOGICAL TIME
The rocks of the accessible part of the earth are divided into five major divisions or eras, which are in
the order of decreasing age, Archeozoic, Proterozoic, Paleozoic, Mesozoic, and Cenozoic. Superposition is the
criterion of age. Each rock is considered younger than the one on which it rests, provided there is no structural
evidence to the contrary, such as overturning or thrust faulting. As one looks at a tall building there is no doubt
in the mind of the observer that the top story was erected after the one on which it rests and is younger than it
in order of time. So it is in stratigraphy in which strata are arranged in an orderly sequence based upon their
relative positions. Certainly the igneous and metamorphic rocks at the bottom of the Grand Canyon are the
oldest rocks exposed along the Colorado River in Arizona and each successively higher formation is relatively
younger than the one beneath it. The rocks of the Mississippi Valley are inclined at various angles so that
successively younger rocks overlap from Minnesota to the Gulf of Mexico. Strata are arranged in recognizable
groups by geologists utilizing a principle announced by William Smith in 1799. While surveying in England
Smith discovered that fossil shells of one geological formation were different from those above and below.
Once the vertical range and sequence of fossils are established the relative position of each formation can be
determined by its fossil content. By examining the succession of rocks in various parts of the world it was found
that the restriction of certain life forms to definite intervals of deposition was world wide and occurred always
in the same order. Apparently certain organisms lived in the ocean or on the land for a time, then became
extinct and were succeeded by new forms that were usually higher in their development than the ones whose
places they inherited. Thus, the name assigned to each era implies the stage of development of life on the earth
during the interval in which the rocks accumulated. The eras are subdivided into periods, which are grouped together in the table to indicate the highest forms of life during that interval. As the rocks of increasingly younger
periods are examined higher types of life appear in the proper order, invertebrates, fish, amphibians, reptiles,
mammals, man. From this it is evident that certain fossil forms limited to a definite vertical range may be used
as index fossils of that division of geological time. Also, in this table are given for each era estimates of the
beginning, duration, and thickness of sediments, based largely upon a report of a Committee of the National
Research Council on the Age of the Earth. At the close of and within each era widespread mountain-making
disturbances or revolutions took place, which changed the distribution of land and sea and affected directly or
indirectly the life of the sea and the land. The close of the Paleozoic era brought with it the rise of the
Appalachian Mountains. It has been estimated that only 3 per cent of the Paleozoic forms of life survived and
lived on into the Mesozoic era. The birth of the Rocky Mountains at the close of the Mesozoic was accompanied
by widespread destruction of reptilian life. Faunal successions responded noticeably to crustal disturbances.
UNCONFORMITIES. In subdividing rocks geologists have been guided by the periods of erosion
resulting from extensive mountain construction. Uplift of the continents causes the shallow seas to withdraw
from land thereby deepening the ocean and allowing erosion to start on the evacuated land areas. Since all the
oceans are connected, sea level throughout the world was affected in many instances, leaving a record of crustal
movements in the depositional history of each of the continents. At many places the rocks of one era are
separated from those of another by unconformities or erosion intervals, in which miles of rocks were eroded
from the crests of folds before sedimentation was resumed on the truncated edges of the mountain structure.
There are four stages in the development of an angular unconformity, so named because there is an angular
difference between the bedding of the lower series and that of the overlying series. If the series above and
below an unconformity consist of marine formations, four movements of the area relative to sea level took
place. In stage 1 the sandstones and shales comprise a conformable marine series, which was laid down by
continuous deposition with the bedding of one formation conforming to the next. We have seen that the
deposition of 24,000 feet of sediment requires repeated sinking of the area below sea level. In stage 2 the region
was folded and elevated above sea level, so that erosion could take place. Since erosion starts as soon as the
land develops an effective slope for corrosion, there is no proof that this structure ever stood 24,000 feet high.
But the evidence is clear that 24,000 feet were eroded to produce the flat surface, shown in stage 3. In order that the overlying marine series could be deposited, the area had to be again submerged below sea level. Since the
region now stands above sea level, a fourth movement is necessary. In some cases crustal movement does not
tilt or fold the beds, but merely elevates horizontal strata so that erosion removes material and leaves an
irregular surface on which sedimentation may be resumed with the deposition of an overlying formation
parallel to the first. An erosion interval between parallel formations is a disconformity. But not all
unconformities and disconformities are confined to the close of eras. Local deformation and uplift caused
erosion between formations within the same era and within the same period. In the Grand Canyon region
Devonian rocks rest on the eroded surface of Cambrian formations. At other North American and European
localities Ordovician and Silurian rocks occupy this interval, so that the disconformity within the Paleozoic era
at this locality represents two whole periods. It is only by carefully tracing the sequence of rocks of one region
into another that the immensity of geological time can be appreciated from stratigraphy.
Oils
There are three main groups of oils: animal, vegetable and mineral. Great quantities of animal oil come
from whales, those enormous creatures of the sea which are the largest remaining animals in the world. To
protect the whale from the cold of the Arctic seas, nature has provided it with a thick covering of fat called
blubber. When the whale is killed, the blubber is stripped off and boiled down, either on board ship or on
shore. It produces a great quantity of oil which can be made into food for human consumption. A few other
creatures yield oil, but none so much as the whale. The livers of the cod and the halibut, two kinds of fish, yield
nourishing oil. Both cod liver oil and halibut liver oil are given to sick children and other invalids who need
certain vitamins. These oils may be bought at any chemist’s.
Vegetable oil has been known from antiquity. No household can get on without it, for it is used in
cooking. Perfumes may be made from the oils of certain flowers. Soaps are made from vegetable and animal
oils.
To the ordinary man, one kind of oil may be as important as another. But when the politician or the
engineer refers to oil, he almost always means mineral oil, the oil that drives tanks, aeroplanes and warships,
motor-cars and diesel locomotives; the oil that is used to lubricate all kinds of machinery. This is the oil that has
changed the life of the common man. When it is refined into petrol it is used to drive the internal combustion
engine. To it we owe the existence of the motorcar, which has replaced the private carriage drawn by the horse.
To it we owe the possibility of flying. It has changed the methods of warfare on land and sea. This kind of oil
comes out of the earth. Because it burns well, it is used as fuel and in some ways it is superior to coal in this
respect. Many big ships now burn oil instead of coal. Because it burns brightly, it is used for illumination;
countless homes are still illuminated with oil-burning lamps. Because it is very slippery, it is used for
lubrication. Two metal surfaces rubbing together cause friction and heat; but if they are separated by a thin film
of oil, the friction and heat are reduced. No machine would work for long if it were not properly lubricated. The
oil used for this purpose must be of the correct thickness; if it is too thin it will not give sufficient lubrication,
and if it is too thick it will not reach all parts that must be lubricated.
The existence of oil wells has been known for a long time. Some of the Indians of North America used to
collect and sell the oil from the wells of Pennsylvania. No one, however, seems to have realised the importance
of this oil until it was found that paraffin-oil could be made from it; this led to the development of the wells and
to the making of enormous profits. When the internal combustion engine was invented, oil became of
worldwide importance.
What was the origin of the oil which now drives our motor-cars and air-craft? Scientists are confident
about the formation of coal, but they do not seem so sure when asked about oil. They think that the oil under
the surface of the earth originated in the distant past, and was formed from living things in the sea. Countless
billions of minute sea creatures and plants lived and sank to the sea bed. They were covered with huge deposits
of mud and, by processes of chemistry, pressure and temperature, were changed through long ages into what
we know as oil. For these creatures to become oil, it was necessary that they should be imprisoned between
layers of rock for an enormous length of time. The statement that oil originated in the sea is confirmed by a
glance at a map showing the chief oilfields of the world; very few of them are far distant from the oceans of
today. In some places gas and oil come up to the surface of the sea from its bed. The rocks in which oil is found
are of marine origin too. They are sedimentary rocks, rocks which were laid down by the action of water on the
bed of the ocean. Almost always the remains of shells, and other proofs of sea life, are found close to the oil. A
very common sedimentary rock is called shale, which is a soft rock and was obviously formed by being
deposited on the sea bed. And where there is shale there is likely to be oil.
Geologists, scientists who study rocks, indicate the likely places to the oil drillers. In some cases oil
comes out of the ground without any drilling at all and has been used for hundreds of years. In the island of
Trinidad the oil is in the form of asphalt, a substance used for making roads. Sir Walter Raleigh visited the
famous pitch lake of Trinidad in 1595; it is said to contain nine thousand million tons of asphalt. There are
probably huge quantities of crude oil beneath the surface.
The king of the oilfield is the driller. He is a very skilled man. Sometimes he sends his drill more than a
mile into the earth. During the process of drilling, gas and oil at great pressure may suddenly be met, and if this
rushes out and catches fire the oil well may never be brought into operation at all. This danger is well known
and steps are always taken to prevent it.
There is a lot of luck in drilling for oil. The drill may just miss the oil although it is near; on the other
hand, it may strike oil at a fairly high level. When the drill goes down, it brings up soil. The samples of soil
from various depths are examined for traces of oil. If they are disappointed at one place, the drillers go to
another. Great sums of money have been spent, for example in the deserts of Egypt, in ‘prospecting’ for oil.
Sometimes little is found. When we buy a few gallons of petrol for our cars, we pay not only the cost of the
petrol, but also part of the cost of the search that is always going on.
When the crude oil is obtained from the field, it is taken to the refineries to be treated. The commonest
form of treatment is heating. When the oil is heated, the first vapours to rise are cooled and become the finest
petrol. Petrol has a low boiling point; if a little is poured into the hand, it soon vaporizes. Gas that comes off the
oil later is condensed into paraffin. Last of all the lubricating oils of various grades are produced. What remains
is heavy oil that is used as fuel.
There are four main areas of the world where deposits of oil appear. The first is that of the Middle East,
and includes the regions near the Caspian Sea, the Black Sea, the Red Sea and the Persian Gulf. Another is the
area between North and South America, and the third, between Asia and Australia, includes the islands of
Sumatra, Borneo and Java.
The fourth area is the part near the North Pole. When all the present oil-fields are exhausted, it is
possible that this cold region may become the scene of oil activity. Yet the difficulties will be great, and the costs
may be so high that no company will undertake the work. If progress in using atomic power to drive machines
is fast enough, it is possible that oil-driven engines may give place to the new kind of engine. In that case the
demand for oil will fall, the oilfields will gradually disappear, and the deposits at the North Pole may rest
where they are for ever.
(From Power and Progress by G. C. Thornley (Longman))
The use of land
A very important world problem - in fact, I am inclined to say it is the most important of all the great
world problems which face us at the present time - is the rapidly increasing pressure of population on land and
on land resources.
It is not so much the actual population of the world but its rate of increase which is important. It works
out to be about 1.6 per cent per annum net increase. In terms of numbers this means something like forty to
fifty-five million additional people every year. Canada has a population of twenty million - rather less than six
months' climb in world population. Take Australia. There are ten million people in Australia. So, it takes the
world less than three months to add to itself a population which peoples that vast country. Let us take our own
crowded country - England and Wales: forty-five to fifty million people - just about a year's supply.
By this time tomorrow, and every day, there will be added to the earth about 120,000 extra people - just
about the population of the city of York.
I am not talking about birth rate. This is net increase. To give you some idea of birth rate, look at the
seconds hand of your watch. Every second three babies are born somewhere in the world. Another baby!
Another baby! Another baby! You cannot speak quickly enough to keep pace with the birth rate.
This enormous increase of population will create immense problems. By A.D. 2000, unless something
desperate happens, there will be as many as 7,000,000,000 people on the surface of this earth! So this is a
problem which you are going to see in your lifetime.
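The lecturer's figures are easy to verify. The sketch below (Python) assumes a world population of about three billion at the time of the talk - a figure the passage does not state - and recovers both the yearly and the daily increases quoted:

    # Rough check of the net-increase figures. The starting population of
    # three billion is an assumption; the passage does not give the total.
    population = 3.0e9
    net_rate = 0.016                      # 1.6 per cent per annum
    added_per_year = population * net_rate
    added_per_day = added_per_year / 365
    print(f"Added per year: {added_per_year / 1e6:.0f} million")  # ~48 million
    print(f"Added per day:  {added_per_day:,.0f}")                # ~130,000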
Why is this enormous increase in population taking place? It is really due to the spread of the
knowledge and the practice of what is coming to be called Death Control. You have heard of Birth Control?
Death Control is something rather different. Death Control recognizes the work of the doctors and the nurses
and the hospitals and the health services in keeping alive people who, a few years ago, would have died of
some of the incredibly serious killing diseases, as they used to be. Squalid conditions, which we can remedy by
an improved standard of living, caused a lot of disease and dirt. Medical examinations at school catch diseases
early and ensure healthier school children. Scientists are at work stamping out malaria and other more deadly
diseases. If you are seriously ill there is an ambulance to take you to a modern hospital. Medical care helps to
keep people alive longer. We used to think seventy was a good age; now eighty, ninety, it may be, are coming
to be recognized as a normal age for human beings. People are living longer because of this Death Control, and
fewer children are dying, so the population of the world is shooting up.
Imagine the position if you and I and everyone else living on earth shared the surface between us. How
much should we have each? It would be just over twelve acres - the sort of size of a small holding. But not all
that is useful land which is going to produce food. We can cut out one-fifth of it, for example, as being too cold.
That is land which is covered with ice and snow - Antarctica and Greenland and the great frozen areas of
northern Canada. Then we can cut out another fifth as being too dry - the great deserts of the world like the
Sahara and the heart of Australia and other areas where there is no known water supply to feed crops and so to
produce food. Then we can cut out another fifth as being too mountainous or with too great an elevation above
sea level. Then we can cut out another tenth as land which has insufficient soil, probably just rock at the
surface. Now, out of the twelve acres only about four are left as suitable for producing food. But not all that is
used. It includes land with enough soil and enough rainfall or water, and enough heat which, at present, we are
not using, such as, for example, the great Amazon forests and the Congo forest and the grasslands of Africa.
How much are we actually using? Only a little over one acre - that is what is required to support one human being on average at the present time.
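The whittling-down of the twelve acres can be restated step by step; the fractions in the sketch below (Python) are exactly those given in the passage:

    # Reducing the twelve-acre world share per person by the fractions
    # given in the passage.
    share = 12.0
    too_cold = share / 5          # ice and snow
    too_dry = share / 5           # deserts without water for crops
    too_mountainous = share / 5   # too high or too steep
    no_soil = share / 10          # bare rock at the surface
    usable = share - too_cold - too_dry - too_mountainous - no_soil
    print(f"Potentially food-producing: about {usable:.1f} acres per head")  # ~3.6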
Now we come to the next point, and that is, the haves and the have-nots amongst the countries of the
world. The standard share per person for the world is a little over twelve acres per head; potentially usable,
about four acres; and actually used about 1.1 acre. We are very often told in Britain to take the United States as
an example of what is done or what might be done. Every little American is born into this world with a heritage
of the home country, the continental United States, of just about the world average - about twelve acres. We can
estimate that probably some six acres of the total of twelve of the American homeland is cultivable in the sense
I have just given you. But the amount actually used - what the Americans call 'improved land' in crops and
pasture on farms - is three and a half acres. So the Americans have over three times the world average of land
on which to produce food for themselves. On that land they produce more food than they actually require, so
they have a surplus for export.
Now suppose we take the United States' great neighbour to the north, Canada. Every Canadian has 140
acres to roam around in. A lot of it is away in the frozen north, but there is still an enormous area of land in
Canada waiting to be settled and developed. The official figure is twenty-two acres. The Canadians use at the
moment four acres, and they too have a large food surplus available for export.
Now turn to our own country. Including land of all sorts, there is just over one acre per head in the
United Kingdom of Great Britain and Northern Ireland. That is why we have to be so very careful with it. How
much do we actually use? Just over half an acre to produce food - that is as farm land. The story is much the
same if you separate off Northern Ireland and Scotland and just take England and Wales. In this very crowded
country, we have only 0.8 acres per head of land of all sorts with which to do everything we need. That is why
we have to think so very carefully of this problem.
India, with 2.5 acres per head, has considerably more land than we have in this country. Not all of it is
usable for food production. But there is land which could be reclaimed by modern methods, and that is being tackled at the present time. The crucial figure is the actual area in agricultural use - three-quarters of an acre!
The yields from this land are low, methods of production are primitive, and that is why the Indians are so very
near the starvation level for almost every year of their lives. But they are not as badly off where land is
concerned as Japan.
The Japanese figures are the same as our own country in overall land - 1.1 acres per person - but it is a
very mountainous country with volcanoes, and so much less is cultivable. Less than a fifth of an acre - 0.17 of an
acre - is under cultivation. You see at once the tremendous land problem which there is in Japan.
There is a great variation, of course, in the intensity with which land is used. In the United States they
are extravagant in the use of land and take, perhaps, twenty times as much to feed one person as in Japan. You
may talk about the Japanese agriculture being twenty times as efficient as the American, but that raises a lot of
questions.
The intensive cultivation characteristic of Japan uses every little bit of land and only the barren hillsides
are not required. Much of the agriculture is based on rice. The farm workers plant by hand every individual rice
plant, and this kind of intensive cultivation enables the Japanese to support seven persons per acre.
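The ‘twenty times’ estimate can be cross-checked against the acreages quoted earlier. A small sketch (Python; all inputs are the passage's own round figures):

    # Japan supports about 7 persons per cultivated acre; the United
    # States is said to use perhaps 20 times as much land per person.
    japan_acres_per_person = 1 / 7                     # ~0.14 acres
    us_acres_per_person = 20 * japan_acres_per_person
    print(f"Implied US farmland per person: {us_acres_per_person:.1f} acres")
    # ~2.9 acres - broadly consistent with the 3.5 acres of 'improved
    # land' per American quoted earlier in the passage.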
By contrast, think of the ranch lands in North and South America, with animals ranging over immense
tracts of land. A diet of beef and of milk is extravagant of land; in other words, it takes a lot of land for the
number of calories produced. In this sense it is less efficient than the Japanese rice-growing agriculture. But not
everyone likes eating rice.
Where the sea is concerned, we are scarcely, at the present time, out of the old Stone Age. In the Stone
Age, the people simply went out, killed wild animals - if they were lucky - and had a good meal; if they were
unlucky they just went hungry. At the present day, we do almost the same thing in the sea, hunting wild fish
from boats. In the future, perhaps, we shall cultivate the sea; we shall grow small fish and fish spawn in tanks,
take them to the part of the ocean where we want them, let them grow to the right size, and harvest them. This
is not fantasy, because, at the present time, fish are being cultivated like that in ponds and tanks in India and various parts of the Far East, so that the people there have a supply of protein. There is a great development
possible.
A lot of things are going to happen in the next fifty years. It is enormously important to increase the
yield of grain plants and a great deal has happened through the work of the geneticists in the last few years. For
instance, there has been an enormous world increase in the production of what Americans call corn (maize to
us) due to the development of new strains. Throughout agriculture geneticists are improving plants to get
higher yields.
From 'Using Land Wisely' Discovery (Granada Television, 1961)
HISTORY
THE NATURE, OBJECT AND PURPOSE OF HISTORY
I shall therefore propound answers to my four questions such as I think any present-day historian
would accept. Here they will be rough and ready answers, but they will serve for a provisional definition of our
subject-matter and they will be defended and elaborated as the argument proceeds.
(a) What is history? Every historian would agree, I think, that history is a kind of research or inquiry.
What kind of inquiry it is I do not yet ask. The point is that generically it belongs to what we call the sciences:
that is, the forms of thought whereby we ask questions and try to answer them. Science in general, it is
important to realize, does not consist in collecting what we already know and arranging it in this or that kind of
pattern. It consists in fastening upon something we do not know, and trying to discover it. Playing patience
with things we already know may be a useful means towards this end, but it is not the end itself. It is at best
only the means. It is scientifically valuable only in so far as the new arrangement gives us the answer to a
question we have already decided to ask. That is why all science begins from the knowledge of our own
ignorance: not our ignorance of everything, but our ignorance of some definite thing - the origin of parliament,
the cause of cancer, the chemical composition of the sun, the way to make a pump work without muscular
exertion on the part of a man or a horse or some other docile animal. Science is finding things out: and in that
sense history is a science.
(b) What is the object of history? One science differs from another in that it finds out things of a
different kind. What kinds of things does history find out? I answer, res gestae: actions of human beings that
have been done in the past. Although this answer raises all kinds of further questions,
many of which are controversial, still, however they may be answered, the answers do not discredit the
proposition that history is the science of res gestae, the attempt to answer questions about human actions done
in the past.
(c) How does history proceed? History proceeds by the interpretation of evidence: where evidence is a
collective name for things which singly are called documents, and a document is a thing existing here and now,
of such a kind that the historian, by thinking about it, can get answers to the questions he asks about past
events. Here again there are plenty of difficult questions to ask as to what the characteristics of evidence are
and how it is interpreted. But there is no need for us to raise them at this stage. However they are answered,
historians will agree that historical procedure, or method, consists essentially of interpreting evidence.
(d) Lastly, what is history for? This is perhaps a harder question than the others; a man who answers it
will have to reflect rather more widely than a man who answers the three we have answered already. He must
reflect not only on historical thinking but on other things as well, because to say that something is `for'
something implies a distinction between A and B, where A is good for something and B is that for which
something is good. But I will suggest an answer, and express the opinion that no historian would reject it,
although the further questions to which it gives rise are numerous and difficult.
My answer is that history is `for' human self-knowledge. It is generally thought to be of importance to
man that he should know himself: where knowing himself means knowing not his merely personal
peculiarities, the things that distinguish him from other men, but his nature as man. Knowing yourself means
knowing, first, what it is to be a man; secondly, knowing what it is to be the kind of man you are; and thirdly,
knowing what it is to be the man you are and nobody else is. Knowing yourself means knowing what you can
do; and since nobody knows what he can do until he tries, the only clue to what man can do is what man has
done. The value of history, then, is that it teaches us what man has done and thus what man is.
(From The idea of history, by R. G. Collingwood.)

THE EXPANSION OF WESTERN CIVILIZATION

The predominance of the Western civilization throughout the world on the eve of the fateful year 1914 was, indeed, both recent and unprecedented. It was unprecedented in this sense - that, though many
civilizations before that of Europe had radiated their influence far beyond their original homelands, none had
previously cast its net right round the globe.
The civilization of Eastern orthodox Christendom, which grew up in mediaeval Byzantium, had been
carried by the Russians to the Pacific; but, so far from spreading westwards, it had itself succumbed to Western
influence since the close of the seventeenth century. The civilization of Islam had expanded from the Middle
East to Central Asia and Central Africa, to the Atlantic coast of Morocco and the Pacific coasts of the East
Indies, but it had obtained no permanent foothold in Europe and had never crossed the Atlantic into the New
World. The civilization of ancient Greece and Rome had extended its political dominion into North-Western
Europe under the Roman Empire and its artistic inspiration into India and the Far East, where the Graeco-
Roman models had stimulated the development of Buddhist art. Yet the Roman Empire and the Chinese
Empire had co-existed on the face of the same planet for two centuries with scarcely any direct intercourse,
either political or economic. It was the same with the other ancient civilizations. Ancient India radiated her
religion, her art, her commerce and her colonists into the Far East and the East Indies, but never penetrated the
West. As far as we know for certain, the only civilization that has ever yet become worldwide is ours.
Moreover, this is a very recent event. Nowadays we are apt to forget that Western Europe made two
unsuccessful attempts to expand before she eventually succeeded.
The first of these attempts was the mediaeval movement in the Mediterranean for which the most
convenient general name is the Crusades. In the Crusades, the attempt to impose the political and economic
dominion of West Europeans upon other peoples ended in a complete failure, while, in the interchange of
culture, the West Europeans received a greater impress from the Muslims and Byzantines than they imparted to
them. The second attempt was that of the Spaniards and Portuguese in the sixteenth century of our era. This
was more or less successful in the New World, but, elsewhere, Western civilization, as propagated by the
Spaniards and Portuguese, was rejected after about a century's trial.
The third attempt was begun in the seventeenth century by the Dutch, French and English, and these
three West European nations were the principal authors of the world-wide ascendancy that our Western
civilization was enjoying in 1914. By 1914 the network of European trade and European means of
communication had become world-wide. On the plane of politics, the European nations had not only colonized
the New World, but had conquered India and tropical Africa.
The political ascendancy of Europe, however, though outwardly even more imposing than her
economic ascendancy, was really more precarious. The daughter-nations overseas had already set their feet
firmly on the road towards independent nationhood. The United States and the Latin American Republics had
long since established their independence by revolutionary wars; and the self-governing British Dominions
were in the process of establishing theirs by peaceful evolution. In India and tropical Africa, European
domination was being maintained by a handful of Europeans who lived there as pilgrims and sojourners. They
had not found it possible to acclimatize themselves sufficiently to bring up their children in the tropics; this
meant that the hold of Europeans upon the tropics had not been made independent of a European base of
operations. Finally, the cultural influence of the West European civilization upon Russians, Muslims, Hindus,
Chinese, Japanese, and tropical Africans was so recent a ferment that it was not yet possible to predict whether
it would evaporate without permanent effect, or whether it would turn the dough sour, or whether it would
successfully leaven the lump.
This, then, in very rough outline, was the position of Europe in the world on the eve of the War of
1914-1918. She was in the enjoyment of an undisputed ascendancy, and the peculiar civilization which she had
built up for herself was in process of becoming world-wide. Yet this position, brilliant though it was, was not
merely unprecedented and recent; it was also insecure. It was insecure chiefly because, at the very time when
European expansion was approaching its climax, the foundations of West European civilization had been
broken up and the great deeps loosed by the release and emergence of two elemental forces in European social
life - the forces of industrialism and democracy, which were brought into a merely temporary and unstable
equilibrium by the formula of nationalism. It is evident that a Europe which was undergoing a terrific double
strain of this inward transformation and outward expansion - both on the heroic scale - could not with impunity
squander her resources, spend her material wealth and man-power unproductively, or exhaust her muscular
and nervous energy. If her total command of resources was considerably greater than that which any other
civilization had ever enjoyed, these resources were relative to the calls upon them; and the liabilities of Europe
on the eve of 1914, as well as her assets, were of an unprecedented magnitude. Europe could not afford to wage
even one World War; and when we take stock of her position in the world after a Second World War and
compare it with her position before 1914, we are confronted with a contrast that is staggering to the
imagination.
(From Civilization on Trial, by Arnold Toynbee)

THE CAREER OF JENGHIS KHAN


We have now to tell of the last and greatest of all the raids of nomadism upon the civilizations of the
East and West. We have traced in this history the development side by side of these two ways of living, and we
have pointed out that as the civilizations grew more extensive and better organized, the arms, the mobility, and
the intelligence of the nomads also improved. The nomad was not simply an uncivilized man, he was a man
specialized and specializing along his own line. From the very beginning of history the nomad and the settled
people have been in reaction. Whenever civilization seems to be choking amidst its weeds of wealth and debt
and servitude, when its faiths seem rotting into cynicism and its powers of further growth are hopelessly
entangled in effete formulae, the nomad drives in like a plough to break up the festering stagnation and release
the world to new beginnings. The Mongol aggression, which began with the thirteenth century, was the
greatest, and so far it has been the last, of all these destructive ploughings of human association.
From entire obscurity the Mongols came very suddenly into history towards the close of the twelfth
century. They appeared in the country to the north of China, in the land of origin of the Huns and Turks, and
they were manifestly of the same strain as these peoples. They were gathered together under a chief; under his
son Jenghis Khan their power grew with extraordinary swiftness.
The career of Jenghis Khan and his immediate successors astounded the world, and probably
astounded no one more than these Mongol Khans themselves. The Mongols were in the twelfth century a tribe
subject to those Kin who had conquered North-east China. They were a horde of nomadic horsemen living in
tents, and subsisting mainly upon mare's milk products and meat. Their occupations were pasturage and
hunting, varied by war. They drifted northward as the snows melted for summer pasture, and southward to
winter pasture after the custom of the steppes. Their military education began with a successful insurrection
against the Kin. The empire of Kin had the resources of half China behind it and in the struggle the Mongols
learnt very much of the military science of the Chinese. By the end of the twelfth century they were already a
fighting tribe of exceptional quality.
The opening years of the career of Jenghis were spent in developing his military machine, in
assimilating the Mongols and the associated tribes about them into an organised army. His first considerable
extension of power was westward, when the Tartar Kirghis and the Uigurs (who were the Tartar people of the
Tarim basin) were not so much conquered as induced to join his organisation. He then attacked the Kin empire
and took Pekin (1214). The Khitan people, who had been so recently subdued by the Kin, threw in their
fortunes with his, and were of very great help to him. The settled Chinese population went on sowing and
reaping and trading during this change of masters without lending its weight to either side.
We have already mentioned the very recent Kharismian empire of Turkestan, Persia and North India.
This empire extended eastward to Kashgar, and it must have seemed one of the most progressive and hopeful
empires of the time. Jenghis Khan, while still engaged in this war with the Kin empire, sent envoys to
Kharismia. They were put to death, an almost incredible stupidity. The Kharismian government, to use the
political jargon of today, had decided not to `recognize' Jenghis Khan and took this spirited course with him.
Thereupon (1218) the great host of horsemen that Jenghis Khan had consolidated swept over the Pamirs and
down into Turkestan. It was well armed, and probably it had some guns and gunpowder for siege work - for the
Chinese were certainly using gunpowder at this time, and the Mongols learnt its use from them. Kashgar,
Khokand, Bokhara fell and then Samarkand, the capital of the Kharismian empire. Thereafter nothing held the
Mongols in the Kharismian territories. They swept westward to the Caspian, and southward as far as Lahore.
To the north of the Caspian a Mongol army encountered a Russian force from Kieff. There was a series of
battles, in which the Russian armies were finally defeated and the Grand Duke of Kieff was taken prisoner. So it
was that the Mongols appeared on the northern shores of the Black Sea. A panic swept Constantinople, which
set itself to reconstruct its fortifications. Meanwhile other armies were engaged in the conquest of the empire of
the Hsia in China. This was annexed, and only the southern part of the Kin empire remained unsubdued. In
1227 Jenghis Khan died in the midst of a career of triumph. His empire reached already from the Pacific to the
Dnieper. And it was an empire still vigorously expanding.
Like all empires founded by nomads, it was, to begin with, purely a military and administrative empire,
a framework rather than a rule. It centred on the personality of the monarch, and its relation with the mass of
the population over which it ruled was simply one of taxation for the maintenance of the horde. But Jenghis
Khan had called to his aid a very able and experienced administrator of the Kin empire, who was learned in all
the traditions and science of the Chinese. This statesman, Yeliu Chutsai, was able to carry on the affairs of the
Mongols long after the death of Jenghis Khan, and there can be little doubt that he is one of the great political
heroes of history. He tempered the barbaric ferocity of his masters, and saved innumerable cities and works of
art from destruction. He collected archives and inscriptions, and when he was accused of corruption, his sole
wealth was found to consist of documents and a few musical instruments. To him perhaps quite as much as to
Jenghis is the efficiency of the Mongol military machine to be ascribed. Under Jenghis, we may note further, we
find the completest religious toleration established across the entire breadth of Asia.
(From The Outline of History, by H G Wells)

THE TRIAL AND EXECUTION OF CHARLES I


JANUARY, 1649
Officers and men had come home from the war in a fierce mood, prepared to commit any violence.
Colonel Pride and his musqueteers, stationed at the door of the Commons, excluded some hundred members
and carried off nearly fifty more to prison. On January 4th, 1649, the Rump that remained, not one hundred
strong, passed this resolution:
That the people are, under God, the original of all just power: and that the Commons of
England, in Parliament assembled, being chosen by and representing the people, have the supreme
power in this nation; that whatsoever is enacted or declared for law by the Commons in Parliament
assembled, hath the force of law, and all the people of this nation are concluded thereby, although the
consent of the King or House of Peers be not had thereunto.
In logical accordance with these new principles, it was voted a few weeks later that the House of Lords
was 'useless and dangerous and ought to be abolished'. The King disappeared from the constitution by a yet
more signal and symbolic act. 'We will cut off his head', said Cromwell, 'with the crown upon it.'
The Commons appointed a Commission to try Charles Stuart. Unless the theoretic declaration of the
omnipotence of the Lower House held good by 'the Law of Nature', this Commission had no legal power.
Furthermore, Charles had committed no legal crime. Law and pity would both plead on his side. His own
outward dignity and patience made his appeal to pity and law the most effective appeal of that sort that is
recorded in the history of England. Law and pity were known of old to Englishmen; the new ideas ranged
against these two powerful pleaders were new and strange. The sovereignty of the people and the equality of
man with man in the scales of justice were first ushered into the world of English politics by this deed. Against
them stood the old-world ideals, as Charles proclaimed them from the scaffold with his dying breath:
For the people (he said, as he looked out into eternity) truly I desire their liberty and freedom as much
as anybody whatsoever; but I must tell you, their liberty and freedom consists in having government, those
laws by which their lives and goods may be most their own. It is not their having a share in the government;
that is nothing appertaining to them. A subject and a sovereign are clear different things.
That, indeed, was the issue. Those who sought to substitute a better theory of government for that
proclaimed by Charles as his legacy to England did not pause to consider that the people, for whose new-born
sovereignty they struck, was heart and soul against the deed. By the side of this objection, the charge that they
acted 'by no law' shrinks to insignificance. They were striving to make a law of democracy, and could not be
expected to effect this by observing the old laws. But their fault lay in this, that the new law by which they
condemned Charles, while it claimed to derive from the people of England, did not, save in theory, derive from
that source at all. When the bleeding head was held up, the shout of the soldiers was drowned in the groans of
the vast multitude.
If there was any chance that the establishment of a more democratic form of government could
gradually win the support of the people at large, that chance was thrown away by the execution of the King.
The deed was done against the wish of many even of the Independents and Republicans; it outraged beyond
hope of reconciliation the two parties in the state who were strong in numbers and conservative tradition, the
Presbyterians and the Cavaliers; and it alienated the great mass of men who had no party at all. Thus the
Republicans, at the outset of their career, made it impossible for themselves ever to appeal in free election to the
people whom they had called to sovereignty.
It is much easier to show that the execution was a mistake and a crime (and it certainly was both) than
to show what else should have been done. Any other course, if considered in the light of the actual
circumstances, seems open to the gravest objection. The situation at the end of 1648 was this - that any sort of
government by consent had been rendered impossible for years to come, mainly by the untrustworthy
character of the King, and by the intolerant action of Parliament after the victory won for it by the Army.
Cromwell had advocated a real settlement by consent, only to have it rejected by King, Parliament, and Army
alike. The situation had thereby been rendered impossible, through no fault of his.
(From England under the Stuarts by G. M. Trevelyan)

PETER THE GREAT


The impact of Peter the Great upon Muscovy was like that of a peasant hitting a horse with his fist.
Muscovy bore many of the marks permanently and henceforward she became known as Russia. Yet his
reforms, for all their importance, did not create a new form of state: they were to a large extent the very rapid
and energetic extension of ideas and practices already to be found in the generation before him. The old and the
new continued to live on, in part conflicting, in part coalescing. Tsarism in his lifetime received a new stamp
owing to his wholly exceptional character and abilities, but its functioning remained as before dependent upon
different sections of the landed class and the machinery of government. Peter did not introduce the idea of the
service state; he pushed it to extremes, enforced compulsory service in the army, navy, and government on the
landowners and himself set the example as the first servant of the state. He wrote of himself with full
justification: 'I have not spared and I do not spare my life for my fatherland and people.'
Peter was repellent in his brutality, coarseness and utter disregard of human life; but he was mightily
propellent, through his ever-flaming energy and will-power, his insatiable curiosity and love of experiment, his
refusal to accept any defeat and his capacity to launch, however crudely and over-hastily at first, great schemes
on an immense, however wasteful, scale. From earliest childhood his over-riding personal interest was the art
of war, by sea as well as by land; but he understood it in the widest sense as involving the full utilization of the
human and material resources of his country. He was at war for twenty-eight consecutive years, from 1695. He
began when he was twenty-four; when he finally had peace he was fifty-two and had little more than a year to
live.
Almost all Peter's reforms were born of military and naval requirements. Russia must be westernized in
order to ensure the 'two necessary things in government, namely order and defence'. His task was, again, as he
himself put it, to convert his subjects from 'beasts into human beings', from 'children into adults'. His strongest
modern critics allow that he was inspired by devotion to Russia, not by personal ambition, and that he aimed at
inculcating by example and precept rational ideas of citizenship in terms of efficient, and therefore educated,
service to the state, in contrast with blind service to a sacrosanct ruler throned far away on high in the hallowed
veneration of Muscovite ceremonial.
His reforms until about 1715 were imposed piecemeal, chaotically and (as he himself admitted) far too
hastily, in dependence on the urgent pressure of the moment. He was constantly scrapping or remodelling this
or that portion of his own handiwork in frantic search for more recruits, more labour, more revenue, more
munitions. In his last dozen years, when war was less onerous and his contacts with the West were closer, the
peremptory edicts, often conflicting with each other, gave way to systematic, carefully elaborated legislation
that remoulded state and church alike. His brutal violence, the enormous demands that he exacted and his
external flouting of national ways and prejudices supplied fertile ground for opposition. He had to crush in
blood four serious risings, and he condemned to death his own son and heir, Alexis, on the ground of his being
the ringleader of reaction. In actuality Alexis was a passive creature who was only hoping for his father's
death in terrified fear of his own. The opposition was leaderless, and it was also too heterogeneous: almost
all interests in Russia were divided between supporters and opponents of Peter.
He aimed at transforming tsarism into a European kind of absolute monarchy, and to a considerable
extent he succeeded. Russia was never the same again, even though the pace was too hot and there was regress
after his death. He declared himself to be 'an absolute monarch who does not have to answer for any of his
actions to anyone in the world; but he has power and authority for the purpose of governing his states and
lands according to his will and wise decision as a Christian sovereign'. This version of enlightened despotism,
typically enough, appeared in Peter's new code for the army (1716). The creation of a national standing army on
Western models was one of the most fundamental of his legacies, and the links of tsarism with military power
and the military spirit were henceforth knitted even more closely than before. One external sign is significant.
Peter himself almost always appeared as a soldier or sailor (when not dressed as a mechanic) and all
succeeding emperors did likewise; his predecessors (when not hunting) had usually appeared in hieratic pomp,
half tsar, half high-priest.
No tsar has made such a lasting impression on Russia as Peter, whether in his works or his personality.
He was an unheard-of tsar - for some no tsar at all, but Antichrist. He brought the tsar to earth and entwined
himself in the hopes and fears and groans of his subjects as a dark and terrible force, rooting up the past,
putting to rout the Swedes; as a ruler such as they had never conceived before, to be seen throughout the length
and breadth of the land, immense in stature, with his tireless stride that was more of a run and his huge
calloused hands of which he was so proud; a ruler who went into battle as a bombardier, who wielded an axe
as well as any of his subjects, who could kill a man with a single blow of his fist - and on occasion did. He made
Russia conscious of great destiny, and ever since Europe and Asia have had to reckon with her.
(From Survey of Russian History, by B. H. Sumner)

THE UNITED STATES IN 1790


In the physiographic features of the settled part of America there was a certain uniformity. The
coastline was low and uninviting, except in northern New England, where it had something of the rugged
picturesqueness of the western coast of Britain. South of New York stretched a long succession of barrier
beaches in flattish curves, parting at the entrances of the great bays of Chesapeake and Delaware, and enclosing
the shallow lagoons of Albemarle and Pamlico. A vast forest of conifers and hardwood swept up from the coast
over the crest of the Appalachians, and down to the Great Lakes, the prairies of Illinois, the savannahs of the
lower Mississippi, and the Gulf of Mexico. Except for natural meadows along the rivers, open country did not
exist in the land that the English colonists wrested from the Indians. Their farms had been cleared from the
forest; and it was still too early to aver that the colonists had conquered the forest. Volney wrote that during his
journey in 1796 through the length and breadth of the United States he scarcely travelled for more than three
miles together on open and cleared land. 'Compared with France, the entire country is one vast wood.' Only in
southern New England, and the eastern portion of the Middle States, did the cultivated area exceed the
woodland; and the clearings became less frequent as one approached the Appalachians.
Like western Europe, the United States lies wholly within the northern temperate zone, and the belt of
prevailing westerly winds. The earliest European explorers had passed it by for the Caribbean and the St.
Lawrence because they were seeking tropical plantations, fur-trading posts, and fishing stations. Their
successors, in search of farm-lands, found the greater part of the Thirteen Colonies suitable for life and labour
as are few portions of the non-European world. Yet the climate of the area settled by 1790 is in many respects
unlike that of Europe.
Westerly winds reach it across a continent, without the moisture and the tempering of the Atlantic.
North-west is the prevailing wind in winter, and south-west in summer. Consequently the summers are
everywhere hotter than in the British Isles, and the winters, north of Virginia, colder; the extremes of heat and
cold in the same season are greater; the rainfall less, although adequate for animal and plant life. Near the
sea-coast a sea-turn in the wind may soften outlines, but inland the dry air, clear sky, and brilliant sunlight
foreshorten distant prospects, and make the landscape sharp and hard.
In most parts of the United States the weather is either fair or foul. It rains or shines with a businesslike
intensity; in comparison, the weather of the British Isles is perpetually unsettled. In the coastal plain of the
Carolinas and the Gulf, there is a soft gradation between the seasons, and a languor in the air; elsewhere, the
transition from a winter of ice and snow to a summer of almost tropical heat is abrupt.
'Our spring gits everythin' in tune / An' gives one leap from April into June,' wrote Lowell. Except where
the boreal forest of conifers maintains its sombre green, the sharp dry frosts of October turn the forest to a
tapestry of scarlet and gold, crimson and russet. High winds strip the leaves in November, and by New Year's
day the country north of Baltimore, along the Appalachians, and east of the Sierras, should be tucked into a
blanket of snow.
(From History of the United States by S. E. Morison)

CIVILISATION AND HISTORY


Most of the people who appear most often and most gloriously in the history books are great
conquerors and generals and soldiers, whereas the people who really helped civilization forward are often
never mentioned at all. We do not know who first set a broken leg, or launched a seaworthy boat, or calculated
the length of the year, or manured a field; but we know all about the killers and destroyers. People think a great
deal of them, so much so that on all the highest pillars in the great cities of the world you will find the figure of
a conqueror or a general or a soldier. And I think most people believe that the greatest countries are those that
have beaten in battle the greatest number of other countries and ruled over them as conquerors. It is just
possible they are, but they are not the most civilized. Animals fight; so do savages; hence to be good at fighting
is to be good in the way in which an animal or a savage is good, but it is not to be civilized. Even being good at
getting other people to fight for you and telling them how to do it most efficiently - this, after all, is what
conquerors and generals have done - is not being civilized. People fight to settle quarrels. Fighting means
killing, and civilized peoples ought to be able to find some way of settling their disputes other than by seeing
which side can kill off the greater number of the other side, and then saying that that side which has killed most
has won. And not only has won, but, because it has won, has been in the right. For that is what going to war
means; it means saying that might is right.
That is what the story of mankind has on the whole been like. Even our own age has fought the two
greatest wars in history, in which millions of people were killed or mutilated. And while today it is true that
people do not fight and kill each other in the streets - while, that is to say, we have got to the stage of keeping
the rules and behaving properly to each other in daily life - nations and countries have not learnt to do this yet,
and still behave like savages.
But we must not expect too much. After all, the race of men has only just started. From the point of view
of evolution, human beings are very young children indeed, babies, in fact, of a few months old. Scientists
reckon that there has been life of some sort on the earth in the form of jellyfish and that kind of creature for
about twelve hundred million years; but there have been men for only one million years, and there have been
civilized men for about eight thousand years at the outside. These figures are difficult to grasp; so let us scale
them down. Suppose that we reckon the whole past of living creatures on the earth as one hundred years; then
the whole past of man works out at about one month, and during that month there have been civilizations for
between seven and eight hours. So you see there has been little time to learn in, but there will be oceans of time
in which to learn better. Taking man's civilized past at about seven or eight hours, we may estimate his future,
that is to say, the whole period between now and when the sun grows too cold to maintain life any longer on
the earth, at about one hundred thousand years. Thus mankind is only at the beginning of its civilized life, and
as I say, we must not expect too much. The past of man has been on the whole a pretty beastly business, a
business of fighting and bullying and gorging and grabbing and hurting. We must not expect even civilized
peoples not to have done these things. All we can ask is that they will sometimes have done something else.
(From The Story of Civilization, by C. E. M. Joad. A. D. Peters & Co., 1962)
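
Joad's scaling can be checked directly. The short Python sketch below is an editorial illustration, not part of Joad's text; it compresses the passage's own figures (life about 1,200 million years, man one million years, civilized man eight thousand) onto his hundred-year scale. On these figures man's whole past comes out at about one month, as Joad says, while civilization comes out nearer six hours, so his 'seven and eight' is a generous rounding.

```python
# Editorial sketch: verifying the time-scaling analogy in the passage above.
# All baseline figures are those quoted by Joad.

LIFE_YEARS = 1_200_000_000       # years since life (jellyfish, etc.) began
SCALE = 100 / LIFE_YEARS         # compress the whole of life into 100 years

man_months = 1_000_000 * SCALE * 12        # whole past of man, in scaled months
civ_hours = 8_000 * SCALE * 365.25 * 24    # civilized man, in scaled hours

print(f"Past of man on the 100-year scale: {man_months:.1f} month(s)")  # ~1.0
print(f"Civilization on the 100-year scale: {civ_hours:.1f} hours")     # ~5.8
```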

COAL

The rapid growth of steam power relied directly on large supplies of its only fuel: coal. There was a
great increase in the amount of coal mined in Britain.
Coal was still cut by hand and the pits introduced few really new techniques. The increased demand
was met by employing more miners and by making them dig deeper. This was made possible by more efficient
steam pumps and steam-driven winding engines which used wire ropes to raise the coal to the surface, by
better ventilation and by the miner's safety lamp which detected some dangerous gases. By the 1830s seams
well below 1000 feet were being worked in south Durham and the inland coalfields of Lancashire and
Staffordshire. Central Scotland and south Wales were mined more intensively later. By the end of the
nineteenth century the best and most accessible seams were worked out. As the miners followed the eastern
seams of the south Yorkshire-north Midlands area, which dipped further below the surface, shafts of 3,000 feet
were not uncommon.
Some of the work in mines was done by women and children. Boys and girls were often put in charge of
the winding engines or of opening and shutting the trap doors which controlled the ventilation of the mines.
Then they had to crouch all day in the same spot by themselves in the dark. When these evils were at last
publicized in 1842 by a Royal Commission, many mines no longer employed women, but Parliament made it
illegal for them all. It also forbade them to employ boys under the age of ten. The limit, which was very difficult
to enforce, was increased to twelve in the 1870s. Subsequently it rose with the school leaving age.
Mining was very dangerous. Loose rocks were easily dislodged and the risk of being killed or injured
by one was always greater in the tall seams where they had further to fall. In the north of England fatal
accidents were not even followed by inquests to discover why they had happened until after 1815. Few safety
precautions were taken before the mid-nineteenth century. The mine owners insisted that they were not
responsible. The men were most reluctant to put up enough props to prevent the roof from falling in and to
inspect the winding gear and other machinery on which their lives depended. If they did, they spent less time
mining and so earned less money because the miners' pay was based not on how long they worked but on how
much coal they extracted. They preferred to take risks.
The deeper seams contained a dangerous gas called 'fire-damp' which could be exploded by the miners'
candles. The safety lamp, which was invented in the early nineteenth century, did not really solve this problem,
but it was often used to detect gas and so made the mining of deeper seams possible. There the air was more
foul, the temperature higher (one pit paid the men an extra 6d a day for working in 130°F) and the risk of fire-
damp even greater. In the 1840s a series of terrible explosions in the deeper mines led to stricter regulations,
which inspectors helped enforce. The inspectors were particularly keen on proper ventilating machines and,
although deeper shafts were sunk, they did not become more dangerous. However, many serious accidents still
occurred.
(From Britain Transformed, Penguin Books)

LANGUAGE
'Primitiveness' in Language
'Primitive' is a word that is often used ill-advisedly in discussions of language. Many people think that
'primitive' is indeed a term to be applied to languages, though only to some languages, and not usually to the
language they themselves speak. They might agree in calling 'primitive' those uses of language that concern
greetings, grumbles and commands, but they would probably insist that these were especially common in the
so-called 'primitive languages'. These are misconceptions that we must quickly clear from our minds.
So far as we can tell, all human languages are equally complete and perfect as instruments of
communication: that is, every language appears to be as well equipped as any other to say the things its
speakers want to say. It may or may not be appropriate to talk about primitive peoples or cultures, but that is
another matter. Certainly, not all groups of people are equally competent in nuclear physics or psychology or
the cultivation of rice or the engraving of Benares brass. But this is not the fault of their language. The Eskimos
can speak about snow with a great deal more precision and subtlety than we can in English, but this is not
because the Eskimo language (one of those sometimes miscalled 'primitive') is inherently more precise and
subtle than English. This example does not bring to light a defect in English, a show of unexpected
'primitiveness'. The position is simply and obviously that the Eskimos and the English live in different
environments. The English language would be just as rich in terms for different kinds of snow, presumably, if
the environments in which English was habitually used made such distinction important.
Similarly, we have no reason to doubt that the Eskimo language could be as precise and subtle on the
subject of motor manufacture or cricket if these topics formed part of the Eskimos' life. For obvious historical
reasons, Englishmen in the nineteenth century could not talk about motorcars with the minute discrimination
which is possible today: cars were not a part of their culture. But they had a host of terms for horse-drawn
vehicles which send us, puzzled, to a historical dictionary when we are reading Scott or Dickens. How many of
us could distinguish between a chaise, a landau, a victoria, a brougham, a coupe, a gig, a diligence, a whisky, a
calash, a tilbury, a carriole, a phaeton, and a clarence?
The discussion of 'primitiveness', incidentally, provides us with a good reason for sharply and
absolutely distinguishing human language from animal communication, because there is no sign of any
intermediate stage between the two. Whether we examine the earliest records of any language, or the present-
day language of some small tribe in a far-away place, we come no nearer to finding a stage of human language
more resembling animal communication and more 'primitive' than our own. In general, as has been said, any
language is as good as any other to express what its speakers want to say. An East African finds Swahili as
convenient, natural and complete as an East Londoner finds English. In general the Yorkshire Dalesman's
dialect is neither more nor less primitive or ill-fitted to its speaker's wants than Cockney is for the Londoner's.
We must always beware the temptation to adopt a naive parochialism which makes us feel that someone else's
language is less pleasant or less effective an instrument than our own.
This is not to say that an individual necessarily sounds as pleasant or as effective as he might be, when
using his language, but we must not confuse a language with an individual's ability to use it. Nor are we saying
that one language has no deficiencies as compared with another. The English words 'home' and 'gentleman'
have no exact counterparts in French, for example. These are tiny details in which English may well be thought
to have the advantage over French, but a large-scale comparison would not lead to the conclusion that English
was the superior language, since it would reveal other details in which the converse was true. Some years ago it
came as something of a shock to us that we had no exact word for translating the name that General de Gaulle
had given to his party - Rassemblement du Peuple Français. The B.B.C. for some time used the word 'rally', and
although this scarcely answers the purpose it is a rather better translation of 'rassemblement' than either of the
alternatives offered by one well-known French-English dictionary, 'muster' and 'mob'.

The more we consider the question, then, the less reasonable does it seem to call any language 'inferior',
let alone 'primitive'.
The Sanskrit of the Rig-Veda four thousand years ago was as perfect an instrument for what its users
wanted to say as its modern descendant, Hindi, or as English.
(From The Use of English, by Randolph Quirk )

English in the Fifteenth Century


Soon after Chaucer's death, in the fifteenth century, there was a renewed drift towards simplification in
English. Final unaccented vowels, by 1400 already reduced to a very slight murmur, were entirely lost. Still
more nouns were shifted to the majority declension (with plurals in -s) out of the small group left in the
minority declensions. More and more verbs shifted to the weak conjugation from those still retaining the
internal vowel change. For a time, of course, there was a choice of forms: for example, between 'he clomb' and
'he climbed'; 'he halp' and 'he helped'. Some of the quaint surviving constructions out of Old English, such as
impersonal verbs with the dative, the inflected genitive case for nouns denoting things, and the double
negative, began to fall into disuse. They persist in the fifteenth century, indeed even into the sixteenth, but they
are increasingly felt to be archaic survivals.
Another important usage became increasingly prevalent in the fifteenth and early sixteenth: the
bolstering of verbs with a number of auxiliaries derived from 'do' and 'be'. In Middle English a question was
asked with the simple form of the verb in inverted position: 'What say you? What think you?' For a couple of
centuries after 1400 this was still done habitually, but more and more people fell into the habit of saying 'What
do you say? What do you think?' The 'do' was colourless and merely brought about a deferment of the main
verb. In effect it makes our English usage somewhat like Russian, which says 'What you say? What you think?'
without any inversion of the verb before the subject. In simple statements the 'do' forms were used for
situations where we no longer feel the need for them. An Elizabethan would say 'I do greatly fear it' (an
unrestricted statement). We should use the less emphatic 'I fear it greatly.'
During the same period there began the gradual spread of the so-called progressive conjugation, with
forms of 'to be': 'I am coming; he is sitting down.' These two special forms of English conjugation have
developed an intricate etiquette, with many modifications of usage, which cause great trouble to the foreign
student. One of the last distinctions he masters is the one between 'I eat breakfast every morning' and 'I am
eating breakfast now'; between 'I believe that' and 'I do indeed believe that.'
One of the most fateful innovations in English culture, the use of the printing press, had its effects on
the language in many ways. The dialect of London, which for over a century had been gaining in currency and
prestige, took an enormous spurt when it was more or less codified as the language of the press. As Caxton and
his successors normalized it, roughly speaking, it became the language of officialdom, of polite letters, of the
spreading commerce centred at the capital. The local dialects competed with it even less successfully than
formerly. The art of reading, though still a privilege of the favoured few, was extended lower into the ranks of
the middle classes. With the secularizing of education later on, the mastery of the printed page was extended to
still humbler folk. Boys who, like William Shakespeare, were sons of small-town merchants and craftsmen,
could learn to read Latin literature and the Bible even if they had no intention of entering the Church. Times
had distinctly changed since the thirteenth century. It may be added that changes in society - the gradual
emergence of a mercantile civilization - gave scope to printing which it would never have had in the earlier
Middle Ages. The invention was timely in more than one sense.
All this may have been anticipated by the early printers. Their technological innovations may have been
expected to facilitate the spread of culture. But they could not have foreseen that the spelling which they
standardized, more or less, as the record of contemporary pronunciation, would have been perpetuated for
centuries afterwards. Today, when our pronunciation has become quite different, we are still teaching our
unhappy children to spell as Caxton did. Respect for the printed page has become something like
fetish-worship. A few idiosyncrasies have been carefully preserved, although the reason for them is no longer
understood. When Caxton first set up the new business in London he brought with him Flemish workers from
the Low Countries, where he himself had learned it. Now the Flemish used the spelling 'gh' to represent their
own voiced guttural continuant, a long-rolled-out sound (ɣ) unlike our English (g). English had no such sound
at the time, but the employees in Caxton's shop were accustomed to combining the two letters, and continued
to do so in setting up certain English words. In words like 'ghost' and 'ghastly' it has persisted, one of the many
mute witnesses to orthographical conservatism. (From The Gift of Tongues, by Margaret Schlauch.)

An International Language
Some languages are spoken by quite small communities and they are hardly likely to survive. Before
the end of the twentieth century many languages in Africa, Asia, and America will have passed into complete
oblivion unless some competent linguist has found time to record them. The languages that remain are
constantly changing with the changing needs and circumstances of the people who speak them. Change is the
manifestation of life in language. The great languages of the world, such as English, Russian, Chinese, Japanese,
German, Spanish, French, Italian, Portuguese, Dutch, and Arabic, are just as liable to change as Swahili, Tamil,
or Choctaw. Change may, it is true, be artificially retarded in special conditions: for example in the great
liturgical languages of mankind, such as Sanskrit, the language of the orthodox Hindu religion of India; or Pali,
the sacred language of Buddhism; or Latin, the liturgical language of the Roman Church. By arduous schooling
a man may train himself to read, write, and converse in these crystallized forms of speech. Sanskrit, Pali, and
Latin are magnificent and awe-inspiring exceptions to the otherwise universal principle of change. Their
immutability depends upon two main factors or conditions: first, that they are not normally used in everyday
conversation, but are entrusted instead to the care of a privileged class of priests and scholars; and secondly,
that they possess memorable recorded literatures and liturgies which are constantly read and recited in acts of
religious devotion and worship.
It is just because these two conditions do not apply to artificial languages like Volapuk, Esperanto, and
Ido, that they, however carefully devised and constructed, cannot come alive and then escape from the law of
change. Over one hundred artificial languages have been framed by men in recent times, but the three just
named are far more widely known and used than any others. Volapuk, or 'World's Speech', was created by a
Bavarian pastor named Johann Martin Schleyer, in 1879, when it was acclaimed with enthusiasm as the future
universal speech of mankind. Only eight years later, however, many of Volapuk's most ardent supporters
abandoned it in favour of the system invented by the 'hopeful doctor', Doktoro Esperanto, a Polish Jew named
Lazarus Zamenhof (1859-1917). Esperanto is certainly an improvement upon Volapuk in that it is both more
flexible and more regular. Even within Zamenhof's lifetime, however, the mechanism of Esperanto was
improved in various ways, and in 1907 Ido (a made-up name consisting of the initials of International
Delegation plus the substantive suffix -o) was formulated. This Delegation included scholars prominent in various
branches of learning, but its recommendations were not accepted by the main body of Esperantists who were
reluctant to admit that all their well-established textbooks might now be out of date. Today Esperanto, and not
its more advanced form Ido, is easily the first constructed language in the world and it has proved its worth at
numerous international gatherings. It no longer aspires to supplant ethnic languages. Like those other artificial
languages created in the twentieth century -  Edgar de Wahl's Occidental (1922), Otto Jespersen's Novial (1928),
Giuseppe Peano's Interlingua, or Latino sine Flexione (1908), and Lancelot Hogben's Interglossa (1943), and
many more - Esperanto can be regarded as a valuable bridge-language which any man may find unexpectedly
useful in unforeseen contingencies. Learning Esperanto is a pleasant pastime, and manipulating its regularized
affixes and inflections may become a healthy form of mental gymnastics. Nevertheless, even loyal Esperantists
have been known to chafe and strain under the necessary bonds of orthodoxy. However much society may
desire and demand that it should remain constant, 'language changes always and everywhere'. In the New
World, where opportunities are limitless and enthusiasm boundless, and where whole families have been
reputed to adopt Esperanto as their everyday language, it has become modified considerably within the space
of one year to suit the special circumstances and way of life of that particular community. The worlds in which
different social communities live are separate worlds, not just one world with different linguistic labels
attached. An American and a Russian may converse pleasantly in Esperanto about travel, food, dress, and
sport, but they may be quite incapable of talking seriously in Esperanto about religion, science, or philosophy.
'Men imagine', as Francis Bacon said long ago, 'that their minds have command over language: but it often
happens that language bears rule over their minds.' Whether we like it or not, we are all very much under the
spell of that particular form of speech which has become the medium of discourse for our society.
(From Language in the Modern World, by Simeon Potter.)

Language as Symbolism
Animals struggle with each other for food or for leadership, but they do not, like human beings,
struggle with each other for things that stand for food or leadership: such things as our paper symbols of wealth
(money, bonds, titles), badges of rank to wear on our clothes, or low-number licence plates, supposed by some
people to stand for social precedence. For animals, the relationship in which one thing stands for something else
does not appear to exist except in very rudimentary form.
The process by means of which human beings can arbitrarily make certain things stand for other things
may be called the symbolic process. Whenever two or more human beings can communicate with each other,
they can, by agreement, make anything stand for anything. For example, here are two symbols: X, Y
We can agree to let X stand for buttons and Y stand for bows: then we can freely change our agreement
and let X stand for Chaucer and Y for Shakespeare, X for the CIO and Y for the AFL. We are, as human beings,
uniquely free to manufacture and manipulate and assign values to our symbols as we please. Indeed, we can go further
by making symbols that stand for symbols. If necessary we can, for instance, let the symbol M stand for all the
X's in the above example (buttons, Chaucer, CIO) and let N stand for all the Y's (bows, Shakespeare, AFL). Then
we can make another symbol, T, stand for M and N, which would be an instance of a symbol of a symbol of
symbols. This freedom to create symbols of any assigned value and to create symbols that stand for symbols is
essential to what we call the symbolic process.
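
Hayakawa's point can be restated compactly in code. The snippet below is an editorial sketch, not part of the original passage; the names X, Y, M, N, and T and their values are taken from the passage itself.

```python
# Editorial sketch: symbols are assigned by agreement, freely reassigned,
# and may themselves stand for other symbols.

X, Y = "buttons", "bows"              # first agreement
X, Y = "Chaucer", "Shakespeare"       # the agreement freely changed

M = ["buttons", "Chaucer", "CIO"]     # M stands for all the X's above
N = ["bows", "Shakespeare", "AFL"]    # N stands for all the Y's above
T = (M, N)                            # T: a symbol of symbols of symbols

print(T)
```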
Everywhere we turn, we see the symbolic process at work. Feathers worn on the head or stripes on the
sleeve can be made to stand for military leadership; cowrie shells or rings of brass or pieces of paper can stand
for wealth; crossed sticks can stand for a set of religious beliefs; buttons, elks' teeth, ribbons, special styles of
ornamental haircutting or tattooing, can stand for social affiliations. The symbolic process permeates human life
at the most primitive as well as at the most civilized levels.
Of all forms of symbolism, language is the most highly developed, most subtle, and most complicated.
It has been pointed out that human beings, by agreement, can make anything stand for anything. Now human
beings have agreed, in the course of centuries of mutual dependency, to let the various noises that they can
produce with their lungs, throats, tongues, teeth, and lips systematically stand for specified happenings in their
nervous systems. We call that system of agreements language. For example, we who speak English have been so
trained that, when our nervous systems register the presence of a certain kind of animal, we may make the
following noise: 'There's a cat.' Anyone hearing us expects to find that, by looking in the same direction, he will
experience a similar event in his nervous system - one that will lead him to make an almost identical noise.
Again, we have been so trained that when we are conscious of wanting food we make the noise 'I'm hungry.'
There is, as has been said, no necessary connection between the symbol and that which is symbolized. Just as
men can wear yachting costumes without ever having been near a yacht, so they can make the noise, 'I'm
hungry', without being hungry. Furthermore, just as social rank can be symbolized by feathers in the hair, by
tattooing on the breast, by gold ornaments on the watch chain, or by a thousand different devices according to
the culture we live in, so the fact of being hungry can be symbolized by a thousand different noises according to
the culture we live in: 'J'ai faim', or 'Es hungert mich', or 'Ho appetito', or 'Hara ga hetta', and so on.
However obvious these facts may appear at first glance, they are actually not so obvious as they seem
except when we take special pains to think about the subject. Symbols and things symbolized are independent
of each other: nevertheless, we all have a way of feeling as if, and sometimes acting as if, there were necessary
connections. For example, there is the vague sense we all have that foreign languages are inherently absurd:
foreigners have such funny names for things, and why can't they call things by their right names? This feeling
exhibits itself most strongly in those English and American tourists who seem to believe that they can make the
natives of any country understand English if they shout loud enough. Like the little boy who is reported to
have said: 'Pigs are called pigs because they are such dirty animals', they feel that the symbol is inherently
connected in some way with the things symbolized. Then there are the people who feel that since snakes are
'nasty, slimy creatures' (incidentally, snakes are not slimy), the word 'snake' is a nasty, slimy word.
(From Language in Thought and Action, by S. I. Hayakawa)

From Word Symbol to Phoneme Symbol


The second process, by which pictures cease to be ideograms and come to stand for specific linguistic
forms, is even more important. The earliest Sumerian writings are just lists of objects with symbols for numbers
against them: for example, four semicircles and the picture of an ox's head would read 'four oxen'. It seems that
writing arose to meet the needs of the highly centralized city state, and the first writings are records of
payments to the temple or city treasury, and similar transactions.
In this way, pictorial symbols come to stand for various words, which are the names of concrete objects
like sheep, oxen, the sun, houses and so on. Next, by a process of extension, the same symbols are made to
stand for more abstract words related to the original word. Thus a picture of the sun may come to stand for the
words for 'bright', or 'white', and later for the words 'day' and 'time', and a picture of a whip for words like
'power' or 'authority'.
Perhaps the really crucial development, however, is 'phonetization', the association of a symbol with a
particular sound (or group of sounds). First, a symbol for a concrete object is transferred to some more abstract
object which is denoted by the same or a similar word. For example, the Sumerian word ti meant 'arrow', and
so was represented by an arrow in the script; but there was also a Sumerian word ti which meant 'life', so the
arrow symbol came to be used for this too. The arrow symbol was then felt to stand for the sound of the word ti,
and was used for the syllable ti in longer words. In this way, the original word symbols developed into syllable
symbols, which could be grouped together to spell out a word.
An analogous process in English can be imagined on these lines. A picture of a tavern is used to
represent the word inn. Because of the identity of sound, the same symbol then becomes used for the word in.
At the same time a picture of an eye is used for the word eye, and then by extension is used for the word sight.
Finally the tavern symbol and the eye symbol are combined to write the words incite and insight, and have now
become syllabic symbols. If we wanted to distinguish between insight and incite in our syllabic script, we could
add a third symbol to show which of the two was intended: we could draw a picture of an orator to show that
we meant incite, or add a symbol for some word like 'wisdom' to show that we meant insight. When we used the
eye symbol by itself, we might wish to indicate whether it stood for the word eye or the word sight; one way of
doing this would be to add a symbol after it suggesting one of the sounds used in the word intended: for
example, if we had a symbol for the words sow, sew, so, we could add this after our eye symbol to indicate that
the required word began with s. These and similar methods are used in ancient Egyptian and Sumerian writing.
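
The rebus-and-determinative mechanism just described can be mimicked in a few lines of code. The sketch below is an editorial illustration built on the passage's own inn/eye example; the symbol names and the read helper are invented for the purpose, not part of any real script.

```python
# Editorial sketch: the passage's imagined English syllabic script.
# A tavern picture has come to spell 'in', and an eye picture 'cite'/'sight';
# a trailing determinative picture resolves the ambiguity.

TAVERN, EYE = "tavern-sign", "eye-sign"        # syllabic symbols
ORATOR, WISDOM = "orator-sign", "wisdom-sign"  # determinatives (not read aloud)

def read(symbols):
    """Spell out the word written by a sequence of picture symbols."""
    if symbols[:2] == [TAVERN, EYE]:
        if ORATOR in symbols:
            return "incite"    # orator determinative: the 'stirring up' word
        if WISDOM in symbols:
            return "insight"   # wisdom determinative: the 'understanding' word
    return "?"                 # sequences outside this toy script

print(read([TAVERN, EYE, ORATOR]))   # incite
print(read([TAVERN, EYE, WISDOM]))   # insight
```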
Sumerian writing is very mixed, using ideograms, word symbols, syllable symbols, and various kinds
of indicators of the types mentioned. Out of it, however, developed the almost purely syllabic system of
cuneiform writing which was used for Akkadian (the language of the ancient Babylonians and Assyrians), and
which for centuries dominated the writing of the Near East.
Ancient Egyptian writing also developed into a syllabic system, and was particularly important for the
development of true alphabetic writing (i.e. a script that has symbols representing phonemes). The important
thing about the Egyptian system was that the vowels were not indicated. Most of the signs (about eighty) stood
for a group of two consonants, plus any vowels whatever. For example, the symbol for a house (par) stood for
the group pr, and this could mean par, per, apr, epr, epra, and so on. But there were twenty-four signs which
stood for only one consonant plus any vowel; for example, the symbol representing a mouth (ra) stood for the
consonant r, and could mean ra, ar, re, er, and so on. When the West Semitic peoples living round the eastern
shores of the Mediterranean developed a script, they did so by taking over from the Egyptians just these
twenty-four signs. Originally, this must have been a syllable system, in which each of the signs stood for a
number of possible syllables, like the Egyptian ra, ar, re, er, etc.: but in fact it is formally identical with a purely
alphabetic system in which only the consonants are written and the vowels are left out.
The final step, of having fixed and regular symbols for the vowels, was made by the Greeks when they
took over this Semitic alphabet. Some of the consonant sounds of Phoenician did not exist in Greek, and the
Greeks used the corresponding symbols for vowels. For example, the first letter of the West Semitic alphabet,
derived from the picture of an ox, was 'aleph', and stood for a kind of h sound (represented in the spelling by ');
the Greeks of the period did not use this sound, and took the letter over as alpha, representing an a sound. Thus
was reached at last a system of writing where symbols stand for phonemes, and all later alphabetic systems are
ultimately derived from this Greek achievement. The great advantage of the system is the relatively small
number of symbols needed, which makes universal literacy possible.
(From The Story of Language, by C. L. Barber.)

LAW
MODERN CONSTITUTIONS
It is natural to ask, in the light of this discussion, why it is that countries have Constitutions, and why
most of them make the Constitution superior to the ordinary law.
If we investigate the origins of modern Constitutions, we find that, practically without exception, they
were drawn up and adopted because people wished to make a fresh start, so far as the statement of their
system of government was concerned. The desire or need for a fresh start arose either because, as in the United
States, some neighbouring communities wished to unite together under a new government, or because, as in
Austria or Hungary or Czechoslovakia after 1918, communities had been released from an Empire as the result
of a war and were now free to govern themselves; or because, as in France in 1789 or the U.S.S.R. in 1917, a
revolution had made a break with the past and a new form of government on new principles was desired; or
because, as in Germany after 1918 or in France in 1875 or in 1946, defeat in war had broken the continuity of
government and a fresh start was needed after the war. The circumstances in which a break with the past and
the need for a fresh start come about vary from country to country, but in almost every case in modern times,
countries have a Constitution for the very simple and elementary reason that they wanted, for some reason, to
begin again and so they put down in writing the main outline, at least, of their proposed system of government.
This has been the practice certainly since 1787 when the American Constitution was drafted, and as the years
passed no doubt imitation and the force of example have led all countries to think it necessary to have a
Constitution.
This does not explain, however, why many countries think it necessary to give the Constitution a higher
status in law than other rules of law. The short explanation of this phenomenon is that in many countries a
Constitution is thought of as an instrument by which government can be controlled. Constitutions spring from
a belief in limited government. Countries differ however in the extent to which they wish to impose limitations.
Sometimes the Constitution limits the executive or subordinate local bodies; sometimes it limits the legislature
also, but only so far as amendment of the Constitution itself is concerned; and sometimes it imposes restrictions
upon the legislature which go far beyond this point and forbid it to make laws upon certain subjects or in a
certain way or with certain effects. Whatever the nature and the extent of the restrictions, however, they are
based upon a common belief in limited government and in the use of a Constitution to impose these limitations.
The nature of the limitations to be imposed on a government, and therefore the degree to which a
Constitution will be supreme over a government, depends upon the objects which the framers of the
Constitution wish to safeguard. In the first place they may want to do no more than ensure that the
Constitution is not altered casually or carelessly or by subterfuge or by implication; they may want to secure
that this important document is not lightly tampered with, but solemnly, with due notice and deliberation,
consciously amended. In that case it is legitimate to require some special process of constitutional amendment -
say, that the legislature may amend the Constitution only by a two-thirds majority or after a general election or
perhaps upon three months' notice.
The framers of Constitutions may have more than this in mind. They may feel that a certain kind of
relationship between legislature and the executive is important, or that the judicature should have a certain
guaranteed degree of independence of the legislature and executive. They may feel that there are certain rights
which citizens have and which the legislature or the executive must not invade or remove. They may feel that
certain laws should not be made at all. The framers of the American Constitution, for example, forbade
Congress to pass any ex post facto law, that is, a law made after the occurrence of the action or the situation
which it seeks to regulate - a type of law which may render a man guilty of an offence through an action which,
when he committed it, was innocent. The framers of the Irish Constitution of 1937 forbade the legislature to
pass any law permitting divorce.
Further safeguards may be called for when distinct and different communities decide to join together
under a common government but are anxious to retain certain rights for themselves. If these communities differ
in language, race, and religion, safeguards may be needed to guarantee to them a free exercise of these national
characteristics. Those who framed the Swiss, the Canadian, and the South African Constitutions, to name a few
only, had to consider these questions. Even when communities do not differ in language, race, or religion, they
may still be unwilling to unite unless they are guaranteed a measure of independence inside the union. To meet
this demand the Constitution must not only divide powers between the government of the Union and the
governments of the individual component parts, but it must also be supreme in so far at any rate as it enshrines
and safeguards this division of powers.
In some countries only one of the considerations mentioned above may operate, in others some, and in
some, all. Thus, in the Irish Constitution, the framers were anxious that amendment should be a deliberate
process, that the rights of citizens should be safeguarded and that certain types of laws should not be passed at
all, and therefore they made the Constitution supreme and imposed restrictions upon the legislature to achieve
these ends. The framers of the American Constitution also had these objects in mind, but on top of that they had
to provide for the desire of the thirteen colonies to be united for some purposes only and to remain
independent for others. This was an additional reason for giving supremacy to the Constitution and for
introducing certain extra safeguards into it.
(From Modern constitutions by K. C. Wheare)
THE FUNCTIONS OF GOVERNMENT
England had become a political unit before the Norman Conquest. By the middle of the fourteenth
century England had a general or common law, and the main functions of government, those relating to the
agricultural economy of the village apart, were vested in the central authorities that surrounded the king and in
the king's local representatives. From this time we can speak of England as a 'State'.
The growth of the division of labour, which may also be described as the disappearance of subsistence
agriculture, and which became marked in England with the development of the wool trade, did not of itself
imply a corresponding increase in the functions of government. Some increase there was, of necessity. A more
extensive system for maintaining order, some organization for the repair of roads and bridges, a development
of the instruments for the settling of disputes and the imposition of taxation, were obviously required by the
gradual breakdown of village self-sufficiency. Trade and commerce are, however, matters of private
arrangement, and the State itself need do no more than provide the laws to regulate disputes, the judicial
institutions to administer the laws, and the currency to serve as the instrument of exchange. It happens that in
England the State went somewhat further, and was compelled to make some attempt to control the movement
of labour. So long as labour was provided within the manor by labourers who themselves had interests in the
land of the manor, the problem was one for the manor alone; but hired labour became more and more the
practice as specialization developed, in ancillary trades as well as in agriculture itself, and labourers in search of
work left their manors, with the result that the State interfered in the interests of public order and the needs of
employers. This was especially so after the Black Death created a dearth of labour. Justices of labour were
created to regulate labour, and were subsequently merged with the justices of the peace, who were originally
concerned only with the apprehension of criminals.
By the sixteenth century the State had to concern itself far more than in the past with external relations,
and at the same time it had not only to deal internally with 'wandering beggars' who were 'rogues and
vagabonds', but also to provide poor relief; and the requirements of transport compelled more effective
provision for the maintenance of roads and bridges. The result of further economic development was that in the
eighteenth century the State had additional functions relating to the regulation of foreign trade, and the
government of colonies. Moreover, though the foundation of the nation's economic life remained in the village,
the towns became increasingly important.
The Industrial Revolution - falsely so named, because there was no clear break in the chain of
development, and the use of steam-power merely accelerated the speed of change - altered the emphasis.
Mining, iron smelting, and other industries became as important as agriculture. The domestic wool industry
was superseded by weaving factories, and foreign trade developed the cotton industry. New methods of
transport were developed. These changes did not in themselves require any new functions of government;
indeed, the ideas of the time, especially after Adam Smith, were in favour of easing such governmental
restrictions on trade and industry as already existed and of developing a system of free competition. There
were, however, repercussions in the field of government. New roads, canals, and, in due course, railways were
required. New ports and harbours were opened. Though some or all of these might be provided by private
enterprise, the intervention of the State was necessary to enable land to be acquired, traffic to be regulated,
roads crossed, bridges built, and so on.
The most important effect from a governmental aspect, however, was the congregation of vast numbers
of inhabitants into towns, some of which were ancient market towns, some mere villages, and some entirely
new growths where before agriculture had reigned supreme. The old methods by which the family had
obtained its own water and disposed of its own sewage and refuse were dangerous to health in these great
urban centres. New police forces had to be created, streets paved and lighted, refuse collected, and water
supplies and main drainage provided. The existence of unemployment on a large scale created a new problem
for the poor law. In particular it was recognized that ill-health was responsible for much unemployment and,
therefore, for much expenditure on poor relief. On the one hand medical services and hospitals had to be
provided, and on the other hand there was new agitation for preventive remedies - pure water, clean streets,
good sewerage, and sewage disposal. Also, these remedies were not enough if a large part of the population
spent most of its time in unhealthy factories and mines. Hours and other conditions of employment had to be
regulated; and though this involved in the earlier experiments only restrictions on private enterprise, they were
soon found to be ineffective without State inspection.
Thus, while the State in the nineteenth century was freeing trade from the cumbersome restrictions of
the eighteenth century, it proceeded to regulate industry both directly in the interests of public health and
indirectly by providing services out of the produce of taxation. After 1867 sections of the working class had the
vote, and the individualist principles which appealed to the middle class of the industrial towns became much
less strong. Accordingly, the provision of services became part of State policy. It recognized, for instance, its
responsibility for the education of the young and provided schools, not merely by subsidizing ecclesiastical
bodies, as it had done since 1833, nor merely for the benefit of 'pauper children', but directly through the local
authorities and for the benefit of all children. There was, too, a general and progressive reform of all the public
services, and new services, such as housing, transport, and electricity, were provided. In the present century the
pace of development has accelerated; and while on the one hand existing services have been expanded, new
services like pensions, insurance, and broadcasting have developed; and since 1945 some of the services
formerly provided by private enterprise have been taken over by the State.
(From The law and the constitution by Ivor Jennings)
INITIATIVE AND REFERENDUM
It is sometimes argued that a democratic system requires the embodiment of the initiative and the
referendum in the constitution. A people, it is said, does not really control its own life if its only direct
participation in the business of making legal imperatives is confined to choosing the persons responsible for
their substance. By the initiative, the popular will can take positive form; and by the referendum, the people can
prevent action by its representatives with which it is not in agreement. Direct government, it is claimed,
provides a necessary supplement to a representative system; otherwise, as Rousseau said of the English people,
it is only free at election time.
But this is, it may be suggested, both to mistake the nature of the problems which have to be decided,
and the place at which popular opinion can obtain the most valuable results in action. In all modern states, the
size of the electorate is necessarily so large that the people can hardly do more, as a whole people, than give a
direct negative or affirmative to the questions direct government would place before them. Legislation,
however, is a matter not less of detail than of principle; and no electorate can deal with the details of a measure
submitted to it for consideration. Direct government, in fact, is too crude an instrument for the purposes of
modern government. It fails to make discussion effective at the point where discussion is required; and it leaves
no room for the process of amendment. One might, it is true, leave certain broad questions of principle to
popular vote, whether, for instance, the supply of electricity should be a national or a private service. But all
other questions are so delicate and complex that the electorate would have neither the interest nor the
knowledge, when taken as an undifferentiated electorate, to arrive at adequate decisions.
Nor is this all. Not only can most questions not be framed in a way which can make direct government
effective; the secondary results of the system are also unsatisfactory. It is hardly compatible, for instance, with
the parliamentary system since it places the essential responsibility for measures outside the legislature. Such a
division of responsibility destroys that coherence of effort which enables a people adequately to judge the work
of its representatives. It assumes, further, that public opinion exists about the process of legislation, as well as
about its results. But the real problem of government is not forcibly to extract from the electorate an
undifferentiated and uninterested opinion upon measures about which it is not likely to be informed. It is
rather to relate to the law-making process that part of public opinion which is relevant to, and competent about,
its substance before that substance is made a legal imperative. 'This involves not direct government, but a
method of associating the relevant interest-units of the community with the making of measures that will affect
their lives. A referendum, for example, on a national scheme of health insurance would give far less valuable
results than a technique of consultation in which the opinions of doctors, trade-unions, and similar associations
were given a full opportunity to state their view before the scheme was debated in the legislative assembly.
Effective opinion for the purpose of government, in a word, is almost always opinion which is organized and
differentiated from that of the multitude by the possession of special knowledge. Popular opinion, as such, will
rarely give other than negative results; and it seems to be the lesson of experience, very notably on the record of
Switzerland, that it is so firmly encased in traditional habit as to make social experiment difficult when it is held
as a reserve-power outside the legislature.
(From An Introduction to Politics by H. J. Laski)
S v S (FINANCIAL PROVISION: DEPARTING FROM EQUALITY) [2001] 2 FLR 246
Family Division
Peter Collier QC (sitting as a Deputy Judge of the High Court)
2 February 2001
Financial provision – Factors to be considered – Departure from equality
The parties were married in 1967 when the wife was 25 and the husband 27. There were two children, a
son aged 31 who was financially independent, and a daughter, aged 27, who lived with her mother and was
supported by her financially to some extent. The wife worked outside the home until her son was born and
thereafter ran the house and worked in the husband’s business. The marriage broke down in 1998 when the
husband left the matrimonial home and disclosed that he had been having an affair with another woman since
1992, and that she had a child by him. At the hearing of the wife’s application for ancillary relief the judge
found that the husband’s share of the total assets of the parties amounted to £2,102,890, consisting of his
interests in various businesses, his pension fund, some properties, his half-interest in the matrimonial home and
investments, a yacht, and a number of vintage cars. The wife had her half-share in the house, her car, and a
portfolio of investments, all amounting to £629,166. It was argued on the wife’s behalf that this was a case in
Reading Passages [ 88 ]

which to follow the principles set out in White v White, so as to achieve a clean break and an equal division of
the assets, by the transfer by the husband to the wife of his share of the matrimonial home and of the joint
investments, together with a lump sum of £400,000. Counsel for the husband contended that in the
circumstances that approach would not be fair having regard to the criteria in s 25 of the Matrimonial Causes
Act 1973.
Held – the assets of the parties had been valued on a net value basis, ie taking account of sale costs and
tax liabilities, and did not fall to be reduced in the husband’s case because some of the capital was not readily
available or was not producing an income. However, having regard to all the matters in s 25, the award of
£400,000 to the wife, required to achieve equality, would discriminate against the husband. On the facts, such
an award would mean that the wife lived in luxury, while for the husband with a new family to support, things
would be much tighter. If an equal division bore harder on one party than on the other, then there was good reason to
depart from equality. The court should aim to provide both parties with a comfortable house and sufficient
money to discharge their needs and obligations. On that basis there would be an order for the husband to
transfer to the wife his half-share of the matrimonial home and of the joint investments, and to pay to her a
lump sum of £300,000.
Statutory provision considered
Matrimonial Causes Act 1973, s 25
Cases referred to in judgment
Leadbeater v Leadbeater [1985] FLR 789, FD
Piglowska v Piglowski [1999] 2 FLR 763, [1999] 1 WLR 1360, [1999] 3 All ER 632, CA and HL
White v White [2000] 2 FLR 981, [2000] 3 WLR 1571, [2001] 1 All ER 43, HL
Martin Pointer QC for the petitioner; Rodger Hayward-Smith QC and Andrew Marsden for the respondent
PETER COLLIER QC: This is an application for ancillary relief by Mrs S, following the breakdown of
her marriage to her husband Mr S.
The marriage
The petitioner (who I will hereafter refer to as ‘the wife’) is Mrs S. She is 58 years of age, having been
born on 14 June 1942. The respondent, her husband, is Mr S. He is now 60 years of age, having been born on 30
April 1940. They were married on 20 September 1967, when she was 25 and he was 27.
The children
S, born on 27 July 1969, is now 31. He lives in America and is, of course, financially independent. He
has not featured in the evidence, unlike his sister M, who is now 27, having been born on 15 July 1973. She has
completed her tertiary education; in fact she has two MA degrees, but at the present time has no permanent
employment. Her father thinks she should be more independent than does her mother, who continues to
support her, having provided a car for her, by way of paying the hire-purchase instalments on the car, which is
said to be a loan to her. She has also assisted her, towards the end of last year, in her attempts to set up a small
retail business, by buying goods for her; again that is a loan of some £5000 which has been shown in the
schedules.
The husband helped M to buy a property in Colchester some while ago now. She currently rents that
out, receiving about £500 per month, the mortgage costing her some £340 a month; and at the present time she
chooses to live with her mother.
There was a Trust Fund set up for the children in 1976, for tax and inheritance reasons. Since December
1998, M has received a total of £31,556 from the Trust Funds.
Employment
At the time of the marriage the husband was working with his brother, H, in a business that they owned
and ran, known as ‘[SCL]’, making car alarms and battery chargers. The business had been started by them in
1960.
When they married, the wife was working as a PA/secretary to the Managing Director of MGN. In
1969, when S was expected, she gave up work and, thereafter, she has not worked outside the family or the
family businesses. It is accepted that she worked in the home, bringing up the children and running the home.
When S was about 2, she began to work for the husband’s company. She was an employee, she was on the
books; she did secretarial and what I described as ‘general duties’, but they covered quite a lot of matters,
including hosting people who came to visit.
He says that she has exaggerated her role in the business, and she says that he has minimised it. I find it
is not necessary to define it with precision, suffice to say that in my judgment she made a significant
contribution to the business.
Homes
The first matrimonial home was in Tomswood Road, Chigwell; it was bought for £12,000, with the
assistance of a gift of £3000 from the wife’s father, A. The husband says that he does not recall exactly how
much, but he believes they also had some help from his family. Again, greater precision will not affect the fact
that these two people began to share their lives with support from both their families. It was a real partnership,
each bringing what they could to it, and they intended to work hard to make their family and business
something of which they could both be proud.
In 1976 or 1977, Tomswood Road was sold for £43,000 and they purchased Hunters, Chigwell, for
£58,750 with the aid of a mortgage of £25,000. The mortgage was paid off out of the proceeds of the subsequent
sale of the business. The wife remains there to this day, and it is accepted that she should stay there. I am told
that the house is not, itself, attractive, being a chalet-bungalow that has been extended, and at the time these
proceedings commenced it was in need of substantial redecoration. Over the years that she has lived there, the
wife has put time and money into the development of the garden, which is clearly her pride and joy. She
envisages that in due course, if necessary, she could put in a ramp and push her zimmer-frame up and down
between house and garden.
In October 1991, the husband and his brother H sold SCL to RG, and some of the proceeds have been
used to purchase investment portfolios: there is a joint portfolio; there are separate ones for each of the husband
and wife; there is one creating a settlement trust for the children; there is also a pension fund, which now
provides an income for the husband.
Some time before 1992, the husband formed a relationship with a Miss D who bore his daughter, B, in
that year. The existence of both that relationship and that child were kept secret from the wife until April of
1998. I simply record that the husband had had a brief relationship with another woman in the mid 1980s, he
had left the matrimonial home for a short time but had returned.
Businesses
In addition to SCL, there are other businesses in which the husband has been and, in some instances,
remains involved. PPEL was set up by the parties to receive commission on the sale of battery chargers for RG
after the sale of SCL to RG. PPEL commenced business in 1993. The company's income has been as high as
£19,000, but in recent years it has fallen to £2000–£3000 a year. I have been told that the company was
used as a vehicle to buy equipment for and fund travel by the wife; the husband says that he anticipates no
future profits from this company.
SL and TFL were also companies run by the husband and his brother. I shall say more of those later
when I come to deal with valuations. AVIL – I need say little about that: it was a business in which the parties
invested money and which failed. Some moneys had been repaid to the husband, and I understand there will
be no further income from that source.
Properties
There are other properties that have come within the family at different times. Lower Clapton Road:
these are commercial premises which are let.
They are premises left to the husband by his mother, and comprise a shop, a flat and an advertising
hoarding. They currently produce an income: the shop and flat, some £11,000 plus; the hoarding, £7500.
There is an issue regarding the hoarding. At an earlier stage in the proceedings it was stated, in relation
to its value, that ‘the income from the advertising panel, which in our view is high, is relatively insecure’. Since
then, it has been disclosed that the local council, Hackney, had written to the husband on 18 November 1997
stating that in their view there was neither 'deemed' nor 'express' consent for this board which is in a
conservation area. Since then, the advertiser, Postmobile, has written to the council stating that the board has
been in its present situation for so long that it must have deemed consent. That letter has been acknowledged
and there has been no further action on the part of the local authority. The husband told me that he thought the
board had been there for over 45 years. On that evidence, I am not able to discount the rent received in relation
to these premises.
There are factory premises in Springfield, Chelmsford. These premises were purchased through a
Business Expansion Scheme and are rented out, producing an income in the region of some £10,000 a year.
Aveley Way, Malden, is the husband’s present home where he lives with Miss D, B and another
daughter of Miss D’s by a former relationship.
For completeness, I should perhaps add that prior to the husband’s purchase of that property, into
which he moved with his new family, Miss D had been living in a council house which she has now purchased
with the aid of a loan of £33,331 from the husband. That property she now lets out at £5400 pa, and I am told
that the property is now worth some £55,000.
Cars
The parties have an interest in classic cars. The husband rebuilt several of these and they would spend a
significant amount of time touring and rallying in these cars. The family’s present collection includes
several classic cars: an E-Type Jaguar, a Jowett Jupiter and a Jowett Javelin. They are all with the husband, who
has built a garage at his new home to house them. He also owns an Audi Quattro and a Peugeot 205. The agreed
value of all those cars is £41,200. For everyday use the husband has an imported Toyota Estate car worth some
£1500; the wife has a Volkswagen Golf, bought with the help of a loan from her sister of £17,000 in November
1998. The Golf is now said to be worth £10,000.
The boat
There is a boat, an Oyster 435, known as ‘Star Charger 4’, which is currently moored in Antibes in the
South of France. There is no agreed valuation, but I am asked to come to a view about its value based on a
variety of material that has been put before me and to which I shall turn in due course.
Bank accounts
There are on both sides various bank accounts.
Breakdown of the marriage
The breakdown of the marriage happened in April 1998. The husband does not dispute the wife’s
account that he left the matrimonial home on 27 April 1998, having disclosed his affair 2 days earlier on 25
April 1998.
Present circumstances
The wife continues to live in the former matrimonial home; she has her own investment income and the
£350 per month ordered by way of interim periodical payments.
The husband lives with Miss D, who is 41 years old, B, who is now aged 8, and Miss D’s other child, E,
who is aged 15, at Aveley Way, Malden. It has been described to me as a typical small modern estate house on
the outskirts of the town. The husband says he bought it only intending to stay there until he knows the
outcome of these proceedings, when he hopes to buy something more suitable where he can keep his cars and
spend his remaining years. He is retired but clearly he remains an active man.
Valuation of the assets
These are mostly agreed, but the outstanding issues remain in relation to the husband’s interest in the
companies SL and TFL, and the value of Star Charger 4, the boat.
I deal first with SL and TFL. SL manufactures tubular furniture, and TFL does the same, and there is a
trading relationship between the two companies. TFL owes SL in excess of £300,000, and TFL has a deficiency
of some £63,000. I am told it is to SL’s advantage to continue the present situation as it is able to use up tax
losses, but to run the two companies involves the doubling of some of the costs. The management of both
companies is the same; as I understand it, they have the same directors and similar shareholders. Mr P is the
only non-family member who is a director; otherwise the husband, his sister-in-law PS, and her father
DL are the directors.
I am asked to resolve the dispute as to the valuation of the husband’s interest in these two companies.
In order to assist me, I heard from two accountants, ME, called on behalf of the wife, and PM on behalf of the
husband.
In relation to SL, they agree that the basis of the valuation should be the company’s net asset value.
There is a small difference between them as to whether one should discount SL’s net assets by the amount of
TFL’s deficiency, which is £63,539, since TFL still owes SL £306,563. The net asset value of the shares at face
value, on ME’s valuation, is £92,582, and PM’s valuation produces £102,478.
They also differ in the degree to which the appropriate figure should be discounted. I am told that the
scale accepted by the Inland Revenue is between 40 and 50%. PM says the proper discount is 40%, ME says it is
50%. Which is chosen is dependent upon the extent of the control that the husband has over the company. The
Articles of Association provide that the sale should be, in the first instance, to its present shareholders who are
all in some way connected with the family, and, of course, Mr P. If they decide not to buy, then the company
can purchase the shares if it has sufficient reserves. Finally, if neither of those routes produces a sale, then a third
party sale can take place.
What is the extent of the control that the husband has? It is argued that he does not involve himself in
the day-to-day management of the business; that the other directors, as I have said, are Mr P who manages the
company, DL and his daughter, PS, who is the widow of H, the husband’s brother.
ME is the company auditor and occasionally attends board meetings. He did not
disagree when I said to him that I thought it would be suggested that the husband was a strong character. He
told me that the husband does attend board meetings where he expresses his views and that he is respected by
the other directors. I am satisfied that in key decisions of the sort we are contemplating, the husband would
have substantial influence; and so, given the agreed scale of 40–50%, I have assessed the discount at 40% in
relation to SL. Also given the relationship of the companies, I do not propose to make a reduction in value
because of TFL’s deficiency. Consequently, I determine the value of the husband’s interest in SL at £62,000.
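The arithmetic behind that £62,000 appears to follow from applying the 40% discount to PM's undiscounted figure; the judgment states only the result, so the sketch below is an inference from the figures quoted, not the judge's own working.

    # Sketch of the SL valuation as the quoted figures imply (an inference,
    # not the judge's stated working).
    net_asset_value = 102_478    # PM's face-value figure, taken without any
                                 # reduction for TFL's deficiency
    minority_discount = 0.40     # bottom of the agreed Inland Revenue scale of 40-50%
    value = net_asset_value * (1 - minority_discount)
    print(round(value))          # 61487 -- roughly the £62,000 the judge found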
That leaves TFL. Unlike SL, where ME and PM agree about the basis of the valuation, that is not the
position with TFL. There are no assets, so I cannot value the company on a net asset basis. The company has
paid no dividends, so I cannot value it on a dividend yield basis. PM says that nevertheless a trade sale basis
can be arrived at on the basis of the profit. ME says that there is no track record and the profit is not reliable.
Having explored with ME how these companies are related in their trading, it became clear that TFL
was intended to have a genuinely separate existence, resulting from a joint venture between SL and another
business. However, the other party gradually withdrew, leaving the owners of SL to run TFL. Their two
directors, Messrs. C and J, were replaced by the husband and DL, and the company is now run solely to gain
certain tax advantages arising from past tax losses. The result is that TFL is, realistically, just a division of SL.
ME said that once someone was told the story of TFL, they would not want to buy TFL’s shares. I agree. In my
judgment, the shares have no saleable value, certainly not one that could be assessed on a trade sale basis (as
PM attempted to do), and no other basis has been put forward to me.
So I assess the value of the husband’s interest in SL at £62,000, but the value of his interest in TFL as nil.
Mr Hayward-Smith QC suggests that that figure should be further discounted as it is said by the accountants
not to be the equivalent of cash. I have considered carefully his submissions, but in the end I have decided that
although they are not realisable as immediately as if it was a publicly quoted company, I should not make any
allowance for that, otherwise I would have to go through the same exercise with all the properties, each of
which may take some considerable time to realise. I must assume that everything has a value and, in the
absence of firm evidence about a difference between market value and forced sale value, and also a reason for
preferring the latter in some particular instance, I should take the market value of all the assets, including this
one, as their true value.
The boat
That leaves the other issue, the value of the Star Charger boat. This was built between 1983 and 1985.
The husband has rebuilt part of the cabin area. He jointly owned the boat with his brother, but bought out his
brother’s share for £50,000 when his brother bought another boat in March 1998. Attempts had been made to
sell the boat in 1996 through Oyster Brokerage of Ipswich. They advertised the boat at an asking price of
£125,000; no sale was achieved. Oysters have since said, in the summer of 1999, that they had hoped to achieve
a price of £110,000–£100,000. Apparently it would sell better in the UK, so any sale here for a better price would
necessitate transporting the boat to the UK at a cost of about £4000, in addition to the
10% commission charged by the selling agent.
The self-same agents were asked for a valuation in 1999. They said that their insurance policy precluded
them from giving valuations, but said that they estimated the value at £82,500–£87,500 in June of 1999. They
were again asked in January 2001. Making the same reservation regarding valuation/estimation, they said it
would have depreciated by a further 5–10% and so now would be worth £75,000.
The insured sum throughout has been £112,900, which is a replacement value. The boat suffers from
osmosis, which has been taken into account in the estimations, but it will affect its saleability. The wife, going
on the original advert for sale, and discounting, says that the value should be assessed at £95,000 less 10%
commission.
Doing the best I can, it seems to me that if I take the mean between £82,500 and £87,500, which is
£85,000, and then deduct the commission and transport costs, and then also make some small allowance for
depreciation, I come to a figure of about £70,000, which is the amount, in the absence of better evidence, I shall
take as the value of the boat.
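The boat figure follows the steps the judge describes; only the size of the final depreciation allowance is left implicit, so the last deduction below is an assumption needed to reach the stated £70,000.

    # The boat valuation, step by step as described in the judgment.
    mean_estimate = (82_500 + 87_500) / 2       # mean of the agents' 1999 range: £85,000
    after_commission = mean_estimate * 0.90     # less the 10% selling commission -> £76,500
    after_transport = after_commission - 4_000  # less transport to the UK -> £72,500
    value = after_transport - 2_500             # small depreciation allowance (assumed)
    print(value)                                # 70000.0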
As the value of all the other assets is now agreed, those findings of mine produce a total asset value of
£2,732,055.
The arguments as to how those assets should be divided which were presented (in the broadest outline)
to me are as follows.
The wife’s case
On behalf of the wife it is argued that this is a classic case in which to follow the principles set out by
Lord Nicholls of Birkenhead in his speech in the House of Lords case of White v White [2000] 2 FLR 981, [2000] 3
WLR 1571. It is said that there are sufficient funds for a clean break, with an equal division of all the assets. That
can be achieved by transferring the husband’s half-share of the house, and of the joint funds, to her absolutely,
and adding to that a lump sum to make up half the value of the assets; that sum is just over £400,000. It is said
that the husband has not displaced that as a starting point.
The husband’s case
For the husband, it is said that that is not the correct approach; that this is only marginally a White case.
It is said that when I look at the statutory criteria set out in s 25 of the Matrimonial Causes Act 1973, my
tentative conclusions will be that equality is not possible here if I am to be fair and do justice to those criteria.
He particularly argues that given the fact that significant parts of the assets are, on the one hand, the home,
which will go to the wife, and, on the other hand, his pension fund, I am restricted in the extent to which I can
balance the figures equally.
Section 25 of the Matrimonial Causes Act 1973
It seems to me that I must begin by examining the statute. Lord Nicholls of Birkenhead said in White v
White [2000] 2 FLR 981, 992, [2000] 3 WLR 1571, 1581, quoting from Lord Hoffmann’s speech in Piglowska v
Piglowski [1999] 2 FLR 763, 782, [1999] 1 WLR 1360, 1379:
‘… section 25(2) does not rank the matters listed in that subsection in any kind of hierarchy.’
I think it would be helpful, therefore, if I deal first with those matters relative to background and
history. Subsection (d): ‘the age of each party to the marriage and the duration of the marriage;’. I recited the
history earlier: he is 60, she is 58; the marriage was a long one, lasting almost 31 years.
Subsection (f): ‘the contributions which each of the parties is or is likely in the foreseeable future to
make to the welfare of the family, including any contribution by looking after the home or caring for the
family;’. As I indicated earlier, I take the view that for many years this marriage was a partnership, each
contributing equally but in different ways, and no one has argued otherwise.
Subsection (c): ‘the standard of living enjoyed by the family before the breakdown of the marriage;’. It is
part of the wife’s evidence that in the final year of marriage they got through £120,000, and she would have me
believe that this was typical expenditure. I find that the £120,000 is not proved. I consider it was an unlikely
sum to have been spent, given that it is the sum of their two proposed future budgets, which includes a £24,500
mortgage for the husband. Clearly they were very comfortably off, whatever they wanted to do they could and
did do. She told me they ‘lived the life of Riley’. She also said that her experience before the separation was that
there was plenty for whatever she wanted to do, he begrudged it but there were adequate funds. They enjoyed
classic cars that the husband rebuilt; they had a yacht in the south of France; they took holidays as and when
they wished. Her expenditure on the garden was unrestrained. She made regular major shopping expeditions
to the West End. In latter years, the husband had a child he was supporting to the tune of £350 per month,
without his wife, who monitored their finances, suspecting anything.
Subsection (e): ‘any physical or mental disability of either of the parties to the marriage;’. There are none
of significance. The wife has been depressed, but no one suggests that she should go out to work or that she has
special needs that should be provided for.
Subsection (g): ‘the conduct of each of the parties, if that conduct is such that it would in the opinion of
the court be inequitable to disregard it;’.
Subsection (h): ‘the value to each of the parties to the marriage of any benefit which, by reason of the
dissolution or annulment of the marriage, that party will lose the chance of acquiring’.
Neither of those subsections has any relevance to my determination.
I turn to the two main subsections, those dealing with assets and resources on the one hand, and with
needs, obligations and responsibilities on the other.
Subsection (a):
‘the income, earning capacity, property and other financial resources which each of the parties to the
marriage has or is likely to have in the foreseeable future, including in the case of earning capacity any increase
in that capacity which it would in the opinion of the court be reasonable to expect a party to the marriage to
take steps to acquire’.
Income
Apart from her interim maintenance of £350 per month, the wife has investment income only. Currently
there is that produced by her own portfolio and her half of the joint investments, producing something in the
region, I have been
told, of £25,000. It is not suggested that she will have any income other than what is produced by those
investments, plus any produced by the lump sum that I order the husband to pay her.
The husband has income from a number of sources. His pension fund currently produces £52,470 per
year. His rental from properties produces £28,500 per year, though I note that something less than that was
shown in his tax return for the year ending April 1999. He has investment income of his own, of about £12,500,
and his half-share of the joint investments, producing something over £5000. Some other earnings have been
produced from time to time, such as bonuses from SL. It is clear that he has more cards available to him in his
hand than she has in hers, and that he has some more flexibility than she does.
Earning capacity
She has none. I accept that he wants to remain retired, but I note that he is a company director, having
his say in those companies. I do not consider that either of them should take steps to increase their earning
capacity at this stage of their lives; they have both reached the age when they can expect to live now off the
fruits of their past labours.
Property and other financial resources
The matrimonial home and the portfolio of joint investments are the only jointly owned items; each has
other property in their own name and other expectations as well as obligations.
The wife
She has her half-interest in the home, which is £252,200, and her half-share of the joint investments at
£81,050. She owns a car said to be worth £10,000, but she owes her sister £17,000 which was loaned to her to buy
that car. She is owed £5000 by M, which are moneys advanced to help her set up a small business venture at the
end of last year. She has some small amounts in her bank accounts. Her main asset is her own portfolio of
investments, said to be worth £286,608. She has to date paid legal costs of £12,553 (relevant by reason of the
decision in Leadbeater v Leadbeater [1985] FLR 789). Her total pot, therefore, amounts to £629,166.
The husband
He has his half-interest in the home, £252,200 net, and his half-share of the joint investments, £81,050.
He owns a collection of cars valued at £41,200; his everyday car is worth £1500. He owns the boat which I have
assessed at £70,000. He has lent £33,331 to Miss D. He expects to receive £75,000 from his brother H’s estate. He
has an investment portfolio worth £356,313 net. He has two PEPs worth £12,526. He has SL shares, which I have
valued at £62,000. He has over £115,000 in various bank accounts. His pension fund is valued at £689,037. He has
paid costs to date of £38,369. His total pot, therefore, comes to £2,102,890. The total joint assets amount to
£2,732,055. I come to subs (b):
‘the financial needs, obligations and responsibilities which each of the parties to the marriage has or is
likely to have in the foreseeable future’.
The section refers to ‘needs, obligations and responsibilities’.
Lord Nicholls of Birkenhead was critical of the phrase ‘reasonable requirements’, which has, in recent
years, been used in these courts to comprehend the assessment that is made under this subsection in the context
of the rest of the section.
I shall consider her ‘needs, obligations and responsibilities’ in the context that is set by the other factors
to which I have already referred. If this is different from ‘reasonable requirements’, then so be it.
Housing
The wife needs a house; she wants to live in Hunters. She tells me it is the ugliest house she knows, but
it has the nicest garden she knows. She wants to stay there; there are no dependent children, so there is no
particular need to do so. The house is worth over £500,000, which is a significant part of the assets. One of the
difficulties in this case is created by her desire to stay in that house.
The husband has bought a house; it has an equity of £80,000. He hopes to improve his lot in relation to
housing, dependent on the outcome of these proceedings. Mr Pointer QC says he does not plan to move and is
settled where he is, but I accept the husband’s evidence that he would like to move, and he will if he is able.
Obligations and responsibilities
The husband has a new family, Miss D, B (aged 8) and E (aged 15), and there has been argument about
the effect that I should give to that. The wife has no obligations, though she supports her daughter M to an
extent, as I have already described. M’s father says that her mother indulges her. If she does, that is her choice,
perhaps understandable in the circumstances.
General standard of living
I find that the parties have a similar standard of living, as one might expect from people who have lived
together for over 30 years. They are not critical of each other’s proposed budgets overmuch. She questions his
need for a boat costing £10,000 pa, and some expenditure on his cars; and he inquires as to her proposed
expenditure of £7000 pa in relation to clothing.
In relation to housekeeping figures, he has a budget of £7800, which is £150 a week; she has a budget of
£4633, which is £90. That seems to me to be perfectly reasonable, relative to and consistent with the way they
have lived in the past.
As for hobbies, he has his boat and cars, and she has her garden.
Those are the criteria I have to consider. The aim, or, in modern parlance, ‘the overriding objective’, is
now said to be to achieve a fair outcome. Well, what are the arguments? Mr Pointer says this is a classic White
case and that all I need to do, having arrived at a grand total value for the assets, is to make an order that will
result in them each having a half share of that total.
Mr Hayward-Smith says: ‘Not so’; that will result in unfairness and there are reasons why I should
not go down that route. First, he says this is not really a White case: White assumes a ‘clean break’ case where the
children are grown up and independent, and where the assets exceed the amounts required by the parties for
their financial needs in terms of a home and income for each of them. In those circumstances, equality should
be the
yardstick against which any division should be checked.
Mr Hayward-Smith says that if this is a White case, it is ‘on the cusp’, and he has sought to persuade me
that for various reasons the capital is not all readily available and sufficiently free to make that equal division.
There is, he says, nothing binding on H’s estate to pay, and no timescale within which they will pay; the estate
is not straightforward and the husband will not be paid until it is finalised.
I am not told that he has any charge on Miss D’s property; and, as she doesn’t earn, she will not be in a
position to raise a mortgage on it, so that loan could only be repaid if she sold her property and he cannot force
her to sell. The boat has been for sale previously and did not sell. There is also the argument about the
saleability of the SL shares. I must also note that whilst they remain in his hands pending sale, none of these
assets produces any income. So, for all these reasons, it is said that the assets are not such as enable an
immediate fair division by the ordering of a payment of a lump sum so as to even out the shares of assets held
by each.
Secondly, it is argued that to simply divide the assets is to take no notice of the fact that the wife
unnecessarily diminished the capital available after the separation. It is said that she removed and spent
moneys from the accounts as if it was ‘going out of fashion’. It is said that she liquidated some of her own
capital unnecessarily.
Finally, under this head, it is said that it was quite unnecessary for her to borrow £17,000 from her sister
to purchase a car, whilst at the same time purchasing for her daughter, on hire-purchase, a Polo motorcar.
It is said that I should take all that into account in deciding whether this is a White case; and, if it is, that
those sums should be reflected in the final calculations.
Thirdly, it is argued that the husband’s pension fund takes this case outside White: it is the largest single
part of the assets. It is said that this fund is now set up and he cannot vary it, he cannot draw capital from it,
and his flexibility in relation to how his income is delivered is therefore significantly affected.
Finally, it is said that he has different and additional needs: he has responsibilities for Miss D, B and E;
he wants B to have a private education, as did his other two children; he wants also to provide properly for his
new family. It is said that it is accepted that they had come to him knowing that he has a former wife to be
provided for; but, nevertheless, they must be considered and they do stretch the limited resources.
Mr Hayward-Smith invites me to perform an exercise whereby, leaving aside the house, I should
enhance the wife’s present asset position by an additional £143,475, made up of the various sums she is said to
have expended unnecessarily, so as to provide a current total of £388,788. To that should be added the joint
portfolio, a further £162,099. He says that if she were then to be given a lump sum of £100,000 she would have a
total free capital of £650,887, plus a house worth £504,400, giving her assets worth £1,155,287. He says that with
the £650,887 she would, on a Duxbury basis, have some £45,000 pa net, not far short of what she is looking for in
her proposed budget of £48,611.
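Those figures are internally consistent, and they let one back out the rate of return the Duxbury figure assumes. The check below is simply my arithmetic on the numbers quoted above; a real Duxbury calculation is an actuarial computation allowing for yield, inflation, tax and life expectancy, not a flat percentage.

    # Checking Mr Hayward-Smith's proposal from the figures quoted above.
    enhanced_assets = 388_788     # wife's assets plus the disputed £143,475
    joint_portfolio = 162_099     # the whole joint investment portfolio
    lump_sum = 100_000
    free_capital = enhanced_assets + joint_portfolio + lump_sum
    print(free_capital)               # 650887 -- the £650,887 quoted
    print(free_capital + 504_400)     # 1155287 -- £1,155,287 with the house added

    # Implied Duxbury rate (my inference from the quoted income figure).
    print(f"{45_000 / free_capital:.1%}")   # about 6.9% to yield £45,000 pa net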
In relation to those arguments, Mr Pointer says, first, that ‘assets are assets’ and I should not over-
concern myself with how and when they may be realised.
In relation to alleged diminution of capital, he says that the figures are wrong and the arguments are
flawed: the proper date to take as the start point must be the date when the husband left, which is 27 April
1998, not the start date on the page of the bank statement which was 6 April 1998. He says that if we go from
the date of separation, the amount drawn or paid from the bank is not £38,700 but £22,500, and that she has
accounted for £18,000, and more, of that in her schedule at A/57.
As to the withdrawals of capital, amounting to £45,000 in 3 years, that has only enabled her to bring her
income up to about the £45,000 pa that she needs to spend.
In relation to the third point regarding the pension fund, he says that this is not different in principle
from the wife’s position: she will be entirely dependent upon investment income, she will have to invest in a
Duxbury fund and will have no more flexibility than him.
Fourthly, it is said that the position of his paramour and her children, only one of whom is his, cannot
reduce the rights of his wife as there are ample resources to provide for her and to enable the husband to look
after his new family.
I do not consider that I can differentiate between assets in relation to their value. The courts have
always approached these matters on a net value basis, ie, taking account of sale costs and tax liabilities. White
affirmed that approach.
I dealt earlier with the difficulty of applying forced sale values to particular items and said that there
would have to be very good reasons for doing so. I am not satisfied that there are any such reasons here; and,
indeed, no forced sale values have been provided. It seems to me that the place that this might have some
impact is on the time allowed for paying any lump sum ordered.
I must therefore, in this case, take the values that have been agreed, or that I have found, and work on
the basis that that value is realisable within a reasonable time.
As regards unnecessary diminution of the value of the assets by the wife, I have no doubt that she was
devastated by the revelations that her husband had been involved in an ongoing deception of her for many years
and that he had had a child by a woman some 6 years previously, which woman he was still seeing and which
child he was supporting. I have no doubt that she did spend as she saw fit. She changed the locks and the alarm
system. She bought a new car, which is included in the schedules, both the car and the loan; and she continued
in the lifestyle to which she had grown accustomed. I am equally sure that she felt drawn closer to her daughter
as a result of her husband’s revelations.
I am not prepared to find that she was unnecessarily extravagant or that she diminished the resources
available to any significant extent. I accept Mr Pointer’s arguments about the dates and the amounts of her bank
withdrawals. I also accept his argument that the pension fund is, in reality, no different from the Duxbury fund
in which she will have to invest.
As to the position of the second family, this is really a question of seeing what happens when the
arithmetic is done on different bases.
The proposed equation that Mr Hayward-Smith puts forward seems to me to fail for several reasons:
first, I do not accept his argument about the diminution of capital by which I should now notionally enhance
the wife’s
fund. Even if I did do that, then, as such capital as she has liquidated has been spent, she will not be
able to buy a sufficient fund to provide for her reasonable needs.
The other side of that equation would be that if the husband only had to raise £100,000 for a lump sum
payment, that would leave him with over £500,000 free funds, ie bank accounts and investments, which could
purchase an income of some £35,000 pa. That would need to be added to his present income of £80,000 from
pension fund and rental properties, and he would still have his boat, cars and money, in due course, to come
from H’s estate and Miss D. Even given his needs, that does not seem to me to be a fair outcome.
I must test it by comparing it to the outcome of an equal division. What happens if I go down the
equality route? It requires the payment of a lump sum to the wife of £400,000, in addition to the transfer of the
house and the joint investments. That would put £835,000 into the wife’s hands. A Duxbury calculation says that
that gives her over £50,000 pa; she only asks for £48,000. As Mr Pointer says, it is not for me, in some patriarchal
way, to say what she should and should not do with her money, to allow some of her items and to disallow
others. However, I am permitted to observe that she would, on her own figures, be living in relative luxury. By
that I mean that she will remain in the family house; she will be able to enjoy her garden, spending, as she plans
to do, over £7500 per year on it; also spending £7000 a year on clothes and £7000 a year on miscellaneous items
not included in her very detailed budget.
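The 'just over £400,000' needed for strict equality can be reconstructed from the values found earlier; the judgment states the figure without showing the working, so the following is a sketch of that arithmetic.

    # Lump sum required for strict equality (reconstructed, not quoted).
    total_assets = 2_732_055
    wife_now = 629_166             # wife's existing assets
    in_kind = 252_200 + 81_050     # husband's half-share of the house and of the
                                   # joint investments, transferred in kind
    lump_needed = total_assets / 2 - wife_now - in_kind
    print(round(lump_needed))      # 403612 -- the 'just over £400,000' mentioned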
If the husband was ordered to pay that sum, and chose to do so by liquidating his investments, then
they would produce just under £400,000. He would then have just under £100,000 in bank accounts, plus his
expectation from H’s estate, the money due from Miss D, his boat, car, SL shares and his properties. His income
from his pension fund and rental properties would be the £80,000 gross, which is £51,000 net, plus whatever
came from the balance of his assets as he chose to invest them.
He still has several choices open to him: his current home is subject to a mortgage which expires in
September 2004; he could pay that off as it was only a 5-year mortgage and is heavily weighted to capital
repayments. He has the next 15 years to decide when to convert his pension fund into an annuity. He can make
choices about his leisure activity. But is he not as entitled, if at all possible, to maintain those in the same way
that the wife will have her garden?
Undoubtedly things will be very much tighter for him than for the wife. His budget contains no
provision for education for B; and, apart from his expenditure on his boat, is, on any view, modest, for example,
£150 a week housekeeping for a family of four.
So it seems to me that equality may not be the answer here. Is there another course that would be fair? It
seems to me that I must consider the various subsections of s 25 and aim to provide as fair a distribution as
possible. I am driven to conclude, against the background of this case, that I should therefore aim to provide
them each with a comfortable house and with sufficient money to discharge their needs, obligations and
responsibilities. If, in addition to the transfer of the house and the joint assets, I order the husband to pay a
lump sum of £300,000, then that will provide the wife with a fund of £735,000 which should produce a net
income that will meet her stated needs. She will have an unencumbered house worth over £500,000. Her
life will, so far as I can judge, continue in the style that she says she enjoyed during the marriage.
The husband will have, immediately, somewhere in the region of £180,000 in liquid funds to top up the
income that comes from his pension fund and his rental properties. He will also have free capital of £275,000 in
property, including his home, and about £240,000 to come from the boat, the SL shares, H’s estate and Miss D’s
loan. His net income will be just about sufficient to meet his proposed budget.
I must then check my tentative conclusions against the yardstick of ‘equality’. My proposals would
share the joint pot as to £1,469,640 for the husband, and £1,262,416 for the wife. The question for me is whether
if the shoe pinches (to use a phrase of Ward LJ in Piglowska v Piglowski [1999] 2 FLR 763, [1999] 1 WLR 1360), and pinches more on one party than on the other as a result of an equal distribution, that is sufficient demonstration that there is good reason, in that case, to depart from equality. I believe that it is. I have looked
at my proposals to see if there is any discrimination in them. I do not believe that there is. It seems to me that if
I were to divide the assets equally, then I would be discriminating against the husband as a result of his
responsibility for his new family. I am satisfied that to deal with the matter in the way I have proposed will
result in as fair an outcome to both parties as is possible in this case.
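The arithmetic of that yardstick comparison can be set out in a few lines. The following Python sketch is an editorial illustration, not part of the report; it uses only the two shares quoted above and shows the size of the departure from strict equality.

    # A minimal check of the yardstick-of-equality comparison, using the
    # shares quoted in the judgment (all figures in pounds sterling).
    husband_share = 1_469_640   # husband's total under the proposed order
    wife_share = 1_262_416      # wife's total under the proposed order

    joint_pot = husband_share + wife_share
    equal_share = joint_pot / 2

    print(f"joint pot:      {joint_pot:,.0f}")
    print(f"equal division: {equal_share:,.0f} each")
    print(f"husband: {husband_share - equal_share:+,.0f} against equality")
    print(f"wife:    {wife_share - equal_share:+,.0f} against equality")

On the quoted figures the pot is £2,732,056, so the proposed order departs from an equal division by a little over £100,000 either way.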
I shall therefore order that the husband shall transfer to the wife his half-share in the matrimonial home,
Hunters, his half-share in the joint investments, and shall pay a lump sum of £300,000 to the wife.
Order accordingly.
Solicitors: Stokoe Partnership for the petitioner
Ellison & Co for the respondent
PATRICIA HARGROVE
Barrister
THE LEGAL CHARACTER OF INTERNATIONAL LAW


It has often been said that international law ought to be classified as a branch of ethics rather than of
law. The question is partly one of words, because its solution will clearly depend on the definition of law which
we choose to adopt; in any case it does not affect the value of the subject one way or the other, though those
who deny the legal character of international law often speak as though 'ethical' were a depreciatory epithet.
But in fact it is both practically inconvenient and also contrary to the best juristic thought to deny its legal
character. It is inconvenient because if international law is nothing but international morality, it is certainly not
the whole of international morality, and it is difficult to see how we are to distinguish it from those other
admittedly moral standards which we apply in forming our judgements on the conduct of states. Ordinary
usage certainly uses two tests in judging the 'rightness' of a state's act, a moral test and one which is somehow
felt to be independent of morality. Every state habitually commits acts of selfishness which are often gravely
injurious to other states, and yet are not contrary to international law; but we do not on that account necessarily
judge them to have been 'right'. It is confusing and pedantic to say that both these tests are moral. Moreover, it
is the pedantry of the theorist and not of the practical man; for questions of international law are invariably
treated as legal questions by the foreign offices which conduct our international business, and in the courts,
national or international, before which they are brought; legal forms and methods are used in diplomatic
controversies and in judicial and arbitral proceedings, and authorities and precedents are cited in argument as a
matter of course. It is significant too that when a breach of international law is alleged by one party to a
controversy, the act impugned is practically never defended by claiming the right of private judgement, which
would be the natural defence if the issue concerned the morality of the act, but always by attempting to prove
that no rule has been violated. This was true of the defences put forward even for such palpable breaches of
international law as the invasion of Belgium in 1914, or the bombardment of Corfu in 1923.
But if international law is not the same thing as international morality, and if in some important
respects at least it certainly resembles law, why should we hesitate to accept its definitely legal character? The
objection comes in the main from the followers of writers such as Hobbes and Austin, who regard nothing as
law which is not the will of a political superior. But this is a misleading and inadequate analysis even of the law
of a modern state; it cannot, for instance, unless we distort the facts so as to fit them into the definition, account
for the existence of the English Common Law. In any case, even if such an analysis gave an adequate
explanation of law in the modern state, it would require us to assume that that law is the only true law, and not
merely law at a particular stage of growth or one species of a wider genus. Such an assumption is historically
unsound. Most of the characteristics which differentiate international law from the law of the state and are
often thought to throw doubt on its legal character, such, for instance, as its basis in custom, the fact that the
submission of parties to the jurisdiction of courts is voluntary, the absence of regular processes either for
creating or enforcing it, are familiar features of early legal systems; and it is only in quite modern times, when
we have come to regard it as natural that the state should be constantly making new laws and enforcing
existing ones, that to identify law with the will of the state has become even a plausible theory. If, as Sir
Frederick Pollock writes, and as probably most competent jurists would today agree, the only essential
conditions for the existence of law are the existence of a political community, and the recognition by its
members of settled rules binding upon them in that capacity, international law seems on the whole to satisfy
these conditions.
(From The Law of Nations by J. L. Brierly)
THE LAW OF NEGLIGENCE


Have you ever wondered how, with the infinite complexity of life, the law can provide for every
situation by a rule laid down in advance? Take the driving of a motor car. How can you have a law that will
govern every turn of the steering wheel? The answer found in the law books is that cars must be driven, and
almost everything else must be done, with reasonable care and without negligence. This at least seems to state a
rule. But is it a real rule, or just an appearance of one? Unless one can give a content to the notion of
reasonableness, and to its opposite, negligence, the problem will be raised afresh: how can the law provide for
every eventuality?
In practice the negligence rule means that the judge or jury decides whether the defendant is to be
blamed or not, and this involves fixing upon something that he could and should have done (or omitted to do)
in order to avoid the mischief.
During the nineteenth century the judges attempted to clarify the law by inventing the notion of the
reasonable man. Instead of asking, as before, whether the defendant was guilty of imprudence, the judges
required the jury to consider whether the defendant had behaved like a reasonable man, or a reasonably careful
man, or a reasonably prudent man.
Having invented the reasonable man, the judges had to make an effort to describe him. According to
some, he was the ordinary reasonable man. This was not the same as saying that he was the ordinary man, but
came rather near it. Indeed, Lord Bowen, with his gift for a phrase, equated the reasonable man with 'the man
on the Clapham omnibus', and this has led to the widespread supposition that the standard of care required by
the common law is that of the average man. The supposition is certainly untrue.
'The ordinary reasonable man' of the judges' imagining is a meticulously careful person, so careful that
very few gentlemen come up to his standard. Every form of average is a measure of central tendency, but the
standard required by law is not a central human tendency of any kind.
Take one obvious point. A defendant in an action for negligence would not be allowed to put the
passengers of a Clapham omnibus into the witness-box to say that they would have done the same as he did.
The evidence would not be listened to. One reason for this is that if ordinary standards were conclusive, the
courts could not use their influence to improve these standards.
Again, ordinary people, even though normally they are circumspect in their behaviour, lapse into
carelessness now and then. As Elbert Hubbard expressed it, with humorous exaggeration, 'Every man is a
damn fool for at least five minutes every day. Wisdom consists in not exceeding the limit'. But the idealized
stereotype of the law, so far from being given five minutes' indulgence a day, is never allowed an off-moment.
The defendant may be a good driver with twenty-five years' clean record, yet he will be held liable in damages
for negligence if he injures someone during a momentary lapse.
That the judges set the standard by the nearly perfect man, rather than the average man, is lavishly
illustrated in the law reports. The master of a small motor vessel, when off Greenhithe in the Thames, fainted at
the wheel, as a result of eating bad tinned salmon, and a collision followed. Up to the moment of fainting the
master had felt quite well, and he was obviously not negligent in fainting. A pure accident, you might say. But
the judge held that the master was negligent because he had failed to foresee that he might lose consciousness,
and had omitted to provide against this by having some other person on deck who might be able to get to the
bridge in time to prevent an accident if fainting occurred. As a landlubber I am in no position to comment upon
this decision, but obviously, it could not be applied to land vehicles without absurdity. No one would suggest
that the Clapham or any other omnibus should carry a reserve driver, ready to seize the wheel in case the other
driver faints.
Where, then, is our elusive standard to be found? If the reasonable man is not to be discovered on the
Clapham omnibus, can he be identified with the judge or juryman who has to decide the issue? Technically, at
least, the answer is again in the negative. A judge must not tell the members of the jury to determine what they
would have done, because that is not the question. The individual jurors might not have acted as prudently as
they now think, on reflection, they ought to have acted in the situation. Jurors are expected to follow Hume's precept that, in considering the moral character of an act, we should adopt the role of an impartial spectator, asking what we should approve or disapprove, not what we ourselves would have felt or done.
Perhaps this is the key to the puzzle. The reasonable man is a phantom reflecting a certain ideal on the
part of the tribunal, whether judge or jury as the case may be. He is a personification of the court's social
judgment.
If this is the right conclusion, I think one is driven to admit that the device of trying to decide cases in
terms of the reasonable man is not such a good idea after all. Why not address oneself directly to the problem of
negligence? Values are important, but there is no point in personifying values, or attributing them to a fictitious
person. In speaking of what a reasonable man does or does not do, we appear to be stating a fact, whereas in
truth the question is how people ought to behave. It is merely misleading to use an expression that seems to
indicate the behaviour of real people, when we are evaluating behaviour by reference to an ideal standard.
(Glanville Williams from an article in The Listener, February 2nd, 1961)
THE DEATH PENALTY


I want to organize under five simple verbs my own reasons for thinking that the death penalty is a bad
thing. If we catch a man who has committed a murder, try him and convict him, we have to do something more
with him than punish him, because, although he must be punished, there are several other things that ought to
happen to him. I think that the whole theory of what ought to be done to a convicted murderer can be summed
up in the five verbs: prevent, reform, research, deter and avenge. Let me take these five things in turn and see
how the death penalty now looks as a means of achieving them.
The first is 'prevent'. By this I mean preventing the same man from doing it again, to check him in his
career - though, of course, nobody makes a career of being a murderer, except the insane, who are not at issue in
the question of the death penalty. I believe that I am right in saying that in the course of a century there is only
one doubtful case of a convicted murderer, after his release at the end of a normal life sentence, committing
another murder. I think that that means, statistically, that the released murderer is no more likely to murder
again than anybody else is. The question of long sentences comes in here. If the sane convicted murderer is not
to be hanged, should he be imprisoned, and should the length of the sentence he actually serves be determined in a way other than the usual one? I think this question can be answered only by looking at the statistics
of how likely a man is to do it again. In other words, how likely a prison sentence for a given number of years,
15, 20 or 30 years, is to prevent him from doing it again. There is a wealth of statistics available to us on that. I
do not think they suggest that the convicted murderer who is not hanged should have his prison sentence dealt
with in any way differently from that in which prison sentences are usually dealt with.
To turn to the second verb on my list, 'reform'. That is rather a nineteenth century word, and perhaps
we should now say 'rehabilitate', stressing more the helping of a man with his social functions rather than
adjusting his internal character; but that is a minor point. Whatever we may think about what can be achieved in our prison system by treatment of the reformatory and rehabilitatory kind - and it is open to criticism for lack of funds and so on - it is obvious that less can be achieved if you hang a man. The one man who is utterly unreformable is a corpse; and with hanging, reform is out of the question, because no form of reform or rehabilitation can be achieved by it.
The next word is 'research'. This is not part of the traditional idea of what to do with a convicted
murderer. It is rather a new notion that it may be an appropriate purpose in detaining a criminal and inflicting
punishment and other things upon him that research should be conducted into the criminal personality and the
causes of crime. At the moment we hang only the sanest criminals. We can get all the research we want into the
motives, characters and personality structures of those with diminished responsibility, the insane and those
under an age to be hanged. But the one we cannot research into is the man who is sane and who commits
capital murder in cold blood on purpose. It might be that if we were to keep this man alive and turn
psychiatrists and other qualified persons on to talking to him for twenty years during his prison sentence we
should find things that would enable us to take measures which would reduce the murder rate and save the
lives of the victims. But in hanging these men we cut ourselves off from this possible source of knowledge of
help to the victims of murder.
The fourth word, 'deter', is the crux of the whole thing. Abolitionists, as we all know, have held for
many years that evidence from abroad has for long been conclusive that the capital penalty is not a uniquely
effective deterrent against murder. Retentionists of the death penalty have been saying for years that we are not
like those abroad; we are a different country economically; our national temperament is different; and there is
this and that about us which is not so about those in Italy, Norway or certain States of the United States, New
Zealand, India, or wherever it may be. Now we have this remarkable pamphlet which in effect closes that gap
in the abolitionists' argument. It shows with moral certitude that we are exactly like those abroad, and that in
this country the death penalty is not a uniquely effective deterrent against murder.
The last on the list of my five verbs is 'avenge'. Here the death penalty is uniquely effective. If a man has
taken life, the most effective, obvious and satisfying form of vengeance is to take his life. I have no argument
against that. I think it is true that if one accepts vengeance as a purpose proper for the State in its handling of
convicted criminals, then the death penalty should stay for convicted murderers. For myself - and it is only a personal matter - I utterly reject the idea that vengeance is a proper motive for the State in dealing with
convicted criminals; and I hope that, from the date of the publication of this pamphlet onwards, those who wish
to retain the death penalty will admit that its only merit is precisely that of vengeance.
(Lord Kennet from a Speech in the House of Lords, November 9th, 1961)
MATHEMATICS
On Different Degrees of Smallness
We shall find that in our processes of calculation we have to deal with small quantities of various
degrees of smallness. We shall have also to learn under what circumstances we may consider small quantities
to be so minute that we may omit them from consideration. Everything depends upon relative minuteness.
Before we fix any rules let us think of some familiar cases. There are 60 minutes in the hour, 24 hours in the day,
7 days in the week. There are therefore 1,440 minutes in the day and 10,080 minutes in the week.
Obviously 1 minute is a very small quantity of time compared with a whole week. Indeed, our
forefathers considered it small as compared with an hour, and called it 'one minute', meaning a minute fraction
- namely, one-sixtieth - of an hour. When they came to require still smaller subdivisions of time, they divided
each minute into 60 still smaller parts, which in Queen Elizabeth's days, they called 'second minutes' (i.e. small
quantities of the second order of minuteness). Nowadays we call these small quantities of the second order of
smallness 'seconds'. But few people know why they are so called. Now if one minute is so small as compared
with a whole day, how much smaller by comparison is one second!
Again, think of a farthing as compared with a sovereign; it is worth only a little more than 1/1,000 part.
A farthing more or less is of precious little importance compared with a sovereign; it may certainly be regarded
as a small quantity. But compare a farthing with £1,000: relatively to this greater sum, the farthing is of no more
importance than 1/1,000 of a farthing would be to a sovereign. Even a golden sovereign is relatively
a negligible quantity in the wealth of a millionaire.
Now if we fix upon any numerical fraction as constituting the proportion which for any purpose we call
relatively small, we can easily state other fractions of a higher degree of smallness.
Thus if, for the purpose of time, 1/60 be called a small fraction, then 1/60 of 1/60 (being a small fraction
of a small fraction) may be regarded as a small quantity of the second order of smallness. Or, if for any purpose
we were to take 1 per cent (i.e. 1/100) as a small fraction, then 1 per cent of 1 per cent (i.e. 1/10,000) would be a
small fraction of the second order of smallness; and 1/1,000,000 would be a small fraction of the third order of
smallness, being 1 per cent of 1 per cent of 1 per cent.
Lastly, suppose that for some very precise purpose we should regard 1/1,000,000 as 'small'. Thus, if a
first-rate chronometer is not to lose or gain more than half a minute in a year, it must keep time with an
accuracy of 1 part in 1,051,200. Now if, for such a purpose, we regard 1/1,000,000 (or one millionth) as a small
quantity, then 1/1,000,000 of 1/1,000,000, that is 1/1,000,000,000,000 (or one billionth) will be a small quantity
of the second order of smallness, and may be utterly disregarded, by comparison.
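The arithmetic here is easy to tabulate. The following Python sketch is an illustration, not part of Thompson's text; it prints the first, second and third orders of smallness for the fractions used above, and reproduces the chronometer figure of 1 part in 1,051,200.

    # Orders of smallness: successive powers of whatever fraction we
    # agree, for the purpose in hand, to call "small".
    for small in (1 / 60, 1 / 100, 1 / 1_000_000):
        print(f"first order {small:.2e}, second order {small ** 2:.2e}, "
              f"third order {small ** 3:.2e}")

    # The chronometer: half a minute's error in a year of 525,600 minutes.
    minutes_per_year = 60 * 24 * 365
    print(f"accuracy required: 1 part in {minutes_per_year * 2:,}")  # 1,051,200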
Then we see that the smaller a small quantity itself is, the more negligible does the corresponding small
quantity of the second order become. Hence we know that in all cases we are justified in neglecting the small
quantities of the second - or third (or higher) - orders, if only we take the small quantity of the first order small
enough in itself. But it must be remembered that small quantities if they occur in our expressions as factors
multiplied by some other factor, may become important if the other factor is itself large. Even a farthing
becomes important if only it is multiplied by a few hundred.
Now in the calculus we write dx for a little bit of x. These things such as dx and du, and dy, are called
'differentials', the differential of x, or of u, or of y, as the case may be. (You read them as dee-eks, or dee-you, or
dee-wy). If dx be a small bit of x, and relatively small, it does not follow that such quantities as x.dx or a2.dx or
ax.dx are negligible. But dx times dx would be negligible, being a small quantity of the second order.
A very simple example will serve as illustration. Let us think of x as a quantity that can grow by a small
amount so as to become x + dx, where dx is the small increment added by growth. The square of this is x2 +
2x.dx + (dx)2. The second term is not negligible because it is a first-order quantity; while the third term is of the
second order of smallness, being a bit of a bit of x. Thus if we took dx to mean numerically, say 1/60 of x, then
the second term would be 2/60 of x2, whereas the third term would be 1/3,600 of x2. This last term is clearly less
important than the second. But if we go further and take dx to mean only 1/1000 of x, then the second term will
be 2/1,000 of x2, while the third term will be only 1/1,000,000 Of x2.
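The comparison can be verified directly. The short Python sketch below, again an illustration and not part of the original, evaluates the first-order term 2x·dx and the second-order term (dx)² for dx equal to 1/60 and to 1/1,000 of x.

    # Compare the first-order term 2x·dx with the second-order term (dx)²
    # in the expansion (x + dx)² = x² + 2x·dx + (dx)².
    x = 1.0
    for denominator in (60, 1000):
        dx = x / denominator
        print(f"dx = x/{denominator}: 2x.dx = {2 * x * dx:.6f} of x², "
              f"(dx)² = {dx ** 2:.9f} of x²")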
Geometrically this may be depicted as follows: draw a square the side of which we will take to represent x. Now suppose the square to grow by having a bit dx added to its size each way. The enlarged square is made up of the original square x², the two rectangles at the top and on the right, each of which is of area x·dx (or together 2x·dx), and the little square at the top right-hand corner which is (dx)². We have taken dx as quite a big fraction of x - about 1/5. But suppose we had taken it as only 1/100 - about the thickness of an inked line drawn with a fine pen. Then the little corner square will have an area of only 1/10,000 of x², and be practically invisible. Clearly (dx)² is negligible if only we consider the increment dx to be itself small enough.
(From Calculus Made Easy by Silvanus P. Thompson.)
Chance or Probability
The aim of science is to describe the world in orderly language, in such a way that we can, if possible,
foresee the results of those alternative courses of action between which we are always choosing. The kind of
order which our description has is entirely one of convenience. Our purpose is always to predict. Of course, it is
most convenient if we can find an order by cause and effect; it makes our choice simple; but it is not essential.
There is of course nothing sacred about the causal form of natural laws. We are accustomed to this
form, until it has become our standard of what every natural law ought to look like. If you halve the space
which a gas fills, and keep other things constant, then you will double the pressure, we say. If you do such and
such, the result will be so and so; and it will always be so and so. And we feel by long habit that it is this
'always' which turns the prediction into a law. But of course there is no reason why laws should have this
always, all-or-nothing form. If you self-cross the offspring of a pure white and a pure pink sweet pea, said
Mendel, then on an average one-quarter of these grandchildren will be white, and three-quarters will be pink.
This is as good a law as any other; it says what will happen, in good quantitative terms, and what it says turns
out to be true. It is not any less respectable for not making that parade of every time certainty which the law of
gases makes.
It is important to seize this point. If I say that after a fine week, it always rains on Sunday, then this is
recognized and respected as law. But if I say that after a fine week, it rains on Sunday more often than not, then
this somehow is felt to be an unsatisfactory statement; and it is taken for granted that I have not really got down
to some underlying law which would chime with our habit of wanting science to say decisively either 'always'
or 'never'. Somehow it seems to lack the force of law.
Yet this is a mere prejudice. It is nice to have laws which say, 'This configuration of facts will always be
followed by event A, ten times out of ten.' But neither taste nor convenience really make this a more essential
form of law than one which says, 'This configuration of facts will be followed by event A seven times out of ten,
and by event B three times out of ten.' In form the first is a causal law and the second a statistical law. But in
content and in application, there is no reason to prefer one to the other.
There is, however, a limitation within every law which does not contain the word 'always'. Bluntly,
when I say that a configuration of facts will be followed sometimes by event A and at other times by B, I cannot
be certain whether at the next trial A or B will turn up. I may know that A is to turn up seven times and B three
times out of ten; but that brings me no nearer at all to knowing which is to turn up on the one occasion I have
my eye on next time. Mendel's law is all very fine when you grow sweet peas by the acre; but it does not tell
you, and cannot, whether the single second generation seed in your window box will flower white or pink.
But this limitation carries with it a less obvious one. If we are not sure whether A or B will turn up next
time, then neither can we be sure which will turn up the time after, or the time after that. We know that A is to
turn up seven times and B three; but this can never mean that every set of ten trials will give us exactly seven
As and three Bs.
Then what do I mean by saying that we expect A to turn up seven times to every three times which B
turns up? I mean that among all the sets of ten trials which we can choose from an extended series, picking as
we like, the greatest number will contain seven As and three Bs. This is the same thing as saying that if we have
enough trials, the proportion of As to Bs will tend to the ratio of seven to three. But of course, no run of trials,
however extended, is necessarily long enough. In no run of trials can we be sure of reaching precisely the
balance of seven to three.
Then how do I know that the law is in fact seven As and three Bs? What do I mean by saying that the
ratio tends to this in a long trial, when I never know if the trial is long enough? And more, when I know that at
the very moment when we have reached precisely this ratio, the next single trial must upset it because it must
add either a whole A or a whole B, and cannot add seven-tenths of one and three-tenths of the other. I mean
this. After ten trials, we may have eight As and only two Bs; it is not at all improbable. But it is very improbable
that, after a hundred trials, we shall have as many as eighty As. It is excessively improbable that after a
thousand trials we shall have as many as eight hundred As; indeed it is highly improbable that at this stage the
ratio of As and Bs departs from seven to three by as much as five per cent. And if after a hundred thousand
trials we should get a ratio which differs from our law by as much as one per cent, then we should have to face
the fact that the law itself is almost certainly in error.
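The way the proportion settles down is easy to see by experiment. The following Python simulation is an illustration added to the passage, with the seven-to-three law of the text built in as the true probability; it prints the observed proportion of As after runs of increasing length.

    import random

    # Event A is to turn up seven times out of ten, B three times out of
    # ten. Watch the observed proportion of As tighten around 0.7.
    random.seed(42)
    for trials in (10, 100, 1_000, 100_000):
        a_count = sum(1 for _ in range(trials) if random.random() < 0.7)
        print(f"{trials:>7} trials: proportion of As = {a_count / trials:.4f}")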
Let me quote a practical example. The great naturalist Buffon was a man of wide interests. His interest
in the laws of chance prompted him to ask an interesting question. If a needle is thrown at random on a sheet of
paper ruled with lines whose distance apart is exactly equal to the length of the needle, how often can it be
expected to fall on a line and how often into a blank space? The answer is rather odd: it should fall on a line a
little less than two times out of three - precisely, it should fall on a line two times out of pi where pi is the
familiar ratio of the circumference of a circle to its diameter, which has the value 3.14159265.... How near can
we get to this answer in actual trials? This depends of course on the care with which we rule the lines and do
the throwing; but, after that, it depends only on our patience. In 1901 an Italian mathematician, having taken
due care, demonstrated his patience by making well over 3,000 throws. The value he got for pi was right to the
sixth place of decimals, which is an error of only a hundred thousandth part of one per cent.
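Buffon's experiment is easily repeated by simulation rather than by patience. The Python sketch below is an illustration, not part of Bronowski's text; it throws random needles of length equal to the line spacing and recovers an estimate of pi from the proportion that cross a line, which should approach 2/pi.

    import math
    import random

    # Buffon's needle: ruled lines one needle-length apart. A needle
    # crosses a line with probability 2/pi, so pi ~ 2 * throws / crossings.
    random.seed(1)
    throws = 200_000
    crossings = 0
    for _ in range(throws):
        centre = random.uniform(0.0, 0.5)         # distance of the needle's centre from the nearest line
        angle = random.uniform(0.0, math.pi / 2)  # acute angle between needle and lines
        if centre <= 0.5 * math.sin(angle):       # the half-needle's projection reaches the line
            crossings += 1

    print(f"crossing rate {crossings / throws:.4f} (2/pi = {2 / math.pi:.4f})")
    print(f"estimate of pi = {2 * throws / crossings:.5f}")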
This is the method to which modern science is moving. It uses no principle but that of forecasting with
as much assurance as possible, but with no more than is possible. That is, it idealizes the future from the outset,
not as completely determined, but as determined within a defined area of uncertainty.
(From The Common Sense of Science by J. Bronowski.)
PHILOSOPHY
Definition and Some of its Difficulties
If our thought is to be clear and we are to succeed in communicating it to other people, we must have
some method of fixing the meanings of the words we use. When we use a word whose meaning is not certain,
we may well be asked to define it. There is a useful traditional device for doing this by indicating the class to
which whatever is indicated by the term belongs, and also the particular property which distinguishes it from
all other members of the same class. Thus we may define a whale as a 'marine animal that spouts'. 'Marine
animal' in this definition indicates the general class to which the whale belongs, and 'spouts' indicates the
particular property that distinguishes whales from other such marine animals as fishes, seals, jelly-fish, lobsters,
etc. In the same way we can define an even number as a finite integer divisible by two, or a democracy as a
system of government in which the people themselves rule.
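The traditional device lends itself to a small formal illustration. In the Python sketch below, which is not part of Thouless's text, each definition is rendered as a predicate: the general class and the distinguishing property are combined by conjunction.

    # Definition by general class and distinguishing property, rendered
    # as predicates: the defined term is the conjunction of the two.
    def is_even(n):
        # class: finite integer; distinguishing property: divisible by two
        return isinstance(n, int) and n % 2 == 0

    def is_whale(is_marine_animal, spouts):
        # class: marine animal; distinguishing property: spouts
        return is_marine_animal and spouts

    print(is_even(10), is_even(7))   # True False
    print(is_whale(True, True))      # True
    print(is_whale(True, False))     # False: a marine animal that does not spout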
There are other ways, of course, of indicating the meanings of words. We may, for example, find it hard
to make a suitable definition of the word 'animal', so we say that an animal is such a thing as a rabbit, dog, fish,
etc. Similarly we may say that religion is such a system as Christianity, Islam, Judaism, Christian Science, etc.
This way of indicating the meaning of a term by enumerating examples of what it includes is obviously of
limited usefulness. If we indicated our use of the word 'animal' as above, our hearers might, for example, be
doubtful whether a sea-anemone or a slug was to be included in the class of animals. It is, however, a useful
way of supplementing a definition if the definition itself is definite without being easily understandable. If, for
example, we explain what we mean by religion by saying: 'A religion is a system of beliefs and practices
connected with a spiritual world, such as Christianity, Islam, Judaism, Christian Science, and so on', we may
succeed in making our meaning more clear than it would be if we had been given the definition alone.
Failure of an attempt at definition to serve its purpose may result from giving as distinguishing mark
one which either does not belong to all the things the definition is intended to include, or does belong to some
members of the same general class which the definition is intended to exclude. Thinking, for example, of the
most obvious difference between a rabbit and a cabbage, we might be tempted to define an animal as a living
organism which is able to move about. This would combine both faults mentioned above, since some animals
(e.g. some shell-fish such as the oyster) are not able to move about for the whole or part of their lives, while
some vegetables (such as the fresh-water alga Volvox) do swim about. Of course, anyone who used the above
definition might claim to be defining 'animal' in a new and original way to include Volvox and exclude oysters,
but he would have failed to produce a definition which defined the ordinary use of the word 'animal'.
More commonly an attempt at definition fails by not indicating correctly the general class to which the
thing defined belongs. One meets, for example, in psychological writings such definitions as: 'Intelligence is a
state of mind characterized by the ability to learn and solve problems.' The second part of the definition is all
right, but the word 'intelligence' is not used for a state of mind; and the person who defines 'intelligence' like
this does not in his actual use of the word make it stand for a state of mind. Such conditions as despair,
concentration, alertness, and hope can be called 'states of mind'. 'Intelligence' is used for a quality of mind, not
for a state. If the word 'quality' replaced the word 'state' in the above definition it would indicate very well the
current use of the word 'intelligence'.
(From Straight and Crooked Thinking, by R. H. Thouless.)
The Subject Matter of Philosophy


It seems clear that subjects or fields of study are determined by the kind of questions to which they have
been invented to provide the answer. The questions themselves are intelligible if, and only if, we know where
to look for the answers.
If you ask someone an ordinary question, say, 'Where is my coat?', 'Why was Mr Kennedy elected
President of the United States?', 'What is the Soviet system of criminal law?' he would normally know how to
set about finding an answer. We may not know the answers ourselves, but we know that in the case of the
question about the coat, the proper procedure is to look on the chair, in the cupboard, etc. In the case of Mr
Kennedy's election or the Soviet system of law we consult writings of specialists for the kind of empirical
evidence which leads to the relevant conclusions, and renders them, if not certain, at any rate probable.
In other words, we know where to look for the answer: we know what makes some answers plausible
and others not. What makes this type of question intelligible in the first place is that we think that the answer
can be discovered by empirical means, that is, by orderly observation or experiment or methods compounded
of these, namely those of common sense or the natural sciences. There is another class of questions where we
are clear about the proper route by which the answers are to be sought, namely the formal disciplines:
mathematics, for example, or logic, or grammar, or chess, where there are certain fixed axioms, certain accepted
rules of deduction, and the answer to problems is to be found by applying these rules in the manner prescribed
as correct.
The hallmark of these provinces of human thought is that once the question is put we know which way
to proceed to try to obtain the answer. The history of systematic human thought is largely a sustained effort to
formulate all the questions that occur to mankind in such a way that they will fall into one or other of these two
great baskets: the empirical, that is, questions whose answers depend, in the end, on the data of observation;
and the formal, that is, questions whose answers depend on pure calculation, untrammelled by factual
knowledge.
But there are certain questions that do not easily fit into this classification. 'What is an okapi?' is
answered easily enough by an act of empirical observation. Similarly 'What is the cube root of 729?' is settled by
a piece of calculation in accordance with accepted rules. But if I ask 'What is a number?', 'What is the purpose of
human life on earth?', 'Are you sure that all men are brothers?', how do I set about looking for the answer?
There seems to be something queer about all these questions as wide apart as those about number, or
the brotherhood of man, or purposes of life; they differ from the questions in the other baskets in that the
question itself does not seem to contain a pointer to the way in which the answer to it is to be found. The other,
more ordinary, questions contain precisely such pointers - built-in techniques for finding the answers to them.
The questions about number and so on reduce the questioner to perplexity, and annoy practical people
precisely because they do not seem to lead to clear answers or useful knowledge of any kind.
This shows that between the two original baskets, the empirical and the formal, there is an intermediate
basket, in which all those questions live which cannot easily be fitted into the other two. These questions are of
the most diverse nature; some appear to be questions of fact, others of value; some are questions about words,
others are about methods pursued by scientists, artists, critics, common men in the ordinary affairs of life; still
others are about the relations between the various provinces of knowledge; some deal with the presuppositions
of thinking, some with the correct ends of moral or social or political action.
The only common characteristic which all these questions appear to have is that they cannot be
answered either by observation or calculation, either by inductive methods or deductive; and as a crucial
corollary of this, that those who ask them are faced with a perplexity from the very beginning - they do not
know where to look for the answer; there are no dictionaries, encyclopaedias, compendia of knowledge, no
experts, no orthodoxies, which can be referred to with confidence as possessing unquestionable authority or
knowledge in these matters. Such questions tend to be called philosophical.
(From an article by Sir Isaiah Berlin in The Sunday Times, 14th November, 1962.)
What can we Communicate?


The obvious answer to the question how we know about the experiences of others is that they are
communicated to us, either through their natural manifestations in the form of gestures, tears, laughter, play of
feature and so forth, or by the use of language. A very good way to find out what another person is thinking or
feeling is to ask him. He may not answer, or if he does answer he may not answer truly, but very often he will.
The fact that the information which people give about themselves can be deceptive does not entail that it is
never to be trusted. We do not depend on it alone; it may be, indeed, that the inferences which we draw from
people's non-verbal behaviour are more secure than those that we base upon what they say about themselves,
that actions speak more honestly than words. But were it not that we can rely a great deal upon words, we
should know very much less about each other than we do.
At this point, however, a difficulty arises. If I am to acquire information in this way about another
person's experiences, I must understand what he says about them. And this would seem to imply that I attach
the same meaning to his words as he does. But how, it may be asked, can I ever be sure that this is so? He tells
me that he is in pain, but may it not be that what he understands by pain is something quite different from
anything that I should call by that name? He tells me that something looks red to him, but how do I know that
what he calls 'red' is not what I should call 'blue', or that it is not a colour unlike any that I have ever seen, or
that it does not differ from anything that I should even take to be a colour? All these things would seem to be
possible. Yet how are such questions ever to be decided?
In face of this difficulty, some philosophers have maintained that experiences as such are
uncommunicable. They have held that in so far as one uses words to refer to the content of one's experiences,
they can be intelligible only to oneself. No one else can understand them, because no one else can get into one's
mind to verify the statements which they express. What can be communicated, on this view, is structures. I
have no means of knowing that other people have sensations or feelings which are in any way like my own. I
cannot tell even that they mean the same by the words which they use to refer to physical objects, since the
perceptions which they take as establishing the existence of these objects may be utterly different from any that
I have ever had myself. If I could get into my neighbour's head to see what it is that he refers to as a table, I
might fail to recognize it altogether, just as I might fail to recognize anything that he is disposed to call a colour
or a pain. On the other hand, however different the content of his experience may be from mine, I do know that
its structure is the same. The proof that it is the same is that his use of words corresponds with mine, in so far as
he applies them in a corresponding way. However different the table that he perceives may be from the table
that I perceive, he agrees with me in saying of certain things that they are tables and of others that they are not.
No matter what he actually sees when he refers to colour, his classification of things according to their colour is
the same as mine. Even if his conception of pain is quite different from my own, his behaviour when he is in
pain is such as I consider to be appropriate. Thus the possible differences of content can, and indeed must be
disregarded. What we can establish is that experiences are similarly ordered. It is this similarity of structure
that provides us with our common world: and it is only descriptions of this common world, that is, descriptions
of structure, that we are able to communicate.
On this view, the language which different people seem to share consists, as it were, of flesh and bones.
The bones represent its public aspect; they serve alike for all. But each of us puts flesh upon them in accordance
with the character of his experience. Whether one person's way of clothing the skeleton is or is not the same as
another's is an unanswerable question. The only thing that we can be satisfied about is the identity of the bones.
(From The Problem of Knowledge, by A. J. Ayer.)
Ethics
Ethics is traditionally a department of philosophy, and that is my reason for discussing it. I hardly think
myself that it ought to be included in the domain of philosophy, but to prove this would take as long as to
discuss the subject itself, and would be less interesting.
As a provisional definition, we may take ethics to consist of general principles which help to determine
rules of conduct. It is not the business of ethics to say how a person should act in such and such specific
circumstances; that is the province of casuistry. The word 'casuistry' has acquired bad connotations, as a result
of the Protestant and Jansenist attacks on the Jesuits. But in its old and proper sense it represents a perfectly
legitimate study. Take, say, the question: In what circumstances is it right to tell a lie? Some people,
unthinkingly, would say: Never! But this answer cannot be seriously defended. Everybody admits that you
should lie if you meet a homicidal maniac pursuing a man with a view to murdering him, and he asks you
whether the man has passed your way. It is admitted that lying is a legitimate branch of the art of warfare; also
that priests may lie to guard the secrets of the confessional, and doctors to protect the professional confidences
of their patients. All such questions belong to casuistry in the old sense, and it is evident that they are questions
deserving to be asked and answered. But they do not belong to ethics in the sense in which this study has been
included in philosophy.
It is not the business of ethics to arrive at actual rules of conduct, such as: 'Thou shalt not steal.' This is
the province of morals. Ethics is expected to provide a basis from which such rules can be deduced. The rules of
morals differ according to the age, the race, and the creed of the community concerned, to an extent that is
hardly realized by those who have neither travelled nor studied anthropology. Even within a homogeneous
community differences of opinion arise. Should a man kill his wife's lover? The Church says no, the law says
no, and common sense says no; yet many people would say yes, and juries often refuse to condemn. These
doubtful cases arise when a moral rule is in process of changing. But ethics is concerned with something more
general than moral rules, and less subject to change. It is true that, in a given community, an ethic which does
not lead to the moral rules accepted by that community is considered immoral. It does not, of course, follow
that such an ethic is in fact false since the moral rules of that community may be undesirable. Some tribes of
head-hunters hold that no man should marry until he can bring to the wedding the head of an enemy slain by
himself. Those who question this moral rule are held to be encouraging licence and lowering the standard of
manliness. Nevertheless, we should not demand of an ethic that it should justify the moral rules of head-hunters.
Perhaps the best way to approach the subject of ethics is to ask what is meant when a person says: 'You
ought to do so-and-so' or 'I ought to do so-and-so.' Primarily, a sentence of this sort has an emotional content; it
means 'this is the act towards which I feel the emotion of approval'. But we do not wish to leave the matter
there; we want to find something more objective and systematic and constant. The ethical teacher says: 'You
ought to approve acts of such-and-such kinds.' He generally gives reasons for this view, and we have to
examine what sorts of reasons are possible. We are here on very ancient ground. Socrates was concerned
mainly with ethics; Plato and Aristotle both discussed the subject at length; before their time, Confucius and
Buddha had each founded a religion consisting almost entirely of ethical teaching, though in the case of
Buddhism there was afterwards a growth of theological doctrine. The views of the ancients on ethics are better
worth studying than their views on (say) physical science; the subject has not yet proved amenable to exact
reasoning, and we cannot boast that the moderns have as yet rendered their predecessors obsolete.
(From An Outline of Philosophy, by Bertrand Russell.)

Aristotle's Ethics
As to goodness of character in general, Aristotle says that we start by having a capacity for it, but that it
has to be developed by practice. How is it developed? By doing virtuous acts. At first sight this looks like a
vicious circle. Aristotle tells us that we become virtuous by doing virtuous acts, but how can we do virtuous
acts unless we are already virtuous? Aristotle answers that we begin by doing acts which are objectively
virtuous, without having a reflex knowledge of the acts and a deliberate choice of the acts as good, a choice
resulting from an habitual disposition. For instance, a child may be told by its parents not to lie. It obeys
without realizing perhaps the inherent goodness of telling the truth, and without having yet formed a habit of
telling the truth; but the acts of truth-telling gradually form the habit, and as the process of education goes on,
the child comes to realize that truth-telling is right in itself, and to choose to tell the truth for its own sake, as
being the right thing to do. It is then virtuous in this respect. The accusation of the vicious circle is thus
answered by the distinction between the acts which create the good disposition and the acts which flow from the
good disposition once it has been created. Virtue itself is a disposition which has been developed out of a
capacity by the proper exercise of that capacity. (Further difficulties might arise, of course, concerning the
relation between the development of moral valuations and the influence of social environment, suggestion of
parents and teachers, etc., but with these Aristotle does not deal.)
How does virtue stand to vice? It is a common characteristic of all good actions that they have a certain
order and proportion, and virtue, in Aristotle's eyes, is a mean between two extremes, the extremes being vices,
one being a vice through excess, the other being a vice through defect. Through excess or defect of what? Either
in regard to a feeling or in regard to an action. Thus, in regard to the feeling of confidence, the excess of this
feeling constitutes rashness - at least when the feeling issues in action, and it is with human actions that ethics
are concerned - while the defect is cowardice. The mean, then, will be a mean between rashness on the one
hand and cowardice on the other hand: this mean is courage and is the virtue in respect to the feeling of
confidence. Again, if we take the action of giving of money, excess in regard to this action is prodigality - and
this is a vice - while defect in regard to this action is illiberality. The virtue, liberality, is the mean between the
two vices, that of excess and that of defect. Aristotle, therefore, describes or defines moral virtue as 'a
disposition to choose, consisting essentially in a mean relatively to us determined by a rule, i.e. the rule by
which a practically wise man would determine it.' Virtue, then, is a disposition, a disposition to choose
according to a rule, namely, the rule by which a truly virtuous man possessed of moral insight would choose.
Aristotle regarded the possession of practical wisdom, the ability to see what is the right thing to do in the
circumstances, as essential to the truly virtuous man, and he attaches much more value to the moral judgments
of the enlightened conscience than to any a priori and merely theoretical conclusions. This may seem somewhat
naive, but it must be remembered that for Aristotle the prudent man will be the man who sees what is truly
good for a man in any set of circumstances: he is not required to enter upon any academic preserve, but to see
what truly befits human nature in those circumstances.
When Aristotle speaks of virtue as a mean, he is not thinking of a mean that has to be calculated
arithmetically: that is why he says in his definition 'relatively to us'. We cannot determine what is excess, what
mean and what defect by hard-and-fast, mathematical rules: so much depends on the character of the feeling or
action in question: in some cases it may be preferable to err on the side of excess rather than on that of defect,
while in other cases the reverse may be true. Nor, of course, should the Aristotelian doctrine of the mean be
taken as equivalent to an exaltation of mediocrity in the moral life, for as far as excellence is concerned virtue is
an extreme: it is in respect of its essence and its definition that it is a mean.
(From A History of Philosophy, by Frederick Copleston.)
The Road to Happiness


It is a commonplace among moralists that you cannot get happiness by pursuing it. This is only true if
you pursue it unwisely. Gamblers at Monte Carlo are pursuing money, and most of them lose it instead, but
there are other ways of pursuing money which often succeed. So it is with happiness. If you pursue it by means
of drink, you are forgetting the hang-over. Epicurus pursued it by living only in congenial society and eating
only dry bread, supplemented by a little cheese on feast days. His method proved successful in his case, but he
was a valetudinarian, and most people would need something more vigorous. For most people, the pursuit of
happiness, unless supplemented in various ways, is too abstract and theoretical to be adequate as a personal
rule of life. But I think that whatever personal rule of life you may choose it should not, except in rare and
heroic cases, be incompatible with happiness.
There are a great many people who have all the material conditions of happiness, i.e. health and a
sufficient income, and who, nevertheless, are profoundly unhappy. In such cases it would seem as if the fault
must lie with a wrong theory as to how to live. In one sense, we may say that any theory as to how to live is
wrong. We imagine ourselves more different from the animals than we are. Animals live on impulse, and are
happy as long as external conditions are favourable. If you have a cat it will enjoy life if it has food and warmth
and opportunities for an occasional night on the tiles. Your needs are more complex than those of your cat, but
they still have their basis in instinct. In civilized societies, especially in English-speaking societies, this is too apt
to be forgotten. People propose to themselves some one paramount objective, and restrain all impulses that do
not minister to it. A businessman may be so anxious to grow rich that to this end he sacrifices health and
private affections. When at last he has become rich, no pleasure remains to him except harrying other people by
exhortations to imitate his noble example. Many rich ladies, although nature has not endowed them with any
spontaneous pleasure in literature or art, decide to be thought cultured, and spend boring hours learning the
right thing to say about fashionable new books that are written to give delight, not to afford opportunities for
dusty snobbism.
If you look around you at the men and women whom you can call happy, you will see that they all have
certain things in common. The most important of these things is an activity which at most times is enjoyable on
its own account, and which, in addition, gradually builds up something that you are glad to see coming into
existence. Women who take an instinctive pleasure in their children can get this kind of satisfaction out of
bringing up a family. Artists and authors and men of science get happiness in this way if their own work seems
good to them. But there are many humbler forms of the same kind of pleasure. Many men who spend their
working life in the City devote their week-ends to voluntary and unremunerated toil in their gardens, and
when the spring comes they experience all the joys of having created beauty.
The whole subject of happiness has, in my opinion, been treated too solemnly. It had been thought that
man cannot be happy without a theory of life or a religion. Perhaps those who have been rendered unhappy by
a bad theory may need a better theory to help them to recovery, just as you may need a tonic when you have
been ill. But when things are normal a man should be healthy without a tonic and happy without a theory. It is
the simple things that really matter. If a man delights in his wife and children, has success in work, and finds
pleasure in the alternation of day and night, spring and autumn, he will be happy whatever his philosophy
may be. If, on the other hand, he finds his wife hateful, his children's noise unendurable, and the office a
nightmare; if in the daytime he longs for night, and at night sighs for the light of day, then what he needs is not
a new philosophy but a new regimen - a different diet, or more exercise, or what not.
Man is an animal, and his happiness depends on his physiology more than he likes to think. This is a
humble conclusion, but I cannot make myself disbelieve it. Unhappy businessmen, I am convinced, would
increase their happiness more by walking six miles every day than by any conceivable change of philosophy.
This, incidentally, was the opinion of Jefferson, who on this ground deplored the horse. Language would have
failed him if he could have foreseen the motor-car.
(From an article in The Listener, by Bertrand Russell.)
Logic
Discourse is connected thought, expressed in words. It moves this way and that, like the shuttle in the
loom (as Plato said) weaving the fabric of reasoned argument. In discourse with others opinion is formed,
knowledge is acquired, and truth attained. What is said by one speaker, combined with what is said by another
speaker, may yield a truth, not previously known to either speaker. In that discourse with oneself which is
called reflection or meditation, a truth learned today links up with a truth learned yesterday, and the two truths
may point the way to some advance or even discovery of tomorrow. From what others have said or from what
we ourselves have thought, conclusions and inferences are drawn and they are the special concern of Logic. It is
all too easy to draw wrong conclusions and false inferences; and discourse without the discipline of Logic is a
fruitful source of false opinion, ignorance and error.
Logic trains the mind to draw the right conclusion, and to avoid the wrong, to make the true inference
and not the false. It has formulated rules of inference to govern and guide debate and to promote discovery.
Logic has to deal as well with other important elements of discourse, but its main province has always been,
and still is, inference.
Idle talk and trivial conversation do not rank as discourse for our purpose. Logic has little to do with the
frivolous; its business is with the serious statement which admits truth or falsity. Logic promotes truth; yet we
can go far in Logic without knowing or caring much whether a particular statement is true or false, in the
ordinary acceptation of those words. By true in ordinary speech we mean true to fact, and by false we mean the
opposite. Now a statement, true to fact, may in its context infringe a rule of Logic; and a statement, false in fact,
may in its context conform to the rules of Logic. The logician, as such, is not directly concerned with fact, but is
much more concerned with the observance of the rules of Logic, and therefore he uses a pair of technical terms,
valid and invalid, to express, respectively, what conforms to the rules of logic and what does not conform
thereto. By the aid of these terms he can set out the rules of reasoning without committing himself as to
whether a particular statement is true to fact, or not. Valid comes from the Latin, validus, strong. A valid
passport may contain mistakes of fact, but if duly signed and not out of date, it may do its work and get you
through the barrier. On the other hand, it may give the colour of the eyes and all the other facts correctly, but if
it is out of date, it will not do its work; it is invalid. The distinction between truth and validity must be carefully
observed. It is illogical and therefore incorrect to speak of a true syllogism, if you mean a valid syllogism, or of a
valid conclusion, if you mean a true conclusion.
This distinction is of special importance here; for in the study of the syllogism, as such, Logic
concentrates on the form of the reasoning, for the most part, and is not directly concerned with the truth of its
contents. If the syllogism complies with the formal rules, it is valid, if not, not. If the conclusion follows from
the premisses, the conclusion is valid, even though premiss and conclusion may not be true to fact. Example:
All fish are cold-blooded.
Whales are fish.
Whales are cold-blooded.
The first premiss is true; the second is false; the conclusion is false; but the conclusion is correctly drawn
from the premisses, and therefore valid in its syllogism, even though it is not true to fact.
The reverse can happen too. A proposition, true to fact, may appear as the conclusion of an invalid
syllogism. Example:
The industrious are prudent.
Ants are prudent.
Ants are industrious.
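The point can be made concrete with a small sketch (an illustration added here, not from Luce's text), reading 'All A are B' as set inclusion: a form is valid when the conclusion holds in every model in which the premisses hold, and invalid if a single counter-model exists.

def all_are(a, b):
    # "All A are B": every member of the set a is a member of the set b.
    return a <= b

# Valid form: in any model in which both premisses hold, the conclusion
# must hold, because set inclusion is transitive - even in this model,
# where 'fish' falsely includes whales.
cold_blooded = {"trout", "shark", "whale"}
fish = {"trout", "shark", "whale"}
whales = {"whale"}
assert all_are(fish, cold_blooded) and all_are(whales, fish)     # premisses
assert all_are(whales, cold_blooded)                             # conclusion follows

# Invalid form (the ants): a counter-model in which both premisses are
# true but the conclusion is false.
prudent = {"ant", "miser"}
industrious = {"miser"}
ants = {"ant"}
assert all_are(industrious, prudent) and all_are(ants, prudent)  # premisses
assert not all_are(ants, industrious)                            # conclusion fails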
These examples are warnings against the habit of judging the validity or invalidity of a syllogism by the
truth or falsity of the conclusion. As students of Logic our first duty is to look at the working of the syllogism,
and to judge its validity, or otherwise, by the rules. No good comes of confusing the two sets of terms, as is
sometimes done. Truth is truth, and validity is validity, and neither can do duty for the other. The lazy habit of
styling a valid conclusion true, or a true conclusion valid, weakens both our sense of truth and our feeling for
Logic. (From Teach Yourself Logic, by A. A. Luce.)

Inductive and deductive logic


Now I want to talk about methods of finding one's way through these hierarchies - logic.
Two kinds of logic are used, inductive and deductive. Inductive inferences start with observations of
the machine and arrive at general conclusions. For example, if the cycle goes over a bump and the engine
misfires, and then goes over another bump and the engine misfires, and then goes over another bump and the
engine misfires, and then goes over a long smooth stretch of road and there is no misfiring, and then goes over
a fourth bump and the engine misfires again, one can logically conclude that the misfiring is caused by the
bumps. That is induction: reasoning from particular experiences to general truths.
Deductive inferences do the reverse. They start with general knowledge and predict a specific
observation. For example, if, from reading the hierarchy of facts about the machine, the mechanic knows the
horn of the cycle is powered exclusively by electricity from the battery, then he can logically infer that if the
battery is dead the horn will not work. That is deduction.
Solution of problems too complicated for common sense to solve is achieved by long strings of mixed
inductive and deductive inferences that weave back and forth between the observed machine and the mental
hierarchy of the machine found in the manuals. The correct program for this interweaving is formalized as
scientific method.
Actually I've never seen a cycle-maintenance problem complex enough really to require full-scale
formal scientific method. Repair problems are not that hard. When I think of formal scientific method an image
sometimes comes to mind of an enormous juggernaut, a huge bulldozer - slow, tedious, lumbering, laborious,
but invincible. It takes twice as long, five times as long, maybe a dozen times as long as informal mechanic's
techniques, but you know in the end you're going to get it. There's no fault isolation problem in motorcycle
maintenance that can stand up to it. When you've hit a really tough one, tried everything, racked your brain
and nothing works, and you know that this time Nature has really decided to be difficult, you say, "Okay,
Nature, that's the end of the nice guy," and you crank up the formal scientific method.
For this you keep a lab notebook. Everything gets written down, formally, so that you know at all times
where you are, where you've been, where you're going and where you want to get. In scientific work and
electronics technology this is necessary because otherwise the problems get so complex you get lost in them and
confused and forget what you know and what you don't know and have to give up. In cycle maintenance
things are not that involved, but when confusion starts it's a good idea to hold it down by making everything
formal and exact. Sometimes just the act of writing down the problems straightens out your head as to what
they really are.
The logical statements entered into the notebook are broken down into six categories: (1) statement of
the problem, (2) hypotheses as to the cause of the problem, (3) experiments designed to test each hypothesis, (4)
predicted results of the experiments, (5) observed results of the experiments and (6) conclusions from the
results of the experiments. This is not different from the formal arrangement of many college and high-school
lab notebooks but the purpose here is no longer just busywork. The purpose now is precise guidance of
thoughts that will fail if they are not accurate.
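As a small illustration (added here, not Pirsig's; the field names are invented for the sketch), the six categories map directly onto a record structure:

from dataclasses import dataclass, field
from typing import List

@dataclass
class NotebookEntry:
    problem: str                                          # (1) statement of the problem
    hypotheses: List[str] = field(default_factory=list)   # (2) suspected causes
    experiments: List[str] = field(default_factory=list)  # (3) a test for each hypothesis
    predicted: List[str] = field(default_factory=list)    # (4) predicted results
    observed: List[str] = field(default_factory=list)     # (5) observed results
    conclusions: List[str] = field(default_factory=list)  # (6) what the results prove

entry = NotebookEntry(
    problem="Solve Problem: Why doesn't cycle work?",
    hypotheses=["Hypothesis Number One: The trouble is in the electrical system."],
)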
The real purpose of scientific method is to make sure Nature hasn't misled you into thinking you know
something you don't actually know. There's not a mechanic or scientist or technician alive who hasn't suffered
from that one so much that he's not instinctively on guard. That's the main reason why so much scientific and
mechanical information sounds so dull and so cautious. If you get careless or go romanticizing scientific
information, giving it a flourish here and there, Nature will soon make a complete fool out of you. It does it
often enough anyway even when you don't give it opportunities. One must be extremely careful and rigidly
logical when dealing with Nature: one logical slip and an entire scientific edifice comes tumbling down. One
false deduction about the machine and you can get hung up indefinitely.
In Part One of formal scientific method, which is the statement of the problem, the main skill is in
stating absolutely no more than you are positive you know. It is much better to enter a statement "Solve
Problem: Why doesn't cycle work?" which sounds dumb but is correct, than it is to enter a statement "Solve
Problem: What is wrong with the electrical system?" when you don't absolutely know the trouble is in the
electrical system. What you should state is "Solve Problem: What is wrong with cycle?" and then state as the first
entry of Part Two: "Hypothesis Number One: The trouble is in the electrical system." You think of as many
hypotheses as you can, then you design experiments to test them to see which are true and which are false.
This careful approach to the beginning questions keeps you from taking a major wrong turn which
might cause you weeks of extra work or can even hang you up completely. Scientific questions often have a
surface appearance of dumbness for this reason. They are asked in order to prevent dumb mistakes later on.
Part Three, that part of formal scientific method called experimentation, is sometimes thought of by
romantics as all of science itself because that's the only part with much visual surface. They see lots of test tubes
and bizarre equipment and people running around making discoveries. They do not see the experiment as part
of a larger intellectual process and so they often confuse experiments with demonstrations, which look the
same. A man conducting a gee-whiz science show with fifty thousand dollars' worth of Frankenstein
equipment is not doing anything scientific if he knows beforehand what the results of his efforts are going to
be. A motorcycle mechanic, on the other hand, who honks the horn to see if the battery works is informally
conducting a true scientific experiment. He is testing a hypothesis by putting the question to nature. The TV
scientist who mutters sadly, "The experiment is a failure; we have failed to achieve what we had hoped for," is
suffering mainly from a bad scriptwriter. An experiment is never a failure solely because it fails to achieve
predicted results. An experiment is a failure only when it also fails adequately to test the hypothesis in
question, when the data it produces don't prove anything one way or another.
Skill at this point consists of using experiments that test only the hypothesis in question, nothing less,
nothing more. If the horn honks, and the mechanic concludes that the whole electrical system is working, he is
in deep trouble. He has reached an illogical conclusion. The honking horn only tells him that the battery and
horn are working. To design an experiment properly he has to think very rigidly in terms of what directly
causes what. This you know from the hierarchy. The horn doesn't make the cycle go. Neither does the battery,
except in a very indirect way. The point at which the electrical system directly causes the engine to fire is at the
spark plugs, and if you don't test here, at the output of the electrical system, you will never really know
whether the failure is electrical or not.
To test properly the mechanic removes the plug and lays it against the engine so that the base around
the plug is electrically grounded, kicks the starter lever and watches the spark-plug gap for a blue spark. If
there isn't any he can conclude one of two things: (a) there is an electrical failure or (b) his experiment is
sloppy. If he is experienced he will try it a few more times, checking connections, trying every way he can think
of to get that plug to fire. Then, if he can't get it to fire, he finally concludes that a is correct, there's an electrical
failure, and the experiment is over. He has proved that his hypothesis is correct.
In the final category, conclusions, skill comes in stating no more than the experiment has proved. It
hasn't proved that when he fixes the electrical system the motorcycle will start. There may be other things
wrong. But he does know that the motorcycle isn't going to run until the electrical system is working and he
sets up the next formal question: "Solve problem: what is wrong with the electrical system?"
He then sets up hypotheses for these and tests them. By asking the right questions and choosing the
right tests and drawing the right conclusions the mechanic works his way down the echelons of the motorcycle
hierarchy until he has found the exact specific cause or causes of the engine failure, and then he changes them
so that they no longer cause the failure.

An untrained observer will see only physical labour and often get the idea that physical labour is
mainly what the mechanic does. Actually the physical labour is the smallest and easiest part of what the
mechanic does. By far the greatest part of his work is careful observation and precise thinking. That is why
mechanics sometimes seem so taciturn and withdrawn when performing tests. They don't like it when you talk
to them because they are concentrating on mental images, hierarchies, and not really looking at you or the
physical motorcycle at all. They are using the experiment as part of a program to expand their hierarchy of
knowledge of the faulty motorcycle and compare it to the correct hierarchy in their mind. They are looking at
underlying form.
(From Zen and the Art of Motorcycle Maintenance, by Robert Pirsig.)

PHYSICS
The Origin of the Sun and Planets
The suggestion that the material of the earth was indeed derived from an exploding star - a supernova -
is supported by strong evidence. The shower of stars must have been surrounded by a cloud of gas - the cloud
from which the stars had just condensed. A supernova, undergoing violent disintegration, must have expelled
gases that went to join this cloud, the material from the supernova thereby getting mixed with the large
quantity of hydrogen of which the cloud was mainly composed. Our problem is then to explain how both the
sun and the planets were formed out of this mixture of materials.
It is a characteristic of a good detective story that one vital clue should reveal the solution to the
mystery, but that the clue and its significance should be far from obvious. Such a clue exists in the present
problem. It turns on the simple fact that the sun takes some 26 days to spin once round on its axis - the axis being
nearly perpendicular to the orbits of the planets, which lie in nearly the same plane. The importance of this fact
is that the sun has no business to be rotating in 26 days. It ought to be rotating in a fraction of a day, several
hundred times faster than it is actually doing. Something has slowed the spin of the sun. It is this something
that yields the key to the mystery.
Stars are the products of condensations that occur in the dense inter-stellar gas clouds. A notable cloud
is the well-known Orion Nebula whose presence in the 'sword' of Orion can easily be seen with binoculars.
Stars forming out of the gas in such clouds must undergo a very great degree of condensation. To begin with,
the material of a star must occupy a very large volume, because of the extremely small density of the inter-
stellar gas. In order to contain as much material as the sun does, a sphere of gas in the Orion Nebula must have
a diameter of some 10,000,000,000,000 miles. Contrast this with the present diameter of the sun, which is only
about a million miles. Evidently in order to produce a star like the sun a blob of gas with an initial diameter of
some 10 million million miles must be shrunk down in some way to a mere million miles. This implies a
shrinkage to one ten millionth of the original size.
Now it is a consequence of the laws of dynamics that, unless some external process acts on it, a blob of
gas must spin more and more rapidly as it shrinks. The size of a condensation and the speed of its spin are in
inverse proportion to each other. A decrease of size to one ten-millionth of the original dimensions leads to
an increase in the speed of spin by a factor of 10 million. But the rotation speed of the sun is only about 2 kilometres per
second. At a speed of 100 kilometres per second the sun would spin round once in about half a day, instead of
in the observed time of 26 days.
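A quick check of these figures (arithmetic added here, not Hoyle's, taking the sun's diameter as the passage's 'about a million miles', roughly 1.6 x 10^6 km): the rotation period is

    T = \pi D / v \approx (5 \times 10^{6}\ \mathrm{km}) / (2\ \mathrm{km\,s^{-1}}) = 2.5 \times 10^{6}\ \mathrm{s} \approx 29\ \mathrm{days},

close to the observed 26 days; at v = 100 km per second the same circumference is covered in about 5 x 10^4 seconds, roughly half a day.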
Only one loophole remains. We must appeal to some external process to slow down the spin of the solar
condensation. Our problem is to discover how such an external process operates. First we must decide at what
stage of the condensation the external process acts. Does it act while the condensing blob still has very large
dimensions? Or does it operate only in the later stages, as the condensation reaches the compact stellar state?
Or does it operate more or less equally throughout the whole shrinkage?
A strong hint that the process must act mainly in the late stages of the condensation comes from
observations of the rates of spin of stars. It is found that the rates of spin have a very curious dependence on
surface temperature. Stars like the sun, with surface temperatures less than 6,000° C, rotate slowly like the sun.
But stars with surface temperatures greater than 7,000° C rotate considerably more rapidly, their equatorial
speeds of rotation being usually greater than 50 kilometres per second. Although this is still much less than
what we should expect if no external process were operative, it is considerably greater than the equatorial
rotation speed possessed by the sun.
This shows that while the external process must be operative in all cases, it is operative to different
degrees that depend on the surface temperature of the final star. Now the difference between one star and
another can scarcely show at all during the early stages of the shrinkage. Certainly the difference between two
condensations, one yielding a star of surface temperature 6,000° C and the other yielding a star of surface
temperature 7,000° C, must be very small indeed during the early stages: much too small for the stars to come
to have markedly different rotation speeds if the external process were of main effect during the early stages.
The inference is that the process operates mainly during the late stages of condensation.
Now what was the external process? We have mentioned that rotary forces must have become
important during the late stages of condensation. The effect of these forces was to cause the condensation to
become more and more flattened at its poles. Eventually the flattening became sufficient for an external rotating
disc to begin growing out of the equator. The sequence of events is illustrated in figure 1.
Once the sun had thus grown a disc the external process was able to come into operation. The process
consisted of a steady transference of rotational momentum from the sun to the disc. Two birds were thereby
killed with one stone. The sun was slowed down to its present slow rate of spin and the disc, containing the
material out of which the planets were subsequently to condense, was pushed farther and farther from the sun.
The solar condensation probably first grew its disc when it had shrunk to a size somewhat less than the orbit of
the innermost planet, Mercury. The pushing outwards of the main bulk of the disc explains why the larger
planets now lie so far from the sun.
It may be wondered why such an obvious theory was not put forward long ago. The answer is that
there seemed to be such grave objections to it that not until very recently has it been examined at all seriously.
And now it turns out that the objections are not so grave as was previously believed.
(From Chapter VI of Frontiers of Astronomy by Fred Hoyle.)

Can Life Exist on the Planets?

The old view that every point of light in the sky represented a possible home for life is quite foreign to
modern astronomy. The stars have surface-temperatures of anything from 1,650 degrees to 60,000 degrees or
more and are at far higher temperatures inside. A large part of the matter of the universe consists of stellar
matter at a temperature of millions of degrees, its molecules being broken up into atoms, and the atoms broken
up, partially or wholly, into their constituent parts. The rest consists, for the most part, of nebular gas or dust.
Now the very concept of life implies duration in time; there can be no life - or at least no life at all similar to that
we know on earth - where atoms change their make-up millions of times a second and no pair of atoms can ever
stay joined together. It also implies a certain mobility in space, and these two implications restrict life to the
small range of physical conditions in which the liquid state is possible. Our survey of the universe has shown
how small this range is in comparison with that exhibited by the universe as a whole. It is not to be found in the
stars nor in the nebulae out of which the stars are born. Indeed, probably only an infinitesimal fraction of the
matter of the universe is in the liquid state.
Actually we know of no type of astronomical body in which the conditions can be favourable to life
except planets like our own revolving round a sun. Even these may be too hot or too cold for life to obtain a
footing. In the solar system, for instance, it is hard to imagine life existing on Mercury or Neptune since liquids
boil on the former and freeze hard on the latter.
Even when all the requisite conditions are satisfied, will life come or will it not? We must probably
discard the at one time widely accepted view that if once life had come into the universe in any way
whatsoever, it would rapidly spread from planet to planet and from one planetary system to another until the
whole universe teemed with life; space now seems too cold, and planetary systems too far apart. Our terrestrial
life must in all probability have originated on the earth itself. What we should like to know is whether it
originated as the result of some amazing accident or succession of coincidences, or whether it is the normal
event for inanimate matter to produce life in due course, when the physical environment is suitable. We look to
the biologist for the answer, which so far he has not been able to produce.
The astronomer might be able to give a partial answer if he could find evidence of life on some other
planet, for we should then at least know that life had occurred more than once in the history of the universe,
but so far no convincing evidence has been forthcoming. There is no definite evidence of life anywhere in the
universe, except on our own planet.
Apart from the certain knowledge that life exists on earth, our only definite knowledge is that, at the
best, life must be limited to a tiny fraction of the universe. Millions of millions of stars exist which support no
life, which have never done so and never will do so. Of the planetary systems in the sky, many must be entirely
lifeless, and in others life, if it exists at all, is probably limited to a few of the planets.
Let us leave these rather abstract speculations and come down to earth. The earth, which started life as a
hot mass of gas, has gradually cooled, until it has now about touched bottom, and has almost no heat beyond
that which it receives from the sun. This just about balances the amount it radiates away into space, so that it
would stay at its present temperature for ever if external conditions did not change, and any change in its
condition will be forced on it by changes occurring outside. These changes may be either gradual or
catastrophic.
Of the gradual changes which are possible, the most obvious is a diminution in the light and heat
received from the sun. We have seen that if the sun consisted of pure hydrogen, it could lose one part in 150 of
its whole mass through the transformation of hydrogen into helium. The energy thus set free would provide for
radiation at the present rate through a period of 100,000 million years.
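A rough check of this figure (arithmetic added here, not Jeans's, using round modern values: the sun's mass about 2 x 10^30 kg and its output about 4 x 10^26 watts):

    E = Mc^{2}/150 \approx (2 \times 10^{30})(3 \times 10^{8})^{2}/150 \approx 1.2 \times 10^{45}\ \mathrm{J},
    t = E/L \approx (1.2 \times 10^{45}) / (4 \times 10^{26}) \approx 3 \times 10^{18}\ \mathrm{s} \approx 10^{11}\ \mathrm{years},

that is, of the order of 100,000 million years.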
The sun does not consist of pure hydrogen, and has never done so, but a fair proportion of its present
substance is probably hydrogen, and this ought to provide radiation for at least several thousands of millions of
years, at the present rate. After all the available supplies of hydrogen are used up, the sun will, so far as we can
guess, proceed to contract to the white dwarf state, probably to a condition resembling that of the faint
companion of Sirius. The shrinkage of the sun to this state would transform our oceans into ice and our
atmosphere into liquid air; it seems impossible that terrestrial life could survive.
Such at least would be the normal course of events, the tragedy we have described happening after a
time of the order of perhaps 10,000 millions of years. But a variety of accidents may intervene to bring the
human race to an end long before any such interval has elapsed. To mention only possible astronomical
occurrences, the sun may run into another star, any asteroid may hit any other asteroid and, as a result, be so
deflected from its path as to strike the earth, any of the stars in space may wander into the solar system and, in
so doing, upset all the planetary orbits to such an extent that the earth becomes impossible as an abode of life. It
is difficult to estimate the likelihood of any of these events happening, but rough calculations suggest that none
of them is at all likely to happen within the next 10,000 million years or so.
A more serious possibility is that the sun's light and heat may increase so much as to shrivel up all
terrestrial life. We have seen how 'novae' occasionally appear in the sky, temporarily emitting anything up to
25,000 times the radiation of the sun. It seems fairly certain that if our sun were suddenly to become a nova, its
emission of light and heat would so increase as to scorch all life off the earth, but we are completely in the dark
as to whether our sun runs any risk of entering the nova stage. If it does, this is probably the greatest of all the
risks to which life on earth is exposed.
Apart from improbable accidents, it seems that if the solar system is left to the natural course of
evolution, the earth is likely to remain a possible abode of life for thousands of millions of years to come.
(From The Universe Around Us, by Sir James Jeans, F.R.S.)

The Theory of Continuous Creation


We must move on to consider the explanations that have been offered for this expansion of the
universe. Broadly speaking, the older ideas fall into two groups. One was that the universe started its life a
finite time ago in a single huge explosion, and that the present expansion is a relic of the violence of this
explosion. This big bang idea seemed to me to be unsatisfactory even before detailed examination showed that
it leads to serious difficulties. For when we look at our own galaxy there is not the smallest sign that such an
explosion ever occurred. But the really serious difficulty arises when we try to reconcile the idea of an explosion
with the requirement that the galaxies have condensed out of diffuse background material. The two concepts of
explosion and condensation are obviously contradictory, and it is easy to show, if you postulate an explosion of
sufficient violence to explain the expansion of the universe, that condensations looking at all like the galaxies
could never have been formed.
We come now to the second group of theories. The ordinary idea that two particles attract each other is
only accepted if their distance apart is not too great. At really large distances, so the argument goes, the two
particles repel each other instead. On this basis it can be shown that if the density of the background material is
sufficiently small, expansion must occur. But once again there is a difficulty in reconciling this with the
requirement that the background material must condense to form the galaxies.
I should like now to approach more recent ideas by describing what would be the fate of our observable
universe if any of these older theories had turned out to be correct. Every receding galaxy will eventually
increase its distance from us until it passes beyond the limit of the observable universe - that is to say, it will
move to a distance beyond the critical limit of about two thousand million light years that I have already
mentioned. When this happens, nothing that occurs within them can ever be observed from our galaxy. So if
any of the older theories were right we should end in a seemingly empty universe, or at any rate in a universe
that was empty apart perhaps from one or two very close galaxies that became attached to our galaxy as
satellites.
Although I think there is no doubt that every galaxy we now observe to be receding from us will, in
about ten thousand million years, have passed entirely beyond the limit of vision of an observer in our galaxy,
yet I think that such an observer would still be able to see about the same number of galaxies as we do now. By
this I mean that new galaxies will have condensed out of the background material at just about the rate
necessary to compensate for those that are being lost as a consequence of their passing beyond our observable
universe. At first sight it might be thought that this could not go on indefinitely because the material forming
the background would ultimately become exhausted. But again, I do not believe that this is so, for it seems
likely that new material is constantly being created so as to maintain a constant density in the background
material. So we have a situation in which the loss of galaxies, through the expansion of the universe, is
compensated by the condensation of new galaxies, and this can continue indefinitely.
The idea that matter is created continuously represents our ultimate goal in this series of lectures. The
idea in itself is not new. I know of references to the continuous creation of matter that go back more than
twenty years; and I have no doubt that a close inquiry would show that the idea, in its vaguest form, goes back
very much further than that. What is new is that it has now been found possible to put a hitherto vague idea in
a precise mathematical form. It is only when this has been done that the consequences of any physical idea can
be worked out and its scientific value assessed.
Now what are the consequences of continuous creation? Perhaps the most surprising result of the
mathematical theory is that the average density of the background material must stay constant. To achieve this
only a very slow creation rate is necessary. The new material does not appear in a concentrated form in small
localized regions but is spread throughout the whole of space. The average rate of appearance amounts to no
more than the creation of one atom in the course of a year in a volume equal to St. Paul's Cathedral. As you will
realize, it would be quite impossible to detect such a rate of creation by direct experiment.
But although this seems such a slow rate when judged by ordinary ideas, it is not small when you
consider that it is happening everywhere in space. The total rate for the observable universe alone is about a
hundred million, million, million, million, million tons per second. Do not let this surprise you because, as I
have said, the volume of the observable universe is very large. It is this creation that drives the universe. The
new material produces an outward pressure that leads to the steady expansion. But it does much more than
that. With continuous creation the apparent contradiction between the expansion of the universe and the
requirement that the background material shall be able to condense into galaxies is completely overcome. For it
can be shown that once an irregularity occurs in the background material a galaxy must eventually be formed.
Such irregularities are constantly being produced through the gravitational action of the galaxies themselves.
So the background material must give a steady supply of new galaxies. Moreover, the created material also
supplies unending quantities of atomic energy. For, by arranging that newly created material is composed of
hydrogen, we explain why, in spite of the fact that hydrogen is being consumed in huge quantities in the stars,
the universe is nevertheless observed to be overwhelmingly composed of it.
So we see that no large-scale changes in the universe can be expected to take place in the future.
Without continuous creation the universe must evolve towards a dead state in which all the matter is
condensed into a vast number of dead stars. With continuous creation, on the other hand, the universe has an
infinite future in which all its present very large-scale features will be preserved.
(From The Nature of the Universe by Fred Hoyle.)

The Creation of the Universe


Before we can discuss the basic problem of the origin of our universe, we must ask ourselves whether
such a discussion is necessary. Could it not be true that the universe has existed since eternity, changing
slightly in one way or another in its minor features, but always remaining essentially the same as we know it
today? The best way to answer this question is by collecting information about the probable age of various
basic parts and features that characterize the present state of our universe.
For example, we may ask a physicist or chemist: 'How old are the atoms that form the material from
which the universe is built?' Only half a century ago such a question would not have made much sense.
However, when the existence of natural radio-active elements was recognized, the situation became quite
different. It became evident that if the atoms of the radio-active elements had been formed too far back in time,
they would by now have decayed completely and disappeared. Thus the observed relative abundances of
various radio-active elements may give us some clue as to the time of their origin.
We notice first of all that thorium and the common isotope of uranium (U238) are not markedly less
abundant than the other heavy elements, such as, for example, bismuth, mercury or gold. Since the half-life
periods of thorium and of common uranium are 14 billion and 4.5 billion years respectively, we must conclude
that these atoms were formed not more than a few billion years ago. On the other hand the fissionable isotope
of uranium (U235) is very rare, constituting only 0.7 per cent of the main isotope. The half-life of U235 is considerably
shorter than that of U238, being only about 0.9 billion years. Since the amount of fissionable uranium has been
cut in half every 0.9 billion years, it must have taken about seven such periods, or about 6 billion years, to bring
it down to its present rarity, if both isotopes were originally present in comparable amounts. Similarly, in a few
other radio-active elements, such as radio-active potassium, the unstable isotopes are also always found in very
small relative amounts. This suggests that these isotopes were reduced quite considerably by slow decay taking
place over a period of a few billion years. Of course, there is no a priori reason for assuming that all the isotopes
of a given element were originally produced in exactly equal amounts. But the coincidence of the results is
significant, inasmuch as it indicates the approximate date of the formation of these nuclei. Furthermore, no
radio-active elements with half-life periods shorter than a substantial portion of a billion years are found in
nature, although they can be produced artificially in atomic piles. This also indicates that the formation of
atomic species must have taken place not much more recently than a few billion years before the present time.
Thus there is a strong argument for assuming that radio-active atoms, and along with them, all other stable
atoms were formed under some unusual circumstances which must have existed in the universe a few billion
years ago.
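The halving arithmetic is easily verified (a check added here; like the passage, it ignores the much slower decay of U238):

    \left(\tfrac{1}{2}\right)^{7} = \tfrac{1}{128} \approx 0.008, \qquad 7 \times 0.9\ \mathrm{billion\ years} \approx 6.3\ \mathrm{billion\ years},

so seven halvings reduce an initially equal abundance to about 0.8 per cent, close to the observed 0.7 per cent.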
As the next step in our enquiry, we may ask a geologist: 'How old are the rocks that form the crust of
our globe?' The age of various rocks - that is, the time that has elapsed since their solidification from the molten
state - can be estimated with great precision by the so-called radio-active clock method. This method, which
was originally developed by Lord Rutherford, is based on the determination of the lead content in various
radio-active minerals such as pitchblende and uraninite. The significant point is that the natural decay of radio-
active materials results in the formation of the so-called radiogenic lead isotopes. The decay of thorium
produces the lead isotope Pb208, whereas the two isotopes of uranium produce Pb207 and Pb206. These radiogenic
lead isotopes differ from their companion Pb204, natural lead, which is not the product of decay of any natural
radio-active element.
As long as the rock material is in the molten state various physical and chemical processes may separate
the newly produced lead from the mother substance. However, after the material has become solid and ore has
been formed, radiogenic lead remains at the place of its origin. The longer the time period after the
solidification of the rock, the larger the amount of lead deposited by any given amount of radio-active
substance. Therefore, if one measures the relative amounts of deposited radiogenic lead isotopes and the lead-
producing radio-active substances (that is, the ratios Pb208/Th232, Pb207/U235 and Pb206/U238), and if one
knows the corresponding decay rates, one can get three independent estimates of the time when a given
radio-active ore was formed. By applying this method, one gets results of the kind shown in the following table.

Mineral        Locality                    Geological period   Age in years x 10^6
Pitchblende    Colorado, U.S.A.            Tertiary                58
Pitchblende    Bohemia, Europe             Carboniferous          215
Pitchblende    Belgian Congo, Africa       Pre-Cambrian           580
Uraninite      Wilberforce, Canada         Pre-Cambrian         1,035
Pitchblende    Great Bear Lake, Canada     Pre-Cambrian         1,330
Uraninite      Karelia, U.S.S.R.           Pre-Cambrian         1,765
Uraninite      Manitoba, Canada            Pre-Cambrian         1,985
The last two minerals are the oldest yet found, and from their age we must conclude that the crust of
the earth is at least 2 billion years old.
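In modern notation the radio-active clock reads as follows (a standard formula, stated here for clarity rather than quoted from Gamow). If every parent atom that decays leaves one radiogenic lead atom in the solidified ore, the measured ratio of lead to surviving parent fixes the time since solidification:

    t = \frac{1}{\lambda}\ln\!\left(1 + \frac{N_{\mathrm{Pb}}}{N_{\mathrm{U}}}\right), \qquad \lambda = \frac{\ln 2}{T_{1/2}},

one such equation for each of the three ratios mentioned above, which is what gives the three independent estimates.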
A much more elaborate method was proposed recently by the British geologist, Arthur Holmes. This
method goes beyond the formation time of different radio-active deposits; and claims an accurate figure for the
age of the material forming the earth. By applying this method to the relative amounts of lead isotopes found in
rocks of different geological ages, Holmes found that all curves intersect near the point corresponding to a total
age of 3.35 billion years, which must represent the correct age of our earth.
How old is the moon? As was shown by the work of the British astronomer, George Darwin, the moon
is constantly receding from the earth, at the rate of about 5 inches every year. Dividing the present distance to
the moon (239,000 miles) by the estimated rate of recession, we find that the moon must have been practically in
contact with the earth about 4 billion years ago.
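The division is easily repeated (arithmetic added here, at the present rate of recession):

    t \approx \frac{239{,}000\ \mathrm{miles} \times 63{,}360\ \mathrm{inches/mile}}{5\ \mathrm{inches/year}} \approx 3 \times 10^{9}\ \mathrm{years},

the same order as the figure quoted; since the rate of recession has not been constant over geological time, the straight division is only a rough guide.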
Thus we see that whenever we inquire about the age of some particular part or property of the universe
we always get the same approximate answer - a few billion years old. Thus it seems that we must reject the idea
of a permanent unchangeable universe and must assume that the basic features of the universe as we know it
today are the direct result of some evolutionary development which must have begun a few billion years ago.
(From The Creation of the Universe, by George Gamow.)
Atomic Radiation and Life
The radiation dose given off by an X-ray machine or by isotopes is usually measured by determining
the number of ions produced in a volume of gas. Since these carry an electric charge there are a number of
extremely delicate methods by which they can be detected. The widely used Geiger counter consists essentially
of a wire stretched inside a cylindrical tube, so arranged that an electric current can pass between the wire and
the tube only when there are ions in the gas. Consequently, when an ionizing particle passes through the tube,
an electric signal is given out. In this way the number of ionizing particles given off by a radio-active source can
be accurately counted. This is called the activity of the material. It is measured in a unit called the curie after the
discoverer of radium. The activity of one gram of radium together with its decay products is equal to one curie.
Every time an atom disintegrates a beta- or alpha-ray is given off, together with a certain amount of gamma
radiation.
The activity in curies can tell us nothing about the dose of radiation given off by the radio-active
material, since the curie measures only the number of ionizing particles emitted, independent of their range or
energy. If, for example, we put next to the skin one curie of radio-active cobalt, which gives off energetic
gamma-rays, the dose received on the surface will be one five-thousandth part of the dose received from one
curie of polonium which gives off alpha-particles. On the other hand the gamma-rays from the curie of cobalt
will penetrate deeply, while the alpha-rays will not affect anything which lies more than two one-thousandths of
an inch below the surface of the skin.
The best way of defining the dose of radiation which an irradiated material has received is in terms of
energy. We have seen that on exposure to ionizing radiation electrons, or other sub-atomic particles moving at
great speed, lose energy to the surrounding molecules. The amount of energy gained by the irradiated
substance is clearly the important factor, and will determine the biological changes produced. The most widely
used unit for measuring X-ray and gamma-ray dosage is the roentgen - named after the discoverer of X-rays.
The remarkable property of ionizing radiation is that the small amount of energy represented by a few hundred
roentgens can kill a man.
The primitive embryonic cell known as the zygote, which is formed after the entry of the sperm into the
ovum, is very sensitive to radiation. For example, 80 per cent of mice, exposed to 200 rads of X-rays within the
first five days after conception, fail to give birth. Smaller doses give rise to a lower incidence of pre-natal death,
but an appreciable reduction in the average litter size has been observed with 50 rads.
At first the embryo grows by cell division without differentiation and becomes firmly implanted in the
wall of the uterus. This requires about eight days in human beings and five days in mice. Then differentiation
begins, and the individual organs and limbs are formed; the embryo takes shape. During this period it is in the
greatest danger. Now radiation no longer kills - the damaged embryo is not re-absorbed or aborted, but proceeds
to a live birth which is abnormal. These malformations can be very great, so as to give horrible and distressing
monsters, which are, however, quite capable of living for a time. The incidence is particularly high in the early
stages of the active development of the embryo.
The period of major organ production is over after about three months in human beings, and the foetus
then develops its finer aspects and generally grows and develops. Exposure to doses insufficient to produce
severe radiation sickness in the mother no longer produces gross deformities which can be recognized in small
experimental animals. But the absence of striking changes in the newborn does not mean that the irradiation
has been without harm. The general effect is less obvious, but none the less serious, and irradiation at the later
stages of pregnancy results in very marked growth reduction, giving small babies which develop into smaller
adults. Their life span is reduced and their reproductive organs are often affected so that they grow up sterile.
Damage to the brain and eyes was found a few weeks after birth in all cases which had been irradiated in the
foetal stage with 300 rads, and there is a significant incidence after 200 rads. Since only gross disorders of the
brain can be detected in experimental animals, it seems likely that much smaller doses will give effects which
are serious in man.
The detailed picture of the influence of radiation on prenatal development has been obtained from
studies with animals. Unhappily, sufficient human cases are known to make it certain that the same pattern
also occurs in man; and we can confidently superimpose a human time-scale on the mouse data. Some of our
information is derived from the survivors of the atom bombs in Japan. The children of the women who were
pregnant and exposed to irradiation at Nagasaki and Hiroshima are, on average, shorter and lighter and have
smaller heads, indicating an under-developed brain. Some show severe mental deficiencies, while others were
unable to speak normally at five years old.
Most of our knowledge comes from expectant mothers who were irradiated for therapeutic or
diagnostic reasons. Many cases are described in the medical literature of abnormalities following exposure of
the embryo. Most of these arose twenty or thirty years ago at a time when radiologists did not know of the
great radio-sensitivity of the foetus. A detailed survey showed that, where a mother received several hundred
roentgen within the first two months after the implantation of the embryo, severe mal-development was
observed in every child, a high proportion of whom lived for many years.
(From Atomic Radiation and Life by Peter Alexander.)
Marconi and the invention of Radio
The theory of energy waves - or wireless waves - in the air was first put forward in 1864, ten years
before Marconi was born, by the British physicist James Clerk Maxwell, Professor of Experimental Physics at
Cambridge University. Maxwell produced, by brilliant mathematical reasoning, the theory that 'electro-magnetic
disturbances', though invisible to our eyes, must exist in space, and that these waves travel at the same speed as
light waves - that is, at the rate of 186,000 miles, the equivalent of more than seven times round the world,
per second.
This was a very remarkable deduction; but at that time no means of producing and detecting such
waves were known, and Maxwell was therefore unable to carry out any scientific experiments to prove the
truth of his theory, and had to leave it to others to show how correct his reasoning had been. Twenty-four years
later the German physicist, Heinrich Hertz, was able to show that, when he produced an electric spark across a
gap between two metal balls by applying an electric current to them, a similar spark would jump across a small
gap in a metal ring a few feet away, though the ring was not connected in any way with the other apparatus.
The experiment made it clear that the second spark was caused by the first, and the fact that there was
no connexion between the balls and the ring suggested that the spark from the first apparatus must have been
transferred through the intervening space between the two pieces of apparatus in the form of some kind of
wave spreading outwards in all directions like light. In fact, it was clear that those were the electro-magnetic
waves of Maxwell's theory.
Marconi began his work by fixing up a simple contraption, similar to the one used by Hertz in his
discovery, and trying this out on a table. Then, when he, too, had succeeded in making a spark cross from one
apparatus to another, he designed a more elaborate set-up and tried to reproduce a bigger spark and at a
greater distance. He spent weeks making and testing new pieces of equipment, and then breaking them up
again when they proved to be useless. He had so many failures and disappointments that he sometimes felt
quite desperate; but he persevered, and eventually succeeded in sending a spark the full length of a thirty-foot
room.
Though Marconi was naturally delighted with this development, it was clearly only a beginning; he
must now try to make his sparks do something useful. This meant more gadgets and more experiments. Then,
one night after the rest of the family had gone to bed, he succeeded in making the wireless waves start an
electric bell ringing in a room two floors below his laboratory. He was so excited that he rushed to his mother's
bedroom and woke her up to tell her, and the next day he demonstrated his experiment successfully to his
father, too.
He extended the arm attached to one of the metal balls of the transmitting instrument into an elevated
aerial and connected the second arm to earth. On the receiving instrument he also used an elevated aerial and
earth. This led to an important advance, for not only did it greatly extend the range through which the waves
could be transmitted, but it also increased the volume of sound received. Marconi now found that he could
transmit Morse Code signals: 'I actually transmitted and received intelligible signals,' he said.
Marconi was asked to take his equipment on to the roof of the head office of the General Post Office at
St. Martin's-le-Grand (London) and to send a wireless signal from there to the roof-top of another building
some 300 yards away, where he set up a receiver. Both these tests were successful, and so he was then asked to
give a third demonstration - this time on Salisbury Plain - to a group of high-ranking army and naval officers
and government officials. This third demonstration was naturally a very important one for Marconi. He
therefore took immense pains over his preparations, testing and re-testing his apparatus to make certain that
everything was in order. Then, while the important spectators stood by his receiver nearly two miles away, he
tapped out a Morse signal on his transmitter - and waited anxiously for the result. The signal came through
perfectly.
The following March, a German vessel collided in a fog with the East Goodwin lightship off the Kentish
coast, and, for the first time, a distress signal calling for relief was despatched by wireless - and was answered.
This was a remarkable proof of the importance of radio to shipping, and not long after most of the larger
nations were fitting their ships with wireless equipment.
'He'll be sending messages across the Atlantic next,' people joked, never for a moment believing that
this might really be possible. But it seemed no impossibility to Marconi, and shortly after a visit to the United
States he seriously set to work to achieve the sending of a signal from Great Britain to America across the
Atlantic Ocean.
He chose a lonely spot on the south coast of Cornwall called Poldhu, high above the cliffs near Mullion.
Work on building the new station began in October 1900. The following January the elaborate new plant was
installed, and then Marconi, with his usual careful attention to detail, spent several months testing it and
suggesting improvements. Then on 27th November 1901, Marconi with two of his assistants, Kemp and Paget,
sailed for Newfoundland, 2,000 miles away, where they intended setting up both a receiver for the reception of
the signal from Poldhu and a transmitter for sending a second signal back to Cornwall. The small party
successfully installed their receiver at St. John's in a disused hospital attached to a naval barracks on a 500-foot
hill which, strangely enough, was called 'Signal Hill'.
As the weather was growing rapidly worse, Marconi decided to concentrate upon trying to receive a
signal from Cornwall. So he sent a cable back to Poldhu giving instructions for the three Morse Code dots
representing the letter 'S' to be transmitted at frequent intervals each day, starting on 11 December. He heard
nothing on the first day except for the roar of the gale outside. On the second day, 12 December, the gale was so
strong that it blew away the kite supporting the important aerial and a second kite had to be hoisted. But that
afternoon, just as he was beginning to think that his experiment had failed, Marconi heard, very faintly, the
sound for which he had been listening, the signal from Poldhu...
(From Great Inventors by Norman Wymer.)
Particles or Waves?
The most obvious fact about a ray of light, at any rate to superficial observation, is its tendency to travel
in a straight line; everyone is familiar with the straight edges of a sunbeam in a dusty room. As a rapidly-
moving particle of matter also tends to travel in a straight line, the early scientists, rather naturally, thought of
light as a stream of particles thrown out from a luminous source, like shot from a gun. Newton adopted this
view, and added precision to it in his 'corpuscular theory of light'.
Yet it is a matter of common observation that a ray of light does not always travel in a straight line. It
can be abruptly turned by reflection, such as occurs when it falls on the surface of a mirror. Or its path may be
bent by refraction, such as occurs when it enters water or any liquid medium; it is refraction that makes our oar
look broken at the point where it enters the water, and makes the river look shallower than it proves to be when
we step into it. Even in Newton's time the laws which governed these phenomena were well known. In the case
of reflection the angle at which the ray of light struck the mirror was exactly the same as that at which it came
off after reflection; in other words, light bounces off a mirror like a tennis ball bouncing off a perfectly hard
tennis-court. In the case of refraction, the sine of the angle of incidence stood in a constant ratio to the sine of
the angle of refraction. We find Newton at pains to show that his light-corpuscles would move in accordance
with these laws, if they were subjected to certain definite forces at the surfaces of a mirror or a refracting liquid.
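In modern notation the two laws are (a summary added here, not Jeans's wording):

    \theta_{\mathrm{refl}} = \theta_{\mathrm{inc}}, \qquad \frac{\sin\theta_{\mathrm{inc}}}{\sin\theta_{\mathrm{refr}}} = n,

where the constant n (the refractive index) depends only on the two media; the second relation is Snell's law.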
Newton's corpuscular theory met its doom in the fact that when a ray of light falls on the surface of
water, only part of it is refracted. The remainder is reflected, and it is this latter part that produces the ordinary
reflections of objects in a lake, or the ripple of moonlight on the sea. It was objected that Newton's theory failed
to account for this reflection, for if light had consisted of corpuscles, the forces at the surface of the water ought
to have treated all corpuscles alike; when one corpuscle was refracted all ought to be, and this left water with
no power to reflect the sun, moon or stars. Newton tried to obviate this objection by attributing 'alternate fits of
transmission and reflection' to the surface of the water - the corpuscle which fell on the surface at one instant
was admitted, but the next instant the gates were shut, and its companion was turned away to form reflected
light. This concept was strangely and strikingly anticipatory of modern quantum theory in its abandonment of
the uniformity of nature and its replacement of determinism by probabilities, but it failed to carry conviction at
the time.
And, in any case, the corpuscular theory was confronted by other and graver difficulties. When studied
in sufficiently minute details, light is not found to travel in such absolutely straight lines as to suggest the
motions of particles. A big object, such as a house or a mountain, throws a definite shadow, and so gives as
good protection from the glare of the sun as it would from a shower of bullets. But a tiny object, such as a very
thin wire, hair or fibre, throws no such shadow. When we hold it in front of a screen, no part of the screen
remains unilluminated. In some way, the light contrives to bend round it, and, instead of a definite shadow, we
see an alternation of light and comparatively dark parallel bands, known as 'interference bands'. To take
another instance, a large circular hole in a screen lets through a circular patch of light. But make the hole as
small as the smallest of pinholes, and the pattern thrown on a screen beyond is not a tiny circular patch of light,
but a far larger pattern of concentric rings, in which light and dark rings alternate - 'diffraction rings'. All the
light which is more than a pinhole's radius from the centre has in some way bent round the edge of the hole.
Newton regarded these phenomena as evidence that his 'light-corpuscles' were attracted by solid
matter. He wrote:
The rays of light that are in our air, in their passage near the angles of bodies, whether transparent or
opaque (such as the circular and rectangular edges of coins, or of knives, or broken pieces of stone or glass), are
bent or inflected round those bodies, as if they were attracted to them; and those rays which in their passage
came nearest to the bodies are the most inflected, as if they were most attracted.
Here again Newton was strangely anticipatory of present-day science, his supposed forces being closely
analogous to the 'quantum forces' of the modern wave-mechanics. But they failed to give any detailed
explanation of diffraction-phenomena, and so met with no favour.
In time all these and similar phenomena were adequately explained by supposing that light consists of
waves, somewhat similar to those which the wind blows up on the sea, except that, instead of each wave being
many yards long, many thousands of waves go to a single inch. Waves of light bend round a small obstacle in
exactly the way in which waves of the sea bend round a small rock. A rocky reef miles long gives almost perfect
shelter from the sea, but a small rock gives no such protection - the waves pass round it on either side, and re-
unite behind it, just as waves of light re-unite behind our thin hair or fibre. In the same way sea-waves which
fall on the entrance to a harbour do not travel in a straight line across the harbour but bend round the edges of
the breakwater, and make the whole surface of the water in the harbour rough. The seventeenth century
regarded light as a shower of particles; the eighteenth century, discovering that this was inadequate to account
for small-scale phenomena such as we have just described, replaced the showers of particles by trains of waves.
(From The Mysterious Universe by Sir James Jeans.)

Matter, Mass and Energy


At the end of last century, physical science recognized three major conservation laws:
A The conservation of matter
B The conservation of mass
C The conservation of energy
Other minor laws, such as those of the conservation of linear and angular momenta, need not enter our
discussion, since they are mere deductions from the three major laws already mentioned. Of the three major
laws, the conservation of matter was the most venerable. It had been implied in the atomistic philosophy of
Democritus and Lucretius, which supposed all matter to be made up of uncreatable, unalterable and
indestructible atoms. It asserted that the matter content of the universe remained always the same, and the
matter content of any bit of the universe or of any region of space remained the same except in so far as it was
altered by the ingress or egress of atoms. The universe was a stage in which always the same actors-the atoms-
played their parts, differing in disguises and groupings, but without change of identity. And these actors were
endowed with immortality.
The second law, that of the conservation of mass, was of more modern growth. Newton had supposed
every body or piece of substance to have associated with it an unvarying quantity, its mass, which gave a
measure of its 'inertia' or reluctance to change its motion. If one motor-car requires twice the engine power of
another to give us equal control over its motion we say that it has twice the mass of the latter car. The law of
gravitation asserts that the gravitational pulls on two bodies are in exact proportion to their masses, so that if
the earth's attraction on two bodies proves to be the same, their 'masses' must be the same, whence it follows
that the simplest way of measuring the mass of any body is by weighing it.
The third principle, that of conservation of energy, is the most recent of all. Energy can exist in a vast
variety of forms, of which the simplest is pure energy of motion-the motion of a train along a level track, or of a
billiard ball over a table. Newton had shown that this purely mechanical energy is 'conserved'. For instance,
when two billiard balls collide, the energy of each is changed, but the total energy of the two remains unaltered;
one gives energy to the other, but no energy is lost or gained in the transaction. This, however, is only true if the
balls are 'perfectly elastic', an ideal condition in which the balls spring back from one another with the same
speed with which they approached. Under actual conditions such as occur in nature, mechanical energy
invariably appears to be lost; a bullet loses speed on passing through the air, and a train comes to rest in time if
the engine is shut off. In all such cases heat and sound are produced. Now a long series of investigations has
shown that heat and sound are themselves forms of energy. In a classical series of experiments made in 1840-50,
Joule measured the energy of sound with the rudimentary apparatus of a violoncello string. Imperfect though
his experiments were, they resulted in the recognition of 'conservation of energy' as a principle which covered
all known transformations of energy through its various modes of mechanical energy, heat, sound and
electrical energy. They showed in brief that energy is transformed rather than lost, an apparent loss of energy of
motion being compensated by the appearance of an exactly equal energy of heat and sound; the energy of
motion of the rushing train is replaced by the equivalent energy of the noise of the shrieking brakes, and of the
heating of wheels, brake-blocks and rails.
These three conservation laws ought of course to have been treated merely as working hypotheses, to
be tested in every conceivable way and discarded as soon as they showed signs of failing. Yet so securely did
they seem to be established that they were treated as indisputable universal laws. Nineteenth-century
physicists were accustomed to write of them as though they governed the whole of creation, and on this basis
philosophers dogmatized as to the fundamental nature of the universe.
It was the calm before the hurricane. The first rumble of the approaching storm was a theoretical
investigation by Sir J. J. Thomson, which showed that the mass of an electrified body could be changed by
setting it into motion; the faster such a body moved the greater its mass became, in opposition to Newton's
concept of a fixed unalterable mass. For the moment, the principle of conservation of mass appeared to have
abandoned science.
For a time this conclusion remained of merely academic interest; it could not be tested observationally
because ordinary bodies could neither be charged with sufficient electricity, nor set into motion with sufficient
speed, for the variations of mass predicted by theory to become appreciable in amount. Then, just as the
nineteenth century was drawing to a close, Sir J. J. Thomson and his followers began to break up the atom,
which now proved to be no more uncuttable, and so no more entitled to the name of 'atom' than the molecule to
which the name had previously been attached. They were only able to detach small fragments, and even now
the complete break-up of the atom into its ultimate constituents has not been fully achieved. These fragments
were found to be all precisely similar, and charged with negative electricity. They were accordingly named
electrons.
These electrons are far more intensely electrified than an ordinary body can ever be. A gram of gold,
beaten as thin as it will go, into a gold leaf a yard square, can with luck be made to hold a charge of about
60,000 electrostatic units of electricity, but a gram of electrons carries a permanent charge which is about nine
million million times greater. Because of this, and because electrons can be set into motion by electrical means
with speeds of more than a hundred thousand miles a second, it is easy to verify that an electron's mass varies
with its speed. Exact experiments have shown that the variation is precisely that predicted by theory.
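The 'variation predicted by theory' is the familiar relativistic factor, m = m0 / sqrt(1 - v²/c²). A minimal sketch (Python; the speeds chosen are illustrative, the passage's 'hundred thousand miles a second' being roughly 1.6 x 10^8 m/s):

    import math

    C = 2.998e8  # speed of light, m/s

    def mass_ratio(v):
        # Relativistic mass increase: m / m0 = 1 / sqrt(1 - v^2 / c^2)
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    for v in (3.0e5, 3.0e7, 1.6e8, 2.9e8):
        print(f"v = {v:.1e} m/s  ->  m/m0 = {mass_ratio(v):.4f}")

At low speeds the increase is imperceptible; at the electron speeds quoted it approaches twenty per cent, which is why the effect could only be verified with electrons.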
(From The Mysterious Universe by Sir James Jeans.)
Structure of Matter
One topic which we shall certainly discuss is the structure of matter. We shall find that there is a simple,
general pattern in the structure of all the solid materials which principally concern us, and that electrons are a
key part of that pattern.
There are ninety-two separate chemical substances from which the world is made; substances such as
oxygen, carbon, hydrogen, iron, sulphur and silicon. Often, of course, they are combined together to make more
elaborate materials, such as hydrogen and oxygen in water. But these are the ninety-two elements; elements
which the chemists have grouped in a list called the Periodic Table and which, in various combinations, go to
make up all the millions of compounds that exist.
If, however, an atom of any one of these elements is examined, it will be found to consist of an assembly
of three different kinds of particle: protons, electrons and neutrons. The atoms of some elements may contain
several hundred particles, while other elements may have fewer than ten particles in the atom.
Before we look at the way in which these particles make up an atom, we need to know something of
their two most important properties: mass and electrical charge. The proton and the neutron both have
approximately the same mass, and the mass of the electron is very much less - about 1/1840 of the mass of the
proton. The electron is negatively charged, and the proton has a charge of the same size but of positive sign.
The neutron carries no charge.
Any object, from an atom upwards in size, will normally contain equal numbers of protons and
electrons, and will thus have no net electrical charge. If electrons are removed in some way from the object, it
will be left with a net positive charge. If electrons are added to the object, it will become negatively charged.
Two objects which are electrically charged exert a force on each other which is inversely proportional to
the square of their distance apart. If the two charges have the same sign, then the objects repel each other, and if
the charges are of opposite sign, then they attract each other. In particular, a proton and an electron will attract
each other, and the closer they are together, the greater will be the force.
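This is the inverse-square law of electrostatics, and a few lines of Python make the behaviour concrete (Coulomb's constant and the elementary charge are standard values, not quoted in the passage):

    K = 8.988e9    # Coulomb's constant, N m^2 / C^2
    E = 1.602e-19  # elementary charge, C (magnitude on proton and electron)

    def force(q1, q2, r):
        # Positive result = repulsion (like signs);
        # negative result = attraction (opposite signs).
        return K * q1 * q2 / r ** 2

    # A proton (+E) and an electron (-E): halving the separation
    # quadruples the attractive force.
    for r in (2e-10, 1e-10, 0.5e-10):
        print(f"r = {r:.1e} m  ->  F = {force(E, -E, r):+.2e} N")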
In general, an atom has a central core called the nucleus, which consists of protons and neutrons.
Surrounding this nucleus is a cloud of electrons. The number of protons in the nucleus is equal to the number
of electrons in the cloud. The total positive charge on the nucleus due to all the protons is just balanced by the
total negative charge of all the electrons in the cloud, and the atom as a whole is electrically neutral.
To get an idea of the principles common to the structure of all atoms we shall start by considering the
simplest atom-that of the element hydrogen.
Hydrogen is the lightest of all the atoms, with a single electron in the electron 'cloud' and a single
proton as its nucleus. All other elements have neutrons as well as protons in the nucleus; for example, helium,
the next simplest atom, has two electrons in the cloud, with two protons and two neutrons in the nucleus.
In the hydrogen atom the electron rotates round the nucleus - in this case a single proton - rather as the
Earth rotates round the Sun. If we think of the circular electron orbit, then its radius is such that the electrical
attraction between the positively charged proton and the negatively charged electron is just sufficient to
provide the force needed to bend the electron path into a circle, just as a stone whirled round on a string would
fly off at a tangent if it were not constrained to its circular path by the tension in the string.
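Setting the electrical attraction equal to the force needed for circular motion fixes the electron's speed for a given orbit. A sketch (Python; the orbit radius used, 5.29 x 10^-11 m, is the standard textbook value for normal hydrogen, not a figure from the passage):

    import math

    K = 8.988e9      # Coulomb's constant, N m^2 / C^2
    E = 1.602e-19    # elementary charge, C
    M_E = 9.109e-31  # electron mass, kg
    R = 5.29e-11     # assumed radius of the normal hydrogen orbit, m

    # Balance:  K e^2 / r^2 (attraction)  =  m v^2 / r (centripetal)
    # hence     v = sqrt(K e^2 / (m r))
    v = math.sqrt(K * E ** 2 / (M_E * R))
    print(f"orbital speed of the electron: {v:.2e} m/s")  # about 2.2e6 m/s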
This simple system with the electron rotating round the nucleus has a certain amount of energy
associated with it. First, there is the kinetic energy of the moving electron. Any moving body possesses kinetic
energy, and the amount of energy is proportional to the mass of the body and the square of its velocity. Thus a
small, high-speed body like a bullet may have more kinetic energy than a much larger body moving slowly.
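The bullet comparison is easily checked in numbers (Python; the masses and speeds below are illustrative assumptions):

    def kinetic_energy(mass_kg, speed_ms):
        # Kinetic energy = 1/2 * m * v^2: proportional to the mass
        # and to the square of the velocity, as stated above.
        return 0.5 * mass_kg * speed_ms ** 2

    bullet = kinetic_energy(0.01, 400.0)  # a 10 g bullet at 400 m/s
    person = kinetic_energy(80.0, 1.5)    # an 80 kg person walking
    print(f"bullet: {bullet:.0f} J, walking person: {person:.0f} J")

The tiny bullet turns out to carry roughly nine times the energy of the much heavier walker, entirely because of the squared velocity.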
The second kind of energy associated with the hydrogen atom is potential energy, due to the fact that a
positive and negative charge separated by a certain distance attract each other, and could be organised to do
work in coming together. In the same way, a lake of water at the top of a mountain has potential energy
associated with it, because it could be organised to do work by running through turbines down into the valley.
The higher the lake is above the valley, the greater will be the potential energy. Similarly, the farther the
electron is from the nucleus in the hydrogen atom, the greater will be the potential energy.
The total energy associated with the hydrogen atom is the sum of the potential energy and the kinetic
energy. This total depends upon the radius of the orbit in which the electron rotates and has a minimum value
for hydrogen atoms in the normal state. If a normal hydrogen atom is in some way given a little extra energy,
then the electron moves out into an orbit of greater radius, and the total energy associated with the atom is
now greater than it was in the normal state. Atoms possessing more than the normal amount of energy are said
to be 'excited'.
The way in which atoms receive extra energy to go into an excited state, and the way in which they give
up that energy in returning to the normal state, is of fundamental importance and we shall discuss it later as
the quantum theory. In particular, we shall find that energy can only be given to an atom in packets of certain
sizes and that energy is emitted by excited atoms in similar packets. The wrong size packet will not be accepted
by an atom, and an excited atom will never emit anything other than one of a limited set of packet sizes.
The fact that an atom of any particular element can emit or absorb energy only in packets of certain
sizes is one consequence of a set of rules which governs the behaviour of electrons in atoms. These rules also
give rise to two other important general properties of the electrons in atoms.
The first of these is that, within the atom, an electron may only possess certain energies-there are certain
'permitted energy levels' for the electron. This concept of permitted energy levels is extended later from the
single isolated atom to the electrons in solids, where the electrical properties are largely determined by the
permitted energy levels and which of them are possessed by electrons.
Another consequence of the rules for electron behaviour within the atom is one which concerns us
immediately, because these rules establish a set of patterns into which the electrons are arranged in more
elaborate atoms than those of hydrogen.
The hydrogen atom is very simple, with its single electron rotating in a circular orbit, which is of a fixed
radius for the normal state of the atom.
Helium is the next simplest atom, with two electrons and, of course, two protons in the nucleus to
balance the negative charge of the electrons. The nucleus also contains two neutrons, and the two electrons
circulate round this nucleus in the same orbit.
When we come to the next element, lithium, which has three orbital electrons, the electrons are
arranged in a new way round the nucleus. The nucleus now contains three protons and some neutrons.
Two of the electrons are in the same orbit, but the third one is in a different orbit, farther away from the
nucleus. The rules say that the innermost orbit, or shell, is completely filled when it has two electrons. Atoms
like lithium, with more than two electrons, must start a second orbit of greater radius to accommodate the
third, fourth, etc., electrons. When this second orbit has eight electrons in it, there is no room for more, and a
third orbit of still larger radius has to be started. Thus sodium, which has eleven orbital electrons, has two in
the innermost orbit, eight in the next orbit and one in the outermost orbit.
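The filling rule just described is mechanical enough to write down directly. A sketch (Python) applying the capacities given in the passage - two for the innermost shell, eight for the next - reproduces the arrangements quoted; it is valid only for the lighter elements:

    def shells(num_electrons, capacities=(2, 8, 8)):
        # Fill each shell to its capacity before starting the next.
        arrangement = []
        remaining = num_electrons
        for cap in capacities:
            if remaining <= 0:
                break
            filled = min(cap, remaining)
            arrangement.append(filled)
            remaining -= filled
        return arrangement

    for name, z in (("hydrogen", 1), ("helium", 2),
                    ("lithium", 3), ("sodium", 11)):
        print(f"{name}: {shells(z)}")

Sodium duly comes out as [2, 8, 1], matching the description above.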
In the illustrations of electron orbits for various atoms, the number with the positive sign indicates the
number of protons in the nucleus and thus is equal to the number of orbital electrons. This number is called the
atomic number of the element. There are neutrons in all the nuclei except hydrogen. The chemical properties of
an element are determined by the electrons, and in this respect it is not surprising that the outermost electrons
are most important because, when two atoms come together in a chemical reaction, it is the electrons in the
outside orbits which will first meet and interact. It is worth noting from the illustrations that hydrogen, lithium
and sodium, all with a single electron in their outside shell, have a general similarity of chemical behaviour.
Similarly, many of the atoms with two electrons-or with three, etc.- in the outer shell, can be grouped together
as being related chemically.

Atoms do not normally have a separate existence, and the simplest form in which we meet matter in the
natural world is the gas. This consists of single molecules of the substance moving about at random and largely
independent of each other except when they collide.
Typical gas molecules are oxygen (O2) and hydrogen (H2). In each of these molecules, two atoms of the
element are joined together to make the stable unit found in natural oxygen or hydrogen. The atoms in a
molecule are bound together by forces due to complex interactions between the electron systems of the
individual atoms. The exact nature of these forces does not concern us as we shall be more interested in solids
than in gases. However, it is important to notice that molecules, like atoms, emit or absorb energy in packets of
certain sizes and that the electrons in the molecular system also have only certain permitted energies.
Gas molecules may contain atoms of more than one element, e.g. carbon dioxide (CO2), and in the case
of some organic gases may contain many atoms of several different elements and be very large.
The solid is in many ways similar to a very large molecule. Most of the solids that are important in
electronics are crystalline, and the characteristic feature of a crystal is a regular arrangement of atoms. The
atoms may all be of the same element, as in copper, or they may be of different elements, as in common salt
(NaCl) or copper sulphate (CuSO4).
In the crystal, as in the molecule, the interaction of the orbital electrons binds the individual atoms into
the characteristic pattern or lattice.
There are certain permitted energy levels for electrons in the solid, just as there are in the atom and the
molecule, and these energy levels will be different in different materials.
One special aspect of electron properties in solids is that certain materials, particularly metals, contain
some electrons which are able to move away from their parent atoms. If a battery is connected to such a
material, electrons will flow through the material, and it is called a conductor of electricity. If the application of
a battery to a material does not cause a flow of electric current, then the material is called an insulator.
As they flow through a conductor, the electrons which make up the current being driven round the
circuit by the battery give up some of their energy to the main solid structure of the material, which thus
becomes hot. If electrons lose much energy in passing through a conducting material, then that material is said
to have a large resistance to current flow. A given current flowing through a high-resistance conductor will
generate much more heat than the same current flowing through a low-resistance conductor. Thus the heating
element of an electric fire will be made of high-resistance material, usually a metal alloy. The wires carrying the
current under the floor to the fire will be made of low-resistance material, invariably a metal. Materials which
offer low resistance to the passage of an electric current are called good electrical conductors.
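The contrast between the fire element and the supply cable follows from the heating law P = I² x R: the same current flows through both, so the heat appears almost entirely in the high resistance. A sketch (Python; the resistance values are illustrative assumptions, not figures from the passage):

    def heat_watts(current_amps, resistance_ohms):
        # Power dissipated as heat in a conductor: P = I^2 * R
        return current_amps ** 2 * resistance_ohms

    I = 4.0  # the same current flows through every part of the circuit
    print(f"fire element (60 ohm alloy): {heat_watts(I, 60.0):.0f} W")
    print(f"supply cable (0.05 ohm copper): {heat_watts(I, 0.05):.1f} W")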
This, then, is the way in which the chemical energy stored in a battery is converted into heat through
the action of a flow of electrons in a conductor. If a metal is heated to a sufficiently high temperature by the
current, as in the tungsten lamp filament, then the same mechanism can provide light - another form of energy.
Sometimes, however, it will be necessary to encourage electrons actually to escape from the surface of
the metal so that, for instance, they can be formed into a beam passing down a cathode-ray tube to paint a
picture on a screen.
The emission of electrons from a material occurs only if the electrons inside are in some way given
sufficient energy to break through the surface where a sort of energy barrier exists, called the work function of
the material. Materials with low work functions emit electrons easily because less energy is required for an
electron to overcome the energy barrier and escape.
If a solid is heated, then it receives extra energy. This may be shared between the regular crystal lattice
and some of the electrons which can escape from their parent atoms. If the electrons thereby acquire sufficient
energy to escape from the surface of the solid, then the process is called thermionic emission because it is
brought about by heat. Thermionic emission provides the electrons which move through the vacuum in a radio
valve, but not the electrons in a transistor, which always remain inside the solid material of which the transistor
is made.
Another way in which electrons may be given sufficient energy to escape is by shining light onto the
surface of the material. If electrons escape as a result of receiving energy from incident light, then the process is
called photoelectric emission. This is the basis of many light sensitive devices using photoelectric cells.
For a material with a given work function there is a certain critical wavelength for photoelectric
emission to occur. If the wavelength is too long, there will be no emission. In general, therefore, ultra-violet or
blue light, which has a short wavelength, causes photoelectric emission from more materials than the longer
wavelength red or infra-red radiation.
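The critical wavelength comes from equating the energy of one packet of light to the work function: lambda = h x c / W. A sketch (Python; the work-function figures are typical textbook values, not taken from the passage):

    H = 6.626e-34   # Planck's constant, J s
    C = 2.998e8     # speed of light, m/s
    EV = 1.602e-19  # joules per electron-volt

    def critical_wavelength_nm(work_function_ev):
        # Longest wavelength that can still eject an electron.
        return H * C / (work_function_ev * EV) * 1e9

    for metal, w in (("caesium", 2.1), ("zinc", 4.3), ("platinum", 5.6)):
        print(f"{metal}: {critical_wavelength_nm(w):.0f} nm")

Caesium responds to visible light (about 590 nm), while zinc and platinum need ultra-violet, in line with the remark above.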
Whether an electron stays inside a solid or escapes depends on the amount of energy the electron
possesses. But inside the solid the electrons have a number of permitted energy levels, and very often important
properties of the material depend on which of these energy levels are occupied. Thus a material may be an
insulator if its electrons are in low energy levels, but it can be made to conduct electricity if sufficient energy is
given to it to raise the electrons to higher permitted levels. Furthermore, if electrons in high energy levels give
up energy and fall to lower permitted levels, then the energy emitted may be useful in special ways - lasers, for
instance, depend upon such emission.
We have seen that electrons are fundamental to the structure of matter, and that the part played
by an electron in the structure is very much connected with the energy levels permitted to it, and with the
energy it actually possesses. In particular, we have seen that the electrons flowing as electric current through a
solid may give up energy to the crystal structure, so that heat and perhaps light may be given off by the
material.
The Quantum Theory of Radiation
Now, before the quantum theory was put forward, there was no notion of natural units of radiant
energy: it was believed that we could have any amount of energy, as small as we pleased, radiated by a hot
body or a luminous atom. It could, however, be shown mathematically that, if this were true, we should expect
a hot body to radiate nearly all its energy in the violet and ultraviolet end of the spectrum, which we know to
be against the facts of observation.
The problem was solved in the first year of the present century, when Planck showed that, to get the
right result, it was necessary to make a revolutionary hypothesis: to suppose that radiant energy was sent out in
packets, as it were - in units or atoms of energy, just as matter existed in atomic units. We cannot have less than
an atom of lead, say; any minute piece of lead must consist of a whole number of atoms. We cannot have an
electric charge of less than an electron. In the same way, we cannot have less than a unit - or quantum, as it is
called - of radiant energy, and any body that sends out or absorbs radiation must deal with one quantum or a
whole number of quanta.
The little parcel of light of one particular frequency in which radiant energy is delivered is sometimes
called a 'light dart', a very expressive term, but is more generally known as a photon. The photon is simply a
quantum of radiant energy, the only object of sometimes using the new term being that 'quantum' is a more
inclusive term, which can be applied to other things as well as light - for instance, to the vibration of whole
atoms and molecules.
The quantum of radiant energy differs from the quantum of electricity, the electron, in a very important
way. The amount of charge is the same on all electrons: there is but one unit. The magnitude of this unit of
radiant energy, however, is different for every different kind - that is, for every different wave-length - of
radiation. It is, in fact, proportional to the frequency, so that the quantum of energy of extreme visible red
radiation is only half that of the extreme visible violet radiation, which, as we have said before, has double the
frequency. The quantum of an X-radiation is very much greater than the quantum of any visible radiation.
The quantum of energy corresponding to a given species of radiation is found, then, by multiplying the
frequency by a certain fixed number, which is called Planck's universal constant, and always indicated by h.
Planck's constant enters into every aspect of modern atomic physics and its numerical value has been found by
at least ten different methods, involving such things as X-ray properties, the distribution of energy in black-
body radiation, the frequencies of spectral lines, and so on. All the methods give values agreeing to within a
few parts in ten thousand.
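In symbols, then, the quantum is E = h x f, with f the frequency (or h x c / lambda in terms of wave-length). A sketch (Python; taking 760 nm and 380 nm as the extreme visible red and violet, so that the violet frequency is exactly double):

    H = 6.626e-34  # Planck's constant, J s
    C = 2.998e8    # speed of light, m/s

    def quantum_energy(wavelength_m):
        # One quantum (photon): E = h * f = h * c / wavelength
        return H * C / wavelength_m

    red = quantum_energy(760e-9)     # extreme visible red (assumed)
    violet = quantum_energy(380e-9)  # extreme visible violet (assumed)
    xray = quantum_energy(0.1e-9)    # a typical X-ray wavelength (assumed)

    print(f"red {red:.2e} J, violet {violet:.2e} J, ratio {violet/red:.1f}")
    print(f"X-ray quantum / violet quantum = {xray/violet:.0f}")

The red quantum is exactly half the violet, and the X-ray quantum several thousand times larger, as stated.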
Light, then, or radiation in general, has a packet property as well as a wave property, and this is one of
the paradoxes of modern physics. Newton's conception of light was a stream of particles, which he endowed
with something in the nature of pulsating properties in an attempt to account for certain phenomena which we
can now easily explain on the wave theory. He felt the need for the double aspect, the particle and the periodic,
and provided for it in his theory.
If white light could be considered to consist of particles of various sizes, corresponding to all the
different colours which it contains, it would be fairly easy to imagine the amount of energy in each particle to
depend upon its size, and the quantum, or atomic, nature of the energy would be a natural consequence. We
could have one or two or three particles of sodium light, but not a fraction of a particle. However, to account for
the various phenomena of interference and diffraction, we have to admit that light behaves under some
conditions as a stream of particles, each one of which has a fixed energy belonging to it, and under other
conditions as a wave motion. Both wave aspects and particle aspects are included today in a comprehensive
theory known as wave mechanics.

Thus in geometrical optics we usefully treat light as travelling in straight lines through lenses, although
we know that it has wave properties which must be considered when optical resolution is in question. In the
simple kinetic theory of gases - say, for working out the action of high-vacuum pumps - we can safely treat
atoms as small elastic particles and neglect their structure. The essential is to know which aspect is the
dominating one in the problem concerned.
Take, first of all, the photo-electric effect. We have seen that, when light or any radiation of short wave-
length falls on a metal, electrons are shot off, the number depending on the strength of the light; but the speed, or
more correctly the energy, on the kind of light. If the radiation is done up in packets of energy, and this energy
can be expended in pushing out electrons, then clearly we should expect each atom of light either to throw out
an electron with a definite speed, or to do nothing; but not to throw out an electron with some greater speed.
According to the quantum theory, a short wave-length - that is, a high-frequency - radiation should therefore
throw out electrons with an energy of motion greater than that which characterizes electrons ejected by
radiation of long wave-length.
This is just what is observed; red light does not eject electrons at all from most metals, violet light drives
them out with a small speed, ultra-violet light produces greater speed, and X-rays throw out very fast electrons.
Red light, the quanta of which are very small, on account of its low frequency, has no effect, because a certain
minimum energy is required just to jerk an electron out of an atom: the energy of the red quanta is, for most
metals, less than this minimum. Allowing for the small energy required just to free the electron, the energy
which the electron acquires can be shown experimentally to be exactly proportional to the frequency of the
radiation. This is now so well established that, in cases where it is difficult to measure the frequency of very
short ultra-violet waves by ordinary means, the energy of the electrons which they eject from matter has been
determined, and taken as a measure of the wave-length. The same method has been applied to the exceedingly
high-frequency gamma rays of radium.
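The relation described here is Einstein's photoelectric equation, electron energy = h x f - W, where W is the minimum energy needed to free the electron; running it backwards gives the wave-length from the measured electron energy. A sketch (Python; the work function of 4 eV is an illustrative assumption):

    H = 6.626e-34   # Planck's constant, J s
    C = 2.998e8     # speed of light, m/s
    EV = 1.602e-19  # joules per electron-volt

    W = 4.0 * EV  # assumed work function of the metal surface

    def electron_energy(wavelength_m):
        # Einstein's photoelectric equation: E = h*c/wavelength - W
        return H * C / wavelength_m - W

    def wavelength_from_energy(electron_j):
        # Inverted: deduce the wave-length from the electron energy.
        return H * C / (electron_j + W)

    e = electron_energy(150e-9)  # short ultra-violet light (assumed)
    print(f"electron energy: {e / EV:.2f} eV")
    print(f"deduced wave-length: {wavelength_from_energy(e) * 1e9:.0f} nm")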
(From An Approach to Modern Physics by E. N. Da C. Andrade.)
Footprints of the Atom
The Cambridge physicist C. T. R. Wilson was studying the formation of fogs in 1898 when he started on
a train of ideas and discoveries which led ultimately to the perfection of the Wilson Cloud Chamber as a
marvellous aid to nuclear physics. This first fog-making apparatus was, however, very simple: just two glass
jars connected by a pipe with a tap in it. One jar contained moist (saturated) air and the other was pumped
empty of air. When the tap was opened the air expanded quickly into the empty jar. When gases expand very
quickly they cool. You may have noticed that the air rushing out of a bicycle tyre when it is suddenly let down
is quite cold. As a result of such cooling, clouds form in the moist air, since only a smaller amount of water
vapour can be held by the air at the lower temperature.
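The size of the cooling can be estimated from the adiabatic gas law, T x V^(gamma - 1) = constant. A rough sketch (Python; gamma = 1.4 for air, and the starting temperature and expansion ratios are illustrative assumptions):

    GAMMA = 1.4  # ratio of specific heats for air

    def temp_after_expansion(t_kelvin, expansion_ratio):
        # Rapid (adiabatic) expansion: T2 = T1 / ratio^(gamma - 1)
        return t_kelvin / expansion_ratio ** (GAMMA - 1)

    t1 = 293.0  # room temperature, K
    for ratio in (1.1, 1.3, 1.5):
        drop = t1 - temp_after_expansion(t1, ratio)
        print(f"expansion x {ratio}: temperature falls {drop:.1f} K")

Even a modest expansion produces a drop of many degrees, quite enough to supersaturate the moist air.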
Fogs cannot form unless the cooling is very marked. This is because small drops tend to evaporate again
more easily than the large drops. It is therefore very difficult for any drops to begin to form at all unless they
form immediately into large drops, as they would if the cooling were pronounced. The dust particles in
ordinary air act as a very convenient beginning for the drops, because they are already of sufficient size for the
drops formed on them to avoid re-evaporation. This explains why fogs and mists are much more common and
more persistent near large manufacturing towns, where there is a lot of smoke and dust, than in the clear
air of the countryside.
One of Wilson's cloud chambers, designed in 1912, is shown in figure 1. The moist air is contained in a
glass-topped cylinder B by a close-fitting piston. The air in flask A is pumped out by a vacuum pump, while the
stopper C is kept closed. When you want to operate the apparatus this stopper is pulled out so that the air
beneath the piston rushes out into A. As there is nothing to hold up the piston it falls, and thus allows the damp
air to expand and cool, and form a cloud.

Wilson soon discovered that even if he used very clean air he still occasionally got clouds with only a
moderate amount of cooling when there was some radio-active substance or a source of the then recently
discovered X-rays near his cloud chamber. He soon showed that this was because the air in his chamber was
being 'ionized' by the radio-active substance or by the X-rays.
Let us see what happens when air is ionized. The atoms of the various gases that make up air are all
built of a heavy positive nucleus surrounded by very light negative electrons. The amount of positive charge on
the nucleus is exactly balanced by the number of negative electrons surrounding it, so that the atom as a whole
is electrically neutral. Although it takes quite a hard knock by a particle from one of the powerful 'atom-
smashing' machines to break off a bit of the nucleus, the negative electrons are held to the atom only by the
electrostatic attraction of opposite charges between negative electron and positive nucleus. It is therefore much
easier to remove an electron from the atom, either by strong electro-magnetic radiation (X-rays) or by collision
with another atomic particle like those shot by radio-active substances.
Before impact the atom was neutral, but after it has lost a negative electron it becomes a positively
charged 'ion'. The electron it has lost is a negative ion; it is free to move until it joins another positive ion to
form a neutral atom. When an atom is split in this way into two ions, positive and negative, we say it is ionized.
If ions of this type are present in moist air their effect is twofold. First, they attract many more normal
atoms to themselves, with the result that a cluster of atoms is formed. Secondly, the charge on the ion reduces
the tendency of small drops to evaporate. All this means that the charged ions in moist cooled air act as suitable
nuclei, like dust and smoke, on which fog and clouds can form.
To make this clear we will follow up what happens when an alpha-particle passes through a cloud
chamber immediately after expansion has taken place, and when the moist air has been cooled so that it is
ready to form clouds. An alpha-particle is a fragment shot out of the nucleus of a radio-active atom. It is about
four times as heavy as a hydrogen atom. It has a double positive charge, and it will travel a few inches in
normal air before it is brought to a stop by repeated collisions with the atoms in the air. While it is travelling
very fast through the air of a cloud chamber, its positive charge attracts many of the outer negative electrons of
the atoms in the air, which may be drawn out of their original atoms, leaving them ionized.
The passage of one alpha-particle thus leaves a trail of ionized atoms behind it all along its track, each of
which is capable of acting as a centre about which a tiny drop of water can form. If the alpha-particle is shot
into the moist air of a cloud chamber just after a cooling expansion has taken place, so that the air is
supersaturated, a trail of little water drops appears in its wake, looking just like the cloud trails left by a high-
flying aircraft.
The first time that an atomic nucleus was split artificially was in 1919 when Lord Rutherford turned a
nitrogen atom into an oxygen atom by bombardment with alpha-particles. This nucleus-splitting reaction has
been studied by Professor Blackett with the aid of a cloud chamber. He obtained an actual photograph of the
famous event. In this photograph a beam of alpha-particles appears as white trails crossing the chamber. One of
them stops half-way as it hits a nitrogen atom in the air and the lighter trail of a proton (a positively charged
hydrogen atom), knocked out of the nitrogen nucleus, moves off to the left, while the resulting oxygen nucleus
gives a thick short trail to the right. We have come as close as we can to actually seeing the invisible atom.
(From an article by R. R. Campbell in Adventures in Science edited by B. C. Brookes.)
Splitting the Atom
'The experiments started about four in the afternoon,' recalled a scientist whom Rutherford had invited
one day in 1919 to see what he was doing. 'We went into his laboratory to spend a preliminary half hour in the
dark to get our eyes into the sensitive state necessary for counting. Sitting there, drinking tea, in the dim light of
a minute gas jet at the further end of the laboratory, we listened to Rutherford talking of all things under the
sun. It was curiously intimate, yet impersonal, and all of it coloured by that characteristic of his of considering
statements independently of the person who put them forward.'
Then Rutherford, in his unassuming white coat, made a last minute inspection tour round his
laboratory, a high and wide room with a cement floor. There was, in one corner, the enormous column of the
condenser, which went right up through the ceiling; and at the other end of the room a large tube, enthroned on
top of a work-bench in the midst of a mass of entangled electric wires. There was an arc-lamp projector behind
the tube, and a screen had been set up in front of it.
'You know, we might go up through the roof,' warned Rutherford, but the boyish smile under the big
greying moustache belied his words. The blinds were now pulled down over the big leaded windows, and
bluish-green sparks were seen to jump to and fro in the tube. The screen lit up. At first there was nothing but a
thick grey mist. Then some large objects, like the shadows of enormous fish, flowed across the screen in a
steady stream.
The Professor explained. Alpha particles - helium nuclei - were being hurled through the tube, in which
an artificial mist had been created. It was an adaptation of Wilson's cloud chamber, filled with nitrogen gas.
Suddenly a thick streak appeared on the screen, striking off at right angles at terrific speed. 'That's it,' said
Rutherford. 'The atom has been split!'
The performance was repeated - once, twice, a third time at irregular intervals. Millions of alpha
particles went straight through the nitrogen gas without touching any of its atoms. But now and then there
came a direct hit on a nitrogen nucleus, which split it. 'Where are we going from here?' mused one of
Rutherford's guests. 'Who knows?' he replied, 'We are entering no-man's land.'
What interested Rutherford in these experiments was the transmutation of one element into another-
which furnished the proof that his theory of what the atom looked like was correct. When an alpha particle hit a
nitrogen nucleus it drove out some of its seven protons. And each of these loose protons became the nucleus of
a hydrogen atom, which has only one proton with an electron revolving around it. Thus nitrogen changed into
hydrogen!
But Rutherford proved yet another theory, which was closely connected with Einstein's hotly disputed
claim-that there is no real difference between mass and energy, and that the destruction of matter would free its
latent energy. Already in 1905, Albert Einstein, then a young man of 26, had startled the scientific world with
his Special Theory of Relativity, in which he gave the phenomenon of radio-activity an important place within the
framework of his new picture of the universe. He explained that, if matter is converted into energy by the
disintegration of atoms, that process would be represented by a simple little equation: E = mc².
What does it mean? Basically it says that mass and energy are not different things which have no
relation to one another, but that one can be changed into the other. Einstein's equation connects the two
quantities. E is the energy in ergs released when a mass of m grams is completely disintegrated. And c is the
velocity of light in centimetres per second, and therefore c² is 900 million million million; a single gram of matter completely disintegrated would thus release 900 million million million ergs.
This sounded completely fantastic. Even if matter could ever be converted into energy, surely the
energy released in this process would not be of such unimaginable magnitude! There was, of course, no way of
proving or disproving Einstein's equation - until Rutherford showed how to split the atom. In 1905 no one
really believed that Einstein's equation would ever be put to the test; that man could ever release the incredible
forces locked up in the atoms of matter. Today we know that, if one ounce of matter could be completely
destroyed and changed into energy, it would yield as much power as we derive from burning 100,000 tons of
coal in a conventional power station!
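That comparison is easy to reproduce (Python; the heat value assumed for coal, about 29 GJ per ton, is a typical figure, not one given in the passage):

    C = 2.998e8            # speed of light, m/s
    OUNCE_KG = 0.02835     # one ounce in kilograms
    COAL_J_PER_TON = 29e9  # assumed heat of combustion of coal

    energy_joules = OUNCE_KG * C ** 2  # E = m * c^2
    print(f"one ounce of matter: {energy_joules:.2e} J")
    print(f"equivalent coal: {energy_joules / COAL_J_PER_TON:,.0f} tons")

The result, some ninety thousand tons, agrees in order of magnitude with the figure quoted.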
'Atom-splitting' became almost a fashion in the physical laboratories of Europe and America, and in the
1920s most of the lighter nuclei were being split up by bombarding them with alpha particles. Only beryllium -
the fourth lightest of the elements - resisted all attempts to break up its nucleus. Instead of releasing one of its
four protons when hit, it gave off a burst of radiation more penetrating than even the hard gamma rays. Sir
James Chadwick, again at the Cavendish Laboratory in Cambridge, proved that this radiation must consist of
particles about as heavy as protons, but without an electric charge. 'If such a neutral particle exists,' Rutherford
had said in 1920, 'it should be able to move freely through matter, and it may be impossible to contain it in a
sealed vessel.' He knew that its discovery would be of great importance-an electrically neutral particle could be
fired into any matter without being attracted or repelled by protons or electrons.
In 1932 the Joliot-Curies made a radio-active metal bombard beryllium - which is a non-radio-active
metal - with its rays. The result was that the beryllium, too, became radio-active - even more so than the
original source of the rays. Sir James Chadwick's explanation was that the beryllium nuclei had released their
non-electrical particles, which he called neutrons. They were found to have slightly greater mass than the
protons - but neutrons can change to protons by acquiring a positive electric charge.
The discovery of the neutron not only solved quite a number of problems which had so far defied the
efforts of the scientists, but it also gave an even greater impetus to atom-splitting. The Americans, true to style,
went into the business in a big way. The University of California built an enormous machine, the cyclotron, for
the head of its radiation laboratory, Professor E. O. Lawrence, who had just come to the conclusion that the
heavy hydrogen atom, consisting of one proton plus one neutron, would make an ideal bullet for shooting up
other nuclei.
(From Atomic Energy - A Layman's Guide to the Nuclear Age by E. Larsen.)
The Development of Electricity
The phenomenon which Thales had observed and recorded five centuries before the birth of Christ
aroused the interest of many scientists through the ages. They made various practical experiments in their
efforts to identify the elusive force which Thales had likened to a 'soul' and which we now know to have been
static electricity.
Of all forms of energy, electricity is the most baffling and difficult to describe. An electric current cannot
be seen. In fact it does not exist outside the wires and other conductors which carry it. A live wire carrying a
current looks exactly the same and weighs exactly the same as it does when it is not carrying a current. An
electric current is simply a movement or flow of electrons.
Benjamin Franklin, the American statesman and scientist born in Boston in 1706, investigated the nature
of thunder and lightning by flying a child's kite during a thunderstorm. He had attached a metal spike to the
kite, and at the other end of the string to which the kite was tied he secured a key. As the rain soaked into the
string, electricity flowed freely down the string and Franklin was able to draw large sparks from the key. Of
course this could have been very dangerous, but he had foreseen it and had supported the string through an
insulator. He observed that this electricity had the same properties as the static electricity produced by friction.
But long before Franklin many other scientists had carried out research into the nature of electricity.
In England William Gilbert (1544-1603) had noticed that the powers of attraction and repulsion of two
non-metallic rods which he had rubbed briskly were similar to those of rubbed amber - they had
acquired the curious quality we now know as static electricity. Remembering Thales of old, he coined the word 'electricity'.
Otto von Guericke (1602-1686), a Mayor of Magdeburg in Germany, was an amateur scientist who had
constructed all manner of gadgets. One of them was a machine consisting of two glass discs revolving in
opposite directions which produced high voltage charges through friction. Ramsden and Wimshurst built
improved versions of the machine.
A significant breakthrough occurred when Alessandro Volta (1745-1827) in Italy constructed a simple
electric cell (in 1799) which produced a flow of electrons by chemical means. Two plates, one of copper and the
other of zinc, were placed in an acid solution and a current flowed through an external wire connecting the two
plates. Later he connected cells in series (voltaic pile) which consisted of alternate layers of zinc and copper
discs separated by flannel discs soaked in brine or acid which produced a higher electric pressure (voltage). But
Volta never found the right explanation of why his cell was working. He thought the flow of electric current
was due to the contact between the two metals, whereas in fact it results from the chemical action of the
electrolyte on the zinc plate. However, his discovery proved to be of incalculable value in research, as it enabled
scientists to carry out experiments which led to the discoveries of the heating, lighting, chemical and magnetic
effects of electricity.
One of the many scientists and physicists who took advantage of the 'current electricity' made possible
by Volta's cells was Hans Christian Oersted (1777-1851) of Denmark. Like many others he was looking for a
connection between the age-old study of magnetism and electricity, but now he was able to pass electric
currents through wires and place magnets in various positions near the wires. His epoch-making discovery
which established for the first time the relationship between magnetism and electricity was in fact an accident.
While lecturing to students he showed them that the current flowing in a wire held over a magnetic
compass needle and at right angles to it (that is east-west) had no effect on the needle. Oersted suggested to his
assistant that he might try holding the wire parallel to the length of the needle (north- south) and hey presto,
the needle was deflected! He had stumbled upon the electromagnetic effect in the first recorded instance of a
wire behaving like a magnet when a current is passed through it.
A development of Oersted's demonstration with the compass needle was used to construct the world's
first system of signaling by the use of electricity.
In 1837 Charles Wheatstone and William Cooke took out a patent for the world's first Five-needle
Telegraph, which was installed between Paddington railway station in west London and West Drayton station
a few miles away. The five copper wires required for this system were embedded in blocks of wood.
Electrolysis, the chemical decomposition of a substance into its constituent elements by the action of an
electric current, was discovered by the English chemists Anthony Carlisle and William Nicholson (1753-1815). If an
electric current is passed through water it is broken down into the two elements of which it is composed --
hydrogen and oxygen. The process is used extensively in modern industry for electroplating. Michael Faraday
(1791-1867) who was employed as a chemist at the Royal Institution, was responsible for introducing many of
the technical terms connected with electrolysis, like electrolyte for the liquid through which the electric current
is passed, and anode and cathode for the positive and negative electrodes respectively. He also established the
laws of the process itself. But most people remember his name in connection with his practical demonstration of
electromagnetic induction.
In France André-Marie Ampère (1775-1836) carried out a complete mathematical study of the laws
which govern the interaction between wires carrying electric currents.
In Germany in 1826 a Bavarian schoolmaster, Georg Ohm (1789-1854), had defined the relationship
between electric pressure (voltage), current (flow rate) and resistance in a circuit (Ohm's law), but 16 years had
to elapse before he received recognition for his work.
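Ohm's law states simply that current = voltage / resistance. A minimal sketch (Python; the figures are illustrative):

    def current_amps(voltage_volts, resistance_ohms):
        # Ohm's law: I = V / R
        return voltage_volts / resistance_ohms

    print(current_amps(12.0, 6.0))  # 2.0 A
    print(current_amps(12.0, 3.0))  # halve the resistance: 4.0 A
    print(current_amps(24.0, 6.0))  # double the pressure: 4.0 A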
Scientists were now convinced that since the flow of an electric current in a wire or a coil of wire caused
it to acquire magnetic properties, the opposite might also prove to be true: a magnet could possibly be used to
generate a flow of electricity.
Michael Faraday had worked on this problem for ten years when finally, in 1831, he gave his famous
lecture in which he demonstrated, for the first time in history, the principle of electromagnetic induction. He
had constructed powerful electromagnets consisting of coils of wire. When he caused the magnetic lines of
force surrounding one coil to rise and fall by interrupting or varying the flow of current, a similar current was
induced in a neighbouring coil closely coupled to the first.
The colossal importance of Faraday's discovery was that it paved the way for the generation of
electricity by mechanical means. However, as can be seen from the drawing, the basic generator produces an
alternating flow of current (A.C.).
Rotating a coil of wire steadily through a complete revolution in the steady magnetic field between the
north and south poles of a magnet results in an electromotive force (E.M.F.) at its terminals which rises in value,
falls back to zero, reverses in a negative direction, reaches a peak and again returns to zero. This completes one
cycle or sine wave (1 Hz in S.I. units).
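The cycle just described is the sine curve e(t) = E_peak x sin(2 x pi x f x t). A sketch (Python; the peak value and the 50 Hz frequency are illustrative assumptions):

    import math

    def emf(t_seconds, peak_volts=325.0, freq_hz=50.0):
        # One steady revolution of the coil traces one sine wave.
        return peak_volts * math.sin(2 * math.pi * freq_hz * t_seconds)

    # one 20 ms cycle sampled at quarter-revolution intervals
    for ms in (0, 5, 10, 15, 20):
        print(f"t = {ms:2d} ms  ->  e = {emf(ms / 1000):+7.1f} V")

The printed values rise to a peak, fall to zero, reverse, peak negatively and return to zero - exactly the cycle described above.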
In recent years other methods have been developed for generating electrical power in relatively small
quantities for special applications. Semiconductors, which combine heat insulation with good electrical
conduction, are used for thermoelectric generators to power isolated weather stations, artificial satellites,
undersea cables and marker buoys. Specially developed diode valves are used as thermionic generators with an
efficiency, at present, of only 20%, but the heat taken away from the anode is used to raise steam for
conventional power generation.
Sir Humphry Davy (1778-1829), one of Britain's leading chemists of the early 19th century, is best remembered
for his safety lamp for miners which cut down the risk of methane gas explosions in mines. It was Davy who
first demonstrated that electricity could be used to produce light. He connected two carbon rods to a heavy
duty storage battery. When he touched the tips of the rods together a very bright white light was produced. As
he drew the rods apart, the arc light persisted until the tips had burnt away to the critical gap which
extinguished the light. As a researcher and lecturer at the Royal Institution Davy worked closely with Michael
Faraday who first joined the institution as his manservant and later became his secretary. Davy's crowning
honour in the scientific world came in 1820, when he was elected President of the Royal Society.
In the U.S.A. the prolific inventor Thomas Alva Edison (1847-1931), who had invented the incandescent
carbon filament bulb, built a number of electricity generators in the vicinity of the Niagara Falls. These used the
power of the falling water to drive hydraulic turbines which were coupled to the dynamos. These generators
were fitted with a spinning switch or commutator (one of the neatest gadgets Edison ever invented) to make the
current flow in unidirectional pulses (D.C.). In 1876 all electrical equipment was powered by direct current.
Today mains electricity plays a vital part in our everyday lives and its applications are widespread and
staggering in their immensity. But we must not forget that popular demand for this convenient form of power
arose only about 100 years ago, mainly for illumination.
Recent experiments in superconductivity, using ceramic instead of metal conductors, have given us an
exciting glimpse into what might be achieved for improving efficiency in the distribution of electric power.
Historians of the future may well characterise the 20th century as 'the century of electricity &
electronics'. But Edison's D.C. generators could not, in themselves, have achieved the spectacular progress that
has been made. All over the world we depend totally on a system of transmitting mains electricity over long
distances which was originally created by an amazing inventor whose scientific discoveries changed, and are
still changing, the whole world. His name was scarcely known to the general public, especially in Europe,
where he was born.
Who was this unknown pioneer? Some people reckon that it was this astonishing visionary who
invented wireless, remote control, robotics and a form of X-ray photography using high frequency radio waves.
A patent which he took out in the U.S.A. in 1890 ultimately led to the design of the humble ignition coil which
energises billions and billions of spark plugs in all the motor cars of the world. His American patents fill a book
two inches thick. His name was Nikola Tesla (1856-1943).
Nikola Tesla was born in a small village in Croatia which at that time formed part of the great Austro-
Hungarian Empire. Today it is a northern province of Yugoslavia, a state created after the 1914-1918 war. Tesla
studied at the Graz Technical University and later in Budapest. Early in his studies he had the idea that a way
had to be found to run electric motors directly from A.C. generators. His professor in Graz had assured him
categorically that this was not possible. But young Tesla was not convinced. When he went to Budapest he got a
job in the Central Telegraph Office, and one evening in 1882, as he was sitting on a bench in the City Park he
had an inspiration which ultimately led to the solution of the problem.
Tesla remembered a poem by the German poet Goethe about the sun which supports life on the earth
and when the day is over moves on to give life to the other side of the globe. He picked up a twig and began to
scratch a drawing on the soil in front of him. He drew four coils arranged symmetrically round the
circumference of a circle. In the centre he drew a rotor or armature. As each coil in turn was energised it
attracted the rotor towards it and the rotary motion was established. When he constructed the first practical
models he used eight, sixteen and even more coils. The simple drawing on the ground led to the design of the
first induction motor driven directly by A.C. electricity.
Tesla emigrated to the U.S.A. in 1884. During the first year he filed no fewer than 30 patents, mostly in
relation to the generation and distribution of A.C. mains electricity. He designed and built his 'A.C. Polyphase
System' which generated three-phase alternating current at 25 Hz. One particular unit delivered 422 amperes at
12,000 volts. The beauty of this system was that the voltage could be stepped down using transformers for local
use, or stepped up to many thousands of volts for transmission over long distances through relatively thin
conductors. Edison's generating stations were incapable of any such thing.
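The arithmetic behind this is worth a moment: the power carried is volts times amperes, while the heat wasted in the line is I² x R, so raising the voltage tenfold cuts the current, and the loss, a hundredfold. A sketch (Python; the line resistance of 2 ohms is an illustrative assumption):

    LINE_RESISTANCE = 2.0  # ohms, assumed for the transmission line

    def line_loss_watts(power_watts, volts):
        current = power_watts / volts  # I = P / V
        return current ** 2 * LINE_RESISTANCE  # heat wasted: I^2 * R

    P = 422 * 12_000  # the unit quoted above: 422 A at 12,000 V
    for v in (12_000, 120_000):
        loss = line_loss_watts(P, v)
        print(f"{v} V: {loss / 1e3:.1f} kW lost ({100 * loss / P:.2f}%)")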
Tesla signed a lucrative contract with the famous railway engineer George Westinghouse, the inventor
of the Westinghouse Air Brake which is used by most railways all over the world to the present day. Their
generating station was put into service in 1895 and was called the Niagara Falls Electricity Generating
Company. It supplied power for the Westinghouse network of trains and also for an industrial complex in
Buffalo, New York.
After ten years Tesla began to experiment with high frequencies. The Tesla Coil which he had patented
in 1890 was capable of raising voltages to unheard of levels such as 300,000 volts. Edison, who was still
generating D.C., claimed A.C. was dangerous and to prove it contracted with the government to produce the
first electric chair using A.C. for the execution of murderers condemned to death. When it was first used it was
a ghastly flop. The condemned man moaned and groaned and foamed at the mouth. After four minutes of
repeated application of the A.C. voltage, smoke began to come out of his back. It was obvious that the victim had
suffered a horribly drawn-out death.
Tesla said he could prove that A.C. was not dangerous. He gave a demonstration of high voltage
electricity flowing harmlessly over his body. But in reality, he cheated, because he had used a frequency of
10,000 cycles (10 kHz) at extremely low current and because of the skin effect suffered no harm.
One of Tesla's patents related to a system of lighting using glass tubes filled with fluorine (not neon)
excited by H.F. voltages. His workshop was lit by this method. Several years before Wilhelm Roentgen
demonstrated his system of X-rays Tesla had been taking photographs of the bones in his hand and his foot
from up to 40 feet away using H.F. currents.
More astonishing still is the fact that in 1893, two years before Marconi demonstrated his system of
wireless signaling, Tesla had built a model boat in which he combined power to drive it with radio control and
robotics. He put the small boat in a lake in Madison Square Garden in New York. Standing on the shore with a
control box, he invited onlookers to suggest movements. He was able to make the boat go forwards and
backwards and round in circles. We all know how model cars and aircraft are controlled by radio today, but
when Tesla did it a century ago the motor car had not been invented, and the only method by which man could
cover long distances was on horseback!
Many people believe that a modification of Tesla's 'Magnifying Transmitter' was used by the Soviet
Union when suddenly one day in October 1976 they produced an amazing noise which blotted out all radio
transmissions between 6 and 20 MHz (the 'Woodpecker'). The B.B.C., the N.B.C. and most broadcasting and
telecommunication organisations of the world complained to Moscow (the noise had persisted continuously for
10 hours on the first day), but all the Russians would say in reply was that they were carrying out an
experiment. At first nobody seemed to know what they were doing because it was obviously not intended as
another form of jamming of foreign broadcasts, an old Russian custom as we all know.
It is believed that in the pursuit of his life's ambition to send power through the earth without the use of
wires, Tesla had achieved a small measure of success at E.L.F. (extremely low frequencies) of the order of 7 to
12 Hz. These frequencies are at present used by the military for communicating with submarines submerged in
the oceans of the world.
Tesla's career and private life have remained something of a mystery. He lived alone and shunned
public life. He never read any of his papers before academic institutions, though he was friendly with some
journalists who wrote sensational stories about him. They said he was terrified of microbes and that when he
ate out at a restaurant he would ask for a number of clean napkins to wipe the cutlery and the glasses he drank
out of. For the last 20 years of his life until he died during World War II in 1943 he lived the life of a semi-
recluse, with a pigeon as his only companion. A disastrous fire had destroyed his workshops and many of his
experimental models and all his papers were lost for ever.
Tesla had moved to Colorado Springs, where he built his largest-ever coil, which was 52 feet in diameter.
He studied all the different forms of lightning in his unsuccessful quest for the transmission of power without
wires.
In Yugoslavia, Tesla is a national hero and a well-equipped museum in Belgrade contains abundant
proof of the genius of this extraordinary man.
(From: The dawn of amateur radio in the U.K. and Greece: a personal view by Norman F. Joly.)
The Discovery of X-rays
Except for a brief description of the Compton effect, and a few other remarks, we have postponed the
discussion of X-rays until the present chapter because it is particularly convenient to treat X-ray spectra after
treating optical spectra. Although this ordering may have given the reader a distorted impression of the
historical importance of X-rays, this impression will be corrected shortly as we describe the crucial role played
by X-rays in the development of modern physics.
X-rays were discovered in 1895 by Roentgen while studying the phenomena of gaseous discharge.
Using a cathode ray tube with a high voltage of several tens of kilovolts, he noticed that salts of barium would
fluoresce when brought near the tube, although nothing visible was emitted by the tube. This effect persisted
when the tube was wrapped with a layer of black cardboard. Roentgen soon established that the agency
responsible for the fluorescence originated at the point at which the stream of energetic electrons struck the
glass wall of the tube. Because of its unknown nature, he gave this agency the name X-rays. He found that X-
rays could manifest themselves by darkening wrapped photographic plates, discharging charged electroscopes,
as well as by causing fluorescence in a number of different substances. He also found that X-rays can penetrate
considerable thicknesses of materials of low atomic number, whereas substances of high atomic number are
relatively opaque. Roentgen took the first steps in identifying the nature of X-rays by using a system of slits to
show that (1) they travel in straight lines, and that (2) they are uncharged, because they are not deflected by electric
or magnetic fields.
The discovery of X-rays aroused the interest of all physicists, and many joined in the investigation of
their properties. In 1899 Haga and Wind performed a single slit diffraction experiment with X-rays which
showed that (3) X-rays are a wave motion phenomenon, and, from the size of the diffraction pattern, their
wavelength could be estimated to be 10⁻⁸ cm. In 1906 Barkla proved that (4) the waves are transverse by showing
that they can be polarized by scattering from many materials.
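To put that estimate in context, the short calculation below (added here; it is not part of the passage) converts the wavelength into a photon energy using E = hc/lambda. The result, roughly 12 keV, sits naturally among the tens-of-kilovolt tube voltages mentioned earlier.

h = 6.626e-34     # Planck's constant, in joule-seconds
c = 2.998e8       # speed of light, in metres per second
eV = 1.602e-19    # joules per electron-volt

wavelength = 1e-8 * 1e-2            # 10^-8 cm converted to metres (0.1 nm)
energy = h * c / wavelength         # photon energy in joules
print(f"photon energy = {energy / eV / 1000:.1f} keV")   # about 12.4 keV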
There is, of course, no longer anything unknown about the nature of X-rays. They are electromagnetic
radiation of exactly the same nature as visible light, except that their wavelength is several orders of magnitude
shorter. This conclusion follows from comparing properties 1 through 4 with the similar properties of visible
light, but it was actually postulated by Thomson several years before all these properties were known.
Thomson argued that X-rays are electromagnetic radiation because such radiation would be expected to be
emitted from the point at which the electrons strike the wall of a cathode ray tube. At this point, the electrons
suffer very violent accelerations in coming to a stop and, according to classical electromagnetic theory, all
accelerated charged particles emit electromagnetic radiations. We shall see later that this explanation of the
production of X-rays is at least partially correct.
In common with other electromagnetic radiations, X-rays exhibit particle-like aspects as well as wave-
like aspects. The reader will recall that the Compton effect, which is one of the most convincing demonstrations
of the existence of quanta, was originally observed with electromagnetic radiation in the X-ray region of
wavelengths.
POLITICS
CROWDS
On October 30, 1938, thousands of Americans in the New York area were terror-stricken by a radio
broadcast describing an invasion from Mars. The presentation was merely a dramatisation of H. G. Wells'
fantastic novel The War of the Worlds, but it was presented with such stark realism - including reports from
fictitious astronomers, governmental officials, and 'eye-witnesses' - that many listeners fled their homes and
their communities. In London on January 15, 1955, a thick belt of darkness, caused by an accumulation of
smoke under an extremely thick layer of cloud, suddenly wrapped itself around the city in the early afternoon.
It lasted only ten minutes; but during this time, women screamed in the streets; others fell to their knees on the
sidewalks and prayed. Some cried out hysterically that they had gone blind. A man at Croydon groped through
the inky blackness shouting, 'The end of the world has come'. These two episodes are at once identified as cases
of panic, an extreme type of crowd behaviour, which in turn is a variety of collective behaviour. In the interests
of coherence of treatment and because of limitation of space, this chapter will treat only two related aspects of
collective behaviour: crowds and publics.
Ordinarily when we think of the crowd, we picture a group of individuals massed in one place; but as
the opening illustration of the chapter indicates, physical proximity is not essential to crowd behaviour,
especially in a society like ours with instruments of mass communication like the newspaper and radio. What is
crucial to the understanding of the crowd is the highly emotional responses of individuals when they are
released from the restraints that usually inhibit extreme behaviour. What releases the customary restraints and
leads to crowd behaviour?
Crowd emotionality is perhaps best interpreted in terms of heightened suggestibility, that is, the
tendency of an individual in a crowd to respond uncritically to the stimuli provided by the other members. The
individual learns to make almost automatic responses to the wishes of others, particularly those in authority
and those he greatly respects. From infancy on, he is so dependent upon the judgment of others for direction in
his own affairs that he comes to lean heavily on the opinions of others. Moreover, he learns to value highly the
esteem in which other persons hold him, and consequently he courts their favour by conforming to their ways
and wishes. For these reasons, among others, when he finds himself in a congenial crowd of persons, all of
whom are excited, it is natural that he, too, should be affected.
The effect of suggestion is to produce a partial dislocation of consciousness. When we are critical about
a matter, we give it close attention, and our whole interest is centred upon it. But when a suggestion is made by
someone whom we esteem, our attention is divided, partly on the issue at hand, partly on the person who
made the suggestion. The more awesome the source of the suggestion, the greater the degree of dissociation
and the greater the amount of automatic behaviour.
If the crowd has a leader who is admired, the effect of the suggestion is still further heightened. The
situation is illustrated by hypnotism, where the effectiveness of the suggestion depends on the attitude of the
subject towards the hypnotist. No one can be hypnotized against his will; and the best results are obtained
where close co-operation exists between subject and experimenter. The effect of suggestion should help us to
understand the frenzy of a camp meeting led by an evangelist like the late Billy Sunday, or the hysteria of a
Nazi mass meeting led by Hitler.
The group factor also influences crowd behaviour through the security which an individual feels when
he is part of the mass. Individuals are less reluctant to join a lynching party than to commit murder on their
own. The explanation would seem to lie partly in the fact that the action seems more defensible when carried
out by the group and partly in the fact that individual responsibility is blotted out. The participants remain
anonymous and there is no one upon whom the authorities can pin the offence. This condition, in which the
group does not identify its members as individuals and which therefore has been called 'de-individuation',
leads to reduction of inner restraint and to more expressive behaviour.
(From A handbook of sociology by William F. Ogburn and Meyer J. Nimkoff)
DIPLOMACY
It is thus essential, at the outset of this study, to define what the word 'diplomacy' really means and in
what sense, or senses, it will be used in the pages that follow.
In current language this word 'diplomacy' is carelessly taken to denote several quite different things. At
one moment it is employed as a synonym for 'foreign policy', as when we say 'British diplomacy in the Near
East has been lacking in vigour'. At another moment it signifies 'negotiation', as when we say 'the problem is
one which might well be solved by diplomacy'. More specifically, the word denotes the processes and
machinery by which such negotiation is carried out. A fourth meaning is that of a branch of the Foreign Service,
as when one says 'my nephew is working for diplomacy'. And a fifth interpretation which this unfortunate
word is made to carry is that of an abstract quality or gift, which, in its best sense, implies skill in the conduct of
international negotiation; and, in its worst sense, implies the more guileful aspects of tact.
These five interpretations are, in English-speaking countries, used indiscriminately, with the result that
there are few branches of politics which have been exposed to such confusion of thought. If, for instance, the
word 'army' were used to mean the exercise of power, the art of strategy, the science of tactics, the profession of
a soldier and the combative instincts of man - we should expect public discussion on military matters to lead to
much misunderstanding.
The purpose of this monograph is to describe, in simple but precise terms, what diplomacy is and what
it is not. In the first two chapters a short sketch will be given of the origins and evolution of diplomatic practice
and theory. The purpose of this historical review will be to show that diplomacy is neither the invention nor the
pastime of some particular political system, but is an essential element in any reasonable relation
between man and man and between nation and nation. An examination will follow of recent
modifications in diplomatic methods, with special reference to the problems of 'open' and 'secret' diplomacy
and to the difficulty of combining efficient diplomacy with democratic control. Other sections will deal with the
actual functioning of modern diplomacy, with the relation between diplomacy and commerce, with the
organization and administration of the Foreign Service, with diplomacy by Conference and with the League of
Nations as an instrument of negotiation. At the end a reasoned catalogue will be given of current diplomatic
phrases such as may assist the student in understanding the technical language (it is something more than mere
jargon) which diplomacy has evolved.
Yet before embarking upon so wide a field of examination, it is, as has been said, necessary to define in
what sense, or senses, the word 'diplomacy' will be used in this study. I propose to employ the definition given
by the Oxford English Dictionary. It is as follows:
'Diplomacy is the management of international relations by negotiation; the method by which these
relations are adjusted and managed by ambassadors and envoys; the business or art of the diplomatist'.
By taking this precise, although wide, definition as my terms of reference I hope to avoid straying, on
the one hand into the sands of foreign policy, and on the other into the marshes of international law. I shall
discuss the several policies or systems of the different nations only in so far as they affect the methods by
which, and the standards according to which, such policies are carried out. I shall mention international law
only in so far as it advances diplomatic theory or affects the privileges, immunities, and actions of diplomatic
envoys. And I shall thus hope to be able to concentrate upon the 'executive' rather than upon the 'legislative'
aspects of the problem.
(From Diplomacy by Harold Nicolson)
WHAT FUTURE FOR AFRICA?
Even if the most ambitious programme of studies of the lives and contributions of individual African
rulers, statesmen, religious leaders, scholars, could be carried out, we should still be a long way from achieving
a reasonable understanding of African history. The principle that the history of a people cannot be adequately
understood as the history of its dynasties, kings, paramount chiefs, nobility, and top people generally, is as
valid for Africa as it is for Europe. Until we know a great deal more about the conditions of the people - vassal
tribes, slaves, with their differing degrees of servitude, free commoners, the rank-and-file of the army - in the
various African states at successive periods, we shall remain very much in the dark about the real character of
these systems and of the causes of the changes which they have undergone. But until very recently such
questions have scarcely been raised - let alone answered. Hence belle-lettrist generalizations about 'the African
past', and speculations about 'the African future' derived from them, are likely in present circumstances to be
valueless.
A further reason for our incompetence is that we do not, for the most part, see the contemporary
African situation from at all the same standpoint as the majority of African intellectuals. We have come to make
use of the term 'decolonization'; to take for granted that Western European colonial power is being withdrawn,
more or less rapidly, from the African continent; and to be worried only about the possible consequences of this
withdrawal. Few Africans, even those whose political thinking approximates most closely to that of Westerners,
regard the phase of history through which we are now passing in this way. They are far more conscious of the
survival of unlimited, naked colonial power in a number of states - Algeria, the Union of South Africa, Southern
Rhodesia, the Portuguese territories; and the emergence of 'neo-colonialism' (meaning the gradual granting of
formal independence, combined with the continuing effective présence of the metropolitan power) in others - the
states of the French Community, or, under very different conditions, the Congo. For Africans who think
continentally, or globally - and there are many of these - 'colonialism' remains a dominant aspect of the
contemporary situation. Indeed, they think with much greater subtlety than we about such matters, and are
skilled in distinguishing the varying degrees of intensity with which 'le poids du colonialisme' presses upon the
peoples of the various African states. And 'colonialism', in almost every context, means Western colonialism:
naturally, since Africans do not feel themselves confronted with the fact, or likelihood, of domination from any
other quarter. They are not at all impressed by the thesis, favoured by American publicists - 'European colonial
systems are now virtually liquidated; therefore let us, the Free World, protect you against the dangers of Soviet
or Chinese colonialism'.
There is a deeper sense in which we are ill-equipped to understand the processes of political and social
change taking place in modern Africa.
We are too remote, historically, from our own revolutions - three centuries away in the British, almost
two centuries in the American, case. And when African intellectuals, interested in our histories as well as theirs,
draw attention to resemblances between their political theories and those of the English Levellers, or Thomas
Jefferson, or Robespierre, we seldom see the point. Americans, in particular, are inclined to argue that Africans
must either be democrats in their sense of the term - meaning 'Western values', anti-Communism, two-party
systems, free enterprise, church-on-Sundays, glossy magazines, etcetera - or, if they claim to be democrats in
some other sense, they must be crypto-Communists. But, as I have argued elsewhere, there is a strong vein of
Rousseauian revolutionary democracy running through the ideologies of most African radical nationalist
parties and popular organizations.
There are, I think, five respects in which these similarities are most marked. There is, first, the tendency
to take certain ethical presuppositions as the point of departure: the purpose of liberation is to recover the
dignity, or assert the moral worth, of African man. Second, there is the classic conception of 'democracy', as
involving, primarily, the transfer of political power from a European dominant minority - an oligarchy of
colonial officials, or some combination of the two - to an undifferentiated African demos. Third, there is the idea
of the radical mass party, and its associated organizations, as expressing - as adequately as it can be expressed -
the desires and aspirations of the demos, the common people; and as having therefore the right, once
independence is achieved, to take control of the state, and reconstruct it in the party's own image and in
relation to its own objectives. There is also the constant insistence upon human equality: 'il n'y a pas de
surhommes' as Sekou Touré once put it. Hence it is not only the privileged status of the European minority that
is attacked, but also the special privileges and claims to authority - deriving from the precolonial or colonial
period - of groups within contemporary African society: chiefs, marabouts, 'bourgeois intellectuals'. They can play
a part within the new system only to the extent that they accept its egalitarian assumptions. Finally, there is the
idea of the artificiality, undesirability, and impermanence of barriers - ethnic, linguistic, territorial, cultural,
'racial', religious - within humanity, 'Pan-Africanism', in one of its aspects, being an attempt to apply this idea
in the modern African context.
(Thomas Hodgkin from an article in Encounter, June 1961)
NATIONALISM
Language alone is not, of course, enough to explain the rise of modern nationalism. Even language is a
shorthand for the sense of belonging together, of sharing the same memories, the same historical experience,
the same cultural and imaginative heritage. When, in the eighteenth century, nationalism began to take form as
a modern movement, its forerunners in many parts of Europe were not soldiers and statesmen but scholars and
poets who sought to find in ancient legends and half-forgotten folksongs the 'soul' of the nation. But it was
language that enshrined the memories, the common experience and the historical record.
Nor could the sense of common tongue and culture have become the political battering ram that it is in
our own times, if it had not been inextricably bound up with the modern political and economic revolutions of
the West - the political drive for democracy and the economic revolution of science and industry.
Three thousand years ago, in one small area of the Mediterranean world, there came a break with the
earlier traditions of state building which were all despotic. The dominance of a strong local tribe or conquest by
foreign groups had turned the old fragmented tribal societies into centralized dynastic and imperial states in
which the inhabitant was subject absolutely to the ruler's will. But in the Greek City State, for the first time, the
idea was formulated that a man should govern himself under law, and that he should not be a subject but a free
citizen.
After the post-Roman collapse, it re-emerged as a seminal idea in the development of later European
history. Even in the Middle Ages, before there were any fully articulated democratic systems, two or three of
the essential foundations of democracy had appeared. The rule of law was recognized. The right of the subject
to be consulted had called into being the parliaments and 'estates' of the fourteenth century. And the possibility
of a plurality of power - through State, through Church, through royal boroughs and free municipalities -
mitigated the centralizing tendencies of government. It was in fact for a restoration of these rights after the
Tudor interregnum that the first modern political revolution, the English Civil War, was fought.
But if a man had a right to take part in his own government, it followed logically that his government
could not be arbitrarily controlled from elsewhere. It was useless to give him representation if it did not affect
the true centre of power. The American Revolution symbolized the connexion between the rights of the citizen
and the rights of the state. The free citizen had a right to govern himself, ergo the whole community of free
citizens had a right to govern itself. This was not yet modern nationalism. The American people did not see
themselves as a national group but as a community of free men 'dedicated to a proposition'. But within two
decades, the identification had been made.
The French Revolution, proclaiming the Rights of Man, formed the new style of nation. The levée en
masse which defeated the old dynastic armies of Europe was the first expression of total national unity as the
basis of the sovereign state. Men and nations had equally the right to self-determination. Men could not be free
if their national community was not.
The same revolution quickly proved that the reverse might not be true. The nation could become
completely unfettered in its dealings with other states while enslaving its own citizens. In fact, over-
glorification of the nation might lead inevitably to the extinction of individual rights. The citizen could become
just a tool of the national will, of the so-called 'general will'. But in the first explosion of revolutionary ardour,
the idea of the Rights of Man and of the Rights of the Nation went together. And, formally, that is where they
have remained. At the end of the First World War, it was the world's leading democratic statesman, President
Woodrow Wilson, who wrote the right of self-determination, the right of national groups to form their own
sovereign government, into the Peace Treaties; and at no time in human history have so many independent
national states been formed as after the Second World War.
From all this it will be clear that the development of nationalism is a recognizable, historical process. It
happened in certain countries, it happened in a certain way, and it created a certain mood which became
embodied in the national idea. But with the means of communication open to the modern world, an idea
developed in one place can quickly become the possession of all mankind. What is certain is that, in the
twentieth century, nationalism, the historical product of certain political institutions, geographical facts, and
economic developments in Western Europe, has swept round the world to become the greatest lever of change
in our day.
We see it at the United Nations where, in the course of eleven short years, the number of sovereign
states based upon the principle of nationhood has grown by three score and more. As I have pointed out, it is
unprecedented in human history that such a number of separate, autonomous, sovereign institutions should
come into being in so short a space of time.
(From Five Ideas that Change the World by Barbara Ward)
DEMOCRACY
In the last days of 1945 I received a letter from Gothenburg, from a gathering of progressive Swedish
students, begging to be advised by me as to what was the task of intellectuals in the present condition of the
world. Their letter was dated July 26, the day of my defeat at Berwick, and I sent them an answer from the
heart: 'The task of intellectuals in Sweden, as elsewhere, is to introduce reason and foresight into practical
affairs. Only if it is governed by reason will democracy be sufficiently successful in practical affairs to make
certain of survival'.
Democracy is better than despotism; it offers the only hope for mankind of freedom, of justice, and of
peace. But is democracy, as we know it, good enough? A general election in any of the larger democracies
today, in the United States or in Britain, in France or in Italy, is not conspicuously a feast of reason. If
democracy is not all that nineteenth-century fancy used to paint, how should it be made better? Can it be made
to do well enough to be sure of survival?
In an Epilogue to another story I can only ask these questions. I cannot attempt the answers. But, having
regard to Britain's internal revolutions since the beginning of my story, it may be worthwhile to name some of
the problems illustrated by the story, and facing us today. We have to learn as a democracy to choose our
governors wisely, by reason, not greed. We have in an economically flattened society to find men who will
undertake office in a public spirit, not for personal gain or glory; we must carry on the aristocratic tradition
without the aristocrats. We have to keep open channels for new ideas of unknown men to reach and influence
the temporary holders of power. We seem to have solved for the present the problem of full employment, but
we have not solved two of the problems to which full employment in a free society gives rise - how to preserve
the value of our money against endless rise of costs, wages and prices, and how without fear of unemployment
to secure the maximum of output.
Democracy must be efficient in practical affairs, as efficient as the nearest despotism. Democracy must
be democratic in substance, not only in form. This means that the process of choosing and changing holders of
power shall be unaffected by privilege of established organization and wealth, that the holders of political
power, when an election comes, shall compete with their opponents on equal terms. Power must not be used to
prolong itself. Power, the stupid necessary mule, should have neither pride of ancestry nor hope of posterity. In
the leading democracies of today many special measures have been taken to secure this. But, at any risk of
causing offence, a question must be asked about Britain. Is it consistent with democratic principle that
organizations like the trade unions which have received special privileges for industrial work should become
tied to a political party? Ought it to be difficult for an individual to earn his living by employment without
contributing from his wages to the retention of power by one set of politicians rather than another? A one-party
State in any form is the destruction of freedom.
Democracies need to look within. They must look without as well. They must, in one way or another,
abandon and lead others to abandon any claim to absolute sovereignty - the claim to kill in one's own cause
without selection or limit. The headnote of this Epilogue is not a paradox but a truism. If with our growing
control over nature we could abolish war, we should be in Utopia. If we cannot abolish war, we shall plunge
ever deeper into a hell of evil imagining and evil doing.
The theme of my story returns at its end. Power as a means of getting things done appeals to that which
men share with brutes; to fear and to greed; power leads those who wield it to desire it for its own sake, not for
the service it may render, and to seek its continuance in their own hands. Influence as a means of getting things
done appeals to that which distinguishes men from brutes. The way out of the world's troubles today is to treat
men as men, to enthrone influence over power, and to make power revocable.
(From Power and influence: An autobiography by Lord Beveridge)
LOCKE'S POLITICAL THEORY
Locke's political theory is to be found in his Two Treatises of Civil Government, particularly in the second
of these. The immediate aim of that treatise is apparent: to justify the Revolution of 1688 and to help 'establish
the throne of our great restorer, our present King William'. But this aim is achieved by securing in turn a great
and fundamental political principle, true for the English nation in 1688 and true, in Locke's opinion, for all well-
regulated communities everywhere and at all times, that government must be with the consent of the governed,
that a ruler who has lost the confidence of his people no longer has the right to govern them.
The principle involves a particular view of government and of political community. Locke set himself to
refute two theories which were used to justify privilege, oppression, and political slavery. The first was the
theory of the divine right of kings as put forward by Robert Filmer, that the king is the divinely ordained father
of his people, and that the relation between king and subjects is precisely the same as that between father and
child. Locke ridicules the comparison. In the modern state, a large, highly complex organization, parental or
patriarchal government is no longer possible, and the claim that it is divinely ordained cannot be substantiated.
The second theory is to be found in its most explicit form in the works of Hobbes, although Locke does not refer
to Hobbes by name, at least in the Treatise. Government, in this theory, necessarily involves the complete
subjection of the governed to the absolute will of the governor, for without such subjection no civil society is
possible. Locke denies this theory categorically. The facts of human experience are against it and reason is
against it. A political community is possible in which the governor is limited; in which sovereignty ultimately
pertains not to the monarch, as opposed to those whom he governs, but to the people as a whole. Government
becomes an instrument for securing the lives, property, and well-being of the governed, and this without
enslaving the governed in any way. Government is not their master; it is created by the people voluntarily and
maintained by them to secure their own good. Those who, because of their superior talent, have been set to rule
by the community, rule not as masters over slaves, or even as fathers over children. They are officers elected by
the people to carry out certain tasks. Their powers are to be used in accordance with 'that trust which is put into
their hands by their brethren'. For Locke government is a 'trust' and a political community is an organization of
equals, of 'brothers', into which men enter voluntarily in order to achieve what they cannot achieve apart.
Such was the view of government which Locke adopted, and the second treatise is an effort to discover
a rational justification of this view. Locke might have appealed to experience and to history, or again he might
have contented himself with showing the public utility of the theory he advocated. But the late seventeenth
century was rationalist and would listen to no arguments other than rationalist ones, and so Locke analysed the
notion of political society in order to prove rationally that it was from the first a community of free individuals
and that it remained so throughout. He spoke in the language of his day and he made use of the theories of his
day. In particular, he borrowed two concepts from earlier political theories, the law of nature and the social
contract.
(From John Locke by Richard I. Aaron)
THE SEARCH FOR WORLD ORDER
Before I leave the subject of disarmament there is one further point of importance. Some Western
writers, and some people in Russia too, argue that the best way to minimize the explosive quality of the present
arms race is somehow to develop a stable balance of terror or deterrence. This means developing nuclear
weapons and delivery systems so strong and so varied that no surprise attack could knock out the power to
retaliate.
I can see some force in this argument. Effective deterrence depends to some extent on mutual conviction
that the other man can and will do what he threatens if he is attacked. And it may be that this is, for the time
being, the only practical way of curbing hasty action. But, in fact, attempting to produce stability in this way
also means continuing the arms race. Because, as the power to retaliate increases, there is bound to be a
corresponding search for improved weapons which will increase the element of surprise. In any case, inaction
through fear, which is the basis of deterrence, is not a positive way to secure peace - at any rate in the long run. I
feel bound to doubt whether safety, as Winston Churchill once claimed, can really become the 'sturdy child of
terror'.
It is important to remember that neither the League of Nations nor, so far, the United Nations has
contemplated the abolition of all armaments. The Covenant of the League spoke of: 'the reduction of national
armaments to the lowest point consistent with national safety and the enforcement by common action of
international obligations'. The first article of the Charter of the United Nations charges the Organization with
the duty of suppressing 'acts of aggression and other breaches of the peace', and Article 51 allows the
organization to use force for this purpose. Indeed, right at the beginning a Military Staffs Committee was set up
at United Nations headquarters and charged with the strategic direction of whatever military forces were to be
made available to the Security Council.
In practice, however, the United Nations does not have any military force permanently at its disposal or
any staff to plan operations in advance and direct them when they become necessary. Whatever operations the
Organization has undertaken have been done on an entirely ad hoc and improvised basis. In fact, in 1958 Mr.
Hammarskjold himself argued against the creation of a permanent United Nations military force. One of the
main reasons for this failure to develop a United Nations peace-keeping capacity in terms of military forces has
undoubtedly been the opposition of some of the Great Powers. And it must be admitted that there is no
prospect of the United Nations coercing the Great Powers into keeping the peace at present. But perhaps we
can make a virtue of necessity here.
I have tried to suggest that international agreements, like any system of municipal law, demand a
sanction of force if observance is normally to be guaranteed and non-observance controlled before it explodes
into general disorder. In other words, legislative decision demands as its corollary some form of executive
action. It was surely this which Mr. Hammarskjold had in mind in presenting his last annual report as Secretary
General. Some people, he said, wanted the United Nations to work simply as a conference system producing
reconciliation by discussion. Others - and clearly himself among them - looked upon the Organization primarily
as a dynamic instrument of government through which they, jointly and for the same purpose, should seek
such reconciliation but through which they should also try to develop forms of executive action undertaken on
behalf of all members, aiming at forestalling conflicts and resolving them, once they have arisen, by appropriate
diplomatic or political means. The word 'military' was not used. But at that very moment, the United Nations
had in the Congo, and largely through Mr. Hammarskjold's efforts, a military force expressly designed to re-
establish order and to prevent civil strife from exploding into general war.
It seems to me that any international organization designed to keep the peace must have the power not
merely to talk but also to act. Indeed I see this as the central theme of any progress towards an international
community in which war is avoided not by chance but by design. Nor need our present limitations daunt us.
This is a slow process in which experience grows into habit, and habit into trust. Many people have already
suggested how this development could be encouraged. The United Nations could have a bigger permanent
staff to act as observers and intelligence officers in potential trouble spots. Here would be part of the political
basis of control. It could develop much more detailed methods in advance for drawing on national armed
forces when police action becomes inevitable, even without possessing a big military establishment of its own.
It could prepare training manuals for the police action its forces are likely to undertake, and for which the
ordinary soldier is not normally trained. And it could begin to hold under its own control small specialist staffs,
for example, multilingual signallers, and some small stocks of equipment such as transport aircraft, which its
operations almost inevitably demand.
The fact that coercion of the Great Powers is impossible does not invalidate any of these suggestions. If
these Powers can, for the time being, avoid major war among themselves by nuclear deterrence, then the
likeliest explosive situations will occur in areas not of vital interest to them. It is there that the United Nations
can experiment and develop. Nor can a firm beginning be made otherwise. At present the United Nations
Organization, in the words of a recent writer, 'is not ... the parliament and government of mankind but an
institution of international diplomacy'. It can only hope to grow from the one into the other by admitting its
present limitations and, more than that, by beginning to practise its own terms and conditions. If a start could
be made now - and even if only in miniature - international government might finally emerge.
(Norman Gibbs from an article in The Listener, December 28th, 1961)
THE DECLARATION OF INDEPENDENCE
In Congress, July 4, 1776
THE UNANIMOUS DECLARATION OF THE THIRTEEN UNITED STATES OF AMERICA
When in the Course of human Events, it becomes necessary for one People to dissolve the Political
Bands which have connected them with another, and to assume among the Powers of the Earth, the separate
and equal Station to which the Laws of Nature and of Nature's God entitle them, a decent Respect to the
Opinions of Mankind requires that they should declare the causes which impel them to the Separation.
We hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their
Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness -- That
to secure these Rights, Governments are instituted among Men, deriving their just Powers from the Consent of
the Governed, that Whenever any Form of Government becomes destructive of these Ends, it is the Right of the
People to alter or to abolish it, and to institute new Government, laying its Foundation on such Principles, and
organizing its Powers in such Form, as to them shall seem most likely to effect their Safety and Happiness.
Prudence, indeed, will dictate that Governments long established should not be changed for light and transient
Causes; and accordingly all Experience hath shewn, that Mankind are more disposed to suffer, while Evils are
sufferable, than to right themselves by abolishing the Forms to which they are accustomed. But When a long
Train of Abuses and Usurpations, pursuing invariably the same Object, evinces a Design to reduce them under
absolute Despotism, it is their Right, it is their Duty, to throw off such Government, and to provide new Guards
for their future Security. Such has been the patient Sufferance of these Colonies; and such is now the Necessity
which constrains them to alter their former Systems of Government. The History of the present King of Great-
Britain is a History of repeated Injuries and Usurpations, all having in direct Object the Establishment of an
absolute Tyranny over these States. To prove this, let Facts be submitted to a candid World.
He has refused his Assent to Laws, the most wholesome and necessary for the public Good.
He has forbidden his Governors to pass Laws of immediate and pressing Importance, unless suspended
in their Operation till his Assent should be obtained; and When so suspended, he has utterly neglected to
attend to them.
He has refused to pass other Laws for the Accommodation of large Districts of People, unless those
People would relinquish the Right of Representation in the Legislature, a Right inestimable to them, and
formidable to Tyrants only.
He has called together Legislative Bodies at Places unusual, uncomfortable, and distant from the
Depository of their public Records, for the sole Purpose of fatiguing them into Compliance with his Measures.
He has dissolved Representative Houses repeatedly, for opposing with manly Firmness his Invasions
on the Rights of the People.
He has refused for a long Time, after such Dissolutions, to cause others to be elected; whereby the
Legislative Powers, incapable of the Annihilation, have returned to the People at large for their exercise; the
State remaining in the mean time exposed to all the Dangers of Invasion from without, and the Convulsions
within.
He has endeavoured to prevent the Population of these States; for that Purpose obstructing the Laws for
Naturalization of Foreigners; refusing to pass others to encourage their Migrations hither, and raising the
Conditions of new Appropriations of Lands.
He has obstructed the Administration of Justice, by refusing his Assent to Laws for establishing
Judiciary Powers.
He has made Judges dependent on his Will alone, for the Tenure of their Offices, and the Amount and
Payment of their Salaries.
He has erected a Multitude of new Offices, and sent hither Swarms of Officers to harrass our People,
and eat out their Substance.
He has kept among us, in Times of Peace, Standing Armies, without the consent of our Legislatures.
He has affected to render the Military independent of and superior to the Civil Power.
He has combined with others to subject us to a Jurisdiction foreign to our Constitution, and
unacknowledged by our Laws; giving his Assent to their Acts of pretended Legislation:
For quartering large Bodies of Armed Troops among us;
For protecting them, by a mock Trial, from Punishment for any Murders which they should commit on
the Inhabitants of these States:
For cutting off our Trade with all Parts of the World:
For imposing Taxes on us without our Consent:
For depriving us, in many Cases, of the Benefits of Trial by Jury:
For transporting us beyond Seas to be tried for pretended Offences:
For abolishing the free System of English Laws in a neighbouring Province, establishing therein an
arbitrary Government, and enlarging its Boundaries, so as to render it at once an Example and fit Instrument
for introducing the same absolute Rules into these Colonies:
For taking away our Charters, abolishing our most valuable Laws, and altering fundamentally the
Forms of our Governments:
For suspending our own Legislatures, and declaring themselves invested with Power to legislate for us
in all Cases whatsoever.
He has abdicated Government here, by declaring us out of his Protection and waging War against us.
He has plundered our Seas, ravaged our Coasts, burnt our Towns, and destroyed the Lives of our
People.
He is, at this Time, transporting large Armies of foreign Mercenaries to compleat the Works of Death,
Desolation, and Tyranny, already begun with circumstances of Cruelty and Perfidy, scarcely paralleled in the
most barbarous Ages, and totally unworthy the Head of a civilized Nation.
He has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their
Country, to become the Executioners of their Friends and Brethren, or to fall themselves by their Hands.
He has excited domestic Insurrections amongst us, and has endeavoured to bring on the Inhabitants of
our Frontiers, the merciless Indian Savages, whose known Rule of Warfare, is an undistinguished Destruction,
of all Ages, Sexes and Conditions.
In every stage of these Oppressions we have Petitioned for Redress in the most humble Terms: Our
repeated Petitions have been answered only by repeated Injury. A Prince, whose Character is thus marked by
every act which may define a Tyrant, is unfit to be the Ruler of a free People.
Nor have we been wanting in Attentions to our British Brethren. We have warned them from Time to
Time of Attempts by their Legislature to extend an unwarrantable Jurisdiction over us. We have reminded
them of the Circumstances of our Emigration and Settlement here. We have appealed to their native Justice and
Magnanimity, and we have conjured them by the Ties of our common Kindred to disavow these Usurpations,
which, would inevitably interrupt our Connections and Correspondence. They too have been deaf to the Voice
of Justice and of Consanguinity. We must, therefore, acquiesce in the Necessity, which denounces our
Separation, and hold them, as we hold the rest of Mankind, Enemies in War, in Peace, Friends.
We, therefore, the Representatives of the UNITED STATES OF AMERICA, in GENERAL CONGRESS,
Assembled, appealing to the Supreme Judge of the World for the Rectitude of our Intentions, do, in the Name,
and by Authority of the good People of these Colonies, solemnly Publish and Declare, That these United
Colonies are, and of Right ought to be, FREE AND INDEPENDENT STATES; that they are absolved from all
Allegiance to the British Crown, and that all political Connection between them and the State of Great-Britain,
is and ought to be totally dissolved; and that as FREE AND INDEPENDENT STATES, they have full Power to
levy War, conclude Peace, contract Alliances, establish Commerce, and to do all other Acts and Things which
INDEPENDENT STATES may of right do. And for the support of this Declaration, with a firm Reliance on the
Protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes, and our sacred
Honor.
DECLARATION OF THE RIGHTS OF MAN AND OF THE CITIZEN
Approved by the National Assembly of France, August 26, 1789
The representatives of the French people, organized as a National Assembly, believing that the
ignorance, neglect, or contempt of the rights of man are the sole cause of public calamities and of the corruption
of governments, have determined to set forth in a solemn declaration the natural, unalienable, and sacred rights
of man, in order that this declaration, being constantly before all the members of the Social body, shall remind
them continually of their rights and duties; in order that the acts of the legislative power, as well as those of the
executive power, may be compared at any moment with the objects and purposes of all political institutions
and may thus be more respected, and, lastly, in order that the grievances of the citizens, based hereafter upon
simple and incontestable principles, shall tend to the maintenance of the constitution and redound to the
happiness of all. Therefore the National Assembly recognizes and proclaims, in the presence and under the
auspices of the Supreme Being, the following rights of man and of the citizen:
Articles:
1. Men are born and remain free and equal in rights. Social distinctions may be founded
only upon the general good.
2. The aim of all political association is the preservation of the natural and imprescriptible
rights of man. These rights are liberty, property, security, and resistance to oppression.
3. The principle of all sovereignty resides essentially in the nation. No body nor
individual may exercise any authority which does not proceed directly from the nation.
4. Liberty consists in the freedom to do everything which injures no one else; hence the
exercise of the natural rights of each man has no limits except those which assure to the other
members of the society the enjoyment of the same rights. These limits can only be determined by
law.
5. Law can only prohibit such actions as are hurtful to society. Nothing may be prevented
which is not forbidden by law, and no one may be forced to do anything not provided for by law.
6. Law is the expression of the general will. Every citizen has a right to participate
personally, or through his representative, in its foundation. It must be the same for all, whether it
protects or punishes. All citizens, being equal in the eyes of the law, are equally eligible to all
dignities and to all public positions and occupations, according to their abilities, and without
distinction except that of their virtues and talents.
7. No person shall be accused, arrested, or imprisoned except in the cases and according
to the forms prescribed by law. Any one soliciting, transmitting, executing, or causing to be
executed, any arbitrary order, shall be punished. But any citizen summoned or arrested in virtue of
the law shall submit without delay, as resistance constitutes an offense.
8. The law shall provide for such punishments only as are strictly and obviously
necessary, and no one shall suffer punishment except it be legally inflicted in virtue of a law passed
and promulgated before the commission of the offense.
9. As all persons are held innocent until they shall have been declared guilty, if arrest
shall be deemed indispensable, all harshness not essential to the securing of the prisoner's person
shall be severely repressed by law.
10. No one shall be disquieted on account of his opinions, including his religious views,
provided their manifestation does not disturb the public order established by law.
11. The free communication of ideas and opinions is one of the most precious of the rights
of man. Every citizen may, accordingly, speak, write, and print with freedom, but shall be
responsible for such abuses of this freedom as shall be defined by law.
12. The security of the rights of man and of the citizen requires public military forces.
These forces are, therefore, established for the good of all and not for the personal advantage of
those to whom they shall be intrusted.
13. A common contribution is essential for the maintenance of the public forces and for the
cost of administration. This should be equitably distributed among all the citizens in proportion to
their means.
14. All the citizens have a right to decide, either personally or by their representatives, as to
the necessity of the public contribution; to grant this freely; to know to what uses it is put; and to fix
the proportion, the mode of assessment and of collection and the duration of the taxes.
15. Society has the right to require of every public agent an account of his administration.
16. A society in which the observance of the law is not assured, nor the separation of
powers defined, has no constitution at all.
17. Since property is an inviolable and sacred right, no one shall be deprived thereof except
where public necessity, legally determined, shall clearly demand it, and then only on condition that
the owner shall have been previously and equitably indemnified.
Written by The Marquis de Lafayette, with help from his friend and neighbor, American envoy to
France, Thomas Jefferson.
French King Louis XVI signed this document, under duress, but never intended to support it.
PSYCHOLOGY
Society and Intelligence
Intelligence tests have been constructed of three kinds: verbal paper-and-pencil tests; non-verbal paper-
and-pencil tests, where the tasks are presented by means of pictures and diagrams; and performance tests
which require the manipulation of objects. Some, such as the Binet test and the performance tests, are given to
subjects separately; most verbal and non-verbal tests can be done by a group of subjects writing at the same
time.
The subjects are told to do their tasks within a certain time, their results are marked, and the result of
each is compared with a scale indicating what may be expected of children of the same age, i.e. what marks are
expected of the relatively few bright ones, what marks are expected of the few dull ones, and what marks are
expected of the bulk of the population with whom the comparison is being made. This 'calibration' of the test
has been made beforehand and we are not concerned with the methods employed. One thing, however, we
have to notice, and that is that the assessment of the intelligence of any subject is essentially a comparative
affair.
The results of assessment are expressed in various ways, the most familiar being in terms of what is
called the Intelligence Quotient. For our purposes we need not consider how this has been devised; it is enough
to say that an I.Q. round about 100 is 'average', while more than 105 or less than 95 are above or below the
average respectively.
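The comparative logic described here can be put into a few lines of code. The hypothetical Python sketch below rescales a raw score against the distribution of same-age peers; note that the mean-100, standard-deviation-15 convention it assumes is a modern one, and the peer scores are invented for illustration.

import statistics

# Locate a raw score within the peer distribution, then rescale so the
# average peer lands at 100 (deviation-I.Q. convention, assumed here).
peer_raw_scores = [31, 38, 40, 42, 44, 45, 47, 50, 53, 60]  # invented
mean = statistics.mean(peer_raw_scores)
sd = statistics.stdev(peer_raw_scores)

def deviation_iq(raw):
    """Express a raw score relative to the peer group."""
    return 100.0 + 15.0 * (raw - mean) / sd

print(f"raw score 45 -> I.Q. {deviation_iq(45):.0f}")  # about average
print(f"raw score 60 -> I.Q. {deviation_iq(60):.0f}")  # well above it

The whole exercise presupposes a fair comparison group, which is exactly the assumption the following paragraphs call into question.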
Now since the assessment of intelligence is a comparative matter we must be sure that the scale with
which we are comparing our subjects provides a 'valid' or 'fair' comparison. It is here that some of the
difficulties, which interest us, begin. Any test performed involves at least three factors: the intention to do one's
best, the knowledge required for understanding what you have to do, and the intellectual ability to do it. The
first two must be held equal for all who are being compared, if any comparison in terms of intelligence is to be
made. In school populations in our culture these assumptions can be made with fair plausibility, and the value
of intelligence testing has been proved up to the hilt. Its value lies, of course, in its providing a satisfactory basis
for prediction. No one is in the least interested in the marks little Basil gets on his test; what we are interested in
is whether we can infer from his mark on the test that Basil will do better or worse than other children of his
age at other tasks which we think require 'general intelligence'. On the whole such inference can be made with a
certain degree of confidence, but only if Basil can be assumed to have had the same attitude towards the test as
the others with whom he is being compared, and only if he was not penalized by lack of relevant information
which they possessed.
It is precisely here that the trouble begins when we use our tests for people from different cultures. If, as
happens among the Dakota Indians, it is indelicate to ask a question if you think there is someone present who
does not know the answer already, this means that a Dakota child's test result is not comparable with the
results of children brought up in a less sensitive environment. Porteus found difficulty among the Australian
aborigines. They were brought up to believe that all problems had to be discussed in the group, and they
thought it very eccentric to be expected to solve one by oneself.
Supposing, however, a satisfactory attitude towards the test can be assumed, what about equality in
relevant knowledge? In a society where children play with bricks, performance tests involving the
manipulation of little cubes present an easier problem than they would in a society where such toys were
unknown. Bartlett reports that a group of East African natives were unable to arrange coloured pegs in an
alternating series, but they planted trees according to the same plan in everyday life.
Then there is the story of the little boy in Kentucky who was asked a test question: 'If you went to a
store and bought six cents worth of candy and gave the clerk ten cents what change would you receive?' The
boy replied: 'I never had ten cents and if I had I wouldn't spend it on candy and anyway candy is what mother
makes.' The tester reformulated the question: 'If you had taken ten cows to pasture for your father and six of
them strayed away, how many would you have left to drive home?' The boy replied: 'We don't have ten cows,
but if we did and I lost six I wouldn't dare go home.' Undeterred the tester pressed his question: 'If there were
ten children in your school and six of them were sick with the measles how many would there be in school?'
The answer came: 'None, because the rest would be afraid of catching it too.'
Thus all intercultural comparisons of intelligence are vitiated by the lack of true comparability, and any
generalizations about 'racial' differences in intellectual competence which do not take account of this are
worthless. So are many comparisons which have been made between children of different social classes.
(From Social Psychology, by W. J. H. Sprott.)
The Pressure to Conform
Suppose that you saw somebody being shown a pair of cards. On one of them there is a line, and on the other, three lines. Of these three, one is obviously longer than the line on the other card, one is shorter, and one
the same length. The person to whom these cards are being shown is asked to point to the line on the second
card which is the same length as the one on the first. To your surprise, he makes one of the obviously wrong
choices. You might suppose that he, or she, perhaps suffers from distorted vision, or is insane, or perhaps
merely cussed. But you might be wrong in all these suggestions; you might be observing a sane, ordinary
citizen, just like yourself. Because, by fairly simple processes, sane and ordinary citizens can be induced to deny
the plain evidence of their senses - not always, but often. In recent years psychologists have carried out some
exceedingly interesting experiments in which this sort of thing is done.
The general procedure was first devised by Dr Asch in the United States. What happens is this:
Someone is asked to join a group who are helping to study the discrimination of length. The victim, having
agreed to this seemingly innocent request, goes to a room where a number of people - about half a dozen - and
the experimenter are seated. Unbeknown to our victim, none of the other people in the room is a volunteer like
himself; they are all in league with the experimenter. A pair of cards, like those I have described, is produced;
and everyone in turn is asked which of the three lines on the second card is equal to the line on the first. They
all, without hesitation, pick - as they have been told to pick - the same wrong line. Last of all comes the turn of our
volunteer. In many cases the volunteer, faced with this unanimity, denies the plain evidence of his senses, and
agrees.
An ingenious variation of this experiment was devised by Stanley Milgram of Harvard. He used sounds
instead of lines, and the subjects were merely asked to state which of two successive sounds lasted longer. The
volunteer would come into a room where there was a row of five cubicles with their doors shut, and coats
hanging outside, and one open cubicle for him. He would sit in it and don earphones provided. He would hear
the occupants of the other cubicles tested in turn, and each would give a wrong answer. But the other cubicles
were, in fact, empty, and what he heard were tape-recordings manipulated by the experimenter. Milgram
conducted a whole series of experiments in this way, in which he varied considerably the pressure put upon the
subjects. As expected, their conformity varied with the pressure, but, over all, he clearly showed that, faced
with the unanimous opinion of the group they were in, people could be made to deny the obvious facts of the
case in anything up to 75 per cent of the trials.
The victim of brainwashing can be induced to assert falsehoods, as we well know. But he is subjected to
terrible and continuous stress: to hunger, sleeplessness, cold, and fear. The people we have been discussing
were free of all these things, and subjected to nothing more than the complete agreement of the group in which
they found themselves. Nevertheless, they too could be made to assert manifest falsehoods. I find this more
than a trifle alarming - and very thought-provoking.
You may reply that there is no cause for alarm, because in real situations the total unanimity of a group
is rare. The more usual case concerns the effects of what we might call a 'pressure group'. This has been
examined, at least partially, by W. M. and H. H. Kassarjian, in California. They used the 'group in a room' and
'lines on cards' situation; and I imagine that they must be kindly people, because they made things much easier
for their volunteers. In the first place, the genuine volunteers were in a majority: twenty out of thirty. Secondly,
the volunteers never had to make their selections out loud, but always enjoyed the anonymity of paper and
pencil. The experimenter explained that some people would be asked to declare their choices publicly, and
asked only his primed collaborators. Thus each volunteer heard the views of only a third of the group he was
in. This third was unanimous, and the volunteers probably concluded that it expressed a majority view, but
they were not put in a glaring minority of one, and their choice was secret. Nevertheless, a substantial
distortion was still produced: almost, though not quite, as large as in the harsher situations we looked at first.
So there is only small comfort here.
I am aware that there is grave danger in taking results obtained in the special and carefully simplified
situation of the laboratory, or of the clinic, and applying them directly to the immensely complicated affairs of
normal life. But these results seem to me so interesting and so suggestive that, in spite of the obvious risks, it
may be worth while to see where they may shed a little light. In speculating thus, I am stepping outside the
proper bounds of scientific rigour; so, if I only make myself a laughing-stock, it is my own fault.
Whether one line is or is not the same length as another is a matter fairly easy to judge as a rule. But
many things - and many more important things - are by no means so clear-cut. If we are asked which of two cars
or two schools is the better, or which of two 'pop' songs or two politicians is the worse, we may be genuinely
perplexed to answer. We may guess that in such doubtful cases the 'majority effect' or the 'pressure group
effect' would be even more pronounced. Recent experiments by A. E. M. Seaborne suggest that they would not
always be, but it seems generally likely. Can we observe such effects taking place around us now, or having
taken place in the past? I think we can, and that they help us a little to understand the massive inertia of
commonly held ideas, and the fantastic standing of some more ephemeral ones.
(From an article by Max Hammerton in The Listener, October 18th, 1962.)
Learning to Live With the Computer
A rapid pace of technological advance has been accepted by many manufacturing industries for some
time now, but for the office worker, who has led a sheltered existence in comparison, radical changes are a new
experience. With the advent of electronic data processing techniques and, especially, computers, this situation
has altered very swiftly. Office staff are suddenly finding themselves exposed to the traumatic consequences of
scientific progress.
Most offices, by the very nature of their structure and function, are geared to stability or slow change.
Accelerated change of the kind that a computer brings is likely to prove disrupting and disturbing. This is
because people in stable organizations tend to expect a steady continuation of existing arrangements, and
because departments unaccustomed to change frequently find they have become too inflexible to assimilate it
without stress. Social as well as technical factors are therefore highly relevant to a successful adaptation to new
techniques.
Research into the social and organizational problems of introducing computers into offices has been in
progress in the social science department in Liverpool University for the past four years. My colleagues and I
have shown that many firms get into difficulties with their new computers partly because of lack of technical
knowledge and experience, but also because they have not been sufficiently aware of the need to understand
and plan for the social as well as the technical implications of change. In the firms we have been studying,
change has usually been seen simply as a technical problem to be handled by technologists. The fact that the
staff might regard the introduction of a computer as a threat to their security and status has not been
anticipated. Company directors have been surprised when, instead of cooperation, they encountered anxiety
and hostility.
Once the firm has signed the contract to purchase a computer, its next step, one might expect, would be
to 'sell' the idea to its staff, by giving reassurances about redundancy, and investigating how individual jobs
will be affected so that displaced staff can be prepared for a move elsewhere. In fact, this may not happen. It is
more usual for the firm to spend much time and energy investigating the technical aspects of the computer, yet
largely to ignore the possibility of personnel difficulties. This neglect is due to the absence from most firms of
anyone knowledgeable about human relations. The personnel manager, who might be expected to have some
understanding of employee motivation, is in many cases not even involved in the changeover.
Again, because the changeover is seen only as a technical problem, little thought is given to
communication and consultation with staff. Some firms go so far as to adopt a policy of complete secrecy,
telling their staff nothing. One director told us: 'If we are too frank, we may create difficulties for ourselves.'
This policy was applied to managers as well as clerks because, it was explained, 'our managers will worry if
they find out they will lose workers and so have their empires reduced'. Several months after the arrival of the
computer, the sales manager in this firm had still not been given full information on the consequences of this
change.
One computer manufacturer with extensive American experience has tried to give advice to firms
purchasing its computers on how and when to communicate. It suggests to customers that as soon as the
contract is signed management should hold a meeting with all office staff and explain what a computer is, what
the firm's plans are, how it proposes to use the computer, and how staff will be affected. Management should
also explain at this meeting that there will be some redundancy, but that it will be absorbed by the normal
processes of people changing jobs and retiring. This manufacturer tells us that he frequently encounters
resistance to this approach. Directors often take the line: 'No, don't tell anyone.'
The real bogey of the computer is that it is likely - or even intended - to displace staff. So it constitutes a
major threat to staff security, and for this reason alone is likely to be resisted. An important part of the
preparations for a machine must be, therefore, the estimating of the number of redundancies, and identifying
jobs which will be eliminated or reduced in scope by the machine.
Briefly, I would offer the following advice to firms embarking on computers. First, do not think your
problems will be predominantly technical, because this is unlikely. Secondly, get as much information as you
can on both social and technical problems before you sign the contract to purchase; other firms' experience can
be a useful guide here. And thirdly, plan well in advance, communicate and consult with all your staff and do
not be surprised if they take a sour view of the proposed change. No one likes to think that he can be replaced
by a machine.
(From an article by Enid Mumford in The New Scientist, 18th June, 1964.)
Forgetting
In 1914, Freud published an English edition of his The Psychopathology of Everyday Life. In this book he
endeavours to show that many 'lapses of memory' and 'slips of the tongue' are not inexplicable accidents but
can be readily understood if fitted into the personality picture of the individual. The reader is recommended to
look at this well-written book for himself and discover the wealth of intriguing anecdotal evidence with which
Freud supports and develops his thesis.
Freud is at his best when discussing those seemingly accidental mistakes of speech and writing where
one word is substituted for another and, especially, where the substitute word means the opposite of the word
intended. A physician is writing out a prescription for an impecunious patient who asks him not to give her big
bills because she cannot swallow them - and then says that, of course, she meant pills. An arrogant lecturer says
that he could count the number of real authorities on his subject on one finger - he means the fingers of one
hand. A President of the Austrian House of Deputies is opening a session from which he fears little good will
come and announces that, since such and such a number of gentlemen are present, he declares the session closed; amid laughter, he corrects his mistake and declares the session opened. All of these examples clearly
derive from the person saying what he actually thinks without checking himself to make his insincere but
diplomatic statement. No doubt we have all encountered similar examples in our everyday life. Certainly
writers of fiction have long been aware of this phenomenon, and have exploited it to good dramatic effect by
putting such lapsus linguae into the mouths of characters. In Shakespeare's Merchant of Venice, for example,
Portia has lost her affections to Bassanio but is under a vow not to reveal it. She directs a speech to this welcome
suitor in which, throughout, her love for him is thinly disguised and finishes with the words: 'One half of me is
yours, the other half yours - Mine own, I would say.' The same expression of our thoughts and wishes is seen in
some erroneously carried-out actions. Thus, one physician reports that he is quite often disturbed in the midst
of engrossing work at home by having to go to hospital to carry out some routine duty. When this happens he
is apt to find himself trying to open the door of his laboratory with the key of his desk at home. The two keys
are quite unlike each other and the mistake does not occur under normal circumstances but only under
conditions where he would rather be at home. His error seems to express his wish.
When Freud begins to discuss 'lapses of memory' in terms of repression, he seems to move on less firm
ground. He does not, of course, claim that all lapses are due to repression. His concern is to show that at least
some are and, to this end, he gives examples in which a name or a word is unexpectedly forgotten and proceeds
to demonstrate that the forgotten item is associated either directly or indirectly with unpleasant circumstances.
Here we may cite two of his most convincing examples. The first concerns a man (X) who repeatedly forgot the
name of an old acquaintance and business associate (Y). When he needed to correspond with Y, he had to ask
other people for his name. It transpired that Y had recently married a young woman X himself had hoped to
marry. Thus X had good reason to dislike his happy rival and want to forget all about him. The second example
concerns a man who set out to recite a poem, got so far, and then could recall no more although he knew the
poem well. The line on which he blocked was descriptive of a pine-tree which is covered 'with the white sheet'.
Why should this phrase have been forgotten? When the man was asked to relate what came to his mind when he thought of this phrase, he said that it immediately reminded him of the white sheet which covers a dead body, and of the
recent death of his brother from a heart condition which was common in his family and from which he feared
he too might die. The phrase referring to the white sheet appears to have been forgotten because it was
associated with circumstances which the man did not wish to recall. In Freud's other examples, the link
between the forgotten item and some unpleasant circumstance is not so easily demonstrated.
(From Memory: Facts and Fallacies, by Ian Hunter.)
Adolescence
The period of adolescence has fascinated people of all ages. Even Aristotle turned aside from his
philosophical and ethical speculations to make a study of the adolescent. He realistically described a boy's voice
as 'the bleating of a billy goat'. He also characterized the adolescent as being 'high-minded', but somewhat
cynically put this down to lack of experience! Plato devoted much time and thought to discovering how best to
bring up youth to true citizenship.
Growing-up. Adolescence means 'growing-up' and strictly speaking should apply to a child from birth to
maturity. Why then do we use it for this teenage period alone? Because when we speak of the adolescent as
'growing-up', we mean that the youth is leaving behind the phase of protective childhood and is becoming
independent, capable of going out to fend for himself.
Girls of this age used to be called 'flappers', a very descriptive term, for they are figuratively trying out
their wings. Very often, like fledglings, both boys and girls require a gentle push off! Sometimes they push off
too soon and hurt themselves.
Venturesomeness. A characteristic of 'growing-up' is a desire to be venturesome - so unlike the dependence
of the child and the set ways of the adult. The adolescent seeks for new experience in life, and likes roughing it.
In their camps and hiking, for example, boys and girls seek uncomfortable and difficult conditions - and then set
about making themselves comfortable in them. They deliberately seek difficulties in order to overcome them.
Responsibility. The adolescent also loves responsibility. The boy likes to be given the job of packing the
luggage in the car; the girl, the responsibility of getting the younger children ready for the trip. This is a natural
urge and requires expression.
Relation to life. The healthy adolescent boy or girl likes to do the real things in life, to do the things that
matter. He would rather be a plumber's mate and do a real job that requires doing than learn about hydrostatics
sitting at a desk, without understanding what practical use it is going to be. A girl would rather look after
the baby than learn about child care.
Logically we should learn about things before doing them and that is presumably why the pundits
enforce this in our educational system. But it is not the natural way - nor, I venture to think, the best way. The adolescent wants to do things first, for only then does he appreciate the problems involved and want to learn
more about them.
They do these things better in primitive life, for there at puberty the boy joins his father in making
canoes, patching huts, going out fishing or hunting, and preparing weapons of war. He is serving his
apprenticeship in the actual accomplishments of life. It is not surprising that anthropologists find that the
adolescents of primitive communities do not suffer from the same neurotic 'difficulties' as those of civilized life.
This is not, as some assume, because they are permitted more sexual freedom, but because they are given more
natural outlets for their native interests and powers and are allowed to grow up freely into a full life of
responsibility in the community.
In the last century this was recognized in the apprenticeship system, which allowed the boy to go out
with the master carpenter, thatcher, or ploughman, to engage in the actual work of carpentry, roof-mending, or
ploughing, and so to learn his trade. It was the same in medicine, in which a budding young doctor of sixteen
learnt his job by going round with the general practitioner and helping with the blood-letting and physic. In
our agricultural colleges at the present time young men have to do a year's work on a farm before their
theoretical training at college. The great advantage of this system is that it lets the apprentice see the practical
problems before he sets to work learning how to solve them, and he can therefore take a more intelligent
interest in his theoretical work. That is also why a girl should be allowed to give expression to her natural desire
to look after children, and then, when she comes up against difficulties, to learn the principles of child care.
Since more knowledge of more things is now required in order to cope with the adult world, the period
of growing-up to independence takes much longer than it did in a more primitive community, and the
responsibility for such education, which formerly was in the hands of the parents, is now necessarily
undertaken by experts at school. But that should not make us lose sight of the basic principle, namely the need
and the desire of the adolescent to engage responsibly in the 'real' pursuits of life and then to learn how - to learn
through responsibility, not to learn before responsibility.
(From Childhood and Adolescence, by J. A. Hadfield)
Body Language
What does scientific literature tell us about the idea that body language reflects our real feelings? One
experiment carried out about 10 years ago by Ross Buck from Carnegie-Mellon University in Pennsylvania
suggests that spontaneous facial expression is not a very good index of real emotional state. Buck and his
colleagues tested the accuracy with which people could identify the emotions felt by another person. They
presented one set of subjects with colour slides involving a variety of emotionally-loaded visual stimuli - such
as “scenic” slides (landscapes, etc), “maternal” slides (mothers and young children), “disgusting” slides (severe facial injuries and burns) and “unusual” slides (art objects). Unknown to these subjects, they were being televised
and viewed by another matched set of subjects, who were asked to decide, on the basis of the televised facial
expressions, which of the four sets of slides had just been viewed. This experiment involved both male and
female pairs, but no pairs comprising both men and women; that is men observed only men, and women
observed women. Buck found that the female pairs correctly identified almost 40 per cent of the slides used -
this was above the level which would be predicted by chance alone. (Chance level is 25 per cent here, as there
were four classes of slide). But male pairs correctly identified only 28 per cent of slides - not significantly above
chance level. In other words, this study suggests that facial expression is not a very good index of “real” feeling
- and in the case of men watching and interpreting other men, is almost useless.
Paul Ekman from the University of California has conducted a long series of experiments on nonverbal
leakage (or how nonverbal behaviour may reveal real inner states) which has yielded some more positive and
counter-intuitive results. Ekman has suggested that nonverbal behaviour may indeed provide a clue to real
feelings and has explored in some detail people actively involved in deception, where their verbal language is
not a true indication of how they really feel. Ekman here agrees with Sigmund Freud, who was also convinced
of the importance of nonverbal behaviour in spotting deception when he wrote: “He that has eyes to see and
ears to hear may convince himself that no mortal can keep a secret. If his lips are silent, he chatters with his
finger-tips; betrayal oozes out of him at every pore.”
Ekman predicted that the feet and legs would probably hold the best clue to deception because
although the face sends out very quick instantaneous messages, people attend to and receive most feedback
from the face and therefore try to control it most. In the case of the feet and legs the “transmission time” is
much longer but we have little feedback from this part of the body. In other words, we are often unaware of
what we are doing with our feet and legs. Ekman suggested that the face is equipped to lie the most (because
we are often aware of our facial expression) and to “leak” the most (because it sends out many fast momentary
messages) and is therefore going to be a very confusing source of information during deception. The legs and
feet would be the primary source of nonverbal leakage and hold the main clue to deception. The form the
leakage in the legs and feet would take would include “aggressive foot kicks, flirtatious leg displays, abortive
restless flight movements”. Clues to deception could be seen in “tense leg positions, frequent shifts of leg
posture, and in restless or repetitive leg and foot movements.”
Ekman conducted a series of experiments to test his speculations, some involving psychiatric patients
who were engaging in deception, usually to obtain release from hospital. He made films of interviews
involving the patients and showed these, without sound, to one of two groups of observers. One group viewed
only the face and head, the other group, the body from the neck down. Each observer was given a list of 300
adjectives describing attitudes, emotional state, and so on, and had to say which adjectives best described the
patients. The results indicated quite dramatically that individuals who utilised the face tended to be misled by
the patients, whereas those who concentrated on the lower body were much more likely to detect the real state
of the patients and not be misled by the attempted deception.
These studies thus suggest that some body language may indeed reflect our real feelings, even when we
are trying to disguise them. Most people can, however, manage to control facial expression quite well and the
face often seems to provide little information about real feeling. Paul Ekman has more recently demonstrated
that people can be trained to interpret facial expression more accurately but this, not surprisingly, is a slow
laborious process. Ekman’s research suggests that the feet and legs betray a great deal about real feelings and
attitudes but the research is nowhere near identifying the meanings of particular foot movements. Ray
Birdwhistell of the Eastern Pennsylvania Psychiatric Institute has gone some way towards identifying some of
the basic nonverbal elements of the legs and feet, and as a first approximation has identified 58 separate
elements. But the meanings of these particular elements are far from clear, and neither are the rules for combining the elements into larger meaningful units. Perhaps in years to come we will have a “language” of the feet, provided that we can successfully surmount the problems described earlier: identifying the basic forms of movement following Birdwhistell’s pioneering efforts, establishing how they may combine into larger units, and teaching people how to make sense of apparently contradictory movements.
In the meantime, if you go to a party and find someone peering intently at your feet - beware.
DISTANCE REGULATION IN ANIMALS
Comparative studies of animals help to show how man's space requirements are influenced by his
environment. In animals we can observe the direction, the rate, and the extent of changes in behaviour that
follow changes in space available to them as we can never hope to do in men. For one thing, by using animals it
is possible to accelerate time, since animal generations are relatively short. A scientist can, in forty years,
observe four hundred and forty generations of mice, while he has in the same span of time seen only two
generations of his own kind. And, of course, he can be more detached about the fate of animals.
In addition, animals don’t rationalise their behaviour and thus obscure issues. In their natural state,
they respond in an amazingly consistent manner so that it is possible to observe repeated and virtually identical
performances. By restricting our observations to the way animals handle space, it is possible to learn an
amazing amount that is translatable to human terms.
Territoriality, a basic concept in the study of animal behaviour, is usually defined as behaviour by
which an organism characteristically lays claim to an area and defends it against members of its own species. It
is a recent concept, first described by the English ornithologist H. E. Howard in his Territory in Bird Life, written
in 1920. Howard stated the concept in some detail, though naturalists as far back as the seventeenth century
had taken note of various events which Howard recognised as manifestations of territoriality.
Territoriality studies are already revising many of our basic ideas of animal life and human life as well.
The expression “free as a bird” is an encapsulated form of man’s conception of his relation to nature. He sees
animals as free to roam the world, while he himself is imprisoned by society. Studies of territoriality show that
the reverse is closer to the truth and that animals are often imprisoned in their own territories. It is doubtful if
Freud, had he known what is known today about the relation of animals to space, could have attributed man’s
advances to trapped energy redirected by culturally imposed inhibitions.
Many important functions are expressed in territoriality, and new ones are constantly being discovered.
H. Hediger, Zurich’s famous animal psychologist, described the most important aspects of territoriality and
explained succinctly the mechanisms by which it operates. Territoriality, he says, insures the propagation of the
species by regulating density. It provides a frame in which things are done - places to learn, places to play, safe
places to hide. Thus it coordinates the activities of the group and holds the group together. It keeps animals
within communicating distance of each other, so that the presence of food or an enemy can be signalled. An
animal with a territory of its own can develop an inventory of reflex responses to terrain features. When danger
strikes, the animal on its home ground can take advantage of automatic responses rather than having to take
time to think about where to hide.
The psychologist C. R. Carpenter, who pioneered in the observation of monkeys in a native setting,
listed thirty-two functions of territoriality, including important ones relating to the protection and evolution of
the species. The list that follows is not complete, nor is it representative of all species, but it indicates the crucial
nature of territoriality as a behavioural system, a system that evolved in very much the same way as anatomical
systems evolved. In fact, differences in territoriality have become so widely recognised that they are used as a
basis for distinguishing between species, much as anatomical features are used.
Territoriality offers protection from predators, and also exposes to predation the unfit who are too weak
to establish and defend a territory. Thus, it reinforces dominance in selective breeding because the less
dominant animals are less likely to establish territories. On the other hand territoriality facilitates breeding by
providing a home base that is safe. It aids in protecting the nests and the young in them. In some species it
localises waste disposal and inhibits or prevents parasites. Yet one of the most important functions of
territoriality is proper spacing, which protects against over-exploitation of that part of the environment on
which a species depends for its living.
In addition to preservation of the species and the environment, personal and social functions are
associated with territoriality. C. R. Carpenter tested the relative roles of sexual vigour and dominance in a
territorial context and found that even a desexed pigeon will in its own territory regularly win a test encounter
with a normal male, even though desexing usually results in loss of position in a social hierarchy. Thus, while
dominant animals determine the general direction in which the species develops, the fact that the subordinate
can win (and so breed) on his home grounds helps to preserve plasticity in the species by increasing variety and
thus preventing the dominant animals from freezing the direction which evolution takes.
Territoriality is also associated with status. A series of experiments by the British ornithologist A. D.
Bain on the great tit altered and even reversed dominance relationships by shifting the position of feeding
stations in relation to birds living in adjacent areas. As the feeding station was placed closer and closer to a
bird’s home range, the bird would accrue advantages it lacked when away from its own home ground.
Man, too, has territoriality and he has invented many ways of defending what he considers his own
land, turf, or spread. The removal of boundary markers and trespass upon the property of another man are
punishable acts in much of the Western world. A man’s home has been his castle in English common law for
centuries, and it is protected by prohibitions on unlawful search and seizure even by officials of his
government. The distinction is carefully made between private property, which is the territory of an individual,
and public property, which is the territory of the group.
This cursory review of the functions of territoriality should suffice to establish the fact that it is a basic
behavioural system characteristic of living organisms including man.
An observation and an explanation
It is worth looking at one or two aspects of the way a mother behaves towards her baby. The usual
fondling, cuddling and cleaning require little comment, but the position in which she holds the baby against
her body when resting is rather revealing. Careful American studies have disclosed the fact that 80 per cent of
mothers cradle their infants in their left arms, holding them against the left side of their bodies. If asked to
explain the significance of this preference most people reply that it is obviously the result of the predominance
of right-handedness in the population. By holding the babies in their left arms, the mothers keep their dominant
arm free for manipulations. But a detailed analysis shows that this is not the case. True, there is a slight
difference between right-handed and left-handed females, but not enough to provide an adequate explanation.
It emerges that 83 per cent of right-handed mothers hold the baby on the left side, but then so do 78 per cent of
left-handed mothers. In other words, only 22 per cent of the left-handed mothers have their dominant hands
free for actions. Clearly there must be some other, less obvious explanation.
The only other clue comes from the fact that the heart is on the left side of the mother’s body. Could it
be that the sound of her heartbeat is the vital factor? And in what way? Thinking along these lines it was
argued that perhaps during its existence inside the body of the mother, the growing embryo becomes fixated
(‘imprinted’) on the sound of the heart beat. If this is so, then the re-discovery of this familiar sound after birth
might have a calming effect on the infant, especially as it has just been thrust into a strange and frighteningly
new world outside. If this is so then the mother, either instinctively or by an unconscious series of trials and
errors, would soon arrive at the discovery that her baby is more at peace if held on the left against her heart,
than on the right.
This may sound far-fetched, but tests have now been carried out which reveal that it is nevertheless the
true explanation. Groups of new-born babies in a hospital nursery were exposed for a considerable time to the
recorded sound of a heartbeat at a standard rate of 72 beats per minute. There were nine babies in each group
and it was found that one or more of them was crying for 60 per cent of the time when the sound was not
switched on, but that this figure fell to only 38 per cent when the heartbeat recording was thumping away. The
heartbeat groups also showed a greater weight-gain than the others, although the amount of food taken was the
same in both cases. Clearly the beatless groups were burning up a lot more energy as a result of the vigorous
actions of their crying.
Another test was done with slightly older infants at bedtime. In some groups the room was silent, in
others recorded lullabies were played. In others a ticking metronome was operating at the heartbeat speed of 72
beats per minute. In still others the heartbeat recording itself was played. It was then checked to see which
groups fell asleep more quickly. The heartbeat group dropped off in half the time it took for any of the other
groups. This not only clinches the idea that the sound of the heart beating is a powerfully calming stimulus, but
it also shows that the response is a highly specific one. The metronome imitation will not do - at least, not for
young infants.
So it seems fairly certain that this is the explanation of the mother’s left-side approach to baby-holding.
It is interesting that when 466 Madonna and child paintings (dating back over several hundred years) were
analysed for this feature, 373 of them showed the baby on the left breast. Here again the figure was at the 80 per
cent level. This contrasts with observations of females carrying parcels, where it was found that 50 per cent
carried them on the left and 50 per cent on the right.
What other possible results could this heartbeat imprinting have? It may, for example, explain why we
insist on locating feelings of love in the heart rather than the head. As the song says: ‘You gotta have a heart!’ It
may also explain why mothers rock their babies to lull them to sleep. The rocking motion is carried on at about
the same speed as the heartbeat, and once again it probably ‘reminds’ the infants of the rhythmic sensations
they became so familiar with inside the womb, as the great heart of the mother pumped and thumped away
above them.
Nor does it stop there. Right into adult life the phenomenon seems to stay with us. We rock with
anguish. We rock back and forth on our feet when we are in a state of conflict. The next time you see a lecturer
or an after-dinner speaker swaying rhythmically from side to side, check his speed for heartbeat time. His
discomfort at having to face an audience leads him to perform the most comforting movements his body can
offer in the somewhat limited circumstances; and so he switches on the old familiar beat of the womb.
Wherever you find insecurity, you are liable to find the comforting heartbeat rhythm in one kind of
disguise or another. It is no accident that most folk music and dancing has a syncopated rhythm. Here again the
sounds and movements take the performers back to the safe world of the womb.
(From The Naked Ape, by Desmond Morris. Jonathan Cape and McGraw-Hill, 1967)
Adaptive control of reading rate
One important factor in reading is the voluntary, adaptive control of reading rate, i.e. the ability to
adjust the reading rate to the particular type of material being read.
Adaptive reading means changing reading speed throughout a text in response to both the difficulty of
material and one’s purpose in reading it. Learning how to monitor and adjust reading style is a skill that
requires a great deal of practice.
Many people, even college students, are unaware that they can learn to control their reading speed. However, this skill can be greatly improved with a couple of hundred hours of work, as opposed to the thousands of hours needed to significantly alter language comprehension. Many college reading skills programmes include a training procedure aimed at improving students’ control of reading speed. However, a number of problems are involved in successfully implementing such a programme. The first problem is to
convince the students that they should adjust their reading rates. Many students regard skimming as a sin and
read everything in a slow methodical manner. On the other hand some students believe that everything,
including difficult mathematical texts, can be read at the rate appropriate for a light novel. There seems to be
evidence that people read more slowly than necessary. A number of studies on college students have found
that when the students are forced to read faster than their self-imposed rate, there is no loss in retention of
information typically regarded as important.
The second problem involved in teaching adaptive reading lies in convincing the students of the need to
be aware of their purposes in reading. The point of adjusting reading rates is to serve particular purposes.
Students who are unaware of what they want to get out of a reading assignment will find it difficult to adjust
their rates appropriately. They should know in advance what they want.
Once these problems of attitude are overcome, a reading skills course can concentrate on teaching the
students the techniques for reading at different rates. Since most students have had little practice at rapid
reading, most of the instruction focuses on how to read rapidly. Scanning is a rapid reading technique
appropriate for searching out a piece of information embedded in a much larger text - for example a student
might scan this passage for an evaluation of adaptive reading. A skilled scanner can process 10,000 or more
words per minute. Obviously, at this rate scanners only pick up bits and pieces of information and skip whole
paragraphs. It is easy for scanners to miss the target entirely, and they often have to rescan the text. Making
quick decisions as to what should be ignored and what should be looked at takes practice. However, the
benefits are enormous. I would not be able to function as an academic without this skill because I would not be
able to keep up with all the information that is generated in my field.
Skimming is the processing of about 800-1500 words a minute - a rate at which identifying every word
is probably impossible. Skimming is used for extracting the gist of the text. The skill is useful when the
skimmer is deciding whether to read a text, or is previewing a text he wants to read, or is going over material
that is already known.
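As a rough worked comparison of the rates just quoted (the 12,000-word chapter and the 250-words-per-minute careful-reading baseline are illustrative assumptions, not figures from the passage):

\[ t = \frac{N}{\text{rate}}: \qquad \frac{12{,}000}{250} = 48 \text{ min}, \qquad \frac{12{,}000}{1{,}200} = 10 \text{ min}, \qquad \frac{12{,}000}{10{,}000} \approx 1.2 \text{ min} \]

for careful reading, skimming, and scanning respectively.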
Both scanning and skimming are aided by a knowledge of where the main points tend to be found in
the text. A reader who knows where an author tends to put the main points can read selectively. Authors vary
in their construction style, and one has to adjust to author differences, but some general rules usually apply.
Section headings, first and last paragraphs in a section, first and last sentences in a paragraph, and highlighted
material all tend to convey the main points.
Students in reading skills programmes often complain that rapid reading techniques require hard work
and that they tend to regress towards less efficient reading habits after the end of the programme. Therefore, it
should be emphasised that the adaptive control of the reading rate is hard work because it is a novel skill. Older
reading habits seem easy because they have been practised for longer. As students become more practised in
adjusting reading rate, they find it easier. I can report that after practising variable reading rates for more than
ten years, I find it easier to read a text at an adjustable rate than at a slow, methodical, word-by-word rate. This is something of a problem for me, because part of my professional duties is to edit papers, which have to be processed word by word. I find it very painful to have to read at this rate.
SOCIOLOGY
RATIONAL AND IRRATIONAL ELEMENTS IN CONTEMPORARY SOCIETY
Imagine yourselves standing at a street corner of a large and busy city. Everything in front of you is bustling, moving. Here, to your left, a man laboriously pushes a wheelbarrow. There, in measured trot, a horse
is pulling a carriage; on all sides you see a constant stream of cars and buses. Above you, somewhere in the
distance, can be heard the buzzing noise of an aeroplane. In all this there is nothing unusual; nothing that
would to-day call for surprise or astonishment; it is only when concentrated analysis has revealed the
problematic aspect of even the most obvious things in life that we discover sociological problems underlying
these everyday phenomena. Wheelbarrow, carriage, automobile, and aeroplane are each typical of the means of
conveyance in different phases of historical development. They originate in different times, thus they represent
different phases of technical development; and yet they are all used simultaneously. This particular
phenomenon has been called the law of the 'contemporaneousness of the non-contemporaneous'. However well
these different phases of history seem to exist side by side in the picture before us, in certain situations and
under particular circumstances they can lead to the most convulsive disturbances in our social life.
No sooner does this thought occur to us than we can see a different picture unfolding itself. The pilot
who only a minute ago seemed to be flying quietly above us hurls a hurricane of bombs and in the twinkling of
an eye lays waste everything and annihilates everybody underneath him. You know that this idea is far from
being a figment of the imagination, and the uneasiness which its horror awakens in you leads involuntarily to a
modification of your previous admiration of human progress. In his scientific and technical knowledge man
has, indeed, made miraculous strides forward in the span of time that separates us from the days when the
carriage came into use; but are human reason and rationality, outside the technical field, today so very different from what they were in the distant past of which the wheelbarrow is a symbol? Do our motives and
impulses really operate on a higher plane than those of our ancestors? What, in essence, does the action of the
pilot who drops bombs signify?
Surely this: that man is availing himself of the most up-to-date results of technical ingenuity in order to
satisfy ancient impulses and primitive motives. If, therefore, the city is destroyed by the deadly means of
modern warfare this must be attributed solely to the fact that the development of man's technical powers over
nature is far ahead of the development of his moral faculties and his knowledge of the guidance and
government of society. The phenomenon suggested by this whole analogy can now be given a sociological
designation; it is the phenomenon of a disproportionate development of human faculties. This phenomenon of a
disproportionate development can be observed not only in the life of groups but also in that of individuals. We
know from child-psychology that a child may be intellectually extremely precocious, whilst the development of
his moral or temperamental qualities has been arrested at an infantile stage. Such an unevenly balanced
development of his various faculties may be a source of acute danger to an individual; in the case of society, it
is nothing short of catastrophic.
(From Hobhouse Memorial Lecture, 1940, by Karl Mannheim)
SOCIAL LIFE IN A PROVINCIAL UNIVERSITY
A number of assumptions about the way of life in a provincial university are current today. I myself used to hold a number of them, but in the course of an inquiry at King's College, Newcastle, on which this paper is based, I modified my views considerably.
It was with a view to seeing how much truth there was in these assumptions that an inquiry was
undertaken at King's College, Newcastle, during the academic years 1952-54. One-tenth of the undergraduate
population was interviewed; in order to check whether conditions in Newcastle were peculiar, or common to
other northern university cities, a small sample of Liverpool students living at home was also seen, making a total of 373.
First the question of the students' homes: university teachers may find it deplorable that many of the
students live at home with parents or other guardians, but this is by no means the unanimous view of the
students themselves. Nearly half of them, both at Liverpool and Newcastle, consciously prefer it, and think it
has many advantages over going away to another university. About an equal number regret it, and a small
number are undecided either way. Those who prefer living at home ascribe their preference to a number of
factors, principally home cooking, good facilities for working, without the distractions of resident life, and the
companionship of those for whom they feel affection.
Not a few of those who wished themselves away said 'It's hopeless to try and work at home'; and
indeed the first question we must ask, if we wish to get a true picture of the home student's life, is: how good a
place is the home for a young man or woman who needs to spend many hours each week in quiet,
uninterrupted reading? It might be supposed that household chores would affect the picture, and students
were asked to say how much time they spent on them daily. The results were classified as 'Heavy', 'Negligible'
and 'Neither'. Counted under negligible were those who replied 'None at all' or 'Hardly any' or 'I wash up
occasionally'. These accounted for just 44 per cent of the Newcastle and the Liverpool students. The criterion by
which to judge of 'Heavy' chores was whether the student himself felt his share to be a heavy one. It is possible
that some of the others, not so classified, spent as many hours on house-work; but what is undertaken from
choice, rather as a hobby and relaxation, is bound to seem different from a regular responsibility, undertaken
from necessity. In this category were found 12 per cent of the Newcastle students and only 5 out of the 67 in
Liverpool. As one would expect, more women than men are found in this group, and with both, those whose
responsibilities are heavy are usually the victims of some kind of family emergency, such as the illness of the
mother or (occasionally) the father, or the death of the mother. The remainder, though spending an amount of
time on domestic chores which is by no means negligible, are not oppressed by it, and do not feel it a hindrance
to their work. It may be taken that their own instinct is sound in this, and that if they had not been doing house-
work they would have been doing something other than reading.
Much more fundamental to the matter we are considering is the question of where they do their work
when they are at home. The students interviewed, then, were asked the question 'Where do you usually work
when at home?' In Newcastle 30 Out of 152 give the family living-room as their usual place, and another 18 say
they sometimes work there. But 95, or 62 per cent, have special provision made for them, either by some
heating arrangement in their bedroom, or by the putting of a fire in another living-room. Much depends on
how one looks at it. In comparison with the Oxford college or the hall of residence, where every student has his
own room, it seems regrettable that as many as a third of these students worked in the family room, either
sometimes or always. But if one reflects that in all probability most of these people, when they were grammar
school pupils, worked in the family room, then the figure of 62 per cent for whom special provision is made
indicates a real effort on the part of these families to meet the felt needs of a university student, for heating is a
heavy item in a family budget. The question also arises in connection with students living in lodgings, who will
be considered later. The culprit seems to be chiefly that unique social institution, the English bedroom, on
account of which about half the available living space in a house is unheated and unfurnished so as to render it
unusable for anything but sleeping.
(From an article by Alice Eden in The British Journal of Sociology, December 1959.)
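The arithmetic behind 'a third' and '62 per cent' above, using the passage's own figures:

\[ 30 + 18 = 48, \qquad \frac{48}{152} \approx 32\% \approx \frac{1}{3}, \qquad \frac{95}{152} \approx 62\% \]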
THE MENACE OF OVER-POPULATION
The essential fact about the population problem is well known. It is simply that world population is
increasing at a rate with which food and other production may not be able to keep pace and will certainly not
be able to overtake sufficiently to raise the standard of living in the underdeveloped countries. World
population is thought to have increased by 1,000 million since 1900 - that is, by more than 60 per cent (Figure 1)
- to raise the estimated total in 1956 to about 2,700 million, and the present rate of increase, which far exceeds
the estimates of 25 years ago, is such as to conjure up visions of staggering numbers in the foreseeable future.
The total, now thought to be approaching 3,000 million, is not, of course, equally distributed in
proportion to the land areas. Europe, including the U.S.S.R., is about averagely populated, Africa, North and
South America and Oceania are under-populated and Asia is greatly over-populated (Figure 2 a, b). The rate of
increase is also very uneven. Figure 3 shows birth-rates and death-rates since 1947 for four contrasting areas,
the difference in the two rates representing natural increase. In Singapore a very high and almost static birth-
rate, coupled with a rapid decline in the death-rate, is giving rise to serious alarm. Many other regions, such as
Malaya, Ceylon and Mexico, are in the same position to a greater or lesser extent. In Japan a decline in the
death-rate has been offset by a dramatic drop of nearly 50 per cent in the birth-rate brought about mainly by
quasi-legalized abortion. In the United States a low and slightly declining death-rate has combined with a
substantial and recently static birth-rate to give a formidable rate of natural increase. In Great Britain a death-
rate which is not particularly low combines with a low birth-rate to give a very small rate of increase. Those
Asian countries in which the rate of increase is still slow are being held back by a high death-rate rather than a
low birth-rate, and the same applies to Africa. Unless steps are taken now, it is only a matter of time before the
population explosion extends also to these areas.
The differing patterns of birth-rates and death-rates cause differences in the age structure of
populations (Figure 4). A high death-rate and a high, static birth-rate mean that each age group comprises
fewer people than the one before, giving the pyramidal 'profile' typical of India and many other countries today
and of Great Britain a century ago. The same applies as a whole to South America, which demographically is
essentially a continent of the young. The profile for the United Kingdom, in which five years ago there were
about as many people aged 40 to 45 as there were infants 0 to 5 years old, shows the effect of an erratic but
overall decrease in the birth-rate over many years.
The explosive growth of world-population has not been caused by a sudden increase in human fertility,
and probably owes little in any part of the world to an increase in birth-rate. It has been caused almost entirely
by advances in the medical and ancillary sciences, and the consequent decrease of the death-rate in areas where
the birth-rate remains high. This is of some biological interest. Nature takes as her motto that nothing succeeds
like excess, and any living thing, including Man, if able to reproduce without restraint to the limit of its
capacity, would soon inundate all parts of the world where it could exist. As it is, biological populations are
kept severely in check by limiting factors, of which the most important are limitations of food supply, disease
and enemies, and fluctuations in natural populations are determined by fluctuations in these limiting factors.
Generally speaking relaxation of one factor, after a period of expansion, brings into operation one of the other
two.
In the 100 years before the second World War, the expectation of life at birth in England and Wales rose
from about 40 years to over 60 years - that is, one year every five years. In India the expectation of life at birth is
about 40 years, and is said to be increasing by 2½ years every five years. Even if the birth-rate were at no more
than replacement level, the increasing expectation of life would add enormously to the population of India.
True, the expectation of life is not likely to increase indefinitely at the present velocity, but it has a very long
way to go in many countries of the world. Even in the developed countries it has some way to go before
everyone dies essentially of senility, and in the meantime increasing longevity will reinforce natural
reproductivity. With present birth-rate and death-rate trends, the world is threatened with astronomical
numbers of people. To quote from the preface of a United Nations report, The Future Growth of World Population
(1958): '... it took 200,000 years for the world's human population to reach 2,500 million; it will now take a
mere 30 years to add another 2,000 million. With the present rate of increase, it can be calculated that in 600
years the number of human beings on Earth will be such that there will be only one square metre for each to
live on. It goes without saying that this can never take place, something will happen to prevent it.' The human
race will have to decide whether that 'something' is to be pleasant or unpleasant.
(A. S. Parkes, from an article in The New Scientist)
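As a rough check on the arithmetic of that projection, the short sketch below (an editorial illustration, not part of Parkes's article; the land-area figure is an assumption) solves for the steady annual growth rate at which 2,500 million people would reach one per square metre of land in 600 years.

```python
import math

P0 = 2.5e9             # world population cited for the mid-1950s
LAND_AREA_M2 = 1.5e14  # approximate land surface of the Earth (assumed figure)
YEARS = 600

# Solve P0 * (1 + r)**YEARS = LAND_AREA_M2 for the annual growth rate r.
r = math.exp(math.log(LAND_AREA_M2 / P0) / YEARS) - 1
print(f"implied steady annual growth rate: {r:.2%}")  # about 1.9 per cent

# Conversely, the roughly 1.8 per cent growth rate of the late 1950s gives:
print(f"{P0 * 1.018 ** YEARS:.2g} people after {YEARS} years")  # about 1.1e14
```

A growth rate of under 2 per cent a year, close to that actually observed at the time, is enough to produce the quoted result.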
CHANGES IN ENGLISH SOCIAL LIFE AFTER 1918


The most remarkable outward change of the Twenties was in the looks of women in the towns. The
prematurely aged wife was coming to be the exception rather than the rule. Children were fewer and healthier
and gave less trouble; labour-saving devices were introduced, especially for washing, cleaning, and cooking - the introduction of stainless plate and cutlery saved an appreciable amount of time daily, and this was only one of a
hundred such innovations. Provisioning also had become very much easier. The advertising of branded goods
was simplifying shopping problems. Housewives came to count on certain brands of goods, which advertisers
never allowed them to forget. The manufacturers' motto was: 'Swear not by the moon, the inconstant moon, but
swear by constant advertising'. They made things very easy for the housewives by selling their foods in the
nearest possible stage to table-readiness: the complicated processes of making custard, caramel, blanc-mange,
jelly, and other puddings and sweets, were reduced to a single, short operation by the use of prepared
powders. Porridge had once been the almost universal middle class breakfast food. It now no longer took
twenty minutes to cook, Quick Quaker Oats reducing the time to two; but even so, cereals in the American
style, eaten with milk, began to challenge porridge and bacon and eggs in prosperous homes, and the bread
and margarine eaten by the poor. At first the only choice was Force and Grape-Nuts; but soon there was a
bewildering variety of different 'flakes'; and grains of rice, wheat and barley 'puffed' by being fired at high
velocity from a sort of air gun. Bottled and tinned goods grew more and more various and plentiful. When the
war ended the only choice was soup, salmon, corned beef, Californian fruits, and potted meat; but by the
Thirties almost every kind of domestic and foreign fruit, meat, game, fish, vegetable could be bought, even in
country groceries. Foodstuffs that needed no tin-opener were also gradually standardized: eggs, milk, and
butter were graded and guaranteed and greengrocers began selling branded oranges and bananas. Housewives
could send or ring up for goods without inspecting them; more and more shops called daily or weekly for
orders and delivered free of charge as light commercial vans displaced the horse and cart. The fish-van brought
fresh fish to the door even in inland towns and villages. The cleanest and neatest shops secured the best
custom; flies and wasps disappeared from grocers' counters, finding no open pots of treacle or boxes of sugar to
attract them, and the butchers began keeping their carcases in refrigerators out of sight, not suspended bleeding
from hooks in the full glare of the sun. By the Thirties cellophane, a cheap wood-pulp product, was coming into
general use for keeping dry groceries and cigarettes fresh and clean, and soon also covered baskets of
strawberries, lumps of dates, and even kippers and other cured fish.
Woolworth's stores were the great cheap providers of household utensils and materials. There had been
a few '6½d. Bazaars' before the war, but the Woolworth system was altogether new. It worked by small profits
and quick returns in a huge variety of classified and displayed cut-price goods; some, such as excellent glass
and hardware, were even sold below cost price to attract custom. The Daily Herald reported in 1924 that the
T.U.C. was reviewing complaints about working conditions in Woolworth's - 'the well-known bazaar-owners' -
and that this was the more serious because the stores were patronized chiefly by the working-class. But the firm
never had any difficulty in engaging unskilled sales-girls at a low wage; for 'the local Woolworth's' was
increasingly the focus of popular life in most small towns. And the name of Woolworth was a blessed one to the
general public; wherever a new branch was opened the prices of ironmongers, drapers, and household
furnishers in the neighbourhood would drop twopence in the shilling. The middle class at first affected to
despise Woolworth's goods, but they soon caught the working-class habit and would exclaim brightly among
themselves: 'My dear - guess where I got this amazing object - threepence at Maison Woolworth! I don't know
how they do it!'
Woolworth's, the Building Societies, and the Instalment System made it financially possible for people
of small means to take over new houses. The instalment or 'never-never' system was being applied to all major
household purchases, such as furniture, sewing-machines, vacuum-cleaners, gas-ovens, wireless sets. A Punch
illustration showed a young mother, watching her husband writing out the monthly cheque to pay off the
maternity-home debt: 'Darling, only one more instalment and Baby will be ours'.
(From The Long Weekend by Robert Graves and Alan Hodge)
SCIENTIFIC METHOD IN THE SOCIAL SCIENCES
Even the social scientist who is occupied with the study of what are called institutions must draw his
ultimate data (with one important exception mentioned below) from the experience of the senses. Suppose, for
instance, that he is engaged on a study of the role of trade unions in contemporary England. The abstract
conception 'trade union' is simply a shorthand for certain types of behaviour by certain people, of which we can
only be aware by sensory perception. It means men sitting in a room and making certain sounds in the conduct
of a 'trade union meeting', or handing over to other persons tangible objects (money) as their subscriptions to
the union. Anyone who wishes to make a study of trade unions, or even of the more abstract conception 'trade
unionism', can only do so by personally observing such behaviour, or by using his eyes and ears on books and
speeches made by other people who have themselves made such observations (or who have in their turn heard
or seen records of such observations made by others). Even such comments on a union meeting as that it was
'orderly' or 'peaceful' are fundamentally statements about its physical properties: an orderly meeting is
presumably one in which people do not make noises by banging upon the table or speaking very loudly.
This dependence of social studies upon sense perception is certainly a wholesome reminder of the
fundamental homogeneity of the original data of science. For knowledge of the external world, whether of
things or of people, we continually come back to our five senses in the end. Nevertheless, if a great mass of data
relevant to social science is sensory, we have, I think, also to admit an important collection that is not: namely,
the whole body of primary mental or psychological experience. Perception of mental pleasure and pain appears
to have the same universality as sensory experience. At all levels of culture, sensations of simple happiness and
unhappiness are as general as are the experiences of seeing and hearing. It is of course true that no person can
experience the feelings of anyone other than himself; but equally no one can see with another's eyes or hear
with another's ears. The grounds for belief in the sense experiences of other people and the grounds for belief in
their primitive psychological experiences are thus both equally shaky, or equally firm. We derive our
conviction that other people experience emotion from the fact that they say so, and from analogies between
their behaviour and our own: we derive our conviction that they see and hear from exactly the same evidence.
The irresistibility of psychological experience is perhaps slightly more disputable. If one's eyes are open
and one looks in a certain quarter one cannot help seeing. Is it equally true that one cannot help a feeling of
pleasure or pain or shock or excitement? Essentially, I should say that it is. But it is clear that primitive
emotional reactions can be inhibited: one can, for example, contrive not to be depressed by an event.
Nevertheless, if we stand back from all philosophical niceties and ask ourselves whether psychological
sensation ought to be omitted from the data of the social sciences on the ground that it is doubtfully 'primitive',
there cannot, I think, be much doubt about the answer. We must conclude with Bertrand Russell 'that there is
knowledge of private data, and there is no reason why there should not be a science of them'. Equally, if we
consider whether the similarities or the differences, in this matter of universality-plus-irresistibility, between
psychological and sensory experience are the more impressive, we are surely bound to come down on the side
of the similarities. Certainly, social studies which consistently ignored human feelings would be worse than
laughable.
(From Testament for Social Science by Barbara Wootton)
The Troubles of shopping in Russia
A large crowd gathered outside a photographic studio in Arbat Street, one of the busiest shopping
streets in Moscow, recently. There was no policeman within sight and the crowd was blocking the pavement.
The centre of attraction - and amusement - was a fairly well-dressed man, perhaps some official, who was
waving his arm out of the ventilation window of the studio and begging to be allowed out. The woman in
charge of the studio was standing outside and arguing with him. The man had apparently arrived just when
the studio was about to close for lunch and insisted upon taking delivery of some prints which had been
promised to him. He refused to wait so the staff had locked the shop and gone away for lunch. The incident
was an extreme example of the common attitude in service industries in the Soviet Union generally, and
especially in Moscow. Shop assistants do not consider the customer as a valuable client but as a nuisance of
some kind who has to be treated with little ceremony and without concern for his requirements.
For nearly a decade, the Soviet authorities have been trying to improve the service facilities. More shops
are being opened, more restaurants are being established and the press frequently runs campaigns urging
better service in shops and places of entertainment. It is all to no avail. The main reason for this is shortage of
staff. Young people are increasingly reluctant to make a career in shops, restaurants and other such establishments.
Older staff are gradually retiring and this leaves a big gap. It is not at all unusual to see part of a restaurant or a
shop roped off because there is nobody available to serve. Sometimes, establishments have been known to be
closed for several days because of this.
One reason for the unpopularity of jobs in the service industries is their low prestige. Soviet papers and
journals have reported that people generally consider most shop assistants to be dishonest and this conviction
remains unshakeable. Several directors of business establishments, for instance, who are loudest in complaining
about shortage of labour, are also equally vehement that they will not let their children have anything to do
with trade.
The greatest irritant for the people is not the shortage of goods but the time consumed in hunting for
them and queuing up to buy them. This naturally causes ill-feeling between the shoppers and the assistants
behind the counters, though often it may not be the fault of the assistants at all. This, too, damages hopes of
attracting new recruits. Many educated youngsters would be ashamed to have to behave in such a negative
way.
Rules and regulations laid down by the shop managers often have little regard for logic or convenience.
An irate Soviet journalist recently told of his experiences when trying to have an electric shaver repaired.
Outside a repair shop he saw a notice: ‘Repairs done within 45 minutes.’ After queuing for 45 minutes he was
asked what brand of shaver he owned. He identified it and was told that the shop only mended shavers made
in a particular factory and he would have to go to another shop, four miles away. When he complained, the
red-faced girl behind the counter could only tell him miserably that those were her instructions.
All organisations connected with youth, particularly the Young Communist League (Komsomo1), have
been instructed to help in the campaign for better recruitment to service industries. The Komsomol provides a
nicely-printed application form which is given to anyone asking for a job. But one district head of a distribution
organisation claimed that in the last ten years only one person had come to him with this form. 'We do not need
fancy paper. We do need people!’ he said. More and more people are arguing that the only way to solve the
problem is to introduce mechanisation. In grocery stores, for instance, the work load could be made easier with
mechanical devices to move sacks and heavy packages.
The shortages of workers are bringing unfortunate consequences in other areas. Minor rackets flourish.
Only a few days ago, Pravda, the Communist Party newspaper, carried a long humorous feature about a
plumber who earns a lot of extra money on the side and gets gloriously drunk every night. He is nominally in
charge of looking after 300 flats and is paid for it. But whenever he has a repair job to do, he manages to screw
some more money from the flat dwellers, pretending that spare parts are required. Complaints against him
have no effect because the housing board responsible is afraid that they will be unable to get a replacement. In a
few years’ time, things could be even worse if the supply of recruits to these jobs dries up altogether.
TECHNOLOGY
SEDUCED BY TECHNOLOGY
My neighbour, Nick, is a soft-spoken, easy-going fellow who owns a big, ungainly dog named Duffy
and has a passion for music. He has been a freelance musician all his adult life and plays the double bass for a
living. But things have changed in the music business: it is not as easy to earn a living as a freelance musician today as it was a few years ago.
A lot of the well-paid 'session work' has disappeared and been replaced by pre-programmed,
computerised synthesizers. Nick still plays in the 'pit' when he can, in touring musicals like Miss Saigon or
Phantom of the Opera. But today he also works part-time in a music shop, helping to ship out trumpets and
French horns to school bands and re-stocking inventory when new shipments arrive.
Nick is not untypical these days. In fact his story is just one of millions that unveil the other side of the
computer revolution - the human costs and consequences of the new 'wired world' which receive little attention
from government bureaucrats or industry boosters.
Fantastic, science-fiction tinged claims about the benefits of the coming 'information age' are hard to
escape. The press is full of hacks extolling the liberating virtues of electronic mail and tub-thumping about how
the Internet will unite the masses in a sort of electronic, Jeffersonian democracy (at least those with a personal
computer, modem and enough spare cash to pay the monthly hook-up fee).
'If you snooze, you lose' is the underlying message. Jump on board now or be brushed aside as the new
high-tech era reshapes the contours of modern life. This is not the first time that technology has been packaged
as a panacea for social progress. I can still recall a youthful Ronald Reagan touting for General Electric on
American television back in the 1950s: 'Progress is our most important product,' the future President intoned.
That ideology of progress is welded as firmly now as it was to the power-loom in the early nineteenth
century, the automobile in the 1920s or nuclear power in the 1960s. Yet the introduction of all these technologies
had disastrous side effects. The power-loom promised cheap clothing and a wealthier Britain but produced
catastrophic social dislocation and job loss. The car promised independence and freedom and delivered
expressways choked with traffic, suburbanization, air pollution and destructive wars fought over oil supplies.
Nuclear power promised energy 'too cheap to meter' and produced Chernobyl and Three Mile Island.
There is a lesson here that can and should be applied to all new technologies - and none more so than
computers. One of the last century's more astute analysts of communications technologies, Marshall McLuhan,
said it best: 'We shape our tools and thereafter our tools shape us.' In his cryptic way McLuhan was simply
summing up what a small band of dogged critics have been saying for decades. Technology is not just
hardware - whether it's a hammer, an axe or a desk-top PC with muscular RAM and a Pentium chip. Limiting it
in this way wrenches technology from its social roots. The conclusion? It's not 'things' that are the problem, it's
people.
This has the simple attraction of common sense. Yet the more complex truth is that technologies carry
the imprint of the cultures from which they issue. They arise out of a system, a social structure: 'They are grafted on to it,' argues Canadian scientist Ursula Franklin, 'and they may reinforce or destroy it, often in ways that are neither foreseen nor foreseeable.' What this means is that technology is never neutral. Even seemingly benign
technologies can have earth-shaking, unintended, social consequences.
The American writer Richard Sclove outlines what happened when water was piped into the homes of
villagers in Ibieca in north eastern Spain in the early 1970s. The village fountain soon disappeared as the centre
of community life when families gradually bought washing machines and women were released from
scrubbing laundry by hand. But at the same time the village's social bonds began to fray. Women no longer
shared grievances and gossip together; when men stopped using their donkeys to haul water the animals were
seen as uneconomical. Tractors began to replace donkeys for field work, thus increasing the villagers' dependence on outside jobs to earn cash needed to pay for their new machines. The villagers opted for convenience and productivity. But, concludes Sclove: 'They didn't reckon on the hidden costs of deepening
inequality, social alienation and community dissolution.'
When it comes to introducing new technologies we need to look less at how they influence our lives as
individuals and more at how they impact on society as a whole.
Let's consider computers for a moment. Over the last decade new technologies based on micro-
electronics and digitized data have completely changed the way information is transmitted and stored. And
word processors and electronic mail have made writing and sending messages around the globe both cheap
and quick. 'Surfing the net' (clicking around the Internet in a random fashion for fun and entertainment) has
become the fashionable way to spend your leisure time. But these are benefits filtered through the narrow
prism of personal gain.
What happens when we step back and examine the broad social impact? How else are computers used?
Let's look at just four examples:
The money maze: The computer that allows us to withdraw cash from an automatic teller, day or night,
is the same technology that makes possible the international capital market. Freed from the shackles of
government regulation corporate money managers now shift billions of dollars a day around the globe. 'Surfing
the yield curve,' big money speculators can move funds at lightning speed, day and night - destabilizing
national economies and sucking millions out of productive long-term investment. The global foreign exchange
trade alone is now estimated at more than $1.3 trillion a day.
Computer games: They come in all shapes and sizes and you can find them as easily in Kuala Lumpur
as in Santiago or Harare. They vary from the jolt-a-second, shoot-'em-up games (often offensively sexist) to the
mesmerizing hand-held and usually more innocuous variety. Now think of 'Desert Storm', the world's first (and
certainly not the last) electronic war. Lethal firepower as colourful blips on our TV screens, charred bodies
reduced to the arching trail of an explosive starburst.
The ability to kill and maim large numbers of our fellow human beings is not a new skill. We've been
able to destroy human life many times over for more than half a century and computers have not changed that
reality. What they have done is sideline human decision-making in favour of computer programs - making
catastrophe ever more likely. As one software engineer has pointed out, complex computer programs require
maintenance just like mechanical systems. The problem is that 'every feature that is added and every bug that is
fixed adds the possibility of some new interaction between parts of the program' - thus making the software
less rather than more reliable.
Information as power: This is the mantra of those who suggest that both the Internet and the World
Wide Web will establish a new on-line paradigm of decentralized power, placing real tools for liberation into
the hands of the marginalized and the poor. That's a tall order but it is nonetheless true that the new
communications technologies can be used positively by political dissidents and human-rights activists.
Examples abound. At the 1995 UN Conference on Women in Beijing the proceedings were posted
instantaneously over the Net thus bringing thousands of women, who would have otherwise been left out, into
the discussions.
This 'high-tech jujitsu', as critic Jerry Mander calls it is both valiant and necessary. But it doesn't change
the key fact that computers contribute more to centralization than to decentralization. They help activists, but
they help the centralizing forces of corporate globalization even more. This is what the communications theorist
Harold Innis described as the 'bias' of technology in the modern era. Computers, as the most powerful of
modern communications tools, reflect their commercial and military origins.
Efficiency and employment: Technology has always destroyed jobs. In the economy of industrial society that is its main purpose - to
replace labour with machines, thereby reducing the unit cost of production while increasing both productivity
and efficiency. In theory this spurs growth: producing more and better jobs, higher wages and an increased
standard of living. This is the credo of orthodox economics and there are still many true believers.
But evidence to support this view in the real world of technology is fading fast. More widespread is the
pattern detailed in a recent Statistics Canada report which underlined the growth of a 'two-tiered' labour
market in that country. On the top tier: long hours of overtime by educated, experienced and relatively well-
paid workers. And on the bottom: a large group of low- paid, unskilled and part-time workers 'who can be
treated as roughly interchangeable'. And then there are those who miss out altogether - the chronic jobless, the
socially marginalized who form a permanent and troubling underclass.
This same trend is repeated throughout the industrialized world. In the US author Jeremy Rifkin says
less than 12 per cent of Americans will work in factories within a decade and less than two per cent of the
global work force will be engaged in factory work by the year 2020. 'Near-workerless factories and virtual
companies' are already looming on the horizon, Rifkin claims. The result? 'Every nation will have to grapple
with the question of what to do with the millions of people whose labour is needed less or not at all in an ever-
more-automated economy.'
Computerization is at the core of the slimmed down, re-engineered workplace that free-market boosters
claim is necessary to survive the new, lean-and-mean global competition. Even factory jobs that have relocated
to the Third World are being automated quickly. In the long run machines will do little to absorb the millions of
young people in Asia, Africa and Latin America who will be searching for work in the coming decades. Slowly,
that sobering fact is beginning to strike home. A Chinese government official recently warned that
unemployment in the world's most populous nation could soon leap to 268 million as Chinese industries
modernize and automate.
In the long run computers don't eliminate work, they eliminate workers. But in a social system based on
the buying and selling of commodities this may have an even more pernicious effect. With fewer jobs there is
less money in circulation; market demand slackens, reinforcing recession and sending the economy into a
tailspin. The impact of automation on jobs is a dilemma which can no longer be ignored.
Though thinkers in the green movement have been grappling with this issue for over a decade, few governments, and even fewer business people, are prepared to grasp the nettle. Both cling to the increasingly
flimsy belief that economic growth spurred by an increasing consumption of the earth's finite resources will
solve the problem. It won't. And serious questions need now to be raised about alternatives.
First, we need to think about democratizing the process of introducing new technologies into society
and into the workplace. At the moment these decisions are left typically in the hands of bureaucrats and
corporations who base their decisions on the narrow criteria of profit and loss. This bunkered mindset that
equates technological innovation with social progress needs to be challenged.
But there is also the critical issue of the distribution of work and income in a world where waged labour
is in a steady, inexorable decline. We can't continue to punish and stigmatize those who are unable to find jobs
just because there aren't enough to go around. Instead, we need to think creatively about how to redefine work
so that people can find self-esteem and social acceptance outside of wage labour. This may mean redesigning
jobs so that workers have more control and input into decisions about which technologies to adopt and what
products to make. Up to now this has been exclusively a management prerogative. But it also means
developing strategies to cut the average work week - without cutting pay. This would be one way of sharing
the wealth created by new technology and of creating jobs at the same time. Hard work also needs to go into
designing a plan for a guaranteed annual social wage. This is a radical (some would say outrageous) idea for
societies like ours that have anchored their value systems on the bedrock of wage labour.
But how can we deny people the basic rights of citizenship and physical well-being simply because the
economic system is no longer capable of providing for them?
(By Wayne Ellwood, New Internationalist, December 1996)
Blowing hot and cold on British windmills
Last year the Department of Energy and the Science Research Council together spent less than £1
million on research into wind energy, although £100 million each year goes into nuclear research and
development. In sharp contrast the USA has been spending some 60 million dollars each year on wind energy
and now plans a 1,000 million dollar demonstration and "commercialisation" programme.
In Germany, Denmark, and Sweden large programmes are under way and a number of megawatt size
windmills have been or are being built. In the USA, competitive electricity prices are already envisaged when
the present models of machines can be produced in large numbers (bringing down costs). The British Wind
Energy Association brings together scientists, engineers, and entrepreneurs from industry, universities, and
various Government bodies. With some 120 members, it now presents a respected view on the subject of wind
energy.
But the activity has remained small, and in my view seriously under-funded compared with many of
the other developed countries of the world. The UK is still without a single really large windmill (in the
megawatt range) and this means that we are failing to build up the practical experience which is essential if any
serious progress is to be made.
We in the UK are doing some very nice work on many of the associated problems of wind energy. But
none of these "generic studies" can replace real life experience with one or two very big windmills.
It is in the light of this background that we must examine the CEGB decision to press ahead with a
rather complete and ambitious programme.
1. An island site is to be sought and a megawatt size windmill is to be purchased and erected by 1985.
2. A smaller (100kW) windmill is to be bought and set up as soon as possible, so that early experience may be built up in the CEGB of windmill operation and integration into the grid.
3. International collaboration will be sought for research on offshore windmills.
It is my guess that the CEGB might well wish to build its first windmill group or "cluster" in the period
1985-1990. This first cluster might have 10 machines in it. In the long term we are thinking, of course, of clusters
of 400-1,000 machines (each of about 4MW). Such a cluster would provide an output similar in magnitude to a
modern coal fired or nuclear powered station.
The importance of this CEGB decision is that the United Kingdom is at last moving forward towards the
building of its first multi-megawatt windmill station. This is a milestone for those who believe that the so called
"renewable" energy sources (wind, wave and sun) have an important part to play in our energy future. Even
more important from the point of view of UK industry is the fact that it is the potentially largest UK customer
(the CEGB) who is taking the initiative.
Meanwhile, why has the CEGB opted for a lowland windmill? Good lowland sites offer average wind
speeds of about six metres per second, whereas hilltop and offshore sites can offer average wind speeds in
excess of eight metres/second. This ratio of 1.33 in windspeed actually represents a factor of 1.33 cubed (i.e.
about 2.5) in available energy, for a given size of windmill.
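The cube law invoked here can be checked in two lines; the sketch below assumes the standard expression for wind power, P = ½ρAv³, which the article does not spell out.

```python
# Wind power scales with the cube of wind speed: P = 0.5 * rho * A * v**3,
# so for a fixed rotor area the energy ratio is (v2 / v1) ** 3.
lowland, hilltop = 6.0, 8.0          # average wind speeds in m/s
ratio = hilltop / lowland
print(f"speed ratio {ratio:.2f} -> energy ratio {ratio ** 3:.2f}")
# -> speed ratio 1.33 -> energy ratio 2.37 (the article rounds to "about 2.5")
```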
Three or four years ago UK interest centred on hilltop and coastal sites. Developments since 1977 have
greatly changed the picture. Dr Peter Musgrove pioneered in the UK the idea of putting windmills in the
shallow waters of the North Sea. He pointed out that there are vast areas of shallow waters (less than 30 metres
deep) off the east coast of the UK. He showed that a number of windmill clusters in these shallow waters could
meet up to 30 per cent of UK annual electricity needs.
To their credit the Department of Energy took up these ideas and funded a detailed study. No
insuperable technical difficulties were found and the estimated building and running costs would indicate a
price for electricity which could become competitive in the near future with nuclear or coal-generated
electricity.
The CEGB has given the production costs for electricity for stations currently being built. Nuclear
power from Dungeness B is put at 2.62p/kWh; coal at Drax B is 3.59p/kWh; oil at Littlebrook D is 6.63p/kWh.
The Taylor Woodrow-led study came up with a number of different figures for the cost of North Sea
electricity depending on the assumptions made. Let me pick out the figure of 4.20p/kWh which was based on
the windmills having a diameter of 100 metres (which is the largest size presently being built in the world). The
figure taken for average wind speed is 9.5m/s.
Thus North Sea-generated electricity looks like being close to competitive on present fuel costs. This
figure of 4.2p/kWh could come down dramatically if we find that we are able to build much larger diameter
windmills than the present 100 metres. This is because foundation and tower costs were dominating the picture.
Larger windmills would mean fewer windmills and hence lower overall foundation and tower expenditure.
The CEGB has chosen, in the meantime, to go for the on-land option for their initial programme. This is
an eminently sensible decision. A number of windmill designs have been developed in the USA and elsewhere
for machines to operate in moderate wind regions. The 6 metres/second average wind speeds that we get in
many lowland areas of the UK are considered to be satisfactory speeds for wind turbine operation in the USA.
Given a lower wind speed you simply design a larger diameter wind turbine.
What now remains to be seen is not so much whether you can build large windmills or whether they
will be economically viable, but whether they will be environmentally and socially acceptable, placed for
example in the windy lowlands of Lincolnshire and East Anglia. The CEGB search for its first site should bring
out some interesting attitudes.
I have seen the 200 ft. Mod 1 Windmill which is on a hilltop near the small town of Boone in North
Carolina. Even from a distance of only two miles it is far from obtrusive. In fact I found it a most attractive sight
- but perhaps I am biased. The locals nonetheless are very proud of it!
Professor N. H. Lipman, of the Department of Engineering at Reading University, is a member of the Reading
Energy Group.
(From an article by Norman Lipman in The Guardian)
Direct use of solar radiation
Simply because a very large energy flux falls on Britain through the year, it is wrongly assumed by
many that the life to which they have become accustomed can be supported with solar energy. Even with the
reduced level of energy use we managed to achieve with proper insulation, the average level of radiation
occurring during the two months when we need warmth most - December and January - is down to an average
over the period of 15 watts per square metre. This is vanishingly small. To make any use of such small radiation
requires that correspondingly very large areas be set aside for low temperature collection, which will be
redundant during the summer months when average radiation, over the month of June for example, is 225
watts per square metre. Thus, unless you can set aside at least 40 square metres of cheap solar collecting
surface, facing due south at an angle of around 70° to the horizontal, it is best to forget energy self-sufficiency
from the sun alone.
It is not so silly, however, to use the sun when it does shine hot to provide between 50 and 90% of your
hot water, usually during the months of April to September when it is warm enough. 4 square metres of
collecting surface plumbed into the hot water system of your house can provide up to 600 kWh/m² per year. At 1p per kWh this could be worth £24 a year, and at 2p per kWh, which is the price we are likely to be paying soon, it would be worth £48 a year.
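A minimal sketch of that payback arithmetic, assuming the 4 square metres of collector and the annual yield just quoted:

```python
# Annual worth of solar hot water = area x yield per m^2 x price per kWh.
area_m2 = 4
yield_kwh_per_m2 = 600            # annual output per square metre of collector
for price_pence_per_kwh in (1, 2):
    value_pounds = area_m2 * yield_kwh_per_m2 * price_pence_per_kwh / 100
    print(f"at {price_pence_per_kwh}p/kWh: worth GBP {value_pounds:.0f} a year")
# -> GBP 24 a year at 1p/kWh and GBP 48 at 2p/kWh, as quoted
```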
For anyone wishing to embark on this course it should be remembered that while solar collectors are
very efficient in the lower temperature ranges up to around 10-17°C, this 'efficiency' falls off very quickly when
the temperature of the heat transfer fluid inside the collector rises above 35-40°C. In tests carried out on an
unglazed solar collector we found that we could achieve an average of about 50% efficiency at 40°C. By putting
a clear plastic glazing on top we increased the average efficiency from about 50 to 60% in the higher
temperature ranges. It is really not worth spending a great deal on glazing, since the extra energy you collect
will take years to pay back the additional cost of the framing and glazing, in terms of hot water.
I doubt whether it is worth spending money on making your own solar collector. The cheapest do-it-
yourself collector is a scrap steel radiator stripped down and painted black. Insulation and aluminium foil
underneath stops radiation and conduction losses. So far as possible, collectors should face south. The cheapest
place to site them is on the south wall of the house. The most expensive place is on the south roof, since you
will need the services, or skills, of a roofer and carpenter, in addition to those of a plumber and electrician,
unless you have these skills yourself.
Industrial Robots
There are few micro-electronic applications more likely to raise fears regarding future employment
opportunities than robots, for the very obvious reason that such machines directly replace human labour. The
emotive nature of the subject inevitably gives rise to misapprehensions.
It is necessary first to define an industrial robot. Alternative definitions and classifications abound but
basically a robot is a machine which moves, manipulates, joins or processes components in the same way as
a human hand or arm. It consists basically of three elements: the mechanical structure (including the artificial
wrist and gripper), the power unit (hydraulic, pneumatic or, increasingly, electrical) and the control system
(increasingly mini-computers and microprocessors). However, the essential characteristic of a robot is that it
can be programmed. Thus many devices (often called robots) would be better termed 'numerically controlled
arms', since they are mechanical arms controlled by rudimentary (non-computer) software and as such are not
radically different to much existing automation equipment. There are reportedly about 20,000 of the latter in
use in Japan, and perhaps several thousand in the United Kingdom. A robot, however, is here defined as a
hybrid of mechanical, electrical and computing engineering.
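A toy illustration of the distinction being drawn (hypothetical code, not from the source): the same arm hardware does different jobs purely because its stored program changes, which is what separates a true robot from a fixed 'numerically controlled arm'.

```python
from dataclasses import dataclass, field

@dataclass
class RobotArm:
    program: list = field(default_factory=list)  # the reprogrammable part

    def run(self):
        for step in self.program:
            print(f"arm: {step}")

welder = RobotArm(["move to seam", "strike arc", "weld 30 cm", "retract"])
sprayer = RobotArm(["move to panel", "open nozzle", "sweep left to right", "close nozzle"])
welder.run()    # same hardware...
sprayer.run()   # ...different task: only the stored program has changed
```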
Most robots in current use handle fairly straightforward tasks such as welding and spraying where the
software programmes controlling the machines are not very complex. However, the newer machines, usually
referred to as 'universal' but which are still under development, will be able to perform more complex assembly
tasks (for example, carburettor assembly).
Table 1 gives some world-wide estimates of robot diffusion. The table is based on a
number of different studies and must be treated with caution since there are problems of
definition: some companies producing medium technology robots do not classify them as
robots; and many large companies are known to have developed robots in-house, but there
is little statistical information on them.
Table 1: Estimates of the international diffusion of robots in 1978

Japan                       3,500
USA                         2,500
Europe                      2,000
  of which West Germany       600
           Sweden             600
           Italy          300-400
           France             200
           United Kingdom     100
World-wide                  8,000
The distribution of robots according to applications is again very difficult to estimate, but figures
released by the Unimate Company of America indicate that by June 1977 they had installed around 1600
machines world-wide. Of these, virtually 50 per cent were used for spot welding, about 11 per cent for die-
casting and another 5 per cent for machine loading. Other surveys have suggested that a significant proportion
of robots are used for coating, mainly for paint spraying.
What is clear from surveys to date is that the automobile and metal working and forming industries
have been the biggest users of robots. Indeed, the relative decline of these industries in Britain is probably a
significant cause of the relatively small use of robots in this country. With so little happening in Britain, and
indeed only marginal penetration world-wide, it is difficult to generalize either upon current motives for
investment in them or indeed the likely pattern of future applications; but obviously these are the matters that
will determine the employment impact.
The reasons most commonly cited for the introduction of robots into the work place are:
1. improvement in productivity both in work-rate and quality of output;
2. improvement of working conditions, most usually where health hazards are involved;
3. improved flexibility of production systems;
4. greater effective management for production control.
Balanced against these, however, are factors limiting the spread of robots such as:
1. high price;
2. hardware and software problems;
3. lack of knowledge, and perhaps even fear of robots;
4. competition from alternative automation systems.
It might be thought that union resistance should be added to the list; but it does not appear to be a
restraining influence overseas, and in our discussions on the subject with British manufacturers it has not been
placed high on their list of constraints. More often it has been stressed that the relatively low cost of labour in
this country greatly reduces the economic justification for investment in robots.
(From The Manpower Implications of Micro-Electronic Technology, MSC.)
Tomorrow's Phone Calls
One day we all may find it useful to have a facility for sending documents, writing and pictures across
the telephone lines. A detector at the sending end would quickly transmit the signals representing the
document to a printer at the receiving end where the documents would be accurately and quickly reproduced.
View-phone would become an easy facility to provide; not that the findings in America indicate an
overwhelming demand for us all to be seen as well as heard on the telephone.
Conference telephone facilities could become widely available and multi-channel cable television could
also consume some of that capacity, but perhaps the greatest use, initially at least, will be in what our Post
Office now calls Prestel.
The system will link the subscriber's television set to a public computer via his telephone. By dialling
the appropriate number on the telephone and connecting the television receiver to the circuit, the user will have
at his disposal sixty thousand or so pages of information, and all sorts of services which the telephone and the
television on their own could never provide.
The television receiver is fitted with a small memory capable of storing the digital information
necessary to generate one page of display. The conversion to Prestel also requires fitting the television with a
key pad, rather like the finger panel of a pocket calculator.
On dialling the central computer the massive memory there transmits the signals to the small memory
on the television set. These signals generate the first page of the instruction sequence. The picture welcomes
you to Prestel and gives you any message it has stored for you in its memory. You, of course, can leave a
message for someone else to pick up. Prestel talks to you in that strange computer fashion of asking you
questions to which you can answer 'Yes', 'No', or give a number.
By asking questions the Prestel computer finds out what services you require, whether you want to find
out the closing prices on the Stock Exchange, what is on at the theatre or something more complicated. Your
answers on your key pad dictate when the memory in your set will receive a new set of instructions from
central control and what those instructions will be.
Leaving a complicated message may prove difficult but as long as you are prepared to accept the
alternatives Prestel offers, your wife can, for instance, learn whilst you are on your way, that you are arriving at
the station at the time you have keyed in from your number pad, or that you are not coming home at all.
Then there are more subtle applications. On file in the main memory could be a whole host of valuable
data ranging from, for instance, the Highway Code, to when you are likely to be approved for a mortgage.
Because the computer can respond to your 'Yes', 'No' or number answers, it can actually give you advice.
One programme I tried was designed to help those anxious to adopt a child. The first question was, 'Are
you applying on behalf of yourself only? If so key zero, if not key one.' I keyed zero, for the sake of argument,
and up came question two. 'Are you the parent of the child?' Again zero for no. 'Are you over twenty-five?' I
lied a little and said 'no' again and this proved too much for the computer. 'You are not eligible to adopt,' came
back the answer.
If I had told the truth about being over twenty-five that would have made a difference and another
question might have emerged before a final answer was given. Exactly the same technique is being used in
some hospitals now for routine diagnosis of patients' complaints. The computers in this case are not public ones
on an open network, but maybe the time will come when the Prestel computer will also tell you what is the
matter with you, even if it takes a while longer for it actually to prescribe a treatment.
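The dialogue just described is, in effect, a small decision tree. The sketch below is a hypothetical reconstruction; the real Prestel program is not available, so the wording and the outcomes of the untried branches are invented.

```python
# A hypothetical sketch of the branching logic described above.
def adoption_dialogue(replies):
    """`replies` yields True/False answers keyed in on the number pad."""
    if not next(replies):            # applying on behalf of yourself only?
        return "Applications for others follow a different route."
    if next(replies):                # are you the parent of the child?
        return "A further set of questions for parents follows."
    if not next(replies):            # are you over twenty-five?
        return "You are not eligible to adopt."
    return "You may be eligible; further questions follow."

# Replaying the author's session: yourself only, not the parent, not over 25.
print(adoption_dialogue(iter([True, False, False])))
# -> "You are not eligible to adopt."
```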
Prestel will also play games with you. Computerised noughts and crosses, mazes, and problems like
balancing fuel consumption in the retro-rockets of your Mars lander against the gravitational pull of the planet
so that you land at a comfortable speed without running out of fuel, are all in the compendium at computer
headquarters.
In fact it is easy to forget that what Prestel is really trying to do is get you to use the phone. The Post
Office does not want to engage in games or even take over completely from the Citizen's Advice Bureau, but it
is in business to sell telephone calls, and the advent of Prestel gives it a mighty potent marketing weapon. But
Prestel and all other new services which will emerge in its wake can only work if the capacity for these extra
services is built into the system. It is optical fibre communication which promises to make that possible.
(From Tomorrow's World by M. Blackstad, BBC Publications)
Coal
The rapid growth of steam power relied directly on large supplies of its only fuel: coal. There was a
great increase in the amount of coal mined in Britain.
Coal was still cut by hand and the pits introduced few really new techniques. The increased demand
was met by employing more miners and by making them dig deeper. This was made possible by more efficient
steam pumps and steam-driven winding engines which used wire ropes to raise the coal to the surface, by
better ventilation and by the miner's safety lamp which detected some dangerous gases. By the 1830s seams
well below 1000 feet were being worked in south Durham and the inland coalfields of Lancashire and
Staffordshire. Central Scotland and south Wales were mined more intensively later. By the end of the
nineteenth century the best and most accessible seams were worked out. As the miners followed the eastern
seams of the south Yorkshire-north Midlands area, which dipped further below the surface, shafts of 3,000 feet
were not uncommon.
Some of the work in mines was done by women and children. Boys and girls were often put in charge of
the winding engines or of opening and shutting the trap doors which controlled the ventilation of the mines.
Then they had to crouch all day in the same spot by themselves in the dark. When these evils were at last
publicized in 1842 by a Royal Commission, many mines no longer employed women, but Parliament made it
illegal for them all. It also forbade them to employ boys under the age of ten. The limit, which was very difficult
to enforce, was increased to twelve in the 1870s. Subsequently it rose with the school leaving age.
Mining was very dangerous. Loose rocks were easily dislodged and the risk of being killed or injured
by one was always greater in the tall seams where they had further to fall. In the north of England fatal
accidents were not even followed by inquests to discover why they had happened until after 1815. Few safety
precautions were taken before the mid-nineteenth century. The mine owners insisted that they were not
responsible. The men were most reluctant to put up enough props to prevent the roof from falling in and to
inspect the winding gear and other machinery on which their lives depended. If they did, they spent less time
mining and so earned less money because the miners' pay was based not on how long they worked but on how
much coal they extracted. They preferred to take risks.
The deeper seams contained a dangerous gas called 'fire-damp' which could be exploded by the miners'
candles. The safety lamp, which was invented in the early nineteenth century, did not really solve this problem,
but it was often used to detect gas and so made the mining of deeper seams possible. There the air was more
foul, the temperature higher (one pit paid the men an extra 6d a day for working in 130°F) and the risk of fire-
damp even greater. In the 1840s a series of terrible explosions in the deeper mines led to stricter regulations,
which inspectors helped enforce. The inspectors were particularly keen on proper ventilating machines and,
although deeper shafts were sunk, they did not become more dangerous. However, many serious accidents still
occurred.
(From Britain Transformed, Penguin Books)
The Medium Is the Message
In a culture like ours, long accustomed to splitting and dividing all things as a means of control, it is
sometimes a bit of a shock to be reminded that, in operational and practical fact, the medium is the message.
This is merely to say that the personal and social consequences of any medium - that is, of any extension of
ourselves - result from the new scale that is introduced into our affairs by each extension of ourselves, or by any
new technology. Thus, with automation, for example, the new patterns of human association tend to eliminate
jobs, it is true. That is the negative result. Positively, automation creates roles for people, which is to say depth
of involvement in their work and human association that our preceding mechanical technology had destroyed.
Many people would be disposed to say that it was not the machine, but what one did with the machine, that
was its meaning or message. In terms of the ways in which the machine altered our relations to one another and
to ourselves, it mattered not in the least whether it turned out cornflakes or Cadillacs. The restructuring of
human work and association was shaped by the technique of fragmentation that is the essence of machine
technology. The essence of automation technology is the opposite. It is integral and decentralist in depth, just as
the machine was fragmentary, centralist, and superficial in its patterning of human relationships.
The instance of the electric light may prove illuminating in this connection. The electric light is pure
information. It is a medium without a message, as it were, unless it is used to spell out some verbal ad or name.
This fact, characteristic of all media, means that the "content" of any medium is always another medium. The
content of writing is speech, just as the written word is the content of print, and print is the content of the
telegraph. If it is asked, "What is the content of speech?," it is necessary to say, "It is an actual process of
thought, which is in itself nonverbal." An abstract painting represents direct manifestation of creative thought
processes as they might appear in computer designs. What we are considering here, however, are the psychic
and social consequences of the designs or patterns as they amplify or accelerate existing processes. For the
"message" of any medium or technology is the change of scale or pace or pattern that it introduces into human ;
affairs. The railway did not introduce movement or transportation or wheel or road into human society, but it
accelerated and enlarged the scale of previous human functions, creating totally new kinds of cities and new
kinds of work and leisure. This happened whether the railway functioned in a tropical or a northern
environment, and is quite independent of the freight or content of the railway medium. The airplane, on the
other hand, by accelerating the rate of transportation, tends to dissolve the railway form of city, politics, and
association, quite independently of what the airplane is used for.
Let us return to the electric light. Whether the light is being used for brain surgery or night baseball is a
matter of indifference. It could be argued that these activities are in some way the "content" of the electric light,
since they could not exist without the electric light. This fact merely underlines the point that "the medium is
the message" because it is the . medium that shapes and controls the scale and form of human association and
action. The content or uses of such media are as diverse as they are ineffectual in shaping the form of human
association. Indeed, it is only too typical that the "content" of any medium blinds us to the character of the
medium. It is only today that industries have become aware of the various kinds of business in which they are
engaged. When IBM discovered that it was not in the business of making office equipment or business
machines, but that it was in the business of processing information, then it began to navigate with clear vision.
The General Electric Company makes a considerable portion of its profits from electric light bulbs and lighting
systems. It has not yet discovered that, quite as much as A.T.&T., it is in the business of moving information.
The electric light escapes attention as a communication medium just because it has no "content." And
this makes it an invaluable instance of how people fail to study media at all. For it is not till the electric light is
used to spell out some brand name that it is noticed as a medium. Then it is not the light but the "content" (or
what is really another medium) that is noticed. The message of the electric light is like the message of electric
power in industry, totally radical, pervasive, and decentralized. For electric light and power are separate from
their uses, yet they eliminate time and space factors in human association exactly as do radio, telegraph,
telephone, and TV, creating involvement in depth.
(From Understanding media by Marshall McLuhan)
THE DEVELOPMENT OF ELECTRICITY
The phenomenon which Thales had observed and recorded five centuries before the birth of Christ
aroused the interest of many scientists through the ages. They made various practical experiments in their
efforts to identify the elusive force which Thales had likened to a 'soul' and which we now know to have been
static electricity.
Of all forms of energy, electricity is the most baffling and difficult to describe. An electric current cannot
be seen. In fact it does not exist outside the wires and other conductors which carry it. A live wire carrying a
current looks exactly the same and weighs exactly the same as it does when it is not carrying a current. An
electric current is simply a movement or flow of electrons.
Benjamin Franklin, the American statesman and scientist born in Boston in 1706, investigated the nature
of thunder and lightning by flying a child's kite during a thunderstorm. He had attached a metal spike to the
kite, and at the other end of the string to which the kite was tied he secured a key. As the rain soaked into the
string, electricity flowed freely down the string and Franklin was able to draw large sparks from the key. Of
course this could have been very dangerous, but he had foreseen it and had supported the string through an
insulator. He observed that this electricity had the same properties as the static electricity produced by friction.
But long before Franklin many other scientists had carried out research into the nature of electricity.
In England William Gilbert (1544-1603) had noticed that the powers of attraction and repulsion of two
non-metallic rods which he had rubbed briskly were similar to those of rubbed amber, though distinct from the
magnetism of lodestone - the rods had become electrified. Remembering Thales of old he coined the word
'electricity' for this curious quality.
Otto von Guericke (1602-1686) a Mayor of Magdeburg in Germany, was an amateur scientist who had
constructed all manner of gadgets. One of them was a friction machine built around a revolving sulphur globe
which produced high voltage charges when rubbed. Ramsden and Wimshurst later built greatly improved
electrostatic machines.
A significant breakthrough occurred when Alessandro Volta (1745-1827) in Italy constructed a simple
electric cell (in 1799) which produced a flow of electrons by chemical means. Two plates, one of copper and the
other of zinc, were placed in an acid solution and a current flowed through an external wire connecting the two
plates. Later he connected cells in series (voltaic pile) which consisted of alternate layers of zinc and copper
discs separated by flannel discs soaked in brine or acid which produced a higher electric pressure (voltage). But
Volta never found the right explanation of why his cell was working. He thought the flow of electric current
was due to the contact between the two metals, whereas in fact it results from the chemical action of the
electrolyte on the zinc plate. However, his discovery proved to be of incalculable value in research, as it enabled
scientists to carry out experiments which led to the discoveries of the heating, lighting, chemical and magnetic
effects of electricity.
One of the many scientists and physicists who took advantage of the 'current electricity' made possible
by Volta's cells was Hans Christian Oersted (1777-1851) of Denmark. Like many others he was looking for a
connection between the age-old study of magnetism and electricity, but now he was able to pass electric
currents through wires and place magnets in various positions near the wires. His epoch-making discovery
which established for the first time the relationship between magnetism and electricity was in fact an accident.
While lecturing to students he showed them that the current flowing in a wire held over a magnetic
compass needle and at right angles to it (that is east-west) had no effect on the needle. Oersted suggested to his
assistant that he might try holding the wire parallel to the length of the needle (north-south) and hey presto,
the needle was deflected! He had stumbled upon the electromagnetic effect in the first recorded instance of a
wire behaving like a magnet when a current is passed through it.
A development of Oersted's demonstration with the compass needle was used to construct the world's
first system of signaling by the use of electricity.
In 1837 Charles Wheatstone and William Cooke took out a patent for the world's first Five-needle
Telegraph, which was installed between Paddington railway station in west London and West Drayton station
a few miles away. The five copper wires required for this system were embedded in blocks of wood.
Electrolysis, the chemical decomposition of a substance into its constituent elements by the action of an
electric current, was discovered by the English chemists Anthony Carlisle and William Nicholson (1753-1815). If an
electric current is passed through water it is broken down into the two elements of which it is composed --
hydrogen and oxygen. The process is used extensively in modern industry for electroplating. Michael Faraday
(1791-1867) who was employed as a chemist at the Royal Institution, was responsible for introducing many of
the technical terms connected with electrolysis, like electrolyte for the liquid through which the electric current
is passed, and anode and cathode for the positive and negative electrodes respectively. He also established the
laws of the process itself. But most people remember his name in connection with his practical demonstration of
electromagnetic induction.
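Faraday's laws of electrolysis can be put into figures: the mass liberated at an electrode is proportional to the
charge passed and to the element's atomic weight divided by its valency, m = (I × t / F) × (M / z). The short
sketch below applies this to copper plating; the worked example and its numbers are ours, for illustration only,
and are not part of the original passage.

    # Illustrative use of Faraday's laws of electrolysis: m = (I * t / F) * (M / z).
    FARADAY = 96485.0  # coulombs per mole of electrons

    def mass_deposited(current_a, time_s, molar_mass_g, valency):
        moles_of_electrons = current_a * time_s / FARADAY
        return moles_of_electrons * molar_mass_g / valency

    # Electroplating copper (M = 63.5 g/mol, deposited from Cu2+ so z = 2)
    # with 2 amperes for one hour deposits roughly 2.4 grams.
    print(round(mass_deposited(2.0, 3600.0, 63.5, 2), 2))  # -> 2.37
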
In France Andre-Marie Ampere (1775-1836) carried out a complete mathematical study of the laws
which govern the interaction between wires carrying electric currents.
In Germany in 1826 a Bavarian schoolmaster Georg Ohm (1789-1854) had defined the relationship
between electric pressure (voltage), current (flow rate) and resistance in a circuit (Ohm's law), but 16 years had
to elapse before he received recognition for his work.
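Ohm's law itself is simple to state: the current I in a circuit equals the electric pressure V divided by the
resistance R. A minimal sketch (the values are illustrative, not the passage's):

    # Ohm's law: I = V / R (amperes = volts / ohms).
    def current_amps(volts, ohms):
        return volts / ohms

    # A 12-volt supply across a 6-ohm load drives a current of 2 amperes.
    print(current_amps(12.0, 6.0))  # -> 2.0
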
Scientists were now convinced that since the flow of an electric current in a wire or a coil of wire caused
it to acquire magnetic properties, the opposite might also prove to be true: a magnet could possibly be used to
generate a flow of electricity.
Michael Faraday had worked on this problem for ten years when finally, in 1831, he gave his famous
lecture in which he demonstrated, for the first time in history, the principle of electromagnetic induction. He
had constructed powerful electromagnets consisting of coils of wire. When he caused the magnetic lines of
force surrounding one coil to rise and fall by interrupting or varying the flow of current, a similar current was
induced in a neighbouring coil closely coupled to the first.
The colossal importance of Faraday's discovery was that it paved the way for the generation of
electricity by mechanical means. However, as can be seen from the drawing, the basic generator produces an
alternating flow of current (A.C.).
Rotating a coil of wire steadily through a complete revolution in the steady magnetic field between the
north and south poles of a magnet results in an electromotive force (E.M.F.) at its terminals which rises in value,
falls back to zero, reverses in a negative direction, reaches a peak and again returns to zero. This completes one
cycle or sine wave (1 Hz in S.I. units).
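The cycle just described is the graph of e(t) = E_peak × sin(2πft). The sketch below samples one such cycle; the
peak voltage and frequency chosen are common mains values, assumed here purely for illustration:

    # One cycle of alternating E.M.F.: e(t) = E_PEAK * sin(2*pi*FREQ*t).
    # Peak value and frequency are assumed for illustration.
    import math

    E_PEAK = 325.0  # volts
    FREQ = 50.0     # hertz

    for step in range(9):               # sample one full cycle in eighths
        t = step / (8 * FREQ)           # seconds
        e = E_PEAK * math.sin(2 * math.pi * FREQ * t)
        print(f"t = {t * 1000:5.2f} ms   e = {e:7.1f} V")
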
In recent years other methods have been developed for generating electrical power in relatively small
quantities for special applications. Semiconductors, which combine heat insulation with good electrical
conduction, are used for thermoelectric generators to power isolated weather stations, artificial satellites,
undersea cables and marker buoys. Specially developed diode valves are used as thermionic generators with an
efficiency, at present, of only 20% but the heat taken away from the anode is used to raise steam for
conventional power generation.
Sir Humphry Davy (1778-1829), one of Britain's leading chemists of the late 18th and early 19th centuries, is best remembered
for his safety lamp for miners which cut down the risk of methane gas explosions in mines. It was Davy who
first demonstrated that electricity could be used to produce light. He connected two carbon rods to a heavy
duty storage battery. When he touched the tips of the rods together a very bright white light was produced. As
he drew the rods apart, the arc light persisted until the tips had burnt away to the critical gap which
extinguished the light. As a researcher and lecturer at the Royal Institution Davy worked closely with Michael
Faraday who first joined the institution as his manservant and later became his secretary. Davy's crowning
honour in the scientific world came in 1820, when he was elected President of the Royal Society.
In the U.S.A. the prolific inventor Thomas Alva Edison (1847-1931) who had invented the incandescent
carbon filament bulb, built a number of electricity generators in the vicinity of the Niagara Falls. These used the
power of the falling water to drive hydraulic turbines which were coupled to the dynamos. These generators
were fitted with a spinning switch or commutator (one of the neatest gadgets Edison ever invented) to make the
current flow in unidirectional pulses (D.C.). In 1876 all electrical equipment was powered by direct current.
Today mains electricity plays a vital part in our everyday lives and its applications are widespread and
staggering in their immensity. But we must not forget that popular demand for this convenient form of power
arose only about 100 years ago, mainly for illumination.
Recent experiments in superconductivity, using ceramic instead of metal conductors, have given us an
exciting glimpse into what might be achieved for improving efficiency in the distribution of electric power.
Historians of the future may well characterise the 20th century as 'the century of electricity &
electronics'. But Edison's D.C. generators could not in themselves, have achieved the spectacular progress that
has been made. All over the world we depend totally on a system of transmitting mains electricity over long
distances which was originally created by an amazing inventor whose scientific discoveries changed, and are
still changing, the whole world. His name was scarcely known to the general public, especially in Europe,
where he was born.
Who was this unknown pioneer? Some people reckon that it was this astonishing visionary who
invented wireless, remote control, robotics and a form of X-ray photography using high frequency radio waves.
A patent which he took out in the U.S.A. in 1890 ultimately led to the design of the humble ignition coil which
energises billions and billions of spark plugs in all the motor cars of the world. His American patents fill a book
two inches thick. His name was Nikola Tesla (1856-1943).
Nikola Tesla was born in a small village in Croatia which at that time formed part of the great Austro-
Hungarian Empire. Today it is a northern province of Yugoslavia, a state created after the 1914-1918 war. Tesla
studied at the Graz Technical University and later in Budapest. Early in his studies he had the idea that a way
had to be found to run electric motors directly from A.C. generators. His professor in Graz had assured him
categorically that this was not possible. But young Tesla was not convinced. When he went to Budapest he got a
job in the Central Telegraph Office, and one evening in 1882, as he was sitting on a bench in the City Park he
had an inspiration which ultimately led to the solution of the problem.
Tesla remembered a poem by the German poet Goethe about the sun which supports life on the earth
and when the day is over moves on to give life to the other side of the globe. He picked up a twig and began to
scratch a drawing on the soil in front of him. He drew four coils arranged symmetrically round the
circumference of a circle. In the centre he drew a rotor or armature. As each coil in turn was energised it
attracted the rotor towards it and the rotary motion was established. When he constructed the first practical
models he used eight, sixteen and even more coils. The simple drawing on the ground led to the design of the
first induction motor driven directly by A.C. electricity.
Tesla emigrated to the U.S.A. in 1884. During the first year he filed no fewer than 30 patents, mostly in
relation to the generation and distribution of A.C. mains electricity. He designed and built his 'A.C. Polyphase
System' which generated three-phase alternating current at 25 Hz. One particular unit delivered 422 amperes at
12,000 volts. The beauty of this system was that the voltage could be stepped down using transformers for local
use, or stepped up to many thousands of volts for transmission over long distances through relatively thin
conductors. Edison's generating stations were incapable of any such thing.
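The advantage Tesla's system enjoyed follows from two simple relations: transmitted power is P = V × I, while
the power wasted in a line of resistance R is I² × R, so stepping the voltage up tenfold cuts the current tenfold
and the loss a hundredfold. The sketch below uses the unit quoted above (422 amperes at 12,000 volts); the line
resistance and the step-up ratio are assumptions made for illustration:

    # Line loss in transmission: P = V * I, loss = I**2 * R.
    # The 422 A / 12,000 V figures are from the passage; the 5-ohm line
    # and the tenfold step-up are assumed for illustration.
    LINE_RESISTANCE = 5.0  # ohms (assumed)

    def line_loss_kw(power_w, volts):
        amps = power_w / volts          # I = P / V
        return amps ** 2 * LINE_RESISTANCE / 1000.0

    power = 422.0 * 12_000.0            # about 5.06 megawatts
    for volts in (12_000.0, 120_000.0): # as generated, then stepped up tenfold
        print(f"{volts:8.0f} V -> loss {line_loss_kw(power, volts):7.1f} kW")
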
Tesla signed a lucrative contract with the famous railway engineer George Westinghouse, the inventor
of the Westinghouse Air Brake which is used by most railways all over the world to the present day. Their
generating station was put into service in 1895 and was called the Niagara Falls Electricity Generating
Company. It supplied power for the Westinghouse network of trains and also for an industrial complex in
Buffalo, New York.
After ten years Tesla began to experiment with high frequencies. The Tesla Coil which he had patented
in 1890 was capable of raising voltages to unheard-of levels such as 300,000 volts. Edison, who was still
generating D.C., claimed A.C. was dangerous and to prove it contracted with the government to produce the
first electric chair using A.C. for the execution of murderers condemned to death. When it was first used it was
a ghastly flop. The condemned man moaned and groaned and foamed at the mouth. After four minutes of
repeated application of the A.C. voltage smoke began to come out of his back. It was obvious that the victim had
suffered a horribly drawn-out death.
Tesla said he could prove that A.C. was not dangerous. He gave a demonstration of high voltage
electricity flowing harmlessly over his body. But in reality, he cheated, because he had used a frequency of
10,000 cycles (10 kHz) at extremely low current and because of the skin effect suffered no harm.
One of Tesla's patents related to a system of lighting using fluorescent glass tubes (not neon tubes)
excited by H.F. voltages. His workshop was lit by this method. Several years before Wilhelm Roentgen
demonstrated his system of X-rays Tesla had been taking photographs of the bones in his hand and his foot
from up to 40 feet away using H.F. currents.
More astonishing still is the fact that in 1893, two years before Marconi demonstrated his system of
wireless signaling, Tesla had built a model boat in which he combined power to drive it with radio control and
robotics. He put the small boat in a lake in Madison Square Garden in New York. Standing on the shore with a
control box, he invited onlookers to suggest movements. He was able to make the boat go forwards and
backwards and round in circles. We all know how model cars and aircraft are controlled by radio today, but
when Tesla did it a century ago the motor car had not been invented, and the only method by which man could
cover long distances was on horseback!
Many people believe that a modification of Tesla's 'Magnifying Transmitter' was used by the Soviet
Union when suddenly one day in October 1976 they produced an amazing noise which blotted out all radio
transmissions between 6 and 20 MHz (the 'Woodpecker'). The B.B.C., the N.B.C. and most broadcasting and
telecommunication organisations of the world complained to Moscow (the noise had persisted continuously for
10 hours on the first day), but all the Russians would say in reply was that they were carrying out an
experiment. At first nobody seemed to know what they were doing because it was obviously not intended as
another form of jamming of foreign broadcasts, an old Russian custom as we all know.
It is believed that in the pursuit of his life's ambition to send power through the earth without the use of
wires, Tesla had achieved a small measure of success at E.L.F. (extremely low frequencies) of the order of 7 to
12 Hz. These frequencies are at present used by the military for communicating with submarines submerged in
the oceans of the world.
Tesla's career and private life have remained something of a mystery. He lived alone and shunned
public life. He never read any of his papers before academic institutions, though he was friendly with some
journalists who wrote sensational stories about him. They said he was terrified of microbes and that when he
ate out at a restaurant he would ask for a number of clean napkins to wipe the cutlery and the glasses he drank
out of. For the last 20 years of his life until he died during World War II in 1943 he lived the life of a semi-
recluse, with a pigeon as his only companion. A disastrous fire had destroyed his workshops and many of his
experimental models and all his papers were lost for ever.
Tesla had moved to Colorado Springs where he built his largest ever coil which was 52 feet in diameter.
He studied all the different forms of lightning in his unsuccessful quest for the transmission of power without
wires.
In Yugoslavia, Tesla is a national hero and a well-equipped museum in Belgrade contains abundant
proof of the genius of this extraordinary man.
(From: The dawn of amateur radio in the U.K. and Greece: a personal view by Norman F. Joly.)
THE AUTONOMOUS HOUSE
The autonomous house on its site is defined as a house operating independently of any inputs except
those of its immediate environment. The house is not linked to the mains services of gas, water, electricity or
drainage, but instead uses the income-energy sources of sun, wind and rain to service itself and process its own
wastes. In some ways it resembles a land-based space station which is designed to provide an environment
suitable for life but unconnected with the existing life-support structure of Earth. The autonomous house uses
the life-giving properties of the Earth but in so doing provides an environment for the occupants without
interfering with or altering these properties.
Although the self-serviced house provides a useful starting-point for experiments in autonomy, as it
forms a small unit that can be designed, built and tested within a relatively short time, the idea can be
expanded to include self-sufficiency in food, the use of on-site materials for building and the reduction of the
building and servicing technology to a level where the techniques can be understood and equipment repaired
by a person without recourse to specialized training. Although it is possible to survive with pre-industrial
technology, this is not what is proposed by autonomous living. At present, however, technology appears to be
exploited for its own sake, without thought to its benefits, uses or effects on people or the external environment.
We are persuaded to expect a higher material standard of living when, for the majority, the standard that we
already have in the West is perfectly adequate. A marginal increase in this standard can only be made with the
use of yet greater quantities of the existing resources of the Earth. What are essentials for the American way of
life (full central heating, air conditioning, a car per person) are considered, albeit less so now, as luxuries for
Europeans, and what are considered necessary for a satisfactory European life (enough to eat, a home and fuel
to heat it, access to transport) would be luxuries for the ‘third world’. If we cannot find a way of levelling
standards rationally while there is time left to consider the problem, then the levelling may be forced on us as
the lack of fossil fuels on which western economy so critically depends precipitates a collapse which must
change our way of living if we are to survive at all.
The autonomous house is not seen as a regressive step. It is not simply a romantic vision of ‘back to the
land’, with life again assuming a rural pace and every man dependent upon himself and his immediate
environment for survival. Rather, it is a different direction for society to take. Instead of growth, stability is the
aim; instead of working to earn money to pay other people to keep him alive, the individual is presented with
the choice of self-autonomy or working to pay for survival. No such choice exists at present. ‘Dropping out’
now is a game for those with private means.
Stability would be an obvious goal were it not for the fact that society is so geared to growth in every
sense. A stable population, making only what it actually needs, with each article being considered with regard
to the material it is made of and what is to be done with it once its useful life is over, and finding all its power
from what can be grown or from the sun, would give man back a true place in the world’s system. However, a
consumer society can exist only by living off the capital resources of the Earth, whether the stored fuels or the
reserves of oxygen for running the machinery of the growth economy; and, as has frequently been shown, these
reserves are not infinite. The oil shortage in 1974 gave a taste of enforced ‘no growth’ economy, and our
survival at whatever price or hardship will be a first lesson in stability. Whether this lesson will provide the
impetus for yet more growth from a nuclear-based economy, or whether it could form the basis of a more
rational society, remains to be seen. The autonomous house would only form a very small part of this total
picture, but it is an object that can be grasped and realized in material terms at present.
However, the attractive idea of a house generating its own power and recycling its own wastes is almost
as difficult to realize as the idea of a stable economy. Apart from the physical limitations of income-energy
sources, the system can be made only marginally competitive with existing methods of servicing houses. This
difficulty could be removed if autonomy did not have to fit within the present system. At the moment,
however, with houses already more expensive than most people can afford, the idea of an increased capital cost
for houses, even though future running costs would be reduced, could never be accepted.
The idea of autonomy probably arose from two quests. The first was to gain free power for house
heating, etc., so that conventional fuels need not be bought, and the second was to free the planning of
communities. At present any new building must link to an existing or purpose-built service network. Cities,
therefore, expand around their edges in order to keep houses on the mains, although expansion is limited by
the size of the existing servicing plants. Removal of this restraint would enable houses to be built virtually
anywhere, and communities would be formed for a more logical reason than the need to be fed and watered at
a central point. Existing cities can be likened to babies in that they are serviced completely from the outside and
the control of their functions is at the will of a very few people. If any one person declares a state of emergency,
half a million people may sit in the dark unable to help themselves. Autonomy could provide for every
community to become adult. Each person or community would be in control of his own heating, lighting, food
production, etc. A real decentralization of control would be achieved and every person would become self-
governing.
How desirable such decentralization is in political terms, with removal of choice from the few to the
many, is open to discussion. An autonomous country would mean one where there would be no growth in the
economy, where population size was strictly controlled, where a higher standard of living could not be
expected, where resources were shared equally between every man, where freedom to act was curtailed by the
need to survive. The society would be unlike any that we know at the moment. It would encompass something
of many previous political doctrines but it would be aimed at providing for the survival of mankind, given that
our present method of living off capital cannot go on for all time.
Any acceptance of the desirability of autonomy can only be based on faith. If you believe that it is
important for man to be part of his natural ecology, to know how survival is accomplished, to be in control of
his own life, then autonomy is a logical outcome. If, however, you believe that mankind has always solved
every problem that arises, that eventually some way will be found for dealing with nuclear waste after a given
number of years of research and that the benefits of cheap nuclear power outweigh the possible dangers, then
there is no case for autonomy and the status quo will be maintained.
(From The autonomous house - design and planning for self-sufficiency by Brenda and Robert Vale)
TWENTIETH CENTURY DISCOVERY


1 - War Against the Six-Legs: The Discovery of Insecticides and Pesticides
Just about the greatest problem we all face now is our own numbers. We crowd the earth more thickly
now than we ever have before and this is creating strains.
Before the invention of agriculture about 8500 B.C., man lived on the animals he could catch and kill
and on the plants he could find that were good to eat. At that time, there weren't many human beings on Earth.
One careful guess is that there were only eight million people on the whole planet. (That's about the population
of New York City today. Imagine New Yorkers being the only people alive and that they were spread over the
entire planet.)
The reason there were so few then was that there are only so many animals to be caught and only so
many plants to be found. If, for some reason, there were suddenly more people, some of them would be sure to
starve to death. The population would shrink again.
Once agriculture was developed, people deliberately grew large quantities of plants that could be eaten.
There was more food to be found in one spot and more people could eat well. Population increased.
By the time of Julius Caesar, in 50 B.C., there were fifty million people living on agriculture around the
shores of the Mediterranean Sea. Another fifty million were living in China and another fifty million in the rest
of the world. The total for the world was 150 million but that was still less than the population of the United
States alone today.
Population continued to increase and by 1600 A.D., it had reached 500 million.
After that, the increase became so rapid that we can speak of a "population explosion." New continents
had been discovered with large tracts of land into which people could push and where they could begin to
farm. The Industrial Revolution came and made it possible to farm more efficiently and ship food greater
distances.
By 1800, the world population was 900 million; by 1900, it was 1,600,000,000. Now, it is about
3,500,000,000. Three and a half billion people are alive today.
In recent years, medical advances have placed many diseases under control. The death rate has dropped
and with fewer people dying, population is increasing faster than ever. The world population doubled between
1900 and 1969, a period of sixty-nine years. It will double again, in all likelihood, between 1969 and 2009, a
period of only forty years.
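A fixed doubling time T corresponds to steady exponential growth at a rate r = ln 2 / T, which for T = 40 years
is about 1.7 per cent a year. The sketch below simply projects the figures quoted above:

    # Exponential growth with a fixed doubling time: N(t) = N0 * 2**(t / T).
    # The 3.5 billion (1969) and 40-year doubling figures are from the passage.
    import math

    N0, T = 3.5e9, 40.0
    print(f"implied growth rate: {math.log(2) / T:.2%} per year")

    for year in (1969, 2000, 2009):
        n = N0 * 2 ** ((year - 1969) / T)
        print(f"{year}: about {n / 1e9:.1f} billion")
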
When the twenty-first century opens, and the youngsters of today are in middle life and raising a
family, the world population will be something like 6,500,000,000. The United States alone will have a
population of 330 million.
Naturally, this can't continue forever. There comes a point when the number of men, women, and
children is too great to feed and take care of. If the numbers become too great, there will be famine and disease.
Desperate, hungry men will fight and there will be wars and revolts.
With this in mind, many people are trying to discover ways of limiting the population by controlling
the number of births. It seems to make sense that no more children should be born than we can feed and take
care of. It is no act of kindness to bring a child into the world who must starve, or live a miserable, stunted life.

It is possible that kind and intelligent ways of controlling birth will be accepted and that human
population will reach some reasonable level and stay there. It will take time for this to come to pass, however,
and no matter what we do the figure of 6,500,000,000 will probably be reached. Even if it goes no higher, we
will have to count on feeding and taking care of this number.
This will be difficult. At this very time, when the world population is only 3,500,000,000, we are having
difficulty. Large sections of the world are poorly fed. There are perhaps 300 million children in the world who
are so badly underfed that they may have suffered permanent brain damage and will therefore never be quite
as intelligent and useful as they might have been if they had only received proper food. Nations such as India
face famine and would have seen millions die already if it were not that the United States shipped them huge
quantities of grain out of its own plentiful supplies. But American supplies are dwindling fast, and when they
are gone, what will happen to nations like India?
There are no longer large empty spaces of good land which farmers can utilize. The fertile areas of the
world are all in use. We have to try to find less easy solutions. We can bring water to dry areas. We can use
chemicals to restore the fertility of soil which has been fading out after centuries of farming. We can use more
fish from the ocean; and perhaps we can even grow plants in the sea.
Actually, mankind has been steadily increasing food production since World War II. The trouble is that
this food increase has barely matched the population increase. Despite all the extra food, each individual today
gets no more than he used to get twenty years ago. The percentage of hungry people in the world stays the
same.
And as the population rises ever faster, it is important that the food supply increase ever faster also. It is
important to feed the ever-increasing numbers of human beings until such time as the population can come
under control.
One way of doing so, without having to increase the size of our farmlands one bit, would be to prevent
any of our precious food from being eaten by creatures other than humans. Farmers are always on the watch
for hawks that eat their chickens, coyotes that eat their lambs, crows that eat their corn.
These are creatures we can see and do something about. We can lay traps, or shoot, or set up
scarecrows.
But hawks, and coyotes, and crows are nothing at all compared to an enemy that is much smaller, much
more dangerous, and until very recently, almost impossible to fight.
These are the insects; the little buzzing, flying six-legged creatures that we find everywhere.
Insects are the most successful form of animal life on earth. There are nearly a million different kinds (or
"species") of insects known, and perhaps another two million species exist that have not yet been discovered
and described. This is far more than the total number of different species of all other animals put together.
The number of individual insects is incredible. In and above a single acre of moist soil there may be as
many as four million insects of hundreds of different species. There may be as many as a billion billion
(1,000,000,000,000,000,000) insects living in the world right now-over 300 million insects for each man, woman,
and child alive.
Almost all the different species of insects are harmless to man. They are, indeed, useful in the scheme of
life. Many insects serve as the food supply for the pleasant songbirds we all enjoy. Other insects help pollinate
plants, and without that the plants would die.

Some insects are directly useful to man. The bee produces honey and wax, the silkworm produces silk,
and certain scale insects produce a brilliant red dye. Some insects, such as locusts, are even eaten by men in
some areas of the world. To be sure, there are some species of insects that are troublesome. Perhaps 3,000
species at most (out of a possible three million) are nuisances. These include the mosquitoes, flies, fleas, lice,
wasps, hornets, weevils, cockroaches, carpet beetles, and so on.
As a result, people come to dislike "bugs" and get the urge to swat or crush anything with six legs that
flies or crawls. This is wrong, though. We don't want really to wipe out all insects because a few are
bothersome. Insects, as I said, are necessary to the scheme of life.
In fact, all the different species of creatures are useful to each other. Even killer animals are useful to the
creatures they kill.
As an example, mountain lions kill deer. Now deer are pretty animals while mountain lions seem to be
dangerous killers that deserve to be wiped out. It has happened that men have killed the mountain lions in
some areas and freed the deer from the danger.
That does not do the deer a favour!
While the mountain lions were active they killed some deer but never very many. What's more, they
usually killed old or sick deer, for the strong young ones had a better chance to get away. The mountain lions
kept the numbers of deer down and there was that much more food for those that were left.
Once the mountain lions were gone, the deer population increased quickly. Even the old and sick had a
chance to live. All the deer searched the countryside for food and in no time the area was stripped bare.
Starvation gripped the herd and all became weak and sick. They began to die and in the end there were far
fewer deer than there had been in the days when the mountain lions were active.
So you see, the deer depend for their life and health on the very animals that seem to be killing them.
The way in which different species of animals depend upon one another results in a "balance of nature."
The numbers of any particular species stay about the same for long periods of time because of this balance.
Even if the balance is temporarily upset, when one species grows unusually numerous or unusually rare, the
food supplies drop, or increase, in such a way that the proper number is restored.
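A standard way of making this self-righting balance concrete, though it is not mentioned in the passage, is the
Lotka-Volterra predator-prey model, in which prey and predator numbers chase each other in repeating cycles
about an equilibrium instead of growing without limit. A rough numerical sketch with illustrative constants:

    # Lotka-Volterra predator-prey model (illustrative constants, not from the passage):
    #   prey:      dH/dt = a*H - b*H*P   (e.g. deer)
    #   predators: dP/dt = c*H*P - d*P   (e.g. mountain lions)
    a, b, c, d = 0.6, 0.02, 0.01, 0.5
    H, P = 80.0, 20.0                   # starting populations
    dt = 0.01                           # time step for a simple Euler integration

    for step in range(2401):
        if step % 400 == 0:
            print(f"t = {step * dt:4.0f}   prey = {H:6.1f}   predators = {P:5.1f}")
        H += dt * (a * H - b * H * P)
        P += dt * (c * H * P - d * P)
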
The study of this balance of nature is called "ecology" and it has grown to be one of the branches of
science that is of greatest interest to mankind, for we have badly upset the balance of nature and are upsetting it
worse each year.
In the end, we might suffer as the deer suffer when the mountain lions are gone, and scientists are
anxious to prevent this if possible. By studying the principles of ecology, they hope to learn how best to prevent
it.
Actually, insects wouldn't have grown to be such nuisances, if mankind hadn't upset the balance of
nature many thousands of years ago when he first developed agriculture. Once he began to plant barley, for
instance, he saw to it that many acres of land produced hardly anything but barley, barley, barley. All the other
plants that might have been growing on those acres he wiped out as much as possible. They were "weeds."
Animals that lived on those weeds were starved out. On the other hand, animals that lived on barley
multiplied, for suddenly they had a huge food supply.

In this way, agriculture encouraged certain insects to multiply and what had been just a nuisance
became a great danger. As an example, locusts may suddenly multiply and swarm down on fields in gigantic
armies of billions. This happened frequently in ancient times and even the Bible describes such a locust plague
in the book of Joel. Locusts would sweep across the fields, eating everything green. When they left, a barren
waste would remain.
This would be a disaster, for large numbers of people would be depending upon those vanished crops.
Widespread famine would be the result.
Nor could anything be done about it. People were completely helpless as they watched their food
disappear. They might go out and try to kill locusts, but no matter how hard they worked at it, there would be
ten thousand left alive for every one they killed.
Even today, although scientists have discovered ways of fighting insects, there is serious trouble in
some places and at some times. This is especially true in the less-developed countries where scientific methods
of fighting insects are least available-and where the population can least afford the loss.
In India, for instance, there is an insect called the "red cotton bug" which lives on the cotton plant. If
cotton plants were growing wild, some of them might be affected by the bug, but the plants would be few in
number and would be spread widely apart. The bugs would not have much to eat and would find it difficult to
get from one plant to the other. The number of red cotton bugs would therefore remain small and the cotton
plants themselves would be only slightly damaged. They would continue to grow quite well.
In large cotton fields, however, the bugs have a tremendous food supply, with one plant right on top of
the other. The bugs increase in numbers, therefore, and become a huge horde. Each year, half of all the cotton
grown in India is destroyed by them.
Even in the United States, we have trouble. An insect called the "boll weevil" feeds on the cotton plant
in this country. We can fight the boll weevil better than the Indians can fight the cotton bug. Still, as a result of
the boll weevil damage, each pound of cotton produced in the United States costs ten cents (about 10d.) more
than it would if the boll weevil didn't exist.
The losses resulting from insect damage in the United States alone run to something like eight billion
dollars each year.
Man himself has also vastly increased in numbers since agriculture was developed. Before
that, small groups of men hunted through wide stretches of forests. They offered only a small target for fleas
and lice.
After the appearance of agriculture, farming communities were established. These were much larger
than hunting bands, and in such communities, men lived huddled together. Fleas and lice multiplied and men
had to do a great deal more scratching. Mosquitoes, too, gained a much larger food supply and increased in
numbers.
You might think that insects like termites and boll weevils did real damage and that fleas and lice were
just nuisances, but that is wrong. The insects that bite and sting human beings can be terrible dangers; and this
was something that wasn't discovered until the opening of the twentieth century.
The discovery came in connection with yellow fever. This is a rapidly spreading disease that can kill
vast numbers of people. Nowadays it is rarely heard of in the United States but in previous centuries, it would
suddenly flare up in huge epidemics that would lay whole cities low. Twenty times in the history of the city of
Philadelphia, yellow fever epidemics raged across it. New York had fifteen epidemics.
There seemed no way of preventing the epidemics. They struck out of nowhere and suddenly people
were dying on every side. The United States military forces grew particularly interested in the problem in 1898.
That year they fought a short war with Spain. Most of the fighting took place in Cuba where few
Americans were killed by Spanish guns, but many died of yellow fever. What people didn't understand was
how the yellow fever passed from one person to another. Was it by infected clothing, by polluted water, or
how?
In 1899, the American government sent to Cuba a team of doctors headed by Walter Reed. Their
mission was to find out how yellow fever was spread. Yellow fever does not attack animals so the mission had
to work with human beings, and that meant using themselves as guinea pigs.
They handled the clothing and bedding of people sick with yellow fever yet didn't come down with it
themselves. Walter Reed remembered that a few people had advanced the notion some years before that
mosquitoes might carry the disease. They would bite sick men and suck in infected blood, then pass the
infection to the next person they bit.
Reed's group checked this. They introduced mosquito netting to keep mosquitoes away from certain
houses. Sure enough, they found that people protected by mosquito netting usually didn't get the disease even
when it was striking all around.
They went on to something more daring. They captured mosquitoes in rooms where there were men
sick with yellow fever and then allowed those mosquitoes to bite them. Some of the group soon came down
with yellow fever and one of them, Jesse William Lazear, died.
A mosquito bite is more than a nuisance, then. Mosquitoes of a certain species can pass on a deadly
disease with their bite.
Yellow fever struck the United States again, for the last time, in 1904, with New Orleans the victim. But
Reed had shown how to fight the disease. The mosquitoes were kept away with netting. The places where they
bred were wiped out. As a result, yellow fever is no longer a serious danger in the United States. There hasn't
been an epidemic in this country in over sixty years.
Another species of mosquito was found to spread the disease called malaria. Malaria isn't as dramatic
as yellow fever. It isn't as rapid a killer. Besides, there is a drug, quinine (obtained from the bark of a South
American tree), that, for centuries now, has been known to control the disease.
Even so, malaria is the most widespread disease in the world-or it was. As late as 1955, there were
estimated to be no less than 250 million people in the world who were ill with malaria. Each year 2,500,000
people died of it. Those who didn't die were greatly weakened and couldn't work as healthy people could.
Entire nations were greatly reduced in vigour and in the ability to help themselves because so many
individuals among them were malarial. And all the result of mosquito bites.
Certain species of insects in Africa, called the "tsetse fly," spread sleeping sickness, a brain infection that
usually ends in death. This disease spread into eastern Africa at the beginning of the twentieth century and
between 1901 and 1906 it killed 200,000 people in Uganda. About two out of every three people in the affected
areas died.
The disease also affects horses and cattle. It is the tsetse fly more than anything else-more than the heat,
the jungle, or the large wild animals-that keeps sections of Africa from advancing.
Naturally, men were anxious to kill insects. Insects were starving mankind, eating his grain and fruits
and fibres, too. Insects were killing men with their infected bites. Men had to strike back.
One way was to poison insects. Suppose, for instance, you sprayed your crops with a solution of "Paris
green," a deadly poison compound containing copper and arsenic.
Paris green did not affect the plants. The plants lived on carbon dioxide in the air and on certain
minerals which they absorbed from the soil. If there was some poison on their leaves, that made no difference.
Any insect trying to feed on the leaves that were coated with Paris green would, however, die at once.
Insects simply could not live on sprayed plants and the plants grew large and ripe without being bothered.
Paris green was an "insecticide," a word meaning "insect-killer."
(Nowadays, the word is used less often because insects are not the only kind of creature we want to kill.
There are also worms and snails, mice and rats, even rabbits-all of which become serious problems if they grow
too numerous. They are all lumped together as "pests" and any chemical used to kill any of them is a
"pesticide." In this chapter, though, I will be talking chiefly about insects and I will continue to use the word
insecticide.)
Paris green and other mineral insecticides have their drawbacks. For one thing, they are just as
poisonous to human beings as they are to insects. Foods which have been sprayed with these solutions must be
carefully washed, or they could be deadly.
And, of course, plants are washed by rain. The rain tends to remove some of the mineral
poison and drip it down to the soil. Little by little, the soil accumulates copper, arsenic, and other elements
which will reach the roots of the plants eventually. There they do affect plants and the soil will after a while
become poisonous to them.
What's more, such mineral insecticides can't be used on human beings themselves. Sometimes it would
be most useful if we could use them in that way, to destroy the insects that live directly on people.
Mosquitoes and flies may bite people and annoy them (or sometimes transmit diseases that kill them)
but at least they don't actually live on people. If we want to attack them, we can keep them off by netting, spray
the places where they land with poison, or find the stagnant pools or garbage where they breed and either
remove or spray them.
But what about the fleas and lice that live in human clothing or hair? In many parts of the world even
today there are no automatic washers in which clothes can be washed every couple of days. There isn't even a
supply of soap or of clean running water. The poorer people have very little in the way of clothing and, if
there is a cold season, they must simply wear the same clothes all winter long.
Naturally, the fleas and lice in that clothing have a happy hunting ground all winter long. This was all
the more true if people were forced to crowd into small dirty hovels or tenements. If anyone happened not to
have fleas and lice, he quickly caught them from others.
This could be extremely serious because typhus, a disease always present among the poor, every once
in a while became epidemic and spread everywhere. It was most likely to be found among poor, dirty people
huddled together on ships, for instance, or in jails. It was particularly dangerous during wars when many
thousands of soldiers might be penned up in a besieged town or in lines of trenches or in prisoners' camps.
When thousands of Irish emigrated to America after the potato blight brought famine to Ireland in the
1840s, half of them sickened with typhus on the way here. In World War I, typhus did more damage among the
miserable soldiers in eastern and south-eastern Europe than the guns did.
The little country of Serbia drove back the armies of much larger Austria-Hungary several times in 1914
and 1915, but then typhus struck and crippled the small nation. The Austrians dared not invade while the
epidemic was raging but afterwards they marched in and what was left of the Serbian army could not stop
them.
By the time of World War I, however, doctors knew very well what was causing the spread of typhus.
They had learned that from a French physician, Charles Nicolle, who, in 1903, had been appointed director of a
medical institute in Tunis in North Africa. (Tunis belonged to France at the time.)
Tunis was riddled with typhus but Nicolle noticed a very curious thing. The disease was infectious only
outside the hospital, not inside. Doctors visiting patients in their homes caught typhus. Medical attendants who
admitted patients into the hospital caught it. But once the patients were in the hospital, they stopped being
infectious, even though they might be sicker than ever. Doctors and nurses who tended typhus patients
inside the hospital never caught typhus themselves. Nicolle decided that something happened at the moment
that patients entered the hospital that changed everything. For one thing, the patient had removed the clothes
he was wearing and taken a bath.
The clothes were got rid of and the infectiousness disappeared.
By that time the word had got about that mosquitoes spread yellow fever and malaria, so it didn't seem
hard to believe that maybe typhus fever was spread by the lice in the dirty clothes.
Nicolle worked with animals, first with chimpanzees, and then with guinea pigs, and he proved his
case completely. Typhus would spread by a louse bite, not otherwise.
Nor is typhus the only disease to be spread by such body insects. There is a dreaded disease called
"plague." In the fourteenth century, it spread all across Europe and killed one out of every three human beings
on the continent. It was called "the Black Death" then.
This disease is spread by fleas. The fleas that are most dangerous live on rats and wherever the rats
spread, so do the fleas. When a flea bites a sick rat, then jumps on a human being and bites him, it is usually all
up with the human.
These are hard diseases to conquer. Rats are difficult creatures to get rid of. Even today they infest
American slums and are a downright danger to sleeping babies. Body lice or fleas are even harder to eliminate.
After all, you can't avoid lice and fleas by something as simple as mosquito netting. You must wash
clothes and body regularly, but how can you ask people to do that who have no soap and no clean water?
It would be helpful if you could spray the bodies and clothes with insecticide, but you would have to
find one that would kill the insects without killing the person. Certainly Paris green wouldn't do.
Instead of minerals, then, the search was on for some suitable organic substance. An organic substance
is one that has a structure similar to the compounds contained in living tissue. There are many millions of
different organic substances, and no two species of creatures act exactly alike in response to particular organic
substances.
Might it not be possible to find an organic substance which would interfere with some of the chemical
reactions that go on in insects, but not in other kinds of animals?
In 1935, a Swiss chemist, Paul Muller, began to search for such a compound. He wanted one that could
be easily made and would therefore be cheap. It had to be without an unpleasant odour. It had to kill insects
but be reasonably harmless to other kinds of life.
He narrowed down the search by studying different classes of organic compounds and then following
up those classes that showed at least a little promise. He would study the chemical structure of those
compounds that showed a little promise and would then try a slightly different compound to see if that had
more promise. If it did, he would study the difference in structure and see how to make a still further difference
that would be better still.
It took four years but in September of 1939 (the very month in which World War II started), Muller
came across a compound called "dichlorodiphenyltrichloroethane." That is a long name even for chemists and it
is usually referred to by its initials, as DDT. This compound had first been prepared and described in 1874 but
at that time there seemed nothing unusual about it. Now, however, Muller discovered that DDT was the very
thing he was looking for. It was cheap, stable, and odourless, fairly harmless to most forms of life, but meant
death to insects.
By 1942, preparations containing DDT were beginning to be manufactured for sale to the public, and in
1943, it had its first dramatic use. The city of Naples, in Italy, had been captured by Allied forces and, as winter
came on, typhus began to spread.
It wasn't possible to make the population strip off their clothes, burn them, and put on new clothes, so
something else was done. Soldiers and civilians were lined up and sprayed with a DDT solution. The lice died
and typhus died with them. For the first time in human history, a winter epidemic of typhus had been stopped
in its tracks.
To show that this was no accident the same thing was done in Japan in late 1945, after the American
occupation. Since World War II, DDT and other organic insecticides have been used in large quantities. Tens of
thousands of tons are produced each year. The United States alone spent over a billion dollars for such
insecticides in the single year of 1966. Not only are our crops saved but the various insect-spread diseases are
all but wiped out. Since DDT wipes out mosquitoes and flies, as well as lice, malaria is now almost unknown in
the United States. Less than a hundred cases a year are reported and almost all are brought in from abroad.
Yet this does not represent a happy ending. The use of organic insecticides has brought troubles in its
train. Sometimes such insecticides don't work because they upset the balance of nature.
For instance, DDT might be fairly deadly to an insect we want to kill, but even more deadly to another
insect that lives on the first one. Only a few harmful insects survive but their insect enemies are now all dead. In
a short time, the insects we don't want are more numerous than they were before the use of DDT.
Then, too, organic insecticides don't kill all species of insects. Some insects have a chemical machinery
that isn't affected by these poisons; they are "resistant." It may happen that a resistant insect could do damage to
our crops but usually doesn't because some other insect is more numerous and gets the lion's share of the food.
If DDT kills the damaging insect, but leaves the resistant insect behind, then that resistant insect can
multiply enormously. It then becomes a great danger and DDT can't touch it.
In fact, even among those species of insects that are killed by DDT there are always a few individuals
that differ chemically from the rest and are resistant. They survive when all other individuals are killed. They
multiply and then a whole species of resistant insects comes into existence.
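The speed of this selection is easy to underestimate. Suppose spraying kills 99 per cent of susceptible insects
and none of the resistant ones, and the survivors multiply tenfold between sprayings; then a resistant trait
carried by one insect in a million comes to dominate the whole population within a handful of generations. A
toy calculation (every figure assumed for illustration):

    # Toy model of resistance spreading by selection (all figures assumed).
    susceptible, resistant = 999_999.0, 1.0   # one resistant insect in a million
    GROWTH, KILL = 10.0, 0.99                 # tenfold increase per generation, then
                                              # 99% of susceptible insects are killed

    for generation in range(1, 6):
        susceptible *= GROWTH * (1 - KILL)
        resistant *= GROWTH
        share = resistant / (susceptible + resistant)
        print(f"generation {generation}: resistant share = {share:.2%}")
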
Thus, as the years pass, DDT has become less effective on the house fly, for instance. Some resistance
was reported as early as 1947, and this has been growing more serious. By now almost every species of insect
has developed resistance, including the body louse that spreads typhus.
Finally, even though organic insecticides are not very poisonous to creatures other than insects, they are
not entirely harmless either. If too much insecticide is used, some birds can be poisoned. Fish are particularly
easy to kill, and if insecticides are used on water to kill young insects, young fish may also die in great numbers.
Organic insecticides are also washed into the soil. Eventually, they are broken down by bacteria but not
very quickly. Some accumulates in the soil, then in the plants that grow in the soil, then in the animals that eat
the plants. All animals, including man, have a little bit of DDT inside them. Not enough to hurt us so far,
but it is there.
For that reason, attempts have been made to control insects by means that don't involve chemicals.
For one thing, there are certain strains of plants which are naturally resistant to particular insects. These
strains might be cultivated.
Then, too, crops might be rotated; one crop might be grown one year, another crop the next. In this way,
an insect which flourished one year might drop to very low levels the next when the wrong plants were grown,
plants it could not eat. It would have to start from scratch again and its numbers would stay low. Or else one
might break up the fields so that not too large an area would be devoted to a single crop. That would make it
harder for an insect to spread.
Here's something else-insects have their enemies. The enemy might exist in one part of the world but
not in another. It might be another insect or some kind of parasite. If it could be introduced in places where the
insect we were after was flourishing, the numbers of that insect might be brought under control.
Modern science has worked up a number of additional devices for killing insects. Bats eat insects and
locate them by emitting very shrill squeaks, squeaks too shrill for us to hear. The sound waves of these squeaks
bounce off the insect, and the bat, by listening for the echo, knows where the insect is.
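The bat's trick reduces to a single formula: because the squeak travels out to the insect and back again, the
distance is the speed of sound multiplied by the echo delay, divided by two. A one-function sketch with
illustrative numbers:

    # Echo-ranging: distance = speed of sound * round-trip delay / 2.
    SPEED_OF_SOUND = 343.0  # metres per second in air

    def target_distance_m(echo_delay_s):
        return SPEED_OF_SOUND * echo_delay_s / 2

    # An echo arriving 10 milliseconds after the squeak puts the insect
    # about 1.7 metres away.
    print(round(target_distance_m(0.010), 2))  # -> 1.72
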
Naturally, insects have developed an instinctive avoidance of such a sound. If a device is set up to send
out these shrill "ultrasonic" squeaks, insects stay away from a wide area near it.
Another device is just the opposite-to attract rather than to repel. Insects can find each other over large
distances because they can smell each other. Female moths give off an odour that a male moth of the same
species can detect many hundreds of yards away. Female moths can tell by smell a good spot on which to lay
eggs.
Chemists have worked to isolate the chemicals that give off this attractive odour. Once they isolate it,
they can place it on certain spots to attract insects. If those spots are sprayed with insecticide, too, insects could
die in great numbers. Only a little insecticide would have to be used; it wouldn't have to be spread everywhere;
and it would be less likely to affect other forms of life.
Or else a female could be induced to lay eggs in an unsuitable place by means of a sprayed odour, so
that the eggs would not develop.
Then, too, male insects can be subjected to radioactivity that destroys some of their internal organs so
they cannot fertilize the female's eggs. If such sterilized males are released, the females end up laying eggs that
cannot develop. An insect called the "screwworm," which infests cattle in the south-eastern United States, was
almost wiped out by this method.
But all that mankind is doing today is not yet enough. The insecticides are too poisonous and the other
methods are a little too fancy for undeveloped countries where the insect menace is greatest. Is there something
better we can do to help feed the doubled population of 2000?
Actually, the 1960s are seeing the development of an exciting new way of battling insects, a way that
makes the insects fight themselves, so to speak. To understand how this should be, let's consider how insects
grow.
An insect has two chief stages to its life. In its young form, it is a "larva"; later on, it is an "adult." The
two forms are very often completely different in appearance.
Consider the caterpillar, for instance. It is a larva, a wingless, wormlike creature with stumpy little leg-
like structures. Eventually, though, it becomes a moth or butterfly, with the usual six legs of the insect, and
often with gorgeous wings. Similarly, the housefly develops out of its egg as a tiny, wormlike "maggot."
The reason for two such different forms is that the two have widely different specialities. The larva
spends almost all its time eating and growing. It is almost what we might call an eating machine with all its
makeup concentrated on that. The adult, on the other hand, is an egg-laying machine. Sometimes adult insects
do nothing but lay eggs. Mayflies live less than a day after they reach the adult stage and don't even have
proper eating apparatus. In their short adult life they just lay eggs; they don't have to eat.
The change from larva to adult is called "metamorphosis." Sometimes the metamorphosis is not a very
startling one. A young grasshopper looks quite grasshopperish, for instance.
Where the metamorphosis is a thoroughgoing one, as in the case of the caterpillar, the insect must pause
in its life cycle to make the enormous change within its body. It is almost as though it must go back into an
original egg stage and start again. It becomes a motionless, apparently dead object, slowly changing within and
making itself over until it is ready to step forth as an adult. In this motionless intermediate stage it is called a
"pupa."
There are insect species which act in such a way as to protect this defenceless pupa stage. In its final
period as a larva, it will produce thin jets of liquid from special openings in its abdomen. These jets harden into
a tough fibre which the larva weaves round about itself until it is completely enclosed. This is the "cocoon"
within which the pupa remains hidden till metamorphosis is done. It is the fibre from the cocoon of the
silkworm moth that provides mankind with silk.
All this requires careful organization. For instance, it is a problem for a larva just to grow. The larva is
surrounded by a thin, but tough, cuticle made of a substance called "chitin." This protects it and gives it an
anchor for its muscles, but chitin doesn't expand with the body.
As a larva grows, its cuticle becomes tighter and tighter about it. Finally, the cuticle splits and is shed.
The larva is said to "moult." From the split cuticle, the larva wriggles out. It is expanded now and is distinctly
bigger now that the cuticle which had been holding it in like a tight girdle is gone. A new, but roomier, cuticle
quickly forms and within it the larva grows again.
But what makes the cuticle split at just the right time? The answer is that there is an automatic chemical
control involved. Any living creature is a complex setup of automatic self-regulating chemical machinery. This
is true even of the human being and it was only at the very opening of the twentieth century that biologists
began to have an inkling as to how some of this machinery worked.
In the human being there is a large gland called the pancreas. It manufactures a digestive juice which
enters the small intestine and mixes with food emerging from the stomach. The interesting thing is that the
pancreas doesn't bother wasting its juice when the small intestine is empty. Nothing happens until food enters
the small intestine and then, instantly, the pancreatic juice starts flowing.
Something automatic must be involved and in 1902, two English biologists, William Maddock Bayliss
and Ernest Henry Starling, discovered what it was.
The food in the stomach is mixed with a strongly acid juice. When the food emerges from the stomach
and enters the small intestine, the touch of its acidity has a chemical effect on the intestinal lining and causes it
to produce a substance which Bayliss and Starling called "Secretin."
Secretin is discharged into the bloodstream and is carried all over the body. When it reaches the pancreas,
it brings about a chemical effect that causes the pancreas to begin to manufacture and discharge its juice.
Secretin is a substance which rouses the pancreas to activity. In 1905, Bayliss suggested that secretin,
and all other substances like it, be called "hormones," from a Greek word meaning "to arouse."
The process of moulting seems to be an automatic process controlled by a hormone. As the larva grows,
there is growing pressure from the cuticle. When the pressure reaches a certain point, a hormone is triggered. It
pours into the larva's bloodstream and when it reaches the cuticle that cuticle is made to split.
The hormone that does this has been given the name "ecdysone," from a Greek word meaning "to
moult."
But moulting doesn't go on forever. After a certain number of moults, there is a sudden change. Instead
of continuing to grow in order to prepare the way for still another moult, the larva begins to undergo
metamorphosis instead.
Can this be because a second hormone is involved? Is there a second hormone that suddenly appears
after a certain number of moults and brings about the metamorphosis?
Not quite. In 1936, an English biologist, Vincent Brian Wigglesworth, was working with a certain
disease-spreading, blood-sucking bug called Rhodnius. In the course of his experiments, he thought it would be
useful to see what would happen if he cut off the head of the larva of these bugs.
Naturally, if you cut off the head of a mammal or a bird, the creature would die and that would be all.
An insect, however, is far less dependent on its head, and life could continue in some ways.
But different parts of the body produce different hormones and some can be produced in the head. By
cutting off the head of a larva, Wigglesworth could tell what hormones the insect head might be producing.
After all, the headless larva would grow differently than one with a head would and the differences might be at
least partly due to the missing head-hormones.
Wigglesworth did indeed notice a change. As soon as he had cut off the head, the larva went into a
moult and emerged as an adult. (Rhodnius was not one of the bugs that went through a pupa stage.)
It did this even when it was nowhere near ready for such a change. It hadn't moulted enough times; it
was far too small. Yet it did change, and a miniature adult appeared.
But if metamorphosis was caused by the production of a hormone, how could cutting off the head
produce it? Cutting off the head should cause the loss of a hormone, not its production.
Wigglesworth argued that the head produced a hormone that prevented metamorphosis. As long as it
was produced, ecdysone, the moulting hormone, did its work; the larva moulted and grew, moulted and grew.
At a certain point, though, in the course of the life of the normal insect, something happened which cut off the
supply of this head hormone. Without that hormone, ecdysone couldn't work even though it was present, and
metamorphosis began.
If the head were cut off, the supply of the hormone was destroyed at once and metamorphosis began
even though the insect body wasn't really ready for it.
Wigglesworth called this hormone from the insect head "juvenile hormone" because it kept the insect in
its juvenile, or youthful, form. He also located tiny glands, barely visible without a microscope, behind the
brain of the larva and these, Wigglesworth was certain, produced the hormone.
What Wigglesworth found to be true of Rhodnius was true of other insects, too; of the silkworm
caterpillar, for instance. It seems that all insects that undergo metamorphosis do so because the supply of
juvenile hormone stops at a certain time.
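Wigglesworth's scheme amounts to a small control program: growing cuticle pressure triggers ecdysone, and the presence or absence of juvenile hormone decides whether the resulting moult is an ordinary larval one or the metamorphic one. Here is a minimal Python sketch of that logic; the growth rate, pressure threshold, and moult count are illustrative numbers, not measured values.

    # A toy model of the two-hormone scheme (illustrative numbers only).
    # Ecdysone is triggered by cuticle pressure; juvenile hormone, while it
    # lasts, turns each moult into an ordinary larval moult.

    def grow_insect(moults_with_juvenile_hormone=5, max_steps=100):
        stage, moults, size = "larva", 0, 1.0
        for _ in range(max_steps):
            if stage != "larva":
                break
            size *= 1.5                    # the larva eats and grows...
            if size > moults + 1:          # ...until cuticle pressure peaks:
                # ecdysone is released; juvenile hormone decides the outcome
                if moults < moults_with_juvenile_hormone:
                    moults += 1            # hormone present: moult and grow on
                else:
                    stage = "adult"        # supply stopped: metamorphosis
        return stage, moults, size

Called with the default, the sketch gives five larval moults and then an adult. Setting moults_with_juvenile_hormone=0 mimics the decapitation experiment (an undersized adult at the very first moult), while raising it above five mimics the gland-grafting experiment described below (extra moults and a larger-than-normal adult).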
Wigglesworth's suggestion about the glands in the head was quickly shown to be correct. In 1938, a
French biologist, Jean Bounhiol, worked out a delicate technique for removing the tiny hormone-producing
glands from a small silkworm caterpillar and placing them in a large one.
The large silkworm caterpillar was about ready to enter its pupal stage, which meant that its glands had
stopped producing juvenile hormone. The glands from the small caterpillar, however, were still capable of
producing the hormone. When the glands from the small caterpillar were grafted into
the large one, the large caterpillar suddenly found itself with a new supply of juvenile hormone. Instead
of entering the pupal stage, it continued to moult one or two extra times.
Naturally, it continued to grow, too, and when it finally did switch to the pupa, it was a considerably
larger-than-normal one, and out of it emerged a considerably larger-than-normal adult moth.
At this point, Carroll Williams of Harvard University stepped onto the scene. He transferred hormone-
producing glands, not to another larva, but to the pupa of a silkworm. The pupa was well along in
metamorphosis. It wasn't supposed to be exposed to any juvenile hormone at all; it was past that stage. But
what if juvenile hormone were forced upon it?
Williams had his answer at once. The presence of juvenile hormone seemed to stop the metamorphosis,
or at least slow it down. When the adult moth finally appeared it was incomplete. Some of it had not changed
over.
Williams found that the more gland material he inserted into the pupa, the more incomplete the
metamorphosis. He could use the amount of incompleteness of metamorphosis to judge how much juvenile
hormone was present in the glands at different stages in the life of the larva.
He could also determine if there were juvenile hormone anywhere else in an insect body, and here he
stumbled over something that was a complete surprise.
In 1956, Williams found that an insect called the "Cecropia moth" produced a large quantity of juvenile
hormone just before entering the adult stage, after having passed through the pupa stage entirely without it.
Why they do this nobody knows.
This juvenile hormone is stored in the abdomen of the moth for some reason. Only the male moth does
it, not the female. Only one other kind of moth, as far as is known, stores juvenile hormone in this fashion; no other insect does.
Even if biologists don't know the reason for any of this, it still turned out to be a useful fact. The tiny
glands that produce juvenile hormone in larvas contain so little that it is just about impossible to extract a
useful amount. The reserve supply in the abdomen of the male Cecropia moth is so large, on the other hand,
that the hormone can be produced in visible quantities.
Williams produced an extract from the abdomens of many such moths; a few drops of golden oil that
contained huge quantities of juvenile hormone. Now he had plenty of material with which he could experiment
further.
One Cecropia abdomen supplied enough hormone to block completely the metamorphosis of ten pupas
of almost any kind of moth or butterfly. The extract did not even have to be injected into the pupa. If some were
just applied to the skin of the pupa, enough hormone leaked into the inner tissues to upset the metamorphosis.
The metamorphosis could be so badly upset, if enough juvenile hormone were used, that the pupa
could not develop at all. It simply died.
The thought at once occurred to Williams that here might be a potential insecticide that would have
great advantages over every other kind known. After all, it turned the insect's own chemistry against itself.
An insect couldn't grow resistant to juvenile hormone, as it could to any other sort of insecticide. It had
to respond to its own hormones. If it didn't, it would die.
In other words, an insect had to respond to juvenile hormone at the right time or it would die. And if it
did respond at the right time, then it would also respond at the wrong time and still die. Either way, the insect
was dead.
Even more important, the juvenile hormone would be no danger to forms of life other than insects. It
affects only insects and has no effect whatever (as far as has been found so far) on any form of life other than
insects.
Of course, it is one thing to kill a few pupas in a laboratory and quite another to kill vast quantities out
in the fields. Thousands of tons of insecticides are needed for the work that must be done and it would be
impossible to get thousands of tons out of Cecropia moths.
If only the chemical structure of the juvenile hormone were known. It would then be possible to
manufacture it from other chemicals; or else manufacture something that was close enough to do the job.
Unfortunately, the structure was not known.
Williams and a colleague, John Law, sat in their Harvard Laboratories one summer day in 1962,
wondering if they could reason out what the structure might be. A lab assistant, listening to them, suggested a
particular type of compound as a joke.
John Law thought he would go along with the gag. It wouldn't be too difficult to make the compound,
or something with a name very like the lab assistant's joke. With scarcely any trouble at all, he produced an oily
solution of a mixture of substances which he intended to show the young assistant and say, "Well, here is the
compound you joked about."
Still, as long as he had it, he tried it first on insect pupas. To John Law's everlasting surprise, it worked!
It worked unbelievably well. It was over a thousand times as powerful as the extract from Cecropia abdomens.
An ounce of Law's solution would kill all the insects over an area of two and one-half acres-at least all the
insects that were metamorphosing.
This substance is "synthetic juvenile hormone." It contains at least four different chemicals, and none of
them seems to have a structure like that of the natural hormone.
Synthetic juvenile hormone works on all insects tested, including the mosquito that spreads yellow
fever and the louse that spreads typhus. Yet it doesn't affect any creature other than insects. It would be no
danger to birds, fish, mammals, or man.
Still, killing all insects is a little too much. That would upset the balance of nature.
We want to kill only certain insects, only one species out of a thousand. This could be done perhaps
with the natural juvenile hormone. Each different group of species of insects manufactures its own kind of
juvenile hormone which works for itself but not for others. Perhaps then, you can use a particular juvenile
hormone and get just the insect you're after and no other kind.
For instance, a biologist in Prague, Czechoslovakia, named Karel Sláma, was trying to make natural
juvenile hormone work on a harmless insect called the "red linden bug." He used the technique developed by
Carroll Williams, but the extract from Cecropia moths didn't affect the red linden bugs. It might kill moths and
butterflies but it had no effect at all on the red linden bugs. The red linden bugs must have a juvenile hormone
so different from those of moths and butterflies that the effects didn't cross.
Williams heard of these experiments and was most curious. In the summer of 1965, Williams asked
Sláma to bring his red linden bugs to Harvard and to come with them. Sláma came, and together the two men
began to grow the bugs. In Prague, Sláma had grown them by the tens of thousands and their way of growing
was always the same. The larvas went through exactly five moults and then moved into the adult stage. (The
red linden bug does not go through a pupa stage.)
Yet at Harvard this did not happen. Bug after bug went through the fifth moult. Then, instead of
changing into an adult, they stayed larvas and tried to moult a sixth time. Usually, they didn't make it, but died.
In the few cases where a bug survived the sixth moult, they died when they attempted a seventh moult. About
1,500 insects died in the Harvard laboratories, where none had died in Prague.
Why? It was as though the bugs had received a dose of juvenile hormone and couldn't stop being
larvas-but no juvenile hormone had been given them.
Williams and Sláma tried to think of all possible differences between the work at Harvard and the work
in Prague. At Harvard, the red linden bugs were surrounded by all sorts of other insects which were involved in
juvenile hormone experiments. Perhaps some of the hormone got across somehow. The other insects were
therefore removed but the red linden bugs still died.
Could the glassware have been contaminated during cleaning? Maybe. So Williams ordered new
glassware that had never been used. The bugs still died.
Could there be something wrong with the city water? Williams got spring water, but the bugs still died.
Altogether fourteen different possibilities were thought of and thirteen were cancelled out. One thing
remained, and one only -
Strips of paper were placed into the jars in which the red linden bugs were grown. They were slanted
against the sides, as a kind of path for the bugs to scurry along. (That seemed to keep them more active and in
better health.) Of course, the paper used at Harvard was not the same as the paper used in Prague. Williams
was, in fact, using strips of ordinary paper towels produced by a local manufacturer.
Williams proceeded to check that. He used strips of chemically pure filter paper instead. At once, the
bugs stopped dying.
There was something in the paper towels that acted like juvenile hormone and upset the chemical
machinery of the larvas. It kept them moulting after they should have stopped doing so and that killed them.
Williams called the mysterious substance that did this the "paper factor." Later, it received the more chemical
sounding name of "juvabione."
Williams and Sláma went on to try all kinds of paper. They found that almost any American newspaper
and magazine contained the factor. Red linden bug larvas that crawled over them never made it to the adult
stage. On the other hand, paper from Great Britain, the European continent, or Japan, did not have it and the
bugs lived well on such paper. (That's why they lived in Prague.)
Could it be that American manufacturers put something in paper that other manufacturers did not? A
check with the manufacturers showed they didn't. Well, then, what about the trees from which the paper was
made?
They began to test extracts from the trees and found one called the "balsam fir" which was much used
for American paper but which did not grow in Europe. It was particularly rich in paper factor, and this paper
factor could be obtained from the tree in large quantities.
Here is an interesting point. The paper factor works on only one group of insect species, the one to
which the red linden bug happens to belong. If Sláma had brought with him some insect from another group of
species, the paper factor might have gone undiscovered.
The paper factor is an example of an insecticide that will kill only one small group of insects and won't
touch anything else. Not only are fish, birds, and mammals safe, but so are all insects outside that one group of
species.
To be sure, the red linden bug is harmless and there is no purpose in killing it, but the red cotton bug,
which eats up half of India's cotton crop, is closely related to it. The red cotton bug can also be hit by the paper
factor and experiments are underway to see how well it will work in India's cotton fields.
Paper factor catches bugs at the time of their metamorphosis. This is better than nothing but it still isn't
quite as good as it might be. By the time the metamorphosis is reached, the insect has spent a lot of time as a
larva-eating, eating, eating.
Then any insects that happen to survive the paper factor for some reason can lay a great many eggs.
They will develop into larvas that will eat and eat and eat and will only be caught at the metamorphosis.
It would be better if insects were caught at the beginning of the larval stage rather than at the end.
And they can be! It turns out that the eggs, like the period of metamorphosis, must be free of juvenile
hormone. In 1966, Sláma placed eggs of the red linden bug on paper containing the factor and-if the eggs were
fresh enough and weren't already on the point of hatching-they didn't hatch.
Then he tried it on adult females that were ready to lay eggs but hadn't laid them yet. He placed a drop
of the factor on the adult's body and found that it worked its way inside and, apparently, even into the eggs. At
least such a female laid eggs that didn't hatch.
The paper factor was more valuable than ever now, for it could be used to catch the insects at the very
beginning of their life.
But why should the balsam fir possess a compound that acts like juvenile hormone? The answer seems
clear. Insects eat plants and plants must have developed methods of self-protection over the millions of years of
evolution.
A good method of self-protection is for the plants to develop substances that taste bad to insects or that
kill them. Plants which happen to develop such substances live and flourish better than those that don't.
Naturally, a plant would develop a substance that would affect the particular insects that are dangerous
to it. It seemed that if biologists were to make extracts from a large variety of plants, they might find a variety
of substances that would kill this type of insect or that. In the end, they would have, perhaps, a whole collection
of insecticides to use on particular insect pests. We would be able to attack only the insects we want to attack
and leave the rest of nature alone. By 1968, indeed, some fifteen such plant self-defence chemicals were located.
Then, too, in 1967, Williams took another step in this direction, while with an expedition exploring the
interior of the South American country Brazil. There the large river Rio Negro flows into the Amazon. The
name means "Black River" because its waters are so dark.
Williams noticed there were surprisingly few insects about the banks of the river and wondered if the
trees might not contain considerable paper factor of different kinds. Then he wondered if the darkness of the
river water might not come from its soaking substances out of the trees that lined its bank. If so, the river water
might contain all kinds of paper factors.
Tests have shown that the Rio Negro does have insecticide properties. Perhaps many different paper
factors will be extracted from it in endless supply. Perhaps other rivers may be found to be as useful.
In 1968, Sláma synthesized a hormone-like compound which was the most powerful yet. An insect
treated with such a hormone would pass a bit of it on to any other insect with which it mated. One treated
insect could sterilize hundreds of others.
So things look quite hopeful. Between the supplies found in nature and the chemicals that can
be formed in the laboratory, we may get our insect enemies at last.
This will mean that man's supply of food and fibre will increase. It will mean that a number of diseases
will no longer threaten him, and he will be able to work harder to produce goods.
In that case, we may well be able to feed, clothe, and take care of all the billions who will swell Earth's
population in the next forty years or so.... And by that time we may have learned to control our own numbers
and we will then be safe.
From: Twentieth Century Discovery by Isaac Asimov
2 - In The Beginning: The Origin Of Life
The first chapter dealt with a scientific search that had a very practical goal-ways of killing dangerous
insects. When you solve the problem, there is no mistake about it; the insects die.
But there are also problems that are much more difficult to tackle; problems that are so complicated it is
even hard to tell whether we are on the road to solving them, or just in a blind alley. Yet they are problems so
important that man's curiosity forces him to tackle them anyway.
Consider the question: What is life?
There is no plain answer yet and some scientists wonder if there ever can be. Even the simplest form of
life is composed of very complex substances that are involved in so many complicated chemical
changes that it is almost hopeless to try to follow them. What parts of those changes make up the real basis of
life? Do any of them?
The problem is so enormous that it is like a continent that must be explored at different points. One
group of explorers follows a huge river inland; another group may follow jungle trails elsewhere; while a third
sets up a camel caravan into the desert.
In the same way, some biologists analyze the behaviour of living animals under various conditions;
others study the structure of portions of tissue under microscopes while still others separate certain chemicals
from tissue and work with them. All such work contributes in its own way to increasing knowledge concerning
life and living things.
Enormous advances have indeed been made. The two greatest biological discoveries of the nineteenth
century were 1) that all living creatures are constructed of cells, and 2) that life has slowly developed from
simple creatures to more complex ones.
The first discovery is referred to as the "cell theory," the second as the "theory of evolution."
Both theories made the problem of life seem a little simpler. Cells are tiny droplets of living substance
marked off from the rest of the world by a thin membrane. They are surprisingly alike no matter what creature
they are found in. A liver cell from a fish and one from a dog are much more similar than the whole fish and
dog are.
Perhaps if one could work out all the details of what makes individual cells alive, it would not be so
difficult to go on and get an understanding about whole creatures.
Then, too, consider that there was a gradual development of complex organisms from simpler ones. In
that case, it might well be that all creatures that exist today developed from the same very simple one that
existed long ages ago. There would then be only one form of life, existing in many different varieties. If you
understood what made a housefly alive, or even a germ, you ought then to understand what makes a man alive.
But these nineteenth century theories also raised a new problem. The more people investigated cells and
evolution, the clearer it became that all living creatures came from other living creatures; all cells came from
other cells. New life, in other words, is only formed from old life. You, for example, are born of your parents.
Yet there must have been a time in the early history of the Earth when there was no life upon it. How,
then, did life come to be? This is a crucial question, for if scientists knew how the first life was formed on a
world that had no life on it, they might find they had taken a big step forward in understanding what life is.
Some nineteenth century scientists were aware of this question and understood its importance. Charles
Darwin, the English biologist who first presented the theory of evolution to the world in its modern form,
speculated on the subject. In a letter written to a friend in 1871, he wondered if the kind of complex chemicals
that make up living creatures might not have been formed somewhere in a "warm little pond" where all the
ingredients might be present.
If such complex compounds were formed nowadays, tiny living creatures existing in that pond would
eat them up at once. In a world where there was no life, however, such compounds would remain and
accumulate. In the end, they might perhaps come together in the proper way to form a very simple kind of life.
But how can one ever find out? No one can go back billions of years into the past to look at the Earth as
it was before life was on it. Can one even be sure what conditions were like on such an Earth, what chemicals
existed, how they would act?
So fascinating was the question of life's origin, however, that even if there was no real information,
some scientists were willing to guess.
The twentieth century opened with a very dramatic guess that won lots of attention. The person making
the guess was a well-known Swedish chemist, Svante August Arrhenius. In 1908, he published a book, Worlds
in the Making, in which he considered some new discoveries that had recently been made.
It had just been shown that light actually exerted a push against anything it shone upon. This push was
very small, but if the light were strong and an object were tiny, the light-push would be stronger than gravity
and would drive the object away from the Sun.
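Arrhenius's starting point can be checked with a back-of-envelope calculation. For a small absorbing sphere, both the light-push and the Sun's gravitational pull fall off as the square of the distance, so their ratio depends only on the particle's size. A sketch, assuming a perfectly absorbing sphere of roughly water-like density:

    import math

    # At what radius does the push of sunlight balance the Sun's gravity?
    # Both forces scale as 1/d^2, so the answer is distance-independent.
    L_SUN = 3.83e26    # solar luminosity, watts
    M_SUN = 1.989e30   # solar mass, kg
    G     = 6.674e-11  # gravitational constant
    C     = 2.998e8    # speed of light, m/s
    RHO   = 1000.0     # assumed particle density (water-like), kg/m^3

    # F_light = (L / (4 pi d^2 C)) * (pi r^2)    absorbing cross-section
    # F_grav  = G * M * (4/3 pi r^3 RHO) / d^2
    # Setting them equal gives  r = 3 L / (16 pi G M C RHO)
    r = 3 * L_SUN / (16 * math.pi * G * M_SUN * C * RHO)
    print(f"balance radius: {r * 1e6:.2f} micrometres")   # about 0.6

Particles much smaller than this are driven outward faster than gravity can pull them back, and the figure does indeed land in the size range of the smallest cells and spores.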
The size of particles that could most easily be pushed by sunlight was just about the size of small cells.
Suppose cells were blown, by air currents, into the thin atmosphere high above the Earth's surface. Could they
then be caught by the push of sunlight and driven away from the Earth altogether? Could they then go
wandering through space?
That might be so but wouldn't the cells then die after having been exposed to the vacuum of outer
space?
Not necessarily. It had also been discovered that certain bacterial cells could go into a kind of
suspended animation. If there was a shortage of food or water, they could form a thick wall about themselves.
Within the wall, the bit of life contained in the cell could wait for years, if necessary, without food or water
from the outside. They could withstand freezing cold or boiling heat. Then, when conditions had improved, the
wall would break away and the bacterial cell could start to live actively once more.
Such walled cells in suspended animation are called "spores." Arrhenius argued that such spores,
driven by the push of light, could wander through space for many years, perhaps for millions of years, without
dying.
Eventually, such spores might strike some object. It might be some tiny asteroid or some other cold
world without air or water. The spore would have to remain a spore forever, until even its patient spark of life
winked out. Or it might strike a world so hot that it would be scorched to death.
But what if the spore struck a world with a warm, pleasant atmosphere and with oceans of water? Then
it would unfold and begin to live actively. It would divide and redivide and form many cells like itself. Over
long periods of time, these cells would grow more complicated. They would evolve and form many-celled
creatures. In the end, the whole planet would become a home for millions of species of life.
Is that how life originated on Earth itself, perhaps? Once, long ago, billions of years ago, did a spore from a far distant planet make its way into Earth's atmosphere? Did it fall into Earth's ocean and begin to grow? Is all the life on Earth, including you and me, the descendant of that little spore that found its way here?
It was a very attractive theory and many people were pleased with it, but alas, there were two things
wrong with it.
In the first place, it wouldn't work. It was true that bacterial spores would survive many of the
conditions of outer space, but not all. After Arrhenius' book had been published, astronomers began to learn
more about what it was like in outer space. They learned more about the sun's radiation for instance.
The sun gives out not visible light alone, but all kinds of similar radiation that might be less energetic or
more energetic than light itself.
It radiates infrared waves and radio waves, which are less energetic than ordinary light. It also radiates
ultraviolet waves and x rays, which are more energetic than ordinary light. The more energetic radiation is
dangerous to life.
Much of the energetic radiation is absorbed by Earth's atmosphere. None of the x rays and very little
ultraviolet manage to make their way down to Earth's surface, under a blanket of air miles and miles thick.
Even so, if we stand on the beach on a sunny summer day, enough ultraviolet light reaches us to penetrate the
outer layers of the skin and to give us sunburn (if we are fair-skinned).
In outer space, the ultraviolet light and x rays are present in full force. They easily penetrate a spore
wall and kill the spark of life inside.
If spores were drifting towards our solar system from other stars, they might strike the outermost
planets without harm, but on Pluto or on Neptune they would find conditions too cold for development. As
they drifted inward towards Earth, they would be coming into regions where sunlight was stronger and
stronger. Long before they could actually reach our planet, the energetic radiation in sunlight would have killed
them.
It would seem then that spores, giving rise to the kind of life we now have on Earth, couldn't possibly
have reached Earth alive.
Then, too, another flaw in Arrhenius's theory is that it doesn't really answer the question of how life
began. It just pushes the whole problem back in time. It says that life didn't begin on Earth but on some other
planet far away and long ago and that it reached our world from that other planet. In that case, how did life
start on that other planet? Did it reach that other planet from still another planet?
We can go back and back that way but we must admit that originally life must have started on some
planet from nonliving materials. Now that is the question. How did life do that? And if life started somewhere
from non-living materials, it might just as well have done so on Earth.
So let's not worry about the possibility of life starting elsewhere and reaching Earth. Let us
concentrate on asking how life might have started on earth itself from non-living materials.
 
Naturally, we ought to try to make the problem as simple as possible. We wouldn't expect non-living
substances to come together and suddenly form a man, or even a mouse, or even a mosquito. It would seem
reasonable that before any creature even as complicated as a mosquito was formed, single cells would have
come into existence; little bits of life too small to be seen except under a microscope.
Creatures exist, even today, that are made up of just one cell. The amoeba is such a creature. Thousands
of different species of one-celled plants and animals exist everywhere. There are also the bacteria, which are
composed of single cells even smaller than those of the one-celled plants and animals.
But these cells are complicated, too; very complicated. They are surrounded by membranes made up of
many thousands of complex molecules arranged in very intricate fashion. Inside that membrane are numerous
small particles that have a delicately organized structure.
It seems hopeless to expect the chemicals in a non-living world to come together and suddenly form
even as much as a modern bacterial cell. We must get down to things that are even simpler.
Every cell contains chemicals that don't seem to exist in the non-living world. When such chemicals are
found among non-living surroundings, we can be sure that those surroundings were once alive, or that the
substances were originally taken from living cells.
This seems to be so clear that early in the nineteenth century chemists began to speak of two kinds of
substances. Chemicals that were associated with living creatures, or organisms, were called "organic." Those
that were not were "inorganic."
Thus, wood and sugar are two very common organic substances. They are certainly not alive in
themselves. You may be sitting in a wooden chair, and you can be sure that it is no more alive than if it were
made of stone. However, that wood, as you know very well, was once part of a living tree.
Again, the sugar you put on your morning cereal is certainly not alive. Still, it was once part of a living
sugar cane or sugar beet plant.
Salt and water, on the other hand, are inorganic substances. They are found in all living organisms, to
be sure; your own tears, for instance, are nothing but a solution of salt in water. However, they are not found
only in organisms and did not originate only in organisms. There is a whole ocean of salt water that we feel
pretty sure existed in some form or other before life appeared on this planet.
(Beginning in the middle of the nineteenth century, chemists began to form new compounds that were
not to be found in nature. They were very similar in many ways to organic compounds, though they were
never found in living organisms or anywhere else outside the chemists' test tubes. These "synthetic"
compounds were, nevertheless, lumped together with the organic group because of the similarity in
properties.)
It would seem then we could simplify our problem. Instead of asking how life began out of non-living
substances, we could begin by asking how organic substances came to be formed out of inorganic substances in
the absence of life.
To answer that question, we ought to know in what way organic substances differ from inorganic ones.
Both organic and inorganic substances are made up of "molecules"; that is, of groups of atoms that cling
together for long periods of time. Organic molecules are generally larger and more complicated than inorganic
ones. Most inorganic molecules are composed of a couple of dozen atoms at most; sometimes only two or three
atoms. Organic molecules, however, usually contain well over a dozen atoms and may, indeed, be made up of
hundreds, thousands, or even millions of atoms.
When we ask how organic compounds may be formed from inorganic compounds, then, we are really
asking how large and complicated molecules might be formed from small and simple ones.

Chemists know that to force small and simple molecules to join together to form large and complicated
ones, energy must be added. This is no problem, really, for a very common source of a great deal of energy is
sunlight, and in the early lifeless Earth, sunlight was certainly blazing down upon the ocean. We will come
back to that later.
It is also true that the different kinds of atoms within molecules cannot change their nature under
ordinary circumstances. The large organic molecules in living matter must be formed from small and simple
molecules that contain the same kinds of atoms.
We must ask ourselves what kinds of atoms organic molecules contain.
There are over a hundred different kinds of atoms known today (each kind making up a separate
"element"). Over eighty are found in reasonable quantities in the inorganic substances making up the Earth's
crust. Only half a dozen of these elements, however, make up the bulk of the atoms in organic molecules.
The six types of atoms occurring most frequently in organic molecules are carbon, hydrogen, oxygen,
nitrogen, phosphorus, and sulphur. We can let each one be represented by its initial letter: C, H, O, N, P, and S.
The initial letters could also stand for a single atom of each element. C could be a carbon atom, H a hydrogen
atom, and so on.
Of these elements, carbon is, in a way, the crucial one. Carbon atoms can combine with each other to
form long chains, which can branch in complicated ways. They can also form single rings or groups of rings; or,
for that matter, rings with chains attached. To the carbon atoms arranged in any of these ways, other atoms can
be attached in different manners.
These complicated chains and rings of carbon atoms are found only in organic compounds, never in
inorganic compounds. It is this which makes organic molecules larger and more complicated than inorganic
ones.
Carbon atoms can be hooked together in so many ways, and can attach other atoms to themselves in so
many ways that there is almost no end to the different variations. And each different variation is a different
substance with different properties.
Hundreds of thousands of different organic compounds are known today. Every year many more
organic compounds are discovered and there is no danger of ever running out of new ones. Uncounted trillions
upon trillions of such compounds can exist.
This seems to make the problem of the origin of life more difficult again. If we are trying to find out
how organic substances are formed from inorganic ones, and if there are uncounted trillions upon trillions of
organic substances possible, how can we decide which organic substances ought to be formed and which were formed in the past?
Suppose, though, we can narrow down the choice. Not all organic compounds are equally vital to life.
Some of them seem to be more central to the basic properties of life than others are.
All cells without exception, whether plant, animal, or bacterial, seem to be built about two kinds of
substances that are more important than any others. These are "proteins" and "nucleic acids."
Even viruses can be included here. They are tiny objects, far smaller than even the smallest cells, yet
they seem to be alive since they can invade cells and multiply there. They, too, contain proteins and nucleic
acids. Some viruses, in fact, contain practically nothing else but proteins and nucleic acids.
Now we have narrowed the problem. We need not ask how organic compounds in general were built up out of inorganic ones, but only how proteins and nucleic acids were.
That still leaves matters complicated enough. Both proteins and nucleic acids are made up of very large
molecules, often containing millions of atoms. It is too much to expect that small inorganic molecules would
come together suddenly to form a complete molecule of protein or nucleic acid.
Let's look more closely at such giant molecules. Both proteins and nucleic acids are composed of
simpler structures strung together like beads on a necklace. Both protein and nucleic acid molecules can be
treated chemically in such a way that the string breaks and the individual "building blocks" separate. They can
then be studied separately.
In the case of the protein molecule, the building blocks are called "amino acids." The molecule of each
amino acid is built around a chain of three atoms, two of which are carbon and one nitrogen. We can write this
chain as -C-C-N-.
There would be different atoms attached to each of these. The atoms attached to the carbon and
nitrogen atoms at the end are always the same in all the amino acids obtained from proteins (with a minor
exception we needn't worry about). The carbon atom in the middle, however, can have any of a number of
different atom-groupings attached to it. If we call this atom-grouping R, then the amino acid would look like this:

    [figure: the -C-C-N- chain of an amino acid, with the R group attached to the middle carbon atom]
Each different structure for R results in a slightly different amino acid. Altogether there are nineteen
different amino acids that are found in almost every protein molecule. The simplest R consists of just a
hydrogen atom. The rest all contain different numbers of carbon and hydrogen atoms, while some contain one
or two oxygen atoms in addition, or one or two nitrogen atoms, or even one or two sulphur atoms. Individual
amino acids are made up of from eleven to twenty-six atoms.
Although there are only nineteen different amino acids in most proteins, they can be put together in
many different ways, each way making up a slightly different molecule. Even a middle-sized protein molecule
is made up of several hundred of these amino acids and the number of different combinations is enormous.
Imagine yourself to be given several hundred beads of nineteen different colours and that you set to
work to string them. You could make necklaces of many trillions of different colour combinations. In the same
way, you could imagine protein molecules of many trillions of different amino acid combinations.
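That claim is easy to verify by counting. A chain of several hundred positions, each of which can hold any of nineteen amino acids, allows 19 multiplied by itself several hundred times over; a quick check (taking 300 residues as the "middle-sized" protein of the text):

    import math

    AMINO_ACIDS  = 19    # the different "bead colours"
    CHAIN_LENGTH = 300   # a middle-sized protein, several hundred residues

    combinations = AMINO_ACIDS ** CHAIN_LENGTH
    print(len(str(combinations)))          # 384 digits long
    print(int(math.log10(combinations)))   # i.e. about 10 to the 383rd power

Far more than "many trillions": that number dwarfs even the count of atoms in the observable universe, which is roughly 10 to the 80th.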
In thinking of the origin of life, then, you don't have to worry, just at first, about forming complicated
protein molecules. That would come later. To begin with, it would be satisfying to know whether the amino
acid building blocks could be formed and, if so, how.
The nucleic acids are both simpler and more complicated than the proteins. Nucleic acid molecules are
made up of fewer different kinds of building blocks but the individual building block is more complicated.
The huge nucleic acid molecule is made up of long chains of smaller compounds known as
"nucleotides," each of which is made up of about three dozen atoms. These include carbon, hydrogen, oxygen,
nitrogen, and phosphorus.
An individual nucleotide molecule is made up of three parts. First there is a one-ring or two-ring
combination made up of carbon and nitrogen atoms. If there is only one ring, this portion is called a
"pyrimidine"; two rings is a "purine."
The second portion is made up of a ring of carbon and oxygen atoms. This comes in two varieties. One
is called "ribose"; the other, with one fewer oxygen atom, is "deoxyribose." Both these compounds belong to the
class called sugars.
Finally, the third part is a small atom group containing a phosphorus atom. It is the "phosphate group."
We might picture a nucleotide as follows:

    (purine or pyrimidine) - (ribose or deoxyribose) - (phosphate group)
There are two kinds of nucleic acid molecules. One of them is built up of nucleotides that all contain
ribose. This is, therefore, "ribonucleic acid" or RNA. The other is built up of nucleotides that all contain deoxyribose; "deoxyribonucleic acid" or DNA.
In both cases, individual nucleotides vary in the particular kind of purine or pyrimidine they contain.
Both RNA and DNA are made up of chains of four different nucleotides. Even though there are only four
different nucleotides, so many of them are present in each enormous nucleic acid molecule that they can be
arranged in trillions upon trillions of different ways.
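The same counting argument applies here, with four nucleotides in place of nineteen amino acids. Even "trillions" is reached almost at once, as this short illustrative check shows:

    # How long must a chain of 4 nucleotides be to allow over a trillion
    # different arrangements?
    n = 1
    while 4 ** n < 10 ** 12:
        n += 1
    print(n)   # 20 -- a mere 20-nucleotide stretch has over a trillion forms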
Now that we have decided we want to form amino acids and nucleotides out of inorganic compounds,
we must ask out of what inorganic compounds we can expect them to be formed. We must have inorganic
compounds, to start with, that contain the right atoms: carbon, hydrogen, oxygen, and the rest.
To begin with, there is the water molecule in the oceans. That is made up of two hydrogen atoms and an
oxygen atom and it can therefore be written H2O. Then there is the carbon dioxide of the air, which dissolves in
the ocean water and which is made up of a carbon atom and two oxygen atoms, CO2. Water and carbon dioxide
can supply carbon, hydrogen, and oxygen, three of the necessary elements.
Also dissolved in ocean water are substances that are called nitrates, sulphates, and phosphates. They
contain nitrogen atoms, sulphur atoms, and phosphorus atoms respectively. These substances all have certain
properties in common with ordinary table salt and can be lumped together as "salts."
What we have to ask ourselves now is this: Is it possible that once long ago, when the world was young,
water, carbon dioxide, and salts combined to form amino acids and nucleotides? If so, how was it done?
There are certain difficulties in this thought.
To begin with, in order for water, carbon dioxide, and salts to form amino acids and nucleotides,
oxygen atoms must be discarded. There is much more oxygen in water, carbon dioxide, and salts, than there is
in amino acids and nucleotides.
But Earth's atmosphere contains a great deal of oxygen. To discard oxygen, when oxygen is already all
about, is very difficult. It is like trying to bail the water out of a boat that is resting on the lake bottom.
Secondly, it takes energy to build up amino acids and nucleotides out of simple inorganic molecules
and the most likely source is sunlight. Just sunlight isn't enough, however. To get enough energy, you must use
the very energetic portion of the sunlight; you must use ultraviolet waves.
But very little of the ultraviolet waves gets down to the surface of the Earth. The air absorbs most of it.
When scientists studied the situation more closely it turned out that it was the oxygen in the air that produced
the substance that absorbed the ultraviolet.
So oxygen was a double villain. It kept the ultraviolet away from the surface of the Earth and its
presence made it very difficult to discard excess oxygen.
To be sure, the plant life that covers the land and fills the sea is carrying through just the sort of thing
we are talking about and doing it right now. Plants absorb water, carbon dioxide, and salts and use the energy of
sunlight to manufacture all sorts of complicated organic compounds out of them. In doing so, they discard
oxygen and pour it into the atmosphere.
However, to do this, plants make use of visible light, not ultraviolet waves. Visible light (unlike
ultraviolet waves) can penetrate the atmosphere easily, so that it is available for the plants to use. Visible light
has considerably less energy than ultraviolet waves but the plants make use of it anyway.
You might wonder if this could not have happened on the early Earth. Suppose the energy of visible
light had been used to build up the amino acids and nucleotides.
It doesn't seem likely, though, that it could have happened that way. The reason it happens now is that
plants make use of a complicated chemical system that includes a substance known as "chlorophyll."
Chlorophyll is an organic compound with a most complicated molecule that is formed only by living
organisms.
In thinking of the early Earth, a planet without life on it, we must suppose that chlorophyll was absent.
Without chlorophyll, the energy of visible light is not enough to form amino acids and nucleotides. The more
energetic ultraviolet waves are necessary, and those can't pass through our atmosphere.
We seem to be stuck.
But then, in the 1920s, an English biochemist, John Burdon Sanderson Haldane, suggested that oxygen
had not always existed in Earth's atmosphere.
After all, plant life is always using up carbon dioxide and producing oxygen, as it forms organic
substances from inorganic substances. Might it not be that all the oxygen that is now in the Earth's atmosphere
is the result of plant action? Before there was life, and therefore before there were plants, might not the
atmosphere have been made up of nitrogen and carbon dioxide, instead of nitrogen and oxygen, as today?
If that were the case, ultraviolet waves could get right down to the Earth's surface without being much
absorbed. And, of course, oxygen could be discarded with much greater ease.
The suggestion turned the whole question in a new direction. It wasn't proper to ask how amino acids
and nucleotides might be formed from small compounds that are now available under conditions as they exist
now. Instead we must ask how amino acids and nucleotides might be formed from small compounds that
would be available when the Earth was a young and lifeless planet under conditions as they existed then.
It became necessary to ask, then, what kind of an atmosphere and ocean the Earth had before life
developed upon it.
That depends on what the universe is made up of, generally. In the nineteenth century, ways were
worked out whereby the light from the stars could be analyzed to tell us what elements were to be found in
those stars (and even in the space between the stars).
Gradually, during the early decades of the twentieth century, astronomers came more and more to the
conclusion that by far the most common atoms in the universe were the two simplest: hydrogen and helium. In
general, you can say that 90 percent of all the atoms in the universe are hydrogen and 9 percent are helium. All
the other elements together make up only 1 percent or less. Of these other elements, the bulk was made up of
carbon, nitrogen, oxygen, sulphur, phosphorus, neon, argon, silicon, and iron.
If that is so, then you might expect that when a planet forms out of the dust and gas that fills certain
sections of space, it ought to be mostly hydrogen and helium. These are the gases that would make up most of
the original atmosphere.
Helium atoms do not combine with any other atoms, but hydrogen atoms do. Because hydrogen atoms
are present in such quantities, any type of atom that can combine with hydrogen will do so.
Each carbon atom combines with four hydrogen atoms to form "methane" (CH4). Each nitrogen atom combines with three hydrogen atoms to form "ammonia" (NH3). Each sulphur atom combines with two hydrogen atoms to form "hydrogen sulphide" (H2S). And, of course, oxygen atoms combine with hydrogen to form water (H2O).
These hydrogen-containing compounds are all gases, or liquids that can easily be turned into gases, so
they would all be found in the primitive atmosphere and ocean.
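The pattern behind these formulas is ordinary valence bookkeeping: each kind of atom takes up as many hydrogen atoms as it has bonds to offer. A small illustrative sketch (the valences are standard chemistry; the code is merely a restatement of the paragraph above):

    # Hydrogen-rich compounds expected where hydrogen atoms dominate:
    # each element binds as many hydrogens as it has bonds (its valence).
    VALENCE = {"C": 4, "N": 3, "S": 2, "O": 2}
    NAME    = {"C": "methane", "N": "ammonia",
               "S": "hydrogen sulphide", "O": "water"}

    for element, bonds in VALENCE.items():
        print(f"{element} + {bonds} H -> {element}H{bonds} ({NAME[element]})")
    # prints CH4, NH3, SH2 and OH2 -- the last two are conventionally
    # written H2S and H2O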
The silicon and iron atoms, together with those of various other fairly common elements such as
sodium, potassium, calcium, and magnesium, don't form gases. They make up the solid core of the planet.
This sort of logic seems reasonable, for a large, cold planet like Jupiter was found, in 1932, to have just
this sort of atmosphere. Its atmosphere is chiefly hydrogen and helium, and it contains large quantities of
ammonia and methane.
Jupiter is a huge planet, however, with strong gravitation. Smaller planets like Earth, Venus, or Mars,
have gravitation that is too weak to hold the very small and very nimble helium atoms or hydrogen molecules.
(Each hydrogen molecule is made up of two hydrogen atoms, H2.)
On Earth, therefore, we would expect the very early atmosphere to contain mostly ammonia, methane,
hydrogen sulphide, and water vapour. Most of the water would go to make up the ocean and in that ocean
would be dissolved ammonia and hydrogen sulphide. Methane is not very soluble but small quantities would
be present in the ocean also.
If we began with such an atmosphere, would it stay like that forever? Perhaps not. Earth is fairly close
to the sun and a great deal of ultraviolet waves strike the Earth's atmosphere. These ultraviolet waves are
energetic enough to tear apart molecules of water vapour in the upper atmosphere and produce hydrogen and
oxygen.
The hydrogen can't be held by Earth's gravity and drifts off into space, leaving the oxygen behind.
(Oxygen forms molecules made up of two oxygen atoms each, O2, and these are heavy enough to be held by
Earth's gravity.)
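In symbols, the ultraviolet light drives 2 H2O -> 2 H2 + O2: the light hydrogen molecules escape to space, while the heavier oxygen molecules stay behind.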
The oxygen does not remain free, however. It combines with the carbon and hydrogen atoms in
methane to form carbon dioxide and water. It wouldn't combine with the nitrogen atoms of ammonia, but it
would combine with the hydrogen to form water, leaving the nitrogen over to form molecules made up of two
atoms each (N2).
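Summed up as balanced equations, the fate of the old atmosphere's gases is CH4 + 2 O2 -> CO2 + 2 H2O for methane, and 4 NH3 + 3 O2 -> 2 N2 + 6 H2O for ammonia.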
Little by little, as more and more water is broken apart by ultraviolet light, all the ammonia and
methane in the atmosphere is converted to nitrogen and carbon dioxide. In fact, the planets Mars and Venus
seem to have a nitrogen plus carbon dioxide atmosphere right now.
You might wonder, though, what could happen if all the ammonia and methane were converted to
nitrogen and carbon dioxide and if water molecules continued to break up into hydrogen and oxygen. The
oxygen would not have anything more to combine with. Perhaps it would gradually accumulate in the air.
This, however, would not happen. As free oxygen accumulates, the energy of sunlight turns some of it
into a three-atom combination called "ozone" (O3). This ozone absorbs the ultraviolet light of the sun, and because the ozone layer forms about fifteen miles high in the atmosphere, it shields the lower regions of the atmosphere, where water vapour exists, from the ultraviolet light.
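In symbols: sunlight turns 3 O2 into 2 O3 high in the atmosphere, and that ozone soaks up the incoming ultraviolet.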
No further water molecules can be broken up and the whole process comes to an end before oxygen can
really fill the atmosphere. It is only later on when plants develop and make use of chlorophyll to tap the energy
of visible light which can get through the ozone layer that the process begins again. After plants come on the
scene, the atmosphere fills with oxygen.
So we have three atmospheres for Earth. The first, "Atmosphere I" was chiefly ammonia, methane, and
water vapour, with an ocean containing much ammonia in solution. "Atmosphere II" was chiefly nitrogen,
carbon dioxide, and water vapour, with an ocean containing much carbon dioxide in solution. Our present
atmosphere "Atmosphere III," is chiefly nitrogen, oxygen, and water vapour, with an ocean in which only small
quantities of gas are dissolved.
Atmosphere III formed only after life had developed, so life must have originated in the first place in
either Atmosphere I or Atmosphere II (or possibly while Atmosphere I was changing into Atmosphere II).
Haldane had speculated that life had originated in Atmosphere II, but a Russian biochemist, Alexander
Ivanovich Oparin, thought otherwise.
In 1936, he published a book called The Origin of Life, which was translated into English in 1938. Oparin
was the first to go into the problem of the origin of life in great detail, and he felt that life must have originated
in Atmosphere I.
How was one to decide which was the correct answer? How about experiment? Suppose you were
actually to start with a particular mixture of gases that represents an early atmosphere and add energy in the
way it might have been added on the early Earth. Will more complicated compounds be formed out of simple
ones? And if they are, will they be the kind of compounds that are found in living creatures?
The first scientist who actually tried the experiment was Melvin Calvin at the University of California.
In 1950, he began to work with a portion of Atmosphere II: carbon dioxide and water vapour. The fact
that he left out nitrogen meant that he couldn't possibly form nitrogen containing molecules, like amino acids
and nucleotides. However, he was curious to see what he would get.
What he needed, to get anything at all, was a source of energy. He might have used ultraviolet waves,
the most likely source on the early Earth, but he preferred not to.
Instead, he made use of the energy of certain kinds of atoms that were always exploding. They were
"radioactive" atoms. The radioactive elements on Earth are very slowly breaking down so that every year there
are very slightly less than the year before. Several billion years ago there must have been twice as much
radioactivity in the Earth's crust as there is now. The energy of radioactivity could therefore have been
important in forming life.
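The arithmetic behind that statement is the ordinary law of radioactive decay. If a radioactive element has a half-life T (the time needed for half of it to break down), then the amount left after a time t is

\[
A(t) = A_0 \, 2^{-t/T}
\]

Taking uranium-238 as a representative example (an assumption; the passage names no particular element), T is about 4.5 billion years, so one half-life back in time there was twice as much of it, and twice the radioactivity, as there is today.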
Since Melvin Calvin was engaged in experimental work that made use of radioactive substances, he had a good supply of them to work with. He bombarded his gas mixture with flying particles released by exploding radioactive atoms. After a while, he tested the gas mixture and found that in addition to carbon dioxide and
water, he had some very simple organic molecules in solution. He had, for instance, a molecule containing one
carbon atom, two hydrogen atoms, and one oxygen atom (CH2O), which was well known to chemists under the
name of "formaldehyde." He also had formic acid, which has a second oxygen atom, and has a formula written
HCOOH by chemists.
This was just a beginning, but it showed a few important things. For one thing, it showed that molecules could be made more complicated under early Earth conditions. For another, the complicated molecules contained less oxygen than the original molecules, so that oxygen was being discarded.
In 1953 came an important turning point, something that was the key discovery in the search for the
origin of life. It came in the laboratories of Harold Clayton Urey at the University of Chicago.
Urey was one of those who had tried to reason out the atmosphere of the early Earth, and, like Oparin,
he felt it was in Atmosphere I that life might have got its start. He suggested to one of his students, Stanley
Lloyd Miller, that he set up an experiment in which energy would be added to a sample of Atmosphere I. (At
the time Miller was in his early twenties, working for his Ph.D. degree.)
Miller set up a mixture of ammonia, methane, and hydrogen in a large glass vessel. In another glass
vessel, he boiled water. The steam that was formed passed up a tube and into the gas mixture. The gas mixture
was pushed by the steam through another tube back into the boiling water. The second tube was kept cool so
that the steam turned back into water before dripping back into the hot water.
The result was that a mixture of ammonia, methane, hydrogen, and water vapour was kept circulating
through the system of vessels and tubes, driven by the boiling water. Miller made very certain that everything
he used was completely sterile; that there were no bacteria or other cells in the water or in the gases. (If he
formed complicated compounds he wanted to make sure they weren't formed by living cells.)
Next, energy had to be supplied. Urey and Miller reasoned that two likely sources of energy were
ultraviolet light from the sun and electric sparks from lightning. (There may have been numerous
thunderstorms in Earth's early days.)
Of the two, ultraviolet light is easily absorbed by glass and there was a problem as to how to get enough
energy through the glass into the chemicals within. Miller therefore thought that as a first try he would use an
electric spark like a small bolt of lightning. Through the gas in one portion of the system he therefore set up a
continuing electric spark.
Now it was only necessary to wait.
Something was happening. The water and gases were colourless to begin with, but by the end of one
day, the water had turned pink. As the days continued to pass, the colour grew darker and darker, till it was a
deep red.
After a week, Miller was ready to see what he had formed in his water reservoir. Fortunately, he had at
his disposal a new technique for separating and identifying tiny quantities of chemical substances. This is called
"paper chromatography" and it had been first developed in 1944 by a group of English chemists.
Like Calvin, Miller found that the simple gas molecules had combined with each other to form more
complicated molecules, discarding oxygen atoms.
Again like Calvin, Miller found that formic acid was an important product. He also found, however,
that compounds had been formed which were similar to formic acid but were more complicated. These
included acetic acid, glycolic acid, and lactic acid, all substances that were intimately associated with life.

Miller had begun with a nitrogen-containing gas, ammonia, which Calvin had lacked. It is not
surprising, therefore, that Miller ended up with some molecules that contained nitrogen as well as carbon,
hydrogen, and oxygen. He found some hydrogen cyanide, for instance, which is made up of a carbon atom, a
hydrogen atom, and a nitrogen atom in its molecule (HCN).
He also found urea, which has molecules made up of two nitrogen atoms, four hydrogen atoms, a
carbon atom, and an oxygen atom (NH2CONH2).
Most important of all, though, Miller discovered among his products two of the nineteen amino acid
building blocks that go to make up the various protein molecules. These were "glycine" and "alanine," the two
simplest of all the amino acids, but also the two that appear most frequently in proteins.
With a single experiment, Miller seemed to have accomplished a great deal. In the first place, these
compounds had formed quickly and in surprisingly large quantities. One-sixth of the methane with which he
had started had gone into the formation of more complex organic compounds.
He had only worked for a week, and with just a small quantity of gas. How must it have been on the
early Earth, with its warm ocean, full of ammonia, and with winds of methane blowing over it, all baking under
the sun's ultraviolet radiation or being lashed by colossal lightning bolts for a billion years?
Millions of tons of these complex compounds must have been formed, so that the ocean became a kind
of "warm soup." Secondly, the kind of organic molecules formed in Miller's experiment proved particularly
interesting. Among the first compounds formed were simple amino acids, the building blocks of proteins. In
fact, the path taken by the simple molecules as they grew more complex seemed pointed directly towards life.
No molecules were formed that seemed to point in an unfamiliar direction.
Suppose that, as time went on, more and more complicated molecules were built up, always in the
direction of compounds now involved with life and not in other directions. Gradually, bigger and bigger
molecules would form as building blocks would join together. Finally, something like a real protein molecule
and nucleic acid molecule would form and these would eventually associate with each other in a very simple
kind of cell.
All this would take a lot of time, to be sure. But then, there was a whole ocean of chemicals to work
with, and there was lots of time - a billion years, at least.
Miller's experiment was only a beginning, but it was an extremely hopeful beginning. When its results
were announced, a number of biochemists (some of whom were already thinking and working in similar
directions) began to experiment in this fashion.
In no time at all, Miller's work was confirmed; that is, other scientists tried the same experiment and got
the same results. Indeed, Philip Hauge Abelson, working at the Carnegie Institution of Washington, tried a
variety of experiments with different gases in different combinations.
It turned out that as long as he began with molecules that included atoms of carbon, hydrogen, oxygen,
and nitrogen somewhere in their structure, he always found amino acids included among the substances
formed. And they were always amino acids of the kind that served as protein building blocks.
Nor were electric discharges the only source of energy that would work. In 1959, two German scientists,
Wilhelm Groth and H. von Weyssenhoff, tried ultraviolet waves and they also got amino acids.
It could be no accident. There was a great tendency for atoms to click together in such a way as to
produce amino acids. Under the conditions that seemed to have prevailed on the early Earth, it appeared
impossible not to form amino acids.
By 1968, every single amino acid important to protein structure had been formed in such experiments.
The last to be formed were certain important sulphur-containing amino acids, according to a report from
Pennsylvania State University and from George Williams University in Montreal.
Perhaps other important compounds also couldn't help but form. Perhaps they would just naturally
come together to form the important large molecules of living tissue.
If that is so, life may be no "miracle." It couldn't help forming, any more than you can help dropping
downward if you jump off a building. Any planet that is something like the Earth, with a nearby sun and a
supply of water and an atmosphere full of hydrogen compounds, would then have to form life. The kinds of
creatures that eventually evolved on other worlds would be widely different and might not resemble us any
more than an octopus resembles a gorilla. But, the chances are, they would be built up of the same chemical
building blocks as we.
More and more, scientists are beginning to think in this way, and they are beginning to speculate that
life may be very common in the universe.
Of course, on planets that are quite different from Earth, much bigger and colder, like Jupiter, or much
smaller and hotter, like Mercury, our kind of life could not form. On the other hand, other kinds of life, based
on other types of chemistry, might be formed. We have no way of telling.
But we are getting ahead of ourselves. Miller's experiments were enough to start speculation of this sort,
but it was still important to check matters. A couple of amino acids weren't enough. What about the
nucleotides, which served as building blocks for nucleic acids? (Since the 1940s, biochemists have come to
believe that nucleic acids are even more important than proteins.)
One could repeat Miller's experiment for longer and longer periods, hoping that more and more
complicated molecules would be formed. However, as more and more kinds of compounds were formed, there
would be less and less of each separate kind, and it would become more difficult to spot each one.
Possibly, one could start with bigger and bigger quantities of gases in the first place. Even so, the large
number of complicated molecules that would be formed would confuse matters.
It occurred to some experimenters to begin not at the beginning of Miller's experiment, but at its end.
For instance, one of the simplest products of Miller's experiment was hydrogen cyanide, HCN.
Suppose you assumed that this gas was formed in quantity in Earth's early ocean and then started with
it. In that way you would begin partway along the road of development of life and carry it on further.
At the University of Houston, a Spanish-born biochemist, Juan Oro, tried just this in 1961. He found that
not only amino acids were formed once he added HCN to the starting mixture, but individual amino acids
were hooked together in short chains, in just the way in which they are hooked together in proteins.
Even more interesting was the fact that purines were formed, the double rings of carbon and nitrogen
atoms that are found in nucleotides. A particular purine called "adenine" was obtained. This is found not only
in nucleic acids but in other important compounds associated with life.
As the 1960s opened, then, imitations of the chemical environment of the early Earth were being made
to produce not only the building blocks of the proteins, but the beginnings of the nucleotide building blocks of
the nucleic acids.
It was just the beginnings in the latter case. The nucleotides contained not only purines but also the
somewhat similar, but simpler, one-ringed compounds, the pyrimidines. Then there were the sugars, ribose
and deoxyribose. And, of course, there was the phosphate group.
The experimenters pressed on. All the necessary purines and pyrimidines were formed. The sugars proved
particularly easy. Sugar molecules are made up of carbon, hydrogen, and oxygen atoms only. No nitrogen
atoms are needed. That reminded one of Calvin's original experiment. Calvin had obtained formaldehyde
(CH2O) from carbon dioxide and water. What if one went a step farther and began with formaldehyde and water?
In 1962, Oro found that if he began with formaldehyde in water and let ultraviolet waves fall upon it, a
variety of sugar molecules were formed, and among them were ribose and deoxyribose.
What next?
Purines and pyrimidines were formed. Ribose and deoxyribose were formed. Phosphate groups didn't have to be formed. They exist in solution in the ocean now, and very likely did then, in just the form in which they occur in inorganic molecules.
One researcher who drove onward was a Ceylon-born biochemist, Cyril Ponnamperuma, at Ames
Research Center at Moffett Field, California. He had conducted experiments in which he had, as a beginning,
formed various purines with and without hydrogen cyanide. He had formed them through the energy of beams
of electrons (very light particles) as well as ultraviolet waves.
In 1963, he, along with Ruth Mariner and Carl Sagan, began a series of experiments in which he exposed
a solution of adenine and ribose to ultraviolet waves. They hooked together in just the fashion they were
hooked together in nucleotides. If the experimenters began with phosphate also present in the mixture, then the
complete nucleotide was formed. Indeed, by 1965, Ponnamperuma was able to announce that he had formed a
double nucleotide, a molecule consisting of two nucleotides combined in just the fashion found in nucleic acids.
By the middle 1960s, then, it seemed clear to biochemists that the conditions on the early Earth were
capable of leading to the formation of a wide variety of substances associated with life. These would certainly
include the amino acids and nucleotides, those building blocks that go to make up the all-important proteins
and nucleic acids. Furthermore, these building blocks hook together under early conditions to make up the very
chains out of which proteins and nucleic acids are formed.
All the raw materials for life were there on the early Earth, all the necessary chemicals. But life is more
than just chemicals. There are all sorts of chemical changes going on in living organisms, and they must be
taken into account. Atoms and groups of atoms are shifting here, shifting there, coming apart and reuniting in
different ways.
Many of these changes won't take place unless energy is supplied. If we're dealing with the ocean, the
energy is supplied by the sun's ultraviolet radiation, or in other ways. But what happens inside the tiny living
creatures once they come into existence?
Actually, there are certain chemicals in living creatures which break up easily, releasing energy. Such
chemicals make it possible for important chemical changes to take place that would not take place without
them. Without such chemicals life as we know it would be impossible no matter how many proteins and
nucleic acids built up in the early ocean.
Could it be that some of the energy of sunlight went into the production of these energy-rich
compounds? In that case, everything necessary for life might really be supplied.
The best-known of the energy-rich compounds is one called "adenosine triphosphate," a name that is
usually abbreviated as ATP. It resembles a nucleotide to which two additional phosphate groups (making three
altogether) have been added.
If, then, adenine, ribose, and phosphate groups are exposed to ultraviolet waves and if they hook
together to form a nucleotide containing one phosphate group, perhaps we can go farther. Perhaps longer
irradiation or the use of more phosphate to begin with will cause them to hook together to form ATP, with
three phosphate groups. Ponnamperuma tried, and it worked. ATP was formed.
In 1967 a type of molecule belonging to a class called "porphyrins" was synthesized from simpler
substances by Ponnamperuma. Belonging to this class is the important chlorophyll molecule in green plants.
No one doubts now that all the necessary chemicals of life could have been produced in the oceans of
the early Earth by chemical reactions under ultraviolet.
To be sure, the life that was formed at first was probably so simple that we might hesitate to call it life.
Perhaps it consisted of a collection of just a few chemicals that could bring about certain changes that would
keep the collection from breaking apart. Perhaps it would manage to bring about the formation of another
collection like itself.
It may be that life isn't so clear-cut a thing that we can point a finger and say: Right here is something
that was dead before and is now alive.
There may be a whole set of more and more complex systems developing over hundreds of millions of
years. To begin with, the systems would be so simple that we couldn't admit they were alive. To end with, they
would be so complex that we would have to admit they were indeed alive. But where, in between, would be the
changeover point?
We couldn't tell. Maybe there is no definite changeover point. Chemical systems might just slowly
become more and more "alive" and where they passed the key point, no one could say.
With all the successful production of compounds that followed the work of Calvin and Miller, there still
remained the question of how cells were formed. The experimenters who formed compounds recognized that
that question would have to be answered somehow.
No one type of compound is living, all by itself. Everything that seems living to us is a mixture of all
sorts of substances which are kept close together by a membrane and which react with each other in a very
complicated way.
There are viruses, to be sure, which are considered alive and which sometimes consist of a single
nucleic acid molecule wrapped in a protein shell. Such viruses, however, don't really get to work in a truly
living way till they can get inside some cell. In there, they make use of cell machinery.
Haldane, who had started the modern attack on the problem, wondered how cells might have formed.
He pointed out that when oil is added to water, thin films of oil sometimes form bubbles in which tiny droplets
of water are enclosed.
Some of the compounds formed by the energy of ultraviolet light are oily and won't mix with water.
What if they were to form a little bubble and just happen to enclose a proper mixture of protein, nucleic acid,
and other things? Today's cell membrane may be the development of that early oily film.
Oparin, the Russian biochemist, went into further detail. He showed that proteins in solution might
sometimes gather together into droplets and form a kind of skin on the outside of those droplets.
The most eager experimenter in this direction, once Miller's work had opened up the problem, was
Sidney W. Fox at the University of Miami. It seemed to him that the early Earth must have been a hot planet
indeed. Volcanoes may have kept the dry land steaming and brought the ocean nearly to a boil.
Perhaps the energy of heat alone was sufficient to form complex compounds out of simple ones.
To test this, Fox began with a mixture of gases like that in Atmosphere I (the type that Oparin suggested
and Miller used) and ran them through a hot tube. Sure enough, a variety of amino acids, at least a dozen, were
formed. All the amino acids that were formed happened to be among those making up proteins. No amino
acids were formed that were not found in proteins.
Fox went a step farther. In 1958, he took a bit of each of the various amino acids that are found in
protein, mixed them together, and heated the mixture. He found that he had driven the amino acids together,
higgledy-piggledy, into long chains which resembled the chains in protein molecules. Fox called these chains
"proteinoids" (meaning "protein-like"). The likeness was a good one. Stomach juices, which digest ordinary
protein, would also digest proteinoids. Bacteria, which would feed and grow on ordinary protein, would also
feed and grow on proteinoids.
Most startling of all, when Fox dissolved the proteinoids in hot water and let the solution cool, he found
that the proteinoids clumped together in little spheres about the size of small bacteria. Fox called these
"microspheres."
These microspheres are not alive, but in some ways they behave as cells do. They are surrounded by a
kind of membrane. Then, by adding certain chemicals to the solution, Fox could make the microspheres swell
or shrink, much as ordinary cells do. The microspheres can produce buds, which sometimes seem to grow
larger and break off. Microspheres can divide in two or cling together in chains.
Not all scientists accept Fox's arguments, but what if, on the early Earth, more and more complicated substances were built up, turning the ocean into the "warm soup" we spoke of? What if these substances formed microspheres? Might it not be that, little by little, as the substances grew more complicated and the microspheres grew more elaborate, an almost-living cell would eventually be formed? And after that, a
fully living one?
Before life began, then, and before evolutionary changes in cells led to living creatures that were more
and more complicated, there must first have been a period of "chemical evolution." In this period, the very
simplest gases of the atmosphere and ocean gradually became more and more complicated until life and cells
formed.
All these guesses about the origin of life, from Haldane on, are backed up by small experiments in the
laboratory and by careful reasoning. Is it possible that we might find traces of what actually happened on the
early Earth if we look deep into the Earth's crust?
We find out about ordinary evolution by studying fossils in the crust. These are the remains of ancient
creatures, with their bones or shells turned to stone. From these stony remains we can tell what they looked like
and how they must have lived.
Fossils have been found deep in layers of rock that must be 600 million years old. Before that we find
hardly anything. Perhaps some great catastrophe wiped out the earlier record. Perhaps forms of life existed
before then that were too simple to leave clear records.
Actually, in the 1960s discoveries were reported of traces left behind by microscopic one-cell creatures
in rocks that are more than two billion years old. Prominent in such research is Elso Sterrenberg Barghoorn of
Harvard. It is a good guess that there were simple forms of life on Earth at least as long as three billion years
ago.
If we are interested in discovering traces of the period of chemical evolution, then, we must search for
still older rocks. In them, we might hope to find chemicals that seem to be on the road to life.
But will chemicals remain unchanged in the Earth for billions of years? Can we actually find such traces
if we look for them?
Certainly the important chemicals of life, the proteins and nucleic acids, are too complex to remain
unchanged for long after the creature they were in dies and decomposes. In a very short time, it would seem,
they must decompose and fall apart.
And yet, it turns out, sometimes they linger on, especially when they are in a particularly well-protected
spot. Abelson, one of the people who experimented with early atmospheres, also worked with fossils. He
reasoned that living bone and shell contain protein. Bones may be 50 percent protein. Clam shells have much less, but there is some. Once such bones and shells are buried deep in the Earth's crust, remaining there for millions of years while they turn to stone, it might be that some of the protein trapped between thin layers of
mineral might survive.... Or at least they might break down to amino acids or short chains of amino acids that
might survive.
Painstakingly, Abelson dissolved these ancient relics and analyzed the organic material he extracted.
There were amino acids present all right, exactly the same amino acids that are present in proteins of living
creatures. He found some even in a fossil fish which might have been 300 million years old.
Apparently, then, organic compounds last longer than one might think and Melvin Calvin began the
search for "chemical fossils" in 1961.
In really old rocks, it is unlikely that the organic chemicals would remain entirely untouched. The less
hardy portions would be chipped away. What would linger longest would be the chains and rings of carbon
atoms, with hydrogen atoms attached. These compounds of carbon and hydrogen only are called
"hydrocarbons."
Calvin has isolated hydrocarbons from ancient rocks as much as three billion years old. The
hydrocarbons have molecules of a complicated structure that look very much as though they could have
originated from chemicals found in living plants.
J. William Schopf of Harvard, a student of Barghoorn, has gone even further. He has detected traces of
22 different amino acids in rocks more than three billion years old.
They are probably the remnants of primitive life. It is necessary now to probe farther back and find
chemical remnants that precede life and show the route actually taken.
Very likely it will be the route worked out by chemists in their experiments, but possibly it won't be.
We must wait and see. And perhaps increasing knowledge of what went on in the days of Earth's youth
will help us understand more about life now.
From: Twentieth Century Discovery by Isaac Asimov
3 - Littler And Littler: The Structure Of Matter
One of the words that fascinates scientists in the 1960s is "quark."
No one has ever seen a quark or come across one in any way. It is far too small to see and no one is even
sure it exists. Yet scientists are anxious to build enormous machines costing hundreds of millions of dollars to
try to find quarks, if they exist.
This is not the first time scientists have looked for objects they weren't sure existed, and were too small
to see even if they did exist. They were doing it as early as the very beginning of the nineteenth century.
In 1803, an English chemist, John Dalton, suggested that a great many chemical facts could be explained
if one would only suppose that everything was made up of tiny particles, too small to be seen under any
microscope. These particles would be so small that there couldn't be anything smaller. Dalton called these
particles "atoms" from Greek words meaning "not capable of being divided further." Dalton's suggestion came
to be called the "atomic theory."
No one was sure that atoms really existed, to begin with, but they did turn out to be very convenient.
Judging by what went on in test tubes, chemists decided that there were a number of different kinds of atoms.
When a particular substance is made up of one kind of atom only, it is an "element." Iron is an element,
for instance, and is made up only of iron atoms. Gold is an element; so is the oxygen in the air we breathe.
Atoms can join together into groups and these groups are called "molecules." Oxygen atoms get
together in groups of two and these two-atom oxygen groups are called oxygen molecules. The oxygen in the
air is made up of oxygen molecules, not of separate oxygen atoms.
Atoms of different elements can come together to form molecules of "compounds." Water is a
compound with molecules made up of two hydrogen atoms and one oxygen atom.
Dalton and the nineteenth century chemists who followed him felt that every atom was just a round
little ball. There was no reason to think there was anything more to it than that. They imagined that if an atom
could be seen under a very powerful microscope, it would turn out to be absolutely smooth and shiny, without
a mark.
They were also able to tell that the atom was extremely small. They weren't quite certain exactly how
small it was but nowadays we know that it would take about 250 million atoms laid side by side to stretch
across a distance of only one inch.
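That figure pins down the size of an atom. Here is a back-of-the-envelope check in a few lines of code, using nothing but the passage's own number:

```python
# Estimate an atom's diameter from "250 million atoms side by side per inch".
atoms_per_inch = 250e6
inch_in_cm = 2.54

diameter_cm = inch_in_cm / atoms_per_inch
print(f"atom diameter ~ {diameter_cm:.1e} cm")  # ~1.0e-8 cm, a hundred-millionth of a centimetre
```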
The chief difference between one kind of atom and another kind, in the nineteenth century view, lay in
their mass, or weight. Each atom had its own particular mass, or "atomic weight." The hydrogen atom was the
lightest of all, and was considered to have an atomic weight of 1. An oxygen atom was about sixteen times as
massive as a hydrogen atom, so it had an atomic weight of 16. A mercury atom had an atomic weight of 200,
and so on.
As the nineteenth century wore on, the atomic theory was found to explain more and more things.
Chemists learned how atoms were arranged inside molecules and how to design new molecules so as to form
substances that didn't exist in nature.
By the end of the century, the atomic theory seemed firmly established. There seemed no room for
surprises.
And then, in 1896, there came a huge surprise that blew the old picture to smithereens. The chemists
of the new twentieth century were forced into a new series of investigations that led them deep into the tiny
atom.
What happened in 1896 was that a French physicist, Antoine Henri Becquerel, discovered quite by
accident that a certain substance had properties no one had ever dreamed of before.
Becquerel had been interested in x rays, which had only been discovered the year before. He had
samples of a substance containing atoms of the heavy metal uranium in its molecules. This substance gave off
light of its own after being exposed to sunlight and Becquerel wondered if this light might include x rays.
It didn't, but Becquerel found it gave off mysterious radiations of some kind; radiations that went right
through black paper and fogged a photographic film. It eventually turned out that it was the uranium atoms
that were doing it. The uranium atoms were exploding and hurling small fragments of themselves in every
direction.
Scientists had never expected atoms could explode, but here some of them were doing it. A new word
was invented. Uranium was "radioactive."
Other examples of radioactivity were found and physicists began to study the new phenomenon with
great interest as the twentieth century opened.
One thing was clear at once. The atom was not just a hard, shiny ball that could not be divided into
smaller objects. Small as it was, it had a complicated structure and was made up of many objects still smaller
than atoms. This had to be, for the uranium atom, in exploding, hurled outward some of these still smaller
"subatomic particles."
One of the most skillful of the new experimenters was a New Zealander, Ernest Rutherford. He used
the subatomic particles that came flying out of radioactive elements and made them serve as bullets. He aimed
them at thin films of metal and found they passed right through the metal without trouble. Atoms weren't hard
shiny balls at all. Indeed, they seemed to be mostly empty space.
But then, every once in a while, one of the subatomic bullets would bounce off sharply. It had hit
something hard and heavy somewhere in the atom.
By 1911, Rutherford was able to announce that the atom was not entirely empty space. In the very
centre of the atom was a tiny "atomic nucleus" that contained almost all the mass of the atom. This nucleus was
so small that it would take about 100,000 of them, placed side by side, to stretch across the width of a single
atom.
Outside the nucleus, filling up the rest of the atom, were a number of very light particles called
"electrons." Each different kind of atom had its own particular number of electrons. The hydrogen atom had
only a single electron; the oxygen atom had eight; the iron atom had twenty-six; the uranium atom had ninety-two, and so on.
All electrons, no matter what atom they are found in, are alike in every way. All of them, for instance,
carry an electric charge. There are two kinds of electric charges-positive and negative. All electrons carry a
negative electric charge and the charge is always of exactly the same size. We can say that every electron has a
charge of just -1.
The atomic nucleus has an electric charge, too, but a positive one. The charge on the nucleus just
balances the charge on the electrons. A hydrogen atom has a single electron with a charge of -1. Therefore, the
charge on the hydrogen nucleus is +1.
An oxygen atom has eight electrons with a total charge of -8. The oxygen nucleus has a charge of +8,
therefore. You can see, then, that the iron nucleus would have to have a charge of +26, the uranium nucleus one
of +92, and so on.
Both parts of the atom-the tiny nucleus at the centre and the whirling electrons outside-have been
involved in unusual discoveries since Rutherford made his announcement in 1911. In this chapter, however, we
are going to be concerned only with the nucleus.
Naturally, physicists were interested in knowing whether the atomic nucleus was a single particle. It
was so much smaller than the atom that it would seem reasonable to suppose that here at last was something as
small as it could be. The atom had proved a surprise, however, and scientists were not going to be too sure of
the nucleus either.
Rutherford bombarded atoms with subatomic particles, hoping to discover something about the
nucleus if he hit them enough times.
He did. Every once in a while, when one of his subatomic bullets hit a nucleus squarely, that nucleus
changed its nature. It became the nucleus of a different variety of atom. Rutherford first discovered this in 1919.
This change of one nucleus into another made it seem as though the nucleus had to be a collection of
still smaller particles. Changes would come about because the collection of still smaller particles was broken
apart and rearranged.
The smallest nucleus was that of the hydrogen atom. That had a charge of +1 and it did indeed seem to
be composed of a single particle. Nothing Rutherford did could break it up (nor have we found a way to do so
even today). Rutherford therefore considered it to be composed of a single particle which he called a "proton."
The proton's charge, +1, was exactly the size of the electron's, but it was of the opposite kind. It was a
positive electric charge, rather than a negative one.
The big difference between the proton and electron, however, was in mass. The proton is 1,836 times as
massive as the electron though to this day physicists don't know why that should be so.
It soon seemed clear that the nuclei of different atoms had different electric charges because they were
made up of different numbers of protons. Since an oxygen nucleus had a charge of +8, it must contain eight
protons. In the same way, an iron nucleus contained twenty-six protons and a uranium nucleus ninety-two
protons.
This is why the nucleus contains just about all the mass of the atom, by the way. The nucleus is made up
of protons which are so much heavier than the electrons that circle about outside the nucleus.
But a problem arose at this point that plagued physicists all through the 1920s. The protons could
account for the electric charge of the nucleus, but not for all its mass. The oxygen nucleus had a charge of +8 and therefore had to contain eight protons, but it also had a mass that was sixteen times as great as a single
proton and therefore twice as great as all eight protons put together. Where did the extra mass come from?
The uranium nucleus had a charge of +92 and therefore had to contain ninety-two protons. However,
the mass of the uranium nucleus was two and a half times as great as all those ninety-two protons put together.
Where did that come from?
Physicists tried to explain this in several ways that proved to be unsatisfactory. A few, however,
speculated that there might be particles in the nucleus that were as heavy as protons but that didn't carry an
electric charge.
Such uncharged particles, if they existed, would add to the mass of the nuclei without adding to the
electric charge. They would solve a great many puzzles concerning the nucleus, but there was one catch.
There seemed no way of detecting such uncharged particles, if they existed. To see why this is so, let's
see how physicists were detecting ordinary charged particles in the 1920s.
Physicists used a number of techniques for the purpose, actually, but the most convenient had been
invented in 1911 by a Scottish physicist, Charles Thomson Rees Wilson.
He had begun his career studying weather and he grew interested in how clouds came to form. Clouds
consist of very tiny droplets of water (or particles of ice) but these don't form easily in pure air. Instead, each
one forms about a tiny piece of dust or grit that happens to be floating about in the upper air. In the absence of
such dust, clouds would not form even though the air was filled with water vapour to the very limit it would
hold, and more.
It turned out also that a water droplet formed with particular ease, if it formed about a piece of dust that
carried an electric charge.
With this in mind, Wilson went about constructing a small chamber into which moist air could be
introduced. If the chamber were expanded, the air inside would expand and cool. Cold air cannot hold much
water vapour, so as the air cooled the vapour would come out as a tiny fog.
But suppose the moist air introduced into the chamber were completely without dust. Then even if the
chamber were expanded and the air cooled, a fog would not form.
Next suppose that a subatomic particle comes smashing through the glass and streaks into the moist air
in the chamber. Suppose also that the particle is electrically charged.
Electric charges have an effect on one another. Similar charges (two negatives or two positives) repel
each other; push each other away. Opposite charges (a negative and a positive) attract each other.
If a negatively charged particle, like an electron, rushes through the air, it repels other electrons it comes
near. It pushes electrons out of the atoms with which it collides. A positively charged particle, like a proton,
attracts electrons and pulls them out of the atom. In either case, atoms in the path of electrically charged
particles lose electrons.
What is left of the atom then has a positive electrical charge, because the positive charge on the nucleus
is now greater than the negative charge on the remaining electrons. Such an electrically charged atom is called
an "ion."
Water droplets, which form with particular ease about electrically charged dust particles, also form with
particular ease about ions. If a subatomic particle passes through the moist air in the cloud chamber just as that
air is cooled, droplets of water will form about the ions that the subatomic particle leaves in its track. The path
of the subatomic particle can be photographed and the particle can be detected by the trail it leaves.
Suppose a cloud chamber is placed near a magnet. The magnet causes the moving subatomic particle to
curve in its path. It therefore leaves a curved trail of dewdrops.
The curve tells volumes. If the particle carries a positive electric charge, it curves in one direction and if
it carries a negative electric charge it curves in the other. The more massive it is, the more gently it curves. The
larger its charge, the more sharply it curves.
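In modern notation, the rule the dewdrop trail obeys is

\[
r = \frac{mv}{qB}
\]

where r is the radius of the curve, m and v the particle's mass and speed, q its charge, and B the strength of the magnet. More mass (or speed) means a larger radius and a gentler curve; more charge or a stronger magnet means a tighter one, exactly as described.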
Physicists took many thousands of photographs of cloud chambers and studied the trails of dewdrops.
They grew familiar with the kind of tracks each particular kind of particle left. They learned to tell from those
tracks what was happening when a particle struck an atom, or when two particles struck each other.
Yet all of this worked well only for charged particles. Suppose a particle didn't carry an electric charge.
It would have no tendency to pull or push electrons out of an atom. The atoms would remain intact and
uncharged. No ions would be formed and no water droplets would appear. In other words, an uncharged
particle would pass through a cloud chamber without leaving any sign.
Still, might it not be possible to detect an uncharged particle indirectly? Suppose you faced three men,
one of whom was invisible. You would only see two men and if none of them moved you would have no
reason to suspect that the third man existed. If, however, the invisible man were suddenly to push one of his
neighbours, you would see one of the men stagger. You might then decide that a third man was present but
invisible.
Something of the sort happened to physicists in 1930. When a certain metal called beryllium was
exposed to a spray of subatomic particles, it produced a radiation which could not be detected by a cloud chamber.
How did anyone know there was that radiation present then? Well, if paraffin were placed some
distance away from the beryllium, protons were knocked out of it. Something had to be knocking out those
protons.
In 1932, an English physicist, James Chadwick, argued that the radiation from beryllium consisted of
uncharged particles. These particles were electrically neutral and they were therefore called "neutrons."
Neutrons were quickly studied, not by cloud chamber, but by the manner in which they knocked atoms
about, and much was learned. It was found that the neutron was a massive particle, just a trifle more massive
than the proton. Where the proton was 1,836 times as massive as the electron, the neutron was 1,839 times as
massive as the electron.
Physicists now found that they had a description of the structure of the nucleus that was better than
anything that had gone before. The nucleus consisted of both protons and neutrons. It was the neutrons that
accounted for the extra mass of the nucleus.
Thus, the oxygen nucleus had a charge of +8 but a mass of 16. That was because it was made up of 8
protons and 8 neutrons. The uranium nucleus had a charge of +92 and a mass of 238; it was made up of 92
protons and 146 neutrons. The atomic nucleus, small as it was, did indeed consist of still smaller particles
(except in the case of hydrogen). Indeed, the nuclei of the more complicated atoms were made up of a couple of
hundred smaller particles.
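The bookkeeping is simple enough to put into a short sketch: the charge of a nucleus counts only its protons, while its mass number counts protons and neutrons together.

```python
# Nuclear bookkeeping: charge comes from protons (Z) alone; mass number is Z + N.
nuclei = {
    "hydrogen": (1, 0),      # (protons Z, neutrons N)
    "oxygen":   (8, 8),
    "uranium":  (92, 146),
}

for name, (Z, N) in nuclei.items():
    print(f"{name}: charge +{Z}, mass number {Z + N}")
# oxygen comes out at charge +8, mass 16; uranium at charge +92, mass 238
```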
This does not mean that there weren't some serious questions raised by this proton-neutron theory of
nucleus structure. For instance, protons are all positively charged and positively charged particles repel each
other. The closer they are, the more strongly they repel each other. Inside the atomic nucleus, dozens of protons
are pushed together so closely they are practically touching. The strength of the repulsion must be enormous
and yet the nucleus doesn't fly apart.
Physicists began to wonder if there was a special pull, or force, that held the protons together. This force
had to be extremely strong to overcome the "electromagnetic force" that pushed protons apart. Furthermore, the
new force had to operate only at very small distances, for when protons were outside nuclei, they repelled each
other with no sign of any attraction.
Such a strong attraction that could be felt only within nuclei is called a "nuclear force."
Could such a nuclear force exist? A Japanese physicist, Hideki Yukawa, tackled the problem shortly
after the neutron was discovered. He carefully worked out the sort of thing that would account for such an
extremely strong and extremely short-range force.
In 1935, he announced that if such a force existed, then it might be built up by the constant exchange of
particles by the protons and neutrons in the nucleus. It would be as though the protons and neutrons were
tossing particles back and forth and held firmly together as long as they were close enough to toss and catch. As
soon as the neutrons and protons were far enough apart that the particles couldn't reach, the nuclear force would no longer be effective.
According to Yukawa, the exchange particle should have a mass intermediate between that of the
proton and the electron. It was therefore eventually named a "meson" from a Greek word meaning
"intermediate."
But did the meson really exist?
The best way of deciding whether it existed and if Yukawa's theory was actually correct was to detect
the meson inside the nucleus, while it was being tossed back and forth between protons and neutrons.
Unfortunately, that seemed impossible. The exchange took place so quickly and it was so difficult to find out
what was going on deep inside the nucleus, that there seemed no hope.
But perhaps the meson could be somehow knocked out of the nucleus and detected in the open. To do
that the nucleus would really have to be made to undergo a hard collision.
According to a theory worked out by the German-Swiss physicist, Albert Einstein, in 1905, matter and
energy are two different forms of the same thing. Matter is, however, a very concentrated form of energy. It
would take the energy produced by burning twenty million gallons of petrol to make one ounce of matter.
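That figure is easy to check with Einstein's E = mc². A rough sketch, assuming petrol yields about 130 million joules per gallon (an approximate value; the exact energy content of petrol varies):

```python
# Check "twenty million gallons of petrol per ounce of matter" via E = m * c**2.
C = 2.998e8                    # speed of light, metres per second
mass_kg = 0.02835              # one ounce, in kilograms

energy_j = mass_kg * C**2      # ~2.5e15 joules locked up in one ounce of matter
petrol_j_per_gallon = 1.3e8    # assumed: ~34 MJ per litre times ~3.8 litres

print(f"{energy_j / petrol_j_per_gallon:.1e} gallons")  # ~2.0e7, i.e. about twenty million
```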
To knock a meson out of the nucleus of an atom would be very much like creating the amount of matter
in a meson. To produce that quantity of matter doesn't really take much energy, but that energy has to be
concentrated into a single tiny atomic nucleus and doing that turns out to be very difficult.
All through the 1930s and 1940s, physicists devised machines for pushing subatomic particles by
electromagnetic forces and making them go faster and faster, piling up more and more energy, until finally,
crash-they were sent barreling into a nucleus.
Gradually, more and more energy was concentrated into these speeding particles. Such energy was
measured in "electron volts" and by the 1940s particles with energies of ten million electron volts (10 Mev) were
produced. This sounds like a great deal, and it is, but it still wasn't enough to form mesons.
Fortunately, physicists weren't entirely stopped. There is a natural radiation ("cosmic rays") striking the
Earth all the time. This is made up of subatomic particles of a wide range of energies; some of them are
enormously energetic.
They originate somewhere deep in outer space. Even today, physicists are not entirely certain as to the
origin of cosmic rays or what makes them possess so much energy. Still, the energy is there to be used.
Cosmic rays aren't the perfect answer. When physicists produce energetic particles, they can aim them
at the desired spot. When cosmic rays bombard Earth, they do so without aiming. Physicists must wait for a lucky hit, when a cosmic ray particle with sufficient energy just happens to hit a nucleus in the right way. And then they must hope that someone with a detecting device happens to be in the right place at the right moment.
For a while, though, it seemed that the lucky break had taken place almost at once. Even while Yukawa
was announcing his theory, an American physicist, Carl David Anderson, was high on Pike's Peak in Colorado,
studying cosmic rays.
The cosmic ray particles hit atoms in the air and sent other particles smashing out of the atoms and into
cloud chambers. When there was finally a chance to study the thousands of photographs that had been taken,
tracks were found which curved in such a way as to show that the particle that caused them was heavier than
an electron but lighter than the proton. In 1936, then, it was announced that the meson had been discovered.
Unfortunately, it quickly turned out that this meson was a little too light to be the particle called for by
Yukawa's theory. It was wrong in several other ways, too.
Nothing further happened till 1947. In that year, an English physicist, Cecil Frank Powell, was studying
cosmic rays far up in the Bolivian Andes. He wasn't using cloud chambers, but special photographic chemicals
which darkened when a subatomic particle struck them.
When he studied the tracks in these chemicals, he found that he, too, had a meson, but a heavier one
than had earlier been found. Once there was a chance to study the new meson it turned out to have just the
properties predicted by Yukawa.
The first meson that had been discovered, the lighter one, was named the "mu-meson." The heavier one
that Powell had discovered was the "pi-meson." ("Mu" and "pi" are letters of the Greek alphabet. Scientists often
use Greek letters and Greek words in making up scientific names.)
It is becoming more and more common to abbreviate the names of these mesons. The light one is called
the "muon" and the heavy one the "pion."
The new mesons are very unstable particles. They don't last long once they are formed. The pion only
lasts about twenty-five billionths of a second and then it breaks down into the lighter muon. The only reason
the pion can be detected at all is that when it is formed it is usually travelling at enormous speed, many
thousands of miles a second. Even in a billionth of a second it has a chance to travel a few inches, leaving a trail
as it does so. The change in the kind of trail it leaves towards the end shows that the pion has disappeared and
a muon has taken its place.
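Those numbers hang together. Taking "many thousands of miles a second" to mean, say, 100,000 miles per second (an assumed figure, roughly half the speed of light, and ignoring the relativistic stretching of the pion's lifetime):

```python
# How far a fast pion travels, using the passage's figures (illustrative values).
speed_miles_per_s = 1.0e5            # assumed: "many thousands of miles a second"
inches_per_mile = 63_360

for seconds, label in [(1e-9, "one billionth of a second"),
                       (2.5e-8, "the pion's full lifetime")]:
    inches = speed_miles_per_s * seconds * inches_per_mile
    print(f"{label}: about {inches:.0f} inches")
# ~6 inches in a billionth of a second; over ten feet in a full lifetime
```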
The muon lasts much longer, a couple of millionths of a second, and then it breaks down, forming an
electron. The electron is stable and, if left to itself, will remain unchanged forever.
By the end of the 1940s, then, the atomic nucleus seemed to be in pretty good shape. It contained
protons and neutrons and these were held together by pions flashing back and forth. Chemists worked out the
number of protons and neutrons in every different kind of atom and all seemed well.
But it did seem that there ought to be two kinds of nuclei - the kind that exists all about us and a sort of mirror image that, in the late 1940s, no one had yet seen.
That possibility had first been suggested in 1930 by an English physicist, Paul Adrien Maurice Dirac. He
calculated what atomic structure ought to be like according to the latest theories and it seemed to him that
every particle ought to have an opposite number. This opposite could be called an "antiparticle."
In addition to an electron, for instance, there ought also to be an "antielectron" that would have a mass
just like that of an electron but would be opposite in electric charge. Instead of having a charge of -1, it would
have one of +1.
In 1932, C. D. Anderson (who was later to discover the muon) was studying cosmic rays. He noticed on
one of his photographs a cloud-chamber track which he easily identified as that of an electron. There was only
one thing wrong with it; it curved the wrong way. That meant it had a positive charge instead of a negative one.
Anderson had discovered the antielectron. Because of its positive charge, it is usually called a
"positron." The existence of the antielectron was strong evidence in favor of Dirac's theory, and as time went on
more and more antiparticles were uncovered.
The ordinary muon, for instance, has a negative charge of -1, like the electron, and it is usually called the
"negative muon." There is an antimuon, exactly like the muon except that it has a positive charge of +1 like the
positron. It is the "positive muon."
The ordinary pion is a "positive pion" with a charge of +1. The antipion is the "negative pion" with a
charge of -1.
By the close of the 1940s, it seemed quite reasonable to suppose that there were ordinary nuclei made
up of protons and neutrons with positive pions shifting back and forth among them; and that there were also
"antinuclei" made up of "antiprotons" and "antineutrons" with antipions shifting back and forth.
Physicists didn't really feel they actually had to detect antiprotons and antineutrons to be sure of this
but, of course, they would have liked to.
To detect antiprotons is even more difficult than to detect pions. An antiproton is as massive as a
proton, which means it is seven times as massive as a pion. To form an antiproton requires a concentration of
seven times as much energy as to form a pion.
To form a pion required several hundred million electron volts, but to form an antiproton would
require several billion electron volts. (A billion electron volts is abbreviated "Bev.")
To be sure, there are cosmic ray particles that contain several Bev of energy, even several million Bev.
The higher the energy level required, however, the smaller the percentage of cosmic ray particles possessing
that energy. The chances that one would come along energetic enough to knock antiprotons out of atoms just
when a physicist was waiting to take a picture of the results were very small indeed.
However, the machines for producing man-made energetic particles were becoming ever huger and
more powerful. By the early 1950s, devices for producing subatomic particles with energies of several Bev were
built. One of these was completed at the University of California in March 1954. Because of the energy of the
particles it produced, it was called the "Bevatron."
Almost at once, the Bevatron was set to work in the hope that it might produce antiprotons. It was used
to speed up protons until they possessed 6 Bev of energy and then those protons were smashed against a piece
of copper. The men in charge of this project were an Italian-born physicist, Emilio Segré, and a young
American, Owen Chamberlain.
In the process, mesons were formed; thousands of mesons for every possible antiproton. The mesons,
however, were much lighter than antiprotons and moved more quickly. Segrè's group set up detecting devices
that would react in just the proper manner to pick up heavy, slow-moving, negatively charged particles. When
the detecting devices reacted properly, only something with exactly the properties expected of an antiproton
could have tripped them.
By October 1955, the detection devices had been tripped sixty times. It could be no accident. The
antiproton was there and its discovery was announced.
The antiproton was the twin of the proton. The great difference was that the proton had a charge of +1
and the antiproton had a charge of -1.
Once enough antiprotons were produced for study, it was found that occasionally one would pass close
by a proton and the opposite charges would cancel. The proton would become a neutron and the antiproton
would become an antineutron.
You might wonder how you could tell an antineutron from a neutron since both are uncharged. The
answer is that although the neutron and antineutron have no electric charge, they spin rapidly in a way that
causes them to behave like tiny magnets. The neutron is like a magnet that points in one direction while the
antineutron is like a magnet that points in the opposite direction.
By the mid-1950s, it was clear that antiprotons and antineutrons existed. But could they combine to
form an antinucleus?
Physicists were sure they could, but the final answer did not come till 1965. In that year, at Brookhaven National Laboratory on Long Island, New York, protons with energies of 7 Bev were smashed against a
beryllium target. Several cases of an antiproton and antineutron in contact were produced and detected.
In the case of ordinary particles, there is an atomic nucleus that consists of one proton and one neutron.
This is the nucleus of a rare variety of hydrogen atom that is called "deuterium." The proton-neutron
combination is therefore called a "deuteron."
What had been formed at Brookhaven was an "antideuteron." It is the very simplest antinucleus that
could be formed of more than one particle, but that is enough. It proved that it could be done. It was proof
enough that matter could be built up out of antiparticles just as it could be built of ordinary particles. Matter
built up of antiparticles is "antimatter."
When the existence of antiparticles was first proposed, it was natural to wonder why, if they could exist, they weren't anywhere around us. When they were detected at last, they were found only in tiny quantities and
even those quantities didn't last long.
Consider the positron, or antielectron. All around us, in every atom of all the matter we can see and
touch on Earth, are ordinary electrons. Nowhere are there any antielectrons to speak of. Occasionally, cosmic
ray particles produce a few or physicists form a few in the laboratory. When they do, those antielectrons
disappear quickly.
As an antielectron speeds along, it is bound to come up against one of the trillions of ordinary electrons
in its immediate neighbourhood. It will do that in perhaps a millionth of a second.
When an electron meets an antielectron, both particles vanish. They are opposites and cancel out. It is
like a peg falling into a hole which it fits exactly. Peg and hole both disappear and nothing is left but a flat
surface.
In the case of the electron and antielectron, however, not everything disappears. Both electron and
antielectron have mass, exactly the same amount of the same kind of mass. (We only know of one kind of mass
so far.) When the electron and antielectron cancel out, the mass is left over and that turns into energy.
This happens with all other particles and antiparticles. A positive muon will cancel a negative muon; a
negative pion will cancel a positive pion; an antiproton will cancel a proton, and so on. In each case both
particles disappear and energy takes their place. Naturally, the more massive the particles, the greater the
amount of energy that appears.
It is possible to reverse the process, too. When enough energy is concentrated into a small space,
particles may be formed out of it. A particle is never formed out of energy by itself, however. If an electron is
formed, an antielectron must be formed at the same time. If a proton is formed, an antiproton must be formed
at the same time.
When Segrè and Chamberlain set about forming antiprotons, they had to allow for twice as much
energy as would be sufficient just for an antiproton. After all, they had to form a proton at the same time.
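A rough check of this energy bookkeeping takes only a few lines of Python; the rest-mass figures are modern textbook values, used here purely for illustration:

    # Energy released by annihilation, and the minimum needed for pair creation.
    # Rest energies in MeV are modern textbook values (an assumption here).
    ELECTRON_MASS_MEV = 0.511
    PROTON_MASS_MEV = 938.3

    # An electron and an antielectron cancel out, releasing both rest energies:
    annihilation_mev = 2 * ELECTRON_MASS_MEV    # about 1.02 MeV
    # Making an antiproton forces a proton into existence as well, so at least
    # two proton rest energies must be supplied:
    pair_creation_mev = 2 * PROTON_MASS_MEV     # about 1,877 MeV

    print(f"electron-antielectron annihilation: ~{annihilation_mev:.2f} MeV")
    print(f"proton-antiproton pair creation: ~{pair_creation_mev:.0f} MeV")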
Since this is so, astronomers are faced with a pretty problem. They have worked up many theories of
how the universe came to be, but in all the theories, it would seem that antiparticles ought to be formed along
with the particles. There should be just as much antimatter as there is matter.
Where is all this antimatter? It doesn't seem to be around. Perhaps it has combined with matter and
turned into energy. In that case, why is all the ordinary matter about us left over? There should be equal
amounts of each, and each set should cancel out the other completely.
Some astronomers suggest that there are two separate universes, one made out of matter (our own) and
another made out of antimatter. Other astronomers think there is only one universe but that some parts of it
(like the parts near ourselves) are matter, while other parts are antimatter.
What made the matter and antimatter separate into different parts of the universe, or even into different
universes, no one can yet say. It may even be possible that for some reason we don't understand, only matter,
and no antimatter, was formed to begin with.
The problem of the universe was something for astronomers to worry about, however. Physicists in
1947 were quite satisfied to concentrate on particles and antiparticles and leave the universe alone.
And physicists in that year seemed to have much ground for satisfaction. If they ignored the problem of
how the universe began and just concentrated on how it was now, they felt they could explain the whole thing
in terms of a little over a dozen particles altogether. Some of these particles they had actually detected. Some
they had not, but were sure of anyway.
Of course, not everything was absolutely clear, but what mysteries existed ought to be cleared up, they
hoped, without too much trouble.
The particles they knew, or strongly suspected they were soon going to know, fell into three groups,
depending on their mass. There were the light particles, the middle-sized particles, and the heavy particles.
These were eventually given Greek names from words meaning light, middle-sized, and heavy: "leptons,"
"mesons," and "baryons."
The leptons, or light particles, include the electron and the antielectron, of course. In order to explain
some of the observed facts about electrons, the Austrian physicist Wolfgang Pauli suggested, in 1931, that
another kind of particle also existed. This was a very small one, possibly with no mass at all, and certainly with
no charge. It was called a "neutrino." This tiny particle was finally detected in 1956. There was not only a
neutrino but also an "antineutrino."
Although the muon was considered a meson to begin with, it was soon recognized as a kind of heavy
electron. All its properties but mass were identical with those of the electron. Along with the muon, a neutrino
or antineutrino is also formed, as in the case of the electron. In 1962, this muon-neutrino was found to be
different from the electron-neutrino.
Two other particles might be mentioned. Light, together with other radiation similar to it (like x-rays,
for instance) behaves in some ways as though it were composed of particles. These particles are called
"photons."
There is no antiparticle for a photon; no antiphoton. The photon acts as its own opposite. If you were to
fold a sheet of paper down the middle and put the particles on one side and the antiparticles on the other, you
would have to put the photon right on the crease.
Then, too, physicists speculate that the reason different objects pull at each other gravitationally is
because there are tiny particles called "gravitons" flying between them. Some of the properties of the graviton
have been worked out in theory; for instance, it is its own antiparticle. The graviton is so tiny, however, and so
hard to pin down, that it has not yet been detected.
This is the total list of leptons so far, then:
1. the graviton
2. the photon
3. the electron and the antielectron
4. the electron-neutrino and the electron-antineutrino
5. the negative muon and the positive muon
6. the muon-neutrino and the muon-antineutrino
The leptons pose physicists some problems. Does the graviton really exist? Why does the muon exist;
what is the purpose of something that is just a heavy electron? Why and how are the muon-neutrinos different
from the electron-neutrinos? These puzzles are intriguing but they don't drive physicists to despair.
In 1947, only three particles were coming to be known which would now be considered mesons. Two of
them were the positive pion and the negative antipion. The third was a neutral pion which, like the photon and
the graviton, was its own antiparticle.
Only four particles were known in 1947 that would now be classified as baryons. These are the proton,
antiproton, neutron, and antineutron. Neither the antiproton nor the antineutron had yet actually been detected, but
physicists were quite sure they existed.
The situation with regard to the nucleus seemed particularly well settled. There was the nucleus made
up of protons and neutrons held together by pions, and the antinucleus made up of antiprotons and
antineutrons held together by antipions. All seemed well.
But in 1947, the very year which saw the discovery of the pion and the apparent solution of the problem
of the nucleus, there began a new series of discoveries that upset the applecart again.
Two English physicists, George Dixon Rochester and Clifford Charles Butler, studying cosmic rays with
cloud chambers in 1947, came across an odd V-shaped track. It was as though some neutral particle, which left
no track, had suddenly broken into two charged particles, each of which left a track and hastened away in its own direction.
The particle that moved off in one direction and formed one branch of the V seemed to be a pion, but
the other was something new. From the nature of the track it left, it seemed to be as massive as a thousand
electrons, or as three and a half pions. It was half as massive as a proton.
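Those mass comparisons are easy to check; the figures below, in MeV, are modern values used for illustration:

    # How the first V-particle (later the kaon) compares with familiar particles.
    ELECTRON_MEV = 0.511
    PION_MEV = 139.6
    PROTON_MEV = 938.3
    KAON_MEV = 493.7

    print(f"kaon/electron ~ {KAON_MEV / ELECTRON_MEV:.0f}")   # close to a thousand
    print(f"kaon/pion ~ {KAON_MEV / PION_MEV:.1f}")           # about three and a half
    print(f"kaon/proton ~ {KAON_MEV / PROTON_MEV:.2f}")       # about half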
Nothing like such a particle had ever been suspected of existing. It caught the world of physicists by
surprise, and at first all that could be done was to give it a name. It was called a "V-particle," and the collision
that produced it was a "V-event."
Once physicists became aware of V-events, they began to watch for them and, of course, soon
discovered additional ones. By 1950, V-particles were found which seemed to be actually more massive than
protons or neutrons. This was another shock. Somehow physicists had taken it for granted that protons and
neutrons were the most massive particles there were.
The astonished physicists began to study the new particles carefully. The first V-particle to be
discovered, the one that was only half as massive as a proton, was found to have certain properties much like
those of the pion. The new particle was therefore classified as a meson. It was called a "K-meson" and the name
was quickly abbreviated to "kaon." There were four of these: a positive kaon, a negative antikaon, a neutral
kaon, and a neutral antikaon.
The other V-particles discovered in the early 1950s were all more massive than the proton and were
grouped together as "hyperons." There were three kinds of these and each kind was given the name of a Greek
letter. The lightest were the "lambda particles," which were about 20 percent heavier than protons. These came
in two varieties, a lambda and an antilambda, both of them uncharged.
Next lightest were the "sigma particles," which were nearly 30 percent heavier than the proton. There
was a positive sigma, a negative, and a neutral, and each had its antiparticle. That meant six sigma particles
altogether.
Finally, there were the "xi particles," which were 40 percent heavier than the proton. There was a
negative xi particle and a neutral one (no positive variety) and each had its antiparticle, making four altogether.
All these hyperons, an even dozen of them, had many properties that resembled those of the proton and
neutron. They were therefore lumped with them as baryons. Whereas there had been four baryons known, or
suspected, in 1947, there were sixteen in 1957.
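A quick tally of the varieties just listed confirms the count of sixteen:

    # Baryons known or suspected by 1957, as enumerated above.
    baryons_1957 = {
        "nucleons": 2,       # proton, neutron
        "antinucleons": 2,   # antiproton, antineutron
        "lambdas": 2,        # lambda, antilambda
        "sigmas": 6,         # positive, negative, neutral, plus antiparticles
        "xis": 4,            # negative, neutral, plus antiparticles
    }
    print(sum(baryons_1957.values()))   # 16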
But then things grew rapidly more complicated still. Partly, it was because physicists were building
machines capable of producing particles with more and more energy. This meant that nuclei were being
smashed into with greater and greater force and it was possible to turn the energy into all sorts of particles.
What's more, physicists were developing new and better means of detecting particles. In 1952, a young
American physicist, Donald Arthur Glaser, got an idea for something that turned out to be better than the cloud
chamber. It was, in fact, rather the reverse of the cloud chamber.
A cloud chamber contains gas that is on the point of turning partly liquid. Charged particles, racing
through, help the liquid to form and leave trails of water droplets.
But suppose it were the reverse. Suppose there was a chamber which contained liquid that was on the
point of boiling and turning into gas. Charged particles passing through the liquid would form ions. The liquid
immediately around the ion would boil with particular ease and form small bubbles of gas. The tracks would
be gas bubbles in liquid, instead of liquid drops in gas.
This new kind of detecting device was called a "bubble chamber."
The advantage of a bubble chamber is that the liquid it contains is much denser than the air in a cloud
chamber. There are more atoms and molecules in the liquid for a flying particle to collide with. More ions are
formed and a clearer trail is left behind. Particles that could scarcely be seen in a cloud chamber are seen very
clearly in a bubble chamber.
By using bubble chambers and finding many more kinds of tracks, physicists began to suspect, by 1960,
that there were certain particles that came into existence very briefly. They were never detected directly, but unless they existed there was no way of explaining the tracks that were detected.
These new particles were very short-lived indeed. Until now the most unstable particles that had been
detected lasted for a billionth of a second or so. That was a long enough time for them to make visible tracks in
a bubble chamber.
The new particles, however, broke down in something like a hundred thousandth of a billionth of a
billionth of a second. In that time, the particle has only a chance to travel about the width of a nucleus before
breaking down.
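That figure of speech turns into plain arithmetic: a hundred thousandth of a billionth of a billionth of a second is 1e-23 seconds, and even at nearly the speed of light a particle covers almost nothing in such a time:

    # Distance covered in one resonance-particle lifetime.
    LIFETIME_S = 1e-23        # a hundred thousandth of a billionth of a billionth
    LIGHT_SPEED_M_S = 3e8     # upper limit on the particle's speed

    print(f"{LIFETIME_S * LIGHT_SPEED_M_S:.0e} m")   # ~3e-15 m, about one nucleus wide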
These new particles were called "resonance particles" and different varieties have been deduced in great
numbers since 1960. By now there are over a hundred baryons known that are heavier than protons. The
heaviest are over twice as massive as protons.
Some of the new particles are mesons, all of them heavier than the pion. There are about sixty of these.
In the 1960s then, physicists were faced with the problem of finding some way of accounting for a large
number of massive particles for which they could think of no uses and whose existence they couldn't predict.
At first all that physicists could do was to study the way in which one particle broke down into another;
or the way in which one particle was built up into another when energy was added. Some changes could take
place, while some changes could not. Particle A might change into particles B and C, but never into particles D
and E.
Physicists tried to work out rules which would explain why some changes could take place and some
could not. For instance, a neutron couldn't change into only a proton, because the proton has a positive electric
charge and that can't be made out of nothing.
A neutron might, however, change into a proton plus an electron. In that case, a positive and a negative
charge would be formed simultaneously. Together, they might be considered as balancing each other, so it
would be as though no charge at all were formed.
But then to balance certain other qualities, such as the particle spin, more was required. In the end, it
turned out that a neutron had to break down to three particles: a proton, an electron, and an antineutrino.
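The balancing act can be written as simple bookkeeping. Charge must balance, and so must the count of light particles; the "lepton number" used below is the modern convention for that second balance, added here as an illustrative assumption:

    # Bookkeeping for neutron decay: n -> p + e- + antineutrino.
    charge = {"n": 0, "p": +1, "e-": -1, "antineutrino": 0}
    lepton = {"n": 0, "p": 0, "e-": +1, "antineutrino": -1}   # standard convention

    products = ["p", "e-", "antineutrino"]
    assert charge["n"] == sum(charge[x] for x in products)    # charge balances
    assert lepton["n"] == sum(lepton[x] for x in products)    # lepton count balances
    print("neutron decay bookkeeping checks out")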
Matters such as electric charge and particle spin were enough to explain the events that were known in
the old days when only a dozen or so different particles were known. In order to explain all the events that took
place among nearly 200 particles, more rules had to be worked out. Quantities such as "isotopic spin,"
"hypercharge," "parity," and so on, had to be taken into account.
There is even something called "strangeness." Every particle is given a "strangeness number" and if this
is done correctly, it turns out that whenever one group of particles changes into another group, the total
strangeness number isn't altered.
The notion of strangeness made it plainer that there were actually two kinds of nuclear forces. The one
that had first been proposed by Yukawa and that involved pions was an extremely strong one. In the course of
the 1950s, however, it became clear that there was also a much weaker nuclear force, only about a hundred
trillionths as strong as the strong one.
Changes that took place under the influence of the strong nuclear force happened extremely rapidly, in just the time it takes a resonance particle to break down. Changes that took place under the influence of the weak nuclear force took much longer, at least a billionth of a second or so.
Only the baryons and the mesons could take part in strong force changes. The leptons took part only in
weak-force changes. The baryons and mesons are therefore lumped together sometimes as "hadrons."
Even when physicists gradually worked out the rules that showed what particle changes could take
place and what couldn't take place, they were very unsatisfied. They didn't understand why there should be so
many particles.
More and more physicists began to wonder if the actual number of particles was unimportant. Perhaps
particles existed in families and they ought to concentrate on families of particles.
For instance, the first two baryons discovered were the proton and the neutron. They seemed two
completely different particles because there was an important unlikeness about them. The proton had a positive
electric charge and the neutron had no electric charge at all.
This seemed to be an enormous difference. Because of it, a proton could be detected easily in a cloud
chamber and a neutron couldn't. Because of it a proton followed a curved path when brought near a magnet
but a neutron didn't.
And yet when the strong nuclear force was discovered, it was found that it affected protons and
neutrons exactly the same, as though there were no difference between them. If the proton and neutron are
considered from the standpoint of the strong nuclear force only, they are twins.
Could it be, then, that we ought to consider the proton and neutron as two forms of a single particle
which we might call the "nucleon" (because it is found in the nucleus)? Certainly, that might simplify matters.
You can see what this means if you consider people. Certainly, a husband and a wife are two different
people, very different in important ways. To the income tax people, however, they are just one tax-paying
combination when they file a joint return. It doesn't matter whether the husband makes the money, or the wife,
or both make half; in the return it is all lumped together. For tax purposes we simply have a taxpayer in two
different forms, husband and wife.
After 1960, when the resonance particles began to turn up, physicists began to think more and more
seriously of particle families. In 1961, two physicists, Murray Gell-Mann in the United States and Yuval
Ne'eman in Israel, working separately, came up with very much the same scheme for forming particle families.
To do this, one had to take all the various particle properties that physicists had worked out and
arrange them in a very regular way. There were eight different kinds of properties that Gell-Mann worked with
in order to set up a family pattern. Jokingly, he called his system the "Eightfold Way," after a phrase in the
teachings of the Indian religious leader Buddha. The more formal name of his scheme is "SU(3) symmetry."
In what turned out to be the most famous example of SU(3) symmetry, Gell-Mann prepared a family of
ten particles. This family of ten can be pictured as follows. Imagine a triangle made up of four objects at the
bottom, three objects above them, two objects above them, and one object all by itself at the apex.
The four objects at the bottom are four related "delta particles" each about 30 percent heavier than a
proton. The chief difference among them is the electric charge. The four delta particles have charges of -1, 0, +1,
and +2.
Above these are three "sigma particles" more massive than the deltas and with charges of -1, 0, and +1.
Above that are two "xi particles," which are still more massive and which have charges of -1, and 0. Finally, at
the apex of the triangle is a single particle that is the most massive of all and that has a charge of -1. Gell-Mann called this last particle the "omega-minus" particle, because "omega" is the last letter in the Greek alphabet and
because the particle has a negative electric charge.
Notice that there is a regular way in which mass goes up and the number of separate particles goes
down. Notice also that there is a regular pattern to the electric charges: -1, 0, +1, +2 for the first set; then -1, 0, +1;
then -1, 0; finally -1.
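Laid out in rows, with the strangeness numbers that come up a little further on, the family looks like this (a sketch of the pattern, not a reproduction of Gell-Mann's own diagram):

    # The triangular family of ten, row by row from the bottom to the apex.
    # Strangeness values are the ones discussed below.
    decuplet = [
        ("delta", [-1, 0, +1, +2], 0),
        ("sigma", [-1, 0, +1], -1),
        ("xi", [-1, 0], -2),
        ("omega", [-1], -3),   # the predicted omega-minus
    ]
    for name, charges, strangeness in decuplet:
        print(f"{name:6} charges={charges}  strangeness={strangeness}")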

Other properties also change in a regular way from place to place in the pattern. The whole thing is very
neat indeed. There was just one problem. Of the ten particles in this family, only nine were known. The tenth
particle, the omega-minus at the apex, had never been observed. If it did not exist, the whole pattern was ruined.
Gell-Mann suggested that it did exist; that if people looked for it and knew exactly what they were looking for,
they would find it.
If Gell-Mann's pattern was correct, one ought to be able to work out all the properties of the omega-
minus by taking those values that would fit into the pattern. When this was done, it was found that the omega-
minus would have to be a most unusual particle for some of its properties were like nothing yet seen.
For one thing, if it were to fit into its position at the top of the triangle it would have to have an unusual
strangeness number. The deltas at the bottom of the triangle had a strangeness number of 0, the sigmas above
them a strangeness number of -1, and the xis above them one of -2. The omega-minus particle at the top would
therefore have to have a strangeness number of -3. No strangeness number that large had ever been
encountered and physicists could scarcely bring themselves to believe that one would be.
Nevertheless, they began to search for it.
The instrument for the purpose was at Brookhaven, where, as the 1960s opened, an enormous new
device for speeding particles was put into operation. It could speed up particles to the point where they would
possess energies as high as 33 Bev. This was more than five times the energy that had been enough to produce antiprotons some years before.
In November 1963, this instrument was put to work in the search for the omega-minus particle. Along
with it was a new bubble chamber that contained liquid hydrogen. Hydrogen was liquid only at very frigid
temperatures, hundreds of degrees below zero.
The advantage to the use of liquid hydrogen was that hydrogen nuclei were made up of single protons
(except for the very rare deuterium form of the element). Nothing else could supply so many protons squeezed
into so small a space without any neutrons present to confuse matters.
The liquid hydrogen bubble chamber was nearly seven feet across and contained over 900 quarts of
liquid hydrogen. There would be very little that would escape it.
Physicists had to calculate what kind of particle collisions might possess sufficient energy plus all the
necessary properties to form an omega-minus particle, if one could be formed at all. You would have to have a
collision that would supply the necessary strangeness number of -3, for instance. It would also have to be a
collision that would supply no quantity of something called "isotopic spin," for the isotopic spin of omega-
minus would have to be 0 if it were to fit Gell-Mann's pattern.
It was finally decided that what was needed was to smash high-energy negative kaons into protons. If
everything went right, an occasional collision should produce a proton, a positive kaon, a neutral kaon, and an
omega-minus particle.
A beam of 5 Bev negative kaons was therefore shot into the liquid hydrogen bubble chamber and by
January 30, 1964, fifty thousand photographs had been taken. Nothing unusual was found in any of them.
On January 31, however, a photograph appeared in which a series of tracks were produced which
seemed to indicate that an omega-minus particle had been formed and had broken down to form other
particles. If certain known and easily recognized particles were followed backward, and it were calculated what
kind of particles they must have come from, and then those were followed backward, one reached the very
brief existence of an omega-minus particle.
A few weeks later, another photograph showed a different combination of tracks which could be
worked backward to an omega-minus particle.
In other words, a particle had been detected which had broken down in two different ways. Both
breakdown routes were possible for the omega-minus particle if it had exactly the properties predicted by Gell-
Mann. Since then, a number of other omega-minus particles have been detected, all with exactly the predicted
properties.
There seemed no question about it. The omega-minus particle did exist. It had never been detected
because it was formed so rarely and existed so briefly. Now that physicists had been told exactly what to look
for and where to look for it, however, they had found it.
Physicists are now satisfied that they must deal with particle families. There are arguments as to exactly
how to arrange these families, of course, but that will probably be straightened out.
But can matters become simpler still? It has often happened in the history of science that when matters
seemed to grow very complicated, it could all be made simpler by some basic discovery.
For instance, there are uncounted millions of different kinds of materials on Earth, but chemists
eventually found they were all formed out of a hundred or so different kinds of elements, and that all the
elements were made up, in the main, of three kinds of particles: protons, neutrons, and electrons.
In the twentieth century, as physicists looked more and more closely at these subatomic particles and
found that nearly two hundred of them existed altogether, naturally they began to think of going deeper still.
What lies beyond the protons and neutrons?
It is a case of digging downward into the littler and littler and littler. First to atoms, then beyond that to
the nucleus, then beyond that to the proton and neutron, and now beyond that, to what?
Gell-Mann, in working out his family patterns, found that he could arrange them by letting each
particle consist of three different symbols in different combinations. He began to wonder if these different
symbols were just mathematical conveniences or if they were real objects.
For instance, you can write one dollar as $1.00, which is the same as writing 100¢. This would make it
seem that there are one hundred cents in a dollar, and there certainly are. But does this mean that if you were to
take a paper dollar bill and tear it carefully apart you would find a hundred one-cent pieces in it? Of course not!
The question was, then, if you tore a proton apart, would you find the three smaller objects that
represented the three symbols used by Gell-Mann?
Gell-Mann decided to give the particles a name at least. He happened to think of a passage in
Finnegans Wake by James Joyce. This is a very difficult book in which words are deliberately twisted so as to
give them more than one meaning. The passage he thought of was a sentence that went "Three quarks for
Muster Mark."
Since three of these simple particles were needed for each of the different baryons, Gell-Mann decided,
in 1963, to call them "quarks."
If the quarks were to fit the picture, they would have to have some very amazing properties. The most
amazing was that they would have to have fractional electric charges.
When the electron was first discovered, its electric charge was set at -1 for simplicity's sake. Since then,
all new particles discovered have either no electric charge at all or have one that is exactly equal to that of the
electron or to an exact multiple of that charge. The same held for positive charges.
In other words, particles can have charges of 0, -1, +1, -2, +2, and so on. What has never been observed
has been any fractional charge. No particle has ever yet been found to have a charge of, say, +1 1/2 or -2 1/3.
Yet a fractional charge was exactly what the quarks would have to have. Charges of -1/3 and +2/3
would have to be found among them.
An immense search is now on for the quarks, for if they are found, they will simplify the physicist's
picture of the structure of matter a great deal.
There is one important difficulty. Gell-Mann's theory makes it quite plain that when quarks come
together to form ordinary subatomic particles, the process gives off a great deal of energy. In fact, almost all the
mass of the quarks is given off as energy and only about one-thirtieth is left to form the particle. This means
that quarks are thirty times as massive as the particles they produce.
(This sounds strange, but think about it. Suppose you see three balloons blown up almost to bursting.
Would you suppose it were possible to squeeze them into a small box just an inch long in each direction? All
you would have to do would be to let the air out of the balloons and what is left can easily be packed away in a
small box. Similarly, when three quarks combine, you "let the mass out" and what is left can easily fit into a
proton.)
If you want to form a quark by breaking apart a proton or some other particle, then you have to supply
all the energy that the quarks gave up in the first place. You have to supply enough energy to form a group of
particles thirty times as massive as a proton. You would need at least fifteen times as much energy as was
enough to form a proton and antiproton in the 1950s, and probably even more.
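The factor of fifteen is straightforward arithmetic once everything is expressed in units of the proton's rest energy:

    # Why forming quarks should need about fifteen times the energy that a
    # proton-antiproton pair did (units of proton rest energy throughout).
    quark_trio = 30 * 1.0   # quarks together ~30 times as massive as a proton
    pair = 2 * 1.0          # a proton plus an antiproton

    print(quark_trio / pair)   # 15.0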
There is no instrument on Earth, not even Brookhaven's 33-Bev colossus, that can supply the necessary
energy. Physicists have two things they can do. First, they can turn to the astronomers and ask them to watch
for any sign of quarks in outer space. There are cosmic ray particles with sufficient energy to form quarks. Most
cosmic ray particles are protons and if two of them smash together hard enough they may chip themselves into
quarks.
However, this would happen very rarely and so far astronomers have detected nothing they could
identify as quarks. The second possibility is to build a device that will produce particles with sufficient energy
to form quarks. In January 1967, the government of the United States announced plans to build such an
instrument in Weston, Illinois.
This will be a huge device, nearly a mile across. It will take six or seven years to build and will cost 375
million dollars. Once it is completed, it will cost 60 million dollars a year to run.
But when it is done, physicists hope it will produce streams of particles with energies up to 200 Bev.
This may be enough to produce quarks, or to show that they probably don't exist.
Physicists are awaiting the completion of the new instrument with considerable excitement and the rest
of us should be excited, also. So far, every new advance in the study of the atom has meant important
discoveries for the good of mankind.
By studying atoms in the first place, chemists learned to put together a variety of dyes and medicines,
fertilizers and explosives, alloys and plastics that had never existed in nature.
By digging inside the atom and studying the electron, physicists made possible the production of such
devices as radio and television.
The study of the atomic nucleus gave us the various nuclear bombs. These are not very pleasant things,
to be sure, but the same knowledge also gave us nuclear power stations. It may make possible the production of
so much cheap energy that our old planet may possibly reach towards a new era of comfort and ease.
Now physicists are trying to find the quarks that lie beyond the subatomic particle. We can't predict
what this will result in, but it seems certain there will be results that may change the world even more than
plastics, and television, and atomic power.
We will have to wait and see. Once the new device is put into action at Weston, it is just possible we
may not have to wait long.
From: Twentieth Century Discovery by Isaac Asimov
4 - A New Look At The Planets: Distance In Our Solar System
The study of the planets reached a peak in the nineteenth century and then, towards its end, seemed to
die down. Other subjects began to interest astronomers much more. There was nothing left, it would appear, for
twentieth century astronomers to do about planets.
If, indeed, the planets seemed worked out by 1900, that is not surprising. After all, astronomers had
been dealing with them for over 2,000 years, and what more could be left?
To be sure, the ancients had got off on the wrong foot. The Greeks had worked out careful and
interesting theories concerning the motions of the planets as early as 350 B.C. They thought, however, that all
the planets revolved about the Earth.
In 1543, the Polish astronomer Nicolaus Copernicus published a book which argued that the planets
revolved about the sun. He also insisted that the Earth was one of the planets, too, and that it also revolved
about the sun. (The moon, however, revolved about the Earth in the new system as well as the old.)
In 1609, the German astronomer Johannes Kepler worked out the fact that the planets revolved about
the sun in ellipses, which resembled slightly flattened circles. Then, in 1687, the English scientist Isaac Newton
showed how the sun and its planets (the solar system) were held together by gravitational force. All the
motions of the planets could be worked out quite accurately by means of a clear formula which Newton
presented.
Meanwhile in 1609, the Italian astronomer Galileo Galilei had devised a small telescope which he
pointed at the heavens. At once, he saw numerous things no one had ever seen before. He discovered that there
were spots on the sun, for instance, and that there were mountains on the moon. He also found that Jupiter had
four moons that moved about it just as our moon goes about the Earth.
For a hundred and fifty years after Newton, astronomers worked hard to make new discoveries about
the solar system with ever-improving telescopes. They showed more and more of its workings to be explained
by Newton's simple law of gravitation.
New bodies were discovered. Saturn was found to have moons like Jupiter. It was found to have rings,
thin circles of light, about its equator. Even a new planet was discovered in 1781 by the German-English
astronomer William Herschel. It was found to circle the sun at a distance far beyond Saturn and it was named
Uranus.
The climax of all this came in the middle of the nineteenth century. The motions of Uranus about the
sun had been followed for over half a century and they did not quite follow Newton's law. This was very
puzzling and upsetting to the astronomers of the time.
One or two of them wondered if there might not be a planet beyond Uranus; one that had not yet been
discovered. Perhaps this unknown planet was exerting a gravitational pull on Uranus, a pull that wasn't being
taken into account.
In 1845, two young astronomers actually tried to calculate where such a planet ought to be located if it
were to produce just enough effect to make Uranus move as it did. One was an Englishman, John Couch
Adams, and the other a Frenchman, Urbain Jean Joseph Leverrier. Neither knew the other was working on the
problem, but both ended with just about the same answer.
When telescopes were turned, in 1846, on the spot where they said the planet ought to be, it was found! It was a
new giant planet far beyond Uranus and it was given the name Neptune.
It was the greatest triumph in the history of astronomy and the great climax of the study of the solar
system. There seemed nothing left to do among the planets that could possibly equal the drama of 1846.
This seemed all the more true as by the middle of the nineteenth century, the solar system was
beginning to seem a small and insignificant thing anyway. Astronomers' attention was beginning to switch
more and more to the distant stars.
In the 1830s, they had developed methods for determining the distance of the nearer stars. By 1860,
methods were devised to analyze starlight. Astronomers could tell how hot a star was, whether it was moving
towards us or away from us, even the materials of which it was made.
With all these exciting discoveries being made about the stars, fewer and fewer astronomers were left to
concern themselves with the little worlds of our own sun's family.
The solar system wasn't entirely deserted, of course. Some new discoveries were made that were pretty
exciting.
In 1877, for instance, Mars and Earth happened to be in those parts of their orbits that brought them
only thirty-five million miles apart. That is as close together as they ever get. With Mars that close, the
American astronomer Asaph Hall discovered that it possessed two tiny moons.
At the same time, an Italian astronomer, Giovanni Virginio Schiaparelli, found straight dark markings
on Mars, which he called "canali," an Italian word for "channels." The word was mistranslated into English as
"canals."
This made a great difference. Channels are merely narrow bodies of water, but canals are man-made. If
Mars had "canals" that would mean there was intelligent life on it. Naturally, this excited people and there was
considerable discussion about it in the newspapers.
Astronomers, however, did not get overly excited about the matter. Most of them couldn't see the
markings that Schiaparelli had seen, and they suspected that even if they did, there was bound to be some
explanation for it other than the presence of intelligent life.
As the twentieth century opened, however, one man brought the question of the canals to the fore. He
was an American astronomer named Percival Lowell, who came of an old Boston family and had considerable
money.
Using his private fortune, Lowell built an elaborate astronomical observatory in Arizona where the
clean desert air and the absence of clouds made it easy to observe the heavens. This observatory was opened in
1894 and for fifteen years Lowell concentrated on watching Mars.
He was sure that Mars was covered with a fine network of straight lines. He made elaborate maps of
these lines and was convinced they were indeed the work of intelligent beings. Many people who weren't
astronomers were convinced by him. Most astronomers, however, remained skeptical. They insisted that
whatever Lowell saw must be optical illusions.
Lowell created a stir in another direction as well. He was not satisfied that Neptune was indeed the
farthest planet from the sun. Even after Neptune's gravitational pull was taken into account, Uranus still didn't
travel in quite the path one would consider correct from gravitational theory.
Lowell insisted there was yet another planet beyond Neptune and that it, too, pulled on Uranus, though
more weakly because it was farther away. He searched and searched for this "Planet X" but when he died in
1916, he had not yet found it.
The trouble is that the more distant a planet, the smaller and dimmer it appears and the harder it is to
distinguish it from the stars that can be seen at the same time. Planet X was probably so far away that a
telescope good enough to see it would also make out crowds of faint stars. The problem would then become
that of telling one dot of light which was a planet from a million other dots of light which were stars.
Even if calculations were to tell an astronomer about where such a planet might be, it would still have
to be picked out from among the many stars in the same neighbourhood.
After Lowell died, the observatory he had built kept on going and occasionally astronomers who
worked there would do a bit of looking for Planet X. In 1929, the search went back into high gear when a
twenty-three-year-old youngster, Clyde William Tombaugh, joined the staff.
Tombaugh's family were poor farmers who could not afford to send their son to college. Tombaugh,
however, was fascinated by astronomy. He read all he could on the subject and when he was twelve years old
he even built a small telescope for himself out of material he managed to get his hands on. By the time he was
twenty, he had built a neat nine-inch telescope that worked very well indeed.
With his homemade telescope he studied Mars and managed to observe a few canals now and then. He
grew interested, wrote to Lowell Observatory in the hope of getting a job, and got one.
Tombaugh set to work searching for Planet X. It might be just a point of light (if it were there) like any
star, but it was different from a star in one important way. Planet X moved about the sun and that meant that it
shifted its position in space.
Planet X was very far from the sun, of course, so that it moved slowly. That slow motion was even
slower in appearance to astronomers on Earth because the planet was so far away. Even so, the motion could be
spotted easily after two or three days in comparison with the surrounding stars, which didn't move at all!
Tombaugh's technique, then, was to take a photograph of a particular tiny portion of the sky. Then, two
or three days later, that same tiny portion of the sky would be photographed again. If the only thing on the
photographs were stars then nothing at all would have changed position in the slightest. All Tombaugh would
have to do would be to check whether any of the tiny star-images on one plate was in a different position when
compared to the other.
That was easier said than done. Each photograph contained, on the average, 160,000 stars, and it was
just impractical to go over all of them. It would take too much time, and, unless Tombaugh had a tremendous
stroke of luck, the chance of finding Planet X would be very small.
But Tombaugh did the following. The two photographic plates were placed side by side under a kind of
viewer through which Tombaugh could look and see only one. He could adjust a tiny mirror that would enable
him to see first the photograph on the left, then the one on the right.
He could adjust the two photographs so that both would be in exactly the same position. Then, if he
flipped the lever that adjusted the mirror, he would view the photographs left, right, left, right, left, right, over
and over. If both were properly adjusted, the photographs would be so alike that he wouldn't be able to tell
them apart.
But if Planet X were somewhere on the plate, it would change position, for it would have moved during
the several days between the taking of the first photograph and the second. As Tombaugh flipped the lever, the
image of Planet X would shift back and forth, back and forth. All he had to do then was to adjust the
photographs, flip the lever, and watch for any point that blinked. He could ignore all the thousands upon
thousands of other points.
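The idea behind the blinking can be put in modern terms: align two exposures, and anything that is not in the same place on both betrays itself at once. Here is a minimal sketch using invented star fields rather than real plates:

    # A toy version of the blink comparison: compare two aligned exposures
    # and only the object that moved between them shows a difference.
    import numpy as np

    rng = np.random.default_rng(0)
    field = np.zeros((100, 100))
    stars = rng.integers(0, 100, size=(200, 2))
    field[stars[:, 0], stars[:, 1]] = 1.0     # fixed stars, same on both plates

    plate1, plate2 = field.copy(), field.copy()
    for spot in [(40, 40), (40, 43)]:         # keep the planet's pixels star-free
        plate1[spot] = plate2[spot] = 0.0
    plate1[40, 40] = 1.0                      # the "planet" on night one...
    plate2[40, 43] = 1.0                      # ...a few pixels over, nights later

    print(np.argwhere(plate1 != plate2))      # only the mover: (40, 40) and (40, 43)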
Even that wasn't very easy. He had to study the plates one tiny bit at a time. Sometimes he had to flip
the photographs back and forth for six or seven hours before he could study all parts of them and be convinced
that no point blinked. Then, too, sometimes there was a moving point but it was an asteroid, one of the tiny
planets that moved about the sun between the orbits of Mars and Jupiter.
Such asteroids were much closer to the sun and to the Earth than Planet X was. This meant that they
moved more quickly and that there was a much larger shift against the stars. If Tombaugh found a spot that
moved too far, that was as bad as one that didn't move at all. It couldn't be Planet X.
In late January 1930, Tombaugh photographed the stars in a section of the constellation Gemini. For
nearly a month he kept examining those photographs and on February 18, he caught a shift that was so small it
had to be a very distant planet. For weeks he kept taking photographs of that spot and watching the way that
the little dot moved until there was no doubt. The path of the object agreed with what would be expected of a
planet beyond Neptune.
The discovery was announced on March 13, 1930, the day when Percival Lowell would have celebrated
his seventy-fifth birthday if he had lived. The planet was named Pluto, partly because Pluto was the god of the
dark underground in the Greek myths and the new planet was so far from the sun that it received less light
than any other planet. Partly, also, the name was chosen because the first two letters were the initials of Percival
Lowell.
At just about the time the discovery of the new planet Pluto was announced, astronomers were getting
ready for a huge international project involving the solar system.
Ever since Kepler had worked out the elliptical orbits of the planets in 1609, astronomers had been able
to draw an exact model of the solar system. What was lacking, though, was any notion of the actual size of this
model. If only they could get the exact distance of any planet of the solar system, they could work out the
distances of all the rest from the model. The closest planets were Mars and Venus. Mars sometimes was as close
as thirty-five million miles from the Earth and Venus was even closer, sometimes only twenty-five million miles
away.
If either Mars or Venus were viewed at the same time from two widely separated observatories on
Earth, the planet would be seen against two slightly different backgrounds. That is, it would be seen from two
different angles against the stars.
From the distance between the two observatories and from the size of the shift in the position of the
planet, the distance of the planet could be calculated. Then the distance of all the other planets could be
calculated, too. In particular, the distance of the sun from the Earth could be calculated.
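The geometry reduces to the small-angle rule: the distance is roughly the baseline between the observatories divided by the measured angular shift in radians. A sketch with invented numbers:

    # Triangulating a planet's distance from two observatories.
    # The baseline and the shift are made-up illustrative values.
    import math

    baseline_miles = 6000.0                     # separation of the observatories
    shift_arcsec = 35.0                         # measured shift against the stars
    shift_rad = math.radians(shift_arcsec / 3600.0)

    print(f"{baseline_miles / shift_rad:.3g} miles")   # ~3.54e+07 miles here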
There were problems, though. When Venus was closest to the Earth, it was more or less between the
sun and the Earth and it couldn't be seen. Sometimes Venus passes exactly between the sun and the Earth and
then it can be seen as a dark spot against the sun's brightness. If the moment at which Venus moves in front of the sun is measured from two widely separated observatories, then the distance of the planet can be
calculated.
Unfortunately, these "transits" don't happen often. Not a single transit will take place in the twentieth
century, for instance. Another problem is that Venus has a thick atmosphere, which blurs the exact moment at
which it begins to move before the sun.
Mars makes a better target, therefore, even though it is farther away and never passes in front of the
sun. Using Mars, careful astronomers were able to determine the size of the solar system pretty well. The
distance of the sun was placed at somewhere between ninety-three and ninety-five million miles from the
Earth.
Just the same, Mars has a thin atmosphere and it shows up as a small globe in the telescope so that its
exact position is a little fuzzy. What is needed is a planet even closer than Mars or Venus and one that is so
small it has no atmosphere and looks like a mere dot of light in the telescope.
Unfortunately, there is no such planet. Or is there?
Tiny planets do exist. There are the asteroids that circle in orbits between Mars and Jupiter. The largest
is less than 500 miles in diameter (as compared with 8,000 miles for the Earth) and it was discovered on January
1, 1801. Most of those discovered afterwards were less than 100 miles across and there may be many thousands
that are only a couple of miles across and are too dim to see.
In 1898, a German astronomer, G. Witt, discovered a new asteroid, which happened to be number 433. Another new asteroid wasn't much, but when Witt came to calculate its orbit, he received a shock.
Unlike all the other asteroid orbits known, this new one slipped inwards, so that much of the time the new
asteroid was closer to the sun than Mars.
Ordinarily asteroids receive female names, but Witt named this one Eros, and ever since then asteroids
with unusual orbits get male names.
The orbit of Eros is such that at long intervals it can approach the Earth much more closely than either
Mars or Venus. In 1931, it was scheduled to pass within sixteen million miles of Earth, almost its minimum
distance.
Astronomers thought it would be wonderful if Eros could be observed from different places. It was just
a dot of light and would shift its position far more than either Mars or Venus. There would be no trouble
making an accurate measurement of that shift.
The greatest international astronomical project ever attempted up to that time was set up. Fourteen
observatories in nine different countries took part. Seven months were spent on the project and nearly three
thousand photographs were taken. The position of Eros was carefully checked on each one of them.
It took ten years for the proper calculations to be made under the supervision of the English astronomer
Harold Spencer-Jones. Finally, the results were announced. The solar system had been cut to size more
accurately than ever before. The average distance of the sun from the Earth was found to be 93,005,000 miles.
The twentieth century saw a number of discoveries of new small members of the solar system. When
the century opened, only five satellites of Jupiter were known, but between 1904 and 1951 seven more were
discovered. All were small and all were distant from Jupiter. Astronomers feel they are asteroids that had
managed to move too close to Jupiter and had got caught in its gravitational field.
The planet Uranus had four known satellites and Neptune one in 1900, but in 1948, a fifth satellite of
Uranus, smaller than the rest and closer to the planet than any of the others, was discovered by the Dutch-
American astronomer Gerard Peter Kuiper. It was named Miranda. The next year, 1949, Kuiper also discovered
a second satellite of Neptune. It was small and circled Neptune at a great distance. He named it Nereid.
Nine newly discovered satellites had thus joined the lists of known members of the solar system
between 1900 and 1966, bringing the total number to 31.
Saturn was the only one of the outer planets to have received no addition to its satellite family. It had
nine known satellites and the ninth, Phoebe, had been discovered in 1898 by the American astronomer William
Henry Pickering. Now, nearly seventy years had passed and nothing new had been added. To be sure, in 1905,
Pickering had reported a tenth, which he named Themis, but that seems to have been a mistake. No one has
ever seen it since.
But Saturn has something other planets do not have. It has a set of thin, flat rings that circle the planet at
its equator. They are composed of innumerable small fragments which may be no more than pebble-size and
which may be largely ice.
Saturn's poles are tipped towards and away from the sun (just as Earth's poles are) and that means the
rings are tipped, too. We see them either a little from above or a little from below, depending on where Earth
and Saturn are in their orbits. Whether we see them from above or below the brightness of the rings makes it
hard to see anything else that may be very near Saturn.
As Saturn's rings shift from a top view to a bottom view, however, there comes a short period, once
every fourteen and a half years, in which we see the rings edge-on. The rings are so thin that they become
invisible when seen from the edge and the area close to Saturn can then be studied.
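The fourteen-and-a-half-year rhythm follows from Saturn's own orbit: the planet takes about 29.5 years to circle the sun, and the ring plane sweeps edge-on past us twice in each circuit:

    # Interval between edge-on presentations of Saturn's rings.
    SATURN_YEAR = 29.5        # Saturn's orbital period in Earth years
    print(SATURN_YEAR / 2)    # ~14.75 years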
In December 1966, the rings were edge-on and a French astronomer, Audouin Dollfus, photographed
the regions near the planet. He studied the photographs and was pleased to find a tenth satellite. It was closer
to Saturn than any of the others and lay just outside the rings. Edge-on time was about the only moment when
it could be seen easily. Because it was the first satellite, counting out from Saturn, and the last satellite to be
discovered, Dollfus named it Janus after the Roman god of first and last things.
New asteroids were also discovered in the twentieth century and some of them were even more
remarkable than Eros. In 1906, the German astronomer Max Wolf discovered the 588th asteroid. It was odd,
indeed, for its orbit was almost exactly that of Jupiter. He therefore gave it the masculine name of Achilles. A
whole group of asteroids has been found in Jupiter's orbit since then, some moving about 480 million miles
behind Jupiter and some about 480 million miles ahead of it. (Gravitational theory explains that such a situation
is a stable one.) They were all given names of characters from Homer's poem about Troy and are called the
"Trojan asteroids."
In 1920, the German astronomer Walter Baade discovered what is, even today, the farthest of all known
asteroids. Its orbit carries it far beyond Jupiter and takes it nearly as far from the sun as Saturn. He named it
Hidalgo.
Then in 1948, Baade (who by now had become an American citizen) discovered the asteroid that
approaches most closely to the sun. This is Icarus (named after a character in the Greek myths who flew
through the air on feathered wings held together by wax but who flew so close to the sun that the wax melted
so that he dropped to his death). Icarus approaches within seventeen million miles of the sun. This is
considerably closer than the approach of Mercury, the innermost large planet.
The orbit of Icarus is such that it can approach within four million miles of Earth. This is a much closer
approach than even that of Eros, so that Icarus is one of the group of asteroids now called "Earth-grazers."
About half a dozen of these are now known, most having been discovered in the 1930s.
In 1937, the German astronomer Karl Reinmuth detected an asteroid which he named Hermes. Its orbit,
when calculated, showed that it could approach as closely as 200,000 miles. It would then be even closer than
the moon.
Yet none of all these discoveries of the first thirty years of the twentieth century seemed to make the
solar system very exciting.
They lacked drama. The discovery of Pluto was the result of years of hard work, instead of the product
of one great stroke. The work on Eros just resulted in a slight adjustment of the calculated distance of the sun.
The discovery of a few small satellites and asteroids didn't seem like much.
The great excitement was going on far beyond the solar system. It was found that all the hundred
billion stars of the Milky Way (of which the sun is one) make up a huge collection called the galaxy. Far outside
that collection are many millions of other galaxies.
In the 1920s, moreover, it was discovered that the distant galaxies were moving away from us. The
farther away they were, the faster they were moving. The whole universe was expanding.
It was a brand-new vision of endless space that broke on the eyes of the astronomers as the twentieth
century progressed. There seemed little to compare with that in the solar system.
There were some interesting puzzles in the solar system, to be sure. There was still the question of
canals on Mars. Were those marks really canals? Was there intelligent life on Mars? What lay under the
mysterious blanket of clouds that hid the surface of Venus? What was on the other side of the moon, the side
men never saw?
These were fascinating problems because they involved bodies that were so close to us, but there was
no way astronomers could answer them. It seemed there would never be any way.
Yet although astronomers didn't realize it at the time, the 1920s and 1930s saw two enormous
breakthroughs which were to revolutionize completely the study of the solar system in ways undreamed of in
the nineteenth century.
One of these breakthroughs took place in 1926, when a professor at Clark University in Worcester,
Massachusetts, fired a rocket into the air. This event, and what followed, will be considered in the next chapter.
The other event, which took place in early 1932, will be described now.
One of the problems that faces astronomers is the fact that the Earth has an atmosphere. Naturally,
people need the atmosphere to breathe; even astronomers do. But it is a problem when it comes to observing
the heavens.
The atmosphere absorbs some of the light from the stars and planets. It curves the light that reaches it
from objects near the horizon and makes those objects appear higher in the sky than they really are. There are
temperature differences that cause light beams to waver, so that it is hard to get sharp pictures. There is often
haze and smoke in the air and sometimes clouds that blank out everything.
Then, too, as human population grows, cities grow too and become more and more lit up at night. This
light is scattered by the air and it becomes harder than ever to watch the sky. Astronomers can scarcely find
sites high enough on the mountains and far enough from cities to make it possible to observe the skies in peace.
But need astronomers be confined to studying the sky by ordinary light?
Ordinary light is only a small section of a huge band of radiation, and it seems quite likely that stars and
planets send out other radiations in this band. Unfortunately, the other sections of the band of radiation can't be
detected by eye so that special instruments are needed to receive them. Furthermore, Earth's atmosphere, which
lets ordinary visible light through, stops most other sections of the radiation band cold.
All these different types of radiation, including visible light, act as though they are made up of tiny
waves. The difference between one type of radiation and another is in the size of these waves. When the waves
are very long, we have what we call "radio waves." These were discovered in 1888 by a German physicist,
Heinrich Rudolf Hertz.
Whereas the waves of ordinary light are so short that there are about 50,000 to the inch, the individual
radio wave can be many miles long. Even the shortest radio waves (called "microwaves") can be several inches
long.
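Putting those two figures on one scale makes the gulf plain; the inch-to-metre conversion is standard, and the three-inch microwave is an illustrative choice:

    # Comparing "50,000 waves to the inch" light with a several-inch microwave.
    INCH_M = 0.0254

    light_wavelength_m = INCH_M / 50_000
    microwave_m = 3 * INCH_M

    print(f"visible light: {light_wavelength_m * 1e9:.0f} nm")   # ~508 nm
    print(f"microwave: {microwave_m:.4f} m")                     # ~0.0762 m
    print(f"ratio: ~{microwave_m / light_wavelength_m:,.0f}x")   # ~150,000x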
Once radio waves were discovered, physicists began to try to use them to carry signals over long
distances. The Italian engineer, Guglielmo Marconi, managed to send signals by radio waves from England to
Newfoundland in 1901, and that can be considered the birth of our modern radio. Marconi's achievement was
puzzling in a way. Radio waves travel in straight lines while the surface of the round Earth curves. How can
radio waves manage to go round the curve? It turned out that the radio waves used by Marconi bounce off
layers of ions in the upper atmosphere and zigzag up and down as they cross the Atlantic.
This does not happen if the radio waves are too short. The microwaves, for instance, go shooting
through the layers of ions in the upper air (the "ionosphere") without trouble. Signals carried by microwaves
would not travel along the Earth's surface for more than a few miles.
As a result, engineers who worked with radio (and there were many of them during the 1910s and
1920s) worked with long radio waves. Short radio waves were ignored because they seemed useless. No one
paid attention to the fact that if they could go through the atmosphere as easily as ordinary light did, they
might be useful to astronomers.
The man who first got a hint of that fact was Karl Jansky, a young American radio engineer, working
for the Bell Telephone Laboratories. The people at Bell Telephone were interested in telephone conversations
carried on over long distances with the help of radio waves. These were often interfered with by static and it
was Jansky's job to try to pin down the causes of the static. Once the causes were known, the cures might be
found.
Jansky, working in New Jersey, devised a large radio antenna which could be rotated to receive signals
from any direction. When there was static, there were sure to be stray radio waves acting to produce it. Jansky's
antenna could be rotated until the static was loudest and it would then be pointing to the source. If the source
were known, then perhaps something could be done about it.
Jansky expected that a lot of the trouble arose from thunderstorms and the stray radio waves set up by
the lightning. Sure enough, he did get a kind of crackling static from lightning, even when it was far off on the
horizon, too far to see.
But then, in January 1932, he became aware of a faint hiss in his receivers, a sound quite unlike the
lightning crackle. He might have thought it was just "noise" created by imperfections in his apparatus, but the
hiss became louder and softer as he turned his antenna.
He found that the hiss was loudest in the direction of the sun. He wondered if he might be receiving
radio waves from the sun.
If the sun had happened to have a great many sunspots at the time, the radio waves would indeed have
been coming from the sun, for it was eventually discovered that the spots give rise to intense radio waves. In
1932, however, the sun was at a quiet period with few spots. It was producing very little in the way of radio
waves.
As Jansky turned his antenna every day, he found that the hiss was not coming from the sun at
all. In fact, its source moved farther from the sun every day.
The sun moves slowly against the background of the stars (because the Earth, from which we watch the
sun, is revolving about it so that we see the sun from a different angle every day) but the source of the hiss did
not move. It remained at the same point in the constellation of Sagittarius.
Jansky realized he was getting radio waves not from the sun but from a different and possibly much
more distant source. We now know they came from the centre of our galaxy.
Jansky reported his findings, but they did not make much of a splash. The kind of radio waves that
Jansky had detected coming from outer space were just those short microwaves with which nobody did any
work. There were no instruments available that could really handle them. Astronomers preferred to work in fields
where they had the instruments.
They didn't seem to realize that they were ignoring something that was perhaps the greatest
astronomical discovery of the twentieth century.
One youngster, in his twenties, was inspired by the report, however. He was Grote Reber. He built a
device in the back yard of his home in Wheaton, Illinois. It was a curved reflector, thirty-one feet across, with
which he received radio waves and reflected them into a detecting device at the centre. He put his "radio
telescope" to work in 1937 and became the world's first radio astronomer.
All through the years of World War II, Reber kept carefully noting the quantity of radio waves coming
from different portions of the sky. He was able, in this way, to produce the first radio map of the sky. He was
also able to detect a few places from which radio waves seemed to be coming in particularly great quantities.
These were the first "radio sources."
What eventually saved the situation was that during the 1930s interest grew in another angle of radio.
You can tell a great deal about an object if you bounce radiation off it and study the reflection. If you
reflect light waves from a chair, the nature of the reflection will tell you the chair's shape, size, position,
distance, colour, and so on.
Bats use sound waves for the purpose. Their squeaks are reflected by insects, twigs, and other objects
and by listening to the echo, they can catch the insects or avoid the twigs. There are other examples of the same
process.
Now suppose you wanted to detect an enemy aeroplane at night without letting the enemy pilot know
he was detected. You could use a bright beam of ordinary light, but the enemy would see it. Besides, light is
easily stopped by clouds, fog, mist, or smoke.
It would be much better to use some other form of radiation that he couldn't see and that would pass
through clouds and other such obstructions. The longer the waves of the radiation, the better they would pass
through clouds and the rest. If the waves were too long, however, there would be too much of a tendency for
them to move around an object instead of being reflected by it.
It turned out that microwaves were just right. Their waves were long enough to go through clouds and
short enough to be reflected by planes.
In Great Britain especially, methods were developed for sending out a tight beam of microwaves and
receiving the echo. Then, from the echo, you could tell the position and distance (or "range") of the reflecting
object, which could be an enemy plane. The device was called "radio detection and ranging" and this was
abbreviated as "ra.d.a.r.", which became the word "radar." Radar wave has therefore become another name for
microwave.
Great Britain developed radar just in time to have it take part in the Battle of Britain in 1940. The British
could detect the German planes coming in over the Channel by night as well as by day and were always
waiting for them in the proper place. Without radar, Britain might have lost the war.
The important thing to astronomers was this: In developing radar, engineers had to learn to handle
microwaves. Once they developed instruments to do that, those same instruments could be used to detect
microwaves from outer space.
What's more, Great Britain became aware of microwaves from outer space in the course of the war.
In February 1942, Great Britain found severe interference with its radar network. The first thought was
that the Germans had discovered the network and were jamming it in preparation for large new air strikes. A
team under the British engineer Stanley Hey began to investigate the matter.
Hey discovered the source of the jamming in a few days. The sun was not quiet, as it had been when
Jansky made his key discovery. It was loaded with sunspots and it was broadcasting radio waves. For the first
time, radio waves from outer space were pinned down to a definite source-the sun. Immediately after the war,
astronomers, using all the equipment and techniques worked out through radar developments, turned to the
study of radio astronomy in a big way.
The "radio sky" was mapped in greater and greater detail, and certain radio sources were identified. It
was found that stars that had once exploded were such strong sources of radio waves that they could be
detected through all the vast distances that separated those stars from us.
Indeed, it was discovered that whole galaxies could be sources of radio waves of even greater
intensities. Distant galaxies could be detected with greater ease by radio telescopes than by ordinary ones.
Radio astronomy in the 1960s uncovered mysterious objects which were named "quasars" by
astronomers. There is no certainty as to exactly what they are, but some think that they are small but
enormously bright objects farther away than anything else we know. The quasars may tell us a great deal about
the youth of the universe billions of years ago, and about its edges billions of trillions of miles away.
In fact, in 1964, certain types of radio waves were studied which seemed to come from all directions and
which some astronomers think are the radiation that was released when our universe was first formed.
Interestingly enough, the great discoveries of radio astronomy were not confined to far away places
only. News was brought to mankind concerning its nearest neighbours in space, the planets of the solar system.
Some of the news was so exciting and unexpected that the study of the planets, which seemed to have been
played out, suddenly burst out into fascinating new directions.
For instance, if beams of microwaves can be reflected from enemy aircraft, and if the echoes can give us
information, why can't such beams be reflected from objects of astronomical interest?
Hey, who discovered the radio wave radiation of the sun during the war, also noted certain echoes that
seemed to be originating in the upper atmosphere. From the time it took the echoes to return, he could calculate
the height, and he began to wonder if he weren't detecting meteors.
After the war, he studied these echoes in detail. Finally, in 1946, he was able to show that meteors leave
so thick a trail of ions that some microwaves are reflected. One could therefore study meteor trails by radar.
This was useful, for only the larger meteors (about the size of pinheads or more) could be seen by their
gleaming light, as friction with the air heated them white-hot, and even then they could only be seen at night.
Using radar, however, small meteors could be detected day or night, if they were in sufficiently large clusters.
Certain large clusters of meteors move around the sun in what had once been the orbits of comets that
had finally fallen apart. Once a year, the Earth will pass through a particular cloud and there will then be a
shower of flashing trails left by many meteors moving quickly through the atmosphere.
Once in a longish while the Earth may move through the thickest part of such a cloud and then the trails
may appear to be as thick as snowflakes. This happened over the eastern United States in November 1833.
There are about a dozen meteor clouds that have been observed in this way. Now that radar
observations are made, at least three more have been found that always strike from the general direction of the
sun. They always approach on the daylight side of the Earth, in other words, and can never be seen by eye.
But do we have to confine ourselves to Earth's atmosphere? Could not a beam of microwaves travel
outside the air altogether? If it were aimed in the direction of the moon, it could reach the moon in one and one-
quarter seconds, strike its surface, bounce off, and shoot back. The echo would reach Earth again after another
one and one-quarter seconds. There would be two and a half seconds altogether between the time of sending
and the time of return.
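
The ranging arithmetic itself is simple: the echo covers the Earth-moon distance twice, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch, using the round figure of two and a half seconds from the text:

    # Radar ranging: distance = speed of light * round-trip time / 2.
    C_MILES_PER_SECOND = 186_282  # speed of light, miles per second

    def range_from_echo(round_trip_seconds):
        """Distance to the reflecting body, from the echo's travel time."""
        return C_MILES_PER_SECOND * round_trip_seconds / 2

    # The 2.5-second lunar round trip gives about 233,000 miles, in rough
    # agreement with the moon's actual distance of roughly 239,000 miles.
    print(range_from_echo(2.5))
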
Naturally, the radar beam would spread out with distance. Some of it would be absorbed by the moon.
Some of it would bounce off in directions away from the Earth. Then the returning echo would spread out
again over the distance between moon and Earth. Only a very faint echo would be received.
To detect such a faint echo, either a very intense beam must be sent out in the first place, or very
sensitive devices must be developed for detecting echoes, or both.
Difficult as it was, the feat was accomplished almost as soon as the end of World War II freed radar
equipment for the task. In early 1946, a Hungarian, Zoltan Lajos Bay (who has since emigrated to the United
States), reported receiving echoes. A very short time afterwards, the United States Army, with more powerful
equipment, managed to do the job in an even more clear-cut way.
Reaching the moon by microwave was comparatively easy, because it is so close as compared with
other astronomical bodies. The sun is much farther away but it is a giant in size so that it offers a large target. In
1959 astronomers aimed a beam of microwaves at it and a group at Stanford University in California managed
to get an echo back. The sun's own microwave radiations confused the echo, of course, but it could be made
out.
The important target, however, was Venus. Venus was closer than the sun and echoes could be received
from it much more sharply. Still, Venus was a much smaller body than the sun, a little smaller than the Earth,
even. It made a tiny target in the heavens, and it would be a triumph, indeed, if a beam of microwaves could be
made to strike Venus and return to Earth. The returning echo would be exceedingly feeble and to detect it
would require the most delicate instruments and the most careful work.
If it could be done, however, a great deal could be gained. Scientists knew quite accurately how quickly
a beam of microwaves traveled through space. It traveled at the speed of light, which is a fraction over 186,282
miles per second. If one could measure the exact length of time it took for the microwaves to travel from Earth
to Venus and back, one could calculate just how far Venus was at that moment.
Then all the other distances of the various bodies of the solar system could be calculated from that. In
just a few days, the distance of the sun could be determined more accurately than through the entire ten-year
project that involved the asteroid Eros.
Everyone was trying for the Venus echo and in 1961 three different American groups, one British group,
and one Russian group all succeeded. Each calculated the distance of Venus and then of the sun. The best
figures, obtained by a group from M.I.T., seem to show that the average distance of the sun from the Earth is
about 92,955,600 miles. That is 50,000 miles closer than the results given by the Eros project.
After Venus was successfully touched, other planets were reached. In 1962, a Russian team made
microwave contact with Mercury, a smaller and more distant target than Venus. In 1963, astronomers at the
California Institute of Technology made contact with Mars. There have also been reports of contact with Jupiter,
a planet more distant by far than any of the earlier targets, but this is still uncertain.
 
Microwave echoes can tell us far more than the distance of an object. They can tell us a great deal about the
kind of surface that is reflecting the beam.
Suppose the microwaves were bouncing off a perfectly smooth sphere. Those waves that hit the exact
centre of the side of the sphere facing us would bounce back perfectly.
The echo would come back right on the line along which the original wave had approached. The echo
would return to the instrument that had sent out the wave and it would be detected.
Microwaves that hit the sphere a little way from the centre of the side facing us would bounce off to one
side. (You can see why this would be so if you imagined yourself throwing a ball at a curved wall. If the ball hit
the wall where it curved away from you, it would bounce to one side.) The farther from the centre that the
radar touched, the farther to the side it would bounce.
But, of course, the moon is not a perfectly smooth sphere. It is uneven. It has mountains and craters,
hills and rocks. A microwave striking the centre of the moon might hit the side of a hill or even the side of a
rock and be reflected away from us, instead of coming straight back.
Then, too, if a microwave struck a point on the moon quite a bit away from the centre, it might hit an
uneven portion slanted in such a way that the wave would be reflected right back to us. So you see we would
be getting some echoes from all over the moon.
But the moon's surface curves away from us and near the rim of the part we can see, the surface is over
a thousand miles farther from us than is the surface in the very centre. This means that the microwave echo isn't
absolutely clean and sharp. The part reflected from the centre of the moon comes back first and then small
echoes come back from uneven surfaces a little farther along the curve of the moon, and then from uneven
surfaces a little farther still, and so on.
The echo is a little fuzzier than the original wave. The fuzziness becomes greater or less as microwaves
with different wavelengths are used, for the smaller the wavelength, the more the wave is affected by small
unevennesses. From all this astronomers can get an idea of how rough the moon's surface is.
To work out the roughness of the moon's surface by "feeling" it with microwaves is exciting, but again
Venus is much more important.
Venus is our nearest neighbour in space, next to the moon, but we know almost nothing about it. Its
thick atmosphere is filled with clouds that never thin out. All we can see is the cloud layer so that Venus, in the
telescope, looks like a shiny, white ball with no markings.
Microwaves can penetrate those clouds, though, and bounce off the rocky soil no one has ever seen.
From the fuzziness of the echo, something can be worked out about the unevenness of that surface.
Late in 1965, for instance, it was decided that there were at least two huge mountain ranges on Venus.
One of them runs from north to south for about 2,000 miles and is several hundred miles wide. The other is
even larger and runs east and west. The two ranges are named for the first two letters of the Greek alphabet.
They are the "Alpha Mountains" and the "Beta Mountains."
It is still uncertain as to how high these mountains are, but astronomers are using additional microwave
measurements to work out a crude map of Venus-the map of a surface we have never seen.
Microwave measurements have also been used to test the roughness of Mars and by 1967 it was decided
that Mars was about as rough as the Earth. This was a surprise, for studies by ordinary telescopes had made it
seem that Mars was rather smooth.
It now seems that some Martian mountain peaks are as much as eight miles above the lowland depths.
This is actually higher than Earth's mountain peaks, but then Mars has no ocean. If we measured the height of
our mountains above our ocean bottom instead of above the top of the ocean water, some of our ranges would
be over ten miles high.
Even that isn't all the information microwave echoes can give us.
Suppose that a microwave beam is reflected by a body that is turning on its axis, and suppose the body
is turning from left to right as we look at it.
The part of the body at the left is turning along the curve of its surface, towards the middle, which is
closer to us than any other part is. The part of the body at the left is coming towards us, in other words. The
part of the body at the right is naturally turning away from us.
If the microwave beam hits the left side of the body, which is coming towards us, then the waves are
squeezed together. Those parts of the echo that reach us from there have shorter waves than the original beam
had. In the same way, the radar beam that hits the right side bounces back from a part that is moving away and
its waves are pulled apart. That part of the echo has longer waves than the original.
From the way in which the lengths of the radar waves have stretched out and pushed together as
compared with the original, astronomers can tell how fast the body is turning.
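
The relation involved is the standard radar-Doppler one: a surface approaching at speed v squeezes the reflected waves, raising their frequency by about 2v/c times the original (the factor of two arises because the wave is both received and re-emitted by the moving surface). A sketch, with invented numbers purely for illustration:

    # Limb speed of a rotating body from the spread of its radar echo.
    C = 299_792_458.0  # speed of light, metres per second

    def limb_speed(transmit_frequency_hz, frequency_shift_hz):
        """Speed of the approaching limb from the measured Doppler shift."""
        return C * frequency_shift_hz / (2 * transmit_frequency_hz)

    # Invented example: a 430 MHz radar whose echo is broadened by 12 Hz
    # implies a limb moving at only about 4 m/s -- the sort of very slow
    # spin that radar work revealed on Venus.
    print(limb_speed(430e6, 12))
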
This can be tried on the moon. We know how fast it is turning. Microwave echoes give the right answer.
Astronomers were therefore confident they could try it on other bodies. What about Mercury, for instance?
They thought they knew how fast Mercury rotated on its axis-once in eighty-eight days, exactly as long as it
took to go around the sun once.
This is no coincidence. When a small body turns about a nearby large body, the gravitational force of
the large body pulls some of the small body towards itself and makes a bulge in its direction. As the small body
turns, this bulge is forced to remain pointing to the large body. It slips about the small body and as it does so, it
sets up friction that slows down the rotation, just as the friction of a brake slows down a bicycle.
Finally, the small body slows its rotation till it is turning just once on its axis each time it moves around
the big body. When this happens, the small body always turns the same side to the big body, so that the bulge
is always in one place. There is no more friction.
The moon turns on its axis in just the time it takes to move once around the Earth so it always shows us
the same side. It has a bulge in the centre of that side that faces us; a bulge about two miles high.
In order to tell how fast a planet turns on its axis (without the use of microwaves) astronomers would
watch for certain markings on its surface and measure the time it took for those markings to disappear round
the other side and come back. Accurate measurements can be made on even distant planets in this way.
The rotation of Mercury was hard to measure in this fashion, though. It is so close to the sun that it is
difficult to make out its surface features in the glare.
In 1890, Schiaparelli (the astronomer who had first detected the "canals" on Mars) did follow certain
features on Mercury. He found that when Mercury was in a certain position with respect to the sun, he could
often make out the same markings in the same position. This would be what was to be expected if Mercury
always turned with the same face towards the sun and this would happen if it turned on its axis in the same
time that it turned about the sun-eighty-eight days.
Astronomers were quite satisfied with that, for it made sense. The huge sun had slowed the rotation of
nearby Mercury, as Earth had slowed the rotation of the moon. And, indeed, the first microwave contact made
with Mercury seemed to show that that was so.
However, more and better contacts followed and in 1965, astronomers found themselves faced with
surprising data. Careful work on microwave echoes from an observatory in
Puerto Rico showed that Mercury did not turn on its axis in eighty-eight days, but in a rather shorter
time. Other laboratories pointed their microwaves at Mercury at once and the result was found to be correct.
Mercury turns on its axis once in fifty-nine days.
But if that is the case, how could Schiaparelli have thought that the rotation was an eighty-eight-day
one? Did he make a mistake in observing the markings?
Perhaps not. A period of fifty-nine days is just two-thirds of the eighty-eight-day swing about the sun.
This means that every time Mercury moves about the sun two times, it turns on its axis three times.
Imagine that a certain spot on Mercury's surface faces the sun at a particular time. When Mercury has
gone around the sun twice, it has turned on its axis three times, and the same spot is again facing the sun.
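
The arithmetic of this three-to-two lock is easy to verify; here is a short sketch using the round numbers of the text:

    # Mercury's 3:2 spin-orbit resonance, in round numbers.
    orbit_days = 88                   # Mercury's trip around the sun
    spin_days = 2 / 3 * orbit_days    # the radar result: about 58.7 days
    print(spin_days)                  # the "fifty-nine days" of the text

    # Two orbits last exactly as long as three rotations, so the same
    # face returns to the sun every second orbit -- which is all that
    # Schiaparelli could actually have seen.
    print(2 * orbit_days, 3 * spin_days)   # both come to 176 days
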
When Schiaparelli observed markings, he would have seen the same one in the same place every other
time Mercury turned about the sun. He didn't see them in between but perhaps he paid little attention to that
because Mercury was so close to the sun, one couldn't always be sure what one saw anyway. So he made the
easy supposition that the markings were probably there every time, whether he saw them or not, and that
Mercury rotated in eighty-eight days.
But again it was Venus that supplied the still greater surprise. That had happened a year before
Mercury's rotation had been given a new look.
In the case of Mercury, astronomers at least thought they knew what the time of rotation was, even
though they were wrong. In the case of Venus, no one knew. There were never any markings that could be
followed.
That was so frustrating. All the other planets had definite rotation times that could be measured (even
though Mercury's was measured wrong). Even distant Pluto, over 150 times as far as Venus, was not
mysterious in this respect. Pluto is so distant it can only be seen as a dot of light even in a good telescope and
no markings can be made out. However, it seems to grow slightly brighter and dimmer in a regular way.
Astronomers have decided that this is the result of some part of it being brighter than the rest for some reason;
and it is the bright part showing and vanishing as the planet rotates that makes the flicker. Judging by this,
Pluto seems to rotate once every 6.4 days.
Yet Venus had no known period of rotation at all. Most astronomers thought that probably Venus's
rotation was slowed by the sun and that it showed only one face to the sun. That would mean it would turn on
its axis only once each time it turned about the sun-once in 225 days.
But what would radar say?
Radar had its say in 1964, and the answer was a startling one. Venus rotated not once in 225 days, but
once in 243 days, so that it did not show only one face to the sun. But what really astonished astronomers was
that Venus turned in the wrong direction!
To see what we mean by the wrong direction, imagine that you are viewing the solar system from a
point high above the Earth's North Pole. All the planets would be seen to move around the sun in the same
direction - counter-clockwise; that is, the direction opposite to that in which the hands of a clock move about its
face. All the large satellites turn counter-clockwise about their planets, too, provided they move about the
planet's equator. (Neptune's large satellite does not move about its equator and it is exceptional.)
The sun and the planets also rotate about their own axes in counter-clockwise fashion. (Uranus is a
partial exception. Its axis tips over so far that it seems to be rolling on its side. Astronomers don't know why.)
All these counter-clockwise motions are thought to have arisen at the very beginning of the history of
the solar system. The solar system began its life as a huge cloud of gas and dust turning slowly in a
counter-clockwise direction. That counter-clockwise turning remains to this day in all the motions of the various
parts of the solar system.
Yet Venus turns about its axis very slowly in the wrong direction. It turns clockwise. This is not because
its axis is tipped, as in the case of Uranus. The axis of Venus is almost perfectly upright.... Astronomers can't
explain this wrong-way motion.
There is an even greater mystery involved, for the period of rotation seems to be tied to Earth. Every
once in a while, Earth and Venus reach positions in their orbits which place them as close together as they ever
get. Venus manages to turn just four times in that period.
This means that every time Venus comes as close as possible to the Earth, it shows the same face to the
Earth. We can't see this, because we can't see through the clouds, but it seems to be so.
But why is it so? Can Earth's gravitational pull have slowed the rotation of Venus and made it show the
same face to us at every close approach? How could that be, since Earth's gravitational pull is so much less than
the sun's? Why would Venus respond to Earth instead of to the sun?
Astronomers don't know.... At least, not yet.
So far I have talked about microwaves being sent out from Earth to various bodies in the solar system.
How about microwaves sent out from the various bodies to the Earth?
The sun sends out microwaves, of course. That has been known since 1942. But then every body in the
solar system ought to be producing them too.
Every body contains a certain amount of heat and that means it produces a certain amount of radiation.
The greater the temperature of the body, the greater the energy of the radiation it produces and, on the average,
the shorter the waves making up that radiation.
If a body has a temperature of about 1000° F. or more, it sends out radiation that is so energetic and
short wave that some of it appears in the visible light region. The body is "red hot," for it glows a deep red. As
the temperature gets still higher, the light grows brighter and shorter in waves. The sun's surface is at 10,000° F.
and it radiates all the colours brightly. It even radiates ultraviolet light, which is invisible, but which has more
energy and shorter waves than ordinary light.
An object that has a temperature of less than 1000° F. doesn't radiate visible light, but it does radiate all
the wavelengths longer than visible light. It radiates infrared light, for instance, which has less energy and
longer waves than visible light. We can't see infrared but we can absorb it and feel it as heat. We can feel the
heat of a hot iron from a small distance even though it isn't hot enough to glow.
These too-cool-to-glow bodies all radiate microwaves as well and even longer radio waves. Such waves
are so long and have so little energy that even the coldest bodies can radiate them. They have so little energy
that we can't feel them in any way, but we have instruments that can detect them.
Every body in the solar system radiates a certain quantity of long-wave radiation. The exact quantity
and the exact length of the waves depend on the temperature of the body.
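
The rule connecting temperature and wavelength is Wien's displacement law: the wavelength at which a body radiates most strongly is inversely proportional to its absolute temperature. A minimal sketch (the temperatures below are rounded, illustrative values):

    # Wien's displacement law: peak wavelength = b / temperature.
    B_WIEN = 2.898e-3  # Wien's constant, metre-kelvins

    def peak_wavelength(temperature_kelvin):
        return B_WIEN / temperature_kelvin

    # The sun (~5,800 K) peaks in visible light; a room-temperature body
    # (~290 K) peaks in the infrared; cold bodies push ever more of the
    # long-wavelength tail of their radiation toward the microwave region.
    for t in (5800, 290, 100):
        print(t, "K ->", peak_wavelength(t), "m")
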
By studying the microwaves sent out by the moon or by a planet, we can therefore determine the
temperature of the body. The first determination of this sort came in 1946 when two American astronomers,
Robert Henry Dicke and R. Beringer, picked up radio waves sent out by the moon.
Promptly, this produced a puzzle. By studying the moon's infrared radiation, it had seemed that the
temperature varied a great deal because there was no atmosphere on the moon to hold and spread the heat. At
the height of the moon's day, the temperature reached 250° F. in some places, and this is well above the boiling
point of water. At the close of the moon's long night, the temperature had dropped to 280° below 0° F. (which
we can write as -280° F.).
The microwaves sent out by the moon, however, seemed to show much smaller variations in
temperature. Astronomers decided that the infrared radiation comes from the very surface of the moon, while
the radio waves come from some distance below the surface.
As the sun glares down on the moon, the surface heats up. The heat can't penetrate far beneath the
moon's surface, however, and the lower layers remain cool. Then, in the moon's night time, the surface layer
loses heat but the deeper layers don't.
It may be that about a yard below the surface of the moon, the temperature remains about -40° F. day
and night. Naturally, astronomers went on to try to detect microwave radiation from other planets to see what
that would tell them about the temperature of the planets. They could compare that with what they knew the
temperature of the planet ought to be considering its distance from the sun.
They expected no surprises, but they got a big one from the very planet that has been turning
everything upside down in the 1960s-Venus.
Earlier measurements of infrared radiation from Venus had shown the temperature to be -40° F. This
may seem too cold for a planet that is closer to the sun than Earth is. Infrared radiation, however, reaches us
from above the cloud layer of Venus. Naturally, that part of the atmosphere of Venus would be cold. It is cold
on Earth, too; that is why high mountains have snow on them all year round even when they are located on
Earth's equator.
Microwaves are another thing altogether. They can penetrate the cloud layer on Venus easily. Therefore
if the solid surface of the planet gives off microwaves, those would go through the cloud layer and reach us.
(Infrared radiation wouldn't.) The microwaves would give us the temperature of the solid surface of the planet.
In May of 1956, microwave emission from Venus was finally detected by C. H. Mayer at the Naval
Research Laboratories in Washington. Surprisingly, the flood of microwaves was much greater than had been
expected. They showed that the surface of Venus must be at a temperature of 600° F. and later measurements
backed that up.
Astronomers expected Venus to be a warm world and, because of its thick clouds, sometimes visualized
it as covered with a warm ocean. But now it seemed there was no ocean at all, for the planet was far hotter than
the boiling point of water.
Any water on Venus would have to be in the form of steam and that might be why the cloud layer on
the planet is so thick and permanent. (On the other hand, some astronomers believe that Venus has no water at
all and that the clouds are something else.)
But why should Venus be so hot? One explanation involves its atmosphere.
When visible light strikes a planet it passes through the atmosphere and strikes the surface of the
planet. The atmosphere doesn't interfere much with such visible light. Even clouds only stop part of the light.
The light that is absorbed by the planet's surface heats it up a little. The surface then gives off radiation
of its own that is less energetic than visible light (after all, the planet's surface isn't as hot as the sun). Much of
the light radiated by the planet's surface is infrared radiation.
This infrared ought to pass through the atmosphere and vanish into space and the planet, then, with
light coming in and infrared going out, would be at a certain temperature.
But there are some gases which are transparent to visible light but not to infrared radiation. One of
these is carbon dioxide. Earth's atmosphere has only three-hundredths of 1 percent carbon dioxide but even
that small quantity is enough
to make it difficult for infrared to get through the atmosphere. The infrared leaks out so slowly that a
considerable quantity accumulates and heats up the air and surface of the planet. The temperature of the Earth
is higher than it would otherwise be, thanks to the small quantity of carbon dioxide in the atmosphere. (Water
vapour also has this effect.)
The same thing happens in a greenhouse. The glass of the greenhouse lets sunlight in but doesn't let
infrared radiation out. For that reason, the temperature inside the greenhouse stays warm on sunny days even
in cold weather. The action of carbon dioxide and water vapour is therefore referred to as the "greenhouse
effect."
The atmosphere of Venus is far richer in carbon dioxide than our own atmosphere. Not only does
Venus get more heat from the sun than we do because it is closer to the sun, but the heat is trapped to a much
greater extent. This is the most popular explanation for the unusually high temperature of Venus.
 
It is possible, to be sure, that some microwaves sent out by a planet may not be produced just by its
heat. There may be other causes.
This came up as a strong possibility in 1955. In that year, two astronomers, Kenneth Linn Franklin and
Bernard F. Burke, at the Carnegie Institution in Washington, were measuring radio waves from the heavens.
They received strange interference at one point and wondered what it might be. It could just be static; perhaps
some faulty electrical device was sparking somewhere in the vicinity.
However, they kept getting the interference night after night and it seemed to be coming from some
particular place in the heavens; some place that was moving from night to night in a particular way. They
studied the sky to see if something were in that place that might be moving in just that way, and they found the
planet Jupiter in that place and moving in that way.
There was no mistake. Jupiter was sending out strong bursts of microwaves. Going back through the
records, they found that strong bursts had been reported from the direction of Jupiter in 1950 and 1951, but no
one had followed them up.
When a planet sends out radiation, it sends it out over a broad band of different wavelengths. In
receiving the microwaves from Jupiter, then, one could study first one part of the band and then another.
Astronomers could, for instance, study those microwaves that were one or two inches long. When this
was done, it was found that the quantity of microwaves received was about what one would expect of a body at
a temperature of, say, -200° F.
This was the temperature of Jupiter judging from infrared radiation, and about the temperature one
would expect for a planet as far from the sun as Jupiter was.
So far, so good, but what about the microwaves with longer wavelengths? There the quantity rose
unexpectedly. An object with a temperature of -200° F. couldn't possibly radiate as much long-wave
microwave radiation as Jupiter did, if temperature were the only cause of the radiation.
Jupiter's radiation of four-inch microwaves was what would be expected of a body at a temperature of
700° F. or so. Its radiation of twelve-inch microwaves would have required a temperature of nearly 10,000° F.,
the temperature of the sun's surface. The radiation of twenty-seven-inch microwaves would have required
90,000° F., hotter than the surface of the hottest stars we can see.
This is quite impossible. Jupiter can't be that hot. It must be sending out long microwaves for other
reasons.
One possible cause is related to the fact that Jupiter behaves like a strong magnet. Our own Earth
behaves like a magnet, which is why the compass needle always points north, but Jupiter is apparently a much
stronger one.
Electrons and other particles streaming out of the sun are trapped in Jupiter's magnetic field and are
made to move in rapid spirals high above Jupiter's surface. Such spiraling particles would send out floods of
microwaves.
In some wavelengths, though, the microwaves come off in unsteady bursts. Are they produced by
gigantic thunderstorms in Jupiter's vast atmosphere, which is much thicker, deeper, and larger than ours? Are
there lightning bolts a billion times as strong as those we witness on our own planet, each sending out a crackle
of microwaves?
Then, too, as Jupiter rotates about its axis, the quantity of microwaves rises and falls regularly. There
seem to be certain places on the planet that are particularly rich sources. What these might be nobody yet
knows.
These bursts of microwaves also seem to be stronger than usual whenever Jupiter's innermost large
satellite, Io, is in particular positions in its orbit around Jupiter. Why that should be, no one knows.
Someday we will find answers and when we do, then through microwaves we will find out more about
Jupiter than would have seemed possible just a couple of decades ago.
But all that followed from Jansky's discovery of radio waves from the sky does not exhaust the new
studies of the solar system.
Even more dramatic is the other breakthrough I mentioned - the flight of the rocket in 1926. This I will
now turn to in the book's last chapter.
From: Twentieth Century Discovery by Isaac Asimov
5 - Up We Go: Space Travel


As long as we can investigate the planets only from the surface of the Earth, we are limited in what we
can find out. No matter how we analyze the light and the radio waves that reach us, there must be so much we
miss.
If only we could get closer. If only we could get away from our Earth-prison.
Actually, such a dream doesn't date only from the time of modern astronomy. Men have always longed
to free themselves from being bound to the Earth's surface. This is not just to get a better view of the heavens; it
is to gain freedom. Surely almost every child at one time or another, watching a bird fly, has wished that he,
too, had wings and could swoop through the air.
A famous Greek myth tells of a man who flew. The man was Daedalus, a clever inventor of legend, who
was imprisoned on a small island near Crete. He had no boat, so in order to escape from the island he fashioned
wings.
He constructed a light framework and stuck feathers to it with wax. By flapping these wings, he could
rise in the air and fly. He made another pair for his son, Icarus, and together they flew away.
Daedalus escaped to Sicily. Icarus, however, in the joy of flying, soared too high and the heat of the sun
melted the wax that held the feathers of his wings. He fell to his death.
Of course, wings alone, no matter how feathered and birdlike, can't make you fly. What counts are the
muscles that flap them fast enough and manoeuvre them properly, so as to use the air as a cushion. Human
muscles are simply not strong enough to raise the weight of the human body into the air simply by flapping
wings.
When man finally did lift off the surface of the Earth, it was not by flapping but by floating. In 1783, two
French brothers, Joseph Michel Montgolfier and Jacques Etienne Montgolfier, filled a large linen bag with hot
air. Hot air is lighter than the same quantity of cold air (that is, hot air is less dense), so it floats on cold air as
wood floats on water. The hot air rose, carrying the bag with it, and drifted for a mile and a half.
Soon larger bags were filled with hydrogen, which is far less dense than hot air. Such bags, or "balloons,"
could not only lift themselves but also gondolas carrying human beings.
There was a ballooning craze in the first part of the nineteenth century. For the first time men rose miles
high into the air.
Of course, such balloons were at the mercy of the wind. To make it possible for a balloon to go in some
particular direction, even against the wind, a motor and a propeller would have to be placed on board. This
was first done successfully by a German inventor, Count Ferdinand von Zeppelin, in 1900.
Such "dirigible balloons" eventually carried hundreds of people over wide oceans, but they were
terribly fragile. Storms destroyed them. The future of air travel lay elsewhere.
After all, must things be lighter than air to be lifted by it? Leaves and pieces of paper are denser than air;
left still, they will not float. A brisk wind will, however, set them whirling through the air. If a heavier-than-air
object has flat surfaces and if it moves fast enough, those flat surfaces will ride the air and lift the object high.
Towards the end of the nineteenth century there was a glider craze. Light objects, with broad, flat
wings, could ride the wind like kites and could carry men with them.
But gliders, like the original balloons, were at the mercy of the wind. Could one place an engine upon
them? In 1903, the American brothers, Wilbur Wright and Orville Wright, placed a motor and propeller on a
glider of their own design. The propeller pulled the glider through the air quickly enough to raise it into the air
and allow it to fly without wind or even against wind. That first power-glider remained in flight for almost a
minute.
Thus, the third year of the twentieth century saw the construction of the first "heavier-than-air" flying
machine; or, as we call it now, "aeroplane."
Aeroplanes have improved and developed until now they are capable of carrying a hundred or more
people in luxurious surroundings for thousands of miles at speeds of many hundreds of miles an hour.
Balloons and aeroplanes both float on air. The difference is that balloons will float even if motionless,
while aeroplanes must travel with great speed in order to ride on moving currents of air.
Neither balloons nor aeroplanes could rise off the ground if there were no air.
The air gets thinner as one moves higher above the surface of the Earth. Eventually, it gets so thin that
neither balloons nor planes will get enough support to move higher. Twenty miles above the Earth's surface
represents a reasonable limit.
Even a twenty-mile rise can be very useful to astronomers. At that height, something like 99 percent of
the atmosphere is below the balloon or plane. The trace of air left above can scarcely obscure the heavens in any
way, and this is important.
For instance, to reach us here at the low-lying surface of the Earth, the sun's radiation must travel
through the twenty miles of thick atmosphere that would lie under a high-flying balloon. The visible light
reaches us scarcely diminished, but ultraviolet light and infrared light are mostly absorbed and can't be studied.
If sunlight were observed from a height of twenty miles, the ultraviolet and infrared could be studied as
carefully as we have studied visible light in the past.
For this reason, photographs have been made of the sun from the gondolas of large balloons, and the
sunlight has been carefully analyzed from that height.
As another example, the light reflected to us by Venus shows certain regions of absorption which
indicate that light has passed through layers of water vapour molecules on its way to our eyes. Does that mean
there is water vapour in Venus's atmosphere and that its clouds are made up of water droplets or ice particles?
Or is it just the water vapour in our own atmosphere?
If the light from Venus were studied from a high balloon, there would be no problem. The balloon
would be above the water vapour content of Earth's atmosphere. Any sign of water vapour in the light
absorption would have to be caused by water in Venus's atmosphere.
In 1959, light from Venus was studied by an American astronomer, John Strong, from a high-flying
balloon. He did indeed detect small quantities of water vapour but, unfortunately, that did not end the
problem. Similar studies in high-flying aeroplanes in 1967 have failed to detect water, so there is still a dispute
as to whether Venus's atmosphere contains water vapour or not.
But planes and balloons don't represent complete freedom. They lift man from the surface of the Earth
but not more than twenty miles high. Man is still a prisoner of the atmosphere.
Is there any way of rising beyond the atmosphere? There might be if one weren't forced to depend on
floating. There must be some way of lifting an object that could work in a vacuum as well as in air.
One way would be to shoot an object upwards out of a giant cannon. Cannonballs may be made to go
high in the air this way. The faster they are sent shooting out of the muzzle, the higher they go.
As they go higher and higher, Earth's gravitational force grows slightly weaker so that they go a little
higher than one might expect. If they are sent up fast enough, by the time they lose half their speed they are up
where Earth's gravity is only half its strength. Though the objects continue to lose speed, so does Earth's gravity
continue to lose strength. If the cannonball goes fast enough, Earth's gravity can never bring it to a halt, let
alone cause it to start falling back to Earth.
An object which is shot upwards at such a velocity that it never returns is said to have been fired at
"escape velocity." For Earth, escape velocity is 7 miles per second, or 25,200 miles an hour. If a large hollow
object with people inside could be fired upward at 7 miles per second (or more), it would rise and rise and
continue to rise. If it were aimed correctly, it would rise to the moon.
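
Escape velocity follows from Newton's law of gravitation: it is the square root of twice the gravitational constant times the Earth's mass, divided by the Earth's radius. A quick check with standard values:

    # Escape velocity from the Earth's surface: v = sqrt(2*G*M/R).
    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of the Earth, kilograms
    R_EARTH = 6.371e6    # radius of the Earth, metres

    v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)
    print(v_escape)               # about 11,200 metres per second
    print(v_escape / 1609.34)     # about 7 miles per second, as stated
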
In 1865, the French science fiction writer Jules Verne wrote From the Earth to the Moon, a novel describing
how a group of men are hurled to the moon in this fashion.
Unfortunately, the method, while correct in theory, is not practical. Not only would it require an
enormous cannon that is not likely ever to be built, but if a spaceship were fired out of a cannon in this way, the
sudden increase of speed (or "acceleration") would kill every person on board in a moment.
Another method, though, is to make use of the "law of action and reaction," which was first announced
by Isaac Newton in 1687. This law explains that if a portion of a body is thrown off in one direction, the rest of
the body must move in the opposite direction.
Imagine yourself sitting on a smooth aluminium platter resting on a sheet of smooth ice. With you are a
bunch of heavy steel balls. If you threw one of the balls away with all your might, the platter carrying you and
the rest of the balls would start sliding in the opposite direction. Throw a second ball after the first and the
platter will move more quickly. Keep it up, and if you have enough balls you will end by skimming along the
ice quite rapidly.
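
What is at work here is conservation of momentum: each ball thrown carries momentum backwards, and the platter must gain the same amount forwards. A sketch of the bookkeeping, with made-up masses and speeds:

    # The platter thought experiment as momentum conservation. Each ball
    # leaves at speed u relative to the thrower, so the platter's speed
    # rises by (ball mass / total mass before the throw) * u per throw.
    m_rider_and_platter = 90.0   # kilograms (assumed)
    m_ball = 5.0                 # kilograms per ball (assumed)
    u = 10.0                     # throw speed relative to thrower, m/s (assumed)

    velocity = 0.0
    mass = m_rider_and_platter + 6 * m_ball   # six balls aboard at the start
    for throw in range(1, 7):
        velocity += (m_ball / mass) * u       # momentum carried off by the ball
        mass -= m_ball                        # that ball is gone
        print(f"after throw {throw}: {velocity:.2f} m/s")
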
In 1891, an eccentric German inventor, Hermann Ganswindt, suggested a trip beyond the atmosphere
by using this method. (He was the first man to try to design a spaceship along scientific principles.) Instead of
throwing steel balls by hand, he imagined a ship that would fire them out to the rear by dynamite explosions.
If enough steel balls were hurled backwards with enough speed and in sufficient quantity, the ship
would reach escape velocity. It would then travel away from the Earth indefinitely. The important difference
between this and a cannon is that the speed would be built up slowly over a long period in Ganswindt's ship,
whereas it would build up all at once before the ship left the muzzle in Verne's cannon. Acceleration would not
be murderous in Ganswindt's ship.
But why fire out heavy objects? If a ship fired out a jet of gas from the rear, that could do the job, too,
provided the gas were fired out quickly enough.
The advantage of gas over solids is that gas can be made to shoot out in a continuous stream. The ship
would gain speed smoothly instead of in a series of jerks and it would do so more efficiently.
We can actually watch a gas jet do the work of moving an object. Suppose you fill a toy balloon with air,
hold it up and let the air escape. The air, rushing out in one direction, will cause the balloon to move in the
other.

For such action and reaction to take place, air does not have to surround the moving object. In fact, air
gets in the way. When escaping air moves a balloon, the balloon's motion is slowed by the resistance of the air
all about it. The balloon is pushed this way and that by air currents. Action and reaction would work best in a
vacuum where nothing would interfere with motion.
Actually, the spherical shape of a balloon is bad for rapid motion. To allow for rapid motion through air
with least interference, you need an object that is narrow and streamlined. Then, too, you want as much gas in
it as possible so that it will come out with great speed and in large quantities. One way of packing an object
with much gas is to pack it with a solid that can be easily and quickly turned into a gas.
Suppose you take a narrow cylinder, coming to a pointed end on one side and open at the other. Fill it
with gunpowder, close the open end lightly, and push a fuse through into the gunpowder. Once the fuse is lit,
the gunpowder will quickly catch fire and form large quantities of gas. A hot jet of these gases will push out of
this "rocket," which will move rapidly in the opposite direction. Small rockets shot into the air in this way can
be very impressive.
Large rockets of this same sort might easily be used as a war weapon. By sending burning rockets into a
city, buildings could be set afire, ammunition could start exploding, and people could be panicked. For a while
in the nineteenth century such rockets were indeed used in warfare. They were used in the War of 1812
between the United States and Great Britain. The Star-Spangled Banner, our national anthem, written during that
war, speaks of "the rockets' red glare."
Rockets faded out as a war weapon because cannonballs could be fired more accurately from cannon
and would do more damage.
However, rockets remain more practical for reaching great heights than cannon. A cannon must fire off
all its gunpowder before the cannonball comes out of the muzzle. After that the cannonball can only slow
down. A rocket rises up while the gunpowder is still burning, and it carries the gunpowder upwards along
with itself. As it rises, it therefore goes faster and faster as more and more of the gunpowder burns.
In order for the ordinary rocket to work, however, it must be surrounded by air while the gunpowder is
burning, for the gunpowder won't burn in the absence of air. This means that such a rocket can accelerate only
inside the atmosphere.
Acceleration inside the atmosphere is important for many purposes, to be sure. The rocket principle can
be applied to aeroplanes very neatly.
At first, aeroplanes were sped through the air by means of a propeller. The propeller was the weak
point of the plane. Its tips had to move through the air much more quickly than the plane itself did. There was a
limit to how quickly propellers could be whirled and that helped set a limit to how quickly planes could fly.
Suppose, though, that you fed kerosene into a rocket arrangement, had it burn, and sent the gases out
through the rear. The plane would then be driven forwards without a propeller. At high speeds, such a "jet
plane" is much more efficient than a propeller plane. Indeed, a jet plane can easily reach speeds a propeller
plane could never achieve.
The jet plane was developed during World War II as a war weapon. In 1952, it made its first appearance
in commercial aviation and travel by jet is now very common. Jet planes can easily go faster than the speed of
sound, which is 750 miles an hour, or 0.2 miles a second.
If a jet plane built up enough speed and reached escape velocity, it could leave the atmosphere
altogether and enter space. It would need no further jet blasts to continue onward indefinitely.
This is not practical, though. The jet that drives the plane is kept going by fuel burning in air drawn in
from the surrounding atmosphere. This means that the jet only works where the atmosphere is fairly dense. All
the acceleration must take place in this dense atmosphere, where air resistance is so high it would waste fuel
and would heat up the ship dangerously.
It would be much better if the jet plane could reach the upper atmosphere at low speeds, avoiding too
much resistance and heating. Then up there, where the atmosphere is too thin to be any trouble, the real job of
acceleration could take place. Unfortunately, up there the atmosphere is too thin to keep the kerosene burning.
A spaceship must, therefore, carry its own supply of air (or, better, oxygen) along with the fuel. Then,
once the spaceship got into the upper atmosphere, it could mix its stored fuel with its stored oxygen, burn the
mixture, and accelerate to escape velocity without trouble.
A self-educated Russian schoolteacher, Konstantin Eduardovich Tsiolkovsky, was the first to make this
clear. In 1898, he wrote a long article in which he described a spaceship that would be powered by a rocket
exhaust. It was published, finally, in 1903, the same year in which the aeroplane was invented. It was the first
description of the kind of spaceship that eventually came into use.
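
Tsiolkovsky's work also contained the equation that still governs all rocketry: the speed gained equals the exhaust speed times the natural logarithm of the ratio of the full mass to the empty mass. A sketch with assumed round numbers:

    # Tsiolkovsky's rocket equation: delta-v = v_exhaust * ln(m_full / m_empty).
    import math

    def delta_v(exhaust_speed, mass_full, mass_empty):
        return exhaust_speed * math.log(mass_full / mass_empty)

    # Assumed values: a kerosene/liquid-oxygen exhaust near 3,000 m/s and a
    # ship that is nine-tenths fuel give about 6.9 km/s -- well short of the
    # ~11.2 km/s needed to escape, which is where staging comes in later.
    print(delta_v(3000, 10_000, 1_000))
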
The real breakthrough, however, came in the United States, through the work of an American rocket
engineer, Robert Hutchings Goddard.
As a boy, he was fascinated by science fiction. In 1899, he read The War of the Worlds by Herbert George
Wells, a thrilling adventure in which Martians invade Earth and almost conquer it. With that began Goddard's
lifelong dream of penetrating outer space. By 1901, he was writing essays on the possibility of space travel.
Both Goddard and Tsiolkovsky saw that the older rockets were unsuitable. When gunpowder was
used, its burning could not easily be controlled and it did not produce a fast enough exhaust anyway. Both men
felt that what was really needed was a liquid fuel. This could be pumped into a chamber where it could be
burned. The pumping could be started or stopped, made to go fast or slow. The exhaust could thus be
controlled.
Tsiolkovsky was content merely to theorize, but Goddard went further. He began to design actual
rocket engines. In 1914, he obtained two patents for inventions to be used in such engines. In 1919, he finally
published a small book (only sixty-nine pages) on the subject.
Now he was ready to build small rocket engines and see how they worked. In 1923, he tested an engine
in which a stored supply of kerosene and a stored supply of liquid oxygen were contained. The two liquids
were pumped into the burning chamber where they were mixed and ignited. The engine worked well and the
next step, Goddard decided, was to send a liquid-fuel rocket upwards.
He was teaching at Clark University in Worcester at this time, and he performed his experiments on an
aunt's farm in Auburn, Massachusetts.
There, on March 16, 1926, he made ready to fire his rocket. His wife took a picture of him standing next
to it. It was a cold day and there was snow on the ground. Goddard, wearing overcoat and boots, was standing
next to what seemed a child's jungle gym. At the top of the structure was a small rocket, four feet long and six
inches thick.
There were no reporters present and no one was interested in what he was doing. That was too bad, for
what was about to happen was one of the news stories of the century, if only the world had known. The first
liquid-fuel rocket was about to rise into the air.
Goddard ignited it and the rocket rose 184 feet into the air, reaching a speed of 60 miles an hour. This
wasn't much, but it showed that Goddard's rocket engine worked. It was only necessary to build improved
rockets on a larger scale.
Goddard managed to get a few thousand dollars from the Smithsonian Institution and continued his
work. In July 1929, he sent up a larger rocket, which went faster and higher than the first. More important, it
carried a barometer and a thermometer, along with a small camera to photograph their readings. This was the
first instrument-carrying rocket.
Unfortunately, Goddard now ran into trouble. News had leaked out that he was trying to reach the
moon and many people began to laugh at him. The New York Times printed an editorial telling him his science
was all wrong. (Actually, the editorial writer was quite foolish, for he didn't even understand the law of action
and reaction, thinking that air was necessary for its working-yet he dared lecture an expert like Goddard.)
When one of Goddard's rockets made a loud noise while being launched, policemen and firemen were
called and he was ordered to conduct no more rocket experiments in Massachusetts.
But Charles Augustus Lindbergh, the famous aviator, had heard of Goddard's experiments and he used
his influence to get the rocket engineer some financial help. Goddard built a new rocket-launching site in New
Mexico, where he could experiment without disturbing anybody.
Here he built larger rockets and developed many of the ideas now used in all rockets. He showed how
to build a combustion chamber of the proper shape and how to keep its walls cool. He showed how the rocket
could be steered and how it could be kept on a straight course.
He also worked out and patented the notion of multi-stage rockets. A two-stage rocket, for instance,
consists of a small rocket built on a large one. The large one burns its fuel and carries itself and the small rocket
up into the upper atmosphere. Then the large rocket, empty of fuel, breaks loose and drops away, while the
small rocket goes into action.
High up where the air is too thin to interfere, the small rocket's fuel blasts off. It is already moving
upwards at considerable speed thanks to the action of the large rocket, and now its own engine makes it go
higher still.
The small rocket moves a lot higher and faster than the whole rocket would have moved if it were all
one piece.
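The arithmetic behind staging can be made concrete. What follows is a minimal sketch in Python (the masses and the exhaust speed are invented for illustration; the formula is the rocket equation Tsiolkovsky derived, not anything quoted from this passage):

    import math

    V_EXHAUST = 2500.0  # assumed exhaust speed, metres per second

    def delta_v(m_full, m_empty):
        # Tsiolkovsky's rocket equation: speed gained burning down from m_full to m_empty
        return V_EXHAUST * math.log(m_full / m_empty)

    # Single-stage rocket: 100 tons at launch, 90 tons of it fuel.
    single = delta_v(100.0, 10.0)

    # Two-stage rocket, same 100-ton launch weight:
    # the 80-ton first stage burns 72 tons of fuel, then its 8-ton shell drops away;
    # the 20-ton second stage burns 18 tons of fuel, leaving 2 tons.
    two_stage = delta_v(100.0, 28.0) + delta_v(20.0, 2.0)

    print(f"single stage: {single:.0f} m/s")    # about 5,800 m/s
    print(f"two stages:   {two_stage:.0f} m/s")  # about 8,900 m/s

With the same 100 tons on the launch pad, the two-stage arrangement ends up travelling roughly half again as fast, because the dead weight of the first stage is thrown away instead of being carried all the way up.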
In the early 1930s, Goddard finally fired rockets that reached speeds faster than sound and rose a mile
and a half into the air. The American government was never really interested in this work while Goddard was
alive, but years after his death, it had to pay a million dollars for the use of two hundred of his patents. Work
on rockets would have come to a dead halt otherwise.
 
Interest in rocket experiments was particularly great in Germany. In 1923, a book on space travel was
published in that country by Hermann Oberth, who was born in a region that is now part of Rumania. By 1927,
a "Society for Space Travel" had been founded in Germany. Its young and enthusiastic members began to plan
rocket experiments. Similar societies were formed in other countries but the German society was by far the
most successful.
Among the members of the German society were two young men, Willy Ley and Wernher von Braun,
each destined for great fame. They threw themselves into rocket-building and in the next couple of years some
eighty-five rockets were fired. One reached an altitude of nearly a mile.
Goddard was doing even better, but he was a lone wolf, ignored by the United States. The German
rocket engineers were soon receiving government support. When Adolf Hitler came to power in Germany in
1933, he began to think of the new rockets as a possible war weapon.
In 1936, a secret experimental station was built at Peenemunde, on the Baltic seacoast of Germany.
There, by 1938, rockets capable of flying eleven miles were built. Such rockets might be expensive just at first,
but they flew by themselves and required no human pilots. They could be aimed quite accurately and they
went so quickly they couldn't even be detected, let alone stopped.
The first rocket-driven "missile" was fired in 1942 and by 1944, Wernher von Braun's group put these
missiles into action. They were the famous V-2 rockets. (The V stood for Vergeltung, meaning "vengeance.")
In all, 4,300 V-2 rockets were fired during World War II and of these, 1,230 hit London. Von Braun's
missiles killed 2,511 Englishmen and seriously wounded 5,869 others. Luckily for the world, the V-2 came too
late. Hitler had lost the war and the V-2 couldn't reverse that decision.
Goddard lived just long enough to see this awful triumph of the rocket. He died on August 10, 1945.
One thing the V-2 rocket did was to rouse the interest of Germany's adversaries, the United States and
the Soviet Union. Immediately after the war, both made efforts to capture Germany's rocket experts. The United
States got most of them, including Wernher von Braun. (Willy Ley had left Germany for the United States long
before-as soon as Hitler came to power.)
Both nations then worked hard to build missiles. By the 1950s the old V-2 was a piddling affair
compared to the monsters that were coming into existence. Both the Soviet Union and the United States
developed "Inter-Continental Ballistic Missiles" (ICBMs). These could travel for thousands of miles and land
accurately on target.
Both nations could strike any place on Earth, now, with missiles based on their own territory. These
missiles could carry hydrogen bombs. A new world war would be more terrible than had ever been imagined.
In the space of half an hour, hundreds of millions of people could die, and civilization might be destroyed.
But rockets were not used only for war weapons. Some were sent up into the heavens in order that new
knowledge might be brought back. Soon after the war, captured V-2 missiles were used by the United States to
carry instruments into the upper atmosphere. One reached a height of 114 miles, five times as high as any plane
or balloon could reach.
In 1949, the United States put a small American rocket on top of a V-2. When the V-2 had reached its
maximum height, the small rocket took off and reached a height of 240 miles.
Another way of accomplishing the same purpose was to send a balloon as high into the atmosphere as
possible and then to launch a small rocket from it. The air would be too thin to interfere and such a "rockoon"
combination could reach great heights with very little expense. A leader in this work was the American
physicist James Alfred Van Allen.
Such high-flying rockets brought back useful information about the nature of the upper atmosphere.
They described the temperature, density, winds, gases, and ions of the upper atmosphere and recorded how all
of these changed from time to time.
But such rockets only stayed in the upper air a short period of time and could only gather information
concerning the portion immediately about them. What was wanted was a rocket that could stay up for a long time.
Suppose a rocket were sent up at a velocity less than escape velocity, and was steered so as to travel parallel to
the surface of the Earth. Since it would be travelling at less than escape velocity, it would fall towards the Earth.
The surface of the Earth, however, is curved. The surface curves away from the rocket as the rocket falls while
moving forwards.
If the speed of the rocket is just right, then it will travel so far parallel to the Earth's surface while it is
falling a mile that the Earth's surface will have curved away one mile. In that case the rocket will never actually
fall to Earth, but will circle it forever. The rocket will be "in orbit" about the Earth; it will become a "man-made
satellite" of our planet.
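How fast is "just right"? Here is a back-of-the-envelope sketch in Python, under the simplifying assumption (mine, not the passage's) that the orbit skims just above the atmosphere, where gravity is still close to its surface value:

    import math

    g = 9.81        # surface gravity, m/s^2
    R = 6_371_000   # radius of the Earth, metres

    # For a circular orbit, gravity supplies the centripetal pull: g = v**2 / R
    v = math.sqrt(g * R)
    period_min = 2 * math.pi * R / v / 60   # time for one circuit, in minutes

    print(f"orbital speed:  {v / 1000:.1f} km/s (about {v * 2.23694:,.0f} mph)")
    print(f"one circuit in: {period_min:.0f} minutes")   # about 84 minutes

The answer, about five miles a second, is less than escape velocity but still far beyond anything an aeroplane could manage, and the roughly ninety-minute circuit agrees with the orbits of real low-flying satellites.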
If the speed and direction of the rocket are just right, it will go about the Earth in a perfect circle.
Otherwise it will circle the Earth in an ellipse. This ellipse can be quite oval, sort of long and flattened. The
satellite could come quite close to the surface of the Earth on one side of its orbit and be quite far away at the
other.
Although, in theory, such a satellite should stay in space forever, part or all of its orbit might be within
100 or 150 miles of the Earth's surface. In that case, the very thin air of the upper atmosphere will produce
enough resistance to consume the satellite's energy of motion very slowly. The satellite will spiral lower and
lower and eventually penetrate the thick atmosphere and burn up.
Rocket experts began thinking of possible satellites in connection with a huge international study of our
planet planned for 1957 and 1958 (the "International Geophysical Year" or IGY). Perhaps the launching of a
satellite could be made part of the IGY. On July 29, 1955, the American government officially announced the
attempt would be made.
The Soviet Union then announced that it would also make such an attempt, but most Americans paid
no attention. Those that did thought the Soviets were just playing "copy-cat" and that only the United States
had the ability to perform such a difficult rocket feat.
The Soviet Union therefore surprised the whole world (and particularly the United States) when, on
October 4, 1957, they launched the first successful satellite. This was meant to celebrate the hundredth
anniversary of the birth of Tsiolkovsky (which had taken place on September 17). They called it "Sputnik,"
meaning "satellite," a name that Tsiolkovsky himself had used to describe such man-made objects in orbit.
The United States was soon launching satellites of its own. On January 31, 1958, the first successful
American satellite, Explorer I, was launched. In the years that followed, hundreds of satellites were launched
by each nation.
These satellites turned out to have a great many practical uses. For instance, some were designed to take
many thousands of photographs of the Earth. Such photographs would show the cloud pattern over large
areas. Scientists would learn more about the way in which air circulated and clouds formed. They could watch
the birth and development of hurricanes. They could predict weather more accurately.
The first satellite intended for such a weather-watch was launched on April 1, 1960. It was called TIROS
(standing for "Television and Infra-Red Observation Satellite") and it proved to be a great success. Soon, the
sight of the Earth as seen from hundreds of miles in the air grew to be common.
Eight such satellites were launched altogether and then a more advanced type of satellite, "Nimbus,"
was launched on August 28, 1964.
Satellites can also be used for communications. Ordinary radio waves bounce off the charged particles
in the ionosphere. That makes it possible to send radio messages around the world. Short radio waves, like
those used in television, go right through the ionosphere. However, if they could be made to strike a satellite
outside the Earth's atmosphere, they could be reflected back to another part of the Earth.
This was first pointed out in 1945 by Arthur C. Clarke, a young Englishman who was to become one of
the best science fiction writers in the world. Another science fiction enthusiast, the American engineer John
Robinson Pierce, who worked at the Bell Telephone Laboratories, endeavored to bring this idea to reality.
On August 12, 1960, Echo I, made possible by work at Bell Telephone, was launched. It carried a
collapsed plastic balloon which was inflated, once it was in space, into a huge sphere that was as tall as a ten-
story building. Radio waves striking it were reflected, and messages could be sent from continent to continent
in this way.
Messages reflected from Echo I were very weak by the time they were received, of course. On July 10,
1962, Telstar I was launched. It did more than receive messages; it amplified them once received and made
them stronger. Then it sent the strengthened signals back to Earth. This meant that American television sets
could now easily receive pictures live from Europe and vice versa.
These early "communications satellites" were close to the Earth and travelled rapidly around it. They
could only be used to transmit messages across the Atlantic when they happened to be in the right spot above
the Atlantic.
If a satellite is sent higher and higher, it takes longer and longer to travel about the Earth. If it is about
22,300 miles above the Earth's surface, it takes twenty-four hours to circle the Earth, or just the time it takes the
planet to turn on its axis. The satellite moves in time with the planet and is always over a particular spot on the
surface. Clarke had suggested satellites of this kind.
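The 22,300-mile figure can be checked in a few lines. Here is a hedged sketch of the calculation (mine, not Clarke's), using Kepler's third law for a circular orbit:

    import math

    GM = 3.986e14          # G times the Earth's mass, m^3/s^2
    T = 86_164.0           # one true rotation of the Earth (sidereal day), seconds
    R_EARTH = 6_371_000.0  # mean radius of the Earth, metres

    # Kepler's third law for a circular orbit: r**3 = GM * T**2 / (4 * pi**2)
    r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)

    print(f"height above the surface: {(r - R_EARTH) / 1609.344:,.0f} miles")
    # prints roughly 22,240 miles -- the passage's "about 22,300"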
This was achieved with full success on August 19, 1964, when Syncom III was launched. It was placed
over the Pacific Ocean just in time to make it possible to broadcast the Olympic Games, live, from Tokyo to the
United States.
Satellites can also be used to help determine the shape of the Earth. The Earth is not a perfect sphere.
Because it turns, a centrifugal effect tends to lift its matter upwards against gravity. (If you attach a heavy object
firmly to a cord and whirl it rapidly round your head, you will feel it pull away from your hand.)
The Earth turns most rapidly in the equatorial regions. Its matter lifts up highest there. The Earth has an
"equatorial bulge," therefore, that is thirteen miles high at the equator.
On March 17, 1958, Vanguard I was launched. It was a tiny thing, only the second satellite the United
States had placed in orbit, and all it carried was a small radio sending out a steady signal. Its motion could be
followed by that signal, and that was sufficient to be useful.
Vanguard I had an orbit that was at an angle to the equator. In part of its orbit it was north of the
equatorial bulge and in the other part it was south. The bulge had a special gravitational effect on the tiny
satellite and altered its orbit in a way that scientists could easily calculate.
Scientists expected that the bulge would have the same effect on the satellite whether it was to the north
or the south. That turned out not to be so. The part of the bulge south of the equator turned out to be a little
higher than the part north.
Indeed, by studying the orbit of Vanguard I and later satellites very carefully, scientists could determine
all kinds of bulges and hollows in the Earth's surface, even though these were only a few dozen feet high or
low.
By knowing the Earth's shape more exactly than ever before, it became possible to make maps with
greater accuracy. It turned out that some islands were a mile or more away from where the old maps had
showed them to be. For the first time, the distance between London and New York could be worked out to
within a few feet.
What's more, ships could locate their own positions on the ocean with new accuracy, by observing
satellites.
Nor was it only knowledge of the Earth itself that was the product of satellite work. Those portions of
space through which the satellites travelled could be studied in detail for the first time. Ordinary telescopes
could see nothing there, but did that mean that nothing was really there? What about cosmic ray particles?
Explorer I, America's first satellite, carried special devices to record cosmic ray and other electrically
charged particles. Its orbit was elliptical enough to bring it as close as 217 miles to the Earth's surface in one part of
its orbit and take it out to 1,155 miles in the opposite part. It could record charged particles at all heights
between.
Up to a height of 500 miles, the number of particles recorded per minute was about as expected, and
increased slowly as the height increased. Above 500 miles, however, the number of detected particles dropped
suddenly, sometimes all the way to zero.
Scientists wondered if it might not be that the instrument was out of order. But then a later satellite sent
back the same kind of records.
James A. Van Allen, in charge of these experiments, thought the trouble might be that there were so
many charged particles that they were "blinding" the instruments. On July 26, 1958, Explorer IV was launched.
Its instruments were designed to handle very large numbers of particles, and now things were different.
Around the Earth, there proved to be regions that were enormously rich in charged particles. These
were sent out by the sun (the "Solar wind") and were trapped by the Earth's magnetic field. These particle-rich
regions were called "Van Allen belts."
The belts came closest to the Earth near the magnetic poles in the polar regions. There the charged
particles leaked into the atmosphere and produced the beautiful shifting colours of the aurora (or "Northern
Lights").
At first, it was thought these belts were perfectly even, all around the Earth. Further satellite studies
showed that the solar wind struck the Van Allen belts and flattened them on the sun-side. The solar wind then
veered to either side, circled the Earth and passed on beyond. The Van Allen belts on the night-side of the
planet were drawn out almost as though they were a comet tail.
The lopsided area inside the solar wind and circling the Earth is now called the "magnetosphere." No one
suspected its existence until the age of satellites had opened.
But satellites need not be restricted to the neighbourhood of the Earth. If they are made to go at
velocities that are a little faster they can reach the moon. They can escape from Earth altogether and take up
orbits about the sun as "manmade planets."
The first successful "Lunar probe," that is, the first satellite to pass near the moon, was sent up by the
Soviet Union on January 2, 1959. It was "Lunik I." It was the first man-made object to take up an orbit about the
sun, and within two months, the United States had duplicated the feat.
On September 12, 1959, the Soviets sent up Lunik II and it was aimed so accurately that it hit the moon.
For the first time in history a man-made object rested on the surface of another world.
Then, a month later, the Soviet satellite Lunik III slipped beyond the moon and pointed a television
camera at the side we never see from Earth. (The moon always faces the same side toward us.)
Lunik III changed the photographs into radio signals that could be transmitted to Earth and changed
back into photographs. They were fuzzy and of poor quality, but they showed something interesting.
The side of the moon we see is covered with craters but there are also large flat "maria" (or "seas") which
are dark in colour and have hardly any craters. It is the maria that make the dim splotches on the face of the
moon that cause some people to imagine they see the "man in the moon" there.
On the other side of the moon, though, as revealed by Lunik III, there are hardly any maria and no
sizable ones at all. A number of satellites since Lunik III, both American and Russian, have made similar
photographs of far better quality, and this has been borne out. There are no maria to speak of on the other side of the
moon.
Astronomers don't know why.
Lunar probes also reported on conditions in the neighbourhood of the moon. It was found that the
moon did not behave like a magnet and did not have any Van Allen belts of its own.
This was not surprising, really. In order for a heavenly object to behave like a magnet and collect belts
of charged particles, it should have a core of melted iron and it should turn rapidly. The turning sets up swirls
of liquid in the melted iron and these swirls are what cause the planet to act like a magnet.
The moon is too small to have a melted iron core, and even if it had one, it rotates on its axis too slowly
(once in twenty-seven days) to set up important swirls.
Such observations could be made in greater detail by satellites sent into the neighbourhood of the
moon, and then manoeuvred (by tiny bursts of rocket fuel set off by radio message from Earth) into orbit about
the moon. This is a most delicate feat but by 1966, both the Soviet Union and the United States had worked out
their rocket techniques so well that they could do it.
The Soviet Union's "Luna 10" took up an orbit about the moon after having been launched on March 31,
1966. The United States satellite, "Lunar Orbiter 1" was launched on August 10, 1966, and was the first of
several like it.
The Lunar Orbiters took pictures of various portions of the moon's surface. Some of them were taken at an
angle so that the rolling, hilly nature of the surface could be seen clearly. Such photographs looked just like those
of a desolate desert on Earth. It was hard to believe they were taken of another world, a quarter of a million miles out
in space.
Even more startling, perhaps, were pictures taken, past the curve of the moon's surface, of the Earth.
There was our own planet, seen as a thick "crescent Earth," from a distance of a quarter of a million miles.
From the orbits of the satellites circling the moon, astronomers were able to figure out the exact location
of the centre of the moon. Combining that with studies of radar echoes, as described in the previous chapter,
they found they could calculate the diameter of the moon down to a fraction of a mile. Pictures of the moon
from probes and orbiters that flew by and around the body might be startling but they were usually taken from
a considerable distance. What about really close photographs?
The United States planned a whole series of probes designed to strike the moon and take photos on
their way down. These satellites were called "Rangers." Ranger I through Ranger V were test satellites that were
not sent to the moon. Finally, on January 30, 1964, Ranger VI was launched and headed for the moon. The
aiming was very good and it hit the moon only twenty miles from target-but the television cameras failed.
A half-year later, on July 28, 1964, Ranger VII was shot into the sky and this time everything worked
perfectly. Photographs were taken, right down to the very moment of impact, and the portion of the moon in
view of the cameras was seen with greater detail than had ever before been possible.
Some astronomers had thought that the moon might be covered by a thick layer of fine dust, and they
searched the Ranger photographs for some sign of that. Most astronomers felt that no dust showed up, but the
matter wasn't settled.
What was needed was a "soft landing." Until 1966, all the probes that had reached the moon's surface
had made a "hard landing," hitting with such force that they had been destroyed. If a satellite fired rockets
downward just before landing, however, its speed of fall would be slowed up and it might then come down
gently enough to allow its instruments to keep working. This would be a soft landing.
Both the Soviet Union and the United States tried for a soft landing and both succeeded. On January 31,
1966, the Soviet probe, Luna 9, was launched and succeeded in landing softly on February 3. It took the first
pictures of the moon from its surface.
On May 30, 1966, the Americans launched Surveyor I, which landed softly on the Moon on June 2 and
which took additional photographs. These and other such successful attempts seem to have made it quite clear
by now that the moon's surface is rather like the Earth's. No signs of any dust layer have been detected. One of
the later Surveyors even dug up a shovelful of moon soil on signal from Earth and a television camera scanning
that soil showed it to be a rather ordinary soil. Another Surveyor carried through a delicate analysis of lunar soil
in 1967 and showed it to resemble earthly basalt.
What about heavenly bodies farther than the moon? The next nearest bodies of importance are Venus
and Mars, and both the Soviet Union and the United States have attempted to send out "planetary probes" in
the direction of these two bodies.
So far, the Soviet Union has been plagued with bad luck in this respect. One of the "Venus probes,"
named "Venus 3," actually landed on the planet on March 1, 1966, but the feat was a disappointing one, for the
probe's instruments had failed and no information was sent back.
A more successful Venus probe was the American "Mariner II".
It was launched on August 27, 1962, and travelled through space for four months to make its
rendezvous with Venus. The probe skimmed by within 21,000 miles of Venus on December 14, 1962. At that
time, it was thirty-five million miles from Earth but successfully returned the information it gathered. It was a
wonderful example of good aim and clever communications.
Mariner II was able to study the space in the neighbourhood of Venus. It found that Venus was not a
magnet and did not have any Van Allen belts. To be sure, Venus was large enough (almost as large as the
Earth) to have a melted iron core. However, it turned on its axis even more slowly than the moon, so it set up
no swirls in that core.
The most exciting thing that Mariner II did was to scan the surface of Venus for microwaves.
Astronomers had received microwaves from Venus in such quantity that they had decided the surface of the
planet must be exceedingly hot.
This was such a surprising fact, though, that they were eager to have the microwaves studied at close
range. Mariner II did this little job and the earlier findings were confirmed. Venus did indeed seem to be very
hot.

On October 19, 1967, an even more sophisticated American probe, Mariner 5, flew past Venus. At the
same time, a Soviet probe, Venus 4, landed on the planet and this time sent back information. Venus's high
temperature was confirmed and its atmosphere, much thicker than Earth's, seemed almost entirely carbon
dioxide.
Spectacular rocket successes were also carried through in connection with Mars.
On November 28, 1964, "Mariner IV" was launched in the direction of Mars. Mars is the more distant of
the two planets and the journey took eight months. On July 15, 1965, Mariner IV edged past Mars at a distance
of little more than 6,000 miles. The information gathered by the probe had to be relayed back to Earth over a
distance of nearly 150 million miles.
Mariner IV investigated the space near Mars in a number of ways. It reported on the concentration of
dust and particles, the strength of the solar wind, and on the magnetic nature of the planet. It quickly turned
out that Mars, like the moon, was too small to have much of a melted iron core. It was no magnet and had no
Van Allen belts.
Mariner IV was able to check on the density of the atmosphere of Mars and this turned out to be only
one-tenth of what astronomers had thought.
This was important. Astronomers had long suspected there might possibly be life on Mars. By this, they
didn't mean the kind of intelligent, canal-building life that Percival Lowell had speculated about (as described
in the previous chapter). Astronomers didn't accept that, but they thought it just barely possible that very
simple forms of plant life might exist.
The reason for considering this possibility was that Mars has a climate that does not completely
eliminate the chance of life. It is colder than Earth and the air is thinner and there is no oxygen and very little
water. Still, some very simple forms of Earth life could be made to live under conditions that were similar to
those that astronomers thought existed on Mars. If there were Martian life, it would be especially adapted to
Martian conditions, and it would get along even better than Earth life would.
Besides, there actually seemed to be signs of life on Mars. Mars had ice caps just as the Earth had,
though the Martian ice caps were much smaller. Its axis was tipped so that the northern hemisphere had spring
and summer when the southern hemisphere had autumn and winter, and vice versa, just as was true on Earth.
As seen through the telescope, Mars had reddish areas that might be desert, and dark areas that might,
just possibly, be a sign of plant life. When spring came to one of the hemispheres, the ice cap on that side would
begin to melt and the dark areas would grow darker and larger, almost as though plant life were flourishing
because water from the ice caps was soaking into the soil.
But that notion seemed less likely thanks to the unexpected thinness of the Martian atmosphere. It was
only 1/100 as dense as Earth's instead of 1/10 as had been thought, and that seemed to make the possibility of
life a poorer one.
Of course, it might have been that the Martian atmosphere was thicker ages ago and that more water
had been present then. Life would have started and might then have slowly adapted itself as conditions grew
ever harder.
Arguing against this was the most astonishing feat of Mariner IV: the photographs it took of
Mars's surface and then transmitted to Earth. Mariner II might have taken pictures of Venus but all it would
have got would have been unbroken, featureless clouds. Mars, however, had very few clouds, if any, and its
surface lay exposed.
Twenty-one photographs were taken. They were of poor quality and not at all clear but they showed
the Martian surface in far greater detail than it had ever been seen from Earth.
When the pictures were received on Earth, there was instant astonishment. It turned out that the surface
of Mars was riddled with craters, just like those on the moon. These were craters that had never been seen
through the telescope because Mars was so far away and because its atmosphere, thin as it was, blurred the fine
detail on the surface.
But there they were now. More than seventy craters were counted on the various photographs and one
of them was seventy-five miles across. Astronomers, such as Fred Whipple of Harvard, and Tombaugh, the
discoverer of Pluto, had predicted there might be craters on Mars, but few seemed to take such speculations
seriously. Now they had to.
The existence of craters makes it seem that not only is the air thin now, but it may have been very thin
through all of Mars's history. There may have been very little water, too. Only in that way could the craters
have survived. Otherwise, the action of air and water would have smoothed them down.
The chances for life on Mars looked considerably worse than they had looked before, but not all
astronomers were disheartened. It was pointed out that satellites much closer to Earth than Mariner IV had
been to Mars could see no signs of life on Earth. A still closer look is required.
A number of projects are being considered whereby a Mars probe might make a soft landing on Mars. It
would carry an instrument that would test for possible life on Mars. A sticky string might be cast out into
Martian soil, then pulled back into the craft. Perhaps some Martian bacteria or one-celled plants might stick to
the string. If the string were then placed in certain chemicals, the living cells might bring about changes in those
chemicals and information about the changes could be transmitted back to Earth.
That, however, is for the future.


More spectacular still than soft landings on the moon and probes passing by Venus and Mars is the
notion of sending men into space!
No matter how many instruments we send to the moon and how much information they gather, they
could not possibly excite the world as much as would the landing of men upon another world.
But can men survive the rocket takeoff into space? They will have to undergo strong accelerations. They
will feel as though they were being pressed down by weights of hundreds of pounds.
Then, once they are in space, with the rocket engines turned off, they will be in "free fall." They will be
falling constantly even though they never hit the Earth and they will feel no weight in consequence. They will
feel weightless all the time they are in orbit.
What's more, there is the question of radiation out in space. How dangerous are the solar wind and the
Van Allen belts? From the very beginning, the satellite program was geared to such questions both in the Soviet
Union and the United States. The second satellite sent up, the Soviet Union's Sputnik II, launched on November
3, 1957, carried a dog. The dog survived the takeoff and the weightlessness and lived until it was painlessly
poisoned. There was no way of bringing it back to Earth, however.
Later, as techniques improved, both nations sent all sorts of animals into orbit-mice, dogs, even
chimpanzees-and brought them back. They also began to train men for trips into space. In the United States,
these men were called "astronauts"; in the Soviet Union, they were called "cosmonauts."
The first step was merely to put the men into orbit about the Earth. In orbit, the men could be brought
back after only a few hours in space. They would also stay beneath the possibly dangerous Van Allen belts.
The United States took elaborate precautions to make sure the men would be brought back safely. They
set up a worldwide network of observers and planned to have the satellites make landings in the ocean with
navy ships standing by.
The Soviet Union worked more secretly and without seeking the cooperation of other nations. This
made tracking harder for them. They also planned for the return of the satellite, by parachute, to a land surface,
which also made things harder.
Even so, the Soviet Union got men into space first. On April 12, 1961, the Soviet cosmonaut Yuri
Gagarin was launched in the spaceship Vostok I. It was shot into orbit, travelled once around the Earth in 108
minutes, and was brought safely back to Earth.
On August 6, 1961, less than four months later, the feat was repeated. Another Soviet cosmonaut,
Gherman Titov, was launched in Vostok II. He remained in space through seventeen orbits, which kept him
weightless for over twenty-five hours before being returned to Earth.
Then, on February 20, 1962, the United States put its first man into orbit. This was John Herschel Glenn,
Jr., who made three orbits in just under five hours and was brought back safely.
In the years that have passed since those first manned launchings, both nations have put more men in
orbit for longer and longer periods. The Soviet Union, on June 14,
1963, launched Valery F. Bykovsky, who stayed in space five days, circling the Earth eighty-one times
before coming down.
While he was still in orbit, Valentina V. Tereshkova was launched on June 16, 1963. She was the first
woman in space. She has since married and had a child, so the experience seems to have done her no harm.
The American manned space program picked up speed as President John Fitzgerald Kennedy called for
an American on the moon by 1970. The first few American launchings were in Mercury capsules, little one-man
jobs, nine feet high and six feet wide, weighing one and a half tons. In 1965, more ambitious capsules were put
in use for the "Gemini" project. This is the Latin word for "twins" and it is used because the new craft was to
carry two men.
The Gemini craft was twice as large and twice as heavy as the Mercury. To put a Mercury into orbit
required 360,000 pounds of thrust; the Gemini required 530,000 pounds.
On August 21, 1965, a Gemini capsule carrying L. Gordon Cooper and Charles Conrad stayed in orbit
for eight days, setting a new endurance record. The Russians retained another record, though, for on October 12, 1964, a
Soviet spacecraft was launched with a crew of three. In 1968, a three-man American craft, Apollo 7,
remained in orbit eleven days.
Both the Soviet Union and the United States now had rockets with sufficient power to send men to the
moon. The United States had the Saturn V rocket which, with 7,600,000 pounds of thrust, was capable of
launching a 45-ton object into space. Sheer power, however, was not enough; complex manoeuvres also had to
be learnt: manoeuvres such as ships moving into lunar orbit, smaller ships leaving larger ones and descending
to the moon and returning. Astronauts had to learn to rendezvous; that is, to bring one ship into contact with
another. They also had to learn how to leave their ship, if necessary, and manoeuvre in space, clad in a space-
suit, powered by a hand-rocket, and linked to the ship by a lifeline.
On March 18, 1965, during the course of a two-man Soviet space flight, the cosmonaut Aleksei A.
Leonov stepped out of his capsule and became the first man in history to take a "spacewalk." On June 3, 1965,
an American astronaut, Edward H. White, duplicated the feat.
In 1966, the United States was suddenly alone in the field. For some unexplained reason, Soviet manned
flights ceased, though they continued to launch many unmanned satellites. America's Gemini Project continued
in high gear as several dramatic and successful rendezvous were carried through.
The manned flights had not been without their problems. Some rendezvous attempts had had to be
abandoned. One flight had had to make a premature landing because of malfunctioning controls. Nevertheless,
no lives had been lost in the American programme, and none (despite rumours to the contrary) in the Soviet
programme either as 1967 opened.
The next step on the American side was the Apollo program, in which capsules containing three men
were to be launched into space.
Then came disaster. On January 27, 1967, three astronauts, including White, who had been the first
American to walk in space, were ground-testing the Apollo capsule in preparation for the first flight, scheduled
for only a few weeks later. A fire started, somehow, and in a matter of a couple of minutes, all three were dead.
A long delay was at once necessary. The United States, to save on weight, had been using a pure
oxygen atmosphere in its space capsules. This meant that if a fire did start, it would burn much more quickly
and ferociously than if there were ordinary air in the capsule.
Soviet capsules, which were larger and heavier (since the Russians used larger rockets), used ordinary
air, which required bulkier equipment but was safer. Naturally, public pressure began to rise for the Americans
to use ordinary air, too.
This meant new equipment, new designs, new precautions. It seemed that no new manned flights
would be launched by Americans in 1967.
Nor could the Soviets find much cheer in their own programme. Not long after the American disaster,
in April 1967, they launched a manned capsule, their first in nearly two years. After a troubled flight, a landing
was attempted and it failed. The cosmonaut, Vladimir M. Komarov, died in the crash and the Soviets found
they would have to go slow, too.
Both nations continued to move forward with determination, however. In 1968, the Soviet Union sent
several unmanned probes to the moon, had them circle the moon several times, then return to Earth, where
they were recovered safely.
The United States then performed an even more spectacular feat. In December 1968, the spacecraft Apollo 8
duplicated the Soviet manoeuvre, but with three men aboard-Frank Borman, James Lovell, and William
Anders. They left on December 21 and spent Christmas Eve circling the moon ten times at a height of less than
70 miles. They arrived safely back on Earth on December 27.
With the success of Apollo 8, the United States began its final preparations for landing men on the
Moon. First, however, they had to test the Lunar Module, which would land men on the Moon, and practise the
rendezvous between the Lunar Module and the Command Module, which would return the men safely to
Earth.
On March 3, 1969, astronauts James McDivitt, Russell Schweickart and David Scott, in Apollo 9, were
launched into Earth orbit. Once out in space, McDivitt and Schweickart climbed into the Lunar Module at the
back of the final stage of Apollo 9. They then separated from the Command Module and moved into a different orbit. After a series
of orbital manoeuvres, McDivitt and Schweickart rendezvoused with Scott in the Command Module, docked,
and climbed back into the Command Module.

The stage was now set for the final dress rehearsal for the Moon landing. In May, 1969, Apollo 10 was
launched. On board were Eugene Cernan, Tom Stafford and John Young. On this mission, as Apollo 10 circled
the Moon, Stafford and Cernan took the Lunar Module to within 9 miles of the lunar surface. The astronauts
also gathered information about the Moon's surface and gravity; information essential for a successful Moon
landing.
All was now ready for the United States to put the first man on the Moon. The men chosen for the
mission were Edwin Aldrin, Neil Armstrong and Michael Collins. Apollo 11, the heaviest spaceship ever
launched-6,484,280 lb.-blasted off on July 16, 1969 (E.D.T.). After an uneventful voyage, Armstrong and Aldrin
climbed into the Lunar Module and, leaving Collins behind in the Command Service Module, at 4.20 p.m. on
Sunday, July 20 (E.D.T.) touched down on the surface of the Moon.
Then, watched by over 600,000,000 television viewers, Neil Armstrong, soon followed by Edwin Aldrin,
climbed down from the Lunar Module on to the lunar surface. One of man's oldest and greatest dreams had
become reality.
Of this event Wernher von Braun, the father of the United States space effort, said that this was the most
historic moment in the Earth's history since the first creature crawled out of the sea on to dry land.
It seems very likely that the horizons which have been opened by Man's landing on the Moon will be
further extended by landings on Mars before the twentieth century is done and this may lead to developments
that will make the twenty-first century even more exciting and astonishing than the one in which we now live.
From: Twentieth Century Discovery by Isaac Asimov
THE ARTIFICIAL WORLD AROUND US

WHAT MEN ARE DOING TO THINGS


Nowadays we wear Orlon sweaters that look like wool, Dacron shirts that resemble cotton, and nylon
blouses that remind us of silk. At night we snuggle under blankets whose comforting warmth does not owe
anything to the fleece of a sheep.
Our shoes are soled with rubber that comes out of a factory instead of from a plantation. We walk on
nylon or Acrilan carpets that are startling in their similarity to wool. The chairs we sit on are cushioned with a
plastic foam, covered with an imitation of fabric or leather. The bottom of the wet glass does not leave a ring on
the table when we set it down carelessly. The table top only looks like wood; it is really plastic. Our food is
stored in transparent bags, instead of paper.
We are living in the age of chemicals. Sometimes it seems that everything from the frames of our
eyeglasses to the fillings in our teeth is made of synthetics. This is the word commonly used to describe man-
made or artificial products.
The development of synthetics is a perfectly natural thing. It is human nature to want something better.
Better than what, you may ask? Better than the things we have, is the answer.
The caveman kept warm by covering his nakedness with the coarse skin of an animal. His descendants
began to look for ways of improving on this. They discovered that they could shear sheep, take the wool, weave
it and fashion the material into warm coats and suits. As time passed, people became dissatisfied with this
solution, too.
"How about a coat that is both warm and light in weight?" asked one.
"Or a material that is waterproof?"
"Perhaps some fabric could be found that will not be eaten by moths."
"It would be nice to have clothes that can be washed and worn again without ironing."
The desire to improve on nature brings us to the laboratory. If nature does not provide a fiber that can
do all the things we want it to, let us make one ourselves.
This refusal to accept the shortcomings of natural products has led to attempts to change almost
everything we use. A glass bottle breaks; perhaps, said scientists a few years back, an unbreakable substance
could be found that would work as well as glass. Toys made of metal are heavy and expensive; maybe
something lighter in weight and less costly could be developed. Most of us like to take pictures of the people
around us. Tintypes and daguerreotypes and other early forms of photography were beyond the talents of the
teen-age camera enthusiast. An easier way of snapping pictures of friends and family had to be found.
"So far the motive behind the search for synthetics has been a wish to produce better things... for less
money ... for more people," a scientist sums it up neatly. "We are fast reaching the point, however, where these
discoveries will become a matter of necessity, too."

The population of the globe is growing by leaps and bounds. The same cannot be said for our natural
resources. There is just so much iron, copper, coal or oil, and no more. These products will eventually be used
up. Then, too, the more people there are in the world, the more space they occupy. This means that there is less
room for forests, plantations or farms. Our descendants will find life cold, hard and desolate, unless substitutes
are found for wood, rubber, cotton and steel. One look at the artificial world around you shows that mankind is
already meeting this problem head-on.
The modern age of synthetics began nearly one hundred years ago as a result of a shortage of ivory. At
least this is the story that scientists like to tell.
"Shortly after the Civil War ivory became very scarce," they report. "At that time the game of billiards
was extremely popular and the balls used to play it were made of ivory. Rather than give up billiards, men
went hunting - not big-game hunting in Africa, but synthetic hunting in the laboratory. The object of their
search was a substitute for ivory."
In the year 1868, a young man named John Wesley Hyatt tried his hand at mixing solid camphor with
"pyroxylin," a chemical made out of nitric acid and the cellulose in cotton. The result was a hard, solid
substance. Perhaps you recognize the name that was given it: "Celluloid."
This early plastic was used in making billiard balls and for parts of dental plates. It appeared as a
primitive version of today's "wash-'n-wear" fabrics. Removable collars and cuffs for men's shirts were made out
of celluloid. These were wiped clean with a damp cloth and worn again. In the 1880's, the problem of how to
capture Aunt Susan's fleeting smile for posterity was solved when it was discovered that a thin form of
celluloid could be used to make photographic film, both for snapshots and for movies.
That was about as far as plastics went at the start of the present century. The world around our
grandparents was still a pretty natural one. If an object looked as if it were made of wood or rubber or steel it
was indeed made of one of those products.
Then in 1909, the second great plastics discovery was made by a scientist named Dr. Leo Hendrik
Baekeland. The synthetic he produced was particularly hard and strong. It belonged to a group of plastics that
has come to be known as "phenolics." The first member of that family bears the name of its inventor, changed so
as to be a little easier to pronounce: "Bakelite."
From that time to the present, many different plastics were developed, and things were no longer
necessarily what they seemed. The world became a bewildering mixture of the true and the false. The cabinet of
the television set looks like wood. But is it really? Better go up close to make sure. The suitcase you pack for a
weekend visit to a friend certainly seems to be cowhide. A second or even a third look is needed to prove
whether the "leather" comes from an animal or a factory. The tire must be made of rubber, and so it is - a rubber
that is exactly the same in chemical structure as the natural, but that was produced by the chemist, not by the
tree.
What has happened to the solid objects in the world around you is as nothing compared to what has
happened to the things you wear. You are the first generation to grow up clothed in test-tube materials. The
only manmade fiber your grandmother knew was rayon, which was discovered in 1910. In its early days it was
called "artificial silk," and that was the only natural fiber it replaced.
Fewer than thirty years ago it seemed that nothing could ever substitute for cotton, wool, linen and fur.

The clothing revolution began on a day in 1938 when a completely new kind of synthetic fiber was
discovered. Chemists took oxygen and nitrogen from the air, hydrogen from water, and carbon from
petroleum, and put them together to form the product we know as "nylon." It sounds easy, but it took many
years of research by a group of scientists, headed by Dr. Wallace H. Carothers. Nylon was the first of a series of
discoveries. One synthetic after another followed it out of the test tubes - bringing us such fibers as Orlon,
Dacron, Acrilan, Saran, Teflon and Lycra.
You can see the result of this research wherever you go. If you were to stop one thousand men and
women on the street today, you would not be likely to find a single one with clothing made out of natural
products only. The lady's stockings and lingerie are almost surely nylon; her shoes may be plastic; her
uncrushable dress with its permanently pleated skirt contains at least some nylon, Orlon, Dacron or Acrilan.
Even the "fur" on her coat may never have sheltered a mink or a beaver. Her husband's drip-dry shirt and
underwear owe these qualities to synthetics. His sweater and socks are a result of good laboratory techniques,
not good farming.
Everywhere you look today you will find synthetics: the bristles on your toothbrush; the comb you run
through your hair; the buttons on your coat; the raincoat that shields you from the storm; the baby's bottle; the
little boy's toy airplane; the dishes that cannot be broken; the squeeze bottles for ketchup or cologne; the
delicate white curtains at the window; the film in the camera; the television cabinet; the telephone.
All of these, however, make up only a part of the story of synthetics today. They are the obvious things,
the ones everybody knows about. You are surely not surprised to find that stockings which look like silk are
made of nylon, and that the "wood" paneling is really plastic. Nonetheless, there are many other synthetics that
are not generally recognized. Few of us realize just how artificial the world around us has become.
You go to visit a modern factory. Is the hammer being used in the shop made of metal? The conveyor
belt on the assembly line? You go window-shopping past the best stores in town. Did the breathtakingly
beautiful star sapphire in the jewelry store window once lie buried in the earth? You sit down to dinner. What
do you really know about the food you are eating? Is it the natural color? The natural smell? The natural taste?
Is it even food?
Artificial products are slowly but surely taking the place of things that we still think of as natural. The
story of these little-known synthetics will be told in this book. Some are already in use today; others will be
here tomorrow. We live in a world that is constantly changing and that is full of surprises.
"The only thing you can be sure of in these artificial times," says a chemist, with a devilish twinkle in his
eye, "is that you can be sure of nothing."
(From The artificial world around us, by Lucy Kavaler)
THE NOSE DOES NOT KNOW
You may not know it, but science is leading you around by the nose. Though your life may never be a
bed of roses, it can certainly smell like one.
There was a time when perfume was just something for ladies to dab behind their ears. Today your
shirts, your shoes, your house, your family car, little sister's rubber toys, even your mattresses are perfumed.
You are living in a new era of smells, and practically everything you use has odor added, subtracted or
changed. Chemists like to tell a joke about a little boy who was seen one day happily eating his spinach. His
father had sprayed the bowl with chocolate smell.
A family went out to buy a used car recently. The dealer showed them a car that looked new; why, it
even smelled new. But was it new? Not at all. It had been sprayed with "new car" smell. Criminal court records
reveal cases in which people have actually been tricked into buying "new cars" which turned out to be old ones.
The characteristic aromas of fresh paint, glue, leather and other materials can be applied to a Model T Ford.
Unfortunately, they do not fool the motor.
Used cars are smelling so good nowadays that people who buy new cars sometimes feel disappointed.
As a result, one of the biggest automobile manufacturers is secretly adding "new car" odor to his new models as
they come off the assembly line.
In the supermarket one day you may reach for a package of buns, tempted by the pleasing aroma of
cinnamon and honey. On second thought you may realize that you could not possibly smell the buns through
the layers of wrapping. The fact is that "cinnamon bun" odor has been added to the cardboard of the package.
As you go over to the fruits and vegetables department the pungent aroma of oranges, grapefruit and
lemons tickles your nose. These fruits, too, in this modern age of packaging are neatly wrapped in plastic bags.
Once again it is the wrapping that you smell, not the fruit.
Grass seed in bags does not smell either, but the home gardener thinks it should. A seed company,
therefore, is treating the bag with "grass" odor.
"Most packages eventually will be made to smell like the products they contain," states a research
chemist optimistically.
And what of the products themselves? The odor is no longer a safe guide as to what is inside. Does beer
make a girl's hair more beautiful? At least one shampoo manufacturer believes that the smell of beer alone will
do the job. His shampoo reeks of beer, but does not contain an eyedropperful. A competitor, on the other hand,
puts beer in his shampoo and hides it with a heavy rose perfume.
Like most Americans today, your family probably has cook-outs several times a week during the
summer. One of the great pleasures of barbecue cooking is the woodsy odor of hickory smoke. Nowadays you
can get the smell without burning a single stick of hickory. The aroma of the wood is added to the chemical
which is used to light the charcoal on your grill. Left to itself, this product smells of unromantic kerosene.
Your Christmas tree last year may have come straight from the woods. Many people, though, use trees
that grow in a factory instead. At least as far as smell is concerned, you could not tell the difference.
You pick up an inexpensive pocketbook in one of the big department stores. At that price it must be
plastic, but it smells like leather. Yes; "leather" odor has been added.
Which tire has the real rubber? During World War II, it became very difficult to get rubber from the Far
East. An artificial type was made here and used as a substitute in tires and other manufactured goods.
Although natural rubber has a most unpleasant smell, the chemists were called in to reproduce it. The
odor completed the masquerade costume of the artificial product.
When natural rubber became available again, its smell once more was considered unacceptable. The
chemists were asked to come up with an improvement. Both natural and synthetic rubber products are now
scented with odors that do not bear the slightest resemblance to either one in its original state. They smell of
flowers or of chocolate or of many other things. The aroma selected depends upon how the product will be
used. Baby toys, after all, should not smell like tires, which in turn should not smell like foam-rubber mattresses
or girdles.
Shirts and sheets coming out of electrical clothes driers are every bit as dry as those hung out on the line
in the back yard. Something, however, is missing.
"They don't smell of sunlight and grass the way they used to," complains the suburban housewife.
To provide these, the smell of "sun, grass and the great outdoors" has been distilled in the laboratory
and can be added to the machine. Where were your clothes dried? You must follow the laundry basket to find
out. Your nose cannot give you the answer.
If you are sometimes fooled by artificial aromas, pity the poor animals. Fish-lure odors entice luckless
fish. A strong carrot odor draws rabbits into enclosures where not a single carrot lies. A familiar aroma added
to the mash fed to piglets being weaned from their mothers helps them to adjust to the change. Some hunters
resort to the use of an artificial "deer" scent to deceive their quarry. Perhaps the greatest trick of all has been
thought up for hunters by a sporting-goods house. It is an "odor" that conceals the smell of man altogether.
Wait until the mystery story writers get the idea of concealing the presence of the murderer with this.
Not all artificial smells are intended to deceive; some are designed to serve as warnings. Natural gas,
now generally used in gas stoves, has no smell. Imagine what would happen if the wind blew out the flame on
the stove burner. Gas could then escape freely into the kitchen. Whole families would collapse, never knowing
what hit them. A "gas" smell, therefore, is added to the natural gas in the pipe lines.
Drivers are warned when the motors of their automobiles are overheating by a carefully designed
characteristic smell that is released whenever the temperature reaches too high a point. The chemicals in the fire
extinguishers are also perfumed so that it is possible to tell if there is a leak. Otherwise, the cry of "fire" would
ring out and fire fighters would rush to the extinguisher, only to discover - too late - that nothing was left inside
to put out the flames.
Odors, you can see, fill many different needs. As one chemist put it: "Everything smells." Even things
which you think are unscented have perfume added. The smell may be so faint that you are not aware of it.
Some years ago an experiment was performed in which women were asked to say which of two pairs of stockings they preferred. One pair was lightly perfumed; the other was left completely odorless. An
overwhelming majority picked the scented pair, but none of the ladies gave that as her reason.
"It is a prettier color," said some. The colors were identical.
"It is filmier," said others. The stockings were exactly the same in weight, too.
In the light of these findings is it any wonder that a manufacturer of a cleaning fluid for ladies' suede
shoes and pocketbooks spends more to perfume his product than he does for the chemical that does the
cleaning job? In fact he uses perfume oils equal in quality to those put in regular perfumes costing twenty
dollars or more an ounce.
This kind of thinking applies to household products, too. Homes smell a lot better than they used to.
Perfume chemists dare you to name a single thing - from hot water bottles to scouring powder - that has not
received the odor treatment. Fragrance is added even to toilet paper. Some eager manufacturers want to lull
you into sweet dreams by perfuming mattresses and pillows which have already been deodorized.
In some cases the effectiveness of a product is judged by odor alone. How can you be sure which
insecticide will make most bugs bite the dust? A long list of ingredients is given on the can, but it is hard to
read the small print, let alone figure out what it means. At least you can tell if the spray smells bad. Many
people, therefore, buy the insecticide that smells the best, even though that might be the least effective.
"You could make an odorless baby powder that was every bit as soothing as the one that is most widely
used today," comments a chemist, "but what mother would buy it? Without the familiar odor, everyone would
assume that her baby was not getting good care. The perfume must be there."
You would probably define perfume as something that smells good. That is not necessarily true in this
artificial era. Many smells which are quite unpleasant are added to products on purpose.
What does fertilizer smell like? It smells terrible, of course; everybody knows that. Actually many
chemical fertilizers do not - or did not - until the artificial odor of "partially decomposed barnyard manure" was
added. This disagreeable aroma makes the farmer happier; an odorless fertilizer, he feels, could not really do
the job.
At one time if you had your shoes shined everyone within a three-block radius knew it by the smell.
The next step was for the chemists to take the shoeshine smell out of shoe polish. Now there is talk of putting it
back in.
A smell that duplicates the odor of hemp lying on a dock has been developed. Nylon sails, ropes and
fish nets were too odorless to suit fishermen. They are glad to have the old familiar smell back.
Even some household products have an intensely unpleasant aroma - on purpose. Two of the most
widely used household cleaners literally reek. Why?
"It makes it obvious that the housewife has been working hard," explains a chemist, who was called in
to provide smells for several of these cleaners. "The husband comes home, sniffs the air, and knows at once that
his wife has been slaving away all day. It is perfectly possible to make the same cleaner with no odor, or with a
pleasant odor - but nobody wants that."
"On the whole, though, the great challenge that must be met by perfumers is that of concealing bad
smells," reports Givaudan-Delawanna, a leading firm in this field.
"We 'deodorize,' which means that we find a way of taking an unpleasant odor away altogether, or we
'reodorize,' which means that we change the smell to something agreeable."
Most plastics must have odor added or taken away. Not long ago a number of housewives noticed that
the milk and butter in their refrigerators smelled as if a skunk had just passed by. What was to blame? The
culprit was a plastic used in the refrigerator door. A pleasanter aroma was quickly added.
At just about the same time, in another community, youngsters came home from school complaining
that the sandwiches in their lunch boxes smelled decidedly odd. "I couldn't even eat the peanut butter and
jelly!" exclaimed an angry first-grader. This time it turned out that the plastic wrapping was responsible, and it
had to be deodorized.
Textiles require the best that laboratories have to offer. Foul-smelling chemicals are used in making
fabrics out of fibers. Your grandmother may remember a time when she would walk into a dress shop in the
summer and find her nose wrinkling in disgust and her eyes watering as a result of the formaldehyde in the
dyes. This problem still exists today, along with a lot of new ones. These are caused by the development of
plastic materials called "resins," which are used to make modern wash-'n'-wear, uncrushable, shrinkproof and
water-resistant fabrics. These resins give off a strong, fishy odor. In the case of plastic raincoats, shower curtains
and tablecloths, the problem is particularly severe. At one time quite a number of families were dismayed to
find that their bathrooms smelled fishy whenever the hot water was turned on. The vinyl shower curtains
proved to be at fault. Chemists meet the plastic challenge by adding scents. The fragrance has to be carefully
chosen. What boy would be willing to wear a shirt that smelled of roses?
It is not only synthetics that smell bad; many natural products have unbelievably unpleasant aromas,
too.
Do leather and fur smell good? Most people think so - mistakenly. A few years ago a merchant seaman
on a trip to Latin America found a leather pocketbook painstakingly made by hand by a native craftsman.
Delighted, he sent it home to his wife. In the steaming jungles he did not notice the smell. This was not the case
in Chicago where his wife opened the package. The appalling odor was so strong that, with a handkerchief to
her nose, she hurriedly hung the pocketbook out of the kitchen window. She left it there for three days before
she could even bring herself to examine the gift more closely.
Most natural leather products are perfumed with an odor that people think is "leather," but that actually
contains many extra perfume ingredients. This is the "leather" odor that is added to synthetic leather, too.
If you took a walk down the blocks that contain New York's famous fur market and got a whiff of the
mink skins you would be horrified. The stench is indescribable. By the time the skin has been transformed into
a coat or stole, however, its smell has been transformed, too. It is faintly exotic and glamorous, quite suitable for
a princess or a movie star. You could not detect the smell of the furry, four-legged beast.
Even if you like animals, chances are you do not like the way they smell. What about your pet? Shampoos, sprays and powders to kill insects are all perfumed. Your dog or cat can smell of pine or lavender or even chocolate. There is one unhappy side to all of this, however. The refined scents may appeal to you, but your puppy might become unpopular with his fellows. They want a dog to smell like a dog.
(From The artificial world around us, by Lucy Kavaler)
"THIS MOVIE SMELLS"
"Wouldn't you like to smell Elizabeth Taylor's perfume?"
asks a movie producer, speaking of one of Hollywood's most
glamorous stars. "When Huck Finn travels down the Mississippi
wouldn't you like to know what he smells as well as what he sees,
hears and says?" "This movie smells" may at some time become a compliment. The movie of the future could
not only be in color and three dimensional; it could be scented, too. Unfortunately, it is extremely difficult to
make this happen. At least that has been the experience of the bold pioneers in the movie industry who have
tried it. One movie, a tale of murder, was produced by Michael Todd, Jr. It was given the suitable title of The
Scent of Mystery. The process used was called "Smell-o-vision."
"It was an extremely complicated and expensive system," recalls one of the engineers who worked on it.
"In the basement of the theatre we placed a special mechanism. In it chemicals were mixed to create fifty-two
different odors. Each one was to be released at the suitable moment during the movie. The scents were then
piped through the entire theatre. Between smells, fresh air had to be run through the pipes to clear the
atmosphere for the next odor coming up. Five theatres in the United States were 'fitted' for smells."
For a travel movie on China, Behind the Great Wall, the odors of harbors, opium dens, and rivers in flood
were blown through the air ducts of the theatre.
The movies, however, were not successful, and the whole idea is at present in eclipse. It may someday
be revived. Some movie producers still believe that smells could add to your understanding of a movie. They
like to imagine an audience made so hungry by the aroma of the food being eaten on the screen that it would
rise as one man and dash for the popcorn machine in the lobby.
You may also someday smell the horse ridden by the television cowboy, and enjoy the scent of grass,
earth and rock as he rides the range. Television with odors added might become commonplace in the world of
tomorrow. Experiments have already been performed with closed circuit shows.
The likeliest use of odor is in the field of advertising. A process called "Scentovision" is based on an
electronic device that releases smells in connection with both sound and pictures. This is particularly suitable
for advertising display stands in supermarkets. The machine used is about the size of a television set, and is
similar to it in a number of ways. A picture of a banana appears on a screen, and at the very same moment the
smell of banana is wafted into the air. The picture changes to show coffee, and at once the scent of freshly
roasted coffee tickles the nose. The mechanism is carefully timed so that the banana odor vanishes by the time
the coffee smell is released.
Advertising that smells can appear on the printed page, too. This is done by the addition of chemical
odors to the ink or to the paper. Your family may already have received direct-mail advertising brochures that
were perfumed. A number of companies are using these today. In addition, magazines and newspapers are
experimenting with scented advertisements.
"These must be handled with kid gloves," complains an advertising manager. "You can run only one
perfumed advertisement in each issue. Can you imagine what would happen if there were an advertisement for
rose perfume on one page and frozen orange juice on the next? The two smells would knock each other out."
Perfumed paper and ink run into the same problem that people do when they use perfume. It wears off
in a few hours. A chemist who developed a pine smell for use in a newspaper advertisement spent the
afternoon at the newsstand checking on his work. To his sorrow, he discovered that in two hours the odor was
gone.
Research is being done into ways of making smells last longer, and some success has been reported.
There are odors that will cling to paper for as long as thirty days. And this is only the beginning.
Members of the advertising industry are not easily discouraged. If they find that odor helps to sell
products, they will invest in further research. You may yet live in a world in which billboards, posters, and
magazine and newspaper advertisements will bring you both the picture and the smell of canned pineapple,
goulash, Chanel Number Five or cigarettes.
(From The artificial world around us, by Lucy Kavaler)
THE ARTIFICIAL AIR
Have you ever walked into a supermarket and headed straight for the bakery counter, enticed by the smell of freshly baked cake? If you have a logical mind, you may have wondered how the smell could be so strong when most of the cake was boxed or behind a glass counter. The reason: A synthetic "freshly baked cake" smell was sprayed in the air just to make your mouth water.
This sometimes happens even in a regular bakery, though you would not be as likely to suspect it there. Nowadays many bakeries are part of large chain-store operations. All the baking is done at only one of the stores; trucks carry the bread to the other member bakeries. When you next smell the aroma of bread baking, check to see if there are ovens anywhere on the premises. They may really be miles away.
Making the smell of baking bread causes many headaches in the laboratory. White bread, chemists
complain, is one of the hardest odors to reproduce. Rye bread is much easier, because of the pungent smell of
caraway seeds. Most people prefer the smell of white bread baking, nonetheless.
"I know of only three smells that are appetizing to practically everybody: baking bread, frying chicken
and roasting coffee," says Victor di Giacomo, chief perfumer of Givaudan-Delawanna.
Modern packaging has not yet robbed us of the smell of frying chicken, but the other two might become
victims of the times were it not for the chemist's art. Coffee that is vacuum packed in cans does not smell, but
you may forget that fact when you sniff the air around it. Sometimes the odor of roasting coffee is sprayed on
the outside of the grocery or supermarket. The absent-minded customer believes that coffee is being roasted
inside the store.
The old-fashioned delicatessen or grocery store is slowly but surely being replaced by supermarkets,
but its unforgettable smell will linger on. Chemists have devised a "pickle" perfume that can be sprayed on
shelves holding jars of pickles. If you use your nose instead of your eyes you will think that you are in the old-
style store, with its big, open barrels of pickles.
Even stores with bottled whisky do not always leave well enough alone. The owner of a large liquor
store has given a laboratory the assignment of reproducing the smell of a certain brand of whisky. He wishes to
spray his store with it.
Odors are not always sprayed into the air in order to get you to buy something; the motive sometimes is
to bring you pleasure. The president of a large French perfume house was so carried away by his belief that
perfume is essential to happiness that he climbed into an airplane and flew over Paris, spraying the city with
gallons of the costly essence. There is no way of knowing how much happiness he brought into the lives of
others. Personally, he got nothing but trouble. City officials objected, because he did not have a license. It is
hard, though, to imagine just what license would cover spraying a city with perfume from an airplane.
The idea of improving the smells of subways, buses and streetcars appeals to anyone who travels any
distance to school or to work. Can you imagine pushing your way onto a bus or subway train at 8:30 in the
morning and taking a deep breath of sweet, fresh-smelling air? The French have tried to make this dream come
true. Every station of the Paris subway, or metro, as it is called in French, was sprayed with perfume. The
sprayers were on the last car of each subway train.
This principle could logically be carried a step further. Each car in a subway train or each bus can be
sprayed. The cost is so high, however, that people just talk about it. Nobody does anything.
If you live in a city you have undoubtedly noticed that it is more unpleasant to be behind a bus than
inside it. The strong-smelling exhaust fumes are thoroughly disagreeable. This problem can be solved. At least
in Cleveland, you can be right behind a large bus without smelling the horrid odors. In that city a few drops of
a chemical are added to each gallon of diesel fuel used by the buses.
When the engine heats up, this chemical is released into the exhaust. What kind of a smell is it? Violet
and peach odors go into it, but the end result is fairly neutral.
The companies that make the deodorizing chemical think it will cut down air pollution, in addition to
cutting down the bad smell. This has not been proven so far. The only evidence comes from the men who work
in garages where the buses are kept. Those handling the untreated buses complain that their eyes keep
watering. In the garages housing the deodorized buses there are fewer such complaints. But this may simply
mean that without the smell the men are less aware of their discomfort. It is the same for the man on the street:
The buses still give off exhaust fumes, but he does not mind them so much. The air he breathes seems to be
fresh. He should stay away from exhaust fumes, nonetheless, as they contain deadly carbon-monoxide gas.
What does air smell like? Your answer to that would depend on whether you live in New York City,
Los Angeles, Pittsburgh or the Maine woods. Today there are very few places where the air smells good. We
like to think we have fresh air around us, but not many of us do - whether we live in cities, small towns or
suburbs.
Is any one of the following near you? A paper mill? Petroleum refinery? Plastics manufacturing
company? Synthetic rubber mill? A plant making textiles? Cannery? Fishery? Meat-packing plant? Chemical
plant? Brewery? Even if you have answered No to all of these, there is surely a sewage-disposal plant or
incinerator in your area. Just one Yes is enough to mean unpleasant odors in the air.
Luckily, for every bad smell there is a solution. Some industrial plants have equipment designed to
keep gases out of the air; others depend on the skill of the chemist.
Not long ago people living in a small New Jersey town noticed that the air around them often smelled
of dead fish. The odor, they soon learned, came from a factory that had just opened. Fish was being ground to
make chicken feed. This knowledge was small comfort to anyone but the chickens. At last a large chemical firm
was called in and asked to conquer that fish smell once and for all. A floral perfume strong enough to conceal
this odor was developed, and engineers worked out a way of putting it into the plant's ventilating system.
Oil refineries produce some of the worst smells of all. The gases coming out of the smokestacks, called
"stack gases," are very much like the odor of a mixture of rotten eggs and cabbages. In one Midwestern town
people living near a refinery became so desperate that they went to court to force the oil company to shut down
its worst-smelling operations. The company hurriedly called in the odor-control chemists. A smell-killing
compound is now sprayed into the base of the stack.
You can take it as a fact that if you walk past a refinery and do not feel like burying your nose in a
handkerchief some method of controlling the smells is being used.
Sewage-disposal plants, as you can imagine, produce perfectly horrible odors. To conquer these
effectively it is necessary first to put a strong deodorant into the sewage water itself, and then to spray the air
around the plant. Perfumes of an orange-lemon type appear to be the most successful.
Conquering bad odors must, like most things in life, be done in moderation. Mother Nature does know
best, and terrible smells sometimes warn of danger. A rotten-egg smell tells you that hydrogen sulfide is in the
air around you, and you get away from it as fast as possible. This is a sound and healthy instinct. The gas will
irritate your eyes and nose. If you breathe in too much of it you can become seriously ill. The smell-killing
compounds that have been developed, therefore, are designed to stop working when the amount of gas in the
air reaches a dangerous point. The odor of rotten eggs returns and everyone in the neighborhood departs.
Animal smells are not much of an improvement over industrial odors. If you had to work in an animal
laboratory, a zoo or a circus menagerie, you might find it hard to keep on being kind to the beasts. The odors
around the monkey, lion and tiger cages are particularly strong when nature is allowed to take its course.
Nowadays it is often restrained. There is now a special "zoo" smell that can be blown through the cages by fans.
Its manufacturer, Dodge & Olcott, reports that it contains eucalyptus, thyme, pine and camphor. A special
formula has been developed to improve the air in animal laboratories and around them.
The air inside as well as outside a building can be made fragrant. The last word in smell conditioning
took place one night a couple of years ago at a ball held in New York's Waldorf-Astoria Hotel. During the
course of one evening $10,000 worth of rare French perfumes were sprayed into the air.
Somewhat less costly fragrances are sometimes added to the air-conditioning systems in modern hotels,
public buildings, theatres and apartment house lobbies. In time, built-in smells may come to private homes as
well. The scent is introduced into the blower system and comes out with the air. Light floral odors or mint types
or citrus are most often used. Those who believe that variety is the spice of life use one odor for a few days and
then shift to something else.
At the present time, spraying theatres and public buildings is still more customary than adding scent to
air conditioning systems.
Department stores discovered a few years ago that spraying the air of the main floor with a well-known
brand of perfume helped to sell not only that particular fragrance, but also everything else from socks to ice
buckets. The pleasant smell puts people into a good mood. At holiday times scents connected with the occasion
are often used. Next Christmas you might notice how many stores smell of pine and balsam, even if there is not
a single pine branch to be found in any department.
Not all smells introduced artificially are pleasant. A patient at an extremely modern West Coast hospital
complained recently that for all the bright chintz drapery and light green paint, the place still smelled like a
hospital. Surely, air conditioning could keep the antiseptic aroma under control. It could, but it would make
many patients uneasy. If they did not smell the familiar antiseptic and medicinal odors they would not think
they were being cured. As a rule, when you smell camphor or eucalyptus you feel that a place has been made
extremely clean. Some hospitals are gradually switching to floral perfumes, but it will probably take years for
people to get used to the change.
Since the development of aerosols - the push-button spray cans - it is hard to find a home that does not
have at least one deodorizer or reodorizer. Aerosols are the direct descendants of the old-fashioned spray gun.
The cans contain perfume and a chemical, called a "propellant," which pushes or propels the fragrance out. You
may be surprised to learn how little perfume is needed. The aerosol can is almost entirely filled with the
propellant. Less than one quarter of one percent of the liquid in the can is perfume.
When these aerosols were first developed, they were used to kill bad smells only. After cooking
cabbage, for example, the housewife would hurriedly reach for a deodorizer spray. In the last few years the
principle has been carried further. Chemists have developed aerosols which can do several things at once. You
can mothproof a closet and make it smell of cedar or pine besides. Insecticides have perfumes added, and so a
playroom can be made sweet smelling as well as insectproof. There is now a room deodorizer designed to
remove the odor of pets from the air, and replace it with the scent of your choosing. Does your linen closet have
a section where you throw old rubbers, baseball bats and rock collections? A spray can take those odors away
and add the scent of lavender or one of the "clean" smells specially designed to remind you of freshly laundered
linens. Another simply removes smells, making the air completely odorless.
Aerosols can give each room in the house its own characteristic aroma. Kitchens can be sprayed with
the scent of spice or mint. And who knows? An aggressive baking company might someday offer an aerosol of
"freshly baked bread" smell to go with each packaged loaf. There is even a nursery odor, designed to smell the
way people think a baby should. A girl's closet and dresser drawers can be sprayed with the cologne of the
perfume she uses. A hostess could perfume her living room as well as her person before company comes.
Even your car does not need to smell of gasoline or the bananas your little brother mashed into the
upholstery. Perfumed blocks can be placed in the glove compartment or under the seat.
Almost everywhere you go you are in for surprises planned by the enterprising smell-makers. A new
world is right under your nose.
(From The artificial world around us, by Lucy Kavaler)
THE ODOR-MAKERS
The truth about why and how we smell is still shrouded in mystery. How does the nose do its work? What makes some smells delightful and others repulsive? These are questions that scientists are working on.
One theory has it that pleasant smells are made up of a great number of different chemical compounds. Laboratory analysis of roses, for example, shows that the scent is produced by 20 or more ingredients; the aroma of coffee is the end result of at least 35. Bad smells, on the other hand, tend to be rather simple. They are too crude for our taste. That may be why we do not like them.
Chemists do not let the lack of understanding of the causes of smells hold them back. They continue
mixing and blending compounds that smell good, even if they are not sure of the reasons.
The men who specialize in creating odors in the laboratory must have exceptionally keen senses of
smell. They are known in this business as "the noses," or sometimes as "the long noses," and they train
themselves to recognize a tremendous number of different odors. A whiff of perfume is in the air; they instantly
identify the scents that have gone into it - lemon, tweed, eucalyptus, sandalwood. A good odor chemist can
recognize about 4,000 different smells. Some can do even better than that. One chemist claims that he can spot
7,000.
In the laboratory the scientist sits at what has been described as a "perfumer's organ." Instead of the
keyboard, he has row upon row of flasks ranged in front of him, each containing a different scent. A typical
"kit" has some 600 or 700 ingredients in it. These are combined until the right effect is produced.
Not all "artificial" odors are made from synthetic products. Natural oils and extracts are often used.
Some of the "wood" smells added to plastic "wood," for example, contain at least a little of an extract made from
bark. The end result still deserves to be called artificial, because it does not smell the way nature intended.
"Any smell from a rose to rhinoceros hide can be reproduced in the laboratory," says a chemist
confidently. The oil is taken from the flower; an extract is made from leather or rubber by dissolving it. What
chemicals have combined in nature to produce each smell? The "nose" takes the essence or extract and puts a
few drops on a small white square of blotter. He smells it, lets it dry, and sniffs again. Then he tries to figure out
which chemicals are responsible for the odor. A series of laboratory tests picks up those compounds which the
"nose" may have missed.
Identifying the chemicals is only the beginning. In what order were they combined in nature? How
much of each is needed? Nobody knows exactly when a rose begins to smell like a rose. The scientist can only
try to repeat the process by a system of trial and error. He puts the chemicals together in one way, and then in
another ... tries a little more of this and a little less of that . . . until at last the familiar smell of the lily or the
orange rises from his flask.
How does he know he is right? No two people have the same sense of smell. One chemist tells how he
developed what he thought was a perfect imitation grass smell, only to have a friend say cheerfully: "Ah,
buckwheat cakes!" The opinion of the majority has to rule. The perfumer takes the natural product and his
duplicate of it around the laboratory, and asks his co-workers to sniff it. If they agree that the artificial smells
like the natural product - and the right natural product - it goes on for additional tests. A "smell panel" of men
and women is called in and asked for an opinion. Once in a while a public-opinion survey is held.
You may wonder why the chemist tries so hard to copy the odor of a rose or lily of the valley when
these flowers are grown in many gardens. Strange as it seems, it is easier and cheaper to make perfume out of
chemicals than out of natural products. Oil of violet is used in many perfumes. It takes 66 million violets to
make only one pound of this fragrance. This is why scientists were so proud of reproducing the smell.
Sometimes the chemist is told to create a completely new odor. In these cases he does not need to copy
nature, but he must make something that smells good.
"Deciding whether a new scent is good is a tricky business," complains a chemist. "You love the odor of
gardenias; your sister hates it. Neither of you can say why."
Even when the fragrance wins the approval of the majority, the perfumer cannot bring his research to a
halt. The synthetic smell may not behave well. Perhaps it vanishes into thin air so quickly as to be quite useless.
The passage of time may change the flower into a fish. In addition, many fragrances are planned for a specific
use. An imitation pine smell is needed for an artificial Christmas tree. It is not enough for it to smell good on a
square of blotter; the fragrance must not get lost in the odor of plastic. A tweedy aroma is to be added to a dress fabric. The scientist must take to his washtub to discover what happens after the garment is washed once, twice, fifty times. The chemist must also find out whether the essence works best dissolved in water or in alcohol, or when soaked into a ceramic block.
Copying natural odors, making new aromas and deciding how to use them, are only part of the smell maker's work. You have read in an earlier chapter about the importance of "deodorizing" and "reodorizing." How is this done? The most common way is to
add a strong, pleasant smell which covers up or "masks" the bad one. The name given to the process is,
therefore, "masking." Pine, wintergreen, sassafras and citronella are just a few of the strong odors capable of
conquering weaker ones. The scents which are used in this way are called "masking agents."
Not all odor control, however, is based on this coverup technique. Some aromatic chemicals work on
the principle best described in the slangy expression, "if you can't lick 'em, join 'em." They combine physically
with the chemicals that produce the bad odors. The structure of those compounds is completely changed. In
this way a new odor is created.
"Mixing smells is very similar to blending colors," explains a research chemist at Rhodia, one of the
major firms working in this field. "If you add yellow to an ugly shade of blue you will not get a pretty shade of
blue, you will get green. Similarly, if you add a good smell to a bad one you can get a pleasant smell that is
quite different from both. Another possibility is for the two to knock each other out and leave no smell at all."
As agreeable odors are complicated compared to the simpler bad ones, the masking agent usually
contains anywhere from 10 to 126 different ingredients. Thirty is considered typical.
"Each masking agent must be custom made," states an industrial chemist. "The aromatic chemical that
conquers natural rubber's foul smell can be quite helpless against the vinyl plastic in a shower curtain. And
changing the smell can lead to some unpleasant side effects, as any chemical - even a perfume - reacts with
another."
One chemist tried putting a minty odor into the "bath" of chemicals in which a fabric was soaking. The
idea was to replace the fishy odors described before. The new fragrance did win out, but the material was no
longer shrinkproof. The problem is to find a chemical that does not have a bad effect on any of the other
chemicals in the "bath." The material has to smell good and still be drip dry, water resistant, shrinkproof, color
fast and crease resistant.
The problems involved in controlling the terrible smells released into the air by industrial plants are
even harder to solve. The first step is the discovery of just what gases are to blame for the stench. The "noses"
must play detective. In a city in Texas not long ago passers-by were surprised to see a man standing outside a
factory and sniffing the foul air with all the enthusiasm of a sailor at the ocean's edge. It was an odor chemist
doing his homework. Can this be sulfur? Or ammonia? Or nitrogen? When he had identified the ingredients he
went back to the laboratory and got to work blending a similar concoction.
"We first try to figure out just how strong the bad smell really is," declares a research chemist. "We
dilute the chemicals responsible for the smell until we hit on a concentration that seems just about the same as
the one at the factory. The masking agent we design has to be at least that strong if it is to have any effect."
Every ingredient that goes into the "mask" is matched to a chemical in the bad smell. Some of these
compounds boil faster than others. As a result, they enter the atmosphere at different moments. Laboratory
tests give the chemists a timetable. They find good-smelling chemicals which perform in the same way as the
ones they are to attack. The masking agent is like a team of fighters. As each bad-smelling compound boils and
enters the air a fragrant compound boils, too, and knocks it out ... until no unpleasant odor remains.
Achieving this happy result is just as difficult as it sounds. More than 200 compounds, many of them
made up of over 25 ingredients, were tested before one was found that successfully concealed the exhaust
fumes produced by diesel fuel.
"We have yet another way of controlling odors," reports an industrial chemist. "We force the
compounds that produce bad smells to break down and destroy themselves. This is done by adding chemicals
that cause this type of reaction."
Chemicals can be used to make hydrogen sulfide come apart before it ever has a chance to get into the
air and fill the atmosphere with the odor of rotten eggs. This method has already been tried at a number of
paper mills and oil refineries.
You may by now be wondering why anything should smell bad when it is possible to perfume every
product you use, the air around you and the atmosphere surrounding factories and garbage dumps. Cost is the
big stumbling block. A paper mill, for example, must make good paper if it is to stay in business. The fact that
sulfur gases are released into the air when pulp is made has nothing to do with the sales of finished paper. This
is one of the unfortunate facts of business life. Only one out of every ten paper mills uses an effective system of
odor control today.
There is every reason to hope that tomorrow will be better, as one industry after another enters the fight
against unpleasant aromas. This is only the dawn of the chemical age. You will have to be patient in waiting for
the day when a bad smell will become no more than a bad memory.
(From The artificial world around us, by Lucy Kavaler)
THE TRUTH ABOUT TASTES
America's favorite ice cream is flavored with wood pulp, and many soft drinks owe their sweetness to
coal. There are hardly ever any cherries in a cherry lollipop, or bananas in banana cake. Maple syrup need not
come from the tree that bears this name, and a chocolate cookie can contain only a trace of chocolate.
If this information comes as any surprise to you, just read the list of ingredients printed on soda bottles, candy bars, cookie boxes or packages of prepared foods. Down in the small print you are almost sure to read the words, "artificial flavoring."
What is the reason? Are not natural flavors good enough? Of course they are; they often taste better than the artificial. But they present many problems. The most obvious is that of cost. Strange as it seems, it is cheaper to make peach flavor in the laboratory than to squeeze and concentrate the natural juice. You can often get the real thing, if you are prepared to pay for it.
"It is not necessarily better, though," insist chemists at Dodge
& Olcott, which makes flavors as well as smells. Many natural
products, for example, do not like to be cold.
"Make mine vanilla," cry more than half of America's
enthusiastic ice-cream eaters.
It just so happens that vanilla loses some of its taste when it gets too cold. In addition, pure vanilla is
likely to have a somewhat fatty taste. It is better when mixed with vanillin, a synthetic made out of wood pulp.
"A number of other natural flavors have not kept up with the times, either," adds a food manufacturer.
"They are not up to date."
In the old days a woman thought nothing of spending an afternoon turning out a batch of delicious
bonbons in her own kitchen. A bottle of natural flavoring, a bar of cooking chocolate, perhaps a stick of
cinnamon, and sugar were all mixed by hand in small quantities. Today the candy manufacturer has taken over
the job. Hundreds of thousands of soft creamy bonbons, chocolates with fruit and cream centers, chewy
caramels and succulent hard candies are turned out. The single bottle of flavoring must be multiplied hundreds
of thousands of times, too - with unexpected results.
Recently a man planning to open a candy factory sat down with pencil and paper to figure out just how
much of everything he would need. He was stunned to discover that it would take 30 pounds of natural
strawberry juice to get enough flavoring for only 100 pounds of candy. The number of men needed to handle such
quantities and the amount of space he would have to rent would make each candy cost more than a steak
dinner. What is more, enough natural juice to flavor an entire vat would make the mixture so liquid that it
would be impossible for the candy to form.
Luckily for those of us who like sweets, he found a way out - artificial flavoring. Only one half an ounce
gave the taste and smell of strawberries to the 100 pounds of candy.
The natural fruit is just as uncooperative when it comes to mass-produced ice cream and cake. An ice-
cream maker who used only natural strawberries or peaches would need so much juice and strained fruit that
the dessert would end up as water ice. A fruit-flavored cookie or cake would be likely to leave the oven as a
soggy mess. The taste would not be one for you to smack your lips over, either. Cooking in itself makes flavors
endure conditions they never met in nature. Many of them simply shrivel up and fly away when it gets too hot.
A cherry or banana cake made with the natural fruits would be tasteless, because these flavors vanish into thin
air when heated. Synthetics that thrive on extreme heat can be made.
"It is not always necessary to switch to synthetics completely," says a food processor. "We often use
some natural, reinforced by the highly concentrated imitation."
Every change in our food habits makes new demands on flavors. In your grandmother's day no
housewife could possibly dash home from an afternoon Parent-Teachers' Association meeting and prepare a
delicious dinner by popping frozen foods into boiling water, opening cans, and producing a cake from a
packaged mix. Today the shelves of every supermarket are stacked with canned foods, powders for gelatin
desserts and pudding, ice cream, baked goods, cake mixes, soft drinks, breakfast cereals, and specialties
ranging from canned chicken fricassee to frozen pizza.
Several months may pass between the time the manufacturer puts the mix into the package and the day
when the housewife opens it, adds eggs and milk, and whisks it into the oven. Some of the natural flavors do
not like to stand around. They are impatient and change - usually for the worse. A number of them become insipid; others turn bad. Nut flavors, for example, are apt to develop a rancid taste.
Another problem that must be solved by the manufacturer of processed foods today is that of making his product taste the same every time you buy it. Nature is full of surprises. Sometimes an apple is sweet, and sometimes it is tart. The amount of sunshine or frost or rain can change the taste of an entire crop. There are good years for grapes, and bad years. But no one will put up with a bad year for gelatin desserts. We expect every dish to taste exactly like the one before. The only way to assure this is to control the flavor chemically in the laboratory.
Many natural flavors are good in any season ... if you can get them. Poor crops, wars, revolutions,
earthquakes and floods remove many plant products from our shelves. At a number of times in our history
shipments of spices from the Far East were completely cut off. This was true during World War II, for example.
At that time the flavorings industry produced synthetic pepper, cinnamon, cassia and anise to fill the gap. They
were not quite so good as the natural; the bite of the seed was lacking. But they were better than nothing.
Artificial flavors also help the millions of Americans with steak tastes and stew budgets. The average
person today is accustomed to dishes that once were eaten only by the very rich.
"There are still luxury foods, like caviar and guinea hen," a food manufacturer remarks, "but there are
no longer any luxury flavors."
Whenever the price of a product mounts, chemists rush to their laboratories to make a substitute.
Nutmeg, for example, became scarce and expensive at one time; a synthetic quickly made its appearance.
The most obvious use for imitations is to conceal unpleasant tastes. Just as a good smell can mask a bad
one, a good artificial flavor can conquer a natural bad one.
The mother of a sick five-year-old was surprised recently when the doctor pulled out his prescription
pad and asked the child: "Well, which shall it be? Cherry, strawberry, or peach?"
If none of these had suited the patient he might have tried banana, chocolate, lemon, raspberry, mint,
cinnamon or even maple.
As people grow older they lose their enthusiasm for sweet, fruit-flavored medicines. The flavor of wine
or an after-dinner liqueur has been developed to suit sophisticated taste buds.
When you consider the uses of artificial flavors you probably think first of soft drinks, candy, cakes,
cookies and chewing gum. You are quite right to do so. How many soft drinks do you swallow in a year? If you
are typical of Americans today you down about 200. The largest number of these are cola drinks, with orange,
lemon and lemon-lime as the runners up. Between one half and one ounce of flavor is all that is needed for six
gallons of "pop." Some of these extracts do two jobs and color as they flavor; other soft drinks have color added
separately.
When your father was a child many drugstore soda fountains sold glasses of plain soda water for two
cents. Little boys often came in and ordered this drink from the clerk. When he had put the two cents in the
cash register they would beg, "Just put in a little flavor, please." Few clerks were able to resist.
More than four million pounds of flavoring have gone into the candy vats in the last few years. Orange,
lemon and lime are the most popular for hard candies, though such odd flavors as rose, violet and root beer are
used, too. When imitation chocolate is added, the candy maker needs just a touch of the expensive natural
flavor.
There is nothing modern about chewing gum, except for its flavor. Hundreds of years ago the Mayan
Indians of Central America and Mexico chewed the gum of the sapodilla tree, and the North American Indians
popped the hardened sap from the spruce tree into their mouths.
Both types were certainly chewy, but not very tasty. The first flavor added to gum was licorice. It was
quickly followed by spearmint, peppermint, wintergreen, cinnamon and all the others that we know today.
People looking for new taste sensations can chew a chocolate-flavored gum, carnation, or any fruit from banana
to blueberry. Chewing gum has come a long way since the heyday of the Mayans.
Smoking is another old habit treated in a modern way. You may have read in your history books about
the time a servant poured water over Sir Walter Raleigh in an effort to save his master from burning to death.
The English statesman, who was enjoying his cigar, was rather annoyed. Despite this famous story, Sir Walter
was not the first to discover the pleasures of smoking. Centuries before, the Indians of the West Indies and
parts of Central and South America used tobacco. The North American Indians rolled tobacco leaves to form
crude cigars.
Our cigarettes taste much better today. Tobacco leaves are treated with a combination of natural and
artificial flavors. The most widely used have the romantic names of "deer-tongue" and "tonka," which are plants
in the vanilla family. Since filter-tip cigarettes have become popular the flavors are made even stronger than
they were before.
When soldiers are stationed abroad, they are often surprised by the eagerness with which people of
other countries ask for American cigarettes. Flavor chemists insist that they are responsible.
Their skills are also welcomed by those who eat to grow thin. Americans today wish to be slender. Low-
calorie foods make it possible for them to diet painlessly, particularly when artificial flavorings are used. Many
natural flavors are not for the plump. Fruit juices, for example, contain some sugar. Most low-calorie foods
proudly boast that they have no sugar at all. Chemical sweeteners plus synthetic flavors do the job instead. One
of the best known of the sugar substitutes is "saccharine," which is made out of "toluene," a by-product of coal.
Most artificial flavorings have no calories at all. This is a good thing, as sugarless drinks need from 10 to 20
percent more flavor than those made with regular sugar.
Old food tastes, as well as new, are satisfied by the flavor makers. There are artificial flavorings that
meet the centuries-old rigid rules set down for the diet of Orthodox Jews. The making of these "kosher" flavors
is approved by rabbis, the Jewish religious leaders.
Each development in food processing brings a new use for synthetic flavors. An inexpensive butter
substitute is produced; a "butter" flavor makes its appearance. A few years ago a breakfast-cereal manufacturer
came up with a brand-new idea: why not coat dry cereal with flavored sugar? He called in the flavor makers,
who appeared with flasks of liquid caramel, butterscotch and fruit. Today you find the results on the shelves of
any supermarket.
Canned or frozen beef stew, ravioli and chicken a la king have been in the markets for some time, but
new research on these products is now being done.
"Much of the meat taste is lost in the processing," explains a flavor chemist. "And so we are trying to
create an artificial beef or chicken flavor that could be added to the sauce. Some of the workers in the laboratory
joke that we are preparing for a world in which beef stew will have no beef, and chicken a la king no chicken.
Our aim, however, is to make food more enjoyable for everybody, rich or poor."
Even the animal kingdom is not neglected. One day a flavor chemist visited the steamship docks and
watched the longshoremen unpacking the lead containers of anise oil from China. He noticed that each
container was badly scarred by the teeth marks of rats trying to get at the anise. This gave him an idea. He
rushed back to his laboratory and developed imitation anise flavor. This was then used in animal feed to the
great joy of the beasts involved.
(From The artificial world around us, by Lucy Kavaler)
INSIDE THE FLAVOR FACTORY
In the early years of the twentieth century a chemist took home with him each night a bulging briefcase
containing the formulas and reports on the work he was doing. He was afraid that if he left this top-secret
information in the laboratory somebody else might come upon it and steal it. The artificial-flavorings industry
was born in this atmosphere of secrecy.
For the past fifty years chemists and druggists have been trying to match nature's work in the
laboratory. The idea of putting a concentrated flavor into a bottle goes back much further in history than that.
The early flavors produced, however, were not synthetic. Your great-grandmother baked cakes and made
candies with syrups of natural vanilla or peppermint. Even Coca-Cola, which was developed in the 1880's, was
made out of natural products - caffeine taken from the cola nut and a tiny bit of cocaine from the coca leaf.
Flavor pioneers also hit on the idea of substituting one natural product for another with a similar taste.
In this way "maple syrup" flavor was discovered in the seed of a plant called "fenugreek," which was plentiful
in India, Iran, Arabia and Greece.
As the science of chemistry advanced, chemists and druggists began the difficult task of finding
chemical substitutes for natural products. One chemist noticed that a compound called "amyl acetate" smelled
like a banana. He combined it with some other chemicals and came up with a crude but recognizable banana
flavor. Another observed that "methyl anthranilate" somehow reminded him of grape, so he used that as the
main ingredient for an artificial grape flavor. Long afterward, with improved equipment, scientists learned that
these early workers in the field had stumbled on some basic truths. Amyl acetate really does exist in the natural
banana, and methyl anthranilate in grapes.
"In order to make synthetic flavors we must look into the depths of the essential oil or the fruit juice or
extract and discover all the chemical compounds that are present," declares a research chemist.
How is this done? Some of the compounds are unmasked by means of fractional distillation. This is a
heating process in which one compound after another reaches its boiling point. As chemists know the boiling
points of each major chemical, this gives them an important identifying clue.
The natural product is also studied under infrared and ultraviolet light. The newest and most important
piece of equipment for the flavor detective has the impressive name of "vapor phase chromatograph." In this
apparatus the material being studied is changed into the form of gases. The gases are then separated and
analyzed, and the findings are recorded on a graph.
With these tests 47 different chemicals have been discovered in tea and nearly 100 in coffee. In theory
the flavor chemist simply needs to combine these chemicals in the laboratory, and presto! he has the perfect
apple, grape or pepper flavor. Sometimes it works that way, but not always. In some cases the total taste is
different from the sum of its parts. The identified chemicals are duplicated and put together, and the end
product still does not taste right.
"We have found 29 compounds in apple, and still no one has been able to combine them in the way that
they are put together in nature," reports a chemist sadly. "Perhaps the apple contains tiny amounts of other
chemicals that have slipped past the laboratory apparatus."
Despite the discovery of most of the chemicals involved, the flavor of coffee still escapes duplication.
Cocoa remains hard to reproduce. Nut flavors, and such specialty items as mushroom, stump the flavor
chemists.
The research man must produce not only the right taste, but also the right smell. That is a basic part of a
flavor, as anyone who has ever had a cold knows. Only sweetness, saltiness, sourness and bitterness are
actually tasted.
"The more subtle part of the taste is recognized by the sense of smell," explain chemists at Givaudan-
Delawanna.
It is not just a coincidence that most of the smell manufacturers make flavors, too.
Just to complicate matters, the odors of the flavor-making chemical compounds keep changing as they
are combined with other elements. One chemist reports that he started an experiment with a chemical that
smelled like rancid butter or cheese. Luckily, he was the type of person who is not easily discouraged. He
carried on and eventually produced a fresh-strawberry flavor-smell. Another compound that resembles violet
perfume is essential in making a successful raspberry flavor-smell.
In the last few years it has been discovered that there are a number of different ways of making the
same artificial flavor. Chemical A and Chemical B are combined, and peppermint flavor appears. Another
researcher tries putting Chemical C and Chemical D together - and peppermint is there again. Even more
surprising, every so often a chemist makes up an imitation flavor that tastes exactly like the real thing, and
discovers that it does not contain the chemical compounds so painstakingly discovered in the laboratory
analysis of the natural product. Some of the chemicals used to make good imitation peach and strawberry
flavors, for example, have not been found in the true fruit extracts, and yet they give the same effect. This is one
of the mysteries still to be solved by the scientist.
If you visit a flavor laboratory you will see a chemist trying hundreds or even thousands of
combinations of the known ingredients, plus a few others he is just guessing at. He makes up a small batch,
smells it, and tastes it. Then he smells and tastes the real apple or sips the apple juice. Is he close? If not, he tries
again, making a slight change.
How can he be sure he has succeeded?
The great German poet Johann Wolfgang von Goethe once commented: "One must ask children and
birds how strawberries taste."
No flavor laboratory has ever tried to communicate with birds, but some do ask children for their
opinions on new flavors. One chemist invited his son's schoolmates over to his home one day for a surprise
party. The surprise was a new watermelon ice cream. Another passed around an after-school treat of root beer
flavored candy. All the youngsters needed to do was to say whether they liked it.
Most taste panels, however, are made up of adults only. There are two types of panel: one composed of
experts, and one of untrained people. Many flavors are tested by both. You may wonder how a person becomes
a taste "expert." As a rule, he works in the food industry and has developed a keen sense of taste and smell. One
major food company forms the panel with one highly trained chemist and 50 other employees.
In one recent test a number of foods with new flavors added were placed on a long table in an air-conditioned, odor-free room. The milder flavors were tested first. Each member of the panel took a spoonful of a solid food or a sip, in a brandy glass, of a liquid. He then sampled the naturally flavored food which was being copied. In some cases he was told which one was artificial; in others he was asked to guess. After each taste the person judging took a drink of ice water and ate a plain cracker. Then he went on to the next flavor. Individual booths were offered to judges who felt that they might be distracted by the presence of others.
The panel of people who are not experts does its testing under somewhat less formal conditions. The group may simply sit around a table in the test kitchen of the laboratory. This panel is made up of at least ten men and women. An effort is made to gather people of different ages, as it has been
found that tastes change over the years.
When a youngster refuses an olive or makes a face at a salad doused in vinegar, someone is likely to
smile and say: "Those tastes are cultivated. He'll like them when he gets older."
This is true of many bitter and sour foods. There are some other changes in food tastes that cannot be
explained so easily. Children like licorice. When they reach the age of eighteen, somehow the taste for this
pungent flavor is lost. The surprising thing, however, is that people over fifty like licorice again.
The members of the panel taste the food with the new flavor and then try to answer the question: What
is it? This sounds easy. But many times even professionals are unable to identify a flavor correctly. Try to
describe what is in juicy-fruit gum or Coca-Cola. They are good, but what are they?
The second, and most important, question is: Do you like it?
The artificial flavor is then compared with the natural. Many imitations do not pass the comparison test,
but go to the head of the flavor class anyway. Some of the most popular man-made flavors are not at all like
anything natural, but everybody enjoys them.
"Cherry" candy or soda could never fool anyone into thinking that the flavor came from the dark, juicy
fruit. It is not supposed to. Cherries, like many other fruits, have a very bland flavor. Bite into a cherry. The
feeling in your mouth is wonderful; there is a vague sense of sweetness, a faint aroma - that is all. Take away
the texture, and very little is left. The accurate imitations of cherry are uninteresting and weak. What is
commonly known as "cherry" flavor is actually an imitation of maraschino cherries. These are cherries which
have been soaked in maraschino liqueur, or, to go a step further, in a liquid that is a chemical imitation of the
liqueur. Whatever its basis, this artificial cherry flavor is one of the most popular in the world of tastes. A large
flavor manufacturer states that he has had to develop several thousand formulas for cherry alone. Each is a
little different from the others, depending upon the use for which it is intended, and the taste and pocketbook
of the person who is going to eat the cherry cake or candy. Thousands and thousands of flavors have come out
of the test tubes in the last few years. You can get artificial gooseberry or butter or hickory nut or coriander or
brandy or American cheese or crab apple or quince or passion fruit or eggnog or watermelon.
"I produced a perfect string bean flavor," comments a chemist, "only nobody wanted it."
What to do with the artificial flavor is also important. The chemist and the home economist in the
laboratory's test kitchen put their heads together. Will tutti-frutti be good in chewing gum or pudding? Would
anyone swallow a loganberry carbonated drink? Some artificial flavors are tasty in practically anything from
crackers to ice cream; others are suitable for soup or for gelatin only.
Company employees have a way of dropping into the test kitchens around four o'clock in the afternoon.
They are "doing a bit of testing," they say, brushing away the crumbs.
"These snacks are risky, though," warns a home economist. "The cantaloupe-flavored cookies looked
tempting, but somehow tasted like straw."
"We develop flavors and nonflavors, too," declares a chemist.
This is not just mumbo jumbo. A nonflavor or "blender," which is the correct chemical term for this, is a
flavoring material which improves another flavor, without adding anything of its own characteristic taste. That
is how monosodium glutamate, which you probably know as Accent, makes all sorts of foods taste better. Tiny
amounts of peach, peppermint and almond oil also give a helping hand to a wide variety of flavors which do
not taste a bit like peach, mint or almond. Salt also helps foods which could never be described as "salty."
Flavors must not only be good, they must also be safe. In order to be approved for use by the Food and
Drug Administration of the United States Government, each extract must be tested on animals. Guinea pigs,
rabbits, rats and dogs eat large amounts of food flavored with the chemicals that imitate spearmint, nutmeg or
wild cherry. The tests usually go on for one or two years. This is not always as hard on the animal as it sounds.
A few years ago the Food and Drug Administration found that coumarin, a flavor used in imitation
vanilla extracts, was harmful. Chemists set to work to discover a synthetic with the same taste, but without the
bad effects. The rats whose food was flavored with the new vanilla substitute grew bigger and heavier than
their brothers with unflavored feed. It was not that the food was more nourishing, but that it tasted so good
that the rats kept coming back for second and third helpings. These same animals would then have been
suitable candidates for tests on low-calorie foods.
"This type of guinea-pig research is elementary compared to some of the projects now getting under
way," states a scientist. "Until recently, we have been trying to identify and duplicate the chemicals in natural
flavors. The next step is for us to unveil the mystery of the way flavor is produced in nature. At what stage in
its growth does a carrot begin to taste like a carrot? And why?"
Living matter is made up of cells. Inside are chemical compounds, called "flavor precursors," that are
destined to become flavors. For them to do this, they must be made active by the presence of other chemical
compounds known as "enzymes." Unfortunately, the enzymes are very fragile and disappear when food is
prepared for canning, freezing or dehydrating. The precursors remain, but without the enzymes they are
helpless. That is why so many processed foods do not taste good.
Scientists at the Evans Research & Development Corporation have worked out a way of making natural
flavors in an unnatural way in the laboratory. They take the enzymes from the apple or pea - or, if they are
being economical, from the apple core or pea pod - and put them back into the fruit or vegetable after it has been
processed. The system is artificial; the flavor is natural.
In time scientists may learn to play some tricks with these enzymes. Spinach with corn-on-the-cob
enzymes added, rhubarb with strawberry enzymes, squash with asparagus enzymes - these are just a few of the
thoughts that come to mind. Whether these changes are possible depends upon the chemical composition of the
flavor precursors, and this is now being studied.
A similar approach is being taken to the problem of meat flavors. Scientists have found two chemicals
in uncooked meat which produce flavors when they are heated. The work is still experimental, but chemists
believe that these chemicals could be added to anything from bread to dandelions before cooking. The heat will
bring out the taste and smell of lamb chops, steak or ham.
These chemicals might also bring back the flavor to meat that has been stored for a long time. Research
by the food industry and the Army has shown that meats stored under refrigeration for three years are still
edible, but the taste is gone. The scientist plans to make it return. Anything seems possible in the space age.
Artificial flavorings will soar aloft with our space pilots. Ordinary food, such as sandwiches, is suitable
only for fairly short flights. As astronauts' journeys get longer, semisolid foods squeezed out of a tube, food
concentrates and liquid foods will be needed. All of these can benefit from the addition of flavors from earth-
bound laboratories. For really long space trips to Mars or Venus our travelers will have to grow their own food.
Algae, the most primitive plants on earth, can be grown in water tanks on board space ships. They may become
the staff of life in outer space. The natural flavor is much like dried lima beans. Artificial flavors or enzymes
from tastier foods will have to be added. The first colonists on Mars could then have orange juice algae and
ham and egg algae for breakfast, tuna fish algae for lunch, and move on to steak algae, potato algae and peach
algae for dinner.
Algae may also be used as food right here on earth someday. The fast-growing population of the world
will need to tap food sources that are ignored at the present time. In addition to algae, your descendants may
consume plankton, the tiny fish and plant life that floats in the sea. Yeasts that could be used as food can be
gotten from the wastes of paper mills or grown in sugar solutions. These could be substitutes for the roast beef,
potatoes and apples of today.
People will not change just because there are more of them. They will still want food to taste good. The
familiar flavors will, therefore, come out of the laboratory and be added to the unfamiliar foods of the future.
The day may come when the foods that form the staples of our diet are used up. But the flavors will live
on.
(From The artificial world around us, by Lucy Kavaler)
LET US HAVE "NOTHING" TO EAT
Question: How can you eat without eating?
Answer: When you consume food that is not food. This sounds like a nonsense riddle, on the order of
"What eats people, is purple, and can fly?" For anyone who has missed that one, the answer is "a flying purple
people-eater." Unlike the flying purple people-eater, however, this answer happens to make a great deal of
sense. You have read how synthetic flavors and smells have made it impossible for you to trust your tongue or
your nose. Now science is making it impossible for you to trust your eyes and your stomach.
A synthetic food has been produced that is not food at all. It has no taste, no smell, no vitamins or
minerals. It has no energy-producing calories. To sum up, it
provides you with no nourishment at all. What does it do? It fools
you into thinking that you have eaten a full-course dinner. Your
appetite is satisfied; your stomach feels full. You may wonder why
anyone would want to play such tricks on himself. After all,
artificial food will not give you the energy to play football or go
swimming. Millions of Americans, however, are watching their
waistlines. Nobody from the five-year-old in kindergarten to the
eighty-year-old grandmother likes to be called "Hey, Fatty."
Everyone wants to have the beautiful figure of a television star or a
crack athlete. Vanity is not the only reason why people try to be
slim. Doctors have some harsh words to say about fatness. Stout
people are likely to develop heart conditions and other physical
ailments.
As a result, plump members of society are continually going on - and off - diets. One month they expect
great things from a steak diet in which only red meat is eaten three times a day; the next month they switch to a
liquid diet and drink their meals; then a "crash" diet of bananas and skim milk is tried. Few men and women
have the will power to stick to these diets for very long; they are too hungry for food that will stick to their ribs
instead. The fact is that most people who are stout get that way because they like to eat - early, often and too
much. They follow the doctors' wise suggestion to eat less of everything in much the same way that students
observe their teachers' urging that they start the book report a week, instead of an hour, before it is due.
Scientists have been worrying about the problems of their overweight brothers and sisters. For many
years they have been hunting for something that will give the same effect as food without containing its
nourishment.
Some researchers set themselves the task of eating objects that could not, by the wildest stretch of the
imagination, be called food. One intrepid experimenter downed powdered coal, rubber, sand, glass beads, steel
balls and feathers. You can see that the man must have been unusually healthy to survive these trials. None of
the solids he tried, however, turned out to be really successful substitutes for steak, ice cream, or cream-cheese-
and-jelly sandwiches. People who watched him shuddered, and wondered if he was on the wrong track.
Perhaps what was needed was a test-tube product -something completely new, still to be discovered.
As it turned out, the product that eventually proved suitable is anything but rare. It is one of the most
plentiful substances on earth - as plentiful as trees or grass or cotton or jute or linen. In fact, it is a part of trees
and grass and cotton and jute and linen and many other plants. All of these are made up of a substance called
cellulose. Cows eat it; termites eat it; but until recently man used it to make such things as rayon and
cellophane.
People have been trying to eat cellulose for years, but for a long time no suitable way of handling it
could be found. One man, for example, took some surgical cotton, chopped it up very fine, and flavored it with
fruit juice. He reported that after eating this he felt completely satisfied for many hours. As you can imagine,
not many people were eager to copy his recipe.
A more tempting form of cellulose was discovered only recently, almost by accident, in the laboratories
of the American Viscose Corporation by a scientist named Dr. O. A. Battista. The company makes rayon and
cellophane, and cellulose is the raw material for both.
Dr. Battista was at work trying to develop a particularly strong rayon tire cord. He believed that this
could be done by breaking down the cellulose into tiny fragments. In one such trial he placed some cellulose in
water in an ordinary electric blender of the type that you may have in your own kitchen. The smallest bits of
cellulose were expected to fall to the bottom where they could be separated from the water and then used to
form a cord.
A quarter of an hour later the scientist discovered that the blender was completely filled with
something that resembled a thick white custard. It did not look like the raw material for a tire cord; it looked
like something to eat. The laboratory was equipped with an oven. Dr. Battista spooned the custard out of the
blender and made cookies and sauces with it. These were the first examples of the food that can be eaten, but
that is not food.
The company soon began to make this product, which was named Avicel, in larger quantities. This was
not difficult to do, as the rayon you wear, the cellophane you wrap packages with, and the new nonfood are
made of the same raw material - wood. Even the early stages of preparation are the same: The logs are chopped
up and treated with chemicals to make wood pulp from which the cellulose is obtained. The only difference is
that the cellulose planned for the dinner table is carefully purified.
To make Avicel in bulk, Dr. Battista's method is used, only on a bigger scale. The raw cellulose and
water are put into a huge blender. The cellulose is broken down into the tiniest particles you can imagine. They
range in size between a minuscule 0.000039 and 0.0001560 of an inch.
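For a sense of how small those figures are, a metric conversion (not in the original; 1 inch = 25,400 micrometers) puts the particles at roughly one to four micrometers across:
\[
0.000039\ \text{in} \times 25{,}400\ \tfrac{\mu\text{m}}{\text{in}} \approx 1\ \mu\text{m}, \qquad 0.0001560\ \text{in} \times 25{,}400\ \tfrac{\mu\text{m}}{\text{in}} \approx 4\ \mu\text{m}
\]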
If you looked at the finished product you would never guess that it came from a tree. The Avicel is
made in two forms. One is called a "gel," which is the technical way of describing the custardlike substance Dr.
Battista saw that first day. To be exact, this nonfood looks much more like a fluffy white cold cream than a
custard. The other form of Avicel is a very fine flour with the consistency of face powder.
Before presenting Avicel to weight-watching Americans the scientists had to make sure that eating this
cellulose would not harm anyone. After all, humans do not have the same digestive systems as termites or
cows. The first tests were made with rabbits, mice and monkeys. These creatures were put on diets of half
Avicel and half ordinary food. They emerged from the test in perfect health. They were also, incidentally, slim
and trim, lithe and limber. A number of stouthearted men and women then agreed to act as guinea pigs. About
one-sixth of their daily food intake was limited to Avicel. No ill effects were reported.
How did they eat the artificial food? They had Avicel cookies made with scarcely any flour, fat or sugar.
They spread butter-flavored Avicel on bread. Sweetened and whipped, it masqueraded as whipped cream.
"It would be theoretically possible for a person on a cellulose diet to starve to death, without ever
knowing that he was hungry!" exclaims a home economist dramatically.
That would be carrying the dieting principle a bit too far. In most cases Avicel is mixed with foods that
are normally very fattening. The principle is simple and logical. You take away some of the real food and
replace it with the artificial food. The hearty eater, working his way through a second helping of banana cream
pie, is really eating less than half of what is on his plate - in terms of nourishment, that is.
In its powdered state Avicel can be substituted for nearly half of the flour needed to bake a cake or a
loaf of bread. As a gel it can replace much of the butterfat in ice cream or the oil in salad dressing. A hot fudge
sundae with whipped cream on top can be almost as kind to the waistline as grapefruit. Candy bars, cheese
cake, spaghetti, potato pancakes, mayonnaise, hollandaise sauce and cheese dips can be served to the most
determined dieter.
Could he tell the difference? If one person is eating food and another is dipping his spoon into nonfood,
would anyone know which was which? In an effort to find out, the "blindfold" game was played by the makers
of Avicel. Testers equipped with a sweet tooth were asked to judge a chocolate layer cake. Although it was
called a blindfold test, the people were not really blindfolded and fed; it was rather that the cakes were
unmarked. None of the testers was able to detect the doctored cake. The artificial food has no flavor or smell of
its own, and it does not change these qualities in the real foods to which it is added.
The new nonfood can perform a number of interesting feats, too. What would you think of sprinkling
peanut butter out of a shaker onto your bread? There may come a time when scooping the sticky paste out of a jar
with a knife will appear old fashioned. Avicel can absorb fats and oils, and give butter and peanut butter the
consistency of grated cheese. Cellulose can work the same magic on syrups, providing molasses, honey or
maple syrup to be sprinkled, not poured, onto your pancakes. That is quite an accomplishment for a product
that is made from a stick of wood.
It is hard to believe that people can eat cakes and candies which contain the same ingredient that is
found in sawdust, rayon and cellophane. One day a little boy, caught with his hand in the cookie jar, may tell
his mother: "But I wasn't going to eat any food." And that would be the truth.
(From The artificial world around us, by Lucy Kavaler)
WHAT IS HAPPENING TO THE STEEL AGE?


At the end of a laboratory experiment some time ago a
chemist found himself left with a compound which he simply
could not get out of the glass container. The steel stirring rod that
had been used to mix it was firmly stuck in the center, and could
not be budged. At last the scientist gave up and smashed the glass
container. The lump of plastic with the steel stirring rod sticking
out of it looked like a hammer, so he idly began to use it as one. To
his amazement he discovered that this "hammer" was unbreakable
no matter how hard he hit with it.
Samples of this material were then produced. A laboratory
technician picked one up and threw it onto the floor with all his
might. Nothing happened. He hurled it against the wall, and it
landed with a satisfying crash.
Then he took a hammer and went to work on it. After
awhile, he admitted defeat. He could not even dent the sample.
This plastic was the first of a new group of synthetic materials. Discovered only recently, it is already
beginning to replace metals in some of their traditional uses. The boundaries of the artificial world around you
are expanding to include metals, in addition to fibers, food, smells and flavors. A new age is dawning - the age
of plastics.
Plastic automobiles, airplanes, space ships, tractors and skyscrapers may yet become commonplace.
They would look a little different from the iron, steel and aluminum we are used to today. After all, who has
ever seen a transparent metal? Or a purple one? The new synthetics can be either transparent or opaque, which
is the opposite, and they can be any color you have ever seen. Whatever they look like, they will do what metals
can.
Many people find this hard to believe. "But plastic is so breakable" is the most common complaint.
Four-year-old Johnny barely touched big brother's plastic airplane, and the wing broke off. He walked into the
playroom and stumbled onto the plastic gun. You hardly need to look at it to know the bad news. But what can
you expect of plastic? If you want something strong, you need to get metal.
This way of thinking could soon go out of style. By the time you are buying toys for your children you
may be saying: "Let's get something that will last. Let's get plastic."
Careful experiments with samples of the new synthetic bore out the results of the early catch-as-catch-can
games. The scientists took a steel ball and dropped it from a height onto a shelf made out of the plastic.
Nothing happened. They tried it again and yet again. After ten tries the shelf was still unmarked. Then they
tried loading the shelf with heavy weights. It supported these with ease. In another experiment a bullet was
fired at the sheet of plastic. It did not splinter.
This synthetic is a new member of an old plastics family, known as "thermoplastic resin." This means
that it is a solid substance which becomes soft when heated and hard when cooled, in much the same way that
metals do.
The new strong resins which can compete with metals were developed in the late 1950's and made their
commercial debut in 1960. They are the end result of a long line of research that can be traced all the way back
to Russia in the days of the czars. In 1859, a university professor named Alexander Butlerov began to
experiment with the chemical, formaldehyde. You are probably familiar with formaldehyde. It is commonly
used to preserve specimens in hospitals and zoology laboratories. Butlerov and the scientists who followed him
had quite a different plan for the chemical. Their aim was to change it into a strong, solid mass. In the late
1940's, E. I. du Pont de Nemours & Company set its chemists to work. Twelve years later and $50 million
poorer, the company announced success.
What the scientists did was to change the basic chemistry of formaldehyde. You have surely read about
atomic energy, and know that all matter is made up of atoms. Two or more atoms form a molecule. The
formaldehyde molecule consists of one carbon, one oxygen and two hydrogen atoms arranged in this pattern: H2C=O.
The chemists forced these molecules to become linked together in a long chain, containing more than
1,000 single molecules, like this: ...-CH2-O-CH2-O-CH2-O-...
This chain, which is really a giant molecule, is called a "polymer," and the process of making it is called
"polymerization." These chains of molecules form an extremely strong, solid mass, different from other plastics.
"You might describe this as the `missing link' between plastics and metals," comments a chemist.
The thermoplastics made out of formaldehyde are technically described as "acetal" or
"polyformaldehyde resins." You are more likely to get to know them by their trade names. Du Pont calls its
resin, "Delrin," and the Celanese Corporation of America, which makes a similar product, calls it "Celcon."
Oddly enough, scientists traveling another research road arrived at the same destination. They made
metal substitutes out of completely different chemicals. A chemist in the laboratories of the General Electric
Company one day happened to notice a flask of a compound called "bisphenol A" on the shelf. It occurred to
him that combining it with the gas "phosgene" might bring an interesting result. It did: a long, strong chain of
molecules. Not quite able to believe that he had made an important discovery so easily, the researcher tried the
experiment again and again, using more than 100 other compounds. None of them worked as well.
This second type of thermoplastic resin is known to scientists as "polycarbonate resin." The General
Electric Company's "Lexan," and the Mobay Chemical Company's "Merlon" belong to this group.
"The new plastics act so much like metals that sheets, bars and rods made out of them can be handled
on standard machine-shop equipment," explains an engineer.
These synthetics are gradually entering the fields where
metal has been king.
As its name implies, hardware must be hard. Until just
recently this meant it was made of metal. Nowadays not only your
door knob or appliance handle can be made of plastic, but even the
door hinge and the light switch have escaped from the domination
of copper, brass and steel.
Everybody knows that the conveyor belts used in factories
run noisily on steel ball bearings. That has always been true. Now,
sometimes it is, but sometimes it is not. If you have a chance to visit
a modern factory try to listen before you look. Should you happen
to notice that a machine or conveyor is working particularly
silently, chances are that the steel ball bearings are gone. In their
place are rings made of plastic. Another possible explanation for the surprising degree of quiet in some up-to-
date canneries and bottling plants is that the steel balls are enclosed in a ring of plastic. A number of equipment
designers have gone yet a step further. They are using the new resins to replace steel in the actual links of the
conveyor belts or chains. As most of these plastics are slippery, it is not even necessary to oil them. While no
one suggests that you can hear a pin drop in the most modern factory, at least you do not need to shout in order
to be heard above the din made by the machinery.
Since the first Model T Ford came off the assembly line, steel and other metals have had control over
the automobile. Synthetics took a back seat, being used in upholstery and things of that type. The makers of the
new plastics were not satisfied to leave it at that. They pointed out that these materials could go down the same
old assembly line. The resins were treated to what an engineer describes as "torture testing," to see if they could
be used in the actual mechanism of the car. The plastic gearshift of a speedometer was forced to run for 100,000
miles at a speed of 110 miles per hour. At the end of that time it was still fresh and ready for more.
"In its first 12 months of use Delrin replaced metals in 44 different places in an automobile," spokesmen
for du Pont report proudly. "In the second year, the figure climbed to 80."
The instrument panel on a number of new cars, for example, is now made of Delrin, instead of zinc. In
metal, the panel weighs 9 pounds; in plastic, only 2.
The hot Texas sun beating down on the instrument panel will not affect it, nor will the cold
temperatures of Alaska. These plastics do not become brittle, even when the mercury in the thermometer drops
to 40° below zero, Fahrenheit. On the other hand, they are quite indifferent to temperatures of 180° F. and can
stand a blistering 250° F. if it does not continue too long. There is even one form of the new synthetic which
does not get out of shape until the heat reaches 280° to 290° F.
This quality proved very useful when a worker melting metal with a soldering iron proved to be all
thumbs. He dropped the white-hot iron. In his confusion he stepped on it. Luckily, his foot landed on the
handle. His foot was not burned, and what is more, the handle was not broken. As you have probably guessed,
the handle was made of Lexan.
It is very hard to set any of these plastics on fire. In one experiment a sample of Merlon was put into an
electric furnace. The heat was raised to 400° F., then to 700°, then to 800°. It was not until the temperature
reached a staggering 1058° F. that the material burst into flame. As soon as the heat was withdrawn, however,
the fire went out. There are not many products - either natural or artificial - that could do as well.
Scientists seem determined to turn everything upside down. They are making unnatural or synthetic
products that are able to withstand the forces of nature better than the natural ones can.
Farmers often complain that the cast-iron parts of agricultural machinery get rusty and worn as a result
of rain, sleet, snow, sunshine and wind. They become caked with dirt quickly, too. Plastic, on the other hand, is
so slippery that the dirt slithers off it.
"And who ever heard of a rusty plastic?" asks a farmer cheerfully.
What is more, even insects, rats and mice want to have nothing to do with these synthetics and leave
them strictly alone.
The same is true of water. That is hardly a new idea. We take it for granted that plastics are waterproof.
It is, after all, a plastic finish that makes your raincoat keep you dry. Now the users of the metal-like synthetics
are taking some of these well-known properties and giving them a brand-new twist.
"Wouldn't this waterproof quality be handy on a pump that has to be submerged in water?" suggests an
engineer brightly.
Just because pumping units are usually made of brass does not mean that they always need to be. A
plastic pump was produced and plunged beneath the water. It ran for 1,800 hours without a break and showed
no signs of wear. In another test the pump worked continuously for one year at a temperature of 140° F. This is
about the amount of use a pump would normally get in nearly twenty years. At the end of the test the machine
was still in excellent condition. It is not surprising that you can now find the plastic on rugged marine bilge
pumps.
Even the fish in our streams and rivers are in greater danger since the new plastic was discovered. It can
replace the aluminum usually used for the reel and frame of a fishing pole, and make them completely salt-
waterproof. In its first year 500 industrial uses for Delrin were found, according to du Pont. More than 400 of
these were as replacements for metals. Which ones? In 130 cases the plastic took the place of iron and steel; in 80
it served as a substitute for aluminum; in 60 for zinc, and in 60 for brass. And this is only the beginning of the
age of plastics. The new synthetics will in time become essential to mankind. Industry eats metals greedily.
Every year it takes more. The population of the world is growing at the almost unbelievable rate of one person
per second. Each of these people needs goods made of metal. In addition, parts of the world which were until
recently backward are developing big industries. In other words, they are becoming metals' users.
Sooner or later the earth's metal supply will become too small to fill mankind's needs. Substitutes for
metal will help to provide your descendants with cars, airplanes, washing machines and television sets.
If you look far enough into the future it is even possible that metals could disappear altogether and be
forgotten, except by writers of history books. Someday children might be urged to eat their spinach, in order to
have muscles "as strong as plastic."
(From The artificial world around us, by Lucy Kavaler)
DIAMONDS OF THE LABORATORY
Workers leaving the diamond mines of South Africa are carefully searched before they are allowed to go home. The mine
owners fear that some of the precious gems will be smuggled out,
hidden in the clothing, or even inside the mouths, ears or noses of
the miners. For generations men have dug deep beneath the
earth's surface, searching for the stones that can glow with a
thousand inner fires. History's pages are alight with this blaze: the
Spanish conquistadors proudly returned home from Mexico and
Peru with a wealth of emeralds; a cardinal who was out of favor
bought a diamond necklace for Queen Marie Antoinette; a diamond was used as security for a loan from the
Netherlands in the days of the French Revolution. The crowns and scepters of hundreds of royal dynasties have
been encrusted with precious gems. Pirates have sailed the seas, stealing and then burying treasures of gold
and jewels.
Legends and stories have grown up around precious stones - from Aladdin's cave in The Arabian Nights
to the curse on the person who dared to wear the Hope Diamond. In ancient times diamonds were supposed to
have the power to prevent insanity. In periods of violence and treason many people believed that wearing a
diamond would work as an antidote to poison. In the Middle Ages possession of this jewel was supposed to
keep the peace between husband and wife. That custom, only slightly changed, is still in effect. The diamond in
the engagement ring is a symbol of everlasting love.
Rubies, emeralds, diamonds and sapphires remain the outward signs of wealth and success. In a
changing world this does not change. A few years ago a popular song was entitled "Diamonds Are a Girl's Best
Friend." The lyrics were simply restating the well-known fact in a witty way.
Take away the legends and glamor that surround precious gems, and what do you have? Minerals in
the form of crystals. That does not sound very pretty, nor do the crystals look pretty at first glance. Until they
are cut and polished the fire within is hidden.
About a hundred years ago a South African child was sent out on the veldt or plain to gather wood. He
saw a glittering pebble lying on the ground. The boy picked it up and put it in his pocket, where it rattled
around along with pieces of string, bits of candy and old spelling lists. A few days later he took out the pebble
and gave it to his little sister as a toy. A keen-eyed visitor noticed the stone one day, and asked for it as a gift,
without telling the innocent children why he wanted it. He later sold it as the genuine diamond which it was.
Although children from that day to this have been bringing home shiny stones in the hope that they
were diamonds, few if any have succeeded in finding the real thing. Jewels are called precious gems because
they are as rare as they are beautiful. That is what makes them so valuable.
As everyone, rich or not, likes to have beautiful things, imitations of the lovely crystals must serve as
the poor man's Hope Diamond. Costume jewelry is not a twentieth-century fashion. Back in 3000 B.C. there
were necklaces, bracelets and brooches containing early substitutes for precious stones.
Over the centuries many kinds of substitutes have been tried. Whether you go into the five-and-ten-cent
store or the most expensive shop in town, you can find copies of every single precious jewel. The cheapest are
known as "imitation" jewels. These can be made of anything at all. They simply resemble the natural gem. They
are not like it in terms of hardness, weight, brilliance or chemical composition. Glass is used most often, with
coloring added when an emerald, ruby or sapphire is to be imitated. There are also a number of common
natural stones which look something like valuable ones and can be used as substitutes. These are set in slightly
more expensive costume jewelry. In the next price range you find jewels which are real, but which are not in
their natural form. Tiny fragments of real stones, left over when large emeralds or sapphires are cut, are treated
to make them stick together.
Jewelers, customers and scientists have not been fully satisfied with these substitutes.
"It should be possible to make a precious stone that not only looks like the real thing, but that is the real
thing," said a chemist many years ago. "The only difference should be that one crystal would be made by man,
the other by nature."
At first this did not seem like a particularly hard task. Scientists began to try making synthetic
diamonds toward the end of the eighteenth century. It was at this time that a key scientific fact was discovered:
diamonds are a form of carbon, which is a very common element. Graphite, the black mineral that is used for
the "lead" in your pencil, is made of it, too. The only difference, we know today, is that the carbon atoms have
been packed together in a slightly different way. The chemists were fired with enthusiasm: Why not change a
cheap and plentiful substance, carbon, into a rare and expensive one, diamond?
You have probably heard about the alchemists who for centuries tried to turn plain lead or iron into
gold. They failed, because gold is completely different from lead or iron. Transforming carbon into diamonds,
however, is not illogical at all. This change takes place in nature, so it should be possible to make it happen in
the laboratory.
It should be possible, but for one hundred and fifty years every effort failed. During this period,
nonetheless, several people believed that they had solved the diamond riddle. One of these was a French
scientist who produced crystals that seemed to be the real thing. After the man's death, however, a curious
rumor began to go the rounds. The story told was that one of the scientist's assistants had simply put tiny
pieces of genuine diamonds into the carbon mixture. He was bored with the work, and he wanted to make the
old chemist happy.
The first real success came more than sixty years later in the laboratories of the General Electric
Company. Scientists there had been working for a number of years on a process designed to duplicate nature's
work. Far below the earth's surface, carbon is subjected to incredibly heavy pressure and extremely high
temperature. Under these conditions the carbon turns into diamonds. For a long time the laboratory attempts
failed, simply because no suitable machinery existed. What was needed was some sort of pressure chamber in
which the carbon could be subjected to between 800,000 and 1,800,000 pounds of pressure to the square inch, at
a temperature of between 2200°F and 4400°F.
Building a pressure chamber that would not break under these conditions was a fantastically difficult
feat, but eventually it was done. The scientists eagerly set to work again. Imagine their disappointment when,
even with this equipment, they produced all sorts of crystals, but no diamonds. They wondered if the fault lay
in the carbon they were using, and so they tried a number of different forms.
"Every time we opened the pressure chamber we found crystals. Some of them even had the smell of
diamonds," recalls one of the men who worked on the project. "But they were terribly small, and the tests we
ran on them were unsatisfactory."
The scientists went on working. The idea was then brought forward that perhaps the carbon needed to
be dissolved in a melted metal. The metal might act as a catalyst, which means that it helps a chemical reaction
between two other elements to take place.
This time the carbon was mixed with iron before being placed in the pressure chamber. The pressure
was brought up to 1,300,000 pounds to the square inch and the temperature to 2900° F. At last the chamber was
opened. A number of shiny crystals lay within. These crystals scratched glass, and even diamonds. Light waves
passed through them in the same way as they do through diamonds. Carbon dioxide was given off when the
crystals were burned. The density or weight was 3.5 grams per cubic centimeter, as is true of diamonds. The
crystals were analyzed chemically. They were finally studied under X rays, and there was no longer room for
doubt. These jewels of the laboratory were not like diamonds; they were diamonds. They even had the same
atomic structure. The atoms making up the molecule of the synthetic crystal were arranged in exactly the same
pattern as they are in the natural.
The scientists did not rush out to the jewelers to have these crystals set in rings or tiaras to rival Queen
Elizabeth's. Instead, they rushed the laboratory-made diamonds into industry, which is using millions of carats
a year. Diamonds are harder than any other substance on earth. In the days before X rays and microscopic
analysis people used to decide whether a gem was a diamond by attempting to scratch their names with it on a
windowpane. If that worked, they went on to a test that is still considered pretty good: They tried to scratch a
real diamond (if they had one) with the suspected crystal. Of course, there is little to be gained by scratching
diamonds or etching your name on a windowpane, but this hardness makes diamonds particularly useful in
grinding wheels and high-speed cutting tools.
Dental drills and the giant drills that bore through rock in search of oil or minerals need industrial
diamonds. Diamond-cutting wheels are used to cut concrete blocks and marble. Superfine electric wiring used
in television sets, X ray machines, radar and rockets is made by drawing heavy wire through a hole in a
diamond die. In much the same way, yarns are spun out of tungsten and molybdenum. Fabrics made out of
these yarns are being tested for use in space suits.
The military and space uses of industrial diamonds are so important that the Army Signal Corps and
the Air Force Systems Command are making their own synthetic diamonds for use in research projects.
Africa, which supplies about 95 percent of the world's industrial diamonds, has often been torn by
strife, and imports have been endangered. The United States has no diamond mines beneath the earth, so it is
fortunate that we have some above the ground in the laboratory.
Chemists are already learning how to do tricks with synthetic diamonds. When tiny amounts of boron,
beryllium or aluminum are added to the carbon, the diamonds become semiconducting. This means that an
electric current can pass through them, making them suitable for use in transistors and other electronic devices.
As less than one percent of natural diamonds are semiconducting, this discovery is most important. Scientists
believe that the famous Hope Diamond is a semiconductor, but no one suggests destroying this beautiful gem
for its electrical qualities.
The trail leading to the perfect synthetic diamond has not yet been followed to the very end. Better
ways of transforming graphite into diamonds are still being worked out. The General Electric Research
Laboratory recently announced discovery of a method that does the job in a few thousandths of a second. It
does not require a metal catalyst either. The secret lies in an improved pressure chamber that can achieve
pressures of three million pounds to the square inch at temperatures above 9000° F. Other experiments are also
still going on.
"The jewels we have made are diamonds," says a physicist, "but they are not very beautiful. Natural
diamonds range in color from white to black, with the white or blue-white favored as gems. Most of ours are on
the dark side, and are quite small."
Not one compares in size with the famed Cullinan Diamond, which originally weighed 3,025 3/4 carats.
It takes 142 carats to make an ounce, so a little figuring - worked out below - will show you that the diamond
weighed a little more than one pound and 5 ounces. The Cullinan was cut into 9 large and 96 small stones. Even at that, the two
biggest ones, at 516 and 309 carats respectively, remain the largest diamonds in the world today. Another
famous diamond, the Regent or Pitt, which weighed 136 carats, was once mounted by Napoleon on the hilt of
his sword. These world-famous gems are, of course, too big for normal use. No girl could wear an engagement
ring with a stone of 100 or more carats. She would have to be built like a stevedore to be able to raise her hand.
The synthetic stones, however, look too insignificant.
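The figuring promised above uses nothing beyond the conversions stated in the passage (142 carats to the ounce, 16 ounces to the pound):
\[
\frac{3{,}025.75\ \text{carats}}{142\ \text{carats per ounce}} \approx 21.3\ \text{ounces} = 1\ \text{pound}\ 5.3\ \text{ounces}
\]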
In the future scientists may succeed in producing synthetic diamonds lovely enough to be used as gems.
In the meantime, a diamond substitute has been discovered. As this gem stone is not the same as a diamond
chemically or physically, it is technically an imitation. The men who produced it object violently to this
description.
"It is a completely new jewel," they maintain, " a crystal with all the fire and brilliance of the most
beautiful gems found in nature. This is the first man-made gem. It only exists in the laboratory. You could not
go out and dig one up."
Strange as it seems, this diamondlike jewel was discovered by scientists working for the National Lead
Company, one of the largest paint manufacturers in the United States. Their aim was to improve the paint, not
to create gems. White paint is made with a chemical called "titanium dioxide." The chemists believed that a
careful study of the crystal of titanium dioxide would tell them many new things about the chemical. However,
single crystals had never been found in nature. They decided, therefore, to make one, using a high-temperature
furnace. The crystal that emerged from the furnace was breathtakingly beautiful.
"That would certainly look terrific in a ring!" exclaimed one of the laboratory assistants.
Everyone could see that he was right. A new jewel was born, which had even more fire and brilliance
than a diamond. It had, however, one major drawback. Titanium has a characteristic greenish-yellow cast
which marred the beauty of the stone. A slight change in the chemical composition might solve that, thought
the chemists. Like cooks deciding whether to add a touch of vanilla or peppermint extract to the cookie batter,
the scientists tried adding a pinch of this element and a touch of that. And in time they discovered that a
combination of titanium and strontium produced a brilliant, clear white crystal.
This stone, which has been named "Fabulite," is much softer and heavier than its natural brother. This
makes it unsuitable as a substitute for industrial diamonds. Some uses for the man-made gem have been found
in making infrared lenses. Mostly it is intended for wear as a jewel. "Fabulite" can be cut exactly like a diamond,
and it costs much less.
"We go nature one better," boasts one of the technicians who developed the gem. "If you dig for
diamonds or rubies or sapphires you will turn up thousands of imperfect stones for every one that is good. But
all of our jewels are perfect. There is no second best. Every one is the best."
(From The artificial world around us, by Lucy Kavaler)
WHO MADE THIS RUBY?


Which giant drill has the natural diamond,
and which the synthetic? You would hardly be
expected to know. But these days you would have to
carry a microscope to tell if a lady is wearing a real
ruby brooch, or a real emerald pendant, or even a real
star-sapphire ring. "It is practically impossible to see
the difference between the natural and the synthetic
gems," states a chemist. "The structure is identical.
Even an expert must study the crystals under the
microscope to spot tiny differences." Successful
duplicates of rubies came long before the first
triumphs with diamonds. Back in 1904, a French
chemist named Auguste Victor Louis Verneuil
produced synthetic rubies in a very hot furnace.
Although some improvements have been made since, the same process is still in use today.
Like diamonds, rubies and sapphires are also forms of a cheap and plentiful substance. In this case it is
a mineral called "corundum." The coarser forms of the mineral are used for grinding or polishing; the finer ones
are the jewels.
The synthetic gems are produced by making the mineral in its crystal form. Corundum consists of
aluminum oxide. Scientists can create aluminum oxide by heating pure alum, which is an aluminum
compound. In order to produce a ruby, the chemists add coloring matter made out of a chemical called
"chromic oxide."
"When synthetic jewels first appeared I used to figure that if a lady was wearing a gigantic ruby it could
not be natural," says one observant young man. "After all, how many people can afford to buy a ten-carat
stone? The Aga Khan and Princess Grace of Monaco. But nowadays, people have caught on to that, and buy
small synthetic gems, too, which are just about foolproof."
Giving the chemist's battle cry, "Anything that exists in nature can be reproduced in the laboratory,"
scientists have tried their skills at producing star sapphires and star rubies. These are crystals which appear to
have a star in the center. They are extremely rare in nature. As a result, they are so valuable that for many years
most of the buyers belonged to royalty.
A child observing a star sapphire on display in the window of a jewelry store turned to her mother and
said: "It's so beautiful it must be fake."
As the year was 1937, the youngster could not possibly have been right. Ten years later such a remark
might have been the truth.
A successful way of making the star gems was found in 1947. The first step is to make a plain synthetic
ruby or sapphire. This is then used as a seed on which the star gem is grown. Synthetic star sapphires are not
yet as large as the natural. The greatest of these gems found in nature weighs 563.35 carats; the biggest
laboratory duplicate, 35. On the other hand, size is no problem when it comes to turning out synthetic star
rubies. The largest known natural star ruby weighs 102 carats. A 109-carat star ruby has been made in the
laboratory. Of course not many people could wear it.
Today the Linde Company, a division of Union Carbide Corporation, makes four star stones: the star
sapphire, star ruby, white star and black star.

The accomplishment of which scientists are most proud is the production of synthetic emerald. It is
extremely difficult to make this gem in the laboratory. This was first done in 1930 by the German chemical
company I. G. Farben. The technique used there has remained a carefully guarded secret, but Americans have
worked out equally successful methods since then. Emeralds are a form of the mineral beryl. The emerald is
grown around a crystal of colorless beryl, which serves as the seed. Coloring matter made out of chromium is
added to produce the brilliant green color known as emerald.
In ancient Egypt emeralds were a sign of wealth, much as they are today. And just as now, those
struggling along on less than a Pharaoh's income had to make do with imitations. These, to be sure, bore only a
general resemblance to the real emerald. The age of the synthetic jewel was still thousands of years in the
future.
In addition to making emeralds, rubies, sapphires and diamonds, scientists can reproduce almost any
gem you might name. Garnets, for example, are precious stones which are not so precious as all that. They are,
in fact, often used as imitations of other more valued gems. But they, too, can be made in the laboratory.
During the period when research on synthetic diamonds was at a standstill, a weary scientist took some
time off. Just for a change, he had a try at making garnets. He took some hornblende, a fairly common mineral
ore which contains most of the same elements found in natural garnets. This ore was subjected to a pressure of
441,000 pounds to the square inch and a temperature of 2192° F. Sure enough, garnets emerged from the
pressure chamber. However, as garnets are not particularly valuable, his discovery did not receive the praise
given him a few months later when the same general process yielded diamonds.
Synthetic gems sell at prices that average between one-tenth and one-twentieth of the cost of the
natural, so it is not surprising that business in synthetic jewels is increasing by leaps and bounds.
"The only thing that holds us back is that word 'synthetic,' " says a jeweler sorrowfully. "You can't get
anybody to say `synthetic diamonds are a girl's best friend.'"
Not all of these man-made jewels are set in tiaras, bracelets, brooches and necklaces to be worn by
glamorous women. These gems are as useful as they are beautiful.
"We make as many crystals for industry as we do for jewels," reports a spokesman for the Union
Carbide Corporation.
Rubies and sapphires are very hard, second only to diamonds. They can survive a lot of wear and tear.
Rubies are placed in the bearings of expensive watches; ball point pens may have ruby points; phonograph
needles are frequently made of sapphire. Both gems hide their fire when set in a number of machines, such as
tape recorders or thread guides for textile mills.
The electronics industry uses rubies and sapphires. Microwave and vacuum tubes and other electronic
equipment often contain sapphires. Rubies are particularly valuable in the field of communications. They can
send beams of light across distances as great as a million miles. Information can be carried on these light waves.
The device that is used is called an "optical maser." You will be hearing a lot about it in the space age ahead.
"One of the synthetic crystals used in masers to communicate with outer space is an almost unbelievable
combination - ruby on the outside and sapphire on the inside!" reveal Union Carbide officials.
Is there life on other planets? What are conditions on the surface of the moon? The ruby and sapphire
will help to bring us answers to these questions.
(From The artificial world around us, by Lucy Kavaler)
THE SYNTHETIC FUTURE
As man hurtles through the universe he extends the
frontiers of the artificial world.
"Our conquest of outer space rests on an artificial
base," jokes a scientist.
What he means is that the first astronauts sent out in
space capsules sit on couches heavily padded with a
special kind of plastic foam called "urethane." The
space pilots must be protected from the tremendous
unearthly forces to which their earthly bodies will be
subjected. The shock of landing, for example, is more
than a man could stand without the aid of
engineering. Scientists have figured that the flier
must face forces that can be equal to 240 times the
pull of gravity. The impact of landing can be reduced with devices attached to the space ship. These are not
enough. The space man must also be able to count on cushioning within the space ship and in his clothing.
Fashions for astronauts keep changing. Styles are dependent on new developments in fibers and
synthetics, rather than on fashion announcements by Paris designers. A current model weighs twenty pounds.
It is a one-piece suit, fashioned out of nylon treated with aluminum and rubber. A vinyl sponge absorbs the
water vapor from perspiration and breathing. Every so often the sponge is squeezed automatically, and the
water is siphoned off into a storage tank. On his head the astronaut wears a plastic helmet, lined with an inch
and a half of urethane foam.
Our knowledge of conditions beyond the earth's atmosphere is learned from instruments, as well as
from observant astronauts. Some of the capsules are sent into the far reaches of the universe without a pilot on
board. How can cameras and delicate electronic devices take the speed, vibrations and impact of landing? Like
the space men, they are cushioned against shock with plastic foam. In one preflight test an electronic tube was
placed in a box padded in this way. It was dropped out of a window on the eighteenth floor of a skyscraper,
and landed on the concrete sidewalk far below with a deafening crash. When the tube was taken out of its
container it still worked perfectly. In the same way, fragile equipment returns unhurt from voyages through
space.
New synthetics are being designed to meet each advance of the space age - fuels to get the rockets off
the ground, materials for space ships, nose cones for missiles, fabrics that can endure fantastic heat and
unbelievable cold, substances suitable for building houses on other planets.
Right here on earth life is fast growing more artificial than ever. You have read that many of the things
you wear, use and eat are synthetic. In the future men may become partly synthetic, too.
We are at the beginning of a plastics age in medicine. Synthetics have been developed that can be put
inside the human body. They do no harm, nor are they harmed by the living tissue around them. There are
already people walking around with plastic arteries and veins. The injured or blocked portions of their blood
vessels have been removed and replaced with synthetic tubes. Others go about their business with partly
artificial hip joints. Certain types of plastic, called "acrylic" and "silicone," substitute for a piece of the damaged
bone.
Synthetics are coming into use to rebuild faces or parts of the body in plastic surgery. Even a portion of
the membrane covering the brain has been replaced with a silicone-coated fabric. Plastic tubes and rods can
drain off fluid from the brains of patients suffering from water on the brain. By using plastics to repair
damaged parts of the eye, sight has been restored to people apparently doomed to blindness. Experiments are
being done in the creation of an artificial cornea, which is the transparent portion of the coat of the eyeball.
The most dramatic possibility for the artificial future is that the day may come when a man will be able
to live without a heart. There are already a number of people who are functioning with heart valves made of
plastic. Others have survived drastic surgery, thanks to special machines placed in their chests. These provided
an electrical impulse that kept the heart going. Other machines can substitute for the heart altogether for short
periods during certain types of operations.
"These are just first steps," says a doctor.
Medical researchers are now working on an artificial heart that could replace the heart altogether for the
rest of the patient's life. A small motor-driven pump, covered with plastic, and with synthetic valves and
arteries, could be placed inside the body. Even the aorta, which is the largest and most important artery leading
out of the heart, would be artificial.
Models of such a heart have been used experimentally on dogs for periods of as long as five hours. Such
extreme measures would, of course, be attempted only on a human patient in desperate condition.
Men without hearts: this is an indication of just how far synthetics can take us. In the world of the
future any damaged organ may be replaced by an artificial duplicate as a matter of course.
Medicines, too, have left the kingdom of the natural and entered the kingdom of the synthetic. When
sick, we do not send an aged aunt out to scour the woods in search of healing herbs and leaves. Medicine men
are not called in to cure us with magic potions. Doctors give us shots or prescribe pills instead. Inside the vials
and pills are many medicines that are simply chemical reproductions of natural products. Vitamins and
tranquilizers are just two of hundreds that have been duplicated successfully. Researchers are burning the
laboratory lamps until late at night in search of more.
"Even some of the wonder drugs can now be made chemically," reports a doctor. "Chloromycetin
originally came from a microorganism found in the soil in Venezuela. A few years later scientists figured out
how to make this very antibiotic synthetically. The same thing has been done more recently with tetracycline,
another of the wonder drugs, which was also first found in nature. The advantage of making these medicines in
the laboratory is that we can then cause slight changes in the molecule and make each antibiotic cure more
diseases more effectively."
Drugs to conquer the illnesses that still plague mankind are yet to come out of the flasks resting on top
of today's Bunsen burners.
The world outside you, as well as inside you, becomes more synthetic as the years go by. Few objects
are secure from the inroads of plastics. Even gun barrels can be made out of a solid form of nylon. Plumbing
pipes and sewer connections are going artificial.
Do you wonder what lies ahead? Synthetics will make life more comfortable than you ever imagined it
could be. A paint that can heat or cool your house sounds like the kind of scheme you would read about in a
comic book. Nonetheless, scientists are working away at this project and appear quite sure of success. Others
prefer the idea of warming a room by means of the wallpaper. If cold feet trouble you in the winter, just wait a
few years. Carpets that can be heated may provide the solution to that problem.
Everybody likes a clean house, but nobody likes to do the dusting. Very little of that time-consuming
chore will be needed when dust-repellent finishes are built into fabrics, upholstery, carpets and pillows.
Another possibility which scientists and clothing manufacturers discuss when they get together is a
type of garment that you could wear all year round. It would keep you warm in winter and cool in summer.
The trick is in the lining. In the winter you zip in a lining that is colored with special dyes and treated with
finishes that absorb light rays and change them into heat. When the thermometer rises you do not need to hang
your dress or suit away in a closet until the fall. You simply switch the lining for one that is treated with
finishes to repel light rays.
Perhaps you think that wearing the same clothes winter and summer would get boring. Chemists have
a solution in mind that would please people who want constant change. How would you like to buy a skirt or a
jacket, wear it a couple of times, and throw it away? It seems like the sort of extravagance possible only for
billionaires. But the day may come when you will be doing just that. Special plastic finishes are being
developed to make paper so strong that it could be used in clothes. The scientist, hard at work creating an easy
life for his descendants, does not plan to stop there.
"Why not paper bed linen, upholstery, sails for boats, parachutes and blankets?" he asks. "It will be
possible to wash or clean these paper fabrics, but they will be so cheap that many users will just toss them aside
without a second thought."
Chemists are continuing their all-out attack on natural products. Out of the tiny world within the test
tube, scientists are creating a new world around us.
(From The artificial world around us, by Lucy Kavaler)
