
James Hutton FRSE (/ˈhʌtən/; 3 June 1726 OS (14 June 1726 NS) – 26 March 1797) was a Scottish
geologist, physician, chemical manufacturer, naturalist, and experimental agriculturalist.[1] He originated
the theory of uniformitarianism, a fundamental principle of geology, which explains the features of
the Earth's crust by means of natural processes over geologic time. Hutton's work established geology as
a proper science, and thus he is often referred to as the "Father of Modern Geology".[2][3]
Through observation and carefully reasoned geological arguments, Hutton came to believe that
the Earth was perpetually being formed; he recognised that the history of the Earth could be
determined by understanding how processes such as erosion and sedimentation work in the present
day. His theories of geology and geologic time,[4] also called deep time,[5] came to be included in theories
which were called plutonism and uniformitarianism. Some of his writings anticipated the Gaia
hypothesis.
Contents

1 Early life and career
   1.1 Farming and geology
   1.2 Edinburgh and canal building
   1.3 Later Life and Death
2 Theory of rock formations
   2.1 Search for evidence
3 Publication
4 Opposing theories
5 Acceptance of geological theories
6 Other contributions
   6.1 Meteorology
   6.2 Earth as a living entity
   6.3 Evolution
7 Works
8 See also
9 References
10 Further reading
11 External links

Early life and career


Hutton was born in Edinburgh on 3 June 1726 OS as one of five children of William Hutton, a merchant
who was Edinburgh City Treasurer, but who died in 1729 when James was still young. Hutton's mother
Sarah Balfour insisted on his education at the High School of Edinburgh where he was particularly
interested in mathematics and chemistry, then when he was 14 he attended the University of
Edinburgh as a "student of humanity" i.e. Classics (Latin and Greek). He was apprenticed to the lawyer
George Chalmers WS when he was 17, but took more interest in chemical experiments than legal work.
At the age of 18, he became a physician's assistant, and attended lectures in medicine at the University
of Edinburgh. After three years he went to the University of Paris to continue his studies, taking the
degree of Doctor of Medicine at Leiden University in 1749 with a thesis on blood circulation.[6] Around
1747 he had a son by a Miss Edington, and though he gave his child James Smeaton Hutton financial
assistance, he had little to do with the boy who went on to become a post-office clerk in London.[7]
After his degree Hutton returned to London, then in mid-1750 went back to Edinburgh and resumed
chemical experiments with close friend, James Davie. Their work on production of sal
ammoniac from soot led to their partnership in a profitable chemical works,[8] manufacturing the
crystalline salt which was used for dyeing, metalworking and as smelling salts; previously it had been
available only from natural sources and had to be imported from Egypt. Hutton owned and rented out
properties in Edinburgh, employing a factor to manage this business.[9]
Farming and geology
Hutton inherited from his father the Berwickshire farms of Slighhouses, a lowland farm which had been
in the family since 1713, and the hill farm of Nether Monynut.[10] In the early 1750s he moved
to Slighhouses and set about making improvements, introducing farming practices from other parts of
Britain and experimenting with plant and animal husbandry.[11] He recorded his ideas and innovations in
an unpublished treatise on The Elements of Agriculture.[12]
This developed his interest in meteorology and geology.[11] In a 1753 letter he wrote that he had
"become very fond of studying the surface of the earth, and was looking with anxious curiosity into
every pit or ditch or bed of a river that fell in his way". Clearing and draining his farm provided ample
opportunities. Playfair describes Hutton as having noticed that "a vast proportion of the present rocks
are composed of materials afforded by the destruction of bodies, animal, vegetable and mineral, of
more ancient formation". His theoretical ideas began to come together in 1760. While his farming
activities continued, in 1764 he went on a geological tour of the north of Scotland with George
Maxwell-Clerk,[13] ancestor of the famous James Clerk Maxwell.[14]
Edinburgh and canal building

In 1768 Hutton returned to Edinburgh, letting his farms to tenants but continuing to take an interest in
farm improvements and research which included experiments carried out at Slighhouses. He developed
a red dye made from the roots of the madder plant.[15]
He had a house built in 1770 at St John's Hill, Edinburgh, overlooking Salisbury Crags. This later became
the Balfour family home and, in 1840, the birthplace of the psychiatrist James Crichton-Browne. Hutton
was one of the most influential participants in the Scottish Enlightenment, and fell in with numerous
first-class minds in the sciences including John Playfair, philosopher David Hume and economist Adam
Smith.[16] Hutton held no position in Edinburgh University and communicated his scientific findings
through the Royal Society of Edinburgh. He was particularly friendly with Joseph Black, and the two of
them together with Adam Smith founded the Oyster Club for weekly meetings, with Hutton and Black
finding a venue which turned out to have rather disreputable associations.[17]
Between 1767 and 1774 Hutton had considerable close involvement with the construction of the Forth
and Clyde canal, making full use of his geological knowledge, both as a shareholder and as a member of
the committee of management, and attended meetings including extended site inspections of all the
works. In 1777 he published a pamphlet on Considerations on the Nature, Quality and Distinctions of
Coal and Culm which successfully helped to obtain relief from excise duty on carrying small coal.[18]
Later Life and Death
From 1791 Hutton suffered extreme pain from stones in the bladder and gave up field work to
concentrate on finishing his books. A painful (and dangerous) operation failed to resolve his illness.[19]
He died in Edinburgh and was buried in the vault of Andrew Balfour, opposite the vault of his
friend Joseph Black, in the now sealed south-west section of Greyfriars Kirkyard commonly known as
the Covenanter's Prison.
Theory of rock formations
Hutton hit on a variety of ideas to explain the rock formations he saw around him, but according to
Playfair he "was in no haste to publish his theory; for he was one of those who are much more delighted
with the contemplation of truth, than with the praise of having discovered it". After some 25 years of
work,[20] his Theory of the Earth; or an Investigation of the Laws observable in the Composition,
Dissolution, and Restoration of Land upon the Globe was read to meetings of the Royal Society of
Edinburgh in two parts, the first by his friend Joseph Black on 7 March 1785, and the second by himself
on 4 April 1785. Hutton subsequently read an abstract of his dissertation Concerning the System of the
Earth, its Duration and Stability to Society meeting on 4 July 1785,[21] which he had printed and
circulated privately.[22] In it, he outlined his theory as follows:
The solid parts of the present land appear in general, to have been composed of the productions of the
sea, and of other materials similar to those now found upon the shores. Hence we find reason to
conclude:

1st, That the land on which we rest is not simple and original, but that it is a composition, and had been
formed by the operation of second causes.
2nd, That before the present land was made, there had subsisted a world composed of sea and land, in
which were tides and currents, with such operations at the bottom of the sea as now take place. And,
Lastly, That while the present land was forming at the bottom of the ocean, the former land maintained
plants and animals; at least the sea was then inhabited by animals, in a similar manner as it is at present.
Hence we are led to conclude, that the greater part of our land, if not the whole had been produced by
operations natural to this globe; but that in order to make this land a permanent body, resisting the
operations of the waters, two things had been required;
1st, The consolidation of masses formed by collections of loose or incoherent materials;
2ndly, The elevation of those consolidated masses from the bottom of the sea, the place where they
were collected, to the stations in which they now remain above the level of the ocean.
Search for evidence

Hutton's Section on Edinburgh's Salisbury Crags


At Glen Tilt in the Cairngorm mountains in the Scottish Highlands in 1785, Hutton
found granite penetrating metamorphic schists, in a way which indicated that the granite had
been molten at the time. This showed to him that granite formed from cooling of molten rock,
not precipitation out of water as others at the time believed, and that the granite must be younger than
the schists.[23][24]
He went on to find a similar penetration of volcanic rock through sedimentary rock near the centre
of Edinburgh, at Salisbury Crags,[3] adjoining Arthur's Seat: this is now known as Hutton's
Section.[25][26] He found other examples in Galloway in 1786, and on the Isle of Arran in 1787.

Hutton's Unconformity on Arran

Hutton's Unconformity at Jedburgh: photograph (2003) below the Clerk of Eldin illustration (1787).


The existence of angular unconformities had been noted by Nicolas Steno and by French geologists
including Horace-Bénédict de Saussure, who interpreted them in terms of Neptunism as "primary
formations". Hutton wanted to examine such formations himself to see "particular marks" of the
relationship between the rock layers. On the 1787 trip to the Isle of Arran he found his first example
of Hutton's Unconformity to the north of Newton Point near Lochranza,[27][28] but the limited view meant
that the condition of the underlying strata was not clear enough for him,[29] and he incorrectly thought
that the strata were conformable at a depth below the exposed outcrop.[30]
Later in 1787 Hutton noted what is now known as the Hutton or "Great" Unconformity at
Inchbonny,[4] Jedburgh, in layers of sedimentary rock.[31] As shown in the illustrations to the right, layers
of greywacke in the lower layers of the cliff face are tilted almost vertically, and above an intervening
layer of conglomerate lie horizontal layers of Old Red Sandstone. He later wrote of how he "rejoiced at
my good fortune in stumbling upon an object so interesting in the natural history of the earth, and
which I had been long looking for in vain." That year, he found the same sequence in Teviotdale.[29]

An eroded outcrop at Siccar Point, showing sloping red sandstone above vertical greywacke, was sketched
by Sir James Hall in 1788.
In the Spring of 1788 he set off with John Playfair to the Berwickshire coast and found more examples of
this sequence in the valleys of the Tour and Pease Burns near Cockburnspath.[29] They then took a boat
trip from Dunglass Burn east along the coast with the geologist Sir James Hall of Dunglass. They found
the sequence in the cliff below St. Helens, then just to the east at Siccar Point found what Hutton called
"a beautiful picture of this junction washed bare by the sea".[32][33] Playfair later commented about the
experience, "the mind seemed to grow giddy by looking so far into the abyss of time".[34] Continuing
along the coast, they made more discoveries including sections of the vertical beds showing strong
ripple marks which gave Hutton "great satisfaction" as a confirmation of his supposition that these beds
had been laid horizontally in water. He also found conglomerate at altitudes that demonstrated the
extent of erosion of the strata, and said of this that "we never should have dreamed of meeting with
what we now perceived".[29]
Hutton reasoned that there must have been innumerable cycles, each involving deposition on
the seabed, uplift with tilting and erosion then undersea again for further layers to be deposited. On the
belief that this was due to the same geological forces operating in the past as the very slow geological
forces seen operating at the present day, the thicknesses of exposed rock layers implied to him
enormous stretches of time.[4]
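The force of this inference is easy to check with simple arithmetic: dividing a stratum's thickness by a present-day deposition rate gives the time needed to lay it down. A minimal sketch, using illustrative round numbers rather than measurements from Hutton's sites:

```python
# Sketch of the deep-time argument: if strata accumulate at rates
# like those observed today, thick rock sequences imply vast ages.
# The thickness and rate below are illustrative, not historical data.

def years_to_accumulate(thickness_m, rate_mm_per_year):
    """Years needed to deposit a layer at a constant rate."""
    return thickness_m * 1000.0 / rate_mm_per_year

# A 1,000 m sedimentary sequence deposited at 0.1 mm per year:
print(years_to_accumulate(1000, 0.1))  # 10000000.0 years
```

Even a single such sequence implies ten million years; repeated cycles of deposition, uplift and erosion multiply the total accordingly.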
Publication
Though Hutton circulated privately a printed version of the abstract of his Theory (Concerning the
System of the Earth, its Duration, and Stability) which he read at a meeting of the Royal Society of
Edinburgh on 4 July 1785,[22] the full account of his theory as read at the 7 March and 4 April 1785
meetings did not appear in print until 1788. It was titled Theory of the Earth; or an Investigation of the
Laws observable in the Composition, Dissolution, and Restoration of Land upon the Globe, and appeared
in Transactions of the Royal Society of Edinburgh, vol. I, Part II, pp. 209–304, plates I and II, published
1788.[21] He put forward the view that "from what has actually been, we have data for concluding with
regard to that which is to happen thereafter." This restated the Scottish Enlightenment concept
which David Hume had put in 1777 as "all inferences from experience suppose ... that the future will
resemble the past", and Charles Lyell memorably rephrased in the 1830s as "the present is the key to
the past".[35] Hutton's 1788 paper concludes: "The result, therefore, of our present enquiry is, that we
find no vestige of a beginning, no prospect of an end."[21] His memorably phrased closing statement has
long been celebrated.[4][36] (It was quoted in the 1989 song "No Control" by songwriter and
professor Greg Graffin.[37])
Following criticism, especially the arguments from Richard Kirwan who thought Hutton's ideas
were atheistic and not logical,[21] Hutton published a two volume version of his theory in
1795,[38][39] consisting of the 1788 version of his theory (with slight additions) along with a lot of material
drawn from shorter papers Hutton already had to hand on various subjects such as the origin of granite.

It included a review of alternative theories, such as those of Thomas Burnet and Georges-Louis Leclerc,
Comte de Buffon.
The whole was entitled An Investigation of the Principles of Knowledge and of the Progress of Reason,
from Sense to Science and Philosophy when the third volume was completed in 1794.[40] Its 2,138 pages
prompted Playfair to remark that "The great size of the book, and the obscurity which may justly be
objected to many parts of it, have probably prevented it from being received as it deserves."
Opposing theories
His new theories placed him into opposition with the then-popular Neptunist theories of Abraham
Gottlob Werner, that all rocks had precipitated out of a single enormous flood. Hutton proposed that
the interior of the Earth was hot, and that this heat was the engine which drove the creation of new
rock: land was eroded by air and water and deposited as layers in the sea; heat then consolidated
the sediment into stone, and uplifted it into new lands. This theory was dubbed "Plutonist" in contrast
to the flood-oriented theory.
As well as combating the Neptunists, he also opened up the concept of deep time for scientific
purposes, in opposition to Catastrophism. Rather than accepting that the earth was no more than a few
thousand years old, he maintained that the Earth must be much older, with a history extending
indefinitely into the distant past.[23] His main line of argument was that the tremendous displacements
and changes he was seeing did not happen in a short period of time by means of catastrophe, but that
processes still happening on the Earth in the present day had caused them. As these processes were
very gradual, the Earth needed to be ancient, to allow time for the changes. Before long, scientific
inquiries provoked by his claims had pushed back the age of the earth into the millions of years: still
too short when compared with the 4.6 billion year age accepted in the 21st century, but a distinct
improvement.
Acceptance of geological theories
It has been claimed that the prose of Principles of Knowledge was so obscure that it also impeded the
acceptance of Hutton's geological theories.[41] Restatements of his geological ideas (though not his
thoughts on evolution) by John Playfair in 1802 and then Charles Lyell in the 1830s popularised the
concept of an infinitely repeating cycle, though Lyell tended to dismiss Hutton's views as giving too
much credence to catastrophic changes.
Lyell's books had widespread influence, not least on the up-and-coming young geologist Charles
Darwin who read them with enthusiasm during his voyage on the Beagle, and has been described as
Lyell's first disciple. In a comment on the arguments of the 1830s, William Whewell coined the
term uniformitarianism to describe Lyell's version of the ideas, contrasted with the catastrophism of
those who supported the early 19th century concept that geological ages recorded a series of
catastrophes followed by repopulation by a new range of species. Over time there was a convergence in
views, but Lyell's description of the development of geological ideas led to wide belief that
uniformitarianism had triumphed.

Other contributions
Meteorology
It was not merely the earth to which Hutton directed his attention. He had long studied the changes of
the atmosphere. The same volume in which his Theory of the Earth appeared contained also a Theory of
Rain. He contended that the amount of moisture which the air can retain in solution increases with
temperature, and, therefore, that on the mixture of two masses of air of different temperatures a
portion of the moisture must be condensed and appear in visible form. He investigated the available
data regarding rainfall and climate in different regions of the globe, and came to the conclusion that the
rainfall is regulated by the humidity of the air on the one hand, and mixing of different air currents in the
higher atmosphere on the other.
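Hutton's mechanism can be sketched numerically. Saturation vapour pressure rises roughly exponentially with temperature, so it is a convex function of temperature; averaging two saturated parcels therefore leaves the mixture holding more vapour than it can retain at the mixed temperature, and the excess condenses. The sketch below uses the well-known Magnus approximation; linearly averaging temperature and vapour pressure is a simplification for illustration only.

```python
import math

def saturation_vapour_pressure(t_celsius):
    """Magnus approximation to saturation vapour pressure, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

t1, t2 = 0.0, 20.0  # two saturated air parcels at different temperatures

# Vapour pressure of an equal mixture vs. the capacity at the mixed temperature.
e_mix = (saturation_vapour_pressure(t1) + saturation_vapour_pressure(t2)) / 2
e_cap = saturation_vapour_pressure((t1 + t2) / 2)

print(e_mix > e_cap)  # True: the mixture is supersaturated, so moisture condenses
```

The comparison comes out the same way for any two distinct temperatures, which is exactly Hutton's point: mixing air currents of different temperatures forces condensation.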
Earth as a living entity
The idea that the Earth is alive is found in philosophy and religion, but the first scientific discussion was
by James Hutton. In 1785, he stated that the Earth was a superorganism and that its proper study should
be physiology.[42] Although his views anticipated the Gaia hypothesis, proposed in the 1960s by
scientist James Lovelock,[43] his idea of a living Earth was forgotten in the intense reductionism of the
19th century.[42]
Evolution
Hutton also advocated uniformitarianism for living creatures (evolution, in a sense) and even
suggested natural selection as a possible mechanism affecting them:
"...if an organised body is not in the situation and circumstances best adapted to its sustenance and
propagation, then, in conceiving an indefinite variety among the individuals of that species, we must be
assured, that, on the one hand, those which depart most from the best adapted constitution, will be the
most liable to perish, while, on the other hand, those organised bodies, which most approach to the
best constitution for the present circumstances, will be best adapted to continue, in preserving
themselves and multiplying the individuals of their race." Investigation of the Principles of Knowledge,
volume 2.[40]
Hutton gave the example that where dogs survived through "swiftness of foot and quickness of sight...
the most defective in respect of those necessary qualities, would be the most subject to perish, and that
those who employed them in greatest perfection... would be those who would remain, to preserve
themselves, and to continue the race". Equally, if an acute sense of smell became "more necessary to the
sustenance of the animal... the same principle [would] change the qualities of the animal, and... produce
a race of well scented hounds, instead of those who catch their prey by swiftness". The same "principle
of variation" would influence "every species of plant, whether growing in a forest or a meadow". He
came to his ideas as the result of experiments in plant and animal breeding, some of which he outlined
in an unpublished manuscript, the Elements of Agriculture. He distinguished between heritable
variation as the result of breeding, and non-heritable variations caused by environmental differences
such as soil and climate.[40]
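Hutton's "principle of variation" can be illustrated with a toy simulation, which is of course nothing Hutton himself could have run: individuals vary in a trait, those farthest from the best-adapted value are "most liable to perish", and the survivors leave offspring with small heritable variation. All parameters here are invented for illustration.

```python
import random

random.seed(42)
OPTIMUM = 10.0  # hypothetical best-adapted trait value for the environment

def next_generation(population):
    # The half of the population closest to the optimum survives and breeds.
    survivors = sorted(population, key=lambda t: abs(t - OPTIMUM))[:len(population) // 2]
    # Each survivor leaves two offspring with small heritable variation.
    return [t + random.gauss(0, 0.5) for t in survivors for _ in range(2)]

# Start far from the optimum and let selection act for 30 generations.
population = [random.gauss(0.0, 2.0) for _ in range(100)]
for _ in range(30):
    population = next_generation(population)

mean = sum(population) / len(population)
print(round(mean, 1))  # the population mean ends up close to the optimum of 10.0
```

The drift of the mean toward the optimum mirrors Hutton's description of varieties becoming "better adapted to particular conditions", though he rejected any extension of the mechanism to the origin of new species.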
Though he saw his "principle of variation" as explaining the development of varieties, Hutton rejected
the idea that evolution might originate species as a "romantic fantasy", according
to palaeoclimatologist Paul Pearson.[44] Influenced by deism,[45] Hutton thought the mechanism allowed
species to form varieties better adapted to particular conditions and provided evidence of benevolent
design in nature. Studies of Charles Darwin's notebooks have shown that Darwin arrived separately at
the idea of natural selection which he set out in his 1859 book On the Origin of Species, but it has been
speculated that he may have had some half-forgotten memory from his time as a student in Edinburgh
of ideas of selection in nature as set out by Hutton, and by William Charles Wells and Patrick
Matthew who had both been associated with the city before publishing their ideas on the topic early in
the 19th century.[40]
Works

1785. Abstract of a dissertation read in the Royal Society of Edinburgh, upon the seventh of
March, and fourth of April, MDCCLXXXV, Concerning the System of the Earth, Its Duration, and
Stability. Edinburgh. 30pp.

1788. The theory of rain. Transactions of the Royal Society of Edinburgh, vol. 1, Part 2, pp. 41–86.

1788. Theory of the Earth; or an investigation of the laws observable in the composition,
dissolution, and restoration of land upon the Globe. Transactions of the Royal Society of
Edinburgh, vol. 1, Part 2, pp. 209–304.

1792. Dissertations on different subjects in natural philosophy. Edinburgh & London: Strahan &
Cadell.

1794. Observations on granite. Transactions of the Royal Society of Edinburgh, vol. 3, pp. 77–81.

1794. A dissertation upon the philosophy of light, heat, and fire. Edinburgh: Cadell, Junior,
Davies.

1794. An investigation of the principles of knowledge and of the progress of reason, from sense
to science and philosophy. Edinburgh: Strahan & Cadell.

1795. Theory of the Earth; with proofs and illustrations. Edinburgh: Creech. 2 vols.

1797. Elements of Agriculture. Unpublished manuscript.

1899. Theory of the Earth; with proofs and illustrations, vol III, Edited by Sir Archibald Geikie.
Geological Society, Burlington House, London. at Internet Archive

Uniformitarianism is the scientific observation that the same natural laws and processes that operate in
the universe now have always operated in the universe in the past and apply everywhere in the
universe. It has included the gradualistic concept that "the present is the key to the past" and that
processes function at the same rates. Uniformitarianism has been a key principle of geology and virtually
all fields of science, but modern geologists, while accepting that geology has occurred across deep time,
no longer hold to a strict gradualism.
Uniformitarianism, coined by William Whewell, was originally proposed in contrast to catastrophism[1] by
British naturalists in the late 18th century, starting with the work of the Scottish geologist James Hutton,
which was refined by John Playfair and popularised by Charles Lyell's Principles of Geology in 1830.[2]
Contents

1 History
   1.1 18th century
   1.2 19th century
      1.2.1 Lyell's uniformitarianism
         1.2.1.1 Methodological assumptions
         1.2.1.2 Substantive hypotheses
   1.3 20th century
      1.3.1 Modern Uniformitarianism includes periodic catastrophes
2 See also
3 Notes
4 References
5 External links

History
18th century

Cliff at the east of Siccar Point, showing the near-horizontal red sandstone layers above vertically tilted
greywacke rocks.
The earlier conceptions likely had little influence on 18th-century European geological explanations for
the formation of Earth. Abraham Gottlob Werner proposed Neptunism, where strata were deposits from
shrinking seas precipitated onto primordial rocks such as granite. In 1785 James Hutton proposed an
opposing, self-maintaining infinite cycle based on natural history and not on the Biblical record.[3][4]
The solid parts of the present land appear in general, to have been composed of the productions of the
sea, and of other materials similar to those now found upon the shores. Hence we find reason to
conclude:
1st, That the land on which we rest is not simple and original, but that it is a composition, and had been
formed by the operation of second causes.
2nd, That before the present land was made, there had subsisted a world composed of sea and land, in
which were tides and currents, with such operations at the bottom of the sea as now take place. And,
Lastly, That while the present land was forming at the bottom of the ocean, the former land maintained
plants and animals; at least the sea was then inhabited by animals, in a similar manner as it is at present.
Hence we are led to conclude, that the greater part of our land, if not the whole had been produced by
operations natural to this globe; but that in order to make this land a permanent body, resisting the
operations of the waters, two things had been required;
1st, The consolidation of masses formed by collections of loose or incoherent materials;
2ndly, The elevation of those consolidated masses from the bottom of the sea, the place where they
were collected, to the stations in which they now remain above the level of the ocean.[5]
Hutton then sought evidence to support his idea that there must have been repeated cycles, each
involving deposition on the seabed, uplift with tilting and erosion, and then moving undersea again for
further layers to be deposited. At Glen Tilt in the Cairngorm mountains he found granite
penetrating metamorphic schists, in a way which indicated to him that the presumed primordial rock
had been molten after the strata had formed.[6][7] He had read about angular unconformities as
interpreted by Neptunists, and found an unconformity at Jedburgh where layers of greywacke in the
lower layers of the cliff face have been tilted almost vertically before being eroded to form a level plane,
under horizontal layers of Old Red Sandstone.[8] In the Spring of 1788 he took a boat trip along
the Berwickshire coast with John Playfair and the geologist Sir James Hall, and found a dramatic
unconformity showing the same sequence at Siccar Point.[9] Playfair later recalled that "the mind
seemed to grow giddy by looking so far into the abyss of time",[10] and Hutton concluded a 1788 paper
he presented at the Royal Society of Edinburgh, later rewritten as a book, with the phrase "we find no
vestige of a beginning, no prospect of an end."[11]
Both Playfair and Hall wrote their own books on the theory, and for decades there was a robust debate
between Hutton's supporters and the Neptunists. Georges Cuvier's paleontological work in the 1790s,
which established the reality of extinction, explained this by local catastrophes, after which other fixed
species repopulated the affected areas. In Britain, geologists adapted this idea into "diluvial theory"
which proposed repeated worldwide annihilation and creation of new fixed species adapted to a
changed environment, initially identifying the most recent catastrophe as the biblical flood.[12]
19th century
From 1830 to 1833 Charles Lyell's multi-volume Principles of Geology was published. The work's subtitle
was "An attempt to explain the former changes of the Earth's surface by reference to causes now in
operation". He drew his explanations from field studies conducted directly before he went to work on
the founding geology text,[13] and developed Hutton's idea that the earth was shaped entirely by
slow-moving forces still in operation today, acting over a very long period of time. The
terms uniformitarianism for this idea, and catastrophism for the opposing viewpoint, were coined
by William Whewell in a review of Lyell's book. Principles of Geology was the most influential geological
work in the middle of the 19th century.
Lyell's uniformitarianism
According to Reijer Hooykaas (1963), Lyell's uniformitarianism is a family of four related propositions,
not a single idea:[14]

Uniformity of law: the laws of nature are constant across time and space.

Uniformity of methodology: the appropriate hypotheses for explaining the geological past are
those with analogy today.

Uniformity of kind: past and present causes are all of the same kind, have the same energy, and
produce the same effects.

Uniformity of degree: geological circumstances have remained the same over time.

None of these connotations requires another, and they are not all equally inferred by uniformitarians.[15]

Gould explained Lyell's propositions in Time's Arrow, Time's Cycle (1987), stating that Lyell conflated two
different types of propositions: a pair of methodological assumptions with a pair of substantive
hypotheses. The four together make up Lyell's uniformitarianism.[16]
Methodological assumptions
The two methodological assumptions below are accepted to be true by the majority of scientists and
geologists. Gould claims that these philosophical propositions must be assumed before you can proceed
as a scientist doing science. "You cannot go to a rocky outcrop and observe either the constancy of
nature's laws or the working of unknown processes. It works the other way around." You first assume
these propositions and "then you go to the outcrop of rock."[17]

Uniformity of law across time and space: Natural laws are constant across space and time.[18]

The axiom of uniformity of law [18][19][20] is necessary in order for scientists to extrapolate (by inductive
inference) into the unobservable past.[18][20] The constancy of natural laws must be assumed in the study
of the past; else we cannot meaningfully study it.[18][19][20][21]

Uniformity of process across time and space: Natural processes are constant across time and
space.

Though similar to uniformity of law, this second a priori assumption, shared by the vast majority of
scientists, deals with geological causes, not physico-chemical laws.[22] The past is to be explained by
processes acting currently in time and space rather than by inventing extra esoteric or unknown
processes without good reason,[23][24] otherwise known as parsimony or Occam's razor.
However, the scientific observation and analysis of the light from distant stars and galaxies provides
direct empirical evidence of both the uniformity of law across time and space and the uniformity of
process across time and space. This empirical evidence is neither an assumption nor an extrapolation
because when one analyses the light emitted from distant stars and galaxies one is actually literally
looking back in time,[25] and hence in this case the distant past is in fact directly observable. This effect is
due to the fact that photons do not experience time at all.[26] From the perspective of a photon there is zero
time elapsed between when it is emitted and when it is absorbed again. It doesn't experience distance
either.[26]
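The lookback argument itself is simple arithmetic: light from a source d light-years away left it roughly d years ago, so observing the source is observing the past. A minimal sketch, using standard approximate constants:

```python
# Lookback time: light from a source at distance d left it d / c
# seconds ago, so distant objects are seen as they were in the past.
# Constants below are standard approximate values.

C = 299_792_458            # speed of light, m/s
LIGHT_YEAR = 9.4607e15     # metres in one light-year
SECONDS_PER_YEAR = 3.156e7

def lookback_years(distance_light_years):
    """Years into the past we see an object at the given distance."""
    metres = distance_light_years * LIGHT_YEAR
    return metres / C / SECONDS_PER_YEAR

# The Andromeda galaxy, roughly 2.5 million light-years away:
print(round(lookback_years(2.5e6) / 1e6, 1))  # 2.5 (million years)
```

By construction the answer in years equals the distance in light-years, which is why distances to stars and galaxies translate directly into windows on the past.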
Substantive hypotheses
The substantive hypotheses were controversial and, in some cases, accepted by few.[16] These
hypotheses are judged true or false on empirical grounds through scientific observation and repeated
experimental data. This is in contrast with the previous two philosophical assumptions[17] that come
before one can do science and so cannot be tested or falsified by science.

Uniformity of rate across time and space: Change is typically slow, steady, and gradual.[17]

Uniformity of rate (or gradualism) is what most people (including geologists) think of when they hear the
word "uniformitarianism," confusing this hypothesis with the entire definition. As late as 1990, Lemon,
in his textbook of stratigraphy, affirmed that "The uniformitarian view of earth history held that all
geologic processes proceed continuously and at a very slow pace."[27]
Gould explained Hutton's view of uniformity of rate: mountain ranges or grand canyons are built by the
accumulation of nearly insensible changes added up through vast time. Some major events such as
floods, earthquakes, and eruptions, do occur. But these catastrophes are strictly local. They neither
occurred in the past, nor shall happen in the future, at any greater frequency or extent than they display
at present. In particular, the whole earth is never convulsed at once.[28]

Uniformity of state across time and space: Change is evenly distributed throughout space and
time.[29]

The uniformity of state hypothesis implies that throughout the history of our earth there is no progress
in any inexorable direction. The planet has almost always looked and behaved as it does now. Change is
continuous, but leads nowhere. The earth is in balance: a dynamic steady state.[29]
20th century[edit]
Stephen Jay Gould's first scientific paper, Is uniformitarianism necessary? (1965), reduced these four
interpretations to two.[30] He dismissed the first principle, which asserted spatial and temporal
invariance of natural laws, as no longer an issue of debate. He rejected the third (uniformity of rate) as
an unjustified limitation on scientific inquiry, as it constrains past geologic rates and conditions to those
of the present. Lyellian uniformitarianism, he concluded, was unnecessary.
Modern Uniformitarianism includes periodic catastrophes[edit]
Uniformitarianism was originally proposed in contrast to catastrophism, which states that the distant
past "consisted of epochs of paroxysmal and catastrophic action interposed between periods of
comparative tranquility".[31] Especially in the late 19th and early 20th centuries, most geologists took this
interpretation to mean that catastrophic events are not important in geologic time; one example of this
is the debate of the formation of the Channeled Scablands due to the catastrophic Missoula glacial
outburst floods. An important result of this debate and others was the re-clarification that, while the
same principles operate in geologic time, catastrophic events that are infrequent on human time-scales
can have important consequences in geologic history.[32] Derek Ager has noted that "geologists do not
deny uniformitarianism in its true sense, that is to say, of interpreting the past by means of the
processes that are seen going on at the present day, so long as we remember that the periodic
catastrophe is one of those processes. Those periodic catastrophes make more showing in the
stratigraphical record than we have hitherto assumed."[33]
Even Charles Lyell thought that ordinary geological processes would cause Niagara Falls to move
upstream to Lake Erie within 10,000 years, leading to catastrophic flooding of a large part of North
America.
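Lyell-style reasoning of this kind is a simple rate-times-time calculation. A back-of-envelope sketch, using rough modern illustrative figures (the distance and retreat rate below are assumptions, not Lyell's own data):

```python
# How long until the falls retreat to the lake, at a constant rate?
distance_to_lake_erie_m = 30_000   # falls lie roughly 30 km downstream of Lake Erie
retreat_rate_m_per_year = 1.8      # roughly the measured historical average retreat

years_to_reach_lake = distance_to_lake_erie_m / retreat_rate_m_per_year
print(f"{years_to_reach_lake:,.0f} years")
```

With these figures the answer is on the order of 15,000–20,000 years, the same order of magnitude as Lyell's estimate; the point is that a present-day rate, extrapolated uniformly, yields a bounded prediction about the future (or past).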

Modern geologists do not apply uniformitarianism in the same way as Lyell. They question whether rates
of processes were uniform through time, and whether only those values measured during the history of
geology should be accepted.[34] The present may not be a long enough key to penetrate the deep lock of the
past.[35] Geologic processes may have been active at different rates in the past that humans have not
observed. "By force of popularity, uniformity of rate has persisted to our present day. For more than a
century, Lyell's rhetoric conflating axiom with hypotheses has descended in unmodified form. Many
geologists have been stifled by the belief that proper methodology includes an a priori commitment to
gradual change, and by a preference for explaining large-scale phenomena as the concatenation of
innumerable tiny changes."[28]
The current consensus is that Earth's history is a slow, gradual process punctuated by occasional natural
catastrophic events that have affected Earth and its inhabitants.[36] In practice, uniformitarianism is
reduced from Lyell's conflation to just the two philosophical assumptions. This is also known as the principle of geological
actualism, which states that all past geological action was like all present geological action. The principle
of actualism is the cornerstone of paleoecology.

Pangea, also spelled Pangaea, in early geologic time, a supercontinent that incorporated almost all the
landmasses on Earth.
Pangea was surrounded by a global ocean called Panthalassa, and it was fully assembled by the Early
Permian Period (some 299 million to 272 million years ago). The supercontinent began to break apart
about 200 million years ago, during the Early Jurassic Period (201 million to 174 million years ago),
eventually forming the modern continents and the Atlantic and Indian oceans. Pangea's existence was
first proposed in 1912 by German meteorologist Alfred Wegener as a part of his theory of continental
drift. Its name is derived from the Greek pangaia, meaning "all the Earth".
During the Early Permian, the northwestern coastline of the ancient continent Gondwana (a
paleocontinent that would eventually fragment to become South America, India, Africa, Australia,
and Antarctica) collided with and joined the southern part of Euramerica (a paleocontinent made up
of North America and southern Europe). With the fusion of the Angaran craton (the stable interior
portion of a continent) of Siberia to that combined landmass during the middle of the Early Permian, the
assembly of Pangea was complete. Cathaysia, a landmass comprising the former tectonic plates of North
and South China, was not incorporated into Pangea. Rather, it formed a separate, much smaller,
continent within the global ocean Panthalassa.
The mechanism for the breakup of Pangea is now explained in terms of plate tectonics rather than
Wegener's outmoded concept of continental drift, which simply stated that Earth's continents were
once joined together into the supercontinent Pangea that lasted for most of geologic time. Plate
tectonics states that Earth's outer shell, or lithosphere, consists of large rigid plates that move apart
at oceanic ridges, come together at subduction zones, or slip past one another along fault lines. The
pattern of seafloor spreading indicates that Pangea did not break apart all at once but rather
fragmented in distinct stages. Plate tectonics also postulates that the continents joined with one
another and broke apart several times in Earth's geologic history.
The first oceans formed from the breakup, some 180 million years ago, were the central Atlantic
Ocean between northwestern Africa and North America and the southwestern Indian Ocean between
Africa and Antarctica. The South Atlantic Ocean opened about 140 million years ago as Africa separated
from South America. About the same time, India separated from Antarctica and Australia, forming the
central Indian Ocean. Finally, about 80 million years ago, North America separated from Europe,
Australia began to rift away from Antarctica, and India broke away from Madagascar. India eventually
collided with Eurasia approximately 50 million years ago, forming the Himalayas.
During Earth's long history, there probably have been several Pangea-like supercontinents. The oldest of
those supercontinents is called Rodinia and was formed during Precambrian time some one billion years
ago. Another Pangea-like supercontinent, Pannotia, was assembled 600 million years ago, at the end of
the Precambrian. Present-day plate motions are bringing the continents together once again. Africa has
begun to collide with southern Europe, and the Australian plate is now colliding with Southeast Asia.
Within the next 50 million years, Africa and Australia will merge with Eurasia to form a supercontinent
that approaches Pangean proportions. That episodic assembly of the world's landmasses has been called
the supercontinent cycle, or Wegenerian cycle, in honour of Wegener (see plate tectonics:
Supercontinent cycle).

Alfred Lothar Wegener (November 1, 1880 – November 1930) was a German polar
researcher, geophysicist and meteorologist.
During his lifetime he was primarily known for his achievements in meteorology and as a pioneer of
polar research, but today he is most remembered as the originator of the theory of continental drift by
hypothesizing in 1912 that the continents are slowly drifting around the Earth (Kontinentalverschiebung).
His hypothesis was controversial and not widely accepted until the 1950s, when numerous discoveries
such as palaeomagnetism provided strong support for continental drift, and thereby a substantial basis
for today's model of plate tectonics.[1][2] Wegener was involved in several expeditions to Greenland to
study polar air circulation before the existence of the jet stream was accepted. Expedition participants
made many meteorological observations and achieved the first-ever overwintering on the inland
Greenland ice sheet as well as the first-ever boring of ice cores on a moving Arctic glacier.
Contents
1 Biography
  1.1 Early life and education
  1.2 First Greenland expedition and years in Marburg
  1.3 Second Greenland expedition
  1.4 World War I
  1.5 Postwar period and third expedition
  1.6 Fourth and last expedition
  1.7 Death
2 Continental drift theory
  2.1 Reaction
3 Modern developments
4 Awards and honors
5 See also
6 References
7 Selected works
8 External links

Biography
Early life and education
On November 1, 1880, Alfred Wegener was born in Berlin as the youngest of five children in a
clergyman's family. His father, Richard Wegener, was a theologian and teacher of classical languages at
the Berlinisches Gymnasium zum Grauen Kloster. In 1886 his family purchased a former manor house
near Rheinsberg, which they used as a vacation home. Today there is an Alfred Wegener Memorial site
and tourist information office in a nearby building that was once the local schoolhouse.[3]

Commemorative plaque on Wegener's former school in Wallstrasse


Wegener attended school at the Köllnisches Gymnasium on Wallstrasse in Berlin (a fact which is
memorialized on a plaque on this protected building, now a school of music), graduating as the best in
his class. Afterward he studied physics, meteorology and astronomy in Berlin, Heidelberg and Innsbruck.
From 1902 to 1903, during his studies, he was an assistant at the Urania astronomical observatory. He
obtained a doctorate in astronomy in 1905 based on a dissertation written under the supervision
of Julius Bauschinger at Friedrich Wilhelms University (today Humboldt University), Berlin. Wegener had
always maintained a strong interest in the developing fields of meteorology and climatology, and his
studies afterwards focused on these disciplines.
In 1905 Wegener became an assistant at the Aeronautisches Observatorium Lindenberg near Beeskow.
He worked there with his brother Kurt, two years his senior, who was likewise a scientist with an
interest in meteorology and polar research. The two pioneered the use of weather balloons to track air
masses. On a balloon ascent undertaken to carry out meteorological investigations and to test a celestial
navigation method using a particular type of quadrant (Libellenquadrant), the Wegener brothers set a
new record for a continuous balloon flight, remaining aloft 52.5 hours from April 5–7, 1906.[4]

First Greenland expedition and years in Marburg


In that same year 1906, Wegener participated in the first of his four Greenland expeditions, later
regarding this experience as marking a decisive turning point in his life. The expedition was led by the
Dane Ludvig Mylius-Erichsen and charged with studying the last unknown portion of the northeastern
coast of Greenland. During the expedition Wegener constructed the first meteorological station in
Greenland near Danmarkshavn, where he launched kites and tethered balloons to make meteorological
measurements in an Arctic climatic zone. Here Wegener also made his first acquaintance with death in a
wilderness of ice when the expedition leader and two of his colleagues died on an exploratory trip
undertaken with sled dogs.
After his return in 1908 and until World War I, Wegener was a lecturer in meteorology, applied
astronomy and cosmic physics at the University of Marburg. His students and colleagues in Marburg
particularly valued his ability to clearly and understandably explain even complex topics and current
research findings without sacrificing precision. His lectures formed the basis of what was to become a
standard textbook in meteorology, first written in 1909/1910: Thermodynamik der
Atmosphäre (Thermodynamics of the Atmosphere), in which he incorporated many of the results of the
Greenland expedition.
On 6 January 1912 he publicized his first thoughts about continental drift in a lecture at a session of the
Geologischen Vereinigung at the Senckenberg-Museum, Frankfurt am Main and in three articles in the
journal Petermanns Geographischen Mitteilungen.[5]
Second Greenland expedition
After a stopover in Iceland to purchase and test ponies as pack animals, the expedition arrived in
Danmarkshavn. Even before the trip to the inland ice began the expedition was almost annihilated by a
calving glacier. The Danish expedition leader, Johan Peter Koch, broke his leg when he fell into a glacier
crevasse and spent months recovering in a sickbed. Wegener and Koch were the first to winter on the
inland ice in northeast Greenland.[6] Inside their hut they drilled to a depth of 25 m with an auger. In
summer 1913 the team crossed the inland ice, the four expedition participants covering a distance twice
as long as Fridtjof Nansen's southern Greenland crossing in 1888. Only a few kilometers from the
western Greenland settlement of Kangersuatsiaq the small team ran out of food while struggling to find
their way through difficult glacial breakup terrain. But at the last moment, after the last pony and dog
had been eaten, they were picked up at a fjord by the clergyman of Upernavik, who just happened to be
visiting a remote congregation at the time.
Later in 1913, after his return, Wegener married Else Köppen, the daughter of his former teacher and
mentor, the meteorologist Wladimir Köppen. The young pair lived in Marburg, where Wegener resumed
his university lectureship.
World War I

As an infantry reserve officer, Wegener was immediately called up when war began in 1914. On the war
front in Belgium he experienced fierce fighting, but his term lasted only a few months: after being
wounded twice he was declared unfit for active service and assigned to the army weather service. This
activity required him to travel constantly between various weather stations in Germany, in the Balkans,
on the Western Front and in the Baltic region.
Nevertheless, he was able in 1915 to complete the first version of his major work, Die Entstehung der
Kontinente und Ozeane (The Origin of Continents and Oceans). His brother Kurt remarked that Alfred
Wegener's motivation was to reestablish the connection between geophysics on the one hand
and geography and geology on the other, which had become completely ruptured because of the
specialized development of these branches of science.
Interest in this small publication was low, however, partly because of wartime chaos. By the end of the war
Wegener had published almost 20 additional meteorological and geophysical papers in which he
repeatedly embarked for new scientific frontiers. In 1917 he undertook a scientific investigation of
the Treysa meteorite.
Postwar period and third expedition
Wegener obtained a position as a meteorologist at the German Naval Observatory (Deutsche Seewarte)
and moved to Hamburg with his wife and their two daughters. In 1921 he was appointed senior lecturer
at the new University of Hamburg. From 1919 to 1923 Wegener worked on Die Klimate der geologischen
Vorzeit (The Climates of the Geological Past), published together with his father-in-law, Wladimir
Köppen. In 1922 the third, fully revised edition of The Origin of Continents and Oceans appeared, and
discussion began on his theory of continental drift, first in the German language area and later
internationally. The response of most experts was withering criticism.
In 1924 Wegener was appointed to a professorship in meteorology and geophysics in Graz, which finally
provided him with a secure position for himself and his family. He concentrated on the physics and
optics of the atmosphere as well as the study of tornadoes. Scientific assessment of his second Greenland
expedition (ice measurements, atmospheric optics, etc.) continued to the end of the 1920s.
In November 1926 Wegener presented his continental drift theory at a symposium of the American
Association of Petroleum Geologists in New York City, again earning rejection from everyone but the
chairman. Three years later the fourth and final expanded edition of The Origin of Continents and
Oceans appeared.
In 1929 Wegener embarked on his third trip to Greenland, which laid the groundwork for a later main
expedition and included a test of an innovative, propeller-driven snowmobile.
Fourth and last expedition

Wegener (left) and Villumsen (right) in Greenland; November 1, 1930.


Wegener's last Greenland expedition was in 1930. The 14 participants under his leadership were to
establish three permanent stations from which the thickness of the Greenland ice sheet could be
measured and year-round Arctic weather observations made. Wegener felt personally responsible for
the expedition's success, as the German government had contributed $120,000 ($1.5 million in 2007
dollars). Success depended on enough provisions being transferred from West camp to Eismitte ("mid-ice")
for two men to winter there, and this was a factor in the decision that led to his death. Owing to a
late thaw, the expedition was six weeks behind schedule and, as summer ended, the men
at Eismitte sent a message that they had insufficient fuel and so would return on October 20.

Vehicles used by the 1930 expedition (stored).


On September 24, although the route markers were by now largely buried under snow, Wegener set out
with thirteen Greenlanders and his meteorologist Fritz Loewe to supply the camp by dog sled. During
the journey the temperature reached −60 °C (−76 °F), and Loewe's toes became so frostbitten they had
to be amputated with a penknife without anesthetic. Twelve of the Greenlanders returned to West
camp. On October 19, the remaining three members of the expedition reached Eismitte. There being
only enough supplies for three at Eismitte, Wegener and Rasmus Villumsen took two dog sleds and
made for West camp. They took no food for the dogs and killed them one by one to feed the rest until

they could run only one sled. While Villumsen rode the sled, Wegener had to use skis. They never
reached the camp. The expedition was completed by his brother, Kurt Wegener.
This expedition inspired the Greenland expedition episode of Adam Melfort in John Buchan's 1933
novel A Prince of the Captivity.
Death
Six months later, on May 12, 1931, Wegener's body was found halfway between Eismitte and West
camp. It had been buried (by Villumsen) with great care and a pair of skis marked the grave site.
Wegener had been fifty years of age and a heavy smoker and it was believed that he had died of heart
failure brought on by overexertion. His body was reburied in the same spot by the team that found him
and the grave was marked with a large cross. After burying Wegener, Villumsen had resumed his
journey to West camp but was never seen again. Villumsen was twenty-three when he died, and it is
estimated that his body, and Wegener's diary, now lie under more than 100 metres (330 ft) of
accumulated ice and snow.
Continental drift theory
Alfred Wegener first thought of this idea by noticing that the different large landmasses of the Earth
almost fit together like a jigsaw puzzle. The Continental shelf of the Americas fit closely to Africa and
Europe, and Antarctica, Australia, India and Madagascar fit next to the tip of Southern Africa. But
Wegener only took action after reading a paper in 1911 and seeing that a flooded land-bridge
contradicts isostasy.[7] Wegener's main interest was meteorology, and he wanted to join the Denmark–Greenland
expedition scheduled for mid-1912. He presented his continental drift hypothesis on January
6, 1912. He analyzed either side of the Atlantic Ocean for rock type, geological structures and fossils. He
noticed that there was a significant similarity between matching sides of the continents, especially
in fossil plants.

Fossil patterns across continents (Gondwana).


From 1912, Wegener publicly advocated the existence of "continental drift", arguing that all the
continents were once joined together in a single landmass and have drifted apart. He supposed that the
mechanism might be the centrifugal force of the Earth's rotation ("Polflucht") or
astronomical precession. Wegener also speculated on sea-floor spreading and the role
of the mid-ocean ridges, stating: "the Mid-Atlantic Ridge ... zone in which the floor of the Atlantic, as it
keeps spreading, is continuously tearing open and making space for fresh, relatively fluid and hot sima
[rising] from depth."[8] However, he did not pursue these ideas in his later works.
In 1915, in The Origin of Continents and Oceans (Die Entstehung der Kontinente und Ozeane), Wegener
drew together evidence from various fields to advance the theory that there had once been a giant
continent which he named "Urkontinent"[9] (German for "primal continent", analogous to the Greek
"Pangaea",[10] meaning "All-Lands" or "All-Earth"). Expanded editions during the 1920s presented further
evidence. The last edition, just before his untimely death, revealed the significant observation that
shallower oceans were geologically younger.

Wegener during J. P. Koch's expedition of 1912–1913 in the winter base "Borg".
Reaction
In his work, Wegener presented a large amount of observational evidence in support of continental
drift, but the mechanism remained elusive, partly because Wegener's estimate of the velocity of
continental motion, 250 cm/year, was far too high.[11] (The currently accepted rate for the separation of the
Americas from Europe and Africa is about 2.5 cm/year.)[12]
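The gap between Wegener's estimate and the modern value can be checked with simple arithmetic: at the measured rate, opening an Atlantic-width ocean takes on the order of the ~200-million-year timescale of Pangea's breakup, whereas Wegener's rate would imply only a couple of million years. (The Atlantic width below is a rough assumed figure.)

```python
# Time to open an ocean = width / spreading rate.
atlantic_width_cm = 5_000 * 1000 * 100   # ~5,000 km, converted to cm

def opening_time_myr(rate_cm_per_year: float) -> float:
    """Years (in millions) to open the full width at a constant rate."""
    return atlantic_width_cm / rate_cm_per_year / 1e6

print(opening_time_myr(2.5))    # modern measured rate -> 200.0 Myr
print(opening_time_myr(250.0))  # Wegener's estimate   -> 2.0 Myr
```

The modern rate is thus consistent with the geological record, which is part of why the measured value carried so much weight once it became available.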
While his ideas attracted a few early supporters such as Alexander Du Toit from South Africa and Arthur
Holmes in England,[13] the hypothesis was initially met with skepticism from geologists who viewed
Wegener as an outsider, and were resistant to change.[13] The one American edition of Wegener's work,
published in 1925, which was written in "a dogmatic style that often results from German
translations",[13] was received so poorly that the American Association of Petroleum
Geologists organized a symposium specifically in opposition to the continental drift hypothesis.[14] The
opponents argued, as did the Leipziger geologist Franz Kossmat, that the oceanic crust was too firm for
the continents to "simply plough through".
Wegener's fit of the supercontinent at the 200m isobath (the continental shelves), an idea he had held since
at least 1910, was a good match.[13] Part of the reason Wegener's ideas were not initially accepted was
his proposed fit of the continents, with Charles Schuchert commenting:
During this vast time [of the split of Pangea] the sea waves have been continuously pounding against
Africa and Brazil and in many places rivers have been bringing into the ocean great amounts of eroded

material, yet everywhere the geographic shore lines are said to have remained practically unchanged! It
apparently makes no difference to Wegener how hard or how soft are the rocks of these shore lines,
what are their geological structures that might aid or retard land or marine erosion, how often the
strand lines have been elevated or depressed, and how far peneplanation has gone on during each
period of continental stability. Furthermore, sea-level in itself has not been constant, especially during
the Pleistocene, when the lands were covered by millions of square miles of ice made from water
subtracted out of the oceans. In the equatorial regions, this level fluctuated three times during the
Pleistocene, and during each period of ice accumulation the sea-level sank about 250 feet.[citation needed]
The comment was based on the misapprehension that Wegener's fit was judged along the current
coastline, whereas Wegener used the 200m isobath. Wegener, who was in the audience, made no
attempt to defend his work (possibly because of an inadequate command of the English language).
Supporters such as du Toit also contributed to this misunderstanding of the method of continental
fitting, commenting (after Wegener's death) "most persons view the continental shelf as an integral part
of the continental block, and criticise Wegener for endeavoring to fit together the masses by their
present coastlines instead of by the submerged margins of the shelves."[13]
In 1943 George Gaylord Simpson wrote a vehement attack on the theory (as well as the rival theory of
sunken land bridges) and put forward his own permanentist views.[15] Alexander du Toit wrote a
rejoinder in the following year.[16]
Modern developments

The tectonic plates of the world were mapped in the second half of the 20th century.
In the early 1950s, the new science of paleomagnetism pioneered at the University of Cambridge by S. K.
Runcorn and at Imperial College by P. M. S. Blackett was soon producing data in favour of Wegener's
theory. By early 1953 samples taken from India showed that the country had previously been in the
Southern hemisphere as predicted by Wegener. By 1959, the theory had enough supporting data that
minds were starting to change, particularly in the United Kingdom where, in 1964, the Royal Society held
a symposium on the subject.[17]

Additionally, the 1960s saw several developments in geology, notably the discoveries of seafloor
spreading and Wadati–Benioff zones, which led to the rapid resurrection of the continental drift hypothesis and
its direct descendant, the theory of plate tectonics. Alfred Wegener was quickly recognized as the
founding father of one of the major scientific revolutions of the 20th century.
With the advent of the Global Positioning System (GPS), it became possible to measure continental drift
directly.[18]
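In a GPS-based measurement, a drift rate can be estimated as the least-squares slope of repeated position fixes over time. A minimal sketch of that calculation; the fixes below are made-up sample data, not real GPS measurements:

```python
# (year, displacement in mm along the plate-motion direction) -- sample data
fixes = [(2000, 0.0), (2005, 125.0), (2010, 250.0), (2015, 375.0)]

n = len(fixes)
mean_t = sum(t for t, _ in fixes) / n
mean_x = sum(x for _, x in fixes) / n
# Least-squares slope = average drift rate in mm per year
rate_mm_per_year = sum((t - mean_t) * (x - mean_x) for t, x in fixes) / sum(
    (t - mean_t) ** 2 for t, _ in fixes)
print(rate_mm_per_year)  # 25.0 mm/yr, i.e. 2.5 cm/yr
```

With these sample values the slope comes out at 25 mm/yr, matching the ~2.5 cm/yr separation rate quoted above for the Americas and Europe/Africa.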
Awards and honors
The Alfred Wegener Institute for Polar and Marine Research in Bremerhaven, Germany, was established
in 1980 on his centenary. It awards the Wegener Medal in his name.[19] The crater Wegener on the
Moon and the crater Wegener on Mars, as well as the asteroid 29227 Wegener and the peninsula where
he died in Greenland (Wegener Peninsula near Uummannaq, 71°12′N 51°50′W), are named after him.[20]
The European Geosciences Union sponsors an Alfred Wegener Medal & Honorary Membership "for
scientists who have achieved exceptional international standing in atmospheric, hydrological or ocean
sciences, defined in their widest senses, for their merit and their scientific achievements."[21]

Definition
An unconformity is the contact between sedimentary rocks that are significantly different in age, or
between sedimentary rocks and older, eroded igneous or metamorphic rocks. Unconformities represent
gaps in the geologic record: periods of time that are not represented by any rocks.
How Unconformities Form
Unconformities happen for two reasons: sediment deposition stopped for a considerable time and/or
existing rocks were eroded prior to being covered by younger sediment. There is no single time span
represented by an unconformity. It depends on how long erosion occurred or for how long deposition
ceased.
Some unconformities are easier to identify than others. For example, the contact between a very old
granite and a younger sandstone is pretty obvious. On the other hand, figuring out whether two
limestone beds are significantly different in age might require more investigation.
Types of Unconformities
The type of unconformity (in other words, what we call it) is based upon what rock types are involved;
whether it is the result of erosion or no deposition; and whether older sedimentary rock layers were
tilted prior to being eroded.
Angular Unconformity

An angular unconformity (at arrow) at Siccar Point, Scotland, made famous by James Hutton. Dave Souza
photo; modified and used under Creative Commons Attribution Share-Alike 3.0 Unported license.

An angular unconformity is the result of erosion of tilted layers of sedimentary rock. The erosion surface
is buried under younger, horizontal layers of sedimentary rock. Hutton's Unconformity at Siccar Point,
Scotland, is probably the most famous example, where tilted beds of eroded sandstone are covered by
horizontal beds of younger sandstone.
Disconformity

A disconformity (at arrow) separates two horizontal beds of sedimentary rock. James Stuby photo,
released to the public domain.

A disconformity also involves erosion of sedimentary rocks. But here, the older rock layers were not
tilted before they were eroded. In the least obvious case, the disconformity (the erosion surface) is
parallel to the layering of both rock layers. Those can be difficult to identify, and usually require dating
of the rocks using fossils or some other method in order to know with certainty that the two rock layers
are substantially different in age.
In many instances, the erosion surface is irregular and the unconformity undulates somewhat, cutting
across the rock layers. Those types of disconformities are more easily recognizable.

Sonar
From Wikipedia, the free encyclopedia
This article is about underwater sound propagation. For atmospheric sounding, see SODAR. For the Zeiss
lens, see Sonnar. For other uses, see Sonar (disambiguation).

French F70 type frigates (here, La Motte-Picquet) are fitted with VDS (Variable Depth Sonar) type
DUBV43 or DUBV43C towed sonars

Sonar image of shipwreck of Virsaitis in Estonia.


Sonar (originally an acronym for SOund Navigation And Ranging) is a technique that
uses sound propagation (usually underwater, as in submarine navigation) to navigate, communicate with
or detect objects on or under the surface of the water, such as other vessels. Two types of technology
share the name "sonar": passive sonar is essentially listening for the sound made by vessels; active sonar
is emitting pulses of sound and listening for echoes. Sonar may be used as a means of acoustic
location and of measurement of the echo characteristics of "targets" in the water. Acoustic location in
air was used before the introduction of radar. Sonar may also be used in air for robot navigation,
and SODAR (an upward looking in-air sonar) is used for atmospheric investigations. The term sonar is
also used for the equipment used to generate and receive the sound. The acoustic frequencies used in
sonar systems vary from very low (infrasonic) to extremely high (ultrasonic). The study of underwater
sound is known as underwater acoustics or hydroacoustics.
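The active-sonar principle described above reduces to one formula: range equals sound speed times round-trip echo time, divided by two (the pulse travels out and back). A minimal sketch, assuming a typical seawater sound speed of 1,500 m/s:

```python
# Active-sonar ranging: range = (sound speed * round-trip time) / 2.
SOUND_SPEED_M_PER_S = 1500.0  # assumed typical speed of sound in seawater

def echo_range_m(round_trip_s: float) -> float:
    """Distance to a target whose echo returns after round_trip_s seconds."""
    return SOUND_SPEED_M_PER_S * round_trip_s / 2.0

print(echo_range_m(4.0))  # an echo after 4 s -> target ~3000 m away
```

Real systems refine this with measured sound-speed profiles, since the speed of sound in water varies with temperature, salinity, and depth.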

Contents
1 History
  1.1 ASDIC
  1.2 SONAR
  1.3 Materials and designs
2 Active sonar
  2.1 Project Artemis
  2.2 Transponder
  2.3 Performance prediction
  2.4 Hand-held sonar for use by a diver
3 Passive sonar
  3.1 Identifying sound sources
  3.2 Noise limitations
  3.3 Performance prediction
4 Performance factors
  4.1 Sound propagation
  4.2 Scattering
  4.3 Target characteristics
  4.4 Countermeasures
5 Military applications
  5.1 Anti-submarine warfare
  5.2 Torpedoes
  5.3 Mines
  5.4 Mine countermeasures
  5.5 Submarine navigation
  5.6 Aircraft
  5.7 Underwater communications
  5.8 Ocean surveillance
  5.9 Underwater security
  5.10 Hand-held sonar
  5.11 Intercept sonar
6 Civilian applications
  6.1 Fisheries
  6.2 Echo sounding
  6.3 Net location
  6.4 ROV and UUV
  6.5 Vehicle location
  6.6 Prosthesis for the visually impaired
7 Scientific applications
  7.1 Biomass estimation
  7.2 Wave measurement
  7.3 Water velocity measurement
  7.4 Bottom type assessment
  7.5 Bathymetric mapping
  7.6 Sub-bottom profiling
  7.7 Synthetic aperture sonar
  7.8 Parametric sonar
  7.9 Sonar in extraterrestrial contexts
8 Effect of sonar on marine life
  8.1 Effect on marine mammals
  8.2 Effect on fish
9 Frequencies and resolutions
10 See also
11 Notes
12 References
13 Bibliography
14 Further reading
15 External links

History
Although some animals (dolphins and bats) have used sound for communication and object detection
for millions of years, use by humans in the water is initially recorded by Leonardo da Vinci in 1490: a
tube inserted into the water was said to be used to detect vessels by placing an ear to the tube.[1]
In the 19th century an underwater bell was used as an ancillary to lighthouses to provide warning of
hazards.
The use of sound to 'echo locate' underwater in the same way as bats use sound for aerial navigation
seems to have been prompted by the Titanic disaster of 1912. The world's first patent for an underwater
echo ranging device was filed at the British Patent Office by English meteorologist Lewis Richardson a
month after the sinking of the Titanic,[2] and a German physicist Alexander Behm obtained a patent for
an echo sounder in 1913.
The Canadian engineer Reginald Fessenden, while working for the Submarine Signal Company in Boston,
built an experimental system beginning in 1912, a system later tested in Boston Harbor, and finally in
1914 from the U.S. Revenue (now Coast Guard) Cutter Miami on the Grand
Banks off Newfoundland Canada.[2][3] In that test, Fessenden demonstrated depth sounding, underwater
communications (Morse code) and echo ranging (detecting an iceberg at two miles (3 km)
range).[4][5] The so-called Fessenden oscillator, at ca. 500 Hz frequency, was unable to determine the
bearing of the berg due to the 3 metre wavelength and the small dimension of the transducer's radiating
face (less than 1 metre in diameter). The ten Montreal-built British H class submarines launched in 1915
were equipped with a Fessenden oscillator.[6]
During World War I the need to detect submarines prompted more research into the use of sound. The
British made early use of underwater listening devices called hydrophones, while the French
physicist Paul Langevin, working with a Russian immigrant electrical engineer, Constantin Chilowsky,
worked on the development of active sound devices for detecting submarines in 1915.

Although piezoelectric and magnetostrictive transducers later superseded the electrostatic transducers
they used, this work influenced future designs. Lightweight sound-sensitive plastic film and fibre optics
have been used for hydrophones (acousto-electric transducers for in-water use), while Terfenol-D and
PMN (lead magnesium niobate) have been developed for projectors.
ASDIC

ASDIC display unit ca. 1944


In 1916, under the British Board of Invention and Research, Canadian physicist Robert William Boyle took
on the active sound detection project with A B Wood, producing a prototype for testing in mid-1917.
This work, for the Anti-Submarine Division of the British Naval Staff, was undertaken in utmost secrecy,
and used quartz piezoelectric crystals to produce the world's first practical underwater active sound
detection apparatus. To maintain secrecy, no mention of sound experimentation or quartz was made; the word used to describe the early work ('supersonics') was changed to 'ASD'ics, and the quartz
material to 'ASD'ivite: hence the British acronym ASDIC. In 1939, in response to a question from
the Oxford English Dictionary, the Admiralty made up the story that it stood for 'Allied Submarine
Detection Investigation Committee', and this is still widely believed,[7]though no committee bearing this
name has been found in the Admiralty archives.[8]
By 1918, both France and Britain had built prototype active systems. The British tested their ASDIC
on HMS Antrim in 1920, and started production in 1922. The 6th Destroyer Flotilla had ASDIC-equipped
vessels in 1923. An anti-submarine school, HMS Osprey, and a training flotilla of four vessels were
established on Portland in 1924. The US Sonar QB set arrived in 1931.
By the outbreak of World War II, the Royal Navy had five sets for different surface ship classes, and
others for submarines, incorporated into a complete anti-submarine attack system. The effectiveness of
early ASDIC was hamstrung by the use of the depth charge as an anti-submarine weapon. This required
an attacking vessel to pass over a submerged contact before dropping charges over the stern, resulting
in a loss of ASDIC contact in the moments leading up to attack. The hunter was effectively firing blind,
during which time a submarine commander could take evasive action. This situation was remedied by
using several ships cooperating and by the adoption of "ahead throwing weapons", such
as Hedgehog and later Squid, which projected warheads at a target ahead of the attacker and thus still
in ASDIC contact. Developments during the war resulted in British ASDIC sets which used several
different shapes of beam, continuously covering blind spots. Later, acoustic torpedoes were used.
At the start of World War II, British ASDIC technology was transferred for free to the United States.
Research on ASDIC and underwater sound was expanded in the UK and in the US. Many new types of
military sound detection were developed. These included sonobuoys, first developed by the British in
1944 under the codename High Tea, dipping/dunking sonar and mine detection sonar. This work formed
the basis for post war developments related to countering the nuclear submarine. Work on sonar had
also been carried out in the Axis countries, notably in Germany, which included countermeasures. At the
end of World War II this German work was assimilated by Britain and the US. Sonars have continued to
be developed by many countries, including Russia, for both military and civil uses. In recent years the
major military development has been the increasing interest in low frequency active systems.
SONAR
During the 1930s American engineers developed their own underwater sound detection technology and
important discoveries were made, such as thermoclines, that would help future development.[9] After
technical information was exchanged between the two countries during the Second World War,
Americans began to use the term SONAR for their systems, coined as the equivalent of RADAR.
Materials and designs
There was little progress in development from 1915 to 1940. In 1940, the US sonars typically consisted
of a magnetostrictive transducer and an array of nickel tubes connected to a 1-foot-diameter steel plate
attached back to back to a Rochelle salt crystal in a spherical housing. This assembly penetrated the ship
hull and was manually rotated to the desired angle. The piezoelectric Rochelle salt crystal had better
parameters, but the magnetostrictive unit was much more reliable. Early WW2 losses prompted rapid
research in the field, pursuing both improvements in magnetostrictive transducer parameters and
Rochelle salt reliability. Ammonium dihydrogen phosphate (ADP), a superior alternative, was found as a
replacement for Rochelle salt; the first application was a replacement of the 24 kHz Rochelle salt
transducers. Within nine months, Rochelle salt was obsolete. The ADP manufacturing facility grew from
a few dozen personnel in early 1940 to several thousand in 1942.
One of the earliest applications of ADP crystals was hydrophones for acoustic mines; the crystals were
specified for a low-frequency cutoff at 5 Hz, withstanding mechanical shock for deployment from aircraft
from 10,000 ft, and the ability to survive neighbouring mine explosions. One of the key features of ADP
reliability is its zero-aging characteristic; the crystal keeps its parameters even over prolonged storage.
Another application was for acoustic homing torpedoes. Two pairs of directional hydrophones were
mounted on the torpedo nose, in the horizontal and vertical plane; the difference signals from the pairs
were used to steer the torpedo left-right and up-down. A countermeasure was developed: the targeted
submarine discharged an effervescent chemical, and the torpedo went after the noisier fizzy decoy. The
counter-countermeasure was a torpedo with active sonar: a transducer was added to the torpedo
nose, and the microphones listened for its reflected periodic tone bursts. The transducers
comprised identical rectangular crystal plates arranged in diamond-shaped areas in staggered rows.
Passive sonar arrays for submarines were developed from ADP crystals. Several crystal assemblies were
arranged in a steel tube, vacuum-filled with castor oil, and sealed. The tubes then were mounted in
parallel arrays.
The standard US Navy scanning sonar at the end of World War II operated at 18 kHz, using an array
of ADP crystals. The desired longer range, however, required the use of lower frequencies. The required
dimensions were too big for ADP crystals, so in the early 1950s magnetostrictive and barium
titanate piezoelectric systems were developed, but these had problems achieving uniform impedance
characteristics and the beam pattern suffered. Barium titanate was then replaced with more stable lead
zirconate titanate (PZT), and the frequency was lowered to 5 kHz. The US fleet used this material in the
AN/SQS-23 sonar for several decades. The SQS-23 sonar first used magnetostrictive nickel transducers,
but these weighed several tons and nickel was expensive and considered a critical material; piezoelectric
transducers were therefore substituted. The sonar was a large array of 432 individual transducers. At
first the transducers were unreliable, showing mechanical and electrical failures and deteriorating soon
after installation; they were also produced by several vendors, had different designs, and their
characteristics were different enough to impair the array's performance. The policy to allow repair of
individual transducers was then sacrificed, and "expendable modular design", sealed non-repairable
modules, was chosen instead, eliminating the problem with seals and other extraneous mechanical
parts.[10]
The Imperial Japanese Navy at the onset of WW2 used projectors based on quartz. These were big and
heavy, especially if designed for lower frequencies; the one for the Type 91 set, operating at 9 kHz, had a
diameter of 30 inches and was driven by an oscillator with 5 kW power and 7 kV of output amplitude.
The Type 93 projectors consisted of solid sandwiches of quartz, assembled into spherical cast
iron bodies. The Type 93 sonars were later replaced with Type 3, which followed German design and
used magnetostrictive projectors; the projectors consisted of two rectangular identical independent
units in a cast iron rectangular body about 16 by 9 inches. The exposed area was half a wavelength wide
and three wavelengths high. The magnetostrictive cores were made from 4 mm stampings of nickel, and
later of an iron-aluminium alloy with aluminium content between 12.7 and 12.9%. The power was
provided from a 2 kW source at 3.8 kV, with polarization from a 20 V/8 A DC source.
The passive hydrophones of the Imperial Japanese Navy were based on moving coil design, Rochelle salt
piezo transducers, and carbon microphones.[11]
Magnetostrictive transducers were pursued after WW2 as an alternative to piezoelectric ones. Nickel
scroll-wound ring transducers were used for high-power low-frequency operations, with size up to
13 feet in diameter, probably the largest individual sonar transducers ever. The advantage of metals is
their high tensile strength and low input electrical impedance, but they have electrical losses and lower
coupling coefficient than PZT, whose tensile strength can be increased by prestressing. Other materials
were also tried; nonmetallic ferrites were promising for their low electrical conductivity resulting in
low eddy current losses, and Metglas offered a high coupling coefficient, but they were inferior to PZT overall.
In the 1970s, compounds of rare earths and iron were discovered with superior magnetomechanic
properties, namely the Terfenol-D alloy. This made possible new designs, e.g. a hybrid magnetostrictive-piezoelectric transducer. The most recent such material is Galfenol.
Other types of transducers include variable reluctance (or moving armature, or electromagnetic)
transducers, where magnetic force acts on the surfaces of gaps, and moving coil (or electrodynamic)
transducers, similar to conventional speakers; the latter are used in underwater sound calibration, due
to their very low resonance frequencies and flat broadband characteristics above them.[12]
Active sonar

Principle of an active sonar


Active sonar uses a sound transmitter and a receiver. When the two are in the same place it is
monostatic operation. When the transmitter and receiver are separated it is bistatic operation. When
more transmitters (or more receivers) are used, again spatially separated, it is multistatic operation.
Most sonars are used monostatically with the same array often being used for transmission and
reception. Active sonobuoy fields may be operated multistatically.
Active sonar creates a pulse of sound, often called a "ping", and then listens for reflections (echo) of the
pulse. This pulse of sound is generally created electronically using a sonar projector consisting of a signal
generator, power amplifier and electro-acoustic transducer/array. A beamformer is usually employed to
concentrate the acoustic power into a beam, which may be swept to cover the required search angles.
Generally, the electro-acoustic transducers are of the Tonpilz type and their design may be optimised to
achieve maximum efficiency over the widest bandwidth, in order to optimise performance of the overall
system. Occasionally, the acoustic pulse may be created by other means, e.g. (1) chemically using
explosives, or (2) airguns or (3) plasma sound sources.
To measure the distance to an object, the time from transmission of a pulse to reception is measured
and converted into a range by knowing the speed of sound. To measure the bearing,
several hydrophones are used, and the set measures the relative arrival time to each, or with an array of
hydrophones, by measuring the relative amplitude in beams formed through a process
called beamforming. Use of an array reduces the spatial response, so to provide wide
cover, multibeam systems are used. The target signal (if present) together with noise is then passed
through various forms of signal processing, which for simple sonars may be just energy measurement. It
is then presented to some form of decision device that calls the output either the required signal or
noise. This decision device may be an operator with headphones or a display, or in more sophisticated
sonars this function may be carried out by software. Further processes may be carried out to classify the
target and localise it, as well as measuring its velocity.
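The time-of-flight calculation described above can be sketched in a few lines. This is a minimal illustration, assuming a nominal sound speed of 1500 m/s in sea water; the function name is hypothetical.

```python
# Minimal sketch: converting an echo's round-trip time to range.
# Assumes a nominal sound speed of 1500 m/s in sea water (illustrative).

SOUND_SPEED = 1500.0  # m/s, nominal value for sea water

def echo_range(round_trip_time_s: float, sound_speed: float = SOUND_SPEED) -> float:
    """Range to the target: the pulse travels out and back, so halve the path."""
    return sound_speed * round_trip_time_s / 2.0

# A ping whose echo returns after 2 s places the target about 1500 m away.
print(echo_range(2.0))  # 1500.0
```

In practice the sound speed varies with temperature, salinity and depth, as discussed under "Sound propagation" below.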
The pulse may be at constant frequency or a chirp of changing frequency (to allow pulse compression on
reception). Simple sonars generally use the former with a filter wide enough to cover possible Doppler
changes due to target movement, while more complex ones generally include the latter technique.
Since digital processing became available, pulse compression has usually been implemented using digital
correlation techniques. Military sonars often have multiple beams to provide all-round cover while
simple ones only cover a narrow arc, although the beam may be rotated, relatively slowly, by
mechanical scanning.
Particularly when single frequency transmissions are used, the Doppler effect can be used to measure
the radial speed of a target. The difference in frequency between the transmitted and received signal is
measured and converted into a velocity. Since Doppler shifts can be introduced by either receiver or
target motion, allowance has to be made for the radial speed of the searching platform.
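As a sketch of this Doppler calculation (a monostatic sonar is assumed, so the shift is two-way; the frequencies and sound speed are illustrative values, not from the source):

```python
# Radial target speed from the Doppler shift of an active-sonar echo.
# For a monostatic sonar the shift is two-way: delta_f ~ 2 * v * f0 / c.

SOUND_SPEED = 1500.0  # m/s, nominal for sea water

def radial_speed(f_transmitted: float, f_received: float,
                 c: float = SOUND_SPEED) -> float:
    """Radial speed in m/s; positive means the target is closing."""
    delta_f = f_received - f_transmitted
    return c * delta_f / (2.0 * f_transmitted)

# A 20 kHz ping returning at 20,080 Hz implies a closing speed of 3 m/s.
print(radial_speed(20_000.0, 20_080.0))  # 3.0
```

Any motion of the searching platform adds its own shift, so its radial speed must be allowed for, as the text notes.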
One useful small sonar is similar in appearance to a waterproof flashlight. The head is pointed into the
water, a button is pressed, and the device displays the distance to the target. Another variant is a
"fishfinder" that shows a small display with shoals of fish. Some civilian sonars (which are not designed
for stealth) approach active military sonars in capability, with quite exotic three-dimensional displays of
the area near the boat.
When active sonar is used to measure the distance from the transducer to the bottom, it is known
as echo sounding. Similar methods may be used looking upward for wave measurement.
Active sonar is also used to measure distance through water between two sonar transducers or a
combination of a hydrophone (underwater acoustic microphone) and projector (underwater acoustic
speaker). A transducer is a device that can transmit and receive acoustic signals ("pings"). When a
hydrophone/transducer receives a specific interrogation signal it responds by transmitting a specific
reply signal. To measure distance, one transducer/projector transmits an interrogation signal and
measures the time between this transmission and the receipt of the other transducer/hydrophone
reply. The time difference, scaled by the speed of sound through water and divided by two, is the
distance between the two platforms. This technique, when used with multiple
transducers/hydrophones/projectors, can calculate the relative positions of static and moving objects in
water.
In combat situations, an active pulse can be detected by an opponent and will reveal a submarine's
position.
A very directional, but low-efficiency, type of sonar (used by fisheries, military, and for port security)
makes use of a complex nonlinear feature of water known as non-linear sonar, the virtual transducer
being known as a parametric array.
Audio: recording of active sonar pings.
Project Artemis
Project Artemis was a one-of-a-kind low-frequency sonar for surveillance that was deployed off
Bermuda for several years in the early 1960s. The active portion was deployed from a World War II
tanker, and the receiving array was built into a fixed position on an offshore bank.
Transponder
This is an active sonar device that receives a stimulus and immediately (or with a delay) retransmits the
received signal or a predetermined one.
Performance prediction

A sonar target is small relative to the sphere, centred around the emitter, on which it is located.
Therefore, the power of the reflected signal is very low, several orders of magnitude less than the
original signal. Even if the reflected signal were of the same power, the following example (using
hypothetical values) shows the problem. Suppose a sonar system is capable of emitting a 10,000 W/m²
signal at 1 m, and detecting a 0.001 W/m² signal. At 100 m the signal will be 1 W/m² (due to the
inverse-square law). If the entire signal is reflected from a 10 m² target, it will be at 0.001 W/m² when it reaches
the emitter, i.e. just detectable. However, the original signal will remain above 0.001 W/m² until 3000 m.
Any 10 m² target between 100 and 3000 m using a similar or better system would be able to detect the
pulse but would not be detected by the emitter. The detectors must be very sensitive to pick up the
echoes. Since the original signal is much more powerful, it can be detected many times further than
twice the range of the sonar (as in the example).
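The hypothetical figures in this example can be checked numerically. The sketch below assumes simple inverse-square spreading on each leg and ignores absorption; all values are the illustrative ones from the example.

```python
# Numerical check of the worked example: a 10,000 W/m² source level
# (referenced to 1 m), a 0.001 W/m² detection floor, and a 10 m² target,
# with simple inverse-square spreading and no absorption.

SOURCE = 10_000.0  # W/m² at 1 m reference distance
FLOOR = 0.001      # W/m², minimum detectable intensity
AREA = 10.0        # m², reflecting target cross-section

def one_way(r: float) -> float:
    """Intensity of the outgoing pulse at range r (inverse-square law)."""
    return SOURCE / r**2

def echo(r: float) -> float:
    """Echo intensity back at the emitter: spread out, reflect, spread back."""
    return one_way(r) * AREA / r**2

print(one_way(100.0))   # 1.0 W/m² arriving at the target
print(echo(100.0))      # 0.001 W/m² back at the emitter: just detectable
print(one_way(3000.0) > FLOOR)  # True: the pulse itself is still detectable
```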
In active sonar there are two performance limitations, due to noise and reverberation. In general one or
other of these will dominate so that the two effects can be initially considered separately.
In noise limited conditions at initial detection:
SL − 2TL + TS − (NL − DI) = DT
where SL is the source level, TL is the transmission loss (or propagation loss), TS is the target strength,
NL is the noise level, DI is the directivity index of the array (an approximation to the array gain) and DT is
the detection threshold.
In reverberation limited conditions at initial detection (neglecting array gain):
SL − 2TL + TS = RL + DT
where RL is the reverberation level and the other factors are as before.
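Both detection conditions can be expressed as small helper functions returning the signal excess in decibels; the numeric values in the example call are illustrative assumptions, not figures from the text.

```python
# Sketch of the active-sonar equations, all terms in dB: SL source level,
# TL one-way transmission loss, TS target strength, NL noise level,
# DI directivity index, DT detection threshold, RL reverberation level.

def noise_limited_excess(SL, TL, TS, NL, DI, DT):
    """Signal excess in noise-limited conditions; detection when >= 0 dB."""
    return SL - 2 * TL + TS - (NL - DI) - DT

def reverb_limited_excess(SL, TL, TS, RL, DT):
    """Signal excess in reverberation-limited conditions (array gain neglected)."""
    return SL - 2 * TL + TS - RL - DT

# Illustrative values: SL=220, TL=60, TS=15, NL=70, DI=20, DT=10.
print(noise_limited_excess(220, 60, 15, 70, 20, 10))  # 55 dB above threshold
```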
Hand-held sonar for use by a diver

The LIMIS (= Limpet Mine Imaging Sonar) is a hand-held or ROV-mounted imaging sonar for use
by a diver. It is so named because it was designed for patrol divers (combat frogmen or Clearance
Divers) to look for limpet mines in low visibility water.

The LUIS (= Lensing Underwater Imaging System) is another imaging sonar for use by a diver.

There is or was a small flashlight-shaped handheld sonar for divers that merely displays range.

For the INSS = Integrated Navigation Sonar System

Passive sonar


Passive sonar listens without transmitting. It is often employed in military settings, although it is also
used in science applications, e.g., detecting fish for presence/absence studies in various aquatic
environments; see also passive acoustics and passive radar. In the very broadest usage, this term can
encompass virtually any analytical technique involving remotely generated sound, though it is usually
restricted to techniques applied in an aquatic environment.
Identifying sound sources
Passive sonar has a wide variety of techniques for identifying the source of a detected sound. For
example, U.S. vessels usually operate 60 Hz alternating current power systems.
If transformers or generators are mounted without proper vibration insulation from the hull or become
flooded, the 60 Hz sound from the windings can be emitted from the submarine or ship. This can help to
identify its nationality, as all European submarines and nearly every other nation's submarines have
50 Hz power systems. Intermittent sound sources (such as a wrench being dropped) may also be
detectable to passive sonar. Until fairly recently, an experienced, trained operator identified
signals, but now computers may do this.
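A minimal sketch of this kind of tonal identification, using a plain discrete Fourier transform on a simulated 60 Hz machinery line (the sample rate, duration, and synthetic signal are illustrative assumptions, not a real sonar processing chain):

```python
# Minimal sketch of narrowband tonal identification: a discrete Fourier
# transform picks the 60 Hz machinery line out of a simulated signal.

import math

SAMPLE_RATE = 400  # Hz
N = 400            # one second of samples, giving 1 Hz frequency bins

# Simulated hydrophone signal containing a 60 Hz power-system tone.
signal = [math.sin(2 * math.pi * 60 * n / SAMPLE_RATE) for n in range(N)]

def dft_magnitude(x, k):
    """Magnitude of DFT bin k (bin k corresponds to k * SAMPLE_RATE / N Hz)."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    return math.hypot(re, im)

# The strongest bin identifies the tone; 60 Hz would suggest a US vessel.
peak_bin = max(range(1, N // 2), key=lambda k: dft_magnitude(signal, k))
print(peak_bin * SAMPLE_RATE // N)  # 60
```

Real systems use fast Fourier transforms and compare the detected lines against the sonic databases described below.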
Passive sonar systems may have large sonic databases, but the sonar operator usually finally classifies
the signals manually. A computer system frequently uses these databases to identify classes of ships,
actions (i.e. the speed of a ship, or the type of weapon released), and even particular ships. Publications
for classification of sounds are provided by and continually updated by the US Office of Naval
Intelligence.
Noise limitations
Passive sonar on vehicles is usually severely limited because of noise generated by the vehicle. For this
reason, many submarines operate nuclear reactors that can be cooled without pumps, using
silent convection, or fuel cells or batteries, which can also run silently. Vehicles' propellers are also
designed and precisely machined to emit minimal noise. High-speed propellers often create tiny bubbles
in the water, and this cavitation has a distinct sound.
The sonar hydrophones may be towed behind the ship or submarine in order to reduce the effect of
noise generated by the watercraft itself. Towed units also combat the thermocline, as the unit may be
towed above or below the thermocline.
The display of most passive sonars used to be a two-dimensional waterfall display. The horizontal
direction of the display is bearing. The vertical is frequency, or sometimes time. Another display
technique is to color-code frequency-time information for bearing. More recent displays are generated
by computers and mimic radar-type plan position indicator displays.
Performance prediction

Unlike active sonar, only one-way propagation is involved. Because of the different signal processing
used, the minimum detectable signal to noise ratio will be different. The equation for determining the
performance of a passive sonar is:
SL − TL = NL − DI + DT
where SL is the source level, TL is the transmission loss, NL is the noise level, DI is the directivity index of
the array (an approximation to the array gain) and DT is the detection threshold. The figure of merit of a
passive sonar is:
FOM = SL + DI − (NL + DT).
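As a sketch, the passive-sonar equation and figure of merit translate directly into code; the decibel values in the example calls are illustrative assumptions.

```python
# Passive-sonar equation and figure of merit, all terms in dB: SL source
# level, TL transmission loss, NL noise level, DI directivity index,
# DT detection threshold.

def passive_excess(SL, TL, NL, DI, DT):
    """Signal excess for passive detection; detection when >= 0 dB."""
    return (SL - TL) - (NL - DI) - DT

def figure_of_merit(SL, NL, DI, DT):
    """Maximum allowable one-way transmission loss for detection."""
    return SL + DI - (NL + DT)

# Illustrative values: SL=140, NL=70, DI=20, DT=10 give FOM = 80 dB, so the
# target is detectable wherever one-way transmission loss is below 80 dB.
print(figure_of_merit(140, 70, 20, 10))  # 80
print(passive_excess(140, 80, 70, 20, 10))  # 0: exactly at the threshold
```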
Performance factors
The detection, classification and localisation performance of a sonar depends on the environment and
the receiving equipment, as well as the transmitting equipment in an active sonar or the target radiated
noise in a passive sonar.
Sound propagation
Sonar operation is affected by variations in sound speed, particularly in the vertical plane. Sound travels
more slowly in fresh water than in sea water, though the difference is small. The speed is determined by
the water's bulk modulus and mass density. The bulk modulus is affected by temperature, dissolved
impurities (usually salinity), and pressure. The density effect is small. The speed of sound (in feet per
second) is approximately:
4388 + (11.25 × temperature (in °F)) + (0.0182 × depth (in feet)) + salinity (in parts per thousand).
This empirically derived approximation equation is reasonably accurate for normal temperatures,
concentrations of salinity and the range of most ocean depths. Ocean temperature varies with depth,
but between 30 and 100 metres there is often a marked change, called the thermocline, dividing the
warmer surface water from the cold, still waters that make up the rest of the ocean. This can frustrate
sonar, because a sound originating on one side of the thermocline tends to be bent, or refracted,
through the thermocline. The thermocline may be present in shallower coastal waters. However, wave
action will often mix the water column and eliminate the thermocline. Water pressure also affects
sound propagation: higher pressure increases the sound speed, which causes the sound waves to refract
away from the area of higher sound speed. The mathematical model of refraction is called Snell's law.
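The empirical sound-speed approximation above translates directly into code (units as in the text; the example inputs are illustrative):

```python
# Empirical approximation for the speed of sound in sea water, in feet
# per second; temperature in °F, depth in feet, salinity in parts per
# thousand, as given in the text.

def sound_speed_fps(temp_f: float, depth_ft: float, salinity_ppt: float) -> float:
    """Approximate sound speed in sea water (ft/s)."""
    return 4388.0 + 11.25 * temp_f + 0.0182 * depth_ft + salinity_ppt

# Typical surface sea water: 59 °F, zero depth, 35 ppt salinity.
print(sound_speed_fps(59.0, 0.0, 35.0))  # 5086.75 ft/s, roughly 1550 m/s
```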
If the sound source is deep and the conditions are right, propagation may occur in the 'deep sound
channel'. This provides extremely low propagation loss to a receiver in the channel. This is because of
sound trapping in the channel with no losses at the boundaries. Similar propagation can occur in the
'surface duct' under suitable conditions. However in this case there are reflection losses at the surface.
In shallow water propagation is generally by repeated reflection at the surface and bottom, where
considerable losses can occur.

Sound propagation is affected by absorption in the water itself as well as at the surface and bottom. This
absorption depends upon frequency, with several different mechanisms in sea water. Long-range sonar
uses low frequencies to minimise absorption effects.
The sea contains many sources of noise that interfere with the desired target echo or signature. The
main noise sources are waves and shipping. The motion of the receiver through the water can also cause
speed-dependent low frequency noise.
Scattering
When active sonar is used, scattering occurs from small objects in the sea as well as from the bottom
and surface. This can be a major source of interference. This acoustic scattering is analogous to the
scattering of the light from a car's headlights in fog: a high-intensity pencil beam will penetrate the fog
to some extent, but broader-beam headlights emit much light in unwanted directions, much of which is
scattered back to the observer, overwhelming that reflected from the target ("white-out"). For
analogous reasons active sonar needs to transmit in a narrow beam to minimise scattering.
Target characteristics
The sound reflection characteristics of the target of an active sonar, such as a submarine, are known as
its target strength. A complication is that echoes are also obtained from other objects in the sea such as
whales, wakes, schools of fish and rocks.
Passive sonar detects the target's radiated noise characteristics. The radiated spectrum comprises
a continuous spectrum of noise with peaks at certain frequencies which can be used for classification.
Countermeasures
Active (powered) countermeasures may be launched by a submarine under attack to raise the noise
level, provide a large false target, and obscure the signature of the submarine itself.
Passive (i.e., non-powered) countermeasures include:

Mounting noise-generating devices on isolating devices.

Sound-absorbent coatings on the hulls of submarines, for example anechoic tiles.

Military applications
Modern naval warfare makes extensive use of both passive and active sonar from water-borne vessels,
aircraft and fixed installations. Although active sonar was used by surface craft in World War II,
submarines avoided the use of active sonar due to the potential for revealing their presence and
position to enemy forces. However, the advent of modern signal-processing enabled the use of passive
sonar as a primary means for search and detection operations. In 1987 a division
of the Japanese company Toshiba reportedly sold machinery to the Soviet Union that allowed their
submarine propeller blades to be milled so that they became radically quieter, making the newer
generation of submarines more difficult to detect.
The use of active sonar by a submarine to determine bearing is extremely rare and will not necessarily
give high-quality bearing or range information to the submarine's fire control team; however, use of
active sonar on surface ships is very common. Active sonar is used by submarines when the tactical
situation dictates that it is more important to determine the position of a hostile submarine than to conceal
their own position. With surface ships it might be assumed that the threat is already tracking the ship
with satellite data. Any vessel around the emitting sonar will detect the emission. Having heard the
signal, it is easy to identify the sonar equipment used (usually by its frequency) and its position (from
the sound wave's energy). Active sonar is similar to radar in that, while it allows detection of targets at a
certain range, it also enables the emitter to be detected at a far greater range, which is undesirable.
Since active sonar reveals the presence and position of the operator, and does not allow exact
classification of targets, it is used by fast (planes, helicopters) and by noisy platforms (most surface
ships) but rarely by submarines. When active sonar is used by surface ships or submarines, it is typically
activated very briefly at intermittent periods to minimize the risk of detection. Consequently active
sonar is normally considered a backup to passive sonar. In aircraft, active sonar is used in the form of
disposable sonobuoys that are dropped in the aircraft's patrol area or in the vicinity of possible enemy
sonar contacts.
Passive sonar has several advantages. Most importantly, it is silent. If the target radiated noise level is
high enough, it can have a greater range than active sonar, and allows the target to be identified. Since
any motorized object makes some noise, it may in principle be detected, depending on the level of noise
emitted and the ambient noise level in the area, as well as the technology used. To simplify, passive
sonar "sees" around the ship using it. On a submarine, nose-mounted passive sonar detects in directions
of about 270°, centered on the ship's alignment, the hull-mounted array of about 160° on each side, and
the towed array of a full 360°. The invisible areas are due to the ship's own interference. Once a signal is
detected in a certain direction (which means that something makes sound in that direction, this is called
broadband detection) it is possible to zoom in and analyze the signal received (narrowband analysis).
This is generally done using a Fourier transform to show the different frequencies making up the sound.
Since every engine makes a specific sound, it is straightforward to identify the object. Databases of
unique engine sounds are part of what is known as acoustic intelligence or ACINT.
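The narrowband step described above can be sketched numerically. This is an illustrative toy, not a real ACINT pipeline: the tonal frequencies, amplitudes, and noise level below are invented.

```python
import numpy as np

fs = 4096                 # sample rate, Hz
t = np.arange(fs) / fs    # one second of data

# Hypothetical broadband contact: two machinery tonals buried in sea noise.
rng = np.random.default_rng(0)
signal = (0.5 * np.sin(2 * np.pi * 60 * t)     # e.g. a 60 Hz pump tonal
          + 0.3 * np.sin(2 * np.pi * 245 * t)  # e.g. a 245 Hz shaft harmonic
          + rng.normal(0, 0.2, fs))            # ambient noise

# Narrowband analysis: magnitude spectrum of the received signal.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(fs, 1 / fs)

# Pick the two strongest spectral lines (ignoring the DC bin).
peaks = freqs[np.argsort(spectrum[1:])[-2:] + 1]
print(sorted(peaks))  # the recovered tonal frequencies: [60.0, 245.0]
```

The recovered line frequencies would then be matched against a signature database to classify the contact.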
Another use of passive sonar is to determine the target's trajectory. This process is called Target Motion
Analysis (TMA), and the resultant "solution" is the target's range, course, and speed. TMA is done by
marking from which direction the sound comes at different times, and comparing the motion with that
of the operator's own ship. Changes in relative motion are analyzed using standard geometrical
techniques along with some assumptions about limiting cases.
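A minimal bearings-only TMA sketch follows, assuming a constant-velocity target, noiseless bearings, and an own-ship maneuver (all track numbers are invented). It uses the classical pseudo-linear trick of rewriting each bearing as a linear constraint on the target's initial position and velocity; real TMA must also cope with noisy bearings and maneuvering targets.

```python
import numpy as np


def own_ship(t):
    # Own-ship track: due east at 5 m/s for 300 s, then due north at 5 m/s.
    # The maneuver is essential: without it, bearings-only TMA cannot
    # resolve range.
    if t <= 300.0:
        return 5.0 * t, 0.0
    return 1500.0, 5.0 * (t - 300.0)


# Invented "truth": target starts at (4000 m, 2000 m) moving at (-3, 1) m/s.
x0, y0, vx, vy = 4000.0, 2000.0, -3.0, 1.0

A, b = [], []
for t in np.arange(0.0, 601.0, 30.0):
    xo, yo = own_ship(t)
    tx, ty = x0 + vx * t, y0 + vy * t
    brg = np.arctan2(tx - xo, ty - yo)  # bearing measured from north
    c, s = np.cos(brg), np.sin(brg)
    # Bearing-line constraint: (xt - xo)*cos(b) - (yt - yo)*sin(b) = 0,
    # which is linear in the unknowns (x0, vx, y0, vy).
    A.append([c, t * c, -s, -t * s])
    b.append(xo * c - yo * s)

solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(solution)  # ≈ [4000, -3, 2000, 1], i.e. (x0, vx, y0, vy)
```

Range, course, and speed of the "solution" fall directly out of the recovered state vector.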
Passive sonar is stealthy and very useful. However, it requires high-tech electronic components and is
costly. It is generally deployed on expensive ships in the form of arrays to enhance detection. Surface
ships use it to good effect; it is even better used by submarines, and it is also used by airplanes and
helicopters, mostly for a "surprise effect", since submarines can hide under thermal layers. If a
submarine's commander believes he is alone, he may bring his boat closer to the surface and be easier
to detect, or go deeper and faster, and thus make more sound.
Examples of sonar applications in military use are given below. Many of the civil uses given in the
following section may also be applicable to naval use.
Anti-submarine warfare

Variable Depth Sonar and its winch


Until recently, ship sonars usually had hull-mounted arrays, either amidships or at the bow. Soon after
their initial use, it was found that a means of reducing flow noise was required: a dome enclosing the
array. The first domes were made of canvas on a framework, then of steel. Domes are now usually made
of reinforced plastic or pressurized rubber. Such sonars are primarily active in operation. An example of
a conventional hull-mounted sonar is the SQS-56.
Because of the problems of ship noise, towed sonars are also used. These also have the advantage of
being able to be placed deeper in the water. However, there are limitations on their use in shallow
water. These are called towed arrays (linear) or variable depth sonars (VDS) with 2/3D arrays. A problem
is that the winches required to deploy/recover these are large and expensive. VDS sets are primarily
active in operation while towed arrays are passive.

An example of a modern active/passive ship towed sonar is Sonar 2087 made by Thales Underwater
Systems.
Torpedoes
Modern torpedoes are generally fitted with an active/passive sonar. This may be used to home directly
on the target, but wake following torpedoes are also used. An early example of an acoustic homer was
the Mark 37 torpedo.
Torpedo countermeasures can be towed or free. An early example was the German Sieglinde device,
while the Bold was a chemical device. A widely used US device was the towed AN/SLQ-25
Nixie, while the Mobile submarine simulator (MOSS) was a free device. A modern alternative to the Nixie
system is the UK Royal Navy S2170 Surface Ship Torpedo Defence system.
Mines
Mines may be fitted with a sonar to detect, localize and recognize the required target. Further
information is given in acoustic mine, and an example is the CAPTOR mine.
Mine countermeasures
Mine Countermeasure (MCM) Sonar, sometimes called "Mine and Obstacle Avoidance Sonar (MOAS)", is
a specialized type of sonar used for detecting small objects. Most MCM sonars are hull mounted but a
few types are VDS design. An example of a hull mounted MCM sonar is the Type 2193 while the SQQ-32
Mine-hunting sonar and Type 2093 systems are VDS designs. See also Minesweeper (ship).
Submarine navigation
Main article: Submarine navigation
Submarines rely on sonar to a greater extent than surface ships as they cannot use radar at depth. The
sonar arrays may be hull mounted or towed. Information on typical fits is given in Oyashio-class
submarine and Swiftsure-class submarine.
Aircraft
Helicopters can be used for antisubmarine warfare by deploying fields of active/passive sonobuoys or
can operate dipping sonar, such as the AQS-13. Fixed wing aircraft can also deploy sonobuoys and have
greater endurance and capacity to deploy them. Processing from the sonobuoys or Dipping Sonar can be
on the aircraft or on ship. Dipping sonar has the advantage of being deployable to depths appropriate to
daily conditions. Helicopters have also been used for mine countermeasure missions using towed sonars
such as the AQS-20A.

AN/AQS-13 Dipping sonar deployed from an H-3 Sea King.


Underwater communications
Dedicated sonars can be fitted to ships and submarines for underwater communication. See also the
section on the underwater acoustics page.
Ocean surveillance
For many years, the United States operated a large set of passive sonar arrays at various points in the
world's oceans, collectively called the Sound Surveillance System (SOSUS) and later the Integrated Undersea
Surveillance System (IUSS). A similar system is believed to have been operated by the Soviet Union.
Because the arrays were permanently mounted in the very quiet deep ocean, long detection
ranges could be achieved. Signal processing was carried out using powerful computers ashore. With the
ending of the Cold War a SOSUS array has been turned over to scientific use.
In the United States Navy, a special badge known as the Integrated Undersea Surveillance System
Badge is awarded to those who have been trained and qualified in its operation.
Underwater security
Sonar can be used to detect frogmen and other scuba divers. This can be applicable around ships or at
entrances to ports. Active sonar can also be used as a deterrent and/or disablement mechanism. One
such device is the Cerberus system.
See Underwater Port Security System and Anti-frogman techniques#Ultrasound detection.
Hand-held sonar
Limpet Mine Imaging Sonar (LIMIS) is a hand-held or ROV-mounted imaging sonar designed for patrol
divers (combat frogmen or clearance divers) to look for limpet mines in low visibility water.
The LUIS is another imaging sonar for use by a diver.

Integrated Navigation Sonar System (INSS) is a small flashlight-shaped handheld sonar for divers that
displays range.[13][14]
Intercept sonar
This is a sonar designed to detect and locate the transmissions from hostile active sonars. An example of
this is the Type 2082 fitted on the British Vanguard class submarines.
Civilian applications
Fisheries
Fishing is an important industry that is seeing growing demand, but world catch tonnage is falling as a
result of serious resource problems. The industry faces a future of continuing worldwide consolidation
until a point of sustainability can be reached. However, the consolidation of the fishing fleets is driving
increased demand for sophisticated fish-finding electronics such as sensors, sounders and sonars.
Historically, fishermen have used many different techniques to find and harvest fish. However, acoustic
technology has been one of the most important driving forces behind the development of the modern
commercial fisheries.
Sound waves travel differently through fish than through water because a fish's air-filled swim
bladder has a different density than seawater. This density difference allows the detection of schools of
fish by using reflected sound. Acoustic technology is especially well suited for underwater applications
since sound travels farther and faster underwater than in air. Today, commercial fishing vessels rely
almost completely on acoustic sonar and sounders to detect fish. Fishermen also use active sonar and
echo sounder technology to determine water depth, bottom contour, and bottom composition.

Cabin display of a fish finder sonar

Companies such as eSonar, Raymarine UK, Marport Canada, Wesmar, Furuno, Krupp, and Simrad make a
variety of sonar and acoustic instruments for the deep sea commercial fishing industry. For example, net
sensors take various underwater measurements and transmit the information back to a receiver on
board a vessel. Each sensor is equipped with one or more acoustic transducers depending on its specific
function. Data is transmitted from the sensors using wireless acoustic telemetry and is received by a hull
mounted hydrophone. The analog signals are decoded and converted by a digital acoustic receiver into
data which is transmitted to a bridge computer for graphical display on a high resolution monitor.
Echo sounding
Main article: Echo sounding
Echo sounding is a process used to determine the depth of water beneath ships and boats. A type of
active sonar, echo sounding is the transmission of an acoustic pulse directly downwards to the seabed,
measuring the time between transmission and echo return after the pulse has hit the bottom and bounced
back to its ship of origin. The acoustic pulse is emitted by a transducer which receives the return echo as
well. The depth measurement is calculated by multiplying the speed of sound in water (averaging 1,500
meters per second) by half the time between emission and echo return.[15][16]
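The calculation is a one-liner; note that only half of the measured round-trip time counts toward depth, since the pulse travels down and back:

```python
SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s


def depth_from_echo(round_trip_s: float, c: float = SOUND_SPEED) -> float:
    """Depth below the transducer: the pulse travels down AND back,
    so only half of the round-trip time counts toward depth."""
    return c * round_trip_s / 2.0


# An echo returning 0.2 s after transmission implies 150 m of water.
print(depth_from_echo(0.2))  # 150.0
```

In practice the nominal 1,500 m/s is replaced by a locally measured sound-speed profile for survey-grade work.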
The value of underwater acoustics to the fishing industry has led to the development of other acoustic
instruments that operate in a similar fashion to echo-sounders but, because their function is slightly
different from the initial model of the echo-sounder, have been given different terms.
Net location
The net sounder is an echo sounder with a transducer mounted on the headline of the net rather than
on the bottom of the vessel. Nevertheless, to accommodate the distance from the transducer to the
display unit, which is much greater than in a normal echo-sounder, several refinements have to be
made. Two main types are available. The first is the cable type in which the signals are sent along a
cable. In this case there has to be the provision of a cable drum on which to haul, shoot and stow the
cable during the different phases of the operation. The second type is the cable-less net sounder, such
as Marport's Trawl Explorer, in which the signals are sent acoustically between the net and hull-
mounted receiver/hydrophone on the vessel. In this case no cable drum is required but sophisticated
electronics are needed at the transducer and receiver.
The display on a net sounder shows the distance of the net from the bottom (or the surface), rather
than the depth of water as with the echo-sounder's hull-mounted transducer. With the transducer fixed
to the headline of the net, the footrope can usually be seen, which gives an indication of the net
performance. Any fish
passing into the net can also be seen, allowing fine adjustments to be made to catch the most fish
possible. In other fisheries, where the amount of fish in the net is important, catch sensor transducers
are mounted at various positions on the cod-end of the net. As the cod-end fills up these catch sensor
transducers are triggered one by one and this information is transmitted acoustically to display monitors
on the bridge of the vessel. The skipper can then decide when to haul the net.

Modern versions of the net sounder, using multiple element transducers, function more like a sonar
than an echo sounder and show slices of the area in front of the net and not merely the vertical view
that the initial net sounders used.
The sonar is an echo-sounder with a directional capability that can show fish or other objects around the
vessel.
ROV and UUV
Small sonars have been fitted to Remotely Operated Vehicles (ROV) and Unmanned Underwater
Vehicles (UUV) to allow their operation in murky conditions. These sonars are used for looking ahead of
the vehicle. The Long-Term Mine Reconnaissance System is an UUV for MCM purposes.
Vehicle location
Sonars which act as beacons are fitted to aircraft to allow their location in the event of a crash in the
sea. Short baseline and long baseline (LBL) sonar systems may be used to carry out the localization.
Prosthesis for the visually impaired
In 2013 an inventor in the United States unveiled a "spider-sense" bodysuit, equipped with ultrasonic
sensors and haptic feedback systems, which alerts the wearer of incoming threats; allowing them to
respond to attackers even when blindfolded.[17]
Scientific applications
Biomass estimation
Main article: Bioacoustics
Active sonar techniques can detect fish and other marine and aquatic life, and estimate their individual
sizes or total biomass. As the sound pulse travels through water it encounters objects
that are of different density or acoustic characteristics than the surrounding medium, such as fish, that
reflect sound back toward the sound source. These echoes provide information on fish size, location,
abundance and behavior. Data is usually processed and analysed using a variety of software such
as Echoview. See also: Hydroacoustics and Fisheries Acoustics.
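Echo strength in such surveys is commonly budgeted with the standard active sonar equation, EL = SL − 2·TL + TS. A minimal sketch follows; the source level, target strength, and absorption coefficient below are invented illustrative values.

```python
import math


def transmission_loss(r_m: float, alpha_db_per_km: float = 0.08) -> float:
    # One-way loss: spherical spreading plus absorption. Absorption is
    # frequency dependent; 0.08 dB/km is only an illustrative low-kHz value.
    return 20 * math.log10(r_m) + alpha_db_per_km * r_m / 1000.0


def echo_level(source_level_db: float, target_strength_db: float,
               r_m: float) -> float:
    # Active sonar equation: EL = SL - 2*TL + TS
    # (the one-way transmission loss is paid on the way out and back).
    return source_level_db - 2 * transmission_loss(r_m) + target_strength_db


# Invented example: 220 dB source, a fish school of -30 dB TS, 1 km away.
print(round(echo_level(220.0, -30.0, 1000.0), 1))  # 69.8
```

Comparing the resulting echo level against the noise or reverberation background gives the expected detection margin.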
Wave measurement
An upward looking echo sounder mounted on the bottom or on a platform may be used to make
measurements of wave height and period. From this, statistics of the surface conditions at a location can
be derived.
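A common way to turn such a surface-elevation record into a statistic is the estimate Hs ≈ 4σ for significant wave height. The sketch below uses a synthetic single-swell record with invented values.

```python
import numpy as np

# Synthetic surface-elevation record, as an upward-looking echo sounder
# would measure it: a single 8 s swell with 1 m amplitude (invented values),
# sampled at 10 Hz for 10 minutes.
t = np.linspace(0.0, 600.0, 6000, endpoint=False)
elevation = 1.0 * np.sin(2 * np.pi * t / 8.0)

# Significant wave height is conventionally estimated as four times the
# standard deviation of the surface elevation (Hs = 4*sigma, a.k.a. Hm0).
hs = 4.0 * np.std(elevation)
print(round(hs, 2))  # ~2.83 m for a 1 m amplitude sinusoid
```

A real record would be a broadband sea state, and the period statistics mentioned above would come from the spectrum of the same time series.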
Water velocity measurement
Special short range sonars have been developed to allow measurements of water velocity.

Bottom type assessment


Sonars have been developed that can be used to characterise the sea bottom into, for example, mud,
sand, and gravel. Relatively simple sonars such as echo sounders can be promoted to seafloor
classification systems via add-on modules, converting echo parameters into sediment type. Different
algorithms exist, but they are all based on changes in the energy or shape of the reflected sounder
pings. Advanced substrate classification analysis can be achieved using calibrated (scientific)
echosounders and parametric or fuzzy-logic analysis of the acoustic data (see Acoustic Seabed
Classification).
Bathymetric mapping
Side-scan sonars can be used to derive maps of seafloor topography (bathymetry) by moving the sonar
across it just above the bottom. Low frequency sonars such as GLORIA have been used for continental
shelf wide surveys while high frequency sonars are used for more detailed surveys of smaller areas.
Sub-bottom profiling
Powerful low frequency echo-sounders have been developed for providing profiles of the upper layers
of the ocean bottom.
Synthetic aperture sonar
Various synthetic aperture sonars have been built in the laboratory and some have entered use in mine-hunting and search systems. An explanation of their operation is given in synthetic aperture sonar.
Parametric sonar
Parametric sources use the non-linearity of water to generate the difference frequency between two
high frequencies. A virtual end-fire array is formed. Such a projector has advantages of broad
bandwidth, narrow beamwidth, and when fully developed and carefully measured it has no obvious
sidelobes: see Parametric array. Its major disadvantage is very low efficiency of only a few
percent.[18] P.J. Westervelt's seminal 1963 JASA paper summarizes the trends involved.
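The difference-frequency generation can be illustrated with any quadratic nonlinearity: squaring a pair of summed primaries produces spectral lines at the difference and sum frequencies, analogous to what the water's nonlinearity does to the two projected beams. The frequencies below are scaled down and invented for the demo.

```python
import numpy as np

fs = 10000                  # sample rate, Hz (scaled-down illustration)
t = np.arange(fs) / fs
f1, f2 = 2000, 2100         # two "primary" frequencies

# A quadratic nonlinearity applied to the summed primaries generates
# DC, harmonics, and the sum and difference frequencies.
primaries = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
mixed = primaries ** 2

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(fs, 1 / fs)
strong = set(freqs[spectrum > 0.1 * spectrum.max()].astype(int))
print(sorted(strong))  # [0, 100, 4000, 4100, 4200]
```

The 100 Hz line is the difference frequency a parametric sonar exploits; the 4000/4100/4200 Hz lines are the high-frequency by-products (harmonics and sum), which in water are rapidly absorbed.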
Sonar in extraterrestrial contexts
Use of sonar has been proposed for determining the depth of hydrocarbon seas on Titan.[19]
Effect of sonar on marine life
Effect on marine mammals

A Humpback whale
Further information: Marine mammals and sonar
Research has shown that use of active sonar can lead to mass strandings of marine
mammals.[20][21] Beaked whales, the most common casualty of the strandings, have been shown to be
highly sensitive to mid-frequency active sonar.[22] Other marine mammals such as the blue whale also
flee from the source of the sonar,[23] while naval activity was suggested to be the most probable
cause of a mass stranding of dolphins.[24] The US Navy, which part-funded some of the studies, said the
findings only showed behavioural responses to sonar, not actual harm, but "will evaluate the
effectiveness of [their] marine mammal protective measures in light of new research findings."[20]
Some marine animals, such as whales and dolphins, use echolocation systems, sometimes
called biosonar, to locate predators and prey. It is conjectured that active sonar transmitters could
confuse these animals and interfere with basic biological functions such as feeding and mating.[citation needed]

Effect on fish
High intensity sonar sounds can create a small temporary shift in the hearing threshold of some
fish.[25][26] [a]
Frequencies and resolutions
The frequencies of sonars range from infrasonic to above a megahertz. Generally, the lower frequencies
have longer range, while the higher frequencies offer better resolution, and smaller size for a given
directionality.
To achieve reasonable directionality, frequencies below 1 kHz generally require large size, usually
achieved as towed arrays.[27]
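The size-versus-frequency trade follows from the rule of thumb that beamwidth in radians is roughly wavelength divided by aperture length. A quick sketch (the 1° beamwidth target is chosen arbitrarily for illustration):

```python
import math

SOUND_SPEED = 1500.0  # m/s in seawater


def aperture_for_beamwidth(freq_hz: float, beamwidth_deg: float) -> float:
    # Rule of thumb: beamwidth (radians) ~ wavelength / aperture,
    # so the required array length is wavelength / beamwidth.
    wavelength = SOUND_SPEED / freq_hz
    return wavelength / math.radians(beamwidth_deg)


# A 1-degree beam needs ~86 m of array at 1 kHz but under 1 m at 100 kHz,
# which is why directional low-frequency sonars end up as long towed arrays.
for f in (1_000, 10_000, 100_000):
    print(f, round(aperture_for_beamwidth(f, 1.0), 2))
```

This is the same relation that drives the angular-resolution figures quoted for sidescan sonars later in this section.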
Low frequency sonars are loosely defined as 1–5 kHz, although some navies also regard 5–7 kHz as low
frequency. Medium frequency is defined as 5–15 kHz. Another style of division considers low frequency
to be under 1 kHz, and medium frequency to be between 1 and 10 kHz.[27]
American World War II era sonars operated at a relatively high frequency of 20–30 kHz, to achieve
directionality with reasonably small transducers, with typical maximum operational range of 2500 yd.
Postwar sonars used lower frequencies to achieve longer range; e.g. SQS-4 operated at 10 kHz with
range up to 5000 yd. SQS-26 and SQS-53 operated at 3 kHz with range up to 20,000 yd; their domes had
the size of approximately a 60-ft personnel boat, an upper size limit for conventional hull sonars. Achieving larger
sizes by a conformal sonar array spread over the hull has not been effective so far; for lower frequencies,
linear or towed arrays are therefore used.[27]
Japanese WW2 sonars operated at a range of frequencies. The Type 91, with 30 inch quartz projector,
worked at 9 kHz. The Type 93, with smaller quartz projectors, operated at 17.5 kHz (model 5 at 16 or
19 kHz magnetostrictive) at powers between 1.7 and 2.5 kilowatts, with range of up to 6 km. The later
Type 3, with German-design magnetostrictive transducers, operated at 13, 14.5, 16, or 20 kHz (by
model), using twin transducers (except model 1 which had three single ones), at 0.2 to 2.5 kilowatts. The
Simple type used 14.5 kHz magnetostrictive transducers at 0.25 kW, driven by capacitive discharge
instead of oscillators, with range up to 2.5 km.[11]
The sonar's resolution is angular; objects further apart will be imaged with lower resolutions than
nearby ones.
Another source lists ranges and resolutions vs frequencies for sidescan sonars: 30 kHz provides low
resolution with range of 1000–6000 m, 100 kHz gives medium resolution at 500–1000 m, 300 kHz gives
high resolution at 150–500 m, and 600 kHz gives high resolution at 75–150 m. Longer range sonars are
more adversely affected by inhomogeneities of the water. Some environments, typically shallow waters
near the coasts, have complicated terrain with many features; higher frequencies become necessary
there.[28]
As a specific example, the Sonar 2094 Digital, a towed fish capable of reaching a depth of 1000 or 2000
meters, performs side-scanning at 114 kHz (600m range at each side, 50 by 1 degree beamwidth) and
410 kHz (150m range, 40 by 0.3 degree beamwidth), with 3 kW pulse power.[29]
A JW Fishers system offers side-scanning at 1200 kHz with very high spatial resolution, optionally
coupled with longer-range 600 kHz (range 200 ft at each side) or 100 kHz (up to 2000 ft per side, suitable
for scanning large areas for big targets).[30]

Harry Hammond Hess

Harry Hess commanding the USS Cape Johnson.

Born: May 24, 1906, New York City
Died: August 25, 1969 (age 63), Woods Hole, Massachusetts
Nationality: United States
Fields: Geology
Alma mater: Princeton University
Doctoral advisor: Arthur Francis Buddington
Doctoral students: John Tuzo Wilson,[1] Ronald Oxburgh
Influences: F. A. Vening-Meinesz[2]
Notable awards: Penrose Medal (1966)
Harry Hammond Hess (May 24, 1906 – August 25, 1969) was a geologist and United States Navy officer
in World War II.

Considered one of the "founding fathers" of the unifying theory of plate tectonics, Rear Admiral Harry
Hammond Hess was born on May 24, 1906, in New York City. He is best known for his theories on sea
floor spreading, specifically work on relationships between island arcs, seafloor gravity anomalies,
and serpentinized peridotite, suggesting that the convection of the Earth's mantle was the driving force
behind this process. This work provided a conceptual base for the development of the theory of plate
tectonics.
Contents

1 Teaching career

2 The Navy-Princeton gravity expedition to the West Indies in 1932

3 Military career

4 Scientific discoveries

5 Death

6 Selected publications

7 References

8 External links

Teaching career
Harry Hess taught for a year (1932–1933) at Rutgers University in New Jersey and spent a year as a
research associate at the Geophysical Laboratory of Washington, D.C., before joining the faculty
of Princeton University in 1934. Hess remained at Princeton for the rest of his career and served as
Geology Department Chair from 1950 to 1966. He was a visiting professor at the University of Cape
Town, South Africa (1949–1950), and the University of Cambridge, England (1965).
The Navy-Princeton gravity expedition to the West Indies in 1932
Hess accompanied Dr. Felix Vening Meinesz of Utrecht University on board the US Navy submarine USS
S-48 to assist with the second U.S. expedition to obtain gravity measurements at sea. The expedition
used a gravimeter, or gravity meter, designed by Meinesz.[3] The submarine traveled a route
from Guantanamo, Cuba to Key West, Florida, and back to Guantanamo through
the Bahamas and Turks and Caicos region from 5 February through 25 March 1932. The description of
operations and results of the expedition were published by the U.S. Navy Hydrographic Office in The
Navy-Princeton gravity expedition to the West Indies in 1932.[4]
Military career

Hess joined the United States Navy during World War II, becoming captain of the USS Cape Johnson, an
attack transport ship equipped with a new technology: sonar. This command would later prove to be
key in Hess's development of his theory of sea floor spreading. Hess carefully tracked his travel routes
to Pacific Ocean landings on the Marianas, Philippines, and Iwo Jima, continuously using his ship's echo
sounder. This unplanned wartime scientific surveying enabled Hess to collect ocean floor profiles across
the North Pacific Ocean, resulting in the discovery of flat-topped submarine volcanoes, which he
termed guyots, after the nineteenth century geographer Arnold Henry Guyot. After the war, he
remained in the Naval Reserve, rising to the rank of rear admiral.
Scientific discoveries
In 1960, Hess made his single most important contribution, which is regarded as part of the major
advance in geologic science of the 20th century. In a widely circulated report to the Office of Naval
Research, he advanced the theory, now generally accepted, that the Earth's crust moved laterally away
from long, volcanically active oceanic ridges. He only understood his ocean floor profiles across the
North Pacific Ocean after Marie Tharp and Bruce Heezen (1953, Lamont Group) discovered the Great
Global Rift, running along the Mid-Atlantic Ridge.[5][6] Seafloor spreading, as the process was later
named, helped establish Alfred Wegener's earlier (but generally dismissed at the time) concept
of continental drift as scientifically respectable. This triggered a revolution in the earth sciences.[7] Hess's
report was formally published in his History of Ocean Basins (1962),[8] which for a time was the single
most referenced work in solid-earth geophysics. Hess was also involved in many other scientific
endeavours, including the Mohole project (1957–1966), an investigation into the feasibility and
techniques of deep sea drilling.
Death
Hess died from a heart attack in Woods Hole, Massachusetts, on August 25, 1969, while chairing a
meeting of the Space Science Board of the National Academy of Sciences. He was buried in the Arlington
National Cemetery and was posthumously awarded the National Aeronautics and Space
Administration's Distinguished Public Service Award.
The American Geophysical Union established the Harry H. Hess medal in his memory in 1984 to "honor
outstanding achievements in research of the constitution and evolution of Earth and sister planets."[9]
Selected publications

Hess, H.H. (1946). "Drowned ancient islands of the Pacific basin". Am. J. Sci. 244 (11): 772–91.
doi:10.2475/ajs.244.11.772. Also in: International Hydrographic Review 24 (1947): 81–91; and
Smithsonian Institution, Annual Report for 1947: 281–300.

Hess, H.H.; Maxwell, J. C. (1953). "Major structural features of the south-west Pacific: a
preliminary interpretation of H. O. 5484, bathymetric chart, New Guinea to New
Zealand". Proceedings of the 7th Pacific Science Congress: Held at Auckland and Christchurch,
New Zealand, 1949 2. Wellington: Harry H. Tombs, Ltd. pp. 14–17.

Hess, H.H. (1954). "Geological hypotheses and the Earth's crust under the oceans". A Discussion
on the Floor of the Atlantic Ocean. Proceedings of the Royal Society of London, Series
A 222 (1150). pp. 341–48.

Hess, H.H. (1955). "The oceanic crust". Journal of Marine Research 14: 423–39.

Hess, H.H. (1955). In A. W. Poldervaart, ed. Crust of the Earth. Geological Society of America,
Special Paper No. 62 (Symposium). New York: The Society. pp. 391–407.

Hess, H.H. (1959). "The AMSOC hole to the Earth's mantle". Transactions American Geophysical
Union 40: 340–345. Bibcode:1959TrAGU..40..340H. doi:10.1029/tr040i004p00340. Also in:
Am. Scientist 47 (1960): 254–263.

Hess, H.H. (1960). In Preprints of the 1st International Oceanographic Congress (New York, August
31–September 12, 1959). Washington: American Association for the Advancement of Science.
(A). pp. 33–34.

Hess, H.H. (1960). "Evolution of ocean basins". Report to Office of Naval Research. Contract No.
1858(10), NR 081-067. p. 38.


Seafloor spreading
Seafloor spreading is a process that occurs at mid-ocean ridges, where new oceanic crust is formed
through volcanic activity and then gradually moves away from the ridge. Seafloor spreading helps
explain continental drift in the theory of plate tectonics. When oceanic plates diverge, tensional stress
causes fractures to occur in the lithosphere. Basaltic magma rises up the fractures and cools on the
ocean floor to form new sea floor. Older rocks will be found farther away from the spreading zone while
younger rocks will be found nearer to the spreading zone.
Earlier theories (e.g. by Alfred Wegener and Alexander du Toit) of continental drift were
that continents "ploughed" through the sea. The idea that the seafloor itself moves (and carries the
continents with it) as it expands from a central axis was proposed by Harry Hess from Princeton
University in the 1960s.[1] The theory is well accepted now, and the phenomenon is known to be caused
by convection currents in the plastic, very weak upper mantle, or asthenosphere.[2]
Contents

1 Incipient spreading

2 Continued spreading and subduction

3 Debate and search for mechanism

4 Sea floor global topography: half-space model

5 See also

6 References

7 External links

Incipient spreading

Plates in the crust of the earth, according to the plate tectonics theory
In the general case, sea floor spreading starts as a rift in a continental land mass, similar to the Red Sea–East Africa Rift System today. The process starts with heating at the base of the continental crust, which
causes it to become more plastic and less dense. Because less dense objects rise in relation to denser
objects, the area being heated becomes a broad dome (see isostasy). As the crust bows upward,
fractures occur that gradually grow into rifts. The typical rift system consists of three rift arms at
approximately 120 degree angles. These areas are named triple junctions and can be found in several
places across the world today. The separated margins of the continents evolve to form passive margins.
Hess' theory was that new seafloor is formed when magma is forced upward toward the surface at a
mid-ocean ridge.
If spreading continues past the incipient stage described above, two of the rift arms will open while the
third arm stops opening and becomes a 'failed rift'. As the two active rifts continue to open, eventually
the continental crust is attenuated as far as it will stretch. At this point basaltic oceanic crust begins to
form between the separating continental fragments. When one of the rifts opens into the existing
ocean, the rift system is flooded with seawater and becomes a new sea. The Red Sea is an example of a
new arm of the sea. The East African rift was thought to be a "failed" arm that was opening somewhat
more slowly than the other two arms, but in 2005 the Ethiopian Afar Geophysical Lithospheric
Experiment reported that in the Afar region in September of that year, a 60 km fissure had opened as
wide as eight meters. During this period of initial flooding the new sea is sensitive to changes in climate and eustasy.
As a result, the new sea will evaporate (partially or completely) several times before the elevation of the
rift valley has been lowered to the point that the sea becomes stable. During this period of evaporation
large evaporite deposits will be made in the rift valley. Later these deposits have the potential to
become hydrocarbon seals and are of particular interest to petroleum geologists.
Sea floor spreading can stop during the process, but if it continues to the point that the continent is
completely severed, then a new ocean basin is created. The Red Sea has not yet completely split Arabia
from Africa, but a similar feature can be found on the other side of Africa that has broken completely
free. South America once fit into the area of the Niger Delta. The Niger River has formed in the failed rift
arm of the triple junction.
Continued spreading and subduction

Spreading at a mid-ocean ridge


The new oceanic crust is quite hot relative to old oceanic crust, so the new oceanic basin is shallower
than older oceanic basins. If the diameter of the earth remains relatively constant despite the
production of new crust, a mechanism must exist by which crust is also destroyed. The destruction of
oceanic crust occurs at subduction zones where oceanic crust is forced under either continental crust or
oceanic crust. Today, the Atlantic basin is actively spreading at the Mid-Atlantic Ridge. Only a small
portion of the oceanic crust produced in the Atlantic is subducted. However, the plates making up the
Pacific Ocean are experiencing subduction along many of their boundaries which causes the volcanic
activity in what has been termed the Ring of Fire of the Pacific Ocean. The Pacific is also home to one of
the world's most active spreading centres (the East Pacific Rise (EPR)) with spreading rates of up to
13 cm/yr. The Mid-Atlantic Ridge is a "textbook" slow-spreading centre, while the EPR is used as an
example of fast spreading. The differences in spreading rates affect not only the geometries of the
ridges but also the geochemistry of the basalts that are produced.[3]
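The quoted rates translate directly into crust production; a back-of-the-envelope sketch follows (the Mid-Atlantic figure of ~2.5 cm/yr is a typical textbook full-rate value, not taken from this text).

```python
def crust_width_km(full_rate_cm_per_yr: float, years: float) -> float:
    # Width of new sea floor produced across the ridge is rate x time;
    # the "full rate" already counts both flanks combined.
    return full_rate_cm_per_yr * years / 100.0 / 1000.0  # cm -> m -> km

# At the East Pacific Rise's ~13 cm/yr, a million years adds ~130 km of new
# ocean floor; the slow Mid-Atlantic Ridge (~2.5 cm/yr) adds only ~25 km.
print(crust_width_km(13, 1_000_000))   # 130.0
print(crust_width_km(2.5, 1_000_000))  # 25.0
```

Scaled over the ~180-million-year age of the oldest Atlantic crust, the slow rate is still enough to account for the width of the present basin.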
Since the new oceanic basins are shallower than the old oceanic basins, the total capacity of the world's
ocean basins decreases during times of active sea floor spreading. During the opening of the Atlantic
Ocean, sea level was so high that a Western Interior Seaway formed across North America from the Gulf
of Mexico to the Arctic Ocean.
Debate and search for mechanism[edit]
At the Mid-Atlantic Ridge (and in other areas), material from the upper mantle rises through the faults
between oceanic plates to form new crust as the plates move away from each other, a phenomenon
first observed as continental drift. When Alfred Wegener first presented a hypothesis of continental drift
in 1912, he suggested that continents ploughed through the ocean crust. This was impossible: oceanic
crust is both more dense and more rigid than continental crust. Accordingly, Wegener's theory wasn't
taken very seriously, especially in the United States.
Since then, it has been shown that the motion of the continents is linked to seafloor spreading. In the 1960s, the past record of geomagnetic reversals was recognized by observing the magnetic stripe "anomalies" on the ocean floor. These result in broadly evident "stripes" from which the past magnetic field polarity can be inferred from data gathered by simply towing a magnetometer along the sea surface or from an aircraft. The stripes on one side of the mid-ocean ridge were the mirror image of those on the other side. The seafloor must therefore have originated on the Earth's great fiery welts, like the Mid-Atlantic Ridge and the East Pacific Rise.
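The magnetic-stripe argument can be sketched in a few lines: polarity sequences read outward from the ridge axis should mirror each other on the two flanks, and stripe widths convert to crustal ages once a spreading rate is assumed. All values below are illustrative assumptions, not survey data.

```python
def is_mirror_symmetric(west_flank, east_flank):
    """Polarities read outward from the axis on each flank should match."""
    return list(west_flank) == list(east_flank)

def stripe_boundary_ages_myr(widths_km, half_rate_cm_per_yr):
    """Cumulative crust age (Myr) at each stripe boundary, moving away from
    the ridge axis. 1 cm/yr equals 10 km/Myr, so age = distance / rate.
    """
    rate_km_per_myr = half_rate_cm_per_yr * 10.0
    ages, total = [], 0.0
    for width in widths_km:
        total += width
        ages.append(total / rate_km_per_myr)
    return ages

# Hypothetical example: normal-reversed-normal stripes, 5 cm/yr half-rate.
print(is_mirror_symmetric(["N", "R", "N"], ["N", "R", "N"]))
print(stripe_boundary_ages_myr([50.0, 30.0], 5.0))
```

At a fast-spreading ridge like the EPR the same stripe sequence is simply stretched over a wider swath of seafloor, which is one reason spreading rate can be read directly from the stripe geometry.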
The driver for seafloor spreading in plates with active margins is the weight of the cool, dense,
subducting slabs that pull them along. The magmatism at the ridge is considered to be "passive
upwelling", which is caused by the plates being pulled apart under the weight of their own slabs.[4] This
can be thought of as analogous to a rug on a table with little friction: when part of the rug is off of the
table, its weight pulls the rest of the rug down with it.
Sea floor global topography: half-space model[edit]
To first approximation, sea floor global topography in areas without significant subduction can be
estimated by the half-space model.[5] In this model, the seabed height is determined by the oceanic
lithosphere temperature, due to thermal expansion. Oceanic lithosphere is continuously formed at a
constant rate at the mid-ocean ridges. The source of the lithosphere has a half-plane shape (x = 0, z < 0)
and a constant temperature T1. Due to its continuous creation, the lithosphere at x > 0 is moving away
from the ridge at a constant velocity v, which is assumed large compared to other typical scales in the
problem. The temperature at the upper boundary of the lithosphere (z = 0) is a constant T0 = 0. Thus at x = 0 the temperature is the Heaviside step function T1·Θ(−z). Finally, we assume the system is at a quasi-steady state, so that the temperature distribution is constant in time, i.e. T = T(x, z).

By calculating in the frame of reference of the moving lithosphere (velocity v), which has the spatial coordinate x′ = x − vt, we may write T = T(x′, z, t) and use the heat equation:

  ∂T/∂t = κ (∂²T/∂z² + ∂²T/∂x′²)

where κ is the thermal diffusivity of the mantle.

Since T depends on x′ and t only through the combination x = x′ + vt, we have:

  ∂T/∂t = v ∂T/∂x

Thus:

  v ∂T/∂x = κ (∂²T/∂z² + ∂²T/∂x²)

We now use the assumption that v is large compared to other scales in the problem; we therefore neglect the last term in the equation and get a one-dimensional diffusion equation:

  v ∂T/∂x = κ ∂²T/∂z²,  or, with t = x/v,  ∂T/∂t = κ ∂²T/∂z²

with the initial condition T(t = 0, z) = T1·Θ(−z). The solution for z ≤ 0 is given by the error function:

  T(t, z) = T1 · erf( −z / (2√(κt)) )

Due to the large velocity, the temperature dependence on the horizontal direction is negligible, and the height at time t (i.e. of sea floor of age t) can be calculated by integrating the thermal expansion over z:

  h(t) = h0 − (2/√π) · αeff · T1 · √(κt)

where αeff is the effective volumetric thermal expansion coefficient, and h0 is the mid-ocean ridge height (compared to some reference).

Note that the assumption that v is relatively large is equivalent to the assumption that the thermal diffusivity κ is small compared to L²/A, where L is the ocean width (from mid-ocean ridge to continental shelf) and A is its age.

The effective thermal expansion coefficient αeff differs from the usual thermal expansion coefficient α due to the isostatic effect of the change in water column height above the lithosphere as it expands or retracts. Both coefficients are related by:

  αeff = α · ρ / (ρ − ρw)

where ρ is the rock density and ρw is the density of water.

By substituting rough estimates for the parameters — κ ~ 8×10⁻⁷ m²/s, α ~ 4×10⁻⁵ °C⁻¹, ρ ~ 3.3 g/cm³, ρw ~ 1 g/cm³, and T1 ~ 1220 °C (for the Atlantic and Indian oceans) or ~1120 °C (for the eastern Pacific) — we have:

  h(t) ≈ h0 − 350·√t

for the eastern Pacific Ocean, and:

  h(t) ≈ h0 − 390·√t

for the Atlantic and Indian Oceans, where the height is in meters and the time in millions of years. To get the dependence on x, one must substitute t = x/v ≈ Ax/L, where L is the distance between the ridge and the continental shelf (roughly half the ocean width), and A is the ocean age.
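The half-space subsidence relation h0 − h(t) = (2/√π)·αeff·T1·√(κt) can be evaluated directly. A minimal sketch, where all parameter values are rough order-of-magnitude assumptions of the kind used in the text, not measured constants:

```python
import math

# Half-space cooling model: subsidence of sea floor of age t below the ridge
# crest. Parameter values are rough assumptions, not measured constants.
KAPPA = 8e-7        # thermal diffusivity of the mantle, m^2/s (assumed)
ALPHA = 4e-5        # volumetric thermal expansion coefficient, 1/degC (assumed)
RHO_ROCK = 3300.0   # rock density, kg/m^3 (assumed)
RHO_WATER = 1000.0  # sea water density, kg/m^3
MYR = 3.156e13      # seconds per million years

def effective_alpha(alpha=ALPHA, rho=RHO_ROCK, rho_w=RHO_WATER):
    """Isostatic correction: alpha_eff = alpha * rho / (rho - rho_w)."""
    return alpha * rho / (rho - rho_w)

def subsidence_m(age_myr, t1=1120.0):
    """Subsidence h0 - h(t), in meters, for sea floor of the given age (Myr).

    t1 is the mantle temperature in degC (~1120 for the eastern Pacific,
    ~1220 for the Atlantic and Indian oceans).
    """
    t_seconds = age_myr * MYR
    return (2.0 / math.sqrt(math.pi)) * effective_alpha() * t1 \
        * math.sqrt(KAPPA * t_seconds)
```

With these values the coefficient works out to a few hundred meters per square root of million years, and the √t dependence means 4-million-year-old crust sits twice as deep below the ridge crest as 1-million-year-old crust.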

Asthenosphere

Earth cutaway from core to crust, the asthenosphere lying between the upper mantle and the
lithospheric mantle (detail not to scale)
The asthenosphere (from Greek asthenēs 'weak' + sphere) is the highly viscous, mechanically weak[1] and ductilely deforming region of the upper mantle of the Earth. It lies below the lithosphere, at depths between 80 and 200 km (50 and 124 miles) below the surface. The lithosphere–asthenosphere boundary is usually referred to as the LAB. The asthenosphere is generally solid, although some of its regions can be melted (e.g. below mid-ocean ridges). The lower boundary of the asthenosphere is not well defined. The thickness of the asthenosphere depends mainly on temperature; in some regions it can extend as deep as 700 km (430 mi). It is considered the source region of mid-ocean ridge basalt (MORB).[2]
Contents

1 Characteristics

2 Historical

3 References

4 Bibliography

5 External links

Characteristics[edit]
The asthenosphere is a part of the upper mantle just below the lithosphere that is involved in plate tectonic movement and isostatic adjustments. The lithosphere–asthenosphere boundary is conventionally taken at the 1,300 °C isotherm, above which the mantle behaves in a rigid fashion and below which it behaves in a ductile fashion.[3] Seismic waves pass relatively slowly through the asthenosphere[4] compared to the overlying lithospheric mantle, so it has been called the low-velocity zone (LVZ), although the two are not exactly the same. This decrease in seismic wave velocity from the lithosphere to the asthenosphere could be caused by the presence of a small percentage of melt in the asthenosphere. The lower boundary of the LVZ lies at a depth of 180–220 km,[5] whereas the base of the asthenosphere lies at a depth of about 700 km.[6] The low-velocity zone was the observation that originally alerted seismologists to the asthenosphere's presence and gave some information about its physical properties, as the speed of seismic waves decreases with decreasing rigidity.
In old oceanic mantle the transition from the lithosphere to the asthenosphere, the so-called lithosphere–asthenosphere boundary (LAB), is shallow (about 60 km in some regions), with a sharp and large velocity drop of 5–10%.[7] At the mid-ocean ridges the LAB rises to within a few kilometers of the ocean floor.
The upper part of the asthenosphere is believed to be the zone upon which the great rigid and brittle
lithospheric plates of the Earth's crust move about. Due to the temperature and pressure conditions in
the asthenosphere, rock becomes ductile, moving at rates of deformation measured in cm/yr over linear distances of thousands of kilometers. In this way, it flows like a convection current,
radiating heat outward from the Earth's interior. Above the asthenosphere, at the same rate of
deformation, rock behaves elastically and, being brittle, can break, causing faults. The rigid lithosphere is
thought to "float" or move about on the slowly flowing asthenosphere, creating the movement
of tectonic plates.
Historical[edit]
Although its presence was suspected as early as 1926, the worldwide occurrence of the asthenosphere
was confirmed by analyses of earthquake waves from the Mw 9.5 Great Chilean earthquake of May 22, 1960.

Mid-ocean ridge
A mid-ocean ridge or mid-oceanic ridge is an underwater mountain range, formed by plate tectonics.
This uplifting of the ocean floor occurs when convection currents rise in the mantle beneath the oceanic
crust and create magma where two tectonic plates meet at a divergent boundary.
The mid-ocean ridges of the world are connected and form a single global mid-oceanic ridge system that
is part of every ocean, making the mid-oceanic ridge system the longest mountain range in the world,
with a total length of about 60,000 km.
There are two processes, ridge-push and slab-pull, thought to be responsible for the spreading seen at mid-ocean ridges, and there is some uncertainty as to which is dominant. Ridge-push occurs when the weight of the ridge pushes the rest of the tectonic plate away from the ridge, often towards a subduction zone. At the subduction zone, "slab-pull" comes into effect: the weight of the tectonic plate being subducted (pulled) below the overlying plate drags the rest of the plate along behind it. The other process proposed to contribute to the formation of new oceanic crust at mid-ocean ridges is the "mantle conveyor" (see image). However, some studies have shown that the upper mantle (asthenosphere) is too plastic (flexible) to generate enough friction to pull the tectonic plate along.

HOT SPOTS AND MANTLE PLUMES


Although most volcanic rocks are generated at plate boundaries, there
are a few exceptionally active sites of volcanism within the plate
interiors. These intraplate regions of voluminous volcanism are
called hotspots. Twenty-four selected hotspots are shown on the
adjacent map. Most hotspots are thought to be underlain by a large
plume of anomalously hot mantle. These mantle plumes appear to be
generated in the lower mantle and rise slowly through the mantle by
convection. Experimental data suggest that they rise as a plastically deforming mass that has a bulbous plume head fed by a long, narrow plume tail. As the head impinges on the base of the lithosphere, it spreads outward into a mushroom shape. Such plume heads are thought to have diameters between ~500 and ~1000 km.
Many scientists believe that mantle plumes may be derived from near the core-mantle boundary, as demonstrated in this computer simulation from the Minnesota supercomputing lab. Note the bulbous plume heads, the narrow plume tails, and the flattened plume heads as they impinge on the outer sphere representing the base of the lithosphere.
Decompressional melting of this hot mantle source can generate huge volumes of basalt magma. It is thought that the massive flood basalt provinces on Earth are produced above mantle hotspots. Although
most geologists accept the hotspot concept, the number of hotspots worldwide is still a matter of
controversy.
HOTSPOT TRACKS
The Pacific plate contains several linear belts of extinct submarine volcanoes, called seamounts, an
example of which is the Foundation seamount chain shown here.

The Foundation seamount chain is located near Easter Island in the south Pacific. Courtesy of NOAA.

The formation of at least some of these intraplate seamount chains can be attributed to volcanism
above a mantle hotspot to form a linear, age-progressive hotspot track. Mantle plumes appear to be
largely unaffected by plate motions. As lithospheric plates move across stationary hotspots, volcanism
will generate volcanic islands that are active above the mantle plume, but become inactive and
progressively older as they move away from the mantle plume in the direction of plate movement. Thus,
a linear belt of inactive volcanic islands and seamounts will be produced. A classic example of this
mechanism is demonstrated by the Hawaiian and Emperor seamount chains.

The image on the left shows the Hawaiian and Emperor seamount chains.
The Hawaiian chain begins at the Hawaiian Islands, to the southeast, and
continues to the bend located ~5000 km to the northwest. From the
bend, the Emperor chain continues to the north-northwest until it
terminates at the Aleutian trench (Courtesy of NOAA). The diagram on
the right is a model demonstrating how these chains form above the
stationary mantle plume, becoming progressively older to the
northwest (Courtesy of the USGS).
The "Big Island" of Hawaii lies above the mantle plume. It is the only island that is currently volcanically
active. The seven Hawaiian Islands become progressively older to the northwest. The main phase of
volcanism on Oahu ceased about 3 million years ago, and on Kauai about 5 million years ago. This trend
continues beyond the Hawaiian Islands, as demonstrated by a string of seamounts (the Hawaiian chain)
that becomes progressively older toward Midway Island. Midway is composed of lavas that are ~27
million years old. Northwest of Midway, the volcanic belt bends to the north-northwest to form the
Emperor seamount chain. Here, the seamounts become progressively older until they terminate against
the Aleutian trench. The oldest of these seamounts near the trench is ~70 million years old. This implies
that the mantle plume currently generating basaltic lavas on the Big Island has been in existence for at
least 70 million years!
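The age-progression argument above can be turned into a back-of-the-envelope plate-speed estimate by fitting seamount distance from the active hotspot against seamount age. The sample points below are hypothetical round numbers chosen only to illustrate the calculation, not measured values.

```python
def plate_speed_cm_per_yr(samples):
    """Least-squares slope (through the origin) of seamount distance from the
    hotspot (km) versus age (Myr), converted to cm/yr (1 km/Myr = 0.1 cm/yr).
    """
    numerator = sum(dist * age for dist, age in samples)
    denominator = sum(age * age for _, age in samples)
    km_per_myr = numerator / denominator
    return km_per_myr / 10.0

# Hypothetical (distance_km, age_myr) pairs for a plate moving ~9 cm/yr:
samples = [(270.0, 3.0), (450.0, 5.0), (2430.0, 27.0)]
print(plate_speed_cm_per_yr(samples))
```

Run against real Hawaiian-chain data, the same fit is how the roughly constant motion of the Pacific plate over the hotspot is estimated.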
The Hawaiians were very good at recognizing the difference between the older, eroded volcanic islands and the newer islands to the southeast, where volcanic features are more pristine. Legend has it that Pele, the
Hawaiian goddess of fire, was forced from island to island as she was chased by various gods. Her

journey is marked by volcanic eruptions, as she progressed from the island of Kaua'i to her current home
on the Big Island. The legend corresponds well with the modern scientific notion of the age progression
of these volcanic islands.

There are five active volcanoes in Hawaii. They are:

Loihi

Kilauea

Mauna Loa

Hualalai

Haleakala

Kilauea is considered one of the world's most frequently active volcanoes. If you just look at the number
of Kilauea eruptions recorded since Europeans arrived, there have been 62 eruptions in 245 years, which
comes out to 1 eruption every 3.95 years. However, this completely ignores the fact that some of the
eruptions lasted a long time. For example, the current eruption started in January of 1983 and has been
continuous ever since! Likewise, there was an active lava lake in the summit caldera from at least 1823
until 1924, while at the same time eruptions would take place elsewhere on the flanks of the volcano.
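The eruption-counting arithmetic above is just a mean recurrence interval, and, as the text notes, it says nothing about how long each eruption lasts. A minimal sketch:

```python
def mean_recurrence_years(n_eruptions, span_years):
    """Naive mean interval between eruption onsets. This ignores eruption
    duration entirely (Kilauea's 1983 eruption ran continuously for decades).
    """
    return span_years / n_eruptions

# 62 recorded eruptions in 245 years, as quoted in the text:
print(round(mean_recurrence_years(62, 245), 2))
```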
Mauna Loa is an active volcano and is due for an eruption. Mauna Loa has erupted 15 times since 1900.
These eruptions have lasted from a few hours to 145 days. Since 1950 Mauna Loa has erupted only
twice, in 1975 and 1984. The 1975 eruption lasted 1 day. The 1984 eruption lasted 3 weeks. Nearly all
the eruptions begin at the summit. About half of these migrate down into a rift zone.
Haleakala began growing on the ocean floor roughly 1-2 million years ago. It erupted most recently in
1790 at La Perouse Bay.
Hualalai is an active volcano. The resort town of Kailua is on the southwest flank of the volcano. Hualalai last erupted in 1801 and sent lava from a vent on its northeast rift down to the ocean. Swarms of earthquakes in 1929 were probably the result of magma movement within the volcano, but there was not an eruption. Hualalai is monitored by geologists of the U.S. Geological Survey's Hawaiian Volcano Observatory. In the last 24 years there have been no swarms of microearthquakes nor any harmonic tremor. Since the early 1980s the geologists have been surveying the volcano. Hualalai is not expanding at the present time, nor has it expanded since the geologists began making their measurements. If anything changes, I'm sure we'll hear about it.
Loihi means "long one", a reference to its elongate shape. For a 3-D image, check out the Hawaii Undersea Geological Observatory (HUGO) home page. Right now, the summit of Loihi is about 970 meters below sea level. It is growing on the lower flanks of its two neighbors, Kilauea and Mauna Loa, with its base at a depth of about 4000 meters below sea level, so you can say that Loihi itself is about 3000 m high. We don't really know when it will reach the surface, or even if it will. There is an underwater volcano off the NW coast of the big island of Hawaii named Mahukona, and there is debate about whether it ever grew above sea level, or died out prior to doing so. The most often-heard time required for Loihi to reach sea level is about 10,000 years, but that is really only a guess. It might be 30,000 years for all we know. It is far enough away from the coastline of Hawaii that I imagine that at first it will be a

separate island when it breaks the surface. As it grows (and especially if Kilauea and Mauna Loa are still
erupting) it will soon be joined to the island.
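The guesses quoted above for when Loihi might reach the surface imply very different average growth rates; a small sketch of that arithmetic, using the 970 m summit depth from the text and the quoted time spans:

```python
def required_growth_rate_cm_per_yr(summit_depth_m, years):
    """Average vertical growth rate needed for a submarine summit to reach
    sea level in the given time (cm/yr)."""
    return summit_depth_m * 100.0 / years

# 970 m of remaining depth over 10,000 vs. 30,000 years:
print(required_growth_rate_cm_per_yr(970, 10_000))
print(required_growth_rate_cm_per_yr(970, 30_000))
```

A factor-of-three spread in the assumed time gives a factor-of-three spread in the implied growth rate, which is why the text is careful to call these numbers guesses.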

Mauna Loa (/ˌmɔːnə ˈloʊ.ə/ or /ˌmaʊnə ˈloʊ.ə/; Hawaiian: [ˈmɐwnə ˈlowə]; English: Long Mountain[3]) is one of five volcanoes that form the Island of Hawaii in the U.S. state of Hawaii in the Pacific Ocean. The largest subaerial volcano in both mass and volume, Mauna Loa has historically been considered the largest volcano on Earth. It is an active shield volcano with relatively shallow slopes, with a volume estimated at approximately 18,000 cubic miles (75,000 km3),[4] although its peak is about 120 feet (37 m) lower than that of its neighbor, Mauna Kea. Lava eruptions from Mauna Loa are silica-poor and very fluid, and they tend to be non-explosive.
Mauna Loa has probably been erupting for at least 700,000 years, and may have emerged above sea
level about 400,000 years ago. The oldest-known dated rocks are not older than 200,000 years.[5] The
volcano'smagma comes from the Hawaii hotspot, which has been responsible for the creation of
the Hawaiian island chain over tens of millions of years. The slow drift of the Pacific Plate will eventually
carry Mauna Loa away from the hotspot within 500,000 to one million years from now, at which point it
will become extinct.
Mauna Loa's most recent eruption occurred from March 24 to April 15, 1984. No recent eruptions of the
volcano have caused fatalities, but eruptions in 1926 and 1950 destroyed villages, and the city of Hilo is
partly built on lava flows from the late 19th century. Because of the potential hazards it poses to
population centers, Mauna Loa is part of the Decade Volcanoes program, which encourages studies of
the world's most dangerous volcanoes. Mauna Loa has been monitored intensively by the Hawaiian Volcano Observatory since 1912. Observations of the atmosphere are undertaken at the Mauna Loa Observatory, and of the Sun at the Mauna Loa Solar Observatory, both located near the mountain's summit. Hawaii Volcanoes National Park covers the summit and the southeastern flank of the volcano, and also incorporates Kīlauea, a separate volcano.
Contents

1 Geology
o

1.1 Setting

1.2 Structure

2 Eruptive history
o

2.1 Prehistoric eruptions

2.2 Recent history

2.3 Hazards

2.4 Monitoring

3 Human history

3.1 Pre-contact

3.2 European summiting attempts

3.3 Wilkes expedition

3.4 Today

4 Climate

5 Observatories

6 See also

7 References

8 External links

Geology
Setting

Position of Mauna Loa on Hawaii island

Landsat mosaic; recent lava flows appear in black

Like all Hawaiian volcanoes, Mauna Loa was created as the Pacific tectonic plate moved over
the Hawaiian hotspot in the Earth's underlying mantle.[6] The Hawaii island volcanoes are merely the
most recent evidence of this process that, over 70 million years, has created the 3,700 mi (6,000 km) long Hawaiian–Emperor seamount chain.[7] The prevailing view states that the hotspot has been largely
stationary within the planet's mantle for much, if not all of the Cenozoic Era.[7][8] However, while the
Hawaiian plume is well-understood and extensively studied, the nature of hotspots themselves remains
fairly enigmatic.[9]
Mauna Loa is one of five subaerial volcanoes that make up the island of Hawaii, created by the Hawaii hotspot.[10] The oldest volcano on the island, Kohala, is more than a million years old,[11] and Kīlauea, the youngest, is believed to be between 300,000 and 600,000 years of age.[10] Lōʻihi Seamount on the island's flank is even younger, but has yet to breach the surface.[12] At 1 million to 700,000 years of age,[2] Mauna Loa is the second youngest of the five volcanoes on the island, making it the third youngest volcano in the Hawaiian–Emperor seamount chain, a chain of shield volcanoes and seamounts extending from Hawaii to the Kuril–Kamchatka Trench in Russia.[13]
Following the pattern of Hawaiian volcanics, Mauna Loa would have started out as a young submarine volcano, gradually building itself up through subsurface eruptions of alkali basalt before emerging from the sea through a series of surtseyan eruptions[14] about 400,000 years ago. Since then the volcano has remained active, generating a continual stream of effusive and explosive eruptions, including 33 historical ones since the first well-documented eruption in 1843.[2] Although Mauna Loa's activity has been overshadowed in recent years by that of its neighbor Kīlauea,[10] it remains active.[2]
Structure

Mauna Loa's summit, overlaid with 100 m (328 ft) contour lines; its rift zones are visible from the air.

Mokuʻāweoweo, Mauna Loa's summit caldera, covered in snow.


Mauna Loa is the largest subaerial and second largest overall volcano in the world (behind Tamu Massif),[15] covering a land area of 5,271 km2 (2,035 sq mi) and spanning a maximum width of 120 km (75 mi).[2] Consisting of approximately 65,000 to 80,000 km3 (15,600 to 19,200 cu mi) of solid rock,[16] it
makes up more than half of the surface area of the island of Hawaii. Combining the volcano's extensive
submarine flanks (5,000 m (16,400 ft) to the sea floor) and 4,170 m (13,680 ft) subaerial height, Mauna
Loa rises an impressive 9,170 m (30,085 ft) from base to peak,[2][17] greater than the 8,848 m or
29,029 ft[18] elevation of Mount Everest from sea level to its peak. In addition, much of the mountain is
invisible even underwater: its mass depresses the crust beneath it by another 8 km (5 mi), in the shape
of an inverse mountain,[19] meaning the true height of Mauna Loa from the start of its eruptive history is
about 17,170 m (56,000 ft).[20]
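The base-to-peak figures quoted above are simple sums, and a quick check with the text's own numbers reproduces them:

```python
def total_relief_m(submarine_flank_m, subaerial_m, crustal_depression_m=0):
    """Base-to-peak relief in meters; include the crustal depression to get
    the 'true height from the start of its eruptive history'."""
    return submarine_flank_m + subaerial_m + crustal_depression_m

# 5,000 m of submarine flank plus 4,170 m of subaerial height, then adding
# the ~8 km crustal depression, as quoted in the text:
print(total_relief_m(5000, 4170))
print(total_relief_m(5000, 4170, 8000))
```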
Mauna Loa is a typical shield volcano in form, taking the shape of a long, broad dome extending down to the ocean floor whose slopes are about 12° at their steepest, a consequence of its extremely fluid lava. The shield-stage lavas that built the enormous main mass of the mountain are tholeiitic basalts, like those of Mauna Kea, created through the mixing of primary magma and subducted oceanic crust.[21] Mauna Loa's summit hosts three overlapping pit craters arranged northeast-southwest, the first and last roughly 1 km (0.6 mi) in diameter and the second an oblong 4.2 km × 2.5 km (2.6 mi × 1.6 mi) feature; together these three craters make up the 6.2 by 2.5 km (3.9 by 1.6 mi) summit caldera Mokuʻāweoweo,[22] so named for the Hawaiian ʻāweoweo fish (Priacanthus meeki), purportedly due to the resemblance of its eruptive fires to the coloration of the fish.[23] Mokuʻāweoweo's caldera floor lies between 170 and 50 m (558 and 164 ft) beneath its rim, and it is only the latest of several calderas that have formed and re-formed over the volcano's life. It was created between 1,000 and 1,500 years ago by a large eruption from Mauna Loa's northeast rift zone, which emptied out a shallow magma chamber beneath the summit and collapsed it into its present form.[22] Additionally, two smaller pit craters lie southwest of the caldera, named Lua Hou (New Pit) and Lua Hohonu (Deep Pit).[16]
Mauna Loa's summit is also the focal point for its two prominent rift zones, marked on the surface by well-preserved, relatively recent lava flows (easily seen in satellite imagery) and linearly arranged fracture lines intersected by cinder and spatter cones.[24] These rift zones are deeply set structures, driven by dike intrusions along a decollement fault that is believed to reach down all the way to the volcano's base, 12 to 14 km (7 to 9 mi) deep.[25] The first is a 60 km (37 mi) rift trending southwest from the caldera to the sea and a further 40 km (25 mi) underwater, with a prominent 40° directional change along its length; this rift zone is historically active across most of its length. The second, northeastern rift zone extends towards Hilo and is historically active across only the first 20 km (12 mi) of its length, with a nearly straight and, in its latter sections, poorly defined trend.[24] The northeastern rift zone takes the form of a succession of cinder cones, the most prominent of which is the 60 m (197 ft) high Puʻu ʻUlaʻula, or Red Hill. There is also a less definite northward rift zone that extends towards the Humuʻula Saddle, marking the intersection of Mauna Loa and Mauna Kea.[16]
Simplified geophysical models of Mauna Loa's magma chamber have been constructed, using interferometric synthetic aperture radar measures of ground deformation due to the slow buildup of lava under the volcano's surface. These models predict a 1.1 km (0.7 mi) wide magma chamber located at a depth of about 4.7 km (3 mi), 0.5 km (0.3 mi) below sea level, near the southeastern margin of Mokuʻāweoweo. This shallow magma chamber is significantly higher-placed than Mauna Loa's rift zones, suggesting that magma intrusion into the deeper, and occasional dike injections into the shallower, parts of the rift zone drive rift activity; a similar mechanism has been proposed for neighboring Kīlauea.[25] Earlier models based on Mauna Loa's two most recent eruptions made a similar prediction, placing the chamber at 3 km (1.9 mi) deep in roughly the same geographic position.[26]
Mauna Loa has complex interactions with its neighbors, Hualālai to the west, Mauna Kea to the north, and particularly Kīlauea to the east. Lavas from Mauna Kea intersect with Mauna Loa's basal flows as a consequence of Kea's older age,[27] and Mauna Kea's original rift zones were buried beneath post-shield volcanism from Mauna Loa;[28] additionally, Mauna Kea shares Mauna Loa's gravity well, depressing the ocean crust beneath it by 6 km (4 mi).[27] There are also a series of normal faults on Mauna Loa's northern and western slopes, between its two major rift zones, that are believed to be the result of combined circumferential tension from the two rift zones and added pressure due to the westward growth of neighboring Kīlauea.[29]
Because Kīlauea lacks topographical prominence and appears as a bulge on the southeastern flank of Mauna Loa, it was historically interpreted by both native Hawaiians and early geologists to be an active satellite of Mauna Loa. However, analysis of the chemical composition of lavas from the two volcanoes shows that they have separate magma chambers, and are thus distinct. Nonetheless, their proximity has led to a historical trend in which high activity at one volcano roughly coincides with low activity at the other. When Kīlauea lay dormant between 1934 and 1952, Mauna Loa became active, and when the latter remained quiet from 1952 to 1974, the reverse was true. This is not always the case; the 1984 eruption of Mauna Loa started during an eruption at Kīlauea, but had no discernible effect on the Kīlauea eruption, and the ongoing inflation of Mauna Loa's summit, indicative of a future eruption, began the same day as new lava flows at Kīlauea's Puʻu ʻŌʻō crater. Geologists have suggested that "pulses" of magma entering Mauna Loa's deeper magma system may have increased pressure inside Kīlauea and triggered the concurrent eruptions.[30]
Mauna Loa is slumping eastward along its southwestern rift zone, leveraging its mass into Kīlauea and driving the latter eastward at a rate of about 10 cm (4 in) per year; the interaction between the two volcanoes in this manner has generated a number of large earthquakes in the past, and has resulted in a significant area of debris off of Kīlauea's seaward flank known as the Hilina Slump. A system of older faults exists on the southeastern side of Mauna Loa that likely formed before Kīlauea became large enough to impede Mauna Loa's slump; the lowest and northernmost of these, the Kaʻōiki fault, remains an active earthquake center today. The west side of Mauna Loa, meanwhile, is unimpeded in movement, and indeed is believed to have undergone a massive slump collapse between 100,000 and 200,000 years ago, the residue from which, consisting of a scattering of debris up to several kilometers wide and up to 50 km (31 mi) distant, is still visible today. The damage was so extensive that the headwall of the damage likely intersected its southwestern rift zone. There is very little movement there today, a consequence of the volcano's geometry.[31]
Mauna Loa is tall enough to have experienced glaciation during the last ice age, 25,000 to 15,000 years ago.[6] Unlike Mauna Kea, on which extensive evidence of glaciation remains even today,[32] Mauna Loa was at the time and has remained active, having grown an additional 150 to 300 m (492 to 984 ft) in height since then and covering any glacial deposits beneath new flows; strata of that age do not occur until at least 2,000 m (6,562 ft) down from the volcano's summit, too low for glacial growth. Mauna Loa also lacks its neighbor's summit permafrost region, although sporadic ice persists in places. It is speculated that extensive phreatomagmatic activity occurred during this time, contributing extensively to ash deposits on the summit.[6]

A view of Mauna Loa taken from a puʻu near the Onizuka Center for International Astronomy Visitor Information Station at the 9,300 ft level of Mauna Kea.
Eruptive history
Prehistoric eruptions

A cinder cone and surrounding flows on Mauna Loa


To have reached its enormous size within its relatively short (geologically speaking) 600,000 to
1,000,000 years of life, Mauna Loa would logically have had to have grown extremely rapidly through its
developmental history,[33] and extensive charcoal-based radiocarbon dating (perhaps the most extensive
such prehistorical eruptive dating on Earth[34][35]) has amassed a record of almost two hundred reliably
dated extant flows confirming this hypothesis.[33]
The oldest exposed flows on Mauna Loa are thought to be the Ninole Hills on its southern flank, subaerial basalt rock dating back approximately 100,000 to 200,000 years. They form a terrace against which younger flows have since banked, heavily eroded and incised against its slope; this is believed to be the result of a period of erosion caused by a change in the direction of lava flow brought on by the volcano's prehistoric slump. These are followed by two units of lava flows separated by an intervening ash layer known as the Pāhala ash layer: the older Kahuku basalt, sparsely exposed on the lower southwest rift, and the younger and far more widespread Kaʻū basalt, which appears more widely on the volcano. The Pāhala ashes themselves were produced over a long period of time, circa 13,000 to 30,000 years ago, although heavy vitrification and interactions with post- and pre-creation flows have hindered exact dating. Their age roughly corresponds with the glaciation of Mauna Loa during the last ice age, raising the distinct possibility that they are the product of phreatomagmatic interaction between the long-gone glaciers and Mauna Loa's eruptive activities.[6]
Studies have shown that a cycle occurs in which volcanic activity at the summit is dominant for several
hundred years, after which activity shifts to the rift zones for several more centuries, and then back to
the summit again. Two cycles have been clearly identified, each lasting 1,5002,000 years. This cyclical
behavior is unique to Mauna Loa among the Hawaiian volcanoes.[36] Between about 7,000 and
6,000 years ago Mauna Loa was largely inactive. The cause of this cessation in activity is not known, and
no known similar hiatus has been found at other Hawaiian volcanoes except for those currently in the
post-shield stage. Between 11,000 and 8,000 years ago, activity was more intense than it is today.[34] However, Mauna Loa's overall rate of growth has probably begun to slow over the last
100,000 years,[37] and the volcano may in fact be nearing the end of its tholeiitic basalt shield-building
phase.[38]
Recent history
Ancient Hawaiians have been present on Hawaii island for about 1,500 years, but they preserved almost no records of volcanic activity on the island beyond a few fragmentary accounts dating to the late 18th and early 19th centuries.[39] Possible eruptions occurred around 1730 and 1750 and sometime between 1780 and 1803.[40][41] A June 1832 eruption was witnessed by a missionary on Maui, but the 190 km (118 mi) between the two islands and the lack of apparent geological evidence have cast this testimony in doubt. Thus the first entirely confirmed, historically witnessed eruption was a January 1843 event, since which Mauna Loa has erupted a further 32 times.[39]
Historical eruptions at Mauna Loa are typically Hawaiian in character and rarely violent, starting with the emergence of lava fountains over a several-kilometer-long rift colloquially known as the "curtain of fire" (often, but not always, propagating from Mauna Loa's summit[25]) and eventually concentrating at a single vent, the eruption's long-term center.[16][24] Activity centered on the summit is usually followed by flank eruptions up to a few months later,[42] and although Mauna Loa is historically less active than its neighbor Kīlauea, it tends to produce greater volumes of lava over shorter periods of time.[43] Most eruptions are centered at the summit or at one of its two major rift zones; within the last two hundred years, 38 percent of eruptions occurred at the summit, 31 percent at the northeast rift zone, 25 percent at the southwest rift zone, and the remaining 6 percent at northwest vents.[34] Forty percent of the volcano's surface consists of lavas less than a thousand years old,[43] and 98 percent of lavas less than 10,000 years old.[33] In addition to the summit and rift zones, Mauna Loa's northwestern flank has also been the source of three historical eruptions.[43]
The 1843 event was followed by eruptions in 1849, 1851, 1852, and 1855,[39] with the 1855 flows being
particularly extensive.[34] 1859 marked the largest of the three historical flows that have been centered
on Mauna Loa's northwestern flank, producing a long lava flow that reached the ocean on Hawaii
island's west coast, north of Kiholo Bay.[43] An eruption in 1868 occurred alongside the enormous 1868
Hawaii earthquake,[34] a magnitude eight event that claimed 77 lives and remains the largest earthquake
ever to hit the island.[44] Following further activity in 1871, Mauna Loa experienced nearly continuous
activity from August 1872 through 1877, a long-lasting and voluminous eruption lasting approximately
1,200 days and never moving beyond its summit.[39][45][46] A short single-day eruption in 1877 was
unusual in that it took place underwater, in Kealakekua Bay and within a mile of the shoreline; curious
onlookers approaching the area in boats reported unusually turbulent water and occasional floating
blocks of hardened lava.[43] Further eruptions occurred in 1879 and then twice in 1880,[39] the latter of which extended into 1881 and came within the present boundaries of the island's largest city, Hilo; however, at the time the settlement was a shore-side village located further down the volcano's slope, and so was unaffected.[34][43]

United States Geological Survey hazard mapping for Hawaii island; the lowest numbers correspond to the highest hazard levels.
Mauna Loa continued its activity, and of the eruptions that occurred in 1887, 1892, 1896, 1899, 1903 (twice), 1907, 1914, 1916, 1919, and 1926,[39] three (in 1887, 1919, and 1926) were partially subaerial.[34] The 1926 eruption in particular is noteworthy for having inundated the village of Hoʻōpūloa, destroying 12 houses, a church, and a small harbor.[47] After an event in 1933, Mauna Loa's 1935 eruption caused a public crisis when its flows started to head towards Hilo.[39] A bombing operation, planned by then-lieutenant colonel George S. Patton, was mounted to try to divert the flows. The bombing, conducted on December 27, was declared a success by Thomas A. Jaggar, director of the Hawaiian Volcano Observatory, and lava stopped flowing by January 2, 1936. However, the role the bombing played in ending the eruption has since been heavily disputed by volcanologists.[48] A longer but summit-bound event in 1940 was comparatively less noteworthy.[39]
Mauna Loa's 1942 eruption occurred only four months after the attack on Pearl Harbor and the United States' entry into World War II, and created a unique problem for the wartime United States. Occurring during an enforced nighttime blackout on the island, the eruption's luminosity forced the government to issue a gag order on the local press, hoping to prevent news of its occurrence from spreading, for fear that the Japanese would use it to launch a bombing run on the island. However, as flows from the eruption rapidly spread down the volcano's flank and threatened the ʻŌlaʻa flume, Mountain View's primary water source, the United States Army Air Forces decided to drop its own bombs on the island in the hopes of redirecting the flows away from the flume; sixteen bombs weighing between 300 and 600 lb (136 and 272 kg) each were dropped on the island, but produced little effect. Eventually the eruption ceased on its own.[41][49]
Following a 1949 event, the next major eruption at Mauna Loa occurred in 1950. Originating from the volcano's southwestern rift zone, the eruption remains the largest rift event in the volcano's modern history, lasting 23 days, emitting 376 million cubic meters of lava, and reaching the ocean 24 km (15 mi) away within 3 hours. The 1950 eruption was not the most voluminous eruption on the volcano (the long-lived 1872–1877 event produced more than twice as much material) but it was easily one of the fastest-acting, producing the same amount of lava as the 1859 eruption in a tenth of the time.[46] Flows overtook the village of Hoʻokena-mauka in South Kona, crossed Hawaii Route 11, and reached the sea within four hours of eruption, and although there was no loss of life the village was permanently destroyed.[50] After the 1950 event Mauna Loa entered an extended period of dormancy, interrupted only by a small single-day summit event in 1975. However, it rumbled to life again in 1984, manifesting first at Mauna Loa's summit and then producing a narrow, channelized ʻaʻā flow that advanced downslope to within 6 km (4 mi) of Hilo, close enough to illuminate the city at nighttime. However, the flow got no closer, as two natural levees further up its pathway subsequently broke and diverted active flows.[51][52]
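The rate comparisons above can be sanity-checked with back-of-the-envelope arithmetic, using only figures quoted in this article (23 days and 376 million cubic meters for 1950; roughly twice that volume over about 1,200 days for 1872–1877):

```python
# Figures quoted in the text above; these give averages only, since
# real eruptive output varies considerably over an eruption.
volume_1950_m3 = 376e6        # lava emitted by the 1950 eruption
duration_1950_days = 23

rate_1950 = volume_1950_m3 / duration_1950_days   # ~16 million m^3/day

# The 1872-1877 summit eruption produced "more than twice as much
# material" over roughly 1,200 days:
rate_1872 = (2 * volume_1950_m3) / 1200           # ~0.6 million m^3/day

# Despite the smaller total volume, 1950 was far faster per day.
print(f"1950: {rate_1950:.2e} m^3/day, 1872-77: {rate_1872:.2e} m^3/day")
```

The roughly twenty-five-fold difference in average daily output illustrates why the 1950 eruption, though not the most voluminous, counts among the fastest-acting.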
Mauna Loa has not erupted since, and as of January 2013 has remained quiet for nearly 29 years, its
longest period of quiet in recorded history.[39][53]
Hazards

Map of the Decade Volcanoes: Teide, Nyiragongo, Vesuvius, Etna, Santorini, Unzen, Sakurajima, Taal, Merapi, Ulawun, Mauna Loa, Colima, Santa María, Avachinsky–Koryaksky, Galeras, and Rainier.
Mauna Loa is one of the sixteen Decade Volcanoes.
Mauna Loa has been designated a Decade Volcano, one of the sixteen volcanoes identified by the International Association of Volcanology and Chemistry of the Earth's Interior (IAVCEI) as being worthy of particular study in light of their history of large, destructive eruptions and proximity to populated areas.[54][55] The United States Geological Survey maintains a hazard zone mapping of the island on a one to nine scale, with the most dangerous areas corresponding to the smallest numbers. Based on this classification, Mauna Loa's continuously active summit caldera and rift zones have been given a level one designation. Much of the area immediately surrounding the rift zones is considered level two, and about 20 percent of that area has been covered in lava in historical times. Much of the remainder of the volcano is hazard level three, about 15 to 20 percent of which has been covered by flows within the last 750 years. However, two sections of the volcano, the first in the Naʻālehu area and the second on the southeastern flank of Mauna Loa's rift zone, are protected from eruptive activity by local topography, and have thus been designated hazard level 6, comparable with a similarly isolated segment on Kīlauea.[43]
Although volcanic eruptions in Hawaii rarely produce casualties (the only direct historical fatality due to volcanic activity on the island occurred at Kīlauea in 1924, when an unusually explosive eruption hurled rocks at an onlooker), property damage due to inundation by lava is a common and costly hazard.[56] Hawaiian-type eruptions usually produce extremely slow-moving flows that advance at walking pace, presenting little danger to human life, but this is not strictly the case;[57] Mauna Loa's 1950 eruption emitted as much lava in three weeks as Kīlauea's current eruption produces in three years and reached sea level within four hours of its start, overrunning the village of Hoʻokena Mauka and a major highway on the way there.[46] An earlier eruption in 1926 overran the village of Hoʻōpūloa Makai,[47] and Hilo, partly built on lavas from the 1880–81 eruption, is at risk from future eruptions.[43] The 1984 eruption nearly reached the city, but stopped short after the flow was redirected further upstream.[58]
A potentially greater hazard at Mauna Loa is a sudden, massive collapse of the volcano's flanks, like the
one that struck the volcano's west flank between 100,000 and 200,000 years ago and formed the
present-day Kealakekua Bay.[31] Deep fault lines are a common feature on Hawaiian volcanoes, allowing
large portions of their flanks to gradually slide downwards and forming structures like the Hilina
Slump and the ancient Ninole Hills; large earthquakes could trigger rapid flank collapses along these
lines, creating massive landslides and possibly triggering equally large tsunamis. Undersea surveys have revealed numerous landslides along the Hawaiian chain and evidence of two such giant tsunami events: 200,000 years ago, Molokaʻi experienced a 75 m (246 ft) tsunami, and 100,000 years ago a megatsunami 325 m (1,066 ft) high struck Lānaʻi.[59] A more recent example of the risks associated with slumps occurred in 1975, when the Hilina Slump suddenly lurched forward several meters, triggering a magnitude 7.2 earthquake and a small tsunami that killed two campers at Halape.[60]

Monitoring

GPS stations, tiltmeters, and strainmeters on Mauna Loa's summit. Not shown: a webcam and a gas detector positioned on the caldera rim.

Summit inflation as measured via GPS between June 2004 and April 2005; arrows denote between 1 and 10 cm (0.4 and 3.9 in) of growth.
Established on Kīlauea in 1912, the Hawaiian Volcano Observatory (HVO), presently a branch of the United States Geological Survey, is the primary organization associated with the monitoring, observation, and study of Hawaiian volcanoes.[61] Thomas A. Jaggar, the observatory's founder, attempted a summit expedition to the volcano to observe its 1914 eruption, but was rebuffed by the arduous trek required (see Ascents). After soliciting help from Lorrin A. Thurston, in 1915 he was able to persuade the US Army to construct a "simple route to the summit" for public and scientific use, a project completed in December of that year; the Observatory has maintained a presence on the volcano ever since.[42]
Eruptions on Mauna Loa are almost always preceded and accompanied by prolonged episodes of seismic activity, the monitoring of which was the primary and often only warning mechanism in the past and which remains viable today. Seismic stations have been maintained on Hawaii since the Observatory's inception, but these were concentrated primarily on Kīlauea, with coverage on Mauna Loa improving only slowly through the 20th century.[62] Following the invention of modern monitoring equipment, the backbone of the present-day monitoring system was installed on the volcano in the 1970s. Mauna Loa's July 1975 eruption was forewarned by more than a year of seismic unrest, with the HVO issuing warnings to the general public from late 1974; the 1984 eruption was similarly preceded by as much as three years of unusually high seismic activity, with volcanologists predicting an eruption within two years in 1983.[63]
The modern monitoring system on Mauna Loa consists not only of the local seismic network but also of a large number of GPS stations, tiltmeters, and strainmeters that have been anchored on the volcano to monitor ground deformation due to swelling in Mauna Loa's subterranean magma chamber, which presents a more complete picture of the events preceding eruptive activity. The GPS network is the most durable and wide-ranging of the three systems, while the tiltmeters provide the most sensitive predictive data, but are prone to erroneous results unrelated to actual ground deformation; nonetheless, a survey line across the caldera measured a 76 mm (3 in) increase in its width over the year preceding the 1975 eruption, and a similar increase before the 1984 eruption. Strainmeters, by contrast, are relatively rare.[64] The Observatory also maintains two gas detectors at Mokuʻāweoweo, Mauna Loa's summit caldera, as well as a publicly accessible live webcam and occasional screenings by interferometric synthetic aperture radar imaging.[63]
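As a purely illustrative sketch (not HVO's actual alerting logic), the kind of deformation signal described above, a survey line across the caldera widening by roughly 76 mm in the year before an eruption, could be flagged with a simple threshold check; the readings below are hypothetical:

```python
# Hypothetical sketch: flag caldera widening on the order of the
# ~76 mm observed across the summit survey line before the 1975 and
# 1984 eruptions. The threshold and readings are illustrative only.
WIDENING_ALERT_MM = 76.0

def inflation_alert(baseline_width_mm: float, current_width_mm: float) -> bool:
    """True when the survey line has widened past the alert threshold."""
    return (current_width_mm - baseline_width_mm) >= WIDENING_ALERT_MM

# Made-up readings: an 80 mm increase trips the alert, 10 mm does not.
print(inflation_alert(0.0, 80.0), inflation_alert(0.0, 10.0))
```

In practice such a threshold would be combined with tilt, strain, and seismic data rather than used alone, since any single instrument can drift or record noise unrelated to magma movement.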
Human history
Pre-contact
The first Ancient Hawaiians to arrive on Hawaii island lived along the shores, where food and water were plentiful.[65] Flightless birds that had previously known no predators became a staple food source.[66] Early settlements had a major impact on the local ecosystem, and caused many extinctions,
particularly amongst bird species, as well as introducing foreign plants and animals and increasing
erosion rates.[67] The prevailing lowland forest ecosystem was transformed from forest to grassland;
some of this change was caused by the use of fire, but the main reason appears to have been the
introduction of the Polynesian Rat (Rattus exulans).[68]
Ancient Hawaiian religious practice holds that the five volcanic peaks of the island are sacred, and regards Mauna Loa, the largest of them all, with great admiration;[69] but what mythology survives today consists mainly of oral accounts from the 18th century, first compiled in the 19th. Most of these stories agree that the Hawaiian volcano deity, Pele, resides in Halemaʻumaʻu Crater on Kīlauea; however, a few place her home at Mauna Loa's summit caldera Mokuʻāweoweo, and the mythos in general associates her with all volcanic activity on the island.[70] Regardless, Kīlauea's lack of a geographic outline and strong volcanic link to Mauna Loa led to it being considered an offshoot of Mauna Loa by the Ancient Hawaiians, meaning much of the mythos now associated with Kīlauea was originally directed at Mauna Loa proper as well.[71]:154–155
Ancient Hawaiians constructed an extensive trail system on Hawaii island, today known as the Ala Kahakai National Historic Trail. The network consisted of short trailheads servicing local areas along the main roads and more extensive networks within and around agricultural centers. The positioning of the trails was practical, connecting living areas to farms and ports, and regions to resources, with a few upland sections reserved for gathering and most lines marked well enough to remain identifiable long after regular use had ended. One of these trails, the ʻĀinapō Trail, ascended from the village of Kapāpala over 3,400 m (11,155 ft) in about 56 km (35 mi) and ended at Mokuʻāweoweo at Mauna Loa's summit. Although the journey was arduous and required several days and many porters, ancient Hawaiians likely made the journey during eruptions to leave offerings and prayers honoring Pele, much as they did at Halemaʻumaʻu, neighboring Kīlauea's more active and more easily accessible caldera. Several camps established along the way supplied water and food for travelers.[72][73]
European summiting attempts
James Cook's third voyage was the first to make landfall on Hawaii island, in 1778, and following adventures along the North American west coast Cook returned to the island in 1779. On his second visit John Ledyard, a corporal of the Royal Marines aboard HMS Resolution, proposed and received approval for an expedition to summit Mauna Loa to learn "about that part of the island, particularly the peak, the tip of which is generally covered with snow, and had excited great curiosity." Using a compass, Ledyard and a small group of ship's mates and native attendants attempted to make a direct course for the summit. However, on the second day of traveling the route became steeper, rougher, and blocked by "impenetrable thickets," and the group was forced to abandon their attempt and return to Kealakekua Bay, reckoning they had "penetrated 24 miles and we suppose [were] within 11 miles of the peak"; in reality, Mokuʻāweoweo lies only 32 km (20 mi) east of the bay, a severe overestimation on Ledyard's part. Another of Cook's men, Lieutenant James King, estimated the peak to be at least 5,600 m (18,373 ft) high based on its snow line.[74][75]

The Scottish botanist and naturalist Archibald Menzies was the first European to reach the summit of Mauna Loa, on his third attempt.
The next attempt to summit Mauna Loa was an expedition led by Archibald Menzies, a botanist and naturalist on the 1793 Vancouver Expedition. In February of that year Menzies, two ship's mates, and a small group of native Hawaiian attendants attempted a direct course for the summit from Kealakekua Bay, making it 26 km (16 mi) inland by their reckoning (an overestimation) before they were turned away by the thickness of the forest. On a second visit by the expedition to the island in January of the next year, Menzies was placed in charge of exploring the island interior, and after traversing the flanks of Hualālai he and his party arrived at the high plateau separating the two volcanoes. Menzies decided to make a second attempt (over the objections of the accompanying island chief), but again his progress was arrested by unassailable thickets.[74]
Menzies made a third attempt to summit Mauna Loa in February 1794. This time the botanist consulted King Kamehameha I for advice and learned that he could take canoes to the south and follow the ʻĀinapō Trail, not having known of its existence beforehand. Significantly better prepared, Menzies, Lieutenant Joseph Baker and Midshipman George McKenzie of the Discovery, and a servant (most likely Jonathan Ewins, listed on the ship's muster as "Botanist's L't") reached the summit, which Menzies estimated to be 4,156 m (13,635 ft) high with the aid of a barometer (consistent with the modern value of 4,169 m (13,678 ft)). He was surprised to find heavy snow and morning temperatures of −3 °C (27 °F), and was unable to compare the heights of Mauna Loa and Mauna Kea, but correctly supposed the latter to be taller based on its larger snow cap.[74] The feat of summiting Mauna Loa was not to be repeated for forty years.[74]
The Hawaiian Islands were the site of fervent missionary work, with the first group of missionaries arriving at Honolulu in 1820 and the second in 1823. Some of these missionaries left for Hawaii island, and spent ten weeks traveling around it, preaching at local villages and climbing Kīlauea, from which one of their number, William Ellis, observed Mauna Loa with the aid of a telescope and ascertained it and Mauna Kea to be "perhaps 15,000 to 16,000 feet above the level of the sea"; they did not, however, attempt to climb the volcano itself. It is sometimes reported that the missionary Joseph Goodrich reached the summit around this time, but he never claimed this himself, though he did summit Mauna Kea and describe Mokuʻāweoweo with the aid of another telescope.[75]
The next successful ascent was made on January 29, 1834, 40 years later, by the Scottish botanist David Douglas, who also reached the summit caldera using the ʻĀinapō Trail. By the time Douglas reached the summit the environment had put him under extreme duress, but he nonetheless stayed overnight to make measurements of the summit caldera's proportions and record barometric data on its height, both now known to be wildly inaccurate. Douglas collected biological samples on the way both up and down, and after a difficult and distressing descent began collating his samples; he planned to return to England, but instead, several months later, his body was mysteriously discovered crushed in a pit beside a dead wild boar.[75]
Isidor Löwenstern successfully climbed Mauna Loa in February 1839, only the third successful climb in 60 years.[74]
Wilkes expedition
Wilkes Campsite
U.S. National Register of Historic Places
Sketch by ship's artist Alfred Thomas Agate
Nearest city: Hilo, Hawaii
Coordinates: 19°27′59″N 155°34′54″W
Area: 4 acres (16,000 m2)
Built: 1840
Architect: Charles Wilkes
Architectural style: Stone shelter
Governing body: National Park Service
NRHP Reference #: 74000295[76]
Added to NRHP: July 24, 1974

The United States Exploring Expedition, led by Lieutenant Charles Wilkes, was tasked with a vast survey of the Pacific Ocean starting in 1838.[77] In September 1840 they arrived in Honolulu, where repairs to the ships took longer than expected. Wilkes decided to spend the winter in Hawaii and take the opportunity to explore its volcanoes while waiting for better weather to continue the expedition. King Kamehameha III assigned American medical missionary Dr. Gerrit P. Judd to the expedition as a translator.[75]
Wilkes sailed to Hilo on the island of Hawaii and decided to climb Mauna Loa first, since it looked easier than Mauna Kea. On December 14 he hired about 200 porters, but after he left he realized only about half the equipment had been taken, so he had to hire more Hawaiians at higher pay. When they reached Kīlauea after two days, their guide Puhano headed off to the established ʻĀinapō Trail. Wilkes did not want to head back downhill, so he blazed his own way through dense forest directed by a compass. The Hawaiians were offended by the waste of sacred trees, which did not help morale. At about 6,000 feet (1,800 m) elevation they established a camp called "Sunday Station" at the edge of the forest.
Two guides joined them at Sunday Station: Keaweehu, "the bird-catcher", and another, whose Hawaiian name is not recorded, called "ragsdale". Although Wilkes thought he was almost to the summit, the guides knew they were less than halfway up. Since there was no water at Sunday Station, porters had to be sent back ten miles (16 km) to a lava tube on the ʻĀinapō Trail which had a known supply. After an entire day replenishing stocks, they continued up to a second camp they called "Recruiting Station" at about 9,000 feet (2,700 m) elevation. After another full day's hike they established "Flag Station" on December 22, and by this time were on the ʻĀinapō Trail. Most of the porters were sent back down to get another load.
At the Flag Station Wilkes and his eight remaining men built a circular wall of lava rocks and covered the
shelter with a canvas tent. A snowstorm was in progress and several suffered from altitude sickness.
That night (December 23), the snow on the canvas roof caused it to collapse. At daylight some of the
group went down the trail to retrieve firewood and the gear abandoned on the trail the day before.
After another day's climb, nine men reached the rim of Mokuʻāweoweo. They could not find a way down its steep sides, so chose a smooth place on the rim for the camp site, at coordinates 19°27′59″N 155°34′54″W. Their tent was pitched within 60 feet (18 m) of the crater's edge, secured by lava blocks.[78]
The next morning they were unable to start a fire using friction due to the thin air at that altitude, and
sent for matches. By this time, the naval officers and Hawaiians could not agree on terms to continue
hiring porters, so sailors and marines were ordered from the ships. Dr. Judd traveled between the
summit and the Recruiting Station to tend the many who suffered from altitude sickness or had worn
out their shoes on the rough rock. Christmas Day was spent building rock walls around the camp to give
some protection from the high winds and blowing snow. It took another week to bring all the equipment
to the summit, including a pendulum designed for measuring slight variations in gravity.[78]

Sketch of Mokuʻāweoweo from Wilkes' journal


On December 31, 1840 the pre-fabricated pendulum house was assembled. Axes and chisels cut away the rock surface for the pendulum's base. It took another three days to adjust the clock to the point where the experiments could begin. However, the high winds made so much noise that the ticks could often not be heard, and varied the temperature enough to make the measurements inaccurate. Grass had to be painstakingly brought from the lowest elevations for insulation to get accurate measurements. On Monday, January 11, Wilkes hiked around the summit crater. Using an optical method, he estimated Mauna Kea was only 193 feet (59 m) higher (modern measurements put the difference at 104 feet (32 m)). On January 13, 1841, he had "Pendulum Peak, January 1841 U.S. Ex. Ex." cut into a rock at the site. The tents were dismantled and Hawaiians carried the gear down over the next three days, while Wilkes enjoyed a lomilomi (Hawaiian massage). He continued his measurements at lower elevations and left the island on March 5. For all the effort, he did not obtain any significant results, attributing gravity discrepancies to "the tides".[75]
The ruins of the Wilkes expedition's camp site are the only known physical evidence in the Pacific of the U.S. Exploring Expedition.[78] The camp site was listed on the National Register of Historic Places on July 24, 1974 as site 74000295,[76] and is state historic site 10-52-5507.[79]
Today
In 1934, a summit shelter was built with mortar and some of the stones from Wilkes' camp site. In 1916 Mokuʻāweoweo was included in Hawaii Volcanoes National Park, and a new trail was built directly from park headquarters at Kīlauea, an even more direct route than the one taken by Wilkes.[72] This trail, arriving at the summit from the east via Red Hill, became the preferred route due to its easier access and gentler slope. The historic ʻĀinapō Trail fell into disuse, and was reopened in the 1990s. A third modern route to the summit runs from the Saddle Road up to the Mauna Loa Observatory, which is at 11,135 feet (3,394 m) elevation a few miles north of Mokuʻāweoweo, and then along the North Pit trail.[80]
Climate
Trade winds blow from east to west across the Hawaiian islands, and the presence of Mauna Loa
strongly affects the local climate. At low elevations, the eastern (windward) side of the volcano receives
heavy rain; the city of Hilo is the wettest in the United States. The rainfall supports
extensive forestation. The western (leeward) side has a much drier climate. At higher elevations, the
amount of precipitation decreases, and skies are very often clear. Very low temperatures mean that
precipitation often occurs in the form of snow, and the summit of Mauna Loa is described as
a periglacial region, where freezing and thawing play a significant role in shaping the landscape.[81]
Mauna Loa has a tropical climate, with warm temperatures at lower elevations and cool to cold temperatures higher up year-round. Below is the table for the slope observatory, which is at 10,000 feet (3,000 m) in the alpine zone. The highest recorded temperature there was 85 °F (29 °C) and the lowest was 18 °F (−8 °C), on February 18, 2003 and February 20, 1962, respectively.[82]
Climate data for Mauna Loa slope observatory (1961–1990)

Month:                     Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct    Nov    Dec
Record high °F:            67     85     65     67     68     71     70     68     67     66     65     67
Record high °C:            (19)   (29)   (18)   (19)   (20)   (22)   (21)   (20)   (19)   (19)   (18)   (19)
Average high °F:           49.8   49.6   50.2   51.8   53.9   57.2   56.4   56.3   55.8   54.7   52.6   50.6
Average high °C:           (9.9)  (9.8)  (10.1) (11)   (12.2) (14)   (13.6) (13.5) (13.2) (12.6) (11.4) (10.3)
Average low °F:            33.3   32.9   33.2   34.6   36.6   39.4   38.8   38.9   38.5   37.8   36.2   34.3
Average low °C:            (0.7)  (0.5)  (0.7)  (1.4)  (2.6)  (4.1)  (3.8)  (3.8)  (3.6)  (3.2)  (2.3)  (1.3)
Record low °F:             19     18     20     24     27     28     26     28     29     27     25     22
Record low °C:             (−7)   (−8)   (−7)   (−4)   (−3)   (−2)   (−3)   (−2)   (−2)   (−3)   (−4)   (−6)
Average precipitation in:  2.3    1.5    1.7    1.3    1.0    0.5    1.1    1.5    1.3    1.1    1.7    2.0
Average precipitation mm:  (58)   (38)   (43)   (33)   (25)   (13)   (28)   (38)   (33)   (28)   (43)   (51)
Average snowfall in:       0.0    1.0    0.3    1.3    0.0    0.0    0.0    0.0    0.0    0.0    0.0    1.0
Average snowfall cm:       (0)    (2.5)  (0.8)  (3.3)  (0)    (0)    (0)    (0)    (0)    (0)    (0)    (2.5)

Source: NOAA[83]
Observatories

Atmospheric CO2 concentrations measured at the Mauna Loa Observatory.


The location of Mauna Loa has made it an important site for atmospheric monitoring by the Global Atmosphere Watch and other scientific observations. The Mauna Loa Solar Observatory (MLSO), located at 11,155 feet (3,400 m) on the northern slope of the mountain, has long been prominent in observations of the Sun. The NOAA Mauna Loa Observatory (MLO) is located close by. From its location well above local human-generated influences, the MLO monitors the global atmosphere, including the greenhouse gas carbon dioxide. Measurements are adjusted to account for local outgassing of CO2 from the volcano.[84]
The Yuan-Tseh Lee Array for Microwave Background Anisotropy (AMiBA) sits at an elevation of 11,155 feet (3,400 m). It was established in October 2006 by the Academia Sinica Institute of Astronomy and Astrophysics (ASIAA) to examine cosmic microwave background radiation.[85]

Paleomagnetism (or palaeomagnetism in the United Kingdom) is the study of the record of the Earth's magnetic field in rocks, sediment, or archeological materials. Certain minerals in rocks lock in a record of the direction and intensity of the magnetic field when they form. This record provides information on the past behavior of Earth's magnetic field and the past location of tectonic plates. The record of geomagnetic reversals preserved in volcanic and sedimentary rock sequences (magnetostratigraphy) provides a time-scale that is used as a geochronologic tool. Geophysicists who specialize in paleomagnetism are called paleomagnetists.
Paleomagnetists led the revival of the continental drift hypothesis and its transformation into plate tectonics. Apparent polar wander paths provided the first clear geophysical evidence for continental drift, while marine magnetic anomalies did the same for seafloor spreading. Paleomagnetism continues to extend the history of plate tectonics back in time and is applied to the movement of continental fragments, or terranes.
Paleomagnetism has relied heavily on new developments in rock magnetism, which in turn has provided the foundation for new applications of magnetism. These include biomagnetism, magnetic fabrics (used as strain indicators in rocks and soils), and environmental magnetism.
Contents

1 History
2 Fields of paleomagnetism
3 Principles of remanent magnetization
  3.1 Thermoremanent magnetization
  3.2 Detrital remanent magnetization
  3.3 Chemical remanent magnetization
  3.4 Isothermal remanent magnetization
  3.5 Viscous remanent magnetization
4 Paleomagnetic procedure
  4.1 Collecting samples on land
5 Applications
6 See also
7 Notes and references
8 Further reading
9 External links

History[edit]
Main article: History of geomagnetism
As early as the 18th century it was noticed that compass needles deviated near strongly magnetized outcrops. In 1797, Alexander von Humboldt attributed this magnetization to lightning strikes (and lightning strikes do often magnetize surface rocks).[1][2] In the 19th century, studies of the direction of magnetization in rocks showed that some recent lavas were magnetized parallel to the Earth's magnetic field. Early in the 20th century, work by David, Brunhes, and Mercanton showed that many rocks were magnetized antiparallel to the field. Motonori Matuyama showed that the Earth's magnetic field had reversed in the mid-Quaternary, a reversal now known as the Brunhes–Matuyama reversal.[1]
The British physicist P. M. S. Blackett provided a major impetus to paleomagnetism by inventing a sensitive astatic magnetometer in 1956. His intent was to test his theory that the geomagnetic field was related to the Earth's rotation, a theory that he ultimately rejected; but the astatic magnetometer became the basic tool of paleomagnetism and led to a revival of the theory of continental drift. Alfred Wegener had first proposed in 1915 that continents had once been joined together and had since moved apart.[3] Although he produced an abundance of circumstantial evidence, his theory met with little acceptance for two reasons: (1) no mechanism for continental drift was known, and (2) there was no way to reconstruct the movements of the continents over time. Keith Runcorn[4] and Edward A. Irving[5] constructed apparent polar wander paths for Europe and North America. These curves diverged, but could be reconciled if it was assumed that the continents had been in contact up to 200 million years ago. This provided the first clear geophysical evidence for continental drift. Then in 1963, Morley, Vine, and Matthews showed that marine magnetic anomalies provided evidence for seafloor spreading.
Fields of paleomagnetism
Paleomagnetism is studied on a number of scales:

- Secular variation studies look at small-scale changes in the direction and intensity of the Earth's magnetic field. The magnetic north pole is constantly shifting relative to the axis of rotation of the Earth. Magnetism is a vector, so magnetic field variation is made up of palaeodirectional measurements of magnetic declination and magnetic inclination and of palaeointensity measurements.

Earth's magnetic polarity reversals in the last 5 million years. Dark regions represent normal polarity (same as the present field); light regions represent reversed polarity.

- Magnetostratigraphy uses the polarity reversal history of the Earth's magnetic field recorded in rocks to determine the age of those rocks. Reversals have occurred at irregular intervals throughout Earth history. The age and pattern of these reversals is known from the study of sea floor spreading zones and the dating of volcanic rocks.
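Because magnetism is a vector, a palaeodirectional measurement (declination and inclination) together with a palaeointensity value determines the full field vector. The standard decomposition into north, east, and down components can be sketched as follows (the function name is illustrative, not from any particular library):

```python
import math

def field_vector(decl_deg, incl_deg, intensity):
    """Split a measured field (declination, inclination, total intensity)
    into north, east, and down components."""
    D = math.radians(decl_deg)  # declination: angle east of geographic north
    I = math.radians(incl_deg)  # inclination: angle below the horizontal
    north = intensity * math.cos(I) * math.cos(D)
    east = intensity * math.cos(I) * math.sin(D)
    down = intensity * math.sin(I)
    return north, east, down
```

A field pointing straight down (inclination 90°) has all of its intensity in the down component, as expected at a magnetic pole.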
Principles of remanent magnetization
The study of paleomagnetism is possible because iron-bearing minerals such as magnetite may record
past directions of the Earth's magnetic field. Magnetic signatures in rocks can be recorded by several
different mechanisms.
Thermoremanent magnetization
Main article: Thermoremanent magnetization
Iron-titanium oxide minerals in basalt and other igneous rocks may preserve the direction of the Earth's magnetic field when the rocks cool through the Curie temperatures of those minerals. The Curie temperature of magnetite, a spinel-group iron oxide, is about 580 °C, whereas most basalt and gabbro are completely crystallized at temperatures below 900 °C. Hence, the mineral grains are not rotated physically to align with the Earth's field, but rather they may record the orientation of that field. The record so preserved is called a thermoremanent magnetization (TRM). Because complex oxidation
reactions may occur as igneous rocks cool after crystallization, the orientations of the Earth's magnetic
field are not always accurately recorded, nor is the record necessarily maintained. Nonetheless, the
record has been preserved well enough in basalts of the ocean crust to have been critical in the
development of theories of sea floor spreading related to plate tectonics. TRM can also be recorded
in pottery kilns, hearths, and burned adobe buildings. The discipline based on the study of
thermoremanent magnetisation in archaeological materials is called archaeomagnetic dating.[6]
Detrital remanent magnetization
In a completely different process, magnetic grains in sediments may align with the magnetic field during
or soon after deposition; this is known as detrital remanent magnetization (DRM). If the magnetization
is acquired as the grains are deposited, the result is a depositional detrital remanent magnetization
(dDRM); if it is acquired soon after deposition, it is a post-depositional detrital remanent magnetization
(pDRM).[7]
Chemical remanent magnetization
See also: Chemical remanent magnetization
In a third process, magnetic grains grow during chemical reactions, and record the direction of the magnetic field at the time of their formation. The field is said to be recorded by chemical remanent magnetization (CRM). A common form of chemical remanent magnetization is held by the mineral hematite, another iron oxide. Hematite forms through chemical oxidation reactions of other minerals in the rock, including magnetite. Red beds, clastic sedimentary rocks (such as sandstones), are red because of hematite that formed during sedimentary diagenesis. The CRM signatures in red beds can be quite useful, and they are common targets in magnetostratigraphy studies.[8]
Isothermal remanent magnetization
See also: Remanence
Remanence that is acquired at a fixed temperature is called isothermal remanent magnetization (IRM).
Remanence of this sort is not useful for paleomagnetism, but it can be acquired as a result of lightning
strikes. Lightning-induced remanent magnetization can be distinguished by its high intensity and rapid
variation in direction over scales of centimeters.[9][10]
IRM is often induced in drill cores by the magnetic field of the steel core barrel. This contaminant is generally parallel to the barrel, and most of it can be removed by heating up to about 400 °C or demagnetizing in a small alternating field.
In the laboratory, IRM is induced by applying fields of various strengths and is used for many purposes
in rock magnetism.
Viscous remanent magnetization
Main article: Viscous remanent magnetization
Viscous remanent magnetization is remanence that is acquired by ferromagnetic materials by sitting in a
magnetic field for some time.
Paleomagnetic procedure
Collecting samples on land
Paleomagnetists, like many geologists, gravitate towards outcrops because layers of rock are exposed.
Road cuts are a convenient man-made source of outcrops.
"And everywhere, in profusion along this half mile of [roadcut], there are small, neatly cored holes ...
appears to be a Hilton for wrens and purple martins."[11]
There are two main goals of sampling:
1. Retrieve samples with accurate orientations, and
2. Reduce statistical uncertainty.
One way to achieve the first goal is to use a rock coring drill that has a pipe tipped with diamond bits. The drill cuts a cylindrical space around some rock. This can be messy: the drill must be cooled with water, and the result is mud spewing out of the hole. Into this space is inserted another pipe with compass and inclinometer attached. These provide the orientations. Before this device is removed, a mark is scratched on the sample. After the sample is broken off, the mark can be augmented for clarity.[12]
Applications
Paleomagnetic evidence, both reversals and polar wandering data, was instrumental in verifying the
theories of continental drift and plate tectonics in the 1960s and 1970s. Some applications of
paleomagnetic evidence to reconstruct histories ofterranes have continued to arouse controversies.
Paleomagnetic evidence is also used in constraining possible ages for rocks and processes and in
reconstructions of the deformational histories of parts of the crust.[2]
Reversal magnetostratigraphy is often used to estimate the age of sites bearing fossils and hominin remains.[13] Conversely, for a fossil of known age, the paleomagnetic data can fix the latitude at which the fossil was laid down. Such a paleolatitude provides information about the geological environment at the time of deposition.
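The inclination-to-paleolatitude step relies on the geocentric axial dipole model, in which the measured inclination I and the latitude λ satisfy tan I = 2 tan λ. A minimal sketch of that conversion (the function name is illustrative):

```python
import math

def paleolatitude(inclination_deg):
    """Paleolatitude (degrees) implied by a measured magnetic inclination,
    using the geocentric axial dipole relation tan(I) = 2 * tan(latitude)."""
    I = math.radians(inclination_deg)
    return math.degrees(math.atan(0.5 * math.tan(I)))
```

For example, a rock magnetized with an inclination of about 49° implies a paleolatitude near 30°; a horizontal field (inclination 0°) implies deposition at the equator.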
Paleomagnetic studies are combined with geochronological methods to determine absolute ages for rocks in which the magnetic record is preserved. For igneous rocks such as basalt, commonly used methods include potassium-argon and argon-argon geochronology.
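The potassium-argon clock mentioned above can be made concrete: the age follows from the measured radiogenic 40Ar*/40K ratio via the standard K-Ar age equation. A minimal sketch, with the commonly used published decay constants quoted as assumptions (they are not taken from this article) and illustrative function names:

```python
import math

# 40K decay constants (standard published values, assumed here for illustration):
LAMBDA_TOTAL = 5.543e-10  # total 40K decay constant, per year
LAMBDA_EC = 0.581e-10     # electron-capture branch that produces 40Ar, per year

def k_ar_age(ar40_per_k40):
    """Age in years implied by a measured radiogenic 40Ar*/40K ratio."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_per_k40) / LAMBDA_TOTAL

def ar40_per_k40(age_years):
    """Inverse relation: the 40Ar*/40K ratio a closed system accumulates."""
    return (math.exp(LAMBDA_TOTAL * age_years) - 1.0) * LAMBDA_EC / LAMBDA_TOTAL
```

The two functions are exact inverses of each other, which is a convenient self-check on the algebra.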
Scientists in New Zealand have found that they are able to figure out the Earth's past magnetic field changes by studying 700-800-year-old steam ovens, or hāngi, used by the Māori for cooking food.[14]
See also
A seamount is a mountain rising from the ocean seafloor that does not reach the water's surface (sea level), and thus is not an island. Seamounts are typically formed from extinct volcanoes that rise abruptly and are usually found rising from the seafloor to 1,000-4,000 metres (3,300-13,100 ft) in height. They are defined by oceanographers as independent features that rise to at least 1,000 metres (3,281 ft) above the seafloor. The peaks are often found hundreds to thousands of meters below the surface, and are therefore considered to be within the deep sea.[1]
There are an estimated 100,000 seamounts around the globe, with only a few having been studied.
Seamounts come in all shapes and sizes, and follow a distinctive pattern of growth, activity, and death.
In recent years, several active seamounts have been observed, for example Loihi in the Hawaiian Islands.
Because of their abundance, seamounts are one of the most common oceanic ecosystems in the world. Interactions between seamounts and underwater currents, as well as their elevated position in the water, attract plankton, corals, fish, and marine mammals alike. Their aggregational effect has been noted by the commercial fishing industry, and many seamounts support extensive fisheries. There are ongoing concerns about the negative impact of fishing on seamount ecosystems, and well-documented cases of stock decline, for example with the orange roughy (Hoplostethus atlanticus). 95% of ecological damage is done by bottom trawling, which scrapes whole ecosystems off seamounts.
Because of their large numbers, many seamounts remain to be properly studied, and even mapped. Bathymetry and satellite altimetry are two technologies working to close the gap. There have been instances where naval vessels have collided with uncharted seamounts; for example, Muirfield Seamount is named after the ship that struck it in 1973. However, the greatest danger from seamounts is flank collapse; as they get older, extrusions seeping into the seamounts put pressure on their sides, causing landslides that have the potential to generate massive tsunamis.
Contents
1 Geography
  1.1 Grouping
    1.1.1 List
2 Geology
  2.1 Geochemistry and evolution
  2.2 Lava types
  2.3 Structure
3 Ecology
  3.1 Ecological role of seamounts
  3.2 Fishing
  3.3 Conservation
4 Exploration
5 Deep-sea mining
6 Dangers
7 See also
8 References
9 Bibliography
10 External links
Geography
Seamounts can be found in every ocean basin in the world, distributed extremely widely both in space and in age. A seamount is technically defined as an isolated rise in elevation of 1,000 m (3,281 ft) or more from the surrounding seafloor, and with a limited summit area,[2] a definition drafted in 1964. This definition is no longer strictly adhered to, however, and some scientists recognize features as short as 100 m (328 ft) as seamounts. Under the strictest definition there are up to 100,000 seamounts in the oceans, and under the loosest there may be as many as 2 million;[3] however, there are many very small and very deep seamounts that are difficult to analyze, so the true number may never be known.[2]
Most seamounts are volcanic in origin, and thus tend to be found on oceanic crust near mid-ocean
ridges, mantle plumes, and island arcs. Nearly half of the world's seamounts are found in the Pacific
Ocean, and the rest are distributed mostly across the Atlantic and Indian oceans. Overall there is also a
significant bias in distribution towards the southern hemisphere.[2]
Grouping
"Seamount chain" redirects here; for a broader coverage related to this topic, see Undersea mountain
range.
Seamounts are often found in groupings or submerged archipelagos, a classic example being
the Emperor Seamounts, an extension of the Hawaiian Islands. Formed millions of years ago
by volcanism, they have since subsided far below sea level. This long chain of islands and seamounts
extends thousands of kilometers northwest from the island of Hawaii.
Isolated seamounts and those without clear volcanic origins are less common; examples include Bollons Seamount, Eratosthenes Seamount, Axial Seamount, and Gorringe Ridge.[4] If all known seamounts were collected into one area, they would make a landform the size of Europe.[5] Their overall abundance makes them one of the most common, and least understood, marine structures and biomes on Earth,[6] a sort of exploratory frontier.[7]
A partial mapping of some of the world's major seamounts
List
Main category: Seamount chains
See also: Category:Seamounts.
Geology
Geochemistry and evolution
Diagram of a submarine eruption. (Key: 1. Water vapor cloud 2. Water 3. Stratum 4. Lava flow 5. Magma conduit 6. Magma chamber 7. Dike 8. Pillow lava)
Most seamounts are built by one of two volcanic processes, although some, such as the Christmas Island Seamount Province near Australia, are more enigmatic.[8] Volcanoes near plate boundaries and mid-ocean ridges are built by decompression melting of rock in the mantle that then floats up to the surface, while volcanoes formed near subduction zones are created because the subducting plate adds volatiles to the rising plate that lower its melting point. Which process formed the seamount has a profound effect on its eruptive materials. Lava flows from mid-ocean ridge and plate boundary seamounts are mostly basaltic (both tholeiitic and alkalic), whereas flows from subduction zone volcanoes are mostly calc-alkaline lavas. Compared to mid-ocean ridge seamounts, subduction zone seamounts generally have higher sodium, alkali, and volatile abundances and less magnesium, resulting in more explosive, viscous eruptions.[7]
All volcanic seamounts follow a particular pattern of growth, activity, subsidence, and eventual extinction. The first stage of a seamount's evolution is its early activity, building its flanks and core up from the sea floor. This is followed by a period of intense volcanism, during which the new volcano erupts almost all (e.g. 98%) of its total magmatic volume. The seamount may even grow above sea level to become an oceanic island (for example, the 2009 eruption of Hunga Tonga). After a period of explosive activity near the ocean surface, the eruptions slowly die away. With eruptions becoming infrequent and the seamount losing its ability to maintain itself, the volcano starts to erode. After finally becoming extinct (possibly after a brief rejuvenated period), the seamount is ground back down by the waves.
Seamounts are built in a far more dynamic oceanic setting than their land counterparts, subsiding as the plate that carries them grinds towards a subduction zone. There the seamount is subducted under the plate margin and ultimately destroyed, but it may leave evidence of its passage by carving an indentation into the opposing wall of the subduction trench. The majority of seamounts have already completed their eruptive cycle, so researchers' access to early flows is limited by the later volcanic activity that buries them.[7]
Ocean-ridge volcanoes in particular have been observed to follow a certain pattern in terms of eruptive
activity, first observed with Hawaiian seamounts but now shown to be the process followed by all
seamounts of the ocean-ridge type. During the first stage the volcano erupts basalt of various types,
caused by various degrees of mantle melting. In the second, most active stage of its life, ocean-ridge
volcanoes erupt tholeiitic to mildly alkalic basalt as a result of a larger area melting in the mantle. This is
finally capped by alkalic flows late in its eruptive history, as the link between the seamount and its
source of volcanism is cut by crustal movement. Some seamounts also experience a brief "rejuvenated"
period after a hiatus of 1.5 to 10 million years, the flows of which are highly alkalic and produce
many xenoliths.[7]
In recent years, geologists have confirmed that a number of seamounts are active undersea volcanoes; two examples are Loihi in the Hawaiian Islands and Vailulu'u in the Manu'a Group (Samoa).[4]
Lava types
Pillow lava, a type of basalt flow that originates from lava-water interactions during submarine eruptions.[9]
The most apparent lava flows at a seamount are the eruptive flows that cover its flanks; however, igneous intrusions, in the forms of dikes and sills, are also an important part of seamount growth. The most common type of flow is pillow lava, so named for its unusual shape. Less common are sheet flows, which are glassy and marginal, and indicative of larger-scale flows. Volcaniclastic sedimentary rocks dominate shallow-water seamounts. They are the products of the explosive activity of seamounts that are near the water's surface, and can also form from mechanical wear of existing volcanic rock.[7]
Structure
Seamounts can form in a wide variety of tectonic settings, and are correspondingly diverse in structure, ranging from conical to flat-topped to complexly shaped.[7] Some are built very large and very low, such as Koko Guyot[10] and Detroit Seamount;[11] others
are built more steeply, such as Loihi Seamount[12] and Bowie Seamount.[13] Some seamounts also have
a carbonate or sediment cap.[7]
Many seamounts show signs of intrusive activity, which is likely to lead to inflation, steepening of
volcanic slopes, and ultimately, flank collapse.[7] There are also several sub-classes of seamounts. The
first are guyots, seamounts with a flat top. These tops must be 200 m (656 ft) or more below the surface
of the sea; the diameters of these flat summits can be over 10 km (6.2 mi).[14] Knolls are isolated
elevation spikes measuring less than 1,000 meters (3,281 ft). Lastly, pinnacles are small pillar-like
seamounts.[2]
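The size thresholds quoted above (a rise of at least 1,000 m for a seamount, a flat summit 200 m or more below the surface for a guyot, and under 1,000 m for a knoll) can be expressed as a toy classifier; the function and its arguments are purely illustrative, and real classification is far less clear-cut:

```python
def classify_rise(rise_m, flat_topped=False, summit_depth_m=0.0):
    """Rough classification of an undersea rise using the thresholds
    quoted in the text."""
    if rise_m < 1000:
        return "knoll"      # isolated elevation spike under 1,000 m
    if flat_topped and summit_depth_m >= 200:
        return "guyot"      # flat summit 200 m or more below the surface
    return "seamount"
```

By these cutoffs, a 2,500 m conical rise is a seamount, while the same rise with a flat summit 250 m underwater would be a guyot.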
Ecology
Ecological role of seamounts
Seamounts are exceptionally important to their biome ecologically, but their role in their environment is poorly understood. Because they project out above the surrounding sea floor, they disturb standard water flow, causing eddies and associated hydrological phenomena that ultimately result in water movement in an otherwise still ocean bottom. Currents have been measured at up to 0.9 knots, or 48 centimeters per second. Because of this upwelling, seamounts often carry above-average plankton populations; seamounts are thus centers where the fish that feed on them aggregate, in turn falling prey to further predation, making seamounts important biological hotspots.[2]
Seamounts provide habitats and spawning grounds for these larger animals, including numerous fish.
Some species, including black oreo (Allocyttus niger) and blackstripe cardinalfish (Apogon
nigrofasciatus), have been shown to occur more often on seamounts than anywhere else on the ocean
floor. Marine mammals, sharks, tuna, and cephalopods all congregate over seamounts to feed, as well
as some species of seabirds when the features are particularly shallow.[2]
Grenadier fish (Coryphaenoides sp.) and bubblegum coral (Paragorgia arborea) on the crest of Davidson Seamount. These are two species attracted to the seamount; Paragorgia arborea in particular grows in the surrounding area as well, but nowhere near as profusely.[15]
Seamounts often project upwards into shallower zones more hospitable to sea life, providing habitats for marine species that are not found on or around the surrounding deeper ocean bottom. Because seamounts are isolated from each other, they form "undersea islands", creating the same biogeographical interest. As they are formed from volcanic rock, the substrate is much harder than the surrounding sedimentary deep sea floor. This causes a different type of fauna to exist than on the seafloor, and leads to a theoretically higher degree of endemism.[16] However, recent research, especially centered at Davidson Seamount, suggests that seamounts may not be especially endemic, and discussions are ongoing on the effect of seamounts on endemicity. They have, however, been confidently shown to provide a habitat to species that have difficulty surviving elsewhere.[17][18]
The volcanic rocks on the slopes of seamounts are heavily populated by suspension feeders, particularly corals, which capitalize on the strong currents around the seamount to supply them with food. This is in sharp contrast with the typical deep-sea habitat, where deposit-feeding animals rely on food they get off the ground.[2] In tropical zones extensive coral growth results in the formation of coral atolls late in the seamount's life.[3][18]
In addition, soft sediments tend to accumulate on seamounts, which are typically populated by polychaetes (annelid marine worms), oligochaetes (microdrile worms), and gastropod mollusks (sea slugs). Xenophyophores have also been found. They tend to gather small particulates and thus form beds, which alters sediment deposition and creates a habitat for smaller animals.[2] Many seamounts also have hydrothermal vent communities, for example Suiyo[19] and Loihi seamounts.[20] This is helped by geochemical exchange between the seamounts and the ocean water.[7]
Seamounts may thus be vital stopping points for some migratory animals, specifically whales. Some
recent research indicates whales may use such features as navigational aids throughout their
migration.[21] For a long time it has been surmised that many pelagic animals visit seamounts as well, to
gather food, but proof of this aggregating effect has been lacking. The first demonstration of this
conjecture was published in 2008.[22]
Fishing
The effect that seamounts have on fish populations has not gone unnoticed by the commercial fishing industry. Seamounts were first extensively fished in the second half of the 20th century, after poor management practices and increased fishing pressure had seriously depleted stock numbers on the typical fishing ground, the continental shelf. Seamounts have been the site of targeted fishing since that time.[23]
Nearly 80 species of fish and shellfish are commercially harvested from seamounts, including spiny lobster (Palinuridae), mackerel (Scombridae and others), red king crab (Paralithodes camtschaticus), red snapper (Lutjanus campechanus), tuna (Scombridae), orange roughy (Hoplostethus atlanticus), and perch (Percidae).[2]
Conservation

Because of overfishing at their seamount spawning grounds, stocks of orange roughy (Hoplostethus atlanticus) have plummeted; experts say that it could take decades for the species to restore itself to its former numbers.[23]
The ecological conservation of seamounts is hurt by the simple lack of information available. Seamounts are very poorly studied, with only 350 of the estimated 100,000 seamounts in the world having been sampled, and fewer than 100 studied in depth.[24] Much of this lack of information can be attributed to a lack of technology, and to the daunting task of reaching these underwater structures; the technology to fully explore them has existed for only the last few decades. Before consistent conservation efforts can begin, the seamounts of the world must first be mapped, a task that is still in progress.[2]
Overfishing is a serious threat to seamount ecological welfare. There are several well-documented cases of fishery exploitation, for example the orange roughy (Hoplostethus atlanticus) off the coasts of Australia and New Zealand and the pelagic armorhead (Pseudopentaceros richardsoni) near Japan and Russia.[2] The reason for this is that the fishes that are targeted over seamounts are typically long-lived, slow-growing, and slow-maturing. The problem is compounded by the dangers of trawling, which damages seamount surface communities, and by the fact that many seamounts are located in international waters, making proper monitoring difficult.[23] Bottom trawling in particular is extremely devastating to seamount ecology, and is responsible for as much as 95% of ecological damage to seamounts.[25]
Coral earrings of this type are often made from coral harvested off seamounts.
Corals from seamounts are also vulnerable, as they are highly valued for making jewellery and
decorative objects. Significant harvests have been produced from seamounts, often leaving coral beds
depleted.[2]
Individual nations are beginning to note the effect of fishing on seamounts, and the European Commission has agreed to fund the OASIS project, a detailed study of the effects of fishing on seamount communities in the North Atlantic.[23] Another project working towards conservation is CenSeam, a Census of Marine Life project formed in 2005. CenSeam is intended to provide the framework needed to prioritise, integrate, expand, and facilitate seamount research efforts in order to significantly reduce the unknown and build towards a global understanding of seamount ecosystems, and the roles they have in the biogeography, biodiversity, productivity, and evolution of marine organisms.[24][26]
Possibly the best ecologically studied seamount in the world is Davidson Seamount, with six major expeditions recording over 60,000 species observations. The contrast between the seamount and the surrounding area was well-marked.[17] One of the primary ecological havens on the seamount is its deep sea coral garden, and many of the specimens noted were over a century old.[15] Following the expansion of knowledge on the seamount, there was extensive support to make it a marine sanctuary, a motion that was granted in 2008 as part of the Monterey Bay National Marine Sanctuary.[27] Much of what is known about seamounts ecologically is based on observations from Davidson.[15][22] Another such seamount is Bowie Seamount, which has also been declared a marine protected area by Canada for its ecological richness.[28]
Exploration

Graph showing the rise in global sea level (in mm) as measured by the NASA/CNES oceanic satellite altimeter TOPEX/Poseidon (left) and its follow-on mission Jason-1.
The study of seamounts has long been stymied by the lack of technology. Although seamounts have been sampled as far back as the 19th century, their depth and position meant that the technology to explore and sample seamounts in sufficient detail did not exist until the last few decades. Even with the right technology available, only a scant 1% of the total number have been explored,[5] and sampling and information remains biased towards the top 500 m (1,640 ft).[2] New species are observed or collected, and valuable information is obtained, on almost every submersible dive at seamounts.[6]
Before seamounts and their oceanographic impact can be fully understood, they must be mapped, a daunting task due to their sheer number.[2] The most detailed seamount mappings are provided by multibeam echosounding (sonar); however, after more than 5,000 publicly held cruises, the amount of the sea floor that has been mapped remains minuscule. Satellite altimetry is a broader alternative, albeit not as detailed, with 13,000 catalogued seamounts; however, this is still only a fraction of the total 100,000. The reason for this is that uncertainties in the technology limit recognition to features 1,500 m (4,921 ft) or larger. In the future, technological advances could allow for a larger and more detailed catalogue.[3]
Data from CryoSat-2 has shown 25,000 seamounts, with more to come as data is interpreted.[29][30][31][32]
Deep-sea mining
Seamounts are a possible future source of heavy metals. The growth of the human population and, with it, heavy industry, has pressed demands for Earth's finite resources. Even though the ocean covers 70% of the Earth's surface, technological challenges with deep-sea mineral mining have severely limited its extent. But with the supply on land constantly decreasing, many see oceanic mining as the destined future, and seamounts stand out as candidates.[33]
Seamounts are abundant, and all have metal resource potential because of various enrichment processes during the seamount's life. Hydrogenetic iron-manganese, hydrothermal iron oxide, sulfide, sulfate, sulfur, hydrothermal manganese oxide, and phosphorite[34] (the latter especially in parts of Micronesia) are all mineral resources that are formed by various processes and deposited upon seamounts. However, only the first two have any potential of being targeted by mining in the next few decades.[33]
Dangers

USS San Francisco in dry dock in Guam in January 2005, following its collision with an uncharted seamount. The damage was extensive and the submarine was just barely salvaged.[35]
See also: Landslide § Causing tsunamis and Landslide § Prehistoric submarine landslides
Some seamounts have not been mapped and thus pose a navigational danger. For instance, Muirfield
Seamount is named after the ship that hit it in 1973.[36] More recently, the submarine USS San
Francisco ran into an uncharted seamount in 2005 at a speed of 35 knots (40.3 mph; 64.8 km/h),
sustaining serious damage and killing one seaman.[35]
One major seamount risk is that often, in the late stages of their life, extrusions begin to seep into the seamount. This activity leads to inflation, over-extension of the volcano's flanks, and ultimately flank collapse, leading to submarine landslides with the potential to start major tsunamis, which can be among the largest natural disasters in the world. In an illustration of the potent power of flank collapses, a summit collapse on the northern edge of Vlinder Seamount resulted in a pronounced headwall scarp and a field of debris up to 6 km (4 mi) away.[7] A catastrophic collapse at Detroit Seamount flattened its whole structure extensively.[11] Lastly, in 2004, scientists found marine fossils 61 m (200 ft) up the flank of Kohala mountain on the island of Hawaii. Subsidence analysis found that at the time of their deposition, this would have been 500 m (1,640 ft) up the flank of the volcano,[37] far too high for a normal wave to reach. The date corresponded with a massive flank collapse at the nearby Mauna Loa, and it was theorized that it was a massive tsunami, generated by the landslide, that deposited the fossils.[38]
Very-long-baseline interferometry (VLBI) is a type of astronomical interferometry used in radio astronomy. In VLBI a signal from an astronomical radio source, such as a quasar, is collected at multiple
radio telescopes on Earth. The distance between the radio telescopes is then calculated using the time
difference between the arrivals of the radio signal at different telescopes. This allows observations of an
object that are made simultaneously by many radio telescopes to be combined, emulating a telescope
with a size equal to the maximum separation between the telescopes.
Data received at each antenna in the array include arrival times from a local atomic clock, such as
a hydrogen maser. At a later time, the data are correlated with data from other antennas that recorded
the same radio signal, to produce the resulting image. The resolution achievable using interferometry is
proportional to the observing frequency. The VLBI technique enables the distance between telescopes
to be much greater than that possible with conventional interferometry, which requires antennas to be
physically connected by coaxial cable, waveguide, optical fiber, or other type of transmission line. The
greater telescope separations are possible in VLBI due to the development of the closure phase imaging
technique by Roger Jennison in the 1950s, allowing VLBI to produce images with superior resolution.
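The resolution scaling described above can be illustrated with the diffraction limit θ ≈ λ/B. The frequencies and baseline lengths below are illustrative round numbers, not the parameters of any particular array:

```python
import math

C = 299_792_458.0                            # speed of light, m/s
RAD_TO_MAS = 180 / math.pi * 3600 * 1000     # radians -> milliarcseconds

def resolution_mas(freq_hz, baseline_m):
    """Diffraction-limited angular resolution: theta ~ lambda / B."""
    wavelength = C / freq_hz
    return (wavelength / baseline_m) * RAD_TO_MAS

# Roughly Earth-sized baseline at 8.4 GHz (illustrative ground-VLBI values)
ground = resolution_mas(8.4e9, 8_000e3)
# Space VLBI: a ~350,000 km baseline at 22 GHz (RadioAstron-like round numbers)
space = resolution_mas(22e9, 350_000e3)

print(f"ground VLBI: {ground:.2f} mas")                    # sub-milliarcsecond
print(f"space VLBI:  {space * 1000:.1f} microarcseconds")  # microarcsecond level
```

Doubling either the observing frequency or the baseline halves the resolvable angle, which is why the space-borne baselines discussed below reach the microarcsecond regime.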
VLBI is most well known for imaging distant cosmic radio sources, spacecraft tracking, and for
applications in astrometry. However, since the VLBI technique measures the time differences between
the arrival of radio waves at separate antennas, it can also be used "in reverse" to perform earth
rotation studies, map movements of tectonic plates very precisely (within millimetres), and perform
other types of geodesy. Using VLBI in this manner requires large numbers of time difference
measurements from distant sources (such as quasars) observed with a global network of antennas over
a period of time.
Contents

1 Scientific results

2 VLBI arrays

3 e-VLBI

4 Space VLBI

5 How VLBI Works

6 References

7 External links

Scientific results




Some of the scientific results derived from VLBI include:

High resolution radio imaging of cosmic radio sources.

Imaging the surfaces of nearby stars at radio wavelengths (see also interferometry); similar
techniques have also been used to make infrared and optical images of stellar surfaces

Definition of the celestial reference frame

Motion of the Earth's tectonic plates

Regional deformation and local uplift or subsidence.

Variations in the Earth's orientation and length of day.

Maintenance of the terrestrial reference frame

Measurement of the gravitational forces of the Sun and Moon on the Earth and of the deep structure
of the Earth

Improvement of atmospheric models

Measurement of the fundamental speed of gravity

The tracking of the Huygens probe as it passed through Titan's atmosphere, allowing wind
velocity measurements

VLBI arrays

There are several VLBI arrays located in Europe, Canada, the United States, Russia, Japan, Mexico and
Australia. The most sensitive VLBI array in the world is the European VLBI Network (EVN). This is a part-time array which brings together the largest European radio telescopes for typically week-long sessions,
with the data being processed at the Joint Institute for VLBI in Europe (JIVE). The Very Long Baseline
Array (VLBA) uses ten dedicated, 25-meter telescopes spanning 5,351 miles across the United States, and
is the largest VLBI array that operates all year round as both an astronomical
and geodesy instrument.[1] The combination of the EVN and VLBA is known as Global VLBI. When one or
both of these arrays are combined with one or more space-based VLBI antennas such
as HALCA (previously) and now with RadioAstron (Spektr-R), the resolution obtained is higher than any
other astronomical instrument, capable of imaging the sky with a level of detail measured in
microarcseconds. VLBI generally benefits from the longer baselines afforded by international
collaboration, with a notable early example in 1976, when radio telescopes in the United States, USSR
and Australia were linked to observe hydroxyl-maser sources.[2]
e-VLBI

Image of the source IRC+10420. The lower resolution image on the left image was taken with the UK's
MERLIN array and shows the shell of maser emission produced by an expanding shell of gas with a
diameter about 200 times that of our own Solar System. The shell of gas was ejected from a supergiant
star (10 times the mass of our sun) at the centre of the emission about 900 years ago. The
corresponding EVN e-VLBI image (right) shows the much finer structure of the masers because of the
higher resolution of the VLBI array.
VLBI has traditionally operated by recording the signal at each telescope on magnetic tapes or disks, and
shipping those to the correlation center for replay. Recently, it has become possible to connect VLBI
radio telescopes in close to real-time, while still employing the local time references of the VLBI
technique, in a technique known as e-VLBI. In Europe, six radio telescopes of the European VLBI
Network (EVN) are now connected with Gigabit per second links via their National Research Networks
and the Pan-European research network GEANT2, and the first astronomical experiments using this new
technique were successfully conducted in 2011.[3]

The image to the right shows the first science produced by the European VLBI Network using e-VLBI. The
data from 6 telescopes were processed in real time at the European Data Processing centre at JIVE. The
Netherlands Academic Research Network SURFnet provides 6 x 1 Gbit/s connectivity between JIVE and
the GEANT2 network.
Space VLBI
In the quest for even greater angular resolution, dedicated VLBI satellites have been placed in Earth
orbit to provide greatly extended baselines. Experiments incorporating such space-borne array elements
are termed Space Very Long Baseline Interferometry (SVLBI).
The first such dedicated VLBI mission was HALCA, an 8 meter radio telescope, which was launched in
February 1997 and made observations until October 2003, but due to the small size of the dish only very
strong radio sources could be observed with SVLBI arrays incorporating it.
Another space VLBI mission, Spektr-R (or RadioAstron), was launched in July 2011.
How VLBI Works

Recording data at each of the telescopes in a VLBI array. Extremely accurate high-frequency clocks are
recorded alongside the astronomical data in order to help get the synchronization correct
In VLBI interferometry, the digitized antenna data are usually recorded at each of the telescopes (in the
past this was done on large magnetic tapes, but nowadays it is usually done on large RAID arrays of
computer disk drives). The antenna signal is sampled with an extremely precise and stable atomic clock
(usually a hydrogen maser) that is additionally locked onto a GPS time standard. Alongside the
astronomical data samples, the output of this clock is recorded on the tape/disk media. The recorded
media are then transported to a central location. More recent experiments have been conducted with
"electronic" VLBI (e-VLBI) where the data are sent by fibre-optics (e.g., 10 Gbit/s fiber-optic paths in the
European GEANT2 research network) and not recorded at the telescopes, speeding up and simplifying
the observing process significantly. Even though the data rates are very high, the data can be sent over
normal Internet connections taking advantage of the fact that many of the international high speed
networks have significant spare capacity at present.

At the location of the correlator the data are played back. The timing of the playback is adjusted
according to the atomic clock signals on the (tapes/disk drives/fibre optic signal), and the estimated
times of arrival of the radio signal at each of the telescopes. A range of playback timings over a range of
nanoseconds are usually tested until the correct timing is found.

Playing back the data from each of the telescopes in a VLBI array. Great care must be taken to
synchronize the play back of the data from different telescopes. Atomic clock signals recorded with the
data help in getting the timing correct.
Each antenna will be a different distance from the radio source, and as with the short baseline
radio interferometer the delays incurred by the extra distance to one antenna must be added artificially
to the signals received at each of the other antennas. The approximate delay required can be calculated
from the geometry of the problem. The tape playback is synchronized using the recorded signals from
the atomic clocks as time references, as shown in the drawing on the right. If the position of the
antennas is not known to sufficient accuracy or atmospheric effects are significant, fine adjustments to
the delays must be made until interference fringes are detected. If the signal from antenna A is taken as
the reference, inaccuracies in the delay will lead to errors ε_B and ε_C in the phases of the signals from
tapes B and C respectively (see drawing on right). As a result of these errors the phase of the complex
visibility cannot be measured with a very-long-baseline interferometer.
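The "approximate delay calculated from the geometry of the problem" is simply the projection of the baseline vector onto the source direction, divided by the speed of light. A minimal sketch, with an invented baseline and source geometry:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def geometric_delay(baseline_m, source_unit_vec):
    """Extra light-travel time to the farther antenna: tau = (B . s) / c."""
    return np.dot(baseline_m, source_unit_vec) / C

# Hypothetical 8,000 km baseline along x, source 30 degrees from the baseline axis
baseline = np.array([8_000e3, 0.0, 0.0])
theta = np.radians(30)
source = np.array([np.cos(theta), np.sin(theta), 0.0])

tau = geometric_delay(baseline, source)
print(f"delay = {tau * 1e3:.3f} ms")
```

For Earth-bound baselines the delay can never exceed one Earth diameter divided by c, about 42 ms, but it must be tracked to small fractions of a nanosecond for the fringes to survive, which is why the atomic-clock timestamps described above are essential.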
The phase of the complex visibility depends on the symmetry of the source brightness distribution. Any
brightness distribution can be written as the sum of a symmetric component and an anti-symmetric
component. The symmetric component of the brightness distribution only contributes to the real part of
the complex visibility, while the anti-symmetric component only contributes to the imaginary part. As
the phase of each complex visibility measurement cannot be determined with a very-long-baseline
interferometer the symmetry of the corresponding contribution to the source brightness distributions is
not known.
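This decomposition can be checked numerically with a toy 1-D brightness distribution (arbitrary units, invented for illustration): the symmetric component yields a purely real visibility, the anti-symmetric component a purely imaginary one.

```python
import numpy as np

x = np.arange(-16, 17)               # 1-D sky coordinate, symmetric about 0
I_sym = np.exp(-x**2 / 18.0)         # symmetric brightness component
I_anti = x * np.exp(-x**2 / 18.0)    # anti-symmetric brightness component

def visibility(I, u):
    """Complex visibility: Fourier transform of the brightness at spatial frequency u."""
    return np.sum(I * np.exp(-2j * np.pi * u * x))

u = 0.07
print(visibility(I_sym, u).imag)   # ~0: symmetric part contributes only a real part
print(visibility(I_anti, u).real)  # ~0: anti-symmetric part contributes only an imaginary part
```

Losing the visibility phase therefore means losing exactly the information that distinguishes the symmetric from the anti-symmetric part of the source.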

R. C. Jennison developed a novel technique for obtaining information about visibility phases when delay
errors are present, using an observable called the closure phase. Although his initial laboratory
measurements of closure phase had been done at optical wavelengths, he foresaw greater potential for
his technique in radio interferometry. In 1958 he demonstrated its effectiveness with a radio
interferometer, but it only became widely used for long-baseline radio interferometry in 1974. At least
three antennas are required. This method was used for the first VLBI measurements, and a modified
form of this approach ("Self-Calibration") is still used today.
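Why at least three antennas are needed follows from the algebra of the closure phase: each per-station phase (or delay) error enters the two baselines that include that station with opposite signs, so it cancels in the closed sum around the triangle. A toy numerical check, with all phase values invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# "True" visibility phases on the three baselines (radians, made up)
phi = {(1, 2): 0.40, (2, 3): -1.10, (1, 3): -0.20}

# Unknown per-station phase errors from clock and atmospheric delays
eps = {k: rng.uniform(-np.pi, np.pi) for k in (1, 2, 3)}

def measured(i, j):
    """Observed baseline phase corrupted by the two station errors."""
    return phi[(i, j)] + eps[i] - eps[j]

closure = measured(1, 2) + measured(2, 3) - measured(1, 3)
true_closure = phi[(1, 2)] + phi[(2, 3)] - phi[(1, 3)]
print(closure, true_closure)  # equal: the station errors cancel
```

The closure phase is thus an observable that survives arbitrary station-based errors, which is the property self-calibration still exploits.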

Global Positioning System


Country of origin: United States
Operator(s): AFSPC
Type: Military, civilian
Status: Operational
Coverage: Global
Number of satellites (current total): 32
First launch: February 1978
Total launches: 65
Orbital regime(s): 6 MEO planes
Orbital height: 20,180 km (12,540 mi)
Artist's conception of GPS Block II-F satellite in earth orbit.

Civilian GPS receivers ("GPS navigation device") in a marine application.

Automotive navigation system in a taxicab.

U.S. Air Force Senior Airman runs through a checklist during Global Positioning System satellite
operations.
The Global Positioning System (GPS) is a space-based navigation system that provides location and time
information in all weather conditions, anywhere on or near the earth where there is an unobstructed
line of sight to four or more GPS satellites.[1] The system provides critical capabilities to military, civil,
and commercial users around the world. The United States government created the system, maintains
it, and makes it freely accessible to anyone with a GPS receiver.
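The positioning principle can be sketched as solving for three position coordinates plus the receiver's clock bias from at least four pseudoranges. The satellite positions, receiver position, and clock bias below are invented for illustration only; a real receiver uses broadcast ephemerides and many error corrections.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions (m), roughly at the GPS orbital radius ~26,560 km
sats = np.array([
    [26_560e3, 0.0, 0.0],
    [0.0, 26_560e3, 0.0],
    [0.0, 0.0, 26_560e3],
    [15_340e3, 15_340e3, 15_340e3],
])

# Ground truth: a receiver on Earth's surface with a 100 km clock-bias range error
true_pos = np.array([6_371e3, 0.0, 0.0])
true_bias = 100e3                    # clock bias expressed in metres (c * dt)
rho = np.linalg.norm(sats - true_pos, axis=1) + true_bias  # pseudoranges

# Gauss-Newton iteration for [x, y, z, c*dt], starting from Earth's centre
est = np.zeros(4)
for _ in range(10):
    p, b = est[:3], est[3]
    r = np.linalg.norm(sats - p, axis=1)
    residual = rho - (r + b)
    # Jacobian: unit vectors from satellites to receiver, plus 1 for the bias term
    J = np.hstack([(p - sats) / r[:, None], np.ones((4, 1))])
    est += np.linalg.solve(J, residual)

print(np.linalg.norm(est[:3] - true_pos))  # position error in metres (converges to ~0)
```

The fourth satellite is what lets the receiver solve for its own clock error along with position, which is why an unobstructed view of four or more satellites is required.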

The US began the GPS project in 1973 to overcome the limitations of previous navigation
systems,[2] integrating ideas from several predecessors, including a number of classified engineering
design studies from the 1960s. The U.S. Department of Defense (DoD) developed the system, which
originally used 24 satellites. It became fully operational in 1995. Bradford Parkinson, Roger L. Easton,
and Ivan A. Getting are credited with inventing it.
Advances in technology and new demands on the existing system have now led to efforts to modernize
the GPS system and implement the next generation of GPS Block IIIA satellites and Next Generation
Operational Control System (OCX).[3] Announcements from Vice President Al Gore and the White
House in 1998 initiated these changes. In 2000, the U.S. Congress authorized the modernization
effort, GPS III.
In addition to GPS, other systems are in use or under development. The Russian Global Navigation
Satellite System (GLONASS) was developed contemporaneously with GPS, but suffered from incomplete
coverage of the globe until the mid-2000s.[4] There are also the planned European Union Galileo
positioning system, India's Indian Regional Navigation Satellite System, and the Chinese BeiDou
Navigation Satellite System.
Contents

1 History
o

1.1 Predecessors

1.2 Development

1.3 Timeline and modernization

1.4 Awards

2 Basic concept of GPS


o

2.1 Fundamentals

2.2 More detailed description

2.3 User-satellite geometry

2.4 Receiver in continuous operation

2.5 Non-navigation applications

3 Structure
o

3.1 Space segment

3.2 Control segment

3.3 User segment

4 Applications
o

4.1 Civilian

4.1.1 Restrictions on civilian use

4.2 Military

5 Communication
o

5.1 Message format

5.2 Satellite frequencies

5.3 Demodulation and decoding

6 Navigation equations
o

6.1 Problem description

6.2 Geometric interpretation

6.2.1 Spheres

6.2.2 Hyperboloids

6.2.3 Spherical cones

6.3 Solution methods

6.3.1 Least squares

6.3.2 Iterative

6.3.3 Closed-form

7 Error sources and analysis

8 Accuracy enhancement and surveying


o

8.1 Augmentation

8.2 Precise monitoring

8.3 Timekeeping

8.3.1 Leap seconds

8.3.2 Accuracy

8.3.3 Format

8.4 Carrier phase tracking (surveying)

9 Regulatory spectrum issues concerning GPS receivers

10 Other systems

11 See also

12 Notes

13 References

14 Further reading

15 External links

History
The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and
the Decca Navigator, developed in the early 1940s and used by the British Royal Navy during World War
II.
Predecessors
In 1956, the German-American physicist Friedwardt Winterberg[5] proposed a test of general relativity: detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside
artificial satellites. Calculations using general relativity determined that the clocks on the GPS satellites
would be seen by the earth's observers to run 38 microseconds faster per day (than those on the earth),
and this was corrected for in the design of GPS.[6]
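The quoted 38 microseconds per day can be reproduced from the standard first-order formulas: the gravitational blueshift GM/c² · (1/R_E − 1/r) minus the velocity time dilation v²/2c² for a circular orbit. The constants below are nominal textbook values, not mission data:

```python
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
C = 299_792_458.0     # speed of light, m/s
R_EARTH = 6.371e6     # mean Earth radius, m
R_ORBIT = 26.56e6     # GPS orbital radius (~20,180 km altitude + Earth radius), m

# General relativity: the orbiting clock sits higher in the potential -> runs faster
gr = GM / C**2 * (1 / R_EARTH - 1 / R_ORBIT)

# Special relativity: circular-orbit speed v = sqrt(GM/r) -> the clock runs slower
sr = GM / R_ORBIT / (2 * C**2)

net_us_per_day = (gr - sr) * 86_400 * 1e6
print(f"{net_us_per_day:.1f} microseconds/day faster")
```

The gravitational term (~+45 μs/day) dominates the velocity term (~−7 μs/day), leaving the net ~38 μs/day that the satellite clocks are deliberately offset to compensate.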
The Soviet Union launched the first man-made satellite, Sputnik, in 1957. Two American physicists,
William Guier and George Weiffenbach, at Johns Hopkins's Applied Physics Laboratory (APL), decided to
monitor Sputnik's radio transmissions.[7] Within hours they realized that, because of the Doppler effect,
they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to
their UNIVAC to do the heavy calculations required. The next spring, Frank McClure, the deputy director
of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's
location given that of the satellite. (At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL
to develop the Transit system.[8] In 1959, ARPA (renamed DARPA in 1972) also played a role in
Transit.[9][10][11]
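The Guier–Weiffenbach observation can be sketched with a toy straight-line pass: the received frequency slides from above to below the transmitted value and crosses it exactly at closest approach, which pins down the pass geometry. All numbers below are invented for illustration (Sputnik's beacon transmitted near 20 MHz):

```python
import numpy as np

C = 299_792_458.0
f0 = 20.005e6          # beacon frequency, Hz (assumed)

# Toy pass: the satellite moves in a straight line past a fixed observer at the origin
t = np.linspace(-300.0, 300.0, 6001)          # seconds around closest approach
v = 7_800.0                                    # orbital speed, m/s (typical LEO)
d0 = 1_000e3                                   # closest-approach distance, m (assumed)
r = np.hypot(v * t, d0)                        # slant range vs time
range_rate = np.gradient(r, t)                 # radial velocity toward/away from observer
f_obs = f0 * (1 - range_rate / C)              # first-order Doppler shift

# Closest approach = where the observed frequency crosses the rest frequency
i = np.argmin(np.abs(f_obs - f0))
print(t[i])  # the Doppler zero-crossing marks the moment of closest approach
```

The shape of the whole Doppler curve (not just the crossing) encodes the speed and miss distance, which is what allowed the orbit, and later the inverse problem of the observer's position, to be recovered.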

Official logo for NAVSTAR GPS

Emblem of the 50th Space Wing


The first satellite navigation system, Transit, used by the United States Navy, was first successfully tested
in 1960.[12] It used a constellation of five satellites and could provide a navigational fix approximately
once per hour. In 1967, the U.S. Navy developed the Timation satellite that proved the ability to place
accurate clocks in space, a technology required by GPS. In the 1970s, the ground-based Omega
Navigation System, based on phase comparison of signal transmission from pairs of stations,[13] became
the first worldwide radio navigation system. Limitations of these systems drove the need for a more
universal navigation solution with greater accuracy.
While there were wide needs for accurate navigation in military and civilian sectors, almost none of
those was seen as justification for the billions of dollars it would cost in research, development,
deployment, and operation for a constellation of navigation satellites. During the Cold War arms race,
the nuclear threat to the existence of the United States was the one need that did justify this cost in the
view of the United States Congress. This deterrent effect is why GPS was funded. It is also the reason for
the ultra-secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic
bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear
deterrence posture, accurate determination of the SLBM launch position was a force multiplier.
Precise navigation would enable United States ballistic missile submarines to get an accurate fix of their
positions before they launched their SLBMs.[14] The USAF, with two thirds of the nuclear triad, also had
requirements for a more accurate and reliable navigation system. The Navy and Air Force were
developing their own technologies in parallel to solve what was essentially the same problem. To
increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (such as
Russian SS-24 and SS-25) and so the need to fix the launch position had similarity to the SLBM situation.
In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate
ICBM Control) that was essentially a 3-D LORAN. A follow-on study, Project 57, was carried out in 1963, and

it was "in this study that the GPS concept was born". That same year, the concept was pursued as
Project 621B, which had "many of the attributes that you now see in GPS"[15] and promised increased
accuracy for Air Force bombers as well as ICBMs. Updates from the Navy Transit system were too slow
for the high speeds of Air Force operation. The Naval Research Laboratory continued advancements
with their Timation (Time Navigation) satellites, first launched in 1967, and with the third one in 1974
carrying the first atomic clock into orbit.[16]
Another important predecessor to GPS came from a different branch of the United States military. In
1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite used for
geodetic surveying.[17] The SECOR system included three ground-based transmitters from known
locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at
an undetermined position, could then use those signals to fix its location precisely. The last SECOR
satellite was launched in 1969.[18] Decades later, during the early years of GPS, civilian surveying became
one of the first fields to make use of the new technology, because surveyors could reap benefits of
signals from the less-than-complete GPS constellation years before it was declared operational. GPS can
be thought of as an evolution of the SECOR system where the ground-based transmitters have been
migrated into orbit.
Development
With these parallel developments in the 1960s, it was realized that a superior system could be
developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program.
During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon
discussed the creation of aDefense Navigation Satellite System (DNSS). It was at this meeting that "the
real synthesis that became GPS was created." Later that year, the DNSS program was named Navstar, or
Navigation System Using Timing and Ranging.[19] With the individual satellites being associated with the
name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name was
used to identify the constellation of Navstar satellites, Navstar-GPS.[20] Ten "Block I" prototype satellites
were launched between 1978 and 1985 (with one prototype being destroyed in a launch failure).[21]
After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down in 1983 after
straying into the USSR's prohibited airspace,[22] in the vicinity of Sakhalin and Moneron Islands,
President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was
sufficiently developed, as a common good.[23] The first satellite was launched in 1989, and the
24th satellite was launched in 1994. The cost of the GPS program to this point, not including the cost of
the user equipment but including the costs of the satellite launches, has been estimated at about US$5
billion (then-year dollars).[24] Roger L. Easton is widely credited as the primary inventor of GPS.
Initially, the highest quality signal was reserved for military use, and the signal available for civilian use
was intentionally degraded (Selective Availability). This changed with President Bill Clinton signing a
policy directive in 1996 to turn off Selective Availability in May 2000 to provide the same precision to
civilians that was afforded to the military. The directive was proposed by the U.S. Secretary of

Defense, William Perry, because of the widespread growth of differential GPSservices to improve civilian
accuracy and eliminate the U.S. military advantage. Moreover, the U.S. military was actively developing
technologies to deny GPS service to potential adversaries on a regional basis.[25]
Since its deployment, the U.S. has implemented several improvements to the GPS service including new
signals for civil use and increased accuracy and integrity for all users, all the while maintaining
compatibility with existing GPS equipment. Modernization of the satellite system has been an ongoing
initiative by the U.S. Department of Defense through a series ofsatellite acquisitions to meet the
growing needs of the military, civilians, and the commercial market.
As of early 2015, high-quality, FAA-grade Standard Positioning Service (SPS) GPS receivers provide a
horizontal accuracy of better than 3.5 meters,[26] although many factors such as receiver quality and
atmospheric issues can affect this accuracy.
GPS is owned and operated by the United States Government as a national resource. The Department of
Defense is the steward of GPS. The Interagency GPS Executive Board (IGEB) oversaw GPS policy matters from
1996 to 2004. After that the National Space-Based Positioning, Navigation and Timing Executive
Committee was established by presidential directive in 2004 to advise and coordinate federal
departments and agencies on matters concerning the GPS and related systems.[27]The executive
committee is chaired jointly by the deputy secretaries of defense and transportation. Its membership
includes equivalent-level officials from the departments of state, commerce, and homeland security, the
joint chiefs of staff, and NASA. Components of the executive office of the president participate as
observers to the executive committee, and the FCC chairman participates as a liaison.
The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service (as
defined in the federal radio navigation plan and the standard positioning service signal specification)
that will be available on a continuous, worldwide basis," and "develop measures to prevent hostile use
of GPS and its augmentations without unduly disrupting or degrading civilian uses."
Timeline and modernization
Main article: List of GPS satellites
Summary of satellites[28]
Block    Launch period    Launches
I        1978–1985        10
II       1989–1990        9
IIA      1990–1997        19
IIR      1997–2004        12 (12 currently in orbit and healthy)
IIR-M    2005–2009        8
IIF      from 2010
IIIA     from 2016 (12 planned)
IIIB     planned
IIIC     planned
Total satellites currently in orbit and healthy: 32
(Last update: June 13, 2014)
USA-85 from Block IIA is unhealthy
USA-203 from Block IIR-M is unhealthy[29]
For a more complete list, see list of GPS satellite launches

In 1972, the USAF Central Inertial Guidance Test Facility (Holloman AFB), conducted
developmental flight tests of two prototype GPS receivers over White Sands Missile Range,
using ground-based pseudo-satellites.[citation needed]

In 1978, the first experimental Block-I GPS satellite was launched.[21]

In 1983, after Soviet interceptor aircraft shot down the civilian airliner KAL 007 that strayed
into prohibited airspace because of navigational errors, killing all 269 people on board, U.S.
President Ronald Reagan announced that GPS would be made available for civilian uses once it
was completed,[30][31] although it had been previously published [in Navigation magazine] that
the CA code (Coarse Acquisition code) would be available to civilian users.

By 1985, ten more experimental Block-I satellites had been launched to validate the concept.

Beginning in 1988, Command & Control of these satellites was transitioned from Onizuka AFS,
California to the 2nd Satellite Control Squadron (2SCS) located at Falcon Air Force Station in
Colorado Springs, Colorado.[32][33]

On February 14, 1989, the first modern Block-II satellite was launched.

The Gulf War from 1990 to 1991 was the first conflict in which the military widely used GPS.[34]

In 1991, a project to create a miniature GPS receiver successfully ended, replacing the previous
50 pound military receivers with a 2.75 pound handheld receiver.[10]

In 1992, the 2nd Space Wing, which originally managed the system, was inactivated and
replaced by the 50th Space Wing.

By December 1993, GPS achieved initial operational capability (IOC), indicating a full
constellation (24 satellites) was available and providing the Standard Positioning Service
(SPS).[35]

Full Operational Capability (FOC) was declared by Air Force Space Command (AFSPC) in April
1995, signifying full availability of the military's secure Precise Positioning Service (PPS).[35]

In 1996, recognizing the importance of GPS to civilian users as well as military users, U.S.
President Bill Clinton issued a policy directive[36] declaring GPS a dual-use system and
establishing an Interagency GPS Executive Board to manage it as a national asset.

In 1998, United States Vice President Al Gore announced plans to upgrade GPS with two new
civilian signals for enhanced user accuracy and reliability, particularly with respect to aviation
safety and in 2000 the United States Congress authorized the effort, referring to it as GPS III.

On May 2, 2000 "Selective Availability" was discontinued as a result of the 1996 executive order,
allowing users to receive a non-degraded signal globally.

In 2004, the United States Government signed an agreement with the European Community
establishing cooperation related to GPS and Europe's planned Galileo system.

In 2004, United States President George W. Bush updated the national policy and replaced the
executive board with the National Executive Committee for Space-Based Positioning,
Navigation, and Timing.[37]

November 2004, Qualcomm announced successful tests of assisted GPS for mobile phones.[38]

In 2005, the first modernized GPS satellite was launched and began transmitting a second
civilian signal (L2C) for enhanced user performance.[39]

On September 14, 2007, the aging mainframe-based Ground Segment Control System was
transferred to the new Architecture Evolution Plan.[40]

On May 19, 2009, the United States Government Accountability Office issued a report warning
that some GPS satellites could fail as soon as 2010.[41]

On May 21, 2009, the Air Force Space Command allayed fears of GPS failure saying "There's only
a small risk we will not continue to exceed our performance standard."[42]

On January 11, 2010, an update of ground control systems caused a software incompatibility
with 8000 to 10000 military receivers manufactured by a division of Trimble Navigation Limited
of Sunnyvale, Calif.[43]

On February 25, 2010,[44] the U.S. Air Force awarded the contract to develop the GPS Next
Generation Operational Control System (OCX) to improve accuracy and availability of GPS
navigation signals, and serve as a critical part of GPS modernization.

Seabed

Map showing the underwater topography (bathymetry) of the ocean floor. Like land terrain, the ocean
floor has ridges, valleys, plains and volcanoes.
The seabed (also known as the seafloor, sea floor, or ocean floor) is the bottom of the ocean.
Contents

1 Seabed structure
o

1.1 Technical terms

2 Benthos

3 Seabed features

4 History of exploration

5 In art and culture

6 Further reading

7 See also

8 References

9 External links

Seabed structure
See also: Seafloor spreading

The major oceanic divisions


Most of the oceans have a common structure, created by common physical phenomena, mainly from
tectonic movement, and sediment from various sources. The structure of the oceans, starting with the
continents, begins usually with a continental shelf, continues to thecontinental slope which is a steep
descent into the ocean, until reaching the abyssal plain a topographic plain, the beginning of the
seabed, and its main area. The border between the continental slope and the abyssal plain usually has a
more gradual descent, and is called the continental rise, which is caused by sediment cascading down
the continental slope.
The mid-ocean ridge, as its name implies, is a mountainous rise through the middle of all the oceans,
between the continents. Typically a rift runs along the edge of this ridge. Along tectonic plate edges
there are typically oceanic trenches, deep valleys created by the mantle circulation movement from
the mid-ocean mountain ridge to the oceanic trench.
Hotspot volcanic island ridges are created by volcanic activity, erupting periodically, as the tectonic
plates pass over a hotspot. In areas with volcanic activity and in the oceanic trenches there
are hydrothermal vents releasing high pressure and extremely hot water and chemicals into the
typically freezing water around it.
Deep ocean water is divided into layers or zones, each with typical features of salinity, pressure,
temperature and marine life, according to their depth. Lying along the top of the abyssal plain is
the abyssal zone, whose lower boundary lies at about 6,000 m (20,000 ft). The hadal zone which

includes the oceanic trenches, lies between 6,000–11,000 metres (20,000–36,000 ft) and is the deepest
oceanic zone.
Technical terms
The abbreviation "mbsf", meaning "metres below the seafloor", is the convention used for depths beneath the seafloor surface.[1][2]
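As an illustration of the convention, a depth below sea level can be recovered by adding the water depth at a site to the mbsf value. The helper below is a hypothetical sketch for illustration, not part of any standard library:

```python
def depth_below_sea_level(water_depth_m: float, mbsf: float) -> float:
    """Convert a sub-seafloor depth (mbsf) at a site with a known
    water column to total depth below the sea surface, in metres."""
    return water_depth_m + mbsf

# A sample taken 120 m below the seafloor where the water is 4,000 m deep
print(depth_below_sea_level(4000, 120))  # 4120
```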
Benthos[edit]
Main article: Benthos
Benthos is the community of organisms which live on, in, or near the seabed, the area known as the benthic zone.[3] This community lives in or near marine sedimentary environments, from tidal pools along the foreshore, out across the continental shelf, and down to the abyssal depths. The benthic zone is the ecological region on, in and immediately above the seabed, including the sediment surface and some sub-surface layers. Benthos generally live in close relationship with the bottom substrate, and many such organisms are permanently attached to it. The superficial layer of sediment lining a body of water, the benthic boundary layer, is an integral part of the benthic zone and greatly influences the biological activity that takes place there. Examples of contact substrate layers include sand bottoms, rocky outcrops, coral, and bay mud.
Seabed features[edit]

Oceanic ridge with deep sea vent

Layers of the pelagic zone


Each area of the seabed has typical features such as common sediment composition, typical topography, salinity of the water layers above it, marine life, magnetic orientation of rocks, and sedimentation. Seabed topography is flat where sedimentation is heavy and covers the tectonic features. Sediment comes from various sources:

Land erosion sediment, brought mainly by rivers;
New "young rock": magma of basaltic composition from the mid-ocean ridge;
Underwater volcanic ash, spread especially from hydrothermal vents;
Microorganism activity;
Sea currents eroding the seabed itself;
Marine life: corals, fish, algae, crabs, marine plants and other biologically created sediment.
Where little sediment accumulates, such as in the northern and eastern Atlantic Ocean, the original tectonic activity can be clearly seen as straight-line "cracks" or "vents" thousands of kilometres long.
Abundant marine life has recently been discovered in the deep sea, especially around hydrothermal vents. Large deep-sea communities of marine life have been found around black and white smokers, vents emitting chemicals that are toxic to humans and most vertebrates. This marine life receives its energy both from the extreme temperature difference (typically a drop of 150 degrees) and from chemosynthesis by bacteria.
Brine pools are another seabed feature, usually connected to cold seeps.
History of exploration[edit]
Main article: History of oceanography
The seabed has been explored by submersibles such as Alvin and, to some extent, by scuba divers with special equipment. Seafloor spreading is the process that continually adds new material to the ocean floor, while sediment accumulates along the continental slope. In recent years satellite imagery has provided very clear mapping of the seabed and is used extensively in the study and exploration of the ocean floor.
In art and culture[edit]

Some children's play songs include elements such as "There's a hole at the bottom of the sea", or "A
sailor went to sea... but all that he could see was the bottom of the deep blue sea".
On and under the seabed are archaeological sites of historic interest, such as shipwrecks and sunken
towns. This underwater cultural heritage is protected by the UNESCO Convention on the Protection of
the Underwater Cultural Heritage. The convention aims at preventing looting and the destruction or loss
of historic and cultural information by providing an international legal framework.[4]
