1906 Revisited

by Ryan Propper

A new model for an old quake

In the early morning hours of April 18, 1906, disaster struck the San Francisco Bay area. At 5:12 AM, the sleepy dawn was shattered as a powerful earthquake ripped through the San Andreas fault. The temblor, which registered 7.8 on the Richter scale – the energy equivalent of a million atomic bombs – was one of the worst natural disasters to strike a major U.S. city. It left more than 3,000 dead, 200,000 homeless, and cost a staggering $400 million in damage.

Scientists have long sought to develop accurate
models of this destructive event, hoping to gain both
historical perspective and pragmatic understanding of the
region’s altered geophysical state. Now, almost 100 years
after “The Big One,” Stanford University professors Gregory
Beroza and Paul Segall in the Department of Geophysics,
along with graduate student Seok Goo Song, are rethinking
the traditional interpretations of this catastrophic quake.

Yesterday’s Models on Shaky Ground


Beroza explains that there have been two key models of the 1906 San Francisco earthquake. The first, proposed by Dr. Wayne Thatcher and his colleagues at the U.S. Geological Survey, was based on triangulation data, “the 19th-century equivalent of GPS,” Beroza says. Triangulation involves the precise measurement of angles between geodetic monuments - specific points on the Earth’s surface used for surveying and navigating. This process allows surveyors to determine distances with a high degree of accuracy. “The San Andreas fault follows the coast fairly closely,” Beroza explains. “When the earthquake happened, it changed those angles. By looking at the difference in angles before and after the earthquake, Thatcher and his colleagues were able to turn that into a map of the slip distribution on the fault.”
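
The geometry behind that measurement is simple to sketch. As a minimal illustration, assuming a hypothetical pair of monuments a known baseline apart, the law of sines turns two measured angles into the distances to a third point; re-measuring the same angles after a quake is what reveals how the ground shifted. All values below are made up for the example.

```python
import math

# Hedged sketch: law-of-sines triangulation with hypothetical numbers.
# Monuments A and B sit on a surveyed baseline of known length; the
# angles from each monument toward a third point C are measured.
baseline_km = 10.0          # hypothetical A-B distance
angle_A = math.radians(62)  # hypothetical angle at A (between B and C)
angle_B = math.radians(71)  # hypothetical angle at B (between A and C)
angle_C = math.pi - angle_A - angle_B  # triangle angles sum to 180 degrees

# Law of sines: each side divided by the sine of its opposite angle is constant.
dist_A_to_C = baseline_km * math.sin(angle_B) / math.sin(angle_C)
dist_B_to_C = baseline_km * math.sin(angle_A) / math.sin(angle_C)

print(f"A-C: {dist_A_to_C:.2f} km, B-C: {dist_B_to_C:.2f} km")
# Repeating the angle measurements after the earthquake and redoing this
# calculation shows how far point C moved relative to the baseline.
```
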
The second model of the 1906 quake was
developed by Professor David Wald, a Visiting
Associate in Geophysics at the California Institute
of Technology. Wald’s analysis was based not on
positional data but on recordings of the seismic waves
resulting from the temblor. These readings were taken from
research stations around the globe, including Japan, Europe,
and Puerto Rico. Wald’s analysis of the seismic waves provided
another framework for understanding the earthquake. His team used these data to create an interpretation of the quake’s rupture as a function of space and time.

[Photo: The statue of the famous geologist Louis Agassiz fell head first into the Quad during the 1906 quake. Remarkably, Agassiz only suffered damage to his nose. Agassiz is now firmly attached to his ledge in front of Jordan Hall. Photo courtesy of Stanford University Archives.]

Unfortunately, the geodetic and seismic models differed in two important respects. First, though both interpretations suggested that the fault extended from its epicenter near Daly City south to San Juan Bautista, Wald’s model showed its northward extent reaching only to Point Arena, for a total fault distance of 300 kilometers. On the other hand, Thatcher’s model placed the northernmost reach of the rupture much farther up the coast, near Cape Mendocino. This position amounted to a total fault distance of nearly 500 kilometers. Second, whereas Wald’s model measured the quake’s intensity at 7.7 on the Richter scale, Thatcher’s suggested a value of 7.9 – twice as strong when gauged by the amount of energy the temblor released.
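
That factor of two follows from how the magnitude scale maps onto radiated energy. As a rough check, here is a minimal sketch assuming the standard Gutenberg-Richter scaling, in which radiated energy grows as 10 raised to 1.5 times the magnitude, so a 0.2-unit gap works out to roughly a doubling of energy.

```python
# Hedged sketch: energy ratio implied by the 0.2-unit magnitude gap
# between the two models, assuming the standard Gutenberg-Richter
# scaling (radiated energy proportional to 10 ** (1.5 * magnitude)).
def energy_ratio(m_high: float, m_low: float) -> float:
    """Factor by which a magnitude m_high quake exceeds m_low in radiated energy."""
    return 10 ** (1.5 * (m_high - m_low))

print(energy_ratio(7.9, 7.7))  # ~2.0: Thatcher's estimate releases twice the energy
print(energy_ratio(8.0, 7.0))  # ~31.6: one full magnitude unit, for comparison
```
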
Beroza notes that the inconsistencies in these models were quite significant, even given the relatively primitive data sources used to create them. “The two models of the earthquake were discrepant in a way that superseded how old and spotty the data was,” he says. In other words, the two early models, the seismic and the geodetic, were discrepant because of errors in the models themselves, not merely in the data. Even with relatively inaccurate measurements, the two gave results so different that the gap could not be attributed to the data alone; one of the models was simply wrong.

1906 by the numbers
Strength: 7.8
Dead: 3,000+
Homeless: 200,000
Cost: $400 million
Newly discovered rupture velocity: 3-5 km/sec

Supershear Seismology

Beroza and his colleagues reexamined both data sets under the lens of a new hypothesis: that the earthquake ruptured the Earth’s crust much more quickly than previously assumed. “In an elastic medium like the Earth,” Beroza explains, “there are two fundamental wave speeds. One is the compressional wave or P-wave speed; the other is the shear wave or S-wave speed.” The S-wave speed, about 3 kilometers per second in the crust along the San Andreas fault, was traditionally regarded as an upper bound on the rupture velocity of the 1906 quake. However, speeds up to the P-wave velocity, about 5 kilometers per second, are physically possible, though seldom observed. By positing that the San Francisco earthquake was supershear – that is, the rupture velocity exceeded the S-wave speed – Beroza’s team fit both the geodetic and seismic data sets to a single model.
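
To make those thresholds concrete, here is a minimal sketch that classifies a rupture velocity against the approximate crustal wave speeds quoted above; the numbers are illustrative only, and real wave speeds vary with depth and rock type.

```python
# Hedged sketch: classifying rupture speed against the approximate
# crustal wave speeds cited in the article (values are illustrative).
S_WAVE_KM_S = 3.0  # shear wave speed, the traditional ceiling on rupture velocity
P_WAVE_KM_S = 5.0  # compressional wave speed, the physical upper limit

def classify_rupture(velocity_km_s: float) -> str:
    """Label a rupture velocity relative to the S- and P-wave speeds."""
    if velocity_km_s > P_WAVE_KM_S:
        return "not physically expected (faster than P waves)"
    if velocity_km_s > S_WAVE_KM_S:
        return "supershear (outruns its own shear waves)"
    return "sub-shear (the usual case)"

for v in (2.5, 3.5, 4.5):  # hypothetical rupture velocities in km/s
    print(f"{v} km/s -> {classify_rupture(v)}")
```
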
The notion of postulating supershear rupture for “The Big One” might have seemed outlandish until very recently. “Ten years ago, you’d have had a hard time convincing our colleagues that this old earthquake with this relatively bad data was supershear,” Beroza says. “But things changed.” The catalyst for change was a string of recent earthquakes – Turkey in 1999, Tibet in 2001, and Alaska in 2002 – that exhibited markedly similar geophysical behavior when compared with the San Francisco quake of 1906. All three measured 7.5 to 8 on the Richter scale and occurred on strike-slip faults, the same category of horizontally slipping fault as the San Andreas. In each case, modern data confirmed that supershear rupture was responsible for the temblors’ power and devastation. “It certainly gave us the license to propose it for 1906, so we went ahead and did that,” Beroza concludes. “And sure enough, we can fit the data acceptably well with supershear rupture faults.”

The Big Picture

The Stanford team views their new model of the 1906 San Francisco earthquake as more than just the fruits of an academic exercise. “You might say, ‘This earthquake happened a hundred years ago. Who really cares?’ It’s more than that,” Beroza claims. He explains that our modern understanding of seismic hazard in northern California depends, to a large extent, on a thorough comprehension of the causes and effects of “The Big One.” After 1906, there were very few damaging earthquakes along the San Andreas fault. Other than the 1989 Loma Prieta temblor and some moderate quakes in 1926 near Monterey Bay, seismic activity has been relatively quiet. Beroza and his colleagues think this period may be “the calm before the storm,” and that we’re due for another powerful earthquake sometime soon. “It’s strongly time-dependent,” he notes. “Understanding exactly, or even approximately, when we might enter a more active phase depends on knowing how much slip, and where it was, in the 1906 earthquake.” By looking at the past in a new light, Stanford researchers have moved one step closer toward understanding forces at work deep within the Earth and predicting the future of seismic activity in California. S

Ryan Propper is a junior majoring in Computer Science at Stanford University. He has been affiliated with the Stanford Scientific Review for three years, serving in various capacities as a writer, editor, and head of distribution. Originally from Colorado, Ryan enjoys programming, tennis, and following Colorado Avalanche hockey.
