OPINION PAPER

Against Fractiles
R. M. W. Musson,a) M.EERI

a) British Geological Survey, West Mains Road, Edinburgh, EH9 3LA, UK

[DOI: 10.1193/1.1985445]

Earthquake Spectra, Volume 21, No. 3, pages 887–891, August 2005; © 2005, Earthquake Engineering Research Institute
In a recent opinion paper, Abrahamson and Bommer (2005) argue that, contrary to the conventional practice of choosing mean hazard curves as a basis for design, it is better to select a chosen fractile hazard curve (the 84%, for example). My belief is that the arguments adduced for this proposal are flawed. In this response I suggest that there are compelling reasons for continuing to use the mean hazard.
In standard probabilistic seismic hazard analysis (PSHA) practice, epistemic uncertainties in the hazard model (unknown values such as maximum magnitude, fault activity status, etc.) are typically represented as branches in a logic tree. Each branch is associated with a weight that is assigned by the analyst. In most studies, hazard values are computed for each path through the logic tree in turn, and the final hazard value is the weighted mean of all the end members of the logic tree. However, it is also possible to sort the hazard values from each path and select the value that is exceeded by 50% of branches, or 16% of branches, or 5% of branches, and so on; these individual values or curves are described as fractiles. Commonly, a plot of selected fractiles is used to indicate the range of epistemic uncertainty in the overall results.
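To make the mechanics concrete, the following is a minimal sketch (in Python, with invented branch values and weights; the names `branch_hazard`, `branch_weight`, and `fractile` are illustrative, not taken from any published study) of how the mean and fractile values are extracted from the end members of a logic tree for a single ground-motion level.

```python
import numpy as np

# Invented example: annual exceedance probabilities computed for one
# ground-motion level along four complete paths through a logic tree,
# with analyst-assigned path weights that sum to 1.
branch_hazard = np.array([1e-4, 3e-4, 8e-4, 2e-3])
branch_weight = np.array([0.4, 0.3, 0.2, 0.1])

# Mean hazard: the weighted mean over all end members of the tree.
mean_hazard = np.sum(branch_weight * branch_hazard)

def fractile(hazard, weight, level):
    # Sort the branch results and return the value at which the
    # cumulative weight first reaches the requested level (e.g. 0.84).
    order = np.argsort(hazard)
    cumulative = np.cumsum(weight[order])
    return hazard[order][np.searchsorted(cumulative, level)]

print(mean_hazard)                                   # weighted mean
print(fractile(branch_hazard, branch_weight, 0.50))  # median point
print(fractile(branch_hazard, branch_weight, 0.84))  # 84% fractile point
```

Repeating the same bookkeeping at each ground-motion level traces out the full mean and fractile curves of the kind shown in Figure 1.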
Abrahamson and Bommer (2005) have two main objections to the use of the mean hazard value. On the one hand, they question its validity on the grounds that the weights ascribed to branches are not probabilities but confidence levels. On the other hand, they observe that at very low probabilities, the mean hazard curve is unduly influenced by the severest options in the hazard model, which start to dominate despite having been assigned very low weights. Therefore they would prefer to define an agreed-upon percentile value and use the appropriate fractile hazard curve.
The problem with this approach, in my view, is that it effectively throws out probabilism. Given the model defined by the analyst, if one wants the ground motion value with an annual probability of, say, 10⁻⁷ of being exceeded, then it is the mean hazard value, and no other value. The 10⁻⁷ value on the 84% curve does not in fact have that probability of occurring according to the model (see Figure 1). In the particular example shown in Figure 1, the value that one would read for 10⁻⁷ annual probability on the 84% curve has a true probability of occurrence of about 2 × 10⁻⁷.

Figure 1. Hypothetical hazard curves: median (50%), 84, and 99 percentiles are gray lines, and mean (expected) hazard is a heavy black line. These are freehand sketches for the sake of illustration and not the actual hazard at any site.
The 84% curve has meaning if, and only if, all the conditions in the single branch of the logic tree that the curve represents are true. This is not the case, however. The probability that this one single branch is the correct one is very small, and is represented by the cumulative weight attached to it. And the same is true of all other fractiles. Each one represents a state of epistemic certainty, and the probabilities given by each of the suite of curves are the probabilities attached to the supposition that those certainties apply. But, in reality, there is uncertainty, as expressed by the model. Each fractile, as a statement of probabilistic hazard, is false. True probabilism has to take into account all uncertainties, and none of the fractile curves does this; only the mean curve does. Only the mean curve actually gives correct probabilities according to the model. Far from abandoning the mean, it would be better to abandon all use of fractile hazard curves.
Now, one could argue that choosing a value taken from an 84% curve is a good thing
from a design point of view, as an arbitrary choice. But if one does this, one should
remove any reference to probabilities, because one is making what is effectively a
pseudo-deterministic decision as to what one will have as a design value.
The argument that the weights in a logic tree are confidences and not probabilities is a distinction in semantics; in practice, the weights are probabilities. If I give one zone model a weight of 0.6 and another 0.3, it may be that this means that I have twice as much faith in the first one. But it also means that my belief is that it is twice as likely to be right, i.e., twice as probable. This is, quite simply, how the hazard estimation process works.
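To spell out the arithmetic (in notation introduced here for illustration, not taken from the paper): write H_i(a) for the annual probability, on branch i, of exceeding ground motion a, and w_i for the weight of that branch. Treating the weights as probabilities makes the mean hazard exactly the law of total probability taken over the branches.

```latex
% Mean hazard as the law of total probability over logic-tree branches
% (notation assumed for illustration):
\[
  \bar{H}(a) \;=\; \sum_i w_i \, H_i(a), \qquad \sum_i w_i = 1 .
\]
% A weight ratio of w_1 / w_2 = 0.6 / 0.3 = 2 then says precisely that
% the first zone model is judged twice as probable as the second.
```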
One of the interesting things about PSHA is that there are two totally different routes from the starting model to the final hazard value. This is rather unusual; it is more often expected that if you take two different approaches to hazard estimation (for example, standard PSHA and extreme value analysis) you will get different results. However, there are two methods that are completely different and yet completely compatible: the PSHA methodology descended from Cornell (1968) and the Monte Carlo simulation method described by Musson (2000). The fact that these two entirely separate procedures come up with the same answer in response to the same input does rather suggest that both are doing something right, and that the results are meaningful.
One of the advantages of the Monte Carlo procedure is that, while the Cornell-descended method is largely a set of mathematical abstractions, the Monte Carlo method is utterly rooted in physical reality. The method sets up the seismic source model, realizes the seismicity that the model generates for, say, the next 50 years, performs this realization several thousand times over, and then examines the resulting simulations to see how often design levels of ground motion would actually be exceeded at the site. The total set of simulations is the seismicity that the model predicts, and the resulting ground motions give hazard values that are actually based on observation. If one had the divine ability to see all the alternative possible futures of the seismicity of the next 50 years, one would observe the actual hazard to the site in terms of ground motions happening more frequently or less frequently. The Monte Carlo hazard process is the next best thing to this vision: an observational method of hazard estimation.
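As an illustration only, here is a heavily simplified sketch of this style of calculation. The Poisson source, the crude magnitude distribution, the two weighted activity-rate hypotheses, and the `ground_motion_at_site` attenuation stand-in are all invented for the example; they are not the model of Musson (2000) or of any real site.

```python
import numpy as np

rng = np.random.default_rng(42)
YEARS, N_SIMS = 50, 10_000   # length of each simulated "future"; number of futures
DESIGN_PGA = 0.3             # assumed design ground-motion level (g)

# Two hypothetical activity-rate hypotheses with epistemic weights; each
# simulated future samples one in proportion to its weight, as a logic
# tree would weight its branches.
rates = np.array([0.15, 0.4])    # assumed mean earthquakes per year
weights = np.array([0.8, 0.2])   # assumed weights (sum to 1)

def ground_motion_at_site(magnitudes):
    # Placeholder attenuation: larger magnitude -> larger (noisy) PGA.
    noise = rng.normal(0.0, 0.5, magnitudes.size)
    return 0.01 * np.exp(magnitudes - 4.0 + noise)

exceeded = 0
for _ in range(N_SIMS):
    # Realize the seismicity of the next 50 years that the model generates.
    rate = rng.choice(rates, p=weights)
    n_events = rng.poisson(rate * YEARS)
    mags = 4.0 + rng.exponential(0.5, n_events)  # crude magnitude tail
    pga = ground_motion_at_site(mags)
    if pga.size and pga.max() >= DESIGN_PGA:     # any exceedance at the site?
        exceeded += 1

# Fraction of simulated futures in which the design level was exceeded:
# an "observational" estimate of the 50-year hazard at that level.
print(exceeded / N_SIMS)
```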
But what it gives you is the mean hazard. If you play the future over and over again
and observe the results, you can derive from this what is effectively the mean hazard
curve. You cannot obtain any of the fractile hazard values in this way. No simulation
from the model will give you these, because the fractiles relate only to the hazard that
would pertain in the case that all uncertainty were removed from the model—which is
unrealistic. Switching from mean hazard to some fractile value would have the effect of
breaking the link between analytical hazard estimation and observational hazard
estimation—which cannot be good.
The heart of the matter is perhaps really Abrahamson and Bommer’s second point: they do not like the fact that the mean hazard is very high at very low probabilities, citing the Yucca Mountain and PEGASOS projects as examples. These very high hazard results are arrived at in the following way, as Abrahamson and Bommer (2005) state: in the interests of inclusiveness, hazard analysts incorporate extreme hypotheses in their model with very low weights; at very low probabilities (10⁻⁷ per annum) these extremes tend to dominate; hence the high hazard results. They argue that selecting some fractile would be less sensitive to these extremes.

The situation reminds one of an old joke: (Patient) “Doctor, it hurts if I do this.” (Doctor) “Well, don’t do that, then.” There are two issues here that need to be confronted head on and not sidestepped.
The first issue is the tendency of hazard analysts to include all manner of hypotheses
in their models just because the hypothesis is published somewhere, and the analyst feels
a duty to “represent the opinion of the wider community.” I believe that analysts should
become more ruthless and be freer with assigning zero weights, instead of low weights,
to versions of the model in which they have no belief. They need to understand that
branches with very low weights may turn out to have a disproportionate effect at very
low probabilities, and they should design the model accordingly. I have argued elsewhere
(Musson 2004a, b) that more effort should be spent on proving that some model
branches are simply incompatible with reality and should be confidently discarded.
The second issue is the probability level itself. Early versions of deterministic hazard
assessment sought to estimate the worst possible ground motion at a site, on the basis
that a structure that will withstand the worst thing possible is inherently safe. The trouble
with this approach is that the worst thing for, say, a site in New England, might be a
Charleston-type event directly under the site, which is certainly possible, but is ex-
tremely improbable. Now, with design probability levels being pushed down to lower
values such as 10⁻⁷ or even 10⁻⁸ per annum, the analyst is essentially being asked, what
is the hazard value that is extremely improbable? One should not be surprised that the
answer turns out to be the worst case.
So the fact that the mean hazard at very low probabilities is dominated by the most
extreme branches in the hazard model is, in fact, nothing less than the truth. If you don’t
like the answer, don’t ask the question. If you set out to calculate the ground motion with
annual probability of 10⁻⁷, then the mean hazard, and only the mean hazard, is the
ground motion that actually has that probability, and this is inherently likely to be the
worst case or close to it. So one should consider carefully exactly what probability levels
are appropriate for design. The more one starts looking at extremely low probabilities,
the more one is beginning to approach worst-case determinism.
Switching to some predefined fractile tends to have the effect of changing the design probability to a less onerous value. As shown in Figure 1, if one chooses the 10⁻⁷ value on the 84-percentile curve, one is actually choosing a value that has a probability of occurrence roughly twice that of the design value of 10⁻⁷. The real 10⁻⁷ value on the hazard curve is about 1.0 g. The value that would be read from the 84-percentile curve is about 0.82 g, the probability of which is actually about 2 × 10⁻⁷, as shown by where the value occurs on the mean hazard curve.
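A small numerical sketch of this lookup (the power-law curves below are invented and tuned so that they reproduce the round numbers quoted above; they are not the curves behind Figure 1):

```python
import numpy as np

# Invented hazard curves: the mean curve gives 1.0 g at 1e-7 and about
# 2e-7 at 0.82 g; the 84-percentile curve gives 0.82 g at 1e-7.
pga = np.linspace(0.1, 1.5, 500)                 # ground motion (g)
slope = np.log(2) / np.log(1.0 / 0.82)
mean_curve = 1e-7 * pga ** (-slope)              # mean hazard curve
p84_curve = 1e-7 * (pga / 0.82) ** (-slope)      # 84-percentile curve

def motion_at(curve, target_prob):
    # Invert a monotonically decreasing hazard curve at a target annual
    # probability; np.interp needs ascending x, hence the flips.
    return np.interp(target_prob, curve[::-1], pga[::-1])

g_design = motion_at(mean_curve, 1e-7)           # ~1.0 g: true 1e-7 motion
g_84 = motion_at(p84_curve, 1e-7)                # ~0.82 g off the fractile
true_prob = np.interp(g_84, pga, mean_curve)     # ~2e-7 on the mean curve
print(g_design, g_84, true_prob)
```

Reading the fractile value back onto the mean curve makes the doubling of the exceedance probability directly visible.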
The replacement of mean hazard values by fractile values would have several detrimental effects. It would patch over two issues of practice (poor model design and unthinking acceptance of given design probability levels) instead of tackling them directly; and it would abdicate from true probabilistic procedure in favor of something more arbitrary. For any model that includes uncertainties, the correct answer to the question, What is the ground motion with probability X? is given by the mean hazard value only. The use of fractiles may show something of the degree of epistemic uncertainty in the model, but the curves themselves are not meaningful, and it is arguable that the role of portraying epistemic uncertainty is better served by the use of sensitivity studies, where the results can be tied directly to the assumptions that are made.
ACKNOWLEDGMENTS
This paper is published with the permission of the executive director of the British Geological Survey (NERC).

REFERENCES
Abrahamson, N. A., and Bommer, J. J., 2005. Probability and uncertainty in seismic hazard analysis, Earthquake Spectra 21 (2), 603–607.
Cornell, C. A., 1968. Engineering seismic risk analysis, Bull. Seismol. Soc. Am. 58, 1583–1606.
Musson, R. M. W., 2000. The use of Monte Carlo simulations for seismic hazard assessment in the UK, Annali di Geofisica 43, 1–9.
Musson, R. M. W., 2004a. Objective validation of seismic hazard source models, Proceedings of 13th World Conference on Earthquake Engineering, Vancouver, Paper No. 2492.
Musson, R. M. W., 2004b. Comment on “Communicating with uncertainty: A critical issue with probabilistic seismic hazard analysis,” EOS 85 (24), 235–236.
(Received 18 May 2005; accepted 21 May 2005)
