
What is philosophy of risk?

by

SVEN OVE HANSSON


Uppsala University

1. Introduction
THE WORD ‘RISK’ has been given many technical meanings, in the
majority of which it is constructed as a numerical variable. In most
studies of risk assessment, the procedure has been to multiply “the
probability of a risk with its severity, to call that the expectation
value, and to use this expectation value to compare risks.”1
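As a minimal sketch, the quoted definition can be written out as follows; the two risks and their numbers are invented purely for illustration and are not taken from any study:

```python
# Illustrative sketch of the expectation-value definition quoted above:
# the probability of a risk multiplied by its severity, used to compare risks.

def expectation_value(probability: float, severity: float) -> float:
    """Expected severity of a risk: probability times severity."""
    return probability * severity

# Two invented risks for comparison.
frequent_minor = expectation_value(0.1, 10)      # 1.0
rare_major = expectation_value(0.001, 1000)      # 1.0

# On this definition the two risks are ranked as equal, although they differ
# in respects (such as catastrophic potential) that the essay treats as relevant.
print(frequent_minor, rare_major)
```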
In less technical contexts, ‘risk‘ refers to situations in which it is
possible but not certain that some undesirable event will occur. In
this essay, ‘philosophy of risk’ refers to philosophical studies related
to risk in this wide, non-technical sense.
The rapid growth of risk-related research in recent years has only
to a small degree been reflected in philosophy. This is unfortunate
for two reasons. First, philosophical methodology can be useful in
the analysis of various policy-related risk issues. Secondly-and this
is the subject of the present essay-studies of risk can provide us
with new and fruitful perspectives on several basic philosophical
issues. What follows is an outline of some such issues in epistemol-
ogy, the philosophy of science, decision theory, and moral philoso-
phy. The emphasis is on problems rather than on solutions.

2. The reduction of epistemic uncertainty


In situations of risk we do not know what will happen, and in
particular we do not know how future developments will be af-
fected by the choice that we make among the options that are avail-
able to us. In the policy-relevant cases, our choice of a course of

1 Herman Bondi, “Risk in perspective”, pp. 8-17 in MG Cooper (ed.), Risk, 1985.
Quotation from p. 9. On some of the problems of this definition, see Sven Ove
Hansson, “Dimensions of Risk”, Risk Analysis, 9:107-112, 1989.

action is believed to have effects that we do not wish to ignore, but
about which we are not sufficiently informed. In this sense, situations
of risk are characterized by partial knowledge and partial control.
In decision theory, such lack of knowledge is divided into two
major categories, that are commonly labelled ‘risk’ and ‘uncertainty’.
In decision-making under risk, we know what the possible outcomes
are and what are their probabilities.2 Perhaps a more adequate term
for this would be “decision-making under known probabilities”. In
decision-making under uncertainty, probabilities are either not known
at all or only known with insufficient precision.3
Although the distinction between risk and (epistemic) uncertainty
is both clear and practically useful,4 it is strongly dependent on
background beliefs. Consider, for instance, a decision for which
tomorrow’s weather in Stockholm is crucial. The meteorological pre-
diction says that, given what we know at this moment, there is a
50% probability that it will rain in Stockholm tomorrow. If you are
a meteorologist who fully believes in the underlying theories and in
the computations made for this prognosis, then this is a clear case
of decision-making under risk (known probabilities). But if you are
less confident in either of these respects, then this is a case of decision-
making under (epistemic) uncertainty.
Similarly, if you are absolutely certain that current estimates of
the effects of low-dose radiation are accurate, then decision-making
referring to such exposure may be decision-making under risk. If
you are less than fully convinced, then this too is a case of decision-
making under uncertainty.
If, in these examples, you are not an expert in the relevant fields,
then your epistemic uncertainty has a component of “uncertainty of

2 The special case when all the probabilities are either 0 or 1 coincides with deci-
sion-making under certainty.
3 The case when they are not known at all is also called “decision-making under
ignorance”.
4 Nils-Eric Sahlin and Johannes Persson, “Epistemic Risk: The significance of
knowing what one does not know”, in Berndt Brehmer and Nils-Eric Sahlin (eds.),
Future Risks and Risk Management, Kluwer 1994.
5 Sven Ove Hansson, “Decision Making Under Great Uncertainty”, Philosophy of
the Social Sciences 26:369-386, 1994, esp. pp. 380-383. Sven Ove Hansson,
“Entscheidungsfindung bei Uneinigkeit der Experten”, pp. 87-96 in Horst Zilleßen,
Peter C Dienel and Wendelin Strubelt (eds.) Die Modernisierung der Demokratie.
Internationale Ansätze, Westdeutscher Verlag 1993.

reliance”-dependence on the knowledge of others.5 Experts are
known to have made mistakes, and a rational decision-maker should
take into account the possibility that experts may turn out wrong
again.
The only clear-cut cases of “risk” (known probabilities) seem to
be idealized textbook cases that refer to devices such as dice or
coins that are supposed to be known with certainty to be fair. More
typical real-life cases are characterized by (epistemic) uncertainty
that can, with varying degrees of inaccuracy, be described in a sim-
plified manner as cases of risk (known probabilities). Our actual
epistemic situation can be illustrated as in Diagram 1.

DIAGRAM 1

What should our reaction be to all this epistemic uncertainty? One
possible answer, and perhaps the most immediately attractive one,
is that we should always take it into account, and that all decisions
should be treated as decisions under epistemic uncertainty. However,
attractive though this may seem, it is not in practice feasible, since
human cognitive powers are insufficient to handle such a mass of
unsettled issues. In order to grasp complex situations, we reduce the
prevailing epistemic uncertainty to probabilities (“There is a 90%
chance that it will rain tomorrow”) or even to full beliefs (“It will
rain tomorrow”).6 This process of uncertainty-reduction, or ‘fixation

6 The word ‘reduction’ is used metaphorically. I do not wish to imply that all
probability assignments or full beliefs have been preceded by more uncertainty-
laden belief states, only that they can be seen as reductions in relation to an
idealized belief state in which uncertainty is always fully recognized.

of belief’,7 helps us to achieve a cognitively manageable representation
of the world, and thus increases our competence and efficiency as
decision-makers.
Another answer to this question is provided by Bayesian decision
theory. According to the Bayesian ideal of rationality, all statements
about the world should have a definite probability value assigned to
them. Non-logical propositions should never be fully believed, but
only assigned high probabilities. Hence, as shown in Diagram 2,
epistemic uncertainty is always reduced to probability, but never to
full belief. The resulting belief system is a complex web of intercon-
nected probability statements.8

DIAGRAM 2


In practice, the degree of uncertainty-reduction provided by Bayesian-
ism is insufficient to achieve a manageable belief system. Our cognitive
limitations are so severe that massive reductions to full beliefs
(certainty) are inevitable in order to make us capable of reaching
conclusions and making decisions.9 As one example of this, since all

7 Charles Peirce, “The fixation of belief”, pp. 223-247 in Collected Papers of
Charles Sanders Peirce, vol. 5 (C. Hartshorne and P. Weiss, eds.), Harvard Uni-
versity Press 1934.
8 Richard C Jeffrey, “Valuation and Acceptance of Scientific Hypotheses”, Phi-
losophy of Science 23:237-249, 1956.
9 This is one of the reasons why belief revision models that represent belief states
as sets of (sentences representing full) beliefs are an important complement to
probabilistic models. Some features of doxastic behaviour, notably features related
to logic, are more realistically represented in the former type of models.

measurement practices are theory-laden, no reasonably simple account
of measurement would be available in a Bayesian approach.10 On
the other hand, Bayesianism is also unable to account for the fact
that we also live with some unreduced epistemic uncertainties.
For practical purposes we need an account of the reduction of
uncertainty that allows for substantial reductions all the way down
to certainty (full beliefs), and also for the non-reduction of some
epistemic uncertainties, as in Diagram 3. Furthermore, reductions
should be allowed to be temporary, so that we can revert from
certainty to probability or even to uncertainty, when there are reasons
to do this.

DIAGRAM 3

Much remains to be done, even on the basic conceptual level, be-
fore credible models of the reduction process and its reversal can be
constructed.

3. Misfired reductions
Although the reduction of epistemic uncertainty is a sine qua non
for efficient decision-making, it may also have unwanted effects on
decisions. This can be seen from the following example, in which
expected utility theory is applied to a decision-problem related to

10 Andrew McLaughlin, “Science, Reason and Value”, Theory and Decision 1:121-
137, 1970.

global climate change. Suppose that, with a certain amount of green-
house gas emissions, there are three scenarios. For simplicity of
exposition, they are presented in the following table in probabilistic
terms (i.e., uncertainty has been reduced to probability but not to
certainty):

Case    Probability    Utility
A       .98            -100
B       .01            0
C       .01            -100000

Here, A is the most plausible case. B is the “best case” in which the
greenhouse effect is completely outbalanced by other mechanisms,
and C is the “worst case” in which a runaway greenhouse effect
seriously threatens human life on this planet.11
If the issue is entrusted to a scientific committee, then the most
likely outcome is that they will settle for option A. In other words,
they will perform a full reduction to certainty (full belief). If the de-
cision-makers base their decision on this information, and do not take
uncertainty of reliance into account, then with the utility assignment
given in the table, the expected utility will be estimated at -100. If the
decision were instead based on the probabilistic information, then
the expected utility would be estimated at about -1100. According to
expected utility theory, the latter is the more correct estimate.12
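A small worked version of this calculation, using only the figures given in the table above, may make the contrast explicit:

```python
# Probabilities and utilities from the table above.
scenarios = {
    "A": (0.98, -100),       # most plausible case
    "B": (0.01, 0),          # best case
    "C": (0.01, -100_000),   # worst case
}

# Full reduction to certainty: only scenario A is retained.
reduced_estimate = scenarios["A"][1]                            # -100

# Probabilistic estimate: expected utility over all three scenarios.
expected_utility = sum(p * u for p, u in scenarios.values())    # -1098, i.e. about -1100

print(reduced_estimate, expected_utility)
```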
As can be seen from this example, in order to avoid untoward
effects of uncertainty-reduction, we sometimes need to revert from
full beliefs to probabilities or even to epistemic uncertainty. The
purpose of doing so is to satisfy, as far as possible, the following
criterion for an epistemically well-founded decision:

11 Sven Ove Hansson and Mikael Johannesson, “Decision-Theoretical Approaches
to Global Climate Change”, pp. 153-178 in Gunnar Fermann (ed.) The Politics of
Climate Change, Scandinavian University Press, 1997.
12 Probably, virtually all calculations of expected utility undertaken by actual agents
(in contradistinction to idealized Bayesian subjects) rely on presuppositions in the
underlying belief system similar to the exclusion in this example of alternatives B
and C. Therefore, for a human agent to approximate as far as possible the decision-
behaviour of a perfect expected utility maximizer, she may very well be required to
follow some other rule than that of maximizing expected utility.

Insensitivity to reintroduction
In the underlying reduction of epistemic uncertainty, noth-
ing has been excluded that, if reintroduced, would have sig-
nificantly changed the outcome of the decision.

On the face of it, our normal cognitive processes seem to comply
fairly well with this criterion. We are more reluctant to ignore remote
or improbable alternatives when the stakes are high. Suppose that
when searching for mislaid ammunition, I open and carefully check
a revolver, concluding that it is empty. I may then say that I know
that the revolver is unloaded. However, if somebody then points
the revolver at my head asking: “May I then pull the trigger?”, it
would not be unreasonable or inconsistent of me to say “No”, and
to use the language of probability or uncertainty when explaining
why.
Unfortunately, the process of reintroduction is not as unproblem-
atic as it may seem from examples such as this. An important theo-
retical problem (that I do not know to be practically significant) is
that there is no guarantee that the process of reintroduction will
terminate. How do we know that not every step of reintroduction
will give rise to a new such step, such that the decision-outcome
oscillates interminably?
Problems of more immediate practical concern are connected with
the fact that important parts of our belief systems are governed by a
thought system, namely science, that programmatically ignores con-
siderations of practical value.

4. Science and misfired reductions

Contrary to everyday reasoning, the scientific process of uncertainty-
reduction is bound by rules that (at least ideally) restrict the grounds
for accepting or rejecting a proposition to considerations unrelated
to practical consequences. There are good reasons for this restric-
tion. As decision-makers and cognitive agents with limited capacity,
we could hardly do without an intersubjective, continually updated
corpus of beliefs that can for most purposes be taken to be the
outcome of reasonable reductions of uncertainty.
There are also good reasons for the scientific process to be based

on fairly strict standards of proof. When determining whether or
not a scientific hypothesis should be accepted for the time being, the
onus of proof falls squarely on its adherents. Similarly, those who
claim the existence of an as yet unproven phenomenon have the
burden of proof. These proof standards are essential for both intra-
and extrascientific reasons. They prevent scientific progress from
being blocked by the pursuit of all sorts of blind alleys. They also
ensure that the scientific corpus is reliable enough to be useful for
(most) extra-scientific applications.
Nevertheless, the proof standards of science are apt to cause prob-
lems whenever science is applied to practical problems that require
standards of proof other than those of science. The application of
toxicology to regulatory decisions is a case in point. It would not
seem rational-let alone morally defensible-for a decision-maker
to ignore all preliminary indications of toxicity that do not amount
to full scientific proof. It has indeed often been claimed that admin-
istrative toxicologists should apply a burden of proof that is a re-
versal of that applied in science: a toxic effect should be considered
to exist unless it can be shown not to exist. With such a principle,
administrators would err on the side of safety, and much human
suffering would be avoided.13
Unfortunately, such a reversal is not in practice feasible, and this
for two reasons.14 First, issues of toxicity are asymmetrical in the
sense that they can be much more easily settled in one direction
than in the other. It can often be proved beyond reasonable doubt
that a substance has a particular adverse effect. On the other hand,
it can seldom be proved beyond reasonable doubt that a substance
does not have a particular adverse effect, and in practice never that
it has no adverse effect at all. As a rough rule of thumb, epidemio-
logical studies can only detect reliably excess relative risks that are
about 10% or greater. For the more common types of cancer, such
as leukemia and lung cancer, lifetime risks are between 1% and
10%. Therefore, even in the most sensitive studies, lifetime risks

13 Richard Rudner, “The scientist qua scientist makes value judgments”, Philoso-
phy of Science 20:1-6, 1953. Steven D Jellinek, “On the inevitability of being
wrong”, Annals of the New York Academy of Science 363:43-47, 1981.
14 Sven Ove Hansson, “Can we reverse the burden of proof?”, Toxicology Letters,
90:223-228, 1997.

smaller than 10⁻² or 10⁻³ cannot be observed.15 It follows that impor-
tant health effects go undetected in such studies. The same statistical
problems beset animal experiments.16 In order to detect an increase
of the mutation rate by 1/2 per cent, about 8,000,000,000 mice are
required.17
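The orders of magnitude involved can be illustrated with a standard sample-size calculation for comparing two proportions. The baseline lifetime risk of 5%, the relative excesses, and the significance and power levels used below are assumed purely for illustration and are not taken from the studies cited in the text:

```python
from math import ceil

def subjects_per_group(baseline: float, relative_excess: float,
                       z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate subjects per group needed to detect an excess risk in a
    two-sample comparison of proportions (5% significance, 80% power)."""
    exposed = baseline * (1 + relative_excess)
    variance = baseline * (1 - baseline) + exposed * (1 - exposed)
    return ceil((z_alpha + z_beta) ** 2 * variance / (exposed - baseline) ** 2)

# A 10% relative excess on a 5% baseline already requires tens of thousands
# of subjects per group; a 1% relative excess requires millions.
print(subjects_per_group(0.05, 0.10))   # roughly 31,000 per group
print(subjects_per_group(0.05, 0.01))   # roughly 3,000,000 per group
```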
From the viewpoint of the philosopher of science, this asymmetry
of toxicological knowledge gives rise to an interesting problem for
falsificationism. According to that doctrine, scientific theories and
hypotheses can be empirically falsified but not verified. However,
when a toxicologist considers an hypothesis such as ‘nitromethane
is carcinogenic’, the very opposite situation obtains. This hypothesis
can be verified (e.g., by convincing epidemiological studies), but it
can never be falsified, since no experiment can exclude a weak carci-
nogenic effect. It seems as if falsificationism can only be saved at
the price of disidentifying the ‘hypothesis’ referred to by the phi-
losopher of science and the hypothesis that a working scientist sets
out to test in her laboratory.
The second reason why a “reversal” of the scientific burden of
proof is not feasible is that it would not only affect the interpreta-
tion of individual results in toxicology, but also our views on more
basic biological phenomena. If our main concern is not to miss any
possible mechanism for toxicity, then we must pay serious attention
to possible metabolic pathways for which there is insufficient proof.
Such considerations in turn have intricate connections with various
issues in biochemistry, and ultimately we are driven to reappraise
an immense number of empirical conclusions, hypotheses, and theo-
ries. A massive reversal of the uncertainty-reduction process would

15 Harri Vainio and Lorenzo Tomatis, “Exposure to carcinogens: scientific and
regulatory aspects”, Annals of the American Conference of Governmental Industrial
Hygienists, 12:135-143, 1985.
16 To some extent, this effect can be compensated for by extrapolation from high-
dose experiments. But due to inter-species differences, the absence of effects in
animals is no proof of safety in humans. (Neither is the presence of an effect in
animals a proof of an effect in humans, but effects demonstrated in relevant and
well-conducted animal experiments provide such a strong indication of human
toxicity that to neglect them is tantamount to not being on the safe side.) Some
effects on humans, such as subtle mental effects, are not easily accessible to animal
experimentation.
17 Alvin M Weinberg, “Science and Trans-Science”, Minerva 10:209-222, 1972,
p. 210.

be needed. Due to our cognitive limitations, this cannot in practice
be done. Each of us has access only to small parts of the entire
corpus of chemical and biological knowledge on which modern toxi-
cology is based, and this corpus has been shaped by innumerable
reductions of uncertainty that have accorded with ordinary scientific
standards of proof.
This conclusion can be generalized. It does not seem possible to
realign the corpus of scientific belief to make it accord with stand-
ards of evidence other than those that have guided its development.
Only small and imperfect adjustments can be made. How they should
best be constructed is an issue that remains to be investigated, both
from theoretical epistemological and more practical methodological
points of view.18

5. Unknown possibilities

We often have to make decisions under conditions of unknown pos-
sibilities, i.e., without knowing what the possible consequences are.
In probabilistic language, this is the case when there is some conse-
quence for which we do not know whether its probability, given
some option, is zero or non-zero. However, this probabilistic de-
scription does not capture the gist of the matter. The characteristic
feature of these cases is that we do not have a complete list of the
consequences that should be taken into account.
The constructors of the first nuclear bomb worried that the bomb
might trigger an uncontrolled reaction that would propagate through-
out the whole atmosphere. Theoretical calculations convinced them
that this possibility could be neglected.19 The group might equally
well have worried about the possibility that the bomb could have
some other, not thought-of, catastrophic consequence in addition to
its (most certainly catastrophic) intended effect. The calculations
could not have laid such apprehensions to rest (and arguably no

18 One example of the latter is the development of statistical methods that estimate
the size of the adverse health effects that may have gone undetected in spite of the
studies that have been performed. See Sven Ove Hansson, “The Detection Level”,
Regulatory Toxicology and Pharmacology, 22:103-109, 1995.
19 Robert Oppenheimer, Letters and Recollections (Alice Kimball Smith and Charles
Weiner, eds.), Harvard University Press 1980, p. 227.

other scientific argument could have done so either). The decision
would have been much more difficult if unknown possibilities had
been taken into account.
The problem of unknown possibilities is difficult to come to grips
with. On one hand, there are cases in which it would seem unduly
risky to entirely dismiss unknown possibilities. Suppose, for instance,
that someone proposes the introduction of a genetically altered spe-
cies of earthworm that will displace the common earthworm and
that will aerate the soil more efficiently. It would not be unreason-
able to take into account the risk that this may have unforeseen
negative consequences. For the sake of argument we may assume
that all concrete worries can be neutralized. The new species can be
shown not to induce more soil erosion, not to be more susceptible
to diseases, etc. Still, it would not be irrational to say: “Yes, but
there may be other negative effects that we have not been able to
think of. Therefore, the new species should not be introduced.” Simi-
larly, if someone proposed to eject a chemical substance into the
stratosphere for some good purpose or other, it would not be irra-
tional to oppose this proposal solely on the ground that it may have
unforeseeable negative consequences, and this even if all specified
worries can be neutralized.
On the other hand, there are cases in which it seems to be sensi-
ble enough to ignore unknown possibilities. An illustrative example
is offered by the debate on the polywater hypothesis, according to
which water could exist in an as yet unknown polymeric form. In
1969, Nature printed a letter that warned against producing polywater.
The substance might “grow at the expense of normal water under
any conditions found in the environment”, thus replacing all natural
water on earth and destroying all life on this planet.20 The author
might equally well have mentioned other possible disasters, or men-
tioned the problem of unknown possibilities. Soon afterwards, it
was shown that polywater is a non-existent entity. If the warning
had been heeded, then no attempts would have been made to replicate
the polywater experiments, and we might still not have known that
polywater does not exist.
In a sense, any decision may have catastrophic unforeseen conse-

20 FJ Donahoe, “Anomalous Water”, Nature 224:198, 1969.



quences. If nothing else, in the last resort, any action whatsoever
might invoke the wrath of evil spirits (which might exist), thus drawing
misfortune upon all of us. If such possibilities are taken into ac-
count, then (selected) appeals to unknown possibilities may stop
investigations, foster superstition and hence depreciate our general
competence as decision-makers.
The choice when to take unknown possibilities into account and
when to ignore them is one of the more difficult issues in uncer-
tainty-reduction. Factors that may be relevant include novelty, the
absence of spatial and temporal limitations, and the presence of
complex systems in balance.21 The investigation of this subject has
only just begun. It seems to have interesting connections with the
problems of epistemological skepticism.

6. Three perspectives on uncertainty


Situations of risk or epistemic uncertainty may be seen from three
perspectives. Taking risks at war as a paradigm example, they may
be called the perspectives of the soldier, the general, and the Norn.
A certain number of persons will be killed in a forthcoming battle.
For the soldier, governed by his desire to survive, this is an all-or-
nothing affair. Either he is killed, or he is not. If he receives reliable
information that the probability of being killed is 10 percent, this
may relieve or aggravate his apprehensions, but it cannot make his
possible death in battle less of an uncertain event. For the general,
who is concerned about possible losses, such calculations are much
more efficiently uncertainty-reducing. If he knows that the prob-
ability for individual soldiers to be killed is 10 per cent, then he also
knows that the corresponding proportion of the battalion will be
lost. For him, this is what matters. It makes no big difference to
him if he sends off 2000 soldiers to a mission that will kill 10% of
them or 200 soldiers to a mission that will kill all of them. His
objectives refer to an aggregated level, on which much of the uncer-
tainty from the individual level has been cancelled out.
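The contrast between the two perspectives can be made vivid with a little arithmetic on the 10 per cent figure used above; the battalion sizes are chosen only for illustration:

```python
from math import sqrt

p = 0.10  # probability that an individual soldier is killed

# For the soldier the outcome remains all-or-nothing; for the general the
# loss fraction concentrates around 10% as the number of soldiers grows.
for n in (10, 200, 2000, 20000):
    sd_of_fraction = sqrt(p * (1 - p) / n)   # std. dev. of the loss fraction
    print(f"{n:>6} soldiers: expected losses {n * p:>7.0f}, "
          f"loss fraction 10% +/- {sd_of_fraction:.1%}")
```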
For another example, suppose that the expected number of deaths

21 For a discussion of some such factors, see Sven Ove Hansson, “Decision Mak-
ing Under Great Uncertainty”, Philosophy of the Social Sciences 26:369-386, 1994.

in traffic accidents in a region is 300 per year if safety belts are
compulsory, and 400 per year if they are optional. Then, if these
calculations are correct, about 100 more persons per year will be
killed in the latter case than in the former. We know, when choos-
ing one of these options, whether it will lead to fewer or more deaths
than the other option. If we aim at reducing the number of traffic
casualties, then this can, due to the law of large numbers, safely be
achieved by making safety belts compulsory. The large number of
road accidents levels out random effects in the long run. From the
general’s (or rather the legislator’s) perspective, there is practically
speaking not much uncertainty left. For the individual driver or
passenger (with or without the belt) the uncertainty is still there.22
The levelling-out mechanism does not work for case-by-case deci-
sions on unique or very rare events. Suppose that we have a choice
between a probability of .001 of an event that will kill 50 persons
and the probability of .1 of an event that will kill one person. Here,
random effects will not be levelled out, not even from the general’s
perspective, as in the traffic belt case. Nevertheless, a decision in
this case to prefer the first of the two options (with the lowest number
of expected deaths) may very well be ultimately based on a level-
ling-out argument, namely if the decision is included in a sufficiently
large group of decisions for which a metadecision has been made to
maximize expected utility. The larger the group of decisions that are
covered by such a policy, the more efficient is the levelling-out ef-
fect. In other words, the larger the group of decisions, the larger the
catastrophic consequences that can be levelled out.23
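A rough simulation of the example may clarify the point; the assumption that the same choice is repeated independently many times is introduced purely for illustration:

```python
import random

random.seed(1)

def total_deaths(n_decisions: int, p: float, deaths_per_event: int) -> int:
    """Total deaths when the same risky option is chosen n_decisions times."""
    return sum(deaths_per_event for _ in range(n_decisions) if random.random() < p)

# Option 1: probability .001 of killing 50 persons (expected deaths 0.05).
# Option 2: probability .1 of killing one person (expected deaths 0.1).
for n in (1, 100, 10_000):
    print(f"{n:>6} decisions: "
          f"option 1 -> {total_deaths(n, 0.001, 50)} deaths, "
          f"option 2 -> {total_deaths(n, 0.1, 1)} deaths")

# For a single decision the outcomes do not level out; over a sufficiently
# large group of decisions the totals approach the expectations, which is
# the metadecision argument sketched in the paragraph above.
```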
However, there are limits to this effect. Extreme outcomes, such
as a nuclear war or a major ecological threat to human life, cannot
be levelled out even in the hypothetical limiting case in which all
human decision-making aims at maximizing expected utility. No
human “general” can command such a large “army”. In this case,
the levelling-out effect belongs to the reasoning of the Norn who
sees many worlds come and go.
Risk analysts typically assume the perspective of the general rather

22 Insurance may be seen as a partial transference of uncertainty from the soldier’s
to the general’s perspective.
23 Sven Ove Hansson, “The false promises of risk analysis”, Ratio 6:16-26, 1993.

than that of the soldier, and some of them even seem to have adopted
the perspective of the Norn.24 This practice has obvious advantages.
Policy decisions should be impartial, and one person’s sufferings
should be given the same weight as those of anyone else. The general’s
perspective achieves exactly that.
On the other hand, the total dominance of this perspective in
public discourse is not without its disadvantages. Stuart Hampshire
has rightly warned that the habits of mind engendered by moral
decision-making based on impersonal computations may lead to “a
coarseness and grossness of moral feeling, a blunting of sensibility,
and a suppression of individual discrimination and gentleness.”25
Many risk-related issues are subject not only to decisions on the
aggregate level, for which the general’s perspective may be adequate,
but also to individual decision-making and to decision-making in
small groups for which it may in some cases be less adequate. There
may also be connections between the levels that need to be clarified.
For these reasons, what has been called here the soldier’s perspec-
tive needs a voice in public discourse on risk. At least in part it may
be the task of moral philosophy to give it a voice.

7. The causal dilution problem


Throughout the history of moral philosophy, moral theorizing has
for the most part referred to a deterministic world in which the
morally relevant properties of human actions are both well-deter-
mined and knowable. Consequently, mainstream ethical (or meta-
ethical) theories cannot be effectively applied to problems involving
risk and uncertainty, unless they are generalized to a non-determin-
istic setting. The problem of how to perform this generalization is
perhaps best expressed with reference to morally forbidden acts:

24 Perhaps the best example of this is the Pentagon’s use of secret utility assign-
ments to accidental nuclear strike and to failure to respond to a nuclear attack, as
a basis for the construction of command and control devices. ME Paté-Cornell
and JE Neu, “Warning Systems and Defense Policy: A Reliability Model for the
Command and Control of U.S. Nuclear Forces”, Risk Analysis 5:121-138, 1985.
25 Stuart Hampshire, Morality and Pessimism, the Leslie Stephen Lecture 1972,
Cambridge University Press 1972, p. 9.

The causal dilution problem:


Given that a moral theory T forbids an agent A to perform
an action X because it has properties P, under what condi-
tions does (a generalized version of) T forbid A to perform
an action Y because it possibly but not certainly has the
properties P?

If T is a consequentialist theory, then the causal dilution problem
may be further specified:

The causal dilution problem (consequentialist version):


Given that a moral theory T forbids an agent A to perform
an action X with the consequences C, under what condi-
tions does (a generalized version of) T forbid A to perform
an action Y that possibly but not certainly has the conse-
quences C?

For utilitarian moral theories, one possible answer is provided by
expected utility theory:

Whenever T would forbid A to perform an action Z with
(certain) consequences that have the same utility as the ex-
pected utility of Y.

This extension of utilitarianism is both operative and convenient,
and it has been almost universally accepted in modern literature on
risk analysis and risk management. However, it does not follow
from utilitarian theory that a utilitarian must choose this solution
to the causal dilution problem.
As can be seen from the examples given in Section 6, the level-
ling-out mechanism provides strong support for expected utility
maximization, provided that the following two criteria are satisfied:

(EU1) Utility is all that matters.


(EU2) The choice of a decision-rule bears upon a group of deci-
sions that is sufficiently large for the levelling-out mecha-
nism to take effect.

A utilitarian will assume that (EU1) is correct, but this is not suffi-
cient to motivate expected utility maximization in cases when (EU2)
is violated. In such cases, nothing prevents a utilitarian from choos-
ing decision-rules that deviate from the maximization of expected
utility. In particular, utilitarianism is compatible with more cau-
tious or risk-aversive decision rules, that give priority to the avoid-
ance of catastrophic outcomes.
A non-utilitarian will reject (EU1), and not without good intui-
tive reasons. The morally relevant aspects of situations of risk and
uncertainty go far beyond the impersonal, free-floating sets of con-
sequences that utilitarianism is contented with. Risks are taken, run,
or imposed.26 It makes a difference if it is my own life or that of
somebody else that I risk in order to earn a fortune for myself. A
moral analysis of risk that includes considerations of agency and
responsibility will be an analysis more in terms of the verb (to) ‘risk’
than of the noun (a) ‘risk’.27
Major policy debates on risks have in part been clashes between the
“noun” and the “verb” approach to risk. Proponents of nuclear energy
emphasize how small the risks are, whereas opponents question the
very act of risking improbable but potentially calamitous accidents.
For an account of the ethics of risking, it would seem natural to
abandon utilitarianism, and turn instead to deontological or rights-
based theories. Unfortunately, these theories have their own causal
dilution problems that do not seem to be easier to solve. The rel-
evant question, with respect to rights-based theories, was asked by
Robert Nozick: “Imposing how slight a probability of a harm that
violates someone’s rights also violates his rights?”28 Your right not
to be killed by me implies that I am forbidden to perform certain
acts that involve a risk of killing you, but it does not prohibit all
such acts. (I am allowed to drive a car in the town where you live,
although this increases the risk of your being killed by me.) Perhaps the

26 Judith Thomson, “Imposing Risk”, pp. 124-140 in Mary Gibson (ed.) To Breathe
Freely, Rowman & Allanheld 1985.
27 The notion of risking is in need of clarification. In order to risk something, must
I increase its probability, or causally contribute to it? Can I be said to risk an
outcome that I have no means of knowing that I contribute to? The discussion of
these definitional issues will have to be deferred to another occasion.
28 Robert Nozick, Anarchy, State, and Utopia, Basic Books 1974, p. 74. Cf. Dennis
McKerlie, “Rights and Risk”, Canadian Journal of Philosophy 16:239-251, 1986.

most obvious solution is to require that each right be associated
with a probability limit, below which risking is allowed and above
which it is prohibited. However, as Nozick observed, such a solution
is not credible since probability limits “cannot be utilized by a tradi-
tion which holds that stealing a penny or a pin or anything from
someone violates his rights. That tradition does not select a thresh-
old measure of harm as a lower limit...”29 Furthermore, no rights-
based method for the determination of such probability limits seems
to be available, so that they would have to be external to the rights-
based theory. No other credible solution to the causal dilution prob-
lem for rights-based theories seems to be available.
For deontological theories, the picture is very much the same:
The problems are analogous, and no solution-in particular no in-
ternal solution-seems to be available.
Contract theories may perhaps appear somewhat more promis-
ing. The criterion that they offer for the deterministic case, namely
consent, can also be applied to risky options. Unfortunately, this
solution is far from unproblematic. Consent, as conceived in con-
tract theories, is either actual or hypothetical. Actual consent does
not seem to be a realistic criterion in a complex society in which
everyone performs actions with marginal but additive effects on many
people’s lives. According to the criterion of actual consent, you have
a veto against me or anyone else who wants to drive a car in the
town where you live. Similarly, I have a veto against your use of
coal to heat your house, since the emissions contribute to health
risks that affect me. In this way we can all block each other, creat-
ing a society of stalemates. When all options in a decision are asso-
ciated with risk, and all parties claim their rights to keep clear of
the risks that others want to impose on them, the criterion of actual
consent does not seem to be of much help.
We are left then with hypothetical consent. However, as the de-
bate following Rawls’s Theory of Justice has shown, there is no
single decision-rule for risk and uncertainty that all participants in a
hypothetical initial situation can be supposed to adhere to.30 It remains
to show-if this can at all be done-that a viable consensus on

29 Nozick, op. cit., p. 75.
30 See for instance RM Hare, “Rawls’s Theory of Justice”, American Philosophical
Quarterly 23:144-155, 241-252, 1973.

risk-impositions can be reached among participants who apply
different decision-rules in situations of risk and uncertainty. (If a
unanimous decision will be reached due to the fact that everybody
applies the same decision-rule, then the problem has not been solved
primarily by contract theory but by the underlying theory for indi-
vidual decision-making.)
Much more could be said about the causal dilution problem, but
to make a long story short, no solution to it seems to be available
for any moral theory. This may be seen as an indication of a deeper
problem. The moral problems associated with risk and uncertainty
may very well require a more fundamental renewal of moral think-
ing than mere additions to existing theories.
The boundary between moral philosophy and decision theory may
then also have to be reconsidered. Traditionally, moral philosophy
provides answers to what should be done in deterministic situations.
Decision theory takes value assignments for deterministic cases for
given, and derives from them answers to how one should rationally
behave in an uncertain and indeterministic world. It is not incon-
ceivable that the boundary between these two disciplines will have
to be broken up in order to provide a reasonable account of moral
problems related to uncertainty and risk.31

31 I would like to thank the participants of a seminar at the Philosophy Depart-
ment of Göteborg University, and in particular Helge Malmgren, Nils-Eric Sahlin
and Torbjörn Tännsjö, for valuable comments on an earlier version of this paper.
