
Anonymous Marking code: Z0966332

Word count (excluding bibliography): 2998 words

What is the Propensity Theory of Chance? Is it plausible?
Language, Logic and Reality: 2nd Summative Essay

Introduction
In this essay I will present two distinct versions of the Propensity theory of chance and objections
against them, while assessing their plausibility. The essay is divided into four sections. In the first one
I analyse the philosophical meaning of ‘chance’ and the domain of propositions it includes. Then, I
proceed by presenting two different forms of the propensity theory, the long-run and the single-case
versions. In the third section I defend the two different versions of the theory against an objection for
each and finally I conclude that the Long-Run Propensity theory of chance is the more plausible of
the two as well as being preferable to the frequency theory of chance.

Setting the domain of the theory


To properly define chance we have to consider the term in the context of the wider, umbrella term of probability. Probability is the degree to which a proposition is likely to be true, and it can be divided into an epistemic and an objective branch. The epistemic branch concerns the degree of confidence we place in a particular proposition, even if in principle there is no uncertainty about the truth of the proposition itself; this will be referred to as credence. Chance is the second category of probability, objective probability. Here the likelihood of particular events occurring does not depend on the observer but is intrinsic to the events themselves: there is an objective probability associated with whether they will happen.

In order to properly assess a Propensity Theory of Chance, we have to clearly set the domain of
objective probability. I take the approach of Gillies (2000b, p. 187) and restrict chance to the Natural
Sciences, thereby excluding fields which concern human behaviour. The basis for evaluating any objective probability is the ability to conduct independent trials, and in Social Science experiments it is impossible to recreate the same conditions so as to perform independent trials. As Soros argues (1994, p. 6), the techniques for calculating objective probabilities, as applied in the natural sciences, do not work when the participants in a trial are able to form beliefs about their situation. At that point a subjective evaluation is the only option1.

There is also the question of whether chance exists at all in the external world. The classical account of probability, provided by Laplace and popular until the early twentieth century, holds that probability in the physical world exists only due to human ignorance of all the factors which contribute to the occurrence of a particular event. Poincare (1952, p. 74) argues that even if in principle there are cases where objective probability exists in nature, in practice many more events are attributed to chance simply because it would be impossible for humans, with our limited mental capacity, to determine all the causes. However, as long as it is theoretically possible for the uncertainty to be eliminated, the probability cannot be considered chance, because technologies of the future could be able to predict the outcome. The discovery of radioactive decay and the emergence of quantum mechanics showed that uncertainty could be a fundamental component of reality and made chance a relevant concept in physics, while in Biology the discovery of genes created more space for probabilistic interpretations. The propensity theory of chance will therefore be evaluated in this context.

Two theories of Propensity


In this section I will examine different forms of the propensity theory of chance (PTC) and analyse their crucial differences, as well as provide context for their emergence. The two broad categories of the PTC are the long-run and the single-case versions.

There are two broad, metaphysically distinct versions of the propensity theory of chance. According to the first, the so-called long-run version (LPTC) (Gillies, 2000a, p. 137), a propensity is the property of a particular experimental set-up to produce a certain frequency distribution of outcomes. The distinction between the propensity and the frequency is crucial here. An analogy with measuring a quantity such as mass illustrates the difference: when measuring the mass of an object with an instrument, the measurement corresponds to the frequency in the case of probability, while the property of mass itself corresponds to the propensity. The frequency distribution is the projection of the propensity of the experiment toward a certain outcome. The propensity lies in the repeatable conditions of the trials, not their results, and can only be identified after a sufficient number of repeated trials. The advantage of this interpretation is that it gives a clear method of identifying the propensities present, by working backwards from the frequency distribution, while retaining the advantage of removing the heterogeneous category of chance trials from the
ontology of statistical science (Mellor, 1971, p. 70). However, it suffers from the same weakness as the frequency theory of chance: how to account for single-case trials. The way to deal with that objection will be considered in the next section.

1
It should be noted that this argument appears in Gillies (2000), but it is worth restating in this essay.
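The long-run picture can be illustrated with a small simulation, under the assumption of a fixed, hypothetical bias standing in for the propensity of the set-up; this is an illustrative sketch, not part of the original argument, and the bias value 0.3 is arbitrary.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical propensity: a fixed bias of the experimental set-up
# toward the outcome 'success'. The value 0.3 is arbitrary.
PROPENSITY = 0.3

def empirical_frequency(n_trials: int) -> float:
    """Observed frequency of success after n_trials independent repetitions."""
    successes = sum(1 for _ in range(n_trials) if random.random() < PROPENSITY)
    return successes / n_trials

# The frequency (the 'measurement') approaches the propensity (the 'property')
# only after a sufficient number of repeated trials -- the long run.
for n in (10, 1_000, 100_000):
    print(n, round(empirical_frequency(n), 3))
```

Just as repeated weighings estimate but do not constitute mass, the printed frequencies estimate but do not constitute the bias that generated them.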

The second type of PTC, the single-case propensity theory (SPTC), accounts for the problem of individual trials. Advocated by Miller (1994, p. 177) among others, it does not consider propensity to be a property of a repeatable set-up; rather, it is a property of a particular, singular situation. Therefore propensities do not produce frequencies; instead they induce a ‘causal force’ towards a discrete, particular result. According to Gillies, this formulation transforms the PTC from a scientific into a metaphysical account of probability, because its single-case propensities cannot be empirically evaluated (Gillies, 2000a, p. 128). This is not necessarily a negative outcome: as long as the theory is shown to be consistent, we can afford to carry the metaphysical burden. Since the SPTC is compatible with single trials, it can be used to explain the indeterminism which is apparent in quantum mechanics. Under the SPTC, each particle has a propensity, expressed by its wave function, to be observed in particular locations. However, a propensity remains dependent on the overall conditions of its manifestation, as the standard dice example illustrates: the propensity of a die to land on a given face depends on how and where it is thrown. It is beyond the scope of this essay to establish what constitutes a relevant condition, so for the sake of metaphysical tidiness we can consider the state of the whole universe at that time, or as Popper puts it, the “entire physical situation” (1994).

There are clear parallels and great compatibility between the SPTC and a dispositionalist account of laws. The dispositionalist position is that what we observe as laws of nature are actually the manifestation of various dispositions intrinsic to particulars, rather than external laws governing their relations. By viewing the universe in terms of dispositionalism, we can posit higher-order propensities and dispositions of particulars which, when considered holistically, constitute the overall propensity toward a certain outcome. We can think of these powers as the most fundamental constituents of reality, so that it is metaphysically possible to locate the exact source of chance and analyse it in terms of the relevant dispositions of the objects around it. Classifying propensity, and therefore the source of objective probability, as a form of disposition is conceptually and metaphysically neat because of its compatibility with an established view of laws. Dispositions need certain conditions to manifest themselves. Propensities under the SPTC are similar, with the difference that when the conditions are met there is, instead of certainty, merely a probability of manifestation, which varies depending on the surroundings. The consequence of this resemblance is that the SPTC is open to the same counter-arguments as dispositionalism, which will be presented in the next section.

In this section I have outlined two distinct forms of the propensity theory of chance: the long-run and the single-case version. They are metaphysically different and to some extent are solutions to different issues. The LPTC is a metaphysically light alternative to frequentism which locates the probability in the experimental set-up rather than in the results of trials, thus being ontologically intuitive. The SPTC deals with the problem of single-case probabilities more effectively but comes with a heavy metaphysical commitment. In the next section I will present objections to both theories, before determining the more plausible one.

Objections to the Propensity Theories


In this section I will outline two objections, one to the LPTC and one to the SPTC. The first concerns the LPTC: its apparent failure to account for the probability of singular events. The second is aimed at the SPTC and emerges from its connections to the dispositionalist account of the ‘laws of nature’.2

One of the reasons to prefer a propensity theory of chance over the frequency theory is the latter’s failure to account for the probability of singular trials. The long-run propensity theory appears to suffer from the same problem. Since the frequency distribution is produced only after multiple trials, it is impossible under the LPTC to account for the propensity of a single event to happen, such as the odds of a single coin toss landing ‘tails’ (the fact that it is a theoretically deterministic process is irrelevant to the example). Clearly this is an important problem, because single-case trials exist in nature, such as the genetic composition of a newly conceived foetus or the position of a diffracted electron. Scientists calculate probabilities for these events, so a viable theory of chance should be able to accommodate them. Gillies (2000a, p. 170) states that single events can never be fully objective because there will be uncertainty about the way they are classified, thus introducing subjectivity.

A way for the LPTC theorist to reply is to introduce the concept of reference classes, and therefore a degree of epistemic probability. Howson and Urbach (1989, p. 228) argue that when it comes to single-case probabilities, the degree of likelihood is a matter of epistemic judgement, despite still being based on objective probabilities. The concept of reference classes is a way to build that bridge between chance and credence. We can think of the value of the single-case probability as a little pearl inside many Russian dolls, where the dolls represent the reference classes. Examining only the largest doll gives a vague idea of the position of the pearl inside it; as you open it and find progressively smaller dolls, the objective range of positions for the pearl becomes smaller and smaller, although the probability of the pearl being in an exact position remains subjective to us (we can only guess) as we approach the limit. For the position of a diffracted electron, the biggest Russian doll, and therefore the largest reference class, is information about the general set-up of the experiment. The successively smaller Russian dolls are ever more precise information about the voltage of the cathode
and the diffraction grating. The probability of the electron being in a specific position is still subjectively decided, but it is made ever more precise by using the smallest reference class, i.e. having the pearl inside the smallest Russian doll. This is called the principle of the narrowest reference class (Ayer, 1963, p. 202). I consider this combination of objective factors and credence an adequate reply to the objection, although the cost paid is abandoning complete objectivity for the LPTC.

2
There are other potent objections to the Propensity Theory in the literature, such as Humphreys’ paradox. However, due to limited space I chose to present in detail just two, which apply individually to the different versions.
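The narrowing of reference classes can be sketched in code. The trial records and the attributes (voltage, grating) are hypothetical stand-ins for the electron example; each added condition picks out a smaller ‘Russian doll’, and the relative frequency within it is the progressively sharper estimate.

```python
# Illustrative sketch of the principle of the narrowest reference class.
# The trial records and attribute names below are hypothetical.
trials = [
    {"voltage": "high", "grating": "fine",   "hit": True},
    {"voltage": "high", "grating": "fine",   "hit": True},
    {"voltage": "high", "grating": "coarse", "hit": False},
    {"voltage": "low",  "grating": "fine",   "hit": False},
    {"voltage": "low",  "grating": "coarse", "hit": False},
    {"voltage": "high", "grating": "fine",   "hit": False},
]

def frequency(records, **conditions):
    """Relative frequency of 'hit' within the reference class picked out by conditions."""
    matching = [r for r in records
                if all(r[k] == v for k, v in conditions.items())]
    return sum(r["hit"] for r in matching) / len(matching)

# Each extra condition is a smaller Russian doll: a narrower reference class
# yielding a more specific estimate for the single case at hand.
print(frequency(trials))                                  # widest class
print(frequency(trials, voltage="high"))                  # narrower
print(frequency(trials, voltage="high", grating="fine"))  # narrowest
```

The choice of which conditions to condition on remains an epistemic judgement, which is exactly the residual subjectivity conceded above.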

The second objection is targeted at the SPTC, due to its reliance on a dispositionalist account of laws. It seems that there are some aspects of the external world which cannot be reduced to relationships between particulars, be they propensities or dispositions. For example, laws as basic as gravitation rely on the distance between two or more objects. If there is nothing more to the universe than objects and their dispositions, then the concept of distance is incoherent, because it requires a larger reference frame to be defined; it cannot be reduced to a power of particulars. This issue means that certain macro-propensities of scientific set-ups cannot be successfully analysed into their constituent micro-propensities without losing aspects critical to their overall probability.

The reply of the SPTC theorist should follow Ellis (2001) and introduce dispositions (and therefore propensities) at different levels of structure, from the most microscopic to the macroscopic, in order to allow for the diverse types of interactions observed at different orders of magnitude. The powers possessed by subatomic particles, and therefore their interactions, are very different from the dispositions of entire biological organisms: one follows the laws of the fundamental physical forces and the other the laws of evolutionary biology, and it is practically impossible to analyse the latter in terms of the former. Just as the dispositions of an organism are practically independent of those of subatomic particles, we can consider the entire universe to have certain dispositions which are practically independent of its constituents. Such ‘collective’ dispositions can be used to account for action-at-a-distance laws of universal character, like gravitational forces or the properties of electromagnetic fields, and other macroscopic effects that are considered ‘laws of nature’ but cannot be analysed solely in terms of interactions between particulars. In the context of probability, this means that experimental set-ups can have their own associated propensity without it being derived from the propensities of all the particulars involved in them, as well as allowing for more macroscopic probabilistic laws. This reply is sufficient to counter the objection of non-dispositional interactions, but it requires us to commit metaphysically to a context-dependent definition of ‘particular’, thereby making dispositionalism and the SPTC more ontologically bloated.

Plausibility of the two theories


In this section I firstly outline an argument supporting propensity theories of chance against frequency
ones. Then I proceed to argue in favour of the LPTC as the most plausible propensity theory of the
two presented in the essay, taking into account the objections raised to both of them in the section
above.

Frequency theories ‘put the cart before the horse’. According to the frequency interpretation, it is the results of the repeated trials which determine the probability of success, rather than the conditions of the trials themselves. This implies an operationalist view of science (Gillies, 2000a, p. 125): the view that all theoretical variables in science can be determined by observables, which can be shown false in other, equivalent cases. For example, the concept of mass cannot be defined simply by comparing objects to an object we deem to be one kilogram. If that were the case, how could we possibly understand the masses of large objects like planets, or of microscopic particles such as electrons? Even if we had different operational definitions for different orders of magnitude, we would still need theoretical assumptions about mass in order to set the mass of our ideal 1 kg object. Therefore mass is more primitive than its observable manifestation in weighing instruments. The relationship between propensities and frequencies is similar: it is the propensities which generate the observable frequencies which display the probability.

Given these objections and their replies, I consider the long-run propensity theory of chance to be the more plausible of the two alternatives. The fact that it is more empirical by nature means that it has a higher degree of utility (in the non-specialist use of the word) for understanding the concept of chance. Since it relies on repeated experiments to produce observable patterns, it can be described more as a mathematical science akin to mechanics and less as a metaphysical theory. Its major weakness, its application to single events, can be adequately dealt with by introducing reference classes and a degree of credence, as shown above.

The single-case propensity theory is less plausible because accepting it requires us to make heavy metaphysical commitments. It requires specific beliefs concerning the laws of nature, namely dispositionalism, which creates more questions than it answers. For example, it raises the question of how universal dispositions are linked to their manifestations, as well as the question of what distinguishes an unmanifested propensity from a propensity that does not exist. The long-run version is not plagued by the first of these problems, due to its empirical nature, while its answer to the second is the frequency distribution it produces. This does not mean that there are no replies available to the SPTC; Tugby (2013, p. 451), for example, argues for Platonic ideals as the solution to the first issue raised in this paragraph. However, the more empirical and arguably leaner alternative seems to be the more plausible account of propensity.
Conclusion
In this essay I argued that the long-run propensity theory of chance is plausible, both because it is preferable to the frequency account of objective probability and because it is the better version of the propensity theory when compared to the single-case variety. I began by defining objective probability and outlining the domain to which it is applicable. Then the two versions of the propensity theory were presented. In the third section I replied to one potent objection to each theory, while in the final section I explained the reasons I selected the long-run version as the more plausible one.

Bibliography
Armstrong, D. (1997). A World of States of Affairs. Cambridge: Cambridge University Press.
Ayer, A. J. (1963). Two notes on probability. London: MacMillan.
Ellis, B. (2001). Scientific Essentialism. Cambridge: Cambridge University Press.
Fetzer, J. (1974). A single case propensity theory of explanation. Philosophia, 337-338.
Galavotti, M. (2005). Philosophical Introduction to Probability. Stanford: CSLI Publications.
Gillies, D. (2000a). Philosophical Theories of Probability. London: Routledge.
Gillies, D. (2000b). Varieties of Propensity. The British Journal for the Philosophy of Science, 807-835.
Howson, C., & Urbach, P. (1989). Scientific reasoning: the Bayesian approach. La Salle: Open
Court.
Kyburg, H. (1974). Propensities and Probabilities. British Journal for the Philosophy of Science,
359-375.
Kyburg, H. (2002). Don't take Unnecessary Chances! Synthese, 9-26.
Mellor, D. (1971). The Matter of Chance. London: Cambridge University Press.
Miller, D. (1994). Critical Rationalism. Chicago: Open Court.
Molnar, G. (2003). Powers: A Study in Metaphysics. Oxford: Oxford University Press.
Poincare, H. (1952). Science and Hypothesis. New York: Dover.
Popper, K. (1994). A World of Propensities. Bristol: Thoemmes.
Popper, K. R. (1957). The Propensity Interpretation of the Calculus of Probability, and the Quantum Theory. Proceedings of the Ninth Symposium of the Colston Research Society (pp. 65-70). Bristol: University of Bristol.
Soros, G. (1994). The Alchemy of Finance: Reading the Mind of the Market. New York: J. Wiley.
Tugby, M. (2013). Platonic Dispositionalism. Mind, 451-480.
Tugby, M. (2016). Universals, Laws and Governance. Philosophical Studies, 1147-1163.
