
Doomsday argument

The Doomsday Argument (DA), or Carter catastrophe, is a probabilistic argument that claims to predict
the future population of the human species based on an estimation of the number of humans born to date.
The Doomsday argument was originally proposed by the astrophysicist Brandon Carter in 1983,[1] leading
to the initial name of the Carter catastrophe. The argument was subsequently championed by the
philosopher John A. Leslie and has since been independently conceived by J. Richard Gott[2] and Holger
Bech Nielsen.[3] Similar principles of eschatology were proposed earlier by Heinz von Foerster, among
others. A more general form was given earlier in the Lindy effect,[4] which proposes that for certain
phenomena, the future life expectancy is proportional to (though not necessarily equal to) the current age
and is based on a decreasing mortality rate over time.

[Figure: World population from 10,000 BC to AD 2000]

The premise of the argument is as follows: suppose that the total number of human beings that will ever
exist is fixed. If so, the likelihood of a randomly selected person existing at a particular time in history
would be proportional to the total population at that time. Given this, the argument posits that a person alive
today should adjust their expectations about the future of the human race because their existence provides
information about the total number of humans that will ever live.

If the total number of humans who were born or will ever be born is denoted by N, then the Copernican
principle suggests that any one human is equally likely (along with the other N − 1 humans) to find
themselves in any position n of the total population N, so humans assume that our fractional position
f = n/N is uniformly distributed on the interval [0,1] before learning our absolute position.

f is uniformly distributed on (0,1) even after learning the absolute position n. For example, there is a 95%
chance that f is in the interval (0.05,1), that is f > 0.05. In other words, one can assume with 95%
certainty that any individual human would be within the last 95% of all the humans ever to be born. If the
absolute position n is known, this argument implies a 95% confidence upper bound for N, obtained by
rearranging n/N > 0.05 to give N < 20n.

If Leslie's figure[5] is used, then approximately 60 billion humans have been born so far, so it can be
estimated that there is a 95% chance that the total number of humans N will be less than 20 × 60 billion =
1.2 trillion. Assuming that the world population stabilizes at 10 billion and a life expectancy of 80 years, it
can be estimated that the remaining 1,140 billion humans will be born in 9,120 years. Depending on the
projection of the world population in the forthcoming centuries, estimates may vary, but the argument states
that it is unlikely that more than 1.2 trillion humans will ever live.
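
A minimal Python sketch of the arithmetic above (the 60-billion births figure and the 10-billion population
and 80-year life expectancy assumptions are taken from the text; everything else is simple bookkeeping):

```python
# Doomsday-argument arithmetic from the section above (illustrative only).
births_so_far = 60e9                      # Leslie's estimate of humans born to date
upper_bound = 20 * births_so_far          # 95% confidence bound: N < 20n
print(f"{upper_bound:.2e}")               # 1.20e+12, i.e. 1.2 trillion

remaining = upper_bound - births_so_far   # 1.14e12 births still to come
stable_population = 10e9                  # assumed stabilised world population
life_expectancy = 80                      # years, assumed
births_per_year = stable_population / life_expectancy  # 125 million births/year
print(remaining / births_per_year)        # 9120.0 years until the bound is reached
```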

Aspects
Assume, for simplicity, that the total number of humans who will ever be born is 60 billion (N1), or 6,000
billion (N2).[6] If there is no prior knowledge of the position that a currently living individual, X, has in the
history of humanity, one may instead compute how many humans were born before X, and arrive at, say,
59,854,795,447, which would roughly place X among the first 60 billion humans who have ever lived.

It is possible to sum the probabilities for each value of N and, therefore, to compute a statistical 'confidence
limit' on N. For example, taking the numbers above, it is 99% certain that N is smaller than 6 trillion.

Note that as remarked above, this argument assumes that the prior probability for N is flat, or 50% for N1
and 50% for N2 in the absence of any information about X. On the other hand, it is possible to conclude,
given X, that N2 is more likely than N1 if a different prior is used for N. More precisely, Bayes' theorem
tells us that P(N|X) = P(X|N)P(N)/P(X), and the conservative application of the Copernican principle tells us
only how to calculate P(X|N). Taking P(X) to be flat, we still have to make an assumption about the prior
probability P(N) that the total number of humans is N. If we conclude that N2 is much more likely than N1
(for example, because producing a larger population takes more time, increasing the chance that a low-
probability but cataclysmic natural event will take place in that time), then the posterior P(N|X) can become
more heavily weighted towards the bigger value of N. A further, more detailed discussion, as well as relevant
distributions P(N), are given below in the Rebuttals section.
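
A minimal sketch of the two-hypothesis calculation above, showing how the posterior depends on the
prior; the 90/10 alternative prior is a hypothetical illustration, not a figure from the text:

```python
# Two-hypothesis Doomsday calculation: P(N|X) ∝ P(X|N) * P(N),
# with likelihood P(X|N) = 1/N (indifference over birth positions).
N1, N2 = 60e9, 6000e9

def posterior(prior_N1, prior_N2):
    like_N1, like_N2 = 1 / N1, 1 / N2
    unnorm = (like_N1 * prior_N1, like_N2 * prior_N2)
    total = sum(unnorm)
    return unnorm[0] / total, unnorm[1] / total

print(posterior(0.5, 0.5))   # flat prior: ~ (0.990, 0.010), the 99% figure in the text
print(posterior(0.1, 0.9))   # prior favouring N2 (hypothetical): ~ (0.917, 0.083)
```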

The Doomsday argument does not say that humanity cannot or will not exist indefinitely. It does not put
any upper limit on the number of humans that will ever exist nor provide a date for when humanity will
become extinct. An abbreviated form of the argument does make these claims, by confusing probability
with certainty. However, the actual conclusion for the version used above is that there is a 95% chance of
extinction within 9,120 years and a 5% chance that some humans will still be alive at the end of that period.
(The precise numbers vary among specific Doomsday arguments.)

Variations
This argument has generated a philosophical debate, and no consensus has yet emerged on its solution. The
variants described below produce the DA by separate derivations.

Gott's formulation: 'vague prior' total population

Gott specifically proposes the functional form for the prior distribution of the number of people who will
ever be born (N). Gott's DA used the vague prior distribution:

P(N) = k/N

where

P(N) is the probability prior to discovering n, the total number of humans who have so far been
born.
The constant, k, is chosen to normalize the sum of P(N). The value chosen is not important
here, just the functional form (this is an improper prior, so no value of k gives a valid
distribution, but Bayesian inference is still possible using it).

Since Gott specifies the prior distribution of total humans, P(N), Bayes' theorem and the principle of
indifference alone give us P(N|n), the probability of N humans being born if n is a random draw from N:

P(N|n) = P(n|N) P(N) / P(n)

This is Bayes' theorem for the posterior probability of the total population ever born, N, conditioned on the
population born thus far, n. Now, using the indifference principle:

P(n|N) = 1/N

The unconditioned n distribution of the current population is identical to the vague prior N probability
density function,[note 1] so:

P(n) = k/n

giving P(N|n) for each specific N (through a substitution into the posterior probability equation):

P(N|n) = n/N², for N ≥ n

The easiest way to produce the doomsday estimate with a given confidence (say 95%) is to pretend that N
is a continuous variable (since it is very large) and integrate over the probability density from N = n to N =
Z. (This will give a function for the probability that N ≤ Z):

P(N ≤ Z | n) = ∫ from n to Z of (n/N²) dN = (Z − n)/Z

Defining Z = 20n gives:

P(N ≤ 20n | n) = 19/20 = 95%

This is the simplest Bayesian derivation of the Doomsday Argument:

The chance that the total number of humans that will ever be born (N) is greater than
twenty times the total that have been born so far is below 5%.
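
A minimal numerical illustration of this result (a sketch only: it samples directly from the posterior
P(N ≤ Z | n) = 1 − n/Z derived above, using Leslie's 60-billion figure for n):

```python
import random

# Draw N from the posterior P(N ≤ Z | n) = 1 - n/Z by inverse-CDF sampling and
# check that roughly 95% of draws satisfy N ≤ 20n (equivalently, ~5% exceed 20n).
random.seed(0)
n = 60e9                       # births so far (Leslie's figure from the text)
draws = [n / (1 - random.random()) for _ in range(1_000_000)]
print(sum(N <= 20 * n for N in draws) / len(draws))   # ≈ 0.95
```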

The use of a vague prior distribution seems well-motivated as it assumes as little knowledge as possible
about N, given that some particular function must be chosen. It is equivalent to the assumption that the
probability density of one's fractional position remains uniformly distributed even after learning of one's
absolute position (n).

Gott's 'reference class' in his original 1993 paper was not the number of births, but the number of years
'humans' had existed as a species, which he put at 200,000. Also, Gott tried to give a 95% confidence
interval between a minimum survival time and a maximum. Because of the 2.5% chance that he gives to
underestimating the minimum, he has only a 2.5% chance of overestimating the maximum. This equates to
97.5% confidence that extinction occurs before the upper boundary of his confidence interval, which can
be used in the integral above with Z = 40n, and n = 200,000 years:

P(N ≤ 40n | n) = 1 − n/(40n) = 97.5%

This is how Gott produces a 97.5% confidence of extinction within N ≤ 40 × 200,000 = 8,000,000 years.
The number he quoted was the likely time remaining, N − n = 7.8 million years. This was much higher than
the temporal confidence bound produced by counting births, because it applied the principle of indifference
to time. (Producing different estimates by sampling different parameters in the same hypothesis is Bertrand's
paradox.) Similarly, there is a 97.5% chance that the present lies in the first 97.5% of human history, so
there is a 97.5% chance that the total lifespan of humanity will be at least

N ≥ n/0.975 = 200,000/0.975 ≈ 205,100 years.

In other words, Gott's argument gives a 95% confidence that humans will go extinct between 5,100 and 7.8
million years in the future.
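
The corresponding arithmetic, as a minimal sketch (the 200,000-year species age is Gott's figure from the
text):

```python
# Gott's temporal Doomsday bounds with a 95% confidence interval (2.5% in each tail).
n_years = 200_000                      # years humans have existed so far (Gott's figure)
upper_total = 40 * n_years             # 97.5% confidence: total duration N ≤ 40n
lower_total = n_years / 0.975          # 97.5% confidence: total duration N ≥ n/0.975
print(upper_total - n_years)           # 7,800,000 years remaining (upper bound)
print(round(lower_total - n_years))    # ≈ 5,128 years remaining (lower bound)
```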

Gott has also tested this formulation against the Berlin Wall and Broadway and off-Broadway plays.[7]

Leslie's argument

Leslie's argument differs from Gott's version in that he does not assume a vague prior probability
distribution for N. Instead, he argues that the force of the Doomsday Argument resides purely in the
increased probability of an early Doomsday once you take into account your birth position, regardless of
your prior probability distribution for N. He calls this the probability shift.

Heinz von Foerster's doomsday equation

Heinz von Foerster argued that humanity's abilities to construct societies, civilizations and technologies do
not result in self-inhibition. Rather, societies' success varies directly with population size. Von Foerster
found that this model fits some 25 data points from the birth of Jesus to 1958, with only 7% of the variance
left unexplained. Several follow-up letters (1961, 1962, …) were published in Science showing that von
Foerster's equation was still on track. The data continued to fit up until 1973. The most remarkable thing
about von Foerster's model was that it predicted the human population would reach infinity, or a
mathematical singularity, on Friday, November 13, 2026. In fact, von Foerster did not imply that the world
population on that day could actually become infinite. The real implication was that the world population
growth pattern followed for many centuries prior to 1960 was about to come to an end and be transformed
into a radically different pattern. Note that this prediction began to be fulfilled only a few years after the
"Doomsday" argument was published.[note 2]

Reference classes
The reference class from which n is drawn, and of which N is the ultimate size, is a crucial point of
contention in the Doomsday argument. The 'standard' Doomsday argument hypothesis skips over this point
entirely, merely stating that the reference class is the number of 'people'. Given that you are human, the
Copernican principle might be used to determine if you were born exceptionally early; however, the term
"human" has been heavily contested on practical and philosophical grounds. According to Nick Bostrom,
consciousness is (part of) the discriminator between what is in and what is out of the reference class, and
therefore extraterrestrial intelligence might have a significant impact on the calculation.

The following sub-sections relate to different suggested reference classes, each of which has had the
standard Doomsday Argument applied to it.

SSSA: Sampling from observer-moments

Nick Bostrom, considering observation selection effects, has produced a Self-Sampling Assumption (SSA):
"that you should think of yourself as if you were a random observer from a suitable reference class". If the
'reference class' is the set of humans to ever be born, this gives N < 20n with 95% confidence (the standard
Doomsday argument). However, he has refined this idea to apply to observer-moments rather than just
observers. He has formalized this as:[8]

The Strong Self-Sampling Assumption (SSSA): Each observer-moment should reason as
if it were randomly selected from the class of all observer-moments in its reference class.

An application of the principle underlying SSSA (though this application is nowhere expressly articulated
by Bostrom) is: If the minute in which you read this article is randomly selected from every minute in
every human's lifespan, then (with 95% confidence) this event has occurred after the first 5% of human
observer-moments. If the mean lifespan in the future is twice the historic mean lifespan, this implies 95%
confidence that N < 10n (the average future human will account for twice the observer-moments of the
average historic human). Therefore, the 95th percentile extinction-time estimate in this version is 4560
years.
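
One way to reproduce the 4,560-year figure quoted above, as a minimal sketch under the text's
assumptions (60 billion births to date, a stabilised population of 10 billion living 80 years, and future
lifespans twice the historic mean, so each future person contributes twice the observer-moments):

```python
# Observer-moment arithmetic behind the 4,560-year figure (a sketch, not a derivation).
moments_so_far = 60e9 * 1.0                 # observer-moments to date, in units of one historic lifespan
remaining_moments = 19 * moments_so_far     # 95% bound: total moments < 20x those already elapsed
remaining_births = remaining_moments / 2.0  # each future person counts double (2x lifespan assumed)
births_per_year = 10e9 / 80                 # stabilised population / life expectancy
print(remaining_births / births_per_year)   # 4560.0 years
```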

Rebuttals

We are in the earliest 5%, a priori

One counterargument to the Doomsday argument agrees with its statistical methods but disagrees with its
extinction-time estimate. This position requires justifying why the observer cannot be assumed to be
randomly selected from the set of all humans ever to be born, which implies that this set is not an
appropriate reference class. Disagreeing with the Doomsday argument in this way implies that the observer
is within the first 5% of humans to be born.

By analogy, if one is one of 50,000 members of a collaborative project, the reasoning of the Doomsday
argument implies a 95% confidence that there will never be more than a million members of that project.
However, if one's characteristics are typical of an early adopter, rather than typical of an average member
over the project's lifespan, then it may not be reasonable to assume one has joined the project at a random
point in its life. For instance, the mainstream of potential users will prefer to be involved when the project is
nearly complete, so someone who enjoys the project's incompleteness already knows that he or she is
unusual, prior to the discovery of his or her early involvement.

If one has measurable attributes that set one apart from the typical long-run user, the project Doomsday
argument can be refuted based on the fact that one could expect to be within the first 5% of members, a
priori. The analogy to the total-human-population form of the argument is: confidence in a prediction of the
distribution of human characteristics that places modern and historic humans outside the mainstream implies
that it is already known, before examining n, that n is likely to be very early in N. This is an argument for
changing the reference class.

For example, if one is certain that 99% of humans who will ever live will be cyborgs, but that only a
negligible fraction of humans who have been born to date are cyborgs, one could be equally certain that at
least one hundred times as many people remain to be born as have been.

Robin Hanson's paper sums up these criticisms of the Doomsday argument:[9]


All else is not equal; we have good reasons for thinking we are not randomly selected humans
from all who will ever live.

Human extinction is distant, a posteriori

The a posteriori observation that extinction level events are rare could be offered as evidence that the
Doomsday argument's predictions are implausible; typically, extinctions of dominant species happen less
often than once in a million years. Therefore, it is argued that human extinction is unlikely within the next
ten millennia. (Another probabilistic argument, drawing a different conclusion than the Doomsday
argument.)

In Bayesian terms, this response to the Doomsday argument says that our knowledge of history (or ability
to prevent disaster) produces a prior marginal for N with a minimum value in the trillions. If N is distributed
uniformly from 10^12 to 10^13, for example, then the probability of N < 1,200 billion inferred from n = 60
billion will be extremely small. This is an equally impeccable Bayesian calculation, rejecting the
Copernican principle on the grounds that we must be 'special observers' since there is no likely mechanism
for humanity to go extinct within the next hundred thousand years.

This response is accused of overlooking the technological threats to humanity's survival, to which earlier
life was not subject, and is specifically rejected by most academic critics of the Doomsday argument
(arguably excepting Robin Hanson).

The prior N distribution may make n very uninformative

Robin Hanson argues that N's prior may be exponentially distributed:[9]

N = c e^(U(0,q))

Here, c and q are constants. If q is large, then our 95% confidence upper bound is on the uniform draw, not
the exponential value of N.

The simplest way to compare this with Gott's Bayesian argument is to flatten the distribution from the
vague prior by having the probability fall off more slowly with N (than inverse proportionally). This
corresponds to the idea that humanity's growth may be exponential in time with doomsday having a vague
prior probability density function in time. This would mean that N, the last birth, would have a distribution
looking like the following:

Pr(N) = k/N^α, for 0 < α ≤ 1

This prior N distribution is all that is required (with the principle of indifference) to produce the inference of
N from n, and this is done in an identical way to the standard case, as described by Gott (equivalent to α =
1 in this distribution):

Pr(n|N) = 1/N

Substituting into the posterior probability equation:

Pr(N|n) = α n^α / N^(α+1), for N ≥ n

Integrating the probability of any N above xn:

Pr(N > xn) = x^(−α)

For example, if x = 20, and α = 0.5, this becomes:

Pr(N > 20n) = 20^(−0.5) ≈ 22%

Therefore, with this prior, the chance of a trillion births is well over 20%, rather than the 5% chance given
by the standard DA. If α is reduced further by assuming a flatter prior N distribution, then the limits on N
given by n become weaker. An α of one reproduces Gott's calculation with a birth reference class, and an α
of around 0.5 could approximate his temporal confidence interval calculation (if the population were
expanding exponentially). As α gets smaller, n becomes less and less informative about N. In the limit
α → 0 this distribution approaches an (unbounded) uniform distribution, where all values of N are equally
likely. This is Page et al.'s "Assumption 3", which they find few reasons to reject, a priori. (Although all
distributions with α ≤ 1 are improper priors, this applies to Gott's vague-prior distribution also, and they
can all be converted to produce proper integrals by postulating a finite upper population limit.) Since the
probability of reaching a population of size 2N is usually thought of as the chance of reaching N multiplied
by the survival probability from N to 2N, it follows that Pr(N) must be a monotonically decreasing function
of N, but this doesn't necessarily require an inverse proportionality.[9]
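
A minimal sketch of how the confidence bound weakens as the prior flattens (α is the exponent in the prior
above; the specific α values tried are illustrative):

```python
# Pr(N > x*n | n) = x**(-alpha) under the prior Pr(N) ∝ 1/N**alpha.
# alpha = 1 reproduces Gott's 5% figure; smaller alpha makes n less informative.
x = 20
for alpha in (1.0, 0.5, 0.1):
    print(alpha, x ** -alpha)   # 1.0 -> 0.05, 0.5 -> ~0.224, 0.1 -> ~0.741
```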

Infinite expectation

Another objection to the Doomsday Argument is that the expected total human population is actually
infinite.[10] The calculation is as follows:

The total human population N = n/f, where n is the human population to date and f is our
fractional position in the total.
We assume that f is uniformly distributed on (0,1].
The expectation of N is

E(N) = ∫ from 0 to 1 of (n/f) df = ∞,

since the integral diverges at f = 0.

For a similar example of counterintuitive infinite expectations, see the St. Petersburg paradox.
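
A minimal numerical illustration of this divergence (the sample sizes are arbitrary):

```python
import random

# Sample means of N = n/f, with f ~ Uniform(0,1], are dominated by rare tiny
# values of f and do not converge to any finite limit as the sample grows.
random.seed(0)
n = 60e9
for size in (10**3, 10**5, 10**6):
    mean = sum(n / (1 - random.random()) for _ in range(size)) / size
    print(size, f"{mean:.3e}")
```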

Self-Indication Assumption: The possibility of not existing at all

One objection is that the probability of a human existing at all depends on how many humans will ever exist
(N). If this is a high number, then the probability of their existing is higher than if only a few humans will
ever exist. Since they do indeed exist, this is evidence that the number of humans that will ever exist is
high.[11]
This objection, originally by Dennis Dieks (1992),[12] is now known by Nick Bostrom's name for it: the
"Self-Indication Assumption objection". It can be shown that some SIAs prevent any inference of N from n
(the current population).[13]

Caves' rebuttal

The Bayesian argument by Carlton M. Caves states that the uniform distribution assumption is incompatible
with the Copernican principle, not a consequence of it.[14]

Caves gives a number of examples to argue that Gott's rule is implausible. For instance, he says, imagine
stumbling into a birthday party, about which you know nothing:

Your friendly enquiry about the age of the celebrant elicits the reply that she is celebrating her
(tp =) 50th birthday. According to Gott, you can predict with 95% confidence that the woman
will survive between [50]/39 = 1.28 years and 39[×50] = 1,950 years into the future. Since the
wide range encompasses reasonable expectations regarding the woman's survival, it might not
seem so bad, till one realizes that [Gott's rule] predicts that with probability 1/2 the woman will
survive beyond 100 years old and with probability 1/3 beyond 150. Few of us would want to
bet on the woman's survival using Gott's rule. (See Caves' online paper below.)

Although this example exposes a weakness in J. Richard Gott's "Copernicus method" DA (that he does not
specify when the "Copernicus method" can be applied) it is not precisely analogous with the modern DA;
epistemological refinements of Gott's argument by philosophers such as Nick Bostrom specify that:

Knowing the absolute birth rank (n) must give no information on the total population (N).

Careful DA variants specified with this rule aren't shown implausible by Caves' "Old Lady" example
above, because the woman's age is given prior to the estimate of her lifespan. Since human age gives an
estimate of survival time (via actuarial tables) Caves' Birthday party age-estimate could not fall into the
class of DA problems defined with this proviso.

To produce a comparable "Birthday Party Example" of the carefully specified Bayesian DA, we would
need to completely exclude all prior knowledge of likely human life spans; in principle this could be done
(e.g., a hypothetical amnesia chamber). However, this would remove the modified example from everyday
experience. To keep it in the everyday realm, the lady's age must be hidden prior to the survival estimate
being made. (Although this is no longer exactly the DA, it is much more comparable to it.)

Without knowing the lady's age, the DA reasoning produces a rule to convert the birthday (n) into a
maximum lifespan with 50% confidence (N). Gott's Copernicus method rule is simply: Prob(N < 2n) =
50%. How accurate would this estimate turn out to be? Western demographics are now fairly uniform
across ages, so a random birthday (n) could be (very roughly) approximated by a U(0,M] draw, where M is
the maximum lifespan in the census. In this 'flat' model, everyone shares the same lifespan, so N = M. If n
happens to be less than M/2, then Gott's 2n estimate of N will be under M, its true figure. The other half of
the time, 2n overestimates M, and in this case (the one Caves highlights in his example) the subject will die
before the 2n estimate is reached. In this 'flat demographics' model, Gott's 50% confidence figure is proven
right 50% of the time.
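
A minimal simulation of this 'flat demographics' model (the maximum lifespan M = 100 is an arbitrary
illustrative choice):

```python
import random

# In the flat model everyone lives exactly M years and a random subject's current
# age n is uniform on (0, M]. Gott's rule claims Prob(N < 2n) = 50%; here N = M,
# so the rule is "right" exactly when the drawn age exceeds M/2.
random.seed(0)
M = 100.0                     # assumed maximum lifespan (illustrative)
trials = 1_000_000
correct = sum(M < 2 * (random.random() * M) for _ in range(trials))
print(correct / trials)       # ≈ 0.5: the 50% confidence figure is right 50% of the time
```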

Self-referencing Doomsday argument rebuttal


Some philosophers have suggested that only people who have contemplated the Doomsday argument (DA)
belong in the reference class 'human'. If that is the appropriate reference class, Carter defied his own
prediction when he first described the argument (to the Royal Society). An attendant could have argued
thus:

Presently, only one person in the world understands the Doomsday argument, so by its own
logic there is a 95% chance that it is a minor problem which will only ever interest twenty
people, and I should ignore it.

Jeff Dewynne and Professor Peter Landsberg suggested that this line of reasoning will create a paradox for
the Doomsday argument:[10]

If a member of the Royal Society did pass such a comment, it would indicate that they understood the DA
sufficiently well that in fact 2 people could be considered to understand it, and thus there would be a 5%
chance that 40 or more people would actually be interested. Also, of course, ignoring something because
you only expect a small number of people to be interested in it is extremely short sighted—if this approach
were to be taken, nothing new would ever be explored, if we assume no a priori knowledge of the nature
of interest and attentional mechanisms.

Conflation of future duration with total duration

Various authors have argued that the Doomsday argument rests on an incorrect conflation of future duration
with total duration. This occurs in the specification of the two time periods as "doom soon" and "doom
deferred" which means that both periods are selected to occur after the observed value of the birth order. A
rebuttal in Pisaturo (2009)[15] argues that the Doomsday Argument relies on the equivalent of this equation:

P(HFS | Dp X) / P(HFL | Dp X) = [P(HTS | X) / P(HTL | X)] × [P(Dp | HTS X) / P(Dp | HTL X)],
where:
X = the prior information;
Dp = the data that past duration is tp;
HFS = the hypothesis that the future duration of the phenomenon will be short;
HFL = the hypothesis that the future duration of the phenomenon will be long;
HTS = the hypothesis that the total duration of the phenomenon will be short—i.e., that tt,
the phenomenon’s total longevity, = tTS;
HTL = the hypothesis that the total duration of the phenomenon will be long—i.e., that tt,
the phenomenon’s total longevity, = tTL, with tTL > tTS.

Pisaturo then observes:

Clearly, this is an invalid application of Bayes' theorem, as it conflates future duration and
total duration.

Pisaturo takes numerical examples based on two possible corrections to this equation: considering only
future durations and considering only total durations. In both cases, he concludes that the Doomsday
Argument’s claim, that there is a ‘Bayesian shift’ in favor of the shorter future duration, is fallacious.

This argument is also echoed in O'Neill (2014).[16] In this work O'Neill argues that a unidirectional
"Bayesian Shift" is an impossibility within the standard formulation of probability theory and is
contradictory to the rules of probability. As with Pisaturo, he argues that the doomsday argument conflates
future duration with total duration by specification of doom times that occur after the observed birth order.
According to O'Neill:

The reason for the hostility to the doomsday argument and its assertion of a "Bayesian
shift" is that many people who are familiar with probability theory are implicitly aware of the
absurdity of the claim that one can have an automatic unidirectional shift in beliefs
regardless of the actual outcome that is observed. This is an example of the "reasoning to
a foregone conclusion" that arises in certain kinds of failures of an underlying inferential
mechanism. An examination of the inference problem used in the argument shows that this
suspicion is indeed correct, and the doomsday argument is invalid. (pp. 216-217)

Confusion over the meaning of confidence intervals

Gelman and Robert[17] assert that the Doomsday argument confuses frequentist confidence intervals with
Bayesian credible intervals. Suppose that every individual knows their number n and uses it to estimate an
upper bound on N. Every individual has a different estimate, and these estimates are constructed so that
95% of them contain the true value of N and the other 5% do not. This, say Gelman and Robert, is the
defining property of a frequentist lower-tailed 95% confidence interval. But, they say, "this does not mean
that there is a 95% chance that any particular interval will contain the true value." That is, while 95% of the
confidence intervals will contain the true value of N, this is not the same as N being contained in the
confidence interval with 95% probability. The latter is a different property and is the defining characteristic
of a Bayesian credible interval. Gelman and Robert conclude,

... the Doomsday argument is the ultimate triumph of the idea, beloved among Bayesian
educators, that our students and clients do not really understand Neyman–Pearson confidence
intervals and inevitably give them the intuitive Bayesian interpretation.
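
The frequentist coverage property being described can be illustrated with a small simulation; a minimal
sketch (the true N and the number of sampled individuals are arbitrary illustrative choices):

```python
import random

# Frequentist reading: fix one true N, give each individual a random birth rank n,
# and have each construct the interval [n, 20n]. About 95% of these intervals
# cover the true N -- but that is a property of the procedure, not a 95%
# probability that any particular individual's interval contains N.
random.seed(0)
true_N = 1_000_000                      # arbitrary "true" total population
individuals = 100_000
covered = 0
for _ in range(individuals):
    n = random.uniform(0, true_N)       # this individual's birth rank
    covered += (n <= true_N <= 20 * n)  # interval [n, 20n] contains N iff n >= N/20
print(covered / individuals)            # ≈ 0.95
```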

See also
Anthropic principle
Human overpopulation
Global catastrophic risk
Doomsday event
Fermi paradox
Measure problem (cosmology)
Mediocrity principle
Quantum suicide and immortality
Simulated reality
Survival analysis
Survivalism
Technological singularity

Notes
1. The only probability density functions that must be specified a priori are:
Pr(N) - the ultimate number of people that will be born, assumed by J. Richard Gott to
have a vague prior distribution, Pr(N) = k/N
Pr(n|N) - the chance of being born in any position based on a total population N - all DA
forms assume the Copernican principle, making Pr(n|N) = 1/N
From these two distributions, the Doomsday Argument proceeds to create a Bayesian
inference on the distribution of N from n, through Bayes' rule, which requires P(n); to produce
this, integrate over all the possible values of N which might contain an individual born nth
(that is, wherever N > n):

P(n) = ∫ from N = n to ∞ of Pr(n|N) Pr(N) dN = ∫ from n to ∞ of (k/N²) dN = k/n

This is why the marginal distributions of n and N are identical in the case of Pr(N) = k/N.
2. See, for example, Introduction to Social Macrodynamics (http://urss.ru/cgi-bin/db.pl?cp=&pa
ge=Book&id=34250&lang=en&blang=en&list=38) by Andrey Korotayev et al.

References
1. Brandon Carter; McCrea, W. H. (1983). "The anthropic principle and its implications for
biological evolution". Philosophical Transactions of the Royal Society of London. A310
(1512): 347–363. Bibcode:1983RSPTA.310..347C (https://ui.adsabs.harvard.edu/abs/1983R
SPTA.310..347C). doi:10.1098/rsta.1983.0096 (https://doi.org/10.1098%2Frsta.1983.0096).
S2CID 92330878 (https://api.semanticscholar.org/CorpusID:92330878).
2. J. Richard Gott, III (1993). "Implications of the Copernican principle for our future prospects".
Nature. 363 (6427): 315–319. Bibcode:1993Natur.363..315G (https://ui.adsabs.harvard.edu/
abs/1993Natur.363..315G). doi:10.1038/363315a0 (https://doi.org/10.1038%2F363315a0).
S2CID 4252750 (https://api.semanticscholar.org/CorpusID:4252750).
3. Holger Bech Nielsen (1989). "Random dynamics and relations between the number of
fermion generations and the fine structure constants". Acta Physica Polonica. B20: 427–
468.
4. Humphrey, Colman (2014). "Predicting Future Lifespan: The Lindy Effect, Gott's Predictions
and Caves' Corrections, and Confidence Intervals" (https://web.archive.org/web/201603281
83642/http://www.amstat.org/meetings/jsm/2014/onlineprogram/AbstractDetails.cfm?abstract
id=313738). Archived from the original (https://www.amstat.org/meetings/jsm/2014/onlinepro
gram/AbstractDetails.cfm?abstractid=313738) on 2016-03-28.
5. Oliver, Jonathan; Korb, Kevin (1998). "A Bayesian Analysis of the Doomsday Argument".
CiteSeerX 10.1.1.49.5899 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.49.58
99).
6. Korb, K. (1998). "A refutation of the doomsday argument". Mind. 107 (426): 403–410.
doi:10.1093/mind/107.426.403 (https://doi.org/10.1093%2Fmind%2F107.426.403).
7. Timothy Ferris (July 12, 1999). "How to Predict Everything" (http://www.newyorker.com/archi
ve/1999/07/12/1999_07_12_035_TNY_LIBRY_000018591). The New Yorker. Retrieved
September 3, 2010.
8. Bostrom, Nick (2005). "Self-Location and Observation Selection Theory" (https://anthropic-pr
inciple.com/preprints/self-location). anthropic-principle.com. Retrieved 2023-07-02.
9. "Critiquing the Doomsday Argument" (http://mason.gmu.edu/~rhanson/Nodoom.html).
mason.gmu.edu. Retrieved 2023-06-17.
10. Monton, Bradley; Roush, Sherri (2001-11-20). "Gott's Doomsday Argument" (http://philsci-arc
hive.pitt.edu/1205/). philsci-archive.pitt.edu. Retrieved 2023-06-17.
11. Olum, Ken D. (2002). "The doomsday argument and the number of possible observers". The
Philosophical Quarterly. 52 (207): 164. arXiv:gr-qc/0009081 (https://arxiv.org/abs/gr-qc/0009
081). doi:10.1111/1467-9213.00260 (https://doi.org/10.1111%2F1467-9213.00260).
S2CID 14707647 (https://api.semanticscholar.org/CorpusID:14707647).
12. Dieks, Dennis (2005-01-13). "Reasoning About the Future: Doom and Beauty" (https://philsc
i-archive.pitt.edu/2144/). philsci-archive.pitt.edu. Retrieved 2023-06-17.
13. Bostrom, Nick (2002). Anthropic Bias: Observational Selection Effects in Science and
Philosophy. New York & London: Routledge. pp. 124–126. ISBN 0-415-93858-9.
14. Caves, Carlton M. (2008). "Predicting future duration from present age: Revisiting a critical
assessment of Gott's rule". arXiv:0806.3538 (https://arxiv.org/abs/0806.3538) [astro-ph (http
s://arxiv.org/archive/astro-ph)].
15. Ronald Pisaturo (2009). "Past Longevity as Evidence for the Future". Philosophy of Science.
76: 73–100. doi:10.1086/599273 (https://doi.org/10.1086%2F599273). S2CID 122207511 (ht
tps://api.semanticscholar.org/CorpusID:122207511).
16. Ben O'Neill (2014). "Assessing the "Bayesian Shift" in the Doomsday Argument". Journal of
Philosophy. 111 (4): 198–218. doi:10.5840/jphil2014111412 (https://doi.org/10.5840%2Fjphi
l2014111412).
17. Andrew Gelman; Christian P. Robert (2013). " 'Not Only Defended But Also Applied': The
Perceived Absurdity of Bayesian Inference". The American Statistician. 67 (4): 1–5.
arXiv:1006.5366 (https://arxiv.org/abs/1006.5366). doi:10.1080/00031305.2013.760987 (http
s://doi.org/10.1080%2F00031305.2013.760987). S2CID 10833752 (https://api.semanticscho
lar.org/CorpusID:10833752).

Further reading
John A. Leslie, The End of the World: The Science and Ethics of Human Extinction,
Routledge, 1998, ISBN 0-415-18447-9.
J. R. Gott III, Future Prospects Discussed, Nature, vol. 368, p. 108, 1994.
This argument plays a central role in Stephen Baxter's science fiction book, Manifold: Time,
Del Rey Books, 2000, ISBN 0-345-43076-X.
The same principle plays a major role in the Dan Brown novel, Inferno, Corgi Books,
ISBN 978-0-552-16959-2
Poundstone, William, The Doomsday Calculation: How an Equation that Predicts the Future
Is Transforming Everything We Know About Life and the Universe. 2019 Little, Brown Spark.
Description (https://books.google.com/books?id=Q55yDwAAQBAJ) & arrow/scrollable
preview. (https://books.google.com/books?id=Q55yDwAAQBAJ) Also summarised in
Poundstone's essay, "Math Says Humanity May Have Just 760 Years Left," (https://www.wsj.
com/articles/doomsday-math-says-humanity-may-have-just-760-years-left-11561655839)
Wall Street Journal, updated June 27, 2019.

External links
The Doomsday argument category on PhilPapers (http://philpapers.org/browse/doomsday-a
rgument)
A non-mathematical, unpartisan introduction to the DA (http://flatrock.org.nz/topics/environme
nt/doom_soon.htm)
Nick Bostrom's response to Korb and Oliver (http://www.anthropic-principle.com/preprints/ali/
alive.html)
Nick Bostrom's annotated collection of references (http://www.anthropic-principle.com/prepri
nts.html#doomsday)
Kopf, Krtouš & Page's early (1994) refutation (https://arxiv.org/abs/gr-qc/9407002) based on
the SIA, which they called "Assumption 2".
The Doomsday argument and the number of possible observers by Ken Olum (https://arxiv.o
rg/abs/gr-qc/0009081) In 1993 J. Richard Gott used his "Copernicus method" to predict the
lifetime of Broadway shows. One part of this paper uses the same reference class as an
empirical counter-example to Gott's method.
A Critique of the Doomsday Argument by Robin Hanson (https://web.archive.org/web/20040
217141525/http://hanson.gmu.edu/nodoom.html)
A Third Route to the Doomsday Argument by Paul Franceschi (http://cogprints.org/7044/),
Journal of Philosophical Research, 2009, vol. 34, pp. 263–278
Chambers' Ussherian Corollary Objection (http://journals.cambridge.org/action/displayAbstr
act?fromPage=online&aid=82931)
Caves' Bayesian critique of Gott's argument. C. M. Caves, "Predicting future duration from
present age: A critical assessment", Contemporary Physics 41, 143-153 (2000). (https://web.
archive.org/web/20041205094143/http://info.phys.unm.edu/papers/2000/Caves2000a.pdf)
C.M. Caves, "Predicting future duration from present age: Revisiting a critical assessment of
Gott's rule. (https://arxiv.org/abs/0806.3538)
"Infinitely Long Afterlives and the Doomsday Argument" by John Leslie (http://journals.cambr
idge.org/action/displayAbstract?fromPage=online&aid=2400044) shows that Leslie has
recently modified his analysis and conclusion (Philosophy 83 (4) 2008 pp. 519–524):
Abstract—A recent book of mine defends three distinct varieties of immortality. One of them
is an infinitely lengthy afterlife; however, any hopes of it might seem destroyed by something
like Brandon Carter's ‘doomsday argument’ against viewing ourselves as extremely early
humans. The apparent difficulty might be overcome in two ways. First, if the world is non-
deterministic then anything on the lines of the doomsday argument may prove unable to
deliver a strongly pessimistic conclusion. Secondly, anything on those lines may break
down when an infinite sequence of experiences is in question.
Mark Greenberg, "Apocalypse Not Just Now" in London Review of Books (https://www.lrb.c
o.uk/the-paper/v21/n13/mark-greenberg/apocalypse-not-just-now)
Laster (http://pthbb.org/manual/services/grim/laster.html): A simple webpage applet giving
the min & max survival times of anything with 50% and 95% confidence requiring only that
you input how old it is. It is designed to use the same mathematics as J. Richard Gott's form
of the DA, and was programmed by sustainable development researcher Jerrad Pierce.
PBS Space Time The Doomsday Argument (https://www.youtube.com/watch?v=dSvgw9ZO
K3I)
