
Probability

Probability is the branch of mathematics concerning numerical descriptions of how likely an
event is to occur, or how likely it is that a proposition is true. The probability of an event is a
number between 0 and 1, where, roughly speaking, 0 indicates impossibility of the event and 1
indicates certainty.[note 1][1][2] The higher the probability of an event, the more likely it is that the
event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair,
the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads"
equals the probability of "tails"; and since no other outcomes are possible, the probability of
either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).

The probabilities of rolling several numbers using two dice.

These concepts have been given an axiomatic mathematical formalization in probability theory,
which is used widely in areas of study such as statistics, mathematics, science, finance,
gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy
to, for example, draw inferences about the expected frequency of events. Probability theory is
also used to describe the underlying mechanics and regularities of complex systems.[3]

Terminology of probability theory

Experiment: An operation which can produce some well-defined outcomes is called an
Experiment.

Example: When we toss a coin, we know that either head or tail shows
up. So, the operation of tossing a coin may be said to have two well-
defined outcomes, namely, (a) heads showing up; and (b) tails showing
up.

Random Experiment: When we roll a die we are well aware of the fact that any of the numerals
1, 2, 3, 4, 5, or 6 may appear on the upper face, but we cannot say which exact number will
show up.

Such an experiment in which all possible outcomes are known and the
exact outcome cannot be predicted in advance, is called a Random
Experiment.

Sample Space: All the possible outcomes of an experiment, taken as a whole, form the Sample
Space.

Example: When we roll a die we can get any outcome from 1 to 6. All
the possible numbers which can appear on the upper face form the
Sample Space (denoted by S). Hence, the Sample Space of a die roll is
S = {1, 2, 3, 4, 5, 6}.

Outcome: Any possible result out of the Sample Space S for a Random Experiment is called an
Outcome.

Example: When we roll a die, we might obtain 3; or when we toss a
coin, we might obtain heads.
Event: Any subset of the Sample Space S is called an Event (denoted by E). When an outcome
which belongs to the subset E takes place, it is said that an Event has occurred. Conversely, when
an outcome which does not belong to the subset E takes place, the Event has not occurred.

Example: Consider the experiment of throwing a die. Here the
Sample Space S = {1, 2, 3, 4, 5, 6}. Let E denote the event of 'a number
appearing less than 4.' Thus the Event E = {1, 2, 3}. If the number 1
appears, we say that Event E has occurred. Similarly, if the outcomes
are 2 or 3, we can say Event E has occurred since these outcomes
belong to subset E.
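
To make the terminology concrete, here is a minimal Python sketch (not part of the original article) that models the die-roll example with sets; the names sample_space, event, and has_occurred are invented for illustration.

```python
# Minimal sketch: probability terminology modeled with Python sets.
# All names here are illustrative, not a standard API.

sample_space = {1, 2, 3, 4, 5, 6}   # S: all possible outcomes of one die roll
event = {1, 2, 3}                   # E: "a number less than 4 appears"

def has_occurred(outcome, event):
    """An event occurs when the observed outcome belongs to its subset E."""
    return outcome in event

print(has_occurred(1, event))  # True  -> Event E has occurred
print(has_occurred(5, event))  # False -> Event E has not occurred
```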

Trial: By a trial, we mean performing a random experiment.

Example: (i) Tossing a fair coin, (ii) rolling an unbiased die.[4]

Interpretations

When dealing with experiments that are random and well-defined in a purely theoretical setting
(like tossing a coin), probabilities can be numerically described by the number of desired
outcomes, divided by the total number of all outcomes. For example, tossing a coin twice will
yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an
outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%.
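
The counting argument above can be reproduced by brute-force enumeration. The following short Python sketch (an illustration, assuming a fair coin so that all four outcomes are equally likely) lists the outcomes of two tosses and counts the desired one.

```python
# Enumerate the equally likely outcomes of tossing a fair coin twice
# and compute the probability of "head-head" by counting.
from itertools import product

outcomes = list(product("HT", repeat=2))    # [('H','H'), ('H','T'), ('T','H'), ('T','T')]
desired = [o for o in outcomes if o == ("H", "H")]

print(len(desired) / len(outcomes))         # 0.25, i.e. 1/4 or 25%
```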
However, when it comes to practical application, there are two major competing categories of
probability interpretations, whose adherents hold different views about the fundamental nature
of probability:

Objectivists assign numbers to describe some objective or physical state of affairs. The most
popular version of objective probability is frequentist probability, which claims that the
probability of a random event denotes the relative frequency of occurrence of an experiment's
outcome when the experiment is repeated indefinitely. This interpretation considers
probability to be the relative frequency "in the long run" of outcomes.[5] A modification of this
is propensity probability, which interprets probability as the tendency of some experiment to
yield a certain outcome, even if it is performed only once.

Subjectivists assign numbers per subjective probability, that is, as a degree of belief.[6] The
degree of belief has been interpreted as "the price at which you would buy or sell a bet that
pays 1 unit of utility if E, 0 if not E."[7] The most popular version of subjective probability is
Bayesian probability, which includes expert knowledge as well as experimental data to produce
probabilities. The expert knowledge is represented by some (subjective) prior probability
distribution. These data are incorporated in a likelihood function. The product of the prior and
the likelihood, when normalized, results in a posterior probability distribution that incorporates
all the information known to date.[8] By Aumann's agreement theorem, Bayesian agents whose
prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different
priors can lead to different conclusions, regardless of how much information the agents
share.[9]
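
As a rough numerical illustration of the Bayesian update described above (posterior proportional to prior times likelihood), the sketch below assumes a coin with unknown bias, a uniform prior over a grid of candidate biases, and invented data of 7 heads in 10 tosses; none of these specifics come from the article.

```python
# Bayesian updating on a grid: posterior ∝ prior × likelihood.
# The grid of biases and the observed counts are illustrative assumptions.

biases = [k / 10 for k in range(11)]            # candidate values of P(heads)
prior = [1 / len(biases)] * len(biases)         # uniform (subjective) prior

heads, tails = 7, 3                             # assumed experimental data
likelihood = [p**heads * (1 - p)**tails for p in biases]

unnormalized = [pr * lk for pr, lk in zip(prior, likelihood)]
total = sum(unnormalized)                       # normalizing constant
posterior = [u / total for u in unnormalized]   # now sums to 1

# The posterior concentrates near the observed frequency of heads, 0.7.
print(max(zip(posterior, biases)))
```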

Etymology

The word probability derives from the Latin probabilitas, which can also mean "probity", a measure
of the authority of a witness in a legal case in Europe, and often correlated with the witness's
nobility. In a sense, this differs much from the modern meaning of probability, which in contrast is
a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and
statistical inference.[10]

History

The scientific study of probability is a modern development of mathematics. Gambling shows
that there has been an interest in quantifying the ideas of probability for millennia, but exact
mathematical descriptions arose much later. There are reasons for the slow development of the
mathematics of probability. Whereas games of chance provided the impetus for the
mathematical study of probability, fundamental issues[note 2] are still obscured by the
superstitions of gamblers.[11]

According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable'
(Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to
action. A probable action or opinion was one such as sensible people would undertake or hold, in
the circumstances."[12] However, in legal contexts especially, 'probable' could also apply to
propositions for which there was good evidence.[13]
Gerolamo Cardano (16th century)

Christiaan Huygens published one of the first books on probability (17th century)

The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining
odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of
an event is given by the ratio of favourable outcomes to the total number of possible
outcomes[14]).

Aside from the elementary work by Cardano, the doctrine of probabilities dates to
the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657)
gave the earliest known scientific treatment of the subject.[15] Jakob Bernoulli's Ars Conjectandi
(posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as
a branch of mathematics.[16] See Ian Hacking's The Emergence of Probability[10] and James
Franklin's The Science of Conjecture[17] for histories of the early development of the very
concept of mathematical probability.

The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous,
1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory
to the discussion of errors of observation.[18] The reprint (1757) of this memoir lays down the
axioms that positive and negative errors are equally probable, and that certain assignable limits
define the range of all errors. Simpson also discusses continuous errors and describes a
probability curve.

The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The
first law was published in 1774, and stated that the frequency of an error could be expressed as
an exponential function of the numerical magnitude of the error, disregarding sign. The second
law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an
exponential function of the square of the error.[19] The second law of error is called the normal
distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite
of his well-known precocity had probably not made this discovery before he was two years
old."[19]

Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a
system of concurrent errors.

Carl Friedrich Gauss

Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his
Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for
Determining the Orbits of Comets).[20] In ignorance of Legendre's contribution, an Irish-American
writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,

φ(x) = c e^(−h²x²),

where h is a constant depending on precision of observation, and c is a scale factor ensuring that
the area under the curve equals 1. He gave two proofs, the second being essentially the same as
John Herschel's (1850). Gauss gave the first proof that seems to have been known in Europe (the
third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823),
James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and
Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872),
and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single
observation, is well known.

In the nineteenth century, authors on the general theory included Laplace, Sylvestre Lacroix
(1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872),
Hermann Laurent (1873), Liagre, Didion and Karl Pearson. Augustus De Morgan and George Boole
improved the exposition of the theory.

In 1906, Andrey Markov introduced[21] the notion of Markov chains, which played an important
role in stochastic processes theory and its applications. The modern theory of probability based
on measure theory was developed by Andrey Kolmogorov in 1931.[22]

On the geometric side, contributors to The Educational Times were influential (Miller, Crofton,
McColl, Wolstenholme, Watson, and Artemas Martin).[23] See integral geometry for more information.

Theory

Like other theories, the theory of probability is a representation of its concepts in formal terms,
that is, in terms that can be considered separately from their meaning. These formal terms are
manipulated by the rules of mathematics and logic, and any results are interpreted or translated
back into the problem domain.

There have been at least two successful attempts to formalize probability, namely the
Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see also
probability space), sets are interpreted as events and probability as a measure on a class of sets.
In Cox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is
on constructing a consistent assignment of probability values to propositions. In both cases, the
laws of probability are the same, except for technical details.

There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or
possibility theory, but those are essentially different and not compatible with the usually-
understood laws of probability.

Applications
Probability theory is applied in everyday life in risk assessment and modeling. The insurance
industry and markets use actuarial science to determine pricing and make trading decisions.
Governments apply probabilistic methods in environmental regulation, entitlement analysis, and
financial regulation.

An example of the use of probability theory in equity trading is the effect of the perceived
probability of any widespread Middle East conflict on oil prices, which have ripple effects in the
economy as a whole. An assessment by a commodity trader that a war is more likely can send
that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the
probabilities are neither assessed independently nor necessarily rationally. The theory of
behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and
on peace and conflict.[24]

In addition to financial assessment, probability can be used to analyze trends in biology (e.g.,
disease spread) as well as ecology (e.g., biological Punnett squares). As with finance, risk
assessment can be used as a statistical tool to calculate the likelihood of undesirable events
occurring, and can assist with implementing protocols to avoid encountering such circumstances.
Probability is used to design games of chance so that casinos can make a guaranteed profit, yet
provide payouts to players that are frequent enough to encourage continued play.[25]

Another significant application of probability theory in everyday life is reliability. Many consumer
products, such as automobiles and consumer electronics, use reliability theory in product design
to reduce the probability of failure. Failure probability may influence a manufacturer's decisions
on a product's warranty.[26]

The cache language model and other statistical language models that are used in natural
language processing are also examples of applications of probability theory.

Mathematical treatment
Calculation of probability (risk) vs odds

Consider an experiment that can produce a number of results. The collection of all possible
results is called the sample space of the experiment, sometimes denoted as Ω.[27] The power
set of the sample space is formed by considering all different collections of possible results.
For example, rolling a die can produce six possible results. One collection of possible results
gives an odd number on the die. Thus, the subset {1, 3, 5} is an element of the power set of the
sample space of dice rolls. These collections are called "events". In this case, {1, 3, 5} is the event
that the die falls on some odd number. If the results that actually occur fall in a given event, the
event is said to have occurred.

A probability is a way of assigning every event a value between zero and one, with the
requirement that the event made up of all possible results (in our example, the event
{1, 2, 3, 4, 5, 6}) is assigned a value of one. To qualify as a probability, the assignment of values must
satisfy the requirement that for any collection of mutually exclusive events (events with no
common results, such as the events {1, 6}, {3}, and {2, 4}), the probability that at least one of the
events will occur is given by the sum of the probabilities of all the individual events.[28]

The probability of an event A is written as P(A),[27][29] p(A), or Pr(A).[30] This mathematical
definition of probability can extend to infinite sample spaces, and even uncountable sample
spaces, using the concept of a measure.

The opposite or complement of an event A is the event [not A] (that is, the event of A not
occurring), often denoted as A′,[27] Aᶜ, or ¬A; its probability is given by
P(not A) = 1 − P(A).[31] As an example, the chance of not rolling a six on a six-sided die is
1 − (chance of rolling a six) = 1 − 1/6 = 5/6. For a more comprehensive treatment, see
Complementary event.
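
A two-line check of the complement rule, as a sketch (fair die assumed; the helper p is invented for the example):

```python
# Complement rule on a fair die: P(not A) = 1 - P(A).
sample_space = {1, 2, 3, 4, 5, 6}
A = {6}                            # the event "rolling a six"

def p(event):
    """Probability of an event under equally likely outcomes."""
    return len(event) / len(sample_space)

print(p(sample_space - A))         # 5/6 ≈ 0.8333 (not rolling a six)
print(1 - p(A))                    # same value, by the complement rule
```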

If two events A and B occur on a single performance of an experiment, this is called the
intersection or joint probability of A and B, denoted as P(A ∩ B).[27]
Independent events

If two events, A and B, are independent then the joint probability is[29]

P(A and B) = P(A ∩ B) = P(A) P(B).

For example, if two coins are flipped, then the chance of both being heads is 1/2 × 1/2 = 1/4.[32]
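
The multiplication rule can be verified by enumeration. The sketch below (illustrative, fair coins assumed) checks that the counted probability of "both heads" matches the product P(A)P(B).

```python
# Independence by enumeration: P(A and B) = P(A) * P(B) for two fair coins.
from itertools import product

outcomes = list(product("HT", repeat=2))   # 4 equally likely outcomes
A = {o for o in outcomes if o[0] == "H"}   # first coin shows heads
B = {o for o in outcomes if o[1] == "H"}   # second coin shows heads

n = len(outcomes)
print(len(A & B) / n)                      # 0.25: counted joint probability
print((len(A) / n) * (len(B) / n))         # 0.25: product of the marginals
```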

Mutually exclusive events

If either event A or event B can occur but never both simultaneously, then they are called
mutually exclusive events.

If two events are mutually exclusive, then the probability of both occurring is denoted as P(A ∩ B)
and

P(A and B) = P(A ∩ B) = 0.

If two events are mutually exclusive, then the probability of either occurring is denoted as P(A ∪ B)
and

P(A or B) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = P(A) + P(B).

For example, the chance of rolling a 1 or 2 on a six-sided die is P(1 or 2) = P(1) + P(2) = 1/6 + 1/6 = 1/3.

Not mutually exclusive events

If the events are not mutually exclusive then

P(A or B) = P(A) + P(B) − P(A and B).

For example, when drawing a card from a deck of cards, the chance of getting a heart or a face
card (J, Q, K) (or both) is 13/52 + 12/52 − 3/52 = 22/52 ≈ 0.423, since among the 52 cards of a deck, 13 are
hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both"
are included in each of the "13 hearts" and the "12 face cards", but should only be counted once.
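
The same card calculation can be done by counting, as in this sketch (a standard 52-card deck is assumed; the rank and suit encodings are invented for the example):

```python
# Inclusion-exclusion on a 52-card deck: P(heart or face card).
suits = "SHDC"                                 # spades, hearts, diamonds, clubs
ranks = "A23456789TJQK"
deck = [r + s for s in suits for r in ranks]   # 52 cards, e.g. "QH"

hearts = {c for c in deck if c[1] == "H"}      # 13 cards
faces = {c for c in deck if c[0] in "JQK"}     # 12 cards
both = hearts & faces                          # 3 cards: JH, QH, KH

print(len(hearts | faces) / len(deck))                     # 22/52 ≈ 0.423
print((len(hearts) + len(faces) - len(both)) / len(deck))  # same, by the rule
```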

Conditional probability

Conditional probability is the probability of some event A, given the occurrence of some other
event B. Conditional probability is written P(A | B),[27] and is read "the probability of A, given B".
It is defined by[33]

P(A | B) = P(A ∩ B) / P(B).

If P(B) = 0 then P(A | B) is formally undefined by this expression. In this case A and B are
independent, since P(A ∩ B) = P(A) P(B) = 0. However, it is possible to define a
conditional probability for some zero-probability events using a σ-algebra of such events (such
as those arising from a continuous random variable).

For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a
red ball is 1/2; however, when taking a second ball, the probability of it being either a red ball or a
blue ball depends on the ball previously taken. For example, if a red ball was taken, then the
probability of picking a red ball again would be 1/3, since only 1 red and 2 blue balls would have
been remaining. And if a blue ball was taken previously, the probability of taking a red ball will be
2/3.
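
The urn example can be checked by enumerating equally likely ordered draws without replacement; the sketch below is illustrative, with invented ball labels.

```python
# Conditional probability in the 2-red/2-blue urn, by enumeration.
from itertools import permutations

balls = ["R1", "R2", "B1", "B2"]
draws = list(permutations(balls, 2))    # 12 equally likely ordered pairs

first_red = [d for d in draws if d[0][0] == "R"]
both_red = [d for d in draws if d[0][0] == "R" and d[1][0] == "R"]

# P(second red | first red) = P(both red) / P(first red)
print(len(both_red) / len(first_red))   # 1/3 ≈ 0.3333
```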

Inverse probability

In probability theory and applications, Bayes' rule relates the odds of event A₁ to event A₂,
before (prior to) and after (posterior to) conditioning on another event B. The odds on A₁ to
event A₂ is simply the ratio of the probabilities of the two events. When arbitrarily many events A
are of interest, not just two, the rule can be rephrased as posterior is proportional to prior
times likelihood, P(A | B) ∝ P(A) P(B | A), where the proportionality symbol means that the
left hand side is proportional to (i.e., equals a constant times) the right hand side as A varies, for
fixed or given B (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace
(1774) and to Cournot (1843); see Fienberg (2005). See Inverse probability and Bayes' rule.
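
As a hedged numerical sketch of Bayes' rule in this odds form, take two hypotheses A1 and A2 and evidence B; the prior probabilities and likelihoods below are invented for illustration, not taken from the article.

```python
# Bayes' rule in odds form: posterior odds = prior odds × likelihood ratio.
prior_A1, prior_A2 = 0.01, 0.99    # P(A1), P(A2): assumed priors
lik_A1, lik_A2 = 0.95, 0.05        # P(B | A1), P(B | A2): assumed likelihoods

prior_odds = prior_A1 / prior_A2
bayes_factor = lik_A1 / lik_A2     # ratio of the likelihoods
posterior_odds = prior_odds * bayes_factor

posterior_A1 = posterior_odds / (1 + posterior_odds)
print(round(posterior_A1, 3))      # ≈ 0.161: updated belief in A1
```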

Summary of probabilities

Event       Probability
not A       P(Aᶜ) = 1 − P(A)
A or B      P(A ∪ B) = P(A) + P(B) − P(A ∩ B); = P(A) + P(B) if A and B are mutually exclusive
A and B     P(A ∩ B) = P(A | B) P(B); = P(A) P(B) if A and B are independent
A given B   P(A | B) = P(A ∩ B) / P(B)

Relation to randomness and probability in quantum mechanics

In a deterministic universe, based on Newtonian concepts, there would be no probability if all
conditions were known (Laplace's demon), but there are situations in which sensitivity to initial
conditions exceeds our ability to measure them, i.e., to know them. In the case of a roulette wheel,
if the force of the hand and the period of that force are known, the number on which the ball will
stop would be a certainty (though as a practical matter, this would likely be true only of a
roulette wheel that had not been exactly levelled, as Thomas A. Bass' Newtonian Casino
revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness,
and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic
description can thus be more useful than Newtonian mechanics for analyzing the pattern of
outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic
theory of gases, where the system, while deterministic in principle, is so complex (with the
number of molecules typically on the order of magnitude of the Avogadro constant, 6.02 × 10²³)
that only a statistical description of its properties is feasible.

Probability theory is required to describe quantum phenomena.[34] A revolutionary discovery of
early 20th century physics was the random character of all physical processes that occur at
sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave
function evolves deterministically but, according to the Copenhagen interpretation, it deals with
probabilities of observing, the outcome being explained by a wave function collapse when an
observation is made. However, the loss of determinism for the sake of instrumentalism did not
meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am
convinced that God does not play dice".[35] Like Einstein, Erwin Schrödinger, who discovered the
wave function, believed quantum mechanics is a statistical approximation of an underlying
deterministic reality.[36] In some modern interpretations of the statistical mechanics of
measurement, quantum decoherence is invoked to account for the appearance of subjectively
probabilistic experimental outcomes.

See also

Chance (disambiguation)

Class membership probabilities

Contingency

Equiprobability

Heuristics in judgment and decision-making

Probability theory

Randomness

Statistics

Estimators

Estimation theory

Probability density function

Pairwise independence

In law

Balance of probabilities

Notes

1. Strictly speaking, a probability of 0 indicates that an event almost never takes place, whereas a
probability of 1 indicates that an event almost certainly takes place. This is an important distinction
when the sample space is infinite. For example, for the continuous uniform distribution on the real
interval [5, 10], there are an infinite number of possible outcomes, and the probability of any given
outcome being observed — for instance, exactly 7 — is 0. This means that when we make an
observation, it will almost surely not be exactly 7. However, it does not mean that exactly 7 is
impossible. Ultimately some specific outcome (with probability 0) will be observed, and one possibility
for that specific outcome is exactly 7.

2. In the context of the book that this is quoted from, it is the theory of probability and the logic behind it
that governs the phenomena of such things compared to rash predictions that rely on pure luck or
mythological arguments such as gods of luck helping the winner of the game.

References

1. "Kendall's Advanced Theory of Statistics, Volume 1: Distribution Theory", Alan Stuart and Keith Ord, 6th
Ed, (2009), ISBN 978-0-534-24312-8.

2. William Feller, An Introduction to Probability Theory and Its Applications, (Vol 1), 3rd Ed, (1968), Wiley,
ISBN 0-471-25708-7.

3. Probability Theory (http://www.britannica.com/EBchecked/topic/477530/probability-theory) The
Britannica website

4. Mathematics Textbook For Class XI. National Council of Educational Research and Training (NCERT).
2019. pp. 384–388. ISBN 978-81-7450-486-9.

5. Hacking, Ian (1965). The Logic of Statistical Inference. Cambridge University Press. ISBN 978-0-521-
05165-1.

6. Finetti, Bruno de (1970). "Logical foundations and measurement of subjective probability". Acta
Psychologica. 34: 129–145. doi:10.1016/0001-6918(70)90012-0 (https://doi.org/10.1016%2F0001-691
8%2870%2990012-0) .

7. Hájek, Alan (21 October 2002). Edward N. Zalta (ed.). "Interpretations of Probability" (http://plato.stanf
ord.edu/archives/win2012/entries/probability-interpret/) . The Stanford Encyclopedia of Philosophy
(Winter 2012 ed.). Retrieved 22 April 2013.

8. Hogg, Robert V.; Craig, Allen; McKean, Joseph W. (2004). Introduction to Mathematical Statistics
(6th ed.). Upper Saddle River: Pearson. ISBN 978-0-13-008507-8.

9. Jaynes, E.T. (2003). "Section 5.3 Converging and diverging views". In Bretthorst, G. Larry (ed.).
Probability Theory: The Logic of Science (1 ed.). Cambridge University Press. ISBN 978-0-521-59271-0.

10. Hacking, I. (2006) The Emergence of Probability: A Philosophical Study of Early Ideas about Probability,
Induction and Statistical Inference, Cambridge University Press, ISBN 978-0-521-68557-3

11. Freund, John. (1973) Introduction to Probability. Dickenson ISBN 978-0-8221-0078-2 (p. 1)

12. Jeffrey, R.C., Probability and the Art of Judgment, Cambridge University Press. (1992). pp. 54–55.
ISBN 0-521-39459-7

13. Franklin, J. (2001) The Science of Conjecture: Evidence and Probability Before Pascal, Johns Hopkins
University Press. (pp. 22, 113, 127)
14. Gorroochurn, P. (2012). "Some laws and problems in classical probability and how Cardano
anticipated them". Chance magazine. (http://www.columbia.edu/~pg2113/index_files/Gorroochurn-Some%20Laws.
pdf)

15. Abrams, William, A Brief History of Probability (http://www.secondmoment.org/articles/probability.php) ,
Second Moment, retrieved 23 May 2008

16. Ivancevic, Vladimir G.; Ivancevic, Tijana T. (2008). Quantum leap : from Dirac and Feynman, across the
universe, to human body and mind. Singapore ; Hackensack, NJ: World Scientific. p. 16. ISBN 978-981-
281-927-7.

17. Franklin, James (2001). The Science of Conjecture: Evidence and Probability Before Pascal. Johns
Hopkins University Press. ISBN 978-0-8018-6569-5.

18. Shoesmith, Eddie (November 1985). "Thomas Simpson and the arithmetic mean" (https://doi.org/10.10
16%2F0315-0860%2885%2990044-8) . Historia Mathematica. 12 (4): 352–355. doi:10.1016/0315-
0860(85)90044-8 (https://doi.org/10.1016%2F0315-0860%2885%2990044-8) .

19. Wilson EB (1923) "First and second laws of error". Journal of the American Statistical Association, 18,
143

20. Seneta, Eugene William. " "Adrien-Marie Legendre" (version 9)" (https://web.archive.org/web/201602030
70724/http://statprob.com/encyclopedia/AdrienMarieLegendre.html) . StatProb: The Encyclopedia
Sponsored by Statistics and Probability Societies. Archived from the original (http://statprob.com/encyc
lopedia/AdrienMarieLegendre.html) on 3 February 2016. Retrieved 27 January 2016.

21. Weber, Richard. "Markov Chains" (http://www.statslab.cam.ac.uk/~rrw1/markov/M.pdf) (PDF).
Statistical Laboratory. University of Cambridge.

22. Vitanyi, Paul M.B. (1988). "Andrei Nikolaevich Kolmogorov" (http://homepages.cwi.nl/~paulv/KOLMOG
OROV.BIOGRAPHY.html) . CWI Quarterly (1): 3–18. Retrieved 27 January 2016.

23. Wilcox, Rand R. (10 May 2016). Understanding and applying basic statistical methods using R.
Hoboken, New Jersey. ISBN 978-1-119-06140-3. OCLC 949759319 (https://www.worldcat.org/oclc/949
759319) .

24. Singh, Laurie (2010) "Whither Efficient Markets? Efficient Market Theory and Behavioral Finance". The
Finance Professionals' Post, 2010.

25. Gao, J.Z.; Fong, D.; Liu, X. (April 2011). "Mathematical analyses of casino rebate systems for VIP
gambling". International Gambling Studies. 11 (1): 93–106. doi:10.1080/14459795.2011.552575 (http
s://doi.org/10.1080%2F14459795.2011.552575) . S2CID 144540412 (https://api.semanticscholar.or
g/CorpusID:144540412) .

26. Gorman, Michael F. (2010). "Management Insights" (https://doi.org/10.1287%2Fmnsc.1090.1132) .
Management Science. 56: iv–vii. doi:10.1287/mnsc.1090.1132 (https://doi.org/10.1287%2Fmnsc.1090.1132) .
27. "List of Probability and Statistics Symbols" (https://mathvault.ca/hub/higher-math/math-symbols/prob
ability-statistics-symbols/) . Math Vault. 26 April 2020. Retrieved 10 September 2020.

28. Ross, Sheldon M. (2010). A First course in Probability (8th ed.). Pearson Prentice Hall. pp. 26–27.
ISBN 9780136033134.

29. Weisstein, Eric W. "Probability" (https://mathworld.wolfram.com/Probability.html) .
mathworld.wolfram.com. Retrieved 10 September 2020.

30. Olofsson (2005) p. 8.

31. Olofsson (2005), p. 9

32. Olofsson (2005) p. 35.

33. Olofsson (2005) p. 29.

34. Burgin, Mark (2010). "Interpretations of Negative Probabilities". p. 1. arXiv:1008.1287v1 (https://arxiv.or
g/abs/1008.1287v1) [physics.data-an (https://arxiv.org/archive/physics.data-an) ].

35. Jedenfalls bin ich überzeugt, daß der Alte nicht würfelt. ("In any case, I am convinced that He does
not play dice.") Letter to Max Born, 4 December 1926, in: Einstein/Born Briefwechsel 1916–1955
(https://books.google.com/books?id=LQIsAQAAIAAJ&q=achtung-gebietend) .

36. Moore, W.J. (1992). Schrödinger: Life and Thought. Cambridge University Press. p. 479. ISBN 978-0-
521-43767-7.

Bibliography

Kallenberg, O. (2005) Probabilistic Symmetries and Invariance Principles. Springer-Verlag, New
York. 510 pp. ISBN 0-387-25115-4

Kallenberg, O. (2002) Foundations of Modern Probability, 2nd ed. Springer Series in Statistics.
650 pp. ISBN 0-387-95313-2

Olofsson, Peter (2005) Probability, Statistics, and Stochastic Processes, Wiley-Interscience.
504 pp. ISBN 0-471-67969-0.

External links

Wikiquote has quotations related to: Probability

Wikibooks has more on the topic of: Probability

Wikimedia Commons has media related to Probability.

Virtual Laboratories in Probability and Statistics (Univ. of Ala.-Huntsville) (http://www.math.uah.edu/stat/)

Probability (https://www.bbc.co.uk/programmes/b00bqf61) on In Our Time at the BBC

Probability and Statistics EBook (http://wiki.stat.ucla.edu/socr/index.php/EBook)

Edwin Thompson Jaynes. Probability Theory: The Logic of Science. Preprint: Washington
University, (1996). HTML index with links to PostScript files (https://web.archive.org/web/2
0160119131820/http://omega.albany.edu:8008/JaynesBook.html) and PDF (http://bayes.wu
stl.edu/etj/prob/book.pdf) (first three chapters)

People from the History of Probability and Statistics (Univ. of Southampton) (http://www.econ
omics.soton.ac.uk/staff/aldrich/Figures.htm)

Probability and Statistics on the Earliest Uses Pages (Univ. of Southampton) (http://www.econ
omics.soton.ac.uk/staff/aldrich/Probability%20Earliest%20Uses.htm)

Earliest Uses of Symbols in Probability and Statistics (http://jeff560.tripod.com/stat.html)
on Earliest Uses of Various Mathematical Symbols (http://jeff560.tripod.com/mathsym.html)

A tutorial on probability and Bayes' theorem devised for first-year Oxford University students
(http://www.celiagreen.com/charlesmccreery/statistics/bayestutorial.pdf)

[1] (http://ubu.com/historical/young/index.html) pdf file of An Anthology of Chance
Operations (1963) at UbuWeb

Introduction to Probability – eBook (http://www.dartmouth.edu/~chance/teaching_aids/book
s_articles/probability_book/book.html) , by Charles Grinstead, Laurie Snell Source (https://bit
bucket.org/shabbychef/numas_text/) (GNU Free Documentation License)

(in English and Italian) Bruno de Finetti, Probabilità e induzione (http://amshistorica.unibo.it/3
5) , Bologna, CLUEB, 1993. ISBN 88-8091-176-7 (digital version)

Richard P. Feynman's Lecture on probability. (http://www.feynmanlectures.caltech.edu/I_06.html)