
Probability and Statistics

In many physical phenomena, the outcome of an experiment shows fluctuations that cannot be predicted precisely. For example, in a coin-tossing experiment it is impossible to say in advance whether heads or tails will occur on any given toss. However, the outcome over a large number of tosses may show some regularity: on average, heads and tails occur about equally often.

The study of the average behavior of events leads to a determination of the frequency of occurrence of certain outcomes, such as heads and tails. In mathematical terms, this is captured by the notion of probability.

Associated with probability are concepts such as probability distributions and density functions, which describe the results of a large number of events; when these are analyzed, certain laws can be determined, and this is essentially what is known as statistics.

Definition of Probability

In a random experiment such as coin tossing, repeated several times, suppose the two outcomes are either A (heads) or B (tails). If n_A is the number of occurrences of A in a total of N tosses, then the relative frequency of occurrence of A is given by

Relative frequency of A = n_A / N

Denoting the probability of A by P(A), the ratio n_A / N will approach some value and will show little change if N is a very large number. This limiting ratio is defined as the probability P(A) and is given by

P(A) = lim_{N→∞} n_A / N
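This limiting behavior can be illustrated with a short simulation. The sketch below (not part of the original notes; it assumes a fair coin, modelled with Python's standard `random` module, and a fixed seed for reproducibility) tosses a coin N times and prints the relative frequency n_A / N of heads for increasing N:

```python
import random

def relative_frequency(trials, seed=0):
    """Toss a simulated fair coin `trials` times and return the
    relative frequency n_A / N of heads (event A)."""
    rng = random.Random(seed)
    heads = sum(rng.randint(0, 1) for _ in range(trials))
    return heads / trials

# The ratio n_A / N fluctuates for small N but settles near 0.5
# (the probability P(A) for a fair coin) as N grows.
for n in (10, 1000, 100_000):
    print(n, relative_frequency(n))
```

For small N the printed ratio wanders; by N = 100,000 it is very close to 0.5, matching the limit definition above.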

Similarly, if n_B is the number of occurrences of B, then

P(B) = lim_{N→∞} n_B / N
where P(B) is the probability of the outcome B. Consequently, if event A is certain to occur every time the coin is tossed, then n_A = N and P(A) = 1. Alternatively, if event A can never happen, then n_A = 0 and P(A) = 0, which signifies impossibility. Hence the value of any probability lies between 0 and 1, i.e.

0 ≤ P(A) ≤ 1 and 0 ≤ P(B) ≤ 1

with

P(A) + P(B) = 1

so that

P(A) = 1 − P(B)

and similarly

P(B) = 1 − P(A)

Joint Probability

Joint probability is a statistical measure of the likelihood of two events occurring together at the same point in time. It is the probability of event A occurring at the same time as event B, and is denoted P(AB). If n_AB is the number of times out of N trials that A and B occur together, then

P(AB) = lim_{N→∞} n_AB / N

For events A and B that may or may not occur together, the total probability of A or B occurring is defined as P(A + B). If n_{A+B} is the number of occurrences of A or B out of a total of N events, then

P(A + B) = lim_{N→∞} n_{A+B} / N

When A and B can occur together, n_{A+B} = n_A + n_B − n_AB, since trials in which both events occur would otherwise be counted twice.
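These counting definitions can be checked by enumerating a small, fully known sample space. The sketch below (an illustration, not from the original notes) uses the 36 equally likely outcomes of rolling two dice, with two hypothetical events chosen purely as examples: A, "the first die is even", and B, "the sum exceeds 7":

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))
N = len(outcomes)

# Hypothetical events for illustration:
# A: the first die shows an even number; B: the sum exceeds 7.
n_A  = sum(1 for a, b in outcomes if a % 2 == 0)
n_B  = sum(1 for a, b in outcomes if a + b > 7)
n_AB = sum(1 for a, b in outcomes if a % 2 == 0 and a + b > 7)
n_A_or_B = sum(1 for a, b in outcomes if a % 2 == 0 or a + b > 7)

print(n_AB / N)                # joint probability P(AB) = 9/36
print(n_A_or_B / N)            # total probability P(A + B) = 24/36
print((n_A + n_B - n_AB) / N)  # same value: n_{A+B} = n_A + n_B - n_AB
```

Here the direct count of "A or B" (24 outcomes) agrees with n_A + n_B − n_AB = 18 + 15 − 9, confirming that outcomes where both events occur must not be counted twice.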

Terminology

Sample space: the set of all possible outcomes of a random phenomenon.

Event: any set of outcomes of interest.

Probability of an event: the relative frequency of this set of outcomes over an infinite number of
trials.
Example

Suppose we roll two dice and take their sum.

S = {2, 3, 4, 5, ..., 11, 12}

What is the probability that the sum of the two dice is 5?

Solution

We get the sum of the two dice to be 5 when we roll

(1, 4), (2, 3), (3, 2), (4, 1)

Thus

P(sum = 5) = 4/36 = 1/9
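The counting argument above can be reproduced by enumeration. This short sketch (an added illustration; the helper name `p_sum` is our own) lists all 36 outcomes of two dice and counts those with the target sum:

```python
from itertools import product

def p_sum(target):
    """Count the two-dice outcomes whose sum equals `target`;
    returns (favourable outcomes, total outcomes)."""
    outcomes = list(product(range(1, 7), repeat=2))
    favourable = [o for o in outcomes if sum(o) == target]
    return len(favourable), len(outcomes)

n, N = p_sum(5)
print(f"P(sum=5) = {n}/{N}")  # prints "P(sum=5) = 4/36"
```

The same function gives, for example, 6/36 for a sum of 7, the most likely total.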

Conditional Probability

In certain experiments, the occurrence of A may depend on the occurrence of B. The probability of this happening is called conditional probability, defined as the probability of an event given that another event has occurred. The conditional probability of occurrence of A given that B has already occurred is written as P(A/B), where

P(A/B) = n_AB / n_B

Here n_AB is the number of joint occurrences of A and B, while n_B is the number of occurrences of B (with or without A). Hence

P(A/B) = n_AB / n_B = (n_AB / N) · (N / n_B) = P(AB) / P(B)   (i)

P(B/A) = n_AB / n_A = (n_AB / N) · (N / n_A) = P(AB) / P(A)   (ii)

Combining equations (i) and (ii) yields

P(AB) = P(A/B) · P(B) = P(B/A) · P(A)   (iii)

or

P(A/B) = P(B/A) · P(A) / P(B)

This is known as Bayes' theorem.
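Equations (i)-(iii) can be verified numerically on a finite sample space. The sketch below (an added check, with events chosen purely as an example: A, "the first die is 1", and B, "the sum is at most 4", over two dice) computes each probability as an exact fraction and confirms Bayes' theorem:

```python
from itertools import product
from fractions import Fraction

# Hypothetical events for illustration, over the 36 two-dice outcomes:
# A: the first die shows 1; B: the sum is at most 4.
outcomes = list(product(range(1, 7), repeat=2))
N = len(outcomes)

n_A  = sum(1 for a, b in outcomes if a == 1)
n_B  = sum(1 for a, b in outcomes if a + b <= 4)
n_AB = sum(1 for a, b in outcomes if a == 1 and a + b <= 4)

P_A, P_B, P_AB = Fraction(n_A, N), Fraction(n_B, N), Fraction(n_AB, N)

P_A_given_B = P_AB / P_B   # equation (i): P(A/B) = P(AB) / P(B)
P_B_given_A = P_AB / P_A   # equation (ii): P(B/A) = P(AB) / P(A)

# Bayes' theorem: P(A/B) = P(B/A) * P(A) / P(B)
assert P_A_given_B == P_B_given_A * P_A / P_B
print(P_A_given_B)  # 1/2: knowing the sum is small makes a first-die 1 likely
```

Using `Fraction` keeps the arithmetic exact, so the identity holds with equality rather than within floating-point tolerance.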

Statistical Concepts

In the analysis of statistical data, two concepts are used:

i. The average or expected value E[ X ] for any random variable such as noise or
interference.
ii. The standard deviation δ

Average or Expected value ( E[ X ] )

Suppose a random variable X can take on the values x_1, x_2, x_3, ..., x_k with numbers of occurrences n_1, n_2, n_3, ..., n_k. Then the expectation of this random variable X is defined as

E[X] = (x_1 n_1 + x_2 n_2 + ... + x_k n_k) / N

where N = n_1 + n_2 + ... + n_k is the total number of occurrences.
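As a worked illustration of this formula (the data and the helper name `expected_value` are our own, not from the notes), suppose a die is rolled 60 times and each face happens to come up 10 times; the weighted average then reproduces the familiar mean of 3.5:

```python
from fractions import Fraction

def expected_value(values, counts):
    """E[X] = (x1*n1 + ... + xk*nk) / N, where N = n1 + ... + nk."""
    N = sum(counts)
    return Fraction(sum(x * n for x, n in zip(values, counts)), N)

# Hypothetical data for illustration: a die rolled 60 times,
# with each face observed 10 times.
faces  = [1, 2, 3, 4, 5, 6]
counts = [10, 10, 10, 10, 10, 10]
print(expected_value(faces, counts))  # 7/2, i.e. 3.5
```

With unequal counts the same function gives the frequency-weighted average, which is exactly the E[X] defined above.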
