
Unit IV

Probability:

A probability is a way of assigning every event a value between zero and one, with the
requirement that the event made up of all possible results (for example, when rolling a six-sided
die, the event {1, 2, 3, 4, 5, 6}) is assigned a value of one. To qualify as a probability, the
assignment of values must satisfy the requirement that, for any collection of mutually exclusive
events (events with no common results, e.g., the events {1, 6}, {3}, and {2, 4} are all mutually
exclusive), the probability that at least one of the events occurs is given by the sum of the
probabilities of the individual events.

The probability of an event A is written as P(A), p(A) or Pr(A).

Theorems of Probability:

Multiplication Theorem
If two events A and B are independent, then the joint probability is

P(A ∩ B) = P(A) P(B)

For example, if two fair coins are flipped, the chance of both being heads is 1/2 × 1/2 = 1/4.
Addition Theorem
If either event A or event B or both events occur on a single performance of an experiment, this
is called the union of the events A and B, denoted A ∪ B. If two events are mutually exclusive,
then the probability of either occurring is

P(A ∪ B) = P(A) + P(B)

For example, the chance of rolling a 1 or a 2 on a six-sided die is 1/6 + 1/6 = 1/3.
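The coin and die examples above can be checked with a short sketch using exact fractions:

```python
# Minimal sketch: verifying the two theorems with the examples above.
from fractions import Fraction

# Multiplication theorem (independent events): two fair coins both heads.
p_heads = Fraction(1, 2)
p_both_heads = p_heads * p_heads  # 1/2 * 1/2 = 1/4

# Addition theorem (mutually exclusive events): rolling a 1 or a 2 on a die.
p_one_or_two = Fraction(1, 6) + Fraction(1, 6)  # 1/6 + 1/6 = 1/3

print(p_both_heads)   # 1/4
print(p_one_or_two)   # 1/3
```

Using Fraction rather than floating point keeps the results exact, which matches how the answers are usually written by hand.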
Conditional probability
Conditional probability is the probability of some event A, given the occurrence of some other
event B. Conditional probability is written P(A | B), and is read "the probability of A,
given B". It is defined by

P(A | B) = P(A ∩ B) / P(B)

If P(B) = 0, then P(A | B) is formally undefined by this expression. However, it is
possible to define a conditional probability for some zero-probability events using a σ-
algebra of such events (such as those arising from a continuous random variable).

For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a
red ball is 1/2; however, when taking a second ball, the probability of it being either a red ball
or a blue ball depends on the ball previously taken. If a red ball was taken, the probability of
picking a red ball again would be 1/3, since only 1 red and 2 blue balls would have remained.
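The bag-of-balls example can be verified by enumerating every ordered pair of draws, a minimal sketch:

```python
# Sketch: conditional probability by enumerating ordered draws from the bag.
from itertools import permutations
from fractions import Fraction

balls = ["red", "red", "blue", "blue"]
draws = list(permutations(balls, 2))  # all ordered pairs, drawn without replacement

# P(first ball is red)
first_red = [d for d in draws if d[0] == "red"]
p_first_red = Fraction(len(first_red), len(draws))

# P(second red | first red) = (# both red) / (# first red)
both_red = [d for d in draws if d == ("red", "red")]
p_second_given_first = Fraction(len(both_red), len(first_red))

print(p_first_red)           # 1/2
print(p_second_given_first)  # 1/3
```

Counting outcomes directly like this is a useful sanity check on any conditional-probability calculation done with the formula.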

Bayes' Theorem

Bayes' theorem (also known as Bayes' rule) is a useful tool for calculating conditional probabilities. Bayes'
theorem can be stated as follows:

Bayes' theorem. Let A1, A2, ... , An be a set of mutually exclusive events that together form the sample
space S. Let B be any event from the same sample space, such that P(B) > 0. Then,

P( Ak | B ) = P( Ak ∩ B ) / [ P( A1 ∩ B ) + P( A2 ∩ B ) + . . . + P( An ∩ B ) ]

Note: Invoking the fact that P( Ak ∩ B ) = P( Ak )P( B | Ak ), Bayes' theorem can also be expressed as

P( Ak | B ) = P( Ak ) P( B | Ak ) / [ P( A1 ) P( B | A1 ) + P( A2 ) P( B | A2 ) + . . . + P( An ) P( B | An ) ]
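As a worked sketch of the second form, consider a hypothetical diagnostic test (the prevalence, sensitivity, and false-positive figures below are made-up illustration values, not from the text):

```python
# Sketch of Bayes' theorem with hypothetical numbers: a condition with 1%
# prevalence, a test with 95% sensitivity and a 5% false-positive rate.
priors = {"condition": 0.01, "no_condition": 0.99}        # P(A_k)
likelihood = {"condition": 0.95, "no_condition": 0.05}    # P(B | A_k), B = "test positive"

# Denominator: total probability of a positive result, summed over all A_k.
p_positive = sum(priors[a] * likelihood[a] for a in priors)

# P(condition | positive) by Bayes' theorem.
posterior = priors["condition"] * likelihood["condition"] / p_positive
print(round(posterior, 3))  # 0.161
```

Note how a positive result raises the probability of the condition from 1% to only about 16%, because false positives from the large healthy group dominate the denominator.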

Probability Distribution

A probability distribution provides the possible values of the random variable and their
corresponding probabilities. A probability distribution can be in the form of a table, graph or
mathematical formula.
Binomial Probability Distribution.

A binomial experiment is one that possesses the following properties:

1. The experiment consists of n repeated trials;


2. Each trial results in an outcome that may be classified as a success or a failure (hence the
name, binomial);
3. The probability of a success, denoted by p, remains constant from trial to trial; and
4. Repeated trials are independent.

The number of successes X in n trials of a binomial experiment is called a binomial random


variable.

The probability distribution of the random variable X is called a binomial distribution, and is
given by the formula:

P(X = x) = C(n, x) p^x q^(n−x)

where

n = the number of trials

x = 0, 1, 2, ... n

p = the probability of success in a single trial

q = the probability of failure in a single trial

(i.e. q = 1 − p)
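The binomial formula above can be written as a short function, a minimal sketch:

```python
# Sketch: the binomial formula P(X = x) = C(n, x) * p^x * q^(n-x).
from math import comb

def binomial_pmf(x, n, p):
    q = 1 - p
    return comb(n, x) * p**x * q**(n - x)

# Example: probability of exactly 3 heads in 5 tosses of a fair coin.
print(binomial_pmf(3, 5, 0.5))  # 0.3125
```

As a check, summing binomial_pmf over x = 0, 1, ..., n gives 1, as any probability distribution must.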

Poisson Probability Distribution

The Poisson Distribution was developed by the French mathematician Simeon Denis Poisson in
1837.

The Poisson random variable satisfies the following conditions:

1. The number of successes in two disjoint time intervals is independent.


2. The probability of a success during a small time interval is proportional to the
length of the interval.

Apart from disjoint time intervals, the Poisson random variable also applies to disjoint regions of
space.
Applications

 the number of deaths by horse kicking in the Prussian army (first application)
 birth defects and genetic mutations
 rare diseases (like Leukemia, but not AIDS because it is infectious and so not
independent) - especially in legal cases
 car accidents
 traffic flow and ideal gap distance
 number of typing errors on a page
 hairs found in McDonald's hamburgers
 spread of an endangered animal in Africa
 failure of a machine in one month

The probability distribution of a Poisson random variable X representing the number of successes
occurring in a given time interval or a specified region of space is given by the formula:

P(X = x) = (e^(−μ) μ^x) / x!

where

x=0,1,2,3…

e ≈ 2.71828 (but use your calculator's e button)

μ= mean number of successes in the given time interval or region of space
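The Poisson formula can likewise be sketched as a small function; the "typing errors" rate below is a hypothetical value chosen for illustration:

```python
# Sketch: the Poisson formula P(X = x) = e^(-mu) * mu^x / x!.
from math import exp, factorial

def poisson_pmf(x, mu):
    return exp(-mu) * mu**x / factorial(x)

# Example: if typing errors average 2 per page, the chance of exactly
# 3 errors on a given page.
print(round(poisson_pmf(3, 2), 4))  # 0.1804
```

As with the binomial case, summing the pmf over a long enough range of x values returns (essentially) 1.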


Normal Probability Distribution

A random variable X whose distribution has the shape of a normal curve is called a normal
random variable.

Normal Curve

This random variable X is said to be normally distributed with mean μ and standard deviation σ if
its probability distribution is given by

f(X) = (1 / (σ√(2π))) e^(−(x−μ)²/(2σ²))

Properties of a Normal Distribution

1. The normal curve is symmetrical about the mean μ;


2. The mean is at the middle and divides the area into halves;
3. The total area under the curve is equal to 1;
4. It is completely determined by its mean μ and standard deviation σ (or variance σ2)

Note:

In a normal distribution, only 2 parameters are needed, namely μ and σ2.

Area Under the Normal Curve using Integration

The probability of a continuous normal variable X found in a particular interval [a, b] is the area
under the curve bounded by x=a and x=b and is given by

P(a < X < b) = ∫ₐᵇ f(X) dx

and the area depends upon the values of μ and σ.


[See Area under a Curve for more information on using integration to find areas under curves.
Don't worry - we don't have to perform this integration - we'll use the computer to do it for us.]
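One way to let the computer evaluate this area is through the error function erf, which is closely related to the normal CDF, a minimal sketch:

```python
# Sketch: area under the normal curve via the error function, so no
# integration needs to be done by hand.
from math import erf, sqrt

def normal_area(a, b, mu, sigma):
    """P(a < X < b) for X ~ Normal(mu, sigma)."""
    def cdf(x):
        return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))
    return cdf(b) - cdf(a)

# Example: about 68.27% of values lie within one standard deviation
# of the mean.
print(round(normal_area(-1, 1, 0, 1), 4))  # 0.6827
```

The same function with wider limits recovers the familiar 95.45% (two standard deviations) and 99.73% (three standard deviations) figures.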

The Standard Normal Distribution

It makes life a lot easier for us if we standardize our normal curve, with a mean of zero and a
standard deviation of 1 unit.

If we have the standardized situation of μ = 0 and σ = 1, then we have:

f(X) = (1 / √(2π)) e^(−x²/2)

Standard Normal Curve μ = 0, σ = 1

We can transform all the observations of any normal random variable X with mean μ and
variance σ2 to a new set of observations of another normal random variable Z with mean 0 and
variance 1 using the following transformation:

Z = (X − μ) / σ
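The transformation is a one-line computation; the test-score numbers below are hypothetical illustration values:

```python
# Sketch: standardizing an observation of X ~ Normal(mu, sigma)
# to a z-score from Z ~ Normal(0, 1).
def standardize(x, mu, sigma):
    return (x - mu) / sigma

# Example: a score of 85 on a test with mean 70 and standard deviation 10
# lies 1.5 standard deviations above the mean.
z = standardize(85, 70, 10)
print(z)  # 1.5
```

Once standardized, probabilities for any normal variable can be looked up from a single standard normal table (or computed once, as in the area sketch for the standard curve).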
