
PROBABILITY THEORY & PROBABILITY DISTRIBUTIONS

INTRODUCTION
Probability Theory is a way in which we can study scientifically things that happen by
chance. Consider the following questions:
1. What are your chances of winning a raffle in which 325 people have bought 1 ticket each?
2. If a coin is to be tossed 4 times and on the first 3 trials ‘heads’ comes up, what are the
chances of getting ‘tails’ on the 4th trial?
3. If two dice are tossed is it more likely that you will get a ‘3’ and a ‘4’ thrown or a ‘1’ and a
‘1’?
4. What are the chances that Labour will win the next election?
Questions 1, 2 and 3 can be answered exactly. We have enough theory at our disposal and the
situations are sufficiently simple to be evaluated easily. We will answer them in the course of
this unit. Question 4 is far more difficult, as the conditions affecting the outcome are
complex, numerous and changing. Hence polls which try to give a simple answer to question
4 are often wrong.
Probability theory: A branch of mathematics concerned with the analysis of random
phenomena. The outcome of a random event cannot be determined before it occurs, but it
may be any one of several possible outcomes. The actual outcome is considered to be
determined by chance.
The word probability has several meanings in ordinary conversation. Some of these are
particularly important for the development and applications of the mathematical theory of
probability. One is the interpretation of probabilities as relative frequencies, for which simple
games involving coins, cards, dice, and roulette wheels provide examples.

ADDITION AND MULTIPLICATION LAW IN THEORY OF PROBABILITY


The Addition Law and the Multiplication Law are two important rules in probability theory.
The Addition Law of Probability states that the probability of the union of two events is
equal to the sum of the probabilities of each event, minus the probability of their overlap.
Symbolically, if A and B are two events, then:
P(A ∪ B) = P(A) + P(B) – P(A ∩ B)
where P(A ∪ B) is the probability of the union of A and B, P(A) is the probability of event
A, P(B) is the probability of event B, and P(A ∩ B) is the probability of the intersection of A
and B.
Example:
If the probabilities of solving a problem by two students, George and James, are 1/2 and 1/3
respectively, what is the probability that the problem is solved?

Solution: Let A be the event that George solves the problem and B the event that James
solves it. Then P(A) = 1/2 and P(B) = 1/3.
The problem is solved if at least one of them solves it.
So, we need to find P(A ∪ B).
By the addition theorem on probability, we have
P(A ∪ B) = P(A) + P(B) – P(A ∩ B).
Assuming the two students work independently, P(A ∩ B) = P(A) × P(B) = 1/6, so
P(A ∪ B) = 1/2 + 1/3 – 1/6 = (3 + 2 – 1)/6 = 4/6 = 2/3
Note:
 If A and B are any two mutually exclusive events, then P(A ∩ B) = 0.
Then P(A ∪ B) = P(A) + P(B).
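The worked example above can be checked with a few lines of Python (a minimal sketch; exact fractions are used to avoid rounding):

```python
from fractions import Fraction

p_a = Fraction(1, 2)  # P(A): George solves the problem
p_b = Fraction(1, 3)  # P(B): James solves the problem

# Assuming the two students work independently, P(A and B) = P(A) x P(B).
p_a_and_b = p_a * p_b

# Addition law: P(A or B) = P(A) + P(B) - P(A and B)
p_a_or_b = p_a + p_b - p_a_and_b

print(p_a_or_b)  # 2/3
```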

The Multiplication Law of Probability states that the probability of the intersection of two
events is the product of the probability of one event and the conditional probability of the
other event given the first: P(A ∩ B) = P(A | B) × P(B). Equivalently, if A and B are two
events with P(B) ≠ 0, then:
P(A | B) = P(A ∩ B) / P(B)
where P(A | B) is the conditional probability of A given B, P(A ∩ B) is the probability of
the intersection of A and B, and P(B) is the probability of event B.

Independent Events:
Two events A and B are said to be independent if the occurrence of one event does not
change the probability of the other event occurring.
i.e. Two events A and B are said to be independent if
P(A|B) = P(A) where P(B) ≠ 0
P(B|A) = P(B) where P(A) ≠ 0
i.e. A and B are independent if and only if P(A ∩ B) = P(A) × P(B).
Example:
When drawing a card from a well-shuffled pack, let A be the event of drawing a diamond
and B be the event of drawing an ace.
Then P(A) = 13/52 = 1/4 and P(B) = 4/52 = 1/13.
Now, A ∩ B = drawing the ace of diamonds.
Then P(A ∩ B) = 1/52.
Now, P(A|B) = P(A ∩ B)/P(B) = (1/52) / (1/13) = 1/4 = P(A).
So, A and B are independent. [Here, P(A ∩ B) = 1/52 = 1/4 × 1/13 = P(A) × P(B).]
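The card example can be verified by enumerating the 52-card deck and counting outcomes directly (a small illustrative check):

```python
from fractions import Fraction

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(rank, suit) for rank in ranks for suit in suits]

diamonds = [card for card in deck if card[1] == "diamonds"]          # event A
aces = [card for card in deck if card[0] == "A"]                     # event B
both = [card for card in deck if card in diamonds and card in aces]  # A and B

p_a = Fraction(len(diamonds), len(deck))   # 13/52 = 1/4
p_b = Fraction(len(aces), len(deck))       # 4/52 = 1/13
p_ab = Fraction(len(both), len(deck))      # 1/52

print(p_ab == p_a * p_b)  # True: A and B are independent
```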

BAYES’ THEOREM

A mathematical formula used to determine the conditional probability of events

In statistics and probability theory, Bayes’ theorem (also known as Bayes’ rule) is a
mathematical formula used to determine the conditional probability of events. Essentially,
Bayes’ theorem describes the probability of an event based on prior knowledge of the
conditions that might be relevant to the event.
The theorem is named after the English statistician Thomas Bayes, whose formulation of it
was published posthumously in 1763. It is considered the foundation of the statistical
inference approach called Bayesian inference.

Besides statistics, the Bayes’ theorem is also used in various disciplines, with medicine and
pharmacology as the most notable examples. In addition, the theorem is commonly employed
in different fields of finance. Some of the applications include but are not limited to,
modelling the risk of lending money to borrowers or forecasting the probability of the success
of an investment.
The theorem is expressed as:

P(A|B) = [P(B|A) × P(A)] / P(B)

Where:
 P(A|B) – the probability of event A occurring, given event B has occurred
 P(B|A) – the probability of event B occurring, given event A has occurred
 P(A) – the probability of event A
 P(B) – the probability of event B
Note that events A and B need not be independent; in fact, the theorem is most useful when
the probability of the outcome of event A does depend on whether event B has occurred. If
A and B were independent, P(A|B) would simply equal P(A).
A special case of the Bayes’ theorem is when event A is a binary variable. In such a case, the
theorem is expressed in the following way:

P(A+|B) = [P(B|A+) × P(A+)] / [P(B|A+) × P(A+) + P(B|A–) × P(A–)]

Where:
 P(B|A–) – the probability of event B occurring given that event A– has occurred
 P(B|A+) – the probability of event B occurring given that event A+ has occurred
In the special case above, events A– and A+ are mutually exclusive outcomes of event A.
Example of Bayes’ Theorem
Imagine you are a financial analyst at an investment bank. According to your research
of publicly-traded companies, 60% of the companies that increased their share price by more
than 5% in the last three years replaced their CEOs during the period.
At the same time, only 35% of the companies that did not increase their share price by more
than 5% in the same period replaced their CEOs. Knowing that the probability that the stock
prices grow by more than 5% is 4%, find the probability that the shares of a company that
fires its CEO will increase by more than 5%.
Before finding the probabilities, you must first define the notation of the probabilities.
 P(A) – the probability that the stock price increases by more than 5%
 P(B) – the probability that the CEO is replaced
 P(A|B) – the probability that the stock price increases by more than 5% given that the
CEO has been replaced
 P(B|A) – the probability that the CEO is replaced given that the stock price has
increased by more than 5%
Using Bayes’ theorem, we can find the required probability:

P(A|B) = [P(B|A) × P(A)] / [P(B|A) × P(A) + P(B|A–) × P(A–)]
= (0.60 × 0.04) / (0.60 × 0.04 + 0.35 × 0.96)
= 0.024 / 0.36 ≈ 0.0667

Thus, the probability that the shares of a company that replaces its CEO will grow by more
than 5% is about 6.67%.
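The CEO example can be reproduced numerically; here P(B) is expanded with the total-probability rule, since only P(B|A) and P(B|A–) are given:

```python
p_a = 0.04              # P(A): share price grows by more than 5%
p_b_given_a = 0.60      # P(B|A): CEO replaced, given the price grew
p_b_given_not_a = 0.35  # P(B|A-): CEO replaced, given the price did not grow

# Total probability of a CEO replacement:
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b

print(round(p_a_given_b, 4))  # 0.0667
```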

THEORETICAL PROBABILITY

Every one of us would have encountered multiple situations in life where we had to take a
chance or risk. Depending on the situation, it can be predicted up to a certain extent if a
particular event is going to take place or not. This chance of occurrence of a particular event
is what we study in probability. In our everyday life, we are more accustomed to the word
‘chance’ as compared to the word ‘probability’. Since Mathematics is all about quantifying
things, the theory of probability basically quantifies these chances of occurrence or non-
occurrence of certain events. In this article, we are going to discuss what is probability and its
two different types of approaches with examples.

What is Probability?
In Mathematics, probability is the branch that deals with the likelihood of the occurrence of
a given event. A probability value is a number in the range from 0 to 1. The three basic rules
connected with probability are the addition, multiplication, and complement rules.

Theoretical Probability Vs Experimental Probability


Probability theory can be studied using two different approaches:

 Theoretical Probability
 Experimental Probability

Theoretical Probability Definition


Theoretical probability determines the likelihood of an event by reasoning about the
situation rather than by running an experiment. To find the probability of an event using
theoretical probability, it is not required to conduct an experiment; instead, we use what we
know about the situation. The theoretical probability is defined as the ratio of the number of
favourable outcomes to the number of possible outcomes.
Probability of Event P(E) = Number of favourable outcomes / Number of possible outcomes

Experimental Probability Definition


Experimental probability, also known as empirical probability, is an approach that relies
upon actual experiments and adequate recording of the occurrence of certain events, while
theoretical probability attempts to predict what will happen based upon the total number of
possible outcomes. Experimental probability is defined as the ratio of the number of times
an event occurs to the total number of trials.
Probability of Event P(E) = Number of times the event occurs / Total number of trials
The basic difference between these two approaches is that in the experimental approach the
probability of an event is based on what has actually happened in a series of experiments,
while in the theoretical approach we attempt to predict what will occur without actually
performing the experiments.

Theoretical probability Example


Question:
Find the probability of rolling a 5 on a fair die
Solution:
To find the probability of getting 5 while rolling a die, an experiment is not needed. We know
that there are 6 possible outcomes when rolling a die. They are 1, 2, 3, 4, 5, 6.
Therefore, the probability is,
Probability of Event P(E) = Number of favourable outcomes / Number of possible outcomes
P(E) = 1/6.
Hence, the probability of getting 5 while rolling a fair die is 1/6.
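The theoretical value can be compared against an experimental estimate by simulating die rolls (a small sketch; the seed and trial count are arbitrary):

```python
import random

theoretical = 1 / 6  # one favourable outcome out of six possible outcomes

random.seed(42)      # fixed seed so the simulation is reproducible
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 5)
experimental = hits / trials

print(theoretical, experimental)  # experimental is close to 0.1667
```

With enough trials, the experimental estimate converges toward the theoretical value, which is the basic difference between the two approaches described above.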

BINOMIAL DISTRIBUTION

A common probability distribution that models the probability of obtaining one of two
outcomes under a given number of parameters

What is Binomial Distribution?

Binomial distribution is a common probability distribution that models the probability of
obtaining one of two outcomes under a given number of parameters. It summarizes the
number of trials when each trial has the same chance of attaining one specific outcome. The
expected value of a binomial distribution is obtained by multiplying the number of
independent trials by the probability of success.

For example, when tossing a coin, the probability of obtaining a head is 0.5. If there are 50
trials, the expected value of the number of heads is 25 (50 x 0.5). The binomial distribution
is used in statistics as a building block for dichotomous variables, such as the likelihood that
either candidate A or candidate B will emerge in position 1 in the midterm exams.

Criteria of Binomial Distribution

Binomial distribution models the probability of occurrence of an event when specific criteria
are met. Binomial distribution involves the following rules that must be present in the process
in order to use the binomial probability formula:

1. Fixed trials

The process under investigation must have a fixed number of trials that cannot be altered in
the course of the analysis. During the analysis, each trial must be performed in a uniform
manner, although each trial may yield a different outcome.
In the binomial probability formula, the number of trials is represented by the letter “n.” An
example of a fixed trial may be coin flips, free throws, wheel spins, etc. The number of times
that each trial is conducted is known from the start. If a coin is flipped 10 times, each flip of
the coin is a trial.

2. Independent trials

The other condition of a binomial probability is that the trials are independent of each other.
In simple terms, the outcome of one trial should not affect the outcome of the subsequent
trials.

When using certain sampling methods, there is a possibility of having trials that are not
completely independent of each other, and binomial distribution may only be used when the
size of the population is large vis-a-vis the sample size.

An example of independent trials may be tossing a coin or rolling a die. When tossing a
coin, the first event is independent of the subsequent events.

3. Fixed probability of success

In a binomial distribution, the probability of getting a success must remain the same for the
trials we are investigating. For example, when tossing a coin, the probability of obtaining
heads is ½ or 0.5 for every trial we conduct, since there are only two possible outcomes.

In some sampling techniques, such as sampling without replacement, the probability of
success from each trial may vary from one trial to the other. For example, assume that there
are 50 boys in a population of 1,000 students. The probability of picking a boy from that
population is 0.05.

In the next trial, there will be 49 boys out of 999 students. The probability of picking a boy in
the next trial is 0.049. It shows that in subsequent trials, the probability from one trial to the
next will vary slightly from the prior trial.

4. Two mutually exclusive outcomes

In binomial probability, there are only two mutually exclusive outcomes, i.e., success or
failure. While success is generally a positive term, it can be used to mean that the outcome of
the trial agrees with what you have defined as a success, whether it is a positive or negative
outcome.

For example, when a business receives a consignment of lamps with a lot of breakages, the
business can define success for the trial to be every lamp that has broken glass. A failure can
be defined as when the lamps have zero broken glasses.

In our example, the instances of broken lamps may be used to denote success, as a way of
showing that a high proportion of the lamps in the consignment is broken and that there is a
low probability of getting a consignment of lamps with zero breakages.

Example of Binomial Distribution


Suppose, according to the latest police reports, 80% of all petty crimes are unsolved, and in
your town three such petty crimes are committed. The three crimes are all independent of
each other. From the given data, what is the probability that exactly one of the three crimes
will be solved?

Solution

The first step in finding the binomial probability is to verify that the situation satisfies the
four rules of binomial distribution:

 Number of fixed trials (n): 3 (Number of petty crimes)


 Number of mutually exclusive outcomes: 2 (solved and unsolved)
 The probability of success (p): 0.2 (20% of cases are solved)
 Independent trials: Yes

Next:

We find the probability that exactly one of the crimes will be solved in the three independent
trials. The possible cases are shown as follows:

Case 1 = Solved 1st, unsolved 2nd, and unsolved 3rd

= 0.2 x 0.8 x 0.8

= 0.128

Case 2 = Unsolved 1st, solved 2nd, and unsolved 3rd

= 0.8 x 0.2 x 0.8

= 0.128

Case 3 = Unsolved 1st, unsolved 2nd, and solved 3rd

= 0.8 x 0.8 x 0.2

= 0.128

Total (for the three cases):

= 0.128 + 0.128 + 0.128

= 0.384

Alternatively, we can apply the information in the binomial probability formula:

P(x) = [n! / (x! (n − x)!)] × p^x × (1 − p)^(n − x)

where n is the number of trials, x is the number of successes, and p is the probability of
success. In the equation, x = 1, n = 3, and p = 0.2, which gives
P(1) = 3 × 0.2 × 0.8² = 0.384.
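The same answer falls out of the general formula, sketched here with Python's math.comb:

```python
from math import comb

def binomial_pmf(x, n, p):
    """Probability of exactly x successes in n independent trials,
    each with success probability p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Exactly one of the three petty crimes is solved (p = 0.2):
print(binomial_pmf(1, 3, 0.2))  # 0.384 (up to floating-point rounding)
```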

POISSON DISTRIBUTION

The Poisson distribution is a discrete probability distribution that represents the number of
times an event occurs in a fixed interval of time or space. It is used to model the count of
events that happen randomly and independently, given the average number of times the event
occurs in the given interval.

The Poisson distribution is characterized by a single parameter, lambda (λ), which is the
average number of events per interval. The probability of observing k events in an interval is
given by the following formula:

P(k) = (λ^k * e^-λ) / k!

The Poisson distribution is commonly used in a variety of fields, such as biology,
engineering, finance, and many others, where the count of events is of interest. For example,
it can be used to model the number of calls received by a call center in a given hour, the
number of customers arriving at a store in a given day, or the number of meteor impacts on a
certain area in a given year.

1. An example involving time intervals

Suppose a traffic analysis company wants to study the number of cars that pass through a
busy intersection in a given time interval. Based on historical data, they know that on
average, 50 cars pass through the intersection every hour.
We can use the Poisson probability distribution to calculate the probability of different
numbers of cars passing through the intersection in a given time interval. Note that the rate
must match the interval: 50 cars per hour corresponds to an average of λ = 25 cars per
30-minute interval. For example, to find the probability of exactly 40 cars passing through
the intersection in a 30-minute interval, we use the formula:

P(X = 40) = (e^-25) * (25^40) / 40!

where X is the number of cars and e is the base of the natural logarithm. Evaluating the
expression, we get:

P(X = 40) = (e^-25) * (25^40) / 40! ≈ 0.0014

So, the probability of exactly 40 cars passing through the intersection in a 30-minute time
interval is about 0.0014, or approximately 0.14%.
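A short Python check of this calculation; note that the hourly rate of 50 cars corresponds to a mean of λ = 25 for a 30-minute interval:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events when the mean count per interval is lam."""
    return lam**k * exp(-lam) / factorial(k)

lam = 50 / 2  # 50 cars per hour -> 25 cars per 30-minute interval
print(round(poisson_pmf(40, lam), 4))  # about 0.0014
```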

2. An example involving counts in a fixed time interval

Suppose that a post office receives an average of 20 letters per hour. We can use the Poisson
probability distribution to find the probability that the post office will receive exactly 15
letters in a given hour.

The Poisson probability formula is given by:

P(k) = (e^-μ) (μ^k) / k!

where μ is the average number of events and k is the actual number of events.

Plugging in μ = 20 and k = 15, we get:

P(15) = (e^-20) (20^15) / 15! ≈ 0.052

So the probability that the post office will receive exactly 15 letters in a given hour is about
0.052, or 5.2%.
We can also find the probability that the post office will receive between 10 and 15 letters in
a given hour. To do this, we can calculate the probabilities for each value within the interval
(10, 11, 12, 13, 14, 15) and sum them up.
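Both probabilities can be computed directly; the interval probability is simply the sum of the individual Poisson terms:

```python
from math import exp, factorial

def poisson_pmf(k, mu):
    """Probability of exactly k events when the average count is mu."""
    return mu**k * exp(-mu) / factorial(k)

mu = 20  # average letters per hour

p_15 = poisson_pmf(15, mu)                                   # exactly 15 letters
p_10_to_15 = sum(poisson_pmf(k, mu) for k in range(10, 16))  # 10 to 15 inclusive

print(round(p_15, 4), round(p_10_to_15, 4))
```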

NORMAL DISTRIBUTION

The most important probability distribution for describing a continuous random variable is
the normal probability distribution. It is used in a wide variety of practical applications in
which the random variables are heights and weights of people, test scores, scientific
measurements, amounts of rainfall, and so on. The normal distribution provides a description
of the likely results obtained through sampling.

Normal Curve

The normal curve, also known as the Gaussian curve or bell curve, is a graphical
representation of the normal probability distribution. It is a symmetrical, continuous, and
smooth curve that represents the distribution of a set of data.

The normal curve is defined by its mean (μ) and standard deviation (σ), which determine the
shape and spread of the distribution. The mean is located at the center of the curve and
represents the average value of the data. The standard deviation determines the width of the
curve and represents how much the data deviates from the mean.

In a normal distribution, approximately 68% of the observations fall within one standard
deviation of the mean, 95% within two standard deviations, and 99.7% within three standard
deviations. These properties are often used in statistical analysis and hypothesis testing to
make predictions and inferences about a population based on a sample.

The normal curve is also used to calculate probabilities using the cumulative distribution
function (CDF) and the inverse cumulative distribution function (ICDF). The CDF returns the
probability that a random variable X is less than or equal to a specified value, and the ICDF
returns the value for which the CDF is equal to a specified probability.
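The CDF and ICDF can be sketched with Python's built-in statistics.NormalDist; the mean and standard deviation below are illustrative values, not from the text:

```python
from statistics import NormalDist

# Hypothetical distribution of adult heights: mean 170 cm, sd 10 cm.
heights = NormalDist(mu=170, sigma=10)

# CDF: P(X <= 180), one standard deviation above the mean.
p = heights.cdf(180)
print(round(p, 4))  # 0.8413

# ICDF (inverse CDF): the height below which 95% of observations fall.
q = heights.inv_cdf(0.95)
print(round(q, 1))  # 186.4
```

The first result illustrates the 68% rule quoted above: about 84% of observations lie below one standard deviation above the mean, leaving roughly 68% within one standard deviation on either side.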

Standard Normal Probability Distribution

The standard normal probability distribution is a special case of the normal probability
distribution with a mean of 0 and a standard deviation of 1. It is also known as the standard
normal distribution or the z-distribution.
The standard normal distribution is widely used in statistical analysis as a standard reference
distribution to which other normal distributions can be transformed. By transforming a
normal distribution to a standard normal distribution, it becomes easier to perform
calculations and make inferences about the distribution.

In a standard normal distribution, the cumulative distribution function (CDF) can be used to
calculate the probability that a random variable X is less than or equal to a specified value.
The inverse cumulative distribution function (ICDF) can also be used to find the value for
which the CDF is equal to a specified probability.

A common use of the standard normal distribution is in hypothesis testing, where it is used to
test the significance of a sample mean. The sample mean can be transformed into a z-score,
which represents the number of standard deviations that the sample mean is from the
population mean. The z-score can then be used to calculate the probability of observing a
sample mean as extreme or more extreme than the one observed, given that the population
mean is equal to the hypothesized value.
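A z-test along these lines can be sketched as follows (the sample figures are invented for illustration):

```python
from math import sqrt
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, standard deviation 1

sample_mean = 52   # hypothetical sample mean
pop_mean = 50      # hypothesised population mean
pop_sd = 10        # known population standard deviation
n = 25             # sample size

# z-score: how many standard errors the sample mean is from the population mean.
z = (sample_mean - pop_mean) / (pop_sd / sqrt(n))

# Two-sided p-value: probability of a result at least this extreme.
p_value = 2 * (1 - std_normal.cdf(abs(z)))

print(z, round(p_value, 4))  # 1.0 0.3173
```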

What Is Decision Theory?

Decision theory refers to a range of econometric and statistical tools for analyzing an
individual’s choices. In other words, it lets the entity make the best logical decision possible
when dealing with uncertain and unknown circumstances. Analysts call it a theory of choice,
relying on beliefs, attitudes, and desires.

 Decision theory involves economic and statistical approaches for studying an individual’s
choices. Because it is based on ideas, attitudes, and wishes, analysts refer to it as a theory of
choice.
 It enables the entity to make the most rational decision feasible in unknown and uncertain
conditions, repercussions, and behaviors.
 Companies worldwide utilize this theory to understand better how customers make decisions
and market trends to make better business decisions.
 Mathematicians, economists, marketers, data and social scientists, biologists, psychologists,
philosophers, and politicians use two theory forms: normative and descriptive.

Examples

Let us look at the following decision theory examples to understand the concept better:

Example #1

Firm A has had a monopoly in the industry for more than a decade. It is the market leader for
various reasons, such as quality, pricing, brand value, recognition and awareness, goodwill,
availability, good customer service, marketing strategy, seasonal product launches, etc.
Therefore, there was no rivalry for firm A at first.
Company B, a new competitor, is about to enter the market. It uses decision theory in
statistics to figure out how company A became the market leader, why customers prefer its
products, what company A offers that makes it a popular choice, and when and where
company A sells the most. It helps firm B in deciphering client demand and behavior.

Example #2

People are, to a large extent, the results of their own choices. At the same time, companies
can read, attract, and influence users’ minds, which complicates the assumptions of
decision-making theory. Social media and online streaming platforms track viewers’ choices,
such as the pages they visit and the content they like, dislike, share, and post.

Netflix, for example, uses such viewing data in its recommendation algorithm to attract
viewers and keep them watching. In addition, a new web series is often released a day before
a holiday or only on weekends, a specific type of content is always present in the Instagram
feed, and one particular YouTube video suggestion keeps returning over and over again.
STEPS IN DECISION MAKING APPROACH
1. Define the problem: Clearly state the problem and determine the decision variables,
uncertainties, and objectives.
Defining the problem clearly is a crucial step in the decision-making approach in probability.
A well-defined problem provides a clear understanding of the situation, the goals to be
achieved, and the possible outcomes. This information is essential for making informed
decisions, as it allows us to:
 Assess the available options: By defining the problem clearly, we can identify the
various options available for addressing the issue.

 Evaluate the risks and uncertainties: Understanding the problem helps us assess the
likelihood of different outcomes, which in turn allows us to determine the potential
risks and uncertainties associated with each option.

 Apply probability and statistical analysis: A well-defined problem allows us to perform
probability and statistical analysis to estimate the likelihood of different outcomes and
make informed decisions based on the results.

 Evaluate the potential outcomes: Understanding the problem and the possible
outcomes helps us evaluate the potential outcomes of each option and select the one
that is most likely to result in the desired outcome.

In summary, defining the problem is a critical step in the decision-making approach in
probability, as it provides the foundation for making informed decisions based on a sound
understanding of the situation and the potential outcomes.

2. Identify the possible outcomes: List all the possible outcomes and their probabilities. It
also helps to make decisions by providing a clear understanding of the various possibilities
and their likelihoods. This information is crucial in determining the most appropriate course
of action, as it allows individuals to weigh the risks and benefits of different decisions. For
example, if an individual is deciding whether to invest in a particular stock, they may
consider the possible outcomes of the investment, such as the stock going up in value or
losing value. This information can then be used to make an informed decision based on the
likelihood of each outcome and the individual's risk tolerance. By identifying possible
outcomes, decision-makers can better understand the potential consequences of their
decisions and make more informed choices.

3. Assess the decision criteria: Determine the consequences of each possible outcome,
including the costs and benefits.
It also helps to make decisions by providing a framework for evaluating different options and
determining the most favourable one based on a set of predetermined values or priorities.
Decision criteria are the standards or principles used to determine the best course of action.

When making decisions in probability, the decision criteria can include factors such as:
 Risk tolerance - considering the level of uncertainty associated with different options
and how it aligns with the decision maker's risk appetite.

 Cost-benefit analysis - evaluating the expected costs and benefits of each option and
selecting the one that offers the best return on investment.

 Probability of success - considering the likelihood of different outcomes occurring and
how it impacts the overall success of the decision.

 Timing - considering the time frame in which the decision needs to be made and the
impact it will have on the outcome.

By assessing the decision criteria, decision makers can objectively evaluate different options
and make informed decisions based on their own priorities and values. It helps to eliminate
personal bias and ensures that the decision is made in a systematic and rational manner.

4. Evaluate the alternatives: Evaluate the alternatives by calculating the expected values of
each decision, considering the probabilities and consequences of each outcome.
When evaluating the alternatives that inform a decision, it is important to consider the
following factors:

 Accuracy: How accurate is the information or advice being provided? Have they
considered all relevant factors and made a fair assessment of the probability?
 Relevance: Is the information or advice relevant to the situation at hand? Will it
actually help make the decision?
 Reliability: How reliable is the source of information or advice? Have they been
accurate in their predictions or assessments in the past?
 Bias: Is there any bias in the information or advice? Does the person providing it have a
stake in the outcome?

5. Select the best decision: Select the alternative with the highest expected value or the
alternative that maximizes the objective. Selecting the best decision in probability is
important because it allows you to make informed decisions based on the likelihood of
potential outcomes. The key to making effective decisions in probability is to weigh the
probabilities of each outcome and choose the one that has the highest probability of success.

To do this, you need to consider the following steps:

 Identify all possible outcomes: Consider all of the potential outcomes of a decision
and list them out.

 Assign probabilities to each outcome: Use your knowledge of the situation and any
data available to determine the likelihood of each outcome.
 Determine the expected value of each outcome: Multiply the probability of each
outcome by its value and add them together to get the expected value of each
outcome.

 Choose the outcome with the highest expected value: The outcome with the highest
expected value is the best decision to make.

By following these steps, you can make decisions in probability that are based on the most
accurate and relevant information available. This helps to minimize the risk of making
incorrect decisions and maximizes your chances of success.
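Steps 2 to 5 above can be sketched in a few lines; the options, payoffs, and probabilities below are invented purely for illustration:

```python
# Each option maps to a list of (probability, payoff) pairs for its outcomes.
options = {
    "option A": [(0.6, 1000), (0.4, -500)],
    "option B": [(0.9, 300), (0.1, -100)],
}

def expected_value(outcomes):
    # Multiply each probability by its value and sum over all outcomes.
    return sum(p * v for p, v in outcomes)

# Choose the alternative with the highest expected value.
best = max(options, key=lambda name: expected_value(options[name]))
print(best, expected_value(options[best]))  # option A 400.0
```

Note that expected value alone ignores risk tolerance: option B has the lower expected value here but also the smaller possible loss, which a risk-averse decision maker might prefer.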

6. Implement the decision: Take action to implement the selected alternative. Several
probabilistic methods can support this and the preceding steps:

 Maximum likelihood estimation: This method involves estimating the probability of
an event based on the available data. The decision is then made based on the highest
probability.

 Bayes’ theorem: This theorem involves updating the probability of an event based on
new information. It helps in making informed decisions based on the updated
probability.

 Utility theory: This theory involves calculating the expected value of a decision based
on the probability of each outcome. The decision with the highest expected value is
chosen.

 Monte Carlo simulation: This method involves creating a model of the decision-
making process and then simulating it several times to determine the best decision.

 Decision trees: This is a visual representation of the decision-making process. It involves
identifying the different outcomes of a decision and the probability of each outcome.

 Decision matrix: This method involves evaluating the decision based on a set of
criteria and assigning a weight to each criterion. The decision with the highest overall
score is chosen.
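A weighted decision matrix of this kind can be sketched as follows (the criteria, weights, and scores are hypothetical):

```python
# Weights must sum to 1; each alternative is scored 1-10 against each criterion.
weights = {"cost": 0.5, "risk": 0.3, "timing": 0.2}

scores = {
    "option A": {"cost": 8, "risk": 5, "timing": 6},
    "option B": {"cost": 6, "risk": 9, "timing": 7},
}

def weighted_score(option):
    # Overall score: weight x score, summed over all criteria.
    return sum(weights[c] * s for c, s in scores[option].items())

best = max(scores, key=weighted_score)
print(best, round(weighted_score(best), 2))  # option B 7.1
```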

In conclusion, it is important to understand the underlying probability and use the appropriate
method to make informed decisions.

7. Evaluate the results: Evaluate the results of the decision by comparing the actual
outcomes with the expected outcomes.
 Confidence Interval: By evaluating the results of a probability experiment, we can
calculate the confidence interval that gives us an idea of the range of possible values
for a population parameter. This information can help us make decisions about the
reliability of our results and whether we need to collect more data.
 Hypothesis Testing: Evaluating the results of a probability experiment can also help us
perform hypothesis testing. We can compare the observed results with the expected
results to see if our hypothesis is supported or rejected. This information can be used
to make decisions about future actions based on the results of the experiment.
 Decision-Making: The results of a probability experiment can also be used to make
decisions about the best course of action in a particular situation. For example, if the
results show that a certain event is highly likely to occur, then we may decide to take
action to prevent or mitigate its impact.

In conclusion, evaluating the results of a probability experiment is important in helping us
make decisions because it provides us with valuable information about the reliability of our
results, supports hypothesis testing, and can be used to inform decision-making.
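As a minimal sketch of the confidence-interval idea above, a 95% confidence interval for a population mean can be computed from sample data using the normal approximation (the sample values below are made up):

```python
import math

# 95% confidence interval for the mean using the normal approximation.
# The sample below is hypothetical.
sample = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]

n = len(sample)
mean = sum(sample) / n
# sample standard deviation (n - 1 in the denominator)
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

z = 1.96  # z-value for 95% confidence
margin = z * sd / math.sqrt(n)

lower, upper = mean - margin, mean + margin
print(f"95% CI: ({lower:.3f}, {upper:.3f})")
```

A wider interval signals less reliable results and may indicate that more data should be collected, exactly as described above.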

Types of Decision-Making Environment:

The main aim of decision theory is to help the decision-maker in selecting the best course of
action from among the available courses of action. Based upon the type of information that
is available about the occurrence of the various states of nature in the decision environment,
decision models have been classified into four types: certainty, risk, uncertainty, and conflict.
Outcome State  Explanation
Certainty      Complete and accurate knowledge of the outcome of each alternative;
               there is only one outcome for each alternative.
Risk           Multiple possible outcomes of each alternative can be identified, and
               a probability of occurrence can be attached to each.
Uncertainty    Multiple outcomes for each alternative can be identified, but there
               is no knowledge of the probability to be attached to each.

Decision-making Under Certainty: In this environment, the decision-maker knows with
certainty the consequence of selecting every course of action or decision choice. In this type
of decision problem the decision-maker presumes that only one state of nature is relevant for
his purposes. He identifies this state of nature, takes it for granted and presumes complete
knowledge as to its occurrence. For example, suppose a person has Rs. 5,00,000 to invest for
a one-year period. One alternative is to open a savings account paying 3.5 per cent interest
and another is to invest in a government treasury note paying 9 per cent interest. If both
investments are secure and guaranteed, then there is a certainty that the treasury note will be
the better investment.
The various techniques used for solving problems under certainty are: (i) System of
equations, (ii) Linear programming, (iii) Integer programming, (iv) Dynamic programming,
(v) Queuing models, (vi) Inventory models, (vii) Capital budgeting analysis, (viii) Break-even
analysis, etc.
Decision-making under Risk: In this situation, the decision-maker faces several states of
nature. But he is supposed to have believable evidential information, knowledge, experience
or judgment to enable him to assign probability values to the likelihood of occurrence of each
state of nature. Probabilities could be assigned to future events by reference to similar
previous experience and information. Past experience or past records often enable the
decision-maker to assign probability values to the likely states of nature. Knowing the
probability distributions of the states of nature, the best decision is to select that course of
action which has the largest expected payoff value.

For problems involving risk situations, the most widely used decision criterion for evaluating
the alternative courses of action is the Expected Monetary Value (EMV) (or expected
payoff). The objective of decision-making here is to optimize the expected payoff, which may
mean either maximization of expected profit or minimization of expected regret.
Expected Monetary Value (EMV): Given a payoff table with conditional values (payoffs)
and probability assessments for all states of nature, it is possible to determine the expected
monetary value (EMV) for each course of action, as if the decision could be repeated a large
number of times. The EMV for a given course of action is the sum of the possible payoffs of
the alternative, each weighted by the probability of that payoff occurring.
Suppose there is prior knowledge, either on the basis of past experience or on a subjective
basis, that the state of nature Sj has a probability of occurrence P(Sj) [j = 1, 2, …, m]. The
expected monetary value corresponding to course of action Ai of the decision-maker is given
by:

EMV(Ai) = pi1 P(S1) + pi2 P(S2) + pi3 P(S3) + … + pij P(Sj) + … + pim P(Sm)

where pij is the conditional payoff of course of action Ai under state of nature Sj.
Steps for calculating EMV: The various steps involved are as follows:
1. Construct payoff table listing all possible courses of action and states of nature.
2. List the conditional payoff values associated with each possible combination of course of
action and state of nature along with the corresponding probabilities of the occurrence of each
state of nature.
3. Calculate the EMV for each course of action by multiplying the conditional payoffs by the
associated probabilities and adding these weighted values for each course of action.
4. Select the course of action that yields the optimum EMV.

Expected Value of Perfect Information: The expected value with perfect information is the
expected or average return, in the long run, if we have perfect information before a decision
has to be made. In order to calculate this value, we choose the best alternative for each state
of nature and multiply its payoff by the probability of occurrence of that state of nature.
EVPI = Expected value with perfect information - Maximum EMV
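Using the same kind of hypothetical payoff table, EVPI can be computed as a sketch:

```python
# EVPI = expected value with perfect information - maximum EMV.
# With perfect information we pick the best payoff under each state of
# nature before weighting by that state's probability.
# The payoff table and probabilities below are hypothetical.

probabilities = [0.3, 0.5, 0.2]  # P(S1), P(S2), P(S3)
payoffs = {
    "A1": [100, 60, -20],
    "A2": [80, 80, 40],
}

# Expected value with perfect information: best payoff per state, weighted
evwpi = sum(p * max(payoffs[a][j] for a in payoffs)
            for j, p in enumerate(probabilities))

# Maximum EMV among the courses of action
max_emv = max(sum(p * q for q, p in zip(v, probabilities))
              for v in payoffs.values())

evpi = evwpi - max_emv
print(evpi)  # 78 - 72 = 6
```

Here the expected value with perfect information is 78 and the maximum EMV is 72, so perfect information is worth at most 6; paying more than that for a perfect forecast would not be rational.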
Decision-making under Uncertainty: Under this condition, the probabilities associated with
the occurrence of different states of nature are not known, i.e., there is no historical data
available and no relative frequency which could indicate the probability of the occurrence of
a particular state of nature. In other words, the decision-maker has no way of calculating the
expected payoff for his courses of action or strategies. Such situations arise when a new
product is introduced in the market or a new plant is set up. In business situations there are
many problems of this nature, and here the choice of a course of action is very largely
dependent on the personality of the decision-maker and the policy of an organization. A
number of different decision criteria available under the condition of uncertainty are given
below:

1. The Criterion of Pessimism or Maximin: This is also called the 'Waldian criterion'
   because it was suggested by Abraham Wald. Under this criterion, the decision-maker
   takes the course of action which maximizes the minimum possible payoff. The
   decision-maker lists the minimum outcome within every course of action or alternative
   strategy and then selects the strategy with the maximum number. Since this decision
   criterion locates the alternative strategy that has the least possible loss, it is also known
   as a pessimistic decision criterion.

2. The Criterion of Optimism or Maximax: This criterion is the reverse of the above
   criterion and was suggested by Leonid Hurwicz. The maximax criterion finds the course
   of action or alternative strategy that maximizes the maximum payoff. The decision-maker
   first locates the maximum payoff associated with every course of action or alternative
   strategy, and then selects the alternative with the maximum number. Since this decision
   criterion locates the alternative with the highest possible gain, it has also been called an
   optimistic decision criterion. Thus, this criterion corresponds to identifying the best
   possible outcome (maximum payoff) associated with each course of action and then
   choosing the maximum of these maximum values in order to select the optimal course of
   action.

3. Minimax Regret Criterion: This criterion is attributed to Leonard Savage and is used to
   identify the regret (or opportunity loss) associated with each state of nature if a particular
   course of action is undertaken. Under this criterion, for each conditional profit (payoff) or
   cost value, a regret value is calculated equal to the difference between the maximum
   payoff under a state of nature and the payoff resulting from each course of action under
   that state, i.e., regret = maximum payoff under the state of nature - payoff. Then obtain
   the maximum regret for each course of action and select the course of action with the
   minimum of the maximum regret values.

4. Criterion of Realism (Hurwicz Criterion): This criterion is a compromise between an
   optimistic and a pessimistic decision criterion. To start with, a coefficient of optimism α
   (0 ≤ α ≤ 1) is selected. When α is close to one, the decision-maker is optimistic about the
   future, and when α is close to zero, the decision-maker is pessimistic about the future.
   According to Hurwicz, select the strategy which maximizes:
   H = α (Maximum payoff in column) + (1 - α) (Minimum payoff in column)

5. Criterion of Rationality (Bayes' or Laplace Criterion): This criterion is based on the
   principle of insufficient reason or equal likelihood and was developed by Thomas Bayes
   and supported by Pierre-Simon Laplace. The decision-maker first calculates the average
   outcome for every course of action (which is the sum of all outcomes divided by the
   number of outcomes) and then selects the one with the maximum number. (In other
   words, this rule is based on the assumption that the probabilities of the different states of
   nature are all equal.)
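The five criteria above can be applied to a single hypothetical payoff table as a sketch (the payoffs and the value of α are made-up examples):

```python
# Decision criteria under uncertainty, applied to one hypothetical
# payoff table. Keys are courses of action; list entries are payoffs
# under states of nature S1, S2, S3.
payoffs = {
    "A1": [100, 60, -20],
    "A2": [80, 80, 40],
    "A3": [60, 50, 50],
}
states = 3

# 1. Maximin (pessimism): best of the worst payoffs.
maximin = max(payoffs, key=lambda a: min(payoffs[a]))

# 2. Maximax (optimism): best of the best payoffs.
maximax = max(payoffs, key=lambda a: max(payoffs[a]))

# 3. Minimax regret: regret = column maximum minus payoff under each
#    state; minimize the maximum regret per action.
col_max = [max(payoffs[a][j] for a in payoffs) for j in range(states)]
regret = {a: max(col_max[j] - payoffs[a][j] for j in range(states))
          for a in payoffs}
minimax_regret = min(regret, key=regret.get)

# 4. Hurwicz: H = alpha * max + (1 - alpha) * min; alpha = 0.6 (made up).
alpha = 0.6
hurwicz = max(payoffs,
              key=lambda a: alpha * max(payoffs[a])
              + (1 - alpha) * min(payoffs[a]))

# 5. Laplace: equal probabilities, so take the highest average payoff.
laplace = max(payoffs, key=lambda a: sum(payoffs[a]) / states)

print(maximin, maximax, minimax_regret, hurwicz, laplace)
```

On this table the pessimistic rule picks A3 (its worst payoff, 50, is the largest minimum) and the optimistic rule picks A1 (its best payoff, 100, is the largest maximum), while minimax regret, Hurwicz with α = 0.6, and Laplace all pick A2; the criteria can genuinely disagree, which is why the choice of criterion matters.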

DECISION MAKING UNDER UNCERTAINTY AND UNDER RISK

DECISION MAKING:
Decision-making is a whole process, and managers have to go through specific steps to make
effective decisions. There are many factors that influence decision-making, one of which is
the environment in which a decision is made.
Every decision-making environment has its characteristics and is associated with certain pros
and cons. Some environments pose more risk to decision-making than others and hence need
a different strategy to handle.
In this article, I’ll explain everything about the different types of decision-making
environments and how they play a key role in making decisions. Also, we’ll compare
different decision environments near the end and decide which one is the riskiest.
Decision making under uncertainty.
As you might have guessed, this is quite the opposite of the previous type of environment we
looked at. Uncertainty is when a manager fails to get enough information needed to make a
sound decision, or, even if he does, it is corrupted, biased, or comes from unreliable sources.

This decision-making environment is a nightmare for managers. Not only is it very confusing
which path to follow, but when you start thinking in a certain direction, other possibilities
begin to cast doubt on it and seem more appropriate.

The variables and the data available are not credible; they keep changing, which can make
the decision-maker panic. Such situations need managers who are able to handle pressure
and make the best out of any situation. A combination of a proactive approach and effective
stress management is required.
Let’s consider an example.
Consider you’re a football team coach. There are two players of your team playing on the
field. By the end of the match, you need to drop one of them. However, the problem is,
both are playing at the best of their abilities, and it’s hard to separate them.

One is playing better one instant, and the next minute, the other one exceeds him. Since
their performance isn’t following a certain trend or pattern, it gets tough for you to choose
which one to drop and which one to keep.
This type of decision-making environment is referred to as uncertain since a lot is going on,
which leads to nothing but confusion.

Decision-Making Under Risk


The last type of decision-making environment is the risky environment. A risk environment
is one in which probabilities can be attached to the multiple possible outcomes tied to a
decision. You’re never sure about the outcomes of your decision beyond calculated guesses.
Such decisions are associated with events that could either be very successful or quite
disastrous for the organization.
When you’re faced with such problems, you will have some data available related to the
situation, but it’s all a game of probabilities. The past experiences of managers play a huge
role, and they often have to take a good look at their past when confronted with such
decisions.

The best course of action to take in risky environments is first to analyze the risk of all the
alternative actions based on the information available to you.
Once you have done the risk calculation for all the possible alternatives, choose the one with
the smallest amount of risk involved.

Nowadays, almost every decision made in huge organizations is based on risk factors. This is
the reason why there are whole departments dedicated to risk management. Moreover,
hundreds of tools are available that help you with risk calculation if you have enough
information, which makes it relatively more manageable than it would’ve been otherwise.
