
MODULE 7: DECISION ANALYSIS

INTRODUCTION

Decision analysis can be used to develop an optimal strategy when a decision maker is
faced with several decision alternatives and an uncertain or risk-filled pattern of future
events. Even when a careful decision analysis has been conducted, uncertain future
events make the final consequence uncertain. In some cases, the selected decision
alternative may provide good or excellent results. In other cases, a relatively unlikely future
event may occur, causing the selected decision alternative to provide only fair or even
poor results. The risk associated with any decision alternative is a direct result of the
uncertainty associated with the final consequence. A good decision analysis includes risk
analysis. Through risk analysis, the decision maker is provided with probability information
about favourable as well as unfavourable consequences that may occur. Decision analysis
is used by large and small corporations alike when making various types of decisions,
including management, operations, marketing, capital investment, and strategic choices.

UNIT 1: PROBLEM FORMULATION

Decision analysis is concerned with selecting an option or alternative course of action (the
decision) given prior knowledge of its outcome (called a payoff) for various future
scenarios (called states of nature or events). The decision-maker has control over the
process of selecting an alternative course of action, but not over the states of nature, at
least not in the short run.

The final input for the structure of the decision analysis problem is the outcome or payoff
resulting from each state of nature/decision combination. A convenient structure for
displaying the decision alternatives, states of nature and payoffs is a payoff table.

Illustrative Problem

Suppose a manufacturer has three alternative courses of action for the next
production period:
d1 - make the product;
d2 - buy the product from another manufacturer and sell it to their customers; or
d3 - do nothing in the next production period.
Suppose further that the manufacturer has a simple forecast for the next production
period:
s1 - demand will be low

s2 - demand will be high

                           States of Nature
Decision Alternatives      s1 - Low Demand     s2 - High Demand
d1 - Make Product          (20,000)            90,000
d2 - Buy Product           10,000              70,000
d3 - Do Nothing            5,000               5,000

The payoff table above shows, for example, that making the product leads to a profit of
90,000 should the demand turn out to be high, or a loss of 20,000 if demand is low. Buying
the product shows that the manufacturer can avoid a loss, compared to making the
product, if the demand turns out to be low since the manufacturer avoids paying fixed
production costs. If the demand is high, the manufacturer makes a profit, but not as much
as if they made the product since they miss the production economies of scale. If the
manufacturer does nothing, a small profit is earned from selling existing inventory that just
meets a low level of demand.
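For readers who like to see the structure in code, the payoff table can be held in a small data structure. The sketch below is purely illustrative and not part of the module; the dictionary name payoffs and keys such as "d1_make" are assumptions used only for this example, and later sketches reuse the same representation.

# Illustrative sketch only: the make/buy payoff table as a Python dictionary.
payoffs = {
    "d1_make": {"s1_low": -20_000, "s2_high": 90_000},
    "d2_buy":  {"s1_low": 10_000,  "s2_high": 70_000},
    "d3_none": {"s1_low": 5_000,   "s2_high": 5_000},
}

for decision, row in payoffs.items():
    print(decision, row)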

Another way to display the structure of the decision problem is with a decision tree.

In this decision tree, the cell labeled A denotes a decision node, followed by its three
decision branches. The cells labeled B, C and D denote state of nature nodes which are
each followed by the two state of nature branches. Payoffs are shown at the terminal end
of each state of nature branch. Structuring the decision problem, as in formulation of all
quantitative models, establishes the first major benefit of these formal methods of decision
making. That is, the decision-maker is required to formally consider many of the important
aspects of a decision problem during the problem-structuring phase. These aspects may

be overlooked when the decision-maker uses "gut feel" or some other purely qualitative
technique to make decisions. After the problem is structured, we can now turn to its
analysis: selecting one of the possible decision alternatives according to predetermined
selection criteria. There are two major approaches in decision analysis that depend on the
availability of information on the states of nature. One approach is called decision making
without probabilities, and the other, decision making with probabilities.

UNIT 2: DECISION MAKING WITHOUT PROBABILITIES

In this approach, the decision-maker has no information concerning the relative likelihood
of each of the states of nature. It is sometimes called "decision making under uncertainty."
There are four classic criteria used in decision making without probabilities: the Optimistic
Criterion, the Conservative Criterion, the Compromise (Minimax Regret) Strategy, and the
Laplace Method.

A. Optimistic Criterion (Maximax Strategy)

In this strategy, the decision-maker evaluates each decision alternative in terms of the best
payoff that can occur. The optimistic approach would lead the decision maker to choose
the alternative corresponding to the largest possible profit. For problems involving minimization, this
approach leads to choosing the alternative with the smallest possible payoff (the lowest attainable cost).

Illustrative Problem

Consider the payoff table below.

Decision Alternatives      s1 - Low Demand     s2 - High Demand    MAXIMUM PAYOFF
d1 - Make Product          (20,000)            90,000              90,000
d2 - Buy Product           10,000              70,000              70,000
d3 - Do Nothing            5,000               5,000               5,000

Note that an extra column is added to record the maximum payoff for each decision
alternative (maximum payoff in each row of the table). The decision-maker employing the
optimistic criterion then selects the decision alternative associated with the maximum of
the maximum payoffs. Since 90,000 is the maximum of the maximum payoffs, d1 is the
selected decision. The "eternal optimist" would be one using this approach; it captures the
philosophy of decision-makers who accept the risk of large losses in order to make substantial gains.
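A minimal sketch of the maximax calculation, using the illustrative dictionary form of the payoff table introduced earlier (the names are hypothetical, not the module's notation):

# Optimistic (maximax) criterion: best payoff per alternative, then the best of those.
payoffs = {
    "d1_make": {"s1_low": -20_000, "s2_high": 90_000},
    "d2_buy":  {"s1_low": 10_000,  "s2_high": 70_000},
    "d3_none": {"s1_low": 5_000,   "s2_high": 5_000},
}

best_per_decision = {d: max(row.values()) for d, row in payoffs.items()}
maximax_choice = max(best_per_decision, key=best_per_decision.get)
print(best_per_decision)   # {'d1_make': 90000, 'd2_buy': 70000, 'd3_none': 5000}
print(maximax_choice)      # d1_make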

B. Conservative Criterion (Maximin Strategy)

In this strategy, the decision-maker evaluates each decision alternative in terms of the
worst payoff that can occur. For a problem in which the output measure is profit, the
conservative approach would lead the decision maker to choose the alternative that
maximizes the minimum possible profit that could be obtained. For problems involving
minimization, this approach identifies the alternative that will minimize the maximum payoff.

Illustrative Problem

The payoff table is repeated here with a new column to record the minimum payoffs for
each decision alternative (minimum payoff for each row in the table).

Decision Alternatives      s1 - Low Demand     s2 - High Demand    MINIMUM PAYOFF
d1 - Make Product          (20,000)            90,000              (20,000)
d2 - Buy Product           10,000              70,000              10,000
d3 - Do Nothing            5,000               5,000               5,000

This strategy selects the decision alternative associated with the maximum of the minimum
payoffs. In this situation, the decision-maker would select d2, buy the product, since 10,000
is the maximum of the minimum payoffs. Some associate this strategy with "eternal
pessimists," but to be fair, it is actually a conservative strategy used by decision-makers
who seek to avoid large losses.
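A matching sketch for the conservative criterion, again purely illustrative: take the worst payoff for each alternative, then choose the alternative whose worst case is best.

# Conservative (maximin) criterion.
payoffs = {
    "d1_make": {"s1_low": -20_000, "s2_high": 90_000},
    "d2_buy":  {"s1_low": 10_000,  "s2_high": 70_000},
    "d3_none": {"s1_low": 5_000,   "s2_high": 5_000},
}

worst_per_decision = {d: min(row.values()) for d, row in payoffs.items()}
maximin_choice = max(worst_per_decision, key=worst_per_decision.get)
print(worst_per_decision)   # {'d1_make': -20000, 'd2_buy': 10000, 'd3_none': 5000}
print(maximin_choice)       # d2_buy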

C. Compromise Criterion (Minimax Regret Strategy)

The third classic strategy for decision making under uncertainty is neither purely optimistic nor
conservative. This approach starts by converting the payoff table into a regret table.
The regret table looks at each state of nature, one at a time, and asks, "If I knew ahead of
time that state of nature s1 would occur, what would I do?" The answer, to maximize profit,
would be "buy the product (d2)," since that leads to the highest profit, 10,000. If the
decision-maker selected d2 and s1 occurred, there would be no regret.

On the other hand, if the decision-maker selected d3, there would be a regret or
opportunity loss of 5,000 (the 10,000 that could have been gained minus the 5,000 that was
gained).

Similarly, there would be a regret of 30,000 if the decision-maker selected d1 and state of
nature s1 occurred (the 10,000 that could have been gained minus the 20,000 loss). The
regret numbers for s2 are prepared in a similar fashion. "If I knew ahead of time that state of
nature s2 would occur, what would I do?" The answer, again to maximize profit, is "make
the product (d1)," since that leads to the highest profit for s2, 90,000. If the decision-maker
selected d1 and s2 occurred, there would be no regret.

On the other hand, if the decision-maker selected d2, there would be an opportunity loss or
regret of 20,000 (the 90,000 that could have been gained minus the 70,000 that was gained).
Likewise, if the decision-maker selected d3, there would be a regret of 85,000 (90,000 minus
5,000).

                           States of Nature
Decision Alternatives      s1 - Low Demand     s2 - High Demand
d1 - Make Product          (20,000)            90,000
d2 - Buy Product           10,000              70,000
d3 - Do Nothing            5,000               5,000
Column Maximum             10,000              90,000

• To build the regret table, subtract each payoff in a column from that column's maximum value (regret = column maximum minus payoff).

• Then record the maximum regret in each row.

The following table illustrates the completed regret table.

Decision Alternatives      s1 - Low Demand     s2 - High Demand    MAXIMUM REGRET
d1 - Make Product          30,000              0                   30,000
d2 - Buy Product           0                   20,000              20,000
d3 - Do Nothing            5,000               85,000              85,000

We have added a "Maximum Regret" column to record the maximum regret associated
with each decision alternative (the maximum regret in each row of the regret table). We
then select the decision alternative associated with the minimum of the maximum regrets.
In this situation, the decision-maker would select d2, buy the product, since 20,000 is the
minimum of the maximum regrets. Some view this strategy as a "middle of the road"
approach to limiting losses.

For a minimization problem, the regret table is constructed by first finding the minimum value
in each column. Each entry is then reduced by its column minimum (regret = entry minus
column minimum). Next, record the maximum regret in each row, and finally choose the
decision alternative with the minimum of these maximum regrets.
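A short sketch of the maximization (profit) case in code, using the same illustrative dictionary representation: build the regret table from the column maxima, take the maximum regret per row, then pick the minimum of those.

# Minimax regret criterion for the maximization (profit) case.
payoffs = {
    "d1_make": {"s1_low": -20_000, "s2_high": 90_000},
    "d2_buy":  {"s1_low": 10_000,  "s2_high": 70_000},
    "d3_none": {"s1_low": 5_000,   "s2_high": 5_000},
}
states = ["s1_low", "s2_high"]

col_best = {s: max(row[s] for row in payoffs.values()) for s in states}
regret = {d: {s: col_best[s] - row[s] for s in states} for d, row in payoffs.items()}
max_regret = {d: max(r.values()) for d, r in regret.items()}
minimax_regret_choice = min(max_regret, key=max_regret.get)

print(max_regret)             # {'d1_make': 30000, 'd2_buy': 20000, 'd3_none': 85000}
print(minimax_regret_choice)  # d2_buy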

D. Laplace Method

In this strategy, the decision-maker evaluates each decision alternative in terms of the
average payoff. The payoff table is repeated here with a new column to record the
average payoffs for each decision alternative.

Decision Alternatives      s1 - Low Demand     s2 - High Demand    AVERAGE PAYOFF
d1 - Make Product          (20,000)            90,000              (-20,000 + 90,000)/2 = 35,000
d2 - Buy Product           10,000              70,000              (10,000 + 70,000)/2 = 40,000
d3 - Do Nothing            5,000               5,000               (5,000 + 5,000)/2 = 5,000

This strategy selects the decision alternative associated with the highest average payoff. If
the problem is a minimization, select the lowest average payoff. In this situation, the decision-
maker would select d2, buy the product, since it has the highest average payoff of 40,000.
This criterion treats all future states of nature as being equally likely.
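In code, the Laplace criterion is just a row average over the same illustrative payoff dictionary (a sketch only, not the module's own material):

# Laplace (equally likely) criterion.
payoffs = {
    "d1_make": {"s1_low": -20_000, "s2_high": 90_000},
    "d2_buy":  {"s1_low": 10_000,  "s2_high": 70_000},
    "d3_none": {"s1_low": 5_000,   "s2_high": 5_000},
}

average_payoff = {d: sum(row.values()) / len(row) for d, row in payoffs.items()}
laplace_choice = max(average_payoff, key=average_payoff.get)
print(average_payoff)   # {'d1_make': 35000.0, 'd2_buy': 40000.0, 'd3_none': 5000.0}
print(laplace_choice)   # d2_buy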

UNIT 3: DECISION MAKING WITH PROBABILITIES

In this approach, the decision-maker has information concerning the relative likelihood of
each of the states of nature. We will be discussing here three approaches: the expected
value approach, sensitivity analysis and the value of information.

A. The Expected Value Approach

In many decision-making situations, we can obtain probability assessments for the states of
nature. When such probabilities are available, we can use the expected value approach
to identify the best decision alternative. Let us first define the expected value of a decision
alternative.

Because one and only one of the N states of nature can occur, the probabilities must
satisfy two conditions:

P(sj) >= 0 for j = 1, 2, ..., N, and
P(s1) + P(s2) + ... + P(sN) = 1.

That is, to abide by the laws of probability, each probability must be a real number
between 0 and 1, and the probabilities for the states of nature must sum to one. For this to
hold, the states of nature must be mutually exclusive.

The expected value (EV) of decision alternative di is defined as follows, using Vij to
represent the payoff associated with decision alternative i and state of nature j:

EV(di) = [P(s1) * Vi1] + [P(s2) * Vi2] + ... + [P(sN) * ViN]

That is, the expected value of a decision alternative is the sum of the probabilities of the
states of nature times the corresponding payoffs.

Illustrative Problem

Refer again to the payoff table for our make-buy example, this time adding a row for
probabilities and a column for the expected value of the decision alternatives.

Decision Alternatives      s1 - Low Demand     s2 - High Demand    Expected Value
d1 - Make Product          (20,000)            90,000
d2 - Buy Product           10,000              70,000
d3 - Do Nothing            5,000               5,000
Probabilities              P(s1) = 0.35        P(s2) = 0.65

In decision analysis, it is assumed that the probabilities are long-run relative frequencies.
Since they are often simply the subjective judgment of the decision-maker, the technique
is open to criticism on this point. But this criticism can be levied against any quantitative
approach: the output is only as good as the input. To counter the criticism, we will add a
sensitivity analysis step after the initial solution. For now, let us work through the expected
value approach.
EV(d1) = [P(s1) * V11] + [P(s2) * V12]
       = [0.35 * (-20,000)] + [0.65 * 90,000]
       = 51,500

The EV of 51,500 represents the long run outcome of repeated "make product" experiments.
That is, if we could theoretically conduct the "make product" decision 100 times, 35 times
we would lose 20,000, and 65 times we would make 90,000. The weighted average of
these outcomes is 51,500. In reality, we do not conduct the experiment 100 times - we
make the decision once and we are either going to lose 20,000 or make 90,000. However,
and this is very important, we use the expected value approach to assist us in making the
decision. The expected values for the second and third decision alternatives are
calculated in a similar fashion.

Compute EV(d2) and EV(d3) and check your answers against the table below.

Decision Alternatives      s1 - Low Demand     s2 - High Demand    Expected Value
d1 - Make Product          (20,000)            90,000              51,500
d2 - Buy Product           10,000              70,000              49,000
d3 - Do Nothing            5,000               5,000               5,000
Probabilities              P(s1) = 0.35        P(s2) = 0.65

Thus, using the expected value approach, we find that the "make product" decision, with an
expected value of 51,500, is the recommended decision.
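The expected value computation can also be scripted in a few lines. The sketch below is illustrative only; it simply applies EV(di) = sum of P(sj) * Vij to the module's numbers using the assumed dictionary representation.

# Expected value approach with the module's prior probabilities.
payoffs = {
    "d1_make": {"s1_low": -20_000, "s2_high": 90_000},
    "d2_buy":  {"s1_low": 10_000,  "s2_high": 70_000},
    "d3_none": {"s1_low": 5_000,   "s2_high": 5_000},
}
prob = {"s1_low": 0.35, "s2_high": 0.65}

expected_value = {d: sum(prob[s] * v for s, v in row.items()) for d, row in payoffs.items()}
best_decision = max(expected_value, key=expected_value.get)
print({d: round(v, 2) for d, v in expected_value.items()})
# {'d1_make': 51500.0, 'd2_buy': 49000.0, 'd3_none': 5000.0}
print(best_decision)    # d1_make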

B. Sensitivity Analysis

When you apply quantitative approaches to solve management problems, it is always a
good idea to ask, "How sensitive is my solution to changes in the data inputs?" This is especially
true when the input consists of subjective estimates. When the solution remains optimal
given large changes in selected important inputs, there is a high level of confidence in the
solution. On the other hand, when the optimal strategy changes for very small changes to
one or more inputs, the decision-maker should be cautious before implementation. Perhaps
some time should be spent refining the input.

Illustrative Problem

Suppose the decision-maker examines the solution and realizes that the "buy" decision lost
out to the "make" decision by only 2,500. Further suppose the decision-maker has little
confidence in the payoff for the "buy product" decision under the "high demand" state of
nature. The question is, "at what "buy product, high demand" payoff would I be indifferent
between the "make" and "buy" decisions, given all other applicable data input items
remain the same?"

Solution:

To answer this question, we note that the point of indifference occurs where the "make"
and the "buy" EVs are equal. Mathematically, this is expressed as the equation:
EV(d1) = EV(d2)

Now writing out the computational formulas for the EVs (using Vij to represent the payoff
associated with decision alternative i and state of nature j), we solve the equation for V22:

EV(d1) = EV(d2)

[P(s1) * V11] + [P(s2) * V12] = [P(s1) * V21] + [P(s2) * V22]

[0.35 * (-20,000)] + [0.65 * 90,000] = [0.35 * 10,000] + [0.65 * V22]

0.65 * V22 = 51,500 - [0.35 * 10,000] = 48,000

V22 = 48,000 / 0.65 = 73,846.15

So, if the decision-maker erred by just 5.2% (estimating the "buy product, high demand"
payoff to be 70,000 when it is really about 73,846), the decision-maker would have selected
the wrong strategy.

As a rule of thumb, if a decision strategy changes based on a 5% or smaller change to an
input, we say that the solution is very sensitive to that input, and it might be wise to
consider investing in more accurate data input values.
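A tiny numeric check of this break-even payoff, written as a hedged sketch (variable names such as v22_breakeven are illustrative, not from the module):

# Find the "buy product, high demand" payoff V22 at which EV(make) = EV(buy).
p_s1, p_s2 = 0.35, 0.65
ev_make = p_s1 * -20_000 + p_s2 * 90_000          # 51,500

# EV(buy) = p_s1 * 10,000 + p_s2 * V22; set equal to ev_make and solve for V22.
v22_breakeven = (ev_make - p_s1 * 10_000) / p_s2
print(round(v22_breakeven, 2))                     # 73846.15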

Probabilities are sometimes also subjective estimates, especially when there has been no history or
experience with a particular state of nature. By observation, we can see that as long as
P(s1) remains at or below 0.35, the decision-maker will favor the optimal "make product"
decision, since a low probability for s1 gives little relative weight to the 20,000 loss. But as
P(s1) increases (resulting in a decrease in P(s2)), the expected values approach a point
of indifference, or break-even point.

Problem: At what P(s1) would the decision-maker be indifferent between the "buy" and
"make" decisions, all other data inputs remaining the same?

Solution:

Again, we begin by setting EV(d1) equal to EV(d2) to represent break-even:

EV(d1) = EV(d2)
[P(s1) * V11] + [P(s2) * V12] = [P(s1) * V21] + [P(s2) * V22]
[P(s1) * (-20,000)] + [P(s2) * 90,000] = [P(s1) * 10,000] + [P(s2) * 70,000]
-30,000 * P(s1) + 20,000 * P(s2) = 0

Also, we know that the probabilities of the states of nature must sum to one, so we have
another equation: P(s1) + P(s2) = 1.

Solving for P(s1), we have the system of linear equations:
-30,000 * P(s1) + 20,000 * P(s2) = 0
P(s1) + P(s2) = 1

This gives the solution P(s1) = 0.4 and P(s2) = 0.6.

As long as P(s1) stays less than or equal to 0.40, the "make" decision strategy will remain
the optimal strategy.
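The break-even probability can also be verified numerically; the sketch below (illustrative only) substitutes P(s1) = 0.40 back into both expected values.

# Break-even prior probability where EV(make) = EV(buy), with P(s2) = 1 - P(s1).
def ev(payoff_low, payoff_high, p_low):
    return p_low * payoff_low + (1 - p_low) * payoff_high

# EV(make) - EV(buy) = -30,000*P(s1) + 20,000*(1 - P(s1)) = 0  ->  P(s1) = 0.40
p_breakeven = 20_000 / 50_000
print(p_breakeven)                                  # 0.4
print(round(ev(-20_000, 90_000, p_breakeven), 2))   # 46000.0 (EV of "make" at break-even)
print(round(ev(10_000, 70_000, p_breakeven), 2))    # 46000.0 (EV of "buy" at break-even)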

C. The Value of Information

What if you find out that your solution is sensitive to changes in the inputs? In that case, it
may be appropriate to obtain additional information concerning your input numbers to
make them more accurate or to increase your confidence in their original value. However,
while information has value, it is sometimes expensive to obtain. Our last topic in decision
analysis focuses on the value of information. This technique has found countless
applications in assisting managers to make decisions; it has helped companies use decision
analysis in equipment loan, construction, and enterprise resource planning situations. The
formal steps in structuring the problem, which force the decision-maker to consider
decision alternatives, states of nature, probabilities and payoffs, are almost as valuable as
the decision itself.

We made the point that information has value, but it also costs money and time to obtain. So the
question becomes: how much would we be willing to pay for additional information? The upper limit is
established by the first concept, the Value of Perfect Information.

• The Value of Perfect Information

Decision Alternatives      s1 - Low Demand     s2 - High Demand    Expected Value
d1 - Make Product          (20,000)            90,000              51,500
d2 - Buy Product           10,000              70,000              49,000
d3 - Do Nothing            5,000               5,000               5,000
Probabilities              P(s1) = 0.35        P(s2) = 0.65

If we knew ahead of time that s1 would occur, we would select d2 as our optimal
strategy to maximize our payoff. If we knew ahead of time that s2 would occur, we
would select d1 to maximize our payoff. Over time, the expected value with this perfect
information (abbreviated as EV w PI) would be:

EV w PI = [P(s1) * V21] + [P(s2) * V12] = [0.35 * 10,000] + [0.65 * 90,000] = 62,000

Recall that without this perfect information, the expected value was 51,500 by going with
d1. The difference between the expected value with perfect information and the
expected value without perfect information is called the Expected Value of Perfect
Information (EVPI):

EVPI = EV w PI - EV w/o PI = 62,000 - 51,500 = 10,500

EVPI places an upper bound on how much we should be willing to pay someone to get
additional information on the future states of nature in order to improve our decision
making strategy.

For example, we might be willing to pay a consultant for a market research study to help
us learn more about the demand states of nature. We do not expect the research study
to result in perfect information, but we hope it will capture a good proportion of the
10,500 in value. We also know that we do not want to pay more than 10,500 for the new
information.
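The EVPI calculation, expressed as a short illustrative sketch over the same assumed dictionary representation (names are hypothetical):

# Expected value with perfect information (EV w PI) and EVPI.
prob = {"s1_low": 0.35, "s2_high": 0.65}
payoffs = {
    "d1_make": {"s1_low": -20_000, "s2_high": 90_000},
    "d2_buy":  {"s1_low": 10_000,  "s2_high": 70_000},
    "d3_none": {"s1_low": 5_000,   "s2_high": 5_000},
}

# With perfect information, always take the best decision for each state of nature.
ev_with_pi = sum(prob[s] * max(row[s] for row in payoffs.values()) for s in prob)
# Without it, take the single decision with the best expected value.
ev_without_pi = max(sum(prob[s] * v for s, v in row.items()) for row in payoffs.values())

print(round(ev_with_pi, 2))                    # 62000.0
print(round(ev_without_pi, 2))                 # 51500.0
print(round(ev_with_pi - ev_without_pi, 2))    # EVPI = 10500.0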

• Expected Value of Sample Information

New information about the states of nature takes the form of revisions to the probability
estimates in decision analysis. This information may be obtained through market
research studies, product testing or some other sampling process. In decision analysis,
this topic is called the Expected Value of Sample Information (EVSI).

Recall that we have probabilities on the states of nature, P(s1) and P(s2). These are more
technically called prior probabilities. If we plan to obtain sample information, the results
will be expressed as revisions to these prior probabilities.

Suppose a market research consultant offers to conduct a demand analysis that will
result in two indicators. I1 will be the indicator used to represent a prediction of low
demand in the next production period. I1 then corresponds to the state of nature we
labeled s1; the difference is that one is a prediction and the other is an actual occurrence. I2
will be the indicator used to represent a prediction of high demand in the next
production period. This indicator corresponds to our state of nature s2.

This indicator information will be used to calculate conditional probabilities.

P(s1 given I1) will be the probability that s1 will occur given, or conditioned on, the
consultant's I1 prediction. P(s1 given I1) is generally written as P(s1 | I1). Since there are
two states of nature and two indicators, we will need three additional conditional
probabilities: P(s1 | I2), P(s2 | I1) and P(s2 | I2).

In order to compute these conditional probabilities and then EVSI, we need to know the
consultant's track record. That is, how has the consultant done in prior market demand
studies? In terms of decision analysis, we want to know the accuracy of indicator
predictions given what state of nature actually occurred. Suppose the following
information is available from the consultant.

                      Market Research
State of Nature       I1, Predict Low Demand    I2, Predict High Demand
s1, Low Demand        P(I1 | s1) = 0.90         P(I2 | s1) = 0.10
s2, High Demand       P(I1 | s2) = 0.20         P(I2 | s2) = 0.80

We are almost there, but note that these conditional probabilities are the reverse of
what we need. To get from P(I1 | sj) to P(sj | I1), we need to set up a table and do some
calculations; the same holds for indicator I2. Let's do I1 first.

States of   Prior                Consultant's Track    Joint Probabilities,     Conditional Probabilities,
Nature sj   Probabilities P(sj)  Record P(I1 | sj)     P(I1 and sj)             P(sj | I1)
s1          0.35                 0.90                  0.35 * 0.90 = 0.315      P(s1 | I1) = 0.315 / 0.445 = 0.7079
s2          0.65                 0.20                  0.65 * 0.20 = 0.13       P(s2 | I1) = 0.13 / 0.445 = 0.2921
                                                       P(I1) = 0.315 + 0.13     0.7079 + 0.2921 = 1
                                                             = 0.445

The first three columns in the table are from input information. The joint probabilities column
is computed by multiplying the prior probabilities times the consultant track record
probabilities in each row. The conditional probabilities in the last column are computed by
dividing the joint probabilities by P(I1).

Note how the consultant's research information can change the probability estimates.
The probability of low demand becomes 0.71 given that the consultant predicts low
demand, taking into account the consultant's track record. This revision reflects the
consultant's past accuracy for the low-demand case. Note also that the probability of high
demand, s2, goes from 0.65 to 0.29 given that the consultant predicts that demand will be
low. The above procedure illustrates the power of the complete decision analysis
technique: the steps show how information can have enough value to justify its expense.
Now, we do similar calculations for indicator I2.

States of   Prior                Consultant's Track    Joint Probabilities,     Conditional Probabilities,
Nature sj   Probabilities P(sj)  Record P(I2 | sj)     P(I2 and sj)             P(sj | I2)
s1          0.35                 0.10                  0.35 * 0.10 = 0.035      P(s1 | I2) = 0.035 / 0.555 = 0.0631
s2          0.65                 0.80                  0.65 * 0.80 = 0.52       P(s2 | I2) = 0.52 / 0.555 = 0.9369
                                                       P(I2) = 0.035 + 0.52     0.0631 + 0.9369 = 1
                                                             = 0.555
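Both tables follow the same Bayesian revision pattern, sketched below for illustration (the dictionary names are assumptions, not the module's notation): multiply each prior by the track-record probability, sum to get P(I), then divide to get the posterior.

# Revising prior probabilities with the consultant's track record P(I | s).
prior = {"s1": 0.35, "s2": 0.65}
track_record = {                          # P(indicator | state of nature)
    "I1": {"s1": 0.90, "s2": 0.20},
    "I2": {"s1": 0.10, "s2": 0.80},
}

for indicator, likelihood in track_record.items():
    joint = {s: prior[s] * likelihood[s] for s in prior}     # P(I and s)
    p_indicator = sum(joint.values())                         # P(I)
    posterior = {s: joint[s] / p_indicator for s in prior}    # P(s | I)
    print(indicator, round(p_indicator, 3),
          {s: round(p, 4) for s, p in posterior.items()})
# I1 0.445 {'s1': 0.7079, 's2': 0.2921}
# I2 0.555 {'s1': 0.0631, 's2': 0.9369}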

The following figure illustrates the decision problem incorporating the sample information.

Let us see how we obtained the expected values 27,528.01 and 83,063.06.

RULE FOR ROUNDING OFF:

You may present your solution using four decimal places; however, please use the
complete (unrounded) value for computation purposes. Only the final answer is rounded off
to two decimal places.

EXPLANATION

This shows a new decision in our problem: whether or not to engage the services of the
market research consultant. If the manufacturer does not engage the consultant, the
expected value has already been determined to be 51,500 by selecting d1, "make the
product." Note that we compute the expected value at node B to obtain 58,349.96. The
first event after engaging the consultant is the consultant's prediction. The consultant's
prediction is then followed by the manufacturer's decision to "make," "buy," or "do
nothing," which is then followed by the actual state of nature, low or high demand.

The next figure illustrates the part of the decision tree leading from the node labelled C in the
preceding figure, the consultant's prediction of low demand.

This part of the decision tree provides back-up for the calculation of the 27,528.01
expected value for node C. That value represents the highest expected value and is
associated with the "buy" decision. Thus, if the consultant is hired to conduct the market
research, and predicts low demand, the best decision is to "buy" the product. When

working out decision trees by hand, we always start at the right of the tree, move left,
pruning branches coming from decision nodes as we go. You should observe that every
state of nature node gets an expected value computation, the sum of the products of
probabilities times payoffs. Every decision node simply gets the highest expected value
from its branches.

Next, we examine the computations for the node labelled D, the event that occurs if we
engage the consultant and the consultant's prediction is high demand.

This part of the decision tree provides back-up for the calculation of the 83,063.06
expected value for node D. That value represents the highest expected value and is
associated with the "make" decision. Thus, if the consultant is hired to conduct the
market research and predicts high demand, the best decision is to "make" the product.

Looking back at the first figure, we see that the expected value with the market research
consultant report, named the expected value with sample information (EV w SI), is
58,349.96. The expected value without the sample information (EV w/o SI) (same as the
expected value without perfect information) is 51,500.

To get the expected value of sample information, we just make the subtraction:
EVSI = EV w SI - EV w/o SI = 58,349.96 - 51,500 = 6,849.96

Like EVPI, this is a powerful piece of information. We now know how much we would be
willing to spend to obtain market research from the consultant - that is a great bargaining
chip to have at negotiation time!

• Efficiency of the Sample Information

Recall that the expected value of perfect information was 10,500 - that placed a
theoretical upper bound on how much we would be willing to spend for perfect
information. But forecasts and consultants are not perfect. If we can get the "track record"
of a forecaster or consultant, we can then determine how their information stacks up
against the theoretical EVPI.

To compute the efficiency of the sample information, we simply divide the EVSI by EVPI
and convert the decimal to a percent:

Efficiency = EVSI / EVPI = 6,849.96 / 10,500 = 0.6524 = 65.24%

The closer this number is to 100%, the better. Low efficiency ratings for sample information
might lead the decision maker to look for other types of information. However, high
efficiency ratings indicate that the sample information is almost as good as perfect
information and that additional sources of information would not yield significantly better
results.
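To tie the pieces together, here is a hedged sketch that recomputes the node expected values, EV w SI, EVSI, and the efficiency ratio; the tiny differences from the module's 6,849.96 and 65.24% come only from rounding of intermediate values, and all names are illustrative.

# EV with sample information, EVSI, and efficiency for the make/buy example.
payoffs = {
    "d1_make": {"s1": -20_000, "s2": 90_000},
    "d2_buy":  {"s1": 10_000,  "s2": 70_000},
    "d3_none": {"s1": 5_000,   "s2": 5_000},
}
p_indicator = {"I1": 0.445, "I2": 0.555}
posterior = {                                   # P(s | I) from the revision tables
    "I1": {"s1": 0.315 / 0.445, "s2": 0.13 / 0.445},
    "I2": {"s1": 0.035 / 0.555, "s2": 0.52 / 0.555},
}

# Best expected value at each indicator node (nodes C and D in the decision tree).
node_ev = {i: max(sum(post[s] * row[s] for s in post) for row in payoffs.values())
           for i, post in posterior.items()}
ev_with_si = sum(p_indicator[i] * node_ev[i] for i in p_indicator)

ev_without_si = 51_500
evpi = 10_500
evsi = ev_with_si - ev_without_si
print({i: round(v, 2) for i, v in node_ev.items()})  # {'I1': 27528.09, 'I2': 83063.06}
print(round(ev_with_si, 2), round(evsi, 2))          # about 58350.0 and 6850.0
print(round(evsi / evpi, 4))                         # efficiency, about 0.6524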

We are not suggesting that every decision analysis exercise includes this last procedure of
computing EVSI and its efficiency. Often the track record of the sample information
producer may not be available. But when it is (and it should ALWAYS be available if you do
the market research "in-house"), this is a powerful tool to put some limits on how much you
would want to spend to acquire more information.

Self-Test
1. Bob plans to operate a poultry farm and is currently deciding its capacity. The size could
be for 100, 150 or 200 chickens. In terms of market forecasts, demand may be low, medium
or high. The expected monthly income matrix for the capacities and market forecasts is as
follows:
Capacity Low Demand Medium Demand High Demand
100 chickens P85,000 P85,000 P85,000
150 chickens 40,000 130,000 130,000
200 chickens (25,000) 75,000 175,000

Required:
Strategy                                 Decision    Relevant Income of the Decision
Maximin                                  a.          b.
Maximax                                  c.          d.
Laplace                                  e.          f.
Minimax Regret                           g.          h.
Expected Value                           i.          j.
(20:50:30 probabilities for Low : Medium : High)

k. Assuming the same probabilities in i and j, how much should Bob be willing to pay for perfect
information?

2. ABC Condominium Corp. of Baguio City recently purchased land and is attempting to
determine the size of the condominium development it should build. It is considering three
sizes of development: small (d1), medium (d2) and large (d3). With three levels of
demand, low (s1), medium (s2) and high (s3), the company's management has prepared
the following profit payoff table:

State of Nature
Decision Alternatives Low (s1 ) Medium (s2 ) High (s3 )
d1 – Small Condo 400,000 400,000 400,000
d2 – Medium Condo 100,000 600,000 600,000
d3 - Large Condo (300,000) 300,000 900,000

Required:
1. Assuming the probabilities for the states of nature are P(s1) = 0.20, P(s2) = 0.35 and
P(s3) = 0.45, what decision should ABC Condominium Corp. make?
2. What is the relevant income in ABC's decision?
3. At what "Medium condo, high demand" payoff would the decision-maker be indifferent
between the "Medium condo" and "Large condo" decisions, given all other applicable
data inputs remain the same?
4. Compute the Expected Value of Perfect Information.
5. Suppose that before making a final decision, ABC Condominium Corp. is considering
conducting a survey to help evaluate the demand for the new condominium
development. The survey report is anticipated to indicate one of two levels of
demand: weak (W) or strong (S). The relevant probabilities are as follows:

P(s1 | W) = .75    P(s1 | S) = .25

P(s2 | W) = .70    P(s2 | S) = .30    P(W) = 0.45

P(s3 | W) = .45    P(s3 | S) = .55    P(S) = 0.55

A. What is the expected value with sample information?
B. What is the expected value of sample information?
C. What is the efficiency of the survey information (present your answer as a decimal,
not a percentage; for example, 0.2000 rather than 20%)?

RULE FOR ROUNDING OFF: You may present your solution using four decimal places;
however, please use the complete (unrounded) value for computation purposes. Only the
final answer is rounded off to two decimal places.

