
Decision Science By USMS Academic Support Team

Decision Science Notes


CONTENT

UNIT 1 Probability
1 Probability Meaning and Approaches of Probability Theory
2 Addition and Multiplication Theorems
3 Conditional Probability
4 Bayes' Rule
5 Probability Distribution – Binomial, Poisson, Normal Distribution
6 Application of Probability Theory in Business Decision Making

UNIT 2 Linear Programming
1 Linear Programming Meaning and Assumption
2 Formulation of Linear Programming Model
3 Solution of LP Problem with the help of Graphical and Simplex Method
4 Concept of Duality
5 Shadow Prices
6 Sensitivity Analysis
7 Role of LP in Economic Decision Making
8 Transportation Problems – Initial Basic Feasible Solution
9 Test for Optimality
10 Assignment Problems
11 Travelling Salesman Problem

UNIT 3 Decision Theory
1 Decision under Certainty, Uncertainty, and Risk
2 Decision Tree Analysis
3 Game Theory – Pure and Mixed Strategies, Principle of Dominance
4 Solution of Game Theory Problems with the Help of Graphical, Algebraic, and Simplex Methods

UNIT 4 Network Analysis
1 Meaning of Networking
2 Analysis with the help of PERT and CPM
3 Resource Planning and Meaning of Crashing
4 Queuing Theory – Meaning, Concepts and Assumptions, Queuing Models
5 M/M/1/FIFO
6 Simulation Modeling

UNIT 1 Probability
DS/U1 Topic 1 Probability Meaning and Approaches of Probability Theory
In our day-to-day life the terms "probability" and "chance" are very commonly used. We often say "Probably it may rain tomorrow", "Probably Mr. X may come for taking his class today", "Probably you are right". All these terms, possibility and probability, convey the same meaning. But in statistics probability has a special connotation, unlike the layman's view.

The theory of probability was developed in the 17th century. It has its origin in games of chance, such as tossing coins, throwing dice and drawing a card from a pack. In 1654 Antoine Gombaud (the Chevalier de Méré) took an initiative and an interest in this area.

After him, many authors in statistics tried to remodel the idea he had given. "Probability" has become one of the basic tools of statistics; sometimes statistical analysis becomes paralysed without the theory of probability. "Probability of a given event is defined as the expected frequency of occurrence of the event among events of a like sort." (Garrett)

The probability theory provides a means of getting an idea of the likelihood of occurrence of
different events resulting from a random experiment in terms of quantitative measures
ranging between zero and one. The probability is zero for an impossible event and one for an
event which is certain to occur.

Approaches of Probability Theory

1. Classical Probability

The classical approach to probability is one of the oldest and simplest schools of thought. It originated in the 18th century and explains probability in terms of games of chance such as tossing a coin, throwing dice, drawing cards, etc.

This definition of probability was given by the French mathematician Laplace. According to him, probability is the ratio of the number of favourable cases to the total number of equally likely cases.

Or in other words, the ratio suggested by classical approach is:

Pr. = Number of favourable cases/Number of equally likely cases

For example, if a coin is tossed and it is asked what the probability of the occurrence of a head is, then the number of favourable cases = 1 and the number of equally likely cases = 2.

Pr. of head = 1/2


Symbolically it can be expressed as follows. If an event A can occur in a ways and fail to occur (event B, i.e. "not A") in b ways, out of n = a + b equally likely cases, then

p = Pr. (A) = a/n and q = Pr. (B) = Pr. (not A) = b/n

Since a/n + b/n = n/n = 1, we have p + q = 1, so p = 1 – q and q = 1 – p.

In this approach the probability varies from 0 to 1. When probability is zero it denotes that it
is impossible to occur.

If probability is 1 then there is certainty for occurrence, i.e. the event is bound to occur.

Example:

From a bag containing 20 black and 25 white balls, a ball is drawn randomly. What is the probability that it is black?

Pr. of a black ball = 20/45 = 4/9 = p; Pr. of a white ball = 25/45 = 5/9 = q

p = 4/9 and q = 5/9 (p + q= 4/9 + 5/9= 1)
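The classical ratio can be checked with a few lines of Python; this is only a sketch of the calculation above, using the standard fractions module to keep the answers exact.

```python
from fractions import Fraction

# Classical probability: favourable cases / total number of equally likely cases
black, white = 20, 25
total = black + white

p = Fraction(black, total)   # Pr. of a black ball
q = Fraction(white, total)   # Pr. of a white ball

print(p, q, p + q)           # 4/9 5/9 1
```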

2. Relative Frequency Theory of Probability

This approach to probability is a protest against the classical approach. It indicates that if the number of trials n is increased towards ∞, we can find the probabilities p and q as limits of relative frequencies.

Example:

If an event A occurs a times out of n trials, its relative frequency is a/n; as n grows very large this relative frequency may settle down to, say, Pr. of A = a/n = .5 and Pr. of B = b/n = .5.

The value that a/n approaches as n becomes ∞ is called the limit of the relative frequency.

Pr. (A) = limit a/n as n → ∞

Pr. (B) = limit b/n as n → ∞

Axiomatic approach

An axiomatic approach is taken to define probability as a set function where the elements of
the domain are the sets and the elements of range are real numbers. If event A is an element
in the domain of this function, P(A) is the customary notation used to designate the
corresponding element in the range.


Probability Function

A probability function P(A) is a function mapping the event space A of a random experiment into the interval [0, 1] according to the following axioms:

Axiom 1. For any event A, 0 ≤ P(A) ≤ 1

Axiom 2. P(Ω) = 1

Axiom 3. If A and B are any two mutually exclusive events then,

P(A ∪ B) = P(A) + P(B)

As given in the third axiom the addition property of the probability can be extended to any
number of events as long as the events are mutually exclusive. If the events are not mutually
exclusive then;

P(A ∪ B) = P(A) + P(B) – P(A∩B)

A∩B is Φ (the empty set) if the two events are mutually exclusive, so that P(A∩B) = 0.

If the objects under consideration are of just two equally likely types, then the probability of drawing an object of one type is Pr. of A = .5 and of the other type is Pr. of B = .5.


DS/U1 Topic 2 Addition and Multiplication Theorems

Addition theorem on probability:

If A and B are any two events then the probability of happening of at least one of the events
is defined as P(AUB) = P(A) + P(B)- P(A∩B).

Proof:

Since events are nothing but sets,

From set theory, we have

n(AUB) = n(A) + n(B)- n(A∩B).

Dividing the above equation by n(S), (where S is the sample space)

n(AUB)/ n(S) = n(A)/ n(S) + n(B)/ n(S)- n(A∩B)/ n(S)

Then by the definition of probability,

P(AUB) = P(A) + P(B)- P(A∩B).

Example:

If the probabilities of solving a problem by two students, George and James, are 1/2 and 1/3 respectively, then what is the probability that the problem is solved?

Solution:

Let A and B be the events of the problem being solved by George and James respectively.

Then P(A)=1/2 and P(B)=1/3.

The problem will be solved if at least one of them solves it.

So, we need to find P(AUB).

By addition theorem on probability, we have

P(AUB) = P(A) + P(B)- P(A∩B).

Since the two students work independently, P(A∩B) = P(A) * P(B) = 1/2 * 1/3 = 1/6, so P(AUB) = 1/2 + 1/3 – 1/6 = (3 + 2 – 1)/6 = 4/6 = 2/3

Note:


If A and B are any two mutually exclusive events then P(A∩B)=0.

Then P(AUB) = P(A)+P(B).
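A small Python sketch confirming the George-and-James calculation, assuming (as in the worked example) that the two students work independently so that P(A∩B) = P(A)·P(B):

```python
from fractions import Fraction

P_A = Fraction(1, 2)               # George solves the problem
P_B = Fraction(1, 3)               # James solves the problem
P_A_and_B = P_A * P_B              # independence assumed, as in the worked example

P_A_or_B = P_A + P_B - P_A_and_B   # addition theorem
print(P_A_or_B)                    # 2/3
```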

Multiplication theorem on probability

If A and B are any two events of a sample space such that P(A) ≠0 and P(B)≠0, then

P(A∩B) = P(A) * P(B|A) = P(B) *P(A|B).

Example: If P(A) = 1/5 and P(B|A) = 1/3, then what is P(A∩B)?

Solution: P(A∩B) = P(A) * P(B|A) = 1/5 * 1/3 = 1/15

INDEPENDENT EVENTS:

Two events A and B are said to be independent if there is no change in the happening of an
event with the happening of the other event.

i.e. Two events A and B are said to be independent if

P(A|B) = P(A) where P(B)≠0.

P(B|A) = P(B) where P(A)≠0.

i.e. Two events A and B are said to be independent if

P(A∩B) = P(A) * P(B).

Example:

While drawing a card from a pack of cards, let A be the event of drawing a diamond and B be the event of drawing an ace.

Then P(A) = 13/52 = 1/4 and P(B) = 4/52=1/13

Now, A∩B = drawing the ace of diamonds.

Then P(A∩B) = 1/52

Now, P(A/B) = P(A∩B)/P(B) = (1/52)/(1/13) = 1/4 = P(A).

So, A and B are independent.


[Here, P(A∩B) = 1/52 = 1/4 * 1/13 = P(A) * P(B)]

Note:

(1) If 3 events A, B and C are independent then

P(A∩B∩C) = P(A)*P(B)*P(C).

(2) If A and B are two independent events, then P(AUB) = 1 – P(A')P(B').
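The diamond-and-ace example can also be verified by brute-force enumeration of a 52-card deck; a minimal sketch:

```python
from fractions import Fraction
from itertools import product

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = list(product(ranks, suits))            # 52 equally likely cards

A = [c for c in deck if c[1] == 'diamonds']   # event A: a diamond
B = [c for c in deck if c[0] == 'A']          # event B: an ace
AB = [c for c in deck if c in A and c in B]   # A ∩ B: the ace of diamonds

def P(event):
    return Fraction(len(event), len(deck))

print(P(A), P(B), P(AB))            # 1/4 1/13 1/52
print(P(AB) == P(A) * P(B))         # True, so A and B are independent
```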


DS/U1 Topic 3 Conditional Probability


Conditional probability: Many events associated with real experiments, specifically those related to civil engineering, have some dependency on other events associated with the same experiment. In that case the occurrence of the dependent event is a function of the occurrence of the other events. One such example: given that the cube density lies between two values, what is the probability that the cube strength lies between two specific values?

The probability of the occurrence of an event A given that an event B has already occurred is called the conditional probability of A given B, written P(A|B) = P(A∩B)/P(B), provided P(B) > 0.

The same is explained in the figure using the sample spaces related to the events A and B, assuming that there are a few sample points common to these two events. Part 1 of the figure shows the total sample space related to the experiment in the form of a rectangle and the sample space related to event A as a circle. Similarly, part 2 of the figure shows the total sample space and the sample space related to event B. As explained earlier, in conditional probability the total sample space is restricted to the sample space related to event B (which has already occurred). The same is shown in part 3 of Figure 2.15. Now the sample space for event A (with B as the total sample space available) is nothing but the sample points related to event A that fall in this restricted sample space. This is nothing but the intersection of the events A and B and is shown in part 3 of the figure as the hatched area.

Figure 2.15: Representation of conditional probability using the Venn diagrams

For example, there are 100 trips per day between two places X and Y. Out of these 100 trips
50 are made by car, 25 are made by bus and the other 25 are by local train. Probabilities
associated to these modes are 0.5, 0.25, and 0.25, respectively. In transportation engineering
both the bus and the local train are considered as public transport so the event space
associated to this is the summation of the event spaces associated to bus and local train.
Probability of choosing public transportation is therefore 0.25 + 0.25 = 0.5. Now if one is interested in the probability of choosing the bus given that public transportation is chosen, conditional probability gives it directly: P(bus | public transport) = P(bus)/P(public transport) = 0.25/0.5 = 0.5.
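A short sketch of the same calculation from the raw trip counts; since every bus trip is also a public-transport trip, P(bus ∩ public) = P(bus):

```python
trips = {'car': 50, 'bus': 25, 'local train': 25}   # trips per day between X and Y
total = sum(trips.values())

P_bus = trips['bus'] / total                               # 0.25
P_public = (trips['bus'] + trips['local train']) / total   # 0.50

# P(bus | public transport) = P(bus ∩ public) / P(public)
print(P_bus / P_public)                                    # 0.5
```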


DS/U1 Topic 4 Bayes' Rule


Bayes’ theorem is a way to figure out conditional probability. Conditional probability is the
probability of an event happening, given that it has some relationship to one or more other
events. For example, your probability of getting a parking space is connected to the time of
day you park, where you park, and what conventions are going on at any time. Bayes’
theorem is slightly more nuanced. In a nutshell, it gives you the actual probability of
an event given information about tests.

"Events" are different from "tests." For example, there is a test for liver disease, but that's separate from the event of actually having liver disease.

Tests are flawed: just because you have a positive test does not mean you actually have the
disease. Many tests have a high false positive rate. Rare events tend to have higher false
positive rates than more common events. We’re not just talking about medical tests here. For
example, spam filtering can have high false positive rates. Bayes’ theorem takes the test
results and calculates your real probability that the test has identified the event.

Bayes' Theorem (also known as Bayes' rule) is a deceptively simple formula used to calculate conditional probability. The theorem was named after the English mathematician Thomas Bayes (1701-1761). The formal statement of the rule is:

P(A|B) = P(B|A) * P(A) / P(B)

In most cases, you can’t just plug numbers into an equation; You have to figure out what your
“tests” and “events” are first. For two events, A and B, Bayes’ theorem allows you to figure
out p(A|B) (the probability that event A happened, given that test B was positive) from
p(B|A) (the probability that test B happened, given that event A happened). It can be a little
tricky to wrap your head around as technically you’re working backwards; you may have to
switch your tests and events around, which can get confusing. An example should clarify
what I mean by “switch the tests and events around.”

Bayes’ Theorem Example


You might be interested in finding out a patient’s probability of having liver disease if they
are an alcoholic. “Being an alcoholic” is the test (kind of like a litmus test) for liver disease.

A could mean the event “Patient has liver disease.” Past data tells you that 10% of patients
entering your clinic have liver disease. P(A) = 0.10.

B could mean the litmus test that “Patient is an alcoholic.” Five percent of the clinic’s
patients are alcoholics. P(B) = 0.05.

You might also know that among those patients diagnosed with liver disease, 7% are
alcoholics. This is your B|A: the probability that a patient is alcoholic, given that they have
liver disease, is 7%.


Bayes’ theorem tells you:


P(A|B) = (0.07 * 0.1)/0.05 = 0.14
In other words, if the patient is an alcoholic, their chances of having liver disease is 0.14
(14%). This is a large increase from the 10% suggested by past data. But it’s still unlikely
that any particular patient has liver disease.
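The liver-disease example translates directly into code; a minimal sketch of the same arithmetic:

```python
P_A = 0.10          # P(liver disease)
P_B = 0.05          # P(alcoholic)
P_B_given_A = 0.07  # P(alcoholic | liver disease)

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
P_A_given_B = P_B_given_A * P_A / P_B
print(P_A_given_B)  # 0.14
```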


DS/U1 Topic 5 Probability Distribution: Binomial, Poisson, Normal Distribution
Probability Distributions

Probability theory is the foundation for statistical inference. A probability distribution is a device for indicating the values that a random variable may have. There are two categories of random variables: discrete random variables and continuous random variables.

Discrete random variable

The probability distribution of a discrete random variable specifies all possible values of a
discrete random variable along with their respective probabilities.

Examples can be

• Frequency distribution
• Probability distribution (relative frequency distribution)
• Cumulative frequency

Examples of discrete probability distributions are the binomial distribution and the Poisson
distribution.

Binomial Distribution

A binomial experiment is a probability experiment with the following properties.

1. Each trial can have only two outcomes which can be considered success or failure.
2. There must be a fixed number of trials.
3. The outcomes of each trial must be independent of each other.
4. The probability of success must remain the same in each trial.

The outcomes of a binomial experiment are called a binomial distribution.

Poisson Distribution

The Poisson distribution is based on the Poisson process.

1. The occurrences of the events are independent in an interval.
2. An infinite number of occurrences of the event are possible in the interval.
3. The probability of a single event in the interval is proportional to the length of the interval.
4. In an infinitely small portion of the interval, the probability of more than one occurrence of the event is negligible.
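As an illustration of the two discrete distributions, their probability mass functions can be evaluated with SciPy, assuming it is installed; the parameter values below are hypothetical and chosen purely for demonstration.

```python
from scipy.stats import binom, poisson

# Binomial: k successes in n independent trials, each with success probability p
n, p = 10, 0.5                 # hypothetical values
print(binom.pmf(3, n, p))      # P(X = 3)
print(binom.cdf(3, n, p))      # P(X <= 3)

# Poisson: k occurrences in an interval with mean rate lam
lam = 4                        # hypothetical mean number of events
print(poisson.pmf(2, lam))     # P(X = 2)
print(poisson.pmf(0, lam))     # P(no occurrence)
```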

Continuous probability distributions


A continuous variable can assume any value within a specified interval of values assumed by
the variable. In a general case, with a large number of class intervals, the frequency polygon
begins to resemble a smooth curve.

A continuous probability distribution is a probability density function. The area under the
smooth curve is equal to 1 and the frequency of occurrence of values between any two points
equals the total area under the curve between the two points and the x-axis.

The Normal Distribution

The normal distribution is the most important distribution in biostatistics. It is frequently called the Gaussian distribution. The two parameters of the normal distribution are the mean (μ) and the standard deviation (σ). The graph has a familiar bell-shaped curve.

Graph of a Normal Distribution

Characteristics of the normal distribution


1. It is symmetrical about μ.

2. The mean, median and mode are all equal.

3. The total area under the curve above the x-axis is 1 square unit. Therefore 50% is to the right of μ and 50% is to the left of μ.

4. Perpendiculars erected at:
μ ± σ contain about 68%;
μ ± 2σ contain about 95%;
μ ± 3σ contain about 99.7%
of the area under the curve.

The standard normal distribution

A normal distribution is determined by μ and σ. This creates a family of distributions depending on the values of μ and σ. The standard normal distribution has μ = 0 and σ = 1.


Finding normal curve areas

1. The table gives areas between –∞ and the value of z.

2. Find the z value in tenths in the column at left margin and locate its row. Find the
hundredths place in the appropriate column.

3. Read the value of the area (P) from the body of the table where the row and column intersect. P is the cumulative probability that the standard normal variable takes a value less than or equal to the given z. Values of P are given as four-place decimals.

Finding probabilities

We find probabilities using the table and a four-step procedure as illustrated below.

a) What is the probability that z < -1.96?

(1) Sketch a normal curve


(2) Draw a line for z = -1.96
(3) Find the area in the table
(4) The answer is the area to the left of the line P(z < -1.96) = .0250

b) What is the probability that -1.96 < z < 1.96?


(1) Sketch a normal curve


(2) Draw lines for lower z = -1.96, and upper z = 1.96
(3) Find the area in the table corresponding to each value
(4) The answer is the area between the values–subtract lower from upper P(-1.96 < z <
1.96) = .9750 – .0250 = .9500

c) What is the probability that z > 1.96?

(1) Sketch a normal curve


(2) Draw a line for z = 1.96
(3) Find the area in the table
(4) The answer is the area to the right of the line; found by subtracting table value from
1.0000; P(z > 1.96) =1.0000 – .9750 = .0250
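The three table look-ups above can be cross-checked with the standard normal CDF in SciPy, assuming it is available:

```python
from scipy.stats import norm

print(norm.cdf(-1.96))                    # P(z < -1.96)          ≈ 0.0250
print(norm.cdf(1.96) - norm.cdf(-1.96))   # P(-1.96 < z < 1.96)   ≈ 0.9500
print(1 - norm.cdf(1.96))                 # P(z > 1.96)           ≈ 0.0250
```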

Applications of the Normal distribution

The normal distribution is used as a model to study many different variables. We can use the
normal distribution to answer probability questions about random variables. Some examples
of variables that are normally distributed are human height and intelligence.

Solving normal distribution application problems

In this explanation we add an additional step. Following the model of the normal distribution, a given value of x must be converted to a z score before it can be looked up in the z table.

(1) Write the given information


(2) Sketch a normal curve
(3) Convert x to a z score
(4) Find the appropriate value(s) in the table
(5) Complete the answer

Illustrative Example: Total fingerprint ridge count in humans is approximately normally


distributed with mean of 140 and standard deviation of 50. Find the probability that an
individual picked at random will have a ridge count less than 100. We follow the steps to
find the solution.

(1) Write the given information

μ = 140
σ = 50
x = 100

(2) Sketch a normal curve

(3) Convert x to a z score

z = (x – μ)/σ = (100 – 140)/50 = –0.8

(4) Find the appropriate value(s) in the table

A value of z = –0.8 gives an area of .2119, which corresponds to the probability P(z < –0.8)

(5) Complete the answer

The probability that x is less than 100 is .2119.
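The same steps can be carried out in code for the ridge-count example, assuming SciPy is available:

```python
from scipy.stats import norm

mu, sigma, x = 140, 50, 100
z = (x - mu) / sigma          # convert x to a z score: (100 - 140) / 50 = -0.8
print(z, norm.cdf(z))         # -0.8, about 0.2119

# Equivalently, without converting by hand:
print(norm.cdf(x, loc=mu, scale=sigma))
```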


DS/U1 Topic 6 Application of Probability Theory in Business Decision Making

Almost every business decision you make relates to some aspect of probability. While your
focus is on formulas and statistical calculations used to define probability, underneath these
lie basic concepts that determine whether — and how much — event interactions affect
probability. Together, statistical calculations and probability concepts allow you to make
good business decisions, even in times of uncertainty.

About Probability, Statistics and Chance

Probability concepts are abstract ideas used to identify the degree of risk a business decision
involves. In determining probability, risk is the degree to which a potential outcome differs
from a benchmark expectation. You can base probability calculations on a random or full
data sample. For example, consumer demand forecasts commonly use a random sampling
from the target market population. However, when you’re making a purchasing decision
based solely on cost, the full cost of each item determines which comes the closest to
matching your cost expectation.

Mutual Exclusivity

The concept of mutual exclusivity applies if the occurrence of one event prohibits the
occurrence of another event. For example, assume you have two tasks on your to-do list. Both
tasks are due today and both will take the entire day to complete. Whichever task you choose
to complete means the other will remain incomplete. These two tasks can’t have the same
outcome. Thus, these tasks are mutually exclusive.

Dependent Events

A second concept refers to the impact two separate events have on each other. Dependent
events are those in which the occurrence of one event affects — but doesn’t prevent — the
probability of the other occurring. For example, assume a five-year goal is to purchase a new
building and pay the full purchase price in cash. The expected funding source is investment
returns from excess sales revenue investments. The probability of the purchase happening
within the five-year period depends on whether sales revenues meet projected expectations.
This makes these dependent events.

Independent Events

Independent events are those in which the occurrence of one event has no effect on the probability of another event. For example, assume consumer demand for hairbrushes is falling to an all-time low. The concept of independence says that declining demand for hairbrushes and the probability that demand for shampoo will also decline share no relationship. In the same way, if you intend to purchase a new building by investing personal funds instead of relying on investment returns from excess sales revenues, the purchase of a new building and sales revenues share no relationship. Thus, these are now independent events.


UNIT 2 Linear Programming

DS/U2 Topic 1 Linear Programming Meaning and Assumption


The Linear Programming problem is formulated to determine the optimum solution by
selecting the best alternative from the set of feasible alternatives available to the decision
maker.

Assumptions of Linear programming

There are several assumptions on which the linear programming works, these are:

1. Proportionality: The basic assumption underlying linear programming is that any change in the constraint inequalities will produce a proportional change in the objective function. This means that if a product contributes Rs 20 towards the profit, then the total contribution would be equal to 20x1, where x1 is the number of units of the product.

For example, if there are 5 units of the product, then the contribution would be Rs 100 and in
the case of 10 units, it would be Rs 200. Thus, if the output (sales) is doubled, the profit
would also be doubled.

2. Additivity: The assumption of additivity asserts that the total profit of the objective function
is determined by the sum of profit contributed by each product separately. Similarly, the total
amount of resources used is determined by the sum of resources used by each product
separately. This implies, there is no interaction between the decision variables.


3. Continuity: Another assumption of linear programming is that the decision variables are
continuous. This means a combination of outputs can be used with the fractional values along
with the integer values.

For example, suppose 5⅔ units of product A and 10⅓ units of product B are to be produced in a week. In this case, the fractional amount of production will be taken as work-in-progress and the remaining production part is taken in the following week. Therefore, a production of 17 units of product A and 31 units of product B over a three-week period implies 5⅔ units of product A and 10⅓ units of product B per week.

4. Certainty: Another underlying assumption of linear programming is certainty, i.e. the parameters of the objective function coefficients and the coefficients of the constraint inequalities are known with certainty. Profit per unit of product, availability of material and labor per unit, and requirement of material and labor per unit are known and given in the linear programming problem.
5. Finite Choices: This assumption implies that the decision maker has a finite set of choices, and the decision variables assume non-negative values. The non-negativity assumption is true in the sense that the output in a production problem cannot be negative. Thus, this assumption is considered feasible.

Thus, while solving for the linear programming problem, these assumptions should be kept in
mind such that the best alternative is chosen.


DS/U2 Topic 2 Formulation of Linear Programming Model


Maximization Case
Let's understand the maximization case with the help of a problem. Suppose a firm produces two products A and B. For producing each unit of product A, 4 kg of raw material and 6 labor hours are required, while for each unit of product B, 4 kg of raw material and 5 labor hours are required. The total availability of raw material and labor hours is 60 kg and 90 hours respectively (per week). The unit price of product A is Rs 35 and of product B is Rs 40.

This problem can be converted into linear programming problem to determine how many units of
each product should be produced per week to have the maximum profit. Firstly, the objective
function is to be formulated. Suppose x1 and x2 are units produced per week of product A and B
respectively. The sale of product A and product B yields Rs 35 and Rs 40 respectively. The total
profit will be equal to

Z = 35x1+ 40x2 (objective function)

Since the raw material and labor are in limited supply, the mathematical relationships that express these limitations are inequalities. Therefore, the inequality constraints will be as follows:

Product A requires 4 kg of raw material and product B also requires 4 Kg of Raw material; thus,
total consumption is 4x1+4x2, which cannot exceed the total availability of 60 kg. Thus, this
constraint can be expressed as:

4x1 + 4x2 ≤ 60

Similarly, the second constraint equation will be:

6x1 + 5x2 ≤ 90

where 6 hours and 5 hours of labor are required for the production of each unit of product A and B respectively, and the total labor used cannot exceed the availability of 90 hours.

Thus, the linear programming problem will be:

Maximize Z = 35x1+ 40x2 (profit)

Subject to:

4x1 + 4x2 ≤ 60 (raw material constraint)


6x1 + 5x2 ≤ 90 (labor hours constraint)
x1, x2 ≥ 0 (Non-negativity restriction)

Note: The "≤" (less than or equal to) sign is used because the profit-maximizing output may not fully utilize all the resources, and some may be left unused. The non-negativity condition is used since x1 and x2 are numbers of units produced and cannot have negative values.
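For readers who want to verify the model numerically, the problem can be handed to a solver; below is a sketch using SciPy's linprog, assuming SciPy is installed (linprog minimizes, so the profit coefficients are negated).

```python
from scipy.optimize import linprog

c = [-35, -40]          # negate because linprog minimizes
A_ub = [[4, 4],         # raw material: 4x1 + 4x2 <= 60
        [6, 5]]         # labor hours:  6x1 + 5x2 <= 90
b_ub = [60, 90]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)            # optimal weekly production plan (x1, x2)
print(-res.fun)         # maximum weekly profit Z
```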


Minimization Case

The minimization case can be well understood through a problem. Let's say the agricultural research institute recommended that a farmer spread at least 5000 kg of phosphate fertilizer and not less than 7000 kg of nitrogen fertilizer to raise the productivity of the crops on his farm. There are two mixtures, A and B, weighing 100 kg each, from which these fertilizers can be obtained.

The cost of each Mixture A and B is Rs 40 and 25 respectively. Mixture A contains 40 kg of


phosphate and 60 kg of nitrogen while the Mixture B contains 60 kg of phosphate and 40 kg of
nitrogen. This problem can be represented as a linear programming problem to find out how
many bags of each type a farmer should buy to get the desired amount of fertilizers at the
minimum cost.

Firstly, the objective function is to be formulated. Suppose x1 and x2 are the numbers of bags of mixture A and mixture B. The cost of the two mixtures is 40x1 + 25x2 and thus, the objective function will be:

Minimize

Z = 40x1+25x2

In this problem, there are two constraints, minimum 5000 kg of phosphate and minimum 7000 kg
of nitrogen is required. The Bag A contains 40 kg of phosphate while Bag B contains 60 kg of
phosphate. Thus, the phosphate constraint can be expressed as:

40x1 + 60x2 ≥ 5000

Similarly, the second constraint equation can be expressed as:

60x1 + 40x2 ≥ 7000

Where, Bag A contains 60 kg of nitrogen and Bag B contains 40 kg of nitrogen, and the
minimum requirement of nitrogen is 7000 kg.

Thus, the linear programming problem is:

Minimize Z = 40x1+25x2 (cost)

Subject to:

40x1 + 60x2 ≥ 5000 (Phosphate Constraint)


60x1 + 40x2 ≥ 7000 (Nitrogen Constraint)
x1, x2 ≥ 0 (Non-negativity Restriction)

Note: The "≥" (greater than or equal to) sign shows that the minimum requirements must be fully met at the minimum cost. The non-negativity condition is used since x1 and x2 represent the number of bags of each mixture and hence cannot have negative values.
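The fertilizer problem can be solved the same way; since linprog only accepts "≤" rows, each "≥" constraint is multiplied by −1 (a sketch, assuming SciPy is available):

```python
from scipy.optimize import linprog

c = [40, 25]            # minimize 40x1 + 25x2
A_ub = [[-40, -60],     # 40x1 + 60x2 >= 5000  ->  -40x1 - 60x2 <= -5000
        [-60, -40]]     # 60x1 + 40x2 >= 7000  ->  -60x1 - 40x2 <= -7000
b_ub = [-5000, -7000]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimal bags of mixture A and B, and the minimum cost
```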


DS/U2 Topic 3 Solution of LP Problem with the help of graphical and


Simplex Method
The Graphical Method (graphic solving) is an excellent alternative for the representation and
solving of Linear Programming models that have two decision variables.

Exercise #1: A workshop has three (3) types of machines A, B and C; it can manufacture two (2)
products 1 and 2, and all products have to go to each machine and each one goes in the same
order; First to the machine A, then to B and then to C. The following table shows:

• The hours needed at each machine, per product unit


• The total available hours for each machine, per week
• The profit of each product per unit sold

Formulate and solve using the graphical method a Linear Programming model for the previous
situation that allows the workshop to obtain maximum gains.

Decision Variables:

• x1: Product 1 units to be produced weekly
• x2: Product 2 units to be produced weekly

Objective Function:

Maximize Z = (profit per unit of Product 1)·x1 + (profit per unit of Product 2)·x2, where the per-unit profits are those given in the table described above.

Constraints:

The constraints represent the number of hours available weekly for machines A, B and C,
respectively, and also incorporate the non-negativity conditions.

For the graphical solution of this model we will use the Graphic Linear Optimizer (GLP) software. The green colored area corresponds to the set of feasible solutions, and the level curve of the objective function that passes through the optimal vertex is shown with a red dotted line.

The optimal solution is the pair (x1, x2) at that vertex, and the objective value there represents the workshop's maximum profit (the numerical values are read from the graph).

Simplex Method

The Simplex Method or Simplex Algorithm is used for calculating the optimal solution to the
linear programming problem. In other words, the simplex algorithm is an iterative procedure
carried systematically to determine the optimal solution from the set of feasible solutions.

Firstly, to apply the simplex method, appropriate slack variables are introduced into the linear programming problem, and the primary or decision variables are equated to zero. The iterative process begins by assigning values to these defined variables. The value of the decision variables is taken as zero since the evaluation, as in the graphical approach, begins with the origin. Therefore, x1 and x2 are equal to zero.

The decision maker will enter appropriate values of the variables in the problem and find out the
variable value that contributes maximum to the objective function and removes those values
which give undesirable results. Thus, the value of the objective function gets improved through
this method. This procedure of substitution of variable value continues until any further
improvement in the value of the objective function is possible.

Following two conditions need to be met before applying the simplex method:

1. The right-hand side of each constraint inequality should be non-negative. In case, any linear
programming problem has a negative resource value, then it should be converted into positive
value by multiplying both the sides of constraint inequality by “-1”.
2. The decision variables in the linear programming problem should be non-negative.

Thus, the simplex algorithm is efficient since it considers only a few of the feasible solutions, namely the corner points, to determine the optimal solution to the linear programming problem.
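The tableau mechanics described above can be condensed into a short routine. The sketch below implements a bare-bones simplex for maximization problems with "≤" constraints and non-negative right-hand sides, and applies it to the product-mix model from Topic 2; it is illustrative only, with no special handling of degeneracy or cycling.

```python
import numpy as np

def simplex_max(c, A, b):
    """Bare-bones tableau simplex: maximize c.x subject to A x <= b, x >= 0, b >= 0."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))      # tableau [A | I | b] with the objective row below
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c                        # reduced costs start at -c
    basis = list(range(n, n + m))         # slack variables form the initial basis
    while T[-1, :-1].min() < -1e-9:       # a negative reduced cost means we can still improve
        j = int(np.argmin(T[-1, :-1]))    # entering variable (most negative reduced cost)
        col = T[:m, j]
        ratios = np.where(col > 1e-9, T[:m, -1] / np.where(col > 1e-9, col, 1), np.inf)
        i = int(np.argmin(ratios))        # leaving row (minimum ratio test)
        if not np.isfinite(ratios[i]):
            raise ValueError("problem is unbounded")
        T[i, :] /= T[i, j]                # pivot so that column j becomes a unit vector
        for r in range(m + 1):
            if r != i:
                T[r, :] -= T[r, j] * T[i, :]
        basis[i] = j
    x = np.zeros(n)
    for row, var in enumerate(basis):     # read the decision variables off the basis
        if var < n:
            x[var] = T[row, -1]
    return x, T[-1, -1]                   # optimal point and optimal objective value

# Product-mix model from Topic 2: maximize 35x1 + 40x2, 4x1+4x2 <= 60, 6x1+5x2 <= 90
x, z = simplex_max(np.array([35.0, 40.0]),
                   np.array([[4.0, 4.0], [6.0, 5.0]]),
                   np.array([60.0, 90.0]))
print(x, z)   # optimal plan and maximum profit
```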


DS/U2 Topic 4 Concept of Duality

The Duality in Linear Programming states that every linear programming problem has
another linear programming problem related to it and thus can be derived from it. The
original linear programming problem is called “Primal,” while the derived linear problem is
called “Dual.”

Before solving for the duality, the original linear programming problem is to be formulated in its standard form. Standard form means that all the variables in the problem are non-negative and that the "≤" sign is used in the maximization case while the "≥" sign is used in the minimization case.

The concept of Duality can be well understood through a problem given below:

Maximize

Z = 50x1+30x2

Subject to:
4x1 + 3x2 ≤ 100
3x1 + 5x2 ≤ 150
x1, x2 ≥ 0

The duality can be applied to the above original linear programming problem as:

Minimize

G = 100y1+150y2

Subject to:

4y1 + 3y2 ≥ 50
3y1 + 5y2 ≥ 30
y1, y2 ≥ 0

The following observations were made while forming the dual linear programming problem:

1. The primal or original linear programming problem is of the maximization type while the
dual problem is of minimization type.
2. The constraint values 100 and 150 of the primal problem have become the coefficients of the dual variables y1 and y2 in the objective function of the dual problem, while the coefficients of the variables in the objective function of the primal problem have become the constraint values in the dual problem.
3. The first column in the constraint inequality of primal problem has become the first row in a
dual problem and similarly the second column of constraint has become the second row in the
dual problem.


4. The directions of inequalities have also changed, i.e. in the dual problem, the sign is the
reverse of a primal problem. Such that in the primal problem, the inequality sign was “≤” but
in the dual problem, the sign of inequality becomes “≥”.
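Strong duality can be checked numerically for the pair above: both problems are passed to linprog (a sketch, assuming SciPy is available), and their optimal objective values coincide.

```python
from scipy.optimize import linprog

# Primal: maximize 50x1 + 30x2  s.t.  4x1 + 3x2 <= 100,  3x1 + 5x2 <= 150
primal = linprog([-50, -30], A_ub=[[4, 3], [3, 5]], b_ub=[100, 150],
                 bounds=[(0, None)] * 2)

# Dual: minimize 100y1 + 150y2  s.t.  4y1 + 3y2 >= 50,  3y1 + 5y2 >= 30
dual = linprog([100, 150], A_ub=[[-4, -3], [-3, -5]], b_ub=[-50, -30],
               bounds=[(0, None)] * 2)

print(-primal.fun, dual.fun)   # the two optimal values are equal (strong duality)
```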


DS/U2 Topic 5 Shadow Prices


Shadow prices reflect true values for factors and products for the calculation or estimations of
prices in social cost-benefit analysis. J. Tinbergen defines them, “Shadow prices are prices
indicating the intrinsic or true value of a factor or product in the sense of equilibrium
prices. These prices may be different for different time periods as well as geographically
separate areas and various occupations (in the case of labour). They may deviate from
market prices.”

According to E.J. Mishan, “A shadow or accounting price…. is the price the economist
attributes to a good or factor on the argument that it is more appropriate for the purposes of
economic calculation than its existing price if any.”

Need and Determination of Shadow Prices:

In developing countries for project evaluation the distribution of factors on the basis of
market prices is imperfect because there exist fundamental disequilibria which are reflected
in mass underemployment at existing wage levels, in the deficiency of funds at existing
interest rates and in the scarcity of foreign exchange at the prevalent exchange rate.

In such a situation, the equilibrium level of wages would be much below the market wage,
the equilibrium interest rates would be higher than their market rates, and the equilibrium rate
of exchange would be lower than its market rate.

In order to overcome these difficulties, J. Tinbergen, H.B. Chenery and K.S.


Kretchemer have emphasized the use of shadow or accounting prices for the following
reasons:

1. Imperfect Market Mechanism:


The price mechanism operates imperfectly in developing countries. Market prices do not
correctly reflect relative scarcities, benefits, and costs. This is because perfect competition is
entirely absent. Structural changes do not respond to price changes.

Institutional factors distort the existence of equilibrium in the product, labour, capital and
foreign exchange markets. Thus prices fail to reflect and transmit the direct and indirect
influences on the supply side and the demand side.

All such difficulties are overcome with the help of shadow prices. Fiscal, monetary and other
policies also help in bringing the market prices of products labour, capital and foreign
exchange in conformity with their shadow prices and thus make investment projects a
success.

2. Wage Rates:
In developing countries, there exist fundamental disequilibria in the labour market which are
reflected in mass underemployment and unemployment at existing wage rates. In such
economies, wages are much lower in the non-organised agricultural sector.


There is also surplus labour in rural areas whose marginal product is zero or negligible. But it
cannot be assumed to be zero in calculating the cost of such labour on construction works. On
the other hand, wages are much higher than the opportunity cost of labour in the industrial
sector where labour is organised in strong trade unions.

Therefore, unadjusted market wages of labour cannot be used for calculating the cost of such
labour on investment projects. In such a situation, the equilibrium level of wages would be
much below the market wage in the rural sector.

Economists suggest that the shadow price of such labour can be fixed anywhere above the
zero marginal product of labour, and with the increase in the marginal product of labour, its
shadow price can also be raised to the market level of wages.

3. Capital Costs:
In developing countries, funds for investment are deficient at prevailing interest rates. The
main cause is the deficiency of savings. The majority of people are poor having low income
levels, low rate of savings and hence low propensity to invest. Moreover, there is little
relationship between the supply of capital and interest rates prevalent in the country.

There is also wide disparity between the prevailing interest rates in different regions and
areas. In the capital market, the market rate of interest is much higher than the bank rate.
Therefore, the equilibrium interest rate would be much higher than its market rate. If
unadjusted market price of capital is used in calculating the cost of capital on investment
projects, it would underestimate the real cost of such projects.

To overcome this problem, the shadow rate of interest can be estimated on the basis of
interest rates paid by private investors. But while so doing, it is essential to allow for a social
rate of discount for calculating the social benefits and costs of an investment project where its
net present value (NPV) is calculated as

NPV = Σt (Bt – Ct) / (1 + i)^t

Where Bt is the expected gross benefit of the project at time t, Ct is expected gross cost of the
project at time t, and i is the social discount rate at time t. The social discount rate is the
government’s borrowing rate on long-term securities. So it differs from the market rate of
interest. If the social discount rate is higher, short-period projects with higher net benefits are preferred, and if it is low, long-period projects with lower benefits are chosen.
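The NPV expression can be evaluated directly; the cash flows and the social discount rate below are hypothetical numbers chosen only to illustrate the formula.

```python
def npv(benefits, costs, i):
    """NPV = sum over t of (Bt - Ct) / (1 + i)**t, with t = 0, 1, 2, ..."""
    return sum((b - c) / (1 + i) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Hypothetical project: gross benefits and costs over five periods, 8% social discount rate
B = [0, 300, 400, 500, 500]
C = [1000, 50, 50, 50, 50]
print(round(npv(B, C, 0.08), 2))
```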

4. Exchange Rate:
The acute scarcity of foreign exchange leads to balance of payments difficulties in developing countries. As a result, the current rate of foreign exchange is much lower than in the black market, and the equilibrium exchange rate is lower than its market exchange rate.

To solve this problem, an artificial equilibrium is achieved in the balance of payments by


fixing a higher shadow exchange rate than the official exchange rate. For this, weight is
attached to the cost of foreign exchanges in the project.


Suppose the shadow price of foreign exchange is 50% higher than its market value; the net effect of a project on the balance of payments should then be given an additional weight of 0.5. This is equivalent to valuing foreign exchange costs and earnings at a price of 1.5. Tinbergen suggests calculating the shadow foreign exchange rate based on the 'black' and 'free' rates of exchange.

If the free (official) exchange rate is Rs.50 a dollar and the black rate is Rs.75 a dollar, and conversion at the official rate is four times as great as that at the black rate, then the shadow exchange rate would be the weighted average,

(4 × 50 + 1 × 75)/5 = Rs.55

Thus Rs.55 per dollar would be the shadow rate instead of the official rate of Rs.50.

5. Inflationary Pressures:
Developing countries suffer from inflationary pressures because the market mechanism
operates imperfectly due to a number of socio-economic and administrative obstacles. Even
otherwise, rise in prices are inevitable in the development process.

So actual market prices do not reflect social benefits and costs. Some prices are fixed by the
government. Others are free, but are influenced by restrictive trade practices or monopolies.
Still others are influenced by quantitative controls.

When prices rise, there is overvaluation of domestic currency. The prices of imported goods
for projects underestimate their real cost. Thus there is need for shadow prices in the case of
investment projects in different sectors of the economy.

A factor that is expected to be in short supply should have a shadow price higher than its
market price, while a surplus factor should have a lower shadow price than its market price.
Thus the shadow price is the price which would prevail if prices were equilibrium prices.

Limitations of Shadow Prices:

The following are the limitations in the determination of shadow prices:

1. The calculation of shadow prices pre-supposes the availability of data. But adequate data are not
easily available in less developed countries.
2. Establishing the intrinsic value of a factor or product requires the existence of full equilibrium in all markets. In an underdeveloped economy which is characterized by a number of fundamental disequilibria, knowledge of the full equilibrium conditions for the entire economy is not possible. Thus the notion of shadow prices corresponding to intrinsic values is arbitrary.
3. The assumption of full employment equilibrium in the whole economy makes the concept of shadow
prices indeterminate. It requires a complete knowledge of demand and supply functions which are
based on the existing socio-economic institutions in the economy. Thus shadow prices are difficult to
ascertain under the existing institutional framework of underdeveloped countries.
4. Another problem arises with regard to the time dimension. The concept of shadow prices is static and
timeless, because shadow prices are used to overcome the difficulties involved in project evaluation
when factor prices change over time. All inputs and outputs are valued at fixed shadow prices in such
cases. This is not realistic because investment projects relate to long periods. Hence the concept of
shadow prices remains a static one.


5. Another practical difficulty relates to the use of shadow prices in the economy where the private
enterprises buy inputs and sell outputs at market prices. The government, on the other hand, uses
shadow prices for the evaluation of its projects but buys all inputs at market prices and sells outputs at
competitive market prices where it does not possess a monopoly.
6. The determination of shadow prices is difficult in the case of projects with high capital-intensity and
which are substitutes and complementary to each other. Suppose there are two projects in which the
input of one is the output of the other and vice-versa.

In such cases, the determination of the shadow prices of the inputs of labour, capital and
foreign exchange will not only be difficult but impossible because the decisions about the
construction plans of the two projects cannot be the same.

DS/U2 Topic 6 Sensitivity Analysis


Sensitivity Analysis, or What-if Analysis, means determining the viability of the project if some variables deviate from their expected values, such as investments or sales. In other words, since the future is uncertain and the entrepreneur wants to know the feasibility of the project when its underlying assumptions, viz. investments or sales, change, sensitivity analysis can be applied.

Whether to accept or reject the proposed project depends on its net present value (NPV).
Hence, sensitivity analysis is calculated in terms of NPV. Firstly, the base-case scenario is
developed; wherein the NPV is calculated for the project based on the assumptions which are
believed to be the most accurate. Then make some changes in the initial assumptions based
on the other potential assumptions, and recalculate the NPV. Once the new NPV is
calculated, analyze its sensitivity in terms of the changes made in the initial assumptions.

Sensitivity analysis is very useful for a firm: it shows the robustness and the vulnerability of the project due to changes in the values of the underlying variables. It indicates, with the help of the NPV value, whether the project is worth carrying forward or not. If the NPV is highly sensitive to changes in a variable, the firm can explore the variability of that critical factor.

This method is very subjective in nature and suffers from certain limitations. Sensitivity
analysis shows the change in NPV due to the change in variables and does not talk about how
likely the change will be. Also, under this method, it is assumed that one variable changes at
a time, but in reality, variables tend to move together.
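A minimal what-if sketch of the procedure described above: the base-case NPV is recomputed after shifting one assumption at a time (here a sales multiplier, then the discount rate) while everything else is held fixed; all numbers are hypothetical and chosen only for illustration.

```python
def npv(cash_flows, rate):
    """Net present value of a stream of net cash flows at t = 0, 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

investment = -1200                        # hypothetical initial outlay at t = 0
base_sales = [400, 450, 500, 550]         # hypothetical net cash inflows from sales
base_rate = 0.10

def project_npv(sales_factor, rate):
    flows = [investment] + [s * sales_factor for s in base_sales]
    return npv(flows, rate)

print("base case         :", round(project_npv(1.00, base_rate), 1))
print("sales 10% lower   :", round(project_npv(0.90, base_rate), 1))
print("discount rate 12% :", round(project_npv(1.00, 0.12), 1))
```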


DS/U2 Topic 7 Role of LP in Economic Decision Making

Agricultural Applications

These applications fall into categories of farm economics and farm management. The former
deals with agricultural economy of a nation or region, while the latter is concerned with the
problems of the individual farm.

The study of farm economics deals with inter-regional competition and optimum allocation of
crop production. Efficient production patterns can be specified by a linear programming
model under regional land resources and national demand constraints.

Linear programming can be applied in agricultural planning, e.g. allocation of limited


resources such as acreage, labour, water supply and working capital, etc. in a way so as to
maximise net revenue.

Production Management

1. Product mix: A company can produce several different products, each of which requires the use of
limited production resources. In such cases, it is essential to determine the quantity of each product to
be produced knowing its marginal contribution and amount of available resource used by it. The
objective is to maximise the total contribution, subject to all constraints.
2. Production planning: This deals with the determination of minimum cost production plan over
planning period of an item with a fluctuating demand, considering the initial number of units in
inventory, production capacity, constraints on production, manpower and all relevant cost factors. The
objective is to minimise total operation costs.
3. Assembly-line balancing: This problem is likely to arise when an item can be made by assembling
different components. The process of assembling requires some specified sequence(s). The objective
is to minimise the total elapse time.
4. Blending problems: These problems arise when a product can be made from a variety of available raw
materials, each of which has a particular composition and price. The objective here is to determine the
minimum cost blend, subject to availability of the raw materials, and minimum and maximum
constraints on certain product constituents.
5. Trim loss: When an item is made to a standard size (e.g. glass, paper sheet), the problem that arises is to determine which combination of requirements should be produced from standard materials in order to minimise the trim loss.

Financial Management

1. Portfolio selection: This deals with the selection of specific investment activity among several other
activities. The objective is to find the allocation which maximises the total expected return or
minimises risk under certain limitations.
2. Profit planning: This deals with the maximisation of the profit margin from investment in plant
facilities and equipment, cash in hand and inventory.

Marketing Management

1. Media selection: Linear programming technique helps in determining the advertising media mix so as to maximise the effective exposure, subject to limitation of budget, specified exposure rates to different market segments, and specified minimum and maximum numbers of advertisements in various media.
2. Travelling salesman problem: The problem of a salesman is to find the shortest route starting from a given city, visiting each of the specified cities and then returning to the original point of departure, provided no city is visited twice during the tour. Such problems can be solved with the help of the modified assignment technique.
3. Physical distribution: Linear programming determines the most economic and efficient manner of locating manufacturing plants and distribution centres for physical distribution.

Personnel Management

• Staffing problem: Linear programming is used to allocate optimum manpower to a particular job so as
to minimise the total overtime cost or total manpower.
• Determination of equitable salaries: Linear programming technique has been used in determining
equitable salaries and sales incentives.
• Job evaluation and selection: Selection of suitable person for a specified job and evaluation of job in
organisations has been done with the help of linear programming technique.

Other applications of linear programming lie in the area of administration, education, fleet
utilisation, awarding contracts, hospital administration and capital budgeting.


DS/U2 Topic 8 Transportation Problems – Initial Basic feasible Solution


The Transportation Method of linear programming is applied to problems related to the study of efficient transportation routes, i.e. how the product is transported from different sources of production to different destinations such that the total transportation cost is minimum.

Here origin means the place where the product is originated or manufactured for ultimate sale, while the places where the product is required to be sold are called destinations. For solving the transportation problem, the following steps are to be systematically followed:

1. Obtaining the initial feasible solution, which means identifying the solution that satisfies the
requirements of demand and supply. There are several methods through which the initial feasible
solution can be obtained; these are:
o North-West Corner
o Least Cost Method
o Vogel’s Approximation Method

Note: It is to be ensured that the number of cells occupied should be equal to m+n-1, where
“m” is the number of rows while “n” is the number of columns.

2. Testing the optimality of the initial feasible solution. Once the feasible solution is obtained, the next step is to check whether it is optimum or not. There are two methods used for testing the optimality:
o Stepping-stone Method
o Modified Distribution Method (MODI)

• The final step is to revise the solution until the optimum solution is obtained.

The two most common objectives of transportation problem could be: i) maximize the profit
of transporting “n” units of product to the destination “y”, ii) Minimize the cost of shipping
“n” units of product to the destination “y”.

North-West Corner Rule


Definition: The North-West Corner Rule is a method adopted to compute the initial
feasible solution of the transportation problem. The name North-west corner is given to this
method because the basic variables are selected from the extreme left corner.

The concept of North-West Corner can be well understood through a transportation problem
given below:


In the table, three sources A, B and C with production capacities of 50 units, 40 units and 60 units of the product respectively are given. Every day the demand of three retailers D, E, F is to be furnished with at least 20 units, 95 units and 35 units of the product respectively. The transportation costs are also given in the matrix.

The prerequisite condition for solving the transportation problem is that demand should be
equal to the supply. In case the demand is more than supply, then dummy origin is added to
the table. The supply of dummy origin will be equal to the difference between the total
supply and total demand. The cost associated with the dummy origin will be zero.

Similarly, in case the supply is more than the demand, then a dummy destination is created whose demand will be equivalent to the difference between supply and demand. Again the cost associated with the dummy destination will be zero.

Once the demand and supply are equal, the following procedure is followed:

1. Select the north-west or extreme left corner of the matrix, assign as many units as possible to cell AD,
within the supply and demand constraints. Such as 20 units are assigned to the first cell, that satisfies
the demand of destination D while the supply is in surplus.


2. Now move horizontally and assign 30 units to the cell AE. Since 30 units are available with the source
A, the supply gets fully saturated.
3. Now move vertically in the matrix and assign 40 units to Cell BE. The supply of source B also gets
fully saturated.
4. Again move vertically, and assign 25 units to cell CE, the demand of destination E is fulfilled.
5. Move horizontally in the matrix and assign 35 units to cell CF, both the demand and supply of origin
and destination gets saturated. Now the total cost can be computed.

The Total cost can be computed by multiplying the units assigned to each cell with the
concerned transportation cost. Therefore,

Total Cost = 20*5+ 30*8+ 40*6+ 25*9+ 35*6 = Rs 1015
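A compact implementation of the rule is sketched below. The unit-cost matrix used here is not shown in the notes (the original table was an image); it has been reconstructed from the allocations and costs quoted in the two worked examples of this topic, so treat it as an assumption.

```python
def north_west_corner(supply, demand, cost):
    """Initial basic feasible solution by the North-West Corner Rule.
    Returns the allocation matrix and the associated transportation cost."""
    supply, demand = supply[:], demand[:]          # work on copies
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(supply[i], demand[j])              # ship as much as possible into cell (i, j)
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:                         # row exhausted: move down
            i += 1
        else:                                      # column satisfied: move right
            j += 1
    total = sum(alloc[i][j] * cost[i][j] for i in range(m) for j in range(n))
    return alloc, total

# Supplies of A, B, C; demands of D, E, F; costs reconstructed from the worked examples
supply = [50, 40, 60]
demand = [20, 95, 35]
cost = [[5, 8, 4],
        [6, 6, 3],
        [3, 9, 6]]
alloc, total = north_west_corner(supply, demand, cost)
print(alloc)     # [[20, 30, 0], [0, 40, 0], [0, 25, 35]]
print(total)     # 1015
```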

Least Cost Method


Definition: The Least Cost Method is another method used to obtain the initial feasible
solution for the transportation problem. Here, the allocation begins with the cell which has
the minimum cost. The lower cost cells are chosen over the higher-cost cell with the objective
to have the least cost of transportation.

The Least Cost Method is considered to produce a better initial solution than the North-West Corner method because it considers the shipping cost while making the allocation, whereas the North-West Corner method only considers the availability and supply requirements, and allocation begins at the extreme left corner irrespective of the shipping cost.

Let’s understand the concept of Least Cost method through a problem given below:

In the given matrix, the supply of each source A, B and C is given, viz. 50 units, 40 units and 60
units respectively. The weekly demand of the three retailers D, E and F, i.e. 20 units, 95 units and 35
units respectively, is also given. The shipping cost is given for all the routes.

The minimum transportation cost can be obtained by following the steps given below:


1. The minimum cost in the matrix is Rs 3, but there is a tie between cells BF and CD, so the question arises as to which cell we should allocate first. Generally, the cell where the maximum quantity can be assigned should be chosen, to obtain a better initial solution. Therefore, 35 units are assigned to cell BF. With this, the demand of retailer F is fulfilled, and only 5 units are left with source B.
2. Again the minimum cost in the matrix is Rs 3. Therefore, 20 units are assigned to cell CD. With this, the demand of retailer D is fulfilled. Only 40 units are left with source C.
3. The next minimum cost is Rs 4, but the demand of F is already met, so we move to the next minimum cost, which is Rs 5. Again, the demand of D is already met. The next minimum cost is Rs 6, with a tie between three cells. However, no units can be assigned to cells BD and CF, as the demands of both retailers D and F are saturated. So, we assign 5 units to cell BE. With this, the supply of source B gets saturated.
4. The next minimum cost is Rs 8; assign 50 units to cell AE. The supply of source A gets saturated.
5. The next minimum cost is Rs 9; assign 40 units to cell CE. With this, the demand and supply of all sources and destinations are saturated.

The total cost can be calculated by multiplying the assigned quantity with the concerned cost
of the cell. Therefore,

Total Cost = 50*8 + 5*6 + 35*3 +20*3 +40*9 = Rs 955.

Note: The supply and demand should be equal. In case the supply is more than the demand, a dummy
destination is added to the table with demand equal to the difference between supply and
demand, and with zero cost. Similarly, in case the demand is more than the supply, a
dummy origin (source) is added to the table with supply equal to the difference between the
quantities demanded and supplied, again with zero cost.

Vogel’s Approximation Method


Definition: The Vogel’s Approximation Method or VAM is an iterative procedure
used to find an initial feasible solution of the transportation problem. As in the Least
Cost Method, the shipping cost is taken into consideration here as well, but in a relative sense (as row and column penalties).

The following is the flow chart showing the steps involved in solving the transportation
problem using the Vogel’s Approximation Method:


The concept of Vogel’s Approximation Method can be well understood through the illustration given below:

1. First of all, the difference between the two least-cost cells is calculated for each row and column, as can be seen in the iteration shown for each row and column. Then the largest difference is selected, which is 4 in this case. So, allocate 20 units to cell BD, since the cell with the minimum cost is chosen for the allocation. Now, only 20 units are left with source B.


2. Column D is deleted. Again, the difference between the two least-cost cells is calculated for each row and column, as seen in the next iteration. The largest difference value comes to 3, so allocate 35 units to cell AF and 15 units to cell AE. With this, the supply of source A and the demand of destination F get saturated, so delete both row A and column F.

3. Now only column E is left. Since no further penalties can be computed, allocate 60 units to
cell CE and 20 units to cell BE, as only 20 units are left with source B. Hence the demand
and supply are completely met.

Now the total cost can be computed, by multiplying the units assigned to each cell with the
cost concerned. Therefore,

Total Cost = 20*3 + 35*1 + 15*4 + 60*4 + 20*8 = Rs 555

Note: Vogel’s Approximation Method is also called the Penalty Method because the
cost differences used are nothing but the penalties of not choosing the least-cost routes.
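
The penalty logic can be sketched in Python as follows. This is an illustration only: the cells of the cost matrix that are not quoted in the example have been filled with assumed values chosen so that the penalties described above are reproduced, and degenerate ties are not handled specially.

# Vogel's Approximation Method: a rough sketch of the penalty logic described above.
def vam(supply, demand, cost):
    supply, demand = supply[:], demand[:]
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    allocation = []

    def penalty(costs):            # difference of the two smallest costs
        s = sorted(costs)          # (if only one cost is left, use it as the penalty)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        row_pen = {i: penalty([cost[i][j] for j in cols]) for i in rows}
        col_pen = {j: penalty([cost[i][j] for i in rows]) for j in cols}
        i_best = max(row_pen, key=row_pen.get)       # row with largest penalty
        j_best = max(col_pen, key=col_pen.get)       # column with largest penalty
        if row_pen[i_best] >= col_pen[j_best]:
            i = i_best
            j = min(cols, key=lambda j: cost[i][j])  # cheapest cell in that row
        else:
            j = j_best
            i = min(rows, key=lambda i: cost[i][j])  # cheapest cell in that column
        qty = min(supply[i], demand[j])
        allocation.append((i, j, qty))
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return allocation

supply = [50, 40, 60]                      # A, B, C
demand = [20, 95, 35]                      # D, E, F
cost = [[6, 4, 1], [3, 8, 7], [4, 4, 2]]   # partly assumed, consistent with the example
alloc = vam(supply, demand, cost)
print(sum(cost[i][j] * q for i, j, q in alloc))   # 555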


DS/U2 Topic 9 Test for Optimality


Testing the optimality of the initial feasible solution.

Once the feasible solution is obtained, the next step is to check whether it is optimum or not.
There are two methods used for testing the optimality:

• Stepping-stone Method
• Modified Distribution Method (MODI)

Stepping Stone Method


The Stepping Stone Method is used to check the optimality of the initial feasible solution
determined by using any of the method Viz. North-West Corner, Least Cost Method or
Vogel’s Approximation Method. Thus, the stepping stone method is a procedure for finding
the potential of any non-basic variables (empty cells) in terms of the objective function.

Through the stepping stone method, we determine what the effect on the transportation cost
would be if one unit were assigned to an empty cell. With the help of this method, we
come to know whether the solution is optimal or not.

The following series of steps is involved in checking the optimality of the initial feasible solution
using the stepping stone method:

1. The prerequisite condition to solve for the optimality is to ensure that the number of occupied cells is
exactly equal to m+n-1, where ‘m’ is the number of rows, while ‘n’ is equal to the number of
columns.
2. Firstly, the empty cell is selected and then the closed path is created which starts from the unoccupied
cell and returns to the same unoccupied cell, called as a “closed loop”. For creating a closed loop the
following conditions should be kept in mind:
o In a closed loop, cells are selected in a sequence such that one cell is unused/unoccupied, and all other
cells are used/occupied.
o A pair of Consecutive used cells lies either in the same row or the same column.
o No three consecutive occupied cells can either be in the same row or column.
o The first and last cells in the closed loop lies either in the same row or column.
o Only horizontal and vertical movement is allowed.

• Once the loop is created, assign “+” or “–“ sign alternatively on each corner cell of the loop, but begin
with the “+” sign for the unoccupied cell.
• Repeat these steps again until all the unoccupied cells get evaluated.
• Now, if all the computed cost changes are equal to or greater than zero, the optimal solution has been reached.
• But if any value comes out to be negative, there is scope to reduce the transportation cost further. In that case, select the unoccupied cell which has the most negative change and assign as many units as possible to it. Add this quantity to the cells carrying a “+” sign in the loop and subtract it from the cells carrying a “–” sign, so that the demand and supply requirements remain balanced.

For example, suppose the following matrix shows the initial feasible solution, and the stepping stone
method is adopted to check its optimality:



With the new matrix so formed, the empty cells are again evaluated through loop formation, and signs are assigned accordingly. The unoccupied cell with the most favourable (most negative) net cost change is assigned units, and this process is repeated until the optimum solution is obtained, i.e. until no unoccupied cell shows a negative net cost change.

Modified Distribution Method


Definition: The Modified Distribution Method or MODI is an efficient method of
checking the optimality of the initial feasible solution.

The concept of MODI can be further comprehended through an illustration given below:

1. Initial basic feasible solution is given below:

2. Now, calculate the values of ui and vj by using the equation:


ui+vj = Cij
Substituting the value of u1 as 0
U1+V1 = C11, 0+V1 = 6 or V1 = 6
U1 +V2 = C12, 0+V2 = 4 or V2 = 4
U2+V2 = C22, U2+4 = 8 or U2 = 4
U3+ V2 = C32, U3+4 = 4 or U3 = 0
U3+V3 = C33, 0+V3 = 2 or V3 =2

3. The next step is to calculate the opportunity cost of the unoccupied cells (AF, BD, BF, CD) by
using the formula Cij – (ui + vj):

AF = C13 – (u1 + v3) = 1 – (0 + 2) = –1
BD = C21 – (u2 + v1) = 3 – (4 + 6) = –7
BF = C23 – (u2 + v3) = 7 – (4 + 2) = +1
CD = C31 – (u3 + v1) = 4 – (0 + 6) = –2

(Reversing the sign, i.e. taking (ui + vj) – Cij, gives the per-unit improvement possible in each cell: +1, +7, –1 and +2 respectively.)

4. Choose the cell with the largest improvement potential, which is BD (+7), and draw a closed path, as shown
in the matrix below. Start from the unoccupied cell and assign “+” and “–” signs alternately.
The most favoured cell is therefore BD; assign as many units as possible to it.


5. The matrix below shows the maximum allocation to the cell BD, and that number of units
are added to the cell with a positive sign and subtracted from the cell with a negative sign.

6. Again, repeat the steps from 1 to 4 i.e. find out the opportunity costs for each unoccupied
cell and assign the maximum possible units to the cell having the largest opportunity cost.
This process will go on until the optimum solution is reached.

The Modified Distribution Method is an improvement over the stepping stone method, since it
can be applied more efficiently when a large number of sources and destinations are
involved, a situation which becomes quite tedious with the stepping stone method.

Modified distribution method reduces the number of steps involved in the evaluation of
empty cells, thereby minimizes the complexity and gives a straightforward computational
scheme through which the opportunity cost of each empty cell can be determined.
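
The ui, vj calculation and the cell evaluations of the illustration above can be reproduced with a few lines of Python. This is an illustrative sketch only; the cost figures and occupied cells are read off from the equations in the example.

# MODI method: computing u_i, v_j and the cell evaluations for the illustration above.
cost = [[6, 4, 1], [3, 8, 7], [4, 4, 2]]              # rows A, B, C; columns D, E, F
occupied = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]   # AD, AE, BE, CE, CF

m, n = len(cost), len(cost[0])
u, v = [None] * m, [None] * n
u[0] = 0                                              # fix u1 = 0, as in the text
# Repeatedly apply u_i + v_j = C_ij on the occupied cells until all u, v are known.
while any(x is None for x in u + v):
    for i, j in occupied:
        if u[i] is not None and v[j] is None:
            v[j] = cost[i][j] - u[i]
        elif v[j] is not None and u[i] is None:
            u[i] = cost[i][j] - v[j]

print("u =", u, " v =", v)                            # u = [0, 4, 0]  v = [6, 4, 2]

# Opportunity cost of each unoccupied cell: d_ij = C_ij - (u_i + v_j).
for i in range(m):
    for j in range(n):
        if (i, j) not in occupied:
            print((i, j), cost[i][j] - (u[i] + v[j]))
# A negative d_ij (here BD = -7) means the cost can still be reduced via that cell.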


DS/U2 Topic 10 Assignment Problems


The assignment problem is a special type of linear programming problem which deals with the
allocation of various resources to various activities on a one-to-one basis. It does so in such a
way that the cost or time involved in the process is minimised and the profit or sales are maximised.
Though these problems can be solved by the simplex method or by the transportation method,
the assignment model gives a simpler approach.

In a factory, a supervisor may have six workers available and six jobs to fill. He will have to
decide which job should be given to which worker. The problem forms a one-to-one basis;
this is an assignment problem.

1. Assignment Model:

Suppose there are n facilities and n jobs; it is clear that in this case there will be n assignments.
Each facility, or say each worker, can perform each job, one at a time. But there should be a certain
procedure by which the assignment is made so that the profit is maximised or the cost or time
is minimised.

In the cost table, Cij is defined as the cost when the jth job is assigned to the ith worker. It may be noted here
that this is a special case of the transportation problem in which the number of rows is equal to the number
of columns and each supply and demand equals one.

Mathematical Formulation:
Any basic feasible solution of an assignment problem consists of (2n – 1) variables, of which (n
– 1) variables are zero, where n is the number of jobs (or of facilities). Due to this high degeneracy, solving
the problem by the usual transportation method would be complex and time consuming; thus a separate
technique has been derived for it. Before going to the actual method, it is very important to formulate the problem.

Suppose xij is a variable which is defined as

xij = 1 if the ith job is assigned to the jth machine or facility,

xij = 0 if the ith job is not assigned to the jth machine or facility.

Now, as the problem forms a one-to-one basis, one job is to be assigned to each facility or machine.


The total assignment cost will be given by

Z = Σi Σj Cij xij

The above definition can be developed into a mathematical model as follows:

Determine xij ≥ 0 (i, j = 1, 2, 3 … n) in order to

Minimise Z = Σi Σj Cij xij

subject to the constraints

Σj xij = 1 for each i (every job is assigned to exactly one facility),
Σi xij = 1 for each j (every facility receives exactly one job),

and xij is either zero or one.

Method to solve Problem (Hungarian Technique):


Consider the objective function of minimization type. Following steps are involved in solving this
Assignment problem,

1. Locate the smallest cost element in each row of the given cost table, starting with the first row. This smallest element is subtracted from each element of that row, so we get at least one zero in each row of the new table.
2. Having constructed the table (as in step 1), take the columns of the table. Starting from the first column, locate the smallest cost element in each column and subtract it from each element of that column. Having performed steps 1 and 2, we get at least one zero in each row and each column of the reduced cost table.
3. Now, the assignments are made in the reduced table in the following manner.

(i) Rows are examined successively until a row with exactly one zero is found. An assignment is made
to this single zero by putting a square □ around it, and all other zeros in the corresponding column are
crossed out (×), because they will not be used to make any other assignment in that column. This step
is conducted for each row.


(ii) Step 3(i) is now performed on the columns as follows: columns are examined successively
till a column with exactly one zero is found. An assignment is made to this single zero by
putting a square around it and, at the same time, all other zeros in the corresponding row are
crossed out (×). This step is conducted for each column.

(iii) Steps 3(i) and 3(ii) are repeated till all the zeros are either marked or crossed out. Now, if
the number of marked zeros (i.e. the assignments made) is equal to the number of rows (or columns),
the optimum solution has been achieved; there will be exactly one assignment in each row and in each
column. Otherwise, if some rows or columns are left without an assignment, we go to step 4.

4. At this stage, draw the minimum number of lines (horizontal and vertical) necessary to cover all
zeros in the matrix obtained in step 3. The following procedure is adopted:

(i) Tick mark (✓) all rows that do not have any assignment.

(ii) Now tick mark (✓) all those columns that have a zero in the ticked rows.

(iii) Now tick mark all the rows that are not already marked and that have an assignment in the
marked columns.

(iv) Steps 4(i), 4(ii) and 4(iii) are repeated until no more rows or columns can be marked.

(v) Now draw straight lines passing through all the unmarked rows and the marked columns. It
may be noticed that in an n × n matrix, fewer than n lines will cover all the zeros whenever a
complete optimal assignment has not yet been found among them.

5. In step 4, if the number of lines drawn is equal to n (the number of rows), then the optimum
solution has been reached; if not, go to step 6.
6. Select the smallest element among all the uncovered elements. This element is subtracted
from every uncovered element and added to every element lying at the intersection of two
lines. This gives the matrix for fresh assignments.
7. Repeat the procedure from step (3) until the number of assignments becomes equal to the number
of rows or number of columns.
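
In practice, the reduction steps above are available in standard libraries. As a quick illustration (assuming SciPy is installed), scipy.optimize.linear_sum_assignment solves the same minimisation; the 4 × 4 cost matrix below is invented purely for demonstration.

# Solving an assignment problem with a library routine (Hungarian-type algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: cost[i][j] = cost of giving job j to worker i.
cost = np.array([[9, 2, 7, 8],
                 [6, 4, 3, 7],
                 [5, 8, 1, 8],
                 [7, 6, 9, 4]])

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
print(list(zip(rows, cols)))               # (worker, job) pairs: [(0, 1), (1, 0), (2, 2), (3, 3)]
print(cost[rows, cols].sum())              # minimum total cost: 13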


DS/U2 Topic 11 Travelling Salesman Problem


Problem Statement
A traveler needs to visit all the cities from a list, where distances between all the cities are
known and each city should be visited just once. What is the shortest possible route that he
visits each city exactly once and returns to the origin city?

Solution
The travelling salesman problem is one of the most notorious computational problems. We can use a brute-
force approach to evaluate every possible tour and select the best one; for n vertices in a graph, there are (n – 1)! possibilities.

Using a dynamic programming approach instead of brute force, the solution can be obtained in
less time, though there is still no known polynomial-time algorithm.

Let us consider a graph G = (V, E), where V is a set of cities and E is a set of weighted edges.
An edge e(u, v) represents that vertices u and v are connected. Distance between
vertex u and v is d(u, v), which should be non-negative.

Suppose we have started at city 1 and after visiting some cities now we are in city j. Hence,
this is a partial tour. We certainly need to know j, since this will determine which cities are
most convenient to visit next. We also need to know all the cities visited so far, so that we
don’t repeat any of them. Hence, this is an appropriate sub-problem.

For a subset of cities S ⊆ {1, 2, 3, … , n} that includes 1, and j ∈ S, let C(S, j) be the length of
the shortest path visiting each node in S exactly once, starting at 1 and ending at j.

When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.

Now, let us express C(S, j) in terms of smaller sub-problems. We need to start at 1 and end at j,
and we should select the city i visited just before j in such a way that

C(S, j) = min { C(S − {j}, i) + d(i, j) : i ∈ S and i ≠ j }

Algorithm: Traveling-Salesman-Problem

C({1}, 1) = 0
for s = 2 to n do
   for all subsets S ⊆ {1, 2, 3, … , n} of size s and containing 1
      C(S, 1) = ∞
      for all j ∈ S and j ≠ 1
         C(S, j) = min { C(S − {j}, i) + d(i, j) : i ∈ S and i ≠ j }
return minj [ C({1, 2, 3, …, n}, j) + d(j, 1) ]

Analysis
There are at most 2^n · n sub-problems and each one takes linear time to solve.
Therefore, the total running time is O(2^n · n^2).


Example
In the following example, we will illustrate the steps to solve the travelling salesman
problem.

From the above graph, the following table is prepared.

1 2 3 4

1 0 10 15 20

2 5 0 9 10

3 6 13 0 12

4 8 8 9 0

S = Φ

Cost(2, Φ, 1) = d(2, 1) = 5
Cost(3, Φ, 1) = d(3, 1) = 6
Cost(4, Φ, 1) = d(4, 1) = 8

S = 1 (subsets of size 1)

Cost(i, S, 1) = min { d[i, j] + Cost(j, S – {j}, 1) : j ∈ S }

Cost(2, {3}, 1) = d[2, 3] + Cost(3, Φ, 1) = 9 + 6 = 15


Cost(2, {4}, 1) = d[2, 4] + Cost(4, Φ, 1) = 10 + 8 = 18
Cost(3, {2}, 1) = d[3, 2] + Cost(2, Φ, 1) = 13 + 5 = 18
Cost(3, {4}, 1) = d[3, 4] + Cost(4, Φ, 1) = 12 + 8 = 20
Cost(4, {3}, 1) = d[4, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
Cost(4, {2}, 1) = d[4, 2] + Cost(2, Φ, 1) = 8 + 5 = 13

S=2

Cost(2, {3, 4}, 1) = min { d[2, 3] + Cost(3, {4}, 1) = 9 + 20 = 29,  d[2, 4] + Cost(4, {3}, 1) = 10 + 15 = 25 } = 25
Cost(3, {2, 4}, 1) = min { d[3, 2] + Cost(2, {4}, 1) = 13 + 18 = 31,  d[3, 4] + Cost(4, {2}, 1) = 12 + 13 = 25 } = 25
Cost(4, {2, 3}, 1) = min { d[4, 2] + Cost(2, {3}, 1) = 8 + 15 = 23,  d[4, 3] + Cost(3, {2}, 1) = 9 + 18 = 27 } = 23

S=3

Cost(1, {2, 3, 4}, 1) = min { d[1, 2] + Cost(2, {3, 4}, 1) = 10 + 25 = 35,  d[1, 3] + Cost(3, {2, 4}, 1) = 15 + 25 = 40,  d[1, 4] + Cost(4, {2, 3}, 1) = 20 + 23 = 43 } = 35

The minimum cost path is 35.

Start from Cost(1, {2, 3, 4}, 1): the minimum value is obtained through d[1, 2], so when s = 3 we
select the path from 1 to 2 (cost 10) and go backwards. When s = 2, the minimum value for city 2
was obtained through d[2, 4] + Cost(4, {3}, 1), so we select the path from 2 to 4 (cost 10) and go
backwards again. When s = 1, cities 2 and 4 are already on the tour, so we select d[4, 3] and take
the path from 4 to 3 (cost 9). Then, at the s = Φ step, we return to the start using d[3, 1] (cost 6).

Hence the minimum-cost tour is 1 → 2 → 4 → 3 → 1, with total cost 10 + 10 + 9 + 6 = 35.
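
The same table can be produced by a short Held-Karp style program. The sketch below is illustrative only; it builds the path forward from city 1 rather than backwards as in the worked table, but for a closed tour the minimum total is the same.

# Held-Karp dynamic programming for the 4-city example above (a minimal sketch).
from itertools import combinations

d = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]          # d[i][j] = distance from city i+1 to city j+1
n = len(d)

# C[(S, j)] = shortest path from city 0 through every city in frozenset S, ending at j.
C = {(frozenset([j]), j): d[0][j] for j in range(1, n)}
for size in range(2, n):
    for S in combinations(range(1, n), size):
        S = frozenset(S)
        for j in S:
            C[(S, j)] = min(C[(S - {j}, i)] + d[i][j] for i in S if i != j)

full = frozenset(range(1, n))
best = min(C[(full, j)] + d[j][0] for j in full)   # close the tour back at city 1
print(best)                                        # 35, matching the tabulated result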


UNIT 3 Decision Theory

DS/U3 Topic 1 Decision under Certainty, Uncertainty, and Risk


Decision-making under Certainty

A condition of certainty exists when the decision-maker knows with reasonable certainty what the
alternatives are, what conditions are associated with each alternative, and the outcome of each
alternative. Under conditions of certainty, accurate, measurable, and reliable information on
which to base decisions is available.

The cause and effect relationships are known and the future is highly predictable under conditions
of certainty. Such conditions exist in case of routine and repetitive decisions concerning the day-
to-day operations of the business.

Decision-making under Risk:

When a manager lacks perfect information or whenever an information asymmetry exists, risk
arises. Under a state of risk, the decision maker has incomplete information about available
alternatives but has a good idea of the probability of outcomes for each alternative.

While making decisions under a state of risk, managers must determine the probability associated
with each alternative on the basis of the available information and his experience.

Decision-making under Uncertainty:

Most significant decisions made in today’s complex environment are formulated under a state of
uncertainty. Conditions of uncertainty exist when the future environment is unpredictable and
everything is in a state of flux. The decision-maker is not aware of all available alternatives, the
risks associated with each, and the consequences of each alternative or their probabilities.

The manager does not possess complete information about the alternatives and whatever
information is available, may not be completely reliable. In the face of such uncertainty,
managers need to make certain assumptions about the situation in order to provide a reasonable
framework for decision-making. They have to depend upon their judgment and experience for
making decisions.

Modern Approaches to Decision-making under Uncertainty:

There are several modern techniques to improve the quality of decision-making under conditions
of uncertainty.

The most important among these are:


(1) Risk analysis,

(2) Decision trees and

(3) Preference theory.

Risk Analysis:

Managers who follow this approach analyze the size and nature of the risk involved in choosing a
particular course of action.

For instance, while launching a new product, a manager has to carefully analyse each of the
following variables: the cost of launching the product, its production cost, the capital investment
required, the price that can be set for the product, the potential market size and the percentage of
the total market it will represent.

Risk analysis involves quantitative and qualitative risk assessment, risk management and risk
communication and provides managers with a better understanding of the risk and the benefits
associated with a proposed course of action. The decision represents a trade-off between the risks
and the benefits associated with a particular course of action under conditions of uncertainty.

Decision Trees:

These are considered to be one of the best ways to analyze a decision. A decision-tree approach
involves a graphic representation of alternative courses of action and the possible outcomes and
risks associated with each action.

By means of a “tree” diagram depicting the decision points, chance events and probabilities
involved in various courses of action, this technique of decision-making allows the decision-
maker to trace the optimum path or course of action.

Preference or Utility Theory:

This is another approach to decision-making under conditions of uncertainty. This approach is


based on the notion that individual attitudes towards risk vary. Some individuals are willing to
take only smaller risks (“risk averters”), while others are willing to take greater risks
(“gamblers”). Statistical probabilities associated with the various courses of action are based on
the assumption that decision-makers will follow them.

For instance, if there were a 60 percent chance of a decision being right, it might seem
reasonable that a person would take the risk. This may not necessarily be true, as the individual
might not wish to take the risk, since the chances of the decision being wrong are 40 percent.
Attitudes towards risk vary with events, with people and with positions.

Top-level managers usually take the largest amount of risk. However, the same managers who
make a decision that risks millions of rupees of the company in a given program with a 75
percent chance of success are not likely to do the same with their own money.


Moreover, a manager willing to take a 75 percent risk in one situation may not be willing to do so
in another. Similarly, a top executive might launch an advertising campaign having a 70 percent
chance of success but might decide against investing in plant and machinery unless it involves a
higher probability of success.

Though personal attitudes towards risk vary, two things are certain.

Firstly, attitudes towards risk vary with situations, i.e. some people are risk averters in some
situations and gamblers in others.

Secondly, some people have a high aversion to risk, while others have a low aversion.

Most managers prefer to be risk averters to a certain extent, and may thus also forego
opportunities. When the stakes are high, most managers tend to be risk averters; when the stakes
are small, they tend to be gamblers.


DS/U3 Topic 2 Decision Tree Analysis


A decision tree may be understood as a logical tree: a set of conditions (premises) and
actions (conclusions), depicted as the nodes and branches of a tree which link the
premises with the conclusions. It is a decision support tool, having a tree-like representation of
decisions and the consequences thereof. It uses ‘AND’ and ‘OR’ operators to recreate the
structure of if-then rules.

A decision tree is helpful in reaching the ideal decision for intricate processes, especially when
the decision problems are interconnected and chronological in nature.

A decision tree does not constitute a decision but assists in making one, by graphically
representing the material information related to the given problem, in the form of a tree. It
diagrammatically depicts various courses of action, likely outcomes, states of nature, etc, as
nodes, branches or sub-branches of a horizontal tree.

Nodes

There are two types of Nodes:

• Decision Node: Represented as a square; the different courses of action arise from the decision
node as the main branches.
• Chance Node: Symbolised as a circle; chance nodes appear at the terminal points of the decision
node’s branches, and from them the sub-branches emerge. These depict probabilities and outcomes.

For instance: Think of a situation where a firm introduces a new product. The decision tree
presented below gives a clear idea of managerial problems.

• Key A is a decision node, wherein the decision is taken, i.e. to test the product or drop the same.
• Key B is an outcome (chance) node, which shows all the possible outcomes that can occur. As per the
given situation, there are only two outcomes, i.e. the test result is favourable or it is not.


• Key C is again a decision node, describing the case where the market test is positive: the firm’s
management will decide whether to go further with complete marketing or to drop the product.
• Key D is one more decision node, but it does not show any real choice; it depicts that if the market
test is unfavourable, the decision is to drop the product.
• Key E is again an outcome (chance) node.

The decision tree can be applied to various areas, where decisions are pending such as make
or buy decision, investment decision, marketing strategy, the introduction of a new project. The
decision maker will go for the alternative that increases the anticipated profit or the one which
reduces the overall expected cost at each decision point.

Types of Decision Tree

In a single-stage decision tree, the decision maker finds only one solution, which is the best
course of action on the basis of the information gathered. On the other hand, a multi-stage
decision tree involves a series of decisions to be taken.

Decision Tree Analysis

The Decision Tree Analysis is a schematic representation of several decisions followed by


different chances of the occurrence. Simply, a tree-shaped graphical representation of decisions
related to the investments and the chance points that help to investigate the possible outcomes is
called as a decision tree analysis.

The decision tree shows decision points, represented by squares, which are the alternative actions
(along with their investment outlays) that can be undertaken. These decision points are followed by
chance points, represented by circles, which are the uncertain points where the outcomes depend on
the chance process. Thus, a probability of occurrence is assigned to each chance point.


Once the decision tree is described precisely, and the data about outcomes along with their
probabilities is gathered, the decision alternatives can be evaluated as follows:

1. Start from the extreme right-hand end of the tree and calculate the NPV for each chance point as you proceed leftward.
2. Once the NPVs are calculated for each chance point, evaluate the alternatives at the final stage
decision points in terms of their NPV.
3. Select the alternative which has the highest NPV and cut the branch of inferior decision
alternative. Assign value to each decision point equivalent to the NPV of the alternative selected.
4. Again, repeat the process, proceed leftward, recalculate NPV for each chance point, select the
decision alternative which has the highest NPV value and then cut the branch of the inferior
decision alternative. Assign the value to each point equivalent to the NPV of selected alternative
and repeat this process again and again until a final decision point is reached.

Thus, decision tree analysis helps the decision maker to take all the possible outcomes into the
consideration before reaching a final investment decision.
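
The rollback procedure is easy to mechanise. The Python sketch below is purely illustrative; the node structure, probabilities and payoffs are invented, and expected NPV is used as the decision criterion.

# Rolling back a small, hypothetical decision tree (expected monetary value).
def rollback(node):
    kind = node["type"]
    if kind == "payoff":                       # terminal node: NPV of that outcome
        return node["value"]
    if kind == "chance":                       # chance node: probability-weighted NPV
        return sum(p * rollback(child) for p, child in node["branches"])
    if kind == "decision":                     # decision node: pick the best alternative
        return max(rollback(child) - outlay for outlay, child in node["options"])

# Decision: test-market the product (outlay 10) or drop it (outlay 0, payoff 0).
tree = {"type": "decision", "options": [
    (10, {"type": "chance", "branches": [
        (0.6, {"type": "payoff", "value": 100}),    # test favourable
        (0.4, {"type": "payoff", "value": -20}),    # test unfavourable
    ]}),
    (0, {"type": "payoff", "value": 0}),            # drop the product
]}

print(rollback(tree))   # 0.6*100 + 0.4*(-20) - 10 = 42, so testing is preferred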

A decision tree is a decision support tool that uses a tree-like model of decisions and their
possible consequences, including chance event outcomes, resource costs, and utility. It is one way
to display an algorithm that only contains conditional control statements.

Decision trees are commonly used in operations research, specifically in decision analysis, to help
identify a strategy most likely to reach a goal, but are also a popular tool in machine learning.


DS/U3 Topic 3 Game Theory – Pure and Mixed Strategies, Principle


of Dominance
Pure strategy

A pure strategy is an unconditional, defined choice that a person makes in a situation or
game. For example, in the game of Rock-Paper-Scissors, if a player chooses to play only
scissors in each and every independent trial, regardless of the other player’s strategy, then
playing scissors is that player’s pure strategy. The probability of choosing scissors is equal
to 1, and all other options (paper and rock) are chosen with probability 0. The set of all
options (i.e. rock, paper, and scissors) available in this game is known as the strategy set.

Mixed strategy

A mixed strategy is an assignment of probability to all choices in the strategy set. Using the
example of Rock-Paper-Scissors, if a person’s probability of employing each pure strategy is
equal, then the probability distribution of the strategy set would be 1/3 for each option, or
approximately 33%. In other words, a person using a mixed strategy incorporates more than
one pure strategy into a game.

The definition of a mixed strategy does not rule out the possibility of some options never being
chosen (e.g. p(scissors) = 0.5, p(rock) = 0.5, p(paper) = 0). This means that, in a way, a pure strategy can also
be considered a mixed strategy at its extreme, with a binary probability assignment (setting
one option to 1 and all others to 0). For this discussion, we shall say that pure strategies are
not mixed strategies. In the game of tennis, each point is a zero-sum game with two players
(one being the server S, and the other being the returner R). In this scenario, assume each
player has two strategies (forehand F and backhand B). Observe the following hypothetical
payoff matrix:

The strategies FS or BS are observed for the server when the ball is served to the side of the
service box closest to the returner’s forehand or backhand, respectively. For the returner, the
strategies FR and BR are observed when the returner moves to the forehand or backhand side to
return the serve, respectively. This gives us the payoffs when the returner receives the serve
correctly (FS,FR or BS,BR), or incorrectly (FS,BR or BS,FR). The payoffs to each player for every
action are given in pure strategy payoffs, as each player is only guaranteed their payoff given
the opponent’s strategy is employed 100% of the time. Given these pure strategy payoffs, we
can calculate the mixed strategy payoffs by figuring out the probability each strategy is


chosen by each player. So, suppose you are Roger, the server. It is apparent to you that a pure strategy would be
exploitable. If you serve to the backhand 100% of the time, it would be easy for the opponent
to catch on and return from the backhand side more often than the forehand, maximizing his
expected payoff. The same goes for serving to the forehand. But how often should you mix your
strategy and serve to each side to minimize your opponent’s chances of winning? Calculating
these probabilities gives us the mixed strategy Nash equilibria, i.e. the probabilities with which
each strategy should be used so as to minimize the opponent’s expected payoff. In the following
section, we will look at how to find mixed strategy Nash equilibria and how to interpret them.

Pure and Mixed Strategies:

In a pure strategy, players adopt a strategy that provides the best payoffs. In other words, a
pure strategy is the one that provides maximum profit or the best outcome to players.
Therefore, it is regarded as the best strategy for every player of the game. In the previously
cited example (Table-1), the increase in the prices of organizations’ products is the best
strategy for both of them.

This is because if both of them increase the prices of their products, they would earn
maximum profits. However, if only one of the organizations increases the prices of its
products, then it would incur losses. In such a case, an increase in prices is regarded as a pure
strategy for organizations ABC and XYZ.

On the other hand, in a mixed strategy, players adopt different strategies to get the possible
outcome. For example, in cricket a bowler cannot throw the same type of ball every time
because it makes the batsman aware about the type of ball. In such a case, the batsman may
make more runs.

However, if the bowler throws the ball differently every time, then it may make the batsman
puzzled about the type of ball, he would be getting the next time.

Therefore, strategies adopted by the bowler and the batsman would be mixed strategies,
which are shown in Table-2:

In Table-2, when the batsman’s expectation and the bowler’s ball type are same, then the
percentage of making runs by batsman would be 30%. However, when the expectation of the
batsman is different from the type of ball he gets, the percentage of making runs would
reduce to 10%. In case, the bowler or the batsman uses a pure strategy, then any one of them
may suffer a loss.

Therefore, it is preferred that bowler or batsman should adopt a mixed strategy in this case.
For example, the bowler throws a spin ball and fastball with a 50-50 combination and the


batsman predicts the 50-50 combination of the spin and fast ball. In such a case, the average
hit of runs by batsman would be equal to 20%.

This is because each of the four combinations now occurs with a probability of 25%, and the
average of the four combinations can be derived as follows:

0.25(30%) + 0.25(10%) + 0.25(30%) + 0.25(10%) = 20%

However, it may be possible that when the bowler is throwing a 50-50 combination of spin
ball and fastball, the batsman may not be able to predict the right type of ball every time. This
would decrease his average run rate below 20%. Similarly, if the bowler throws the ball with
a 60-40 combination of fast and spin ball respectively, and the batsman would expect either a
fastball or a spin ball randomly. In such a case, the average of the batsman hits remains 20%.

The probabilities of four outcomes now become:

Anticipated fastball and fastball thrown: 0.50*0.60 = 0.30

Anticipated fastball and spin ball thrown: 0.50*0.40 = 0.20

Anticipated spin ball and spin ball thrown: 0.50*0.40 = 0.20

Anticipated spin ball and fastball thrown: 0.50*0.60 = 0.30

When we multiply the probabilities with the payoffs given in Table-2, we get

0.30(30%) + 0.20(10%) + 0.20(30%) + 0.30(10%) = 20%

This shows that the expected outcome does not depend on the bowler’s combination of fastball and
spin ball; it depends on the batsman’s 50-50 prediction of which type of ball he will get from the
bowler.

Principle of Dominance

The principle of dominance in Game Theory (also known as dominant


strategy or dominance method) states that if one strategy of a player dominates another
strategy in all conditions, then the latter strategy can be ignored.

A strategy dominates over the other only if it is preferable over other in all conditions. The
concept of dominance is especially useful for the evaluation of two-person zero-sum
games where a saddle point does not exist.

Dominant Strategy Rules (Dominance Principle)

• If all the elements of a column (say the ith column) are greater than or equal to the corresponding elements
of any other column (say the jth column), then the ith column is dominated by the jth column and can be
deleted from the matrix.


• If all the elements of a row (say the ith row) are less than or equal to the corresponding elements of any
other row (say the jth row), then the ith row is dominated by the jth row and can be deleted from the matrix.

Dominance Example: Game Theory

Use the principle of dominance to solve this problem.

Solution.

Player B

I II III IV Minimum

I 3 5 4 2 2

Player A II 5 6 2 4 2

III 2 1 4 0 0

IV 3 3 5 2 2

Maximum 5 6 5 4

There is no saddle point in this game.


Using Dominance Property In Game Theory


If a column is greater than another column (compare corresponding elements), then delete
that column.
Here, I and II column are greater than the IV column. So, player B has no incentive in using
his I and II course of action.

Player B

III IV

I 4 2

Player A II 2 4

III 4 0

IV 5 2

If a row is smaller than another row (compare corresponding elements), then delete that row.
Here, I and III row are smaller than IV row. So, player A has no incentive in using his I and
III course of action.

Player B

III IV

Player A II 2 4

IV 5 2
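
The deletions carried out above can be automated. The sketch below (an illustrative helper, not part of the standard notation) repeatedly removes dominated rows and columns from player A's payoff matrix; applied to the 4 × 4 matrix of this example, it leaves the same 2 × 2 game.

# Iterated elimination of dominated pure strategies (payoffs are to player A).
def reduce_by_dominance(matrix):
    rows = list(range(len(matrix)))
    cols = list(range(len(matrix[0])))
    changed = True
    while changed:
        changed = False
        # A row is deleted if every entry is <= the corresponding entry of another row.
        for r in rows[:]:
            if any(all(matrix[r][c] <= matrix[s][c] for c in cols)
                   for s in rows if s != r):
                rows.remove(r)
                changed = True
        # A column is deleted if every entry is >= the corresponding entry of another column.
        for c in cols[:]:
            if any(all(matrix[r][c] >= matrix[r][d] for r in rows)
                   for d in cols if d != c):
                cols.remove(c)
                changed = True
    return rows, cols

payoff = [[3, 5, 4, 2],
          [5, 6, 2, 4],
          [2, 1, 4, 0],
          [3, 3, 5, 2]]
print(reduce_by_dominance(payoff))   # ([1, 3], [2, 3]): rows II, IV and columns III, IV remain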


DS/U3 Topic 4 Solution of Game Theory Problems with the Help of


Graphical, Algebraic, and Simplex Methods

This method can only be used in games with no saddle point, and
having a pay-off matrix of type n X 2 or 2 X n.

Example: Graphical Method for Game Theory

Consider the following pay-off matrix

Player B

B1 B2

Player A A1 -2 4

A2 8 3

A3 9 0

Solution.

The game does not have a saddle point as shown in the following table.

Player B
Player A B1 B2 Minimum Probability

A1 -2 4 -2 q1


A2 8 3 3 q2

A3 9 0 0 q3

Maximum 9 4

Probability p1 p2

Maximin = 3, Minimax = 4

First, we draw two parallel lines 1 unit distance apart and mark a scale
on each. The two parallel lines represent strategies of player B.
If player A selects strategy A1, A can win –2 (i.e., lose 2 units)
or win 4 units depending on B’s selection of strategy. The value -2 is
plotted along the vertical axis under strategy B1 and the value 4 is
plotted along the vertical axis under strategy B2. A straight line joining
the two points is then drawn.
Similarly, we can plot strategies A2 and A3 also. The problem is
graphed in the following figure.

The lowest point V in the shaded region indicates the value of the game.
From the above figure, the value of the game is approximately 3.45 units (38/11). Likewise, we
can draw a graph for player B.

The point of optimal solution (i.e., maximin point) occurs at the


intersection of two lines:

E1 = -2p1 + 4p2 and


E2 = 8p1 + 3p2

Equating the above two expressions, we have

-2p1 + 4p2 = 8p1 + 3p2

Substituting p2 = 1 – p1
-2p1 + 4(1 – p1) = 8p1 + 3(1 – p1)


p1 = 1/11
p2 = 10/11

Substituting the values of p1 and p2 in equation E1

V = -2 (1/11) + 4 (10/11) = 38/11 ≈ 3.45 units

Game Theory: 2 x n Games

Games where one player has only two courses of action while the other
has more than two, are called 2 X n or n X 2 games.

If these games do not have a saddle point or are reducible by the


dominance method, then before solving these games we write all 2 X 2
sub-games and determine the value of each 2 X 2 sub-game.

This method is illustrated by the following example.

Example: 2 x n Games

Determine the solution of game for the pay-off matrix given below:

Player B

I II III

Player A I -3 -1 7

II 4 1 -2

Solution.

Obviously, there is no saddle point and also no course of action


dominates the other. Therefore, we consider each 2 X 2 sub-game and
obtain their values.


(a)

Player B

I II

Player A I -3 -1

II 4 1

The saddle point is 1. So the value of game, V1 is 1.

(b)

Player B

I II

Player A I -3 7

II 4 -2

This game has no saddle point, so we use the algebraic method.

Value of game, V2 = [(-3) × (-2) – (7 × 4)] / [(-3 – 2) – (7 + 4)] = (6 – 28) / (–16) = 11/8

(c)

Player B


II III

Player A I -1 7

II 1 -2

This game has no saddle point, so we use the algebraic method.

Value of game, V3 = [(-1) × (-2) – (7 × 1)] / [(-1 – 2) – (7 + 1)] = (2 – 7) / (–11) = 5/11

The 2 X 2 sub-game with the lowest value is (c) and hence the solution
to this game provides the solution to the larger game.

Using the algebraic method on sub-game (c):

A plays (3/11, 8/11)
B plays (0, 9/11, 2/11)
Value of the game is 5/11.

Algebraic Method: Game Theory


In this section, we will talk about the algebraic method used to
solve mixed strategy games. Here we have provided formulas and
examples of algebraic method.

Consider the zero sum two person game given below:

Player B

I II
Player A
I a b


II c d

Formulas: Algebraic Method

The solution of the game is:

A plays (p, 1 – p), where

p = (d – c) / [(a + d) – (b + c)]

B plays (q, 1 – q), where

q = (d – b) / [(a + d) – (b + c)]

Value of the game, V = (ad – bc) / [(a + d) – (b + c)]
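
A small helper implementing these formulas is shown below (illustrative only); it is applied to the matching-coins matrix of Example 1 that follows.

# Algebraic solution of a 2x2 game without a saddle point, using the formulas above.
def solve_2x2(a, b, c, d):
    denom = (a + d) - (b + c)
    p = (d - c) / denom          # probability with which A plays row I
    q = (d - b) / denom          # probability with which B plays column I
    v = (a * d - b * c) / denom  # value of the game
    return p, q, v

# Matching-coins example (Example 1 below): payoff matrix [[2, -1], [-1, 1]].
print(solve_2x2(2, -1, -1, 1))   # (0.4, 0.4, 0.2), i.e. p = q = 2/5 and V = 1/5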

Algebraic Method Example 1: Game Theory

Consider the game of matching coins. Two players, A & B, put down a
coin. If coins match (i.e., both are heads or both are tails) A gets
rewarded, otherwise B. However, matching on heads gives a double
premium. Obtain the best strategies for both players and the value of
the game.


Player B

I II

Player A I 2 -1

II -1 1

Solution.

This game has no saddle point.

p = [1 – (-1)] / [(2 + 1) – (-1 – 1)] = 2/5

1 – p = 3/5

q = [1 – (-1)] / [(2 + 1) – (-1 – 1)] = 2/5

1 – q = 3/5

V = [2 × 1 – (-1) × (-1)] / [(2 + 1) – (-1 – 1)] = 1/5

Example 2: Algebraic Method in Game Theory


Solve the game whose payoff matrix is given below:

Player B


I II

Player A I 1 7

II 6 2

Solution.

This game has no saddle point.

p = (2 – 6) / [(1 + 2) – (7 + 6)] = 2/5

1 – p = 3/5

q = (2 – 7) / [(1 + 2) – (7 + 6)] = 1/2

1 – q = 1/2

V = [1 × 2 – (7 × 6)] / [(1 + 2) – (7 + 6)] = 4

Simplex Method: Example 1

Maximize z = 3x1 + 2x2

subject to


-x1 + 2x2 ≤ 4
3x1 + 2x2 ≤ 14
x1 – x2 ≤ 3

x1, x2 ≥ 0

Solution.

First, convert every inequality constraint in the LPP into an equality constraint, so that the problem
can be written in standard form. This can be accomplished by adding a slack variable to each
constraint. Slack variables are always added to less-than-or-equal-to type constraints.

Converting inequalities to equalities

-x1 + 2x2 + x3 = 4
3x1 + 2x2 + x4 = 14
x1 – x2 + x5 = 3
x1, x2, x3, x4, x5 ≥ 0

Where x3, x4 and x5 are slack variables.

Since slack variables represent unused resources, their contribution to the objective function is zero.
Including these slack variables in the objective function, we get

Maximize z = 3x1 + 2x2 + 0x3 + 0x4 + 0x5

Initial basic feasible solution

Now we assume that nothing is produced. Therefore, the values of the decision variables are zero:
x1 = 0, x2 = 0, z = 0

When we are not producing anything, we are obviously left with the unused capacities:
x3 = 4, x4 = 14, x5 = 3

We note that the current solution has three variables (the slack variables x3, x4 and x5) with
non-zero solution values and two variables (the decision variables x1 and x2) with zero values.
Variables with non-zero values are called basic variables; variables with zero values are called
non-basic variables.

Simplex Method: Table 1

a11 = -1, a12 = 2, a13 = 1, a14 = 0, a15 = 0, b1 = 4


a21 = 3, a22 = 2, a23 = 0, a24 = 1, a25 = 0, b2 = 14
a31= 1, a32 = -1, a33 = 0, a34 = 0, a35 = 1, b3 = 3

Calculating values for the index row (zj – cj)

z1 – c1 = (0 X (-1) + 0 X 3 + 0 X 1) – 3 = -3
z2 – c2 = (0 X 2 + 0 X 2 + 0 X (-1)) – 2 = -2
z3 – c3 = (0 X 1 + 0 X 0 + 0 X 0) – 0 = 0
z4 – c4 = (0 X 0 + 0 X 1 + 0 X 0) – 0 = 0
z5 – c5 = (0 X 0 + 0 X 0 + 0 X 1) – 0 = 0

Choose the smallest negative value from zj – cj (i.e., –3). So the column under x1 is the key column.
Now find out the minimum positive ratio: Minimum (14/3, 3/1) = 3, so the x5 row is the key row.
Here, the pivot (key) element = 1 (the value at the point of intersection).
Therefore, x5 departs and x1 enters.

We obtain the elements of the next table using the following rules:

1. If all the values of zj – cj are positive (non-negative), the inclusion of any non-basic variable
will not increase the value of the objective function; hence, the present solution maximizes the
objective function. If there is more than one negative value, we choose as the entering basic
variable the one for which zj – cj is least (most negative), as this will increase the profit the most.
2. The numbers in the replacing row may be obtained by dividing the key row elements by the
pivot element, and the numbers in the other two rows may be calculated by using the formula:

New number = old number – (corresponding number of key row × corresponding number of key column) / pivot element

Calculating values for table 2

x3 row

a11 = -1 – 1 X ((-1)/1) = 0
a12 = 2 – (-1) X ((-1)/1) = 1
a13 = 1 – 0 X ((-1)/1) = 1
a14 = 0 – 0 X ((-1)/1) = 0
a15 = 0 – 1 X ((-1)/1) = 1
b1 = 4 – 3 X ((-1)/1) = 7

x4 row

a21 = 3 – 1 X (3/1) = 0
a22 = 2 – (-1) X (3/1) = 5
a23 = 0 – 0 X (3/1) = 0
a24 = 1 – 0 X (3/1) = 1
a25 = 0 – 1 X (3/1) = -3
b2 = 14 – 3 X (3/1) = 5

x1 row

a31 = 1/1 = 1
a32 = -1/1 = -1
a33 = 0/1 = 0
a34 = 0/1 = 0
a35 = 1/1 = 1
b3 = 3/1 = 3

Table 2

                    cj        3     2     0     0     0
cB   Basic variables B       x1    x2    x3    x4    x5    Solution values b (= XB)
0    x3                       0     1     1     0     1     7
0    x4                       0     5     0     1    -3     5
3    x1                       1    -1     0     0     1     3
     zj – cj                  0    -5     0     0     3

Calculating values for the index row (zj – cj)

z1 – c1 = (0 X 0 + 0 X 0 + 3 X 1) – 3 = 0
z2 – c2 = (0 X 1 + 0 X 5 + 3 X (-1)) – 2 = -5
z3 – c3 = (0 X 1 + 0 X 0 + 3 X 0) – 0 = 0
z4 – c4 = (0 X 0 + 0 X 1 + 3 X 0) – 0 = 0
z5 – c5 = (0 X 1 + 0 X (-3) + 3 X 1) – 0 = 3

Key column = x2 column


Minimum (7/1, 5/5) = 1
Key row = x4 row
Pivot element = 5
x4 departs and x2 enters.

Calculating values for table 3

x3 row

a11 = 0 – 0 X (1/5) = 0
a12 = 1 – 5 X (1/5) = 0
a13 = 1 – 0 X (1/5) = 1
a14 = 0 – 1 X (1/5) = -1/5
a15 = 1 – (-3) X (1/5) = 8/5
b1 = 7 – 5 X (1/5) = 6

x2 row

a21 = 0/5 = 0
a22 = 5/5 = 1
a23 = 0/5 = 0


a24 = 1/5
a25 = -3/5
b2 = 5/5 = 1

x1 row

a31 = 1 – 0 X (-1/5) = 1
a32 = -1 – 5 X (-1/5) = 0
a33 = 0 – 0 X (-1/5) = 0
a34 = 0 – 1 X (-1/5) = 1/5
a35 = 1 – (-3) X (-1/5) = 2/5
b3 = 3 – 5 X (-1/5) = 4

Do not convert the fractions into decimals, because many fractions cancel out during the process,
while conversion into decimals causes unnecessary complications.

Simplex Method: Final Optimal Table

                    cj        3     2     0     0      0
cB   Basic variables B       x1    x2    x3    x4     x5    Solution values b (= XB)
0    x3                       0     0     1   -1/5   8/5    6
2    x2                       0     1     0    1/5  -3/5    1
3    x1                       1     0     0    1/5   2/5    4
     zj – cj                  0     0     0    1      0

Since all the values of zj – cj are positive, this is the optimal solution.
x1 = 4, x2 = 1
z = 3 X 4 + 2 X 1 = 14.


The largest profit of Rs. 14 is obtained when 1 unit of x2 and 4 units of x1 are produced. The
above solution also indicates that 6 units of the first resource are still unutilised, as shown by the
slack variable x3 in the XB column.
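
The same optimum can be verified with a ready-made LP solver. The sketch below (assuming SciPy is available) passes the original constraints to scipy.optimize.linprog; since linprog minimises, the objective coefficients are negated.

# Verifying the simplex result for: max z = 3x1 + 2x2 subject to the three constraints.
from scipy.optimize import linprog

c = [-3, -2]                       # negate, because linprog minimises
A_ub = [[-1, 2], [3, 2], [1, -1]]  # coefficient matrix of the <= constraints
b_ub = [4, 14, 3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)       # approximately [4., 1.]
print(-res.fun)    # 14.0, the maximum profit found by the simplex tableau above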


DS/U4 Topic 1 Meaning of Networking


Meaning of Network Technique
Network technique is a technique for planning, scheduling (programming) and controlling the
progress of projects. This is very useful for projects which are complex in nature or where
activities are subject to considerable degree of uncertainty in performance time.

This technique provides effective management support: it determines the project duration more
accurately, identifies the activities which are critical at different stages of project completion so
that more attention can be paid to them, analyses the schedule at regular intervals so that corrective
action can be taken well in advance, facilitates optimum utilisation of resources, and helps
management take timely and better decisions for effective monitoring and control during execution
of the project.

Objectives of Network Analysis:


Network analysis entails a group of techniques for presenting information relating to time and
resources so as to assist in the planning, scheduling, and controlling of projects. The information,
usually represented by a network, includes the sequences, interdependencies, interrelationships,
and criticality of the various activities of the project.

A network analysis has following objectives:

1. Powerful tool of planning, scheduling and control.


2. Shows the inter-relationships of the activities of a project or a programme.
3. Minimises total cost where the cost of delays and cost of resources required to carry out the tasks can
be measured.
4. Minimise total time where required e.g. in maintenance of production-line machinery in a factory.
5. Minimization of idle resources.
6. Minimise production delays.
7. To provide systematic approach in planning and scheduling.
8. Follow an integrated approach and bring about better coordination between the departments.
9. Focusses attention on critical activities of the project.
10. Provides up-to-date status information.
11. Suggest areas for increasing efficiency, and reduction of cost.

Applications of Network Technique:


Applications of network techniques are very wide, but are very common in some of the
following fields:

(i) Planning,

(ii) Construction of buildings, bridges, highways, railways, stadiums, irrigation projects,


factories, power projects etc.

(iii) Assembly line scheduling,

(iv) Development and launching of new products,

(v) Strategic and tactical military planning,


(vi) Research and development,

(vii) Market penetration programmes,

(viii) Planning of political campaigns,

(ix) Maintenance and overhauling of complicated or large machineries,

(x) organising big conferences etc.

Advantages of Network Technique:


Main advantages of the network system are as follows:

1. Detailed and thoughtful planning provides better analysis and logical thinking.
2. Identifies the critical activities and focus them to provide greater managerial attention.
3. Network technique enables to forecast project duration more accurately.
4. It is a powerful tool for optimisation of resources by using the concept of slack.
5. It provides a scientific basis for monitoring, review and control, to evaluate effect of slippages.
6. It helps in taking decision;

(i) To over-come delays,

(ii) To crashing programme,

(iii) Optimising resources, and

(iv) On other corrective actions.

7. It helps in getting better co-ordination amongst related fields.


8. It is an effective management tool through a common and simple language, providing common
understanding.

Limitations of Network Techniques:


Network techniques have following limitations:

(i) Network technique is simply a tool to help the management; hence its effectiveness
depends on how well it is used by the management.

(ii) Its accuracy depends on the estimation of the data used in the network.

(iii) It is useful only if it is updated regularly and decisions for corrective actions are taken in a
timely manner.

DS/U4 Topic 2 Analysis with the help of PERT and CPM


Program Evaluation and Review Technique (PERT) is a method used to examine the
tasks in a schedule and is a variation of the Critical Path Method (CPM). It
analyses the time required to complete each task and its associated dependencies to determine
the minimum time needed to complete a project. It estimates the shortest possible time each activity
will take, the most likely length of time, and the longest time that might be taken if the
activity takes longer than expected.

The method was developed by the US Navy in 1957 on the Polaris nuclear submarine project.
To conduct PERT Analysis, three time estimates are obtained (optimistic, pessimistic, and
most likely) for every activity along the Critical Path. Then use those estimates in the formula
below to calculate how much time for each project stage:

Formula: Expected time TE = (O + 4M + P) / 6

• Optimistic Time (O):the minimum possible time required to accomplish a task, assuming everything
proceeds better than is normally expected.
• Pessimistic Time (P):the maximum possible time required to accomplish a task, assuming everything
goes wrong (excluding major catastrophes).
• Most likely Time (M): the best estimate of the time required to accomplish a task, assuming
everything proceeds as normal.

Example of the three time estimates

Example of a Critical Path Nodal Diagram

PERT Analysis can be conducted using Microsoft Project.
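
As a quick illustration of the formula, the snippet below (with made-up activity estimates) computes the PERT expected time for each activity, together with the usual companion statistic, the variance ((P – O)/6)².

# PERT expected time TE = (O + 4M + P) / 6 for a few hypothetical activities.
activities = {            # name: (optimistic O, most likely M, pessimistic P) in days
    "Design": (4, 6, 10),
    "Build":  (8, 10, 16),
    "Test":   (2, 3, 6),
}

for name, (o, m, p) in activities.items():
    te = (o + 4 * m + p) / 6                 # expected duration
    var = ((p - o) / 6) ** 2                 # variance of the duration estimate
    print(f"{name}: TE = {te:.2f} days, variance = {var:.2f}")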


Critical Path Method

The Critical Path is the longest path of scheduled activities that must be met in order to
execute a project. This is important for Program Managers (PM) to know since any problems
that occur on the critical path can prevent a project from moving forward and be
delayed. Earned Value Management (EVM) analysis focuses on the critical path and near
critical paths to identify cost and schedule risks. Other schedule paths might have slack time
in them to avoid delaying the entire project unlike the critical path. There might be multiple
critical paths on a project.

The Critical Path is determined by analysing a project’s schedule or network logic diagram
using the Critical Path Method (CPM). The CPM provides a graphical view of the
project, predicts the time required for the project, and shows which activities are critical to
maintaining the schedule.

The seven (7) steps in the CPM are: [1]

1. List all the activities required to complete the project (see Work Breakdown Structure (WBS)),
2. Determine the sequence of activities
3. Draw a network diagram
4. Determine the time that each activity will take to completion
5. Determine the dependencies between the activities
6. Determine the critical path
7. Update the network diagram as the project progresses

The CPM calculates the longest path of planned activities to the end of the project, and the
earliest and latest that each activity can start and finish without making the project longer.
This process determines which activities are “critical” (i.e., on the longest path) and which
have “total float” (i.e., can be delayed without making the project longer). [1]
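The forward and backward passes behind these earliest/latest start-and-finish calculations can be sketched in a few lines of Python. The activity network below (names, durations and dependencies) is hypothetical, and the sketch assumes the activities are listed so that every predecessor appears before its successors.

```python
# Minimal CPM sketch: forward and backward passes on a small
# activity-on-node network. All activity data are hypothetical.
# Assumes the dict lists activities in topological order.

activities = {            # name: (duration, [predecessors])
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for name, (dur, preds) in activities.items():
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur

project_length = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS).
LF = {name: project_length for name in activities}
for name in reversed(list(activities)):
    successors = [s for s, (_, ps) in activities.items() if name in ps]
    if successors:
        LF[name] = min(LF[s] - activities[s][0] for s in successors)
LS = {name: LF[name] - activities[name][0] for name in activities}

for name in activities:
    slack = LS[name] - ES[name]
    tag = "CRITICAL" if slack == 0 else f"total float {slack}"
    print(f"{name}: ES={ES[name]} EF={EF[name]} LS={LS[name]} LF={LF[name]} ({tag})")
```

Activities reported with zero total float form the critical path (here A–B–D, giving a 12-day project).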

The CPM is a project modeling technique developed in the late 1950s by Morgan R. Walker
of DuPont and James E. Kelley, Jr. of Remington Rand.

DS/U4 Topic 3 Resource Planning and Meaning of Crashing


Resource Planning

Resource planning refers to the strategy for planned and judicious utilisation of resources.
Resource planning is essential for sustainable existence of all forms of life.

Resource planning is essential for India as there is enormous diversity in the availability of
resources. For example, the state of Rajasthan has vast potential for the development of solar and
wind energy but is deficient in water resources.

The cold desert of Ladakh has rich cultural heritage but is deficient in water and some strategic
minerals.

The state of Arunachal Pradesh has an abundance of water resources but lacks infrastructure, which
shows that the mere availability of resources in the absence of technology and institutions hinders
development.

This shows that resource planning is needed at the national, regional, state and local levels for the
balanced development of a country.

Meaning of Crashing

Crashing is a schedule compression technique used to reduce or shorten the project schedule.

The PM can take various measures to accomplish this goal. Some of the common methods used are:

• Adding additional resources to the critical path tasks


This option has various constraints such as the securing of the budget to add the resources, and
the availability of the resources.
• Reduce the project requirements or scope
This can be done only if the sponsor and major stakeholders agree to reduce the scope.

After crashing, the critical path might have changed, resulting in a different critical path. Always
revisit the project schedule to confirm that the compression actually shortened it.

Other Definitions

1. Crashing is the technique to use when fast tracking has not saved enough time on the schedule. It
is a technique in which resources are added to the project for the least cost possible. Cost and
schedule tradeoffs are analyzed to determine how to obtain the greatest amount of compression
for the least incremental cost.
2. Crashing refers to a particular variety of project schedule compression which is performed for the
purpose of decreasing the total period of time (also known as the total project schedule duration).
The shortening of the project duration typically takes place after a careful and thorough analysis
of all possible duration-minimization alternatives, choosing the methods that achieve the maximum
reduction in schedule duration for the least additional cost.
3. When we say that an activity will take a certain number of days or weeks, what we really mean is
that this activity normally takes this many days or weeks. We could make it take less time, but to
do so would cost more money. Spending more money to get something done more quickly is called
“crashing”. There are various methods of project schedule crashing, and the decision to crash should
only take place after you’ve carefully analyzed all of the possible alternatives. The key is to attain
the maximum decrease in schedule time with minimum cost.
4. Crashing the schedule means to throw additional resources to the critical path without necessarily
getting the highest level of efficiency.
5. Crashing is another schedule compression technique where you add extra resources to the project
to compress the schedule. In crashing, you review the critical path and see which activities can be
completed by adding extra resources. You try to find the activities that can be reduced the most
by adding the least amount of cost. Once you find those activities, you will apply the crashing
technique.
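A rough sketch of the selection rule in definitions 1 and 5 — among critical-path activities, crash the one that buys time for the least money — is shown below. All activity data (normal and crash durations and costs) are hypothetical.

```python
# Hypothetical crash-selection sketch: rank critical-path activities by
# extra cost per week saved and crash the cheapest one first.

critical_activities = [
    # (name, normal_weeks, crash_weeks, normal_cost, crash_cost)
    ("Foundation", 6, 4, 10000, 14000),
    ("Framing",    8, 6, 20000, 22000),
    ("Wiring",     4, 3,  8000,  9500),
]

def crash_cost_per_week(normal_weeks, crash_weeks, normal_cost, crash_cost):
    return (crash_cost - normal_cost) / (normal_weeks - crash_weeks)

ranked = sorted(critical_activities,
                key=lambda a: crash_cost_per_week(a[1], a[2], a[3], a[4]))

for name, nw, cw, nc, cc in ranked:
    rate = crash_cost_per_week(nw, cw, nc, cc)
    print(f"{name}: Rs. {rate:,.0f} per week saved (up to {nw - cw} weeks)")
# The first entry in `ranked` is the cheapest activity to crash first.
```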


DS/U4 Topic 4 Queuing Theory – Meaning, Concepts and Assumptions of Queuing Models

Queuing theory is the study of queues and the random processes that characterize
them. It deals with making mathematical sense of real-life scenarios. For example, a mob of
people queuing up at a bank or the tasks queuing up on your computer’s back end.

In queuing theory we often want to find out how long wait times or queue lengths are, and we
can use models to do this. These models are typically important in business and software
applications, and queueing theory is often considered a part of operations research.

About Queuing
Any queuing activity can be summarized as entities (customers in your supermarket queue, or
jobs in a computer queue) trying to get through an activity (waiting to be served). Queues
happen when we can’t all access the activity at the same time: when it is not economically
efficient to have enough checkout lines for everyone to go right through as soon as they were
ready, or there isn’t enough server space to do an unlimited amount of computer tasks at one
moment.

In queueing theory a queue does not refer simply to a neat row which is always first come,
first served. This is one example of a queue, but not the only kind. A mob trying to rush for
the door on Black Friday is considered a queue as well, as is a group of job applicants waiting
for interviews who are picked randomly, one by one, to be interviewed.

Types of Queues and Types of Service


First In First Out, or First Come First Served, is fairly common in banking and commerce.
It is the type of queue you get when you have people politely lined up, waiting for their turn.

Last In First Out is the opposite scheme; whoever has been waiting for the shortest time is
served first. This type of queue management is common in asset management, where assets
produced or acquired last are the ones used or disposed of first. For example: the most recent
employees are often the ones laid off first.

Priority is where customers are served based on their priority level; these levels could be
based on status, task urgency, or some other criteria.

Shortest Job First is when whoever needs the shortest amount of service gets taken care of first.

Processor Sharing is when everyone gets served, or half-served, at the same time; service
capacity is distributed evenly among everyone waiting.
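The contrast between these disciplines can be illustrated with Python's standard containers; the customer names and priority levels below are made up purely for demonstration.

```python
# Illustrative contrast of FIFO, LIFO and priority service disciplines.
from collections import deque
import heapq

arrivals = ["Asha", "Bela", "Chirag", "Dev"]   # in order of arrival

fifo = deque(arrivals)                          # first come, first served
print("FIFO order:", [fifo.popleft() for _ in range(len(arrivals))])

lifo = list(arrivals)                           # last in, first out
print("LIFO order:", [lifo.pop() for _ in range(len(arrivals))])

# Priority discipline: lower number = more urgent, served first.
heap = [(2, "Asha"), (1, "Bela"), (3, "Chirag"), (1, "Dev")]
heapq.heapify(heap)
print("Priority order:", [heapq.heappop(heap)[1] for _ in range(len(arrivals))])
```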

There may be a single server, where a line of people or items must go through a single
bottleneck, or parallel servers, where the same line is served by several servers. Or there
may be a tandem queue, where each of multiple servers has their own queue or line.


Balking is when a customer decides not to wait for service because the wait time threatens to be
too long. Reneging is similar, but occurs when a customer who has already waited for some time
decides to leave because they feel too much time has been wasted. Jockeying is when a customer
switches between queues in a tandem queue system, trying to orchestrate the shortest wait possible.

Standard Notation for Queueing Theory


To make life easier, there’s standard notation for queueing theory that is used across the
board. These standard symbols include

λ: the mean arrival rate.

μ: the mean service rate.

n: the number of people in the system.

A: the arrival process probability distribution.

B: the service process probability distribution.

C: the number of servers.

D: the maximum number of customers allowed in the system at any given time, waiting or
being served (without getting bumped).

E: the maximum number of customers total.

Queuing System Components


1. Input Source: The input source generates customers for the service mechanism. The most
important characteristic of the input source is its size. It may be either finite or infinite. Please
note that the calculations are far easier for the infinite case, therefore, this assumption is often
made even when the actual size is relatively large.

If the population size is finite, then the analysis of the queuing model becomes more involved.
The statistical pattern by which calling units are generated over time must also be specified; it
may follow a Poisson or exponential probability distribution.

2. Queue: It is characterized by the maximum permissible number of units that it can contain. Queues
may be infinite or finite.
3. Service Discipline: It refers to the order in which members of the queue are selected for service.
Frequently, the discipline is first come, first served.

Following are some other disciplines:

• LIFO (Last In First Out)


• SIRO (Service In Random Order)
• Priority System


4. Service Mechanism:

A specification of the service mechanism includes a description of the time to complete a service
and the number of customers who are satisfied at each service event. The service mechanism
also prescribes the number and configuration of servers. If there is more than one service
facility, the calling unit may receive service from a sequence of these. At a given facility, the
unit enters one of the parallel service channels and is completely serviced by that server.
Most elementary models assume one service facility with either one or a finite number of
servers. The following figure shows the physical layout of service facilities.

Unusual Customer/Server Behaviour

Customer’s Behaviour

• A customer may not like to join the queue due to a long waiting line (balking).
• A customer may leave the queue after waiting for some time due to impatience (reneging).

Collusion. Several customers may cooperate and only one of them may stand in the queue.

Jockeying. When there are a number of queues, a customer may move from one queue to
another in hope of receiving the service quickly.

Server’s Behaviour
Failure. The service may be interrupted due to failure of a server (machinery).

Changing service rates. A server may speed up or slow down, depending on the number of
customers in the queue. For example, when the queue is long, a server may speed up in
response to the pressure. On the contrary, it may slow down if the queue is very small.

Batch processing. A server may service several customers simultaneously, a phenomenon known as
batch processing.


Assumptions of Queuing Theory

• The source population has infinite size.


• The inter-arrival time has an exponential probability distribution with a mean arrival rate of λ
customer arrivals per unit time.
• There is no unusual customer behaviour.
• The service discipline is FIFO.
• The service time has an exponential probability distribution with a mean service rate of μ service
completions per unit time.
• The mean arrival rate is less than the mean service rate, i.e., λ < μ.
• There is no unusual server behaviour.


DS/U4 Topic 5 M/M/I/FIFO


M/M/1 (N/FIFO) System: Queuing Models
It is a queuing model where the arrivals follow a Poisson process, service times are
exponentially distributed and there is only one server. Capacity of the system is limited to N
with first in first out mode.

The first M in the notation stands for Poisson input, second M for Poisson output, 1 for the
number of servers and N for capacity of the system.

ρ = λ/μ

Po = (1 − ρ) / (1 − ρ^(N+1))

Ls = ρ/(1 − ρ) − (N + 1)ρ^(N+1) / (1 − ρ^(N+1))

Lq = Ls − λ/μ

Wq = Lq / λ

Ws = Ls / λ
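The formulas above translate directly into code. The sketch below plugs in the values from the example that follows (λ = 1/48 per minute, μ = 1/36 per minute, N = 9), so its output can be checked against the worked answer; the Lq line follows the Lq = Ls − λ/μ expression exactly as given in these notes.

```python
# Sketch of the M/M/1 (N/FIFO) formulas above, using the values from the
# student-arrival example that follows so the output can be verified.

def mm1n_metrics(lam, mu, N):
    rho = lam / mu                     # assumes rho != 1
    p0 = (1 - rho) / (1 - rho ** (N + 1))
    ls = rho / (1 - rho) - (N + 1) * rho ** (N + 1) / (1 - rho ** (N + 1))
    lq = ls - lam / mu                 # as given in the notes above
    return {"rho": rho, "Po": p0, "Ls": ls, "Lq": lq,
            "Wq": lq / lam, "Ws": ls / lam}

m = mm1n_metrics(lam=1 / 48, mu=1 / 36, N=9)
print(f"Po = {m['Po']:.2f}")   # approx. 0.26
print(f"Ls = {m['Ls']:.2f}")   # approx. 2.40 students
```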

Example: M/M/1 (N/FIFO) System


Students arrive at the head office of Universal Teacher Publications according to a Poisson
input process with a mean rate of 30 per day. The time required to serve a student has an
exponential distribution with a mean of 36 minutes. Assume that the students are served by a
single individual, and queue capacity is 9. On the basis of this information, find the
following:

• The probability of zero unit in the queue.


• The average line length.

Solution.

λ = 30 / (60 × 24) = 1/48 students per minute

μ = 1/36 students per minute

ρ = λ/μ = 36/48 = 0.75, N = 9

Po = (1 − 0.75) / (1 − (0.75)^10) = 0.26

Ls = 0.75/(1 − 0.75) − (9 + 1)(0.75)^10 / (1 − (0.75)^10) = 2.40, i.e., about 2 students.

M/M/1 Queuing System (∞/FIFO)


It is a queuing model where the arrivals follow a Poisson process, service times are
exponentially distributed and there is only one server. In other words, it is a system with
Poisson input, exponential waiting time and Poisson output with single channel.

Queue capacity of the system is infinite with first in first out mode. The first M in the
notation stands for Poisson input, second M for Poisson output, 1 for the number of servers
and ∞ for infinite capacity of the system.

Formulas

Probability of zero units in the system: Po = 1 − λ/μ

Average queue length: Lq = λ² / [μ(μ − λ)]

Average number of units in the system: Ls = λ / (μ − λ)

Average waiting time of an arrival: Wq = λ / [μ(μ − λ)]

Average waiting time of an arrival in the system: Ws = 1 / (μ − λ)
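These formulas can be wrapped in a single function; the sketch below checks it against the numbers used in Examples 1 and 2 further down (λ = 40, μ = 50 and λ = 10, μ = 12, both per hour).

```python
# Sketch of the M/M/1 (infinite capacity) formulas listed above.

def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("Stability requires lambda < mu")
    return {
        "Po": 1 - lam / mu,
        "Lq": lam ** 2 / (mu * (mu - lam)),
        "Ls": lam / (mu - lam),
        "Wq": lam / (mu * (mu - lam)),
        "Ws": 1 / (mu - lam),
    }

m = mm1_metrics(lam=40, mu=50)
print(f"Wq = {m['Wq'] * 60:.1f} minutes")            # 4.8 minutes (Example 1)

m = mm1_metrics(lam=10, mu=12)
print(f"Po = {m['Po']:.3f}, Lq = {m['Lq']:.3f}")     # 1/6 and 25/6 (Example 2)
```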

Example 1
Students arrive at the head office of Universal Teacher Publications according to a Poisson
input process with a mean rate of 40 per hour. Students are served at a mean rate of 50 per hour,
with exponentially distributed service times. Assume that the students are served by a
single individual, and find the average waiting time of a student.

Solution.

Given: λ = 40/hour, μ = 50/hour

Average waiting time of a student before receiving service:
Wq = 40 / [50 (50 − 40)] = 0.08 hour = 4.8 minutes

Example 2
New Delhi Railway Station has a single ticket counter. During the rush hours, customers
arrive at the rate of 10 per hour. The average number of customers that can be served is 12
per hour. Find out the following:

• Probability that the ticket counter is free.


• Average number of customers in the queue.


Solution.

Given: λ = 10/hour, μ = 12/hour

Probability that the counter is free: Po = 1 − 10/12 = 1/6

Average number of customers in the queue: Lq = (10)² / [12 (12 − 10)] = 25/6

Example 3
At Bharat petrol pump, customers arrive according to a Poisson process with an average time
of 5 minutes between arrivals. The service time is exponentially distributed with mean time =
2 minutes. On the basis of this information, find out

1. What would be the average queue length?


2. What would be the average number of customers in the queuing system?
3. What is the average time spent by a car in the petrol pump?
4. What is the average waiting time of a car before receiving petrol?

Solution.

Average inter-arrival time = 1/λ = 5 minutes = 1/12 hour, so λ = 12/hour

Average service time = 1/μ = 2 minutes = 1/30 hour, so μ = 30/hour

Average queue length: Lq = (12)² / [30 (30 − 12)] = 4/15 car

Average number of customers in the system: Ls = 12 / (30 − 12) = 2/3 car

Average time spent at the petrol pump: Ws = 1 / (30 − 12) hour = 3.33 minutes

Average waiting time of a car before receiving petrol: Wq = 12 / [30 (30 − 12)] hour = 1.33 minutes

Example 4
Universal Bank is considering opening a drive-in window for customer service. Management
estimates that customers will arrive at the rate of 15 per hour. The teller it is considering for
staffing the window can serve customers at the rate of one every three minutes.

Assuming Poisson arrivals and exponential service find

1. Average number in the waiting line.


2. Average number in the system.
3. Average waiting time in line.
4. Average waiting time in the system.

Solution.


Given: λ = 15/hour; service time = 3 minutes = 3/60 hour, so μ = 20/hour

Average number in the waiting line: Lq = (15)² / [20 (20 − 15)] = 2.25 customers

Average number in the system: Ls = 15 / (20 − 15) = 3 customers

Average waiting time in line: Wq = 15 / [20 (20 − 15)] = 0.15 hour

Average waiting time in the system: Ws = 1 / (20 − 15) = 0.20 hour

Example 5
Chhabra Saree Emporium has a single cashier. During the rush hours, customers arrive at the
rate of 10 per hour. The average number of customers that can be processed by the cashier is
12 per hour. On the basis of this information, find the following:

• Probability that the cashier is idle


• Average number of customers in the queuing system
• Average time a customer spends in the system
• Average number of customers in the queue
• Average time a customer spends in the queue

Solution.

Given: λ = 10/hour, μ = 12/hour

Po = 1 − 10/12 = 1/6

Ls = 10 / (12 − 10) = 5 customers

Ws = 1 / (12 − 10) hour = 30 minutes

Lq = (10)² / [12 (12 − 10)] = 25/6 customers

Wq = 10 / [12 (12 − 10)] hour = 25 minutes

DS/U4 Topic 6 Simulation Modeling


Simulation of Queuing System

Example 1: Simulation of Queuing System


People arrive at the New Delhi Railway station to buy tickets according to the following
distribution.

Inter-arrival Time (Min.) Frequency

2 10

3 20

4 40

5 20

6 10

The service time is 5 minutes and there is only one ticket counter. The Railway station
in-charge is interested in predicting the operating characteristics of this counter during a
typical hour of the operating day, from 10.00 a.m. to 11.00 a.m. Use simulation to determine the
average waiting time before service and the average time a person spends in the system.

Solution. From the given distribution of arrivals, random numbers can be assigned to the
arrival times as shown in Table 1.

Table 1

Inter-arrival Time (Min.) | Frequency | Probability | Cumulative Probability | Random Numbers
2 | 10 | 0.10 | 0.10 | 00 – 09
3 | 20 | 0.20 | 0.30 | 10 – 29
4 | 40 | 0.40 | 0.70 | 30 – 69
5 | 20 | 0.20 | 0.90 | 70 – 89
6 | 10 | 0.10 | 1.00 | 90 – 99

The first random number generated is 17, which corresponds to the inter-arrival time of 3
minutes. This implies that the first person arrives 3 minutes after the service window opens,
as shown in table 2. Since the first person arrives at 10.03 a.m., therefore, the server has to
wait for 3 minutes. The server takes 5 minutes, and thus the first person leaves the system at
10.08 a.m. (10.03 a.m. + 5 minutes). Similarly, other values can be calculated.

Table 2

S.No. | R.No. | Inter-arrival Time | Arrival Time | Service Starts | Service Ends | Waiting Time (Server) | Waiting Time (Person)
1 | 17 | 3 | 10.03 | 10.03 | 10.08 | 3 | –
2 | 86 | 5 | 10.08 | 10.08 | 10.13 | – | –
3 | 84 | 5 | 10.13 | 10.13 | 10.18 | – | –
4 | 79 | 5 | 10.18 | 10.18 | 10.23 | – | –
5 | 33 | 4 | 10.22 | 10.23 | 10.28 | – | 1
6 | 55 | 4 | 10.26 | 10.28 | 10.33 | – | 2
7 | 06 | 2 | 10.28 | 10.33 | 10.38 | – | 5
8 | 42 | 4 | 10.32 | 10.38 | 10.43 | – | 6
9 | 93 | 6 | 10.38 | 10.43 | 10.48 | – | 5
10 | 38 | 4 | 10.42 | 10.48 | 10.53 | – | 6
11 | 58 | 4 | 10.46 | 10.53 | 10.58 | – | 7
12 | 71 | 5 | 10.51 | 10.58 | 11.03 | – | 7
Total | | | | | | | 39

Average waiting time before service.


= Total waiting time (person)/Total no. of arrivals
= 39/12 = 3.25 minutes.

Average time a person spends in the system.


= Service time + Average waiting time before service
= 5 + 3.25 = 8.25 minutes.
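The hand simulation above can be reproduced in a short script. The sketch below replays the same twelve random numbers from Table 2, maps them to inter-arrival times using the cumulative probabilities of Table 1, and arrives at the same 3.25-minute average wait.

```python
# Replay of the hand simulation above using the random numbers of Table 2.
import bisect

cum_probs     = [0.10, 0.30, 0.70, 0.90, 1.00]   # from Table 1
inter_arrival = [2, 3, 4, 5, 6]                  # minutes

def draw_inter_arrival(r):
    """Map a two-digit random number (0-99) to an inter-arrival time."""
    return inter_arrival[bisect.bisect_right(cum_probs, r / 100)]

random_numbers = [17, 86, 84, 79, 33, 55, 6, 42, 93, 38, 58, 71]
service_time = 5                                 # minutes

clock = 0             # arrival time, minutes after 10.00 a.m.
server_free_at = 0
total_wait = 0

for r in random_numbers:
    clock += draw_inter_arrival(r)               # this person's arrival
    start = max(clock, server_free_at)           # service starts when server is free
    total_wait += start - clock                  # waiting time of this person
    server_free_at = start + service_time

avg_wait = total_wait / len(random_numbers)
print(f"Average waiting time before service: {avg_wait:.2f} minutes")         # 3.25
print(f"Average time in the system: {service_time + avg_wait:.2f} minutes")   # 8.25
```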

Simulation and Inventory Control


Zicom Electronics wants to determine the order size for calculators. The demand and lead
time are probabilistic and their distributions are given below:

Demand / week (thousands) | Probability
0 | 0.2
1 | 0.4
2 | 0.3
3 | 0.1

Lead time (weeks) | Probability
2 | 0.3
3 | 0.4
4 | 0.3

The cost of placing an order is Rs. 100 per order and the holding cost for 1000 calculators is
Rs. 2 per week. The shortage cost is Rs. 20 per thousand. Whenever the inventory level is
equal to or below 2000, an order is placed for the difference between the current
inventory balance and the specified maximum replenishment level of 4000.

Simulate the policy for 10 weeks. Assume the following

• the beginning inventory is 3000 units


• no back orders are permitted


• each order is placed at the beginning of the week following the drop in inventory level to (or below)
the reorder point
• the replenishment orders are received at the beginning of the week.

Solution.

Using the daily demand and lead time distributions, we assign a set of random numbers to
represent the range of values of variables as shown in table 1 & table 2.

Table 1

Demand / week (thousands) Probability Cumulative Probability Random Numbers

0 0.2 0.2 00-19

1 0.4 0.6 20-59

2 0.3 0.9 60-89

3 0.1 1.0 90-99

Table 2

Lead time (weeks) Probability Cumulative Probability Random Numbers

2 0.3 0.3 00-29

3 0.4 0.7 30-69

4 0.3 1.0 70-99

At the start of simulation, the first random number 31 generates a demand of 1000 units, as
shown in table 3. The demand is determined from the cumulative probability values in table
1. At the end of first week, the closing balance is 2000 units, which is equal to the reorder
level; therefore, an order for 2000 (4000-2000) units is placed. The random number generated
is 29, so the lead time is 2 weeks. The lead time is determined from the cumulative
probability values in table 2. Since the closing balance is 2000 units, the holding cost is Rs. 4 (2 × Rs. 2).


In the second week, the random number 70 generates a demand of 2000 units. Therefore, the
closing balance at the end of second week is reduced to zero units.

In the third week, the demand for 1000 units can’t be fulfilled because the available inventory
is zero. This results in the shortage cost of Rs. 20.

The 2000 units ordered in the first week are received at the beginning of the fourth week. The
random number 86 generates a demand of 2000 units, and hence the closing stock is zero.
Therefore, an order for 4000 (4000 − 0) units is placed. The random number generated is 83, so
the lead time is 4 weeks. Therefore, the second shortage occurs in the fifth week. The units
ordered at the end of the fourth week are received at the beginning of the ninth week.

Table 3

Week | Opening Inventory ('000) | Random Number | Demand ('000) | Closing Inventory ('000) | Random Number | Lead Time (Weeks) | Quantity Ordered ('000) | Holding Cost (Rs.)
1 | 3 | 31 | 1 | 2 | 29 | 2 | 2 | 4
2 | 2 | 70 | 2 | 0 | – | – | – | –
3 | 0 | 53 | 1 | -1 | – | – | – | –
4 | 2* | 86 | 2 | 0 | 83 | 4 | 4 | –
5 | 0 | 32 | 1 | -1 | – | – | – | –
6 | 0 | 78 | 2 | -2 | – | – | – | –
7 | 0 | 26 | 1 | -1 | – | – | – | –
8 | 0 | 64 | 2 | -2 | – | – | – | –
9 | 4* | 45 | 1 | 3 | – | – | – | 6
10 | 3 | 12 | 0 | 3 | – | – | – | 6
Total | | | 13 | | | 6 | |

Note: * includes order quantity just received.

Average Inventory = 8000/10 = 800 units.


The average inventory is calculated by adding the closing inventory balances (ignoring
negative balances) and dividing by the number of weeks.

Weekly average cost = Ordering Cost + Inventory Holding Cost + Shortage Cost

Ordering Cost = (100 × 2)/10 = Rs. 20

Inventory Holding Cost = (800 × 2)/1000 = Rs. 1.60

Shortage Cost = [20 × (1 + 1 + 2 + 1 + 2)]/10 = Rs. 14

Weekly average cost = Rs. 20 + Rs. 1.60 + Rs. 14 = Rs. 35.60

It should be noted that the shortage cost is high as compared to holding cost. The shortage
cost can be reduced by increasing the reorder level.

Average lead time = 6/2 = 3 weeks


Average demand per week = 13000/10 = 1300 units
Average demand during lead time = 3 X 1300 = 3900 units
Maximum lead time = 4 weeks
Maximum weekly demand = 2000 units
Maximum demand during lead time = 4 x 2000 = 8000 units

Thus, the best reorder point should be somewhere between 3900 to 8000 units.
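The whole exercise can be replayed in code. The sketch below uses the same random numbers and the policy described above (reorder when the closing balance falls to 2000 or below, order up to 4000, and place no new order while one is still outstanding — a rule inferred from Table 3); it reproduces the weekly average cost of Rs. 35.60.

```python
# Replay of the inventory simulation above (quantities in thousands of units).
import bisect

def draw(r, cum_probs, values):
    """Map a two-digit random number (0-99) to a value via cumulative probabilities."""
    return values[bisect.bisect_right(cum_probs, r / 100)]

demand_cum, demand_vals = [0.2, 0.6, 0.9, 1.0], [0, 1, 2, 3]   # Table 1
lead_cum, lead_vals     = [0.3, 0.7, 1.0], [2, 3, 4]           # Table 2

demand_rns = [31, 70, 53, 86, 32, 78, 26, 64, 45, 12]          # weeks 1-10
lead_rns   = iter([29, 83])          # drawn only when an order is placed

reorder_level, max_level = 2, 4
inventory = 3                        # opening stock, week 1
pending = None                       # (week order arrives, quantity)
ordering_cost = holding_cost = shortage_cost = 0

for week, rn in enumerate(demand_rns, start=1):
    if pending and pending[0] == week:          # replenishment received
        inventory += pending[1]
        pending = None
    demand = draw(rn, demand_cum, demand_vals)
    closing = inventory - demand
    if closing >= 0:
        holding_cost += closing * 2             # Rs. 2 per 1000 per week
    else:
        shortage_cost += -closing * 20          # Rs. 20 per 1000 short
        closing = 0                             # no back orders
    if closing <= reorder_level and pending is None:
        qty = max_level - closing
        lead = draw(next(lead_rns), lead_cum, lead_vals)
        pending = (week + 1 + lead, qty)        # placed next week, arrives after the lead time
        ordering_cost += 100
    inventory = closing

weeks = len(demand_rns)
weekly_avg = (ordering_cost + holding_cost + shortage_cost) / weeks
print(f"Weekly average cost = Rs. {weekly_avg:.2f}")            # Rs. 35.60
```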
