
List of content

Chapter 1 Introduction to Operations Research (OR)


1.1 OR Definition…………………………………………………………………………………………..3
1.2 Purpose of OR …………………………………………………………………………………………4
1.3 OR tools and techniques ……………………………………………………………………………….4
1.4 Applications of OR……………………………………………………………………………………..6
1.5 Stages of development of OR………………………………………………………………………….7

Chapter 2 Linear Programming Formulation


2.1 LP definition…………………………………………………………………………………………..8
2.2 LP components………………………………………………………………………………………..8
2.3 assumptions of linear programming ………………………………………………………………….9
2.4 formulating Linear Programs………………………………………………………………………….9
2.5 examples of LP models……………………………………………………………………………….11

Chapter 3 Graphical & simplex solutions of linear programming


3.1 geometry of linear programs………………………………………………………………………….18
3.2 solution for maximization model……………………………………………………………………..19
3.3 solution for minimization model……………………………………………………………………...21
3.4 Multiple optimal solutions……………………………………………………………………………..25
3.5 unbounded solution…………………………………………………………………………………...27
3.6 infeasible solution…………………………………………………………………………………….28
3.7 introduction to simplex……………………………………………………………………………….30
3.8 computational details of simplex algorithm…………………………………………………………..31
3.9 artificial starting solution……………………………………………………………………………..38
3.9.1 M-method…………………………………………………………………………………...38
3.9.2 Two-phase method………………………………………………………………………….41
3.10 special cases in simplex……………………………………………………………………………...44
3.11 sensitivity analysis…………………………………………………………………………………...50

Chapter 4 Duality in Linear Programming Problems


4.1 definition of dual problem…………………………………………………………………………….59
4.2 primal-dual relationships……………………………………………………………………………...62
4.2.1 Review of simple matrix operation…………………………………………………………62
4.2.2 Simplex tableau layout……………………………………………………………………...63

4.2.3 Optimal dual solution……………………………………………………………………….64
4.2.4 Simplex tableau computations………………………………………………………………66
4.3 additional simplex algorithm………………………………………………………………………….67
4.3.1 Dual simplex algorithm……………………………………………………………………...67
4.3.2 Generalized simplex algorithm………………………………………………………………69

Chapter 5 Assignment Problem


5.1 Introduction ……………………………………………………………………………………………71
5.2 Assignment problem strategy…………………………………………………………………………..71
5.3 Assignment problem solution…………………………………………………………………………..72
5.4 simplex explanation of Hungarian method……………………………………………………………..75

Chapter 6 Transportation
6.1 Introduction……………………………………………………………………………………………..77
6.2 transportation algorithm………………………………………………………………………………...79
6.2.1 Northwest-corner method……………………………………………………………………..80
6.2.2 Least-cost method……………………………………………………………………………..81
6.2.3 VAM method (Vogel)…………………………………………………………………………82
6.3 iterative computation for transportation…………………………………………………………………84

Chapter 7 Network Model


7.1 introduction……………………………………………………………………………………………..89
7.2 How to construct network model………………………………………………………………………...92
7.3 Minimum spanning tree…………………………………………………………………………………..95
7.4 shortest route algorithm…………………………………………………………………………………..96
7.4.1 Dijkstra’s algorithm…………………………………………………………………………….96
7.4.2 Floyd’s algorithm……………………………………………………………………………….99
7.4.3 LP formulation of shortest route algorithm…………………………………………………….103
7.5 Maximum flow model……………………………………………………………………………………106
7.5.1 Maximum flow algorithm………………………………………………………………………107
7.5.2 LP formulation for maximum flow……………………………………………………………..114
7.6 Critical Path Method (CPM)………………………………………………………………………………115
7.7 Gantt Chart………………………………………………………………………………………………...121

Chapter 1
Introduction

1.1 Operations Research Definition


Operations Research is a relatively new discipline, and its contents and boundaries are not yet fixed. Therefore, giving a formal definition of the term Operations Research is a difficult task. OR begins when mathematical and quantitative techniques are used to substantiate the decisions being taken. The main activity of a manager is decision making. In simple situations, decisions are taken by common sense, judgment, and experience without using any mathematical or other model. The decisions we are concerned with here, however, are complex and carry heavy responsibility.

As stated earlier, defining O.R. is a difficult task. The definitions offered by various experts and societies on the subject together enable us to know what O.R. is and what it does. They are as follows:

1. According to the Operational Research Society of Great Britain


Operational Research is the attack of modern science on complex problems arising in the
direction and management of large systems of men, machines, materials and money in industry,
business, government and defense. Its distinctive approach is to develop a scientific model of the
system, incorporating measurements of factors such as change and risk, with which to predict and
compare the outcomes of alternative decisions, strategies or controls. The purpose is to help
management determine its policy and actions scientifically.

2. Randy Robinson stresses that


Operations Research is the application of scientific methods to improve the effectiveness of operations,
decisions and management. By means such as analyzing data, creating mathematical models and
proposing innovative approaches, Operations Research professionals develop scientifically based
information that gives insight and guides decision making. They also develop related software, systems,
services and products.

3. Morse and Kimball have stressed


O.R. is a quantitative approach and described it as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control".

4. Saaty considers
O.R. as a tool for improving the quality of answers. He says, "O.R. is the art of giving bad answers to problems which otherwise have worse answers".

5. Miller and Starr state


"O.R. is applied decision theory, which uses any scientific, mathematical or logical means to attempt to cope with the problems that confront the executive when he tries to achieve a thorough-going rationality in dealing with his decision problem".

6. Pocock stresses that
O.R. is an applied science. He states, "O.R. is scientific methodology (analytical, mathematical, and quantitative) which, by assessing the overall implications of various alternative courses of action in a management system, provides an improved basis for management decisions".

1.2 Purpose of O.R.

The main purpose of O.R. is to provide a rational basis for decision making in the absence of complete information, because systems composed of humans, machines, and procedures may not have complete information.

Operations Research can also be treated as a science in the sense that it describes, understands, and predicts the behavior of systems, especially man-machine systems. Thus O.R. specialists are involved in three classical aspects of science, as follows:

i) Determining the system's behavior.

ii) Analyzing the system's behavior by developing appropriate models.
iii) Predicting the future behavior using these models.

The emphasis on analyzing operations as a whole distinguishes O.R. from other research and engineering disciplines. O.R. is an interdisciplinary field that provided solutions to problems of military operations during World War II and has since been applied successfully to other operations.

Today, business applications are primarily concerned with O.R. analysis of possible alternative actions. Business and industry have benefited from O.R. in areas such as inventory and reorder policies, optimum location and size of warehouses, advertising policies, etc.

1.3 OR Tools and Techniques

Operations Research uses any suitable tools or techniques available. The most frequently used tools/techniques are mathematical procedures, cost analysis, and electronic computation. However, operations researchers have given special importance to the development and use of techniques such as linear programming, game theory, decision theory, queuing theory, inventory models, and simulation. In addition to the above, some other common tools are non-linear programming, integer programming, dynamic programming, sequencing theory, Markov processes, network scheduling (PERT/CPM), symbolic models, information theory, and value theory. Brief explanations of some of these techniques/tools follow:

Linear Programming problems:


This is a constrained optimization technique, which optimizes some criterion subject to constraints. In linear programming the objective function (profit, loss, or return on investment) and the constraints are linear. There are different methods available to solve linear programming problems.

Game Theory:
This is used for making decisions under conflicting situations where there are one or more players/opponents. Here the motives of the players are opposed: the success of one player tends to come at the cost of the other players, and hence they are in conflict.

Decision Theory:
Decision theory is concerned with making decisions under conditions of complete certainty about future outcomes, as well as under conditions of risk, where we can only assign probabilities to what will happen in the future.

Queuing Theory:
This is used in situations where a queue is formed (for example, customers waiting for service, aircraft waiting to land, jobs waiting to be processed in a computer system, etc.). The objective here is to minimize the cost of waiting without increasing the cost of servicing.

Inventory Models:
Inventory models make decisions that minimize total inventory cost. These models successfully reduce the total cost of purchasing, carrying, and running out of stock.

Simulation:
Simulation is a procedure that studies a problem by creating a model of the process involved and then, through a series of organized trial-and-error runs, attempts to determine the best solution. Sometimes this is a difficult and time-consuming procedure. Simulation is used when actual experimentation is not feasible or when the model cannot be solved analytically.

Non-linear Programming:
This is used when the objective function and/or the constraints are not linear. Linear relationships may be used to approximate non-linear constraints, but only over a limited range, because the approximation becomes poorer as the range is extended. Thus, non-linear programming is used to determine the region in which a solution lies, and the solution is then refined using linear methods.

Dynamic Programming:
Dynamic programming is a method of analyzing multistage decision processes, in which each elementary decision depends on the preceding decisions as well as on external factors.

Integer Programming:
If one or more variables of the problem may take only integer values, then integer programming is used; for example, the number of motors in an organization, the number of passengers in an aircraft, or the number of generators in a power generating plant.

Markov Process:
A Markov process permits prediction of changes over time when information about the behavior of the system is known. It is used for decision making in situations where the various states are defined. The probability of moving from one state to another is known; it depends only on the current state and is independent of how that state was reached.

Network Scheduling:

This technique is used extensively to plan, schedule, and monitor large projects (for example
computer system installation, R & D design, construction, maintenance, etc.).

The aim of this technique is to minimize trouble spots (such as delays, interruptions, and production bottlenecks) by identifying the critical factors. The different activities of the entire project and their relationships are represented diagrammatically by a network of nodes and arrows, which is used to identify the critical activities and the critical path.

There are two main types of technique in network scheduling, they are:

Program Evaluation and Review Technique (PERT)


It is used when activity times are not known accurately and only probabilistic estimates of time are available.

Critical Path Method (CPM)


It is used when activity times are known accurately.

Information Theory:
This analytical process was transferred from the field of electrical communication to the O.R. field. The objective of this theory is to evaluate the effectiveness of the flow of information within a given system. It is used mainly in communication networks, but it also has an indirect influence in stimulating the examination of business organizational structures with a view to enhancing the flow of information.

1.4 Applications of OR

Today, almost all fields of business and government utilize the benefits of Operations Research, and there are numerous applications. Although it is not feasible to cover them all here, the following abbreviated set of typical operations research applications shows how widely these techniques are used today:

Application area          Typical uses

Accounting                1. Assigning audit teams effectively  2. Credit policy analysis
                          3. Cash flow planning  4. Developing standard costs
                          5. Establishing costs for byproducts  6. Planning of delinquent account strategy

Construction              1. Project scheduling, monitoring and control  2. Allocation of resources to projects
                          3. Deployment of work force  4. Determination of proper work force

Facilities planning       1. Estimation of number of facilities required  2. Transportation loading and unloading
                          3. Factory location and size decision  4. Hospital planning
                          5. International logistic system design  6. Warehouse location decision

Finance                   1. Building cash management models  2. Investment analysis
                          3. Allocating capital among various alternatives  4. Portfolio analysis
                          5. Building financial planning models  6. Dividend policy making

Manufacturing             1. Inventory control  2. Marketing balance projection
                          3. Production scheduling  4. Production smoothing

Marketing                 1. Advertising budget allocation  2. Product introduction timing
                          3. Selection of product mix  4. Deciding most effective packaging alternative

Purchasing                1. Optimal buying  2. Optimal reordering  3. Materials transfer

Organizational behavior / human resources
                          1. Personnel planning  2. Recruitment of employees
                          3. Skill balancing  4. Training program scheduling
                          5. Designing organizational structure more effectively

Research and development  1. R & D projects control  2. R & D budget allocation
                          3. Planning of product introduction

1.5 Stages of Development of OR

The stages of development of O.R., also known as the phases or the process of O.R., consist of six important steps, arranged in the following order:

Step I: Observe the problem environment

Step II: Analyze and define the problem

Step III: Develop a model

Step IV: Select appropriate data input

Step V: Provide a solution and test its reasonableness

Step VI: Implement the solution

Chapter 2
Linear programming Formulation

2.1 LP definition

Linear Programming (LP) is a special and versatile technique which can be applied to a variety of management problems, e.g., advertising, distribution, investment, production, refinery operations, and transportation analysis.

LP is a central topic in optimization and provides a powerful tool for modeling many applications. LP has attracted most of the attention in optimization during the last six decades for two main reasons:

• Applicability: there are many real-world applications that can be modeled by LP.

• Solvability: there are theoretically and practically efficient techniques for solving large-scale problems.

The linear programming method is applicable in problems characterized by the presence of decision variables.
The objective function and the constraints can be expressed as linear functions of the decision variables.

2.2 LP components

1. The decision variables


These describe the choices that are under our control; they represent quantities that are, in some sense, controllable inputs to the system being modeled.

2. An objective function
Represents a principal objective criterion or goal that measures the effectiveness of the system such as
maximizing profits or productivity, or minimizing cost or consumption.

3. constraints
• There are always some practical limitations that restrict our choices for the decision variables, such as limits on the availability of resources, e.g., manpower, material, machines, or time.

• These constraints are expressed as linear equations or inequalities involving the decision variables.

Solving a linear programming problem means
Determining actual values of the decision variables that optimize the objective function subject to the
limitations imposed by the constraints.

The most important feature of a linear programming model is the presence of linearity in the problem. Linear programming models arise in a wide variety of applications. Some models may not be strictly linear, but they can be made linear by applying appropriate mathematical transformations. Still other applications are not at all linear, but can be effectively approximated by linear models.

2.3 Assumptions of linear programs

Linear programs make the following implicit assumptions:

1. Proportionality
Means that the contribution of each individual variable to the objective function and to each constraint is proportional to its value.

2. Additivity
Means the total value of the objective function and each constraint is the sum of the individual
contributions from each variable.

3. Divisibility
Means the decision variables can take on any real numerical values within a specified range.

4. Certainty
Means the parameters are known with certainty or are at least treated that way. The optimal solution
obtained is optimal for the specific problem formulated. If the parameter values are wrong, then the
resulting solution is of little value.

In practice, the assumptions of proportionality and additivity need the greatest care and are the most likely to be violated by the modeler. With experience, we also recognize when integer solutions are needed and the variables must be modeled explicitly as integers.

2.4 Formulating linear programs

Model formulation is the most important and the most difficult aspect of solving a real problem. Solving a model that does not
accurately represent the real problem is useless. There is no simple way to formulate optimization problems, but the following
suggestions may help.

Steps in problem formulation

1. Identify and define the decision variables for the problem.

Define the variables completely and precisely. All units of measure need to be stated explicitly,
including time units if appropriate. For example, if the variables represent quantities of a product
produced, these should be defined in terms of tons per hour, units per day, barrels per month, or some
other appropriate units.

2. Define the objective function.

Determine the criterion for evaluating alternative solutions. The objective function will normally be the sum of terms, each consisting of a variable multiplied by an appropriate coefficient (parameter). For example, the coefficients might be profit per unit of production, distance traveled per unit transported, or cost per person hired.

3. Identify and express mathematically all of the relevant constraints.

It is often easier to express each constraint in words before putting it into mathematical form.
The written constraint is decomposed into its fundamental components. Then substitute the appropriate
numerical coefficients and variable names for the written terms.

A common mistake is using variables that have not been defined in the problem, which is not valid. This mistake is frequently caused by not defining the original variables precisely. The formulation process is iterative, and sometimes additional variables must be defined or existing variables redefined. For example, if one of the variables is the total production of the company and five other variables represent the production at the company's five plants, then there must be a constraint that forces total production to equal the sum of the production at the plants.

Linear programs are constrained optimization models that satisfy three requirements.

1. The decision variables must be continuous; they can take on any value within some restricted range.
2. The objective function must be a linear function.
3. The left-hand sides of the constraints must be linear functions.
Thus, linear programs are written in the following form:

Maximize or minimize    z = c1 x1 + c2 x2 + ... + cn xn

Subject to

a11 x1 + a12 x2 + ... + a1n xn (≤, =, ≥) b1
a21 x1 + a22 x2 + ... + a2n xn (≤, =, ≥) b2
...
am1 x1 + am2 x2 + ... + amn xn (≤, =, ≥) bm

By applying some basic linear algebra, the problem can be written compactly as:

Minimize or maximize    z = Σj cj xj

Subject to    Σj aij xj (≤, =, ≥) bi ,    i = 1, 2, ..., m

where the xj values are decision variables and the cj, aij, and bi values are constants, called parameters or coefficients, that are given or specified by the problem assumptions. Most linear programs require that all decision variables be nonnegative.

2.5 Examples of LP Model

Example 2.5-1 (Reddy Mikks company)


Reddy Mikks produces both interior and exterior paints from two raw materials, M1 and M2. The following table provides the basic data of the problem:

                              Tons of raw material per ton of
                              Exterior paint     Interior paint     Maximum daily availability (tons)
Raw material M1                     6                  4                         24
Raw material M2                     1                  2                          6
Profit per ton ($1000)              5                  4

Reddy Mikks wants to determine the optimum (best) product mix of interior and exterior paints that maximize
the total daily profits.

1. For this problem we need to determine the daily amounts of exterior and interior paint to be produced, so the decision variables of the model are defined as:
x1 =tons produced daily of exterior paint
x2 = tons produced daily of interior paint

2. To construct the objective function, note that the company wants to maximize the profit from both paints. Given that the profits per ton of exterior and interior paints are 5 and 4 (thousand) dollars, it follows that:
Total profit from exterior paint=5 x1

Total profit from interior paint=4 x2

Letting z represent the total daily profit (in thousands of dollars), the objective function is:
Maximize z=5 x1 +4 x2

3. Next, we construct the constraints that restrict raw material usage and product demand. The raw material
restrictions are expressed as:
(Usage of a raw material by both paints) ≤ (Maximum raw material availability)
Usage of raw material M1 by exterior paint= 6 x1 tons/day

Usage of raw material M1 by interior paint= 4 x2 tons/day


Hence
Usage of raw material M1 by both paint= 6 x1 +4 x2 tons/day
In a similar manner
Usage of raw material M2 by both paint=1 x1 +2 x2 tons/day

Because the availabilities of raw materials M1 and M2 are limited to 24 and 6 tons, the restrictions are given:
6 x1 +4 x2 ≤ 24 (Raw Material M1)

1 x1 +2 x2 ≤ 6 (Raw Material M2)

• The first demand restriction is that the excess of the daily production of interior over exterior paint, x2 - x1, should not exceed 1 ton:
x2 - x1 ≤ 1 (Market limit)
• The second demand restriction is that the maximum daily demand for interior paint is limited to 2 tons:
x2 ≤ 2 (Demand limit)

The complete Reddy Mikks formulation model


Maximize z=5 x1 +4 x2
Subject to
6 x1 +4 x2 ≤ 24 (1)

x1 +2 x2 ≤ 6 (2)

- x1 + x2 ≤ 1 (3)

x2 ≤ 2 (4)

x1 , x2 ≥ 0 (5)
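As a quick numerical check of this formulation, the model can be passed to an LP solver. The following is a minimal Python sketch (assuming SciPy is installed; the solver call and variable names are illustrative and not part of the original text). Since linprog minimizes, the profit coefficients are negated.

from scipy.optimize import linprog

c = [-5, -4]                      # maximize 5x1 + 4x2  ->  minimize -5x1 - 4x2
A_ub = [[6, 4],                   # raw material M1
        [1, 2],                   # raw material M2
        [-1, 1],                  # market limit
        [0, 1]]                   # demand limit
b_ub = [24, 6, 1, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)            # expected: x1 = 3, x2 = 1.5, z = 21 (thousand dollars)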

Example 2.5-2 (Feed mix or Diet problem)
International Wool Company operates a large farm on which sheep are raised. The farm manager determined
that for the sheep to grow in the desired fashion, they need at least minimum amounts of four nutrients (the
nutrients are nontoxic so the sheep can consume more than the minimum without harm). The manager is
considering three different grains to feed the sheep.

The following table lists the number of units of each nutrient in each pound of grain, the minimum daily
requirements of each nutrient for each sheep, and the cost of each grain. The manager believes that as long as a
sheep receives the minimum daily amount of each nutrient, it will be healthy and produce a standard amount of
wool. The manager wants to raise the sheep at minimum cost.

Grain Min daily


requirements
1 2 3
(units)
Nutrient A 20 30 70 110
Nutrient B 10 10 0 18
Nutrient C 50 30 0 90
Nutrient D 6 2.5 10 14
cost 41 36 96

Solution:

1. The quantities that the manager controls are the amounts of each grain to feed each sheep daily. We define

x j = number of pounds of grain j = (1, 2, 3) to feed each sheep daily

2. Note that the units of measure are completely specified. In addition, the variables are expressed on a per
sheep basis. If we minimize the cost per sheep, we minimize the cost for any group of sheep. The daily feed cost
per sheep will be
(Cost per lb of grain j) * (lb. of grain j fed to each sheep daily)

That is, the objective function is to


Minimize z = 41 x1 + 36 x2 + 96 x3

Why can’t the manager simply make all the variables equal to zero? This keeps costs at zero, but the manager
would have a flock of dead sheep, because there are minimum nutrient constraints that must be satisfied. The

values of the variables must be chosen so that the number of units of nutrient A consumed daily by each sheep
is equal to or greater than 110. Expressing this in terms of the variables yields

20 x1 + 30 x2 + 70 x3 ≥ 110

3. The constraints for the other nutrients are

10 x1 +10 x2 ≥ 18

50 x1 +30 x2 ≥ 90

6 x1 + 2.5 x2 + 10 x3 ≥ 14


And finally
all x j ≥ 0
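The feed-mix model can also be checked with an LP solver. The sketch below is a hedged illustration in Python (SciPy assumed available, names illustrative): because linprog only accepts ≤ rows, each ≥ constraint is multiplied by -1, and the nutrient D requirement of 14 units from the table is used.

from scipy.optimize import linprog

c = [41, 36, 96]                      # cost per lb of grains 1, 2, 3
A_ge = [[20, 30, 70],                 # nutrient A
        [10, 10, 0],                  # nutrient B
        [50, 30, 0],                  # nutrient C
        [6, 2.5, 10]]                 # nutrient D
b_ge = [110, 18, 90, 14]

A_ub = [[-a for a in row] for row in A_ge]    # flip >= rows into <= rows
b_ub = [-b for b in b_ge]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)                 # pounds of each grain per sheep and the minimum daily cost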

Example 2.5-3 (post office problem)


A post office requires different numbers of employees on different days of the week. Each full-time employee must work five consecutive days and then receive two days off. The following table specifies the number of employees required on each day of the week. Formulate an LP that the post office can use to minimize the number of full-time employees needed to satisfy these requirements.

Day Numbers of full-time employees required


1=Monday 17
2=Tuesday 13
3=Wednesday 15
4=Thursday 19
5=Friday 14
6=Saturday 16
7=Sunday 11

We need to define 7 decision variables as follows:

xj = number of employees who begin work on day j (j = 1, 2, ..., 7)

Our aim is to minimize the number of hired employees, so the objective function is:
Minimize z = x1 + x2 + x3 + x4 + x5 + x6 + x7

The post office must ensure that enough employees are working on each day of the week. An employee who begins work on day j is on duty on days j, j+1, ..., j+4 (wrapping around the week). At least 17 employees must be working on Monday, and those are the employees who started on Thursday, Friday, Saturday, Sunday, or Monday, so the constraint is:
x1 + x4 + x5 + x6 + x7 ≥ 17

LP model for post office problem:

Minimize z = x1 + x2 + x3 + x4 + x5 + x6 + x7
Subject to
x1 + x4 + x5 + x6 + x7 ≥ 17 (Monday constraint)
x1 + x2 + x5 + x6 + x7 ≥ 13 (Tuesday constraint)
x1 + x2 + x3 + x6 + x7 ≥ 15 (Wednesday constraint)
x1 + x2 + x3 + x4 + x7 ≥ 19 (Thursday constraint)
x1 + x2 + x3 + x4 + x5 ≥ 14 (Friday constraint)
x2 + x3 + x4 + x5 + x6 ≥ 16 (Saturday constraint)
x3 + x4 + x5 + x6 + x7 ≥ 11 (Sunday constraint)
x1, ..., x7 ≥ 0
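The cyclic structure of these covering constraints can be built programmatically. The sketch below is an illustrative Python example (SciPy assumed; names are not from the original text). It solves the LP relaxation only; a real staffing plan would additionally require the variables to be integers, which calls for a mixed-integer solver.

from scipy.optimize import linprog

req = [17, 13, 15, 19, 14, 16, 11]        # Monday ... Sunday requirements
n = 7
c = [1] * n                               # minimize total number of employees hired

A_ub, b_ub = [], []
for day in range(n):
    row = [0] * n
    for start in range(n):
        if (day - start) % 7 < 5:         # a start on day 'start' covers 'day' within 5 working days
            row[start] = -1               # written as -coverage <= -requirement
    A_ub.append(row)
    b_ub.append(-req[day])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
print(res.x, res.fun)                     # LP relaxation; an actual schedule needs integer values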

Example 2.5-4 (Manufacturing problem)


An operations manager is trying to determine a production plan for the next week. There are 3 products (P, Q
and R) to produce using 4 machines (A, B, C and D). Each of the four machines performs a unique process.
There is one machine for each type, and each machine is available for 2400 minutes per week. The unit
processing times for each machine are given in Table 1 below.

Table 1 - Machine Data

Unit processing time (min)


machine Product P Product Q Product R availability
A 20 10 10 2400
B 12 28 16 2400
C 15 6 16 2400
D 10 15 0 2400
Total processing time 57 59 42 9600

The unit revenues and maximum sales for the week are indicated in table 2. Storage from one week to the next
is not permitted. The operating expenses associated with the plant are 6000 $ per week, regardless of how many
components and products are made. The 6000 $ includes all expenses except for material costs.

Table 2 - Product Data

item Product P Product Q Product R


Revenue per unit 90$ 100$ 70$
Material cost per unit 45$ 40$ 20$
Profit per unit 45$ 60$ 50$
Maximum sales 100 40 60

We seek the optimal product mix, that is, the amount of each product that should be manufactured during the present week to maximize profit. Formulate this as an LP.

1. We define 3 decision variables as follows:
p: number of units of product P to produce
q: number of units of product Q to produce
r: number of units of product R to produce

2. Our objective function is to maximize profit:


Profit= (90-45) p+ (100-40) q + (70-20) r – 6000
= 45 p + 60 q+ 50 r – 6000

Note: the fixed operating expense is not a function of the variables in the problem. If we were to drop the 6000 from the profit function, we would still obtain the same optimal product mix, so the objective function is:
Z = 45 p + 60 q + 50 r

3. The amount of time a machine is available and the maximum sales potential for each product restrict the
quantities to be manufactured. Since we know the unit processing times for each machine, the constraints can be
written as follows:

20 p + 10 q + 10 r ≤ 2400 (Machine A)
12 p + 28 q + 16 r ≤ 2400 (Machine B)
15 p + 6 q + 16 r ≤ 2400 (Machine C)
10 p + 15 q ≤ 2400 (Machine D)

The units for these constraints are minutes per week. The market limitations are written as:
Market constraints: p ≤ 100, q ≤ 40, r ≤ 60

Non-negativity constraint: p ≥ 0, q ≥ 0, r ≥ 0
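This model can likewise be checked numerically. The following Python sketch (SciPy assumed; illustrative only) passes the market limits as variable bounds and subtracts the fixed $6000 weekly expense after solving, consistent with the note above.

from scipy.optimize import linprog

c = [-45, -60, -50]                 # maximize 45p + 60q + 50r  ->  minimize the negative
A_ub = [[20, 10, 10],               # machine A (minutes)
        [12, 28, 16],               # machine B
        [15, 6, 16],                # machine C
        [10, 15, 0]]                # machine D
b_ub = [2400] * 4
bounds = [(0, 100), (0, 40), (0, 60)]   # market limits for P, Q, R

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun - 6000)       # product mix and weekly profit after the fixed expense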

Example 2.5-5 (Bank Loan Model)


Bank One is in the process of devising a loan policy that involves a maximum of $12 million. The following
table provides the pertinent data about available loans.

Type of loan Interest rate Bad-debt ratio


Personal 0.140 0.10
Car 0.130 0.07
Home 0.120 0.03
Farm 0.125 0.05
Commercial 0.100 0.02

Competition with other financial institutions dictates the allocation of at least 40% of the funds to farm and
commercial loans. To assist the housing industry in the region, home loans must equal at least 50% of the
personal, car, and home loans. The bank limits the overall ratio of bad debts on all loans to at most 4%.
Mathematical Model: The situation deals with determining the amount of loan in each category, thus leading
to the following definitions of the variables:

x1 = personal loans (in millions of dollars)
x2 = car loans
x3 = home loans
x4 = farm loans
x5 = commercial loans
The objective of the Bank One is to maximize net return, the difference between interest revenue and lost bad
debts. Interest revenue is accrued on loans in good standing. For example, when 10% of personal loans are lost
to bad debt, the bank will receive interest on 90% of the loan—that is, it will receive 14% interest on .9x1 of the
original loan x1. The same reasoning applies to the remaining four types of loans. Thus,

Total interest = 0.14(0.9 x1) + 0.13(0.93 x2) + 0.12(0.97 x3) + 0.125(0.95 x4) + 0.1(0.98 x5)

= 0.126 x1 + 0.1209 x2 + 0.1164 x3 + 0.11875 x4 + 0.098 x5

We also have

Bad debt = 0.1 x1 + 0.07 x2 + 0.03 x3 + 0.05 x4 + 0.02 x5

The objective function combines interest revenue and bad debt as:

Maximize z = Total interest - Bad debt

= (0.126 x1 + 0.1209 x2 + 0.1164 x3 + 0.11875 x4 + 0.098 x5) - (0.1 x1 + 0.07 x2 + 0.03 x3 + 0.05 x4 + 0.02 x5)

= 0.026 x1 + 0.0509 x2 + 0.0864 x3 + 0.06875 x4 + 0.078 x5

The problem has five constraints:

1. Total funds should not exceed $12 (million):

x1 + x2 + x3 + x4 + x5 ≤ 12

2. Farm and commercial loans must equal at least 40% of all loans:

x4 + x5 ≥ 0.4(x1 + x2 + x3 + x4 + x5)
or
0.4 x1 + 0.4 x2 + 0.4 x3 - 0.6 x4 - 0.6 x5 ≤ 0

3. Home loans should equal at least 50% of the personal, car, and home loans:

x3 ≥ 0.5(x1 + x2 + x3)
or
0.5 x1 + 0.5 x2 - 0.5 x3 ≤ 0

4. Bad debts should not exceed 4% of all loans:

0.1 x1 + 0.07 x2 + 0.03 x3 + 0.05 x4 + 0.02 x5 ≤ 0.04(x1 + x2 + x3 + x4 + x5)
or
0.06 x1 + 0.03 x2 - 0.01 x3 + 0.01 x4 - 0.02 x5 ≤ 0

5. Non-negativity:

x1, x2, x3, x4, x5 ≥ 0
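Once the ratio constraints have been rearranged into the ≤ 0 form above, the model is straightforward to solve numerically. The sketch below is an illustrative Python example (SciPy assumed; names not from the original text).

from scipy.optimize import linprog

c = [-0.026, -0.0509, -0.0864, -0.06875, -0.078]     # maximize net return -> minimize the negative
A_ub = [[1, 1, 1, 1, 1],                             # total funds <= 12
        [0.4, 0.4, 0.4, -0.6, -0.6],                 # farm + commercial >= 40% of all loans
        [0.5, 0.5, -0.5, 0, 0],                      # home >= 50% of personal + car + home
        [0.06, 0.03, -0.01, 0.01, -0.02]]            # bad debt <= 4% of all loans
b_ub = [12, 0, 0, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5)
print(res.x, -res.fun)                               # loan allocation (millions) and net return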

Chapter 3
Graphical & simplex solutions for
Linear programming problems

3.1 The geometry of linear programs


The characteristic that makes linear programs easy to solve is their simple geometric structure. Let’s define
some terminology.

A solution for a linear program problem is any set of numerical values for the variables. These values need not
be the best values and do not even have to satisfy the constraints or make sense.

• A feasible solution is a solution that satisfies all of the constraints.

• The feasible set or feasible region is the set of all feasible solutions.
• Finally, an optimal solution is a feasible solution that produces the best possible objective function value.

So, the graphical solution includes two steps:


1. Determination of feasible solution space.
2. Determination of the optimum solution from among all the feasible points in the solution space.

The relationship among solutions can be represented in the following figure:

The procedure uses two examples to show how maximization and minimization objective functions are handled.

3.2 Solution for maximization model
Example 1
This example solves the Reddy Mikks model from the previous chapter (Example 2.5-1).

Step1: Determination of feasible solution space.


First, we account for the non-negativity constraints x1 ≥ 0 and x2 ≥ 0. In the next figure, the horizontal axis x1 and the vertical axis x2 represent the exterior- and interior-paint variables, respectively. Thus, the non-negativity of the variables restricts the solution-space area to the first quadrant, which lies above the x1-axis and to the right of the x2-axis.

To account for the remaining four constraints, first replace each inequality with an equation and then graph the resulting straight line by locating two distinct points on it.

For example, after replacing 6x1 + 4x2 ≤ 24 with the straight line 6x1 + 4x2 = 24, we can determine two distinct points by first setting x1 = 0 to obtain x2 = 6 and then setting x2 = 0 to obtain x1 = 4. Thus, the line passes through the two points (0, 6) and (4, 0), as shown by line (1) in the next figure.

Next, consider the effect of the inequality. All it does is divide the (x1, x2)-plane into two half-spaces, one on each side of the graphed line. Only one of these two halves satisfies the inequality. To determine the correct side, choose (0, 0) as a reference point. If it satisfies the inequality, then the side on which it lies is the feasible half-space; otherwise, the other side is. The use of the reference point (0, 0) is illustrated with the constraint 6x1 + 4x2 ≤ 24. Because 6 * 0 + 4 * 0 = 0 is less than 24, the half-space representing the inequality includes the origin.

It is convenient computationally to select (0, 0) as the reference point, unless the line happens to pass through the origin, in which case any other point can be used. For example, if we use the reference point (6, 0), the left-hand side of the first constraint is 6 * 6 + 4 * 0 = 36, which is larger than its right-hand side (= 24), which means that the side on which (6, 0) lies is not feasible for the inequality 6x1 + 4x2 ≤ 24. This conclusion is consistent with the one based on the reference point (0, 0).

Application of the reference-point procedure to all the constraints of the model produces the constraints shown in the figure (verify!). The feasible solution space of the problem is the area in the first quadrant in which all the constraints are satisfied simultaneously. Any point in or on the boundary of the area ABCDEF is part of the feasible solution space. All points outside this area are infeasible.

Step2: Determination of the optimum solution
The feasible space in the previous figure is delineated by the line segments joining the points A, B, C, D, E, and
F. Any point within or on the boundary of the space ABCDEF is feasible. Because the feasible space ABCDEF
consists of an infinite number of points, we need a systematic procedure to identify the optimum solution.

The determination of the optimum solution requires identifying the direction in which the profit function z = 5x1 + 4x2 increases (recall that we are maximizing z). We can do so by assigning arbitrary increasing values to z.

For example, using z = 10 and z = 15 would be equivalent to graphing the two lines 5x1 + 4x2 = 10 and 5x1 + 4x2 = 15. Thus, the direction of increase in z is as shown in the next figure. The optimum solution occurs at C, which is the point in the solution space beyond which any further increase will put z outside the boundaries of ABCDEF.

The values of x1 and x2 associated with the optimum point C are determined by solving the equations associated with lines (1) and (2), that is,

6x1 + 4x2 = 24
x1 + 2x2 = 6

The solution is x1 = 3 and x2 = 1.5 with z = 5 * 3 + 4 * 1.5 = 21. This calls for a daily product mix of 3 tons of exterior paint and 1.5 tons of interior paint. The associated daily profit is $21,000.

An important characteristic of the optimum LP solution is that it is always associated with a corner point of the solution space (where two lines intersect). This is true even if the objective function happens to be parallel to a constraint.

For example, if the objective function is z = 6x1 + 4x2, which is parallel to constraint 1, we can always say that the optimum occurs at either corner point B or corner point C. Actually, any point on the line segment BC will be an alternative optimum, but the important observation here is that the line segment BC is completely defined by the corner points B and C.
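The corner-point property can be illustrated computationally: enumerate the intersections of pairs of constraint lines (including the axes), keep only the feasible ones, and evaluate z at each. The sketch below is a hedged Python illustration (NumPy assumed available; it is not part of the original text).

import itertools
import numpy as np

# rows of A x <= b for Reddy Mikks, with non-negativity written as -x <= 0
A = np.array([[6, 4], [1, 2], [-1, 1], [0, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([24, 6, 1, 2, 0, 0], dtype=float)

best = None
for i, j in itertools.combinations(range(len(A)), 2):
    try:
        x = np.linalg.solve(A[[i, j]], b[[i, j]])      # intersection of two constraint lines
    except np.linalg.LinAlgError:
        continue                                       # parallel lines have no intersection
    if np.all(A @ x <= b + 1e-9):                      # keep only feasible corner points
        z = 5 * x[0] + 4 * x[1]
        if best is None or z > best[0]:
            best = (z, x)

print(best)    # expected: z = 21 at (3, 1.5)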

3.3 Solution for minimization model


Example 2
Ozark Farms uses at least 800 lb of special feed daily. The special feed is a mixture of corn and soybean meal with the following composition (per lb of feedstuff, as used in the calculations below):

Feedstuff          Protein (lb/lb)    Fiber (lb/lb)    Cost ($/lb)
Corn                   0.09               0.02            0.30
Soybean meal           0.60               0.06            0.90

The dietary requirements of the special feed are at least 30% protein and at most 5% fiber. Ozark Farms wishes
to determine the daily minimum-cost feed mix. Because the feed mix consists of corn and soybean meal, the
decision variables of the model are defined as

x1 = lb of corn in the daily mix

x2 = lb of soybean meal in the daily mix

The objective function seeks to minimize the total daily cost (in dollars) of the feed mix and is thus expressed as
Minimize z = 0.3 x1 + 0.9 x2
The constraints of the model reflect the daily amount needed and the dietary requirements. Because Ozark Farms needs at least 800 lb of feed a day, the associated constraint can be expressed as

x1 + x2 ≥ 800
As for the protein dietary requirement constraint, the amount of protein included in x1 lb of corn and x2 lb of soybean meal is (0.09 x1 + 0.6 x2) lb. This quantity should equal at least 30% of the total feed mix (x1 + x2) lb, that is,

0.09 x1 + 0.6 x2 ≥ 0.3(x1 + x2)

In a similar manner, the fiber requirement of at most 5% is constructed as

0.02 x1 + 0.06 x2 ≤ 0.05(x1 + x2)
The constraints are simplified by moving the terms in x1 and x2 to the left-hand side of each inequality, leaving only a constant on the right-hand side. The complete model thus becomes
Minimize z = 0.3 x1 + 0.9 x2
Subject to
x1 + x2 ≥ 800
0.21 x1 - 0.30 x2 ≤ 0
0.03 x1 - 0.01 x2 ≥ 0
x1, x2 ≥ 0

The next figure provides the graphical solution of the model. Unlike those of the Reddy Mikks model, the second and third constraints pass through the origin. To plot the associated straight lines, we need one additional point, which can be obtained by assigning a value to one of the variables and then solving for the other variable.

For example, in the second constraint, x1 = 200 will yield 0.21 * 200 - 0.3 x2 = 0, or x2 = 140. This means that the straight line 0.21 x1 - 0.3 x2 = 0 passes through (0, 0) and (200, 140).

Note also that (0, 0) cannot be used as a reference point for constraints 2 and 3, because both lines pass through the origin. Instead, any other point [e.g., (100, 0) or (0, 100)] can be used for that purpose.

Solution:
Because the present model seeks to minimize the objective function, we need to reduce the value of z as much as possible in the direction shown in the next figure. The optimum solution is the intersection of the two lines x1 + x2 = 800 and 0.21 x1 - 0.3 x2 = 0, which yields x1 = 470.59 lb and x2 = 329.41 lb. The associated minimum cost of the feed mix is z = 0.3 * 470.59 + 0.9 * 329.41 = $437.65 per day.
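The coordinates of this corner point follow from solving the two binding equations, which can be checked with a short numerical sketch (Python with NumPy assumed; illustrative only).

import numpy as np

A = np.array([[1.0, 1.0], [0.21, -0.30]])   # x1 + x2 = 800 and 0.21 x1 - 0.3 x2 = 0
b = np.array([800.0, 0.0])
x1, x2 = np.linalg.solve(A, b)
print(x1, x2, 0.3 * x1 + 0.9 * x2)          # approximately 470.59, 329.41, 437.65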
Remarks:
We need to take note of the way the constraints of the problem are constructed. Because the model is minimizing the total cost, one may argue that the solution will seek exactly 800 lb of feed. Indeed, this is what the optimum solution given above does. Does this mean, then, that the first constraint can be deleted altogether simply by including the amount 800 lb in the remaining constraints?

To find the answer, we state the new protein and fiber constraints as
0.09 x1 + 0.6 x2 ≥ 0.3 * 800
0.02 x1 + 0.06 x2 ≤ 0.05 * 800
or
0.09 x1 + 0.6 x2 ≥ 240
0.02 x1 + 0.06 x2 ≤ 40

The new formulation yields the solution x1 = 0 and x2 = 400 lb, which does not satisfy the implied requirement for 800 lb of feed. This means that the constraint x1 + x2 ≥ 800 must be used explicitly and that the protein and fiber constraints must remain exactly as given originally.

Along the same line of reasoning, one may be tempted to replace x1 + x2 ≥ 800 with x1 + x2 = 800. In the present example, the two constraints yield the same answer. But in general this may not be the case. For example, suppose that the daily mix must include at least 500 lb of corn. In this case, the optimum solution will call for using 500 lb of corn and 350 lb of soybean meal, which is equivalent to a daily feed mix of 500 + 350 = 850 lb. Imposing the equality constraint a priori would then lead to the conclusion that the problem has no feasible solution.

On the other hand, the use of the inequality includes the equality case, and hence its use does not prevent the model from producing exactly 800 lb of feed mix, should the remaining constraints allow it. The conclusion is that we should not "pre-guess" the solution by imposing the additional equality restriction; we should always use inequalities unless the situation explicitly stipulates the use of equalities.

Example 3:
Consider the maximization problem

Maximize 30 x1 + 40 x2
Subject to:
3 x1 + 2 x2 ≤ 600
3 x1 + 5 x2 ≤ 800
5 x1 + 6 x2 ≤ 1100
x1 ≥ 0, x2 ≥ 0

Solution
M = 30 x1 + 40 x2

In this problem the objective function is 30 x1 + 40 x2. Let M be a parameter; the graph of 30 x1 + 40 x2 = M is a family of parallel lines with slope -30/40. Some of these lines intersect the feasible region and contain many feasible solutions, whereas the other lines miss it and contain no feasible solution. In order to maximize the objective function, we find the line of this family that intersects the feasible region and is farthest from the origin. Note that the farther the line is from the origin, the greater the value of M.

Observe that the line 30 x1 + 40 x2 = M passes through the point D, which is the intersection of the lines 3 x1 + 5 x2 = 800 and 5 x1 + 6 x2 = 1100 and has the coordinates x1 = 100 and x2 = 100. Since D is the only feasible solution on this line, the solution is unique.

The value of M is 7000, which is the maximum value of the objective function. The optimum values of the variables are x1 = 100 and x2 = 100.

The following Table shows the calculation of maximum value of the objective function.
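Since the table itself is not reproduced here, the corner point D can be checked numerically with the short sketch below (Python with NumPy assumed; illustrative only): it solves the two binding constraint equations and evaluates the objective there.

import numpy as np

A = np.array([[3.0, 5.0], [5.0, 6.0]])   # 3x1 + 5x2 = 800 and 5x1 + 6x2 = 1100
b = np.array([800.0, 1100.0])
x1, x2 = np.linalg.solve(A, b)
print(x1, x2, 30 * x1 + 40 * x2)         # expected: 100, 100, 7000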

3.4 Multiple optimum solution


Example 4:
A company purchasing scrap material has two types of scrap material available. The first type has 30% of material X, 20% of material Y, and 50% of material Z by weight. The second type has 40% of material X, 10% of material Y, and 30% of material Z. The costs of the two scraps are Rs. 120 and Rs. 160 per kg, respectively. The company requires at least 240 kg of material X, 100 kg of material Y, and 290 kg of material Z. Find the optimum quantities of the two scraps to be purchased so that the company's requirements for the three materials are satisfied at minimum cost.

Solution

First we have to formulate the linear programming model. Let us introduce the decision variables x1 and x2, denoting the amounts (in kg) of the first and second scrap materials to be purchased. Here the objective is to minimize the purchasing cost, so the objective function is

Minimize 120 x1 + 160 x2


Subject to:
0.3x1 + 0.4x2 ≥ 240
0.2x1 + 0.1x2 ≥ 100
0.5x1 + 0.3x2 ≥ 290
x1 ≥ 0, x2 ≥ 0

Multiplying both sides of the inequalities by 10, the problem becomes

Minimize 120 x1 + 160 x2


Subject to:
3x1 + 4x2 ≥ 2400
2x1 + x2 ≥ 1000
5x1 + 3x2 ≥ 2900
x1 ≥ 0; x2 ≥ 0

Let us introduce a parameter M in the objective function, i.e., 120 x1 + 160 x2 = M. Then we have to determine the different values of M, which are shown in the following table.

Note that there are two corner points with the minimum value of the objective function (M = 96000). The feasible region and the multiple solutions are indicated in the following graph.

The extreme points are A, B, C, and D. One member of the family of objective function lines 120 x1 + 160 x2 = M coincides with the line CD: at the point C the value is M = 96000, with x1 = 400 and x2 = 300, and at the point D the value is also M = 96000, with x1 = 800 and x2 = 0. Thus, every point on the line segment CD minimizes the objective function value, and the problem has multiple optimal solutions.

3.5 unbounded solution


When the feasible region is unbounded, a maximization problem may have no optimal solution, since the values of the decision variables may be increased arbitrarily. This is illustrated with the help of the following problem.

Maximize 3 x1 + x2
Subject to:
x1 + x2 ≥ 6
-x1 + x2 ≤ 6
-x1 + 2 x2 ≥ -6
x1, x2 ≥ 0

The following graph shows the unbounded feasible region and demonstrates that the objective function can be made arbitrarily large by increasing the values of x1 and x2 within the unbounded feasible region. In this case, no point (x1, x2) is optimal, because there are always other feasible points for which the objective function is larger. Note that it is not the unbounded feasible region alone that precludes an optimal solution: minimization of the same function subject to the constraints shown in the graph would be solved at one of the extreme points (A or B).

Unbounded solutions typically arise because some real constraints, representing practical resource limitations, have been omitted from the linear programming formulation. In such a situation the problem needs to be reformulated and re-solved.

3.6 Infeasible Solution

A linear programming problem is said to be infeasible if no feasible solution of the problem exists. This section describes an infeasible linear programming problem with the help of the following example.

Minimize 200 x1 + 300 x2

Subject to:
0.4 x1 + 0.6 x2 ≥ 240
0.2 x1 + 0.2 x2 ≤ 80
0.4 x1 + 0.3 x2 ≥ 180
x1, x2 ≥ 0

On multiplying both sides of the inequalities by 10, we get

4 x1 + 6 x2 ≥ 2400
2 x1 + 2 x2 ≤ 800
4 x1 + 3 x2 ≥ 1800

The region to the right of the boundary AFE includes all the solutions which satisfy the first (4 x1 + 6 x2 ≥ 2400) and the third (4 x1 + 3 x2 ≥ 1800) constraints. The region to the left of BC contains all solutions which satisfy the second constraint (2 x1 + 2 x2 ≤ 800).

Hence, there is no solution satisfying all the three constraints (first, second, and third). Thus, the linear problem
is infeasible. This is illustrated in the above graph.

Simplex Method for Solving Linear Programming

3.7 Introduction to the simplex method

A linear program with two variables can be solved graphically. The graphical method is of limited use in business problems, however, because the number of variables is usually substantially larger. If the linear programming problem has a larger number of variables, the suitable method for solving it is the simplex method.

The simplex method is an iterative process which ultimately reaches the minimum or maximum value of the objective function.

The steps of the simplex method are:

1. Determine a starting basic feasible solution.

2. Select an entering variable using the optimality condition.


Stop if there is no entering variable; the last solution is optimal. Else, go to step 3.

3. Select a leaving variable using the feasibility condition.

4. Determine the new basic solution by using the appropriate Gauss-Jordan computations. Go to step 2.

Optimality condition:
The entering variable in a maximization (minimization) problem is the non-basic variable having the
most negative (positive) coefficient in the z-row. Ties are broken arbitrarily. The optimum is reached at the
iteration where all the z-row coefficients of the non-basic variables are nonnegative (non-positive).

Feasibility condition:
For both the maximization and the minimization problems, the leaving variable is the basic variable
associated with the smallest nonnegative ratio (with strictly positive denominator). Ties are broken arbitrarily.

LP model in equation form


The development of the simplex method computations is facilitated by imposing two requirements on the LP
model:
1. All the constraints are equations with nonnegative right-hand sides.
2. All the variables are nonnegative.

Converting inequalities into equations with nonnegative right-hand sides.

1. To convert a (≤) -inequality to an equation, a nonnegative slack variable is added to the left-hand side
of the constraint.
2. Conversion from (≥) to (=) is achieved by subtracting a nonnegative surplus variable from the left
hand side of the inequality.

Gauss-Jordan row operations:

1. Pivot row
a. Replace the leaving variable in the Basic column with the entering variable.
b. New pivot row = Current pivot row ÷ Pivot element

2. All other rows, including z


New row = (Current row) - (its pivot column coefficient) * (New pivot row)

3.8 Computational details of the simplex algorithm


This section provides the computational details of a simplex iteration, including the rules for
determining the entering and leaving variables as well as for stopping the computations when the optimum
solution has been reached. The vehicle of explanation is a numerical example.

We use the Reddy Mikks model to explain the details of the simplex method. The problem is expressed in
equation form as:

Maximize z = 5 x1 + 4 x2 + 0 s1 + 0 s2 + 0 s3 + 0 s4
Subject to

6 x1 + 4 x2 + s1 = 24 (Raw material M1)

x1 + 2 x2 + s2 = 6 (Raw material M2)

-x1 + x2 + s3 = 1 (Market limit)

x2 + s4 = 2 (Demand limit)

x1, x2, s1, s2, s3, s4 ≥ 0

The variables s1, s2, s3, and s4 are the slacks associated with the respective constraints. Next, we write the objective equation as:

z - 5 x1 - 4 x2 = 0
In this manner, the starting simplex tableau can be represented as follows:

The design of the tableau specifies the set of basic and non-basic variables as well as the solution associated with the starting iteration. The simplex iterations start at the origin (x1, x2) = (0, 0), whose associated sets of non-basic and basic variables are defined as

Non-basic (zero) variables: (x1, x2)

Basic variables: (s1, s2, s3, s4)

Substituting the non-basic variables (x1, x2) = (0, 0) and noting the special 0-1 arrangement of the coefficients of z and the basic variables (s1, s2, s3, s4) in the tableau, the following solution is immediately available (without any calculations):

z = 0
s1 = 24
s2 = 6
s3 = 1
s4 = 2
This information is shown in the tableau by listing the basic variables in the leftmost Basic column and their
values in the rightmost Solution column. In effect, the tableau defines the current corner point by specifying its
basic variables and their values, as well as the corresponding value of the objective function, z. Remember that
the non-basic variables (those not listed in the Basic column) always equal zero.

Is the starting solution optimal? The objective function z = 5 x1 + 4 x2 shows that the solution can be improved by increasing x1 or x2. The variable x1, having the most positive coefficient, is selected as the entering variable. Equivalently, because the simplex tableau expresses the objective function as z - 5 x1 - 4 x2 = 0, the entering variable corresponds to the variable with the most negative coefficient in the objective equation. This rule is referred to as the optimality condition.
The mechanics of determining the leaving variable from the simplex tableau call for computing the nonnegative ratios of the right-hand side of the equations (Solution column) to the corresponding constraint coefficients under the entering variable, x1, as the following table shows.

The minimum nonnegative ratio automatically identifies the current basic variable s1 as the leaving variable and assigns the entering variable x1 the new value of 4.

How do the computed ratios determine the leaving variable and the value of the entering variable? The figure shows that the computed ratios are actually the intercepts of the constraints with the entering-variable (x1) axis. We can see that the value of x1 must be increased to 4 at corner point B, which is the smallest nonnegative intercept with the x1-axis. An increase beyond B is infeasible. At point B, the current basic variable s1 associated with constraint 1 assumes a zero value and becomes the leaving variable. The rule associated with the ratio computations is referred to as the feasibility condition because it guarantees the feasibility of the new solution.

The new solution point B is determined by "swapping" the entering variable x1 and the leaving variable s1 in the simplex tableau to produce the following sets of non-basic and basic variables:

Non-basic (zero) variables at B: (s1, x2)

Basic variables at B: (x1, s2, s3, s4)

The swapping process is based on the Gauss-Jordan row operations. It identifies the entering variable column
as the pivot column and the leaving variable row as the pivot row. The intersection of the pivot column and the
pivot row is called the pivot element. The following tableau is a restatement of the starting tableau with its
pivot row and column highlighted.

The Gauss-Jordan computations needed to produce the new basic solution include two types.
1. Pivot row
a. Replace the leaving variable in the Basic column with the entering variable.
b. New pivot row = Current pivot row ÷ Pivot element
2. All other rows, including z
New row = (Current row) - (its pivot column coefficient) * (New pivot row)

These computations are applied to the preceding tableau in the following manner:
1. Replace s1 in the Basic column with x1:
New x1-row = Current s1-row ÷ 6

= (0 6 4 1 0 0 0 24) ÷ 6

= (0 1 2/3 1/6 0 0 0 4)

2. New z-row = Current z-row - (-5) * New x1-row

= (1 -5 -4 0 0 0 0 0) - (-5) * (0 1 2/3 1/6 0 0 0 4)

= (1 0 -2/3 5/6 0 0 0 20)

3. New s2-row = Current s2-row - (1) * New x1-row

= (0 1 2 0 1 0 0 6) - (1) * (0 1 2/3 1/6 0 0 0 4)

= (0 0 4/3 -1/6 1 0 0 2)

4. New s3-row = Current s3-row - (-1) * New x1-row

= (0 -1 1 0 0 1 0 1) - (-1) * (0 1 2/3 1/6 0 0 0 4)

= (0 0 5/3 1/6 0 1 0 5)

5. New s4-row = Current s4-row - (0) * New x1-row

= (0 0 1 0 0 0 1 2) - (0) * (0 1 2/3 1/6 0 0 0 4)

= (0 0 1 0 0 0 1 2)
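The row operations above can be reproduced with a few lines of code. The following is a minimal Python sketch (not part of the original text); it uses exact fractions so the entries 2/3, 1/6, 5/6, etc. appear exactly as in the hand computation.

from fractions import Fraction as F

# starting tableau rows, columns: z, x1, x2, s1, s2, s3, s4, Solution
T = [[F(v) for v in row] for row in [
    [1, -5, -4, 0, 0, 0, 0, 0],    # z-row
    [0,  6,  4, 1, 0, 0, 0, 24],   # s1-row
    [0,  1,  2, 0, 1, 0, 0, 6],    # s2-row
    [0, -1,  1, 0, 0, 1, 0, 1],    # s3-row
    [0,  0,  1, 0, 0, 0, 1, 2],    # s4-row
]]

def pivot(T, prow, pcol):
    piv = T[prow][pcol]
    T[prow] = [v / piv for v in T[prow]]                     # new pivot row
    for r in range(len(T)):
        if r != prow:
            factor = T[r][pcol]
            T[r] = [a - factor * b for a, b in zip(T[r], T[prow])]
    return T

pivot(T, prow=1, pcol=1)           # x1 enters (column 1), s1 leaves (row 1)
for row in T:
    print([str(v) for v in row])   # z-row becomes (1, 0, -2/3, 5/6, 0, 0, 0, 20)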

The new basic solution is (x1, s2, s3, s4), and the new tableau becomes:

Observe that the new tableau has the same properties as the starting tableau. When we set the new non-basic variables x2 and s1 to zero, the Solution column automatically yields the new basic solution (x1 = 4, s2 = 2, s3 = 5, s4 = 2). This "conditioning" of the tableau is the result of the application of the Gauss-Jordan row operations. The corresponding new objective value is z = 20, which is consistent with

New z = Old z + New x1-value * its objective coefficient

= 0 + 4 * 5 = 20
In the last tableau, the optimality condition shows that x2 is the entering variable. The feasibility condition produces the following ratios:

x1-row: 4 ÷ (2/3) = 6,  s2-row: 2 ÷ (4/3) = 1.5 (minimum),  s3-row: 5 ÷ (5/3) = 3,  s4-row: 2 ÷ 1 = 2

Thus, s2 leaves the basic solution, and the new value of x2 is 1.5. The corresponding increase in z is

(2/3) * 1.5 = 1, which yields new z = 20 + 1 = 21.

Replacing s2 in the Basic column with the entering variable x2, the following Gauss-Jordan row operations are applied:

1. New pivot x2-row = Current s2-row ÷ (4/3)

2. New z-row = Current z-row - (-2/3) * New x2-row

3. New x1-row = Current x1-row - (2/3) * New x2-row

4. New s3-row = Current s3-row - (5/3) * New x2-row

5. New s4-row = Current s4-row - (1) * New x2-row


These computations produce the following tableau

Based on the optimality condition, none of the z-row coefficients associated with the non-basic variables s1 and s2 is negative. Hence, the last tableau is optimal. The optimum solution can be read from the simplex tableau in the following manner: the optimal values of the variables in the Basic column are given in the right-hand-side Solution column and can be interpreted as x1 = 3 tons of exterior paint, x2 = 1.5 tons of interior paint, and z = 21 (thousand dollars) of daily profit.

You can verify that the values s1 = s2 = 0, s3 = 5/2, and s4 = 1/2 are consistent with the given values of x1 and x2 by substituting the values of x1 and x2 into the constraints.

The solution also gives the status of the resources. A resource is designated as scarce if the activities (variables)
of the model use the resource completely. Otherwise, the resource is abundant. This information is secured
from the optimum tableau by checking the value of the slack variable associated with the constraint representing
the resource. If the slack value is zero, the resource is used completely and, hence, is classified as scarce.
Otherwise, a positive slack indicates that the resource is abundant. The following table classifies the constraints
of the model:

Remarks. The simplex tableau offers a wealth of additional information that includes:
1. Sensitivity analysis, which deals with determining the conditions that will keep the current solution
unchanged.
2. Post-optimal analysis, which deals with finding a new optimal solution when the data of the model are
changed.

3.9 Artificial Starting Solution


LPs in which all the constraints are (≤) with non-negative right-hand sides offer a convenient all-slack starting
basic feasible solution. Models involving (=) and/or (≥) constraints do not.

The procedure for starting "ill-behaved" LPs with (=) and (≥) constraints is to use artificial variables that play the role of slacks at the first iteration. The artificial variables are then disposed of at a later iteration. Two closely related methods are introduced here: the M-method and the two-phase method.

3.9.1 M-Method

The M-method starts with the LP in equation form. If equation i does not have a slack (or a variable that can
play the role of a slack), an artificial variable, Ri, is added to form a starting solution similar to the all-slack
basic solution. However, because the artificial variables are not part of the original problem, a modeling "trick"
is needed to force them to zero value by the time the optimum iteration is reached (assuming the problem has a
feasible solution). The desired goal is achieved by assigning a penalty defined as:

M is a sufficiently large positive value (mathematically, M → ∞)

Example
Minimize z = 4x1 + x2
Subject to:
3x1 + x2 = 3
4x1 + 3x2 ≥ 6
x1 + 2x2 ≤ 4
x1, x2 ≥ 0

To convert the constraints to equations, use x3 as a surplus in the second constraint and x4 as a slack in the
third constraint. Thus
Minimize z = 4x1 + x2
Subject to
3x1 + x2 = 3
4x1 + 3x2 – x3 = 6
x1 + 2x2 + x4 = 4
x1, x2, x3, x4 ≥ 0
The third equation has its slack variable, x4, but the first and second equations do not. Thus, we add the
artificial variables R1 and R2 in the first two equations and penalize them in the objective function with MR1 +
MR2 (because we are minimizing). The resulting LP becomes

Minimize z = 4x1 + x2 + MR1 + MR2
subject to
3x1 + x2 + R1 = 3
4x1 + 3x2 - x3 + R2 = 6
x1 + 2x2 + x4 = 4
x1, x2, x3, x4, R1, R2 ≥ 0

The starting basic solution is (R1, R2, x4) = (3, 6, 4).


From a computational standpoint, solving the problem on the computer requires replacing M with a (sufficiently
large) numeric value.

What value of M should we use? The answer depends on the data of the original LP. Recall that the penalty M
must be sufficiently large relative to the original objective coefficients to force the artificial variables to be zero
(which happens only if a feasible solution exists). At the same time, since computers are the main tool for
solving LPs, M should not be unnecessarily large, as this may lead to serious round-off error. In the present
example, the objective coefficients of x1 and x2 are 4 and 1, respectively, and it appears reasonable to set M =
100.

Using M = 100, the starting simplex tableau is given as follows (for convenience, from now on the z-column
will be eliminated from the tableau because it does not change in all the iterations):

Before proceeding with the simplex method computations, the z-row must be made consistent with the rest of
the tableau. The right-hand side of the z-row in the tableau currently shows z = 0. However, given the non-basic
solution x1 = x2 = x3 = 0, the current basic solution R1 = 3, R2 = 6, and x4 = 4 yields z = (100 * 3) + (100 * 6) +
(0 * 4) = 900. The inconsistency stems from the fact that R1 and R2 have nonzero coefficients (-100, -100) in
the z-row.

To eliminate the inconsistency, we need to substitute out R1 and R2 in the z-row using the following row
operation:
New z_row = Old z_row + (100 * R1_row + 100 * R2_row)

(Convince yourself that this operation is the same as substituting out R1 = 3 - 3x1 - x2 and R2 = 6 - 4x1 - 3x2 +
x3 in the z-row.) The modified tableau thus becomes (verify!):

The result is that R1 and R2 are now substituted out (have zero coefficients) in the z-row, with z = 900 as
desired.
The last tableau is ready for the application of the simplex optimality and feasibility conditions. Because the
objective function is minimized, the variable x1 having the most positive coefficient in the z-row (= 696) enters
the solution. The minimum ratio of the feasibility condition specifies R1 as the leaving variable (verify!).
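The consistency operation and the 696 coefficient can be verified numerically. A small NumPy sketch (illustrative only; the rows below are written directly from the equation form of this example, with the z-row expressing minimize z = 4x1 + x2 + 100R1 + 100R2):

import numpy as np

M = 100.0
# Column order: x1, x2, x3, x4, R1, R2, RHS (z written as z - 4x1 - x2 - M*R1 - M*R2 = 0).
z_row  = np.array([-4.0, -1.0,  0.0, 0.0,  -M,  -M, 0.0])
R1_row = np.array([ 3.0,  1.0,  0.0, 0.0, 1.0, 0.0, 3.0])
R2_row = np.array([ 4.0,  3.0, -1.0, 0.0, 0.0, 1.0, 6.0])

new_z_row = z_row + M * R1_row + M * R2_row   # substitute out R1 and R2
print(new_z_row)   # x1 coefficient = 696, R1 and R2 coefficients = 0, right-hand side = 900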

Once the entering and the leaving variables have been determined, the new tableau can be computed by using
the familiar Gauss-Jordan operations.

The last tableau shows that x2 and R2 are the entering and leaving variables, respectively. Continuing with the
simplex computations, two more iterations are needed to reach the optimum: x1 = 2/5, x2 = 9/5, z = 17/5.

Note that the artificial variables R1 and R2 leave the basic solution (i.e., become equal to zero) promptly in the
first and second iterations, a result that is consistent with the concept of penalizing them in the objective
function.

3.9.2 Two-phase Method


In the M-method, the use of the penalty, M, can result in computer round-off error. The two-phase method
eliminates the use of the constant M altogether. As the name suggests, the method solves the LP in two phases:
Phase I attempts to find a starting basic feasible solution, and, if one is found, Phase II is invoked to solve the
original problem.

Phase I.
Put the problem in equation form, and add the necessary artificial variables to the constraints (exactly as
in the M-method) to secure a starting basic solution. Next, find a basic solution of the resulting equations
that always minimizes the sum of the artificial variables, regardless of whether the LP is maximization
or minimization. If the minimum value of the sum is positive, the LP problem has no feasible solution.
Otherwise, proceed to Phase II.

Phase II.
Use the feasible solution from Phase I as a starting basic feasible solution for the original problem.

Example (same as above)

As in the M-method, R1 and R2 are substituted out in the r-row by using the following row operations:

New r_row = Old r_row + (1 * R1_row + 1 * R2_row)

The new r-row is used to solve Phase I of the problem, which yields the following optimum tableau:

Because minimum r = 0, Phase I produces the basic feasible solution x1 = 3/5, x2 = 6/5, and x4 = 1. At this
point, the artificial variables have completed their mission, and we can eliminate their columns altogether from
the tableau and move on to Phase II.

Phase II
After deleting the artificial columns, we write the original problem as :

Minimize z = 4x1 + x2
Subject to
x1 + (1/5)x3 = 3/5
x2 - (3/5)x3 = 6/5
x3 + x4 = 1

x1, x2, x3, x4 ≥ 0

Essentially, Phase I has transformed the original constraint equations in a manner that provides a starting basic
feasible solution for the problem, if one exists. The tableau associated with Phase II problem is thus given as:

Again, because the basic variables x1 and x2 have non-zero coefficients in the z-row, they must be substituted
out, using the following operations.

New z_row = Old z_row + (4 * x1_row + 1 * x2_row)

Because we are minimizing, x3 must enter the solution. Application of the simplex method will produce the
optimum in one iteration.
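The two-phase logic can be mimicked on a computer with an off-the-shelf solver. The sketch below (illustrative only, using scipy.optimize.linprog rather than the hand tableaus) first minimizes R1 + R2 and then, having confirmed that the minimum is zero, minimizes the original objective:

from scipy.optimize import linprog

# Equation form of the example; variable order: x1, x2, x3, x4, R1, R2.
A_eq = [[3, 1,  0, 0, 1, 0],
        [4, 3, -1, 0, 0, 1],
        [1, 2,  0, 1, 0, 0]]
b_eq = [3, 6, 4]

# Phase I: minimize r = R1 + R2.
phase1 = linprog(c=[0, 0, 0, 0, 1, 1], A_eq=A_eq, b_eq=b_eq, method="highs")
print("Phase I minimum r =", phase1.fun)          # 0 => a feasible solution exists

# Phase II: minimize 4x1 + x2 with the artificial variables held at zero.
phase2 = linprog(c=[4, 1, 0, 0, 0, 0], A_eq=A_eq, b_eq=b_eq,
                 bounds=[(0, None)] * 4 + [(0, 0), (0, 0)], method="highs")
print("Optimal z =", phase2.fun, "at x1, x2 =", phase2.x[:2])   # z = 3.4 at (0.4, 1.8)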

3.10 Special Cases in the Simplex Method


This section considers four special cases that arise in the use of the simplex method:

1. Degeneracy
2. Alternative optima
3. Unbounded solutions
4. Non-existing (or infeasible) solutions
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––

Degeneracy
In the application of the feasibility condition of the simplex method, a tie for the minimum ratio may occur and
can be broken arbitrarily. When this happens, at least one basic variable will be zero in the next iteration, and
the new solution is said to be degenerate.

Degeneracy can cause the simplex iterations to cycle indefinitely, thus never terminating the algorithm. The
condition also reveals the possibility of at least one redundant constraint.

The following example explains the practical and theoretical impacts of degeneracy.

Example (Degenerate Optimal Solution)
Maximize z = 3x1 + 9x2
Subject to
x1 + 4x2 ≤ 8
x1 + 2x2 ≤ 4
x1, x2 ≥ 0

In iteration 0, x3 and x4 tie for the leaving variable, leading to degeneracy in iteration 1 because the basic
variable x4 assumes a zero value. The optimum is reached in one additional iteration.

Remark.

What is the practical implication of degeneracy? Look at the graphical solution. Three lines pass through the
optimum point (x1 = 0, x2 = 2). Because this is a two- dimensional problem, the point is over-determined, and
one of the constraints is redundant. Redundancy means that an associated constraint can be removed without
changing the solution space. Thus, in the following Figure, x1 + 4x2 ≤ 8 is redundant but x1 + 2x2 ≤ 4 is not.
The mere knowledge that some resources are superfluous can be important during the implementation phase of
the solution. The information may also lead to discovering irregularities in the modeling phase of the solution.
Unfortunately, there are no efficient computational techniques for identifying redundant constraints.

Alternative Optima

An LP problem may have an infinite number of alternative optima when the objective function is parallel to a
non-redundant binding constraint (i.e., a constraint that is satisfied as an equation at the optimal solution). The
next example demonstrates the practical significance of such solutions.

Example (Infinite Number of Solutions)

Maximize z = 2x1 + 4x2
Subject to
x1 + 2x2 ≤ 5
x1 + x2 ≤ 4
x1, x2 ≥ 0

The following Figure demonstrates how alternative optima can arise in the LP model when the objective
function is parallel to a binding constraint. Any point on the line segment BC represents an alternative optimum
with the same objective value z = 10. The iterations of the model are given by the following tableaus.

Iteration 1 gives the optimum solution x1 = 0, x2 = 5/2, and z = 10 (point B in the next figure). The existence of
alternative optima can be detected in the optimal tableau by examining the z-equation coefficients of the non-basic
variables. The zero coefficient of non-basic x1 indicates that x1 can be made basic, altering the values of the
basic variables without changing the value of z. Iteration 2 does just that, using x1 and x4 as the entering and
leaving variables, respectively. The new solution point occurs at C (x1 = 3, x2 = 1, z = 10).

Remarks.
In practice, alternative optima are useful because we can choose from many solutions without experiencing
deterioration in the objective value. For instance, in the present example, the solution at B shows that activity 2
only is at a positive level. At C, both activities are at a positive level. If the example represents a product-mix
situation, it may be advantageous to market two products instead of one.

Unbounded solution

In some LP models, the solution space is unbounded in at least one variable—meaning that variables may be
increased indefinitely without violating any of the constraints. The associated objective value may also be
unbounded in this case.

An unbounded solution space may signal that the model is poorly constructed. The most likely irregularity in
such models is that some key constraints have not been accounted for. Another possibility is that estimates of
the constraint coefficients may not be accurate.

Example (Unbounded Objective Value)

Maximize z = 2x1 + x2
Subject to
x1 – x2 ≤ 10
2x1 ≤ 40
x1, x2 ≥ 0

In the starting tableau, both x1 and x2 have negative z-equation coefficients—meaning that an increase in their
values will increase the objective value. Although x1 should be the entering variable (it has the most negative z-
coefficient), we note that all the constraint coefficients under x2 are ≤ 0—meaning that x2 can be increased
indefinitely without violating any of the constraints. The result is that z can be increased indefinitely.

Remarks

Had x1 been selected as the entering variable in the starting iteration (per the optimality condition), a later
iteration would eventually have produced an entering variable with the same properties as x2.

Infeasible solution

LP models with inconsistent constraints have no feasible solution. This situation does not occur if all the
constraints are of the type ≤ with nonnegative right-hand sides because the slacks provide an obvious feasible
solution. For other types of constraints, penalized artificial variables are used to start the solution. If at least one
artificial variable is positive in the optimum iteration, then the LP has no feasible solution. From the practical
standpoint, an infeasible space points to the possibility that the model is not formulated correctly.

Example (Infeasible Solution Space)

Consider the following LP:

Maximize z = 3x1 + 2x2
Subject to
2x1 + x2 ≤ 2
3x1 + 4x2 ≥ 12
x1, x2 ≥ 0

Using the penalty M = 100 for the artificial variable R, the following tableaus provide the simplex iterations of
the model.

Optimum iteration 1 shows that the artificial variable R is positive (=4)—meaning that the LP is infeasible. By
allowing the artificial variable to be positive, the simplex method has in essence reversed the direction of the
inequality from 3 + 4 ≥ 12 to 3 + 4 ≤ 12 (can you explain how?). The result is what we may call a
pseudo-optimal solution.

3.11 Sensitivity Analysis


In LP, the parameters (input data) of the model can change within certain limits without causing changes in the
optimum. This is referred to as sensitivity analysis and will be the subject matter of this section. The
presentation explains the basic ideas of sensitivity analysis using the more concrete graphical solution. These
ideas are then extended to the general LP problem using the simplex tableau results.

Graphical Sensitivity Analysis


This section demonstrates the general idea of sensitivity analysis. Two cases will be considered:

1. Sensitivity of the optimum solution to changes in the availability of the resources (right-hand side of
the constraints).

2. Sensitivity of the optimum solution to changes in unit profit or unit cost (coefficients of the objective
function).

We will use individual examples to explain the two cases.

Example (Changes in the Right-Hand Side)
JOBCO manufactures two products on two machines. A unit of product 1 requires 2 hrs. on machine 1 and 1 hr.
on machine 2. For product 2, one unit requires 1 hr. on machine 1 and 3 hrs. on machine 2. The revenues per
unit of products 1 and 2 are $30 and $20, respectively. The total daily processing time available for each
machine is 8 hrs.

Letting x1 and x2 represent the daily number of units of products 1 and 2, respectively, the LP model is given as

Maximize z = 30x1 + 20x2
Subject to
2x1 + x2 ≤ 8 (Machine 1)
x1 + 3x2 ≤ 8 (Machine 2)
x1, x2 ≥ 0

The following figure illustrates the change in the optimum solution when changes are made in the capacity
of machine 1. If the daily capacity is increased from 8 to 9 hrs, the new optimum will move to point G. The rate
of change in optimum z resulting from changing machine 1 capacity from 8 to 9 hrs can be computed as:

Rate of revenue change (from point C to point G) = (zG - zC) / (9 - 8) = $14.00 per hr

The computed rate provides a direct link between the model input (resources) and its output (total revenue). It
says that a unit increase (decrease) in machine 1 capacity will increase (decrease) revenue by $14.
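The same rate can be reproduced by simply re-solving the model with the modified capacity and taking the difference in optimal revenue, which is exactly what the dual price measures. An illustrative scipy sketch:

from scipy.optimize import linprog

def jobco_revenue(machine1_hours):
    # Maximize 30x1 + 20x2  <=>  minimize the negated objective.
    res = linprog(c=[-30, -20],
                  A_ub=[[2, 1], [1, 3]], b_ub=[machine1_hours, 8],
                  method="highs")
    return -res.fun

z8, z9 = jobco_revenue(8), jobco_revenue(9)
print(z8, z9, z9 - z8)   # 128.0 142.0 14.0 => the dual price of machine 1 is $14/hr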

The name unit worth of a resource is an apt description of the rate of change of the objective function per unit
change of a resource. Nevertheless, early LP developments have coined the abstract name dual (or shadow)
price, and this name is now standard in all the LP literature and software packages. The presentation in this book
conforms to this standard. Nevertheless, think "unit worth of resource" whenever you come across the standard
names "dual or shadow price."

Looking at the above figure, we can see that the dual price of $14/hr remains valid for changes (increases or
decreases) in machine 1 capacity that move its constraint parallel to itself to any point on the line segment BF.
We compute machine 1 capacities at points B and F as follows:

Minimum machine 1 capacity [at B = (0, 2.67)] = 2 * 0 + 1 * 2.67 = 2.67 hrs.

Maximum machine 1 capacity [at F = (8, 0)] = 2 * 8 + 1 * 0 = 16 hrs.

The conclusion is that the dual price of $14.00/hr remains valid only in the range 2.67 hrs. ≤ Machine 1 capacity
≤ 16 hrs.
Changes outside this range produce a different dual price (worth per unit).

Using similar computations, you can verify that the dual price for machine 2 capacity is $2/hr, and it remains
valid for changes in machine 2 capacity within the line segment DE. Now,

Minimum machine 2 capacity [at D = (4, 0)] = 1 * 4 + 3 * 0 = 4 hr

Maximum machine 2 capacity [at E = (0, 8)] = 1 * 0 + 3 * 8 = 24 hr

Thus, the dual price of $2/hr for machine 2 remains applicable for the range 4 hr ≤ Machine 2 capacity ≤ 24 hr

The computed limits for machine 1 and 2 are referred to as the feasibility ranges.

Example (Changes in the Objective Coefficients)


The next figure shows the graphical solution space of the JOBCO problem presented in previous example. The
optimum occurs at point C( = 3.2, = 1.6, z = 128). Changes in revenue units (i.e., objective-function
coefficients) will change the slope of z. However, as can be seen from the figure, the optimum solution at point
C remains unchanged so long as the objective function lies between lines BF and DE.

How can we determine ranges for the coefficients of the objective function that will keep the optimum solution
unchanged at C? First, we write the objective function in the general format:

Maximize z = c1x1 + c2x2

Imagine now that line z is pivoted at C and that it can rotate clockwise and counterclockwise. The optimum
solution will remain at point C so long as z = c1x1 + c2x2 lies between the two lines x1 + 3x2 = 8 and 2x1 +
x2 = 8. This means that the ratio c1/c2 can vary between 1/3 and 2/1, which yields the following optimality
range: 1/3 ≤ c1/c2 ≤ 2/1.

Algebraic Sensitivity Analysis—Changes in the Right-Hand Side


This section extends the analysis to the general LP model. A numeric example (the TOYCO model) will be
used to facilitate the presentation.

Example (TOYCO model)

TOYCO uses three operations to assemble three types of toys—trains, trucks, and cars. The daily available
times for the three operations are 430, 460, and 420 mins, respectively, and the revenues per unit of toy train,
truck, and car are $3, $2, and $5, respectively. The assembly times per train at the three operations are 1, 3, and
1 mins, respectively. The corresponding times per truck and per car are (2, 0, 4) and (1, 2, 0) mins (a zero time
indicates that the operation is not used).

Letting x1, x2, and x3 represent the daily number of units assembled of trains, trucks, and cars, respectively, the
associated LP model is given as:

Maximize z = 3x1 + 2x2 + 5x3
Subject to:
x1 + 2x2 + x3 ≤ 430 (operation 1)
3x1 + 2x3 ≤ 460 (operation 2)
x1 + 4x2 ≤ 420 (operation 3)
x1, x2, x3 ≥ 0

Using x4, x5, and x6 as the slack variables for the constraints of operations 1, 2, and 3, respectively, the
optimum tableau is:

The solution recommends manufacturing 100 trucks and 230 cars but no trains; the associated revenue is $1350.

Determination of dual prices and feasibility ranges. We will use the TOYCO model to show how this
information is obtained from the optimal simplex tableau. Recognizing that the dual prices and their feasibility
ranges are rooted in making changes in the right-hand side of the constraints, suppose that D1, D2, and D3 are
the (positive or negative) changes made in the allotted daily manufacturing time of operations 1, 2, and 3,
respectively. The original TOYCO model can then be changed to:

Maximize z = 3x1 + 2x2 + 5x3
Subject to

x1 + 2x2 + x3 ≤ 430 + D1 (operation 1)
3x1 + 2x3 ≤ 460 + D2 (operation 2)
x1 + 4x2 ≤ 420 + D3 (operation 3)
x1, x2, x3 ≥ 0

To express the optimum simplex tableau of the modified problem in terms of the changes D1, D2, and D3, we
first rewrite the starting tableau using the new right-hand sides, 430 + D1, 460 + D2, and 420 + D3.

The two shaded areas are identical. Hence, if we repeat the same simplex iterations (with the same row
operations) as in the original model, the columns in the two highlighted area will also be identical in the optimal
tableau—that is,

The new optimum tableau provides the following optimum solution:

We now use this solution to determine the dual prices and the feasibility ranges.

Dual prices: The value of the objective function can be written as

The equation shows that:


1. A unit change in operation 1 capacity (D1 = ±1 min) changes z by $1.
2. A unit change in operation 2 capacity (D2 = ±1 min) changes z by $2.
3. A unit change in operation 3 capacity (D3 = ±1 min) changes z by $0.

This means that, by definition, the corresponding dual prices are 1, 2, and 0 ($/min) for operations 1, 2, and 3,
respectively. The coefficients of D1, D2, and D3 in the optimal z-row are exactly those of the slack variables x4,
x5, and x6. This means that the dual prices equal the coefficients of the slack variables in the optimal z-row.
There is no ambiguity as to which coefficient applies to which resource because each slack variable is uniquely
identified with a constraint.

Feasibility range: The current solution remains feasible if all the basic variables remain nonnegative—that is,

Simultaneous changes D1, D2, and D3 that satisfy these inequalities will keep the solution feasible. The new
optimum solution can be found by substituting out the values of D1, D2, and D3.
To illustrate the use of these conditions, suppose that the manufacturing times available for operations 1, 2, and
3 are 480, 440, and 400 mins, respectively.
Then, D1 = 480 - 430 = 50, D2 = 440 - 460 = -20, and D3 = 400 - 420 = -20. Substituting in the feasibility
conditions, we get

The calculations show that x6 < 0; hence, the current solution does not remain feasible. Alternatively, if the
changes in the resources are such that D1 = -30, D2 = -12, and D3 = 10, then

The new (optimal) feasible solution is x2 = 88, x3 = 224, and x6 = 78, with z = 3(0) + 2(88) + 5(224) = $1296.
Notice that the optimum objective value can also be computed using the dual prices as z = 1350 + 1(-30) + 2(-
12) + 0(10) = $1296.
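These computations can be reproduced from the inverse of the optimal basis. In the sketch below (illustrative only), the basis (x2, x3, x6) is taken from the optimal solution quoted above, B is assembled from the original constraint columns of x2, x3, and the operation-3 slack, and the new basic values are obtained by solving B(new values) = b + D:

import numpy as np

# Constraint columns of x2, x3 and the operation-3 slack x6 (the optimal basis).
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [4.0, 0.0, 1.0]])
b = np.array([430.0, 460.0, 420.0])

def new_basic_values(D):
    """Values of the basic variables (x2, x3, x6) after the right-hand-side change D."""
    return np.linalg.solve(B, b + np.asarray(D, dtype=float))

print(new_basic_values([50, -20, -20]))   # x6 turns negative => the solution is no longer feasible
print(new_basic_values([-30, -12, 10]))   # [ 88. 224.  78.] => the solution remains feasible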

The given conditions can produce the individual feasibility ranges associated with changing the resources one
at a time. For example, a change in operation 1 time only means that D2 = D3 = 0. The simultaneous conditions
thus reduce to


This means that the dual price for operation 1 is valid in the feasibility range -200 ≤ D1 ≤ 10. We can show in a
similar manner that the feasibility ranges for operations 2 and 3 are -20 ≤ D2 ≤ 400 and -20 ≤ D3 ≤ ∞,
respectively (verify!).

It is important to notice that the dual prices will remain applicable for any simultaneous changes that keep the
solution feasible, even if the changes violate the individual ranges. For example, the changes D1 = 30, D2 = -12,
and D3 = 100 will keep the solution feasible even though D1 = 30 violates the feasibility range -200 ≤ D1 ≤ 10,
as the following computations show:

This means that the dual prices will remain applicable, and we can compute the new optimum objective value
from the dual prices as z = 1350 + 1(30) + 2(-12) + 0(100) = $1356.

Chapter 4
Duality in Linear Programming Problems

4.1 Definition of Dual Problem


The dual problem is defined systematically from the primal (or original) LP model. The two problems are
closely related, in the sense that the optimal solution of one problem automatically provides the optimal
solution to the other. As such, it may be advantageous computationally in some cases to determine the primal
solution by solving the dual. But that computational advantage may be minor when compared with what the rich
primal–dual theory offers.

Our definition of the dual problem requires expressing the primal problem in the equation form, a format
consistent with the simplex starting tableau (all the constraints are equations with nonnegative right-hand sides,
and all the variables are nonnegative). Hence, any results obtained from the primal optimal solution apply
unambiguously to the associated dual problem.

The following is a summary of how the dual is constructed from the (equation form) primal:

1. A dual variable is assigned to each primal (equation) constraint and a dual constraint is assigned to each
primal variable.

2. The right-hand sides of the primal constraints provide the coefficients of the dual objective function.

3. The dual constraint corresponding to a primal variable is constructed by transposing the primal variable
column into a row with (i) the primal objective coefficient becoming the dual right-hand side and (ii) the
remaining constraint coefficients comprising the dual left-hand side coefficients.

4. The sense of optimization, direction of inequalities, and the signs of the variables in the dual are governed by
the rules in Table 4.1

The following examples demonstrate the use of the rules in Table 4.1. The examples also show that our
definition incorporates all forms of the primal automatically.

Example 1

Primal:
Maximize z = 5x1 + 12x2 + 4x3
Subject to
x1 + 2x2 + x3 ≤ 10
2x1 - x2 + 3x3 = 8
x1, x2, x3 ≥ 0

Primal in equation form (dual variables y1 and y2 assigned to the two constraints):
Maximize z = 5x1 + 12x2 + 4x3 + 0x4
Subject to
x1 + 2x2 + x3 + x4 = 10      (y1)
2x1 - x2 + 3x3 + 0x4 = 8     (y2)
x1, x2, x3, x4 ≥ 0

Dual problem:
Minimize w = 10y1 + 8y2
Subject to
y1 + 2y2 ≥ 5
2y1 - y2 ≥ 12
y1 + 3y2 ≥ 4
y1 ≥ 0, y2 unrestricted

Example 2

Primal:
Minimize z = 15x1 + 12x2
Subject to
x1 + 2x2 ≥ 3
2x1 - 4x2 ≤ 5
x1, x2 ≥ 0

Primal in equation form (dual variables y1 and y2 assigned to the two constraints):
Minimize z = 15x1 + 12x2 + 0x3 + 0x4
Subject to
x1 + 2x2 – x3 + 0x4 = 3      (y1)
2x1 - 4x2 + 0x3 + x4 = 5     (y2)
x1, x2, x3, x4 ≥ 0

Dual problem:
Maximize w = 3y1 + 5y2
Subject to
y1 + 2y2 ≤ 15
2y1 - 4y2 ≤ 12
y1 ≥ 0, y2 ≤ 0
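As a numerical check of the primal-dual relationship developed in the next section, the following sketch (illustrative only) solves the primal and the dual of Example 1 with scipy and confirms that the two optimal objective values coincide. The "≥" dual constraints are rewritten as "≤" by multiplying through by -1, and y2 is left unrestricted.

from scipy.optimize import linprog

# Primal of Example 1: maximize 5x1 + 12x2 + 4x3  <=>  minimize the negated objective.
primal = linprog(c=[-5, -12, -4],
                 A_ub=[[1, 2, 1]], b_ub=[10],
                 A_eq=[[2, -1, 3]], b_eq=[8], method="highs")

# Dual of Example 1: minimize 10y1 + 8y2 with y1 >= 0 and y2 unrestricted.
dual = linprog(c=[10, 8],
               A_ub=[[-1, -2], [-2, 1], [-1, -3]], b_ub=[-5, -12, -4],
               bounds=[(0, None), (None, None)], method="highs")

print(-primal.fun, dual.fun)   # the two optimal objective values are equal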
4.2 Primal–Dual Relationships
Changes made in the data of an LP model can affect the optimality and/or the feasibility of the current optimum
solution. This section introduces a number of primal–dual relationships that can be used to recompute the
elements of the optimal simplex tableau. These relationships form the basis for the economic interpretation of
the LP model and for post-optimality analysis.

4.2.1 Review of simple matrix operations


The simplex tableau can be generated by three elementary matrix operations: (row vector) * (matrix), (matrix) *
(column vector), and (scalar) * (matrix). These operations are summarized here for convenience. First, we
introduce some matrix definitions:

1. A matrix, A, of size (m * n) is a rectangular array of elements with m rows and n columns.


2. A row vector, V, of size m is a (1 * m) matrix.
3. A column vector, p, of size n is an (n * 1) matrix.

These definitions can be represented mathematically as

V = (v1, v2, …, vm), A = (aij) with m rows and n columns, and P = (p1, p2, …, pn) written as a column.

1. (row vector * matrix, VA). The operation is valid only if the size of the row vector V and the number
of rows of A are equal. For example,

(11, 22, 33) * [[1, 2], [3, 4], [5, 6]] = (11*1 + 22*3 + 33*5, 11*2 + 22*4 + 33*6) = (242, 308)

2. (matrix * column vector, AP). The operation is valid only if the number of columns of A and the size
of the column vector P are equal.

3. (scalar * matrix, αA). Given the scalar (or constant) quantity α, the multiplication operation αA results in a
matrix of the same size as matrix A. For example, given α = 10, the operation (10)A multiplies every element
of A by 10. All three operations are illustrated numerically in the sketch below.
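All three operations are available directly in NumPy. The row-vector example below reproduces the (242, 308) result shown above; the column vector P uses assumed illustrative numbers:

import numpy as np

V = np.array([11, 22, 33])               # row vector of size 3
A = np.array([[1, 2], [3, 4], [5, 6]])   # 3 x 2 matrix
P = np.array([2, 3])                     # column vector of size 2 (assumed data)

print(V @ A)    # row vector * matrix    -> [242 308]
print(A @ P)    # matrix * column vector -> [ 8 18 28]
print(10 * A)   # scalar * matrix        -> every element of A multiplied by 10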

4.2.2 Simplex tableau layout


The simplex tableau in Chapter 3 is the basis for the presentation in this chapter. Figure 4.1 represents the
starting and general simplex tableaus schematically. In the starting tableau, the constraint coefficients under the
starting variables form an identity matrix (all main-diagonal elements are 1, and all off-diagonal elements are
zero). With this arrangement, subsequent iterations of the simplex tableau generated by the Gauss–Jordan row
operations (see Chapter 3) modify the elements of the identity matrix to produce what is known as the inverse
matrix. As we will see in the remainder of this chapter, the inverse matrix is key to computing all the elements
of the associated simplex tableau.

Remarks. The inverse matrix in the general tableau has its roots in the starting tableau constraint columns.
That means that the inverse at any iteration can be computed (from scratch) using the original constraint
columns of the LP problem (as will be demonstrated in the remarks following Example 4.2-1). This is an
important relationship that has been exploited to control round-off errors in the simplex algorithm
computations.

4.2.3 Optimal Dual solution

The primal and dual solutions are closely related, in the sense that the optimal solution of either problem
directly yields the optimal solution to the other, as is explained subsequently. Thus, in an LP model in which the
number of variables is considerably smaller than the number of constraints, computational savings may be
realized by solving the dual because the amount of computations associated with determining the inverse matrix
primarily increases with the number of constraints. Notice that the rule addresses only the amount of
computations in each iteration but says nothing about the total number of iterations needed to solve each
problem. This section provides two methods for determining the dual values.

The elements of the row vector must appear in the same order the basic variables are listed in the Basic-column
of the simplex tableau.

4.2.4 Simplex tableau Computations
This section shows how any iteration of the simplex tableau can be generated from the original data of the
problem, the inverse associated with the iteration, and the dual problem. Using the layout of the simplex tableau
in Figure 4.1, we can divide the computations into two types:

1. Constraint columns (left-hand and right-hand sides).


2. Objective z-row.

4.3 Additional Simplex Algorithms
Chapter 3 presents the (primal) simplex algorithm that starts feasible and continues to be feasible until the
optimum is reached. This section presents two additional algorithms: The dual simplex starts infeasible (but
better than optimal) and remains infeasible until feasibility is restored, and the (author’s) generalized simplex
combines the primal and dual simplex methods, starting both non-optimal and infeasible.

4.3.1 Dual Simplex Algorithm

The dual simplex method starts with a better than optimal and infeasible basic solution. The optimality and
feasibility conditions are designed to preserve the optimality of the basic solutions as the solution move toward
feasibility.

Dual feasibility condition. The leaving variable, xr, is the basic variable having the most negative value (ties
are broken arbitrarily). If all the basic variables are nonnegative, the algorithm ends.

Dual optimality condition. Given that xr is the leaving variable, let (zj - cj) be the reduced cost of non-basic
variable xj and αrj the constraint coefficient in the xr-row and xj-column of the tableau. The entering variable is
the non-basic variable with αrj < 0 that corresponds to

min over all non-basic xj with αrj < 0 of { |(zj - cj) / αrj| }

(Ties are broken arbitrarily.) If αrj ≥ 0 for all non-basic xj, the problem has no feasible solution. To start the LP
optimal and infeasible, two requirements must be met:

1. The objective function must satisfy the optimality condition of the regular simplex method (Chapter 3).
2. All the constraints must be of the type (≤).

Inequalities of the type (≥) are converted to (≤) by multiplying both sides of the inequality by -1. If the LP
includes (=) constraints, each equation can be replaced by two inequalities. For example, x1 + x2 = 1 is
equivalent to x1 + x2 ≤ 1, x1 + x2 ≥ 1 or x1 + x2 ≤ 1, –x1 – x2 ≤ -1. The starting solution is infeasible if at least
one of the right-hand sides of the inequalities is negative.
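The two selection rules can be written compactly in code. The sketch below (an illustration, not a full dual simplex implementation) assumes a tableau stored as a NumPy array in which row 0 holds the z-row coefficients and the last column holds the Solution values:

import numpy as np

def dual_simplex_pivot_choice(tableau):
    """Return (leaving_row, entering_col) according to the dual feasibility and
    dual optimality conditions, or None if all basic values are already nonnegative."""
    rhs = tableau[1:, -1]
    if np.all(rhs >= 0):
        return None                                  # feasible: the algorithm ends
    r = 1 + int(np.argmin(rhs))                      # most negative basic variable leaves
    reduced_costs = tableau[0, :-1]
    row = tableau[r, :-1]
    candidates = np.where(row < 0)[0]                # entering column needs a negative coefficient
    if candidates.size == 0:
        raise ValueError("The problem has no feasible solution.")
    ratios = np.abs(reduced_costs[candidates] / row[candidates])
    return r, int(candidates[np.argmin(ratios)])     # smallest absolute ratio enters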

4.3.2 Generalized simplex algorithm
The (primal) simplex algorithm in Chapter 3 starts feasible but non-optimal. The dual simplex (Section 4.3.1)
starts better than optimal and infeasible. What if an LP model starts both non-optimal and infeasible? Of course
we can use artificial variables and artificial constraints to secure a starting solution. But this really is not
necessary because the key idea of both the primal and dual simplex methods is that the optimum feasible
solution, when finite, always occurs at a corner point (or a basic solution). This suggests that a new simplex
algorithm (developed by this author) can be developed based on tandem use of the dual simplex and the primal
simplex methods. First, use the dual algorithm to get rid of infeasibility (without worrying about optimality).
Once feasibility is restored, the primal simplex can be used to find the optimum. Alternatively, we can first
apply the primal simplex to secure optimality (without worrying about feasibility) and then use the dual simplex
to seek feasibility.

The following tableau format of the problem shows that the starting basic solution (x4, x5, and x6) is both non-
optimal (because of non-basic x3) and infeasible (because of basic x4).

We can solve the problem without the use of any artificial variables or artificial constraints, first securing
feasibility using the dual simplex and then seeking optimality using the primal simplex. The dual simplex
selects x4 as the leaving variable. The entering variable can be any non-basic variable with a negative constraint
coefficient in the x4-row (recall that if no negative constraint coefficient exists, the problem has no feasible
solution). In the present example, x2 has a negative coefficient in the x4-row and is selected as the entering
variable. The next tableau is thus computed as

The new solution is now feasible but non-optimal, and we can use the primal simplex to determine the optimal
solution. In general, had we not restored feasibility in the preceding tableau, we would repeat the procedure as
necessary until feasibility is satisfied or until there is evidence that the problem has no feasible solution.

Chapter 5
Assignment problem

5.1 Introduction
The assignment problem can be defined as follows:
Given n facilities, n jobs, and the effectiveness of each facility for each job, the problem is to assign each
facility to one and only one job so that the measure of effectiveness is optimized. Here, optimization means
maximization or minimization. Many management problems have an assignment problem structure.

For example, the head of the department may have 6 people available for assignment and 6 jobs to fill. Here the
head may like to know which job should be assigned to which person so that all tasks can be accomplished in
the shortest time possible.

As another example, a container company may have an empty container in each of the locations 1, 2, 3, 4, 5 and
require an empty container in each of the locations 6, 7, 8, 9, 10. It would like to ascertain the assignment of
containers to the various locations so as to minimize the total distance.

The third example is a marketing setup: by making an estimate of sales performance for different salesmen
as well as for different cities, one could assign a particular salesman to a particular city with a view to
maximizing the overall sales.

5.2 Assignment Problem Structure


The structure of the assignment problem is similar to that of a transportation problem and is as follows:

The element cij represents the measure of effectiveness when person i is assigned job j. Assume that the
overall measure of effectiveness is to be minimized. The element xij represents the assignment of person i to
job j (it equals 1 if person i is assigned to job j and 0 otherwise). Since a person can be assigned only one job
and a job can be assigned to only one person, we have the following:

xi1 + xi2 + ……………. + xin = 1, where i = 1, 2, . . . , n
x1j + x2j + ……………. + xnj = 1, where j = 1, 2, . . . , n

The objective function is formulated as

Minimize c11x11 + c12x12 + ……….. + cnnxnn
xij ≥ 0
5.3 Assignment Problem Solution
The solution of the assignment problem is based on the following results:

"If a constant is added to every element of a row/column of the cost matrix of an assignment problem,
the resulting assignment problem has the same optimum solution as the original assignment problem, and
vice versa."

This result may be used in two ways to solve the assignment problem. If some cost elements of an assignment
problem are negative, we may first convert it into an equivalent assignment problem in which all the cost
elements are non-negative by adding a suitably large constant to the cost elements of the relevant row or
column. We then look for a feasible solution that has zero assignment cost after suitable constants have been
added to (or subtracted from) the cost elements of the various rows and columns. Since all the cost elements of
the reduced matrix are non-negative, such an assignment must be optimum.

On the basis of this principle a computational technique known as Hungarian Method is developed. The
Hungarian Method is discussed as follows.

Hungarian Method

Step1. Determine pi, the minimum cost element of row i in the original cost matrix, and subtract it from all the
elements of row i, i = 1, 2, 3.

Step2. For the matrix created in step 1, determine qj, the minimum cost element of column j, and subtract it
from all the elements of column j, j = 1, 2, 3.

Step3. From the matrix in step 2, attempt to find a feasible assignment among all the resulting zero entries.
3a. if such an assignment can be found, it is optimal.
3b. Else, additional calculations are needed.

The cells with underscored zero entries in step 3 provide the (feasible) optimum solution: John gets the paint
job, Karen gets to mow the lawn, and Terri gets to wash the family cars.

The total cost to Mr. Klyne is 9 + 10 + 8 = $27. This amount also will always equal (p1 + p2 + p3) + (q1 + q2 +
q3) = (9 + 9 + 8) + ( 0 + 1 + 0) = $27.
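The same optimum can be obtained with scipy's built-in routine for this problem. The 3 x 3 cost matrix below is an assumption reconstructed to be consistent with the example (rows: John, Karen, Terri; columns: mow, paint, wash); it reproduces the row minima (9, 9, 8), the column minima (0, 1, 0), and the $27 total quoted above.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Assumed cost matrix consistent with the totals quoted in the text.
cost = np.array([[15, 10,  9],    # John
                 [ 9, 15, 10],    # Karen
                 [10, 12,  8]])   # Terri

rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())   # optimal total cost = 27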

Step 3b: If no feasible zero-element assignments can be found,

(i) Draw the minimum number of horizontal and vertical lines in the last reduced matrix to cover all the zero
entries.

(ii) Select the smallest uncovered entry, subtract it from every uncovered entry, and then add it to every entry at
the intersection of two lines.

(iii) If no feasible assignment can be found among the resulting zero entries, repeat step 3a.

5.4 Simplex Explanation of the Hungarian Method

The assignment problem in which n workers are assigned to n jobs can be represented as an LP model in the
following manner: Let cij be the cost of assigning worker i to job j, and define

xij = 1 if worker i is assigned to job j, and xij = 0 otherwise.

Then the LP model is given as

Minimize z = Σi Σj cij xij

Subject to

Σj xij = 1, i = 1, 2, …, n
Σi xij = 1, j = 1, 2, …, n
xij = 0 or 1
The optimal solution of the preceding LP model remains unchanged if a constant is added to or subtracted from
any row or column of the cost matrix (cij). To prove this point, let pi and qj be constants subtracted from row i
and column j. Thus, the cost element cij is changed to c'ij = cij - pi - qj.

Now

Σi Σj c'ij xij = Σi Σj (cij - pi - qj) xij = Σi Σj cij xij - Σi pi (Σj xij) - Σj qj (Σi xij)

= Σi Σj cij xij - Σi pi - Σj qj

= Σi Σj cij xij - constant

Because the new objective function differs from the original by a constant, the optimum values of xij are the
same in both cases. The development shows that steps 1 and 2 of the Hungarian method, which call for
subtracting pi from row i and then subtracting qj from column j, produce an equivalent assignment model. In
this regard, if a feasible solution can be found among the zero entries of the cost matrix created by steps 1 and 2,

then it must be optimum (because the cost in the modified matrix cannot be less than zero). If the created zero
entries cannot yield a feasible assignment, then step 3b (dealing with the covering of the zero entries) must be
applied.
The reason (p1 + p2 + … + pn) + (q1 + q2 + … + qn) gives the optimal objective value is that it represents the
dual objective function of the assignment model.

Chapter 6
Transportation Model

6.1 Introduction
A special class of linear programming problems is the transportation problem, where the objective is to minimize
the cost of distributing a product from a number of sources (e.g., factories) to a number of destinations (e.g.,
warehouses) while satisfying both the supply limits and the demand requirements.

Because of its special structure, the transportation problem can be solved much more efficiently than by the
general simplex method, using special-purpose procedures. The model assumes that the distribution cost on a
given route is directly proportional to the number of units distributed on that route.

Generally, the transportation model can be extended to areas other than the direct transportation of a
commodity, including among others, inventory control, employment scheduling, and personnel assignment.

Example

Suppose a manufacturing company owns three factories (sources) and distributes its products to five different
retail agencies (destinations). The following table shows the capacities of the three factories, the quantities of
products required by the various retail agencies, and the cost of shipping one unit of the product from each of
the three factories to each of the five retail agencies.

Usually the above table is referred to as the transportation table, which provides the basic information regarding
the transportation problem. The quantities inside the table are the transportation costs per unit of product. The
capacities of factories 1, 2, and 3 are 50, 100, and 150, respectively. The requirements of retail agencies 1, 2, 3,
4, and 5 are 100, 60, 50, 50, and 40, respectively.

In this case, the transportation cost of one unit


From factory 1 to retail agency 1 is 1,
From factory 1 to retail agency 2 is 9,
From factory 1 to retail agency 3 is 13, and so on.

A transportation problem can be formulated as linear programming problem using variables with two
subscripts.
Let
x11 = Amount to be transported from factory 1 to retail agency 1
x12 = Amount to be transported from factory 1 to retail agency 2
……..
x35 = Amount to be transported from factory 3 to retail agency 5.

Let the transportation cost per unit be represented by C11, C12, ….., C35 that is C11=1, C12=9, and so on.
Let the capacities of the three factories be represented by a1=50, a2=100, a3=150.
Let the requirement of the retail agencies are b1=100, b2=60, b3=50, b4=50, and b5=40.

Thus, the problem can be formulated as

Minimize
C11x11 + C12x12 + …………… + C35x35
Subject to:
x11 + x12 + x13 + x14 + x15 = a1
x21 + x22 + x23 + x24 + x25 = a2
x31 + x32 + x33 + x34 + x35 = a3

x11 + x21 + x31 = b1
x12 + x22 + x32 = b2
x13 + x23 + x33 = b3
x14 + x24 + x34 = b4
x15 + x25 + x35 = b5

x11, x12, ……, x35 ≥ 0.

Thus, the problem has 8 constraints and 15 variables, so solving it directly with the general simplex method
would be quite tedious. This is the reason special computational procedures are needed for the transportation
problem. There are a variety of such procedures, which are described in the next section.

6.2 Transportation Algorithm


The steps of the transportation algorithm are exact parallels of the simplex algorithm.

Step 1.
Determine a starting basic feasible solution, and go to step 2.
Step 2.
Use the optimality condition of the simplex method to determine the entering variable from among all
the non-basic variables. If the optimality condition is satisfied, stop. Otherwise, go to step 3.
Step 3.
Use the feasibility condition of the simplex method to determine the leaving variable from among all the
current basic variables, and find the new basic solution. Return to step 2.

A general transportation model with m sources and n destinations has m + n constraint equations, one for each
source and each destination. However, because the transportation model is always balanced (sum of the supply
= sum of the demand), one of these equations is redundant. Thus, the model has m + n - 1 independent
constraint equations, which means that the starting basic solution consists of m + n -1 basic variables.

The special structure of the transportation problem allows securing a non-artificial starting basic solution using
one of three methods:

1. Northwest-corner method
2. Least-cost method
3. Vogel approximation method

The three methods differ in the "quality" of the starting basic solution they produce, in the sense that a better
starting solution yields a smaller objective value. In general, though not always, the Vogel method yields the
best starting basic solution, and the northwest-corner method yields the worst. The tradeoff is that the
northwest-corner method involves the least amount of computations.

Example illustrating the transportation model:

The supply (in truckloads) and the demand (also in truckloads), together with the unit transportation costs per
truckload on the different routes, are summarized in the transportation tableau (Table 5.16). The unit
transportation costs (shown in the northeast corner of each box) are in hundreds of dollars. The model seeks
the minimum-cost shipping schedule between silo i and mill j (i = 1, 2, 3; j = 1, 2, 3, 4).

6.2.1 Northwest-corner Method

Northwest-Corner Method. The method starts at the northwest-corner cell (route) of the tableau (variable x11).

Step 1.
Allocate as much as possible to the selected cell, and adjust the associated amounts of supply (capacity)
and demand (requirement) by subtracting the allocated amount.
Step 2.
Cross out the row or column with zero supply or demand to indicate that no further assignments can be
made in that row or column. If both a row and a column net to zero simultaneously, cross out one only,
and leave a zero supply (demand) in the uncrossed-out row (column).
Step 3.
If exactly one row or column is left uncrossed out, stop. Otherwise, move to the cell to the right if a
column has just been crossed out or below if a row has been crossed out. Go to step 1.

The arrows show the order in which the allocated amounts are generated. The starting basic solution is x11 = 5,
x12 = 10, x22 = 5, x23 = 15, x24 = 5, x34 = 10. The associated cost of the schedule is z = 5 * 10 + 10 * 2 + 5 * 7 +
15 * 9 + 5 * 20 + 10 * 18 = $520.
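The rule is easy to implement. The sketch below is illustrative; the supply, demand, and unit-cost data are assumed to match the example (only the costs used in the products above are directly visible in the text), and with these data the code reproduces the $520 starting schedule:

import numpy as np

supply = [15, 25, 10]                    # silos (assumed example data)
demand = [5, 15, 15, 15]                 # mills (assumed example data)
cost = np.array([[10,  2, 20, 11],
                 [12,  7,  9, 20],
                 [ 4, 14, 16, 18]])

def northwest_corner(supply, demand):
    s, d = list(supply), list(demand)
    alloc = np.zeros((len(s), len(d)))
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])              # allocate as much as possible
        alloc[i, j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0 and i < len(s) - 1:
            i += 1                        # row exhausted: move down
        else:
            j += 1                        # column exhausted: move right
    return alloc

x = northwest_corner(supply, demand)
print(x, (x * cost).sum())               # total cost = 520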

6.2.2 Least cost Method

The least-cost method finds a better starting solution by targeting the cheapest routes. It assigns as much as
possible to the cell with the smallest unit cost (ties are broken arbitrarily). Next, the satisfied row or column is
crossed out and the amounts of supply and demand are adjusted accordingly.

If both a row and a column are satisfied simultaneously, only one is crossed out, the same as in the northwest-
corner method. Next, select the uncrossed-out cell with the smallest unit cost and repeat the process until
exactly one row or column is left uncrossed out.

The least-cost method is applied to the above Example.

1. Cell (1, 2) has the least unit cost in the tableau (= $2). The most that can be shipped through (1, 2) is x12 =
15 truckloads, which happens to satisfy both row 1 and column 2 simultaneously. We arbitrarily cross out
column 2 and adjust the supply in row 1 to 0.

2. Cell (3, 1) has the smallest uncrossed-out unit cost (= $4). Assign x31 = 5, cross out column 1 because it
is satisfied, and adjust the supply of row 3 to 10 - 5 = 5 truckloads.

3. Continuing in the same manner, we successively assign 15 truckloads to cell (2, 3), 0 truckloads to cell (1, 4),
5 truckloads to cell (3, 4), and 10 truckloads to cell (2, 4) (verify!).

The resulting starting solution is summarized in Table 5.10. The arrows show the order in which the allocations
are made. The starting solution (consisting of six basic variables) is:

x12 = 15, x14 = 0, x23 = 15, x24 = 10, x31 = 5, x34 = 5.
The associated objective value is
z = 15 * 2 + 0 * 11 + 15 * 9 + 10 * 20 + 5 * 4 + 5 * 18 = $475 , which happens to be better than the
northwest-corner solution.

6.2.3 Vogel approximation method (VAM).


VAM is an improved version of the least-cost method that generally, but not always, produces better starting
solutions.

Step1. For each row (column), determine a penalty measure by subtracting the smallest unit cost in the row
(column) from the next smallest unit cost in the same row (column). This penalty is actually a measure of lost
opportunity one forgoes if the smallest unit cost cell is not chosen.

Step2. Identify the row or column with the largest penalty, breaking ties arbitrarily.
Allocate as much as possible to the variable with the least unit cost in the selected row or column. Adjust the
supply and demand, and cross out the satisfied row or column. If a row and a column are satisfied
simultaneously, only one of the two is crossed out, and the remaining row (column) is assigned zero supply
(demand).

Step3.
(a) If exactly one row or column with zero supply or demand remains uncrossed out, stop.
(b) If one row (column) with positive supply (demand) remains uncrossed out, determine the basic
variables in the row (column) by the least-cost method. Stop.
(c) If all the uncrossed-out rows and columns have (remaining) zero supply and demand, determine the
zero basic variables by the least-cost method.
Stop.
(d) Otherwise, go to step 1.

VAM is applied to previous Example. The following Table computes the first set of penalties. Because row 3
has the largest penalty (= 10) and cell (3, 1) has the smallest unit cost in that row, the amount 5 is assigned to
x31. Column 1 is now satisfied and must be crossed out.

Next, new penalties are recomputed as in next second Table, showing that row 1 has the highest penalty (= 9).
Hence, we assign the maximum amount possible to cell (1, 2), which yields x12 = 15 and simultaneously
satisfies both row 1 and column 2. We arbitrarily cross out column 2 and adjust the supply in row 1 to zero.

Continuing in the same manner, row 2 will produce the highest penalty (= 11), and we assign x23 = 15, which
crosses out column 3 and leaves 10 units in row 2. Only column 4 is left, and it has a positive demand of 15
units. Applying the least-cost method to that column, we successively assign x14 = 0, x34 = 5, and x24 = 10
(verify!). The associated objective value for this solution is z = 15 * 2 + 0 * 11 + 15 * 9 + 10 * 20 + 5 * 4 + 5 *
18 = $475. This solution happens to have the same objective value as in the least-cost method.

6.3 Iterative Computations of the Transportation Algorithm


After determining the starting solution (using one of the methods), we use the following algorithm to determine
the optimum solution:

Step1. Use the simplex optimality condition to determine the entering variable. If the optimality condition is
satisfied, stop. Otherwise, go to step 2.

Step2. Determine the leaving variable using the simplex feasibility condition. Change the basis, and return to
step 1.

The optimality and feasibility conditions do not involve the familiar row operations used in the simplex method.
Instead, the special structure of the transportation model allows simpler (hand) computations.

The determination of the entering variable from among the current non-basic variables (those that are not part of
the starting basic solution) is done by computing the non-basic coefficients in the z-row, using the method of
multipliers.

In the method of multipliers, we associate the multipliers ui and vj with row i and column j of the transportation
tableau. For each current basic variable xij, the multipliers ui and vj must satisfy

ui + vj = cij, for each basic xij

The starting solution has six basic variables, which leads to six equations in seven unknowns. To solve these
equations, the method of multipliers calls for setting any one of the multipliers equal to zero. We will arbitrarily
set u1 = 0, and then solve for the remaining variables as shown in the following table:

The preceding information, together with the fact that ui + vj - cij = 0 for each basic xij, is actually equivalent to
computing the z-row of the simplex tableau, as the following summary shows:

Because the transportation model minimizes cost, the entering variable is the one having the most positive
coefficient in the z-row—namely, x31 is the entering variable. All the preceding computations are usually done
directly on the transportation tableau as shown in the table, meaning that it is not necessary to write the (u, v)-
equations explicitly. Instead, we start by setting u1 = 0. Then we can compute the v-values of all the columns
that have basic variables in row 1—namely, v1 and v2. Next, we compute u2 based on the (u, v)-equation of
basic x22. Now, given u2, we can compute v3 and v4. Finally, we determine u3 using the basic equation of x34.
The next step is to evaluate the non-basic variables by computing ui + vj - cij for each non-basic xij, as shown
in Table 5.14 in the boxed southeast corner of each cell.
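The (u, v) computation can be automated. The sketch below (illustrative; the cost data are the same assumed figures as in the northwest-corner sketch) solves ui + vj = cij over the six basic cells of the starting solution and then evaluates ui + vj - cij for every non-basic cell; the largest positive value flags x31 as the entering variable:

import numpy as np

cost = np.array([[10,  2, 20, 11],
                 [12,  7,  9, 20],
                 [ 4, 14, 16, 18]])
basic = [(0, 0), (0, 1), (1, 1), (1, 2), (1, 3), (2, 3)]   # northwest-corner basis

def multipliers(cost, basic, m=3, n=4):
    u = [None] * m
    v = [None] * n
    u[0] = 0.0                                   # arbitrarily set u1 = 0
    while any(x is None for x in u + v):
        for i, j in basic:                       # ui + vj = cij for each basic cell
            if u[i] is not None and v[j] is None:
                v[j] = cost[i, j] - u[i]
            elif v[j] is not None and u[i] is None:
                u[i] = cost[i, j] - v[j]
    return u, v

u, v = multipliers(cost, basic)
for i in range(3):
    for j in range(4):
        if (i, j) not in basic:
            print((i + 1, j + 1), u[i] + v[j] - cost[i, j])   # cell (3, 1) gives +9, the most positive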

Having identified x31 as the entering variable, we need to determine the leaving variable. Remember that if x31
enters the solution to become basic, one of the current basic variables must leave as non-basic (at zero level).

The selection of x31 as the entering variable means that shipping through this route reduces the total shipping
cost. What is the most that we can ship through the new route? Observe in Table 5.14 that if route (3, 1) ships θ
units (i.e., x31 = θ), then the maximum value of θ is determined based on two conditions:

1. Supply limits and demand requirements remain satisfied.


2. Shipments through all routes remain nonnegative.

These two conditions determine the maximum value of θ and the leaving variable in the following manner.
First, construct a closed loop that starts and ends at the entering variable cell (3, 1). The loop consists of
connected horizontal and vertical segments only (no diagonals are allowed) whose corner elements (excluding
the entering variable cell) must coincide with a current basic variable. Table shows the loop for x31. Exactly
one loop exists for a given entering variable.

Next, we assign the amount θ to the entering variable cell (3, 1). For the supply and demand limits to remain
satisfied, we must alternate between subtracting and adding the amount θ at the successive corners of the loop
as shown in the table (it is immaterial whether the loop is traced in a clockwise or counterclockwise direction).
For θ ≥ 0, the new values of the variables remain nonnegative if

x11 = 5 - θ ≥ 0

x22 = 5 - θ ≥ 0
x34 = 10 - θ ≥ 0

The corresponding maximum value of θ is 5, which occurs when both x11 and x22 reach zero level. Either x11
or x22 leaves the solution. Intuitively, though not crucial, it may be advantageous computationally to break the
tie by selecting the leaving variable with the higher unit cost. Hence, we choose x11 (with c11 = 10 as opposed
to c22 = 7) as the leaving variable.

The values of the basic variables at the corners of the closed loop are adjusted to accommodate setting
x31 = 5, as Table 5.16 shows. Because each unit shipped through route (3, 1) reduces the shipping cost by $9
(= u3 + v1 - c31), the total cost associated with the new schedule is $9 * 5 = $45 less than in the previous
schedule. Thus, the new cost is $520 - $45 = $475.

Given the new basic solution, we repeat the computation of the multipliers u and v, as the table shows. The
entering variable is x14. The closed loop shows that x14 = 10 and that the leaving variable is x24.
The new solution, shown in the table, costs $4 * 10 = $40 less than the preceding one, thus yielding the new cost
$475 - $40 = $435. The new values of ui + vj - cij are now negative for all non-basic xij. Thus, the solution in
the table is optimal.

Chapter 7
Network Model

7.1 Introduction for network model


A network consists of several destinations or jobs which are linked with one another. A manager will have
occasions to deal with some network or other. Certain problems pertaining to networks are taken up for
consideration in this unit.

A network is a series of related activities and events which result in an end product or service. The activities
shall follow a prescribed sequence. For example, while constructing a house, laying the foundation should take
place before the construction of walls. Fitting water tapes will be done towards the completion of the
construction. Such a sequence cannot be altered.

Network Definitions
A network consists of a set of nodes linked by arcs (or branches). The notation for describing a network is (N,
A), where N is the set of nodes and A is the set of arcs. As an illustration, the network in the following figure is
described as:
N = {1, 2, 3, 4, 5}
A = {(1, 2), (1, 3), (2, 3), (2, 5), (3, 4), (3, 5), (4, 2), (4, 5)}

Associated with each network is a flow (e.g., oil products flow in a pipeline and automobile traffic flows in
highways). In general, the flow in a network is limited by the capacity of its arcs, which may be finite or
infinite.

An arc is said to be directed or oriented if it allows positive flow in one direction and zero flow in the opposite
direction. A directed network has all directed arcs.

A path is a sequence of distinct arcs that join two nodes through other nodes regardless of the direction of flow
in each arc. A path forms a cycle or a loop if it connects a node to itself through other nodes. For example, in
the previous figure, the arcs (2, 3), (3, 4), and (4, 2) form a cycle.

A connected network is such that every two distinct nodes are linked by at least one path. The network in
Figure 6.1 demonstrates this type of network.

A tree is a cycle-free connected network comprised of a subset of all the nodes, and a spanning tree is a tree that
links all the nodes of the network. The next figure provides examples of a tree and a spanning tree.

Key Concepts

Certain key concepts pertaining to a project network are described below:

1. Activity
An activity is a unit of work. A project consists of several activities. An activity takes time. It is represented by
an arrow in a diagram of the network. For example, an activity in house construction can be flooring. This is
represented as follows:

Construction of a house involves various activities. Flooring is an activity in this project. We can say that a
project is completed only when all the activities in the project are completed.

2. Event
It is the beginning or the end of an activity. Events are represented by circles in a project network diagram.
The events in a network are called the nodes.

Starting a punching machine is an activity. Stopping the punching machine is another activity.

3. Predecessor Event
The event just before another event is called the predecessor event.

4. Successor Event
The event just following another event is called the successor event.

Consider the following example:

[Diagram: 1 → 2; 2 → 3, 4, 5; 3, 4, 5 → 6]

In this diagram, event 1 is predecessor for the event 2.


Event 2 is successor to event 1.
Event 2 is predecessor for the events 3, 4 and 5.
Event 4 is predecessor for the event 6.
Event 6 is successor to events 3, 4 and 5.

5. Network
A network is a series of related activities and events which result in an end product or service.
The activities shall follow a prescribed sequence. For example, while constructing a house, laying the
foundation should take place before the construction of walls. Fitting water tapes will be done towards the
completion of the construction. Such a sequence cannot be altered.

6. Dummy Activity
A dummy activity is an activity which does not consume any time. Sometimes, it may be necessary to
introduce a dummy activity in order to provide connectivity to a network or for the preservation of the
logical sequence of the nodes and edges.

7. Construction of a Project Network


A project network consists of a finite number of events and activities arranged in a specified
sequence. There shall be a start event and an end event (or stop event), and all the other events shall lie
between the start and the end events. The activities are marked by directed arrows; an activity takes the
project from one event to another event.

An event takes place at a point of time whereas an activity takes place from one point of time to another
point of time.

7.2 How to construct a network model

Construct the network diagram for a project with the following activities:

Activity            Name of      Immediate
(event ––> event)   activity     predecessor activity
1––>2               A            ––
1––>3               B            ––
1––>4               C            ––
2––>5               D            A
3––>6               E            B
4––>6               F            C
5––>6               G            D

Solution
The start event is node 1.

The activities A, B, C start from node 1 and none of them has a predecessor activity. A joins nodes 1 and 2; B joins
nodes 1 and 3; C joins nodes 1 and 4. So we get the following:

[Partial diagram: A: 1 ––> 2, B: 1 ––> 3, C: 1 ––> 4]

This is a part of the network diagram that is being constructed.


Next, activity D has A as the predecessor activity. D joins nodes 2 and 5, giving the partial path 1 ––A––> 2 ––D––> 5.

Next, activity E has B as the predecessor activity. E joins nodes 3 and 6, giving 1 ––B––> 3 ––E––> 6.

Similarly, activity F has C as the predecessor activity and joins nodes 4 and 6 (see the table above).

Next, activity G has D as the predecessor activity. G joins nodes 5 and 6. Thus we obtain 2 ––D––> 5 ––G––> 6.

Since activities E, F, G terminate in node 6, we get

[Partial diagram: E: 3 ––> 6, F: 4 ––> 6, G: 5 ––> 6, all terminating at node 6]

6 is the end event.


Combining all the pieces together, the following network diagram is obtained for the given project:

[Project network diagram: start event 1; A: 1 ––> 2, B: 1 ––> 3, C: 1 ––> 4, D: 2 ––> 5, E: 3 ––> 6, F: 4 ––> 6, G: 5 ––> 6; end event 6]

(Bridges of Königsberg)

The Prussian city of Königsberg (now Kaliningrad in Russia) was founded in 1254 on the banks of the river
Pregel, with seven bridges connecting its four sections (labeled A, B, C, and D) as shown in the first following
Figure. A question was raised as to whether a round-trip could be constructed to visit all four sections of the
city, crossing each bridge exactly once. A section could be visited multiple times, if necessary.

In the mid-eighteenth century, the famed mathematician Leonhard Euler developed a special "path
construction" argument to prove that it was impossible to construct such a trip. Later, in the early nineteenth
century, the same problem was solved by representing the situation as a network with nodes representing the
sections and (distinct) arcs representing the bridges, as shown in the second following Figure.

7.3 Minimal Spanning Tree Algorithm
The minimal spanning tree links the nodes of a network using the smallest total length of connecting
branches. A typical application occurs in paving roads that link towns, either directly or by passing through
other towns. The minimal spanning tree solution provides the most economical design of the road system.

Let N = {1, 2, …, n} be the set of nodes of the network and define

C_k = set of nodes that have been permanently connected at iteration k
C̄_k = set of nodes as yet to be connected permanently after iteration k

The following steps describe the minimal spanning tree algorithm:

Step 0. Set C_0 = ∅ and C̄_0 = N.

Step 1. Start with any node i in the unconnected set C̄_0 and set C_1 = {i}, rendering
C̄_1 = N - {i}. Set k = 2.

General step k.
Select a node, j*, in the unconnected set C̄_(k-1) that yields the shortest arc to a node in the connected set
C_(k-1). Link j* permanently to C_(k-1) and remove it from C̄_(k-1) to obtain C_k and C̄_k, respectively. Stop if C̄_k is
empty; else, set k = k + 1 and repeat the step.
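
The steps above are, in essence, Prim's algorithm and can be sketched in a few lines of Python. The arc lengths below are illustrative assumptions only (chosen so that the total comes out to the 16 miles quoted later), not necessarily the data of the figure.

import math

nodes = {1, 2, 3, 4, 5, 6}
dist = {(1, 2): 1, (1, 3): 5, (2, 3): 3, (2, 5): 8,   # assumed symmetric lengths
        (3, 4): 4, (3, 5): 6, (4, 6): 3, (5, 6): 5}

def d(i, j):
    return dist.get((i, j), dist.get((j, i), math.inf))

C = {1}                        # connected set (start arbitrarily at node 1)
C_bar = nodes - C              # unconnected set
tree, total = [], 0
while C_bar:
    # pick the shortest arc linking a connected node to an unconnected node
    i, j = min(((i, j) for i in C for j in C_bar), key=lambda arc: d(*arc))
    tree.append((i, j))
    total += d(i, j)
    C.add(j)
    C_bar.remove(j)

print(tree, total)             # total = 16 with the assumed data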

The algorithm starts at node 1 (actually, any other node can be a starting point), which gives C_1 = {1} and
C̄_1 = {2, 3, 4, 5, 6}. The iterations of the algorithm are summarized in the next figure. The thin arcs provide all the
candidate links between the connected set C and the unconnected set C̄. The thick arcs are the permanent links of the
connected set C, and the dashed arc is the new (permanent) link added at each iteration. For example, in iteration 1,
branch (1, 2) is the shortest link (= 1 mile) among all the candidate branches from node 1 to nodes 2, 3, 4, and 5 in the
unconnected set C̄_1. Hence, link (1, 2) is made permanent and j* = 2, which yields C_2 = {1, 2}, C̄_2 = {3, 4, 5, 6}.

The solution is given by the minimal spanning tree shown in iteration 6 of the next figure.
The resulting minimum length of cable needed to provide the desired cable service is 1 + 3 + 4 + 3 + 5 = 16 miles.

7.4 Shortest Route Algorithm


This section presents two algorithms for solving both cyclic (i.e., containing loops) and acyclic networks:
1. Dijkstra’s algorithm
For determining the shortest routes between the source node and every other node in the network.

2. Floyd’s algorithm
For determining the shortest route between any two nodes in the network.

7.4.1 Dijkstra’s algorithm.

Let u_i be the shortest distance from source node 1 to node i, and define d_ij (≥ 0) as the length of arc (i, j). The
algorithm defines the label for an immediately succeeding node j as

[u_j, i] = [u_i + d_ij, i], d_ij ≥ 0

The label for the starting node is [0, —], indicating that the node has no predecessor.

Node labels in Dijkstra’s algorithm are of two types: temporary and permanent.
A temporary label at a node is modified if a shorter route to the node can be found.
Otherwise, the temporary status is changed to permanent.

Step 0. Label the source node (node 1) with the permanent label [0, —]. Set i = 1.

General step i.
(a) Compute the temporary labels [ + , i] for each node j with > 0, provided j is not permanently
labeled. If node j already has an existing temporary label [ k] via another node k and if + < ,
replace [ , k] with [ + , i].

(b) If all the nodes have permanent labels, stop. Otherwise, select the label [ , s] having the shortest
distance (= ) among all the temporary labels (break ties arbitrarily). Set i = r and repeat step i.

Example: The network in the figure gives the permissible routes and their lengths in miles between city 1 (node 1)
and four other cities (nodes 2 to 5). Determine the shortest routes between city 1 and each of the remaining four
cities.

Iteration 0. Assign the permanent label [0, —] to node 1.


Iteration 1. Nodes 2 and 3 can be reached from (the last permanently labeled) node 1. Thus, the list of
labeled nodes (temporary and permanent) becomes

Node 1: [0, —] (permanent)
Node 2: [0 + 100, 1] = [100, 1] (temporary)
Node 3: [0 + 30, 1] = [30, 1] (temporary)

For the two temporary labels [100, 1] and [30, 1], node 3 yields the smaller distance (u_3 = 30). Thus,
the status of node 3 is changed to permanent.

Iteration 2. Nodes 4 and 5 can be reached from node 3, and the list of labeled nodes becomes

Node 1: [0, —] (permanent)
Node 2: [100, 1] (temporary)
Node 3: [30, 1] (permanent)
Node 4: [30 + 10, 3] = [40, 3] (temporary)
Node 5: [30 + 60, 3] = [90, 3] (temporary)

Temporary label [40, 3] at node 4 is now permanent (u_4 = 40).

Iteration 3. Nodes 2 and 5 can be reached from node 4. Thus, the list of labeled nodes is updated as

Node 1: [0, —] (permanent)
Node 2: [40 + 15, 4] = [55, 4] (temporary)
Node 3: [30, 1] (permanent)
Node 4: [40, 3] (permanent)
Node 5: [90, 3] or [40 + 50, 4] = [90, 4] (temporary)

At node 2, the new label [55, 4] replaces the temporary label [100, 1] from iteration 1 because it provides a
shorter route. Also, in iteration 3, node 5 has two alternative labels with the same distance (= 90).
Temporary label [55, 4] at node 2 is now permanent (u_2 = 55).

Iteration 4. Only permanently labeled node 3 can be reached from node 2. Hence node 3 cannot be
relabeled. The new list of labels remains the same as in iteration 3 except that the label at node 2 is now
permanent. This leaves node 5 as the only temporary label. Because node 5 does not lead to other nodes, its
label becomes permanent, and the process ends.

The computations of the algorithm can be carried out directly on the network, as the next figure demonstrates.

The shortest route between node 1 and any other node in the network is determined by beginning at the desired
destination node and backtracking to the starting node using the information in the permanent labels. For
example, the following sequence determines the shortest route from node 1 to node 2:

(2) → [55, 4] → (4) → [40, 3] → (3) → [30, 1] → (1)

Thus, the desired route is 1→ 3→ 4→ 2 with a total length of 55 miles.
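
For readers who prefer code, the label-setting procedure above can be sketched in Python as follows. The arc lengths are those implied by the iterations (d_12 = 100, d_13 = 30, d_34 = 10, d_35 = 60, d_42 = 15, d_45 = 50); the length of arc (2, 3) is not revealed by the iterations, so the value 20 used here is an assumption.

import heapq

d = {(1, 2): 100, (1, 3): 30, (2, 3): 20,   # (2, 3) length is an assumed value
     (3, 4): 10, (3, 5): 60, (4, 2): 15, (4, 5): 50}

succ = {}
for (i, j), length in d.items():
    succ.setdefault(i, []).append((j, length))

def dijkstra(source):
    label = {source: (0, None)}             # node -> (distance u_j, predecessor)
    permanent = set()
    heap = [(0, source)]
    while heap:
        u_i, i = heapq.heappop(heap)
        if i in permanent:
            continue
        permanent.add(i)                    # temporary label becomes permanent
        for j, d_ij in succ.get(i, []):
            if j not in permanent and (j not in label or u_i + d_ij < label[j][0]):
                label[j] = (u_i + d_ij, i)  # shorter route found: replace label
                heapq.heappush(heap, (u_i + d_ij, j))
    return label

print(dijkstra(1)[2])                       # (55, 4): 55 miles, reached from node 4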

7.4.2 Floyd’s Algorithm

Floyd’s algorithm is more general than Dijkstra’s because it determines the shortest route between any two
nodes in the network. The algorithm represents an n-node network as a square matrix with n rows and n
columns. Entry (i, j) of the matrix gives the distance dij from node i to node j, which is finite if i is linked
directly to j, and infinite otherwise.

The idea of Floyd's algorithm is straightforward. Given three nodes i, j, and k in the next figure, with the
connecting distances shown on the three arcs, it is shorter to reach j from i passing through k if

d_ik + d_kj < d_ij

In this case, it is optimal to replace the direct route from i → j with the indirect route i → k →j. This triple
operation exchange is applied to the distance matrix using the following steps:

Step 0. Define the starting distance matrix D_0 and node sequence matrix S_0 (all diagonal elements are
blocked). Set k = 1.

General step k. Define row k and column k as the pivot row and pivot column. Apply the triple operation to
each element d_ij in D_(k-1), for all i and j. If the condition

d_ik + d_kj < d_ij, (i ≠ k, j ≠ k, and i ≠ j)

is satisfied, make the following changes:

(a) Create D_k by replacing d_ij in D_(k-1) with d_ik + d_kj.

(b) Create S_k by replacing s_ij in S_(k-1) with k.

Set k = k + 1. If k = n + 1, stop; else repeat step k.

After n steps, we can determine the shortest route between nodes i and j from the matrices D_n and S_n using the
following rules:

1. From D_n, d_ij gives the shortest distance between nodes i and j.

2. From S_n, determine the intermediate node k = s_ij that yields the route i → k → j.
If s_ik = k and s_kj = j, stop; all the intermediate nodes of the route have been found. Otherwise, repeat the
procedure between nodes i and k and between nodes k and j.

Example

The final matrices D_n and S_n contain all the information needed to determine the shortest route between any two nodes
in the network. For example, from D_n, the shortest distance from node 1 to node 5 is d_15 = 12 miles. To
determine the associated route, recall that a segment (i, j) represents a direct link only if s_ij = j. Otherwise, i and
j are linked through at least one other intermediate node. Because s_15 = 4 ≠ 5, the route is initially given as
1→4→5. Now, because s_14 = 2 ≠ 4, the segment (1, 4) is not a direct link, and 1→4 is replaced with 1→2→4,
and the route 1→4→5 now becomes 1→2→4→5. Next, because s_12 = 2, s_24 = 4, and s_45 = 5, no further
"dissecting" is needed, and 1→2→4→5 defines the shortest route.
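
The triple operation translates directly into code. The sketch below applies it to an illustrative five-node network whose shortest 1 → 5 route is 1 → 2 → 4 → 5 with length 12, matching the result quoted above; the arc lengths themselves are assumptions, not the matrices of the lost example figure.

INF = float("inf")
n = 5
arcs = {(1, 2): 3, (1, 3): 10, (2, 4): 5,   # assumed two-way distances
        (3, 4): 6, (3, 5): 15, (4, 5): 4}

# D[i][j] = current shortest distance, S[i][j] = recorded intermediate node
D = [[0 if i == j else INF for j in range(n + 1)] for i in range(n + 1)]
S = [[j for j in range(n + 1)] for i in range(n + 1)]
for (i, j), dij in arcs.items():
    D[i][j] = D[j][i] = dij

for k in range(1, n + 1):                   # pivot row/column k
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i != k and j != k and i != j and D[i][k] + D[k][j] < D[i][j]:
                D[i][j] = D[i][k] + D[k][j]
                S[i][j] = k                 # route i -> j now passes through k

def route(i, j):
    k = S[i][j]
    return [i, j] if k == j else route(i, k) + route(k, j)[1:]

print(D[1][5], route(1, 5))                 # 12 [1, 2, 4, 5]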

7.4.3 LP formulation of the shortest route problem


This section provides an LP model for the shortest-route problem. The model is general in the sense that it can
be used to find the shortest route between any two nodes in the network. In this regard, it is equivalent to
Floyd’s algorithm.

We wish to determine the shortest route between any two nodes s and t in an n-node network. The LP assumes
that one unit of flow enters the network at node s and leaves at node t.

Define

x_ij = 1 if arc (i, j) is on the shortest route, and x_ij = 0 otherwise
d_ij = length of arc (i, j)

Thus, the objective function of the linear program becomes

Minimize z = Σ (over all defined arcs (i, j)) d_ij x_ij

The constraints represent the conservation-of-flow equation at each node:

Total input flow = Total output flow

Mathematically, this translates for node j to

(External flow into j) + Σ_i x_ij = (External flow out of j) + Σ_k x_jk

where the external flow is 1 into node s, 1 out of node t, and 0 at every other node.

In the network of the example discussed above for Dijkstra's algorithm, suppose that we want to determine the
shortest route from node 1 to node 2; that is, s = 1 and t = 2. The next figure shows how the unit of flow enters
at node 1 and leaves at node 2.

We can see from the network that the flow-conservation equations yield

Node 1: 1 = x_12 + x_13
Node 2: x_12 + x_42 = x_23 + 1
Node 3: x_13 + x_23 = x_34 + x_35
Node 4: x_34 = x_42 + x_45
Node 5: x_35 + x_45 = 0

Notice that the column of x_ij has exactly one "+1" in row i and one "-1" in row j, a typical property of a network LP.
Notice also that, by examining the network, node 5 and its incoming arcs can be deleted altogether, meaning that
the node 5 constraint and the variables x_35 and x_45 can be removed from the LP. Of course, the given LP is "smart"
enough to yield x_35 = x_45 = 0 in the optimum solution.

The optimal solution is

z = 55, x_13 = 1, x_34 = 1, x_42 = 1 (all other x_ij = 0)

This solution gives the shortest route from node 1 to node 2 as 1 → 3 → 4 → 2, and the
associated distance is z = 55 (miles).
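
The LP above can be handed directly to a linear programming solver. The following sketch sets it up with scipy.optimize.linprog; the length of arc (2, 3) is not stated in the text, so the value 20 used here is an assumption (it does not change the optimum).

from scipy.optimize import linprog

arcs = [(1, 2, 100), (1, 3, 30), (2, 3, 20), (3, 4, 10),   # (2, 3) length assumed
        (3, 5, 60), (4, 2, 15), (4, 5, 50)]
nodes, s, t = [1, 2, 3, 4, 5], 1, 2

c = [dij for (_, _, dij) in arcs]                 # minimize sum of d_ij * x_ij
A_eq = [[0] * len(arcs) for _ in nodes]           # flow out - flow in = b_j
b_eq = [1 if node == s else -1 if node == t else 0 for node in nodes]
for col, (i, j, _) in enumerate(arcs):
    A_eq[nodes.index(i)][col] += 1                # arc leaves node i
    A_eq[nodes.index(j)][col] -= 1                # arc enters node j

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(arcs))
print(res.fun)                                                # 55.0
print([arcs[k][:2] for k, x in enumerate(res.x) if x > 0.5])  # [(1, 3), (3, 4), (4, 2)]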

7.5 Maximum Flow Model

Consider a network of pipelines that transports crude oil from oil wells to refineries. Intermediate booster and
pumping stations are installed at appropriate design distances to move the crude in the network. Each pipe
segment has a finite discharge rate (or capacity) of crude flow. A pipe segment may be uni- or bidirectional,

depending on its design. The next Figure demonstrates a typical pipeline network. The goal is to determine the
maximum flow capacity of the network.

The solution of the proposed problem requires adding a single source and a single sink using unidirectional
infinite-capacity arcs, as shown by the dashed arcs. For arc (i, j), the notation (C_ij, C_ji) gives the flow capacities in
the two directions i → j and j → i, respectively. To eliminate ambiguity, we place C_ij next to node i and C_ji next to
node j, as shown in the figure.

Enumeration of cuts

A cut defines a set of arcs whose removal from the network disrupts flow between the source and sink nodes.
The cut capacity equals the sum of the capacities of its set of arcs. Among all possible cuts in the network, the
cut with the smallest capacity is the bottleneck that determines the maximum flow in the network.

Consider the network in the following Figure. The bidirectional capacities are shown on the respective arcs
using the convention in Figure. For example, for arc (3, 4), the flow limit is 10 units from 3 to 4 and 5 units
from 4 to 3. Figure illustrates three cuts with the following capacities:

The only information from the three cuts is that the maximum flow in the network cannot exceed 60 units. To
determine the maximum flow, it is necessary to enumerate all the cuts, a difficult task for the general network.
Thus, the need for an efficient algorithm is imperative.

7.5.1 Maximum Flow Algorithm


The maximal flow algorithm is based on finding breakthrough paths with positive flow between the source
and sink nodes. Each path commits part or all of the capacities of its arcs to the total flow in the network.

Consider arc (i, j) with the bidirectional (design) capacities (C_ij, C_ji). As portions of these capacities are
committed to the flow in the arc, the residuals (or unused capacities) of the arc are updated. We use the
notation (c_ij, c_ji) to represent these residuals.

For a node j that receives flow from node i, we attach a label [a_j, i], where a_j is the flow from node i to node j.

Step 1. For all arcs (i, j), set the residual capacity equal to the design capacity, that is, (c_ij, c_ji) = (C_ij, C_ji).
Let a_1 = ∞, and label source node 1 with [∞, —]. Set i = 1, and go to step 2.

Step 2. Determine S_i, the set of unlabeled nodes j that can be reached directly from node i
by arcs with positive residuals (i.e., c_ij > 0 for all j in S_i). If S_i ≠ ∅, go to step 3. Otherwise, a
partial path is dead-ended at node i; go to step 4.

Step 3. Determine k in S_i such that

c_ik = max {c_ij}, j in S_i

Set a_k = c_ik and label node k with [a_k, i]. If k = n, the sink node has been labeled and a
breakthrough path is found; go to step 5. Otherwise, set i = k, and go to step 2.

Step 4. (Backtracking). If i = 1, no breakthrough is possible; go to step 6. Otherwise, let r be
the node (on the partial path) that was labeled immediately before the current node i, and remove node i
from the set of nodes adjacent to r. Set i = r, and go to step 2.

Step 5. (Determination of residuals). Let N_p = (1, k_1, k_2, …, n) define the nodes of the pth
breakthrough path from source node 1 to sink node n. Then the maximum flow along the path is
computed as

f_p = min {a_1, a_k1, a_k2, …, a_n}

The residual capacity of each arc along the breakthrough path is decreased by f_p in the direction of
the flow and increased by f_p in the reverse direction; that is, for nodes i and j on the path, the residual
flow is changed from the current (c_ij, c_ji) to
(a) (c_ij - f_p, c_ji + f_p) if the flow is from i to j
(b) (c_ij + f_p, c_ji - f_p) if the flow is from j to i

Step 6. (Solution)
(a) Given that m breakthrough paths have been determined, the maximal flow in the network is

F = f_1 + f_2 + … + f_m

(b) Using the (initial) design capacities and the final residuals of arc (i, j), (C_ij, C_ji) and (c_ij, c_ji),
respectively, the optimal flow in arc (i, j) is determined by computing (α, β) = (C_ij - c_ij, C_ji - c_ji).
If α > 0, the optimal flow from i to j is α. Otherwise, if β > 0, the optimal flow from j
to i is β. (It is impossible to have both α and β positive.)

The backtracking process of step 4 is invoked when the algorithm dead-ends at an intermediate
node. The flow adjustment in step 5 can be explained via the simple flow network of the next figure,
in which network (a) gives the first breakthrough path.
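
The breakthrough-path procedure above is essentially the augmenting-path (Ford–Fulkerson type) method. The sketch below implements the same idea, using a breadth-first search to find each breakthrough path; the capacities at the bottom are illustrative assumptions, not the data of the lost figure.

from collections import deque

def max_flow(capacity, source, sink):
    c = dict(capacity)                    # residuals start at the design capacities
    nodes = {i for i, _ in c} | {j for _, j in c}
    for i, j in list(c):
        c.setdefault((j, i), 0)           # allow cancelling flow in the reverse direction
    total = 0
    while True:
        pred = {source: None}             # breadth-first search for a breakthrough path
        queue = deque([source])
        while queue and sink not in pred:
            i = queue.popleft()
            for j in nodes:
                if j not in pred and c.get((i, j), 0) > 0:
                    pred[j] = i
                    queue.append(j)
        if sink not in pred:              # no breakthrough: current flow is maximal
            return total
        path, j = [], sink                # recover the path and its smallest residual f_p
        while pred[j] is not None:
            path.append((pred[j], j))
            j = pred[j]
        f = min(c[i, j] for i, j in path)
        for i, j in path:                 # update residuals in both directions
            c[i, j] -= f
            c[j, i] += f
        total += f

caps = {(1, 2): 20, (1, 3): 30, (1, 4): 10, (2, 3): 40, (2, 5): 30,  # assumed capacities
        (3, 4): 10, (4, 3): 5, (3, 5): 20, (4, 5): 20}
print(max_flow(caps, 1, 5))               # 60 with these assumed capacities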

Example: Determine the maximal flow in the network shown in the following figure.

7.5.2 LP Formulation of the Maximal Flow Model

Define x_ij as the amount of flow in arc (i, j), with capacity C_ij. The objective is to determine x_ij for all i and j
so as to maximize the flow between start node s and terminal node t, subject to flow restrictions (input flow =
output flow) at all nodes except s and t.

In the maximal flow model of the figure above, s = 1 and t = 5. The following table summarizes the
associated LP with two different, but equivalent, objective functions, depending on whether we maximize the
output from start node 1 (= z_1) or the input to terminal node 5 (= z_2).
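
Since the table itself is not reproduced here, the following sketch shows how the "maximize the input to terminal node 5" form (z_2) might be set up with scipy.optimize.linprog, reusing the assumed capacities of the previous sketch.

from scipy.optimize import linprog

caps = {(1, 2): 20, (1, 3): 30, (1, 4): 10, (2, 3): 40, (2, 5): 30,  # assumed capacities
        (3, 4): 10, (4, 3): 5, (3, 5): 20, (4, 5): 20}
arcs = list(caps)
s, t = 1, 5
inner = [2, 3, 4]                                    # every node except s and t

c = [-1 if j == t else 0 for (i, j) in arcs]         # maximize z2 = flow into node t

# conservation of flow at each inner node: flow in - flow out = 0
A_eq = [[(1 if j == node else 0) - (1 if i == node else 0) for (i, j) in arcs]
        for node in inner]
b_eq = [0] * len(inner)

bounds = [(0, caps[a]) for a in arcs]                # 0 <= x_ij <= C_ij
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)                                      # 60.0 with these capacities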

7.6 Critical Path Method (CPM)
CPM is a network-based method designed to assist in the planning, scheduling, and control of projects. A project
is defined as a collection of interrelated activities, with each activity consuming time and resources.

The objective of CPM is to provide an analytic means for scheduling the activities; it aims at determining the
time required to complete the project and the important activities on which a manager must focus attention.

Assumption for CPM


In CPM, it is assumed that a precise time estimate is available for each activity.

Project completion time


From the start event to the end event, the time required to complete all the activities of the project in the specified
sequence is known as the project completion time.

Path in a project
A continuous sequence of nodes and activities, taken alternately, beginning with the start event and stopping
at the end event of a network, is called a path in the network.

Critical path and critical activities

Consider all the paths in a project, beginning with the start event and stopping at the end event. For each path,
calculate the time of execution, by adding the time for the individual activities in that path.

The path with the largest total time is called the critical path, and the activities along this path are called the
critical (or bottleneck) activities. These activities are critical because any delay in them delays the completion
of the whole project. A non-critical activity, by contrast, may be delayed up to a certain limit without affecting
the project completion time; delaying it beyond that limit will delay the whole project. Sometimes, there may
be several critical paths for a project.

A project manager shall pay special attention to critical activities.

The following figure summarizes the steps of the technique.


 First, we define the activities of the project, their precedence relationships, and their time requirements.

 Next, the precedence relationships among the activities are represented by a network.

 The third step involves specific computations to develop the time schedule for the project. During the actual
execution of the project things may not proceed as planned, as some of the activities may be expedited or
delayed. When this happens, the schedule must be revised to reflect the realities on the ground. This is the
reason for including a feedback loop between the time schedule phase and the network phase, as shown.

CPM computation
The end result in CPM is the construction of the time schedule for the project. To achieve this objective
conveniently, we carry out special computations that produce the following information:
1. Total duration needed to complete the project.
2. Classification of the activities of the project as critical and noncritical.
To carry out the necessary computations, we define an event as a point in time at which activities are terminated
and others are started. In terms of the network, an event corresponds to a node. Define
E_j = earliest occurrence time of event j

L_j = latest occurrence time of event j

D_ij = duration of activity (i, j)

The definitions of the earliest and latest occurrences of event j are specified relative to the start and completion
dates of the entire project.

The critical path calculations involve two passes: The forward pass determines the earliest occurrence times of
the events, and the backward pass calculates their latest occurrence times.

Forward Pass (Earliest Occurrence Times, E_j)

The computations start at node 1 and advance recursively to end node n.


Initial Step:
Set E_1 = 0 to indicate that the project starts at time 0.
General Step j:
Given that nodes p, q, …, and v are linked directly to node j by incoming activities (p, j), (q, j), …,
and (v, j), and that the earliest occurrence times of events (nodes) p, q, …, and v have already
been computed, then the earliest occurrence time of event j is computed as

E_j = max {E_p + D_pj, E_q + D_qj, …, E_v + D_vj}

The forward pass is complete when E_n at node n has been computed. By definition, E_j
represents the longest path (duration) to node j.

Backward Pass (Latest Occurrence Times, L_j)


Following the completion of the forward pass, the computations start at node n and advance
recursively back to node 1.

Initial Step:
Set L_n = E_n to indicate that the earliest and the latest occurrences of the last node of the project
are the same.

General Step j:
Given that nodes p, q, …, and v are linked directly to node j by outgoing activities (j, p), (j, q), …,
and (j, v), and that the latest occurrence times of events (nodes) p, q, …, and v have already
been computed, then the latest occurrence time of event j is computed as

L_j = min {L_p - D_jp, L_q - D_jq, …, L_v - D_jv}

The backward pass is complete when L_1 at node 1 has been computed. At this point, L_1 = E_1 (= 0).

Based on the preceding calculations, an activity (i, j) will be critical if it satisfies 3 conditions:

1. L_i = E_i
2. L_j = E_j
3. E_j - E_i = L_j - L_i = D_ij

The three conditions state that the earliest and latest occurrence times of end nodes i and j are
equal and the duration fits "tightly" in the specified time span. An activity that does not
satisfy all three conditions is thus noncritical.

By definition, the critical activities of a network must constitute an uninterrupted path that spans
the entire network from start to finish.
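
The forward and backward passes are easy to mechanize. The following sketch applies them to durations read off the worked figures of Problem 1 below; the duration of activity (4, 6) is taken as 1 day, an assumption consistent with the backward-pass figures.

# Durations D_ij read from Problem 1; D[(4, 6)] = 1 is an assumed value.
D = {(1, 2): 5, (1, 3): 6, (2, 3): 3, (2, 4): 8, (3, 5): 2,
     (3, 6): 11, (4, 5): 0, (4, 6): 1, (5, 6): 12}
nodes = sorted({n for arc in D for n in arc})   # every arc (i, j) here has i < j

E = {nodes[0]: 0}                               # forward pass: earliest times E_j
for j in nodes[1:]:
    E[j] = max(E[p] + D[p, q] for (p, q) in D if q == j)

L = {nodes[-1]: E[nodes[-1]]}                   # backward pass: latest times L_j
for i in reversed(nodes[:-1]):
    L[i] = min(L[q] - D[p, q] for (p, q) in D if p == i)

critical = [(i, j) for (i, j) in D
            if E[i] == L[i] and E[j] == L[j] and E[j] - E[i] == D[i, j]]
print(E[nodes[-1]], critical)   # 25 [(1, 2), (2, 4), (4, 5), (5, 6)]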

Problem 1:
Determine the critical path for the network in the following figure, which shows the forward and backward
calculations. All durations are in days.

Solution

Forward pass
Node 1. Set E_1 = 0
Node 2. E_2 = E_1 + D_12 = 0 + 5 = 5
Node 3. E_3 = max {E_1 + D_13, E_2 + D_23} = max {0 + 6, 5 + 3} = 8
Node 4. E_4 = E_2 + D_24 = 5 + 8 = 13
Node 5. E_5 = max {E_3 + D_35, E_4 + D_45} = max {8 + 2, 13 + 0} = 13
Node 6. E_6 = max {E_3 + D_36, E_4 + D_46, E_5 + D_56} = max {8 + 11, 13 + 1, 13 + 12} = 25
The computation shows that the project can be completed in 25 days.

Backward pass
Node 6. Set L_6 = E_6 = 25

Node 5. L_5 = L_6 - D_56 = 25 - 12 = 13

Node 4. L_4 = min {L_6 - D_46, L_5 - D_45} = min {25 - 1, 13 - 0} = 13

Node 3. L_3 = min {L_6 - D_36, L_5 - D_35} = min {25 - 11, 13 - 2} = 11

Node 2. L_2 = min {L_4 - D_24, L_3 - D_23} = min {13 - 8, 11 - 3} = 5

Node 1. L_1 = min {L_3 - D_13, L_2 - D_12} = min {11 - 6, 5 - 5} = 0

Correct computations will always end with L_1 = E_1 = 0.
The forward and backward pass computations can be made directly on the network as shown in
the figure. Applying the rules for determining the critical activities, the critical path is

1––>2––>4––>5––>6

which, as should be expected, spans the network from start (node 1) to finish (node 6). The
sum of the durations of the critical activities [(1, 2), (2, 4), (4, 5), and (5, 6)] equals the duration
of the project (= 25 days). Observe that activity (4, 6) satisfies the first two conditions for a
critical activity (E_4 = L_4 = 13 and E_6 = L_6 = 25) but not the third (E_6 - E_4 = 25 - 13 = 12,
which is not equal to D_46 = 1). Hence, the activity is noncritical.

CPM problems can also be solved in another way, as shown in the next problems.

Problem 2. The following details are available regarding a project. Determine the critical path, the
critical activities and the project completion time.
Activity    Predecessor activity    Duration (weeks)
A           ––                      3
B           A                       5
C           A                       7
D           B                       10
E           C                       5
F           D, E                    4

Solution

First let us construct the network diagram for the given project. We mark the time estimates along the arrows
representing the activities. We obtain the following diagram:

Consider the paths, beginning with the start node and stopping with the end node. There are two such paths for
the given project. They are as follows:

Path (I): A ––> B ––> D ––> F, with a time of 3 + 5 + 10 + 4 = 22 weeks.

Path (II): A ––> C ––> E ––> F, with a time of 3 + 7 + 5 + 4 = 19 weeks.

Compare the times for the two paths. Maximum of {22, 19} = 22. We see that path (I) has the maximum time of
22 weeks. Therefore, path (I) is the critical path. The critical activities are A, B, D and F.

The project completion time is 22 weeks. We notice that C and E are non-critical activities.

Time for path (I) - Time for path (II) = 22 - 19 = 3 weeks.

Therefore, together the non-critical activities can be delayed up to a maximum of 3 weeks without delaying the
completion of the whole project.
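
As a quick check of this path-by-path reasoning, the short sketch below enumerates every path from the start event to the end event and picks the longest; the node numbers 1 to 6 used to encode the Problem 2 network are an assumed illustration.

# (activity name, from event, to event, duration in weeks); node numbers assumed.
arcs = [("A", 1, 2, 3), ("B", 2, 3, 5), ("C", 2, 4, 7),
        ("D", 3, 5, 10), ("E", 4, 5, 5), ("F", 5, 6, 4)]

def paths(node, end=6):
    if node == end:
        yield [], 0                      # reached the end event
    for name, i, j, weeks in arcs:
        if i == node:
            for tail, t in paths(j, end):
                yield [name] + tail, weeks + t

best = max(paths(1), key=lambda p: p[1])
print(best)                              # (['A', 'B', 'D', 'F'], 22)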

Problem 3. Find the completion time and the critical activities for the following project:

Solution
In all, we identify 4 paths, beginning with the start node of 1 and terminating at the end node of 10. They are as
follows:

Path (I): Time for the path = 8 + 20 + 8 + 6 = 42 units of time.

Path (II): Time for the path = 10 + 16 + 11 + 6 = 43 units of time.

Path (III): Time for the path = 10 + 16 + 14 + 5 = 45 units of time.

Path (IV): Time for the path = 7 + 25 + 10 + 5 = 47 units of time.

Compare the times for the four paths. Maximum of {42, 43, 45, 47} = 47. We see that path (IV) has
the maximum time, and so it is the critical path.

The critical activities are C, F, J and L. The non-critical activities are A, B, D, E, G, H, I and K. The project
completion time is 47 units of time.

7.7 Gantt Chart

Definition
 A Gantt chart is a type of bar chart that illustrates a project schedule. It lists the tasks to be
performed on the vertical axis, and time intervals on the horizontal axis.
 The width of the horizontal bars in the chart shows the duration of each activity.
 Gantt charts illustrate the start and finish dates of the terminal elements and summary elements of
a project. Terminal elements and summary elements constitute the work breakdown structure of the
project. Modern Gantt charts also show the dependency (i.e., precedence network) relationships between
activities. Gantt charts can be used to show current schedule status using percent-complete shadings and
a vertical "TODAY" line, as shown in the accompanying figure.

Construction of the time schedule

This section shows how the information obtained from the CPM calculations can be used to develop the time
schedule. We recognize that, for an activity (i, j), E_i represents its earliest start time and L_j represents its
latest completion time. This means that the interval (E_i, L_j) delineates the (maximum) span during which
activity (i, j) may be scheduled without delaying the entire project.

Let us determine the time schedule for the network of Problem 1.

Solution

We can get a preliminary time schedule for the different activities of the project by delineating their respective
time spans, as shown in the figure. Two observations are in order.
1. The critical activities (shown by solid lines) must be stacked one right after the other to ensure that the
project is completed within its specified 25-day duration.

2. The noncritical activities (shown by dashed lines) have time spans that are larger than their respective
durations, thus allowing slack (or "leeway") in scheduling them within their allotted time intervals.
How should we schedule the noncritical activities within their respective spans? Normally, it is preferable to
start each noncritical activity as early as possible. In this manner, slack periods will remain opportunely
available at the end of the allotted span where they can be used to absorb unexpected delays in the execution of
the activity. It may be necessary, however, to delay the start of a noncritical activity past its earliest start time.
For example, in the figure, suppose that each of the noncritical activities E and F requires the use of a bulldozer,
and that only one is available. Scheduling both E and F as early as possible requires two bulldozers between
times 8 and 10. We can remove the overlap by starting E at time 8 and pushing the start time of F to somewhere
between times 10 and 14.

If all the noncritical activities can be scheduled as early as possible, the resulting schedule automatically is
feasible. Otherwise, some precedence relationships may be violated if noncritical activities are delayed past
their earliest time. Take, for example, activities C and E in the previous figure. In the project network (Problem 1),
though C must be completed before E, the spans of C and E in the figure allow us to schedule C between
times 6 and 9, and E between times 8 and 10, which violates the requirement that C precede E. The need for a
"red flag" that automatically reveals schedule conflict is thus evident. Such information is provided by
computing the floats for the noncritical activities.

Determination of the Floats


Floats are the slack times available within the allotted span of a noncritical activity. The most common are
the total float and the free float. The next figure gives a convenient summary for computing the total float
(TF_ij) and the free float (FF_ij) for an activity (i, j).

The total float is the excess of the time span defined from the earliest occurrence of event i to the latest
occurrence of event j over the duration of (i, j); that is,

TF_ij = L_j - E_i - D_ij

The free float is the excess of the time span defined from the earliest occurrence of event i to the earliest
occurrence of event j over the duration of (i, j); that is,

FF_ij = E_j - E_i - D_ij

By definition, FF_ij ≤ TF_ij.

Red-Flagging Rule. For a noncritical activity (i, j):

(a) If FF_ij = TF_ij, then the activity can be scheduled anywhere within its (E_i, L_j) span without
causing schedule conflict.

(b) If FF_ij < TF_ij, then the start of the activity can be delayed by at most FF_ij relative to its earliest start
time (E_i) without causing schedule conflict. Any delay larger than FF_ij (but not more than TF_ij) must
be coupled with an equal delay, relative to E_j, in the start time of all the activities leaving node j.

The implication of the rule is that a noncritical activity (i, j) will be red-flagged if FF_ij < TF_ij. This red flag
is important only if we decide to delay the start of the activity past its earliest start time E_i, in which case we
must pay attention to the start times of the activities leaving node j to avoid schedule conflicts.
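
Using the E_j and L_j values of Problem 1 again (with the duration of activity (4, 6) taken as 1, the same assumption as in the earlier CPM sketch), the total and free floats and the red-flagging rule can be computed as follows.

D = {(1, 2): 5, (1, 3): 6, (2, 3): 3, (2, 4): 8, (3, 5): 2,
     (3, 6): 11, (4, 5): 0, (4, 6): 1, (5, 6): 12}   # D[(4, 6)] = 1 assumed
E = {1: 0, 2: 5, 3: 8, 4: 13, 5: 13, 6: 25}          # earliest occurrence times
L = {1: 0, 2: 5, 3: 11, 4: 13, 5: 13, 6: 25}         # latest occurrence times

for (i, j), dur in D.items():
    TF = L[j] - E[i] - dur                           # total float
    FF = E[j] - E[i] - dur                           # free float
    if TF > 0:                                       # noncritical activity
        flag = "red-flagged" if FF < TF else "no conflict possible"
        print((i, j), "TF =", TF, "FF =", FF, flag)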

The following table summarizes the computation of the total and free floats. It is more convenient to do the
calculations directly on the network of Problem 1:

The computations red-flag activities B and C because their FF < TF. The remaining activities (E, F, and G) have
FF = TF, and hence may be scheduled anywhere between their earliest start and latest completion times.

