DEPARTMENT OF MATHEMATICS
MODULE IN LINEAR OPTIMIZATION
EDITED BY: –
Table of Contents
CHAPTER ONE
1. Introduction to Matrices
1.1 Introduction (Definition of LP, Motivation, …)
1.1.1 Types of Matrices
1.2 Matrices (Rank, Elementary Row Operations)
1.2.1 Rank of a Matrix
1.2.2 Elementary Row (Column) Operations
1.3 Systems of Linear Equations
1.3.1 Gauss-Jordan Elimination Method
1.4 Systems of Linear Inequalities
CHAPTER TWO
2. Linear Programming Models of Practical Problems
2.1 Introduction
2.2 Decision Process and Relevance of Optimization
2.3 Model and Model Building
CHAPTER THREE
3. Geometric Methods
3.1 Graphical Solution Methods
3.2 Convex Sets
3.3 Polyhedral Sets and Extreme Points
3.4 The Corner Point Theorem
CHAPTER FOUR
4. The Simplex Method
4.1 Linear Programming in Standard Form
4.2 Basic Feasible Solutions (Analytical Method)
4.3 Fundamental Theorem of Linear Programming
4.4 Algebra of the Simplex Method
4.5 Optimality Test and Basis Exchange
4.6 The Simplex Algorithm
4.7 Degeneracy and Finiteness of the Simplex Algorithm
4.8 Finding a Starting Basic Feasible Solution
Course Introduction
This course deals with linear programming, geometric and simplex methods, duality theory and
further variations of the simplex method, sensitivity analysis, interior point methods,
transportation problems, and theory of games. Every unit of the course touches on real-world
activities, so you are advised to cover the course contents attentively.
The purpose of this module is to provide a unified, insightful, and modern treatment of linear
programming, geometric and simplex methods, duality theory and further variations of the
simplex method, sensitivity analysis, interior point methods, transportation problems, and theory
of games. We discuss both classical topics, as well as the state of the art. We give special
attention to theory, but also cover applications and present case studies. Our main objective is to
help the learner become a sophisticated practitioner of linear optimization. More specifically, we
wish to develop the ability to formulate fairly complex linear optimization problems, provide an
appreciation of the main classes of problems that are practically solvable, describe the available
solution methods, and build an understanding of the qualitative properties of the solutions they
provide.
▪Observe and investigate the impact of optimal solution due to change of parametric values
▪Identify the parameters whose values cannot be changed without changing the optimal solution
▪Understand the numerical calculation of the net evaluation corresponding to the non-basic cells
CHAPTER ONE
1.Introduction to matrices
1.1 Introduction (Definition of LP, Motivation,…)
The concept of matrices has had its origin in various types of linear problems, the most
important of which concerns the nature of solutions of any given system of linear equations.
Matrices are also useful in organizing and manipulating large amounts of data. Today, the
subject of matrices is one of the most important and powerful tools in mathematics, and it
has found applications in a very large number of disciplines, such as engineering, business,
economics, and statistics.
Objectives
Definition of a matrix
A matrix is a rectangular array of numbers arranged in horizontal rows and vertical columns.
Before we go any further, we need to familiarize ourselves with some terms that are associated
with matrices.
The numbers in a matrix are called the entries or the elements of the matrix. For the
entry aij, the first subscript i specifies the row and the second subscript j specifies the column in
which the entry appears. That is, aij is the element of matrix A which is located in the ith
row and jth column of the matrix A. Whenever we talk about a matrix, we need to know
the order of the matrix.
The order of a matrix is the number of rows and columns it has. When we say a matrix is a 3 by
4 matrix, we are saying that it has 3 rows and 4 columns. The rows are always mentioned first
and the columns second. This means that a 3 × 4 matrix does not have the same order as a 4 × 3
matrix. It must be noted that even though an m × n matrix contains m·n elements, the entire
matrix should be considered as a single entity. In keeping with this point of view, matrices are
denoted by single capital letters such as A, B, C and so on.
Remark: By the size of a matrix or the dimension of a matrix we mean the order of the matrix.
Example 1.
A = [ 1 5 2 ]
    [ 0 3 6 ]
is a matrix with two rows and three columns. In general, an m × n matrix has the form
    [ a11 a12 ⋯ a1n ]
A = [ a21 a22 ⋯ a2n ]
    [  ⋮   ⋮  ⋱  ⋮  ]
    [ am1 am2 ⋯ amn ]
Example 1: Let
A = [  4  3  5  6 ]
    [ −2  4 −1  7 ]
    [  0 −4 −7 −8 ]
    [  5  6  7  8 ]
a. Find the indicated entries of A.
b. What is the size of matrix A?
Solution:
Since the number of rows is specified first, this matrix has four rows and four columns, so its
size is 4 × 4. (By contrast, a matrix with four rows and five columns, such as
    [ 2 3 4 5 6 ]
    [ 3 4 5 6 7 ]
    [ 4 5 6 7 8 ]
    [ 5 6 7 8 9 ]
has size 4 × 5.)
Exercise: Form a 4 by 3 matrix, B, such that a) bij = i + j, b) bij = (−1)^(i+j).
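Reading the entry rules of the exercise as bij = i + j and bij = (−1)^(i+j) (the operators are garbled in this copy), the two matrices can be generated with NumPy as a quick sketch:

```python
import numpy as np

# Entry rule b_ij = i + j, with 1-based indices as in the text
# (NumPy indices are 0-based, hence the +1 shifts).
rows, cols = 4, 3
B_sum = np.fromfunction(lambda i, j: (i + 1) + (j + 1), (rows, cols), dtype=int)

# Entry rule b_ij = (-1)^(i+j): an alternating sign pattern.
B_sign = np.fromfunction(lambda i, j: (-1) ** ((i + 1) + (j + 1)), (rows, cols), dtype=int)

print(B_sum)
print(B_sign)
```

Changing `rows` and `cols` reproduces the 4 by 5 version that appears in the review exercise.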
Remark:
1. A vector is a matrix having either one row or one column.
2. A matrix consisting of a single row (a1, a2, a3, …, an) is called a row vector; hence a row
matrix is a 1 × n matrix. Similarly, a matrix consisting of a single column is called a column
vector; a column matrix is an n × 1 matrix.
Definition ( Equality of matrices):
Two matrices A and B are said to be equal, written A=B, if they are of the same order and if all
the corresponding entries are equal.
Example 1.
[ 5 1 0 ]   [ 2+3 1 0 ]
[ 9 2 2 ] = [  9  2 2 ]  but  ( 2 3 ) ≠ ( 2 3 4 ) . Why?

Example 2. Given the matrix equation
[ x + y  6 ]   [ 1 6 ]
[ x − y  8 ] = [ 3 8 ]
Solution:
By the definition of equality of matrices,
x + y = 1
x − y = 3
Adding the two equations gives 2x = 4, so x = 2 and y = −1.
In a square matrix, the entries on the diagonal extending from the upper left corner to the lower
right corner are called the main diagonal entries, or more simply the main diagonal.
Thus, in the matrix
C = [ 3 2 4 ]
    [ 1 6 0 ]
    [ 5 1 8 ]
the main diagonal entries are c11 = 3, c22 = 6 and c33 = 8.
1. Diagonal Matrix: A square matrix is said to be diagonal if each of the entries not
falling on the main diagonal is zero. Thus a square matrix A = (aij) is diagonal if
aij = 0 for i ≠ j.
Example 1:
A = [ a11  0  ⋯  0  ]
    [  0  a22 ⋯  0  ]
    [  ⋮   ⋮  ⋱  ⋮  ]
    [  0   0  ⋯ ann ]
is a diagonal matrix of order n.
Example 2.
[ 5 0 0 ]
[ 0 0 0 ]
[ 0 0 7 ]
is a diagonal matrix. (A main diagonal entry may itself be zero.)
Notation: A diagonal matrix A of order n with diagonal elements a11, a22, …, ann is denoted by
A = diag(a11, a22, …, ann)
2. Scalar Matrix: A diagonal matrix in which all diagonal elements are equal is called a scalar matrix.
Example:
[ k 0 ⋯ 0 ]         [ 2 0 0 ]
[ 0 k ⋯ 0 ]   and   [ 0 2 0 ]
[ ⋮ ⋮ ⋱ ⋮ ]         [ 0 0 2 ]
[ 0 0 ⋯ k ]
are scalar matrices.
Note: A = (aij)n×n is a scalar matrix if aij = 0 for all i ≠ j and aij = k (a constant) for all i = j.
3. Identity Matrix or Unit Matrix: A square matrix is said to be an identity matrix or unit
matrix if all its main diagonal entries are 1’s and all other entries are 0’s. In other
words, a diagonal matrix whose all main diagonal elements are equal to 1 is called an
identity or unit matrix. An identity matrix of order n is denoted by In or more simply
by I
Example:
I3 = [ 1 0 0 ]
     [ 0 1 0 ]
     [ 0 0 1 ]
is the identity matrix of order 3, and
I2 = [ 1 0 ]
     [ 0 1 ]
is the identity matrix of order 2.
A square matrix is called an
i. upper triangular matrix if all entries below the main diagonal are zeros;
ii. lower triangular matrix if all entries above the main diagonal are zeros.
Example:
[ 2 4 8 ]        [ 5 0 0 0 ]
[ 0 1 2 ]  and   [ 1 3 0 0 ]
[ 0 0 3 ]        [ 6 1 2 0 ]
                 [ 2 4 8 6 ]
are upper and lower triangular matrices, respectively.
Definition: A square matrix A is said to be symmetric if Aᵀ = A, that is, if A is equal to its own transpose.
Example: The following matrices are symmetric matrices, since each is equal to
its own transpose (verify):
A = [  7 −3 ]   B = [ 1  4 3 ]   C = [ 0 0 0 ]   D = [ 2 1 5 ]
    [ −3  5 ]       [ 4 −3 0 ]       [ 0 0 0 ]       [ 1 0 3 ]
                    [ 3  0 7 ]       [ 0 0 0 ]       [ 5 3 6 ]
Note: for a symmetric matrix the elements that are at equal distance from the main
diagonal are equal.
Exercise 1.3
1. For
A = [ a 3 4  8 ]
    [ b c 3  9 ]
    [ d e f 10 ]
    [ g h i  j ]
to be a symmetric matrix, what numbers should the letters a to j represent?
Definition: A square matrix A is said to be skew-symmetric if Aᵀ = −A.
Example: For
A = [  0  5 7 ]          [ 0 −5 −7 ]
    [ −5  0 3 ] ,  Aᵀ =  [ 5  0 −3 ] = −A
    [ −7 −3 0 ]          [ 7  3  0 ]
So A is skew-symmetric.
Activity: Determine whether each of the following matrices is symmetric, skew-symmetric, or neither:
[ 1 1 ]   [ 0 1 ]   [ 1 −1 ]
[ 1 0 ] , [ 1 0 ] , [ 1  0 ]
Trace of square matrix
Definition: If A is a square matrix, then the trace of A denoted by tr(A) is defined as the sum
of the diagonal elements of A.
i.e., tr(A) = a11 + a22 + ⋯ + ann, where A = (aij)n×n.
Example 1: Consider A = (aij)3×3; then tr(A) = a11 + a22 + a33.
Example 2: Let
A = [  4 −5  1 ]
    [ −5  3  2 ]
    [  1  2 −4 ]
Then tr(A) = 4 + 3 − 4 = 3.
Remark: The trace of A is undefined if A is not a square matrix.
Properties of the trace
Let A and B be square matrices that are conformable for addition and multiplication. Then
a. tr(kA) = ktr(A) for any scalar k.
b. tr(A + B) = tr(A) + tr(B)
c. tr(AB) = tr(BA)
d. tr(Aᵀ) = tr(A)
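As a quick numerical cross-check of these four properties (using NumPy, which is not part of the module itself), one can test them on random integer matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(3, 3))
B = rng.integers(-5, 6, size=(3, 3))
k = 7

# Properties (a)-(d) of the trace, checked exactly on integer matrices.
assert np.trace(k * A) == k * np.trace(A)            # (a) tr(kA) = k tr(A)
assert np.trace(A + B) == np.trace(A) + np.trace(B)  # (b) tr(A+B) = tr(A) + tr(B)
assert np.trace(A @ B) == np.trace(B @ A)            # (c) tr(AB) = tr(BA)
assert np.trace(A.T) == np.trace(A)                  # (d) tr(A^T) = tr(A)
print("all four trace properties hold")
```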
Exercise
1. a) Form a 4 by 5 matrix, B, such that bij = i*j, where * represents multiplication.
2. Given
A = [ 3 1 0 ]        B = [ 2 4 3 ]
    [ 2 4 5 ]  and       [ 5 1 7 ]
    [ 1 3 6 ]            [ 2 3 8 ]
verify that
i) (A + B)ᵀ = Aᵀ + Bᵀ    ii) (AB)ᵀ = BᵀAᵀ    iii) (2A)ᵀ = 2Aᵀ
3. Let
A = [ 1 1 1 ]
    [ 1 2 3 ]
Is AᵀA symmetric?
That is, if a matrix A is carried to a row-echelon matrix U by elementary row operations, then the
number of leading 1's in U is called the rank of A, written rank(A).
Example: Find the rank of the matrices
A = [ 1 1 2 ]        B = [ 3 2 1 2 ]
    [ 3 1 1 ]  and       [ 1 1 3 5 ]
    [ 1 3 4 ]            [ 1 1 1 1 ]
Solution: We transform the matrix A into row-echelon form by using elementary row operations.
A = [ 1 1 2 ]   [ 1  1  2 ]
    [ 3 1 1 ] ~ [ 0 −2 −5 ] ,  R2 → R2 − 3R1 and R3 → R3 − R1
    [ 1 3 4 ]   [ 0  2  2 ]

              ~ [ 1  1  2 ]
                [ 0 −2 −5 ] ,  R3 → R3 + R2
                [ 0  0 −3 ]

              ~ [ 1 1  2  ]
                [ 0 1 5/2 ] ,  R2 → −(1/2)R2 and R3 → −(1/3)R3
                [ 0 0  1  ]

This is a row-echelon form with 3 leading 1's, so rank(A) = 3.

For B, we first interchange the first two rows:
B = [ 3 2 1 2 ]   [ 1 1 3 5 ]
    [ 1 1 3 5 ] ~ [ 3 2 1 2 ] ,  R1 ↔ R2
    [ 1 1 1 1 ]   [ 1 1 1 1 ]

                ~ [ 1  1  3   5  ]
                  [ 0 −1 −8 −13 ] ,  R2 → R2 − 3R1 and R3 → R3 − R1
                  [ 0  0 −2  −4  ]

                ~ [ 1 1 3  5 ]
                  [ 0 1 8 13 ] ,  R2 → −R2 and R3 → −(1/2)R3
                  [ 0 0 1  2 ]

This is a row-echelon form and the number of leading 1's is 3. Therefore rank(B) = 3.
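Taking the two matrices as printed above, the ranks can be cross-checked with NumPy. Note that `numpy.linalg.matrix_rank` computes the rank numerically (via the SVD) rather than by row reduction, but it agrees with hand elimination here:

```python
import numpy as np

A = np.array([[1, 1, 2],
              [3, 1, 1],
              [1, 3, 4]])
B = np.array([[3, 2, 1, 2],
              [1, 1, 3, 5],
              [1, 1, 1, 1]])

print(np.linalg.matrix_rank(A))  # → 3
print(np.linalg.matrix_rank(B))  # → 3
```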
Definition: A matrix B is said to be row equivalent to a matrix A if B can be obtained from A by
applying a finite sequence of elementary row operations.
Example: Show that A is row equivalent to B, where
A = [ 1  2 4 3 ]        B = [ 2  4 8 6 ]
    [ 2  1 3 2 ]  and       [ 1 −1 2 3 ]
    [ 1 −1 2 3 ]            [ 4 −1 7 8 ]
Solution
B is obtained from A by performing the following elementary row operations:
R1 → 2R1,  R2 ↔ R3,  R3 → R3 + 2R2.
Therefore A is row equivalent to B.
Definition: A matrix obtained from an identity (unit) matrix by applying a single elementary
operation is called an elementary matrix.
Example: The following are elementary matrices:
[ 1  0 ]   [ 1 0 0 ]   [ 1 0 3 ]
[ 0 −3 ] , [ 0 1 0 ] , [ 0 1 0 ]
           [ 0 0 1 ]   [ 0 0 1 ]
(The first is obtained from I2 by R2 → −3R2, the second from I3 by multiplying a row by 1,
and the third from I3 by R1 → R1 + 3R3.)
Example: The matrix
[ 0 0 1 ]
[ 0 1 0 ]
[ 1 0 0 ]
is an elementary matrix, obtained from I3 by R1 ↔ R3.
A nonzero row (or column) in a matrix means a row (or column) that contains at least one
non-zero entry.
A leading entry of a row refers to the left most nonzero entry (in a non zero row).
Definition: A matrix is said to be in echelon (row echelon) forms if it satisfies the
following three properties.
1. All nonzero rows are above any rows of all zeros.
2. Each leading entry of a row is in a column to the right of the leading entry
of the row above it.
3. All entries in a column below a leading entry are zero.
Example: The following matrices are in row echelon form:
A) [ 2 3 2  1  ]   [ 1 0 0 29 ]
   [ 0 1 4  8  ] , [ 0 1 0 16 ]
   [ 0 0 0 5/2 ]   [ 0 0 1  1 ]
B) [ 1 4 3 7 ]   [ 1 1 0 ]   [ 0 1 2  6 0 ]
   [ 0 1 6 2 ] , [ 0 1 0 ] , [ 0 0 1 −1 0 ]
   [ 0 0 1 5 ]   [ 0 0 0 ]   [ 0 0 0  0 1 ]
Definition: If a matrix in echelon form satisfies the following additional conditions, then it
is said to be in reduced echelon form (or row reduced echelon form):
1. The leading entry in each non zero row is 1
2. Each leading 1 is the only nonzero entry in its column.
Example 1: The following matrices are in reduced row echelon form:
[ 1 0 0  4 ]   [ 1 0 0 ]   [ 0 1 −2 0 1 ]   [ 0 0 ]
[ 0 1 0  7 ] , [ 0 1 0 ] , [ 0 0  0 1 3 ] , [ 0 0 ]
[ 0 0 1 −1 ]   [ 0 0 1 ]   [ 0 0  0 0 0 ]
Remark: A matrix in row-echelon form has zeros below each leading 1, where as a matrix in
reduced row-echelon form has zeros both above and below each leading 1.
Remark:
1. Each matrix is row equivalent to one and only one row reduced echelon matrix,
but a matrix can be row equivalent to more than one echelon matrix.
Activity
1. Determine which of the following matrices are in row reduced echelon form and which
are in row echelon form (but not in reduced echelon form):
a) [ 1 0 1 0 ]   b) [ 1 1 0 ]   c) [ 0 1 1 0 ]
   [ 0 1 0 0 ]      [ 0 0 0 ]      [ 0 0 0 0 ]
   [ 0 0 0 1 ]      [ 0 0 0 ]      [ 0 0 0 0 ]
   [ 0 0 0 0 ]
d) [ 0 2 3 4 5 ]   e) [ 1 0 5 0 8 3 ]
   [ 0 0 3 4 5 ]      [ 0 1 4 1 0 6 ]
   [ 0 0 0 0 5 ]      [ 0 0 0 0 1 0 ]
   [ 0 0 0 0 0 ]      [ 0 0 0 0 0 0 ]
Activity 1.3: Reduce the following matrices into row reduced echelon form.
1 2 −3 0
A) = 2 4 −2 2
3 6 −4 3
0 0 −2 0 7 12
B) = 2 4 −10 6 12 28
2 4 −5 6 −5 −1
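For checking such reductions, SymPy's `Matrix.rref()` returns the reduced row echelon form together with the pivot column indices; a sketch for the two matrices of this activity:

```python
from sympy import Matrix

A = Matrix([[1, 2, -3, 0],
            [2, 4, -2, 2],
            [3, 6, -4, 3]])
B = Matrix([[0, 0, -2, 0, 7, 12],
            [2, 4, -10, 6, 12, 28],
            [2, 4, -5, 6, -5, -1]])

# rref() -> (reduced matrix, tuple of pivot column indices)
A_rref, A_pivots = A.rref()
B_rref, B_pivots = B.rref()
print(A_rref, A_pivots)
print(B_rref, B_pivots)
```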
x1 + x2 + x3 = 2           [ 1  1  1 ]           [ 1  1  1  2 ]
2x1 + 3x2 + x3 = 3         [ 2  3  1 ]           [ 2  3  1  3 ]
x1 − x2 − 2x3 = −6         [ 1 −1 −2 ]           [ 1 −1 −2 −6 ]
                     matrix of coefficients     augmented matrix
Observe that the matrix of coefficients is a sub matrix of the augmented matrix. The augmented
matrix completely describes the system.
Elementary Transformations
Systems of equations that are related through elementary transformations, and thus have the
same solutions, are called equivalent systems. The symbol ≅ is used to indicate equivalent
system of equations. The next example compares the elementary transformation with elementary
row operations.
Initial system:
x1 + x2 + x3 = 2
2x1 + 3x2 + x3 = 3
x1 − x2 − 2x3 = −6

Eliminate x1 from the 2nd and 3rd equations:
    x1 + x2 + x3 = 2
≅        x2 − x3 = −1        eq(2) → eq(2) − 2eq(1)
      −2x2 − 3x3 = −8        eq(3) → eq(3) − eq(1)

Eliminate x2 from the 1st and 3rd equations:
    x1      + 2x3 = 3        eq(1) → eq(1) − eq(2)
≅        x2 − x3 = −1
            −5x3 = −10       eq(3) → eq(3) + 2eq(2)

Make the coefficient of x3 in the 3rd equation equal to 1:
    x1 + 2x3 = 3
≅    x2 − x3 = −1
          x3 = 2             eq(3) → −(1/5)eq(3)

Eliminate x3 from the 1st and 2nd equations:
    x1 = −1                  eq(1) → eq(1) − 2eq(3)
≅   x2 = 1                   eq(2) → eq(2) + eq(3)
    x3 = 2
Matrix Method
                   [ 1  1  1 |  2 ]
Augmented matrix:  [ 2  3  1 |  3 ]
                   [ 1 −1 −2 | −6 ]
We refer to the first row as the pivot row, and then we have:
[ 1  1  1 |  2 ]
[ 0  1 −1 | −1 ] ,  R2 → R2 − 2R1, R3 → R3 − R1
[ 0 −2 −3 | −8 ]

[ 1  0  2 |  3  ]
[ 0  1 −1 | −1  ] ,  R1 → R1 − R2, R3 → R3 + 2R2
[ 0  0 −5 | −10 ]

[ 1  0  2 |  3 ]
[ 0  1 −1 | −1 ] ,  R3 → −(1/5)R3
[ 0  0  1 |  2 ]

[ 1 0 0 | −1 ]
[ 0 1 0 |  1 ] ,  R1 → R1 − 2R3, R2 → R2 + R3
[ 0 0 1 |  2 ]

Hence x1 = −1, x2 = 1 and x3 = 2.
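The same answer can be cross-checked numerically. The sketch below reads the system as x1 + x2 + x3 = 2, 2x1 + 3x2 + x3 = 3, x1 − x2 − 2x3 = −6 (the signs implied by the row operations above):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [2.0, 3.0, 1.0],
              [1.0, -1.0, -2.0]])
b = np.array([2.0, 3.0, -6.0])

# numpy.linalg.solve uses an LU factorization, i.e. a variant of the
# Gaussian elimination carried out by hand above.
x = np.linalg.solve(A, b)
print(x)  # → [-1.  1.  2.]
```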
Definition: A linear equation in the variables x1, x2, …, xn over the real field
R is an equation that can be written in the form
a1x1 + a2x2 + ⋯ + anxn = b        (1)
where b and the coefficients a1, a2, …, an are given real numbers.
Definition: A system of linear equations (or a linear system) is a collection of one or more
linear equations involving the same variables, say x1 , x 2 ,..., x n .
The matrix
[ a11 a12 ⋯ a1n | b1 ]
[ a21 a22 ⋯ a2n | b2 ]
[  ⋮   ⋮  ⋱  ⋮  |  ⋮ ]
[ am1 am2 ⋯ amn | bm ]
is called the augmented matrix for the system.
Example: For the linear system
x1 + 3x2 + x3 = 2
x2 + 2x3 = 4
2x1 + 3x2 + 3x3 = 5
the matrix
A = [ 1 3 1 ]
    [ 0 1 2 ]
    [ 2 3 3 ]
is the coefficient matrix and
[ 1 3 1 2 ]
[ 0 1 2 4 ]
[ 2 3 3 5 ]
is the augmented matrix.
Are the coefficient matrix and the augmented matrix of a homogeneous linear system equal?
Why?
A system of linear equations has either
1. no solution, or
2. exactly one solution, or
3. infinitely many solutions.
We say that a linear system is Consistent if it has either one solution or infinitely many
solutions; a system is inconsistent if it has no solution.
This is the process of finding the solutions of a linear system of equation. We first see the
technique of elimination (Gaussian elimination method) and then we add two more techniques,
matrix inversion method and Cramer’s rule.
1. Replace one equation by the sum of itself and a multiple of another equation.
2. Interchange two equations
3. Multiply all the terms in an equation by a non zero constant.
Why do these three operations not change the solution set of the system?
The three basic operations listed above correspond to the three elementary row operations on the
augmented matrix.
Thus to solve a linear system by elimination we first perform appropriate row operations on the
augmented matrix of the system to obtain the augmented matrix of an equivalent linear system
which is easier to solve and use back substitution on the resulting new system. This method can
also be used to answer questions about existence and uniqueness of a solution whenever there is
no need to solve the system completely.
In Gaussian elimination method we either transform the augmented matrix to an echelon matrix
or a reduced echelon matrix. That is we either find an echelon form or the reduced echelon form
of the augmented matrix of the system.
Example 2: If
[ 1 0 0 |  5 ]
[ 0 1 0 | −2 ]
[ 0 0 1 |  4 ]
is the row reduced form of the augmented matrix of a system of linear equations, then
[ 1 0 0 ] [ x1 ]   [  5 ]
[ 0 1 0 ] [ x2 ] = [ −2 ]
[ 0 0 1 ] [ x3 ]   [  4 ]
is the equivalent system of equations in matrix form. The corresponding system of equations is
x1 = 5
x2 = −2
x3 = 4
Example 3: Determine if the following system is consistent. If so, how many solutions does it
have?
x1 + x2 + x3 = 3
−x1 + 5x2 + 5x3 = −2
2x1 − x2 − x3 = 1
Solution:
Let us perform a finite sequence of elementary row operations on the augmented matrix.
          [  1  1  1 |  3 ]   [ 1  1  1 |  3 ]
[A | b] = [ −1  5  5 | −2 ] ~ [ 0  6  6 |  1 ] ,  R2 → R2 + R1, R3 → R3 − 2R1
          [  2 −1 −1 |  1 ]   [ 0 −3 −3 | −5 ]

                            ~ [ 1 1 1 |  3   ]
                              [ 0 6 6 |  1   ] ,  R3 → R3 + (1/2)R2
                              [ 0 0 0 | −9/2 ]

The corresponding system (*) is
x1 + x2 + x3 = 3
6x2 + 6x3 = 1
0 = −9/2
But the last equation, 0·x1 + 0·x2 + 0·x3 = −9/2, is never true. That is, there are no values x1, x2, x3
that satisfy the new system (*). Since (*) and the original linear system have the same solution
set, the original system is inconsistent (has no solution).
Example 4: Solve the system
2x + y − z = −2
2x + y + z = 4
6x + 3y + 2z = 9
Solution:
          [ 2 1 −1 | −2 ]
[A | b] = [ 2 1  1 |  4 ]
          [ 6 3  2 |  9 ]
Let us find an echelon form of the augmented matrix first. From this we can determine whether
the system is consistent or not. If it is consistent, we go ahead and obtain the reduced echelon form
of [A | b], which enables us to describe explicitly all the solutions.
[ 2 1 −1 | −2 ]   [ 2 1 −1 | −2 ]
[ 2 1  1 |  4 ] ~ [ 0 0  2 |  6 ] ,  R2 → R2 − R1, R3 → R3 − 3R1
[ 6 3  2 |  9 ]   [ 0 0  5 | 15 ]

                ~ [ 2 1 −1 | −2 ]
                  [ 0 0  1 |  3 ] ,  R2 → (1/2)R2, R3 → R3 − 5R2
                  [ 0 0  0 |  0 ]

                ~ [ 2 1 0 | 1 ]
                  [ 0 0 1 | 3 ] ,  R1 → R1 + R2
                  [ 0 0 0 | 0 ]

                ~ [ 1 1/2 0 | 1/2 ]
                  [ 0  0  1 |  3  ] ,  R1 → (1/2)R1
                  [ 0  0  0 |  0  ]

The corresponding system is
x + (1/2)y = 1/2
z = 3
0 = 0
Setting y = α, where α is any real number, gives x = 1/2 − (1/2)α. Thus x = 1/2 − (1/2)α, y = α
and z = 3 is the solution of the given system, where α is any real number.
There are an infinite number of solutions; for example, x = 1/2, y = 0, z = 3 and
x = 0, y = 1, z = 3, and so on.
In vector form the general solution of the given system is (1/2 − (1/2)α, α, 3), where α is any
real number.
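SymPy's `linsolve` reproduces such parametric solution sets. The sketch below takes the system to be 2x + y − z = −2, 2x + y + z = 4, 6x + 3y + 2z = 9, consistent with the row operations shown above:

```python
from sympy import symbols, linsolve, Rational

x, y, z = symbols('x y z')
eqs = [2*x + y - z + 2,      # 2x + y - z = -2, written as expression = 0
       2*x + y + z - 4,      # 2x + y + z = 4
       6*x + 3*y + 2*z - 9]  # 6x + 3y + 2z = 9

# For an underdetermined consistent system, linsolve returns the
# solution set with the free variable expressed by the given symbol.
sol = linsolve(eqs, x, y, z)
print(sol)
```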
Activity 1.2
1. Find the solution set of the system:
x + y + 2z = 9
2x + 4y − 3z = 1
3x + 6y − 5z = 0
2. Find the solution set of the system:
x1 + x2 + x4 = 1
x1 + x2 + x3 = 2
x2 + x3 + x4 = 0
Exercise 1.3
1. Find the solution set of the following systems:
a) 2x1 + x2 + 3x3 = 1
   x1 + x2 + 2x3 = 2
   4x1 + 3x2 + x3 = 3
   x1 + 5x3 = 3
b) x1 + 2x2 + 3x3 + x4 = 0
   3x1 + x2 + 5x3 + x4 = 0
   2x1 + x2 + x4 = 0
c) x1 + 3x2 + 5x3 + 2x4 = 11
   3x1 + 2x2 + 7x3 + 5x4 = 0
   2x1 + x2 + x4 = 7
2. For what values of λ and μ does the system
x + y + z = 6
x + 2y + 3z = 10
x + 2y + λz = μ
have (i) no solution, (ii) a unique solution, (iii) infinitely many solutions?
If A is an invertible square matrix, then for each column matrix b, the system of equations
Ax = b has exactly one solution, namely, x = A⁻¹b.
Example: Consider the system of linear equations
x1 + 2x2 + 3x3 = 5
2x1 + 5x2 + 3x3 = 3
x1 + 8x3 = 17
Here
    [ 1 2 3 ]       [ x1 ]       [  5 ]
A = [ 2 5 3 ] , x = [ x2 ] , b = [  3 ]
    [ 1 0 8 ]       [ x3 ]       [ 17 ]
The inverse of A is
      [ −40 16  9 ]
A⁻¹ = [  13 −5 −3 ]
      [   5 −2 −1 ]
so
           [ −40 16  9 ] [  5 ]   [  1 ]
x = A⁻¹b = [  13 −5 −3 ] [  3 ] = [ −1 ]
           [   5 −2 −1 ] [ 17 ]   [  2 ]
That is, x1 = 1, x2 = −1, x3 = 2.
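A quick NumPy cross-check of the inverse-matrix method. (In numerical practice `numpy.linalg.solve` is preferred over forming A⁻¹ explicitly, but the explicit inverse mirrors the text):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 3.0],
              [1.0, 0.0, 8.0]])
b = np.array([5.0, 3.0, 17.0])

A_inv = np.linalg.inv(A)
x = A_inv @ b        # x = A^{-1} b
print(x)  # → [ 1. -1.  2.]
```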
Let us now summarize the different possible cases that may arise for the solution of the system
of equations Ax = b, where n is the number of unknowns and rank(A, b) denotes the rank of the
augmented matrix [A | b].
1. If rank(A, b) > rank(A), then Ax = b has no solution.
2. If rank(A, b) = rank(A) = n, then there exists a unique solution to the system Ax = b.
3. If rank(A, b) = rank(A) < n, then we have an infinite number of solutions to the system Ax = b.
Example: The following is a system of linear inequalities in the variables x and y:
x − y < 2
x > −2
y ≤ 3
Example: Suppose the demand and supply functions for a product are p = 150 − 0.00001x
(demand) and p = 60 + 0.00002x (supply), where p is the price in dollars and x is the number of
units. Begin by finding the point of equilibrium by setting the two equations equal to each other
and solving for x.
60 + 0.00002x = 150 − 0.00001x     Set equations equal to each other.
0.00003x = 90                      Combine like terms.
x = 3,000,000                      Solve for x.
So, the solution is x = 3,000,000, which corresponds to an equilibrium price of p = $120. So, the
consumer surplus and producer surplus are the areas of the following triangular regions.
In Figure F1, you can see that the consumer and producer surpluses are defined as the areas of
the shaded triangles.
Consumer surplus = 1 /2(base)(height) = 1 /2(3,000,000)(30) = $45,000,000
Producer surplus = 1/ 2(base)(height) = 1/ 2(3,000,000)(60) = $90,000,000
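The arithmetic above can be replicated in a few lines of plain Python:

```python
# Demand and supply curves from the example.
def demand(x):
    return 150 - 0.00001 * x

def supply(x):
    return 60 + 0.00002 * x

# Equilibrium: 60 + 0.00002x = 150 - 0.00001x  =>  0.00003x = 90.
x_eq = 90 / 0.00003
p_eq = demand(x_eq)

# Surpluses as triangle areas: 1/2 * base * height.
consumer_surplus = 0.5 * x_eq * (150 - p_eq)
producer_surplus = 0.5 * x_eq * (p_eq - 60)
print(x_eq, p_eq, consumer_surplus, producer_surplus)
```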
Activity 1.4.1
The minimum daily requirements from the liquid portion of a diet are 300 calories, 36 units of
vitamin A, and 90 units of vitamin C. A cup of dietary drink X provides 60 calories, 12 units of
vitamin A, and 10 units of vitamin C. A cup of dietary drink Y provides 60 calories, 6 units of
vitamin A, and 30 units of vitamin C. Set up a system of linear inequalities that describes how
many cups of each drink should be consumed each day to meet the minimum daily requirements
for calories and vitamins.
Hint: Begin by letting x and y represent the following:
x = number of cups of dietary drink X
y = number of cups of dietary drink Y
To meet the minimum daily requirements, the following inequalities must be satisfied
60 + 60 ≥ 300
12 + 6 ≥ 36
10 + 30 ≥ 90
≥0
≥0
Summary
Types of Matrices
Definitions:
1. Row Matrix: A matrix that has exactly one row is called a row matrix.
2. Column Matrix: A matrix that has exactly one column is called a column matrix.
3.Zero or Null Matrix: A matrix whose entries are all 0 is called a zero or null matrix. It is
usually denoted by 0m×n or more simply by 0. For example,
0 = [ 0 0 0 0 ]
    [ 0 0 0 0 ]
is a 2 × 4 zero matrix.
d. tr(Aᵀ) = tr(A)
Note: let A be any square matrix. Then
A + Aᵀ is symmetric,
A − Aᵀ is skew-symmetric, and
A can be written as a sum of a symmetric and a skew-symmetric matrix:
A = (1/2)(A + Aᵀ) + (1/2)(A − Aᵀ).
Properties of the transpose: (Aᵀ)ᵀ = A, (A + B)ᵀ = Aᵀ + Bᵀ, (kA)ᵀ = kAᵀ, (AB)ᵀ = BᵀAᵀ.
Thus the basic strategy in this method is to replace a given system with an equivalent system,
which is easier to solve.
The basic operations that are used to produce an equivalent system of linear equations are the
following:
1.Replace one equation by the sum of itself and a multiple of another equation.
2.Interchange two equations
3.Multiply all the terms in an equation by a non zero constant.
Why do these three operations not change the solution set of the system?
The three basic operations listed above correspond to the three elementary row operations on the
augmented matrix.
Thus to solve a linear system by elimination we first perform appropriate row operations on the
augmented matrix of the system to obtain the augmented matrix of an equivalent linear system
which is easier to solve and use back substitution on the resulting new system. This method can
also be used to answer questions about existence and uniqueness of a solution whenever there is
no need to solve the system completely.
In Gaussian elimination method we either transform the augmented matrix to an echelon matrix
or a reduced echelon matrix. That is we either find an echelon form or the reduced echelon form
of the augmented matrix of the system.
Review exercise
1. Let
    [ 1 2 4 ]       [ 2 −1 3 ]       [ 4 ]
A = [ 2 3 1 ] , B = [ 2  4 2 ] , C = [ 2 ]
    [ 5 0 3 ]       [ 3  6 1 ]       [ 3 ]
Find, if possible: a) A + B   b) B + C
2. Let
A = [ 2 3 −1 ]           [  1 ]
    [ 0 4  2 ]  and  B = [  2 ]
                         [ −1 ]
Then find AB.
3. If a matrix A is 3x5 and the product AB is 3x7, then what is the order of B?
4. For
A = [ a 3 4  8 ]
    [ b c 3  9 ]
    [ d e f 10 ]
    [ g h i  j ]
to be a symmetric matrix, what numbers should the letters a to j represent?
a) Does a symmetric matrix have to be square?
b) Are all square matrices symmetric?
5. Let
A = [ 1 1 1 ]
    [ 1 2 3 ]
Is AᵀA symmetric?
6. For the non-homogeneous linear system
x1 + 3x2 + x3 = 2
x2 + 2x3 = 4
2x1 + 3x2 + 3x3 = 5
the matrix
A = [ 1 3 1 ]
    [ 0 1 2 ]
    [ 2 3 3 ]
is the coefficient matrix and
[ 1 3 1 2 ]
[ 0 1 2 4 ]
[ 2 3 3 5 ]
is the augmented matrix.
CHAPTER TWO
planes, a certain level of staffing, and expected demands on the various routes?
3. Where should a company locate its factories and warehouses so that the costs of
transporting raw materials and finished products are minimized?
4. How should the equipment in an oil refinery be operated, so as to maximize rate of
production while meeting given standards of quality?
5. What is the best treatment plan for a cancer patient, given the characteristics of the
tumor and its proximity to vital organs?
Simple problems of this type can sometimes be solved by common sense, or by using tools from
calculus.
Others can be formulated as optimization problems, in which the goal is to select values that
maximize or minimize a given objective function, subject to certain constraints.
A linear programming problem is a problem of minimizing or maximizing a linear function in
the presence of linear constraints of the inequality and/or the equality type.
- It is the problem of optimizing a linear function of an n-dimensional vector x under
finitely many linear equality and non-strict inequality constraints.
- Linear Programming techniques are widely used to solve a number of military, economic,
industrial, and social problems
Three primary reasons for its wide use are:
A large variety of problems in diverse fields can be represented or at least approximated
as linear programming models.
Efficient techniques for solving linear programming problems are available.
The ease with which data variation (sensitivity analysis) can be handled through linear
programming models.
(P):   f(x) → min (or max),   x ∈ S.
In the following, we will use the designations:
The set S is called the feasible set.
f is called the objective function of (P).
Each x ∈ S is called a feasible point or a feasible solution.
A point x* ∈ S such that f(x*) ≤ f(x) for all x ∈ S is called a solution of (P) or an optimal
solution of (P).
Frequently the constraints include the non-negativity restrictions xj ≥ 0, j = 1, …, n.
Example1:
A manufacturer produces two products A and B. The profit per unit sold is Br. 2 for product A
and Br. 3 for product B. The time required to manufacture one unit of each product and the
daily capacities of the two machines C and D are given below:

Machine   Time required per unit (minutes)   Machine capacity (minutes per day)
          Product A    Product B
C             5            7                    1200
D             4            6                    1000
The manufacturer wants at least 60 units of product A and 30 units of product B to be produced.
It is assumed that all products manufactured are sold in the market. Formulate the problem
mathematically.
Solution:
Let x be the number of units of product A and y be the number of units of product B.
a) The profit from product A is Br. 2x.
b) The profit from product B is Br. 3y.
c) The total profit is 2x + 3y.
The objective is to maximize the profit. Therefore the objective function is
z = 2x + 3y ……………………………………………… (1)
Time required to produce x and y units on machine C is 5x + 7y minutes and the capacity of C is
1200 minutes.
Hence, 5x + 7y ≤ 1200
and for machine D we have, 4x + 6y ≤ 1000
Since the manufacturer wants at least 60 units of A and 30 units of B products
We have ≥ 60, ≥ 30
Thus the required LP model is given by
z = 2x + 3y → maximization
s. t
5x + 7y ≤ 1200
4x + 6y ≤ 1000
x ≥ 60, y ≥ 30
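This model can be solved mechanically with SciPy's `linprog` (a solver, shown here only as a sketch; the graphical and simplex methods come in later chapters). `linprog` minimizes, so the objective is negated. Note the optimal objective value is 500, attained along a whole edge of the feasible region, so the particular optimal point returned may vary:

```python
from scipy.optimize import linprog

# Maximize z = 2x + 3y  <=>  minimize -2x - 3y.
c = [-2, -3]
A_ub = [[5, 7],          # machine C: 5x + 7y <= 1200
        [4, 6]]          # machine D: 4x + 6y <= 1000
b_ub = [1200, 1000]
bounds = [(60, None),    # x >= 60
          (30, None)]    # y >= 30

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)   # optimal plan and maximal profit
```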
Example 2:
A dietician wishes to mix two types of foods in such a way that the vitamin contents of the
mixture contains at least 8 units of vitamin A and 10 units of vitamin B. food I contains 2 units
per kg of vitamin A and 1 units per kg of vitamin B. while food II contains 1 units per kg of
vitamin A and 2 units per kg of vitamin B. It costs Br. 5 per kg to purchase food I and Br. 8 to
purchase food II. Prepare a mathematical model of the problem stated above.
Solution:
Let x kg of food I and y kg of food II be used in the food mixture so as to attain the minimum
requirements of vitamin A and B. We form the summary table as follows
Food Units of vitamin contents per kg Costs per Kg(Br.)
A B
Food I 2 1 5
Food II 1 2 8
Daily min. requirement 8 10
The total cost of the mixture is 5x + 8y, and there should be at least 8 units of vitamin A and
10 units of vitamin B in the food mixture.
Therefore the constraints are
2x + y ≥ 8
x + 2y ≥ 10
x ≥ 0, y ≥ 0
Hence the mathematical formulation is
Minimize C = 5x + 8y
Subject to
2x + y ≥ 8
x + 2y ≥ 10
x ≥ 0, y ≥ 0
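As a sketch, the diet model can be checked with SciPy's `linprog`; the "≥" constraints are flipped to "≤" by negating both sides:

```python
from scipy.optimize import linprog

c = [5, 8]               # cost: 5x + 8y (Br. per kg)
A_ub = [[-2, -1],        # vitamin A: 2x + y >= 8   ->  -2x - y <= -8
        [-1, -2]]        # vitamin B: x + 2y >= 10  ->  -x - 2y <= -10
b_ub = [-8, -10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x, res.fun)    # optimal mixture and minimal cost
```

The optimum buys 2 kg of food I and 4 kg of food II at a cost of Br. 42.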
Example 3 (Reddy Mikks company): Reddy Mikks produces both interior and exterior paints
from two raw materials, M1 and M2. The following table provides the basic data of the problem:

                         Tons of raw material per ton of       Maximum daily
                         Exterior paint    Interior paint      availability (tons)
Raw material M1                6                 4                   24
Raw material M2                1                 2                    6
Profit per ton ($1000)         5                 4
A market survey indicates that the daily demand for interior paint cannot exceed that for exterior
paint by more than 1 ton. Also, the maximum daily demand for interior paint is 2 tons.
Reddy Mikks wants to determine the optimum (best) produce mix of interior and exterior paints
that maximizes the total daily profit.
The LP model has three basic components.
1. Decision variables that we seek to determine.
2. Objective(goal) that we need to optimize (maximize or minimize).
3. Constraints that the solution must satisfy.
The proper definition of the decision variables is an essential first step in the development of the
model. Once done, the task of constructing the objective function and the constraints becomes
more straightforward.
For the Reddy Mikks problem, we need to determine the daily amounts to be produced of
exterior and interior paints. Thus, the variables of the model are defined as
x1 = tons produced daily of exterior paint
x2 = tons produced daily of interior paint
To construct the objective function, note that the company wants to maximize (i.e, increase as
much as possible) the total daily profit of both paints. Given that the profit per ton of exterior and
interior paints are 5 and 4(thousand) dollars, respectively, it follows that
Total profit from exterior paint = 5x1 (thousand) dollars
Total profit from interior paint = 4x2 (thousand) dollars
Letting Z represent the total daily profit (in thousands of dollars), the objective of the company is
Maximize Z = 5x1 + 4x2
Next, we construct the constraints that restrict raw material usage and product demand. The raw
material restrictions are expressed verbally as
Usage of raw materials by both paints≤ Maximum raw material availability
Usage of raw material M1 by exterior paint = 6x1 tons/day
Usage of raw material M1 by interior paint = 4x2 tons/day
Hence, usage of raw material M1 by both paints = 6x1 + 4x2 tons/day
In a similar manner,
Usage of raw material M2 by both paints = x1 + 2x2 tons/day.
Because the daily availabilities of raw materials M1 and M2 are limited to 24 and 6 tons,
respectively, the associated restrictions are given as
6x1 + 4x2 ≤ 24     (raw material M1)
x1 + 2x2 ≤ 6       (raw material M2)
The first demand restriction stipulates that the excess of the daily production of interior over
exterior paint, x2 − x1, should not exceed 1 ton, which translates to
x2 − x1 ≤ 1        (market limit)
The second demand restriction stipulates that the maximum daily demand of interior paint is
limited to 2 tons, which translates to
x2 ≤ 2             (demand limit)
Non-negativity restriction: , ≥ 0.
The complete Reddy Mikks model is
Maximize Z = 5x1 + 4x2
subject to
6x1 + 4x2 ≤ 24
x1 + 2x2 ≤ 6
−x1 + x2 ≤ 1
x2 ≤ 2
x1, x2 ≥ 0
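As a numerical check on the model, one can enumerate the intersection points of the constraint boundary lines and keep the feasible ones; the vertex with the largest objective value is optimal. The following pure-Python sketch does this for the Reddy Mikks model (the helper names are illustrative, not from the text):

```python
from itertools import combinations

# Constraints written as a.x <= b, including x1 >= 0 and x2 >= 0 as -x <= 0.
A = [(6, 4), (1, 2), (-1, 1), (0, 1), (-1, 0), (0, -1)]
b = [24, 6, 1, 2, 0, 0]

def intersect(i, j):
    """Solve the 2x2 system A[i].x = b[i], A[j].x = b[j]; None if parallel."""
    (a1, a2), (c1, c2) = A[i], A[j]
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        return None
    return ((b[i] * c2 - b[j] * a2) / det, (a1 * b[j] - c1 * b[i]) / det)

def feasible(p):
    return all(ai[0] * p[0] + ai[1] * p[1] <= bi + 1e-9 for ai, bi in zip(A, b))

vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 5 * p[0] + 4 * p[1])
print(best, 5 * best[0] + 4 * best[1])  # (3.0, 1.5) 21.0
```

The maximizing vertex is (3, 1.5), i.e., 3 tons of exterior and 1.5 tons of interior paint per day, with a daily profit of 21 thousand dollars.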
Example 3: Advertising Media Selection
An advertising company wishes to plan an advertising campaign in three different media:
television, radio, and magazines. The purpose of the advertising program is to reach as many
potential customers as possible. Results of a market study are given below:
Television (daytime) | Television (prime time) | Radio | Magazines
Number of potential customers reached per unit: 400,000 | 900,000 | 500,000 | 200,000
Number of women customers reached per unit: 300,000 | 400,000 | 200,000 | 100,000
The company does not want to spend more than Br. 800,000 on advertising. It further requires that
1. at least 2 million exposures take place among women;
2. advertising on television be limited to Br. 500,000;
3. at least 3 advertising units be bought on daytime television, and two units during prime
time; and
4. the number of advertising units on radio and magazines should each be between 5 and
10. Formulate the problem mathematically.
Solution: Let x1, x2, x3, x4 denote the number of advertising units bought on daytime
television, prime-time television, radio, and magazines, respectively, and let c1, c2, c3, c4
denote the corresponding costs (in Br.) of one advertising unit, as given in the market study.
The restrictions on radio and magazine units are
5 ≤ x3 ≤ 10
5 ≤ x4 ≤ 10
Thus the complete linear programming problem is given below:
Maximize Z = 400,000x1 + 900,000x2 + 500,000x3 + 200,000x4
Subject to
c1x1 + c2x2 + c3x3 + c4x4 ≤ 800,000 (budget)
300,000x1 + 400,000x2 + 200,000x3 + 100,000x4 ≥ 2,000,000 (women exposures)
c1x1 + c2x2 ≤ 500,000 (television budget)
x1 ≥ 3
x2 ≥ 2
5 ≤ x3 ≤ 10, 5 ≤ x4 ≤ 10, x1, x2 ≥ 0
Example 4:
A firm manufactures 3 products A, B and C. The profits per unit of the products are Br. 3, Br. 2, and
Br. 4, respectively. The firm has two machines, and given below is the required processing time in
minutes for each machine on each product.
Machines Products
A B C
M1 4 3 5
M2 2 2 4
Machines M1 and M2 have 2000 and 2500 machine-minutes available, respectively. The firm must
manufacture at least 100 A's, 200 B's and 50 C's, but not more than 150 A's. Formulate the
mathematical model.
Example 5:
A farmer has 1000 acres of land on which he can grow corn, wheat or soya beans. Each acre of
corn costs Br. 100 for preparation, requires 7 man-days of work and yields a profit of Br. 30. An
acre of wheat costs Br. 120 to prepare, requires 10 man-days of work and yields a profit of Br. 40.
An acre of soya beans costs Br. 70 to prepare, requires 8 man-days of work and yields a profit of
Br. 20. If the farmer has Br. 100,000 for preparation and can count on 80,000 man-days of work,
formulate the mathematical model.
Example 6:
A truck company requires the following number of drivers for its trucks during 24 hours.
Time Number of drivers required
00-04hr 5
04-08hr 10
08-12 hr 20
12-16 hr 12
16-20 hr 22
20-24 hr 8
According to the shift schedule a driver may join for duty at midnight, 04,08,12,16,20 hours and
work continuously for 8 hrs. Formulate the problem as LP model for optimal shift plan.
Solution: Let x1, x2, x3, x4, x5, x6 denote the number of drivers joining duty at 00, 04, 08, 12, 16, 20
hours, respectively. The objective is to minimize the total number of drivers required.
Therefore the objective function is
Minimize Z = x1 + x2 + x3 + x4 + x5 + x6
Drivers who join duty at 00 hrs. and 04 hrs. are available between 04 and 08 hrs. As the
number of drivers required during this interval is 10, we have the constraint:
x1 + x2 ≥ 10
Likewise, for the remaining time intervals we obtain
x2 + x3 ≥ 20
x3 + x4 ≥ 12
x4 + x5 ≥ 22
x5 + x6 ≥ 8
x6 + x1 ≥ 5
xi ≥ 0, i = 1, …, 6
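A small search illustrates the model (a sketch, not part of the original text). Because each variable appears in only two of the cyclic constraints, we can fix x1, fill in the remaining variables greedily, and compare the result against the lower bound (x2 + x3) + (x4 + x5) + (x6 + x1) ≥ 20 + 22 + 5 = 47:

```python
req = [10, 20, 12, 22, 8, 5]   # x1+x2>=10, x2+x3>=20, ..., x6+x1>=5

best = None
for x1 in range(max(req) + 1):
    x = [x1]
    x.append(max(0, req[0] - x[0]))                  # x2
    x.append(max(0, req[1] - x[1]))                  # x3
    x.append(max(0, req[2] - x[2]))                  # x4
    x.append(max(0, req[3] - x[3]))                  # x5
    x.append(max(0, req[4] - x[4], req[5] - x[0]))   # x6 covers two constraints
    if best is None or sum(x) < sum(best):
        best = x

# Lower bound: (x2+x3) + (x4+x5) + (x6+x1) >= 20 + 22 + 5 = 47.
print(sum(best), best)
```

The search finds a schedule using 47 drivers, which matches the lower bound above, so 47 is the optimal value.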
Example 7: A hospital divides the day into six consecutive periods and requires a minimum number
of nurses during each period. Nurses report to the hospital at the beginning of a period and work for
8 consecutive hours (two periods). The hospital wants to determine the minimum number of nurses
to be employed so that there is a sufficient number of nurses available for each period. Formulate
this as a linear programming problem.
Ans. Let xi denote the number of nurses reporting at the beginning of period i, and bi the minimum
number of nurses required during period i. Then
Minimize Z = x1 + x2 + x3 + x4 + x5 + x6
Subject to
x6 + x1 ≥ b1
x1 + x2 ≥ b2
x2 + x3 ≥ b3
x3 + x4 ≥ b4
x4 + x5 ≥ b5
x5 + x6 ≥ b6
xi ≥ 0, i = 1, …, 6
Example 8: Warehouse problem
A person running a warehouse purchases and sells identical items. The warehouse can
accommodate 1,000 such items. Each month, the person can sell any quantity he has in stock.
Each month, he can buy as much as he likes to have in stock for delivery at the end of the month,
subject to a maximum of 1,000 items. The forecast of purchase and sale prices for the next six
months is given below:
Month i 1 2 3 4 5 6
Purchase price, Ci 12 14 17 19 20 21
Sale price, Si 13 15 16 20 21 23
Let xi and yi denote the quantities purchased and sold, respectively, in month i, and suppose the
warehouse starts with 200 items in stock. The stock at the end of month i is then
200 + Σ_{j=1}^{i} (xj − yj), which can neither be negative nor exceed the warehouse capacity:
200 + Σ_{j=1}^{i} (xj − yj) ≥ 0
which is equivalent to
Σ_{j=1}^{i} (yj − xj) ≤ 200
200 + Σ_{j=1}^{i} (xj − yj) ≤ 1000
which is equivalent to
Σ_{j=1}^{i} (xj − yj) ≤ 800
The objective is to maximize the total profit, so the model is:
Maximize Z = Σ_{i=1}^{6} (Si yi − Ci xi)
subject to Σ_{j=1}^{i} (yj − xj) ≤ 200, i = 1, …, 6
Σ_{j=1}^{i} (xj − yj) ≤ 800, i = 1, …, 6
xi, yi ≥ 0, i = 1, …, 6
Example 9:
Evening shift resident doctors in a government hospital work five consecutive days and then
have two days off. Their five days of work can start on any day of the week and the schedule
rotates indefinitely. The hospital requires the following minimum number of doctors.
Sunday Monday Tuesday Wednesday Thursday Friday Saturday
35 55 60 50 60 50 45
No more than 40 doctors can start their five working days' schedule on the same day.
Formulate the LP model to minimize the number of doctors to be employed by the hospital.
Solution:
Let x1, x2, x3, x4, x5, x6, x7 denote the number of doctors joining duty on Sunday, Monday, Tuesday,
Wednesday, Thursday, Friday and Saturday, respectively.
The objective is to minimize the number of doctors to be hired by the hospital.
The objective function is Minimize Z = x1 + x2 + x3 + x4 + x5 + x6 + x7
The x1 doctors who join duty on Sunday work through Thursday, while the doctors who joined
duty on the preceding Monday and Tuesday have their days off on Sunday. Hence the doctors
available on Sunday are those who started on Wednesday, Thursday, Friday, Saturday or Sunday.
Therefore, for Sunday we have x1 + x4 + x5 + x6 + x7 ≥ 35
For Monday: x1 + x2 + x5 + x6 + x7 ≥ 55
For Tuesday: x1 + x2 + x3 + x6 + x7 ≥ 60
For Wednesday: x1 + x2 + x3 + x4 + x7 ≥ 50
For Thursday: x1 + x2 + x3 + x4 + x5 ≥ 60
For Friday: x2 + x3 + x4 + x5 + x6 ≥ 50
For Saturday: x3 + x4 + x5 + x6 + x7 ≥ 45
0 ≤ xi ≤ 40; i = 1, …, 7
Thus the required LP model is:
Minimize Z = x1 + x2 + x3 + x4 + x5 + x6 + x7
Subject to
x1 + x4 + x5 + x6 + x7 ≥ 35
x1 + x2 + x5 + x6 + x7 ≥ 55
x1 + x2 + x3 + x6 + x7 ≥ 60
x1 + x2 + x3 + x4 + x7 ≥ 50
x1 + x2 + x3 + x4 + x5 ≥ 60
x2 + x3 + x4 + x5 + x6 ≥ 50
x3 + x4 + x5 + x6 + x7 ≥ 45
0 ≤ xi ≤ 40; i = 1, …, 7
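The cyclic availability pattern behind these constraints can be encoded and checked mechanically. The sketch below (illustrative only) verifies a candidate schedule for feasibility; it does not claim the candidate is optimal:

```python
# A doctor starting on day s (0 = Sunday) works days s, s+1, ..., s+4 (mod 7).
demand = [35, 55, 60, 50, 60, 50, 45]   # Sun .. Sat

def on_duty(x, day):
    """Doctors working on `day` are those who started within the last 5 days."""
    return sum(x[(day - k) % 7] for k in range(5))

def feasible(x):
    return (all(0 <= xi <= 40 for xi in x)
            and all(on_duty(x, d) >= demand[d] for d in range(7)))

x = [12] * 7                  # a feasible (not necessarily optimal) schedule
print(feasible(x), sum(x))    # True 84
```

With 12 doctors starting each day, 60 doctors are on duty every day, which meets the largest requirement (60 on Tuesday and Thursday).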
Example 10:
A local travel agent is planning a charter trip to a major sea resort. The eight-day/ seven night
packages include the fare for round trip travel, surface transportation, board and lodging and
selected tour options. The charter trip is restricted to 200 people and experience indicates that
there will not be any problem for getting 200 persons. The problem for the travel agent is to
determine the number of Deluxe, Standard and Economy tour packages to offer for this charter.
These three plans differ according to seating and services for the flight, quality of
accommodation, meal plans and tour options. The following table summarizes the estimated
prices for the three packages and the corresponding expenses for the travel agent. The travel
agent has hired an aircraft for the flat fee of Br. 200,000 for the entire trip.
x1 / (x1 + x2 + x3) ≥ 0.1, i.e., 9x1 − x2 − x3 ≥ 0
−7x1 + 13x2 − 7x3 ≥ 0
−7x1 + 3x2 − 7x3 ≤ 0
−3x1 − 3x2 + 7x3 ≥ 0
≤ 60
+ = 120
x1 + x2 + x3 = 200
x1, x2, x3 ≥ 0
Example 11:
A manufacturing company has plants in cities A, B, and C. The company produces and
distributes its product to dealers in various cities. On a particular day, the company has 30 units
of its product in A, 40 in B, and 30 in C. The company plans to ship 20 units to D, 20 to E, 25 to
F, and 35 to G, following orders received from dealers. The transportation costs per unit of each
product between the cities are given below:
In the table, the quantities supplied and demanded appear at the right and along the bottom of the
table. The quantities to be transported from the plants to the different destinations are represented
by the decision variables. This problem can be stated in the general form:
Minimize z = c1x1 + c2x2 + ⋯ + cnxn
subject to
ai1x1 + ai2x2 + ⋯ + ainxn (≤, =, ≥) bi, i = 1, …, m,
xj ≥ 0, j = 1, …, n.
Remark: For x = (x1, x2, x3, …, xn) and y = (y1, y2, y3, …, yn),
⟨x, y⟩ = x1y1 + x2y2 + ⋯ + xnyn = Σᵢ xᵢyᵢ (the inner product of x and y).
In matrix notation,
A = [a11 ⋯ a1n; ⋮ ⋱ ⋮; am1 ⋯ amn], x = (x1, …, xn)ᵀ, b = (b1, …, bm)ᵀ.
Example: Consider
f(x1, x2′, x3′, x3″) = 12x1 − 8x2′ + 5x3′ − 5x3″ → max
s.t. 4x1 − 2x2′ + x3′ − x3″ ≤ 80
2x1 − 3x2′ + 2x3′ − 2x3″ ≤ 100
5x1 − x2′ − x3′ + x3″ ≤ 75
Equivalently we can write it as follows:
f(x) = ⟨c, x⟩ → max
s.t. Ax ≤ b, where
c = (12, −8, 5, −5), A = [4 −2 1 −1; 2 −3 2 −2; 5 −1 −1 1],
x = (x1, x2′, x3′, x3″)ᵀ, b = (80, 100, 75)ᵀ.
Since we have a general theory for solving systems of linear equations, it is advantageous to
change all inequality constraints into equality constraints.
When a constraint is of the "≤" type, a non-negative variable is added to the left-hand side to
convert it into an equality constraint; these variables are called slack variables.
4x1 + 2x2 + x3 ≤ 80 ⟺ 4x1 + 2x2 + x3 + x4 = 80.
When a constraint is of the "≥" type, a non-negative variable is subtracted from the left-hand
side to convert it into an equality constraint; these variables are called surplus variables.
4x1 + 2x2 + x3 ≥ 8 ⟺ 4x1 + 2x2 + x3 − x4 = 8.
f(x) = ⟨c, x⟩ → max
subject to Ax = b, x ≥ 0, is called the standard form of an LPP.
Example: Find the standard form of the following problem
f(x1, x2, x3) = 2x1 + 3x2 − 4x3 → max
s.t. 4x1 + 2x2 − x3 ≤ 4
−3x1 + 2x2 + 3x3 ≥ 6
x1 + x2 − 3x3 ≤ 8, xi ≥ 0.
Solution: Add a slack variable to the first and third constraints and subtract a surplus variable in
the second constraint. The constraints become
4x1 + 2x2 − x3 + x4 = 4
−3x1 + 2x2 + 3x3 − x5 = 6
x1 + x2 − 3x3 + x6 = 8
xi ≥ 0; i = 1, 2, …, 6
Thus in matrix representation, we can write the above standard LPP as:
f(x) = ⟨c, x⟩ → max
s.t. Ax = b, x ≥ 0,
where f(x) = ⟨c, x⟩,
c = (2, 3, −4, 0, 0, 0),
x = (x1, x2, x3, x4, x5, x6)ᵀ,
A = [4 2 −1 1 0 0; −3 2 3 0 −1 0; 1 1 −3 0 0 1], b = (4, 6, 8)ᵀ.
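The slack/surplus construction can be sketched as a small routine (the function name and representation are illustrative, not from the text). Applied to the example above, it reproduces the constraint matrix of the standard form:

```python
def to_standard_form(A, senses, b):
    """Append one slack (+1, for <=) or surplus (-1, for >=) column per row."""
    m = len(A)
    rows = []
    for i, (row, sense) in enumerate(zip(A, senses)):
        extra = [0] * m
        extra[i] = 1 if sense == "<=" else -1
        rows.append(list(row) + extra)
    return rows, list(b)

# The example above: constraints of types <=, >=, <=.
A = [[4, 2, -1], [-3, 2, 3], [1, 1, -3]]
senses = ["<=", ">=", "<="]
b = [4, 6, 8]
A_std, b_std = to_standard_form(A, senses, b)
for row in A_std:
    print(row)
```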
Minimization and maximization problems
Another problem manipulation is to convert a maximization problem into a minimization
problem and conversely. Note that over any region
Max f(x) = −Min (−f(x)).
So a maximization (minimization) problem can be converted into a minimization (maximization)
problem by multiplying the coefficients of the objective function by −1. After the optimization
of the new problem is completed, the objective of the old problem is −1 times the optimal
objective of the new problem.
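The identity Max f = −Min(−f) can be illustrated numerically on any finite set of values (a trivial sketch):

```python
# Values of f(x) = 3x - 2 over a finite grid of points.
vals = [3 * x - 2 for x in range(-5, 6)]

# Maximizing f is the same as minimizing -f and negating the result.
assert max(vals) == -min(-v for v in vals)
print(max(vals))  # 13
```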
Example: Express the following minimization problem as a standard maximization problem
f(x1, x2, x3) = 4x1 − x2 + 2x3 → min
s.t. 4x1 + x2 − x3 ≤ 7
2x1 − 3x2 + x3 ≤ 12
4x1 + 7x2 − x3 ≥ 0, xi ≥ 0, i = 1, 2, 3.
Solution: Since Max f(x) = −Min (−f(x)), we have the following:
g(x1, x2, x3) = −f(x1, x2, x3) = −4x1 + x2 − 2x3 → max
s.t. 4x1 + x2 − x3 + x4 = 7
2x1 − 3x2 + x3 + x5 = 12
4x1 + 7x2 − x3 − x6 = 0, xi ≥ 0, i = 1, 2, …, 6
is the corresponding maximization problem in standard form, with
c = (−4, 1, −2, 0, 0, 0), A = [4 1 −1 1 0 0; 2 −3 1 0 1 0; 4 7 −1 0 0 −1], b = (7, 12, 0)ᵀ,
x = (x1, x2, x3, x4, x5, x6)ᵀ.
f(x) = ⟨c, x⟩ → max
s.t. Ax = b, x ≥ 0.
S = {x ∈ ℝⁿ : Ax = b, x ≥ 0}.
Standard and Canonical Forms
Summary
Optimization is a fundamental tool for understanding nature, science, engineering, economics,
and mathematics. Physical and chemical systems tend to a state that minimizes some measure of
their energy. People try to operate man-made systems (for example, a chemical plant, a cancer
treatment device, an investment portfolio, or a nation’s economy) to optimize their performance
in some sense.
In the formulation of any LP model it is quite necessary to specify:
a. Decision Variables: The problem should have a number of decision variables for
which the decisions are to be taken. These are x1, x2, …, xn.
2. Solve the following linear programming problems. If you wish, you may check your
arithmetic by using the simple online pivot tool:
a)
b)
CHAPTER THREE
3.Geometric methods
Introduction
When the number of variables in a linear programming problem is three or less, we can graph the
set of feasible solutions together with the level sets of the objective function. But if the number of
variables and constraints is large, it is difficult to apply the graphical method to solve the linear
programming problem. From the picture, it is usually a trivial matter to write down the optimal
solution.
Each constraint (including the non-negativity constraints on the variables) defines a half-plane.
These half-planes can be determined by first graphing the equation obtained by replacing the
inequality with an equality, and then checking whether some specific point that doesn't satisfy
the equality (often (0, 0) can be used) satisfies the inequality constraint. The set of feasible
solutions is just the intersection of these half-planes. For the problem given above, the figure also
shows two level sets of the objective function. One of them passes through the middle of the set
of feasible solutions. As the objective function value increases, the corresponding level set moves
to the right. The optimal level set is the last one that touches the set of feasible solutions.
General objectives
At the end of this unit the learner will be able to:
▪Understand how to apply the geometrical method to linear programming problems
Theorem: If the convex set of F.S. of a LPP is bounded, then any objective function has both
maximum and minimum corresponding to some extreme points of the polyhedral set.
Proof: Exercise
Note:
1. When the feasible region of a LPP is strictly bounded, i.e., bounded polyhedral set, any
objective function has both finite maximum and finite minimum which occurs at the
extreme points.
2. If the feasible region of a LPP is bounded from below only, then an objective function
may have either maximum or minimum but not both.
3. A particular objective function may not have any optimal value, maximum or minimum,
where the feasible region is unbounded.
Definition:
Multiple optimal solutions: If a LPP has more than one optimal solution, then the problem is
said to have multiple optimal solution.
Theorem: If a LPP has at least two optimal feasible solutions, then there are an infinite number of
optimal solutions, namely the convex combinations of the initial optimal solutions.
Proof:
Let x1, x2 (x1 ≠ x2) be two optimal feasible solutions of a LPP which maximize the
objective function z = cx subject to Ax = b; x ≥ 0.
Then Ax1 = Ax2 = b and cx1 = cx2 = z*, the common optimal value.
Let x = λx1 + (1 − λ)x2, 0 ≤ λ ≤ 1. Then
Ax = λAx1 + (1 − λ)Ax2 = λb + (1 − λ)b = b, and x ≥ 0,
cx = λcx1 + (1 − λ)cx2 = λz* + (1 − λ)z* = z*.
Therefore x is also an optimal feasible solution of the LPP, being a convex combination of
the two distinct points x1, x2; since λ ∈ [0, 1] is arbitrary, there are infinitely many optimal
solutions.
Fundamental theorem of LPP (Geometric approach)
Theorem: If a LPP admits an optimal solution, then the objective function assumes the optimum
value at an extreme points of the convex set generated by the set of all feasible solutions.
Proof: exercise
Some remarks on the solution of LPP
Solution a:
The convex set of the feasible region is given by ABCD. This region is also known as the admissible
region. It is a strictly bounded region. The four extreme points are A(0,0), B(2,0), C( , ), D(1,0).
The values of the objective function at the extreme points are
zA = 0,
zB = 4,
zC = 5/2,
zD = 1.
Hence the maximum value of z is 4, which occurs at B.
b).
The four extreme points are (11,0), (4,2), (1,5), (0,10). Then
z(11,0) = 22,
z(4,2) = 14,
z(1,5) = 17,
z(0,10) = 30.
The boundary lines of the constraints are 2x + y = 800 and x + 2y = 1000. The values of
Z = 30x + 20y at the extreme points of the feasible region are:
Vertex | Z = 30x + 20y
O(0,0) | 0
A(400,0) | 12,000
B(200,400) | 14,000
C(0,500) | 10,000
From this we find that the maximum value of Z occurs at the vertex B(200,400).
b) min z = 4x1 + x2
s.t. 3x1 + 4x2 ≥ 20,
−x1 − 5x2 ≤ −15, x1, x2 ≥ 0.
The feasible region is given below
Suppose E = ℝⁿ and x, y ∈ ℝⁿ.
Inner product of x and y: ⟨x, y⟩ = Σᵢ xᵢyᵢ, where x = (x1, …, xn), y = (y1, …, yn).
‖x‖ = √⟨x, x⟩ = √(Σᵢ xᵢ²) (the norm of x).
For x, y ∈ ℝⁿ, d(x, y) = ‖x − y‖ = √⟨x − y, x − y⟩
is the distance between two points x and y in ℝⁿ.
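These definitions translate directly into code (a minimal sketch; the function names are illustrative):

```python
from math import sqrt

def inner(x, y):
    """Inner product <x, y> = sum_i x_i * y_i."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Euclidean norm ||x|| = sqrt(<x, x>)."""
    return sqrt(inner(x, x))

def dist(x, y):
    """Distance d(x, y) = ||x - y||."""
    return norm([a - b for a, b in zip(x, y)])

print(inner((1, 2, 3), (4, 5, 6)))  # 32
print(norm((3, 4)))                 # 5.0
print(dist((1, 1), (4, 5)))         # 5.0
```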
Line: For x1, x2 ∈ ℝⁿ, the line joining the points x1, x2 (x1 ≠ x2) is the set of points X
given by X = {x : x = λx1 + (1 − λ)x2, λ ∈ ℝ}.
Line segment: Let x1, x2 ∈ ℝⁿ, x1 ≠ x2; then the line segment joining these two points
is the set X of points given by X = {x : x = λx1 + (1 − λ)x2, 0 ≤ λ ≤ 1}.
Hyperplane: a hyperplane in ℝⁿ is a set X of points given by X = {x : cx = z}, where
c is a row vector c = (c1, …, cn) ≠ 0, z ∈ ℝ, and x = (x1, …, xn)ᵀ is an n-component
column vector.
A hyperplane divides ℝⁿ into three mutually exclusive disjoint sets given by:
X1 = {x : cx > z}.
X2 = {x : cx = z}.
X3 = {x : cx < z}.
The sets X1, X3 are called open half-spaces.
Remark: The objective function z = cx and the constraints with equality sign are all hyperplanes
in an LPP.
Hypersphere: A hypersphere in ℝⁿ with center a = (a1, a2, …, an) and radius r > 0 is
defined to be the set X of points given by
X = {x : |x − a| = r}, where x = (x1, …, xn); the equation can be written as follows:
(x1 − a1)² + ⋯ + (xn − an)² = r².
An ε-neighbourhood: An ε-neighbourhood about a point a is defined as the set X
of points lying inside the hypersphere with center at a and radius ε > 0, i.e., the
ε-neighbourhood about the point a is the set of points X = {x : |x − a| < ε}.
Interior point of a set: Let ∅ ≠ S ⊆ ℝⁿ. A point a is said to be an interior point of S if and only if
there exists at least one ε > 0 such that the ε-neighbourhood of a is contained in S, i.e.,
∃ ε > 0 such that B(a, ε) ⊆ S.
(Figure: examples of convex sets and a nonconvex set.)
Examples :
1. Let u ∈ ℝⁿ, u ≠ 0, and α ∈ ℝ. Then K = {x ∈ ℝⁿ : ux ≥ α} is a convex set.
Solution: Let x, y ∈ K. We want to show that λx + (1 − λ)y ∈ K for 0 ≤ λ ≤ 1.
By definition of K we have ux ≥ α, uy ≥ α
⇒ λux ≥ λα, (1 − λ)uy ≥ (1 − λ)α
⇒ λux + (1 − λ)uy ≥ λα + (1 − λ)α
⇒ u(λx) + u((1 − λ)y) ≥ λα + (1 − λ)α
⇒ u(λx + (1 − λ)y) ≥ α
⇒ λx + (1 − λ)y ∈ K.
Therefore K is a convex set.
2. K = {x ∈ ℝⁿ : ux ≤ α, α ∈ ℝ, u ∈ ℝⁿ, u ≠ 0} is also a convex set.
3. For (c, z) with c ∈ ℝⁿ, c ≠ 0, z ∈ ℝ, the half-space H = {x ∈ ℝⁿ : cx ≤ z} is a convex set.
Exercise
Prove that the following sets are convex or non convex
a. A = {(x1, x2) : x1 − 2x2 = 2}.
b. B = {(x1, x2) : x1 − 2x2 ≤ 5}.
c. C = {x : |x| < 2}.
d. D = {(x1, x2) : x1² + x2² ≤ 4}.
e. E = {(x1, x2) : |x1| ≤ 2, |x2| ≤ 1}.
f. F = {(x1, x2) : x1 ≤ x2}.
g. G = {(x1, x2) : x2² ≥ 4x1}.
Solution: D = {(x1, x2) : x1² + x2² ≤ 4}
i. Geometrical approach: the set of points represents a circle of radius 2 together with the boundary
and all its interior points. Hence, the set is convex.
ii. Algebraic approach: Let (x1, y1) and (x2, y2) be any two points of the set D. Then
x1² + y1² ≤ 4
x2² + y2² ≤ 4
Now any convex combination of the points is the point (λx1 + (1 − λ)x2, λy1 + (1 − λ)y2),
0 ≤ λ ≤ 1. Since
(λx1 + (1 − λ)x2)² + (λy1 + (1 − λ)y2)²
= λ²(x1² + y1²) + (1 − λ)²(x2² + y2²) + 2λ(1 − λ)(x1x2 + y1y2)
≤ λ²(x1² + y1²) + (1 − λ)²(x2² + y2²) + 2λ(1 − λ)√(x1² + y1²)·√(x2² + y2²)
(using x1x2 + y1y2 ≤ √(x1² + y1²)·√(x2² + y2²))
≤ 4λ² + 4(1 − λ)² + 8λ(1 − λ)
= 4(λ + (1 − λ))² = 4.
Hence the set is convex.
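Convexity of the disc can also be probed numerically by sampling pairs of members and random convex combinations; no sampled combination should fall outside the set. This only searches for counterexamples — it does not prove convexity (a sketch, with illustrative helper names):

```python
import random

random.seed(0)

def in_disc(p):
    """Membership test for D = {(x, y) : x**2 + y**2 <= 4}."""
    return p[0]**2 + p[1]**2 <= 4 + 1e-12

def random_disc_point():
    while True:
        p = (random.uniform(-2, 2), random.uniform(-2, 2))
        if in_disc(p):
            return p

# Every convex combination of two members should stay inside the set.
for _ in range(1000):
    x, y = random_disc_point(), random_disc_point()
    t = random.random()
    z = (t * x[0] + (1 - t) * y[0], t * x[1] + (1 - t) * y[1])
    assert in_disc(z)
print("no counterexample found")
```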
E = {(x1, x2) : |x1| ≤ 2, |x2| ≤ 1}
i. Geometrical approach: The set represents a rectangle with all its interior points, having
the four extreme points (2,1), (2,−1), (−2,−1), (−2,1). Hence the set is convex.
ii. Algebraic approach: The set is not empty. Let (x1, y1) and (x2, y2)
be any two points belonging to the set E. Then
|x1| ≤ 2, |x2| ≤ 2 and |y1| ≤ 1, |y2| ≤ 1. A convex combination of the points is the point
given by (λx1 + (1 − λ)x2, λy1 + (1 − λ)y2), 0 ≤ λ ≤ 1.
Now,
|λx1 + (1 − λ)x2| ≤ |λx1| + |(1 − λ)x2|
= λ|x1| + (1 − λ)|x2| ≤ 2λ + 2(1 − λ) = 2.
In the same way we can prove that
|λy1 + (1 − λ)y2| ≤ 1.
Hence the set is a convex set.
Show that G = {(x, y) : y² ≥ 4x} is not convex.
Solution: (0,0) and (1,2) are two points in G. A particular convex combination of the two points is
(½·0 + ½·1, ½·0 + ½·2) = (½, 1).
But at (½, 1) we have y² = 1 < 4x = 2, so the point does not belong to G; hence G is not convex.
Convex combination: Let x1, x2 ∈ ℝⁿ; x = λ1x1 + λ2x2 with λ1 = λ, λ2 = (1 − λ), 0 ≤ λ ≤ 1,
is called a convex combination of x1 and x2.
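The counterexample for the parabola region can be checked mechanically (a trivial sketch):

```python
def in_G(x, y):
    """Membership test for G = {(x, y) : y**2 >= 4*x}."""
    return y**2 >= 4 * x

x1, x2 = (0.0, 0.0), (1.0, 2.0)
assert in_G(*x1) and in_G(*x2)

# Midpoint (lambda = 1/2) of the two members:
mid = ((x1[0] + x2[0]) / 2, (x1[1] + x2[1]) / 2)  # (0.5, 1.0)
print(in_G(*mid))  # False: 1.0 < 2.0, so G is not convex
```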
Theorem: Let K ⊆ ℝⁿ be a convex set and let x1, x2, …, xm ∈ K. Then each convex combination
of x1, x2, …, xm belongs to K, i.e., x = Σᵢ λᵢxᵢ ∈ K whenever Σᵢ λᵢ = 1, λᵢ ≥ 0, i = 1, …, m.
Proof:
If m = 1, then x1 ∈ K, λ1 = 1, and x = x1 ∈ K.
For m = 2, let x1, x2 ∈ K and x = λ1x1 + λ2x2, λ1 + λ2 = 1; λ1, λ2 ≥ 0.
Since K is convex we have λx1 + (1 − λ)x2 ∈ K; taking λ = λ1, 1 − λ = λ2 shows the theorem
is true for m = 2.
Assume the theorem is true for some m ≥ 2: if x1, x2, …, xm ∈ K, then x = λ1x1 + ⋯ + λmxm ∈ K
whenever Σᵢ λᵢ = 1, λᵢ ≥ 0.
Let x1, x2, …, xm, xm+1 ∈ K and x′ = μ1x1 + ⋯ + μmxm + μm+1xm+1, Σᵢ μᵢ = 1, μᵢ ≥ 0.
We want to show that x′ ∈ K.
If μm+1 = 1 the claim is trivial, so let μm+1 < 1 and put λ = 1 − μm+1 = μ1 + ⋯ + μm > 0.
Form y = (μ1/λ)x1 + ⋯ + (μm/λ)xm; the coefficients μᵢ/λ are non-negative and sum to 1,
so y ∈ K by the induction hypothesis.
Then x′ = λy + (1 − λ)xm+1 ∈ K, since K is convex and λ + (1 − λ) = 1.
Therefore, the theorem holds true for m + 1 elements.
Hence, the proof is complete.
Theorem: Let K, K1, K2 and Kᵢ ⊆ ℝⁿ, i ∈ I ⊆ ℕ, be convex sets. Then
i. ⋂ᵢ∈I Kᵢ is a convex set
ii. K1 + K2 is a convex set
iii. Σᵢ Kᵢ is a convex set
iv. λK is a convex set, λ ∈ ℝ.
Proof:
i. Let x, y ∈ ⋂ᵢ∈I Kᵢ
⇒ x, y ∈ Kᵢ for each i ∈ I
⇒ λx + (1 − λ)y ∈ Kᵢ for each i ∈ I, since each Kᵢ is convex
⇒ λx + (1 − λ)y ∈ ⋂ᵢ∈I Kᵢ
⇒ ⋂ᵢ∈I Kᵢ is a convex set.
ii. Let x, y ∈ K1 + K2, x = x1 + x2, y = y1 + y2, x1, y1 ∈ K1, x2, y2 ∈ K2. Then
λx + (1 − λ)y = λ(x1 + x2) + (1 − λ)(y1 + y2)
= [λx1 + (1 − λ)y1] + [λx2 + (1 − λ)y2],
where λx1 + (1 − λ)y1 ∈ K1 and λx2 + (1 − λ)y2 ∈ K2
⇒ λx + (1 − λ)y ∈ K1 + K2.
iv. Let x, y ∈ λK, x = λu, y = λv, u, v ∈ K. Then for 0 ≤ t ≤ 1,
tx + (1 − t)y = tλu + (1 − t)λv = λ(tu + (1 − t)v),
where tu + (1 − t)v ∈ K
⇒ tx + (1 − t)y ∈ λK
⇒ λK is convex.
Definition: Let A ⊆ ℝⁿ. Then conv A = ⋂{K : K is convex, A ⊆ K} is called the convex
hull of A. If A is a convex set, then A = conv A.
Let X be a point set; then the convex hull of X, denoted by C(X), is the set of all convex
combinations of sets of points from X.
If the set X consists of a finite number of points, then the convex hull of X is called a convex
polyhedron. If X is a convex set, then C(X) = X.
Theorem: Let A ⊆ ℝⁿ. Then conv A is the set of all convex combinations of elements of A,
i.e., conv A = {x ∈ ℝⁿ : ∃ m, x = Σᵢ₌₁ᵐ λᵢxᵢ, Σᵢ λᵢ = 1, xᵢ ∈ A, λᵢ ≥ 0} =: C.
Proof: First we show that C is convex, i.e., for x, y ∈ C, λx + (1 − λ)y ∈ C, 0 ≤ λ ≤ 1.
Write x = Σᵢ₌₁ᵐ αᵢxᵢ, Σᵢ αᵢ = 1, xᵢ ∈ A, αᵢ ≥ 0 and y = Σⱼ₌₁ᵏ βⱼyⱼ, Σⱼ βⱼ = 1, yⱼ ∈ A, βⱼ ≥ 0.
Put μᵢ = λαᵢ, i = 1, …, m, and νⱼ = (1 − λ)βⱼ, j = 1, …, k. Then
λx + (1 − λ)y = Σᵢ μᵢxᵢ + Σⱼ νⱼyⱼ,
with μᵢ, νⱼ ≥ 0 and all the points xᵢ, yⱼ in A, and
Σᵢ μᵢ + Σⱼ νⱼ = λΣᵢ αᵢ + (1 − λ)Σⱼ βⱼ = λ + (1 − λ) = 1.
Hence λx + (1 − λ)y ∈ C, so C is a convex set.
Next it remains to show that conv A = C.
Since A ⊆ C and C is convex, conv A, being the intersection of all convex sets containing A,
satisfies conv A ⊆ C.
Conversely, let x ∈ C, x = Σᵢ λᵢxᵢ, xᵢ ∈ A, Σᵢ λᵢ = 1, λᵢ ≥ 0. Every convex set K with A ⊆ K
contains all convex combinations of its elements (by the previous theorem), so x ∈ K for
every such K; hence x ∈ conv A.
⇒ C ⊆ conv A.
Thus, conv A = C.
Theorem: Let A, B ⊆ ℝⁿ. Then
i. conv A is the minimal convex set containing A.
ii. if A ⊆ B, then conv A ⊆ conv B.
iii. conv(conv A) = conv A.
Proof: i) conv A = ⋂{K : K is convex, A ⊆ K}
⇒ conv A ⊆ K for every convex set K containing A
⇒ conv A is the minimal convex set containing A.
ii. Suppose A ⊆ B and let x ∈ conv A
⇒ x = Σᵢ λᵢxᵢ, Σᵢ λᵢ = 1, λᵢ ≥ 0, xᵢ ∈ A
⇒ xᵢ ∈ B, so x is a convex combination of elements of B
⇒ x ∈ conv B
⇒ conv A ⊆ conv B.
Definition: Let S be a non-empty subset of ℝⁿ. If S is closed and bounded, then S is said to be a
compact set.
Example:
S = [0, 1] is a compact set.
S = {(x, y) : x² + y² ≤ r²} is also a compact set.
f(x) = ⟨c, x⟩ → min (max)
Subject to
aᵢx ≤ bᵢ, x ≥ 0, i = 1, …, m.
The set S = {x : Ax ≤ b} is called a polyhedron.
Theorem: A bounded polyhedral set is a compact set.
Theorem: Hyper plane is a convex set.
Theorem: A polyhedral set is a convex set.
Definition: Let K ⊆ ℝⁿ be convex. A point p ∈ K is said to be an extreme point of K if and only if p
cannot be expressed as a strict convex combination of any two distinct points in K; that is, p is not an
interior point of any line segment [x1, x2] ⊆ K.
In other words, if p = λx1 + (1 − λ)x2 with λ ∈ (0, 1) and x1, x2 ∈ K, then x1 = x2 = p.
Procedure:
1. Treat each inequality constraint as an equality constraint.
2. Plot each equation in the coordinate plane (2D).
3. Determine the feasible region: this is the region that satisfies all the constraints.
4. Find all extreme points of the feasible region, compute the value of the objective
function at each of them, and compare the values obtained.
Properties of extreme points
Let K ⊆ ℝⁿ be a convex set.
i. If p is an extreme point of K and p ∈ [x1, x2] ⊆ K, then either p = x1 or p = x2.
ii. The point p is an extreme point of K if and only if K − {p} is a convex set.
Proof:
a. If p is an extreme point and p ∈ [x1, x2] ⊆ K,
the proof immediately follows from the definition.
b. (⇒) Assume p is an extreme point of K. We show that M = K − {p} is a convex set.
Let x1, x2 ∈ M. We want to show [x1, x2] ⊆ M.
Since x1, x2 ∈ K with x1 ≠ p, x2 ≠ p,
p is not an interior point of [x1, x2], so p ∉ [x1, x2]
⇒ [x1, x2] ⊆ M
⇒ M is convex.
(⇐) Suppose K − {p} is a convex set. Assume, for contradiction, that p is an interior point of
some segment [x1, x2] ⊆ K with x1 ≠ p and x2 ≠ p. Then x1, x2 ∈ K − {p}, and by convexity
[x1, x2] ⊆ K − {p}; but p ∈ [x1, x2]. This is a contradiction.
⇒ p is an extreme point.
Extreme points are finite in number: there are at most nCm basic solutions to a set of m equations
with n unknowns. Hence the maximum number of basic feasible solutions of an LPP is nCm, which
is finite provided that n is finite.
Extreme points of the convex set K are boundary points, but all boundary points are not
necessarily extreme points.
Every point of the boundary of a circle is an extreme point of the convex set which includes the
boundary and interior of the circle.
The extreme points of a rectangle are its four vertices.
A circle with all its interior points is a convex hull, but not a convex polyhedron.
All convex polyhedra are convex hulls, but not all convex hulls are convex polyhedra.
Values of the objective function at the vertices of the closed region ABC are
Vertices Objective Function Value
A(0,1) 2
B(0,2) 4
C(2,3) 4
The points B and C give the same maximum value of z. It follows that every point between B and C
(on the line segment BC) also gives the same value of z. The problem, therefore, has multiple
optimal solutions, with z_max = 4.
Example: Find a geometric solution for the following LPPs:
a) z = x1 + x2 → max
s.t. 3x1 + x2 ≥ 3
x1 + x2 ≤ 1
x1, x2 ≥ 0
b) z = 20x1 + 30x2 → max
s.t. 3x1 + 3x2 ≤ 36
5x1 + 2x2 ≤ 50
2x1 + 6x2 ≤ 60
x1, x2 ≥ 0
c) z = 2x1 + 10x2 → max
s.t. 2x1 + 5x2 ≤ 16
6x1 ≤ 30
x1, x2 ≥ 0
d) z = 6x1 + 4x2 → max
s.t. 2x1 + x2 ≤ 390
3x1 + 3x2 ≤ 810
0 ≤ x1, 0 ≤ x2 ≤ 200
denoted by ∂f/∂x, provided that the limit exists. Similarly, the partial derivative of f with respect to
y at (x0, y0) is defined by
lim_{h→0} [f(x0, y0 + h) − f(x0, y0)] / h
and is denoted by ∂f/∂y, provided that the limit exists.
In general, if f is a function of n variables x = (x1, …, xn), then the vector of partial derivatives
(the gradient) of f is given by
∇f = (∂f/∂x1, …, ∂f/∂xn)ᵀ.
Steps:
1. Plot the feasible set S on the coordinate plane.
2. Select a point x0 ∈ S and find the value f(x0). Let z0 = f(x0).
3. Plot the level line of f, c1x1 + ⋯ + cnxn = z0, through x0.
4. Translate (move) the level line parallel to itself in the direction of improvement within
the feasible region until the point that maximizes or minimizes f is found, or until we
conclude that the problem has no finite minimum/maximum.
Example 1: f(x, y) = x + y → max, over the region shown in the figure, whose boundary is the
line x + y = 5.
Example 2: f(x, y) = −x + y → max, over the same region (see the figure).
Minimize −x1 − 3x2
subject to
x1 + x2 ≤ 2
−x1 + x2 ≤ 1
x1, x2 ≥ 0
The feasible region is illustrated in the figure below. The first and second constraints represent
points "below" lines 1 and 2, respectively. The nonnegativity constraints restrict the points to be
in the first quadrant. The equations −x1 − 3x2 = z are called the objective contours and are
represented by dotted lines.
In particular, the contour −x1 − 3x2 = 0 passes through the origin. The contours are moved in
the direction −c = (1, 3) as much as possible until the optimal point (1/2, 3/2) is
reached.
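Enumerating the intersection points of the constraint boundaries confirms this optimum (a pure-Python sketch; the helper names are illustrative):

```python
from itertools import combinations

# Constraints a.x <= b, with x1 >= 0 and x2 >= 0 written as -x <= 0.
A = [(1, 1), (-1, 1), (-1, 0), (0, -1)]
b = [2, 1, 0, 0]

def intersect(i, j):
    """Solve A[i].x = b[i], A[j].x = b[j] by Cramer's rule; None if parallel."""
    (a1, a2), (c1, c2) = A[i], A[j]
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        return None
    return ((b[i] * c2 - b[j] * a2) / det, (a1 * b[j] - c1 * b[i]) / det)

def feasible(p):
    return all(ai[0] * p[0] + ai[1] * p[1] <= bi + 1e-9 for ai, bi in zip(A, b))

verts = [p for i, j in combinations(range(4), 2)
         if (p := intersect(i, j)) is not None and feasible(p)]
best = min(verts, key=lambda p: -p[0] - 3 * p[1])
print(best)  # (0.5, 1.5), objective value -5.0
```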
In this example we had a unique optimal solution. Other cases may occur depending upon the
problem structure.
All possible cases that may arise are summarized below (for a minimization problem).
Unique Finite Optimal Solution: If the optimal finite solution is unique, then it occurs at an
extreme point. Figures a and b show a unique optimal solution. In Figure a the feasible region is
bounded; that is, there is a ball around the origin that contains the feasible region. In Figure b the
feasible region is not bounded. In each case, however, the unique optimal solution is finite.
Unbounded Optimal Solution. This case is illustrated in the figure below, where both the feasible
region and the optimal objective value are unbounded. For a minimization problem the contour
cx = z can be moved in the direction −c indefinitely while always intersecting the feasible region.
In this case the optimal objective is unbounded, with value −∞.
Empty Feasible Region. In this case the system of equations and/or inequalities defining the feasible
region is inconsistent. To illustrate, consider the following problem.
Examining the figure, it is clear that there exists no point (x1, x2) satisfying the above
inequalities. The problem is said to be infeasible, inconsistent, or to have an empty feasible region.
Summary
Graphical Solution Methods
Remark: There are two types of polyhedral sets:
1. Bounded polyhedral set: strictly bounded and has a finite number of extreme points.
2. Unbounded polyhedral set: has a finite number of extreme points but is unbounded.
Theorem: If the convex set of F.S. of a LPP is bounded, then any objective function has both
maximum and minimum corresponding to some extreme points of the polyhedral set.
Definition:
Multiple optimal solutions: If a LPP has more than one optimal solution, then the problem is
said to have multiple optimal solution.
Theorem: If a LPP has at least two optimal feasible solution, then there are infinite number of
optimal solutions, which are the convex combination of the initial optimal solutions.
Convex Set
Let E = ℝⁿ.
[x1, x2] = {x ∈ ℝⁿ : x = λx1 + (1 − λ)x2, λ ∈ [0, 1]} is called a closed segment in ℝⁿ.
(x1, x2) = {x ∈ ℝⁿ : x = λx1 + (1 − λ)x2, λ ∈ (0, 1)} is called an open segment in ℝⁿ.
Definition: A set K ⊆ ℝⁿ is said to be a convex set iff for any two points x, y in K,
λx + (1 − λ)y ∈ K for every λ ∈ [0, 1].
Equivalently, if the line segment joining any two distinct points of a set lies entirely in the set,
then we call the set a convex set.
CHAPTER FOUR
4.The Simplex method
Introduction
In the previous chapter, we have seen graphical solutions of a linear programming problem. But
Graphical method cannot be applied to solve LPP involving more than two variables. In such
cases, analytical method is quite helpful. This method also forms a good basis to grasp the more
powerful simplex method.
In this section we introduce basic feasible solutions, and show that they correspond to extreme
points. Since an algebraic characterization of the former (and hence the latter) exists, we shall be
able to move from one basic feasible solution to another until optimality is reached.
General objectives
At the end of this unit the learner will be able to :
▪Remind the formation of linear programming problem
▪Understand how to solve basic feasible solution in LP
▪Consider the representation of the linear programming problem with non-basic variables
▪Explain the interpretation of optimality test
▪Identify the uniqueness and alternative optimal solutions
▪Differentiate the main steps to apply simplex method on linear programming
▪Know the two phase method
Let aj = (a1j, …, amj)ᵀ denote the j-th column of A, and let x = (x1, …, xn)ᵀ denote a vector in ℝⁿ.
Writing A = (B, N), where B is an m × m invertible (basis) matrix, the system Ax = b becomes
Bx_B + Nx_N = b, so that
x_B = B⁻¹b − B⁻¹Nx_N.
Setting x_N = 0 gives x_B = B⁻¹b, a basic solution of the system.
The components of x_B are called basic variables and the components of x_N are
called non-basic variables.
If x_B = B⁻¹b ≥ 0, then x = (x_B, x_N) = (B⁻¹b, 0) is called a basic feasible solution.
A solution is said to be a basic solution if the vectors associated with its nonzero
variables are linearly independent. This condition is both necessary and sufficient.
If all components of the solution corresponding to the basic variables are nonzero, then
the basic solution is called a non-degenerate basic feasible solution.
If some components of the solution corresponding to the basic variables are zero, the
basic solution is called a degenerate basic feasible solution.
Matrix B is called the basic matrix (or simply the basis) and
Matrix N is called the nonbasic matrix.
For a system of equations with n variables and m equations, the number of basic feasible solutions
is less than or equal to
nCm = n! / (m!(n − m)!).
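As a quick check of this bound for small cases (a trivial sketch using the standard library):

```python
from math import comb

# Upper bound on the number of basic solutions: m = 2 equations, n = 3 unknowns.
print(comb(3, 2))  # 3

# m = 2 equations, n = 4 unknowns (as in Example 3 below the bound is 6).
print(comb(4, 2))  # 6
```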
Example 1: Consider the following system of equations. Find all basic and basic feasible
solutions.
2x1 + 4x2 − 2x3 = 10
10x1 + 3x2 + 7x3 = 33
Solution: The set of equations can be written as
a1x1 + a2x2 + a3x3 = b,
where a1 = (2, 10)ᵀ, a2 = (4, 3)ᵀ, a3 = (−2, 7)ᵀ, b = (10, 33)ᵀ and A = (a1, a2, a3).
rank(A) = 2, the number of linearly independent column vectors. Thus the equations are linearly
independent, and we will have three 2 × 2 submatrices, taking two columns at a time. The total
number of basic solutions is at most
3C2 = 3! / (2!(3 − 2)!) = 3.
For B1 = (a2, a3) = [4 −2; 3 7], B1⁻¹ = (1/34)[7 2; −3 4], so
x_B = B1⁻¹b = (1/34)[7 2; −3 4](10, 33)ᵀ = (4, 3)ᵀ.
Collecting all three cases:
(x1, x2, x3) = (3, 1, 0) and (x1, x2, x3) = (0, 4, 3) are basic feasible solutions, while
(x1, x2, x3) = (4, 0, −1) is a basic solution but not a feasible solution.
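The computation above can be automated by enumerating all 2 × 2 bases and solving each by Cramer's rule. The sketch below uses exact rational arithmetic so that the feasibility test is reliable (names are illustrative):

```python
from itertools import combinations
from fractions import Fraction as F

# Columns of A and right-hand side for 2x1+4x2-2x3 = 10, 10x1+3x2+7x3 = 33.
cols = [(F(2), F(10)), (F(4), F(3)), (F(-2), F(7))]
b = (F(10), F(33))

basic, feas = [], []
for i, j in combinations(range(3), 2):
    (a, c), (p, q) = cols[i], cols[j]    # basis matrix [[a, p], [c, q]]
    det = a * q - p * c
    if det == 0:
        continue                          # singular basis: no basic solution
    xi = (b[0] * q - p * b[1]) / det      # Cramer's rule
    xj = (a * b[1] - c * b[0]) / det
    x = [F(0)] * 3
    x[i], x[j] = xi, xj
    basic.append(x)
    if xi >= 0 and xj >= 0:
        feas.append(x)

print(len(basic), len(feas))
```

The run finds the three basic solutions (3, 1, 0), (4, 0, −1), (0, 4, 3), of which two are feasible, matching the hand computation.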
x −1
Example 2: Find all the basic solutions to the following problem:
= + + →
+ + =
+ + =
Also find which of the basic solutions are
a. Basic feasible
b. Non degenerate basic feasible solution
c. Optimal basic feasible solution
Example 3: How many basic feasible solutions are there in the following linearly independent
equations? Find all of them.
2x1 − x2 + 3x3 + x4 = 6
4x1 − 2x2 − x3 + 2x4 = 10
Solution: There are at most 4C2 = 6 basic solutions with two basic and two non-basic variables.
The given equations can be written as
a1x1 + a2x2 + a3x3 + a4x4 = b,
where a1 = (2, 4)ᵀ; a2 = (−1, −2)ᵀ; a3 = (3, −1)ᵀ; a4 = (1, 2)ᵀ; b = (6, 10)ᵀ.
The six square submatrices, taking two columns at a time, are given by
B1 = (a1, a2) = [2 −1; 4 −2], B2 = (a1, a3) = [2 3; 4 −1], B3 = (a1, a4) = [2 1; 4 2],
B4 = (a2, a3) = [−1 3; −2 −1], B5 = (a2, a4) = [−1 1; −2 2], B6 = (a3, a4) = [3 1; −1 2].
Since only B2, B4 and B6 are nonsingular, there are only three basic solutions.
For B2: x_B = B2⁻¹b = (18/7, 2/7)ᵀ ⇒ BS = (18/7, 0, 2/7, 0)
For B4: x_B = B4⁻¹b = (−36/7, 2/7)ᵀ ⇒ BS = (0, −36/7, 2/7, 0)
For B6: x_B = B6⁻¹b = (2/7, 36/7)ᵀ ⇒ BS = (0, 0, 2/7, 36/7)
Of these, (18/7, 0, 2/7, 0) and (0, 0, 2/7, 36/7) are basic feasible solutions.
Example: Two linearly independent equations with three variables are given:
x1 + x2 − x3 = b1
x1 − x2 − x3 = b2
Find, if possible, the basic solutions in which x3 is a non-basic variable.
Consider now the feasible region of an LPP in standard form:
Ax = b, x ≥ 0.
Theorem 1: The collection of extreme points corresponds to the collection of basic feasible
solutions, and both are nonempty, provided that the feasible region is not empty.
Theorem 2: If an optimal solution exists, then an optimal extreme point (or equivalently an
optimal basic feasible solution) exists.
Limitation of the method: Since the number of basic feasible solutions is bounded by nCm,
one may think of simply listing all basic feasible solutions and picking the one with the minimal
objective value. This is not satisfactory, however, for a number of reasons.
Firstly, the number of basic feasible solutions is bounded by nCm, which is large even for
moderate values of m and n. Secondly, some choices of m columns out of the n columns of the
matrix A fail to produce a basic feasible solution, either because B does not have an inverse,
or else because B⁻¹b ≱ 0.
Keys to Simplex method
The simplex method is a clever procedure that moves from an extreme point to another extreme
point, with a better (at least not worse) objective. It also discovers:
Whether the feasible region is empty and
Whether the optimal solution is unbounded.
In practice, the method only enumerates a small portion of the extreme points of the feasible
region.
The key to simplex method lies in recognizing the optimality of a given extreme point solution
based on local consideration without having to (globally) enumerate all extreme points or basic
feasible solutions.
Consider the following linear programming problem:
Minimize cx
Subject to Ax = b; x ≥ 0
where A is an m × n matrix with rank m. Suppose that we have a basic feasible solution (B⁻¹b, 0) whose objective value z₀ is given by
z₀ = cx = (c_B, c_N)(B⁻¹b, 0) = c_B B⁻¹b … (1)
Now let x_B and x_N denote the vectors of basic and nonbasic variables for the given basis. Then feasibility requires that
x_B ≥ 0, x_N ≥ 0, and b = Ax = B x_B + N x_N.
Multiplying the last equation by B⁻¹ and rearranging the terms, we get
x_B = B⁻¹b − B⁻¹N x_N = B⁻¹b − Σ_{j∈R} B⁻¹a_j x_j = B⁻¹b − Σ_{j∈R} y_j x_j … (2)
where R is the current set of indices of the nonbasic variables and y_j = B⁻¹a_j. Noting equations (1) and (2), and letting z denote the objective function value, we get
z = cx = c_B x_B + c_N x_N
  = c_B B⁻¹b − Σ_{j∈R} c_B B⁻¹a_j x_j + Σ_{j∈R} c_j x_j
  = z₀ − Σ_{j∈R} (z_j − c_j) x_j … (3)
where z_j = c_B B⁻¹a_j for each nonbasic variable. Using the foregoing transformations, the linear programming problem may be rewritten as:
Minimize z = z₀ − Σ_{j∈R} (z_j − c_j) x_j
Subject to Σ_{j∈R} y_j x_j + x_B = b̄; x_j ≥ 0 for j ∈ R, x_B ≥ 0 … (4)
where b̄ = B⁻¹b.
Without loss of generality, assume that no row in equation (4) has all zeros in the columns of the nonbasic variables x_j, j ∈ R. Otherwise, the basic variable in such a row is known in value, and the row can be deleted from the program. Now observe that the basic variables simply play the role of slack variables in equation (4). Hence, we can equivalently write the LP in the nonbasic-variable space, i.e., in terms of the nonbasic variables, as follows:
Minimize z = z₀ − Σ_{j∈R} (z_j − c_j) x_j
Subject to Σ_{j∈R} y_j x_j ≤ b̄; x_j ≥ 0 for j ∈ R … (5)
it is clear that the first basic variable dropping to zero corresponds to the minimum ratio:
x_k = b̄_r / y_rk = minimum over 1 ≤ i ≤ m of { b̄_i / y_ik : y_ik > 0 } … (9)
In the absence of degeneracy, b̄_r > 0, and hence x_k = b̄_r / y_rk > 0. From equation (7) and the fact that z_k − c_k > 0, it then follows that z < z₀ and the objective function strictly improves. The basic variables are updated according to
x_B = b̄ − y_k x_k … (10)
From equation (10), x_Br = 0, and hence at most m variables are positive. The corresponding columns in A are a_B1, …, a_B(r−1), a_k, a_B(r+1), …, a_Bm. Note that these columns are linearly independent.
3− +
. x − 2x + x = 2
+ =1
, , , ≥0
Now suppose that x_j is a basic variable; in particular, suppose that x_j is the t-th basic variable, so that x_j = x_Bt, a_j = a_Bt, and c_j = c_Bt. Recall that z_j = c_B B⁻¹a_j = c_B B⁻¹a_Bt. But B⁻¹a_Bt is a vector of zeros except for a one at the t-th position. Hence
z_j − c_j = c_B B⁻¹a_Bt − c_Bt = c_Bt − c_Bt = 0,
that is, the reduced cost z_j − c_j of every basic variable is zero.
Leaving Basis and Blocking Variables
Suppose that we decide to increase a nonbasic variable x_k with a positive z_k − c_k. From equation (7), the larger the value of x_k, the smaller the objective z. As x_k is increased, the basic variables are modified according to equation (8). If the vector y_k has any positive component(s), then the corresponding basic variable(s) decrease as x_k is increased. Therefore, the nonbasic variable x_k cannot be increased indefinitely, because the nonnegativity of the basic variables would be violated. Recall that the first basic variable x_Br that drops to zero is called the blocking variable, because it blocks further increase of x_k. Thus x_k enters the basis and x_Br leaves the basis.
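The minimum-ratio computation that identifies the blocking variable can be sketched as follows; the vectors b̄ and y_k below are hypothetical values, not taken from the text.

```python
import numpy as np

b_bar = np.array([4.0, 5.0, 3.0])    # hypothetical current basic values B^(-1) b
y_k   = np.array([2.0, -1.0, 1.0])   # hypothetical column y_k = B^(-1) a_k

# x_k can grow until the first basic variable hits zero:
# minimize b_bar[i] / y_k[i] over the rows with y_k[i] > 0.
ratios = [(b_bar[i] / y_k[i], i) for i in range(len(y_k)) if y_k[i] > 0]
step, r = min(ratios)
print(step, r)   # x_k enters at value 2.0; the basic variable of row 0 blocks
```

Row 1 has y < 0, so that basic variable only grows and never blocks; the blocking row is the one attaining the minimum ratio.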
Termination with Optimality and Unboundedness
We have discussed a procedure that moves from one basis to an adjacent basis by introducing one variable into the basis and removing another variable from the basis. The criteria for entering and leaving are summarized below:
1. Entering: x_k may enter if z_k − c_k > 0.
2. Leaving: x_Br may leave if b̄_r / y_rk = minimum of { b̄_i / y_ik : y_ik > 0 }.
Consider the problem: Minimize cx subject to Ax = b; x ≥ 0.
Suppose that x* is a basic feasible solution with basis B; that is, x* = (B⁻¹b, 0). Let z* denote the objective value of x*, i.e., z* = c_B B⁻¹b. Suppose further that z_j − c_j ≤ 0 for every nonbasic variable, so that there are no nonbasic variables eligible to enter the basis. Let x be any feasible solution with objective value z. Then from equation (3) we have
z* − z = Σ_{j∈R} (z_j − c_j) x_j … (13)
Since z_j − c_j ≤ 0 and x_j ≥ 0 for all variables, then z* ≤ z, and so x* is an optimal basic feasible solution.
Uniqueness and Alternative Optimal Solutions
We can get more information from equation (13). If z_j − c_j < 0 for all nonbasic components, then the current optimal solution is unique. To show this, let x be any feasible solution that is distinct from x*. Then there is at least one nonbasic component x_j that is positive, because if all nonbasic components were zero, x would not be distinct from x*. From equation (13) it follows that z > z*, and hence x* is the unique optimal solution.
Now consider the case where z_j − c_j ≤ 0 for all nonbasic components, but z_k − c_k = 0 for at least one nonbasic variable x_k. As x_k is increased, we get (in the absence of degeneracy) points that are distinct from x* but have the same objective value (why?). If x_k is increased until it is blocked by a basic variable, we get an alternative optimal basic feasible solution. The process of increasing x_k from zero until it is blocked generates an infinite number of alternative optimal solutions.
Example: Solve the following problem.
Minimize −3x₁ + x₂
Subject to x₁ + 2x₂ + x₃ = 4
−x₁ + x₂ + x₄ = 1
x₁, x₂, x₃, x₄ ≥ 0
Solution:
Consider the basic feasible solution with basis B = (a₁, a₄) = [1 0; −1 1] and B⁻¹ = [1 0; 1 1].
The corresponding point is given by
x_B = (x₁, x₄) = B⁻¹b = [1 0; 1 1](4, 1) = (4, 5), x_N = (x₂, x₃) = (0, 0)
The objective value is −12.
To see if we can improve the solution, calculate z₂ − c₂ and z₃ − c₃ as follows, noting that c_B = (−3, 0) and hence c_B B⁻¹ = (−3, 0):
z₂ − c₂ = c_B B⁻¹a₂ − c₂ = (−3, 0)(2, 1) − 1 = −7
z₃ − c₃ = c_B B⁻¹a₃ − c₃ = (−3, 0)(1, 0) − 0 = −3
Since both z₂ − c₂ < 0 and z₃ − c₃ < 0, the basic feasible solution (x₁, x₂, x₃, x₄) = (4, 0, 0, 5) is the unique optimal solution.
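The reduced-cost calculation in this example can be verified numerically. The sketch below uses the example's data: c = (−3, 1, 0, 0), constraints x₁ + 2x₂ + x₃ = 4 and −x₁ + x₂ + x₄ = 1, and the basis (a₁, a₄).

```python
import numpy as np

# Data of the example: minimize -3x1 + x2 subject to
# x1 + 2x2 + x3 = 4 and -x1 + x2 + x4 = 1, x >= 0.
c = np.array([-3.0, 1.0, 0.0, 0.0])
A = np.array([[ 1.0, 2.0, 1.0, 0.0],
              [-1.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 1.0])

basis = [0, 3]                         # basis B = (a1, a4)
B_inv = np.linalg.inv(A[:, basis])
xB = B_inv @ b                         # values of the basic variables
z0 = float(c[basis] @ xB)              # objective value of this BFS

reduced = {j: float(c[basis] @ (B_inv @ A[:, j]) - c[j]) for j in [1, 2]}
print(xB, z0, reduced)                 # [4. 5.] -12.0 {1: -7.0, 2: -3.0}
```

Both reduced costs are strictly negative, confirming that (4, 0, 0, 5) is the unique optimum.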
Now consider a new problem in which the objective function −2x₁ − 4x₂ is to be minimized over the same region. Again, consider the same point (4, 0, 0, 5). The objective value is −8, and c_B = (−2, 0), c_B B⁻¹ = (−2, 0). Calculate z₂ − c₂ and z₃ − c₃ as follows:
z₂ − c₂ = c_B y₂ − c₂ = (−2, 0)(2, 3) + 4 = 0
z₃ − c₃ = c_B y₃ − c₃ = (−2, 0)(1, 1) − 0 = −2
In this case, the given basic feasible solution is optimal, but it is no longer the unique optimal solution: by increasing x₂, a family of optimal solutions is obtained. Indeed, if we increase x₂, keep x₃ = 0, and modify x₁ and x₄ accordingly, we get
(x₁, x₄) = B⁻¹b − y₂ x₂ = (4, 5) − (2, 3) x₂
For any x₂ ≤ 5/3, the solution (x₁, x₂, x₃, x₄) = (4 − 2x₂, x₂, 0, 5 − 3x₂) is a feasible optimal solution; at x₂ = 5/3 the blocking variable x₄ leaves the basis. Note that the new objective function contours are parallel to the hyperplane corresponding to the first constraint; that is why we obtain alternative optimal solutions in this example. In general, whenever the optimal objective function contour contains a face of dimension greater than zero, we have alternative optimal solutions.
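The family of alternative optima can be checked numerically: along the segment parametrized by x₂, the constraints stay satisfied and the objective −2x₁ − 4x₂ stays constant. This sketch uses the second example's data.

```python
import numpy as np

# Second example: minimize -2x1 - 4x2 over the same constraint region.
c = np.array([-2.0, -4.0, 0.0, 0.0])
A = np.array([[ 1.0, 2.0, 1.0, 0.0],
              [-1.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 1.0])

values = []
for t in [0.0, 1.0, 5.0 / 3.0]:          # x2 = t along the optimal edge
    x = np.array([4 - 2 * t, t, 0.0, 5 - 3 * t])
    assert np.allclose(A @ x, b)          # still satisfies the constraints
    assert np.all(x >= -1e-12)            # still nonnegative
    values.append(float(c @ x))
print(values)                             # objective stays at -8 throughout
```

At t = 5/3 the component x₄ hits zero, which is exactly the blocking point where the alternative optimal basic feasible solution is reached.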
Unboundedness
Suppose that we have a basic feasible solution of the system Ax = b, x ≥ 0 with objective value z₀. Now consider the case when we find a nonbasic variable x_k with z_k − c_k > 0 and y_k ≤ 0. This variable is eligible to enter the basis, since increasing it will improve the objective value. From equation (3) (with only x_k increased from zero) we have
z = z₀ − (z_k − c_k) x_k
Since we are minimizing z and since z_k − c_k > 0, it is to our benefit to increase x_k indefinitely, which makes z go to −∞. The reason we were not able to do this before was that the increase in x_k was blocked by a basic variable; blocking puts a "ceiling" on x_k beyond which a basic variable becomes negative. But if blocking is not encountered, there is no reason to stop increasing x_k. This is precisely the case when y_k ≤ 0: recall from equation (8) that x_B = b̄ − y_k x_k, and so if y_k ≤ 0, then x_k can be increased indefinitely without any of the basic variables becoming negative. Therefore, the solution x (where x_B = b̄ − y_k x_k, x_k is arbitrarily large, and the other nonbasic components are zero) is feasible, and its objective value z = z₀ − (z_k − c_k) x_k approaches −∞ as x_k approaches +∞.
To summarize: if we have a basic feasible solution with z_k − c_k > 0 for some nonbasic variable x_k, and meanwhile y_k ≤ 0, then the optimum is unbounded with objective value −∞. This is obtained by increasing x_k indefinitely and adjusting the values of the current basic variables, and it is equivalent to moving along the ray
{ (b̄, 0) + x_k(−y_k, e_k) : x_k ≥ 0 }
where e_k is a vector of zeros except for a 1 at the position of x_k. Note that the vertex of the ray is the current basic feasible solution (b̄, 0), and the direction of the ray is d = (−y_k, e_k), where the 1 appears in the k-th position. It may be noted that
cd = (c_B, c_N)(−y_k, e_k) = −c_B y_k + c_k = −z_k + c_k
But −z_k + c_k < 0 (because x_k was eligible to enter the basis). Hence cd < 0 along a direction d of the feasible region, which is the necessary and sufficient condition for unboundedness.
2. Calculate z_k − c_k = maximum of { z_j − c_j : j ∈ R }, the maximum over the nonbasic variables. If z_k − c_k ≤ 0, then stop, with the current basic feasible solution as an optimal solution. Otherwise go to step 3 with x_k as the entering variable. (This strategy for selecting an entering variable is known as Dantzig's rule.)
3. Solve the system B y_k = a_k (that is, y_k = B⁻¹a_k). If y_k ≤ 0, then stop with the conclusion that the optimal solution is unbounded along the ray { (b̄, 0) + x_k(−y_k, e_k) : x_k ≥ 0 }, where e_k is an (n − m)-vector of zeros except for a 1 at the k-th position. If y_k ≰ 0, go to step 4.
4. Let x_k enter the basis. The index r of the blocking variable x_Br, which leaves the basis, is determined by the following minimum ratio test:
b̄_r / y_rk = minimum of { b̄_i / y_ik : y_ik > 0 }
Update the basis B, where a_k replaces a_Br; update the index set R; and repeat step 1.
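The steps above can be sketched in code. This is a minimal revised-simplex sketch, not a robust solver: it assumes a known starting basic feasible solution, uses dense inverses, and does not guard against cycling under degeneracy. The test instance is the slack-start example solved later in this section (minimize x₁ + x₂ − 4x₃).

```python
import numpy as np

def simplex(c, A, b, basis):
    """Steps 1-4 above: minimize c x subject to A x = b, x >= 0, starting
    from a basic feasible solution whose basis column indices are given."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B_inv = np.linalg.inv(A[:, basis])
        b_bar = B_inv @ b                        # current basic values
        cB = c[basis]
        nonbasic = [j for j in range(n) if j not in basis]
        # Dantzig's rule: the most positive reduced cost z_j - c_j enters.
        zc = {j: cB @ (B_inv @ A[:, j]) - c[j] for j in nonbasic}
        k = max(zc, key=zc.get)
        if zc[k] <= 1e-9:                        # all z_j - c_j <= 0: optimal
            x = np.zeros(n)
            x[basis] = b_bar
            return "optimal", x, float(c @ x)
        y = B_inv @ A[:, k]                      # y_k = B^(-1) a_k
        if np.all(y <= 1e-9):                    # y_k <= 0: unbounded ray
            return "unbounded", None, float("-inf")
        # Minimum ratio test picks the blocking (leaving) variable.
        ratio, r = min((b_bar[i] / y[i], i) for i in range(m) if y[i] > 1e-9)
        basis[r] = k

# Example instance: minimize x1 + x2 - 4x3 with slacks x4, x5, x6.
c = np.array([1.0, 1.0, -4.0, 0.0, 0.0, 0.0])
A = np.array([[ 1.0, 1.0, 2.0, 1.0, 0.0, 0.0],
              [ 1.0, 1.0,-1.0, 0.0, 1.0, 0.0],
              [-1.0, 1.0, 1.0, 0.0, 0.0, 1.0]])
b = np.array([9.0, 2.0, 4.0])
status, x, z = simplex(c, A, b, basis=[3, 4, 5])   # start from the slacks
print(status, x[:3], z)   # optimal, x = (1/3, 0, 13/3), z = -17.0
```

Recomputing B⁻¹ from scratch at every iteration is wasteful; a production implementation would update a factorization of B instead, but the sketch keeps the correspondence with steps 1-4 explicit.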
In the absence of degeneracy, b̄_r > 0 and hence x_k = b̄_r / y_rk > 0. Therefore, the difference between the objective values at the previous iteration and the current iteration is (z_k − c_k) x_k > 0. In other words, the objective function strictly decreases at each iteration, and hence the basic feasible solutions generated by the simplex method are distinct. Since there is only a finite number of basic feasible solutions, the method stops in a finite number of steps, either with a finite optimal solution or with an unbounded optimal solution.
Theorem (Finite Convergence): In the absence of degeneracy (and assuming feasibility), the simplex method stops in a finite number of iterations, either with an optimal basic feasible solution or with the conclusion that the optimal value is unbounded.
In the presence of degeneracy, there is the possibility of cycling in an infinite loop.
Minimize z
Subject to z − c_B x_B − c_N x_N = 0 (15)
B x_B + N x_N = b; x_B, x_N ≥ 0 (16)
From equation (16) we have
x_B + B⁻¹N x_N = B⁻¹b (17)
Multiplying (17) by c_B and adding to equation (15), we get
z + 0·x_B + (c_B B⁻¹N − c_N) x_N = c_B B⁻¹b (18)
Currently x_N = 0, and from equations (17) and (18) we get x_B = B⁻¹b and z = c_B B⁻¹b.
Also, from (17) and (18) we can conveniently represent the current basic feasible solution with basis B in the following tableau. Here we think of z as a basic variable to be minimized. The objective row will be referred to as row zero, and the remaining rows are rows 1 through m. The right-hand-side column (RHS) holds the values of the basic variables (including the objective function). The basic variables are identified in the far left column.

      z    x_B    x_N                RHS
 z    1     0     c_B B⁻¹N − c_N    c_B B⁻¹b
 x_B  0     I     B⁻¹N              B⁻¹b
The tableau in which z and x_B have been solved for in terms of x_N is said to be in canonical form. Not only does this tableau give us the value of the objective function and of the basic variables on the right-hand side, but it also gives us all the information we need to proceed with the simplex method. Actually, the cost row gives us c_B B⁻¹N − c_N, which consists of the z_j − c_j values for the nonbasic variables. So row zero tells us whether we are at an optimal solution (if z_j − c_j ≤ 0 for all j), and which nonbasic variable to increase otherwise. If x_k is increased, then the vector y_k = B⁻¹a_k, which is stored in rows 1 through m under the variable x_k, helps us determine by how much x_k can be increased. If y_k ≤ 0, then x_k can be increased indefinitely without being blocked, and the optimal objective is unbounded. On the other hand, if y_k ≰ 0, that is, if y_k has at least one positive component, then the increase in x_k will be blocked by one of the current basic variables, which drops to zero. The minimum ratio test (which can be performed since b̄ = B⁻¹b and y_k are both available in the tableau) determines the blocking variable. We would like to have a scheme that does the following:
a. Update the basic variables and their values.
b. Update the z_j − c_j values of the new nonbasic variables.
Pivoting at y_rk (where x_k enters the basis and the r-th basic variable leaves) updates the tableau as follows:
1. Divide row r by y_rk, so that the entry under x_k in row r becomes 1; the new right-hand side of row r is b̄_r / y_rk.
2. For each row i = 1, …, m with i ≠ r, subtract y_ik times the new row r: the entry under x_j becomes y_ij − (y_rj / y_rk) y_ik, and the right-hand side becomes b̄_i − (y_ik / y_rk) b̄_r.
3. Update row zero by subtracting (z_k − c_k) times the new row r: the entry under x_j becomes (z_j − c_j) − (y_rj / y_rk)(z_k − c_k), and the right-hand side becomes c_B B⁻¹b − (b̄_r / y_rk)(z_k − c_k).
After pivoting, the column of x_k contains a 1 in row r and zeros elsewhere (including row zero), so x_k is basic and the leaving variable is nonbasic.
Initialization: find a starting basic feasible solution with basis B and form the initial tableau:

      z    x_B    x_N                RHS
 z    1     0     c_B B⁻¹N − c_N    c_B B⁻¹b
 x_B  0     I     B⁻¹N              B⁻¹b
Main step
Let z_k − c_k = maximum of { z_j − c_j : j ∈ R }. If z_k − c_k ≤ 0, then stop; the current solution is optimal. Otherwise, examine y_k. If y_k ≤ 0, then stop; the optimal solution is unbounded along the ray
{ (b̄, 0) + x_k(−y_k, e_k) : x_k ≥ 0 }, where e_k is a vector of zeros except for a 1 at the k-th position. If y_k ≰ 0, determine the index r of the leaving variable by the minimum ratio test b̄_r / y_rk = minimum of { b̄_i / y_ik : y_ik > 0 }. Update the tableau by pivoting at y_rk. Update the basic and nonbasic variables, where x_k enters the basis and x_Br leaves the basis, and repeat the main step.
Example: Minimize x₁ + x₂ − 4x₃
Subject to x₁ + x₂ + 2x₃ ≤ 9; x₁ + x₂ − x₃ ≤ 2; −x₁ + x₂ + x₃ ≤ 4; x₁, x₂, x₃ ≥ 0
Introducing the nonnegative slack variables x₄, x₅, x₆, the problem becomes:
Minimize x₁ + x₂ − 4x₃ + 0x₄ + 0x₅ + 0x₆
Subject to x₁ + x₂ + 2x₃ + x₄ = 9; x₁ + x₂ − x₃ + x₅ = 2; −x₁ + x₂ + x₃ + x₆ = 4; x_j ≥ 0 for j = 1, …, 6
Since b ≥ 0, we can choose our initial basis as B = [a₄, a₅, a₆] = I₃, and we indeed have B⁻¹b = b ≥ 0.
Iteration 1 (basis x₄, x₅, x₆; pivot element in brackets):

      z   x₁   x₂   x₃   x₄   x₅   x₆   RHS
 z    1   −1   −1    4    0    0    0     0
 x₄   0    1    1    2    1    0    0     9
 x₅   0    1    1   −1    0    1    0     2
 x₆   0   −1    1   [1]   0    0    1     4

Iteration 2 (x₃ entered, x₆ left):

      z   x₁   x₂   x₃   x₄   x₅   x₆   RHS
 z    1    3   −5    0    0    0   −4   −16
 x₄   0   [3]  −1    0    1    0   −2     1
 x₅   0    0    2    0    0    1    1     6
 x₃   0   −1    1    1    0    0    1     4

Iteration 3 (x₁ entered, x₄ left):

      z   x₁    x₂    x₃   x₄    x₅   x₆    RHS
 z    1    0    −4     0   −1     0   −2    −17
 x₁   0    1   −1/3    0   1/3    0  −2/3    1/3
 x₅   0    0     2     0    0     1    1      6
 x₃   0    0    2/3    1   1/3    0   1/3   13/3

This is the optimal tableau, since z_j − c_j ≤ 0 for all nonbasic variables. The optimal solution is given by x₁ = 1/3, x₂ = 0, x₃ = 13/3, with objective value −17.
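The final tableau can be verified directly from its basis: recomputing B⁻¹b recovers the optimal point, and recomputing every z_j − c_j confirms the optimality criterion. This sketch uses the example's data with final basis (x₁, x₅, x₃).

```python
import numpy as np

c = np.array([1.0, 1.0, -4.0, 0.0, 0.0, 0.0])
A = np.array([[ 1.0, 1.0, 2.0, 1.0, 0.0, 0.0],
              [ 1.0, 1.0,-1.0, 0.0, 1.0, 0.0],
              [-1.0, 1.0, 1.0, 0.0, 0.0, 1.0]])
b = np.array([9.0, 2.0, 4.0])

basis = [0, 4, 2]                       # final basis: x1, x5, x3
B_inv = np.linalg.inv(A[:, basis])
x = np.zeros(6)
x[basis] = B_inv @ b                    # x1 = 1/3, x5 = 6, x3 = 13/3
zval = float(c @ x)
zc = [float(c[basis] @ (B_inv @ A[:, j]) - c[j]) for j in range(6)]
print(x[:3], zval)                      # (1/3, 0, 13/3) and -17.0
print([round(v, 6) for v in zc])        # every z_j - c_j <= 0: optimality
```

The reduced costs come out as (0, −4, 0, −1, 0, −2), matching row zero of the optimal tableau.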
      z   x_B   x_N                RHS
 z    1    0    c_B B⁻¹N − c_N    c_B B⁻¹b
 x_B  0    I    B⁻¹N              B⁻¹b

The tableau may be thought of as a representation of both the basic variables x_B and the cost z in terms of the nonbasic variables x_N. The nonbasic variables can therefore be thought of as independent variables, whereas z and x_B are dependent variables. From row zero we have
z = c_B B⁻¹b − (c_B B⁻¹N − c_N) x_N = z₀ − Σ_{j∈R} (z_j − c_j) x_j
and hence the rate of change of z as a function of a typical nonbasic variable x_j, namely ∂z/∂x_j, is −(z_j − c_j). Also, the basic variables can be represented in terms of the nonbasic variables as follows:
x_B = B⁻¹b − B⁻¹N x_N = b̄ − Σ_{j∈R} y_j x_j
so each row gives a basic variable as a function of the nonbasic variables x_j. In other words, if x_j increases by one unit (with the other nonbasic variables held at zero), then the basic variable x_Bi decreases by y_ij units. The entries y_j can also be interpreted as follows: recall that a_j = B y_j, and hence y_j represents the linear combination of the basic columns needed to represent a_j.
The simplex tableau also gives us a convenient way of predicting the rate of change of the objective function and of the values of the basic variables as a function of the right-hand-side vector b. Since the right-hand side usually represents scarce resources, we can predict the rate of change of the objective function as the availability of the resources is varied. In particular, since
z = c_B B⁻¹b − Σ_{j∈R} (z_j − c_j) x_j
we have ∂z/∂b = c_B B⁻¹. If the original identity consists of slack variables with zero costs, the elements of row zero in the final tableau under the slacks give c_B B⁻¹ − 0 = c_B B⁻¹, which is precisely this rate of change. Similarly, the rate of change of the basic variables as a function of the right-hand-side vector b is given by ∂x_B/∂b = B⁻¹.
Note that if the tableau corresponds to a degenerate basic feasible solution, then as a nonbasic variable actually increases, at least one of the basic variables may immediately become negative, destroying feasibility.
Identifying B⁻¹ from the simplex tableau
The basis inverse matrix B⁻¹ can be identified from the simplex tableau as follows. Assume that the original tableau contains an identity matrix. The process of reducing the basis matrix B of the original tableau to an identity matrix in the current tableau is equivalent to premultiplying rows 1 through m of the original tableau by B⁻¹. This also converts the identity matrix of the original tableau into B⁻¹. Therefore, B⁻¹ can be extracted from the current tableau as the submatrix in rows 1 through m under the original identity columns.
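This extraction rule can be checked numerically. The sketch below builds the original tableau [A | I | b] for the earlier slack-start example, premultiplies it by B⁻¹ for the basis (x₁, x₅, x₃), and confirms that the block under the original identity columns equals B⁻¹.

```python
import numpy as np

# Rows 1..m of the original tableau for the earlier example: [A | I | b],
# where the identity belongs to the slack variables x4, x5, x6.
A = np.array([[ 1.0, 1.0, 2.0],
              [ 1.0, 1.0,-1.0],
              [-1.0, 1.0, 1.0]])
b = np.array([9.0, 2.0, 4.0])
T = np.hstack([A, np.eye(3), b.reshape(-1, 1)])

# Current basis (x1, x5, x3): columns 0, 4, 2 of [A | I].
B = T[:, [0, 4, 2]]
T_cur = np.linalg.inv(B) @ T            # premultiply rows 1..m by B^(-1)

# The submatrix under the original identity columns is exactly B^(-1).
B_inv_from_tableau = T_cur[:, 3:6]
ok = bool(np.allclose(B_inv_from_tableau, np.linalg.inv(B)))
print(ok)                               # True
print(B_inv_from_tableau)
```

The same trick is what lets hand computations read off B⁻¹ from the slack columns of any intermediate tableau.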
Block pivoting
Initial Basic Feasible Solution:
Recall that the simplex method starts with a basic feasible solution and moves to an improved basic feasible solution, until the optimal point is reached or else unboundedness of the objective function is verified. However, in order to initialize the simplex method, a basis B with b̄ = B⁻¹b ≥ 0 must be available. We shall show that the simplex method can always be initiated with a very simple basis, namely the identity.
Easy case:
Suppose that the constraints are Ax ≤ b, x ≥ 0, where A is an m × n matrix and b is a nonnegative m-vector. By adding the slack vector x_s, the constraints are put in the following canonical standard form: Ax + x_s = b, x ≥ 0, x_s ≥ 0. Note that the new m × (n + m) constraint matrix (A, I) has rank m, and a basic feasible solution of this system is at hand by letting x_s be the basic vector and x the nonbasic vector. Hence, at this starting basic feasible solution, x_s = b and x = 0, and the simplex method can now be applied.
Some bad cases
On many occasions, finding a starting basic feasible solution is not as straightforward as in the case just described. To illustrate, suppose that the constraints are of the form Ax ≤ b, x ≥ 0, but the vector b is not nonnegative. In this case, after introducing the slack vector x_s, we cannot let x = 0, because x_s = b would violate the nonnegativity requirement.
Another situation occurs when the constraints are of the form Ax ≥ b, x ≥ 0, where b ≰ 0. After subtracting the surplus vector x_s, we get Ax − x_s = b, x ≥ 0, x_s ≥ 0. Again, there is no obvious way of picking a basis B from the matrix (A, −I) with B⁻¹b ≥ 0. In general, any linear programming problem can be transformed into a problem of the following form:
Minimize cx
Subject to Ax = b, x ≥ 0
where b ≥ 0 (if b_i < 0, the i-th row can be multiplied by −1). This can be accomplished by introducing slack variables and by simple manipulation of the constraints and variables. If A contains an identity submatrix, then an immediate basic feasible solution is at hand by simply letting B = I; since b ≥ 0, we have B⁻¹b = b ≥ 0. Otherwise, something else must be done. A trial-and-error approach may be futile, particularly if the problem is infeasible.
Example
a. Consider the following constraints
+ ≤ ;− + ≤ ; , ≥
After adding the slack variables x₃ and x₄, we get
+ + = ;− + + = ; , , , ≥
An obvious starting basic feasible solution is given by the slacks, x_s = (x₃, x₄) = (4, 1), with x = (x₁, x₂) = (0, 0).
b. Consider the following constraints:
+ + ≤ ; − + + ≥ ; , ≥
Note that x₂ is unrestricted, so the change of variables x₂ = x₂′ − x₂′′ with x₂′, x₂′′ ≥ 0 is made. Also, a slack variable and a surplus variable are introduced. This leads to the following constraints in canonical standard form:
− + + + = 6 ; −2 +2 +3 +2 − = 3;
, , , , , ≥0
Note that the constraint matrix does not contain an identity and no obvious feasible basis B can
be extracted.
c. Consider the following constraints:
+ − ≤− ;− + + ≤ ; , , ≥
Since the right-hand side of the first constraint is negative, the first inequality is multiplied by −1. Introducing the slack variables then leads to the following system:
+ − − = ;− + + + = ; , , , , ≥
Note again that this constraint matrix contains no identity matrix.
Artificial variables
After manipulating the constraints and introducing slack variables, suppose that the constraints are put in the format Ax = b, x ≥ 0, where A is an m × n matrix and b ≥ 0 is an m-vector. Further suppose that A has no identity submatrix. In this case we resort to artificial variables to get a starting basic feasible solution, and then use the simplex method itself to get rid of these artificial variables.
To illustrate, suppose that we change the restrictions by adding an artificial vector x_a, leading to the system Ax + x_a = b, x ≥ 0, x_a ≥ 0. Note that, by construction, we have forced an identity matrix corresponding to the artificial vector. This gives an immediate basic feasible solution of the new system, namely x_a = b and x = 0. Even though we now have a starting basic feasible solution and the simplex method can be applied, we have in effect changed the problem.
In order to get back to our original problem, we must force these artificial variables to zero, because Ax = b with x ≥ 0 if and only if Ax + x_a = b with x ≥ 0 and x_a = 0. In other words, artificial variables are only a tool to get the simplex method started; we must guarantee that these variables eventually drop to zero.
Example: Consider the following constraints:
+ ≥ ; − + ≥ ; + ≤ ; , ≥
Introducing the surplus variables x₃, x₄ and the slack variable x₅, we get
+ − = ;− + − = ; + + = ; , , , , ≥
This constraint matrix has no identity submatrix. We could introduce three artificial variables to obtain a starting basic feasible solution. Note, however, that the slack variable x₅ appears in the last row with coefficient 1, so we only need to introduce two artificial variables, x₆ and x₇, for the first two rows. This leads to the following system:
+ − + = ;− + − + = ; + + =
, , , , , & ≥
Now we have an immediate starting basic feasible solution of the new system, namely x₆ = 6, x₇ = 4, and x₅ = 5, and we would like the artificial variables to drop to zero.
Phase I
Solve the following linear program, starting with the basic feasible solution x = 0 and x_a = b:
Minimize 1x_a (the sum of the artificial variables)
Subject to Ax + x_a = b
x, x_a ≥ 0
If at optimality x_a ≠ 0, then stop: the original problem has no feasible solution. Otherwise, let the basic and nonbasic legitimate variables be x_B and x_N. (We are assuming that all artificial variables have left the basis.) Go to Phase II.
Phase II
Solve the following linear program, starting with the basic feasible solution x_B = B⁻¹b and x_N = 0:
Minimize c_B x_B + c_N x_N
Subject to B x_B + N x_N = b; x_B, x_N ≥ 0
Example: Minimize x₁ − 2x₂
Subject to x₁ + x₂ ≥ 2
−x₁ + x₂ ≥ 1
x₂ ≤ 3
x₁, x₂ ≥ 0
After introducing the surplus variables x₃, x₄ and the slack variable x₅, the following problem is obtained:
Minimize x₁ − 2x₂
Subject to x₁ + x₂ − x₃ = 2
−x₁ + x₂ − x₄ = 1
x₂ + x₅ = 3
x₁, x₂, x₃, x₄, x₅ ≥ 0
An initial identity submatrix is not available, so we introduce the artificial variables x₆ and x₇ in the first two rows. Phase I starts by minimizing x₀ = x₆ + x₇.
Phase I
Tableau 1 (artificial costs not yet eliminated from row zero):

      z   x₁   x₂   x₃   x₄   x₅   x₆   x₇   RHS
 z    1    0    0    0    0    0   −1   −1     0
 x₆   0    1    1   −1    0    0    1    0     2
 x₇   0   −1    1    0   −1    0    0    1     1
 x₅   0    0    1    0    0    1    0    0     3

Tableau 2 (rows 1 and 2 added to row zero; pivot element in brackets):

      z   x₁   x₂   x₃   x₄   x₅   x₆   x₇   RHS
 z    1    0    2   −1   −1    0    0    0     3
 x₆   0    1    1   −1    0    0    1    0     2
 x₇   0   −1   [1]   0   −1    0    0    1     1
 x₅   0    0    1    0    0    1    0    0     3

Tableau 3 (x₂ entered, x₇ left):

      z   x₁   x₂   x₃   x₄   x₅   x₆   x₇   RHS
 z    1    2    0   −1    1    0    0   −2     1
 x₆   0   [2]   0   −1    1    0    1   −1     1
 x₂   0   −1    1    0   −1    0    0    1     1
 x₅   0    1    0    0    1    1    0   −1     2

Tableau 4 (x₁ entered, x₆ left):

      z   x₁   x₂   x₃    x₄    x₅   x₆    x₇    RHS
 z    1    0    0    0     0     0   −1    −1      0
 x₁   0    1    0  −1/2   1/2    0   1/2  −1/2    1/2
 x₂   0    0    1  −1/2  −1/2    0   1/2   1/2    3/2
 x₅   0    0    0   1/2   1/2    1  −1/2  −1/2    3/2
This is the end of Phase I: we have a starting basic feasible solution (x₁, x₂) = (1/2, 3/2). Now we are ready to start Phase II, where the original objective is minimized starting from the extreme point (1/2, 3/2). The artificial variables are disregarded from any further consideration.
Phase II
Tableau 5 (original costs, not yet in canonical form):

      z   x₁   x₂   x₃    x₄    x₅   RHS
 z    1   −1    2    0     0     0     0
 x₁   0    1    0  −1/2   1/2    0    1/2
 x₂   0    0    1  −1/2  −1/2    0    3/2
 x₅   0    0    0   1/2   1/2    1    3/2

Tableau 6 (row zero restored to canonical form; pivot element in brackets):

      z   x₁   x₂   x₃    x₄    x₅   RHS
 z    1    0    0   1/2   3/2    0   −5/2
 x₁   0    1    0  −1/2  [1/2]   0    1/2
 x₂   0    0    1  −1/2  −1/2    0    3/2
 x₅   0    0    0   1/2   1/2    1    3/2

Tableau 7 (x₄ entered, x₁ left; the current point is (0, 2)):

      z   x₁   x₂   x₃   x₄   x₅   RHS
 z    1   −3    0    2    0    0    −4
 x₄   0    2    0   −1    1    0     1
 x₂   0    1    1   −1    0    0     2
 x₅   0   −1    0   [1]   0    1     1

Tableau 8 (x₃ entered, x₅ left):

      z   x₁   x₂   x₃   x₄   x₅   RHS
 z    1   −1    0    0    0   −2    −6
 x₄   0    1    0    0    1    1     2
 x₂   0    0    1    0    0    1     3
 x₃   0   −1    0    1    0    1     1
Since z_j − c_j ≤ 0 for all nonbasic variables, the optimal point (0, 3) with objective −6 is reached. Note that Phase I moved from the infeasible point (0, 0) to the infeasible point (0, 1), and finally to the feasible point (1/2, 3/2). From this extreme point, Phase II moved to the feasible point (0, 2) and finally to the optimal point (0, 3). The purpose of Phase I is to get us to an extreme point of the feasible region, while Phase II takes us from this feasible point to an optimal point.
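Both endpoints of this example can be verified numerically from their bases: the Phase I exit basis (x₁, x₂, x₅) yields the point (1/2, 3/2), and the final Phase II basis (x₂, x₃, x₄) yields the optimum (0, 3) with all reduced costs nonpositive.

```python
import numpy as np

# Data of the example: minimize x1 - 2x2 subject to
# x1 + x2 - x3 = 2, -x1 + x2 - x4 = 1, x2 + x5 = 3, x >= 0.
c = np.array([1.0, -2.0, 0.0, 0.0, 0.0])
A = np.array([[ 1.0, 1.0,-1.0, 0.0, 0.0],
              [-1.0, 1.0, 0.0,-1.0, 0.0],
              [ 0.0, 1.0, 0.0, 0.0, 1.0]])
b = np.array([2.0, 1.0, 3.0])

# Phase I endpoint: basis (x1, x2, x5) -- the artificials have left.
p1 = np.linalg.solve(A[:, [0, 1, 4]], b)
print(p1[:2])                           # (x1, x2) = (1/2, 3/2)

# Phase II endpoint: basis (x2, x3, x4) at the optimal point.
basis = [1, 2, 3]
B_inv = np.linalg.inv(A[:, basis])
x = np.zeros(5)
x[basis] = B_inv @ b
zval = float(c @ x)
zc = [float(c[basis] @ (B_inv @ A[:, j]) - c[j]) for j in range(5)]
print(x[:2], zval)                      # (0, 3) with objective -6
print([round(v, 6) for v in zc])        # all <= 0: optimal
```

The reduced costs come out as (−1, 0, 0, 0, −2), matching row zero of Tableau 8.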
Analysis of the two-phase method
At the end of Phase I we have either x_a ≠ 0 or x_a = 0.
Case 1: x_a ≠ 0
If x_a ≠ 0 at optimality of the Phase I problem, then the original problem has no feasible solution: if there were an x ≥ 0 with Ax = b, then (x, 0) would be a feasible solution of the Phase I problem with objective value zero, contradicting the fact that the Phase I optimum is positive. The tableaux below illustrate this case.
Tableau 1 (x₃ is a slack, x₄ a surplus, x₅ the artificial variable):

      z   x₁   x₂   x₃   x₄   x₅   RHS
 z    1    0    0    0    0   −1     0
 x₃   0    1    1    1    0    0     4
 x₅   0    2    3    0   −1    1    18

Tableau 2 (row 2 added to row zero; pivot element in brackets):

      z   x₁   x₂   x₃   x₄   x₅   RHS
 z    1    2    3    0   −1    0    18
 x₃   0    1   [1]   1    0    0     4
 x₅   0    2    3    0   −1    1    18

Tableau 3 (x₂ entered, x₃ left):

      z   x₁   x₂   x₃   x₄   x₅   RHS
 z    1   −1    0   −3   −1    0     6
 x₂   0    1    1    1    0    0     4
 x₅   0   −1    0   −3   −1    1     6
The optimality criterion of the simplex method, namely z_j − c_j ≤ 0 for all nonbasic variables, is satisfied, but the artificial variable remains basic at value 6 > 0. We conclude that the original problem has no feasible solution.
Case 2: x_a = 0
Subcase 1: All artificial variables are out of the basis.
Since at the end of Phase I we have a basic feasible solution, and since x_a is out of the basis, the basis consists entirely of legitimate variables. If the legitimate vector x is decomposed accordingly into (x_B, x_N), then at the end of Phase I we have the following tableau (row zero refers to the Phase I objective):

      z   x_B   x_N    x_a   RHS
 z    1    0     0     −1     0
 x_B  0    I    B⁻¹N   B⁻¹    b̄

Now Phase II can be started with the original objective, after discarding the columns corresponding to x_a. (Note, however, that an artificial variable should never be permitted to enter the basis again.)
The z_j − c_j values for the nonbasic variables are given by the vector c_B B⁻¹N − c_N, which can easily be calculated from the B⁻¹N matrix stored in the final tableau of Phase I. The following initial tableau of Phase II is constructed. Starting with this tableau, the simplex method is used to find the optimal solution.
      z   x_B   x_N                RHS
 z    1    0    c_B B⁻¹N − c_N    c_B B⁻¹b
 x_B  0    I    B⁻¹N              B⁻¹b
Subcase 2: Some artificial variable remains in the basis at zero level. In this case, pick a nonbasic legitimate variable x_j with a nonzero entry in the artificial variable's row; the right-hand side of that row is zero. Then x_j enters the basis and the artificial variable leaves the basis, and the objective value remains constant. With this slight modification, the simplex method is used to solve Phase II.
Z X Y ℎ ℎ RHS
1 2 -1 0 1 0 8 4
3 2 0 -1 0 1 12 6
The simplex values (z_j − c_j) of the nonbasic variables x and y are positive, and the maximum corresponds to y, so y becomes the entering variable. From the minimum ratio test, the blocking basic variable becomes nonbasic and y becomes a basic variable.
Second Iteration
Z X Y ℎ ℎ RHS
½ 1 -1/2 0 ½ 0 4 8
2 0 1 -1 -1 1 4 2
x becomes a candidate entering variable; from the minimum ratio test, x enters the basis and v₂ leaves the basis.
Z x Y ℎ ℎ RHS
0 1 -3/4 ¼ ¾ -1/4 3
x 1 0 ½ -1/2 -1/2 ½ 2
The simplex values of the nonbasic variables are all negative, so the current basic feasible solution is optimal.
Example: − − ( )
− − − =1
− + +2 − = 1; , , , ≥0
Introducing artificial variables, we write problem P as the modified problem P(M):
− − + + ( )
− − − + =1
− + +2 − + = 1; , , , ≥0
Taking the artificial variables as the initial basic variables leads to the following sequence of tableaux.
x₁, x₂, …, x_n ≥ 0
b̄_r / y_rk = minimum of { b̄_i / y_ik : y_ik > 0 }
This is a very simple rule, but it restricts the choice of both the entering and the leaving variables. In this rule (known as Bland's rule), variables are first ordered in some sequence, say x₁, x₂, …, x_n, without loss of generality. Then, of all nonbasic variables with z_j − c_j > 0, the one with the smallest index is selected to enter the basis. Similarly, of all the candidates to leave the basis (those that tie in the usual minimum ratio test), the one with the smallest index is chosen as the exiting variable.
B, use the down arrow to get to the variable rows and type zeros in cells B7 and B8. If you have more variables, continue entering zeros. Below is a screen print with formulas shown. Note: To show formulas, press the "Ctrl" and "`" (grave accent/tilde) keys at the same time; press them again to change back.
12. In Column B use the down arrow to get to the constraints rows. Type the following formulas,
replacing the variables with the cell names:
In cell B11, constraint x1 + 4 x2 should be typed as " = B7 + 4 * B8"
In cell B12, constraint 4 x1 + x2 should be typed as "= 4* B7 + B8"
13. In Column C, type the augments (constants on the right side of the constraint inequalities) for
each of the constraints.
In cell C10, type "Augments".
In cell C11, type "9".
In cell C12, type "6".
14. Click in cell B4 (the objective function) before going to the Data menu and selecting
“Solver.” The Data menu should be located on the far right side of the ribbon.
Make sure the Set Objective box is set to “$B$4” (If not, close the window, click in cell B4, and
open the Data Solver option again.) Note: "Objective" refers to the cell location of the objective
function formula. On the To: line select “Max” to find the maximum value or select “Min” to
find the minimum value. Click in the By Changing Variable Cells: box and type variables “B7,
B8” separated by commas. Note: Do not type the absolute signs (dollar signs) because Excel will
add them in for you. If you have more than two variables, make sure you include them in the list.
Click in the Subject to the Constraints box and then click the Add button. The Add Constraint
window will pop up
Type the first constraint cell location. For example, enter "B11" in the Cell Reference box. Select
the appropriate sign from “<=”, “=”, or “>=” as it appears in the linear programming problem by
clicking on the down arrow in the middle box. In the Constraint box, enter the first augment cell
location of "C11". Click the Add button. Enter the rest of the constraints and augments in the
same manner. When completed, click OK
15. In the Solver Parameters window, click on "Make Unconstrained Variables Non-Negative"
In the box for Select a Solving Method:, choose "Simplex LP". (In older versions of Excel, you
set these selections by choosing “Assume Linear Model” and “Assume Non-Negative” in the
Options window.)
16. In the Solver Parameters window click the Solve button.
17. A Solver Results window will appear telling you that the solver found a solution. Click
beside "Keep Solver Solution". Select Answer and Sensitivity from the Reports box and then
click OK.
18. A sheet tab labeled “Answer Report 1” should now be at the bottom of the screen. Click on it
to see the results.
19. If you need to re-run the Solver, type zeros in the cells for the variables. Also, right-click on
the tabs displaying the results (Answer Report, Sensitivity Report), and delete each of them.
20. Interpret the results. In this sample problem, the maximum is listed on row 16 under Final
Value. It equals 14. The maximum occurs when x1 = 1 and x2 = 2. These answers are shown on
row 21 and row 22 under Final Value.
Summary
To use simplex method
An equation remains an equation even after the introduction of artificial variables on only one side, which is not correct from a mathematical point of view; it is valid only if the values of the artificial variables equal zero.
In solving a problem of this type by the simplex method, one must therefore be sure that the values of all artificial variables are zero at the optimal stage.
If it is not possible to bring all artificial variables to zero level at the optimal stage, we conclude that the problem has no feasible solution.
The following cases may arise at the optimal stage:
No artificial variable is present in the basis, which indicates that all artificial variables are at zero level at the optimal stage. Thus the solution obtained is a basic feasible solution of the original problem.
Some artificial variables exist in the basis at a positive level at the optimal stage. In this case, there exists no feasible solution.
All artificial variables are at zero level, but at least one artificial variable is present in the basis at the optimal stage. Here the solution under test is optimal, and we conclude that some constraints may be redundant. By redundancy we mean that the system has more than enough constraints.
Remarks:
Since the modified problem P(M) has a feasible solution (namely x = 0 and x_a = b), one of the following two cases may arise while solving it by the simplex method:
1. We arrive at an optimal solution of P(M).
2. We conclude that P(M) has an unbounded optimal solution.
Case 1: Finite optimal solution of P(M)
a. The optimal solution of P(M) has all artificial variables at value zero. In this case, the original problem has a finite optimal solution.
b. At the optimal solution, not all artificial variables are zero. In this case, we conclude that the original problem has no feasible solution.
Chapter Five
P: Minimize 6x₁ + 8x₂
Subject to 3x₁ + x₂ ≥ 4
5x₁ + 2x₂ ≥ 7
x₁, x₂ ≥ 0
D: Maximize 4w₁ + 7w₂
Subject to 3w₁ + 5w₂ ≤ 6
w₁ + 2w₂ ≤ 8
w₁, w₂ ≥ 0
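The pair P, D can be solved side by side to see duality at work: both optimal objective values coincide. The sketch below enumerates the vertices of each two-variable region (this works here because both optima are attained at vertices; it is not a general LP solver).

```python
import numpy as np
from itertools import combinations

def solve_2d(G, h, c, sense):
    """Enumerate vertices of {x : G x <= h} (2 variables) and optimize c x."""
    best, best_x = None, None
    for i, j in combinations(range(len(G)), 2):
        M = G[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        v = np.linalg.solve(M, h[[i, j]])
        if np.all(G @ v <= h + 1e-9):              # feasible vertex
            val = float(c @ v)
            if best is None or (val < best if sense == "min" else val > best):
                best, best_x = val, v
    return best, best_x

# Primal P: min 6x1 + 8x2, 3x1 + x2 >= 4, 5x1 + 2x2 >= 7, x >= 0
zp, xp = solve_2d(np.array([[-3.0,-1.0],[-5.0,-2.0],[-1.0,0.0],[0.0,-1.0]]),
                  np.array([-4.0,-7.0, 0.0, 0.0]),
                  np.array([6.0, 8.0]), "min")
# Dual D: max 4w1 + 7w2, 3w1 + 5w2 <= 6, w1 + 2w2 <= 8, w >= 0
zd, wd = solve_2d(np.array([[3.0,5.0],[1.0,2.0],[-1.0,0.0],[0.0,-1.0]]),
                  np.array([6.0, 8.0, 0.0, 0.0]),
                  np.array([4.0, 7.0]), "max")
print(zp, zd)     # equal optimal values: strong duality holds
```

Both values come out as 8.4, with the primal optimum on the x₁ axis and the dual optimum on the w₂ axis.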
Standard Form of Duality
Another equivalent definition of duality applies when the constraints are equalities. Suppose that the primal linear program is given in the form:
P: Minimize cx
Subject to Ax = b, x ≥ 0
Then the dual linear program is defined by:
D: Maximize wb
Subject to wA ≤ c
w unrestricted
Example:
P: Minimize 6x₁ + 8x₂
Subject to 3x₁ + x₂ − x₃ = 4
5x₁ + 2x₂ − x₄ = 7
x_j ≥ 0, j = 1, 2, 3, 4
D: Maximize 4w₁ + 7w₂
Subject to 3w₁ + 5w₂ ≤ 6
w₁ + 2w₂ ≤ 8
−w₁ ≤ 0
−w₂ ≤ 0
w₁, w₂ unrestricted
Formulation of the dual problem
Given one of the definitions, canonical or standard, it is easy to demonstrate that the other definition is
valid. For example, suppose that we accept the standard form as a definition and wish to demonstrate that
the canonical form is correct. By adding slack variables to the canonical form of a linear program, we
may apply the standard form of duality to obtain the dual problem.
P: Minimize cx
Subject to Ax − x_s = b; x, x_s ≥ 0
Applying the standard form of duality, the dual is: Maximize wb subject to wA ≤ c and −w ≤ 0. But since −w ≤ 0 is the same as w ≥ 0, we obtain the canonical form of the dual problem.
Lemma 1 : The dual of the dual is the primal.
Mixed Forms of Duality
In practice, many linear programs contain some constraints of the "less than or equal to" type, some of the
"greater than or equal to" type and some of the "equal to" type. Also, variables may be " ≥ 0," " ≤ 0," or
"unrestricted." In theory, this presents no problem since we may apply the transformation techniques to
convert any "mixed" problem to one of the primal or dual forms discussed above, after which the dual can
be readily obtained. In practice such conversions can be tedious. Fortunately, it is not necessary actually
to make these conversions, and it is possible to give immediately the dual of any linear program.
Consider the following linear program with mixed constraint types:
Minimize cx
Subject to A₁x ≥ b₁
A₂x = b₂
A₃x ≤ b₃
x ≥ 0
By converting this problem to standard form (subtracting a surplus vector from the "≥" rows and adding a slack vector to the "≤" rows) and applying the standard form of duality, the dual of this problem is found to be:
Maximize w₁b₁ + w₂b₂ + w₃b₃
Subject to w₁A₁ + w₂A₂ + w₃A₃ ≤ c
w₁ ≥ 0, w₃ ≤ 0, w₂ unrestricted
From this example we see that "greater than or equal to" constraints in the minimization problem give rise
to " ≥ 0" variables in the maximization problem.
Also, "equal to" constraints in the minimization problem give rise to "unrestricted" variables in the maximization problem; and "less than or equal to" constraints in the minimization problem give rise to "≤ 0" variables in the maximization problem. The complete results are summarized in the table below.
c. Maximize −x1 − x2
s.t. −2x1 − x2 ≤ 4
−2x1 + 4x2 ≤ −8
−x1 + 3x2 ≤ −4
x1, x2 ≥ 0
Answer c: Minimize 4w1 − 8w2 − 4w3
s.t. −2w1 − 2w2 − w3 ≥ −1
−w1 + 4w2 + 3w3 ≥ −1
w1, w2, w3 ≥ 0
Primal-Dual Relationship
The relationship between Objective values
Weak duality
Theorem: If x is a feasible solution for the primal problem (minimization form) and w is a feasible solution for the dual, then
cx ≥ wb
Consider the canonical form of duality and let x and w be feasible solutions of the primal and dual programs respectively. Then Ax ≥ b, x ≥ 0, wA ≤ c, and w ≥ 0. Multiplying Ax ≥ b on the left by w ≥ 0 and wA ≤ c on the right by x ≥ 0, we get cx ≥ wAx ≥ wb.
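The inequality cx ≥ wb can be illustrated numerically. The LP data below is our own small example, not one from the text:

```python
# Weak duality on a small canonical pair:
#   Primal: min 2x1 + 3x2  s.t. x1 + 2x2 >= 4,  3x1 + x2 >= 3,  x >= 0
#   Dual:   max 4w1 + 3w2  s.t. w1 + 3w2 <= 2,  2w1 + w2 <= 3,  w >= 0
c, b = [2.0, 3.0], [4.0, 3.0]
A = [[1.0, 2.0], [3.0, 1.0]]

def primal_feasible(x):
    return all(xi >= 0 for xi in x) and all(
        sum(A[i][j] * x[j] for j in range(2)) >= b[i] for i in range(2))

def dual_feasible(w):
    return all(wi >= 0 for wi in w) and all(
        sum(w[i] * A[i][j] for i in range(2)) <= c[j] for j in range(2))

x = [4.0, 0.0]   # a (non-optimal) primal feasible point: cx = 8
w = [1.0, 0.0]   # a dual feasible point: wb = 4
assert primal_feasible(x) and dual_feasible(w)
cx = sum(ci * xi for ci, xi in zip(c, x))
wb = sum(wi * bi for wi, bi in zip(w, b))
print(cx, wb)    # 8.0 4.0 -- cx >= wb, as the theorem guarantees
```

Any primal feasible value bounds every dual feasible value from above, whichever pair of feasible points is chosen.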
Lemma 2
The objective function value for any feasible solution to the minimization problem is always greater than
or equal to the objective function value for any feasible solution to the maximization problem. In
particular, the objective value of any feasible solution of the minimization problem gives an upper bound
on the optimal objective of the maximization problem. Similarly, the objective value of any feasible
solution of the maximization problem is a lower bound on the optimal objective of the minimization
problem.
Corollary 1
If x0 and w0 are feasible solutions to the primal and dual problems such that cx0 = w0b, then x0 and w0 are optimal solutions to their respective problems.
Corollary 2
If either problem has an unbounded objective value, then the other problem possesses no feasible
solution.
Duality and the Kuhn-Tucker Optimality Conditions
A necessary and sufficient condition for x* to be an optimal point to the linear program
Minimize cx subject to Ax ≥ b, x ≥ 0
is that there exists a vector w* such that
1. Ax* ≥ b, x* ≥ 0
2. w*A ≤ c, w* ≥ 0
3. w*(Ax* − b) = 0
4. (c − w*A)x* = 0
Condition 1 above simply requires that the optimal point x* must be feasible to the primal.
Condition 2 indicates that the vector w* must be a feasible point for the dual problem.
From conditions 3 and 4 above, we find that cx* = w*Ax* = w*b. Hence w* must be an optimal solution to the dual problem.
The Kuhn-Tucker optimality conditions for the dual problem imply the existence of a primal
feasible solution whose objective is equal to that of the optimal dual.
Strong Duality
Theorem: If the primal problem (minimization canonical form) has an optimal feasible solution x* = (x1*, x2*, …, xn*), then the dual also has an optimal solution w* = (w1*, w2*, …, wm*) such that
cx* = w*b
Lemma 3
If one problem possesses an optimal solution, then both problems possess optimal solutions and the two
optimal objective values are equal.
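Strong duality can be checked by brute force on a tiny pair by enumerating the candidate vertices of each feasible region. The LP below and the `vertices` helper are our own illustrative sketch, not material from the module:

```python
# Strong duality on a tiny pair:
#   Primal: min 2x1 + 3x2  s.t. x1 + 2x2 >= 4,  3x1 + x2 >= 3,  x >= 0
#   Dual:   max 4w1 + 3w2  s.t. w1 + 3w2 <= 2,  2w1 + w2 <= 3,  w >= 0
from itertools import combinations

def vertices(rows, rhs):
    # intersect every pair of boundary lines in 2-D (Cramer's rule)
    pts = []
    for (a1, b1), (a2, b2) in combinations(zip(rows, rhs), 2):
        det = a1[0]*a2[1] - a1[1]*a2[0]
        if abs(det) > 1e-12:
            pts.append(((b1*a2[1] - b2*a1[1]) / det,
                        (a1[0]*b2 - a2[0]*b1) / det))
    return pts

A, b, c = [[1, 2], [3, 1]], [4, 3], [2, 3]
# boundaries: the two constraints plus the axes x1 = 0, x2 = 0
prim = [p for p in vertices(A + [[1, 0], [0, 1]], b + [0, 0])
        if min(p) >= -1e-9 and all(sum(r[j]*p[j] for j in range(2)) >= rh - 1e-9
                                   for r, rh in zip(A, b))]
primal_opt = min(c[0]*x1 + c[1]*x2 for x1, x2 in prim)

Ad, bd = [[1, 3], [2, 1]], [2, 3]   # dual rows = columns of A, rhs = c
dual = [p for p in vertices(Ad + [[1, 0], [0, 1]], bd + [0, 0])
        if min(p) >= -1e-9 and all(sum(r[j]*p[j] for j in range(2)) <= rh + 1e-9
                                   for r, rh in zip(Ad, bd))]
dual_opt = max(b[0]*w1 + b[1]*w2 for w1, w2 in dual)
print(primal_opt, dual_opt)   # the two optimal values coincide (both 31/5)
```

The primal minimum and the dual maximum agree, as the theorem asserts.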
3w1 + w2 ≤ 3
w1, w2 ≥ 0
The feasible region for the dual problem is given below.
The optimal solution for the dual is w1* = 4/5, w2* = 3/5 with objective value 5. Using the weak theorem of complementary slackness, since w1*, w2* > 0, the corresponding primal constraints must be tight:
x1* + 3x2* = 4 and 2x1* + x2* = 3.
From these two equations we get x1* = 1 and x2* = 1, with optimal objective value cx* = 5. Thus the primal optimal point is obtained from the duality theorems and the dual optimal point.
Minimize cx
subject to Ax = b, x ≥ 0
Let B be a basis that is not necessarily feasible and consider the following tableau.
In certain instances it is difficult to find a starting basic solution that is feasible (that is, all bi ≥ 0) to a linear program without adding artificial variables. In these same instances it is often possible to find a starting basic, but not necessarily feasible, solution that is dual feasible (that is, all zj − cj ≤ 0 for a minimization problem). In such cases it is useful to develop a variant of the simplex method that would produce a series of simplex tableaux that maintain dual feasibility and complementary slackness and strive toward primal feasibility.
Consider the above tableau representing a basic solution at some iteration. Suppose that the tableau is dual feasible (that is, zj − cj ≤ 0 for a minimization problem). Then, if the tableau is also primal feasible (that is, all bi ≥ 0), we have the optimal solution. Otherwise, consider some br < 0. By selecting row r as a pivot row and some column k such that yrk < 0 as a pivot column, we can make the new right-hand side br′ ≥ 0. Through a series of such pivots we hope to make all bi ≥ 0 while maintaining all zj − cj ≤ 0 and thus achieve optimality. The pivot column k is determined by the following minimum ratio test:
(zk − ck)/yrk = min{ (zj − cj)/yrj : yrj < 0 } …………… (∗)
Note that the new entries in row 0 after pivoting are given by:
(zj − cj)′ = (zj − cj) − (yrj / yrk)(zk − ck)
Multiplying both sides of (∗) by yrj < 0, we get (yrj / yrk)(zk − ck) ≥ zj − cj, that is, (zj − cj)′ ≤ 0. To summarize, if the pivot column is chosen according to equation (∗), then the new basis obtained by pivoting at yrk is still dual feasible. Moreover, since br < 0, yrk < 0, and zk − ck ≤ 0, we have −br(zk − ck)/yrk ≥ 0, and the dual objective is nondecreasing from one iteration to the next.
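The pivoting rules above can be collected into a bare-bones dual simplex routine. The code below is an illustrative sketch of our own, not the module's algorithm verbatim; it stores reduced costs as cj − zj ≥ 0, which is equivalent to the text's zj − cj ≤ 0 convention, and uses exact fractions to avoid rounding:

```python
# Dual simplex for  min c.x  s.t.  Ax <= b,  x >= 0,
# started from the slack basis (dual feasible whenever c >= 0).
from fractions import Fraction as F

def dual_simplex(A, b, c):
    m, n = len(A), len(c)
    # tableau rows [y | rhs], starting from the slack basis
    T = [[F(A[i][j]) for j in range(n)] + [F(0)] * m + [F(b[i])] for i in range(m)]
    for i in range(m):
        T[i][n + i] = F(1)
    red = [F(cj) for cj in c] + [F(0)] * m    # reduced costs c_j - z_j >= 0
    basis = list(range(n, n + m))
    while True:
        r = min(range(m), key=lambda i: T[i][-1])   # most negative rhs
        if T[r][-1] >= 0:
            break                                   # primal feasible: optimal
        cand = [j for j in range(n + m) if T[r][j] < 0]
        if not cand:
            raise ValueError("primal infeasible")
        # minimum ratio test of equation (*): min red_j / |y_rj| over y_rj < 0
        k = min(cand, key=lambda j: red[j] / -T[r][j])
        piv = T[r][k]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][k] != 0:
                f = T[i][k]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        rk = red[k]
        red = [red[j] - rk * T[r][j] for j in range(n + m)]
        basis[r] = k
    x = [F(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x

# min 2x1 + 3x2  s.t.  x1 + x2 >= 3,  x1 + 2x2 >= 4  (as <= with negated rows)
x = dual_simplex(A=[[-1, -1], [-1, -2]], b=[-3, -4], c=[2, 3])
print(x)   # [Fraction(2, 1), Fraction(1, 1)]: x = (2, 1), cost 7
```

Each pass keeps the reduced costs nonnegative (dual feasibility) while driving the right-hand sides toward nonnegativity, exactly the behaviour described above.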
Given the primal basis B, recall that w = cBB⁻¹. Substituting in equation (1), we get
(w, c − wA) = (cBB⁻¹, c − cBB⁻¹A) = (cBB⁻¹, 0, cN − cBB⁻¹N) …………… (2)
Note that the components of (2) lead naturally to a dual basis. Since both w and (cN − cBB⁻¹N) are not necessarily zero, the vector w and the last n − m components of (2) form the dual basis. In particular, the dual basis corresponding to the primal basis B is given by the matrix below.
The rank of the preceding matrix is n. The primal basis is feasible if B⁻¹b ≥ 0, and the dual basis is feasible if (cN − cBB⁻¹N) ≥ 0. Even if these conditions do not hold, the primal and dual bases are complementary in the sense that the complementary slackness condition (c − wA)x = 0 holds, as shown below:
(c − wA)x = (0, cN − cBB⁻¹N)(xB, 0)ᵗ = 0
To summarize, during any dual simplex iteration we have a primal basis that is not necessarily feasible, and a complementary dual feasible basis. At termination primal feasibility is attained, and so all the Kuhn-Tucker optimality conditions hold.
Recall that in the dual simplex method we begin with a basic (not necessarily feasible) solution to the primal problem and a complementary basic feasible solution to the dual problem. The dual simplex method proceeds, by pivoting, through a series of dual basic feasible solutions until the associated complementary primal basic solution is feasible, thus satisfying all of the Kuhn-Tucker conditions for optimality.
x ≥ 0, w unrestricted
Let w be an initial dual feasible solution, that is, waj ≤ cj for all j. By complementary slackness, if waj = cj, then xj is allowed to be positive, and we attempt to attain primal feasibility from among these variables. Let Q = { j : waj − cj = 0 }, that is, the set of indices of primal variables allowed to be positive. Then the phase I problem that attempts to find a feasible solution to the primal problem among variables in the set Q becomes:
Minimize x0 = Σ xai (the sum of the artificial variables)
subject to Σj∈Q ajxj + xa = b
xj ≥ 0, j ∈ Q
xa ≥ 0
We utilize the artificial vector xa to obtain a starting basic feasible solution to the phase I problem. The phase I problem is sometimes called the restricted primal problem.
Denote the optimal objective value of the foregoing problem by x0. At optimality of the phase I problem either x0 = 0 or x0 > 0.
When x0 = 0, we have a feasible solution to the primal problem since all artificials are zeros. Furthermore, we have a dual feasible solution, and the complementary slackness condition (cj − waj)xj = 0 holds because either j ∈ Q, in which case cj − waj = 0, or else j ∉ Q, in which case xj = 0. Therefore, we have an optimal solution of the overall problem whenever x0 = 0.
If x0 > 0, primal feasibility is not achieved and we must construct a new dual solution that would admit a new variable to the restricted primal problem in such a way that x0 might be decreased. We shall modify the dual vector w such that all the basic primal variables in the restricted problem remain in the new restricted primal problem, and in addition, at least one primal variable that did not belong to the set Q would be passed to the restricted primal problem. Furthermore, this variable would reduce x0 if introduced in the basis. In order to construct such a dual vector, consider the following dual of the phase I problem:
Maximize vb
subject to vaj ≤ 0, j ∈ Q
v ≤ 1
Let v* be an optimal solution to the foregoing problem. Then, if a real variable xj is a member of the optimal basis for the restricted primal, the associated dual constraint must be tight, that is, v*aj = 0. Also, the criterion for basis entry in the restricted primal problem is that the associated dual constraint be violated, that is, v*aj > 0.
However, no variable presently in the restricted primal has this property, since the restricted primal is optimal. For j ∉ Q, compute v*aj. If v*aj > 0, then if xj could be passed to the restricted primal problem it would be a candidate to enter the basis with the potential of a further decrease in x0. Therefore, we must find a way to force some variable xj with v*aj > 0 into the set Q.
Construct the following dual vector w′, where θ > 0:
w′ = w + θv*
w′aj − cj = (w + θv*)aj − cj = (waj − cj) + θv*aj …………… (3)
Note that v*aj ≤ 0 for j ∈ Q. This implies that w′aj − cj ≤ 0 for j ∈ Q.
In particular, if xj with j ∈ Q is a basic variable in the restricted primal, then v*aj = 0 and w′aj − cj = 0, permitting xj in the new restricted primal problem.
If j ∉ Q and v*aj ≤ 0, then from equation (3) and waj − cj < 0, we have w′aj − cj ≤ 0.
Finally, consider j ∉ Q with v*aj > 0. Examining equation (3) and noting that waj − cj < 0 for j ∉ Q, it is evident that we can choose θ > 0 such that w′aj − cj ≤ 0 for all such j, with at least one component equal to zero. In particular, define θ as follows:
θ = min{ −(waj − cj)/(v*aj) : v*aj > 0 } = −(wak − ck)/(v*ak) …………… (4)
By definition of θ above and from equation (3), we see that w′ak − ck = 0.
Furthermore, for each j with v*aj > 0, noting equations (3) and (4), we have w′aj − cj ≤ 0.
To summarize, modifying the dual vector as detailed above leads to a new feasible dual solution where w′aj − cj ≤ 0 for all j. Furthermore, all the variables that belonged to the restricted primal basis are passed to the new restricted primal. In addition, a new variable xk, which is a candidate to enter the basis, is passed to the restricted primal problem. Hence we continue from the present restricted primal basis by entering xk, which leads to a potential reduction in x0.
There are many other variants of the simplex method. Thus far we have discussed only the
primal and dual methods. There are obvious generalizations that combine these two methods.
Algorithms that perform both primal and dual steps are referred to as primal-dual algorithms and
there are a number of such algorithms. We present here one simple algorithm of this form called
the parametric primal-dual.
It is most easily discussed in an example.
The above example can easily be put in canonical form by addition of slack variables. However,
neither primal feasibility nor primal optimality conditions will be satisfied. We will arbitrarily
consider the above example as a function of the parameter θ in Tableau 6. Clearly, if we choose θ
large enough, this system of equations satisfies the primal feasibility, and primal optimality
conditions. The idea of the parametric primal-dual algorithm is to choose θ large initially so that
these conditions are satisfied, and attempt to reduce θ to zero through a sequence of pivot
operations. If we start with θ = 4 and let θ approach zero, primal optimality is violated when θ <
3. If we were to reduce θ below 3, the objective-function coefficient of x2 would become positive. Therefore we perform a primal simplex pivot by introducing x2 into the basis. We determine the variable to leave the basis by the minimum-ratio rule of the primal simplex method: the leaving variable exits the basis, and the new canonical form is then shown in Tableau 7.
The optimality conditions are satisfied for 7/10 ≤ θ ≤ 3. If we were to reduce θ below 7/10, the right-hand-side value of the third constraint would become negative. Therefore, we perform a dual simplex pivot by dropping the corresponding basic variable from the basis. We determine the variable to enter the basis by the rules of the dual simplex method:
After the pivot is performed the new canonical form is given in Tableau 8.
Since waj − cj ≤ 0 for all j, and by assumption v*aj ≤ 0 for all j, it follows from equation (3) that w′ is a dual feasible solution for all θ > 0. Furthermore, the dual objective is
w′b = (w + θv*)b = wb + θv*b
Since v*b = x0, and the latter is positive, w′b can be increased indefinitely by choosing θ arbitrarily large. Therefore the dual is unbounded and hence the primal is infeasible.
The Primal-Dual Algorithm (Minimization Problem)
Initialization Step
Choose a vector w such that waj − cj ≤ 0 for all j.
Main Step
1. Let Q = { j : waj − cj = 0 } and solve the following restricted primal problem:
Minimize x0 = Σ xai
subject to Σj∈Q ajxj + xa = b
xj ≥ 0, j ∈ Q
xa ≥ 0
Denote the optimal objective by x0. If x0 = 0, stop; an optimal solution is obtained. Otherwise, let v* be the optimal dual solution to the foregoing restricted primal problem.
2. If v*aj ≤ 0 for all j, then stop; the dual is unbounded and the primal is infeasible. Otherwise let
θ = min{ −(waj − cj)/(v*aj) : v*aj > 0 }
Replace w by w′ = w + θv*. Repeat step 1.
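The dual-vector update of step 2 can be illustrated in a few lines. The numbers below are made up purely to exercise the formula for θ; they are not an example from the module:

```python
# One dual-vector update of the primal-dual algorithm (step 2).
# Given the columns a_j, costs c_j, current dual w and phase-I dual v*,
# compute theta by equation (4) and the new dual vector w' = w + theta*v*.

def primal_dual_step(w, vstar, columns, c):
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    slacks = [cj - dot(w, a) for a, cj in zip(columns, c)]   # c_j - w.a_j >= 0
    rates = [dot(vstar, a) for a in columns]                 # v*.a_j
    if all(r <= 0 for r in rates):
        return None                        # dual unbounded, primal infeasible
    theta = min(s / r for s, r in zip(slacks, rates) if r > 0)
    return [wi + theta * vi for wi, vi in zip(w, vstar)], theta

columns = [[1, 0], [0, 1], [1, 1]]   # a_1, a_2, a_3 (hypothetical data)
c = [2, 3, 4]
w, vstar = [1, 1], [1, 0]            # w is dual feasible for this data
new_w, theta = primal_dual_step(w, vstar, columns, c)
print(theta, new_w)                  # theta = 1, w' = (2, 1)
```

After the update, the constraint for a_1 becomes tight (c_1 − w′a_1 = 0), so x_1 joins the set Q, exactly as the derivation requires.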
Summary
Dual linear programming
Suppose that the primal linear program is given in the form:
P: Minimize cx
s.t. Ax ≥ b, x ≥ 0
Then the dual linear program is defined by:
D: Maximize wb
s.t. wA ≤ c, w ≥ 0
Remark: There is exactly one dual variable for each primal constraint and exactly one dual
constraint for each primal variable.
Primal-Dual Relationship
The relationship between Objective values
Weak duality
Theorem: If x is a feasible solution for the primal problem (minimization form) and w is a feasible solution for the dual, then
cx ≥ wb
Consider the canonical form of duality and let x and w be feasible solutions of the primal and dual programs respectively. Then Ax ≥ b, x ≥ 0, wA ≤ c, and w ≥ 0. Multiplying Ax ≥ b on the left by w ≥ 0 and wA ≤ c on the right by x ≥ 0, we get cx ≥ wAx ≥ wb.
Review exercise
CHAPTER 6
6. Sensitivity Analysis
6.1 Introduction
In linear programming, sensitivity analysis plays a vital role in describing the nature of a problem: it gives a direct test of the sensitivity of a particular problem to variation in a component of the cost vector, the requirement vector, and so on. It is in general a post-optimality test. It determines the range over which a component of the cost or requirement vector may vary, all other components remaining unchanged, such that the optimal solution remains unaffected, i.e. optimality remains undisturbed. Such a change is a discrete change. Sensitivity analysis also gives a clear idea of how the nature of the problem changes due to the addition or deletion of a single variable, the removal or addition of a constraint, or a change in an element aij of the coefficient matrix. Since it is possible to assess the effect of these variations on a problem, the technique is extremely helpful from a commercial point of view: a problem can be adjusted suitably so as to enhance its prospects from the economic point of view. For example, since the procedure can determine the extreme values of a component bi of b for which the optimal basis remains unchanged, it can be used together with knowledge of shadow prices (accounting prices) to obtain an overall maximum profit.
There are a number of questions that could be asked concerning the sensitivity of an optimal
solution to changes in the data. In this chapter we will address those that can be answered most
easily. Every commercial linear-programming system provides this elementary sensitivity
analysis, since the calculations are easy to perform using the tableau associated with an optimal
solution. There are two variations in the data that invariably are reported: objective function and
required vector. The objective-function ranges refer to the range over which an individual
coefficient of the objective function can vary, without changing the basis associated with an
optimal solution. In essence, these are the ranges on the objective-function coefficients over
which we can be sure the values of the decision variables in an optimal solution will remain
unchanged. The right hand-side ranges (required vector) refer to the range over which an
individual right hand-side value can vary, again without changing the basis associated with an
optimal solution. These are the ranges on the right hand-side values over which we can be sure
the values of the shadow prices and reduced costs will remain unchanged. Further, associated
with each range is information concerning how the basis would change if the range were
exceeded.
Sensitivity analysis is a systematic study of how sensitive solutions are to (small) changes in the data.
Case (2): Let xBr be the r-th basic variable (r = 1, 2, …, m) and suppose the cost component cBr changes to cBr + ΔcBr. The change in cBr affects zj − cj for all j not in the basis. Due to the change,
(zj − cj)* = Σi≠r cBi yij − cj + (cBr + ΔcBr) yrj
= Σi cBi yij − cj + ΔcBr yrj
= zj − cj + ΔcBr yrj    [for all j ∉ B]
Now to keep the solution optimal, we must have (zj − cj)* ≥ 0 for all j ∉ B.
Thus zj − cj + ΔcBr yrj ≥ 0.
If yrj = 0, then zj − cj + ΔcBr yrj = zj − cj ≥ 0 automatically.
If yrj < 0, then ΔcBr ≤ −(zj − cj)/yrj
(6.2.2)
If yrj > 0, then ΔcBr ≥ −(zj − cj)/yrj
Combining the two conditions given in (6.2.2), the optimal basis remains unaffected if Δ = ΔcBr lies in the interval given below:
max{ −(zj − cj)/yrj : yrj > 0 } ≤ ΔcBr ≤ min{ −(zj − cj)/yrj : yrj < 0 }    (6.2.3)
for all j ∉ B. If no yrj < 0, there is no upper bound on Δ, and if no yrj > 0 there is no lower bound on Δ. The value of the objective function changes by Δz = ΔcBr xBr.
Example 6.3.1
Given the following L.P.P.
Maximize z = x1 + 4x2 − 2x3 + 3x4 − x5
Subject to x1 − 3x2 + x3 + 2x4 + 6x5 ≤ 3
2x1 + x2 + 3x4 + 2x5 ≤ 6
4x1 + x2 − x4 + x5 ≤ 2
xj ≥ 0, j = 1, 2, …, 5
Find the ranges over which the cost coefficients can be changed so that the optimality of the current solution remains undisturbed [assume that when one cost coefficient changes, all other quantities remain unchanged].
The optimal simplex table is
C 1 4 -2 3 -1 0 0 0
Basis b
0 10 25 0 1 0 37 1 1 11
2 4 4 4
3 1 −1 0 0 1 1 0 1 −1
2 4 4 4
4 3 7 1 0 0 5 0 1 3
2 4 4 4
− 15 23 0 2 0 27 0 7 9
2 4 4 4
max{ −(zj − cj)/yrj : yrj > 0 } ≤ Δ ≤ min{ −(zj − cj)/yrj : yrj < 0 }

B⁻¹ =
[ 1   1/4   11/4 ]
[ 0   1/4   -1/4 ]    (6.3.6)
[ 0   1/4    3/4 ]

Note: verify this result by finding the inverse of B given in (6.3.3) by the matrix inverse method.
The range of Δbi such that the optimal basis remains unchanged is given by
max{ −xBr/βri : βri > 0 } ≤ Δbi ≤ min{ −xBr/βri : βri < 0 }
where βri denotes the (r, i) entry of B⁻¹.
Example 6.3.2 Discuss the effect of changes in the requirement vector for the following L.P.P.
Maximize z = x1 − x2 + 3x3
Subject to x1 + x2 + x3 ≤ 10
2x1 − x3 ≤ 2
2x1 − 2x2 + 3x3 ≤ 0
xj ≥ 0, j = 1, 2, 3
so that the optimality of this problem remains undisturbed.
The components of the requirement vector are b = [b1, b2, b3] = [10, 2, 0].
The optimum simplex table is given by
Basis b ( ) ( )
( )
-1 6 1 1 0 3 0 −1
5 5 5
0 6 14 0 0 2 1 1
5 5 5
3 4 4 0 1 2 0 1
5 5 5
− 6 6 0 0 3 0 4
5 5 5
a4, a5 and a6 constitute the initial unit basis; these are the slack vectors. The columns under a4, a5, a6 of the final table therefore constitute the final basis inverse, which is given by

B⁻¹ =
[ 3/5   0   -1/5 ]
[ 2/5   1    1/5 ]
[ 2/5   0    1/5 ]

and the optimal basic solution is xB = B⁻¹b = (6, 6, 4)ᵗ.
max{ −6/(1/5), −4/(1/5) } ≤ Δb3 ≤ −6/(−1/5)    [∵ β13 = −1/5, β23 = β33 = 1/5]
i.e., −20 ≤ Δb3 ≤ 30.
The interpretation is that if b1 and b2 remain unchanged, then the optimal solution remains optimal if
0 − 20 ≤ b3 ≤ 0 + 30, i.e., −20 ≤ b3 ≤ 30.
For b3 = −20, the B.F.S. corresponding to the basis is xB = B⁻¹[10, 2, −20]ᵗ = [10, 2, 0]ᵗ, which is a degenerate B.F.S. with the third basic component x3 = 0.
, =∑ , =1 (6.4.3)
3. Replace the original cost coefficient cj of xj by a large positive number M, but keep aj equal to the old value.
Remarks:
1. The number M has to be taken sufficiently large to ensure that xj cannot be contained in the new optimal basis that is ultimately going to be found.
2. The procedure above can easily be extended to cases where changes in coefficients of more than one column are made.
3. The present procedure will be computationally efficient (compared to reworking the problem from the beginning) only when there are not too many basic columns in which the aij are changed.
Example: Find the effect of changing the column a1 from (7, 3)ᵗ to (6, 10)ᵗ in the problem below:
Minimize z = −45x1 − 100x2 − 30x3 − 50x4
s.t. 7x1 + 10x2 + 4x3 + 9x4 ≤ 1200
3x1 + 40x2 + x3 + x4 ≤ 800
xj ≥ 0, j = 1, …, 4
(i.e., changes are made in the coefficients of non-basic variables only.)
Solution
The relative cost coefficients of the non-basic variables (of the original optimum solution) corresponding to the new aj are given by
c̄j = cj − waj
Example 6.4.1
Consider the problem
Minimize −5x1 − x2 + 12x3
s.t. 3x1 + 2x2 + x3 = 10
5x1 + 3x2 + x4 = 16
xj ≥ 0, j = 1, …, 4
An optimal solution to this problem is given by x = (2, 2, 0, 0), and the corresponding simplex tableau is given by

        |  x1   x2   x3   x4
  12    |  0    0    2    7
x1 = 2  |  1    0   -3    2
x2 = 2  |  0    1    5   -3

Since the reduced cost c̄5 = −4 of a new variable x5 is negative, introducing x5 to the basis can be beneficial. We observe that y5 = B⁻¹a5 = (−1, 2)ᵗ and augment the tableau by introducing a column associated with x5:

        |  x1   x2   x3   x4   x5
  12    |  0    0    2    7   -4
x1 = 2  |  1    0   -3    2   -1
x2 = 2  |  0    1    5   -3    2

We then bring x5 into the basis; x2 exits, and we obtain the following tableau, which happens to be optimal:

  16    |  0    2    12   1    0
If you add a constraint to a problem, two things can happen: your original solution satisfies the constraint, or it doesn't. If it does, then you are finished: the old solution is still feasible for the new problem, so it must still be optimal. If the original solution does not satisfy the new constraint, then possibly the new problem is infeasible. If not, then there is another optimal solution, and its value cannot be better than before (adding a constraint makes the problem harder to satisfy, so you cannot possibly do better than before). In short, if your original solution satisfies your new constraint, then you can do as well as before; if not, then you will do worse.
If the original optimal solution satisfies the new constraint, it is an optimal solution to the new problem as well. If the new constraint is violated, we introduce a new nonnegative slack variable xn+1 and rewrite the new constraint in the form
a′x − xn+1 = bm+1.
We obtain a problem in standard form, in which the matrix A is replaced by

[ A    0 ]
[ a′  -1 ]
If B is an optimal basis for the original problem, we form a basis for the new problem by selecting the original basic variables together with xn+1. The new basis matrix is of the form

B̄ = [ B     0 ]
    [ a′B  -1 ]

where the row vector a′B contains those components of a′ associated with the original basic columns.
The basic solution associated with this basis is (x*B, a′B x*B − bm+1), and it is infeasible because of our assumption that x* violates the new constraint.
Note that the new inverse basis matrix is readily available because

B̄⁻¹ = [ B⁻¹       0 ]
      [ a′B B⁻¹  -1 ]

Let cB be the m-dimensional vector with the costs of the basic variables in the original problem. Then the vector of reduced costs associated with the basis B̄ for the new problem is given by

[c′ 0] − [c′B 0] B̄⁻¹ [ A    0 ] = [ c′ − c′B B⁻¹A,  0 ]
                     [ a′  -1 ]

and is non-negative due to the optimality of B for the original problem. Hence B̄ is a dual feasible basis and we are in a position to apply the dual simplex method to the new problem.
Note that an initial simplex tableau for the new problem is readily constructed. We have

B̄⁻¹ [ A    0 ] = [ B⁻¹A            0 ]
    [ a′  -1 ]   [ a′B B⁻¹A − a′   1 ]

where B⁻¹A is available from the final simplex tableau for the original problem.
Example:
Consider the problem
min −5x1 − x2 + 12x3
s.t. 3x1 + 2x2 + x3 = 10
5x1 + 3x2 + x4 = 16
x1, …, x4 ≥ 0
with optimal tableau

        |  x1   x2   x3   x4
  12    |  0    0    2    7
x1 = 2  |  1    0   -3    2
x2 = 2  |  0    1    5   -3

Suppose the constraint x1 + x2 ≥ 5 is added; with a surplus variable x5 the new problem is
min −5x1 − x2 + 12x3
s.t. 3x1 + 2x2 + x3 = 10
5x1 + 3x2 + x4 = 16
x1 + x2 − x5 = 5
x1, …, x5 ≥ 0
Let a′B consist of the components of a′ associated with the basic variables; we have a′B = (1, 1), and

a′B B⁻¹A − a′ = [1 1] [ 1  0  -3   2 ] − [1 1 0 0] = (0  0  2  -1)
                      [ 0  1   5  -3 ]

The tableau for the new problem is of the form

         |  x1   x2   x3   x4   x5
  12     |  0    0    2    7    0
x1 = 2   |  1    0   -3    2    0
x2 = 2   |  0    1    5   -3    0
x5 = -1  |  0    0    2   -1    1
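The construction of the new tableau row can be reproduced numerically from the data of this example:

```python
# Reproduce the new row a'_B B^-1 A - a' and the value of the new surplus
# variable x5 = a'_B x_B - b_new for the example above.
B_inv_A = [[1, 0, -3, 2],   # rows x1, x2 of the final tableau
           [0, 1, 5, -3]]
a_prime = [1, 1, 0, 0]      # new constraint x1 + x2 >= 5
a_B, xB, b_new = [1, 1], [2, 2], 5

row = [sum(a_B[i] * B_inv_A[i][j] for i in range(2)) - a_prime[j]
       for j in range(4)]
x5 = sum(ai * xi for ai, xi in zip(a_B, xB)) - b_new
print(row, x5)   # [0, 0, 2, -1] -1
```

The negative value x5 = −1 is exactly the primal infeasibility that the dual simplex method then removes.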
Our discussion has been focused on the case where an inequality constraint is added to the primal problem. Consider now the dual of the new problem, where wm+1 is the dual variable associated with the new constraint. Let w* be an optimal basic feasible solution to the original dual problem. Then (w*, 0) is a feasible solution to the new dual problem.
Let m be the dimension of w, which is the same as the original number of constraints. Since w* is a basic feasible solution to the original dual problem, m of the constraints of the original dual are linearly independent and active at w*. However, there is no guarantee that at (w*, 0) we will have m + 1 linearly independent active constraints of the new dual problem. In particular, (w*, 0) need not be a basic feasible solution to the new problem and may not provide a convenient starting point for the dual simplex method on the new problem.
Let us assume, without loss of generality, that a′x* > bm+1. We introduce the auxiliary primal problem
Minimize c′x + M xn+1
s.t. Ax = b
a′x − xn+1 = bm+1
x ≥ 0, xn+1 ≥ 0
where M is a large positive constant. A primal feasible basis for the auxiliary problem is obtained by picking the basic variables of the optimal solution to the original problem, together with the variable xn+1. The resulting basis matrix is the same as the matrix B̄ of the preceding subsection. There is a difference, however: in the preceding subsection, B̄ was a dual feasible basis, whereas here B̄ is a primal feasible basis. For this reason, the primal simplex method can now be used to solve the auxiliary problem to optimality.
Suppose that an optimal solution to the auxiliary problem satisfies xn+1 = 0; this will be the case if the new problem is feasible and the coefficient M is large enough. Then the additional constraint a′x = bm+1 has been satisfied and we have an optimal solution to the new problem.
A company manufactures four variants of the same product and in the final part of the
manufacturing process there are assembly, polishing and packing operations. For each variant the
time required for these operations is shown below (in minutes) as is the profit per unit sold.
Assembly Polish Pack Profit (£)
Variant 1 2 3 2 1.50
2 4 2 3 2.50
3 3 3 2 3.00
4 7 4 5 4.50
Given the current state of the labour force the company estimates that, each year, they
have 100000 minutes of assembly time, 50000 minutes of polishing time and 60000
minutes of packing time available. How many of each variant should the company make
per year and what is the associated profit?
Suppose now that the company is free to decide how much time to devote to each of the
three operations (assembly, polishing and packing) within the total allowable time of
210000 (= 100000 + 50000 + 60000) minutes. How many of each variant should the
company make per year and what is the associated profit?
Constraints
The operation time limits depend upon the situation being considered. In the first situation,
where the maximum time that can be spent on each operation is specified, we simply have:
In the second situation, where the only limitation is on the total time spent on all operations, we
simply have:
Objective
Look at Sheet A in lp.xls; to use Solver, choose Tools and then Solver. In the version of Excel I am using (different versions of Excel have slightly different Solver formats) you will get the Solver model as below:
but where now we have highlighted (clicked on) two of the Reports available - Answer and
Sensitivity. Click OK and you will find that two new sheets have been added to the spreadsheet -
an Answer Report and a Sensitivity Report.
As these reports are indicative of the information that is commonly available when we solve a LP
via a computer we shall deal with each of them in turn.
Answer Report
We can see that the optimal solution to the LP has value 58000 (£) and that Tass = 82000, Tpol = 50000, Tpac = 60000, X1 = 0, X2 = 16000, X3 = 6000 and X4 = 0.
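This answer can also be certified without Solver by exhibiting dual prices: if the reported plan is feasible, the prices are dual feasible, and the two objective values agree, then by the duality results of the previous chapter the plan is optimal. The shadow prices below are computed by us for illustration; they are not part of the Solver report:

```python
# Certify the reported answer (situation 1) via a duality argument.
from fractions import Fraction as F

profit = [F(3, 2), F(5, 2), F(3), F(9, 2)]   # 1.50, 2.50, 3.00, 4.50
A = [[2, 4, 3, 7],   # assembly minutes per unit of variants 1..4
     [3, 2, 3, 4],   # polishing
     [2, 3, 2, 5]]   # packing
limits = [100000, 50000, 60000]

x = [0, 16000, 6000, 0]            # the reported optimum
y = [F(0), F(4, 5), F(3, 10)]      # candidate shadow prices (our computation)

use = [sum(A[i][j] * x[j] for j in range(4)) for i in range(3)]
assert all(u <= L for u, L in zip(use, limits))             # primal feasible
assert all(sum(y[i] * A[i][j] for i in range(3)) >= profit[j]
           for j in range(4))                               # dual feasible
primal_val = sum(p * xj for p, xj in zip(profit, x))
dual_val = sum(yi * L for yi, L in zip(y, limits))
print(use, primal_val, dual_val)   # [82000, 50000, 60000] 58000 58000
```

The assembly usage of 82000 minutes is strictly below its 100000-minute limit, which is exactly why that constraint is reported as not binding and its shadow price is zero.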
Note that we had three constraints for total assembly, total polishing and total packing time in our LP. The assembly time constraint is declared to be 'Not Binding' whilst the other two constraints are declared to be 'Binding'. Constraints with a 'Slack' value of zero are said to be tight or binding in that they are satisfied with equality at the LP optimal. Constraints which are not tight are called loose or not binding.
Sensitivity Report
Summary
In all LP models the coefficients of the objective function and the constraints are supplied as input data or as parameters to the model.
Sensitivity analysis helps to study how the optimal solution will change with changes in the input coefficients.
The optimal solution obtained is based on the values of these coefficients.
In most practical applications, some of the problem data are not known exactly and hence are estimated as well as possible.
It is important to be able to find the new optimal solution of the problem as other estimates of some of the data become available, without the expensive task of resolving the problem from scratch.
Furthermore, in many situations the constraints are not very rigid. For example, a
constraint may reflect the availability of some resource. This availability can be increased
by extra purchase, overtime, buying new equipment, and the like.
It is desirable to examine the effect of relaxing some of the constraints on the value of
the optimal objective without having to resolve the problem. These and other related
topics constitute sensitivity analysis
The ranges of the original objective function coefficients of the original variables for
which the current basis remains optimal.
The ranges of the right-hand-side constants for the constraints for which the current basis
remains optimal.
Use Excel’s Solver Add-In to solve linear programming problems.
Interpret the results of models and perform basic sensitivity analysis.
Review Exercise
1. Write a short note on sensitivity analysis.
2. Describe the procedure for determining the range of variation of a discrete change in a component of c such that an optimal solution remains undisturbed:
a) when the component is the coefficient of a non-basic variable;
b) when the component is the coefficient of a basic variable belonging to the optimal basis.
3. Describe the procedure for determining the range of a component of the requirement vector such that optimality remains undisturbed. Also prove that for the extreme values of the component, the B.F.S. will be degenerate.
4. Solve the following L.P.P. by the simplex method:
z = 5x1 + 4x2 + x3
s.t. 6x1 + x2 + 2x3 ≤ 12
8x1 + 2x2 + x3 ≤ 30
4x1 + x2 − 2x3 ≤ 16
x1, x2, x3 ≥ 0
a) Find the range of the indicated objective-function coefficient such that the optimality remains undisturbed [assume that the other cost coefficients remain unchanged].
b) Find the ranges of the requirement components so that the optimal basis remains the same.
5. From the optimal simplex table of the following L.P.P.
z = 2x1 − x2 + 3x3
s.t. 3x1 + x2 − 2x3 ≤ 6
2x1 + 5x2 + x3 ≤ 14
4x1 + 4x2 + 2x3 ≤ 8
x1, x2, x3 ≥ 0
find the ranges of the cost coefficients such that the optimal basis remains unchanged.
6. Consider the L.P.P.
z = x1 + x2 + 3x3
s.t. x1 + 2x2 − x3 ≤ 10
3x1 + 2x2 ≤ 8
x1 + 3x3 ≤ 15
x1, x2, x3 ≥ 0
taking the initial basis as (a4, a5, a6), where a4, a5, a6 are the unit slack vectors.
From the optimal simplex table, find the ranges of the cost coefficients such that the optimality remains unchanged.
7. Consider the L.P.P.
z = 2x1 + 5x2
s.t. −2x1 + x2 ≤ 0
x1 + 3x2 ≤ 14
x1 + x2 ≤ 8
x1, x2 ≥ 0
where the initial B.F.S. is a degenerate one.
Find the range of the requirement component such that optimality is not violated, and find the basic feasible solution for the extreme values of that component.
8. Following is the final optimal table for an L.P.P. Assume that originally the identity matrix was under the slack vectors (maximization).
2 2 0 0
vectors in b
The basis
2 5 1 0 −
2 4 0 1 −
− 18 0 0 0 2
Suppose one of the cost coefficients were equal to 3 instead of 2. Would you still have the optimal solution? Conclude by calculating zj − cj with the new cost components. Verify the result by using the theory of sensitivity analysis.
CHAPTER SEVEN
In this chapter, we begin our study of an alternative to the simplex method for solving linear
programming problems. The algorithm we are going to introduce is called a path-following
method. It belongs to a class of methods called interior-point methods. The path-following
method seems to be the simplest and most natural of all the methods in this class, so in this book
we focus primarily on it. Before we can introduce this method, we must define the path that
appears in the name of the method. This path is called the central path and is the subject of this
chapter. Before discussing the central path, we must lay some groundwork by analyzing a
nonlinear problem, called the barrier problem, associated with the linear programming problem
that we wish to solve.
It is then perhaps not surprising that the announcement by Karmarkar in 1984 of a new
polynomial time algorithm, an interior-point method, with the potential to improve the practical
effectiveness of the simplex method made front-page news in major newspapers and magazines
throughout the world. It is this interior-point approach that is the subject of this chapter and the
next. This chapter begins with a brief introduction to complexity theory, which is the basis for a
way to quantify the performance of iterative algorithms, distinguishing polynomial-time
algorithms from others. Next the example of Klee and Minty showing that the simplex method is
not a polynomial-time algorithm in the worst case is presented. Following that the ellipsoid
algorithm is defined and shown to be a polynomial-time algorithm. These two sections provide a
deeper understanding of how the modern theory of linear programming evolved, and help make
clear how complexity theory impacts linear programming. However, the reader may wish to
consider them optional and omit them at first reading. The development of the basics of interior-
point theory begins with this section which introduces the concept of barrier functions and the
analytic center. Section 7.2 introduces the central path which underlies interior-point algorithms.
General objectives
• This step doesn't make much progress unless our starting point is central.
• So we'll change the coordinate system so that the current point is always central.
In this chapter, we consider the linear programming problem expressed, as usual, with inequality
constraints and nonnegative variables:

maximize c^T x
subject to Ax ≤ b, x ≥ 0.        (1)
Given a constrained maximization problem where some of the constraints are inequalities (such
as our primal linear programming problem), one can consider replacing any inequality constraint
with an extra term in the objective function. For example, in (1) we could remove the constraint
that a specific variable, say, xj , is nonnegative by adding to the objective function a term that is
negative infinity when xj is negative and is zero otherwise. This reformulation doesn’t seem to
be particularly helpful, since this new objective function has an abrupt discontinuity that, for
example, prevents us from using calculus to study it. However, suppose we replace this
discontinuous function with another function that is negative infinity when xj is negative but is
finite for xj positive and approaches negative infinity as xj approaches zero. In some sense this
smooths out the discontinuity and perhaps improves our ability to apply calculus to its study. The
simplest such function is the logarithm. Hence, for each variable, we introduce a new term in the
objective function that is just a constant times the logarithm of the variable:
maximize c^T x + µ ∑_j log x_j
subject to Ax ≤ b.        (2)
This problem, while not equivalent to our original problem, seems not too different either. In
fact, as the parameter µ, which we assume to be positive, gets small, it appears that (2) becomes
a better and better stand-in for (1).
FIGURE 7.1. Parts (a) through (c) show level sets of the barrier function for three values of µ.
For each value of µ, four level sets are shown. The maximum value of the barrier function is
attained inside the innermost level set. The drawing in part (d) shows the central path.
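To make the role of µ concrete, here is a minimal one-dimensional sketch (the function and the toy problem are our own illustration, not from the text): maximize cx over 0 ≤ x ≤ 1 by maximizing the barrier objective cx + µ(log x + log(1 − x)); as µ shrinks, the barrier maximizer drifts toward the true optimum x = 1, tracing out a one-dimensional central path.

```python
def barrier_maximizer(c, mu, tol=1e-12):
    """Maximize f(x) = c*x + mu*(log x + log(1 - x)) on (0, 1).

    f is strictly concave, so we bisect on its derivative f'."""
    def dfdx(x):
        return c + mu / x - mu / (1.0 - x)   # f'(x), strictly decreasing
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if dfdx(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# The barrier maximizers x*(mu) trace the central path toward x* = 1.
for mu in (1.0, 0.1, 0.01):
    print(round(barrier_maximizer(1.0, mu), 4))  # 0.618, then 0.9099, then 0.9901
```

Setting the derivative to zero gives the quadratic c x² − (c − 2µ)x − µ = 0, so for c = 1, µ = 1 the maximizer is (√5 − 1)/2 ≈ 0.618, confirming the numerical output.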
minimize c^T x
subject to x ∈ Ω ∩ S

Here A is of order m × n; without any loss of generality we assume that the rank of A is m. We
make the following assumptions:
Karmarkar's algorithm generates a finite sequence of feasible points, all of them > 0, such that
the objective value is strictly decreasing; L denotes the size of (7.1).
Karmarkar’s Method
Idea:
(1) Use projective transformation to move an interior point to the centre of the feasible region
(2) Move along projected steepest descent direction
An LP in standard form with an optimum objective value of zero can be transformed into another
with the same property but with a known strictly positive feasible solution. We show that any LP
can be transformed into another one with a known minimum objective value of zero.
Consider the LP

Minimize hx
Subject to Ex ≥ 0
x ≥ 0 (7.2)

Let π denote the row vector of dual variables. It is well known that solving (7.2) is equivalent
to solving the following system of linear inequalities.
(7.3)
(7.4)
The optimum objective value in (7.4) is ≥ 0 (since it is a Phase I problem corresponding to (7.3)),
and (7.3) is feasible iff it is zero. Let v denote the row vector of dual variables for (7.4).
Consider the LP
(7.5)
The LP (7.5) consists of the constraints in (7.4) and its dual. From the duality theory of linear
programming, the optimum objective value in (7.5) is zero (since (7.4) has a finite optimum
solution). The LP (7.5) can be put in standard form for LPs by the usual transformations of
introducing slack variables, etc. If ( , ) is optimal to (7.5), then it is optimal to (7.4). If g = 0,
then the x-portion is an optimum solution for (7.2). If g > 0, (7.3) is infeasible and hence
(7.2) is either infeasible or has no finite optimum solution.
• Affine Scaling
• Karmarkar's Method

Affine Scaling
Idea:
(1) Position the current point close to the centre of the feasible region. For example, one
possible choice is the point:
(2) Transform the current problem into an equivalent problem in y-space so that the current point
is close to the centre of the feasible region.
(3) Use the projected steepest descent direction to take a step in the y-space without crossing
the feasible set boundary.
(4) Map the new point back to the corresponding point in the x-space.
Step 2: Find the projected steepest descent direction and step length for the problem,
Proposition 1.1 (The Inverse Transform Method) Let F(x), x ∈ ℝ, denote any cumulative
distribution function (cdf) (continuous or not). Let F⁻¹(y), y ∈ [0, 1], denote the inverse
function defined in (1). Define X = F⁻¹(U), where U has the continuous uniform distribution
over the interval (0, 1). Then X is distributed as F, that is, P(X ≤ x) = F(x), x ∈ ℝ.
Proof: We must show that P(F⁻¹(U) ≤ x) = F(x), x ∈ ℝ. First suppose that F is continuous.
Then we will show that (equality of events) {F⁻¹(U) ≤ x} = {U ≤ F(x)}, so that taking
probabilities (and letting a = F(x) in P(U ≤ a) = a) yields the result: P(F⁻¹(U) ≤ x) = P(U ≤
F(x)) = F(x),
which yields the same result after taking probabilities (since P(U = F(x)) = 0, U being a
continuous rv).
Examples
The inverse transform method can be used in practice as long as we are able to get an explicit
formula for F⁻¹(y) in closed form. We illustrate with some examples. We use the notation U ∼
unif(0, 1) to denote that U is a rv with the continuous uniform distribution over the interval
(0, 1).
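A minimal sketch (the exponential distribution is our own choice of illustration, not an example worked in the text): for F(x) = 1 − e^(−λx), x ≥ 0, solving u = F(x) gives F⁻¹(u) = −ln(1 − u)/λ, so uniform draws are mapped directly to exponential samples.

```python
import math
import random

def exponential_inverse_transform(lam, u):
    """F(x) = 1 - exp(-lam*x) for x >= 0, so F^{-1}(u) = -ln(1 - u)/lam."""
    return -math.log(1.0 - u) / lam

random.seed(0)                      # reproducible illustration
lam = 2.0
samples = [exponential_inverse_transform(lam, random.random())
           for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 3))               # close to the true mean 1/lam = 0.5
```

The sample mean approaching 1/λ is a quick sanity check that the generated variates follow the intended distribution.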
This is known as the discrete inverse-transform method. The algorithm is easily verified
directly by recalling that P(a < U ≤ b) = b − a, for 0 ≤ a < b ≤ 1; here we use
Lemma 7.3.1 If (x, y, s) ∈ N₂(θ), then the duality gap is equal to the complementarity gap, i.e.
Proof: Any point in N₂(θ) is primal and dual feasible; hence, by substituting,
Lemma 7.3.2 The Newton direction (∆x, ∆y, ∆s) defined by the equation system (11) satisfies:
Proof: From the first two equations in (11) we get A∆x = 0 and ∆s = Q∆x − A^T ∆y; hence
the Newton method uses a local linearization of the complementarity condition. When a step in
the Newton direction (∆x, ∆y, ∆s) of (11) is made, the error in the approximation of the
complementarity products is determined by the second-order term, which is a product of ∆x and
∆s.
Minimize c^T x
Subject to Ax = b, x ≥ 0,
where A is an m × n matrix of full rank, b is an m-dimensional column vector, and c and x are
n-dimensional column vectors. We define the feasible domain
P = { x ∈ ℝ^n | Ax = b, x ≥ 0 }.
We further define the relative interior of P (with respect to the affine space { x ∈ ℝ^n | Ax = b }) as
P⁰ = { x ∈ ℝ^n | Ax = b, x > 0 }.
An n-vector x is called an interior point, or interior solution, of the linear programming problem
if x ∈ P⁰. Throughout this book, for any interior-point approach, we always make the fundamental
assumption that P⁰ ≠ ∅.
There are several ways to find an initial interior solution to a given linear programming problem.
The details will be discussed later. For the time being, we simply assume that an initial interior
solution is available and focus on the basic ideas of the primal affine scaling algorithm.
Recall from the above the fundamental insights observed by N. Karmarkar in designing his
algorithm. Since they are still the guiding principles for the affine scaling algorithm, we repeat
them here:
1) If the current interior solution is near the center of the polytope, then it makes sense to
move in the direction of steepest descent of the objective function to achieve a
minimum value.
2) Without changing the problem in any essential way, an appropriate transformation can be
applied to the solution space such that the current interior solution is placed near the center
of the transformed solution space.
Minimize c^T x
Subject to Ax = b
x ≥ 0
Our objective is to convert this problem into the standard form required by Karmarkar, while
satisfying the assumptions. We shall first see how to convert the problem into Karmarkar's form
and then discuss the two assumptions. The key feature of Karmarkar's standard form is the simplex
structure, which of course results in a bounded feasible domain. Thus we want to regularize the
problem by adding a boundary constraint ∑_j x_j ≤ Q.
For some positive integer Q derived from feasibility and optimality considerations in the worst
case, we can choose Q = 2^L, where L is the problem size. If this constraint is binding at
optimality with an objective value of magnitude −2^L, then we can show that the given problem
is unbounded.
In order to keep the matrix structure of A undisturbed for sparsity manipulation, we introduce a
new variable x_{n+1} = 1 and rewrite the constraints of the problem as
Ax − b x_{n+1} = 0 (1)
∑_{j=1}^{n} x_j + x_{n+2} − Q x_{n+1} = 0 (2)
∑_{j=1}^{n} x_j + x_{n+1} + x_{n+2} = Q + 1 (3)
Note that the constraint x_{n+1} = 1 is a direct consequence of (2) and (3). For the required
simplex structure, we apply the transformation x_j = (Q + 1) y_j, for j = 1, 2, …, n+2. In this
way, we have an equivalent linear programming problem:
Minimize (Q + 1)(c^T y)
Subject to Ay − b y_{n+1} = 0
The preceding algorithm is only valid for an LP in Karmarkar's standard form. In order to use
Karmarkar's algorithm for other LPs, we must convert them into this form.
Minimize c^T x,  c ∈ ℝ^n
Subject to Ax ≤ b,  A ∈ ℝ^{m×n}
x ≥ 0,  b ∈ ℝ^m
Summary
1 Basic ideas
• This step doesn't make much progress unless our starting point is central.
• So we'll change the coordinate system so that the current point is always central.
We consider the linear programming problem expressed, as usual, with inequality constraints and
nonnegative variables, as in (1), together with the problem
minimize c^T x
subject to x ∈ Ω ∩ S.
Chapter Eight
8. Transportation problems
8.1 Introduction
In this chapter we will be concerned with the transportation problem, a special case of L.P.P. We
divide this chapter into six sections. The first section is an introduction to the transportation
problem: it introduces and defines the transportation problem, and we discuss some fundamental
properties of the T.P. The second section shows how to write a transportation problem in a
transportation table; loops and their application are presented in this section.
Methods of determining an initial basic feasible solution are discussed in section three. In this
section we will see techniques for finding an I.B.F.S., such as the North-West corner method, the
Row minima method and the Cost minima method.
The next section (section 4) is on optimality conditions. In this section we discuss whether the
initial basic feasible solution obtained is optimal or not.
Transportation problems are either unbalanced or balanced. In section five we will see unbalanced
T.P.s and how to obtain solutions of these unbalanced transportation problems.
In the last section, we discuss degenerate transportation problems and their application.
General Objectives
▪ Understand the numerical calculation of the net evaluations corresponding to the non-basic cells.

Let the quantity available at the origin O_i be a_i [i = 1, 2, …, m], and the quantity required,
i.e., the demand at the destination D_j, be b_j [j = 1, 2, …, n]. Let us make the assumption that

∑_i a_i = ∑_j b_j (8.1.1)

i.e. the total available quantity is equal to the total demand; this is called a balanced
transportation problem, and when ∑_i a_i ≠ ∑_j b_j it is called an unbalanced transportation
problem. We shall initially discuss the first type of problems.
Destination
            D_1    D_2    …    D_n
      O_1   c_11   c_12   …    c_1n  | a_1
Origins O_2 c_21   c_22   …    c_2n  | a_2   capacities
      ⋮      ⋮      ⋮           ⋮    |  ⋮
      O_m   c_m1   c_m2   …    c_mn  | a_m
            b_1    b_2    …    b_n
            demands
For each pair (i, j) [i = 1, 2, …, m; j = 1, 2, …, n], the cost c_ij of transporting one unit of
the commodity from the origin O_i to the destination D_j is a known quantity. It is assumed in
general that c_ij ≥ 0 for all i and j, but it may be negative under some special conditions. The
problem before us is to determine the quantities x_ij [i = 1, 2, …, m; j = 1, 2, …, n] to be
transported from origin O_i to destination D_j such that the total transportation cost is minimum,
provided the condition (8.1.1) is satisfied:
minimize z = ∑_i ∑_j c_ij x_ij (8.1.2)
subject to ∑_j x_ij = a_i, i = 1, 2, …, m (8.1.3)
∑_i x_ij = b_j, j = 1, 2, …, n (8.1.4)
and ∑_i a_i = ∑_j b_j.
From the above diagram, the constraints (8.1.3) and (8.1.4) can be written easily. The sum of the
variables of the i-th row is equal to a_i and the sum of the variables of the j-th column is equal
to b_j.
In this problem there are (m + n) constraints, all of which are equations in the mn variables x_ij
[i = 1, 2, …, m; j = 1, 2, …, n].
Since, in general, in an L.P.P. the number of variables is greater than the number of constraints,
m and n must both be ≥ 2, since the number of independent constraints is (m + n) − 1 = (m + n − 1).
Theorem 8.1.1 There exists a feasible solution to every T.P., given by
x_ij = a_i b_j / λ [i = 1, 2, …, m; j = 1, 2, …, n]
where λ = ∑_i a_i = ∑_j b_j.
Proof. Since all a_i and b_j are non-negative quantities, x_ij ≥ 0 for all i and j. Moreover,
∑_j x_ij = ∑_j a_i b_j / λ = (a_i / λ) ∑_j b_j = a_i
and
∑_i x_ij = ∑_i a_i b_j / λ = (b_j / λ) ∑_i a_i = b_j.
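A quick numerical check of the theorem (the balanced rim requirements below are illustrative numbers of our own choosing): setting x_ij = a_i b_j / λ reproduces every row and column sum.

```python
# Balanced rim requirements (illustrative): 3 origins, 4 destinations
a = [16, 12, 15]            # capacities a_i
b = [12, 14, 9, 8]          # demands b_j
lam = sum(a)                # lambda = sum of a_i = sum of b_j = 43
assert lam == sum(b)        # the problem must be balanced

# Feasible solution of Theorem 8.1.1: x_ij = a_i * b_j / lambda
x = [[a[i] * b[j] / lam for j in range(len(b))] for i in range(len(a))]

row_sums = [sum(row) for row in x]                                       # should equal a_i
col_sums = [sum(x[i][j] for i in range(len(a))) for j in range(len(b))]  # should equal b_j
print(row_sums)
print(col_sums)
```

Every entry is non-negative and the rim requirements are met exactly, so this is always a feasible (though rarely basic) solution.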
Theorem 8.1.2 In a balanced T.P. having m origins and n destinations (m, n ≥ 2), the exact
number of basic variables is m + n − 1.
Minimize z = ∑_i ∑_j c_ij x_ij
Subject to
∑_j x_ij = a_i, i = 1, 2, …, m (8.1.6)
∑_i x_ij = b_j, j = 1, 2, …, n (8.1.7)
and ∑_i a_i = ∑_j b_j.
From (8.1.6),
∑_i ∑_j x_ij = ∑_i a_i = λ. (8.1.8)
Summing (8.1.7) over j = 1, 2, …, n − 1,
∑_{j=1}^{n−1} ∑_i x_ij = ∑_{j=1}^{n−1} b_j. (8.1.9)
Subtracting,
λ − ∑_{j=1}^{n−1} b_j = b_n. (8.1.10)
Thus we get
∑_i (∑_{j=1}^{n} x_ij − ∑_{j=1}^{n−1} x_ij) = b_n, or ∑_i x_in = b_n, (8.1.11)
which is the last or n-th constraint of (8.1.4). Therefore, there are only (m + n − 1) linearly
independent equations in the mn variables (mn > m + n − 1). Thus, from the definition of a
basic solution, we can say that the number of basic variables is exactly (m + n − 1).
Remark. All basic variables may not be positive; some of them may be zero. When all basic
variables are positive, the solution is called a non-degenerate B.F.S. When at least one basic
variable is zero, the solution is called a degenerate B.F.S.
The number of basic cells will be exactly (m + n − 1), all of which contain the (m + n − 1) basic
variables, which are either all positive, or some of which may be zero, as will be shown later.
Class activity 8.1:
Consider a balanced T.P. having 5 origins and 6 destinations. Then how many basic variables are
there in this balanced transportation problem?
In this table there are mn squares or rectangles arranged in m rows and n columns. Each square
or rectangle is called a cell. The cell which is in the i-th row and j-th column is called the
(i, j) cell, or cell (i, j). Each cost component c_ij is displayed at the south-east corner of the
corresponding cell. A component x_ij of a feasible solution (if any) is displayed inside a small
square situated at the north-west corner of the cell (i, j). The different origin capacities and
destination demands (requirements) are listed in the right-side column and lower-side row
respectively, as given in table (8.1). These quantities are called rim requirements.
In a transportation table, an ordered set of four or more cells is said to form a loop if
(i) any two consecutive cells in the ordered set lie either in the same row or in the same
column, and
(ii) the first and the last cells of the ordered set also lie either in the same row or in the
same column.
In the following figure, one closed circuit is formed in each of the three transportation tables.
Table 8.2: three transportation tables, each showing a closed circuit (loop) of cells.
In the first and third tables there are only two cells in each row and each column, and the first
and last cells are in the same row or same column. These loops are called simple loops. In the
second table three cells are in the first column, but if we ignore or omit the cell (3, 1) and
order the set of cells accordingly, then there are just two cells in each row and each column, and
the first and last cells are in the same row or column. Hence, ignoring the cell (3, 1), the 2nd
closed circuit is also considered a simple loop. There are other types of loops, but in the
transportation problem all loops are simple.
Remark: In a simple loop there is always an even number of cells. There are only two cells in
each row of a simple loop, so if there are k such rows, the number of cells is 2k, which is an
even number.
The cells in which allocations are made are called basic cells. Obviously, the allocated values
are the components of the B.F.S. Some methods of determining an initial B.F.S. are:
Step 2. If a_1 < b_1, the capacity of the origin O_1 will be exhausted completely, which indicates
that all other cells of the first row will remain vacant. But there remains some demand at the
destination D_1. Compute min(a_2, b_1 − a_1); select x_21 = min(a_2, b_1 − a_1) and allocate this
value in the cell (2, 1). Let us now assume that b_1 − a_1 < a_2, which indicates that the demand
of D_1 is satisfied completely. Of course, this assumption is not essential. With this assumption,
the next cell for which some allocation is to be made is the cell (2, 2), etc.
If b_1 < a_1, the demand of the destination D_1 will be satisfied exactly, which indicates that
all other cells of the first column will remain vacant. But the capacity of origin O_1 will not be
exhausted. Compute min(a_1 − b_1, b_2); select x_12 = min(a_1 − b_1, b_2) and allocate this value
in cell (1, 2). Let us now assume that a_1 − b_1 < b_2, which indicates that the capacity of O_1
is exhausted completely. With this assumption, the next cell for which some allocation is to be
made is the cell (2, 2), etc. If a_1 = b_1, the capacity of the origin O_1 will be exhausted and
the demand of D_1 will be satisfied simultaneously. In that case, the solution will be degenerate.
Select either x_12 = min(a_1 − b_1, b_2) = 0 or x_21 = min(a_2, b_1 − a_1) = 0 and allocate the
value 0 in only one of the two cells (1, 2) or (2, 1). The next cell for which some allocation is
to be made is cell (2, 2). In this way proceed step by step until all the rim requirements are
satisfied. In general, if an allocation is made in the cell (i, j) in the current step, the next
allocation will be made either in cell (i + 1, j) or (i, j + 1). The feasible solution obtained by
this method is always a B.F.S. In the North-West corner rule, a tree diagram can be constructed
by connecting all basic cells, and no loop will be formed.
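The rule just described can be sketched in code (a minimal illustration; the function name is our own, and a degenerate tie a_i = b_j is handled by simply moving down rather than by inserting an explicit zero allocation as the text describes). On the data of Example 8.3.1(i) it reproduces the cost of 226 units computed in the text.

```python
def north_west_corner(supply, demand):
    """Initial B.F.S. by the North-West corner rule.

    Returns a dict {(i, j): allocation}. Inputs must be balanced."""
    a, b = list(supply), list(demand)   # work on copies of the rim requirements
    i = j = 0
    alloc = {}
    while i < len(a) and j < len(b):
        q = min(a[i], b[j])             # maximum feasible amount for cell (i, j)
        alloc[(i, j)] = q
        a[i] -= q
        b[j] -= q
        if a[i] == 0:                   # row exhausted: move down
            i += 1
        else:                           # column satisfied: move right
            j += 1
    return alloc

# Example 8.3.1(i): capacities (16, 12, 15), demands (12, 14, 9, 8)
cost = [[4, 6, 9, 5], [2, 6, 4, 1], [5, 7, 2, 9]]
alloc = north_west_corner([16, 12, 15], [12, 14, 9, 8])
total = sum(q * cost[i][j] for (i, j), q in alloc.items())
print(alloc)
print(total)   # 226, as computed in the text
```

Note that the rule never looks at the costs, which is why its starting solution is usually far from optimal.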
Example 8.3.1 Determine an initial B.F.S. of the following problems by the method of North-
west corner rule.
i)
 4  6  9  5 | 16
 2  6  4  1 | 12
 5  7  2  9 | 15
12 14  9  8 | 43

ii)
 2  7  4  7 |  4
 6  1  2  5 |  6
 4  5  2  4 |  8
 3  7  6  2 | 18

Note: inside each rectangle the cost matrix is given, and in the last column and last row of the
rectangle the rim requirements are given.
Solution: the initial B.F.S. are displayed in the following two tables. Their basic allocations are
i) x_11 = 12, x_12 = 4, x_22 = 10, x_23 = 2, x_33 = 7, x_34 = 8;
ii) x_11 = 3, x_12 = 1, x_22 = 6, x_23 = 0, x_33 = 6, x_34 = 2.
Explanation:
i) The B.F.S. is displayed in table 8.3: min(16, 12) = 12, therefore x_11 = 12; allocate it in
the cell (1, 1). The demand of D_1 is satisfied and hence all other cells in the first column
remain vacant. Proceeding in this way, the basic cells (which contain the components of the F.S.)
do not contain a loop (number of variables = 4 + 3 − 1 = 6), and the cost due to this assignment is
4×12 + 6×4 + 6×10 + 4×2 + 2×7 + 9×8 = 226 units.
ii) The B.F.S. is displayed in table 8.3B: min(4, 3) = 3, therefore x_11 = 3; allocate it in the
cell (1, 1). The demand of D_1 is satisfied and hence all other cells of the first column remain
vacant. As x_11 = 3 < 4 = a_1, the next allocation will be in the cell (1, 2), and x_12 = min(4 −
3, 7) = 1. Now the capacity of O_1 is exhausted. The next allocation will be in cell (2, 2):
x_22 = min(6, 7 − 1) = 6. Now the capacity of O_2 is exhausted and the demand of D_2 is satisfied.

Step 3. If a_1 = b_j, then min(a_1, b_j) = a_1 = b_j. Set x_1j = a_1 = b_j and allocate it in the
cell (1, j). Due to this allocation, the capacity of origin O_1 will be exhausted and the demand
of D_j will be satisfied completely. In that case the solution will be degenerate. Set x_1k = 0
and display it in the cell (1, k), with the assumption that the cost of the (1, k) cell is the
next minimum cost. Now cross off both the first row and the column and proceed similarly until
all rim requirements are satisfied.
Example 8.3.2 Find an initial B.F.S. of the following balanced T.P. using the row minima
method.
Step 2. If a_1 < b_j, the capacity of the origin O_1 will be exhausted. But the demand of the
destination D_j remains unsatisfied. Cross off the first row and diminish b_j by a_1. Proceeding
similarly, allocate the maximum feasible amounts in the cells of the remaining rows, starting from
the second, until all rim requirements are satisfied. If b_j < a_1, the total demand of the
destination D_j is satisfied but the capacity of the origin O_1 is not exhausted completely. Cross
off the j-th column and diminish a_1 by b_j.
Reconsider the first row and select the next smallest cost of this row. Let it be c_1k. Compute
min(a_1 − b_j, b_k). Set x_1k = min(a_1 − b_j, b_k) and allocate it in the cell (1, k). Let us now
assume that a_1 − b_j < b_k; therefore the capacity of O_1 will be exhausted completely [the
assumption is not restrictive]. Cross off the first row and repeat the above procedure for the
second row, and so on, as in the above method, until all rim requirements are satisfied.
 4  2  5  3 |  6
 5  4  3  2 | 13   Capacity
 1  4  6  5 |  9
 7  8  5  8 | 28
demand
Solution: it is displayed in the transportation table given below (table 8.4). The basic
allocations are
x_12 = 6; x_23 = 5, x_24 = 8; x_31 = 7, x_32 = 2,
together with one zero allocation, so the solution is degenerate.
The lowest cost in the first row is c_12 = 2, and min(a_1, b_2) = min(6, 8) = 6. Set x_12 = 6 and
allocate it in the cell (1, 2). As a_1 < b_2, the capacity of O_1 is exhausted completely; hence
cross off the first row and diminish b_2 by a_1, as shown in table 8.4. Then ignore the first row
in future computations.
Step 2. If a_i < b_j, the capacity of origin O_i will be exhausted completely. Cross off the i-th
row and diminish b_j by a_i.
If b_j < a_i, the demand of the destination D_j will be satisfied completely. Cross off the j-th
column and diminish a_i by b_j.
Step 3. If a_i = b_j, the capacity of origin O_i will be exhausted and the demand of D_j will be
satisfied simultaneously. Set x_ij = a_i = b_j and allocate it in the cell (i, j). Cross off either
the i-th row or the j-th column, but not both. Of course, we may drop both the row and the column
by inserting a basic variable zero at a cell corresponding to the lowest cost of that row and
column.
Step 4. Apply the same technique in the reduced transportation table until all rim requirements
are satisfied. At any stage, if the minimum cost is not unique, make an arbitrary choice among
the minima.
Example 8.3.3 Determine an initial B.F.S. to the following balanced T.P. using the cost minima
method:
 5  3  6  2 | 19
 4  7  9  1 | 37   Capacity
 3  4  7  5 | 34
16 18 31 25 | 90
Demand
Table 8.5: the basic allocations are
x_12 = 18, x_13 = 1; x_23 = 12, x_24 = 25; x_31 = 16, x_33 = 18.
The smallest cost in the reduced table is c_12 or c_31. Let us select c_12 = 3 as the smallest
cost and allocate x_12 = min(a_1, b_2) = min(19, 18) = 18 in the cell (1, 2). As b_2 < a_1, cross
off the second column and diminish a_1 by b_2. Proceed similarly until all rim requirements are
satisfied; table 8.5 gives the B.F.S. It is a B.F.S. because the number of variables is 4 + 3 −
1 = 6 and the cells corresponding to the feasible solution do not contain a loop. Here the
solution is not unique.
In addition to the North-West corner rule, the row minima rule and the cost minima method, there
are other methods to find an initial basic feasible solution. Among these additional methods is
Vogel's Approximation Method. The steps to find an initial basic feasible solution are as follows:
Step 1: Select the lowest and next-to-lowest cost for each row and determine the difference
between them for each row; display these within the first bracket against the respective rows. If
there are two or more cells with the same lowest cost, the difference may be taken to be zero.
Compute, similarly, the differences for each column and display them within the bracket against
the respective columns.
Step 2: Find the largest of the differences and find the row or column for which the difference is
maximum. Let the maximum difference correspond to the i-th row. Select the lowest cost in the
i-th row; let it be c_ij. Allocate x_ij = min(a_i, b_j) in the cell (i, j), which is the maximum
feasible amount that can be allocated in the cell (i, j). If the maximum difference is not unique,
select any one of them.
Step 3: If a_i < b_j, cross off the i-th row and diminish b_j by a_i. If b_j < a_i, cross off the
j-th column and diminish a_i by b_j. If a_i = b_j, allocate x_ij = a_i = b_j in cell (i, j) and
cross off either the i-th row or the j-th column, but not both. Of course, we can omit both the
row and the column simultaneously by inserting a basic variable 0 (zero) in one of the cells of
the corresponding row or column possessing the next minimum cost, and the solution will then be
degenerate.
Step 4: Recompute the row and column differences for the reduced transportation table. Repeat
the procedure discussed above until all rim requirements are satisfied.
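The steps above can be sketched in code (a minimal illustration; the function name and tie-breaking choices are our own — ties are broken by whichever row or column is examined first, and when a_i = b_j both are crossed off, which can make the result degenerate). Run on the factory–warehouse data of the example that follows, it reproduces the allocation x_11 = 5, x_14 = 2, x_23 = 7, x_24 = 2, x_32 = 8, x_34 = 10 derived in the text.

```python
def vogel_approximation(cost, supply, demand):
    """Initial B.F.S. by Vogel's approximation method.

    cost: m x n matrix; supply/demand: balanced rim requirements.
    Returns a dict {(i, j): allocated amount}."""
    a, b = list(supply), list(demand)
    rows, cols = set(range(len(a))), set(range(len(b)))
    alloc = {}

    def penalty(values):
        # difference between the lowest and next-to-lowest cost
        s = sorted(values)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        best = None   # (penalty, row, column) of the chosen cell
        for i in rows:
            j_min = min(cols, key=lambda j: cost[i][j])
            p = penalty([cost[i][j] for j in cols])
            if best is None or p > best[0]:
                best = (p, i, j_min)
        for j in cols:
            i_min = min(rows, key=lambda i: cost[i][j])
            p = penalty([cost[i][j] for i in rows])
            if p > best[0]:
                best = (p, i_min, j)
        _, i, j = best
        q = min(a[i], b[j])          # maximum feasible amount for the cell
        alloc[(i, j)] = q
        a[i] -= q
        b[j] -= q
        if a[i] == 0:
            rows.discard(i)
        if b[j] == 0:
            cols.discard(j)
    return alloc

# Data of the example below: 3 factories, 4 warehouses
cost = [[19, 30, 50, 10], [70, 30, 40, 60], [40, 8, 70, 20]]
alloc = vogel_approximation(cost, [7, 9, 18], [5, 8, 7, 14])
total = sum(q * cost[i][j] for (i, j), q in alloc.items())
print(alloc)
print(total)
```

Because each allocation is guided by the penalty of ignoring the cheapest cell, VAM usually starts much closer to the optimum than the North-West corner rule.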
Example: Obtain an initial basic feasible solution to the balanced T.P. given below using Vogel's
approximation method.

          warehouses
        W_1  W_2  W_3  W_4
         19   30   50   10 |  7
factory  70   30   40   60 |  9   Factory capacity
         40    8   70   20 | 18
          5    8    7   14 | 34
Demand
Solution:
Step 1: Select the lowest and next-to-lowest cost for each row and each column, determine the
difference between them for each row and column, and display these within the first bracket
against the respective rows and columns. Here all the differences have been shown within the first
compartment. The maximum difference is 22, which occurs at the second column, and the minimum
cost of that column is c_32 = 8. Allocate x_32 = min(18, 8) = 8 in the cell (3, 2). The demand of
W_2 has been satisfied; cross off the 2nd column and ignore it for future computation. The
resulting cost matrix is obtained by deleting the cost components of the second column.
(Working tableau: the remaining costs with the successive row and column differences shown in
parentheses.)
Step 2: Applying the same technique to the resulting matrix, the capacities, demands and the
differences of the cost components have been shown in the second compartment. The maximum
difference is 21, which occurs in the first column, and the lowest cost in that column is c_11 =
19. Allocate x_11 = min(7, 5) = 5 in the cell (1, 1). The demand of W_1 has been satisfied; shade
the first column as shown in the table. The resulting cost matrix is obtained by deleting the cost
components of the first column.
Step 3: Proceeding in the same way, we get the maximum difference 50, which occurs in the third
row, and the minimum cost is c_34 = 20. Allocate x_34 = min(10, 14) = 10 in the cell (3, 4); the
capacity of F_3 will be exhausted, and the resulting matrix is obtained by deleting the cost
components of the third row; shade the third row as shown in the table. Using the same technique,
ultimately we obtain the initial B.F.S., where all capacities have been exhausted and all the
demands are met.
Here the initial basic feasible solution obtained by Vogel's approximation method is shown in a
single table in a very compact manner, which saves time.
The initial B.F.S. obtained by this method is non-degenerate and unique, and the initial
solution is x_11 = 5, x_14 = 2, x_23 = 7, x_24 = 2, x_32 = 8 and x_34 = 10, as shown in the above
table.
1. Obtain an initial B.F.S. to each of the following T.P.s by using the following methods and
find out which solution is better.
a. North west corner method
b. Row minima method
c. Cost (matrix) minima method
i.
 4  3  2  5 | 6
 6  1  4  3 | 9
 7  2  4  6 | 7
 4  6  6  6

ii.
 9  8  5  7 | 12
 4  6  8  7 | 14
 5  8  9  5 | 16
 8 18 13  3
5 1 8 12
2 1 0 14
3 6 7 4
9 10 11 6
In determining the net evaluations we shall make use of the property of duality. The
transportation problem is
minimize z = ∑_i ∑_j c_ij x_ij
subject to ∑_j x_ij = a_i, i = 1, 2, …, m
∑_i x_ij = b_j, j = 1, 2, …, n,
where there are (m + n) constraints, all of which are equations, and of which only (m +
n − 1) are independent. Hence there are (m + n) dual variables to the primal
(original) problem, of which one variable can be selected arbitrarily, and all variables are
unrestricted in sign [as all primal constraints are equations]. The dual variables are
w = (u_1, u_2, …, u_m, v_1, v_2, …, v_n) = (u, v) (8.4.1)
and the dual constraints are
u_i + v_j ≤ c_ij. (8.4.2)
Now if B⁻¹ is the basis inverse of the primal problem at the optimal stage and c_B is the
associated cost vector, the dual optimal solution is given by
w = c_B B⁻¹, (8.4.3)
and the net evaluations are
z_ij − c_ij = c_B B⁻¹ A_ij − c_ij (8.4.4)
= w A_ij − c_ij
= (u, v) A_ij − c_ij (8.4.5)
z_ij − c_ij = (u_1, …, u_m, v_1, …, v_n)(e_i + e_{m+j}) − c_ij = u_i + v_j − c_ij. (8.4.6)
For the basic cells, the net evaluations z_ij − c_ij = 0, i.e., if x_ij is a basic variable then
z_ij − c_ij = 0 (8.4.7)
or, u_i + v_j − c_ij = 0,
or, v_j = c_ij − u_i if u_i is known. (8.4.8)
If we select one of the values of the dual variables (u_1, …, u_m, v_1, …, v_n) to be zero, all
other values can be determined using the relation (8.4.8). And once all the quantities u_i, v_j
are known, the net evaluations z_ij − c_ij given by the formula (8.4.6) can be calculated. All
these calculations can be done easily from the transportation table.
Below is a transportation table involving 4 origins and 4 destinations in which the cost
components are displayed in their proper places and all basic cells are marked by circular black
spots (•). The solution (not given in the table) is a basic feasible solution. The problem before
us is to calculate numerically all z_ij − c_ij corresponding to the non-basic cells.
Table 8.6 (in each cell the cost c_ij is the lower number; for a non-basic cell the net
evaluation z_ij − c_ij is shown above the cost; the u_i appear in the last column, the v_j in the
last row):
  •      •     −1      3  | 3
  6      9      8      7  |
 −1      •      •      •  | 0
  4      6      4      7  |
 −6      1      •      5  | 0
  9      5      4      2  |
  1      6     −1      •  | 1
  3      1      6      8  |
  3      6      4      7
For the basic cells, u_i + v_j = c_ij, and
z_ij − c_ij = u_i + v_j − c_ij [from (8.4.6)].
For simple computation we may take the value of one variable equal to zero.
In the above table, cells (1, 1), (1, 2), (2, 2), (2, 3), (2, 4), (3, 3) and (4, 4) are marked
with circular black spots. These cells are the basic cells and the solution will be a B.F.S.,
because the set of cells does not contain a loop and the number of cells is (4 + 4 − 1) = 7.
In the second row there are three basic cells. Let us take u_2 = 0 [in general, the variable u_i
or v_j whose row or column contains the maximum number of basic cells is taken to be zero]. Then,
from u_i + v_j = c_ij for the basic cells,
u_2 + v_2 = 6 gives v_2 = 6
u_2 + v_3 = 4 gives v_3 = 4
u_2 + v_4 = 7 gives v_4 = 7
u_1 + v_2 = 9 gives u_1 = 3
u_1 + v_1 = 6 gives v_1 = 3
u_3 + v_3 = 4 gives u_3 = 0
u_4 + v_4 = 8 gives u_4 = 1.
We can now calculate all net evaluations corresponding to the non-basic cells. For example,
z_13 − c_13 = u_1 + v_3 − c_13 = 3 + 4 − 8 = −1
z_14 − c_14 = u_1 + v_4 − c_14 = 3 + 7 − 7 = 3.
In the transportation table, the values of all u_i and v_j are listed outside the block, as given
in the table, and all net evaluations corresponding to non-basic cells are displayed in the
north-east corner of the respective cells. Following this rule, all net evaluations corresponding
to the non-basic cells can be calculated.
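The bookkeeping just described can be sketched in code (a minimal illustration; the function name and the 0-indexed cell convention are our own). Applied to the costs and basic cells of table 8.6, it reproduces the values u = (3, 0, 0, 1), v = (3, 6, 4, 7) and the net evaluations found above.

```python
def net_evaluations(cost, basic_cells):
    """Solve u_i + v_j = c_ij on the basic cells (taking u of the row with
    the most basic cells as 0), then return the net evaluations
    z_ij - c_ij = u_i + v_j - c_ij for the non-basic cells."""
    m, n = len(cost), len(cost[0])
    u, v = [None] * m, [None] * n
    counts = [sum(1 for (i, j) in basic_cells if i == r) for r in range(m)]
    u[counts.index(max(counts))] = 0      # row with the most basic cells
    changed = True
    while changed:                        # propagate through the basic cells
        changed = False
        for (i, j) in basic_cells:
            if u[i] is not None and v[j] is None:
                v[j] = cost[i][j] - u[i]; changed = True
            elif v[j] is not None and u[i] is None:
                u[i] = cost[i][j] - v[j]; changed = True
    delta = {(i, j): u[i] + v[j] - cost[i][j]
             for i in range(m) for j in range(n)
             if (i, j) not in basic_cells}
    return u, v, delta

# Table 8.6 data (0-indexed cells)
cost = [[6, 9, 8, 7], [4, 6, 4, 7], [9, 5, 4, 2], [3, 1, 6, 8]]
basic = {(0, 0), (0, 1), (1, 1), (1, 2), (1, 3), (2, 2), (3, 3)}
u, v, delta = net_evaluations(cost, basic)
print(u)              # [3, 0, 0, 1]
print(v)              # [3, 6, 4, 7]
print(delta[(3, 1)])  # 6 -> the entering cell (4, 2) in the text's 1-indexed notation
```

The propagation loop relies on the basic cells being loop-free and connected, which is exactly the structure of a B.F.S.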
To test the optimality of a B.F.S. at any stage, we require the values of all net evaluations
corresponding to the non-basic cells. If all net evaluations are non-positive quantities, the
solution is optimal; but if at least one of them is positive, the solution is not optimal. As in
the simplex method, we shall then have to proceed further to get an optimal solution. The first
problem before us is to select a vector which will enter the basis and move the solution towards
optimality. The vector which will enter the basis is the entering vector, and the corresponding
cell in the transportation table is the new basic cell. If A_pk is the entering vector, then cell
(p, k) will be the new basic cell which will enter the set of basic cells and, due to this, a cell
will leave the set of basic cells. This cell is known as the leaving cell or departing cell, and
the vector corresponding to the departing cell is known as the departing vector. The entering
vector is selected in the following manner: select the cell for which the positive net evaluation
z_ij − c_ij is maximum; if the maximum is not unique, we may select any one cell corresponding to
the maximum value of the net evaluations.
In the above table (8.6), max{z_ij − c_ij : z_ij − c_ij > 0} = 6 > 0, which occurs at the cell
(4, 2). Therefore, in the next iteration, the cell (4, 2) will be the new basic cell and A_42 will
be the vector to enter the next basis.
D. Determination of the departing cell and the value of the basic variable in the entering
cell.
After the selection of the entering cell, we can identify the cell which will leave the set of
basic cells geometrically from the transportation table. Let the cell (p, k) be the entering cell.
The number of basic cells, including the cell (p, k), is m + n − 1 + 1 = m + n. Evidently the set
of column vectors corresponding to these (m + n) cells is linearly dependent. Therefore it is
always possible to construct a loop connecting the cell (p, k) and the set, or some subset, of the
basic cells. Construct the loop by trial and error; the loop is unique.
Now allocate the value θ > 0 in the cell (p, k) and readjust the basic variables in the ordered
set of cells containing the simple loop by adding and subtracting the value θ alternately from the
corresponding quantities, such that all rim requirements remain satisfied. Suppose the ordered set
of cells containing the simple loop is
(p, k), (p, j), (i, j), (i, k).
Now select the maximum value of θ in such a way that the readjusted values of the variables vanish
in at least one cell [excluding the cell (p, k)] of the ordered set and all other variables remain
non-negative. Let us assume that the variable vanishes in the cell (p, j), satisfying all the
conditions stated above, i.e., x_pj − θ = 0, which gives the value θ = x_pj. This is the value of
the new basic variable to be allocated in the new basic cell (p, k). The cell (p, j) is the
departing cell and it will be a non-basic cell during the next iteration. The vector A_pj will
leave the basis and the new basic variables will be θ, x_ij + θ, x_ik − θ, etc., corresponding to
the cells (p, k), (i, j), (i, k) respectively. All basic variables in the cells not in the loop
remain unchanged. It may so happen that, for the maximum value of θ, the readjusted variables
vanish in more than one cell. In that case it is not possible to select uniquely the cell which
will leave the set of basic cells: select arbitrarily one of these cells as the departing cell,
write down the value 0 (zero) as the new basic variable in all the other such cells, and the
solution in the next iteration will be degenerate.
1. Computational procedure
Step 1: Determine an initial basic feasible solution of the given problem using any one of the methods discussed previously. To reach the optimal solution quickly, in general use either the cost minima or Vogel's approximation method.
Step 2: Calculate all net evaluations corresponding to the non-basic cells and display them in the north-east corner of the corresponding non-basic cells. If all net evaluations are non-positive quantities at any iteration, the solution is optimal. Then calculate the corresponding minimum cost using the relation min z = z̃ = ∑ cij xij, where xij are the components of the optimal solution and cij are the corresponding transportation costs per unit commodity. If at least one net evaluation is positive, the solution is not optimal.
Step 3: If the solution is not optimal, determine the entering cell, the value of θ which will be allocated in the entering cell, and the cell which will leave the basis. Construct a new transportation table with readjusted basic variables. Calculate again all net evaluations corresponding to the non-basic cells and display them in the north-east corner of the corresponding non-basic cells. Test the optimality of the solution. If the solution is not optimal, proceed similarly until the optimality conditions are satisfied.
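The u-v bookkeeping of Step 2 can be sketched in a few lines of Python. This is a minimal illustration of the convention used above (the net evaluation of a non-basic cell (i, j) taken as ui + vj - cij, with optimality when all are non-positive), not a full solver; the propagation assumes the basic cells form the usual spanning tree of a B.F.S.

```python
# Sketch of Step 2: compute u_i, v_j from the basic cells (taking u_0 = 0)
# and return the net evaluations u_i + v_j - c_ij of the non-basic cells.
# The solution is optimal when every net evaluation is non-positive.

def net_evaluations(cost, basic_cells):
    m, n = len(cost), len(cost[0])
    u, v = [None] * m, [None] * n
    u[0] = 0
    # Propagate through the basic cells; m + n passes suffice when the
    # basic cells form a spanning tree, as they do in a B.F.S.
    for _ in range(m + n):
        for i, j in basic_cells:
            if u[i] is not None and v[j] is None:
                v[j] = cost[i][j] - u[i]
            elif v[j] is not None and u[i] is None:
                u[i] = cost[i][j] - v[j]
    return {(i, j): u[i] + v[j] - cost[i][j]
            for i in range(m) for j in range(n) if (i, j) not in basic_cells}

# The optimal basis of Example 8.4.1 below: every net evaluation is <= 0,
# and the one at cell (1, 3) (indices (0, 2) counting from zero) is 0.
cost = [[5, 4, 6, 14], [2, 9, 8, 6], [6, 11, 7, 13]]
basis = {(0, 0), (0, 1), (1, 3), (2, 0), (2, 2), (2, 3)}
print(net_evaluations(cost, basis))
```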
Example 8.4.1 Obtain the initial basic feasible solution to the following transportation problem by the matrix minima method and then find an optimal solution and the corresponding cost of transportation.

            D1    D2    D3    D4   Supply
  O1         5     4     6    14     15
  O2         2     9     8     6      4
  O3         6    11     7    13      8
  Demand     9     7     5     6     27
Solution: The initial B.F.S. calculated with the help of the matrix minima method is given in table 8.7(A) below: x11 = 5, x12 = 7, x13 = 3, x21 = 4, x33 = 2, x34 = 6. Now calculate all net evaluations corresponding to the non-basic cells with the assumption u1 = 0. Not all net evaluations are non-positive, so the solution is not optimal. Cell (2, 4) has the positive net evaluation 3. Thus in the next iteration cell (2, 4) will be the new basic cell. Construct the loop as shown in table 8.7(A); the loop is simple and unique. Insert the value θ > 0 in the cell (2, 4) and readjust the basic variables in the cells containing the loop accordingly, as given in table 8.7(A).
Now the maximum value of θ will be 3, cell (1, 3) will leave the set of basic cells, and all other variables remain non-negative. Construct table 8.7(B) with the value θ = 3.
Calculate all net evaluations corresponding to the non-basic cells with the assumption u1 = 0. Cell (3, 1) will be the new basic cell. Construct a simple loop through cell (3, 1) as shown in table 8.7(B). Insert the value θ > 0 in the cell (3, 1) and readjust the basic variables as shown in table 8.7(B). The maximum value of θ is 1, cell (2, 1) will leave the set of basic cells, and all other variables remain non-negative with θ = 1. Construct table 8.7(C).
Calculate all net evaluations corresponding to the non-basic cells in table 8.7(C) with the assumption u1 = 0. All net evaluations are non-positive. Hence the solution is optimal, the optimal solution is x11 = 8, x12 = 7, x24 = 4, x31 = 1, x33 = 5, x34 = 2, and the minimum cost of the transportation problem is
z̃ = 5 × 8 + 4 × 7 + 6 × 4 + 6 × 1 + 7 × 5 + 13 × 2 = 159 units.
Notice. As the net evaluation corresponding to the non-basic cell (1, 3) is zero, an alternative optimal solution exists.
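The rim conditions and the cost of the solution just found can be checked mechanically. The short verification sketch below simply recomputes both from the allocations stated above; it is a check, not part of the method.

```python
# Check the optimal solution of Example 8.4.1: rim conditions and cost.
cost = [[5, 4, 6, 14], [2, 9, 8, 6], [6, 11, 7, 13]]
supply = [15, 4, 8]
demand = [9, 7, 5, 6]
# Allocations stated in the optimal solution above, zero-indexed.
x = {(0, 0): 8, (0, 1): 7, (1, 3): 4, (2, 0): 1, (2, 2): 5, (2, 3): 2}

for i, a in enumerate(supply):   # every origin ships exactly its supply
    assert sum(q for (r, c), q in x.items() if r == i) == a
for j, b in enumerate(demand):   # every destination receives its demand
    assert sum(q for (r, c), q in x.items() if c == j) == b

total = sum(cost[i][j] * q for (i, j), q in x.items())
print(total)  # 159
```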
Solve the following balanced transportation problem by using VAM to determine the initial basic feasible solution, and test whether the solution is optimal or not.

            D1    D2    D3    D4   Supply
  O1         4     2     7    -1     27
  O2         3     0     2     4     33
  O3         5     3     4     5     23
  O4         3     5     4    -2     17
  Demand    31    24    25    20    100
1. When ∑ ai > ∑ bj , i.e., the total available capacities of the m origins are greater than the total demands of the n destinations. This problem can be converted into a balanced transportation problem by using the following device:
a. Imagine a fictitious or fake (n + 1)th destination Dn+1, with zero cost of transportation from every origin to it.
b. Assume that the demand of the destination Dn+1 is bn+1 = ∑ ai − ∑ bj .
            D1    D2    D3   Supply
  O1         2     3     4      6
  O2         4     3     1      8
  O3         2     2     5      6
  Demand     5     4     7

In the problem, ∑ ai = a1 + a2 + a3 = 6 + 8 + 6 = 20
and ∑ bj = b1 + b2 + b3 = 5 + 4 + 7 = 16, so
∑ ai = 20 > 16 = ∑ bj .
The problem can be converted into a balanced T.P. in the following manner:

            D1    D2    D3    D4   Supply
  O1         2     3     4     0      6
  O2         4     3     1     0      8
  O3         2     2     5     0      6
  Demand     5     4     7     4     20

Here b4 = ∑ ai − ∑ bj = 20 − 16 = 4 and c14 = c24 = c34 = 0.
Now this transportation problem can be solved as in the previous methods. In this problem, an initial B.F.S. obtained using the matrix minima method is x11 = 2, x14 = 4, x22 = 1, x23 = 7, x31 = 3 and x32 = 3, and the minimum cost of transportation such that the total demands of the destinations are satisfied is 25. Multiple optimal solutions exist in this problem, and one of the optimal solutions is x11 = 3, x14 = 3, x23 = 7, x24 = 1, x31 = 2 and x32 = 4, i.e., x11 = 3, x23 = 7, x31 = 2 and x32 = 4, since no quantity is actually transported to the fictitious destination D4.
2. When ∑ bj > ∑ ai , i.e., the total demands of the n destinations are greater than the total available capacities of the m origins. Here also the problem can be converted into an ordinary balanced T.P., but the total demands of the n destinations will not be satisfied completely. Though we cannot satisfy all demands, we can still allocate the materials available at the origins to the destinations in such a way that the cost of transportation is minimized. Imagine a fictitious (m + 1)th origin with availability
am+1 = ∑ bj − ∑ ai and cm+1,j = 0 for every destination.
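The balancing device of the two cases can be written as one small helper; this is a sketch under the conventions above (zero transportation costs for the fictitious destination or origin).

```python
# Sketch: convert an unbalanced transportation problem into a balanced one
# by appending a fictitious destination (surplus supply) or a fictitious
# origin (surplus demand), both with zero transportation costs.

def balance(cost, supply, demand):
    cost = [row[:] for row in cost]          # work on copies
    supply, demand = supply[:], demand[:]
    surplus = sum(supply) - sum(demand)
    if surplus > 0:                          # case 1: supply exceeds demand
        for row in cost:
            row.append(0)                    # c_{i, n+1} = 0
        demand.append(surplus)               # b_{n+1} = sum(a_i) - sum(b_j)
    elif surplus < 0:                        # case 2: demand exceeds supply
        cost.append([0] * len(demand))       # c_{m+1, j} = 0
        supply.append(-surplus)              # a_{m+1} = sum(b_j) - sum(a_i)
    return cost, supply, demand

# The 3 x 3 problem above: supplies 6, 8, 6 and demands 5, 4, 7.
c, s, d = balance([[2, 3, 4], [4, 3, 1], [2, 2, 5]], [6, 8, 6], [5, 4, 7])
print(d)  # [5, 4, 7, 4] -- a dummy destination with demand 4 was added
```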
Table 8.8 [transportation table with its VAM penalty rows and columns; the basic allocations read from it below are 10, 12, 2, 1, 8 and 4, at unit costs 11, 12, 14, 17, 14 and 11 respectively]
Solve the above problem by taking the initial basic feasible solution obtained by VAM. Find the amount which is not supplied, and find the destination to which the fake amount has been supplied.
In the second row there are three basic variables, so we take u2 = 0. With u2 = 0 we have calculated all ui (i = 1, 2, 3, 4) and vj (j = 1, 2, 3, 4) as shown in the table. Now we calculate the net evaluations of all the non-basic cells.
All the net evaluations of the non-basic cells are negative except one, which is 0. Hence we have reached the optimal stage, and alternative optimal solutions exist.
min = 10 × 11 + 12 × 12 + 2 × 14 + 1 × 17 + 8 × 14 + 4 × 11
= 110 + 144 + 28 + 17 + 112 + 44 = 455
Example 8.5.2 Solve the following balanced T.P., finding the I.B.F.S. by the cost minima method.
Table 8.10 (basic allocations are shown in parentheses before the unit costs)

              D1        D2        D3         D4     Supply
  O1           2         5         0    (12) -3       12
  O2       (7) 4         6         8     (8)  1       15
  O3       (3) 4     (8) 0     (3) 4          5       14
  O4           2         6     (9) 1          4        9
  Demand      10         8        12         20       50

The initial basic feasible solution obtained by the cost minima method is therefore x14 = 12, x21 = 7, x24 = 8, x31 = 3, x32 = 8, x33 = 3, x43 = 9.
Table 8.11 (net evaluations (ui + vj) − cij shown for the non-basic cells, with u2 = 0; the cell (1, 3), whose net evaluation is 0, receives θ, and the loop through it is marked with ±θ)

                D1          D2           D3           D4      ui
  O1            -2          -9            0     (12-θ) -3     -4
  O2       (7-θ) 4          -6           -4      (8+θ) 1       0
  O3       (3+θ) 4       (8) 0      (3-θ) 4           -4       0
  O4            -1          -9        (9) 1           -6      -3
  vj             4           0            4            1
Table 8.12 (after θ = 3; net evaluations for the non-basic cells)

               D1         D2         D3          D4      ui
  O1           -2         -9      (3) 0      (9) -3       0
  O2        (4) 4         -6         -4     (11)  1       4
  O3        (6) 4      (8) 0          0          -4       4
  O4           -1         -9      (9) 1          -6       1
  vj            0         -4          0          -3
Here in the table 8.12 all net evaluations (ui + vj) − cij ≤ 0 for the non-basic cells, and the net evaluation of the cell (3, 3) is 0; thus an alternative optimal solution exists. One optimal solution is
x13 = 3, x14 = 9, x21 = 4, x24 = 11, x31 = 6, x32 = 8, x43 = 9, and
z = 3 × 0 + 9 × (−3) + 4 × 4 + 11 × 1 + 6 × 4 + 8 × 0 + 9 × 1 = 0 − 27 + 16 + 11 + 24 + 0 + 9 = 33 units.
Degeneracy may occur at the initial stage or at any subsequent iteration. Here we shall discuss a problem where the initial B.F.S. is degenerate and only one basic variable is zero. The problem can be solved similarly if more than one basic variable is zero. Allocate a quantity ε > 0 (very small) instead of the basic variable 0 (zero) in the cell, and readjust the basic variables in the cells so that the rim conditions
∑j xij = ai ,  ∑i xij = bj
are satisfied; since ε is very small, it may be assumed that ai + ε ≈ ai. Now solve the problem treating it as a non-degenerate problem. You may drop ε at any subsequent iteration if it has been found that the solution at that stage will be non-degenerate, setting ε = 0 after finding the optimal solution.
Example 8.6.1 Obtain the optimal solution to the following transportation problem by using the north-west corner rule to find the I.B.F.S.

            D1    D2    D3    D4   Supply
  O1         2     5     4     7      4
  O2         6     1     2     5      6
  O3         4     5     2     4      8
  Demand     3     7     6     2
Table 8.13 (allocations by the north-west corner rule, shown in parentheses)

            D1        D2        D3        D4
  O1      (3) 2     (1) 5         4         7
  O2          6     (6) 1     (0) 2         5
  O3          4         5     (6) 2     (2) 4

Remark. Either (2, 3) or (3, 2) contains the basic variable 0, which indicates that the I.B.F.S. is degenerate. The 0 cannot be inserted in any other cell, for in that case the solution would not be the one given by the N.W.C.R.
Optimality test. Initially keep '0' in its own position as '0', and not as ε > 0; ε is introduced only when the basic variables must be readjusted around a loop so that the rim conditions remain satisfied.
Table 8.14 (net evaluations cij − (ui + vj) shown for the non-basic cells)

            D1        D2        D3        D4     ui
  O1      (3) 2     (1) 5        -2        -1      0
  O2          8     (6) 1     (0) 2         1     -4
  O3          6         4     (6) 2     (2) 4     -4
  vj          2         5         6         8
The minimum value of {cij − (ui + vj) : cij − (ui + vj) < 0} is −2, which occurs in the cell (1, 3). Now put ε > 0 instead of the zero, and readjust the variables around the loop by ±ε so that the rim conditions remain satisfied [ε need not be inserted in the first table itself], and the table becomes
Table 8.15 (the entering cell (1, 3) and the loop (1, 3), (2, 3), (2, 2), (1, 2), readjusted by ε)

            D1          D2          D3        D4
  O1      (3) 2     (1-ε) 5       (ε) 4         7
  O2          6     (6+ε) 1           2         5
  O3          4           5       (6) 2     (2) 4
Table 8.16 (net evaluations cij − (ui + vj) for the non-basic cells)

            D1          D2          D3        D4     ui
  O1      (3) 2     (1-ε) 5       (ε) 4         1      0
  O2          8     (6+ε) 1           2         3     -4
  O3          4           2       (6) 2     (2) 4     -2
  vj          2           5           4         6
All cij − (ui + vj) ≥ 0, so we have reached the optimal stage. Now put ε = 0; the optimal solution is x11 = 3, x12 = 1, x13 = 0, x22 = 6, x33 = 6, x34 = 2, and the minimum cost = 3 × 2 + 1 × 5 + 0 × 4 + 6 × 1 + 6 × 2 + 2 × 4 = 37 units. The optimal solution is degenerate, as at least one optimal basic variable is 0.
Example 8.6.2 Solve the following transportation problem by using VAM to determine the I.B.F.S. and show that the optimal solution is a degenerate solution.
Table 8.18 [the initial VAM table with its penalty rows and columns]
Optimality test:
Table 8.19 [the optimality-test table with the values ui, vj and the net evaluations of the non-basic cells]
The initial basic feasible solution is the optimal solution. The minimum cost is
min z = 15 × 5 + 15 × 9 + 0 × 12 + 10 × 8 + 5 × 15 + 0 × 16 = 365 units.
Summary
Transportation problem
When
∑ ai = ∑ bj ,    (8.1.1)
i.e. the total available quantity is equal to the total demand, the problem is called a balanced transportation problem, and when
∑ ai ≠ ∑ bj
it is called an unbalanced transportation problem. We discussed problems of the first type first.
Review exercise
1) What is an unbalanced T.P.? How can you convert it into a balanced T.P.?
2) Which one of the following statements is true?
a) A T.P. is strictly a maximization problem.
b) A T.P. is strictly a minimization problem.
c) A T.P. may be a maximization or a minimization problem.
3) Prove that there exists at least one feasible solution in a balanced T.P.
4) Prove that there exists a finite optimal solution in each balanced transportation problem.
5) Define a loop in a transportation table. What is the nature of a loop in a transportation
table?
6)
a. What is the number of basic variables in balanced T.P. with m origins and n
destinations?
b. What is the maximum number of positive components in a B.F.S of a T.P. (balanced)
with m origins and n destinations?
7) Find the initial B.F.S. of each of the following T.P.s using the North-West corner rule and prove that the solution of (ii) is degenerate.
i)
           4    6    9    5     16
           4    2    7    1     11
           2    5    2    8     10
          12    7    6   15
ii)
           9    7    4    2     20
           2    9    8    6     15
           9    7    5    8     15
          14   21    6    9
8)
           6    4    2    7      8
           5    1    4    6     14
           6    5    2    5      9   Supply
           4    3    2    1     11
           7   13   12   10
        Demand
9) Solve the following unbalanced transportation problem:
           4    5    6     12
           3    1    5     11   Supply
           2    4    4      7
           6    5    8
        Demand
10) A firm manufacturing a single product has three plants P1, P2, P3. The three plants have produced 60, 35 and 40 units respectively during this month. The firm has made a commitment to sell 22 units to customer A, 45 units to customer B, 20 units to customer C, 18 units to customer D and 30 units to customer E. Find the minimum cost of shipping the manufactured product to the five customers. The cost matrix is given below:

                      customer
                 A    B    C    D    E
  plant  P1      4    1    3    4    4
         P2      2    3    2    2    3
         P3      3    5    2    4    4
Chapter Nine
9. Theory of games
9.1 Introduction
In this chapter, we shall study, if not the most practical, then certainly an elegant
application of linear programming. The subject is called game theory, and we shall
focus on the simplest type of game, called the finite two-person zero-sum game, or
just matrix game for short. Our primary goal shall be to prove the famous Minimax
Theorem, which was first discovered and proved by John von Neumann in 1928. His
original proof of this theorem was rather involved and depended on another beautiful
theorem from mathematics, the Brouwer Fixed-Point Theorem. However, it eventually became
clear that the solution of matrix games could be found by solving a certain
linear programming problem and that the Minimax Theorem is just a fairly straightforward
consequence of the Duality Theorem.
Game theory deals with decision situations in which two intelligent opponents with conflicting
objectives are trying to outdo one another .Typical examples include launching advertising
campaigns for competing products and planning strategies for warring armies. Game theory is
the formal study of decision-making where several players must make choices that potentially
affect the interests of the other players. A game is a formal description of a strategic situation.
Nash Equilibrium
A Nash equilibrium, also called a strategic equilibrium, is a list of strategies, one for each player, which has the property that no player can unilaterally change his strategy and get a better payoff.
Payoff
A payoff is a number, also called utility, that reflects the desirability of an outcome to a player,
for whatever reason. When the outcome is random, payoffs are usually weighted with their
probabilities. The expected payoff incorporates the player’s attitude towards risk.
General objectives
At the end of this unit the learner will be able to:
Game theory
In a game conflict, two opponents, known as players, each have a (finite or infinite) number of alternatives or strategies. Associated with each pair of strategies is a payoff that one player receives from the other. Such games are known as two-person zero-sum games because a gain by one player signifies an equal loss to the other. It suffices, then, to summarize the game in terms of the payoff to one player. Designating the two players as A and B with m and n strategies, respectively, the game is usually represented by the payoff matrix to player A.
Classification of games
How many players are there in the game? Usually there should be more than one player. However, you can play roulette alone; the casino doesn't count as a player, since it doesn't make any decisions but only collects or gives out money. Most books on game theory do not treat one-player games, but I will allow them provided they contain elements of randomness.
Is play simultaneous or sequential? In a simultaneous game, each player has only one move, and
all moves are made simultaneously. In a sequential game, no two players move at the same time,
and players may have to move several times. There are games that are neither simultaneous nor
sequential. Does the game have random moves? Games may contain random events that
influence its outcome. They are called random moves. Is the game zero-sum? Zero-sum games
have the property that the sum of the payoffs to the players equals zero. A player can have a
positive payoff only if another has a negative payoff. Poker and chess are examples of zero-sum
games. Real-world games are rarely zero-sum.
The theory of von Neumann and Morgenstern is most complete for the class of games
called two-person zero-sum games, i.e. games with only two players in which one player
wins what the other player loses.
The extreme case of players with fully opposed interests is embodied in the class of two player
zero-sum (or constant-sum) games. Familiar examples range from rock-paper scissors to many
parlor games like chess, go, or checkers. A classic case of a zero-sum game, which was
considered in the early days of game theory by von Neumann, is the game of poker. The
extensive game in Figure 10, and its strategic form in Figure 11, can be interpreted in terms of
poker, where player I is dealt a strong or weak hand which is unknown to player II. It is a
constant-sum game since for any outcome, the two payoffs add up to 16, so that one player’s
gain is the other player’s loss. When player I chooses to announce despite being in a weak
position, he is colloquially said to be “bluffing.” This bluff not only induces player II to possibly
sell out, but similarly allows for the possibility that player II stays in when player I is strong,
increasing the gain to player I.
Because games are rooted in conflict of interest, the optimal solution selects one or more strategies for each player such that any change in the chosen strategies does not improve the payoff to either player. These solutions can be in the form of a single pure strategy or several strategies mixed according to specific probabilities. The following examples demonstrate the two cases.
Two companies, A and B, sell two brands of flu medicine. Company A advertises on radio (A1), television (A2), and in newspapers (A3). Company B, in addition to using radio (B1), television (B2), and newspapers (B3), also mails brochures (B4). Depending on the effectiveness of each advertising campaign, one company can capture a portion of the market from the other. The following matrix summarizes the percentage of the market captured or lost by company A.
The solution of the game is based on the principle of securing the best of the worst for each player. If company A selects strategy A1, then regardless of what B does, the worst that can happen is that A loses 3% of the market share to B. This is represented by the minimum value of the entries in row 1. Similarly, the worst outcome of strategy A2 is for A to capture 5% of the market from B, and the worst outcome of strategy A3 is for A to lose 9% to B. These results are listed in the 'row min' column. To achieve the best of the worst, company A chooses strategy A2 because it corresponds to the maximum value, the largest element in the 'row min' column.
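The 'best of the worst' reasoning above is a one-line maximin computation. The matrix below is an assumed stand-in whose row minima agree with the values quoted in the text (-3, 5 and -9); it is not necessarily the example's actual table.

```python
# Maximin selection for company A: pick the row whose worst (minimum)
# entry is largest.  This matrix is an assumed stand-in whose row minima
# match the values quoted in the text (-3, 5, -9).
A = [[ 8, -2,  9, -3],   # A1: worst outcome -3
     [ 6,  5,  6,  8],   # A2: worst outcome  5
     [-2,  4, -9,  5]]   # A3: worst outcome -9

row_min = [min(row) for row in A]
best = max(range(len(A)), key=lambda i: row_min[i])
print(row_min, "-> choose A%d" % (best + 1))  # [-3, 5, -9] -> choose A2
```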
Strategic Form
The simplest mathematical description of a game is the strategic form, mentioned in the
introduction. For a two-person zero-sum game, the payoff function of Player II is the negative of
the payoff of Player I, so we may restrict attention to the single payoff function of Player I,
which we call here A.
Definition 1 The strategic form, or normal form, of a two-person zero-sum game is given
by a triplet (X, Y, A), where
(1) X is a nonempty set, the set of strategies of Player I
(2) Y is a nonempty set, the set of strategies of Player II
(3) A is a real-valued function defined on X × Y . (Thus, A(x, y) is a real number for
every x ∈ X and every y ∈ Y.)
This is a very simple definition of a game; yet it is broad enough to encompass the
finite combinatorial games and games such as tic-tac-toe and chess. This is done by being
sufficiently broadminded about the definition of a strategy. A strategy for a game of chess, for
example, is a complete description of how to play the game, of what move to make in
every possible situation that could occur. It is rather time-consuming to write down even
one strategy, good or bad, for the game of chess. However, several different programs for
instructing a machine to play chess well have been written. Each program constitutes one
strategy. The program Deep Blue, that beat then world chess champion Gary Kasparov
in a match in 1997, represents one strategy. The set of all such strategies for Player I is
denoted by X. Naturally, in the game of chess it is physically impossible to describe all
possible strategies since there are too many; in fact, there are more strategies than there
are atoms in the known universe. On the other hand, the number of games of tic-tac-toe
is rather small, so that it is possible to study all strategies and find an optimal strategy
for each player. Later, when we study the extensive form of a game, we will see that many
other types of games may be modeled and described in strategic form.
To illustrate the notions involved in games, let us consider the simplest non-trivial
case when both X and Y consist of two elements. As an example, take the game called
Odd-or-Even.
Example: Odd or Even. Players I and II simultaneously call out one of the numbers one or two.
Player I’s name is Odd; he wins if the sum of the numbers is odd. Player II’s name is Even; she
wins if the sum of the numbers is even. The amount paid to the winner by the loser is always the
sum of the numbers in dollars. To put this game in strategic form we must specify X, Y and A.
Here we may choose X = {1, 2}, Y = {1, 2}, and A as given in the following table (entries are the winnings of Player I):

                   II (Even)
                  1       2
  I (Odd)   1    -2       3
            2     3      -4
It turns out that one of the players has a distinct advantage in this game. Can you
tell which one it is?
Let us analyze this game from Player I’s point of view. Suppose he calls ‘one’ 3/5ths
of the time and ‘two’ 2/5ths of the time at random. In this case,
1. If II calls ‘one’, I loses 2 dollars 3/5ths of the time and wins 3 dollars 2/5ths of the
time; on the average, he wins -2(3/5) + 3(2/5) = 0 (he breaks even in the long run).
2. If II call ‘two’, I wins 3 dollars 3/5ths of the time and loses 4 dollars 2/5ths of the time;
on the average he wins 3(3/5) - 4(2/5) = 1/5.
That is, if I mixes his choices in the given way, the game is even every time II calls 'one', but I wins 20 cents on the average every time II calls 'two'. By employing this simple strategy, I is assured of at least breaking even on the average no matter what II does. Can
Player I fix it so that he wins a positive amount no matter what II calls?
Let p denote the proportion of times that Player I calls ‘one’. Let us try to choose p
so that Player I wins the same amount on the average whether II calls ‘one’ or ‘two’. Then
since I’s average winnings when II calls ‘one’ is -2p + 3(1 - p), and his average winnings
when II calls 'two' is 3p - 4(1 - p), Player I should choose p so that
-2p + 3(1 - p) = 3p - 4(1 - p), i.e. 3 - 5p = 7p - 4, which gives p = 7/12.
Hence, I should call 'one' with probability 7/12, and 'two' with probability 5/12. On the average, I wins -2(7/12) + 3(5/12) = 1/12, or 8 1/3 cents every time he plays the game, no
matter what II does. Such a strategy that produces the same average winnings no matter
what the opponent does is called an equalizing strategy.
Therefore, the game is clearly in I's favor. Can he do better than 8 1/3 cents per game
on the average? The answer is: Not if II plays properly. In fact, II could use the same
procedure:
If I calls ‘one’, II’s average loss is -2(7/12) + 3(5/12) = 1/12. If I calls ‘two’, II’s average
loss is 3(7/12) - 4(5/12) = 1/12.
Hence, I has a procedure that guarantees him at least 1/12 on the average, and II has
a procedure that keeps her average loss to at most 1/12. 1/12 is called the value of the
game, and the procedure each uses to insure this return is called an optimal strategy or
a minimax strategy.
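For a 2 × 2 game without a saddle point, the equalizing strategy and the value have standard closed forms; the sketch below applies them to the Odd-or-Even matrix analyzed above.

```python
# Value and equalizing strategy of a 2x2 zero-sum game with no saddle
# point, using the standard closed forms.  For Odd-or-Even the payoff
# matrix to Player I is [[-2, 3], [3, -4]].
from fractions import Fraction as F

def solve_2x2(a, b, c, d):
    """Matrix [[a, b], [c, d]]; returns (p, value), where p is the
    probability with which Player I plays row 1."""
    denom = a - b - c + d
    p = F(d - c, denom)               # equalizing probability for row 1
    value = F(a * d - b * c, denom)   # value of the game
    return p, value

p, v = solve_2x2(-2, 3, 3, -4)
print(p, v)  # 7/12 1/12
```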
If instead of playing the game, the players agree to call in an arbitrator to settle this conflict, it seems reasonable that the arbitrator should require II to pay 8 1/3 cents to I. For I could argue that he should receive at least 8 1/3 cents, since his optimal strategy guarantees him that much on the average no matter what II does. On the other hand, II could argue that she should not have to pay more than 8 1/3 cents, since she has a strategy that keeps her average loss to at most that amount no matter what I does.
Activity 9.1
Two television networks are battling for viewer shares. Viewer share is important because,
the higher it is, the more money the network can make from selling advertising time during that
program. Consider the following situation: the networks make their programming decisions
independently and simultaneously. Each network can show either sports or a sitcom. Network 1
has a programming advantage in sitcoms and Network 2 has one in sports: If both networks show
sitcoms, then Network 1 gets a 56% viewer share. If both networks show sports, then Network 2
gets a 54% viewer share. If Network 1 shows a sitcom and Network 2 shows sports, then
Network 1 gets a 51% viewer share and Network 2 gets 49%. Finally, if Network 1 shows sports
and Network 2 shows a sitcom, then each gets a 50% viewer share.
The basic premise of utility theory is that one should evaluate a payoff by
its utility to the player rather than on its numerical monetary value. Generally a player’s
utility of money will not be linear in the amount. The main theorem of utility theory
states that under certain reasonable assumptions, a player’s preferences among outcomes
are consistent with the existence of a utility function and the player judges an outcome
only on the basis of the average utility of the outcome.
However, utilizing utility theory to justify the above assumption raises a new difficulty.
Namely, the two players may have different utility functions. The same outcome may be
perceived in quite different ways. This means that the game is no longer zero-sum. We
need an assumption that says the utility functions of two players are the same (up to
change of location and scale). This is a rather strong assumption, but for moderate to
small monetary amounts, we believe it is a reasonable one.
A mixed strategy may be implemented with the aid of a suitable outside random
mechanism, such as tossing a coin, rolling dice, drawing a number out of a hat and so
on. The seconds indicator of a watch provides a simple personal method of randomization
provided it is not used too frequently. For example, Player I of Odd-or-Even wants an
outside random event with probability 7/12 to implement his optimal strategy. Since
7/12 = 35/60, he could take a quick glance at his watch; if the seconds indicator showed
a number between 0 and 35, he would call ‘one’, while if it were between 35 and 60, he
would call ‘two’.
Two firms are competing for a single market niche. If one firm occupies the market niche, it
gets a return of 100. If both firms occupy the market niche, each loses 50. If a firm stays out of
the market, it breaks even. The payoff table is (entries are the payoffs to A and B respectively):

                           Firm B
                     Enter         Stay out
  Firm A  Enter    (-50, -50)     (100, 0)
          Stay out  (0, 100)       (0, 0)
This game has two pure strategy equilibria , namely one of the two firms enters the market niche
and the other stays out. But, unlike the games we have encountered thus far, neither player has a
dominant strategy. When a player has no dominant strategy, she should consider playing a mixed
strategy. In a mixed strategy, each of the various pure strategies is played with some probability,
say p1 for Strategy 1, p2 for Strategy 2, etc., with p1 + p2 + ... = 1. What would be the best mixed strategies for Firms A and B? Denote by p1 the probability that Firm A enters the market niche. Therefore p2 = 1 - p1 is the probability that Firm A stays out. Similarly, Firm B enters the niche with probability q1 and stays out with probability q2 = 1 - q1. The key insight to a mixed strategy equilibrium is the following: every pure strategy that is played as part of a mixed strategy equilibrium has the same expected value. If one pure strategy is expected to pay less than another, then it should not be played at all. The pure strategies that are not excluded should be expected to pay the same. We now apply this principle. The expected value of the 'Enter' strategy for Firm A, when Firm B plays its mixed strategy, is
EV(Enter) = -50 q1 + 100 q2.
The expected value of the 'Stay out' strategy for Firm A is EV(Stay out) = 0. Setting EV(Enter) = EV(Stay out) we get
-50 q1 + 100 q2 = 0.
Using q1 + q2 = 1, we obtain
q1 = 2/3, q2 = 1/3.
Similarly,
p1 = 2/3, p2 = 1/3.
As you can see, the payoffs of this mixed strategy equilibrium, namely (0, 0), are inefficient. One of these firms could make a lot of money by entering the market niche if it was sure that the other would not enter the same niche. This assurance is precisely what is missing. Each firm has exactly the same right to enter the market niche. The only way for both firms to exercise this right is to play the inefficient, but symmetrical, mixed strategy equilibrium. In many industrial markets there is only room for a few firms, a situation known as natural oligopoly. Chance plays a major role in the identity of the firms that ultimately enter such markets. If too many firms enter, there are losses all around and eventually some firms must exit. From the mixed strategy equilibrium, we can actually predict how often two firms enter a market niche when there is only room for one: with the above data, the probability of entry by either firm is 2/3, so the probability that both firms enter is (2/3)^2 = 4/9. That is a little over 44% of the time! This is the source of the inefficiency. The efficient solution has a total payoff of 100, but is not symmetrical. The fair solution pays each player the same but is inefficient. These two principles, efficiency and fairness, cannot be reconciled in a game like Market Niche. Once firms recognize this, they can try to find mechanisms to reach the efficient solution. For example, they may consider side payments. Or firms might simply attempt to scare off competitors by announcing their intention of moving into the market niche before they actually do so.
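The indifference computation above can be checked with exact arithmetic; a short sketch:

```python
# Mixed equilibrium of the Market Niche game by the indifference
# principle: choose q1 so that 'Enter' and 'Stay out' give Firm A the
# same expected payoff: EV(Enter) = -50*q1 + 100*(1 - q1), EV(Stay out) = 0.
from fractions import Fraction as F

# -50*q1 + 100*(1 - q1) = 0  =>  150*q1 = 100  =>  q1 = 100/150
q1 = F(100, 150)
q2 = 1 - q1
print(q1, q2)   # 2/3 1/3
print(q1 * q1)  # 4/9: the probability that both firms enter
```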
A finite two-person zero-sum game in strategic form, (X, Y, A), is sometimes called a matrix game because the payoff function A can be represented by a matrix. If X = {x1, . . . , xm} and Y = {y1, . . . , yn}, then by the game matrix or payoff matrix we mean the matrix A = (aij), where aij = A(xi, yj).
In this form, Player I chooses a row, Player II chooses a column, and II pays I the entry
in the chosen row and column. Note that the entries of the matrix are the winnings of the
row chooser and losses of the column chooser.
Note that the pure strategy for Player I of choosing row i may be represented as the mixed strategy ei, the unit vector with a 1 in the ith position and 0's elsewhere. Similarly, the pure strategy for II of choosing the jth column may be represented by ej. In the
following, we shall be attempting to ‘solve’ games. This means finding the value, and at
least one optimal strategy for each player. Occasionally, we shall be interested in finding
all optimal strategies for a player.
Anything Player I can achieve using a dominated row can be achieved at least as well
using the row that dominates it. Hence dominated rows may be deleted from the matrix.
A similar argument shows that dominated columns may be removed. To be more precise,
removal of a dominated row or column does not change the value of a game. However, there
may exist an optimal strategy that uses a dominated row or column. If so,
removal of that row or column will also remove the use of that optimal strategy (although
there will still be at least one optimal strategy left). However, in the case of removal of a
strictly dominated row or column, the set of optimal strategies does not change.
We may iterate this procedure and successively remove several rows and columns. As
an example, consider the matrix, A.
The middle column is dominated by the outside columns taken with probability 1/2
each. With the central column deleted, the middle row is dominated by the combination
of the top row with probability 1/3 and the bottom row with probability 2/3. The reduced game is then 2 × 2.
Of course, mixtures of more than two rows (columns) may be used to dominate and remove other rows (columns). For example, a suitable mixture of columns one, two and three may dominate the last column, and so the last column may be removed. Not all games may be reduced by dominance. In fact, even if the matrix has a saddle point, there may not be any dominated rows or columns. The 3 × 3 game with a saddle point found in Example 1 demonstrates this.
The central entry, 2, is a saddle point, since it is a minimum of its row and maximum
of its column. Thus it is optimal for I to choose the second row, and for II to choose the
second column. The value of the game is 2, and (0,1,0) is an optimal mixed strategy for
both players. For large m × n matrices it is tedious to check each entry of the matrix to see if
it has the saddle point property. It is easier to compute the minimum of each row and the
maximum of each column to see if there is a match. Here is an example of the method.
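The row-minima/column-maxima test can be sketched as follows. The 3 × 3 matrix used here is a hypothetical stand-in with the central entry 2 as its saddle point, echoing the case described above.

```python
# Saddle-point test: an entry a_ij is a saddle point when it is both the
# minimum of its row and the maximum of its column; the game then has
# value a_ij in pure strategies.
def saddle_points(A):
    row_min = [min(row) for row in A]
    col_max = [max(col) for col in zip(*A)]
    return [(i, j) for i, row in enumerate(A)
            for j, a in enumerate(row)
            if a == row_min[i] == col_max[j]]

# Hypothetical 3 x 3 game whose central entry, 2, is a saddle point.
A = [[4, 1, 5],
     [3, 2, 3],
     [6, 0, 7]]
print(saddle_points(A))  # [(1, 1)]
```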
This is one form of the minimax theorem to be stated more precisely and discussed in
greater depth later. If V is zero we say the game is fair. If V is positive, we say the game
favors Player I, while if V is negative, we say the game favors Player II.
Exercises 9.1
1. Consider the game of Odd-or-Even with the sole change that the loser pays the
winner the product, rather than the sum, of the numbers chosen (who wins still depends
on the sum). Find the table for the payoff function A, and analyze the game to find the
value and optimal strategies of the players. Is the game fair?
2. Player I holds a black Ace and a red 8. Player II holds a red 2 and a black 7. The
players simultaneously choose a card to play. If the chosen cards are of the same color,
Player I wins. Player II wins if the cards are of different colors. The amount won is a
number of dollars equal to the number on the winner’s card (Ace counts as 1). Set up the
payoff function, find the value of the game, and determine the optimal mixed strategies of the players.
Figure 10 shows an extensive game that models this situation. From the perspective of the
startup, whether or not the large company has done research in this area is random. To capture
random events such as this formally in game trees, chance moves are introduced. At a node
labelled as a chance move, the next branch of the tree is taken randomly and non-strategically by
chance, or “nature”, according to probabilities which are included in the specification of the
game. The game in Figure 10 starts with a chance move at the root. With equal probability 0.5,
the chance move decides if the large software company, player I, is in a strong position (upward
move) or weak position (downward move). When the company is in a weak position, it can
choose to Cede the market to the startup, with payoffs (0,16) to the two players (with payoffs
given in millions of dollars of profit). It can also announce a competing product, in the hope that
the startup company, player II, will sell out, with payoffs 12 and 4 to players I and II. However,
if player II decides instead to stay in, it will even profit from the increased publicity and gain a
payoff of 20, with a loss of −4 to the large firm.
Figure 10. The chance move decides if player I is strong (top node) and does have a competing
product, or weak (bottom node) and does not. The ovals indicate information sets. Player II sees
only that player I chose to announce a competing product, but does not know if player I is strong
or weak.
No matter what zero sum game is being played, there is at least one optimal mixed strategy
for each player. In other words, every two-person zero-sum game can be solved.
In a strictly determined game, the use of an optimal pure strategy will minimize a player's
potential losses. In a non-strictly determined game, a player can do better by using a mixed
strategy. This is illustrated by the following example.
Consider once again the game “paper, scissors, rock.” The payoff matrix is
(The moves are p, s, r, in that order.) If you are the row player and use any pure strategy (it
doesn't matter which, because of the symmetry of the game), then your potential loss is one
point on each round of the game, since the smallest entry in each row is −1. On the other
hand, if you use the mixed strategy S = [1/3 1/3 1/3], playing p, s and r equally often, then no
matter what strategy the column player uses, the expected value of the game is 0.
Since a draw is better than a loss, it follows that using the mixed strategy [1/3 1/3 1/3] is more
advantageous than using any pure strategy.
To see why this strategy is better than any other mixed strategy, suppose you tried
another mixed strategy S with unequal proportions, so that one move, say p, will be played
more often than some other. Here is a counter-strategy that the column player can use against
S: play the pure strategy s all the time. Since scissors beat paper, the column player will tend
to win more often than lose; in fact, the expected value of the game is then negative.
So, the worst that can happen to you playing [1/3 1/3 1/3] is better than the worst that can
happen if you play such an S. The same will be true for any other mixed strategy with unequal
proportions, so [1/3 1/3 1/3] is your optimal mixed strategy.
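The comparisons above come down to computing expected payoffs of mixed strategies. A small sketch (the skewed mixture shown is an illustrative assumption, not a strategy from the text):

```python
def expected_payoff(A, row_mix, col_mix):
    """Expected payoff to the row player when both players use mixed
    strategies: the sum over i, j of row_mix[i] * A[i][j] * col_mix[j]."""
    return sum(x * a * y
               for x, row in zip(row_mix, A)
               for a, y in zip(row, col_mix))

# Payoff matrix for paper, scissors, rock; moves in the order p, s, r.
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

uniform = [1/3, 1/3, 1/3]
# Against every pure column strategy, the uniform mixture yields 0 (a draw).
draws = [expected_payoff(A, uniform, col)
         for col in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]

skewed = [1/2, 1/4, 1/4]        # hypothetical mixture that favors paper
# Pure scissors exploits it: the row player's expected value is negative.
loss = expected_payoff(A, skewed, [0, 1, 0])
```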
Not all games can be analyzed by such “common sense” methods, so we now describe a
method of solving a game using the simplex method. Using the simplex method makes some
sense, since we are looking to maximize a quantity (the worst expected payoff) subject to
certain constraints, such as: the entries in the desired mixed strategy cannot exceed 1 and the
entries in the opponent's mixed strategy cannot exceed 1. At the end of this section we shall
explain in more detail why the following works.
Solution
Step 1 (Optional, but highly recommended) Reduce the payoff matrix by dominance
Looking at the matrix, we notice that rows 3 and 4 are dominated by row 1, so we
eliminate them, obtaining
Step 2 Convert to a payoff matrix with no negative entries by adding a suitable fixed
number to all the entries.
If we add 2 to all the entries, we will eliminate all negative payoffs. Notice that this won't
affect the analysis of the game in any way; the only thing that is affected is the expected
value of the game, which will be increased by 2. Adding 2 to all the payoffs gives the new
matrix,
The number of variables and the coefficients of the constraints depend on the payoff
matrix; there is one variable for each column. We always take the objective function to be the
sum of the variables, and the right hand sides of the constraints are always 1.
We now use the simplex method. The first tableau is the following.
Notice that the payoff matrix appears in the top left part of the tableau. We now proceed to the
solution as usual:
Column Strategy
1. Express the solution to the linear programming problem as a column vector.
2. Normalize by dividing each entry in the solution vector by the value of p (which is also the
sum of the values of the variables).
Recalling that we deleted column 4, we insert a zero in the fourth position, getting the
column player’s optimal strategy: [ 0 0].
Row strategy
1. Read off the entries under the slack variables in the bottom row of the final tableau.
[2 2]
2. Normalize by dividing each entry in the vector by the sum of the entries:
The sum of the entries is 4, so we get
[1/2 1/2]
The value of the original game is
e = 1/p − k,
where k is the number we originally added to the entries in the payoff matrix to make them
non-negative. Here p = 4 and k = 2, so
e = 1/4 − 2 = −7/4.
First: Check for saddle points. If there is one, you can solve the game by selecting each
player's optimal pure strategy. Otherwise, continue with the following steps.
Step 1 Reduce the payoff matrix by dominance.
Step 2 Add a fixed number k to each of the entries so that they all become non-negative.
Step 3 Set up and solve the associated linear programming problem using the simplex method.
Step 4 Find the optimal strategies and the expected value as follows.
Column strategy:
1. Express the solution to the linear programming problem as a column vector.
2. Normalize by dividing each entry of the solution vector by p (which is also the sum of the
values of the variables).
3. Insert zeros in positions corresponding to the columns deleted during reduction.
Row strategy:
1. List the entries under the slack variables in the bottom row of the final tableau in vector
form.
2. Normalize by dividing each entry of the solution vector by the sum of the entries.
3. Insert zeros in positions corresponding to the rows deleted during reduction.
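Under the assumptions of the steps above (entries shifted to be positive, objective equal to the sum of the variables, right-hand sides all 1), the whole procedure can be sketched in code with a small dense simplex tableau. This is an illustrative implementation, not the module's own program, and it omits the optional dominance reduction of Step 1:

```python
def solve_game(M):
    """Solve the zero-sum game with payoff matrix M: shift all entries
    positive (Step 2), solve  max x1+...+xn  s.t. (M+k)x <= 1, x >= 0
    by the simplex method (Step 3), then normalize the primal and dual
    solutions into the two optimal strategies (Step 4)."""
    m, n = len(M), len(M[0])
    k = 1 - min(min(row) for row in M)        # Step 2: make every entry >= 1
    T = [[float(M[i][j] + k) for j in range(n)]
         + [1.0 if r == i else 0.0 for r in range(m)] + [1.0]
         for i in range(m)]
    T.append([-1.0] * n + [0.0] * (m + 1))    # objective row: maximize sum x_j
    basis = list(range(n, n + m))             # the m slack variables
    while True:
        col = next((j for j in range(n + m) if T[-1][j] < -1e-9), None)
        if col is None:
            break                             # no negative reduced cost: optimal
        row = min((i for i in range(m) if T[i][col] > 1e-9),
                  key=lambda i: T[i][-1] / T[i][col])   # ratio test
        basis[row] = col
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for r in range(m + 1):
            if r != row:
                f = T[r][col]
                T[r] = [a - f * b for a, b in zip(T[r], T[row])]
    p = T[-1][-1]                             # optimal value = sum of variables
    x = [0.0] * n
    for i, b in enumerate(basis):
        if b < n:
            x[b] = T[i][-1]
    col_strategy = [v / p for v in x]                     # Player II
    row_strategy = [T[-1][n + i] / p for i in range(m)]   # slack-column duals
    value = 1.0 / p - k                       # undo the Step-2 shift
    return value, row_strategy, col_strategy
```

For paper, scissors, rock the returned value is 0 and both strategies are uniform, matching the earlier analysis.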
To solve this game (i.e. to find the value and at least one optimal strategy for each player)
we proceed as follows.
1. Test for a saddle point.
2. If there is no saddle point, solve by finding equalizing strategies.
We now prove that the method of finding equalizing strategies described above works whenever
there is no saddle point, by deriving the value and the optimal strategies.
Assume there is no saddle point. If a ≥ b, then b < c , as otherwise b is a saddle point.
Since b < c, we must have c > d , as otherwise c is a saddle point. Continuing thus, we see
that d < a and a > b. In other words, if a ≥ b , then a > b < c > d < a. By symmetry, if
a ≤ b , then a < b > c < d > a. This shows that
If there is no saddle point, then either a > b, b < c, c > d and d < a, or a < b, b > c, c < d and
d > a.
In equations (1), (2) and (3) below, we develop formulas for the optimal strategies
and value of the general 2 × 2 game, with first row (a, b) and second row (d, c). If I chooses
the first row with probability p (i.e. uses the mixed strategy (p, 1 − p)), we equate his
average return when II uses columns 1 and 2:
ap + d(1 − p) = bp + c(1 − p),
which gives
p = (c − d)/((a − b) + (c − d)).     (1)
Since there is no saddle point, (a − b) and (c − d) are either both positive or both negative;
hence, 0 < p < 1. Player I’s average return using this strategy is
v = ap + d(1 − p) = (ac − bd)/((a − b) + (c − d)).
If II chooses the first column with probability q (i.e. uses the strategy (q,1-q)), we equate
his average losses when I uses rows 1 and 2.
aq + b(1 - q) = dq + c(1 - q)
Hence,
q = (c − b)/((a − b) + (c − d)).     (2)
Again, since there is no saddle point, 0 < q < 1. Player II’s average loss using this strategy
is
aq + b(1 − q) = (ac − bd)/((a − b) + (c − d)),     (3)
the same value achievable by I. This shows that the game has a value, and that the players
have optimal strategies (something the minimax theorem says holds for all finite games).
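For the 2 × 2 matrix written as in this derivation, with first row (a, b) and second row (d, c), equations (1), (2) and (3) can be evaluated directly. A minimal sketch, assuming the game has no saddle point (so the denominator is nonzero):

```python
def solve_2x2(a, b, d, c):
    """Equalizing strategies for the 2 x 2 game with rows (a, b) and (d, c),
    assuming no saddle point."""
    denom = (a - b) + (c - d)
    p = (c - d) / denom          # (1): I plays row 1 with probability p
    q = (c - b) / denom          # (2): II plays column 1 with probability q
    v = (a * c - b * d) / denom  # (3): the value of the game
    return p, q, v

# Matching pennies, [[1, -1], [-1, 1]]: both players mix 1/2-1/2, value 0.
p, q, v = solve_2x2(1, -1, -1, 1)
```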
Example 2
Example 3.
But q must be between zero and one. What happened? The trouble is we “forgot to test
this matrix for a saddle point, so of course it has one” (J. D. Williams, The Compleat
Strategyst, Revised Edition, McGraw-Hill, 1966, p. 56). The lower left corner is a saddle
point. So p = 0 and q = 1 are optimal strategies, and the value is v = 1.
Suppose Player I chooses the first row with probability p and the second row with probability 1-
p. If II chooses Column 1, I’s average payoff is 2p+4(1-p). Similarly, choices of
Columns 2, 3 and 4 result in average payoffs of 3p + (1 − p), p + 6(1 − p), and 5p, respectively.
We graph these four linear functions of p for 0 ≤ p ≤ 1. For a fixed value of p, Player I can
be sure that his average winnings is at least the minimum of these four functions evaluated
at p. This is known as the lower envelope of these functions. Since I wants to maximize
his guaranteed average winnings, he wants to find p that achieves the maximum of this
lower envelope. According to the drawing, this should occur at the intersection of the lines
for Columns 2 and 3. This essentially involves solving the game in which II is restricted
to Columns 2 and 3, i.e. the 2 × 2 game with matrix
(3 1)
(1 6).
The value of this game is v = 17/7, I’s optimal strategy is (5/7, 2/7), and II’s optimal
strategy is (5/7, 2/7). Subject to the accuracy of the drawing, we conclude therefore that
in the original game I’s optimal strategy is (5/7, 2/7), II’s is (0, 5/7, 2/7, 0) and the
value is 17/7.
The accuracy of the drawing may be checked: Given any guess at a solution to a
game, there is a sure-fire test to see if the guess is correct, as follows. If I uses the strategy
(5/7,2/7), his average payoff if II uses Columns 1, 2, 3 and 4, is 18/7, 17/7, 17/7, and 25/7
respectively. Thus his average payoff is at least 17/7 no matter what II does.
Similarly, if II uses (0,5/7,2/7,0), her average loss is (at most) 17/7. Thus, 17/7 is the value, and
these strategies are optimal.
We note that the line for Column 1 plays no role in the lower envelope (that is, the
lower envelope would be unchanged if the line for Column 1 were removed from the graph).
This is a test for domination. Column 1 is, in fact, dominated by Columns 2 and 3 taken
with probability 1/2 each. The line for Column 4 does appear in the lower envelope, and
hence Column 4 cannot be dominated.
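The lower-envelope computation can be carried out numerically: evaluate each column line at p = 0, p = 1 and at every pairwise intersection inside [0, 1], and keep the best candidate. The 2 × 4 matrix below is reconstructed from the column payoffs quoted in the example (2p + 4(1 − p), 3p + (1 − p), p + 6(1 − p), 5p):

```python
from itertools import combinations

def solve_2xn(A):
    """Graphical method for a 2 x n game. Against column j, Player I's
    payoff is the line f_j(p) = p*A[0][j] + (1-p)*A[1][j]; we maximize
    the lower envelope min_j f_j(p) over 0 <= p <= 1."""
    lines = [(top - bot, bot) for top, bot in zip(A[0], A[1])]  # (slope, intercept)
    candidates = {0.0, 1.0}
    for (m1, b1), (m2, b2) in combinations(lines, 2):
        if m1 != m2:                      # intersection of two column lines
            p = (b2 - b1) / (m1 - m2)
            if 0.0 <= p <= 1.0:
                candidates.add(p)

    def envelope(p):
        return min(m * p + b for m, b in lines)

    best = max(candidates, key=envelope)
    return best, envelope(best)

# The 2 x 4 game from the example: optimal p = 5/7, value 17/7.
p, v = solve_2xn([[2, 3, 1, 5], [4, 1, 6, 0]])
```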
A Latin square is an n × n array of n different letters such that each letter occurs once and only
once in each row and each column. The 5 × 5 array at the right is an example. If in a Latin
square each letter is assigned a numerical value, the resulting matrix is the matrix of a Latin
square game. Such games have simple solutions. The value is the average of the numbers in a
row, and the strategy that chooses each pure strategy with equal probability 1/n is optimal for
both players. The reason is not very deep: since every row and every column contains the same n numbers, the uniform mixture yields the same average payoff against every pure strategy, and the conditions for optimality are satisfied.
In the example above, the value is V = (1+2+3+3+6)/5 = 3, and the mixed strategy
p = q = (1/5,1/5,1/5,1/5,1/5) is optimal for both players. The game of matching pennies
is a Latin square game. Its value is zero and (1/2,1/2) is optimal for both players.
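A quick numerical check of this fact, using a cyclic Latin square built from the example's values (the cyclic layout is an assumption for illustration; the claim holds for any Latin square arrangement):

```python
def latin_square_game(vals):
    """Build the cyclic Latin square game whose row i is vals rotated left
    by i, and return (matrix, value, uniform strategy). Every row and every
    column then contains each value exactly once, so the value is the
    average of any row."""
    n = len(vals)
    M = [[vals[(i + j) % n] for j in range(n)] for i in range(n)]
    value = sum(vals) / n
    return M, value, [1 / n] * n

M, v, strat = latin_square_game([1, 2, 3, 3, 6])
# Uniform play equalizes: against every pure column, I's average payoff is v.
averages = [sum(M[i][j] for i in range(5)) / 5 for j in range(5)]
```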
2.6 Exercises.
1. Solve the game with matrix
(−1 −3)
(−2 2)
that is, find the value and an optimal (mixed) strategy for both players.
2. Solve the game with matrix
(t 0)
(2 1)
for an arbitrary real number t. (Don’t forget to check for a saddle point!) Draw the graph
of v(t), the value of the game, as a function of t, for −∞ < t < ∞.
3. Show that if a game with an m × n matrix has two saddle points, then they have equal
values.
Summary
Game theory is the formal study of decision-making where several players must make choices
that potentially affect the interests of the other players.
Definition 1. The strategic form, or normal form, of a two-person zero-sum game is given
by a triplet (X, Y, A), where
(1) X is a nonempty set, the set of strategies of Player I
(2) Y is a nonempty set, the set of strategies of Player II
(3) A is a real-valued function defined on X × Y . (Thus, A(x, y) is a real number for
every x ∈ X and every y ∈ Y)
This is one form of the minimax theorem to be stated more precisely and discussed in
greater depth later. If V is zero we say the game is fair. If V is positive, we say the game
favors Player I, while if V is negative, we say the game favors Player II.
Games with matrices of size 2 × n or m × 2 may be solved with the aid of a graphical
interpretation.
A Latin square is an n × n array of n different letters such that each letter occurs once and only
once in each row and each column.
Review exercise
Two players, A and B, have decided to change the rules of the game “Paper, Scissors, Rock” by
using instead the following payoff matrix:
Hint: An example of a non-zero-sum game would be one in which the government taxed the
earnings of the winner. In that case the winner's gain would be less than the loser's loss.
If player B can't make up her mind whether to use paper or scissors as a pure strategy, what
would you advise?
2. You are the head coach of the Alphas (Team A), and are attempting to come up with a
strategy to deal with your rivals, the Betas (Team B). Team A is on offense, and Team B is
on defense. You have five preferred plays, but are not sure which to select. You know,
however, that Team B usually employs one of three defensive strategies. Over the years,
you have diligently recorded the average yardage gained by your team for each
combination of strategies used, and have come up with the following table.