
DILLA UNIVERSITY

DEPARTMENT OF MATHEMATICS
MODULE IN

LINEAR OPTIMIZATION (MATH 356)


PREPARED BY:

TADIOS KIROS (MSC)

HIWOT REDA (MSC)

AWOKE DERESSE (MSC)

EDITED BY:

SULTAN SHUKUR (MSC)

BIRUKE TAFESE (MSC)



Table of Contents
CHAPTER ONE ............................................................................................................................................... 8
1.Introduction to matrices ........................................................................................................................ 8
1.1 Introduction (Definition of LP, Motivation,…)................................................................................. 8
1.1.1 Types of Matrices ....................................................................................................................... 11
1.2 Matrices (Rank, Elementary Row Operations ) ............................................................................ 16
1.2.1 Rank of a Matrix ........................................................................................................................ 16
1.2.2 Elementary row (column) operations ........................................................................................ 17
1.3 System of Linear equations ........................................................................................................... 23
1.3.1 Gauss-Jordan Elimination Method ........................................................................ 25
1.4 System of linear inequality............................................................................................................ 31
CHAPTER TWO ............................................................................................................................................ 36
2. Linear Programming Problems: models of practical problems............................................................ 36
2.1 Introduction .................................................................................................................................. 36
2.2 Decision process and relevance of optimization .......................................................................... 36
2.3. Model and model building ........................................................................................................... 38
CHAPTER THREE .......................................................................................................................................... 59
3.Geometric methods ............................................................................................................................. 59
3.1 Graphical Solution Methods ......................................................................................................... 59
3.2 Convex set ..................................................................................................................................... 64
3.3 Polyhedral sets and Extreme Points .............................................................. 72
3.4 The corner Point Theorem ............................................................................................................ 74
CHAPTER FOUR .............................................................................................................................. 82
4.The Simplex method ............................................................................................................................ 82
4.1 Linear programming in standard form .......................................................................................... 82
4.2 Basic Feasible Solutions (Analytical Method) ............................................................................... 83
4.3. Fundamental theorem of linear programming ............................................................................ 86
4.4 Algebra of the Simplex Method .................................................................................................... 89
4.5 Optimality test and basic exchange .............................................................................................. 91
4.6 The simplex Algorithm .................................................................................................................. 96
4.7 Degeneracy and finiteness of simplex algorithm .......................................................................... 97
4.8 Finding a starting basic feasible solution ...................................................................................... 97


4.8.1 The two phase method ............................................................................................................ 106


4.8.2 Big M method ........................................................................................................... 115
4.9 Using solver (MS EXCEL) in solving linear programming............................................................. 122
Chapter Five .............................................................................................................................................. 127
5.Duality theory and further variations of the simplex method .......................................................... 127
5.1 Dual linear programs................................................................................................................... 127
5.2 Duality theorems......................................................................................................................... 132
5.3 The dual simplex method............................................................................................................ 134
5.4 Primal-Dual Method.................................................................................................................... 140
CHAPTER SIX ................................................................................................................................ 149
6. Sensitivity Analysis .............................................................................................................................. 149
6.1 Introduction ................................................................................................................................. 149
6.2 Variation of coefficients of objective function (cj)...................................................................... 150
6.3 Variation of vector requirement ( ) .......................................................................................... 151
6.4 Variation of constraints............................................................................................................... 155
6.4.1. Changes in the constraint coefficients .............................................................................. 155
6.4.2. Addition of new variables ....................................................................................................... 156
6.5 Addition of constraints................................................................................................................ 158
6.5.1 A new inequality constraint is added....................................................................... 158
6.5.2 A new equality constraint is added.......................................................................... 161
6.6 Solver outputs and interpretations............................................................................................. 161
CHAPTER SEVEN ........................................................................................................................................ 168
7. INTERIOR POINT METHODS .............................................................................................................. 168
7.1 Basic ideas ................................................................................................................................... 169
7.2 One iteration of Karmarkar’s projective algorithm..................................................................... 170
7.2.1 Projective transformation ........................................................................................................ 171
7.2.2 Moving in the direction of steepest descent ........................................................................... 173
7.2.3. Inverse transformation ........................................................................................................... 179
7.3 The algorithm and its polynomiality ........................................................................................... 180
7.4 A purification scheme ................................................................................................................. 181
7.5 Converting a given linear programming into the required format ............................................. 182


Chapter Eight ............................................................................................................................................ 186


8. Transportation problems .................................................................................................................. 186
8.1 Introduction ................................................................................................................................ 186
8.2 Transportation table ................................................................................................................... 191
8.3 Determination of an initial B.F.S ................................................................................................. 193
8.3.1 North-west corner method ...................................................................................................... 194
8.3.2 Row minima method................................................................................................................ 196
8.3.3 Cost minima (Matrix minima) method..................................................................................... 198
8.4 Optimality condition .................................................................................................................... 203
8.5 Unbalanced transportation problems and their solutions. ........................................................ 212
8.6 Degenerate transportation problems and their solution ........................................................... 220
Chapter Nine ............................................................................................................................................. 229
9. Theory of games................................................................................................................................ 229
9.1 Introduction ................................................................................................................................ 229
9.2 Formulation of Two-Person Zero-Sum Games............................................................................ 231
9.3 Pure and Mixed strategy ............................................................................................................. 235
9.3.1 Pure Strategies ......................................................................................................................... 235
9.3.2 Mixed strategy ......................................................................................................................... 235
9.4 Solving pure strategy games ....................................................................................................... 238
9.4.1 Reduction by dominance ......................................................................................................... 238
9.4.1.1 Removing Dominated Strategies .......................................................................................... 238
9.4.1.2 Saddle points ......................................................................................................................... 240
9.4.2 The Minimax (or maximin) criterion ........................................................................ 241
9.5 Some basic probabilistic considerations ..................................................................................... 242
9.6 Solving Games with the Simplex Method ................................................................................... 243
9.7 Solving 2 × n and m × 2 games .................................................................................................... 251
References ........................................................................................................................................ 258


Course Introduction
This course deals with linear programming, geometric and simplex methods, duality theory and
further variations of the simplex method, sensitivity analysis, interior point methods,
transportation problems, and theory of games. Every unit of the course touches on real-world
activities, so you are advised to cover the course contents attentively.

The purpose of this module is to provide a unified, insightful, and modern treatment of linear
programming, geometric and simplex methods, duality theory and further variations of the
simplex method, sensitivity analysis, interior point methods, transportation problems, and theory
of games. We discuss both classical topics, as well as the state of the art. We give special
attention to theory, but also cover applications and present case studies. Our main objective is to
help the learner become a sophisticated practitioner of linear optimization. More specifically, we
wish to develop the ability to formulate linear optimization problems, provide an
appreciation of the main classes of problems that are practically solvable, describe the available
solution methods, and build an understanding of the qualitative properties of the solutions they
provide.


COURSE AIMS AND OBJECTIVES


The course intends to introduce students to both theoretical and algorithmic aspects of linear
optimization. It is a basis for further study in the area of optimization. You, in turn, shall be
required to work through this course conscientiously and actively. Upon completion of this
course, the learner should be able to:

▪Understand the concept of a matrix

▪Identify different arrangements of matrices

▪Explain the number of rows and columns of a matrix

▪Change a system of linear equations to matrix form

▪Identify the types of matrices

▪Understand the trace of a matrix

▪Know elementary transformations of a matrix

▪Understand the form of linear programming problem


▪Know the decision process in linear programming problems
▪Differentiate the techniques for solving linear programming problems
▪Identify how to formulate linear programming problems
▪Understand how to apply the geometrical method for linear programming problems
▪Identify the fundamental theorem of linear programming problem
▪Define the basic concept of convex sets

▪Differentiate the polyhedral sets

▪Explain the properties of extreme points

▪Understand the corner point theorem

▪Recall the formulation of linear programming problems


▪Understand how to find a basic feasible solution in LP
▪Consider the representation of the linear programming problem with non-basic variables
▪Explain the interpretation of optimality test
▪Identify the uniqueness and alternative optimal solutions
▪Differentiate the main steps to apply simplex method on linear programming


▪Know the two phase method


▪Understand the standard form of duality

▪Identify the primal-dual relationship

▪Explain the Kuhn-Tucker optimality condition

▪Define the fundamental theorems of duality

▪Know the interpretation of dual simplex method

▪Observe and investigate the impact of optimal solution due to change of parametric values

▪Identify the parameters whose values cannot be changed without changing the optimal solution

▪Understand the basic ideas about interior point method

▪Identify the steepest descent direction

▪Explain one iteration of Karmarkar’s projective algorithm

▪Define inverse transformation

▪Convert a given linear programming into the required format

▪Define the form of transportation problem

▪Identify the meaning of balanced and unbalanced transportation problem

▪Construct the transportation problem

▪Determine some methods of an initial basic feasible solution

▪Understand the numerical calculation of the net evaluation corresponding to the non-basic cells

▪Differentiate degenerate transportation problems and their solutions

▪Understand the formal study of game theory

▪Explain the formulation of two-person zero sum games

▪Differentiate the pure and mixed strategies in game theory

▪Identify how to solve pure strategy games

▪Understand some probabilistic consideration of game theory


CHAPTER ONE

1.Introduction to matrices
1.1 Introduction (Definition of LP, Motivation,…)
The concept of matrices has had its origin in various types of linear problems, the most
important of which concerns the nature of solutions of any given system of linear equations.
Matrices are also useful in organizing and manipulating large amounts of data. Today, the
subject of matrices is one of the most important and powerful tools in Mathematics which
has found applications to a very large number of disciplines such as engineering, business
and economics, statistics etc.

Objectives

▪Understand the concept of a matrix

▪Identify different arrangements of matrices

▪Explain the number of rows and columns of a matrix

▪Change a system of linear equations to matrix form

▪Identify the types of matrices

▪Understand the trace of a matrix

▪Know elementary transformations of a matrix

Definition of a matrix

Definition 1.1.1: A matrix is a rectangular arrangement of m x n numbers (real or complex)
into m horizontal rows and n vertical columns, enclosed by a pair of brackets [ ].

A = [ a11 a12 ... a1n
      a21 a22 ... a2n
       .    .        .
       .    .        .
      am1 am2 ... amn ]

is a matrix. It is called an m x n (read “m by n”) matrix, or a matrix of order m x n.

Parentheses ( ) are also commonly used to enclose numbers constituting matrices.


 We can abbreviate A by writing A = (aij)m×n.

Before we go any further, we need to familiarize ourselves with some terms that are associated
with matrices.

 The numbers in a matrix are called the entries or the elements of the matrix. For the
entry a ij , the first subscript i specifies the row and the second subscript j the column in
which the entry appears. That is, a ij is an element of matrix A which is located in the i th
row and j th column of the matrix A. Whenever we talk about a matrix, we need to know
the order of the matrix.

The order of a matrix is the number of rows and columns it has. When we say a matrix is a 3 by
4 matrix, we are saying that it has 3 rows and 4 columns. The rows are always mentioned first
and the columns second. This means that a 3  4 matrix does not have the same order as a 4  3
matrix. It must be noted that even though an m  n matrix contains m x n elements, the entire
matrix should be considered as a single entity. In keeping with this point of view, matrices are
denoted by single capital letters such as A, B, C and so on.

Remark: By the size of a matrix or the dimension of a matrix we mean the order of the matrix.
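The order and the row/column indexing convention can be illustrated with a small list-of-lists sketch in Python (the sample entries here are arbitrary, not from the text):

```python
# A 3-by-4 matrix stored as a list of rows.  Matrix notation indexes
# entries from 1 (a_ij = row i, column j), while Python lists index
# from 0, so a_ij corresponds to A[i-1][j-1].
A = [[1, 5, 2, 0],
     [0, 3, 6, 1],
     [7, 4, 9, 2]]

m, n = len(A), len(A[0])   # order (size) of the matrix: m rows, n columns
a_23 = A[2 - 1][3 - 1]     # the entry in row 2, column 3
```

Note that a 3 × 4 matrix and a 4 × 3 matrix have different orders even though both contain 12 entries.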

1 5 2 
Example 1. A   
0 3 6 

A matrix of size 3 × 4 can be written as A = (aij)3×4, and a general m × n matrix is denoted by

A = [ a11 ... a1n
       ⋮        ⋮
      am1 ... amn ]

Example 1: Let A = [ 4  3  5  6
                    −2  4 −1  7
                     0 −4 −7 −8
                     5  6  7  8 ]
a. Find a13, a22, a31, and a44.
b. What is the size of matrix A?

Example 2: Form a 4 by 5 matrix, B, such that bij = i + j.


Solution:
Since the number of rows is specified first, this matrix has four rows and five columns.

B = [ b11 b12 b13 b14 b15
      b21 b22 b23 b24 b25
      b31 b32 b33 b34 b35
      b41 b42 b43 b44 b45 ]

  = [ 2 3 4 5 6
      3 4 5 6 7
      4 5 6 7 8
      5 6 7 8 9 ]
Exercise: Form a 4 by 3 matrix, B, such that a) bij = i − j  b) bij = (−1)^(i+j)

Remark:
1. A vector is a matrix having either one row or one column.
2. A matrix consisting of a single row (a1, a2, a3, …, an) is called a row vector. Hence a row
matrix is a 1 × n matrix.

3. A matrix consisting of a single column is called a column vector. Hence a column
matrix is an n × 1 matrix.
Definition ( Equality of matrices):
Two matrices A and B are said to be equal, written A=B, if they are of the same order and if all
the corresponding entries are equal.

5 1 0  2  3 1 0  9 
Example 1.     2 3 2  2 but
9 2   
2 3 4   2 ,Why?
 x  y 6 1 6
 x  y 8   3 8 
Example 2. Give the matrix equation    
Solution:
x y 1
By the definition of equality of matrices,
x y  3


Solving gives x = 2 and y = -1.
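The definition of equality can be written directly as a small Python predicate (the helper name is my own); the second check below also answers the “Why?” of Example 1, since a row vector and a column vector have different orders:

```python
def matrices_equal(A, B):
    """Two matrices are equal iff they have the same order and all
    corresponding entries are equal."""
    if len(A) != len(B):
        return False                       # different number of rows
    if any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        return False                       # different number of columns
    return all(a == b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Example 2 with the solved values x = 2, y = -1:
x, y = 2, -1
left  = [[x + y, 6], [x - y, 8]]
right = [[1, 6], [3, 8]]
```

Here `matrices_equal(left, right)` holds, while a 1 × 3 row vector is never equal to a 3 × 1 column vector even with the same entries.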


Exercise: Find the values of x, y, z and w which satisfy the matrix equation
a) [ x + y   2x + z        [ −1  5        b) [ x − 3   2y − x        [ 0  −7
     2x − y  3z + w ]  =      0 13 ]           z + 1   4w − 6 ]  =     3  2w ]

1.1.1 Types of Matrices


Definitions:
1. Row Matrix: A matrix that has exactly one row is called a row matrix. For
example, the matrix A = [ 5 2 −1 4 ] is a row matrix of order 1 × 4.
2. Column Matrix: A matrix consisting of a single column is called a column matrix. For
example, the matrix B = [ 3
                          1
                          4 ] is a 3 × 1 column matrix.
3. Zero or Null Matrix: A matrix whose entries are all 0 is called a zero or null matrix. It
is usually denoted by 0m×n or more simply by 0. For example,
0 = [ 0 0 0 0
      0 0 0 0 ] is a 2 × 4 zero matrix.
4. Square Matrix: An m × n matrix is said to be a square matrix of order n if m = n. That
is, if it has the same number of columns as rows.

Example 1. A = [ 3 4  6        and B = [ 2 −1
                 2 1  3                  5  6 ]
                 5 2 −1 ]
are square matrices of order 3 and 2 respectively.
In a square matrix A = (aij) of order n, the entries a11, a22, ..., ann which lie

on the diagonal extending from the left upper corner to the lower right

corner are called the main diagonal entries, or more simply the main diagonal.

Thus, in the matrix C = [ 3 2 4
                          1 6 0
                          5 1 8 ] the entries c11 = 3, c22 = 6 and c33 = 8

constitute the main diagonal.


5. Diagonal Matrix: A square matrix is said to be diagonal if each of the entries not
falling on the main diagonal is zero. Thus a square matrix A = (aij) is diagonal if
aij = 0 for i ≠ j.

What about for i = j?

Example 1: A = [ a11  0  ⋯  0
                 0  a22  ⋯  0
                 ⋮    ⋮      ⋮
                 0    0  ⋯ ann ] is a diagonal matrix of order n.

Example 2: [ 5 0 0
             0 0 0
             0 0 7 ] is a diagonal matrix.

Notation: A diagonal matrix A of order n with diagonal elements a11, a22, ..., ann is denoted by
A = diag(a11, a22, ..., ann)

6. Scalar Matrix: A diagonal matrix in which all diagonal elements are equal is called a scalar matrix.

Example: A = [ k 0 ⋯ 0        and [ 2 0 0
               0 k ⋯ 0              0 2 0
               ⋮ ⋮    ⋮             0 0 2 ]
               0 0 ⋯ k ]
are scalar matrices.

Note: A = (aij)n×n is a scalar matrix if aij = 0 for all i ≠ j and aij = k (a constant) for all i = j.
7. Identity Matrix or Unit Matrix: A square matrix is said to be an identity matrix or unit
matrix if all its main diagonal entries are 1’s and all other entries are 0’s. In other
words, a diagonal matrix whose main diagonal elements are all equal to 1 is called an
identity or unit matrix. An identity matrix of order n is denoted by In or more simply
by I.

Example: I3 = [ 1 0 0        and I2 = [ 1 0
                0 1 0                   0 1 ]
                0 0 1 ]
are the identity matrices of order 3 and order 2, respectively.

8. Triangular Matrix: A square matrix is said to be

i. an upper triangular matrix if all entries below the main diagonal are zeros.
ii. a lower triangular matrix if all entries above the main diagonal are zeros.

Example: [ 2 4  8        [  5  0 0 0
           0 1  2   and     1  3 0 0
           0 0 −3 ]         6  1 2 0
                           −2 −4 8 6 ]
are upper and lower triangular matrices, respectively.

i.e., a square matrix A = (aij)n×n of order n is called an upper triangular matrix if
aij = 0 for all i > j, and a lower triangular matrix if aij = 0 for all i < j.

Note: A triangular matrix is a matrix which is either upper triangular or lower
triangular.
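The index conditions aij = 0 for i > j (upper) and aij = 0 for i < j (lower) translate directly into Python predicates (helper names mine; indices shifted to 0-based):

```python
def is_upper_triangular(A):
    # all entries below the main diagonal (row i, column j with i > j) are zero
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(i))

def is_lower_triangular(A):
    # all entries above the main diagonal (row i, column j with i < j) are zero
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(i + 1, n))

# The two matrices from the example above:
U = [[2, 4, 8], [0, 1, 2], [0, 0, -3]]
L = [[5, 0, 0, 0], [1, 3, 0, 0], [6, 1, 2, 0], [-2, -4, 8, 6]]
```

A diagonal matrix satisfies both predicates at once, since it is simultaneously upper and lower triangular.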

9. Symmetric Matrix: A square matrix A = (aij) is said to be symmetric if A^t = A, or
equivalently, if aij = aji for each i and j.

Example: The following matrices are symmetric, since each is equal to its own transpose (verify):

A = [ 7 −3        B = [ 1  4 3        C = [ 0 0 0        D = [ 2  1  5
     −3  5 ]            4 −3 0              0 0 0              1  0 −3
                        3  0 7 ]            0 0 0 ]            5 −3  6 ]

Note: for a symmetric matrix the elements that are at equal distance from the main
diagonal are equal.


Exercise 1.3

1. For A = [ a 3 4  8
             b c 3  9
             d e f 10
             g h i  j ]
to be a symmetric matrix, what numbers should the
letters a to j represent?

2. A) Does a symmetric matrix have to be square?


B) Are all square matrices symmetric?
10. Skew-Symmetric Matrix: A square matrix A is said to be skew-symmetric if A^t = −A.

Example: A = [  0  1  2
               −1  0 −3
               −2  3  0 ]

Remark: For a skew-symmetric matrix, aii = −aii, so 2aii = 0, i.e. aii = 0. Hence the
elements of the main diagonal of a skew-symmetric matrix are all zero.

 0 5 7 0  5  7 
  t  
Example: For A    5 0 3  , A   5 0  3    A
7  3 0 7 3 0 
  
So A is skew
skew-symmetric.

Note: let A be any square matrix. Then

 A + A^t is symmetric
 A − A^t is skew-symmetric
 A can be written as a sum of a symmetric and a skew-symmetric matrix:
A = ½(A + A^t) + ½(A − A^t)
 (A^t)^t = A, (A + B)^t = A^t + B^t, (kA)^t = kA^t, (AB)^t = B^t A^t
Example: Identify the following matrices as symmetric or skew-symmetric.

A = [  0  4 −1        B = [  4 −5  1        C = [ 1 1 1
      −4  0  3              −5  3  2              1 1 1 ]
       1 −3  0 ]             1  2 −4 ]

D = [ 1 1        E = [ 0 1        F = [ 1 −1
      1 0 ] ,          1 0 ] ,          1  0 ]
Trace of square matrix

Definition: If A is a square matrix, then the trace of A denoted by tr(A) is defined as the sum
of the diagonal elements of A.

i.e., tr(A) = a11 + a22 + ... + ann = Σ aii (i = 1, ..., n), where A = (aij)n×n.

Example 1: If A = (aij)3×3, then tr(A) = a11 + a22 + a33.

Example 2: Let A = [  4 −5  1
                     −5  3  2   , then tr(A) = 4 + 3 − 4 = 3.
                      1  2 −4 ]
Remark: The trace of A is undefined if A is not a square matrix.
Properties of the trace
Let A and B be square matrices that are conformable for addition and multiplication. Then
a. tr(kA) = ktr(A) for any scalar k.
b. tr(A + B) = tr(A) + tr(B)
c. tr(AB) = tr(BA)
d. tr(A^t) = tr(A)
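The trace properties above are easy to verify numerically in Python (helper names mine; B is an arbitrary example matrix, while A is the one from Example 2):

```python
def trace(A):
    # sum of the main diagonal entries of a square matrix
    return sum(A[i][i] for i in range(len(A)))

def matmul(A, B):
    # standard matrix product: entry (i,j) is the dot product of row i of A
    # with column j of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[4, -5, 1], [-5, 3, 2], [1, 2, -4]]
B = [[1, 0, 2], [3, -1, 0], [0, 4, 5]]
```

In particular tr(AB) = tr(BA) even though AB ≠ BA in general.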

Exercise
1. a) Form a 4 by 5 matrix, B, such that bij = i · j (the product of i and j).

b) What is B^t? c) Is B symmetric? Why or why not?

3  1 0   2 4 3 
2. Given A  2 4 5   5 1 7  Verify that
   
1 3 6   2 3 8 
and B =

i) ( A  B ) t  A t  B t , ii) ( AB )  B A
t t t
iii) (2 A)  2 A
t t

1 1 1
3. Let A  , is At A is symmetric?
1 2 3


1.2 Matrices (Rank, Elementary Row Operations )

1.2.1 Rank of a Matrix


Definition 1.11: Let A be an m × n matrix and U be an echelon or the reduced echelon form of
A. The rank of A is denoted by Rank(A) and is defined as the number of non-zero rows of U.

i.e if a matrix A is carried to a row-echelon matrix U by elementary row operations, then the
number of leading 1s in U is called the rank of A.

Example: Find the rank of each of the following matrices.

A = [  1  1 −2        B = [ −3 −2 1 −2
       3 −1  1               1 −1 3  5
      −1  3  4 ]            −1  1 1 −1 ]

Solution: We transform the matrix A into row-echelon form by using elementary row
operations.

A = [  1  1 −2        [ 1  1 −2
       3 −1  1   →      0 −4  7   , R2 → R2 − 3R1 and R3 → R3 + R1
      −1  3  4 ]        0  4  2 ]

  → [ 1  1 −2
      0 −4  7   , R3 → R3 + R2
      0  0  9 ]

  → [ 1 1 −2
      0 1 −7/4   , R2 → (−1/4)R2 and R3 → (1/9)R3
      0 0  1 ]

which is in row-echelon form with 3 leading 1s. Therefore Rank(A) = 3.

B = [ −3 −2 1 −2        [ −3 −2  1 −2
       1 −1 3  5   →       0 −5 10 13   , R2 → 3R2 + R1 and R3 → 3R3 − R1
      −1  1 1 −1 ]         0  5  2 −1 ]

  → [ −3 −2  1 −2
       0 −5 10 13   , R3 → R3 + R2
       0  0 12 12 ]

  → [ 1 2/3 −1/3  2/3
      0  1   −2 −13/5   , R1 → (−1/3)R1, R2 → (−1/5)R2 and R3 → (1/12)R3
      0  0    1     1 ]

which is in row-echelon form and the number of leading 1s is 3. Therefore Rank(B) = 3.
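The reduction procedure can be sketched as a small Gaussian-elimination routine in Python (function name mine; a tolerance guards against floating-point round-off):

```python
def rank(A, tol=1e-9):
    """Rank = number of non-zero rows after reduction to row-echelon form."""
    M = [[float(v) for v in row] for row in A]   # work on a copy
    rows, cols = len(M), len(M[0])
    r = 0                                        # index of the next pivot row
    for c in range(cols):
        # find a row at or below r with a non-zero entry in column c
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > tol), None)
        if pivot is None:
            continue                             # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]          # interchange rows
        for i in range(r + 1, rows):             # eliminate below the pivot
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r
```

Applied to small examples, a matrix with a row that is a multiple of another loses a unit of rank, while independent rows each contribute one non-zero row of the echelon form.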

1.2.2 Elementary row (column) operations


Definition: The following operations are called elementary row (column) operations on a matrix:
1. Interchanging any two rows (or columns).
It can be represented by Ri ↔ Rj (Ci ↔ Cj).
2. Multiplying a row (or column) by a non-zero scalar k. It is called scaling. It is represented
by Ri → kRi (Ci → kCi).
3. Replacing one row (the i-th row) or column (the i-th column) by the sum of
itself and a multiple (k times) of another row or column: Ri → Ri + kRj (Ci → Ci + kCj).
Definition: A matrix A is said to be row (column) equivalent to a matrix B if B can be
obtained by applying a finite sequence of elementary row (column) operations to A.
Example: The following four matrices are row equivalent.

A = [ 2 1 1        B = [  2 1 1        C = [ 2 1 1        D = [ 2 1 1
      4 2 2   ≅           0 0 0   ≅          0 0 0   ≅          1 0 0
      1 2 2 ]            −3 0 0 ]            1 0 0 ]            0 0 0 ]

 B is obtained from A by R2 → −2R1 + R2 and R3 → −2R1 + R3

 C is obtained from B by R3 → (−1/3)R3

 D is obtained from C by R2 ↔ R3
Example: Show that A is row equivalent to B.
A = [ 1  2 4 3        B = [ 2  4 8 6
      2  1 3 2   ,          1 −1 2 3
      1 −1 2 3 ]            4 −1 7 8 ]
Solution
B is obtained from A by performing the following elementary row operations:
R1 → 2R1, R2 ↔ R3, then R3 → R3 + 2R2. Hence A is row equivalent to B.
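The three elementary row operations can be written as small in-place Python helpers (names mine; row indices are 1-based to match the Ri notation), applied here to the matrix A of this example:

```python
def swap(A, i, j):
    # R_i <-> R_j
    A[i - 1], A[j - 1] = A[j - 1], A[i - 1]

def scale(A, i, k):
    # R_i -> k * R_i, with k != 0
    A[i - 1] = [k * v for v in A[i - 1]]

def add_multiple(A, i, j, k):
    # R_i -> R_i + k * R_j
    A[i - 1] = [a + k * b for a, b in zip(A[i - 1], A[j - 1])]

A = [[1, 2, 4, 3], [2, 1, 3, 2], [1, -1, 2, 3]]
scale(A, 1, 2)            # R1 -> 2R1
swap(A, 2, 3)             # R2 <-> R3
add_multiple(A, 3, 2, 2)  # R3 -> R3 + 2R2
```

After the three operations, A has been transformed row by row into the matrix B of the example.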

Definition: A matrix obtained from an identity (unit) matrix by applying a single elementary
operation is called an elementary matrix.

Example: The following are elementary matrices:

[ 1  0        [ 1 0 0        [ 1 0 3
  0 −3 ] ,      0 1 0 ,        0 1 0
                0 0 1 ]        0 0 1 ]

Example: The matrix [ 0 0 1
                      0 1 0
                      1 0 0 ] is an elementary matrix obtained by R1 ↔ R3.
1 0 0
 A nonzero row (or column) in a matrix means a row (or column) that contains at least one
non-zero entry.
 A leading entry of a row refers to the left most nonzero entry (in a non zero row).
Definition: A matrix is said to be in echelon (row echelon) forms if it satisfies the
following three properties.
1. All nonzero rows are above any rows of all zeros.
2. Each leading entry of a row is in a column to the right of the leading entry
of the row above it.
3. All entries in a column below a leading entry are zero.

Example: The following matrices are in row echelon form.

A) [ 2 −3  2   1        [ 1 0 0 29
     0  1 −4   8   ,      0 1 0 16
     0  0  0 5/2 ]        0 0 1  1 ]

B) [ 1 4 3 7        [ 1 1 0        [ 0 1 2  6 0
     0 1 6 2   ,      0 1 0   ,      0 0 1 −1 0
     0 0 1 5 ]        0 0 0 ]        0 0 0  0 1 ]
Definition: If a matrix in echelon form satisfies the following additional condition then it
is called in reduced echelon form (or row reduced echelon form)
1. The leading entry in each non zero row is 1
2. Each leading 1 is the only nonzero entry in its column.
Example 1: The following matrices are in reduced row echelon form.

[ 1 0 0  4        [ 1 0 0        [ 0 1 −2 0 1
  0 1 0  7   ,      0 1 0   ,      0 0  0 1 3
  0 0 1 −1 ]        0 0 1 ]        0 0  0 0 0
                                   0 0  0 0 0 ]

Remark: A matrix in row-echelon form has zeros below each leading 1, whereas a matrix in
reduced row-echelon form has zeros both above and below each leading 1.

Remark:
1. Each matrix is row equivalent to one and only one row reduced echelon matrix.
   But a matrix can be row equivalent to more than one echelon matrix.
2. If matrix A is row equivalent to an echelon matrix U, we call U an echelon form of A. If
   U is in reduced echelon form, we call U the reduced echelon form of A.
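As a quick check of Remark 1, a computer algebra system can produce the unique reduced echelon form directly. A minimal sketch, assuming the `sympy` library is available (it is not part of this module), applied to the matrix A from the row-equivalence example above:

```python
from sympy import Matrix

# Reduce A to its (unique) reduced row echelon form.
A = Matrix([[1, 2, 4, 3],
            [2, 1, 3, 2],
            [1, -1, 2, 3]])

R, pivots = A.rref()   # R is the reduced echelon form; pivots lists pivot columns
print(R)               # rows [1, 0, 0, -5/9], [0, 1, 0, -8/9], [0, 0, 1, 4/3]
print(pivots)          # (0, 1, 2)
```

Every sequence of elementary row operations applied to A must end at this same matrix R, even though the intermediate echelon forms can differ.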

Activity

1. Determine which of the following matrices are in row reduced echelon form and which
   are in row echelon form (but not in reduced echelon form).

       | 1  0  1 |      | 1  1  1 |      | 1  0  1  0 |
    a) | 0  1  0 | , b) | 0  0  0 | , c) | 0  1  1  0 |
       | 0  0  0 |      | 0  0  0 |      | 0  0  0  1 |
                                         | 0  0  0  0 |

       | 0  2  3  4  5 |      | 1  0  5  0 -8  3 |
    d) | 0  0  3  4  5 | , e) | 0  1  4 -1  0  6 |
       | 0  0  0  0  5 |      | 0  0  0  0  1  0 |
       | 0  0  0  0  0 |      | 0  0  0  0  0  0 |

Activity 1.3: Reduce the following matrices into row reduced echelon form.

       | 1  2  -3  0 |         | 0  0  -2   0   7  12 |
    A) | 2  4  -2  2 |      B) | 2  4 -10   6  12  28 |
       | 3  6  -4  3 |         | 2  4  -5   6  -5  -1 |

For example, for the linear system

     x1 +  x2 +  x3 =  2
    2x1 + 3x2 +  x3 =  3
     x1 -  x2 - 2x3 = -6

we have

    | 1  1  1 |                | 1  1  1 |  2 |
    | 2  3  1 |      and       | 2  3  1 |  3 |
    | 1 -1 -2 |                | 1 -1 -2 | -6 |
    matrix of coefficients     augmented matrix

Observe that the matrix of coefficients is a submatrix of the augmented matrix. The augmented
matrix completely describes the system.

Transformations called elementary transformations can be used to change a system of linear
equations into another system of linear equations that has the same solution. These
transformations are used to solve systems of linear equations by eliminating variables. In practice
it is simpler to work in terms of matrices, using equivalent transformations called elementary row
operations. These transformations are as follows:

Elementary Transformations

1. Interchanging two equations
2. Multiplying both sides of an equation by a nonzero constant
3. Adding a multiple of one equation to another equation

Dilla University, Department of Mathematics Page 20


Dill University

Elementary Row Operations

1. Interchanging two rows of a matrix
2. Multiplying the elements of a row by a nonzero constant
3. Adding a multiple of the elements of one row to the corresponding elements of another row

Systems of equations that are related through elementary transformations, and thus have the
same solutions, are called equivalent systems. The symbol ≅ is used to indicate equivalent
systems of equations. The next example compares the elementary transformations with the
elementary row operations.

Example 1: Solve the following system of linear equations.

     x1 +  x2 +  x3 =  2
    2x1 + 3x2 +  x3 =  3
     x1 -  x2 - 2x3 = -6

Solution: Elimination Method

Initial system:
     x1 +  x2 +  x3 =  2
    2x1 + 3x2 +  x3 =  3
     x1 -  x2 - 2x3 = -6

Eliminate x1 from the 2nd and 3rd equations:

     x1 +  x2 +  x3 =  2
           x2 -  x3 = -1        eq(2) - 2 eq(1) -> eq(2)
         -2x2 - 3x3 = -8        eq(3) -   eq(1) -> eq(3)

Eliminate x2 from the 1st and 3rd equations:

     x1       + 2x3 =   3       eq(1) -   eq(2) -> eq(1)
           x2 -  x3 =  -1
              - 5x3 = -10       eq(3) + 2 eq(2) -> eq(3)

Make the coefficient of x3 in the 3rd equation equal to 1:

     x1       + 2x3 =  3
           x2 -  x3 = -1
                 x3 =  2        eq(3) -> -(1/5) eq(3)

Eliminate x3 from the 1st and 2nd equations:

     x1             = -1        eq(1) - 2 eq(3) -> eq(1)
           x2       =  1        eq(2) +   eq(3) -> eq(2)
                 x3 =  2

The solution is x1 = -1, x2 = 1, x3 = 2.

Matrix Method

The augmented matrix is

    | 1  1  1 |  2 |
    | 2  3  1 |  3 |
    | 1 -1 -2 | -6 |

We refer to the first row as the pivot row, and then we have:

    | 1  1  1 |  2 |
    | 0  1 -1 | -1 |     R2 -> R2 - 2R1,  R3 -> R3 - R1
    | 0 -2 -3 | -8 |

Create appropriate zeros in column 2:

    | 1  0  2 |   3 |
    | 0  1 -1 |  -1 |    R1 -> R1 - R2,  R3 -> R3 + 2R2
    | 0  0 -5 | -10 |

Make the (3,3) element 1:

    | 1  0  2 |  3 |
    | 0  1 -1 | -1 |     R3 -> -(1/5) R3
    | 0  0  1 |  2 |

Create zeros in column 3:

    | 1  0  0 | -1 |
    | 0  1  0 |  1 |     R1 -> R1 - 2R3,  R2 -> R2 + R3
    | 0  0  1 |  2 |

This matrix corresponds to the system

    x1 = -1,   x2 = 1,   x3 = 2

Thus the solution is x1 = -1, x2 = 1, x3 = 2.
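The same answer can be verified numerically. A minimal sketch, assuming the `numpy` library is available:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system solved above.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 3.0, 1.0],
              [1.0, -1.0, -2.0]])
b = np.array([2.0, 3.0, -6.0])

x = np.linalg.solve(A, b)   # solves A x = b for a square, invertible A
print(x)                    # approximately [-1.  1.  2.]
```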

Exercise 1.2: Solve the following system of linear equations.

     x1 - 2x2 + 4x3 =  12
    2x1 -  x2 + 5x3 =  18
    -x1 + 3x2 - 3x3 =  -8

1.3 System of Linear equations


In this section we will present certain systematic methods for solving system of linear
equations.

Definition: A linear equation in the variables x1, x2, ..., xn over the real field
R is an equation that can be written in the form

    a1 x1 + a2 x2 + ... + an xn = b                                  (1)

where b and the coefficients a1, a2, ..., an are given real numbers.

Definition: A system of linear equations (or a linear system) is a collection of one or more
linear equations involving the same variables, say x1 , x 2 ,..., x n .

Now consider a system of m linear equations in n-unknowns x1 , x 2 ,..., x n :


    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
      .                                                              (2)
      .
      .
    am1 x1 + am2 x2 + ... + amn xn = bm

If b1 = b2 = ... = bm = 0, then the system of equations is called homogeneous. If bi ≠ 0 for some
i ∈ {1, 2, 3, ..., m}, then the system is called non-homogeneous.

In matrix notation, the linear system (2) can be written as AX = B, where

        | a11  a12  ...  a1n |        | x1 |        | b1 |
        | a21  a22  ...  a2n |        | x2 |        | b2 |
    A = |  .    .         .  | ,  X = |  . | ,  B = |  . |
        |  .    .         .  |        |  . |        |  . |
        | am1  am2  ...  amn |        | xn |        | bm |

We call A the coefficient matrix of the system (2).

The matrix [A | B], obtained by adjoining the column B to A, is called the augmented matrix
for the system.

Example 1: For the non-homogeneous linear system

      x1 + 3x2 -  x3 = 2
           x2 - 2x3 = 4
    -2x1 - 3x2 - 3x3 = 5

the matrix

        |  1   3  -1 |
    A = |  0   1  -2 |
        | -2  -3  -3 |

is the coefficient matrix and

    |  1   3  -1 | 2 |
    |  0   1  -2 | 4 |
    | -2  -3  -3 | 5 |

is the augmented matrix.
Are the coefficient matrix and the augmented matrix of a homogeneous linear system equal?
Why?


A system of linear equations has either

1. no solution, or
2. exactly one solution, or
3. Infinitely many solutions.
We say that a linear system is consistent if it has either one solution or infinitely many
solutions; a system is inconsistent if it has no solution.

Solving a linear system

This is the process of finding the solutions of a linear system of equations. We first see the
technique of elimination (the Gaussian elimination method) and then we add two more techniques:
the matrix inversion method and Cramer's rule.

1.3.1 Gauss-Jordan Elimination Method


The Gaussian elimination method is a standard method for solving linear systems. It applies to
any system, no matter whether m < n, m = n or m > n (where m and n are number of equations
and variables respectively). We know that equivalent linear systems have the same solutions.
Thus the basic strategy in this method is to replace a given system with an equivalent system,
which is easier to solve.
The basic operations that are used to produce an equivalent system of linear equations are the
following:

1. Replace one equation by the sum of itself and a multiple of another equation.
2. Interchange two equations.
3. Multiply all the terms in an equation by a nonzero constant.

 Why do these three operations not change the solution set of the system?

The three basic operations listed above correspond to the three elementary row operations on the
augmented matrix.

Thus to solve a linear system by elimination we first perform appropriate row operations on the
augmented matrix of the system to obtain the augmented matrix of an equivalent linear system
which is easier to solve and use back substitution on the resulting new system. This method can


also be used to answer questions about existence and uniqueness of a solution whenever there is
no need to solve the system completely.
In Gaussian elimination method we either transform the augmented matrix to an echelon matrix
or a reduced echelon matrix. That is we either find an echelon form or the reduced echelon form
of the augmented matrix of the system.

Example 2: If

    | 1  0  0 |  5 |
    | 0  1  0 | -2 |
    | 0  0  1 |  4 |

is the row reduced form of the augmented matrix of a system of linear equations, then the
corresponding system of equations is

    x1 =  5
    x2 = -2
    x3 =  4

This gives the solution of the original system of linear equations.

Example 3: Determine whether the following system is consistent. If so, how many solutions does it
have?

     x1 -  x2 +  x3 = 3
     x1 + 5x2 - 5x3 = 2
    2x1 +  x2 -  x3 = 1

Solution:

The augmented matrix is

    | 1 -1  1 | 3 |
    | 1  5 -5 | 2 |
    | 2  1 -1 | 1 |

Let us perform a finite sequence of elementary row operations on the augmented matrix.


1  1 1 3  R   R  R 1  1 1 3
R 3   2 R1 R3
A B  1 5  5 2     0 6  6  1 
2 1 2

    
2 1  1 1  2 1  1 1 

 
1  1 1 3 1
R3  2 R 2 R3 1  1 1 3 
0 6  6  1     0 6  6  1 
   9
0 3  3  5 0 0 0  
 2

The corresponding linear system of the last matrix is

x1  x 2  x 3  3
6 x 2  6 x 3  1
9
0 
2

9
But the last equation 0. x1  0. x 2  0. x 3  is never true. That is there are no values x1 , x 2 , x 3
2
that satisfy the new system (*). Since (*) and the original linear system have the same solution
set, the original system is inconsistent (has no solution).

Generally, let Ax = b be a system of equations in matrix form with n variables.

1. If rank[A|b] > rank(A), then Ax = b has no solution.
2. If rank[A|b] = rank(A) = r = n, then there exists a unique solution to the
   system Ax = b.
3. If rank[A|b] = rank(A) = r < n, then we have an infinite number of
   solutions to the system Ax = b.
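These three cases can be checked mechanically by comparing ranks. A minimal sketch, assuming the `numpy` library is available, applied to the inconsistent system from Example 3 above:

```python
import numpy as np

def classify(A, b):
    # Classify the system A x = b by comparing rank(A) with rank([A|b]).
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    r_A = np.linalg.matrix_rank(A)
    r_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    n = A.shape[1]
    if r_Ab > r_A:
        return "no solution"
    return "unique solution" if r_A == n else "infinitely many solutions"

# The inconsistent system of Example 3: rank(A) = 2 but rank([A|b]) = 3.
print(classify([[1, -1, 1], [1, 5, -5], [2, 1, -1]], [3, 2, 1]))
```

Applying the same function to the system of Example 1 in the previous section (solution x = (-1, 1, 2)) returns "unique solution", since there rank(A) = rank([A|b]) = 3 = n.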
Example 1: Using Gaussian elimination, solve the linear system

     2x -  y +  z =  2
    -2x +  y +  z =  4
     6x - 3y - 2z = -9
Solution:


The augmented matrix of the given system is

    |  2 -1  1 |  2 |
    | -2  1  1 |  4 |
    |  6 -3 -2 | -9 |

Let us find an echelon form of the augmented matrix first. From this we can determine whether
the system is consistent or not. If it is consistent, we go ahead to obtain the reduced echelon form
of [A|B], which enables us to describe explicitly all the solutions.

    |  2 -1  1 |  2 |   R2 -> R2 + R1     | 2 -1  1 |   2 |
    | -2  1  1 |  4 |   R3 -> R3 - 3R1    | 0  0  2 |   6 |
    |  6 -3 -2 | -9 |   -------------->   | 0  0 -5 | -15 |

    R3 -> R3 + (5/2)R2   | 2 -1  1 | 2 |   R2 -> (1/2)R2   | 2 -1  1 | 2 |
    ------------------>  | 0  0  2 | 6 |   ------------->  | 0  0  1 | 3 |
                         | 0  0  0 | 0 |                   | 0  0  0 | 0 |

    R1 -> R1 - R2   | 2 -1  0 | -1 |   R1 -> (1/2)R1   | 1 -1/2  0 | -1/2 |
    ------------->  | 0  0  1 |  3 |   ------------->  | 0   0   1 |   3  |
                    | 0  0  0 |  0 |                   | 0   0   0 |   0  |

The associated linear system to the reduced echelon form of [A|B] is

    x - (1/2)y = -1/2
             z = 3
             0 = 0

The third equation is 0x + 0y + 0z = 0. It is not an inconsistency; it is true whatever
values we take for x, y, z. Then the system is consistent, and if we assign any value β to y, the
first equation gives x = -1/2 + (1/2)β. From the second we have z = 3.


Thus x = -1/2 + (1/2)β, y = β and z = 3 is a solution of the given system, where β is any real
number.
Then there are an infinite number of solutions; for example, x = -1/2, y = 0, z = 3, or
x = 0, y = 1, z = 3, and so on.

In vector form the general solution of the given system is of the form ((1/2)β - 1/2, β, 3), where
β ∈ R. What does this represent in R^3?
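A computer algebra system can return this one-parameter family directly. A minimal sketch, assuming the `sympy` library is available; `linsolve` expresses the pinned variables in terms of the free ones:

```python
from sympy import symbols, linsolve, Matrix, simplify

x, y, z = symbols('x y z')

# Augmented matrix of 2x - y + z = 2, -2x + y + z = 4, 6x - 3y - 2z = -9.
aug = Matrix([[2, -1, 1, 2],
              [-2, 1, 1, 4],
              [6, -3, -2, -9]])

sol = next(iter(linsolve(aug, (x, y, z))))
print(sol)   # a one-parameter family: x = y/2 - 1/2 with y free, and z = 3
```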

Activity 1.2

1. Solve the system using Gauss-Jordan elimination:

     x +  y + 2z = 9
    2x + 4y - 3z = 1
    3x + 6y - 5z = 0

2. Find the solution set of the system:

    x1 + x2      + x4 = 1
    x1 + x2 + x3      = 2
         x2 + x3 + x4 = 0

Exercise 1.3
1. Find the solution set of each of the following systems:

    a)  2x1 +  x2 + 3x3 = 1        b)   x1 + 2x2 + 3x3 +  x4 = 0
         x1 +  x2 + 2x3 = 2             3x1 +  x2 + 5x3 +  x4 = 0
        4x1 + 3x2 +  x3 = 3             2x1 +  x2        +  x4 = 0
         x1        + 5x3 = 3

    c)   x1 + 3x2 + 5x3 + 2x4 = 11
        3x1 + 2x2 + 7x3 + 5x4 = 0
        2x1 +  x2        +  x4 = 7


x yz 6
2. For what values of  and  the system: x  2 y  3 z  10
x  2 y  z   has

i. No solution ii. Unique solution iii. Infinitely many solution


3. Find the augmented matrix for each of the following system of linear equations:

1.3.2 Solving Linear Systems (Basic solutions)

If A is an invertible square matrix, then for each column matrix b, the system of equations
Ax = b has exactly one solution, namely, x = A⁻¹b.
Example: Consider the system of linear equations

     x1 + 2x2 + 3x3 = 5
    2x1 + 5x2 + 3x3 = 3
     x1        + 8x3 = 17

Solution: In matrix form this system can be written as Ax = b, where

        | 1  2  3 |        | x1 |        |  5 |
    A = | 2  5  3 | ,  x = | x2 | ,  b = |  3 |
        | 1  0  8 |        | x3 |        | 17 |

The inverse of A is

           | -40  16   9 |
    A⁻¹ = |  13  -5  -3 |
           |   5  -2  -1 |

Then the solution of the system is

                | -40  16   9 | |  5 |   |  1 |
    x = A⁻¹b = |  13  -5  -3 | |  3 | = | -1 |
                |   5  -2  -1 | | 17 |   |  2 |
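The inverse-method computation above can be reproduced numerically. A minimal sketch, assuming the `numpy` library is available; note that in practice `np.linalg.solve(A, b)` is preferred over forming the inverse explicitly, since it is cheaper and more numerically stable:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 3.0],
              [1.0, 0.0, 8.0]])
b = np.array([5.0, 3.0, 17.0])

A_inv = np.linalg.inv(A)   # the matrix inverse (A must be square and invertible)
x = A_inv @ b              # x = A^{-1} b
print(x)                   # approximately [ 1. -1.  2.]
```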

Let us now summarize the different possible cases that may arise for the solution of the system
of equations Ax = b.
1. If rank(A, b) > rank(A), then Ax = b has no solution.
2. If rank(A, b) = rank(A) = r = n, then there exists a unique solution to the
   system Ax = b.
3. If rank(A, b) = rank(A) = r < n, then we have an infinite number of
   solutions to the system Ax = b.
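Cramer's rule, mentioned among the solution methods in Section 1.3 but not worked out there, can be sketched in a few lines of plain Python for the 3x3 case. The matrix A and vector b below are taken from the inverse-method example above:

```python
def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cramer3(A, b):
    # Solve A x = b for a 3x3 system with det(A) != 0 using Cramer's rule:
    # x_j = det(A_j) / det(A), where A_j is A with column j replaced by b.
    D = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]   # copy A, then overwrite column j with b
        for i in range(3):
            Aj[i][j] = b[i]
        xs.append(det3(Aj) / D)
    return xs

A = [[1, 2, 3], [2, 5, 3], [1, 0, 8]]
b = [5, 3, 17]
print(cramer3(A, b))   # [1.0, -1.0, 2.0], matching the inverse method
```

Here det(A) = -1, so the rule applies; when det(A) = 0 the system has either no solution or infinitely many, and Cramer's rule cannot be used.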

1.4 System of linear inequality


Many practical problems in business, science, and engineering involve systems of linear
inequalities. A solution of a system of inequalities in x and y is a point (x, y) that satisfies each
inequality in the system. To sketch the graph of a system of inequalities in two variables, first
sketch the graph of each individual inequality (on the same coordinate system) and then find the
region that is common to every graph in the system. For systems of linear inequalities, it is
helpful to find the vertices of the solution region.
Example 1 (Solving a system of inequalities): Sketch the graph (and label the vertices) of the
solution set of the system

    x - y <  2
    x     > -2
    y     ≤  3

Example 2 (Consumer surplus and producer surplus):

The demand and supply equations for a new type of personal digital assistant are given by

    p = 150 - 0.00001x      (demand equation)
    p = 60 + 0.00002x       (supply equation)

where p is the price (in dollars) and x represents the number of units. Find the consumer surplus
and producer surplus for these two equations.
Solution


Begin by finding the point of equilibrium by setting the two equations equal to each other and
solving for x.

    60 + 0.00002x = 150 - 0.00001x      Set equations equal to each other.
         0.00003x = 90                  Combine like terms.
                x = 3,000,000           Solve for x.

So the solution is x = 3,000,000, which corresponds to an equilibrium price of p = $120. The
consumer surplus and producer surplus are the areas of the following triangular regions.

In Figure F1, you can see that the consumer and producer surpluses are defined as the areas of
the shaded triangles.
Consumer surplus = (1/2)(base)(height) = (1/2)(3,000,000)(30) = $45,000,000
Producer surplus = (1/2)(base)(height) = (1/2)(3,000,000)(60) = $90,000,000
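The equilibrium computation and both surplus areas can be reproduced in a few lines of plain Python:

```python
# Demand: p = 150 - 0.00001 x     Supply: p = 60 + 0.00002 x
# Equating them: 90 = 0.00003 x, so x = 3,000,000 and p = 120.

x_eq = 90 / 0.00003            # equilibrium quantity, about 3,000,000 units
p_eq = 60 + 0.00002 * x_eq     # equilibrium price, about $120

# Triangle under the demand curve and above p_eq (height 150 - 120 = 30).
consumer_surplus = 0.5 * x_eq * (150 - p_eq)
# Triangle above the supply curve and below p_eq (height 120 - 60 = 60).
producer_surplus = 0.5 * x_eq * (p_eq - 60)

print(x_eq, p_eq)                            # about 3,000,000 and 120
print(consumer_surplus, producer_surplus)    # about 45,000,000 and 90,000,000
```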
Activity 1.4.1
The minimum daily requirements from the liquid portion of a diet are 300 calories, 36 units of
vitamin A, and 90 units of vitamin C. A cup of dietary drink X provides 60 calories, 12 units of
vitamin A, and 10 units of vitamin C. A cup of dietary drink Y provides 60 calories, 6 units of
vitamin A, and 30 units of vitamin C. Set up a system of linear inequalities that describes how
many cups of each drink should be consumed each day to meet the minimum daily requirements
for calories and vitamins.
Hint: Begin by letting x and y represent the following:
    x = number of cups of dietary drink X
    y = number of cups of dietary drink Y
To meet the minimum daily requirements, the following inequalities must be satisfied:

    60x + 60y ≥ 300
    12x +  6y ≥ 36
    10x + 30y ≥ 90
            x ≥ 0
            y ≥ 0

Summary
Types of Matrices
Definitions:
1. Row Matrix: A matrix that has exactly one row is called a row matrix. For
example, the matrix A = [5  2  -1  4] is a row matrix of order 1 × 4.

2. Column Matrix: A matrix consisting of a single column is called a column matrix. For
example, the matrix

        | 3 |
    B = | 1 |
        | 4 |

is a 3 × 1 column matrix.

3. Zero or Null Matrix: A matrix whose entries are all 0 is called a zero or null matrix. It is
usually denoted by 0 (m × n) or more simply by 0. For example,

        | 0  0  0  0 |
    0 = | 0  0  0  0 |   is a 2 × 4 zero matrix.

4. Square Matrix: An m × n matrix is said to be a square matrix of order n if m = n, that is, if
it has the same number of columns as rows.
Properties of the trace
Let A and B be square matrices that are conformable for addition and multiplication. Then
a. tr(kA) = ktr(A) for any scalar k.
b. tr(A + B) = tr(A) + tr(B)
c. tr(AB) = tr(BA)


d. tr(Aᵀ) = tr(A)
Note: Let A be any square matrix. Then
 A + Aᵀ is symmetric
 A - Aᵀ is skew-symmetric
 A can be written as a sum of a symmetric and a skew-symmetric matrix.
 (Aᵀ)ᵀ = A, (A + B)ᵀ = Aᵀ + Bᵀ, (kA)ᵀ = kAᵀ, (AB)ᵀ = BᵀAᵀ
Thus the basic strategy in this method is to replace a given system with an equivalent system,
which is easier to solve.
The basic operations that are used to produce an equivalent system of linear equations are the
following:

1. Replace one equation by the sum of itself and a multiple of another equation.
2. Interchange two equations.
3. Multiply all the terms in an equation by a nonzero constant.

 Why do these three operations not change the solution set of the system?

The three basic operations listed above correspond to the three elementary row operations on the
augmented matrix.

Thus to solve a linear system by elimination we first perform appropriate row operations on the
augmented matrix of the system to obtain the augmented matrix of an equivalent linear system
which is easier to solve and use back substitution on the resulting new system. This method can
also be used to answer questions about existence and uniqueness of a solution whenever there is
no need to solve the system completely.
In Gaussian elimination method we either transform the augmented matrix to an echelon matrix
or a reduced echelon matrix. That is we either find an echelon form or the reduced echelon form
of the augmented matrix of the system.

Review exercise

1. Given the matrices A, B, and C below:

        | 1  2  4 |        | 2 -1  3 |        | 4 |
    A = | 2  3  1 | ,  B = | 2  4  2 | ,  C = | 2 |
        | 5  0  3 |        | 3  6  1 |        | 3 |

find, if possible: a) A + B    b) B + C

2. Let

        | 2  3  -1 |        |  1 |
    A = | 0  4   2 | ,  B = |  2 |
                            | -1 |

Find the product AB.

3. If a matrix A is 3x5 and the product AB is 3x7, then what is the order of B?

4. Let

        | a  3  4   8 |
    A = | b  c  3   9 |
        | d  e  f  10 |
        | g  h  i   j |

be a symmetric matrix. What numbers should the letters a to j represent?
a) Does a symmetric matrix have to be square?
b) Are all square matrices symmetric?

5. Let

        | 1  1  1 |
    A = | 1  2  3 |

Is AᵀA symmetric?
6. For the non-homogeneous linear system

      x1 + 3x2 -  x3 = 2
           x2 - 2x3 = 4
    -2x1 - 3x2 - 3x3 = 5

the matrix

        |  1   3  -1 |
    A = |  0   1  -2 |
        | -2  -3  -3 |

is the coefficient matrix and

    |  1   3  -1 | 2 |
    |  0   1  -2 | 4 |
    | -2  -3  -3 | 5 |

is the augmented matrix.

Are the coefficient matrix and the augmented matrix of a homogeneous linear system equal?
Why?

A system of linear equations has either

1. no solution, or
2. exactly one solution, or
3. Infinitely many solutions.


CHAPTER TWO

2. Linear Programming Problems models of practical problems


2.1 Introduction
Optimization is the theory of extremal problems. Programming problems in general are concerned
with the use or allocation of scarce resources (labor, materials, machines, and capital) in the
"best" possible manner so that costs are minimized or profits are maximized. In using the term
"best" it is implied that some choice or set of alternative courses of action is available for making
the decision. In general the best decision is obtained by solving a mathematical problem.
Optimization models attempt to express, in mathematical terms, the goal of solving a problem in
the “best” way. That might mean running a business to maximize profit, minimize loss,
maximize efficiency, or minimize risk. It might mean designing a bridge to minimize
weight or maximize strength. It might mean selecting a flight plan for an aircraft to minimize
time or fuel use. The desire to solve a problem in an optimal way is so common that optimization
models arise in almost every area of application. They have even been used o explain the laws of
nature,as in Fermat’sd erivation of the law of refraction for light.
Objectives
At the end of this unit the learners will be able to:
▪ Understand the form of a linear programming problem
▪ Know the decision process in linear programming problems
▪ Differentiate the techniques for solving linear programming problems
▪ Identify how to formulate linear programming problems

2.2 Decision process and relevance of optimization


Optimization is a fundamental tool for understanding nature, science, engineering, economics,
and mathematics. Physical and chemical systems tend to a state that minimizes some measure of
their energy. People try to operate man-made systems (for example, a chemical plant, a cancer
treatment device, an investment portfolio, or a nation’s economy) to optimize their performance
in some sense. Consider the following examples
1. Given a range of foods to choose from, what is the diet of lowest cost that meets an
individual’s nutritional requirements?
2. What is the most profitable schedule an airline can devise given a particular fleet of


planes, a certain level of staffing, and expected demands on the various routes?
3. Where should a company locate its factories and warehouses so that the costs of
transporting raw materials and finished products are minimized?
4. How should the equipment in an oil refinery be operated, so as to maximize rate of
production while meeting given standards of quality?
5. What is the best treatment plan for a cancer patient, given the characteristics of the
tumor and its proximity to vital organs?
Simple problems of this type can sometimes be solved by common sense, or by using tools from
calculus.
Others can be formulated as optimization problems, in which the goal is to select values that
maximize or minimize a given objective function, subject to certain constraints.
A linear programming problem is a problem of minimizing or maximizing a linear function in
the presence of linear constraints of the inequality and/or the equality type.
- It is the problem of optimizing a linear function of an n-dimensional vector X under
finitely many linear equality and non-strict inequality constraints.
- Linear programming techniques are widely used to solve a number of military, economic,
industrial, and social problems.
Three primary reasons for its wide use are:
 A large variety of problems in diverse fields can be represented, or at least approximated,
as linear programming models.
 Efficient techniques for solving linear programming problems are available.
 The ease with which data variation (sensitivity analysis) can be handled through linear
programming models.

General Optimization Problem


Let V be a vector space and X ⊆ V. Let a function f : X → ℝ and a set S ⊆ X be given.
Then we consider the following.
Extremal Problem
Find a point x0 ∈ S such that f(x0) ≤ f(x) for all x ∈ S.          (P)
We call (P) an optimization problem, particularly a minimization problem.
In shorthand notation, we can write problem (P) as follows:

    (P)   f(x) → min,   x ∈ S.

In the following, we will use the notation:
 The set S is called the feasible set.
 f is called the objective function of (P).
 Each x ∈ S is called a feasible point or a feasible solution.
 A point x* ∈ S with f(x*) ≤ f(x) for all x ∈ S is called a solution of (P) or an optimal
solution of (P).

General formulation of an optimization problem

    (P)   f(x) → min (max),
          x ∈ S = { x ∈ X : g_i(x) ≤ 0,  i ∈ {1, 2, ..., m} }

General Formulation of a Linear Optimization Problem

    z(x1, x2, x3, ..., xn) = c1 x1 + c2 x2 + ... + cn xn → min (max)
    subject to
        a11 x1 + a12 x2 + ... + a1n xn  (≤, =, ≥)  b1
          ...
        am1 x1 + am2 x2 + ... + amn xn  (≤, =, ≥)  bm,
        xj ≥ 0,  j = 1, ..., n.

2.3. Model and model building


Formulation of Linear Programming Models
 The main objective of Linear Programming techniques is to achieve optimal results with
restricted resources.
i.e., the goal is to find a point that minimizes the objective function and at the same time
satisfies the constraints. We refer to any point that satisfies the constraints as a feasible point.
In a linear programming problem, the objective function is linear, and the set of feasible
points is determined by a set of linear equations and/or inequalities.
In the formulation of any LP model it is quite necessary to specify:
a. Decision variables: The problem should have a number of decision variables for
   which the decisions are to be taken. These are x1, x2, ..., xn.
b. Well-defined objective function: The linear function z given by the equation
   z = c1 x1 + c2 x2 + ... + cn xn is called the objective function that is to be
   optimized. It is necessary that this function be well defined.
c. Constraints: These are conditions of the problem expressed as simultaneous linear
   equations or inequalities.
d. Linearity: The objective function and constraint functions must be linear.
e. Non-negativity of decision variables: The decision variables must be non-negative.
f. Deterministic: All the coefficients of z, i.e., the cj's, and the constraints are known with
   certainty.
g. Feasible solution: Any set of values of the variables, say x = (x1, x2, ..., xn), that
   satisfies all the constraints and the non-negativity restrictions of the problem is called
   a feasible solution of the LPP.
h. Basic solution
The three basic steps in constructing a linear programming model are:
1. Identify the unknown variables to be determined (decision variables) and represent them
in terms of algebraic symbols.
2. Identify all the restrictions or constraints in the problem and express them in linear
equations or linear inequalities which are linear functions of the unknown symbols.
3. Identify the objective or criterion function and represent it as a linear function of the
decision variables, which is to be maximized or minimized.

Example 1:
A manufacturer produces two products A and B. The profit per unit sold of each product is Br. 2
and Br. 3 respectively. The time required to manufacture one unit of each product and the daily
capacity of the two machines C and D are given below:

    Machine    Time required per unit (in minutes)    Machine capacity
               Product A         Product B            (minutes per day)
    C          5                 7                    1200
    D          4                 6                    1000

The manufacturer wants at least 60 units of product A and 30 units of product B to be produced.
It is assumed that all products manufactured are sold in the market. Formulate the problem
mathematically.
Solution:
Let x be the number of units of product A and y be the number of units of product B.
a) The profit from product A is Br. 2x.
b) The profit from product B is Br. 3y.
c) The total profit is 2x + 3y.
The objective is to maximize the profit.
Therefore the objective function is

    z = 2x + 3y ............................................... (1)

The time required to produce x and y units on machine C is 5x + 7y minutes, and the capacity of C is
1200 minutes.
Hence, 5x + 7y ≤ 1200,
and for machine D we have 4x + 6y ≤ 1000.
Since the manufacturer wants at least 60 units of A and 30 units of B,
we have x ≥ 60, y ≥ 30.
Thus the required LP model is given by
z = 2x + 3y → maximization
s. t
5x + 7y ≤ 1200
4x + 6y ≤ 1000
x ≥ 60, y ≥ 30
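As a cross-check, the model can be handed to an off-the-shelf LP solver. A minimal sketch, assuming the `scipy` library is available; `linprog` minimizes by default, so the profit coefficients are negated:

```python
from scipy.optimize import linprog

# Maximize z = 2x + 3y  <=>  minimize -2x - 3y.
c = [-2, -3]
A_ub = [[5, 7],    # machine C: 5x + 7y <= 1200
        [4, 6]]    # machine D: 4x + 6y <= 1000
b_ub = [1200, 1000]
bounds = [(60, None),   # x >= 60
          (30, None)]   # y >= 30

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)   # an optimal production plan and the maximum profit (500)
```

In this problem the maximum profit of 500 is attained along a whole edge of the feasible region (every point on the segment of 4x + 6y = 1000 between (60, 380/3) and (100, 100) gives z = 500), so the solver may return any point on that edge.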
Example 2:
A dietician wishes to mix two types of foods in such a way that the vitamin contents of the
mixture contains at least 8 units of vitamin A and 10 units of vitamin B. food I contains 2 units
per kg of vitamin A and 1 units per kg of vitamin B. while food II contains 1 units per kg of


vitamin A and 2 units per kg of vitamin B. It costs Br. 5 per kg to purchase food I and Br. 8 to
purchase food II. Prepare a mathematical model of the problem stated above.
Solution:
Let x kg of food I and y kg of food II be used in the food mixture so as to attain the minimum
requirements of vitamin A and B. We form the summary table as follows
    Food                      Units of vitamin content per kg     Cost per kg (Br.)
                              A              B
    Food I                    2              1                    5
    Food II                   1              2                    8
    Daily min. requirement    8              10
The total cost of the mixture is 5x + 8y, and there should be at least 8 units of vitamin A and 10 units
of vitamin B in the food mixture.
Therefore the constraints are

    2x +  y ≥ 8
     x + 2y ≥ 10
     x ≥ 0,  y ≥ 0

Hence the mathematical formulation is

    Minimize z = 5x + 8y
    subject to
        2x +  y ≥ 8
         x + 2y ≥ 10
         x ≥ 0,  y ≥ 0
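This minimization model can also be checked with a solver. A minimal sketch, assuming the `scipy` library is available; `linprog` accepts only "<=" rows, so each ">=" constraint is multiplied by -1:

```python
from scipy.optimize import linprog

# Minimize 5x + 8y subject to 2x + y >= 8, x + 2y >= 10, x, y >= 0.
c = [5, 8]
A_ub = [[-2, -1],   # -(2x +  y) <= -8
        [-1, -2]]   # -( x + 2y) <= -10
b_ub = [-8, -10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimum: x = 2 kg of food I, y = 4 kg of food II, cost 42
```

The minimum cost of Br. 42 occurs at the vertex where both vitamin constraints are binding: 2x + y = 8 and x + 2y = 10 give (x, y) = (2, 4).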

Example 3 (Reddy Mikks company): Reddy Mikks produces both interior and exterior paints
from two raw materials, M1 and M2. The following table provides the basic data of the problem:

                              Tons of raw material per ton of       Maximum daily
                              Exterior paint    Interior paint      availability (tons)
    Raw material M1           6                 4                   24
    Raw material M2           1                 2                   6
    Profit per ton
    (thousand $)              5                 4

A market survey indicates that the daily demand for interior paint cannot exceed that for exterior
paint by more than 1 ton. Also, the maximum daily demand for interior paint is 2 tons.
Reddy Mikks wants to determine the optimum (best) produce mix of interior and exterior paints
that maximizes the total daily profit.
The LP model has three basic components.
1. Decision variables that we seek to determine.
2. Objective(goal) that we need to optimize (maximize or minimize).
3. Constraints that the solution must satisfy.
The proper definition of the decision variables is an essential first step in the development of the
model. Once done, the task of constructing the objective function and the constraints becomes
more straightforward.
For the Reddy Mikks problem, we need to determine the daily amounts to be produced of
exterior and interior paints. Thus, the variables of the model are defined as
    x1 = tons produced daily of exterior paint
    x2 = tons produced daily of interior paint
To construct the objective function, note that the company wants to maximize (i.e, increase as
much as possible) the total daily profit of both paints. Given that the profit per ton of exterior and
interior paints are 5 and 4(thousand) dollars, respectively, it follows that
Total profit from exterior paint = 5x1 (thousand) dollars
Total profit from interior paint = 4x2 (thousand) dollars
Letting Z represent the total daily profit (in thousands of dollars), the objective of the company is

    Maximize Z = 5x1 + 4x2

Next, we construct the constraints that restrict raw material usage and product demand. The raw
material restrictions are expressed verbally as
Usage of raw materials by both paints ≤ maximum raw material availability
Usage of raw material M1 by exterior paint = 6x1 tons/day
Usage of raw material M1 by interior paint = 4x2 tons/day
Hence, usage of raw material M1 by both paints = 6x1 + 4x2 tons/day.
In a similar manner,
Usage of raw material M2 by both paints = x1 + 2x2 tons/day.


Because the daily availabilities of raw materials M1 and M2 are limited to 24 and 6 tons,
respectively, the associated restrictions are given as

    6x1 + 4x2 ≤ 24      (raw material M1)
     x1 + 2x2 ≤ 6       (raw material M2)

The first demand restriction stipulates that the excess of the daily production of interior over
exterior paint, x2 - x1, should not exceed 1 ton, which translates to

    -x1 + x2 ≤ 1        (market limit)

The second demand restriction stipulates that the maximum daily demand for interior paint is
limited to 2 tons, which translates to

    x2 ≤ 2              (demand limit)

Non-negativity restriction: x1, x2 ≥ 0.
The complete Reddy Mikks model is

    Maximize Z = 5x1 + 4x2
    subject to
        6x1 + 4x2 ≤ 24
         x1 + 2x2 ≤ 6
        -x1 +  x2 ≤ 1
               x2 ≤ 2
        x1, x2 ≥ 0
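The complete model maps directly onto a solver call. A minimal sketch, assuming the `scipy` library is available:

```python
from scipy.optimize import linprog

# Reddy Mikks model: maximize Z = 5x1 + 4x2 (thousand dollars).
c = [-5, -4]                  # negate, since linprog minimizes
A_ub = [[6, 4],               # raw material M1: 6x1 + 4x2 <= 24
        [1, 2],               # raw material M2:  x1 + 2x2 <= 6
        [-1, 1],              # market limit:    -x1 +  x2 <= 1
        [0, 1]]               # demand limit:           x2 <= 2
b_ub = [24, 6, 1, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimum x1 = 3, x2 = 1.5, Z = 21 (thousand dollars)
```

The optimal mix is 3 tons/day of exterior paint and 1.5 tons/day of interior paint, using up both raw materials exactly, for a daily profit of 21 thousand dollars.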
Example 3: Advertising Media Selection
An advertising company wishes to plan an advertising campaign in three different media:
television, radio, and magazines. The purpose of the advertising program is to reach as many
potential customers as possible. Results of a market study are given below:


                                                Television
                                                Daytime    Prime time   Radio     Magazines
    Cost of an advertising unit (Br.)           40,000     75,000       30,000    15,000
    Number of potential customers
    reached per unit                            400,000    900,000      500,000   200,000
    Number of women customers
    reached per unit                            300,000    400,000      200,000   100,000

The company does not spend more than Br.800,000 on advertizing. It further requires that
1. At least 2 million exposures take place among women;
2. Advertizing on television be limited to Br. 500,000;
3. At least 3 advertizing units be bought on daytime television, and two units during prime
time and
4. The number of advertising units on radio and magazines should each be between 5 and
10. Formulate the problem mathematically.

Solution: Let x₁, x₂, x₃, x₄ be the number of advertizing units bought in daytime television,
prime-time television, radio, and magazines, respectively.
The total number of potential customers reached (in thousands) is
400x₁ + 900x₂ + 500x₃ + 200x₄
The restriction on the advertizing budget (in Br.) is represented by
40,000x₁ + 75,000x₂ + 30,000x₃ + 15,000x₄ ≤ 800,000
The constraint on the number of women customers reached by the advertizing campaign becomes
300,000x₁ + 400,000x₂ + 200,000x₃ + 100,000x₄ ≥ 2,000,000
The constraint on the television advertizing budget is
40,000x₁ + 75,000x₂ ≤ 500,000
and the minimum numbers of television units required give
x₁ ≥ 3, x₂ ≥ 2


Since the advertising units on radio and magazines should each be between 5 and 10, we get the
following constraints:
5 ≤ x₃ ≤ 10
5 ≤ x₄ ≤ 10
Thus the complete linear programming problem is given below:
Maximize Z = 400x₁ + 900x₂ + 500x₃ + 200x₄   (customers reached, in thousands)
Subject to
40,000x₁ + 75,000x₂ + 30,000x₃ + 15,000x₄ ≤ 800,000
300,000x₁ + 400,000x₂ + 200,000x₃ + 100,000x₄ ≥ 2,000,000
40,000x₁ + 75,000x₂ ≤ 500,000
x₁ ≥ 3, x₂ ≥ 2, 5 ≤ x₃ ≤ 10, 5 ≤ x₄ ≤ 10
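The model can be checked with a solver (a sketch assuming SciPy is available; the reach coefficients are stated in thousands, and the minimum unit counts x₁ ≥ 3, x₂ ≥ 2 and the 5–10 unit ranges are passed as variable bounds):

```python
from scipy.optimize import linprog

reach = [400, 900, 500, 200]                   # customers per unit, in thousands
c = [-r for r in reach]                        # linprog minimizes, so negate
A_ub = [[40000, 75000, 30000, 15000],          # total advertizing budget
        [-300000, -400000, -200000, -100000],  # women reached >= 2,000,000 (flipped)
        [40000, 75000, 0, 0]]                  # television budget limit
b_ub = [800000, -2000000, 500000]
bounds = [(3, None), (2, None), (5, 10), (5, 10)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
total_reach = -res.fun                         # thousands of customers
```

The budget is the binding constraint: radio and magazines are bought up to their limits, and the remaining budget goes to prime-time television.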


Example 4:
A firm manufactures 3 products A, B and C. The profits per unit of the products are Br. 3, Br. 2 and
Br. 4, respectively. The firm has two machines; given below is the required processing time in
minutes on each machine for each product.
Machines Products
A B C
M1 4 3 5
M2 2 2 4

Machines M1 and M2 have 2000 and 2500 machine-minutes available, respectively. The firm must
manufacture at least 100 A's, 200 B's and 50 C's, but not more than 150 A's. Formulate the
mathematical model.
Example 5:
A farmer has 1000 acres of land on which he can grow corn, wheat or soya beans. Each acre of
corn costs Br. 100 for preparation, requires 7 man-days of work and yields a profit of Br. 30.
Each acre of wheat costs Br. 120 to prepare, requires 10 man-days of work and yields a profit of
Br. 40. Each acre of soya beans costs Br. 70 to prepare, requires 8 man-days of work and yields a
profit of Br. 20. If the farmer has Br. 100,000 for preparation and can count on 80,000 man-days of
work, formulate the mathematical model.
Example 6:
A truck company requires the following number of drivers for its trucks during 24 hours.
Time Number of drivers required
00-04hr 5
04-08hr 10
08-12 hr 20
12-16 hr 12
16-20 hr 22
20-24 hr 8

According to the shift schedule a driver may join for duty at midnight, 04,08,12,16,20 hours and
work continuously for 8 hrs. Formulate the problem as LP model for optimal shift plan.
Solution: Let x₁, x₂, x₃, x₄, x₅, x₆ denote the number of drivers joining duty at 00, 04, 08, 12, 16, 20
hours, respectively. The objective is to minimize the number of drivers required.
Therefore the objective function is
Minimize Z = x₁ + x₂ + x₃ + x₄ + x₅ + x₆
Drivers who join duty at 00 hrs. and 04 hrs. should be available between 04 and 08 hrs. As the
number of drivers required during this interval is 10, we have the constraint:
x₁ + x₂ ≥ 10
Likewise, for the remaining time intervals we obtain
x₂ + x₃ ≥ 20
x₃ + x₄ ≥ 12
x₄ + x₅ ≥ 22
x₅ + x₆ ≥ 8
x₆ + x₁ ≥ 5
xᵢ ≥ 0, i = 1, …, 6
Thus the required linear model is
Minimize Z = x₁ + x₂ + x₃ + x₄ + x₅ + x₆
Subject to
x₁ + x₂ ≥ 10
x₂ + x₃ ≥ 20
x₃ + x₄ ≥ 12
x₄ + x₅ ≥ 22
x₅ + x₆ ≥ 8
x₆ + x₁ ≥ 5
xᵢ ≥ 0, i = 1, …, 6
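This shift-covering model can be handed to a solver directly (a sketch assuming SciPy is available; each ≥ row is multiplied by −1 to fit linprog's A_ub @ x ≤ b_ub form):

```python
from scipy.optimize import linprog

# x1..x6 = drivers joining at 00, 04, 08, 12, 16, 20 hours; each works 8 hours.
A_ub = [[-1, -1, 0, 0, 0, 0],   # 04-08 hr: x1 + x2 >= 10
        [0, -1, -1, 0, 0, 0],   # 08-12 hr: x2 + x3 >= 20
        [0, 0, -1, -1, 0, 0],   # 12-16 hr: x3 + x4 >= 12
        [0, 0, 0, -1, -1, 0],   # 16-20 hr: x4 + x5 >= 22
        [0, 0, 0, 0, -1, -1],   # 20-24 hr: x5 + x6 >= 8
        [-1, 0, 0, 0, 0, -1]]   # 00-04 hr: x6 + x1 >= 5
b_ub = [-10, -20, -12, -22, -8, -5]

res = linprog([1] * 6, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
total_drivers = res.fun
```

The optimal shift plan uses 47 drivers in total (one optimal plan is x = (2, 8, 12, 0, 22, 3)).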
Example 7:
Dilla University referral hospital has the following minimal daily requirements of nurses.
period Clock time(24hrs.day) Minimal number of nurses required
1 6A.M to 10 A.M 2
2 10 A.M to 2 P.M 7
3 2 P.M to 6P.M 15
4 6 PM to 10 PM 8
5 10 P.M to 2 A.M 20
6 2 A.M to 6 A.M 6

Nurses report to the hospital at the beginning of each period and work for 8 consecutive hours.
The hospital wants to determine the minimum number of nurses to be employed so that there is
sufficient number of nurses available for each period. Formulate this as linear programming
problem.

Ans.
Let xᵢ be the number of nurses reporting at the beginning of period i (i = 1, …, 6); each works the
two consecutive periods i and i + 1.
Minimize Z = x₁ + x₂ + x₃ + x₄ + x₅ + x₆
Subject to
x₆ + x₁ ≥ 2
x₁ + x₂ ≥ 7
x₂ + x₃ ≥ 15
x₃ + x₄ ≥ 8
x₄ + x₅ ≥ 20
x₅ + x₆ ≥ 6
xᵢ ≥ 0, i = 1, …, 6
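The nurse model has the same cyclic structure as the driver model, so its constraint matrix can be generated in a loop (a sketch assuming SciPy and NumPy are available):

```python
import numpy as np
from scipy.optimize import linprog

req = [2, 7, 15, 8, 20, 6]       # minimal nurses required in periods 1..6
n = 6
A_ub = np.zeros((n, n))
for j in range(n):
    A_ub[j, j] = -1              # nurses starting in period j+1 cover it, ...
    A_ub[j, (j - 1) % n] = -1    # ... and so do those who started one period earlier
res = linprog(np.ones(n), A_ub=A_ub, b_ub=-np.array(req), bounds=[(0, None)] * n)
```

The minimum number of nurses is 37.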
Example 8: Warehouse problem
A person running a warehouse purchases and sells identical items. The warehouse can
accommodate 1,000 such items. Each month, the person can sell any quantity he has in stock.
Each month, he can buy as much as he likes to have in stock for delivery at the end of the month,
subject to a maximum of 1,000 items. The forecast of purchase and sale prices for the next six
months is given below:
Month i 1 2 3 4 5 6
Purchase prices, Ci 12 14 17 19 20 21
Sale price, Si 13 15 16 20 21 23

If at present he has a stock of 200 items, what should be his policy?


Formulation of LP model
Key decision is to determine the number of items to be purchased and sold for each of the six
months.
Let xᵢ and yᵢ (i = 1, …, 6) be the number of items purchased and sold, respectively, in month i,
where xᵢ, yᵢ ≥ 0.
The objective is to maximize the profit, i.e.,
Maximize Z = (13y₁ + 15y₂ + 16y₃ + 20y₄ + 21y₅ + 23y₆) − (12x₁ + 14x₂ + 17x₃ + 19x₄ + 20x₅ + 21x₆)
The person cannot sell anything that is not in stock.
Therefore, for each month n = 1, …, 6,
200 + (x₁ + ⋯ + xₙ₋₁) − (y₁ + ⋯ + yₙ₋₁) ≥ yₙ
which is equivalent to
(y₁ + ⋯ + yₙ) − (x₁ + ⋯ + xₙ₋₁) ≤ 200
Further, since he cannot overstock beyond 1,000 items,
200 + (x₁ + ⋯ + xₙ) − (y₁ + ⋯ + yₙ) ≤ 1000
which is equivalent to
(x₁ + ⋯ + xₙ) − (y₁ + ⋯ + yₙ) ≤ 800

Thus, the required LP model is
Maximize Z = (13y₁ + 15y₂ + 16y₃ + 20y₄ + 21y₅ + 23y₆) − (12x₁ + 14x₂ + 17x₃ + 19x₄ + 20x₅ + 21x₆)
Subject to
(y₁ + ⋯ + yₙ) − (x₁ + ⋯ + xₙ₋₁) ≤ 200,  n = 1, …, 6
(x₁ + ⋯ + xₙ) − (y₁ + ⋯ + yₙ) ≤ 800,  n = 1, …, 6
xᵢ, yᵢ ≥ 0, i = 1, …, 6
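Because the constraints are cumulative sums, the model is easy to generate programmatically (a sketch assuming SciPy and NumPy are available):

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([12, 14, 17, 19, 20, 21])   # purchase prices C_i
sale = np.array([13, 15, 16, 20, 21, 23])   # sale prices S_i
n = 6
c = np.concatenate([cost, -sale])           # minimize (purchases - sales) = -profit
A, b = [], []
for m in range(1, n + 1):                   # the two constraints for month m
    stock = np.zeros(2 * n)
    stock[n:n + m] = 1                      # + (y_1 + ... + y_m)
    stock[:m - 1] = -1                      # - (x_1 + ... + x_{m-1})
    A.append(stock); b.append(200)          # cannot sell what is not in stock
    cap = np.zeros(2 * n)
    cap[:m] = 1                             # + (x_1 + ... + x_m)
    cap[n:n + m] = -1                       # - (y_1 + ... + y_m)
    A.append(cap); b.append(800)            # warehouse capacity: 200 + 800 = 1000
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0, None)] * (2 * n))
profit = -res.fun
```

The optimal policy earns a profit of 16,600: sell the initial 200 items in month 1, then cycle the full 1,000-item capacity through the profitable buy/sell months (buy in months 1, 2, 4, 5; sell in months 2, 4, 5, 6).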
Example 9:
Evening shift resident doctors in a government hospital work five consecutive days and then
have two days off. Their five days of work can start on any day of the week and the schedule
rotates indefinitely. The hospital requires the following minimum number of doctors.
Sunday Monday Tuesday Wednesday Thursday Friday Saturday
35 55 60 50 60 50 45

No more than 40 doctors can start their five working day’s scheduled on the same day.
Formulate the LP model to minimize the number of doctors to be employed by the hospital.
Solution:
Let x₁, x₂, x₃, x₄, x₅, x₆, x₇ denote the number of doctors joining duty on Sunday, Monday,
Tuesday, Wednesday, Thursday, Friday and Saturday, respectively.
The objective is to minimize the number of doctors to be hired by the hospital.
The objective function is Minimize Z = x₁ + x₂ + x₃ + x₄ + x₅ + x₆ + x₇
x₁ doctors join duty on Sunday; doctors who joined duty on the previous Monday and Tuesday
have their two days off on Sunday.
Therefore, for Sunday we have x₁ + x₄ + x₅ + x₆ + x₇ ≥ 35
For Monday: x₁ + x₂ + x₅ + x₆ + x₇ ≥ 55
For Tuesday: x₁ + x₂ + x₃ + x₆ + x₇ ≥ 60
For Wednesday: x₁ + x₂ + x₃ + x₄ + x₇ ≥ 50
For Thursday: x₁ + x₂ + x₃ + x₄ + x₅ ≥ 60
For Friday: x₂ + x₃ + x₄ + x₅ + x₆ ≥ 50
For Saturday: x₃ + x₄ + x₅ + x₆ + x₇ ≥ 45
0 ≤ xᵢ ≤ 40; i = 1, …, 7
Thus the required LP model is:
Minimize Z = x₁ + x₂ + x₃ + x₄ + x₅ + x₆ + x₇
Subject to
x₁ + x₄ + x₅ + x₆ + x₇ ≥ 35
x₁ + x₂ + x₅ + x₆ + x₇ ≥ 55
x₁ + x₂ + x₃ + x₆ + x₇ ≥ 60
x₁ + x₂ + x₃ + x₄ + x₇ ≥ 50
x₁ + x₂ + x₃ + x₄ + x₅ ≥ 60
x₂ + x₃ + x₄ + x₅ + x₆ ≥ 50
x₃ + x₄ + x₅ + x₆ + x₇ ≥ 45
0 ≤ xᵢ ≤ 40; i = 1, …, 7
Example 10:
A local travel agent is planning a charter trip to a major sea resort. The eight-day/ seven night
packages include the fare for round trip travel, surface transportation, board and lodging and
selected tour options. The charter trip is restricted to 200 people and experience indicates that
there will not be any problem for getting 200 persons. The problem for the travel agent is to
determine the number of Deluxe, Standard and Economy tour packages to offer for this charter.
These three plans differ according to seating and services for the flight, quality of
accommodation, meal plans and tour options. The following table summarizes the estimated
prices for the three packages and the corresponding expenses for the travel agent. The travel
agent has hired an aircraft for the flat fee of Br. 200,000 for the entire trip.
Price and cost for tour packages per person


Tour Plan Prices Hotel Meal and other expenses
Deluxe 10,000 3000 4750
Standard 7000 2200 2500
Economy 6500 1900 2200
In planning the trip the following considerations must be taken in to account.
i.At least 10 percent of the packages must be Deluxe type
ii.At least 35 percent but not more than 70 percent must be of the standard type
iii.At least 30 percent must be of the economy type
iv.The maximum number of deluxe packages available in any aircraft is restricted to 60.
v.The hotel desires that at least 120 of the tourists should be on the deluxe and standard
packages together.
The travel agent wishes to determine the number of packages to offer in each type so as to
maximize the total profit.
Formulate this as an LP model.
Solution:
Let x₁, x₂ and x₃ represent the number of Deluxe, Standard and Economy tour packages, respectively.
Then the problem can be expressed as
Maximize Z = (10,000 − 3000 − 4750)x₁ + (7000 − 2200 − 2500)x₂ + (6500 − 1900 − 2200)x₃ − 200,000
           = 2250x₁ + 2300x₂ + 2400x₃ − 200,000
Subject to the following constraints:
x₁ ≥ 0.10(x₁ + x₂ + x₃), i.e. 9x₁ − x₂ − x₃ ≥ 0
x₂ ≥ 0.35(x₁ + x₂ + x₃), i.e. −7x₁ + 13x₂ − 7x₃ ≥ 0
x₂ ≤ 0.70(x₁ + x₂ + x₃), i.e. −7x₁ + 3x₂ − 7x₃ ≤ 0
x₃ ≥ 0.30(x₁ + x₂ + x₃), i.e. −3x₁ − 3x₂ + 7x₃ ≥ 0
x₁ ≤ 60
x₁ + x₂ ≥ 120
x₁ + x₂ + x₃ = 200
x₁, x₂, x₃ ≥ 0
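A numerical check of this model (a sketch assuming SciPy is available; the ≥ rows are negated for linprog, and the flat aircraft fee is subtracted from the optimal value at the end):

```python
from scipy.optimize import linprog

# Net profit per package: price - hotel - meal expenses = 2250, 2300, 2400.
c = [-2250, -2300, -2400]
A_ub = [[-9, 1, 1],       # 9x1 - x2 - x3 >= 0     (at least 10% Deluxe), flipped
        [7, -13, 7],      # -7x1 + 13x2 - 7x3 >= 0 (at least 35% Standard), flipped
        [-7, 3, -7],      # -7x1 + 3x2 - 7x3 <= 0  (at most 70% Standard)
        [3, 3, -7],       # -3x1 - 3x2 + 7x3 >= 0  (at least 30% Economy), flipped
        [1, 0, 0],        # x1 <= 60
        [-1, -1, 0]]      # x1 + x2 >= 120, flipped
b_ub = [0, 0, 0, 0, 60, -120]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=[[1, 1, 1]], b_eq=[200],
              bounds=[(0, None)] * 3)
total_profit = -res.fun - 200000
```

The optimum offers 20 Deluxe, 100 Standard and 80 Economy packages for a total profit of Br. 267,000.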
Example 11:
A manufacturing company has plants in cities A, B, and C. The company produces and
distributes its product to dealers in various cities. On a particular day, the company has 30 units
of its product in A, 40 in B, and 30 in C. The company plans to ship 20 units to D, 20 to E, 25 to
F, and 35 to G, following orders received from dealers. The transportation costs per unit of each
product between the cities are given in below:

In the table, the quantities supplied and demanded appear at the right and along the bottom of the
table. The quantities to be transported from the plants to different destinations are represented by
the decision variables. This problem can be stated in the form:

General linear programming problem:


 Find the set of values x₁, x₂, x₃, …, xₙ which will optimize the objective function.
f(x₁, x₂, x₃, …, xₙ) = c₁x₁ + c₂x₂ + ⋯ + cₙxₙ → max (min)
s.t  a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ (≤ / = / ≥) b₁
.
.
.
aₘ₁x₁ + aₘ₂x₂ + ⋯ + aₘₙxₙ (≤ / = / ≥) bₘ,
xⱼ ≥ 0, j = 1, …, n.
Remark: For x = (x₁, x₂, x₃, …, xₙ) and y = (y₁, y₂, y₃, …, yₙ),
⟨x, y⟩ = x₁y₁ + x₂y₂ + ⋯ + xₙyₙ = ∑ᵢ xᵢyᵢ (inner product of x and y).

Now we can write the above optimization problem as follows:


f(x) = ⟨c, x⟩ → max (min)
Subject to Ax (≤ / = / ≥) b,
where c = (c₁, …, cₙ), x = (x₁, …, xₙ)ᵀ, b = (b₁, …, bₘ)ᵀ and

      a₁₁ ⋯ a₁ₙ
A = (  ⋮  ⋱  ⋮  ).
      aₘ₁ ⋯ aₘₙ

Thus, we have the following:
f(x) = ⟨c, x⟩ → max (min)
Subject to Ax ≤ b, xⱼ ≥ 0, j = 1, …, n,
where c ∈ ℝⁿ, b ∈ ℝᵐ, x ∈ ℝⁿ, x ≥ 0, and A = (aᵢⱼ). This is called the Canonical Standard Linear
Programming Problem (CSLPP).
Problem Manipulation
Note: In the above CSLPP,
i) if xⱼ ≤ 0, substitute xⱼ′ = −xⱼ ≥ 0;
ii) if xⱼ ∈ ℝ (unrestricted in sign), write xⱼ = xⱼ′ − xⱼ″, where xⱼ′, xⱼ″ ≥ 0;
iii) the variable x is called the decision variable;
iv) b is the requirement vector and c the cost (or price) vector.
Example: Find the canonical standard form of the following linear programming problem
f(x₁, x₂, x₃) = 12x₁ + 8x₂ + 5x₃ → max
s.t  4x₁ + 2x₂ + x₃ ≤ 80
2x₁ + 3x₂ + 2x₃ ≤ 100
5x₁ + x₂ − x₃ ≤ 75, where x₁ ≥ 0, x₂ ≤ 0, x₃ ∈ ℝ.
Solution: Let x₂′ = −x₂ ≥ 0 and x₃ = x₃′ − x₃″ with x₃′, x₃″ ≥ 0. Then we have the following LP:
f(x₁, x₂′, x₃′, x₃″) = 12x₁ − 8x₂′ + 5x₃′ − 5x₃″ → max
s.t  4x₁ − 2x₂′ + x₃′ − x₃″ ≤ 80
2x₁ − 3x₂′ + 2x₃′ − 2x₃″ ≤ 100
5x₁ − x₂′ − x₃′ + x₃″ ≤ 75
Equivalently we can write as follows
f(x) = ⟨c, x⟩ → max
s.t  Ax ≤ b, where
c = (12, −8, 5, −5),  x = (x₁, x₂′, x₃′, x₃″)ᵀ,  b = (80, 100, 75)ᵀ and

      4  −2   1  −1
A = ( 2  −3   2  −2 ).
      5  −1  −1   1

Since we have a general theory of solving system of linear equations, it will be advantageous to
change all inequality constraints in to equality constraints.
 When a constraint is connected by "≤", a variable is added to the left-hand side to
convert it into an equality constraint; these variables are called Slack variables.
4x₁ + 2x₂ + x₃ ≤ 80 ⟺ 4x₁ + 2x₂ + x₃ + x₄ = 80.
 When a constraint is connected by "≥", a variable is subtracted from the left-hand
side to convert it into an equality constraint; these variables are called Surplus
variables. 4x₁ + 2x₂ + x₃ ≥ 8 ⟺ 4x₁ + 2x₂ + x₃ − x₄ = 8.
f(x) = ⟨c, x⟩ → max (min)
Subject to Ax = b, x ≥ 0, is called the standard form of a LPP.
Example: Find the standard form of the following problem
f(x₁, x₂, x₃) = 2x₁ + 3x₂ − 4x₃ → max
s.t  4x₁ + 2x₂ − x₃ ≤ 4
−3x₁ + 2x₂ + 3x₃ ≥ 6
x₁ + x₂ − 3x₃ ≤ 8, x₁, x₂, x₃ ≥ 0.
Solution: Add slack variables x₄ and x₆ to the first and third constraints and subtract a surplus
variable x₅ in the second constraint. The constraints become
4x₁ + 2x₂ − x₃ + x₄ = 4,
−3x₁ + 2x₂ + 3x₃ − x₅ = 6
x₁ + x₂ − 3x₃ + x₆ = 8
xᵢ ≥ 0; i = 1, 2, …, 6
Thus in matrix representation, we can write the above standard LPP as:
f(x) = ⟨c, x⟩ → max
s.t  Ax = b, x ≥ 0,
where f(x) = ⟨c, x⟩,
c = (2, 3, −4, 0, 0, 0),
x = (x₁, x₂, x₃, x₄, x₅, x₆)ᵀ,

      4  2  −1  1   0  0         4
A = (−3  2   3  0  −1  0),  b = (6).
      1  1  −3  0   0  1         8
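The mechanical part of this conversion — appending one slack or surplus column per constraint — can be written once and reused (a sketch using NumPy; the function name is ours, not standard):

```python
import numpy as np

def to_standard_form(c, A, b, senses):
    """Append a slack (+1) column for each '<=' row and a surplus (-1)
    column for each '>=' row, producing  A_std @ x = b  with  x >= 0."""
    A = np.asarray(A, dtype=float)
    m, _ = A.shape
    extra = np.zeros((m, m))
    for i, s in enumerate(senses):
        extra[i, i] = 1.0 if s == "<=" else -1.0
    c_std = np.concatenate([np.asarray(c, dtype=float), np.zeros(m)])
    return c_std, np.hstack([A, extra]), np.asarray(b, dtype=float)

# The example above: two "<=" rows and one ">=" row.
c_std, A_std, b_std = to_standard_form(
    [2, 3, -4],
    [[4, 2, -1], [-3, 2, 3], [1, 1, -3]],
    [4, 6, 8],
    ["<=", ">=", "<="])
```

The returned matrix reproduces A above, with the surplus column of the second row carrying the coefficient −1.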
Minimization and maximization problems
Another problem manipulation is to convert a maximization problem into a minimization
problem and conversely. Note that over any region
Max f(x) = −Min(−f(x)).
So a maximization (minimization) problem can be converted into a minimization (maximization)
problem by multiplying the coefficients of the objective function by −1. After the optimization
of the new problem is completed, the objective of the old problem is −1 times the optimal
objective of the new problem.
Example: Express the following minimization problem as a standard maximization problem
f(x₁, x₂, x₃) = 4x₁ − x₂ + 2x₃ → min
s.t  4x₁ + x₂ − x₃ ≤ 7
2x₁ − 3x₂ + x₃ ≤ 12
4x₁ + 7x₂ − x₃ ≥ 0, xᵢ ≥ 0, i = 1, 2, 3.
Solution: Since Max f(x) = −Min(−f(x)), we have the following:
g(x₁, x₂, x₃) = −f(x₁, x₂, x₃) = −4x₁ + x₂ − 2x₃ → max
s.t  4x₁ + x₂ − x₃ + x₄ = 7
2x₁ − 3x₂ + x₃ + x₅ = 12
4x₁ + 7x₂ − x₃ − x₆ = 0, xᵢ ≥ 0, i = 1, 2, …, 6,
which is the corresponding maximization problem in standard form.
c = (−4, 1, −2, 0, 0, 0),  x = (x₁, x₂, x₃, x₄, x₅, x₆)ᵀ,  b = (7, 12, 0)ᵀ,

      4   1  −1  1  0   0
A = ( 2  −3   1  0  1   0 ).
      4   7  −1  0  0  −1

g(x) = ⟨c, x⟩ → max
s.t  Ax = b, x ≥ 0.
S = {x ∈ ℝ⁶ : Ax = b, x ≥ 0}.
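The identity Max f = −Min(−f) can be confirmed on this example (a sketch assuming SciPy is available; maximizing g = −f is done by minimizing −g = f, so a single solve yields both optimal values):

```python
from scipy.optimize import linprog

f = [4, -1, 2]                # f(x) = 4x1 - x2 + 2x3
A_ub = [[4, 1, -1],           # 4x1 +  x2 - x3 <= 7
        [2, -3, 1],           # 2x1 - 3x2 + x3 <= 12
        [-4, -7, 1]]          # 4x1 + 7x2 - x3 >= 0, flipped for linprog
b_ub = [7, 12, 0]

res = linprog(f, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
min_f = res.fun               # Min f
max_g = -min_f                # Max g = Max(-f) = -Min f
```

Both forms agree: Min f = −7 (attained at x = (0, 7, 0)) and Max g = 7.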
Standard and Canonical Forms

Summary
Optimization is a fundamental tool for understanding nature, science, engineering, economics,
and mathematics. Physical and chemical systems tend to a state that minimizes some measure of
their energy. People try to operate man-made systems (for example, a chemical plant, a cancer
treatment device, an investment portfolio, or a nation’s economy) to optimize their performance
in some sense.
In formulation of any LP model it is quite necessary to specify:
a. Decision Variables: The problem should have a number of decision variables for
which the decisions are to be taken. These are x₁, x₂, …, xₙ.
b. Well-defined objective function: The linear function z given by the equation
z = c₁x₁ + c₂x₂ + ⋯ + cₙxₙ is called the objective function that is to be
optimized. It is necessary that this function be well defined.
c. Constraints: These are conditions of the problem exposed as simultaneous linear
equations or inequalities
d. Linearity: The objective function and constraint functions must be linear.
e. Non negativity of decision variables: The decision variables must be non
negative.
f. Deterministic: All the coefficients of z, i.e., the cⱼ's, and of the constraints are known with
certainty.
g. Feasible Solution: Any set of values of the variables, say
x = (x₁, x₂, …, xₙ, xₙ₊₁, …, xₙ₊ₘ), which satisfies all the constraints and the
non-negativity restrictions of the problem is called a Feasible Solution of the LPP.
h. Basic Solution: a solution obtained by setting all but m of the variables equal to
zero and solving the m constraint equations for the remaining variables, provided
the corresponding columns of A are linearly independent.
The three basic steps in constructing a linear programming model are:
1. Identify the unknown variables to be determined (decision variables) and represent them
in terms of algebraic symbols.
2. Identify all the restrictions or constraints in the problem and express them in linear
equations or linear inequalities which are linear functions of the unknown symbols.
3. Identify the objective or criterion function and represent it as a linear function of the
decision variables, which is to be maximized or minimized.
Review exercise
1. Solve the following problem:

2. Solve the following linear programming problems. If you wish, you may check your
arithmetic by using the simple online pivot tool:
a)

b)
CHAPTER THREE
3.Geometric methods
Introduction

When the number of variables in a linear programming problem is three or less, we can graph the
set of feasible solutions together with the level sets of the objective function; from the picture it is
usually a trivial matter to write down the optimal solution. But if the number of variables and
constraints is large, it is difficult to apply the graphical method to solve the linear programming
problem.

Each constraint (including the non-negativity constraints on the variables) determines a half
plane. These half planes can be found by first graphing the equation obtained by replacing the
inequality with an equality, and then asking whether or not some specific point off that line
(often (0, 0) can be used) satisfies the inequality constraint. The set of feasible solutions is just
the intersection of these half planes. Together with this set one usually draws two or more level
sets of the objective function; each level set consists of the points at which the objective function
takes one particular value. As the objective function value increases, the corresponding level set
moves parallel to itself, and the optimal value corresponds to the last level set that still touches
the set of feasible solutions.

General objectives
At the end of this unit the learner will be able to:

▪Understand how to apply the geometrical method for linear programming problems

▪Identify the fundamental theorem of linear programming problem

▪Define the basic concept of convex sets

▪Differentiate the polyhedral sets

▪Explain the properties of extreme points

▪Understand the corner point theorem

3.1 Graphical Solution Methods


Remark: There are two types of polyhedral sets:
1. Bounded polyhedral set: strictly bounded, with a finite number of extreme points.
2. Unbounded polyhedral set: has a finite number of extreme points but is unbounded.
Theorem: If the convex set of feasible solutions of a LPP is bounded, then any objective function
attains both its maximum and its minimum at some extreme points of the polyhedral set.
Proof: Exercise
Note:
1. When the feasible region of a LPP is strictly bounded, i.e., bounded polyhedral set, any
objective function has both finite maximum and finite minimum which occurs at the
extreme points.
2. If the feasible region of a LPP is bounded from below only, then an objective function
may have either maximum or minimum but not both.
3. A particular objective function may not have any optimal value, maximum or minimum,
where the feasible region is unbounded.
Definition:
Multiple optimal solutions: If a LPP has more than one optimal solution, then the problem is
said to have multiple optimal solution.
Theorem: If a LPP has at least two optimal feasible solution, then there are infinite number of
optimal solutions, which are the convex combination of the initial optimal solutions.
Proof:
Let x₁, x₂ (x₁ ≠ x₂) be two optimal feasible solutions of a LPP which maximize the
objective function z = cx subject to Ax = b; x ≥ 0.
Then ẑ = cx₁ = cx₂, where ẑ denotes the optimal value of z.
Let x̂ = λx₁ + (1 − λ)x₂, 0 ≤ λ ≤ 1. Then
Ax̂ = λAx₁ + (1 − λ)Ax₂ = λb + (1 − λ)b = b, and x̂ ≥ 0,
so x̂ is a feasible solution. Moreover,
cx̂ = λcx₁ + (1 − λ)cx₂ = λẑ + (1 − λ)ẑ = ẑ.
Therefore x̂ is also an optimal feasible solution of the LPP; it is a convex combination of the two
distinct points x₁, x₂, and since λ ∈ [0, 1] is arbitrary, there are infinitely many optimal solutions.
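A small numerical illustration of this theorem (the LP here is our own example, chosen so that the objective is parallel to a binding constraint and therefore has two optimal vertices):

```python
import numpy as np

# max z = x1 + x2  s.t.  x1 + x2 <= 4,  0 <= x1, x2 <= 3.
# The objective line is parallel to the binding edge x1 + x2 = 4, so the
# vertices (1, 3) and (3, 1) are both optimal with z = 4.
v1, v2 = np.array([1.0, 3.0]), np.array([3.0, 1.0])
c = np.array([1.0, 1.0])

values = []
for lam in np.linspace(0.0, 1.0, 11):
    x = lam * v1 + (1 - lam) * v2                  # convex combination
    assert x.sum() <= 4 + 1e-9 and (x >= 0).all() and (x <= 3 + 1e-9).all()
    values.append(float(c @ x))
```

Every convex combination of the two optimal vertices remains feasible and achieves the same optimal value z = 4.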
Fundamental theorem of LPP (Geometric approach)
Theorem: If a LPP admits an optimal solution, then the objective function assumes the optimum
value at an extreme points of the convex set generated by the set of all feasible solutions.
Proof: exercise
Some remarks on the solution of LPP
A linear programming problem may have:


a) A definite and unique solution
b) An infinite number of optimal solutions: a LPP may have the same optimal value of the
objective function at more than one extreme point. For this two conditions must be satisfied
 The objective function, when plotted, should be parallel to an edge of the feasible region, and
 the constraints should form a boundary of the feasible region in the direction of optimal
movement of the objective function line, i.e., the constraints should be active constraints.
c) An unbounded solution: it exists, when a LPP has no limit on the constraints. i.e. the feasible
region is not bounded in any respect
d) No feasible solution (infeasible solution): infeasible is a condition that exists, when there is
no solution to an LPP that satisfies all the constraints and non negativity restriction. It means
that the constraints in the problem are in conflict and inconsistent. Infeasibility depends
solely on the constraints and has nothing to the objective function.
e) A single solution
Example 1: Draw the feasible space, if any, given by the following LPP and find out the extreme
points of the feasible region.
a) z = 2x₁ + x₂ → max
s.t  x₁ + x₂ ≤ 2
−x₁ + x₂ ≤ 1
x₁ ≤ 2; x₁, x₂ ≥ 0.
b) z = 2x₁ + 5x₂ → max
s.t  5x₁ + 6x₂ ≥ 30
3x₁ + 2x₂ ≤ 21
x₁ + x₂ ≤ 12; x₁, x₂ ≥ 0.

Solution a:

The convex set of the feasible region is given by ABCD. This region is also known as an admissible
region. It is a strictly bounded region. The four extreme points are A(0, 0), B(2, 0), C(1/2, 3/2) and
D(0, 1). The region is a convex polyhedron with a finite number of extreme points.


The value of the objective function Z is
z(A) = 0,
z(B) = 4,
z(C) = 5/2,
z(D) = 1.
Hence the maximum value of z is 4, which occurs at B(2, 0).
b).

Example 2: Solve the given LPP graphically


z = 2x₁ + 3x₂ → min
s.t  2x₁ + 7x₂ ≥ 22
x₁ + x₂ ≥ 6
5x₁ + x₂ ≥ 10; x₁, x₂ ≥ 0
Solution:

The four extreme points are A(11, 0), B(4, 2), C(1, 5) and D(0, 10). Then
z(A) = 22,
z(B) = 14,
z(C) = 17,
z(D) = 30.
Hence the minimum, z = 14, occurs at B(4, 2).


To verify that the minimum is finite, take any two points (4.7, 1.8) and (3.9, 2.1) very close to the
extreme point B on the line segments BA and BC respectively; the values of the objective function
there are 14.8 and 14.1, both of which are greater than the minimum value 14.
Remark: a) If z = 2x₁ + 3x₂ → max, then no finite value of z will be obtained. Here
max(z(A), z(B), z(C), z(D)) = 30 at D(0, 10), but for points on the unbounded edge through D, e.g.
at (0, 11), the value of the function is 33, which is greater than 30, and in the same way it can be
established that the objective function has no finite maximum. Hence the problem has an
unbounded solution.
b) Any objective function z = c₁x₁ + c₂x₂ in which c₁ and c₂ are of opposite signs has no finite
maximum or minimum subject to the constraints given in the above problem.
Example 3: Find the solution of the following LPPs graphically.
a) = + c) = +
. + ≤ . . − ≤ .
+ ≤ . + ≥ .
, ≥ . , ≥ .
b) =− + d) = +
. − ≤− . . + ≥ .
− . + ≤ . − − ≤− .
, ≥ . , ≥ .
Solution: a) The feasible region satisfying the constraints and the non-negativity restrictions is
shown below.

[Figure: feasible region bounded by the lines 2x + y = 800 and x + 2y = 1000, with corner points
O(0, 0), A(400, 0), B(200, 400) and C(0, 500)]

The optimal values of the objective function can be tabulated as follows


Corner point    Objective function value Z = 30x + 20y
O(0, 0)         0
A(400, 0)       12,000
B(200, 400)     14,000
C(0, 500)       10,000
From this we find that the maximum value of Z occurs at the vertex B(200, 400).
b) min = 4 +
. 3 + 4 ≥ 20.
− − 15 ≤ −15. , ≥ 0.
The feasible region is given below

3.2 Convex set


Basic concept of convex sets
Definition: An n-vector is a row or column array of n numbers. For example, x = (1, 2, 3, 4) is a
row vector of size 4 and y = (3, 4)ᵀ is a column vector of size two.
 Zero vector: the zero vector, denoted by 0, is a vector with all components equal to zero.
 Unit vector: This is a vector with zero components, except for a 1 in the i-th position.
This vector is denoted by eᵢ and is sometimes called a coordinate vector:
 eᵢ = (0, 0, …, 1, …, 0, 0)
 Addition of two vectors: two vectors of the same size can be added, where addition is
performed component-wise. To illustrate, let a = (a₁, …, aₙ) and b = (b₁, …, bₙ); then the
sum a + b is given by
a + b = (a₁ + b₁, …, aₙ + bₙ)
 A vector b ∈ ℝⁿ is said to be a linear combination of a₁, …, aₖ in ℝⁿ if b = ∑ᵢ λᵢaᵢ,
where the λᵢ are real numbers.
 A collection of vectors a₁, …, aₖ of dimension n is called linearly independent if
∑ᵢ λᵢaᵢ = 0 implies that λᵢ = 0 for each i = 1, 2, …, k.
 Suppose E = ℝⁿ and x, y ∈ ℝⁿ. The inner product of x and y is
⟨x, y⟩ = ∑ᵢ xᵢyᵢ, where x = (x₁, …, xₙ) and y = (y₁, …, yₙ).
‖x‖ = √⟨x, x⟩ = √(∑ᵢ xᵢ²)  (norm of x).
 For x, y ∈ ℝⁿ, d(x, y) = ‖x − y‖ = √⟨x − y, x − y⟩
is the distance between the two points x and y in ℝⁿ.
 Line: for x₁, x₂ ∈ ℝⁿ, the line joining the points x₁, x₂ (x₁ ≠ x₂) is the set of points X
given by X = {x : x = λx₁ + (1 − λ)x₂, λ ∈ ℝ}.
 Line segment: let x₁, x₂ ∈ ℝⁿ, x₁ ≠ x₂; then the line segment joining these two points
is the set X of points given by X = {x : x = λx₁ + (1 − λ)x₂, 0 ≤ λ ≤ 1}.
 Hyper plane: a hyper plane in ℝⁿ is a set X of points given by X = {x : cx = z}, where
c is a nonzero row vector c = (c₁, …, cₙ), z is a scalar and x = (x₁, …, xₙ)ᵀ is an
n-component column vector.
A hyper plane divides the space into three mutually exclusive disjoint sets given by:
X₁ = {x : cx > z},
X₂ = {x : cx = z},
X₃ = {x : cx < z}.
The sets X₁ and X₃ are called open half spaces.
Remark: The objective function = and the constraints with equality sign are all hyper plane
in LPP.
 Hyper sphere: A hyper sphere in ℝⁿ with center a = (a₁, a₂, …, aₙ) and radius r > 0 is
defined to be the set X of points given by
X = {x : |x − a| = r}, where x = (x₁, …, xₙ); the equation can be written as
(x₁ − a₁)² + ⋯ + (xₙ − aₙ)² = r².
 An ε-neighbourhood: An ε-neighbourhood about a point a is defined as the set X
of points lying inside the hyper sphere with center at a and radius ε > 0, i.e., the
ε-neighbourhood about the point a is the set of points X = {x : |x − a| < ε}.
 Interior point of a set: Let ∅ ≠ S ⊆ ℝⁿ. A point a is said to be an interior point of S if and
only if there exists at least one ε > 0 such that the ε-neighbourhood of a lies in S, i.e.,
∃ ε > 0 such that N_ε(a) ⊆ S.
A point a is said to be a boundary point of S iff for any ε > 0 the ε-neighbourhood of a
contains points in S and points which are not in S.
 Int(S) = the set of interior points of S = {x : x is an interior point of S}.
 Cl(S), the closure of S, is the set S together with all its boundary points,
Cl(S) = {x ∈ ℝⁿ : x is an interior point or a boundary point of S}.
 Open Set: A set s is said to be open if it contains only interior points.
 Closed Set: A set s is said to be closed if it contains all its boundary points.
 Bounded Set: A set S is said to be strictly bounded if there exists a positive number r such
that for any point x ∈ S, |x| ≤ r.
Example: 1. S₁ = {(x, y) : x² + y² < 1} is an open set.
2. S₂ = {(x, y) : x² + y² = 1} is a closed set.
Convex Set
Let E = ℝⁿ.
 [x, y] = {z ∈ ℝⁿ : z = λx + (1 − λ)y, λ ∈ [0, 1]} is called the closed segment in ℝⁿ.
 (x, y) = {z ∈ ℝⁿ : z = λx + (1 − λ)y, λ ∈ (0, 1)} is called the open segment in ℝⁿ.
Definition: A set K ⊆ ℝⁿ is said to be a convex set iff for any two points x, y in K and any
λ ∈ [0, 1], λx + (1 − λ)y ∈ K.
 If the line segment joining any two distinct points of a set is also in the set, then we call the
set a convex set.
Examples: The following are some examples of convex sets
 The empty set
 A set consisting of a single point
 A line or line segment
 A subspace
 A hyper plane
 Half space
 ℝ
 If the line segment joining some two distinct points of a set is not contained in the set, then
we call the set a non-convex set.

convex sets
nonconvex set

Examples:
1. Let u be a nonzero linear functional on ℝⁿ and α ∈ ℝ; then K = {x ∈ ℝⁿ : ux ≥ α} is a
convex set.
Solution: Let x, y ∈ K and λ ∈ [0, 1]. We want to show that λx + (1 − λ)y ∈ K.
By the definition of K we have ux ≥ α, uy ≥ α.
⇒ λux ≥ λα, (1 − λ)uy ≥ (1 − λ)α.
⇒ λux + (1 − λ)uy ≥ λα + (1 − λ)α.
⇒ u(λx) + u((1 − λ)y) ≥ α.
⇒ u(λx + (1 − λ)y) ≥ α.
⇒ λx + (1 − λ)y ∈ K.
Therefore K is a convex set.
2. K = {x ∈ ℝⁿ : ux ≤ α, α ∈ ℝ, u ≠ 0} is also a convex set.
3. Likewise, the hyper plane K = {x ∈ ℝⁿ : ux = α} is a convex set.

Exercise
Prove that the following sets are convex or non convex
a. K = {(x, y) : x − 2y = 2}.
b. K = {(x, y) : x − 2y ≤ 5}.
c. K = {x : |x| < 2}.
d. K = {(x, y) : x² + y² ≤ 4}.
e. K = {(x, y) : |x| ≤ 2, |y| ≤ 1}.
f. = {( , ): ≤ }.
g. K = {(x, y) : y² ≥ 4x}.
Solution: d. K = {(x, y) : x² + y² ≤ 4}
i. Geometrical approach: the set consists of the points of the circle of radius 2 together with its
boundary and all its interior points (a closed disk). Hence, the set is convex.
ii. Algebraic approach: Let (x₁, y₁) and (x₂, y₂) be any two points of the set K. Then
x₁² + y₁² ≤ 4 and x₂² + y₂² ≤ 4.
Now any convex combination of the two points is the point (λx₁ + (1 − λ)x₂, λy₁ + (1 − λ)y₂),
0 ≤ λ ≤ 1. Since
(λx₁ + (1 − λ)x₂)² + (λy₁ + (1 − λ)y₂)²
= λ²(x₁² + y₁²) + (1 − λ)²(x₂² + y₂²) + 2λ(1 − λ)(x₁x₂ + y₁y₂)
≤ λ²(x₁² + y₁²) + (1 − λ)²(x₂² + y₂²) + 2λ(1 − λ)√(x₁² + y₁²)√(x₂² + y₂²)
  (since x₁x₂ + y₁y₂ ≤ √(x₁² + y₁²)√(x₂² + y₂²))
≤ 4λ² + 4(1 − λ)² + 8λ(1 − λ)
= 4(λ + (1 − λ))² = 4,
the combination again lies in K. Hence the set is convex.
e. K = {(x, y) : |x| ≤ 2, |y| ≤ 1}
i. Geometrical approach: The set represents a rectangle with all its interior points, having
the four extreme points (2, 1), (2, −1), (−2, −1), (−2, 1). Hence the set is convex.
ii. Algebraic approach: The set is not empty. Let (x₁, y₁) and (x₂, y₂)
be any two points belonging to the set K. Then
|x₁| ≤ 2, |x₂| ≤ 2 and |y₁| ≤ 1, |y₂| ≤ 1. The convex combination of the two points is the
point (λx₁ + (1 − λ)x₂, λy₁ + (1 − λ)y₂), 0 ≤ λ ≤ 1.
Now,
|λx₁ + (1 − λ)x₂| ≤ |λx₁| + |(1 − λ)x₂|
= λ|x₁| + (1 − λ)|x₂| ≤ 2λ + 2(1 − λ) = 2.
In the same way we can prove that
|λy₁ + (1 − λ)y₂| ≤ 1.
Hence the set is a convex set.
 Show that K = {(x, y) : y² ≥ 4x} is not convex.
Solution: (0, 0) and (1, 2) are two points in K. A particular convex combination of the two points is
(½·0 + ½·1, ½·0 + ½·2) = (½, 1).
Since 1² = 1 ≱ 4·½ = 2, the point (½, 1) is not in K. Thus the set is not a convex set.
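Convexity claims like the ones above can also be screened numerically: sampling convex combinations of member points can only refute convexity, never prove it, but it catches sets like y² ≥ 4x immediately (a sketch using NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

def looks_convex(member, points, samples=200):
    """Necessary check only: every sampled convex combination of two
    member points must again satisfy the membership test."""
    for _ in range(samples):
        x = points[rng.integers(len(points))]
        y = points[rng.integers(len(points))]
        lam = rng.random()
        if not member(lam * x + (1 - lam) * y):
            return False
    return True

disk_pts = [np.array(p, dtype=float) for p in [(2, 0), (-2, 0), (0, 2), (0, -2), (1, 1)]]
parab_pts = [np.array(p, dtype=float) for p in [(0, 0), (1, 2)]]

disk_ok = looks_convex(lambda p: p @ p <= 4 + 1e-9, disk_pts)        # x^2 + y^2 <= 4
parab_ok = looks_convex(lambda p: p[1] ** 2 >= 4 * p[0], parab_pts)  # y^2 >= 4x
```

The disk passes every sampled check, while the set y² ≥ 4x fails exactly as in the algebraic argument above: combinations of (0, 0) and (1, 2) leave the set.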

Definition: Let x₁, x₂, …, xₖ ∈ ℝⁿ and λ₁, …, λₖ ∈ ℝ. Then x = ∑ᵢ λᵢxᵢ with ∑ᵢ λᵢ = 1, λᵢ ≥ 0,
is called a convex combination of x₁, x₂, …, xₖ.
 For x₁, x₂ ∈ ℝⁿ, x = λx₁ + (1 − λ)x₂ is the special case with λ₁ = λ, λ₂ = 1 − λ.
Theorem: Let K ⊆ ℝⁿ be a convex set and let x₁, x₂, …, xₘ ∈ K. Then each convex combination
of x₁, x₂, …, xₘ belongs to K, i.e., x = ∑ᵢ λᵢxᵢ ∈ K whenever ∑ᵢ λᵢ = 1, λᵢ ≥ 0, i = 1, …, m.
Proof:
 If m = 1, then λ₁ = 1 and x = x₁ ∈ K.
 For m = 2, let x₁, x₂ ∈ K and x = λ₁x₁ + λ₂x₂ ∈ K with λ₁ + λ₂ = 1; λ₁, λ₂ ≥ 0.
Since K is convex we have λx₁ + (1 − λ)x₂ ∈ K with λ = λ₁, 1 − λ = λ₂. Hence the theorem
is true for m = 2.
 Assume the theorem is true for some m ≥ 2, i.e., x = λ₁x₁ + ⋯ + λₘxₘ ∈ K whenever
∑ᵢ λᵢ = 1, λᵢ ≥ 0.
Let x₁, …, xₘ, xₘ₊₁ ∈ K and x′ = λ₁x₁ + ⋯ + λₘxₘ + λₘ₊₁xₘ₊₁ with ∑ᵢ λᵢ = 1, λᵢ ≥ 0.
We want to show x′ ∈ K. If λₘ₊₁ = 1 the claim is trivial, so assume λₘ₊₁ < 1 and put
μᵢ = λᵢ/(1 − λₘ₊₁), i = 1, …, m. Then μᵢ ≥ 0 and
∑ᵢ μᵢ = (λ₁ + ⋯ + λₘ)/(1 − λₘ₊₁) = (1 − λₘ₊₁)/(1 − λₘ₊₁) = 1,
so by the induction hypothesis y = μ₁x₁ + ⋯ + μₘxₘ ∈ K.
Finally, x′ = (1 − λₘ₊₁)y + λₘ₊₁xₘ₊₁ ∈ K, since K is convex.
Therefore the theorem holds true for m + 1 elements.
Hence, the proof is completed.
Theorem: Let K, K₁, K₂ and Kᵢ ⊆ ℝⁿ, i ∈ I, be convex sets. Then
i. ⋂{Kᵢ : i ∈ I} is a convex set;
ii. K₁ + K₂ is a convex set;
iii. ∑ᵢ Kᵢ is a convex set;
iv. λK is a convex set for every λ ∈ ℝ.
Proof:
i. Let x, y ∈ ⋂{Kᵢ : i ∈ I} and λ ∈ [0, 1].
⇒ x, y ∈ Kᵢ for each i ∈ I.
⇒ λx + (1 − λ)y ∈ Kᵢ for each i ∈ I.
⇒ λx + (1 − λ)y ∈ ⋂{Kᵢ : i ∈ I}.
⇒ ⋂{Kᵢ : i ∈ I} is a convex set.
ii. Let x, y ∈ K₁ + K₂ with x = x₁ + x₂, y = y₁ + y₂, where x₁, y₁ ∈ K₁ and x₂, y₂ ∈ K₂.
⇒ λx + (1 − λ)y = λ(x₁ + x₂) + (1 − λ)(y₁ + y₂)
⇒ λx + (1 − λ)y = [λx₁ + (1 − λ)y₁] + [λx₂ + (1 − λ)y₂],
where λx₁ + (1 − λ)y₁ ∈ K₁ and λx₂ + (1 − λ)y₂ ∈ K₂
⇒ λx + (1 − λ)y ∈ K₁ + K₂.
iv. Let x, y ∈ λK with x = λu, y = λv, where u, v ∈ K. Then for μ ∈ [0, 1] we have
μx + (1 − μ)y = μλu + (1 − μ)λv = λ(μu + (1 − μ)v) ∈ λK
⇒ λK is convex.
Definition: Let A ⊆ ℝⁿ. Then conv A = ⋂{K : K is convex and A ⊆ K} is called the convex
hull of A. If A is a convex set, then A = conv A.
 Let X be a point set; then the convex hull of X, denoted by C(X), is the set of all convex
combinations of points from X.
 If the set X consists of a finite number of points, then the convex hull of X is called a convex
polyhedron. If X is a convex set, then C(X) = X.
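For a finite point set, the convex hull can be computed directly (a sketch assuming SciPy is available): an interior point is a convex combination of the others, so it does not appear among the hull's vertices.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Four corners of a rectangle plus one interior point; the interior point is a
# convex combination of the corners, so it cannot be a vertex of the hull.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0], [1.0, 0.5]])
hull = ConvexHull(pts)
hull_vertices = sorted(hull.vertices.tolist())
```

Only the four corner points (indices 0–3) are reported as vertices of the convex polyhedron.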
Theorem: Let A ⊆ ℝⁿ. Then conv A is the set of all convex combinations of elements of A,
i.e., conv A = {x ∈ ℝⁿ : x = ∑ᵢ λᵢaᵢ, aᵢ ∈ A, ∑ᵢ λᵢ = 1, λᵢ ≥ 0} = C.
Proof: First we show that C is a convex set, i.e., for x, y ∈ C and λ ∈ [0, 1], λx + (1 − λ)y ∈ C.
Let x = ∑ᵢ αᵢaᵢ with ∑ᵢ αᵢ = 1, aᵢ ∈ A, αᵢ ≥ 0, and y = ∑ⱼ βⱼbⱼ with ∑ⱼ βⱼ = 1, bⱼ ∈ A, βⱼ ≥ 0.
Then
λx + (1 − λ)y = ∑ᵢ (λαᵢ)aᵢ + ∑ⱼ ((1 − λ)βⱼ)bⱼ,
a combination of elements of A with non-negative coefficients λαᵢ, (1 − λ)βⱼ whose sum is
∑ᵢ λαᵢ + ∑ⱼ (1 − λ)βⱼ = λ·1 + (1 − λ)·1 = 1.
Hence λx + (1 − λ)y ∈ C, so C is a convex set.
Next it remains to show that conv A = C.
Since A ⊆ C and C is convex, while conv A is the intersection of all convex sets containing A,
we get conv A ⊆ C.
Conversely, let x ∈ C, say x = ∑ᵢ λᵢaᵢ with aᵢ ∈ A, ∑ᵢ λᵢ = 1, λᵢ ≥ 0. Every convex set K
containing A contains all convex combinations of elements of A (by the previous theorem), so
x ∈ K for each such K; hence x ∈ conv A. Thus C ⊆ conv A.
Therefore conv A = C.
Theorem: Let A, B ⊆ ℝⁿ. Then
i. conv A is the minimal convex set containing A;
ii. if A ⊆ B, then conv A ⊆ conv B;
iii. conv(conv A) = conv A.
Proof: i) conv A = ⋂{K : K convex, A ⊆ K}, so conv A is contained in every convex set
containing A; hence conv A is the minimal convex set containing A.
ii) Suppose A ⊆ B and let x ∈ conv A.
⇒ x = ∑ᵢ λᵢaᵢ with ∑ᵢ λᵢ = 1, λᵢ ≥ 0, aᵢ ∈ A
⇒ aᵢ ∈ B for each i, so x is a convex combination of elements of B
⇒ x ∈ conv B
⇒ conv A ⊆ conv B.
iii) conv A is itself a convex set, and the minimal convex set containing a convex set is the set
itself; hence conv(conv A) = conv A.

Definition: Let S be a non-empty subset of ℝⁿ. If S is closed and bounded, then S is said to be a
compact set.
Example:
 S = [0, 1] is a compact set.
 S = {(x, y) : x² + y² ≤ 1} is also a compact set.

3.3Polyhedral sets and Extreme Points


Definition: Consider the following CSLP problem:
f(x) = ⟨c, x⟩ → max (min)
Subject to
Ax ≤ b, xⱼ ≥ 0, j = 1, …, n.
The set P = {x : Ax ≤ b} is called a polyhedron.
Theorem: A bounded polyhedral set is a compact set.
Theorem: Hyper plane is a convex set.
Theorem: A polyhedral set is a convex set.

Definition: Let K ⊆ ℝⁿ, with K convex. A point p ∈ K is said to be an extreme point of K if and only if p
cannot be expressed as a strict convex combination of two other distinct points in K; that is, p is not an
interior point of any line segment contained in K: p ∉ (x, y) for any x ≠ y in K.
In other words, if p = λx + (1 − λ)y with λ ∈ (0, 1) and x, y ∈ K, then x = y = p.

Extreme point enumeration method


This method finds the optimal solution of a LPP by testing the value of the objective function at
each corner (extreme) point of the feasible region.

Procedure:
1. Take each inequality constraint as an equality constraint.
2. Plot each equation in the coordinate plane (2D).
3. Determine the feasible region: the region that satisfies all the constraints.
4. Find all extreme points of the feasible region, compute the value of the objective
function at each of them, and compare the values obtained.
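The steps above can be mechanized for two-variable problems: corner candidates are intersections of pairs of constraint boundary lines (including the axes), infeasible intersections are discarded, and the objective is evaluated at the rest. The sketch below uses a hypothetical problem (maximize 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x, y ≥ 0), not one from the text:

```python
import itertools
import numpy as np

def enumerate_corners(A, b, c):
    """Evaluate c @ x at every corner of {x >= 0 : A x <= b} (two variables).

    Corner candidates are intersections of pairs of boundary lines,
    including the axes x = 0 and y = 0; infeasible points are discarded.
    """
    # Append the axes as constraints -x <= 0 and -y <= 0.
    A_full = np.vstack([A, -np.eye(2)])
    b_full = np.concatenate([b, np.zeros(2)])
    corners = []
    for i, j in itertools.combinations(range(len(b_full)), 2):
        M = A_full[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue  # parallel boundary lines: no intersection point
        x = np.linalg.solve(M, b_full[[i, j]])
        if np.all(A_full @ x <= b_full + 1e-9):  # feasibility check
            corners.append((tuple(np.round(x, 9)), float(c @ x)))
    return corners

# Hypothetical example: maximize 3x + 2y s.t. x + y <= 4, x + 3y <= 6, x, y >= 0.
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])
corners = enumerate_corners(A, b, c)
best = max(corners, key=lambda t: t[1])   # optimal corner and value
```

For this data the feasible corners are (0,0), (4,0), (3,1), and (0,2), and the maximum is attained at (4, 0) with value 12.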
Properties of extreme points
Let K ⊆ ℝⁿ be a convex set.
i. If p is an extreme point of K and p ∈ [x, y] ⊆ K, then either p = x or p = y.
ii. The point p is an extreme point of K if and only if K − {p} is a convex set.
Proof:
a. If p is an extreme point and p ∈ [x, y] ⊆ K,
the proof immediately follows from the definition.
b. ⇒) Assume p is an extreme point of K. We show M = K − {p} is a convex set.
Let x, y ∈ M. WTS [x, y] ⊆ M.
Since x, y ∈ M, x ≠ p and y ≠ p.
⇒ p is not an interior point of [x, y], and p is not an endpoint of [x, y].
⇒ [x, y] ⊆ M.
⇒ M is convex.
⇐) Suppose K − {p} is a convex set. Assume, for contradiction, that p is not an extreme point; then p is an interior point of some segment, say p ∈ (x, y) with x, y ∈ K.

Since p ∈ (x, y), we have x ≠ p and y ≠ p, so x, y ∈ K − {p}.
⇒ [x, y] ⊆ K − {p}, because K − {p} is convex. This is a contradiction, since p ∈ (x, y) but p ∉ K − {p}.
⇒ p is an extreme point of K.
• Extreme points are finite in number. There are at most C(n, m) basic solutions to a set of m equations with n
unknowns. Hence the maximum number of basic feasible solutions to a LPP is C(n, m), which is finite provided
that n is finite.
• Extreme points of the convex set K are boundary points, but all boundary points are not
necessarily extreme points.
• Every point of the boundary of a circle is an extreme point of the convex set consisting of the
boundary and interior of the circle.
• The extreme points of a rectangle are its four vertices.
• A circle with all its interior points is a convex hull, but not a convex polyhedron.
• All convex polyhedra are convex hulls, but all convex hulls may not be convex polyhedra.

3.4 The Corner Point Theorem


Theorem: Any point of a bounded polyhedral set can be expressed as a convex combination of its extreme
points.
Corner points | Objective function value z = 4x + y
A(15, 0) | 60
B(40/11, 25/11) | 185/11
C(0, 5) | 5
The optimal solution is (x, y) = (0, 5) with z_min = 5.
c) max z = −x + 2y
s.t. x − y ≤ −1
−x + 2y ≤ 4
x, y ≥ 0.
The RHS of the first constraint is negative; multiplying both sides of the constraint by −1, it takes
the form −x + y ≥ 1. The solution space satisfying the constraints and the nonnegativity
constraints is given below.

Values of the objective function at the vertices of the closed region ABC are:
Vertices | Objective function value
A(0, 1) | 2
B(0, 2) | 4
C(2, 3) | 4
The points B and C give the same maximum value of z. It follows that every point between B and C
(on the line segment BC) also gives the same value of z. The problem, therefore, has multiple optimal
solutions, and z_max = 4.
Example: Find a geometric solution for each of the following LPPs:
a) z = x + y → max
s.t. 3x + y ≥ 3
x + y ≤ 1
x, y ≥ 0
b) z = 20x + 30y → max
s.t. 3x + 3y ≤ 36
5x + 2y ≤ 50
2x + 6y ≤ 60
x ≥ 0, 0 ≤ y ≤ 200
c) z = 2x + 10y → max
s.t. 2x + 5y ≤ 16
6x ≤ 30
x, y ≥ 0
d) z = 6x + 4y → max
s.t. 2x + y ≤ 390
3x + 3y ≤ 810
x, y ≥ 0

Moving Hyperplane Method


This is a second geometrical method of solving an LPP, which is suitable if the feasible set is
unbounded.
Definition: Let f be a function of two variables, and let (x0, y0) be in the domain of f. The
partial derivative of f with respect to x at (x0, y0) is defined by
∂f/∂x = lim (h→0) [f(x0 + h, y0) − f(x0, y0)] / h,
provided that the limit exists. Similarly, the partial derivative of f with respect to
y at (x0, y0) is defined by
∂f/∂y = lim (h→0) [f(x0, y0 + h) − f(x0, y0)] / h,
provided that the limit exists.
In general, if f is a function of n variables, z = f(x1, …, xn), then the vector of partial derivatives of f
(the gradient) is given by
∇f = (∂f/∂x1, …, ∂f/∂xn)ᵀ.

Consider the following LPP

f(x1, x2) = c1x1 + c2x2 → max (min)
a11x1 + a12x2 ≤ b1
a21x1 + a22x2 ≤ b2
⋮
am1x1 + am2x2 ≤ bm

Steps:
1. Plot the feasible set S in the coordinate plane.
2. Select a point x⁰ ∈ S and find the value f(x⁰). Let z0 = f(x⁰).
3. Plot the level line of f given by c1x1 + c2x2 = z0 through x⁰.
4. Translate (move) the level line parallel to itself in the direction of improvement within
the feasible region until the point that maximizes or minimizes f is found, or we conclude
that the problem has no finite minimum/maximum.
5. ∇f(x1, x2) = (c1, c2)ᵀ = c and −∇f(x1, x2) = −c are the directions of improvement for
maximization and minimization problems, respectively.


Example: f(x1, x2) = c1x1 + c2x2 → max (min)
s.t. −2x1 + x2 ≤ 3
−x1 + 2x2 ≤ 9
x1 ≤ 6
3x1 + x2 ≤ 21
3x1 − x2 ≤ 15
x1, x2 ≥ 0

Example 1: f(x, y) = x + y → max, with level line x + y = 5.

Example 2: f(x, y) = −x + y → max, with level line −x + y = 5.

• The geometric method is not possible in higher dimensions.
• An algebraic approach is needed to solve LPs in higher dimensions.
Example:

Minimize −x1 − 3x2
subject to
x1 + x2 ≤ 6
−x1 + 2x2 ≤ 8
x1, x2 ≥ 0.
The feasible region is illustrated in the figure below. The first and second constraints represent
points "below" lines 1 and 2, respectively. The nonnegativity constraints restrict the points to be
in the first quadrant. The equations −x1 − 3x2 = z are called the objective contours and are
represented by dotted lines.
In particular, the contour −x1 − 3x2 = 0 passes through the origin. The contours are moved in
the direction −c = (1, 3) as much as possible until the optimal point (4/3, 14/3) is
reached.

In this example we had a unique optimal solution. Other cases may occur depending upon the
problem structure.
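Under the constraints as stated above, both constraints are binding at the optimum, so the optimal vertex can be checked numerically as the intersection of the two constraint lines. A small NumPy sketch:

```python
import numpy as np

# Intersection of the two binding constraint lines:
#   x1 + x2 = 6   and   -x1 + 2*x2 = 8
A = np.array([[1.0, 1.0], [-1.0, 2.0]])
b = np.array([6.0, 8.0])
x = np.linalg.solve(A, b)        # the vertex (4/3, 14/3)

c = np.array([-1.0, -3.0])       # objective: minimize -x1 - 3*x2
z = float(c @ x)                 # optimal value -46/3
```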
All possible cases that may arise are summarized below (for a minimization problem).
Unique Finite Optimal Solution: If the finite optimal solution is unique, then it occurs at an
extreme point. Figures a and b show a unique optimal solution. In Figure a the feasible region is
bounded; that is, there is a ball around the origin that contains the feasible region. In Figure b the
feasible region is not bounded. In each case, however, the unique optimal solution is finite.

Alternative Finite Optimal Solutions: This case is illustrated in the figure below.


Note that in Figure a the feasible region is bounded. The two corner points x1* and x2* are
optimal, and so is any point on the line segment joining them. In Figure b the feasible region is
unbounded, but the optimal objective is finite. Any point on the "ray" with vertex x* in Figure b
is optimal.

Unbounded Optimal Solution: This case is illustrated in the figure below, where both the feasible
region and the optimal solution are unbounded. For a minimization problem, the contour cx = z
can be moved in the direction −c indefinitely while always intersecting the feasible region.
In this case the optimal objective is unbounded with value −∞.
Empty Feasible Region: In this case the system of equations and/or inequalities defining the feasible
region is inconsistent. To illustrate, consider the following problem.

Examining the figure, it is clear that there exists no point (x1, x2) satisfying the above
inequalities. The problem is said to be infeasible, inconsistent, or to have an empty feasible region.

Summary
Graphical Solution Methods
Remark: There are two types of polyhedral sets:
1. Bounded polyhedral set: strictly bounded, with a finite number of extreme points.
2. Unbounded polyhedral set: has a finite number of extreme points and is unbounded.
Theorem: If the convex set of feasible solutions of a LPP is bounded, then any objective function
attains both its maximum and its minimum at some extreme points of the polyhedral set.
Definition:
Multiple optimal solutions: If a LPP has more than one optimal solution, then the problem is
said to have multiple optimal solutions.
Theorem: If a LPP has at least two optimal feasible solutions, then there are infinitely many
optimal solutions, namely the convex combinations of the initial optimal solutions.
Convex Set
Let x, y ∈ ℝⁿ.
• [x, y] = {z ∈ ℝⁿ : z = λx + (1 − λ)y, λ ∈ [0, 1]} is called a closed segment in ℝⁿ.
• (x, y) = {z ∈ ℝⁿ : z = λx + (1 − λ)y, λ ∈ (0, 1)} is called an open segment in ℝⁿ.
Definition: A set K ⊆ ℝⁿ is said to be a convex set iff for any two points x, y in K and any λ ∈ [0, 1], λx + (1 − λ)y ∈ K.
• If the line segment joining any two distinct points of a set lies in the set, then we call the set
a convex set.

• Definition: Consider the following CSLP problem
f(x) = ⟨c, x⟩ → min (max)
Subject to
⟨ai, x⟩ ≤ bi, x ≥ 0, i = 1, …, m.
The set S = {x : Ax ≤ b} is called a polyhedron.
• Theorem: A bounded polyhedral set is a compact set.
• Theorem: A hyperplane is a convex set.
• Theorem: A polyhedral set is a convex set.

CHAPTER FOUR
4. The Simplex Method
Introduction
In the previous chapter, we saw graphical solutions of a linear programming problem. But the
graphical method cannot be applied to solve a LPP involving more than two variables. In such
cases, an analytical method is quite helpful. This method also forms a good basis for grasping the
more powerful simplex method.
In this section we introduce basic feasible solutions, and show that they correspond to extreme
points. Since an algebraic characterization of the former (and hence the latter) exists, we shall be
able to move from one basic feasible solution to another until optimality is reached.
General objectives
At the end of this unit the learner will be able to:
▪ Recall the formulation of linear programming problems
▪ Understand how to find basic feasible solutions of an LP
▪ Consider the representation of the linear programming problem in terms of the nonbasic variables
▪ Explain the interpretation of the optimality test
▪ Identify uniqueness and alternative optimal solutions
▪ Differentiate the main steps of applying the simplex method to linear programming
▪ Know the two-phase method

4.1 Linear programming in standard form


Given a linear programming problem in standard form
min z = cx
s.t.
Ax = b, x ≥ 0,
where A is an m × n matrix with m ≤ n.
Suppose that rank(A) = m. After rearranging the columns of A, let A = [B, N],
where B is an m × m invertible matrix and N is an m × (n − m) matrix.

aj = (a1j, …, amj)ᵀ denotes the j-th column of A.
c = (c1, …, cn) ∈ ℝⁿ denotes the cost vector.
b = (b1, …, bm)ᵀ denotes the right-hand-side vector in ℝᵐ.
S = {x ∈ ℝⁿ : Ax = b, x ≥ 0} is called the feasible set of the LPP.


Note:
A basis of A is a set ={ ,…, } of m linearly independent column vectors of A. The
index set = { (1), … , ( )} is given arbitrarily, but in a fixed manner.

Suppose after rearranging the columns of A, let = [ , ] where B is an m x m invertible

matrix and N is an m x (n-m) matrix and = .

4.2 Basic Feasible Solutions (Analytical Method)


Consider the system Ax = b, where A = [B, N].
Therefore, we have the following:
Ax = b ⟹ [B, N](xB, xN)ᵀ = b
⟹ BxB + NxN = b
⟹ xB = B⁻¹b − B⁻¹NxN.
Setting xN = 0 gives xB = B⁻¹b.
• The point x = (xB, xN), where xB = B⁻¹b and xN = 0, is called a basic solution of the
system.
• The components of xB are called basic variables and the components of xN are
called nonbasic variables.
• If xB = B⁻¹b ≥ 0, then x is called a basic feasible solution.
• A solution is a basic solution if and only if the columns associated with its nonzero
variables are linearly independent. This condition is both necessary and sufficient.

• If all components of a basic feasible solution corresponding to the basic variables are nonzero, then
the solution is called a nondegenerate basic feasible solution.
• If some components corresponding to the basic variables are zero, the solution is called a
degenerate basic feasible solution.
• Matrix B is called the basic matrix (or simply the basis), and
• matrix N is called the nonbasic matrix.
For a system of equations with n variables and m equations, the number of basic solutions is
less than or equal to

C(n, m) = n! / (m!(n − m)!).
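This bound is easy to evaluate with the standard library; for instance, with n = 4 variables and m = 2 equations (as in Example 3 below) there are at most 6 basic solutions:

```python
from math import comb

# Upper bound on the number of basic solutions for m equations in n unknowns.
def max_basic_solutions(n, m):
    return comb(n, m)

print(max_basic_solutions(4, 2))   # 6
```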
Example 1: Consider the following system of equations. Find all basic and basic feasible
solutions.
2x1 + 4x2 − 2x3 = 10
10x1 + 3x2 + 7x3 = 33
Solution: The set of equations can be written as
x1·a1 + x2·a2 + x3·a3 = b,
where a1 = (2, 10)ᵀ, a2 = (4, 3)ᵀ, a3 = (−2, 7)ᵀ, b = (10, 33)ᵀ.
rank(A) = 2, the number of linearly independent column vectors; thus the equations are linearly
independent. We will have three square submatrices, taking two columns at a time. The total number
of basic solutions will be at most C(3, 2) = 3!/(2!·1!) = 3.

The possible submatrices are given below:

B1 = (a1, a2) = [2 4; 10 3],
B2 = (a1, a3) = [2 −2; 10 7],
B3 = (a2, a3) = [4 −2; 3 7].
Since all three submatrices are nonsingular, all of them are basis matrices, and there exist 3 basic
solutions.
For B1 = (a1, a2): B1⁻¹ = (1/−34)[3 −4; −10 2], and xB = B1⁻¹(10, 33)ᵀ = (3, 1)ᵀ.
For B2 = (a1, a3): B2⁻¹ = (1/34)[7 2; −10 2], and xB = B2⁻¹(10, 33)ᵀ = (4, −1)ᵀ.
For B3 = (a2, a3): B3⁻¹ = (1/34)[7 2; −3 4], and xB = B3⁻¹(10, 33)ᵀ = (4, 3)ᵀ.
x = (x1, x2, x3) = (3, 1, 0) and x = (0, 4, 3) are basic feasible solutions.
x = (4, 0, −1) is a basic solution but not a feasible solution.
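The enumeration in Example 1 can be carried out mechanically: for every choice of m columns forming a nonsingular matrix, set the remaining variables to zero and solve. A NumPy sketch, applied to the system above:

```python
import itertools
import numpy as np

def basic_solutions(A, b):
    """Enumerate basic solutions of A x = b (A is m x n with rank m).

    For each choice of m columns forming a nonsingular basis matrix B,
    the nonbasic variables are fixed at zero and B x_B = b is solved.
    """
    m, n = A.shape
    solutions = []
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue  # singular: these columns do not form a basis
        x = np.zeros(n)
        x[list(cols)] = np.linalg.solve(B, b)
        solutions.append(x)
    return solutions

A = np.array([[2.0, 4.0, -2.0], [10.0, 3.0, 7.0]])
b = np.array([10.0, 33.0])
sols = basic_solutions(A, b)                       # (3,1,0), (4,0,-1), (0,4,3)
feasible = [x for x in sols if np.all(x >= -1e-9)]  # the basic feasible ones
```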
Example 2: Find all the basic solutions to the following problem:
= + + →

+ + =
+ + =
Also find which of the basic solutions are:
a. basic feasible solutions
b. nondegenerate basic feasible solutions
c. optimal basic feasible solutions
Example 3: How many basic feasible solutions are there for the following linearly independent
equations? Find all of them.
2x1 − x2 + 3x3 + x4 = 6
4x1 − 2x2 − x3 + 2x4 = 10
Solution: There are at most C(4, 2) = 6 basic solutions, each with two basic and two nonbasic
variables. The given equations can be written as
x1·a1 + x2·a2 + x3·a3 + x4·a4 = b,
where a1 = (2, 4)ᵀ; a2 = (−1, −2)ᵀ; a3 = (3, −1)ᵀ; a4 = (1, 2)ᵀ; b = (6, 10)ᵀ.
The six square submatrices, taking two columns at a time, are given by
B1 = (a1, a2) = [2 −1; 4 −2], B2 = (a1, a3) = [2 3; 4 −1], B3 = (a1, a4) = [2 1; 4 2],
B4 = (a2, a3) = [−1 3; −2 −1], B5 = (a2, a4) = [−1 1; −2 2], B6 = (a3, a4) = [3 1; −1 2].
Since only the three matrices B2, B4, B6 are nonsingular, there are only three basic solutions.
For B2: xB = B2⁻¹b = (18/7, 2/7)ᵀ ⇒ BS = (18/7, 0, 2/7, 0).
For B4: xB = B4⁻¹b = (−36/7, 2/7)ᵀ ⇒ BS = (0, −36/7, 2/7, 0).
For B6: xB = B6⁻¹b = (2/7, 36/7)ᵀ ⇒ BS = (0, 0, 2/7, 36/7).
Hence (18/7, 0, 2/7, 0) and (0, 0, 2/7, 36/7) are the basic feasible solutions, while
(0, −36/7, 2/7, 0) is basic but not feasible.
Example: Two linearly independent equations with three variables are given:
+ − = .
− − = .
Find, if possible, the basic solutions with x as a nonbasic variable.

4.3. Fundamental theorem of linear programming


Important facts about the linear programming problem
min cx
s.t. Ax = b
x ≥ 0,
where A is an m × n matrix with rank m.
Theorem 1: The collection of extreme points corresponds to the collection of basic feasible
solutions, and both are nonempty, provided that the feasible region is not empty.
Theorem 2: If an optimal solution exists, then an optimal extreme point (or equivalently an
optimal basic feasible solution) exists.

Limitation of the method: Since the number of basic feasible solutions is bounded by C(n, m),
one may think of simply listing all basic feasible solutions and picking the one with the minimal
objective value. This is not satisfactory, however, for a number of reasons.

Firstly, the number of basic feasible solutions is only bounded by C(n, m), which is large even for
moderate values of m and n.


Secondly, this simple approach does not tell us if the problem has an unbounded solution, which
may occur if the feasible region is unbounded.
Lastly, if the feasible region is empty and we apply the foregoing "simple-minded
procedure," we shall discover that the feasible region is empty only after all possible ways of
extracting m columns out of the n columns of the matrix A fail to produce a basic feasible solution,
on the grounds that B does not have an inverse, or else B⁻¹b ≱ 0.
Keys to Simplex method
The simplex method is a clever procedure that moves from an extreme point to another extreme
point, with a better (at least not worse) objective. It also discovers:
 Whether the feasible region is empty and
 Whether the optimal solution is unbounded.
In practice, the method only enumerates a small portion of the extreme points of the feasible
region.
The key to the simplex method lies in recognizing the optimality of a given extreme-point solution
based on local considerations, without having to (globally) enumerate all extreme points or basic
feasible solutions.
Consider the following linear programming problem:
min cx
s.t. Ax = b; x ≥ 0,
where A is an m × n matrix with rank m. Suppose that we have a basic feasible solution (B⁻¹b, 0)ᵀ
whose objective value z0 is given by

z0 = c(B⁻¹b, 0)ᵀ = (cB, cN)(B⁻¹b, 0)ᵀ = cB B⁻¹b ………………… (1)

Now let xB and xN denote the vectors of basic and nonbasic variables for the given basis.
Then feasibility requires that
xB ≥ 0, xN ≥ 0, and that b = Ax = BxB + NxN.
Multiplying the last equation by B⁻¹ and rearranging the terms, we get
xB = B⁻¹b − B⁻¹NxN = B⁻¹b − Σ(j∈R) B⁻¹aj xj = b̄ − Σ(j∈R) yj xj ………. (2)
where R is the current set of the indices of the nonbasic variables, b̄ = B⁻¹b, and yj = B⁻¹aj.
Noting equations (1) and (2) and letting z denote the objective function value, we get
z = cx
= cB xB + cN xN
= cB B⁻¹b − Σ(j∈R) cB B⁻¹aj xj + Σ(j∈R) cj xj
= z0 − Σ(j∈R) (zj − cj) xj ………………… (3)
where zj = cB B⁻¹aj for each nonbasic variable. Using the foregoing transformations, the linear
programming problem may be rewritten as:
min z = z0 − Σ(j∈R) (zj − cj) xj
s.t. Σ(j∈R) yj xj + xB = b̄; xj ≥ 0 for j ∈ R, xB ≥ 0 …………………… (4)

Without loss of generality, let us assume that no row in equation (4) has all zeros in the columns
of the nonbasic variables xj, j ∈ R. Otherwise, the basic variable in such a row is known in
value, and this row can be deleted from the program. Now observe that the variables xB simply
play the role of slack variables in equation (4). Hence, we can equivalently write the LP in the
nonbasic variable space, i.e., in terms of the nonbasic variables, as follows:
min z = z0 − Σ(j∈R) (zj − cj) xj
s.t. Σ(j∈R) yj xj ≤ b̄; xj ≥ 0 for j ∈ R …………………… (5)

Note that the number of nonbasic variables is p = n − m, and so we have represented the LP in
some p-dimensional space. This is to be expected, since there are p independent variables, or p
degrees of freedom, in our constraint system.
The quantities zj − cj are sometimes referred to as reduced cost coefficients, since they are the
coefficients of the nonbasic variables in this reduced space. The representation (4), in which the
objective function z and the basic variables xB have been solved for in terms of the nonbasic
variables, is referred to as the representation of the basic variables in canonical form.
The key result now simply says the following:
If zj − cj ≤ 0 for all j ∈ R ……………………………………………… (6)
then the current basic solution is optimal.
This should be clear by noting that, since zj − cj ≤ 0 for all j ∈ R, equation (3) gives z ≥ z0 for
any feasible solution, while for the current (basic) feasible solution z = z0, since
xj = 0 for all j ∈ R.

4.4 Algebra of the Simplex Method


Towards this end, consider the representation of the LP in the nonbasic variable space, written
in equality form as in equation (4).
If zj − cj ≤ 0 for all j ∈ R, then xN = 0 and xB = b̄ is optimal for the LP, as noted in equation (6).
Otherwise, while holding (p − 1) nonbasic variables fixed at zero, the simplex method
considers increasing the remaining variable, say xk. Naturally, we would like (zk − ck) to be
positive, and perhaps the most positive of all the zj − cj, j ∈ R. Now, fixing xj = 0 for
j ∈ R − {k}, we obtain from equation (4) that
z = z0 − (zk − ck)xk ………………………………… (7)
and
(xB1, …, xBm)ᵀ = (b̄1, …, b̄m)ᵀ − (y1k, …, ymk)ᵀ xk ………………… (8)
If yik ≤ 0, then xBi increases as xk increases, and so xBi continues to be nonnegative.
If yik > 0, then xBi will decrease as xk increases. In order to satisfy nonnegativity, xk is
increased until the first point at which a basic variable drops to zero. From equation (8), it is
clear that the first basic variable dropping to zero corresponds to the minimum of b̄i/yik for
positive yik. More precisely, we can increase xk until

xk = b̄r/yrk = min (1≤i≤m) { b̄i/yik : yik > 0 } ………… (9)

In the absence of degeneracy, b̄r > 0, and hence xk = b̄r/yrk > 0. From equation (7) and the
fact that zk − ck > 0, it then follows that z < z0 and the objective function strictly improves.
As xk increases from level 0 to b̄r/yrk, a new basic solution is obtained. Substituting xk = b̄r/yrk
in equation (8) gives the following:

xBi = b̄i − (yik/yrk) b̄r ; i = 1, 2, …, m
xk = b̄r/yrk ……………………… (10)

From equation (10), xBr = 0, and hence at most m variables are positive. The corresponding
columns in A are aB1, …, aB,r−1, ak, aB,r+1, …, aBm. Note that these columns are linearly
independent since yrk ≠ 0. Recall that aB1, …, aBm are linearly
independent, and if ak replaces aBr, then the new columns are linearly independent if and only
if yrk ≠ 0. Therefore, the point given by (10) is a basic feasible solution.
To summarize, we have algebraically described an iteration, that is, the process of transforming
from one basis to an adjacent basis. This is done by increasing the value of the nonbasic
variable xk with positive zk − ck and adjusting the current basic variables. In the process, the
variable xBr drops to zero. The variable xk enters the basis and the variable xBr leaves the basis.
In the absence of degeneracy, the objective function value strictly decreases, and hence the basic
feasible solutions generated are distinct. Since there is only a finite number of basic feasible
solutions, the procedure terminates in a finite number of steps.
Example:
min x1 + x2
s.t. x1 + 2x2 ≤ 4
x2 ≤ 1
x1, x2 ≥ 0
Introduce slack variables x3, x4 to put the problem in canonical form. This leads to the following
constraint matrix A:
A = [a1, a2, a3, a4] = [1 2 1 0; 0 1 0 1], with b = (4, 1)ᵀ.
Consider the basic feasible solution corresponding to B = [a1, a2].
The representation of the problem in the nonbasic variable space may be obtained as follows:
First compute B⁻¹ = [1 −2; 0 1] and w = cB B⁻¹ = (1, 1)[1 −2; 0 1] = (1, −1). Hence
y3 = B⁻¹a3 = (1, 0)ᵀ
y4 = B⁻¹a4 = (−2, 1)ᵀ
b̄ = B⁻¹b = (2, 1)ᵀ and
z0 = cB b̄ = 3
z3 − c3 = w a3 − c3 = 1
z4 − c4 = w a4 − c4 = −1
Hence the required representation of the problem is
min z = 3 − x3 + x4
s.t. x3 − 2x4 + x1 = 2
x4 + x2 = 1
x1, x2, x3, x4 ≥ 0
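The quantities above can be checked mechanically. A small NumPy sketch of the same computation, with basis B = [a1, a2] and costs c = (1, 1, 0, 0):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 1.0])
c = np.array([1.0, 1.0, 0.0, 0.0])

basic = [0, 1]                       # columns a1, a2
B = A[:, basic]
B_inv = np.linalg.inv(B)
w = c[basic] @ B_inv                 # simplex multipliers (1, -1)
b_bar = B_inv @ b                    # (2, 1)
z0 = float(c[basic] @ b_bar)         # objective value 3
reduced = w @ A - c                  # z_j - c_j for every column
```

The reduced costs come out as (0, 0, 1, −1): zero for the basic columns, and z3 − c3 = 1, z4 − c4 = −1 as computed above.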

4.5 Optimality test and basic exchange


Interpretation of zj − cj
The criterion zj − cj > 0 for a nonbasic variable xj to enter the basis will be used repeatedly
until an optimal solution is obtained or the problem is known to have an unbounded solution.
It will be helpful at this stage to review the definition of zj and make a few comments on the
meaning of the entering criterion zj − cj > 0. Recall that z = z0 − (zj − cj)xj,
where zj = cB B⁻¹aj = cB yj = Σ(i=1..m) cBi yij ………………… (11)
and cBi is the cost of the i-th basic variable. Note that if xj is raised from zero level, while the
other nonbasic variables are kept at zero level, then the basic variables xB1, xB2, …, xBm must be
modified according to equation (8).
In other words, if xj is increased by one unit, then xB1, xB2, …, xBm will be decreased
respectively by y1j, y2j, …, ymj units (if yij < 0, then xBi will be increased).
The saving (a negative saving means more cost) that results from the modification of the basic
variables, as a result of increasing xj by one unit, is therefore Σ(i=1..m) cBi yij, which is zj. However,
the cost of increasing xj itself by one unit is cj. Hence zj − cj is the saving minus the cost of
increasing xj by one unit.
Naturally, if zj − cj is positive, it will be to our advantage to increase xj: for each unit of xj, the
cost will be reduced by an amount zj − cj, and hence it will be to our advantage to increase xj as
much as possible. On the other hand, if zj − cj < 0, then by increasing xj, the net saving is
negative, and this action will result in a larger cost. So this action is prohibited. Finally, if
zj − cj = 0, then increasing xj will lead to a different solution with the same cost. So whether xj
is kept at zero level or increased, no change in the cost takes place.
Now suppose that xj is a basic variable. In particular, suppose that xj is the r-th basic variable,
that is, j = B(r) and cj = cBr. Recall that yj = B⁻¹aj = B⁻¹aBr = er, where er
is a vector of zeros except for a one at the r-th position.
Then zj = cB yj = cB er = cBr, and hence zj − cj = cBr − cBr = 0.
Leaving the basis and blocking variables
Suppose that we decide to increase a nonbasic variable xk with a positive zk − ck. From
equation (7), the larger the value of xk, the smaller is the objective z. As xk is increased, the basic
variables are modified according to equation (8). If the vector yk has any positive component(s),
then the corresponding basic variable(s) decrease as xk is increased. Therefore, the nonbasic
variable xk cannot be increased indefinitely, because the nonnegativity of the basic variables would
be violated. Recall that the first basic variable xBr that drops to zero is called the blocking
variable, because it blocks further increase of xk. Thus xk enters the basis and xBr leaves the
basis.
Termination with optimality or unboundedness
We have discussed a procedure that moves from one basis to an adjacent basis by introducing
one variable into the basis and removing another variable from the basis. The criteria for
entering and leaving are summarized below:
1. Entering: xk may enter if zk − ck > 0.
2. Leaving: xBr may leave if b̄r/yrk = min (1≤i≤m) { b̄i/yik : yik > 0 }.
Two logical questions immediately arise:

1. What happens if each nonbasic variable has zj − cj ≤ 0? In this case, as seen from
equation (6), we have obtained an optimal basic feasible solution.
2. Suppose that zk − ck > 0 and xk is eligible to enter the basis, but we cannot find any
positive component yik. In this case, the optimal solution value is unbounded. These
cases will be discussed in more detail in this section.
Termination with an optimal solution
Consider the following problem, where A is an m × n matrix with rank m:
min cx
s.t. Ax = b; x ≥ 0.
Suppose that x* is a basic feasible solution with basis B; that is, x* = (B⁻¹b, 0)ᵀ.
Let z* denote the objective value of x*, i.e.,
z* = cB B⁻¹b. Suppose further that zj − cj ≤ 0 for all nonbasic variables, and hence there are no
nonbasic variables that are eligible to enter the basis. Let x be any feasible solution with
objective value z. Then from equation (3) we have
z* − z = Σ(j∈R) (zj − cj) xj ………. (13)
Since zj − cj ≤ 0 and xj ≥ 0 for all variables, then z* ≤ z, and so, as in equation (6), x* is an
optimal basic feasible solution.
Uniqueness and alternative optimal solutions
We can get more information from equation (13). If zj − cj < 0 for all nonbasic components,
then the current optimal solution is unique. To show this, let x be any feasible solution that
is distinct from x*. Then there is at least one nonbasic component xj that is positive, because if
all nonbasic components were zero, x would not be distinct from x*. From equation (13) it follows
that z > z*, and hence x* is the unique optimal solution.
Now consider the case where zj − cj ≤ 0 for all nonbasic components, but zk − ck = 0 for at
least one nonbasic variable xk. As xk is increased, we get (in the absence of degeneracy) points
that are distinct from x* but have the same objective value (why?). If xk is increased until it is
blocked by a basic variable, we get an alternative optimal basic feasible solution. The process of
increasing xk from zero until it is blocked generates an infinite number of alternative optimal
solutions.
Example: Solve the following problem.
Minimize −3x1 + x2
s.t. x1 + 2x2 + x3 = 4
−x1 + x2 + x4 = 1
x1, x2, x3, x4 ≥ 0
Solution:
Consider the basic feasible solution with basis B = (a1, a4) = [1 0; −1 1] and B⁻¹ = [1 0; 1 1].
The corresponding point is given by
xB = (x1, x4)ᵀ = B⁻¹b = [1 0; 1 1](4, 1)ᵀ = (4, 5)ᵀ, xN = (x2, x3)ᵀ = (0, 0)ᵀ.
The objective value is −12.
To see if we can improve the solution, calculate z2 − c2 and z3 − c3 as follows, noting that
cB = (c1, c4) = (−3, 0), so that w = cB B⁻¹ = (−3, 0):
z2 − c2 = w a2 − c2 = (−3, 0)(2, 1)ᵀ − 1 = −7
z3 − c3 = w a3 − c3 = (−3, 0)(1, 0)ᵀ − 0 = −3
Since both z2 − c2 < 0 and z3 − c3 < 0, the basic feasible solution
(x1, x2, x3, x4) = (4, 0, 0, 5) is the unique optimal solution.
Now consider a new problem where the objective function −2x1 − 4x2 is to be minimized over
the same region. Again, consider the same point (4, 0, 0, 5). The objective value is −8, and
cB = (−2, 0), w = cB B⁻¹ = (−2, 0). Calculate z2 − c2 and z3 − c3 as follows:
z2 − c2 = cB y2 − c2 = (−2, 0)(2, 3)ᵀ + 4 = 0
z3 − c3 = cB y3 − c3 = (−2, 0)(1, 1)ᵀ − 0 = −2
In this case, the given basic feasible solution is optimal, but it is no longer the unique optimal
solution. We see that by increasing x2, a family of optimal solutions is obtained. Actually,
if we increase x2, keep x3 = 0, and modify x1 and x4, we get
x1 = 4 − 2x2
x4 = 5 − 3x2
For any x2 ≤ 5/3, the solution x = (4 − 2x2, x2, 0, 5 − 3x2)ᵀ

is an optimal solution with objective value −8. In particular, if x2 = 5/3, we get an alternative basic
feasible optimal solution, where x4 leaves the basis. Note that the new objective function
contours are parallel to the hyperplane corresponding to the first constraint. That is why we
obtain alternative optimal solutions in this example. In general, whenever the optimal objective
function contour contains a face of dimension greater than zero, we will have alternative optimal
solutions.
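Using the problem data as reconstructed above (A, b, and the two cost vectors), the reduced costs and the family of alternative optima can be verified numerically:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [-1.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 1.0])
basic = [0, 3]                        # basis (a1, a4)
B_inv = np.linalg.inv(A[:, basic])

# Reduced costs z_j - c_j for the two objectives considered above.
for c, expected in [(np.array([-3.0, 1.0, 0.0, 0.0]), [0.0, -7.0, -3.0, 0.0]),
                    (np.array([-2.0, -4.0, 0.0, 0.0]), [0.0, 0.0, -2.0, 0.0])]:
    w = c[basic] @ B_inv              # simplex multipliers
    reduced = w @ A - c               # z_j - c_j per column
    assert np.allclose(reduced, expected)

# Family of optima for the second objective: x = (4 - 2t, t, 0, 5 - 3t), 0 <= t <= 5/3.
c2 = np.array([-2.0, -4.0, 0.0, 0.0])
for t in np.linspace(0.0, 5.0 / 3.0, 5):
    x = np.array([4 - 2 * t, t, 0.0, 5 - 3 * t])
    assert np.allclose(A @ x, b) and np.all(x >= -1e-9)  # still feasible
    assert abs(c2 @ x + 8.0) < 1e-9                      # objective stays at -8
```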
Unboundedness
Suppose that we have a basic feasible solution of the system Ax = b, x ≥ 0, with objective value
z0. Now let us consider the case when we find a corresponding nonbasic variable xk
with zk − ck > 0 and yk ≤ 0. This variable is eligible to enter the basis, since increasing xk
will improve the objective value. From equation (7) we have
z = z0 − (zk − ck)xk.
Since we are minimizing z and since zk − ck > 0, it is to our benefit to increase xk
indefinitely, which will make z go to −∞. The reason that we were not able to do this before was
that the increase in the value of xk was blocked by a basic variable. This puts a "ceiling" on xk,
beyond which a basic variable would become negative. But if blocking is not encountered, there is no
reason why we should stop increasing xk. This is precisely the case when yk ≤ 0. Recall that
from equation (8) we have xB = b̄ − yk xk, and so if yk ≤ 0, then xk can be increased
indefinitely without any of the basic variables becoming negative. Therefore, the solution x
(where xB = b̄ − yk xk, xk is arbitrarily large, and the other nonbasic components are zero) is
feasible, and its objective value z = z0 − (zk − ck)xk approaches −∞ as xk
approaches +∞.
To summarize, if we have a basic feasible solution with zk − ck > 0 for some nonbasic
variable xk, and meanwhile yk ≤ 0, then the optimum is unbounded with objective value −∞.
This is obtained by increasing xk indefinitely and adjusting the values of the current basic
variables, and is equivalent to moving along the ray:

{ (b̄, 0)ᵀ + xk d : xk ≥ 0 }

Note that the vertex of the ray is the current basic feasible solution (b̄, 0)ᵀ, and the direction of
the ray is d = (−yk, 0, …, 0, 1, 0, …, 0)ᵀ, where the 1 appears in the k-th position. It may be noted that
cd = (cB, cN) d = −cB yk + ck = −zk + ck.
But −zk + ck < 0 (because xk was eligible to enter the basis). Then cd < 0, which is the
necessary and sufficient condition for unboundedness.

4.6 The simplex Algorithm


We now give a summary of the simplex method for solving the following linear programming
problem.
Minimize cx
. Ax = b; x ≥ 0
Initialization Step
Choose a starting basic feasible solution with basis B:
Main Step
1. Solve the system B x_B = b, with unique solution x_B = B⁻¹b = b̄.
Let x_B = b̄, x_N = 0, and z = c_B x_B.
2. Solve the system wB = c_B, with unique solution w = c_B B⁻¹. The vector w is referred to as
the vector of simplex multipliers, because its components are the multipliers of the rows of A
that are added to the objective function in order to bring it into canonical form. Calculate
z_j − c_j = w a_j − c_j for all nonbasic variables (this is known as the pricing operation). Let

z_k − c_k = maximum { z_j − c_j : j ∈ R },

where R is the current set of indices associated with the nonbasic variables. If z_k − c_k ≤ 0,
then stop with the current basic feasible solution as an optimal solution. Otherwise go to step 3
with x_k as the entering variable. (This strategy for selecting an entering variable is known as
Dantzig's rule.)
3. Solve the system B y_k = a_k (with unique solution y_k = B⁻¹a_k). If y_k ≤ 0, then stop with
the conclusion that the optimal solution is unbounded along the ray

{ (b̄, 0) + x_k (−y_k, e_k) : x_k ≥ 0 },

where e_k is an (n − m)-vector of zeros except for a 1 at the kth position. If y_k ≰ 0, go to step 4.
4. Let x_k enter the basis. The index r of the blocking variable x_{B_r}, which leaves the basis,
is determined by the following minimum ratio test:

b̄_r / y_rk = minimum { b̄_i / y_ik : y_ik > 0, 1 ≤ i ≤ m }.

Update the basis B, where a_k replaces a_{B_r}, update the index set R, and repeat step 1.
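The four steps above can be sketched directly in code. The following is a minimal numpy illustration (not part of the module; the function and variable names are our own), which solves the three linear systems with a dense solver at every iteration instead of maintaining B⁻¹:

```python
import numpy as np

def revised_simplex(c, A, b, basis, tol=1e-9):
    """Sketch of the simplex algorithm of Section 4.6 (minimization).

    Minimizes c @ x subject to A @ x = b, x >= 0, starting from a given
    basic feasible solution described by `basis` (column indices with
    B^-1 b >= 0).  Returns (x, z); raises ValueError if unbounded.
    """
    A = np.asarray(A, float); b = np.asarray(b, float); c = np.asarray(c, float)
    m, n = A.shape
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)              # step 1: solve B x_B = b
        w = np.linalg.solve(B.T, c[basis])       # step 2: solve wB = c_B
        R = [j for j in range(n) if j not in basis]
        zc = {j: w @ A[:, j] - c[j] for j in R}  # pricing: z_j - c_j
        k = max(zc, key=zc.get)                  # Dantzig's rule
        if zc[k] <= tol:                         # all z_j - c_j <= 0: optimal
            x = np.zeros(n); x[basis] = x_B
            return x, float(c @ x)
        y_k = np.linalg.solve(B, A[:, k])        # step 3: solve B y_k = a_k
        if np.all(y_k <= tol):
            raise ValueError("unbounded along the ray in direction of x_%d" % k)
        # step 4: minimum ratio test; a_k replaces a_{B_r}
        r = min((x_B[i] / y_k[i], i) for i in range(m) if y_k[i] > tol)[1]
        basis[r] = k
```

On the numerical example solved later in this section (minimize x₁ + x₂ − 4x₃), starting from the slack basis this returns x₁ = 1/3, x₃ = 13/3 with z = −17.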


4.7 Degeneracy and finiteness of simplex algorithm


Modification for maximization problems
A maximization problem can be transformed into a minimization problem by multiplying the
objective coefficients by −1. A maximization problem can also be handled directly as follows. Let
z_k − c_k instead be the minimum z_j − c_j over the nonbasic j; the stopping criterion is z_k − c_k ≥ 0.
Otherwise, the steps are as before.
Finite convergence of the simplex method in the absence of degeneracy
Note that, at each iteration, one of the following three actions is executed: we may stop with an
optimal extreme point if z_j − c_j ≤ 0 for all nonbasic j; we may stop with an unbounded solution
if z_k − c_k > 0 and y_k ≤ 0; or else we generate a new basic feasible solution if z_k − c_k > 0
and y_k ≰ 0. In the absence of degeneracy, b̄_r > 0 and hence x_k = b̄_r / y_rk > 0. Therefore the
difference between the objective values at the previous iteration and the current iteration is
(z_k − c_k) x_k > 0. In other words, the objective function strictly decreases at each iteration,
and hence the basic feasible solutions generated by the simplex method are distinct. Since there
is only a finite number of basic feasible solutions, the method stops in a finite number of steps,
either with a finite optimal solution or with an unbounded optimal solution.
Theorem (Finite Convergence): In the absence of degeneracy (and assuming feasibility), the
simplex method stops in a finite number of iterations, either with an optimal basic feasible
solution or with the conclusion that the optimal value is unbounded.
In the presence of degeneracy, there is the possibility of cycling in an infinite loop.

4.8 Finding a starting basic feasible solution


The simplex method in tableau format
At each iteration of the simplex method, the following linear systems of equations need to be
solved: B x_B = b, wB = c_B, and B y_k = a_k. Various procedures for solving and updating these
systems lead to different algorithms that all lie under the general framework of the simplex
method described previously. In this section we describe the simplex method in tableau format.
Suppose that we have a starting basic feasible solution x with basis B. The LP can be represented
as follows.


Minimize z
subject to z − c_B x_B − c_N x_N = 0 (15)
B x_B + N x_N = b; x_B, x_N ≥ 0 (16)
From equation (16) we have x_B + B⁻¹N x_N = B⁻¹b (17)
Multiplying (17) by c_B and adding to equation (15), we get
z + 0 x_B + (c_B B⁻¹N − c_N) x_N = c_B B⁻¹b (18)
Currently x_N = 0, and from equations (17) and (18) we get x_B = B⁻¹b and z = c_B B⁻¹b.
Also, from (17) and (18) we can conveniently represent the current basic feasible solution with
basis B in the following tableau. Here we think of z as a basic variable to be minimized. The
objective row will be referred to as row zero, and the remaining rows are rows 1 through m. The
right-hand-side column (RHS) will denote the values of the basic variables (including the
objective function). The basic variables are identified in the far left column.

      z   x_B   x_N               RHS
z     1    0    c_B B⁻¹N − c_N    c_B B⁻¹b
x_B   0    I    B⁻¹N              B⁻¹b

The tableau in which z and x_B have been solved in terms of x_N is said to be in canonical form.
Not only does this tableau give us the value of the objective function and the basic variables on
the right-hand side, but it also gives us all the information we need to proceed with the simplex
method. Actually the cost row gives us c_B B⁻¹N − c_N, which consists of the z_j − c_j's for the
nonbasic variables. So row zero tells us whether we are at an optimal solution (if z_j − c_j ≤ 0
for every nonbasic j), and which nonbasic variable to increase otherwise. If x_k is increased,
then the vector y_k = B⁻¹a_k, which is stored in rows 1 through m under the variable x_k, helps
us determine by how much x_k can be increased. If y_k ≤ 0, then x_k can be increased indefinitely
without being blocked, and the optimal objective is unbounded. On the other hand, if y_k ≰ 0,
that is, if y_k has at least one positive component, then the increase in x_k will be blocked by
one of the current basic variables, which drops to zero. The minimum ratio test (which can be
performed since b̄ = B⁻¹b and y_k are both available in the tableau) determines the blocking
variable. We would like to have a scheme that will do the following.
a. Update the basic variables and their values.
b. Update the z_j − c_j values of the new nonbasic variables.


c. Update the y_j columns.


Pivoting: All of the foregoing tasks can be accomplished simultaneously by a simple pivoting
operation. If x_k enters the basis and x_{B_r} leaves the basis, then pivoting on y_rk can be
stated as follows.
a. Divide row r by y_rk.
b. For i = 1, …, m and i ≠ r, update row i by adding to it −y_ik times the new row r.
c. Update row zero by adding to it (c_k − z_k) times the new row r.
The following tables represent the situation immediately before and after pivoting.
Table 1: Before pivoting

          z   x_{B_1} … x_{B_r} … x_{B_m}   …  x_j       …  x_k       RHS
z         1     0     …    0    …    0      …  z_j − c_j …  z_k − c_k  c_B b̄
x_{B_1}   0     1     …    0    …    0      …  y_{1j}    …  y_{1k}     b̄₁
⋮
x_{B_r}   0     0     …    1    …    0      …  y_{rj}    …  y_{rk}     b̄_r
⋮
x_{B_m}   0     0     …    0    …    1      …  y_{mj}    …  y_{mk}     b̄_m


Table 2: After pivoting

          z   x_{B_1} …  x_{B_r}              … x_{B_m}  …  x_j                                 …  x_k   RHS
z         1     0     …  −(z_k − c_k)/y_rk    …    0     …  (z_j − c_j) − (z_k − c_k) y_rj/y_rk …   0    c_B b̄ − (z_k − c_k) b̄_r/y_rk
x_{B_1}   0     1     …  −y_{1k}/y_rk         …    0     …  y_{1j} − y_{rj} y_{1k}/y_rk         …   0    b̄₁ − b̄_r y_{1k}/y_rk
⋮
x_k       0     0     …   1/y_rk              …    0     …  y_{rj}/y_rk                         …   1    b̄_r/y_rk
⋮
x_{B_m}   0     0     …  −y_{mk}/y_rk         …    1     …  y_{mj} − y_{rj} y_{mk}/y_rk         …   0    b̄_m − b̄_r y_{mk}/y_rk
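Rules (a)–(c) amount to a single elementary row-reduction step. A small numpy sketch (our own helper, not the module's notation) that performs one such pivot on a tableau whose row 0 is row zero and whose last column is the RHS:

```python
import numpy as np

def pivot(T, r, k):
    """One pivot on entry (r, k) of a simplex tableau T, following rules
    (a)-(c): normalize row r, then zero out column k in every other row,
    including row zero.  Returns the updated tableau."""
    T = np.asarray(T, float).copy()
    T[r] = T[r] / T[r, k]              # (a) divide row r by y_rk
    for i in range(T.shape[0]):
        if i != r:                     # (b), (c) add -T[i, k] times the new row r
            T[i] = T[i] - T[i, k] * T[r]
    return T
```

Applied to the Iteration 1 tableau of the worked example later in this section (pivoting on the bracketed entry in the x₃ column), it reproduces the Iteration 2 tableau.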

The simplex method in tableau format (Minimization Problem)


Initialization step
Find an initial basic feasible solution with basis B. Form the following initial tableau:

      z   x_B   x_N               RHS
z     1    0    c_B B⁻¹N − c_N    c_B B⁻¹b
x_B   0    I    B⁻¹N              B⁻¹b

Main step
Let z_k − c_k = maximum { z_j − c_j : j ∈ R }. If z_k − c_k ≤ 0, then stop; the current solution
is optimal. Otherwise examine y_k. If y_k ≤ 0, then stop; the optimal solution is unbounded along
the ray

{ (b̄, 0) + x_k (−y_k, e_k) : x_k ≥ 0 },

where e_k is a vector of zeros except for a 1 at the kth position. If y_k ≰ 0, determine the
index r as follows:

b̄_r / y_rk = minimum { b̄_i / y_ik : y_ik > 0, 1 ≤ i ≤ m }.

Update the tableau by pivoting at y_rk. Update the basic and nonbasic variables, where x_k enters
the basis and x_{B_r} leaves the basis, and repeat the main step.


Example: Minimize x₁ + x₂ − 4x₃
subject to x₁ + x₂ + 2x₃ ≤ 9; x₁ + x₂ − x₃ ≤ 2; −x₁ + x₂ + x₃ ≤ 4; x₁, x₂, x₃ ≥ 0.
Introducing the nonnegative slack variables x₄, x₅, x₆, the problem becomes
Minimize x₁ + x₂ − 4x₃ + 0x₄ + 0x₅ + 0x₆
subject to x₁ + x₂ + 2x₃ + x₄ = 9; x₁ + x₂ − x₃ + x₅ = 2; −x₁ + x₂ + x₃ + x₆ = 4; x_j ≥ 0 for each j.
Since b ≥ 0, we can choose our initial basis as B = [a₄, a₅, a₆] = I₃, and we indeed have
B⁻¹b = b ≥ 0.
Iteration 1

      z   x₁   x₂   x₃    x₄   x₅   x₆   RHS
z     1   −1   −1    4     0    0    0     0
x₄    0    1    1    2     1    0    0     9
x₅    0    1    1   −1     0    1    0     2
x₆    0   −1    1   [1]    0    0    1     4

Iteration 2

      z   x₁    x₂   x₃   x₄   x₅   x₆   RHS
z     1    3   −5     0    0    0   −4   −16
x₄    0   [3]  −1     0    1    0   −2     1
x₅    0    0    2     0    0    1    1     6
x₃    0   −1    1     1    0    0    1     4

Iteration 3

      z   x₁   x₂     x₃   x₄    x₅   x₆     RHS
z     1    0   −4      0   −1     0   −2     −17
x₁    0    1   −1/3    0    1/3   0   −2/3    1/3
x₅    0    0    2      0    0     1    1      6
x₃    0    0    2/3    1    1/3   0    1/3   13/3


This is the optimal tableau, since z_j − c_j ≤ 0 for all nonbasic variables. The optimal solution
is given by x₁ = 1/3, x₂ = 0, x₃ = 13/3, with objective value −17.
Note that the current optimal basis consists of the columns a₁, a₅, a₃.
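The three iterations above can be replayed mechanically. A short numpy sketch (our own code; row 0 stores the z_j − c_j values and the objective value in the RHS column, following this section's convention):

```python
import numpy as np

# Replay of the example: minimize x1 + x2 - 4*x3 subject to the three
# constraints above, starting from the slack basis (x4, x5, x6).
T = np.array([[-1., -1,  4, 0, 0, 0, 0],    # row zero: z_j - c_j, then z
              [ 1.,  1,  2, 1, 0, 0, 9],
              [ 1.,  1, -1, 0, 1, 0, 2],
              [-1.,  1,  1, 0, 0, 1, 4]])
basis = [3, 4, 5]                       # x4, x5, x6 (0-based column indices)
while True:
    k = int(np.argmax(T[0, :-1]))
    if T[0, k] <= 1e-9:                 # z_j - c_j <= 0 for all j: optimal
        break
    ratios = [T[i, -1] / T[i, k] if T[i, k] > 1e-9 else np.inf
              for i in range(1, 4)]
    r = 1 + int(np.argmin(ratios))      # minimum ratio test
    T[r] /= T[r, k]
    for i in range(4):
        if i != r:
            T[i] -= T[i, k] * T[r]      # pivot on y_rk
    basis[r - 1] = k
x = np.zeros(6)
x[basis] = T[1:, -1]
# x is approximately (1/3, 0, 13/3, 0, 6, 0), and T[0, -1] is -17
```

The loop enters x₃, then x₁, and stops at the same optimal tableau as above.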


Interpretation of Entries in the Simplex Tableau
Consider the following typical simplex tableau and assume nondegeneracy.

      z   x_B   x_N               RHS
z     1    0    c_B B⁻¹N − c_N    c_B B⁻¹b
x_B   0    I    B⁻¹N              B⁻¹b

The tableau may be thought of as a representation of both the basic variables x_B and the cost z
in terms of the nonbasic variables x_N. The nonbasic variables can therefore be thought of as
independent variables, whereas the basic variables are dependent variables. From row zero we have

z = c_B b̄ − Σ_{j∈R} (z_j − c_j) x_j,

and hence the rate of change of z as a function of a typical nonbasic variable x_j, namely
∂z/∂x_j, is simply −(z_j − c_j). In order to minimize z, we should increase x_j if ∂z/∂x_j < 0,
that is, if z_j − c_j > 0.
Also, the basic variables can be represented in terms of the nonbasic variables as follows:

x_B = b̄ − B⁻¹N x_N = b̄ − Σ_{j∈R} y_j x_j.

Therefore ∂x_{B_i}/∂x_j = −y_ij; that is, −y_ij is the rate of change of the ith basic variable
as a function of the nonbasic variable x_j. In other words, if x_j increases by one unit, then
the basic variable x_{B_i} decreases by the amount y_ij. A column y_j can alternatively be
interpreted as follows. Recall that a_j = B y_j, and hence y_j represents the linear combination
of the basic columns that is needed to represent a_j; more specifically, a_j = Σ_{i=1}^{m} y_ij a_{B_i}.


The simplex tableau also gives us a convenient way of predicting the rate of change of the
objective function and of the values of the basic variables as a function of the right-hand-side
vector b. Since the right-hand-side vector usually represents scarce resources, we can predict
the rate of change of the objective function as the availability of the resources is varied. In
particular,

z = c_B B⁻¹b − Σ_{j∈R} (z_j − c_j) x_j,

and hence ∂z/∂b = c_B B⁻¹. If the original identity consists of slack variables with zero costs,
the elements of row zero of the final tableau under the slacks give c_B B⁻¹ − 0 = c_B B⁻¹, which
is ∂z/∂b. More specifically, if we let w = c_B B⁻¹, then ∂z/∂b_i = w_i.
Similarly, the rate of change of the basic variables as a function of the right-hand-side vector b
is given by ∂x_B/∂b = B⁻¹.
In particular, ∂x_{B_i}/∂b is the ith row of B⁻¹, ∂x_B/∂b_j is the jth column of B⁻¹, and
∂x_{B_i}/∂b_j is the (i, j) entry of B⁻¹.
Note that if the tableau corresponds to a degenerate basic feasible solution, then as a nonbasic
variable actually increases, at least one of the basic variables may immediately become negative,
destroying feasibility.
Identifying B⁻¹ from the simplex tableau
The basis inverse matrix B⁻¹ can be identified from the simplex tableau as follows.
Assume that the original tableau contains an identity matrix. The process of reducing the basis
matrix B of the original tableau to an identity matrix in the current tableau is equivalent to
premultiplying rows 1 through m of the original tableau by B⁻¹.
This also converts the identity matrix of the original tableau to B⁻¹. Therefore, B⁻¹ can be
extracted from the current tableau as the submatrix in rows 1 through m under the original
identity columns.
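This can be checked numerically on the example solved earlier in this section: at optimality the basis columns are (a₁, a₅, a₃), and the submatrix of the final tableau under the original identity columns (x₄, x₅, x₆) should equal B⁻¹. A small numpy verification (our own check, with numbers taken from Iteration 3):

```python
import numpy as np

B = np.array([[ 1., 0,  2],     # columns a1, a5, a3 of the constraint matrix
              [ 1., 1, -1],
              [-1., 0,  1]])
under_slacks = np.array([[1/3, 0, -2/3],   # rows 1..3 of the final tableau,
                         [0,   1,  1  ],   # columns x4, x5, x6
                         [1/3, 0,  1/3]])
# The submatrix under the original identity columns is exactly B^-1.
assert np.allclose(np.linalg.inv(B), under_slacks)
```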
Initial Basic Feasible Solution:


Recall that the simplex method starts with a basic feasible solution and moves to an improved
basic feasible solution, until the optimal point is reached or unboundedness of the objective
function is verified. However, in order to initialize the simplex method, a basis B with
b̄ = B⁻¹b ≥ 0 must be available. We now show that the simplex method can always be initiated
with a very simple basis, namely the identity.
Easy Case:
Suppose that the constraints are Ax ≤ b, x ≥ 0, where A is an m×n matrix and b is a nonnegative
m-vector. By adding the slack vector x_s, the constraints are put in the following canonical
standard form: Ax + x_s = b, x ≥ 0, x_s ≥ 0. Note that the new m × (n + m) constraint matrix
(A, I) has rank m, and a basic feasible solution of this system is at hand by letting x_s be the
basic vector and x the nonbasic vector. Hence the starting basic feasible solution is x_s = b and
x = 0, and the simplex method can now be applied.
Some bad cases
On many occasions, finding a starting basic feasible solution is not as straightforward as in the
case just described. To illustrate, suppose that the constraints are of the form Ax ≤ b, x ≥ 0,
but the vector b is not nonnegative. In this case, after introducing the slack vector x_s, we
cannot let x = 0, because x_s = b violates the nonnegativity requirement.
Another situation occurs when the constraints are of the form Ax ≥ b, x ≥ 0, where b ≰ 0.
After subtracting the slack vector x_s, we get Ax − x_s = b, x ≥ 0, x_s ≥ 0. Again, there is no
obvious way of picking a basis B from the matrix (A, −I) with B⁻¹b ≥ 0. In general, any linear
programming problem can be transformed into a problem of the following form:
Minimize cx
subject to Ax = b, x ≥ 0,
where b ≥ 0 (if b_i ≤ 0, the ith row can be multiplied by −1). This can be accomplished by
introducing slack variables and by simple manipulation of the constraints and variables. If A
contains an identity matrix, then an immediate basic feasible solution is at hand by simply
letting B = I; since b ≥ 0, then B⁻¹b = b ≥ 0. Otherwise, something else must be done. A
trial-and-error approach may be futile, particularly if the problem is infeasible.
Example
a. Consider the following constraints


x₁ + x₂ ≤ 4; −x₁ + x₂ ≤ 1; x₁, x₂ ≥ 0.
After adding the slack variables x₃ & x₄, we get
x₁ + x₂ + x₃ = 4; −x₁ + x₂ + x₄ = 1; x₁, x₂, x₃, x₄ ≥ 0.
An obvious starting basic feasible solution is given by x_s = (x₃, x₄) = (4, 1) and
x = (x₁, x₂) = (0, 0).
b. Consider the following constraints:
x₁ − x₂ + 2x₃ ≤ 6; −2x₁ + 2x₂ + 3x₃ ≥ 3; x₁, x₂ ≥ 0.
Note that x₃ is unrestricted, so the change of variable x₃ = x₃′ − x₃″ (with x₃′, x₃″ ≥ 0) is
made. Also the slack variable x₄ and the surplus variable x₅ are introduced. This leads to the
following constraints in canonical standard form:
x₁ − x₂ + 2x₃′ − 2x₃″ + x₄ = 6; −2x₁ + 2x₂ + 3x₃′ − 3x₃″ − x₅ = 3;
x₁, x₂, x₃′, x₃″, x₄, x₅ ≥ 0.
Note that the constraint matrix does not contain an identity and no obvious feasible basis B can
be extracted.
c. Consider constraints of the form:
x₁ + x₂ − x₃ ≤ −b₁; −x₁ + x₂ + x₃ ≤ b₂; x₁, x₂, x₃ ≥ 0, with b₁, b₂ > 0.
Since the right-hand side of the first constraint is negative, the first inequality is multiplied
by −1. Introducing the surplus variable x₄ and the slack variable x₅ leads to the following system:
−x₁ − x₂ + x₃ − x₄ = b₁; −x₁ + x₂ + x₃ + x₅ = b₂; x₁, x₂, x₃, x₄, x₅ ≥ 0.
Note again that this constraint matrix contains no identity submatrix.
Artificial variables
After manipulating the constraints and introducing slack variables, suppose that the constraints
are put in the format Ax = b, x ≥ 0, where A is an m×n matrix and b ≥ 0 is an m-vector. Further
suppose that A has no identity submatrix. In this case we resort to artificial variables to get a
starting basic feasible solution, and then use the simplex method itself to get rid of these
artificial variables.
To illustrate, suppose that we change the restrictions by adding an artificial vector x_a,
leading to the system Ax + x_a = b, x ≥ 0, x_a ≥ 0. Note that by construction we have forced an
identity matrix corresponding to the artificial vector. This gives an immediate basic feasible
solution of the new system, namely x_a = b and x = 0. Even though we now have a starting basic
feasible solution and the simplex method can be applied, we have in effect changed the problem.


In order to get back to our original problem, we must force these artificial variables to zero,
because = , ≥ 0 if and only if = 0. In other words artificial variables are only a tool to
get the simplex method started; however, we must guarantee that these variables will eventually
drop to zero.
Example: Consider the following constraints:
x₁ + x₂ ≥ 6; −x₁ + x₂ ≥ 4; x₁ + x₂ ≤ 5; x₁, x₂ ≥ 0.
Introducing the surplus and slack variables x₃, x₄, x₅, we get
x₁ + x₂ − x₃ = 6; −x₁ + x₂ − x₄ = 4; x₁ + x₂ + x₅ = 5; x₁, x₂, x₃, x₄, x₅ ≥ 0.
This constraint matrix has no identity submatrix. We could introduce three artificial variables
to obtain a starting basic feasible solution. Note, however, that x₅ appears in the last row with
coefficient 1, so we only need to introduce the artificial variables x₆ & x₇, which leads to the
following system:
x₁ + x₂ − x₃ + x₆ = 6; −x₁ + x₂ − x₄ + x₇ = 4; x₁ + x₂ + x₅ = 5;
x₁, x₂, x₃, x₄, x₅, x₆ & x₇ ≥ 0.
Now we have an immediate starting basic feasible solution of the new system, namely
x₆ = 6, x₇ = 4 & x₅ = 5, and we would like the artificial variables to drop to zero.

4.8.1 The two phase method


Various methods can be used to eliminate the artificial variables. One of these is to minimize
the sum of the artificial variables, subject to the constraints
Ax + x_a = b, x ≥ 0, x_a ≥ 0.
If the original problem has a feasible solution, then the optimal value of this problem is zero,
where all the artificial variables drop to zero. More importantly, as the artificial variables
drop to zero, they leave the basis, and legitimate variables enter instead.
Eventually all artificial variables leave the basis (this is not always the case, because we may
have an artificial variable in the basis at zero level). The basis then consists of legitimate
variables. In other words, we get a basic feasible solution of the original system Ax = b, x ≥ 0,
and the simplex method can be started with the original objective function cx. If, on the other
hand, after solving this problem we are left with a positive artificial variable, then the
original problem has no feasible solution (why?). This procedure is called the two-phase method.
In the first phase we reduce the artificial variables to value zero or conclude that the original
problem has no feasible solution. In the former case, the second phase minimizes the original
objective function starting with the basic feasible solution obtained at the end of phase I.


Phase I
Solve the following linear program, starting with the basic feasible solution x = 0 and x_a = b:
Minimize 1x_a
subject to Ax + x_a = b
x, x_a ≥ 0.
If at optimality x_a ≠ 0, then stop; the original problem has no feasible solution. Otherwise let
the basic and nonbasic legitimate variables be x_B and x_N (we are assuming that all artificial
variables left the basis). Go to phase II.
Phase II
Solve the following linear program, starting with the basic feasible solution x_B = b̄ and x_N = 0:
Minimize c_B x_B + c_N x_N
subject to x_B + B⁻¹N x_N = b̄; x_B, x_N ≥ 0.
Example: Minimize x₁ − 2x₂
subject to x₁ + x₂ ≥ 2
−x₁ + x₂ ≥ 1
x₂ ≤ 3
x₁, x₂ ≥ 0.
After introducing the surplus and slack variables x₃, x₄, x₅, the following problem is obtained:
Minimize x₁ − 2x₂
subject to x₁ + x₂ − x₃ = 2
−x₁ + x₂ − x₄ = 1
x₂ + x₅ = 3
x₁, x₂, x₃, x₄, x₅ ≥ 0.
An initial identity submatrix is not available, so introduce the artificial variables x₆ & x₇.
Phase I starts by minimizing x₀ = x₆ + x₇.


Phase I
      x₀   x₁   x₂   x₃   x₄   x₅   x₆   x₇   RHS
x₀     1    0    0    0    0    0   −1   −1    0
x₆     0    1    1   −1    0    0    1    0    2
x₇     0   −1    1    0   −1    0    0    1    1
x₅     0    0    1    0    0    1    0    0    3

Add rows 1 and 2 to row 0 so that z₆ − c₆ = z₇ − c₇ = 0 will be displayed.

      x₀   x₁   x₂    x₃   x₄   x₅   x₆   x₇   RHS
x₀     1    0    2   −1   −1    0    0    0    3
x₆     0    1    1   −1    0    0    1    0    2
x₇     0   −1   [1]   0   −1    0    0    1    1
x₅     0    0    1    0    0    1    0    0    3

      x₀   x₁    x₂   x₃   x₄   x₅   x₆   x₇   RHS
x₀     1    2    0   −1    1    0    0   −2    1
x₆     0   [2]   0   −1    1    0    1   −1    1
x₂     0   −1    1    0   −1    0    0    1    1
x₅     0    1    0    0    1    1    0   −1    2

      x₀   x₁   x₂   x₃     x₄     x₅   x₆     x₇     RHS
x₀     1    0    0    0      0     0   −1     −1      0
x₁     0    1    0   −1/2    1/2   0    1/2   −1/2    1/2
x₂     0    0    1   −1/2   −1/2   0    1/2    1/2    3/2
x₅     0    0    0    1/2    1/2   1   −1/2   −1/2    3/2

This is the end of phase I. We have a starting basic feasible solution, (x₁, x₂) = (1/2, 3/2).
Now we are ready to start phase II, where the original objective is minimized starting from the
extreme point (1/2, 3/2). The artificial variables are disregarded from any further consideration.

Phase II
      z   x₁   x₂   x₃     x₄     x₅   RHS
z      1   −1    2    0      0     0     0
x₁     0    1    0   −1/2    1/2   0    1/2
x₂     0    0    1   −1/2   −1/2   0    3/2
x₅     0    0    0    1/2    1/2   1    3/2

Multiplying rows 1 and 2 by 1 and −2, respectively, and adding to row 0 produces
z₁ − c₁ = z₂ − c₂ = 0.

      z   x₁   x₂   x₃     x₄     x₅   RHS
z      1    0    0   1/2    3/2    0   −5/2
x₁     0    1    0  −1/2   [1/2]   0    1/2
x₂     0    0    1  −1/2   −1/2    0    3/2
x₅     0    0    0   1/2    1/2    1    3/2

      z   x₁   x₂   x₃    x₄   x₅   RHS
z      1   −3    0    2    0    0   −4
x₄     0    2    0   −1    1    0    1
x₂     0    1    1   −1    0    0    2
x₅     0   −1    0   [1]   0    1    1

      z   x₁   x₂   x₃   x₄   x₅   RHS
z      1   −1    0    0    0   −2   −6
x₄     0    1    0    0    1    1    2
x₂     0    0    1    0    0    1    3
x₃     0   −1    0    1    0    1    1

Since z_j − c_j ≤ 0 for all nonbasic variables, the optimal point (0, 3) with objective value −6
is reached. Note that phase I moved from the infeasible point (0, 0) to the infeasible point
(0, 1), and finally to (1/2, 3/2). From this extreme point, phase II moved to the feasible point
(0, 2) and finally to the optimal point. The purpose of phase I is to get us to an extreme point
of the feasible region, while phase II takes us from this feasible point to an optimal point.
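Both phases can be carried out numerically. The following numpy sketch (our own code, assuming that every artificial variable leaves the basis at the end of phase I) reproduces the example above:

```python
import numpy as np

def _dantzig_step(T, basis, tol=1e-9):
    """Tableau main step (minimization); row 0 holds the z_j - c_j values."""
    while True:
        k = int(np.argmax(T[0, :-1]))
        if T[0, k] <= tol:
            return
        col = T[1:, k]
        if np.all(col <= tol):
            raise ValueError("unbounded")
        ratios = [T[i + 1, -1] / col[i] if col[i] > tol else np.inf
                  for i in range(len(col))]
        r = 1 + int(np.argmin(ratios))
        T[r] /= T[r, k]
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, k] * T[r]
        basis[r - 1] = k

def two_phase(c, A, b):
    """Minimize c @ x s.t. A @ x = b (b >= 0), x >= 0, via the two-phase
    method.  Assumes all artificials leave the basis at the end of phase I."""
    A = np.asarray(A, float); b = np.asarray(b, float); c = np.asarray(c, float)
    m, n = A.shape
    # Phase I: minimize the sum of the artificial variables.
    T = np.zeros((m + 1, n + m + 1))
    T[1:, :n], T[1:, n:n + m], T[1:, -1] = A, np.eye(m), b
    T[0, n:n + m] = -1.0
    T[0] += T[1:].sum(axis=0)           # price out the artificial basis
    basis = list(range(n, n + m))
    _dantzig_step(T, basis)
    if T[0, -1] > 1e-7:
        raise ValueError("original problem is infeasible")
    # Phase II: drop the artificial columns, restore the original costs.
    T = np.delete(T, np.s_[n:n + m], axis=1)
    T[0, :-1], T[0, -1] = -c, 0.0
    for i, j in enumerate(basis):       # price out the phase I basis
        T[0] -= T[0, j] * T[i + 1]
    _dantzig_step(T, basis)
    x = np.zeros(n)
    x[basis] = T[1:, -1]
    return x, float(T[0, -1])
```

On the example above (minimize x₁ − 2x₂ in standard form with x₃, x₄, x₅) this ends at x = (0, 3, 1, 2, 0) with objective −6, matching the final tableau.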
Analysis of the two-phase method
At the end of phase I we have either x_a ≠ 0 or x_a = 0.
Case 1: x_a ≠ 0
If x_a ≠ 0, then the original problem has no feasible solution, because if there were an x ≥ 0
with Ax = b, then (x, 0) would be a feasible solution of the phase I problem with objective value
0x + 1·0 = 0 < 1x_a, violating the optimality of the phase I solution.
Example: Minimize −x₁ + x₂
subject to x₁ + x₂ ≤ 4; 2x₁ + 3x₂ ≥ 18; x₁, x₂ ≥ 0.
The constraints admit no feasible points; this will be detected by phase I. Introducing the slack
variable x₃ and the surplus variable x₄, we get the following constraints in standard form:
x₁ + x₂ + x₃ = 4;
2x₁ + 3x₂ − x₄ = 18;
x₁, x₂, x₃, x₄ ≥ 0.
Since no convenient basis exists, introduce the artificial variable x₅ into the second
constraint. Phase I is now used to try to get rid of the artificial variable.
PHASE I
      x₀   x₁   x₂   x₃   x₄   x₅   RHS
x₀     1    0    0    0    0   −1    0
x₃     0    1    1    1    0    0    4
x₅     0    2    3    0   −1    1   18

      x₀   x₁   x₂    x₃   x₄   x₅   RHS
x₀     1    2    3     0   −1    0   18
x₃     0    1   [1]    1    0    0    4
x₅     0    2    3     0   −1    1   18

      x₀   x₁   x₂   x₃   x₄   x₅   RHS
x₀     1   −1    0   −3   −1    0    6
x₂     0    1    1    1    0    0    4
x₅     0   −1    0   −3   −1    1    6

The optimality criterion of the simplex method, namely z_j − c_j ≤ 0 for all nonbasic variables,
is satisfied, but the artificial variable x₅ = 6 > 0. We conclude that the original problem has
no feasible solution.
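The infeasibility can also be confirmed by brute force. A small sanity check (our own, on a 0.01 grid): wherever x₁ + x₂ ≤ 4 with x₁, x₂ ≥ 0, the value of 2x₁ + 3x₂ never exceeds 12 < 18, and the shortfall 18 − 12 = 6 is exactly the positive artificial x₅ = 6 left at the end of phase I.

```python
# Enumerate the grid 0.00, 0.01, ..., 4.00 in each coordinate and keep
# only points satisfying the first constraint; record the best value the
# second constraint's left-hand side can attain there.
grid = [i / 100 for i in range(401)]
best = max(2 * x1 + 3 * x2
           for x1 in grid for x2 in grid if x1 + x2 <= 4)
# best is 12.0, attained at (x1, x2) = (0, 4) -- strictly below 18
```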

Case 2: x_a = 0
Subcase 1: All artificial variables are out of the basis
Since at the end of phase I we have a basic feasible solution, and since x_a is out of the basis,
the basis consists entirely of legitimate variables. If the legitimate vector x is decomposed
accordingly into (x_B, x_N), then at the end of phase I we have the following tableau:

      x₀   x_B   x_N    x_a    RHS
x₀     1    0     0     −1      0
x_B    0    I    B⁻¹N   B⁻¹     b̄

Now phase II can be started with the original objective, after discarding the columns
corresponding to x_a. (Note, however, that an artificial variable should never be permitted to
enter the basis again.)
The z_j − c_j's for the nonbasic variables are given by the vector c_B B⁻¹N − c_N, which can be
easily calculated from the matrix B⁻¹N stored in the final tableau of phase I. The following
initial tableau of phase II is constructed. Starting with this tableau, the simplex method is
used to find the optimal solution.

      z   x_B   x_N               RHS
z     1    0    c_B B⁻¹N − c_N    c_B b̄
x_B   0    I    B⁻¹N              b̄

Subcase 2 (Some artificial variables are in the basis at zero level)
In this case we may proceed directly to phase II, or else first eliminate the artificial basic
variables and then proceed to phase II.
Proceeding directly to phase II
First eliminate the columns corresponding to the nonbasic artificial variables of phase I. The
starting basis of phase II consists of some legitimate variables and some artificial variables at
zero level. The cost row, consisting of the z_j − c_j's, is constructed for the original
objective function so that all legitimate variables that are basic have z_j − c_j = 0. The cost
coefficients of the artificial variables are given the value zero (justify). While solving the
phase II problem by the simplex method, we must be careful that the artificial variables never
reach a positive level (since this would destroy feasibility).
To illustrate, consider the following tableau, where for simplicity we assume that the basis
consists of the legitimate variables x₁, x₂, …, x_k and the artificial variables
x_{a_{k+1}}, …, x_{a_m} (the artificial variables x_{a_1}, …, x_{a_k} left the basis during
phase I).


             z   x₁ … x_k   …  x_j       …   x_{a_{k+1}} … x_{a_m}   RHS
z            1   0  …  0    …  z_j − c_j …       0       …    0      c_B b̄
x₁           0   1  …  0    …  y_{1j}    …       0       …    0      b̄₁
⋮
x_k          0   0  …  1    …  y_{kj}    …       0       …    0      b̄_k
x_{a_{k+1}}  0   0  …  0    …  y_{k+1,j} …       1       …    0      0
⋮
x_{a_m}      0   0  …  0    …  y_{mj}    …       0       …    1      0

Suppose that z_j − c_j > 0, so that x_j is eligible to enter the basis. If y_ij ≥ 0 for
i = k + 1, …, m, then the artificial variables remain zero as x_j enters the basis, and the usual
minimum ratio test is performed. If, on the other hand, y_ij < 0 for at least one
i ∈ {k + 1, …, m}, then that artificial variable becomes positive as x_j is increased. This
action must be prohibited, since it would destroy feasibility. It can be prevented by pivoting at
y_ij rather than using the usual minimum ratio test. Even though y_ij < 0, pivoting at y_ij
maintains feasibility, since the right-hand side of the corresponding row is zero. In this case
x_j enters the basis and the artificial variable x_{a_i} leaves the basis, while the objective
value remains constant. With this slight modification, the simplex method is used to solve
phase II.

4.8.2 Big m method


Recall that artificial variables constitute a tool that can be used to initiate the simplex
method. However, the presence of artificial variables at a positive level, by construction, means
that the current point is an infeasible solution of the original system. The two-phase method is
one way to get rid of the artificials. However, during phase I of the two-phase method the
original cost coefficients are essentially ignored: phase I seeks any basic feasible solution,
not necessarily a good one. Another possibility for eliminating the artificial variables is to
assign coefficients to these variables in the original objective function in such a way as to
make their presence in the basis at a positive level very unattractive from the objective
function point of view.
The big M method (M technique, or method of penalty)
This method consists of the following basic steps:
Step 1. Express the given LPP in SLPP (standard LPP) form by introducing slack and surplus
variables.
Step 2. Add nonnegative artificial variables to the left-hand side of all constraints of
(initially) ≥ or = type. The purpose of the artificial variables is to obtain an initial basic
feasible solution. They have, however, two drawbacks:
a. They are fictitious, have no physical meaning or economic significance, and have no relevance
to the problem.
b. Their introduction (addition) violates the equality of constraints that has already been
established in step 1.
They are, therefore, rightly termed artificial variables, as opposed to the real variables in the
problem. Therefore, we must get rid of these variables and must not allow them to appear in the
final solution. To achieve this, the artificial variables are assigned a very large unit penalty
in the objective function.
This penalty is designated by −M for maximization problems and +M for minimization problems,
where M > 0. The value of M is much higher than the cost coefficients of the other variables, and
for hand calculation it is not necessary to assign any specific value to it.
Step 3: Solve the modified LPP by the simplex method as usual.


In general we have the following


To use the simplex method:
All inequality constraints are to be changed into equality constraints by the introduction of
slack or surplus variables. An initial basic feasible solution is obtained smoothly and easily if
the coefficient matrix contains a unit basis and b ≥ 0. If such a unit basis does not exist, then
we introduce artificial variables on the left of the converted equations (those initially of ≥ or
= type) to get a unit basis matrix. This technique is used only to find an initial BFS of the
problem easily.
Note:
 An equation is kept an equation even after the introduction of an artificial variable on only
one side of the equation, which is not correct from the mathematical point of view; it is only
true if the value of the artificial variable equals zero.
 In solving problems of this type by the simplex method, one must make sure that the value of
all artificial variables at the optimal stage is zero.
 If it is not possible to bring all artificial variables to zero level at the optimal stage, we
conclude that the problem has no feasible solution.
The following cases may arise at the optimal stage:
 No artificial variable is present in the basis, which indicates that all artificial variables
are at zero level at the optimal stage. Thus the solution obtained is a BFS.
 Some artificial variables exist in the basis at a positive level at the optimal stage. In this
case, there exists no feasible solution.
 All artificial variables are at zero level, but at least one artificial variable is present in
the basis at the optimal stage. Here the solution under test is optimal, and we conclude that
some constraints may be redundant. By redundancy we mean that the system has more than enough
constraints.
Remarks:
Since the modified problem P(M) has a feasible solution (say x = 0 and x_a = b), while solving it
by the simplex method one of the following two cases may arise:
1. We arrive at an optimal solution of P(M).
2. We conclude that P(M) has an unbounded optimal solution.


Case 1: Finite optimal solution of P(M)


a. The optimal solution of P(M) has all artificial variables at value zero.
In this case the original problem has a finite optimal solution.
b. At the optimal solution, not all artificial variables are zero.
In this case we conclude that the original problem has no feasible solution.
Case 2: P(M) has an unbounded optimal solution
a. If all artificial variables are zero, then the original problem has an unbounded optimal
solution.
b. If at least one artificial variable is positive, then the original problem is infeasible.
Note: Once an artificial variable leaves the basis we forget all about it for ever and never
consider it as a variable to enter in to the basis at any iteration.
Example: Solve the following LPP by the big M method.
Minimize z = 4x + 3y
subject to x + 2y ≥ 8
3x + 2y ≥ 12, x, y ≥ 0.
Solution: The problem is a minimization problem and b > 0.
By introducing the surplus variables h₁, h₂ we have the following:
Minimize z = 4x + 3y
subject to x + 2y − h₁ = 8
3x + 2y − h₂ = 12, x, y, h₁, h₂ ≥ 0.
Since the coefficient matrix does not contain a unit basis matrix, we introduce two artificial
variables v₁, v₂. Therefore, we have the following LPP:
Minimize z = 4x + 3y + Mv₁ + Mv₂ (M > 0 large)
subject to x + 2y − h₁ + v₁ = 8
3x + 2y − h₂ + v₂ = 12, x, y, h₁, h₂, v₁, v₂ ≥ 0.
We have a unit basis matrix; construct the first tableau by taking v₁, v₂ as basic variables.

Dilla University, Department of Mathematics Page 117


Dill University

            x        y        h₁     h₂    v₁   v₂   RHS     Min ratio
z_j − c_j   4M−4     4M−3    −M     −M      0    0   20M
v₁          1        2       −1      0      1    0    8       4
v₂          3        2        0     −1      0    1   12       6

The simplex values z_j − c_j of the nonbasic variables x and y are positive, and
maximum{4M − 4, 4M − 3} = 4M − 3, so y becomes the entering basic variable.
From the minimum ratio test, v₁ becomes nonbasic and y becomes a basic variable.
Second iteration

            x         y    h₁      h₂    v₁         v₂   RHS      Min ratio
z_j − c_j   2M−5/2    0    M−3/2   −M    −2M+3/2    0    4M+12
y           1/2       1   −1/2      0     1/2       0     4        8
v₂          2         0    1       −1    −1         1     4        2

x becomes the entering basic variable; from the minimum ratio test, x enters the basis and v₂
leaves the basis.

            x    y    h₁     h₂     v₁       v₂       RHS
z_j − c_j   0    0   −1/4   −5/4   −M+1/4   −M+5/4    17
y           0    1   −3/4    1/4    3/4     −1/4       3
x           1    0    1/2   −1/2   −1/2      1/2       2

The simplex values z_j − c_j of the nonbasic variables are all negative, so the current basic
feasible solution is optimal: x = 2, y = 3 with z = 17.
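The tableaux above can be checked numerically by replacing the symbolic penalty with a large finite value, say M = 10⁶, and pivoting on a dense tableau (our own sketch, not part of the module):

```python
import numpy as np

# Big-M check: minimize 4x + 3y + M*(v1 + v2) with a large finite M.
# Columns: x, y, h1, h2, v1, v2; row 0 holds the z_j - c_j values.
M = 1e6
c = np.array([4.0, 3.0, 0.0, 0.0, M, M])
A = np.array([[1.0, 2.0, -1.0,  0.0, 1.0, 0.0],
              [3.0, 2.0,  0.0, -1.0, 0.0, 1.0]])
b = np.array([8.0, 12.0])
T = np.zeros((3, 8))
T[1:, :6], T[1:, -1] = A, b
T[0, :6] = -c
basis = [4, 5]                                # start with v1, v2 basic
for i, j in enumerate(basis):                 # price out the starting basis
    T[0] -= T[0, j] * T[i + 1]
while True:
    k = int(np.argmax(T[0, :-1]))
    if T[0, k] <= 1e-7:
        break
    ratios = [T[i + 1, -1] / T[i + 1, k] if T[i + 1, k] > 1e-9 else np.inf
              for i in range(2)]
    r = 1 + int(np.argmin(ratios))            # minimum ratio test
    T[r] /= T[r, k]
    for i in range(3):
        if i != r:
            T[i] -= T[i, k] * T[r]
    basis[r - 1] = k
sol = np.zeros(6)
sol[basis] = T[1:, -1]
# sol[:2] is approximately (2, 3); the objective T[0, -1] is 17
```

The pivot sequence matches the hand computation: y enters first (v₁ leaves), then x enters (v₂ leaves).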
Example: Minimize −x₁ − x₂ (problem P)
subject to x₁ − x₂ − x₃ − x₄ = 1
−x₁ + x₂ + 2x₃ − x₄ = 1; x₁, x₂, x₃, x₄ ≥ 0.
Introducing the artificial variables x₅, x₆, we write problem P as:
Minimize −x₁ − x₂ + Mx₅ + Mx₆ (problem P(M))
subject to x₁ − x₂ − x₃ − x₄ + x₅ = 1
−x₁ + x₂ + 2x₃ − x₄ + x₆ = 1; x₁, …, x₆ ≥ 0.
Taking x₅, x₆ as the initial basic variables and applying the simplex method, the most positive
z_j − c_j eventually corresponds to a column y_k ≤ 0. Therefore the big-M problem is unbounded.
Furthermore, since the artificials are equal to zero, the original problem has an unbounded
optimal solution along the ray {(3, 3, 0, 2, 0) + λ(1, 1, 0, 0, 0) : λ ≥ 0}.
Exercise 4.1
Solve the following LPPs by the big M method.
a) Minimize z = 2x₁ − 3x₂
subject to −x₁ + x₂ ≥ −2
5x₁ + 4x₂ ≤ 46
7x₁ + 2x₂ ≥ 32
x₁, x₂ ≥ 0
b) Minimize z = 4x₁ + 8x₂ + 3x₃
subject to x₁ + x₂ ≥ 2
2x₁ + x₃ ≥ 0
x₁, x₂, x₃ ≥ 0
c) Minimize z = 2x₁ − 3x₂
subject to 2x₁ + x₂ ≤ 8
10x₁ + 11x₂ ≥ 100
x₁, x₂ ≥ 0
d) Minimize z = x₁ + x₂ + x₃ + x₄
subject to −x₁ − x₂ + x₃ + x₄ ≤ b₁
−x₁ − x₂ + x₃ + x₄ ≤ b₂
−x₁ − x₂ − x₃ ≥ −b₃
x₁, x₂, x₃, x₄ ≥ 0

Degeneracy and cycling


Definition: An LP is degenerate if it has at least one BFS in which a basic variable is equal to
zero.
Degeneracy may occur
 at the initial stage, when at least one basic variable is zero in the initial BFS (this will
be so if a right-hand side of the constraints is zero);
 at any subsequent computation of the simplex tableau, when there is a tie for the minimum
ratio. This results in the other tied variable(s) becoming zero in the next tableau.
If an LPP is degenerate, there may exist a sequence of pivoting operations in which a basic
variable x_{B_r} = b̄_r = 0 is replaced by a nonbasic variable x_k = 0 without changing the value
of the objective function.
This may finally lead to a repetition of a BFS and is called cycling of the simplex method. To
avoid such an undesirable situation, we use the following rules, which specify the variable
leaving the basis when the simplex minimum ratio test produces several candidates, and which
guarantee non-cycling.
1. Lexicographic rule
Exiting rule
Given a basic feasible solution with basis B, suppose that the nonbasic variable x_k is chosen to
enter the basis. The index r of the variable leaving the basis is determined as follows. Form

I₀ = { r : b̄_r / y_rk = minimum_{1≤i≤m} { b̄_i / y_ik : y_ik > 0 } }.

If I₀ is a singleton, namely I₀ = {r}, then x_{B_r} leaves the basis. Otherwise form I₁ as
follows:

I₁ = { r : y_r1 / y_rk = minimum_{i∈I₀} { y_i1 / y_ik } }.

If I₁ is a singleton, I₁ = {r}, then x_{B_r} leaves the basis. Otherwise form I₂. In general, I_j
is formed from I_{j−1} as follows:

I_j = { r : y_rj / y_rk = minimum_{i∈I_{j−1}} { y_ij / y_ik } }.

For some j ≤ m, I_j will be a singleton. If I_j = {r}, then x_{B_r} leaves the basis.
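The rule can be expressed compactly in code. A sketch (our own function; it assumes the rows of (b̄, Y) are distinct, which the lexicographic method guarantees, so the tie-breaking loop terminates):

```python
import numpy as np

def lexicographic_exit(b_bar, Y, k, tol=1e-9):
    """Leaving-row index under the lexicographic rule: start from the
    tied set I0 of the minimum ratio test for entering column k, then
    break ties with the ratios Y[i, j] / Y[i, k] for j = 0, 1, ...
    (0-based columns of Y)."""
    rows = [i for i in range(len(b_bar)) if Y[i, k] > tol]
    ratio = {i: b_bar[i] / Y[i, k] for i in rows}
    best = min(ratio.values())
    I = [i for i in rows if ratio[i] - best < tol]          # I0
    j = 0
    while len(I) > 1:                                       # form I1, I2, ...
        col = {i: Y[i, j] / Y[i, k] for i in I}
        best = min(col.values())
        I = [i for i in I if col[i] - best < tol]
        j += 1
    return I[0]
```

For instance, with b̄ = (2, 2) and two rows tying in the ratio test, the first column of Y breaks the tie.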


Example: Consider a problem in which the minimum ratio test ties on rows 1 and 2, so that
I₀ = {1, 2}; forming I₁ then gives I₁ = {2}, and therefore x_{B₂} leaves the basis. The resulting
tableau gives the optimal solution.


2. Bland’s pivot selection rule


This is a very simple rule, but it restricts the choice of both the entering and leaving variables.
In this rule, variables are first ordered in some sequence, say x1, x2, …, xn, without loss of
generality. Then, of all nonbasic variables with z_j − c_j > 0, the one with the smallest index is
selected to enter the basis. Similarly, of all the candidates to leave the basis (those that tie in the
usual minimum ratio test), the one with the smallest index is chosen as the exiting variable.
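Bland's rule is simple enough to state directly as code. The sketch below (for a minimization tableau where candidates to enter have z_j − c_j > 0) picks the smallest-index entering variable and breaks ratio-test ties by the smallest basic-variable index; the demo data are hypothetical.

```python
from fractions import Fraction as F

def bland_entering(z_minus_c, basic):
    """Smallest-index nonbasic variable with z_j - c_j > 0 (minimization)."""
    for j, v in enumerate(z_minus_c):
        if j not in basic and v > 0:
            return j
    return None  # no candidate: the tableau is optimal

def bland_leaving(b_bar, col, basic):
    """Among rows tying in the minimum ratio test, pick the row whose
    basic variable has the smallest index."""
    cand = [i for i in range(len(b_bar)) if col[i] > 0]
    if not cand:
        return None  # unbounded direction
    best = min(F(b_bar[i]) / F(col[i]) for i in cand)
    tied = [i for i in cand if F(b_bar[i]) / F(col[i]) == best]
    return min(tied, key=lambda i: basic[i])

# Hypothetical data: variable 1 enters; rows 0 and 1 tie but row 1 holds x2 < x3.
e = bland_entering([0, 3, -1, 2], [0])
l = bland_leaving([2, 2], [1, 1], [3, 2])
print(e, l)  # entering index 1, leaving row 1
```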

4.9 Using solver (MS EXCEL) in solving linear programming


Using excel 2010 to solve linear programming problems
Working a Maximization or Minimization Problem

Microsoft Excel makes it easy to solve a “Maximization Problem” or “Minimization Problem.”


The following steps are used to solve a sample maximization problem: maximize z = 4x1 + 5x2 subject to x1 + 4x2 ≤ 9 and 4x1 + x2 ≤ 6 with x1, x2 ≥ 0. Minimization problems are done the same way except for some slight differences.
1. Open a new Excel Worksheet by clicking the Start button on the task bar, clicking on All
Programs, clicking on Microsoft Office, and selecting Microsoft Excel 2010.
2. Click on cell A1 and type the label: “Maximization Problem” or “Minimization Problem”
depending on the problem you are solving for. (You do not type the quotation marks. The
quotation marks indicate exactly what is to be typed.) Press Enter after each entry. Stretch
Columns A and B to be wider by putting the cursor between the column headers until the
cursor changes to “+” and then dragging to the desired width.
3. Use the down arrow key to move to cell A3 and type the label: “Objective Function”.
4. Click on cell A4 and type the objective function: “z=4x1+5x2”
5. Click on cell A6 and type the label: “Variables”
6. Using the down arrow key, click on cell A7 and type the first variable, “x1”
7. Using the down arrow key, click on cell A8 and type the second variable, “x2”. If there are
more variables, type each variable on the following rows.
8. Skip a row using the down arrow key and type the label: “Constraints”
9. Type each constraint inequality in a separate row of Column A. Leave out the basic
restrictions x1 ≥ 0 and x2 ≥ 0 because these constraints can be set automatically in the Solver
process.
10. Now you are ready to type the equations in Column B using cell names in the formulas
instead of variable names. Click on cell B4 and type the objective function. The objective
function z = 4x1 + 5x2 should be typed as "=4*B7+5*B8". Notice that variable x1 was replaced
with its cell location B7, and variable x2 was replaced with its cell location B8. It does not
matter whether you use capital letters or lower case letters in these entries. Repeat for all the
variables. After you enter each formula, you will see a zero on your Excel worksheet. For
Solver to work, you must use the cell names in Column B instead of the variables.
11. In Column B, use the down arrow to get to the variable rows and type zeros in cells B7 and B8. If you have
more variables, continue entering zeros. Below is a screen print with formulas shown. Note: to
show formulas, press the "Ctrl" and "`" (grave accent) keys at the same time; to change back,
press them again.
12. In Column B use the down arrow to get to the constraints rows. Type the following formulas,
replacing the variables with the cell names:
In cell B11, constraint x1 + 4 x2 should be typed as " = B7 + 4 * B8"
In cell B12, constraint 4 x1 + x2 should be typed as "= 4* B7 + B8"

13. In Column C, type the augments (constants on the right side of the constraint inequalities) for
each of the constraints.
In cell C10, type "Augments".
In cell C11, type "9".
In cell C12, type "6".
14. Click in cell B4 (the objective function) before going to the Data menu and selecting
“Solver.” The Data menu should be located on the far right side of the ribbon.
Make sure the Set Objective box is set to “$B$4” (If not, close the window, click in cell B4, and
open the Data Solver option again.) Note: "Objective" refers to the cell location of the objective
function formula. On the To: line select “Max” to find the maximum value or select “Min” to
find the minimum value. Click in the By Changing Variable Cells: box and type variables “B7,
B8” separated by commas. Note: Do not type the absolute signs (dollar signs) because Excel will
add them in for you. If you have more than two variables, make sure you include them in the list.
Click in the Subject to the Constraints box and then click the Add button. The Add Constraint
window will pop up
Type the first constraint cell location. For example, enter "B11" in the Cell Reference box. Select
the appropriate sign from “<=”, “=”, or “>=” as it appears in the linear programming problem by
clicking on the down arrow in the middle box. In the Constraint box, enter the first augment cell
location of "C11". Click the Add button. Enter the rest of the constraints and augments in the
same manner. When completed, click OK
15. In the Solver Parameters window, click on "Make Unconstrained Variables Non-Negative"
In the box for Select a Solving Method:, choose "Simplex LP". (In older versions of Excel, you
set these selections by choosing “Assume Linear Model” and “Assume Non-Negative” in the
Options window.)
16. In the Solver Parameters window click the Solve button.
17. A Solver Results window will appear telling you that the solver found a solution. Click
beside "Keep Solver Solution". Select Answer and Sensitivity from the Reports box and then
click OK.
18. A sheet tab labeled “Answer Report 1” should now be at the bottom of the screen. Click on it
to see the results.


19. If you need to re-run the Solver, type zeros in the cells for the variables. Also, right-click on
the tabs displaying the results (Answer Report, Sensitivity Report), and delete each of them.
20. Interpret the results. In this sample problem, the maximum is listed on row 16 under Final
Value. It equals 14. The maximum occurs when x1 = 1 and x2 = 2. These answers are shown on
row 21 and row 22 under Final Value.
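As a cross-check on the Solver answer, the same small LP (maximize z = 4x1 + 5x2 subject to x1 + 4x2 ≤ 9, 4x1 + x2 ≤ 6, x1, x2 ≥ 0) can be solved by brute force: enumerate the intersection points of the constraint boundaries, keep the feasible ones, and take the best. This sketch confirms the maximum of 14 at x1 = 1, x2 = 2.

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints as a1*x1 + a2*x2 <= rhs, including x1 >= 0 and x2 >= 0.
cons = [((1, 4), 9), ((4, 1), 6), ((-1, 0), 0), ((0, -1), 0)]

def feasible(p):
    return all(a1 * p[0] + a2 * p[1] <= r for (a1, a2), r in cons)

vertices = []
for ((a1, a2), r1), ((b1, b2), r2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel boundaries, no intersection point
    # Cramer's rule for the 2x2 system of the two boundary equations
    x = F(r1 * b2 - r2 * a2, det)
    y = F(a1 * r2 - b1 * r1, det)
    if feasible((x, y)):
        vertices.append((x, y))

best = max(vertices, key=lambda p: 4 * p[0] + 5 * p[1])
print(best, 4 * best[0] + 5 * best[1])  # the optimum is (1, 2) with objective 14
```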
Summary
To use the simplex method:
 An equation remains an equation even after artificial variables are introduced on only one
side, which is not correct from the mathematical point of view; it is valid only if the values
of the artificial variables equal zero.
 In solving a problem of this type by the simplex method, one must be sure that the value of
every artificial variable is zero at the optimal stage.
 If it is not possible to bring all artificial variables to zero level at the optimal stage, we
conclude that the problem has no feasible solution.
The following cases may arise at the optimal stage:
 No artificial variable is present in the basis, which indicates that all artificial
variables are at zero level at the optimal stage. Thus the solution obtained is a BFS.
 Some artificial variables exist in the basis at a positive level at the optimal stage. In this
case, there exists no feasible solution.
 All artificial variables are at zero level, but at least one artificial variable is present in the
basis at the optimal stage. Here the solution under test is optimal, and we conclude that
some constraints may be redundant. By redundancy we mean that the system has more
than enough constraints.
Remarks:
Since the modified problem P(M) has a feasible solution (say x = 0 and x_a = b), while
solving it by the simplex method one of the following two cases may arise.
1. We arrive at an optimal solution of P(M).
2. We conclude that P(M) has an unbounded optimal solution.
Case 1: Finite optimal solution of P(M)
a. The optimal solution to P(M) has all artificial variables at value zero.
In this case the original problem has a finite optimal solution.
b. At the optimal solution not all artificial variables are zero.
In this case we conclude that the original problem has no feasible solution.


Case 2: P(M) has an unbounded optimal solution
a. If all artificial variables are zero, then the original problem has an unbounded optimal
solution.
b. If at least one artificial variable is positive, then the original problem is infeasible.
Review Exercise
1. Solve the following LPPs.
a) max 2x1 + 3x2 + 10x3
   s.t. x1 + 2x3 ≤ 0
        x2 + x3 ≤ 1
        x1, x2, x3 ≥ 0
b) max 5x1 + 3x2
   s.t. x1 + x2 ≤ 2
        5x1 + 2x2 ≤ 10
        3x1 + 8x2 ≤ 12
        x1, x2 ≥ 0
c) max 2x1 + 2x2
   s.t. 4x1 + 3x2 ≤ 10
        4x1 + x2 ≤ 8
        4x1 − x2 ≤ 8
        x1, x2 ≥ 0
2. Solve the following problems by the big-M method.
a)

b)


3. Solve the following problems by the two-phase method.


Chapter Five

5. Duality theory and further variations of the simplex method


Introduction
Associated with every linear program is another called its dual. The dual of this dual linear
program is the original linear program (which is then referred to as the primal linear
program). Hence, linear programs come in primal/dual pairs. It turns out that every feasible
solution for one of these two linear programs gives a bound on the optimal objective function
value for the other. These ideas are important and form a subject called duality theory, which
is the topic we shall study in this chapter.
General objectives
At the end of this unit the learner will be able to:
▪ Understand the standard form of duality
▪ Identify the primal-dual relationship
▪ Explain the Kuhn-Tucker optimality conditions
▪ Define the fundamental theorems of duality
▪ Know the interpretation of the dual simplex method

5.1 Dual linear programs


Associated with each linear programming problem there is another linear programming problem called
the dual. The dual linear program possesses many important properties relative to the original primal
linear program. There are two important forms (definitions) of duality:
 Canonical form of duality and
 Standard form of duality. These two forms are completely equivalent.
Canonical Form of Duality
Suppose that the primal linear program is given in the form:
P: Minimize cx
subject to Ax ≥ b, x ≥ 0
Then the dual linear program is defined by:
D: Maximize wb
subject to wA ≤ c, w ≥ 0
Remark: There is exactly one dual variable for each primal constraint and exactly one dual constraint for
each primal variable.
Consider the following linear program and its dual:


P: Minimize 6x1 + 8x2
subject to 3x1 + x2 ≥ 4
5x1 + 2x2 ≥ 7
x1, x2 ≥ 0
D: Maximize 4w1 + 7w2
subject to 3w1 + 5w2 ≤ 6
w1 + 2w2 ≤ 8
w1, w2 ≥ 0
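Forming the canonical dual is purely mechanical: transpose A and swap the roles of b and c. A minimal sketch, using the example above:

```python
def dual_canonical(c, A, b):
    """Dual of: minimize c·x s.t. A x >= b, x >= 0.
    Returns (b, A_T, c) for: maximize b·w s.t. A_T w <= c, w >= 0."""
    A_T = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    return b, A_T, c

# The example above: minimize 6x1 + 8x2 s.t. 3x1 + x2 >= 4, 5x1 + 2x2 >= 7
obj, rows, rhs = dual_canonical([6, 8], [[3, 1], [5, 2]], [4, 7])
print(obj, rows, rhs)  # maximize 4w1 + 7w2 s.t. 3w1 + 5w2 <= 6, w1 + 2w2 <= 8
```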
Standard Form of Duality
Another equivalent definition of duality applies when the constraints are equalities. Suppose that the
primal linear program is given in the form:
P: Minimize cx
subject to Ax = b, x ≥ 0
Then the dual linear program is defined by:
D: Maximize wb
subject to wA ≤ c
w unrestricted
Example:
P: Minimize 6x1 + 8x2
subject to 3x1 + x2 − x3 = 4
5x1 + 2x2 − x4 = 7
xj ≥ 0, j = 1, 2, 3, 4
D: Maximize 4w1 + 7w2
subject to 3w1 + 5w2 ≤ 6
w1 + 2w2 ≤ 8
−w1 ≤ 0
−w2 ≤ 0
w1, w2 unrestricted
Formulation of the dual problem
Given one of the definitions, canonical or standard, it is easy to demonstrate that the other definition is
valid. For example, suppose that we accept the standard form as a definition and wish to demonstrate that


the canonical form is correct. By adding slack variables to the canonical form of a linear program, we
may apply the standard form of duality to obtain the dual problem.

P: Minimize cx D: Maximize wb
s.t. Ax − x_s = b, x, x_s ≥ 0 s.t. wA ≤ c
−w ≤ 0
but since −w ≤ 0 is the same as w ≥ 0, we obtain the canonical form of the dual problem.
Lemma 1 : The dual of the dual is the primal.
Mixed Forms of Duality
In practice, many linear programs contain some constraints of the "less than or equal to" type, some of the
"greater than or equal to" type and some of the "equal to" type. Also, variables may be " ≥ 0," " ≤ 0," or
"unrestricted." In theory, this presents no problem since we may apply the transformation techniques to
convert any "mixed" problem to one of the primal or dual forms discussed above, after which the dual can
be readily obtained. In practice such conversions can be tedious. Fortunately, it is not necessary actually
to make these conversions, and it is possible to give immediately the dual of any linear program.
Consider the following linear problem and its standard-form conversion:
Minimize cx Minimize cx
s.t. A1x ≥ b1 s.t. A1x − x1 = b1
A2x = b2 A2x = b2
A3x ≤ b3 A3x + x3 = b3
x ≥ 0 x, x1, x3 ≥ 0
The dual of this problem is
Maximize w1b1 + w2b2 + w3b3
subject to w1A1 + w2A2 + w3A3 ≤ c
−w1 ≤ 0
w3 ≤ 0
w1, w2, w3 unrestricted

From this example we see that "greater than or equal to" constraints in the minimization problem give rise
to "≥ 0" variables in the maximization problem.


Also, "equal to" constraints in the minimization problem give rise to "unrestricted" variables in the
maximization problem; and "less than or equal to" constraints in the minimization problem give rise to "
≤ 0" variables
iables in the maximization problem. The complete results may be summarized in Table below:
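The summary rules can be encoded as a small lookup table; the sketch below writes the min-problem → max-problem correspondence just described (reading the table in the other direction gives the reverse mapping).

```python
# Dual correspondence for a primal written as a minimization problem.
# Constraint sense in the MIN problem -> sign of the matching MAX variable:
constraint_to_variable = {">=": "w >= 0", "=": "w unrestricted", "<=": "w <= 0"}

# Sign of a MIN variable -> constraint sense in the MAX problem:
variable_to_constraint = {"x >= 0": "<=", "x unrestricted": "=", "x <= 0": ">="}

print(constraint_to_variable[">="])      # a ">=" row gives a nonnegative dual variable
print(variable_to_constraint["x >= 0"])  # a nonnegative variable gives a "<=" dual row
```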

Examples: Write the dual of the following problems


a. 8 +3 d. Maximize −2
. . +2 − + ≥0
5 + 7 = −4 4 +3 +4 −2 ≤3
−6 ≥2 − − +2 + =1
≤0 , ≥0
≥0
b. 2 +5 +
.
2 − 7 ≤6
+3 +4 ≤9
3 +6 + ≤3
, , ≥0

c. − −
. −2 − ≤4
−2 +4 ≤−8
− +3 ≤−4
, ≥0
Answer c: 4 −8 −7
. −2 −2 − ≥ −1
− +4 +3 ≥ −1
, , ≥0


Primal-Dual Relationship
The relationship between Objective values
Weak duality
Theorem: If x is a feasible solution for the primal problem (minimization form) and w is a feasible
solution for the dual, then cx ≥ wb.
Proof: Consider the canonical form of duality and let x and w be feasible solutions of the primal and dual
programs respectively. Then Ax ≥ b, x ≥ 0, wA ≤ c and w ≥ 0. Multiplying Ax ≥ b on the left
by w and wA ≤ c on the right by x, we get cx ≥ wAx ≥ wb.
Lemma 2
The objective function value for any feasible solution to the minimization problem is always greater than
or equal to the objective function value for any feasible solution to the maximization problem. In
particular, the objective value of any feasible solution of the minimization problem gives an upper bound
on the optimal objective of the maximization problem. Similarly, the objective value of any feasible
solution of the maximization problem is a lower bound on the optimal objective of the minimization
problem.
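Weak duality is easy to verify numerically. Using the canonical example from the previous subsection (minimize 6x1 + 8x2 subject to 3x1 + x2 ≥ 4, 5x1 + 2x2 ≥ 7, whose dual maximizes 4w1 + 7w2), the sketch below samples the integer feasible points of both problems on a small grid and confirms cx ≥ wb for every pairing.

```python
from itertools import product

c, A, b = [6, 8], [[3, 1], [5, 2]], [4, 7]

def primal_feasible(x):
    return all(xi >= 0 for xi in x) and all(
        sum(a * xi for a, xi in zip(row, x)) >= bi for row, bi in zip(A, b))

def dual_feasible(w):
    return all(wi >= 0 for wi in w) and all(
        sum(w[i] * A[i][j] for i in range(2)) <= c[j] for j in range(2))

xs = [x for x in product(range(4), repeat=2) if primal_feasible(x)]
ws = [w for w in product(range(4), repeat=2) if dual_feasible(w)]

# Every primal objective value dominates every dual objective value.
assert all(sum(ci * xi for ci, xi in zip(c, x)) >=
           sum(wi * bi for wi, bi in zip(w, b)) for x in xs for w in ws)
print(len(xs), len(ws))  # number of primal and dual sample points checked
```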
Corollary 1
If x0 and w0 are feasible solutions to the primal and dual problems such that cx0 = w0b, then x0 and w0
are optimal solutions to their respective problems.
Corollary 2
If either problem has an unbounded objective value, then the other problem possesses no feasible
solution.
Duality and the Kuhn-Tucker Optimality Conditions
A necessary and sufficient condition for x* to be an optimal point of the linear program
Minimize cx subject to Ax ≥ b, x ≥ 0
is that there exists a vector w* such that
1. Ax* ≥ b, x* ≥ 0
2. w*A ≤ c, w* ≥ 0
3. w*(Ax* − b) = 0
4. (c − w*A)x* = 0


Condition 1 above simply requires that the optimal point x* must be feasible to the primal.
Condition 2 indicates that the vector w* must be a feasible point for the dual problem.
From conditions 3 and 4 above, we find that cx* = w*b. Hence w* must be an optimal solution to the
dual problem.
The Kuhn-Tucker optimality conditions for the dual problem imply the existence of a primal
feasible solution whose objective is equal to that of the optimal dual.
Strong Duality
Theorem: If the primal problem (minimization canonical form) has an optimal feasible solution
x* = (x1*, x2*, …, xn*), then the dual also has an optimal solution w* = (w1*, w2*, …, wm*) such that
cx* = w*b.
Lemma 3
If one problem possesses an optimal solution, then both problems possess optimal solutions and the two
optimal objective values are equal.

5.2 Duality theorems


Theorem I (Fundamental Theorem of Duality)
With regard to the primal and dual linear programming problems, exactly one of the following statements
is true.
1. Both possess optimal solutions x* and w* with cx* = w*b.
2. One problem has unbounded objective value, in which case the other problem must be infeasible.
3. Both problems are infeasible.
From this theorem we see that duality is not completely symmetric. The best we can say is that (here
optimal means finite optimal, and unbounded means having an unbounded optimal objective):
P OPTIMAL ⇒ D OPTIMAL
P UNBOUNDED ⇒ D INFEASIBLE
D UNBOUNDED ⇒ P INFEASIBLE
P INFEASIBLE ⇒ D UNBOUNDED OR INFEASIBLE
D INFEASIBLE ⇒ P UNBOUNDED OR INFEASIBLE
Complementary Slackness conditions
Sometimes it is necessary to recover an optimal dual solution when only an optimal primal solution is
known. The following theorem, known as the complementary slackness theorem, can help in this
regard.
Let x* and w* be any pair of optimal solutions to the primal and dual problems in canonical form
respectively. Then

cx* ≥ w*Ax* ≥ w*b
and cx* = w*b at optimality. Hence
cx* = w*Ax* = w*b.
This gives (c − w*A)x* = 0 and w*(Ax* − b) = 0. Since w* ≥ 0 and Ax* − b ≥ 0, then
w*(Ax* − b) = 0 implies w_i*(a^i x* − b_i) = 0 for i = 1, …, m.
Similarly, (c − w*A)x* = 0 implies (c_j − w*a_j)x_j* = 0 for j = 1, …, n.
Thus we have the following theorem.
Theorem 2 (Weak Theorem of Complementary Slackness)
If x* and w* are any optimal points to the primal and dual problems in the canonical form, then
w_i*(a^i x* − b_i) = 0 for i = 1, …, m and (c_j − w*a_j)x_j* = 0 for j = 1, …, n.
This is a very important theorem relating the primal and dual problems. It indicates that at least
one of the two terms in each expression above must be zero. In particular,
x_j* > 0 ⇒ w*a_j = c_j
w*a_j < c_j ⇒ x_j* = 0
w_i* > 0 ⇒ a^i x* = b_i
a^i x* > b_i ⇒ w_i* = 0
The weak theorem of complementary slackness can also be stated as follows: at optimality, "If a variable
in one problem is positive, then the corresponding constraint in the other problem must be tight" and "If a
constraint in one problem is not tight, then the corresponding variable in the other problem must be zero."
Suppose that we let x_{s_i} = a^i x − b_i ≥ 0, i = 1, …, m, be the m slack variables in the primal problem,
and let w_{s_j} = c_j − wa_j ≥ 0, j = 1, …, n, be the n slack variables in the dual problem. Then we can
write the complementary slackness conditions as follows:
x_j* w_{s_j}* = 0, j = 1, …, n
w_i* x_{s_i}* = 0, i = 1, …, m
Example: Consider the following primal and dual problems
P: Minimize 2x1 + 3x2 + 5x3 + 2x4 + 3x5
subject to x1 + x2 + 2x3 + x4 + 3x5 ≥ 4
2x1 − x2 + 3x3 + x4 + x5 ≥ 3
x1, x2, x3, x4, x5 ≥ 0
D: Maximize 4w1 + 3w2
subject to w1 + 2w2 ≤ 2
w1 − w2 ≤ 3
2w1 + 3w2 ≤ 5
w1 + w2 ≤ 2
3w1 + w2 ≤ 3
w1, w2 ≥ 0
The feasible region for the dual problem is given below.
The optimal solution for the dual is w1* = 4/5, w2* = 3/5 with objective value 5. Using the weak theorem
of complementary slackness, we further know that x2* = x3* = x4* = 0, since none of the
corresponding dual constraints are tight at w*. Since w1*, w2* > 0, both primal constraints must be
tight, that is,
x1* + 3x5* = 4 and 2x1* + x5* = 3.
From these two equations we get x1* = 1 and x5* = 1, with optimal objective value cx* = 5. Thus the
primal optimal point is obtained from the duality theorems and the dual optimal point.
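The recovery of the primal optimum in this example can be scripted. Given the dual optimum w* = (4/5, 3/5), the sketch below finds the tight dual constraints, zeroes the other primal variables, and solves the two tight primal constraints for x1 and x5 by Cramer's rule.

```python
from fractions import Fraction as F

c = [2, 3, 5, 2, 3]
A = [[1, 1, 2, 1, 3],
     [2, -1, 3, 1, 1]]
b = [4, 3]
w = [F(4, 5), F(3, 5)]  # dual optimum

# Dual constraint j is tight when w·a_j = c_j; only those x_j may be positive.
tight = [j for j in range(5) if w[0] * A[0][j] + w[1] * A[1][j] == c[j]]
assert tight == [0, 4]  # only x1 and x5 may be nonzero

# Both w_i > 0, so both primal constraints hold with equality:
#   x1 + 3*x5 = 4  and  2*x1 + x5 = 3  -> a 2x2 system, solved by Cramer's rule
a11, a12, a21, a22 = A[0][0], A[0][4], A[1][0], A[1][4]
det = a11 * a22 - a12 * a21
x1 = F(b[0] * a22 - b[1] * a12, det)
x5 = F(a11 * b[1] - a21 * b[0], det)
print(x1, x5, c[0] * x1 + c[4] * x5)  # x1 = 1, x5 = 1, objective 5
```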

5.3 The dual simplex method


The dual simplex method solves the dual problem directly on the (primal) simplex tableau. At each
iteration we move from a basic feasible solution of the dual problem to an improved basic feasible
solution until optimality of the dual (and also the primal) is reached, or else until we conclude that the
dual is unbounded and that the primal is infeasible.
Interpretation of Dual Feasibility on the Primal Simplex Tableau
Consider the following linear programming problem:
Minimize cx
subject to Ax ≥ b, x ≥ 0


Let B be a basis that is not necessarily feasible and consider the following tableau.




The tableau presents a primal feasible solution if b̄_i ≥ 0 for i = 1, …, m, i.e., b̄ = B⁻¹b ≥ 0. Furthermore,
the tableau is optimal if z_j − c_j ≤ 0 for j = 1, 2, …, n + m.
Define w = c_B B⁻¹. For j = 1, …, n we have
z_j − c_j = c_B B⁻¹ a_j − c_j = wa_j − c_j.
Hence z_j − c_j ≤ 0 for j = 1, 2, …, n implies that wa_j − c_j ≤ 0 for j = 1, …, n, which in turn implies that wA ≤ c.
Furthermore, note that a_{n+i} = −e_i and c_{n+i} = 0 for i = 1, …, m, and so we have
z_{n+i} − c_{n+i} = wa_{n+i} − c_{n+i} = w(−e_i) − 0 = −w_i, for i = 1, …, m.
In addition, if z_{n+i} − c_{n+i} ≤ 0 for i = 1, 2, …, m, then w_i ≥ 0, i.e., w ≥ 0.
Thus, z_j − c_j ≤ 0 for j = 1, 2, …, n + m implies that wA ≤ c and w ≥ 0, where w = c_B B⁻¹.
In other words, dual feasibility is precisely the simplex optimality criterion z_j − c_j ≤ 0 for all j. At
optimality w* = c_B B⁻¹ and the dual objective w*b = (c_B B⁻¹)b = c_B b̄ = cx*, that is, the
primal and dual objectives are equal. Thus we have the following result.
Lemma 4: At optimality of the primal minimization problem in the canonical form (that is, z_j − c_j ≤ 0
for all j), w* = c_B B⁻¹ is an optimal solution to the dual problem. Furthermore,
w_i* = −(z_{n+i} − c_{n+i}) = −z_{n+i} for i = 1, …, m.

Consider the following linear programming problem.


Minimize cx
Subject to Ax = b
x≥0


In certain instances it is difficult to find a starting basic solution that is feasible (that is, all b̄_i ≥ 0)
to a linear program without adding artificial variables. In these same instances it is often
possible to find a starting basic, but not necessarily feasible, solution that is dual feasible (that is,
all z_j − c_j ≤ 0 for a minimization problem). In such cases it is useful to develop a variant of the
simplex method that would produce a series of simplex tableaux that maintain dual feasibility and
complementary slackness and strive toward primal feasibility.

Consider the above tableau representing a basic solution at some iteration. Suppose that the
tableau is dual feasible (that is, z_j − c_j ≤ 0 for a minimization problem). Then, if the tableau is
also primal feasible (that is, all b̄_i ≥ 0), we have the optimal solution. Otherwise, consider
some b̄_r < 0. By selecting row r as a pivot row and some column k such that y_rk < 0 as a pivot
column, we can make the new right-hand side b̄_r / y_rk ≥ 0. Through a series of such pivots we hope to
make all b̄_i ≥ 0 while maintaining all z_j − c_j ≤ 0 and thus achieve optimality. The pivot column
k is determined by the following minimum ratio test:
(z_k − c_k) / y_rk = min_j { (z_j − c_j) / y_rj : y_rj < 0 } …………… (∗)

Note that the new entries in row 0 after pivoting are given by:
(z_j − c_j)′ = (z_j − c_j) − (y_rj / y_rk)(z_k − c_k)
For y_rj < 0, equation (∗) gives (z_k − c_k)/y_rk ≤ (z_j − c_j)/y_rj; multiplying both sides by
y_rj < 0, we get (y_rj / y_rk)(z_k − c_k) ≥ (z_j − c_j), that is, (z_j − c_j)′ ≤ 0. For y_rj ≥ 0,
(z_j − c_j)′ ≤ 0 follows directly since (y_rj / y_rk)(z_k − c_k) ≥ 0. To
summarize, if the pivot column is chosen according to equation (∗), then the new basis obtained
by pivoting at y_rk is still dual feasible. Moreover, the dual objective after pivoting is given by
c_B B⁻¹b − (b̄_r / y_rk)(z_k − c_k).
Since z_k − c_k ≤ 0, b̄_r < 0 and y_rk < 0, then −(b̄_r / y_rk)(z_k − c_k) ≥ 0 and the dual
objective improves over the current value of wb = c_B B⁻¹b.


Summary of the Dual Simplex Method (Minimization Problem)
Initialization step
Find a basis B of the primal such that z_j − c_j = c_B B⁻¹a_j − c_j ≤ 0 for all j.
Main Step
1. If b̄ = B⁻¹b ≥ 0, stop; the current solution is optimal. Otherwise select the pivot row r
with b̄_r < 0, say b̄_r = min_i { b̄_i }.
2. If y_rj ≥ 0 for all j, stop; the dual is unbounded and the primal is infeasible.
Otherwise select the pivot column k by the following minimum ratio test:
(z_k − c_k) / y_rk = min_j { (z_j − c_j) / y_rj : y_rj < 0 }
3. Pivot at y_rk and return to step 1.

Example: Consider the following problem


Minimize 2x1 + 3x2 + 4x3
subject to x1 + 2x2 + x3 ≥ 3
2x1 − x2 + 3x3 ≥ 4
x1, x2, x3 ≥ 0
A starting basic solution that is dual feasible can be obtained by utilizing the slack variables
of the two constraints. This results from the fact that the cost vector is nonnegative. Applying the dual
simplex method, we obtain the following series of tableaux.
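Since the tableaux are easiest to follow when recomputed, here is a bare-bones dual simplex for this example in exact rational arithmetic. The two ≥ constraints are negated into equalities with slacks x4 and x5, which gives a dual feasible but primal infeasible start; pivoting continues until the right-hand side is nonnegative. This is an illustrative sketch, not production code.

```python
from fractions import Fraction as F

# minimize 2x1 + 3x2 + 4x3  s.t.  x1 + 2x2 + x3 >= 3,  2x1 - x2 + 3x3 >= 4,  x >= 0
# Negate the constraints: -x1 - 2x2 - x3 + x4 = -3,  -2x1 + x2 - 3x3 + x5 = -4.
c = [F(2), F(3), F(4), F(0), F(0)]
T = [[F(-1), F(-2), F(-1), F(1), F(0), F(-3)],
     [F(-2), F(1), F(-3), F(0), F(1), F(-4)]]
basic = [3, 4]  # the slacks x4, x5 start in the basis

def z_minus_c(j):  # reduced cost z_j - c_j (all <= 0 throughout)
    return sum(c[basic[i]] * T[i][j] for i in range(2)) - c[j]

while True:
    rows = [i for i in range(2) if T[i][5] < 0]
    if not rows:          # primal feasible -> optimal
        break
    r = min(rows, key=lambda i: T[i][5])   # pivot row: most negative RHS
    cols = [j for j in range(5) if T[r][j] < 0]
    ratios = {j: z_minus_c(j) / T[r][j] for j in cols}
    k = min(ratios, key=ratios.get)        # dual minimum ratio test
    piv = T[r][k]
    T[r] = [t / piv for t in T[r]]
    for i in range(2):
        if i != r:
            factor = T[i][k]
            T[i] = [a - factor * b_ for a, b_ in zip(T[i], T[r])]
    basic[r] = k

x = [F(0)] * 5
for i in range(2):
    x[basic[i]] = T[i][5]
z = sum(ci * xi for ci, xi in zip(c, x))
print(x[:3], z)  # optimal: x1 = 11/5, x2 = 2/5, x3 = 0, objective 28/5
```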


The Complementary Basic Dual Solution


Consider the following pair of primal and dual problems in standard form.
P: Minimize cx D: Maximize wb
Subject to Ax = b Subject to wA ≤ c
x ≥ 0 w unrestricted
Given any primal basis B, there is an associated complementary dual basis. To illustrate,
introduce the dual slack vector w_s so that wA + w_s = c. The dual constraints can be rewritten
in the following more convenient form:
wA + w_s = c, w unrestricted, w_s ≥ 0 …………………………….(1)
Given the primal basis B, recall that w = c_B B⁻¹. Substituting in equation (1), we get
w_s = c − wA = (c_B − c_B B⁻¹B, c_N − c_B B⁻¹N) = (0, c_N − c_B B⁻¹N) ……………. (2)
Note that w = c_B B⁻¹ and equation (2) lead naturally to a dual basis.


Since both w and (c_N − c_B B⁻¹N) are not necessarily zero, the vector w and the last n − m
components of w_s form the dual basis. In particular, the dual basis corresponding to the primal
basis B is given by
The rank of the preceding matrix is n. The primal basis is feasible if B⁻¹b ≥ 0 and the dual basis is
feasible if w_s ≥ 0; that is, if (c_N − c_B B⁻¹N) ≥ 0. Even if these conditions do not hold, the primal
and dual bases are complementary in the sense that the complementary slackness condition
(c − wA)x = 0 holds, as shown below:
(c − wA)x = (0, c_N − c_B B⁻¹N)(x_B, 0) = 0

To summarize, during any dual simplex iteration we have a primal basis that is not necessarily feasible,
and a complementary dual feasible basis. At termination primal feasibility is attained, and so all the
Kuhn-Tucker optimality conditions hold.
Recall that in the dual simplex method we begin with a basic (not necessarily feasible) solution to the
primal problem and a complementary basic feasible solution to the dual problem. The dual simplex
method proceeds, by pivoting, through a series of dual basic feasible solutions until the associated
complementary primal basic solution is feasible, thus satisfying all of the Kuhn-Tucker conditions for
optimality.

5.4 Primal-Dual Method


In this section, we describe a method, called the primal-dual algorithm, similar to the dual simplex
method, which begins with dual feasibility and proceeds to obtain primal feasibility while maintaining
complementary slackness. An important difference between the dual simplex method and the primal-dual
method is that the primal-dual algorithm does not require a dual feasible solution to be basic. Given a dual
feasible solution, the primal variables that correspond to tight dual constraints (so that complementary
slackness is satisfied) are determined. Using phase I of the simplex method, we attempt to attain primal
feasibility using only these variables. If we are unable to obtain primal feasibility, we change the dual
feasible solution in such a way as to admit at least one new variable to the phase I problem. This is
continued until either the primal becomes feasible or the dual becomes unbounded.
Development of the Primal-Dual Method
Consider the following primal and dual problems in standard form, where b ≥ 0.
P: Minimize cx D: Maximize wb
Subject to Ax = b Subject to wA ≤ c


x ≥ 0 w unrestricted
Let w be an initial dual feasible solution, that is, wa_j ≤ c_j for all j. By complementary slackness, if
wa_j = c_j, then x_j is allowed to be positive, and we attempt to attain primal feasibility from among these
variables. Let Q = { j : c_j − wa_j = 0 }, that is, the set of indices of primal variables allowed to be
positive. Then the phase I problem that attempts to find a feasible solution to the primal problem among
variables in the set Q becomes:
Minimize x_0 = 1x_a
Subject to Σ_{j∈Q} a_j x_j + x_a = b
x_j ≥ 0 for j ∈ Q, x_a ≥ 0

We utilize the artificial vector x_a to obtain a starting basic feasible solution to the phase I
problem. The phase I problem is sometimes called the restricted primal problem.
Denote the optimal objective value of the foregoing problem by x_0. At optimality of the phase I
problem, either x_0 = 0 or x_0 > 0.
When x_0 = 0, we have a feasible solution to the primal problem since all artificials are zero.
Furthermore, we have a dual feasible solution, and the complementary slackness condition
(c − wA)x = 0 holds because either j ∈ Q, in which case c_j − wa_j = 0, or else j ∉ Q, in
which case x_j = 0. Therefore, we have an optimal solution of the overall problem
whenever x_0 = 0.
If x_0 > 0, primal feasibility is not achieved, and we must construct a new dual solution that
would admit a new variable to the restricted primal problem in such a way that x_0 might be
decreased. We shall modify the dual vector w so that all the basic primal variables in the
restricted problem remain in the new restricted primal problem and, in addition, at least one
primal variable that did not belong to the set Q is passed to the restricted primal problem.
Furthermore, this variable would reduce x_0 if introduced into the basis. In order to construct such a
dual vector, consider the following dual of the phase I problem:
Maximize vb
subject to va_j ≤ 0 for j ∈ Q
v ≤ 1


Let v* be an optimal solution to the foregoing problem. Then, if a real variable x_j is a member
of the optimal basis for the restricted primal, the associated dual constraint must be tight, that is,
v*a_j = 0. Also, the criterion for basis entry in the restricted primal problem is that the associated
dual constraint be violated, that is, v*a_j > 0.
However, no variable presently in the restricted primal has this property, since the restricted
primal is optimal. For j ∉ Q, compute v*a_j. If v*a_j > 0, then if x_j could be passed to the
restricted primal problem it would be a candidate to enter the basis with the potential of a further
decrease in x_0. Therefore, we must find a way to force some variable with v*a_j > 0 into the
set Q.
Construct the following dual vector w′, where θ > 0:
w′ = w + θv*
so that
w′a_j − c_j = (w + θv*)a_j − c_j = (wa_j − c_j) + θv*a_j ……………………(3)
Note that wa_j − c_j = 0 and v*a_j ≤ 0 for j ∈ Q. This implies that w′a_j − c_j ≤ 0 for j ∈ Q.
In particular, if x_j with j ∈ Q is a basic variable in the restricted primal, then v*a_j = 0 and so
w′a_j − c_j = 0, permitting j in the new restricted primal problem.
If j ∉ Q and v*a_j ≤ 0, then from equation (3) and wa_j − c_j < 0, we have w′a_j − c_j ≤ 0.
Finally, consider j ∉ Q with v*a_j > 0. Examining equation (3) and noting that wa_j − c_j < 0 for
j ∉ Q, it is evident that we can choose θ > 0 such that w′a_j − c_j ≤ 0 for all j ∉ Q, with at least
one component equal to zero. In particular, define θ as follows:
θ = −(wa_k − c_k)/(v*a_k) = min_j { −(wa_j − c_j)/(v*a_j) : v*a_j > 0 } > 0 ……………….(4)
By the definition of θ above and from equation (3), we see that w′a_k − c_k = 0.
Furthermore, for each j with v*a_j > 0, noting equations (3) and (4), we have w′a_j − c_j ≤ 0.
To summarize, modifying the dual vector as detailed above leads to a new dual feasible solution
where w′a_j − c_j ≤ 0 for all j. Furthermore, all the variables that belonged to the restricted
primal basis are passed to the new restricted primal. In addition, a new variable x_k, which is a
candidate to enter the basis, is passed to the restricted primal problem. Hence we continue from
the present restricted primal basis by entering x_k, which leads to a potential reduction in x_0.
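The step length in equation (4) is a one-line computation. The sketch below, with hypothetical values of the reduced costs wa_j − c_j and of v*a_j for a few variables outside Q, returns θ together with the index k that becomes tight (and hence enters Q).

```python
from fractions import Fraction as F

def theta_step(w_red, va):
    """theta = min{ -(w a_j - c_j) / (v* a_j) : v* a_j > 0 } and its argmin k.

    w_red[j] = w a_j - c_j (all < 0 for j not in Q); va[j] = v* a_j."""
    cand = {j: F(-w_red[j], va[j]) for j in range(len(va)) if va[j] > 0}
    k = min(cand, key=cand.get)
    return cand[k], k

# Hypothetical data for three variables outside Q:
theta, k = theta_step([-3, -2, -5], [2, 1, 0])
print(theta, k)  # theta = 3/2, attained by index 0, which enters Q
```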


There are many other variants of the simplex method. Thus far we have discussed only the
primal and dual methods. There are obvious generalizations that combine these two methods.
Algorithms that perform both primal and dual steps are referred to as primal-dual algorithms and
there are a number of such algorithms. We present here one simple algorithm of this form called
the parametric primal-dual.
It is most easily discussed in an example.

The above example can easily be put in canonical form by addition of slack variables. However,
neither primal feasibility nor primal optimality conditions will be satisfied. We will arbitrarily
consider the above example as a function of the parameter θ in Tableau 6. Clearly, if we choose θ
large enough, this system of equations satisfies the primal feasibility, and primal optimality
conditions. The idea of the parametric primal-dual algorithm is to choose θ large initially so that
these conditions are satisfied, and attempt to reduce θ to zero through a sequence of pivot
operations. If we start with θ = 4 and let θ approach zero, primal optimality is violated when θ <
3. If we were to reduce θ below 3, the objective-function coefficient of x2 would become
positive. Therefore we perform a primal simplex pivot by introducing x2 into the basis. We
determine the variable to leave the basis by the minimum-ratio rule of the primal simplex
method:

leaves the basis and the new canonical form is then shown in Tableau 7.


The optimality conditions are satisfied for 7/10 ≤ θ ≤ 3. If we were to reduce θ below 7/10, the
right hand side value of the third constraint would become negative. Therefore, we perform a
dual simplex pivot by dropping from the basis. We determine the variable to enter the basis
by the rules of the dual simplex method:

After the pivot is performed the new canonical form is given in Tableau 8.

As we continue to decrease θ to zero, the optimality conditions remain satisfied.


Thus the optimal final tableau for this example is given by setting θ equal to zero. Primal-dual
algorithms are useful when simultaneous changes of both right hand-side and cost coefficients
are imposed on a previous optimal solution, and a new optimal solution is to be recovered. When
parametric programming of both the objective-function coefficients and the right hand-side
values is performed simultaneously, a variation of this algorithm is used. This type of parametric
programming is usually referred to as the rim problem.
Unbounded dual
The foregoing process is continued until either x_0 = 0, in which case we have an optimal
solution, or else x_0 > 0 and v*a_j ≤ 0 for all j ∉ Q. In this case consider
w′ = w + θv*
and


Since wa_j − c_j ≤ 0 for all j, and by assumption v*a_j ≤ 0 for all j, it follows from equation (3) that w′ is a dual
feasible solution for all θ > 0. Furthermore, the dual objective is
w′b = (w + θv*)b = wb + θv*b.
Since v*b = x_0, and the latter is positive, w′b can be increased indefinitely by choosing θ
arbitrarily large. Therefore the dual is unbounded and hence the primal is infeasible.
The Primal-Dual Algorithm (Minimization Problem)
Initialization Step
Choose a vector w such that wa_j − c_j ≤ 0 for all j.
Main Step
1. Let Q = { j : wa_j − c_j = 0 } and solve the following restricted primal problem

Minimize x_0 = 1·x_a (the sum of the artificial variables x_a)

Subject to  Σ_{j∈Q} a_j x_j + x_a = b
            x_j ≥ 0 for j ∈ Q,  x_a ≥ 0

Denote the optimal objective by x_0. If x_0 = 0, stop; an optimal solution is obtained.

Otherwise, let v* be the optimal dual solution to the foregoing restricted primal problem.

2. If v*a_j ≤ 0 for all j, then stop; the dual is unbounded and the primal is infeasible.
Otherwise let

θ = min_j { (c_j − wa_j) / (v*a_j) : v*a_j > 0 }

Replace w by w′ = w + θv*. Repeat step 1.
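To make step 2 concrete, the ratio defining θ can be evaluated directly from the data. The sketch below (Python with numpy; the numbers are illustrative and not taken from the module) computes θ and the updated dual vector:

```python
import numpy as np

# Illustrative data: columns a_j of A, costs c_j, a dual vector w with
# w a_j - c_j <= 0 for all j, and the optimal dual v* of the restricted primal.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
c = np.array([3.0, 5.0, 2.0])
w = np.array([1.0, 1.0])
v_star = np.array([1.0, -1.0])

slack = c - w @ A          # c_j - w a_j >= 0 for all j (dual feasibility)
va = v_star @ A            # v* a_j for each column j
assert (slack >= 0).all()

if (va <= 0).all():
    print("dual unbounded, primal infeasible")   # first branch of step 2
else:
    theta = min(s / r for s, r in zip(slack, va) if r > 0)
    w_new = w + theta * v_star                   # new dual feasible vector
```

With these numbers θ = 2 and w′ = (3, −1); one can check that c − w′A ≥ 0 still holds, as the theory guarantees.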
Summary
Dual linear programming
Suppose that the primal linear program is given in the form:
P:  minimize cx
    s.t  Ax ≥ b,  x ≥ 0
Then the dual linear program is defined by:
D:  maximize wb
    s.t  wA ≤ c,  w ≥ 0


Remark: There is exactly one dual variable for each primal constraint and exactly one dual
constraint for each primal variable.
Primal-Dual Relationship
The relationship between Objective values
Weak duality
Theorem: If x is a feasible solution for the primal problem (minimization form) and w is a
feasible solution for the dual, then

cx ≥ wb.

Proof: Consider the canonical form of duality and let x_0, w_0 be feasible solutions of the primal and
dual programs respectively. Then Ax_0 ≥ b, x_0 ≥ 0, w_0A ≤ c, w_0 ≥ 0. Multiplying Ax_0 ≥ b
on the left by w_0 and w_0A ≤ c on the right by x_0, we get cx_0 ≥ w_0Ax_0 ≥ w_0b.
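The inequality chain in the proof can be checked numerically on a small canonical pair; the data below are illustrative, not from the module:

```python
import numpy as np

# P: min cx s.t. Ax >= b, x >= 0      D: max wb s.t. wA <= c, w >= 0
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])

x = np.array([4.0, 1.0])   # primal feasible: Ax = [5, 6] >= b
w = np.array([1.0, 1.0])   # dual feasible:   wA = [2, 3] <= c
assert (A @ x >= b).all() and (x >= 0).all()
assert (w @ A <= c).all() and (w >= 0).all()

# Weak duality, exactly the chain in the proof: cx >= w A x >= wb
assert c @ x >= w @ A @ x >= w @ b    # 11 >= 11 >= 10
```

At an optimal pair the two sides meet: here x = (2, 2) with the same w gives cx = wb = 10, illustrating that the weak-duality bound is tight at optimality.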


Review exercise

a) State this problem with equality constraints and nonnegative variables.


b) Write the dual to the given problem and the dual to the transformed problem found in part (a).
Show that these two dual problems are equivalent.
3. The initial and final tableaus of a linear-programming problem are as follows:

a) Find the optimal solution for the dual problem.


4. In the second exercise of Chapter 1, we graphically determined the shadow prices to the
following linear program:


a) Formulate the dual to this linear program.


b) Show that the shadow prices solve the dual problem.


CHAPTER 6

6. Sensitivity Analysis
6.1 Introduction
In linear programming, sensitivity analysis plays a vital role in discussing the nature of a problem:
it gives a direct test of the sensitivity of a particular problem to the variation of a component
of the cost or requirement vector. It is, in general, a post-optimality test. It determines the range over which
a component of the cost or requirement vector can vary, all other components remaining unchanged,
so that the optimal solution remains unaffected, i.e. the optimality remains undisturbed. Such
a change is a discrete change. Sensitivity analysis also gives a clear idea of how the nature of the problem changes due to
the addition or deletion of a single variable, the removal or addition of a constraint, or
the change of an element a_ij of the coefficient matrix. Since it is possible to assess the effect
on a problem of the variations stated above, it is extremely helpful from a commercial point of
view: a problem can be adjusted suitably so as to enhance its prospects from the economic
point of view. For example, since the procedure is able to determine
the extreme values of a component b_i of b for which the optimal basis remains
unchanged, this technique can be used, together with knowledge of shadow
prices (accounting prices), to obtain an overall maximum profit.
There are a number of questions that could be asked concerning the sensitivity of an optimal
solution to changes in the data. In this chapter we will address those that can be answered most
easily. Every commercial linear-programming system provides this elementary sensitivity
analysis, since the calculations are easy to perform using the tableau associated with an optimal
solution. There are two variations in the data that invariably are reported: the objective-function
coefficients and the requirement (right-hand-side) vector. The objective-function ranges refer to the range over which an individual
coefficient of the objective function can vary without changing the basis associated with an
optimal solution. In essence, these are the ranges on the objective-function coefficients over
which we can be sure the values of the decision variables in an optimal solution will remain
unchanged. The right-hand-side ranges (requirement vector) refer to the range over which an
individual right-hand-side value can vary, again without changing the basis associated with an
optimal solution. These are the ranges on the right-hand-side values over which we can be sure
the values of the shadow prices and reduced costs will remain unchanged. Further, associated
with each range is information concerning how the basis would change if the range were
exceeded.

Sensitivity analysis is a systematic study of how sensitive solutions are to (small) changes in the
data. The basic idea is to be able to give answers to questions of the form:

1. If the objective function changes, how does the solution change?


2. If resources available change, how does the solution change?

3. If a constraint is added to the problem, how does the solution change?

One approach to these questions is to solve lots of linear programming problems.

At the end of this unit the learner will be able to:

 Observe and investigate the impact on the optimal solution of changes in parameter values.

 Identify the parameters whose values cannot be changed without changing the optimal solution.

6.2 Variation of coefficients of objective function (cj)


There are two different cases to discuss:
1) c_j is the cost component corresponding to a non-basic variable.
2) c_j is the cost component corresponding to an optimal basic variable.
Case (1): At the optimum simplex table all z_j − c_j ≥ 0 [maximization problem].
Let Δc_j be the amount to be added to the cost c_j of a non-basic variable x_j. For that
discrete change of the cost component the simplex table remains optimal provided
z_j − (c_j + Δc_j) ≥ 0, the z_j being unchanged,
⟹ z_j − c_j ≥ Δc_j        (6.2.1)
From (6.2.1) we can say that there is no lower bound on Δc_j, and any such change of c_j does not
change the value of the objective function, since x_j = 0 [x_j is non-basic].

Case (2): Let x_{B_r} = x_k be the r-th basic variable [r = 1, 2, 3, …, m], and let the cost
component c_{B_r} = c_k change to c_{B_r} + Δc_{B_r}. This change of c_{B_r} affects
z_j − c_j for all j not in the basis. Due to the change,

z*_j − c_j = Σ_{i≠r} c_{B_i} y_{ij} − c_j + (c_{B_r} + Δc_{B_r}) y_{rj}
           = Σ_i c_{B_i} y_{ij} − c_j + Δc_{B_r} y_{rj}
           = (z_j − c_j) + Δc_{B_r} y_{rj}
Now, to keep the solution optimal, we must have all z*_j − c_j ≥ 0 for all j ∉ basis.
Thus (z_j − c_j) + Δc_{B_r} y_{rj} ≥ 0.
If y_{rj} = 0, then (z_j − c_j) + Δc_{B_r} y_{rj} = z_j − c_j ≥ 0 automatically.

If y_{rj} < 0, then Δc_{B_r} ≤ −(z_j − c_j)/y_{rj};
if y_{rj} > 0, then Δc_{B_r} ≥ −(z_j − c_j)/y_{rj}.        (6.2.2)


Combining the two conditions given in (6.2.2), we can say that the optimal basis
remains unaffected if Δc_k = Δc_{B_r} lies in the interval given below:

max_j { −(z_j − c_j)/y_{rj} : y_{rj} > 0 } ≤ Δc_k = Δc_{B_r} ≤ min_j { −(z_j − c_j)/y_{rj} : y_{rj} < 0 }        (6.2.3)

for all j ∉ basis. If no y_{rj} < 0, there is no upper bound on Δc_{B_r}, and if no y_{rj} > 0, there is no lower
bound. Here the value of the objective function changes by Δz = Δc_{B_r} x_{B_r}.
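Formula (6.2.3) is straightforward to evaluate from a final tableau. The following sketch (Python with numpy; the tableau data are hypothetical, not from any example in this module) computes the admissible interval for Δc_{B_r}:

```python
import numpy as np

# Hypothetical final-tableau data (maximization): z_c[j] = z_j - c_j for the
# non-basic columns, y_r[j] = entries of row r (the row of the varied cost).
z_c = np.array([2.0, 7.0, 3.0])      # all >= 0 at the optimum
y_r = np.array([4.0, -1.0, 0.5])

ratios = -z_c / y_r                  # -(z_j - c_j) / y_rj, as in (6.2.3)
lower = ratios[y_r > 0].max() if (y_r > 0).any() else -np.inf
upper = ratios[y_r < 0].min() if (y_r < 0).any() else np.inf
# any change with lower <= delta_c <= upper keeps the current basis optimal
```

Here the interval works out to −1/2 ≤ Δc_{B_r} ≤ 7; if one of the sign classes were empty, the corresponding bound would be infinite, exactly as stated above.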

6.3 Variation of the requirement vector (b)


Let us discuss the optimality of a problem when a component of b, say b_k, changes to
b_k + Δb_k [k = 1, 2, …, m].
For the requirement vector b, let the optimal basis be B and x_B = B⁻¹b ≥ 0. For the change
Δb_k of the component b_k, the new basic solution corresponding to the basis B is

x̂_B = B⁻¹[b_1, b_2, …, b_k + Δb_k, …, b_m]′,
x̂_{B_i} = x_{B_i} + β_{ik} Δb_k   [i = 1, 2, …, m]        (6.3.1)
for all i in the basis, where β_{ik} is the element of the i-th row and k-th column of B⁻¹.

Now we shall have to discuss the following cases so that the new basic solution remains feasible.

I. If β_{ik} = 0, then x̂_{B_i} = x_{B_i} ≥ 0.
II. If β_{ik} > 0, then x_{B_i} + β_{ik} Δb_k ≥ 0 ⇒ Δb_k ≥ −x_{B_i}/β_{ik}   [a lower bound, ≤ 0 here].
III. If β_{ik} < 0, then x_{B_i} + β_{ik} Δb_k ≥ 0 ⇒ Δb_k ≤ −x_{B_i}/β_{ik}   [an upper bound, ≥ 0 here].
Combining these, we can say that the optimality remains unaffected if Δb_k lies in
the interval

max_i { −x_{B_i}/β_{ik} : β_{ik} > 0 } ≤ Δb_k ≤ min_i { −x_{B_i}/β_{ik} : β_{ik} < 0 }        (6.3.2)

Here also, if no β_{ik} < 0, there is no upper bound, and if no β_{ik} > 0, there is no lower bound.

Again it can be easily verified that if the maximum in (6.3.2) occurs for i = r (with β_{rk} > 0), then x̂_{B_r} will be zero
for that extreme change Δb_k; similarly, if the minimum occurs for i = s (with β_{sk} < 0), then x̂_{B_s} will be
zero. Thus at the extreme values the new B.F.S is degenerate.

Due to the new B.F.S, the change in the value of the objective function is given by

Δz = Σ_i c_{B_i} β_{ik} Δb_k.
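The interval (6.3.2) can be computed in the same way as the cost ranges. The sketch below (Python with numpy; the values are hypothetical) also confirms the degeneracy remark: at either endpoint one basic variable drops to zero.

```python
import numpy as np

# Hypothetical optimal data: x_B = B^{-1} b and the k-th column of B^{-1}.
x_B    = np.array([4.0, 2.0])
beta_k = np.array([1.0, -0.5])       # beta_{ik}, i = 1, ..., m

ratios = -x_B / beta_k               # -x_{B_i} / beta_{ik}
lower = ratios[beta_k > 0].max() if (beta_k > 0).any() else -np.inf
upper = ratios[beta_k < 0].min() if (beta_k < 0).any() else np.inf

# At either extreme the basic solution turns degenerate, as claimed above:
assert np.allclose(x_B + beta_k * lower, [0.0, 4.0])
assert np.allclose(x_B + beta_k * upper, [8.0, 0.0])
```

Here −4 ≤ Δb_k ≤ 4; the lower endpoint zeroes the first basic variable and the upper endpoint zeroes the second.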


Example 6.3.1
Given the following L.P.P
Maximize z = x1 + 4x2 − 2x3 + 3x4 − x5
Subject to x1 − 3x2 + x3 + 2x4 + 6x5 ≤ 3
2x1 + x2 + 3x4 + 2x5 ≤ 6
4x1 + x2 − x4 + x5 ≤ 2
x_j ≥ 0, j = 1, 2, …, 5

Find the ranges over which c1, c3, c4 and b3 can be changed so that the optimality of the current
solution remains undisturbed [assume that when one of them changes, all other quantities remain
unchanged].
The optimal simplex table is

  c_j:              1      4     −2     3     −1     0     0      0
c_B   Basis   b     a1     a2     a3    a4     a5    a6    a7     a8
 0     x6    10    25/2    0      1     0     37/4   1    1/4    11/4
 3     x4     1    −1/2    0      0     1      1/4   0    1/4    −1/4
 4     x2     3     7/2    1      0     0      5/4   0    1/4     3/4
       z_j − c_j:  23/2    0      2     0     27/4   0    7/4     9/4      (z = 15)

where x6, x7, x8 are the slack variables.


The optimal basis is
B = (a6, a4, a2) = [ 1   2  −3 ]
                   [ 0   3   1 ]        (6.3.3)
                   [ 0  −1   1 ]
Thus c_B = (c_{B1}, c_{B2}, c_{B3}) = (0, 3, 4), and x1, x3, x5 are non-basic.
Then for c1 and c3 the ranges are

Δc1 ≤ z1 − c1 = 23/2,
Δc3 ≤ z3 − c3 = 2        (6.3.4)

(with no lower bounds, by Case (1) above). Next, c4 = c_{B2},
the cost component corresponding to the second basic variable.
Hence Δc4 lies in the interval given by the formula

max_j { −(z_j − c_j)/y_{2j} : y_{2j} > 0 } ≤ Δc4 ≤ min_j { −(z_j − c_j)/y_{2j} : y_{2j} < 0 }

Or, max{−27, −7} ≤ Δc4 ≤ min{23, 9}        (6.3.5)

Or −7 ≤ Δc4 ≤ 9.
x6, x7 and x8 constitute the initial unit basis. Hence the vectors y6, y7, y8 under the vectors
a6, a7, a8 in the final simplex table constitute the optimal basis inverse, and hence

B⁻¹ = [ 1   1/4   11/4 ]
      [ 0   1/4   −1/4 ]        (6.3.6)
      [ 0   1/4    3/4 ]

Note: verify this result by finding the inverse of B given in (6.3.3) by the matrix-inversion method.
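The verification suggested in the note can also be done numerically (Python with numpy):

```python
import numpy as np

B = np.array([[1.0, 2.0, -3.0],       # B = (a6, a4, a2) from (6.3.3)
              [0.0, 3.0,  1.0],
              [0.0, -1.0, 1.0]])
B_inv = np.array([[1.0, 0.25, 2.75],  # matrix (6.3.6) read from the tableau
                  [0.0, 0.25, -0.25],
                  [0.0, 0.25, 0.75]])
assert np.allclose(np.linalg.inv(B), B_inv)

# As a further check, B_inv @ b reproduces the optimal basic solution:
b = np.array([3.0, 6.0, 2.0])
assert np.allclose(B_inv @ b, [10.0, 1.0, 3.0])
```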
The range of Δb3 such that the optimal basis remains unchanged is given by

max_i { −x_{B_i}/β_{i3} : β_{i3} > 0 } ≤ Δb3 ≤ min_i { −x_{B_i}/β_{i3} : β_{i3} < 0 }

where β_{i3} is the element in the i-th row and third column of B⁻¹.

Or, max{−40/11, −4} ≤ Δb3 ≤ min{4},
i.e. −40/11 ≤ Δb3 ≤ 4.

Example 6.3.2 Discuss the effect of changes in the requirements for the following L.P.P
Maximize z = x1 − x2 + 3x3
Subject to x1 + x2 + x3 ≤ 10
2x1 − x3 ≤ 2
2x1 − 2x2 + 3x3 ≤ 0
x_j ≥ 0, j = 1, 2, 3,
so that the optimality of this problem remains undisturbed.
The components of the requirement vector are b = [b1, b2, b3] = [10, 2, 0].
The optimum simplex table is given by


c_B   Basis   b     a1     a2    a3    a4(β1)   a5(β2)   a6(β3)
−1     x2     6    1/5     1     0     3/5       0       −1/5
 0     x5     6    14/5    0     0     2/5       1        1/5
 3     x3     4    4/5     0     1     2/5       0        1/5
       z_j − c_j:  6/5     0     0     3/5       0        4/5      (z = 6)

The optimal basis is

B = (a2, a5, a3) = [ 1   0   1 ]
                   [ 0   1  −1 ]
                   [−2   0   3 ]

a4, a5 and a6 constitute the initial unit basis; these are the slack vectors. Then
y4, y5, y6 under a4, a5, a6 respectively of the final table constitute the final basis inverse, which is
given by

B⁻¹ = [ 3/5   0   −1/5 ]
      [ 2/5   1    1/5 ]
      [ 2/5   0    1/5 ]

And the optimal basic solution is x_B = B⁻¹b = [6, 6, 4]′.

Hence the individual ranges of changes of b1, b2, b3 are given by

max{ −6/(3/5), −6/(2/5), −4/(2/5) } ≤ Δb1   [∵ β11 = 3/5, β21 = 2/5, β31 = 2/5, all > 0]


Or −10 ≤ Δb1 [there is no upper limit].

max{ −6/1 } ≤ Δb2   [∵ β22 = 1 is the only nonzero entry of the second column]
Or −6 ≤ Δb2 [again there is no upper limit].

max{ −6/(1/5), −4/(1/5) } ≤ Δb3 ≤ min{ −6/(−1/5) }   [∵ β13 = −1/5, β23 = β33 = 1/5]

Or −20 ≤ Δb3 ≤ 30 [the maximum occurs for i = 3, and the minimum occurs for i = 1].

The interpretation is that, if b1 and b2 remain unchanged, then the optimal solution remains optimal if

(0 − 20 ≤ b3 ≤ 0 + 30), i.e. (−20 ≤ b3 ≤ 30).
For Δb3 = −20, the B.F.S corresponding to the basis is x̂_B = B⁻¹[10, 2, −20]′ = [10, 2, 0]′, which is a
degenerate B.F.S with the third component x̂_{B3} = x3 = 0.

For Δb3 = 30, the B.F.S corresponding to the basis is x̂_B = B⁻¹[10, 2, 30]′ = [0, 12, 10]′, which is a

degenerate B.F.S with the first component x̂_{B1} = x2 = 0.
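The computations of this example are easy to confirm numerically (Python with numpy, using the basis inverse and b = [10, 2, 0] from above):

```python
import numpy as np

B_inv = np.array([[0.6, 0.0, -0.2],   # the basis inverse of the example
                  [0.4, 1.0,  0.2],
                  [0.4, 0.0,  0.2]])
b = np.array([10.0, 2.0, 0.0])
x_B = B_inv @ b                       # optimal basic solution (6, 6, 4)

beta3 = B_inv[:, 2]                   # third column of B^{-1}
ratios = -x_B / beta3
lower = ratios[beta3 > 0].max()       # -20
upper = ratios[beta3 < 0].min()       # 30

# At the extremes the solution becomes degenerate, as computed above:
assert np.allclose(B_inv @ (b + np.array([0, 0, lower])), [10, 2, 0])
assert np.allclose(B_inv @ (b + np.array([0, 0, upper])), [0, 12, 10])
```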

6.4 Variation of constraints

6.4.1. Changes in the constraint coefficients


Here the problem is to investigate the effect of changing a coefficient a_ij to a_ij + Δa_ij after
finding the optimum solution with a_ij. There are two possibilities in this case. The first
possibility occurs when all the coefficients a_ij in which changes are made belong to the
columns of those variables that are non-basic in the old optimal solution. In this case, the effect
of changing a_ij on the optimal solution can be investigated by adopting the procedure outlined in
the preceding section. The second possibility occurs when the coefficients changed
correspond to a basic variable, say x_k, of the old optimal solution. The following procedure can
be adopted to examine the effect of changing a_{i,k} to a_{i,k} + Δa_{i,k}.

1. Introduce a new variable x_{n+1} to the original system with constraint coefficients

a_{i,n+1} = a_{i,k} + Δa_{i,k}        (6.4.1)

and cost coefficient
c_{n+1} = c_k [the original value itself]        (6.4.2)
2. Transform the coefficients a_{i,n+1} to ā_{i,n+1} by using the inverse of the old optimal basis,
B⁻¹ = [β_ij], as

ā_{i,n+1} = Σ_j β_ij a_{j,n+1},   i = 1, 2, …, m        (6.4.3)
3. Replace the original cost coefficient c_k of x_k by a large positive number M,
but keep c_{n+1} equal to the old value c_k.


4. Compute the modified relative cost coefficients using

c̄′_j = c̄_j + Δc_j − Σ_i ā_{ij} Δc_{B_i} ≥ 0,   j = m+1, m+2, …, n        (6.4.4)

where c̄_j indicates the values of the relative cost coefficients corresponding to the original
optimal solution, Δc_j = 0 for j = 1, 2, …, k−1, k+1, …, n, and Δc_k = M − c_k.
5. Carry out the regular iterative procedure of the simplex method with the new objective function
and the augmented matrix found in Eqs. (6.4.3) and (6.4.4) until the new optimum is
found.

Remarks:
1. The number M has to be taken sufficiently large to ensure that x_k cannot be contained in the
new optimal basis that is ultimately going to be found.
2. The procedure above can easily be extended to cases where changes in coefficients of more
than one column are made.
3. The present procedure will be computationally efficient (compared to reworking the
problem from the beginning) only for cases where there are not too many basic
columns in which the a_ij are changed.
Example: Find the effect of changing a1 from (7, 3)′ to (6, 10)′ in the problem below:
Minimize f = −45x1 − 100x2 − 30x3 − 50x4
s.t. 7x1 + 10x2 + 4x3 + 9x4 ≤ 1200
3x1 + 40x2 + x3 + x4 ≤ 800
x_j ≥ 0, j = 1, …, 4
(i.e., changes are made in the coefficients of a non-basic variable only)
Solution
The relative cost coefficients of the non-basic variables (of the original optimum solution)
corresponding to the new a_j are given by
c̄_j = c_j − z_j = c_j − Σ_i π_i a_ij

Since a1 is changed, we have

c̄1 = c1 − π1 a11 − π2 a21 = −45 − (−22/3)(6) − (−2/3)(10) = 17/3

where (π1, π2) = (−22/3, −2/3) are the simplex multipliers of the original optimal solution.
As c̄1 is positive, the original optimum solution remains optimum for the new problem also.
Class activity
Find the effect of changing a1 from (7, 3)′ to (5, 6)′ in the problem above.
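Both the worked example and the class activity can be checked numerically. The sketch below reproduces the worked example, under the assumption (consistent with the multipliers −22/3 and −2/3 used above) that the optimal basis of the original problem is (x2, x3):

```python
import numpy as np

# Assumed optimal basis of the original problem: x2 and x3,
# i.e. columns (10, 40) and (4, 1), with c_B = (-100, -30).
B = np.array([[10.0, 4.0],
              [40.0, 1.0]])
c_B = np.array([-100.0, -30.0])
pi = c_B @ np.linalg.inv(B)           # simplex multipliers (-22/3, -2/3)

a1_new = np.array([6.0, 10.0])        # the modified first column
c1 = -45.0
c1_bar = c1 - pi @ a1_new             # relative cost of the modified column
assert c1_bar > 0                     # = 17/3: the old optimum stays optimal
```

The class activity can be checked the same way by swapping in the column (5, 6)′.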
6.4.2. Addition of new variables
Suppose we introduce a new variable x_{n+1}, together with a corresponding column A_{n+1}, and
obtain the new problem

minimize  c′x + c_{n+1} x_{n+1}
s.t.      Ax + A_{n+1} x_{n+1} = b
          x ≥ 0, x_{n+1} ≥ 0
We wish to determine whether the current basis B is still optimal.
We note that (x, x_{n+1}) = (x*, 0) is a basic feasible solution to the new problem associated with the
basis B, and we only need to examine the optimality conditions. For the basis B to remain optimal,
it is necessary and sufficient that the reduced cost of x_{n+1} be non-negative, that is,

c̄_{n+1} = c_{n+1} − c_B′B⁻¹A_{n+1} ≥ 0

If this condition is satisfied, (x*, 0) is an optimal solution to the new problem. If, however,
c̄_{n+1} < 0, then (x*, 0) is not necessarily optimal. In order to find an optimal solution, we add a
column to the simplex tableau associated with the new variable and apply the primal simplex
algorithm starting from the current basis B. Typically, an optimal solution to the new problem is
obtained with a small number of iterations, and this approach is usually much faster than solving the
new problem from scratch.

Example 6.4.1
Consider the problem
minimize −5x1 − x2 + 12x3
s.t. 3x1 + 2x2 + x3 = 10
5x1 + 3x2 + x4 = 16
x_j ≥ 0, j = 1, …, 4
An optimal solution to this problem is given by x* = (2, 2, 0, 0) and the corresponding simplex
tableau is given by

        12     0     0     2     7
x1 = 2         1     0    −3     2
x2 = 2         0     1     5    −3

Note that B⁻¹ is given by the last two columns of the tableau.


Let us now introduce a new variable x5 and consider the new problem
minimize −5x1 − x2 + 12x3 − x5
s.t. 3x1 + 2x2 + x3 + x5 = 10
5x1 + 3x2 + x4 + x5 = 16
x_j ≥ 0, j = 1, 2, …, 5
We have A5 = (1, 1)′ and

c̄5 = c5 − c_B′B⁻¹A5 = −1 − [−5  −1] [ −3   2 ] [ 1 ]  =  −4
                                    [  5  −3 ] [ 1 ]

Since c̄5 is negative, introducing the new variable x5 into the basis can be beneficial.
We observe that B⁻¹A5 = (−1, 2)′ and augment the tableau by introducing a column associated
with x5:

        12     0     0     2     7    −4
x1 = 2         1     0    −3     2    −1
x2 = 2         0     1     5    −3     2
We then bring x5 into the basis; x2 exits the basis,
and we obtain the following tableau, which happens to be optimal:

        16     0     2    12     1     0
x1 = 3         1    0.5  −0.5   0.5    0
x5 = 1         0    0.5   2.5  −1.5    1

An optimal solution is given by x* = (3, 0, 0, 0, 1).
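The pricing computation of this example can be replicated with a few lines of numpy:

```python
import numpy as np

# Example 6.4.1: pricing the new column A5 = (1, 1) with cost c5 = -1.
B = np.array([[3.0, 2.0],     # columns of x1, x2 (the optimal basis)
              [5.0, 3.0]])
c_B = np.array([-5.0, -1.0])
A5 = np.array([1.0, 1.0])

B_inv = np.linalg.inv(B)      # equals [[-3, 2], [5, -3]]
c5_bar = -1.0 - c_B @ B_inv @ A5
assert c5_bar < 0             # = -4, so bringing x5 in improves the cost
```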

6.5 Addition of constraints


Adding a Constraint

If you add a constraint to a problem, two things can happen. Your original solution satisfies the
constraint or it doesn't. If it does, then you are finished. If you had a solution before and the
solution is still feasible for the new problem, then you must still have a solution. If the original
solution does not satisfy the new constraint, then possibly the new problem is infeasible. If not,
then there is another solution, and the optimal value cannot improve (adding a constraint makes the problem
harder to satisfy, so you cannot possibly do better than before). If your original solution satisfies
your new constraint, then you can do as well as before; if not, then you will do worse.

6.5.1 A new inequality constraint is added


Let us now consider a new constraint a′x ≥ b_{m+1}, where a and b_{m+1} are given.
If the optimal solution x* to the original problem satisfies this constraint, then x* is an optimal
solution to the new problem as well. If the new constraint is violated, we introduce a new non-
negative slack variable x_{n+1}, and rewrite the new constraint in the form
a′x − x_{n+1} = b_{m+1}.
We obtain a problem in standard form, in which the matrix A is replaced by


[ A    0 ]
[ a′  −1 ]
If B is an optimal basis for the original problem, we form a basis for the new problem by
selecting the original basic variables together with x_{n+1}; the new basis matrix B̄ is of the form
B̄ = [ B     0 ]
    [ a_B′ −1 ]
where the row vector a_B′ contains those components of a′ associated with the original basic
columns.
The basic solution associated with this basis is (x_B*, a_B′x_B* − b_{m+1}), and it is infeasible because of
our assumption that x* violates the new constraint.
Note that the new basis inverse is readily available, because
B̄⁻¹ = [ B⁻¹       0 ]
      [ a_B′B⁻¹  −1 ]
Let c_B be the m-dimensional vector with the costs of the basic variables in the original problem.
Then the vector of reduced costs associated with the basis B̄ for the new problem is given by
[c′ 0] − [c_B′ 0] [ B⁻¹       0 ] [ A    0 ]  =  [ c′ − c_B′B⁻¹A    0 ]
                  [ a_B′B⁻¹  −1 ] [ a′  −1 ]
and is non-negative due to the optimality of B for the original problem. Hence B̄ is a dual
feasible basis and we are in a position to apply the dual simplex method to the new problem.
Note that an initial simplex tableau for the new problem is readily constructed; we have
B̄⁻¹ [ A    0 ]  =  [ B⁻¹A             0 ]
    [ a′  −1 ]     [ a_B′B⁻¹A − a′   1 ]
where B⁻¹A is available from the final simplex tableau for the original problem.
Example:
Consider the problem
min −5x1 − x2 + 12x3
s.t 3x1 + 2x2 + x3 = 10
5x1 + 3x2 + x4 = 16
x1, …, x4 ≥ 0


And recall the optimal simplex tableau

        12     0     0     2     7
x1 = 2         1     0    −3     2
x2 = 2         0     1     5    −3

We introduce the additional constraint x1 + x2 ≥ 5, which is violated by the optimal solution
x* = (2, 2, 0, 0): we have a′ = (1, 1, 0, 0) and a′x* = 4 < 5.
We form the standard form problem

min −5 − + 12
s.t 3 + 2 + = 10
5 + 3 + = 16
+ − =5
,…, ≥ 0
Let a_B consist of the components of a′ associated with the basic variables. We have a_B′ = (1, 1)
and

a_B′B⁻¹A − a′ = [1 1] [ 1  0  −3   2 ] − [1 1 0 0] = (0 0 2 −1)
                      [ 0  1   5  −3 ]
0 1 5 −3
The tableau for the new problem is of the form

        12     0     0     2     7     0
x1 = 2         1     0    −3     2     0
x2 = 2         0     1     5    −3     0
x5 = −1        0     0     2    −1     1
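The extra tableau row and its right-hand side can be rebuilt directly from the formulas above (Python with numpy):

```python
import numpy as np

# New constraint x1 + x2 >= 5 added to the example: build the extra
# tableau row a_B' B^{-1} A - a' and the new basic value a'x* - b_new.
B_inv = np.array([[-3.0, 2.0],
                  [5.0, -3.0]])
A = np.array([[3.0, 2.0, 1.0, 0.0],
              [5.0, 3.0, 0.0, 1.0]])
a = np.array([1.0, 1.0, 0.0, 0.0])
a_B = np.array([1.0, 1.0])            # components of a on the basis (x1, x2)
x_star = np.array([2.0, 2.0, 0.0, 0.0])

row = a_B @ B_inv @ A - a             # (0, 0, 2, -1)
rhs = a @ x_star - 5.0                # -1: infeasible, so dual simplex applies
```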

Our discussion has been focused on the case where an inequality constraint is added to the primal
problem.


6.5.2 A new equality constraint is added



We now consider the case where the new constraint is of the form a′x = b_{m+1}, and we
assume that this new constraint is violated by the optimal solution x* to the original problem.
The dual of the new problem is

maximize  p′b + p_{m+1} b_{m+1}
s.t.      p′A + p_{m+1} a′ ≤ c′

where p_{m+1} is the dual variable associated with the new constraint. Let p* be an optimal basic
feasible solution to the original dual problem. Then (p*, 0) is a feasible solution to the new dual
problem.
Let m be the dimension of p*, which is the same as the original number of constraints. Since p* is a
basic feasible solution to the original dual problem, m of the constraints in (p*)′A ≤ c′ are
linearly independent and active. However, there is no guarantee that at (p*, 0) we will have
m + 1 linearly independent active constraints of the new dual problem. In particular, (p*, 0) need
not be a basic feasible solution to the new dual problem and may not provide a convenient starting
point for the dual simplex method on the new problem.
While it may be possible to obtain a dual basic feasible solution by setting p_{m+1} to a suitably
chosen nonzero value, we present here an alternative approach.

Let us assume, without loss of generality, that a′x* > b_{m+1}. We introduce the auxiliary primal
problem
minimize  c′x + M x_{n+1}
s.t.      Ax = b
          a′x − x_{n+1} = b_{m+1}
          x ≥ 0, x_{n+1} ≥ 0
where M is a large positive constant. A primal feasible basis for the auxiliary problem is
obtained by picking the basic variables of the optimal solution to the original problem, together
with the variable x_{n+1}. The resulting basis matrix B̄ is the same as the matrix B̄ of the preceding
subsection. There is a difference, however: in the preceding subsection, B̄ was a dual feasible
basis, whereas here B̄ is a primal feasible basis. For this reason, the primal simplex method can
now be used to solve the auxiliary problem to optimality.
Suppose that an optimal solution to the auxiliary problem satisfies x_{n+1} = 0; this will be the case
if the new problem is feasible and the coefficient M is large enough.
Then the additional constraint a′x = b_{m+1} has been satisfied, and we have an optimal
solution to the new problem.

6.6 Solver outputs and interpretations


Linear programming - sensitivity analysis - using Solver
Recall the production planning problem concerned with four variants of the same product which
we formulated before as an LP. To remind you of it we repeat below the problem and our
formulation of it.
Production planning problem

A company manufactures four variants of the same product and in the final part of the
manufacturing process there are assembly, polishing and packing operations. For each variant the
time required for these operations is shown below (in minutes) as is the profit per unit sold.
            Assembly  Polish  Pack  Profit (£)
Variant 1       2        3      2      1.50
Variant 2       4        2      3      2.50
Variant 3       3        3      2      3.00
Variant 4       7        4      5      4.50

 Given the current state of the labour force the company estimate that, each year, they
have 100000 minutes of assembly time, 50000 minutes of polishing time and 60000
minutes of packing time available. How many of each variant should the company make
per year and what is the associated profit?
 Suppose now that the company is free to decide how much time to devote to each of the
three operations (assembly, polishing and packing) within the total allowable time of
210000 (= 100000 + 50000 + 60000) minutes. How many of each variant should the
company make per year and what is the associated profit?

Production planning solution

Variables

Let: xi be the number of units of variant i (i=1,2,3,4) made per year

Tass be the number of minutes used in assembly per year


Tpol be the number of minutes used in polishing per year
Tpac be the number of minutes used in packing per year

where xi >= 0 (i=1,2,3,4) and Tass, Tpol, Tpac >= 0

Constraints

(a) operation time definition

Tass = 2x1 + 4x2 + 3x3 + 7x4 (assembly)


Tpol = 3x1 + 2x2 + 3x3 + 4x4 (polish)
Tpac = 2x1 + 3x2 + 2x3 + 5x4 (pack)

(b) operation time limits

The operation time limits depend upon the situation being considered. In the first situation,
where the maximum time that can be spent on each operation is specified, we simply have:


Tass<= 100000 (assembly)


Tpol<= 50000 (polish)
Tpac<= 60000 (pack)

In the second situation, where the only limitation is on the total time spent on all operations, we
simply have:

Tass + Tpol + Tpac<= 210000 (total time)

Objective

Presumably to maximise profit - hence we have

maximise 1.5x1 + 2.5x2 + 3.0x3 + 4.5x4

which gives us the complete formulation of the problem.
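For readers without Excel, the first situation can also be checked with a short numpy script. It verifies feasibility of the answer Solver reports below (profit £58000 with x2 = 16000, x3 = 6000) and certifies optimality with a hand-computed shadow-price vector — the vector y is our own assumption here, to be compared against the Sensitivity Report:

```python
import numpy as np

profit = np.array([1.5, 2.5, 3.0, 4.5])
A = np.array([[2, 4, 3, 7],     # assembly minutes per unit of each variant
              [3, 2, 3, 4],     # polishing minutes
              [2, 3, 2, 5]])    # packing minutes
limits = np.array([100000, 50000, 60000])

x = np.array([0, 16000, 6000, 0])        # Solver's reported answer
assert (A @ x <= limits).all()           # feasible
assert profit @ x == 58000               # profit of £58000

# Optimality certificate: shadow prices y >= 0 with y A >= profit and
# y . limits = 58000; by the weak duality of Chapter 5, x must be optimal.
y = np.array([0.0, 0.8, 0.3])            # assumed dual (shadow-price) vector
assert (y @ A >= profit - 1e-9).all()
assert abs(y @ limits - 58000) < 1e-6
```

Note how the dual certificate mirrors the Answer Report: the assembly constraint is not binding, so its shadow price is zero.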

Solution - using Solver

Below we solve this LP with the Solver add-in that comes with Microsoft Excel.

Look at Sheet A in lp.xls and to use Solver do Tools and then Solver. In the version of Excel I
am using (different versions of Excel have slightly different Solver formats) you will get the
Solver model as below:

but where now we have highlighted (clicked on) two of the Reports available - Answer and
Sensitivity. Click OK and you will find that two new sheets have been added to the spreadsheet -
an Answer Report and a Sensitivity Report.


As these reports are indicative of the information that is commonly available when we solve a LP
via a computer we shall deal with each of them in turn.

Answer Report

The answer report can be seen below:

This is the most self-explanatory report.

We can see that the optimal solution to the LP has value 58000 (£) and that Tass = 82000,
Tpol = 50000, Tpac = 60000, x1 = 0, x2 = 16000, x3 = 6000 and x4 = 0.

Note that we had three constraints for total assembly, total polishing and total packing time in
our LP. The assembly time constraint is declared to be 'Not Binding' whilst the other two
constraints are declared to be 'Binding'. Constraints with a 'Slack' value of zero are said to be
tight or binding in that they are satisfied with equality at the LP optimal. Constraints which are
not tight are called loose or not binding.

Sensitivity Report

The sensitivity report can be seen below:


Summary
 In all LP models the coefficients of the objective function and the constraints are
supplied as input data or as parameters to the model.
 Sensitivity analysis helps to study how the optimal solution will change with changes
in the input coefficients.
 The optimal solution obtained is based on the values of these coefficients.
 In most practical applications, some of the problem data are not known exactly and hence
are estimated as well as possible.
 It is important to be able to find the new optimal solution of the problem as other
estimates of some of the data become available, without the expensive task of re-solving
the problem from scratch.
 Furthermore, in many situations the constraints are not very rigid. For example, a
constraint may reflect the availability of some resource. This availability can be increased
by extra purchase, overtime, buying new equipment, and the like.


 It is desirable to examine the effect of relaxing some of the constraints on the value of
the optimal objective without having to resolve the problem. These and other related
topics constitute sensitivity analysis
 The ranges of the original objective function coefficients of the original variables for
which the current basis remains optimal.
 The ranges of the right-hand-side constants for the constraints for which the current basis
remains optimal.
 Use Excel’s Solver Add-In to solve linear programming problems.
 Interpret the results of models and perform basic sensitivity analysis.

Review Exercise
1. Write a short note on sensitivity analysis.
2. Describe the working rule for determining the range of variation of the discrete
change of a component of the cost vector c such that an optimal solution remains undisturbed:
a) when the component is the coefficient of a non-basic variable;
b) when the component is the coefficient of a basic variable belonging
to the optimal basis.
3. Describe the working rule for determining the range of a component of the
requirement vector b such that optimality remains undisturbed. Also prove that
for the extreme values of the component, the B.F.S will be degenerate.
4. Solve the following L.P.P by simplex method
Maximize z = 5x1 + 4x2 + x3
s.t. 6x1 + x2 + 2x3 ≤ 12
8x1 + 2x2 + x3 ≤ 30
4x1 + x2 − 2x3 ≤ 16
x1, x2, x3 ≥ 0
a) Find the range of c1, the coefficient of x1 in the objective function, such
that the optimality remains undisturbed [assume that c2 and c3 remain
unchanged].
b) Find the ranges of c2 and c3 so that the optimal basis remains the same.
5. From the optimal simplex table of the following L.P.P
Maximize z = 2x1 − x2 + 3x3
s.t. 3x1 + x2 − 2x3 ≤ 6
2x1 + 5x2 + x3 ≤ 14
4x1 + 4x2 + 2x3 ≤ 8
x1, x2, x3 ≥ 0

Find the ranges of b1, b2 and b3 such that the optimal basis remains unchanged.

6. Solve the L.P


z = x1 + x2 + 3x3
s.t. x1 + 2x2 − x3 ≤ 10
3x1 + 2x2 ≤ 8
x1 + 3x2 ≤ 15
x1, x2, x3 ≥ 0
Taking the initial basis as (a4, a5, a6), where a4, a5 and a6 are the unit slack
vectors.
From the optimal simplex table, find the ranges of b1, b2 and b3 such that the
optimality remains unchanged.

7. Solve the L.P by simplex method

z = 2x1 + 5x2
s.t. −2x1 + x2 ≤ 0
x1 + 3x2 ≤ 14
x1 + x2 ≤ 8
x1, x2 ≥ 0
where the initial B.F.S is a degenerate one.
Find the range of b1 such that optimality is not violated, and find the basic
feasible solutions for the extreme values of b1.

8. The following is the final optimal table for an L.P.P. Assume that originally the identity
matrix was under (a3, a4). (Maximization)

            c_j:   2    2    0    0
c_B   Basis   b   a1   a2   a3   a4
 2     x1     5    1    0    …    …
 2     x2     4    0    1    …    …
       z_j − c_j:  0    0    0    2      (z = 18)

Suppose c1 were equal to 3 instead of 2. Would you still have the optimal solution? Conclude by
calculating z_j − c_j with the new cost components. Verify that result by using the theory of sensitivity
analysis.


CHAPTER SEVEN

7. INTERIOR POINT METHODS


Introduction

In this chapter, we begin our study of an alternative to the simplex method for solving linear
programming problems. The algorithm we are going to introduce is called a path-following
method. It belongs to a class of methods called interior-point methods. The path-following
method seems to be the simplest and most natural of all the methods in this class, so in this module
we focus primarily on it. Before we can introduce this method, we must define the path that
appears in the name of the method. This path is called the central path and is the subject of this
chapter. Before discussing the central path, we must lay some groundwork by analyzing a
nonlinear problem, called the barrier problem, associated with the linear programming problem
that we wish to solve.

It is then perhaps not surprising that the announcement by Karmarkar in 1984 of a new
polynomial time algorithm, an interior-point method, with the potential to improve the practical
effectiveness of the simplex method made front-page news in major newspapers and magazines
throughout the world. It is this interior-point approach that is the subject of this chapter and the
next. This chapter begins with a brief introduction to complexity theory, which is the basis for a
way to quantify the performance of iterative algorithms, distinguishing polynomial-time
algorithms from others. Next the example of Klee and Minty showing that the simplex method is
not a polynomial-time algorithm in the worst case is presented. Following that the ellipsoid
algorithm is defined and shown to be a polynomial-time algorithm. These two sections provide a
deeper understanding of how the modern theory of linear programming evolved, and help make
clear how complexity theory impacts linear programming. However, the reader may wish to
consider them optional and omit them at first reading. The development of the basics of interior-
point theory begins with this section which introduces the concept of barrier functions and the
analytic center. Section 7.2 introduces the central path which underlies interior-point algorithms.

General objectives

At the end of this unit the learner will be able to:

▪ Understand the basic ideas of interior point methods

▪ Identify the steepest descent direction

▪ Explain one iteration of Karmarkar’s projective algorithm

▪ Define the inverse transformation

▪ Convert a given linear programming problem into the required format


7.1 Basic ideas


• Suppose we are at an interior point of the feasible set.

• Consider the steepest descent step.

• This step doesn’t make much progress unless our starting point is central.

• So we’ll change the coordinate system so that the current point is always central.

In this chapter, we consider the linear programming problem expressed, as usual, with inequality
constraints and nonnegative variables:

maximize cᵀx
subject to Ax ≤ b, x ≥ 0.

The corresponding dual problem is

minimize bᵀy
subject to Aᵀy ≥ c, y ≥ 0.

As usual, we add slack variables to convert both problems to equality form:

maximize cᵀx
subject to Ax + w = b, x, w ≥ 0,    (1)

and

minimize bᵀy
subject to Aᵀy − z = c, y, z ≥ 0.

Given a constrained maximization problem where some of the constraints are inequalities (such
as our primal linear programming problem), one can consider replacing any inequality constraint
with an extra term in the objective function. For example, in (1) we could remove the constraint
that a specific variable, say, xj , is nonnegative by adding to the objective function a term that is
negative infinity when xj is negative and is zero otherwise. This reformulation doesn’t seem to
be particularly helpful, since this new objective function has an abrupt discontinuity that, for
example, prevents us from using calculus to study it. However, suppose we replace this


discontinuous function with another function that is negative infinity when xj is negative but is
finite for xj positive and approaches negative infinity as xj approaches zero. In some sense this
smooths out the discontinuity and perhaps improves our ability to apply calculus to its study. The
simplest such function is the logarithm. Hence, for each variable, we introduce a new term in the
objective function that is just a constant times the logarithm of the variable:

maximize cᵀx + µ ∑ⱼ log xⱼ + µ ∑ᵢ log wᵢ
subject to Ax + w = b.    (2)

This problem, while not equivalent to our original problem, seems not too different either. In
fact, as the parameter µ, which we assume to be positive, gets small, it appears that (2) becomes
a better and better stand-in for (1).

FIGURE 7.1. Parts (a) through (c) show level sets of the barrier function for three values of µ.
For each value of µ, four level sets are shown. The maximum value of the barrier function is
attained inside the innermost level set. The drawing in part (d) shows the central path.

7.2 One iteration of Karmarkar’s projective algorithm


Consider the LP in the form

minimize cᵀx

subject to x ∈ Ω ∩ S,

where Ω = {x : Ax = 0} and S = {x : x ≥ 0, ∑ⱼ xⱼ = 1}.    (7.1)


A is of order m × n; without any loss of generality we assume that the rank of A is m. We make
the following assumptions:

i) x⁰ = e/n, where e is the column vector of all 1’s, is feasible to this LP;

ii) the optimum objective value in (7.1) is zero.

Karmarkar’s algorithm generates a finite sequence of feasible points x⁰, x¹, x², …, all of them > 0,
such that the objective value is strictly decreasing. L denotes the size of (7.1).
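Assumption (i) can be checked numerically. A small sketch with a hypothetical constraint matrix A whose rows sum to zero, so that Ae = 0 and the centre x⁰ = e/n of the simplex lies in Ω ∩ S:

```python
import numpy as np

# Hypothetical A with every row summing to zero, so that A e = 0.
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])
n = A.shape[1]
e = np.ones(n)

x0 = e / n  # the centre of the simplex S

# x0 is in Omega (A x0 = 0) and in S (x0 > 0, components sum to 1).
in_omega = np.allclose(A @ x0, 0.0)
in_simplex = bool((x0 > 0).all()) and abs(x0.sum() - 1.0) < 1e-12
```

When A does not satisfy Ae = 0, a preprocessing step (such as the conversion of Section 7.5) is needed before assumption (i) holds.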

Karmarkar’s Method

Assumptions: The problem is in homogeneous form:

Optimum objective function value is 0.

Idea:

(1) Use projective transformation to move an interior point to the centre of the feasible region
(2) Move along projected steepest descent direction

7.2.1 Projective transformation


Consider the transformation, T, defined by

y = T(x) = D⁻¹x / (eᵀD⁻¹x),  where D = diag(x̄) for the current interior point x̄.

Using the transformation, the current point x̄ is mapped to the centre e/n of the simplex S.
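A small numeric sketch of the projective transformation in the form T(x) = D⁻¹x/(eᵀD⁻¹x) with D = diag(x̄), which is the standard choice in Karmarkar's method (the data below are hypothetical):

```python
import numpy as np

def T(x, xbar):
    # Karmarkar's projective transformation centred at xbar:
    # scale by D^{-1} = diag(xbar)^{-1}, then renormalize onto the simplex.
    z = x / xbar
    return z / z.sum()

def T_inv(y, xbar):
    # Inverse transformation: scale back by D = diag(xbar), renormalize.
    z = y * xbar
    return z / z.sum()

xbar = np.array([0.5, 0.3, 0.2])  # hypothetical current interior point in S
x = np.array([0.2, 0.3, 0.5])     # another point of the simplex

y = T(x, xbar)            # image of x
center = T(xbar, xbar)    # the current point maps to the centre e/n
x_back = T_inv(y, xbar)   # T_inv undoes T on the simplex
```

The key property is visible here: the current interior point x̄ lands exactly at the centre e/n of the simplex, which is what makes a steepest descent step effective afterwards.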


Consider the problem,

Transforming an LP Into Another With a Known Strictly Positive Feasible Solution

An LP in standard form with an optimum objective value of zero can be transformed into another
with the same property but with a known strictly positive feasible solution. We show that any LP
can be transformed into another one with a known minimum objective value of zero.

Consider the LP

Minimize hx

Subject to Ex ≥ 0

x≥0 (7.2)

Let π denote the row vector of dual variables. It is well known that solving (7.2) is equivalent to
solving the following system of linear inequalities.

(7.3)

There is no objective function in (7.3). If (x̄, π̄) is a feasible solution for (7.3), then x̄ is an optimum
solution for the LP (7.2) and π̄ is an optimum dual solution. If (7.3) is infeasible, either (7.2) is
itself infeasible, or (7.2) may be feasible but its dual may be infeasible (in the latter case, the
objective value is unbounded below on the set of feasible solutions of (7.2)).

The system (7.3) can be expressed as a system of equations in nonnegative variables by
introducing the appropriate slack variables. To solve the resulting system, construct the usual
Phase I problem by introducing the appropriate artificial variables. Let u denote the vector
consisting of the variables x, π, the slack variables, and the artificial variables. Let the Phase I
problem corresponding to (7.3) be

(7.4)

The optimum objective value in (7.4) is ≥ 0 (since it is a Phase I problem corresponding to (7.3)),
and (7.3) is feasible iff it is zero. Let v denote the row vector of dual variables for (7.4).

Consider the LP

(7.5)

The LP (7.5) consists of the constraints in (7.4) and its dual. From the duality theory of linear
programming, the optimum objective value in (7.5) is zero (since (7.4) has a finite optimum
solution). The LP (7.5) can be put in standard form by the usual transformations of introducing
slack variables, etc. If (ū, v̄) is optimal to (7.5), then ū is optimal to (7.4). If the Phase I objective
value g = 0, then the x-portion of ū is an optimum solution for (7.2). If g > 0, then (7.3) is
infeasible and hence (7.2) is either infeasible or has no finite optimum solution.

7.2.2 Moving in the direction of steepest descent


▪Interior Point Methods for Linear Programming

▪Points generated are in the “interior" of the feasible region

▪Based on nonlinear programming techniques

▪Some interior point methods:

. Affine Scaling

. Karmarkar’s Method

We consider the linear program,


Affine Scaling

Idea:

(1) Use projected steepest descent direction at every iteration


Given a feasible interior point xᵏ at the current iteration k

Projected Steepest Descent Direction


Project the steepest descent direction, −c, on the null space of A
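A minimal numeric sketch of this projection (the data are hypothetical): with the projector P = I − Aᵀ(AAᵀ)⁻¹A onto the null space of A, the projected direction d = −Pc keeps Ax = b invariant along the step.

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])  # hypothetical constraint matrix (full row rank)
c = np.array([1.0, 2.0, 3.0])    # hypothetical cost vector

# Orthogonal projector onto the null space of A.
P = np.eye(3) - A.T @ np.linalg.inv(A @ A.T) @ A

d = -P @ c  # projected steepest descent direction for "minimize c.x"

# d lies in the null space of A, and c.d = -||Pc||^2 <= 0, so moving
# along d preserves feasibility of the equality constraints and does
# not increase the objective.
feasible_direction = np.allclose(A @ d, 0.0)
descent = float(c @ d)
```

For this data d = (1, 0, −1): moving mass from the costliest coordinate to the cheapest one while keeping the constraint row sum unchanged.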

Idea:
(2) Position the current point close to the centre of the feasible region
For example, one possible choice is the point:


Affine Scaling Algorithm:

▪ Start with any interior point

▪ while (stopping condition is not satisfied at the current point)

▪ Transform the current problem into an equivalent problem in y−space so that the current point
is close to the centre of the feasible region

▪ Use the projected steepest descent direction to take a step in the y−space without crossing the
feasible set boundary

▪ Map the new point back to the corresponding point in the x−space
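The loop above can be sketched for an LP in standard form (minimize cᵀx subject to Ax = b, x > 0); the data below are hypothetical. One iteration scales by X = diag(x), projects −c in the scaled space, and steps back in x-space with a ratio test so the iterate stays interior:

```python
import numpy as np

def affine_scaling_step(A, c, x, alpha=0.5):
    # One iteration of (primal) affine scaling from an interior point x.
    X = np.diag(x)
    AX = A @ X
    # Dual estimate y and reduced costs s in the scaled space.
    y = np.linalg.solve(AX @ AX.T, AX @ (X @ c))
    s = c - A.T @ y
    dx = -(X @ X) @ s  # descent direction; satisfies A dx = 0
    # Ratio test: go a fraction alpha of the way to the boundary.
    neg = dx < 0
    step = alpha * np.min(-x[neg] / dx[neg]) if neg.any() else alpha
    return x + step * dx

A = np.array([[1.0, 1.0, 1.0]])  # hypothetical data
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])
x0 = np.array([1/3, 1/3, 1/3])   # interior feasible start (A x0 = b)

x1 = affine_scaling_step(A, c, x0)  # stays feasible, lowers c.x
```

The step direction dx lies in the null space of A, so equality feasibility is preserved exactly; the ratio test guarantees the new point remains strictly positive.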

Stopping Condition for an Affine Scaling Algorithm

Consider the following primal and dual problems:


Consider the primal problem:

Define the Lagrangian function,

Assumption: x is primal feasible and λ ≥ 0. The KKT conditions at optimality are:

Defining X = diag(x), the KKT conditions are

Solve the following problem to get µ:

Thus, at a given point,

Step 1: Equivalent problem formulation to get considerable improvement in the objective
function

Given the current iterate xᵏ, define Xₖ = diag(xᵏ). Define a transformation T as


Using this transformation,

Step 2: Find the projected steepest descent direction and step length at the current point for the problem,

Affine Scaling Algorithm (to solve an LP in Standard Form)


Application of Affine Scaling Algorithm to the problem


7.2.3. Inverse transformation


Let F(x), x ∈ IR, denote any cumulative distribution function (cdf) (continuous or not). Recall
that F : IR → [0, 1] is thus a non-negative and non-decreasing (monotone) function that is
continuous from the right and has left-hand limits, with values in [0, 1]; moreover, F(∞) = 1 and
F(−∞) = 0. Our objective is to generate (simulate) rvs X distributed as F; that is, we want to
simulate a rv X such that P(X ≤ x) = F(x), x ∈ IR.

Define the generalized inverse of F, F⁻¹ : [0, 1] → IR, via

F⁻¹(y) = min{x : F(x) ≥ y}, y ∈ [0, 1].    (1)

If F is continuous, then F is invertible (since it is thus continuous and strictly increasing), in
which case F⁻¹(y) = min{x : F(x) = y}, the ordinary inverse function, and thus

F(F⁻¹(y)) = y and F⁻¹(F(x)) = x. In general it holds that

F⁻¹(F(x)) ≤ x and F(F⁻¹(y)) ≥ y. F⁻¹(y) is a non-decreasing (monotone) function in y. This
simple fact yields a simple method for simulating a rv X distributed as F:

Proposition 1.1 (The Inverse Transform Method) Let F(x), x ∈ IR, denote any cumulative
distribution function (cdf) (continuous or not). Let F⁻¹(y), y ∈ [0, 1], denote the inverse
function defined in (1). Define X = F⁻¹(U), where U has the continuous uniform distribution
over the interval (0, 1). Then X is distributed as F, that is, P(X ≤ x) = F(x), x ∈ IR.

Proof: We must show that P(F⁻¹(U) ≤ x) = F(x), x ∈ IR. First suppose that F is continuous.
Then we will show that (equality of events) {F⁻¹(U) ≤ x} = {U ≤ F(x)}, so that taking
probabilities (and letting a = F(x) in P(U ≤ a) = a) yields the result: P(F⁻¹(U) ≤ x) = P(U ≤
F(x)) = F(x).


To this end: F(F⁻¹(y)) = y, and so (by monotonicity of F) if F⁻¹(U) ≤ x, then U = F(F⁻¹(U))
≤ F(x), or U ≤ F(x). Similarly F⁻¹(F(x)) = x, and so if U ≤ F(x), then F⁻¹(U) ≤ x. We conclude
equality of the two events, as was to be shown. In the general (continuous or not) case, it is easily
shown that

{U < F(x)} ⊆ {F⁻¹(U) ≤ x} ⊆ {U ≤ F(x)},

which yields the same result after taking probabilities (since P(U = F(x)) = 0, because U is a
continuous rv).

Examples

The inverse transform method can be used in practice as long as we are able to get an explicit
formula for F⁻¹(y) in closed form. We illustrate with some examples. We use the notation U ∼
unif(0, 1) to denote that U is a rv with the continuous uniform distribution over the interval
(0, 1).

1. Exponential distribution: F(x) = 1 − e^(−λx), x ≥ 0, where λ > 0 is a constant. Solving the
equation y = 1 − e^(−λx) for x in terms of y ∈ (0, 1) yields x = F⁻¹(y) = −(1/λ) ln(1 − y).
This yields X = −(1/λ) ln(1 − U). But (as is easily checked) 1 − U ∼ unif(0, 1) since U ∼
unif(0, 1), and thus we can simplify the algorithm by replacing 1 − U by U:

Algorithm for generating an exponential rv at rate λ:

i. Generate U ∼ unif(0, 1).

ii. Set X = −(1/λ) ln(U).
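A quick sketch of this algorithm; with many samples, the empirical mean should be near the exponential mean 1/λ:

```python
import math
import random

def exp_rv(lam, u=None):
    # Inverse transform for the exponential distribution at rate lam.
    if u is None:
        u = random.random()  # U ~ unif(0, 1)
    return -math.log(u) / lam

random.seed(0)  # fixed seed so the run is reproducible
lam = 2.0
samples = [exp_rv(lam) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should be close to 1/lam = 0.5
```

Note the simplification from the text: we feed U rather than 1 − U into the formula, which is valid because both are unif(0, 1).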

2. Discrete rvs: discrete inverse-transform method: Consider a non-negative discrete rv X
with probability mass function (pmf) p(k) = P(X = k), k ≥ 0. In this case, the construction
X = F⁻¹(U) is explicitly given by: X = 0 if U ≤ p(0), and X = k if F(k − 1) < U ≤ F(k), k ≥ 1,
where F(k) = p(0) + p(1) + · · · + p(k).

This is known as the discrete inverse-transform method. The algorithm is easily verified
directly by recalling that P(a < U ≤ b) = b − a, for 0 ≤ a < b ≤ 1; here we take a = F(k − 1)
and b = F(k), so that P(X = k) = F(k) − F(k − 1) = p(k).
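A sketch of the discrete rule "return the smallest k with F(k) ≥ U", using a hypothetical pmf:

```python
def discrete_inverse_transform(pmf, u):
    # Return the smallest k with F(k) = p(0) + ... + p(k) >= u.
    F = 0.0
    for k, p in enumerate(pmf):
        F += p
        if u <= F:
            return k
    return len(pmf) - 1  # guard against floating-point round-off

pmf = [0.2, 0.5, 0.3]  # hypothetical pmf; F = (0.2, 0.7, 1.0)
x_low = discrete_inverse_transform(pmf, 0.10)   # 0.10 <= 0.2       -> k = 0
x_mid = discrete_inverse_transform(pmf, 0.65)   # 0.2 < 0.65 <= 0.7 -> k = 1
x_high = discrete_inverse_transform(pmf, 0.71)  # 0.7 < 0.71 <= 1.0 -> k = 2
```

Feeding in u = random.random() instead of a fixed u turns this into a sampler for the pmf.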

7.3 The algorithm and its polynomiality


Unlike the pure primal affine scaling and the pure dual affine scaling, the primal-dual algorithm
is a polynomial algorithm. The well-chosen step length at each iteration leads to the following
nice convergence result:


Lemma 7.3.1 If (x, y, s) ∈ N₂(θ), then the duality gap is equal to the complementarity gap, i.e., it
equals xᵀs.

Proof: Any point in N₂(θ) is primal and dual feasible; hence, by substituting the dual feasibility
condition into the duality gap and using Ax = b, we get the result.
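For the purely linear case (no quadratic term), the identity reads cᵀx − bᵀy = xᵀs whenever Ax = b and Aᵀy + s = c, since cᵀx − bᵀy = (Aᵀy + s)ᵀx − (Ax)ᵀy = xᵀs. A numeric sketch with hypothetical data:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x = np.array([1.0, 2.0, 3.0])  # hypothetical primal point, x > 0
y = np.array([0.5, -0.5])      # hypothetical dual point

b = A @ x                      # choose b so that x is primal feasible
c = np.array([1.0, 1.0, 1.0])
s = c - A.T @ y                # dual slacks; here s = (0.5, 1.0, 1.5) >= 0

duality_gap = c @ x - b @ y
complementarity_gap = x @ s    # equals the duality gap
```

This is why interior-point methods can monitor progress through xᵀs: driving the complementarity gap to zero drives the duality gap to zero.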

Lemma 7.3.2 The Newton direction (∆x, ∆y, ∆s) defined by the equation system (11) satisfies

∆xᵀ∆s ≥ 0.

Proof: From the first two equations in (11) we get A∆x = 0 and ∆s = Q∆x − Aᵀ∆y; hence

∆xᵀ∆s = ∆xᵀQ∆x − (A∆x)ᵀ∆y = ∆xᵀQ∆x ≥ 0,

where the last inequality follows from the positive definiteness of Q.

The Newton method uses a local linearization of the complementarity condition. When a step in
the Newton direction (∆x, ∆y, ∆s) of (11) is made, the error in the approximation of the
complementarity products is determined by the second-order term, which is a product of ∆x and
∆s.

7.4 A purification scheme


Let us consider a linear programming problem in its standard form:

Minimize cᵀx

Subject to Ax = b, x ≥ 0,


where A is an m×n matrix of full row rank, b is an m-dimensional column vector, and c and x are n-dimensional column vectors.

Notice that the feasible domain of the problem is defined by

P = {x ∈ Rⁿ : Ax = b, x ≥ 0}.

We further define the relative interior of P (with respect to the affine space {x : Ax = b}) as

P° = {x ∈ Rⁿ : Ax = b, x > 0}.

An n-vector x is called an interior point, or interior solution, of the linear programming problem
if x ∈ P°. Throughout this book, for any interior-point approach, we always make the fundamental
assumption that P° ≠ ∅.

There are several ways to find an initial interior solution to a given linear programming problem.
The details will be discussed later. For the time being, we simply assume that an initial interior
solution is available and focus on the basic ideas of the primal affine scaling algorithm.

Basic ideas of primal affine scaling

Remember from the above the fundamental insights observed by N. Karmarkar in designing his
algorithm. Since they are still the guiding principles for the affine scaling algorithm, we repeat
them here:

1) If the current interior solution is near the center of the polytope, then it makes sense to
move in the direction of steepest descent of the objective function to achieve a
minimum value.
2) Without changing the problem in any essential way, an appropriate transformation can be
applied to the solution space such that the current interior solution is placed near the center
of the transformed solution space.

7.5 Converting a given linear programming into the required format


Consider a standard-form general linear programming problem:

Minimize cᵀx

Subject to Ax = b

x ≥ 0.

Our objective is to convert this problem into the standard form required by Karmarkar, while
satisfying the assumptions. We shall first see how to convert the problem into Karmarkar’s form
and then discuss the two assumptions. The key feature of Karmarkar’s standard form is the
simplex structure, which of course results in a bounded feasible domain. Thus we want to
regularize the problem by adding a bounding constraint ∑ⱼ xⱼ ≤ Q.


For some positive integer Q derived from feasibility and optimality considerations in the worst
case, we can choose Q = 2^L, where L is the problem size. If this constraint is binding at
optimality with an objective value of magnitude −2^L, then we can show that the given problem
is unbounded.

By introducing a slack variable xₙ₊₁, we have a new linear program. In order to keep the matrix
structure of A undisturbed for sparsity manipulation, we introduce a new variable xₙ₊₂ = 1 and
rewrite the constraints of the problem as

Ax − b xₙ₊₂ = 0    (1)

eᵀx + xₙ₊₁ − Q xₙ₊₂ = 0    (2)

eᵀx + xₙ₊₁ + xₙ₊₂ = Q + 1    (3)

x ≥ 0, xₙ₊₁ ≥ 0, xₙ₊₂ ≥ 0.    (4)

Note that the constraint xₙ₊₂ = 1 is a direct consequence of (2) and (3). For the required simplex
structure, we apply the transformation yⱼ = xⱼ/(Q + 1), for j = 1, 2, …, n + 2. In this way, we have
an equivalent linear programming problem:

Minimize (Q + 1)(cᵀy)

Subject to Ay − b yₙ₊₂ = 0

The preceding algorithm is only valid for an LP in Karmarkar’s standard form. In order to use
Karmarkar’s algorithm for other LPs, we must convert them into this form.

Consider the general LP:

Minimize cᵀx,  c ∈ Rⁿ

Subject to Ax ≤ b,  A ∈ R^(m×n)

x ≥ 0,  b ∈ Rᵐ.

The optimality conditions are:


Summary

Interior point method

1 Basic ideas

• Suppose we are at an interior point of the feasible set

• Consider the steepest descent step.

• This step doesn’t make much progress unless our starting point is central.

• So we’ll change the coordinate system so that the current point is always central.

We consider the linear programming problem expressed, as usual, with inequality constraints and
nonnegative variables:

maximize cᵀx subject to Ax ≤ b, x ≥ 0.

The corresponding dual problem is

minimize bᵀy subject to Aᵀy ≥ c, y ≥ 0.

As usual, we add slack variables to convert both problems to equality form:

maximize cᵀx subject to Ax + w = b, x, w ≥ 0,    (1)


and minimize bᵀy subject to Aᵀy − z = c, y, z ≥ 0.

2 One iteration of Karmarkar’s projective algorithm


Consider the LP in the form

minimize cᵀx

subject to x ∈ Ω ∩ S,

where Ω = {x : Ax = 0} and S = {x : x ≥ 0, ∑ⱼ xⱼ = 1}.


Chapter Eight

8. Transportation problems
8.1 Introduction
In this chapter we will be concerned with the transportation problem, which is a special case of
the L.P.P. We divide this chapter into six sections. The first section introduces and defines the
transportation problem, and we discuss some fundamental properties of the T.P. The second
section shows how to write a transportation problem in a transportation table. Loops and their
application are also presented in that section.

Methods of determining the initial basic feasible solution will be discussed in section three,
where we will see techniques for finding an I.B.F.S. such as the North-West corner method, the
row minima method, and the cost minima method.

The next section (section 4) covers optimality conditions, where we discuss whether the initial
basic feasible solution obtained is optimal or not.

Transportation problems are either balanced or unbalanced. In section five we will see
unbalanced T.P.s and how to obtain their solutions.

In the last section, we discuss degenerate transportation problems and their application.

General Objectives

At the end of this unit the learners will be able to:

▪ Define the form of the transportation problem

▪ Identify the meaning of balanced and unbalanced transportation problems

▪ Construct the transportation problem

▪ Determine some methods of finding an initial basic feasible solution

▪ Understand the numerical calculation of the net evaluation corresponding to the non-basic cells

▪ Differentiate degenerate transportation problems and their solution

Definition: A transportation problem is a particular type of linear programming problem. Here, a
particular commodity which is stored at different warehouses (origins) is to be transported to
different distribution centers (destinations) in such a way that the transportation cost is minimum.
Consider a particular example. Let there be m origins O₁, O₂, …, Oᵢ, …, Oₘ, and let the
quantity available at origin Oᵢ be aᵢ [i = 1, 2, …, m]; let there be n destinations D₁, D₂,
…, Dⱼ, …, Dₙ, and let the quantity required, i.e., the demand at Dⱼ, be bⱼ [j = 1, 2, …, n]. Let us
make the assumption that

∑ᵢ aᵢ = ∑ⱼ bⱼ = C.    (8.1.1)

This assumption is not restrictive. In a particular problem, when

∑ᵢ aᵢ = ∑ⱼ bⱼ,

i.e. the total available quantity is equal to the total demand, the problem is called a balanced
transportation problem, and when

∑ᵢ aᵢ ≠ ∑ⱼ bⱼ,

it is called an unbalanced transportation problem. We shall initially discuss the first type of
problem.


                              Destination
               D₁     D₂    …    Dⱼ    …    Dₙ     capacities
        O₁    c₁₁    c₁₂    …    c₁ⱼ   …    c₁ₙ       a₁
        O₂    c₂₁    c₂₂    …    c₂ⱼ   …    c₂ₙ       a₂
         .      .      .          .          .         .
Origins Oᵢ    cᵢ₁    cᵢ₂    …    cᵢⱼ   …    cᵢₙ       aᵢ
         .      .      .          .          .         .
        Oₘ    cₘ₁    cₘ₂    …    cₘⱼ   …    cₘₙ       aₘ
demands        b₁     b₂    …    bⱼ    …    bₙ

Here cᵢⱼ (i = 1, 2, …, m; j = 1, 2, …, n), the cost of transporting one unit of the commodity from the
origin Oᵢ to the destination Dⱼ, is a known quantity. It is assumed in general that cᵢⱼ ≥ 0 for all i
and j, but it may be negative under some special conditions. The problem before us is to determine
the quantities xᵢⱼ [i = 1, 2, …, m; j = 1, 2, …, n] to be transported from origin Oᵢ to destination Dⱼ
such that the transportation cost is minimum, provided the condition (8.1.1) is satisfied.

Mathematically, the problem can be written as:

minimize z = ∑ᵢ ∑ⱼ cᵢⱼ xᵢⱼ    (8.1.2)

Subject to the constraints

∑ⱼ xᵢⱼ = aᵢ, i = 1, 2, …, m    (8.1.3)

∑ᵢ xᵢⱼ = bⱼ, j = 1, 2, …, n    (8.1.4)


and ∑ᵢ aᵢ = ∑ⱼ bⱼ.

From the above diagram, the constraints (8.1.3) and (8.1.4) can be written easily: the sum of the
variables of the i-th row is equal to aᵢ, and the sum of the variables of the j-th column is equal to
bⱼ.

It is evident that xᵢⱼ ≥ 0 for all i and j.

The problem is a minimization problem, and

z = ∑ᵢ ∑ⱼ cᵢⱼ xᵢⱼ

is the objective function which is minimized.

In this problem there are (m + n) constraints, all of which are equations in the mn variables

xᵢⱼ [i = 1, 2, …, m; j = 1, 2, …, n].

Since, in general, in an L.P.P. the number of variables is greater than the number of constraints,
both m and n must be ≥ 2. Since

∑ᵢ aᵢ = ∑ⱼ bⱼ,

all the constraints are not linearly independent. There are only

(m + n) − 1 = (m + n − 1)

linearly independent equations.

Theorem 8.1.1 There exists a feasible solution in each T.P., which is given by

xᵢⱼ = aᵢbⱼ/C [i = 1, 2, …, m; j = 1, 2, …, n],

where C = ∑ᵢ aᵢ = ∑ⱼ bⱼ.

Proof. Since all aᵢ and bⱼ are non-negative quantities, xᵢⱼ ≥ 0 for all i and j.


Also,

∑ⱼ xᵢⱼ = ∑ⱼ aᵢbⱼ/C = (aᵢ/C) ∑ⱼ bⱼ = aᵢ

and

∑ᵢ xᵢⱼ = ∑ᵢ aᵢbⱼ/C = (bⱼ/C) ∑ᵢ aᵢ = bⱼ,

which satisfy the conditions given in (8.1.3) and (8.1.4).

Hence in each T.P. there exists a feasible solution

xᵢⱼ = aᵢbⱼ/C [i = 1, 2, …, m; j = 1, 2, …, n].    (8.1.5)
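Theorem 8.1.1 can be illustrated numerically with hypothetical rim requirements: taking x_ij = a_i b_j / C, where C = ∑ aᵢ = ∑ bⱼ, every row and column constraint is satisfied.

```python
a = [16, 12, 15]    # origin capacities
b = [12, 14, 9, 8]  # destination demands
C = sum(a)          # balanced: sum(a) == sum(b) == 43

# Feasible (not necessarily basic) solution from Theorem 8.1.1.
x = [[ai * bj / C for bj in b] for ai in a]

row_sums = [sum(row) for row in x]               # should equal the a_i
col_sums = [sum(x[i][j] for i in range(len(a)))  # should equal the b_j
            for j in range(len(b))]
```

Note that this solution generally has all mn components positive, so it is feasible but far from basic: a basic feasible solution has at most m + n − 1 positive components.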

Theorem 8.1.2 In a balanced T.P. having m origins and n destinations (m, n ≥ 2), the exact
number of basic variables is m + n − 1.

The balanced T.P. is

Minimize z = ∑ᵢ ∑ⱼ cᵢⱼ xᵢⱼ

Subject to

∑ⱼ xᵢⱼ = aᵢ, i = 1, 2, …, m    (8.1.6)

∑ᵢ xᵢⱼ = bⱼ, j = 1, 2, …, n    (8.1.7)

and ∑ᵢ aᵢ = ∑ⱼ bⱼ.

There are m + n linear constraints with mn > m + n − 1 (since m, n ≥ 2).

From (8.1.7),

∑ⱼ ∑ᵢ xᵢⱼ = ∑ⱼ bⱼ.    (8.1.8)

Now adding the first (m − 1) constraints of (8.1.6) we get

∑_{i=1}^{m−1} ∑ⱼ xᵢⱼ = ∑_{i=1}^{m−1} aᵢ.    (8.1.9)

Now subtracting (8.1.9) from (8.1.8) we get


∑ⱼ ∑ᵢ xᵢⱼ − ∑_{i=1}^{m−1} ∑ⱼ xᵢⱼ = ∑ⱼ bⱼ − ∑_{i=1}^{m−1} aᵢ.    (8.1.10)

Thus we get

∑ⱼ (∑ᵢ xᵢⱼ − ∑_{i=1}^{m−1} xᵢⱼ) = aₘ, or ∑ⱼ xₘⱼ = aₘ,    (8.1.11)

which is the last, i.e. the m-th, constraint of (8.1.6). Therefore, there are only (m + n − 1) linearly
independent equations in the mn variables (mn > m + n − 1). Thus, from the definition of a
basic solution, we can say that the number of basic variables is exactly (m + n − 1).

Remark. All basic variables may not be positive; some of them may be zero. When all basic
variables are positive, the solution is called a non-degenerate B.F.S. When at least one basic
variable is zero, the solution is called a degenerate B.F.S.

The number of basic cells will be exactly (m + n − 1), all of which contain the (m + n − 1) basic
variables, which are either all positive or some of which may be zero, as will be shown later.
Class activity 8.1:
Consider a balanced T.P. having 5 origins and 6 destinations. Then how many basic variables are
there in this balanced transportation problem?

8.2 Transportation table


The T.P. is a special case of the L.P.P., and therefore a T.P. can be solved by the simplex method.
But that method is not suitable for solving a T.P. Here a specially designed table, called a
transportation table, is constructed so that the problem can be solved systematically. A specimen
of a transportation table with m origins and n destinations is given below.

In this table there are mn squares or rectangles arranged in m rows and n columns. Each square
or rectangle is called a cell. The cell which is in the i-th row and j-th column is called the (i, j)
cell, or cell (i, j). Each cost component cᵢⱼ is displayed at the south-east corner of the
corresponding cell. A component xᵢⱼ of a feasible solution (if any) is displayed inside a
small square situated at the north-west corner of the cell (i, j). The different origin capacities and
destination demands (requirements) are listed in the right-side column and the lower-side row
respectively, as given in table 8.1. These quantities are called the rim requirements.


Table 8.1: Transportation table (origins in the rows, with their capacities in the right-hand
column; destinations in the columns, with their demands in the bottom row).

8.2.1 Loops in transportation table

In a transportation table, an ordered set of four or more cells is said to form a loop

(i) if and only if any two consecutive cells in the ordered set lie either in the same row or in the
same column, and

(ii) the first and the last cells of the set likewise lie either in the same row or in the same column.

In the following figure, one closed circuit is formed in each of the three transportation tables.


Table 8.2 (three transportation tables, each tracing one closed circuit through its cells)

The ordered sets of cells in the circuits are

1) L₁ = {(1,1), (1,2), (3,2), (3,4), (4,4), (4,1)}
2) L₂ = {(1,1), (3,1), (4,1), (4,4), (2,4), (2,3), (1,3)}
3) L₃ = {(1,1), (1,2), (4,2), (4,3), (2,3), (2,1)}

In the first and third tables there are only two cells in each row and each column, and the first and
last cells are in the same row or the same column. These loops are called simple loops. In the second
table three cells lie in the first column, but if we ignore or omit the cell (3, 1) and order the set
of cells in the manner

{(1,1), (4,1), (4,4), (2,4), (2,3), (1,3)},

then there are just two cells in each row and each column, and the first and last cells are in the
same row or column. Hence, ignoring the cell (3, 1), the 2nd closed circuit is also considered a
simple loop. There are other types of loops, but in the transportation problem all loops are
simple.

Remark: In a simple loop, there is always an even number of cells: there are only two cells in
each row of a simple loop, so if there are r such rows, the number of cells will be 2r, which is an
even number.

8.3 Determination of an initial B.F.S


So far we have discussed some fundamental properties of a transportation problem which
will help in solving a problem. The next task before us is to determine an initial B.F.S. of
the problem; from this we proceed to find another B.F.S. which improves the value of
the objective function. There are various methods of finding an initial B.F.S. It is interesting
to note that in all cases the solution is a B.F.S. Of course, the solution may be non-
degenerate or degenerate. If the solution is a B.F.S., the cells to which some allocations are made

are called basic cells. Obviously the allocated values are the components of the B.F.S.
Some methods of determining an initial B.F.S. are

1) North-west corner method


2) Row minima method
3) Matrix minima (cost minima) method

8.3.1 North-west corner method


Step 1. Compute min(a₁, b₁). If a₁ < b₁, min(a₁, b₁) = a₁, and if b₁ < a₁, min(a₁, b₁) = b₁.
Select x₁₁ = min(a₁, b₁) and allocate the value of x₁₁ in the cell (1, 1), i.e., in the cell situated in
the north-west corner of the transportation table.

Step 2. If a₁ < b₁, the capacity of the origin O₁ is exhausted completely, which indicates
that all other cells of the first row will remain vacant. But there remains some demand at the
destination D₁. Compute min(a₂, b₁ − a₁). Select x₂₁ = min(a₂, b₁ − a₁) and allocate the
value of x₂₁ in the cell (2, 1). Let us now assume that b₁ − a₁ < a₂, which indicates
that the demand of D₁ is satisfied completely. Of course, this assumption is not essential. With this
assumption, the next cell for which some allocation is to be made is the cell (2, 2), etc.

If b₁ < a₁, the demand of the destination D₁ is satisfied exactly, which indicates that all
other cells of the first column will remain vacant. But the capacity of origin O₁ is not
exhausted. Compute min(a₁ − b₁, b₂). Select x₁₂ = min(a₁ − b₁, b₂) and allocate the value of
x₁₂ in cell (1, 2). Let us now assume that a₁ − b₁ < b₂, which indicates that the
capacity of O₁ is exhausted completely. With this assumption the next cell for which some
allocation is to be made is the cell (2, 2), etc. If a₁ = b₁, the capacity of the origin O₁ is
exhausted and the demand of D₁ is satisfied simultaneously. In that case, the solution
will be degenerate. Select either x₁₂ or x₂₁ = min(a₁ − b₁, b₂) = min(a₂, b₁ − a₁) = 0 and
allocate the value 0 in only one of the two cells (1, 2) or (2, 1). The next cell for which some
allocation is to be made is cell (2, 2). In this way proceed step by step until all the rim
requirements are satisfied. In general, if an allocation is made in the cell (i, j) in the current step,
the next allocation will be made either in cell (i + 1, j) or (i, j + 1). The feasible solution
obtained by this method is always a B.F.S. In the North-West corner rule, a tree diagram can be
constructed by connecting all basic cells, and no loop will be formed.

Example 8.3.1 Determine an initial B.F.S. of the following problems by the North-West
corner rule.


i)
     4   6   9   5 | 16
     2   6   4   1 | 12
     5   7   2   9 | 15
    12  14   9   8 | 43

ii)
     2   7   4     |  4
     6   1   2   5 |  6
     4   5   2   4 |  8
     3   7   6   2 | 18

Note: Inside the rectangle, the cost matrix is given and in the last column and last row of the
rectangle, the rim requirements are given.

Solution: The initial B.F.S.s are displayed in tables 8.3(A) and 8.3(B).

Table 8.3(A) (basic allocations): x₁₁ = 12, x₁₂ = 4, x₂₂ = 10, x₂₃ = 2, x₃₃ = 7, x₃₄ = 8.

Table 8.3(B) (basic allocations): x₁₁ = 3, x₁₂ = 1, x₂₂ = 6, x₂₃ = 0 (or x₃₂ = 0), x₃₃ = 6, x₃₄ = 2.

Explanation:

i) The B.F.S. is displayed in table 8.3(A): min(16, 12) = 12. Therefore x₁₁ = 12; allocate it in
the cell (1, 1). The demand of D₁ is satisfied and hence all other cells in the first column remain
vacant.

As b₁ = 12 < 16 = a₁, the next allocation will be in cell (1, 2), and x₁₂ = min(16 − 12, 14) = 4.
Now the capacity of O₁ is exhausted. The next allocation will be in cell (2, 2), and
x₂₂ = min(12, 14 − 4) = 10. Proceeding similarly we get x₂₃ = 2, x₃₃ = 7 and x₃₄ = 8, and all
the rim requirements are satisfied. The solution obtained is a B.F.S. because the set of basic cells
(which contain the components of the F.S.) does not contain a loop (number of basic
variables = 4 + 3 − 1 = 6), and the cost due to this assignment is

4·12 + 6·4 + 6·10 + 4·2 + 2·7 + 9·8 = 226 units.
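The computation in part i) can be checked mechanically; here is a small sketch (not from the text) of the North-West corner rule applied to the data of part i):

```python
def north_west_corner(supply, demand):
    # Allocate greedily from the north-west cell, moving right when a
    # row (supply) survives and down when it is exhausted.
    supply, demand = list(supply), list(demand)
    i = j = 0
    alloc = {}
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        alloc[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1   # row exhausted (also covers the degenerate tie)
        else:
            j += 1   # column satisfied
    return alloc

cost = [[4, 6, 9, 5],
        [2, 6, 4, 1],
        [5, 7, 2, 9]]
alloc = north_west_corner([16, 12, 15], [12, 14, 9, 8])
total = sum(q * cost[i][j] for (i, j), q in alloc.items())  # 226 units
```

The allocation returned matches table 8.3(A) cell by cell, and the number of basic cells equals m + n − 1 = 6.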

ii) The B.F.S. is displayed in table 8.3(B): min(4, 3) = 3. Therefore x₁₁ = 3; allocate it in the
cell (1, 1). The demand of D₁ is satisfied and hence all other cells of the first column remain vacant.
As b₁ = 3 < 4 = a₁, the next allocation will be in the cell (1, 2), and x₁₂ = min(4 − 3, 7) = 1.
Now the capacity of O₁ is exhausted. The next allocation will be in cell (2, 2): x₂₂ =
min(6, 7 − 1) = 6. Now the capacity of O₂ is exhausted and the demand of D₂ is satisfied
simultaneously. Therefore either x₂₃ or x₃₂ will be zero. Let us take it to be 0 and proceed
similarly until all rim requirements are satisfied. The solution obtained is a degenerate B.F.S., as
the number of basic variables is 4 + 3 − 1 = 6.

Step 3 (of the row minima method, Sec. 8.3.2). If a₁ = bⱼ, then min(a₁, bⱼ) = a₁ = bⱼ. Set
x₁ⱼ = a₁ = bⱼ and allocate it in the cell (1, j). Due to this allocation, the capacity of origin O₁
will be exhausted and the demand of Dⱼ will be satisfied completely; in that case the solution
will be degenerate. Set x₁ₖ = 0 and display it in the cell (1, k), assuming that the cost of the
(1, k) cell is the next minimum cost. Now cross off both the first row and the j-th column and
proceed similarly until all rim requirements are satisfied.

Example 8.3.2 Find an initial B.F.S. of the following balanced T.P. using the row minima
method.

8.3.2 Row minima method


Step 1.Select the smallest cost in the first row. Let it be ; compute min , . Set =
min , and allocate in the cell (1, ). This is the maximum feasible amount which can be
allocated in the cell (1, ). If the smallest cost is not unique, select any one of the minimum cost
arbitrarily.

Step 2.If < , the capacity of the origin will be exhausted. But the demand of destination
remains unsatisfied. Cross off the first row and diminish by . Proceeding similarly,
allocates the maximum feasible amounts in the cells of the remaining rows starting from the
second until all rim requirements are satisfied. If < , the total demand of the destination
is satisfied but the capacity of the origin is not exhausted completely. Cross of the column
and diminish by .

Reconsider the first row and select the next smallest cost of this row. Let it be . Compute
min ( − , ). Set = min ( − , ) and allocate it in the cell (1, k). Let as now
make an assumption that − < . Therefore the capacity of will be exhausted
completely [assumption is not restrictive]. Cross off the first row and repeat the above

Dilla University, Department of Mathematics Page 196


Dill University

procedure for the second and so on as in the above method until all rim requirements are
satisfied.

4 2 5 3 6

5 4 3 2 13 Capacity

1 4 6 5 9

7 8 5 8 28

demand
Solution: it is displayed in the transportation table given below;

Table 8.4

          D_1      D_2      D_3      D_4
O_1       4        2 (6)    5        3      |  6
O_2       5        4 (0)    3 (5)    2 (8)  | 13
O_3       1 (7)    4 (2)    6        5      |  9
          7        8        5        8      | 28

(allocations are shown in parentheses beside the unit costs)

Explanation. The B.F.S. is given in table 8.4.

The lowest cost in the first row is c_12 = 2. min(a_1, b_2) = min(6, 8) = 6. Set x_12 = 6 and allocate it
in the cell (1, 2). As a_1 < b_2, the capacity of O_1 is exhausted completely; hence cross off the
first row and diminish b_2 by a_1, as shown in table 8.4. Then ignore the first row for future
computation.


The lowest cost in the second row is c_24 = 2. min(a_2, b_4) = min(13, 8) = 8. Set x_24 = 8 and
allocate it in the cell (2, 4); b_4 < a_2. Therefore the capacity of O_2 will not be exhausted; ignore
the fourth column for future computation, since the demand of D_4 is satisfied completely. Cross
off the fourth column and diminish a_2 by b_4. Reconsider the second row. The next lowest cost in
the row is c_23 = 3. min(a_2 − b_4, b_3) = min(13 − 8, 5) = 5. Set x_23 = 5 and allocate it in the
cell (2, 3). As a_2 − b_4 = 5 = b_3, the capacity of O_2 is exhausted and the demand of D_3 is
satisfied at the same time. Therefore the solution will be degenerate. The next lowest cost of the
second row is c_22 = 4. Set x_22 = 0 and allocate it in the cell (2, 2). Cross off the 2nd row and
3rd column simultaneously. Now complete table 8.4; all the rim requirements are satisfied.
The solution obtained is a B.F.S., because the number of variables is 3+4−1 = 6 and the set of
corresponding cells does not contain a loop.
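The row-by-row procedure can be sketched as follows. This is an illustrative Python version; the degenerate basic variable x_22 = 0 inserted in the text is omitted, so only the positive allocations appear.

```python
def row_minima(cost, supply, demand):
    """Row minima rule: fill each row at its cheapest open column first."""
    supply, demand = supply[:], demand[:]
    alloc = {}
    for i in range(len(supply)):
        while supply[i] > 0:
            # cheapest column of row i whose demand is still unmet
            j = min((k for k in range(len(demand)) if demand[k] > 0),
                    key=lambda k: cost[i][k])
            q = min(supply[i], demand[j])
            alloc[(i, j)] = q
            supply[i] -= q
            demand[j] -= q
    return alloc

cost = [[4, 2, 5, 3],
        [5, 4, 3, 2],
        [1, 4, 6, 5]]
alloc = row_minima(cost, [6, 13, 9], [7, 8, 5, 8])
print(alloc)   # {(0, 1): 6, (1, 3): 8, (1, 2): 5, (2, 0): 7, (2, 1): 2}
```

The result reproduces the positive allocations of table 8.4.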

8.3.3 Cost minima (Matrix minima) method


Step 1. Select the smallest cost in the cost matrix. Let it be c_ij. Set x_ij = min(a_i, b_j) and
allocate it in the cell (i, j). This is the maximum feasible amount that can be allocated in the cell
(i, j).

Step 2. If a_i < b_j, the capacity of origin O_i will be exhausted completely. Cross off the i-th
row and diminish b_j by a_i.

If b_j < a_i, the demand of the destination D_j will be satisfied completely. Cross off the j-th
column and diminish a_i by b_j.

Step 3. If a_i = b_j, the capacity of origin O_i will be exhausted and the demand of D_j will be
satisfied simultaneously. Set x_ij = a_i = b_j; allocate it in the cell (i, j). Cross off either the i-th
row or the j-th column but not both. Of course, we may drop both the row and the column
by inserting a basic variable zero in a cell corresponding to the lowest cost of that row and
column.

Step 4. Apply the same technique in the reduced transportation table until all rim requirements
are satisfied. At any stage, if the minimum cost is not unique, make any arbitrary choice among
the minima.

Example 8.3.3 Determine an initial B.F.S. to the following balanced T.P. using the cost minima
method:

          D_1   D_2   D_3   D_4
O_1        5     3     6     2  | 19
O_2        4     7     9     1  | 37    Capacity
O_3        3     4     7     5  | 34
          16    18    31    25  | 90
                                 Demand

where O_i and D_j denote the origins and the destinations respectively.

Explanation: The B.F.S. is given in table 8.5.

Table 8.5

          D_1       D_2       D_3       D_4
O_1       5         3 (18)    6 (1)     2       | 19
O_2       4         7         9 (12)    1 (25)  | 37
O_3       3 (16)    4         7 (18)    5       | 34
          16        18        31        25      | 90

(allocations are shown in parentheses beside the unit costs)

The smallest cost is c_24 = 1. Set x_24 = min(a_2, b_4) = min(37, 25) = 25 and allocate it in the
cell (2, 4). Since b_4 < a_2, cross off the fourth column and diminish a_2 by b_4. Then ignore
the fourth column for the future computation.


The smallest cost in the reduced table is c_12 or c_31. Let us select c_12 = 3 as the smallest cost
and allocate x_12 = min(a_1, b_2) = min(19, 18) = 18 in the cell (1, 2). Since b_2 < a_1, cross
off the second column and diminish a_1 by b_2. Proceed similarly until all rim requirements are
satisfied; table 8.5 gives the B.F.S. It is a B.F.S. because the number of variables is 4+3−1 = 6
and the cells corresponding to the feasible solution do not contain a loop. Here the solution
is not unique.
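As a check on the example, the cost (matrix) minima rule can be sketched in Python. This is an illustration; ties among equal smallest costs are broken by row-major position, which here reproduces the choice of c_12 made in the text, and degenerate zero allocations are simply skipped.

```python
def matrix_minima(cost, supply, demand):
    """Cost (matrix) minima rule: visit cells in order of increasing cost."""
    supply, demand = supply[:], demand[:]
    alloc = {}
    for c, i, j in sorted((cost[i][j], i, j)
                          for i in range(len(supply))
                          for j in range(len(demand))):
        q = min(supply[i], demand[j])   # maximum feasible amount in (i, j)
        if q > 0:
            alloc[(i, j)] = q
            supply[i] -= q
            demand[j] -= q
    return alloc

cost = [[5, 3, 6, 2],
        [4, 7, 9, 1],
        [3, 4, 7, 5]]
alloc = matrix_minima(cost, [19, 37, 34], [16, 18, 31, 25])
# reproduces table 8.5: x12=18, x13=1, x23=12, x24=25, x31=16, x33=18
print(alloc)
```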

In addition to the north-west corner rule, the row minima method and the cost minima method, there are
other methods to find an initial basic feasible solution. One of these is Vogel's Approximation
Method. Its steps for finding an initial basic feasible solution are as follows:

Step 1: Select the lowest and next-to-lowest cost for each row and determine the difference between
them for each row; display these differences within the first bracket against the respective rows. If there
are two or more cells with the same lowest cost, the difference may be taken to be zero. Compute,
similarly, the difference for each column and display it within the bracket against the respective
column.

Step 2: Find the largest of these differences and find out the row or column for which the
difference is maximum. Let the maximum difference correspond to the i-th row. Select the lowest
cost in that row. Let it be c_ij. Allocate x_ij = min(a_i, b_j) in the cell (i, j), which is the
maximum feasible amount that can be allocated in the cell (i, j). If the maximum difference is
not unique, select any one of them.

Step 3: If a_i < b_j, cross off the i-th row and diminish b_j by a_i. If b_j < a_i, cross off the j-th column
and diminish a_i by b_j. If a_i = b_j, allocate x_ij = a_i = b_j in the cell (i, j) and cross off either the
i-th row or the j-th column but not both. Of course, we can omit both the row and the column
simultaneously by inserting a basic variable 0 (zero) in one of the cells of the corresponding row or
column possessing the next minimum cost, and the solution will then be degenerate.

Step 4: Recompute the row and column differences for the reduced transportation table. Repeat
the procedure discussed above until all rim requirements are satisfied.

Example: Obtain an initial basic feasible solution to the balanced T.P. given below using Vogel's
approximation method.

                      Warehouses
           W_1   W_2   W_3   W_4
F_1        19    30    50    10  |  7
F_2        70    30    40    60  |  9    Factory capacity
F_3        40     8    70    20  | 18
            5     8     7    14  | 34
                                  Demand

Solution:

Step 1: Select the lowest and next-to-lowest cost for each row and each column and determine the
difference between them; display the differences within the first bracket against the respective rows
and columns. Here all the differences have been shown within the first compartment. The maximum
difference is 22, which occurs at the second column, and the minimum cost of that column is
c_32 = 8. Allocate min(18, 8) = 8 in the cell (3, 2). The demand of W_2 has been satisfied; cross
off the 2nd column and ignore it for the future computation. The resulting cost matrix will be
obtained after deleting the cost components of the second column.

           W_1       W_2       W_3       W_4        Capacity (row differences)
F_1        19 (5)    30        50        10 (2)      7(9)   7(9)   2(40)  2(40)
F_2        70        30        40 (7)    60 (2)      9(10)  9(20)  9(20)  9(20)
F_3        40         8 (8)    70        20 (10)    18(12) 10(20) 10(50)

Demand      5(21)     8(22)     7(10)    14(10)
            5(21)               7(10)    14(10)
                                7(10)    14(10)
                                7(10)     4(50)

(allocations are shown in parentheses beside the unit costs; the bracketed numbers against the
rim values are the successive row and column differences)

Step 2: Applying the same technique to the resulting matrix, the capacities, demands and the
differences of the cost components have been shown in the second compartment. The maximum
difference is 21, which occurs in the first column, and the lowest cost in that column is c_11 = 19.
Allocate min(7, 5) = 5 in the cell (1, 1). The demand of W_1 has been satisfied; shade the first
column as shown in the table. The resulting cost matrix will be obtained by deleting the cost
components of the first column.

Step 3: Proceeding in the same way, we get the maximum difference 50, which occurs in the third
row; the minimum cost there is c_34 = 20. Allocate min(10, 14) = 10 in the cell (3, 4); the capacity
of F_3 will be exhausted, and the resulting matrix will be obtained by deleting the cost components
of the third row; shade the third row as shown in the table. Using the same technique, ultimately we
obtain the initial B.F.S., where all capacities have been exhausted and all the demands are
met.

Here the initial basic feasible solution by Vogel's approximation method is shown in
a single table in a very compact manner, which saves time.

The initial B.F.S. obtained by using the method is non-degenerate and unique, and the initial
solution is x_11 = 5, x_14 = 2, x_23 = 7, x_24 = 2, x_32 = 8 and x_34 = 10, as shown in the above
table.
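Vogel's method as described in Steps 1–4 above can be sketched as follows. This is an illustration; the degenerate case a_i = b_j is handled by simply crossing off both lines, without an explicit zero basic variable.

```python
def vogel(cost, supply, demand):
    """Vogel's approximation method (a sketch)."""
    supply, demand = supply[:], demand[:]
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    alloc = {}

    def penalty(costs):                 # difference of the two lowest costs
        s = sorted(costs)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        cand = [(penalty([cost[i][j] for j in cols]), 'row', i) for i in rows]
        cand += [(penalty([cost[i][j] for i in rows]), 'col', j) for j in cols]
        _, kind, idx = max(cand, key=lambda t: t[0])
        if kind == 'row':               # lowest cost of the chosen row
            i, j = idx, min(cols, key=lambda j: cost[idx][j])
        else:                           # lowest cost of the chosen column
            j, i = idx, min(rows, key=lambda i: cost[i][idx])
        q = min(supply[i], demand[j])
        alloc[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return alloc

cost = [[19, 30, 50, 10], [70, 30, 40, 60], [40, 8, 70, 20]]
alloc = vogel(cost, [7, 9, 18], [5, 8, 7, 14])
total = sum(cost[i][j] * q for (i, j), q in alloc.items())
print(alloc)   # x11=5, x14=2, x23=7, x24=2, x32=8, x34=10 (0-indexed keys)
print(total)   # 779
```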

Class activity 8.2

1. Obtain an initial B.F.S to the T.P by using the following methods and find out which
solution is better.
a. North west corner method
b. Row minima method
c. Cost (matrix) minima method

i.                                        ii.

     4   3   2   5 |  6                        9   8   5   7 | 12
     6   1   4   3 |  9   a_i                  4   6   8   7 | 14
     7   2   4   6 |  7                        5   8   9   5 | 16
     4   6   6   6                             8  18  13   3


2. Prove that the above initial B.F.S (ii) is degenerate.


3. For the following problem obtain the different starting solutions by adopting the North
West corner rule and Vogel’s approximation method respectively and find out which
solution is better?

5 1 8 12

2 1 0 14

3 6 7 4

9 10 11 6

8.4 Optimality condition


So far we have discussed the methods of determining a basic feasible solution. The next
problem before us is to find whether the solution obtained is optimal or not. A transportation
problem is a minimization problem. Hence at the optimal stage c_ij − z_ij ≥ 0 for all cells
corresponding to non-basic variables, i.e., z_ij − c_ij ≤ 0 for all cells corresponding to non-basic
variables. Therefore, to test the optimality of the solution, we shall have to determine the values of
z_ij − c_ij for all cells corresponding to non-basic variables. If z_ij − c_ij ≤ 0 for all cells
corresponding to non-basic variables, the solution is optimal. For a basic cell, z_ij − c_ij = 0. This
property is very useful in determining the values of z_ij − c_ij for all cells corresponding to non-
basic variables. If the conditions z_ij − c_ij ≤ 0 are not satisfied for all non-basic cells, we shall
have to proceed further to get an optimal solution, which we discuss below.

A. Determination of the net evaluations z_ij − c_ij (the u-v method)

In determining the net evaluation we shall make use of the property of the duality.

The original T.P. is

Minimize z = Σ_i Σ_j c_ij x_ij

subject to   Σ_j x_ij = a_i,   i = 1, 2, …, m,

             Σ_i x_ij = b_j,   j = 1, 2, …, n.

There are (m + n) constraints, all of which are equations, and out of them only (m +
n − 1) equations are independent. Hence there are (m + n) dual variables to the primal
(original) problem, of which one variable can be selected arbitrarily, and all variables are
unrestricted in sign [as all primal constraints are equations]. If the dual variables are

w = (u_1, u_2, …, u_m, v_1, v_2, …, v_n) = (u, v),                 (8.4.1)

the dual constraints are given by

u_i + v_j ≤ c_ij,                                                  (8.4.2)

where u_i (i = 1, 2, …, m) and v_j (j = 1, 2, …, n) are unrestricted in sign.

Now if B^(-1) is the basis inverse of the primal problem at the optimal stage and c_B is the
associated cost vector, the dual optimal solution is given by

w = c_B B^(-1).                                                    (8.4.3)

Here w is expressed as a row vector with (m + n) components. If a_ij is the column vector
corresponding to a non-basic variable x_ij, then

z_ij − c_ij = c_B B^(-1) a_ij − c_ij.                              (8.4.4)

Therefore the net evaluations z_ij − c_ij corresponding to the non-basic cells in a transportation
problem are given by

z_ij − c_ij = c_B B^(-1) a_ij − c_ij
            = w a_ij − c_ij
            = (u, v) a_ij − c_ij,                                  (8.4.5)

where a_ij is the column vector of the coefficient matrix associated with x_ij.

Now a_ij = e_i + e_{m+j}, where e_k denotes the k-th unit column vector. Hence from (8.4.5) we get

z_ij − c_ij = (u_1, …, u_m, v_1, …, v_n)(e_i + e_{m+j}) − c_ij
            = u_i + v_j − c_ij   (i = 1, 2, …, m; j = 1, 2, …, n).  (8.4.6)

For the basic cells, the net evaluations z_ij − c_ij = 0, i.e., if x_ij be a basic variable then


z_ij − c_ij = 0,                                                   (8.4.7)

or, u_i + v_j − c_ij = 0,

or, u_i + v_j = c_ij for all basic cells,

or, u_i = c_ij − v_j if v_j is known,

or, v_j = c_ij − u_i if u_i is known.                              (8.4.8)

If we select one of the values of the dual variables (u_1, …, u_m, v_1, …, v_n) to be zero, all other
values can be determined using the relation (8.4.8). And once all these quantities are known, the net
evaluations z_ij − c_ij given by the formula in (8.4.6) can be calculated. All these calculations can
be done easily from the transportation table.

B. Numerical calculation of the net evaluations corresponding to the non-basic cells.

Given below is a transportation table involving 4 origins and 4 destinations in which the cost
components are displayed in their proper places and all basic cells are marked by circular black
spots. The solution (not given in the table) is a basic feasible solution. The problem before us is
to calculate numerically all z_ij − c_ij corresponding to the non-basic cells.

Table 8.6

          D_1       D_2       D_3       D_4        u_i
O_1       6 (•)     9 (•)     8 [-1]    7 [3]       3
O_2       4 [-1]    6 (•)     4 (•)     7 (•)       0
O_3       9 [-6]    5 [1]     4 (•)     2 [5]       0
O_4       3 [1]     1 [6]     6 [-1]    8 (•)       1

v_j       3         6         4         7

(• marks a basic cell; the bracketed numbers are the net evaluations z_ij − c_ij of the non-basic cells)

From (8.4.8) we get, for a basic cell (i, j),

u_i + v_j = c_ij,

and for non-basic cells (i, j),

z_ij − c_ij = u_i + v_j − c_ij   [from (8.4.6)].

We know that we can select arbitrarily one of the values of u_i (i = 1, 2, …, m) and v_j (j =
1, 2, …, n).

For simple computation we may take the value of one variable equal to zero.

In the above table, cells (1, 1), (1, 2), (2, 2), (2, 3), (2,4), (3, 3) and (4, 4) are marked with
circular black spots. The cells are basic cells and the solution will be a B.F.S. because the set of
cells do not contain a loop and the number of cells (4 + 4 − 1) = 7.

In the second row there are three basic cells. Let us take u_2 = 0 [in general, the variable u_i or v_j
taken to be zero is the one whose row or column contains the maximum number of basic cells].

For the basic cell (2, 2):

u_2 + v_2 = c_22,

or, 0 + v_2 = 6, which implies that v_2 = 6.

Similarly, for the basic cell (2, 3):

u_2 + v_3 = c_23,

or, 0 + v_3 = 4, which implies that v_3 = 4.

And for the basic cell (2, 4):

u_2 + v_4 = c_24,

or, 0 + v_4 = 7, which implies that v_4 = 7.

Now for the basic cell (3, 3):

u_3 + v_3 = c_33,

or, u_3 + 4 = 4, which implies that u_3 = 0.

For the basic cell (4, 4):

u_4 + v_4 = c_44,

or, u_4 + 7 = 8, which implies that u_4 = 1.

Again for basic cell (1, 2)


u_1 + v_2 = c_12,

or, u_1 + 6 = 9, which implies that u_1 = 3.

And last of all, for the basic cell (1, 1):

u_1 + v_1 = c_11,

or, 3 + v_1 = 6, which implies that v_1 = 3.

Thus, we have calculated all the values of the variables u_1, u_2, u_3, u_4, v_1, v_2, v_3, v_4. Then, using
the relation

z_ij − c_ij = u_i + v_j − c_ij,

we can calculate all net evaluations corresponding to the non-basic cells.

For example, z_13 − c_13 = u_1 + v_3 − c_13 = 3 + 4 − 8 = −1,

z_14 − c_14 = u_1 + v_4 − c_14 = 3 + 7 − 7 = 3,

and so on for the remaining non-basic cells.

In the transportation table, the values of all u_i and v_j are listed outside the block as given in
the table, and all net evaluations corresponding to non-basic cells are displayed in the north-east
corner of the respective cells. Following this rule, all net evaluations corresponding to the non-
basic cells can be calculated.
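The u-v computation just carried out can be mechanized. The sketch below (0-indexed, so the cell (4, 2) of the text appears as the key (3, 1)) repeats it for table 8.6:

```python
# costs and basic cells of table 8.6 (0-indexed)
cost = [[6, 9, 8, 7],
        [4, 6, 4, 7],
        [9, 5, 4, 2],
        [3, 1, 6, 8]]
basic = {(0, 0), (0, 1), (1, 1), (1, 2), (1, 3), (2, 2), (3, 3)}

u, v = [None] * 4, [None] * 4
u[1] = 0                        # row 2 contains the most basic cells
changed = True
while changed:                  # propagate u_i + v_j = c_ij over basic cells
    changed = False
    for i, j in basic:
        if u[i] is not None and v[j] is None:
            v[j], changed = cost[i][j] - u[i], True
        elif v[j] is not None and u[i] is None:
            u[i], changed = cost[i][j] - v[j], True

net = {(i, j): u[i] + v[j] - cost[i][j]            # z_ij - c_ij
       for i in range(4) for j in range(4) if (i, j) not in basic}
print(u, v)                          # [3, 0, 0, 1] [3, 6, 4, 7]
print(max(net, key=net.get))         # (3, 1), i.e. cell (4, 2) of the text
```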

Note. There is another method of computing z_ij − c_ij, known as the stepping-stone
algorithm. But in digital computation the u-v method is widely used.

C. Determination of the entering cell and the entering vector

To test the optimality of a B.F.S. at any stage, we require the values of all net
evaluations corresponding to the non-basic cells. If all net evaluations are non-positive quantities,
the solution is optimal. But if at least one of them is positive, the solution is not optimal. As in
the simplex method, we now have to proceed further to get an optimal solution. The first
problem before us is to select a vector which will enter the basis and will move the solution
towards optimality. The vector which will enter the basis is the entering vector, and the
corresponding cell in the transportation table is the new basic cell. If a_pk be the entering vector,
then the cell (p, k) will be the new basic cell which will enter the set of basic cells, and due to
this, a cell will leave the set of basic cells. This cell is known as the leaving cell or departing cell,
and the vector corresponding to the departing cell is known as the departing vector. The entering
vector is selected in the following manner.

If max{z_ij − c_ij : z_ij − c_ij > 0} = z_pk − c_pk > 0, i.e., if the positive maximum of the net
evaluations occurs at the cell (p, k), then a_pk is the entering vector and the cell (p, k) is the
entering cell. If the maximum is not unique, we may select any one cell corresponding to the
maximum value of the net evaluations.

In the above table (8.6), max{z_ij − c_ij : z_ij − c_ij > 0} = 6 > 0, which occurs at the cell (4, 2).
Therefore, in the next iteration, the cell (4, 2) will be the new basic cell and a_42 will be the vector
to enter the next basis.

D. Determination of the departing cell and the value of the basic variable in the entering
cell.

After the selection of the entering cell, we can identify the cell which will leave the set of basic cells
geometrically from the transportation table. Let the cell (p, k) be the entering cell. The number of
basic cells including the cell (p, k) is (m + n − 1) + 1 = m + n. Evidently the set of column
vectors corresponding to these (m + n) cells is linearly dependent. Therefore it is always possible
to construct a loop connecting the cell (p, k) and the set or some subset of the basic cells.
Construct the loop by the trial-and-error method; the loop is unique.

Now allocate the value θ > 0 in the cell (p, k) and readjust the basic variables in the ordered set
of cells containing the simple loop by adding and subtracting the value θ alternately, so that
all rim requirements remain satisfied. Let the ordered set of cells containing the simple loop be

(p, k), (p, j_1), (i_1, j_1), (i_1, j_2), … .

The new variables corresponding to these cells are θ, x_{p j_1} − θ, x_{i_1 j_1} + θ, x_{i_1 j_2} − θ, …, where
x_{p j_1}, x_{i_1 j_1}, etc. are the basic variables in the current iteration.

Now select the maximum value of θ in such a way that the readjusted values of the variables
vanish in at least one cell [excluding the cell (p, k)] of the ordered set and all other variables
remain non-negative. Let us assume that the variable vanishes in the cell (p, j_1), satisfying all the
conditions stated above, i.e., x_{p j_1} − θ = 0, which gives the value θ = x_{p j_1}. This is the value of the
new basic variable to be allocated in the new basic cell (p, k). The cell (p, j_1) is the departing
cell, and it will be a non-basic cell during the next iteration. The vector a_{p j_1} will leave the basis,
and the new basic variables will be θ, x_{i_1 j_1} + θ, x_{i_1 j_2} − θ, etc., corresponding to the cells
(p, k), (i_1, j_1), (i_1, j_2) respectively. All basic variables in the cells not in the loop remain
unchanged. It may so happen that, for the maximum value of θ, the readjusted variables vanish
in more than one cell. In that case it is not possible to select uniquely the cell which will leave
the set of basic cells. Select arbitrarily one of these cells as the departing cell, write down the
value 0 (zero) as the new basic variable in all the other such cells, and the solution in
the next iteration will be degenerate.
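The readjustment along the loop can be sketched as follows. This is an illustration; the loop itself is assumed to be already found, and the cells and values in the usage example are hypothetical, not taken from a table in the text.

```python
def adjust_along_loop(alloc, loop):
    """Readjust basic variables along a simple loop.

    `loop` is the ordered list of cells starting at the entering cell;
    theta is added at even positions and subtracted at odd ones.  The
    first odd-position cell attaining the minimum is taken as the
    departing cell (a tie would make the next solution degenerate).
    """
    theta = min(alloc[c] for c in loop[1::2])
    leaving = min(loop[1::2], key=lambda c: alloc[c])
    for k, c in enumerate(loop):
        alloc[c] = alloc.get(c, 0) + (theta if k % 2 == 0 else -theta)
    del alloc[leaving]          # its readjusted value is zero
    return theta, leaving

# hypothetical example: entering cell (1, 3), four-cell loop
alloc = {(1, 1): 4, (0, 1): 6, (0, 3): 9}
theta, leaving = adjust_along_loop(alloc, [(1, 3), (1, 1), (0, 1), (0, 3)])
print(theta, leaving)   # 4 (1, 1)
print(alloc)            # {(0, 1): 10, (0, 3): 5, (1, 3): 4}
```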

E. Computational procedure


To solve a transportation problem, proceed step by step as mentioned below:

Step 1: Determine an initial basic feasible solution of the given problem using any one of the
methods discussed previously. To reach the optimal solution quickly, in general use either the cost
minima method or Vogel's approximation method.

Step 2: Calculate all net evaluations corresponding to the non-basic cells and display them in the
north-east corner of the corresponding non-basic cells. If all net evaluations are non-positive
quantities at any iteration, the solution is optimal. Then calculate the corresponding minimum
cost using the relation min z = z̃ = Σ c_ij x_ij, where x_ij are the components of the optimal solution
and c_ij are the corresponding transportation costs per unit commodity. If at least one net
evaluation is positive, the solution is not optimal.

Step 3: If the solution is not optimal, determine the entering cell, the value of θ which will be
allocated in the entering cell, and the cell which will leave the basis. Construct a new
transportation table with the readjusted basic variables. Calculate again all net evaluations
corresponding to the non-basic cells and display them in the north-east corner of the
corresponding non-basic cells. Test the optimality of the solution. If the solution is not optimal,
proceed similarly until the optimality conditions are satisfied.

This method of solving a transportation problem is known as the MODI (modified distribution) method.

Example 8.4.1 Obtain the initial basic feasible solution to the following transportation problem
by matrix minima method and then find out an optimal solution and the corresponding cost of
transportation.

          D_1   D_2   D_3   D_4
O_1        5     4     6    14  | 15
O_2        2     9     8     6  |  4
O_3        6    11     7    13  |  8
           9     7     5     6  | 27

Solution: The initial B.F.S., calculated with the help of the matrix minima method, is given in table
8.7(A) below. Now calculate all net evaluations corresponding to the non-basic cells, taking one of
the dual variables equal to 0. The net evaluations are not all non-positive. Hence the solution is
not optimal.


Cell (2, 4) has the positive net evaluation 3. Thus in the next iteration cell (2, 4) will be the new
basic cell. Construct the loop as shown in table 8.7(A); the loop is simple and unique. Insert the
value θ > 0 in the cell (2, 4) and readjust the basic variables in the cells containing the loop
accordingly, as given in the table 8.7(A).

Now the maximum value of θ will be 3; cell (1, 3) will leave the set of basic cells and all other
variables remain non-negative. Construct the table 8.7(B) with the value θ = 3.


Calculate all net evaluations corresponding to the non-basic cells, again taking one dual variable
equal to 0. Cell (3, 1) will be the new basic cell. Construct a simple loop through cell (3, 1) as shown
in table 8.7(B). Insert the value θ > 0 in the cell (3, 1) and readjust the basic variables as shown in
the table 8.7(B). The maximum value of θ = 1; cell (2, 1) will leave the set of basic cells and all
other variables remain non-negative. With θ = 1, construct table 8.7(C).


Calculate all net evaluations corresponding to the non-basic cells in the table 8.7(C), with one
dual variable again taken to be 0. All net evaluations are non-positive. Hence the solution is
optimal; the optimal solution is x_11 = 8, x_12 = 7, x_24 = 4, x_31 = 1, x_33 = 5, x_34 = 2, and the
minimum cost of the transportation problem is

z̃ = 5 × 8 + 4 × 7 + 6 × 4 + 6 × 1 + 7 × 5 + 13 × 2 = 159 units.

Notice. As the net evaluation corresponding to the non basic cell (1,3) is zero, an alternative
optimal solution exists.
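The optimal plan of Example 8.4.1 can be verified mechanically against the rim conditions (a check, not part of the solution procedure):

```python
# data and optimal plan of Example 8.4.1 (0-indexed cells)
cost = [[5, 4, 6, 14], [2, 9, 8, 6], [6, 11, 7, 13]]
supply, demand = [15, 4, 8], [9, 7, 5, 6]
alloc = {(0, 0): 8, (0, 1): 7, (1, 3): 4, (2, 0): 1, (2, 2): 5, (2, 3): 2}

row = [sum(q for (i, j), q in alloc.items() if i == r) for r in range(3)]
col = [sum(q for (i, j), q in alloc.items() if j == c) for c in range(4)]
total = sum(cost[i][j] * q for (i, j), q in alloc.items())
print(row == supply, col == demand, total)   # True True 159
```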

Class activity 8.3

Solve the following balanced transportation problem by using VAM to determine the initial
basic feasible solution, and test whether the solution is optimal or not.

     4    2    7   -1 |  27
     3    0    2    4 |  33
     5    3    4    5 |  23
     3    5    4   -2 |  17
    31   24   25   20 | 100

8.5 Unbalanced transportation problems and their solutions.


a) Unbalanced transportation problems

There are two types of unbalanced T.P.

1. When Σ a_i > Σ b_j, i.e., the total available capacity of the m origins is greater than
the total demand of the n destinations. This problem can be converted into a balanced
transportation problem by using the following device:
a. Imagine a fictitious or fake (n + 1)-th destination D_{n+1}.
b. Assume that the demand of the destination D_{n+1} is b_{n+1} = Σ a_i − Σ b_j.


c. Assume that the cost components c_{i,n+1} = 0 for all i [i = 1, 2, …, m], i.e., c_{1,n+1} =
c_{2,n+1} = ⋯ = c_{m,n+1} = 0. With this assumption, the T.P. will be a balanced problem
having m origins and (n + 1) destinations. Due to the assumption c_{i,n+1} = 0 [i =
1, 2, …, m], the minimum cost of transportation remains unaffected and the total
demands of the destinations will be satisfied completely, though the capacities of the
origins will not be exhausted completely. An unbalanced T.P. is given below.

          D_1   D_2   D_3
O_1        2     3     4 | 6
O_2        4     3     1 | 8
O_3        2     2     5 | 6
           5     4     7

In the problem, Σ a_i = a_1 + a_2 + a_3 = 6 + 8 + 6 = 20
and Σ b_j = b_1 + b_2 + b_3 = 5 + 4 + 7 = 16,

so Σ a_i = 20 > 16 = Σ b_j.

The problem can be converted into a balanced T.P. in the following manner:

          D_1   D_2   D_3   D_4
O_1        2     3     4     0 |  6
O_2        4     3     1     0 |  8
O_3        2     2     5     0 |  6
           5     4     7     4 | 20

where b_4 = Σ a_i − Σ b_j = 20 − 16 = 4 and c_14 = c_24 = c_34 = 0.
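The balancing device of this section can be sketched as follows (an illustration covering both cases):

```python
def balance(cost, supply, demand):
    """Convert an unbalanced T.P. into a balanced one by adding a fake
    destination (or origin) with zero unit costs."""
    cost = [row[:] for row in cost]
    gap = sum(supply) - sum(demand)
    if gap > 0:                        # capacity exceeds demand
        for row in cost:
            row.append(0)
        demand = demand + [gap]
    elif gap < 0:                      # demand exceeds capacity
        cost.append([0] * len(demand))
        supply = supply + [-gap]
    return cost, supply, demand

# the example above: capacities total 20, demands total 16
cost, supply, demand = balance([[2, 3, 4], [4, 3, 1], [2, 2, 5]],
                               [6, 8, 6], [5, 4, 7])
print(demand)      # [5, 4, 7, 4]
print(cost[0])     # [2, 3, 4, 0]
```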


Now this transportation problem can be solved by the previous methods. In this problem, an
initial B.F.S., obtained using the matrix minima method, is x_11 = 2, x_14 = 4, x_22 = 1, x_23 =
7, x_31 = 3 and x_32 = 3, and the minimum cost of transportation such that the total demands of
the destinations are satisfied is 25. Multiple optimal solutions exist for the problem, and one of
the optimal solutions is x_11 = 3, x_14 = 3, x_23 = 7, x_24 = 1, x_31 = 2 and x_32 = 4, i.e., x_11 =
3, x_23 = 7, x_31 = 2 and x_32 = 4, as actually no quantity is transported to the destination D_4.

2. When Σ b_j > Σ a_i, i.e., the total demand of the n destinations is greater than the total
available capacity of the m origins. Here also the problem can be converted into an
ordinary balanced T.P., but the total demands of the n destinations will not be satisfied
completely. Though we cannot satisfy all demands, we can still allocate the materials
available at the origins to the destinations in such a way as to minimize the cost of
transportation.

To convert it into a balanced transportation problem:

a. Imagine a fake (m + 1)-th origin O_{m+1}.

b. Assume that the capacity of the origin O_{m+1} is

a_{m+1} = Σ b_j − Σ a_i.

c. Assume that the cost components c_{m+1,j} = 0 for all j.

The problem will ultimately be a transportation problem having (m + 1) origins and n
destinations. Now the problem can be solved as in the previous case.

b) Solution of unbalanced transportation problem


Example 8.5.1 Determine an initial B.F.S. of the following unbalanced T.P. by VAM:

          D_1   D_2   D_3   D_4
O_1       14    19    11    20 | 10
O_2       19    12    14    17 | 15
O_3       14    16    11    18 | 12
           8    12    16    14

Here Σ b_j = 50 > Σ a_i = 37, so the problem is unbalanced; 50 − 37 = 13. We assume a
fictitious or fake origin O_4 having supply capacity 13, the cost components of transporting
these 13 units being 0, and we can rewrite the problem in the following manner.


The initial basic feasible solution by VAM:

Table 8.8

           D_1       D_2        D_3        D_4        Capacity (row differences)
O_1        14        19         11 (10)    20         10(3)  10(3)  10(3)
O_2        19        12 (12)    14 (2)     17 (1)     15(2)  15(2)  3(3)  3(3)  3(3)
O_3        14        16         11 (4)     18         12(3)  12(3)  12(3)  12(3)  4(7)
O_4 (fake)  0         0          0          0 (13)    13(0)

Demand      8(14)    12(12)     16(11)     14(17)     50
            8(0)     12(4)      16(0)       1(1)
            8(0)                16(0)       1(1)
            8(5)                 6(3)       1(1)
                                 6(3)       1(1)

(allocations are shown in parentheses beside the unit costs; the numbers in brackets against the
rim values are the successive row and column differences)

Here the solution is a non-degenerate basic feasible solution.

Solve the above problem by taking the initial basic feasible solution obtained by VAM. Find the
amount which is not supplied, and find the destination to which the fake amount has been
supplied.

The problem is a minimization problem.


In the second row there are three basic variables, so we take u_2 = 0. With u_2 = 0 we calculate
all u_i [i = 1, 2, 3, 4] and v_j [j = 1, 2, 3, 4] as shown in the table. Now we calculate z_ij − c_ij for
all non-basic cells.

All z_ij − c_ij are ≤ 0, with equality holding for one non-basic cell. Thus we have reached the
optimal stage, and alternative optimal solutions exist.

One optimal solution is x_13 = 10, x_22 = 12, x_23 = 2, x_24 = 1, x_31 = 8, x_33 = 4, with the fake
amount of 13 units "supplied" to the destination D_4, and

min z = 10 × 11 + 12 × 12 + 2 × 14 + 1 × 17 + 8 × 14 + 4 × 11
      = 110 + 144 + 28 + 17 + 112 + 44 = 455.

Remark: Destination D_4 will be in want of 13 units.

Example 8.5.2 Solve the following balanced T.P., finding the I.B.F.S. by the cost minima method.


Table 8.10

          D_1      D_2      D_3      D_4          Capacity
O_1       2        5        0       -3 (12)       12
O_2       4 (7)    6        8        1 (8)        15, 7, 7
O_3       4 (3)    0 (8)    4 (3)    5            14, 6, 6, 6
O_4       2        6        1 (9)    4             9, 9, 9

Demand   10        8       12       20            50
         10        8       12        8
         10                12        8
         10                12
         10                 3

(allocations are shown in parentheses beside the unit costs; the successive numbers against the
rim values show how the capacities and demands are diminished)

Optimality test: initial basic feasible solution is not unique.


Table 8.11

          D_1        D_2       D_3        D_4           u_i
O_1       2 [-2]     5 [-9]    0 [0]     -3 (12−θ)      -4
O_2       4 (7−θ)    6 [-6]    8 [-4]     1 (8+θ)        0
O_3       4 (3+θ)    0 (8)     4 (3−θ)    5 [-4]         0
O_4       2 [-1]     6 [-9]    1 (9)      4 [-6]        -3

v_j       4          0         4          1

(allocations in parentheses, with the θ-readjustment along the loop; bracketed numbers are the
net evaluations z_ij − c_ij of the non-basic cells)

All z_ij − c_ij ≤ 0 in the table 8.11, with z_13 − c_13 = 0. Thus the solution x_14 = 12, x_21 = 7, x_24 =
8, x_31 = 3, x_32 = 8, x_33 = 3, x_43 = 9 is optimal, the minimum cost is 33 units, and an alternative
optimal solution exists. We now find an alternative optimal solution. We put θ in the cell (1, 3) and
draw a simple loop as shown in the table 8.11, readjusting the basic variables as shown in the
table 8.11. Now θ is to be taken in such a way that all basic variables remain non-negative and one
vanishes in a cell. If we take θ = 3 then the conditions will be satisfied and the variable in the cell
(3, 3) vanishes. With the value θ = 3, the next solution is


Table 8.12

          D_1       D_2       D_3       D_4          u_i
O_1       2 [-2]    5 [-9]    0 (3)    -3 (9)         0
O_2       4 (4)     6 [-6]    8 [-4]    1 (11)        4
O_3       4 (6)     0 (8)     4 [0]     5 [-4]        4
O_4       2 [-1]    6 [-9]    1 (9)     4 [-6]        1

v_j       0        -4         0        -3

(allocations in parentheses; bracketed numbers are the net evaluations z_ij − c_ij of the non-basic cells)

Here in the table 8.12 all z_ij − c_ij ≤ 0 for the non-basic cells, and z_33 − c_33 = 0; thus an
alternative optimal solution again exists. One alternative optimal solution is

x_13 = 3, x_14 = 9, x_21 = 4, x_24 = 11, x_31 = 6, x_32 = 8, x_43 = 9, and

min cost = 3 × 0 + 9 × (−3) + 4 × 4 + 11 × 1 + 6 × 4 + 8 × 0 + 9 × 1

         = 0 − 27 + 16 + 11 + 24 + 0 + 9 = 33 units.
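As a check, the two alternative optimal plans of Example 8.5.2 can be costed directly. This is an illustration; the plans are written with 0-indexed cells, reconstructed from the cost products given in the example.

```python
# both optimal plans of Example 8.5.2 cost 33 units
cost = [[2, 5, 0, -3], [4, 6, 8, 1], [4, 0, 4, 5], [2, 6, 1, 4]]
first = {(0, 3): 12, (1, 0): 7, (1, 3): 8, (2, 0): 3,
         (2, 1): 8, (2, 2): 3, (3, 2): 9}
second = {(0, 2): 3, (0, 3): 9, (1, 0): 4, (1, 3): 11,
          (2, 0): 6, (2, 1): 8, (3, 2): 9}
z = lambda plan: sum(cost[i][j] * q for (i, j), q in plan.items())
print(z(first), z(second))   # 33 33
```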

8.6 Degenerate transportation problems and their solution


As we have seen in Chapter 4, if some components of the solution set corresponding to the basic
variables are zero, the basic solution is known as a degenerate basic solution, and if all
components of the solution set corresponding to the basic variables are non-zero quantities, then
the basic solution is known as a non-degenerate basic solution.

Degeneracy may occur at the initial stage or at any subsequent iteration. Here we shall discuss a
problem where the initial B.F.S. is degenerate and only one basic variable is zero. The problem
can be solved similarly if more than one basic variable is zero. Allocate a quantity ε > 0 (very
small) instead of the basic variable 0 (zero) in the cell and readjust all basic variables in the cells
such that the rim conditions

Σ_j x_ij = a_i and Σ_i x_ij = b_j

are satisfied, where it may be assumed that x + ε = x (ε being negligible). Now solve the problem
treating it as a non-degenerate problem. You may drop ε at any subsequent iteration if it has been
found that the solution at that stage will be non-degenerate on the assumption ε = 0; in any case
set ε = 0 after finding the optimal solution.

Example 8.6.1 Obtain the optimal solution to the following transportation problem by using the
north-west corner rule to find I.B.F.S.

          D_1   D_2   D_3   D_4
O_1        2     5     4     7 | 4
O_2        6     1     2     5 | 6
O_3        4     5     2     4 | 8
           3     7     6     2

The I.B.F.S. by the north-west corner rule:

Table 8.13

          D_1      D_2      D_3      D_4
O_1       2 (3)    5 (1)    4        7
O_2       6        1 (6)    2 (0)    5
O_3       4        5        2 (6)    4 (2)

(allocations are shown in parentheses beside the unit costs)


Remark. Either (2, 3) or (3, 2) contains the basic variable 0, which indicates that the I.B.F.S. is
degenerate. The 0 cannot be inserted in any other cell, for in that case the solution would not be
the one given by the N.W.C.R.

Optimality test. Initially keep '0' in its own position as '0', and not as ε > 0 such that
x ± ε = x, etc.

Table 8.14

          D_1      D_2      D_3       D_4        u_i
O_1       2 (3)    5 (1)    4 [-2]    7 [-1]      0
O_2       6 [8]    1 (6)    2 (0)     5 [1]      -4
O_3       4 [6]    5 [4]    2 (6)     4 (2)      -4

v_j       2        5        6         8

(allocations in parentheses; bracketed numbers are c_ij − u_i − v_j for the non-basic cells)

The minimum value of {c_ij − u_i − v_j : c_ij − u_i − v_j < 0} is −2, which occurs in the cell (1, 3).
In the first table put ε > 0 instead of zero such that x ± ε = x [you may now put ε instead of 0
in the first table], and the first table becomes


Table 8.15

          D_1      D_2        D_3        D_4
O_1       2 (3)    5 (1−θ)    4 (θ)      7
O_2       6        1 (6+θ)    2 (ε−θ)    5
O_3       4        5          2 (6)      4 (2)

Put θ in the cell (1, 3), where c_13 − u_1 − v_3 = −2 = min{c_ij − u_i − v_j : c_ij − u_i − v_j < 0}.
Now construct the loop as shown in the table. If we take θ = ε, then 6 + ε > 0 and 1 − ε > 0, and
the readjusted B.F.S. is

Table 8.16

          D_1      D_2        D_3      D_4        u_i
O_1       2 (3)    5 (1−ε)    4 (ε)    7 [1]       0
O_2       6 [8]    1 (6+ε)    2 [2]    5 [3]      -4
O_3       4 [4]    5 [2]      2 (6)    4 (2)      -2

v_j       2        5          4        6

(allocations in parentheses; bracketed numbers are c_ij − u_i − v_j for the non-basic cells)


All Δ_ij ≥ 0, so we have reached the optimal stage. Now put ε = 0; the optimal solution
is x_11 = 3, x_12 = 1, x_13 = 0, x_22 = 6, x_33 = 6, x_34 = 2, and the minimum cost = 3 × 2 + 1 × 5 + 0 ×
4 + 6 × 1 + 6 × 2 + 2 × 4 = 37 units. The optimal solution is degenerate, as at least
one optimal basic variable (x_13) is 0.
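The u-v (MODI) computation used above is easy to check mechanically. The following Python sketch (our own illustrative code, with a hypothetical helper `modi_multipliers`) recomputes the multipliers and net evaluations for the optimal basis of Example 8.6.1 and confirms the minimum cost of 37.

```python
def modi_multipliers(costs, basis):
    """Fix u[0] = 0 and propagate u[i] + v[j] = c[i][j] over the basic
    cells; assumes the basic cells form a spanning tree, as the basic
    cells of a transportation B.F.S. always do."""
    m, n = len(costs), len(costs[0])
    u, v = [None] * m, [None] * n
    u[0] = 0
    while None in u or None in v:
        for (i, j) in basis:
            if u[i] is not None and v[j] is None:
                v[j] = costs[i][j] - u[i]
            elif v[j] is not None and u[i] is None:
                u[i] = costs[i][j] - v[j]
    return u, v

costs = [[2, 5, 4, 7],
         [6, 1, 2, 5],
         [4, 5, 2, 4]]
# optimal basis of Example 8.6.1 (0-based cells, epsilon set back to 0)
basis = {(0, 0): 3, (0, 1): 1, (0, 2): 0, (1, 1): 6, (2, 2): 6, (2, 3): 2}
u, v = modi_multipliers(costs, basis)
deltas = {(i, j): costs[i][j] - u[i] - v[j]
          for i in range(3) for j in range(4) if (i, j) not in basis}
print(all(d >= 0 for d in deltas.values()))                  # True: optimal
print(sum(costs[i][j] * x for (i, j), x in basis.items()))   # 37
```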

Example 8.6.2 Solve the following transportation problem by using VAM to determine the
I.B.F.S. and show that the optimal solution is a degenerate solution.

Table 8.18

[VAM tableau for a 3 × 4 problem with total supply = total demand = 45; the parenthesized
values are the VAM row and column penalties computed at each iteration. The tableau is not
reproduced legibly here.]

The above table is self-explanatory. The initial solution is not unique.

Optimality test:


Table 8.19

[Optimality-test tableau showing the multipliers u_i, v_j and the net evaluations Δ_ij for the
non-basic cells; not reproduced legibly here.]

For the optimality test, we calculate Δ_ij = c_ij − u_i − v_j for the non-basic cells; as the
iterations proceed, the Δ_ij gradually become non-negative.

The initial basic feasible solution is the optimal solution. The optimal basic values are
15, 15, 0, 10, 5 and 0, so the solution is degenerate, and

min z = 15 × 5 + 15 × 9 + 0 × 12 + 10 × 8 + 5 × 15 + 0 × 16 = 365.
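Degeneracy is straightforward to test once the basic values are known: a B.F.S. of a balanced m × n problem has m + n − 1 basic variables, and it is degenerate when at least one of them is 0. A minimal sketch (the helper name is ours, for illustration):

```python
def is_degenerate(basic_values, m, n):
    """A B.F.S. of a balanced m-by-n T.P. has m + n - 1 basic
    variables; it is degenerate when fewer of them are positive."""
    positives = sum(1 for x in basic_values if x > 0)
    return positives < m + n - 1

# the six basic values of the optimal solution above
print(is_degenerate([15, 15, 0, 10, 5, 0], 3, 4))   # True
```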

Class activity 8.4

a. Formulate mathematically a (balanced) transportation problem as an L.P.P. having m
origins and n destinations (a_i, b_j ≥ 0).
b. What are the numbers of constraints and variables in a balanced transportation
problem?
c. What is the number of independent constraints in a balanced T.P.?
d. Is a constraint of a (balanced) T.P. an equation or an inequality?
[In all problems there are m origins and n destinations.]


Summary

Transportation problem

A transportation problem is a particular type of linear programming problem. Here, a particular
commodity which is stored at different warehouses (origins) is to be transported to different
distribution centers (destinations) in such a way that the transportation cost is minimum.
Consider a particular example. Let there be m origins O_1, O_2, ..., O_i, ..., O_m, and let the
quantity available at origin O_i be a_i [i = 1, 2, ..., m]; let there be n destinations D_1, D_2,
..., D_j, ..., D_n, and let the quantity required, i.e., the demand, at D_j be b_j [j = 1, 2, ..., n].
Let us make the assumption that

Σ_i a_i = Σ_j b_j.    (8.1.1)

This assumption is not restrictive. When

Σ_i a_i = Σ_j b_j,

i.e. the total available quantity is equal to the total demand, the problem is called a balanced
transportation problem, and when

Σ_i a_i ≠ Σ_j b_j,

it is called an unbalanced transportation problem. We shall initially discuss problems of the
first type.
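The conversion between the two cases is mechanical: add a dummy destination (or dummy origin) with zero unit costs to absorb the excess supply (or demand). A hedged Python sketch, using the data of Review exercise 9 below as an illustration (the helper name `balance` is ours):

```python
def balance(costs, supply, demand):
    """Convert an unbalanced T.P. into a balanced one by adding a
    dummy destination (excess supply) or a dummy origin (excess
    demand) with zero unit costs."""
    diff = sum(supply) - sum(demand)
    costs = [row[:] for row in costs]
    supply, demand = supply[:], demand[:]
    if diff > 0:                          # supply exceeds demand
        for row in costs:
            row.append(0)                 # dummy destination column
        demand.append(diff)
    elif diff < 0:                        # demand exceeds supply
        costs.append([0] * len(demand))   # dummy origin row
        supply.append(-diff)
    return costs, supply, demand

# total supply 30 vs total demand 19: a dummy destination absorbs 11
c, s, d = balance([[4, 5, 6], [3, 1, 5], [2, 4, 4]],
                  [12, 11, 7], [6, 5, 8])
print(sum(s) == sum(d), d[-1])   # True 11
```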

Determination of an initial B.F.S


So far we have discussed some fundamental properties of a transportation problem which
will help in solving a problem. The next task before us is to determine an initial B.F.S. of
the problem; from this we proceed to find another B.F.S. which improves the value of the
objective function. There are various methods of finding an initial B.F.S. It is interesting
to note that in all cases the solution is a B.F.S.; of course, the solution may be non-degenerate
or degenerate. If the solution is a B.F.S., the cells to which some allocations are made
are called basic cells. Obviously the allocated values are the components of the B.F.S.
Some methods of determining an initial B.F.S. are

1. North-west corner method

2. Row minima method

3. Matrix minima (cost minima) method

Review exercise

1) What is an unbalanced T.P.? How can you convert it into a balanced T.P.?
2) Which one of the following statements is true?
a) A T.P. is strictly a maximization problem.
b) A T.P. is strictly a minimization problem.
c) A T.P. may be a maximization or a minimization problem.
3) Prove that there exists at least one feasible solution in a balanced T.P.
4) Prove that there exists a finite optimal solution in each balanced transportation problem.
5) Define a loop in a transportation table. What is the nature of a loop in a transportation
table?
6)
a. What is the number of basic variables in a balanced T.P. with m origins and n
destinations?
b. What is the maximum number of positive components in a B.F.S. of a (balanced) T.P.
with m origins and n destinations?
7) Find the initial B.F.S. of each of the following T.P.s using the north-west corner rule and
prove that (ii) is degenerate.
i)
    4   6   9   5  |  16
    4   2   7   1  |  11
    2   5   2   8  |  10
   12   7   6  15

ii)
    9   7   4   2  |  20
    2   9   8   6  |  15
    9   7   5   8  |  15
   14  21   6   9

8) Solve the following balanced transportation problem:

              D1   D2   D3   D4   Supply
               6    4    2    7      8
               5    1    4    6     14
               6    5    2    5      9
               4    3    2    1     11
   Demand      7   13   12   10
9) Solve the following unbalanced transportation problem:

              D1   D2   D3   Supply
               4    5    6     12
               3    1    5     11
               2    4    4      7
   Demand      6    5    8
10) A firm manufacturing a single product has three plants P1, P2 and P3. The three plants have
produced 60, 35 and 40 units respectively during this month. The firm has made a
commitment to sell 22 units to customer A, 45 units to customer B, 20 units to customer
C, 18 units to customer D and 30 units to customer E. Find the minimum cost of shipping
the manufactured product to the five customers. The cost matrix is given below:

                       Customer
                  A    B    C    D    E
   Plant  P1      4    1    3    4    4
          P2      2    3    2    2    3
          P3      3    5    2    4    4


Chapter Nine

9. Theory of games
9.1 Introduction
In this chapter, we shall study if not the most practical then certainly an elegant
application of linear programming. The subject is called game theory, and we shall
focus on the simplest type of game, called the finite two-person zero-sum game, or
just matrix game for short. Our primary goal shall be to prove the famous Minimax
Theorem, which was first discovered and proved by John von Neumann in 1928. His
original proof of this theorem was rather involved and depended on another beautiful
theorem from mathematics, the Brouwer Fixed-Point Theorem. However, it eventually became
clear that the solution of matrix games could be found by solving a certain
linear programming problem and that the Minimax Theorem is just a fairly straightforward
consequence of the Duality Theorem.

Game theory deals with decision situations in which two intelligent opponents with conflicting
objectives are trying to outdo one another. Typical examples include launching advertising
campaigns for competing products and planning strategies for warring armies. Game theory is
the formal study of decision-making where several players must make choices that potentially
affect the interests of the other players. A game is a formal description of a strategic situation.

Nash Equilibrium

A Nash equilibrium, also called strategic equilibrium, is a list of strategies, one for each player,
which has the property that no player can unilaterally change his strategy and get a better payoff.

Payoff
A payoff is a number, also called utility, that reflects the desirability of an outcome to a player,
for whatever reason. When the outcome is random, payoffs are usually weighted with their
probabilities. The expected payoff incorporates the player’s attitude towards risk.

General objectives
At the end of this unit the learner will be able to:

▪ Understand the formal study of game theory

▪ Explain the formulation of two-person zero-sum games

▪ Differentiate the pure and mixed strategies in game theory

▪ Identify how to solve pure strategy games

▪ Understand some probabilistic considerations of game theory


▪ Consider how to solve games with the simplex method

Game theory

In a game conflict, two opponents, known as players, each have a (finite or infinite) number
of alternatives or strategies. Associated with each pair of strategies is a payoff that one player
receives from the other. Such games are known as two-person zero-sum games because a
gain by one player signifies an equal loss to the other. It suffices, then, to summarize the game in
terms of the payoff to one player. Designating the two players as A and B with m and n
strategies, respectively, the game is usually represented by the payoff matrix to player A.

Classification of games

How many players are there in the game? Usually there should be more than one player.
However, you can play roulette alone; the casino doesn't count as a player, since it doesn't make
any decisions: it only collects or gives out money. Most books on game theory do not treat
one-player games, but I will allow them provided they contain elements of randomness.

Is play simultaneous or sequential? In a simultaneous game, each player has only one move, and
all moves are made simultaneously. In a sequential game, no two players move at the same time,
and players may have to move several times. There are games that are neither simultaneous nor
sequential.

Does the game have random moves? Games may contain random events that influence the
outcome. These are called random moves.

Is the game zero-sum? Zero-sum games have the property that the sum of the payoffs to the
players equals zero. A player can have a positive payoff only if another has a negative payoff.
Poker and chess are examples of zero-sum games. Real-world games are rarely zero-sum.


9.2 Formulation of Two-Person Zero-Sum Games


The individual most closely associated with the creation of the theory of games is
John von Neumann, one of the greatest mathematicians of the 20th century. Although
others preceded him in formulating a theory of games - notably Émile Borel - it was von
Neumann who published in 1928 the paper that laid the foundation for the theory of
two-person zero-sum games. Von Neumann’s work culminated in a fundamental book on
game theory written in collaboration with Oskar Morgenstern entitled Theory of Games
and Economic Behavior, 1944. Other discussions of the theory of games relevant for our
present purposes may be found in the textbook Game Theory by Guillermo Owen, 2nd
edition, Academic Press, 1982, and the expository book, Game Theory and Strategy by
Philip D. Straffin, published by the Mathematical Association of America, 1993.

The theory of von Neumann and Morgenstern is most complete for the class of games
called two-person zero-sum games, i.e. games with only two players in which one player
wins what the other player loses.

The extreme case of players with fully opposed interests is embodied in the class of two-player
zero-sum (or constant-sum) games. Familiar examples range from rock-paper-scissors to many
parlor games like chess, go, or checkers. A classic case of a zero-sum game, which was
considered in the early days of game theory by von Neumann, is the game of poker. The
extensive game in Figure 10, and its strategic form in Figure 11, can be interpreted in terms of
poker, where player I is dealt a strong or weak hand which is unknown to player II. It is a
constant-sum game since for any outcome the two payoffs add up to 16, so that one player's
gain is the other player's loss. When player I chooses to announce despite being in a weak
position, he is colloquially said to be "bluffing." This bluff not only induces player II to possibly
fold, but similarly allows for the possibility that player II stays in when player I is strong,
increasing the gain to player I.

Because games are rooted in conflict of interest, the optimal solution selects one or more
strategies for each player such that no change in the chosen strategies can improve the payoff
to either player. These solutions can be in the form of a single pure strategy or several strategies
mixed according to specific probabilities. The following examples demonstrate the two cases.

Two companies, A and B, sell two brands of flu medicine. Company A advertises on radio (A1),
television (A2), and in newspapers (A3). Company B, in addition to using radio (B1), television
(B2), and newspapers (B3), also mails brochures (B4). Depending on the effectiveness of each
advertising campaign, one company can capture a portion of the market from the other. The
following matrix summarizes the percentage of the market captured or lost by company A.

The solution of the game is based on the principle of securing the best of the worst for each
player. If company A selects strategy A1, then regardless of what B does, the worst that can
happen is that A loses 3% of the market share to B. This is represented by the minimum value of
the entries in row 1. Similarly, the worst outcome of strategy A2 is for A to capture 5% of the
market from B, and the worst outcome of strategy A3 is for A to lose 9% to B. These results are
listed in the 'row min' column. To achieve the best of the worst, company A chooses strategy A2
because it corresponds to the maximum value, the largest element in the 'row min' column.

Strategic Form

The simplest mathematical description of a game is the strategic form, mentioned in the
introduction. For a two-person zero-sum game, the payoff function of Player II is the negative of
the payoff of Player I, so we may restrict attention to the single payoff function of Player I,
which we call here A.

Definition 1 The strategic form, or normal form, of a two-person zero-sum game is given
by a triplet (X, Y, A), where
(1) X is a nonempty set, the set of strategies of Player I
(2) Y is a nonempty set, the set of strategies of Player II
(3) A is a real-valued function defined on X × Y . (Thus, A(x, y) is a real number for
every x ∈ X and every y ∈ Y.)

The interpretation is as follows. Simultaneously, Player I chooses x ∈ X and Player


II chooses y ∈ Y, each unaware of the choice of the other. Then their choices are made
known and I wins the amount A(x, y) from II. Depending on the monetary unit involved,
A(x, y) will be cents, dollars, pesos, beads, etc. If A is negative, I pays the absolute value
of this amount to II. Thus, A(x, y) represents the winnings of I and the losses of II.


This is a very simple definition of a game; yet it is broad enough to encompass the
finite combinatorial games and games such as tic-tac-toe and chess. This is done by being
sufficiently broadminded about the definition of a strategy. A strategy for a game of chess, for
example, is a complete description of how to play the game, of what move to make in
every possible situation that could occur. It is rather time-consuming to write down even
one strategy, good or bad, for the game of chess. However, several different programs for
instructing a machine to play chess well have been written. Each program constitutes one
strategy. The program Deep Blue, which beat the then world chess champion Garry Kasparov
in a match in 1997, represents one strategy. The set of all such strategies for Player I is
denoted by X. Naturally, in the game of chess it is physically impossible to describe all
possible strategies since there are too many; in fact, there are more strategies than there
are atoms in the known universe. On the other hand, the number of games of tic-tac-toe
is rather small, so that it is possible to study all strategies and find an optimal strategy
for each player. Later, when we study the extensive form of a game, we will see that many
other types of games may be modeled and described in strategic form.

To illustrate the notions involved in games, let us consider the simplest non-trivial
case when both X and Y consist of two elements. As an example, take the game called
Odd-or-Even.

Example: Odd or Even. Players I and II simultaneously call out one of the numbers one or two.
Player I's name is Odd; he wins if the sum of the numbers is odd. Player II's name is Even; she
wins if the sum of the numbers is even. The amount paid to the winner by the loser is always the
sum of the numbers in dollars. To put this game in strategic form we must specify X, Y and A.
Here we may choose X = {1, 2}, Y = {1, 2}, and A as given in the following table (entries are
the payoffs to Player I):

                   II (Even)
                   1       2
   I (Odd)   1    −2       3
             2     3      −4

It turns out that one of the players has a distinct advantage in this game. Can you
tell which one it is?

Let us analyze this game from Player I’s point of view. Suppose he calls ‘one’ 3/5ths
of the time and ‘two’ 2/5ths of the time at random. In this case,


1. If II calls ‘one’, I loses 2 dollars 3/5ths of the time and wins 3 dollars 2/5ths of the
time; on the average, he wins -2(3/5) + 3(2/5) = 0 (he breaks even in the long run).
2. If II call ‘two’, I wins 3 dollars 3/5ths of the time and loses 4 dollars 2/5ths of the time;
on the average he wins 3(3/5) - 4(2/5) = 1/5.
That is, if I mixes his choices in the given way, the game is even every time II calls
'one', but I wins 20 cents on the average every time II calls 'two'. By employing this simple
strategy, I is assured of at least breaking even on the average no matter what II does. Can
Player I fix it so that he wins a positive amount no matter what II calls?

Let p denote the proportion of times that Player I calls 'one'. Let us try to choose p
so that Player I wins the same amount on the average whether II calls 'one' or 'two'. Since
I's average winnings when II calls 'one' is −2p + 3(1 − p), and his average winnings
when II calls 'two' is 3p − 4(1 − p), Player I should choose p so that

−2p + 3(1 − p) = 3p − 4(1 − p),  i.e.  3 − 5p = 7p − 4,  giving p = 7/12.

Hence, I should call 'one' with probability 7/12, and 'two' with probability 5/12. On the
average, I wins −2(7/12) + 3(5/12) = 1/12 dollars, or 8 1/3 cents, every time he plays the game,
no matter what II does. Such a strategy that produces the same average winnings no matter
what the opponent does is called an equalizing strategy.

Therefore, the game is clearly in I's favor. Can he do better than 8 1/3 cents per game
on the average? The answer is: not if II plays properly. In fact, II could use the same
procedure:

call 'one' with probability 7/12

call 'two' with probability 5/12

If I calls 'one', II's average loss is −2(7/12) + 3(5/12) = 1/12. If I calls 'two', II's average
loss is 3(7/12) − 4(5/12) = 1/12.

Hence, I has a procedure that guarantees him at least 1/12 on the average, and II has
a procedure that keeps her average loss to at most 1/12. 1/12 is called the value of the
game, and the procedure each uses to insure this return is called an optimal strategy or
a minimax strategy.
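For a 2 × 2 game without a saddle point, the equalizing computation above can be packaged into closed form: if A = [[a, b], [c, d]], the row player's equalizing probability is p = (d − c)/(a − b − c + d) and the value is v = (ad − bc)/(a − b − c + d). A hedged Python sketch (the function name is ours) reproduces the Odd-or-Even numbers:

```python
from fractions import Fraction

def solve_2x2(A):
    """Equalizing (minimax) strategies for a 2x2 zero-sum game with no
    saddle point: Player I plays row 0 with probability p, Player II
    plays column 0 with probability q, and v is the value."""
    (a, b), (c, d) = A
    den = Fraction(a - b - c + d)
    p = Fraction(d - c) / den       # equalizes I's payoff over II's columns
    q = Fraction(d - b) / den       # equalizes II's loss over I's rows
    v = Fraction(a * d - b * c) / den
    return p, q, v

# Odd-or-Even payoffs to Player I (Odd)
p, q, v = solve_2x2([[-2, 3], [3, -4]])
print(p, q, v)   # 7/12 7/12 1/12
```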


If instead of playing the game, the players agree to call in an arbitrator to settle this
conflict, it seems reasonable that the arbitrator should require II to pay 8 1/3 cents to I. For
I could argue that he should receive at least 8 1/3 cents, since his optimal strategy guarantees
him that much on the average no matter what II does. On the other hand, II could argue
that she should not have to pay more than 8 1/3 cents, since she has a strategy that keeps
her average loss to at most that amount no matter what I does.

9.3 Pure and Mixed strategy

9.3.1 Pure Strategies


We start with a constant-sum game: for every possible outcome of the game, the utility of
Player 1 plus the utility of Player 2 adds to a constant. For example, if two firms are
competing for market shares s_1 and s_2, then s_1 + s_2 = 100%.

Activity 9.1

(Battle of the Networks)

Two television networks are battling for viewer shares. Viewer share is important because,
the higher it is, the more money the network can make from selling advertising time during that
program. Consider the following situation: the networks make their programming decisions
independently and simultaneously. Each network can show either sports or a sitcom. Network 1
has a programming advantage in sitcoms and Network 2 has one in sports: If both networks show
sitcoms, then Network 1 gets a 56% viewer share. If both networks show sports, then Network 2
gets a 54% viewer share. If Network 1 shows a sitcom and Network 2 shows sports, then
Network 1 gets a 51% viewer share and Network 2 gets 49%. Finally, if Network 1 shows sports
and Network 2 shows a sitcom, then each gets a 50% viewer share.
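Because the two shares always add to 100%, the activity can be analysed entirely through Network 1's share matrix. The sketch below (our own illustrative encoding of the numbers above) computes each side's security-level choice:

```python
# Network 1's viewer share for each pure-strategy pair;
# rows: Network 1 plays {sitcom, sports}; columns: Network 2 likewise.
# The game is constant-sum, so Network 2's share is 100 minus the entry.
A = [[56, 51],
     [50, 46]]

# Network 1 maximizes its row minimum; Network 2, who wants Network 1's
# share small, minimizes the column maximum.
best_row = max(range(2), key=lambda i: min(A[i]))
best_col = min(range(2), key=lambda j: max(A[i][j] for i in range(2)))
print(best_row, best_col, A[best_row][best_col])   # 0 1 51
```

Both security levels meet at 51, so (sitcom, sports) is a saddle point: Network 1 shows its sitcom, Network 2 shows sports, and the shares are 51% to 49%.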

9.3.2 Mixed strategy


A mixed strategy is an active randomization, with given probabilities that determines the player’s
decision. As a special case, a mixed strategy can be the deterministic choice of one of the given
pure strategies. Mixed strategies are a natural device for constant-sum games with imperfect
information. Leaving one’s own actions open reduces one’s vulnerability against malicious
responses. In the poker game of Figure 10, it is too costly to bluff all the time, and better to
randomize instead. The use of active randomization will be familiar to anyone who has
played rock-paper-scissors.

The basic premise of utility theory is that one should evaluate a payoff by
its utility to the player rather than on its numerical monetary value. Generally a player’s
utility of money will not be linear in the amount. The main theorem of utility theory
states that under certain reasonable assumptions, a player’s preferences among outcomes
are consistent with the existence of a utility function and the player judges an outcome
only on the basis of the average utility of the outcome.


However, utilizing utility theory to justify the above assumption raises a new difficulty.
Namely, the two players may have different utility functions. The same outcome may be
perceived in quite different ways. This means that the game is no longer zero-sum. We
need an assumption that says the utility functions of two players are the same (up to
change of location and scale). This is a rather strong assumption, but for moderate to
small monetary amounts, we believe it is a reasonable one.

A mixed strategy may be implemented with the aid of a suitable outside random
mechanism, such as tossing a coin, rolling dice, drawing a number out of a hat and so
on. The seconds indicator of a watch provides a simple personal method of randomization
provided it is not used too frequently. For example, Player I of Odd-or-Even wants an
outside random event with probability 7/12 to implement his optimal strategy. Since
7/12 = 35/60, he could take a quick glance at his watch; if the seconds indicator showed
a number between 0 and 35, he would call ‘one’, while if it were between 35 and 60, he
would call ‘two’.

Example1. (Market Niche)

Two firms are competing for a single market niche. If one firm occupies the market niche, it
gets a return of 100. If both firms occupy the market niche, each loses 50. If a firm stays out of
the market, it breaks even. The payoff table is:

This game has two pure strategy equilibria, namely one of the two firms enters the market niche
and the other stays out. But, unlike the games we have encountered thus far, neither player has a
dominant strategy. When a player has no dominant strategy, she should consider playing a mixed
strategy. In a mixed strategy, each of the various pure strategies is played with some probability,
say p1 for Strategy 1, p2 for Strategy 2, etc., with p1 + p2 + ··· = 1. What would be the best mixed
strategies for Firms A and B? Denote by p1 the probability that Firm A enters the market niche.


Therefore p2 = 1 − p1 is the probability that Firm A stays out. Similarly, Firm B enters the niche
with probability q1 and stays out with probability q2 = 1 − q1. The key insight to a mixed strategy
equilibrium is the following. Every pure strategy that is played as part of a mixed strategy
equilibrium has the same expected value. If one pure strategy is expected to pay less than
another, then it should not be played at all. The pure strategies that are not excluded should be
expected to pay the same. We now apply this principle. The expected value of the "Enter"
strategy for Firm A, when Firm B plays its mixed strategy, is

EV(Enter) = −50 q1 + 100 q2.

The expected value of the "Stay out" strategy for Firm A is EV(Stay out) = 0. Setting EV(Enter) =
EV(Stay out) we get

−50 q1 + 100 q2 = 0.

Using q1 + q2 = 1, we obtain

q1 = 2/3, q2 = 1/3.

Similarly,

p1 = 2/3, p2 = 1/3.

As you can see, the payoffs of this mixed strategy equilibrium, namely (0, 0), are inefficient.
One of these firms could make a lot of money by entering the market niche, if it was sure that the
other would not enter the same niche. This assurance is precisely what is missing. Each firm has
exactly the same right to enter the market niche. The only way for both firms to exercise this
right is to play the inefficient, but symmetrical, mixed strategy equilibrium. In many industrial
markets, there is only room for a few firms - a situation known as natural oligopoly. Chance
plays a major role in the identity of the firms that ultimately enter such markets. If too many
firms enter, there are losses all around and eventually some firms must exit. From the mixed
strategy equilibrium, we can actually predict how often two firms enter a market niche when
there is only room for one: with the above data, the probability of entry by either firm is 2/3, so
the probability that both firms enter is (2/3)² = 4/9. That is a little over 44% of the time! This is
the source of the inefficiency. The efficient solution has a total payoff of 100, but is not
symmetrical. The fair solution pays each player the same but is inefficient. These two principles,
efficiency and fairness, cannot be reconciled in a game like Market Niche. Once firms recognize
this, they can try to find mechanisms to reach the efficient solution. For example, they may
consider side payments. Or firms might simply attempt to scare off competitors by announcing
their intention of moving into the market niche before they actually do so.
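The indifference computation above fits in a few lines of exact arithmetic; this sketch simply re-derives q1 = 2/3 and the equilibrium payoffs from the same data:

```python
from fractions import Fraction

# Payoffs to Firm A: +100 if A enters alone, -50 if both enter,
# 0 if A stays out.  Firm B enters with probability q1, and A must be
# indifferent:  -50*q1 + 100*(1 - q1) = 0  =>  q1 = 100/150 = 2/3.
q1 = Fraction(100, 150)
ev_enter = -50 * q1 + 100 * (1 - q1)
print(q1, ev_enter)     # 2/3 0
# probability that both firms enter at the symmetric equilibrium
print(q1 * q1)          # 4/9 (a little over 44% of the time)
```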


9.4 Solving pure strategy games

9.4.1 Reduction by dominance


Matrix Games - Domination

A finite two-person zero-sum game in strategic form, (X, Y, A), is sometimes called
a matrix game because the payoff function A can be represented by a matrix. If X =
{x_1, ..., x_m} and Y = {y_1, ..., y_n}, then by the game matrix or payoff matrix we mean
the m × n matrix A = (a_ij) with a_ij = A(x_i, y_j).

In this form, Player I chooses a row, Player II chooses a column, and II pays I the entry
in the chosen row and column. Note that the entries of the matrix are the winnings of the
row chooser and losses of the column chooser.

A mixed strategy for Player I may be represented by an m-tuple, p = (p_1, ..., p_m), of
probabilities that add to 1. If I uses the mixed strategy p = (p_1, ..., p_m) and II chooses
column j, then the (average) payoff to I is Σ_i p_i a_ij. Similarly, a mixed strategy for Player II
is an n-tuple q = (q_1, ..., q_n). If II uses q and I uses row i, the payoff to I is Σ_j a_ij q_j.
More generally, if I uses the mixed strategy p and II uses the mixed strategy q, the (average)
payoff to I is pᵀAq = Σ_i Σ_j p_i a_ij q_j.

Note that the pure strategy for Player I of choosing row i may be represented as the
mixed strategy e_i, the unit vector with a 1 in the i-th position and 0's elsewhere. Similarly,
the pure strategy for II of choosing the j-th column may be represented by e_j. In the
following, we shall be attempting to 'solve' games. This means finding the value, and at
least one optimal strategy for each player. Occasionally, we shall be interested in finding
all optimal strategies for a player.
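The double sum pᵀAq is worth seeing once in executable form. The sketch below (the helper name is ours, for illustration) evaluates Player I's optimal Odd-or-Even mix against each of II's pure strategies and recovers the value 1/12 in both cases:

```python
def expected_payoff(p, A, q):
    """Average payoff to Player I when I mixes rows with p and II
    mixes columns with q:  sum_i sum_j p_i * a_ij * q_j."""
    return sum(p[i] * A[i][j] * q[j]
               for i in range(len(p)) for j in range(len(q)))

A = [[-2, 3], [3, -4]]       # Odd-or-Even
p = [7/12, 5/12]             # Player I's optimal mix
# against either pure strategy of II (a unit vector), I averages 1/12
print(expected_payoff(p, A, [1, 0]))   # approximately 1/12
print(expected_payoff(p, A, [0, 1]))
```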

9.4.1.1 Removing Dominated Strategies


Sometimes, large matrix games may be reduced in size (hopefully to the 2×2 case) by deleting
rows and columns that are obviously bad for the player who uses them.


Definition: We say the i-th row of a matrix A = (a_ij) dominates the k-th row if
a_ij ≥ a_kj for all j. We say the i-th row of A strictly dominates the k-th row if a_ij > a_kj
for all j. Similarly, the j-th column of A dominates (strictly dominates) the k-th column if
a_ij ≤ a_ik (resp. a_ij < a_ik) for all i.

Anything Player I can achieve using a dominated row can be achieved at least as well
using the row that dominates it. Hence dominated rows may be deleted from the matrix.
A similar argument shows that dominated columns may be removed. To be more precise,
removal of a dominated row or column does not change the value of a game. However, there
may exist an optimal strategy that uses a dominated row or column. If so,
removal of that row or column will also remove the use of that optimal strategy (although
there will still be at least one optimal strategy left). However, in the case of removal of a
strictly dominated row or column, the set of optimal strategies does not change.

We may iterate this procedure and successively remove several rows and columns. As
an example, consider the matrix, A.

A row (column) may also be removed if it is dominated by a probability combination
of other rows (columns). If for some 0 < p < 1, p a_{i1 j} + (1 − p) a_{i2 j} ≥ a_{kj} for all j, then the
k-th row is dominated by the mixed strategy that chooses row i1 with probability p and row i2 with
probability 1 − p. Player I can do at least as well using this mixed strategy instead of choosing
row k. (In addition, any mixed strategy choosing row k with probability p_k may be replaced
by the one in which k's probability is split between i1 and i2. That is, i1's probability is
increased by p·p_k and i2's probability is increased by (1 − p)·p_k.) A similar argument may
be used for columns.


The middle column is dominated by the outside columns taken with probability 1/2
each. With the central column deleted, the middle row is dominated by the combination
of the top row with probability 1/3 and the bottom row with probability 2/3, and the matrix
reduces further.

Of course, mixtures of more than two rows (columns) may be used to dominate and
remove other rows (columns). For example, a suitable mixture of columns one, two and three
may dominate the last column, and so the last column may be removed. Not all games may be
reduced by dominance. In fact, even if the matrix has a saddle point, there may not be any
dominated rows or columns. The 3 × 3 game with a saddle point found in Example 1
demonstrates this.
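Pure-strategy domination (though not the mixture test just described) is simple enough to automate. The following illustrative sketch repeatedly deletes dominated rows and columns and returns the surviving indices; on the small hypothetical matrix shown it reduces the game to a single entry:

```python
def reduce_by_dominance(A):
    """Repeatedly delete rows dominated by another row (entrywise <=)
    and columns dominated by another column (entrywise >=); mixtures
    of rows/columns are not checked.  Returns surviving indices."""
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        for k in rows[:]:
            if any(i != k and all(A[i][j] >= A[k][j] for j in cols)
                   for i in rows):
                rows.remove(k)
                changed = True
        for k in cols[:]:
            if any(j != k and all(A[i][j] <= A[i][k] for i in rows)
                   for j in cols):
                cols.remove(k)
                changed = True
    return rows, cols

# row 1 strictly dominates row 0; then column 1 dominates column 0
A = [[1, 0],
     [2, 1]]
print(reduce_by_dominance(A))   # ([1], [1]): only the entry 1 survives
```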

9.4.1.2 Saddle points


Occasionally it is easy to solve the game. If some entry a_ij of the matrix A has the property that

(1) a_ij is the minimum of the i-th row, and

(2) a_ij is the maximum of the j-th column,

then we say a_ij is a saddle point. If a_ij is a saddle point, then Player I can win at
least a_ij by choosing row i, and Player II can keep her loss to at most a_ij by choosing
column j. Hence a_ij is the value of the game.
Example 1

The central entry, 2, is a saddle point, since it is a minimum of its row and maximum
of its column. Thus it is optimal for I to choose the second row, and for II to choose the
second column. The value of the game is 2, and (0,1,0) is an optimal mixed strategy for
both players. For large m × n matrices it is tedious to check each entry of the matrix to see if


it has the saddle point property. It is easier to compute the minimum of each row and the
maximum of each column to see if there is a match. Here is an example of the method.

In matrix A, no row minimum is equal to any column maximum, so there is no saddle
point. However, if the 2 in position (4, 2) were changed to a 1, then we have matrix B. Here,
the minimum of the fourth row is equal to the maximum of the second column, so b_42 is a
saddle point.
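The row-min / column-max bookkeeping is also easy to mechanize. The sketch below uses a hypothetical 3 × 3 matrix (not the module's own example) whose central entry 2 is a saddle point, in the spirit of Example 1:

```python
def saddle_points(A):
    """(i, j) is a saddle point when a_ij is both the minimum of
    row i and the maximum of column j (0-based indices)."""
    m, n = len(A), len(A[0])
    row_min = [min(row) for row in A]
    col_max = [max(A[i][j] for i in range(m)) for j in range(n)]
    return [(i, j) for i in range(m) for j in range(n)
            if A[i][j] == row_min[i] == col_max[j]]

A = [[4, 1, 6],
     [5, 2, 3],
     [6, 0, 7]]
print(saddle_points(A))   # [(1, 1)]: the central 2 is the value
```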

9.4.2 The minimax (or maximin) criterion


A two-person zero-sum game (X, Y, A) is said to be a finite game if both strategy sets X and Y are
finite sets. The fundamental theorem of game theory due to von Neumann states that the situation
encountered in the game of Odd-or-Even holds for all finite two-person zero-sum games.
Specifically

The Minimax Theorem. For every finite two-person zero-sum game,

(1) there is a number v , called the value of the game,


(2) there is a mixed strategy for Player I such that I’s average gain is at least v no
matter what II does, and
(3) there is a mixed strategy for Player II such that II’s average loss is at most v no
matter what I does.

This is one form of the minimax theorem, to be stated more precisely and discussed in
greater depth later. If v is zero we say the game is fair. If v is positive, we say the game
favors Player I, while if v is negative, we say the game favors Player II.

Exercises 9.1

1. Consider the game of Odd-or-Even with the sole change that the loser pays the
winner the product, rather than the sum, of the numbers chosen (who wins still depends
on the sum). Find the table for the payoff function A, and analyze the game to find the
value and optimal strategies of the players. Is the game fair?

2. Player I holds a black Ace and a red 8. Player II holds a red 2 and a black 7. The
players simultaneously choose a card to play. If the chosen cards are of the same color,


Player I wins. Player II wins if the cards are of different colors. The amount won is a
number of dollars equal to the number on the winner's card (Ace counts as 1). Set up the
payoff function, find the value of the game, and find the optimal mixed strategies of the players.

9.5 Some basic probabilistic considerations


Consider the situation faced by a large software company after a small startup has announced
deployment of a key new technology. The large company has a large research and development
operation, and it is generally known that they have researchers working on a wide variety of
innovations. However, only the large company knows for sure whether or not they have made
any progress on a product similar to the startup’s new technology. The startup believes that there
is a 50 percent chance that the large company has developed the basis for a strong competing
product. For brevity, when the large company has the ability to produce a strong competing
product, the company will be referred to as having a “strong” position, as opposed to a “weak”
one. The large company, after the announcement, has two choices. It can counter by announcing
that it too will release a competing product. Alternatively, it can choose to cede the market for
this product. The large company will certainly condition its choice upon its private knowledge,
and may choose to act differently when it has a strong position than when it has a weak one. If
the large company has announced a product, the startup is faced with a choice: it can either
negotiate a buyout and sell itself to the large company, or it can remain independent and launch
its product. The startup does not have access to the large firm’s private information on the status
of its research. However, it does observe whether or not the large company announces its own
product, and may attempt to infer from that choice the likelihood that the large company has
made progress of their own. When the large company does not have a strong product, the startup
would prefer to stay in the market over selling out. When the large company does have a strong
product, the opposite is true, and the startup is better off by selling out instead of staying in.

Figure 10 shows an extensive game that models this situation. From the perspective of the
startup, whether or not the large company has done research in this area is random. To capture
random events such as this formally in game trees, chance moves are introduced. At a node
labelled as a chance move, the next branch of the tree is taken randomly and non-strategically by
chance, or “nature”, according to probabilities which are included in the specification of the
game. The game in Figure 10 starts with a chance move at the root. With equal probability 0.5,
the chance move decides if the large software company, player I, is in a strong position (upward
move) or weak position (downward move). When the company is in a weak position, it can
choose to Cede the market to the startup, with payoffs (0,16) to the two players (with payoffs
given in millions of dollars of profit). It can also announce a competing product, in the hope that
the startup company, player II, will sell out, with payoffs 12 and 4 to players I and II. However,
if player II decides instead to stay in, it will even profit from the increased publicity and gain a
payoff of 20, with a loss of −4 to the large firm.


Figure 10: The chance move decides if player I is strong (top node) and does have a competing
product, or weak (bottom node) and does not. The ovals indicate information sets. Player II sees
only that player I chose to announce a competing product, but does not know if player I is strong
or weak.

9.6 Solving Games with the Simplex Method


Recall that to solve a game means to find for each player a mixed strategy that minimizes
the potential loss against the opponent’s best counter-strategy. If a game is strictly
determined, the optimal mixed strategies are the pure strategies determined by selecting a
saddle point. We have also seen in the preceding section how to solve arbitrary 2×2 games.
But not all games are 2×2; solving a larger game turns out to be a linear
programming problem.

Here is a reassuring fact:

No matter what zero-sum game is being played, there is at least one optimal mixed strategy
for each player. In other words, every two-person zero-sum game can be solved.

In a strictly determined game, the use of an optimal pure strategy will minimize a player's
potential losses. In a game that is not strictly determined, a player can do better by using a mixed
strategy. This is illustrated by the following example.

Example 1 Optimal Mixed Strategy

Consider once again the game “paper, scissors, rock.” The payoff matrix is

         p    s    r
    p (  0   -1    1 )
    s (  1    0   -1 )
    r ( -1    1    0 )

(The moves are p, s, r, in that order.) If you are the row player and use any pure strategy
(it doesn't matter which, because of the symmetry of the game), then your potential loss is
one point on each round of the game (since the smallest entry in each row is -1). On the other
hand, if you use the mixed strategy S = [1/3 1/3 1/3] by playing p, s and r equally often, then no
matter what strategy the column player uses, the expected value of the game is 0.

Since a draw is better than a loss, it follows that using the mixed strategy [1/3 1/3 1/3] is more
advantageous than using any pure strategy.

We claim that the mixed strategy S = [1/3 1/3 1/3] is the optimal mixed strategy.

Why is this so?

To see why this strategy is better than any other mixed strategy, suppose you tried a
different one. In any mixed strategy other than [1/3 1/3 1/3], one move will be played more
often than some other; suppose, for instance, that p is played more often than any other move.
Here is a counter-strategy that the column player can use against such an S: play the
pure strategy s all the time. Since scissors beat paper, the column player will tend to win
more often than lose; in fact, the expected value of the game is then negative.


So, the worst that can happen to you playing [1/3 1/3 1/3] is better than the worst that can
happen if you play any mixed strategy with unequal proportions, so [1/3 1/3 1/3] is your
optimal mixed strategy.

Not all games can be analyzed by such “common sense” methods, so we now describe a
method of solving a game using the simplex method. Using the simplex method makes some
sense, since we are looking to maximize a quantity (the worst expected payoff) subject to
certain constraints, such as: the entries in the desired mixed strategy cannot exceed 1 and the
entries in the opponent's mixed strategy cannot exceed 1. At the end of this section we shall
explain in more detail why the following works.

Example 2 Solving a Game Using the Simplex Method

Solve the game with payoff matrix

Solution
Step 1 (Optional, but highly recommended): Reduce the payoff matrix by dominance.
Looking at the matrix, we notice that rows 3 and 4 are dominated by row 1, so we
eliminate them, obtaining


Next, we can eliminate column 4, since it is dominated by column 1.

This is as far as reduction by dominance can take us.

Step 2 Convert to a payoff matrix with no negative entries by adding a suitable fixed
number to all the entries.

If we add 2 to all the entries, we will eliminate all negative payoffs. Notice that this won't
affect the analysis of the game in any way; the only thing that is affected is the expected
value of the game, which will be increased by 2. Adding 2 to all the payoffs gives the new
matrix,

Step 3 Solve the associated standard linear programming problem:

The number of variables and the coefficients of the constraints depend on the payoff
matrix; there is one variable for each column. We always take the objective function to be the
sum of the variables, and the right hand sides of the constraints are always 1.
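In symbols: writing ã_ij for the entries of the shifted (non-negative) m × n payoff matrix, the LP the text describes is (a sketch of the setup, with x_j the variable attached to column j):

```latex
\begin{aligned}
\text{maximize}\quad & p = x_1 + x_2 + \cdots + x_n \\
\text{subject to}\quad & \tilde{a}_{i1} x_1 + \tilde{a}_{i2} x_2 + \cdots + \tilde{a}_{in} x_n \le 1,
  \qquad i = 1, \dots, m, \\
& x_j \ge 0, \qquad j = 1, \dots, n.
\end{aligned}
```

At the optimum, 1/p is the value of the shifted game and x/p is the column player's optimal strategy, which is why Step 4 normalizes by p.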
We now use the simplex method. The first tableau is the following.


Notice that the payoff matrix appears in the top left part of the tableau. We now proceed to the
solution as usual:


Here is how we read off the optimal strategies.

Step 4 Calculate the optimal strategies

Column Strategy
1. Express the solution to the linear programming problem as a column vector.

2. Normalize by dividing each entry in the solution vector by the value of p (which is also the
sum of the values of the variables).

Here, p = 2/3, so we get

The entries of the column vector will now add up to 1.

3. Insert zeros corresponding to the columns deleted when we reduced by dominance:

Recalling that we deleted column 4, we insert a zero in the fourth position of the
normalized vector, obtaining the column player's optimal strategy.

Row strategy
1. Read off the entries under the slack variables in the bottom row of the final tableau.

[2 2]

2. Normalize by dividing each entry in the vector by the sum of the entries:
The sum of the entries is 4, so we get

[1/2 1/2]


3. Insert zeros corresponding to the rows deleted when we reduced by dominance:


Recalling that we deleted rows 3 and 4, we insert zeros in the third and fourth positions,
getting the row player's optimal strategy: [1/2 1/2 0 0].

Value of the Game: This is given by the formula

e = 1/p - k,

where k is the number we originally added to the entries in the payoff matrix to make them
non-negative.

Here, p = 2/3, so 1/p = 3/2, and k = 2, hence the value of the game is

e = 3/2 - 2 = -1/2.

Here is a summary of the procedure

Solving a Matrix Game

First: Check for saddle points. If there is one, you can solve the game by selecting each
player's optimal pure strategy. Otherwise, continue with the following steps.
Step 1 Reduce the payoff matrix by dominance.
Step 2 Add a fixed number k to each of the entries so that they all become non-negative.
Step 3 Set up and solve the associated linear programming problem using the simplex method.
Step 4 Find the optimal strategies and the expected value as follows.

Column strategy:
1. Express the solution to the linear programming problem as a column vector.
2. Normalize by dividing each entry of the solution vector by p (which is also the sum of the
values of the variables).
3. Insert zeros in positions corresponding to the columns deleted during reduction.
Row strategy:
1. List the entries under the slack variables in the bottom row of the final tableau in vector
form.
2. Normalize by dividing each entry of the solution vector by the sum of the entries.
3. Insert zeros in positions corresponding to the rows deleted during reduction.

Value of the Game: e = 1/p - k

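The whole procedure (shift by k, solve the LP with the simplex method, normalize both strategies by p, shift the value back) can be sketched in Python. This is a minimal tableau simplex written for illustration under the assumptions above, not a production solver, and the function name is invented:

```python
def solve_zero_sum_game(A, eps=1e-9):
    """Solve the matrix game A (payoffs to the row player): shift the
    matrix positive, maximize p = sum(x) s.t. A'x <= 1, x >= 0, then
    read both optimal strategies off the final simplex tableau."""
    m, n = len(A), len(A[0])
    k = 1 - min(min(row) for row in A)            # Step 2: all entries >= 1
    # Tableau rows: [shifted A | identity for the slacks | rhs = 1].
    T = [[A[i][j] + k for j in range(n)]
         + [1.0 if r == i else 0.0 for r in range(m)] + [1.0]
         for i in range(m)]
    z = [-1.0] * n + [0.0] * (m + 1)              # objective row of z - sum(x)
    basis = list(range(n, n + m))                 # the slacks start basic
    while True:                                   # Step 3: simplex iterations
        col = next((j for j in range(n + m) if z[j] < -eps), None)
        if col is None:
            break                                 # no negative reduced cost
        row = min((T[i][-1] / T[i][col], i)
                  for i in range(m) if T[i][col] > eps)[1]
        piv = T[row][col]
        T[row] = [u / piv for u in T[row]]
        for i in range(m):
            if i != row:
                f = T[i][col]
                T[i] = [u - f * w for u, w in zip(T[i], T[row])]
        f = z[col]
        z = [u - f * w for u, w in zip(z, T[row])]
        basis[row] = col
    p = z[-1]                                     # optimal value of the LP
    x = [0.0] * n
    for i, b in enumerate(basis):
        if b < n:
            x[b] = T[i][-1]
    col_strategy = [u / p for u in x]                  # Step 4: normalize
    row_strategy = [z[n + i] / p for i in range(m)]    # entries under slacks
    return 1 / p - k, row_strategy, col_strategy       # value e = 1/p - k

# Paper-scissors-rock: value 0, (1/3, 1/3, 1/3) optimal for both players.
value, rows, cols = solve_zero_sum_game([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
```

On paper-scissors-rock this returns (up to rounding) value 0 and the uniform strategy for both players, agreeing with Example 1. The sketch skips Step 1 (reduction by dominance) and assumes the plain minimum-ratio pivot rule terminates, which it does on small textbook examples.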
Solution of All 2 by 2 Matrix Games.

Consider the general 2 × 2 game matrix

        ( a   b )
        ( d   c )

To solve this game (i.e. to find the value and at least one optimal strategy for each player)
we proceed as follows.
1. Test for a saddle point.
2. If there is no saddle point, solve by finding equalizing strategies.
We now show that the method of finding equalizing strategies described above works whenever
there is no saddle point, by deriving the value and the optimal strategies.
Assume there is no saddle point. If a ≥ b, then b < c , as otherwise b is a saddle point.
Since b < c, we must have c > d , as otherwise c is a saddle point. Continuing thus, we see
that d < a and a > b. In other words, if a ≥ b , then a > b < c > d < a. By symmetry, if
a ≤ b , then a < b > c < d > a. This shows that
If there is no saddle point, then either a > b, b < c, c > d and d < a, or a < b, b > c, c < d and
d > a.
In equations (1), (2) and (3) below, we develop formulas for the optimal strategies
and value of the general 2 × 2 game. If I chooses the first row with probability p (i.e. uses
the mixed strategy (p,1 - p)), we equate his average return when II uses columns 1 and 2.

ap + d(1 - p) = bp + c(1 - p).


Solving for p, we find

p = (c - d) / ((a - b) + (c - d)).

Since there is no saddle point, (a - b) and (c - d) are either both positive or both negative;
hence, 0 < p < 1. Player I's average return using this strategy is

v = ap + d(1 - p) = (ac - bd) / ((a - b) + (c - d)).

If II chooses the first column with probability q (i.e. uses the strategy (q,1-q)), we equate
his average losses when I uses rows 1 and 2.

aq + b(1 - q) = dq + c(1 - q)


Hence,

q = (c - b) / ((a - b) + (c - d)).

Again, since there is no saddle point, 0 < q < 1. Player II's average loss using this strategy
is

aq + b(1 - q) = (ac - bd) / ((a - b) + (c - d)) = v,
the same value achievable by I. This shows that the game has a value and that the players
have optimal strategies (something the minimax theorem says holds for all finite games).
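Packaged as code (a sketch; the arguments follow the derivation above, with first row (a, b) and second row (d, c), and the function assumes the no-saddle-point case):

```python
from fractions import Fraction

def solve_2x2(a, b, d, c):
    """Equalizing solution of the 2x2 game with rows (a, b) and (d, c),
    assuming the game has no saddle point."""
    a, b, c, d = map(Fraction, (a, b, c, d))
    denom = (a - b) + (c - d)
    p = (c - d) / denom          # Player I plays row 1 with probability p
    q = (c - b) / denom          # Player II plays column 1 with probability q
    v = (a * c - b * d) / denom  # value of the game
    return p, q, v

print(solve_2x2(3, 1, 1, 6))  # (Fraction(5, 7), Fraction(5, 7), Fraction(17, 7))
```

The test case reappears in the 2 × n example of the next section, where the value is 17/7 and both players mix (5/7, 2/7).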
Example 2

Example 3.

But q must be between zero and one. What happened? The trouble is we “forgot to test
this matrix for a saddle point, so of course it has one” (J. D. Williams, The Compleat
Strategyst, Revised Edition, McGraw-Hill, 1966, p. 56). The lower left corner is a saddle
point, so p = 0 and q = 1 are optimal strategies, and the value is v = 1.

9.7 Solving 2 × n and m × 2 games


Games with matrices of size 2 × n or m × 2 may be solved with the aid of a graphical
interpretation. Take the following example.

        ( 2   3   1   5 )
        ( 4   1   6   0 )

Suppose Player I chooses the first row with probability p and the second row with probability
1 - p. If II chooses Column 1, I's average payoff is 2p + 4(1 - p). Similarly, choices of
Columns 2, 3 and 4 result in average payoffs of 3p + (1 - p), p + 6(1 - p), and 5p, respectively.
We graph these four linear functions of p for 0 ≤ p ≤ 1. For a fixed value of p, Player I can
be sure that his average winnings is at least the minimum of these four functions evaluated
at p. This is known as the lower envelope of these functions. Since I wants to maximize
his guaranteed average winnings, he wants to find p that achieves the maximum of this
lower envelope. According to the drawing, this should occur at the intersection of the lines
for Columns 2 and 3. This, essentially, involves solving the game in which II is restricted
to Columns 2 and 3, that is, the 2 × 2 game with matrix

        ( 3   1 )
        ( 1   6 )

The value of this game is v = 17/7, I's optimal strategy is (5/7, 2/7), and II's optimal
strategy is (5/7, 2/7). Subject to the accuracy of the drawing, we conclude therefore that
in the original game I's optimal strategy is (5/7, 2/7), II's is (0, 5/7, 2/7, 0) and the
value is 17/7.


The accuracy of the drawing may be checked: Given any guess at a solution to a
game, there is a sure-fire test to see if the guess is correct, as follows. If I uses the strategy
(5/7,2/7), his average payoff if II uses Columns 1, 2, 3 and 4, is 18/7, 17/7, 17/7, and 25/7
respectively. Thus his average payoff is at least 17/7 no matter what II does.

Similarly, if II uses (0,5/7,2/7,0), her average loss is (at most) 17/7. Thus, 17/7 is the value, and
these strategies are optimal.
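The sure-fire test is easy to automate. The sketch below rebuilds the 2 × 4 matrix from the average payoffs quoted in the text and checks both candidate strategies against the claimed value (exact arithmetic via fractions, to avoid rounding doubts):

```python
from fractions import Fraction as F

A = [[2, 3, 1, 5],            # row 1 of the 2 x 4 payoff matrix
     [4, 1, 6, 0]]            # row 2
p = [F(5, 7), F(2, 7)]        # candidate optimal strategy for Player I
q = [F(0), F(5, 7), F(2, 7), F(0)]  # candidate optimal strategy for Player II
v = F(17, 7)                  # claimed value

# I's average payoff against each column must be at least v ...
per_column = [sum(p[i] * A[i][j] for i in range(2)) for j in range(4)]
# ... and II's average loss against each row must be at most v.
per_row = [sum(q[j] * A[i][j] for j in range(4)) for i in range(2)]

print([str(x) for x in per_column])  # ['18/7', '17/7', '17/7', '25/7']
print(all(x >= v for x in per_column) and all(x <= v for x in per_row))  # True
```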

We note that the line for Column 1 plays no role in the lower envelope (that is, the
lower envelope would be unchanged if the line for Column 1 were removed from the graph).
This is a test for domination. Column 1 is, in fact, dominated by Columns 2 and 3 taken
with probability 1/2 each. The line for Column 4 does appear in the lower envelope, and
hence Column 4 cannot be dominated.

As an example of an m × 2 game, consider the matrix associated with Figure 2.2. If


q is the probability that II chooses Column 1, then II’s average loss for I’s three possible
choices of rows is given in the accompanying graph. Here, Player II looks at the largest
of her average losses for a given q. This is the upper envelope of these functions. II wants
to find the q that minimizes this upper envelope. From the graph, we see that any value of q
between 1/4 and 1/3 inclusive achieves this minimum. The value of the game is 4, and I
has an optimal pure strategy: row 2.


These techniques work just as well for 2 × ∞ and ∞ × 2 games.

Latin Square Games

A Latin square is an n × n array of n different letters such that each letter occurs once and only
once in each row and each column. The 5 × 5 array at the right is an example. If in a Latin
square each letter is assigned a numerical value, the resulting matrix is the matrix of a Latin
square game. Such games have simple solutions. The value is the average of the numbers in a
row, and the strategy that chooses each pure strategy with equal probability 1/n is optimal for
both players. The reason is not very deep. The conditions for optimality are satisfied.

In the example above, the value is v = (1 + 2 + 3 + 3 + 6)/5 = 3, and the mixed strategy
p = q = (1/5, 1/5, 1/5, 1/5, 1/5) is optimal for both players. The game of matching pennies
is a Latin square game. Its value is zero and (1/2, 1/2) is optimal for both players.
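The optimality argument is easy to verify on any Latin square game. The sketch below uses a hypothetical 3 × 3 cyclic Latin square with letter values a = 1, b = 2, c = 3:

```python
from fractions import Fraction as F

# Cyclic 3x3 Latin square with values a = 1, b = 2, c = 3:
A = [[1, 2, 3],
     [2, 3, 1],
     [3, 1, 2]]
n = len(A)
u = [F(1, n)] * n   # the uniform mixed strategy (1/n, ..., 1/n)

# Against the uniform mix, every pure counter-strategy gives the same
# expectation: the row average (1 + 2 + 3)/3 = 2, the value of the game.
row_payoffs = [sum(u[i] * A[i][j] for i in range(n)) for j in range(n)]
col_losses = [sum(u[j] * A[i][j] for j in range(n)) for i in range(n)]
print(row_payoffs == col_losses == [F(2)] * n)  # True
```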

Exercises 9.2
1. Solve the game with matrix

        ( −1   −3 )
        ( −2    2 )

that is, find the value and an optimal (mixed) strategy for both players.
2. Solve the game with matrix

        ( t   0 )
        ( 2   1 )

for an arbitrary real number t. (Don't forget to check for a saddle point!) Draw the graph
of v(t), the value of the game, as a function of t, for -∞ < t < ∞.
3. Show that if a game with m×n matrix has two saddle points, then they have equal
values.

4. Reduce by dominance to 2 × 2 games and solve.


Summary

Game theory is the formal study of decision-making in which several players must make choices
that potentially affect the interests of the other players.

Definition 1. The strategic form, or normal form, of a two-person zero-sum game is given
by a triplet (X, Y, A), where
(1) X is a nonempty set, the set of strategies of Player I
(2) Y is a nonempty set, the set of strategies of Player II
(3) A is a real-valued function defined on X × Y. (Thus, A(x, y) is a real number for
every x ∈ X and every y ∈ Y.)

The interpretation is as follows. Simultaneously, Player I chooses x ∈ X and Player


II chooses y ∈Y, each unaware of the choice of the other. Then their choices are made
known and I wins the amount A(x, y) from II. Depending on the monetary unit involved,
A(x, y) will be cents, dollars, pesos, beads, etc. If A is negative, I pays the absolute value
of this amount to II. Thus, A(x, y) represents the winnings of I and the losses of II.

The Minimax Theorem

For every finite two-person zero-sum game,

(1) there is a number v , called the value of the game,


(2) there is a mixed strategy for Player I such that I’s average gain is at least v no
matter what II does, and
(3) there is a mixed strategy for Player II such that II’s average loss is at most v no
matter what I does.


This is one form of the minimax theorem, to be stated more precisely and discussed in
greater depth later. If v is zero we say the game is fair. If v is positive, we say the game
favors Player I, while if v is negative, we say the game favors Player II.

Solving 2 × n and m × 2 games

Games with matrices of size 2 × n or m × 2 may be solved with the aid of a graphical
interpretation.

Latin Square Games

A Latin square is an n × n array of n different letters such that each letter occurs once and only
once in each row and each column.

Review exercise

1. Paper, Scissors, Rock

Two players, A and B, have decided to change the rules of the game “Paper, Scissors, Rock” by
using instead the following payoff matrix:

Hint: An example of a non-zero-sum game would be one in which the government taxed the
earnings of the winner. In that case the winner's gain would be less than the loser's loss.

If player B can't make up her mind whether to use paper or scissors as a pure strategy, what
would you advise?

2. You are the head coach of the Alphas (Team A), and are attempting to come up with a
strategy to deal with your rivals, the Betas (Team B). Team A is on offense, and Team B is
on defense. You have five preferred plays, but are not sure which to select. You know,
however, that Team B usually employs one of three defensive strategies. Over the years,
you have diligently recorded the average yardage gained by your team for each
combination of strategies used, and have come up with the following table.


Which of the five plays should you select?


3. Reduce the payoff matrices in questions 1–6 by dominance.


References
Adler, I. & Berenguer, S. (1981), Random linear programs, Technical Report 81-4, Operations
Research Center, U.C. Berkeley.

Adler, I. & Megiddo, N. (1985), ‘A simplex algorithm whose average number of steps is
bounded between two quadratic functions of the smaller dimension’, Journal of the ACM 32,
871–895.

Adler, I., Karmarkar, N., Resende, M. & Veiga, G. (1989), ‘An implementation of Karmarkar’s
algorithm for linear programming’, Mathematical Programming 44, 297–335.

Ahuja, R., Magnanti, T. & Orlin, J. (1993), Network Flows: Theory, Algorithms, and
Applications, Prentice Hall, Englewood Cliffs, NJ.

Barnes, E. (1986), ‘A variation on Karmarkar’s algorithm for solving linear programming
problems’, Mathematical Programming 36, 174–182.

Bertsimas, D. & Tsitsiklis, J. (1997), Introduction to Linear Optimization, Athena Scientific.

Bunday, B. D. (1984), Basic Linear Programming, Edward Arnold.

Dantzig, G. (1951), Application of the simplex method to a transportation problem, in
T. Koopmans, ed., ‘Activity Analysis of Production and Allocation’, John Wiley and Sons, New
York, pp. 359–373.

Dantzig, G., Orden, A. & Wolfe, P. (1955), ‘The generalized simplex method for minimizing a
linear form under linear inequality constraints’, Pacific Journal of Mathematics 5, 183–195.

den Hertog, D. (1994), Interior Point Approach to Linear, Quadratic, and Convex Programming,
Kluwer Academic Publishers, Dordrecht.

Gale, D., Kuhn, H. & Tucker, A. (1951), Linear programming and the theory of games, in
T. Koopmans, ed., ‘Activity Analysis of Production and Allocation’, John Wiley and Sons, New
York, pp. 317–329.

Kojima, M., Mizuno, S. & Yoshise, A. (1989), A primal-dual interior point algorithm for linear
programming, in N. Megiddo, ed., ‘Progress in Mathematical Programming’, Springer-Verlag,
New York, pp. 29–47.

Taha, H. A. (2002), Operations Research: An Introduction, Macmillan Publishing Company.
