
Linear Optimization:- Chapter Four 2009E.

CHAPTER THREE

THE SIMPLEX METHOD

The general linear programming problem with n decision variables and m constraints can be stated mathematically as follows:

max or min  Z = c_1 x_1 + c_2 x_2 + ⋯ + c_n x_n

subject to

a_11 x_1 + a_12 x_2 + ⋯ + a_1n x_n  (≤, =, ≥)  b_1
a_21 x_1 + a_22 x_2 + ⋯ + a_2n x_n  (≤, =, ≥)  b_2
        ⋮
a_m1 x_1 + a_m2 x_2 + ⋯ + a_mn x_n  (≤, =, ≥)  b_m

x_i ≥ 0  for i = 1, 2, …, n

Canonical form of a linear programming problem

If the general formulation of the linear programming problem is expressed as

                        max        min
Objective function      c^T x      c^T x
Constraints             Ax ≤ b     Ax ≥ b
Decision variables      x ≥ 0      x ≥ 0

then it is called the canonical form of the linear programming problem.

Standard (augmented) form of a linear programming problem

If the general formulation of the linear programming problem is expressed as

                        max        min
Objective function      c^T x      c^T x
Constraints             Ax = b     Ax = b
Decision variables      x ≥ 0      x ≥ 0

then it is called the standard form of the linear programming problem.
We call this the augmented form of the problem because the original form has been augmented
by some supplementary variables.

By:-Mengistu C (MSc in optimization theory) Page 1



I.e. if the general formulation of the linear programming problem is expressed as

max c^T x            min c^T x

subject to           subject to

Ax = b               Ax = b

x ≥ 0                x ≥ 0

then it is called the standard form of the linear programming problem.

HOW TO TRANSFORM ANY LPP INTO STANDARD FORM

Constraints

In the general linear programming problem each constraint may be

A. an equality constraint (= type), or

B. an inequality constraint (≥ or ≤ type).

If all constraints of the LPP are equality constraints, then it is already in standard form. But if
there is an inequality constraint in our linear programming problem, then the problem is not in
standard form, so each inequality constraint must be changed to an equality constraint by adding
or subtracting a non-negative variable, called a slack or surplus variable.

I.e.

• If our LPP has an inequality constraint of the form a_i1 x_1 + a_i2 x_2 + ⋯ + a_in x_n ≤ b_i,
it can be transformed to standard form (an equality constraint) by adding a slack variable s ≥ 0:

a_i1 x_1 + a_i2 x_2 + ⋯ + a_in x_n + s = b_i

• If our LPP has an inequality constraint of the form a_i1 x_1 + a_i2 x_2 + ⋯ + a_in x_n ≥ b_i,
it can be transformed to standard form (an equality constraint) by subtracting a non-negative
surplus variable s ≥ 0 (a "negative slack"):


a_i1 x_1 + a_i2 x_2 + ⋯ + a_in x_n − s = b_i
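As a sketch of the two transformations above, the slack/surplus bookkeeping can be mechanized. The helper name `to_equality` and the list encoding of a constraint are our own illustration, not from the text:

```python
def to_equality(coeffs, sense, rhs):
    """Turn one constraint  coeffs . x (<=, >=, =) rhs  into an equality by
    appending the coefficient of a new non-negative slack (+1) or
    surplus (-1) variable, as described above (illustrative sketch)."""
    if sense == "<=":
        return coeffs + [1.0], rhs    # a.x + s = b  (slack added)
    if sense == ">=":
        return coeffs + [-1.0], rhs   # a.x - s = b  (surplus subtracted)
    return coeffs + [0.0], rhs        # already an equality

row, b = to_equality([1.0, 3.0], "<=", 5.0)   # x1 + 3x2 <= 5  ->  x1 + 3x2 + s = 5
```

In a full conversion each constraint would get its own slack or surplus column; the sketch only shows the per-constraint rule.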

Decision variables

A. Variables with lower bounds

If a variable x_i has a lower bound l_i which is not zero (x_i ≥ l_i), one obtains a non-negative
variable w_i with the substitution

x_i = w_i + l_i

In this case, the bound x_i ≥ l_i is equivalent to the bound w_i ≥ 0.

B. Variables with upper bounds

If a variable x_i has an upper bound u_i which is not zero (x_i ≤ u_i), one obtains a non-negative
variable w_i with the substitution

x_i = u_i − w_i

In this case, the bound x_i ≤ u_i is equivalent to the bound w_i ≥ 0.

C. Variables with interval bounds

An interval bound of the form l_i ≤ x_i ≤ u_i can be transformed into one non-negativity
constraint and one linear inequality constraint in standard form by making the substitution

x_i = w_i + l_i

In this case, the bounds l_i ≤ x_i ≤ u_i are equivalent to the constraints w_i ≥ 0 and w_i ≤ u_i − l_i.

D. Free variables

Sometimes a variable is given without any bounds. Such variables are called free variables. To
obtain standard form, every free variable must be replaced by the difference of two non-negative
variables. That is, if x_i is free, then we set

x_i = u_i − v_i

with u_i ≥ 0 and v_i ≥ 0.
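The substitutions above can be checked numerically. The helper names below are hypothetical (they are not part of the text); each maps a value of x_i to the non-negative variable(s) that replace it:

```python
# Numeric sanity checks for the variable substitutions above (a sketch;
# the function names are our own, not from the text).

def shift_lower(x, l):          # x >= l :  w = x - l, so x = w + l with w >= 0
    return x - l

def flip_upper(x, u):           # x <= u :  w = u - x, so x = u - w with w >= 0
    return u - x

def split_free(x):              # free x :  x = u - v with u >= 0 and v >= 0
    return (x, 0.0) if x >= 0 else (0.0, -x)
```

For example, with the bound x ≥ 2 the point x = 5 corresponds to w = 3 ≥ 0, and the free value x = −4 splits as u = 0, v = 4.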

Example


Transform the following LPPs into canonical and standard form.

(1) max x_1 + 2x_2
    s.t.
    x_1 + 3x_2 ≤ 5
    x_1 + x_2 ≥ 2
    x_1, x_2 ≥ 0

(2) min 3x_1 + 2x_2
    s.t.
    x_1 + 3x_2 ≤ 5
    x_1 + x_2 ≤ 2
    x_1 ≤ 4
    x_2 ≥ 2

(3) max x_1 + 4x_2 + 3x_3   (x_3 free)
    s.t.
    2x_1 + x_2 + 3x_3 ≤ 5
    x_1 + 5x_2 + x_3 ≥ 6
    x_1, x_2 ≥ 0

(4) min x_1 + x_2
    s.t.
    2x_1 + x_2 ≤ 4
    x_1 + 4x_2 ≤ 3
    2 ≤ x_1 ≤ 4
    x_2 ≥ 2

Solution

Canonical form

(1) max x_1 + 2x_2
    s.t.
    x_1 + 3x_2 ≤ 5
    −x_1 − x_2 ≤ −2
    x_1, x_2 ≥ 0

(2) With the substitutions x_1 = 4 − w_1 and x_2 = w_2 + 2:
    min −3w_1 + 2w_2 + 16
    s.t.
    w_1 − 3w_2 ≥ 5
    w_1 − w_2 ≥ 4
    w_1, w_2 ≥ 0

(3) With the free variable split x_3 = w_1 − w_2:
    max x_1 + 4x_2 + 3w_1 − 3w_2
    s.t.
    2x_1 + x_2 + 3w_1 − 3w_2 ≤ 5
    −x_1 − 5x_2 − w_1 + w_2 ≤ −6
    x_1, x_2, w_1, w_2 ≥ 0

(4) With the substitutions x_1 = 4 − w_1 (so 2 ≤ x_1 ≤ 4 becomes w_1 ≤ 2) and x_2 = w_2 + 2:
    min 6 − w_1 + w_2
    s.t.
    2w_1 − w_2 ≥ 6
    w_1 − 4w_2 ≥ 9
    −w_1 ≥ −2
    w_1, w_2 ≥ 0

Standard form

(1) max x_1 + 2x_2
    s.t.
    x_1 + 3x_2 + s_1 = 5
    x_1 + x_2 − s_2 = 2
    x_1, x_2, s_1, s_2 ≥ 0

(2) min −3w_1 + 2w_2 + 16
    s.t.
    w_1 − 3w_2 − s_1 = 5
    w_1 − w_2 − s_2 = 4
    w_1, w_2, s_1, s_2 ≥ 0

(3) max x_1 + 4x_2 + 3w_1 − 3w_2
    s.t.
    2x_1 + x_2 + 3w_1 − 3w_2 + s_1 = 5
    x_1 + 5x_2 + w_1 − w_2 − s_2 = 6
    x_1, x_2, w_1, w_2, s_1, s_2 ≥ 0

(4) min 6 − w_1 + w_2
    s.t.
    2w_1 − w_2 − s_1 = 6
    w_1 − 4w_2 − s_2 = 9
    −w_1 − s_3 = −2
    w_1, w_2, s_1, s_2, s_3 ≥ 0


BASIC FEASIBLE SOLUTIONS

Definition: Consider the system Ax = b and x ≥ 0, where A is an m × n matrix and b is an
m-vector. Suppose that rank(A, b) = rank(A) = m. After possibly rearranging the columns of A, let
A = [B, N], where B is an m × m invertible matrix and N is an m × (n − m) matrix.

The point x = [x_B, x_N]^T, where

x_B = B^{-1} b

x_N = 0

is called a basic solution of the system. If x_B ≥ 0, then x is called a basic feasible solution of the
system. Here B is called the basic matrix (or simply the basis) and N is called the non-basic
matrix. The components of x_B are called basic variables, and the components of x_N are called
non-basic variables. If x_B > 0, then x is called a non-degenerate basic feasible solution, and if at
least one component of x_B is zero, then x is called a degenerate basic feasible solution.

Note: The number of basic feasible solutions is less than or equal to (n choose m) = n!/(m!(n − m)!).

Example

Find the basic solutions of

2x_1 + 4x_2 − 2x_3 = 10

10x_1 + 3x_2 − 7x_3 = 33

Solution

The above system of linear equations can be written in the form A_1 x_1 + A_2 x_2 + A_3 x_3 = b, where

A_1 = [2, 10]^T,  A_2 = [4, 3]^T,  A_3 = [−2, −7]^T  and  b = [10, 33]^T

The coefficient matrix is

A = [A_1, A_2, A_3] = [  2   4   −2 ]
                      [ 10   3   −7 ]
Rank of A is 2.


This implies that the size of the basic matrix is 2 × 2. The candidate basic matrices are

B_1 = [  2   4 ],   B_2 = [  2   −2 ],   B_3 = [ 4   −2 ]
      [ 10   3 ]          [ 10   −7 ]          [ 3   −7 ]

det(B_1) = −34 ≠ 0,  det(B_2) = 6 ≠ 0,  det(B_3) = −22 ≠ 0

All the above square submatrices are non-singular; therefore all of them are basis matrices, and
thus there exist three basic solutions.

For basis B_1:

x_B1 = B_1^{-1} b = [ −3/34    2/17 ] [ 10 ]   [ 3 ]
                    [  5/17   −1/17 ] [ 33 ] = [ 1 ]

Hence X_1 = (x_1, x_2, x_3) = (3, 1, 0); x_1 and x_2 are basic variables and x_3 is a non-basic variable.

For basis B_2:

x_B2 = B_2^{-1} b = [ −7/6    1/3 ] [ 10 ]   [  −2/3 ]
                    [ −5/3    1/3 ] [ 33 ] = [ −17/3 ]

Hence X_2 = (x_1, x_2, x_3) = (−2/3, 0, −17/3); x_1 and x_3 are basic variables and x_2 is a non-basic variable.

For basis B_3:

x_B3 = B_3^{-1} b = [ 7/22   −1/11 ] [ 10 ]   [   2/11 ]
                    [ 3/22   −2/11 ] [ 33 ] = [ −51/11 ]

Hence X_3 = (x_1, x_2, x_3) = (0, 2/11, −51/11); x_2 and x_3 are basic variables and x_1 is a non-basic variable.
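The three basic solutions above can be reproduced by enumerating every choice of two columns of A, exactly as the definition prescribes (a sketch; NumPy is assumed to be available):

```python
from itertools import combinations

import numpy as np

A = np.array([[ 2., 4., -2.],
              [10., 3., -7.]])
b = np.array([10., 33.])

basic_solutions = []
for cols in combinations(range(3), 2):       # every choice of 2 of the 3 columns
    B = A[:, list(cols)]
    if abs(np.linalg.det(B)) < 1e-9:
        continue                             # singular B: not a basic matrix
    x = np.zeros(3)
    x[list(cols)] = np.linalg.solve(B, b)    # x_B = B^{-1} b, x_N = 0
    basic_solutions.append(x)
```

The loop visits the column pairs (1,2), (1,3), (2,3) in order, recovering X_1 = (3, 1, 0), X_2 = (−2/3, 0, −17/3) and X_3 = (0, 2/11, −51/11).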


Example

Consider

4x_1 + 2x_2 + 3x_3 − 8x_4 = 6

3x_1 + 5x_2 + 4x_3 − 6x_4 = 8

x_1, x_2, x_3, x_4 ≥ 0

A. How many basic matrices are there?

B. How many basic solutions are there?
C. Find the basic solutions.
D. Discuss the nature of each basic solution (degenerate or non-degenerate).

Solution

The above system of linear equations can be written in the form A_1 x_1 + A_2 x_2 + A_3 x_3 + A_4 x_4 = b, where

A_1 = [4, 3]^T,  A_2 = [2, 5]^T,  A_3 = [3, 4]^T,  A_4 = [−8, −6]^T  and  b = [6, 8]^T

The coefficient matrix is

A = [A_1, A_2, A_3, A_4] = [ 4   2   3   −8 ]
                           [ 3   5   4   −6 ]

Rank of A is 2.
This implies that the size of the basic matrix is 2 × 2,

and we have at most (4 choose 2) = 6 candidate basic matrices:

B_1 = [ 4  2 ],  B_2 = [ 4  3 ],  B_3 = [ 4  −8 ],  B_4 = [ 2  3 ],  B_5 = [ 2  −8 ],  B_6 = [ 3  −8 ]
      [ 3  5 ]        [ 3  4 ]        [ 3  −6 ]        [ 5  4 ]        [ 5  −6 ]        [ 4  −6 ]

Now let us identify which of the above matrices are basic:

det(B_1) = 14 ≠ 0,  det(B_2) = 7 ≠ 0,  det(B_3) = 0,  det(B_4) = −7 ≠ 0,  det(B_5) = 28 ≠ 0,  det(B_6) = 14 ≠ 0

A. This implies that B_1, B_2, B_4, B_5 and B_6 are basic matrices, so there are five basic matrices.


C. The basic solutions are


x_Bi = B_i^{-1} b:

x_B1 = (1/14) [  5   −2 ] [ 6 ]   [ 1 ]        x_B2 = (1/7) [  4   −3 ] [ 6 ]   [ 0 ]
              [ −3    4 ] [ 8 ] = [ 1 ]                     [ −3    4 ] [ 8 ] = [ 2 ]

x_B4 = (1/−7) [  4   −3 ] [ 6 ]   [ 0 ]        x_B5 = (1/28) [ −6    8 ] [ 6 ]   [   1  ]
              [ −5    2 ] [ 8 ] = [ 2 ]                      [ −5    2 ] [ 8 ] = [ −1/2 ]

x_B6 = (1/14) [ −6    8 ] [ 6 ]   [ 2 ]
              [ −4    3 ] [ 8 ] = [ 0 ]

X_1 = (x_1, x_2, x_3, x_4) = (1, 1, 0, 0)
X_2 = (x_1, x_2, x_3, x_4) = (0, 0, 2, 0)
X_4 = (x_1, x_2, x_3, x_4) = (0, 0, 2, 0)
X_5 = (x_1, x_2, x_3, x_4) = (0, 1, 0, −1/2)
X_6 = (x_1, x_2, x_3, x_4) = (0, 0, 2, 0)

D. X_2, X_4 and X_6 are degenerate basic feasible solutions;

X_1 is a non-degenerate basic feasible solution;
X_5 is a non-degenerate basic solution, but it is not feasible.
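The classification in part D can be repeated mechanically: enumerate the column pairs, skip the singular one, and test each basic solution for feasibility (x_B ≥ 0) and degeneracy (some component of x_B equal to zero). A sketch, with NumPy assumed:

```python
from itertools import combinations

import numpy as np

A = np.array([[4., 2., 3., -8.],
              [3., 5., 4., -6.]])
b = np.array([6., 8.])

status = {}
for cols in combinations(range(4), 2):
    B = A[:, list(cols)]
    if abs(np.linalg.det(B)) < 1e-9:
        continue                                   # B_3 (columns 1 and 4) is skipped
    xB = np.linalg.solve(B, b)
    feasible = bool(np.all(xB >= -1e-9))           # basic feasible solution?
    degenerate = bool(np.any(np.isclose(xB, 0.0))) # some basic variable at zero?
    status[cols] = (feasible, degenerate)
```

Columns are indexed from 0 here, so e.g. `status[(0, 1)]` describes the basis {A_1, A_2} and should read (feasible, non-degenerate).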

Correspondence between Basic Feasible Solutions and Extreme Points

We shall now show that the collection of basic feasible solutions and the collection of extreme
points are equivalent; in other words, a point is a basic feasible solution if and only if it is an
extreme point. Since a linear programming problem with a finite optimal value has an optimal
solution at an extreme point, an optimal basic feasible solution can always be found.


SOLUTION METHODS OF LINEAR PROGRAMMING PROBLEM

We can solve any linear programming problem

• geometrically (graphical solution method), or

• algebraically (simplex algorithm).

The graphical solution method is suitable for solving linear programming problems that have at
most two variables. But when we formulate real-life problems as linear programming models,
most of them have more than two variables and are too large for the graphical solution method.

Thus a more general method, known as the simplex method, is suitable for solving linear
programming problems with a large number of variables. Through an iterative process the
method progressively approaches, and ultimately reaches, the maximum or minimum value of
the objective function. The method also helps the decision maker to identify redundant
constraints, an unbounded solution, multiple solutions and an infeasible problem.

The simplex algorithm can be carried out in

• algebraic form
• tabular form

Example

Solve the following linear programming problem by using the simplex algorithm in algebraic
form:

max 6x_1 + 5x_2

s.t.

x_1 + x_2 ≤ 5

3x_1 + 2x_2 ≤ 12

x_1, x_2 ≥ 0

Solution

The standard form of the above linear programming problem is


max 6x_1 + 5x_2 + 0x_3 + 0x_4

s.t.

x_1 + x_2 + x_3 = 5

3x_1 + 2x_2 + x_4 = 12

x_1, x_2, x_3, x_4 ≥ 0

Iteration one

The simplex algorithm always starts from an initial basic feasible solution.

x_3 and x_4 are basic variables; x_1 and x_2 are non-basic variables.

x_3 = 5 − x_1 − x_2

x_4 = 12 − 3x_1 − 2x_2

z = 6x_1 + 5x_2

Our objective is to maximize the value of z.

The value of z can increase by increasing either x_1 or x_2, or both.

Since the rate of increase from x_1 (6 per unit) is greater than that from x_2 (5 per unit), we take
x_1 as the entering variable, and the minimum ratio test (12/3 = 4 < 5/1 = 5) makes x_4 the
leaving variable.

Iteration two

x_1 and x_3 are basic variables; x_2 and x_4 are non-basic variables.

x_1 = (1/3)(12 − 2x_2 − x_4)

x_1 = 4 − (2/3)x_2 − (1/3)x_4

x_3 = 5 − (4 − (2/3)x_2 − (1/3)x_4) − x_2

x_3 = 1 − (1/3)x_2 + (1/3)x_4

z = 6(4 − (2/3)x_2 − (1/3)x_4) + 5x_2

z = 24 + x_2 − 2x_4

The value of z can increase by increasing the value of x_2 or by decreasing the value of x_4. But
x_4 cannot decrease, because its value is already zero.

This implies that x_2 is our entering variable, and (by the minimum ratio test,
1/(1/3) = 3 < 4/(2/3) = 6) x_3 is our leaving variable.

Iteration three

x_1 and x_2 are basic variables; x_3 and x_4 are non-basic variables.

x_2 = 3(1 + (1/3)x_4 − x_3)

x_2 = 3 − 3x_3 + x_4

x_1 = 4 − (2/3)(3 − 3x_3 + x_4) − (1/3)x_4

x_1 = 2 + 2x_3 − x_4

z = 24 + (3 − 3x_3 + x_4) − 2x_4

z = 27 − 3x_3 − x_4

The value of z could now increase only by decreasing the values of x_3 and x_4, but x_3 and x_4
cannot decrease because their values are zero.

This implies that the optimum is z = 27, with x_1 = 2 and x_2 = 3.
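Since an optimal solution, when one exists, occurs at a basic feasible solution, the result z = 27 at (x_1, x_2) = (2, 3) can be cross-checked by brute force: enumerate every basis of the standard form and keep the best feasible one. This is a verification sketch (NumPy assumed), not an efficient method:

```python
from itertools import combinations

import numpy as np

# Standard form of the example:  max 6x1 + 5x2 + 0x3 + 0x4
#   x1 + x2 + x3 = 5,   3x1 + 2x2 + x4 = 12,   x >= 0
A = np.array([[1., 1., 1., 0.],
              [3., 2., 0., 1.]])
b = np.array([5., 12.])
c = np.array([6., 5., 0., 0.])

best_x, best_z = None, -np.inf
for cols in combinations(range(4), 2):
    B = A[:, list(cols)]
    if abs(np.linalg.det(B)) < 1e-9:
        continue
    xB = np.linalg.solve(B, b)
    if np.any(xB < -1e-9):
        continue                      # basic but not feasible
    x = np.zeros(4)
    x[list(cols)] = xB
    if c @ x > best_z:
        best_x, best_z = x, float(c @ x)
```

The best basic feasible solution found is (2, 3, 0, 0) with objective value 27, agreeing with the algebraic iterations above.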

Example 2

min z = x_1 + 2x_2

subject to

2x_1 + x_2 ≥ 3

x_1 + 2x_2 ≥ 6

x_2 ≤ 3

x_1 ≤ 6

x_1, x_2 ≥ 0

Solution


The standard form of the above linear programming problem (with Big-M penalties) is

min x_1 + 2x_2 + 0s_1 + 0s_2 + 0s_3 + 0s_4 + M a_1 + M a_2

subject to

2x_1 + x_2 − s_1 + a_1 = 3

x_1 + 2x_2 − s_2 + a_2 = 6

x_2 + s_3 = 3

x_1 + s_4 = 6

x_1, x_2, s_1, s_2, s_3, s_4, a_1, a_2 ≥ 0

Iteration one

The simplex algorithm always starts from an initial basic feasible solution.

a_1, a_2, s_3 and s_4 are basic variables; x_1, x_2, s_1 and s_2 are non-basic variables.

a_1 = 3 − 2x_1 − x_2 + s_1

a_2 = 6 − x_1 − 2x_2 + s_2

s_3 = 3 − x_2

s_4 = 6 − x_1

z = x_1 + 2x_2 + M(3 − 2x_1 − x_2 + s_1) + M(6 − x_1 − 2x_2 + s_2)

z = x_1 + 2x_2 + 3M − 2Mx_1 − Mx_2 + Ms_1 + 6M − Mx_1 − 2Mx_2 + Ms_2

z = x_1 − 2Mx_1 − Mx_1 + 2x_2 − Mx_2 − 2Mx_2 + Ms_1 + Ms_2 + 6M + 3M

z = (1 − 3M)x_1 + (2 − 3M)x_2 + Ms_1 + Ms_2 + 9M

Our objective is to minimize the value of z.

The value of z can decrease by increasing either x_1 or x_2, or by decreasing s_1 or s_2; but s_1
and s_2 cannot decrease because their values are zero.

Since the decreasing rate of x_1 (coefficient 1 − 3M) is greater than that of x_2 (coefficient
2 − 3M), we take x_1 as the entering variable, and the minimum ratio test (3/2 < 6/1) makes a_1
the leaving variable.

Iteration two


x_1, a_2, s_3 and s_4 are basic variables; x_2, a_1, s_1 and s_2 are non-basic variables.

a_1 = 3 − 2x_1 − x_2 + s_1

x_1 = 3/2 − (1/2)x_2 + (1/2)s_1 − (1/2)a_1

a_2 = 6 − x_1 − 2x_2 + s_2

a_2 = 6 − 3/2 + (1/2)x_2 − (1/2)s_1 + (1/2)a_1 − 2x_2 + s_2

a_2 = 9/2 − (3/2)x_2 − (1/2)s_1 + (1/2)a_1 + s_2

s_3 = 3 − x_2

s_4 = 6 − x_1

s_4 = 6 − 3/2 + (1/2)x_2 − (1/2)s_1 + (1/2)a_1

s_4 = 9/2 + (1/2)x_2 − (1/2)s_1 + (1/2)a_1

z = (1 − 3M)x_1 + (2 − 3M)x_2 + Ms_1 + Ms_2 + 9M

z = (1 − 3M)(3/2 − (1/2)x_2 + (1/2)s_1 − (1/2)a_1) + (2 − 3M)x_2 + Ms_1 + Ms_2 + 9M

z = (3/2 − (9/2)M) + ((3/2)M − 1/2)x_2 + (1/2 − (3/2)M)s_1 + ((3/2)M − 1/2)a_1 + (2 − 3M)x_2 + Ms_1 + Ms_2 + 9M

z = (3/2 − (3/2)M)x_2 + (1/2 − (1/2)M)s_1 + ((3/2)M − 1/2)a_1 + Ms_2 + (9/2)M + 3/2

The value of z can decrease by increasing either x_2 or s_1 (for large M their coefficients are
negative), or by decreasing a_1 or s_2; but a_1 and s_2 cannot decrease because their values are
zero.

Since the decreasing rate of x_2 is greater than that of s_1, we take x_2 as the entering variable,
and the minimum ratio test makes a_2 the leaving variable.

Iteration three

x_1, x_2, s_3 and s_4 are basic variables; a_1, a_2, s_1 and s_2 are non-basic variables.

a_2 = 9/2 − (3/2)x_2 − (1/2)s_1 + (1/2)a_1 + s_2

x_2 = 3 − (1/3)s_1 + (1/3)a_1 + (2/3)s_2 − (2/3)a_2

x_1 = 3/2 − (1/2)x_2 + (1/2)s_1 − (1/2)a_1

x_1 = 3/2 − (1/2)(3 − (1/3)s_1 + (1/3)a_1 + (2/3)s_2 − (2/3)a_2) + (1/2)s_1 − (1/2)a_1

x_1 = 3/2 − 3/2 + (1/6)s_1 − (1/6)a_1 − (1/3)s_2 + (1/3)a_2 + (1/2)s_1 − (1/2)a_1

x_1 = (2/3)s_1 − (2/3)a_1 − (1/3)s_2 + (1/3)a_2

s_3 = 3 − x_2

s_3 = 3 − 3 + (1/3)s_1 − (1/3)a_1 − (2/3)s_2 + (2/3)a_2

s_3 = (1/3)s_1 − (1/3)a_1 − (2/3)s_2 + (2/3)a_2

s_4 = 9/2 + (1/2)x_2 − (1/2)s_1 + (1/2)a_1

s_4 = 9/2 + (1/2)(3 − (1/3)s_1 + (1/3)a_1 + (2/3)s_2 − (2/3)a_2) − (1/2)s_1 + (1/2)a_1

s_4 = 9/2 + 3/2 − (1/6)s_1 + (1/6)a_1 + (1/3)s_2 − (1/3)a_2 − (1/2)s_1 + (1/2)a_1

s_4 = 6 − (2/3)s_1 + (2/3)a_1 + (1/3)s_2 − (1/3)a_2

z = (3/2 − (3/2)M)x_2 + (1/2 − (1/2)M)s_1 + ((3/2)M − 1/2)a_1 + Ms_2 + (9/2)M + 3/2

z = (3/2 − (3/2)M)(3 − (1/3)s_1 + (1/3)a_1 + (2/3)s_2 − (2/3)a_2) + (1/2 − (1/2)M)s_1 + ((3/2)M − 1/2)a_1 + Ms_2 + (9/2)M + 3/2

z = (9/2 − (9/2)M) + ((1/2)M − 1/2)s_1 + (1/2 − (1/2)M)a_1 + (1 − M)s_2 + (M − 1)a_2 + (1/2 − (1/2)M)s_1 + ((3/2)M − 1/2)a_1 + Ms_2 + (9/2)M + 3/2


z = M a_1 + s_2 + 0 s_1 + (M − 1)a_2 + 6

The value of z is optimal, because

z could decrease only by increasing s_1 or by decreasing a_1, s_2 or a_2; but a_1, a_2 and s_2
cannot decrease because their values are zero, and when s_1 increases the value of z remains
constant (its coefficient is 0). A zero coefficient on a non-basic variable means that our problem
has an alternative optimal solution.

This implies that the value z = 6 is optimal.

Now let us find the alternative solution.

Consider s_1 as the entering variable and x_1 (which is basic at value zero) as the leaving variable.

Iteration four

x_2, s_1, s_3 and s_4 are basic variables; x_1, a_1, a_2 and s_2 are non-basic variables.

x_1 = (2/3)s_1 − (2/3)a_1 − (1/3)s_2 + (1/3)a_2

(2/3)s_1 = x_1 + (2/3)a_1 + (1/3)s_2 − (1/3)a_2

s_1 = (3/2)x_1 + a_1 + (1/2)s_2 − (1/2)a_2

x_2 = 3 − (1/3)s_1 + (1/3)a_1 + (2/3)s_2 − (2/3)a_2

x_2 = 3 − (1/3)((3/2)x_1 + a_1 + (1/2)s_2 − (1/2)a_2) + (1/3)a_1 + (2/3)s_2 − (2/3)a_2

x_2 = 3 − (1/2)x_1 − (1/3)a_1 − (1/6)s_2 + (1/6)a_2 + (1/3)a_1 + (2/3)s_2 − (2/3)a_2

x_2 = 3 − (1/2)x_1 + (1/2)s_2 − (1/2)a_2

s_3 = (1/3)s_1 − (1/3)a_1 − (2/3)s_2 + (2/3)a_2

s_3 = (1/3)((3/2)x_1 + a_1 + (1/2)s_2 − (1/2)a_2) − (1/3)a_1 − (2/3)s_2 + (2/3)a_2
3 3


s_3 = (1/2)x_1 + (1/3)a_1 + (1/6)s_2 − (1/6)a_2 − (1/3)a_1 − (2/3)s_2 + (2/3)a_2

s_3 = (1/2)x_1 − (1/2)s_2 + (1/2)a_2

s_4 = 6 − (2/3)s_1 + (2/3)a_1 + (1/3)s_2 − (1/3)a_2

s_4 = 6 − (2/3)((3/2)x_1 + a_1 + (1/2)s_2 − (1/2)a_2) + (2/3)a_1 + (1/3)s_2 − (1/3)a_2

s_4 = 6 − x_1 − (2/3)a_1 − (1/3)s_2 + (1/3)a_2 + (2/3)a_1 + (1/3)s_2 − (1/3)a_2

s_4 = 6 − x_1

z = M a_1 + s_2 + 0 s_1 + (M − 1)a_2 + 6

z = M a_1 + s_2 + 0 ((3/2)x_1 + a_1 + (1/2)s_2 − (1/2)a_2) + (M − 1)a_2 + 6

z = M a_1 + s_2 + (M − 1)a_2 + 6

The value of z is again optimal, because

z could decrease only by increasing x_1 or by decreasing a_1, s_2 or a_2; but a_1, a_2 and s_2
cannot decrease because their values are zero, and when x_1 increases the value of z remains
constant. This again shows that our problem has alternative optimal solutions.

This implies that the value z = 6 is optimal.

To trace out the other end of the optimal edge, consider x_1 as the entering variable and x_2 as
the leaving variable.
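A quick numerical check of the alternative optima: the vertex (0, 3) found above, the vertex (6, 0) reached by the pivot just suggested (both visible in the graphical illustration), and points between them are all feasible with z = 6. A minimal sketch:

```python
def z(x1, x2):
    return x1 + 2 * x2

def feasible(x1, x2, tol=1e-9):
    return (2 * x1 + x2 >= 3 - tol and x1 + 2 * x2 >= 6 - tol
            and x2 <= 3 + tol and x1 <= 6 + tol
            and x1 >= -tol and x2 >= -tol)

# the two optimal vertices and two points strictly between them
points = [(0.0, 3.0), (6.0, 0.0), (3.0, 1.5), (1.5, 2.25)]
checks = [(feasible(x1, x2), z(x1, x2)) for x1, x2 in points]
```

Every point on the edge lies on the line x_1 + 2x_2 = 6, which is exactly the objective level set for z = 6.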

Graphical illustration of the problem


[Figure: graphical illustration of the feasible region defined by 2x_1 + x_2 ≥ 3, x_1 + 2x_2 ≥ 6,
x_2 ≤ 3 and x_1 ≤ 6, together with the objective level line x_1 + 2x_2 = z; the optimal value
z = 6 is attained along the entire edge lying on the line x_1 + 2x_2 = 6.]

SIMPLEX ALGORITHM IN TABULAR FORM

The step of the simplex algorithm to obtain an optimal solution (if it exists) to a linear
programming problem as follows

Step 1:- Formulation of the mathematical model

I. Formulate the mathematical model of the given linear optimization


problem

II. If the objective function is of minimization type, then convert it into

maximization by using the following relationship:

min Z = −max Z*, where Z* = −Z

III. Make the right-hand side (b_i) of each constraint positive.

IV. Convert the constraints into equations by introducing non-negative

slack or surplus variables.

Step 2: Find initial basic feasible solution

Step 3: Construct starting simplex table

              Cj            C1    C2    ⋯    Cn    0     0     ⋯    0
Coefficient   Variables
of basic      in basis
variables     B             x1    x2    ⋯    xn    s1    s2    ⋯    sm    b = x_β    Minimum ratio
(c_β)

c_β1 = 0      s1            a11   a12   ⋯    a1n   1     0     ⋯    0     b1
c_β2 = 0      s2            a21   a22   ⋯    a2n   0     1     ⋯    0     b2
⋮             ⋮             ⋮     ⋮          ⋮     ⋮     ⋮          ⋮     ⋮
c_βm = 0      sm            am1   am2   ⋯    amn   0     0     ⋯    1     bm

              Zj = Σ c_βi a_ij   0    0    ⋯    0     0     0     ⋯    0
              Δj = Cj − Zj       Δ1   Δ2   ⋯    Δn    0     0     ⋯    0

• The variables corresponding to the columns of the identity matrix are called basic
variables.

• The first row in the above table indicates the coefficients C_j of the variables in the
objective function, which remain the same in successive simplex tables.

• The second row provides the major column headings for the simplex table.

• c_β lists the coefficients of the current basic variables in the objective function. These
values are used to calculate the value of Z when one unit of any variable is brought into
the solution.

• The column headed by x_β represents the current values of the corresponding variables in
the basis.

• The identity matrix (or basis matrix) represents the coefficients of the slack or surplus
variables which have been added to or subtracted from the constraints. Each column of the
identity matrix also represents a basic variable, to be listed in the column B.

• The numbers a_ij in the column under each variable are also called substitution rates or
exchange coefficients.

• The value of Z_j represents the amount by which the value of the objective function Z
would be decreased (or increased) if one unit of the given variable is added to the new
solution.

Each of the values in the C_j − Z_j row represents the net amount of increase (or decrease) in the
objective function that would occur when one unit of the variable represented by the column
head is introduced into the solution.

Step 4:- Test for optimality


The test is done by computing an evaluation Δ_j = C_j − Z_j, for j = 1, 2, ⋯, n, for each non-basic
(zero-valued) variable x_j, where Z_j = Σ_i c_βi a_ij.

Note that Δ_j for a basic variable is zero.

Examining the values of Δ_j = C_j − Z_j, three cases may arise:

I. If Δ_j ≤ 0 for each j, the solution under test is optimal.

a) If none of the Δ_j is positive but at least one is zero, then other optimal solutions
exist with the same value of Z.

b) If all of the Δ_j are negative, then the solution under test is the unique optimal
solution.

II. If Δ_j > 0 for some j, i.e. if one or more of the Δ_j are positive, then the solution under
test is not optimal.

If the solution under test is not optimal, we must proceed to the next step.

a) If, in the column x_j corresponding to the maximum positive Δ_j, all the elements are
negative or zero, then the solution under test is unbounded.

b) If at least one artificial variable appears in the basis at a non-zero value while the
optimality condition is satisfied, then we say that the problem has no
feasible solution.

Step 5: Select the variable to enter the basis (incoming vector)

If case (II) of step 4 holds, then select the variable with the largest positive Δ_j to enter the new
solution, i.e. Δ_k = C_k − Z_k = max {C_j − Z_j : C_j − Z_j > 0}. The column to be entered is called
the key or pivot column.

Step 6: Test for feasibility (variable to leave the basis, or outgoing vector)

After identifying the entering variable, the variable to be removed from the
existing set of basic variables is determined. For this, each number in the x_β column (i.e. the b_i
values) is divided by the corresponding positive number in the key column, and we select the
row for which this ratio (constant column / key column) is non-negative and minimum. This
ratio is called the replacement (exchange) ratio, i.e.

x_βr / a_rj = min { x_βi / a_ij : a_ij > 0 }

The row selected in this manner is called the key or pivot row and represents the variable,
which will leave the solution.

The element that lies at the intersection of the key row and key column of the simplex table
called key or pivot element.

Step 7:- Finding the new solution

i. If the key element is 1, then the row remains the same in the new simplex table.

ii. If the key element is other than 1, then divided each element in the key row
(including the element in x β -column) by the key element, to find the new values
for that row.

iii. The new values of the elements in the remaining rows for the new simplex table
can be obtained by performing elementary row operation on all rows so that all
elements except the key element in the key column are zero.

The new entries in c β (coefficient of basic variables) and x β (value of basic variables) column are
updated in the new simplex table of the current solution.

Step 8: Repeat the procedure

Go to step 4 and repeat the procedure until all entries in the Δ_j = C_j − Z_j row are either negative
or zero.
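Steps 3 to 8 can be collected into a compact program. The sketch below (NumPy assumed; the function name is ours) implements the tableau method for maximization problems with ≤ constraints and non-negative right-hand sides, i.e. the case where the slack variables form the starting basis:

```python
import numpy as np

def simplex_max(c, A, b):
    """Tableau simplex for:  max c.x  s.t.  A x <= b, x >= 0, with b >= 0.
    Follows steps 3-8 above: start with the slack variables as the basis,
    repeatedly pick the entering column (largest Cj - Zj), pick the leaving
    row by the minimum ratio test, and pivot (illustrative sketch)."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    c = np.asarray(c, float)
    m, n = A.shape
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])  # starting table [A | I | b]
    cost = np.concatenate([c, np.zeros(m)])          # Cj row (slacks cost 0)
    basis = list(range(n, n + m))                    # slack variables are basic
    while True:
        zj = cost[basis] @ T[:, :-1]                 # Zj = sum of c_beta * a_ij
        delta = cost - zj                            # Δj = Cj − Zj
        j = int(np.argmax(delta))
        if delta[j] <= 1e-9:                         # all Δj <= 0: optimal
            break
        col = T[:, j]
        if np.all(col <= 1e-9):                      # key column <= 0: unbounded
            raise ValueError("unbounded problem")
        ratios = np.full(m, np.inf)
        pos = col > 1e-9
        ratios[pos] = T[pos, -1] / col[pos]          # minimum ratio test
        r = int(np.argmin(ratios))                   # key (pivot) row
        T[r] /= T[r, j]                              # make the key element 1
        for i in range(m):                           # clear the rest of the key column
            if i != r:
                T[i] -= T[i, j] * T[r]
        basis[r] = j
    x = np.zeros(n + m)
    x[basis] = T[:, -1]
    return x[:n], float(cost[basis] @ T[:, -1])

# Example 1 below:  max 6x1 + 5x2  s.t.  x1 + x2 <= 5,  3x1 + 2x2 <= 12
x_opt, z_opt = simplex_max([6, 5], [[1, 1], [3, 2]], [5, 12])  # → x = [2, 3], z = 27
```

The two pivots performed by this loop correspond exactly to the two tableau updates worked out by hand in Example 1 below.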

Example 1

Solve the following linear programming problem by using simplex algorithm method in tabular
form

max 6x_1 + 5x_2

s.t.

x_1 + x_2 ≤ 5

3x_1 + 2x_2 ≤ 12

x_1, x_2 ≥ 0

Solution

The standard form of the above linear programming problem is

max 6x_1 + 5x_2 + 0s_1 + 0s_2

s.t.

x_1 + x_2 + s_1 = 5

3x_1 + 2x_2 + s_2 = 12

x_1, x_2, s_1, s_2 ≥ 0

s_1 and s_2 are the initial basic variables; x_1 and x_2 are non-basic variables.

      Cj           6     5     0     0
CB    B      x1    x2    s1    s2    b = xB   Min ratio
0     s1     1     1     1     0     5        5
0     s2     3     2     0     1     12       4 →
      zj     0     0     0     0     0
Δj = Cj − zj 6↑    5     0     0

0     s1     0     1/3   1     −1/3  1        3 →
6     x1     1     2/3   0     1/3   4        6
      zj     6     4     0     2     24
Δj = Cj − zj 0     1↑    0     −2

5     x2     0     1     3     −1    3
6     x1     1     0     −2    1     2
      zj     6     5     3     1     27
Δj = Cj − zj 0     0     −3    −1
As we see from the above table, our linear programming problem has a unique solution,

i.e. (x_1, x_2) = (2, 3)

And the optimal value is

max 6x_1 + 5x_2 = 6 × 2 + 5 × 3 = 27

Example 2

Solve the following linear programming problem by using simplex algorithm method in tabular
form

max 12x_1 + 8x_2

s.t.

4x_1 + 2x_2 ≤ 80

2x_1 + 3x_2 ≤ 100

5x_1 + x_2 ≤ 75

x_1, x_2 ≥ 0

Solution

The standard form of the above linear programming problem is

max 12x_1 + 8x_2 + 0s_1 + 0s_2 + 0s_3

s.t.

4x_1 + 2x_2 + s_1 = 80

2x_1 + 3x_2 + s_2 = 100

5x_1 + x_2 + s_3 = 75

x_1, x_2, s_1, s_2, s_3 ≥ 0

      Cj            12     8       0       0       0
CB    B      x1     x2     s1      s2      s3      b = xB   Min ratio
0     s1     4      2      1       0       0       80       20
0     s2     2      3      0       1       0       100      50
0     s3     5      1      0       0       1       75       15 →
      zj     0      0      0       0       0       0
Δj = Cj − zj 12↑    8      0       0       0

0     s1     0      6/5    1       0       −4/5    20       50/3 →
0     s2     0      13/5   0       1       −2/5    70       350/13
12    x1     1      1/5    0       0       1/5     15       75
      zj     12     12/5   0       0       12/5    180
Δj = Cj − zj 0      28/5↑  0       0       −12/5

8     x2     0      1      5/6     0       −2/3    50/3     −
0     s2     0      0      −13/6   1       4/3     80/3     20 →
12    x1     1      0      −1/6    0       1/3     35/3     35
      zj     12     8      14/3    0       −4/3    820/3
Δj = Cj − zj 0      0      −14/3   0       4/3↑

8     x2     0      1      −1/4    1/2     0       30
0     s3     0      0      −13/8   3/4     1       20
12    x1     1      0      3/8     −1/4    0       5
      zj     12     8      5/2     1       0       300
Δj = Cj − zj 0      0      −5/2    −1      0
As we see from the above table, our linear programming problem has a unique solution,

i.e. (x_1, x_2) = (5, 30)

And the optimal value is

max 12x_1 + 8x_2 = 12 × 5 + 8 × 30 = 300
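The tableau result can be cross-checked geometrically: with two decision variables the optimum lies at a vertex of the feasible region, and each vertex is the intersection of a pair of constraint lines. A brute-force sketch using Cramer's rule (pure Python, no solver assumed):

```python
from itertools import combinations

# Each constraint as (a1, a2, rhs) meaning a1*x1 + a2*x2 <= rhs;
# the last two triples encode x1 >= 0 and x2 >= 0.
cons = [(4, 2, 80), (2, 3, 100), (5, 1, 75), (-1, 0, 0), (0, -1, 0)]

vertices = []
for (p1, p2, q), (r1, r2, s) in combinations(cons, 2):
    det = p1 * r2 - r1 * p2
    if abs(det) < 1e-9:
        continue                          # parallel lines: no intersection point
    x1 = (q * r2 - s * p2) / det          # Cramer's rule for the 2x2 system
    x2 = (p1 * s - r1 * q) / det
    if all(a * x1 + b * x2 <= c + 1e-9 for a, b, c in cons):
        vertices.append((x1, x2))         # keep only feasible vertices

best = max(vertices, key=lambda v: 12 * v[0] + 8 * v[1])
```

Among the feasible vertices, (5, 30) maximizes 12x_1 + 8x_2 with value 300, agreeing with the final tableau.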

Example 3

Solve the following linear programming problem by using simplex algorithm method in tabular
form

max 10x_1 + 20x_2

s.t.

x_1 ≤ 10

x_2 ≤ 6

2x_1 + 4x_2 ≤ 32

x_1, x_2 ≥ 0

Solution

The standard form of the above linear programming problem is

max 10x_1 + 20x_2 + 0s_1 + 0s_2 + 0s_3

s.t.

x_1 + s_1 = 10

x_2 + s_2 = 6

2x_1 + 4x_2 + s_3 = 32

x_1, x_2, s_1, s_2, s_3 ≥ 0

      Cj            10     20     0      0      0
CB    B      x1     x2     s1     s2     s3     b = xB   Min ratio
0     s1     1      0      1      0      0      10       −
0     s2     0      1      0      1      0      6        6 →
0     s3     2      4      0      0      1      32       8
      zj     0      0      0      0      0      0
Δj = Cj − zj 10     20↑    0      0      0

0     s1     1      0      1      0      0      10       10
20    x2     0      1      0      1      0      6        −
0     s3     2      0      0      −4     1      8        4 →
      zj     0      20     0      20     0      120
Δj = Cj − zj 10↑    0      0      −20    0

0     s1     0      0      1      2      −1/2   6        3 →
20    x2     0      1      0      1      0      6        6
10    x1     1      0      0      −2     1/2    4        −
      zj     10     20     0      0      5      160
Δj = Cj − zj 0      0      0      0↑     −5

0     s2     0      0      1/2    1      −1/4   3
20    x2     0      1      −1/2   0      1/4    3
10    x1     1      0      1      0      0      10
      zj     10     20     0      0      5      160
Δj = Cj − zj 0      0      0      0      −5

As we see from the above table, our linear programming problem has multiple optimal solutions
(Δ_j = 0 for a non-basic variable); one of them is

(x_1, x_2) = (10, 3)

And the optimal value is



max 10 x 1+20 x 2=10 ×10+ 20× 3=160

Alternative solution

      Cj            10     20     0      0      0
CB    B      x1     x2     s1     s2     s3     b = xB   Min ratio
0     s2     0      0      1/2    1      −1/4   3        6 →
20    x2     0      1      −1/2   0      1/4    3        −
10    x1     1      0      1      0      0      10       10
      zj     10     20     0      0      5      160
Δj = Cj − zj 0      0      0↑     0      −5

0     s1     0      0      1      2      −1/2   6
20    x2     0      1      0      1      0      6
10    x1     1      0      0      −2     1/2    4
      zj     10     20     0      0      5      160
Δj = Cj − zj 0      0      0      0      −5

(x_1, x_2) = (4, 6)

And the optimal value is

max 10x_1 + 20x_2 = 10 × 4 + 20 × 6 = 160

This implies that the points of the line segment joining (10, 3) and (4, 6) are also optimal solutions.
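The claim about the line segment can be verified directly: any convex combination of (10, 3) and (4, 6) is feasible and attains z = 160. A minimal check:

```python
def z(x1, x2):
    return 10 * x1 + 20 * x2

def feasible(x1, x2, tol=1e-9):
    return (x1 <= 10 + tol and x2 <= 6 + tol
            and 2 * x1 + 4 * x2 <= 32 + tol
            and x1 >= -tol and x2 >= -tol)

# the two optimal corner points and a convex combination of them
lam = 0.5
mid = (lam * 10 + (1 - lam) * 4, lam * 3 + (1 - lam) * 6)
points = [(10, 3), (4, 6), mid]
```

Both endpoints lie on the constraint line 2x_1 + 4x_2 = 32, which is parallel to the objective level sets of 10x_1 + 20x_2; that is why the whole edge is optimal.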


ARTIFICIAL VARIABLE TECHNIQUE

In certain cases it is difficult to obtain an initial basic feasible solution from the standard form of
linear programming problem. So In order to get initial basic feasible solution we are going to
apply artificial variable technique.

When do we apply the artificial variable technique?

1. When a constraint is of ≥ type (i.e. Σ_{j=1}^n a_ij x_j ≥ b_i, x_j ≥ 0) and has a positive right-hand
side (i.e. b_i > 0).

After subtracting the surplus variable (negative slack variable) s_i from the i-th constraint, and
letting x_j = 0 for j = 1, 2, ⋯, n, we get the initial solution s_i = −b_i for some i. This is not a
feasible solution, because it violates the non-negativity condition on the surplus variable
(i.e. s_i ≥ 0). In this case we add an artificial variable a_i to get an initial basic feasible solution, i.e.

Σ_{j=1}^n a_ij x_j − s_i + a_i = b_i

2. When a constraint is of ≤ type (i.e. Σ_{j=1}^n a_ij x_j ≤ b_i, x_j ≥ 0) and has a negative right-hand
side (i.e. b_i < 0).

After adding the non-negative slack variable s_i to the i-th constraint, and letting
x_j = 0 for j = 1, 2, ⋯, n, we get the initial solution s_i = b_i for some i. This is not a feasible
solution, because it violates the non-negativity condition on the slack variable (i.e. s_i ≥ 0). In
this case we multiply both sides of the i-th constraint by −1, and it becomes the
same case as stated above.

Artificial variables have no meaning in the physical sense and are used only as a tool for
generating an initial basic feasible solution. Before the final simplex solution is reached, all the
artificial variables must be dropped out of the solution mix. This is done by assigning suitable
penalty coefficients to these variables in the objective function.

There are two methods for eliminating artificial variables from the solution:

I. Two-phase method
II. Big-M method (method of penalties)

TWO-PHASE METHOD


In the first phase of this method the sum of the artificial variables is minimized subject to the
given constraints to get initial basic feasible solution of the linear programming problem. The
second phase minimizes the original objective function starting with the initial basic feasible
solution obtained at the end of the first phase. Since the solution of the linear programming
problem is completed in two phases this is called two-phase method.

STEPS OF THE ALGORITHM

Phase I

Step1:-

a. If all the constraints in the given linear programming problem are ≤ type and have
non-negative right hand side value, then phase II can be directly used to solve the
problem. Otherwise, the necessary numbers of surplus and artificial variables are
added to convert constraints into equality constraints.

b. If the given linear programming problem is of minimization convert it to


maximization type by the usual method.

Step 2: Assign zero coefficients to each of the decision variables and to the surplus variables,
and assign a (−1) coefficient to each of the artificial variables. This yields the following auxiliary
linear programming problem:

max z* = Σ_{i=1}^m (−1) a_i

subject to

Σ_{j=1}^n a_ij x_j − s_i + a_i = b_i  for i = 1, 2, ⋯, m

x_j, s_i, a_i ≥ 0

Step 3:- Apply the simplex algorithm to solve this auxiliary linear programming problem. The following three cases may arise at optimality:

i. max z* < 0, so at least one artificial variable is present in the basis at a positive value. Then no feasible solution exists for the original linear programming problem.

ii. max z* = 0 and no artificial variable is present in the basis. Then we move to phase II to obtain an optimal basic feasible solution to the original linear programming problem.


iii. max z ¿ =0and at least one artificial variable is present in the basis at zero value.
Then a feasible solution to the auxiliary linear programming problem is also a
feasible solution to the original linear programming problem.
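The Phase I feasibility test described above can be sketched numerically. The following is a minimal illustration, assuming SciPy is available; the helper name `phase_one_feasible` and the zero tolerance `1e-9` are our own choices, not from the text. Note that maximizing −Σai is the same as minimizing Σai.

```python
import numpy as np
from scipy.optimize import linprog

def phase_one_feasible(A_eq, b_eq):
    """Phase I: minimise the sum of artificial variables a in A_eq x + a = b_eq.

    The original system (A_eq x = b_eq, x >= 0) is feasible exactly when the
    optimal value is zero; a strictly positive optimum corresponds to case (i).
    Assumes every entry of b_eq is non-negative, as arranged in Step 1.
    """
    m, n = A_eq.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])  # cost 1 on each artificial
    A_aux = np.hstack([A_eq, np.eye(m)])           # identity block = artificials
    res = linprog(c, A_eq=A_aux, b_eq=b_eq, bounds=[(0, None)] * (n + m))
    return bool(res.status == 0 and res.fun < 1e-9)

# Constraints of the worked example below, already in standard form:
# 2x1 + x2 - s1 = 2,  x1 + 3x2 + s2 = 2,  x2 + s3 = 4.
A = np.array([[2.0, 1.0, -1.0, 0.0, 0.0],
              [1.0, 3.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 0.0, 1.0]])
b = np.array([2.0, 2.0, 4.0])
print(phase_one_feasible(A, b))  # True: all artificials can be driven to zero
```

An inconsistent system such as x1 + x2 = 1 together with x1 + x2 = 3 makes the same function return False, since the artificials cannot all reach zero.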

Phase II

Assign the actual coefficients to the variables in the objective function and zero coefficients to the artificial variables which appear at zero value in the basis at the end of phase I, i.e. the last simplex table of phase I can be used as the initial simplex table for phase II. Then apply the usual simplex algorithm to the modified simplex table to get the optimal solution to the original problem. Artificial variables which do not appear in the basis may be removed.

Example

Solve the following linear programming problem by using two-phase method

max z = 3x1 − x2

s.t

2x1 + x2 ≥ 2

x1 + 3x2 ≤ 2

x2 ≤ 4

x1, x2 ≥ 0

Solution

The standard form of the above linear programming problem is

max z = 3x1 − x2

s.t

2x1 + x2 − s1 + a1 = 2

x1 + 3x2 + s2 = 2

x2 + s3 = 4

x1, x2, s1, s2, s3, a1 ≥ 0

Auxiliary linear programming problem


max z* = −a1

s.t

2x1 + x2 − s1 + a1 = 2

x1 + 3x2 + s2 = 2

x2 + s3 = 4

x1, x2, s1, s2, s3, a1 ≥ 0

Phase I

      Cj              0      0      0      0      0     −1
CB    B        x1     x2     s1     s2     s3     a1     b = xB   Minimum ratio
−1    a1        2      1     −1      0      0      1      2        1 →
 0    s2        1      3      0      1      0      0      2        2
 0    s3        0      1      0      0      1      0      4        −
      zj       −2     −1      1      0      0     −1
 Δj = Cj − zj   2↑     1     −1      0      0      0
 0    x1        1     1/2   −1/2     0      0     1/2     1
 0    s2        0     5/2    1/2     1      0    −1/2     1
 0    s3        0      1      0      0      1      0      4
      zj        0      0      0      0      0      0
 Δj = Cj − zj   0      0      0      0      0     −1

Since all Δj ≤ 0, max z* = 0 and no artificial vector appears in the basis, we proceed to phase II.

Phase II

      Cj              3     −1      0      0      0
CB    B        x1     x2     s1     s2     s3     b = xB   Minimum ratio
 3    x1        1     1/2   −1/2     0      0      1        −
 0    s2        0     5/2    1/2     1      0      1        2 →
 0    s3        0      1      0      0      1      4        −
      zj        3     3/2   −3/2     0      0
 Δj = Cj − zj   0    −5/2    3/2↑    0      0
 3    x1        1      3      0      1      0      2
 0    s1        0      5      1      2      0      2
 0    s3        0      1      0      0      1      4
      zj        3      9      0      3      0
 Δj = Cj − zj   0    −10      0     −3      0

Since all Δj ≤ 0, the optimal basic feasible solution has been obtained.

• Therefore the solution is max z = 6 at x1 = 2, x2 = 0.
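As a cross-check of the hand computation above, the same LP can be handed to an off-the-shelf solver. This is only a convenience sketch assuming SciPy is available; the simplex tableaux remain the method of the chapter.

```python
from scipy.optimize import linprog

# max z = 3x1 - x2 is rewritten as min -3x1 + x2, and the ">=" constraint is
# multiplied by -1 to fit linprog's "A_ub x <= b_ub" convention.
res = linprog(c=[-3, 1],
              A_ub=[[-2, -1],   # 2x1 + x2 >= 2  ->  -2x1 - x2 <= -2
                    [1, 3],     # x1 + 3x2 <= 2
                    [0, 1]],    # x2 <= 4
              b_ub=[-2, 2, 4],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # x = (2, 0), z = 6, matching the tableau result
```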


BIG-M METHOD

The big-M method is another method of removing artificial variables from the basis. In this method, we assign to each artificial variable a coefficient that is undesirable from the objective point of view. If the objective function is to be minimized, then a very large positive price (called a penalty) is assigned to each artificial variable. Similarly, if the objective function is to be maximized, then a very large negative price (also called a penalty) is assigned to each of these variables. The penalty is designated by −M for a maximization problem and +M for a minimization problem, where M > 0.

The steps of the big-M method are the same as the steps of the simplex method, except when we formulate the standard form of the linear programming problem:

• the big-M method assigns a very large positive coefficient +M (for the minimization case) or a very large negative coefficient −M (for the maximization case) to the artificial variables in the objective function.

STEPS OF BIG-M METHOD

Step 1:- Modify the constraints so that the RHS of each constraint is non-negative (this requires that each constraint with a negative RHS be multiplied by −1; remember that multiplying an inequality by a negative number reverses its direction!). After modification, identify each constraint as a ≤, ≥, or = constraint.

Step 2:-Put the linear programming problem in its standard form.

Step 3:- Add an artificial variable ai to each constraint identified as a ≥ or = constraint at the end of Step 1. Also add the sign restriction ai ≥ 0.

Step 4:- Let M denote a very large positive number. If the LP is a min problem, add (for each artificial variable) M·ai to the objective function. If the LP is a max problem, add (for each artificial variable) −M·ai to the objective function.

Step 5:- Solve the transformed linear optimization problem using the simplex algorithm (in choosing the entering variable, remember that M is a very large positive number!).

Example

Solve the following linear programming problem by using big-M method

max z = 3x1 + 2x2 + x3


s.t

x1 + 2x2 + 2x3 = 10

x1 + x2 + x3 = 6

x1, x2, x3 ≥ 0

Solution

The standard form of the above linear optimization problem with artificial variables is

max z = 3x1 + 2x2 + x3 − M a1 − M a2

s.t

x1 + 2x2 + 2x3 + a1 = 10

x1 + x2 + x3 + a2 = 6

x1, x2, x3, a1, a2 ≥ 0

      Cj                3       2       1      −M      −M
CB    B         x1      x2      x3      a1      a2      b = xB   Minimum ratio
−M    a1         1       2       2       1       0       10       5 →
−M    a2         1       1       1       0       1        6       6
      zj       −2M     −3M     −3M     −M      −M
 Δj = Cj − zj  3+2M    2+3M↑   1+3M     0       0
 2    x2        1/2      1       1      1/2     0        5       10
−M    a2        1/2      0       0     −1/2     1        1        2 →
      zj      1−M/2      2       2     1+M/2   −M
 Δj = Cj − zj 2+M/2      0      −1   −1−3M/2    0
 2    x2         0       1       1       1      −1       4
 3    x1         1       0       0      −1       2       2
      zj         3       2       2      −1       4       14
 Δj = Cj − zj    0       0      −1     −M+1    −M−4


From the above simplex tableau,

(x1, x2, x3) = (2, 4, 0) is the optimal solution and 14 is the optimal value.
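The big-M formulation of this example can also be checked numerically. This is a convenience sketch assuming SciPy is available, again with the arbitrary choice M = 1e6.

```python
import numpy as np
from scipy.optimize import linprog

M = 1e6
# Variables ordered (x1, x2, x3, a1, a2); the maximization
# max 3x1 + 2x2 + x3 - M(a1 + a2) becomes min -3x1 - 2x2 - x3 + M(a1 + a2).
c = np.array([-3.0, -2.0, -1.0, M, M])
A_eq = np.array([[1.0, 2.0, 2.0, 1.0, 0.0],   # x1 + 2x2 + 2x3 + a1 = 10
                 [1.0, 1.0, 1.0, 0.0, 1.0]])  # x1 +  x2 +  x3 + a2 = 6
b_eq = np.array([10.0, 6.0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
print(res.x[:3], -res.fun)  # x = (2, 4, 0), z = 14, matching the tableau
```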
