Additional Network
and LP Algorithms
Chapter Guide. This chapter includes 3 sections. The minimum cost capacitated flow problem in Section 20.1 has been excised from Chapter 6. It deals with network flow in which the capacity of the arcs may be limited and external input/output flow occurs at different nodes. The objective is to determine the flow schedule that minimizes the associated cost while satisfying the capacity and external flow restrictions. The model can be specialized to represent the shortest-route and maximal-flow problems presented in Chapter 6. The remaining 2 sections have been excised from Chapter 7. They deal with the solution of large-scale LPs. The decomposition algorithm, a product of 1960s technology, breaks down LPs with special structures into smaller, computationally manageable subproblems. This classic algorithm is theoretically interesting but no longer computationally competitive, given the tremendous power of present-day computers. On the other hand, Karmarkar's interior-point algorithm, developed in the 1980s, is viable both theoretically and computationally. Unlike the simplex algorithm, its iterations cut across the interior of the solution space. From the computational standpoint, the result is a polynomial-time algorithm, as compared with the exponential-time simplex algorithm. The new algorithm is designed for extremely large LPs.
This chapter includes 7 solved examples, 34 end-of-section problems, 2 AMPL models, 1 Solver model, and 2 cases. The cases are in Appendix E on the CD. The AMPL/Excel/Solver/TORA programs are in folder ch20Files.
CD-2 Chapter 20 Additional Network and LP Algorithms
FIGURE 20.1 Capacitated arc with external flow: arc (i, j) carries flow xij at unit cost $cij within the capacity limits (lij, uij), with external flows [fi] and [fj] at nodes i and j
The new model determines the flows that minimize the total cost while satisfying
the flow restrictions on the arcs and the supply and demand amounts at the nodes. We
first present the capacitated network flow model and its equivalent linear programming formulation. The linear programming formulation is then used as the basis for the development of a special capacitated simplex algorithm for solving the network flow model.
Example 20.1-1
GrainCo supplies corn from three silos to three poultry farms. The supply amounts at the three
silos are 100, 200, and 50 thousand bushels and the demands at the three farms are 150, 80, and
120 thousand bushels. GrainCo mostly uses railroads to transport the corn to the farms, with the
exception of three routes where trucks are used.
Figure 20.2 shows the available routes between the silos and the farms. The silos are represented by nodes 1, 2, and 3, whose supply amounts are [100], [200], and [50], respectively. The farms are represented by nodes 4, 5, and 6, whose demand amounts are [-150], [-80], and [-120], respectively. The routes allow transshipping between the silos. Arcs (1, 4), (3, 4), and (5, 6) are truck routes. These routes have minimum and maximum capacities. For example, the capacity
of route (1, 4) is between 50 and 80 thousand bushels. All other routes use trainloads, whose
maximum capacity is practically unlimited. The transportation costs per bushel are indicated on
the respective arcs.
1 100 24 1
2 110 26 2
3 95 21 1
4 125 24 2
                                 Minority areas      Nonminority areas
School    Maximum enrollment       1     2     3        1      2
1               1500              20    12    10        4      5
2               2000              15    18     8        6      5
Student population               500   450   300     1000   1000
Month    Purchase price ($/ton)    Selling price ($/ton)
1                 200                       250
2                 190                       230
3                 195                       240
The maximum capacity of the silo is 800 tons. The owner has $100,000 cash on hand
which can be used to purchase new crops. Also, initially, at the start of month 1, the silo is
half full. It is estimated that the storage cost per ton per month is $10. Surplus cash earns
1% interest monthly. The objective is to determine the buy/sell policy the owner must fol-
low to maximize the total cash accumulation at the end of the three-month season.
Formulate the problem as a network model.
Minimize z = Σ(i, j)∈A cij xij

subject to

(flow out of node j) - (flow into node j) = fj, for all nodes j
lij ≤ xij ≤ uij, for all arcs (i, j) ∈ A

The lower bound lij is substituted out using xij = x′ij + lij. The new flow variable, x′ij, has an upper limit of uij - lij. Additionally, the net flow at node i becomes fi - lij, and that at node j is fj + lij. Figure 20.3 shows the transformation of activity (i, j) after the lower bound is substituted out.
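As a small illustration (a Python sketch, not part of the text), the substitution can be applied arc by arc. The data below are arc (1, 4) of Example 20.1-1 (50 ≤ x14 ≤ 80, f1 = 100, f4 = -150) followed by arc (3, 4):

```python
def substitute_lower_bound(l, u, f_i, f_j):
    """Apply x = x' + l on an arc (i, j) with l <= x <= u.

    Returns the upper bound on the new variable x' and the
    adjusted external flows at the tail node i and head node j."""
    return u - l, f_i - l, f_j + l

# Arc (1, 4): 50 <= x14 <= 80, f1 = 100, f4 = -150
print(substitute_lower_bound(50, 80, 100, -150))    # (30, 50, -100)
# Arc (3, 4): 70 <= x34 <= 120, f3 = 50, f4 = -100 after the first step
print(substitute_lower_bound(70, 120, 50, -100))    # (50, -20, -30)
```

The adjusted external flows produced this way are exactly those that appear in the substituted network of Example 20.1-2.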
20.1 Minimum-Cost Capacitated Flow Problem CD-5
Example 20.1-2
Write the linear program for the network in Figure 20.2, before and after the lower bounds are
substituted out.
The main constraints of the linear program relate the input and output flow for each node,
which yields the following LP:
              x12  x13  x14  x23  x25  x34  x35  x46  x56
Minimize z =    3    4    1    5    6    1    2    2    4
Node 1          1    1    1                                =  100
Node 2         -1              1    1                      =  200
Node 3              -1        -1         1    1            =   50
Node 4                   -1             -1         1       = -150
Node 5                             -1        -1        1   =  -80
Node 6                                            -1   -1  = -120
Note the arrangement of the constraint coefficients. The column associated with variable xij has exactly one +1 in row i and one -1 in row j; all its remaining coefficients are 0. This structure is typical of network flow models.
The optimum solution is z = $1,870,000 with x13 = 20 (thousand bushels), x14 = 80,
x23 = 20, x25 = 180, x34 = 90, x46 = 20, and x56 = 100. All the remaining variables are zero.
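The LP is small enough to verify numerically. The following sketch (not part of the text; it assumes scipy is installed) solves the same model with scipy.optimize.linprog, with flows stated in thousands of bushels:

```python
from scipy.optimize import linprog

arcs = [(1,2), (1,3), (1,4), (2,3), (2,5), (3,4), (3,5), (4,6), (5,6)]
cost = [3, 4, 1, 5, 6, 1, 2, 2, 4]                     # $ per bushel
f = {1: 100, 2: 200, 3: 50, 4: -150, 5: -80, 6: -120}  # external flows

# Node-arc incidence matrix: +1 in row i and -1 in row j for arc (i, j)
A_eq = [[1 if i == n else -1 if j == n else 0 for (i, j) in arcs]
        for n in sorted(f)]
b_eq = [f[n] for n in sorted(f)]

# Truck routes carry finite lower/upper bounds; rail routes are uncapacitated
bound = {(1,4): (50, 80), (3,4): (70, 120), (5,6): (100, 120)}
bnds = [bound.get(a, (0, None)) for a in arcs]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bnds, method="highs")
print(res.fun)   # 1870.0, i.e., z = $1,870,000
```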
The variables with lower bounds are substituted as

x14 = x′14 + 50
x34 = x′34 + 70
x56 = x′56 + 100
              x12  x13  x′14  x23  x25  x′34  x35  x46  x′56
Minimize z =    3    4    1     5    6    1     2    2    4
Node 1          1    1    1                                  =   50
Node 2         -1               1    1                       =  200
Node 3              -1         -1          1    1            =  -20
Node 4                    -1              -1          1      =  -30
Node 5                               -1        -1         1  = -180
Node 6                                               -1  -1  =  -20
Upper bounds    ∞    ∞   30     ∞    ∞   50     ∞    ∞   20
FIGURE 20.4 Network of Example 20.1-2 after substituting out lower bounds: external flows become [50] at node 1, [200] at node 2, [-20] at node 3, [-30] at node 4, [-180] at node 5, and [-20] at node 6; the residual capacities (30), (50), and (20) apply to arcs (1, 4), (3, 4), and (5, 6)
The corresponding network after substituting out the lower bounds is shown in Figure 20.4.
Note that the lower-bound substitution can be done directly from Figure 20.2 using the substitution
in Figure 20.3, and without the need to express the problem as a linear program first.
The optimum solution is z′ = $1350 thousand (or z = 1350 + 50 × 1 + 70 × 1 + 100 × 4 = $1870 thousand) with x13 = 20 (thousand bushels), x′14 = 30 (or x14 = 30 + 50 = 80), x23 = 20, x25 = 180, x′34 = 20 (or x34 = 20 + 70 = 90), x46 = 20, and x′56 = 0 (or x56 = 0 + 100 = 100), which, of course, is the same solution given by the before-substitution solution.
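The equivalence can be confirmed numerically as well (a sketch assuming scipy, not part of the text): solving the substituted problem gives z′ = 1350, and adding back the cost contribution of the lower bounds, 50 × 1 + 70 × 1 + 100 × 4 = 520, recovers z = 1870:

```python
from scipy.optimize import linprog

arcs = [(1,2), (1,3), (1,4), (2,3), (2,5), (3,4), (3,5), (4,6), (5,6)]
cost = [3, 4, 1, 5, 6, 1, 2, 2, 4]
lower = {(1,4): 50, (3,4): 70, (5,6): 100}   # substituted-out lower bounds
f = {1: 50, 2: 200, 3: -20, 4: -30, 5: -180, 6: -20}  # adjusted flows

A_eq = [[1 if i == n else -1 if j == n else 0 for (i, j) in arcs]
        for n in sorted(f)]
b_eq = [f[n] for n in sorted(f)]
upper = {(1,4): 30, (3,4): 50, (5,6): 20}    # u_ij - l_ij
bnds = [(0, upper.get(a)) for a in arcs]      # None means unbounded

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bnds, method="highs")
z = res.fun + sum(cost[arcs.index(a)] * l for a, l in lower.items())
print(res.fun, z)   # 1350.0 1870.0
```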
Example 20.1-3 (Employment Scheduling)
Workers are needed in the amounts of 100, 120, 80, and 170 during the months of January, February, March, and April, respectively. Because of the change in demand, it may be economical to retain more workers than needed in a given month. The cost of recruiting and maintaining a worker depends on the length of the employment period, as the following table shows:

Employment period (months)      1      2      3      4
Cost per worker ($)           100    130    180    220
Let
xij = number of workers hired at the start of month
i and terminated at the start of month j
For example, x12 gives the number of workers hired in January for 1 month only. To formulate the problem as a linear program for the 4-month period, we add May as a dummy month (month 5), so that x45 defines hiring in April for April only.
The constraints recognize that the demand for period k can be satisfied by all xij such that i ≤ k < j. Letting si ≥ 0 be the surplus number of workers in month i, the linear program is given as
          x12  x13  x14  x15  x23  x24  x25  x34  x35  x45   s1   s2   s3   s4
Minimize  100  130  180  220  100  130  180  100  130  100
Jan.        1    1    1    1                                 -1                 = 100
Feb.             1    1    1    1    1    1                       -1            = 120
March                 1    1         1    1    1    1                  -1       =  80
April                      1              1         1    1                  -1  = 170
The preceding LP does not have the (-1, +1) special structure of the network flow model (see Example 20.1-2). Nevertheless, the given linear program can be converted into an equivalent network flow model by using the following arithmetic manipulations: keep the first constraint as is, subtract from each remaining constraint the constraint immediately preceding it, and append a new last constraint that is the negative of the last original constraint.
The application of these manipulations to the employment scheduling example yields the
following linear program, whose structure fits the network flow model:
          x12  x13  x14  x15  x23  x24  x25  x34  x35  x45   s1   s2   s3   s4
Minimize  100  130  180  220  100  130  180  100  130  100
Jan.        1    1    1    1                                 -1                 =  100
Feb.       -1                   1    1    1                   1   -1            =   20
March           -1             -1              1    1              1   -1       =  -40
April                -1             -1        -1         1              1   -1  =   90
May                       -1              -1        -1  -1                   1  = -170
Using the preceding formulation, the employment scheduling model can be represented equivalently by the minimum cost flow network shown in Figure 20.5. Actually, because the arcs have no upper bounds, the problem can be solved also as a transshipment model (see Section 5.5).
The optimum solution is x15 = 100, x25 = 20, x45 = 50, s3 = 40, and all the remaining variables are zero. The following table summarizes the hiring and firing activities over the 4-month horizon. The total cost is $30,600.
Month                 1     2     3     4
Workers hired       100    20     0    50
Workers available   100   120   120   170
Demand              100   120    80   170
Surplus (si)          0     0    40     0

FIGURE 20.5 Network representation of the employment scheduling problem
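As an outside check (a scipy sketch, not part of the text), solving the original pre-manipulation formulation directly reproduces the total cost of $30,600:

```python
from scipy.optimize import linprog

# x_ij = workers hired at start of month i, terminated at start of month j
pairs = [(i, j) for i in range(1, 5) for j in range(i + 1, 6)]
cost = {(1,2): 100, (1,3): 130, (1,4): 180, (1,5): 220, (2,3): 100,
        (2,4): 130, (2,5): 180, (3,4): 100, (3,5): 130, (4,5): 100}
demand = {1: 100, 2: 120, 3: 80, 4: 170}

# Month k is covered by every x_ij with i <= k < j, less the surplus s_k
c = [cost[p] for p in pairs] + [0, 0, 0, 0]
A_eq, b_eq = [], []
for k in sorted(demand):
    row = [1 if i <= k < j else 0 for (i, j) in pairs]
    row += [-1 if m == k else 0 for m in range(1, 5)]
    A_eq.append(row)
    b_eq.append(demand[k])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.fun)   # 30600.0
```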
FIGURE 20.7 Network for Problem 5, Set 20.1b
Σ (i = 1, …, n) fi = 0

The condition says that the total supply in the network equals the total demand. We can always satisfy this requirement by adding a balancing dummy source or destination, which we connect to all other nodes in the network by arcs with zero unit cost and infinite capacity. However, the balancing of the network does not guarantee feasibility, as this depends on the capacities of the arcs.
We now present the steps of the capacitated algorithm. Familiarity with the simplex method and duality theory (Chapters 3 and 4) is essential. Also, knowledge of the upper-bounded simplex method (Section 7.3) is helpful.
Step 0. Determine a starting basic feasible solution (set of arcs) for the network. Go
to step 1.
Step 1. Determine an entering arc (variable) using the simplex method optimality
condition. If the solution is optimal, stop; otherwise, go to step 2.
Step 2. Determine the leaving arc (variable) using the simplex method feasibility
condition. Determine the new solution, then go to step 1.
Example 20.1-4
A network of pipelines connects two water desalination plants to two cities. The daily supply
amounts at the two plants are 40 and 50 million gallons and the daily demand amounts at cities 1
and 2 are 30 and 60 million gallons. Nodes 1 and 2 represent plants 1 and 2, and nodes 4 and 5
represent cities 1 and 2. Node 3 is a booster station between the plants and the cities. The model
is already balanced because the sum of the supply at nodes 1 and 2 equals the sum of the demand
at nodes 4 and 5. Figure 20.8 gives the associated network.
Iteration 0
Step 0. Determination of a starting basic feasible solution: The starting basic solution must be a spanning tree. The feasible spanning tree in Figure 20.9 (shown with solid arcs) consists of arcs (1, 3), (1, 4), (2, 3), and (3, 5), with the flows x13 = 10, x14 = 30, x23 = 50, and x35 = 60.
FIGURE 20.8 Network for Example 20.1-4: Plant 1 (node 1, [40]) and Plant 2 (node 2, [50]) serve City 1 (node 4, [30]) and City 2 (node 5, [60]) through booster station 3. Unit costs are $3 on arc (1, 2), $7 on (1, 3), $5 on (1, 4), $2 on (2, 3), $1 on (2, 5), $8 on (3, 5), and $4 on (4, 5); arc (1, 4) has capacity 35 and arc (2, 5) has capacity 30, and all other arcs are uncapacitated
w1 = 0, w2 = -5, w3 = -7, w4 = -5, w5 = -15
z12 - c12 = w1 - w2 - c12 = 0 - (-5) - 3 = 2
z25 - c25 = w2 - w5 - c25 = -5 - (-15) - 1 = 9
z45 - c45 = w4 - w5 - c45 = -5 - (-15) - 4 = 6
Arc (2, 5) reaches its upper bound at 30. Substitute x25 = 30 - x52.

FIGURE 20.9 Network for Iteration 0: basic (spanning tree) arcs (1, 3), (1, 4), (2, 3), and (3, 5) with flows 10, 30, 50, and 60
Iteration 1
Step 1. Determination of the entering arc: We obtain the dual values by solving the current basic equations

w1 = 0
wi - wj = cij, for each basic arc (i, j)

We thus get w1 = 0, w2 = -5, w3 = -7, w4 = -5, and w5 = -15. For each nonbasic arc, zij - cij = wi - wj - cij, which gives z12 - c12 = 2, z25 - c25 = 9, and z45 - c45 = 6. Arc (2, 5), having the largest positive zij - cij, enters the solution.
Step 2. Determination of the leaving arc: Increasing the flow in the entering arc (2, 5) changes the flows in the loop (2, 5)-(3, 5)-(2, 3) that the entering arc forms with the basic arcs. The amount of the flow change is limited by two conditions:
1. New flow in the current basic arcs of the loop cannot be negative.
2. New flow in the entering arc cannot exceed its capacity.
The application of condition 1 shows that the flows in arcs (2, 3) and (3, 5) cannot be decreased by more than min{50, 60} = 50 units. By condition 2, the flow in arc (2, 5) can be increased to at most the arc capacity (= 30 units). Thus, the maximum flow change in the loop is min{30, 50} = 30 units. The new flows in the loop are thus 30 units in arc (2, 5), 50 - 30 = 20 units in arc (2, 3), and 60 - 30 = 30 units in arc (3, 5).
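Step 1 can be sketched in code (Python, not part of the text; the arc costs and the iteration-0 tree are those of Example 20.1-4). Starting from w1 = 0, the equations wi - wj = cij are propagated through the spanning tree, and the nonbasic arc with the largest positive zij - cij is selected to enter:

```python
from collections import deque

cost = {(1,2): 3, (1,3): 7, (1,4): 5, (2,3): 2,
        (2,5): 1, (3,5): 8, (4,5): 4}
tree = [(1,3), (1,4), (2,3), (3,5)]   # basic arcs at iteration 0

# Solve w1 = 0 and wi - wj = cij for every basic arc (i, j)
w = {1: 0}
queue = deque([1])
while queue:
    n = queue.popleft()
    for (i, j) in tree:
        if i == n and j not in w:
            w[j] = w[i] - cost[(i, j)]; queue.append(j)
        elif j == n and i not in w:
            w[i] = w[j] + cost[(i, j)]; queue.append(i)

# Entering arc: nonbasic arc with the largest positive zij - cij
nonbasic = [a for a in cost if a not in tree]
zc = {(i, j): w[i] - w[j] - cost[(i, j)] for (i, j) in nonbasic}
entering = max(zc, key=zc.get)
print(w)               # {1: 0, 3: -7, 4: -5, 2: -5, 5: -15}
print(zc, entering)    # arc (2, 5) enters with z25 - c25 = 9
```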
Because none of the current basic arcs leaves the basis at zero level, the new arc (2, 5) must remain nonbasic at its upper bound. However, to avoid dealing with nonbasic arcs that are at capacity (or upper bound) level, we implement the substitution x25 = 30 - x52. This substitution is effected in the flow equations associated with nodes 2 and 5 as follows: the node 2 equation x23 + x25 = 50 becomes x23 - x52 = 20, and the node 5 equation -x25 - x35 - x45 = -60 becomes x52 - x35 - x45 = -30.
w1 = 0, w2 = -5, w3 = -7, w4 = -5, w5 = -15
z12 - c12 = 0 - (-5) - 3 = 2
z52 - c52 = -15 - (-5) - (-1) = -9
z45 - c45 = -5 - (-15) - 4 = 6
Arc (4, 5) enters at level 5. Arc (1, 4) leaves at its upper bound. Substitute x14 = 35 - x41. Reduce x13 and x35 each by 5.

FIGURE 20.10 Network for iteration 1: basic arcs (1, 3), (1, 4), (2, 3), and (3, 5) with flows 10, 30, 20, and 30; the external flows at nodes 2 and 5 become [20] and [30]; arc (5, 2) is nonbasic with reversed capacity (30)* and unit cost -$1
The results of these changes are shown in Figure 20.10. The direction of flow in arc (2, 5) is now reversed to 5 → 2 with x52 = 0, as desired. The substitution also requires changing the unit cost of arc (5, 2) to -$1. We will indicate this direction reversal on the network by tagging the arc capacity with an asterisk.
Iteration 2. Figure 20.10 summarizes the new values of zij - cij (verify!) and shows that arc (4, 5)
enters the basic solution. It also defines the loop associated with the new entering arc
and assigns the signs to its arcs.
The flow in arc (4, 5) can be increased by the smallest of the allowable increase in arc (1, 4), 35 - 30 = 5 units, and the allowable decreases in arcs (1, 3) and (3, 5), 10 and 30 units, respectively. Thus, the flow in arc (4, 5) can be increased to 5 units, which will make (4, 5) basic and will force basic arc (1, 4) to be nonbasic at its upper bound (= 35).
Using the substitution x14 = 35 - x41, the network is changed as shown in
Figure 20.11, with arcs (1, 3), (2, 3), (3, 5), and (4, 5) forming the basic (spanning
tree) solution. The reversal of flow in arc (1, 4) changes its unit cost to - $5. Also,
convince yourself that the substitution in the flow equations of nodes 1 and 4 will
net 5 input units at each node.
Iteration 3. The computations of the new zij - cij for the nonbasic arcs (1, 2), (4, 1), and (5, 2)
are summarized in Figure 20.11, which shows that arc (1, 2) enters at level 5, and
arc (1,3) becomes nonbasic at level 0. Figure 20.12 depicts the new solution.
Iteration 4. The new zij - cij in Figure 20.12 shows that the solution is optimum. Back substi-
tution yields the values of the original variables as shown in Figure 20.12.
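As an independent check (a scipy sketch, not part of the text), Example 20.1-4 can also be solved directly as an LP. It reproduces the optimum flows of Figure 20.12; the corresponding total cost works out to $490 (computed here; the text itself does not quote the optimal cost):

```python
from scipy.optimize import linprog

arcs = [(1,2), (1,3), (1,4), (2,3), (2,5), (3,5), (4,5)]
cost = [3, 7, 5, 2, 1, 8, 4]
f = {1: 40, 2: 50, 3: 0, 4: -30, 5: -60}   # supplies and demands

A_eq = [[1 if i == n else -1 if j == n else 0 for (i, j) in arcs]
        for n in sorted(f)]
b_eq = [f[n] for n in sorted(f)]
caps = {(1,4): 35, (2,5): 30}              # capacitated arcs
bnds = [(0, caps.get(a)) for a in arcs]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bnds, method="highs")
flows = dict(zip(arcs, res.x))
print(round(res.fun), flows)
```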
w1 = 0, w2 = -5, w3 = -7, w4 = -11, w5 = -15
z12 - c12 = 0 - (-5) - 3 = 2
z41 - c41 = -11 - 0 - (-5) = -6
z52 - c52 = -15 - (-5) - (-1) = -9
Arc (1, 2) enters at level 5. Arc (1, 3) leaves at level 0. Increase x23 by 5.

FIGURE 20.11 Network for iteration 2: basic arcs (1, 3), (2, 3), (3, 5), and (4, 5) with flows 5, 20, 25, and 5; external flows [5] at nodes 1 and 4, [20] at node 2, and [30] at node 5; arcs (4, 1) and (5, 2) are nonbasic with reversed capacities (35)* and (30)*
w1 = 0, w2 = -3, w3 = -5, w4 = -9, w5 = -13
z13 - c13 = 0 - (-5) - 7 = -2
z41 - c41 = -9 - 0 - (-5) = -4
z52 - c52 = -13 - (-3) - (-1) = -9
Optimum solution: x12 = 5, x13 = 0, x14 = 35 - 0 = 35, x23 = 25, x25 = 30 - 0 = 30, x35 = 25, x45 = 5

FIGURE 20.12 Network for iteration 3
Solver Moment
The basic idea of the Excel Solver minimum cost capacitated model is similar to the
one detailed in Example 6.3-6 for the shortest-route model. File solverEx20.1-4.xls
provides the details.
AMPL Moment
File amplEx20.1-4.txt is a general model that can be used to solve the minimum cost
capacitated problem of any size. The idea of the model is similar to that of the shortest-
route AMPL model for Example 6.3-6.
2. Solve Problem 2, Set 20.1a, by the capacitated simplex algorithm, and also show that it
can be solved by the transshipment model.
3. Solve Problem 3, Set 20.1a by the capacitated simplex algorithm.
4. Solve Problem 4, Set 20.1a, by the capacitated simplex algorithm.
*5. Solve Problem 5, Set 20.1a, by the capacitated simplex algorithm.
6. Solve the employment scheduling problem of Example 20.1-3 by the capacitated simplex
algorithm.
7. Wyoming Electric uses existing slurry pipes to transport coal (carried by pumped water)
from three mining areas (1, 2, and 3) to three power plants (4, 5, and 6). Each pipe can
transport at most 10 tons per hour. The transportation costs per ton and the supply and
demand per hour are given in the following table:
4 5 6 Supply
1 $5 $8 $4 8
2 $6 $9 $12 10
3 $3 $1 $5 18
Demand 16 6 14
FIGURE 20.13 Network for Problem 8, Set 20.1c
Maximize z = C1X1 + C2X2 + ⋯ + CnXn

subject to

A1X1 + A2X2 + ⋯ + AnXn ≤ b0
D1X1 ≤ b1
D2X2 ≤ b2
⋮
DnXn ≤ bn
Xj ≥ 0, j = 1, 2, …, n
FIGURE 20.14 Layout of a decomposable linear program: a block of common constraints couples activities 1, 2, …, n, and each activity has its own block of independent constraints
20.2 Decomposition Algorithm CD-17
The slack and surplus variables are added as necessary to convert all the inequalities into equations.
The decomposition principle is based on representing the entire problem in terms of the extreme points of the sets {Xj | DjXj ≤ bj, Xj ≥ 0}, j = 1, 2, …, n. To do so, the solution space described by each such set must be bounded. This requirement can always be satisfied for any set j by adding the artificial restriction 1Xj ≤ M, where M is sufficiently large.
Let X̂jk, k = 1, 2, …, Kj, define the extreme points of {Xj | DjXj ≤ bj, Xj ≥ 0}.
We then have

Xj = Σ(k = 1..Kj) βjk X̂jk,  j = 1, 2, …, n

where

βjk ≥ 0 for all k and Σ(k = 1..Kj) βjk = 1
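This representation can be illustrated numerically (a sketch, not part of the text). Finding the weights βjk for a given feasible point is itself a small LP feasibility problem; the set used here is {5x1 + x2 ≤ 12, x1, x2 ≥ 0}, which reappears as subproblem 1 of Example 20.2-1 below, with extreme points (0, 0), (2.4, 0), and (0, 12):

```python
from scipy.optimize import linprog

# Extreme points of {5x1 + x2 <= 12, x1, x2 >= 0}
pts = [(0.0, 0.0), (2.4, 0.0), (0.0, 12.0)]
x = (1.0, 5.0)        # a feasible point: 5(1) + 5 = 10 <= 12

# Find beta >= 0 with sum(beta) = 1 and sum(beta_k * point_k) = x
A_eq = [[p[0] for p in pts],
        [p[1] for p in pts],
        [1.0, 1.0, 1.0]]
b_eq = [x[0], x[1], 1.0]
res = linprog([0, 0, 0], A_eq=A_eq, b_eq=b_eq, bounds=(0, None),
              method="highs")
print(res.x)   # the weights beta, here (1/6, 5/12, 5/12)
```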
We can reformulate the entire problem in terms of the extreme points to obtain
the following master problem:
Maximize z = Σ(k = 1..K1) (C1X̂1k)β1k + Σ(k = 1..K2) (C2X̂2k)β2k + ⋯ + Σ(k = 1..Kn) (CnX̂nk)βnk

subject to

Σ(k = 1..K1) (A1X̂1k)β1k + Σ(k = 1..K2) (A2X̂2k)β2k + ⋯ + Σ(k = 1..Kn) (AnX̂nk)βnk ≤ b0
Σ(k = 1..K1) β1k = 1
Σ(k = 1..K2) β2k = 1
⋮
Σ(k = 1..Kn) βnk = 1
βjk ≥ 0, for all j and k
It may appear that the solution of the master problem requires prior determination of all the extreme points X̂jk, a difficult task indeed! Fortunately, it is not so.
To solve the master problem by the revised simplex method (Section 7.2), we need to determine the entering and the leaving variables at each iteration. For the entering variable, given CB and B⁻¹ of the current basis of the master problem, then for nonbasic βjk we have

zjk - cjk = CBB⁻¹Pjk - cjk

where

cjk = CjX̂jk  and  Pjk = (AjX̂jk, 0, …, 1, …, 0)ᵀ

with the element 1 appearing in the row of the jth convexity constraint.
Now, to decide which, if any, of the nonbasic variables βjk should enter the solution, we need to determine

zj*k* - cj*k* = min over all j and k {zjk - cjk}

The reason we are able to establish this identity is that each convex set {Xj | DjXj ≤ bj, Xj ≥ 0} has its independent set of extreme points. In effect, what the identity says is that we can determine zj*k* - cj*k* in two steps:

Step 1. For each convex set {Xj | DjXj ≤ bj, Xj ≥ 0}, determine the extreme point X̂jk* that yields the smallest zjk - cjk, that is, zjk* - cjk* = min over k {zjk - cjk}.
Step 2. Select the overall minimum over the n sets: zj*k* - cj*k* = min over j {zjk* - cjk*}.
From LP theory, we know that the optimum solution, when finite, must be associ-
ated with an extreme point of the solution space. Because each of the sets
{Xj | DjXj ≤ bj, Xj ≥ 0} is bounded by definition, step 1 is mathematically equivalent to solving n linear programs of the form

Minimize wj = {zj - cj | DjXj ≤ bj, Xj ≥ 0}
This approach precludes the need to determine all the extreme points of all n convex sets. Once the desired extreme point is located, all the elements of the column vector Pj*k* are defined. We can then determine the leaving variable and subsequently compute the next B⁻¹ using the revised simplex method computations.
Example 20.2-1
Solve the following LP by the decomposition algorithm:
Maximize z = 3x1 + 5x2 + x3 + x4
subject to
x1 + x2 + x3 + x4 … 40
5x1 + x2 … 12
x3 + x4 Ú 5
x3 + 5x4 … 50
x1, x2, x3, x4 Ú 0
The problem has two subproblems that correspond to the following sets of variables:
X1 = (x1, x2)ᵀ,  X2 = (x3, x4)ᵀ
                  Subproblem 1                 Subproblem 2          Starting basic solution
            β11     β12    ⋯   β1K1      β21     β22    ⋯   β2K2      x5    x6    x7
Objective  C1X̂11  C1X̂12  ⋯  C1X̂1K1   C2X̂21  C2X̂22  ⋯  C2X̂2K2     0    -M    -M
           A1X̂11  A1X̂12  ⋯  A1X̂1K1   A2X̂21  A2X̂22  ⋯  A2X̂2K2     1     0     0   = 40
              1       1    ⋯     1         0       0    ⋯     0        0     1     0   =  1
              0       0    ⋯     0         1       1    ⋯     1        0     0     1   =  1

where

C1 = (3, 5),  C2 = (1, 1),  A1 = (1, 1),  A2 = (1, 1)

Solution space, D1X1 ≤ b1:        Solution space, D2X2 ≤ b2:
5x1 + x2 ≤ 12                      x3 + x4 ≥ 5
x1, x2 ≥ 0                         x3 + 5x4 ≤ 50
                                   x3, x4 ≥ 0
Notice that x5 is the slack variable that converts the common constraint to the following
equation:
x1 + x2 + x3 + x4 + x5 = 40
Recall that subproblems 1 and 2 account for variables x1, x2, x3, and x4 only. This is the reason x5
must appear explicitly in the master problem. The remaining starting basic variables, x6 and x7,
are artificial.
Iteration 0
The starting basis is B = (P5, P6, P7) = I, with B⁻¹ = I, XB = (x5, x6, x7)ᵀ = (40, 1, 1)ᵀ, and CB = (0, -M, -M).

Iteration 1
Subproblem 1 (j = 1). We have

z1 - c1 = CBB⁻¹ (A1X1, 1, 0)ᵀ - C1X1
        = (0, -M, -M)((1, 1)(x1, x2)ᵀ, 1, 0)ᵀ - (3, 5)(x1, x2)ᵀ
        = -3x1 - 5x2 - M

The associated linear program is thus

Minimize w1 = -3x1 - 5x2 - M
subject to
5x1 + x2 ≤ 12
x1, x2 ≥ 0

Its optimum occurs at the extreme point X̂11 = (0, 12)ᵀ, which yields z1* - c1* = w1* = -60 - M.

Subproblem 2 (j = 2). Similarly,

z2 - c2 = CBB⁻¹ (A2X2, 0, 1)ᵀ - C2X2
        = (0, -M, -M)((1, 1)(x3, x4)ᵀ, 0, 1)ᵀ - (1, 1)(x3, x4)ᵀ
        = -x3 - x4 - M

The associated linear program is

Minimize w2 = -x3 - x4 - M
subject to
x3 + x4 ≥ 5
x3 + 5x4 ≤ 50
x3, x4 ≥ 0

Its optimum occurs at the extreme point X̂21 = (50, 0)ᵀ, which yields z2* - c2* = w2* = -50 - M.
Because the master problem is of the maximization type and z1* - c1* < z2* - c2* and z1* - c1* < 0, it follows that β11 associated with extreme point X̂11 enters the solution. To determine the leaving variable,

P11 = (A1X̂11, 1, 0)ᵀ = ((1, 1)(0, 12)ᵀ, 1, 0)ᵀ = (12, 1, 0)ᵀ

Thus, B⁻¹P11 = (12, 1, 0)ᵀ. Given XB = (x5, x6, x7)ᵀ = (40, 1, 1)ᵀ, it follows that x6 (an artificial variable) leaves the basic solution (permanently).

The new basis is determined by replacing the vector associated with x6 with the vector P11, which gives (verify!)

B = | 1  12  0 |        B⁻¹ = | 1  -12  0 |
    | 0   1  0 |   ⇒          | 0    1  0 |
    | 0   0  1 |              | 0    0  1 |

CB = (0, C1X̂11, -M) = (0, 60, -M)
Iteration 2
Subproblem 1 (j = 1). The associated objective function is now Minimize w1 = -3x1 - 5x2 + 60 (verify!). The optimum solution yields z1* - c1* = w1* = 0, which means that none of the remaining extreme points in subproblem 1 can improve the solution to the master problem.
Subproblem 2 (j = 2). The associated objective function is (coincidentally) the same as for j = 2 in Iteration 1 (verify!). The optimum solution yields

X̂22 = (50, 0)ᵀ,  z2* - c2* = -50 - M
Note that X̂22 is actually the same extreme point as X̂21. The second subscript, 2, is used for notational convenience to represent iteration 2.
From the results of the two subproblems, z2* - c2* < 0 indicates that β22 associated with X̂22 enters the basic solution.
To determine the leaving variable, consider

P22 = (A2X̂22, 0, 1)ᵀ = ((1, 1)(50, 0)ᵀ, 0, 1)ᵀ = (50, 0, 1)ᵀ

Thus, B⁻¹P22 = (50, 0, 1)ᵀ. Because XB = (x5, β11, x7)ᵀ = (28, 1, 1)ᵀ, x5 leaves.
The new basis and its inverse are given as (verify!)

B = | 50  12  0 |        B⁻¹ = |  1/50  -12/50  0 |
    |  0   1  0 |   ⇒          |    0      1    0 |
    |  1   0  1 |              | -1/50   12/50  1 |
Iteration 3
Subproblem 1 (j = 1). You should verify that the associated objective function is

Minimize w1 = (M/50 - 2)x1 + (M/50 - 4)x2 - 12M/50 + 48

For sufficiently large M, its optimum occurs at X1 = (0, 0)ᵀ, which yields z1* - c1* = w1* = -12M/50 + 48.

Subproblem 2 (j = 2). The associated objective function is

Minimize w2 = (M/50)(x3 + x4) - M

Its optimum occurs at the extreme point X̂23 = (5, 0)ᵀ, which yields z2* - c2* = w2* = -9M/10.

Nonbasic variable x5. From the definition of the master problem, z5 - c5 of x5 must be computed and compared separately. Thus,

z5 - c5 = CBB⁻¹P5 - c5 = (1 + M/50, 48 - 12M/50, -M)(1, 0, 0)ᵀ - 0 = 1 + M/50

Because z2* - c2* is the most negative, β23 associated with X̂23 = (5, 0)ᵀ enters the solution. To determine the leaving variable,

P23 = (A2X̂23, 0, 1)ᵀ = ((1, 1)(5, 0)ᵀ, 0, 1)ᵀ = (5, 0, 1)ᵀ

Thus, B⁻¹P23 = (1/10, 0, 9/10)ᵀ. Given that XB = (β22, β11, x7)ᵀ = (14/25, 1, 11/25)ᵀ, the artificial variable x7 leaves the basic solution (permanently).
Iteration 4
Subproblem 1 (j = 1). w1 = -2x1 - 4x2 + 48. It yields z1* - c1* = w1* = 0.
Subproblem 2 (j = 2). w2 = 0x3 + 0x4 + 0 = 0. It yields z2* - c2* = w2* = 0.
Nonbasic variable x5: z5 - c5 = 1. The preceding information shows that Iteration 3 is optimal.
We can compute the optimum solution of the original problem by back-substitution:

X1 = β11X̂11 = 1 × (0, 12)ᵀ = (0, 12)ᵀ
X2 = β22X̂22 + β23X̂23 = (23/45)(50, 0)ᵀ + (22/45)(5, 0)ᵀ = (28, 0)ᵀ

All the remaining variables are zero. The optimum value of the objective function can then be obtained by direct substitution: z = 3(0) + 5(12) + 1(28) + 1(0) = 88.
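Because both subproblem spaces are small, the master problem can also be written out explicitly over all of their extreme points and solved directly (a scipy sketch, not part of the text; this is essentially what Problem 2 of the following set asks for). Its optimal value agrees with the back-substituted solution X2 = (28, 0)ᵀ:

```python
from scipy.optimize import linprog

# All extreme points of the two subproblem spaces of Example 20.2-1
pts1 = [(0, 0), (2.4, 0), (0, 12)]          # {5x1 + x2 <= 12}
pts2 = [(5, 0), (50, 0), (0, 5), (0, 10)]   # {x3 + x4 >= 5, x3 + 5x4 <= 50}
C1, C2, A1, A2 = (3, 5), (1, 1), (1, 1), (1, 1)

obj = ([C1[0]*p[0] + C1[1]*p[1] for p in pts1]
       + [C2[0]*p[0] + C2[1]*p[1] for p in pts2])   # C_j X_jk
com = ([A1[0]*p[0] + A1[1]*p[1] for p in pts1]
       + [A2[0]*p[0] + A2[1]*p[1] for p in pts2])   # A_j X_jk
conv1 = [1]*len(pts1) + [0]*len(pts2)                # convexity, j = 1
conv2 = [0]*len(pts1) + [1]*len(pts2)                # convexity, j = 2

res = linprog([-v for v in obj],                     # maximize
              A_ub=[com], b_ub=[40],
              A_eq=[conv1, conv2], b_eq=[1, 1],
              bounds=(0, None), method="highs")
print(-res.fun)   # 88.0
```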
(b)
2x1 + x2 ≤ 2
3x1 + 4x2 ≥ 12
x1, x2 ≥ 0
*(c)
x1 - x2 ≤ 10
2x1 ≤ 40
x1, x2 ≥ 0
*2. In Example 20.2-1, the feasible extreme points of the subspaces D1X1 ≤ b1, X1 ≥ 0 and D2X2 ≤ b2, X2 ≥ 0 can be determined graphically. Use this information to express the associated master problem explicitly in terms of the feasible extreme points. Then show that the application of the simplex method to the master problem produces the same entering variable βjk as that generated by solving subproblems 1 and 2. Hence, convince yourself that the determination of the entering variable βjk is exactly equivalent to solving the two minimization subproblems.
3. Consider the following linear program:
Maximize z = x1 + 3x2 + 5x3 + 2x4
subject to
x1 + 4x2 ≤ 8
2x1 + x2 ≤ 9
5x1 + 3x2 + 4x3 ≥ 10
x3 - 5x4 ≤ 4
x3 + x4 ≤ 10
x1, x2, x3, x4 ≥ 0
Construct the master problem explicitly by using the extreme points of the subspaces,
and then solve the resulting problem directly by the simplex method.
4. Solve Problem 3 using the decomposition algorithm and compare the two procedures.
5. Apply the decomposition algorithm to the following problem:
Maximize z = 6x1 + 7x2 + 3x3 + 5x4 + x5 + x6
subject to
x1 + x2 + x3 + x4 + x5 + x6 ≤ 50
x1 + x2 ≤ 10
x2 ≤ 8
5x3 + x4 ≤ 12
x5 + x6 ≥ 5
x5 + 5x6 ≤ 50
x1, x2, x3, x4, x5, x6 ≥ 0
20.3 Karmarkar Interior-Point Method CD-25
*6. Indicate the necessary changes for applying the decomposition algorithm to minimization
LPs. Then solve the following problem:
Minimize z = 5x1 + 3x2 + 8x3 - 5x4
subject to
x1 + x2 + x3 + x4 ≥ 25
5x1 + x2 ≤ 20
5x1 - x2 ≥ 5
x3 + x4 = 20
x1, x2, x3, x4 ≥ 0
The vector R represents the first r columns of B-1 and Vr + j is its 1r + j2th column.
Consider the problem

Maximize z = x1

subject to

0 ≤ x1 ≤ 2

Using x2 as a slack variable, the problem can be rewritten as
Maximize z = x1
subject to
x1 + x2 = 2
x1, x2 Ú 0
Figure 20.15 depicts the problem. The solution space is given by the line segment
AB. The direction of increase in z is in the positive direction of x1.
Let us start with any arbitrary interior (nonextreme) point C in the feasible space (line AB). The gradient of the objective function (maximize z = x1) at C is the direction of fastest increase in z. If we locate an arbitrary point along the gradient and then project it perpendicularly on the feasible space (line AB), we obtain the new point D with a better objective value z. Such improvement is obtained by moving in the direction of the projected gradient CD. If we repeat the procedure at D, we will determine a new closer-to-optimum point E. Conceivably, if we move (cautiously) in the direction of the projected gradient, we will reach the optimum at point B. If we are minimizing z (instead of maximizing), the projected gradient will correctly move us away from point B toward the minimum at point A (x1 = 0).
The given steps hardly define an algorithm in the normal sense, but the idea is intriguing! We need some modifications that will guarantee that (1) the steps generated along the projected gradient will not "overshoot" the optimum point at B, and (2) in
FIGURE 20.15 Illustration of the general idea of Karmarkar's algorithm: the solution space is the line segment AB on x1 + x2 = 2, with A = (0, 2) and B = (2, 0); the gradient of z = x1 points in the positive x1 direction, and C, D, E are successive interior points
the general n-dimensional case, the direction created by the projected gradient will not
cause an “entrapment” of the algorithm at a nonoptimum point. This, basically, is what
Karmarkar’s interior-point algorithm accomplishes.
Karmarkar's algorithm applies to a linear program of the special form

Minimize z = CX
subject to
AX = 0
1X = 1
X ≥ 0

that satisfies two conditions:
1. X = (1/n, 1/n, …, 1/n) satisfies AX = 0.
2. min z = 0.
Karmarkar provides modifications that allow solving the problem when the second
condition is not satisfied. These modifications will not be presented here.
The following example illustrates how a general LP can be made to satisfy the
two stipulated conditions.
Example 20.3-1
Consider the problem

Maximize z = y1 + y2

subject to

y1 + 2y2 ≤ 2
y1, y2 ≥ 0
We start by defining the primal and dual problems of the LP:

Primal:  Maximize z = y1 + y2 subject to y1 + 2y2 ≤ 2, y1, y2 ≥ 0
Dual:    Minimize w = 2w1 subject to w1 ≥ 1, 2w1 ≥ 1, w1 ≥ 0
At the optimum, the primal and dual objective values are equal. Augmenting the slack y3 in the primal constraint and the surplus w2 in the dual constraint w1 ≥ 1 (the constraint 2w1 ≥ 1 is redundant), and equating the two objective values, we get

y1 + 2y2 + y3 = 2, y3 ≥ 0
w1 - w2 = 1, w2 ≥ 0                                  (20.1)
y1 + y2 - 2w1 = 0                                    (20.2)

For M sufficiently large, the sum of all the variables can be bounded as

y1 + y2 + y3 + w1 + w2 ≤ M                           (20.3)

or, augmenting the slack s1 ≥ 0,

y1 + y2 + y3 + w1 + w2 + s1 = M, s1 ≥ 0              (20.4)

Next, define a new variable s2. From (20.4) the following two equations hold if, and only if, the condition s2 = 1 holds:

y1 + y2 + y3 + w1 + w2 + s1 - Ms2 = 0
y1 + y2 + y3 + w1 + w2 + s1 + s2 = M + 1             (20.5)

Now, given s2 = 1 as stipulated by (20.5), the primal and dual equations (20.1) can be written as

y1 + 2y2 + y3 - 2s2 = 0
w1 - w2 - 1s2 = 0                                    (20.6)
Now, define

yj = (M + 1)xj, j = 1, 2, 3
wj-3 = (M + 1)xj, j = 4, 5
s1 = (M + 1)x6
s2 = (M + 1)x7
Substitution in equations (20.2), (20.5), and (20.6) will produce the following equations:
x1 + x2 - 2x4 = 0
x1 + x2 + x3 + x4 + x5 + x6 - Mx7 = 0
x1 + x2 + x3 + x4 + x5 + x6 + x7 = 1
x1 + 2x2 + x3 - 2x7 = 0
x4 - x5 - x7 = 0
xj Ú 0, j = 1, 2, Á , 7
The final step calls for augmenting an artificial variable, x8, in the left-hand side of each equation. The new objective function will call for minimizing x8, whose obvious minimum value must be zero (assuming the primal is feasible). Note, however, that Karmarkar's algorithm requires the solution

X = (1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8)ᵀ

to be feasible for AX = 0. This will be true for the homogeneous equations (with zero right-hand side) if the coefficient of the artificial x8 in each equation equals the negative of the (algebraic) sum of all the other coefficients on the left-hand side. It thus follows that the transformed LP is given as
Minimize z = x8

subject to

x1 + x2 - 2x4 - 0x8 = 0
x1 + x2 + x3 + x4 + x5 + x6 - Mx7 - (6 - M)x8 = 0
x1 + 2x2 + x3 - 2x7 - 2x8 = 0
x4 - x5 - x7 + x8 = 0
x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 = 1
xj ≥ 0, j = 1, 2, …, 8
Note that the solution of this problem automatically yields the optimum solutions of the primal
and dual problems through back-substitution.
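A quick numerical check (a sketch, not part of the text) confirms that the center point X = (1/8, …, 1/8) satisfies the four homogeneous constraints for any value of M:

```python
M = 100.0                  # any sufficiently large value works
x = [1 / 8] * 8
A = [
    [1, 1, 0, -2, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, -M, -(6 - M)],
    [1, 2, 1, 0, 0, 0, -2, -2],
    [0, 0, 0, 1, -1, 0, -1, 1],
]
residuals = [sum(a * xi for a, xi in zip(row, x)) for row in A]
print(residuals, sum(x))   # [0.0, 0.0, 0.0, 0.0] 1.0
```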
Figure 20.16 provides a typical illustration of the solution space in three dimensions, with the homogeneous set AX = 0 consisting of only one equation. By definition, the solution space, consisting of the line segment AB, lies entirely in the two-dimensional simplex 1X = 1 and passes through the feasible interior point (1/3, 1/3, 1/3).
Steps of the Algorithm. Karmarkar’s algorithm starts from an interior point represented
by the center of the simplex and then advances in the direction of the projected gradient to
determine a new solution point. The new point must be strictly interior, meaning that all
its coordinates must be positive. The validity of the algorithm rests on this condition.
For the new solution point to be strictly interior, it must not lie on the boundaries of the simplex. (In terms of Figure 20.16, points A and B must be excluded.) To guarantee this result, a sphere with its center coinciding with that of the simplex is inscribed tightly inside the simplex. In the n-dimensional case, the radius r of this sphere equals 1/√(n(n - 1)).
A smaller sphere with radius αr (0 < α < 1) will be a subset of the sphere, and any point in the intersection of the smaller sphere with the homogeneous system AX = 0 will be an interior point with strictly positive coordinates. Thus, we can move as far as possible in this restricted space (the intersection of AX = 0 and the αr-sphere) along the projected gradient to determine the new (improved) solution point.
The new solution point no longer will be at the center of the simplex. For the procedure to be iterative, we need to find a way to bring the new solution point back to the center of a simplex. Karmarkar satisfies this requirement by proposing the following intriguing idea, called projective transformation. Let

yi = (xi/xki) / Σ(j = 1..n) (xj/xkj),  i = 1, 2, …, n
FIGURE 20.16 Illustrations of the simplex 1X = 1. (a) In three dimensions: the simplex with vertices (1, 0, 0), (0, 1, 0), (0, 0, 1) and center (1/3, 1/3, 1/3); the intersection of AX = 0 and 1X = 1 is the line segment AB. (b) In four dimensions: the simplex with vertices (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) and center (1/4, 1/4, 1/4, 1/4); the intersection of AX = 0 and 1X = 1 is the region ABC
where xki is the ith element of the current solution point Xk. The transformation is valid because all xki > 0 by design. You will also notice that Σ(i = 1..n) yi = 1, or 1Y = 1, by definition. This transformation is equivalent to

Y = Dk⁻¹X / (1Dk⁻¹X)

where Dk is a diagonal matrix whose ith diagonal element equals xki. The transformation maps the X-space onto the Y-space uniquely, because we can show that the last equation yields

X = DkY / (1DkY)
By definition, min CX = 0. Because 1DkY is always positive, the original linear program is equivalent to

Minimize z = CDkY

subject to

ADkY = 0
1Y = 1
Y ≥ 0

The transformed problem has the same format as the original problem. We can thus start with the simplex center Y = (1/n, 1/n, …, 1/n)ᵀ and repeat the iterative step. After each iteration, we can compute the values of the original X variables from the Y solution.
We show now how the new solution point can be determined for the transformed
problem. At any iteration k, the problem is given by
Minimize z = CDkY

subject to

ADkY = 0
Y lies in the αr-sphere

The new solution point is given by

Ynew = Y0 - αr (cp/||cp||)

where Y0 = (1/n, 1/n, …, 1/n)ᵀ and cp is the projected gradient, which can be shown to be

cp = [I - Pᵀ(PPᵀ)⁻¹P](CDk)ᵀ
where

P = | ADk |
    |  1  |

and α = (n - 1)/3n, with r = 1/√(n(n - 1)) as defined earlier.

In summary, at iteration k we form

P = | ADk |
    |  1  |

and compute

cp = [I - Pᵀ(PPᵀ)⁻¹P](cDk)ᵀ
Ynew = (1/n, 1/n, …, 1/n)ᵀ - αr (cp/||cp||)
Xk+1 = DkYnew / (1DkYnew)
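A complete update step can be sketched with NumPy (not part of the text). Applied to the data of Example 20.3-2 below, with α = (n - 1)/3n (the value 2/9 for n = 3) and r = 1/√(n(n - 1)), it reproduces the first-iteration point:

```python
import numpy as np

def karmarkar_step(c, A, xk):
    """One projective-scaling step for: min cx s.t. Ax = 0, 1x = 1, x >= 0."""
    n = len(xk)
    Dk = np.diag(xk)
    P = np.vstack([A @ Dk, np.ones(n)])
    # Projected gradient cp = [I - P'(PP')^(-1) P](cDk)'
    cp = (np.eye(n) - P.T @ np.linalg.inv(P @ P.T) @ P) @ (c @ Dk)
    r = 1 / np.sqrt(n * (n - 1))
    alpha = (n - 1) / (3 * n)
    y_new = np.full(n, 1 / n) - alpha * r * cp / np.linalg.norm(cp)
    return Dk @ y_new / (np.ones(n) @ Dk @ y_new)

c = np.array([2.0, 2.0, -3.0])
A = np.array([[-1.0, -2.0, 3.0]])
x1 = karmarkar_step(c, A, np.array([1/3, 1/3, 1/3]))
print(x1, c @ x1)   # ~[0.263340 0.389328 0.347332], z1 ~ 0.26334
```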
Example 20.3-2
Consider the problem

Minimize z = 2x1 + 2x2 - 3x3

subject to

-x1 - 2x2 + 3x3 = 0
x1 + x2 + x3 = 1
x1, x2, x3 ≥ 0

The problem satisfies the two conditions imposed by the interior-point algorithm, namely,
X = (x1, x2, x3)ᵀ = (1/3, 1/3, 1/3)ᵀ satisfies AX = 0, and min z = 0.
Iteration 0

c = (2, 2, -3),  A = (-1, -2, 3)
X0 = (1/3, 1/3, 1/3)ᵀ,  z0 = cX0 = 1/3
r = 1/√6,  α = (n - 1)/3n = 2/9

D0 = diag(1/3, 1/3, 1/3)
Y0 = (1/3, 1/3, 1/3)ᵀ
Iteration 1

cD0 = (2, 2, -3) diag(1/3, 1/3, 1/3) = (2/3, 2/3, -1)
AD0 = (-1, -2, 3) diag(1/3, 1/3, 1/3) = (-1/3, -2/3, 1)

(PPᵀ)⁻¹ = | 9/14   0  |
          |  0    1/3 |

I - Pᵀ(PPᵀ)⁻¹P = (1/42) |  25  -20  -5 |
                        | -20   16   4 |
                        |  -5    4   1 |

Thus,

cp = [I - Pᵀ(PPᵀ)⁻¹P](cD0)ᵀ = (1/126)(25, -20, -5)ᵀ

Next,

||cp|| = (1/126)√(25² + 20² + 5²) = .257172 and cp/||cp|| = (.771517, -.617213, -.154303)ᵀ

Now,

Ynew = (1/3, 1/3, 1/3)ᵀ - (2/9)(1/√6)(.771517, -.617213, -.154303)ᵀ = (.263340, .389328, .347332)ᵀ

Because D0 = (1/3)I, we have 1D0Ynew = 1/3, and

X1 = D0Ynew/(1D0Ynew) = Ynew = (.263340, .389328, .347332)ᵀ
z1 = cX1 = .26334
Iteration 2

cD1 = (2, 2, -3) diag(.263340, .389328, .347332) = (.526680, .778656, -1.041996)
AD1 = (-1, -2, 3) diag(.263340, .389328, .347332) = (-.263340, -.778656, 1.041996)

(PPᵀ)⁻¹ = | .567727     0     |
          |   0      .333333  |

Thus,

cp = [I - Pᵀ(PPᵀ)⁻¹P](.526680, .778656, -1.041996)ᵀ = (.165193, -.118435, -.046757)ᵀ

Next,

||cp|| = .208571 and cp/||cp|| = (.792024, -.567838, -.224177)ᵀ

Now,

Ynew = (1/3, 1/3, 1/3)ᵀ - (2/9)(1/√6)(.792024, -.567838, -.224177)ᵀ = (.261479, .384849, .353671)ᵀ

1D1Ynew = .341531 and

X2 = D1Ynew/(1D1Ynew) = (.201616, .438707, .359677)ᵀ
z2 = cX2 = .201615
Repeated application of the algorithm will move the solution closer to the optimum point (0, .6, .4). Karmarkar does provide an additional step for rounding the final solution to the optimum extreme point.
subject to
y1 - y2 ≤ 2
2y1 + y2 ≤ 4
y1, y2 ≥ 0
3. Carry out one additional iteration in Example 20.3-2, and show that the solution is moving toward the optimum z = 0.
4. Carry out two iterations of Karmarkar's algorithm for the following linear program:
Maximize z = 2x1 + x2
subject to
x1 + x2 ≤ 4
x1, x2 ≥ 0
subject to
-2x1 + 2x2 + x3 - x4 = 0
x1 + x2 + x3 + x4 = 1
x1, x2, x3, x4 ≥ 0
REFERENCES
Ahuja, R., T. Magnanti, and J. Orlin, Network Flows: Theory, Algorithms, and Applications, Prentice Hall, Upper Saddle River, NJ, 1993.
Bazaraa, M., J. Jarvis, and H. Sherali, Linear Programming and Network Flows, 2nd ed., Wiley, New York, 1990.
Charnes, A., and W. Cooper, "Some Network Characterizations for Mathematical Programming and Accounting Applications to Planning and Control," The Accounting Review, Vol. 42, No. 3, pp. 24–52, 1967.
Hooker, J., "Karmarkar's Linear Programming Algorithm," Interfaces, Vol. 16, No. 4, pp. 75–90, 1986.
Lasdon, L., Optimization for Large Systems, Macmillan, New York, 1970.