[Figure 3 (plot residue): Mean distance from a schedule within a factor ρ of optimal to the nearest optimal schedule; error bars are 95% confidence intervals. Panel (B) shows time to find a near-optimal schedule. Axes: N = Num. jobs/machines; distance; ρ.]
[Figure 4 (plot residue): Median number of iterations required by various branch and bound algorithms (I-JAR, Brucker, LDS, UB, UB+BR, UB+BR+VS) to find a globally optimal schedule in a random N by N JSP instance; error bars are 95% confidence intervals. Axes: N = Num. jobs/machines; time (s), 90th percentile.]

Table 1: Performance on random 13x13 instances.

Algorithm   Med. iter.   Med. equiv. iter.
Brucker     16300        16300
DDS         9220         9220
LDS         7300         7300
UB          2291         4582
UB+BR       650          1300
UB+BR+VS    429          1287

• Compared to Brucker, each of the three techniques (UB, UB+BR, and UB+BR+VS, respectively) reduces the median number of iterations required to find a global optimum by a factor that increases with problem size.

The iterations of these four algorithms are not equivalent in terms of CPU time. By design, an iteration of either UB or UB+BR takes approximately twice as long as an iteration of Brucker, while an iteration of UB+BR+VS takes approximately three times as long. Table 1 compares the performance of the four algorithms on the largest instance size, both in terms of raw iterations and "equivalent iterations".

As judged by this table, the performance of UB+BR+VS is not significantly better than that of UB+BR on the largest instance size. However, the trend in the data suggests that UB+BR+VS will significantly outperform UB+BR on larger instance sizes.

Both DDS and LDS outperformed the depth-first version of Brucker. It seems likely that the performance of the three hybrid algorithms could be further improved by using these tree search strategies.

6.2. Comparison of B_Brucker and B_UB+BR

To understand the difference between the median run lengths of UB and UB+BR, we examine the behavior of their branch ordering heuristics, referred to respectively as B_Brucker and B_UB+BR.

Let Gd be an optimal search tree node at depth d in the search tree for I, and let G1, G2, ..., Gn be the children of Gd, as ordered by some branch ordering heuristic B. The accuracy of B at depth d, which we denote by a(B, d, N, M), is the probability that the first-ranked branch (G1) is optimal.

Given a branch ordering heuristic B, we estimate a(B, d, N, M) (as a function of d) as follows.

Procedure for estimating a(B, d, N, M):

1. For each instance I ∈ I_N,M:
   (a) Initialize G ← ∅, upper_bound ← 1.05 * opt_makespan(I), and d ← 0.
   (b) Let G1, G2, ..., Gn be the children of G, as ordered by B.
   (c) For each i ∈ {1, 2, ..., n}, use an independent run of Brucker to determine whether Gi is optimal. If G1 is optimal, record the pair (d, true); otherwise record the pair (d, false).
   (d) Let Gi be a random optimal element of {G1, G2, ..., Gn}; set G ← Gi; set d ← d + 1; and go to (b).

2. For each integer d ≥ 0 for which some ordered pair of the form (d, V) was recorded, take as an estimate of a(B, d, N, M) the proportion of recorded pairs of the form (d, V) for which V = true.

Figure 5 plots a(B, d, N, N) as a function of d for each N ∈ {6, 7, 8, 9, 10, 11, 12} and for each B ∈ {B_Brucker, B_UB+BR}.

Examining Figure 5, we make four observations:

1. For all N, the trend is that a(B_Brucker, d, N, N) increases as a function of d.

2. For all d, the trend is that a(B_Brucker, d, N, N) decreases as a function of N.

3. For every combination of N and d, a(B_UB+BR, d, N, N) > a(B_Brucker, d, N, N).

4. For all N, the trend is that a(B_UB+BR, d, N, N) − a(B_Brucker, d, N, N) decreases with d.

Observation (1) is consistent with the conventional wisdom that branch ordering heuristics become more accurate at deeper nodes in the search tree (e.g., Walsh 1997), while observations (2) and (3) are consistent with our expectations. Observation (4) deserves special explanation. The reason for (4) is that when applied to a node Gd, B_UB+BR only makes a decision that differs from that of B_Brucker if s* is consistent with Gd. The probability that s* is consistent with Gd decreases with d, and so the benefit of B_UB+BR decreases with d as well.
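The estimation procedure for a(B, d, N, M) is straightforward to express in code. The sketch below is a hypothetical rendering, not the authors' implementation: `children_in_order` (a node's children as ranked by B) and `is_optimal` (which, per the procedure, would invoke an independent run of Brucker under an upper bound of 1.05 times the optimal makespan) are assumed helper functions.

```python
# Sketch of the accuracy-estimation procedure for a branch ordering
# heuristic B. `children_in_order` and `is_optimal` are hypothetical
# helpers; in the paper, optimality of a child is decided by an
# independent run of Brucker with upper bound 1.05 * opt_makespan(I).
import random
from collections import defaultdict

def estimate_accuracy(instances, children_in_order, is_optimal):
    """Estimate a(B, d, N, M): the probability that B's first-ranked
    branch at depth d is optimal, as a function of d."""
    records = defaultdict(list)  # depth -> list of True/False outcomes
    for instance in instances:
        node, depth = None, 0          # start at the empty (root) node
        while True:
            children = children_in_order(instance, node)
            if not children:           # reached a leaf: stop this walk
                break
            # Record whether the first-ranked branch (G1) is optimal.
            optimal = [c for c in children if is_optimal(instance, c)]
            records[depth].append(children[0] in optimal)
            if not optimal:            # no optimal child: cannot descend
                break
            # Descend into a random optimal child and continue.
            node = random.choice(optimal)
            depth += 1
    # a(B, d, N, M) is estimated by the fraction of recorded pairs at
    # depth d whose outcome is True.
    return {d: sum(v) / len(v) for d, v in records.items()}
```

Plotting the returned dictionary for two heuristics, as in Figure 5, then gives a direct accuracy comparison as a function of depth.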
Table 2: Mean CPU seconds required by various algorithms to find a globally optimal schedule. For stochastic algorithms, 95% confidence intervals are given.

Instance  Size   Brucker  I-JAR           UB+BR
ft06      6x6    < 0.1    < 0.1           < 0.1
ft10      10x10  12.8     179.5 ± 60.5    9.5 ± 1.9
la01      10x5   < 0.1    < 0.1           < 0.1
la02      10x5   < 0.1    0.1 ± 0         0.1 ± 0
la03      10x5   < 0.1    0.1 ± 0         < 0.1
la04      10x5   0.1      0.1 ± 0         < 0.1
la05      10x5   < 0.1    < 0.1           < 0.1
la06      15x5   < 0.1    < 0.1           < 0.1
la07      15x5   < 0.1    < 0.1           < 0.1
la08      15x5   < 0.1    < 0.1           < 0.1
la09      15x5   < 0.1    < 0.1           < 0.1
la10      15x5   < 0.1    < 0.1           < 0.1
la11      20x5   < 0.1    < 0.1           < 0.1
la12      20x5   < 0.1    < 0.1           < 0.1
la13      20x5   < 0.1    < 0.1           < 0.1
la14      20x5   < 0.1    < 0.1           < 0.1
la15      20x5   0.1      < 0.1           0.1 ± 0
la16      10x10  0.8      11.9 ± 3.3      0.3 ± 0.1
la17      10x10  0.2      0.2 ± 0.1       0.1 ± 0
la18      10x10  1.0      0.3 ± 0.1       0.2 ± 0.1
la19      10x10  4.2      0.9 ± 0.2       0.8 ± 0.2
la20      10x10  4.2      1.0 ± 0.3       0.3 ± 0.1
la22      15x10  82.5     329.2 ± 84.0    4.5 ± 1.0
la23      15x10  44.6     0.1 ± 0         0.1 ± 0
la26      20x10  547.0    0.5 ± 0.1       1.1 ± 0.2
la30      20x10  3.8      0.2 ± 0         0.6 ± 0.1
la31      30x10  0.2      0.2 ± 0         0.2 ± 0
la32      30x10  < 0.1    0.1 ± 0         < 0.1
la33      30x10  1.8      0.1 ± 0         0.4 ± 0.1
la34      30x10  0.3      0.3 ± 0         0.8 ± 0.1
la35      30x10  0.6      0.2 ± 0         0.4 ± 0.1
orb01     10x10  80.3     81.1 ± 25.4     29.6 ± 8.6
orb02     10x10  6.7      20.1 ± 5.7      2.3 ± 0.4
orb03     10x10  180      49.5 ± 12.6     33.8 ± 10.4
orb04     10x10  23.8     191.5 ± 44.4    25.7 ± 2.3
orb05     10x10  8.6      194.9 ± 56.0    4.1 ± 0.7
orb06     10x10  38.5     13.7 ± 3.3      3.3 ± 0.7
orb08     10x10  14.7     150.5 ± 35.6    6.6 ± 1.7
orb09     10x10  3.4      12.7 ± 3.1      2.9 ± 0.6
orb10     10x10  2.7      1.7 ± 0.4       0.4 ± 0.1
swv16     50x10  0.1      0.2 ± 0         0.1 ± 0
swv17     50x10  0.1      0.2 ± 0         0.1 ± 0
swv18     50x10  0.1      0.2 ± 0         0.1 ± 0
swv19     50x10  0.2      0.3 ± 0         0.7 ± 0.1
swv20     50x10  0.1      0.2 ± 0         0.1 ± 0
Total            1070     1277 ± 145      134 ± 14.5

[Figure 5 (plot residue): Accuracy of (A) B_Brucker and (B) B_UB+BR as a function of depth for random N by N JSP instances (Pr[correct branch taken] vs. depth, for N from 6x6 through 12x12). For ease of comparison, (C) superimposes the curves from (A) and (B) for N = 11.]
7. OR Library Evaluation

In this section we compare the performance of Brucker, I-JAR, UB+BR, and UB+BR+VS on instances from the OR library. As stated in §3.2, all runs were performed on a 2.4 GHz Pentium IV with 512 MB of memory. First, we ran Brucker with a time limit of 15 minutes on each of the 82 instances in the OR library. Brucker proved that it found the optimal schedule on 47 instances. For each such instance, we recorded the amount of time that elapsed before Brucker first evaluated a globally optimal schedule (in general this is much less than the amount of time required to prove that the schedule is optimal). Then, for each of these 47 instances, we ran I-JAR, UB+BR, and UB+BR+VS 50 times each, continuing each run until it found a globally optimal schedule. Table 2 presents the mean run lengths for Brucker, I-JAR, and UB+BR. The performance of UB+BR+VS was about a factor of 1.5 worse than that of UB+BR on most instances, and is not shown.

Averaged over these 47 instances, the performance of UB+BR is approximately 9.5 times better than that of I-JAR, 8 times better than that of Brucker, and 1.5 times better than that of UB+BR+VS (not shown). We conjecture that the advantages of UB+BR and UB+BR+VS over Brucker and I-JAR would increase if we ran them on larger instances from the OR library.

Kamarainen and Sakkout (2002) apply this approach to the kernel resource feasibility problem. At each search tree node, they relax the subproblem by removing resource constraints, then solve the relaxed problem with local search. A resource constraint that is violated by the solution to the relaxation forms the basis for further branching. Nareyek et al. (2003) use a similar approach to solve the decision version of the JSP (the decision version asks whether a schedule with makespan ≤ k exists). They perform a local search on each subproblem, and branch by examining a constraint that is frequently violated by local search.

8.3. Discussion

Our upper bounding technique (using upper bounds from an incomplete search to reduce the work that must be performed by a systematic search) has no doubt been used without fanfare many times in practice. Our variable selection technique is an instance of local search probing. Our branch ordering heuristic differs from the two techniques just discussed in that we use an independent run of iterated local search (i.e., one that explores the entire search space, independent of the subspace currently being examined by the backtracking search) as guidance for a backtracking algorithm that remains complete.
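The upper bounding technique just described (seeding a systematic search with a bound obtained by incomplete search) can be illustrated on a toy minimization problem. This is a minimal sketch under invented assumptions, not the paper's algorithm: the problem (pick one cost from each level, minimize the total), the suffix-minimum lower bound, and `branch_and_bound` itself are all constructed for illustration.

```python
import math

def branch_and_bound(levels, upper_bound=math.inf):
    """Toy depth-first branch and bound: pick one cost from each level so
    that the total is minimized. `upper_bound` may be seeded with the cost
    of a solution found by an incomplete (e.g. local) search; any node
    whose lower bound cannot beat the incumbent is pruned."""
    # Admissible lower bound: cost so far plus the sum of per-level
    # minima over the remaining levels.
    suffix_min = [0.0] * (len(levels) + 1)
    for i in range(len(levels) - 1, -1, -1):
        suffix_min[i] = suffix_min[i + 1] + min(levels[i])

    best = upper_bound
    expanded = 0

    def visit(i, cost):
        nonlocal best, expanded
        expanded += 1
        if cost + suffix_min[i] >= best:   # cannot improve the incumbent
            return
        if i == len(levels):
            best = cost                    # new incumbent solution
            return
        for c in levels[i]:                # branches taken in the given order
            visit(i + 1, cost + c)

    visit(0, 0.0)
    return best, expanded
```

Running this with and without a seeded `upper_bound` and comparing the expanded-node counts shows the effect: the search remains complete either way, but the seeded bound prunes subtrees that would otherwise be explored before a good incumbent is found.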