
International Journal of Production Research, 2015
Vol. 53, No. 4, 1143–1167, http://dx.doi.org/10.1080/00207543.2014.949363

A branch-and-bound algorithm for two-stage no-wait hybrid flow-shop scheduling


Shijin Wang (a), Ming Liu (a,*) and Chengbin Chu (b)
(a) School of Economics & Management, Tongji University, Shanghai, P.R. China; (b) Laboratoire Génie Industriel, Ecole Centrale Paris, Châtenay-Malabry Cedex, France
(Received 31 March 2014; accepted 19 July 2014)

This research investigates a two-stage no-wait hybrid flow-shop scheduling problem in which the first stage contains a single
machine, and the second stage contains several identical parallel machines. The objective is to minimise the makespan. For
this problem, the existing literature emphasises heuristics or optimal solutions for special cases, whereas this paper proposes
a branch-and-bound algorithm. Several lower bounds for optimal and partial schedules are derived. Also, three dominance
rules are deduced, and seven constructive heuristics are used to obtain initial upper bounds. Extensive computational tests
on randomly generated problems are conducted. The results with comparisons indicate that the proposed bounds (especially
partial lower bounds), dominance rules and the branch-and-bound algorithm are efficient.
Keywords: scheduling; no-wait; hybrid flow-shop; branch-and-bound

1. Introduction
Scheduling concerns the assignment of limited resources to tasks over time to optimise one or more objectives (Pinedo
1995). During the past five decades, practitioners and researchers have investigated various scheduling problems in different
manufacturing environments, including single machine, parallel machine, flow shop, job shop, flexible job shop, etc. (Brucker
2007).
Due to its high practical relevance to process industry (Tang and Wang 2010), hybrid flow-shop scheduling problems
have been studied for several decades (Linn and Zhang 1999; Kis and Pesch 2005; Ribas, Leisten, and Framinan 2010;
Ruiz and Vázquez-Rodríguez 2010). This paper considers a unique hybrid flow-shop scheduling problem, the two-stage
hybrid flow-shop scheduling problem with no-wait job processing to minimise makespan, in which the first stage contains
a single machine and the second stage contains several identical parallel machines (denoted as F2(1, Pm)|no-wait|C max ).
Hoogeveen, Lenstra, and Veltman (1996) proved that F2(1, Pm)||Cmax, a relaxation of the concerned problem, is NP-hard
in the strong sense. The no-wait constraint introduces additional complexity. In a no-wait environment, every job must
be processed from start to finish without any interruption either on or between machines (Hall and Sriskandarajah 1996).
This constraint is motivated by many types of manufacturing environments: (1) production processes with physical
space limitations, where the limited warehouse space available for finished goods, the need for space-efficient ways to reduce
holding costs and quality-of-service requirements to customers make such scheduling important (Hall, Posner, and Potts 1998);
(2) the scheduling of goods with a high decay rate, as in the production of steel, concrete wares, chemical products or
food (Mascis and Pacciarelli 2002). For example, in the food processing industry, the canning operation must immediately follow
cooking to ensure freshness (Hall and Sriskandarajah 1996). Moreover, the two-stage no-wait hybrid flow-shop scheduling
model has also been applied to several practical settings, including a computer system composed of a single server and
two identical parallel machines (Guirchoun, Martineau, and Billaut 2005), a two-stage assembly model (Gupta, Hariri, and
Potts 1997) and the metal industry (Liu et al. 2003).
Given its importance and complexity, the two-stage no-wait hybrid flow-shop scheduling problem has been studied
since the 1990s. Sriskandarajah (1993) addressed the worst-case bound for F2(1, Pm)|no-wait|Cmax and established
a tight worst-case bound of r = 3 − 1/m using an arbitrary sequence for the list-scheduling algorithm, where m is the
number of identical parallel machines at the second stage. Hall and Sriskandarajah (1996) conducted a survey of the no-wait
and blocking scheduling in different shops based on papers published before 1993, which reveals that there is no research
for F2(1, Pm)|no-wait|Cmax with a branch-and-bound enhanced with lower bounds and dominance rules. Gupta, Hariri,
and Potts (1997) considered a two-stage hybrid flow-shop problem F2(Pm, 1)||Cmax with branch-and-bound, and heuristic

∗ Corresponding author. Email: mingliu@tongji.edu.cn

© 2014 Taylor & Francis



algorithms. However, the no-wait processing constraint was not considered. Cheng, Sriskandarajah, and Wang (2000) studied the
computational complexity of some two- and three-stage no-wait flow-shop makespan problems. However, in one stage of
all the problems they considered, all jobs require a constant processing time. We extend the two-stage no-wait problem
without requiring constant processing times at either stage. Glass, Shafransky, and Strusevich (2000) proposed
a polynomial time algorithm for F2, S|no-wait|Cmax problem in which each operation has a setup phase and this phase needs
to be attended by a single server, common for all jobs and different from the processing machines (S in the problem
notation stands for ‘server’). Liu et al. (2003) proposed a greedy heuristic named the Least Deviation (LD) algorithm for
F2(1, Pm)|no-wait|Cmax and tested its average performance. Xie et al. (2004) extended the LD algorithm into the minimum deviation
algorithm (MDA) for two-stage no-wait hybrid flow-shop scheduling with multiple identical machines at both stages with
the objective of makespan minimisation. Guirchoun, Martineau, and Billaut (2005) considered a computer system as a two-
stage hybrid flow-shop with a single machine at the first stage (as a server) and m parallel machines at the second stage
with a no-wait constraint between the two stages. The objective is to minimise the total completion time. A mathematical
formulation was proposed for the general case and two polynomial time algorithms were proposed for special cases with
m = 2. Haouari, Hidri, and Gharbi (2006) studied the two-stage hybrid flow-shop problem with multiple machines at each
stage and they presented a branch-and-bound algorithm to find the optimal solution. However, the no-wait constraint was not
considered. Carpov et al. (2012) studied two versions (the classical and the no-wait) of the two-stage hybrid flow-shop problem
where the second stage contains several parallel machines subject to precedence constraints with the objective of makespan
minimisation. An adaptive randomised list-scheduling (ARLS) heuristic, together with two priority rules, has been proposed
for solving both problem versions. Wang and Liu (2013a) investigated a two-stage hybrid flow shop with dedicated machines
to minimise the makespan. To solve the problem, a heuristic method based on a branch-and-bound structure was
proposed. However, on the one hand, the no-wait constraint was not considered; on the other hand, the proposed method is not an
exact method even though it is built on the structure of a branch-and-bound algorithm. Wang and Liu (2013b) employed genetic
algorithms to solve the two-stage no-wait hybrid flow-shop problem with the objective of makespan minimisation. However,
a mixed linear programming model and exact algorithms have not yet been explored.
From the literature mentioned above, it can be seen that most research on the two-stage no-wait hybrid flow shop focuses
on the development of heuristics or on optimal solutions for special cases. There is no report of a branch-and-bound method
for F2(1, Pm)|no-wait|Cmax. In this context, this paper proposes a branch-and-bound method for the problem to investigate
its properties and to see how large problems can be solved optimally within reasonable computational
times. Several dominance rules are first proposed and employed.
The rest of the paper is organised as follows: Section 2 describes the problem both in notation and in graph representation.
Several lower bounds are derived in Section 3, while Section 4 develops a branch-and-bound algorithm which incorporates
some derived dominance rules. Section 5 reports the results of computational tests. Finally, Section 6 concludes the paper
and outlines future research.

2. Problem definition
The parameters, notations and variables used in this paper are listed as follows.

Parameters
m the number of parallel machines at stage two
n the number of jobs
aj the processing time of job j at stage one, j ∈ I = {1, . . . , n}
bj the processing time of job j on any machine at stage two, j ∈ I = {1, . . . , n}

Notation
A the single machine at stage one
i, v the machine index at stage two, i, v ∈ {1, . . . , m}
Bi the ith machine at stage two, Bi ∈ B = {B1 , . . . , Bm }
j, k, l the job index, j, k, l ∈ I = {1, . . . , n}

Variables
σ = {σ1 , . . . , σn } a feasible schedule, where σ j = (B([ j]), [ j]), [ j] denotes the jth job scheduled
on machine A, and B([ j]) indicates on which machine, at stage two, this job is
processed
Ci, j (σ ) (i = A, B1 , . . . , Bm ) the job j’s completion time on machine i in the schedule σ
L i (σ ) the completion time of machine i (i ∈ B)
Cmax (σ ) the objective function (makespan) value of the schedule σ . If it is not ambiguous,
Cmax (σ ) can be written as Cmax
ni the number of jobs assigned on the machine Bi at stage two
πi = {πi (1), . . . , πi (n i )} (i = 1, . . . , m) a sequence of assigned jobs on machine Bi at stage two

2.1 Problem description


The problem F2(1, Pm)|no-wait|Cmax under study can be described formally as follows. The machine environment consists
of two stages in series. There exists a single machine A at stage one and a number of machines in parallel B1 , . . . , Bm at
stage two. Each job in the given job set I = {1, . . . , n} is to be processed first at stage one and then at stage two. Our
machine environment is a special case of two-stage hybrid flow-shop which allows multiple machines at each stage. With
zero intermediate storage between stages, if machine A finishes the processing of any given job, the job must be processed
immediately on one of the machines at stage two. This phenomenon is referred to as no-wait. Each job j ( j ∈ I) has
processing times a j and b j at the two stages, respectively. The objective is to find a schedule which minimises the makespan
(or maximum completion time), denoted by Cmax . Makespan is equivalent to the completion time of the last job to leave the
system. A minimum makespan usually implies a high utilisation of the machines (Pinedo 1995). Without loss of generality,
we suppose all input data are integers.
The assumptions associated with this problem include:
(1) All parameters are deterministic;
(2) Pre-emption is not allowed, which means that once a job processing begins, it cannot be interrupted until it is
completed;
(3) There is no machine breakdown;
(4) In a (feasible) schedule, each job can be processed on at most one machine and each machine can process at most
one job at a time;
(5) Setup times between jobs are negligible or can be included in the processing time;
(6) It is assumed that m < n, since in general, the number of jobs is more than the number of machines. Otherwise, it is
identical to viewing the problem in the delivery time model, i.e. single-machine scheduling to minimise the time by
which all jobs have been delivered such that each job j has a processing time a j and a delivery time b j . This model
can be solved by LDT (Largest Delivery Time first) rule.
Note that machine idle time may occur. Nevertheless, there is no advantage in unnecessarily delaying the next job to be
processed. Thus, we restrict our attention to schedules in which, after a period of idle time, processing of the next job starts as
early as possible. A schedule, denoted by σ = {σ_1, . . . , σ_n}, is required to indicate the job sequence at stage one and which
machine processes a given job at stage two. σ_j (j ∈ I) is defined as a tuple, i.e. σ_j = (B([j]), [j]), where [j] denotes the jth
job scheduled on machine A and B([j]) indicates on which machine, at stage two, this job is processed.
For simplicity, use i ∈ B to denote i ∈ {B1 , . . . , Bm }.

2.2 Graph representation


Let Ci, j (σ ) (i = A, B1 , . . . , Bm ) denote job j’s completion time on machine i in schedule σ . Use L i (σ ) to denote the
completion time of machine i (i ∈ B).
Given a feasible schedule σ = {(B(1), 1), . . . , (B(n), n)}, the makespan can be computed by the following procedure.
Initial conditions:

C_{A,0}(σ) = 0
L_i(σ) = 0,  i ∈ B.    (1)

Table 1. Processing time of jobs in an example problem.

Jobs 1 2 3 4 5 6 7 8

A 4 4 3 4 1 3 6 3
B1 3 – 6 – 3 – 2 –
B2 – 5 – 4 – 4 – 2

Figure 1. Directed graph of Example 1.

Recursive functions, for k = 1, . . . , n:

C_{A,k}(σ) = max{C_{A,k−1}(σ) + a_k, L_{B(k)}(σ)}
C_{B(k),k}(σ) = C_{A,k}(σ) + b_k
L_{B(k)}(σ) = C_{B(k),k}(σ).    (2)

Objective function:

C_max(σ) = max_{i∈B} L_i(σ).    (3)
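As an illustration only (not part of the original paper), the recursion (1)–(3) can be written in a few lines of Python; the function and variable names are ours, and the job data a, b and machine count m are assumed to be plain lists and an integer.

def makespan(schedule, a, b, m):
    # schedule: list of (i, j) pairs, where i in {0, ..., m-1} is the stage-two
    # machine and j is the job index; pairs appear in their stage-one order.
    C_A = 0          # completion time of the previous job on machine A
    L = [0] * m      # completion times L_i of the stage-two machines, Equation (1)
    for i, j in schedule:
        # Equation (2): the job may be delayed on A so that it leaves A exactly
        # when machine B_i becomes free (no-wait constraint).
        C_A = max(C_A + a[j], L[i])
        L[i] = C_A + b[j]
    return max(L)    # Equation (3)

# Example 1 with 0-based job indices:
# a = [4, 4, 3, 4, 1, 3, 6, 3]; b = [3, 5, 6, 4, 3, 4, 2, 2]
# makespan([(0,0),(1,1),(0,2),(1,3),(0,4),(1,5),(0,6),(1,7)], a, b, 2) -> 31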

For this model, the makespan under schedule σ can also be computed by determining the critical path in a directed graph.
In this directed graph, node (i, j) denotes the completion time of job j from machine i. Let (0, 1) be the start of the graph
and set a virtual node t to be the finish of the graph. In this graph, the arcs have weights. Node (0, k) (k = 1, . . . , n) has one
outgoing arc which has a weight ak . Node (A, k) (k = 1, . . . , n − 1) has two outgoing arcs; one arc goes to node (B(k), k)
and has a weight bk , the other arc goes to node (0, k + 1) and has a weight zero. Node (A, n) has one arc going to node
(B(n), n) with a weight b_n. Node (B(k), k), k = 1, . . . , n, has one outgoing arc; if job k is not the last on machine B(k) and is
followed by job l, the arc goes to node (A, l); otherwise, the arc goes to the virtual node t. The makespan under schedule σ
is equal to the length of the maximum weight path from start (0, 1) to finish t.
Example 1 (Graph Representation of No-wait Two-stage Hybrid Flow-shop)
Consider eight jobs in a two-stage hybrid flow-shop which has one machine A at stage one and two parallel machines
B1 and B2 at stage two. Job processing times on each machine are presented in Table 1. The schedule considered is
σ = {(1, 1), (2, 2), (1, 3), (2, 4), (1, 5), (2, 6), (1, 7), (2, 8)}.
The corresponding directed graph and Gantt chart are depicted in Figures 1 and 2. From the directed graph, it follows
that the makespan is 31. This makespan is determined by the critical path.


Figure 2. Gantt chart of Example 1.

The time complexity of the critical path method is O(|V|²), where |V| is the number of nodes in the graph. Since nodes
(0, k), k = 2, . . . , n, are redundant in the proposed graph and node (A, k) can be directly linked to node (A, k + 1), we have
|V| = 2n + 2.
Then, we know
Lemma 1 Given a schedule for F2(1, Pm)|no-wait|Cmax, the makespan can be calculated in O(n²) time.
To further explore the problem, define the inverse of F2(1, Pm)|no-wait|Cmax to be the problem F2(Pm, 1)|no-wait|Cmax
in which the processing times of job j ∈ I at the first and second stage are b_j and a_j, respectively. The equivalence between
these two problems was established by Sriskandarajah (1993); it can alternatively be shown by finding the path with
maximal weight in a directed graph.
In this paper, we focus on the problem F2(1, Pm)|no-wait|Cmax. However, in view of the above equivalence, all of our
results and algorithms are applicable to the equivalent inverse problem F2(Pm, 1)|no-wait|Cmax.
Hoogeveen, Lenstra, and Veltman (1996) proved that F2(1, Pm)||Cmax is NP-hard in the strong sense for m ≥ 2. Since
the proof serves directly for our concerned problem, we have
Lemma 2 F2(1, Pm)|no-wait|Cmax is NP-hard in the strong sense.
Let σ denote a schedule without unnecessary machine idle time (i.e. an active schedule). Let π_i = {π_i(1), . . . , π_i(n_i)}
(i = 1, . . . , m) be the corresponding sequence of jobs on machine B_i at stage two, where n_i denotes the number of jobs
on machine B_i. Thus, Σ_{i=1}^{m} n_i = n. There exists one job π_v(j'), with v ∈ {1, . . . , m} and j' ∈ {1, . . . , n_v}, at stage two with
the property that there is no idle time between jobs π_v(j'), . . . , π_v(n_v) on machine B_v at stage two, and job π_v(j') starts
processing at the second stage immediately after completion of its first-stage operation. We refer to π_v(j') as a critical job.
Note that π_v(j') ∈ I. In Example 1, jobs π_1(4) and π_2(4) are critical jobs, which can also be denoted by jobs
7 and 8, respectively. The makespan of schedule σ can be expressed as

C_max(σ) = max_{i∈B} L_i(σ) = max_{j∈I} { C_{A,j}(σ) + Σ_{l=j'}^{n_v} b_{π_v(l)} },    (4)

where j' satisfies π_v(j') = j. The maximum in Equation (4) is achieved for some index j, where j is a critical job.

3. Lower bounds
In this section, we derive different lower bounds.
Besides the notations and parameters defined in the former section, a list of parameters used in this section is defined as
follows.

Parameters and Notations


σ∗ an optimal schedule
πi∗ (i = 1, . . . , m) a sequence of jobs on machine Bi at stage two in the optimal schedule σ ∗
δ a partial schedule in which not all jobs have been scheduled
δ̄ the set of jobs unscheduled yet

LB a lower bound of a schedule σ or a partial schedule δ


amin the minimum processing time of the jobs on the machine A at stage one, amin = min j∈I a j
bmax the maximum processing time of the jobs on the machines at stage two, bmax = max j∈I b j
bmin the minimum processing time of the jobs on the machines at stage two, bmin = min j∈I b j
ki (i = 1, . . . , m) the first job assigned on the machine Bi .

3.1 Lower bound for optimal schedule


If job [n] is the last job to be completed at the first stage in σ ∗ , we deduce from (4) that Cmax (σ ∗ ) ≥ C A,[n] (σ ∗ ) + b[n] , and


C_max(σ*) ≥ LB_1 = Σ_{j=1}^{n} a_j + b_min.    (5)

The computation of L B1 requires O(n) time.


Suppose that the number of machines at stage two is infinite. This relaxed problem can be viewed as a single-machine
scheduling problem with delivery times, where a_j is the processing time, b_j is the delivery time and the objective is to
minimise the time by which all jobs have been delivered. An optimal sequence τ = {τ_1, . . . , τ_n} of this problem is obtained
by applying the LDT (Largest Delivery Time first) rule. By Equation (4),

C_max(σ*) ≥ LB_2 = max_{j∈I} { Σ_{i=1}^{j} a_{τ_i} + b_{τ_j} }.    (6)

L B2 requires O(n log n) time to compute.


Let ki (i = 1, . . . , m) denote the first job assigned to machines Bi . In the optimal schedule σ ∗ , without loss of generality,
machines at stage two are indexed such that C A,k1 (σ ∗ ) ≤ · · · ≤ C A,km (σ ∗ ). Note that before time C A,ki (σ ∗ ), i = 1, . . . , m,
machine Bi is idle. We observe that at stage two, the total idle time cannot be less than


Σ_{i=1}^{m} C_{A,k_i}(σ*) ≥ Σ_{i=1}^{m} C_{A,[i]}(σ*) = Σ_{h=1}^{m} h·a_{[m−h+1]}    (7)

where [i] denotes the ith job in the schedule. Arranging the jobs in the non-decreasing order of a j (i.e. SPT rule), we get a
sequence η = {η1 , . . . , ηn }. Together with the above inequality, we obtain the next lower bound
C_max(σ*) ≥ LB_3 = ⌈ (1/m) ( Σ_{j=1}^{n} b_j + Σ_{h=1}^{m} h·a_{η_{m−h+1}} ) ⌉.    (8)

We use a linear time median-finding technique to find job η1 , . . . , ηm ∈ {1, . . . , n}. For this implementation, the time
requirement for L B3 is O(n + m log m).
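For concreteness, a small Python sketch of the three root-node bounds follows (our own illustration; indexing and tie-breaking choices are ours, not the paper's implementation). On the Example 1 data it should return max(30, 30, 17) = 30, in agreement with the values computed in Section 4.5.

import math

def root_lower_bound(a, b, m):
    n = len(a)
    # LB1, Equation (5): all first-stage work plus the smallest second-stage time.
    lb1 = sum(a) + min(b)
    # LB2, Equation (6): relax stage two to infinitely many machines, giving a
    # single-machine problem with delivery times solved by the LDT rule.
    t, lb2 = 0, 0
    for j in sorted(range(n), key=lambda j: -b[j]):
        t += a[j]
        lb2 = max(lb2, t + b[j])
    # LB3, Equation (8): average stage-two workload plus the unavoidable idle time
    # before the first job on each stage-two machine (SPT order of a).
    spt = sorted(a)
    idle = sum(h * spt[m - h] for h in range(1, m + 1))  # h * a_{eta_{m-h+1}}
    lb3 = math.ceil((sum(b) + idle) / m)
    return max(lb1, lb2, lb3)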

3.2 Lower bound for partial schedule


Let δ denote a partial schedule, i.e. in which not all jobs have been scheduled. We use the ‘lower bound for partial schedule
δ’ to denote the lower bound for a complete schedule constructed by appending unscheduled jobs to partial schedule δ. For
example, δ = {(B([1]), [1]), (B([2]), [2]), (B([3]), [3])}, i.e. it contains only three jobs, not all of them. Note that δ may
be empty. Let L i (δ) (i = A, B1 , . . . , Bm ) be the completion time of machine i in δ. We use δ̄ to denote the set of jobs
unscheduled. Let amin (δ̄) = min j∈δ̄ a j , bmin (δ̄) = min j∈δ̄ b j and bmax (δ̄) = max j∈δ̄ b j .
Considering the total processing at stage two, we obtain a lower bound
C_max(δ) ≥ LB_4 = ⌈ L_A(δ) + Σ_{j∈δ̄} a_j + min_{j∈δ̄} b_j ⌉.    (9)

At time L_A(δ) + a_min(δ̄), the total processing of δ left at stage two is Σ_{i∈B} max{L_i(δ) − (L_A(δ) + a_min(δ̄)), 0}. We
obtain the next lower bound

C_max(δ) ≥ LB_5 = L_A(δ) + a_min(δ̄) + ⌈ (1/m) ( Σ_{i∈B} max{L_i(δ) − (L_A(δ) + a_min(δ̄)), 0} + Σ_{j∈δ̄} b_j ) ⌉.    (10)

L B5 is calculated in O(n) time.
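The two partial-schedule bounds can be sketched in the same way (again an illustration under our own naming: L_A and L_B are assumed to be the machine completion times of the partial schedule, and rest_a, rest_b the processing times of the unscheduled jobs). For the partial schedule ψ = {3, 4} of Section 4.5 the returned value is 30, i.e. LB4 is the binding bound.

import math

def partial_lower_bound(L_A, L_B, rest_a, rest_b, m):
    # LB4, Equation (9): machine A must still process every unscheduled job,
    # and the last of them is followed by at least the smallest stage-two time.
    lb4 = L_A + sum(rest_a) + min(rest_b)
    # LB5, Equation (10): from the earliest time A can be free again, spread the
    # remaining stage-two work evenly over the m parallel machines.
    t = L_A + min(rest_a)
    carry = sum(max(Li - t, 0) for Li in L_B)
    lb5 = t + math.ceil((carry + sum(rest_b)) / m)
    return max(lb4, lb5)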

4. A branch-and-bound algorithm
This section gives details of the proposed branch-and-bound algorithm. In order to improve the search efficiency, the algorithm
incorporates the lower bounds for the optimal schedule and those for partial schedules described in Section 3.
Furthermore, three dominance rules are derived, and initial upper bounds are also generated by seven heuristic algorithms.
Then, the complete procedure of the proposed branch and bound algorithm is given.
The new parameters for this section are defined as follows:

ψ a partial sequence
ψ→ j the partial sequence in which job j is arranged immediately after the last job in ψ
Dk (χ ) the earliest time at which machine k can process the next job, after scheduling each job in (partial) sequence
χ , where k ∈ {A, B1 , . . . , Bm }

4.1 Dominance rules


We now derive some dominance theorems which, if the conditions are satisfied, allow a reduction in the number of partial
schedules that need to be examined in the search for an optimal schedule using the branch-and-bound algorithm.
We introduce the ECT (Earliest Completion Time first) rule. In a parallel-machine setting, the ECT rule assigns a job to the machine
that can finish it the soonest, with ties broken arbitrarily.
Theorem 1 For F2(1, Pm)|no-wait|Cmax , for a given sequence, an optimal second-stage schedule is found in O(mn)
time by applying ECT (Earliest Completion Time first) rule.
Proof For job instance I, the given sequence takes the form of {[1], . . . , [n]}. We first establish a claim: for the given
sequence, in its second-stage optimal schedule, no job can be processed earlier without violating this sequence. It holds
because all machines are identical at stage two and due to the no-wait constraint. Therefore, a schedule obtained by applying
ECT rule is optimal at stage two.
For job [ j] ( j = 1, . . . , n), the completion time is equal to
 
max{ L_A + a_{[j]}, min_{i∈B} L_i } + b_{[j]},

where L k , k = A, B1 , . . . , Bm , denotes the completion time of machine k immediately before scheduling job [ j]. Therefore,
under a given sequence, an optimal second-stage schedule is found in O(mn) time. The proof is completed.
From now on, we only consider the sequence (or permutation) of jobs since its corresponding second-stage schedule can
be obtained by applying ECT rule. In the following, we always apply Theorem 1 on a sequence or permutation to construct
its corresponding second-stage schedule.
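A minimal Python sketch of this step (our own phrasing of the ECT construction of Theorem 1, with 0-based machine indices) is:

def ect_schedule(sequence, a, b, m):
    # Given the first-stage job sequence, assign each job to the stage-two
    # machine that can finish it the soonest (ECT), respecting no-wait.
    C_A, L = 0, [0] * m
    assignment = []
    for j in sequence:
        i = min(range(m), key=lambda v: L[v])  # earliest-available machine
        C_A = max(C_A + a[j], L[i])            # delay on A if B_i is still busy
        L[i] = C_A + b[j]
        assignment.append((i, j))
    return max(L), assignment

Applied to the Johnson sequence {5, 3, 6, 2, 4, 1, 8, 7} of Section 4.5 (converted to 0-based job indices), this returns a makespan of 30, i.e. UB1.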
We define ψ as a partial sequence which can be empty. For example, ψ = {[1], [2], [3]}, where n − 3 jobs still in I need
to be sequenced. Let ψ → j denote the partial sequence in which job j is arranged immediately after the last job in ψ. Let
Dk (χ ) denote the earliest time at which machine k can process the next job, after scheduling each job in (partial) sequence
χ , where k ∈ {A, B1 , . . . , Bm } and χ ∈ {ψ, ψ → j, ψ → j → i, ψ → i, ψ → i → j}. For example, D A (ψ) denotes the
completion time of last job in ψ on machine A. Note that Dk (χ ) = max{L k (χ ), L A (χ )} for k ∈ {B1 , . . . , Bm }.
Lemma 3 Given a schedule χ → y for F2(1, Pm)|no-wait|Cmax, if D_A(χ) + a_y ≥ D_l(χ), arranging job y on machine
B_l dominates the alternatives in which job y is scheduled on one of the second-stage machines other than B_l.
Proof Find machine B_h which satisfies D_h(χ) = min_{k∈B} D_k(χ). If l = h, the result is trivial by Theorem 1. Therefore, we assume
that l ≠ h in the following. By Theorem 1, arranging job y on machine B_h is one of the best choices. We prove this lemma

by showing that assigning y to machine Bl is also a best choice, which means this assignment results in the same state of
machines as applying ECT rule. From the definition of h, we know Dh (χ ) ≤ Dl (χ ) ≤ D A (χ ) +a y . We discuss the following
two situations.
Situation 1 Job y is scheduled on machine Bl .
It follows
Dl (χ → y) = D A (χ ) + a y + b y
Dh (χ → y) = D A (χ ) + a y
Dk (χ → y) = max{Dk (χ ), D A (χ ) + a y }, k ∉ {l, h}, k ∈ B. (11)
Situation 2 Job y is scheduled on machine Bh (ECT rule).
We know
Dh (χ → y) = D A (χ ) + a y + b y
Dl (χ → y) = D A (χ ) + a y
Dk (χ → y) = max{Dk (χ ), D A (χ ) + a y }, k ∉ {l, h}, k ∈ B. (12)
Since all the second-stage machines are identical, interchanging the index of machines Bl and Bh in Situation 1 will not
delay the start time of the next job. Therefore, this operation results in Situation 2 and it completes the proof.
Theorem 2 Given a partial sequence ψ for F2(1, Pm)|no-wait|Cmax , a (partial) sequence ψ → j → i is dominated by
ψ → i → j if jobs i and j satisfy ai ≥ a j ≥ bi ≥ b j .
Proof Construct a new sequence ψ → i → j from ψ → j → i by interchanging the positions of j and i. We will prove
this theorem by showing that Dk (ψ → i → j) ≤ Dk (ψ → j → i) for k = A, B1 , . . . , Bm . Without loss of generality,
suppose that under ψ → j → i the second-stage machine which processes job j is B1 . We discuss the following two cases.
Case 1 D A (ψ) + a j ≥ D B1 (ψ).
In this case, there is no idle time between j and the last job in ψ on machine A. Regarding ψ → j, we have
D A (ψ → j) = max{D A (ψ) + a j , D B1 (ψ)}
= D A (ψ) + a j .
D B1 (ψ → j) = D A (ψ → j) + b j
= D A (ψ) + a j + b j .
Dk (ψ → j) = max{Dk (ψ), D A (ψ → j)}, k = B2 , . . . , Bm . (13)
Under sequence ψ → j → i, by Lemma 3, job i is processed on machine B1 since D B1 (ψ → j) ≤ D A (ψ → j) + ai
(respectively, substitute χ and y in Lemma 3 by ψ → j and i). There is also no idle time between j and i on machine A. By
Equation (13), we have
D A (ψ → j → i) = D A (ψ → j) + ai
= D A (ψ) + a j + ai .
D B1 (ψ → j → i) = D A (ψ → j → i) + bi
= D A (ψ) + a j + ai + bi .
Dk (ψ → j → i) = max{Dk (ψ), D A (ψ → j → i)}
= max{Dk (ψ), D A (ψ) + a j + ai }, k = B2 , . . . , Bm . (14)
Now we consider sequence ψ → i → j. Since D A (ψ) + a j ≥ D B1 (ψ) and ai ≥ a j , we have D A (ψ) + ai ≥ D B1 (ψ).
We know job i is processed on a second-stage machine which can be B1 . Without loss of generality, let this machine be B1 .
There is no idle time between i and ψ on machine A. Regarding ψ → i, it follows that
D A (ψ → i) = max{D A (ψ) + ai , D B1 (ψ)}
= D A (ψ) + ai .
D B1 (ψ → i) = D A (ψ → i) + bi
= D A (ψ) + ai + bi .
Dk (ψ → i) = max{Dk (ψ), D A (ψ → i)}, k = B2 , . . . , Bm . (15)

Under sequence ψ → i → j, by bi ≤ a j , D B1 (ψ → i) ≤ D A (ψ) + ai + a j and Lemma 3, job j is processed on


machine B1 . There is no idle time between i and j on machine A. By Equation (15), we derive that

D A (ψ → i → j) = D A (ψ → i) + a j
= D A (ψ) + ai + a j .
D B1 (ψ → i → j) = D A (ψ → i → j) + b j
= D A (ψ) + ai + a j + b j .
Dk (ψ → i → j) = max{Dk (ψ), D A (ψ → i → j)}
= max{Dk (ψ), D A (ψ) + ai + a j }, k = B2 , . . . , Bm . (16)

Since bi ≥ b j , comparing Equation (14) and Equation (16), we deduce that

L k (ψ → i → j) ≤ L k (ψ → j → i), k = A, B1 , . . . , Bm . (17)

Case 2 D A (ψ) + a j < D B1 (ψ).


In this case, there exists idle time between j and ψ on machine A. For ψ → j, we have

D A (ψ → j) = max{D A (ψ) + a j , D B1 (ψ)}


= D B1 (ψ).
D B1 (ψ → j) = D A (ψ → j) + b j
= D B1 (ψ) + b j .
Dk (ψ → j) = max{Dk (ψ), D A (ψ → j)}, k = B2 , . . . , Bm . (18)

Under sequence ψ → j → i, by Theorem 1, job i is processed on machine B1 since D B1 (ψ → j) ≤ D A (ψ → j) + ai


which is due to b j ≤ ai . There is also no idle time between j and i on machine A. By Equation (18), we have

D A (ψ → j → i) = D A (ψ → j) + ai
= D B1 (ψ) + ai .
D B1 (ψ → j → i) = D A (ψ → j → i) + bi
= D B1 (ψ) + ai + bi .
Dk (ψ → j → i) = max{Dk (ψ), D A (ψ → j → i)}
= max{Dk (ψ), D B1 (ψ) + ai }, k = B2 , . . . , Bm . (19)

Let us consider the sequence ψ → i → j. Comparing D A (ψ) + ai with D B1 (ψ), there are two possibilities.
Case 2.1 D A (ψ) + ai ≥ D B1 (ψ).
Similar to Case 1, considering D A (ψ) + a j < D B1 (ψ), this case can be proved.
Case 2.2 D A (ψ) + ai < D B1 (ψ).
In this situation, there exists idle time between i and ψ on machine A. Since D A (ψ) + a j < D B1 (ψ) and the assumption
that under sequence ψ → j → i job j is scheduled on B1 , we know D B1 (ψ) = mink∈B {Dk (ψ)}. Therefore, by ECT rule,
job i is processed on B1 under ψ → i. It follows that

D A (ψ → i) = max{D A (ψ) + ai , D B1 (ψ)}


= D B1 (ψ).
D B1 (ψ → i) = D A (ψ → i) + bi
= D B1 (ψ) + bi .
Dk (ψ → i) = max{Dk (ψ), D A (ψ → i)}, k = B2 , . . . , Bm . (20)

Under sequence ψ → i → j, by bi ≤ a j , D B1 (ψ → i) ≤ D A (ψ → i) + a j and Lemma 3, job j is processed on


machine B1 . There is no idle time between i and j on machine A. From Equation (20), we derive that

D A (ψ → i → j) = D A (ψ → i) + a j
= D B1 (ψ) + a j .
D B1 (ψ → i → j) = D A (ψ → i → j) + b j
= D B1 (ψ) + a j + b j .
Dk (ψ → i → j) = max{Dk (ψ), D A (ψ → i → j)}
= max{Dk (ψ), D B1 (ψ) + a j }, k = B2 , . . . , Bm . (21)

Since ai ≥ a j and bi ≥ b j , comparing Equation (19) with Equation (21), we obtain that L k (ψ → i → j) ≤ L k (ψ →
j → i), for k = A, B1 , . . . , Bm . This completes the proof.
Theorem 3 Given a partial sequence ψ for F2(1, Pm)|no-wait|Cmax , a (partial) sequence ψ → j → i is dominated by
ψ → i → j if jobs i and j satisfy ai ≤ a j ≤ bi ≤ b j and they are processed on the same second-stage machine.
Proof Construct a new sequence ψ → i → j from ψ → j → i by interchanging jobs j and i. We will prove this theorem
by showing that Dk (ψ → i → j) ≤ Dk (ψ → j → i) for k = A, B1 , . . . , Bm . We assume that b j > 0 since if b j = 0 it
is trivial. Without loss of generality, suppose that under ψ → j → i the second-stage machine which processes job j is B1 .
From this assumption, we know that
D B1 (ψ) = min Dk (ψ). (22)
k∈B
Since job i is scheduled on the same second-stage machine B1 under ψ → j → i, we observe that

D B1 (ψ → j) ≤ Dk (ψ → j), k = B2 , . . . , Bm . (23)

We discuss the following two cases.


Case 1 D A (ψ) + a j ≥ D B1 (ψ).
In this case, there is no idle time between j and the last job in ψ on machine A. Regarding ψ → j, by inequality (23),
we have

D A (ψ → j) = max{D A (ψ) + a j , D B1 (ψ)}


= D A (ψ) + a j .
D B1 (ψ → j) = D A (ψ → j) + b j
= D A (ψ) + a j + b j .
Dk (ψ → j) = max{Dk (ψ), D A (ψ → j)}
= Dk (ψ), k = B2 , . . . , Bm . (24)

In Equation (24), the equation with regard to Dk (ψ → j) holds because if D A (ψ → j) > Dk (ψ), together with b j > 0,
we have D B1 (ψ → j) > D A (ψ → j) = Dk (ψ → j). This is a contradiction to inequality (23).
By inequality (23) and Equation (24), we have

D A (ψ) + a j + b j ≤ Dk (ψ), k = B2 , . . . Bm . (25)

Under sequence ψ → j → i, from the assumption, we know that job i is scheduled on machine B1 . Since ai ≤ b j
and, thus, D A (ψ → j) + ai ≤ D B1 (ψ → j), there is no idle time between j and i on machine B1 . By Equation (24) and
inequality (25), we have

D A (ψ → j → i) = D B1 (ψ → j)
= D A (ψ) + a j + b j .
D B1 (ψ → j → i) = D A (ψ → j → i) + bi
= D A (ψ) + a j + b j + bi .
Dk (ψ → j → i) = max{Dk (ψ), D A (ψ → j → i)}
= Dk (ψ), k = B2 , . . . , Bm . (26)

Now we consider the sequence ψ → i → j. There are two subcases.



Case 1.1 D A (ψ) + ai ≥ D B1 (ψ).


In this subcase, by Equation (22) and Theorem 1, we know job i is processed on a second-stage machine B1 . There is no
idle time between i and ψ on machine A. With regard to ψ → i, by inequality (25), we observe that

D A (ψ → i) = max{D A (ψ) + ai , D B1 (ψ)}


= D A (ψ) + ai .
D B1 (ψ → i) = D A (ψ → i) + bi
= D A (ψ) + ai + bi .
Dk (ψ → i) = max{Dk (ψ), D A (ψ → i)}
= Dk (ψ), k = B2 , . . . , Bm . (27)

Under sequence ψ → i → j, by ai ≤ a j ≤ bi ≤ b j and inequality (25), we have D B1 (ψ → i) ≤ Dk (ψ → i) for


k = B2 , . . . , Bm . By Lemma 3, job j is processed on machine B1 . Since a j ≤ bi and, thus, D B1 (ψ → i) ≥ D A (ψ → i)+a j ,
there is no idle time between i and j on machine B1 . By Equation (27) and inequality (25), we derive that

D A (ψ → i → j) = D B1 (ψ → i)
= D A (ψ) + ai + bi .
D B1 (ψ → i → j) = D A (ψ → i → j) + b j
= D A (ψ) + ai + bi + b j .
Dk (ψ → i → j) = max{Dk (ψ), D A (ψ → i → j)}
= Dk (ψ), k = B2 , . . . , Bm . (28)

Since ai ≤ a j ≤ bi ≤ b j , comparing Equation (26) and Equation (28), we deduce that

L k (ψ → i → j) ≤ L k (ψ → j → i), k = A, B1 , . . . , Bm . (29)

Case 1.2 D A (ψ) + ai < D B1 (ψ).


In this subcase, by Equation (22) and Theorem 1, we know job i is processed on a second-stage machine B1 . On machine
B1 , there is no idle time between the last job on this machine in ψ and i. Regarding ψ → i, by Equation (22), we observe
that

D A (ψ → i) = max{D A (ψ) + ai , D B1 (ψ)}


= D B1 (ψ).
D B1 (ψ → i) = D A (ψ → i) + bi
= D B1 (ψ) + bi .
Dk (ψ → i) = max{Dk (ψ), D A (ψ → i)}
= Dk (ψ), k = B2 , . . . , Bm . (30)

Under sequence ψ → i → j, by bi ≤ b j , D A (ψ) + a j ≥ D B1 (ψ) and inequality (25), we have D B1 (ψ → i) ≤


Dk (ψ → i) for k = B2 , . . . , Bm . By Theorem 1, we know that job j is processed on machine B1 . Since a j ≤ bi and, thus,
D B1 (ψ → i) ≥ D A (ψ → i) + a j , there is no idle time between i and j on machine B1 . By Equation (30) and inequality
(25), we derive that

D A (ψ → i → j) = D B1 (ψ → i)
= D B1 (ψ) + bi .
D B1 (ψ → i → j) = D A (ψ → i → j) + b j
= D B1 (ψ) + bi + b j .
Dk (ψ → i → j) = max{Dk (ψ), D A (ψ → i → j)}
= Dk (ψ), k = B2 , . . . , Bm . (31)

Here, we give the reason why the equation concerned with Dk (ψ → i → j) in Equation (31) holds. From D A (ψ)+a j ≥
D B1 (ψ), bi ≤ b j and inequality (25), we have D A (ψ → i → j) ≤ D A (ψ) + a j + bi ≤ D A (ψ) + a j + b j ≤ Dk (ψ).

Since D A (ψ) + a j ≥ D B1 (ψ) and b j ≥ bi , comparing Equations (26) and (31), we deduce that

L k (ψ → i → j) ≤ L k (ψ → j → i), k = A, B1 , . . . , Bm . (32)

Case 2 D A (ψ) + a j < D B1 (ψ).


In this case, under ψ → j, there exists idle time between j and ψ on machine A. Considering Equation (22), we have

D A (ψ → j) = max{D A (ψ) + a j , D B1 (ψ)}


= D B1 (ψ).
D B1 (ψ → j) = D A (ψ → j) + b j
= D B1 (ψ) + b j .
Dk (ψ → j) = max{Dk (ψ), D A (ψ → j)}
= Dk (ψ), k = B2 , . . . , Bm . (33)

By inequality (23), we know


D B1 (ψ) + b j ≤ Dk (ψ), k = B2 , . . . , Bm . (34)

From the assumption, job i is scheduled on machine B1 under sequence ψ → j → i. Since ai ≤ b j and, thus,
D A (ψ → j) + ai ≤ D B1 (ψ → j), there is no idle time between j and i on machine B1 . By Equation (33) and inequality
(34), we have

D A (ψ → j → i) = D B1 (ψ → j)
= D B1 (ψ) + b j .
D B1 (ψ → j → i) = D A (ψ → j → i) + bi
= D B1 (ψ) + b j + bi .
Dk (ψ → j → i) = max{Dk (ψ), D A (ψ → j → i)}
= Dk (ψ), k = B2 , . . . , Bm . (35)

Let us consider the sequence ψ → i → j. Since ai ≤ a j and D A (ψ) + a j < D B1 (ψ), we deduce that D A (ψ) + ai <
D B1 (ψ). Together with Equation (22), we know that under ψ → i → j, job i is scheduled on machine B1 , and on machine
B1 , there is no idle time between the last job in ψ on this machine and i. Considering inequality (34), it follows that

D A (ψ → i) = max{D A (ψ) + ai , D B1 (ψ)}


= D B1 (ψ).
D B1 (ψ → i) = D A (ψ → i) + bi
= D B1 (ψ) + bi .
Dk (ψ → i) = max{Dk (ψ), D A (ψ → i)}
= Dk (ψ), k = B2 , . . . , Bm . (36)

By inequality (34) and bi ≤ b j , we observe that D B1 (ψ → i) ≤ Dk (ψ → i) for k = B2 , . . . , Bm . It means that under


sequence ψ → i → j, job j is scheduled on machine B1 by Theorem 1. Since a j ≤ bi and, thus, D A (ψ → i) + a j ≤
D B1 (ψ → i), there is no idle time between i and j on machine B1 . From Equation (36), inequality (34) and bi ≤ b j , we
derive that

D A (ψ → i → j) = D B1 (ψ → i)
= D B1 (ψ) + bi .
D B1 (ψ → i → j) = D A (ψ → i → j) + b j
= D B1 (ψ) + bi + b j .
Dk (ψ → i → j) = max{Dk (ψ), D A (ψ → i → j)}
= Dk (ψ), k = B2 , . . . , Bm . (37)

Since bi ≤ b j , comparing Equation (35) with Equation (37), we obtain that


L k (ψ → i → j) ≤ L k (ψ → j → i), k = A, B1 , . . . , Bm (38)
This completes the proof.
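In a branch-and-bound implementation, Theorems 2 and 3 reduce to a cheap test on the last two jobs of a candidate partial sequence. The following Python fragment is only a sketch of such a test (the function name and the flag same_machine, indicating whether the two jobs would share a stage-two machine in the ECT schedule, are our own shorthand):

def is_dominated(last, new, a, b, same_machine):
    # The partial sequence psi -> last -> new is dominated by psi -> new -> last
    # (and may be pruned) under the conditions of Theorems 2 and 3.
    if a[new] >= a[last] >= b[new] >= b[last]:
        return True                   # Theorem 2
    if same_machine and a[new] <= a[last] <= b[new] <= b[last]:
        return True                   # Theorem 3
    return False

This matches the job pairs listed in the illustrative example of Section 4.5 (e.g. jobs 1 and 4, jobs 8 and 1).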

4.2 Heuristic algorithms for initial upper bounds


To help curtail the search in the branch-and-bound algorithm, the makespan obtained from a heuristic method is used as an
initial upper bound. The following seven constructive heuristics are applied to obtain a job sequence and Theorem 1 is used
to determine the machine selection in the second stage. Finally, the best schedule is selected.
(1) Johnson’s rule: a sequence of n jobs is generated by the Johnson’s rule with a j and b j ( j = 1, 2, . . . , n). The resulting
upper bound is denoted as U B1.
(2) Modified Johnson’s rule: a sequence of n jobs is generated by the Johnson’s rule with a j and b j /m. The resulting
upper bound is denoted as U B2.
(3) LPT rule: a sequence of n jobs is generated in non-increasing order of b j . The resulting upper bound is denoted as UB3.
(4) Large–Small rule: observing that the idle time on machine A occurs when all machines at stage two are busy, we try
to avoid this situation. Arrange jobs in a non-increasing order of b j to form a sequence {bτ1 , bτ2 , . . . , bτn }. Construct
a new sequence as {bτ1 , bτn , bτ2 , bτn−1 , . . .} by choosing the first and the last jobs from the sequence and then, the
first and the last jobs from the remaining sequence and so on. (We try to make big difference between two successive
jobs from the beginning). The resulting upper bound is denoted as U B4.
(5) Large–medium rule: arrange jobs in a non-increasing order of b_j to form a sequence {b_{τ_1}, b_{τ_2}, . . . , b_{τ_n}}. Construct a
new sequence in the form of {b_{τ_1}, b_{τ_⌈(n+1)/2⌉}, b_{τ_2}, b_{τ_{⌈(n+1)/2⌉+1}}, . . .} by choosing the first and a middle job from the
sequence and then the first and a middle job from the remaining sequence and so on. (We try to make a roughly
average difference between two successive jobs from the beginning to the end.) The resulting upper bound is denoted
as UB5. (A short code sketch of rules (4) and (5) is given after this list.)
(6) Least Deviation (LD) rule (Liu et al. 2003): at any time point when machine A becomes idle, find the machine that will
become idle first among machines Bk (k = 1, . . . , m) (break ties with the least k), and let b_i be the processing time left
on that machine. Among all the jobs still to be processed, choose the job j whose a_j is closest
to b_i (break ties arbitrarily). If b_i ≤ a_j, job j starts processing on machine A immediately; otherwise, job j starts
processing on machine A after b_i − a_j. The resulting upper bound is denoted as UB6.
(7) Modified Gilmore and Gomory's rule (Gilmore and Gomory 1964): a sequence of n jobs is generated with
Gilmore and Gomory's rule with a_j and b_j (j = 1, 2, . . . , n). Then, Theorem 1 is used to determine the machine
selection in the second stage. The obtained upper bound is denoted as UB7_1. A sequence of n jobs is generated
with Gilmore and Gomory's rule with a_j and b_j/m. The obtained upper bound is denoted as UB7_2. Then,
UB7 = min(UB7_1, UB7_2).
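To make the sequence constructions of rules (4) and (5) concrete, a short Python sketch follows (our own code, 0-based job indices, with stable sorting assumed for tie-breaking). With the Example 1 data it reproduces the sequences {3, 8, 2, 7, 4, 5, 6, 1} and {3, 1, 2, 5, 4, 7, 6, 8} reported in Section 4.5 (after converting back to 1-based job numbers).

def large_small(b):
    # Rule (4): alternately take the largest and the smallest remaining b_j.
    order = sorted(range(len(b)), key=lambda j: -b[j])   # LPT order of b_j
    seq, lo, hi = [], 0, len(order) - 1
    while lo <= hi:
        seq.append(order[lo]); lo += 1
        if lo <= hi:
            seq.append(order[hi]); hi -= 1
    return seq

def large_medium(b):
    # Rule (5): alternately take a job from the front and from the middle
    # of the LPT order of b_j.
    order = sorted(range(len(b)), key=lambda j: -b[j])
    split = len(order) // 2
    front, mid = order[:split], order[split:]
    seq = []
    for x, y in zip(front, mid):
        seq += [x, y]
    return seq + mid[len(front):]      # leftover middle job when n is odd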

4.3 Branching rule


Since each feasible schedule can be represented as a permutation of n jobs, we adopt a branching rule in which a node
at the lth (l ≤ n) level of the search tree corresponds to a partial schedule in which the first l positions are filled. Once this
first-stage schedule is specified, Theorem 1 shows that the second stage can be scheduled (i.e. machine selection) in O(mn)
time. There are n − l nodes emanating from each node, each representing the addition of an unscheduled job to the current
partial schedule. If a node is created at the (l + 1)th level, the node can be eliminated if the larger of its partial lower bounds LB4
and LB5 exceeds the current best makespan. Theorems 2 and 3 can also be used to decide whether or not to explore the node.
In our branch-and-bound algorithm, the depth-first search (DFS) strategy is employed for searching the tree due to its
relatively low memory requirements. In the DFS strategy, a node with the most jobs in the corresponding partial schedule is selected
for branching. In case of ties, a node that gives the lowest partial lower bound is selected. For the branch-and-bound
variants without the partial lower bounds tested in Section 5, ties are broken by the least objective value of the
current partial schedule.

4.4 Complete Procedure of the proposed branch-and-bound algorithm


The complete procedure of the proposed branch-and-bound algorithm (named BBA) is described as follows (a condensed code sketch is given after the step list):
Step 1 Initialisation: the basic information of the jobs and the number of machines is input;

Step 2 Computation of lower bound and upper bound at root node:


Step 2.1 Calculate the initial lower bounds L B1 , L B2 and L B3 . Let lower B = max(L B1 , L B2 , L B3 );
Step 2.2 Generate initial upper bounds U Bi, i = 1, . . . , 7. Let bestc = min(U Bi).
Step 3 If bestc = lower B, go to Step 5; else, go to Step 4.
Step 4 Node branching: a new node is explored and its machine selection in the second stage is determined with
Theorem 1.
Step 4.1 For a partial schedule, calculate partial lower bounds L B4 and L B5 , and let P L B = max(L B4 , L B5 ); if
P L B > bestc, this node is discarded;
Step 4.2 For a partial schedule, the branching node is determined to be discarded or not according to Theorems 2
and 3.
Step 4.3 bestc is updated whenever a feasible schedule that improves the upper bound is generated during the
branching process.
Step 5 Termination and output: the search is terminated either when bestc = lowerB or when the CPU time exceeds
900 s. Output LB1, LB2, LB3, UB1, UB2, UB3, UB4, UB5, UB6, UB7, bestc, the CPU time and the number of
exploring nodes.
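A much-condensed Python sketch of this procedure follows (our own illustration: the heuristic upper bounds of Section 4.2 and the dominance rules are omitted, a trivial initial upper bound is used instead, and the function name is hypothetical).

import math, time

def bba_sketch(a, b, m, time_limit=900):
    n = len(a)
    best = [sum(a) + sum(b)]              # trivial initial upper bound
    start = time.time()

    def dfs(C_A, L, rest):
        if time.time() - start > time_limit:
            return
        if not rest:
            best[0] = min(best[0], max(L))
            return
        # partial lower bounds LB4 and LB5 (Step 4.1)
        lb4 = C_A + sum(a[j] for j in rest) + min(b[j] for j in rest)
        t = C_A + min(a[j] for j in rest)
        lb5 = t + math.ceil((sum(max(Li - t, 0) for Li in L)
                             + sum(b[j] for j in rest)) / m)
        if max(lb4, lb5) >= best[0]:
            return                        # node fathomed, cannot improve bestc
        for j in sorted(rest):            # branch on the next first-stage job
            i = min(range(m), key=lambda v: L[v])   # ECT choice (Theorem 1)
            nC_A = max(C_A + a[j], L[i])
            nL = list(L); nL[i] = nC_A + b[j]
            dfs(nC_A, nL, rest - {j})

    dfs(0, [0] * m, frozenset(range(n)))
    return best[0]

In the real algorithm the initial bestc comes from the seven heuristics, nodes are additionally pruned by Theorems 2 and 3, and DFS ties are broken by the lowest partial lower bound.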

4.5 An illustrative example


Recall Example 1 in Subsection 2.2; we use the processing data of the jobs (without machine assignment yet) to illustrate the
upper bounds, lower bounds and dominance rules. In this example, n = 8, m = 2, a1 = 4, a2 = 4, a3 = 3, a4 = 4, a5 = 1,
a6 = 3, a7 = 6, a8 = 3, and b1 = 3, b2 = 5, b3 = 6, b4 = 4, b5 = 3, b6 = 4, b7 = 2, b8 = 2.
Go to Step 2.1 and calculate the initial lower bounds:

LB_1 = Σ_{j=1}^{n} a_j + b_min = 4 + 4 + 3 + 4 + 1 + 3 + 6 + 3 + 2 = 30

To compute LB_2, first we use the LDT rule to get a sequence τ = {τ_1, . . . , τ_n} = {3, 2, 4, 6, 1, 5, 7, 8}, then

LB_2 = max_{j∈I} { Σ_{i=1}^{j} a_{τ_i} + b_{τ_j} } = max{3 + 6, 3 + 4 + 5, 4 + 3 + 4 + 4, 4 + 3 + 4 + 3 + 4,
4 + 4 + 3 + 4 + 3 + 3, 4 + 4 + 3 + 4 + 1 + 3 + 3,
4 + 4 + 3 + 4 + 1 + 3 + 6 + 2,
4 + 4 + 3 + 4 + 1 + 3 + 6 + 3 + 2} = 30

To compute LB_3, first we use the SPT rule to get a sequence η = {η_1, . . . , η_n} = {5, 3, 6, 8, 1, 2, 4, 7}. Since

(1/2)(3 + 5 + 6 + 4 + 3 + 4 + 2 + 2 + 1 × 3 + 2 × 1) = 34/2,

then

LB_3 = ⌈ (1/m) ( Σ_{j=1}^{n} b_j + Σ_{h=1}^{m} h·a_{η_{m−h+1}} ) ⌉ = 17
Then, lower B = max(L B1 , L B2 , L B3 ) = 30.
Then, go to step 2.2, calculate the upper bounds.
For Johnson’s rule, the job sequence is {5, 3, 6, 2, 4, 1, 8, 7}, U B1 = 30.
For Modified Johnson’s rule, the job sequence is {5, 3, 6, 4, 2, 8, 7, 1}, U B2 = 31.
For LPT rule, the job sequence is {3, 2, 4, 6, 1, 5, 7, 8}, U B3 = 30.
For Large–Small rule, the job sequence is {3, 8, 2, 7, 4, 5, 6, 1}, U B4 = 31.
For Large–medium rule, the job sequence is {3, 1, 2, 5, 4, 7, 6, 8}, U B5 = 30.
For LD rule, the job sequence is {1, 3, 7, 5, 6, 2, 4, 8}, U B6 = 30.
For the Modified Gilmore and Gomory's rule, the job sequence for UB7_1 is {4, 8, 2, 5, 6, 7, 3, 1}, UB7_1 = 30. The job
sequence for UB7_2 is {5, 6, 7, 4, 8, 3, 2, 1}, UB7_2 = 30. Hence, UB7 = 30.
Then, we get bestc = min(UBi) = 30. For this example, the proposed BBA algorithm stops at Step 3 (going directly to Step 5) since
bestc = lowerB = 30.

Though we do not need dominance rules for this example problem, let us illustrate them. For the example mentioned
above, a (partial) sequence ψ → 1 → 4 is dominated by ψ → 4 → 1 since they satisfy a4 ≥ a1 ≥ b4 ≥ b1 , according to
Theorem 2. Similarly, according to Theorem 2, a (partial) sequence ψ → 8 → 1 is dominated by ψ → 1 → 8 since jobs 8
and 1 satisfy a1 ≥ a8 ≥ b1 ≥ b8 .
According to Theorem 3, a (partial) sequence ψ → 3 → 6 is dominated by ψ → 6 → 3 if jobs 3 and 6 are processed on the same
second-stage machine, since they satisfy a6 ≤ a3 ≤ b6 ≤ b3. Similarly, a (partial) sequence ψ → 6 → 5 is dominated
by ψ → 5 → 6 if jobs 5 and 6 are processed on the same second-stage machine, since they satisfy a5 ≤ a6 ≤ b5 ≤ b6.
Theorem 3 can also be applied to the job pairs (5, 3), (2, 4), (2, 6) and (4, 6).
As for the partial lower bounds, let us assume we have a partial schedule ψ = {3, 4}; then we can obtain

LB_4 = 3 + 4 + 4 + 4 + 1 + 3 + 6 + 3 + 2 = 30

LB_5 = 3 + 4 + 1 + ⌈ (1/2) ( max{3 + 6 − (3 + 4 + 1), 0} + max{4 + 4 − (3 + 4 + 1), 0} + 3 + 5 + 3 + 4 + 2 + 2 ) ⌉ = 18

Hence, the partial lower bound for ψ = {3, 4} is PLB = 30.


With these two dominance rules and partial lower bound, many branching nodes can be discarded (though we do not
need these for this example problem).

5. Computational results
In this section, we present the results of our computational tests with the proposed branch-and-bound procedure. All algorithms
are coded in Visual C++ 6.0 with a min-heap class and implemented on a personal computer with an Intel 2.10 GHz CPU and
1.96 GB RAM.

5.1 Test problems


Similar to the idea of test problem generation in Gupta, Hariri, and Potts (1997), three classes of test problems are
generated. In each class, second-stage processing times are random integers from the uniform distribution [1, 100]. Problem
‘hardness’ is likely to depend on whether there is a balance between average workloads on the second-stage machines and
the total workload at the first stage. Thus, first-stage processing times are random integers from the following uniform
distribution:
(1) CLASS 1: [1, 100/m];
(2) CLASS 2: [1, 200/m];
(3) CLASS 3: [1, 200/(3m)].
For CLASS 1, the workloads at the two stages are balanced. However, the workload at the first stage is larger than
the average workload on the second-stage machines for CLASS 2, and is smaller for CLASS 3. Due to the characteristics
of the NP-hardness of the problem and the limitations of personal computers, the branch-and-bound method can be used to
obtain optimal solutions only for problems up to a certain number of jobs. In our experimentation, we consider problems with up to 20
jobs. Problems are generated with 10, 12, 15 and 20 jobs, and with 2, 3, 5 and 10 machines at the second stage. For each
combination of n and m, 10 problem instances were generated randomly for each of the three classes.

5.2 Evaluation of branch-and-bound algorithms


Four variants of the branch-and-bound algorithm are tested for each problem instance in this experiment. BB0
only incorporates Theorem 1. In addition to Theorem 1, BBP adds the partial lower bounds, and BBD23 adds
Theorems 2 and 3, separately, in order to assess the influence of the partial lower bounds and of Theorems 2 and 3. The entire
branch-and-bound algorithm is implemented in BBA.
Computational results with average CPU computation times (in seconds), average number of exploring nodes, number
of problems solved at root nodes and number of problems solved in 900 s are listed in Tables 2–4, for CLASS 1, CLASS 2

Table 2. Results of four branch-and-bound algorithms for CLASS 1 problems.

CLASS 1. Columns: n; m; ACT (a) for BB0, BBD23, BBP, BBA; ANEN (b) for BB0, BBD23, BBP, BBA; number of problems solved at root node; number of problems solved in 900 s for BB0, BBD23, BBP, BBA.
n m BB0 BBD23 BBP BBA BB0 BBD23 BBP BBA solved at root node BB0 BBD23 BBP BBA

10 2 5.303 6.586 0.072 0.088 1,299,831 1,220,310 6522 6499 0


3 4.433 5.774 0.072 0.091 1,039,034 1,021,079 8218 8206 0
5 1.625 2.169 0.395 0.481 365,863 363,888 46,921 46,807 1
10 0.001 0.001 0.001 0.001 0 0 0 0 10 10
12 2 330.083 387.747 0.225 0.247 74,332,132 64,932,769 14,066 13,612 0
3 294.469 366.067 0.558 0.666 58,867,494 55,528,666 34,865 34,831 0
5 130.886 174.948 7.544 9.052 26,896,122 26,627,407 778,405 774,455 1
10 5.067 7.003 3.681 4.570 1,101,438 1,099,918 423,666 423,666 9
15 2 3.071 3.348 133,138 128,566 0 10 10
3 8.131 9.222 548,424 543,487 0 10 10
5 55.652 64.908 3,747,107 3,727,153 0 10 10
10 509.400 523.667 53,999,425 44,903,390 3 5 5
20 2 319.736 329.841 18,275,730 16,360,110 0 8 8
3 378.778 400.931 16,383,945 15,128,687 0 8 8
5 730.502 731.538 39,193,810 34,467,488 0 1 1
10 900.016 900.016 84,664,755 70,425,187 0 0 0
(a) ACT: average CPU computation time in seconds on the computer with an Intel 2.10 GHz CPU and 1.96 GB RAM.
(b) ANEN: average number of exploring nodes (rounded up to the nearest integer).

and CLASS 3, respectively. For example, as shown in Table 2, the average CPU computation times in seconds of the 10 CLASS
1 problem instances with n = 10, m = 5 are 1.625, 2.169, 0.395 and 0.481 for BB0, BBD23, BBP and BBA, respectively.
The average numbers of exploring nodes with these four methods are 365,863, 363,888, 46,921 and 46,807, respectively.
One out of 10 problem instances can be solved at the root node because the upper bound equals the lower bound.
All 10 problem instances can be solved by these four methods within the pre-set 900 s.
Computational results with the average makespan compared with the initial lower bounds and upper bounds, and the number of problems
whose initial lower bound or initial upper bound equals the optimal solution, are shown in Tables 5–7 for CLASS 1, CLASS 2
and CLASS 3, respectively. Note that the number of problems whose initial lower (upper) bound value equals the optimal solution
is listed in parentheses if all 10 problems can be solved in 900 s. The bold values are used to highlight the better bound
(either lower bound or upper bound). Based on the results, the following observations can be made.
(1) Comparison of effectiveness of branch-and-bound methods: Tables 2–4 show that all four branch-and-bound
algorithms perform satisfactorily when there are 10 or 12 jobs. Computation times and numbers of exploring nodes
become larger as the number of jobs increases. This is an expected result due to the increasing search space. BBA is
the most effective at fathoming nodes when compared with BBP, BBD23 and BB0, indicating the combined strength of
the partial lower bounds and dominance rules. In addition, the number of exploring nodes of BBP is less than that
of BBD23, which in turn is less than that of BB0. This indicates the important roles of the partial lower bounds and the
two dominance rules separately, and that the partial lower bounds are relatively more effective than the two dominance rules
in these tested cases.
(2) Comparison of efficiency of branch-and-bound methods: As for the computation times, as the number of machines
increases, BB0 and BBD23 need much more time than BBA and BBP. Since the computational times for
BBD23 and BB0 are already relatively large, only BBA and BBP are used to test the problems with 15 jobs and 20
jobs. The results in Tables 2–4 show that the solution quality of BBA and BBP is comparable. BBA takes a little
more time than BBP. This may be explained as follows: as the number of machines increases, the probability
of satisfying the conditions of Theorem 2 becomes smaller; meanwhile, the probability of satisfying the conditions of
Theorem 3 does not increase either, due to the requirement of the same machine at the second stage (the possibility
of two successive jobs sharing a machine decreases as the number of machines increases), but the additional computational effort
of the subroutines of these two dominance rules is still needed.
(3) Effects of problem configurations: It is apparent from Table 3 that problems in CLASS 2 are relatively easy to solve.
There are more problems that can be solved at root node with the initial upper bound equivalent to the initial lower
Table 3. Results of four branch and bound algorithms for CLASS 2 problems.

CLASS 2. Columns: n; m; ACT for BB0, BBD23, BBP, BBA; ANEN for BB0, BBD23, BBP, BBA; number of problems solved at root node; number of problems solved in 900 s for BB0, BBD23, BBP, BBA.
n m BB0 BBD23 BBP BBA BB0 BBD23 BBP BBA solved at root node BB0 BBD23 BBP BBA

10 2 0.417 0.514 0.008 0.009 98,227 90,734 591 587 9


3 4.102 4.583 0.855 0.878 951,981 859,347 100,730 91,996 7
5 2.311 2.866 0.558 0.647 507,415 507,163 66,889 66,889 5
10 0.001 0.001 0.001 0.001 0 0 0 0 10 10
12 2 0.324 0.359 0.036 0.038 73,787 62,501 2528 2349 6
3 18.475 22.427 0.005 0.005 4,212,034 4,016,354 506 505 6
5 229.269 239.300 56.225 66.124 46,190,592 39,636,199 6,534,244 6,534,244 7
10 0.019 0.028 0.108 0.133 4080 4080 12,049 12,049 9
15 2 1.013 1.138 48,173 48,007 7 10 10
3 0.002 0.002 238 238 7 10 10
5 0.005 0.006 634 634 8 10 10
10 0.001 0.001 0 0 10 10 10

20 2 0.006 0.007 51 50 7 10 10
3 0.106 0.114 12,136 11,286 5 10 10
5 90.024 90.027 10,422,347 8,673,192 6 9 9
10 360.006 360.006 34,177,494 28,090,744 5 6 6

Table 4. Results of four branch and bound algorithms for CLASS 3 problems.

CLASS 3. Columns: n; m; ACT for BB0, BBD23, BBP, BBA; ANEN for BB0, BBD23, BBP, BBA; number of problems solved at root node; number of problems solved in 900 s for BB0, BBD23, BBP, BBA.
n m BB0 BBD23 BBP BBA BB0 BBD23 BBP BBA solved at root node BB0 BBD23 BBP BBA

10 2 7.366 8.833 0.008 0.008 1,847,006 1,693,883 970 966 0


3 6.400 8.076 0.056 0.064 1,514,498 1,471,421 6308 6275 0
5 3.327 4.333 1.295 1.636 718,828 709,700 157,518 155,919 0
10 0.001 0.001 0.001 0.001 0 0 0 0 10 10
12 2 776.612 835.030 0.338 0.381 188,620,127 155,528,660 29,057 28,111 0
3 486.424 598.822 0.300 0.338 111,667,450 102,833,921 24,528 24,124 0

5 284.742 363.689 24.076 28.722 58,750,414 57,285,286 2,808,444 2,768,835 0


10 5.784 7.674 3.406 4.191 1,106,588 1,106,556 397,701 397,681 8
15 2 10.950 11.986 945,140 889,250 0 10 10
3 49.647 57.069 5,450,521 5,377,121 0 10 10
5 222.902 244.925 20,796,493 19,077,603 0 9 9
10 450.008 450.008 53,962,907 43,941,381 5 5 5
20 2 311.323 317.124 20,500,751 18,005,202 0 7 7
3 354.067 363.351 23,904,041 21,203,474 0 7 7
5 900.010 900.007 73,709,915 62,700,638 0 0 0
10 900.010 900.007 223,233,249 181,367,691 0 0 0
Table 5. Average makespan of four branch and bound algorithms, lower bounds, and upper bounds for CLASS 1 problems.

Columns: n; m; results of the branch-and-bound methods (BB0, BBD23, BBP, BBA); results of the initial lower bounds LB1, LB2, LB3 (the number of problems whose initial lower bound value equals the optimal solution is given in parentheses); results of the initial upper bounds UB1–UB7 (the number of problems whose initial upper bound value equals the optimal solution is given in parentheses).
n m BB0 BBD23 BBP BBA LB1 LB2 LB3 UB1 UB2 UB3 UB4 UB5 UB6 UB7

10 2 291.5 291.5 291.5 291.5 272.3(1) 274.6(4) 257.5 323.6 318.5 344.7 355.6 343.4 333.9 323.1
3 204.8 204.8 204.8 204.8 174.2 181.1(1) 189.1 248.3 223 241.5 253 251.6 249.9 230.5
5 132.2 132.2 132.2 132.2 115.1(2) 124.2(4) 114.7 163.7 142.4(1) 149.5 170.4 166.3 172.1 165.2
10 99.3 99.3 99.3 99.3 66.2 99.3(10) 71.2 126 105.8(2) 99.3(10) 109.8(1) 109.6(1) 121.5(2) 121.3
12 2 323.4 323.4 323.4 323.4 304.6(3) 304.9(3) 293 374.5 366.8 394.5 396.7 392.8 389.5 356.3
3 223.7 223.7 223.7 223.7 197.5 200.5 214.6 264.4 260.6 283.5 289 286.7 293.4 273.6
5 147.8 147.8 147.8 147.8 129.7(2) 136.6(3) 131.9 185.2(1) 161.9 177.7 190.2 185.3 193.5 193.4
10 105.1 105.1 105.1 105.1 72.1 105(9) 80.9 132.6 111.7 105.1(10) 123.5(1) 124.9(1) 143.2 133.9
15 2 411.3 411.3 391.1(4) 391.1(4) 370.9 483.8 476.2 500.6 502.6 499.8 488.2 468
3 293.2 293.2 259.7(3) 259.7(3) 276(3) 344.8 329.5 364.1 364.6 366.1 362.5 345
5 170.5 170.5 159.3(3) 161.6(3) 161.2 224.1 199.5 215.1 224.6 226.4 243.7 222.1
10 115.3 115.3 86.5 113.5 93.9 147.9 127.6 124.6 145.7 145.1 159.6 152.7
20 2 568.2 568.2 505.6 480.5 530.1 652.1 647.2 681.7 706.6 677.5 654.2 637.6
3 363.2 363.2 342.6 328 338.6 436.3 434.9 463.8 453.1 443.7 454.4 425.3
5 225.7 225.7 215.3 203.4 201.4 268.7 257.8 283.2 285.8 278 298.7 284.3
10 137.9 137.5 111.3 116.3 114.7 181.7 145.6 157.6 177.2 170.4 189.1 181.7

Table 6. Average makespan of four branch-and-bound algorithms, lower bounds, and upper bounds for CLASS 2 problems.

Columns: n, m; results of the branch-and-bound methods (BB0, BBD23, BBP, BBA); results of the initial lower bounds LB1–LB3 (in parentheses: number of problems whose initial lower bound value equals the optimal solution); results of the initial upper bounds UB1–UB7 (in parentheses: number of problems whose initial upper bound value equals the optimal solution).

n m | BB0 BBD23 BBP BBA | LB1 LB2 LB3 | UB1 UB2 UB3 UB4 UB5 UB6 UB7
10 2 491.3 491.3 491.3 491.3 488.6(9) 437.9 255.6 513.2(3) 501.4(8) 503.3(6) 532.3 512.1(3) 526.5(2) 497.9(6)
3 354.6 354.6 354.6 354.6 352.1(7) 323.2 182.1 356.3(7) 355.2(8) 358(5) 388.9 377.4(3) 393.1(1) 366.6(3)
5 201.3 201.3 201.3 201.3 198.2(8) 176.9 119 216.7(3) 205.1(6) 216.1(6) 238.4 228.3(1) 241.5 223.3(1)
10 136.6 136.6 136.6 136.6 124 136.6(10) 99.4 160(1) 140(4) 136.6(10) 162.7 157.8 174.8 160.5(1)
12 2 568.4 568.4 568.4 568.4 568.4(10) 511 324.1 594.4(4) 594.3(4) 618.6(3) 629.1 597.9(3) 614.3 579.1(6)
3 384.5 384.5 384.5 384.5 384.5(10) 357.4 219.4 399.4(6) 398.1(6) 409.4(3) 434.7 420.2 441.4 398.9(3)
5 238.8 238.8 238.8 238.8 235.7(7) 213.6 135.7 254.5(2) 240.1(9) 250.8(2) 274.9 261.9(1) 291.8 270.4
10 139.3 139.3 139.3 139.3 132.2(4) 139.3(10) 96.6 180.7 141.1(6) 139.4(9) 175.5 166.4 194.3 174.9(1)
15 2 716.2 716.2 716.2(10) 665.9 386.5 755.2(3) 753(4) 768.5(2) 778 742.8(3) 758.7(1) 732.4(5)
3 537.2 537.2 537.2(10) 507 278.4 553(2) 549.1(6) 554.3(5) 579.3 551(1) 587.7 552.6(3)
5 299.1 299.1 299.1(10) 281.9 162 311.1(5) 303(7) 313.5(4) 341.3 316.6(1) 344.7 324.4
10 183.2 183.2 180.1(6) 183.2(10) 111.1 221 183.8(7) 183.2(10) 223 223 231.4 221.8
20 2 1059.4 1059.4 1059.4(10) 1003.8 514.0 1096.1 1092.8(1) 1109.3(1) 1128.9(1) 1098.7 1089.2 1060.7(7)
3 676.3 676.3 676.3(10) 641.3 350.7 702.6(2) 704.5(3) 698.5(3) 730.8 710.1(1) 726.6 693.2(2)
5 421.4 421.4 420.8 400.3 214.8 434.6 426.2 437.3 461.6 447.5 478.8 453.6
10 220.1 220.1 217.6 211.1 124.8 238.3 221.7 222.6 259.7 253.2 272.6 264.0
Table 7. Average makespan of four branch-and-bound algorithms, lower bounds, and upper bounds for CLASS 3 problems.

Columns: n, m; results of the branch-and-bound methods (BB0, BBD23, BBP, BBA); results of the initial lower bounds LB1–LB3 (in parentheses: number of problems whose initial lower bound value equals the optimal solution); results of the initial upper bounds UB1–UB7 (in parentheses: number of problems whose initial upper bound value equals the optimal solution).

n m | BB0 BBD23 BBP BBA | LB1 LB2 LB3 | UB1 UB2 UB3 UB4 UB5 UB6 UB7
10 2 264.9 264.9 264.9 264.9 186.1 173.6 261.1(2) 292.5 293.1 311.9 312.5 308.1 298.4 286.3
3 189 189 189 189 125.5 123.8 184.2 223.7 207.2 221.7 226.2 230.1 227.5 214.8
5 119.9 119.9 119.9 119.9 81.6 99.2 108 151.2 134.3(1) 136.5 146.9 143.8 150.2 137.9
10 92.9 92.9 92.9 92.9 50.9 92.9(10) 65.8 109.8 102.1 92.9(10) 95.9(4) 97.9(2) 107.6 105.9(1)

12 2 304.9 304.9 304.9 304.9 204.9 191.4 299.6 334.2 335.7 349.7 360.2 347.2 340.1 341.3
3 194.5 194.5 194.5 194.5 132.2 133.4 189.5 227.9 212.6 229.6 236.7 235.6 236.9 226.6
5 136.8 136.8 136.8 136.8 91.4 104.8 129.2 176.8 157.6 158.4 171.2 173 175 168.2
10 102.3 102.3 102.3 102.3 53.3 102.3(10) 81.6 128.1 116.8 103(8) 112.7 116.1 122 121.7

15 2 407.9 407.9 250.4 232.5 406.5(1) 447.7 451.2 480.8 481.6 457.6 453.2 440.8
3 249.6 249.6 172.4 165.7 247.8 290.9 278.9 298.8 296.1 290.3 294.4 280.5
5 152.6 152.6 111.6 111.2 147.7 191.8 170.6 186.8 192.7 188.4 194.2 198.7
10 102 102 58 100.3 83 133.9 118 105 111.9 110.8 125.3 126.7

20 2 502.7 502.7 342.7 343.5 501.8 557.8 558 585 576.2 565.1 554.8 551.2
3 344.2 344.2 221.8 225.6 343 388.4 380.3 398.6 403 395.4 397.6 385.3
5 231.1 231.1 153.9 163.6 226.4 267.8 253.3 271.1 275.8 273.2 282.1 280.2
10 122 122 74 100.7 110.5 158.8 141.7 136.4 157.4 154.7 163.6 157.6
Paired Samples Test (paired differences; 95% confidence interval of the difference):

Pair                Mean        Std. Deviation   Std. Error Mean   Lower        Upper       t        df   Sig. (2-tailed)
Pair 1  UB2 - UB3   -11.29792   12.77490         1.84390           -15.00736    -7.58847    -6.127   47   .000
Pair 2  UB2 - UB7   -10.04583   16.71871         2.41314           -14.90044    -5.19123    -4.163   47   .000
Pair 3  UB3 - UB7     1.25208   24.08985         3.47707            -5.74288     8.24705      .360   47   .720

Figure 3. Paired-sample t test of UB2, UB3 and UB7.
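The test in Figure 3 is a standard paired-samples t test over the per-cell mean upper-bound values (df = 47, i.e. 48 paired observations, which matches the 16 (n, m) combinations in each of the three classes). The sketch below is for illustration only and is not the authors' code: it runs the same kind of test with SciPy, using as placeholder data the four n = 10 values of UB2, UB3 and UB7 from Table 5 (CLASS 1), so its output will not reproduce the numbers in Figure 3.

```python
# Illustrative paired-samples t test on upper-bound values (not the authors' code).
# The arrays below are the n = 10 rows of Table 5 (CLASS 1) and serve only as
# placeholder data; the test reported in Figure 3 uses all 48 (n, m, class) cells.
import numpy as np
from scipy import stats

ub2 = np.array([318.5, 223.0, 142.4, 105.8])
ub3 = np.array([344.7, 241.5, 149.5, 99.3])
ub7 = np.array([323.1, 230.5, 165.2, 121.3])

for name, x, y in [("UB2 - UB3", ub2, ub3),
                   ("UB2 - UB7", ub2, ub7),
                   ("UB3 - UB7", ub3, ub7)]:
    res = stats.ttest_rel(x, y)  # paired-samples (dependent) t test
    print(f"{name}: t = {res.statistic:.3f}, p = {res.pvalue:.3f}")
```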

Table 8. Comparison of computational effort (ACT^a) between the proposed BBA and CPLEX.

CLASS 1 | CLASS 2 | CLASS 3
n-m CPLEX BBA | n-m CPLEX BBA | n-m CPLEX BBA
10-2 8.32 0.088 10-2 961.822 0.009 10-2 6.728 0.008


10-3 6.378 0.091 10-3 1739.905(4) 0.878 10-3 3.53 0.064
10-5 11.474 0.481 10-5 1455.363(3) 0.647 10-5 3.558 1.636
10-10 106.903 0.001 10-10 1461.123(3) 0.001 10-10 1.072 0.001
12-2 449.618 0.247 12-2 (9) 0.038 12-2 267.69 0.381
12-3 172.193 0.666 12-3 (10) 0.005 12-3 186.331 0.338
12-5 319.216 9.052 12-5 (10) 66.124 12-5 68.587 28.722
12-10 416.127 4.570 12-10 (10) 0.133 12-10 1.665 4.191
15-2 (8)b 3.348 15-2 (10) 1.138 15-2 (9) 11.986
15-3 (9) 9.222 15-3 (10) 0.002 15-3 (10) 57.069
15-5 (10) 64.908 15-5 (10) 0.006 15-5 (10) 244.925[1]
15-10 (10) 523.667[5]c 15-10 (10) 0.001 15-10 (10) 450.008[5]
20-2 (10) 329.841[2] 20-2 (10) 0.007 20-2 (10) 317.124[3]
20-3 (10) 400.931[2] 20-3 (10) 0.114 20-3 (10) 363.351[3]
20-5 (10) [9] 20-5 (10) 90.027[1] 20-5 (10) [10]
20-10 (10) [10] 20-10 (10) 360.006[4] 20-10 (10) [10]
^a ACT: average CPU computation time in seconds on a computer with an Intel 2.10 GHz CPU and 1.96 GB RAM.
^b (8): 8 out of 10 problem instances could not be solved by CPLEX within 3600 CPU seconds.
^c [5]: 5 out of 10 problem instances could not be solved by BBA within 900 CPU seconds.

bound for CLASS 2 than for CLASS 1 and CLASS 3. This means that the proposed lower bound and upper bound
are more effective for CLASS 2 than for the other two classes. Tables 2 and 4 show that problems in CLASS 3 are
slightly harder than those in CLASS 1, since more CLASS 1 problems than CLASS 3 problems can be solved for
most (n, m) combinations when n = 15 or n = 20. This is mainly because, with the lighter workload at the second
stage, less idle time is expected at the first stage for CLASS 1 (i.e. jobs can be processed one immediately after
another), which leads to a smaller makespan and allows the method to discard more nodes during the search.
(4) Comparison of bounds: Tables 5–7 also show that, for most of the tested problems, LB2, LB1 and LB3 have the
largest mean lower bound values in CLASS 1, CLASS 2 and CLASS 3, respectively. As for the upper bounds, UB2
has the smallest mean value for most tested problems in all three classes. However, as the number of machines
increases, UB3 gets closer and closer to the optimal solution, and it yields the optimal solution when the number
of machines equals the number of jobs. It is also observed that UB7 performs better when the number of machines
at the second stage is 2. Therefore, LB2, LB1 and LB3 may be more effective than the other lower bounds for
problems in CLASS 1, CLASS 2 and CLASS 3, respectively, while UB2, UB3 and UB7 are the more effective
upper bounds across all three classes; based on a paired-samples t test, UB2 is the best (as shown in Figure 3).
These observations can be used to guide the design of constructive or local search-based heuristics for large-scale
problems.

Based on the above results, we conclude that the dominance rules and the bounds (especially the partial lower bounds) play
an important role in the branch-and-bound algorithm. BBA and BBP are comparable (the two methods give the same results
for almost all cases; BBA performs slightly better for the case (n = 20, m = 10) of CLASS 1), and both are more efficient
than the other two methods. BBA takes slightly more CPU time than BBP, which may be due to the additional testing of the
dominance rules (Theorems 2 and 3) during the computation.
We have also formulated the problem as a MIP model, given in the Appendix, and solved it with ILOG CPLEX to benchmark
the proposed BBA method. The comparison results are reported in Table 8 and demonstrate the efficiency of the proposed BBA
relative to CPLEX. They also show that CLASS 2 problems are the hardest and CLASS 3 problems the easiest for CPLEX,
whereas the proposed BBA solves CLASS 2 problems much more easily.

6. Conclusions
This paper considers a two-stage no-wait hybrid flow-shop scheduling problem in which the first stage contains a single
machine, and the second stage contains several identical parallel machines. The objective is to minimise makespan. We
developed a branch-and-bound algorithm which incorporates some lower bounds, partial lower bounds, upper bounds and
dominance rules. The computational results with comparisons show the efficiency and effectiveness of the partial lower
bounds and dominance rules. The proposed branch-and-bound algorithm can solve problems with up to 20 jobs within
reasonable computational time. The effects of different problem configurations and different bounds are also analysed.
Further research may explore stronger dominance rules and tighter bounds to improve the branch-and-bound algorithm so that
larger problems can be solved. There is also a need for efficient heuristic or meta-heuristic methods. Considering other
objectives, such as minimising total flow time or total tardiness, is also interesting and worthy of future investigation.

Acknowledgements
The authors would like to thank the editors and anonymous referees for their helpful comments and suggestions which have significantly
improved the presentation of the paper.

Funding
This work was supported by the National Natural Science Foundation of China [grant number 71101106], [grant number 71171149], [grant number
71428002]; NSFC major program [grant number 71090404/71090400].


Appendix
In this part, we present a mixed integer linear programming (MIP) model for the considered problem. The purpose is to solve the same
problem with ILOG CPLEX 12.5 as a benchmark for the proposed branch-and-bound algorithm. We first define the coefficients and then
the decision variables.

Coefficients
N        set of jobs, i.e. N = {1, 2, ..., n}
M        set of second-stage machines, i.e. M = {1, ..., m}
a_i      job i's processing requirement on the first stage
b_i      job i's processing requirement on the second stage
M̄        a sufficiently large integer

Decision variables
y_i^k    binary, equal to 1 iff job i is assigned to the second-stage machine k
x_{ij}^A binary, equal to 1 iff job i precedes job j on the first-stage machine A
x_{ij}^k binary, equal to 1 iff job i precedes job j on the second-stage machine k ∈ M, provided that y_i^k = y_j^k = 1
S_i^k    job i's start time on a second-stage machine k ∈ M; S_i^k = 0 if y_i^k = 0
C_i^k    job i's completion time on a second-stage machine k ∈ M; C_i^k = 0 if y_i^k = 0
S_i^A    job i's start time on the first-stage machine A
C_i^A    job i's completion time on the first-stage machine A, i.e. C_i^A = S_i^A + a_i
C_max    the makespan

The MIP formulation

Minimising the makespan:

    Z = min C_max.                                                        (39)

The definition of C_max:

    s.t.  C_max ≥ C_i^k,   i ∈ N, k ∈ M.                                  (40)

The definition of C_i^k:

    C_i^k ≤ y_i^k M̄,   i ∈ N, k ∈ M.                                      (41)
    C_i^k ≥ S_i^k + b_i − (1 − y_i^k) M̄,   i ∈ N, k ∈ M.                  (42)
    C_i^k ≤ S_i^k + b_i + (1 − y_i^k) M̄,   i ∈ N, k ∈ M.                  (43)

The definition of S_i^k:

    S_i^k ≤ y_i^k M̄,   i ∈ N, k ∈ M.                                      (44)
No-wait constraint:

    S_i^k ≥ C_i^A − (1 − y_i^k) M̄,   i ∈ N, k ∈ M.                        (45)
    S_i^k ≤ C_i^A + (1 − y_i^k) M̄,   i ∈ N, k ∈ M.                        (46)

Precedence on a second-stage machine:

    C_i^k ≤ S_j^k + (1 − x_{ij}^k) M̄,   i ≠ j ∈ N, k ∈ M.                 (47)

The definition of C_i^A:

    C_i^A = S_i^A + a_i,   i ∈ N.                                         (48)

Precedence on the first-stage machine:

    C_i^A ≤ S_j^A + (1 − x_{ij}^A) M̄,   i ≠ j ∈ N.                        (49)

Convexity constraint on y:

    Σ_{k ∈ M} y_i^k = 1,   i ∈ N.                                         (50)

Convexity constraint on x^A:

    x_{ij}^A + x_{ji}^A = 1,   i ≠ j ∈ N.                                 (51)

Linking constraints:

    x_{ij}^k ≤ y_i^k,   i ≠ j ∈ N, k ∈ M.                                 (52)
    x_{ij}^k ≤ y_j^k,   i ≠ j ∈ N, k ∈ M.                                 (53)
    x_{ij}^k ≤ x_{ij}^A,   i ≠ j ∈ N, k ∈ M.                              (54)
    x_{ij}^k ≥ 1 − (3 − y_i^k − y_j^k − x_{ij}^A) M̄,   i ≠ j ∈ N, k ∈ M.  (55)

Range of variables:

    x_{ij}^A, x_{ij}^k, y_i^k ∈ {0, 1},   i ≠ j ∈ N, k ∈ M.               (56)
    C_max, C_i^A, S_i^A, C_i^k, S_i^k ≥ 0,   i ∈ N, k ∈ M.                (57)
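As a complement to the formulation above, the following is a compact, runnable sketch of this MIP written with the open-source PuLP modelling package. It is not the authors' implementation (the paper solves the model with ILOG CPLEX 12.5); the four-job, two-machine instance and the big-M value are hypothetical choices made only to keep the example self-contained.

```python
# Sketch of the Appendix MIP in PuLP (not the authors' code); instance data are hypothetical.
import pulp

a = {1: 5, 2: 3, 3: 4, 4: 6}                 # first-stage processing times a_i
b = {1: 7, 2: 2, 3: 5, 4: 4}                 # second-stage processing times b_i
N = list(a)                                   # jobs
M = [1, 2]                                    # second-stage machines
bigM = sum(a.values()) + sum(b.values())      # a sufficiently large constant
pairs = [(i, j) for i in N for j in N if i != j]

prob = pulp.LpProblem("two_stage_no_wait_HFS", pulp.LpMinimize)
y  = pulp.LpVariable.dicts("y",  [(i, k) for i in N for k in M], cat="Binary")
xA = pulp.LpVariable.dicts("xA", pairs, cat="Binary")
xK = pulp.LpVariable.dicts("xK", [(i, j, k) for (i, j) in pairs for k in M], cat="Binary")
S2 = pulp.LpVariable.dicts("S2", [(i, k) for i in N for k in M], lowBound=0)
C2 = pulp.LpVariable.dicts("C2", [(i, k) for i in N for k in M], lowBound=0)
S1 = pulp.LpVariable.dicts("S1", N, lowBound=0)
C1 = pulp.LpVariable.dicts("C1", N, lowBound=0)
Cmax = pulp.LpVariable("Cmax", lowBound=0)

prob += Cmax                                                              # (39)
for i in N:
    prob += C1[i] == S1[i] + a[i]                                         # (48)
    prob += pulp.lpSum(y[(i, k)] for k in M) == 1                         # (50)
    for k in M:
        prob += Cmax >= C2[(i, k)]                                        # (40)
        prob += C2[(i, k)] <= y[(i, k)] * bigM                            # (41)
        prob += C2[(i, k)] >= S2[(i, k)] + b[i] - (1 - y[(i, k)]) * bigM  # (42)
        prob += C2[(i, k)] <= S2[(i, k)] + b[i] + (1 - y[(i, k)]) * bigM  # (43)
        prob += S2[(i, k)] <= y[(i, k)] * bigM                            # (44)
        prob += S2[(i, k)] >= C1[i] - (1 - y[(i, k)]) * bigM              # (45) no-wait
        prob += S2[(i, k)] <= C1[i] + (1 - y[(i, k)]) * bigM              # (46) no-wait

for (i, j) in pairs:
    prob += C1[i] <= S1[j] + (1 - xA[(i, j)]) * bigM                      # (49)
    if i < j:
        prob += xA[(i, j)] + xA[(j, i)] == 1                              # (51), added once per pair
    for k in M:
        prob += C2[(i, k)] <= S2[(j, k)] + (1 - xK[(i, j, k)]) * bigM     # (47)
        prob += xK[(i, j, k)] <= y[(i, k)]                                # (52)
        prob += xK[(i, j, k)] <= y[(j, k)]                                # (53)
        prob += xK[(i, j, k)] <= xA[(i, j)]                               # (54)
        prob += xK[(i, j, k)] >= 1 - (3 - y[(i, k)] - y[(j, k)] - xA[(i, j)]) * bigM  # (55)

prob.solve()                                   # default CBC solver; (56)-(57) are enforced via variable types/bounds
print("status:", pulp.LpStatus[prob.status], " Cmax =", pulp.value(Cmax))
```

If a CPLEX installation is available, the same model can be handed to it by calling prob.solve(pulp.CPLEX_CMD()) instead of the default solver.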
