
Accepted Manuscript

Two Stage particle swarm optimization to solve the Flexible job shop predictive
scheduling problem considering possible machine breakdowns

Maroua Nouiri, Abdelghani Bekrar, Abderrazak Jemai, Damien Trentesaux, Ahmed Chiheb Ammari, Smail Niar

PII: S0360-8352(17)30089-X
DOI: http://dx.doi.org/10.1016/j.cie.2017.03.006
Reference: CAIE 4658

To appear in: Computers & Industrial Engineering

Received Date: 16 March 2016


Revised Date: 28 January 2017
Accepted Date: 1 March 2017

Please cite this article as: Nouiri, M., Bekrar, A., Jemai, A., Trentesaux, D., Ammari, A.C., Niar, S., Two Stage
particle swarm optimization to solve the Flexible job shop predictive scheduling problem considering possible
machine breakdowns, Computers & Industrial Engineering (2017), doi: http://dx.doi.org/10.1016/j.cie.2017.03.006

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers
we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and
review of the resulting proof before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Title
Two Stage particle swarm optimization to solve the Flexible job shop predictive scheduling
problem considering possible machine breakdowns.

Author names and affiliations


Maroua Nouiri a,b, Abdelghani Bekrar c, Abderrazak Jemai a,d, Damien Trentesaux c, Ahmed Chiheb Ammari e,f, Smail Niar c

a University of El Manar of Tunis, Sciences Faculty of Tunis, LIP2 Laboratory, 2092 Tunis, Tunisia
b University of Carthage, Polytechnic School of Tunis, 2078 Tunis, Tunisia
c LAMIH, UMR CNRS 8201, University of Valenciennes and Hainaut-Cambrésis, UVHC, Le Mont Houy, 59313 Valenciennes Cedex, France
d University of Carthage, INSAT, 1080 Tunis, Tunisia
e MMA Laboratory, INSAT Institute, Carthage University, 1080 Tunis, Tunisia
f Renewable Energy Group, Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia

Corresponding author
Maroua Nouiri: maroua.nouiri@gmail.com

Two Stage particle swarm optimization to solve the Flexible job
shop predictive scheduling problem considering possible machine
breakdowns
Abstract
In real-world industrial environments, unplanned events and unforeseen incidents can happen
at any time. Scheduling under uncertainty allows these unexpected disruptions to be taken
into account. This work presents the study of the flexible job shop scheduling problems
(FJSP) under machine breakdowns. The objective is to solve the problem such that the lowest
makespan is obtained and also robust and stable schedules are guaranteed. A two-stage
particle swarm optimization (2S-PSO) is proposed to solve the problem assuming that there is
only one breakdown. Various benchmark instances taken from the literature, ranging from Partial
FJSP to Total FJSP, are tested. Computational results show that the developed algorithm is
effective and efficient compared with approaches from the literature, providing better robustness
and stability. Statistical analyses confirm this performance.
Key words: FJSP; PSO; machine breakdown; robustness; makespan; stability
1. Introduction
The job-shop scheduling problem (JSP) has been well studied during the past few decades. An
extension of the JSP, the flexible job-shop scheduling problem (FJSP) has also received
considerable attention (Xiong et al., 2012). For those systems, there has been a considerable
research effort on scheduling, most of which has been focused on deterministic problems,
which means optimizing particular performance measures, such as makespan or tardiness with
an assumption that the manufacturing environment is ideal and that no failure or breakdown
ever occurs (Jensen M., 2003). For example, the job processing time is generally considered
constant, and all the jobs are available at a release date and no disruptions occur on the job
shop (Liu et al., 2007). Many recent efficient meta-heuristic methods have been developed to
obtain near-optimal solutions for the deterministic FJSP, assuming that there is no source of
uncertainty. Among these methods, one can find the hierarchical multi-space competitive
distributed genetic algorithm (HmcDGA) (Ishikawa et al., 2015), simulated annealing
optimization algorithm (Kaplanoğlu V., 2016) and quantum-behaved particle swarm
optimization (QPSO) with a mutation operator (Ranjan and Mahapatra, 2016). (Nouiri et al.,
2015) proposed two multi-agent architectures based on PSO to solve the deterministic FJSP.
However, in most real-world manufacturing environments, the probability for a
schedule to be executed as planned is quite low, and the solutions established with the
estimated data may become obsolete during the execution (Ourari et al., 2015). In fact, many
parameters related to a scheduling problem are subject to fluctuations. Disruptions may
arise from the arrival of new jobs, job cancellations, urgent jobs to be taken into account,
changes in processing times, machine failures, etc. Thus, uncertainty is a very important
characteristic that researchers should not deny or neglect in the problem resolution.
Recently, research on production scheduling under uncertainty has attracted substantial
attention (Wang and Choi., 2012). Nevertheless, when incorporating the data uncertainty in
the formulation of the already NP-hard FJSP, the problem becomes even more difficult and
complicated to solve. In this context, heuristic and meta-heuristic approaches have received
attention to deal with the presence of uncertainty in the problem’s data parameters (Al-hinai
and ElMekkawy., 2012).
In this paper, we propose a two-stage PSO algorithm to solve the FJSP under uncertainty. We
restrict the term of uncertainty to machine breakdown, which refers to the temporary
unavailability of a machine. The idea is to find a predictive schedule referred as pre-schedule
that minimizes the effect of machine breakdowns in the overall performance and also
increases the schedule stability.
The rest of the paper is organized as follows. Section 2 presents some definitions of
uncertainties, robustness and stability. The literature review of scheduling approaches
addressing FJSP under uncertainties is presented in Section 3. The problem formulation and
the bi-objective optimization of the FJSP are presented in Section 4. Section 5 presents the
details of the two-stage PSO algorithm. Experimental results are reported in Section 6. Finally,
a conclusion and future work are given in Section 7.

2. Definitions
Real manufacturing is dynamic and tends to suffer a wide range of uncertainties, such as
random process time, random machine breakdown, random job arrivals or job cancellations
(Subramaniam and Raheja, 2003). Uncertainty means that the data are incomplete or
imprecise. It relates to doubts concerning the validity of knowledge, i.e., whether a proposition
is true or not (e.g., at time x, machine y is at a standstill, or there is no disturbance and no
programmed maintenance task) (Chaari et al., 2014).
(Vieira et al., 2003) classify uncertainties in job shops into two categories: Resource-related:
machine breakdown, operator illness, unavailability or failure of tools, loading limits, delay in
the arrival or shortage of materials, defective material (material with the wrong specification),
etc.; and Job-related: rush jobs, job cancellation, due-date changes, early or late arrival of
jobs, changes in job priority, and changes in job processing time, etc. Scheduling under
uncertainty allows these kinds of risks to be taken into account (Chaari et al., 2014).
Therefore, the algorithms for deterministic scheduling cannot be applied to uncertain
environments (He et al., 2013). Taking these aspects into account is very challenging for
solving scheduling problems. Industrial requirements evolved from the usual traditional
performance criteria, described in terms of static optimality or near-optimality, towards new
performance criteria, described in terms of reactivity, adaptability and robustness (Chaari et
al., 2014). Robustness is indicated by the expected value of the relative difference between the
deterministic and actual makespan (Xiong et al., 2013). A schedule is robust if its
performance degrades a small degree under disruptions, i.e., the performance of a robust
schedule is insensitive to disruptions (Liu et al., 2007). A predictive schedule is said to be
robust if the quality of the eventually executed schedule is close to the quality of the
predictive schedule (Bidot et al., 2009). A schedule is stable if it has a small deviation either
in time or in sequence between the predicted schedule and the realized one (Wu et al., 1993).
Different measures of stability and robustness for classical job shop scheduling problem have
been recently defined for the flexible job shop problem (Jensen et al., 2001), (He et al., 2013),
(Al-hinai and ElMekkawy., 2011).

3. FJSP considering machine breakdowns: a literature review
The FJSP under machine breakdowns, and more generally under uncertainty, is an NP-hard
problem and is more complex than its deterministic counterpart (He et al., 2013). This
section gives a brief review of the scheduling approaches
used in literature to cope with disruptions.
To structure this review, we use the new classification proposed by (Chaari et al., 2014), who
propose a classification scheme identifying proactive, reactive and hybrid scheduling
approaches and methods.
Proactive or predictive methods (offline) construct a predictive schedule on the basis of
statistical knowledge of the uncertainty, aiming at determining a schedule with good average
performance (Leon et al., 2009). A precomputed schedule, called a preschedule or predictive
schedule, is generated and executed until a machine breaks down. After that, a re-scheduling
procedure is launched to handle the machine breakdown. Redundancy approaches are based
on adding external resources or extending the processing times of operations in order to
absorb the effects of failures (Chiang et al., 1990). However, the effectiveness
of these approaches depends on the determination of good predictability measures. (Jensen
M., 2003) uses genetic algorithms to find a robust and flexible schedule with low makespan,
applicable for job shop scheduling problems. He defined a new robustness measure as well.
(Fattahi and Fallahi., 2010) develop a multi-objective genetic algorithm based method for
FJSP with dynamic arrival of jobs. (Al-hinai and ElMekkawy., 2012) propose a modified
hybrid genetic algorithm to solve the FJSP, where the processing times of some operations are
represented by a uniform distribution.
Machine breakdowns are one of the most studied disruptions in flexible job shop scheduling
(He et al., 2013). (Al-hinai and ElMekkawy., 2011) propose a hybrid GA to solve FJSP with
random machine breakdowns. The objective of the method is to obtain a predictive schedule
that minimizes the effect of machine breakdowns on the overall performance. Furthermore,
they propose three stability measures. (Dalfard and Mohammadi., 2012) focus on the multi
objective FJSP with parallel machines and maintenance cost. They propose a new
mathematical modeling for the problem and apply two meta-heuristic algorithms, a hybrid
genetic algorithm and a simulated annealing algorithm. (Xiong et al., 2012) propose a robust
scheduling for a FJSP with random machine breakdowns. They use two surrogate measures
and investigate their performance with a multi-objective evolutionary algorithm. (He et al.,
2013) apply a novel clone immune algorithm to solve the FJSP with machine breakdowns and
propose a new stability measure to reflect the stability of the machine allocation of each operation.
Recently, PSO has been used to solve the FJSP problem. (Pan et al., 2013) propose a
Quantum Particle Swarm Optimization algorithm (QPSO) to solve FJSP under uncertainty,
mainly with uncertain operation and delivery times, using a mathematical model. (Singh et al.,
2015) propose a multi-objective framework based on QPSO to generate a predictive schedule
that can simultaneously optimize the makespan and the robustness measure. (Sun et al., 2015)
propose a Hybrid Evolutionary Algorithm with the Bayesian Network (BN) to solve FJSP
under time uncertainties. The approach combines PSO and GA as typical Evolutionary
Algorithm (EA). The Bayesian Optimization Algorithm (BOA) is used to find the
relationships between the variables and, according to these relationships, an adaptive
mechanism dynamically adjusts the parameters of PSO, minimizing the makespan of the
FJSP within a reasonable amount of computing time.
On the other hand, one can find reactive methods. In reactive scheduling, no schedule is
generated in advance but decisions are made locally in real time (online). The scheduling will
be generated as a response to events and breakdown in real time without generating any kind
of pre-scheduling (Liu et al., 2007). A frequently used reactive approach implements priority
dispatching rules, which dispatch jobs dynamically to an available machine according to
some pre-assigned priorities, taking care of random disruptions as they occur. Another
frequently used reactive approach is the multi-agent approach. (Liu et al., 2007) present a
complete multi-agent framework for dynamic job shop scheduling that can react to various
disruptions. The developed framework implements a completely reactive scheduling approach
combining real-time decision-making with predictive decision-making. Other approaches
exist. For example, (Mouelhi and Pierreval., 2010) propose a new approach based on Neural
Network (NN) to select in real time the most suited dispatching rules. In (Zbib et al., 2012),
authors proposed a reactive FJSP approach using numerical potential fields. Potential fields
are there used to dynamically allocate job to resources with no prediction.
More recently, one can find hybrid approaches that combine the two previous
approaches and lead either to predictive-reactive approaches or proactive-reactive approaches
(Chaari et al., 2014). A predictive-reactive technique generates a predictive schedule that is
reactively prepared for taking unexpected events into account, aiming at minimizing the
impact of the perturbation onto the original schedule. These approaches have two phases.
First, a deterministic schedule is set up off-line. Then production starts and, during the second
phase, this schedule is used and adapted on-line (Cowling et al., 2004). For example, (Gao et
al. 2015) propose four ensembles of heuristics for scheduling FJSP with new job insertion.
The objectives are to minimize maximum completion time (makespan), to minimize the
average of earliness and tardiness (E/T), to minimize maximum machine workload
(Mworkload) and total machine workload (Tworkload). They use the same heuristics in the
initial scheduling phase and in the rescheduling phase with consideration of the new job
insertion.
The difference between predictive-reactive approaches and proactive-reactive approaches is
mainly due to the fact that, in proactive-reactive approaches, no rescheduling is done on-line:
instead, one among several pre-calculated schedule solutions is chosen, while in predictive-
reactive approaches, the scheduling is reconstructed online (Chaari et al., 2014). Examples are
provided in (Wu et al., 2009) and (Khoukhi et al., 2016). In (Wu et al., 2009), a robust schedule
is built in such a way that the deviation between the baseline and the expected schedule is minimized.
In the reactive phase, when unexpected events appear, the schedule is quickly revised based
on real-time manufacturing information. (Khoukhi et al. 2016) propose the “Dual-Ants
Colony” (DAC), a novel hybrid Ant Colony Optimization (ACO) approach with dynamic
history, based on an ants system with dual activities. The approach combines a local search
and a set of dispatching rules to solve FJSP with machine unavailability constraints due to
Preventive Maintenance (PM) activities, under the objective of minimizing the makespan.
As it is reported in all the previously cited studies, different meta-heuristics are applied to
solve the FJSP problem with uncertainties (genetic algorithm, Novel Clone Immune
Algorithm, Evolutionary algorithm ...) with different coding schemes, and different robustness

and stability measures. The challenge is always to find the meta-heuristic best suited to
solving this problem. In (Nouiri et al., 2013), a comparative study of these different
meta-heuristics provided some insights; among them, it was noticed that genetic algorithms
and particle swarm optimization are among the most effective methods in terms of result
quality for the FJSP. We therefore propose a PSO-based model in order to compare two
predictive approaches and to determine which metaheuristic (PSO or GA) is more suitable
for solving the FJSP under uncertainties. In the already mentioned
work (Nouiri et al., 2015), the authors have shown that PSO provides better results in
comparison with GA and other reference metaheuristics, without any hybridization. However,
no disruptions or uncertainties were taken into account. This paper thus extends the work
presented in (Nouiri et al., 2015) to cope with disruptions. The suggested approach is a
predictive one. PSO is used in order to generate a preschedule that minimizes machine
breakdown performance effects. To the best of our knowledge, there is no such previous study
that attempts to use PSO for obtaining predictive schedules of FJSP with possible machine
breakdowns. The obtained results will be compared to a reference work implementing a two
stage hybrid GA approach (Al-hinai and ElMekkawy., 2011) to evaluate the performance of
the proposed PSO-based predictive approach. An experimental study and an Analysis of
Variance (ANOVA) are conducted to analyze the effects of the different robustness and
stability measures used on the performance of the results.

4. Problem Formulation
The FJSP is a generalization of the classic job-shop scheduling problem. The FJSP is more
difficult than the classical JSP since it adds a new decision level to the sequencing one, i.e.,
the assignment of each operation to a machine selected among the available ones. The aim is
thus to find an allocation for each operation and to define the sequence of operations on each
machine so as to optimize a given objective. There are two kinds of FJSP, Total FJSP (T-FJSP)
and Partial FJSP (P-FJSP). For the T-FJSP, each job can be operated on every machine from the
set of machines M (Li et al., 2010), while for the P-FJSP, each operation can be processed on
only a subset of M (Liu et al., 2007). In order to introduce the deterministic FJSP, we
introduce some parameters, variables and constraints (Trentesaux et al., 2013):
4.1 Notations of parameters
• P: the set of jobs, P = {1, 2, …, n}.
• M: the set of machines, M = {M_1, M_2, …, M_m}.
• O_j: the set of operations of job j, O_j = {1, 2, …, |O_j|}, j ∈ P.
• O_ij: the operation i of job j, i.e., the i-th operation of job j.
• n_j: the number of operations of job j.
• M_ij: the set of possible machines for operation O_ij.
• p_ij,k: the processing time of operation O_ij on machine M_k.

4.2 Notations for variables
• t_ij: the completion time of operation O_ij (i ∈ O_j), t_ij ∈ N.
• x_ij,k: a binary variable set to 1 if operation O_ij is performed on machine M_k; 0 otherwise.
• y_ij,kl: a binary variable set to 1 if operation O_ij is performed before operation O_kl; 0 otherwise.
• w_ij,k: the waiting time of operation O_ij in the queue of machine M_k.
• z_ij,kl,k: a binary variable set to 1 if operation O_ij is waiting for operation O_kl in the queue of machine M_k; 0 otherwise.
4.3 Constraints
The constraints are rules that limit the possible assignments of the operations. For our case:
• Disjunctive constraints: A machine can process only one operation at a time, and an
operation is performed by only one machine:

t_ij + p_kl,r ≤ t_kl + BM · (3 − y_ij,kl − x_ij,r − x_kl,r),  ∀ O_ij, O_kl, ∀ M_r ∈ M_ij ∩ M_kl   (1)

Where
- i represents the index of operation i of job j, i ∈ O_j;
- k represents the index of operation k of job l, k ∈ O_l;
- BM is a large number.

Σ_{M_k ∈ M_ij} x_ij,k ≤ 1,  ∀ O_ij   (2)
Σ_{M_k ∈ M_ij} x_ij,k ≥ 1,  ∀ O_ij   (3)

• Precedence constraints: These constraints enforce each job's production sequence. The
completion time of the next operation accounts for the completion time of the previous
one, the waiting time and the transportation time if the two operations are not
performed on the same machine (Trentesaux et al., 2013). In our work, the
transportation time is neglected. The constraint is then modelled as follows:

t_(i+1)j = t_ij + w_(i+1)j,r2 + p_(i+1)j,r2   (4)

Where
- p_(i+1)j,r2 represents the processing time of the operation O_(i+1)j on
machine M_r2, which must be executed after operation O_ij of job j;
- r2 is the number of the machine on which the operation O_(i+1)j will be executed.
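As an illustrative aside, the two constraint families above can be checked on a candidate schedule. The tuple layout and the function name below are our own, not part of the paper's formulation:

```python
def satisfies_constraints(ops):
    """Check disjunctive and precedence constraints on a schedule.

    Each element of `ops` is (job, op_index, machine, start, completion).
    """
    # Disjunctive: operations assigned to the same machine must not overlap.
    by_machine = {}
    for job, idx, machine, start, end in ops:
        by_machine.setdefault(machine, []).append((start, end))
    for intervals in by_machine.values():
        intervals.sort()
        for (_, e1), (s2, _) in zip(intervals, intervals[1:]):
            if s2 < e1:  # next operation starts before the previous one ends
                return False
    # Precedence: operation i+1 of a job cannot start before operation i ends.
    by_job = {}
    for job, idx, machine, start, end in ops:
        by_job.setdefault(job, []).append((idx, start, end))
    for seq in by_job.values():
        seq.sort()
        for (_, _, e1), (_, s2, _) in zip(seq, seq[1:]):
            if s2 < e1:
                return False
    return True
```

Such a checker is useful when decoding particles, since an updated particle must always be repaired into a feasible schedule.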
As introduced, in real manufacturing systems the execution of a schedule is usually
confronted with disruptions and unforeseen events. Allowing stochastic data such as random
machine breakdowns further complicates the problem. As in other literature on the stochastic
FJSP, we assume that some information about the uncertainty of machine breakdowns is
available in advance and can be quantified by some distributions. The prediction of a
machine breakdown time and its duration is covered in more detail in Section 6.
4.4 Objective function
The optimization is expressed by a function that shall minimize or maximize a given criterion.
Most of the reported research has focused on single-objective optimization, such as optimizing
the tardiness of jobs, minimizing the total workload, and/or minimizing the workload of the most
loaded machine. In the presence of uncertainty associated with machine breakdowns, schedules
are expected to perform under stochastic environments. So, two goals should be
achieved: find a robust schedule with the lowest makespan, while at the same time the proposed
schedule has to be stable and able to withstand machine breakdowns when they occur.
• Makespan: Minimizing the makespan is the main objective of the first stage of the
proposed two-stage PSO algorithm. This consists in finding a schedule that requires
a minimum time to complete all operations. The makespan is denoted by Cmax and
is calculated as follows:

Cmax = max_{i,j} t_ij   (5)

Where t_ij is the completion time of operation O_ij.

• Stability Measures: Different measures of stability have been defined for the flexible job
shop problem. (Al-hinai and ElMekkawy., 2011) propose three measures of stability
and concluded that the following measure offers the best results: the
stability of a predictive or original schedule is measured by the sum of the absolute
deviations of job completion times between the realized schedule after breakdown
and the predictive schedule. It is defined as follows:

S = Σ_{i=1}^{n} Σ_{j=1}^{n_i} | t^R_ij − t^P_ij |   (6)

Where
- n is the number of jobs, n_i the number of operations of job i, t^P_ij the
predicted completion time of operation j of job i, and t^R_ij the realized
completion time of operation j of job i;
- R stands for the realized schedule (the actual schedule after breakdown);
- P stands for the predicted schedule (the original schedule).
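As a sketch, the stability measure of Eq. (6) amounts to summing absolute completion-time deviations over all operations; the dictionary keyed by (job, operation) is our assumed data layout, not the paper's:

```python
def stability(predicted, realized):
    """Sum of absolute deviations of operation completion times (Eq. (6)).

    `predicted` and `realized` map (job, operation) -> completion time,
    for the predictive schedule and the schedule realized after breakdown.
    """
    return sum(abs(realized[k] - predicted[k]) for k in predicted)
```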
• Bi-objective function: When generating a predictive schedule, most previous
works consider either robustness or stability. Little has been done on
both of them simultaneously when constructing an initial schedule. (Al-hinai and
ElMekkawy., 2011) consider both robustness and stability when generating a
predictive schedule, using an approach introduced by (Lin et al., 2007) for the
single-machine problem with random machine breakdowns. It relates the robustness of a
schedule to the degree of its makespan degradation under disruptions. Such a
schedule is considered stable when the set of all absolute deviations of its
operations' completion times from the realized schedule is small. Robustness and
stability of a schedule are investigated with a bi-objective approach. The
bi-objective function is as follows:
F = γ · C^R_max + (1 − γ) · S   (7)

Where:
- C^R_max is the makespan of the realized schedule (the actual schedule after
breakdown);
- S is the stability measure;
- γ is a weighting parameter in [0, 1] used to indicate the relative importance
of C^R_max and S.
This objective function is used in the second stage of the proposed algorithm. It considers
both robustness and stability, using the weighting parameter γ to balance C^R_max and S.
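Assuming the usual weighted-sum form for Eq. (7), i.e., γ times the realized makespan plus (1 − γ) times the stability measure, the fitness can be sketched as:

```python
def bi_objective(realized_makespan, stability_value, gamma):
    """Weighted-sum bi-objective fitness (our reading of Eq. (7)).

    gamma in [0, 1] trades off the realized makespan against stability.
    """
    assert 0.0 <= gamma <= 1.0, "gamma must lie in [0, 1]"
    return gamma * realized_makespan + (1.0 - gamma) * stability_value
```

With gamma = 1 the fitness reduces to pure robustness (realized makespan); with gamma = 0 it reduces to pure stability.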
In order to prove the efficiency of our proposed algorithm, we compare it to other methods
using different bi-objective functions.
First, the slack-time-based robustness measure proposed by (Leon et al., 1994) is used. This
measure is as follows:

F_1(S^P) = C_max(S^P) − sl(S^P)   (8)

Where:
- C_max(S^P) is the makespan of the schedule S^P;
- sl(S^P) is the average slack in schedule S^P.
Second, the neighborhood-based robustness measure proposed by (Mattfeld D.C., 1996)
is calculated as follows:

F_2(S^P) = (1 / |N(S^P)|) · Σ_{S' ∈ N(S^P)} C_max(S')   (9)

Where:
- N(S^P) is the neighborhood of the schedule S^P, which contains all feasible
schedules that can be created from S^P by interchanging two consecutive
operations on the same machine;
- C_max(S') is the makespan of schedule S';
- the schedule S' is one of the neighbors of the schedule S^P.
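A minimal sketch of this neighborhood-based measure, with `neighbors_of` and `makespan` left as placeholders for the problem-specific routines (both names are ours):

```python
from statistics import mean

def neighborhood_robustness(schedule, neighbors_of, makespan):
    """Average makespan over the swap neighborhood of `schedule` (Eq. (9)).

    `neighbors_of` should yield the feasible schedules obtained by swapping
    two consecutive operations on one machine; `makespan` evaluates one
    schedule. Falls back to the schedule's own makespan if no neighbor exists.
    """
    neighbors = list(neighbors_of(schedule))
    if not neighbors:
        return makespan(schedule)
    return mean(makespan(s) for s in neighbors)
```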
5. The proposed Two-Stage Particle Swarm Optimization algorithm
In this section, a brief description of PSO is first given. Then, the details of the proposed
approach are presented.
5.1 Particle Swarm Optimization
The PSO algorithm was proposed by Kennedy and Eberhart in 1995. This method is inspired
by the social behavior of animals living in swarms, such as flocks of birds (Kennedy and
Eberhart., 1995). The PSO algorithm is initialized with a population of particles; each
particle represents a candidate solution to a problem and has three main attributes: its
position in the search space X_i, its current velocity V_i and the best position ever found by
the particle during the search process Pbest_i. The principle of the algorithm is to move these
particles to find optimal solutions. The search trajectory of a particle is regulated according to
the flying experience of its own and of its neighboring particles. During the search, each particle
updates its velocity and position by the following equations (Kennedy and Eberhart., 1995):

V_i(t+1) = w · V_i(t) + c1 · r1 · (Pbest_i − X_i(t)) + c2 · r2 · (Gbest − X_i(t))   (10)
X_i(t+1) = X_i(t) + V_i(t+1)   (11)

Where Pbest_i is the best position found by each particle, which represents the best private
objective (fitness) value obtained so far, and Gbest is the global best particle, which denotes the
best position among all particles in the population; w is the inertia weight, used to control the
momentum of the particle; c1 and c2 are acceleration coefficients; r1 and r2 are random numbers in [0, 1].
5.1.1 Parameters tuning
The PSO algorithm includes tuning parameters that greatly influence the algorithm
performance. Despite the recent research efforts, the selection of the algorithm parameters
remains empirical to a large extent. Several typical choices for the algorithm parameters are
reported in (Trelea I., 2003; Clerc and Kennedy., 2002). In (Trelea I., 2003), the author
proposed parameter selection guidelines in order to guarantee convergence to the optimum. There
are several kinds of coefficient adjustments. The one used in this paper was developed by
Kennedy (Kennedy J., 1999) and involves the constriction factor χ presented in the equation
below:

χ = 2 / | 2 − φ − √(φ² − 4φ) |

Where φ = c1 + c2 and φ > 4.
So the equation of velocity becomes:

V_i(t+1) = χ · [ V_i(t) + c1 · r1 · (Pbest_i − X_i(t)) + c2 · r2 · (Gbest − X_i(t)) ]
5.1.2 Encoding particle:


In order to successfully apply PSO to the FJSP, an appropriate representation of particles
is of great importance. In this context, (Jia et al., 2007) divided the position of a particle, as
well as its velocity, into two vector parts: process[ ] and machine[ ]. In this improved
representation, machine[ ] represents the machine assigned to each operation and process[ ]
defines the sequence of operations. In the search space of the problem, there are many
particles, each of which represents a feasible solution. At each iteration, each particle moves
to another position in order to find the optimum according to Eq. (11).
To satisfy the precedence constraints, we use the same procedure as the one proposed by (Jia et
al., 2007). When updating a position, for each component of the vector machine[ ]: if a
component x changes to a decimal fraction x' after being updated by Eq. (11), then x is set
to the nearest integer of x'. Moreover, the total number of machines is a constant m, so every
component should remain in the interval [1, m]; otherwise, we replace the value with the
boundary 1 or m.
For the vector process[ ], after applying Eq. (11), the algorithm first sorts all the components
of process[ ] in ascending order. Then, according to this order, it realigns the corresponding
initial values of process[ ]. This procedure enables us to construct a feasible solution subject
to the precedence constraints by modifying the components of the updated particles.
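A sketch of this repair procedure follows (our interpretation of the rounding and re-ranking steps; the exact realignment rule in (Jia et al., 2007) may differ in detail):

```python
def repair(machine_vec, process_vec, m):
    """Repair an updated particle into a feasible representation.

    - machine_vec components are rounded to integers and clamped to [1, m];
    - process_vec is mapped back to a permutation by ranking its (possibly
      fractional) components in ascending order.
    """
    fixed_machines = [min(max(round(x), 1), m) for x in machine_vec]
    # Rank the process components: the smallest component gets rank 1, etc.
    order = sorted(range(len(process_vec)), key=lambda i: process_vec[i])
    ranks = [0] * len(process_vec)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return fixed_machines, ranks
```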
To diversify the search and explore the search space as much as possible, we need to define a
distance between solutions. Let X_1 and X_2 be two positions. The distance between these
positions is defined here by d(X_1, X_2) = Σ_i | x_1,i − x_2,i |. If the distance between the
current position and the new position equals 0, the particle does not move at all; it remains
exactly at the same position. In a previous study (Nouiri et al., 2013), it has been shown
that the quality of the initial population affects to a certain extent the quality of the solution or the

convergence speed of the algorithm. For this reason, we use three approaches for the creation
of the initial population: the random rule (Li et al., 2010), the localization approach proposed by
(Kacem et al., 2002) and a modified approach denoted “MMkacem” (Nouiri et al., 2013).
The creation of the initial population is controlled such that a combination with variable
percentages of the three approaches is used. For more details on particle encoding, the PSO
algorithm and the initialization methods used, see (Nouiri et al., 2015).
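For illustration, assuming a component-wise absolute-difference metric (the exact formula is not fully legible in this version of the manuscript), the distance between two positions could be computed as:

```python
def distance(x1, x2):
    """Distance between two particle positions, assumed here to be the sum
    of component-wise absolute differences over the position vectors."""
    return sum(abs(a - b) for a, b in zip(x1, x2))
```

A distance of 0 then means the particle did not move at all between iterations.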

5.2 Description of the proposed Approach


For the predictive scheduling problem under possible machine breakdowns, a common
strategy consists in inserting appropriate idle time in advance in the predictive schedule. This
is done by generating an initial schedule on the assumption that there are no machine
breakdowns, and then idle time is inserted before each job so that the start time and the
completion time of a job change while the job sequence remains the same. These idle times
serve as time buffers to absorb the effect of disruptions. However, this method faces two
difficulties: determining the amount of idle time to insert, and finding the appropriate
locations at which to insert it.
In this work, we propose a method without idle-time insertion, using a PSO algorithm to find a
predictive schedule for the FJSP under random machine breakdowns. The idea is to
integrate the knowledge of the machine breakdown probability distribution together with the
available flexible routing of machines (Al-hinai and ElMekkawy., 2011).
Thanks to such an approach, the predictive schedule is capable of assigning and sequencing
operations on machines such that a minimum impact on the overall schedule performance is
observed if a machine breakdown occurs. The impact on schedule performance relates
to how much the disruptions affect the quality of the solution (the minimum
makespan in the current work). In the proposed algorithm, the effect of disruptions on the
predicted activities is measured using the robustness and stability measure. The algorithm
searches for a predictive schedule that can work around the expected breakdown.
The proposed predictive scheduling algorithm is composed of two stages. The role of the first
stage is to optimize the primary objective function assuming deterministic problem
parameters, i.e., that no disruptions occur. The swarm evolves based on the objective of
minimizing the makespan. After a given number of generations is reached, the algorithm
switches to the second stage. The second stage starts by taking the final population generated
in the first stage as its initial population. The algorithm then continues to improve the
solutions, now based on the bi-objective robustness and stability measure given in Eq. (7),
considering random machine breakdowns.
This is done by replacing the fitness function of the PSO with the robustness and stability
evaluation function, using the breakdown probability in the decoding procedure.
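The two-stage scheme can be sketched as follows. The particle update rule, fitness functions and swarm used here are hypothetical placeholders (the paper's actual PSO operators, particle encoding and the Eq.(7) measure are defined elsewhere in the paper), so this illustrates only the stage switch, not the authors' implementation:

```python
import random

def two_stage_pso(swarm, stage1_fitness, stage2_fitness, update_particle,
                  stage1_iters=100, stage2_iters=100):
    """Illustrative two-stage scheme: stage 1 evolves the swarm to minimize
    the deterministic fitness (makespan); stage 2 reuses the final swarm
    and continues with the robustness/stability fitness."""
    swarm = list(swarm)
    best = None
    for fitness, iters in ((stage1_fitness, stage1_iters),
                           (stage2_fitness, stage2_iters)):
        # Re-evaluate the global best against the fitness of the new stage.
        best = min(swarm, key=fitness)
        for _ in range(iters):
            swarm = [update_particle(p, best) for p in swarm]
            candidate = min(swarm, key=fitness)
            if fitness(candidate) < fitness(best):
                best = candidate
    return best
```

The key point is that the second stage inherits the stage-1 swarm unchanged and only swaps the fitness function, exactly as described above.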
In this paper, we assume that historical records of the considered shop floor enable the
construction of an approximated distribution function for the machine breakdowns. Such a
distribution can be used when generating the predictive schedule (Al-hinai and ElMekkawy.,
2011). In our work, we integrate breakdown probabilities to perturb the predictive solution,
and we use the robustness and stability measure to evaluate the effect of disruptions on the
solution. This will lead us to obtain solutions that are more robust and stable. Both stages of

the proposed approach use the same particle representation and decoding. Figure 1 shows the
general flow chart of the proposed two-stage PSO.

Figure 1: Flow chart of the proposed two-stage Particle Swarm Optimization
To illustrate this, let us consider an example of an FJSP with four jobs and five machines.
The schedules in figures 3, 5, 6 and 7 are obtained after applying the proposed two-stage
particle swarm optimization algorithm. Figures 2 and 4 show two feasible solutions of the
problem found by the PSO algorithm that minimizes only the makespan (Nouiri et al., 2013).
When considering the deterministic FJSP with the objective of minimizing the makespan,
there is no reason to prefer one over the other, as both have the same makespan of 13 time
units.
However, when considering an FJSP under disruptions, the proposed two-stage PSO selects
the schedule that is more robust and stable. In fact, when the probability of a machine
breakdown is integrated into the problem, a schedule that can absorb the impact of a future
machine breakdown can be selected. For example, suppose the historical data indicate that a
given machine has a high risk of failure. Both original predicted schedules are subjected to
the same breakdown, shown as a yellow rectangle. The machine breakdown occurs at an
early stage (breakdown type BD3; Section 6 details the different types of machine
breakdowns). The proposed two-stage PSO algorithm evaluates the effect of disruptions on
the solution using the robustness and stability measure. The schedule in figure 2 has three
operations affected by the breakdown: two directly affected operations located on the failed
machine, and one indirectly affected operation, related to the precedence constraint between
operations of the same job 1 (see figure 3). The realized makespan of this schedule after
applying the rescheduling method equals 15 (the rescheduling method is detailed in the next
section).
The schedule in figure 5 has six operations affected by the breakdown: two directly affected
operations and four indirectly affected ones. The realized makespan of this schedule, after
applying the rescheduling method, equals 16. The schedule in figure 2 is therefore the more
appropriate one for the proposed 2s-PSO approach to select. The schedule in figure 2 is
indeed more robust than the schedule in figure 4, because its makespan degrades less under
disruptions, and it is more stable, because it has fewer disrupted operations and therefore a
smaller sum of absolute deviations of operation completion times from the realized schedule.
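The stability comparison above comes down to counting disrupted operations. A minimal sketch, under an assumed tuple-based schedule representation (job, operation, machine, start, end) that is our illustration rather than the paper's data structure:

```python
def affected_operations(schedule, failed_machine, bd_start):
    """Directly affected operations run on the failed machine and have not
    completed when the breakdown starts; indirectly affected ones are later
    operations of the same job, disrupted through the precedence constraint."""
    direct = {(job, op) for (job, op, mach, start, end) in schedule
              if mach == failed_machine and end > bd_start}
    indirect = {(job, op) for (job, op, mach, start, end) in schedule
                if (job, op) not in direct
                and any(job == dj and op > do for (dj, do) in direct)}
    return direct, indirect
```

With such counts, the schedule having fewer affected operations (and hence a smaller completion-time deviation) would be preferred by the stability term.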

Figure 2: First example of a predictive schedule obtained by the PSO minimizing the makespan
Figure 3: Schedule of figure 2 after rescheduling (breakdown shown in yellow), evaluated by 2s-PSO

Figure 4: Second example of a predictive schedule obtained by the PSO minimizing the makespan
Figure 5: Schedule of figure 4 after rescheduling (breakdown shown in yellow), evaluated by 2s-PSO

Figure 6: Schedule of figure 2 after rescheduling when facing breakdown type BD4, evaluated by 2s-PSO
Figure 7: Schedule of figure 4 after rescheduling when facing breakdown type BD4, evaluated by 2s-PSO

Figures 6 and 7 show the schedules obtained after rescheduling when both original predicted
schedules, presented in figures 2 and 4, are subjected to another breakdown, of type BD4,
shown as a yellow rectangle. This type of machine breakdown occurs not at an early stage
but at a late one. The schedule in figure 6 is more robust and stable than the schedule in
figure 7, since it has a lower realized makespan after applying the rescheduling method (14,
compared to 15 for the other schedule) and only one affected operation, whereas the schedule
in figure 7 has two disrupted operations.

5.3 Rescheduling procedures


When a breakdown occurs, a rescheduling procedure must be started to handle it.
Rescheduling is a procedure that repairs a production plan affected by unexpected
disruptions: it generates a new executable schedule upon the occurrence of an unforeseen
disruption. In a dynamic manufacturing environment, high-quality schedules are required,
but the system should also be capable of reacting quickly to unexpected events. Rescheduling
is thus absolutely needed to minimize the effect of such disruptions. The realized schedule is
obtained after applying the appropriate rescheduling procedure.
Many rescheduling methods exist in the literature. They can be classified into partial
rescheduling methods, such as right-shift rescheduling (RSR), Affected Operation
Rescheduling (AOR) (Abumaizar and Svestka, 1997) and modified AOR (mAOR)
(Subramaniam and Raheja, 2003), and complete schedule regeneration. For more details
about rescheduling for machine breakdowns in a job shop, see (Subramaniam and Raheja,
2003; Don and Jang, 2012).
As the rescheduling method used influences the value of the realized makespan, the stability
measure and consequently the value of the bi-objective function, we use, for comparison
purposes, the same rescheduling method as (Al-hinai and ElMekkawy, 2011), namely the
mAOR proposed by (Subramaniam and Raheja, 2003).
The mAOR uses the same main concept as AOR but considers rescheduling factors other
than machine breakdowns alone (processing time variation, arrival of an unexpected job,
urgent jobs). The principle of the method is to reschedule the jobs that are directly or
indirectly affected by a machine breakdown while keeping the processing sequence of jobs
on every machine the same as in the original schedule (Abumaizar and Svestka, 1997).
In mAOR, the operation sequence thus remains the same as in the predictive schedule, to
avoid the setup costs incurred by sequence deviations, and only the directly and indirectly
affected operations are pushed forward in time to account for the disruption. The realized
completion times of operations after the occurrence of a machine breakdown can then be
calculated as follows:

C'_ij = S'_ij + p_ijk + BD_k
where p_ijk is the processing time of operation j of job i on machine k, BD_k is the
aggregated breakdown duration of machine k, and S'_ij is the realized starting time of
operation j of job i, given by S'_ij = max{C'_i(j-1), PT_k}, where PT_k is the progressive
time of machine k.
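The relation above can be applied operation by operation, in the fixed dispatch order, to obtain the realized schedule. The sketch below is a simplification of mAOR in which the aggregated downtime BD_k is charged once, to the first operation processed on machine k; the data layout is an assumption made for illustration:

```python
def realized_times(dispatch_order, proc_time, breakdown):
    """Realized completion times C'_ij = S'_ij + p_ijk (+ BD_k, once per
    machine), with S'_ij = max(C'_{i,j-1}, PT_k).  `dispatch_order` lists
    (job, op, machine) triples in processing order, `proc_time` maps them
    to p_ijk, and `breakdown` maps a machine to its aggregated downtime."""
    remaining_bd = dict(breakdown)   # downtime still to be absorbed, per machine
    progressive = {}                 # PT_k: progressive time of each machine
    job_completion = {}              # C'_{i,j-1}: completion of the job's previous op
    completion = {}
    for (i, j, k) in dispatch_order:
        start = max(job_completion.get(i, 0), progressive.get(k, 0))
        completion[(i, j)] = start + proc_time[(i, j, k)] + remaining_bd.pop(k, 0)
        progressive[k] = completion[(i, j)]
        job_completion[i] = completion[(i, j)]
    return completion
```

Because the sequence on each machine is kept fixed, only start and completion times move, which is the defining property of mAOR described above.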
6. Experimentation
To test the effectiveness and performance of the proposed two-stage PSO algorithm, we
carried out experiments on different FJSP benchmark instances, covering both total FJSP
and partial FJSP, as follows:
 Two FJSP instances with total flexibility: Ex1 is of 10*10, and Ex2 is of 15*10. The
characteristics of these instances are available in (Jia et al., 2007) and
(Motaghedilarijani et al., 2010). Each instance is characterized by a number of jobs n,
a number of machines m, and the operations associated with each job i.
 Eleven FJSP instances with Partial flexibility: Ex3 is a 8*8 problem instance taken
from (Motaghedilarijani et al., 2010), and Mk1, Mk2, ..., Mk10 are the instances proposed by
(Brandimarte, 1993). For these instances, the number of jobs varies from 10 to 20, the
number of machines from 6 to 15, and the number of operations from 58 to 232.
The implementation has been run on a machine based on an Intel Core 2 Duo processor
clocked at 2.0 GHz with 4 GB of memory. Experiments are conducted to evaluate the
performance of the predictive schedule generated by the proposed method under random
machine breakdowns.
6.1 Machine breakdown formulation
To experiment with breakdowns, we need to define exactly how they are generated. The most
obvious kind of breakdown in a job shop is that of a machine, which makes the machine
unusable for some time. Running extensive simulations to evaluate the effects of all possible
machine breakdowns on the initial schedule is cumbersome and may take a tremendous
amount of time. Hence, we propose aggregating all breakdowns into one breakdown and
evaluating the schedule stability accordingly.
As introduced above, we assume that some information about the uncertainty of machine
breakdowns is available in advance and can be quantified by distributions. Simulating a
breakdown involves choosing which machine is affected and when it will become operational
again. In this work, machine breakdowns are generated according to the same procedure as
in (Al-hinai and ElMekkawy, 2011). In this case, three parameters are required:
 Choosing which machine to break down: The probabilities of machine breakdowns are
related to the busy time of each machine: since the maximum busy time of a
machine equals its workload, a machine with a heavy workload is more likely to
suffer a breakdown after hard work. The probability that machine Mk fails is given
by the following empirical relation:
ρ_k = BT_k / Σ_{l=1..m} BT_l
where BT_k is the busy time of machine k and the denominator is the total busy time
of all machines.
• The breakdown time, and
• The breakdown duration.
The two latter parameters are generated using the following uniform distributions,
respectively:
t_k ~ U(α1·BT_k, α2·BT_k)    d_k ~ U(β1·BT_k, β2·BT_k)
where t_k is the breakdown time of machine k, d_k its duration, BT_k is the busy time of
machine k, and the parameters α and β determine the breakdown type.
This work assumes two levels of machine breakdown duration (low and high) and two
intervals of machine breakdown time (early and late). An early breakdown happens during
the first half of the scheduling horizon and a late one during the second half. This leads to
four breakdown types, as presented in Table 1. For each breakdown type, different α and β
parameters are set accordingly.
Table 1: Breakdown combinations.
Breakdown Type   Disruption Level   α1    α2    β1     β2
BD1              Low, early         0     0.5   0.1    0.15
BD2              Low, late          0.5   1     0.1    0.15
BD3              High, early        0     0.5   0.35   0.4
BD4              High, late         0.5   1     0.35   0.4
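The breakdown generation described above can be sketched as follows. The (α1, α2, β1, β2) tuples follow our reading of Table 1, and scaling both the breakdown time and its duration by the machine's busy time BT_k is an assumption consistent with the distributions given earlier, not a quote of the paper's code:

```python
import random

# (alpha1, alpha2, beta1, beta2) per breakdown type, as read from Table 1.
BD_TYPES = {"BD1": (0.0, 0.5, 0.10, 0.15),
            "BD2": (0.5, 1.0, 0.10, 0.15),
            "BD3": (0.0, 0.5, 0.35, 0.40),
            "BD4": (0.5, 1.0, 0.35, 0.40)}

def simulate_breakdown(busy_time, bd_type, rng=random):
    """Choose the failing machine with probability rho_k = BT_k / sum_l BT_l,
    then draw the breakdown time t_k ~ U(a1*BT_k, a2*BT_k) and the duration
    d_k ~ U(b1*BT_k, b2*BT_k).  `busy_time` maps each machine to BT_k."""
    machines = list(busy_time)
    total = sum(busy_time.values())
    k = rng.choices(machines, weights=[busy_time[m] / total for m in machines])[0]
    a1, a2, b1, b2 = BD_TYPES[bd_type]
    t_k = rng.uniform(a1 * busy_time[k], a2 * busy_time[k])
    d_k = rng.uniform(b1 * busy_time[k], b2 * busy_time[k])
    return k, t_k, d_k
```

Weighting the choice of the failing machine by busy time reproduces the empirical relation ρ_k above: heavily loaded machines break down more often.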

6.2 Predictive schedule performance


In this section, the FJSP benchmark instances are used to compare the performance of the
predictive schedule obtained with the proposed two-stage PSO algorithm against that of the
predictive schedule obtained with the two-stage hybrid GA approach proposed by (Al-hinai
and ElMekkawy, 2011). The same robustness and stability measure defined by Eq.(7) is
used. Moreover, for further comparison, the 2s-PSO is compared to hybrid GA predictive
schedules optimizing the robustness and stability measures defined by Eq.(8) and Eq.(9),
respectively. To this end, each benchmark is subjected to the four breakdown (BD) types
with 5 replications per instance per BD type. Each generated predictive schedule is then
subjected to 400 random machine breakdowns, which results in 4*5*400 = 8000 test
problems per instance.
The parameter values chosen for the PSO algorithm are as follows: swarm size = 500 and
Max_Iteration = 100; the algorithm switches to the second stage after 100 generations. The
value of the weighting parameter γ in the bi-objective function can be any value in [0, 1].
When searching for a trade-off between effectiveness and scheduling robustness, the value of
γ depends on the viewpoint of the decision maker.
Choosing γ = 1 means minimizing only the makespan of the initial schedule (note that the
disrupted scenarios then do not appear in the algorithm). Choosing γ = 0 means minimizing
only the deviation between the makespan of the initial schedule and the realized one after
integrating the breakdown disruption. According to Table 2, obtained by testing the 2S-PSO
on different instances, the more γ decreases, the more the makespan of the initial scenario
increases and, correspondingly, the more the deviation between the makespans of the
disrupted scenarios and the makespan of the initial scenario decreases.
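The role of γ can be illustrated with a generic weighted objective. This is only an illustration of the trade-off described above; Eq.(7) itself is defined earlier in the paper, and the simple linear form below is our assumption:

```python
def weighted_objective(initial_makespan, realized_makespan, gamma):
    """gamma = 1 keeps only the makespan of the initial schedule; gamma = 0
    keeps only the deviation of the realized makespan from it.  Illustrative
    form, not the paper's exact Eq.(7)."""
    deviation = abs(realized_makespan - initial_makespan)
    return gamma * initial_makespan + (1 - gamma) * deviation
```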

Table 2: Experimental results with varying γ
Instance   γ       Makespan
4*5        γ=1     11
           γ=0.5   16
           γ=0     20
10*10      γ=1     8
           γ=0.5   12
           γ=0     18
8*8        γ=1     17
           γ=0.5   20
           γ=0     26
For comparison purposes, the value of the parameter γ in the bi-objective function (Eq.7) is
set to the same value as in (Al-hinai and ElMekkawy, 2011), namely 0.6.
To evaluate the performance of the predictive schedule obtained by our algorithm, we first
calculate the average realized makespan improvement percentage, defined by:
AMSRI = (RMS − DMS) / DMS × 100
where:
• RMS is the realized makespan of the schedule obtained with the robust method after a
breakdown;
• DMS is the realized makespan of the schedule obtained with the deterministic method
after a breakdown.
The deterministic method refers to the PSO algorithm that minimizes the makespan.
We also calculate the average stability improvement percentage, obtained by:
STBI = (RSTB − DSTB) / DSTB × 100
where:
• RSTB is the stability of the schedule obtained with the robust method;
• DSTB is the stability of the schedule obtained with the deterministic method.
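Both indicators are relative improvement percentages with the same sign convention (negative means the robust schedule improves on the deterministic one, matching Tables 3 and 4). A sketch of our reading of the formulas:

```python
def improvement_pct(robust, deterministic):
    """100 * (robust - deterministic) / deterministic; negative values mean
    the robust schedule beats the deterministic one."""
    return 100.0 * (robust - deterministic) / deterministic

def average_improvement(robust_values, deterministic_values):
    """Average of the per-replication improvement percentages: AMSRI when fed
    realized makespans, STBI when fed stability values."""
    pairs = list(zip(robust_values, deterministic_values))
    return sum(improvement_pct(r, d) for r, d in pairs) / len(pairs)
```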
Table 3 compares the results in terms of the average realized makespan improvement
percentage and the average stability improvement percentage for BD1 and BD2. The results
obtained for breakdown types BD3 and BD4 are given in Table 4.
Tables 3 and 4 consist of 11 columns: the first and second columns give the instance name
and size. The remaining columns give the STBI and the average realized makespan
improvement percentage of each instance obtained by our algorithm in comparison with
HGA1, HGA2 and HGA3 when subjected to specific breakdown types. HGA1, HGA2 and
HGA3 refer to the hybrid GA predictive schedules proposed by (Al-hinai and ElMekkawy,
2011) optimizing the robustness and stability measures of Eq.(7), Eq.(8) and Eq.(9),
respectively.
Table 3: Computational results of instances subject to breakdown types BD1 and BD2
(for each instance, the first row gives the AMSRI and the second row the STBI)
                          BD1                                      BD2
Instance   Size   Our approach  HGA1  HGA2  HGA3   Our approach  HGA1  HGA2  HGA3
-6.97 -5.41 0 -2.7 -6.44 -5.41 0 -2.7
Ex1 10*10
-90.45 -100 50 -40 -82.5 -90.91 -63.64 -72.73
6.56
-8.55 -1.64 4.82 -10.41 3.28 4.92 6.56
Ex2 15*10 9.09
-98.16 -96.97 -3.56 -13.46 -58.26 11.3 32.17

-6.95
-7.98 -1.35 0 1.35 -4.37 -2.56 -2.56
Ex3 8*8 -
-39.03 -73.33 -20 -20 -37.12 38.46 -30.77
37.105
0.77 -4.7 -4.89 -3.17 4.80 0.91 2.87 4.78
MK01 10*6
-13.26 -62.64 -40.64 -44.65 -16.65 -81.74 1.68 11.77
-1.64 0 2 4.67 -9.79 0 -1.33 1.32
MK02 10*6
-17.23 -55.61 18.56 15.81 -16.21 -5.89 51.9 13.92
-2.24 -4.45 -5.45 -4.45 6.92 -7.27 -8.52 -7.18
MK03 15*8
-26.95 -85.85 -20.12 -21.89 8.49 -66.79 -35.95 -50.55
-6.46 0.29 4.39 3.51 3.72 0.3 3.25 4.14
MK04 15*8
-15.58 -49.09 31.63 26.18 10.85 -32.63 -3.76 -11.68
-3.56 -1.33 0.41 1.64 0.54 -1.05 -0.51 1.11
MK05 15*4
-65.76 -61.02 5.81 -9.27 -4.23 -9.01 1.71 -1.81
-2.45 -1.31 0.28 1.01 -2.65 0.25 -0.85 2.54
MK06 10*15
-60.65 -58.78 -23.6 -29.68 -78.34 -42.61 -51.66 21.54
-0.43 -1.87 0.79 2.53 1.32 -2.5 -0.13 3.87
MK07 20*5
-64.21 -83.42 -8.32 -7.18 -23.35 -40.26 -56.12 41.82
-2.31 -4.05 1.35 1.28 2.32 1.72 2.32 3.8
MK08 20*10
-90.23 -86.57 7.54 15.81 -50.45 -31.54 79.02 161.22
-3.38 -4.23 -1.61 -2.39 -3.42 -2.51 -0.87 -0.66
MK09 20*10
-70.32 -66.60 -10.96 -10.75 -23.65 -46.96 -17.24 -16.15
1.23 2.87 -0.48 3.36 1.65 0.74 0.49 3.3
MK10 20*15
-50.21 -58.23 -16 -3.96 -30.87 -48.32 6.10 -7.82
Average    -3.38  -2.31   0.116   1.01    -1.41  -1.22  -0.07   1.40
          -51.96 -78.84  -2.28   -9.09   -27.50 -45.54   5.69   6.99

Table 4: Computational results of instances subject to breakdown types BD3 and BD4
(for each instance, the first row gives the AMSRI and the second row the STBI)
                          BD3                                      BD4
Instance   Size   Our approach  HGA1  HGA2  HGA3   Our approach  HGA1  HGA2  HGA3
-2.33
10*1 -3.72 -6.98 -2.33 -13.54 -9.52 -7.14 0
Ex1 -5.13
0 -17.22 -46.15 0 -8.70 -60 -6.67 26.67
-2.86
15*1 10.61 8.57 1.79 -11.39 -1.76 -2.7 5.41
Ex2 -82.73
0 12.62 23.21 4.96 -42.94 -37.34 -23.81 -19.05

-19.86 -13.33 -5.56 0 -27.58 2.47 6.17 6.17


Ex3 8*8
-8.74 -83.96 0.53 -5.88 -24.82 -86.96 139.13 143.48
0.93 -9.39 -6.56 -2.22 -6.27 -8.8 -6.4 -3.2
MK01 10*6
8.92 -69.71 -8.42 -35.62 -19.26 -44.67 -18.27 -18.78
1.58 0 3.28 -6.53 -0.83 -7.73 -3.31 4.97
MK02 10*6
-21.74 -20.94 20.56 -20.94 -34.63 -11.3 75.67 80.27
6.16 -5.24 -5.14 0 -14.96 -13.18 -11.09 -6.11
MK03 15*8
-33.96 -70.51 -9.24 -15.13 -50.83 -49.82 0.93 14.58
12.73 -8.74 4.92 -4.24 -14.49 -13.83 -1.61 -0.25
MK04 15*8
-10.40 -96.38 28.05 22.28 -99.15 -92.11 -4.1 -13.82
-9.35 -7.32 0.61 3.55 -6.54 -13.87 -0.34 1.52
MK05 15*4
-89.34 -70.19 -28.63 -28.2 -78.26 -60.6 20.77 26.36
10*1 -8.21 -7.28 6.31 1.5 -6.18 -4.23 -0.32 1.38
MK06
5 -42.56 -66.23 23.16 1.23 -43.28 -36.99 -21.77 -23.63
-2.76 -0.31 2.15 4.09 -5.76 -6.28 -2.94 2.53
MK07 20*5
-26.23 -31.18 -14.22 -30.34 -22.87 -32.28 4.8 -19.41
20*1 -4.2 -8.4 0.47 1.02 -0.54 -1.93 4.4 5.94
MK08
0 -43.23 -61.78 6.65 -13.47 -43.65 -33.38 23.87 78
20*1 -6.2 -5.4 -2.05 -2.51 -1.98 -5.58 -1.29 -0.46
MK09
0 -50.23 -36.99 -4.3 -11.4 -11.65 -26.34 -18.75 -25.78
20*1 -5.23 -9 -0.38 -2.77 -1.76 -3.11 -0.46 2.35
MK10
5 -49.76 -75.88 -9.11 -7.56 -3.76 -2.04 -6.61 53.56
Average    -2.11  -6.48   0.33   -0.66   -7.45  -6.71  -2.07    1.55
          -21.95 -62.48   2.17  -11.16  -21.96 -44.14  12.65   23.265

Note: HGA1, HGA2 and HGA3 refer to the two-stage hybrid GA proposed by (Al-hinai and
ElMekkawy, 2011) using the bi-objective functions of Eq.(7), Eq.(8) and Eq.(9), respectively.
The best performance obtained by our 2S-PSO algorithm is printed in bold. Furthermore,
negative values in the tables indicate improvements, whereas positive values indicate
degradations when comparing the different methods. A closer look at Tables 3 and 4 reveals
that the proposed two-stage PSO improves the stability and the efficiency on all instances
compared to HGA3 when considering breakdown types BD1, BD2 and BD4. When facing
BD3, our proposed algorithm gives better results than HGA2 and HGA3 except on Mk01,
MK03, MK04 and MK07. This suggests that the bi-objective function used in our method
performs better than the neighborhood-based robustness measure proposed by (Mattfeld,
1996) used in HGA3. When facing breakdown type BD1, the proposed 2S-PSO performs
better than HGA2, as it does for BD2 except on the MK03 and MK07 instances. When
facing BD4, the proposed 2S-PSO performs better than HGA2 in terms of efficiency on all
instances, and in terms of stability on all but MK09 and MK10. Compared to HGA1, the
proposed 2S-PSO improves the efficiency and stability on some instances for the different
breakdown types. The experiments show that, compared to HGA1, the proposed 2S-PSO
gives better efficiency and stability results for instances Ex1, Ex2, Ex3, MK2, MK4, MK5
and MK6 under BD1. A possible explanation is that the breakdown distribution of BD1 is
low, so the predictive schedule has slack that can absorb the effect of the uncertainty. Our
method also finds better results than HGA1 when facing BD2 for instances Ex1, Ex2, Ex3,
MK2, MK8 and MK9, and when facing BD3 for instances Ex1, Ex3, MK5, MK6, MK7 and
MK9.
When considering BD4, the breakdown occurs at a late stage. This gives less chance for
critical operations to be among the operations affected by the breakdown, which means that
the number of affected operations is also low. This reduces the possibility of delaying the
schedule (i.e., increasing its makespan), so minimizing the stability term of the bi-objective
function is emphasized. Our method achieves this: according to the results in Table 4,
stability and efficiency improve on most of the instances.
Moreover, we calculate the average over all instances for each breakdown type. As shown in
Tables 3 and 4, the average AMSRI of the schedules obtained by our method improves on
that of HGA1, going from -2.31 to -3.38 for BD1, from -1.22 to -1.41 for BD2, and from
-6.71 to -7.45 for BD4. This indicates that the realized makespan improves and that the
predictive schedules are more compact and dense around the region where the actual
breakdown disruptions occur.
Even though the results seem to be in favour of the proposed 2S-PSO, the simulation results
obtained for all cases have been examined through a one-way analysis of variance (ANOVA)
to determine whether a statistically significant difference supports our conclusions.
Tables 5, 6 and 7 present the one-way ANOVA results for the AMSRI and STBI of the
proposed method in comparison with HGA1, HGA2 and HGA3, respectively. They show the
F-ratio and the P-value obtained from the previous experiments, detailing the effect of the
robustness and stability measures used, the considered test case, the breakdown type and the
interaction between these factors on the relative quality of the predictive schedule. In this
study, effects are considered significant if the P-value is less than 0.05; such P-values are
printed in bold.

Table 5: One-way ANOVA comparing the AMSRI and STBI of the proposed method and HGA1
              AMSRI                 STBI
BD Type   P-value   F-ratio    P-value   F-ratio
BD1       0.0025    5.37       0.046     4.86
BD2       0.0415    3.11       0.65      0.35
BD3       0.110     2.75       0.48      0.73
BD4       0.046     3.54       0.138     2.54

Table 6: One-way ANOVA comparing the AMSRI and STBI of the proposed method and HGA2
              AMSRI                 STBI
BD Type   P-value   F-ratio    P-value    F-ratio
BD1       0.00813   8.32       0.000202   19.15
BD2       0.46      0.54       0.02854    5.42
BD3       0.3812    0.79       0.000222   11.72
BD4       0.014     6.95       0.002776   11.11

Table 7: One-way ANOVA comparing the AMSRI and STBI of the proposed method and HGA3
              AMSRI                 STBI
BD Type   P-value   F-ratio    P-value    F-ratio
BD1       0.00213   11.83      0.000208   19.06
BD2       0.147     2.24       0.0614     3.84
BD3       0.0576    0.32       0.0587     3.93
BD4       0.00024   18.56      0.0011     13.54

The results in Tables 5, 6 and 7 indicate that the BD type has a significant effect on both the
AMSRI and STBI measures. According to Table 5, the 2S-PSO is statistically different from
HGA1 when facing BD1, BD2 and BD4 (P-value < 0.05).
As Tables 6 and 7 show, in most cases there is a highly statistically significant difference
between the 2S-PSO and both HGA2 and HGA3, which use the slack-based and
neighborhood-based robustness measures, respectively. In most cases, the 2S-PSO offers
better performance than the other approaches.
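For reference, the F-ratio of a one-way ANOVA is the between-group mean square divided by the within-group mean square. The sketch below implements this standard formula (it is not code from the paper) and omits the P-value, which would be read from the F distribution with (k-1, N-k) degrees of freedom, e.g. via scipy.stats.f.sf:

```python
def one_way_anova_f(*groups):
    """F-ratio of a one-way ANOVA over the given samples: between-group
    mean square over within-group mean square."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares around each group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)
```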
To complete our study, the CPU processing times of both the deterministic PSO and the
two-stage PSO are evaluated with the same configuration for each approach (swarm size =
100, Max_Iteration = 100). The results are shown in Table 8. As the table shows, the CPU
times of the two-stage PSO are much higher than those of the deterministic case (γ = 1): the
two-stage PSO increases the computational time significantly. This is expected, because
several functions are added to the basic PSO algorithm: breakdown simulation,
determination of the directly and indirectly affected operations, computation of the stability
measure, the rescheduling procedure, and running the PSO once to minimize the makespan
and again to optimize the bi-objective function with the integration of the machine
breakdown probability. In our opinion, this increase in computational time is not a major
drawback. The proposed model is predictive (offline), so it is acceptable to spend more time
finding a predictive schedule that is more robust and stable with the lowest makespan.

Table 8: Comparison between the PSO and the proposed two-stage PSO
Instance   PSO (*) CPU time (s)   2S-PSO CPU time (s)
4*5        0.353                  19.402
10*10      7.316                  52.604
8*8        5.698                  31.375
Note: PSO (*) refers to the PSO proposed by (Nouiri et al., 2015).

7. Conclusions
In this work, the flexible job shop scheduling problem under machine breakdowns has been
considered. A two-stage particle swarm optimization algorithm has been proposed to find a
robust and stable predictive schedule in the face of possible unexpected failure events. The
computational results indicate that the predictive schedules generated by the proposed
approach have statistically superior performance in terms of robustness and stability
compared to the HGA method proposed by (Al-hinai and ElMekkawy, 2011) under different
robustness and stability measures.
A first interesting direction for future research is to propose a predictive-reactive method
based on the PSO algorithm to solve the FJSP under many other types of uncertainty. A
possible solution would be to distribute the predictive-reactive method over a multi-agent
system in order to decentralize decisions, so that each entity participates in solving the
problem and can make decisions in real time according to the actual state of the resources
and to unplanned events; the global solution would then be constructed through cooperation
between all the agents. The implementation of the developed multi-agent distributed
solution on a physically distributed embedded system will be considered.
Another interesting prospect concerns the generalization of the objective function beyond
the makespan, to consider other criteria such as energy or, more globally, sustainability, as
these criteria currently reflect major societal stakes (Giret et al., 2015), (Prabhu et al.,
2015).

8. References
Abumaizar, R.J. and Svestka, J.A., 1997. Rescheduling job shops under random disruptions.
International Journal of Production Research, Vol 35, No 7, pp.2065-2082.

Al-hinai, N., ElMekkawy, T., 2011. Robust and stable flexible job shop scheduling with
random machine breakdowns using a hybrid genetic algorithm. International Journal of
Production Economics. Vol 132., No 2., pp., 279-291.
Al-hinai, N., ElMekkawy, T., 2012. Solving the Flexible Job Shop Scheduling Problem with
Uniform Processing Time Uncertainty. World Academy of Science, Engineering and
Technology. Vol 6., No 4., pp.,1184-1189.
Bidot, J., Vidal, T., Laborie, P., Beck, J., 2009. A theoretic and practical framework for
scheduling in a stochastic environment. Journal of Scheduling. Vol 12., No 3., pp., 315-344.
Brandimarte, P., 1993. Routing and scheduling in a flexible job shop by tabu search. Annals
of Operations Research.Vol 41, pp 157-183.
Chaari T., Chaabane S., Loukil T., Trentesaux D., 2011. A genetic algorithm for robust hybrid
flow shop scheduling. International Journal of Computer Integrated Manufacturing.Vol 24.,
No 9.,pp., 821-833.
Chaari, T., Chaabane, S., Aissani, N., Trentesaux, D., 2014. Scheduling under uncertainties:
survey and research directions. International Conference on Advanced Logistics and
Transport (ICALT), IEEE, pp., 267-272.
Chiang, W., Fox, M., 1990. Protection against uncertainty in a deterministic schedule. Fourth
International Conference on Expert Systems in Production and Operations Management,
South Carolina, USA.
Clerc, M., Kennedy, J.,2002. The Particle Swarm Explosion, Stability, and Convergence in a
Multidimensional Complex Space. IEEE Transactions on Evolutionary Computation. Vol 6.,
No 1., pp 58-73
Cowling, P., Ouelhadj, D. and Petrovic, S., 2004. Dynamic scheduling of steel casting and
milling using multi-agents. Production Planning & Control. Vol., 15, No 2, pp., 178-188.
Dalfard, V., Mohammadi, G., 2012. Two meta-heuristic algorithms for solving multi-
objective flexible job-shop scheduling with parallel machine and maintenance constraints.
Computers and Mathematics with Applications. Vol 64., No 6., pp., 2111-2117
Don, Y., Jang, J., 2012. Production rescheduling for machine breakdown at a job shop,
International Journal of Production Research. Vol 50., No 10., pp., 2681-2691.
Fattahi, P., Fallahi, A., 2010. Dynamic scheduling in flexible job shop systems by considering
simultaneously efficiency and stability. Journal of Manufacturing Science and
Technology.Vol 2., No 2., pp., 114-123.
Gao, K., Nagaratnam, P., Fatih, M., Ke, Q., Sun, Q., 2015. Effective ensembles of heuristics
for scheduling flexible job shop problem with new job insertion. Computers and Industrial
Engineering, Vol 90, pp.107–117.
Giret A., Trentesaux D., Prabhu V., 2015. Sustainability in Manufacturing Operations
Scheduling: A State of the Art Review. Journal of Manufacturing Systems,Vol 37., No 1., pp.,
126-140.
He, W., Sun, D., Liao, X., 2013. Applying Novel Clone Immune Algorithm to Solve Flexible
Job Shop Problem with Machine Breakdown. Journal of Information & Computational
Science. pp., 2783-2797.

Ishikawa, S., Kubota, R., Horio, K., 2015. Effective hierarchical optimization by a
hierarchical multi-space competitive genetic algorithm for the flexible job-shop scheduling
problem. Expert Systems with Applications, Vol 42., No 24., pp.,9434-9440.
Jensen, M.T., 2003. Generating Robust and Flexible Job Shop Schedules Using Genetic
Algorithms. IEEE Transactions on Evolutionary Computation.Vol 7., No 3., pp., 275-288.
Jia, Z., Chen, H., Tang, J., 2007. An Improved Particle Swarm Optimization for Multi-
objective Flexible Job-shop Scheduling Problem. IEEE International Conference on Grey
Systems and Intelligent Services, Nanjing, China.
Kacem, I., Hammadi, S., Borne, P., 2002. Approach by localization and multiobjective
evolutionary optimization for flexible job-shop scheduling problems. IEEE Transactions on
Systems, Man, and Cybernetics, Part C: Applications and Reviews. Vol 32., No 1., pp., 1-13.
Kaplanoğlu, V., 2016. An object-oriented approach for multi-objective flexible job-shop
scheduling problem. Expert Systems with Applications, Vol 45.,pp., 71-84
Kennedy, J., Eberhart, R., 1995. Particle Swarm Optimization. IEEE International
Conference on Neural Networks.pp.1942–1948.
Kennedy, J., 1999. Small Worlds and Mega minds: Effect of neighborhood Topology on
Particle Swarm Performance. Evolutionary Computation. Vol 3.
Khoukhi, F., Boukachour, J., Alaoui, A., 2016. The "Dual-Ants Colony": A novel hybrid
approach for the flexible job shop scheduling problem with preventive maintenance.
Computers and Industrial Engineering.
Leon V.J., Wu S.D., Storer R.H., 1994. Robustness measures and robust scheduling for job
shops. IIE Transactions. Vol. 26, pp 32-43.
Li, J., Pan, Q., Xie, S., Jia, B., Wang, Y., 2010 . A hybrid particle swarm optimization and
tabu search algorithm for flexible job-shop scheduling problem. International Journal of
Computer Theory and Engineering. Vol. 2., No 2,. pp., 189-194.
Liu, N., Abdelrahman, M., Ramswamy, S., 2007. A Complete Multiagent Framework for
Robust and Adaptable Dynamic Job Shop Scheduling. IEEE Transactions on Systems, Man
and Cybernetics, Part C. Vol 37., No 5., pp., 904-916.
Liu, H., Abraham, A., Grosan, C., 2007. A novel Variable Neighborhood Particle Swarm
Optimization for multi-objective Flexible Job-Shop Scheduling Problems. Digital Information
Management. Vol 1., pp., 138-145.
Mattfeld, D.C., 1996. Evolutionary search and the job shop, Production and Logistics.
Physica-Verlag, book
Mouelhi W., Pierreval H., 2010. Training a neural network to select dispatching rules in real
time. Computers and Industrial Engineering, Vol 58., No 2., pp., 249-256.
Motaghedi-larijani, A., Sabri-laghaie, K., Heydari, M., 2010. Solving Flexible Job Shop
Scheduling with Multi Objective Approach. International Journal of Industrial Engineering
& Production Research. Vol 21., No 4., pp., 197-209.
Nouiri, M., Jemai, A., Bekrar, A., Niar, S., Ammari, A. C., 2013. An effective particle swarm
optimization to solve flexible job shop scheduling problem. 5th International Conference on
Industrial Engineering and Systems Management (IESM), Rabat, Morocco.
Nouiri, M., Bekrar, A., Jemai, A., Niar, S., Ammari, A.C., 2015. An effective and distributed
particle swarm optimization to solve flexible job shop scheduling problem. Journal of
Intelligent Manufacturing, pp. 1-13.
Nouiri, M., Bekrar, A., Jemai, A., Trentesaux, D., Ammari, A.C., Niar, S., 2015. An improved
multi-agent Particle Swarm Optimization to solve flexible job shop scheduling problem.
International Conference on Computers & Industrial Engineering (CIE45), 28-30 October.
Ourari, S., Berrandjia, L., 2015. Robust approach for centralized job shop scheduling:
Sequential Flexibility. 15th IFAC Symposium on Information Control Problems in
Manufacturing (INCOM), Vol. 48, pp. 1960-1965.
Pan, F., Ye, Ch., Yang, J., 2013. Flexible job shop scheduling problem under uncertainty based
on QPSO algorithm. Advanced Materials Research, Vol. 605-607, pp. 487-492.
Prabhu, V., Trentesaux, D., Taisch, M., 2015. Energy-Aware Manufacturing Operations.
International Journal of Production Research, Vol. 53, pp. 6994-7004.
Ranjan, M., Mahapatra, S., 2016. A quantum behaved particle swarm optimization for flexible
job shop scheduling. Computers and Industrial Engineering, Vol. 93, pp. 36-44.
Singh, M.R., Mahapatra, S.S., Mishra, R., 2015. Robust scheduling for flexible job shop
problems with random machine breakdowns using a quantum behaved particle swarm
optimisation. International Journal of Services and Operations Management, Vol. 20, No. 1,
pp. 1-20.
Subramaniam, V., Raheja, A., 2003. mAOR: A heuristic based reactive repair mechanism for
job shop schedules. International Journal of Advanced Manufacturing Technology, Vol. 22,
No. 9, pp. 669-680.
Sun, L., Lin, L., Wang, Y., Gen, M., Kawakami, H., 2015. A Bayesian Optimization-based
Evolutionary Algorithm for Flexible Job Shop Scheduling. Procedia Computer Science,
Vol. 61, pp. 521-526.
Trelea, I., 2003. The particle swarm optimization algorithm: convergence analysis and
parameter selection. Information Processing Letters, Vol. 85, pp. 317-325.
Trentesaux, D., Pach, C., Bekrar, A., Sallez, Y., Berger, T., Bonte, T., Leitao, P., Barbosa, J.,
2013. Benchmarking Flexible Job-Shop Scheduling and Control Systems. Control
Engineering Practice, Vol. 21, No. 9, pp. 1204-1225.
Vieira, G.E., Herrmann, J.W., Lin, E., 2003. Rescheduling Manufacturing Systems: A
Framework of Strategies, Policies, and Methods. Journal of Scheduling, Vol. 6, No. 1,
pp. 39-62.
Wang, K., Choi, S.H., 2012. A decomposition-based approach to flexible flow shop scheduling
under machine breakdown. International Journal of Production Research, Vol. 50, No. 1,
pp. 215-234.
Wu, S., Storer, R., Chang, P., 1993. One-machine rescheduling heuristics with efficiency and
stability as criteria. Computers and Operations Research, Vol. 20, No. 1, pp. 1-14.
Wu, L., Chen, X., Chen, X.D., Chen, Q.X., 2009. The research on proactive-reactive scheduling
framework based on real-time manufacturing information. Materials Science Forum,
Vol. 626-627, pp. 789-794.
Xiong, J., Xing, L., Chen, Y., 2013. Robust scheduling for multi objective flexible job shop
problems with random machine breakdowns. International Journal of Production
Economics, Vol. 141, pp. 112-126.
Zbib, N., Pach, C., Sallez, Y., Trentesaux, D., 2012. Heterarchical Production Control in
Manufacturing Systems Using the Potential Fields Concept. Journal of Intelligent
Manufacturing, Vol. 23, No. 5, pp. 1649-1670.
Highlights
 The flexible job shop scheduling problem under machine breakdowns is considered.
 A two-stage particle swarm optimization is proposed to solve the problem.
 The proposed algorithm optimizes the makespan, robustness, and stability of the solution.
 A predictive schedule which is more robust and stable is obtained.