
Expert Systems with Applications 39 (2012) 3507–3515


Optimization of modular structures using Particle Swarm Optimization


Orlando Durán a,*, Luis Pérez b, Antonio Batocchio c

a Pontificia Universidad Católica de Valparaíso, Chile
b Universidad Técnica Federico Santa María, Valparaíso, Chile
c DEF/FEM Unicamp, Campinas, SP, Brazil

* Corresponding author. E-mail addresses: orlando.duran@ucv.cl (O. Durán), luis.perez@usm.cl (L. Pérez), batocchi@fem.unicamp.br (A. Batocchio).

Keywords: Modularization; Genetic Algorithms; Modular structures; Particle Swarm Optimization; Swarm intelligence

Abstract

In most configurations of modular structures, products are assumed to have a unique modular structure. However, it is well known that alternatives for constructing modular structures may exist at any level of abstraction. Explicit consideration of alternative structures invokes changes in the number of module instances so that lower costs, more independence of structures and higher efficiency can be achieved. Relatively few research papers in the literature deal with the optimization of modular structures with alternative assembly combinations aiming at the minimization of module investments. First, this paper proposes an optimization model which helps users to change their dedicated systems gradually into modular ones. The optimization is achieved through appropriately selecting the subsets of module instances from given sets. The proposed optimization model is general in the sense that products can have any number of modules and assembly alternatives. Secondly, the paper presents an adapted Discrete Particle Swarm Optimization (DPSO) algorithm, which is applied to the aforementioned problem. Comparisons with a Genetic Algorithm, Simulated Annealing and total enumeration are presented. Finally, performance comparisons between the proposed algorithm (DPSO) and the other optimization techniques, using a set of large-scale problems (for which the optimal solution is unknown), are presented and discussed.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Modularity comprises the use of a finite set of components to meet the infinite changes of the environment; establishing the modules by reviewing the similarities among the components; keeping as much independence of the resulting structures as possible; and using different modules for different varieties of assemblies (Bi & Zhang, 2001). Modularity is one of the primary means of achieving flexibility, economies of scale, product variety and easier product maintenance and disposal. The main motivation for the modularization of a product or a structure is to meet the changing demands on a product, in particular to rapidly achieve maximum flexibility at lower cost. The modularization should result in product architectures or structures such that the product can be obtained by simply assembling pre-existing components. In recent years, the conversion of dedicated structures into modular ones seems to be a trend in the manufacturing field, especially in the search for greater flexibility. The determination of a modular configuration is defined by O'Grady and Liang (1998) as: ''Given a set of candidate modules, produce a design that is composed of a subset of the candidate modules and which satisfies both a set of functional requirements and a set of constraints''. From this definition, it can be assumed that there are alternative modular configurations for a particular product or structure.

Products built around modular architectures can be more easily varied without adding too much complexity to the manufacturing system; moreover, a modular architecture makes the standardization of components more feasible. At the same time, modularity allows the reuse of tools, equipment and expertise, and avoids costly changeovers for personalized products. At present, the employment of modular elements in the manufacturing sphere ranges from the tool, jig, and fixture, through the machine tool, to the manufacturing system. In the following, two typical examples will be presented (Itō, 2008).

A modular tooling system can be designed and used in a series of metal cutting operations. The same cutting tools and adaptors can be used in different applications and machines, which makes it possible to standardize on one tooling system for the entire machine shop. There are many possibilities for assembling tools with a variety of lengths and design characteristics. The same system can also be installed in various machine types in different ways. Fig. 1a shows a representative modular tooling system. Fig. 1b shows a modular boring system (Stephenson & Agapiou, 2006).


Fig. 1. Modular tooling system.

Secondly, modular fixtures have been adopted in many manufacturing industries because of the advantages of lower cost and increased flexibility when switching between different but similar workpieces (Hoffman, 1991). Fig. 2a shows an example of a modular configuration for fixturing. Fig. 2b shows a partial view of a modular fixturing kit. The gradual transformation from the use of dedicated fixture systems to modular ones seems to be a trend in the manufacturing field. Since the 90s there has been significant research on the development and application of Computer Aided Fixture Design programs dedicated to the solution of planning, selection, assembly and fixture validation problems (Rong, Huang, & Zhikun, 2005). Many works have been reported concerning fixture design and optimization using artificial intelligence tools (Cecil, 2001). Liu, Chen, and Chou (2008) used Genetic Algorithms for the intelligent design of automobile fixtures. Babu, Valli, Kumar, and Rao (2005) developed an automated fixture configuration design system to automatically select modular fixture components for prismatic parts and place them in position with satisfactory assembly relationships. However, no authors have reported a methodology to optimize the process of transforming dedicated fixture systems into modular ones.

There are other domains where a wide variety of modular designs is available. That is, there is more than one alternative to build a given solution or structure using a set of modular components from a given and finite set. Depending on the specific domain at hand, many researchers have reported automated systems and methodologies for defining one or more modular configurations for a given application. For example, Asan, Polat, and Serdar (2004) developed an integrated method for designing modular products; to test and validate the methodology, it was applied to a domestic gas detector product family. Hornby, Lipson, and Pollack (2003) developed an automatic design system that produces complex robots by exploiting the principles of regularity, modularity, hierarchy, and reuse. Finally, Retik and Warszawski (1994) developed a knowledge-based system for the detailed design of prefabricated buildings. Despite these clear benefits, a formal theoretical approach for the conversion to modular products is still lacking, and designers are often skeptical regarding the advantages of modularity. The definition of a modular alternative requires comparative estimations of time, performance and cost among alternative modular configurations. Recent applications have used cost models and geometric optimizations based on the physical properties (mass, volume) of candidate modules (Pandremenos, Paralizas, Salonitis, & Chryssolouris, 2009). To evaluate the economic impact of modularity, a cost approach is needed to compare alternative modular cases. The cost differences among modular system alternatives can be used to identify preferred configurations.

A method which helps users to change their dedicated systems into modular ones by selecting the most appropriate alternative from a cost point of view is proposed in this work. The optimization is achieved through appropriately selecting the subsets of module instances from a finite set of module instances. The proposed formulation is general in the sense that products can have any number of modules. Each module may have more than one instance, and each assembled product may have more than one assembly combination of modules. The different alternative assemblies may provide the same capabilities and even the same functionalities as the required ones. Furthermore, this work accommodates simultaneity constraints, where the selection of a particular instance of a module requires the use of a particular instance of another module. It is anticipated that the proposed method can be used as a systematic tool in the selection of module instances when designing and assembling modular products or transforming dedicated structures into modular ones.

Fig. 2. Example of a modular fixturing system.



In addition, the paper proposes the use of two metaheuristics, a Genetic Algorithm (GA) and a Discrete Particle Swarm Optimization (DPSO), for the aforementioned optimization problem. Test results are presented and the performance of the two algorithms is compared to the Simulated Annealing technique and to the solutions obtained from total enumeration tests. In the final section, performance comparisons between the two proposed algorithms on a set of large problems are presented and discussed.

2. Problem statement

Consider, as a simple example, a situation where, with a finite number of combinations of modular components, it is possible to construct a body with a given volume. See Fig. 3(a) and (b), where two alternative configurations for filling the same volume with lego-like pieces are shown. The two alternatives differ in the number of pieces used to construct the same body. Similarly, in Fig. 3(c) seven components were used and in Fig. 3(d) two components were used. Each modular component has a known cost and availability. If a number of bodies need to be assembled at the same moment, several modules are common to some of these bodies, and there is a limited number of each one of the modular components, the challenge is to select the optimum combination of modular designs for each one of the lego-like figures (bodies) at a minimum cost. Through modularity, the number of different parts to be purchased and used may be significantly reduced while achieving a sufficient variety by combination of different modules.

Fig. 3. Two alternative configurations for constructing the two lego-like figures.

The optimization is achieved through appropriately selecting the subsets of module instances from given sets. In general, each module may have more than one instance and each assembled product may have more than one assembly combination of modules. As was commented, the model accommodates simultaneity constraints where the selection of a particular instance of a module necessitates the use of a particular instance of another module. The method is applied to a general problem and is found to be efficient in determining optimum subsets of each module from a given set. It is anticipated that the proposed method can be used as a systematic tool in the selection of module instances when designing and assembling modular products or transforming dedicated structures into modular ones. Two key issues in modular product design are to determine the optimum product variety and the number of module instances required to support this variety. For each parameter set, the best possible product variant is selected as a result of the best combination of module instances. The problem is to select the subsets of module instances that minimize a combined cost objective function.

3. Metaheuristics

3.1. Genetic Algorithms

The basic concepts of Genetic Algorithms (GA) were developed by Holland (1975), who showed that a computer simulation of this process could be employed for solving optimization problems. Every solution to the addressed problem is called a chromosome. A chromosome is a string of binary bits or numbers. The pool of solutions is called the population. New candidates are generated gradually from a set of renewed populations by applying artificial genetic operators selected from strategies based on the survival-of-the-fittest principle, after repeatedly using operators such as crossover and mutation. Genetic Algorithms have been used to solve a variety of optimization problems with some success. Combinatorial problems, sequencing problems included, constitute a class of problems that is particularly difficult to solve. GAs need only a fitness or objective function value. Finally, GAs use probabilistic transition rules to find new design points for exploration, rather than deterministic rules based on gradient information. A random initial population initiates the evolutionary process. Each chromosome is assigned a positive value called fitness, proportional to the quality of the solution. Based on their fitness values, subsets of the chromosomes of the current population are selected for reproduction. The reproduction is accomplished by applying a set of genetic operators, called crossover and mutation, to the chromosomes.

The basic algorithm of the GA is given as follows (an illustrative sketch in code follows the list):

1. Generate a random population of chromosomes.
2. Evaluate the fitness of each chromosome in the population.
3. If the end condition is satisfied, stop and return the best solution.
4. Create a new population by repeating the following steps until the new population is complete:
   - Reproduction: Select two parent chromosomes from the population according to their fitness.
   - Crossover: With a crossover probability, cross over the parents to form a new offspring (children).
   - Mutation: With a mutation probability, mutate the new offspring.
5. Replace: Use the newly generated population for a further run of the algorithm.
6. Go to step 2.
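For illustration only (this is not the authors' implementation), the loop above can be sketched in Python as follows; the cost table, population size and operator probabilities are assumed values, and the integer encoding anticipates the codification of Section 5.

import random

# Hypothetical problem data: cost[i][k] = cost of assembling figure i with alternative k.
N_FIGURES, N_ALTERNATIVES = 10, 4
random.seed(1)
cost = [[random.randint(50, 200) for _ in range(N_ALTERNATIVES)] for _ in range(N_FIGURES)]

def fitness(chromosome):
    # Lower total cost is better; return negative cost so higher fitness means cheaper.
    return -sum(cost[i][k] for i, k in enumerate(chromosome))

def random_chromosome():
    return [random.randrange(N_ALTERNATIVES) for _ in range(N_FIGURES)]

def select(population):
    # Binary tournament selection based on fitness.
    a, b = random.sample(population, 2)
    return a if fitness(a) > fitness(b) else b

def crossover(p1, p2, p_cross=0.8):
    if random.random() > p_cross:
        return p1[:]
    point = random.randrange(1, N_FIGURES)
    return p1[:point] + p2[point:]

def mutate(chromosome, p_mut=0.05):
    return [random.randrange(N_ALTERNATIVES) if random.random() < p_mut else g
            for g in chromosome]

population = [random_chromosome() for _ in range(30)]
for generation in range(90):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(len(population))]
best = max(population, key=fitness)
print("best assignment:", best, "total cost:", -fitness(best))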
3.2. Simulated Annealing

Simulated Annealing (SA) has its roots in the algorithm of Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953), and it has been the object of intensive analysis and has been implemented to solve a series of optimization problems. The main steps of the SA algorithm can be summarized as follows. It starts with an initial feasible solution and repeatedly generates neighbor solutions. A generated solution is accepted if it has a better value of the objective function. If it is worse, the solution may still be accepted depending on a certain probability distribution. The acceptance probability depends on the cooling temperature T. At the beginning, T is set to a value which allows the acceptance of a large portion of the generated solutions. Then T is modified to reduce the acceptance probability. This ensures that the algorithm does not converge to a local optimum at early steps. Several solutions are attempted at each value of T, and the algorithm is stopped as soon as a stopping criterion is reached.
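A minimal, hedged sketch of such an SA loop is shown below; the geometric cooling schedule, the parameter values and the neighbor move are illustrative assumptions, not the configuration used in the paper.

import math
import random

def simulated_annealing(initial, neighbor, cost, t0=100.0, alpha=0.95,
                        moves_per_temp=50, t_min=1e-3):
    # Accept better moves always, worse moves with probability exp(-delta / T);
    # T is reduced geometrically after each batch of moves.
    current, best = initial, initial
    t = t0
    while t > t_min:
        for _ in range(moves_per_temp):
            candidate = neighbor(current)
            delta = cost(candidate) - cost(current)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= alpha  # cooling step
    return best

# Toy usage on the module-selection encoding (assumed data):
N_FIGURES, N_ALTERNATIVES = 10, 4
cost_table = [[random.randint(50, 200) for _ in range(N_ALTERNATIVES)]
              for _ in range(N_FIGURES)]
total_cost = lambda x: sum(cost_table[i][k] for i, k in enumerate(x))

def change_one_alternative(x):
    y = list(x)
    y[random.randrange(N_FIGURES)] = random.randrange(N_ALTERNATIVES)
    return y

start = [random.randrange(N_ALTERNATIVES) for _ in range(N_FIGURES)]
print(total_cost(simulated_annealing(start, change_one_alternative, total_cost)))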

3.3. Particle Swarm Optimization

PSO is an evolutionary computation (EC) method inspired by flocking birds (Eberhart & Kennedy, 1995) and has been applied to many different areas. PSO is initialized with a population of random solutions, and this initial population evolves over generations to find optimal solutions. However, in PSO each particle in the population has a velocity, which enables it to fly through the problem space instead of dying or mutating. Therefore, each particle is represented by a position and a velocity. The modification of the position of a particle is performed by using its previous position information and its current velocity. Each particle knows its best position (personal best) so far and the best position achieved in the group (group best) among all personal bests. These principles can be formulated as:

v_i^{n+1} = w v_i^n + c_1 r_1^n (p_i^n - x_i^n) + c_2 r_2^n (p_g^n - x_i^n)    (1)

x_i^{n+1} = x_i^n + v_i^{n+1}    (2)

where:
- w is the inertia weight;
- c_1, c_2 are two positive constants, called the cognitive and social parameters, respectively;
- r_1, r_2 are random numbers, uniformly distributed in [0, 1];
- n = 1, 2, . . . , N denotes the iteration number;
- N is the maximum allowable iteration number.

The first term on the right-hand side of Eq. (1) is the previous velocity of the particle, which enables it to fly in the search space. The second and third terms are used to change the velocity of the agent according to pbest and gbest. Generally speaking, the set of rules that governs PSO is: evaluate, compare, and imitate. The evaluation phase measures how well each particle (candidate solution) solves the problem at hand. The comparison phase identifies the best particles. The imitation phase produces new particles based on some of the best particles previously found. These three phases are repeated until a given stopping criterion is met. The objective is to find the particle that best solves the target problem.
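The following sketch illustrates Eqs. (1) and (2) for a generic continuous minimization problem; the parameter values and the test function are assumptions made only for this example.

import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Standard continuous PSO implementing the velocity update (1) and position update (2).
    x = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]          # personal best positions
    gbest = min(pbest, key=f)            # group best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (1): inertia + cognitive + social components
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Eq. (2): position update
                x[i][d] += v[i][d]
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
        gbest = min(pbest, key=f)
    return gbest

sphere = lambda p: sum(t * t for t in p)
print(pso_minimize(sphere, dim=3))   # should approach the origin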
4. Mathematical formulation

In this paper, we attempt to solve a generalized selection and optimization problem in the modular construction of a set of figures. Consider a situation having F figures and M modules. Each figure is characterized by its structure, and each structure is composed of a given combination of modules. Figures may have S alternative configurations, each one with a different combination of modules. Therefore, a certain modular configuration is associated with a given cost, corresponding to the total number and types of modules needed to construct the referred structure or figure. Thus, the problem is to find optimal modular configurations minimizing the total cost. The mathematical model is presented below. As mentioned, the problem consists in determining the minimal cost required to assemble all the figures simultaneously, selecting the appropriate assembly alternative for each one of the figures. Here, all the figures are to be assembled simultaneously, which means that the required quantity of some modules may increase according to the number of figures whose selected assembly option uses the module at hand. Let us introduce the elements of this optimization model. Let:

- m be the number of modules,
- f the number of figures,
- s the number of alternative assemblies or set-ups,
- i the index of figures (i = 1, . . . , f),
- j the index of modules (j = 1, . . . , m),
- k the index of alternative assemblies (k = 1, . . . , s).

Given an incidence matrix F = [F_ijk], with F_ijk = n, n = 0, 1, 2, . . .; i = 1, . . . , f; j = 1, . . . , m; k = 1, . . . , s, where n represents the number of modules of type j used by figure i in its k-th assembly alternative. Each module j has a unitary cost C_j. The objective function of this minimization problem is the total cost of a given set of assemblies which are to be constructed simultaneously using a restricted set of modules:

min Z = sum_{i=1}^{f} sum_{j=1}^{m} sum_{k=1}^{s} F_ijk C_j A_ik

where

A_ik = 1 if figure i is assembled using assembly alternative k, and 0 otherwise,

subject to

sum_{k=1}^{s} A_ik = 1 for all i.

5. Codification

The first and most important step in preparing an optimization problem for a metaheuristic solution is that of defining a particular coding of the design variables and their arrangement into a string of numerical values, to be used as the chromosome by the GA or as an individual by the swarm-based algorithm. In order to generate the members of the population, their length is calculated first. Then random numbers in the range {1, . . . , S} are generated to form the chromosome. In this paper, a direct coding scheme is used; that is, the allele of each gene or swarm member (also called particle) represents the modular alternative with which each part is constructed. The member representation is shown below, where M_j denotes the alternative to which figure j has been assigned. Using this codification, the first element represents the modular alternative used to construct or assemble figure number 1, the second element represents the modular alternative used by the second figure, and so on.

M1 M2 M3 ... MM

For example, consider the following member, where ten figures can be composed of modular components with at most 4 alternative assemblies:

1 2 4 2 3 3 1 2 3 2

The member or chromosome indicates that figure 1 is constructed using its modular alternative number 1, as is figure number 7. Figures 2, 4, 8 and 10 use their alternative number 2. Figures 5, 6 and 9 are constructed according to alternative structure number 3. Figure number 3 is constructed according to alternative structure number 4. Note that the chromosomes are fixed in length based on the size of the problem, that is, the number of figures or structures to be assembled simultaneously.
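To tie the codification to the objective function of Section 4, the sketch below decodes a hypothetical member and evaluates Z; the incidence data F, the costs C and the member itself are invented for illustration and are not taken from the paper's test problems.

# F[i][k][j] = number of modules of type j used by figure i under alternative k,
# C[j] = unitary cost of module j; member[i] = chosen alternative (1-based) for figure i.

C = [10, 4, 7]                       # three module types
F = [                                # two figures, two alternatives each
    [[1, 3, 0], [2, 0, 1]],
    [[0, 5, 0], [1, 2, 1]],
]
member = [2, 1]                      # figure 1 -> alternative 2, figure 2 -> alternative 1

def total_cost(member, F, C):
    # Objective Z = sum over figures of sum_j F[i][k_i][j] * C[j], with k_i read from the member.
    z = 0
    for i, k in enumerate(member):
        z += sum(F[i][k - 1][j] * C[j] for j in range(len(C)))
    return z

print(total_cost(member, F, C))      # 2*10 + 1*7 + 5*4 = 47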

5.1. Proposed GA

There are many different types of reproduction operators, such as proportional selection, tournament selection, ranking selection, etc. Through selection, crossover and mutation it is possible to obtain a new population from the current one. In this study, reproduction is accomplished by copying the best individuals from one generation to the next; this approach is often called an elitist strategy. Parameterized uniform crossovers are employed. After two parents are chosen randomly from the full old population (including chromosomes copied to the next generation in the elitist pass), at each gene a biased coin is tossed to select which parent will contribute to the offspring. To prevent premature convergence of the population, at each generation one or more new members of the population are randomly generated from the same distribution as the original population. This process has the same effect as applying, at each generation, the traditional mutation process with a small probability. As commented above, all chromosomes of the first generation are randomly generated. This scheme was based on Goncalves and Resende (2004). Fig. 5 depicts the evolutionary process.

Fig. 5. Evolutionary process used in this approach (the next generation is formed by the top individuals cloned from the current generation, by individuals generated by crossover, and by randomly generated individuals).
tionary process.

6. Proposed swarm-based clustering method

The traditional PSO algorithm was developed for continuous domains. PSO is an evolutionary computation (EC) method inspired by flocking birds (Eberhart & Kennedy, 1995). Correa, Freitas, and Johnson (2006) proposed a discrete PSO (DPSO) algorithm for attribute selection in data mining applications. The algorithm proposed here is based on the DPSO and on the concept of proportional likelihoods, also used by Eberhart and Kennedy (1995). The main difference between the traditional PSO algorithm and the algorithm proposed here is that the proposed algorithm does not use a vector of velocities as the standard PSO algorithm does. It works with a mechanism inspired by the proportional likelihoods concept. According to Correa et al. (2006), the notion of proportional likelihood used in the DPSO algorithm and the notion of velocity used in the standard PSO are somewhat similar.

This algorithm deals with discrete variables (assembly alternatives), and its population of candidate solutions contains particles of a given size (n = number of figures). Each component of the particle (algorithmically called here the vector) takes a value between 1 and k and represents the alternative with which the figure will be assembled. Potential sets of solutions are represented by a swarm of particles. There are N particles in a swarm. Each particle X(i) keeps a record of the best position it has ever attained; this information is stored in a separate particle labeled B(i). The swarm also keeps a record of the global best position ever attained by any particle in the swarm; this information is stored in a separate particle labeled G.

Fig. 4. The representation model (figures, assembly alternatives and modules).

The initial population of particles is generated with a series of integer random numbers. These numbers are uniformly generated between 1 and k inclusive. Potential solutions (particles) to the target problem are encoded as fixed-length discrete strings, i.e., X(i) = (x(i, 1), x(i, 2), . . . , x(i, n)), where x(i, j) is in {1, . . . , k}, i = 1, 2, . . . , N and j = 1, 2, . . . , n.

For example, given the number of possible alternative configurations k = 3 and N = 4 particles, a swarm could look like this:

X(1) = (1, 2, 1, 2, 3, 3)
X(2) = (1, 1, 2, 1, 2, 3)
X(3) = (1, 2, 3, 2, 1, 3)
X(4) = (1, 2, 2, 3, 1, 3)

In this example, particle X(1) = (1, 2, 1, 2, 3, 3) represents a candidate solution where products 1 and 3 are configured according to alternative option number 1, products 2 and 4 are configured according to alternative 2, and products 5 and 6 are constructed according to alternative 3. After the initial population of particles is generated, the fitness function is computed. In addition, in each iteration a perturbation subset is incorporated into the swarm. This perturbation subset is inspired by the bit-change mutation approach (Lee, Park, & Jeon, 2007), with which the proposed PSO algorithm can escape from good local optima. The perturbation subset corresponds to the fraction of the total number of particles of the swarm that are mutated
when the fitness function tends to premature stabilization. In this work we used 3% of the swarm size. The perturbation subset is selected randomly from the total swarm, and its size was obtained through sensitivity tests. The perturbation subset operates as follows:

while (perturbation element <> perturbation set size)
    if (x(i, j) = k) then x(i, j) = p, with p <> k
end while;

where p is selected randomly from the other possible alternative configurations.

As commented previously, the algorithm proposed in this paper is based on the notion of proportional likelihoods and on an adaptation of the concept of velocity proposed by Hornby et al. (2003) in DPSO. The updating process is based on X(i), B(i), and G, and it works as follows. In the context of combinatorial optimization, the velocity of a particle must be understood as an ordered set of transformations that operates on a solution. The transformation of a solution is represented by a term that expresses the difference between two positions. Hence, in each case, (X(i) - B(i)) and (X(i) - G) represent the movements needed to change from the position given by the first term to the position given by the second term of each expression. For example, consider the following instances of X(i) and B(i):

X(i) = (1, 3, 3, 2, 1, 3)
B(i) = (1, 1, 3, 3, 2, 3)

The difference between X(i) and B(i) represents the changes that will be needed to move particle i from X to B. The number l represents the number of elements different from 0 in the subtraction of B from X. If the difference between a given element of X(i) and B(i) is not null, it means that the said position is susceptible to change through the operations described below.

A new vector P is generated that records the positions where the elements of X(i) and B(i) are not equal. A random number is generated and assigned to b. This number b corresponds to the number of changes that will be made to X(i) based on the difference between X(i) and B(i); therefore, b is in the interval (0, l). Then, a set w of b randomly generated binary numbers is defined. If the binary number is 1, the change is made; on the other hand, if the number is 0, the change is not performed. A similar process is performed to update the particle position in accordance with the best global position (G). In case several of the applied movements involve the same position, the change caused by the global best position, the second operation in our algorithm, has priority. For instance, if:

X(i) - B(i) = (1-1, 3-1, 3-3, 2-3, 1-2, 3-3) = (0, 2, 0, -1, -1, 0)

the new vector P(i) is generated: P(i) = (2, 4, 5). Thus, l = 3. Suppose that b = 2 and w = (0, 1, 1). That means that positions 4 and 5 will be replaced in X(i) by the elements (in the same positions) of B(i), obtaining a modified position vector X'(i) shown as follows:

X'(i) = (1, 3, 3, 3, 2, 3)

The process is repeated with the new position X'(i) and G, obtaining the new position X(i + 1).
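The position update described above can be sketched as follows; the helper names, the global-best vector G and the random draws are illustrative assumptions, and positions are 0-based in the code while the text counts them from 1.

import random

def move_towards(x, target):
    # One update step: copy a random subset of the differing positions of `target` into `x`.
    p = [idx for idx, (a, b) in enumerate(zip(x, target)) if a != b]   # differing positions
    l = len(p)
    if l == 0:
        return list(x)
    b = random.randint(0, l)                       # number of changes, between 0 and l
    w = [random.randint(0, 1) for _ in range(b)]   # binary mask over the first b candidates
    new_x = list(x)
    for idx, flag in zip(p[:b], w):
        if flag == 1:
            new_x[idx] = target[idx]
    return new_x

def perturb(x, k):
    # Bit-change-style perturbation: reassign one position to a different alternative.
    j = random.randrange(len(x))
    choices = [alt for alt in range(1, k + 1) if alt != x[j]]
    y = list(x)
    y[j] = random.choice(choices)
    return y

X_i = [1, 3, 3, 2, 1, 3]
B_i = [1, 1, 3, 3, 2, 3]
G   = [1, 2, 3, 3, 2, 3]    # hypothetical global best
# Personal-best step first, then global-best step, so movements toward G take priority.
X_next = move_towards(move_towards(X_i, B_i), G)
print(X_next, perturb(X_next, k=3))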
more iterations. The DPSO configuration was selected based on
7. Numerical Results

The purpose of this section is to show, using a series of numerical examples, how the proposed formulation can be used to configure a set of modular structures and, in addition, to assess the performance of the Genetic Algorithm and the swarm-based optimization method. Where possible, the results are compared to those obtained from the Simulated Annealing algorithm and from total enumeration. The results are grouped into two categories: first, nine randomly generated trial problems for which the optimum was determined by total enumeration; the second category corresponds to a set of 12 larger problems, for which the optimum solution is unknown.

7.1. Problems solved by total enumeration

Nine test problems were used to evaluate the proposed implementations. Each experiment considers a problem with 25 figures, each one with 2 alternative configurations. The module population is comprised of 20 different instances. These 9 problems were first solved through an exhaustive search, so that their optima are known. Each total enumeration experiment took about 16 h on a PC with an Intel Pentium 4 running at 3.6 GHz under MS Windows. As mentioned before, Simulated Annealing was also applied to the nine problems. In order to obtain dispersion information, and as a means of validating and comparing the results, the SA experiments were run 10 times each. Table 1 shows the minimum cost obtained by total enumeration and the results obtained by 10 runs of the SA algorithm (optimal solutions obtained are indicated in bold).

Then the same nine problems were solved by the proposed GA and by the DPSO algorithm. In the case of the GA, two sets of experiments were run. The first set considered a population size of 30 individuals and 90 generations. The second set of experiments considered the same population size and 150 generations. All experiments were run 10 times; the results of the first set of experiments are shown in Table 2, where optimal solutions are indicated in bold. The results of the second set of experiments are shown in Table 3. A comparison of these two tables shows that the GA found the optimal solution 44 times (49%) in the case of the first configuration and 82 times (91%) over the 90 tests using 150 generations. Table 4 shows the mean deviations (%) between the optimal solution and the best solution obtained from the 10 executions of the proposed GA for each test. It can be seen that the GA with 150 generations obtains the optimum a large number of times, with small differences in the remaining occasions. The standard deviation of the error of the different runs is relatively low, indicating that the performance of the algorithm is stable.

The nine randomly generated trial problems were also used to evaluate the proposed implementation of the DPSO algorithm and to tune the perturbation set size. In the case of the DPSO algorithm, alternative configurations were evaluated using nine experiments. The configurations differ in population size and number of iterations. The parameters used in each one of the DPSO configurations are shown in Table 5. All experiments were run 10 times. Table 6 shows the mean deviations (%) between the optimal solution and the best solution obtained from the 10 executions of the proposed DPSO algorithm for each of the nine configurations. The standard deviation of the error of the different runs is relatively low, indicating that the performance of the algorithm is stable and improves as the swarm size grows. An inspection of Table 6 shows that, using any one of the above configurations of the DPSO algorithm parameters, it was possible to find optimal or near-optimal solutions. As expected, the best results were obtained with larger swarms and more iterations. The DPSO configuration was selected based on the two parameters shown in Table 6. As a compromise between solution quality and computational effort, configuration number 8 was selected, that is, a medium-sized swarm with 100 iterations.

Table 1
Results obtained with SA.

Problem  Optimal solution  Run 1  Run 2  Run 3  Run 4  Run 5  Run 6  Run 7  Run 8  Run 9  Run 10
1 3583 3586 3591 3590 3594 3592 3586 3590 3588 3592 3590
2 3877 3887 3893 3889 3881 3888 3888 3885 3885 3881 3882
3 2622 2623 2637 2636 2634 2630 2634 2636 2622 2633 2635
4 3426 3428 3431 3430 3430 3429 3430 3431 3427 2428 3432
5 3491 3505 3501 3491 3500 3505 3499 3491 3498 3498 3500
6 3290 3290 3301 3305 3299 3302 3302 3298 3297 3293 3304
7 3714 3723 3729 3728 3733 3729 3728 3726 3727 3724 3736
8 3279 3283 3284 3280 3283 3281 3282 3281 3280 3280 3279
9 2921 2935 2929 2926 2926 2934 2934 2934 2939 2929 2935

Table 2
Results from the GA with 90 generations.

Problem  Optimal solution  Run 1  Run 2  Run 3  Run 4  Run 5  Run 6  Run 7  Run 8  Run 9  Run 10
1 3583 3586 3585 3583 3585 3583 3584 3584 3590 3588 3583
2 3877 3885 3877 3877 3877 3881 3878 3878 3877 3877 3877
3 2622 2626 2625 2625 2623 2624 2627 2622 2631 2624 2622
4 3426 3441 3430 3430 3432 3428 3426 3426 3426 3428 3426
5 3491 3495 3491 3491 3492 3503 3495 3491 3491 3491 3491
6 3290 3290 3290 3290 3292 3292 3295 3302 3290 3291 3291
7 3714 3714 3714 3714 3714 3718 3714 3714 3714 3714 3718
8 3279 3289 3285 3279 3289 3285 3285 3279 3290 3279 3283
9 2921 2921 2930 2921 2921 2921 2926 2921 2921 2921 2921

Table 3
Results from the GA with 150 generations.

Problem  Optimal solution  Run 1  Run 2  Run 3  Run 4  Run 5  Run 6  Run 7  Run 8  Run 9  Run 10
1 3583 3583 3583 3583 3583 3583 3583 3583 3583 3585 3583
2 3877 3877 3880 3877 3877 3880 3880 3877 3877 3877 3877
3 2622 2622 2622 2622 2622 2622 2622 2622 2622 2622 2625
4 3426 3426 3426 3426 3426 3426 3426 3426 3426 3426 3426
5 3491 3491 3491 3491 3491 3491 3491 3491 3491 3491 3491
6 3290 3290 3290 3290 3294 3290 3290 3294 3290 3290 3290
7 3714 3714 3714 3714 3714 3714 3714 3714 3714 3714 3714
8 3279 3279 3279 3279 3285 3279 3279 3279 3279 3279 3279
9 2921 2921 2921 2921 2921 2921 2921 2921 2921 2921 2921

Table 4
Gap among GA, SA results (using reference set) and the optimal solutions.

Rows correspond to test problems 1-9. Columns are grouped, left to right, as GA with 90 generations, GA with 150 generations, and SA; each group reports: Mean deviation (%), St. dev. of the error (%), Number of instances.
5.86 6.51 3 0.56 1.77 9 19.26 7.26 0
3.61 6.79 6 2.32 3.74 7 22.96 9.99 0
11.06 10.24 2 1.14 3.62 9 38.14 20.50 1
9.63 13.49 4 0.00 0.00 10 10.51 4.60 0
6.02 11.01 6 0.00 0.00 10 22.34 13.76 2
6.99 11.38 4 2.43 5.13 8 27.66 14.50 1
2.15 4.54 8 0.00 0.00 10 38.50 10.47 0
16.16 13.02 3 1.83 5.79 9 7.01 4.99 1
4.79 10.61 8 0.00 0.00 10 38.00 14.83 0

Table 7 shows the results of the set of ten experiments using the aforementioned configuration of the proposed DPSO algorithm. As before, and for comparison purposes, all the optimal solutions found by the algorithm are indicated in bold in this table.

Table 5
Parameters used in each one of the PSO algorithm configurations.

Test number  Iterations  Swarm size
1  60  10
2  100  10
3  200  10
4  60  20
5  100  20
6  200  20
7  60  50
8  100  50
9  200  50

7.2. Larger problems

As commented before, the model and the selected configurations for the Genetic Algorithm and for the DPSO were tested on a further 12 randomly generated trial problems. The details of each one of the 12 test problems are shown in Table 8. These 12 problems are significantly larger than those solved by total enumeration and could not be solved optimally within a reasonable time limit.

Table 6
Results of the nine PSO alternative configurations.

Configuration  Avg. error (%)  St. dev. of the error (%)
1  0.19  0.18
2  0.06  0.10
3  0.02  0.04
4  0.07  0.13
5  0.02  0.07
6  0.01  0.03
7  0.02  0.04
8  0.01  0.03
9  0.00  0.00

Table 8
Configurations of the 12 test problems.

Test  Pieces  Options  Modules
1  20  3  15
2  20  5  15
3  20  10  15
4  30  3  15
5  30  5  15
6  30  10  15
7  40  3  15
8  40  5  15
9  40  10  15
10  50  3  15
11  50  5  15
12  50  10  15
In applications like these, a total enumeration strategy would consume CPU time in the order of years. Consider the size of a given test problem, say problem number 3: the total number of possible combinations corresponds to 10^20 (each of the 20 pieces can take any of its 10 alternatives). Hence, explicit enumeration is computationally infeasible for this type of problem. Therefore, it is an open question how far the best solutions found by the two proposed algorithms are above the optimum value.

We applied the GA 100 times, with different random cost values and modular configurations of products, to the set of 12 test problems. Computational results are summarized in Table 9. Column ''Best'' shows the total cost of the best solution found by the Genetic Algorithm in the 100 runs. Column ''% |Best-Ave|/Best'' shows the deviation between the best solution and the averaged solution over the 100 runs. Column ''1% Opt.'' shows the percentage of solutions that differed by at most 1% from the best solution found. Column ''5% Opt.'' shows the percentage of solutions that differed by at most 5% from the best solution found. The GA found solutions that were on average always less than 3.07% above the best solution found. For each problem, 55% of the solutions on average were within 1% of the best. Finally, 100% of the solutions found by the Genetic Algorithm were within 5% of the best.

Table 9
Performance evaluation of the resolution of the 12 test problems using the proposed GA.

Test  Best  Average  % |Best-Ave|/Best  1% Opt  5% Opt
1  1885  1894.5  0.50  90.00  100.00
2  3658  3696.8  1.06  30.00  100.00
3  4669  4718.2  1.05  60.00  100.00
4  3854  3883.9  0.78  90.00  100.00
5  4880  4913.5  0.69  80.00  100.00
6  6967  7180.8  3.07  10.00  100.00
7  6593  6638.4  0.69  70.00  100.00
8  6302  6354.2  0.83  60.00  100.00
9  14255  14396.3  0.99  50.00  100.00
10  5291  5337.8  0.88  50.00  100.00
11  10322  10419.1  0.94  50.00  100.00
12  12832  12996.8  1.28  30.00  100.00

The DPSO algorithm was also applied 100 times to each one of the 12 problems described above. Computational results are summarized in Table 10. Column ''Best'' shows the total cost of the best solution found by the DPSO in the 100 runs. Column ''% |Best-Ave|/Best'' shows the deviation between the best solution and the averaged solution over the 100 runs. Column ''1% Opt.'' shows the percentage of solutions that differed by at most 1% from the best solution found. Column ''5% Opt.'' shows the percentage of solutions that differed by at most 5% from the best solution found. The proposed DPSO algorithm found solutions that were on average always less than 3.30% above the best solution found. For each problem, 65% of the solutions on average were within 1% of the best. Finally, almost 100% of the solutions found by the DPSO algorithm were within 5% of the best.

Table 10
Comparative results for the twelve large problems.

Test  Best  Average  % |Best-Ave|/Best  1% Opt  5% Opt
1  1875  1875  0.00  100.00  100.00
2  3632  3633.3  0.04  100.00  100.00
3  4332  4370  0.88  60.00  100.00
4  3822  3825.2  0.08  100.00  100.00
5  4682  4700.6  0.40  100.00  100.00
6  6590  6657.7  1.03  50.00  100.00
7  6514  6535.6  0.33  100.00  100.00
8  6060  6101.5  0.68  70.00  100.00
9  13212  13492.8  2.13  20.00  100.00
10  5167  5203.4  0.70  60.00  100.00
11  9823  9979  1.59  20.00  100.00
12  11589  11971.9  3.30  10.00  80.00

Fig. 6 shows the performance of both proposed algorithms applied to the 12 problems in terms of the best solutions found. Fig. 7 shows the percentage error between the best solution found and the average solution of each experiment, for all the executed runs. As can be observed in Fig. 6, the results obtained from DPSO are better than those obtained from the application of the GA. In those problems where the number of alternatives for each module is lower, DPSO showed better results. In the case of the DPSO, as the number of alternative assemblies for each figure increases, the percentage of error increases. In those cases where both the number of alternatives and the number of figures is high, the GA presented lower errors (Fig. 7).

Fig. 6. Best solutions obtained by both algorithms in the 12 problems.

Fig. 7. Percentage errors obtained by both algorithms in the 12 problems.

Table 7
Results from the DPSO.

Problem  Optimal solution  Run 1  Run 2  Run 3  Run 4  Run 5  Run 6  Run 7  Run 8  Run 9  Run 10
1 3583 3583 3583 3583 3583 3583 3583 3583 3583 3583 3583
2 3877 3877 3877 3877 3877 3878 3877 3877 3877 3877 3877
3 2622 2622 2622 2624 2622 2622 2622 2622 2622 2622 2622
4 3426 3426 3426 3426 3426 3426 3426 3426 3426 3426 3426
5 3491 3491 3491 3491 3494 3491 3491 3491 3495 3491 3494
6 3290 3290 3290 3290 3290 3290 3295 3290 3290 3290 3290
7 3714 3714 3714 3714 3714 3714 3714 3714 3714 3714 3718
8 3279 3279 3279 3279 3279 3279 3283 3279 3279 3279 3279
9 2921 2921 2923 2921 2921 2921 2921 2921 2921 2921 2921

8. Conclusion

A Genetic Algorithm and a discrete Particle Swarm Optimization (DPSO) model were presented for the selection and optimization of modular structures. This problem is found in organizations that face the gradual or complete transformation of their products into modular-based assemblies. The optimization problem considers the cost involved in using a set of modules to obtain a number of modular assemblies that could substitute a set of non-modular products or structures. A set of problems was generated, and their optimal solutions were obtained with total enumeration. The DPSO and GA were tested, and the results show that both algorithms obtained good solutions in almost all cases while maintaining a low variability of the results. The methods were also applied to a set of general and larger problems and were found to be efficient in determining optimum subsets of each module from a given set. The proposed algorithms are computationally feasible for large problems. Both algorithms find good solutions with moderate CPU times, in the order of a few seconds (assuming that the best solutions found are close to the optimum). The proposed formulation is general in the sense that products can have any number of modules. Through modularity, the number of different parts to be purchased for an assembled product set may be significantly reduced while achieving a sufficient variety by combination of different modules.

References

Asan, U., Polat, S., & Serdar, S. (2004). An integrated method for designing modular products. Journal of Manufacturing Technology Management, 15(1), 29–49.
Babu, B. S., Valli, P. M., Kumar, A. V. V. A., & Rao, D. N. (2005). Automatic modular fixture generation in computer-aided process planning systems. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 219(10), 1147–1152.
Bi, Z. M., & Zhang, W. J. (2001). Modularity technology in manufacturing: Taxonomy and issues. International Journal of Advanced Manufacturing Technology, 18(5), 381–390.
Cecil, J. (2001). Computer-aided fixture design – a review and future trends. The International Journal of Advanced Manufacturing Technology, 18, 790–793.
Correa, E. S., Freitas, A., & Johnson, C. G. (2006). A new discrete particle swarm algorithm applied to attribute selection in a bioinformatics data set. In Proceedings of the Genetic and Evolutionary Computation Conference – GECCO-2006 (pp. 35–42). Seattle, WA, USA: ACM Press.
Eberhart, R. C., & Kennedy, J. (1995). Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks (pp. 12–13). Perth, Australia: IEEE Service Center.
Goncalves, J. F., & Resende, M. G. C. (2004). An evolutionary algorithm for manufacturing cell formation. Computers and Industrial Engineering, 47, 247–273.
Hoffman, E. G. (1991). Jig and fixture design. Cengage Delmar Learning.
Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor: University of Michigan Press.
Hornby, G. S., Lipson, H., & Pollack, J. B. (2003). Generative representations for the automated design of modular physical robots. IEEE Transactions on Robotics and Automation, 19, 703–719.
Itō, Y. (2008). Modular design for machine tools. McGraw-Hill Professional, p. 504.
Lee, S., Park, H., & Jeon, M. (2007). Binary Particle Swarm Optimization with bit change mutation. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 90(10), 2253–2256.
Liu, T., Chen, C. H., & Chou, J. H. (2008). Intelligent design of the automobile fixtures with Genetic Algorithms. International Journal of Innovative Computing, Information and Control, 4(10), 2533–2550.
Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A., & Teller, E. (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21(6), 1087–1092.
O'Grady, P., & Liang, W.-Y. (1998). An internet-based search formalism for design with modules. Computers and Industrial Engineering, 35(1–2), 13–16.
Pandremenos, J., Paralizas, K., Salonitis, K., & Chryssolouris, G. (2009). Modularity concepts for the automotive industry: A critical review. CIRP Journal of Manufacturing Science and Technology, 1, 148–152.
Retik, A., & Warszawski, A. (1994). Automated design of prefabricated building. Building and Environment, 29(4), 421–436.
Rong, Y., Huang, S. H., & Zhikun, H. (2005). Advanced computer-aided fixture design. Burlington, MA, USA: Elsevier Academic Press, p. 414.
Stephenson, D. A., & Agapiou, J. S. (2006). Metal cutting theory and practice. CRC Taylor & Francis, p. 846.
