
To appear in the Journal of Experimental & Theoretical Artificial Intelligence


Vol. 00, No. 00, January 2018, 1–33

Towards Effective Resolution Approaches for Solving the Sum


Coloring Problem


Olfa Harrabi^a and Jouhaina Chaouachi^b∗

^a Higher Institute of Management of Tunis, Tunis University, 41, Liberty Street - Bouchoucha - 2000 Bardo, Tunisia;
^b Institute of Advanced Business Studies of Carthage, Carthage University, IHEC Carthage Presidency - 2017 Tunis, Tunisia

∗ Corresponding author. Email: siala.jouhaina@gmail.com

(Received 00 Month 20XX; final version received 00 Month 20XX)

The paper sheds light on the sum coloring of graphs, which is of great relevance to several applications in scheduling and resource allocation. Throughout this work, we investigate solving the problem using three new mathematical formulations. Moreover, we integrate a general modeling methodology, namely a hyper-heuristic approach based on different metaheuristics, to effectively obtain tight upper bounds. A comprehensive empirical study reports new exact solutions for some tested instances. The computational performance of our proposed hyper-heuristic approach, evaluated on a set of benchmark instances, turned out to be effective in terms of both computation time and solution quality.

Keywords: Sum coloring problem; Mathematical formulation; Hyper-heuristic; Genetic algorithm; Simulated annealing; Iterated local search

1. Introduction

The sum coloring problem dates back to 1989 (Kubicka & Schwenk, 1989). It is tightly related to the classical Graph Coloring Problem (GCP) and has been proved to be NP-hard. Due to its theoretical and practical interest, the problem has recently attracted increasing attention. From a theoretical point of view, the studied problem is related to other
generalizations of GCP like sum multi-coloring (Bar-Noy, Halldórsson, Kortsarz, Salman,
& Shachnai, 2000), sum list coloring (Berliner, Bostelmann, Brualdi, & Deaett, 2006)
and bandwidth coloring (Johnson, Mehrotra, & Trick, 2008). The Sum coloring problem
has also various applications including mainly Very-Large-Scale Integration design (Sen,
Deng, & Guha, 1992), scheduling (Bonomo, Duran, Napoli, & Valencia-Pabon, 2015)
and resource allocation (Bar-Noy, Bellare, Halldórsson, Shachnai, & Tamir, 1998).

1.1. Related work


Due to the interest of the problem, considerable efforts have been devoted to develop-
ing both exact and heuristic algorithms for tackling the problem. However, given the
computational complexity of the sum coloring problem, there is only a limited number of
works dealing with exact methods. In (Lecat, Li, Lucet, & Li, 2015), a basic Constraint
Programming model (CP) was proposed. Nevertheless, the results were not competitive
regarding the execution times on rather easy instances. Moreover, in the same paper,
authors elaborated a Branch and Bound algorithm (BBMSCP) to solve the sum coloring
problem. This approach provides better results than the CP model, but remains limited
to small graphs. The same paper also investigates a SAT programming method and describes different SAT encodings for the problem. Tested on randomly generated graphs and on six DIMACS instances, the SAT method outperforms BBMSCP and CP. At this level, it is worth noting that the performance of all the cited algorithms was experimentally evaluated using only small benchmark instances reputed to be easy to color. In the same context, an Integer Linear Programming (ILP) model was proposed in (Wang, Hao, Glover, & Lü, 2013). However, the computational performance of the ILP was evaluated using only a few graphs. Last but not least, the authors of (Wang et al., 2013) recast the sum
coloring problem using a Binary Quadratic Formulation (BQF). Nevertheless, the model
was heuristically solved via a path relinking algorithm.
To the best of our knowledge, polynomial-time algorithms exist only for specific graph classes such as trees, interval graphs and bipartite graphs (Bar-Noy & Kortsarz, 1998; Kroon, Sen, Deng, & Roy, 1996). However, the decision version of the sum coloring problem remains NP-complete in the general case, and any algorithm proposed to solve the problem optimally is expected to have exponential complexity. Therefore, various approximate algorithms have been proposed for its resolution. Several studies have focused on obtaining sub-optimal solutions (upper bounds) or computing lower bounds in acceptable time. The use of heuristic and metaheuristic algorithms first appeared in 2007. These algorithms belong mainly to three classes: greedy algorithms, local search heuristics and evolutionary algorithms. In what follows, we detail these methods in a chronological
order. An early Parallel Genetic Algorithm (PGA) was introduced in (Kokosiński &
Kwarciany, 2007). PGA uses proportional selection, assignment and partition crossover
and first-fit mutation. It reports upper bounds on 16 small DIMACS graphs. Next in
2009, authors in (Li, Lucet, Moukrim, & Sghiouer, 2009) proposed two greedy algo-
rithms Minimum coloring DSATUR (MDSAT) and Minimum coloring RLF (MRLF)
as extensions of the well-known Degree SATURation method (DSATUR) and Recursive
Largest First (RLF) (Brélaz, 1979). Simulation results prove the efficiency of [MDSAT &
MRLF] compared to [DSATUR & RLF]. Then, in 2010, the work of (Bouziri & Jouini,
2010) investigates the Tabu search technique to propose new upper bounds for the considered problem. The approach, inspired by the Tabucol coloring algorithm, showed good behavior on a few DIMACS instances compared to [MDSAT & MRLF]. Moreover, in 2011,
(Helmar & Chiarandini, 2011) focused on proposing a local search heuristic namely Multi-
neighborhood Search with Local Search method (MDS(5)+LS). Their proposed approach
is based on variable neighborhood search and iterated local search. The latter, tested on
some COLOR02 competitions and DIMACS instances, outperforms all recent methods.
For the same class of local search heuristics, (Benlic & Hao, 2012) elaborated a Breakout
Local Search algorithm (BLS) in 2012. It relies on the use of different neighborhood operators with adaptive perturbation strategies and improved the upper bounds for only 4 instances out of 27 tested graphs. In the same year, (Wu & Hao, 2012) proposed a greedy heuristic based on the extraction of independent sets, EXtraction of independent Sets for COLoring (EXSCOL). The method is particularly powerful on large graphs.
Since 2014, most of research studies have been conducted to propose different evolution-
ary approaches. Firstly, authors in (Moukrim, Sghiouer, Lucet, & Li, 2014) proposed a
hybrid Memetic Algorithm for the Sum Coloring problem (MA-MSC) to improve upper
and lower bounds for the sum coloring problem. Then, (Jin, Hao, & Hamiez, 2014) elaborated another Memetic Algorithm (MASC) based on a Tabu search procedure with two
neighborhoods and a multi-parent crossover operator. Computational results show that
MASC achieved competitive results in comparison with five state-of-the-art algorithms
and reported 15 new upper bounds. In 2016, (Jin & Hao, 2016) elaborated a Hybrid
Evolutionary Search Algorithm (HESA). The latter is based on two crossover operators,
an iterated double-phase Tabu search procedure and an updating procedure to guide the
choice of the offspring for the next generation. HESA obtained the best-known results
for most of the tested instances. Moreover, the approach improved the upper and lower bounds for 51 instances out of 94. Recently, in 2017, the work of (Harrabi, Fatnassi, Bouziri, & Chaouachi, 2017) showed that it is possible to solve the sum coloring of graphs using a bi-objective Memetic Algorithm (MA). The approach is mainly based on a bi-objective Vector Evaluated Genetic Algorithm (VEGA) and a simple Tabu search method during the mutation step. Experimental results show the high quality of both the upper and lower bound solutions.
We refer to (Wang et al., 2013) for an excellent overview of the sum coloring problem.
To have a global view on the literature review, we propose in Figure 1 a cartography of
all the aforementioned works.

Figure 1. Cartography of exact/heuristic approaches related to the sum coloring problem

1.2. Scope and objective


Analyzing the conducted research, several shortcomings could be identified:
• Although highly effective, Integer Linear Programming (ILP) has been intensively proposed in the literature on the tightly related graph coloring problem (Campêlo, Campos, & Corrêa, 2008; Hansen, Labbé, & Schindl, 2009; Mehrotra, 1992; Mehrotra & Trick, 1996). However, few ILP models were proposed to tackle the sum
coloring problem (Furini, Malaguti, Martin, & Ternier, 2018; Wang et al., 2013).
Moreover, it is worth noting that the computational performance of these models
was evaluated using only small graphs.
• Due to the inherent computational complexity of the studied problem, many researchers have invested their efforts in solving the problem approximately using heuristic techniques. Although approximate approaches are prominent, it is still unclear which class of search methods is best suited to the sum coloring problem. In fact, following the literature review, no single algorithm can be identified that beats all the others in terms of both solution quality and computational time.
• In general, heuristic approaches require considerable effort to design adaptive mechanisms that alter operator choices and/or their parameters. This tuning process is usually time-consuming and can strongly influence the performance of such methods.

To alleviate such limits, our research work focuses mainly on:

(1) Investigating the possibility of solving the NP-hard sum coloring problem via three
new linear programming models: Penalty Function Formulation (PFF), Box Con-
strained Formulation (BCF) and Weight-Based Formulation (WBF). Additionally,
this work conducts an intense experimental study with up to 32 instances and tries
to find optimal solutions for instances found to be intractable in (Wang et al.,
2013).
(2) Designing a general modeling methodology with effective features which uses lim-
ited problem knowledge or specific information to control the search process. The
hyper-heuristic paradigm seems to be the way forward since this technique attempts to find
the right method or sequence of heuristics at each decision point rather than trying
to directly solve the problem.
(3) Saving a significant amount of effort by automating parameter tuning and control. To that aim, the key features of our proposed hyper-heuristic method are distinguished. One of its strengths is exploiting the current specificity of the search space to better guide the search direction. Therefore, adopting several intensification and diversification operators could be extremely advantageous to maintain the balance between "exploration" and "exploitation".

The remainder of this research effort is structured as follows. Section 2 provides the
background of the sum coloring problem. In the subsequent sections, we present the
proposed mathematical models, provide a detailed description of the developed hyper-
heuristic approach, followed by a presentation of experimental results. Finally, we provide
some concluding remarks.

2. The sum coloring problem statement

The sum coloring problem could be modeled using a simple undirected graph G=(V, E)
where V is the set of vertices (|V | = n) and E the set of edges (|E| = m). Given a
set of k colors {1, . . . , k}, the classical graph coloring problem aims to minimize the
total number of colors assigned to vertices so that two connected vertices are colored


differently. In this case, the obtained coloring is called proper and the resulting minimum number of colors k is the chromatic number, denoted by χ(G). A proper k-coloring of a graph is a mapping C : V → {1, . . . , k} such that C(x) ≠ C(y), ∀(x, y) ∈ E. However, when (x, y) ∈ E and C(x) = C(y), the vertices x and y are termed conflicting vertices, (x, y) is a conflicting edge and the obtained coloring is non-proper. Equivalently, a k-coloring can be considered as a partition of V into k color classes, i.e. disjoint independent sets {V1, . . . , Vk}. A color class Vi is formally defined as a subset of vertices having the same color label, and its cardinality |Vi| corresponds to the number of vertices it contains. The sum coloring problem under
study seeks to find a proper k-coloring of G using natural numbers such that the following
sum of colors is minimized:
f(c) = Σ_{j=1}^{k} (j · |V_j|)    (1)

where |V_j| is the cardinality of the color class V_j and k ≥ χ(G). This minimal sum is called the chromatic sum and is usually denoted by Σ(G). The number of colors used to obtain the chromatic sum Σ(G) is called the strength s(G) of the graph.
Although the sum coloring problem is a variant of the classic graph coloring problem, it
has a different objective function. For better illustration, we provide an example
in Figure 2.

Figure 2. The relation between graph coloring problem and sum coloring problem (Salavatipour,
2003)

The graph has a chromatic number χ(G) = 2 (left figure). With the given 2-coloring,
we achieve a sub-optimal sum of 12. However, this graph requires 3 colors to achieve
its chromatic sum which is equal to 11 (right figure). Clearly, we can see that χ(G) is a
lower bound of s(G), i.e. χ(G) ≤ s(G).
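
To make Eq. (1) concrete in code, the following C++ sketch (ours, not part of the original study) evaluates the sum of colors of a given color assignment; it relies on the fact that summing j · |V_j| over the color classes equals summing the color values over the vertices. For the 3-coloring of Figure 2 it would return 11, and 12 for the 2-coloring.

#include <vector>

// Sum-coloring objective of Eq. (1): since every vertex v contributes exactly
// the value of its color, sum_{j=1..k} j * |V_j| equals the sum of color[v]
// over all vertices (colors are encoded as natural numbers 1..k).
long long sumOfColors(const std::vector<int>& color) {
    long long total = 0;
    for (int c : color) total += c;
    return total;
}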

3. Box Constrained Formulation proposal for the sum coloring problem (BCF)

The box constrained optimization method has a wealth of pertinence in modeling many
existing problems (Birgin, Chambouleyron, & Martínez, 1999; Ciarlet, 1978; Glunt, Hay-
den, & Raydan, 1993; Housh, Ostfeld, & Shamir, 2012; Moré & Toraldo, 1991). In the
same context, we propose a novel Box Constrained Formulation (BCF) to handle the
sum coloring problem. We state in what follows the variables used for the development
of the proposed mathematical model.


• V: the set of vertices.
• E: the set of edges.
• K: the set of colors.
• c_k: the value of color k.
• y_ij = 1 if edge (i, j) violates the proper coloring, ∀(i, j) ∈ E; 0 otherwise.
Our BCF model uses the following binary decision variable:
• x_ik = 1 if vertex i is colored with color k, ∀k ∈ K, ∀i ∈ V; 0 otherwise.

The optimization technique considers minimizing the sum of colors assigned to different
vertices while defining a limiting interval for the constraints counting the number of
conflicting edges. Therefore, we address a BCF model for the sum coloring problem as
follows:

Minimize Σ_{i∈V} Σ_{k∈K} c_k x_ik    (2)

subject to:

Σ_{k∈K} x_ik = 1    ∀i ∈ V    (3)

y_ij ≥ x_ik + x_jk − 1    ∀k ∈ K, ∀(i, j) ∈ E    (4)

B1 ≤ Σ_{(i,j)∈E} y_ij < B2    (5)

x_ik ∈ {0, 1}    ∀k ∈ K, ∀i ∈ V    (6)

y_ij ∈ {0, 1}    ∀(i, j) ∈ E    (7)

The objective (2) is to minimize the total sum of colors. Constraints (3) ensure that each vertex is colored exactly once. Constraints (4) indicate whether the solution is proper or not. Through constraint (5), we restrict the search to explore only feasible regions using two bounds B1 and B2¹. Rationally, the upper bound B2 on the sum of the variables y_ij must be set to 1 in order to guide the search process (forcing the number of conflicting edges to zero) without requiring extra computation time. Constraints (6) and (7) state that the decision variables x_ik and y_ij are binary-valued.

¹ B1 ≤ Σ_{(i,j)∈E} y_ij ≤ B2 leads to proper solutions but was more time consuming than (5).
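
For illustration, the following C++ sketch (ours) shows one possible way to set up the BCF model (2)-(7) with the CPLEX Concert Technology API, which is the solver environment used in Section 7; all identifiers are illustrative, and the strict inequality of (5) is encoded as "≤ B2 − 1", which is equivalent because the left-hand side is integral.

#include <ilcplex/ilocplex.h>
#include <utility>
#include <vector>
ILOSTLBEGIN

// Sketch of the BCF model for a graph with n vertices, edge list E and K colors;
// B1 = 0 and B2 = 1 as recommended in the text. Returns the objective value or
// -1 if no solution was found within the time limit.
double solveBCF(int n, const std::vector<std::pair<int,int>>& E, int K,
                int B1 = 0, int B2 = 1) {
    IloEnv env;
    double value = -1.0;
    try {
        IloModel model(env);

        // x[i][k] = 1 iff vertex i receives color k (colors are 1..K, c_k = k).
        IloArray<IloBoolVarArray> x(env, n);
        for (int i = 0; i < n; ++i) x[i] = IloBoolVarArray(env, K);
        // y[e] = 1 iff edge e is conflicting.
        IloBoolVarArray y(env, (IloInt)E.size());

        // Objective (2): minimize the total sum of colors.
        IloExpr obj(env);
        for (int i = 0; i < n; ++i)
            for (int k = 0; k < K; ++k) obj += (k + 1) * x[i][k];
        model.add(IloMinimize(env, obj));
        obj.end();

        // Constraints (3): each vertex gets exactly one color.
        for (int i = 0; i < n; ++i) {
            IloExpr s(env);
            for (int k = 0; k < K; ++k) s += x[i][k];
            model.add(s == 1);
            s.end();
        }

        // Constraints (4): y flags conflicting edges.
        for (std::size_t e = 0; e < E.size(); ++e)
            for (int k = 0; k < K; ++k)
                model.add(y[e] >= x[E[e].first][k] + x[E[e].second][k] - 1);

        // Constraint (5): B1 <= sum of conflicts < B2; the strict "<" becomes
        // "<= B2 - 1" because the sum is integer-valued.
        IloExpr conf(env);
        for (std::size_t e = 0; e < E.size(); ++e) conf += y[e];
        model.add(conf >= B1);
        model.add(conf <= B2 - 1);
        conf.end();

        IloCplex cplex(model);
        cplex.setParam(IloCplex::TiLim, 3600);   // 1-hour limit as in Section 7
        if (cplex.solve()) value = cplex.getObjValue();
    } catch (IloException&) { /* handle/report solver errors */ }
    env.end();
    return value;
}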


4. Penalty Function Formulation proposal for the sum coloring problem (PFF)

An efficient alternative for tackling hard constrained optimization problems is to use penalty function methods (Coello, 2000). These methods consist mainly of relaxing a constrained problem using a modified objective function. The latter helps to obtain feasible solutions and avoids straying far from feasible regions. Accordingly, it
could be interesting to propose a Penalty Function Formulation (PFF) for the sum col-
oring problem. In our context, we adopt an exterior penalty function which penalizes
infeasible solutions. For our proposed PFF model, we use the following notations to
formulate the problem.
• V: the set of vertices.
• E: the set of edges.
• K: the set of colors.
• c_k: the value of color k.
• σ: a penalty coefficient imposed for the violation of constraints.
The decision variables are binary and defined as follows:
• x_ik = 1 if vertex i is colored with color k, ∀k ∈ K, ∀i ∈ V; 0 otherwise.
• y_ij = 1 if edge (i, j) violates the proper coloring, ∀(i, j) ∈ E; 0 otherwise.

The constrained sum coloring problem can thus be considered as an unconstrained optimization problem:

Minimize Σ_{i∈V} Σ_{k∈K} c_k x_ik + σ Σ_{(i,j)∈E} y_ij    (8)

subject to:

Σ_{k∈K} x_ik = 1    ∀i ∈ V    (9)

y_ij ≥ x_ik + x_jk − 1    ∀k ∈ K, ∀(i, j) ∈ E    (10)

x_ik ∈ {0, 1}    ∀k ∈ K, ∀i ∈ V    (11)

y_ij ∈ {0, 1}    ∀(i, j) ∈ E    (12)


The objective (8) is to minimize the total sum of colors; the term σ Σ_{(i,j)∈E} y_ij represents the penalty. Constraints (9) ensure that each vertex is colored exactly once. Constraints (10) indicate whether the solution is proper or not. Finally, constraints (11) and (12) state that the decision variables x_ik and y_ij are binary-valued.


In practice, a lot of difficulties arise from tuning the value of σ. Authors in (Richardson,
Palmer, Liepins, & Hilliard, 1989) suppose that σ is based on the expected cost to repair
the solution. For the sum coloring problem, the value of σ must be chosen carefully to ensure that proper solutions are obtained.
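
As a small illustration of the exterior penalty (ours, not the authors' code), the following function evaluates the penalized objective of Eq. (8) for a candidate color assignment; σ is passed as a parameter so that the sensitivity discussed above can be experimented with.

#include <utility>
#include <vector>

// Exterior-penalty objective of Eq. (8): sum of colors plus sigma times the
// number of conflicting edges. With a sufficiently large sigma, any improper
// coloring is dominated by every proper one.
double penalizedObjective(const std::vector<int>& color,                 // color[v] in {1..k}
                          const std::vector<std::pair<int,int>>& edges,  // edge list
                          double sigma) {
    double sumColors = 0.0;
    for (int c : color) sumColors += c;
    int conflicts = 0;
    for (const auto& e : edges)
        if (color[e.first] == color[e.second]) ++conflicts;
    return sumColors + sigma * conflicts;
}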

5. Weight-Based Formulation proposal for the sum coloring problem (WBF)

The weighted sum method has a prominent place in addressing several optimization problems (Yuan, Ling, Gao, & Cao, 2014; Zhao, Wu, & Yan, 1989). Formally, these methods involve the notion of weights, which can be considered as gauges of the importance of each objective function. In our context, it is promising to consider a Weight-Based Formulation (WBF) to solve the sum coloring problem. The motive behind using such a model is the opportunity of restricting the search to only feasible solutions and driving it toward the optimum using appropriate weights. We describe in what follows our proposed WBF mathematical model. Let us first state the considered variables:
• V: the set of vertices.
• E: the set of edges.
• K: the set of colors.
• c_k: the value of color k.
• w1: a scalar weight assigned to the first objective.
• w2: a scalar weight assigned to the second objective.
To develop the WBF model, the following binary decision variables are required:
• x_ik = 1 if vertex i is colored with color k, ∀k ∈ K, ∀i ∈ V; 0 otherwise.
• y_ij = 1 if edge (i, j) violates the proper coloring, ∀(i, j) ∈ E; 0 otherwise.

The sum coloring problem can be recast as a Weight-Based Formulation as follows:

Minimize w1 Σ_{i∈V} Σ_{k∈K} c_k x_ik    (13)

Minimize w2 Σ_{(i,j)∈E} y_ij    (14)

s.t.  Σ_{k∈K} x_ik = 1    ∀i ∈ V    (15)

y_ij ≥ x_ik + x_jk − 1    ∀k ∈ K, ∀(i, j) ∈ E    (16)

x_ik ∈ {0, 1}    ∀k ∈ K, ∀i ∈ V    (17)

y_ij ∈ {0, 1}    ∀(i, j) ∈ E    (18)

Constraints (15) ensure that each vertex is colored only once. Constraints (16) indicate
if the solution is proper or not. Finally, constraints (17) and (18) state that the decision
variables xik and yij are binary-valued.

Normalization of the WBF


To ensure the uniform distribution of the weight factors in the objective, it was required
to normalize the scalar objectives (13 and 14). In our context, we adopt the following
scaling method for the normalization of our objectives (Marler & Arora, 2005):

F_j(x) = ( w_j / sqrt( Σ_{j∈J} β_ij² ) ) · f_j(x)    (19)

where:
• w_j is the weight assigned to term j in the objective, for each j ∈ J.
• β_ij is the coefficient of term j in response function i, for each j ∈ J and i ∈ I.
• f_i(x) is the objective function i, for each i ∈ I.
Subsequently, the objective functions (13) and (14) are converted to their normal form as follows:

Minimize ( w1 / sqrt( Σ_{j∈J} β_1j² ) ) Σ_{i∈V} Σ_{k∈K} c_k x_ik + ( w2 / sqrt( Σ_{j∈J} β_2j² ) ) Σ_{(i,j)∈E} y_ij    (20)
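
A minimal sketch (ours) of the scaling factor used in Eq. (19): each weight is divided by the Euclidean norm of the coefficients of the corresponding response function before multiplying the objective value.

#include <cmath>
#include <vector>

// Scaling factor of Eq. (19): w_j / sqrt(sum_j beta_ij^2). The normalized
// objective is this factor multiplied by the original objective value f_j(x).
double normalizedWeight(double w, const std::vector<double>& betaRow) {
    double norm = 0.0;
    for (double b : betaRow) norm += b * b;
    return w / std::sqrt(norm);
}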

6. An online learning Hyper-Heuristic approach for the sum coloring problem: HHA-SCP

A recent trend in search and optimization, hyper-heuristics, aims at automating the de-
sign of heuristic methods in order to raise the generality level (E. Burke et al., 2003;
E. K. Burke et al., 2010). Interestingly, these techniques require only limited problem information when controlling the search process. Indeed, the required bespoke problem-
specific methods are encapsulated in a pool of low-level heuristics. Specifically, in com-
binatorial optimization field, the term hyper-heuristic was employed as ”heuristics to
choose heuristics” (Cowling, Kendall, & Soubeiga, 2000). Interested readers can refer to
(E. K. Burke et al., 2013) for a detailed and recent survey of conducted hyper-heuristic
methods. In the same paper, authors discuss many relevant research problems and iden-
tify a number of promising application domains of hyper-heuristics: Production schedul-
ing (Garcı́a-Villoria, Salhi, Corominas, & Pastor, 2011), Educational timetabling (Sabar,


Ayob, Qu, & Kendall, 2012), 1D Packing (Marı́n-Blázquez & Schulenburg, 2007), 2D
cutting and packing (López-Camacho, Terashima-Marı́n, Ross, & Valenzuela-Rendón,
2010), Vehicle routing (Garrido & Riff, 2010), etc.
Different classifications of hyper-heuristics have been distinguished from two distinct perspectives (E. K. Burke et al., 2010). In our context, we handle the sum coloring
problem using an online learning Hyper-Heuristic Approach (HHA-SCP). A key moti-
vating goal for this class of hyper-heuristics is the challenge of automating the design
and tuning of heuristics while incorporating some learning mechanisms during the search
process. Moreover, there are no training instances to provide general rules during the res-
olution. Such a strategy could considerably influence the performance of the autonomous
design of algorithms when dealing with hard computational problems.
Our proposed HHA-SCP dynamically incorporates some rules to change the preference
toward each heuristic based on the current specificity of the search space. For this pur-
pose, HHA-SCP selects at each generation the best-performing low-level heuristic that
improves the solutions. To maintain the balance between exploration and exploitation,
our approach relies on three well-known algorithms: Genetic Algorithm (GA), Simulated
Annealing (SA) and Iterated Local Search (ILS). In fact, GAs have been considered as
the most powerful method to generate diversified solutions in significant ways (Sivanan-
dam & Deepa, 2007). On the other hand, intensification is ensured using two local search methods: ILS and SA. Each of the cited algorithms is executed according to a probability. In what follows, we briefly detail the specifics of these algorithms.

6.1. Search space and evaluation function


The search space explored by HHA-SCP is the set of both proper and improper k-
colorings. A k-coloring is proper when the endpoints of every edge (x, y) ∈ E are colored differently. Otherwise, the k-coloring is non-proper. The objective value of a proper coloring is given by the function f defined by Eq. (1). A proper coloring c1 is better than a proper coloring c2 if f(c1) < f(c2). For a non-proper k-coloring ci, we elaborate the following score function Fs:

F_s(c_i) = f(c_i) + e^{|E_{c_i}|/|V|}    (21)

where E_{c_i} is the set of conflicting edges induced by the coloring c_i and |V| is the number of the graph's vertices. Accordingly, this score function penalizes non-proper solutions.
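
The evaluation used by HHA-SCP can be summarized by the following C++ sketch (ours): proper colorings are scored by f of Eq. (1), non-proper ones by Fs of Eq. (21); the data layout (one color per vertex plus an edge list) is an assumption.

#include <cmath>
#include <utility>
#include <vector>

// Score of a candidate k-coloring: the sum-of-colors objective f of Eq. (1),
// plus the exponential term of Eq. (21) when the coloring is non-proper.
double score(const std::vector<int>& color,                      // color[v] in {1..k}
             const std::vector<std::pair<int,int>>& edges,
             int nVertices) {
    double f = 0.0;
    for (int c : color) f += c;                   // f(c) of Eq. (1)
    int conflicting = 0;
    for (const auto& e : edges)
        if (color[e.first] == color[e.second]) ++conflicting;
    if (conflicting == 0) return f;               // proper coloring: plain objective
    return f + std::exp(static_cast<double>(conflicting) / nVertices);  // Eq. (21)
}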

6.2. Genetic algorithm


The main reason for the interest of these techniques is their ability to generate differ-
ent solutions in order to better explore the search solution space. GAs have also been
recognized as a performing technique that examines unvisited regions over different gen-
erations. To address these issues, genetic algorithms use several operators inspired by
evolutionary biology. Algorithm 1 summarizes the outline of GA.
The obtained k-coloring solutions are evaluated using the score function (Eq. (21)).
This evolutionary search aims to minimize the sum of colors in a set of k-colorings.
For this purpose, GA starts with an initial population randomly generated. Then, un-


Algorithm 1 Pseudo-code of the genetic algorithm procedure


1: Input: A graph G, a population of solutions (Pop)
2: Output: The best found solution (Best-Indiv)
3: Begin
4: While a maximum number of iterations is not reached do
5:   For each Individual in the population do
6:     (parent1, parent2) ←− Selection(Pop);
7:     Offspring ←− Crossover(parent1, parent2);
8:     Offspring' ←− Mutation(Offspring);
9:     Evaluate(Offspring');
10:    X ←− Get-Worst(Pop);
11:    ∆ ←− F(Offspring') − F(X);
12:    If ∆ < 0 then
13:      Insert(Offspring', Pop, X);
14:    End if
15:  End for
16: End while
17: Best-Indiv ←− Best-Individual(Pop);
18: Return Best-Indiv
19: End

til a maximum number of generations is reached, GA performs different evolutionary


steps. Firstly, two solutions are selected using the binary tournament selection. The lat-
ter experimentally performs better than a random selection or roulette-wheel selection
(Moukrim et al., 2014). Then, the proposed GA relies on a crossover operator to improve
the solution by exchanging information contained in the current selected parents. Mo-
tivated by the achieved results in (Bouziri & Harrabi, 2013), we have adopted the Sum
Partition Crossover operator (SPX). The mutation operator is thereafter used to improve the offspring resulting from the crossover step. In our context, we adapt the hill-climbing neighborhood tested in (Bouly, Dang, & Moukrim, 2010), as it performs well for the sum coloring problem. The latter randomly chooses a vertex v colored with C(v). Then, it moves v to the color class of minimum cardinality that creates no conflict. Finally, GA inserts the obtained solution into the population if it is better than the worst one. The rationale behind this rule is to prevent a low-quality offspring from participating in the population updating mechanism. A sketch of this mutation move is given below.
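
A sketch (ours) of the hill-climbing mutation move described above: a random vertex is moved to the conflict-free color class of minimum cardinality; the adjacency-list representation and helper names are assumptions.

#include <cstdlib>
#include <vector>

// Hill-climbing mutation: pick a random vertex v and move it to the color class
// of minimum cardinality among the classes that create no conflict with v's
// neighbours (colors assumed in {1..k}). If no such class exists, v is left unchanged.
void hillClimbingMutation(std::vector<int>& color,
                          const std::vector<std::vector<int>>& adj,   // adjacency lists
                          int k) {
    int n = static_cast<int>(color.size());
    int v = std::rand() % n;

    // Cardinality of every color class.
    std::vector<int> card(k + 1, 0);
    for (int c : color) ++card[c];

    // Colors already used by the neighbours of v are forbidden.
    std::vector<bool> forbidden(k + 1, false);
    for (int u : adj[v]) forbidden[color[u]] = true;

    int best = -1;
    for (int c = 1; c <= k; ++c)
        if (!forbidden[c] && (best == -1 || card[c] < card[best])) best = c;

    if (best != -1) color[v] = best;   // conflict-free move to the smallest class
}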

6.3. Simulated annealing approach


Simulated Annealing (SA) is an effective local search method which is able to intensify the search process using modified random ascent moves (Kirkpatrick, Gelatt, Vecchi, et al., 1983). Interestingly, among many existing metaheuristics, this technique is able to escape from local minima (Lin, Vincent, & Lu, 2011). For this purpose, SA accepts not only better solutions, but also worse solutions with a probability exp(−∆/T) stipulated by the Metropolis criterion (Metropolis, Rosenbluth, Rosenbluth, Teller, & Teller, 1953). By adopting this strategy, SA acquires a high capability of intensifying the search around the current solution. On the other hand, we chose the simulated annealing approach precisely because it has been successfully applied to the closely related graph coloring problem (Chams, Hertz, & De Werra, 1987; Johnson, Aragon, McGeoch, & Schevon, 1991). Details of the simulated annealing algorithm are presented in Algorithm 2.

Algorithm 2 Pseudo-code of the simulated annealing procedure


Input: A graph G, T0: initial temperature, α: cooling ratio, N: epoch length.
Output: a solution S
Begin
T ←− T0;
Repeat
    Choose a neighborhood operator and generate S' ∈ N(S);
    ∆ ←− f(S') − f(S);
    If (∆ < 0) then
        S ←− S';
    else
        p(∆) = exp(−∆/T);
        y = RAND(0,1);
        If (y < p(∆)) then S ←− S';
        end if
    end if
    T ←− αT
Until (N is reached)
End

Basically, SA algorithm performs N generations of neighboring solutions. Firstly, it


starts from an initial solution with an initial temperature. Then, a random neighbor
solution is chosen. To escape from getting trapped into local minima, SA accepts not
only better solutions, but also worse solutions with a probability according to the state
of the system. Note that we use the same evaluation function reported in Eq.(21).

6.3.1. Neighboring operators


Two neighborhood structures were used during the search process:
(1) Destroy/repair method: we remove d selected conflicting vertices from the solution (d is randomly chosen from {1, . . . , k}). Then, we reconstruct the solution in an iterated way: each removed vertex is inserted into the color class of maximum cardinality that creates no conflict. If no such color class exists, the vertex is inserted into a new color class and the number of colors used k is incremented (k ←− k + 1). This process is iterated until all removed vertices are assigned to a color class.
(2) Swap operator: this operator gives priority to conflicting vertices. Basically, it randomly chooses two conflicting vertices in the solution and randomly replaces their colors by i ∈ {1, . . . , k + 1}. If no conflicts are found, the operator randomly chooses a vertex and replaces its color by j ∈ {1, . . . , k}. A sketch of this operator is given after this list.
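
The swap operator of item (2) can be sketched as follows (ours); whether one or both endpoints of the chosen conflicting edge are recolored is our interpretation of the description.

#include <algorithm>
#include <cstdlib>
#include <utility>
#include <vector>

// Swap operator: if conflicting edges exist, pick one at random and recolor its
// endpoints with random colors in {1,...,k+1}; otherwise pick any vertex and
// recolor it within {1,...,k}. Returns the number of colors currently in use.
int swapOperator(std::vector<int>& color,
                 const std::vector<std::pair<int,int>>& edges,
                 int k) {
    std::vector<std::size_t> conflicting;             // indices of conflicting edges
    for (std::size_t e = 0; e < edges.size(); ++e)
        if (color[edges[e].first] == color[edges[e].second])
            conflicting.push_back(e);

    if (!conflicting.empty()) {
        const auto& e = edges[conflicting[std::rand() % conflicting.size()]];
        color[e.first]  = 1 + std::rand() % (k + 1);   // colors may reach k+1
        color[e.second] = 1 + std::rand() % (k + 1);
    } else {
        int v = std::rand() % static_cast<int>(color.size());
        color[v] = 1 + std::rand() % k;                // stay within the current k colors
    }
    return *std::max_element(color.begin(), color.end());
}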

6.4. Iterated local search


Iterated local search (ILS) is a well-known local search technique which was successfully
adapted to tackle various difficult optimization problems (Lourenço, Martin, & Stutzle,


2003). In fact, despite its conceptual simplicity, the approach shows high capabilities in
intensifying the search without using too much problem-specific knowledge. ILS is mainly based on two components: a local search operator and a perturbation operator. Basically, ILS builds a sequence of solutions generated in an iterated way according to a neighborhood structure. This procedure gives ILS the opportunity of intensifying the search around a given solution. Then, ILS repeats the search from another starting point and adopts for this purpose a perturbation mechanism. In our model, we choose to apply the ILS
approach in cooperation with the simulated annealing in order to intensify the search
space as much as possible. More details of the adapted ILS are given in Algorithm 3.

Algorithm 3 Pseudo-code of the iterated local search procedure


Input: Initial solution S0
Output: Best local solution Sbest
Begin
S ←− S0;
Sbest ←− S0;
Repeat
    S' ←− Local search(S);
    Sneighbor ←− Perturbation(S');
    If [F(Sneighbor) ≤ F(S)] Then S ←− Sneighbor; Sbest ←− Sneighbor;
Until (a maximum number of iterations is reached)
Return (Sbest);
End

All obtained solutions are evaluated using the function presented in Eq.(21). To explore
the neighborhood of the initial k-coloring solution, our ILS performs using two moves:
(1) Critical one-move (v, i): it considers only a conflicting vertex v and replaces its current color c(v) by i ∈ {1, . . . , k + 1}.
(2) Perturbation operator (v, α): we have adopted a randomized perturbation, since a deterministic one may lead to short cycles. The operator chooses d vertices (d ∈ {1, . . . , n}) and randomly changes their colors to values α ∈ {1, . . . , k + 1}; a sketch is given below.
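
A sketch (ours) of the randomized perturbation of item (2): d randomly chosen vertices receive fresh random colors from {1, . . . , k + 1}.

#include <cstdlib>
#include <vector>

// Randomized perturbation: choose d vertices at random and replace their colors
// by random values in {1,...,k+1}. Randomness is used to avoid the short cycles
// that a deterministic rule may cause.
void perturb(std::vector<int>& color, int d, int k) {
    int n = static_cast<int>(color.size());
    for (int t = 0; t < d; ++t) {
        int v = std::rand() % n;
        color[v] = 1 + std::rand() % (k + 1);
    }
}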

6.5. The proposed resolution approach


Our proposed HHA-SCP approach starts with a randomly sized initial population generated using k colors. To manage the population well, one usually must control and balance two important features: quality and diversity. Accordingly, HHA-SCP considers the current specifics of the population in order to decide the next direction of the search process. For example, if the population is diversified, an intensification operator (ILS or SA) is needed to improve the quality of the obtained solutions. Otherwise, a GA must be applied in order to maintain the balance between these two features. After the processing of a chosen search operator, if the stopping criterion (maximum number of iterations) is reached and the best found solution is proper, then the latter is recorded. Otherwise, our algorithm increments the number of used colors
k and performs an online learning strategy to update the parameters and choose the next
promising search direction. For a better illustration of the general framework, we provide
in Figure 3 the flowchart of our HHA-SCP. To ensure the performance of our HHA-SCP,
three main features are considered:
(1) A new strategy of the parameters initialization.


Figure 3. Flowchart of the proposed HHA-SCP for the sum coloring problem


(2) An online learning strategy based on the specificity of the current search space.
(3) A parameter-updating procedure which considers the results of the previous itera-
tion.
More details about the different features of our proposed hyper-heuristic approach are
discussed in the subsequent sections.

6.5.1. Parameter initialization


Different parameters can considerably influence the performance of a proposed resolution
approach. Choosing appropriate parameters is one of the most persistent and challenging tasks (Smit & Eiben, 2010). In practice, most studies adopt fixed parameters with reference to previous similar works. However, a significant way to improve the performance of a given algorithm is to iterate the search process from different starting positions. In the same context, for each iteration, we propose a dynamic initialization with random values of six parameters: PGA, PILS, PSA, Pop-Size, Max-Iter and Diver(%). As the parameters have a huge number of possible values, we propose three levels
of values: low, medium and high (Table 1).
Table 1. Designation of the parameters and their range levels
Parameter Designation Low Medium High
PGA         Probability of genetic algorithm                0.3         0.4          0.6
PILS        Probability of iterated local search            0.2         0.4          0.6
PSA         Probability of simulated annealing              0.2         0.4          0.6
Pop-Size    The size of the population                      20          60           100
Max-Iter    The maximum number of iterations                50          90           150
Diver(%)    The percentage of diversity in the population   [0, 40%[    [40%, 70%[   [70%, 100%]

6.5.2. An online learning strategy


As previously mentioned, our proposed HHA-SCP adopts an online learning strategy
which considers the current features of the population: quality and diversity. How can
we estimate these two factors? And what is precisely their impact on the search direc-
tion? We rationally investigate a diversity function in order to detect the specificity of
the current search space.
The proposed diversity function measures the percentage of diversity between all indi-
viduals in the population. Basically, if Diver(%) is more than 70%, this means that the
population is diversified. In this case, one needs to perform an intensification operator.
However, if Diver(%) is less than 40%, the search needs a diversification technique. To estimate the diversity, we assume that the search space is metric and propose the following distance D. Considering two colorings Ci and Cj, Dij is the number of vertices that receive different colors in Ci and Cj.

D_ij = |{v ∈ V : C_i(v) ≠ C_j(v)}|    (22)

The pseudo code of the diversity function is described in Algorithm 4.


The proposed function tests the diversity between two individuals based on the distance
Dij. If the latter is higher than ⌊n/2⌋, then the individuals are diversified and the function
increments the counter. The last line reports the percentage of diversity in the current
population.


Algorithm 4 Diversity Function


Input: pop: population of size Pop-Size, n: the number of vertices of the graph G
Output: The percentage of diversity in the population Diver(%)
Begin
nbr-diver ←− 0;
nbr-diver-max ←− Determine-nbr-diver-max(pop, Pop-Size);
/* This function returns the number of diverse pairs in the case of 100% diversity */
For i ←− 0 To Pop-Size − 1 do
    For j ←− i + 1 To Pop-Size − 1 do
        Dij ←− Distance(indiv_i, indiv_j);
        If Dij > ⌊n/2⌋ Then
            nbr-diver ←− nbr-diver + 1;
        End If
    End For
End For
Diver(%) ←− (nbr-diver / nbr-diver-max) * 100
End
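
For completeness, the distance of Eq. (22) and the diversity percentage computed by Algorithm 4 can be written compactly in C++ as follows (ours); we assume, as in Algorithm 4, that nbr-diver-max is the total number of individual pairs.

#include <vector>

// Distance of Eq. (22): number of vertices receiving different colors in the
// two colorings.
int colorDistance(const std::vector<int>& ci, const std::vector<int>& cj) {
    int d = 0;
    for (std::size_t v = 0; v < ci.size(); ++v)
        if (ci[v] != cj[v]) ++d;
    return d;
}

// Diver(%): percentage of individual pairs whose distance exceeds floor(n/2),
// where n is the number of vertices of the graph.
double diversityPercentage(const std::vector<std::vector<int>>& pop, int n) {
    int pairs = 0, diverse = 0;
    for (std::size_t i = 0; i < pop.size(); ++i)
        for (std::size_t j = i + 1; j < pop.size(); ++j) {
            ++pairs;
            if (colorDistance(pop[i], pop[j]) > n / 2) ++diverse;
        }
    return pairs == 0 ? 0.0 : 100.0 * diverse / pairs;
}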

6.5.3. A parameter-updating procedure


In our work, we fix three levels of parameter values: low, medium and high. Accordingly, we propose a procedure to update the parameter values efficiently. To this
end, three scenarios were considered for our model. These are summarized in Algorithm
5.
Scenario 1: If the percentage of diversity of the current population Diver(%) is low, then our algorithm adopts the following parameters in order to improve the diversification feature:
(1) PGA increases to the high level;
(2) PILS and PSA decrease to the low level;
(3) Pop-Size and Max-Iter increase to the high level.
Scenario 2: If the percentage of diversity of the current population Diver(%) is high, then our algorithm performs the following steps:
(1) PILS and PSA increase to the high level;
(2) PGA, Pop-Size and Max-Iter decrease to the low level.
Scenario 3: If the percentage of diversity of the current population Diver(%) is medium, then our algorithm operates as follows:
(1) PILS, PSA and PGA change to the medium level;
(2) Pop-Size and Max-Iter remain unchanged.
Nevertheless, in the case that the parameter values are already at the high level, they
remain unchanged.

7. Experimental study

All of the proposed resolution approaches were coded in C++. Specifically, we opted for CPLEX 12.6 to solve the mathematical formulations. Computational experiments were conducted on an Intel i3 processor at 2.53 GHz with 4 GB of available memory. The presented
running times are expressed in seconds. The formulations operated under a time limit of


Algorithm 5 A parameter-updating procedure


Input: pop: population, Diver(%)
Output: pop, PGA, PILS, PSA
Begin
Diver(%) ←− Diversity-function(pop);
If (Diver(%) ∈ Low-level(Diver(%))) Then
    PGA ←− High-level(PGA)
    PILS ←− Low-level(PILS)
    PSA ←− Low-level(PSA)
    Max-Iter ←− High-level(Max-Iter)
    Pop-Size ←− High-level(Pop-Size)
Else If (Diver(%) ∈ High-level(Diver(%))) Then
    PGA ←− Low-level(PGA)
    PILS ←− High-level(PILS)
    PSA ←− High-level(PSA)
    Max-Iter ←− Low-level(Max-Iter)
    Pop-Size ←− Low-level(Pop-Size)
Else
    PGA ←− Medium-level(PGA)
    PILS ←− Medium-level(PILS)
    PSA ←− Medium-level(PSA)
End If
End

3600 seconds.

7.1. Benchmark instances


The test-bed we have used consists of a set of graphs commonly used to report computational results for the sum coloring problem. Some of these instances come from the second DIMACS challenge² while the others are part of the COLOR 2002-2004 competitions³. The tested instances cover various topologies and densities:
(1) Random graphs DSJCn.d; n ∈ {125, 250, 500, 1000}, d ∈ {1; 5; 9};
(2) Leighton graphs (le450-χa, le450-χb, le450-χc, le450-χd, χ ∈ {15; 25});
(3) Graphs from Donald Knuth's Stanford GraphBase:
• miles n with n ∈ {250; 500; 750; 1000; 1500},
• queen a.a, a ∈ {5; 6; 7; 8; 9; 10; 11; 12},
• others: anna, david, huck, jean, games120, zeroin.i.3, mulsol.i.1, mulsol.i.2, mulsol.i.3, mulsol.i.4 and mulsol.i.5, ...
(4) Graphs based on the Mycielski transformation (myciel a, a ∈ {3; 4; 5; 6; 7}).

7.2. Used metrics


To evaluate the efficiency of our three tested mathematical formulations and the
proposed hyper-heuristic approach, we used two different performance measures:

2 http://dimacs.rutgers.edu/Challenges/
3 http://mat.gsia.cmu.edu/COLOR02


• The average relative percentage deviation (ARPD) from the best obtained solution, measured as follows:

ARPD = [ (Sol_formulation − Sol*_formulation) / Sol*_formulation ] * 100

where Sol*_formulation is the best value obtained using the three formulations implemented in this paper.

• The average relative Gap above the obtained solution, defined by:

Gap = [ (Sol − f^b_LB) / f^b_LB ] * 100

where f^b_LB is the current best known lower bound for the sum coloring problem (Jin, Hamiez, & Hao, 2017).

7.3. Evaluation of the proposed formulations


In this section, we investigate obtaining exact solutions for the sum coloring problem. To that end, we compare the proposed mathematical formulations. Beforehand, it is necessary to estimate the appropriate parameters for the proposed weight-based formulation (WBF).

7.3.1. Parameter tuning for the weight-based formulation (WBF)


Our WBF model raises some challenges: how do we express our preferences in terms of weights? What is an effective way to select the parameters in order to guarantee the performance of our model? To this end, it is strongly recommended to carry out parameter tuning for the proposed model. Subsequently, several parameters must be
carefully chosen:
• w1: the weight assigned to the minimization of the sum of colors.
• w2: the weight assigned to the minimization of the number of conflicting edges.
• β11: the coefficient of term 1 in response function 1 (minimization of the sum of colors).
• β22: the coefficient of term 2 in response function 2 (minimization of the number of conflicting edges).
Experimental simulations led us to fix β11 = β22 = 0.1⁴. On the other hand, one must cleverly generate appropriate weight parameters to efficiently allocate the WBF formulation efforts. To ensure guidance towards feasible solutions, we tested different weighting coefficients that reflect more preference toward minimizing the number of conflicting edges rather than minimizing the sum of colors. For this purpose, several combinations were tried:
• VERSION I: [w1 = 0.1; w2 = 0.9].
• VERSION II: [w1 = 0.2; w2 = 0.8].
• VERSION III: [w1 = 0.3; w2 = 0.7].
• VERSION IV: [w1 = 0.4; w2 = 0.6].
Table 2 reports the computational statistics of the different WBF versions over 11 tested
instances. Columns 2-3 indicate the characteristics of each tested instance, mainly |V|: the number of nodes and |E|: the number of edges. For each tested version, we report the
value of the obtained solution Sol, the average relative percentage deviation ARPD(%)
and the running time t (in seconds).

⁴ These values were determined empirically according to our knowledge of the problem.

Table 2. Results of parameter tuning for the weight-based formulation (WBF)
Instances     |V|    |E|    VERSION I                 VERSION II               VERSION III              VERSION IV
                            Sol      ARPD(%)   t      Sol    ARPD(%)   t      Sol    ARPD(%)   t       Sol    ARPD(%)   t
myciel3       11     20     21∗      0         0      -      -         -      -      -         -       -      -         -
myciel4       23     71     45∗      0         0      -      -         -      -      -         -       -      -         -
myciel5       47     236    93∗      0         0      -      -         -      -      -         -       -      -         -
myciel6       95     755    189∗     0         0      -      -         -      -      -         -       -      -         -
huck          74     301    243∗     0         0      -      -         -      -      -         -       -      -         -
jean          80     254    217∗     0         0      -      -         -      -      -         -       -      -         -
queen5.5      25     160    75∗      0         0      -      -         -      -      -         -       -      -         -
Mulsol.i.1    197    3925   1957∗    0         0      -      -         -      -      -         -       -      -         -
Mulsol.i.5    186    3973   1160∗    0         0      -      -         -      -      -         -       -      -         -
zeroin.i.2    211    3541   1004∗    0         0      -      -         -      -      -         -       -      -         -
games120      120    638    443∗     0         0      -      -         -      -      -         -       -      -         -
Average       -      -      -        0         0      -      -         -      -      -         -       -      -         -
#Solved                     11/11                     0/11                    0/11                     0/11

From Table 2, we note a similar behavior of VERSIONS II, III and IV. Indeed, on the same test bed, these three versions terminated abnormally due to excessive memory requirements. However, VERSION I succeeded in solving all the tested graphs. This favors the first configuration. With this configuration, the proposed formulation performs remarkably well, being able to solve all the tested instances with negligible CPU effort. Moderately sized instances having up to 211 nodes and 3541 edges were quickly solved to optimality. To the best of our knowledge, the exact solutions of some of these instances have never been previously reported in the literature. At this point, it is instructive to favor the first configuration [w1 = 0.1; w2 = 0.9], since VERSION I quickly yielded exact solutions for the 11 tested instances.

7.3.2. Comparison of different formulations


In order to assess the practical performance of the proposed formulations, we provide a
reliable basis for comparison on a set of 32 instances. For our proposed PFF formulation,
some simulation tests were conducted to fix the related parameter values. Indeed, the effort required to obtain proper solutions can rapidly increase as the graph becomes more complicated. Therefore, we set σ = e^{|V|}. The results are displayed in Table 3. In the same table, we report for comparison the exact solutions delivered by the integer linear programming (ILP) model of (Wang et al., 2013).
At this point, it is worth noting that the test-bed used by all the previously stated exact
methods is not sufficient to completely carry out a consistent comparison.
In this table, we display for each instance the best known solution (BKS) from the
literature (Jin et al., 2017). We also report for each model the delivered solution value
Sol within the 1-h time limit, the average relative percentage deviation ARPD(%) from
the best obtained solution and the running time t (in seconds).

Table 3. Comparisons of different formulations for the sum coloring problem. The dash ”-” symbol indicates that the related statistics are not available.
Instances |V | |E| BKS BCF PFF WBF ILP
Sol ARPD(%) t Sol ARPD(%) t Sol ARPD(%) t Sol ARPD(%) t
Le450-25a 450 8260 3153 3127∗ 0 3500 - - - - - - - - -
inithx.i.2 645 13979 2050 2050∗ 0 139.222 - - - - - - - - -
inithx.i.3 621 13969 1986 1986∗ 0 119.417 - - - - - - - - -
mulsol.i.1 197 3925 1957 1957∗ 0 12.452 - - - 1957∗ 0 0 - - -
mulsol.i.2 188 3885 1191 1191∗ 0 22.989 - - - - - - - - -
mulsol.i.3 184 3916 1187 1187∗ 0 19.098 1187∗ 0 730.084 - - - - - -
mulsol.i.4 185 3946 1189 1189∗ 0 20.464 1189∗ 0 672.411 - - - - - -
mulsol.i.5 186 3973 1160 1160∗ 0 20.893 1160∗ 0 619.649 1160∗ 0 0 - - -
zeroin.i.1 211 4100 1822 1822∗ 0 14.104 1822∗ 0 803.875 - - - - - -
zeroin.i.2 211 3541 1004 1004∗ 0 8.802 1004∗ 0 494.127 1004∗ 0 0 - - -
zeroin.i.3 206 3540 998 998∗ 0 8.271 998∗ 0 461.440 - - - - - -
fpsol2.i.2 451 8691 1668 1668∗ 0 61.4 - - - - - - - - -
fpsol2.i.3 425 8688 1636 1636∗ 0 47.179 - - - - - - - - -
myciel3 11 20 21 21∗ 0 0.100 21∗ 0 0.170 21∗ 0 0 21∗ 0 -
myciel4 23 71 45 45∗ 0 0.650 45∗ 0 1.420 45∗ 0 0 45∗ 0 -
myciel5 47 236 93 93∗ 0 2.574 93∗ 0 7.511 93∗ 0 0 93∗ 0 -
myciel6 95 755 189 189∗ 0 160.774 - - - 189∗ 0 0 189∗ 0 -
anna 138 493 276 276∗ 0 0.989 - - - - - - 276∗ 0 -
david 87 406 237 237∗ 0 1.971 - - - - - - 237∗ 0 -
huck 74 301 243 243∗ 0 0.956 243∗ 0 7.981 243∗ 0 0 243∗ 0 -
jean 80 254 217 217∗ 0 2.100 217∗ 0 9.071 217∗ 0 0 217∗ 0 -
queen5.5 25 160 75 75∗ 0 1.287 - - - 75∗ 0 0 75∗ 0 -
queen6.6 36 290 138 138∗ 0 42.001 138∗ 0 57.996 - - - 138∗ 0 -
queen7.7 49 476 196 196∗ 0 347.802 196∗ 0 562.028 - - - 196∗ 0 -
queen8.8 64 728 291 291∗ 0 > 3600 - - - - - - 291∗ 0 -
queen8.12 96 1368 624 624∗ 0 53.726 624∗ 0 109.858 - - - - - -
miles250 128 387 325 325∗ 0 10.467 - - - - - - 325∗ 0 -
miles500 128 1170 705 705∗ 0 63.449 - - - - - - 705∗ 0 -
miles750 128 2113 1173 1173∗ 0 285.061 - - - - - - - - -
miles1000 128 3216 1666 1666∗ 0 3400 - - - - - - - - -
miles1500 128 5198 3354 3354∗ 0 142.238 - - - - - - - - -
games120 120 638 443 443∗ 0 15.849 443∗ 0 46.232 443∗ 0 0 443∗ 0 -
#Solved - - - 32/32 15/32 11/32 15/32

Looking at Table 3, we can see that the proposed formulations delivered exact solutions for most of the tested instances. More precisely, the results clearly show that the BCF formulation strictly outperforms all the other formulations, since it successfully delivered exact solutions for all the tested instances. However, the PFF, WBF and ILP formulations output only 15, 11 and 15 solutions respectively (see Figure 4). Furthermore, one notes that 17 instances remain unsolved after reaching the 1-h CPU time limit using the ILP model. At this point, it is worth recalling that these instances were found to be intractable in the literature when using an exact resolution approach. In terms of computational speed, we observed that WBF, although extremely fast, exhibits an erratic behavior since it fails to solve most of the tested instances. In terms of solution quality, the ARPD(%) suggests the robustness of the BCF formulation. Indeed, for all instances, it yields the best solution while requiring moderate CPU time, and the largest instances, involving 645 nodes and 13979 edges, were solved in about 3 minutes of CPU time. Moreover, the performance of the BCF was made evident by the 5 new optimal solutions it delivered. Among these solved instances, one notes the class of Leighton graphs, known to be hard to color. The performance of the BCF can be explained by the way it handles the hard constraint of the problem through the strict upper bound B2 = 1 on the number of conflicting edges. From this perspective, the BCF formulation reduces as much as possible the number of generated constraints. Therefore, a significant computational burden is also lessened.

Figure 4. Comparisons of different formulations for the sum coloring problem.

Next, we focus on analyzing our computational findings through some well known
statistical tests. In this vein, we specifically study the relation that could exist between
the size of the tested instances and the computational time of the different mathematical
formulations. In other words, we attempt to test the influence of the number of nodes
on the performance of the tested mathematical models. For this purpose, a correlation
test was conducted between the number of nodes and the average computational time
of the mathematical formulations. Such a test is generally used to observe the statistical
relationships between two or more data values or variables. Firstly, we assess normality of
data to decide whether parametric or non-parametric tests are necessary. In this context,
we perform three different normality tests: the Kolmogorov-Smirnov normality test, the
Shapiro-Wilk normality test and the d’Agostino-Pearson test. Normality tests showed
that the data were not normally distributed. Therefore, we used the Spearman correlation
test. Results of this test are displayed in Table 4.


Table 4. Results of the Spearman correlation test.


Statistic                                    BCF      PFF
r                                            0.320    0.616
P (two-tailed)                               0.073    0.001
Significant? (alpha = 0.05)                  No       Yes

The latter shows that there is a strong correlation between these two variables for the
PFF formulation. This statistically proves the significant relation between the number of
the instances' nodes and the average computational time. However, the performance of the BCF formulation depends only slightly on the instances' characteristics. Indeed, the statistical test shows no significant correlation between the considered variables. Together with the statistical tests, this confirms the efficacy of the BCF formulation in tackling the sum coloring problem.

7.4. Evaluation of the HHA-SCP approach


This section has two major objectives. We firstly exhibit the influence of the hyper-
heuristic main features. Then, we evaluate the performance of the HHA-SCP method
and compare the output upper bounds with some recent performing algorithms.

7.4.1. Analysis of the proposed hyper-heuristic approach


Our hyper-heuristic algorithm combines the use of three well-known search operators.
It is therefore relevant to show the impact of such a cooperation. Moreover, we investigate the impact of the online learning strategy of the proposed HHA-SCP. The test-bed we have used consists of 16 graphs reputed to be hard to color.

To show the impact of the cooperation feature, we first compare the results of HHA-SCP against each of the three algorithms used separately: GA, SA and ILS. During these experiments, we ran all the algorithms 10 times to compute upper bounds for the sum coloring problem. As previously mentioned, we initialize HHA-SCP by choosing a random level for each parameter from Table 1. Sensitivity analysis led to the following parameter settings:
• The population sizes of GA and HHA-SCP are identical (to ensure the same computational effort for these approaches).
• The crossover probability for GA and HHA-SCP was set to 0.9.
• The mutation probability for GA and HHA-SCP was set to 0.2.
• The cooling ratio α of SA was randomly chosen from [0.8, . . . , 0.99].
• The initial temperature of SA was set to √n, as suggested by (Chams et al., 1987) (where n is the number of vertices).
The results of the comparisons are displayed in Table 5. For each algorithm, we report the best upper bound f*_UB, the relative percentage gap Gap(%) and the CPU time t required to compute the upper bound.

Table 5. Comparative results of HHA-SCP, GA, SA and ILS algorithms.
Instances        f^b_UB     HHA-SCP                   GA                        SA                        ILS
                            f*_UB   Gap(%)   t        f*_UB   Gap(%)   t        f*_UB   Gap(%)   t        f*_UB   Gap(%)   t
DSJC125.5 1012 1012 0.843 5 1017 0.852 7 1020 0.857 5 1020 0.857 6
DSJC125.9 2503 2503 0.480 7 2520 0.490 15 2529 0.495 10 2610 0.543 9
DSJC250.5 3210 3211 1.494 27 3240 1.517 25 3255 1.529 20 3378 1.624 27
DSJC250.9 8277 8277 0.919 8 8280 0.920 15 8294 0.923 8 8370 0.941 18
DSJC500.1 2836 2841 1.272 37 2860 1.288 28 2900 1.32 12 3045 1.436 15
DSJC500.5 10886 10890 2.725 55 10905 2.730 30 11107 2.799 40 11186 2.826 38
DSJC500.9 29862 29910 1.706 45 29937 1.708 40 29956 1.710 31.5 30110 1.724 33
DSJC1000.1 8991 9000 2.258 99 9100 2.294 110 9170 2.320 51 9272 2.356 53
DSJC1000.5 37575 37596 4.604 25 37687 4.618 35 38561 4.748 30 38720 4.772 30

DSJC1000.9 103445 103464 2.895 63 103480 2.896 65 104490 2.934 61 104671 2.941 60
flat300.20-0 3150 3150 1.057 50 3257 1.127 25 3261 1.129 27 3258 1.128 28
flat300.26-0 3966 3966 1.562 2 4015 1.593 7 4019 1.596 1 4020 1.596 2
flat300.28-0 4238 4289 1.772 133 5495 2.552 125 5570 2.600 102 5526 2.572 100
flat1000.50-0 25500 25513 2.865 10 26780 3.056 18 26900 3.075 7 27000 3.090 8
flat1000.60-0 30100 30100 3.533 12 32160 3.843 16 32280 3.861 10 32390 3.878 10
flat1000.76-0 37164 37168 4.604 22.5 35235 4.614 40 38600 4.820 20 38620 4.823 35
Average - - 2.162 37.531 - 2.237 37.562 - 2.295 27.218 - 2.319 29.5

Table 6. Results of the comparison between HHA-SCP, GA, SA and ILS based on the Kruskal-Wallis test
                                               Comparison of Gap    Comparison of computational time
P value                                        0.943                0.777
Exact or approximate P value                   Approximate          Approximate
Are means significantly different (P < 0.05)   Yes                  Yes
Kruskal-Wallis statistic                       0.381                1.100

From Table 5, it is clear that HHA-SCP outperforms all the tested approaches.
More precisely, HHA-SCP delivers the best upper bound for every graph (results
indicated in bold), and for 5 cases out of 16 it matches the best known upper
bounds (underlined entries). Indeed, the average Gap of HHA-SCP is only 2.162%,
compared to 2.237% for GA, 2.295% for SA and 2.319% for ILS. Notably, despite its
more elaborate composition, HHA-SCP achieves these results within moderate
computational times. In particular, compared to the well-known GA algorithm,
HHA-SCP is even slightly faster, requiring 37.531 minutes on average against
37.562 minutes.
All the reported observations should be confirmed by statistical tests. We first
checked the normality of the data using the same normality tests as before; the
results show that the data are not normally distributed. We therefore applied the
Kruskal-Wallis test (a minimal computational sketch is shown below) and report the
results in Table 6.
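As an illustration of how such a test can be computed, the snippet below applies SciPy's kruskal function to the Gap(%) values of the first four instances of Table 5; the paper does not state which statistical software was actually used, so this is only a sketch under that assumption.

    from scipy import stats

    # Gap(%) of HHA-SCP, GA, SA and ILS on the first four instances of Table 5.
    hha = [0.843, 0.480, 1.494, 0.919]
    ga  = [0.852, 0.490, 1.517, 0.920]
    sa  = [0.857, 0.495, 1.529, 0.923]
    ils = [0.857, 0.543, 1.624, 0.941]

    stat, p = stats.kruskal(hha, ga, sa, ils)
    print(f"Kruskal-Wallis H = {stat:.3f}, p = {p:.3f}")
    # A p-value below 0.05 would indicate a statistically significant difference.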
Table 6 shows that HHA-SCP improves all the results. In terms of computational
speed, the statistical comparison also indicates that HHA-SCP outperforms the
genetic algorithm. In summary, the use of different cooperative approaches gives
the HHA-SCP method the ability to control the search process by executing the
most suitable algorithm at the appropriate time.

Pushing our analysis a step further, we seek better insight into the efficiency
of the online learning strategy. For this purpose, we compare HHA-SCP with its
counterpart without this strategy (static scheme). Moreover, we also compare it
with a Master-Slave Parallel Evolutionary approach (MSPE). MSPE operates on a
single population that is randomly divided into 3 fractions (sub-populations).
Each fraction is then assigned by the master process to one slave process, on
which a genetic algorithm, a local search approach or simulated annealing is run.
While the stopping criterion (a maximum number of iterations) is not reached, the
master process selects the best solution found in each slave process's population
and reintegrates it for reproduction. For more details, one can refer to
(Abbasian & Mouhoub, 2013), where the authors proposed a similar approach for the
closely related graph coloring problem.
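To make the master-slave organisation concrete, the following toy Python sketch mimics the master loop described above. Solutions are plain numbers and ga_step, ls_step and sa_step are hypothetical placeholders run sequentially instead of on slave processes; it is a structural sketch, not the authors' implementation.

    import random

    rng = random.Random(0)

    # Hypothetical placeholder "slaves": the real MSPE runs a genetic algorithm,
    # a local search and simulated annealing on separate slave processes.
    def ga_step(subpop): return [s - rng.random() for s in subpop]
    def ls_step(subpop): return [s - rng.random() for s in subpop]
    def sa_step(subpop): return [s - rng.random() for s in subpop]

    def mspe_master(population, max_iter=10):
        slaves = [ga_step, ls_step, sa_step]
        for _ in range(max_iter):
            # The master randomly splits the population into three fractions.
            rng.shuffle(population)
            k = len(population) // 3
            fractions = [population[:k], population[k:2 * k], population[2 * k:]]
            # Each fraction is handed to one slave (sequential here, for clarity).
            fractions = [slave(frac) for slave, frac in zip(slaves, fractions)]
            # The master keeps the best solution of each slave's sub-population
            # (lower is better in this toy minimisation) and reinjects it,
            # replacing the worst individuals, before the next round.
            bests = [min(frac) for frac in fractions]
            population = sorted(s for frac in fractions for s in frac)
            population[-len(bests):] = bests
        return min(population)

    print(mspe_master([rng.uniform(0, 100) for _ in range(30)]))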
For a fair comparison, all MSPE parameters are set to the values adopted for the
static scheme. Each method was run 30 times. The stopping condition is a time
limit of 2 hours, except for some hard graphs for which additional time was
allowed; such a cutoff time is commonly used in the literature (Jin & Hao, 2016).
For the static scheme without the adaptive strategy, we considered the same
selection probability for each algorithm (i.e. each algorithm has a probability
equal to 0.4; a small sketch of this selection rule is given below). Furthermore,
we use the medium levels of the parameters Pop-size and Max-iter for the static
scheme and MSPE.
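For illustration only, the static selection rule can be sketched as follows. The heuristic names are assumed to be the low-level components of HHA-SCP, and random.choices normalises the relative weights, so a weight of 0.4 for each of the three heuristics yields equal selection probabilities.

    import random

    rng = random.Random(0)
    low_level_heuristics = ["genetic_algorithm", "simulated_annealing", "iterated_local_search"]

    # Static scheme: at every decision point, one heuristic is drawn with a fixed
    # probability (0.4 for each, as stated above; the weights are normalised).
    chosen = rng.choices(low_level_heuristics, weights=[0.4, 0.4, 0.4], k=1)[0]
    print(chosen)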
The results are summarized in Table 7, which provides the same information as
Table 5. Examining this table, one observes that the proposed approach keeps its
robust performance even in its static and parallel variants. In fact, all of the
tested approaches match the best known results for 4 of the graphs (underlined
entries).

Table 7. Comparative results of HHA-SCP, HHA-SCP (static scheme) and MSPE approach.

Instance        f^b_UB    HHA-SCP                  HHA-SCP (static scheme)  MSPE
                          f*_UB   Gap(%)  t        f*_UB   Gap(%)  t        f*_UB   Gap(%)  t
DSJC125.5         1012      1012   0.843    5        1017   0.852    8        1015   0.848    6
DSJC125.9         2503      2503   0.480    7        2503   0.480   10        2503   0.480    8.5
DSJC250.5         3210      3211   1.494   27        3230   1.509   30        3233   1.512   29
DSJC250.9         8277      8277   0.919    8        8277   0.919   10        8277   0.919   10
DSJC500.1         2836      2841   1.272   37        2854   1.283   42        2850   1.280   42
DSJC500.5        10886     10890   2.725   55       10897   2.728   61       10896   2.727   59
DSJC500.9        29862     29910   1.706   45       29927   1.707   52       29920   1.706   50.5
DSJC1000.1        8991      9000   2.258   99        9980   2.613   98        9156   2.314   99
DSJC1000.5       37575     37596   4.604   25       37643   4.611   32       37629   4.609   30
DSJC1000.9      103445    103464   2.895   63      103488   2.896   65      103479   2.896   65
flat300.20-0      3150      3150   1.057   50        3240   1.116   50        3156   1.061   51
flat300.26-0      3966      3966   1.562    2        3966   1.562    1        3966   1.562    2
flat300.28-0      4238      4289   1.772  133        5334   2.447  138        5310   2.432  135
flat1000.50-0    25500     25513   2.865   10       26410   3.000   15       26209   2.970   12
flat1000.60-0    30100     30100   3.533   12       30100   3.533   18       30100   3.533   14
flat1000.76-0    37164     37168   4.604   22.5     37214   4.611   37       37209   4.610   30
Average              -         -   2.162   37.531       -   2.242   41.687       -   2.216   40.187

In particular, for the very large instances involving 1000 vertices, the static
scheme and the parallel version of HHA-SCP successfully reach the best known
results. Nevertheless, the computational results provide evidence that HHA-SCP
outperforms both its static scheme and the parallel MSPE approach in terms of Gap
and CPU time. More precisely, HHA-SCP achieves the best average Gap of 2.162%
while requiring only 37.531 minutes, compared to 41.687 minutes for the static
scheme and 40.187 minutes for the MSPE algorithm.

Looking at the same table, a further observation is that the MSPE algorithm
outperforms the static scheme while requiring a lower CPU effort t: MSPE needs
only 40.187 minutes on average to reach an average Gap of 2.216%, compared to
41.687 minutes and an average Gap of 2.242% for the static scheme of HHA-SCP.
A global view of the computational time required by the compared algorithms is
given in Figure 5.

Figure 5. The CPU computational time comparison of the HHA-SCP, static scheme and MSPE algorithms.


7.4.2. Comparative study


To evaluate the performance of the proposed HHA-SCP, we compare the obtained
upper bounds with those reported by the best-performing recent approximate
algorithms in the literature, namely HESA (Jin & Hao, 2016), EXSCOL (Wu & Hao,
2012), MA-MSC (Moukrim et al., 2014) and MASC (Jin et al., 2014). The results are
summarized in Table 8. For each solution approach, we report: f*_UB, the best
upper bound obtained by the reference algorithm; Gap(%), the relative percentage
deviation of the obtained upper bound; and t, the CPU time in minutes required to
compute f*_UB.

Table 8. Results of the comparison analysis for the upper bounds. The dash "-" symbol indicates that the related statistics are not available.

Instance        f^b_UB    HESA                    EXSCOL                  MA-MSC                  MASC                    HHA-SCP
                          f*_UB   Gap(%)  t       f*_UB   Gap(%)  t       f*_UB   Gap(%)  t       f*_UB   Gap(%)  t       f*_UB   Gap(%)  t
DSJC125.1          326       326   0.319    5.2      326   0.319    1       326   0.319   10        326   0.319    4.4      320   0.295   12
DSJC125.5         1012      1012   0.843   10.1     1017   0.852    1      1013   0.845    5       1012   0.843    3.5     1012   0.843    5
DSJC125.9         2503      2503   0.481    0.3     2512   0.487    1      2503   0.481    2       2503   0.481    1.9     2503   0.480    7
DSJC250.1          970       970   0.704   30.7      985   0.731    4       983   0.727   42        974   0.711   17.3      970   0.701   17
DSJC250.5         3210      3210   1.507   47.1     3246   1.535    6      3214   1.510   26       3230   1.523   23.1     3211   1.494   27
DSJC250.9         8277      8277   0.934   24.6     8286   0.936    7      8277   0.934   33       8280   0.935    5.6     8277   0.919    8
DSJC500.1         2836      2836   1.268   82.6     2850   1.280    9      2897   1.317    4       2940   1.352   50.4     2841   1.272   37
DSJC500.5        10886     10886   2.726   97      10910   2.735   11     11082   2.793   42      11101   2.800  202.5    10890   2.725   55
DSJC500.9        29862     29862   1.744   95.4    29912   1.749   15     29995   1.756   51      29994   1.756   90.9    29910   1.706   45
DSJC1000.1        8991      8991   2.255  101.6     9003   2.259   28      9188   2.326   31       8995   2.256   70       9000   2.258   99
DSJC1000.5       37575     37575   4.601   33.5    37598   4.604   24     38421   4.727   23      37594   4.604  200      37596   4.604   25
DSJC1000.9      103445    103445   2.895  103.1   103464   2.895   27    105234   2.962   61     103464   2.895  125     103464   2.895   63
Le450-15a         2632      2634   0.130   91.5     2632   0.130    5      2681   0.151   19       2706   0.161   41       2630   0.128   43
Le450-15b         2632      2632   0.120   89.9     2642   0.125    7      2690   0.145   19       2724   0.160   40       2642   0.125   40
Le450-15c         3487      3487   0.344   86.7     3866   0.490    6      3943   0.520    6       3491   0.346   45       3570   0.367   42
Le450-15d         3505      3505   0.336   82.7     3921   0.495    5      3926   0.497    3       3506   0.337   59       3610   0.373   58
Le450-25a         3153      3153   0.051   88.5     3153   0.049    7      3178   0.058    5       3166   0.054   39       3160   0.052   38
Le450-25b         3365      3365   0.018   88.6     3366   0.018    6      3379   0.022    7       3366   0.018   40       3366   0.018   61
Le450-25c         4515      4553   0.251   84.8     4515   0.241    8      4648   0.277   16       4700   0.291   25       4550   0.244   45
Le450-25d         4544      4569   0.235   92.4     4544   0.229    7      4696   0.270    3       4722   0.277   27       4550   0.230   89
flat300.20-0      3150      3150   1.066    0       3150   1.066    3      3150   1.066    0       3150   1.066    0       3150   1.057   50
flat300.26-0      3966      3966   1.582    0.4     3966   1.582    3      3966   1.582    0       3966   1.582    0       3966   1.562    2
flat300.28-0      4238      4260   1.764   49.7     4282   1.778    3      4261   1.765   28       4238   1.750   22       4289   1.772  133
flat1000.50-0    25500     25500   2.863    0.3    25500   2.863    9     25500   2.863   28      25500   2.863    0      25513   2.865   10
flat1000.60-0    30100     30100   3.533    2.7    30100   3.533   11     30100   3.533   16      30100   3.533  114      30100   3.533   12
flat1000.76-0    37164     37164   4.603   36.8    37167   4.604   19     38213   4.761    8      37167   4.604    1      37168   4.604   22.5
myciel3             21        21   0.312    0         21   0.312    1        21   0.312    0         21   0.312    0         21   0.312    0
myciel4             45        45   0.323    0         45   0.323    1        45   0.323    0         45   0.323    0         45   0.323    0
myciel5             93        93   0.328    0         93   0.328    1        93   0.328    0         93   0.328    0         93   0.328    0
myciel6            189       189   0.330    0        189   0.330    2       189   0.330    0        189   0.330    0        189   0.330    0
myciel7            381       381   0.332    0        381   0.332    2       381   0.332    0        381   0.332    1        381   0.332    0
anna               276       276   0.010    0.2      283   0.036    2       276   0.010    1        276   0.010    0        276   0.010    1
david              237       237   0.012    0.1      237   0.012    1       237   0.012    1        237   0.012    0        237   0.012    0.5
huck               243       243   0        0        243   0        1       243   0        0        243   0        0        243   0        1.5
jean               217       217   0.004    0        217   0.004    1       217   0.004    0        217   0.004    0        217   0.004    0
queen5.5            75        75   0       47.8       75   0        1        75   0        0         75   0        0         75   0        0
queen6.6           138       138   0.095    0        150   0.190    1       138   0.095    0        138   0.095    1        138   0.095    0.5
queen7.7           196       196   0        0        196   0        1       196   0        0        196   0        0        196   0        0
queen8.8           291       291   0.010    0        291   0.010    1       291   0.010    0        291   0.010   12        291   0.010    0.8
miles250           325       325   0.022    0.1      328   0.031    2       325   0.022    8        325   0.022    0        325   0.022    1
miles500           705       705   0.027    0        709   0.033    2       708   0.032   12        705   0.027    1        705   0.027    4
games120           443       443   0.002    0.5      443   0.002    2       443   0.002    0        443   0.002    0        443   0.002    1.5
Average              -         -   0.928   35.116      -   0.941    6.071     -   0.953   12.142      -   0.936   30.061      -   0.927   25.15

Looking at Table 8, we can draw the following conclusions:


• All compared methods exhibit very similar performance. More precisely, the
  average Gaps are 0.928% for HESA, 0.941% for EXSCOL, 0.953% for MA-MSC,
  0.936% for MASC and 0.927% for HHA-SCP. Moreover, HHA-SCP outperforms all the
  cited algorithms on 2 instances, for which it obtains improved upper bounds
  (entries in bold).
• Compared to the most recent hybrid algorithm, HESA, HHA-SCP proves efficient:
  it yields better or equal bounds for 25 cases out of 42.
• To assess the performance of HHA-SCP on large graphs (with at least 500
  vertices), the natural reference is the well-known EXSCOL algorithm, which was
  specifically designed for such instances. Compared to EXSCOL, our approach
  behaves favorably and delivers an average Gap of 0.927% against 0.941%.
We further confirm these observations using statistical tests. For this purpose,
we apply the Kruskal-Wallis test; the results are presented in Table 9.
Table 9. Results of the Kruskal-Wallis tests

                                               Gap(%)        CPU time
P value                                        0.9998        0.4561
Exact or approximate P value                   Approximate   Approximate
Are means significantly different (P < 0.05)   No            Yes
Kruskal-Wallis statistic                       0.04377       3.646

From the statistical tests, we can conclude that, in terms of Gap, the obtained
upper bounds are not significantly different, whereas the computational times of
the cited algorithms are significantly different. Overall, the statistical
comparisons provide evidence that HHA-SCP performs remarkably well.
Interestingly, our proposed approach solves large instances within a modest
computing time, although it requires a longer CPU time than the EXSCOL algorithm.

8. Conclusion

In the present paper, the minimum sum coloring problem is addressed. This problem
commonly arises in many domains such as scheduling and resource allocation. New
exact solutions were reported, and a box-constrained formulation was shown to
dominate both a penalty-based and a weight-based formulation. Handling the hard
constraints through bound limits that restrict the search process allows the
box-constrained formulation to avoid a significant computational burden.
Moreover, experimental results on hard benchmark instances yield 5 new optimal
solutions for instances with up to 645 nodes and 13979 edges. At this point, it
is worth recalling that the performance of the ILP-based formulations may
slightly depend on some instance features (e.g. number of nodes, edges, etc.).
A second part of our research efforts focused on developing a generally
applicable search methodology, namely a hyper-heuristic, to alleviate the limits
of some heuristic approaches. In this regard, the proposed HHA-SCP incorporates
different effective elements whose relevance for the performance of HHA-SCP was
experimentally evaluated. Compared to the most recent state-of-the-art
algorithms, HHA-SCP improved two best known upper bounds within a modest CPU
effort. Indeed, despite its elaborate composition, HHA-SCP was shown to yield
tight upper bounds quickly, with significant reductions of the gap between the
reported upper bounds and the best known lower bounds. The obtained results
strongly motivate future research on designing other effective exact algorithms
such as Branch-and-Price. In this vein, it could be instructive to strengthen the
efficiency of such a method using heuristics to tackle the intractable instances.
Another practical application of the sum coloring problem worth future
investigation is the optimization of classroom scheduling, where the weight of a
color represents faculty preferences.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of
this paper.

References

Abbasian, R., & Mouhoub, M. (2013). A hierarchical parallel genetic approach for the graph
coloring problem. Applied intelligence, 39 (3), 510–528.
Bar-Noy, A., Bellare, M., Halldórsson, M. M., Shachnai, H., & Tamir, T. (1998). On chromatic
sums and distributed resource allocation. Information and Computation, 140 (2), 183–202.
Bar-Noy, A., Halldórsson, M. M., Kortsarz, G., Salman, R., & Shachnai, H. (2000). Sum multi-
coloring of graphs. Journal of Algorithms, 37 (2), 422–450.
Bar-Noy, A., & Kortsarz, G. (1998). Minimum color sum of bipartite graphs. Journal of
Algorithms, 28 (2), 339–365.
Benlic, U., & Hao, J.-K. (2012). A study of breakout local search for the minimum sum coloring
problem. In Seal (pp. 128–137).
Berliner, A., Bostelmann, U., Brualdi, R. A., & Deaett, L. (2006). Sum list coloring graphs.
Graphs and Combinatorics, 22 (2), 173–183.
Birgin, E. G., Chambouleyron, I., & Martínez, J. M. (1999). Estimation of the optical constants
and the thickness of thin films using unconstrained optimization. Journal of Computational
Physics, 151 (2), 862–880.
Bonomo, F., Duran, G., Napoli, A., & Valencia-Pabon, M. (2015). A one-to-one correspondence
between potential solutions of the cluster deletion problem and the minimum sum coloring
problem, and its application to p4-sparse graphs. Information Processing Letters, 115 (6),
600–603.
Bouly, H., Dang, D.-C., & Moukrim, A. (2010). A memetic algorithm for the team orienteering
problem. 4OR: A Quarterly Journal of Operations Research, 8 (1), 49–70.
Bouziri, H., & Harrabi, O. (2013). Behavior study of genetic operators for the minimum sum
coloring problem. In Modeling, simulation and applied optimization (icmsao), 2013 5th
international conference on (pp. 1–6).
Bouziri, H., & Jouini, M. (2010). A tabu search approach for the sum coloring problem. Electronic
Notes in Discrete Mathematics, 36 , 915–922.
Brélaz, D. (1979). New methods to color the vertices of a graph. Communications of the ACM ,
22 (4), 251–256.
Burke, E., Kendall, G., Newall, J., Hart, E., Ross, P., & Schulenburg, S. (2003). Hyper-heuristics:
An emerging direction in modern search technology. Handbook of metaheuristics, 457–474.
Burke, E. K., Gendreau, M., Hyde, M., Kendall, G., Ochoa, G., Özcan, E., & Qu, R. (2013).
Hyper-heuristics: A survey of the state of the art. Journal of the Operational Research
Society, 64 (12), 1695–1724.


Burke, E. K., Hyde, M., Kendall, G., Ochoa, G., Özcan, E., & Woodward, J. R. (2010). A
classification of hyper-heuristic approaches. In Handbook of metaheuristics (pp. 449–468).
Springer.
Campêlo, M., Campos, V. A., & Corrêa, R. C. (2008). On the asymmetric representatives
formulation for the vertex coloring problem. Discrete Applied Mathematics, 156 (7), 1097–
1111.
Chams, M., Hertz, A., & De Werra, D. (1987). Some experiments with simulated annealing for
coloring graphs. European Journal of Operational Research, 32 (2), 260–266.
Ciarlet, P. (1978). The finite element method for elliptic problems. Amsterdam: North-Holland.
Coello, C. A. C. (2000). Use of a self-adaptive penalty approach for engineering optimization
problems. Computers in Industry, 41 (2), 113–127.
Cowling, P., Kendall, G., & Soubeiga, E. (2000). A hyperheuristic approach to scheduling a sales
summit. In International conference on the practice and theory of automated timetabling
(pp. 176–190).
Furini, F., Malaguti, E., Martin, S., & Ternier, I.-C. (2018). Ilp models and column generation
for the minimum sum coloring problem. Electronic Notes in Discrete Mathematics, 64 ,
215–224.
Garcı́a-Villoria, A., Salhi, S., Corominas, A., & Pastor, R. (2011). Hyper-heuristic approaches for
the response time variability problem. European Journal of Operational Research, 211 (1),
160–169.
Garrido, P., & Riff, M. C. (2010). Dvrp: a hard dynamic combinatorial optimisation problem
tackled by an evolutionary hyper-heuristic. Journal of Heuristics, 16 (6), 795–834.
Glunt, W., Hayden, T. L., & Raydan, M. (1993). Molecular conformations from distance matrices.
Journal of Computational Chemistry, 14 (1), 114–120.
Hansen, P., Labbé, M., & Schindl, D. (2009). Set covering and packing formulations of graph
coloring: algorithms and first polyhedral results. Discrete Optimization, 6 (2), 135–147.
Harrabi, O., Fatnassi, E., Bouziri, H., & Chaouachi, J. (2017). A bi-objective memetic algorithm
proposal for solving the minimum sum coloring problem. In Proceedings of the genetic and
evolutionary computation conference companion (pp. 27–28).
Helmar, A., & Chiarandini, M. (2011). A local search heuristic for chromatic sum. In Proceedings
of the 9th metaheuristics international conference (Vol. 1101, pp. 161–170).
Housh, M., Ostfeld, A., & Shamir, U. (2012). Box-constrained optimization methodology and its
application for a water supply system model. Journal of Water Resources Planning and
Management, 138 (6), 651–659.
Jin, Y., Hamiez, J.-P., & Hao, J.-K. (2017). Algorithms for the minimum sum coloring problem:
a review. Artificial Intelligence Review , 47 (3), 367–394.
Jin, Y., & Hao, J.-K. (2016). Hybrid evolutionary search for the minimum sum coloring problem
of graphs. Information Sciences, 352 , 15–34.
Jin, Y., Hao, J.-K., & Hamiez, J.-P. (2014). A memetic algorithm for the minimum sum coloring
problem. Computers & Operations Research, 43 , 318–327.
Johnson, D. S., Aragon, C. R., McGeoch, L. A., & Schevon, C. (1991). Optimization by simulated
annealing: an experimental evaluation; part ii, graph coloring and number partitioning.
Operations research, 39 (3), 378–406.
Johnson, D. S., Mehrotra, A., & Trick, M. A. (2008). Special issue on computational methods for
graph coloring and its generalizations. North-Holland.
Kirkpatrick, S., Gelatt, C. D., Vecchi, M. P., et al. (1983). Optimization by simulated annealing.
science, 220 (4598), 671–680.
Kokosiński, Z., & Kwarciany, K. (2007). On sum coloring of graphs with parallel genetic algo-
rithms. Adaptive and Natural Computing Algorithms, 211–219.
Kroon, L. G., Sen, A., Deng, H., & Roy, A. (1996). The optimal cost chromatic partition problem
for trees and interval graphs. In International workshop on graph-theoretic concepts in
computer science (pp. 279–292).
Kubicka, E., & Schwenk, A. J. (1989). An introduction to chromatic sums. In Proceedings of the


17th conference on ACM annual computer science conference (pp. 39–45).


Lecat, C., Li, C.-M., Lucet, C., & Li, Y. (2015). Exact methods for the minimum sum coloring
problem.
Li, Y., Lucet, C., Moukrim, A., & Sghiouer, K. (2009). Greedy algorithms for the minimum sum
coloring problem. In Logistique et transports (pp. LT–027).
Lin, S.-W., Vincent, F. Y., & Lu, C.-C. (2011). A simulated annealing heuristic for the truck
and trailer routing problem with time windows. Expert Systems with Applications, 38 (12),
15244–15252.
López-Camacho, E., Terashima-Marı́n, H., Ross, P., & Valenzuela-Rendón, M. (2010). Problem-
state representations in a hyper-heuristic approach for the 2d irregular bpp. In Proceedings
of the 12th annual conference on genetic and evolutionary computation (pp. 297–298).
Lourenço, H. R., Martin, O. C., & Stutzle, T. (2003). Iterated local search. International series
in operations research and management science, 321–354.
Marı́n-Blázquez, J. G., & Schulenburg, S. (2007). A hyper-heuristic framework with xcs: Learning
to create novel problem-solving algorithms constructed from simpler algorithmic ingredi-
ents. In Learning classifier systems (pp. 193–218). Springer.
Marler, R. T., & Arora, J. S. (2005). Function-transformation methods for multi-objective
optimization. Engineering Optimization, 37 (6), 551–570.
Mehrotra, A. (1992). Constrained graph partitioning: decomposition, polyhedral structure and
algorithms.
Mehrotra, A., & Trick, M. A. (1996). A column generation approach for graph coloring. informs
Journal on Computing, 8 (4), 344–354.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation
of state calculations by fast computing machines. The journal of chemical physics, 21 (6),
1087–1092.
Moré, J. J., & Toraldo, G. (1991). On the solution of large quadratic programming problems
with bound constraints. SIAM Journal on Optimization, 1 (1), 93–113.
Moukrim, A., Sghiouer, K., Lucet, C., & Li, Y. (2014). Upper and lower bounds for the minimum
sum coloring problem, submitted for publication.
Richardson, J. T., Palmer, M. R., Liepins, G. E., & Hilliard, M. (1989). Some guidelines for genetic
algorithms with penalty functions. In Proceedings of the third international conference on
genetic algorithms (pp. 191–197).
Sabar, N. R., Ayob, M., Qu, R., & Kendall, G. (2012). A graph coloring constructive hyper-
heuristic for examination timetabling problems. Applied Intelligence, 37 (1), 1–11.
Salavatipour, M. R. (2003). On sum coloring of graphs. Discrete Applied Mathematics, 127 (3),
477–488.
Sen, A., Deng, H., & Guha, S. (1992). On a graph partition problem with application to vlsi
layout. Information processing letters, 43 (2), 87–94.
Sivanandam, S., & Deepa, S. (2007). Introduction to genetic algorithms. Springer Science &
Business Media.
Smit, S. K., & Eiben, A. (2010). Parameter tuning of evolutionary algorithms: Generalist vs.
specialist. In European conference on the applications of evolutionary computation (pp.
542–551).
Wang, Y., Hao, J.-K., Glover, F., & Lü, Z. (2013). Solving the minimum sum coloring problem
via binary quadratic programming. arXiv preprint arXiv:1304.5876 .
Wu, Q., & Hao, J.-K. (2012). An effective heuristic algorithm for sum coloring of graphs.
Computers & Operations Research, 39 (7), 1593–1600.
Yuan, Y., Ling, Z., Gao, C., & Cao, J. (2014). Formulation and application of weight-function-
based physical programming. Engineering Optimization, 46 (12), 1628–1650.
Zhao, W., Wu, X., & Yan, M. (1989). Weight function method for three dimensional crack
problems I: Basic formulation and application to an embedded elliptical crack in finite plates.
Engineering Fracture Mechanics, 34 (3), 593–607.
