Intelligent Decision Support Systems For Sustainable Computing Paradigms and Applications 1st Edition Arun Kumar Sangaiah
Studies in Computational Intelligence, Volume 705

Intelligent Decision Support Systems for Sustainable Computing: Paradigms and Applications

Series editor: Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl
Editors

Arun Kumar Sangaiah, School of Computing Science and Engineering, VIT University, Vellore, Tamil Nadu, India

Patrick Siarry, Faculté des Sciences et Technologie, Université Paris-Est Créteil Val-de-Marne, Créteil Cedex, France

Ajith Abraham, Scientific Network for Innovation and Research Excellence, Machine Intelligence Research Labs, Auburn, WA, USA

Michael Sheng, Department of Computing, Macquarie University, Sydney, NSW, Australia
The fast development of Internet technologies has raised the need for sustainable
computing, namely energy-aware management of computing resources to effi-
ciently control and reduce ecological and societal impacts. Most prominently, the
rapid development and the world-wide adoption of cloud computing and data
centers are critical challenges to sustainable computing in the twenty-first century.
The aim is therefore to find trade-offs in the digital ecosystems, which support
computation and data storage at large scale while ensuring eco-friendly computing.
The book “Intelligent Decision Support Systems for Sustainable Computing”
addresses research and development issues in sustainable computing through a
variety of evolutionary computational paradigms and applications. This book is a
significant collection of 14 chapters covering various computational intelligence
techniques, as well as their applications in sustainable computing that have emerged
in the recent decade. It provides an excellent platform for reviewing various areas of intelligent decision support and analytics in depth, and caters to the needs of researchers and practitioners in the field. The multi-disciplinary nature of sustainable computing, spanning computer science, optimization, soft computing, electrical engineering, and related fields, is reflected in the book through a variety of paradigms, such as neuro-fuzzy systems, genetic algorithms, and optimization techniques, focused on real-world decision-making analysis, modeling, optimization, and control problems arising in the area of sustainable computing.
To my knowledge, this is the first attempt to apply evolutionary computation comprehensively in this area and to provide intensive, in-depth coverage of the key subjects in evolutionary computational intelligence approaches to sustainable computing. This book is an invaluable, topical, and timely source of knowledge in
the field, which would serve nicely as a major textbook for several courses at both
undergraduate and postgraduate levels. It can also serve as a key reference for
scientists, professionals, researchers, and academicians, who are interested in new
challenges, theories, practice, and advanced applications of the specific area of
sustainable computing.
Sustainable computing has grown into a significant research area that spans computer science and engineering, electrical engineering, and other engineering disciplines. A substantial body of recent literature on sustainable computing addresses energy efficiency and natural resource preservation, and emphasizes the role of ICT (information and communication technology) in achieving system design and operation objectives. The energy impact and design of more efficient IT infrastructure is a key challenge for organizations seeking to realize new computing paradigms.
On the other hand, computational intelligence (CI) techniques for intelligent decision support can be exploited to build effective computing systems. CI comprises various branches, including but not limited to expert systems (ES), artificial neural networks (ANN), genetic algorithms (GA), fuzzy logic (FL), knowledge-based systems, and various hybrid systems that combine two or more of these branches. The intention of this book is to explore sustainability problems in computing and information processing environments and technologies at the different levels of CI paradigms. Moreover, this edited volume addresses the comprehensive nature of sustainability and emphasizes the role of CI in modeling, identification, optimization, prediction, forecasting, and control of complex systems.
The chapters included in this book focus on the latest research, innovative ideas, challenges, and CI solutions in sustainable computing. Moreover, these chapters present novel, in-depth fundamental research contributions, from both methodological and application perspectives, toward achieving a sustainable lifestyle for society. This book provides a comprehensive overview of the constituent paradigms underlying evolutionary computational intelligence methods, which are receiving increasing attention as sustainable computing problems evolve. Hence, the main objective of the book is to provide a forum for a wide variety of researchers, in which decision-making approaches under CI paradigms demonstrate how the proposed procedures can handle sustainable computing problems in practice.
Over recent decades, many useful methods have been proposed to solve organizational decision-making problems. The computational intelligence paradigm in intelligent decision support systems has fostered a broad research area, and its significance has been clearly demonstrated in many applications. This volume addresses a wide spectrum of CI paradigms for the decisions an industry or organization makes at all levels of its sustainability challenges.
This volume aims to provide relevant theoretical frameworks and the latest empirical research findings in the area. CI approaches applied to sustainable computing decision-making systems have received growing attention in recently published volumes. In this context, a clear perspective on the current state of practice of computational intelligence techniques is needed. Consequently, to address the predictive analysis of sustainable computing problems, including energy efficiency and natural resource preservation, load distribution strategies are also addressed in this book.
Solutions to these problems are handled through a wide range of algorithmic and computational intelligence frameworks, such as optimization, machine learning, decision support systems, and meta-heuristics. The main contributions to this volume address sustainability problems in computing and information processing environments and technologies, at various levels of the computational intelligence process.
making has been investigated. The authors propose an integrated decision support system that combines Adaptive Neuro-Fuzzy Inference Systems (ANFIS), Neural Networks (NN), and Fuzzy Cognitive Maps (FCM) with Econometric Models (EM) in a hybrid fashion to predict energy consumption and prices.
Chapter “Computational Intelligence Based Heuristic Approach for Maximizing
Energy Efficiency in Internet of Things” presents a computational intelligence-based heuristic approach for maximizing energy efficiency in the Internet of Things (IoT).
The authors present the Modified Multi-objective Particle Swarm Optimization
(MMOPSO) algorithm based on the concept of dominance to solve the mobile
cloud task scheduling problem. Overall, this chapter explores IoT and cloud
computing as well as their symbiosis based on the common environment of dis-
tributed processing.
Chapter “Distributed Algorithm with Inherent Intelligence for Multi-cloud
Resource Provisioning” illustrates distributed algorithms with inherent intelligence
for multi-cloud resource provisioning. The authors introduce a ranking method for scheduling elastic and inelastic tasks that supports heterogeneous requests and resources in multi-cloud environments.
Chapter “Parameter Optimization methods Based on Computational Intelligence
Techniques in Context of Sustainable Computing” provides a comprehensive study
of parameter optimization. The accuracy of any computational intelligence (CI) algorithm depends strongly on optimal parameter settings. The authors discuss how parameter settings affect the performance and robustness of evolutionary algorithms.
Chapter “The Maximum Power Point Tracking Using Fuzzy Logic Algorithm
for DC Motor Based Conveyor System” presents the design of a conveyor belt system driven by a DC motor whose speed is controlled by a solar-powered converter operating at the maximum power point. In this chapter, the authors compare the proposed fuzzy-logic MPPT algorithm with existing algorithms such as perturb-and-observe and incremental conductance.
Chapter “Differential Evolution Based Significant Data Region Identification on Large Storage Drives” emphasizes the identification of data-relevant sector regions on hard disk drives (HDDs) using a computationally intelligent differential evolution (DE) algorithm to accelerate the overall digital forensic (DF) process. In this chapter, the authors present a new trade-off rectangle to reveal present requirements for the development of existing DF facilities. The chapter also proposes a methodology that extracts data storage patterns using the storage drive's structural information and the DE algorithm, which can help investigators in planning further courses of action.
Chapter “A Fuzzy Based Power Switching Selection for Residential Application to Beat Peak Time Power Demand” introduces a fuzzy-based power switching selection for residential applications to beat peak-time power demand. The authors design a hybrid system using solar and mains power as sources. A fuzzy logic algorithm selects the power source based on the peak and off-peak times, the utilized power level, and the availability of each source, so as to meet the power demand during both peak and off-peak periods.
The editors would like to acknowledge the help of all the people involved in this project and, more specifically, the authors and reviewers who took part in the review process. Without their support, this book would not have become a reality.
First, the editors would like to thank each of the authors for their contributions. Our sincere gratitude goes to the chapters' authors, who contributed their time and expertise to this book.
Second, the editors wish to acknowledge the valuable contributions of the reviewers regarding the improvement of the quality, coherence, and content presentation of the chapters. We deeply appreciate the comments of the reviewers, who helped us refine the content of this book. Most of the authors also served as referees; we highly appreciate their double duty.
Finally, our gratitude goes to all of our friends and colleagues, who were so
generous with their encouragement, advice, and support.
1 Introduction
Computational intelligence and its decision support methodologies have the power to acquire knowledge about a specific task from given data. A system is called computationally intelligent if it handles structured and unstructured data during the decision-making process of an application. This book mainly applies CI paradigms to solve complex real-world sustainable computing problems. Moreover, the intelligent decision support CI paradigms mine historical data in order to predict future sustainability problems.
CI methodologies span various divisions, including but not limited to neurocomputing, evolutionary computing, granular computing, and artificial immune systems. Moreover, CI mainly represents the integration of the following five methods from the soft computing perspective: Fuzzy Logic, Artificial Neural
A Genetic Algorithm Based Efficient Static Load Distribution …

1 Introduction
X. Wang
School of Computer Science and Technology, Xidian University, Xi’an 710071, China
e-mail: wangxiaoli@mail.xidian.edu.cn
B. Veeravalli (✉)
Department of Electrical and Computer Engineering, National University of Singapore,
Singapore 117576, Singapore
e-mail: elebv@nus.edu.sg
imparts large carbon footprints on the environment [2]. Emerging sustainable com-
puting technologies, which primarily aim at reducing massive energy consumption
by developing certain ab initio computational and mathematical models, methods,
and tools for resource allocation and task scheduling, are therefore gaining significant interest among researchers and practitioners.
One of the key techniques to save energy is Dynamic Voltage Scaling (DVS).
It exploits hardware characteristics to save energy by lowering the CPU voltage and operating frequency, keeping the processor operating at a slower speed [3].
During the past decades, substantial energy-efficient scheduling strategies have been
proposed for DVS-enabled systems [4–7]. These DVS-based techniques, however, may not be applicable to virtualized environments where physical processors are shared by multiple virtual machines (VMs), as lowering the supply voltage will inadvertently affect the performance of VMs belonging to different applications [8]. Another
promising approach to conserve energy is turning off idle computing nodes in a data
center by packing the running VMs to as few physical servers as possible, often
called VM consolidation [9]. However, live VM migration must be guaranteed and resources must be properly allocated in order to avoid severe performance degradation due to resource competition among co-located VMs. There is a large body of studies on migration strategies, concerning the issues of where, when, and how a VM should be migrated [10–13]. At the current stage, several management issues in VM consolidation still deserve further investigation. For example, transferring large-sized data over a shared network link is a huge challenge, especially when several goals, in terms of Service-Level Agreement (SLA) violation avoidance, minimum communication delay, high system throughput, and high quality of service, have to be met [14].
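As background to the DVS discussion above: dynamic CMOS power scales roughly as C·V²·f, and since a lower frequency tolerates a lower supply voltage, slowing the clock can cut energy superlinearly even though execution takes longer. A rough illustrative sketch (the linear voltage-frequency scaling and all constants are my simplifying assumptions, not the chapter's):

```python
def dynamic_energy(cycles, freq_ghz, capacitance=1.0, volts_per_ghz=1.0):
    """Energy for a fixed amount of work under a toy DVS model.

    Dynamic power is P = C * V^2 * f and execution time is t = cycles / f.
    Assuming voltage scales linearly with frequency (an assumption of this
    sketch), energy E = P * t grows with f^2 for a fixed cycle count, so
    running slower saves energy at the cost of a longer run time.
    """
    voltage = volts_per_ghz * freq_ghz
    power = capacitance * voltage ** 2 * freq_ghz  # dynamic power C*V^2*f
    time = cycles / freq_ghz                       # slower clock, longer run
    return power * time
```

Under this toy model, halving the frequency quarters the energy for the same work while doubling the execution time, which is exactly the trade-off DVS schedulers exploit.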
While significant advancements have been made in minimizing the energy consumption of sustainable computing, even stronger effort is needed to promote the effective utilization of energy at the various processing nodes. By “effective utilization” we mean that compute platforms should not just minimize energy consumption but also make every unit of energy consumed for workload computation worthwhile. This is based on the fact that the actual energy consumed for workload computation might not be equal to the energy
that is required for workload computation. For example, most existing scheduling models assume that the compute units, that is, the processors, are able to stay online and available forever: all processors are idle at the beginning of the workload assignment and will be kept busy until the assigned workload fractions are completed. In reality, however, processors may have arbitrary unavailable time periods. They may still be busy computing a previous workload when a new workload arrives, and may even go off-line before finishing the currently assigned load. The time period between the release time and the off-line time of a processor is referred to as its available time period. Hence, if we inadvertently assign tasks to processors according to their computational capabilities without taking the availability constraints into account, some processors would not be able to finish their assigned workloads, and all of the unfinished workloads would need to be reassigned to other available processors, resulting in an inefficient
time and energy schedule. Therefore, designing an efficient load distribution strategy
seems appropriate when one considers resource (processor) available times.
It is believed that workloads to be scheduled on Heterogeneous Sustainable Computing Systems (HSCS) are quite large in size and computationally intensive in their CPU requirements; otherwise, one or a few processors would be enough for the workload computation. Workloads should also be partitionable, so that they can be further divided into a number of load fractions and distributed to processors for independent parallel computing. Ideally, if a workload can be divided into an arbitrary number of load fractions such that there are no precedence relationships among
these fractions, then we refer to it as a divisible load [15]. Divisible loads arise in a wide range of real-world applications, such as real-time video encoding [16, 17], satellite image classification [18], signature searching in a networked collection of files [19], and so on. It may be noted that divisible load modelling can also be adopted for modern-day Big Data processing when the requirements demand homogeneous processing of the data.
There are considerable studies available on finding an optimal load distribution
strategy for scheduling large-scale divisible loads on various distributed networks
with different topologies, including linear networks [20], bus networks [21], tree
networks [22], Gaussian, mesh, and torus networks [23], and complete b-ary tree networks [24]. Generally, a load distribution strategy involves two main issues: one is deriving the optimal sizes of the workload fractions assigned to the processors, referred to as an optimal load partition (OLP), and the other is determining a viable sequence of distribution that achieves minimum processing time, referred to as an optimal load distribution sequence (OLDS).
As for the first issue, in order to obtain a minimized processing time, it is nec-
essary and sufficient to require that all processors stop computing at the same time
instant; otherwise, the processing time of the entire workload could be reduced by
transferring some load fractions from busy to idle processors. This widely accepted
principle in Divisible-Load Theory (DLT) is referred to as the optimality principle,
which provides a key to deriving a closed-form solution for the OLP [25]. However, as mentioned earlier, processors may have arbitrary unavailable time periods in reality. Hence, we cannot simply assign tasks to processors according to the optimality principle as usual; otherwise, workload rescheduling would result in an inefficient time and energy schedule. Therefore, a dedicated search for an OLP is necessary for sustainable computing where processor available time periods are involved.
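For the simplest single-level tree model, without the start-up overheads and availability windows this chapter adds, the optimality principle gives a closed-form OLP: equating the finish times of adjacent workers yields α_{i+1} = α_i · w_i / (g_{i+1} + w_{i+1}), which is then normalized so the fractions sum to the total load. A sketch of this textbook derivation (function names are mine; g_i and w_i are the per-unit communication and computation times defined later in the chapter):

```python
def optimal_partition(W, g, w):
    """Closed-form load fractions for a single-level tree under the DLT
    optimality principle (all workers stop computing at the same instant).

    g[i] and w[i] are the per-unit communication and computation times of
    worker i; the distribution order is the list order.  Start-up
    overheads and availability windows, the complications this chapter
    adds, are deliberately omitted in this sketch.
    """
    ratios = [1.0]
    for i in range(1, len(g)):
        # equal finish times => alpha_i = alpha_{i-1} * w_{i-1} / (g_i + w_i)
        ratios.append(ratios[-1] * w[i - 1] / (g[i] + w[i]))
    total = sum(ratios)
    return [W * x / total for x in ratios]

def finish_times(alpha, g, w):
    """Finish time of each worker under strictly sequential distribution."""
    times, comm = [], 0.0
    for a, gi, wi in zip(alpha, g, w):
        comm += gi * a            # worker receives its whole fraction first
        times.append(comm + wi * a)
    return times
```

Checking that `finish_times(optimal_partition(W, g, w), g, w)` returns equal values for every worker is a quick sanity test of the optimality principle in this simplified setting.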
As for the second issue, sufficient evidence has shown that load distribution
sequences play a significant role in computational performance. For heterogeneous
single-level tree networks, it has been proven that only when the load distribution
sequence follows the decreasing order of communication speeds does the processing
time reach the minimum [26]. As regards heterogeneous multi-level tree networks, the OLDS depends only on the communication speeds of the links, not on the computation speeds of the processors [27]. Nonetheless, the above studies did not take start-up overheads for either communication or computation into consideration. For the case
of homogenous bus networks with start-up overheads, it was shown that the process-
ing time is minimized when the load distribution sequence follows the order in which
the computation speeds of processors decrease [28]. For a large enough workload on heterogeneous single-level tree networks with arbitrary start-up overheads, the sequence of load distribution should follow the decreasing order of the communication speeds in order to achieve minimum processing time [29]. But how large must a workload be to be considered large enough? Moreover, when we consider processor available time periods, does the above conclusion still hold? If not, what sequence should the load distribution follow to achieve a minimum processing time?
As regards processor release times alone, several load distribution strategies were proposed for bus networks [30], linear daisy-chain networks [31], and single-level tree networks [32], but they took neither start-up overheads nor the influence of the load distribution sequence into consideration. In order to obtain an OLP and OLDS simultaneously on single-level tree networks with arbitrary processor release times, a bi-level genetic algorithm was proposed in [33]. The proposed algorithm comprises two layers of nested genetic algorithms, with the upper genetic algorithm applied to searching for an OLDS and the lower algorithm utilized for finding an OLP. However, as the number of processors increases, the bi-level genetic algorithm becomes hard to converge. In order to obtain an accurate OLP, an exhaustive search algorithm was proposed in [34] for release-time-aware divisible-load scheduling on bus networks, but it considered neither processor off-line times nor the influence of the OLDS on processing time.
In this chapter, both processor release times and off-line times are explicitly considered in our model, which brings the work closer to reality. The scheduling problem at hand is complex owing to the inherent nature of the computing platform, which may comprise heterogeneous processors. We propose a novel Processor Availability-Aware Genetic Algorithm (PAA-GA) based global optimization strategy to minimize the processing time of the entire workload on HSCS, thus also reducing the total energy consumption.
The remainder of this chapter is organized as follows. Section 2 first gives a mathematical description of the divisible-load scheduling problem on HSCS with arbitrary start-up overheads and processor available time periods, followed by the proposed availability-aware scheduling model. With this model, we design algorithm PAA-GA in Sect. 3, which is evaluated through experiments in Sect. 4. The last section concludes the chapter.
An HSCS is considered in this chapter, with its topology given in Fig. 1. It comprises N + 1 heterogeneous processors {P0, P1, …, PN} connected through communication links {L1, L2, …, LN}, where P0 signifies the master, while the others
denote worker processors. P0 does not participate in computation itself but merely
takes the responsibility of assigning loads to the worker processors. P0 first divides the entire workload W_total into N fractions A⃗ = (α_1, α_2, …, α_N), with 0 ≤ α_i ≤ W_total and ∑_{i=1}^{N} α_i = W_total. The load fractions are then assigned to the worker processors in a certain distribution order (P_σ1, P_σ2, …, P_σN), where σ⃗ = (σ_1, σ_2, …, σ_N) is a processor index vector, a permutation of (1, 2, …, N), and α_i is assigned to processor P_σi for i = 1, 2, …, N. P0 sends a load fraction to only one processor at a time, and each worker starts computing only after its entire load fraction has been received. Workers cannot communicate and compute simultaneously.
Note that not all worker processors need to participate in the workload computation. Suppose that only the first n processors P_σ1, P_σ2, …, P_σn in the distribution sequence are needed; they are assigned non-zero load fractions, that is, α_i > 0 for i = 1, …, n, while the remaining processors are assigned no load, that is, α_i = 0 for i = n + 1, …, N.
We consider a heterogeneous system wherein, for all i ≠ j, w_i ≠ w_j and g_i ≠ g_j. It is also assumed that communication speeds are much faster than computation speeds; otherwise, one or two processors would be enough to involve in the workload computation [35]. As P0 assigns α_i to the i-th processor P_σi in the load distribution sequence, the communication and computation components are modelled as affine functions, given by e_σi + g_σi α_i and f_σi + w_σi α_i, where e_σi and f_σi are the communication and computation start-up overheads associated with link L_σi and processor P_σi, respectively.
Some processors in our system may be engaged in previous workload computations when a new load arrives, say at time t = 0, so they cannot participate in the newly arrived workload's computation until their release times. Meanwhile, they have to finish computing their assigned load fractions before they reach their off-line times. It is assumed that processors can estimate their release times from the size of the current workload being processed, and that the master knows the release and off-line times of all processors. Even if the master does not know the accurate processor off-line times, prediction techniques exist to estimate an approximate off-line time for each processor based on a history of processor usage (for more information, please refer to [36–38]). Let r_i and o_i be the release time and off-line time of processor P_i, respectively, where i = 1, 2, …, N.
Fig. 2 Gantt chart for load scheduling on HSCS with processor available times
Figure 2 shows a possible Gantt chart for load scheduling on an HSCS with processor release times and off-line times. As illustrated in Fig. 2, the first processor P_σ1 starts to receive load fraction α_1 from master P0 when it is released at time t = r_σ1. Let s_i be the start time of processor P_σi, at which P_σi starts to receive load fraction α_i from P0. Thus s_1 = r_σ1. Except for P_σ1, the start time s_i of processor P_σi depends not only on its release time r_σi, but also on the start time s_{i−1} of processor P_σ{i−1} and the communication time (e_σ{i−1} + g_σ{i−1} α_{i−1}) taken by processor P_σ{i−1} to receive its load fraction α_{i−1} from P0. By observing Fig. 2, we obtain

    s_i = max{ r_σi, s_{i−1} + e_σ{i−1} + g_σ{i−1} α_{i−1} },  i = 2, 3, …, n.
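Once σ⃗ and A⃗ are fixed, the recurrence above computes all start times in one pass. A minimal sketch (function and parameter names are mine, using 0-based worker indices):

```python
def start_times(order, alpha, r, e, g):
    """Start (reception) times from the recurrence in the text:
    s_1 = r_{sigma_1},
    s_i = max(r_{sigma_i}, s_{i-1} + e_{sigma_{i-1}} + g_{sigma_{i-1}} * alpha_{i-1}).

    order[i] plays the role of sigma_{i+1} (0-based worker indices);
    r, e, g hold release times, communication start-up overheads, and
    per-unit communication times, indexed by worker.
    """
    s = [r[order[0]]]              # first worker starts at its release time
    for i in range(1, len(order)):
        prev = order[i - 1]
        s.append(max(r[order[i]], s[-1] + e[prev] + g[prev] * alpha[i - 1]))
    return s
```

The `max` captures the two waiting causes: a worker cannot start before it is released, nor before the master has finished sending the previous worker's fraction.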
According to the optimality principle of DLT, if processor off-line times are ignored, all processors should finish computing at the same time, say at time t = T⋆, to obtain a minimized processing time. Once we consider off-line times, processors whose off-line times are smaller than T⋆ will not be able to finish their assigned load fractions. Hence, lest these load fractions be rescheduled and result in wasted energy, processors should be assigned appropriate workload sizes according to their available time periods in the first place.
The processing time T_i of processor P_σi is given by s_i + e_σi + f_σi + (g_σi + w_σi) α_i, where i = 1, 2, …, n. It can be observed from this formulation that T_i depends directly on two parts: the former part, s_i, indicating when processor P_σi starts to receive load from P0, and the latter part, e_σi + f_σi + (g_σi + w_σi) α_i, representing how long it takes for processor P_σi to finish computing its assigned load fraction. Both parts are determined directly by the load partition A⃗ = (α_1, α_2, …, α_N) and the load distribution sequence (P_σ1, P_σ2, …, P_σN). Hence, T_i is actually a function of σ⃗ and A⃗. As the processing time T of the entire workload is determined by the processor which stops computing last,
T can be derived as a function of σ⃗ and A⃗ as follows:

    T(σ⃗, A⃗) = max_{1≤i≤n} T_i = max_{1≤i≤n} { s_i + e_σi + f_σi + (g_σi + w_σi) α_i }
s.t.
(1) ∑_{i=1}^{N} α_i = W_total, with 0 ≤ α_i ≤ W_total, i = 1, …, N.
(2) σ⃗ = (σ_1, σ_2, …, σ_N), where σ_i ∈ {1, 2, …, N} for i = 1, 2, …, N, and ∀j, k ∈ {1, 2, …, N}, if j ≠ k then σ_j ≠ σ_k.
(3) n ≤ N; ∀i ∈ {1, 2, …, n}, α_i > 0, while ∀i ∈ {n + 1, …, N}, α_i = 0.
(4) s_i + e_σi + (g_σi + w_σi) α_i + f_σi ≤ o_σi, i = 1, …, n.
where
(5) n = card({α_i | α_i ∈ A⃗ and α_i > 0}), where card(X) denotes the number of elements in set X.
(6) s_1 = r_σ1, and s_i = max{ r_σi, s_{i−1} + e_σ{i−1} + g_σ{i−1} α_{i−1} } for i = 2, 3, …, n.
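Using the start-time recurrence (6), the objective T(σ⃗, A⃗) and the off-line-time constraint (4) can be evaluated together in a single pass, which is the natural fitness evaluation for any search over (σ⃗, A⃗). A sketch under the chapter's affine model (the names and the None-for-infeasible convention are mine):

```python
def makespan(order, alpha, r, o, e, f, g, w):
    """T(sigma, A) = max_i [ s_i + e_i + f_i + (g_i + w_i) * alpha_i ] over
    the workers with alpha_i > 0; returns None if any finish time exceeds
    that worker's off-line time (constraint (4) of the model)."""
    T = 0.0
    s_prev = prev = a_prev = None
    for k, a in zip(order, alpha):
        if a <= 0:
            break                               # zero fractions come last
        if s_prev is None:
            s = r[k]                            # s_1 = release of first worker
        else:
            s = max(r[k], s_prev + e[prev] + g[prev] * a_prev)
        finish = s + e[k] + f[k] + (g[k] + w[k]) * a
        if finish > o[k]:
            return None                         # availability window violated
        T = max(T, finish)
        s_prev, prev, a_prev = s, k, a
    return T
```

Each candidate (σ⃗, A⃗) is thus scored in O(n) time, with infeasibility signalled rather than silently ignored.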
In the proposed model, two sets of variables are involved: A⃗ = (α_1, α_2, …, α_N) and σ⃗ = (σ_1, σ_2, …, σ_N). Therefore, the solution of the proposed model is a mix of real numbers and integers. The problem of deriving an OLDS is similar to the Travelling Salesman Problem (TSP), which asks the following question: given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city? It is well acknowledged that TSP is an NP-hard problem in combinatorial optimization. Therefore, as an even more complex problem, with the two sets of variables A⃗ and σ⃗ optimized simultaneously, the problem considered in this chapter is definitely NP-hard. When N becomes large, it is hard to obtain a globally optimal solution (A⃗, σ⃗).
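Since the joint search over σ⃗ and A⃗ is NP-hard, a genetic algorithm is a natural heuristic. The chapter's PAA-GA operators are not reproduced here; the following is a generic sketch of a GA over the hybrid encoding the model implies, with a permutation chromosome for σ⃗ and a positive real-valued chromosome that is normalized into a load partition for A⃗. All operator choices, parameters, and names are illustrative assumptions:

```python
import random

def evaluate(order, alpha, r, o, e, f, g, w):
    """Makespan T(sigma, A) under the chapter's affine model; returns
    +inf when a worker would run past its off-line time."""
    T, s = 0.0, None
    for i, k in enumerate(order):
        if s is None:
            s = r[k]
        else:
            p = order[i - 1]
            s = max(r[k], s + e[p] + g[p] * alpha[i - 1])
        finish = s + e[k] + f[k] + (g[k] + w[k]) * alpha[i]
        if finish > o[k]:
            return float("inf")
        T = max(T, finish)
    return T

def paa_ga_sketch(W, r, o, e, f, g, w, pop=40, gens=200, seed=0):
    """Toy GA over (permutation sigma, positive weights decoded into a
    load partition summing to W).  Every worker receives a non-zero
    fraction in this sketch, and the operators (swap mutation, Gaussian
    weight perturbation, tournament selection) are generic placeholders,
    not the chapter's exact PAA-GA operators."""
    rng, N = random.Random(seed), len(r)

    def decode(weights):
        tot = sum(weights)
        return [W * x / tot for x in weights]

    def fitness(ind):
        perm, weights = ind
        return evaluate(perm, decode(weights), r, o, e, f, g, w)

    popn = [(rng.sample(range(N), N),
             [rng.random() + 0.1 for _ in range(N)]) for _ in range(pop)]
    for _ in range(gens):
        # tournament selection, then mutate each selected parent
        parents = [min(rng.sample(popn, 3), key=fitness) for _ in range(pop)]
        nxt = []
        for perm, weights in parents:
            perm, weights = perm[:], weights[:]
            i, j = rng.randrange(N), rng.randrange(N)
            perm[i], perm[j] = perm[j], perm[i]       # swap mutation on sigma
            k = rng.randrange(N)
            weights[k] = max(0.01, weights[k] + rng.gauss(0.0, 0.1))
            nxt.append((perm, weights))
        popn = nxt
    best = min(popn, key=fitness)
    return best[0], decode(best[1]), fitness(best)
```

Normalizing the weight chromosome keeps constraint (1) satisfied by construction, so only the availability constraint (4) needs a penalty (here, an infinite fitness).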
had you at the bar of judgment. And “That cook, he didn’t believe ’twas
gold,” you’d say, and curse him.
Bartlett—[Wildly.] He was lyin’, the thief! Lyin’ so’s he and the boy
could steal th’ gold. I made him own up he was lyin’. What if it’s all true,
what ye heard? Hadn’t we the right to do away with two thieves? And we
was all mad with thirst and sun. Can ye hold madmen to account for the
things they do?
Mrs. Bartlett—You wasn’t so crazed but you remember.
Bartlett—I remember I spoke no word, Sarah—as God’s my judge!
Mrs. Bartlett—But you could have prevented it with a word, couldn’t
you, Isaiah? That heathen savage lives in the fear of you. He’d not have
done it if——
Bartlett—[Gloomily.] That’s woman’s talk. There be three o’ us can
swear in any court I spoke no word.
Mrs. Bartlett—What are courts? Can you swear it to yourself? You
can’t, and it’s that’s drivin’ you mad, Isaiah. Oh, I’d never have believed it
of you for all you said in sleep, if it wasn’t for the way you looked and
acted out of sleep. I watched you that first week, Isaiah, till the fear of it had
me down sick. I had to watch you, you was so strange and fearful to me. At
first I kept sayin’, ’twas only you wasn’t rid o’ the thirst and the sun yet.
But then, all to once, God gave me sight, and I saw ’twas guilt written on
your face, on the queer stricken way you acted, and guilt in your eyes. [She
stares into them.] I see it now, as I always see it when you look at me. [She
covers her face with her hands with a sob.]
Bartlett—[His face haggard and drawn—hopelessly, as if he were too
beaten to oppose her further—in a hoarse whisper.] What would ye have
me do, Sarah?
Mrs. Bartlett—[Taking her hands from her face—her eyes lighting up
with religious fervor.] Confess your sin, Isaiah! Confess to God and men,
and make your peace and take your punishment. Forget that gold that’s
cursed and the voyage you be settin’ out on, and make your peace.
[Passionately.] I ask you to do this for my sake and the children’s, and your
own most of all! I’ll get down on my knees, Isaiah, and pray you to do it, as
I’ve prayed to God to send you his grace! Confess and wash your soul of
the stain o’ blood that’s on it. I ask you that, Isaiah—and God asks you—to
make your peace with Him.
Bartlett—[His face tortured by the inward struggle—as if the word
strangled him.] Confess and let someone steal the gold! [This thought
destroys her influence over him in a second. His obsession regains
possession of him instantly, filling him with rebellious strength. He laughs
harshly.] Ye’d make an old woman o’ me, would ye, Sarah?—an old,
Sunday go-to-meetin’ woman snivvelin’ and prayin’ to God for pardon!
Pardon for what? Because two sneakin’ thieves are dead and done for? I
spoke no word, I tell ye—but if I had, I’d not repent it. What I’ve done I’ve
done, and I’ve never asked pardon o’ God or men for ought I’ve done, and
never will. Confess, and give up the gold I’ve dreamed of all my life that
I’ve found at last! By thunder, ye must think I’m crazed!
Mrs. Bartlett—[Seeming to shrivel up on her chair as she sees she has
lost—weakly.] You be lost, Isaiah—no one can stop you.
Bartlett—[Triumphantly.] Aye, none’ll stop me. I’ll go my course
alone. I’m glad ye see that, Sarah.
Mrs. Bartlett—[Feebly trying to get to her feet.] I’ll go to home.
Bartlett—Ye’ll stay, Sarah. Ye’ve had your say, and I’ve listened to
ye; now I’ll have mine and ye listen to me. [Mrs. Bartlett sinks back in
her chair exhaustedly. Bartlett continues slowly.] The schooner sails at
dawn on the full tide. I ask ye again and for the last time, will ye christen
her with your name afore she sails?
Mrs. Bartlett—[Firmly.] No.
Bartlett—[Menacingly.] Take heed, Sarah, o’ what ye’re sayin’! I’m
your husband ye’ve sworn to obey. By right I kin order ye, not ask.
Mrs. Bartlett—I’ve never refused in anything that’s right—but this be
wicked wrong.
Bartlett—It’s only your stubborn woman’s spite makes ye refuse.
Ye’ve christened every ship I’ve ever been skipper on, and it’s brought me
luck o’ a kind, though not the luck I wanted. And we’ll christen this one
with your own name to bring me the luck I’ve always been seekin’.
Mrs. Bartlett—[Resolutely.] I won’t, Isaiah.
Bartlett—Ye will, Sarah, for I’ll make ye. Ye force me to it.
Mrs. Bartlett—[Again trying to get up.] Is this the way you talk to me
who’ve been a good wife to you for more than thirty years?
Bartlett—[Commandingly.] Wait! [Threateningly.] If ye don’t christen
her afore she sails, I’ll take Nat on the vige along with me. [Mrs. Bartlett
sinks back in her chair, stunned.] He wants to go, ye know it. He’s asked me
a hundred times. He s’spects—’bout the gold—but he don’t know for sartin.
But I’ll tell him the truth o’ it, and he’ll come with me, unless—
Mrs. Bartlett—[Looking at him with terror-stricken eyes—
imploringly.] You won’t do that, Isaiah? You won’t take Nat away from me
and drag him into sin? I know he’ll go if you give him the word, in spite of
what I say. [Pitifully.] You be only frightenin’ me! You can’t be so wicked
cruel as that.
Bartlett—I’ll do it, I take my oath—unless—
Mrs. Bartlett—[With hysterical anger.] Then I’ll tell him myself—of
the murders you did, and—
Bartlett—[Grimly.] And I’ll say ’twas done in fair fight to keep them
from stealin’ the gold! I’ll tell him your’s is a woman’s notion, and he’ll
believe me, not you. He’s his father’s son, and he’s set to go. Ye know it,
Sarah. [She falls back in the chair hopelessly staring at him with horrified
eyes. He turns away and adds after a pause.] So ye’ll christen the Sarah
Allen in the mornin’ afore she sails, won’t ye, Sarah?
Mrs. Bartlett—[In a terrified tone.] Yes—if it’s needful to save Nat—
and God’ll forgive me when He sees my reason. But you—Oh, Isaiah! [She
shudders and then breaks down, sobbing.]
Bartlett—[After a pause, turns to her humbly as if asking her
forgiveness.] Ye mustn’t think hard o’ me that I want your name. It’s
because it’s a good woman’s name, and I know it’ll bring luck to our vige.
I’d find it hard to sail without it—the way things be.
Mrs. Bartlett—[Getting to her feet—in a state of feverish fear of him.]
I’m goin’ to home.
Bartlett—[Going to her.] I’ll help ye to the top o’ the hill, Sarah.
Mrs. Bartlett—[Shrinking from him in terror.] No. Don’t you touch
me! Don’t you touch me! [She hobbles quickly out of the door in the rear,
looking back frightenedly over her shoulder to see if he is following as