

IFIP WG 7.6 - IIASA Workshop on
Advances in Modeling:
Paradigms, Methods and Applications

21 – 23 September, 1998

Abstracts

International Institute for Applied Systems Analysis


Laxenburg, Austria
Contents

M. Bouzoubaa, G. Hasle, The GGT - a generic toolkit for VRP applications 1
T. Bui, An Agent-based Framework for Building Decision Support System 2
C. Charalambous, K. Hindi, T. Tahmassebi, Modelling Multi-Stage Manufacturing Systems for Efficient Scheduling 3
T. Crainic, Network Design in Freight Transportation 5
D. Dolk, Modeling in the Data Warehouse World 7
M. Fischer, P. Staufer, Optimization in an Error Backpropagation Neural Network Environment with a Performance Test on a Pattern Classification Problem 9
J. Granat, Multicriteria Analysis in the Design of Telecommunications Networks 11
T. Grünert, H. Sebastian, K. Büdenbender, A New Model and Tabu Search Algorithm for the Direct Flight Network Design Problem 12
Y. Hamam, Optimal Task Assignment of Program Modules in Distributed Systems by Simulated Annealing 14
R. Hartog, I. Houba, A. Beulens, Scheduling in food industry: object classes and constraints. 17
S. Irnich, A Multi-Depot Pickup-and-Delivery Problem with Heterogeneous Vehicles 19
M. Kainuma, Y. Matsuoka, T. Morita, Implication of embodied emissions induced by socio-economic changes 21
S. Kim, Y. Park, Y. Yoon, Water Conservation Simulation of Han River Multiple Reservoir System with a Mixed Integer Optimization Model 22
L. Kruś, Forecasting prices on agricultural markets in Poland by experts judgment. 24
M. Makowski, Modeling paradigms applied to the analysis of cost-effective policies aimed at improving European air quality 26
T. Masui, Analysis on Recycle Activities by Using Multi-Sectoral Economic Model with Material Flow 28
Y. Nakamori, M. Ryoke, H. Tamura, Fuzzy Modeling and Inverse Simulation 30
H. Nakayama, S. Yanagiuchi, Computational Intelligence and Multi-objective Programming in Financial Engineering 31
W. Ogryczak, Inequality Measures and Equitable Approaches to Location Problems 32
J. Paczyński, Modeling Languages as a Teaching Challenge 34

-i-

H. Sebastian, T. Grünert, Models, Algorithms, and Decision Support Systems for Letter Mail Logistics 35
J. Van der Vorst, A. Beulens, A Modelling Paradigm for Supply Chain Management Concepts 37
J. Wessels, E. Aarts, F. Reijnhoudt, P. Stehouwer, A decomposition approach for a class of on-line decision problems 40
A. Zilinskas, J. Calvin, A statistical model of search for global minimum 42
H. Zimmermann, The Modeling of Uncertainty 43
List of Participants 45

Note: The abstracts have been processed automatically using the abstract forms e-mailed by
the authors. Only one substantive type of modification has been applied, i.e., in a few cases the
co-author has been named first, if only he/she will participate in the Workshop.

- ii -
The GGT - a generic toolkit for VRP applications
Mouhssine Bouzoubaa and Geir Hasle
SINTEF Applied Mathematics, Oslo, Norway

Keywords: fleet management, object-oriented programming

In today's industries, vehicle routing problems (e.g., in fleet management) are typically solved with isolated software tools, if they are supported by information technology at all. Such tools cannot model the wide variety of vehicle routing problems that are found in real-life applications. They typically fail to address important constraints, cannot balance partially conflicting objectives, do not react to dynamics, and cannot interact with the user in a timely and meaningful way. In short, existing tools are inadequate and inflexible. Development and maintenance
of tailor-made solutions are too costly, partly because of the software bottleneck. The Green-
Trip Esprit project has developed the GGT - a rapidly re-configurable, generic software tool for
dynamic and optimised vehicle routing. The toolkit contains a comprehensive, object-oriented
software library based on constraint programming, with objects and algorithms for the mod-
elling and resolution of a wide range of VRPs. To overcome the software bottleneck, the GGT
includes an application modelling tool and facilities for automated configuration of bespoke de-
cision support systems based on application models. In this paper, we shall describe the GGT,
its modelling capabilities, and how it improves transportation logistics performance.
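
To give a flavour of what an object-oriented VRP model in such a toolkit might look like, here is a minimal Python sketch. The class and attribute names (Visit, Vehicle, Route, capacity_ok) are invented for illustration only; the abstract does not describe the actual GGT library interface.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Visit:
        location: str
        demand: float
        earliest: float          # time-window start
        latest: float            # time-window end

    @dataclass
    class Vehicle:
        capacity: float

    @dataclass
    class Route:
        vehicle: Vehicle
        visits: List[Visit] = field(default_factory=list)

        def capacity_ok(self) -> bool:
            # one example of a constraint object: total demand on the route
            # must not exceed the capacity of the assigned vehicle
            return sum(v.demand for v in self.visits) <= self.vehicle.capacity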

-1-
An Agent-based Framework for Building Decision Support
System
Tung Bui
The University of Hawaii, College of Business Administration, Honolulu, USA

Keywords: Decision Support Systems, crisis management

This paper proposes a framework for building Decision Support Systems using software
agent technology to support organizations characterized by physically distributed, enterprise
wide, heterogeneous information systems. Intelligent agents have offered tremendous potential
in supporting well-defined tasks such as information filtering, data mining, and data conversion.
However, the use of intelligent agents to support decisions has not been explored and merits
serious consideration. This paper proposes a taxonomy of agent characteristics that can be used
to help identify agents to support different types of decision tasks. We advocate a goal-directed,
behavior-based architecture for building cooperative decision support using agents. We look
at the development of agent-based DSS as being a process of putting together a coordinated
workflow of collaborating agents that is able to optimally support a problem-solving process.
The methodology is illustrated by its application to the creation of an agent-based DSS for crisis
management.

-2-
Modelling Multi-Stage Manufacturing Systems for Efficient
Scheduling
Christoforos Charalambous
Department of Manufacturing and Engineering Systems, Brunel University, Uxbridge, UB8
3PH, UK

Khalil Hindi
Department of Manufacturing and Engineering Systems, Brunel University, Uxbridge, UB8
3PH, UK

Turaj Tahmassebi
Unilever Research, Port Sunlight Lab, Quarry Road East, Bebington Wirral, Mersey Side L63
3JW, UK

Keywords: Manufacturing Systems; Scheduling; Modelling; Heuristics

In this paper, we present a novel approach for modelling the scheduling of complex, multi-
stage manufacturing systems that is both generally applicable and powerful.
The problem is concerned with the development of near-optimal, short-term schedules for the
large variety of manufacturing systems that are present in the process industry. Such systems
comprise stages that consist of parallel units and are dedicated to a specific task (producing,
post-dosing, packing, etc.). In addition, storage stages may be incorporated in the system for
the intermediate buffering of materials. Different connections can be defined between stages,
thus forming the system network. Given a portfolio of material demands, the aim is to generate
a schedule for the whole system that is optimal (or near optimal) with respect to an objective
criterion such as makespan or cost.
Due to the inherent complexity of the problem there has been relatively limited research on
the general version. Most previous research concentrates on developing mathematical program-
ming (MP) formulations or specialised heuristics for simplified instances with relaxed constraints.
Where investigation of the generalised case is considered, only theoretical conclusions are drawn.
As a result, the schedulers for such systems have been either too specialised, or prohibitively
slow.
This paper attempts to provide a solution to the problem by introducing a two-level model.
First, a generic model is presented that enables the description of most of the complex systems
found in the process industry today (though not all). The model first maps all the materials that
could be found at some point in the system onto a directed graph according to precedence rules.
Then the static units of the system are defined. The model classifies all system units in terms of
’transformers’ (which change materials), buffers and connectors. Each unit is described in terms
of its available operations with respect to materials, its constraints and its inter-relationships
with other units.
In the second part, a heuristic-based scheduler is developed that uses the definition model
to construct efficient schedules. The product demands are divided into sublots that can then
be allocated into sequences. Each sublot is taken in turn from the sequence and the necessary

-3-
operations for its production are made based on the intermediate schedule developed so far.
To choose the necessary operations the scheduler uses the definition model. Using the material
graph, the scheduler incrementally assigns an operation to a given unit that can accommodate
the material considered. To find the best operation a localised greedy search is performed based
on several criteria such as cost increase and expected utilisation of the system unit considered.
The remaining task of identifying a ’good’ sequence can be achieved by employing meta-heuristic
search in the solution space of sublot sequences.
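
A minimal Python sketch of the localised greedy assignment step described above; the unit interface (operations_for, cost_increase, expected_utilisation) and the weighting of the criteria are assumptions made for illustration, not the authors' implementation.

    def choose_operation(sublot, candidate_units, w_cost=1.0, w_util=0.5):
        """Pick the unit/operation with the best local score for one sublot.

        Each candidate unit is assumed to expose (hypothetical interface):
          operations_for(material)          -> operations able to process it
          cost_increase(op, sublot)         -> estimated objective increase
          expected_utilisation(op, sublot)  -> estimated utilisation afterwards
        """
        best, best_score = None, float("inf")
        for unit in candidate_units:
            for op in unit.operations_for(sublot.material):
                score = (w_cost * unit.cost_increase(op, sublot)
                         - w_util * unit.expected_utilisation(op, sublot))
                if score < best_score:
                    best, best_score = (unit, op), score
        return best   # (unit, operation) with the lowest local score, or None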
A pilot study has been conducted for a relatively small real-life system to establish the appli-
cability of the model. Several scenarios have been investigated and different search paradigms,
such as GA and SA, have been tested. The results indicate that the tool developed can produce
optimal, or near-optimal, solutions within reasonable computational time, which suggests that
the attempt to develop the generic manufacturing system scheduler is well justified.

-4-
Network Design in Freight Transportation
Teodor Gabriel Crainic
Département des sciences administratives
Université du Québec à Montréal
and
Centre de recherche sur les transports
Université de Montréal, Montréal, QC, Canada

Keywords: Freight Transportation, Planning, Service Network Design, Network Design For-
mulations and Methods

Transportation is an important domain of human activity. It supports and makes possible most other social and economic activities and exchanges. Freight transportation, in particular, is
one of today’s most important activities, not only as measured by the yardstick of its own share
of a nation’s gross national product, but also by the increasing influence that the transportation
and distribution of goods have on the performance of virtually all other economic sectors.
There are several different types of players in the transportation field, each with its own
set of objectives and means. Producers of goods, for example, require transportation services
to move raw materials and intermediate commodities and to distribute final products in order
to meet demands. Hence, they determine the demand for transportation and are often called
shippers. (Other players, such as brokers, may also fall in this category.) Transportation is usu-
ally performed by carriers, such as railways, shipping lines, motor carriers, etc. Thus, one may
describe an intermodal container service or a port facility as a carrier. Governments constitute
another important group of players since they still regulate several aspects of freight transporta-
tion (dangerous and obnoxious goods transportation, for example) and provide a large part of
the transportation infrastructure: roads and highways, and often a significant portion of the
port, internal navigation, and rail facilities.
Transportation is also a complex domain, with several players and levels of decision, where
investments are capital-intensive and usually require long implementation delays. Furthermore,
freight transportation has to adapt to rapidly changing political, social and economic conditions
and trends. It is thus a domain where accurate and efficient methods and tools are required to
assist and enhance the analysis of planning and decision-making processes.
In this paper we focus on freight transportation carriers. This industry, as many other
economic sectors, has to adapt to rapidly changing political, social and economic conditions
and trends. It has, in particular, to achieve high performance levels both in terms of economic
efficiency and of service quality. Economic efficiency because a transportation firm has to make
profits while at the same time evolving in an increasingly open and competitive market where
cost (or cost for a given quality level) is still the major decision factor in selecting a carrier or
distribution firm. Yet, one also observes an increasing emphasis on the quality of the service
offered. Indeed, the new paradigms of production and management (e.g., small or no inventories
associated to just-in-time procurement, production and distribution, quality control of the entire
logistics chain driven by customer demand and requirements, etc.) impose high service standards
on carriers. This applies, in particular, to the response time (the ability to rapidly respond to
customer demands), total delivery time (be there fast) and service reliability (be there within

-5-
specified limits and be consistent in your performance).
Transportation systems are rather complex organizations which involve a great deal of human
and material resources and which display intricate relationships and trade-offs among the various
decisions and management policies affecting their different components. It is then convenient to
classify these policies according to the following three planning levels:

1. Strategic (long term) planning at the firm level typically involves the highest level of man-
agement and requires large capital investments over long time horizons. Strategic decisions
determine general development policies and broadly shape the operating strategies of the
system. Prime examples of decisions at this planning level are the design of the physical
network and its evolution (upgrading or resizing), the location of main facilities (rail yards,
multimodal platforms, etc.), resource acquisition (motive power units, rolling-stock, etc.),
the definition of broad service and tariff policies, etc.

2. Tactical (medium term) planning aims to ensure, over a medium term horizon, an efficient
and rational allocation of existing resources in order to improve the performance of the
whole system. At this level, data is aggregated, policies are somewhat abstracted and
decisions are sensitive only to broad variations in data and system parameters (such as
the seasonal changes in traffic demand) without incorporating the day-to-day information.
Tactical decisions need to be made mainly concerning the design of the service network, i.e.,
route choice and type of service to operate, general operating rules for each terminal and
work allocation among terminals, traffic routing using the available services and terminals,
repositioning of resources (e.g., empty vehicles) for use in the next planning period.

3. Operational (short term) planning is performed by local management (yardmasters and dispatchers, for example) in a highly dynamic environment where the time factor plays an
important role and detailed representations of vehicles, facilities and activities are essen-
tial. Scheduling of services, maintenance activities, crews, etc., routing and dispatching of
vehicles and crews, resource allocation are important operational decisions.

A significant number of these challenging planning and operational issues may be repre-
sented as network flow optimization problems, fixed charge, capacitated, multicommodity net-
work design formulations, in particular. This class of formulations appears prominently not
only in freight carrier planning but also when addressing issues in infrastructure construction or
improvement, telecommunication network design and dimensioning, power system design, etc.
Here, several “commodities” (goods, data packets, people,...) have to be moved over the links of
a network with limited capacities from their respective origins to their respective destinations.
Furthermore, other than the usual “transportation” costs related to the volume of each com-
modity flowing through a given link, a “fixed” (construction or utilization) cost is paid as soon as
a link is used. The trade-off between the variable and fixed costs inherent in the selection of any
given solution, as well as the interplay between the limited capacity (and the resulting competi-
tion among the various commodities) and the fixed costs of using the links of the network, makes
for a formulation which presents both a broad modelling appeal and serious obstacles when one
attempts to solve realistically sized instances. In fact, while interesting results have been de-
rived for the uncapacitated design problem, less effort has been directed towards the capacitated
problem, which is more difficult to solve and poses considerable algorithmic challenges.
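
For concreteness, the fixed-charge, capacitated, multicommodity network design problem sketched above can be written in standard textbook notation (not quoted from this abstract) as:

    \begin{align*}
    \min\ & \sum_{k \in K} \sum_{(i,j) \in A} c_{ij}^{k}\, x_{ij}^{k} \;+\; \sum_{(i,j) \in A} f_{ij}\, y_{ij} \\
    \text{s.t. }\ & \sum_{j:(i,j) \in A} x_{ij}^{k} - \sum_{j:(j,i) \in A} x_{ji}^{k} = d_{i}^{k} && \forall\, i \in N,\ k \in K, \\
    & \sum_{k \in K} x_{ij}^{k} \le u_{ij}\, y_{ij} && \forall\, (i,j) \in A, \\
    & x_{ij}^{k} \ge 0, \qquad y_{ij} \in \{0, 1\},
    \end{align*}

where x_{ij}^k is the flow of commodity k on link (i,j), y_{ij} indicates whether the link is used, c and f are the variable and fixed costs, u_{ij} is the link capacity, and d_i^k are the node balances (supply at origins, demand at destinations).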
In this paper, we first briefly describe some of the major planning and operational issues
that challenge freight carriers, mostly at the strategic and tactical planning levels. This directly
leads to a review of the main types of network design formulations proposed to address these
issues and the associated algorithmic developments. We conclude with some general remarks
and possible research issues.

-6-
Modeling in the Data Warehouse World
Dan Dolk
Naval Postgraduate School, Monterey, CA, USA

Keywords: Model management, model warehouse, data warehouse, decision metrics, multi
criteria models

The data warehouse phenomenon has supplied a very practical foundation for decision sup-
port and modeling applications. The ability to capture data from multiple operational source
databases and retrieve it efficiently across many different dimensions has always been a major
impediment to the timely delivery of useful decision making information. This is especially true
for modeling projects where data is more often the bottleneck to a model’s success than the
model itself. Data warehouse technology promises not only to relieve us from much of that
burden but also offers via the Web rapid and widespread dissemination of modeling results in
the bargain. This signifies a dramatic escalation in what’s possible for model management.
My talk will focus on three areas where I see fruitful research arising from the role of data
warehouses in support of modeling and model management:
1. Model Warehouse: Data warehouses are designed largely to support on-line analytical processing (OLAP), by which is typically meant some form of cross-tabulation. This is more a tool for data visualization across multiple dimensions than an analytical model which supports sensitivity analysis. It makes sense that warehouses be tailored for modeling applications
as a further step in warehouse technology. Thus, we can speak of a model warehouse as a data
warehouse built to support a particular model instance, class, or paradigm. It seems reasonable
to suppose that different warehouse structures may be useful for optimization models, say, than
for discrete event simulation models which in turn may be different than for multi criteria deci-
sion models. Thus, we can profit from investigating the relationship between classes of models
and appropriate warehouse architectures which can support them.
2. Decision Metrics: While consulting in the telecommunications industry recently, I stum-
bled upon a very useful warehouse-driven method for presenting management information which
I call decision metrics. Decision metrics are indicators of organizational values derived from
quality-based processes such as total quality management (TQM). For example, network re-
liability may be a quality-based value which has associated metrics such as Switch Capacity,
% Dropped Calls, and % Availability. The advantage of metrics is that they define a decision
space in a way that is simple to see, particularly when there is a spatial dimension present. For
example, it is easy to look at a map and see all switches in the US color coded for % Availability
depending upon whether the switch was below, within, or above predefined thresholds. Switches which are underperforming with respect to the operational metric can then be examined in more detail using the "drill down" capabilities of standard OLAP tools. When this can be done via a
desktop Web browser using up-to-the-minute information from a real time warehouse, it proves
to be an extremely valuable management decision support tool.
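
A minimal sketch of the threshold classification behind such a colour-coded display; the metric name and the threshold values are invented here, since the abstract gives no concrete figures.

    # Classify a quality metric against predefined thresholds, as in the
    # colour-coded switch map described above. Numbers are illustrative only.
    THRESHOLDS = {"pct_availability": (99.0, 99.9)}    # (lower, upper) bounds

    def classify(metric: str, value: float) -> str:
        low, high = THRESHOLDS[metric]
        if value < low:
            return "below"     # e.g. colour the switch red
        if value <= high:
            return "within"    # e.g. yellow
        return "above"         # e.g. green

    print(classify("pct_availability", 99.5))          # -> "within"

A warehouse query would supply the current metric value per switch; the classification then drives the colour coding and the drill-down entry point.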
Metrics may be as simple as a ratio or percentage but may also derive from complex models.
Specifically, multi criteria models are excellent vehicles for defining metrics which may encompass
both quantitative and qualitative dimensions. My colleagues and I have recently developed such
a model for military readiness for the U.S. Army Reserve which we have embedded in a metrics-

-7-
based decision support system called ARIES. I will discuss some of our experiences with this
project particularly regarding the data warehouse aspects of building a multi criteria model.
3. Model Integration: The dramatic increase in availability which the WWW provides for
all information resources presents wonderful opportunities in the area of model integration. If
indeed the Web gives rise to a proliferation of distributed models, this should lead, in turn, to a
natural increase in demand for integrating them. I will discuss how multi criteria-based metrics
provide a ready-made structure for model integration.

-8-
Optimization in an Error Backpropagation Neural Network
Environment with a Performance Test on a Pattern
Classification Problem
Manfred M. Fischer
Department of Economic and Social Geography, Wirtschaftsuniversität Wien, A-1090 Vienna,
Austria; and
Institute for Urban and Regional Research, Austrian Academy of Sciences, A-1010 Vienna,
Austria

Petra Staufer
Department of Economic and Social Geography, Wirtschaftsuniversität Wien, A-1090 Vienna,
Austria

Keywords: Feedforward Neural Network Training, Numerical Optimization Techniques, Error Backpropagation, Cross-Entropy Error Function, Multispectral Pixel-by-Pixel Classification

Feedforward neural networks have received a great deal of attention in geography and regional
science in recent years. The explosion of research has been accompanied by no little measure of hyperbole about the technological potential of neural networks and an associated metaphorical jargon of the field. Both may act to lessen the amount of serious attention given to computational (not to speak of artificial) neural networks.
The purpose of this paper is to dispel some of the mystique surrounding the study of compu-
tational neural networks and to make clear the relationship to statistics and optimization theory
in the context of network training. We view feedforward neural networks as a class of non-linear
mathematical models and network training as an optimization problem, i.e. a problem to mini-
mize a multivariate error function that depends on the network parameters. From this point of
view, current practice is predominantly based on minimizing the sum-of-square error function
in an error backpropagating environment, using the gradient descent technique for parameter
adjustment in on-line or off-line mode. No doubt, the backpropagation approach is a compu-
tationally efficient technique for evaluating the partial derivatives of the error function. But
the sum-of-squares error function is a less attractive objective function for pattern classification than for function approximation purposes.
In pattern classification contexts, the cross-entropy error function that allows the network
outputs to be interpreted as probabilities is preferable. This paper attempts to develop a mathematically rigorous framework for minimizing the cross-entropy function in an error backpropagat-
ing framework. In doing so, we derive the backpropagation formulae for evaluating the partial
derivatives in a computationally efficient way. Various techniques of optimizing the multiple class
cross-entropy error function to train single hidden layer neural network classifiers with softmax
output transfer functions are investigated on a real-world multispectral pixel-by-pixel classifi-
cation problem that is of fundamental importance in remote sensing. These techniques include epoch-based and batch versions of error backpropagation with gradient descent, PR-conjugate gradient, and BFGS quasi-Newton methods. The method of choice depends upon the nature of the learning
task and whether one wants to optimize learning for speed or generalization performance. It

-9-
was found that, comparatively considered, gradient descent error backpropagation provided the
best and most stable out-of-sample performance results across batch and epoch-based modes of
operation. If the goal is to maximize learning speed and a sacrifice in generalisation is accept-
able, then PR-conjugate gradient error backpropagation tends to be superior. If the training
set is very large, stochastic epoch-based versions of local optimizers should be chosen utilizing
a larger rather than a smaller epoch size to avoid unacceptable instabilities in the generalization
results.
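
A property exploited when backpropagating the multiple-class cross-entropy through softmax outputs is that the gradient with respect to the output-layer (pre-softmax) activations reduces to "outputs minus targets". The following NumPy fragment illustrates this; it is a generic sketch, not the authors' code.

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)       # for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def cross_entropy(targets, probs, eps=1e-12):
        # multiple-class cross-entropy error function
        return -np.sum(targets * np.log(probs + eps))

    def output_delta(targets, probs):
        # gradient of the cross-entropy w.r.t. the pre-softmax activations;
        # this simple form is what makes backpropagation of this error cheap
        return probs - targets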

- 10 -
Multicriteria Analysis in the Design of Telecommunications
Networks
Janusz Granat
Institute of Telecommunications, 04-894 Warsaw
and
Institute of Control and Computation Engineering Warsaw University of Technology,
00-665 Warsaw, Poland

Keywords: telecommunications networks, multicriteria model analysis, ISAAP

Planning of telecommunications networks is a highly complex process. Application of various analytical modeling and analysis techniques significantly improves the reliability and quality of network plans. Conventional network planning techniques are based on single-criterion optimization. The following decision support systems based on these techniques are used by network planners and significantly reduce the cost of building the network:

• SONET Toolkit - DSS to design robust fiber-optic networks (Bellcore).

• NETCAP - an interactive optimization system for GTE Telephone Network Planning (GTE Laboratories).

• Arachne - DSS to automate interoffice facilities planning (NYNEX Science and Technology).

The recent advances in multicriteria model analysis can be applied in solving telecommunications problems. There are some examples of using multicriteria analysis in telecommunications:

• Ring Network Design: an MCDM Approach (U. Mocci and L. Primicerio).

• Implementing Telecommunications Infrastructure: A Rural America Case (S. M. Nazem, Y. H. Liu, H. Lee, Y. Shi).

• Multiple Objective Routing in Integrated Telecommunications Networks, MCDM Conference (C. H. Antunes et al.).

An overview of multicriteria analysis in designing telecommunication networks will be presented, as well as an application of the ISAAP module. The name ISAAP is an abbreviation for Interactive Specification And Analysis of Preferences. ISAAP handles interaction with the user in the stage of the decision selection.

- 11 -
A New Model and Tabu Search Algorithm for the Direct Flight
Network Design Problem
Tore Grünert and Hans-Jürgen Sebastian and Klaus Büdenbender
Dept. of Operations Research, RWTH, Aachen, Germany

Keywords: Flight Scheduling, Freight Transportation, Network Design

This talk introduces a network design problem with a structure which is encountered in many
transportation processes. The general organization of these systems is as follows: freight has to
be transported between a large number of origins and destinations. In order to consolidate the
freight, it is first shipped to a terminal. Next, it is transported directly to a terminal, where it is
re-loaded and shipped to its destination. The task is to decide which terminals have to be used
and how the freight is transported between the terminals. We describe an application where
the terminals are airports and the freight is letter post. The problem occurs as a sub-problem
within a project that is carried out in cooperation between the ELITE foundation, the RWTH
Aachen, and the Deutsche Post AG. Here the transportation between terminals is by air, which
is required due to time window constraints. The economic impact of these decisions is huge
since air transportation is costly and the process is repeated every night.
The task is to organize the letter mail transportation between 84 depots. The 84 depots are
called letter mail centers (LMC) and are distributed throughout Germany with the international
postal center at the Frankfurt am Main airport. The process of mail collection and distribution
consists of 5 sub-processes:
1. Input mail collection. Mail is collected directly from customers and mailboxes and trans-
ported to the regional letter mail center.
2. Sorting input mail. The collected mail is sorted according to the destination LMC. This
process finishes at about 09.15 pm in most LMCs.
3. Transportation from LMC to LMC. The letter mail for all 84 x 83 = 6972 LMC-to-LMC relations
is transported by air and ground. The average daily freight volume is about 1,500 metric
tons. All mail has to arrive no later than 04.15 am.
4. Sorting output mail. The incoming mail from other LMCs is sorted according to its local
output destination.
5. Transportation to local delivery points. The outgoing mail is transported to the local
delivery point, where it is picked up by the mailman and delivered to the customer.
Our focus is on the third process, the transportation from LMC to LMC. Each such transporta-
tion requirement can be considered a load. It can be seen that the average time window for
transportation is 7 hours. This tight time-window constraint requires that a fraction of the post
is transported by plane in the so-called night airmail network. We will only consider the case
where the loads are flown directly between airports. In this case it has to be decided which loads
are flown by which flights. A flight is defined by take-off and landing airport and aircraft type.
There exist constraints on the take-off and landing times, the number of available aircraft of
different types and on the number of possible take-offs and landings.

- 12 -
We show how the problem can be modeled as a capacitated warehouse location problem with
side constraints. The model is based on the consideration of possible take-off times. However,
even the computation of an initially feasible solution is NP-complete due to the side constraints.
We have developed a variant of a greedy heuristic that attempts to find good feasible solutions.
The feasible solutions are then improved by a hybrid combination of Tabu Search and branch
and bound. The general idea is to fix a large subset of the binary variables and to designate
the remaining binary variables free. The values of the free variables are then determined by
branch and bound. The Tabu Search phase guides the selection of the free variables by memory
based components and candidate lists. Candidate lists become necessary in order to evaluate the
neighborhood of the current solution effectively. The memory components avoid cycling of the
search and guide the search through diversification. The diversification is based on frequency
information.
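
The hybrid strategy described above might be organised roughly as in the following Python sketch; the callables select_free, solve_restricted and cost stand in for components the abstract only names, so this is a schematic outline rather than the authors' implementation.

    from collections import deque

    def tabu_branch_and_bound(all_vars, initial, select_free, solve_restricted,
                              cost, iterations=100, tabu_tenure=10):
        """Tabu-search-guided branch and bound (schematic).

        select_free(solution, all_vars, tabu, freq) -> variables left free
        solve_restricted(fixed_values, free_vars)   -> new solution (exact B&B)
        cost(solution)                              -> objective value
        """
        best = current = initial
        tabu = deque(maxlen=tabu_tenure)             # short-term memory
        freq = {v: 0 for v in all_vars}              # long-term memory (diversification)
        for _ in range(iterations):
            free = select_free(current, all_vars, tabu, freq)
            fixed = {v: current[v] for v in all_vars if v not in free}
            current = solve_restricted(fixed, free)  # optimise the free variables exactly
            for v in free:
                freq[v] += 1                         # frequency information
                tabu.append(v)                       # helps avoid cycling
            if cost(current) < cost(best):
                best = current
        return best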
Computational results demonstrate the effectiveness of the approach. Even large, real-world
instances can be solved within an acceptable amount of time. In the terminology of warehouse
location problems these have up to 20,000 warehouses and 3,000 customers. The solutions
indicate substantial possibilities for improvement of the current solution.

- 13 -
Optimal Task Assignment of Program Modules in Distributed
Systems by Simulated Annealing
Yskandar Hamam
ESIEE, Paris, France

Keywords: Distributed systems, simulated annealing, task assignment

In this paper, we present a simulated annealing approach to the task assignment of program
modules to processors in a distributed computer system. Modules of a program communicate at a given rate and require a given computing capacity. Processors are interconnected by a communication network constituted of various types of links: local area network, wide area
network and specialised links. Two versions of the algorithm are presented. In the first the
capacity of the various components of the network are considered given whereas the second
version considers the capacity of the components to be determined.

Introduction
The task assignment problem has received intensive attention in parallel and multi-processor configurations. Examples of such treatment are given in [5],[7]. In the work presented in this paper we are interested in the case of the assignment of interdependent tasks to processors in distributed computer systems.
The solution of the task assignment of program modules to processors in a distributed com-
puter system is useful at the design stage where it helps in the determination of the configuration
necessary to obtain a suitable performance level. Three types of algorithms have been proposed
to solve this problem:

• Exact algorithms applied to simplified problems [1],[4]

• Heuristic algorithms applied to the full problem or a simplified version [1],[2],[6]

• Exact algorithms applied to the full problem [2]

This problem is strongly NP-complete and solving the exact problem, though possible using
branch and bound [2], is time consuming.
Simulated annealing has been shown to be quite efficient in many combinatorial optimisation
problems [8]. In this paper we will apply this method to the task allocation problem.

Definition of the allocation problem


The program modules are characterised by the following properties:

• Computation time requirement

• Memory requirement

- 14 -
Program modules communicate data at a given rate. A graph describing this communication is given. Each node in the graph corresponds to a module and oriented arcs correspond to communicating modules.
The distributed processing system is defined by both the processors and the interconnecting network. Each processor is characterised by its processing speed and available memory.
The network is decomposed into sub-networks (LANs, WANs, direct links), each characterised by its transmission capacity. A virtual link is pre-defined between any two processors
[2].
The constraints used during the optimisation are:

• Each module is to be assigned to one and only one processor.

• The sum of the computational requirements for modules on any given processor is limited
to the processor's computational capacity (the same holds for the memory requirements).

• The virtual path between any two processors has a capacity equal to that of the weakest
connection encountered along the path.

• The sum of the flows in any given sub-network is limited to the capacity of the sub-
network.

The objective function consists of both processor and network communication costs. A zero
communication cost is incurred if a pair of communicating modules are on the same processor.

The Simulated Annealing Algorithm


A simulated annealing algorithm is characterised by both the coding of solutions and neighbour-
hood structure as well as the cooling schedule.

• Coding of solutions
Each solution is coded by assigning each program module to a processor. Thus for P processors and M program modules, M variables x_m (m = 1, 2, ..., M) are defined, each assigned a value between 1 and P.

• Neighbourhood structure
A neighbour solution is obtained by choosing a module m with assigned processor x_m = p, and choosing at random a processor p' to replace p, thus changing the value of x_m to p'.

• Calculating the cost
Two arrays are initially defined: a two-dimensional array of size M × P with the cost of
assigning module m to processor p. Another array of dimension P × P is precalculated
with the per unit cost of transmitting data from processor i to processor j. The cost is
updated at each iteration incrementally. Thus cost update computation time is rendered
independent of the problem size.

• Cooling Schedule
A geometric cooling schedule is used. The temperature is reduced in the following manner: T_{k+1} = α × T_k, where α is a constant less than one. The chain length at each temperature is updated in a similar manner: L_{k+1} = β × L_k, where β is a constant greater than one.

The initial and final temperatures are calculated analytically using an expression developed
by the author from the variance of the cost function at each temperature.
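
Putting the pieces together, the annealing loop with the geometric cooling schedule could be sketched in Python as follows; the move generator, the cost function and the parameter values are illustrative placeholders, not the author's actual code.

    import math
    import random

    def simulated_annealing(assignment, cost, neighbour, t0=100.0, t_final=0.1,
                            alpha=0.95, beta=1.05, l0=50):
        """assignment: dict module -> processor; neighbour(a) returns a modified copy.
        alpha < 1 shrinks the temperature, beta > 1 grows the chain length."""
        current, best = dict(assignment), dict(assignment)
        t, chain_len = t0, float(l0)
        while t > t_final:
            for _ in range(int(chain_len)):
                candidate = neighbour(current)              # reassign one module
                delta = cost(candidate) - cost(current)
                if delta <= 0 or random.random() < math.exp(-delta / t):
                    current = candidate
                    if cost(current) < cost(best):
                        best = dict(current)
            t *= alpha            # T_{k+1} = alpha * T_k
            chain_len *= beta     # L_{k+1} = beta * L_k
        return best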

- 15 -
Results and Conclusions
The algorithm was tested on real-size problems. It is quite efficient and the results are promising.
Experimental results will be given in the full paper. The algorithm is now being adapted to
solve design and reinforcement problems.

References
[1] D.Fernandez-Baca & A.Medepalli: ’Exact and Approximate Algorithms for Assignment
Problems in Distributed Systems’, TR 92-29, Report, Iowa State University of Science &
Technology, Oct. 1992.

[2] A. Hagin, G.Dermler & K.Rothermel: ’Problem Formulation, Model and Algorithms for
Mapping Distributed Multimedia Applications to Distributed Computer Systems’, Technical
Report, Universität Stuttgart, Fakultät Informatik, February, 1996.

[3] M.G.Norman & P.Thanisch: ’Models of Machines and Computation for Mapping in Multi-
Computers’, ACM Computing Surveys, 3, 1993, pp. 163-302.

[4] J.B.Sinclair: ’Efficient Computation of Optimal Assignments for Distributed Tasks’, Journal
of Parallel & Distributed Computing, 1987, pp. 342-362.

[5] F.Berman & L.Snyder: ’On Mapping Parallel Algorithms in Parallel Architecture’, Journal
of Parallel & Distributed Computing, 1987, pp. 439-458.

[6] V.M.Lo: ’Heuristic algorithms for Task Assignment in Distributed Systems’, IEEE Trans.
Comut. 1988, pp. 1384-1397.

[7] C.E.Houstis & M.Aboelaze: ’A Comparative Performance Analysis of Mapping Applica-


tions to Parallel Multi-Processor Systems: a Case Study’, Journal of Parallel & Distributed
Computing, 1991, pp. 17-29.

[8] C.R.Reeves: ’Modern Heuristic Techniques for Combinatorial Problems’, Blackwell Scientific
Publications, 1993.

- 16 -
Scheduling in food industry: object classes and constraints.

Rob Hartog and Iris Houba and Adrie Beulens

1) Wageningen Agricultural University, Department of Computer Science, 6703 HB Wageningen, The Netherlands
2) ATO-DLO, 6700AA Wageningen, The Netherlands

Keywords: scheduling, modelling, food industry, object, constraint

The demand for automated support for planning and scheduling in food industries increases
due to increased product variety and increasing quality demands. A decision support system
for detailed planning could improve the performance of food industries and reduce the degree
of dependency on the experience of the human production planner(s). Constraint satisfaction
techniques have, to a limited extent, been used successfully to solve detailed-planning problems
in practice, mainly in discrete production environments.
Application of constraint satisfaction techniques requires a model of the planning situation.
The objective of our research is to support the development of suitable models of detailed-
planning (operational planning, scheduling) situations in food industry. Suitable in this case
implies: suitable to make effective use of constraint satisfaction techniques, and suitable to be
integrated in Enterprise Resource Planning systems. Thus object classes, objects, constraint
classes and constraints will be the building blocks of the models. Our approach is to try to
identify typical modelling decisions and to arrive at a set of partial models that can be used
as reference when a new planning situation in food industry is encountered, thus reducing the
effort required to design a model for a specific planning situation.
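
As a small illustration of objects and constraints as building blocks (written in plain Python rather than the ILOG class libraries the authors use; all class and attribute names are invented):

    from dataclasses import dataclass

    @dataclass
    class Batch:                        # an order or production lot
        product: str
        quantity: float
        shelf_life_hours: float         # short shelf life is typical in the food industry

    @dataclass
    class Resource:                     # e.g. a mixing vessel or a packing line
        name: str
        capacity_per_hour: float

    def shelf_life_constraint(batch: Batch, start_hour: float, end_hour: float) -> bool:
        # the batch must be completely processed before its shelf life expires
        return (end_hour - start_hour) <= batch.shelf_life_hours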
The paper will describe some important characteristics of operational planning in the food
industry. These encompass: variable price and quality of inputs, short shelf life of many products
and raw materials, tracking and tracing requirements, the necessity to communicate with other
links in the supply chain, push and pull effects, existing planning levels and (rolling) planning
procedures, expensive production facilities of a semi-process type. The requirement of tracking
and tracing in particular increases the complexity of the planning situation. The paper will
illustrate these characteristics with descriptions of situations in a milk powder factory and a
salad factory.
The paper will identify the most important modelling decisions and compare several can-
didate models for planning situations. Furthermore the paper will discuss some experience in
implementing these candidate models using the class libraries of ILOG(TM).
The comparison will be based on the following criteria:
• Quality of the resulting solution for the planning problem.
• Effort required to change the model when the planning situation changes.
• The possibility of tracking and tracing based on the model.
• Effort required to reformulate constraints.
• The degree to which the model supports a varying time-resolution.
• Ease of incorporating specific knowledge (for example quality of ingredients) into the model.
• Computational efficiency of finding a solution.

- 17 -
References
Tsang E., "Foundations of Constraint Satisfaction", Academic Press Limited, 1993.
Dockx Kris, De Boeck Yvan, Meert Kürt, "Interactive Scheduling in the Chemical Process Industry", Computers Chem. Engng, Vol. 21, No. 9, pp. 925-945.
ILOG, "Ilog Scheduler Reference Manual", version 4.0, 1997.

- 18 -
A Multi-Depot Pickup-and-Delivery Problem with
Heterogeneous Vehicles
Stefan Irnich
Lehrstuhl für Unternehmensforschung, RWTH Aachen

Keywords: Vehicle Routing and Scheduling, Multi-Depot, Pickup-and-Delivery, Heterogeneous Vehicles, Set Covering Problem

The talk introduces a new type of vehicle routing and scheduling problem. It can be char-
acterised as a special multi-depot pickup-and-delivery problem with heterogeneous vehicles.
There is a unique location working as a consolidation point or hub. All the other locations
can be considered as pickup and delivery points. The task is either to pick up a commodity at
a pickup point and bring it to the hub or deliver a commodity available at the hub and bring it
to the corresponding destination point. Every commodity c has a corresponding time window t_c = [t_c^1, t_c^2] representing the time interval available for transportation. In the first case of delivering commodities to the hub, t_c^1 is the earliest pickup time and t_c^2 the latest time to deliver at the hub. In the second case of delivering to a location, t_c^1 is the earliest pickup time at the hub and t_c^2 is the latest delivery time. It is always possible to have two or more commodities with different time windows at one location.
All vehicles are located at pickup and delivery points, so a feasible route starts at one of
these locations, visits the hub and ends at the same location. In addition to these short tours it
is possible to visit more locations before arriving or after leaving the hub. But due to the tight
time windows the number of locations visited by one tour is always small (at most four). Three
types of tours can occur: Firstly, a vehicle is only used to pick up commodities at one or more
points, brings them to the hub and goes back to its starting location without loading at the hub.
Secondly, a vehicle can be sent empty to the hub in order to pick up commodities and deliver
them to the starting location and possibly some additional locations. Thirdly, a vehicle can be
used for pickup and delivery. In this case it may be imposed by the time window restrictions
that the vehicle has to wait at the hub.
The objective is to minimize the overall transportation costs. Costs for the use of a vehicle
consist of two components, one for the route duration and one for the distance travelled.
It is possible to formulate the problem as a set covering problem (SCP). The task is to find
a cost-minimal set of feasible trips which perform all pickups and deliveries (requests). So the
columns of the SCP correspond to possible trips which are combinations of a route, a vehicle
and a set of commodities which can be transported simultaneously. Each request corresponds
to a row of the SCP.
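
In standard set covering notation (not quoted from the abstract), with T the pool of generated feasible trips, R the set of requests, c_t the cost of trip t, and a_{rt} = 1 if trip t covers request r (0 otherwise), the model reads:

    \begin{align*}
    \min\ & \sum_{t \in T} c_t\, y_t \\
    \text{s.t. }\ & \sum_{t \in T} a_{rt}\, y_t \ge 1 && \forall\, r \in R, \\
    & y_t \in \{0, 1\} && \forall\, t \in T,
    \end{align*}

where y_t = 1 if trip t is selected. The first phase of the algorithm fills the trip pool T; the Lagrangean-based heuristic of the second phase selects the columns.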
The algorithm we present here is a two phase procedure: the task of the first phase is to
generate a set of different feasible trips. We describe a procedure to generate a pool of trips.
Each request is realized (covered) by at least one, in general a set of different trips. Trips differ
in

• the route, i.e. the sequence the locations are visited

• the type of vehicle which is used on this route (different capacities, speed and time for
loading and unloading)

- 19 -
• the commodities which are transported within a certain vehicle on a given route (the
commodity decision also implies the type of trip: pure pickup, pure delivery or mixed
pickup and delivery).

Because of the huge number of possible trips it is not (always) practicable to enumerate all
of them as columns of the SCP. We present an algorithm to generate promising trips which
keeps the number of columns of the SCP algorithmically tractable. It uses different ideas for
preprocessing and some heuristic elements.
The second phase of the algorithm is the solution of the SCP. A new heuristic algorithm
based on Lagrangean-relaxation has been implemented. It is based on an algorithm developed
by Caprara, Fischetti and Toth.
Our work was inspired by a practical problem at the Deutsche Post AG, Germany's postal service. 83 letter mail centers (LMCs) are spread over all regions of Germany, each giving service to
a particular region. Every two LMCs have to exchange the letter mail addressed to customers
in the other region. Because of time restrictions letter mail transportation is partly done by
air-services. So LMCs positioned near one airport have to deliver letter mail to flights starting
at the airport. On the other hand letter mail from flights landing at the airport has to be picked
up and delivered to the LMCs. Clearly the airport corresponds to a hub, the LMCs correspond
to pickup and delivery points. The letters which have to be transported between LMCs and
flights represent the commodities.

- 20 -
Implication of embodied emissions induced by socio-economic
changes
Mikiko Kainuma
Global Environment Division, National Institute for Environmental Studies, Tsukuba, 305
Japan

Yuzuru Matsuoka
Faculty of Engineering, Kyoto University, Kyoto, 606-8501 Japan

Tsuneyuki Morita
Global Environment Division, National Institute for Environmental Studies, Tsukuba, 305
Japan

Keywords: CO2 emission, input-output model, general equilibrium model

It is very important to know how much load is added to the environment when one produces and/or consumes goods. One way to express such loads is known as Life Cycle Assessment (LCA), where environmental loads are calculated based on embodied pollution emissions. The embodied pollution emission of a good is the additional amount of emissions, including indirect ones, generated when one additional unit of that good is produced. Input-output (I/O) analysis has been used to estimate embodied pollution emissions. The I/O method is very effective for estimating emissions from a production point of view. However, it is difficult to incorporate indirect effects caused by changes of socio-economic structures such as utility and production efficiencies.
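
The standard input-output calculation of embodied (direct plus indirect) emission intensities, e = f (I - A)^(-1), which underlies the discussion above, can be sketched as follows; the three-sector coefficients are made-up illustrative numbers, not data from the study.

    import numpy as np

    # A: technical coefficient matrix (intermediate inputs per unit of output)
    A = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.10, 0.10],
                  [0.05, 0.05, 0.10]])
    # f: direct CO2 emissions per unit of output in each sector
    f = np.array([0.8, 0.3, 0.1])

    # embodied intensities: direct + indirect emissions per unit of final demand
    leontief_inverse = np.linalg.inv(np.eye(3) - A)
    embodied = f @ leontief_inverse
    print(embodied)   # emissions triggered by one additional unit of each good

As the abstract notes, such an I/O calculation holds the socio-economic structure fixed; the general equilibrium model is introduced precisely to let final demands and production efficiencies adjust.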
This study estimates how much CO2 is added or reduced when some structural change occurs.
A general equilibrium model (GE model) has been developed, and embodied CO2 emissions are calculated based on both I/O analysis and the GE model. The embodied CO2 emissions obtained with I/O models are much larger than those obtained with the GE model. In some cases, the total CO2 emission increases even
if less intermediate inputs are required because of technological improvement. When there is
a socio-economic structural change, not only final demand of interests changes, but also final
demands of other goods change. Careful consideration is necessary to estimate impacts of policies
to reduce environmental loads.

- 21 -
Water Conservation Simulation of Han River Multiple
Reservoir System with a Mixed Integer Optimization Model
Sheung-Kown Kim
Korea University, Department of Industrial Engineering

YoungJoon Park
Korea University, Department of Industrial Engineering

YongNam Yoon
Korea University, Department of Civil and Environmental Engineering

Keywords: Water supply, conservation, Multi-reservoir operation

There are numerous application examples with simulation and the mathematical optimiza-
tion model. Most of the optimization applications tried to adopt a specific economic objective directly as their primary objective function. However, the practical problem is multi-
objective in nature. Therefore, the direct use of the single objective function with economic
criterion might not provide a good practical solution.
We propose a mixed integer network linear programming optimization model for multiple
reservoir operation. It could simulate the optimal operating rule which is self-generated by the
optimization engine in the algorithm. Cost coefficients of the objective function are specified as
priority factors so that the optimal operating rule can be determined by itself. Priority factors
are specified so that desired behavioral characteristics of reservoir operation be reflected on the
results of the model. The optimization model describes the coordinated behavior of the multiple
reservoir system operation while optimizing water supply use, maintaining flood reserve volume,
minimizing unnecessary spill, and possibly maximizing the efficiency of hydro turbine operation
by maintaining higher water storage level.
The economic evaluation will be left as post analysis. We proposed three measures of system
performance. These would provide us ways to measure the level of operational performance
satisfying the given demand release requirements. They are 1) how much water could be saved
by reducing the unnecessary spill, 2) how much more water could be released through penstock,
and 3) how much higher the storage level of individual reservoirs could be maintained during the study
period.
It has been tested with the Han River multi-reservoir system in Korea, which consists of 2
large multipurpose dams and 5 hydroelectric dams. The test model with 18 years of data consists
of 44,172 nodes and 85,122 arcs and 3,920 mixed integer variables. The resulting operational
tracks in terms of storage variations of individual reservoirs show that they behaved extremely
well in comparison with known historical inflow and release records. Unless the multiple reservoirs are operated from the perspective of system optimization, obtaining the savings the model suggests might require a dam comparable in size to Hwachun, whose cost might be more than 550 million US dollars. We may still need to clarify how much of the savings resulted from
the clairvoyance of inflow data. And yet, we know that synergy of optimal system operation is
also included in the savings. It is an optimization model in structure, but it indeed simulates the

- 22 -
coordinating behavior of multi-reservoir operation. Therefore, the results demonstrate that there
is a good chance of saving a substantial amount of water should it be put to use in conjunction
with a good inflow forecasting system in real time.

- 23 -
Forecasting prices on agricultural markets in Poland by experts
judgment.
Lech Kruś
Systems Research Institute, Polish Academy of Sciences, 01-447 Warsaw, Poland

Keywords: group decision analysis, forecasting, modeling

The paper deals with a forecasting system implemented in the Agency for Agricultural
Markets in Poland.
The Agency for Agricultural Markets is a state agency created by the Parliament of the Re-
public of Poland. Its mission is to implement the state agricultural interventionist policies aimed
at stabilizing the nation’s agricultural markets and protecting economic interests of agricultural
producers. The Agency implements the policies through: interventionist purchases and sales
of raw and processed agricultural commodities on national and international markets, creating
and managing stockpiles of agricultural commodities, offering credits guarantees to business en-
tities which execute the Agency’s commissioned tasks. Statutory tasks of the Agency include
also analysis and forecasting of the agricultural markets. The scope of the Agency activities is
indicated in annual programs approved by the Council of Ministers. These programs provide
a list of products affected by the Agency’s interventionist activities, forecasted price levels at
which the Agency’s actions are initiated, and budget estimates to conduct the above activities.
The Agency’s activities focus on grain, milk and meat markets which have a significant impact
on agricultural incomes in Poland. Proper forecasting prices of the commodities on the above
markets is a base for construction of the annual interventionist program and current activities.
The forecast is made in the Agency for prices of the most important agricultural commodities
in Poland: wheat, rye, pork, beef and milk. The prices are defined as average purchasing
prices obtained by producers. These prices are observed on the market and recorded by the
Central Statistical Office. There is a group of experts selected among outstanding professionals
- scientists and practitioners in agriculture nominated by the President of the Agency. Four
times per year meetings of the experts are organized. On the basis of their opinions, intervals
of the prices forecasted for the next quarter and for the next two quarters are prepared. The
forecasting system implemented in the Agency utilizes some ideas of the Delphi method with a
computer assistance.
The system includes:
• preparation of questionnaires for the experts,
• statistical elaboration of the experts' responses obtained by correspondence before each of the
quarterly meetings,
• presentation of the provisional aggregated results (including both proposed intervals of fore-
casted prices as well as detailed motivation) to the experts at the meeting,
• substantial discussion at the meeting and correction of the expert’s opinions,
• elaboration and acceptance of the final forecast to be published by the Agency,
• process of the experts nomination based on evaluation of their opinions’ accuracy in the past.
The system as well as an experience related to the system implementation and utilization
will be presented.
The agricultural sector in Poland is currently undergoing structural changes. The whole Polish

- 24 -
economy is in a transition period. Long series of statistical data have been broken. There are gaps
in substantial statistical data currently observed. The question arises: what forecast methods
can be applied in such conditions? It has been found that econometric modeling methods for forecasting the prices can hardly be applied directly. Utilization of expert judgment seems to be more appropriate. However, each expert can of course individually apply some modeling
techniques to prepare his opinion. The author would appreciate a discussion on decision analysis
and forecasting problems referring to the above case.
(The author has been responsible for development of the system as the adviser of the Presi-
dent of the Agency).

References
Hwang Ching-Lai, Ming-Jeng Lin (1987); Group Decision Making under Multiple Criteria.
Springer Verlag, Lecture Notes in Economics and Math. Systems No 281, Berlin.
Lewandowski A., Johnson S., Wierzbicki A. P. (1986). A Prototype Selection Committee
Decision Analysis and Support System SCDAS: Theoretical Background and Computer Imple-
mentation. WP-8627, IIASA, Laxenburg, Austria.
Linstone, H. A. and M. Turoff (1975). The Delphi Method, Techniques and Applications,
Addison-Wesley, Reading, Massachusetts.
Krus, L., B. Lopuch, J. Rajtar (1985); A Model of Polish Agriculture - Assumptions, Struc-
ture and Utilization. Zagadnienia Ekonomiki Rolnej No. 6, Warsaw, Poland (in Polish).

- 25 -
Modeling paradigms applied to the analysis of cost-effective
policies aimed at improving European air quality
Marek Makowski
International Institute for Applied Systems Analysis, A-2361 Laxenburg, Austria

Keywords: decision support systems, air pollution, nonlinear models, object-oriented program-
ming, preprocessing, robustness, multiple-criterion optimization, model management, criterion
functions, constraint satisfaction problems.

In many parts of Europe the critical levels of air pollution indicators are exceeded and mea-
sures to improve air quality in these areas are needed to protect the relevant ecosystems. Several
international agreements have been reached over the last decade in Europe to reduce emissions.
Most of the current agreements determine required abatement measures solely in relation to
technical and economic characteristics of the sources of emissions, such as available abatement
technologies, costs, historic emission levels, etc. For achieving overall cost-effectiveness of strate-
gies, however, the justification of potential measures in relation to their environmental benefits
must also be taken into account. Recently, progress has been made in quantifying the environ-
mental sensitivities of various ecosystems. Critical loads and critical levels have been established
reflecting the maximum exposure of ecosystems to one or several pollutants not leading to envi-
ronmental damage in the long run. Such threshold values have been determined on a European
scale, focusing on acidification and eutrophication as well as on vegetation damage from tropo-
spheric ozone.
The Transboundary Air Pollution (TAP) Project [1] at IIASA has for several years been developing
models that are used to support the international negotiations. The models help to identify
cost-effective measures aimed at the reduction of ground-level ozone concentrations at several
hundred receptors over Europe. Such measures can be calculated by minimizing a cost function
that corresponds to the costs related to reductions of NOx and VOC emissions, subject to
constraints on the resulting ozone concentrations. The Ozone model has been developed for the
analysis of various policy options that lead to an improvement of air quality by reductions of
such emissions. However, the emissions of NOx should also conform to the standards set at each
receptor for acidification and for eutrophication. The latter problem is handled by the RAINS
model. An analysis of two separate models is cumbersome; therefore the RAINS model has been
included in the Ozone model. This in turn requires a joint consideration not only of emissions
of NOx and VOC but also of NH3 (ammonia) and SOx (sulphur oxides).
The atmospheric dispersion processes over Europe for sulfur and nitrogen compounds are
modeled based on results of the European EMEP model developed at the Norwegian Meteoro-
logical Institute [2]. For tropospheric ozone, source-receptor relationships between the precursor
emissions and the regional ozone concentrations are derived from the EMEP photo-oxidants
model.
The summary of the problem presented above illustrates its complexity. The corresponding model
is a large non-linear model. There are a number of methodological and technical issues of
specification and analysis of such a model which are of broader interest and will be discussed
during the presentation:
• The resulting model is a nonlinear one; therefore a problem-specific generator has been developed
  and coupled with three nonlinear solvers. An object-oriented programming approach to
  the model generation and analysis has been applied. Advantages of such an approach (which
  include efficiency and portability) will be presented.
• The generation of the model requires processing of a large amount of data coming from various
  sources. For efficient and portable handling of data the public domain library HDF, developed
  by the National Center for Supercomputing Applications, Illinois, USA [3], has been applied.
• A representation of environmental targets by hard constraints would result in recommendations
  of expensive solutions; hence soft constraints (with compensations for violations of the original
  targets) are specified, as illustrated in the sketch after this list.
• The resulting optimization problem typically has non-unique solutions [4]; therefore a technique
  called regularization was applied in order to provide a suboptimal solution having additional
  properties that are specified by a user.
• A minimization of costs related to measures needed for improvement of air quality is a main
goal; however, other objectives (such as robustness of a solution, trade-offs between costs and
violations of environmental standards) are also important. Therefore, a multicriteria model
analysis has been applied to this case study.
• Some instances of the model contain over 10,000 variables and constraints; therefore its
  preprocessing [5] is essential. It will be shown how much one can gain by a proper reformulation
  and preprocessing of a large non-linear model.
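As a generic illustration of the soft-constraint idea referred to in the list above (a textbook formulation, not necessarily the exact form used in the TAP models): instead of imposing an environmental target as a hard constraint at each receptor r, a nonnegative violation variable is introduced and charged for in the objective,
\[
  \min_{x,\,v}\; c(x) + \sum_{r} p_r\, v_r
  \quad \text{s.t.} \quad
  g_r(x) \le b_r + v_r, \qquad v_r \ge 0 \;\; \forall r,
\]
where c(x) is the abatement cost of the emission vector x, g_r(x) the resulting pollution indicator at receptor r, b_r the environmental standard, and p_r the compensation (penalty) rate; large values of p_r approximate the hard-constraint solution, while smaller values trade violations of the targets against cost.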
The main message of the presentation is to stress the (often forgotten) fact that no single
modeling paradigm can be successfully used for the analysis of a complex problem, especially if
the results of such an analysis are used for supporting various elements of real decision-making
processes. There are a number of rules that have to be observed during the specification of a
model in order to provide useful results. Also, various techniques of model analysis should be
used instead of the classical approaches, which are focused on and driven by either the simulation
or the optimization paradigm.

[1] http://www.iiasa.ac.at/Research/TAP/.
[2] The current state of the art of the EMEP model will be presented at the CSM'98 Workshop by Dr K. Olendrzyński.
[3] URL: http://hdf.ncsa.uiuc.edu/HDF5
[4] More exactly: the problem has many very different solutions with almost the same value of the original goal
function, corresponding to various instances of the mathematical programming problem that differ very little.
[5] Preprocessing of an optimization problem is aimed at generating another problem that has the same goal
function value as the original problem and fulfills its constraints, but which is easier to solve. It is commonly
known that preprocessing of a large optimization problem can dramatically reduce computation time and
memory requirements. Preprocessing is a standard feature of any good LP solver. However, preprocessing of
nonlinear models is a much more difficult task.

Analysis on Recycle Activities by Using Multi-Sectoral
Economic Model with Material Flow
Toshihiko Masui
National Institute for Environmental Studies, Tsukuba, Japan

Keywords: recycle, waste management, macroeconomic model, simulation

In Japan, waste management is recognized as one of the most important environmental problems
to solve because of the shortage of final disposal sites. Recently, it has been found that wastes
generated in metropolitan areas are carried to and disposed of in rural areas. This means that
the wastes from urban areas overflow the capacity of their management. In order to construct a
"sustainable society" we will have to consider not only production activities but also waste
management. Conventional research on waste management usually focuses on individual waste
streams. This approach is important for understanding their characteristics, but it does not
allow us to observe the changes of waste values in the economy; that is to say, the macroeconomic
loss or benefit of waste management cannot be evaluated. In this presentation, a new type of
macroeconomic model, which includes the material balance of wastes in economic activity, is
proposed and applied to Japan in order to evaluate the economic benefit which recycling activities
for waste management policy will bring.
This model is based on a dynamic optimization model with 11 economic sectors. The objective
function of this model is the sum of the utility from consumption over the whole period, from
1990 to 2020, so the results from this model are normative. Each sector produces economic goods
by using labor, capital, intermediate inputs and energy. On the other hand, the sectors generate
wastes, which are generally regarded as negative economic goods. In addition to the conventional
economic model, this model contains the waste management processes: reduction processes such
as incineration, a recycling process, and a final waste disposal process. The recycling activity
is the most expensive of all waste management processes, but it has the benefit of supplying
economically useful goods.
It is assumed that the recycled goods or services can be supplied mainly by the raw material
industries and the energy transfer sector. The optimal mix of waste management options is
determined by equalizing the marginal costs among these processes, subject to the limit of the
final disposal site.
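Schematically, and in our own notation rather than the model's exact specification, the structure described above can be written as
\[
  \max \; \sum_{t=1990}^{2020} \beta^{\,t-1990}\, U(C_t)
  \qquad \text{s.t.} \qquad
  D_t \le \bar{S}_t \;\; \forall t,
\]
where C_t is aggregate consumption, U a utility function and β a discount factor; the waste generated by the 11 sectors in year t must be allocated among reduction (e.g. incineration), recycling and final disposal, and the amount D_t sent to final disposal may not exceed the shrinking capacity \bar{S}_t of the disposal site (decreasing by 1% or 3% per year in the two scenarios below).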
In this analysis, two main scenarios are set as follows:
1. Business as Usual scenario: the final waste disposal site capacity will decrease by 1%/year. This
number is based on the recent trend in Japan.
2. Severe constraint scenario: the area for final waste disposal will decrease by 3%/year.
In the case of Scenario 1, the growth rates of the total economic output, waste generation,
and carbon emissions are 3.2%/year, 2.4%/year, and 1.0%/year, respectively. The supply of goods
from recycling activities will increase in the future; especially in the waste power generation and
chemical goods production sectors, more recycled goods will be introduced. In the case of
Scenario 2, the total economic output will decrease slightly compared with Scenario 1. In this
scenario, carbon emissions will increase until 2010 because of the increase in incineration. After
2010, recycling will increase because waste reduction by incineration alone will not meet the
severe waste disposal constraint. From these results, although recycling activities are generally
regarded as a costly way to manage wastes, it is recognized that these activities are the most
beneficial and efficient way under a severe constraint on the final disposal site. In addition to
the above two scenarios, other scenarios with various constraints on the final disposal site are
simulated, and the relationship between waste reduction and the marginal waste management
cost is estimated. In 2000, the constraint on the final disposal site will not affect the waste
management cost. After 2010, the waste management cost will increase exponentially with the
quantity of waste reduction. For example, in 2020, reducing 20 million tons of waste costs very
little, but reducing 60 million tons costs more than 500 thousand yen (almost 3.5 thousand
dollars) per ton.

Fuzzy Modeling and Inverse Simulation
Yoshiteru Nakamori
Japan Advanced Institute of Science and Technology

Mina Ryoke
Japan Advanced Institute of Science and Technology

Hiroyuki Tamura
Osaka University

Keywords: Inverse Simulation, Fuzzy Model, Genetic Algorithm, Fractional Programming.

In this paper, a procedure which uses a genetic algorithm followed by fractional programming is
proposed in order to carry out inverse simulation. The term inverse simulation denotes the
determination of the input values of a system that yield a desired output value, using the
information given by the mathematical model. Here, a solution to the inverse simulation of
Takagi-Sugeno fuzzy models is proposed. Fractional programming has the problem that it does
not work when all degrees of confidence of the rules are very small, because each element of the
gradient vector of the objective function is then also very small. When the past, as represented
by the membership functions, is of great importance, points with such small confidence are not
needed, and a method to exclude such points is essential. The procedure consists of two steps.
In the first step, the genetic algorithm is run to obtain a set of diverse, rough input values. The
individuals are evaluated by the squared error between the model output and the desired output
and by the degree of confidence. In the second step, fractional programming is carried out using
each element of the set obtained in the first step as an initial point, in order to obtain various
input values. On the other hand, it is a very important problem to discover a combination that
has a small degree of confidence at present but may have a large degree in the future; this
problem is related to the construction of an external rule. In such a case, where a possible input
value with a small degree of confidence is required, the evaluation function of the genetic
algorithm in the first step and the formulation of the inverse simulation are redefined.
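For readers less familiar with why fractional programming appears here, the output of a Takagi-Sugeno model with rules i = 1, ..., r can be written (in generic notation, not the paper's) as
\[
  y(x) \;=\; \frac{\sum_{i=1}^{r} w_i(x)\,\bigl(a_i^{\top} x + b_i\bigr)}{\sum_{i=1}^{r} w_i(x)},
  \qquad
  w_i(x) \;=\; \prod_{j} \mu_{ij}(x_j),
\]
so that matching y(x) to a desired output value amounts to optimizing a ratio of two functions of x, i.e. a fractional program; the denominator is the total degree of confidence of the rules, which is exactly the quantity that causes trouble when it becomes very small.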
This paper treats the inverse simulation of fuzzy models and proposes a solution method.
It is easy to extend the method to general optimization problems. A simple numerical example is
given, in which the premise and consequence variables are the same.
Future work includes fuzzy simulation considering constraints on the correlation between the
input variables, and a formulation for multiple response variables.

Computational Intelligence and Multi-objective Programming
in Financial Engineering
Hirotaka Nakayama and Shingo Yanagiuchi
Department of Applied Mathematics, Konan University, Kobe, Japan

Keywords: computational intelligence, multi-objective programming, portfolio, pattern classification

Techniques for learning and inference in computational intelligence are expected to be effectively
applied to many problems in financial engineering such as credit evaluation, portfolio selection
and so on. Among them, the author has reported on the importance of additional learning and
forgetting in machine learning, because the environment of decision making changes over time.
It is usually time consuming to relearn from the beginning. Therefore, by additional learning we
mean calculating only the increment between the current status and the new one, without
re-learning from the beginning, in order to revise the machine knowledge. On the other hand,
since the decision rule becomes more and more complex with additional learning alone, forgetting
is also necessary. Here, by forgetting we mean removing unnecessary knowledge. First, in this
paper, several techniques for additional learning and forgetting will be discussed along with
applications to some problems in financial engineering.
Furthermore, we shall discuss how multi-objective programming techniques can be effectively
applied to portfolio mix problems. In many transactions, the subjective judgment of investors,
traders, fund managers and so on plays an important role. Multi-objective programming
techniques can provide a solution by incorporating the subjective judgment of human beings.
Techniques combining non-human techniques such as computational intelligence with human
techniques such as multi-objective programming seem to be effective in many practical financial
problems.

Inequality Measures and Equitable Approaches to Location
Problems
Wlodzimierz Ogryczak
Warsaw University, Institute of Informatics, Warsaw, Poland

Keywords: Location, Multiple Criteria, Efficiency, Equity

Public goods and services are typically provided and managed by governments in response to
perceived and expressed need. The spatial distribution of public goods and services is influenced
by facility location decisions. A host of operational models has been developed to deal with
facility location optimization. Most classical location studies focus on some aspects of two
major approaches: the minimax (center) or the minisum (median) solution concepts. Both
concepts minimize only simple scalar characteristics of the distribution: the maximal distance
and the average distance, respectively. In this paper all the distances for the individual clients
are considered as the set of multiple uniform criteria to be minimized. This results in a multiple
criteria model taking into account the entire distribution of distances. Moreover, the model
enables us to link location problems with theories of inequality measurement.
In typical multiple criteria problems values of the individual objective functions are assumed
to be incomparable. The individual objective functions in our multiple criteria location model
express the same quantity (usually the distance) for various clients. Thus the functions are
uniform in the sense of the scale used and their values are directly comparable. For multiple
criteria problems with uniform and equally important objective functions we introduce an effi-
ciency concept based rather on the distribution of outcomes than on the achievement vectors
themselves. For this purpose, we assume that the preference model satisfies the principle of
impartiality (anonymity)
While locating public facilities, the preference model should take into account equity of the
effects (distances). Equity is, essentially, an abstract socio-political concept that implies fairness
and justice. Nevertheless, equity is usually quantified with the so-called inequality measures to
be minimized. Inequality measures were primarily studied in economics. However, Marsh and
Schilling (1994) describe twenty different measures proposed in the literature to gauge the level
of equity in facility location alternatives. Among many inequality measures perhaps the most
commonly accepted by economists is the Gini coefficient, which has been recently also analyzed
in the location context. When applied to the multiple criteria problem, direct minimization of
typical inequality measures contradicts the Pareto–optimality. As noticed by Erkut (1993), it is
rather a common flaw of all the relative inequality measures that while moving away from the
spatial units to be serviced one gets better values of the measure as the relative distances become
closer to one-another. As an extreme, one may consider an unconstrained continuous (single-
facility) location problem and find that the facility located at (or near) infinity will provide
(almost) perfectly equal service (in fact, rather lack of service) to all the spatial units.
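The flaw noted by Erkut can be made concrete with a small numerical sketch (hypothetical distances, plain Python): moving the facility far away adds the same large constant to every client's distance, so a relative measure such as the Gini coefficient improves even though the service clearly deteriorates.

    # Gini coefficient of a vector of client distances:
    # mean absolute difference between all pairs, divided by twice the mean.
    def gini(distances):
        n = len(distances)
        mean = sum(distances) / n
        mad = sum(abs(a - b) for a in distances for b in distances) / (n * n)
        return mad / (2 * mean)

    near = [1.0, 2.0, 3.0]             # facility located close to the three clients
    far = [d + 100.0 for d in near]    # same facility pushed far away from all clients

    print(round(gini(near), 3))   # 0.222
    print(round(gini(far), 3))    # 0.004 -- "more equal", although every client is worse off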
According to the theory of equity measurement the preference model should satisfy the
(Pigou-Dalton) principle of transfers. The principle of transfers states that a transfer of small
amount from an outcome to any relatively worse-off outcome results in a more preferred achieve-
ment vector. Requirement of impartiality and the principle of transfers, i.e. two crucial axioms
of inequality measures, do not contradict the multiple criteria optimization rules. A solution

concept satisfying all these properties is called an equitably efficient (E-E) solution concept, and
the location pattern generated by this concept is called the equitably efficient solution.
In the paper we develop the basic theory and methodology for the E-E solution concepts for
location problems. Various equitable multicriteria solution concepts are analyzed. Special
attention is paid to the concepts based on the bicriteria optimization of the mean distance
and the absolute inequality measures. The restrictions on the trade-offs are identified which
guarantee that the bicriteria approaches comply with the rules of equitable multicriteria
optimization.

References:
Erkut, E. (1993), 'Inequality Measures for Location Problems', Location Science, vol. 1, pp. 199-217.
Marsh, M.T. and Schilling, D.A. (1994), 'Equity Measurement in Facility Location Analysis: A
Review and Framework', European Journal of Operational Research, vol. 74, pp. 1-17.

Modeling Languages as a Teaching Challenge
Jerzy Paczyński
Institute of Control and Computation Engineering, Warsaw University of Technology, 00-665
Warsaw, Poland

Keywords: modeling languages, teaching

Modeling and simulation plays an increasing role in any engineering activity. At present there
are hundreds of modeling languages and new ones are emerging. Some standardising efforts are
undertaken, but up to now rather without significant impact. On the other hand scientists and
engineers attack new areas in which new domain-specific features in the modeling process are
necessary. In many modeling problems the simulation part is coupled with the optimisation
part. This creates additional challenges for inexperienced users.
Modeling languages are sometimes described as being of a higher level than universal pro-
gramming languages. While there is a well established tradition of teaching about the latter
group, it seems that similar courses on modeling languages are currently missing. The goal of
the presented approach is to give the students a presentation of basic ideas, concepts and their
implementations with minimal possible bias of a particular domain. Such knowledge should pro-
mote a more literate way of using existing systems with a better understanding of their strong
points and their limitations. Also in case of need it would eventually allow for the design of
simple extensions to existing modeling systems.
The most controversial but crucial decision is the selection of material. The course will be
concentrated around analytical models. Logical models and models in the form of numerical
data will be only briefly introduced. The presentation of material will be done according to
the underlying mathematical concepts, e.g. algebraic equations, ordinary differential equations,
partial differential equations, differential algebraic equations, hybrid systems. More stress will
be on the following topics:

1. programming paradigms; model definition languages; syntax and semantics;

2. symbolic model preprocessing;

3. software tools applicable at different stages;

4. solvers and their interfaces.

The current outline of the course is as presented above. However, the incorporation of
forthcoming theoretical achievements and the dissemination of information about the newest
implementations (especially those distributed free of charge for teaching and scientific usage)
will remain the essential issue of the course in the future.
The course was prepared in the framework of activities of the TEMPUS Structural Joint
European Project No. 11253-96.

Models, Algorithms, and Decision Support Systems for Letter
Mail Logistics
Hans-Jürgen Sebastian and Tore Grünert
Dept. of Operations Research, RWTH Aachen, Germany

Keywords: Transportation Logistics, Routing, Scheduling, Network Design

The reorganization of the Deutsche Post AG imposed massive structural and organizational
changes. These changes strongly influence the design and operations of the logistic network.
Here we will focus on the so-called main transportation network. It consists of the network
connecting 83 letter mail centers distributed throughout Germany as well as the international
letter mail center at the Frankfurt am Main airport. The planners have to decide how an
average of about 1,500 tons of letter mail is transported between the letter mail centers each
night. Moreover the system should be able to deal with special situations, such as the strong
quantity increases before Christmas.
The Deutsche Post AG and the ELITE Foundation are implementing a Decision Support
System for this specific planning task in cooperation with the RWTH Aachen. The system
is currently running at the Deutsche Post AG’s branch in Bonn and is simultaneously being
extended and improved. It is based on a client-server architecture, including a Geographical
Information System (GIS), a relational data-base system, the Graphical User Interface (GUI),
and a number of optimization algorithms for different planning tasks.
In order to reduce the complexity of the planning problem, it was decided to divide the prob-
lem into sub-problems, which are solved sequentially. There is, of course, always the possibility
to backtrack to an earlier planning stage, if necessary. The sub-problems include
• the night airmail network,
• the ground feeding transportation design,
• the road network and hub design problem,
• hub vehicle scheduling, and
• the direct loading vehicle scheduling problem.
The process of letter mail collection can be roughly divided into five sub-processes: first,
the letter mail is collected from the mailboxes at each letter mail center. It is then sorted
according to the destination letter mail center. This process finishes at about 21.15 hours. In
the third step, the letter mail is transported between the letter mail centers. This process has
to be finished no later than 04.15 next morning, resulting in a transportation time window
of about 7 hours. However, due to sorting capacities it must be guaranteed that the letter
mail arrives almost continuously during the 7-hour period, effectively making the problem an
inventory-routing design problem. The incoming letters are then sorted in correspondence to
their local destination at every letter mail center. Eventually, the letter mail is transported to
pick-up points where it is collected by the postman.
The tight time window constraint forces a fraction of about 20% of the letter mail to be transported
in the so-called night airmail network. However, it might be optimal to increase this fraction in

order to save road transportation costs. The assignment of letter mail to either the night airmail
network or the road network is in itself an optimization problem, which can only be solved by
consideration of the entire transportation network.
For a given quantity of letter mail the optimization of the night airmail network consists
of finding an assignment of the letter mail for each origin-destination pair to a flight. A flight
is defined by its take-off and landing airport, the take-off time, and the type of aircraft. It
can be proved that only a limited number of take-off times have to be considered, thus re-
ducing the number of possible flights. This observation leads to a model that is similar to a
warehouse-location (or facility location, plant location) model. However, its size (several thou-
sands of warehouses and customers) requires heuristic methods to be used. We have developed
a Tabu Search algorithm, which iteratively calls the commercial mixed-integer solver CPLEX.
The results indicate a strong potential for savings compared to the existing solution.
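For orientation, a generic warehouse-location formulation of the kind alluded to above might look as follows; here flights play the role of warehouses and origin-destination mail flows the role of customers (our illustrative notation, not the project's actual model):
\[
  \min \; \sum_{f} c_f\, y_f + \sum_{k}\sum_{f} t_{kf}\, x_{kf}
  \quad \text{s.t.} \quad
  \sum_{f} x_{kf} = 1 \;\; \forall k, \qquad
  x_{kf} \le y_f \;\; \forall k, f, \qquad
  x_{kf},\, y_f \in \{0,1\},
\]
where y_f = 1 if flight f (defined by its airports, take-off time and aircraft type) is operated at cost c_f, and x_{kf} assigns origin-destination pair k to flight f at cost t_{kf}. One plausible division of labour, consistent with the abstract, is that the Tabu Search explores the open/closed status of the y variables while CPLEX solves the remaining mixed-integer subproblems.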
The ground feeding of the night airmail network is performed by vehicles of different size and
speed. The objective is to design vehicle trips from the letter mail centers to the airport and back,
so that on-time delivery of letter mail at the airport is guaranteed. There is a limited possibility
of by-passing other letter mail centers on the way to the airport and picking up additional mail
in order to save transportation costs. The number of additional pick-ups is, however, limited to
two or three due to the time window constraints. We have chosen to generate a large number of
tours and to model the problem as a set covering problem. The model is solved by a Lagrangean
heuristic for set covering.
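A minimal sketch of a Lagrangean heuristic for set covering of the kind mentioned above (a generic subgradient scheme in Python; all names, data and parameters are illustrative and not taken from the project):

    def lagrangean_set_cover(costs, covers, iters=200):
        """costs[j]: cost of candidate tour j; covers[j]: set of centers served by tour j.
        Assumes every center is covered by at least one candidate tour."""
        centers = set().union(*covers)
        lam = {i: 0.0 for i in centers}               # multipliers of the covering constraints
        best_bound, best = 0.0, (float("inf"), [])

        for k in range(iters):
            # Relaxed problem: select tour j iff its reduced cost is negative.
            reduced = [costs[j] - sum(lam[i] for i in covers[j]) for j in range(len(costs))]
            x = [1 if rc < 0 else 0 for rc in reduced]
            best_bound = max(best_bound, sum(rc for rc in reduced if rc < 0) + sum(lam.values()))

            # Greedy repair: build a feasible cover, cheapest cost per newly covered center first.
            uncovered, chosen, total = set(centers), [], 0.0
            while uncovered:
                j = min((j for j in range(len(costs)) if covers[j] & uncovered),
                        key=lambda j: costs[j] / len(covers[j] & uncovered))
                chosen.append(j); total += costs[j]; uncovered -= covers[j]
            if total < best[0]:
                best = (total, chosen)

            # Subgradient step on the multipliers of the covering constraints.
            step = 1.0 / (k + 1)
            for i in centers:
                g = 1 - sum(x[j] for j in range(len(costs)) if i in covers[j])
                lam[i] = max(0.0, lam[i] + step * g)

        return best_bound, best

    # tiny illustrative instance: 3 letter mail centers, 4 candidate feeder tours
    print(lagrangean_set_cover([3.0, 2.0, 2.5, 1.5], [{0, 1}, {1, 2}, {0, 2}, {2}]))

The real implementation would of course work with time windows and many thousands of candidate tours; the sketch only shows the interplay of relaxation, repair and multiplier updates.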
Most letter mail quantities between the different letter mail centers are rather small. It is,
therefore, desirable to consolidate the letter mail in a hub system. On the other hand, hub
consolidation is time-consuming and rather costly due to sorting costs - especially since the
sorting process is performed manually. Due to the requirement of almost continuous input, it is
only feasible to delay a fraction of the mail by hub consolidation. Moreover, the feasibility of the
process can only be checked if the entire system is considered simultaneously. We have developed
an Evolutionary Algorithm which performs the optimization, i.e. whether to transport by hub
consolidation or direct loads, based on modifications of vehicle schedules. Letter mail is then
re-assigned to the modified schedules in the algorithm. The algorithm is able to improve initial
solutions considerably.
The hub and direct loading vehicle scheduling problems are similar to the ground feeding
problems and can again be solved by a set covering approach. However, due to the size of the
problems heuristic preprocessing methods have to be employed.
The Decision Support System also contains a large number of methods for cost and resource
analysis, which support manual modifications of the computer-generated plans. These methods
have increased the user-acceptance of the system and enable users to gain an impression of the
system-wide consequences of their decisions.

A Modelling Paradigm for Supply Chain Management Concepts
Jack Van der Vorst
Department of Management Studies, Wageningen Agricultural University

Adrie Beulens
Department of Computer Science

Keywords: Supply chain modelling, Simulation model, Uncertainty

Supply Chain Management (SCM) is generally associated with the reduction of time delays
in goods and information flows, and the elimination of as many non-added-value operations as
possible (see, for example, Scott and Westbrook, 1991; Ellram, 1991; Kurt Salmon Associates,
1993). A driving force behind SCM is the recognition that sub-optimisation occurs if each
organisation in the supply chain attempts to optimise its own results rather than to integrate its
goals and activities with other organisations to optimise the results of the chain (Cooper et al.,
1997). SCM should recognise end-customer service level requirements, define where to position
inventories along the supply chain and how much to stock at each point, and it should develop
the appropriate policies and procedures for managing the supply chain as a single entity (Jones
and Riley, 1985). In the paper we will focus on the ontology used to model alternative supply
chain scenarios. A discrete dynamic simulation model developed for a supply chain for chilled
products will be used as an example.
Our research focuses on the improvement of the logistical performance of food supply chains.
This performance is measured with two main indicators: customer service (in this case product
freshness for the final consumer) and the total cost of all logistical activities (e.g. transport,
storage, order picking, administration, write-offs). It is our belief that the improvement of the
chain performance is directly related to the reduction or elimination of uncertainties in the supply
chain. Deduced from literature and practical experience, we distinguish four main clusters of
sources of uncertainty in logistical control that restrict operational performance: order-forecast
horizon, input data, administrative and decision processes, and inherent uncertainties (Van der
Vorst et al., 1998). These uncertainties should be systematically and jointly tackled by all stages
in the supply chain by increased co-operation and the tuning of logistic control concepts. This
is exactly what makes supply chain modelling different from traditional modelling; the usual
’givens’ when modelling an individual organisation become variable in chain perspective. For
example, supplier delivery uncertainty is usually seen as a factor that can not be influenced at
the level of the individual business, as opposed to at the level of the supply chain.
The central issue in the paper is the modeling of supply chains to identify all relevant
uncertainties, to evaluate alternative supply chain designs on performance and organizational
consequences and to support supply chain decision making. The translation of the actual supply
chain into a simulation model is done via a number of steps (of which the first five will be
discussed in detail in the paper):
(1) Distinguish the relevant processes with the relevant business entities to be modelled. A
business entity represents an information flow or physical flow of goods, or it can represent the
availability of a certain capacity of either human or machine. It represents a complex entity with
one or more attributes and a time stamp. The process (systems) approach is used to obtain

a hierarchical construct of relevant places, transitions and corresponding business entities.
Relevant in the sense that they (directly or indirectly) influence logistical performance. We
have found that ODL (Organisation Description Language) can be used to obtain this construct
(Uijttebroek, 1994). ODL is capable of describing complex business entities with their mutual
relationships, focusing on the input, output and successive steps of each transition. This phase
results in the identification of a network of administrative and logistical activities with control
precedence relations in time.
(2) Distinguish performance indicators for each process with corresponding control variables.
The logistical objectives of the supply chain as a whole are translated into performance indicators
for each individual process. Each process should then be analysed in order to identify the control
variables that influence the performance (e.g. procedure, timing). When all control variables are
distinguished, alternative settings of these variables constitute scenarios. Finally, the method to value the realised
performance on the different indicators should be defined (average inventory levels, average
product freshness, write-offs, transport utilisation degrees, etc.).
(3) Distinguish uncertainties in the supply chain. For each process ways to improve the
performance of the process should be identified. Uncertainty is mainly created by long forecast
horizons, inaccurate or unavailable input data, and uncoordinated decision policies. SCM can
reduce these uncertainties by creating information transparency and by redesigning logistic
control structures in the supply chain. Instead of analysing what performance can be obtained
under the current restrictions, the focus is on leaving those restrictions and redesigning the
supply chain to obtain the required performance. This phase results in several improvement
options for each process.
(4) Make (realistic) assumptions to model the supply chain. For the time being we model
the dynamic behaviour of the supply chain deterministically. For example, the impact of variations
in supplier delivery times is eliminated by fixing the starting times of successive processes,
leaving enough time to complete all relevant tasks. The only stochastic element modelled is
consumer demand. In the case study, the simulation model is run with 20 weeks of real weekly
point-of-sale data transformed into demand per hour per day by using fixed weekly and daily
demand distributions. Some 10 representative products are included, and their results are
translated to the total volume of the goods and information flows.
(5) Design the simulation model. The simulation model consists of business entities going
through several transitions, i.e. conversions or diversions of business entities. The simulation
model should be capable of calculating the state of the business entities after each transition.
In this particular project we have worked with the tool ExSpect (Van Hee et al., 1989), a
specification language based on timed coloured Petri Nets. Our choice for Petri Nets is motivated
by the relative ease with which logistic concepts can be described in a formal way and because
these nets allow for a representation which is close to the problem situation. A business entity,
found in practice, is represented in the model by a token with a more or less complex structure
(or 'colour'). The complexity of its structure is determined by the amount of information
(attributes) that is necessary to describe the subject. Timed Petri Nets use a timing mechanism
where time is associated with tokens and delays are specified by an interval. The business entity
that is the output of a transition forms the input of the next transition. In this way the timing
of successive transitions, i.e. logistical activities, can be arranged, and throughput times and
waiting times can be calculated. Each activity is related to all other activities by the flow of
business entities through the model. Events trigger processes in the network. Such an event can
be the arrival of a business entity, or the completion of a certain time period. Processes can have
preconditions, which means that the process will not always be triggered when the event occurs;
certain restrictions have to be fulfilled. In this way the model comes close to reality, where
activities are also started by triggers. For example, order picking is started at the arrival of the
pick-order, if capacity is available. (A minimal sketch of this timed-token mechanism is given
after this list of steps.)
(6) Validate the model with real data. The simulation model should be validated with
the results of a pilot study, in which one scenario is implemented in real life. In this way
organizational consequences are also analyzed, which are complementary to the results of the
model study.
(7) Analyse alternative scenarios and choose the best. After the model is validated, alternative
scenarios are defined and simulated with the model. The definition of these scenarios
depends on practical possibilities, the control variables defined in phase 2 and the improvement
options found in phase 3. The scenario which, on the basis of the model study together with
the pilot study, promises the best results should be nominated for implementation.
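To make the timed-token mechanism of step (5) concrete, here is a minimal sketch in plain Python (not ExSpect syntax; all names and numbers are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Token:
        """A business entity: a coloured token carrying attributes and a time stamp."""
        attributes: dict
        available_at: float = 0.0      # the token's time stamp (e.g. hours)

    def fire(delay, inputs, convert):
        """Fire a transition: it can only start once all of its input tokens are available;
        the output token carries the completion time as its new time stamp."""
        start = max(t.available_at for t in inputs)
        return Token(attributes=convert(inputs), available_at=start + delay)

    # Example: order picking is triggered by a pick-order (information flow)
    # and by available picking capacity (resource), whichever arrives later.
    order = Token({"product": "chilled dessert", "qty": 120}, available_at=8.0)
    capacity = Token({"picker": "crew A"}, available_at=9.5)

    picked = fire(delay=0.75, inputs=[order, capacity],
                  convert=lambda ts: {**ts[0].attributes, "status": "picked"})
    print(picked.available_at)   # 10.25 -- waiting and throughput times fall out of the time stamps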
In the paper a case study of a supply chain for chilled food products will be discussed briefly,
in which the model proved its applicability and usefulness. The results are very interesting:
inventories dropped by over 50 per cent and product freshness increased by an average of 5 days.
At the moment, this supply chain is working on the implementation of the suggested scenario.
References
Cooper, M.C., D.M. Lambert, and J.D. Pagh, Supply Chain Management: more than a new
name for logistics, International Journal of Logistics Management, 8, 1, 1997, pp.1-13
Ellram, L.M. (1991). Supply chain management; the industrial organisation perspective,
International Journal of Physical Distribution and Logistics Management, Vol. 21, No. 1, pp.
13-22.
Jones, T.C. & Riley, D.W. (1985). Using inventory for competitive advantage through Supply
Chain Management. International Journal of Physical Distribution & Materials Management,
Vol. 15, No. 5, pp. 16- 26.
Kurt Salmon Associates (1993). Efficient Consumer Response; enhancing consumer value in
the grocery industry. Washington D.C.: Food Marketing Institute.
Scott, S. & Westbrook, R. (1991). New strategic tools for supply chain management. In-
ternational Journal of Physical Distribution and Logistics Management, Vol. 21, No. 1, pp.
23-33.
Van der Vorst, J.G.A.J., A.J.M. Beulens, W. de Wit and P. van Beek, Supply Chain Man-
agement in food chains: improving performance by reducing uncertainty, 1998 (submitted for
publication)

A decomposition approach for a class of on-line decision
problems
Jaap Wessels
Faculty of Mathematics and Computing Science, Eindhoven University of Technology,
Eindhoven, The Netherlands

Emile Aarts
EUT and Philips Research Laboratories, Eindhoven, The Netherlands

Frans Reijnhoudt
EUT and Philips Research Laboratories, Eindhoven, The Netherlands

Peter Stehouwer
(currently) CQM, Eindhoven, The Netherlands

Keywords: on-line decision making, information horizon, planning horizon, subplan, neural
nets, lot-sizing, bin-packing

Most on-line planning problems are difficult to solve. This is particularly true in cases where
already the finite-horizon off-line version of the problem is relatively complex. But even in
situations where the latter is not the case, the introduction of a lack of knowledge at the times
of decision may complicate the analysis considerably.
Only for a few situations are integral solution approaches available which can also actually be
executed.
The most common heuristic approach is based on the following idea: make a forecast of (part
of) the future, consider this forecast as solid information and solve the off-line problem based
on the forecast, implement the first decisions and repeat the procedure at a later moment. This
approach is based on a divide-and-conquer strategy: the uncertainty aspect is separated from
the original planning problem.
The weak point in this approach is that it discards the uncertainty aspect and replaces it by
a -hopefully- representative forecast, which is treated as certain. In some cases this works well,
but definitely not in all cases.

Decomposition
It would be more convincing if we could design a divide-and-conquer strategy that takes into
account the uncertainty about the future and also that more information will become available
at later times.
To some extent, such an approach is possible in case we can profit from a few features:

1. The uncertainty features can be used to derive a relatively small off-line problem for which
the solution would be optimal (or nearly optimal) in the on-line problem.

2. The off-line problem can be solved efficiently.

One has a good chance that this can be achieved if the off-line version of the planning
problem is regenerative. This means that the planning problem can be separated into several off-
line problems for smaller time intervals, which can be solved independently. If we use this for
on-line problems, we need to decide on the length of the first regeneration interval. By doing
this with trained neural nets, we are able to take the uncertainty into account and choose the
regeneration point in the best way in the light of this uncertain future.

Lot-sizing
For several lot-sizing problems it will be shown how the approach works. A large experiment
has been executed to obtain more insight into the way the approach functions. Several details are
shown.
In fact, the neural net may also be replaced by a statistical procedure for categorising data,
which can be trained on the same learning set. The most promising statistical procedure is
"k-nearest neighbour". Its performance is compared with the performance of neural nets in the
form of multi-layered perceptrons. The performances come close, but the neural nets perform
better. It is quite possible that a still better performance can be obtained by using other
types of neural nets.
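A minimal sketch of the k-nearest-neighbour alternative (pure Python; the features and labels are invented for illustration, whereas the real learning set is obtained from solved off-line instances as described above):

    from collections import Counter

    def knn_predict(train_x, train_y, query, k=3):
        """Classify a query point by majority vote among its k nearest training points."""
        dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
        nearest = sorted(range(len(train_x)), key=lambda i: dist(train_x[i], query))[:k]
        return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

    # Hypothetical learning set: each feature vector summarises the forecasted near future
    # (e.g. demands of the next few periods); the label is the regeneration-interval length
    # that turned out best when the corresponding off-line problem was solved.
    train_x = [(40, 10, 5), (42, 12, 4), (5, 60, 55), (8, 58, 50), (20, 22, 21), (19, 25, 18)]
    train_y = [1, 1, 3, 3, 2, 2]

    print(knn_predict(train_x, train_y, query=(41, 11, 6)))   # -> 1: choose a short first interval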

Other problems
The same concepts may be used for other problems, but the restrictions are relatively severe.
Some other types of problems will be mentioned which might be treated in this way.

Reference
H.P. Stehouwer, On-line lot-sizing with perceptrons. Ph.D. thesis, EUT, Eindhoven (October
1997).

A statistical model of search for global minimum
Antanas Zilinskas
Institute of Mathematics and Informatics, Vilnius, Lithuania

James Calvin
New Jersey Institute of Technology, USA

Keywords: statistical models, global optimization, convergence, efficiency

The global optimization of multimodal functions may be considered as a problem of sequential
decision making under uncertainty. To develop an efficient algorithm, a model of uncertainty
must be chosen in addition to a criterion of rationality of the search.
The results of extensive experimental testing show that algorithms based on statistical models
are among the most efficient for one-dimensional global optimization of multimodal functions.
Frequently the model used is the Wiener process with the criterion of maximizing the probability
of exceeding a certain level. Such an algorithm is an example of the so-called P-algorithms.
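For reference, the selection rule of the one-dimensional P-algorithm can be stated as follows (a standard formulation under the Wiener-process model; the notation is ours, not the paper's):
\[
  x_{n+1} \;=\; \arg\max_{x \in [a,b]} \;
  \Pr\Bigl\{ \xi(x) \le \tilde{y}_n - \varepsilon_n \;\Big|\; \xi(x_i) = y_i,\; i = 1, \dots, n \Bigr\},
\]
where ξ is the Wiener process modelling the objective function, (x_i, y_i) are the observations collected so far, \tilde{y}_n is the best observed value, and ε_n > 0 sets how ambitiously the current record should be improved.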
A straightforward implementation of the P-algorithm for the multidimensional case is difficult.
The next trial point is defined to maximize the probability of exceeding some estimate of the
global minimum, and this entails solving a global optimization problem for all known multidi-
mensional statistical models. In the present paper a modification of the P-algorithm is proposed.
The trial points are chosen not as maximum points of the mentioned probability, but by the
procedure of cutting the simplex chosen from the simplexes covering the feasible region. The
simplex with maximal probability at the center point is chosen for cutting. Such a choice of
trial points means a weak form of the probability optimization, which is more tractable from
the algorithmic point of view.
The choice of a model corresponding to the uncertainty of features of the objective function is
discussed. The convergence of the global optimization algorithm is investigated and its average
efficiency is estimated.

The Modeling of Uncertainty
Hans Jürgen Zimmermann
RWTH Aachen, Lehrstuhl für Unternehmensforschung (Operations Research), D-52056 Aachen

Keywords: modeling, uncertainty, fuzzy models

Until the sixties, probability theory and statistics were the only methods to model uncertainty,
a phenomenon for which a number of discipline-oriented definitions exist (decision theory etc.),
but for which I have not found any generally valid definition or description. For many scientists
and experts, uncertainty is synonymous with stochastic, probabilistic or similar terms.
On the other hand, Fuzzy Set Theory has been "sold" since the sixties as a tool whose main goal
is to model uncertainty. If one looks at many fuzzy models, however, hardly any
uncertainty can be detected.
Whether uncertainty is an objective or a subjective phenomenon is a philosophical question,
which shall not be discussed here. Rather, uncertainty shall be considered as a subjective
(observer-dependent) phenomenon which is a function of the available information in a certain
context. Broadly speaking, a situation is certain if the information about it is "perfect";
otherwise it is uncertain.
A ”context” in this respect can be described by:
• The causes of uncertainty
• The quality and the quantity of information that is available
• If the information is numerical: the scale level on which the information is supplied
• The type of information which is requested by the observer.
I believe that most of the discussion about "the" best or only way of modeling uncertainty
is caused by the fact that most of the scientists developing these theories are formal scientists.
They start from an axiomatic basis and then develop quite sophisticated tools and theories that
are consistent with these axioms. This is quite legitimate. Since they hardly ever consider the
phenomenon that has to be modeled, however, the tool might be good but just not adequate
for the specific context. What happens very often is that the scientist or the observer of an
uncertain phenomenon confuses the glasses through which he or she considers an uncertain
phenomenon with the uncertainty of the phenomenon itself. That is, rather than considering the
uncertainty, they see probabilities or degrees of membership in spite of the fact that those terms
are modeling artifacts and not parts of the uncertain phenomenon. In other words: existing
theories for uncertainty modeling are generally context independent, while my claim is that they
ought to be context dependent.
Why should they be context dependent? Because these theories assume, qualitatively and
quantitatively, a certain input of information. Then they propose certain mathematical operations
that have to be performed within the framework of the theory, and they finally offer to the
observer certain information (for instance, mean values, variances, confidence intervals or levels
etc.). If, however, the information that is available is not in the form or on a scale level that
permits these mathematical operations, then the information is processed in an illegitimate
way and the output is pretentious or misleading. The output might also be in a form which is
not understandable to the observer and, therefore, of very little use to her or him. There is also
another danger or shortcoming of assuming just one legitimate method for modeling uncertainty

and also just one cause of uncertainty (for instance, lack of information): sometimes uncertainty
is considered to be undesirable and one looks for ways to reduce it. If the cause of uncertainty
is not recognized correctly, the measures taken to reduce it might be quite inadequate.
The postulate suggested on the basis of the above is therefore:
There are different causes of uncertainty, and in different contexts the quality and quantity of
information varies. The context should therefore be described by a vector expressing the cause,
quality and quantity of the available information and also the type of information requested by
the observer. Theories (languages) to model uncertainties should be considered to be context
dependent. They should clearly state the quantity and quality (type) of information they need
as input. The same should be done for the output. Then the "profiles" of the available tools
for modeling uncertainty should be compared with the vector describing the uncertainty to be
modeled. And then the theory or theories that fit the context should be used. Of course the
relationship context-theory does not have to be unique.

List of Participants

Dr Mouhssine Bouzoubaa, SINTEF Applied Mathematics, Department of Optimisation, Forskningsvn. 1, P.O.Box 124 Blindern, N-0314 Oslo, Norway; email: mbo@math.sintef.no; URL: http://www.oslo.sintef.no/am/; telephone: 47 22 06 77 22; fax: 47 22 06 73 50

Dr Tung Bui, Department of Decision Sciences, CBA University of Hawaii, 2404 Maile Way, E303, Honolulu, Hawaii 96822, USA; email: tbui@busadm.cba.hawaii.edu; URL: www2.hawaii.edu/matsonco; telephone: 001-808-956-5565; fax: 001-808-956-9889

Dr Christoforos Charalambous, Department of Manufacturing and Engineering, Brunel University, Uxbridge UB8 3PH, United Kingdom; email: empgccc@brunel.ac.uk; telephone: 44 171 7234756; fax: 44 171 7234756

Dr Teodor Gabriel Crainic, Centre de recherche sur les transports, Universite de Montreal and D.S.A. - U.Q.A.M., C.P. 6128, succ. Centre-Ville, Montreal QC H3C 3J7, Canada; email: theo@crt.umontreal.ca; URL: www.crt.umontreal.ca/CRT; telephone: 1-514-343-7143; fax: 1-514-343-7121

Dr Dan Dolk, Naval Postgraduate School, Code Sm/Dk, 555 Dyer Rd, Room 231, Monterey, CA 93943-5103, USA; email: dolker@redshift.com; URL: http://web.nps.navy.mil/~drdolk; telephone: 408.656.2260; fax: 408.656.3407

Prof. Manfred M. Fischer, Department of Economic and Social Geography, Wirtschaftsuniversität Wien, Augasse 2-6, 1090 Vienna, Austria; email: manfred.m.fischer@wu-wien.ac.at; URL: http://wigeoweb.wu-wien.ac.at; telephone: ++43-(1)-31336-4836; fax: ++43-(1)-31336-703

Dr Janusz Granat, Institute of Telecommunications, ul. Szachowa 1, 04-894 Warsaw, Poland; email: J.Granat@ia.pw.edu.pl; URL: www.ia.pw.edu.pl/~janusz; telephone: (+48 22) 128303; fax: (+48 22) 872 90 07

Dr Tore Gruenert, Operations Research, RWTH Aachen, Templergraben 55, 52056 Aachen, Germany; email: tore@or.rwth-aachen.de; URL: www.or.rwth-aachen.de; telephone: +49 241 806187; fax: +49 8888 168

Prof. Yskandar Hamam, Ecole Superieure d'Ingenieurs en Electrotechnique et Electronique, 2, Bd Blaise Pascal, F.93162 Noisy-le-Grand CEDEX, France; email: hamam@esiee.fr; URL: http://www.esiee.fr/~hamamy; telephone: +33 1 45 92 66 11; fax: +33 1 45 92 66 99

Mr Rob Hartog, Wageningen Agricultural University, Applied Computer Science Group, Dreijenpl 2, 6703 HB Wageningen, Netherlands; email: Rob.Hartog@users.info.wau.nl; URL: www.info.wau.nl; telephone: +31 (0)317 484154; fax: +31 (0)317 483158

Prof. Khalil Hindi, Department of Manufacturing and Engineering Systems, Brunel University, Kingston Lane, Uxbridge UB8 3PH, UK; email: khalil.hindi@brunel.ac.uk; URL: www.brunel.ac.uk/~emstksh; telephone: 44 1895 203 299; fax: 44 1895 812 556

Mr Stefan Irnich, Operations Research, RWTH Aachen, Templergraben 55, 52056 Aachen, Germany; email: sirnich@or.rwth-aachen.de; URL: www.or.rwth-aachen.de; telephone: +49 241 806192; fax: +49 8888 168

Dr Mikiko Kainuma, National Institute for Environmental Studies, 16-2, Onogawa, Tsukuba, 3050053, Japan; email: mikiko@nies.go.jp; telephone: +81-298-50-2422; fax: +81-298-50-2524

Prof. Sheung-Kown Kim, Dept. of Industrial Engineering, Korea University, 5-1 Anamdong, Sungbukku, Seoul, 136-701, Republic of Korea; email: kimsk@syslab.korea.ac.kr; telephone: 82-2-3290-3385; fax: 82-2-929-5888

Dr Lech Krus, Polish Academy of Sciences, Systems Research Institute, Newelska 6, Warsaw 01-447, Poland; email: krus@ibspan.waw.pl; telephone: 4822/ 361990; fax: 4822/ 372772

Dr Marek Makowski, IIASA, Schlossplatz 1, A-2361 Laxenburg, Austria; email: marek@iiasa.ac.at; URL: www.iiasa.ac.at/~marek; telephone: +43-2236-807.0; fax: +43-2236-71.313

Dr Toshihiko Masui, National Institute for Environmental Studies, 16-2, Onogawa, Tsukuba, Ibaraki, 305-0053, Japan; email: masui@nies.go.jp; telephone: +81-298-50-2524; fax: +81-298-50-2524

Prof. Yoshiteru Nakamori, Graduate School of Knowledge Science, Japan Advanced Institute of Science and Technology, 1-1 Asahidai, Tatsunokuchi, Ishikawa, 923-1292, Japan; email: nakamori@jaist.ac.jp; telephone: +81-761-51-1755; fax: +81-761-51-1149

Prof. Hirotaka Nakayama, Dept. of Applied Mathematics, Konan University, 8-9-1 Okamoto, Higashinada, Kobe 658-8501, Japan; email: nakayama@edu2.math.konan-u.ac.jp; telephone: +81-78-435-2534; fax: +81-78-452-5507

Dr Seiichi Nishihara, Inst. Inf. Sci. and Electr., Univ. of Tsukuba, 1-1-1, Tenno-dai, Tsukuba, 305-8573, Japan; email: nishihar@is.tsukuba.ac.jp; URL: http://www.npal.is.tsukuba.ac.jp; telephone: +81-298-53-5525; fax: +81-298-53-5206

Dr Wlodzimierz Ogryczak, Warsaw University, Institute of Informatics, Banacha 2, 02-904 Warsaw, Poland; email: ogryczak@mimuw.edu.pl; telephone: +48 22 658 3165; fax: +48 22 658 3164

Dr Jerzy Paczynski, Warsaw University of Technology, Faculty of Electronics and Information Technology, ul. Nowowiejska 15/19, 00-665 Warszawa, Poland; email: J.Paczynski@ia.pw.edu.pl; telephone: +48 22 660 7750; fax: +48 22 253719

Prof. Hans-Juergen Sebastian, Aachen Institute of Technology (RWTH), Operations Research, Templergraben 64, 52056 Aachen, Germany; email: sebasti@or.rwth-aachen.de; telephone: +49-241-806185; fax: +49-241-8888 168

Dr Petra Staufer, Department of Economic and Social Geography, Wirtschaftsuniversität Wien, Augasse 2-6, 1090 Vienna, Austria; email: petra.staufer@wu-wien.ac.at; URL: http://wigeoweb.wu-wien.ac.at; telephone: ++43-(1)-31336-4205; fax: ++43-(1)-31336-703

Mr Jack (G.A.J.) Van der Vorst, Department of Management Studies, Wageningen Agricultural University, Hollandseweg 1, 6706 KN Wageningen, Netherlands; email: Jack.vanderVorst@ALG.BK.WAU.NL; telephone: (31) 317-48 3644 / 484160; fax: (31) 317-48 4763

Prof. Rudolf Vetschera, University of Vienna, Management Center, Bruesser Strasse 72, A-1210 Vienna, Austria; email: Rudolf.Vetschera@univie.ac.at; URL: http://www.bwl.univie.ac.at/bwl/org/Vetsched.HTM; telephone: +43 1 291 28 701; fax: +43 1 291 28 704

Prof. Jaap Wessels, Faculty of Mathematics and Computing Science, Eindhoven University of Technology, Room H.G.9.14, P.O.-Box 513, NL 5600 MB Eindhoven, Netherlands; email: wessels@win.tue.nl; URL: http://www.win.tue.nl; telephone: +31 40 2472608; fax: +31 40 2465995

Prof. Wolfgang S. Wittig, Leipzig University of Applied Sciences, Dptm. of Computer Sc., Mathem., Science, P.O.B. 300 066, D-04251 Leipzig, Germany; email: wswg@imn.htwk-leipzig.de; URL: www.imn.htwk-leipzig.de/~wswg; telephone: +49 341 3076492; fax: +49 341 3012722

Prof. Antanas Zilinskas, Institute of Mathematics and Informatics, Akademijos str. 4, Vilnius, LT 2600; email: antanasz@ktl.mii.lt; telephone: 3702729326; fax: 3702729209

Prof. Hans-Juergen Zimmermann, RWTH Aachen, Templergraben 55, 52062 Aachen, Germany; email: zi@or.rwth-aachen.de; URL: www.or.rwth-aachen.de/chef/zihome.htm; telephone: 49-241-806182; fax: 49-241-806189
