A Practical Guide to Multi-Criteria Analysis

Dr Marco Dean

Bartlett School of Planning, University College London

(marco.dean@ucl.ac.uk)

26 January 2022
Abstract

Multi-criteria analysis (MCA), in the literature also known under the names of multiple-criteria
decision-making (MCDM), multiple-criteria decision analysis (MCDA), multi-objective decision
analysis (MODA), multiple-attribute decision-making (MADM) or multi-dimensional decision-making
(MDDM), comprises various classes of methods, techniques and tools, with different degrees of
complexity, that explicitly consider multiple objectives and criteria (or attributes) in decision-making
problems. Since the late 20th century, MCA methods have ignited an increasing interest amongst
researchers and practitioners working in a number of fields.
Whereas a number of articles, books and manuals have been written on the subject, the large
majority of these are either too technical or overly simplistic. The former type of publication, on the
one hand, is hard for non-specialists to read and comprehend. The latter pieces of work, on
the other hand, tend to ignore many critical aspects of MCA and contribute to the diffusion of
practices that are fundamentally flawed. This paper represents an attempt to find a middle ground
between those two positions. Its main purpose is to help students, academics, but also
practitioners and all those approaching this field of knowledge for the first time to gain insight into
the key features of MCA with the view to enabling them to access more complex textbooks in the
future. The paper also offers practical guidance regarding the fundamental tasks required to
develop basic, yet rigorous multi-criteria frameworks and effectively apply them to different types of
decision-making situations and problems. This constitutes the first foundational step for eventually
mastering more rigorous and elaborated MCA techniques. Finally, this paper also seeks to discuss
the potential advantages and limitations of MCA by adopting, as far as possible, a balanced and
neutral perspective so as to break down clichés and false beliefs on the subject.

Contents

Abstract

1. Introduction

2. Key Elements of Multi-Criteria Analysis

3. Classification of Multi-Criteria Analysis Methods
   3.1 Formal Methods
      3.1.1 Continuous Methods
      3.1.2 Discrete, Full Aggregation Methods
      3.1.3 Discrete, Partial Aggregation Methods
   3.2 Simplified Methods
      3.2.1 Simple Summary Charts
      3.2.2 Simple Additive Weighting Methods
      3.2.3 Multi-Criteria Checklists and Other Screening Tools

4. Non-Participatory and Participatory Approaches to Multi-Criteria Analysis

5. Key Steps of a Simple, Analyst-Led Multi-Criteria Analysis
   5.1 Primary Problem Analysis
   5.2 Option Generation
   5.3 Identification of Objectives and Associated Criteria
      5.3.1 Objective Identification
      5.3.2 Criteria Identification
      5.3.3 Basic Requirements for Criteria
      5.3.4 Preferential Independence Condition
      5.3.5 Some Practical Considerations Regarding the Selection of Objectives and Criteria
   5.4 Construction of the Performance Profile of the Options
   5.5 Scoring
      5.5.1 Scoring Techniques
      5.5.2 Use of Threshold Values
   5.6 Weighting
      5.6.1 Overarching Approach to Weighting
      5.6.2 Weighting Techniques
      5.6.3 Compensatory Weighting Techniques
      5.6.4 Non-Compensatory Weighting Techniques
   5.7 Combination of Scores and Weights
   5.8 Sensitivity Analysis
   5.9 Presentation of the Results of the MCA Exercise

6. Concluding Remarks

References

1. Introduction

The life-cycle of a policy, a programme, a project or any investment case can be conveniently
divided into three distinct and broad phases. The initial planning and decision-making phase runs
from the moment when the need for an intervention (e.g. a policy, a programme or a project) is
first recognised up to the point when the final decision to implement a specific (policy, programme
or project) proposal, amongst those put forward for consideration, is ultimately taken. In the
subsequent implementation phase, the given proposal is then implemented. Finally, there is an
operational phase where the outcomes and impacts of the policy (or programme or project) occur
over a given time period. This phase lasts until the moment the policy is withdrawn or superseded
by a new policy (or the project is decommissioned). As illustrated in Figure 1, during each phase,
specific assessment activities can be undertaken (Voogd, 1983; Boardman et al. 2006; Samset
and Christensen, 2017):
 Ex ante appraisal activities: these activities are carried out during the planning and decision-
making phase of policies, programmes and projects and they seek to assess, in advance,
whether a proposal is worthwhile and whether it is opportune to proceed with it. They adopt a
‘forward-looking’ perspective and thus rely extensively on forecasts and predictions.
 Ex post evaluation activities: they are concerned with assessing the performance of a policy
(or a programme or a project) after it has been implemented and completed. The main goal of
ex post evaluation is to identify beneficial and planned impacts of the intervention and, at the
same time, highlight problems, unintended consequences or unanticipated results. Unlike
ex ante appraisals, they adopt a ‘backward-looking’ perspective and are based mainly on impact
studies, surveys and direct observations¹.
 In medias res (or interim) analyses: they combine elements of both ex ante appraisal and ex post
evaluation and are aimed at monitoring the implementation phase of the given intervention.

Figure 1 - Assessment of a policy (or a programme or a project) at different points in time.

Source: Adapted from Samset and Christensen (2017)

¹ In the literature the terms ‘appraisal’ and ‘evaluation’ are often used interchangeably as synonyms. However, in this
paper, appraisal is used to indicate ex ante assessment activities, whereas evaluation is applied to ex post assessment
activities.
In the course of time, a number of different methods, techniques and tools have been
proposed in the attempt to ensure more systematic (ex ante and ex post) assessments of both
‘soft’ policies and ‘hard’ programmes and projects and thus arrive at more informed decisions.
Appraisal and evaluation methods can be classified in several ways (compare e.g. Faludi and
Voogd, 1985; Söderbaum, 1998; Oliveira and Pinho, 2010; Rogers and Duffy, 2012). One of the
simplest classification schemes is based on the number of objectives and decision criteria
considered in the analysis. From this point of view, it is possible to distinguish between two families
of methods, although, as highlighted further below, their boundaries are frequently blurred:
 Mono-criterion methods, which assess a given plan against a single and specific objective.
This family includes, for instance, cost-benefit analysis (CBA), which assesses a plan primarily
against the objective of economic efficiency (as shown by the benefit-cost ratio or the net
present value of the plan), by translating all impacts into discounted monetary terms.
 Multi-criteria methods, which appraise or evaluate a plan by taking into account (more
explicitly than mono-criterion methods) the various dimensions of interest, and the interplay
between multiple, often contrasting, objectives, and different decision criteria and metrics.

Hence, contrary to what is commonly believed, multi-criteria analysis (MCA) does not
constitute a single, specific method. Rather, it should be understood as an umbrella term for a
number of different techniques and tools by which multiple objectives and decision criteria can be
formally incorporated into the analysis of a problem. MCA is generally assumed to have originated
in the fields of mathematics and operational research during the second half of the previous
century; the works of Kuhn and Tucker (1951) and Charnes and Cooper (e.g. Charnes et al.,
1955; Charnes and Cooper, 1961) on goal programming are commonly regarded as among the
major stimuli for the development of MCA methods. However, as pointed out by Köksalan and
colleagues (2011), the real roots of this discipline are much more ancient and are deeply entwined
with studies of classical economists and mathematicians, which are also at the origin of CBA.
Through the decades, the evolution of MCA has been directly or indirectly influenced by research
in different areas of study (e.g. utility and value theories, social choice theory, revealed preference
theory, game theory, and fuzzy and rough set theories) so that, presently, the realm of MCA
comprises many subfields and different schools of thought (Bana e Costa et al., 1997; Figueira et
al., 2005a; Köksalan et al., 2011). Since the late 20th century, MCA methods have ignited a
growing interest amongst researchers and practitioners working in a number of fields, including
ecology, sustainability and environmental science (Wang et al., 2009; Herath and Prato, 2006;
Huang et al. 2011), health care decision-making (Thokala et al., 2016), banking and finance
(Aruldoss et al., 2013), urban and regional planning (Nijkamp et al, 1990) and transport project
appraisal and evaluation (Macharis and Bernardini, 2015). This increased popularity can be
primarily attributed to an ever-greater awareness of the fact that many contemporary policy
problems facing society have a multi-dimensional nature and therefore require the careful
examination of a variety of different, often conflicting, perspectives and aspects (Munda, 1995 and
2008; and Giampietro, 2003). Indeed, MCA constitutes, in principle, a useful and rather intuitive
decision-support model, which can help analysts and decision-makers² master large amounts of
complex and contrasting data regarding such decision problems and advance towards a solution.
Whilst a number of articles, books and manuals have been written on the subject, the large
majority of these are either too technical or overly simplistic. The former type of publications, on the

² For the purpose of this paper, the term ‘decision-makers’ is used to indicate the people who have responsibility for the
final decision, and in most cases also for the critical and highly subjective steps of the MCA process (e.g. weighting of
criteria), whereas the term ‘analysts’ refers to the specialists who assist decision-makers in reaching a decision by
collecting data and information, running the MCA model and making all the necessary computations.
one hand, is hard for non-specialists to read and comprehend³. The latter type of
publication, on the other hand, tends to ignore many important aspects of MCA and contributes to
the diffusion of practices which are fundamentally flawed⁴. This paper, based on a comprehensive
review of the literature, is an attempt to find a middle ground between those two positions. Its main
purpose is to help students, academics, but also practitioners and all those approaching this field
of knowledge for the first time to gain insight into the key features of MCA with the view to enabling
them to access more complex textbooks in the future. The paper also offers practical guidance
regarding the fundamental steps of a basic MCA exercise as a necessary precondition for
eventually mastering more rigorous and elaborated MCA techniques.
The paper consists of five further sections. Section 2 describes the key elements and features
of MCA. Section 3 offers an examination of the key principles and theoretical foundations of the
most widely known MCA methods. Section 4 briefly introduces participatory MCA techniques,
which, especially over the past two decades, have been devised and promoted by many scholars
with the view to producing more thorough, transparent and democratic assessments of policy and
projects. Section 5, describes the key steps of a basic MCA exercise undertaken in a non-
participatory manner, and discusses all the aspects that must be considered to improve (to the
maximum possible extent) the consistency and rigorousness of the process and the reliability of
the results. Finally, Section 6 includes some concluding remarks regarding the potential
advantages and limitations of MCA.

2. Key Elements of Multi-Criteria Analysis

As illustrated in the previous section, the MCA literature covers a large number of methods (and
several variants of these methods) accounting for multiple objectives and decision criteria. Already
in the 1980s, a review undertaken by Despontin and colleagues (1983) identified more than 100
different MCA approaches. Although these methods can differ even substantially from one another,
many of them have certain aspects in common and exhibit a similar decision-support framework,
which includes the following key elements:
 Option: an alternative course of action proposed in order to address a perceived problem and
achieve an overarching end result.
 Objective: an intended and specific aim against which any proposed option is being assessed.
Objectives are usually clustered around different overarching appraisal and evaluation
dimensions (e.g. sustainability policy problems generally include the economic, environmental

³ For instance, Keeney and Raiffa's 1976 book, Decisions with Multiple Objectives, which forms the foundation of Multi-
Attribute Utility Theory (see Section 3.1.2) and is unanimously recognised by experts as one of the best books ever
written on the topic of MCA and decision science, has also been described by Zeleny (1982:439) as “a difficult work,
uneven in its exposition and sometimes discouragingly complicated by mathematical formalism”.
⁴ A report prepared as part of the MEDIATION (Methodology for Effective Decision-making on Impacts and AdaptaTION)
Project, funded by the European Commission and aimed at identifying appropriate methods and tools to support climate
change decision-making, offers the following description of MCA: “The approach identifies alternative options, selects
criteria and scores options against these, then assigns weights to each criterion to provide a weighted sum that is used
to rank options” (Van Ierland et al., 2013:1). Such a definition, which can also be found in many other articles and
reports, and which portrays MCA as a single, specific method based exclusively on the multiplication of scores and
weights, neglects the large body of work done on this topic. The mind-set induced by statements of this kind can, very
easily, lead to an over-simplistic view of MCA.
and social dimensions⁵). Sometimes, objectives are instead grouped according to their
geographical scope (e.g. local, regional, national, supra-national objectives) and temporal
dimension (e.g. short-, medium- and long-term objectives) (Voogd, 1983). Finally, especially in
participatory MCA exercises (see Section 4), objectives can also be aggregated based on the
social groups for whom they are relevant.
 Criterion: a specific, measurable indicator of the performance of an option in relation to an
objective, which allows the analyst to gauge the extent to which the option meets that objective. For
instance, the objective of ‘promoting economic growth’ can be measured through a criterion
such as the ‘GDP growth rate’ (see Table 1). In principle, however, any objective may imply
several different criteria. Another possible criterion for assessing growth maximisation is, for
example, the actual individual consumption per capita. It is possible to distinguish between
quantitative indicators, measuring the performance of an option in a numerical fashion (e.g.
monetary units or bio-physical units), and qualitative indicators, containing a (qualitative)
description of the performance of the option⁶. Qualitative criteria are generally more subjective
than quantitative criteria as the former indicators tend to be largely based on the personal
feelings, perceptions and attitudes of the people involved in the MCA exercise.

Table 1 – Examples of objectives and associated criteria.

Appraisal/Evaluation Dimension | Objective | Possible Appraisal/Evaluation Criterion
Economic | To promote economic growth | GDP growth rate
Economic | To improve the accessibility of an area | Access cost indicator
Environmental | To minimise the adverse effects on local air quality | Change of CO2 emissions across the local area/region/country in a given period
Environmental | To promote modal shift from private vehicles to trains | Number or portion of automobile trips shifted to train mode
Social | To protect the archaeological and cultural heritage of an area | Qualitative description of the impacts produced by a project/plan on the archaeological/cultural resources of the area
Social | To provide adequate public transport service in an area | Public transport accessibility level of the area

 Performance Score: a pure number (with no physical meaning), belonging to a given scale
(e.g. a 0 to 1 scale, a 1 to 100 scale or a -5 to +5 scale), that identifies the performance of an
option against a specific objective/criterion. High-performing options are ascribed high scores,
whilst low-performing options score lower on the scale. Critical objectives and criteria may also
be assigned some constraints in the form of specific threshold values, which place some
restrictions concerning the worst acceptable performance of an option against those criteria.
⁵ These, however, represent artificially constructed, totally arbitrary and highly intertwined dimensions so that, often, the
same criterion can be allocated to different dimensions. For example, the objective of ‘creating new job opportunities’ can
be interpreted as either an economic objective or a social objective.
⁶ Other classification systems for criteria exist. Keeney and Gregory (2005), for example, differentiate between natural
indicators, which directly measure (mainly in quantitative terms) the degree of achievement of an objective; proxy
indicators, which are capable of indicating only indirectly the degree to which an objective is met (e.g. ‘number of vehicle
accidents’ to measure the objective of minimisation of road fatalities); and constructed indicators, which represent a
combination of different indicators and variables and are specifically developed to assess the achievement of a given
objective when natural (or proxy) indicators are not available (e.g. a combination of different data sets focusing on the
attitudes and actions of various social groups to show the percentage of the public supporting a given development
intervention in a certain area).
Threshold values can be set in compliance with policy targets and legal instruments, scientific
criteria, which identify limits to natural processes and systems, or ethical standards (see
Section 5.5.2).
 Criterion Weight: a coefficient which is commonly intended to represent the level of importance
of an objective and corresponding criterion relative to the other objectives and criteria under
consideration (i.e. high-importance objectives and criteria are identified with high weights).
However, as highlighted in Section 3.1 and Section 5.6.2, the actual meaning of weights can
change substantially according to the different MCA method employed (Munda, 2008;
Bouyssou et al., 2000; Belton and Stewart, 2002).
Typically, in a multi-criteria decision-making problem one or more policy (or project) options
are assessed against a number of different objectives, for which a set of criteria have been
identified. The performances of an option against the various objectives and criteria, which can be
assigned different weights, are identified by scores. Overall, what formally defines a multi-criteria
method is the set of rules establishing the nature of options, objectives, criteria, scores and weights
as well as the way in which objectives/criteria, scores and weights are used to assess, compare,
screen in/out or rank options⁷.
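To make these elements concrete, the following minimal sketch (in Python, with invented option names, criteria, scores and weights) shows how the key elements of an MCA problem are typically organised into a performance matrix before any particular aggregation rule is applied:

```python
# A minimal, hypothetical representation of the key elements of an MCA problem.
# All names and figures are illustrative only; no aggregation rule is implied yet.

criteria = ["GDP growth rate", "CO2 emissions", "Heritage impact"]

# Criterion weights (here assumed to be normalised so that they sum to one).
weights = {"GDP growth rate": 0.5, "CO2 emissions": 0.3, "Heritage impact": 0.2}

# Performance scores mapped onto a common 0-1 scale (higher = better).
performance_matrix = {
    "Option A": {"GDP growth rate": 0.8, "CO2 emissions": 0.4, "Heritage impact": 0.9},
    "Option B": {"GDP growth rate": 0.6, "CO2 emissions": 0.7, "Heritage impact": 0.5},
}

# What formally distinguishes one MCA method from another is the set of rules
# used to compare, screen or rank the rows of this matrix using the weights.
for option, scores in performance_matrix.items():
    print(option, scores)
```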

3. Classification of Multi-Criteria Analysis Methods

Given the great variety of MCA methods developed over the years, the identification of a
comprehensive scheme of classification, mapping all the existing techniques and systematically
capturing their similarities and differences remains problematic. In the course of time, a number of
different (partial) taxonomies have been suggested (e.g. Roy, 1996; Munda, 1995; Janssen and
Munda, 1999; Rogers et al., 2000; Belton and Stewart, 2002; Kodikara, 2008; Rogers and Duffy,
2012; Ishizaka and Nemery, 2013; Zardari et al. 2015). Whether or not a method belongs to a
specific category within a given classification system is not always obvious and can easily become
the object of a fierce dispute amongst experts. The classification proposed by Dean (2018 and
2020) and illustrated in Figure 2 is particularly convenient for the purpose of this paper. According
to this classification scheme, a first important distinction has to be made between formal and
simplified methods. Formal MCA methods (illustrated briefly in Section 3.1) are based on
elaborated procedures, a number of rather rigorous (although often arbitrary) rules and,
sometimes, also on advanced mathematical principles. Computer support is also often needed to
implement such methods, which, however, are still susceptible to inaccuracies and errors. A
comprehensive examination of formal MCA methods can be found in many textbooks and manuals
(e.g. Chankong and Haimes, 1983; Vincke, 1992; Roy, 1996; Triantaphyllou, 2000; Belton and
Stewart, 2002; Figueira et al., 2005b; Bouyssou et al., 2006; Ishizaka and Nemery, 2013), which,
⁷ In the MCA literature, however, there is no unanimity on either the use of these terms or their definition. For instance:
• Options are also often indicated as ‘decision options’, ‘alternatives’, ‘actions’, ‘courses of action’, ‘choices’,
‘strategies’, ‘scenarios’, ‘means’ or ‘items’.
• Objectives can also be referred to as ‘aims’, ‘goals’, ‘criteria’, ‘targets’, ‘ends’, ‘purposes’, ‘missions’, ‘ambitions’ or
‘intents’.
• Criteria can also be termed as ‘indicators’, ‘attributes’, ‘verifiers’, ‘metrics’ or ‘standards’.
• Scores are also labelled as ‘value scores’, ‘attribute scores’, ‘criterion scores’ or ‘intra-criterion information’.
• Weights are frequently described as ‘scaling constants’, ‘scaling factors’, ‘importance coefficients’, ‘trade-off
coefficients’ or ‘inter-criterion information’.
Whereas some of these terms can be used as synonyms without generating confusion (e.g. ‘options’, ‘alternatives’ and
‘alternative courses of action’ to indicate policy or project options; ‘performance scores’ and ‘criterion scores’ to indicate
scores), others may imply fundamental theoretical differences between MCA methods and approaches.
however, are not easily readable and understandable by general readers. By contrast,
simplified methods (presented in Section 3.2) entail simple and frequently rough MCA
applications.
Although the great variety of approaches may be seen as a strong point of MCA, this
‘methodological chaos’ often creates several critical dilemmas. Practical applications have
demonstrated that, when applied to the same decision-making situation, different methods typically
lead to different results (e.g. Roy and Bouyssou, 1986; Zanakis et al. 1998; Tsamboulas et al.,
1999; Bouyssou et al., 2000; Triantaphyllou, 2000; Jeffreys, 2004; Banihabib et al., 2017). Indeed,
each method has its own properties as well as its own advantages and disadvantages when it
comes to analysing and presenting data and information. Therefore, selecting an appropriate MCA
method can turn out to be, almost paradoxically, a multi-criteria problem itself (Triantaphyllou and
Mann, 1989). Whereas the choice of which technique to employ in a decision-making problem
should be well justified, in practice, this is rarely done. The selection of an MCA method is usually
made in a largely arbitrary manner and motivated only by the analysts and decision-makers’
knowledge of a given method, the availability of software and tools for carrying out the analysis, or
the existence of examples and similar studies that can be emulated without too much difficulty.
Over the years, some tentative guidelines to assist analysts and decision-makers in their selection
process have been produced (e.g. Ozernoy, 1992; Guitouni and Martel, 1998; Li and Thomas,
2014). However, many of these guidelines consider and compare only a relatively limited number
of MCA methods and do not actually provide clear and unambiguous advice (Watróbski et al.,
2018). Being based on different selection criteria and procedures, these guidelines also tend to
produce contrasting recommendations.

Figure 2 - Classification of MCA methods.

Source: Adapted from Dean (2018 and 2020).


3.1 Formal Methods
3.1.1 Continuous Methods
Formal MCA methods can be categorised into continuous and discrete methods. Continuous MCA
methods typically deal with problems where an infinite (or at least extremely large) number of
possible options exist, although they are not explicitly known at the outset. This category
encompasses multi-objective programming methods such as linear programming and goal
programming, where alternatives are generated during the resolution of complex equation
systems, including an infinite or semi-infinite number of variables, constraints and objectives
(Charnes and Cooper, 1977; Korhonen, 2005; Ehrgott, 2005). Such methods are generally suitable
for technical design and optimisation problems (e.g. the identification of the best highway
alignment; the selection of the most convenient layout for a port or an airport; traffic signal
optimisation studies), which typically follow higher-level strategic decisions (e.g. whether or not to
build a highway, a port or an airport), and can be mastered only by mathematicians and experts.

3.1.2 Discrete, Full Aggregation Methods


Discrete methods, by comparison, better reflect real-world planning and policy problems, where the
alternatives to assess are limited in number and relatively well-defined at the beginning of the
analysis. The large majority of formal, discrete MCA methods can be encompassed in two broad
categories, namely full aggregation and partial aggregation methods, representing two different
schools of thought⁸. The former category, corresponding to the American MCA school, aims at
synthesising the performances of an option against all the different criteria into a single, global
score. Discrete, full aggregation MCA methods comprise, amongst others, the Multi-Attribute Utility
Theory (MAUT) methods, whose aim is to determine the overall utility of an option under study with
reference to a given number of decision criteria, which here are termed ‘attributes’ (Keeney and
Raiffa, 1976). Similarly to CBA, with MAUT methods, the concept of ‘utility’ conveys a decision-
maker’s level of satisfaction with a particular outcome (Fishburn, 1970). Each criterion (attribute)
has its own utility function which expresses varying levels of satisfaction according to the different
possible performances of an option against that specific criterion. With MAUT methods all the
marginal (or partial) utility functions for the individual criteria are ultimately combined within one
mathematical expression, called multi-attribute utility function, representing the overall utility (i.e.
the global attractiveness) of that option. In assessing two or more alternative courses of action with
MAUT methods, the preferred option is the one with the highest value for the overall utility. The
multi-attribute utility function can assume different forms according to the nature of the problem at
hand and the types of criteria considered in the analysis. In the simplest case, it presents a linear
form so that the overall utility of an option can be calculated as a weighted sum of the utility
functions for each individual criterion. In other words, given an option a and a set of N appraisal
criteria, the overall utility U of a, measured against the N criteria, is determined in accordance with
the following mathematical rule:

U(a) = ∑j=1..N wj · uj(a) = w1·u1(a) + w2·u2(a) + ... + wN·uN(a)    (1)

with ∑j=1..N wj = 1 and 0 ≤ wj ≤ 1

Where:

uj(a) represents the partial utility function for the j-th criterion, expressing the performance
(utility) of option a on the j-th criterion; and
wj is the weight of the j-th criterion, through which uj(a) is scaled to a [0–1] interval.

⁸ There are also other formal, discrete MCA methods based on rules and principles not exactly in accordance with either
of these two approaches. Some of these methods are discussed in Bana e Costa (1990), Gal et al. (1999), Figueira et al.
(2005b) and Doumpos et al. (2019).

The single-criterion value functions uj(a) are typically expressed on a 0 to 1 interval scale, with
0 and 1 indicating the worst and the best possible performances, respectively. Therefore, given
that criterion weights also vary between 0 and 1, the multi-attribute utility
function U(a) likewise assumes values between 0 (worst utility) and 1 (best utility).
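A minimal sketch of Equation (1) follows, written in Python; the weights and partial utilities are invented for illustration, and the weights are assumed to be already normalised so that they sum to one:

```python
# Weighted additive utility model, Equation (1): U(a) = sum_j w_j * u_j(a).

def overall_utility(weights, utilities):
    """Overall utility of one option; utilities u_j(a) lie on a 0-1 scale."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    assert all(0.0 <= u <= 1.0 for u in utilities), "utilities must lie in [0, 1]"
    return sum(w * u for w, u in zip(weights, utilities))

# Hypothetical example with three criteria:
weights  = [0.5, 0.3, 0.2]
option_a = [0.7, 0.9, 0.2]   # u_j(a) for j = 1, 2, 3
option_b = [0.6, 0.6, 0.8]

print(overall_utility(weights, option_a))  # 0.5*0.7 + 0.3*0.9 + 0.2*0.2 = 0.66
print(overall_utility(weights, option_b))  # 0.5*0.6 + 0.3*0.6 + 0.2*0.8 = 0.64
```

Note how the two options end up with very similar overall utilities despite markedly different profiles, a direct consequence of the compensatory nature of full aggregation models discussed later in this section.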
This weighted additive model is perfectly valid only if the utility of each criterion is independent
of that of the others (Keeney and Raiffa, 1976). This property, known as ‘mutual preferential
independence’, implies, loosely speaking, the absence of phenomena of synergy or conflict
between different criteria so that the marginal contribution of each criterion to the overall utility can
be assessed separately (see Section 5.3.4). Criteria, therefore, must be selected carefully and a
huge number of conditional clauses have to be checked in order to minimise the possibility of such
interactions occurring (Keeney, 1977). When, however, mutual preferential independence between
criteria is not verified, the multi-attribute utility function combining the various partial utility
functions assumes more complex forms (Keeney and Raiffa, 1976; Zeleny, 1982). Some of these
forms (e.g. quasi-additive, multiplicative, quasi-pyramid, semi-cubic, multi-linear utility functions),
which entail weaker independence conditions between criteria, are presented in Table 2; a
numerical sketch of the multiplicative form is given after the table. According
to Zeleny (1982), additive or multiplicative utility functions are the only manageable models for
cases with more than four criteria.

Table 2 – Common forms for the multi-attribute utility function (three-criterion representations).

Weighted Additive Model:
U(a) = w1u1(a) + w2u2(a) + w3u3(a)

Multiplicative Model (with k a scaling factor):
U(a) = w1u1(a) + w2u2(a) + w3u3(a) + kw1w2u1(a)u2(a) + kw1w3u1(a)u3(a) + kw2w3u2(a)u3(a) + k²w1w2w3u1(a)u2(a)u3(a)

Quasi-Additive Model:
U(a) = w1u1(a) + w2u2(a) + w3u3(a) + w12u1(a)u2(a) + w13u1(a)u3(a) + w23u2(a)u3(a) + w123u1(a)u2(a)u3(a)

Bilateral Model (with fi(a) normalised utility difference functions):
U(a) = w1u1(a) + w2u2(a) + w3u3(a) + w12f1(a)f2(a) + w13f1(a)f3(a) + w23f2(a)f3(a) + w123f1(a)f2(a)f3(a)

Quasi-pyramid Model:
U(a) = w1u1(a) + w2u2(a) + w3u3(a) + w12u12(a) + w13u13(a) + w23u23(a) + w123u1(a)u2(a)u3(a)

Semi-cubic Model (with fi(a) normalised utility difference functions):
U(a) = w1u1(a) + w2u2(a) + w3u3(a) + w12u12(a) + w13u13(a) + w23u23(a) + w123f1(a)f2(a)f3(a)

Multi-linear Model:
U(a) = w1u1(a) + w2u2(a) + w3u3(a) + w12w1w2u1(a)u2(a) + w13w1w3u1(a)u3(a) + w23w2w3u2(a)u3(a) + w123w1w2w3u1(a)u2(a)u3(a)

Source: Adapted from Zeleny (1982).
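As a rough numerical illustration of a non-additive form, the sketch below implements the multiplicative model of Table 2 in its equivalent product form, 1 + kU(a) = ∏j [1 + k·wj·uj(a)], where the scaling factor k is the non-zero root of 1 + k = ∏j (1 + k·wj) (Keeney and Raiffa, 1976). The weights and utilities are invented, and the simple bisection solver is an implementation convenience rather than part of the method:

```python
import math

def scaling_factor(weights):
    """Non-zero root k of 1 + k = prod_j(1 + k*w_j); 0 if weights sum to one."""
    s = sum(weights)
    f = lambda k: math.prod(1.0 + k * w for w in weights) - (1.0 + k)
    if abs(s - 1.0) < 1e-9:
        return 0.0                      # model collapses to the additive form
    if s < 1.0:                         # root lies in (0, +inf)
        lo, hi = 1e-9, 1.0
        while f(hi) < 0.0:              # expand until the root is bracketed
            hi *= 2.0
    else:                               # root lies in (-1, 0)
        lo, hi = -1.0 + 1e-9, -1e-9
    for _ in range(200):                # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def multiplicative_utility(weights, utilities):
    k = scaling_factor(weights)
    if k == 0.0:                        # additive special case
        return sum(w * u for w, u in zip(weights, utilities))
    prod = math.prod(1.0 + k * w * u for w, u in zip(weights, utilities))
    return (prod - 1.0) / k

# Hypothetical example: weights deliberately summing to less than one.
print(multiplicative_utility([0.4, 0.3, 0.2], [0.7, 0.9, 0.2]))
```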


In the course of time, difficulties in employing MAUT methods have led to the introduction of
the Simple Multi-Attribute Rating Technique (SMART), which represents a less complex (but also a
less theoretically sound) multi-attribute approach (von Winterfeldt and Edwards, 1986). Like other
MAUT methods, SMART relies on a weighted additive function but considers the condition of
mutual preferential independence to be rather superfluous and easily by-passable through an
opportune selection and definition of the criteria employed in the analysis (Edwards, 1977). A few
variants of this method, namely SMARTS (Simple Multi-attribute Rating Technique using Swings)
and SMARTER (Simple Multi-attribute Rating Technique Exploiting Ranks), have also been
developed in the attempt to address concerns over the logical consistency of SMART (Edwards
and Barron, 1994; Bouyssou et al., 2000).
Another well-known method aimed at assessing options through the calculation of a global
score is the Analytic Hierarchical Process (AHP) (Saaty, 1980), whose preference structure,
although presented by its proponents as completely distinct from MAUT methods (Saaty, 1990a),
can be reconciled to a weighted additive model (Belton, 1986; Rogers and Duffy, 2012). The AHP,
in particular, seeks to reduce a multi-criteria decision-making problem to a series of smaller, self-
contained analyses, based on the observation that the human mind is incapable of considering
too many factors simultaneously when taking a decision (Miller, 1956; Saaty, 1980; Arrow and
Raynaud, 1986). It begins by arranging the elements of the analysis in three main hierarchical
levels as shown in Figure 3: the overall goal of the decision-making problem at the top; a set of
(ideally, mutually preferentially independent) decision criteria in the middle layer; and a group of
competing options at the bottom. A further middle layer can be added if criteria need to be broken
down into sub-criteria.

Figure 3 – Typical AHP structure for a decision problem.

Source: Author’s own elaboration.

Once this three-level hierarchy has been created, the AHP requires determining the relative
priority of each criterion (second level) with respect to the goal of the analysis (first level). This is
established by firstly carrying out a series of pairwise comparisons of criteria. Overall, with N
criteria, N(N-1)/2 pairwise comparisons are necessary. The subjective judgments regarding the
relevance of the different criteria are translated into quantitative scores by using a discrete, nine-
point semantic scale, ranging from 1 (when the two criteria under examination are ‘equally
preferred’) to 9 (when one criterion is ‘preferred very strongly’ over the other one). The results of
the pairwise comparison of different criteria are arranged in a matrix as illustrated in Figure 4. After
the construction of the pairwise comparison matrix, the next step is to retrieve the actual priority (or
weight) of each criterion. The most rigorous, but also the most computationally demanding
approach consists in calculating the normalised principal eigenvector of the matrix (Saaty, 2003). A
much easier way to determine criterion weights consists instead in the calculation of the geometric
mean of each row and the successive normalisation of the resulting new column of the pairwise
comparison matrix (Saaty, 2001), as shown in Figure 4. This approximation produces results
sufficiently close to those of the eigenvector method in most situations (Rogers and Duffy, 2012;
Barfod and Leleur, 2014).
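The geometric-mean approximation just described can be sketched in a few lines of Python; the 3×3 pairwise comparison matrix below is invented purely for illustration:

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix on the nine-point scale.
# A[i, j] records how strongly criterion i is preferred to criterion j,
# and A[j, i] = 1 / A[i, j].
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Geometric mean of each row, then normalisation, gives the criterion weights.
row_gm  = np.prod(A, axis=1) ** (1.0 / A.shape[0])
weights = row_gm / row_gm.sum()
print(weights.round(3))   # approximately [0.637, 0.258, 0.105]

# The same routine, applied to one pairwise comparison matrix of the options
# for each criterion, yields the local priorities (scores) used in Equation (2).
```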
Subsequently, the local priority of each option (third level) with respect to the decision criteria
(second level) also needs to be determined. The relative merit of each option is also established
through a pairwise comparison (based on the same nine-point semantic scale) of the relative
performance ratings for all combinations of project options, separately for each decision criterion
considered in the analysis. Overall, with M options there are M(M-1)/2 pairwise comparisons for
each criterion. The same procedure, involving the computation of the normalised principal
eigenvector (or the normalised geometric means) of the pairwise comparison matrices of the
options (one matrix for each decision criterion considered in the analysis), is then implemented to
determine the local priority (or score) of each option with reference to each criterion (see also
Figure 20 in Section 5.5.1).

Figure 4 – Example of pairwise comparisons between criteria with the AHP method.

Source: Author’s own elaboration.

Finally, once weights and scores have been determined (both scores and weights are mapped
onto a 0 to 1 scale), the overall valuation V of an option a with respect to the overall goal of the
analysis and N decision criteria is calculated by summing together the products of each criterion
weight and the performance of a with respect to that criterion. Mathematically this is expressed as:

V(a) = ∑j=1..N wj · xj(a) = w1·x1(a) + w2·x2(a) + ... + wN·xN(a)    (2)

Where:

xj (a) is the local priority (performance score) of option a with reference to the j-th criterion; and
wj is the priority (weight) of the j-th criterion.

The reliability of the ranking results produced with the AHP has, however, been the subject of
substantial debate amongst MCA specialists, with several authors questioning, amongst other
things, the validity of the eigenvector method, the coherence of the pairwise comparisons and the
justification for the interpretation of the nine-point semantic scale as a ratio scale (e.g. Freeling,
1983; Harker and Vargas, 1987; Stewart, 1992; Ishizaka and Labib, 2001; Bana e Costa and
Vansnick, 2008; Wedley, 2010; Asadabadi et al., 2019). Over the years there have been several
attempts to modify this method with the view to correcting some logical errors and making the
process less time-consuming (e.g. Dyer, 1990; Belton and Stewart, 2002; Ferrari, 2003; Wang and
Elhag, 2006). Rezaei (2015), for instance, has proposed the Best-Worst Method (BWM), which uses
only the best criterion and the worst criterion for the pairwise comparisons. With the BWM, after the
identification of criteria, two criteria, namely the best and the worst ones, are (arbitrarily) selected.
Two rounds of pairwise comparisons are undertaken respectively between the best criterion and
the remaining criteria, and between the worst criterion and the remaining criteria. Hence, with N
criteria, overall the BWM requires only 2N-3 comparisons. The results of these comparisons, which
are coded into a predefined numerical scale (e.g. the nine-point scale of the AHP), are ultimately
used to derive the weights of the criteria.
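A minimal sketch of a BWM weight derivation is given below. It assumes the linear reformulation of the method (minimise the maximum deviation ξ subject to |wB − aBj·wj| ≤ ξ and |wj − ajW·wW| ≤ ξ, with the weights non-negative and summing to one) rather than the original non-linear model, and the comparison vectors are invented:

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(best, worst, best_to_others, others_to_worst):
    """Linear BWM sketch: minimise xi subject to |w_best - a_Bj*w_j| <= xi,
    |w_j - a_jW*w_worst| <= xi, sum(w) = 1 and w >= 0."""
    n = len(best_to_others)
    c = np.zeros(n + 1)                 # variables: w_0..w_{n-1}, then xi
    c[-1] = 1.0                         # objective: minimise xi
    A_ub, b_ub = [], []
    for j in range(n):
        for sign in (+1.0, -1.0):       # both sides of each absolute value
            row = np.zeros(n + 1)
            row[best] += sign
            row[j] -= sign * best_to_others[j]
            row[-1] = -1.0
            A_ub.append(row); b_ub.append(0.0)
            row = np.zeros(n + 1)
            row[j] += sign
            row[worst] -= sign * others_to_worst[j]
            row[-1] = -1.0
            A_ub.append(row); b_ub.append(0.0)
    A_eq = [np.append(np.ones(n), 0.0)] # weights must sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[:n], res.x[-1]         # weights and the deviation xi

# Hypothetical example: four criteria, criterion 0 best, criterion 3 worst.
w, xi = bwm_weights(best=0, worst=3,
                    best_to_others=[1, 2, 4, 8],   # best compared to each
                    others_to_worst=[8, 4, 2, 1])  # each compared to worst
print(w.round(3), round(xi, 4))   # fully consistent input, so xi is ~0
```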

As is evident from Equations (1) and (2)⁹, full aggregation methods provide full
compensation between criteria. Indeed, with such methods good performances on some criteria
can compensate for a low score against one or more criteria. Hence, an option that performs quite
well with respect to all criteria may turn out to have a rather similar or even the same overall
performance as an option that scores exceptionally high against a few criteria but presents very poor
performances on some other criteria. Compensation can, however, be decreased by using
nonlinear aggregation functions or by fixing some thresholds regarding the acceptable
performance levels against some criteria (see Section 5.5.2).

Weights, in full aggregation MCA methods, modulate the marginal contribution of each
criterion to the overall performance score (or overall utility) of each option. They thus assume
primarily the meaning of trade-off coefficients (see Section 5.6.2), that is, the amount of
achievement of one criterion that must be sacrificed in order to gain a unitary increase on another
criterion (Munda, 2008; Bouyssou et al., 2000; Belton and Stewart, 2002).

⁹ Equation (1) and Equation (2) are similar. In the case of utility-based methods, however, uj(a) represents a partial utility
function which measures the utility of an option against the j-th criterion by adopting an interval scale of measurement. In
the case of the AHP, by comparison, the priorities of criteria and options are the results of the reconciliation of a series
of pairwise comparisons made on a semantic scale which is interpreted as a normalised ratio scale, termed ‘relative’ ratio
scale (Saaty, 1990b and 1993).
3.1.3 Discrete, Partial Aggregation Methods
A typical multi-criteria problem is represented by a situation where there is no optimal solution: an
option a1 may be better than an option a2 according to one criterion but, at the same time, it may
be worse than a2 according to another criterion, so that eventually it is impossible to identify the
‘best’ course of action. This situation, as depicted in Figure 5, is generally referred to as the ‘multi-
criteria imbroglio’ (Schärlig 1985). As highlighted in the previous section, however, with full
aggregation MCA methods such a multi-dimensional problem is translated into a mono-criterion
one, where different options are assessed and ranked on the basis of their overall performance
index expressed in a unidimensional scale. The American MCA School thus implicitly assumes that
the decision-makers involved in the decision-making process have a complete preference system,
which enables them to produce a complete rank order of the options at hand.

Figure 5 – Multi-criteria and mono-criterion problems.

Source: Author’s own elaboration.

Partial aggregation MCA methods, by comparison, representing the European (French) MCA
School, question the existence of a complete preference system for any decision-making problem
and reject the full aggregation of the single performance scores into a unique common scale on
account of the strong heterogeneity that often characterises objectives and criteria. With such
methods, based on the notion of outranking, the comparison of options takes place on a pairwise
basis with respect to each individual criterion. The objective is to establish the degree of
dominance that one option has over another. An option is said to outrank (or dominate) another
one if there is a strong enough argument to support a conclusion that the former outperforms the
latter on enough criteria (of sufficient importance), whilst there is no essential evidence to show
that this statement is false with respect to the remaining criteria (Roy, 1996). Hence, with discrete,
partial aggregation methods the output of an analysis is not an overall value for each alternative,
but an outranking relation on the set of alternatives (see Figure 6).

Figure 6 – Illustrative example of the decision model implied by partial aggregation (outranking) MCA
methods. Comparison between two alternative options (a3 and a4).

Source: Author’s own elaboration.

The various discrete, partial aggregation (or outranking) MCA methods differ in the types
data and information that they can handle (e.g. quantitative or qualitative, complete or fuzzy) as
well as for the rules and procedures employed for determining the level of dominance of an option
over the others (compare e.g. Brans and Mareschal, 2005; Figueira et al., 2005c; Martel and
Matarazzo, 2005). For example, the Preference Ranking Organization Method for Enrichment
Evaluation (PROMETHEE), in its simplest form (i.e. PROMETHEE I), tries to calculate a
Preference Index that measures the strength of the statement ‘option a1 outranks option a2’ (Brans
and Vincke, 1985). By comparison, the first version of the ELECTRE family methods, whose
French acronym stands for ELimination Et Choix Traduisant la REalité, namely Elimination and
Choice Translating Reality, brings this approach a stage further through the calculation of a
Concordance Index and a Discordance Index (Roy, 1968). The Concordance Index, similarly to the
Preference Index of PROMETHEE I, quantifies the preference for option a1 over option a2. The
Discordance Index, as a complementary indicator of the Concordance Index, highlights information
that may contradict the statement ‘option a1 dominates option a2’ and measures the degree to
which on any of the criteria a1 is worse than a2. To account for imprecision and uncertainty in
preference elicitation, specific threshold levels, which the concordance and discordance
measures are required to comply with for the evidence to be convincing, are defined (Roy, 1996;
Rogers et al., 2000). Concordance and discordance thresholds, however, assume different values
depending on the specific outranking method and application. This subjectivity regarding indices,
preference thresholds as well as all the other key parameters and steps of the analysis
unquestionably creates some concerns from a reliability standpoint (Roy and Bouyssou, 1986;
Cook et al., 1988).
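The sketch below gives a minimal, purely illustrative computation in the spirit of this concordance/discordance logic: the concordance index sums the weights of the criteria on which option a1 performs at least as well as a2, while the discordance index takes the largest normalised margin by which a1 falls below a2 on any criterion. Real outranking applications add preference and veto thresholds on top of this, and all figures here are invented:

```python
# Minimal concordance/discordance computation in the spirit of ELECTRE I.
# Performances are assumed to be 'higher is better'; `ranges` holds the
# admissible span of each criterion, used to normalise the discordance.

def concordance(a1, a2, weights):
    """Share of total weight on criteria where a1 is at least as good as a2."""
    total = sum(weights)
    return sum(w for p1, p2, w in zip(a1, a2, weights) if p1 >= p2) / total

def discordance(a1, a2, ranges):
    """Largest normalised margin by which a1 is worse than a2 on any criterion."""
    return max(max(p2 - p1, 0.0) / r for p1, p2, r in zip(a1, a2, ranges))

weights = [0.4, 0.35, 0.25]
ranges  = [10.0, 10.0, 10.0]             # each criterion scored from 0 to 10
a1, a2  = [8.0, 7.0, 2.0], [6.0, 5.0, 9.0]

c, d = concordance(a1, a2, weights), discordance(a1, a2, ranges)
print(c, d)   # 0.75 and 0.7
# 'a1 outranks a2' would be accepted only if c exceeded a concordance
# threshold (say 0.7) and d stayed below a discordance threshold (say 0.3);
# here the large discordance on the third criterion blocks the assertion.
```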
In sharp contrast to full aggregation MCA methods, outranking MCA methods are partially or
totally non-compensatory since a low score against one criterion cannot (or can only partially) be
compensated for by a better score against another criterion. Therefore, with such methods an
option that performs well with respect to all criteria is likely to outperform another option that
presents exceptionally high scores against many criteria but performs quite poorly on some other
criteria.
Weights here assume the (more intuitive) meaning of importance coefficients (see Section
5.6.2), which measure the influence that each criterion should have in building up the case for the

assertion that one alternative is better than another (Munda, 2008; Bouyssou et al., 2000; Belton
and Stewart, 2002).
Finally, it is important to note that such methods do not always lead to a complete ranking of the
options, as the notion of ‘incomparability’ is allowed (i.e. when there is no essential evidence to
demonstrate that one option is superior or inferior to another one). While often problematic for
decision-making, the conclusion of incomparability between some options may also be helpful in
highlighting some aspects of the problem that would perhaps require a more thorough analysis
(Rogers and Duffy, 2012).

3.2 Simplified Methods


Notwithstanding the large number of sophisticated MCA methods developed over time, simplified
MCA techniques are very popular, mainly for practical reasons (Beinat, 2001; Janssen, 2001;
Dean, 2018 and 2020). Indeed, many people involved in MCA applications simply do not have
enough time, resources or even the knowledge for solving complex equation systems, constructing
utility functions or performing a large number of pairwise comparisons. Elementary MCA methods
include, amongst others, simple summary charts, simple additive weighting methods, checklists
and other screening tools. By entailing less strict rules than formal methods, simplified MCA
methods, typically, turn out to be quite flexible and easily adaptable to different types of problems.
Moreover, whereas, very often, the underlying principles of formal MCA methods can be fully
appreciated only by MCA experts and, frequently, the mathematical algorithms at their heart are
even locked within proprietary software, simplified MCA methods can be run and understood by
virtually anyone. However, it must be noted that, if used improperly (with no consideration of
even the most basic rules), elementary methods are extremely likely to lead to many
inconsistencies and errors (e.g. inaccurate selection of criteria leading to the violation of the
preferential independence condition as well as to double-counting problems; incorrect weighting
and scoring procedures; discrepancies between the weighting elicitation methods and the actual
meaning of weights; methodologically unsound rules to combine scores and weights).

3.2.1 Simple Multi-Criteria Summary Charts


With such methods the performances of the option(s) at hand against the different criteria
employed for the analysis are simply displayed using tables, graphs or diagrams without the
inclusion of scores and weights. Whereas, in some cases, (quantitative or qualitative) performance
scores may be assigned, there is no attempt to either determine mathematically a global score or
rank the project options under examination in a mechanistic manner (see Table 3 and Figure 7).
The focus of this approach, which is presentational in nature, is clearly on ‘opening up’ the analysis
(Stirling, 2006 and 2008 – see also Section 5.1), with different types of charts that provide analysts
and decision-makers with a comprehensive overview of the key features and impacts of the
option(s) under study and assist them in better understanding the problem situation.

Table 3 – Example of a simple summary table that displays the performances of a hypothetical road
project against different criteria. Neither scores nor weights are used in the table.

Objectives/Criteria | Quantitative Impacts | Qualitative Impacts
Greenhouse Gases | Increase in greenhouse gas emissions as a result of the project: +120 tonnes of CO2 over 60-year appraisal period | Not Applicable
Heritage of Historic Resources | No demolition of any historic building would be required | The proposal could be implemented with full respect of the historical character of the area
Accident Reduction | Several car accidents potentially prevented by the construction of the road (over 60-year appraisal period): fatal = 20; serious = 125; slight = 400 | Driver stress likely to be reduced
Transport Economic Efficiency | The project would generate large benefits for business users from travel time and vehicle operating cost savings: Present Value of Benefits = £125.8m | Not Applicable
Integration/Strategic Fit | Not Applicable | The project aligns with local and regional transport objectives but conflicts with environmental and sustainability objectives at all levels

Source: Author’s own elaboration.

Figure 7 - Example of a simple MCA summary diagram showing the performances of three different
options. Scores are used for illustrative purpose only and there is no attempt to rank the options in a
mechanistic way.

Source: Author’s own elaboration.


3.2.2 Simple Additive Weighting Methods
Simple additive weighting methods represent one of the most well-known and widely used
decision-support methods based on different criteria. These methods adopt the rather
straightforward and captivating weighted additive model (typical of some MAUT applications, AHP,
SMART and several other full aggregation MCA methods) in the attempt to calculate the overall
performance of the different options under investigation. Here the focus is thus on ‘closing down’
the analysis (Stirling, 2006 and 2008 – see also Section 5.1), with global scores, obtained as the
weighted sum of the single performance scores, that clearly point out to analysts and decision-
makers what the ‘best’ option to address the problem at hand is. The results of the process are
generally presented by means of performance tables, similar to the one illustrated in Table 4 (the
arithmetic behind the table is sketched in the code example that follows it). In such tables, each row
identifies a specific criterion and the columns show the respective weights and performance scores
of the option(s) under study against that criterion. Whilst easy to understand and clear-cut, these
methods usually lack the theoretical rigour of formal methods and, very often, turn out to resemble
rudimentary weighted average calculations which have very few or even no links with actual MCA
theory.

Table 4 – Example of a performance table presenting the result of an MCA based on a simple additive
weighting model. Scores are on a 0–10 scale; weights sum to 100%.

Objectives/Criteria | Weight | Option 1 Score | Option 1 Weighted | Option 2 Score | Option 2 Weighted | Option 3 Score | Option 3 Weighted
Strategic Fit | 25% | 4 | 1 | 8 | 2 | 10 | 2.5
Wider Ec. Benefits | 10% | 4 | 0.4 | 6 | 0.6 | 5 | 0.5
Env. Impacts | 15% | 2 | 0.3 | 7 | 1.05 | 4 | 0.6
Equity | 30% | 5 | 1.5 | 7 | 2.1 | 8 | 2.4
Implement. Risks | 20% | 9 | 1.8 | 2 | 0.4 | 8 | 1.6
Total | 100% | | 5 | | 6.15 | | 7.6
Preference Rank | | | 3 | | 2 | | 1

Source: Author’s own elaboration.
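The arithmetic of Table 4 can be reproduced in a few lines; the sketch below simply recomputes the weighted scores, totals and preference ranks from the weights and raw scores in the table:

```python
# Simple additive weighting: reproduces the arithmetic of Table 4.
weights = {"Strategic Fit": 0.25, "Wider Ec. Benefits": 0.10,
           "Env. Impacts": 0.15, "Equity": 0.30, "Implement. Risks": 0.20}

scores = {   # performance scores on a 0-10 scale, one entry per option
    "Option 1": {"Strategic Fit": 4, "Wider Ec. Benefits": 4,
                 "Env. Impacts": 2, "Equity": 5, "Implement. Risks": 9},
    "Option 2": {"Strategic Fit": 8, "Wider Ec. Benefits": 6,
                 "Env. Impacts": 7, "Equity": 7, "Implement. Risks": 2},
    "Option 3": {"Strategic Fit": 10, "Wider Ec. Benefits": 5,
                 "Env. Impacts": 4, "Equity": 8, "Implement. Risks": 8},
}

totals = {opt: sum(weights[c] * s for c, s in per_crit.items())
          for opt, per_crit in scores.items()}

ranking = sorted(totals.items(), key=lambda kv: -kv[1])
for rank, (opt, total) in enumerate(ranking, start=1):
    print(rank, opt, round(total, 2))  # Option 3: 7.6, Option 2: 6.15, Option 1: 5.0
```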

3.2.3 Multi-Criteria Checklists and Other Screening Tools


This category comprises various elementary and intuitive forms of MCA, which do not involve
numerical procedures and are also often employed (instinctively) by many people in their everyday
decisions. The most typical examples of such methods include basic checklists (see Table 5) and
lexicographic orderings (see Figure 8), which, for example, can be conveniently used at the
beginning of the planning and decision-making process to screen some options out and find the
most feasible options, which will then be developed and examined further. With basic lexicographic
orderings, in particular, the different objectives and criteria are ordered into different priority levels
and the various options are ranked or screened in/out against one criterion at a time, commencing
with the most important criterion and ending with the least important one. If an option appears
clearly the best with reference to the first criterion, the process ends and that option is selected as
the preferred one. Conversely, if more than one option performs reasonably well against the most
important criterion, this subset of options is then compared against the second most important
criterion. The process continues in this sequential manner until a single option is chosen
or all the criteria have been gone through and complete separation between options proves to be
impossible (a code sketch of this screening logic is given after Figure 8).

Table 5 - Example of a multi-criteria checklist.

Source: Author’s own elaboration.

Figure 8 - Example of a lexicographic ordering.

Source: Author’s own elaboration.
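A minimal sketch of the lexicographic screening logic described above follows; the tolerance rule used here to decide whether options ‘perform reasonably well’ against a criterion is an assumption made purely for illustration:

```python
def lexicographic_screen(options, criteria, tolerance=0.1):
    """Screen options criterion by criterion, from most to least important.
    `options` maps option names to per-criterion scores (higher is better);
    `criteria` lists criterion names in decreasing order of priority. At each
    level only options within `tolerance` of the best score survive."""
    surviving = list(options)
    for criterion in criteria:
        best = max(options[o][criterion] for o in surviving)
        surviving = [o for o in surviving
                     if options[o][criterion] >= best - tolerance]
        if len(surviving) == 1:          # a single clear winner: stop early
            break
    return surviving                     # one option, or an unresolved subset

# Hypothetical example: cost is the top priority, then safety, then comfort.
options = {"A": {"cost": 0.90, "safety": 0.60, "comfort": 0.40},
           "B": {"cost": 0.85, "safety": 0.90, "comfort": 0.50},
           "C": {"cost": 0.50, "safety": 0.95, "comfort": 0.90}}
print(lexicographic_screen(options, ["cost", "safety", "comfort"]))  # ['B']
```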


4. Non-Participatory and Participatory Approaches to Multi-Criteria Analysis

Appraisal and evaluation can be undertaken either in a non-participatory (i.e. analyst-led) or a
participatory manner. In non-participatory assessments, the analysis is carried out autonomously
by one analyst or a team of analysts and specialist advisors, according to a typical technocratic
approach. The analysts gather, process and interpret data and information (by employing different
decision-support methods and tools), taking (to the greatest extent possible) a general and
independent view of the problem at hand, and ultimately present the results of the analysis to one
or a few decision-makers (e.g. a Minister or a Government Department; a person, a group of
individuals or a committee with responsibility for the decision). A key argument in favor of this
approach is that a group of trained specialists is best suited to support complicated and critical
decisions. By contrast, participatory techniques adopt a more collaborative and (in principle more
democratic) decision-making style, with the direct involvement of different interested and affected
parties (i.e. problem stakeholders) in the analysis. This may thus help analysts and decision-
makers account, to the largest possible extent, for neglected perspectives, excluded possibilities
and ignored issues. The choice over which approach is more appropriate depends on the nature of
the problem at hand as well as on the resources available to carry out the analysis. Ideally, an
analyst-led approach with no inputs from stakeholders may be more suitable for solving purely
technical problems, characterised by relatively low levels of uncertainty and ambiguity.
Conversely, more intricate and uncertain policy issues, affecting society at large, may be better
addressed through (longer and more expensive) participatory processes in the attempt to ensure
that all the different viewpoints regarding the decision situation are adequately represented
(Funtowicz and Ravetz, 1991; Stirling, 1998 and 2006; Renn, 2015).
Whilst MCA was originally conceived to be employed in a non-participatory manner, over
time, owing to the ever-growing demand for public participation in planning and
decision-making processes, many arguments have been put forward to go beyond this
technocratic model (Vari, 1995; Banville et al., 1998; Petts and Leach, 2000; Stirling, 2006; Stagl,
2007). Hence, especially over the past three decades, methodologies combining participatory and
deliberative procedures with (in many cases, simplistic forms of) MCA have appeared in a rather
diffuse way in many planning and policy fields (e.g. Renn et al., 1993; Gregory and Keeney, 1994;
Stirling and Mayer, 2001; Proctor and Drechsler, 2006; Stagl, 2006; Burgess et al., 2007; McDowall
and Eames, 2007; Munda, 2008; Macharis et al., 2012; De Brucker et al., 2015; Macharis and
Bernardini, 2015; Ward et al., 2016a and 2016b; Cornet et al., 2018a and 2018b). However, it is
not totally clear whether these techniques have enjoyed real-world applications or constitute mere
academic proposals and how they fit (or would fit) with the current planning procedures, traditional
(analyst-led) appraisal and evaluation methods and conventional public inquiry processes (Dean,
2018, 2020, 2021 and 2022).
Participatory MCA methods, besides analysts and decision-makers, thus involve a number of group
decision-making participants. The latter typically comprise problem stakeholders and, in some
cases, in the attempt to incorporate a more scientific perspective into the analysis, also academics
and experts. In principle, participants may take part in the multi-actor multi-criteria exercise
individually or as representatives of organised groups (e.g. local community groups, landowners,
business groups, environmental experts).
In operational terms, the steps of participatory MCA methodologies resemble those of analyst-
led MCA and typically encompass the following stages (which can take place in different orders
and manners): development of options; identification of objectives and criteria against which to test
options; scoring of impacts of options against the different criteria; and weighting of criteria.

However, differently from analyst-led methods, in participatory MCA techniques group decision-
making participants can contribute to the identification of the key elements of the multi-criteria
framework (i.e. options, objectives and criteria, weights and scores). Methodological adaptations of
MCA to group decision-making thus seem to have taken place primarily in three main domains
(Dean, 2018):
 Identification, classification and selection of group decision-making participants;
 Involvement of stakeholders (and experts) in the analysis and management of group
processes; and
 Collection, processing and inclusion of the group decision-making participants’ preferences in
the multi-criteria framework.
Each domain, however, entails some critical dilemmas and methodological challenges.
Regarding the first domain, for example, proponents of multi-actor multi-criteria methods (e.g.
Macharis and Nijkamp, 2011) emphasise that, ideally, in a MCA exercise all the parties that are
affected by the issue under discussion should be involved or represented, with no viewpoint
excluded a priori. However, the practical need for creating a workable and efficient process
drastically limits the number of group decision-making participants. In this regard, a comprehensive
review of participatory MCA techniques (Dean, 2018) has revealed that such processes rarely
involve more than 30 people overall. Obviously, an appraisal or evaluation exercise regarding a
major planning and policy problem that involves only a few actors and groups does not satisfy the
requirements of statistical representativeness and, paradoxically, risks even representing a step
backwards in terms of democracy and equity (Dean, 2018, 2021 and 2022).
The involvement of actors and groups can also be realised with different degrees of intensity
and in a wide variety of forms. Methods range from limited-participatory techniques, where
participants take part only in some stages of the process and can thus affect the multi-criteria
framework only partially, to fully-participatory techniques, in which the various parties are involved
in the definition of options, objectives/criteria, weights and scores (see Figure 9). The latter
approach is more democratic, allowing participants to take full ownership of the problem at hand,
but it can lead to processes that are very difficult to manage. The former approach has
diametrically opposite characteristics (Dean, 2018).

Figure 9 – Types of participatory MCA methods.

Source: Adapted from Dean (2018).


Moreover, it is also clear that, although potentially important to ensure a comprehensive
examination of the problem, the inclusion of stakeholder groups and other interested parties in the
analysis exponentially increases the complexity of the MCA exercise. Indeed, under an analyst-led
approach, a typical discrete multi-criteria decision-making problem, involving a finite set A of M
options, A = {a1, a2, a3, …, aM}, a set C of N criteria, C = {c1, c2, c3, …, cN}, and a set of
weights W = {w1, w2, w3, …, wN}, can be synthetically represented by an N×M matrix, whose
typical element xj(ai) (i = 1, 2, …, M; j = 1, 2, …, N) represents the evaluation of the i-th alternative
by means of the j-th criterion (see Table 6).

Table 6 – Tabular representation of a multi-criteria decision-making problem under an analyst-led
approach to MCA.

Source: Author’s own elaboration.
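In computational terms, such an N×M matrix is straightforward to represent. The sketch below (illustrative values, reusing those of Table 4) shows the element-wise indexing xj(ai) and, anticipating Section 5, how a weight vector turns the matrix into global scores in one step under a simple additive weighting model:

    import numpy as np

    # N x M performance matrix for N = 5 criteria and M = 3 options
    # (rows = criteria, columns = options; values are illustrative only).
    X = np.array([
        [4, 8, 10],   # c1: strategic fit
        [4, 6, 5],    # c2: wider economic benefits
        [2, 7, 4],    # c3: environmental impacts
        [5, 7, 8],    # c4: equity
        [9, 2, 8],    # c5: implementation risks
    ], dtype=float)

    # x_j(a_i): evaluation of the i-th option against the j-th criterion.
    i, j = 0, 2                  # option a1, criterion c3
    print(X[j, i])               # 2.0

    # With a weight vector w, the global scores of all options follow at once.
    w = np.array([0.25, 0.10, 0.15, 0.30, 0.20])
    print(w @ X)                 # -> [5.   6.15 7.6 ]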

By comparison, with a multi-actor multi-criteria exercise involving G group decision-making
participants, the problem is described by a three-dimensional matrix N×M×G, which also captures
the preferences of the different parties involved in the exercise. When participants are offered the
possibility of scoring the impacts of the options under study, the generic element of the matrix
xj(ai)k (i = 1, 2, …, M; j = 1, 2, …, N; k = 1, 2, 3, …, G) represents the evaluation of the i-th
alternative by means of the j-th criterion according to the viewpoint of the k-th group decision-
making participant. Moreover, if participants are also given the opportunity to identify their own list
making participant. Moreover, if participants are also given the opportunity to identify their own list
of objectives and criteria and the weights of these criteria (Approach ‘L’ in Figure 9), the set C of
criteria and the set W of weights can also vary according to the viewpoint of the person (or group)
undertaking the assessment (see Figure 10). In general, since stakeholder groups typically present
different interests and priorities, a participatory MCA process may lead to as many lists of criteria,
weighting schemes and sets of scores as the number of groups involved. When the multi-actor
multi-criteria exercise involves a large number of participants (as a participatory process on a large-
scale transport project or another major policy problem would theoretically require), the multi-criteria
framework can thus easily become very difficult (if not impossible) to manage and analyse.

Figure 10 – Tabular representation of a multi-criteria decision-making problem under a participatory
approach to MCA.

Source: Author’s own elaboration.
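The jump in complexity from Table 6 to the participatory case is easy to see once the three-dimensional matrix is written down explicitly. The sketch below (with randomly generated stand-in scores) also anticipates one of the aggregation strategies discussed next, namely collapsing the participant axis into a mathematical average:

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, G = 5, 3, 4            # criteria, options, participants (illustrative)

    # X[j, i, k] = x_j(a_i)k: score given by the k-th participant to the
    # i-th option against the j-th criterion (random stand-in data).
    X = rng.integers(0, 11, size=(N, M, G)).astype(float)

    # Averaging across participants recovers an N x M matrix that a
    # standard analyst-led procedure can handle (at the cost, as noted
    # below, of masking conflicts between viewpoints).
    X_avg = X.mean(axis=2)
    print(X_avg.shape)           # (5, 3)

    # Keeping viewpoints separate instead means carrying G distinct
    # N x M matrices, X[:, :, k], through the whole analysis.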

Finally, one of the most critical aspects of participatory MCA concerns the ways in
which the interests and priorities of the different stakeholder groups are collected and processed
to determine the options, the list of objectives and appraisal criteria, the set of scores and/or the
weighting scheme. As illustrated in Figure 11, different approaches are possible (Dean, 2018,
2021 and 2022; Dean et al., 2019). The points of view of the actors and groups taking part in the
process can be kept separate from each other, with a view to better highlighting differences and
similarities in the positions of the various group decision-making participants. As highlighted
above, however, with this approach the resulting multi-criteria framework is extremely likely to
become too complex (i.e. a number of different and clashing lists of criteria, weighting schemes
and sets of scores) to be used directly in decision aiding. Alternatively, the participants’
viewpoints can be aggregated together, through the calculation of the mathematical average of a
spectrum of values or the construction of a representative value which minimizes the differences
between participants’ opinions. Although very straightforward, this approach is methodologically
weak and masks the conflicts between stakeholders only temporarily. Finally, a shared multi-
criteria framework can also be obtained, in theory, through discussions and negotiations between
participants. Such a negotiation process is, however, extremely challenging to manage and
implies high chances of deadlock, especially when stakeholders present totally opposite interests,
with little room for compromise.

Figure 11 – Possible strategies for including multiple perspectives in participatory MCA.

Source: Adapted from Dean (2018).

There are also many other aspects and issues that need to be considered very carefully before
undertaking a multi-actor multi-criteria appraisal exercise. A comprehensive discussion of
participatory MCA is, however, beyond the purpose of this paper. The following sections illustrate
the main steps of a basic, analyst-led MCA exercise, whereas for a more detailed examination of
participatory MCA methods the reader can refer to Dean (2018, 2021 and 2022) and Dean and
colleagues (2019).

5. Key Steps of an Analyst-Led Multi-Criteria Analysis

A basic multi-criteria decision problem typically includes the following main steps10:

1) Primary problem analysis


2) Development of the options to be assessed
3) Identification of objectives and associated criteria against which to test options
4) Construction of the performance profile of each option
5) Scoring of impacts of each option
6) Weighting of criteria

10
In the MCA literature, it is possible to find several versions of this process, with authors sometimes describing these
steps more coarsely (e.g. Roy, 1996) and at other times more finely (e.g. Belton and Stewart, 2002).
7) Combination of scores and weights
8) Sensitivity analysis
9) Presentation of the results of the MCA exercise as a support for the final decision-making11

The steps can change (slightly or even drastically) according to the specific type of MCA
adopted. More formal MCA methods generally require additional steps and sub-steps. In contrast,
with very basic methods such as simple MCA summary charts and multi-criteria checklists the
process is much shorter. In MCA summary charts (see Section 3.2.1), once collected, the impacts
of the options against the various objectives and criteria are simply displayed, by means of tables,
graphs or diagrams, and used as a basis for decisions. With multi-criteria checklists (see Section
3.2.3) the data and information regarding the performances of the options under study are used to
produce a preliminary ranking of alternatives and select some of them for further, more
comprehensive analyses. In lexicographic models (see Section 3.2.3), an ordinal ranking of criteria
has to be produced before screening options (Table 7).

Table 7 – MCA methods and steps of the process.

Source: Author’s own elaboration.

Moreover, as illustrated in Figure 12, whereas the first and the last three steps of the process
remain fixed, the order of all the other steps can vary according to the nature of the problem under
examination. For instance, whereas options are often defined at an early stage of the process (or
even before its formal commencement), in some circumstances, the identification of a list of
objectives and decision criteria may also take place before the development of the possible
alternative courses of action to be assessed. Weighting procedures, in turn, can be undertaken

11
As is easily noticeable, the steps of the process clearly mirror those of the classic rational-comprehensive decision-
making model, which starts with the definition of the problem, proceeds with the identification of the goals and objectives
to be pursued, and the possible alternative solutions to solve the problem and achieve these objectives, continues
through the collection and analysis of data and information to compare the identified options, and ends with the selection
of the best course of action (see e.g. Nijkamp et al., 1990; Bazerman and Moore, 2009). It is clear, however, that this
linear, deterministic and rather static model, which became very prominent in the planning, decision and policy making
community particularly during the 1950s and the 1960s, and still pervades the academic and grey literature, risks clashing
with the highly fragmented, turbulent and uncertain reality of many actual planning and policy problems and decision
situations (Dean, 2018 and 2021).
either towards the beginning of the process, immediately after the identification of objectives and
appraisal criteria, or even at the end of it, after the construction of the performance profile of the
options and the ascription of scores. It should be noted, however, that in a MCA exercise, several
iterations and feedback loops between the different steps of the process are possible (see
Section 5.2).

Figure 12 – Possible sequences of steps in a MCA process.

Source: Author’s own elaboration.

5.1 Primary Problem Analysis


Ideally, a multi-criteria exercise should commence with a comprehensive investigation of the nature
and dynamics of the problem at hand as well as of the decision-making context (Belton and
Stewart, 2002; Dimitriou et al., 2010; Ward et al., 2016a). This step allows the team of analysts and
specialist advisors to better structure the analysis. Indeed, critical choices, potentially having a
dramatic impact on the final results of the process, need to be made with reference to several
interdependent aspects, including:
 Scope of the analysis.
 Type of MCA to be employed (see Section 3).
 Steps of the process and their order (as discussed in the previous section).
 Stance adopted in the analysis and identification of the most important problem stakeholders
(see in particular Section 5.3.5).
 Level of participation implied by the process (i.e. analyst-led or participatory MCA as illustrated
in Section 4).

30
 In the case of a participatory MCA exercise, several additional aspects, amongst which the
number and types of group decision-making participants; the participatory techniques employed to
engage with participants; the level of involvement of group decision-making participants; and the
strategies adopted to include the different participants’ viewpoints in the multi-criteria
framework (see Section 4, and Dean, 2018, 2021 and 2022).
With reference to the scope of the analysis, Roy (1996) identifies four main typologies of
decision problems (or problematiques) for which MCA can be useful:
 The choice problem, in which the objective is to choose the most suitable alternative from a
set of feasible options.
 The sorting problem, where the options are distinguished into classes such as ‘acceptable for
further analysis’, ‘possibly acceptable but in need of more information’ and ‘unacceptable’.
 The ranking problem, in which the scope is to compare different options and place them in
some form of preference ordering.
 The description problem, where the emphasis is more on producing a formal and systematic
examination of the key features of one or more options and their consequences, rather than on
simply trying to rank them.
According to Keeney (1992), MCA can also be useful to refine options or generate entirely
new options (i.e. the design problem) so as to properly address the problem at hand and better
meet the goals and objectives identified through the MCA process. Finally, Stirling (2006 and 2008)
contends that MCA can be employed either to ‘close down’ or ‘open up’ the analysis. In the first
case, the aim is to provide unitary advice which can assist decision-makers to
cut through the problem at hand and formulate a prescriptive response regarding the ‘best’ way
forward. In the second case, the focus is on helping decision-makers understand the multiple
aspects and implications of the problem, the different sources of information available and the
differing and even contrasting perspectives and viewpoints that should be acknowledged in the
analysis. Table 8 below combines and integrates the work of the above authors into a more
comprehensive framework.
Table 8 - Scope of MCA.

Types of Problem        Underlying Approach to the Analysis

Choice Problem          Closing Down
Sorting Problem         Opening Up / Closing Down
Ranking Problem         Closing Down
Description Problem     Opening Up
Design Problem          Opening Up

Source: Author’s own elaboration based on Keeney (1992), Roy (1996) and Stirling (2006 and 2008).

To frame the analysis, various problem-structuring approaches, often also referred to as ‘soft
systems’ techniques (Eden and Radford, 1990; Rosenhead and Mingers, 2001 and 2004), can be
conveniently employed. Such methods, encompassing, amongst others, Stakeholder Analysis,
Scenario Planning, Strengths-Weaknesses-Opportunities-Threats (SWOT) Analysis, Cognitive and
Causal Maps, the Strategic Choice Approach (SCA), Strategic Options Development and Analysis
(SODA), the Driver-Pressure-State-Impact-Response (DPSIR) Framework, and Strategic Assumptions
Surfacing and Testing (SAST), can also be adopted to support the other steps of the MCA process
(Belton and Stewart, 2002 and 2010; Marttunen et al., 2017).

5.2 Option Generation


As explained above, the definition of a set of options to be assessed constitutes, in many cases,
one of the first steps of a MCA process. Some authors, however, contend that option generation
should take place after the identification of objectives and criteria. Keeney (1992 and 1996), for
instance, distinguishes between ‘alternative-focused thinking’ and ‘value-focused thinking’
approaches to decision-making. Whereas the former decision-making style focuses first on the
identification of the possible alternative options to address the problem and only subsequently on
the objectives and criteria to assess them (see, e.g., approaches A, B and C in Figure 12), the
latter modus operandi starts with the selection of objectives and decision criteria (see, e.g.,
approaches D, E and F in Figure 12). According to Keeney, with an ‘alternative-focused thinking’
approach to decision-making, objectives and criteria risk being selected based merely on the key
features of the pre-identified options, rather than on the overarching values and goals guiding the
process. This, in turn, may lead to a less comprehensive multi-criteria framework and ultimately to
a rather uncritical appraisal exercise. By comparison, commencing the decision-making process
with the identification of objectives and criteria may foster the development of outside-the-box
ideas and lead to better and more comprehensive results. Indeed, with this ‘value-focused thinking’
decision-making style, options are not determined externally, but rather they are developed as
systematic explorations of the values, principles, goals and objectives guiding the decision-making
process. This is likely to increase both the spectrum of alternatives considered and their innovative
character (Keeney, 1992 and 1996).
In contrast, other authors including Corner and colleagues (2001) claim that no approach is
superior to the other and encourage a continuing process of interaction between value-focused
thinking and alternative-focused thinking (see Figure 13).

Figure 13 – Interaction between value-focused thinking and alternative-focused thinking in MCA.

Source: Author’s own elaboration.


It must be noted, however, that, in practice, neither Keeney’s value-focused thinking decision-
making style nor the interaction between option generation and objective/criteria identification can
always be ensured. Indeed, in real planning and policy problems, most of the time, (project or
policy) options are defined in advance (e.g. as potential solutions put forward by project
promoters and relevant problem stakeholders), before the formal commencement of the appraisal
exercise, and pre-defined multi-criteria frameworks (e.g. those set out in government guidance
and guidelines) are generally employed to assess and compare options.

5.3 Identification of Objectives and Associated Criteria


5.3.1 Objectives Identification
Objectives represent the key aims against which the proposed options are ultimately assessed and
compared. The number and types of objectives to be included in the multi-criteria framework
depend primarily on the decision-making situation at hand. In principle, preliminary analyses
require the consideration of only a few key aspects, whereas thorough and in-depth assessments
demand a more comprehensive list of objectives and thus turn out to be more data-demanding.
To develop a list of objectives the team of analysts and specialist advisors undertaking the analysis
can draw on several sources, including:
• Key planning and policy documents regarding the problem at hand12.
• General guidelines, checklists and generic multi-criteria frameworks developed by government
departments, agencies or scholars (Figure 14).

Figure 14 – A generic list of objectives that can guide the development of the multi-criteria framework.

Source: Dimitriou et al. (2010).

12
However, most of the time, an appraisal or evaluation exercise entirely directed by and/or totally aligned with policy
goals may not be very desirable (see Footnote 18 on page 54).
• Past appraisal and evaluation reports regarding similar decision-making situations.
• Information regarding the interests and priorities of the decision-makers and various problem
stakeholders, obtained from documents, reports, interviews and surveys.

5.3.2 Criteria Identification


To measure the extent to which options meet the selected objectives, specific indicators of
performance (one for each objective) have to be established. Problem structuring methods and
visual aids such as value trees, where general and broad goals, operational objectives and specific
measurable indicators of performance are displayed hierarchically (Figure 15) can be very useful
for analysts and decision-makers to brainstorm and articulate criteria as well as to communicate
the results of the analysis to the relevant parties (Keeney and Raiffa, 1976; Chankong and Haimes,
1983; Keeney, 1992).

Figure 15 - Example of a value tree for the articulation of objectives and criteria.

Source: Author’s own elaboration.

Whereas, most of the time, the development of the value tree of objectives and criteria takes
place in a top-down manner (i.e. from fundamental values and generic goals to specific criteria),
a bottom-up approach can also be adopted (Buede, 1986; Von Winterfeldt and Edwards, 1986).
With this latter approach, criteria are developed on the basis of the relevant anticipated impacts of
the options under examination, and then grouped into broader categories (i.e. objectives and goals).
These two approaches to the articulation of objectives and criteria, as illustrated in Figure 16,
somewhat reflect Keeney's value-focused and alternative-focused thinking strategies discussed
in Section 5.2. The top-down modus operandi tends to be objective-driven, whereas the bottom-up
approach places a greater emphasis on the alternative options under examination. It is often
contended that there is no single right way to construct the value tree and that a combination of both
approaches is likely to lead to a more comprehensive array of objectives and criteria (e.g.
McAllister, 1988; Belton and Stewart, 2002; De Brucker et al., 2004; Schutte, 2010).

Figure 16 – Top-down and bottom-up approaches to the articulation of objectives and criteria.

Source: Adapted from Schutte (2010).

5.3.3 Basic Requirements for Criteria


In MCA there are few rules and very specific guidelines concerning the maximum and
minimum number13 and types of objectives and criteria that can be selected. Different authors
have, however, identified some basic requirements that criteria must satisfy in order to
guarantee the reliability and rigour of the analysis (e.g. Keeney and Raiffa, 1976; Roy, 1985;
Bouyssou, 1990; Belton and Stewart, 2002; Diakoulaki and Grafakos, 2004; Keeney and Gregory,
2005; Dodgson et al., 2009). Having been developed for different MCA methods, these lists of
requirements differ slightly from one another. The most important properties, common to
(almost) all MCA methods, with which criteria should comply, are as follows:
 Exhaustiveness: the set of criteria must cover all important aspects of the problem under
consideration.
13
For example, the Manual for Multi-Criteria Analysis prepared by the UK Department of the Environment, Transport and
the Regions (Dodgson et al., 2009) recommends using a number of criteria ranging from 6 to 20. This, however, is
obviously quite a broad range. More precise guidelines regarding the number of objectives and associated criteria are
suggested for a few MCA methods. For SMART (see Section 3.1.2), Edwards (1977) advises reducing the number of
criteria as much as possible, especially to avoid issues in the subsequent weighting process. Hence, in this MCA
method, eight criteria are considered a sufficiently large number, whereas 15 or more criteria are considered too many.
According to Mu and Pereyra-Rojas (2017), due to the inherent limits to human information-processing capabilities
(Miller, 1956), approximately five to nine criteria should be ideal for the AHP hierarchy (see again Section 3.1.2).
 Manageability: in order to avoid unnecessary analytical effort, the total number of criteria must
be as limited as possible and the value tree of objectives and criteria should not be more
detailed than necessary.
 Understandability: analysts, decision-makers, problem stakeholders and all the other
parties involved in the process must have a shared understanding of the assumptions and
concepts behind each criterion.
 Measurability: criteria must measure the performances of an option as precisely and clearly as
possible, in a quantitative or qualitative way, compatibly with the nature of the measure under
consideration.
 Non-redundancy: criteria that have been judged to be excessively similar to others must be
excluded from the list.
The first two requirements reflect the compromise between the desire to construct a
comprehensive value tree, capturing all the aspects of the problem at hand, and the practical need
for keeping the model relatively simple and not excessively costly and time-consuming to use, by
including only strictly relevant and fundamental criteria.
The understandability requirement imposes the establishment of a common view on the meaning
of each criterion and on the associated options’ performance (e.g. whether a criterion is defined as
a ‘benefit condition’, and thus needs to be maximised to return a high performance score, or as a
‘cost condition’, for which a low value of performance against that criterion would be preferable).
Measurability involves the possibility of expressing the performances of the different options
on either a quantitative or a qualitative measurement scale through the selected criteria. Different
MCA approaches, however, demand different levels of precision.
Finally, the non-redundancy requirement represents an attempt to avoid double counting
problems (i.e. inclusion of criteria which account for effects already taken into account elsewhere
by other criteria). Such problems are particularly frequent in rough MCA applications. They also
become progressively more difficult to avoid as the number of objectives and criteria included in
the multi-criteria framework increases. When included in the multi-criteria framework,
multidimensional and rather broad concepts such as livability and quality of life are also likely to
lead to double-counting of impacts (Anciaes and Jones, 2020). Figure 17 below illustrates an
example of double-counting between criteria, which is quite easy to encounter in transport
appraisal and evaluation exercises. In this example, different alternative transport projects are
assessed against five key objectives, namely strategic fit, job opportunities, air quality
improvement, potential environmental risks and economic efficiency. The performances of the
project options against the latter objective are measured through the benefit-cost ratio of the
projects. On the one hand, this value tree of objectives and criteria attempts to integrate CBA into the
multi-criteria framework. On the other hand, the benefit-cost ratio of the options under
study already accounts for several environmental benefits and costs entailed by these projects.
This implies severe double counting problems. In this and in other similar situations, where
the inclusion of redundant and overlapping criteria is likely to lead to misleading interpretations
of the pros and cons of the options under examination, a reformulation of the value tree is thus
necessary.

Figure 17 – Example of double counting in MCA.

Source: Author’s own elaboration.

5.3.4 Preferential Independence Condition


In addition to the above requirements, the possibility of effectively employing a simple weighted
additive model to aggregate scores and weights into a global score (in full aggregation MCA
methods) would require the condition of mutual preferential independence between criteria to be
satisfied14. As highlighted in Section 3.1, only when this condition is met is it possible to
decompose the multi-attribute utility function into its component parts (i.e. the partial utility
functions of the individual criteria), which can then be summed up to yield the overall utility of the
options under examination. When mutual preferential independence between criteria cannot be
assumed, the value tree of objectives and criteria needs to be redefined or other non-linear and
more complex aggregation rules need to be used (Keeney and Raiffa, 1976; Zeleny, 1982).
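For reference, the additive decomposition in question takes the familiar form below (a minimal statement, using the notation introduced in Section 4 and assuming the usual normalisation of weights; $u_j$ denotes the partial utility/value function associated with the j-th criterion):

$$U(a_i) = \sum_{j=1}^{N} w_j \, u_j\big(x_j(a_i)\big), \qquad \sum_{j=1}^{N} w_j = 1$$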
A set C of criteria is mutually preferentially independent if the preference structure and trade-
offs between levels of achievement of any subset of criteria in C do not depend on the fixed level of
achievement of the remaining criteria (Belton and Stewart, 2002; Abbas, 2018). In the simplest
case of a decision-making problem involving only two criteria C = {c1, c2}, a possible subset of
criteria is either {c1} or {c2} and the corresponding complementary subset, including the remaining
criteria, is either {c2} or {c1}, respectively. In this case, the preferential independence test is
performed by considering two (hypothetical) options performing differently with regard to one of
these two criteria, but having the same performance level for the other criterion. In general, c1 is
said to be preferentially independent of c2 if the preference relation between any two options a1
and a2, which preform differently against c1 and perform equally with respect to c2, is independent
of the fixed performance level against c2. Therefore, if, for instance, a1 is preferred to a2, this
preference structure is not modified by the (equal) performance of the two options on c2. In a more
operational form the above condition can be expressed as follows:

14
In principle, outranking and goal programming methods do not require mutual preferential independence between
criteria. However, in the absence of this condition, these methods may lead to inconsistencies and errors (Roy, 1996; Belton
and Stewart, 2002).
$$\text{if } a_1 > a_2 \;\Rightarrow\; U(a_1) > U(a_2) \;\Rightarrow\; u(c_1; c_2)(a_1) > u(c_1; c_2)(a_2)$$
$$\text{if } c_1 \text{ Pref. Ind. of } c_2 \;\Rightarrow\; (x_{11}; \alpha)(a_1) > (x_{12}; \alpha)(a_2) \;\text{ and }\; (x_{11}; \beta)(a_1) > (x_{12}; \beta)(a_2)$$

In the above expression, x11 and x12 are respectively the performances of the first option, a1, and
the second option, a2, against the criterion c1. The expression indicates that α (i.e. the common
performance of the two options against the criterion c2) can be replaced by any value β without
altering the preference structure of the two options.
Conversely, c2 is said to be preferentially independent of c1 if the preference relation between
any two options a1 and a2, differing only upon c2, does not depend on the fixed (equal) performance
level against c1. Therefore, assuming again that a1 is preferred to a2, then:

$$\text{If } a_1 > a_2 \;\Rightarrow\; U(a_1) > U(a_2) \;\Rightarrow\; u(c_1; c_2)(a_1) > u(c_1; c_2)(a_2)$$
$$\text{If } c_2 \text{ Pref. Ind. of } c_1 \;\Rightarrow\; (\alpha; x_{21})(a_1) > (\alpha; x_{22})(a_2) \;\text{ and }\; (\beta; x_{21})(a_1) > (\beta; x_{22})(a_2)$$

If c1 is preferentially independent of c2, and c2 is also preferentially independent of c1, then c1
and c2 are said to be mutually preferentially independent15.
In order to gain a more intuitive understanding of the concept of preferential independence, let us
consider two projects which are assessed against two criteria, namely completion time and project
cost. If the decision-maker prefers a project whose cost is £10,000 to a project costing £15,000,
assuming, for example, that both projects take 10 days to be completed, and if she/he would also
opt for the cheaper project when, instead, the completion time of both projects has been
estimated at 30 days, then the project cost criterion is preferentially independent of the completion
time criterion. Indeed, the decision-maker prefers the lower cost regardless of what the
completion time of the two projects is:

$$\text{If } (\pounds 10\text{k}; 10 \text{ days})(a_1) > (\pounds 15\text{k}; 10 \text{ days})(a_2) \;\text{ and }\; (\pounds 10\text{k}; 30 \text{ days})(a_1) > (\pounds 15\text{k}; 30 \text{ days})(a_2)$$
$$\text{then `Cost' is Pref. Ind. of `Time'}$$

To ensure, however, that the two criteria are mutually preferentially independent, it is then
necessary to prove that the completion time criterion is preferentially independent of the project cost
criterion. Let us therefore consider two hypothetical projects that require respectively 15 days and 25

15
For multi-criteria decision-making problems involving only two criteria c1 and c2, an additional condition would need to
be satisfied to ensure the applicability of a weighted additive model. This condition, termed ‘corresponding trade-offs’ or
the ‘Thomsen condition’, states that if a decision-maker agrees on the following preference relations:
$(\alpha, \beta)(a_1) \sim (\gamma, \delta)(a_2)$ and $(\varepsilon, \beta)(a_1) \sim (\gamma, \zeta)(a_2)$
then she/he should also agree on the following preference relation:
$(\varepsilon, \delta)(a_1) \sim (\alpha, \zeta)(a_2)$
In the above preference relations, $\alpha$, $\gamma$ and $\varepsilon$ represent any possible performance levels against the first criterion c1;
$\beta$, $\delta$ and $\zeta$ represent any possible performance levels against the second criterion c2; and $\sim$ indicates indifference
between the outcomes of the two options a1 and a2. A more detailed explanation of the Thomsen condition can be found
in Keeney and Raiffa (1976), Chankong and Haimes (1983), Beroggi (1999) and Belton and Stewart (2002).

days. If the decision-maker prefers a given completion time no matter what the cost of the two
projects is, the preferential independence condition between these criteria can be assumed to be met:

$$\text{If } (\pounds 5\text{k}; 15 \text{ days})(a_1) > (\pounds 5\text{k}; 25 \text{ days})(a_2) \;\text{ and }\; (\pounds 25\text{k}; 15 \text{ days})(a_1) > (\pounds 25\text{k}; 25 \text{ days})(a_2)$$
$$\text{then `Time' is Pref. Ind. of `Cost'}$$

The above expression might, however, not hold. For instance, when the cost of the projects is
relatively low (e.g. £5,000), the decision-maker might prefer a shorter completion time, whereas
when the cost increases (e.g. £25,000), she/he might opt for the project with the longer
completion time, which is likely to result in better quality levels, with all the benefits this entails. In
such a situation, completion time is not preferentially independent of project cost and the two criteria
cannot be considered mutually preferentially independent.
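The pairwise checks illustrated in the cost/time example can also be mechanised. The Python sketch below (invented data; the decision-maker’s preferences are supplied as a simple function) tests whether one criterion is preferentially independent of another by verifying that the preference between two outcomes differing only on the first criterion never flips as the common level on the second criterion is varied:

    from itertools import product

    def pref_independent(prefers, levels_c1, levels_c2):
        """Check whether criterion c1 is preferentially independent of c2.
        prefers(x, y): True if outcome x = (c1 level, c2 level) is preferred
        to outcome y. For every pair of distinct c1 levels, the verdict must
        not change as the common c2 level varies."""
        for x1, x2 in product(levels_c1, repeat=2):
            if x1 == x2:
                continue
            verdicts = {prefers((x1, a), (x2, a)) for a in levels_c2}
            if len(verdicts) > 1:        # preference flipped for some c2 level
                return False
        return True

    # 'Cost' pref. ind. of 'Time': the cheaper project is always preferred,
    # whatever the common completion time (outcomes are (cost, time)).
    always_cheaper = lambda x, y: x[0] < y[0]
    print(pref_independent(always_cheaper, [10_000, 15_000], [10, 30]))  # True

    # 'Time' NOT pref. ind. of 'Cost': shorter times are preferred only for
    # cheap projects (outcomes are (time, cost); preference flips at 25k).
    context_dep = lambda x, y: (x[0] > y[0]) if x[1] >= 25_000 else (x[0] < y[0])
    print(pref_independent(context_dep, [15, 25], [5_000, 25_000]))      # False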
With more than two criteria, in principle, more rounds of assessment would need to be
performed in order to prove that any possible subset of criteria is preferentially independent of its
complementary subset (and vice versa for mutual preferential independence). For example, in the
case of 3 criteria C = {c1, c2, c3}, it would be necessary to perform six rounds of assessment, as
illustrated in Table 9.

Table 9 – Preferential independence assessments for three criteria.

Round of Assessment    Possible Subset    Complementary Subset

1                      c1                 c2, c3
2                      c2                 c1, c3
3                      c3                 c1, c2
4                      c2, c3             c1
5                      c1, c3             c2
6                      c1, c2             c3
Source: Author’s own elaboration.

With reference, for instance, to the first round of assessment, criterion c1 is said to be
preferentially independent of its complementary subset {c2, c3} if the preference relation between
any two options a1 and a2, which perform differently against c1 and perform equally with respect to
c2 and c3, is independent of the fixed performance levels against c2 and c3. Therefore, assuming
that a1 is preferred to a2, this preference structure is not modified by the (equal) performance of the
two options on c2 and c3.

$$\text{if } a_1 > a_2 \;\Rightarrow\; U(a_1) > U(a_2) \;\Rightarrow\; u(c_1; c_2; c_3)(a_1) > u(c_1; c_2; c_3)(a_2)$$
$$\text{if } c_1 \text{ Pref. Ind. of } \{c_2; c_3\} \;\Rightarrow\; (x_{11}; \alpha; \beta)(a_1) > (x_{12}; \alpha; \beta)(a_2) \;\text{ and }\; (x_{11}; \delta; \varepsilon)(a_1) > (x_{12}; \delta; \varepsilon)(a_2)$$

The above expression indicates that α and β (i.e. the performances of the two options against
the criteria c2 and c3, respectively) can be replaced by any values δ and ε without altering the
preference relation between the two options.

39
The criteria {c2, c3} are then said to be preferentially independent of their complement c1 (round of
assessment 4 in Table 9) if the preference relation between any two options a1 and a2, which
perform differently against {c2, c3} and perform equally with respect to c1, is independent of the fixed
performance level against c1. Therefore, if a1 is preferred to a2, the decision-maker will maintain the
same preference relation for any common performance on c1.

$$\text{If } a_1 > a_2 \;\Rightarrow\; U(a_1) > U(a_2) \;\Rightarrow\; u(c_1; c_2; c_3)(a_1) > u(c_1; c_2; c_3)(a_2)$$
$$\text{If } \{c_2; c_3\} \text{ Pref. Ind. of } c_1 \;\Rightarrow\; (\alpha; x_{21}; x_{31})(a_1) > (\alpha; x_{22}; x_{32})(a_2) \;\text{ and }\; (\beta; x_{21}; x_{31})(a_1) > (\beta; x_{22}; x_{32})(a_2)$$

If c1 is preferentially independent of {c2, c3} and {c2, c3} are preferentially independent of c1, then
the two subsets c1 and {c2, c3} are said to be mutually preferentially independent.
More generally, given a set C of N criteria, C = {c1, c2, c3, …, cN}, the preferential independence
test would require checking, first, the individual criteria against their complementary subsets of N−1
criteria; subsequently, each pair of criteria against their complementary subsets of N−2
criteria; then triples of criteria against their complementary subsets of N−3 criteria; and so on.
Overall, 2^N − 2 rounds of assessment would be needed. If all subsets are mutually preferentially
independent, the criteria c1, c2, c3, …, cN can be considered mutually preferentially
independent. It is thus evident that, when many criteria are involved, the preferential independence
test becomes very time-consuming, cumbersome and almost impossible to perform (e.g. 5 criteria
would require 30 rounds of assessment; 10 criteria would demand 1,022 rounds of assessment).
Keeney and Raiffa (1976) have, however, demonstrated that, with N criteria, the evaluation of the
preferential independence conditions can be reduced to only N-1 assessments of pairs of criteria
involving the first criterion and the other N-1 criteria taken one at a time (which means, for
example, 4 rounds of assessments with 5 criteria and 9 assessments with 10 criteria).
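Both the combinatorial explosion and the Keeney–Raiffa shortcut are easy to verify computationally. The short Python sketch below simply enumerates the (subset, complementary subset) pairs that a full test would require (names are illustrative):

    from itertools import combinations

    def independence_rounds(criteria):
        """Enumerate every (subset, complement) pair that a full preferential
        independence test would require: all non-empty proper subsets,
        i.e. 2**N - 2 rounds of assessment for N criteria."""
        rounds = []
        for size in range(1, len(criteria)):
            for subset in combinations(criteria, size):
                complement = tuple(c for c in criteria if c not in subset)
                rounds.append((subset, complement))
        return rounds

    print(len(independence_rounds(["c1", "c2", "c3"])))              # 6, as in Table 9
    print(len(independence_rounds([f"c{i}" for i in range(1, 6)])))  # 30
    print(len(independence_rounds([f"c{i}" for i in range(1, 11)]))) # 1022

    # Keeney and Raiffa's simplification: only N-1 pairwise checks between
    # the first criterion and each of the other criteria taken one at a time.
    criteria = [f"c{i}" for i in range(1, 6)]
    pairs = [("c1", c) for c in criteria[1:]]
    print(len(pairs))                                                # 4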
Despite this substantial simplification in the process, determining whether preferential
independence holds remains a delicate matter (Keeney, 1977). Indeed, this condition is dependent
on a number of aspects, including the interests and priorities of the person (or people) involved in the
analysis, the best and worst achievable levels of performance against the different criteria, and the
time horizon of the analysis, as preferential independence may or may not be stable over time
(Zeleny, 1982; Beroggi, 1999). Especially in many planning and policy problems, where economic,
environmental and social factors are highly intertwined, the assumption of mutual preferential
independence between criteria can be very difficult to justify.

There seems to be some disagreement over the actual implications of the violation of this
condition. Whereas many authors (e.g. Keeney and Raiffa, 1976; Beroggi, 1999; Belton and
Stewart, 2002; Abbas, 2018) stress the importance of a thorough selection of criteria and the need
for always performing the preferential independence test, others claim that the violation of this
condition only produces modest amounts of measurement error (e.g. Weiss and Weiss, 2009). It
is also contended that the preferential independence test can easily be bypassed by setting
monotonic value functions for criterion performance (Edwards, 1977; Weiss and Weiss, 2009) or
by applying threshold values on criteria (Dodgson et al., 2009 – see Section 5.5.2). Regardless of
the validity of these claims, it is clear that, in the absence of a rigorous assessment of whether
mutual preferential independence between criteria holds, the results obtained by applying a simple
weighted additive model have to be taken (at least) with a pinch of salt.

5.3.5 Some Practical Considerations regarding the Selection of Objectives and Criteria
Compared to mono-criterion appraisal and evaluation methods, MCA can, in principle, provide
better insight into the nature of the problem at hand and eventually lead to more thorough
decisions. However, as highlighted in this paper, neither formal nor simplified MCA methods are
immune from logical flaws and inconsistencies. The possibility of making more informed judgments
clearly depends, amongst other things, on the breadth of the value tree of objectives and criteria
considered in the analysis. Selecting a balanced and comprehensive list of objectives and criteria
is not an easy task. In general, given the lack of specific and universally agreed guidelines in the
MCA literature on how to define these parameters, two different analysts (or two different teams of
analysts) dealing with the same multi-criteria problem are extremely likely to produce two
contrasting value trees. The identification of a list of objectives implies, in particular, the difficult
issue of what perspective to take and what interests to consider in the appraisal or evaluation
exercise. Indeed, whilst, for example, the purpose of CBA is to estimate the possible social surplus
produced by a project or a policy measure by trying to take into account the effects experienced by
all members of society, the scope of MCA is less clear-cut (Dean, 2020; Mouter et al., 2020). On
the one hand, the MCA literature emphasises the importance of adequately representing the
concerns and priorities of all the parties involved in or affected by the problem situation under
examination (e.g. Dodgson et al. 2009; Dimitriou et al., 2010; Macharis and Nijkamp, 2011;
Macharis and Bernardini, 2015). On the other hand, for major policy decisions having far-reaching
consequences with regard to both space and time, the identification of all the potential
stakeholders and their agendas is rather problematic and, in many cases, is made even more
difficult by time and budget constraints to undertake the analysis. Hence, in selecting a set of
relevant objectives to account for the possible impacts of the options under study, analysts would
probably tend to adopt (implicitly or explicitly) the client perspective (e.g. the Minister, Government
Department, project promoter, agency or group that has commissioned the analysis) and/or take
primarily into account (intentionally or unintentionally) the positions of only a few stakeholder
groups (i.e. typically the most organized, and often most powerful groups, that have consolidated
themselves as a public presence), whilst neglecting other interested or affected parties. This may
create severe concerns from an equity standpoint (Dean, 2018).
Furthermore, whereas, in principle, MCA allows the assessment of options against a large
variety of both quantitative and qualitative objectives, in practice, the performances of the options
against many of these objectives may prove very difficult to measure in either a quantitative or a
qualitative manner. Sometimes these ‘intangible’ and ‘soft’ aspects, such as equity, cohesion,
happiness, quality of life and sense of place, are even difficult to define objectively and translate into
specific indicators (Miller, 1985; Vanclay, 1999; Anciaes and Jones, 2020). Given the high level of
uncertainty surrounding appraisal and evaluation studies, for several criteria, often, it might be
possible to gather only rough and vague data and information concerning the performances of the
options under study. For other criteria the information search process might be considered too
long, difficult, or costly. Finally, for some other criteria the needed data and information might
simply not exist16 (Dom, 1999; Gustavson et al., 1999). As a result, even for MCA exercises
commencing with the development of broad value trees encompassing a wide range of aspects,
values and interests, eventually, the actual assessment might turn out to be based on only a few
objectives and criteria.

16
In this regard, the employment of a bottom-up approach to the articulation of objectives and criteria (see Section 5.3.2)
might be useful to establish, right at the beginning, the available data and information and identify the impacts that could
actually be measured.
Finally, another aspect which surely deserves some attention is that, compared to other
appraisal and evaluation methods, in many MCA applications objectives and criteria are often
selected with only scant attention paid to the geographical and temporal dimensions of the
analysis. Hence, especially in simplistic MCA approaches, the performances of options against the
different criteria risk becoming simply a collection of snapshots, often with no common (spatial or
temporal) basis for comparison. Table 10, adapted from Mouter and colleagues (2020), illustrates
this issue well. The table shows an example of MCA carried out for a road project. As
noticeable from this table, the temporal horizon is not homogeneous across the criteria used for the
analysis. For example, the public support criterion considers mainly aspects related to the planning
and decision-making phase of the project; the safety improvement criterion regards only the
operational phase of the road; and the cost criterion refers to the implementation and operational
phases of the road. By comparison, other criteria, such as that assessing the potential conflicts
between different policies, do not refer to a clear and specific time period. With reference to the
geographical dimensions of the analysis, it is then possible to notice that whereas some
criteria, such as the reduction of journey time and car accidents, refer only to the road
project itself, other criteria, such as the protection of the landscape, take into account a much wider
area. All these inconsistencies may lead to misleading results.

Table 10 - Example of a hypothetical MCA for an interurban road project. Inconsistencies in the
geographical and temporal dimensions of criteria.
Criterion: Project's construction and maintenance costs
  Project impacts:
   Construction costs: €175 million (2020 prices)
   Maintenance costs: €750,000 per year (2020 prices)
  T: implementation and operational phase | G: road

Criterion: Number of jobs created as a result of the project
  Project impacts:
   About 600 jobs during the construction phase
   An average of 15 workers per year required for maintenance works
   About 10 permanent jobs created by a small rest area along the road
  T: implementation and operational phase | G: road

Criterion: Public support – percentage of citizens who support the project
  Project impacts:
   There is a large group of citizens who support the project, mainly people who benefit from it
   Certain small groups oppose the project
  T: planning/decision-making phase | G: areas traversed by the road

Criterion: Reduction in greenhouse gas as a result of the project
  Project impacts:
   Reduction of CO2 emissions by 1,500 tons per year
  T: operational phase | G: road and surrounding areas

Criterion: Reduction in the noise level
  Project impacts:
   Some houses will benefit from the scheme, whereas some other houses will be negatively
    impacted by the project. Overall, the contribution to the reduction of noise in the area will be marginal.
  T: operational phase | G: road and surrounding areas

Criterion: Potential alterations to the landscape
  Project impacts:
   The road will not alter the rural character of the landscape
  T: implementation and operational phase | G: areas traversed by the road

Criterion: Improved journey time, comfort and convenience
  Project impacts:
   Cars using the full length of the new road will save up to 15 minutes, whereas goods vehicles
    will save up to 10 minutes
   Driver stress likely to be reduced
  T: operational phase | G: road

Criterion: Transport safety – reduction in car accidents
  Project impacts:
   Overall, 120 car accidents will be prevented over the life cycle of the road (60 years)
  T: operational phase | G: road

Criterion: Extent to which the project aligns with wider policy objectives and development strategies
  Project impacts:
   The project aligns with local and regional transport objectives
   Several conflicts with environmental and sustainability objectives at all levels
  T: current policy landscape | G: local and regional scale

Criterion: Potential risks implied by the project in its implementation and operational phase
  Project impacts:
   Risk of cost overruns and lack of solvency during the implementation phase
   No significant risks in the operational phase
  T: implementation and operational phase | G: road
Source: Adapted from Mouter et al. (2020).

5.4 Construction of the Performance Profile of the Options
Once options have been generated and objectives and criteria have been identified, quantitative
and/or qualitative data and information regarding the performances of the options under
investigation against the various criteria need to be collected. Table 11 presents a hypothetical
performance profile for a road proposal which is being appraised against eight different criteria.
As already pointed out at the beginning of this paper, in the case of ex ante appraisal
exercises the performance profile of the options at hand is based on forecasts, mathematical
models and predictions, combined with assumptions and expert judgements. By contrast, for
ex post evaluations, options’ performances are obtained from impact studies, surveys and direct
observations. The performance profile of the options at hand can also be derived indirectly from
CBA, economic impact studies, environmental and social impact assessments, life-cycle analyses
as well as other studies carried out on the options as part of other appraisal or evaluation
exercises. From this point of view, MCA can thus be seen as a sort of overarching framework
capable of reconciling the results of different forms of (economic, environmental and social)
appraisal or evaluation exercises. However, since different appraisal methods adopt different
assumptions (for example with reference to the scope of the analysis and its geographical and
temporal dimensions) and procedures for collecting data and information, inconsistencies may
arise (Lee and Kirkpatrick, 1997 and 2000 – see also previous section).

Table 11 - Example of a performance profile for a road project option.

Objective: To promote integration between policies and strategies at all levels
  Criterion: Level of integration between the project and wider key planning and policy objectives
  Quantitative impacts: Not applicable
  Qualitative impacts:
   The project aligns with local and regional transport objectives
   Conflicts with environmental and sustainability objectives at all levels

Objective: To ensure a wise use of economic resources
  Criterion: Project's construction and maintenance costs
  Quantitative impacts:
   Construction costs: €175 million
   Maintenance costs: €750,000 per year
  Qualitative impacts: Not applicable

Objective: To provide employment opportunities
  Criterion: Number of jobs created as a result of the project
  Quantitative impacts:
   About 600 jobs during the construction phase
   An average of 15 workers per year required for maintenance works
   About 10 permanent jobs created by a small rest area along the road
  Qualitative impacts: Not applicable

Objective: To support wider economic benefits
  Criterion: Wider economic impacts produced by the project in the surrounding areas
  Quantitative impacts: Not available
  Qualitative impacts:
   The new road is likely to allow planning restrictions to be relaxed, thus encouraging further
    investments in the area

Objective: To preserve (to the largest extent possible) the historical character of the area
  Criterion: Number of historical buildings, landmarks and sites potentially threatened by the proposal
  Quantitative impacts:
   The project would require the demolition of four historic buildings
  Qualitative impacts:
   Major alterations to the historical character of the area

Objective: To minimize adverse effects on local air quality
  Criterion: Expected changes in greenhouse gas emissions
  Quantitative impacts:
   Increase in greenhouse gas emissions as a result of the project:
    change in carbon emissions over the 60-year appraisal period = +120 tonnes
    change in carbon emissions in the opening year = +2 tonnes
  Qualitative impacts: Not applicable

Objective: To improve road safety
  Criterion: Expected reduction in the number of accidents as a result of the proposal
  Quantitative impacts:
   Several car accidents potentially prevented by the construction of the road:
    fatal = 20; serious = 125; slight = 400
  Qualitative impacts: Not applicable

Objective: To improve the transport system
  Criterion: Improved journey time, comfort and convenience
  Quantitative impacts:
   Cars using the full length of the new road will save up to 15 minutes
   Goods vehicles will save up to 10 minutes
  Qualitative impacts:
   Driver stress likely to be reduced

Source: Author’s own elaboration.

5.5 Scoring
As noticeable from Table 11, the performances of the options against the different criteria are
expressed in various units of measurement. To make these performances comparable and to
undertake the necessary mathematical operations for assessing and ranking options, these values
may be conveniently converted to a common scale by means of performance scores. As explained
in Section 2, performance scores are pure numbers (with no physical unit attached to them)
that measure the degree to which the options under study meet the different objectives. Although
scores are assigned based on the data and information regarding the expected (in the case of
ex ante appraisal exercises) or actual (in the case of ex post evaluation exercises)
impacts of the options, this step unavoidably introduces some subjectivity into the examination and
judgement of these impacts. A rather arbitrary choice is also made regarding the type and width of
the scale employed to measure options’ performances. Indeed, for instance, whereas in MAUT and
many other formal MCA methods performance scores are measured on an interval scale, typically
ranging from 0 (worst performance) to 1 (best performance), in simplistic MCA applications
several ordinal and Likert-type scales are commonly employed (e.g. a 0 to 10 scale, a 0 to 100
scale, or a -5 to 5 scale). Moreover, scores can be ascribed through different procedures with
various degrees of complexity and rigour. Some of the most common scoring techniques are
illustrated below.

5.5.1 Scoring Techniques


Direct Rating Approach
Quite often, simplistic and rough MCA applications, where time and resources to undertake the
analysis are limited, adopt a direct rating approach to scoring. With this approach scores are based
exclusively on the value judgments of analysts and decision-makers. High-performing options are
ascribed high scores, whilst low-performing options score lower on the scale (see Figure 18).

Figure 18 – Example of a direct rating approach to scoring.

Source: Author’s own elaboration.

Whilst extremely easy to use, and particularly convenient for qualitative criteria, a direct rating approach is not as theoretically sound as other approaches to scoring. Indeed, it must be noted that the scales typically employed to rate options' performances are Likert-type scales. Differently from interval scales, which are characterised by the existence of specific standard units that ensure equal distance between successive values on the same scale, in Likert-type scales the distance between each value is not clearly defined. Likert-type scales thus do not have any cardinal significance and, ideally, should only be used for determining the ordinal ranking of the options under study against the different criteria17.
To compensate for the lack of theoretical rigour and improve the reliability of the results, great attention must be paid to the selection of the most appropriate width of the scale so as to allow for sufficient discrimination between different performance levels. Wider scales are more detailed and thus better capture the differences in performance between options, whereas narrow scales should be used only when the knowledge about the criteria and the potential corresponding performances is rather limited (Table 12).

Table 12 – Comparison between different Likert-type scales commonly employed for scores.

| Five-Point Scale (1 ÷ 5) | 11-Point Scale (0 ÷ 10) | Seven-Point Scale (-3 ÷ +3) |
|---|---|---|
| 1: Very Poor Performance | 0: Totally Unacceptable Performance | -3: Very Poor Performance |
| | 1: Extremely Poor Performance | |
| | 2: Very Poor Performance | |
| 2: Poor Performance | 3: Poor Performance | -2: Poor Performance |
| | 4: Slightly Poor Performance | -1: Slightly Negative Performance |
| - | - | 0: Neither Negative Nor Positive Impacts (Neutral Performance) |
| 3: Acceptable Performance | 5: Acceptable Performance | +1: Slightly Positive Performance |
| | 6: Fairly Good Performance | |
| 4: Good Performance | 7: Good Performance | +2: Good Performance |
| | 8: Very Good Performance | |
| 5: Very Good Performance | 9: Extremely Good Performance | +3: Very Good Performance |
| | 10: Exceptionally Good Performance | |

Source: Author’s own elaboration.

Efforts should also be made to compile a standardised set of descriptions for the outcomes
and impacts associated with each point on the scale (Table 13). Once this tabular information is
put together for each criterion included in the multi-criteria framework, analysts and decision-
makers can then use it in the attempt to reduce inconsistencies in the scoring procedures.

17
For instance, in a 10-point ordinal scale ranging from 1 to 10, with 1 indicating the worst possible outcome and 10 representing the best possible outcome, it is only possible to know that, say, 2 is better than 1 and that 10 is better than 9. However, due to the lack of a standard unit of measurement, the distance between 1 and 2 may be shorter (or longer) than the distance between 9 and 10. In contrast, in a 10-point interval scale ranging from 1 to 10 and constructed through a well-defined unit of measurement, there is an equal distance between each value on the scale. Hence, for example, for a given option under examination and a given criterion employed in the analysis, moving from a score of, say, 4 to a score of 8 reflects an improvement in performance which is exactly double that captured by a hypothetical move from, say, 2 to 4 on the scale. Temperature (Fahrenheit or Celsius) is an example of an interval scale.
Table 13 – Example of descriptors for poor, average and excellent performance against a criterion.

'Number of jobs created': 6-Point Scale [0-5]

| Score | Performance Level | Descriptors |
|---|---|---|
| 0 | Totally Unacceptable Performance | Less than 10 jobs during the construction phase; the option results in a reduction of job opportunities in the affected areas |
| 1 | Very Poor Performance | Less than 20 jobs during the construction phase; no permanent jobs created |
| 2 | Poor Performance | Less than 50 jobs during the construction phase; less than 20 permanent jobs created |
| 3 | Acceptable Performance | Between 50 and 150 jobs during the construction phase; between 20 and 50 permanent jobs created |
| 4 | Good Performance | More than 150 jobs during the construction phase; between 50 and 80 permanent jobs created |
| 5 | Very Good Performance | More than 150 jobs during the construction phase; more than 80 permanent jobs created |
Source: Author’s own elaboration.

Proportional Scoring Approach


The proportional scoring approach is another rather straightforward and relatively quick way to assign scores. It is particularly suitable for quantitative criteria and only applicable in decision-making situations involving the assessment of multiple options. The first step in this approach consists in selecting a suitable interval scale for measuring the different performance levels of the options against the different criteria (e.g. ideally a 0 to 1, or a 0 to 100 interval scale). For each criterion, the lowest score on the scale (i.e. 0) is then assigned to the option having the worst performance, whereas the option with the best performance receives the highest score (i.e. 1 in the case of a [0-1] scale, or 100 in the case of a [0-100] scale). The remaining options under examination receive intermediate scores, reflecting their performances relative to these two end points. The proportional scoring approach, in particular, assumes that, for each criterion, the relationship between criterion performances and performance scores can be represented by a linear and monotonically increasing (for criteria characterised as a benefit condition) or decreasing (for criteria characterised as a cost condition) function. The score x(a) of an option a having an intermediate performance against a particular criterion can thus be derived through a simple mathematical proportion:

$$x(a) = \text{Highest Score on the Scale} \times \frac{\text{Performance of Option } a - \text{Worst Performance}}{\text{Best Performance} - \text{Worst Performance}} \qquad (3)$$
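A minimal sketch of this rescaling in Python (the function name and the illustrative values are hypothetical; the cost example simply swaps the best and worst anchors so that the slope of the function is reversed):

```python
def proportional_score(value, worst, best, max_score=100):
    """Linearly rescale a raw performance value to [0, max_score].

    For criteria characterised as a cost condition (lower is better),
    pass worst > best and the slope of the function is reversed.
    """
    return max_score * (value - worst) / (best - worst)

# Benefit criterion: number of jobs created (more is better)
print(proportional_score(150, worst=0, best=300))    # 50.0
# Cost criterion: construction costs in EUR million (less is better)
print(proportional_score(100, worst=175, best=50))   # 60.0
```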

Figure 19 below illustrates a practical example of proportional scoring. The major disadvantage of this approach is its dependency on the set of options. Indeed, changes in the original list of options (e.g. the removal of the original best and worst options from the list, or the inclusion of another option having the worst/best performance against a given criterion) are likely to cause variations in the original performance scores.

Figure 19 - Example of a proportional scoring approach. Comparison of three different light bulbs
against four criteria.

Source: Author’s own elaboration.

Pairwise Comparison Approach


A third approach to scoring, derived directly from the AHP method (see Section 3.1.2), is based on a series of pairwise comparisons between options with respect to each criterion. Judgments regarding the relative merit of each option are translated into a predefined scale (e.g. the nine-point scale of the AHP). Once all the required pairwise comparisons are made, the normalised principal eigenvector (or the normalised geometric means) of the pairwise comparison matrices of the options (one matrix for each decision criterion considered in the analysis) is then calculated to determine the local priority (or score) of each option with reference to each criterion. Scores with this approach are assigned on a [0-1] scale (see Figure 20). However, as already highlighted, this is not an interval scale, but is interpreted (questionably) as a normalised ratio scale (Saaty, 1990b and 1993). The pairwise comparison approach can become quite work-intensive when many options and criteria are involved (with N criteria and M options, overall, N(M(M-1)/2) pairwise comparisons must be carried out). As with the proportional scoring approach, scores depend on the set of options, so that if new alternatives are brought into the analysis, new pairwise comparisons need to be made.
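The geometric-mean variant of this calculation is easy to sketch in code. The matrix below is hypothetical and only illustrates the mechanics; a complete AHP application would also check the consistency ratio of each matrix:

```python
import numpy as np

def local_priorities(pcm):
    """Approximate the normalised principal eigenvector of a pairwise
    comparison matrix via the row geometric means."""
    gm = np.prod(pcm, axis=1) ** (1.0 / pcm.shape[0])  # row geometric means
    return gm / gm.sum()                               # normalise to sum to 1

# Hypothetical comparisons of three options against one criterion
# (pcm[i, j] = strength of preference for option i over option j)
pcm = np.array([[1.0, 3.0, 5.0],
                [1/3, 1.0, 2.0],
                [1/5, 1/2, 1.0]])
print(local_priorities(pcm).round(3))  # approx. [0.648 0.23  0.122]
```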

Figure 20 - Example of a pairwise comparison approach to scoring with three criteria and three
options.

Source: Author’s own elaboration.

Value Function Approach


Finally, a very rigorous approach to scoring involves the use of a value function to translate the
impacts of the options against a certain criterion into the selected measurement scale. Belton and
Stewart (2002) provide a detailed description of this approach, which, in simple terms, involves the
following key steps:
 Selection of the most appropriate interval scale for measuring the performance scores against
the various criteria (e.g. a 0 to 1 interval scale).
 Definition, for each criterion, of the general characteristics of the value function which will be
employed to quantify the different performances of the options (e.g. linear or non-linear
function; monotonic or non-monotonic function). The shape of the value function should reflect
the decision-makers’ values and preferences.
 Identification of a few critical levels of performance for the criterion in question (e.g. often the
best and the worst conceivable performances against that criterion). These critical
performance levels will be univocally associated with some specific scores on the
measurement scale.
 Construction of the value function, which complies with all the key properties regarding the
general shape of the function and passes through each of the critical points previously
identified.
 Employment of the value function thus constructed to determine (graphically or
mathematically) the scores of each option against the given criterion.

Figure 21 illustrates the above steps in more detail. In this example, a 0 to 10 scale has been chosen to measure the performance scores of the different options at hand against the criterion of number of new jobs created. A linear and monotonically increasing function has been employed to univocally associate each performance value with a specific performance score on the selected interval scale. This simple value function can be defined by identifying only two points (i.e. corresponding to two critical levels of performance) through which the function must pass. The selected points are the best and the worst conceivable levels of performance against the criterion under examination, which, in this specific case, are thought to correspond respectively to no jobs being created and 300 new jobs being provided. Therefore, a score of 0 is associated with the worst possible performance (i.e. 0 jobs), whereas a score of 10 is used to identify the best possible performance (i.e. 300 jobs). The value function thus constructed allows the conversion from the natural scale of measurement of the impacts (i.e. number of jobs created) to the 0 to 10 scale and can be used to determine very rigorously the relative performance of all the options at hand against this criterion. Hence, for example, an option that is expected to create 150 new jobs scores 5 on the selected interval scale, whereas an option that is likely to create 90 new jobs has a performance score of 3.
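A brief sketch of this conversion, assuming the two critical points given in the example (0 jobs gives a score of 0; 300 jobs gives a score of 10). The piecewise variant and its breakpoint at 200 jobs are purely hypothetical and echo the non-linear shapes discussed below:

```python
import numpy as np

# Linear value function for 'number of new jobs created':
# 0 jobs -> score 0, 300 jobs -> score 10 (critical points from the example)
def jobs_value(jobs):
    return np.interp(jobs, [0, 300], [0, 10])

print(jobs_value(150))  # 5.0
print(jobs_value(90))   # 3.0

# Hypothetical piecewise-linear variant: gains above 200 jobs are valued
# less steeply, e.g. to discourage draining nearby labour markets
def jobs_value_piecewise(jobs):
    return np.interp(jobs, [0, 200, 300], [0, 8, 10])
```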

Figure 21 - Example of a value function approach to scoring.

Source: Author’s own elaboration.

Whereas for linear and monotonic functions and criteria characterised as a benefit condition (where the highest value against the criterion is the most preferred) the slope of the value function has the direction represented in Figure 21, for criteria characterised as a cost condition (where, vice versa, the lowest value is the most preferred) the slope of the value function is reversed (Figure 22.a). Although extremely practical, a linear and monotonic value function cannot always be employed to measure performance scores. With reference to the previous example, for instance, it is not always desirable to try to create as many jobs as possible in an area, as this may lead to unbalanced economic development between regions. As illustrated in Figure 22.b, in such situations a piecewise-defined function can be adopted, with an intermediate point defined to indicate the critical level of achievement above which further job opportunities in an area are not greatly appreciated, as they may drain the surrounding regions of workforce, investments and other resources. Other decision-making situations may also require the identification of non-linear functions, which, however, are much more difficult to draw. Approaches to derive such value functions are described in von Winterfeldt and Edwards (1986), Watson and Buede (1987), Belton and Stewart (2002) and Barfod and Leleur (2014).

Figure 22 – Examples of different value functions.

Source: Author’s own elaboration.

5.5.2 Use of Threshold Values


As highlighted in Section 3.1.2, full aggregation MCA methods, including MAUT, AHP and the simple weighted additive model, represent compensatory decision models. Indeed, with such methods, poor performances against some criteria can be compensated by good scores against other criteria, so that the global performance score of an option can remain high. Therefore, whilst very straightforward, such methods imply the risk that an option performing very poorly against several criteria may still be judged the preferred option to address the problem at hand. The appraisal summary table below (Table 14) illustrates this situation well. As noticeable from the table, option 2 seems better than option 1 according to the weighted summation rule. However, option 2 performs very poorly against two out of four criteria. Despite being ranked second, the performance profile of option 1 is more balanced and robust than that of option 2. Such situations are likely to generate doubts and controversies regarding the conclusions of the analysis.

Table 14 – Difficulties in determining the best option with full-aggregation MCA methods.

| Objectives/Criteria | Weights (0÷100) | Option 1: Scores (0÷10) | Option 1: Weighted Scores | Option 2: Scores (0÷10) | Option 2: Weighted Scores |
|---|---|---|---|---|---|
| C1 | 25% | 7 | 1.75 | 3 | 0.75 |
| C2 | 25% | 5 | 1.25 | 9 | 2.25 |
| C3 | 25% | 5 | 1.25 | 4 | 1.00 |
| C4 | 25% | 7 | 1.75 | 9 | 2.25 |
| Total | 100% | | 6.00 | | 6.25 |
| Preference Rank | | | 2 | | 1 |

Source: Author’s own elaboration.
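The weighted summation behind Table 14 can be reproduced in a few lines (a minimal sketch; the criterion labels are generic and the weights are expressed as fractions rather than percentages):

```python
weights = [0.25, 0.25, 0.25, 0.25]   # equal weights for C1..C4
option_1 = [7, 5, 5, 7]              # scores on a 0-10 scale
option_2 = [3, 9, 4, 9]

def weighted_score(scores, weights):
    return sum(w * s for w, s in zip(weights, scores))

print(weighted_score(option_1, weights))  # 6.0
print(weighted_score(option_2, weights))  # 6.25 -> ranked first despite
                                          # two poor criterion scores
```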

To limit these issues, it is possible to assign specific threshold values to at least the most critical criteria. Such threshold values place restrictions on the worst acceptable performances of an option against those specific criteria, so that if an option exceeds such values it is automatically rejected, irrespective of its global performance score and its performances against the remaining criteria (Nijkamp and Ouwersloot, 1997; Nijkamp and Vreeker, 1999). The use of threshold values thus implicitly assigns more importance to those criteria for which thresholds have been defined (Benoit and Rousseaux, 2003).
Threshold values can be set in compliance with policy targets and legal instruments, scientific criteria, which identify limits to natural processes and systems, or ethical standards (Rosemberg, 2001). The choice of the type of threshold to be applied to a given criterion also depends on how the criterion is defined. For criteria characterised as a benefit condition (where the highest value against the criterion receives the highest scores), the threshold indicates the value below which the performances of the options become unacceptable. For criteria characterised as a cost condition (for which a lower value is preferable), the threshold indicates the value above which the impacts of the options may imply excessively high social costs. For problems involving major uncertainties, it can also be convenient to specify a threshold value not as a point estimate but as an interval, whose extreme values indicate respectively a more prudent and a more liberal estimate of the allowable threshold (Mendoza et al., 2002). Figure 23 shows the relationship between option performance against the criterion of loss of ancient woodland and threshold values. In this example, two threshold values, respectively 35 hectares and 20 hectares, have been defined. If the performance of the option under investigation (e.g. an infrastructure programme in a rural area of a country) is higher than 35 hectares (i.e. the programme is expected to destroy more than 35 hectares of ancient woodland), the option receives a low score and a 'red flag', which means that this option is not acceptable regardless of its performances against the other criteria. If the performance of the option is lower than 20 hectares, the option receives a relatively high score and a 'green flag', which means that there is no reason for concern. Finally, if the performance of the option falls between 20 and 35 hectares, the option can still be accepted, subject to further investigations, analyses and discussions.
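The flag logic of Figure 23 can be sketched as follows (a hypothetical helper function; the 20 and 35 hectare thresholds are those used in the example):

```python
def woodland_flag(loss_ha, green_max=20.0, red_min=35.0):
    """Classify an option by its expected loss of ancient woodland (hectares)."""
    if loss_ha > red_min:
        return "red"    # rejected outright, whatever its other scores
    if loss_ha < green_max:
        return "green"  # no reason for concern
    return "amber"      # acceptable, subject to further investigation

print(woodland_flag(40))  # red
print(woodland_flag(25))  # amber
print(woodland_flag(5))   # green
```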

Figure 23 – Illustrative example of the use of threshold values.

Source: Author's own elaboration.

5.6 Weighting
5.6.1 Overarching Approach to Weighting
The ascription of weights to the objectives and associated criteria constitutes a critical and controversial stage of any MCA exercise, given the significant value judgements involved in this step and the strong influence that this parameter can have on the results of the analysis: minor changes in the set of weights can easily result in different option rankings. Over time, the practical impossibility of determining weights objectively has hampered the use of MCA outside academic environments in several countries (e.g. Annema et al., 2015; Quinet, 2000). Weights are therefore often referred to as the 'Achilles' heel' of MCA (BTE, 1999). An examination of the relevant literature on the application of MCA in the planning and policy field reveals the existence of contrasting currents of thought over the best way to derive weights:
• According to Nijkamp and colleagues (1990), weights could be derived (directly or indirectly) from past decisions on problems similar to the decision-making situation under study.
• Van Pelt (1993) explains that, in principle, weights could be used in the attempt to differentiate and strike a balance between short-term and long-term objectives.
• Munda (2004 and 2008) suggests that weights should reflect some ethical principles (e.g. an 'ecological stability' position, leading to higher weights for criteria related to the environmental dimension; an 'economic prosperity' position, implying a strong consideration for economic criteria; a 'social equity' position, entailing the assignment of higher weights to social objectives) and that different weighting schemes should thus be used to examine their consequences on the final option ranking.
• In its guidelines on MCA, the Australian Resource Assessment Commission also recommends changing and testing different sets of weights as part of an interactive process between the analysts and decision-makers (RAC, 1992).
• Dimitriou and colleagues (2010) and Brown and colleagues (2001) claim that weights should be derived from policy documents and government guidelines.
• Dodgson and colleagues (2009) recommend that the team of analysts undertaking the analysis role-play the positions of the different problem stakeholders so as to ensure that the chosen weighting scheme reflects the interests of all the different parties and groups involved in or affected by the given decision-making situation.
• Finally, several authors, including Stirling and Mayer (2001), Proctor and Drechsler (2006) and Macharis and Bernardini (2015), go even a step further and argue that weights should be elicited directly from problem stakeholders as part of a participatory MCA exercise (see Section 4).
However, none of the approaches to weighting suggested so far seems capable of solving this impasse. For example, whilst seeking to ensure consistency with past decisions can be important, the first of the above approaches may be problematic, as detailed information on previous choices might not be available. Past decisions, moreover, might not have been particularly sound.
Whereas the need to distinguish precisely between the impacts generated by a policy or a project in the short term and those produced in the long term is clear and evident to almost anyone, there is little consensus on the priority level to assign to these two categories of impacts. This, for instance, is clearly demonstrated by the long-standing debate over discounting procedures in CBA (Ponti, 2003; El-Haram and Horner, 2008; Koopmans and Rietveld, 2013) and, more generally, by the multiple divergent viewpoints on the topic of sustainable development and intergenerational equity (for an overview of this topic refer to Van Pelt, 1993 and Munda, 1995).
If weights are chosen by the analysts or the decision-makers, they unavoidably turn out to be
largely arbitrary. They will thus tend to vary according to the will of the person (people) in charge of
the process. This may produce inconsistent decisions, with some projects being accepted on the
basis of one particular weighting scheme, and other rather similar projects being instead rejected
due to the use of different weights. Even the employment of different sets of weights, although
useful for examining the robustness of the analysis (see also Section 5.8), cannot solve the
subjectivity issue as, ultimately, a definitive weighting scheme, leading to a final option ranking,
must be chosen.
Although the idea of having an appraisal and evaluation process directed by policies may be appealing to some18, it must be noted that, in policy documents, objectives and strategies are defined at too high a level of generality. This means that specific information concerning decision criteria and weights cannot be univocally derived from such documents and is extremely likely to be open to multiple interpretations by politicians, experts and the public. Assuming, however, the possibility of locating weights in policy documents and government guidelines, several questions yet remain. Indeed, such policy weights are likely to vary from year to year, according to the composition of legislatures, political fashions, and the exigencies of bureaucrats. Hence, one may anticipate continued struggles over the weights to be adopted and the danger that special-interest groups will be offered the opportunity to have an undue degree of influence in the decision-making process.
Finally, eliciting weights from stakeholder groups having different agendas will unavoidably lead to clashing weighting schemes, and any attempt to reconcile these differences (through negotiation or, more simply, by calculating the average of a wide spectrum of values) may easily result in deadlocks in the process (Dean, 2018 and 2021). Chadwick (1978:276) summarises the situation as follows: group weighting "is a process which is theoretically impossible […]. How might interest groups agree to a weighting which placed their own weight lower than that of others?" Echoing Chadwick's opinions, Manheim and colleagues (1975:40) argue that "Only a very naive group would agree to a compromise on a set of weights beforehand and then find that the resulting 'highest score' alternative has disastrous results for them".
In an effort to avoid (or at least reduce) subjectivity and conflicts, some MCA methods and techniques avoid the use of criterion weights in the analysis, thus implicitly assigning an equal weight to all the criteria. However, this approach too has come under heavy criticism, with some authors (e.g. Sayers et al., 2003; Dimitriou et al., 2010; Gan et al., 2017) claiming that the absence of any guidance on which dimensions and objectives matter most is likely to reduce the transparency of the process and produce inconsistent decisions. Furthermore, Munda (2008) stresses that assigning the same weight to all the criteria does not necessarily guarantee that all the different dimensions considered in the analysis (e.g. economic, environmental and social dimensions) have the same importance, as any dimension will ultimately be weighted according to its number of criteria. To give the same weight to all the dimensions, each of them should include the same number of criteria. In the attempt to meet this condition, however, analysts may be tempted to exclude some important criteria and objectives from the value-tree and/or add other criteria even if completely irrelevant or redundant.

18
According to the classic rational planning model, the planning process can be represented as a hierarchical structure consisting of several layers: policy statements, government plans and programmes, and ultimately projects. Each planning level is thus intended to be a refinement of the previous one, with information cascaded from the former level to the latter, and each tier of decision being the result of a separate and detailed decision-making process. However, policies are not necessarily the result of comprehensive and impartial analysis. Rather, they may be subjectively defined, have outcomes that may or may not have been foreseen and, most importantly, do not always prove to be right. Indeed, some policy measures are expressly implemented to address unintended consequences of previous policy interventions. Hence, whilst, as part of an appraisal exercise undertaken at the project level, it could be interesting to verify whether the project in question complies with key policy objectives and priorities, a degree of independence between policies and appraisal would be preferable. Practical evidence also shows that, in many cases, policies are defined retrospectively to support specific programmes and projects once they have gained enough political momentum (Dean, 2018). In such situations, an appraisal process entirely directed by and aligned to policies may even become meaningless.

5.6.2 Weighting Techniques


To facilitate the ascription of weights, a wide array of practical weighting techniques has also been developed, even though, as underlined by several authors (e.g. Hokkanen and Salminen, 1997; Rogers et al., 2000; Clemen and Reilly, 2013; Zardari et al., 2015), some of these methods lack a proper theoretical foundation. Having been developed to support different MCA methods, these techniques differ in terms of procedures, level of accuracy, degree of complexity and ease of use and, thus, when applied to the same set of criteria, typically lead to different weighting schemes (Hobbs, 1980; Schoemaker and Waid, 1982; Poyhonen and Hamalainen, 2001; Zardari et al., 2015). Most importantly, the different weighting approaches are also based on different assumptions concerning the actual meaning and role of weights. With reference to this aspect, weighting techniques can be conveniently classified into two main categories, reflecting one of the key classifying factors of MCA methods: compensatory and non-compensatory (Diakoulaki and Grafakos, 2004).
• Compensatory weighting techniques, specifically designed for MAUT and other full aggregation methods that provide full compensation between criteria (see Section 3.1.2), are useful for determining weights as trade-off coefficients. These trade-offs imply the possibility of offsetting a disadvantage on some criteria by a sufficiently large advantage on another criterion. In this context, weights thus represent what portion of one criterion analysts and decision-makers are willing to give up in order to improve the performance on another criterion by one unit (Munda, 2008; Bouyssou et al., 2000; Belton and Stewart, 2002).
• Non-compensatory weighting techniques, proposed for outranking and other non-compensatory MCA methods such as ELECTRE and PROMETHEE (see Section 3.1.3), are particularly suited to establishing weights as importance coefficients. An importance coefficient expresses how important a criterion is compared with the others (Munda, 2008; Bouyssou et al., 2000; Belton and Stewart, 2002).
To better explain this distinction, it is worthwhile considering an example where some project options are assessed against two main criteria, namely investment costs and loss of ancient woodland. Let's assume that the weight of the former criterion is double the weight of the latter. If these two weights are defined as trade-off coefficients in the context of, for example, an MCA based on a weighted additive model, this means that the decision-maker values 1 unit on the investment cost criterion the same as 2 units on the loss of ancient woodland criterion. Thus, assuming that the performances of project options against the cost criterion are measured in thousands of pounds sterling and that the performance levels of the options on the loss of ancient woodland criterion are measured in hectares, the selected weighting scheme implies that the decision-maker would be willing to trade £1,000 for 2 hectares of ancient woodland (1 hectare of ancient woodland is thus valued at £500). By contrast, if the two weights are interpreted as importance coefficients to be used with non-compensatory MCA methods, the weights of the criteria should be seen as an indication that the decision-maker considers investment costs twice as important as the loss of ancient woodland. In the first case, when determining weights, the relevant question to ask is 'how much would you be willing to increase construction costs to preserve a unit of ancient woodland?', whereas in the second case the question becomes 'what is more important: investment costs (i.e. economy) or loss of ancient woodland (i.e. environment)?'. This example underlines that trade-off weights describe a quantitative relationship between performance scores and turn out to be linked to the performance levels of the options against the criteria in question. On the contrary, importance coefficients simply depend on the meaning and significance of the criteria themselves (i.e. the more important the criterion is, the higher the weight it receives), and are related neither to the specific measurement scale employed to assess performance scores against the criteria nor to the possible ranges of performance levels against the criteria (Beinat, 1997; Munda, 2008).
It is important to ensure consistency between the manner in which weights are derived and the aggregation model in which they are used. It is not correct, for instance, to elicit criterion weights as importance coefficients and employ them in a model based on a linear weighted aggregation rule, which implies trade-offs between criteria. Unfortunately, the meaning of criterion weights and their implications is an aspect which is frequently misunderstood or neglected in the MCA literature and often generates inconsistency in many MCA applications. Indeed, the common practice amongst researchers, analysts and decision-makers seems to be to choose weighting techniques merely on the basis of their appeal and ease of use (e.g. Yoe, 2002; Macharis et al., 2012; Barfod et al., 2018; Salling et al., 2018; Németh et al., 2019), attach weights to criteria based on their relative importance and then combine them with performance scores according to a simple weighted additive model (e.g. Ananda and Herath, 2003; Freudenberg, 2003; EC, 2008 and 2014; Blades, 2013; van Ierland et al., 2013; Barquet, 2016).
In the following sections, the most common compensatory and non-compensatory weighting
techniques are briefly presented. The reader can also refer to Nijkamp and colleagues (1990),
Beinat (1997), Hajkowicz and colleagues (2000), Rogers and colleagues (2000), Diakoulaki and
Grafakos (2004), Bouyssou and colleagues (2006), Rogers and Duffy (2012), Zardari and
colleagues (2015) and Odu (2019) for a more detailed examination of these techniques and a
description of other weighting approaches.

5.6.3 Compensatory Weighting Techniques


As highlighted in the previous section, compensatory weighting techniques are specifically designed for compensatory MCA methods (e.g. MAUT methods and simple additive weighting models). In such methods, weights are used to modulate the marginal contribution of each criterion in terms of the overall performance score (or overall utility) of each option. They thus assume the meaning of trade-off coefficients and turn out to depend on the measurement scales and the ranges of performance levels of the criteria. Compensatory weighting techniques force analysts and decision-makers to express trade-offs between criteria so as to indicate how much they are willing to give up in one aspect to improve another. In such methods, which imply long and rather elaborate procedures, no considerations regarding the importance of the criteria are required.

Trade-Off Weighting Technique


This is probably the only technique proposed in the literature where weights can be determined as trade-off coefficients without any ambiguity about their actual meaning. This approach, developed by Keeney and Raiffa (1976) as part of their work on MAUT and the weighted additive form of the multi-attribute utility function, is based on a pairwise examination of criteria. Given N criteria for which weights have to be determined, two hypothetical options a1 and a2 are constructed. The performance profiles of the two options are assumed to differ only with respect to two criteria, cK, which, for example, can be adopted as the reference criterion, and cR. Option a1 is assumed to have the best possible performance against cK and the worst possible performance on cR. Option a2, by comparison, is supposed to have the worst possible performance against cK. Analysts and decision-makers are then required to adjust the performance score of option a2 on cR in such a way that the overall utilities (or the global performance scores) of the two options become the same. This choice will thus reflect the trade-off between cK and cR. Using the expressions xK(a1), xR(a1), xK(a2) and xR(a2) to indicate the performances of the two options against cK and cR, and wK and wR to indicate the weights of the two criteria, the problem can be expressed in symbols as follows:

$$U(a_1) = U(a_2)$$

$$U(a_1) = \sum_{j=1}^{N} w_j \, x_j(a_1) = \sum_{j=1}^{N} w_j \, x_j(a_2) = U(a_2)$$

$$\sum_{\substack{j=1 \\ j \neq K,R}}^{N} w_j \, x_j(a_1) + w_K \, x_K(a_1) + w_R \, x_R(a_1) = \sum_{\substack{j=1 \\ j \neq K,R}}^{N} w_j \, x_j(a_2) + w_K \, x_K(a_2) + w_R \, x_R(a_2)$$

Since $\sum_{j \neq K,R} w_j \, x_j(a_1) = \sum_{j \neq K,R} w_j \, x_j(a_2)$, then

$$w_K \, x_K(a_1) + w_R \, x_R(a_1) = w_K \, x_K(a_2) + w_R \, x_R(a_2)$$

$$w_K \, [x_K(a_1) - x_K(a_2)] = w_R \, [x_R(a_2) - x_R(a_1)]$$

$$\frac{w_K}{w_R} = \frac{x_R(a_2) - x_R(a_1)}{x_K(a_1) - x_K(a_2)}$$

With the performances xK(a1), xR(a1) and xK(a2) set at the outset and xR(a2) opportunely adjusted to make the two options indifferent, the above trade-off relation between the two criteria cK and cR becomes an equation with two unknown variables, wK and wR.
By repeating the same procedure for all the pairwise combinations of criterion cK with the remaining N-2 criteria, analogous equations in two unknown variables (i.e. the weights of the two criteria considered in each equation) can be found. To determine the value of the N weights it is then necessary to solve a system of N equations. This system encompasses the N-1 equations expressing the trade-off relations between pairs of criteria and the normalisation constraint, which requires that the sum of the weights is equal to 1 (∑wj = w1 + w2 + w3 + … + wN = 1; see Equation 1 on page 13).
An example may help illustrate the above process. Let's consider four (mutually preferentially independent) criteria, which are employed to compare different cars: (c1) top speed, (c2) 0-100 km/h acceleration, (c3) CO2 emissions and (c4) maintenance costs. Let's assume for simplicity that the performances against all these criteria can be described through linear value functions, as described in Section 5.5.1. In particular, as shown in Figure 24, the value function employed to quantify the performances of the options against the top speed criterion is assumed to be monotonically increasing against the natural scale (i.e. the highest possible performance value against this criterion is the most preferred), whereas the value functions used for the other criteria are monotonically decreasing against the natural scale (i.e. the lowest possible performance values on these criteria receive the highest scores).

Figure 24 – Value functions for (c1) top speed, (c2) 0-100 km/h acceleration, (c3) CO2 emissions and
(c4) maintenance costs.

Source: Author's own elaboration.

To determine the weights of these criteria, let's consider the top speed criterion as the reference criterion, which needs to be compared against every other criterion in pairs. In the first pairwise comparison, the top speed criterion (c1) is compared with the acceleration criterion (c2) and two hypothetical options (i.e. two cars) are considered. The performance profiles of the two options, a1 and a2, are assumed to differ only with respect to the top speed and acceleration criteria. Car a1 is assumed to have the best possible performance against the top speed criterion (i.e. 300 km/h) and the worst possible performance on the acceleration criterion (i.e. 25 s). Car a2, by comparison, is supposed to have the worst possible performance against the top speed criterion (i.e. 150 km/h). Analysts and decision-makers undertaking the analysis are then required to adjust the performance score of car a2 with reference to the acceleration criterion in such a way that the global performance scores of the two cars become the same. In mathematical terms the problem can be expressed as follows:
58
$$U(a_1) = U(a_2)$$

$$U(a_1) = \sum_{j=1}^{4} w_j \, x_j(a_1) = \sum_{j=1}^{4} w_j \, x_j(a_2) = U(a_2)$$

Since criteria c3 and c4 are uninfluential:

$$w_1 \, x_1(a_1) + w_2 \, x_2(a_1) = w_1 \, x_1(a_2) + w_2 \, x_2(a_2)$$

$$w_1 \, [x_1(a_1) - x_1(a_2)] = w_2 \, [x_2(a_2) - x_2(a_1)]$$

$$w_1 \, [u(300\ \text{km/h}) - u(150\ \text{km/h})] = w_2 \, [x_2(a_2) - u(25\ \text{s})]$$

Hence, given a change in the top speed criterion from 150 km/h to 300 km/h and assuming the
emission and maintenance costs criteria to be uninfluential, analysts and decision-makers are thus
required to establish what an equally preferred change in the acceleration time would be.
Assuming, for example, that, after careful consideration, they decide to set the value of x2(a2) at 7
s, with reference to the value functions displayed in Figure 24, the last equation becomes:

$$w_1 \, [u(300\ \text{km/h}) - u(150\ \text{km/h})] = w_2 \, [u(7\ \text{s}) - u(25\ \text{s})]$$

$$\frac{w_1}{w_2} = \frac{u(7\ \text{s}) - u(25\ \text{s})}{u(300\ \text{km/h}) - u(150\ \text{km/h})} = \frac{100 - 0}{100 - 0} = 1$$

$$w_2 = w_1$$

This equation represents the trade-off relation between the top speed criterion (c1) and the acceleration criterion (c2), and is based on the fact that an increase of 150 km/h in the top speed criterion (i.e. from 150 km/h to 300 km/h) has been seen as equally preferable to a decrease of 18 seconds in the acceleration criterion (i.e. from 25 s to 7 s).
The top speed criterion (c1) then needs to be compared with the emissions criterion (c3). Therefore, this time, analysts and decision-makers consider two hypothetical cars, a1 and a2, whose performance profiles are assumed to differ only with respect to these two criteria. Car a1 is assumed to have the best conceivable performance against the top speed criterion (300 km/h) and the worst possible performance on the emissions criterion (200 g/km). Car a2, by comparison, is supposed to have the worst conceivable performance against the top speed criterion (150 km/h). Analysts and decision-makers are required to adjust the performance score of car a2 with reference to the emissions criterion in such a way that the global performance scores of the two cars become the same. Analogously to the previous case, this problem can be expressed as follows:

$$w_1 \, x_1(a_1) + w_3 \, x_3(a_1) = w_1 \, x_1(a_2) + w_3 \, x_3(a_2)$$

$$w_1 \, [x_1(a_1) - x_1(a_2)] = w_3 \, [x_3(a_2) - x_3(a_1)]$$

$$w_1 \, [u(300\ \text{km/h}) - u(150\ \text{km/h})] = w_3 \, [x_3(a_2) - u(200\ \text{g/km})]$$


Given a change in the top speed criterion from 150 km/h to 300 km/h and assuming all the
other parameters as unimportant, analysts and decision-makers are required to establish what an
equally preferred change in the CO2 emissions would be. Assuming that the value for x3(a2) is
eventually set at 120 g/km, with reference to the value functions displayed in Figure 24, the last
equation becomes:

$$w_1 \, [u(300\ \text{km/h}) - u(150\ \text{km/h})] = w_3 \, [u(120\ \text{g/km}) - u(200\ \text{g/km})]$$

$$\frac{w_1}{w_3} = \frac{u(120\ \text{g/km}) - u(200\ \text{g/km})}{u(300\ \text{km/h}) - u(150\ \text{km/h})} = \frac{80 - 0}{100 - 0} = \frac{8}{10}$$

$$w_3 = \frac{10}{8} \, w_1$$

The above equation represents the trade-off relation between the top speed criterion (c1) and the CO2 emissions criterion (c3) and indicates that an increase of 150 km/h in the top speed criterion (i.e. from 150 km/h to 300 km/h) has been considered by the analysts and decision-makers as equally preferable to a significant reduction in the CO2 emissions of the car (i.e. from 200 g/km to 120 g/km).
Finally, the top speed criterion (c1) needs to be compared with the maintenance costs criterion
(c4) by using again two hypothetical cars. Car a1 is again assumed to have the best possible
performance against the top speed criterion (300 km/h) and the worst conceivable performance on
the maintenance costs criterion (1,000 £/year). Car a2, by comparison, is supposed to have the
worst possible performance against the top speed criterion (150 km/h). The two cars present the
same performance on the acceleration and emission criteria. Analysts and decision-makers are
required to adjust the performance score of car a2 with reference to the maintenance costs criterion
in such a way that the global performance scores of the two cars become the same.

$$w_1 \, x_1(a_1) + w_4 \, x_4(a_1) = w_1 \, x_1(a_2) + w_4 \, x_4(a_2)$$

$$w_1 \, [x_1(a_1) - x_1(a_2)] = w_4 \, [x_4(a_2) - x_4(a_1)]$$

$$w_1 \, [u(300\ \text{km/h}) - u(150\ \text{km/h})] = w_4 \, [x_4(a_2) - u(1{,}000\ \text{£/year})]$$

Assuming that the value for x4(a2) is eventually set at 500 £/year, with reference to the value functions displayed in Figure 24, the last equation becomes:

$$w_1 \, [u(300\ \text{km/h}) - u(150\ \text{km/h})] = w_4 \, [u(500\ \text{£/year}) - u(1{,}000\ \text{£/year})]$$

$$\frac{w_1}{w_4} = \frac{u(500\ \text{£/year}) - u(1{,}000\ \text{£/year})}{u(300\ \text{km/h}) - u(150\ \text{km/h})} = \frac{77 - 0}{100 - 0} = \frac{77}{100}$$

$$w_4 = \frac{100}{77} \, w_1$$

The above equation represents the trade-off relation between the top speed criterion (c1) and the maintenance costs criterion (c4). An increase of 150 km/h in the top speed criterion (i.e. from 150 km/h to 300 km/h) has been considered by the analysts and decision-makers as equally preferable to a conspicuous reduction in maintenance costs (i.e. from 1,000 £/year to 500 £/year).
Given these three trade-off relations between the four criteria and the normalization constraint,
it is possible to determine the actual weights of criteria by solving a system of four equations in four
unknown variables (i.e. w1, w2, w3, and w4).

$$\begin{cases} w_2 = w_1 \\[4pt] w_3 = \dfrac{10}{8}\, w_1 \\[4pt] w_4 = \dfrac{100}{77}\, w_1 \\[4pt] w_1 + w_2 + w_3 + w_4 = 1 \end{cases}$$

The resolution of this system leads to the following weights: w1 = 0.22, w2 = 0.22, w3 = 0.27
and w4 = 0.29.
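A short numerical check of this system (a sketch; each weight is expressed as a multiple of w1 using the ratios derived above):

```python
import numpy as np

# Trade-off relations from the example: w2 = w1, w3 = (10/8) w1,
# w4 = (100/77) w1, plus the normalisation constraint sum(w) = 1
ratios = np.array([1.0, 1.0, 10 / 8, 100 / 77])  # w_j as multiples of w1
weights = ratios / ratios.sum()
print(weights.round(2))  # [0.22 0.22 0.27 0.29]
```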
The above example clearly highlights that weights obtained with the trade-off approach
depend on the measurement scales adopted to assess the performance scores against the
respective criteria and the ranges of performance levels considered in the analysis. Indeed, if, for
example, the maintenance costs criterion (c4) is measured in a 200-1,000 £/year scale (see Figure
25) rather than in a 350-1,000 £/year scale, which means that the best conceivable performance
level against the cost criterion is now assumed to be 200£/year rather than 350£/year, the trade-off
relationship with the top speed criterion (c1) changes as follows:

Figure 25 – New value function for (c4) maintenance costs.

Source: Author's own elaboration.

$$w_1 \, [u(300\ \text{km/h}) - u(150\ \text{km/h})] = w_4 \, [u(500\ \text{£/year}) - u(1{,}000\ \text{£/year})]$$

$$\frac{w_1}{w_4} = \frac{u(500\ \text{£/year}) - u(1{,}000\ \text{£/year})}{u(300\ \text{km/h}) - u(150\ \text{km/h})} = \frac{63 - 0}{100 - 0} = \frac{63}{100}$$

$$w_4 = \frac{100}{63} \, w_1$$

This, in turn, will result in a different weighting scheme. Different results would also be
obtained, for example, by measuring the performance levels against the top speed criterion in
miles per hour, rather than kilometers per hour.
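Re-running the same normalisation with the rescaled cost criterion shows how sensitive the resulting weighting scheme is to the chosen performance range (a sketch, reusing the ratios derived above):

```python
import numpy as np

# Same trade-off relations, but with the maintenance costs criterion
# measured on the 200-1,000 GBP/year scale, so that w4 = (100/63) w1
ratios = np.array([1.0, 1.0, 10 / 8, 100 / 63])
weights = ratios / ratios.sum()
print(weights.round(2))  # approx. [0.21 0.21 0.26 0.33]
```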

SWING Weighting Technique


Although theoretically sound, the trade-off weighting approach can be cognitively challenging. A simpler and more manageable approach for determining weights as trade-off coefficients is represented by the Swing technique (von Winterfeldt and Edwards, 1986). This technique begins with the construction of a benchmark option having the worst performance against all the N decision criteria considered in the multi-criteria decision-making problem at hand. Starting from this worst-case option, several hypothetical options are determined by increasing (or 'swinging') one criterion at a time, from the worst to the best possible performance. Overall, as many hypothetical options are created as there are decision criteria included in the multi-criteria framework. Analysts and decision-makers are then asked to rank all these hypothetical options (each of which is associated with a swing of a specific criterion from the worst to the best outcome) based on considerations regarding which swing would result in the largest, second largest, etc., improvement. The option (and the associated criterion) with the most preferred swing is then given 100 points. The magnitudes of all the other swings entailed by each of the other options (and associated criteria) are expressed as percentages of the most preferred swing. The derived percentages are the raw weights, which are then normalised to yield the final weights19.
Also in this case, an example may help better explain the procedure. Let's consider again the same four criteria, namely top speed (c1), 0-100 km/h acceleration (c2), CO2 emissions (c3) and maintenance costs (c4), whose respective value functions are illustrated in Figure 24. Based on these value functions, the benchmark option, with the worst possible performance against all these criteria, is constituted by a car that has a top speed of 150 km/h, a 0-100 km/h acceleration of 25 seconds, CO2 emissions estimated at 200 g/km and maintenance costs of £1,000 per year. Starting from this worst-case option, four different hypothetical options are constructed by swinging one criterion at a time, from the worst to the best possible performance (see Figure 26).

19
A possible alternative approach involves the definition of a benchmark option having the best possible performance against all the N decision criteria considered, the creation of other hypothetical options by swinging one criterion at a time, from the best to the worst possible performance, and the assessment of the corresponding decreasing levels of satisfaction obtained as a result of these swings.

Figure 26 – Example of application of the SWING weighting technique.

Source: Author's own elaboration.

These hypothetical options then need to be ranked according to the potential advantages produced by each swing. Let's assume that, after careful consideration, the analysts and decision-makers undertaking the analysis conclude that the car with the lowest maintenance costs is the best, followed by the car with the lowest emissions, then the car with the highest top speed and finally the car with the greatest acceleration. The car with the lowest maintenance costs is thus given 100 points, whereas the benchmark option, which is the worst car on all the criteria, receives 0 points and ranks fifth overall (Table 15). The ratings for the other three hypothetical options must fall between 0 and 100. Each option needs to be compared with the best option to determine the magnitude of the associated swing. For instance, for the car with the lowest emissions the question to ask is: how much less satisfaction do you get by swinging CO2 emissions from 200 g/km to 100 g/km as compared to swinging maintenance costs from £1,000 per year to £350 per year? Let's suppose that, after careful thought, analysts and decision-makers decide to assign 75 points to the car with the lowest emissions, 40 points to the car with the highest top speed and 35 points to the car with the greatest acceleration. This means that analysts and decision-makers think that:

 Improving CO2 emissions from 200 g/km to 100 g/km is worth 75% of the improvement in
maintenance costs from £1,000 per year to £350 per year.
 Improving top speed from 150 km/h to 300 km/h is worth 40% of the improvement in
maintenance costs from £1,000 per year to £350 per year.
 Improving 0-100 km/h acceleration from 25 seconds to 7 seconds is worth 35% of the
improvement in maintenance costs from £1,000 per year to £350 per year.
Based on these value judgements, the final weights for the four criteria can then be calculated by normalising the ratings, making sure that they add up to 1 (Table 15). It must be noted, however, that swing weights are sensitive to the measurement scales and the performance levels of the criteria.
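The normalisation step is trivial to sketch in code (the points are those assigned in the example):

```python
# SWING raw points from the example (100 = most preferred swing)
raw = {"maintenance costs": 100, "co2 emissions": 75,
       "top speed": 40, "acceleration": 35}
total = sum(raw.values())  # 250
weights = {criterion: points / total for criterion, points in raw.items()}
print(weights)
# {'maintenance costs': 0.4, 'co2 emissions': 0.3,
#  'top speed': 0.16, 'acceleration': 0.14}
```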

Table 15 – Results of the SWING Weighting exercise.

| Hypothetical Options (and Associated Criteria) | Performances | Rank | Rating of the Options | Weights of the Associated Criteria |
|---|---|---|---|---|
| 'Worst Case' Option (Benchmark) | x1: 150 km/h; x2: 25 s; x3: 200 g/km; x4: £1,000 per year | 5th | 0 | / |
| 'Top Speed' Option (c1 = Top Speed) | x1: 300 km/h; x2: 25 s; x3: 200 g/km; x4: £1,000 per year | 3rd | 40 | 40 / (40 + 35 + 75 + 100) = 40/250 = 0.16 |
| 'Acceleration' Option (c2 = Acceleration) | x1: 150 km/h; x2: 7 s; x3: 200 g/km; x4: £1,000 per year | 4th | 35 | 35/250 = 0.14 |
| 'CO2 Emissions' Option (c3 = CO2 Emissions) | x1: 150 km/h; x2: 25 s; x3: 100 g/km; x4: £1,000 per year | 2nd | 75 | 75/250 = 0.30 |
| 'Maintenance Costs' Option (c4 = Maintenance Costs) | x1: 150 km/h; x2: 25 s; x3: 200 g/km; x4: £350 per year | 1st | 100 | 100/250 = 0.40 |

Source: Author's own elaboration.

5.6.4 Non-Compensatory Weighting Techniques


Non-compensatory weighting techniques should be used with non-compensatory MCA methods (e.g. ELECTRE and PROMETHEE), where weights play the role of importance coefficients. Due to the aggregation rules entailed by these methods, weights reflect the significance of the criteria themselves and do not depend on the measurement scales and the ranges of performance levels of the criteria. Over time, a number of non-compensatory weighting techniques have been developed. Although many of these techniques seem just minor variants of each other, it has been shown that they can lead to very different weighting schemes (Doyle et al., 1997; Bottomley et al., 2000; Hajkowicz et al., 2000). Non-compensatory weighting techniques are generally simpler to use and more intuitively appealing than compensatory weighting techniques. For this reason, they are frequently employed even as part of compensatory MCA methods, although, as already stressed, this is likely to lead to flawed and inconsistent results.

Simple Rating
The Simple Rating represents one of the simplest approaches for determining criteria weights according to the relative importance of the criteria. With this technique, a numerical score on a given scale (e.g. 1-5 or 1-10), often associated with a qualitative judgment on a Likert scale (e.g. very important; important; of some importance; unimportant), is used to indicate the significance of each criterion (Table 16). These weights can then be normalised to the [0-1] interval (or to 100 percent) by dividing each weight by the sum of all the weights.

Table 16 – Example of a possible Likert-type scale used for weighting procedures.

| Level of Importance | Weight |
|---|---|
| Extremely Important | 5 |
| Very Important | 4 |
| Important | 3 |
| Moderately Important | 2 |
| Slightly Important | 1 |

Source: Author’s own elaboration.
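A minimal sketch of this normalisation (the criterion labels and ratings are hypothetical):

```python
ratings = {"C1": 5, "C2": 3, "C3": 4, "C4": 1}  # hypothetical Likert ratings
total = sum(ratings.values())                   # 13
weights = {c: r / total for c, r in ratings.items()}
print({c: round(w, 3) for c, w in weights.items()})
# {'C1': 0.385, 'C2': 0.231, 'C3': 0.308, 'C4': 0.077}
```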

Graphical Weighting
This technique enables analysts and decision-makers to express preferences in a purely visual manner. With this approach, a horizontal line representing the scale on which judgments are made is first drawn. A mark is then made anywhere along this line to indicate the importance of each criterion (Figure 27). A criterion's importance increases as the mark is placed closer to the right end of the line. A quantitative value is then calculated for each criterion by measuring the distance from the mark to the left end of the line. This value is then scaled against the total length of the line.

Figure 27 - Example of a graphical weighting method.

Source: Adapted from Hajkowicz et al. (2000).


Ranking System
The Ranking System is based on the idea that weight elicitation is a rather difficult and demanding task. Numerically precise information on the weights of criteria is seldom available. However, most of the time it is possible, at least, to rank the criteria in order of importance. With this technique the criteria are thus first ranked from the most important to the least important and each criterion is then assigned a value based on its position in the ranking. Given N criteria, the criterion ranked first is assigned a value of 1; the one ranked second receives a value of 2; and so on up to the least important criterion, which is ascribed a value of N. If some criteria are considered to have the same importance, an average value is assigned to them. Once all the values have been assigned, the normalised importance weight of each criterion wj is calculated using the formula:

$$w_j = \frac{N - r_j + 1}{\sum_{j=1}^{N} (N - r_j + 1)} \qquad (4)$$

Where:

rj is the ranking value for the j-th criterion; and

N is the total number of criteria.

Table 17 below includes an illustrative example of the ranking system approach to weighting.

Table 17 – Illustrative example of the Ranking System approach to ascribing criterion weights.

| Criteria | Rank Position | Corresponding Rank Value | N - rj + 1 | wj |
|---|---|---|---|---|
| C1 | 1st | 1 | 6 | 0.286 |
| C2 | 6th | 6 | 1 | 0.048 |
| C3 | 4th | 4 | 3 | 0.143 |
| C4 | 2nd/3rd (same position as C5) | (2+3)/2 = 2.5 | 4.5 | 0.214 |
| C5 | 2nd/3rd (same position as C4) | (2+3)/2 = 2.5 | 4.5 | 0.214 |
| C6 | 5th | 5 | 2 | 0.095 |
| | | | ∑ = 21 | ∑ = 1 |

Source: Adapted from Rogers and Duffy (2012).
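Formula (4) applied to the ranks of Table 17 gives the same weights (a brief sketch):

```python
ranks = {"C1": 1, "C2": 6, "C3": 4, "C4": 2.5, "C5": 2.5, "C6": 5}
n = len(ranks)                                      # N = 6
values = {c: n - r + 1 for c, r in ranks.items()}   # N - r_j + 1
total = sum(values.values())                        # 21
weights = {c: round(v / total, 3) for c, v in values.items()}
print(weights)
# {'C1': 0.286, 'C2': 0.048, 'C3': 0.143, 'C4': 0.214, 'C5': 0.214, 'C6': 0.095}
```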

Ratio System
This technique is very similar to the ranking system outlined above. The weighting process starts
by ranking the criteria from the most to the least important. The least important criterion is ascribed
a value of 1. All the other criteria are then compared to this criterion and ultimately receive a value
greater than 1 to reflect their importance relative to the least important criterion. The weights wj can
then be normalised by using the equation:

$$w_j = \frac{r_j}{\sum_{j=1}^{N} r_j} \qquad (5)$$

Where:

rj is the value ascribed to the j-th criterion; and

N is the total number of criteria.

An analogous approach to weighting is employed in the SMART method (Edwards, 1977; see Section 3.1.2). With this approach the least important criterion is assigned 10 points, whereas more points are given to the other criteria to account for their position within the rank order. The resulting raw weights are then normalised by dividing each weight by the sum of the total points ascribed20.

Point Allocation
With this technique a fixed number of points (typically 10 or 100 points) needs to be distributed amongst the identified criteria. The more points a criterion receives, the greater its relative importance (Figure 28). Differently from the simple rating technique, where analysts and decision-makers can alter the importance of one criterion without adjusting the weights of the other criteria, with this approach it is only possible to give a criterion more points, and thus higher importance, by subtracting some points from other criteria. Although not considered a compensatory weighting technique (i.e. with this approach weights are assigned on the basis of the importance of criteria), the point allocation technique thus forces analysts and decision-makers to make some kind of trade-off when assigning weights.

Figure 28 – Example of application of the Point Allocation method.

Source: Author’s own elaboration.

20
This approach, where weights are derived based on the importance of criteria, has however been criticised. Indeed, the SMART method relies on a linear aggregation function to ultimately combine scores and weights, and thus implies a compensatory aggregation logic. Accordingly, weights in this method are trade-off coefficients and should be derived with compensatory weighting techniques (see Section 5.6.3). This issue has been addressed by SMARTS (Edwards and Barron, 1994; see Section 3.1.2), where the Swing weighting technique is employed in place of a ratio system approach to weighting.
Whilst very intuitive, this technique becomes progressively more difficult as the number of
criteria included in the value tree increases. When there are many criteria, it may be
more convenient to first split the fixed number of points over the selected overarching dimensions
of criteria (e.g. economic, environmental, social) and then further distribute the allocated amounts
of points amongst the different criteria included within each dimension (see Figure 29).

Figure 29 – Example of application of the Point Allocation method with many criteria.

Source: Author’s own elaboration.
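
A small sketch can illustrate this two-level procedure. The dimensions, criteria and point splits below are hypothetical and chosen purely for illustration; the key constraint is that points added to one criterion must be taken from another:

```python
# Two-level point allocation: 100 points are first split across the
# overarching dimensions and then subdivided amongst the criteria within
# each dimension; the assertions enforce the fixed overall budget.
budget = 100
dimensions = {
    "economic":      (40, {"capital cost": 25, "jobs": 15}),
    "environmental": (35, {"emissions": 20, "noise": 15}),
    "social":        (25, {"equity": 15, "safety": 10}),
}

assert sum(points for points, _ in dimensions.values()) == budget
for points, criteria in dimensions.values():
    assert sum(criteria.values()) == points   # within-dimension split must match

weights = {criterion: points / budget
           for _, criteria in dimensions.values()
           for criterion, points in criteria.items()}
print(weights)   # e.g. {'capital cost': 0.25, 'jobs': 0.15, ...}; sums to 1
```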

Paired Comparison
This technique involves the comparison of each criterion against every other criterion in pairs. As
illustrated in Section 3.1.2, a popular form of paired comparison to determine criteria weights is
derived from the AHP and requires comparing criteria in pairs using a nine-point scale to
express their relative importance (from 1, when the two criteria are judged to have the same
importance, to 9, when one criterion is considered absolutely more important than the other one).
Once all the paired comparisons have been made, normalised weights are calculated for all the
criteria.²¹ A simplistic paired comparison approach, which implies a simple normalisation procedure
rather than the calculation of the geometric mean or the normalised principal eigenvector of the
pairwise comparison matrices (as required by the AHP method), is illustrated in Figure 30.

²¹ As pointed out in Section 3.1.2, the AHP constitutes a full-aggregation and thus completely compensatory MCA
method, whose preference structure can be reconciled to a weighted additive value function. Therefore, in this method,
weights should always be elicited in the form of trade-off coefficients through compensatory weighting techniques. A
paired comparison approach to weight elicitation, where weights are defined based on their importance level, unavoidably
produces some inconsistencies in the method. The paired comparison techniques can, however, be modified to estimate
the trade-offs that can be accepted amongst criteria (Beinat, 1997).
Figure 30 - Example of a possible process for obtaining weights from paired comparisons.

Source: (Adapted from) Hajkowicz et al. (2000).
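
Since Figure 30 is not reproduced here, the sketch below assumes one common simple variant of such a procedure: each criterion earns one point for every pairwise comparison it wins (half a point for a tie), and the raw scores are normalised by the total number of comparisons. One known drawback of counting schemes of this kind is that a criterion losing every comparison receives a weight of zero even if it still matters to some degree.

```python
# Simplistic paired-comparison weighting: count pairwise "wins" and
# normalise. The pairwise judgements below are hypothetical.
from itertools import combinations

criteria = ["A", "B", "C", "D"]
preferred = {("A", "B"): "A", ("A", "C"): "A", ("A", "D"): None,  # None = tie
             ("B", "C"): "C", ("B", "D"): "D", ("C", "D"): "C"}

score = {c: 0.0 for c in criteria}
for a, b in combinations(criteria, 2):
    winner = preferred[(a, b)]
    if winner is None:            # equal importance: split the point
        score[a] += 0.5
        score[b] += 0.5
    else:
        score[winner] += 1.0

total = sum(score.values())       # equals the number of pairs, here 6
weights = {c: s / total for c, s in score.items()}
print(weights)                    # A ≈ 0.417, B = 0.0, C ≈ 0.333, D = 0.25
```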

5.7 Combination of Scores and Weights


Once the weights and scores have been assigned, it is then possible to combine them according to
the aggregation rules entailed by the MCA method employed (see Section 3) in the attempt to
address the specific problem at hand (see Section 5.1).
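
As an illustration of the simplest such rule, the sketch below applies a linear additive (weighted-sum) aggregation to hypothetical, already-commensurable scores; other methods reviewed in Section 3 entail different, and in some cases non-compensatory, aggregation logics:

```python
# Linear additive aggregation: overall value = sum of weight * score.
# Scores (0-10) and weights are hypothetical and assumed commensurable.
scores = {
    "Option 1": [7, 4, 8],
    "Option 2": [5, 9, 6],
}
weights = [0.5, 0.3, 0.2]          # one weight per criterion, summing to 1

overall = {option: sum(w * s for w, s in zip(weights, vals))
           for option, vals in scores.items()}
for option, value in sorted(overall.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{option}: {value:.2f}")   # Option 2: 6.40, Option 1: 6.30
```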

5.8 Sensitivity Analysis


Any appraisal or evaluation study is affected by a number of uncertainties (e.g. missing data; low
reliability of data; poor accuracy of the models used in forecasts and impact assessments; ignored
aspects) and is based on extensive value judgments. As illustrated in the previous sections, in
MCA, several arbitrary choices need to be made with reference to the number and types of
objectives and decision criteria considered in the analysis, the scales adopted for scores and
weights, specific scoring and weighting procedures, aggregation rules and so forth. In order to
account for uncertainty and subjectivity and to test the robustness of the analysis, sensitivity tests
are generally employed. These tests usually entail the moderate (or even significant) variation of
some key parameters so as to determine how these changes affect the outcomes of the analysis.²²
In the specific case of MCA, a common sensitivity test involves some changes in the weights
and/or scores assigned in the first place with the view to observing aspects such as:
•  The impacts of these changes on the overall performance of the option(s) under investigation.
•  Whether the results of the analysis (e.g. the final ranking of the options) vary significantly as a result of these changes.
•  The parameter(s) whose variation has the largest effect on the results of the analysis.
•  The extent to which the above parameter(s) can vary without altering the results of the analysis.
•  The parameter(s) whose variation does not have any substantial effect on the results of the analysis.
A thorough sensitivity analysis typically requires varying only one parameter at a time and thus
entails a large number of iterations since, ideally, the possible effects of changes in all (or at
least the most critical) weights and scores need to be gauged. The process can thus easily
become arduous and time-consuming. Figure 31 below displays a round of a hypothetical
sensitivity test made by changing the original weight of one criterion.

Figure 31 – A hypothetical sensitivity test.

Source: Author’s own elaboration.
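
A sketch of such a one-at-a-time test is given below, reusing the hypothetical scores and weights from the aggregation sketch in Section 5.7: each weight in turn is perturbed, the remaining weights are rescaled so that the set still sums to one, and the resulting ranking is compared against the original one.

```python
# One-at-a-time sensitivity test on weights: perturb one weight, rescale
# the others proportionally, and flag any change in the option ranking.
def rank(scores, weights):
    totals = {o: sum(w * s for w, s in zip(weights, v)) for o, v in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

scores = {"Option 1": [7, 4, 8], "Option 2": [5, 9, 6]}   # hypothetical
base_weights = [0.5, 0.3, 0.2]
base_ranking = rank(scores, base_weights)

for j in range(len(base_weights)):
    for delta in (-0.1, +0.1):                        # moderate variations
        w = base_weights[:]
        w[j] = max(0.0, w[j] + delta)
        scale = (1 - w[j]) / (1 - base_weights[j])    # rescale the other weights
        w = [w[j] if k == j else base_weights[k] * scale for k in range(len(w))]
        if rank(scores, w) != base_ranking:
            print(f"Ranking flips when weight {j + 1} shifts by {delta:+.1f}")
```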

An alternative approach to cope with the uncertainties affecting the analysis is to carry out the
MCA exercise for different possible scenarios. A scenario, in this context, can be defined as a
consistent and plausible description of the possible future exogenous (economic, social,
environmental, political and technological) conditions (Dean, 2019). As illustrated in Table 18,
under different scenarios, key parameters such as weights and scores can be changed to cope
with the various possible future conditions (Ward et al., 2016a). This approach thus allows
examining the robustness of the options and their ability to adapt to changing exogenous
conditions as well as the stability of the option ranking under different scenarios.

²² Some authors (e.g. Beinat, 1997) distinguish between sensitivity and robustness analyses. Sensitivity analysis
explores the effects of changing data in a relatively limited surrounding of the original model parameters. Robustness
analysis, by comparison, entails larger variations of the original data included in the model.

Table 18 – Comparison between the different approaches to Scenario-Based MCA.

| Elements of the multi-criteria framework changed under each scenario | Rationale | Pros and cons of the approach |
|---|---|---|
| Scores | Under different scenarios the performances of a project option against some criteria (not necessarily all of them) may vary | Relatively easy and straightforward |
| Weights | Under different scenarios the level of importance of some objectives/criteria (not necessarily all of them) may vary | Relatively easy and straightforward |
| Scores & Weights | Under different scenarios the performances of a project option against some criteria (not necessarily all of them) may vary; at the same time, the level of importance of some objectives/criteria (not necessarily all of them) may also vary | More comprehensive than the previous two approaches, but more time-consuming |
| Objectives/Criteria & Scores & Weights | Different scenarios may require the identification of different sets of objectives/criteria; under different scenarios criterion weights and performance scores may also vary | Very comprehensive and rather realistic approach that, however, leads to the adoption of a different multi-criteria framework for each scenario; the results of the different appraisal exercises are thus hardly comparable |

Source: Author’s own elaboration.
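
The "Scores & Weights" variant in Table 18 can be sketched as follows; the scenarios, scores and weights are hypothetical, and the point of the exercise is simply to check whether the preferred option is stable across scenarios:

```python
# Scenario-based MCA ("Scores & Weights" variant): the same options are
# re-appraised under each scenario with scenario-specific scores and weights.
scenarios = {
    "baseline":    ({"Option 1": [7, 4, 8], "Option 2": [5, 9, 6]}, [0.5, 0.3, 0.2]),
    "green shift": ({"Option 1": [6, 4, 9], "Option 2": [5, 8, 6]}, [0.2, 0.3, 0.5]),
    "recession":   ({"Option 1": [8, 3, 7], "Option 2": [6, 7, 5]}, [0.6, 0.2, 0.2]),
}

for name, (scores, weights) in scenarios.items():
    totals = {o: sum(w * s for w, s in zip(weights, v)) for o, v in scores.items()}
    summary = {o: round(t, 2) for o, t in totals.items()}
    print(f"{name}: best = {max(totals, key=totals.get)}, totals = {summary}")
# A preferred option that changes between scenarios signals a non-robust choice.
```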

Stirling and Mayer (2001) have also suggested another possible way to highlight and capture
the uncertainties associated with scoring procedures in MCA. Under this approach, the
performance scores of the options under examination are not expressed as single-point numbers,
but as ranges of values bounded by a more optimistic and a more pessimistic evaluation
judgement. In Figure 32 the length of each bar represents the uncertainty attached to each score:
the higher the level of uncertainty, the longer the bar. While certainly interesting and
useful for simplified MCA applications undertaken in an open-up mode (see Section 5.1), this
approach can hardly be adopted in more formal MCA methods and can easily become impractical
when the objective of the analysis is to arrive at a final ranking of the options.

Figure 32 – Systematic approach for capturing the degree of uncertainty in performance scores.

Source: Adapted from Stirling and Mayer (2001).
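
A minimal sketch of this idea is given below. The pessimistic-optimistic ranges are hypothetical, and aggregating each end of the range with a weighted sum is an assumption made here for illustration rather than Stirling and Mayer's own procedure:

```python
# Interval scores: each criterion score is a (pessimistic, optimistic)
# pair; option-level bounds are obtained by aggregating each end.
ranges = {
    "Option 1": [(5, 8), (3, 6), (6, 9)],
    "Option 2": [(4, 6), (7, 9), (5, 8)],
}
weights = [0.5, 0.3, 0.2]

for option, pairs in ranges.items():
    low = sum(w * lo for w, (lo, _) in zip(weights, pairs))
    high = sum(w * hi for w, (_, hi) in zip(weights, pairs))
    print(f"{option}: overall score between {low:.2f} and {high:.2f}")
# Option 1: 4.60-7.60, Option 2: 5.10-7.30. The overlapping intervals show
# why a single final ranking becomes hard to defend under this approach.
```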

5.9 Presentation of the Results of the MCA Exercise


As already pointed out, the results of the appraisal are generally presented in the form of
performance tables similar to the ones displayed throughout this paper (see, e.g., Tables 3 and 4).
More elaborate appraisal and evaluation summary tables, particularly useful for MCA exercises
undertaken in an open-up mode (see Section 5.1), can include additional columns to account for
other aspects such as the level of uncertainty surrounding the performances of the options against
the various criteria, the multiple risks and opportunities implied by each option and some
considerations regarding equity (Table 19). As noted in the previous sections,
diagrams and graphs (see, e.g., Figure 7) can also be a powerful and effective way to
communicate the results of the MCA exercise to governments, decision-makers and all the
relevant parties.
The results of the analysis eventually need to be translated into specific plans of action.
In this regard, however, it must not be forgotten that decisions over complex policy problems
present an inherent political character and are thus seldom a matter of only comparing the pros
and cons of the possible alternative solutions. Hence, MCA and other assessment techniques,
which are often explicitly required by laws and regulations, are only of assistance to the political
process of decision-making. To put it in simple terms, appraisal (or evaluation) is not decision-
making.

Table 19 - Example of a performance table useful for MCA exercises undertaken in an open-up mode.

Objective: To promote integration between policies and strategies at all levels
  Criterion: Level of integration between the project and wider key planning and policy objectives
  Weight: 25%
  Quantitative impacts: Not applicable
  Qualitative impacts: The project aligns with local and regional transport objectives but conflicts with environmental and sustainability objectives at all levels
  Uncertainty over impacts: VERY LOW
  Opportunities: The road can contribute to enhancing the accessibility and competitiveness of the local airport
  Risks: Low social acceptability of the project; several protests are likely to occur
  Equity considerations: -
  Performance score (0-10): 5

Objective: To provide employment opportunities for local residents
  Criterion: Number of jobs created as a result of the project
  Weight: 10%
  Quantitative impacts: About 600 jobs during the construction phase; an average of 15 workers per year required for maintenance works
  Qualitative impacts: Not applicable
  Uncertainty over impacts: MEDIUM (figure provided by the project promoter; a detailed breakdown of this figure is not available)
  Opportunities: Possibilities for the establishment of training centres for construction jobs
  Risks: -
  Equity considerations: Construction jobs for local residents may alleviate, at least temporarily, the unemployment problem affecting the area
  Performance score (0-10): 7

Objective: To preserve (to the largest extent possible) the historical character of the area
  Criterion: Number of historical buildings, landmarks and sites potentially threatened by the proposal
  Weight: 15%
  Quantitative impacts: The project would require the demolition of four historic buildings
  Qualitative impacts: Major alterations to the historical character of the area
  Uncertainty over impacts: LOW
  Opportunities: -
  Risks: Reduction in tourism, loss of revenue from tourism for local businesses and loss of jobs in the tourism sector
  Equity considerations: The local economy will be severely damaged as it is highly dependent on tourism
  Performance score (0-10): 3

Objective: To minimize adverse effects on local air quality
  Criterion: Expected changes in local greenhouse gas emissions
  Weight: 30%
  Quantitative impacts: Change in carbon emissions over the 60-year appraisal period = +120 tonnes; change in carbon emissions in the opening year = +2 tonnes
  Qualitative impacts: Not applicable
  Uncertainty over impacts: HIGH (rough estimate based on the examination of similar projects)
  Opportunities: -
  Risks: Disruptions and discomforts for local communities living close to the road
  Equity considerations: These negative impacts will be strongly concentrated on certain areas and social groups
  Performance score (0-10): 4

Objective: To improve road safety
  Criterion: Expected reduction in the number of accidents as a result of the proposal
  Weight: 20%
  Quantitative impacts: Car accidents prevented by the construction of the road: fatal = 20; serious = 125; slight = 400
  Qualitative impacts: Not applicable
  Uncertainty over impacts: LOW (detailed analysis undertaken by XYZ Consultancy Group; see attached report)
  Opportunities: Possibility of exploiting synergies with the regional campaign for road safety
  Risks: -
  Equity considerations: -
  Performance score (0-10): 8

Source: Author’s own elaboration.


6. Concluding Remarks

The last 50 years have seen a rise in the popularity of MCA and the consequent establishment of
a number of different assessment methods, tools and techniques accounting for multiple objectives
and decision criteria. This paper has provided an overview of the theoretical and practical aspects
of some of the most common MCA methods with the view to helping students, academics, but also
practitioners and all those approaching this field of knowledge for the first time to familiarise
themselves with the subject.
There seem to be three main properties which are frequently mentioned when it comes to
explaining the merits of MCA:
•  Comprehensiveness: by taking explicit account of multiple objectives and criteria, MCA
techniques can provide better insights into the nature of the problem at hand relative to mono-
criterion methods such as CBA (Dimitriou et al., 2010 and 2016; Leleur, 2012; Macharis and
Bernardini, 2015).
•  Flexibility: MCA methods make it possible to study different types of problems and deal with a
wide array of data and information, whether quantitative or qualitative in nature (Brown et al.,
2001; Browne and Ryan, 2011; Guhnemann et al., 2012; Barfod and Leleur, 2014).
•  Transparency: the displaying, through tables, graphs or diagrams, of all the objectives,
appraisal criteria, weights and scores adopted during the assessment as well as all the data
and information employed during the analysis provides a clearer and more transparent
approach to appraisal and evaluation (Ward et al., 2016a; Cornet et al., 2018a; Macharis et al.,
2018; Hickman, 2019).
However, as highlighted in this paper, these features should not always be taken for granted.
With reference to the first property, for example, whereas, in principle, it might be true that
analysing a problem in a multi-dimensional fashion may lead to more thorough decisions, it should
not be forgotten that neither formal nor simplified MCA methods are immune from logical flaws
and inconsistencies. Accordingly, MCA should not necessarily be singled out as the ‘best’
appraisal method and certainly must not be regarded as a panacea for better decisions.
The number of existing methods and their flexibility make MCA a dynamic and highly versatile
appraisal and evaluation approach, which can be tailored to suit the needs of the problem at hand.
However, it is also undeniable that this huge assortment of approaches, the great diversity
amongst these methods and the lack of axiomatic foundations of many of them render MCA a
rather chaotic field and confer on this discipline a high degree of subjectivity, often making the
results of the analysis open to debate.
Finally, transparency is not an inherent quality of MCA, but rather a desired feature that must
be pursued in any appraisal and evaluation exercise. Indeed, given the lack of specific rules
and universally accepted guidelines concerning the key steps of the process and, especially in the
case of formal methods, the mechanics and the mathematics behind them, which are often seen by
users without much formal training as a kind of scientific witchcraft, transparency over the key
parameters and underlying assumptions of the analysis becomes fundamental to guarantee the
possibility of third-party audits of the results of an MCA exercise.
References

Abbas, A.E. (2018). Foundations of Multiattribute Utility. Cambridge University Press.


Ananda, J. and Herath, G. (2003). Incorporating Stakeholder Values into Regional Forest Planning:
A Value Function Approach. Ecological Economics, Vol. 45, pp. 75-90.
Anciaes, P. and Jones, P. (2020). Transport Policy for Liveability – Valuing the Impacts on
Movement, Place and Society. Transportation Research Part A, Vol. 132, pp. 157–173.
Annema, J.A., Mouter, N. and Rezaei, J. (2015). Cost-Benefit Analysis (CBA), or Multi-Criteria
Decision-Making (MCDM) or Both: Politicians’ Perspective in Transport Policy Appraisal.
Transportation Research Procedia, Vol. 10, pp. 788-797.
Arrow, K.J. and Raynaud, H. (1986). Social Choice and Multicriterion Decision-Making. The MIT
Press.
Aruldoss, M., Lakshmi, T.M. and Venkatesan, V.P. (2013). A Survey on Multi Criteria Decision
Making Methods and Its Applications. American Journal of Information Systems, Vol. 1, No.1,
pp. 31-43.
Asadabadi, M.R., Chang, E. and Saberi, M. (2019). Are MCDM methods Useful? A Critical Review
of Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP). Cogent Engineering,
Vol. 6, No. 1.
Bana e Costa, C.A. (1990) (Ed.). Readings in Multiple Criteria Decision Aid. Springer.
Bana e Costa, C.A. and Vansnick, J.-C. (2008). A Critical Analysis of the Eigenvalue Method Used
to Derive Priorities in AHP. European Journal of Operational Research, Vol. 187, No. 3, pp.
1422–1428.
Bana e Costa, C.A., Stewart, T.J. and Vansnick, J.-C. (1997). Multicriteria Decision Analysis: Some
Thoughts Based on the Tutorial and Discussion Sessions of the ESIGMA Meetings. European
Journal of Operational Research, Vol. 99, No. 1, pp. 28-37.
Banihabib, M.B., Hashemi-Madani, F.-S. and Forghani, A. (2017). Comparison of Compensatory
and non-Compensatory Multi Criteria Decision Making Models in Water Resources Strategic
Management. Water Resources Management, Vol. 31, pp. 3745–3759.
Banville, C., Landry, M., Martel, J.M. and Boulaire, C. (1998). A Stakeholder Approach to MCDA.
Systems Research and Behavioral Science, Vol. 15, No. 1, pp. 15-32.
Barfod, M. (2018). Sustainable Transport Planning: Involving Stakeholders in the Decision Support
Process Using Planning Workshops and MCDA. Transport, Vol. 33, No. 4, pp. 1052–1066.
Barfod, M. B. and Leleur, S. (Eds.) (2014). Multi-Criteria Decision Analysis for Use In Transport
Decision Making. 2nd Edition, DTU Transport.
Barquet, K. (2016). Multi-Criteria Analysis Guide. Resilience-Increasing Strategies for Coasts,
D.4.2 - Evaluation of DRR Plans. Report prepared under contract from the European
Commission.
Bazerman, M.H. and Moore, D. (2009). Judgment in Managerial Decision Making. John Wiley &
Sons, Ltd.
Beinat E. (1997). Value Functions for Environmental Management, Kluwer, Dordrecht.
Beinat, E. (2001). Multi-Criteria Analysis for Environmental Management. Journal of Multi-Criteria
Decision Analysis, Vol 10, No. 2, p.51.

Belton, V. (1986). A Comparison of the Analytic Hierarchy Process and a Simple MultiAttribute
Value Function. European Journal of Operational Research, Vol. 26, pp. 7-21.
Belton, V. and Steward, T. (2002). Multiple Criteria Decision Analysis: An Integrated Approach.
Kluwer Academic Publisher.
Belton, V. and Stewart, T. (2010). Problem Structuring and Multiple Criteria Decision Analysis. In:
Ehrgott, M., Figueira, J.R. and Greco, S. (Eds.). Trends in Multiple Criteria Decision Analysis.
Springer, pp. 209-240.
Benoit, V., Rousseaux, P. (2003). Aid for Aggregating the Impacts in Life Cycle Assessment.
International Journal of Life Cycle Assessment, Vol. 8, pp. 74-82.
Beroggi, G. (1999). Decision Modeling in Policy Management: An Introduction to the Analytic
Concepts. Springer.
Blades, C. (2013). Multi-Criteria Analysis. In: EIB (European Investment Bank), The Economic
Appraisal of Investment Projects at the EIB. European Investment Bank, pp. 53-65.
Boardman, A.E., Greenberg, D.H., Vining, A.R. and Weimer, D.L. (2006). Cost-Benefit Analysis:
Concept and Practice. Prentice Hall (Third Edition).
Bottomley, P.A, Doyle, J.R. and Green, R.H. (2000). Testing the Reliability of Weight Elicitation
Methods: Direct Rating Versus Point Allocation. Journal of Marketing Research, Vol. 37, No. 4,
pp. 508-513.
Bouyssou, D. (1990) Building Criteria: A Prerequisite for MCDA. In: Bana e Costa C.A. (Ed.),
Readings In Multiple Criteria Decision Aid. Springer, Berlin, Heidelberg, New York, pp. 58–80.
Bouyssou, D., Marchant, T., Pirlot, M. Perny, P., Tsoukias, A. and Vincke, P. (2000). Evaluation
and Decision Models: A Critical Perspective. Kluwer Academic Publishers.
Bouyssou, D., Marchant, T., Pirlot, M., Tsoukias, A. and Vincke, P. (2006). Evaluation and
Decision Models with Multiple Criteria: Stepping Stones for the Analyst. Springer.
Brans, J.-P. and Mareschal, B. (2005). PROMETHEE Methods. In: Figueira, J., Greco, S. and
Ehrgott, M. (Eds.). Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, pp.
163-195.
Brans, J.P. and Vincke, P. (1985). A Preference Ranking Organization Method. Management
Science, Vol. 31, pp. 647-655.
Brown, M., Milner, S. and Bulman, E. (2001). Assessing Transport Investment Projects: A Policy
Assessment Model. In: Giorgi, L. and Pohoryles, R.J. (Eds.), Transport Policy and Research:
What Future? Ashgate, pp. 44-89.
Browne, D. and Ryan, L. (2011). Comparative Analysis of Evaluation Techniques for Transport
Policies. Environmental Impact Assessment Review, Vol. 31, pp. 226–233.
Buede, D. M. (1986). Structuring Value Attributes. Interfaces, Vol. 16, pp. 52-62.
Burgess, J., Stirling, A., Clark, J., Davies, G., Eames, M., Staley, K. and Williamson, S. (2007).
Deliberative Mapping: A Novel Analytic Deliberative Methodology to Support Contested
Science-Policy Decisions. Public Understanding of Science, Vol. 16, No. 3, pp. 299–322.
BTE (Bureau of Transport Economics) (1999) Facts and Furphies in Benefit–Cost Analysis:
Transport. Report 100. Bureau of Transport Economics, Commonwealth of Australia, Canberra.
Chadwick, G.F. (1978). A Systems View of Planning: Towards A Theory of the Urban and Regional
Planning. Pergamon Press.
Chankong, V. and Haimes, Y.Y. (1983). Multiobjective Decision Making: Theory and Methodology.
North-Holland.
Charnes, A. and Cooper W.W. (1977). Goal Programming and Multiple Objectives Optimisations.
European Journal of Operational Research, Vol. 1, pp. 39-54.
Charnes, A. and Cooper, W.W. (1961). Management Models and Industrial Applications of Linear
Programming. John Wiley & Sons.
Charnes, A., Cooper W.W. and Ferguson, R. (1955). Optimal Estimation of Executive
Compensation by Linear Programming. Management Science, Vol. 1, pp. 138-151.
Clemen, R.T. and Reilly, T. (2013). Making Hard Decisions with Decision Tools. South-Western,
Cengage Learning.
Cook, W.D., Golan, I., Kazakov, A. and Kress, M. (1988). A Case Study of a Non-Compensatory
Approach to Ranking Transportation Projects. The Journal of the Operational Research Society,
Vol. 39, No. 10, pp. 901-910.
Corner, J., Buchanan, J, and Hening, M. (2001). Dynamic Decision Problem Structuring. Journal of
Multi-Criteria Decision Analysis. Vol. 10, pp. 129-141.
Cornet, Y., Barradale, M., Barfod, M. and Hickman, R. (2018b). Giving Future Generations a Voice:
Constructing a Sustainability Viewpoint in Transport Appraisal. European Journal of Transport
and Infrastructure Research, Vol. 18, No. 3, pp. 316-339.
Cornet, Y., Barradale, M., Gudmundsson, H. and Barfod, M. (2018a). Engaging Multiple Actors in
Large-Scale Transport Infrastructure Project Appraisal: An Application of MAMCA to the Case
of HS2 High-Speed Rail. Journal of Advanced Transportation. Vol. 18, pp. 1-22.
De Brucker, K., Macharis, C. and Verbeke, A. (2015). Two-Stage Multi-Criteria Analysis and the
Future of ITS-based Safety Innovation Projects. IET Intelligent Transport Systems, Vol. 9, No. 9,
pp. 842-850.
De Brucker, K., Verbeke, A. and Macharis, C. (2004). The Applicability of Multicriteria-Analysis to
the Evaluation of Intelligent Transport Systems (ITS). In: Bekiaris, E. and Nakanishi, Y.J. (Eds.),
Economic Impacts of Intelligent Transportation Systems. Innovations and Case Studies,
Elsevier, Amsterdam, pp. 151-179.
Dean, M. (2018). Assessing the Applicability of Participatory Multi-Criteria Analysis Methodologies
to the Appraisal of Mega Transport Infrastructure. Ph.D. Dissertation. The Bartlett School of
Planning, University College London, UK.
(https://www.academia.edu/43180035/Assessing_the_Applicability_of_Participatory_Multi-
Criteria_Analysis_Methodologies_to_the_Appraisal_of_Mega_Transport_Infrastructure)
Dean, M. (2019). Scenario Planning: A Literature Review. A Paper Prepared as Part of the MORE
(Multi-modal Optimisation of Road-space in Europe) Project - WP3 (Future Scenarios: New
Technologies, Demographics and Patterns of Demand). Project Number: 769276-2. UCL
Department of Civil, Environmental and Geomatic Engineering.
(https://www.academia.edu/43649617/Scenario_Planning_A_Literature_Review)
Dean, M. (2020). Multi-Criteria Analysis. In: Mouter, N. (Ed.), Advances in Transport Policy and
Planning: Standard Appraisal Methods. Elsevier, pp. 165-224.
Dean, M. (2021). Participatory Multi-Criteria Analysis Methods: Comprehensive, Inclusive,
Transparent and User-Friendly? An Application to the Case of the London Gateway Port.
Research in Transportation Economics. Special Issue (Transport Infrastructures: Investments,
Evaluation and Regional Economic Growth), Vol. 88, Article 100887
https://doi.org/10.1016/j.retrec.2020.100887
Dean, M. (2022). Including Multiple Perspectives in Participatory Multi-Criteria Analysis: A
Framework for Investigation. Evaluation. Forthcoming
Dean, M., Hickman, R. and Chen C.-L. (2019) Testing the Application of Participatory MCA: The
Case of the South Fylde Line. Transport Policy, Vol. 73, pp. 62-70.

Despotin, M., Moscarola, J. and Spronk, C. (1983). A User Oriented Listing of Multiple Criteria
Decision Methods. Revue Belge de Statistique, d'Informatique et de Recherche Operationnelle,
Vol 23, No. 4, pp. 4-110.
Diakoulaki, D. and Grafakos, S. (2004). Multicriteria Analysis. ExternE - Externalities of Energy:
Extension of Accounting Framework and Policy Applications. Luxembourg-Ville, Luxembourg:
Directorate - General for Research, Sustainable Energy Systems.
Dimitriou, H.T., Ward, E.J. and Dean, M. (2016). Presenting the Case for the Application of Multi-
Criteria Analysis to Mega Transport Infrastructure Appraisal. Research in Transportation
Economics, Special Edition. Vol. 58, pp. 7-20.
Dimitriou, H.T., Ward, E.J., Wright, P. (2010). Incorporating Principles of Sustainable Development
within the Design and Delivery of Major Projects: An International Study with particular
reference to Major Infrastructure Projects for the Institution of Civil Engineering and the
Actuarial Profession. OMEGA Centre, The Bartlett School of Planning, University College
London.
Dodgson, J.S., Spackman, M., Pearman, A. and Phillips, L.D. (2009). Multi-Criteria Analysis: A
Manual. Department for Communities and Local Government: London.
Dom, A. (1999). Environmental Impact Assessment of Road and Rail Infrastructure. In: Petts, J.,
(Ed.), Handbook of Environmental Impact Assessment Volume 2 - Environmental Impact
Assessment in Practice: Impact and Limitation. Blackwell Science Ltd, pp. 331-350.
Doumpos, M., Figueira, J.R., Greco, S. and Zopounidis, C. (2019) (Eds.). New Perspectives in
Multiple Criteria Decision Making: Innovative Applications and Case Studies. Springer.
Doyle, J.R., Green, R.H. and Bottomley, P.A (1997). Judging Relative Importance: Direct Rating
and Point Allocation are not Equivalent. Organizational Behaviour and Human Decision
Processes, Vol. 70, No. 1, pp. 65-72.
Dyer, J.S. (1990). Remarks on the Analytic Hierarchy Process. Management Science, Vol. 36, No.
3, pp. 249-258.
EC (European Commission) (2008). Guide to Cost–Benefit Analysis of Investment Projects.
Brussels
EC (European Commission) (2015). Guide to Cost–Benefit Analysis of Investment Projects:
Economic appraisal tool for Cohesion Policy 2014-2020. Brussels
Eden, C. and Radford, J. (Eds.) (1990). Tackling Strategic Problems – The Role of Group Decision
Support. Sage.
Edwards, W. (1977). How to Use Multiattribute Utility Measurement for Social Decisionmaking.
IEEE Transactions on Systems, Man, and Cybernetics, Vol. 7, No. 5, pp. 326–340.
Edwards, W. and Barron, F.H. (1994). SMART and SMARTER: Improved Simple Methods for
Multiattribute Utility Measurement. Organizational Behavior and Human Decision Processes,
Vol. 60, pp. 306-325.
Ehrgott, M. (2005). Multiobjective Programming. In: Figueira, J., Greco, S. and Ehrgott, M. (Eds.),
Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, pp. 667-722.
El-Haram, M. and Horner, M. (2008). A Critical Review of Reductionist Approaches for Assessing
the Progress Towards Sustainability. Environmental Impact Assessment Review, Vol. 28, No. 4-
5, pp 286-311.
Faludi, A. and Voogd, H. (1985). Evaluation of Complex Policy Problems: Some Introductory
Remarks. In: Faludi, A. and Voogd, H. (Eds.), Evaluation of Complex Policy Problems. Delftsche
Uitgevers Maatschappij B.V., pp 1-6.
Ferrari, P. (2003). A Method for Choosing from Among Alternative Transportation Projects.
European Journal of Operational Research, Vol. 150, No. 1, pp. 194–203.

Figueira, J., Greco, S. and Ehrgott, M. (2005a). Introduction. In: Figueira, J., Greco, S. and Ehrgott,
M. (Eds.). Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, pp. xxi- xxxvi.
Figueira, J., Greco, S. and Ehrgott, M. (2005b) (Eds.). Multiple Criteria Decision Analysis: State of
the Art Surveys. Springer.
Figueira, J., Mousseau, V. and Roy, B. (2005c). ELECTRE Methods. In: Figueira, J., Greco, S. and
Ehrgott, M. (Eds.). Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, pp.
135-162.
Fishburn, P.C. (1970). Utility Theory for Decision Making. Research Analysis Corporation.
Forester, J. (1999). The Deliberative Practitioner: Encouraging Participatory Planning Processes.
The MIT Press.
Freeling, A.N.S. (1983). Belief and Decision Making. PhD Dissertation, University of Cambridge,
UK.
Freudenberg, M. (2003) Composite Indicators of Country Performance: A Critical Assessment. STI
Working Paper 2003/16, Industry Issues. OECD Publishing.
Funtowicz, S.O. and Ravetz, J.R. (1991). A New Scientific Methodology for Global Environmental
Issues. In: Costanza, R. (Ed.), Ecological Economics: The Science and Management of
Sustainability. Columbia University Press. pp. 137-152.
Gal, T., Stewart, T. and Hanne, T. (1999) (Eds.) Multicriteria Decision Making: Advances in MCDM
Models, Algorithms, Theory, and Applications. Springer.
Gan, X., Fernandez, I.C., Guo, J., Wilson, M., Zhao, Y., Zhou, B. and Wu, J. (2017). When to Use
What: Methods for Weighting and Aggregating Sustainability Indicators. Ecological Indicators,
Vol. 81, pp. 491–502.
Giampietro, M. (2003). Multi-Scale Integrated Analysis of Agroecosystem. CRC Press.
Gregory, R. and Keeney, R.L. (1994). Creating Policy Alternative using Stakeholder Values.
Management of Science, Vol. 40, No. 8, pp. 1035-1048.
Guhnemann, A., Laird, J.J. and Pearman, A.D. (2012). Combining Cost-Benefit and Multi-Criteria
Analysis to Prioritise A National Road Infrastructure Programme. Transport Policy, Vol. 23, pp.
15–24.
Guitouni, A. and Martel, J.-M. (1998). Tentative Guidelines to Help Choosing an Appropriate
MCDA Method. European Journal of Operational Research, Vol. 109, No. 2, pp. 501-521.
Gustavson, K., Lonergan, S. and Ruitenbeek, H. (1999). Selection and Modelling of Sustainable
Development Indicators: A Case Study of the Fraser River Basin, British Columbia. Ecological
Economics, Vol. 28, No. 1, pp. 117-132.
Hajkowicz, S.A., McDonald, G.T. and Smith. P.N. (2000). An Evaluation of Multiple Objective
Decision Support Weighting Techniques in Natural Resource Management. Journal of
Environmental Planning and Management, Vol. 43, No. 4, pp. 505-518.
Harker, P.T. and Vargas, L.G. (1987). The Theory of Ratio Scale Estimation: Saaty’s Analytic
Hierarchy Process. Management Science, Vol. 33, No. 1, pp 1383-1403.
Herath, G. and Prato, T. (2006) (Eds.) Using Multi-criteria Decision Analysis in Natural Resource
Management. Ashgate.
Hickman, R. (2019). The Gentle Tyranny of Cost–Benefit Analysis in Transport Appraisal. In:
Docherty, I. and Shaw, J. (Eds.), Transport Matters. Policy Press, pp. 131-152.
Hobbs, B.F. (1980). A Comparison of Weighting Methods in Power Plant Siting. Decision Sciences,
Vol. 11, pp. 725–737.
Hokkanen, J., and Salminen, P. (1997). ELECTRE III and IV Decision Aids in an Environmental
Problem. Journal of Multi-criteria Decision Analysis, Vol. 6, pp. 215–226.
Huang, I.B., Keisler, J. and Linkov, I. (2011). Multi-criteria Decision Analysis in Environmental
Sciences: Ten Years of Applications and Trends. Science of the Total Environment, Vol. 409,
No. 19, pp. 3578–3594.
Ishizaka, A. and Nemery, P. (2013). Multi-Criteria Decision Analysis: Methods and Software. John
Wiley & Sons, Ltd.
Janssen, R. (2001). On the Use of Multi-Criteria Analysis in Environmental Impact Assessment in
the Netherlands. Journal of Multi-Criteria Decision Analysis, Vol. 10, No. 2, pp. 101-109.
Janssen, R. and Munda, G. (1999). Multi-criteria Methods for Quantitative, Qualitative and Fuzzy
Evaluation Problems. In: van den Bergh, J. (Ed.), Handbook of Environmental and Resource
Economics. Edward Elgar, pp. 837–852.
Jeffreys, I. (2004). The Use of Compensatory and Non-compensatory Multi-Criteria Analysis for
Small-scale Forestry. Small-scale Forest Economics, Management and Policy, Vol. 3, No. 1, pp.
99-117.
Keeney, R.L and Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value
Tradeoffs. Wiley.
Keeney, R.L. (1977). The Art of Assessing Multiattribute Utility Functions. Organizational Behavior
and Human Performance, Vol. 19, pp. 267-310.
Keeney, R.L. (1992). Value-Focused Thinking: A Path to Creative Decision-Making. Harvard
University Press.
Keeney, R.L. (1996). Value-Focused Thinking: Identifying Decision Opportunities and Creating
Alternatives. European Journal of Operational Research, Vol. 92, No. 3, pp. 537-549.
Keeney, R.L. and Gregory, R.S. (2005). Selecting Attributes to Measure the Achievement of
Objectives. Operations Research, Vol. 53, No. 1, pp.1-11.
Kodikara, P.N. (2008) Multi-objective Optimal Operation of Urban Water Supply Systems. Ph.D.
Dissertation. Victoria University, Australia.
Köksalan, M.M., Wallenius, J. and Zionts, S. (2011). Multiple Criteria Decision Making: From Early
History to the 21st Century. World Scientific Publishing.
Koopmans, C. and Rietveld, P. (2013). Long Term Impacts of Mega-Projects: The Discount Rate.
In: Priemus, H. and van Wee, B. (Eds.), International Handbook on Mega-Projects. Edward
Elgar, ch. 14.
Korhonen, P. (2005). Interactive Methods. In: Figueira, J., Greco, S. and Ehrgott, M. (Eds.),
Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, pp. 641-665.
Kuhn, H.W. and Tucker, A.W. (1951) Nonlinear Programming. Proceedings of the 2nd Berkeley
Symposium on Mathematics, Statistics and Probability, University of California Press, Berkeley,
pp. 481-492.
Lee, N. and Kirkpatrick, C. (1997). Integrating Environmental Assessment with Other Forms of
Appraisal in the Development Process. In: Kirkpatrick, C. and Lee, N. (Eds.), Sustainable
Development in a Developing World – Integrating Socio-Economic Appraisal and Environmental
Assessment. Edward Elgar, pp. 3-24.
Lee, N. and Kirkpatrick, C. (2000). Integrated Appraisal, Decision Making and Sustainable
Development: and Overview. In: Lee, N. and Kirkpatrick, C. (Eds.), Sustainable Development
and Integrated Appraisal in a Developing World. Edward Elgar, pp. 1-22
Leleur, S. (2012). Complex Strategic Choices: Applying Systemic Planning for Strategic Decision
Making. Springer.
Li, Y. and Thomas, M.A. (2014). A Multiple Criteria Decision Analysis (MCDA) Software Selection
Framework. Paper prepared for the 47th Hawaii International Conference on System Science.

Macharis, C. and Bernardini, A. (2015). Reviewing the Use of Multi-Criteria Decision Analysis for
the Evaluation of Transport Projects: Time for a Multi-Actor Approach. Transport Policy, Vol. 37,
177-186.
Macharis, C. and Nijkamp, P. (2011). Possible Bias in Multi-Actor Multi-Criteria Transportation
Evaluation: Issues and Solutions. Research Memorandum 2011-31. Faculty of Economics and
Business Administration, VU University Amsterdam.
Macharis, C. Turcksin, L. and Lebeau, K. (2012). Multi Actor Multi Criteria Analysis (MAMCA) as a
Tool to support Sustainable Decisions: State of Use. Decision Support Systems, Vol. 54, pp.
610–620.
Macharis, C., De Brucker, K. and Van Raemdonck, K. (2018). When to Use Multi Multi Actor Multi
Criteria Analysis or Other Evaluation Methods? In: Baudry, G. and Macharis, C. (Eds.),
Decision-Making for Sustainable Transport and Mobility. Edward Elgar, pp. 28-47.
Manheim, M., Suhrbier, J., Bennett, E., Neumann, L., Colcord, F. and Reno, A. (1975).
Transportation Decision-Making: A Guide to Social and Environmental Consideration. National
Cooperative Highway Research Program Report 156. Transportation Research Board, National
Research Council, Washington, D.C.
Martel, J.-M. and Matarazzo, B. (2005). Other OUTRANKING Approaches. In: Figueira, J., Greco,
S. and Ehrgott, M. (Eds.). Multiple Criteria Decision Analysis: State of the Art Surveys. Springer,
pp. 197-262.
Marttunen, M., Lienert, J. and Belton, V. (2017). Structuring Problems for Multi-Criteria Decision
Analysis in Practice: A Literature Review of Method Combinations. European Journal of
Operational Research. Vol. 263, No. 1, pp. 1-17.
Mendoza, G.A., Hartanto, H., Pranhu, R. and Villanueva, T. (2002). Multicriteria and Critical
Threshold Value Analyses in Assessing Sustainable Forestry. Journal of Sustainable Forestry,
Vol. 15, No. 2, pp. 25-62.
McAllister, D.M. (1982). Evaluation in Environmental Planning: Assessing Environmental, Social,
Economic and Political Trade-Offs. The MIT Press.
Mcdowall, W. and Eames, M. (2007). Towards a Sustainable Hydrogen Economy: A Multi-Criteria
Sustainability Appraisal of Competing Hydrogen Futures. International Journal of Hydrogen
Energy, Vol. 32, No.18, pp. 4611-4626.
Miller, D.H. (1985). Equity and Efficiency Effects of Investment Decisions: Multicriteria Methods for
Assessing Distributional Implications. In: Faludi, A. and Voogd, H. (Eds.), Evaluation of
Complex Policy Problems. Delftsche Uitgevers Maatschappij B.V., pp. 35-50.
Miller, G.A. (1956). The Magical Number Seven, Plus or Minus Two. Psychological Review, Vol.
63, No. 1, pp. 81-97.
Mouter, N., Dean, M., Koopmans, K. and Vassalo, J.M. (2020). Comparing Cost-Benefit Analysis
and Multi-Criteria Analysis. In: Mouter, N. (Ed.), Advances in Transport Policy and Planning:
Standard Appraisal Methods. Elsevier, pp. 225-254.
Mu, E. and Pereyra-Rojas, M. (2017). Practical Decision Making: An Introduction to the Analytic
Hierarchy Process (AHP) Using Super Decisions V2. Springer.
Munda, G. (1995). Multicriteria Evaluation in a Fuzzy Environment - Theory and Applications in
Ecological Economics. Springer.
Munda, G. (2004). Social Multi-Criteria Evaluation: Methodological Foundations and Operational
Consequences. European Journal of Operational Research, Vol. 158, No. 3, pp 662–677.
Munda, G. (2008). Social Multi-Criteria Evaluation for a Sustainable Economy. Springer.
Németh, B., Molnàr, A., Bozòki, S., Wijaya, K., Inotail, A., Campbell, J.D. and Kalò, Z. (2019).
Comparison of Weighting Methods Used in Multicriteria Decision Analysis Frameworks in
Healthcare with Focus on Low- and Middle-Income Countries. Journal of Comparative
Effectives Research, Vol. 8, No. 4, pp. 195–204.
Nijkamp, P. and Ouwersloot, H. (1997). A Decision Support System for Regional Sustainable
Development: The Flag Model. In: van der Bergh, J.C.J.M. and Hofkes M. (Eds.), Theory and
Implementation of Sustainable Development Modeling. Dordrecht, Kluwer.
Nijkamp, P. and Vreeker, R. (2000). Sustainability Assessment of Development Scenarios:
Methodology and Application to Thailand. Ecological Economics, Vol. 33, pp. 7–27.
Nijkamp, P., Rietveld, P. and Voogd. H. (1990). Multicriteria Evaluation in Physical Planning. North-
Holland.
Odu, G.O. (2019). Weighting Methods for Multi-Criteria Decision Making Technique. Journal of
Applied Sciences and Environmental Management, Vol. 23, No. 8, pp. 1449-1457.
Oliveira, V. and Pinho, P. (2010). Evaluation in Urban Planning: Advances and Prospects. Journal
of Planning Literature, Vol. 24, No. 4, pp. 343 –361
Ozernoy, V.W. (1992). Choosing the “Best” Multiple Criteria Decision-Making Method. INFOR:
Information Systems and Operational Research, Vol. 30, No. 2, pp. 159-171.
Petts, J. and Leach, B. (2000) Evaluating Methods for Public Participation: Literature Review.
Research and Development, Technical Report E135, University of Birmingham. Environment
Agency.
Ponti, M. (2003). Welfare Basis of Evaluation. In: Pearman, A., Mackie, P. and Nellthorp, J. (Eds.),
Transport Project, Programmes and Polices – Evaluation Needs and Capabilities. Ashgate, pp.
139-150.
Poyhonen, M. and Hamalainen, R.P. (2001). On the Convergence of Multiattribute Weighting
Methods. European Journal of Operational Research, Vol. 129, pp. 569-585.
Proctor, W. and Drechsler, M. (2006). Deliberative Multicriteria Evaluation. Environment and
Planning C: Government and Policy, Vol. 24, No. 2, pp. 169-190.
Quinet, E. (2000). Evaluation Methodologies of Transportation Projects in France. Transport
Policy, Vol.7, No.1, pp. 27-34.
RAC (Resource Assessment Commission) (1992). Multi-Criteria Analysis as a Resource
Assessment Tool. Research Paper no. 6, AGPS, Canberra.
Renn O., Webler, T., Rakel, H., Dienel, P. and Johnson, B. (1993). Public Participation in Decision
Making: A Three-Step Procedure. Policy Science, Vol. 26, No. 3, pp. 189-214.
Renn, O. (2015). Stakeholder and Public Involvement in Risk Governance. International Journal of
Disaster Risk Science, Vol. 6, pp. 8-20.
Rezaei, J. (2015). Best-Worst Multi-Criteria Decision-Making Method. Omega. Vol. 53, pp. 49–57.
Rogers, M., and Duffy, A. (2012). Engineering Project Appraisal. Blackwell Science.
Rogers, M., Bruen, M. and Maystre, L.-Y. (2000). ELECTRE and Decision Support: Methods and
Applications in Engineering and Infrastructure Investment. Kluwer Academic Publishers.
Rosenberger, R.S. (2001). An Integrative Hierarchical Framework for Environmental Valuation:
Value Pluralism, Thresholds, and Deliberation. Research Paper 2001-14, Regional Research
Institute and Division of Resource Management West Virginia University.
Rosenhead, J. and Mingers, J. (2001) (Eds.). Rational Analysis for a Problematic World Revisited:
Problem Structuring Methods for Complexity, Uncertainty and Conflict. Wiley.
Rosenhead, J. and Mingers, J. (2004). Problem Structuring Methods in Action. European Journal
of Operational Research, Vol. 152, pp. 530–554.

Roy, B. (1968). Classement et Choix en Présence de Points de Vue Multiples (La Méthode
ELECTRE) (Ranking and Selection in the Presence of Multiple Points of View – The ELECTRE
Method). RIRO‚ 8, 57-75. (In French).
Roy, B. (1985) Methodologie Multicritère d'Aide à la Decision (Multicriteria Methodology for
Decision Aiding), Economica, Paris. (in French).
Roy, B. (1996). Multicriteria Methodology for Decision Aiding. Kluwer.
Roy, B. and Bouyssou, D. (1986). Comparison of Two Decision-Aid Models Applied to a Nuclear
Power Plant Siting Example. European Journal of Operational Research, Vol. 25, pp. 200-215.
Saaty, T.L. (1980). The Analytic Hierarchy Process. McGraw Hill.
Saaty, T.L. (1990a). An Exposition of the AHP in Reply to the Paper "Remarks On The Analytic
Hierarchy Process". Management Science, Vol. 36, pp. 259-268.
Saaty, T.L. (1990b). How to Make a Decision: The Analytic Hierarchy Process. European Journal
of Operational Research, Vol. 48, pp. 9-26.
Saaty, T.L. (1993). What is Relative Measurement? The Ratio Scale Phantom. Mathematical and
Computer Modelling, Vol. 17, No. 4/5, pp. 1-12.
Saaty, T.L. (2001). Decision Making for Leaders: The Analytical Hierarchy Process for Decisions in
a Complex World. RWS Publications.
Saaty, T.L. (2003). Decision-Making with the AHP: Why is the Principal Eigenvector Necessary?
European Journal of Operational Research, Vol. 145, pp. 85–91.
Salling, K.B., Barfod, M.B., Pryn, M.R. and Leleur, S. (2018). Flexible Decision Support for
Sustainable Development: The SUSTAIN Framework Model. European Journal of Transport
and Infrastructure Research, Vol. 18, No. 3, pp. 295-315.
Samset, K. and Christensen, T. (2017). Ex Ante Project Evaluation and the Complexity of Early
Decision-Making. Public Organization Review, Vol. 17, pp. 1–17.
Sayers, T.M., Jessop, A.T. and Hills, P.J. (2003). Multi-Criteria Evaluation of Transport Options -
Flexible, Transparent and User-Friendly? Transport Policy, Vol. 10, No. 1, pp. 95–105.
Schärlig A. (1985). Décider Sur Plusieurs Critères: Panorama De L'aide à la Décision Multicritère
(Decide on Several Criteria: Panorama of Multiple-Criteria Decision Analysis) - Volume 1.
Presses Polytechniques et Universitaires Romandes. (In French).
Schoemaker, P.J. and Waid, C.C. (1982). An Experimental Comparison of Different Approaches to
Determining Weights in Additive Value Models. Management Science, Vol. 28, pp. 182-196.
Schutte, I.C. (2010). The Appraisal of Transport Infrastructure Projects in the Municipal Sphere of
Government in South Africa, with Reference to the City of Tshwane. Ph.D. Dissertation.
Department of Transport Economics, Logistics and Tourism. University of South Africa.
Söderbaum P. (1998). Economic and Sustainability: An Actor-Network to Evaluation. In: Lichfield,
N., Barbanente, A., Borri, D., Khakee, A. and Prat, A. (Eds.), Evaluation in Planning: Facing the
Challenge of Complexity. Kluwer Academic Publishers, pp. 51-72.
Stagl, S. (2006). Multicriteria Evaluation and Public Participation: The Case of UK Energy Policy.
Land Use Policy, Vol. 23, No. 1, pp. 53-62.
Stagl, S. (2007). SDRN Rapid Research and Evidence Review on Emerging Methods for
Sustainable Valuation and Appraisal. A Report to the Sustainable Development Research
Network, Final Report, Policy Studies Institute, London.
Stewart, T. (1992). A Critical Survey on the Status of Multiple Criteria Decision Making Theory and
Practice. Omega, Vol. 20, pp. 569-586.

Stirling A and Mayer S. (2001). A Novel Approach to the Appraisal of Technological Risk: A
Multicriteria Mapping Study of a Genetically Modified Crop. Environment and Planning C:
Government and Policy, Vol. 19, No. 4, pp. 529-555.
Stirling, A. (1998). Risk at a Turning Point? Journal of Risk Research, Vol. 1, No. 2, pp. 97-109.
Stirling, A. (2006). Analysis, Participation and Power: Justification and Closure in Participatory
Multi-Criteria Analysis. Land Use Policy, Vol. 23, No. 1, pp. 95–107.
Stirling, A. (2008). Opening up or closing down: Participation, pluralism and diversity in the social
appraisal of technology. Journal of Science, Technology and Human Values, Vol. 33, No. 2, pp.
262-294.
Thokala, P., Devlin, N., Marsh, K., Baltussen, R., Boysen, M., Kalo, Z., Longrenn, T., Mussen, F.,
Peacock, S., Watkins, J. and Ijzerman, M. (2016). Multiple Criteria Decision Analysis for Health
Care Decision Making - An Introduction: Report 1 of the ISPOR MCDA Emerging Good
Practices Task Force. Value in Health, Vol. 19, pp. 1-13.
Triantaphyllou, E. (2000). Multi-Criteria Decision Making Methods: A Comparative Study. Kluwer
Academic Publishers.
Triantaphyllou, E. and Mann, H. (1989). An Examination of the Effectiveness of Multi-Dimensional
Decision-Making Methods: A Decision-Making Paradox. Decision Support Systems, Vol. 5, pp.
303-312.
Tsamboulas, D., Yiotis G.S. and Panou K.D. (1999). Use of Multicriteria Methods for the
Assessment of Transport Projects. Journal of Transportation Engineering, Vol. 125, No. 5 (407).
Van Ierland, E.C., de Bruin, K. and Watkiss, P. (2013). Multi-Criteria Analysis: Decision Support
Methods for Adaptation. European Commission FP7 funded MEDIATION Project, Briefing Note
6.
Van Pelt, M.J.F. (1993). Ecological Sustainability and Project Appraisal. Averbury.
Vanclay, F. (1999). Social Impact Assessment. In: Petts, J., (Ed.), Handbook of Environmental
Impact Assessment Volume 1 - Environmental Impact Assessment: Process, Methods and
Potential. Blackwell Science Ltd, pp. 301-326.
Vari, A. (1995). Citizens’ Advisory Committee as a Model for Public Participation: A Multiple-
Criteria Evaluation. In: Renn, O., Webler, T. and Wiedemann, P. (Eds.), Fairness and
Competence in Citizen Participation – Evaluation Models for Environmental Discourse. Kluwer
Academic Publisher, pp. 103-115.
Vincke, P. (1992) Multi-criteria Decision-Aid. Wiley.
Von Winterfeldt, D. and Edwards, W. (1986). Decision Analysis and Behavioural Research.
Cambridge University Press, Cambridge.
Voogd, J.H. (1983). Multicriteria Evaluation for Urban and Regional Planning. Pion.
WAG (Welsh Assembly Government) (2008). Welsh Transport Planning and Appraisal Guidance.
Wang, J.-J., Jing, J.-J., Zhang, C.-F. and Zhao J.-H. (2009). Review on Multi-Criteria Decision
Analysis Aid in Sustainable Energy Decision-Making. Renewable and Sustainable Energy
Reviews, Vol. 13, No. 9, pp. 2263–2278.
Wang, Y.-M. and Elhag, T. (2006). An Approach to Avoiding Rank Reversal in AHP. Decision
Support Systems. Vol. 42, No. 3, pp. 1474-1480.
Ward, E.J., Dimitriou, H.T. and Dean, M. (2016a). Theory and Background of Multi-Criteria
Analysis: Toward a Policy-Led Approach to Mega Transport Infrastructure Project Appraisal.
Research in Transportation Economics, Special Edition, Vol. 58, pp. 21-45.

Ward, E.J., Dimitriou, H.T., Wright, P. and Dean, M. (2016b). Application of Policy-Led Multi-
Criteria Analysis to the Appraisal of the Northern Line Extension, London. Research in
Transportation Economics, Special Edition, Vol. 58, pp. 46-80.
Watróbski, J., Jankowski, J., Ziemba, P., Karczmarczyk, A. and Ziolo, M. (2018). Generalised
Framework for Multi-Criteria Method Selection. Omega, Vol. 86, pp. 107-124.
Watson, S.R. and Buede, D.M. (1987). Decision Synthesis: The Principles and Practice of Decision
Analysis. Cambridge University Press, New York, United States of America.
Wedley, W.C. (2010). Issues in Aggregating AHP/ANP Scales. In: Phillips-Wren, G., Jain, L.C.,
Nakamatsu, K. and Howlett, R.J. (Eds.), Advances in Intelligent Decision Technologies:
Proceedings of the Second KES International Symposium IDT 2010. Springer, pp. 29-42.
Weiss, J.W. and Weiss, D.J. (2008). How to Use Multiattribute Utility Measurement for Social
Decisionmaking. In: Weiss, J.W. and Weiss, D.J. (Eds.). A Science of Decision Making: The
Legacy of Ward Edwards. Oxford University Press, pp. 343-363.
Worsley, T. and Mackie, P. (2015). Transport Policy, Appraisal and Decision-Making. RAC
Foundation, London: Institute for Transport Studies, University of Leeds.
Yoe, C. (2002). Trade-Off Analysis Planning and Procedures Guidebook. A Report Submitted to
U.S. Army Corps of Engineers (USACE) Institute for Water Resources (IWR).
Zanakis, S.H., Solomon, A., Wishart, N. and Dublish, S. (1998). Multi-Attribute Decision Making: A
Simulation Comparison of Select Methods. European Journal of Operational Research, Vol.
107, No. 3, pp. 507-529.
Zardari, N.H., Ahmed, K., Shirazi, S.M. and Yusop, Z.B. (2015). Weighting Methods and their
Effects on Multi-Criteria Decision Making Model Outcomes in Water Resources Management.
Springer.
Zeleny, M. (1982). Multiple Criteria Decision Making. McGraw-Hill.

