
DECISION THEORY

Chapter 6
Multi-Attribute Utility Function

Sonia REBAI
Institut Supérieur de Gestion
University of Tunis

Introduction

Ø The first attempts at decision-making with one or multiple
objectives go back to the end of the 1960s through the work
of Raiffa and Edwards [Rai69, Edw71], which gave birth to
Decision Analysis.

Ø The basic idea of such an approach is that encoding a
utility function in a given decision context makes it possible to
assign scores or utilities to the potential actions the
decision-maker is facing. These scores then rank the actions
from the least desirable to the most desirable (or vice versa).

Ø The possibility of building such scores, however, requires
the verification of two conditions.

ü The first is to make explicit the coherence conditions that
the decision-maker's preferences must satisfy.

ü The second relates to the additional conditions under
which it becomes possible to decompose the initial
multi-objective utility function into a simple combination of
single-objective utility functions (also called multi-attribute
and single-attribute utility functions, respectively).

The limits of the decision-maker's cognitive abilities
make such decompositions necessary for the
encoding of utility functions. Indeed, since each individual
has his own preferences, each decision-maker has his
own utility function.

In order to encode the utility function, the analyst asks the
decision-maker to arbitrate between two alternatives.
However, because of the multiplicity of attributes, some
arbitrations can be extremely complex to establish from a
cognitive point of view.

The purpose of this chapter is to study the most commonly
used decomposition forms.

Specifically, we will discuss the additive decomposition of
utility functions as well as a more recent decomposition
technique: the multiplicative form.

The Decomposition of a Multi-Attribute Utility Function


In practice, the multiplicity of the DM's objectives leads to
describing the possible consequences by means of various
attributes; that is, the set of consequences is a
multidimensional space.
Thus, a decision-maker wishing to buy a car may have as a
set of choices X = {Opel Corsa, Renault Clio, Peugeot 206}. If
the criteria of choice (the attributes) are, for simplicity, the
engine capacity, the brand, and the price, one can also
express the set X in the following form: X = {(1.2l; Opel;
11400€), (1.2l; Renault; 11150€), (1.1l; Peugeot; 11600€)}.

Any utility function on this set therefore satisfies the
condition below:

∀ X = (x1, x2, x3), Y = (y1, y2, y3):  X ≽ Y ⟺ u(x1, x2, x3) ≥ u(y1, y2, y3)

The effective construction of a multi-attribute utility function
u poses many practical problems. Due to the limited
cognitive abilities of the DM, it requires its decomposition
into a combination of single-attribute utilities.
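
As a tiny illustrative sketch (added here, with purely hypothetical utility values), such a function induces a ranking of the alternatives simply by sorting on u:

# Minimal sketch: a utility function over the car example induces a ranking.
# The utility values below are hypothetical, for illustration only.
cars = {
    ("1.2l", "Opel", 11400): 0.55,
    ("1.2l", "Renault", 11150): 0.70,
    ("1.1l", "Peugeot", 11600): 0.40,
}

# X is preferred to Y iff u(X) >= u(Y), so sorting by u gives the ranking.
for rank, car in enumerate(sorted(cars, key=cars.get, reverse=True), start=1):
    print(rank, car, "u =", cars[car])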
Consider a person who wants to acquire a computer; the
attributes of his preferences are: the brand, the processor
power, the hard disk capacity, the monitor size, the memory
size, and the price.

The DM can easily compare

(Dell, 1.1GHz, 30GB, 17", 128MB, 1000€) and

(Apple, 1.1GHz, 30GB, 17", 128MB, 1000€)

because only the brand differs between these two machines.
On the other hand, from a cognitive point of view, it is
much more difficult to choose between

(Dell, 1.1GHz, 30GB, 17", 128MB, 1000€) and

(Apple, 850MHz, 40GB, 15", 256MB, 900€)

because these two machines have very different
characteristics.

The Multi-Attribute Utility Function Assessment

Let X1, . . . , Xn (n ≥ 2) be a set of attributes associated with
the consequences of a decision problem.

The utility of (x1, . . . , xn) can be determined using either

² a direct assessment, or

² a decomposed assessment.

Example

Suppose you are interested in buying a car from three
selected alternatives. Let us further suppose that you have
identified two factors that may influence your choice: price
and lifetime.

            Car 1    Car 2    Car 3
Price       24000    18000    14000
Lifetime       12        9        6

The possible consequences of the decision alternatives are
captured by the two identified attributes; we therefore need
to determine a two-attribute utility function u(Price, Lifetime).

Direct assessment
Estimate the combined utility u(x1, . . . , xn) over the given
values of all n attributes.
Assign a utility of 0 to the worst consequence (24000, 6),
and a utility of 1 to the best consequence (14000, 12).
For example, a direct assessment of the utility of the
consequence (18000, 9) can be determined from:
L1: receive (18000, 9) for certain
≈
L2: receive (14000, 12) with probability p and (24000, 6) with probability 1 - p

The indifference probability p then gives directly u(18000, 9) = p.

In order to find a good representation through direct
assessment, we need to assess utilities for a substantial
number of possible outcomes.
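
As a small illustrative sketch (not from the slides), the snippet below records the direct assessment for this example, assuming a hypothetical indifference probability p = 0.6 for the lottery above:

# Minimal sketch of direct assessment for the car example.
# The indifference probability p is assumed (hypothetical DM answer).
best, worst, intermediate = (14000, 12), (24000, 6), (18000, 9)

p = 0.6  # assumed answer to the standard-gamble question above

utility = {
    worst: 0.0,         # utility 0 assigned to the worst consequence
    best: 1.0,          # utility 1 assigned to the best consequence
    intermediate: p,    # u(18000, 9) = p by the indifference judgment
}
print(utility)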

Decomposed assessment:
First estimate n marginal utilities ui(xi) for the given values
of the n attributes; then compute u(x1, . . . , xn) by combining
the ui(xi) of all attributes:
u(x1, . . . , xn) = f[u1(x1), . . . , un(xn)]

§ u(X1, . . . , Xn) is an n-attribute utility function;

§ u(X1, x2, . . . , xn) is a sub-utility function for X1 given
fixed values xi of the attributes Xi, i = 2, . . . , n;

§ ui(Xi) is the marginal utility function for Xi.

Different functional forms


Let X and Y be two attributes (the generalization to n > 2 attributes
is straightforward). Consider the utility function u(X, Y) for X
and Y. u can have
§ an additive form if, for constants kX and kY,
u(X, Y) = kX*uX(X) + kY*uY(Y)
§ a multi-linear form if, for constants kX, kY, and kXY,
u(X, Y) = kX*uX(X) + kY*uY(Y) + kXY*uX(X)*uY(Y)
§ a multiplicative form if, for constants kX, kY, cX, and cY,
u(X, Y) = (kX*uX(X) + cX)*(kY*uY(Y) + cY)
The constants are often called weights or scaling constants.
The different functional forms are only valid under certain
assumptions.
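
To make the three forms concrete, here is a minimal Python sketch (added for illustration; the marginal utilities and constants in the usage example are arbitrary placeholders):

# Minimal sketch of the three functional forms for two attributes X and Y.
# uX and uY are single-attribute (marginal) utility functions scaled to [0, 1].
def additive(uX, uY, kX, kY):
    return lambda x, y: kX * uX(x) + kY * uY(y)

def multilinear(uX, uY, kX, kY, kXY):
    return lambda x, y: kX * uX(x) + kY * uY(y) + kXY * uX(x) * uY(y)

def multiplicative(uX, uY, kX, kY, cX, cY):
    return lambda x, y: (kX * uX(x) + cX) * (kY * uY(y) + cY)

# Usage with placeholder marginal utilities over price and lifetime
# (linear rescalings of the car example, purely for illustration).
uPrice = lambda price: (24000 - price) / 10000   # 1 at 14000, 0 at 24000
uLife = lambda life: (life - 6) / 6              # 1 at 12, 0 at 6
u = multilinear(uPrice, uLife, kX=0.5, kY=0.3, kXY=0.2)
print(u(18000, 9))   # utility of Car 2 under these assumed constants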

Preferential Independence

Attribute Y is preferentially independent (PI) of attribute Z iff
the preference order over consequences that differ only in y does not
depend on the level z of attribute Z. More symbolically, Y is
preferentially independent of Z iff for all z,

(y, z) ≽ (y', z)  ⇒  (y, z') ≽ (y', z')  for all z'



Example

Choosing a meal:

Attribute 1: Drink {Coca, Fanta}

Attribute 2: Meat {Beef, Chicken}

1. If the DM prefers Beef to Chicken whatever the drink, then

(Coca, Beef) is preferred to (Coca, Chicken) and

(Fanta, Beef) is preferred to (Fanta, Chicken).

Thus, meat is PI of drink.


2. If the DM prefers Fanta with chicken and Coca with beef, then

(Coca, Chicken) is less preferred than (Fanta, Chicken) and

(Coca, Beef) is preferred to (Fanta, Beef).

Thus, drink is not PI of meat.
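
As an illustrative sketch (added here), preferential independence can be checked mechanically on a finite outcome set; the value function v below is a hypothetical encoding of case 2 (Fanta with chicken, Coca with beef):

# Minimal illustrative sketch (not from the slides): checking preferential
# independence over a finite outcome set.
drinks = ["Coca", "Fanta"]
meats = ["Beef", "Chicken"]

v = {  # assumed strengths of preference, higher is better
    ("Coca", "Beef"): 4, ("Fanta", "Beef"): 3,
    ("Fanta", "Chicken"): 2, ("Coca", "Chicken"): 1,
}

def is_pi(first_levels, second_levels, key):
    # True iff the ranking over first_levels is identical for every fixed
    # level of the second attribute (the definition of PI on a finite set).
    rankings = [sorted(first_levels, key=lambda a: v[key(a, b)])
                for b in second_levels]
    return all(r == rankings[0] for r in rankings)

print(is_pi(meats, drinks, key=lambda meat, drink: (drink, meat)))  # True: meat is PI of drink
print(is_pi(drinks, meats, key=lambda drink, meat: (drink, meat)))  # False: drink is not PI of meat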

Mutual Preferential Independence

Attributes Y and Z are mutually PI (MPI) iff Y is PI of Z and Z is PI

of Y

(y, z) ≽ (y', z)  ⇒  (y, z') ≽ (y', z')  for all y, y', z, z' (and symmetrically with the roles of Y and Z exchanged)



Example
Choosing a meal:
Attribute X: Drink {Coca, Fanta}
Attribute Y: Meat {Beef, Chicken}
Attribute Z: Side dish {Potato, Rice}
Suppose that the DM prefers Coca to Fanta, Beef to Chicken,
and Potato to Rice.
1. If (Coca, y, z) is preferred to (Fanta, y, z) and
(x, Beef, z) is preferred to (x, Chicken, z) and
(x, y, Potato) is preferred to (x, y, Rice),
then each attribute is PI of the others.

2. If (Coca, Beef, Rice) is preferred to (Fanta, Beef, Potato)

and (Fanta, Chicken, Potato) is preferred to (Coca, Chicken, Rice),

then the pair (drink, side dish) is not PI of meat, so the attributes are not MPI.

The fact that each attribute is PI of the others therefore does not imply MPI.

Utility Independence
Y is utility independent (UI) of Z when the preference order
between lotteries on Y given z does not depend on the
particular level of z.

If Y is UI of Z and (y', z0) is the certainty equivalent of the lottery
[(y1, z0); (y2, z0); 0.5-0.5], then (y', z1) must also be the certainty
equivalent of the lottery [(y1, z1); (y2, z1); 0.5-0.5].

Mutual Utility Independence


Y and Z are mutually utility independent (MUI) if Y is UI of Z
and Z is UI of Y.
Theorem

Let (x0, y0) denote the worst possible outcome and (x*, y*)

the best one.

If X and Y are MUI, then the two-attribute utility function is

multi-linear:

u(x, y) = kxux(x) + kyuy(y) + kxyux(x)uy(y)

where
1. u(x0, y0) = 0 and u(x*, y*) = 1

2. ux(x) is the marginal utility function on X, normalized by

ux(x0) = 0 and ux(x*) = 1.

3. uy(y) is the marginal utility function on Y, normalized by

uy(y0) = 0 and uy(y*) = 1.

4. kx = u(x*, y0)

5. ky = u(x0, y*)

6. kxy = 1 - kx - ky
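
A minimal sketch (added for illustration) of how the theorem is used in practice: once the normalized marginal utilities and the two corner utilities u(x*, y0) and u(x0, y*) are elicited, the multilinear function is fully determined. The marginal utilities and corner values below are assumed, not taken from the slides.

# Minimal sketch: building the multilinear two-attribute utility function
# from normalized marginal utilities and two elicited corner utilities.
# All numerical inputs are hypothetical.
x0, x_star = 24000, 14000      # worst and best price (car example)
y0, y_star = 6, 12             # worst and best lifetime

# Assumed normalized marginal utilities (ux(x0)=0, ux(x*)=1, etc.).
ux = lambda x: (x0 - x) / (x0 - x_star)
uy = lambda y: (y - y0) / (y_star - y0)

kx = 0.6   # assumed elicited u(x*, y0)
ky = 0.3   # assumed elicited u(x0, y*)
kxy = 1 - kx - ky

def u(x, y):
    return kx * ux(x) + ky * uy(y) + kxy * ux(x) * uy(y)

print(u(x0, y0), u(x_star, y_star))   # 0.0 and 1.0 by construction
print(u(18000, 9))                    # utility of an intermediate consequence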
ü kx = u(x*, y0) is equal to p, where p is the indifference
probability between the two following lotteries:

L1: receive (x*, y0) for certain
≈
L2: receive (x*, y*) with probability p and (x0, y0) with probability 1 - p

ü ky = u(x0, y*) is equal to q, where q is the indifference
probability between the two following lotteries:

L1: receive (x0, y*) for certain
≈
L2: receive (x*, y*) with probability q and (x0, y0) with probability 1 - q

ü 0 ≤ kx ≤ 1; 0 ≤ ky ≤ 1; and -1 ≤ kxy ≤ 1
ü Note that if kxy = 0 (i.e., kx + ky = 1), then the utility function
turns out to be simply additive:

u(x, y) = kxux(x) + kyuy(y)

Definition

X and Y are additive independent if and only if, for any x, x', y, y',
the following lotteries are equivalent:

L1: (x, y) with probability 0.5 and (x', y') with probability 0.5
≈
L2: (x, y') with probability 0.5 and (x', y) with probability 0.5

To verify the additive independence of X and Y, it is enough to
test whether the DM is indifferent between the following lotteries:

L1: (x*, y*) with probability 0.5 and (x0, y0) with probability 0.5
≈
L2: (x*, y0) with probability 0.5 and (x0, y*) with probability 0.5
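
To see why this test characterizes the additive case, note that indifference between L1 and L2 means equal expected utilities (a short derivation added here for clarity):

0.5·u(x*, y*) + 0.5·u(x0, y0) = 0.5·u(x*, y0) + 0.5·u(x0, y*)

With the normalization u(x0, y0) = 0, u(x*, y*) = 1 and the scaling constants kx = u(x*, y0) and ky = u(x0, y*), this reduces to 1 = kx + ky, i.e. kxy = 1 - kx - ky = 0, which is exactly the additive case above.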

But what if the additive independence does not hold?


The multiplicative utility function
Under the MUI condition, the multiplicative utility function can be
used. Let Xi denote the ith attribute, and Ui(xi) and ki the
corresponding individual utility function and scaling constant.
The multiplicative utility function for n different attributes is
given by the equation

k·U(X1, ..., Xn) + 1 = ∏i=1..n (k·ki·Ui(Xi) + 1)

where ki is the utility of an outcome having the best level on
attribute Xi and the worst level on all others;
Ui is the marginal single-attribute utility function (SAUF) for
attribute i, scaled from zero to one;
k and ki are scaling constants, where k belongs to [-1, 1] and
ki belongs to [0, 1].
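
As a minimal sketch (not from the slides), the equation above can be solved for U to evaluate the multiplicative aggregation directly; the attribute utilities, constants, and the value of k in the usage line are placeholders.

import math

def multiplicative_utility(us, ks, k):
    # Evaluate U(X1, ..., Xn) from k*U + 1 = prod_i (k*k_i*U_i + 1).
    # For k != 0 this gives U = (prod_i (k*k_i*U_i + 1) - 1) / k; for k == 0
    # the form reduces to the additive combination sum_i k_i*U_i.
    if k == 0:
        return sum(ki * ui for ki, ui in zip(ks, us))
    return (math.prod(k * ki * ui + 1 for ki, ui in zip(ks, us)) - 1) / k

# Placeholder usage: three marginal utilities and scaling constants.
# In practice k must solve the scaling equation discussed below; 0.5 is
# only a stand-in with the right sign (here sum(ks) < 1, so k > 0).
print(multiplicative_utility(us=[0.8, 0.5, 0.2], ks=[0.4, 0.3, 0.2], k=0.5))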
ü If ∑i=1..n ki = 1, then k = 0 and the multi-attribute utility function
will be in an additive form:

U(X) = ∑i=1..n ki·Ui(Xi)

ü If ∑i=1..n ki < 1, then k > 0

ü If ∑i=1..n ki > 1, then -1 < k < 0

ü To find k, we need to solve the following scaling equation:

k + 1 = ∏i=1..n (k·ki + 1)
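
Since k = 0 always satisfies this equation trivially, the constant of interest is the nonzero root, which can be found numerically. Below is a minimal bisection sketch (added for illustration; the function name and tolerance are my own, and the printed values simply reproduce the constants reported later in the chapter).

import math

def solve_scaling_constant(ks, tol=1e-10):
    # Solve k + 1 = prod_i (k*k_i + 1) for the nonzero root k by bisection.
    # Sketch only: assumes 0 < k_i < 1; if sum(k_i) == 1 the additive form
    # applies and k = 0.
    def f(k):
        return math.prod(k * ki + 1 for ki in ks) - (k + 1)

    s = sum(ks)
    if abs(s - 1) < 1e-12:
        return 0.0
    if s > 1:                      # root lies in (-1, 0)
        lo, hi = -1.0 + 1e-12, -1e-12
    else:                          # root lies in (0, +inf); bracket it
        lo, hi = 1e-12, 1.0
        while f(hi) < 0:
            hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Reproduces the constants reported later in the chapter (up to rounding):
print(solve_scaling_constant([0.95, 0.7, 0.3, 0.5]))  # about -0.994021 (kCR)
print(solve_scaling_constant([0.8, 0.5, 0.5]))        # about -0.924816 (kLR)
print(solve_scaling_constant([0.1, 0.15, 0.2]))       # about  6.50721  (kR)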
ü In the multi-attribute utility function, the interaction between
the attributes is captured by the term (1-kX-kY)UX(x)UY(y).
How can we interpret this?

ü Keeney and Raiffa (1976) give an interesting interpretation
of the coefficient (1-kX-kY) in terms of its sign:

§ if (1-kX-kY) is positive, the two attributes complement each
other;

§ if (1-kX-kY) is negative, the two attributes are substitutes.



Example
Let's consider the following hierarchical structure: an overall rating R
decomposed into three criteria (IR, CR, and LR), each of which is further
decomposed into the sub-attributes listed in the table below.

Adopting the Keeney & Raiffa 5-point procedure, the table
below summarizes the best, the intermediate, and the worst
outcomes for each sub-attribute.

Criteria  Sub-attribute   Worst x_i,0   x_i,1/4   x_i,1/2   x_i,3/4   Best x_i,1
IR        GSR                   8           9        10        15          25
CR        RCCR                  9          12.5      15        20          50
          NPLCR                30          35        50        60         100
          NPLR                 10           6         4         3           0
          NPLLRE              100          20        10         7           0
LR        LADSF                25          30        35        40          50
          LDSF                100          85        70        60          40
          IBR                   0          40        75       100         150

[Figure: single-attribute utility curves (utility versus attribute level) for GSR, RCCR, NPLCR, NPLR, NPLLRE, LADSF, LDSF, and IBR.]

Using fitting software, we obtain the following single-attribute
utility functions.

Sub-attribute   SAUF
GSR      U(GSR) = 0.96407 - 8.99*EXP(-0.28231*GSR)
RCCR     U(RCCR) = 1.0217 - 2.8631*EXP(-0.11174*RCCR)
NPLCR    U(NPLCR) = 1.1026 - 2.9722*EXP(-0.03393*NPLCR)
NPLR     U(NPLR) = -1.3232 + 2.3502*EXP(-0.059264*NPLR)
NPLLRE   U(NPLLRE) = -0.012896 + 1.0403*EXP(-0.062763*NPLLRE)
LADSF    U(LADSF) = 1.6731 - 4.3073*EXP(-0.037504*LADSF)
LDSF     U(LDSF) = 4.9612 - 3.3831*EXP(0.0038465*LDSF)
IBR      U(IBR) = 8.95059*(1 - EXP(-0.000804646*IBR))

By solving the corresponding scaling equations, we obtain the
following scaling constants k.

Sub-attribute   ki     Scaling equation                              k
RCCR            0.95
NPLCR           0.7    x+1 = (0.95x+1)(0.7x+1)(0.3x+1)(0.5x+1)       kCR = -0.994021
NPLR            0.3
NPLLRE          0.5
LADSF           0.8
LDSF            0.5    x+1 = (0.8x+1)(0.5x+1)(0.5x+1)                kLR = -0.924816
IBR             0.5
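
As a quick numerical check (added here; values rounded), substituting kCR = -0.994021 into its scaling equation gives (0.95kCR+1)(0.7kCR+1)(0.3kCR+1)(0.5kCR+1) ≈ 0.0557 × 0.3042 × 0.7018 × 0.5030 ≈ 0.00598 ≈ kCR + 1, and similarly (0.8kLR+1)(0.5kLR+1)(0.5kLR+1) ≈ 0.0752 ≈ kLR + 1. Both groups have ∑ki > 1, which is consistent with the negative sign of k.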

The corresponding MAUFs are then given below.

Attribute   MAUF
IR          U(IR) = U(GSR) = 0.96407 - 8.99*EXP(-0.28231*GSR)
CR          U(CR) = [(0.95*kCR*U(RCCR)+1)(0.7*kCR*U(NPLCR)+1)(0.3*kCR*U(NPLR)+1)(0.5*kCR*U(NPLLRE)+1) - 1]/kCR
LR          U(LR) = [(0.8*kLR*U(LADSF)+1)(0.5*kLR*U(LDSF)+1)(0.5*kLR*U(IBR)+1) - 1]/kLR

Finally, the same procedure is applied at the criteria level, yielding the
overall MAUF U(R).

Attribute   ki     Scaling equation                      k
IR          0.1
CR          0.15   x+1 = (0.1x+1)(0.15x+1)(0.2x+1)       kR = 6.50721
LR          0.2

U(R) = [(0.1*kR*U(IR)+1)(0.15*kR*U(CR)+1)(0.2*kR*U(LR)+1) - 1]/kR
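
To see how the whole hierarchy fits together, here is a compact Python sketch (added for illustration, not part of the original slides). It reuses the fitted SAUFs and scaling constants reported above and evaluates the hierarchy at the mid-range levels x_i,1/2 of the 5-point table; the function and variable names are my own.

import math

E = math.exp

# Fitted single-attribute utility functions reported above.
SAUF = {
    "GSR":    lambda x: 0.96407 - 8.99 * E(-0.28231 * x),
    "RCCR":   lambda x: 1.0217 - 2.8631 * E(-0.11174 * x),
    "NPLCR":  lambda x: 1.1026 - 2.9722 * E(-0.03393 * x),
    "NPLR":   lambda x: -1.3232 + 2.3502 * E(-0.059264 * x),
    "NPLLRE": lambda x: -0.012896 + 1.0403 * E(-0.062763 * x),
    "LADSF":  lambda x: 1.6731 - 4.3073 * E(-0.037504 * x),
    "LDSF":   lambda x: 4.9612 - 3.3831 * E(0.0038465 * x),
    "IBR":    lambda x: 8.95059 * (1 - E(-0.000804646 * x)),
}

k_CR, k_LR, k_R = -0.994021, -0.924816, 6.50721

def mult_agg(k, weighted):
    # Multiplicative aggregation U = [prod_i (k*k_i*U_i + 1) - 1] / k,
    # where weighted is a list of (k_i, U_i) pairs.
    return (math.prod(k * ki * ui + 1 for ki, ui in weighted) - 1) / k

def overall_utility(levels):
    # levels maps each sub-attribute name to its observed level.
    u = {name: f(levels[name]) for name, f in SAUF.items()}
    U_IR = u["GSR"]
    U_CR = mult_agg(k_CR, [(0.95, u["RCCR"]), (0.7, u["NPLCR"]),
                           (0.3, u["NPLR"]), (0.5, u["NPLLRE"])])
    U_LR = mult_agg(k_LR, [(0.8, u["LADSF"]), (0.5, u["LDSF"]),
                           (0.5, u["IBR"])])
    return mult_agg(k_R, [(0.1, U_IR), (0.15, U_CR), (0.2, U_LR)])

# Evaluate at the mid-range levels x_i,1/2 of the 5-point table.
mid_levels = {"GSR": 10, "RCCR": 15, "NPLCR": 50, "NPLR": 4,
              "NPLLRE": 10, "LADSF": 35, "LDSF": 70, "IBR": 75}
print(overall_utility(mid_levels))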
