Fuzzy Optimization and Decision Making, 1, 379 – 397, 2002

© 2002 Kluwer Academic Publishers. Printed in The Netherlands.

Heavy OWA Operators


RONALD R. YAGER Yager@Panix.com
Machine Intelligence Institute, Iona College, New Rochelle, NY 10801

Abstract. We recall the OWA operator and discuss some of the features used to characterize these operators. In
passing we introduce a new characterizing attribute called the divergence. We then consider two cases of
information fusion and use these as motivation to generalize the OWA operator with the introduction of the
Heavy OWA operator. These HOWA operators differ from the ordinary OWA operators by relaxing the
constraints on the associated weighting vector. We consider some applications of these HOWA operators and
provide some examples of weighting vectors associated with these HOWA operators.

Keywords: aggregation, decision making, uncertainty

1. Introduction

The Ordered Weighted Averaging (OWA) operator (Yager (1988), Yager and Kacprzyk (1997)) provides a parameterized class of mean type aggregation operators which have been used in numerous applications (Fedrizzi et al. (1993), Bordogna and Pasi (1995), Bosc and Pivert (1995), Herrera et al. (1996), Yager (1996), Bouchon-Meunier and Rifqi (1997), Kacprzyk and Zadrozny (1997)). The parameterization is implemented by the choice of the weighting vector. In this work we extend the framework of the OWA averaging operators by introducing the Heavy OWA operator. As we shall see, these operators differ from the ordinary OWA operators by relaxing the constraints on the associated weighting vector. With these operators we can capture, in addition to mean operators, a large body of other aggregation operators.

2. Ordered Weighted Averaging Operators

An OWA operator of dimension $n$ is a mapping $F: \mathbb{R}^n \to \mathbb{R}$ characterized by an $n$-dimensional vector $W$, called the weighting vector, such that its components lie in the unit interval and sum to one: $w_j \in [0, 1]$ and $\sum_{j=1}^{n} w_j = 1$. The aggregation performed by this operator is defined as $F(a_1, \ldots, a_n) = W^T B$, where $B$, the ordered argument vector, is such that its $j$th component, $b_j$, is the $j$th largest of the arguments. We note that if index is a function such that index($j$) is the index of the $j$th largest of the arguments, then $b_j = a_{index(j)}$. Using this we can express the OWA aggregation operator as

$$F(a_1, \ldots, a_n) = \sum_{j=1}^{n} w_j\, a_{index(j)}.$$

At times, when we want to emphasize that we are using an OWA aggregation, we shall write $OWA(a_1, \ldots, a_n)$ instead of $F(a_1, \ldots, a_n)$. Also, when we want to emphasize the characterizing weighting vector, we shall use $F_W(a_1, \ldots, a_n)$ or $OWA_W(a_1, \ldots, a_n)$.
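To make the definition concrete, the following is a minimal Python sketch of the OWA aggregation; the function name and the tolerance check are our own conventions, not the paper's.

```python
def owa(weights, args):
    """OWA aggregation: dot product of the weighting vector W with the
    ordered argument vector B, where b_j is the jth largest argument."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to one"
    assert all(0.0 <= w <= 1.0 for w in weights)
    b = sorted(args, reverse=True)          # ordered argument vector B
    return sum(w * bj for w, bj in zip(weights, b))

estimates = [0.3, 0.9, 0.5]
print(owa([1, 0, 0], estimates))            # all weight on top position: max = 0.9
print(owa([0, 0, 1], estimates))            # all weight on bottom position: min = 0.3
print(owa([1/3, 1/3, 1/3], estimates))      # uniform weights: the mean
```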

As we noted above, the OWA operator is a mean or averaging operator. This is a reflection of the fact that the operator has the following properties:

1. Commutativity: any permutation of the arguments has the same evaluation.
2. Monotonicity: if $a_i \ge d_i$ for all $i$ then $F(a_1, \ldots, a_n) \ge F(d_1, \ldots, d_n)$.
3. Boundedness: $\min_i[a_i] \le F(a_1, \ldots, a_n) \le \max_i[a_i]$.

It is this third property that effectively makes this a mean operator. An important implication of this third property is the idempotency of the operator: if $a_i = a$ for all $i$, then $F(a_1, \ldots, a_n) = a$.
By choosing a different manifestation of the weighting vector we are able to obtain different types of aggregation operators. For $W^*$, where $w_1 = 1$ and $w_j = 0$ for all $j \ne 1$, we get $F(a_1, \ldots, a_n) = \max_j[a_j]$. For $W_*$, where $w_n = 1$ and $w_j = 0$ for all $j \ne n$, we get $F(a_1, \ldots, a_n) = \min_j[a_j]$, and for $W_{Ave}$, where $w_j = 1/n$ for all $j$, we get $F(a_1, \ldots, a_n) = \frac{1}{n}\sum_{j=1}^{n} a_j$. In Yager (1993) we describe many other examples of OWA aggregation operators. One example is where $w_k = 1$ and $w_j = 0$ for $j \ne k$; here $F(a_1, \ldots, a_n) = a_{index(k)}$, the value of the $k$th largest of the arguments. We denote this weighting vector as $W^{[k]}$. Another OWA operator has $w_1 = \lambda$ and $w_n = 1 - \lambda$. This case corresponds to a Hurwicz-Arrow type aggregation: $F(a_1, \ldots, a_n) = \lambda \max_i[a_i] + (1 - \lambda)\min_i[a_i]$.
What is notable about the OWA aggregation is that the lower indexed weights, those at the top of $W$, are associated with the larger valued elements being aggregated. Thus we see that the more of the weight assigned near the top of the weighting vector, the more Max-like the aggregation. On the other hand, the components of $W$ with higher index are associated with the smaller valued elements being aggregated.
In Yager (1988) we introduced two measures for characterizing a weighting vector and the type of aggregation it performs. The first measure, $\alpha(W)$, the attitudinal character, is defined as

$$\alpha(W) = \sum_{j=1}^{n} \frac{n-j}{n-1}\, w_j.$$

It can be shown that $\alpha \in [0, 1]$. The more of the weight located near the top of $W$, associated with low indexed $w_j$'s, the closer $\alpha$ is to 1; the more of the weight located toward the bottom, the closer $\alpha$ is to 0. We note that $\alpha(W^*) = 1$, $\alpha(W_{Ave}) = 0.5$ and $\alpha(W_*) = 0$.
The second measure introduced in Yager (1988) is called the entropy or dispersion of $W$. It is defined as

$$H(W) = -\sum_{j=1}^{n} w_j \ln(w_j).$$

This can be used to provide a measure of the information being used. That is, if $w_j = 1$ for some $j$, then $H(W) = 0$ and the least amount of information is used.
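For illustration, both measures can be computed directly from these definitions; a small sketch (function names are ours):

```python
import math

def attitudinal_character(weights):
    """alpha(W) = sum_j ((n-j)/(n-1)) * w_j, with j running 1..n."""
    n = len(weights)
    return sum((n - j) / (n - 1) * w for j, w in enumerate(weights, start=1))

def entropy(weights):
    """H(W) = -sum_j w_j ln(w_j), with the convention 0 * ln(0) = 0."""
    return -sum(w * math.log(w) for w in weights if w > 0)

print(attitudinal_character([1, 0, 0]))   # W* -> 1.0
print(attitudinal_character([0, 0, 1]))   # W_* -> 0.0
print(entropy([1/3, 1/3, 1/3]))           # ln(3), the maximum for n = 3
```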
While in general many different weighting vectors can induce a given attitudinal value $\alpha$, some special exceptions exist. For example, the value $\alpha = 1$ can only be obtained by the weighting vector $W^*$. Similarly, the value $\alpha = 0$ can only be obtained by the weighting vector $W_*$. On the other hand, the case of $\alpha = 0.5$ provides an antithesis to these: it is a value of $\alpha$ that can be obtained by many different manifestations of $W$. Specifically, any symmetric weighting vector, one in which $w_j = w_{n-j+1}$, has $\alpha = 0.5$. We note that if $n$ is odd then $W^{[k]}$ with $k = \frac{n+1}{2}$ has $\alpha = 0.5$; this is actually the median. Of all the possible ways of obtaining $\alpha = 0.5$, the vector $W_{Ave}$ has a unique position: it is the one that maximizes $H(W)$.

It should be noted that $\alpha(W) = \sum_{j=1}^{n} \frac{n-j}{n-1}\, w_j = W^T B$ can itself be seen as an OWA aggregation in which $W$ has components $w_j$ and $B$ has components $b_j = \frac{n-j}{n-1}$.
In selecting the weighting vector $W$ we are deciding on two features of the resulting aggregation. The first lies along a scale with a dimension of Maxness/Minness, the $\alpha$ value. Here we are determining how much preference we want to give to the bigger or smaller values in the aggregation: $\alpha = 1$ indicates emphasis on the bigger values, $\alpha = 0$ indicates emphasis on the smaller values and $\alpha = 0.5$ indicates no special emphasis on either of the extremes. The second feature we are considering is the amount of filtering or redundancy. This feature is measured by $H(W)$: the larger $H(W)$, the more redundancy or filtering. In the case of $\alpha = 0.5$ this choice becomes clearest. When we select $w_j = 1/n$, $H(W) = \ln(n)$, we are introducing the most redundancy and filtering. On the other hand, when we select the median we get the least redundancy or smoothing; here we have one element with non-zero weight, in this case $H(W) = 0$.
There appears to be a need for another characterizing feature associated with the OWA operators. Here we shall only introduce and briefly discuss this new characterizing feature. Consider the situation in which $n = 7$ and we have two weighting vectors $\hat{W}$ and $\tilde{W}$. For $\hat{W}$ we have $\hat{w}_1 = 1/2$ and $\hat{w}_n = 1/2$, and for $\tilde{W}$ we have $\tilde{w}_3 = 1/2$ and $\tilde{w}_5 = 1/2$. Here $\alpha(\hat{W}) = \alpha(\tilde{W}) = 0.5$ and $H(\hat{W}) = H(\tilde{W}) = \ln(2)$. Thus the two characteristics cannot distinguish between these two weighting vectors, yet it is clear they are different.
In order to capture this distinction we shall define a new characterizing feature,

$$Div(W) = \sum_{j=1}^{n} w_j \left( \frac{n-j}{n-1} - \alpha(W) \right)^2,$$

and call it the divergence of $W$. Let us look at this. We first consider the motivating weighting vectors $\hat{W}$ and $\tilde{W}$:

$$Div(\hat{W}) = \frac{1}{2}\left(\frac{n-1}{n-1} - 0.5\right)^2 + \frac{1}{2}\left(\frac{0}{n-1} - 0.5\right)^2 = \frac{1}{2}(0.5)^2 + \frac{1}{2}(0.5)^2 = 0.25$$

$$Div(\tilde{W}) = \frac{1}{2}\left(\frac{7-3}{6} - 0.5\right)^2 + \frac{1}{2}\left(\frac{7-5}{6} - 0.5\right)^2 = \frac{1}{2}\left(\frac{4}{6} - \frac{3}{6}\right)^2 + \frac{1}{2}\left(\frac{2}{6} - \frac{3}{6}\right)^2 = \left(\frac{1}{6}\right)^2 = 0.027.$$

Consider now the simple average, where $w_j = 1/n$. Since this has $\alpha(W) = 0.5$ we get

$$Div(W) = \frac{1}{n}\sum_{j=1}^{n}\left(\frac{n-j}{n-1} - \frac{1}{2}\right)^2 = \frac{1}{12}\,\frac{n+1}{n-1}.$$

Consider now any weighting vector that has a pair of symmetric weights, $w_j = 0.5$ and $w_{n+1-j} = 0.5$. In this case, since $\alpha(W) = 0.5$, we get $Div(W) = \left(\frac{n-j}{n-1} - 0.5\right)^2$. Thus we see that for all $n$ this monotonically decreases as $j$ moves towards the middle. Another case of interest is $W^{[k]}$, where $w_k = 1$ and $w_j = 0$ for all others; for this case $Div(W^{[k]}) = 0$.

While at this point we shall not further pursue our study of this characterizing measure, we make one observation: if $W$ is a weighting vector with $\alpha(W) = \alpha$, then the more the weights move away from the component with index $n(1 - \alpha) + \alpha$, the more $Div(W)$ increases.
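A short sketch (our own naming) confirms the two motivating values computed above, computing the attitudinal character inline:

```python
def divergence(weights):
    """Div(W) = sum_j w_j * ((n-j)/(n-1) - alpha(W))**2."""
    n = len(weights)
    alpha = sum((n - j) / (n - 1) * w for j, w in enumerate(weights, start=1))
    return sum(w * ((n - j) / (n - 1) - alpha) ** 2
               for j, w in enumerate(weights, start=1))

w_hat   = [0.5, 0, 0, 0, 0, 0, 0.5]   # weight split between the two extremes
w_tilde = [0, 0, 0.5, 0, 0.5, 0, 0]   # weight split between positions 3 and 5
print(divergence(w_hat))    # 0.25
print(divergence(w_tilde))  # (1/6)**2, approximately 0.0278
```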

3. Heavy OWA Operators

Depending upon the application, various different interpretations and semantics can be associated with the OWA operator and the related weights. For our purposes we initially consider the following application. Assume we are interested in the value of some variable and we have $n$ experts, each of which provides an estimate $a_j$ of the value of this variable. The OWA operator can provide a means of aggregating (fusing) these individual estimates. Using the OWA operator our fused estimate is $\bar{a} = F(a_1, \ldots, a_n) = W^T B = \sum_{j=1}^{n} w_j b_j$. By selecting different $w_j$ we implement different methods of fusing the observations. For example, with $w_1 = 1$ we take the largest of the estimates; if $w_j = 1/n$ we take the average.
As a more tangible motivating example we consider the following problem. We have some area and we are interested in determining the number of enemy units in this space. Here each expert (sensor) provides an estimate of this quantity. We combine these estimates to obtain a fused value. In this case the choice of weighting vector $W$ reflects some attitudinal aspect of the aggregation. For example, using $W^*$, where $w_1 = 1$, provides an estimate in which we take the largest possible number of forces; we call this the maximizing estimate. The choice $W = W_*$ takes the minimal value as the aggregation value; this is a minimizing estimate.
We now look at a slight modification of the preceding example. Again consider the problem of trying to estimate the number of enemy forces within a given space. Here again we assume we use $n$ experts to aid us in the task. However, here, rather than having each expert supply us with an estimate of the total number of enemy forces, we partition the space into $n$ disjoint regions and ask each expert to provide an estimate of the number of forces in their assigned region (see Figure 1).

Figure 1. Partitioning of Space.

In this case, with $a_j$ indicating the estimate provided by expert $j$ for its region, our overall estimated value is $\bar{a} = \sum_{j=1}^{n} a_j = Total[a_1, \ldots, a_n]$. In this case, rather than using a mean aggregation of the experts' values, we use a totaling of the experts' values.
The basic distinguishing feature between these two situations, which resulted in the different aggregations, is that in the first case the experts were providing redundant or overlapping information, while in the second case each expert is providing distinct non-redundant information. In the first case we have some freedom as to how we use the information, while in the second case we must use all the information, as the pieces are independent.
Before proceeding we shall describe another situation leading to a totaling, which can help further provide intuition. Again assume we are interested in the number of objects residing in a space; for visual simplicity we assume this to be a line. In order to uncover these objects we shall use $n$ sound-sensing sensors; that is, these sensors locate the objects by the sound the objects emit. Further we assume that, while each of the sensors surveys the whole space, each has a distinct frequency range in which it can detect objects. We schematically illustrate the situation in Figure 2.

Figure 2. Partitioning by Frequencies.

Thus here we have a frequency partitioning of the space of objects, each sensor performing within a given frequency range. Here again we would take the total of the sensor estimates as our count of objects. The important point here is that the totaling type operation can arise from a partitioning based on features other than a spatial partition.

In either of these last two examples we obtain the desired count of objects using a totaling type aggregation. Consider now the totaling type aggregation, $Total(a_1, \ldots, a_n) = \sum_{j=1}^{n} a_j$. We note that we can express this in a way that resembles the OWA aggregation.
We let

$$Total(a_1, \ldots, a_n) = W^T B$$

in which $B$ is the ordered argument vector, $b_j = a_{index(j)}$, and $W$ is an $n$-dimensional vector such that $w_j = 1$ for all $j$. This observation inspires us to consider a new class of aggregation operators which unifies the OWA and the totaling operator.

Figure 3. β Scale.

Definition. The Heavy Ordered Weighted Averaging (HOWA) aggregation operator of dimension $n$ is a mapping $H: \mathbb{R}^n \to \mathbb{R}$ such that

$$H(a_1, \ldots, a_n) = W^T B = \sum_{j=1}^{n} w_j\, a_{index(j)}$$

in which $B$ is the ordered argument vector and $W$ is a weighting vector such that

1. $0 \le w_j \le 1$, and
2. $1 \le \sum_{j=1}^{n} w_j \le n$.

Here, while we still restrict each of the weights to lie in the unit interval, we allow the sum of the weights to be in $[1, n]$ instead of being restricted to sum to one. We see that if $W_T$ is a weighting vector such that $w_j = 1$ for all $j$, then with $W = W_T$ we get $H(a_1, \ldots, a_n) = Total(a_1, \ldots, a_n)$. If $W$ is such that $\sum_{j=1}^{n} w_j = 1$ then $H(a_1, \ldots, a_n) = OWA_W(a_1, \ldots, a_n)$; it is an OWA aggregation characterized by $W$. What of course becomes interesting here is the introduction of a new class of aggregation types lying between these two extremes, where $1 < \sum_{j=1}^{n} w_j < n$.
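A minimal sketch of the HOWA aggregation, assuming the same descending-sort convention as the OWA sketch earlier; only the constraint on the weight sum changes:

```python
def howa(weights, args):
    """Heavy OWA: like OWA, but the weights may sum to anything in [1, n]."""
    n, total = len(weights), sum(weights)
    assert all(0.0 <= w <= 1.0 for w in weights)
    assert 1.0 - 1e-9 <= total <= n + 1e-9
    b = sorted(args, reverse=True)          # ordered argument vector B
    return sum(w * bj for w, bj in zip(weights, b))

estimates = [12, 7, 9]
print(howa([1, 1, 1], estimates))           # |W| = n: the totaling operator, 28
print(howa([1/3, 1/3, 1/3], estimates))     # |W| = 1: an ordinary OWA average
print(howa([1, 0.5, 0], estimates))         # |W| = 1.5: between the two extremes
```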

4. Characterizing HOWA Operators

An important distinguishing feature of the different members of the class of HOWA aggregation operators is the sum of the elements in the weighting vector $W$. We shall denote this sum as $|W| = \sum_{j=1}^{n} w_j$ and call it the magnitude of $W$. In order to normalize this feature of $W$ we shall introduce a characterizing parameter called the beta value of the vector $W$. We define this as $\beta(W) = \frac{|W| - 1}{n - 1}$. Since $|W| \in [1, n]$ we have $\beta \in [0, 1]$. For the pure totaling aggregation, $W = W_T$, we have $|W| = n$ and hence $\beta = 1$. On the other hand, for a $W$ associated with an ordinary OWA aggregation we have $|W| = 1$ and hence $\beta = 0$. This provides a scale as shown in Figure 3.
We see that $\beta$ can be viewed as a kind of degree of totaling. Given a value for $\beta$ and a dimension $n$ of the vector $W$ we can obtain $|W| = \beta n + (1 - \beta)$.
In some situations we may prefer to look at the negation of $\beta$. We shall define $U = 1 - \beta$; it is clear $U$ also lies in the unit interval. We see that for the pure total operator we get $U = 0$ and for the OWA operator $U = 1$. Using $\beta = \frac{|W| - 1}{n - 1}$ and $U = 1 - \beta$ we get $U = \frac{n - |W|}{n - 1}$. We can express this as $|W| = n - U(n - 1)$.
Let us try to associate some semantics with this $U$ parameter. The fact that the OWA type aggregation and the totaling type aggregation provide the extremes of this scale can help in our understanding of this parameter. Let us look at these two different aggregations. The OWA aggregation is a mean type operator. As we noted, this operation is used in situations in which we have repeated or redundant readings of the same variable. In this case we have complete freedom or autonomy in selecting how and which of the values we want to aggregate. On the other hand, in the case of the totaling operator, $U = 0$, as illustrated by our example, we have no redundancy in the information: all the data that are arguments in the aggregation reflect different variables. Here we must use all the data. This lack of freedom is clearly reflected in the fact that while generally for any $W$ for which $|W| < n$ we have some freedom in assigning the weights, in the case where $|W| = n$ no freedom exists; we must select $w_j = 1$ for all $j$. Thus it appears that the parameter $U$ can be interpreted as the freedom or redundancy in the information being aggregated.
Figure 4. Two expert readings.

Let us consider a simple case of aggregating two arguments to get some further intuition. Again consider a space in which we are trying to determine the number of objects it contains. Two experts (sensors) are being used to supply this information (see Figure 4). In case I we partition the space into two disjoint non-overlapping regions and let each expert count the number of objects in their region. This leads to the totaling aggregation, where $W = (1, 1)^T$ and $U = 0$. In the second situation we have each expert counting the whole space. Here the data is redundant and we use an OWA aggregation, $W = (a, 1-a)^T$ with $U = 1$. In this case we must select $a$; this selection is a reflection of our attitude in emphasizing large or small data values.
Consider now the case illustrated in Figure 5. Here again each expert provides a reading, $a_1$ and $a_2$; however, the partitioning is not as extreme as in the preceding. While expert one solely counts the objects in the range 0 to $r_1$ and expert two solely counts the objects in the range $r_2$ to 1, in the range from $r_1$ to $r_2$ the objects are counted by both experts. Here then we have some partial redundancy. In particular, the degree of redundancy here is $r = r_2 - r_1$. Using this as our degree of redundancy $U$ we get $|W| = n - (n-1)r$. With $n = 2$ we have $|W| = 2 - r$. If, for example, $r_1 = 1/4$ and $r_2 = 3/4$, then $r = 0.5$ and hence $|W| = 1.5$.

Figure 5. Partial Overlapping.

In this case $W = (w_1, w_2)^T$ is such that $w_1 + w_2 = 1.5$. Thus the partial redundancy in this situation affects the formulation of the weighting vector $W$, whose total weight is 1.5. The choice of how we distribute this weight between the two arguments becomes a reflection of the attitude we want to assume, whether we want to give more preference to the larger values or the smaller values. Thus, for example, the choice $W = (1, 0.5)^T$ would indicate a strong preference for the bigger values; on the other hand, the choice $W = (0.5, 1)^T$ would reflect the opposite. A choice of $W = (0.75, 0.75)^T$ would reflect neutrality on this issue.
As we have indicated, the magnitude $|W|$ of $W$ and the associated parameter $U$ are related to the amount of freedom we have in allocating the weights in the weighting vector; $U = 0$ indicates no freedom, $w_j = 1$ for all $j$. In cases in which we have some freedom in allocating the weights, one consideration must be our preference in emphasizing the larger or smaller arguments in the aggregation. In the case of the OWA operator this consideration was captured by the attitudinal character of the weighting vector, $\alpha(W)$. We shall now provide an extension of this characterizing feature to the case of the HOWA operator. Let $W$ be a weighting vector of dimension $n$ with magnitude $|W|$; then we define

$$\alpha(W) = \frac{1}{|W|} \sum_{j=1}^{n} \frac{n-j}{n-1}\, w_j.$$

It is easy to show that $\alpha(W) \in [0, 1]$. We can also extend the measure of entropy or dispersion to this heavy OWA situation. Here we define

$$Dis(W) = -\frac{1}{|W|} \sum_{j=1}^{n} w_j \ln\left(\frac{w_j}{|W|}\right).$$

If we let $v_j = \frac{w_j}{|W|}$, the proportion of the weight allocated to the $j$th position, then $\alpha(W) = \sum_{j=1}^{n} \frac{n-j}{n-1}\, v_j$ and $Dis(W) = -\sum_{j=1}^{n} v_j \ln(v_j)$.
HEAVY OWA OPERATORS 387

We see that when $|W| = 1$ these reduce to the usual definitions. Furthermore, we note that when we use the totaling operator, $|W| = n$, we must have $w_j = 1$ for all $j$ and hence

$$\alpha(W) = \frac{1}{n}\sum_{j=1}^{n} \frac{n-j}{n-1} = 0.5 \quad \text{and} \quad Dis(W) = \ln n.$$

Thus, as we would expect, totaling is neutral with respect to selecting between the arguments, as it must use all arguments in the aggregation.
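Both extended measures reduce to ordinary computations on the proportions $v_j = w_j/|W|$; a sketch under that observation (function names ours):

```python
import math

def howa_alpha(weights):
    """Extended attitudinal character: (1/|W|) sum_j ((n-j)/(n-1)) w_j."""
    n, mag = len(weights), sum(weights)
    return sum((n - j) / (n - 1) * w for j, w in enumerate(weights, start=1)) / mag

def howa_dispersion(weights):
    """Extended dispersion: Dis(W) = -sum_j v_j ln(v_j), with v_j = w_j / |W|."""
    mag = sum(weights)
    return -sum((w / mag) * math.log(w / mag) for w in weights if w > 0)

print(howa_alpha([1, 1, 1, 1]))        # totaling vector W_T: 0.5
print(howa_dispersion([1, 1, 1, 1]))   # ln(4)
```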

5. Decision Making Under Uncertainty

Decision Making Under Uncertainty (DMUU) constitutes another situation in which the OWA aggregation is used (Yager (1992)). Typically in DMUU we must choose from among a collection of alternative actions. Here an alternative $A$ consists of an $n$-tuple, $(v_1, \ldots, v_n)$, of possible payoffs, one of which will be obtained as a result of the selection of this alternative. However, the actual payoff to be obtained is unknown at the time the choice of alternative must be made. In order to select one of the alternatives we must compare the tuples corresponding to the different alternatives and select the ''best''. One approach to comparing alternatives is to aggregate all the payoffs associated with an alternative to provide a single representative value. We then select the alternative with the largest of these representative values. The process of aggregating the payoffs to obtain the representative value depends upon the decision maker's attitude as well as any available information about the probability or possibility of the different outcomes. When no information is available distinguishing the probabilities of the outcomes, a special case of DMUU called DM under ignorance, the determination of the representative value of an alternative is based solely on the decision maker's attitude. Examples (Luce and Raiffa (1967)) of representative values in this case are $\max_i[v_i]$, $\min_i[v_i]$ and $\sum_{i=1}^{n} v_i$. The first is seen as reflecting an optimistic attitude, the second a pessimistic one and the third a neutral one. In Yager (1992) a general procedure unifying these different attitudes using the OWA operator was introduced. Under this method, with $(v_1, \ldots, v_n)$ being the possible payoffs associated with an alternative, $OWA(v_1, \ldots, v_n)$ is used to obtain the representative value for this alternative. In this approach the selection of the OWA weighting vector $W$ is a reflection of the decision maker's attitude. In this application of the OWA operator the $\alpha$ value of $W$ is a measure of the decision maker's degree of optimism. The pure optimist selects $W^*$, which has $w_1 = 1$ and $w_j = 0$ for $j \ne 1$; in this case $\alpha = 1$. Using $W^*$ we evaluate an alternative by its best possible payoff, $OWA(v_1, \ldots, v_n) = \max_i[v_i]$. A pessimist selects $W_*$, where $w_n = 1$; in this case $\alpha = 0$ and the evaluation of an alternative is its worst value, $OWA(v_1, \ldots, v_n) = \min_i[v_i]$. The important observation is that $W$ and the related $\alpha(W)$ provide an indication of how the decision maker adjudicates the uncertainty: the closer $\alpha(W)$ is to one, the more optimistic.

Figure 6. Payoff Matrix.

We shall now look at a modification of this problem of decision making under ignorance which introduces a role for the new HOWA operator. As a grounding example we shall consider the situation depicted in Figure 6.

Assume you are a manager and you must select an alternative action, one of the $A_i$. In Figure 6 we let $X = \{x_1, x_2, \ldots, x_n\}$ be a pool of workers, one of which will be assigned to you, but you don't know which one. The value $c_{ij}$ is the amount you can earn if you select alternative $A_i$ and employee $x_j$ is assigned to you. For example, the $x_j$ may be salespeople and the $A_i$ may be different product lines. You must decide on the alternative before the decision on the employee is made. Here then the alternative $A_i$ looks like an $n$-tuple: $(c_{i1}, c_{i2}, \ldots, c_{in})$. In this example of DM under ignorance, as previously noted, we can use the OWA operator, $OWA(c_{i1}, c_{i2}, \ldots, c_{in}) = V(A_i)$, to provide a representative value for an alternative. We then select the alternative with the largest representative value.
We now consider a variation of the preceding situation. Again assume the $A_i$ are a collection of alternatives from among which one must be chosen, $X = \{x_1, \ldots, x_n\}$ is a collection of employees and $c_{ij}$ is the gain if we select alternative $A_i$ and we receive employee $x_j$ to work for us. Here, however, instead of getting just one of the employees from the pool $X$ we get all the employees. Here the valuation of each of the alternatives is the sum of the elements in its row:

$$V(A_i) = \sum_{j=1}^{n} c_{ij} = Total(c_{i1}, c_{i2}, \ldots, c_{in}).$$

We note that in this case there exists no uncertainty: the payoff for choosing alternative $A_i$ will be equal to $V(A_i)$. We can represent this valuation as an HOWA aggregation, $V(A_i) = H_W(c_{i1}, c_{i2}, \ldots, c_{in})$, where $W$ is a weighting vector with $|W| = n$, $w_i = 1$ for all components.
These two examples provide inspiration to consider a more generalized situation in which the HOWA operator can play a central role. Again assume the basic situation described in the preceding; however, here the manager shall get $q$ of the employees assigned to him. With $q = 4$, if the manager selected $A_i$ and he got employees $x_1, x_3, x_4, x_6$, then his payoff is $c_{i1} + c_{i3} + c_{i4} + c_{i6}$; we assume the gains add. However, in our problem, while the manager knows he will get $q$ employees, he does not know which $q$ employees will be assigned to him. Here, in this situation where the manager will get $q$ employees, the HOWA aggregation operator can provide a tool for obtaining a representative value for each of the alternatives.
Consider alternative $A_i$. In this case, where the decision maker will get $q$ employees, the representative value of this alternative is $V(A_i) = H(c_{i1}, c_{i2}, \ldots, c_{in})$, where $H$ is an HOWA aggregation operator with a weighting vector $W$ having dimension $n$ and magnitude $|W| = q$. The actual choice of the weighting vector $W$ is a reflection of the decision maker's attitude, his degree of optimism, in the face of uncertainty. For example, the pure optimist will assume that the $q$ employees assigned to him are always going to be those with the largest payoffs; in this case he would have $w_j = 1$ for $j \le q$ and $w_j = 0$ for $j > q$. The pure pessimist will assume that the $q$ employees assigned to him are always going to be those with the worst payoffs; in this case we have $w_j = 0$ for $j \le n - q$ and $w_j = 1$ for $j > n - q$.
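To illustrate, the following sketch values one row of a payoff matrix under the optimistic and pessimistic weight choices just described; the payoff numbers are invented for the example:

```python
def howa(weights, args):
    # Heavy OWA aggregation over the descending-sorted arguments.
    b = sorted(args, reverse=True)
    return sum(w * bj for w, bj in zip(weights, b))

payoffs = [40, 10, 25, 60, 15, 30]    # hypothetical row c_i1 .. c_i6
n, q = len(payoffs), 2                # q = 2 of the n employees will be assigned
optimist  = [1] * q + [0] * (n - q)   # weight on the q largest payoffs
pessimist = [0] * (n - q) + [1] * q   # weight on the q smallest payoffs
print(howa(optimist, payoffs))        # 60 + 40 = 100
print(howa(pessimist, payoffs))       # 10 + 15 = 25
```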
We see from the preceding that this decision making situation provides a prototypical situation in which we find application for these HOWA operators. An important feature of this decision making situation is the multiplicity of solutions for the uncertain variable; in particular, the value $q$ was an indication of how many solutions. Thus this is an example of a decision making situation in which we are ignorant as to how the extraneous variable will be selected, other than the fact that it will have $q$ different values.

6. On the Relationship of α(W) and β(W)

In the preceding we have seen some situations illustrating the possible applications of the HOWA operator. In this section we shall try to get some understanding of the process of selecting a particular weighting vector. One factor determining the choice of $W$ is the magnitude of the weighting vector, denoted $|W|$, or the related measure $\beta$. A second important characterizing feature of a weighting vector is the attitudinal character $\alpha$. It is useful to note that the interpretation and semantics associated with these characterizing values are closely related to the domain of application. As more applications of these operators become apparent, different semantics will arise. In the following, for the most part, we shall draw upon the domain of decision making under uncertainty to provide a semantics.

In the following we assume $W$ is an $n$-dimensional vector. One feature associated with this vector is $|W| = \sum_{j=1}^{n} w_j$. We also recall there exists a unique relationship between $\beta(W)$ and $|W|$ for a given $n$: $\beta = \frac{|W| - 1}{n - 1}$, $|W| = \beta n + (1 - \beta)$. We note $\beta \in [0, 1]$. We shall refer to both $|W|$ and $\beta$ as the magnitude of the vector $W$.

As we have already noted, the magnitude affects our freedom with respect to allocating the weights in the vector $W$. We noted that if $\beta = 1$ then $|W| = n$; in this case we only have one possible allocation for the vector $W$, $w_j = 1$ for all $j$. In the case where $\beta < 1$ there exists some freedom in allocating the weight. We refer to $U = 1 - \beta$ as the degree of freedom associated with $W$. In allocating the weights in cases where there exists some freedom, an important role can be played by the attitudinal character $\alpha$. We recall that we defined $\alpha$ in the case of these HOWA operators as

$$\alpha(W) = \frac{1}{|W|} \sum_{j=1}^{n} \frac{n-j}{n-1}\, w_j.$$

Here $\alpha \in [0, 1]$, where $\alpha \to 1$ means emphasis is placed on the larger arguments in the aggregation process, $\alpha \to 0$ means emphasis is placed on the smaller arguments and $\alpha = 1/2$ means no special emphasis. In the framework of decision making under uncertainty, one semantics associated with $\alpha$ relates to the decision maker's attitude regarding his optimism ($\alpha = 1$) versus pessimism ($\alpha = 0$).

As we shall see, an increase in $\beta$ results in a reduction of freedom, manifested by a reduction in the allowable range of values for $\alpha$. If for a given $\beta$ we let $[L(\beta), U(\beta)]$ indicate the range from which we can select $\alpha$, then as $\beta$ increases there is a reduction in this range. In the case where $\beta = 1$, $|W| = n$, which as noted has only one manifestation, $w_j = 1$ for all $j$, the only attainable value for $\alpha$ is 0.5.
Before proceeding we shall provide a representation of $\alpha(W)$ that will be useful. We recall

$$\alpha(W) = \frac{1}{|W|}\sum_{j=1}^{n}\frac{n-j}{n-1}\, w_j = \frac{1}{|W|}\,\frac{1}{n-1}\sum_{j=1}^{n}(n-j)\,w_j = \frac{1}{|W|}\,\frac{1}{n-1}\left(n\sum_{j=1}^{n} w_j - \sum_{j=1}^{n} j\, w_j\right)$$

$$\alpha(W) = \frac{1}{|W|}\,\frac{1}{n-1}\,n\,|W| - \frac{1}{|W|}\,\frac{1}{n-1}\sum_{j=1}^{n} j\, w_j = \frac{n}{n-1} - \frac{1}{|W|}\,\frac{1}{n-1}\sum_{j=1}^{n} j\, w_j.$$

7. Examples of HOWA Weighting Vectors

Let us now consider some notable examples of weighting vector allocations. The first one we consider is what we call ''the push up'' allocation, which we shall denote as $W_{pu}$. Here we desire to emphasize the larger values in the aggregate and thus we try to allocate as much of the available magnitude as possible to the elements at the top of the weighting vector $W$. Assuming some $\beta$, we have $|W| = \beta n + (1 - \beta)$. In this $W_{pu}$ we allocate the weights using the following algorithm:

$$w_j = (1 \wedge (|W| - (j-1))) \vee 0.$$
Using this we allocate one to each of the top elements until we run out of weight. For example, if $n = 8$ and $|W| = 4.3$ then we get

$$w_1 = 1,\quad w_2 = 1,\quad w_3 = 1,\quad w_4 = 1,\quad w_5 = 0.3,\quad w_6 = w_7 = w_8 = 0.$$
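The allocation formula is simply a clamp of $|W| - (j-1)$ to the unit interval; a sketch reproducing the $n = 8$, $|W| = 4.3$ example (function name ours):

```python
def push_up(n, magnitude):
    """w_j = max(0, min(1, |W| - (j-1))): fill ones from the top of W
    until the available magnitude is exhausted."""
    return [max(0.0, min(1.0, magnitude - (j - 1))) for j in range(1, n + 1)]

print(push_up(8, 4.3))  # approximately [1, 1, 1, 1, 0.3, 0, 0, 0]
```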

An important feature of this allocation method is noted in the following.

THEOREM. If $W_{pu}$ and $W$ are two weighting vectors of dimension $n$ having the same magnitude $\beta$, and $W_{pu}$ is the push up allocation, then $\alpha(W_{pu}) \ge \alpha(W)$.

Proof: Let $\hat{w}_j$ and $w_j$ be the weights in $W_{pu}$ and $W$ respectively, and let $q = |W|$, which we take to be an integer.

$$\alpha(W_{pu}) = \frac{n}{n-1} - \frac{1}{|W|}\,\frac{1}{n-1}\sum_{j=1}^{n} j\,\hat{w}_j$$

$$\alpha(W) = \frac{n}{n-1} - \frac{1}{|W|}\,\frac{1}{n-1}\sum_{j=1}^{n} j\, w_j$$

$$\alpha(W_{pu}) - \alpha(W) = \frac{1}{|W|(n-1)}\left(\sum_{j=1}^{n} j\, w_j - \sum_{j=1}^{n} j\,\hat{w}_j\right).$$

Since $W_{pu}$ has $\hat{w}_j = 1$ for $j = 1$ to $q$ and zero elsewhere, we have

$$\alpha(W_{pu}) - \alpha(W) = \frac{1}{|W|(n-1)}\left(\sum_{j=1}^{q} j\,(w_j - 1) + \sum_{j=q+1}^{n} j\, w_j\right)$$

$$\sum_{j=1}^{q} j\,(w_j - 1) + \sum_{j=q+1}^{n} j\, w_j = \sum_{j=q+1}^{n} j\, w_j - \sum_{j=1}^{q} j\,(1 - w_j) \ge (q+1)\sum_{j=q+1}^{n} w_j - q\sum_{j=1}^{q}(1 - w_j).$$

If $\Delta = \sum_{j=1}^{q} w_j$ then $\sum_{j=q+1}^{n} w_j = q - \Delta$. Using this,

$$(q+1)\sum_{j=q+1}^{n} w_j - q\sum_{j=1}^{q}(1 - w_j) = (q+1)(q - \Delta) - q(q - \Delta) = q - \Delta \ge 0,$$

hence $\alpha(W_{pu}) - \alpha(W) \ge 0$.


The implication here is that for any given magnitude the push-up allocation of the weights has the largest $\alpha$. There are two special cases of $W_{pu}$ worth noting. In the case where $\beta = 0$, the pure OWA case, $W_{pu} = W^*$: $w_1 = 1$ and $w_j = 0$ for all $j \ne 1$. In the case where $\beta = 1$, $W_{pu} = W_T$. We note that for the case $\beta = 0$, where we get $W^*$, we have $\alpha(W_{pu}) = \alpha(W^*) = 1$, while for the case $\beta = 1$, $\alpha(W_{pu}) = \alpha(W_T) = 0.5$.

It is interesting to see how $\alpha(W_{pu})$ is affected by $\beta$ for a fixed $n$. As the following result shows, increasing $\beta$ tends to decrease $\alpha(W_{pu})$.

THEOREM. Assume $W_1$ and $W_2$ are two push-up weighting vectors of dimension $n$, with magnitudes $\beta_1$ and $\beta_2$. If $\beta_1 > \beta_2$ then $\alpha(W_1) \le \alpha(W_2)$.

Proof: First we recall that for any $W$ we have $\alpha(W) = \frac{n}{n-1} - \frac{1}{n-1}\,\frac{1}{|W|}\sum_{j=1}^{n} j\, w_j$. Thus we see that for fixed $n$ the weighting vectors are ordered with respect to the term $d = \frac{1}{|W|}\sum_{j=1}^{n} j\, w_j$: the larger this term, the smaller $\alpha(W)$. First consider the case where $|W| = q$, some integer, $1 \le q \le n$. In this case for the push-up type $w_j = 1$ for $j = 1$ to $q$ and zero elsewhere. Here then

$$d = \frac{1}{q}\sum_{j=1}^{q} j = \frac{1}{q}\,\frac{1}{2}\,q(q+1) = \frac{q+1}{2}.$$

Thus we see that $d$ increases as $q$ increases. If $\beta_1 > \beta_2$ then $q_1 > q_2$, therefore $d_1 > d_2$ and hence $\alpha(W_1) \le \alpha(W_2)$. Consider now the case when $|W|$ is not an integer but $|W| = q + \varepsilon$, where $0 < \varepsilon < 1$. In this case

$$d = \frac{1}{|W|}\sum_{j=1}^{n} j\, w_j = \frac{1}{q+\varepsilon}\left(\sum_{j=1}^{q} j + (q+1)\,\varepsilon\right) = \frac{1}{q+\varepsilon}\left(\frac{1}{2}(q+1)q + (q+1)\,\varepsilon\right) = \frac{q+1}{q+\varepsilon}\left(\frac{q}{2} + \varepsilon\right).$$

Taking the derivative of this with respect to $\varepsilon$ we get $d' = \frac{q+1}{(q+\varepsilon)^2}\,\frac{q}{2} \ge 0$. Since $d$ is increasing in $\varepsilon$ we get the desired result.


An approximate relationship between $\beta$ and $\alpha(W_{pu})$ for these push-up type vectors can be obtained. We recall for any $W$ that $\alpha(W) = \frac{n}{n-1} - \frac{1}{n-1}\,\frac{1}{|W|}\sum_{j=1}^{n} j\, w_j$, and that for a given $\beta$, $|W| = \beta n + (1 - \beta)$. Hence for $W_{pu}$, where the top weights are one, treating $|W|$ as the number of unit weights we get

$$\alpha(W_{pu}) \approx \frac{n}{n-1} - \frac{1}{n-1}\,\frac{1}{|W|}\sum_{j=1}^{|W|} j = \frac{n}{n-1} - \frac{1}{n-1}\,\frac{|W| + 1}{2}$$

$$\alpha(W_{pu}) \approx \frac{2n - (\beta n + 2 - \beta)}{2(n-1)} = \frac{1}{2}\,\frac{n(2 - \beta) - (2 - \beta)}{n-1} = \frac{1}{2}(2 - \beta) = 1 - \frac{\beta}{2}.$$

We see that $\alpha(W_{pu})$ decreases as $\beta$ increases. When $\beta = 0$, $\alpha(W_{pu})$ assumes its largest value of one, and when $\beta = 1$, $\alpha(W_{pu})$ assumes its smallest value of one half. Since $W_{pu}$ always has the largest $\alpha$ for a given $\beta$, we see that as $\beta$ increases the upper boundary for any $\alpha(W)$ decreases.
We now consider the dual allocation to the push-up, the ''push-down allocation.'' Here we try to allocate a weight of one to each of the bottom elements until we run out of weight.

The allocation algorithm used in this case is

$$w_{n-j+1} = (1 \wedge (|W| - (j-1))) \vee 0$$

where $j = 1$ to $n$. As an example, if $n = 8$ and $|W| = 4.3$ then we get

$$w_1 = w_2 = w_3 = 0,\quad w_4 = 0.3,\quad w_5 = w_6 = w_7 = w_8 = 1.$$
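By the duality noted in the text, the push-down vector is the push-up vector read from the bottom; a sketch (name ours):

```python
def push_down(n, magnitude):
    """w_{n-j+1} = max(0, min(1, |W| - (j-1))): fill ones from the bottom of W."""
    return [max(0.0, min(1.0, magnitude - (n - j))) for j in range(1, n + 1)]

print(push_down(8, 4.3))  # approximately [0, 0, 0, 0.3, 1, 1, 1, 1]
```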
The properties associated with this allocation are dual to those of the push-up. Here we shall let $W_{pd}$ indicate the push-down weighting vector.

THEOREM. If $W_{pd}$ and $W$ are two weighting vectors having the same dimension and magnitude $\beta$, then $\alpha(W_{pd}) \le \alpha(W)$.

The implication here is that for a given magnitude $\beta$ the push-down allocation has the smallest $\alpha$. Two special cases of $W_{pd}$ are worth noting. In the case where $\beta = 0$, the pure OWA case, $W_{pd} = W_*$, and in the case where $\beta = 1$, $W_{pd} = W_T$. We recall $\alpha(W_T) = 0.5$ and $\alpha(W_*) = 0$.
The following result, which is the dual of the push-up case, applies to the push-down.

THEOREM. Assume $W_1$ and $W_2$ are two push-down weighting vectors of dimension $n$, with magnitudes $\beta_1$ and $\beta_2$ respectively. If $\beta_1 > \beta_2$ then $\alpha(W_1) \ge \alpha(W_2)$.

Since $W_*$, the push-down with the smallest magnitude, has $\alpha = 0$ and $W_T$, the push-down with the largest magnitude, has $\alpha = 1/2$, it follows that $\alpha(W_{pd})$ moves from 0 to 0.5 as $\beta$ moves from zero to one. As in the case of the push-up, an approximate relationship between $\alpha(W_{pd})$ and $\beta$ can be obtained: $\alpha(W_{pd}) \approx \frac{\beta}{2}$.
Since the push-up induces the largest $\alpha(W)$ for a given magnitude $\beta$ and the push-down provides the smallest $\alpha(W)$ for a given $\beta$, Figure 7 shows the resulting range of allowable $\alpha(W)$ values as a function of the magnitude $\beta$.

Figure 7. Allowable range of α(W) as a function of β.

Figure 7 makes clear our observation that as $\beta$ increases we face a loss in freedom with respect to our ability to obtain weighting vectors with specific attitudinal characteristics.
Another notable allocation of the weights is the uniform allocation. In this case, if $W$ has dimension $n$ and magnitude $|W|$, then we assign the weights as $w_j = \frac{|W|}{n}$ for all $j$. In this case we can easily show that $\alpha(W) = 0.5$; this allocation always has a neutral attitudinal character.
Another special allocation is the median type allocation. Here we try to push as much weight as possible to the center of $W$. Let $n$ be the dimension of $W$ and assume we have magnitude $|W|$. As is often the situation with the median, we must distinguish between the cases when $n$ is even and odd. We shall first assume $n$ is even, i.e. $n = 2a$. In this case for $j = 1$ to $a$ we allocate the weights as

$$w_{a+j} = w_{a+1-j} = \left(1 \wedge \frac{|W| - 2(j-1)}{2}\right) \vee 0.$$

In the case where $n$ is odd, $n = 2a + 1$, we allocate the weights as follows:

$$w_{a+1} = 1$$

$$w_{a+1-j} = w_{a+1+j} = \left(1 \wedge \frac{(|W| - 1) - 2(j-1)}{2}\right) \vee 0 \quad (\text{for } j = 1 \text{ to } a).$$
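A sketch of the median type allocation covering both parities (function name ours; the paper's 1-based indices are translated to Python's 0-based lists):

```python
def median_allocation(n, magnitude):
    """Push as much of the magnitude |W| as possible toward the center of W."""
    w = [0.0] * n
    a = n // 2
    if n % 2 == 1:                          # n = 2a + 1: centre weight is 1
        w[a] = 1.0                          # w_{a+1} in the paper's indexing
        for j in range(1, a + 1):
            v = max(0.0, min(1.0, ((magnitude - 1) - 2 * (j - 1)) / 2))
            w[a - j] = w[a + j] = v         # w_{a+1-j} and w_{a+1+j}
    else:                                   # n = 2a: weight spreads from the two centre slots
        for j in range(1, a + 1):
            v = max(0.0, min(1.0, (magnitude - 2 * (j - 1)) / 2))
            w[a - j] = w[a + j - 1] = v     # w_{a+1-j} and w_{a+j}
    return w

print(median_allocation(8, 4.3))  # approximately [0, 0.15, 1, 1, 1, 1, 0.15, 0]
print(median_allocation(7, 4.3))  # approximately [0, 0.65, 1, 1, 1, 0.65, 0]
```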
Before calculating the $\alpha(W)$ for this case we provide a general result for symmetric weighting vectors.

Definition. We say that $W$ is a symmetric weighting vector if $w_{n+1-j} = w_j$ for $j = 1$ to $n$.

THEOREM. Assume that $W$ is a symmetric weighting vector; then $\alpha(W) = 0.5$.

Proof: We recall $\alpha(W) = \frac{n}{n-1} - \frac{1}{n-1}\,\frac{1}{|W|}\sum_{j=1}^{n} j\, w_j$.

1. Assume $n$ is even, $n = 2a$. Then $\sum_{j=1}^{n} j\, w_j = \sum_{j=1}^{a} j\, w_j + \sum_{j=a+1}^{n} j\, w_j$. Since $w_j = w_{n+1-j}$, we have $\sum_{j=a+1}^{n} j\, w_j = \sum_{j=1}^{a} (n+1-j)\, w_j$, hence

$$\sum_{j=1}^{n} j\, w_j = \sum_{j=1}^{a} (j + (n+1-j))\, w_j = (n+1)\sum_{j=1}^{a} w_j = \frac{(n+1)\,|W|}{2}.$$

Using this,

$$\alpha(W) = \frac{n}{n-1} - \frac{1}{n-1}\,\frac{1}{|W|}\,\frac{(n+1)\,|W|}{2} = \frac{2n - (n+1)}{2(n-1)} = 0.5.$$

2. If $n$ is odd, $n = 2a + 1$, then $\sum_{j=1}^{n} j\, w_j = \sum_{j=1}^{a} j\, w_j + (a+1)\,w_{a+1} + \sum_{j=a+2}^{2a+1} j\, w_j$. Here, using the fact that $w_j = w_{n+1-j}$, we have $\sum_{j=a+2}^{2a+1} j\, w_j = \sum_{j=1}^{a} (2a+2-j)\, w_j$, so

$$\sum_{j=1}^{a} j\, w_j + \sum_{j=a+2}^{2a+1} j\, w_j = \sum_{j=1}^{a} (j + 2a + 2 - j)\, w_j = (2a+2)\sum_{j=1}^{a} w_j = (2a+2)\,\frac{|W| - w_{a+1}}{2} = (a+1)(|W| - w_{a+1}).$$

From this we get that $\sum_{j=1}^{n} j\, w_j = (a+1)(|W| - w_{a+1}) + (a+1)\,w_{a+1} = (a+1)\,|W|$. Using this,

$$\alpha(W) = \frac{n}{n-1} - \frac{1}{n-1}\,\frac{1}{|W|}\,(a+1)\,|W| = \frac{1}{n-1}\left[2a + 1 - a - 1\right] = \frac{a}{2a} = 0.5.$$

Thus we see that any symmetric weighting vector has $\alpha = 0.5$. Since the median type allocation is symmetric, it has attitudinal character $\alpha(W_{med}) = 0.5$.

One important class of OWA weighting vectors are the vectors where $w_K = 1$ and $w_j = 0$ for $j \ne K$. This can be seen as selecting the $K$th largest element. In this case $W$ is a vector focused at $K$. We shall now extend the idea of a vector focused at a point to the HOWA aggregation.
Let $W$ be an $n$-dimensional vector with magnitude $|W|$. We say that $W$ is focused at $K$ if the weights of $W$ satisfy the allocations described below. In the following let $b = \min[(K-1), (n-K)]$:

$$w_K = 1$$

$$w_{K+j} = w_{K-j} = \left(1 \wedge \frac{(|W| - 1) - 2(j-1)}{2}\right) \vee 0 \quad \text{for } j = 1 \text{ to } b.$$

If $b = K - 1$, i.e. $K - 1 < n - K$, then

$$w_{j+2K-1} = (1 \wedge ((|W| - (1 + 2b)) - (j-1))) \vee 0 \quad \text{for } j = 1 \text{ to } n - 2K + 1.$$

If $b = n - K$, i.e. $K - 1 > n - K$, then

$$w_{2K-n-j} = (1 \wedge ((|W| - (1 + 2b)) - (j-1))) \vee 0 \quad \text{for } j = 1 \text{ to } 2K - n - 1.$$

It can be seen that for $K = 1$ this becomes the push-up allocation, and for $K = n$ it becomes the push-down. It is also the median allocation when $K = \frac{n+1}{2}$.
Another notable type of OWA aggregation is what is called the Olympic Average. If $W$ is the $n$-dimensional OWA weighting vector, in this case it is defined as follows. If $m$, called the cropping value, is some integer such that $n > 2m$, the weights are defined such that

$$w_j = 0 \quad \text{for } j = 1 \text{ to } m$$

$$w_j = \frac{1}{n - 2m} \quad \text{for } j = m+1 \text{ to } n-m$$

$$w_j = 0 \quad \text{for } j = n-m+1 \text{ to } n.$$

Essentially, with this type of aggregation we discard the $m$ highest and $m$ lowest values and take the average of the remaining arguments. It can be seen as lying somewhere between the average and the median type aggregation.

Let us look at the extension of this type of aggregation in the case of HOWA. Assume $W$ is a HOWA vector of dimension $n$ with magnitude $|W| > 1$, and assume $m$ is the cropping value. To extend this aggregation to the HOWA case we must consider two cases. In the first case $|W| < n - 2m$, that is, the cropping value $m$ satisfies $m < \frac{n - |W|}{2}$. In this case we allocate the weights as

$$w_j = 0 \quad \text{for } j = 1 \text{ to } m$$

$$w_j = \frac{|W|}{n - 2m} \quad \text{for } j = m+1 \text{ to } n-m$$

$$w_j = 0 \quad \text{for } j = n-m+1 \text{ to } n.$$

In the second case, where $|W| \ge n - 2m$, i.e. $m \ge \frac{n - |W|}{2}$, we allocate the weights as follows:

$$w_j = 1 \quad \text{for } j = m+1 \text{ to } n-m$$

$$w_{m+1-j} = w_{n-m+j} = \left(1 \wedge \frac{(|W| - (n - 2m)) - 2(j-1)}{2}\right) \vee 0 \quad \text{for } j = 1 \text{ to } m.$$

In either case, since $W$ is symmetric, we get that $\alpha(W) = 0.5$.
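A sketch of this two-case extension (function name ours; indices translated to 0-based):

```python
def olympic_howa(n, magnitude, m):
    """HOWA Olympic weights with cropping value m (requires n > 2m)."""
    assert n > 2 * m
    w = [0.0] * n
    inner = n - 2 * m                       # size of the uncropped middle band
    if magnitude < inner:                   # case 1: spread |W| uniformly over the middle
        for j in range(m, n - m):
            w[j] = magnitude / inner
    else:                                   # case 2: middle gets ones, excess spreads outward
        for j in range(m, n - m):
            w[j] = 1.0
        for j in range(1, m + 1):           # w_{m+1-j} and w_{n-m+j} in the paper's indexing
            v = max(0.0, min(1.0, ((magnitude - inner) - 2 * (j - 1)) / 2))
            w[m - j] = w[n - m + j - 1] = v
    return w

print(olympic_howa(8, 3.0, 2))  # case 1: [0, 0, 0.75, 0.75, 0.75, 0.75, 0, 0]
print(olympic_howa(8, 6.5, 2))  # case 2: [0.25, 1, 1, 1, 1, 1, 1, 0.25]
```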


Another notable type of OWA aggregation is the Arrow-Hurwicz aggregation (Arrow and Hurwicz (1972)). In this aggregation we assign $w_1 = \lambda$ and $w_n = 1 - \lambda$, which gives us

$$F(a_1, \ldots, a_n) = \lambda \max_i[a_i] + (1 - \lambda)\min_i[a_i].$$

It is a weighted average of the Max and Min; $\lambda \in [0, 1]$ is called the Hurwicz coefficient. Here we shall describe an extension of this to the HOWA operator. Assume we have a HOWA aggregation with magnitude $|W| = q$ and dimension $n$. Let $\lambda$ be our Hurwicz coefficient. We let $q_1 = \lambda q$ and $q_2 = (1 - \lambda)q$.
Here we define our weights in two directions, push-up and push-down. We first calculate

$$\tilde{w}_j = (1 \wedge (q_1 - (j-1))) \vee 0 \quad j = 1 \text{ to } n$$

$$\hat{w}_{n-j+1} = (1 \wedge (q_2 - (j-1))) \vee 0 \quad j = 1 \text{ to } n$$

and then we define our weights as

$$w_i = \tilde{w}_i + \hat{w}_i.$$

We observe that $\tilde{w}_j = 0$ for all $j \ge q_1 + 1 = \lambda|W| + 1$ and $\hat{w}_j = 0$ for $j \le n - q_2 = n - |W| + \lambda|W|$.
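A sketch composing the push-up and push-down allocations with the split $q_1 = \lambda q$, $q_2 = (1 - \lambda)q$ (function name ours):

```python
def hurwicz_howa_weights(n, magnitude, lam):
    """Split |W| = q into q1 = lam*q allocated push-up and q2 = (1-lam)*q push-down."""
    q1, q2 = lam * magnitude, (1 - lam) * magnitude
    up   = [max(0.0, min(1.0, q1 - (j - 1))) for j in range(1, n + 1)]
    down = [max(0.0, min(1.0, q2 - (j - 1))) for j in range(1, n + 1)][::-1]
    return [u + d for u, d in zip(up, down)]

print(hurwicz_howa_weights(6, 3.0, 2/3))  # q1 = 2 up, q2 = 1 down -> [1, 1, 0, 0, 0, 1]
```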

8. Conclusion

We discussed the OWA operator and some of the features used to characterize these operators. In passing we introduced a new characterizing attribute called the divergence. We then considered two cases of information fusion, one requiring a mean aggregation of the data provided and the other requiring a summing of the data provided. We used these as motivation to generalize the OWA operator with the introduction of the Heavy OWA operator. The HOWA operators differ from the ordinary OWA operators by relaxing the condition that the sum of the weights in the weighting vector must be one. In doing this we allow for the unified representation of a wider class of aggregation operators that includes mean operators and totaling operators. We considered some applications of HOWA operators and provided some examples of weighting vectors associated with these HOWA operators.

References

Arrow, K. J. and L. Hurwicz. (1972). ‘‘An Optimality Criterion for Decision Making Under Ignorance.’’ In C. F.
Carter and J. L. Ford (eds.), Uncertainty and Expectations in Economics. New Jersey: Kelley.
Bordogna, G. and G. Pasi. (1995). ''Linguistic Aggregation Operators in Fuzzy Information Retrieval,'' International Journal of Intelligent Systems 10, 233 – 248.
Bosc, P. and O. Pivert. (1995). ''SQLf: A Relational Database Language for Fuzzy Querying,'' IEEE Transactions on Fuzzy Systems 3, 1 – 17.
Bouchon-Meunier, B. and M. Rifqi. (1997). ''OWA Operators and an Extension of the Contrast Model.'' In R. R. Yager and J. Kacprzyk (eds.), The Ordered Weighted Averaging Operators: Theory and Applications. Boston: Kluwer Academic Publishers, 29 – 35.
Fedrizzi, M., J. Kacprzyk, and H. Nurmi. (1993). ‘‘Consensus Degrees Under Fuzzy Majorities and Fuzzy
Preferences Using OWA Operators,’’ Control And Cybernetics 22, 71 – 80.
Herrera, F., E. Herrera-Viedma, and J. L. Verdegay. (1996). ‘‘Direct Approach Processes in Group Decision
Making Using Linguistic OWA Operators,’’ Fuzzy Sets and Systems 79, 175 – 190.
Kacprzyk, J. and S. Zadrozny. (1997). ''Implementation of OWA Operators in Fuzzy Querying for Microsoft Access.'' In R. R. Yager and J. Kacprzyk (eds.), The Ordered Weighted Averaging Operators: Theory and Applications. Boston: Kluwer Academic Publishers, 293 – 306.
Luce, R. D. and H. Raiffa. (1967). Games and Decisions: Introduction and Critical Survey. New York: John
Wiley & Sons.
Yager, R. R. (1988). ''On Ordered Weighted Averaging Aggregation Operators in Multi-Criteria Decision Making,'' IEEE Transactions on Systems, Man and Cybernetics 18, 183 – 190.
Yager, R. R. (1992). ‘‘Decision Making Under Dempster-Shafer Uncertainties,’’ International Journal of General
Systems 20, 233 – 245.
Yager, R. R. (1993). ‘‘Families of OWA Operators,’’ Fuzzy Sets and Systems 59, 125 – 148.
Yager, R. R. (1996). ''Quantifier Guided Aggregation Using OWA Operators,'' International Journal of Intelligent Systems 11, 49 – 73.
Yager, R. R. and J. Kacprzyk. (1997). The Ordered Weighted Averaging Operators: Theory and Applications.
Norwell, MA: Kluwer.
