Non-Numeric Multi-Criteria Multi-Person Decision Making

Ronald R. Yager

Abstract
We describe a decision-making technique, ME-MCDM, for the evaluation and selection of alternatives using a non-numeric scale. Using this procedure each alternative is evaluated by an expert for satisfaction to a multi-criteria selection function. Each criterion can have a different degree of importance. The individual expert evaluations can then be aggregated to obtain an overall evaluation function. We apply this technique to the problem of proposal selection in the funding environment. In this environment the technique is augmented by some textual information which can also be used to help in the decision process.
1. Introduction
Decision making involving multiple experts is a very important and difficult class of problems. These problems become even more complicated when each individual expert's decision is based upon the use of multiple criteria. We shall call these ME-MCDM (Multi Expert-Multi Criteria Decision Making) problems. In many cases an important aspect of the ME-MCDM solution process is the necessity to aggregate ratings and preferences. A requirement for aggregation is that the information provided, the preference ratings, be on a scale of sufficient sophistication to allow appropriate aggregation operations to be performed. The Arrow impossibility theorem (Arrow 1951) is a reflection of this difficulty. One scale that has the needed property is the numeric scale with its vast cornucopia of allowable operations such as averaging. One problem with such a scale is that we become subject to an effect that I shall call the tyranny of numbers. The essential issue here is that the numbers take on a life and precision far in excess of the ability of the evaluators in providing these scores.
In this note we suggest an approach that allows for the requisite aggregations
but that avoids the tyranny of numbers by using a scale that essentially only re-
quires a linear ordering. The procedure suggested here can be used on many dif-
ferent kinds of problems: pattern recognition and medical diagnosis, for example.
The problems are essentially characterized by a situation in which we must evaluate the quality of a number of objects (alternatives). The ME-MCDM evaluation
process described in the following is a two-stage process. In the first stage, indi-
vidual experts are asked to provide an evaluation of the alternatives. This evaluation consists of a rating for each alternative on each of the criteria. We note that
each of the criteria may have a different level of importance. The values to be
used for the evaluation of the ratings and importances will be drawn from a lin-
guistic scale which makes it easier for the evaluator to provide the information.
We use a methodology developed in Yager (1981) and discussed in Caudill (1990)
to provide a single value rating for each evaluator for each alternative. This rating
is again a linguistic value from the same scale. In the second stage, we use a
methodology introduced in Yager (1988) and extended in Yager (1992) to aggregate
the individual expert's evaluations to obtain an overall linguistic value for each
object. This overall evaluation can then be used by the decision maker as an aid
in the selection process. A goal of this work is to show how these two methodol-
ogies can be combined to provide an overall methodology for the solution of Multi
Expert-Multi Criteria Decision Making problems.
During 1983-1984, I had the opportunity to serve at a government funding
agency as a program director in information sciences. In this capacity, I was re-
sponsible for selecting among proposals submitted those which were to be funded.
This experience provided me with firsthand knowledge of the decision-making
procedure used in the granting process. Recently, I had the opportunity to serve
on a panel of this agency as a reviewer to help provide information for the selec-
tion of proposals. This experience renewed my interest in the process used to
make this very important class of ME-MCDM decisions. In this note we shall use
this framework of proposal selection to describe our general procedure for aggre-
gation in ME-MCDM problems.
In the application specifically addressed in this article, proposal selection, we
suggest augmenting the ratings by some textual evaluation. While the need for
textual evaluation is not required for the implementation of the ME-MCDM pro-
cedure described, we feel that it provides useful additional information in the se-
lection process.
2. Problem formulation
The problem we are interested in addressing can be seen to consist of three com-
ponents. The first component is a collection of proposals

P = {P1, P2, . . . , Pm}.
such decisions. One thing that becomes obvious is the central role that the nu-
meric scores play in this process. At the very least (and very best) they help make
the broad distinction between those proposals that are very bad and those that are
very good. They also help provide some ordering among the proposals. Most ra-
tional people would agree that given the highly subjective (and obviously nonre-
peatable) process used by the experts in providing these numeric scores the de-
cision maker shouldn't follow the numeric ratings too exactly; they should be
used as a guide tempered at the very least by the textual material. On the other hand, spurned suitors for funding are rarely rational. In particular, the principle of fairness would appear to be violated if someone with a lower total score was funded and they were not. This situation can be seen to be a tyranny of numbers. We are sometimes encouraged to follow numeric ratings more precisely than they merit.
In the following section, we shall suggest an alternative scoring system that pro-
vides at least some degree of respite from this tyranny. This system will allow for
some ordering without the artificial precision of numbers. It will essentially group
the proposals into categories based upon a linguistic scoring. The decision maker
can then use the textual material as well as a more detailed study of the scores to
make his or her selection.
As was discussed, for each proposal each expert is given a questionnaire. We shall
assume each questionnaire consists of n questions. As noted in the previous sec-
tion, each question is a manifestation of a criterion of concern in evaluating a
proposal. Each question--How good is this proposal on this criterion?--will be given an answer from the following scale S:

S7 = P (Perfect)
S6 = VH (Very High)
S5 = H (High)
S4 = M (Medium)
S3 = L (Low)
S2 = VL (Very Low)
S1 = N (None).
The use of such a scale provides, of course, a natural ordering, Si > Sj if i > j.
Of primary significance is that the use of such a scale doesn't impose undue bur-
den on the evaluator in that it doesn't impose the meaningless precision of num-
bers. The scale is essentially a linear ordering and just implies that one score is
better than another. However, the use of linguistic terms associated with these
scores makes it easier for the evaluator to manipulate. The use of such a linguistic
scale also implicitly implies some concept of being satisfactory or not. The use of
such a seven-point scale appears also to be in line with Miller's (1969) observation
that human beings can reasonably manage to keep in mind seven or so items.
Implicit in this scale are two operators, the maximum and minimum of any two
scores:
Max(Si, Sj) = Si if Si >= Sj
Min(Si, Sj) = Si if Si <= Sj.
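These two operators can be realized directly on the linguistic labels by comparing positions on the scale. The following is an illustrative sketch only; the names SCALE, rank, smax, and smin are ours, not the paper's:

```python
# Illustrative encoding of the seven-point ordinal scale S1..S7.
# Only order comparisons are used -- never arithmetic on the labels.
SCALE = ["N", "VL", "L", "M", "H", "VH", "P"]  # S1 .. S7

def rank(label: str) -> int:
    """Return i such that label = S_i, so S_i > S_j iff i > j."""
    return SCALE.index(label) + 1

def smax(a: str, b: str) -> str:
    """Max(S_i, S_j): the higher of the two labels."""
    return a if rank(a) >= rank(b) else b

def smin(a: str, b: str) -> str:
    """Min(S_i, S_j): the lower of the two labels."""
    return a if rank(a) <= rank(b) else b

print(smax("H", "M"), smin("VH", "L"))  # -> H L
```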
Thus the scoring of the ith proposal by the kth expert can be seen as a vector

[Pik(q1), Pik(q2), . . . , Pik(qn)]

where Pik(qj) is the rating of the ith proposal on the jth criterion by the kth expert. Each Pik(qj) is an element in the set S of allowable scores.
Assuming n = 6, a typical attribution of criteria importances would be:

I(q1) = P
I(q2) = VH
I(q3) = VH
I(q4) = M
I(q5) = L
I(q6) = L.
The next step in the process is to find the overall valuation for a proposal by a
given expert.
In order to accomplish this overall evaluation, we use a methodology suggested by Yager (1981). This approach was recently discussed by Caudill (1990).
A crucial aspect of this approach is the taking of the negation of the impor-
tances. In Yager (1981), we introduced a technique for taking the negation on a linear scale of the type we have used. In particular, it was suggested that if we have a scale of q items of the kind we are using, then

Neg(Si) = Sq-i+1.
We note that this operation satisfies the desirable properties of such a negation as discussed by Dubois and Prade (1985).
1. Closure
For any s ∈ S, Neg(s) ∈ S
2. Order reversal
For Si > Sj, Neg(Si) <= Neg(Sj)
3. Involution
Neg(Neg(Si)) = Si for all i.
For the scale we are using, we see that the negation operation provides the
following:
Neg(P) = N (Neg(S7) = S1)
Neg(VH) = VL (Neg(S6) = S2)
Neg(H) = L (Neg(S5) = S3)
Neg(M) = M (Neg(S4) = S4)
Neg(L) = H (Neg(S3) = S5)
Neg(VL) = VH (Neg(S2) = S6)
Neg(N) = P (Neg(S1) = S7).
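As a sketch, the rule Neg(Si) = Sq-i+1 amounts to reversing a label's position on the scale; the label list and function name below are our own illustrative choices:

```python
# Negation on a q-point ordinal scale: Neg(S_i) = S_(q-i+1).
SCALE = ["N", "VL", "L", "M", "H", "VH", "P"]  # S1 .. S7

def neg(label: str) -> str:
    """Reverse the label's position on the scale."""
    q = len(SCALE)
    i = SCALE.index(label) + 1          # label = S_i
    return SCALE[(q - i + 1) - 1]       # S_(q-i+1)

# Reproduces the table above: Neg(P)=N, Neg(M)=M, Neg(VL)=VH, ...
print([(s, neg(s)) for s in SCALE])
```

Note that the midpoint M is its own negation, which is what makes the involution property hold on an odd-length scale.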
The methodology suggested by Yager (1981) that can be used to find the unit score of each proposal by each expert, which we shall denote as Pik, is as follows:

Pik = Min over j [ Neg(I(qj)) ∨ Pik(qj) ]    (1)

where I(qj) is the importance of the jth criterion. In the above ∨ indicates the max operation. We first note that this formulation can be implemented on elements drawn from a linear scale as it only involves max, min, and negation.
Let us look at the operation suggested in equation (1). First we note that the
min operation selects the smallest of its arguments; generally, unless all arguments are high, high values on the scale don't affect the min operation. This is a result of the property that Min(S7, Si) = Si for any i. Consider a criterion that has little importance; it will get an importance rating Sk that is low on the scale. When we take the negation of this score we get something high. When we take the max of this negated importance with the satisfaction value of the criterion we still get a high score. Thus we see that low-importance criteria have little effect on the overall score.
Criteria: C1 C2 C3 C4 C5 C6
Importance: P VH VH M L L
Score: H M L P VH P.

In this case,

Pik = Min[H, M, L, P, VH, P] = L (low).

The essential reason for the low performance of this object is that it performed low on the third criterion, which has a very high importance. We note that if we change the importance of the third criterion to low, then the proposal would evaluate to medium.
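The worked example can be checked mechanically with a small sketch of the unit-score formula, min over criteria of Neg(importance) ∨ score; the helper names are ours:

```python
# Unit score of a proposal for one expert:
#   P_ik = Min_j [ Neg(I(q_j)) v P_ik(q_j) ]
SCALE = ["N", "VL", "L", "M", "H", "VH", "P"]  # S1 .. S7

def rank(s):
    return SCALE.index(s)

def neg(s):
    return SCALE[len(SCALE) - 1 - rank(s)]

def unit_score(importances, scores):
    """Min over criteria of max(Neg(importance), score)."""
    terms = [max(n, s, key=rank) for n, s in
             zip(map(neg, importances), scores)]
    return min(terms, key=rank)

importance = ["P", "VH", "VH", "M", "L", "L"]   # the example above
score      = ["H", "M",  "L",  "P", "VH", "P"]
print(unit_score(importance, score))            # -> L

# Lowering the importance of the third criterion lifts the result:
importance[2] = "L"
print(unit_score(importance, score))            # -> M
```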
The formulation of equation (1) can be seen as a generalization of a weighted
averaging. Linguistically, this formulation is saying that if a criterion is important,
then a proposal to be fundable should score well on it. This idea is in the same
spirit as that suggested by Roy in his outranking technique (1972, 1973).
Essentially this methodology starts off by assuming that each project has a per-
fect score and then reduces its evaluation by its scoring on each criterion. How-
ever, for each criterion the amount of this reduction is limited by the importance
of the criterion. This limitation is manifested by the fact that we use the negation
of the criteria importance.
In Arrow (1951), Luce and Raiffa (1967), and Yager (1980), a number of properties required of a multi-criteria decision function are discussed. Among these properties are Pareto optimality, independence of irrelevant alternatives, positive association of individual scores with overall score, nondictatorship, and symmetry. It can be shown (Yager 1980, 1981) that the formulation suggested for the
aggregation of multi-criteria satisfies these conditions. A more detailed discussion
of this methodology can be found in Yager (1981).
An essential feature of this approach is that we have obtained a reasonable unit
evaluation of each proposal by each expert using an easily manageable linguistic
scale. We had no need to use numeric values and force undue precision on the
experts.
We note that in formulating the individual expert's evaluation of an alternative
we are implicitly assuming a desire to satisfy all criteria that are important. More
formally we are requiring that for all criteria if the criterion is important, then it
has a good score.
The term Neg(I(qj)) ∨ P(qj), which we shall denote as T(qj), indicates a truth value, for a given criterion, of the statement: if the criterion is important, then it has a good score. Our requirement that we desire all criteria (universal quantifier) to satisfy this imperative means that the connection between the individual T(qj) is an "anding." It is well established in the literature (Dubois and Prade 1985) that the only way to implement an anding with the kind of ordinal scale we are using is with a min operator. One characteristic of this all/and/min type of aggregation is that the least score plays the central role in determining the combined score. In addition,
implicit in this desire to attain universal satisfaction is that no tradeoffs are allowed; any number of good scores can't make up for a bad score. This desire for universal satisfaction means that one unhappy criterion can bring down the overall rating. This leads in some cases to what on the surface appear to be counterintuitive results. The problem is not with the min operation used but with the imposition of the desire for universal satisfaction. A decision maker who is unhappy with this conclusion must provide a softer aggregation imperative than requiring that all criteria be satisfied.
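The no-tradeoff character of the and/min imperative is easy to see in a sketch (labels and names are our own illustrative choices):

```python
# An "and"-type aggregation over an ordinal scale is a min: any number of
# perfect scores cannot compensate for one very low one.
SCALE = ["N", "VL", "L", "M", "H", "VH", "P"]  # S1 .. S7

def min_agg(scores):
    """Combined score under universal (all-criteria) satisfaction."""
    return min(scores, key=SCALE.index)

print(min_agg(["P", "P", "P", "VL"]))   # -> VL
```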
As a result of the previous section, we have for each proposal, assuming there are r experts, a collection of evaluations Pi1, Pi2, . . . , Pir, where Pik is the unit evaluation of the ith proposal by the kth expert. If r is small, the decision maker may be able to manage these values to make some overall decision. In this section, we shall provide a technique for combining the experts' evaluations to obtain an overall evaluation for each proposal, which we shall denote as Pi. We are implicitly
assuming that each of the experts has the same importance. Differing importances
can be, to some extent, factored in when the textual material is included. We note
that, in the case of using numeric values, this operation usually corresponds to
taking an average. The technique we use is based on the ordered weighted aver-
aging (OWA) operators introduced by Yager (1988) and extended to the linear
environment in Yager (1992).
The first step in this process is for the decision maker (program director) to
provide an aggregation function which we shall denote as Q. This function can be
seen as a generalization of the idea of how many experts he or she feels need to
agree on a project for it to be acceptable. In particular for each number i, where
i runs from 1 to r, the decision maker must provide a value Q(i) indicating how
satisfied he/she would be in selecting a proposal with which i of the experts were
satisfied. The values for Q(i) should be drawn from the scale S = {S1, S2, . . . , Sq}
described above.
It should be noted that the selection of Q(i) is purely a subjective choice of the
decision maker reflecting the way he/she wants to make his/her decision. How-
ever, the function Q should have certain characteristics to make it rational:
1. As more experts agree, the decision maker's satisfaction or confidence should increase:

Q(i) >= Q(j) for i > j.

2. If all the experts are satisfied, then his/her satisfaction should be the highest possible:

Q(r) = Perfect;

correspondingly, if no expert is satisfied,

Q(0) = None.

We note that although these examples only use the two extreme values of the scale S, this is not at all necessary or preferred.
In the following we shall suggest a manifestation of Q that can be said to emu-
late the usual arithmetic averaging function. In Yager (1992) we provide a formal
justification of this relationship. In order to define this function, we introduce the operation Int[a] as returning the integer value that is closest to the number a. In the following we let q be the number of points on the scale (the cardinality of S) and r be the number of experts participating. This function which emulates the average is denoted as QA and is defined for all k = 0, 1, . . . , r as

QA(k) = Sb(k)

where

b(k) = Int[1 + (k * (q - 1)/r)].

We note that whatever the values of q and r, it is always the case that

QA(0) = S1
QA(r) = Sq.
As an example of this function, if r = 3 and q = 7, then

b(k) = Int[1 + (k * 6/3)] = Int[1 + 2k]

and

QA(0) = S1
QA(1) = S3
QA(2) = S5
QA(3) = S7.
If r = 4 and q = 7, then

b(k) = Int[1 + k * 1.5]

and

QA(0) = S1
QA(1) = S3
QA(2) = S4
QA(3) = S6
QA(4) = S7.
In the case where r = 10 and q = 7, then

b(k) = Int[1 + k * 0.6];

then

QA(0) = S1
QA(1) = S2
QA(2) = S2
QA(3) = S3
QA(4) = S3
QA(5) = S4
QA(6) = S5
QA(7) = S5
QA(8) = S6
QA(9) = S6
QA(10) = S7.
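The three tables above can be generated mechanically from the definition of b(k). The sketch below assumes Int[] is nearest-integer rounding with ties rounding up, which is what reproduces the tables; the function name is ours:

```python
import math

def b(k: int, r: int, q: int) -> int:
    """b(k) = Int[1 + k*(q-1)/r], nearest integer, ties rounding up."""
    return int(math.floor(1 + k * (q - 1) / r + 0.5))

# QA(k) = S_b(k); e.g. with r = 4 experts on a q = 7 point scale:
print([b(k, 4, 7) for k in range(5)])    # -> [1, 3, 4, 6, 7]
```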
Having appropriately selected Q, we are now in a position to use the ordered weighted averaging (OWA) method (Yager 1988, 1992) for aggregating the expert opinions. Assume that we have r experts, each of which has a unit evaluation for the ith project denoted Pik. The first step in the OWA procedure is to order the Pik's in descending order; thus we shall denote Bj as the jth highest score among the experts' unit scores for the project. To find the overall evaluation for the ith project, denoted Pi, we calculate

Pi = Max over j [ QA(j) ∧ Bj ]    (2)

where ∧ indicates the min operation and the index j runs from 1 to r.
As an example, assume four experts provide the unit evaluations

Pi1 = M
Pi2 = H
Pi3 = L
Pi4 = VH.

Ordering these in descending order, we get

B1 = VH
B2 = H
B3 = M
B4 = L.
Furthermore, we shall assume that our decision maker chooses as his or her aggregation function the average-like function, QA. Then with r = 4 and scale cardinality q = 7, we obtain
QA(1) = L (S3)
QA(2) = M (S4)
QA(3) = VH (S6)
QA(4) = P (S7).

Applying the OWA aggregation, the overall evaluation is

Pi = Max[Min(L, VH), Min(M, H), Min(VH, M), Min(P, L)] = Max[L, M, M, L] = M.

Thus the overall evaluation of this proposal is medium.
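Putting the pieces together, the whole second stage fits in a short sketch: QA is the average-emulating quantifier and the aggregation is the ordinal OWA from this section. The function and variable names are ours:

```python
import math

SCALE = ["N", "VL", "L", "M", "H", "VH", "P"]   # S1 .. S7, so q = 7

def qa(j: int, r: int, q: int = 7) -> int:
    """Scale index of QA(j): Int[1 + j*(q-1)/r], ties rounding up."""
    return int(math.floor(1 + j * (q - 1) / r + 0.5))

def owa(unit_scores):
    """P_i = Max_j [ QA(j) ^ B_j ], B_j the j-th highest unit score."""
    r = len(unit_scores)
    B = sorted(unit_scores, key=SCALE.index, reverse=True)
    best = max(min(qa(j, r), SCALE.index(B[j - 1]) + 1)
               for j in range(1, r + 1))
    return SCALE[best - 1]

# The four-expert example from the text:
print(owa(["M", "H", "L", "VH"]))   # -> M
```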
5. Conclusion
Note

1. From a pragmatic point of view we suggest that those most important be given the rating P.
References
Arrow, K.J. (1951). Social Choice and Individual Values. New York: John Wiley & Sons.
Caudill, M. (1990). "Using Neural Nets: Fuzzy Decisions," AI Expert (April), 59-64.
Dubois, D., and H. Prade. (1985). "A Review of Fuzzy Sets Aggregation Connectives," Information Sciences 36, 85-121.
Luce, R.D., and H. Raiffa. (1967). Games and Decisions: Introduction and Critical Survey. New York: John Wiley & Sons.
Miller, G.A. (1969). "The Organization of Lexical Memory." In G.A. Talland and N.C. Waugh, eds., The Pathology of Memory. New York: Academic Press.
Roy, B. (1973). "How Outranking Relation Helps Multiple Criteria Decision Making." In J.L. Cochrane and M. Zeleny, eds., Multiple Criteria Decision Making. Columbia, SC: University of South Carolina Press.
Roy, B., and P. Bertier. (1972). "La Methode ELECTRE II: Une Application au Media-Planning," Sixth International Conference on Operational Research, Dublin, 21-25.
Yager, R.R. (1980). "Competitiveness and Compensation in Decision Making: A Fuzzy Set Based Interpretation," Computers and Operations Research 7, 285-300.
Yager, R.R. (1981). "A New Methodology for Ordinal Multiple Aspect Decisions Based on Fuzzy Sets," Decision Sciences 12, 589-600.
Yager, R.R. (1988). "On Ordered Weighted Averaging Aggregation Operators in Multi-Criteria Decision Making," IEEE Transactions on Systems, Man and Cybernetics 18, 183-190.
Yager, R.R. (1992). "Applications and Extensions of OWA Aggregations," International Journal of Man-Machine Studies 37, 103-132.