Realistic Evaluation - Review of Tilley & Pawson
Reviews
Evaluation 1998; 4; 234
DOI: 10.1177/13563899822208446
Published by SAGE Publications: http://www.sagepublications.com
. . . enhanced when practitioners, decision makers, and other users fully understand the
strengths and weaknesses of evaluation data, and that such understanding is increased by
being involved in making methods decisions. (p. 243)
This argument is supported by Exhibit 11.1, titled ‘Reasons Primary Users Should be
Involved in Methods Decisions’. Eight reasons are outlined which capture the
argument, and which could be used as a summary and for evaluation training
purposes.
The second device is the Menu, which is described as a tool for working with
stakeholders in selecting evaluation decision options. The audience for the menu is
ultimately the evaluation user, but it is provided in the text as a contribution to the
evaluating armoury of the reader. Essentially, menus offer the reader the range of
The Menus provide the reader with useful starting points for professional decision
making. Alongside these communicative devices are visual jokes and cartoons, each of
which makes a point about evaluation use. The different forms of writing make
Utilization-Focused Evaluation an unusually communication-rich text.
At first sight being a ‘supreme underlabourer’ presents a serious ethical problem for a
utilization-focused evaluator. As Patton himself acknowledges, this issue needs to be
given ‘serious consideration’. Ernest House (quoted on p. 365) accuses utilization-focused
evaluation of having an unacceptable ethic if taken to its logical conclusion. He
has called this position ‘clientism’. House demands: ‘What if the client is Adolf
Eichmann, and he wants the evaluator to increase the efficiency of his concentration
camps?’. How does Patton respond to this predicament? House argues that he must
have criteria at work other than the intended uses of primary users. Patton does,
although they remain rather shadowy and ill-defined. His response connects with an
important dimension of his approach to the notion of audience. Critical of an abstract or
vague interpretation of audience, Patton emphasizes what might be called the sociology
of evaluation use. His use of the concept of audience is highly situated, tied to particular
people at particular times. To use yet another metaphor, use is ‘nested’ in the concerns
of people with specific interests. It is the quality of the work with these people ‘which is
the key to specifying and achieving intended uses’ (p. 43). Patton goes further by
suggesting in Menu 3.1 (p. 54) that evaluators should find and cultivate people who
want to learn. Given the centrality of what Patton calls the ‘personal factor’, he
acknowledges the importance of project and stakeholder selection, presumably by the
evaluator. Further, he acknowledges the evaluator as a moral and ethical stakeholder
who brings professional standards and ‘his or her own sense of morality and integrity’
(p. 364) to the negotiation with primary users. In a transcript of a debate with
Patton, then, makes his ethical and moral stand at the point at which he is deciding
whether or not to work with a client. Once he makes that decision (though he does not
state explicitly on what basis it is made), the primacy of the user comes into
play. Is it uncharitable to suggest that by the time these various selections and
negotiations have been completed, work with the primary users and their inclinations
is shaped by ethical and moral considerations which are derived not from them but
from the evaluator? In the world I inhabit, the luxury of client selection has not quite
arrived. However, Patton’s position is operationalizable, perhaps in muted form, by all
of us in our selection of bids to make, invitations to tender we choose to follow up, and
estimations of the extent to which we can negotiate the evaluation brief with a user
such that they can get the most out of its use and we can exercise ‘professional
standards’ whatever they may be. It is typical of the generosity of the book that the
resources for this debate are present for readers to generate their own perspectives on
this dimension of utilization-focused evaluation. However, this ethical dimension is not
the real ‘Achilles’ heel’ of utilization-focused evaluation. Much more of a problem,
Patton argues, is the turnover of primary intended users. The ‘personal factor’ is
important for utilization-focused evaluation, in which the relationship between the
primary users and the evaluators is nurtured by ‘getting them to commit time and
attention . . . dealing with the political dynamics, building credibility and conducting
the evaluation in an ethical manner’. When there is primary user turnover, all these
negotiations might need to be recast, involving adjustments to timing, reporting and use.
Before leaving this issue of ‘audience’, I should mention a further worry with the
highly situated and narrow definition of intended user and intended use. What does it
say about the ‘responsibility’ of the evaluator to a wider constituency, the public, for
example? It seems entirely reasonable that the evaluation of a programme in the public
domain, funded by public money, should have a sense of evaluation use which is
beyond the immediate and specific users of the evaluation and connects to the
responsibility of an evaluation to provide a public account of the effects of using public
money in a particular way. I suppose it might be argued that such issues are established
before the evaluation begins at the ‘client selection’ stage or that the ‘public’ is
represented by proxy in such cases by public officials. However, the thrust and vigour
of Patton’s position lead me to believe that, in reality, such distinctions will be lost.
The central two parts of the book provide frameworks, examples and operating
principles for the evaluation process. I will end this review by giving some tasters from
the extensive list of ingredients. Part Two, titled ‘Focusing Evaluations: Choices,
Options and Decisions’, covers questions associated with managing the evaluation
process. It ranges from issues associated with the way in which evaluations might be
oriented in the context of shifting situational factors, through considerations of the way
programme goals might shape evaluation processes, theories of action are incorporated
It’s a shame the title begins with ‘statistics’. First, statistics is viewed by many, perhaps
not completely without reason, as a partner of the devil. It is hardly surprising,
Charles S. Reichardt
Jenny A. Novotny
University of Denver, CO, USA
If anyone dares to elaborate a canon of evaluation books, such as the one produced by
H. Bloom for literary works, Ray Pawson and Nick Tilley’s Realistic
Evaluation will most likely be among the books included in it. This is an original, ambitious,
provocative, witty and useful book which proposes a new and practical approach to
evaluation, providing also its epistemological foundations and a historical perspective
on evaluation research and practice. All this in less than 250 pages! The book is
described by its authors as ‘a stock-taking exercise and a manifesto concerning
evaluation’s quest for scientific status’ (p. xi) and it consists of an introduction and nine
chapters, the first two devoted to a critique and the rest to a detailed presentation of
‘realistic evaluation’.
The first chapter presents a brief history of evaluation research. This light history is
the means by which the authors ‘position’ their perspective. In the process, they
provide a useful overview but also some misunderstandings: the point of the method in
Campbell and Stanley is not to exclude ‘every conceivable rival’ causal agent (p. 5) but
those that are plausible, and their ‘whole point’ is not ‘to make the basic causal
inference secure’ (p. 6) but as solid as possible (Campbell, like Popper, was a
fallibilist). Furthermore, the valuable discussion could greatly benefit from Albert
Hirschman’s work on reactionary and progressive rhetoric (Hirschman, 1991).
Chapter 2 is a ‘constructive critique’ of the experimental tradition in evaluation. The
authors attack the ‘successionist’ (or external) approach to causality used by experi-
mental evaluation, contrasting it with the ‘generative’ metatheory of causation. They
develop their critique through a detailed analysis of two relevant examples drawn from
the field of criminology. This distinction between different views of causality
corresponds to a philosophical discussion that was started by Aristotle. Pawson and
Tilley follow Harré’s version (although not mentioning the fundamental work of Piaget
243
244
245
References
Hirschman, A. (1991) The Rhetoric of Reaction. Cambridge, MA: Harvard University Press.
Lawson, T. (1997) Economics & Reality. London: Routledge.
Mohr, L. (1996) Impact Analysis for Program Evaluation. London: Sage.
Rist, R. (1995) ‘Postscript: Development Questions and Evaluation Answers’, in R. Picciotto and
R. Rist (eds) Evaluating Country Development Policies and Programs: New Approaches for a
New Agenda, New Directions for Evaluation, no.67. San Francisco, CA: Jossey-Bass.