American Journal of Evaluation 2013 Fetterman
Published by SAGE: http://www.sagepublications.com
1 Fetterman & Associates, San Jose, CA, USA
2 University of South Florida, Tampa, FL, USA
3 University of South Carolina, Columbia, SC, USA
4 University of North Carolina, Chapel Hill, NC, USA

Corresponding Author:
David Fetterman, Fetterman & Associates, 5032 Durban Court, San Jose, CA 95138, USA.
Email: fettermanassociates@gmail.com
Collaborative evaluators are in charge of the evaluation, but they create an ongoing engagement between evaluators and stakeholders, contributing to stronger evaluation designs, enhanced data collection and analysis, and results stakeholders understand and use. Collaborative evaluation covers the broadest scope of practice, ranging from an evaluator's consultation with the client to full-scale collaboration with specific stakeholders in every stage of the evaluation (Rodríguez-Campos & O'Sullivan, 2010).
Participatory evaluators jointly share control of the evaluation. Participatory evaluations range from program staff members and participants taking part in the evaluator's agenda to participation in an evaluation that is jointly designed and implemented by the evaluator and program staff members. They encourage participants to become involved in defining the evaluation, developing instruments, collecting and analyzing data, and reporting and disseminating results (Shulha, 2010). Typically, "control begins with the evaluator but is divested to program community members over time and with experience" (Cousins, Whitmore, & Shulha, 2013, p. 14).
Empowerment evaluators view program staff members, program participants, and community members as in control of the evaluation. However, empowerment evaluators do not abdicate their responsibility and leave the community to conduct the evaluation on its own. They serve as critical friends or coaches who help keep the process on track, rigorous, responsive, and relevant. Empowerment evaluations are not conducted in a vacuum; they are conducted within the conventional constraints and requirements of any organization. However, participants determine how best to meet those external requirements and goals (Fetterman & Wandersman, 2010).
Misleading Characterization
Cousins et al. present a hypothetical about an evaluation approach that hands off the evaluation to the community without the assistance of an evaluator (p. 14). They use a discussion with Donna Mertens about "turning control over to stakeholders" as their example. In this conversation, she concluded it would be "chaos" and that there needed to be "a partnership rather than a relinquishing of responsibility on the part of the evaluator" (p. 14). This is a misleading characterization of any major stakeholder involvement approach. Empowerment evaluations, for example, place program staff and community members in control of the evaluation. However, empowerment evaluators do not abdicate their responsibility. They work with communities as evaluation coaches and critical friends. This should be apparent from Fetterman, Kaftarian, and Wandersman's (1996) earliest writings to case examples in Stanford University's School of Medicine (Fetterman et al., 2010) and Hewlett-Packard's US$15 million Digital Village empowerment evaluation (Fetterman, 2013).
Empowerment evaluation is the opposite: the stakeholders being in control is primary, and collaboration is secondary. Stakeholder involvement represents a more generic and descriptive term for all three approaches. Building on the work of others, engaging in the dialogue, or at least acknowledging previous work in the discourse, has proven to generate more light than heat and greater conceptual clarity in the long run.
Cousins et al.'s (2013) resistance (p. 14) and "discomfort" (p. 13) with defining terms, compartmentalizing approaches, and clarifying distinctions are not in the best interests of participatory evaluation, stakeholder involvement approaches, or evaluation in general. Our disagreements with Cousins et al. (2013), however, should not be used to divide and weaken strong bonds and relationships. They should be used to refine and improve our efforts. There is overlap among collaborative, participatory, and empowerment approaches in practice. Synergistic strength is a function of interrelated and reinforcing characteristics and features. We invite colleagues to continue to engage in critical reflection and dialogue in an effort to strengthen the quality of our work. It is our hope that our dialogue will continue to produce more light than heat in the field.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
References
Bryk, A. S. (Ed.). (1983). Stakeholder-based evaluation. New Directions for Program Evaluation, 17. San Francisco, CA: Jossey-Bass.
Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy
Analysis, 14, 397–418.
Cousins, J. B., & Earl, L. M. (1995). Participatory evaluation: Enhancing evaluation use and organizational learning capacity. The Evaluation Exchange, 1. Cambridge, MA: Harvard Family Research Project. Retrieved from http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/participatory-evaluation/participatory-evaluation-enhancing-evaluation-use-and-organizational-learning-capacity
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 80,
5–23.
Cousins, J. B., Whitmore, E., & Shulha, L. (2013). Arguments for a common set of principles for collaborative
inquiry in evaluation. American Journal of Evaluation, 34, 7–22.
Emerson, R. W. (1993). Self-reliance and other essays. Nashville, TN: American Renaissance.
Fetterman, D. M. (2001). Foundations of empowerment evaluation. Thousand Oaks, CA: Sage.
Fetterman, D. M. (2013). Empowerment evaluation in the digital villages: Hewlett-Packard’s $15 million race
toward social justice. Stanford, CA: Stanford University Press.
Fetterman, D. M., Deitz, J., & Gesundheit, N. (2010). Empowerment evaluation: A collaborative approach to
evaluating and transforming a medical school curriculum. Academic Medicine, 85, 813–820.
Fetterman, D. M., Kaftarian, S., & Wandersman, A. (Eds.). (1996). Empowerment evaluation: Knowledge and tools for self-assessment and accountability. Thousand Oaks, CA: Sage.
Fetterman, D. M., & Wandersman, A. (Eds.). (2005). Empowerment evaluation principles in practice. New
York, NY: Guilford.
Fetterman, D. M., & Wandersman, A. (2007). Empowerment evaluation: Yesterday, today, and tomorrow.
American Journal of Evaluation, 28, 179–198.
Fetterman, D. M., & Wandersman, A. (2010, November). Empowerment evaluation essentials: Highlighting
the essential features of empowerment evaluation. Paper presented at the American Evaluation Association
Conference, San Antonio, Texas.
Mark, M. M., & Shotland, R. L. (1995). Stakeholder-based evaluation and value judgments: The role of perceived power and legitimacy in the selection of stakeholder groups. Evaluation Review, 9, 605–626.
Miller, R., & Campbell, R. (2006). Taking stock of empowerment evaluation: An empirical review. American
Journal of Evaluation, 27, 296–319.
O’Sullivan, R. (2004). Practicing evaluation: A collaborative approach. Thousand Oaks, CA: Sage.
Patton, M. Q. (1997). Toward distinguishing empowerment evaluation and placing it in a larger context. American Journal of Evaluation, 18, 147–163.
Patton, M. Q. (2005). Toward distinguishing empowerment evaluation and placing it in a larger context: Take
two. American Journal of Evaluation, 26, 408–414.
Rodrı́guez-Campos, L., & O’Sullivan, R. (2010, November). Collaborative evaluation essentials: Highlighting
the essential features of collaborative evaluation. Paper presented at the American Evaluation Association
Conference, San Antonio, Texas.
Rodrı́guez-Campos, L., & Rincones-Gómez, R. (2013). Collaborative evaluations: Step-by-step (2nd ed.).
Stanford, CA: Stanford University Press.
Scriven, M. (1997). Empowerment evaluation examined. American Journal of Evaluation, 18, 165–175.
Scriven, M. (2005). Empowerment evaluation principles in practice. American Journal of Evaluation, 26,
415–417.
Sechrest, L. (1997). Review of the book Empowerment evaluation: Knowledge and tools for self-assessment and accountability. Environment and Behavior, 29, 422–426. Retrieved from http://www.davidfetterman.com/SechrestBookReview.htm
Shulha, L. (2010, November). Participatory evaluation essentials: Highlighting the essential features of participatory evaluation. Paper presented at the American Evaluation Association Conference, San Antonio, Texas.
Stufflebeam, D. L. (1994). Empowerment evaluation, objectivist evaluation, and evaluation standards: Where
the future of evaluation should not go and where it needs to go. American Journal of Evaluation, 15,
321–338.