
Cluster Evaluation

In: Evaluation for the 21st Century: A Handbook

By: James R. Sanders


Edited by: Eleanor Chelimsky & William R. Shadish
Pub. Date: 2013
Access Date: September 22, 2020
Publishing Company: SAGE Publications, Inc.
City: Thousand Oaks
Print ISBN: 9780761906117
Online ISBN: 9781483348896
DOI: https://dx.doi.org/10.4135/9781483348896
Print pages: 396-404
© 1997 SAGE Publications, Inc. All Rights Reserved.
This PDF has been generated from SAGE Research Methods. Please note that the pagination of the
online version will vary from the pagination of the print book.
SAGE Research Methods
1997 SAGE Publications, Ltd. All Rights Reserved.

Cluster Evaluation

James R. Sanders

One of the new developments in program evaluation over the past decade has been a creative and evolving
approach called cluster evaluation. The request for cluster evaluation comes from funders of projects that
have a common mission but have local procedural autonomy; these funders want an evaluation of the results
obtained from their investments.

The label of cluster evaluation was first used in 1988 by Dr. Ronald Richards, director of evaluation for
the Kellogg Foundation at the time. The label described a design specified in a contract for an external
evaluation of a W. K. Kellogg Foundation-funded health initiative. Since then, cluster evaluations have
been commissioned by the Kellogg Foundation in science, education, agricultural health and safety, youth
development, public health, health professions education, public policy education, food systems professions
education, youth exchanges, and leadership development. In addition, others outside the Kellogg Foundation
have been doing cluster evaluation under other names or have adapted or extended the ideas behind cluster
evaluation to start a literature on cluster evaluation methodology (Barley & Jenness, 1993, 1995; Campbell,
1994; W. K. Kellogg Foundation, 1992, 1995; Parsons, 1994; Sanders, 1995; Worthen & Matsumoto, 1994).
In addition, many cluster evaluation technical reports have been prepared for the Kellogg Foundation since
1990.

In 1994, the Topical Interest Group on Cluster Evaluation was created within the American Evaluation
Association by evaluators who wanted to share ideas and information. In this chapter I will describe the
fundamentals of cluster evaluation and discuss its strengths and limitations. Future development of cluster
evaluation methodology will depend on program evaluators' sharing their cluster evaluation experiences.

Fundamentals of Cluster Evaluation

Cluster evaluation is one kind of program evaluation. It is evaluation of a program that has projects in multiple
sites aimed at bringing about a common general change. Examples of such changes include increasing adult
literacy in science or in agricultural safety, improving the health practices of parents in rural communities, and
increasing the participation of rural citizens in public policy formulation. Projects are relatively autonomous.
Each project develops its own strategy to accomplish the program goal, uses its own human and fiscal
resources to carry out its plan, and has its own context.

Questions Addressed

The questions that cluster evaluation addresses are as follows:

1. Overall, have changes occurred in the desired direction? What is the nature of these
changes?
2. In what types of settings have what types of change occurred, and why?
3. Are there insights to be drawn from the program failures and successes that can inform
future initiatives?
4. What is needed to sustain desired changes?

Cluster evaluation is one of several approaches to program evaluation that can be used to address these
questions. Other approaches include analyses of project-level evaluations and use of site visits to make
judgments about individual programs. Such approaches are common in program evaluation, but they address
the above questions in ways very different from cluster evaluation.

Audiences

Cluster evaluations serve two primary audiences: program directors at the parent organization who are
responsible for current and future programming decisions, and project staff who are responsible for local
programming decisions. Secondary audiences include the administration and board of the parent
organization, professionals working in the program area outside of the cluster being evaluated, and policy
leaders who may use the insights supplied by cluster evaluation to shape organizational or governmental
policy.

Relationships

A basic assumption underlying the design of a cluster evaluation is that it does not dictate how projects plan
and conduct either their operations or their evaluations. The cluster evaluator becomes part of a team that
includes (a) the sponsor/funder/parent organization, (b) the project staffs (including the project evaluators),
and (c) the cluster evaluator. The dynamics of this tripartite relationship bear directly on the success of
cluster evaluation. This relationship leads to the development and use of a shared vision/goal/mission for
the program. If the parent organization representative lacks conceptual clarity, the cluster evaluation will be
affected. The same will be true if representatives of projects or the cluster evaluator lacks an understanding
of the mission of the program. The productivity of the cluster evaluation is affected even more if more than
one of the three parties lacks conceptual clarity about the program. Thus shared vision among participants is
one factor that can affect cluster evaluation. Other relationship factors affecting cluster evaluation include the
following:

▪ Role definitions: Cluster evaluation requires a partnership among three groups. There are
costs involved in time and expenses when collaboration is expected.
▪ Cooperation: If any member backs out of the partnership, it will be weakened; parity
among the partners is important.
▪ Conflict resolution: Communication is especially important in any disagreement,
especially when a request by one or more of the partners negatively affects the work of
the others.
▪ Trust: Information shared in confidence is to be protected.
▪ Credit: Giving credit where credit is due is important when instruments, information, and
ideas are being shared in the partnership.

Design

Cluster evaluation has taken various forms, but it has certain basic characteristics: (a) It is holistic, (b) it
is outcome oriented, (c) it seeks generalizable learning, and (d) it involves frequent communications and
collaborations among the partners. Cluster evaluation can be either formative or summative, but it has most
frequently been both. Typically, a cluster evaluation project will involve the following steps.

Site visits by the cluster evaluators. These visits include introductions, orientation to the cluster evaluation,
role definitions, orientation to the local project, collection of documents, arrangement for the first networking
conference, and identification of evaluation issues.

Analysis of project and program documents. This analysis includes the search for commonalities and
uniqueness in outcome objectives across projects in the program, development of categories for project
strategies, and efforts to understand the program mission and establish conceptual clarity.

Networking conferences. Project and program directors plan to meet as a cluster group for a day or two
every 6 months. The first conference might be dedicated to project presentations of intended outcomes
and strategies, mapping each project onto a list of possible program outcomes and associating alternative
strategies with each outcome. Subsequent conferences could focus on sharing evaluation plans, instruments,
and findings, so as to begin identifying operational definitions and assessment techniques for outcomes and
activities that have been tried and appear promising as well as activities that have been tried and have
not worked. Outside speakers or consultants might be invited to stimulate new ways of thinking about the
program or about evaluation methods. Later conferences might focus on discussions and debate of tentative
evaluative conclusions, with a look at existing and missing documentation, alternative frameworks for judging
the program, and gaining outside interpretations of cluster evaluation data.

Collection and analysis of data. This is a continuous process during the life of the cluster evaluation.
It includes refining evaluation questions, taking stock of existing data, using working hypotheses (Glaser
& Strauss, 1967) to test and refine tentative conclusions, examining data needs and who is in the best
position to gather needed information, seeking cooperation in data collection, developing new instruments
and data collection plans that are needed, scheduling and supervising data collection (including naturalistic
observations at sites), analyzing data as they become available, and using results to refine future data
collection and analyses.

Interpretation and reporting. Like data collection and analysis, interpreting and reporting data are continuous
processes. Cluster evaluation draws from a diversity of perspectives (including methodological) to arrive
at conclusions. Multiple interpretations of data early in the cluster evaluation process will allow for the
testing of certain interpretations over time in the projects. Networking conferences are valuable forums for
interpretations of early cluster evaluation findings.

Confirmation of findings. Final conclusions are reviewed for accuracy and conceptual integrity by participants
at each level of the cluster evaluation.

The elements of cluster evaluation as a continuous process are depicted in Figure 28.1. The stages of
inquiry reflected in cluster evaluation design are not unique to cluster evaluation. The methodology
used in cluster evaluation has a strong and well-documented tradition, beginning with the classic
work of Glaser and Strauss (1967), found in the literature under such labels as qualitative methods,
case study methods, and naturalistic methods. What is unique about cluster evaluation is the combination of the four
characteristics listed above: (a) It is holistic, (b) it is outcome oriented, (c) it seeks generalizable learning,
and (d) it involves frequent communications and collaborations among the partners. It is an approach to
evaluation that is intrusive and affects the thinking and practices of project staff, the parent or funding
organization, and the cluster evaluator along the way. It does not seek to establish causation through
controlled comparative designs, but instead depends on naturalistic observations of many people to infer and
test logical connections. The underlying paradigm is one of argumentation and rules of evidence, such as
those found in our legal system. It strives for documentation and logical conclusions that have been tested
as fully as possible given resources, time, and methodological constraints of the evaluation. In terms of The
Program Evaluation Standards (Joint Committee on Standards for Educational Evaluation, 1994), it places a
premium on all four attributes of sound program evaluation: utility, feasibility, propriety, and accuracy.


Figure 28.1. The Elements of Cluster Evaluation

Strengths and Limitations of Cluster Evaluation

Like any single approach to program evaluation, cluster evaluation has its strengths and limitations.
Therefore, other program evaluation approaches (e.g., site visits by the parent organization or analysis of
project-level evaluations) are sometimes recommended to supplement cluster evaluation.

The strengths of cluster evaluation are its responsiveness to stakeholder needs and the interactions among
participants. It provides a framework within which both formative and summative evaluation can be done.
The specifics of data collection develop as the partners interact and digest information from others in the
partnership. Cluster evaluation is a continuous process of communication, learning, and reshaping of ideas.
It engages many minds and perspectives in strengthening and studying program effectiveness. As ideas are
shared and learning takes place, all of those involved in the cluster evaluation are affected. Capacity is built
in both the program subject matter and the evaluation. Contextual differences across projects are revealed,
clarity of purpose and plans is achieved, and new understanding takes place. The overall value added by
cluster evaluation consists of the insights that go beyond individual project experiences.

The limitations of cluster evaluation can be very serious, depending on one's philosophical orientation toward
evaluation methodology and the performance of the cluster partners. Cluster evaluation is continuously
evolving, and so is the program under evaluation, which is itself shaped by the cluster evaluation.
The program is also a product of contextual differences across
projects. Given these circumstances, those who would rely on controlled comparative designs to establish
program impact, for example, will be disappointed with the cluster evaluation approach to program evaluation.
Further, the potential for cluster evaluators to lose their independence through the cluster evaluation process
is a very real concern, and one not to be taken lightly. Employment of a metaevaluation panel to oversee the
cluster evaluation is one way to address this concern. The opportunity to look for long-term change is often
missing in cluster evaluations, and funding or parent organizations need to look at the feasibility of conducting
follow-up cluster evaluations beyond the initial study period. Finally, the success of cluster evaluation is
dependent on all of the partners performing as expected over the life of the cluster evaluation. All partners
need to take seriously their responsibilities for sharing information, responding to reasonable requests from
the others, listening to the others, keeping the same personnel assigned to the program, and keeping the
mission of the program in the forefront as plans are refined. A summary of the strengths and limitations of
cluster evaluation is provided in Table 28.1.
TABLE 28.1 Summary of Strengths and Limitations of Cluster Evaluation

Strengths:

▪ Contributes in the formative role and allows logical inferences about attribution within the summative role
▪ Evolves
▪ Is continuous
▪ Is collaborative/participatory
▪ Builds capacity
▪ Brings information beyond individual project evaluations (e.g., contextual differences)
▪ Focuses on broad issues
▪ Provides synergy
▪ Involves an outcome orientation/documentation and logical explanation of outcomes, as well as a push toward clarity
▪ Includes multiple perspectives

Limitations:

▪ Includes no controls
▪ Has potential for co-optation
▪ Does not evaluate individual projects
▪ Has time limits for looking at change over the long term
▪ Depends on goodwill, communication, coordination, and relationships across three partners
▪ Depends on stability of cluster membership
▪ Depends on conceptual clarity/shared vision by partners
▪ Cannot establish causal attribution between results and interventions

Conclusion

Cluster evaluation is in its infancy and will continue to be shaped by the ideas and experiences of evaluators
using it as an approach to program evaluation. As more experience is gained by program evaluators who
are doing cluster evaluation, we can expect to see a growing literature on cluster evaluation practices and
methodology.

References

Barley, Z. A., & Jenness, M. (1993). Cluster evaluation: A method to strengthen evaluation in smaller
programs with similar purposes. Evaluation Practice, 14, 141–147. http://dx.doi.org/10.1016/0886-1633%2893%2990004-9

Barley, Z. A., & Jenness, M. (1995). Guidelines for the conduct of cluster evaluations. Paper presented at
the annual meeting of the American Evaluation Association, Vancouver.

Campbell, J. L. (1994). Emerging issues in cluster evaluation: Issues of cluster evaluation use. Paper
presented at the annual meeting of the American Evaluation Association, Boston.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research.
Chicago: Aldine.

Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards.
Thousand Oaks, CA: Sage.

W. K. Kellogg Foundation. (1992). Cluster evaluation information. Battle Creek, MI: Author.

W. K. Kellogg Foundation. (1995). Cluster evaluation model of evolving practices. Battle Creek, MI: Author.

Parsons, B. A. (1994). Relationship and communication issues posed by cluster evaluation. Paper presented
at the annual meeting of the American Evaluation Association, Boston.


Sanders, J. R. (1995, March). Cluster evaluation: A creative response to a thorny evaluation issue.
Newsletter of the AEA Nonprofits and Foundations TIG, p. 5.

Worthen, B. R., & Matsumoto, A. (1994). Conceptual challenges confronting cluster evaluation. Paper
presented at the annual meeting of the American Evaluation Association, Boston.

http://dx.doi.org/10.4135/9781483348896.n28
