EVALUATION

By: Maria Khan


Evaluation is the systematic determination of the merit, worth, and significance of something or
someone, using criteria judged against a set of standards. Evaluation refers to a periodic process of
gathering data and then analyzing or ordering it in such a way that the resulting information can
be used to determine whether your organization or program is effectively carrying out planned
activities, and the extent to which it is achieving its stated objectives and anticipated results.
Evaluation utilizes many of the same methodologies used in traditional social research, but
because evaluation takes place within a political and organizational context, it requires group
skills, management ability, political dexterity, sensitivity to multiple stakeholders and other skills
that social research in general does not rely on as much.

The Goals of Evaluation

The generic goal of most evaluations is to provide "useful feedback" to a variety of audiences
including sponsors, donors, client-groups, administrators, staff, and other relevant constituencies.
Most often, feedback is perceived as "useful" if it aids in decision-making. But the relationship
between an evaluation and its impact is not a simple one -- studies that seem critical sometimes
fail to influence short-term decisions, and studies that initially seem to have no influence can
have a delayed impact when more congenial conditions arise. Despite this, there is broad
consensus that the major goal of evaluation should be to influence decision-making or policy
formulation through the provision of empirically-driven feedback.

Types of Evaluation

There are many different types of evaluations depending on the object being evaluated and the
purpose of the evaluation. Perhaps the most important basic distinction in evaluation types is that
between formative and summative evaluation. Formative evaluations strengthen or improve the
object being evaluated -- they help form it by examining the delivery of the program or
technology, the quality of its implementation, and the assessment of the organizational context,
personnel, procedures, inputs, and so on. Summative evaluations, in contrast, examine the effects
or outcomes of some object -- they summarize it by describing what happens subsequent to
delivery of the program or technology; assessing whether the object can be said to have caused
the outcome; determining the overall impact of the causal factor beyond only the immediate
target outcomes; and, estimating the relative costs associated with the object.

Formative evaluation includes several evaluation types:

• needs assessment determines who needs the program, how great the need is, and what
might work to meet the need
• evaluability assessment determines whether an evaluation is feasible and how
stakeholders can help shape its usefulness
• structured conceptualization helps stakeholders define the program or technology, the
target population, and the possible outcomes
• implementation evaluation monitors the fidelity of the program or technology delivery
• process evaluation investigates the process of delivering the program or technology,
including alternative delivery procedures

Summative evaluation can also be subdivided:

• outcome evaluations investigate whether the program or technology caused
demonstrable effects on specifically defined target outcomes
• impact evaluation is broader and assesses the overall or net effects -- intended or
unintended -- of the program or technology as a whole
• cost-effectiveness and cost-benefit analysis address questions of efficiency by
standardizing outcomes in terms of their dollar costs and values
• secondary analysis reexamines existing data to address new questions or use methods not
previously employed
• meta-analysis integrates the outcome estimates from multiple studies to arrive at an
overall or summary judgement on an evaluation question (a minimal pooling sketch follows
this list)
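
To make the meta-analysis idea concrete, here is a minimal sketch in Python that pools effect estimates from a handful of hypothetical studies using fixed-effect, inverse-variance weighting. The study names, effect sizes and standard errors are invented for illustration; a real synthesis would also assess heterogeneity and study quality.

```python
import math

# Hypothetical effect estimates and standard errors from three studies (illustrative only).
studies = [
    {"name": "Study A", "effect": 0.30, "se": 0.10},
    {"name": "Study B", "effect": 0.12, "se": 0.08},
    {"name": "Study C", "effect": 0.25, "se": 0.15},
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1 / se^2.
weights = [1.0 / (s["se"] ** 2) for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled effect estimate: {pooled:.3f}")
print(f"Approximate 95% CI: {pooled - 1.96 * pooled_se:.3f} to {pooled + 1.96 * pooled_se:.3f}")
```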

DO’S AND DON’TS IN EVALUATION

DO’S:
1. Think beyond budget. The reason to evaluate is not to justify the budget, but to assess whether a
campaign, its process and its methodology are producing results.

2. Set goals and objectives. Unless objectives are explicit and SMART (Specific, Measurable,
Achievable, Relevant and Timely), it is not possible to devise a meaningful form of evaluation.

3. Select key performance indicators based on campaign aims. They should reflect the goals, so think
about creating tangible outcomes wherever possible.

4. Use surveys to measure soft issues. Even with soft issues, where the aim may be to influence
attitudes and shape opinions rather than immediately change behavior, pre- and post-campaign
research can indicate how opinion is moving (a worked pre/post comparison follows this list).

5. Build in tangibles. For marketing-based campaigns, build in a response channel that can be
monitored – an advice line, dedicated e-mail channel, response form for information, and so on.

6. Monitor traditional media. Media coverage is the starting point for many traditional evaluation
techniques. Sign up a good media monitoring company, brief them thoroughly, and keep them in the
loop about what you are issuing, when and to whom.

7. Monitor new media. In many areas, the web is more influential than traditional media. Sign up a
specialist new media monitoring company who can monitor web appearances for you and also, if
required, review newsgroups, blogs and feeds.

8. Google and DIY. If you don’t sign up a new media specialist then you can at least do it yourself by
selecting key words and phrases that you can search on Google pre- and post-campaign to see how
your client’s ownership of and ranking against these key concepts has changed. Also, Google
provides a free ‘Alerts’ service where you set a keyword and Google will notify you of new appearances.

9. Multiple objectives require multiple measurement tools. Where a campaign has mixed objectives
you may need different evaluation techniques for each. There may be a need to combine both
quantitative and qualitative measurement techniques. Again this reinforces the case for keeping the
objectives clear and simple.

10. Borrow budget. In many cases behavior will be subject to multiple influences – PR, advertising,
direct mail, incentives, sales activity, and so on. This is a good reason for the cost of evaluation to
come from a general marketing pot, rather than just the PR budget.
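
As an illustration of tip 4 above, the sketch below compares favourable responses in a pre-campaign and a post-campaign survey wave and puts a rough 95% confidence interval on the shift. The response counts are invented, and a fuller analysis would weight the samples and allow for other influences on opinion.

```python
import math

# Invented survey counts: favourable responses out of total respondents in each wave.
pre_favourable, pre_n = 210, 600     # pre-campaign wave
post_favourable, post_n = 265, 580   # post-campaign wave

p_pre = pre_favourable / pre_n
p_post = post_favourable / post_n
shift = p_post - p_pre

# Normal-approximation standard error of the difference between two proportions.
se = math.sqrt(p_pre * (1 - p_pre) / pre_n + p_post * (1 - p_post) / post_n)
low, high = shift - 1.96 * se, shift + 1.96 * se

print(f"Pre-campaign favourable:  {p_pre:.1%}")
print(f"Post-campaign favourable: {p_post:.1%}")
print(f"Shift in opinion: {shift:+.1%} (approx. 95% CI {low:+.1%} to {high:+.1%})")
```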

DON’TS:
1. Don’t take all the responsibility. PR alone doesn’t drive sales and profitability, so although clients
will let you take heroic responsibility for them, explain that you are the messenger and that others
usually carry the message forward to action. Measure the PR contribution, not that of others.

2. Don’t disparage advertising value equivalents (AVEs). Academics and those promoting more
elaborate, and expensive, performance measures hate AVEs. Their merit is that they are
low cost and quantify performance in simple monetary terms that all the management team, but
especially the bean counters, can understand (a minimal AVE calculation follows this list).

3. Don’t rush to judgment. Many traditional media have a natural cycle that spans many months and
opinion shifts often happen slowly. While it is tempting to seek an early measure of campaign
effectiveness, the true impact may not be measurable until several months have passed.

4. Don’t rely exclusively on clipping services. Do additional media research over and above that
provided by the clipping service. If you discover they are missing references, let them know and
agree on measures with them to improve their performance.

5. Don’t believe in magic bullets. There is no single evaluation technique that meets all needs.
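
For readers who want to see how an AVE figure is typically assembled, here is a minimal sketch: each clipping’s space is multiplied by the equivalent advertising rate for that outlet and the results are summed. The outlets, sizes and rates are invented; in practice the rates would come from each publication’s rate card.

```python
# Invented clippings: (outlet, size in column inches, advertising rate in $ per column inch).
clippings = [
    ("Daily Example",  12.0,  85.0),
    ("Metro Gazette",   6.5, 140.0),
    ("Trade Monthly",  20.0,  40.0),
]

# AVE: what the same editorial space would have cost if bought as advertising.
ave_per_clip = [(outlet, inches * rate) for outlet, inches, rate in clippings]
total_ave = sum(value for _, value in ave_per_clip)

for outlet, value in ave_per_clip:
    print(f"{outlet:15s} ${value:,.0f}")
print(f"{'Total AVE':15s} ${total_ave:,.0f}")
```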

An Overview of the Evaluation Process:


Most evaluations that incorporate both formative and summative evaluation have the following
steps, although the order in which they are undertaken may vary. There may also be some
retracing of steps.

Steps in the process, with the decision making involved at each step:

1. Specify, select, refine, or modify project goals and evaluation objectives (see Fig. 2.1,
‘Project evaluation framework’).
• What is the general focus of the evaluation?
• What is to be evaluated?
• Why — what are the purposes?
• Who is the evaluation for?

2. Establish standards/criteria (performance measures) where appropriate.
• What benchmarks or measures will be used to evaluate the success of the project?

3. Plan an appropriate evaluation design.
• What are the key questions that need answering?
• What is feasible in terms of budget, time, available resources and expertise?

4. Select and/or develop data gathering methods.
• What information will be gathered?

5. Collect relevant data.
• From whom?
• By whom?
• How will the information be gathered?

6. Process, summarize, and analyze relevant data.
• How will the information be analyzed and interpreted, and by whom? (Criteria for judging
will relate to Step 2.)

7. Contrast data with evaluation standards/criteria.

8. Report and feed back results.
• How will the results be communicated?
• To whom?
• By when?

9. Assess cost-benefit/effectiveness (a brief worked example follows these steps).
• What were the benefits?
• Was the investment worth it?
• Who will make such judgements?

10. Reflect on (evaluate) the evaluation.
• How will the evaluation itself be evaluated?
• How will the design be evaluated?
• How will you know if the evaluation is proceeding according to plan (a management issue)?
• How will the overall evaluation effort be judged?

Throughout Steps 1-10:
• How will the evaluation be managed — in terms of identifying and allocating tasks,
resources, personnel, etc.?
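
As a concrete illustration of Step 9, the short sketch below computes a net benefit, a benefit-cost ratio and a cost per outcome from hypothetical programme figures. The numbers are invented, and a real analysis would discount multi-year costs and benefits and test how sensitive the result is to the assumptions behind the monetized values.

```python
# Hypothetical figures for illustration only.
program_cost = 48_000.0          # total cost of the programme or campaign
monetized_benefits = 63_500.0    # outcomes expressed in dollar terms
outcomes_achieved = 1_270        # e.g. people reached who changed behaviour

net_benefit = monetized_benefits - program_cost
benefit_cost_ratio = monetized_benefits / program_cost
cost_per_outcome = program_cost / outcomes_achieved

print(f"Net benefit:        ${net_benefit:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
print(f"Cost per outcome:   ${cost_per_outcome:,.2f}")
print("Worth the investment" if benefit_cost_ratio > 1 else "Costs exceed monetized benefits")
```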

The Importance of Evaluation


Evaluation should be used as an ongoing management and learning tool to improve an
organization's effectiveness. Well-run organizations and effective programs are those that can
demonstrate the achievement of results. Results are derived from good management. Good
management is based on good decision making. Good decision making depends on good
information. Good information requires good data and careful analysis of the data. These are all
critical elements of evaluation.

Managers can and should conduct internal evaluations to get information about their programs so
that they can make sound decisions about the implementation of those programs. Internal
evaluation should be conducted on an ongoing basis and applied conscientiously by managers at
every level of an organization in all program areas. In addition, all of the program's participants
(managers, staff, and beneficiaries) should be involved in the evaluation process in appropriate
ways. This collaboration helps ensure that the evaluation is fully participatory and builds
commitment on the part of all involved to use the results to make critical program improvements.

Although most evaluations are done internally, conducted by and for program managers and
staff, there is still a need for larger-scale, external evaluations conducted periodically by
individuals from outside the program or organization. Most often these external evaluations are
required for funding purposes or to answer questions about the program's long-term impact by
looking at changes in demographic indicators such as graduation rate or poverty level. In
addition, occasionally a manager may request an external evaluation to assess programmatic or
operating problems that have been identified but that cannot be fully diagnosed or resolved
through the findings of internal evaluation.

Program evaluation, conducted on a regular basis, can greatly improve the management and
effectiveness of your organization and its programs. To do so requires understanding the
differences between monitoring and evaluation, making evaluation an integral part of regular
program planning and implementation, and collecting the different types of information needed
by managers at different levels of the organization.

A thorough evaluation helps both the organization and the individual identify strengths and
weaknesses in their respective contributions, provides for greater accountability of organizational
resources, and improves the morale of everyone involved.

References:

1. Fisher, J. & Cole, K. (1993). Leadership and Management of Volunteer Programs. San
Francisco: Jossey-Bass Publishers.

2. www2.guidestar.org

3. http://www.evancarmichael.com/Public-Relations/267/PR-Evaluation--A-Practical-Approach.html

4. Payne, D. A. (1994). Designing Educational Project and Program Evaluations: A Practical
Overview Based on Research and Experience. Boston: Kluwer Academic Publishers. (The
evaluation process steps are adapted from p. 13.)

5. http://www.socialresearchmethods.net/kb/intreval.php
