
TOPIC 3

MODELS OF MONITORING AND EVALUATION:


TYPES OF EVALUATION
PROCESS, OUTCOME AND IMPACT
Presentation Outline
Brief introduction to Monitoring and Evaluation
Models of Monitoring and Evaluation
 Program monitoring
 Outcome monitoring
 Result Based Monitoring and Evaluation (Logical
Framework)
 Network Models
Types of Evaluation
 Formative evaluation
 Summative evaluation
 Process evaluation
 Outcome evaluation
 Impact evaluation
1.0 Brief Introduction to
Monitoring and Evaluation
 Monitoring is the systematic collection and
analysis of information as a project progresses.
 Objectives:
 To improve efficiency and effectiveness, and to
increase the probability of reaching project
goals
 To keep the work on track and to alert
management when things are going wrong
 To determine whether the resources and capacity
available are sufficient and are being well used,
Monitoring cont..
 Whether the capacity you have is sufficient and
appropriate, and whether you are doing what
you planned to do.
 It is based on targets set and activities planned
during the planning phases of work.
 It forms part of the broader evaluation process
and takes place during the implementation of a
project, feeding into mid-term and terminal
evaluations.
 If done properly, Monitoring is an invaluable tool
for good management, and it provides a useful
base for evaluation.
Monitoring and evaluation
 What monitoring and evaluation have in
common is that they are geared
towards learning from what you are
doing and how you are doing it, by
focusing on:
 Efficiency
 Effectiveness
 Impact
Why do Monitoring and
evaluation?
 Monitoring and evaluation enable you to
check the “bottom line” of development work:
Not “are we making a profit?” but “are we
making a difference?” Through monitoring
and evaluation, you can:
 Review progress;
 Identify problems in planning and/or implementation;
 Make adjustments so that you are more likely to
“make a difference”.
 In many organisations, “monitoring and evaluation” is
something that is seen as a donor requirement
rather than a management tool.
Monitoring and evaluation can:
 Help you identify problems and their causes;
 Suggest possible solutions to problems;
 Raise questions about assumptions and
strategy;
 Push you to reflect on where you are going
and how you are getting there;
 Provide you with information and insight;
 Encourage you to act on the information and
insight;
 Increase the likelihood that you will make a
positive development difference.
Monitoring involves:
 Establishing indicators of efficiency,
effectiveness and impact;
 Setting up systems to collect information
relating to these indicators;
 Collecting and recording the information;
 Analysing the information;
 Using the information to inform day-to-day
management.
 Monitoring is an internal function in any
project or organisation.
Indicators for monitoring
 Indicators are an essential part of a monitoring and
evaluation system because they are what you
measure and/or monitor. Through the indicators you
can ask and answer questions such as:
 Who?
 How many?
 How often?
 How much?
 But you need to decide early on what your indicators
are going to be, so that you can begin collecting the
information immediately.
Types of Indicators
 Input Indicators - Resources
 Process Indicators - Activities
 Output Indicators - Immediate results
 Outcome Indicators - Medium-term results
 Impact Indicators - Long-term results
Evaluation
 Definition of Evaluation
 Evaluation is the systematic application of scientific
methods to assess the design, implementation,
improvement or outcomes of a program (Rossi &
Freeman, 1993; Short, Hennessy, & Campbell, 1996).
 Evaluation is the systematic assessment of the
worth or merit of some object.
 Evaluation is the comparison of actual project impacts
against the agreed strategic plans. It looks at what you
set out to do, at what you have accomplished, and how
you accomplished it.
2.0 Models of Monitoring and
Evaluation
 Monitoring and evaluation are geared
towards learning from what is being done
and how it is done by focusing on:
efficiency, effectiveness and impact.
 There are various models used in M&E,
including the following:
a) Programme-based M&E (Result-Based Monitoring
and Evaluation, using the Logical Framework)
b) Outcome-based M&E
Programme M&E
 This focuses on the process of
implementation: how resources are used and
the progress of ongoing activities; its scope
depends on the nature of the work
 It indicates the information to be collected, and
the sources and uses of that information
 It covers project inputs, results, progress and
wider impact
2.1 Result-Based Monitoring and
Evaluation (RBM) for Programme M&E
RBM focuses on:
 Achieving results,
 Implementing performance measurement and
using the feedback to learn and change.
 RBM integrates strategy, people, resources,
processes and measurements to improve
decision-making, transparency and
accountability.
 RBM uses the Logical Framework.
2.1.1 Why measure results
 If you do not measure results, you cannot tell
success from failure.
 If you cannot see success, you cannot reward it.
 If you cannot reward success, you are probably
rewarding failure.
 If you cannot see success, you cannot learn
from it.
 If you cannot recognize failure, you cannot
correct it.
 If you can demonstrate results, you can win
public support and donor interest.
2.1.2 Logical framework analysis
 Logical frameworks or logic models provide a
linear, “logical” interpretation of the relationship
between inputs, activities, outputs,
outcomes and impacts with respect to
objectives and goals.
 They show the causal relationship between
inputs, activities, outputs, outcomes and impact
vis-à-vis the goals and objectives.
Logical framework cont..
 Logical frameworks outline the specific inputs
needed to produce specific outputs, which will
result in specific outcomes and impacts.
 Logical frameworks form the basis for
monitoring and evaluation activities at all
stages of the programme.
 The logical framework focuses on the programme's
inputs, activities and results.
Logic models
 Logic models are valuable tools for:
 Programme Planning and Development: a logic
model helps you think through your programme.
 Programme Management: it connects resources,
activities and outcomes, and can be a basis for
developing a more detailed management plan.
 Communication: it can show stakeholders at a
glance what a programme is doing (activities)
and what it is achieving (outcomes),
emphasizing the link between the two.
2.1.3 The questions the logical framework
answers
 Why: is the project carried out? (objective)
 Who: will benefit? (relevance)
 What: is the project expected to achieve? (?)
 How: is the project going to achieve it? (?)
 Which: external factors are necessary for the
project's success? (assumptions)
 How: can we assess the success? (M&E)
 Where: will we find the data to assess the
success of the project? (indicators)
 What: will the project cost? (cost/budget)
 When: will the project be undertaken? (time)
Project elements and intervention logic (vertical)
 Goal (overall objective) - The long-term objective to which
the project should contribute by means of the achieved
outcomes
 Outcomes (project purpose) - The specific objectives of the
project, which should bring sustainable benefits to the target
group and which should be met by a combination of the
produced outputs
 Outputs (results) - The products of the undertaken
activities
 Activities - The things or actions that must be done
to achieve the results (outputs)
Outcome Monitoring and Evaluation
 Also known as impact M&E
 Looks at both the intended and unintended
impacts
 Provides information on both the process and
the impact
 It is the link between the process and
the impact
Monitoring
 All monitoring should involve a form of
continuous self-evaluation
 It is an internal activity done by management
 If done well, fewer formal evaluations are
required
 All monitoring systems should include both
programme (process) and outcome monitoring
 Information acquired during the monitoring
process should assist management in its own
self-evaluation
3.0 Types of Evaluation
The following are some types of evaluation:
 Cluster
 Ex-ante
 Formative: evaluation performed at mid-term
 Summative: conducted at completion
 Comparison group model (experimental)
 Pre-test – post-test model (non-experimental)
 Process
 Outcome
 Impact
3.1 Purpose of evaluation
 Demonstrate program effectiveness to funders
 Improve the implementation and effectiveness
of programs
 Better manage limited resources
 Document program accomplishments
 Justify current program funding
 Support the need for increased levels of funding
 Document program development and activities
to help ensure successful replication
Purpose of evaluation cont..
 Satisfy ethical responsibility to clients to
demonstrate positive and negative effects of
program participation (Short, Hennessy, &
Campbell, 1996).

Programs can conduct different types of
evaluations, each dependent on the stage of
development the program is in:
 if the program is new and just being planned
 if the program is newly operational
 if the program is well established.
3.2 Formative Evaluation
 Intended to improve performance
 Conducted during the implementation phase of
a project, programme or policy
 Assesses whether the project is addressing its
objectives given the inputs and resources
 Evaluation performed at mid-term
 It takes place while the project is still running.
Formative evaluation
 The intention is to improve the functioning of the
project while it is still possible to do so.
 It can predict the project’s final effects and can
highlight adjustments that are required to the
project design.
 It examines the development of the project and
may lead to changes in the way the project is
structured.
 Formative evaluations are conducted at mid-term
(also called periodic evaluations).
Formative evaluation cont..
 Formative evaluations strengthen or improve
the object being evaluated;
 they help form it by examining the delivery of
the program or technology, the quality of its
implementation, and the assessment of the
organizational context, personnel,
procedures, inputs, and so on.
3.2.1 Some of the Questions asked
 Do the activities correspond with those
presented in the proposal? If they do not
correspond, why were changes made? And were
the changes justified?
 Did the project follow the timeline presented in
the proposal?
 Are the project’s actual costs in line with initial
budget allocations?
 To what extent is the project moving towards
the anticipated goals and objectives?
 What challenges and obstacles have been
identified? And how have they been dealt with?
3.3 Summative Evaluation
 It is done at the end of an intervention or a
phase of that intervention.
 Assesses the achievements of an intervention in
terms of efficiency, effectiveness, outputs and
impacts
 Is intended to provide information about the
worth of the programme
 Normally conducted by those entrusted with
the design and delivery of a development
intervention
 Is divided into terminal and ex-post
evaluation
Summative evaluation
 It is an overall assessment of the project’s
performance and its impact. It assesses the extent to
which the programme has succeeded in meeting its
objectives, and the potential sustainability of gains
made through the programme.
 This only allows us to draw lessons once the project
has been completed.
 It therefore does not enable us to make
improvements to the specific project being evaluated.
 However, lessons may be learnt that can be applied
to enhance future projects and improve the
functioning of the organization.
Summative evaluation cont..
 Summative evaluations are also called terminal, final,
outcome or impact evaluations.
 Questions typically asked in summative evaluations
include:
 To what extent did the project meet its overall goals
and objectives?
 What impact did the project have on the lives of the
beneficiaries?
 Which components were the most effective?
 What significant unintended (accidental, not deliberate)
impacts did the project have?
 Is the project replicable (can it be repeated)?
 Is the project sustainable?
3.4 Pre-test – post-test model
 The basic assumption of the pre-test – post-test
model is that, were it not for the
implementation of the project, the particular
undesirable situation of the project's
beneficiaries would persist (continue); and
conversely, that as a result of the project,
their situation should improve.
Pre-test and post-test cont..
 The situation before the project commences and
after the conclusion of the project is measured.
 The differences or changes noted between these
measurements are taken to be caused by the
project’s implementation.
 For such a comparison to be valid, the pre-test and
post-test must be essentially identical, and we must
be careful to make sure that our own personal bias
does not affect the measurements that are made.
 Information must be gathered from the same group
of beneficiaries;
Pre-test and post-test cont..
Advantage
 It is relatively easy to implement, as one
works with the same group of beneficiaries.
Disadvantage
 There is the possibility that measured changes are
the result of factors other than the project
itself. In other words, changes might be
attributable (at least in part) to external
factors rather than the project's
implementation.
An example of a pre-test – post-test
evaluation:
 For two years, an NGO has been undertaking
HIV/AIDS awareness campaigns. At the outset
of the campaign, the beneficiaries of the project
were asked whether they use condoms. Around
75% of the respondents answered that they
never use condoms. Two years later, after the
conclusion of the campaign, the same group of
beneficiaries were asked the same question.
This time, only about 25% of them said that
they did not use condoms. The outcome of the
project was therefore more frequent use of
condoms among the beneficiaries.
 The impact of the project might be a lower rate
of new HIV infections in the area.
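 A minimal worked illustration, using only the figures
above for the same group of beneficiaries: reported
non-use of condoms fell from about 75% to about 25%,
a change of 75% − 25% = 50 percentage points, so
reported condom use rose from roughly 25% to
roughly 75%.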
3.5 Comparison group model
(experimental design)
 The situation of the group of beneficiaries is
NOT compared before and after project
implementation.
 Instead, two similar groups are compared only at
the end of the project:
 the beneficiaries of the project, and another
group of people who have not benefited from
the project.
 It is important that the groups should have
similar characteristics (e.g. gender balance,
educational level, age group spread, socio-
economic status, geographical location).
Example of comparison group model
 Using the earlier example:
 Suppose this finding was compared to results
from another survey among a representative
sample of people living in neighbouring Village B,
with similar characteristics to those of the
beneficiaries in Village A (e.g. education level,
socio-economic characteristics, age, gender).
 It was found that only 35% used condoms on a
regular basis.
 The difference in the frequency of condom use
may therefore be attributed to the awareness
campaign among beneficiaries living in Village A.
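 A minimal worked comparison, assuming (as in the
earlier example) that about 75% of beneficiaries in
Village A reported regular condom use after the
campaign: the difference of 75% − 35% = 40
percentage points between the two villages may be
attributed to the intervention.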
Comparison group models cont..
 The main differences between the two groups
can be attributed to the project's
interventions.
Advantage
 It is relatively easy to link differences between
the groups to the project's intervention.
Disadvantage
 It might be difficult to find an otherwise
identical group of non-beneficiaries to
compare to the group of beneficiaries.
3.6 Process Evaluation
 Process evaluation can be defined as “the
assessment of policies, materials,
personnel, performance, quality of
practice or services, and other inputs
and implementation experiences.”
 Process evaluation takes place during the
implementation of a program.
 Process evaluation may occur with or without
outcome evaluation.
 However, if resources, time or feasibility are a
roadblock to conducting a full evaluation
study, it is highly recommended that a good
process evaluation study be incorporated.
3.6.1 Typical questions in process
evaluation
 Typical questions asked include, but are not
limited to:
 What intervention activities are taking place?
 Who is conducting the intervention activities?
 Who is being reached through the
intervention activities?
 What inputs or resources have been allocated
or mobilized for programme implementation?
 What are possible programme strengths,
weaknesses, and areas that need
improvement?
3.6.2 Process Evaluation Strategies
 Both qualitative and quantitative research methods
(mixed method) are used in process evaluation.
 Some of the strategies to use to collect process level
information include:
 Interviews, in which open-ended questions about
feelings, knowledge, opinions, experiences and
perceptions are asked and the data recorded
 Focus groups
 Forums and discussion groups
 In-depth interviews (semi-structured and structured)
with key informants or other community members
 Observations and fieldwork descriptions of
activities
 Case studies
3.7 Outcome Evaluation
 Outcome evaluation: the “assessment of the effects
of a program on the ultimate objectives,
including changes in health and social benefits
or quality of life.”
 Outcome evaluation is conducted after a
program has been completed.
 The length of time between the impact and
outcome evaluations should be determined in
the program plan.
 Documents short-term outcomes
 Produces descriptive data
 Tasks focused on results are those that
describe the outputs of the activities
 Captures the immediate effects of the project
Outcome evaluation cont..
 The purpose is to examine any evidence that
the program may have had the intended
impact on health status and quality of life.
 The knowledge, attitudes and behaviors of
the program participants may also be
surveyed to determine if program participants
retained the health improvements and
knowledge they exhibited at the conclusion of
the program.
3.8 Impact Evaluation
 The most comprehensive type of evaluation; it
focuses on a wide range of results and addresses
changes and development
 Very costly and involves an extended
commitment
 Results often cannot be directly related to
the effects of an activity or project, as
other external influences act on the target
audience over time
Impact evaluation
 Impact evaluation assesses the long-term effect a
program has on the behaviors of the program
participants.
 Impact evaluation assesses the changes that can
be attributed to a particular intervention, such as a
project, program or policy, both the intended
changes and, ideally, the unintended ones.
 In contrast to outcome monitoring, which examines
whether targets have been achieved, impact
evaluation is structured to answer the question: how
would outcomes such as participants' well-being have
changed if the intervention had not been
undertaken?
Impact evaluation cont..
 This involves counterfactual analysis, that is, “a
comparison between what actually
happened and what would have happened
in the absence of the intervention” (a simple
illustration follows below).
 Impact evaluations seek to answer cause-and-
effect questions. In other words, they look for
the changes in outcome that are directly
attributable to a program.
 Impact evaluation helps us to answer key
questions for evidence-based policy making:
what works, what doesn't, where, why and for
how much?
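 A minimal sketch of the counterfactual idea (the
symbols are illustrative, not drawn from a specific
study): if Y1 is the observed outcome with the
intervention and Y0 is the estimated outcome that
would have occurred without it (the counterfactual),
then the estimated impact is Y1 − Y0.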
3.8.1 Strategies used in impact
evaluation
 Pre and Post Tests
 Surveys
 Surveys are often used in impact evaluation. Surveys can
be used to determine whether or not program
participants changed their behaviors.
 Logs or Journals
 Participants may be asked to keep a log or journal
during the course of the program.
 Program participants and the program staff can track
any changes in behavior (for example, eating
behaviors in a nutrition program) over the course of
the program.
3.8.2 Impact evaluation designs
Evaluation designs can be broadly classified into
three categories:
 Experimental (comparison group)
 Quasi-experimental
 Non-experimental (pre-test and post-test)
These three evaluation designs vary in
feasibility, cost, the degree of clarity and
validity of results, involvement during the design
or after the implementation phase of the
intervention, and degree of selection bias.
3.9 Quality improvement
In both impact evaluation and outcome evaluation the
desire is to collect information that can be used to
improve the program’s ability to meet the declared
objectives. The ultimate goal is to improve health and
quality of life.
 The strength of the evidence that a program had the
intended outcome depends greatly on the evaluation
methods used.
 Care should be taken in the design and
implementation of each of these three types of
evaluation to ensure best practices and appropriate
measurements are used.
Other Types of evaluation
 Self-evaluation: This involves an organization or
project holding up a mirror to itself and assessing how it
is doing, as a way of learning and improving practice. It
takes a very self-reflective and honest organization to do
this effectively, but it can be an important learning
experience.
 Participatory evaluation: This is a form of internal
evaluation. The intention is to involve as many people
with a direct stake in the work as possible. This may
mean project staff and beneficiaries working together on
the evaluation. If an outsider is called in, it is to act as a
facilitator of the process, not an evaluator.
Other types of evaluation
 External evaluation: This is an evaluation done by
a carefully chosen outsider or outsider team.
 Interactive evaluation: This involves a very active
interaction between an outside evaluator or
evaluation team and the organization or project being
evaluated. Sometimes an insider may be included in
the evaluation team.
 Economic Evaluation
 This type of Program Evaluation involves identifying
and measuring the costs of a program, and also
comparing the costs and outcomes of a program with
alternative interventions (Sloan, 1997). Economic
evaluation is done to identify the treatment options
that yield the best value for the resources expended.
Economic evaluations are often completed by
external consultants.
References
 Burt, M. R., Harrell, A. V., Newmark, L. C., Aron, L. Y., & Jacobs,
L. K. (1997). Evaluation guidebook: Projects funded by S.T.O.P.
formula grants under the Violence Against Women Act. The
Urban Institute. http://www.urban.org/crime/evalguide.html
 Centers for Disease Control and Prevention. (1992). Handbook
for evaluating HIV education. Division of Adolescent and School
Health, Atlanta.
 CDC. Framework for program evaluation in public health.
MMWR Recommendations and Reports 1999; 48(RR11):1-40.
 Chalk, R., & King, P. A. (Eds.). (1998). Violence in Families:
Assessing prevention and treatment programs. Washington DC:
National Academy Press.
 Coyle, S. L., Boruch, R. F., & Turner, C. F. (Eds.). (1991).
Evaluating AIDS prevention programs: Expanded edition.
Washington DC: National Academy Press.
 Monitoring and Evaluation by Janet Shapiro (email:
nellshap@hixnet.co.za)
Thanks for your
Attention
