BCM4105 Introduction To Monitoring and Evaluation
Table of Contents
Chapter One
1.0 Introduction
1.1 LEARNING OUTCOMES
1.2 The Power of Measuring Results
1.3 Monitoring Questions
1.4 Evaluation Questions
1.5 Monitoring and Evaluation
1.6 MONITORING AND EVALUATION LESSONS
Chapter Two
2.0 Who needs, uses M&E Information?
2.1 Who conducts M&E?
2.2 Answering the following questions
2.3 Why monitor and evaluate?
2.4 Monitoring and Evaluation helps us to answer the questions
Chapter Three
3.0 HOW DO WE MONITOR?
3.2 When and how to use it?
3.4 Gathering the Information that you need to monitor and evaluate
3.5 When? Evaluation
3.6 Why? Monitoring
3.7 Difference between Monitoring and Evaluation
Chapter Four
4.1 QUALITIES OF INDICATORS
4.2 OUTCOME AND IMPACT EVALUATION
4.3 Purpose of Monitoring
4.4 What to monitor
4.5 Monitoring in a program
4.6 Monitoring report
4.7 Define and differentiate between impact and outcome
Chapter Five
5.0 EVALUATION
5.1 Purpose of Evaluation
5.2 What to evaluate
5.3 DIFFERENT CHARACTERISTICS OF MONITORING AND EVALUATION
5.6 WHEN MONITORING AND EVALUATION SHOULD BE DONE
5.7 Before project implementation
5.8 During project implementation
5.9 After project implementation
5.10 Performance indicators
5.11 Performance indicators
Chapter Six
6.1 DEFINITION OF IMPACT AND OUTCOME
6.2 DIFFERENCE BETWEEN IMPACT AND OUTCOME
6.3 MEANS OF VERIFICATION (MoV)
6.4 ASSUMPTIONS
Chapter Seven
7.1 Factors affecting sustainability
7.2 M&E Questions
7.3 Type and Level of Each Indicator
7.4 What Is a Good Indicator?
7.5 SUSTAINABILITY
7.6 Challenges in developing M&E systems
7.7 Why use a Participatory Approach?
7.8 Tips for Using a Participatory Approach
7.9 Process for Designing an M&E system
7.12 Assessment of Existing M&E Structures
7.13 Assessment of Existing M&E Structures
7.14 Assessment of M&E capacity of the organization
7.15 Setting targets: What is a target?
7.16 Recording Current Status
References
Chapter One
1.0 Introduction
This is an introductory course on Monitoring and Evaluation. The course will
highlight the value of Monitoring and Evaluation, and different aspects of
Monitoring and Evaluation will be discussed.
At the end of the course, the learner will be able to understand the following
issues relating to Monitoring & Evaluation:
a) To define Monitoring and explain its purpose.
b) To define Evaluation and explain its purpose
c) To describe the difference between Monitoring and Evaluation.
d) To describe when Monitoring and Evaluation should be done.
e) To define impact and outcome.
f) To differentiate between impact and outcome.
1.2 The Power of Measuring Results
a) If you do not measure results, you cannot tell success from failure.
b) If you cannot see success, you cannot reward it.
c) If you cannot reward success, you are probably rewarding failure.
d) If you cannot see success, you cannot learn from it.
e) If you cannot recognize failure, you cannot correct it.
f) If you can demonstrate results, you can win support.
Adapted from Osborne & Gaebler, 1992
1.3 Monitoring Questions
a) Were inputs made available to the program/project in the quantities and at the
time specified by the program/project work plan?
b) Were the scheduled activities carried out as planned?
c) How well were they carried out?
d) Did the expected changes occur at the program/project level, in terms of
people reached and materials distributed?
1.4 Evaluation Questions
a) Did the expected change occur at the population level (not necessarily
attributable to the program/project)? How much change occurred?
b) Can improved health outcomes be attributed to program efforts?
c) Did the target population benefit from the program and at what cost?
1.5 Monitoring and Evaluation
Monitoring and evaluation form an essential part of all project work. Specifically,
they are key components of the project cycle and Centres' Annual Plans. Briefly:
- Monitoring:
Monitoring is the continuous measurement, recording, collection and
communication of information, and observation of the performance of a
service, programme or project, to check that it is proceeding according to
the proposed plans and objectives.
This is continuously tracking performance against what was planned by
collecting and analyzing data on the indicators established for monitoring
& Evaluation purposes.
It provides continuous information on whether progress is being made
towards achieving results (outputs, outcomes, and goals) through record
keeping and regular reporting systems.
- Evaluation:
Evaluation is the process of determining the value of the project in terms
of relevance, efficiency, effectiveness and impact. Evaluations assess what
happened as a result of project activities, and answer the questions: "To
what extent did your project achieve what it set out to achieve?" and "What
have we learned as a result of assessing the effectiveness of our work?"
It is simply 'taking stock' of the results of a project over a defined time and
weighing them against the pre-determined targets.
It is an assessment of 'result', that is, 'what' has been accomplished, and of
'process', that is, 'how' it was accomplished.
1.6 MONITORING AND EVALUATION LESSONS
Monitoring and evaluation are tools for effective programme implementation that
enable both planners and decision makers to draw lessons for future action:
3. Deriving lessons for future development planning, better program
formulation and implementation
b. Advocacy
Chapter Two
At the end of the course, the learner will be able to understand the following
issues relating to Monitoring & Evaluation:
1. Identify the users of Monitoring and Evaluation information
2. Identify who conducts Monitoring and Evaluation
3. Explain why we monitor and why we evaluate
4. Explain what questions Monitoring and Evaluation will answer
2.1 Who conducts M&E….?
1) Program implementers
2) Stakeholders
3) Beneficiaries
2.2 Answering the following questions
a) What outputs will be measured, when and how, for each major project
activity, and among whom and how many?
b) What effects will be measured, when and how, for each major project
activity, and among whom and how many?
c) What impacts will be measured for each program objective (based on
established indicators), when (for example, pre- or post-project), how
(methods), among whom, and how many?
Monitoring and evaluation are related but distinct concepts. Both are
analytical processes involving the gathering and analysis of relevant data
and information for the effective management of programmes.
2.3 Why monitor and evaluate?
It is important that Centres monitor and evaluate their work and projects for the
following reasons:
- Improving performance: At the end of a project, Centres will need to
reflect upon what worked well, what was less successful and how they will
plan for the next piece of work. The evaluation process directly supports this.
Having completed these reflections, Centres should be in a position to
improve performance and results in subsequent projects.
3. Why do we monitor and why evaluate?
4. Explain what Monitoring and Evaluation will answer?
Chapter Three
At the end of the course, the learner will be able to understand the following
issues relating to Monitoring & Evaluation:
1. How is monitoring done?
2. When and how do we monitor?
3. How do we gather the information needed to monitor and evaluate?
4. When do we evaluate?
5. Why do we monitor?
6. What is the difference between Monitoring and Evaluation?
3.0 HOW DO WE MONITOR?
Quantitatively:
a) Count activities
b) Calculate % increases or decreases
c) Use a simple questionnaire (e.g. to measure whether knowledge or risk
perception has increased)
d) Conduct client exit interviews
e) Make observations.
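Items a) and b) above can be sketched as a small helper. The following is a minimal, illustrative Python example; the function name and the attendance figures are assumptions made for illustration, not part of the module:

```python
def percent_change(baseline, current):
    """Return the % increase (positive) or decrease (negative)
    between a baseline count and a current count."""
    if baseline == 0:
        raise ValueError("baseline must be non-zero")
    return (current - baseline) / baseline * 100

# e.g. monthly meeting attendance rising from 40 to 50 people
print(percent_change(40, 50))  # 25.0
```

The same calculation applies to any counted activity (meetings held, materials distributed) when comparing two monitoring periods.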
There are three levels of results that come from projects being carried out:
o Writers workshops where published authors will support
participants with ideas and guidance
o Story competitions in schools
It is important to plan for monitoring and evaluation at the beginning of the project.
In order to do this, you first need to be clear about what is involved in, and different
about, the two processes. Then you will need to plan how to build these into your
work. The last section in these Guidance Notes provides ideas about how to gather
the information that you will need in order to carry out your monitoring and
evaluation plan.
In terms of monitoring, you will see from the table below that this is an ongoing
activity, which should be documented as activities take place. It is often a good idea
to develop forms or report books in order to make the collection of this information
as routine as possible.
All the monitoring information will be needed for any evaluations that are
conducted. The following table, adapted from Sharpening the Development
Process, Oliver Bakewell, INTRAC, provides an overview of what needs to be
considered when planning for M&E:
Monitoring vs Evaluation
Timing
- Monitoring: continuous, throughout the project
- Evaluation: periodically, at significant points in the project; mid-term or end of project are most common
Scope
- Monitoring: day-to-day activities
- Evaluation: assesses overall delivery of activities and progress towards achieving aim and objectives
Main participants
- Monitoring: project staff and project users
- Evaluation: external evaluators/facilitators, project users, project staff, donors
Reporting formats
- Monitoring: regular reports and updates to project users, management and donors
- Evaluation: written report with recommendations for changes to the project
Key Steps
Once you know what needs to be done, by whom and when, the next task is to
devise key questions and indicators that will be used to collect and analyse the
information you need.
What are indicators?
Essentially, they are what they sound like: illustrations or pointers that something has
happened or is happening.
- When the car in front of you indicates that it is turning left, you understand what
is happening
- When you see very heavy dark clouds in the sky, you have an indication that it
may rain very soon
Let’s take an example of an objective and a few activities and develop some key
questions and indicators:
- Activity 1: Reading Circles: monthly meetings for PEN members and the
general public, with book presentations, readings and discussion.
Activity: Reading Circles: monthly meetings for PEN members and the general
public, with book presentations, readings and discussion.
Key Question: Are the Reading Circles popular and building on local interest?
Indicators: numbers and locations of Reading Circles; numbers of people
(men/women) attending; topics covered; feedback from members; press reports;
book sales.
Activity: Organise writers' workshops where published authors will support
participants with ideas and guidance.
Key Question: To what extent are the writers' workshops effective in enabling
new writers to get published?
Indicators: numbers of workshops; numbers of people (men/women) attending;
results: participants' materials being published on the web, in newspapers,
anthologies and books.

Activity: Story competitions in schools.
Key Question: Are the competitions popular and building interest in reading and
writing?
Indicators: numbers of schools involved; numbers of children (girls/boys) taking
part; publicity.
In other words, the outcomes (results) can be seen as the sum of the parts
(activities)
A note on indicators
Indicators may be:
- Qualitative: the change is shown through description, e.g. the changing level of
interest in literature
- Direct: something you can measure directly, e.g. the number of meetings held by a
committee
3.4 Gathering the Information that you need to monitor and evaluate:
As stated at the beginning of this Note, you will need to plan how to gather the information needed for monitoring and
evaluation. The following table provides some methods that you might consider using, as well as an indication of the
strengths and weaknesses of each. When making your monitoring and evaluation plan, it will be wise to decide which
methods you will use at each stage, so that you can also plan the time, costs and the resources that you will need. Many
plans will use some elements of the following techniques.
Case Studies
Description: Collecting information that can be descriptive or explanatory and
can serve to answer the questions of how and why, drawing on evidence from
documents, interviews and observation.
Strengths: Provides insights that are not easy to collect in more formal processes.
Weaknesses: Can't generalise findings; time consuming.

Focus Groups
Description: Holding focussed discussions with members of the target population
who are familiar with the issues being explored. The purpose is to compare the
beneficiaries' perspectives with the concepts in the evaluation's objectives.
Strengths: Similar advantages to interviews; particularly useful where participant
interaction is desired.
Weaknesses: Can be expensive and time consuming; can't generalise findings.

Interviews
Description: The interviewer asks questions of one or more persons and records
the respondents' answers. Interviews may be formal or informal, face-to-face or
by telephone, and closed- or open-ended.
Strengths: People and institutions can explain their experiences in their own
words and setting; flexible, allowing the interviewer to pursue unanticipated lines
of enquiry or to probe issues in depth.
Weaknesses: Time consuming; can be expensive; if not done properly, the
interviewer can influence the interviewee's response.

Participant Observation
Description: The observer becomes part of the setting for a period of time.
Weaknesses: Findings can be open to interpretation.

Questionnaires
Description: Developing a set of survey questions whose answers can be coded
consistently.
Strengths: Can reach a wide sample simultaneously; imposes uniformity by
asking all respondents the same things; can be inexpensive.
Weaknesses: The quality of responses is highly dependent on the questionnaire.
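"Coded consistently" for questionnaires means mapping each answer to the same code every time. A minimal Python sketch of such a scheme; the item, scale and responses below are hypothetical examples, not taken from the module:

```python
# Hypothetical coding scheme for a Likert-style questionnaire item.
CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_responses(responses):
    """Map free-text answers to numeric codes; unrecognised answers
    are coded as None so they can be reviewed rather than guessed."""
    return [CODES.get(r.strip().lower()) for r in responses]

answers = ["Agree", "strongly agree", "Neutral", "no idea"]
print(code_responses(answers))  # [4, 5, 3, None]
```

Keeping the scheme in one place ensures every enumerator and every survey round codes the same answer the same way.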
3.5 When? Evaluation
a) All the definitions of monitoring and evaluation processes agree that monitoring is
a continuous activity, whilst evaluation is periodic.
b) Monitoring uses key indicators established to compare actual achievements
at various levels against the objectives. It can be carried out by the
commission, the recipient community, institution, country or the programme/
project team itself (often with technical assistance back-up).
c) The evaluation process, by contrast, is independent: an external auditor or
evaluator can be involved, or specified persons among the stakeholders can
undertake the exercise without involving the entire team.
d) Failure of the monitoring process at any stage of the programme may lead to a
negative outcome in terms of evaluation, and vice versa; hence the monitoring
process is pivotal in any programme prior to evaluation.
Chapter Four
At the end of the course, the learner will be able to understand the following issues
relating to Monitoring & Evaluation:
1. Identify the qualities of indicators
2. Explain the difference between outcome and impact evaluation
3. Identify the purpose of monitoring
4. Identify what to monitor
5. Differentiate between outcome and impact
4.1 QUALITIES OF INDICATORS
a) Linked to project
b) Ability to measure change
c) Cost-effective and feasible to collect and analyze data
d) Easy to interpret
e) Permits change to be tracked over time
f) Comparable between projects, countries or population groups
4.2 OUTCOME AND IMPACT EVALUATION
The following are some of the definitions and benefits of outcome and impact evaluation.
4.3 Purpose of Monitoring
a) Ensures inputs are made available on time and are properly utilized.
b) If any unexpected results occur, their causes are noted and corrective action
taken.
c) Enhances learning and facilitates decision making.
d) It helps in the documentation process of implementation.
4.4 What to monitor
a) Inputs
b) Activities.
c) Content
d) Results
e) Outputs/outcome.
4.7 Define and differentiate between impact and outcome
An outcome is a medium-term consequence occurring after an event. It can be
measured qualitatively in terms of data and reports. For example, the uptake of
insecticide-treated bed nets (ITNs) by expectant mothers and children under 5 years,
who risk dying of malaria when they are not protected by sleeping under ITNs and by
the use of anti-malarial drugs for both the treatment of malaria and prophylaxis.
An impact is a long-term result. It can be measured both quantitatively and
qualitatively. In health care programmes, an impact assessment can be carried out at
every stage/phase of the programme. Impact evaluation is a tool for policy makers,
programme/project planners and programme management. It therefore requires
information before the health care programme begins, during the programme's life,
and after the programme has ended.
Chapter Five
At the end of the course, the learner will be able to understand the following issues
relating to Monitoring & Evaluation:
1. What is meant by the term evaluation?
2. What are the purposes of evaluation?
3. What do we evaluate?
4. Identify different characteristics between monitoring and evaluation
5. When should monitoring and evaluation be done?
5.0 EVALUATION
5.1 Purpose of Evaluation
a) Helps implementers focus on progress towards realizing the project's purpose
and goal.
b) Improves project planning and management.
c) Promotes institutional learning.
d) Informs policy.
5.2 What to evaluate
a) Relevance
b) Efficiency
c) Impact
d) Effectiveness
e) Sustainability
5.3 DIFFERENT CHARACTERISTICS OF MONITORING AND EVALUATION
Monitoring
a) Continuous
b) Keeps track, oversight; analyses and documents progress
c) Focuses on inputs, activities, outputs, implementation processes, continued
relevance, likely results at outcome level
d) Answers what activities were implemented and results achieved
e) Alerts managers to problems and provides options for corrective actions
f) Self-assessment by programme managers, supervisors, community
stakeholders, and donors
Evaluation
a) Periodic
b) At important milestones, such as the mid-term of programme implementation,
at the end, or a substantial period after programme conclusion; in-depth
analysis comparing planned with actual achievements
c) Focuses on outputs in relation to inputs; results in relation to cost; processes
used to achieve results; overall relevance; impact; and sustainability
d) Answers why and how results were achieved
e) Contributes to building theories and models for change
f) Provides managers with strategy and policy options
g) Internal and/or external analysis by programme managers, supervisors,
community stakeholders, donors and/or external evaluators
(Source: UNICEF, 1991; WFP, May 2000)
5.6 WHEN MONITORING AND EVALUATION SHOULD BE DONE
5.7 Before project implementation
c) Assist in making decisions on how the project will be implemented.
5.8 During project implementation
Evaluation should be a continuous process and should take place throughout all
project implementation activities. This enables the project planners and implementers
to progressively review the project strategies according to changing circumstances, in
order to attain the desired activity and project objectives.
5.9 After project implementation
This is to retrace the project planning and implementation process and its results after
project implementation.
5.10 Performance indicators
USED FOR:
a) Establishing performance targets and then evaluating progress
b) Indicating whether an in-depth evaluation or review is needed
ADVANTAGES:
a) Effective means to measure progress toward objectives
DISADVANTAGES:
a) Poorly defined indicators are not good measures of success
b) Tendency to set too many indicators, or those without accessible data sources -
costly, impractical… and then underutilized
c) Often a trade-off between selecting the best indicators and accepting those
which can be measured using existing data
Questions for the module
1. Define evaluation
2. Identify the purposes of evaluation
3. What do we evaluate?
4. Differentiate the characteristics between monitoring and evaluation
5. At what time should monitoring and evaluation be done?
Chapter Six
At the end of the course, the learner will be able to understand the following issues
relating to Monitoring & Evaluation:
1. Define impact and outcome
2. Differentiate between impact and outcome
3. What is a means of verification?
4. What are the assumptions expected?
6.1 DEFINITION OF IMPACT AND OUTCOME
Introduction
Both outcome and impact are types of long-term indicators. Indicators are signs or
measures that show the extent of change in a project or organization.
Indicators help measure what actually happened in terms of quality, quantity, and
timeliness against what was planned.
Impact
Impact is an indicator which describes changes in the conditions of the community
after a programme, i.e. changes in behavior or practices in a population as a result of
the programme, e.g. increased literacy.
Outcome
This is an indicator which measures the product of an activity or programme e.g.
number of pupils attending a school.
It indicates what changes have occurred and shows if outputs lead to the expected
positive changes.
6.2 DIFFERENCE BETWEEN IMPACT AND OUTCOME
6.3 MEANS OF VERIFICATION (MoV)
These are the information sources to show that the indicator has been achieved. E.g.
Minutes of meetings; Reports; Certificates; Records etc.
6.4 ASSUMPTIONS
These are the external factors (to the programme or intervention) that must remain
positive for the objectives at the various levels to be achieved. The programme has no
control over these external factors – but may try to influence their remaining positive.
e.g. continued donor/political support.
Questions for the module
1. Define impact and outcome
2. Differentiate between impact and outcome
3. Define means of verification
4. What are the assumptions expected?
Chapter Seven
At the end of the course, the learner will be able to understand the following issues
relating to Monitoring & Evaluation:
1. Identify factors affecting sustainability
2. Identify the type and level of each indicator
3. What is a good indicator?
4. State the importance of sustainability
5. Identify challenges in developing M&E systems
6. Describe the participatory approach process
7. Design the M&E platform
7.1 Factors affecting sustainability
a) Policy support
b) Appropriate technology
c) Environmental protection
d) Socio-cultural aspects; women in development
e) Institutional and management capacity
f) Economic and financial viability
Logic model
Input → Activity → Output → Outcomes → Impact
Begin by inserting your activities into the logic model.
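The logic model chain above can be illustrated as a simple data structure. A hypothetical Python sketch, reusing the Reading Circles activity from Chapter Three; every entry is an illustrative assumption, not module content:

```python
# A minimal, illustrative logic model for a Reading Circles activity.
logic_model = {
    "input":    "Facilitators, books, meeting venue",
    "activity": "Monthly Reading Circle meetings",
    "output":   "Number of meetings held and people attending",
    "outcome":  "Growing local interest in literature",
    "impact":   "Increased literacy in the community",
}

# Print the chain in order, from inputs through to impact.
for level in ["input", "activity", "output", "outcome", "impact"]:
    print(f"{level}: {logic_model[level]}")
```

Writing the chain out explicitly, even this informally, forces each activity to be linked to an output, an outcome and an intended impact before indicators are chosen.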
7.2 M&E Questions
Monitoring questions
Indicators: Definition
7.3 Type and Level of Each Indicator
Type
a) Input/Process (Monitoring)
b) Outcome / Impact (Evaluation)
Level
a) Global level
b) Country level
c) Program level
7.4 What Is a Good Indicator?
7.5 SUSTAINABILITY
The Monitoring & Evaluation Plan is a tool used to plan and manage the:
a) collection,
b) analysis, and
c) reporting of data related to an indicator.
7.6 Challenges in developing M & E systems:
7.7 Why use a Participatory Approach?
It is therefore crucial to involve key stakeholders in the design of the M&E system
and to train them on M&E.
7.8 Tips for Using a Participatory Approach
– Key stakeholders should be invited to all feedback workshops.
Establish a reference group representing the interests of key stakeholders
– Members should have knowledge of M&E
– Function as a resource group for development of the M&E system.
– Include M&E responsibilities in Job descriptions
– Ensures that the M&E function is acknowledged as part of the job and that
it is included in the performance appraisals.
7.9 Process for Designing an M&E system
PHASE 1: PREPARATION
Step 1: Building commitment and Preparation
a. Participatory Workshop to plan the process of designing the system
with client and key stakeholders
b. Process is designed to build commitment and cooperation
throughout
Step 2: Situation Analysis
a) Stakeholder Analysis
b) Assessment of existing M&E Structures
c) Assessment of M&E capacity of organization
PHASE 2: DESIGN
Step 3: Methods for Designing the M&E Plan
a. Design Workshop – Includes; Key stakeholders & decision makers,
Presentation of SA results, Review of objectives & intervention logic,
Design of M&E Plan
b. Write up M&E Plan and recommendation from the workshop
c. Verification of M&E Plan
Step 4: Design M&E Forms
a) Make list of all tools,
b) Design tools, Verify and,
c) Finalize tools
Step 5: Design Reporting Format
a) Ensure all indicators are represented in reporting formats
b) Verify and finalize reporting formats
PHASE 3: STANDARDIZATION
Step 6: Develop M&E System Support Documents
a. Glossary of terms
b. Definition of indicators
c. Description of tools
Step 7: Institutional Arrangements
a. Define roles and responsibilities
b. Put required human and physical resources in place
c. Include M&E KPI
d. Develop supporting information technology infrastructure (electronic system)
e. Training in the M&E System
f. Implementation, Piloting, Quality control, & Continuous improvement
7.12 Assessment of Existing M&E Structures
Key themes to be explored in assessing the M&E structures:
• Assess monitoring gaps
a) What monitoring systems are in place?
b) The effectiveness of these systems
c) The feasibility of these systems
d) What are the monitoring gaps?
• Indicators used and needed
a) What is being measured?
b) What is not being measured but should be (gaps)?
c) Assess indicators in terms of whether they are directly related to the
objectives and outputs, whether they are verifiable, adequate, reliable and
practical to measure (DVARP)
d) What critical variables need to be considered when designing indicators
(e.g. demographics, gender issues)
a) How is data being reported, in what formats, frequency, by whom and to
whom?
b) Who is ultimately responsible for reporting on various aspects of the
project?
c) Which stakeholders need what information and how often?
d) How will information gathered be used and by whom; who will be making
what decisions based on the data returned by the M&E system? (currently
and in the future).
– How can reporting systems be streamlined so that programme staff and
management are not overloaded with reporting requirements?
• Usage of system
a. How is the organisation currently using the information generated?
b. For what purpose would the organisation like to use the
information?
c. How will the organisation react to negative information generated?
d. What are the decision-making structures in place to react to the
information generated from the M&E system?
e. What are the decision-making structures that should be in place to
react to the information generated from the M&E system?
f. How will management ensure that information generated from the
M&E system, and the decisions resulting from it, will be filtered down
to the implementers?
7.14 Assessment of M&E capacity of the organization
Main questions:
1. Where does the capacity exist to support a results-based M&E system?
2. What is the skills level in terms of:
a) Project and Programme management
b) Data capturing
c) Data analysis
d) Project and Programme goal and outcome establishment
e) Budget management
f) Performance auditing
• Are there any institutions, research centres, private organisations, consultancies
or universities who can provide technical assistance and training for the
organisation in evidence-based M&E?
7.15 Setting targets: What is a target?
• A target is the desired measure on an indicator that you are aiming to achieve
after your programme/project has been implemented or has had the desired
impact.
• Why set targets?
a) They guide you towards achieving a goal/outcome/impact/output
b) They motivate you
c) They give details on the numbers that need to be achieved within a time period
• When setting realistic targets, one should consider:
a) the measure at the baseline
b) the current resources available and what can be achieved within those
constraints
c) finding a balance between being ambitious and setting targets that are
easily achievable
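The points above can be sketched numerically. The following is an illustrative example only (the function, figures and the assumption of even linear progress are mine, not from the course material): it derives interim annual targets from a baseline measure and a final target.

```python
# Illustrative sketch: spreading the improvement from a baseline
# measure to a final target evenly over the programme period.
# Assumes linear progress, which real programmes rarely show exactly.

def interim_targets(baseline, final_target, years):
    """Return a target value for the end of each year."""
    step = (final_target - baseline) / years
    return [round(baseline + step * (y + 1), 1) for y in range(years)]

# Hypothetical example: coverage starts at 40% and should reach 70% in 3 years.
print(interim_targets(40, 70, 3))  # [50.0, 60.0, 70.0]
```

In practice the yearly steps would be adjusted against available resources, as noted above, rather than divided evenly.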
c) It can be used to motivate staff by showing what has been achieved thus far,
and how far they still have to go
Step 4: Design of forms
1. Forms are the means of verification (MoV), i.e. the source of the information
on the indicator, that need to be completed
2. When you develop the form, you look at the following columns in the M&E plan:
a) the actual indicator,
b) who should fill in the form (from the name of the form),
c) how the data should be analysed (disaggregation)
• Design a standardised header for all forms including information such as:
a. the form title,
b. who will gather the form,
c. supporting documents (e.g. should anything else go with the form
or should it be attached to another form or report format?)
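The standardised header above can be captured as a simple data structure. This is a hedged sketch only; the field names are my assumptions based on the three header items listed, not a format prescribed by the course.

```python
# Illustrative sketch: a standardised header shared by every
# data-collection form, with the three items listed above.

from dataclasses import dataclass

@dataclass
class FormHeader:
    form_title: str            # a) the form title
    gathered_by: str           # b) who will gather the form
    supporting_documents: str  # c) what must accompany the form

# Hypothetical example form:
header = FormHeader(
    form_title="Monthly Outreach Register",
    gathered_by="Field officer",
    supporting_documents="Attach to the monthly site report",
)
print(header.form_title)  # Monthly Outreach Register
```

Defining the header once keeps every form consistent, which in turn makes the reporting formats in Step 5 easier to standardise.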
Step 5: Design of reporting formats
PHASE 3: STANDARDIZATION
Step 6: Guideline for the M&E Plan
• Why is it important?
a) The M&E Plan can be quite confusing, complicated to understand and difficult
to access
b) The M&E Plan can be alienating for those who encounter it for the first time
c) A guideline document ensures that those responsible for implementing the
system understand the indicators, and that data will be analysed and
interpreted in standard ways
• What is its purpose?
a) To ensure that the data related to the indicator is calculated and measured
in the same manner every time it is reported on.
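One way to see this purpose is to define the indicator calculation in a single place. The sketch below is an assumption of mine, not an example from the guideline itself: a hypothetical coverage indicator computed the same way every reporting period.

```python
# Illustrative sketch: one standard definition of an indicator,
# so it is calculated identically in every report.

def coverage_rate(people_reached, people_eligible):
    """Standard definition: percentage of the eligible population
    reached, rounded to one decimal place."""
    if people_eligible == 0:
        return 0.0
    return round(100.0 * people_reached / people_eligible, 1)

# Hypothetical reporting-period figures:
print(coverage_rate(356, 1200))  # 29.7
```

Writing the definition down (numerator, denominator, rounding rule) is exactly what the guideline document records for each indicator.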
Step 6: Guidelines for the M&E Plan (continued)
What should it contain?
a. Indicator definitions and analysis guidelines
b. Description of forms
c. Any other information relevant to the M&E plan that may have been developed
e.g. contact details, relevant policy / strategy documents etc.
Description of forms
a) Form number
b) Form Name
c) Purpose of form
d) Description of form
e) A diagram of how the forms feed into the reports could also be useful.
(Step 5)
f) The M&E team needs knowledge and skills to manage and maintain the
database
3. Monitoring should be conducted by line managers and implementers
4. An organisation should have an M&E capability within the organisation, e.g. a
unit or a co-ordinator
5. Who are the key partners in M&E?
a) Planners (e.g. strategic planning)
b) Researchers
c) Senior managers
d) Governing Board
e) Knowledge management
f) Human Resources
g) Quality control / performance management
c) Stay focused on what information needs to be gathered for the M&E system, as
opposed to what could be interesting to have.
d) Ensure that the M&E system is manageable and data can be collected
e) Ensure that the organisation has the capacity to analyse the data
f) The organisation must have mechanisms to feed back results from M&E to
management
g) The organisation must decide what it wants to do with the M&E
reports, and how findings are used to improve implementation.
Deciding on an Appropriate Evaluation Design
1. How do you intend to use the results?
2. What do you want to measure (indicators)?
a. Provision, utilization, coverage, effectiveness, impact
3. How sure do you want to be (type of inference)?
a. What is the cost of making a mistake (low, medium, high)?
4. When do you need the results?
5. How much are you willing to pay?
6. Where in the program life cycle are you now?
What is Causality?
• Causality is when one event produces a second event.
Questions for the module
1. What are the factors affecting sustainability?
2. What are the type and level of each indicator?
3. What is a good indicator?
4. Identify the importance of sustainability
5. What are the challenges in developing M&E systems?
6. State how a participatory process is done
7. Illustrate the design of an M&E platform
References
1. Adapted from Osborne & Gaebler in JZ Kusek and RC Rist (2004). Ten Steps to
a Results-based Monitoring and Evaluation System: A Handbook for
Development Practitioners, The World Bank, Washington DC, page 11.
5. Mental Health and Drug and Alcohol Office (2010). Mental Health Project
Summary 2010, NSW Department of Health, page 6. (Internal document).
10. PH Rossi, MW Lipsey and HE Freeman (2004). op. cit., page 430.
12. PH Rossi, MW Lipsey and HE Freeman (2004). op. cit., page 432.
14. PH Rossi, MW Lipsey and HE Freeman (2004). op. cit., page 431.
B. Sc. Health Record Information Management
End of term examination
3rd year Class
Q1. With the aid of a diagram, describe how monitoring and evaluation can be done
using a logic model/framework.
(10 Marks)
Q2. a) Define Monitoring and explain its purpose.
b) Define Evaluation and explain its purpose.
(10 Marks)
Q3. Describe the difference between Monitoring and Evaluation.
(10 Marks)
Q4. Describe when Monitoring and Evaluation should be done.
(10 Marks)
Q5. Differentiate between impact and outcome.
(10 Marks)
Q6. Illustrate the monitoring and evaluation framework
(10 Marks)
Q7. Who needs and uses M&E information?
(10 Marks)