
INTRODUCTION TO MONITORING & EVALUATION

DEPARTMENT OF CLINICAL MEDICINE

COURSE CODE: BCM4105

COURSE TITLE: INTRODUCTION TO MONITORING & EVALUATION

Table of Contents
Chapter One
    1.0 Introduction
    1.1 LEARNING OUTCOMES
    1.2 The Power of Measuring Results
    1.3 Monitoring Questions
    1.4 Evaluation Questions
    1.5 Monitoring and Evaluation
    1.6 MONITORING AND EVALUATION LESSONS
Chapter Two
    2.0 Who needs and uses M&E Information?
    2.1 Who conducts M&E?
    2.2 Answering the following questions
    2.3 Why monitor and evaluate?
    2.4 Monitoring and Evaluation help us to answer the questions
Chapter Three
    3.0 HOW DO WE MONITOR?
    3.2 When and how to use it?
    3.4 Gathering the Information that you need to monitor and evaluate
    3.5 When? Evaluation
    3.6 Why? Monitoring
    3.7 Difference between Monitoring and Evaluation
Chapter four
    4.1 QUALITIES OF INDICATORS
    4.2 OUTCOME AND IMPACT EVALUATION
    4.3 Purpose of Monitoring
    4.4 What to monitor
    4.5 Monitoring in a program
    4.6 Monitoring report
    4.7 Define and differentiate between impact and outcome
Chapter five
    5.0 EVALUATION
    5.1 Purpose of Evaluation
    5.2 What to evaluate
    5.3 DIFFERENT CHARACTERISTICS OF MONITORING AND EVALUATION
    5.6 WHEN MONITORING AND EVALUATION SHOULD BE DONE
    5.7 Before project implementation
    5.8 During project implementation
    5.9 After project implementation
    5.10 Performance indicators
    5.11 Performance indicators
Chapter Six
    6.1 DEFINITION OF IMPACT AND OUTCOME
    6.2 DIFFERENCE BETWEEN IMPACT AND OUTCOME
    6.3 MEANS OF VERIFICATION (MoV)
    6.4 ASSUMPTIONS
Chapter Seven
    1. Identify factors affecting sustainability
    2. Identify type and level of each Indicator
    7.1 Factors affecting sustainability
    7.2 M&E Questions
    7.3 Type and Level of Each Indicator
    7.4 What Is a Good Indicator?
    7.5 SUSTAINABILITY
    7.6 Challenges in developing M&E systems
    7.7 Why use a Participatory Approach?
    7.8 Tips for Using a Participatory Approach
    7.9 Process for Designing an M&E system
    7.12 Assessment of Existing M&E Structures
    7.13 Assessment of Existing M&E Structures
    7.14 Assessment of M&E capacity of the organization
    7.15 Setting targets: What is a target?
    7.16 Recording Current Status
    1. What are the factors affecting sustainability
    2. What are the type and level of each Indicator
References

Chapter One

1.0 Introduction
This is an introductory course on Monitoring and Evaluation. The course will
highlight the value of Monitoring and Evaluation, and different aspects of
Monitoring and Evaluation will be discussed.

1.1 LEARNING OUTCOMES

At the end of the course, the learner will be able to:
a) Define Monitoring and explain its purpose.
b) Define Evaluation and explain its purpose.
c) Describe the difference between Monitoring and Evaluation.
d) Describe when Monitoring and Evaluation should be done.
e) Define impact and outcome.
f) Differentiate between impact and outcome.

1.2 The Power of Measuring Results

a) If you do not measure results, you cannot tell success from failure.
b) If you cannot see success, you cannot reward it.
c) If you cannot reward success, you are probably rewarding failure.
d) If you cannot see success, you cannot learn from it.
e) If you cannot recognize failure, you cannot correct it.
f) If you can demonstrate results, you can win support.
Adapted from Osborne & Gaebler, 1992

1.3 Monitoring Questions

Monitoring seeks to answer questions such as:

a) Were inputs made available to the program/project in the quantities and at the
times specified by the program/project work plan?
b) Were the scheduled activities carried out as planned?
c) How well were they carried out?
d) Did the expected changes occur at the program/project level, in terms of
people reached and materials distributed?

1.4 Evaluation Questions

a) Did the expected change occur at the population level (not necessarily
attributable to program/project)? How much change occurred?
b) Can improved health outcomes be attributed to program efforts?
c) Did the target population benefit from the program and at what cost?

1.5 Monitoring and Evaluation

Monitoring and evaluation form an essential part of all project work. Specifically,
they are key components of the project cycle and Centres’ Annual Plans. Briefly:

- Monitoring:
Monitoring is the continuous measurement, recording, collection and
communication of information, and observation of the performance of a
service, programme or project, to see that it is proceeding according to the
proposed plans and objectives.

Monitoring is a process of continuous and periodic review to ensure that
the project inputs, deliveries and targeted outputs are carried out according
to plan. (Keeping watch.)

It continuously tracks performance against what was planned by
collecting and analyzing data on the indicators established for monitoring
and evaluation purposes.

It provides continuous information on whether progress is being made
towards achieving results (outputs, outcomes and goals) through record
keeping and regular reporting systems.

Monitoring looks at both programme processes and the changes in the
conditions of target groups and institutions brought about by programme
activities. It also identifies strengths and weaknesses in a programme.

- Evaluation:
Evaluation is the process of determining the value of a project in terms of
its relevance, efficiency, effectiveness and impact. Evaluations assess what
happened as a result of project activities, and answer the questions: "To
what extent did your project achieve what it set out to achieve?" and "What
have we learned as a result of assessing the effectiveness of our work?"

Evaluations rely heavily on information collected through the monitoring
systems for assessment and analysis of progress towards agreed aims and
objectives.

Evaluation is a process that attempts to determine, as systematically and
objectively as possible, the relevance, effectiveness and impact of activities
in the light of their objectives. It is an independent, objective examination,
during or after a programme or project, of inputs, processes, outcomes and
impact.

What can evaluation do?

It is simply 'taking stock' of the results of a project over a defined time and
weighing them against the pre-determined targets.

It is an assessment of 'result', that is 'what' has been accomplished, and of
'process', that is 'how' it was accomplished.

Measuring outcomes identifies the weaknesses and strengths of the project
and its activities in order to increase efficiency and the effective use of
resources. (Process evaluation)

It also identifies the programme's stated objectives, usually by collecting
baseline data. (Formative evaluation)

It compares baseline data with data collected at the completion of the
program. (Summative evaluation)
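The baseline-versus-completion comparison behind summative evaluation can be sketched in a few lines. The sketch below uses purely hypothetical literacy figures; the variable names and values are illustrative, not taken from the text:

```python
# Summative evaluation in miniature: compare an indicator's baseline
# (formative) value with its value at programme completion.
baseline_literacy_rate = 0.62  # hypothetical baseline: 62% literate
endline_literacy_rate = 0.74   # hypothetical endline: 74% literate

# Absolute change in the indicator over the programme period.
absolute_change = endline_literacy_rate - baseline_literacy_rate  # ~ +0.12

# Relative change, i.e. the improvement expressed against the baseline.
relative_change = absolute_change / baseline_literacy_rate  # ~ +19%
```

The same two numbers (absolute and relative change) are usually reported side by side, since a large relative change can hide a small absolute one when the baseline is low.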

1.6 MONITORING AND EVALUATION LESSONS

Monitoring and evaluation are tools for effective programme implementation that
enable both planners and decision makers to draw lessons for future action.

Both aim at:

1. Tracking the progress of the project during its implementation.
2. Determining, systematically and objectively, the:
   a) Efficiency - cost/benefit ratio: could it have been accomplished
      at a lower cost?
   b) Significance - did it make a substantial contribution to
      development?
   c) Effectiveness - did it achieve its planned purpose?
   d) Impact of the program - the extent to which attainment of the
      immediate objectives has contributed to the achievement of
      overall programme objectives (goals).

3. Deriving lessons for future development planning, better program
formulation and implementation

What are Program Monitoring and Evaluation?

Monitoring is the routine process of data collection and measurement of
progress toward program objectives.

Evaluation is the use of social research methods to systematically investigate
the achievement of a program's results.

What is the purpose?

a) Improve program implementation
   a. Data on program progress and implementation
   b. Improve program management and decision making
b) Inform future programming
c) Inform stakeholders
   a. Accountability (donors, beneficiaries)
   b. Advocacy

Questions for the module

1. Define Monitoring and explain its purpose.


2. Define Evaluation and explain its purpose
3. Describe the difference between Monitoring and Evaluation.
4. Describe when Monitoring and Evaluation should be done.
5. Define impact and outcome.
6. Differentiate between impact and outcome.

Chapter Two

At the end of the course, the learner will be able to:
1. Identify the users of Monitoring and Evaluation information
2. Identify who conducts Monitoring and Evaluation
3. Explain why we monitor and why we evaluate
4. Explain what questions Monitoring and Evaluation will answer

2.0 Who needs and uses M&E Information?

1) To improve program implementation: Managers
2) To inform and improve future programs:
   a) Donors
   b) Governments
   c) Technocrats
3) To inform stakeholders:
   a) Donors
   b) Governments
   c) Communities
   d) Beneficiaries

2.1 Who conducts M&E?

1) Program implementers
2) Stakeholders
3) Beneficiaries

Monitoring & Evaluation requires a participatory process involving all stakeholders.

2.2 Answering the following questions

a) What outputs will be measured, when and how, for each major project
activity, and among whom and how many?
b) What effects will be measured, when and how, for each major project
activity, and among whom and how many?
c) What impacts will be measured for each program objective (based on
established indicators), when (for example pre- or post-project), how
(methods), among whom and how many?
d) Monitoring and evaluation are related but distinct concepts. Both are
analytical processes involving the gathering and analysis of relevant data
and information for the effective management of programmes.

2.3 Why monitor and evaluate?

It is important that Centres monitor and evaluate their work and projects for the
following reasons:

- Accountability: PEN Centres are accountable in two directions: to their
donors and members, and to their beneficiaries. The donors want to know if
their funds have been used effectively and efficiently.
Beneficiaries have a right to be involved in decisions and actions which
affect them. Centres therefore have a responsibility to be clear about what
they are doing, why they are doing it, and what results have been achieved.
By monitoring and evaluating their projects, they can be accountable both to
donors and to beneficiaries and their Centre members.

- Improving performance: At the end of a project, Centres will need to
reflect upon what worked well, what was not so successful and how they will
plan for the next piece of work. The evaluation process directly supports this.
Having completed these reflections, Centres should be in a position to
improve performance and results in subsequent projects.

- Learning: As well as improving performance, monitoring and evaluation can
also provide valuable lessons for other projects, either within the Centre or in
relation to similar projects that are being carried out by others. The process
of monitoring and evaluation can also help staff and others develop new
skills.

Additionally, donors usually require organisations to provide evidence of how
they have monitored and evaluated their projects.

2.4 Monitoring and Evaluation help us to answer the questions:

a) What happened during the intervention/project?
b) Did the intervention (project) take place as planned?
c) What were the successes and failures?
d) What do we need to change?
e) What would we do differently next time?
f) What changed?
g) Did the programmes make a difference?
These help us to be Pro-active & Accountable

Questions of the module

1. Who are the users of Monitoring and Evaluation information?
2. Who conducts Monitoring and Evaluation?

3. Why do we monitor and why evaluate?
4. Explain what Monitoring and Evaluation will answer?

Chapter Three

At the end of the course, the learner will be able to:
1. Explain how monitoring is done
2. Explain when and how we monitor
3. Describe how to gather the information needed to monitor and evaluate
4. Explain when we evaluate
5. Explain why we monitor
6. Describe the difference between Monitoring and Evaluation

3.0 HOW DO WE MONITOR?

We monitor quantitatively, for example by:
a) Counting activities
b) Calculating % increases or decreases
c) Using a simple questionnaire (e.g. to measure knowledge or risk perception
- has it increased?)
d) Client exit interviews
e) Making observations
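The "calculate % increases or decreases" step above is a single arithmetic rule. A minimal sketch, with invented quarterly client counts for illustration:

```python
def percent_change(previous: float, current: float) -> float:
    """Percentage increase (positive) or decrease (negative) between
    two measurements of the same indicator."""
    if previous == 0:
        raise ValueError("previous value must be non-zero")
    return (current - previous) / previous * 100

# Hypothetical monitoring data: clients reached in two successive quarters.
clients_q1 = 250
clients_q2 = 300
change = percent_change(clients_q1, clients_q2)  # 20.0, i.e. a 20% increase
```

The same function reports decreases as negative values, e.g. a drop from 200 to 150 clients gives -25.0.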

There are three levels of results that come from projects being carried out:

- Outputs: the products or services which are delivered on completion of
the project activities. In other words: What was done?
Examples of outputs that PEN Centres might use:
o Reading Circles: monthly meetings for PEN members and the
general public, with book presentations, readings and discussion
o Writers' workshops where published authors will support
participants with ideas and guidance
o Story competitions in schools

- Outcomes: immediate and observable changes in relation to the project
objectives, which were brought about as a direct result of project activities
and outputs. In other words: What happened?
Examples of outcomes that PEN Centres might use:
 Greater interest in and enthusiasm for literature
 More and better quality materials produced
 Changes in the amount and quality of press coverage
 Improved levels of literacy in schools
 Improved exam results

- Impact: this concerns longer-term changes that have come about as a
result of a project or programme. Impact can be either positive or negative
– both are equally important. In other words: What changed?
 Higher attendance at school
 More girls going into work after completing school
 Girls having their first babies later than previously

Monitoring and evaluation systems are designed to report on these different
levels of results.

3.2 When and how to use it?

It is important to plan for monitoring and evaluation at the beginning of the project.
In order to do this, you first need to be clear about what is involved in, and what is
different about, the two processes. Then you will need to plan how to build these
into your work. The last section in these Guidance Notes provides ideas about how
to gather the information that you will need in order to carry out your monitoring
and evaluation plan.

In terms of monitoring, you will see from the table below that this is an on-going
activity, which should be documented as activities take place. It is often a good idea
to develop forms or report books in order to make the collection of this information
as routine as possible.

All the monitoring information will be needed for any evaluations that are
conducted. The following table, adapted from Sharpening the Development
Process (Oliver Bakewell, INTRAC), provides an overview of what needs to be
considered when planning for M&E:

Timing
- Monitoring: continuous, throughout the project
- Evaluation: periodically, at significant points in the project; mid-term or end
  of project are most common

Scope
- Monitoring: day-to-day activities
- Evaluation: assesses overall delivery of activities and progress towards
  achieving the aim and objectives

Main participants
- Monitoring: project staff and project users
- Evaluation: external evaluators/facilitators, project users, project staff, donors

Reporting formats
- Monitoring: regular reports and updates to project users, management and donors
- Evaluation: written report with recommendations for changes to the project

Key Steps
Once you know what needs to be done, by whom and when, the next task is to
devise key questions and indicators that will be used to collect and analyse the
information you need.

What are indicators?
Essentially, they are what they sound like: illustrations or pointers that something has
happened or is happening.

- When the car in front of you indicates that it is turning left, you understand what
is happening

- When you see very heavy dark clouds in the sky, you have an indication that it
may rain very soon
Let’s take an example of an objective and a few activities and develop some key
questions and indicators:

Objective (outcome): To have promoted greater participation in
reading and writing in ….. region

- Activity 1: Reading Circles: monthly meetings for PEN members and the
general public, with book presentations, readings and discussion
- Activity 2: Organise writers' workshops where published authors will
support participants with ideas and guidance
- Activity 3: Story competitions in schools
Activity 1: Reading Circles: monthly meetings for PEN members and the general
public, with book presentations, readings and discussion
- Key question: Are the Reading Circles popular and building on local interest?
- Indicators: numbers and locations of Reading Circles; numbers of people
  (men/women) attending; topics covered; feedback from members; press
  reports; book sales

Activity 2: Organise writers' workshops where published authors will support
participants with ideas and guidance
- Key question: To what extent are the writers' workshops effective in enabling
  new writers to get published?
- Indicators: numbers of workshops; numbers of people (men/women)
  attending; results: participants' materials being published on the web and in
  newspapers, anthologies and books

Activity 3: Story competitions in schools
- Key question: Are the competitions popular and building interest in reading
  and writing?
- Indicators: numbers of schools involved; numbers of children (girls/boys)
  taking part; publicity

Indicators at outcome level - checking progress towards achieving the objective -
would include the following, which would be measured over time (to indicate
levels of change):
- Changes in attitude to literature
- Quality of materials produced
- Changes in the amount and quality of press coverage
- Levels of literacy in schools
- Exam results

In other words, the outcomes (results) can be seen as the sum of the parts
(activities).
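Measuring an outcome indicator over time amounts to comparing successive values of the same indicator. A small sketch with hypothetical yearly literacy rates (the years and values are invented for illustration):

```python
# Hypothetical outcome-level record: school literacy rate measured yearly.
literacy_rate_by_year = {2020: 0.55, 2021: 0.60, 2022: 0.68}

# Compare each measurement with the previous one to see the direction
# and size of change towards the objective.
years = sorted(literacy_rate_by_year)
yearly_change = [
    literacy_rate_by_year[after] - literacy_rate_by_year[before]
    for before, after in zip(years, years[1:])
]

# Progress is being made if every successive measurement improves.
improving = all(delta > 0 for delta in yearly_change)
```

Keeping the raw values (not just the verdict) lets an evaluation later ask not only whether the indicator improved, but by how much in each period.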

A note on indicators
Indicators may be:

- Quantitative: the change in the indicator can be shown through numbers, e.g.
the number of people submitting stories to a competition over time
- Qualitative: the change is shown through description, e.g. the changing level of
interest in literature
- Direct: something you can measure directly, e.g. the number of meetings held
by a committee
- Indirect: for example, increasing women's involvement in a committee could be
indicated by the number of decisions made which support issues raised by women

3.4 Gathering the Information that you need to monitor and evaluate

As stated at the beginning of this Note, you will need to plan how to gather the
information needed for monitoring and evaluation. The following table provides
some methods that you might consider using, as well as an indication of the
strengths and weaknesses of each. When making your monitoring and evaluation
plan, it will be wise to decide which methods you will use at each stage, so that
you can also plan the time, costs and resources that you will need. Many plans
will use some elements of the following techniques.

Technique: Case Studies
- Definition and use: collecting information that results in a story that can be
  descriptive or explanatory and can serve to answer questions of how and why
- Strengths: can deal with a full variety of evidence from documents, interviews
  and observation; provides insights that are not easy to collect in more formal
  processes
- Weaknesses: good case studies are difficult to do; findings cannot be
  generalised; time consuming

Technique: Focus Groups
- Definition and use: holding focussed discussions with members of the target
  population who are familiar with the issues being explored. The purpose is to
  compare the beneficiaries' perspectives with concepts in the evaluation's
  objectives
- Strengths: similar advantages to interviews; particularly useful where
  participant interaction is desired
- Weaknesses: can be expensive and time consuming; findings cannot be
  generalised

Technique: Interviews
- Definition and use: the interviewer asks questions of one or more persons and
  records the respondents' answers. Interviews may be formal or informal,
  face-to-face or by telephone, and closed- or open-ended
- Strengths: people and institutions can explain their experiences in their own
  words and setting; flexible, allowing the interviewer to pursue unanticipated
  lines of enquiry or to probe issues in depth; particularly useful where language
  difficulties are anticipated; greater likelihood of getting input from senior
  officials
- Weaknesses: time consuming; can be expensive; if not done properly, the
  interviewer can influence the interviewee's response

Technique: Observation
- Definition and use: observing and recording a situation in a log or diary. This
  includes who is involved; what happens; and when, where and how events
  occur. Observation can be direct (the observer watches and records) or
  participatory (the observer becomes part of the setting for a period of time)
- Strengths: provides descriptive information on context and observed changes
- Weaknesses: the quality and usefulness of the data are highly dependent on
  the observer's observational and writing skills; findings can be open to
  interpretation; does not easily apply within a short time-frame to process
  change

Technique: Questionnaires
- Definition and use: developing a set of survey questions whose answers can
  be coded consistently
- Strengths: can reach a wide sample simultaneously; allows respondents time
  to think before they answer; can be answered anonymously; imposes
  uniformity by asking all respondents the same things; makes data compilation
  and comparison easier
- Weaknesses: the quality of responses is highly dependent on the clarity of
  the questions; sometimes difficult to persuade people to complete and return
  the questionnaire

Technique: Written Document Analysis
- Definition and use: reviewing documents such as records, administrative
  databases, training materials and correspondence
- Strengths: can identify issues to investigate further and provide evidence of
  action, change and impact to support respondents' perceptions; can be
  inexpensive
- Weaknesses: can be time consuming

3.5 When? Evaluation

The evaluation process can take place:

a) At some point during implementation, while the project or programme is still
underway; such an interim evaluation may take place mid-term or at the end
of a particular phase of the programme or project;
b) On completion of the project or programme (end-of-project/programme
evaluation, or at the end of the external financing);
c) A number of years after completion of the project, service or programme
(ex-post evaluation).
Therefore, the evaluation exercise takes place both during the life of the project
or programme and after the project or programme has been concluded.

3.6 Why? Monitoring

a) Monitoring in a programme or project takes place during the life of the
project/programme. It is largely concerned with the transformation of inputs into
outputs.
b) The monitoring process takes place throughout the phases of the project
cycle, namely: indicative programming; identification of the problem;
formulation (appraisal); planning and financing; implementation; and
monitoring and evaluation.

3.7 Difference between Monitoring and Evaluation

a) All definitions of the monitoring and evaluation processes agree that monitoring
is a continuous activity, whilst evaluation is periodic.
b) Monitoring uses the key indicators established to compare actual achievements
at various levels against the objectives. It can be carried out by the
commission, the recipient community, institution or country, or by the
programme/project team itself (often with technical assistance back-up).
c) The evaluation process, by contrast, is independent: an external auditor or
evaluator can be involved, or specified persons among the stakeholders can
undertake the exercise without involving the entire team.
d) Failure of the monitoring process at any stage of the programme may lead to a
negative outcome in terms of evaluation, and vice versa; hence the monitoring
process is pivotal in any programme prior to evaluation.

Questions for the module

1. How is monitoring done?
2. When and how do we monitor?
3. How do we gather the information that is needed to monitor and evaluate?
4. When do we evaluate?
5. Why do we monitor?
6. What is the difference between Monitoring and Evaluation?

Chapter four

At the end of the course, the learner will be able to:
1. Identify the qualities of indicators
2. Explain the difference between outcome and impact evaluation
3. Identify the purpose of monitoring
4. Identify what to monitor
5. Differentiate between outcome and impact

4.1 QUALITIES OF INDICATORS

a) Linked to project
b) Ability to measure change
c) Cost-effective and feasible to collect and analyze data
d) Easy to interpret
e) Permits change to be tracked over time
f) Comparable between projects, countries or population groups

4.2 OUTCOME AND IMPACT EVALUATION

The following are some of the definitions and benefits of outcome and impact:

a) Outcomes are the medium-term results of a health intervention or project,
while impact is the assessment of the long-term results: how the interventions
affected the outcomes, and whether the effects were intended or unintended.
b) Special evaluation studies are usually carried out to determine the outcomes.
For instance, in the HIV/AIDS epidemic, special evaluation studies that
determine and track the following indicators can be done at project level:
   - Median age at first sex
   - Reported condom use by adults (15-24) at last higher-risk sex with a
     non-regular partner
   - % of young people having multiple partners in the last one year
   - Reported higher-risk sex among adults (15-49) in the last one year
   - % of commercial sex workers reporting using a condom with their most
     recent client
   - % of people expressing accepting attitudes towards people with HIV/AIDS
c) A national population-based study will need to be done to determine the
impact of HIV and AIDS interventions. A national survey or sentinel
surveillance will be required to determine indicators such as:
   - Life expectancy at birth
   - HIV prevalence rate in young people (15-24); HIV prevalence among
     women attending ANC
   - Estimated number of people living with HIV/AIDS
   - Proportion of infants infected with HIV
   - Under-five mortality rate
   - AIDS deaths
   - Children orphaned by AIDS
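Percentage indicators of this kind are computed as a numerator over a denominator drawn from survey data. A minimal sketch; the respondent records, field names and values below are hypothetical, invented only to show the calculation:

```python
# Hypothetical survey records: one dict per respondent; the field name
# is invented for this sketch.
responses = [
    {"age": 19, "condom_use_last_higher_risk_sex": True},
    {"age": 22, "condom_use_last_higher_risk_sex": False},
    {"age": 17, "condom_use_last_higher_risk_sex": True},
    {"age": 24, "condom_use_last_higher_risk_sex": True},
]

# Denominator: young people aged 15-24; numerator: those reporting use.
eligible = [r for r in responses if 15 <= r["age"] <= 24]
reported = sum(r["condom_use_last_higher_risk_sex"] for r in eligible)
indicator = reported / len(eligible) * 100  # 75.0 (% of 15-24s reporting use)
```

Defining the denominator explicitly (who is eligible) is what makes such indicators comparable between projects, countries or population groups.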

4.3 Purpose of Monitoring

The purpose of monitoring is to:

a) Ensure that inputs are made available on time and are properly utilized.
b) Ensure that, if any unexpected results occur, their causes are noted and
corrective action is taken.
c) Enhance learning and facilitate decision making.
d) Help in documenting the process of implementation.

4.4 What to monitor

Monitoring will involve the following processes:

a) Inputs
b) Activities.
c) Content
d) Results
e) Outputs/outcome.

4.5 Monitoring in a program

a) Monitoring in a programme or project takes place during the life of the
project/programme. It is largely concerned with the transformation of inputs into
outputs.
b) The monitoring process takes place throughout the phases of the project
cycle, namely: indicative programming; identification of the problem;
formulation (appraisal); planning and financing; implementation; and
monitoring and evaluation.
4.6 Monitoring report

A monitoring report, drawn up at least once a year, gives a critical assessment of
progress towards achieving the health care programme's objectives and the likelihood
of sustainable benefits for the target group once the programme has been implemented.
However, if monitoring reveals that there are problems in the health care programme,
management decisions will have to be taken to alter or improve the service or
programme so that it comes back on track.

4.7 Define and differentiate between impact and outcome

An outcome is a medium-term consequence occurring after an event. It can be
measured through data and reports. For example, the uptake of insecticide-treated
bed nets (ITNs) by expectant mothers and children under five years is an outcome;
children who are not protected by sleeping under ITNs, or who do not receive
anti-malarial drugs for treatment or prophylaxis, remain at risk of dying of malaria.
An impact is a long-term result. It can be measured both quantitatively and
qualitatively. In health care programmes, an impact assessment can be carried out at
every stage/phase of the health care programme in the form of impact evaluation.
Impact evaluation is a tool for policy makers, programme/project planners and
programme management. Therefore, impact evaluation requires information before the
health care programme begins, during the programme's life, and after the programme
has ended.

Questions for the module


1. What are the qualities of indicators?
2. Differentiate between outcome and impact evaluation.
3. What is the purpose of monitoring?
4. What do we monitor?
5. Differentiate between outcome and impact.

Chapter five

At the end of the course, the learner will be able to understand the following issues
relating to Monitoring & Evaluation:
1. What is meant by the term evaluation?
2. What are the purposes of evaluation?
3. What do we evaluate?
4. Identify different characteristics between monitoring and evaluation
5. When should monitoring and evaluation be done?

5.0 EVALUATION

a) Evaluation is the process of judging the value of what a project/programme has
achieved, particularly in relation to planned activities and overall objectives.
b) It involves value judgments. It is a periodic, in-depth analysis of programme
performance.
c) It relies on data generated through monitoring, interviews, focus group
discussions, surveys, etc.
d) Evaluations are often (but not always) conducted with the assistance of external
evaluators. Evaluation is important for identifying constraints or bottlenecks that
hinder the project from achieving its objectives. Solutions to the constraints can
then be identified.
5.1 Purpose of Evaluation

a) Helps implementers focus on progress towards realizing the project's purpose
and goal.
b) Improves project planning and management.
c) Promotes institutional learning.
d) Informs policy.

5.2 What to evaluate

a) Relevance
b) Efficiency
c) Impact
d) Effectiveness
e) Sustainability

5.3 DIFFERENT CHARACTERISTICS OF MONITORING AND EVALUATION.

Monitoring
a) Continuous
b) Keeps track; provides oversight; analyses and documents progress
c) Focuses on inputs, activities, outputs, implementation processes, continued
relevance, and likely results at outcome level
d) Answers what activities were implemented and what results were achieved
e) Alerts managers to problems and provides options for corrective action
f) Self-assessment by programme managers, supervisors, community
stakeholders, and donors

Evaluation
a) Periodic: at important milestones such as the mid-term of programme
implementation, at the end, or a substantial period after programme conclusion
b) In-depth analysis; compares planned with actual achievements
c) Focuses on outputs in relation to inputs; results in relation to cost; processes
used to achieve results; overall relevance; impact; and sustainability
d) Answers why and how results were achieved
e) Contributes to building theories and models of change
f) Provides managers with strategy and policy options
g) Internal and/or external analysis by programme managers, supervisors,
community stakeholders, donors and/or external evaluators
(Source: UNICEF, 1991; WFP, May 2000)

5.6 WHEN MONITORING AND EVALUATION SHOULD BE DONE

Evaluation can be done:

a. Before implementation
b. During implementation
c. After implementation

5.7 Before project implementation

Evaluation is needed in order to:


a) Assess the possible consequences of planned projects
b) Make a final decision on what project alternative should be implemented.

c) Assist in making decisions on how the project will be implemented.

5.8 During project implementation

Evaluation should be a continuous process and should take place in all project
implementation activities. This enables the project planners and implementers to
progressively review the project strategies according to the changing circumstances in
order to attain the desired activity and project objectives.

5.9 After project implementation

This is done to retrace the project planning and implementation process and to review
the results after the project has been implemented.

5.10 Performance indicators

a) Measures of inputs, processes, outputs, outcomes, and impacts


b) When supported with sound data, indicators can help track progress,
demonstrate results, and guide corrective action to improve service delivery
c) It is important to include key stakeholders in defining indicators; they are then
more likely to understand and use the indicators for management decision-making

5.11 Performance indicators

USED FOR:
a) Establishing performance targets and then evaluating progress
b) Indicating whether an in-depth evaluation or review is needed
ADVANTAGES:
a) Effective means to measure progress toward objectives
DISADVANTAGES:
a) Poorly defined indicators are not good measures of success
b) Tendency to set too many indicators, or those without accessible data sources -
costly, impractical… and then underutilized
c) Often a trade-off between selecting the best indicators and accepting those
which can be measured using existing data

Questions for the module
1. Define evaluation
2. Identify the purposes of evaluation
3. What do we evaluate?
4. Differentiate the characteristics between monitoring and evaluation
5. At what time should monitoring and evaluation be done?

Chapter Six
At the end of the course, the learner will be able to understand the following issues
relating to Monitoring & Evaluation:
1. Define impact and outcome
2. Differentiate between impact and outcome
3. What are means of verification?
4. What are the assumptions expected?

6.1 DEFINITION OF IMPACT AND OUTCOME

Introduction
Both outcome and impact are types of results-level indicators. Indicators are signs or
measures that show the extent of change in a project or organization.
Indicators help measure what actually happened, in terms of quality, quantity and
timeliness, against what was planned.
Impact

Impact is an indicator which describes the changes in the conditions of a community
after a programme, i.e. changes in behaviour or practices in a population as a result of
the programme, e.g. increased literacy.
Outcome
This is an indicator which measures the product of an activity or programme, e.g. the
number of pupils attending a school.
It indicates what changes have occurred and shows whether outputs lead to the
expected positive changes.

6.2 DIFFERENCE BETWEEN IMPACT AND OUTCOME

a) An outcome is realized within a shorter span of time, while an impact takes a
longer period to be realized.
b) Outcomes are immediate and could be temporary, while impacts are long term
and may be permanent, e.g. behaviour change.
c) Outcomes are mostly expected, almost obvious and more easily achieved, while
impacts are like a vision: difficult to achieve, or never achieved at all.
d) Outcomes describe the product of an activity or programme, while impacts
describe changes in the conditions of a community after a programme.
e) Outcomes are more easily observed than impacts, e.g. it is easier to count the
number of students attending a school than to observe behaviour change.

6.3 MEANS OF VERIFICATION (MoV)

These are the information sources that show an indicator has been achieved, e.g.
minutes of meetings, reports, certificates, records, etc.

6.4 ASSUMPTIONS

These are the external factors (to the programme or intervention) that must remain
positive for the objectives at the various levels to be achieved. The programme has no
control over these external factors – but may try to influence their remaining positive.
e.g. continued donor/political support.

Questions of the module
1. Define impact and outcome.
2. Differentiate between impact and outcome.
3. Define means of verification.
4. What are the assumptions expected?

Chapter Seven
At the end of the course, the learner will be able to understand the following issues
relating to Monitoring & Evaluation:
1. Identify factors affecting sustainability
2. Identify the type and level of each indicator
3. What is a good indicator?
4. State the importance of sustainability
5. Identify challenges in developing M&E systems
6. Describe the participatory process
7. Design the M&E platform

7.1 Factors affecting sustainability

a) Policy support
b) Appropriate technology
c) Environmental protection
d) Socio-cultural aspects; women in development
e) Institutional and management capacity
f) Economic and financial viability
Logic model

The logic model relates five levels, each with its own indicators. The example
indicators below come from a training programme:

Input
- Level: quantifiable resources going into your activities, i.e. the things you budget for.
- Example indicators: # of training manuals; amount of money spent on the training workshop.

Activity
- Level: what you do to accomplish your objectives (here, training).
- Example indicator: # of trainings conducted.

Output
- Level: immediate results from your activity, e.g. people trained, services provided.
- Example indicator: # of people trained.

Outcomes
- Level: longer-term change in knowledge, attitude and behaviour; related to the programme goal.
- Example indicator: measure of change in the quality of care provided to clients.

Impact
- Level: long-term, population-level change; can relate to a programme or organization vision/mission statement.
- Example indicator: HIV incidence in the target population.
Begin by inserting your activities into the logic model:

Input: quantifiable resources going into your activities, i.e. the things you budget for.
Activity: 1) What you do to accomplish your objectives. 2) What else do you do to
accomplish these objectives? Are there any sub-objectives that should be measured?
In most cases each activity should have its own set of inputs and outputs.
Output: immediate results from your activity, e.g. people trained, services provided.
Outcomes: longer-term expected results related to changes in knowledge, attitude and
behaviour. Outcomes usually give an indication of whether programme goals are being
achieved.
Impact: long-term effect on the incidence of the disease (e.g. reduction in HIV) or
effects on the population at large (e.g. the population living longer/healthier). Can
relate to a programme or organization vision/mission statement.
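The logic-model chain can also be sketched as a simple data structure, which makes the level of each indicator explicit. This is an illustrative sketch only; the level descriptions follow the table above, and the example indicators are hypothetical:

```python
# Sketch: the logic model as a data structure, pairing each level
# with one example indicator. Values are hypothetical examples.

logic_model = {
    "input":    "Quantifiable resources budgeted for activities",
    "activity": "What you do to accomplish your objectives",
    "output":   "Immediate results (people trained, services provided)",
    "outcome":  "Longer-term change in knowledge, attitude, behaviour",
    "impact":   "Long-term, population-level change",
}

example_indicators = {
    "input":    "amount of money spent on the training workshop",
    "activity": "# of trainings conducted",
    "output":   "# of people trained",
    "outcome":  "change in quality of care provided to clients",
    "impact":   "HIV incidence in target population",
}

# Print the chain in order, from inputs through to impact.
for level, description in logic_model.items():
    print(f"{level}: {description} -> e.g. {example_indicators[level]}")
```

Keeping level and indicator together in one structure like this is one way to check that every level of the chain has at least one indicator before the M&E plan is finalized.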
7.2 M&E Questions

Monitoring questions

a. What is being done?


b. By whom?
c. Target population?
d. When?
e. How much?
f. How often?
g. Additional outputs?
h. Resources used? (Staff, funds, materials, etc.)

Indicators: Definition

a) Markers that help to measure change by showing progress towards meeting


objectives
b) Observable, measurable, and agreed upon as valid markers of a less well-defined
concept or objective
c) Indicators differ from objectives in that they address specific criteria that will be
used to judge the success of the project or program.

7.3 Type and Level of Each Indicator

Type
a) Input/Process (Monitoring)
b) Outcome / Impact (Evaluation)
Level
a) Global level
b) Country level
c) Program level

7.4 What Is a Good Indicator?

a) Valid: Measures the effect it is supposed to measure


b) Reliable: Gives same result if measured in the same way
c) Precise: Is operationally defined so people are clear about what they are
measuring
d) Timely: Can be measured at an interval that is appropriate to the level of change
expected
e) Comparable: Can be compared across different target groups or project
approaches
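As an illustrative (not authoritative) sketch, the five qualities above can be turned into a review checklist applied to each draft indicator; the field names and the draft indicator are invented examples:

```python
# Sketch: reviewing a draft indicator against the five qualities of
# a good indicator. Field names and the draft are hypothetical.

CRITERIA = ["valid", "reliable", "precise", "timely", "comparable"]

def review_indicator(indicator):
    """Return the list of quality criteria the indicator fails."""
    return [c for c in CRITERIA if not indicator.get(c, False)]

draft = {
    "name": "% of people trained who pass a post-training test",
    "valid": True, "reliable": True, "precise": True,
    "timely": True,
    "comparable": False,  # not yet measured the same way across sites
}
print(review_indicator(draft))  # ['comparable']
```

An empty result means the draft passes all five checks; anything returned flags a quality the M&E team still needs to address.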

7.5 SUSTAINABILITY

a) Sustainability of the developed system is a must.


b) A system that is not used is not sustainable.
c) Therefore, use of the system is a prerequisite for its sustainability.

The Monitoring & Evaluation Plan…
• Is a tool used to plan and manage the
a) collection,
b) analysis and
c) reporting of data related to an indicator
7.6 Challenges in developing M & E systems:

a) People do not participate in M&E development which may cause some


resistance & misunderstanding
b) People are reluctant to engage in an M&E system
c) People do not understand the M&E system
d) No data or incorrect data gathered by management or implementers
e) Data fed back to management for planning not taken seriously

7.7 Why use a Participatory Approach ???

a) People will have a better understanding of the system


b) People will have more knowledge about M&E systems
c) Key stakeholders will have been integrally involved in the design of the M&E
system

It is therefore crucial to involve key stakeholders in the design of the M&E system
and to train them on M&E.

7.8 Tips for Using a Participatory Approach

• Conduct an Introductory planning workshop with key stakeholders


– This can take the form of anything from a half-day awareness session to a
three-day training course.
– Create awareness among stakeholders of
• the benefits and purposes of having a M&E system
• the process that will be followed to design the system
• key concepts of M&E
• Include stakeholders (implementers, decision makers & managers) in the design
workshop of the M&E Plan
• Validation and verification of M&E plan, forms, reporting formats, indicator
guideline and other reports that are generated
– Sufficient time for commenting should be planned for.

– Key stakeholders should be invited to all feedback workshops.
• Establish a reference group representing the interests of key stakeholders
– Members should have knowledge of M&E
– Function as a resource group for development of the M&E system.
– Include M&E responsibilities in Job descriptions
– Ensures that the M&E function is acknowledged as part of the job and that
it is included in the performance appraisals.

7.9 Process for Designing an M&E system

PHASE 1: PREPARATION
Step 1: Building commitment and Preparation
a. Participatory Workshop to plan the process of designing the system
with client and key stakeholders
b. Process is designed to build commitment and cooperation
throughout
Step 2: Situation Analysis
a) Stakeholder Analysis
b) Assessment of existing M&E Structures
c) Assessment of M&E capacity of organization
PHASE 2: DESIGN
Step 3: Methods for Designing the M&E Plan
a. Design Workshop – Includes; Key stakeholders & decision makers,
Presentation of SA results, Review of objectives & intervention logic,
Design of M&E Plan
b. Write up M&E Plan and recommendation from the workshop
c. Verification of M&E Plan
Step 4: Design M&E Forms
a) Make list of all tools,
b) Design tools, Verify and,
c) Finalize tools
Step 5: Design Reporting Format
d) Ensure all indicators are represented in reporting formats
e) Verify and finalize reporting formats
PHASE 3: STANDARDIZATION
Step 6: Develop M&E System Support Documents
a. Glossary of terms
b. Definition of indicators
c. Description of tools

Step 7: Institutional Arrangements
a. Define roles and responsibilities
b. Put required human and physical resources in place
c. Include M&E KPI
d. Develop supporting information technology infrastructure (electronic
system)
e. Training in the M&E System
f. Implementation, Piloting, Quality control, & Continuous improvement

Step 2: Situation analysis …


• The process of conducting a situation analysis includes:
– Stakeholder analysis
– Document review (including best practice M&E examples in the sector)
– Interviews with key stakeholders
– Collecting documents during interviews including forms & reporting
formats.
– Observation & recording of the storing of data (e.g. the databases with
their categories).

7.10 Stakeholder Analysis


Key questions (adapted from Kusek & Rist, 2004, pp. 41-45):
a) What is driving the need for building an M&E system?
b) Who are the champions for building and using an M&E system?
c) What is motivating those who champion building an M&E system?
d) Who will lead and manage the development and implementation of the system?
e) Who will contribute to the system?
f) Who will benefit from the system?
g) What kind of information do stakeholders want?
h) Who will not benefit from building an M&E system?

7.11 Assessment of Existing M&E Structures

Key themes to be explored in assessing the M&E structures:
• Assess monitoring gaps
a) What monitoring systems are in place?
b) The effectiveness of these systems
c) The feasibility of these systems
d) What are the monitoring gaps?
• Indicators used and needed
a) What is being measured?
b) What is not being measured but should be (gaps)?
c) Assess indicators in terms of whether they are directly related to the
objectives and outputs, whether they are verifiable, adequate, reliable and
practical to measure (DVARP)
d) What critical variables need to be considered when designing indicators
(e.g. demographics, gender issues)

7.12 Assessment of Existing M&E Structures

• Methodology / tools for data collection, information sharing & analysis


a) Which data collection tools are being used?
b) If any tools are not being used effectively, why not?
c) The regularity with which data collection tools are being completed,
collected and analysed
d) The effectiveness and efficiency of data collection and analysis methods
e) Which data is practical and feasible to gather and analyse, and which is
not?
f) Which data is returning useful information and which is not?
g) Are methods and tools violating people's rights?
h) What critical variables need to be considered when designing indicators
(e.g. demographics, gender issues etc.)

Assessment of Existing M&E Structures…


• Reporting structures and systems in place, including accountability of reporting

a) How is data being reported, in what formats, frequency, by whom and to
whom?
b) Who is ultimately responsible for reporting on various aspects of the
project?
c) Which stakeholders need what information and how often?
d) How will information gathered be used and by whom; who will be making
what decisions based on the data returned by the M&E system? (currently
and in the future).
– How can reporting systems be streamlined so that programme staff and
management are not overloaded with reporting requirements?

7.13 Assessment of Existing M&E Structures…

• Usage of system
a. How is the organisation currently using the information generated?
b. For what purpose would the organisation like to use the
information?
c. How will the organisation react to negative information generated?
d. What are the decision-making structures in place to react to the
information generated from the M&E system?
e. What are the decision making structures that should be in place to
react to the information generated from the M&E system?
f. How will management ensure that information generated from the
M&E system, and the decisions resulting from it, will be filtered down
to the implementers?

7.14 Assessment of M&E capacity of the organization

Main questions:
1. Where does the capacity exist to support a results-based M&E system?
2. What is the skills level in terms of:
a) Project and Programme management

b) Data capturing
c) Data analysis
d) Project and Programme goal and outcome establishment
e) Budget management
f) Performance auditing
• Are there any institutions, research centres, private organisations, consultancies
or universities who can provide technical assistance and training for the
organisation in evidence-based M&E?

7.15 Setting targets: What is a target?

• A target is the desired measure on an indicator that you are aiming to achieve
after your programme/project has been implemented or has had the desired
impact.
• Why set targets?
a) Guides towards achieving a goal/outcome/impact/output
b) Motivates you
c) Gives details on numbers that need to be achieved within a time period
• When setting realistic targets one should consider:
a) the measure on the baseline
b) the current resources available and what can be achieved within those
constraints
c) finding a balance between being ambitious and setting targets that are
easily achievable

7.16 Recording Current Status

1. The measure on an indicator at a specific point in time when information on the


indicator is collected.
2. Why record current status?
a) Allows one to gauge how far along you are to achieving target
b) Makes current status of the indicator readily available

c) It can be used to motivate staff to see what has been achieved thus far,
and how far they still have to go
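To illustrate how current status is gauged against a target (with made-up figures), progress can be expressed as the share of the planned baseline-to-target change achieved so far:

```python
# Sketch: gauging progress toward a target from the baseline and
# current measures on an indicator. All figures are hypothetical.

def percent_progress(baseline, current, target):
    """Share (%) of the planned baseline-to-target change achieved."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return round(100 * (current - baseline) / (target - baseline), 1)

# e.g. ITN coverage among expectant mothers: 40% at baseline,
# 55% at the mid-term review, with 70% targeted at project end.
print(percent_progress(baseline=40, current=55, target=70))  # 50.0
```

A milestone can then be read as a sub-target: if the mid-term milestone was 50% of the planned change, this indicator is exactly on track.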

What is the difference between a BASELINE STUDY and a SITUATION ANALYSIS?
a) Situation analysis: broader study of the problems related to the programme
intervention
b) Baseline study: collects very specific measures on indicators before an
intervention
c) Baseline study and situation analysis can be combined

What is the difference between TARGET and BENCHMARK ???


• A benchmark is a standard against which one can assess achievements. This
standard can be based on what has already been achieved by other similar
organisations.
• A benchmark is thus a tool used for setting targets.

What is the difference between TARGET and MILESTONE ???


• Target
a) desired measure on an indicator at the end of the project
• Milestone
a) determines how far along one should be at different points in time during
the project
b) Sub-targets indicating the planned progress at different points in time

Supporting M&E documents


• Consists of forms, reporting formats and guideline for the M&E plan
• Purpose is to “make the M&E Plan alive”
• Start by making lists of the forms & reporting formats that must be developed
(from MoV column in the M&E Plan)

Step 4: Design of forms

1. Forms are the MoV or the source of the information on the indicator that need to
be completed
2. When you develop the form, you look at the following columns in the M&E plan:
a) the actual indicator,
b) who should fill in the form (from the name of the form),
c) how the data should be analysed (disaggregation).
• Design a standardised header for all forms, including information such as:
a. the form title,
b. who will gather the form,
c. supporting documents (e.g. should anything else go with the form
or should it be attached to another form or report format?)
Step 5: Design of reporting formats

1. A reporting format consists of a range of combined forms.
2. Once you have developed all the forms, you need to develop the reporting
formats. It often helps to draw a diagram of which forms lead to which reporting
formats.
3. The person responsible for reporting can inform the naming of the reporting
format.
4. You need to look at the relevant forms that lead to the reporting format.

PHASE 3: STANDARDIZATION
Step 6: Guideline for the M&E Plan
• Why is it important?
a) The M&E Plan can be quite confusing, complicated to understand, and difficult
to access
b) The M&E Plan can be alienating for those who encounter it for the first time
c) Guideline document ensures that those responsible for implementing the
system can understand the indicator, and that data will be analysed and
interpreted in standard ways
• What is its purpose?
a) To ensure that the data related to the indicator is calculated and measured
in the same manner every time it is reported on.
Step 6: guidelines for M&E Plan …
What should it contain?
a. Indicator definitions and analysis guidelines
b. Description of forms
c. Any other information relevant to the M&E plan that may have been developed
e.g. contact details, relevant policy / strategy documents etc.

Indicator definitions and analysis guidelines


• Should contain:
a) Glossary of common terms (e.g. HIV and AIDS, Facilitator)
b) Definition of the indicator
c) Relevance of the indicator
d) Method of gathering data
e) Data required / Types of disaggregation
f) Data sources
g) Formulas
h) Interpretations
i) Quality assurance measures
j) Possible problems
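As one hedged way of standardizing such definitions, each indicator entry in the guideline document can be captured as a structured record covering the items above. All field values here are invented examples, not from the source:

```python
# Sketch: a structured indicator definition for the M&E guideline
# document. All values are invented examples.

indicator_definition = {
    "name": "% of clients receiving quality care",
    "definition": "Clients whose visit met the care checklist, "
                  "divided by all clients observed, times 100",
    "relevance": "Tracks the programme outcome on quality of care",
    "data_gathering_method": "Quarterly facility observation visits",
    "disaggregation": ["sex", "age group", "facility"],
    "data_sources": ["observation form F-03", "facility register"],
    "formula": "100 * clients_meeting_checklist / clients_observed",
    "quality_assurance": "Supervisor re-checks 10% of observation forms",
    "possible_problems": "Observer effect may inflate scores",
}

def compute(clients_meeting_checklist, clients_observed):
    # Apply the stated formula the same way every reporting period.
    return round(100 * clients_meeting_checklist / clients_observed, 1)

print(compute(45, 60))  # 75.0
```

Because the formula is written down once in the guideline, everyone reporting on the indicator calculates it in the same manner every time, which is the stated purpose of the guideline document.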

Description of forms
a) Form number
b) Form Name
c) Purpose of form
d) Description of form
e) A diagram of how the forms feed into the reports could also be useful.
(Step 5)

Step 7: Institutional Arrangements


What are institutional arrangements?
a. Roles and responsibilities
b. Electronic database system
c. Budgets and resources
d. Capacity building
e. Continuous improvement on the system
Roles and responsibilities
a) Planning for data gathering, analysis and reporting begins with identifying roles
and responsibilities
b) These need to be formalised and included in the Key Performance Areas of
individuals and units

Electronic database system


a) Data gathered by M&E activities can be managed using an electronic database
b) The design of the database is informed by the data gathering, analysis and
reporting plan
c) The M&E plan needs to be developed first, BEFORE the electronic system can
be designed
d) Allows for systematic and speedy management of large amounts of information
e) Can assist with generating automatic reports.

f) The M&E team needs knowledge and skills to manage and maintain the
database
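As a minimal sketch (the table and column names are assumptions, not from the source), an electronic M&E database can start from a single indicator-results table keyed to reporting periods, from which summary reports are generated automatically:

```python
# Sketch: a minimal electronic M&E database using Python's built-in
# sqlite3 module. Table, column names and figures are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in practice
conn.execute("""
    CREATE TABLE indicator_results (
        indicator TEXT NOT NULL,   -- e.g. '# of people trained'
        period    TEXT NOT NULL,   -- reporting period, e.g. '2024-Q1'
        site      TEXT NOT NULL,   -- disaggregation by facility/site
        value     REAL NOT NULL,
        PRIMARY KEY (indicator, period, site)
    )
""")
rows = [
    ("# of people trained", "2024-Q1", "Site A", 30),
    ("# of people trained", "2024-Q1", "Site B", 25),
    ("# of people trained", "2024-Q2", "Site A", 40),
]
conn.executemany("INSERT INTO indicator_results VALUES (?, ?, ?, ?)", rows)

# Automatic report: total per reporting period for one indicator.
for period, total in conn.execute(
    "SELECT period, SUM(value) FROM indicator_results "
    "WHERE indicator = ? GROUP BY period ORDER BY period",
    ("# of people trained",),
):
    print(period, total)
```

This mirrors the points above: the M&E plan (indicators, periods, disaggregation) dictates the schema, and the database then produces speedy, repeatable summary reports.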

Budgets and Resources


1. Organisation needs to have the correct skills for monitoring and evaluation, (can
be internal or external)
2. M&E function needs to have the following skills:
a) working knowledge of M&E -can be acquired once appointed
b) theoretical knowledge of M&E - acquired from short courses or formal
qualifications
c) understanding of government systems and procedures
d) research skills (particularly data analysis skills)
e) data base skills
f) capacity building and facilitation skills for training, mentoring and coaching
people in the use of the system

Capacity building for M&E


Everyone in the organisation should understand the importance of monitoring and
evaluation, as they:
a) will be the ones who will have to maintain the system
b) will be using the results to make decisions for improvement

Continuous improvement of the system


a) System must be piloted & tested
b) The indicators, means of verification and supporting system elements can be
refined and improved over time.
c) Ensure quality control of the system

Where does M&E fit into the organisation?


1. Monitoring is an internal function
2. Evaluation can be internal or external

3. Monitoring should be conducted by line managers and implementers
4. An organisation should have an M&E capability within the organisation, e.g. a
unit or a co-ordinator
5. Who are the key partners in M&E?
a) Planners (e.g. strategic planning)
b) Researchers
c) Senior managers
d) Governing Board
e) Knowledge management
f) Human Resources
g) Quality control / performance management

What is the role of the M&E unit / coordinator?


1. Design, maintain & continuously improve the M&E system
2. Build capacity for M&E within the organisation
3. Build & encourage a culture of learning e.g. establish learning networks, have
learning and sharing meetings etc.
4. Co-ordinate & drive M&E efforts
5. Encourage compliance with the M&E system
a. Ensure that all role players are collecting, analysing & reporting data on
required indicators using correct tools & reporting on time
6. Quality of data collection – quality assurance.
7. Reporting and communicating results
– Accountability should be upward, sideways and downwards

Hints for a successful M&E system


a) The organisation must have proper conceptualisation of the project before M&E
system can be developed.
b) There can be more than one M&E system per organisation, e.g. for
implementing organisations and for multi-partner arrangements.

c) Stay focused on what information needs to be gathered for the M&E system, as
opposed to what could be interesting to have
d) Ensure that the M&E system is manageable and data can be collected
e) Ensure that the organisation has the capacity to analyse the data
f) The organisation must have mechanisms to feed back results from M&E to
management
g) The organisation must make decisions on what they want to do with the M&E
reports. How are issues used to improve implementation?

What do we want the evaluation to demonstrate?


a) Did the desired outcome occur?
b) Did the new procedures/intervention/inputs make a difference?
c) Can we attribute the outcome/change in outcome to the
program/process/intervention?

Deciding on an Appropriate Evaluation Design
1. How do you intend to use the results?
2. What do you want to measure (indicators)?
a. Provision, utilization, coverage, effectiveness, impact
3. How sure do you want to be (type of inference)?
a. What is the cost of making a mistake (low, medium, high)?
4. When do you need the results?
5. How much are you willing to pay?
6. Where in the program life cycle are you now?

Types of Evaluation Designs


1. Experimental
a. Strongest for demonstrating causality, most expensive
2. Quasi-experimental
a. Weaker for demonstrating causality, less expensive
3. Non-experimental
a. Weakest for demonstrating causality, least expensive

What is Causality?
Causality is when one event produces a second event.

Questions for the module
1. What are the factors affecting sustainability?
2. What are the type and level of each indicator?
3. What is a good indicator?
4. Identify the importance of sustainability.
5. What are the challenges in developing M&E systems?
6. State how the participatory process is done.
7. Illustrate the design of an M&E platform.

References

1. Adapted from Osborne & Gaebler in JZ Kusek and RC Rist (2004). Ten Steps to
a Results-based Monitoring and Evaluation System: A Handbook for
Development Practitioners, The World Bank, Washington DC, page 11.
2. PH Rossi, MW Lipsey and HE Freeman (2004). Evaluation: A Systematic
Approach, 7th edn, Sage Publications, Thousand Oaks, CA, page 431.
3. M Scriven (1991). Evaluation Thesaurus, Sage Publications, Newbury Park, CA,
page 1.
4. M Quinn Patton (1997). Utilization-Focused Evaluation: The New Century Text,
3rd edn, Sage Publications, Thousand Oaks, CA, page 23.
5. Mental Health and Drug and Alcohol Office (2010). Mental Health Project
Summary 2010, NSW Department of Health, page 6. (Internal document).
6. C Watson and N Harrison (2009). op. cit.
7. PH Rossi, MW Lipsey and HE Freeman (2004). op. cit., page 426.
8. JZ Kusek and RC Rist (2004). op. cit., page 226.
9. PH Rossi, MW Lipsey and HE Freeman (2004). op. cit., page 429.
10. PH Rossi, MW Lipsey and HE Freeman (2004). op. cit., page 430.
11. WK Kellogg Foundation (1998). WK Kellogg Foundation Evaluation Handbook,
WK Kellogg Foundation, Battle Creek, MI, page 35. Viewed 8 September 2010
at: <www.portal.mohe.gov.my/portal/page/portal/ExtPortal/MOHE_
MAIN_PAGE/Tender_Contract/BUDGET/files/KELLOG_FOUNDATION_EVALU
TION_HANDBOOK.pdf>.
12. PH Rossi, MW Lipsey and HE Freeman (2004). op. cit., page 432.
13. JZ Kusek and RC Rist (2004). op. cit., page 226.
14. PH Rossi, MW Lipsey and HE Freeman (2004). op. cit., page 431.

B. Sc. Health Record Information Management
End of term examination
3rd year Class

Subject: Monitoring and Evaluation in HIS Date: Time allowed:3 hours


Instructions
1. Write your student number at the space provided on the answer
sheet.
2. Question one is compulsory
3. Answer any nine (9) of questions two to twelve.

Q1. With a diagram, describe how monitoring and evaluation can be done using a logic
model/framework
(10 Marks)
Q2. a) Define Monitoring and explain its purpose.
b) Define Evaluation and explain its purpose
(10 Marks)
Q3. Describe the difference between Monitoring and Evaluation.
(10 Marks)
Q4. Describe when Monitoring and Evaluation should be done.
(10 Marks)
Q5. Differentiate between impact and outcome.
(10 Marks)
Q6. Illustrate the monitoring and evaluation framework
(10 Marks)
Q7. Who needs and uses M&E information?
(10 Marks)

Q8. a) Why monitor? Why evaluate?
b) What do we monitor and what do we evaluate?
(10 Marks)
Q9. What are the qualities of indicators?
Q10. Illustrate a plan of operation.
(10 Marks)
Q11. What are the factors affecting sustainability?
(10 Marks)
Q12. State at least five qualities of a good indicator.
(10 Marks)
