Monitoring & Evaluation


What is Monitoring?

 Monitoring is the routine assessment (e.g. daily, monthly, quarterly) of information or indicators of ongoing activities.
Why Monitor

Monitoring
 tracks progress toward the set program targets or performance standards
 identifies aspects of the program that are working according to plan and those that need midcourse corrections, so that timely improvements or changes can be made
What is Evaluation?

 Evaluation refers to the measurement of how much things have changed because of the intervention(s) implemented.
Why Evaluate

 Because there are many factors that cause things to change, a formal evaluation tries to demonstrate how much a specific intervention contributed to the change.
Definition of M & E

 A management tool built around a formal process for measuring performance and impact, using indicators that help measure progress toward achieving intermediate targets or goals. Monitoring systems comprise procedural arrangements for data collection, analysis and reporting.
Purpose of M & E

 To determine how program funds are being spent
 To determine if programs are being implemented as planned
 To determine what the effects are on public health
 To determine if programs need to be changed to be more effective
 To provide reasons for success or failure
Monitoring and Evaluation

Figure 1.1: Monitoring tracks PMTCT interventions; evaluation measures the resulting change.
Indicators

Indicator:
A value on a scale of measurement (a number, %, ratio, fraction) derived from a series of observed facts that reveal relative changes as a function of time.
E.g. in 2000 there were no ARVs available; in 2005, 5% of known HIV-infected persons were on ARVs.
Understanding Indicators

 Definition
 Rationale
 Numerator
 Denominator
 Measurement
 Strengths and limitations
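
These elements can be kept together in one consistent record per indicator. Below is a minimal sketch, assuming Python and hypothetical class and field names (nothing here comes from the slides themselves), filled in with the CSO training indicator described on the following slides.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One programme indicator, documented using the elements listed above."""
    name: str
    definition: str
    rationale: str
    numerator: str      # what is counted on top
    denominator: str    # what the count is measured against
    measurement: str    # how and how often the data are collected
    strengths: list = field(default_factory=list)
    limitations: list = field(default_factory=list)

# Example entry based on the CSO training indicator on the following slides.
cso_training = Indicator(
    name="CSOs trained in HIV/AIDS prevention among youth 15-24",
    definition="Number and percentage of CSOs trained in HIV/AIDS prevention among youth 15-24",
    rationale="NAP relies on CSOs to reach subgroup populations",
    numerator="Number of CSO workers trained or retrained in prevention methods",
    denominator="Total number of CSO workers identified as able to provide prevention methods",
    measurement="Training records, reviewed periodically",
    strengths=["Documents increasing capacity to deliver preventative interventions"],
    limitations=["No conclusions should be drawn about quality of services"],
)
```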
Definition

 Number and percentage of CSOs trained in HIV/AIDS prevention among youth 15-24 (the prevention approach may vary depending on the CSO, e.g. church, SLPPA)
Rationale

 NAP is reliant on CSOs to reach subgroup populations
– Important to assess the CSO resources available to address prevention
– Knowledge and approval of CSOs' activity in terms of prevention
– Are CSOs meeting the needs of the populations concerned?
What is measured

 This indicator quantifies
1. CSOs with human resources that are trained in HIV prevention, and/or
2. CSOs that provide prevention services
Numerator/Denominator

 Numerator: number of CSO workers trained or retrained in prevention methods
 Denominator: total number of CSO workers identified as able to provide prevention methods
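
With the two counts defined above, the indicator value is simply the numerator divided by the denominator, expressed as a percentage. A minimal sketch follows; the function name and the counts used are made-up illustration values.

```python
def percent_trained(numerator: int, denominator: int) -> float:
    """Indicator value: numerator over denominator, expressed as a percentage."""
    if denominator == 0:
        raise ValueError("denominator must be greater than zero")
    return 100.0 * numerator / denominator

# Illustrative counts only.
trained = 45    # CSO workers trained or retrained in prevention methods
eligible = 120  # CSO workers identified as able to provide prevention methods
print(f"{trained}/{eligible} workers trained ({percent_trained(trained, eligible):.1f}%)")
```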
Strength

 This indicator tracks the number of CSOs trained for prevention of HIV infection. It attempts to document increasing capacity to deliver preventative interventions.
Limitation

 No conclusions should be drawn regarding quality, because quality is affected by the practices employed rather than by the existence of trained personnel.
Indicator Examples

% of HIV patients on ARVs
 2000 - 0%
 2005 - 5%
 2008 - 20% = improvement

Reported HIV cases in newborns
 2000 - 2
 2005 - 26 = stop and review
 2008 - 41 = programme failure
Collecting and Using Data
M & E of CSOs

 Unique, as M & E is done on an individual basis
 May be ongoing or one-time
 Results are usually reflected in surveys or records
Active M & E

 Examination of each module to select an indicator (e.g. CSOs, PMTCT, OVC, TC, blood safety)
 Data elements selected from the indicator, e.g.
– # Females 15-24 on ARV
– Sex, age, treatment
 Data capture designed (see the sketch below)
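
One way to design the data capture is to mirror the selected data elements in a single record type, as in the minimal sketch below (class and field names are hypothetical, and the records shown are made-up illustration values, not programme data).

```python
from dataclasses import dataclass

@dataclass
class CaptureRecord:
    """One row of the data-capture form for the example '# Females 15-24 on ARV'."""
    sex: str        # 'F' or 'M'
    age: int        # age in completed years
    on_arv: bool    # currently receiving ARV treatment

def count_females_15_24_on_arv(records: list) -> int:
    """Count only the records that fall inside the indicator definition."""
    return sum(1 for r in records if r.sex == "F" and 15 <= r.age <= 24 and r.on_arv)

records = [CaptureRecord("F", 19, True), CaptureRecord("F", 30, True),
           CaptureRecord("M", 20, True), CaptureRecord("F", 22, False)]
print(count_females_15_24_on_arv(records))   # -> 1
```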
What is good data

 Understand the data. Make sure that those responsible for collecting information clearly understand what is being asked for.
 Record the data every time. Encourage those responsible for collecting information to record it on the appropriate form.
What is good data

 Record all the data. Make sure all the information requested on the form is completed.
 Record the data in the same way every time. When possible, use the same definitions and the same rules for reporting the same piece of information over time.
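
Taken together, these rules amount to a routine completeness and consistency check on every form that comes in. A minimal sketch follows, assuming a hypothetical field list and a single agreed convention for recording sex.

```python
REQUIRED_FIELDS = ["site", "reporting_month", "sex", "age", "on_arv"]

def check_form(record: dict) -> list:
    """Return a list of problems so that incomplete or inconsistent forms can be queried."""
    problems = [f"missing: {name}" for name in REQUIRED_FIELDS
                if record.get(name) in ("", None)]
    # Record the data in the same way every time: one agreed convention for sex.
    if record.get("sex") not in (None, "", "F", "M"):
        problems.append("sex must be recorded as 'F' or 'M'")
    return problems

print(check_form({"site": "Clinic A", "reporting_month": "2008-06",
                  "sex": "Female", "age": 21, "on_arv": True}))
# -> ["sex must be recorded as 'F' or 'M'"]
```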
M & E through data collection

[Diagram: monitoring (M) is carried out repeatedly between program start and program end, while evaluation (E) takes place at mid-course and at the end, tracking progress from the program objectives that were set to the objectives that were met.]
Mid-course and end-of-program evaluation can be determined by:

1. Reviewing available records and reports
2. Conducting supervisory assessment
3. Conducting self-assessment
4. Conducting peer assessment
5. Obtaining client feedback (exit interviews)
6. Polling community perceptions
7. Benchmarking (comparing the site's services with others)
Your Role in M & E

 To ensure that proposal objectives are met
 To seek assistance if necessary
 To be honest about deliverables
 To give feedback/report
 To allow review of work
My Role in M & E

 To review approved proposals
 To define indicator(s)
 To prepare a reporting format
 To provide assistance where necessary
 To review periodically (dependent on the proposal)
 To review accomplishments
Thank You

Any questions? Contact 712-3474.
