CPU COLLEGE

SCHOOL OF GRADUATE STUDIES


MASTER OF PROJECT MANAGEMENT (MPM)

INDIVIDUAL ASSIGNMENT

PROJECT MONITORING AND EVALUATION


By: Yohannis Solomon ID /EMPM/529/14

SUBMITTED TO: Dr. GEREMEW KEFYALEW


ASSISTANT PROFESSOR (PhD)

JULY 23, 2022

ADDIS ABABA, ETHIOPIA

CPU College

Project Monitoring and Evaluation

Group Assignment 1

Attempt all questions and answer accordingly!


1. Do you think monitoring & evaluation is important for your own work or your organization's performance? What are your organization's monitoring and evaluation practices today? What do you recommend to improve and strengthen the monitoring & evaluation efforts at your respective organization?

Monitoring and Evaluation Practices

Monitoring and evaluation is increasingly becoming an essential program management tool.


According to Dyason (2010), monitoring is the collection and analysis of information about a given program or intervention, while evaluation is an assessment that seeks to answer questions about a program or intervention. These definitions depict monitoring as an ongoing process based on the targets and activities set during the planning stage of the work. It helps keep the work on track and lets management know when things are not running as expected during the project. Done properly, it is an instrumental tool for good project management and offers a suitable base for evaluation. It allows one to ascertain whether the project resources are sufficient and properly utilized, whether capacity is adequate and suitable, and whether work is proceeding as planned. Evaluation is more concerned with the results, outcomes and impact of the project. It is usually a periodic assessment of changes in the predetermined results that relate to the program or the interventions of a project (Goyder, 2009). It helps the project manager to make decisions on the project’s fate and to determine whether the project has attained its set goals and objectives.
Monitoring and evaluation practices ensure that project/program results at the levels of impact, outcome, output, process and input can be quantified, offering a framework for accountability and assisting informed decision-making at program and policy levels.
These practices span identifying needs, defining program objectives, implementing activities, and monitoring and evaluating the program. Human resources management is critical in project management, and particularly essential for effective monitoring and evaluation. The technical capacity and organizational know-how for carrying out evaluations, the value and participation of human resources in decision-making, and their motivation in executing the decisions arrived at can significantly affect the evaluation (Vanessa and Gala, 2011). Dobrea et al. (2010) further argue that this should go beyond mere training, adopting a learning approach that has a positive effect on the evaluation process within the organization. Donaldson (2013) explains how stakeholders are empowered: different activities engage different sets of stakeholders, which determines how and when stakeholders are empowered in their different capacities. These approaches promote inclusion and meaningful participation. To strengthen stakeholder involvement, stakeholders should be engaged in the early stages of evaluation, particularly during planning. This entails funding high-profile individuals and political agents who have an interest in learning, as well as using instruments to demonstrate effectiveness (Jones, 2008). Participatory methods stipulate the active involvement of interested parties in decision-making about the project and its strategy; such involvement generates a sense of ownership of the results and recommendations related to M&E (Chaplowe & Cousins, 2015).

The subject of planning and pre-construction planning is central to the project control process. According to Gyorkos (2011), planning is a process of decision-making undertaken in advance of execution, meant to craft a desired future together with the means of achieving it; planning answers the questions of what, how, by whom, with what and when. The purpose of planning, as explained by Kelly and Magongo (2014), is to assist the manager in fulfilling the primary functions of direction and control in implementing project components, and in coordinating and communicating with the many parties involved. George (2008) says that during the planning phase, managers can identify potential problems proactively before they greatly affect project cost and schedule during implementation. Project planning helps to create a benchmark for execution. Zimmerer and Yasin (2011) argue that clear benchmarks are critical, as they provide direction for the project team at execution as events unfold. To do this, the project manager has to assemble the most competent team and take cultural differences into consideration. Maylor (2013) recognized the widely known limits of the usual midterm planning horizon for development projects in promoting sustainable benefits, particularly when behavioral and institutional transformation is among the goals and multiple local agencies are involved. Open-ended commitments are not appropriate; however, phasing project activities over a longer period is a project strategy that supports sustainable benefits. A phasing approach requires clear goals and objectives from the beginning and well-articulated decision points at the end of each project phase. Where there is ambiguity about local policy, capability or commitment, an initial pilot phase leading on to a number of subsequent phases should be the business case rather than the exception (Kalali, Ali and Davod, 2011).

Estimation of financial resources is done during planning for the implementation of monitoring and evaluation (Dyason, 2010). A key aspect of planning for monitoring and evaluation is to estimate the costs, staffing, and other resources required for the work. It is essential for monitoring and evaluation specialists to weigh in on M&E budget needs at the project design phase so that funds are allocated to the implementation of key monitoring and evaluation tasks (Ahsan and Gunawan, 2010). IFAD (2012) noted in its report that most developing countries face the challenge of implementing sound monitoring and evaluation due to a lack of control over their funding. Donors therefore need to put more emphasis on establishing sound monitoring and evaluation systems by factoring this into the funding.

In my own organization, monitoring and evaluation is regarded as a core tool for enhancing project management quality, considering that in the short and medium term, the management of complex projects entails corresponding strategies from the financial viewpoint that are required to adhere to the criteria of effectiveness, sustainability and durability. This is the only way to ensure that most of these projects realize their goals and leave a sustainable impact on society.

Monitoring supports both project managers and staff in understanding whether projects are progressing as planned, minimizing time and cost overruns while ensuring that the required quality standards are attained in project implementation. Monitoring also helps to:

 Support the aim of consistent and high-quality program service delivery.
 Foster the idea of continuous improvement and identify specific areas for improving service delivery.
 Help empower service providers with the ability to assess the quality of their service delivery.
 Foster a collective commitment to quality through a common set of clear, measurable criteria and aims.

2. Select a given project/program and develop the job description (goal, duties, roles, and responsibilities) of a Monitoring and Evaluation professional.

Role and Responsibilities of an M&E Project Manager

Monitoring and Evaluation (M&E) professionals can have many different titles and can have
quite a diversity of responsibilities depending on the context and organizations where they work.
Common titles seen in government agencies, non-governmental organizations (NGOs) and non-
profit organizations include: M&E Officer; M&E Specialist; and M&E Manager. These titles are
often used interchangeably. M&E professionals work at agency headquarters or central offices
providing technical assistance and strategic oversight, or in the field collecting and managing
data. Regardless of their place of work, M&E professionals play an important role in project
management and often help build capacity in performance and impact measurement within their
organizations.

Mission of the Department:

As part of WWF-Kenya’s growth to nationally make conservation impact at scale to deliver towards our goal of people living in harmony with nature, we are strengthening our internal capacity to support an effective monitoring, evaluation and learning system. The role of the M&E Manager will be to develop, review and implement a robust Monitoring, Evaluation & Learning framework and tools to facilitate measurement of progress; enhance compliance of program and projects with WWF Network standards, policy frameworks and best practices; and achieve efficiency and accountability.


Duties and Responsibilities:

The Manager will provide technical support to all the functions of WWF-Kenya so as to ensure
that the implementation of the Organizational Strategic Plan meets the highest standards, and in
particular;

 Monitoring – Take lead in the design and implementation of a monitoring framework to track delivery against WWF-Kenya goals and objectives.
 Evaluation – Take lead in the analysis of data collected under the monitoring framework for
assessment of progress and areas for improvement. In addition, facilitate periodic evaluations
of WWF-Kenya conservation impacts to determine impact levels.
 Reporting - Provide regularly updated reports to the SMT on the status of implementation
against WWF-Kenya goals and objectives. The Manager will also work with the different
Directorates in preparing performance reports for the Board of Directors. Additionally, the
incumbent will work with Program Managers to ensure timely and quality technical reports
as per WWF Kenya standards and donor requirements.
 Knowledge management - Facilitate ongoing and collaborative learning within WWF-Kenya
based on key data from the performance management system for continuous improvement of
program delivery.


Common Responsibilities in Monitoring and Evaluation Careers:

Planning

Developing strong M&E systems requires a great deal of planning. The M&E professional plays a
key role in facilitating the input of project staff, partners and other stakeholders in project design
and measurement activities. Responsibilities include:

 Providing expertise in M&E planning and methodology
 Participating in and providing support to project design activities including development
of project theories of change and strategic frameworks (Results Frameworks, Log
Frames)
 Developing a Monitoring and Evaluation plan
 Helping determine performance and impact indicators and targets
 Providing support to proposal development for M&E components

Day-to-Day Monitoring and Evaluation Activities

The M&E professional plays an essential role in tracking and updating M&E data as well as
ensuring the data is of the best quality possible. Responsibilities include:

 Implementing monitoring systems and designing monitoring tools
 Developing data collection tools
 Monitoring project activities, outputs and progress towards anticipated results
 Working with data platforms, databases and select technologies to capture and organize
data
 Training field staff in monitoring and evaluation processes and providing ongoing
coaching
 Conducting or providing support to data quality assessments

Analysis and Reporting

Once the M&E system has been implemented and data collection processes established, the
M&E professional proceeds with the analysis and reporting of data. Responsibilities include:

 Determining data analysis procedures and use of quantitative or qualitative analysis tools
 Cleaning, sorting, categorizing and organizing data
 Analyzing quantitative and/or qualitative data
 Summarizing findings
 Developing monthly, quarterly or annual reports depending on project requirements
 Disseminating evaluation findings and project results to donors and other stakeholders

Evaluations or Special Studies

The M&E professional will often be involved in special studies or evaluations, which may be
conducted by the M&E professional and project staff in the case of an internal evaluation, or with
the assistance of external evaluation consultants in the case of final or impact evaluations,
depending on donor requirements and resources. Responsibilities of the M&E professional
include:

 Conducting program analysis or special studies


 Supporting or leading evaluation teams
 Managing external evaluation consultants and drafting scopes of work

Knowledge Management

M&E professionals often provide much support to knowledge management processes within
their organizations. Responsibilities can include:

 Contributing to institutional learning processes


 Convening communities of practice and other organizational learning practices
 Tracking best practices in monitoring and evaluation

3. Write an essay on and evaluate the proposed Monitoring & Evaluation system for Ethiopia's Ten-Year Perspective Plan (2021-2030 GC).

The Minister of the National Planning and Development Commission, Fitsum Assefa (PhD), said the 10-year perspective plan is intended to make Ethiopia an African beacon of prosperity.

In her exclusive interview with Policy Matters, the Minister said that the 10-year plan envisions Ethiopia becoming an African beacon of prosperity by 2030. The plan is intended to ensure shared prosperity in all its dimensions, where every citizen gets access to it, she added.

“For this to happen, the economy should stay on a high growth trajectory, and we target it to register 10 percent average GDP growth over the 10-year period,” Dr. Fitsum stressed.

As per the 10-year plan, Ethiopia is working to slash poverty by half, register a per capita GDP of 2,200 US dollars, and position the country among middle-income countries by doubling the current per capita GDP, Dr. Fitsum explained.

“Unemployment is also targeted to fall to less than nine percent. In fact, other goals are also included on the infrastructure side. For instance, energy generation is targeted to reach about 20 gigawatts, along with universal access to electricity and safe drinking water in both rural and urban areas over the coming 10 years,” she stated.

“It is intended to ensure that social development goals touch the lives of our citizens in sectors including health and education, among others, pursuant to the Sustainable Development Goals,” the minister underlined.

The African Agenda 2063 is also included in the plan to improve the livelihoods of the people, in line with the goals set to bring about sustainable and shared prosperity, she underscored.

The 10 year perspective plan has inclusive and comprehensive goals that include economic,
social, administrative and institutional issues, according to the minister.

Agriculture, manufacturing, tourism, mining and ICT are focus areas of the 10-year perspective plan in the coming three years.

The government is working to facilitate a better and smoother environment for the private sector, she said, adding that activities are underway to develop a private sector development strategy.

Work is underway to scale up best practices in the Public-Private Partnership initiative, which is becoming effective in areas such as energy and infrastructure.

She further underscored that the 10-year perspective plan is also intended to enable rural farmers to get access to financial support, using their cattle and crops on farmland as collateral.

All activities in the plan show the commitment of the government to open up the economy for private sector investment.

Successful partnerships and joint ventures with the domestic private sector and foreign investors will be pursued to tap synergies in a more transparent manner.

The ease of doing business initiative, including one-window services, and reforms in state-owned enterprises, including the energy and telecom sectors, will continue as per the 10-year plan, she added.

4. Compare and contrast Project Monitoring & Evaluation with Impact Evaluation/Assessment.

Monitoring

Monitoring is a form of evaluation or assessment, though unlike outcome or impact evaluation, it takes place shortly after an intervention has begun (formative evaluation), throughout the course of an intervention (process evaluation) or midway through the intervention (mid-term evaluation).

Monitoring is not an end in itself. Monitoring allows a program to determine what is and is not working well, so that adjustments can be made along the way. It allows a program to assess what is actually happening versus what was planned.

Monitoring allows a program to:

 Implement remedial measures to get the program back on track and remain accountable to the expected results the program is aiming to achieve.
 Determine how funds should be distributed across the program activities.
 Collect information that can be used in the evaluation process.
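The planned-versus-actual comparison behind these remedial measures can be sketched in code. The activity names, figures and the 90% threshold below are entirely hypothetical, purely to illustrate how monitoring data can flag activities that need attention:

```python
# Hypothetical sketch of a planned-vs-actual monitoring check.
# All activity names, figures and the threshold are invented.
planned = {"trainings_held": 20, "clients_served": 500, "reports_filed": 12}
actual = {"trainings_held": 14, "clients_served": 520, "reports_filed": 9}

def variance_report(planned, actual, threshold=0.9):
    """Flag activities achieving less than `threshold` of their planned target."""
    flags = {}
    for activity, target in planned.items():
        achievement = actual.get(activity, 0) / target
        status = "on track" if achievement >= threshold else "needs remedial action"
        flags[activity] = (status, round(achievement, 2))
    return flags

for activity, (status, achievement) in variance_report(planned, actual).items():
    print(f"{activity}: {achievement:.0%} achieved ({status})")
```

In practice the threshold and the reporting frequency would come from the program's M&E plan rather than being fixed in code.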

When monitoring activities are not carried out directly by the decision-makers of the program, it is crucial that the findings from those monitoring activities are coordinated and fed back to them.

Information from monitoring activities can also be disseminated to different groups outside of
the organization which helps promote transparency and provides an opportunity to obtain
feedback from key stakeholders.

There are no standard monitoring tools and methods.  These will vary according to the type of
intervention and objectives outlined in the program. Examples of monitoring methods include:

 Activity monitoring reports


 Record reviews from service provision (e.g. police reports, case records, health intake
forms and records, others)
 Exit interviews with clients (survivors)
 Qualitative techniques to measure attitudes, knowledge, skills, behavior and the
experiences of survivors, service providers, perpetrators and others that might be targeted
in the intervention.
 Statistical reviews from administrative databases (i.e. in the health, justice, interior
sectors, shelters, social welfare offices and others)
 Other quantitative techniques

Impact Evaluation

Impact evaluation measures the difference between what happened with the program and what
would have happened without it.  It answers the question, “How much (if any) of the change
observed in the target population occurred because of the program or intervention?”

Rigorous research designs are needed for this level of evaluation. It is the most complex and
intensive type of evaluation, incorporating methods such as random selection, control and
comparison groups.

These methods serve to:

 Establish causal links or relationships between the activities carried out and the desired
outcomes.
 Identify and isolate any external factors that may influence the desired outcomes.

For example, an impact evaluation of an initiative aimed at preventing sexual assaults on women
and girls in town x through infrastructural improvements (lighting, more visible walkways, etc.)
might also look at data from a comparison community (town y) to assess whether reductions in
the number of assaults seen at the end of the program could be attributed to those improvements.
The aim is to isolate other factors that might have influenced the reduction in assaults, such as training for police or new legislation.
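The town x / town y comparison above is essentially a difference-in-differences calculation: the extra change in the program town beyond the background trend in the comparison town is attributed to the intervention. A minimal sketch, with invented assault counts (none of these figures come from a real evaluation):

```python
# Hedged sketch of the comparison-community logic, with made-up counts.
baseline = {"town_x": 120, "town_y": 115}   # assaults before the program
endline = {"town_x": 70, "town_y": 105}     # assaults after the program

# Change in each town over the period
change_x = endline["town_x"] - baseline["town_x"]   # town with the program
change_y = endline["town_y"] - baseline["town_y"]   # comparison town

# Difference-in-differences estimate: reduction in town x beyond the
# background trend observed in town y.
program_effect = change_x - change_y
print(program_effect)  # -40: roughly 40 fewer assaults attributable to the program
```

A real impact evaluation would also need the two towns to be genuinely comparable, which is why rigorous designs rely on careful selection of control or comparison groups.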

By contrast, data that are more difficult to collect are generally gathered through evaluations. If the monitoring data reveal that a project is not going as planned, an evaluation can establish why this is so. An evaluation examines and assesses processes, results and social impacts. Evaluations can be carried out at various points in time.

Monitoring and evaluation are contrasted below:

Initial question
  Monitoring: What’s happening in the project?
  Evaluation: Why is something happening, with what degree of quality, and with what consequences (results)?

Why?
  Monitoring: to review the project’s progress; to be able to make informed decisions; to be able to carry out adjustments; to provide a basis for further analysis (e.g., evaluation).
  Evaluation: to describe and assess progress and results; to draw conclusions and derive recommendations.

Who?
  Monitoring: carried out internally by project staff.
  Evaluation: carried out internally or externally.

5. Describe Monitoring & Evaluation Framework and Indicators.

M&E Conceptual Framework

The M&E framework follows the hypothesized causal logic of the Theory of Change and the corresponding results framework. However, it also provides additional details about how the hypotheses will be verified, which indicators will be used, how they will be measured, and how the information is intended to be used. After identifying the key indicators, the M&E plan assigns the appropriate levels of rigor and outlines the assessment methodology. The assessment methodology guides the M&E activities throughout the assessment implementation (before, during and after).

Performance Indicators

Performance indicators are measures of inputs, processes, outputs, outcomes, and impacts for
development projects, programs, or strategies. When supported with sound data collection—
perhaps involving formal surveys—analysis and reporting, indicators enable managers to track
progress, demonstrate results, and take corrective action to improve service delivery.

Participation of key stakeholders in defining indicators is important because they are then more
likely to understand and use indicators for management decision-making.

Indicators serve several purposes:

 Setting performance targets and assessing progress toward achieving them.
 Identifying problems via an early warning system to allow corrective action to be taken.
 Indicating whether an in-depth evaluation or review is needed.
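As a small illustration of how indicator data can feed such an early warning system, the sketch below tracks hypothetical indicators against baselines and targets; the indicator names, figures and the 25% review threshold are all invented, not drawn from any real plan:

```python
# Illustrative indicator-tracking sketch; all names and figures are hypothetical.
indicators = [
    {"name": "districts_with_drr_action_plan",
     "baseline": 5, "target": 30, "current": 18},
    {"name": "households_with_early_warning_access",
     "baseline": 1000, "target": 5000, "current": 1500},
]

def progress(ind):
    """Fraction of the baseline-to-target distance covered so far."""
    return (ind["current"] - ind["baseline"]) / (ind["target"] - ind["baseline"])

for ind in indicators:
    p = progress(ind)
    status = "review needed" if p < 0.25 else "on course"
    print(f"{ind['name']}: {p:.0%} of target ({status})")
```

Measuring progress against the baseline-to-target distance, rather than the raw count, keeps indicators with very different scales comparable on one report.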

Selecting Indicators

Selection must be based on a careful analysis of the objectives and the types of changes wanted, as well as how progress might be measured, and on an analysis of available human, technical and financial resources. A good indicator should closely track the objective that it is intended to measure. For example, the development and utilization of DRR Action Plans would be good indicators if the objective is to reduce disaster risks at national and local levels. Selection should also be based on an understanding of threats. For example, if natural disasters are a potential threat, indicators should include the resources and mechanisms in place to reduce the impact of natural disasters. Two types of indicator are necessary:

1) Outcome/impact indicators, which measure changes in the system (e.g. resource allocation for DRR based on Strategic Action Plans).

2) Output/process indicators, which measure the degree to which activities are being implemented (e.g. number of stakeholder-developed Strategic Action Plans).

Note that it may be difficult to attribute a change, or effect, to one particular cause. For example,
resource allocation for DRR could be due to good management of the DRR agencies / authorities
outside the UNISDR support.

A good indicator should be precise and unambiguous so that different people can measure it and
get similarly reliable results. Each indicator should concern just one type of data (e.g. number of
UNISDR supported Strategic Action Plans rather than number of Strategic Action Plans in
general). Quantitative measurements (i.e. numerical) are most useful, but often only qualitative
data (i.e. based on individual judgments) are available, and this has its own value. Selecting
indicators for visible objectives or activities (e.g. early warning system installed or capacity

assessment undertaken) is easier than for objectives concerning behavioral changes (e.g.
awareness raised, community empowerment increased).

Criteria for Selecting Indicators

Choosing the most appropriate indicators can be difficult. Development of a successful accountability system requires that several people be involved in identifying indicators, including those who will collect the data, those who will use the data, and those who have the technical expertise to understand the strengths and limitations of specific measures. Some questions that may guide the selection of indicators are:

Does this indicator enable one to know about the expected result or condition?

Indicators should, to the extent possible, provide the most direct evidence of the condition or
result they are measuring. For example, if the desired result is a reduction in human loss due to
disasters, achievement would be best measured by an outcome indicator, such as the mortality
rate. The number of individuals receiving training on DRR would not be an optimal measure for
this result; however, it might well be a good output measure for monitoring the service delivery
necessary to reduce mortality rates due to disasters. Proxy measures may sometimes be necessary
due to data collection or time constraints. For example, there are few direct measures of
community preparedness. Instead, a number of measures are used to approximate this:
community’s participation in disaster risk reduction initiatives, government capacity to address
disaster risk reduction, and resources available for disaster preparedness and risk reduction.
When using proxy measures, planners must acknowledge that they will not always provide the
best evidence of conditions or results.

Is the indicator defined in the same way over time? Are data for the indicator collected in
the same way over time?

To draw conclusions over a period of time, decision-makers must be certain that they are looking
at data which measure the same phenomenon (often called reliability). The definition of an
indicator must therefore remain consistent each time it is measured. For example, assessment of

the indicator “successful employment” must rely on the same definition of “successful” (i.e., three
months in a full-time job) each time data is collected. Likewise, where percentages are used, the
denominator must be clearly identified and consistently applied. For example, when measuring
child mortality rates after a disaster over time, the population of the target community from which
children are counted must be consistent (i.e., children aged 0-14). Additionally, care must be
taken to use the same measurement instrument or data collection protocol to ensure consistent
data collection.
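The denominator-consistency point can be shown in a small sketch. The figures below are invented; what matters is that the age band in the denominator is fixed once in the indicator's definition, so every measurement round uses the same definition:

```python
# Minimal sketch of a reliably defined rate indicator. All figures are invented.
def child_mortality_rate(deaths_0_14, population_0_14, per=1000):
    """Deaths among children aged 0-14 per `per` children in that age group.

    The denominator is always the 0-14 population: using a different age band
    (or the whole population) in a later round would break comparability.
    """
    return deaths_0_14 / population_0_14 * per

rate_2021 = child_mortality_rate(deaths_0_14=45, population_0_14=30000)
rate_2022 = child_mortality_rate(deaths_0_14=30, population_0_14=31000)
print(rate_2021, rate_2022)
```

Encoding the definition once, rather than recomputing the rate by hand each round, is one way to keep the measurement protocol consistent over time.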

Will data be available for an indicator?

Data on indicators must be collected frequently enough to be useful to decision-makers. Data on outcomes are often only available on an annual basis; those measuring outputs, processes, and inputs are typically available more frequently.

Are data currently being collected? If not, can cost effective instruments for data collection
be developed?

As demands for accountability are growing, resources for monitoring and evaluation are
decreasing. Data, especially data relating to input and output indicators and some standard
outcome indicators, will often already be collected. Where data are not currently collected, the
cost of additional collection efforts must be weighed against the potential utility of the additional
data.

CPU College

Project Monitoring and Evaluation

Group Assignment 2

Attempt all questions and answer accordingly!

1. Provide the technical and conceptual definitions of the following terms:

a) Monitoring

b) Evaluation

c) Monitoring and Evaluation

d) System

a) Monitoring

Monitoring is a periodically recurring task already beginning in the planning stage of a project or
program. Monitoring allows results, processes and experiences to be documented and used as a
basis to steer decision-making and learning processes. Monitoring is checking progress against
plans. The data acquired through monitoring is used for evaluation.

Monitoring is the systematic process of collecting, analyzing and using information to track a
program’s progress toward reaching its objectives and to guide management decisions.
Monitoring usually focuses on processes, such as when and where activities occur, who delivers
them and how many people or entities they reach.

Monitoring is conducted after a program has begun and continues throughout the program
implementation period. Monitoring is sometimes referred to as process, performance or
formative evaluation.

Monitoring is the regular observation and recording of activities taking place in a project or
program. It is a process of routinely gathering information on all aspects of the project.

To monitor is to check on how project activities are progressing. It is systematic and purposeful observation.

Monitoring also involves giving feedback about the progress of the project to the donors, implementers and beneficiaries of the project.

Reporting enables the gathered information to be used in making decisions for improving project
performance.

b) Evaluation

Evaluation is assessing, as systematically and objectively as possible, a completed project or program (or a phase of an ongoing project or program that has been completed). Evaluations appraise data and information that inform strategic decisions, thus improving the project or program in the future.

Evaluation is the systematic assessment of an activity, project, program, strategy, policy, topic, theme, sector, operational area or institution’s performance. Evaluation focuses on expected and achieved accomplishments, examining the results chain (inputs, activities, outputs, outcomes and impacts), processes, contextual factors and causality, in order to understand achievements or the lack of achievements. Evaluation aims at determining the relevance, impact, effectiveness, efficiency and sustainability of interventions and the contributions of the intervention to the results achieved.

Monitoring & Evaluation

M&E is an embedded concept and a constitutive part of every project or program design (a "must
be"). M&E is not a control instrument imposed by the donor, nor an optional accessory (a "nice
to have") of any project or program. M&E is ideally understood as a dialogue on development and
its progress between all stakeholders.

In general, monitoring is integral to evaluation. During an evaluation, information from previous
monitoring processes is used to understand the ways in which the project or program developed
and stimulated change.

System

A system may have many definitions depending on the context in which it is used. A system
may be defined as a set of interdependent components or factors which accomplish a
predetermined goal. As such it may be concluded that the process of project management will
always require the creation or use of a system. Ideally the system, though made of
different components, will operate in such a fashion that these components complement one
another in the performance of their defined function to a greater degree than they could as
separate, independent components. A system may exist in a physical sense, with actual
interlocking parts carrying out a tangible process, or in a conceptual, intangible sense. Most
often a system will be composed of both physical and intellectual parts and processes.
With regard to effective project management, a system may be composed of different parts all
existing for the purpose of reaching the project’s goal. These may include pre-existing project
management processes, learned techniques, different methodologies or lines of thinking and
working, as well as tools existing as intellectual processes or physical objects operated by the
project management team itself.

Activities are conducted to create and describe in detail a system-of-interest (SoI) to satisfy an
identified need. The activities are grouped and described as generic processes, which consist of
system requirements definition, system architecture definition, system design definition and
system analysis. The architecture definition of the system may include the development of
related logical architecture models and physical architecture models. During and/or at the end of
any iteration, gap analysis is performed to ensure that all system requirements have been mapped
to the architecture and design.

2. What is a Project Monitoring and Evaluation Plan?

A monitoring and evaluation (M&E) plan is a document that helps to track and assess the results
of the interventions throughout the life of a program. It is a living document that should be
referred to and updated on a regular basis.

A Project MEL Plan is technically an annex to a Project Appraisal Document (PAD).
Functionally it is a separable document that provides guidance to USAID staff over the life of a
project.


A Monitoring and Evaluation (M&E) Plan is a guide as to what you should evaluate, what
information you need, and who you are evaluating for.

The plan outlines the key evaluation questions and the detailed monitoring questions that help
answer the evaluation questions. This allows you to identify the information you need to collect,
and how you can collect it. Depending on the detail of the M&E plan, you can identify the
people responsible for different tasks, as well as timelines. The plan should be able to be picked
up by anyone involved in the project at any time and be clear as to what is happening in terms of
monitoring and evaluation.
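To make the structure of such a plan concrete, a minimal M&E plan entry could be represented as a simple data structure. The field names, questions and sources below are illustrative assumptions, not a prescribed format:

```python
# Illustrative sketch of an M&E plan entry: each evaluation question is
# linked to the monitoring questions, data sources, responsibilities and
# timelines that will answer it. All field names and values are hypothetical.
me_plan = [
    {
        "evaluation_question": "Did the training improve farming practices?",
        "monitoring_questions": [
            "How many farmers attended each session?",
            "Which practices were adopted within 6 months?",
        ],
        "data_sources": ["attendance sheets", "follow-up field surveys"],
        "responsible": "field M&E officer",
        "timeline": "quarterly",
    },
]

def data_needed(plan):
    """List every data source the plan commits the team to collecting."""
    sources = []
    for entry in plan:
        for src in entry["data_sources"]:
            if src not in sources:
                sources.append(src)
    return sources

print(data_needed(me_plan))  # ['attendance sheets', 'follow-up field surveys']
```

Linking each evaluation question to its monitoring questions in this way is what lets anyone pick up the plan and see, at any time, exactly what information is being collected, by whom, and why.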

An M&E plan should ideally be developed at the planning stage of a project, before
implementation begins. This allows you to plan ahead for data collection activities that you may
need to undertake, such as pre-intervention surveys. However, it is never too late to develop an
M&E plan. Retro-fitting an M&E plan to an existing project may simply mean that you are
constrained in some of the data that you can collect.

3. Differentiate between Baseline study and Baseline Data.

What is a baseline study?

The purpose of a baseline study is to provide an information base against which to monitor and
assess an activity’s progress and effectiveness during implementation and after the activity is
completed. Sometimes the data needed for a baseline, against which to measure the degree and
quality of change during an activity’s implementation, will already exist. In such cases the only
task is to collate the data and ensure that it can be updated in the longer term. So it is important
to find out what information is already available. But more commonly, there will not be any
existing data, or it will be incomplete or of poor quality, or it will need to be supplemented or
broken out into categories that are relevant for the project being implemented.

When planning a baseline study, the implementing organization needs to determine both what
change needs to be assessed and what sort of comparison(s) will need to be made as part of that
assessment of change. There are two common ways to measure change:

 ‘With and without’ activity – this seeks to mimic the use of an experimental control, and
compares change in the activity location to change in a similar location where the activity
has not been implemented; and
 ‘Before and after’ activity – this measures change over time in the activity location alone.
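The two comparison strategies can be sketched numerically. In the hypothetical indicator values below, combining them gives a simple difference-in-differences estimate; all figures are invented for illustration:

```python
# 'Before and after': change over time in the activity location alone.
# 'With and without': the activity location versus a similar location
# where the activity was not implemented.
# All values are hypothetical indicator measurements.
activity_before, activity_after = 40.0, 55.0
control_before, control_after = 42.0, 47.0

before_after = activity_after - activity_before   # change over time: 15.0
with_without = activity_after - control_after     # gap vs. control: 8.0

# Combining both comparisons (a difference-in-differences) nets out the
# change that would likely have happened anyway without the activity.
diff_in_diff = (activity_after - activity_before) - (control_after - control_before)

print(before_after, with_without, diff_in_diff)  # 15.0 8.0 10.0
```

The 'before and after' figure alone (15.0) overstates the effect here, because the control location also improved by 5.0 over the same period; this is why the choice of comparison matters when planning the baseline study.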

The study should be closely linked with the activity monitoring plan so that the data collected
can be replicated if necessary during ongoing activity monitoring, for any mid-term review,
when the activity is being assessed for the activity completion report and for any subsequent
evaluations. Baseline data should provide the minimum information required to assess the
quality of the activity implementation and measure the development results.

Baseline data

Baseline data help to set achievable and realistic indicator targets for each level of result in a
project’s design, and then determine and adjust progress towards these targets and their
respective results.
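For example, given a baseline value, a target and a current measurement for an indicator, progress toward the target can be computed as below; the indicator and figures are hypothetical:

```python
# Illustrative: percent of the baseline-to-target distance covered so far.
def progress_toward_target(baseline, current, target):
    """Return % progress from baseline toward an indicator target."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return 100.0 * (current - baseline) / (target - baseline)

# Hypothetical indicator: immunization coverage (%) in a district.
baseline, target = 60.0, 90.0
current = 72.0
print(f"{progress_toward_target(baseline, current, target):.0f}% of target reached")
# (72 - 60) / (90 - 60) = 40%
```

Without the baseline value of 60.0, the current reading of 72.0 says nothing about progress; the baseline is what turns a raw measurement into a measure of change.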

1. Planning: The baseline will help establish and concretize objectives as well as the type of
support needed. It may also identify specific activities which will accomplish these objectives.
This will necessitate identifying locally perceived resources as well as needs and problems as
defined by local stakeholders.

2. Monitoring: The information gathered can be used during the implementation process to
monitor progress and make any necessary changes.

3. Evaluation: Because baselines use program- or project-specific benchmarks and indicators, the
data can then be used to assess the achievement of the outcomes and impact of a program. The
following steps should be treated as initial considerations, to be taken either sequentially or in
parallel depending on the circumstances.

Definitions

A “baseline” refers to measurements of key conditions (indicators) before a project begins,
against which change and progress can be assessed. Sometimes baseline data is already
available; at other times a baseline study is needed to determine baseline conditions. As this
guide highlights, there are a variety of scenarios for and ways to conduct baseline studies. The
specific methodology will depend on a variety of project-specific factors, ranging from the
specific indicators to time and budget.

Definitions (baseline)

Data that measures conditions (appropriate indicators) before project start for later comparison.

Further explanations

Baseline data provides a historical point of reference to: 1) inform program planning, such as
target setting, and 2) monitor and evaluate change for program implementation and impact
assessment.

Definitions (baseline study)

A data collection and analysis exercise to determine the baseline conditions (indicators).

Further explanations
If resources are invested into a baseline study, it is important to budget and plan for an end line
study of the same baseline conditions (indicators) using the same methodology for reliable
comparison.

Why is baseline data important?

Without baseline data, it can be very difficult to plan, monitor and evaluate future performance.
Baseline data help to set achievable and realistic indicator targets for each level of result in a
project’s design (e.g. log frame), and then determine and adjust progress towards these targets
and their respective results. Additional reasons for conducting baseline studies include:

 Inform project management decision-making, providing a reference point to determine
progress and adjust project implementation to best serve people in need.
 Assess measurability of the selected indicators and fine tune the systems for future
measurement.
 Uphold accountability, informing impact evaluation to compare and measure what
difference the project is making.
 Promote stakeholder participation, providing a catalyst for discussion and motivation
among community members and project partners on the most appropriate means of
action.
 Shape expectations and communication strategies by sharpening communication
objectives and focusing the content of media materials.
 Convince and provide justification to policy-makers and donors for a project intervention.
 Support resource mobilization for and celebration of accomplished project results
compared to baseline conditions.
 If conducted properly, baseline results can be generalized and used to inform service
delivery for communities with similar characteristics.

4. What are the similarities and differences between monitoring and evaluation?

Monitoring

 To monitor means to observe.


 Monitoring is the regular observation and recording of activities taking place in a project
or program. It is a process of routinely gathering information on all aspects of the project.
 To monitor is to check on how project activities are progressing. It is observation ─
systematic and purposeful observation.
 Monitoring also involves giving feedback about the progress of the project to the donors,
implementers and beneficiaries of the project.
 Reporting enables gathered information to be used in making decisions for improving the
project performance.

Monitoring versus Evaluation

 Monitoring is the systematic and routine collection of information about a program’s or
project’s activities; evaluation is the periodic assessment of those activities.
 Monitoring is an ongoing process done to see whether activities are on track, i.e. it
regularly tracks the program; evaluation is done periodically to measure success against
the objectives, i.e. it is an in-depth assessment of the program.
 Monitoring is done from the initial stage of the project; evaluation is done after a certain
point in the project, usually at mid-project, at completion, or when moving from one
stage to another.
 Monitoring is usually done by the internal members of the team; evaluation is mainly
done by external members, although it may also be done by internal members of the
team, or by both internal and external members in a combined way.
 Monitoring provides information about the current status and thus helps to take
immediate remedial actions if necessary; evaluation provides recommendations,
information for long-term planning, and lessons for organizational growth and success.
 Monitoring focuses on inputs, activities and outputs; evaluation focuses on outcomes,
impacts and the overall goal.
 The monitoring process includes regular meetings, interviews, and monthly and quarterly
reviews, using mostly quantitative data; the evaluation process involves intense data
collection, both qualitative and quantitative.
 Monitoring has multiple points of data collection; in evaluation, data collection is done at
intervals only.
 Monitoring answers questions about the present status of the project toward achieving
planned results, considering human resources, budget, materials, activities and outputs;
evaluation assesses the relevance, impact, sustainability, effectiveness and efficiency of
the project.
 Monitoring studies the present information and experiences of the project; evaluation
studies the past experience of the project’s performance.
 Monitoring checks whether the project did what it said it would do; evaluation checks
whether what the project did had the impact it intended.
 Monitoring helps to improve the design and functioning of the current project; evaluation
helps to improve the design of future projects.
 Monitoring looks at the detail of activities and compares current progress with planned
progress; evaluation does not look at the detail of activities but at the bigger picture,
including the program’s positive/negative and intended/unintended effects.
 Information obtained from monitoring is most useful to the implementation/management
team; information obtained from evaluation is useful to all the stakeholders.
 Monitoring results are used for informed actions and decisions; evaluation results are
used for planning new programs and interventions.
 Monitoring answers the question “Are we doing things right?”; evaluation answers the
question “Are we doing the right thing?”
 Regular reports and updates about the project/program are the deliverables of
monitoring; reports with recommendations and lessons are the deliverables of evaluation.
 Good or effective monitoring does not rely on evaluation results; good or effective
evaluation relies to some extent on good monitoring.
 There are few quality checks in monitoring; there are many quality checks in evaluation.
 Monitoring provides information for evaluation; evaluation provides information for
proper planning.

Audit

Audits are of different types: quality and integrated, or differentiated into personal, internal,
external, statutory, non-statutory, social, performance and final audits.

The main steps involved in auditing are information gathering, followed by the evaluation and
validation of internal controls.

An audit is the evaluation of a person, system, organization or product done to determine its
validity and authenticity.

In a tax context, for example, an audit ensures that the information reported is accurate
according to tax law. It occurs if computer programs detect irregularities on the form, if there
are an unusual number of deductions, or if the return is randomly selected. The taxpayer meets
the auditor at home, at the place of business or in an IRS office; the audit involves interviews, a
review of records, and potentially interviews with related third parties. The auditor explains the
findings and report in detail, which can be accepted or appealed.

What are the similarities and differences between monitoring and evaluation?

Monitoring is a continuous assessment of a program based on early detailed information on the
progress or delay of the ongoing assessed activities. An evaluation is an examination concerning
the relevance, effectiveness, efficiency and impact of activities in the light of specified
objectives.

5. How do you select a Monitoring and Evaluation Framework that suits your effort?

Conceptual Frameworks
Conceptual frameworks are diagrams that identify and illustrate relationships among relevant
organizational, individual and other factors that may influence a program and the successful
achievement of its goals and objectives. They help determine which factors will influence the
program and outline how each of these factors (underlying, cultural, economic, socio-political,
etc.) might relate to and affect the outcomes. They do not form the basis for monitoring and
evaluation activities, but can help explain program results.

Illustrative Example from the Rural AIDS Development Action Research
(RADAR) Program Intervention with Microfinance for AIDS and Gender
Equity (IMAGE) in South Africa

IMAGE seeks to influence factors that predispose individuals to HIV infection and gender-based
violence by targeting the environment in which they occur. Individual agency, household
well-being, communication and power relations, and the norms, networks, relationships and
responses of communities constitute the environment in the IMAGE framework. The framework
attempts to conceptualize the complexity of factors and relationships that constitute the
environment in which sexual behavior and gender-based violence occur. The framework was
developed to guide both the intervention and evaluation components of the IMAGE program.

