Monitoring Is A Periodically Recurring Task Already Beginning in The Planning Stage of A Project or Program
INDIVIDUAL ASSIGNMENT
CPU College
Group Assignment 1
Today? What do you recommend for improving and strengthening the Monitoring & Evaluation system?
for accountability and for assisting in making informed decisions at program and policy levels.
Identification of their own needs, defining the objectives of the program, implementing the activities, and monitoring and evaluating the program. Human resources management is critical in project management; in particular, it is essential for effective monitoring and evaluation. The technical capacity and organizational know-how in carrying out evaluations, the value and participation of an organization's human resources in the process of decision-making, as well as their motivation in executing the decisions arrived at, can significantly affect the evaluation (Vanessa and Gala, 2011). Dobrea et al. (2010) further illustrate that this should not be mere training but a learning approach that has a positive effect on the evaluation process within the organization. Donaldson (2013) explains how stakeholders are empowered: different activities engage different sets of stakeholders, and this determines how and when stakeholders are empowered in their different capacities. These approaches promote inclusion and meaningful participation. To strengthen stakeholder involvement, stakeholders should be involved in the early stages of evaluation, specifically in planning. This entails funding high-profile individuals and political agents who have an interest in learning, as well as using instruments to demonstrate effectiveness (Jones, 2008). Participatory methods stipulate the active involvement of interested parties in decision-making about the project and its strategy; this involvement generates a sense of ownership of the results and recommendations related to M&E (Chaplowe & Cousins, 2015). The subject of planning and pre-construction planning is central to the project control process. According to Gyorkos (2011), planning is a process of decision-making carried out in advance of execution, meant to craft a desired future together with the means of achieving it; planning answers the questions what, how, by whom, with what, and when. The purpose of planning, as explained by Kelly and Magongo (2014), is to assist the manager in fulfilling the primary functions of direction and control in the implementation of project components, and in coordinating and communicating with the many parties involved. George (2008) says that at the planning phase managers can identify potential problems proactively, before they can greatly affect project cost and schedule during the implementation phase. Project planning helps to create a benchmark for execution. Zimmerer and Yasin (2011) argue that clear benchmarks are critical, as they are used at execution to provide direction for the project team as events unfold. To do this, the project manager has to assemble the most competent team and take cultural differences into consideration. Maylor (2013) recognized the widely known limits of the usual medium-term planning horizon for development projects in terms of promoting sustainable benefits, predominantly when behavioral and institutional transformation is included in the goals, and especially when multiple local agencies are involved. Open-ended commitments are not appropriate; however, phasing project activities over a longer period is a project strategy that supports sustainable benefits. A phasing approach requires clear goals and objectives from the beginning, and well-articulated decision points at the end of each phase. Where there is ambiguity about local policy, capability or commitment, an initial pilot phase leading on to a number of subsequent phases should be the norm rather than the exception (Kalali, Ali and Davod, 2011). Estimation of financial resources is done during planning for the implementation of monitoring and evaluation (Dyason, 2010). A key aspect of planning for monitoring and evaluation is to approximate the costs, staffing, and other resources that are required for monitoring and evaluation work. It is essential for monitoring and evaluation specialists to weigh in on monitoring and evaluation budget needs at the project design phase, so that funds are allocated to the implementation of key monitoring and evaluation tasks (Ahsan and Gunawan, 2010). IFAD (2012) noted in its report that most developing countries face the challenge of implementing sound monitoring and evaluation due to a lack of control over their funding. Donors therefore need to put more emphasis on the establishment of sound monitoring and evaluation systems by factoring this into their funding.
In my own organization, the practice of monitoring and evaluation is regarded as a core tool for enhancing project management quality, considering that in the short and medium term the management of complex projects entails corresponding strategies from the financial viewpoint, which are required to adhere to the criteria of effectiveness, sustainability and durability. This is the only way to ensure that most of these projects realize their goals and leave a sustainable impact on society.
The activity of monitoring supports both the project managers and staff in understanding whether the projects are progressing as planned, minimizing time and cost overruns while at the same time ensuring that the required standards of quality are attained in the implementation of the project. Monitoring also serves to:
- Maintain relevant improvement.
- Support the aim of consistent and high-quality program service delivery.
- Foster the idea of continuous improvement and identify specific areas for improving service delivery.
- Help empower service providers with the ability to assess the quality of their service delivery.
- Foster a collective commitment to quality through a common set of clear, measurable criteria and aims.
2. Select a given project/program and develop the job description (goal, duties, role,
Monitoring and Evaluation (M&E) professionals can have many different titles and can have
quite a diversity of responsibilities depending on the context and organizations where they work.
Common titles seen in government agencies, non-governmental organizations (NGOs) and non-
profit organizations include: M&E Officer; M&E Specialist; and M&E Manager. These titles are
often used interchangeably. M&E professionals work at agency headquarters or central offices
providing technical assistance and strategic oversight, or in the field collecting and managing
data. Regardless of their place of work, M&E professionals play an important role in project
management and often help build capacity in performance and impact measurement within their
organizations.
program and projects with WWF Network standards, policy frameworks and best practices; and
achieve efficiency and accountability.
The Manager will provide technical support to all the functions of WWF-Kenya so as to ensure
that the implementation of the Organizational Strategic Plan meets the highest standards, and in
particular:
Role and Responsibilities of an M&E Project Manager
Planning
Developing strong M&E systems requires a great deal of planning. The M&E professional plays a
key role in facilitating the input of project staff, partners and other stakeholders in project design
and measurement activities. Responsibilities include:
The M&E professional plays an essential role in tracking and updating M&E data as well as
ensuring the data is of the best quality possible. Responsibilities include:
- Implementing monitoring systems and designing monitoring tools
- Developing data collection tools
- Monitoring project activities, outputs and progress towards anticipated results
- Working with data platforms, databases and select technologies to capture and organize data
- Training field staff in monitoring and evaluation processes and providing ongoing coaching
- Conducting or providing support to data quality assessments
Once the M&E system has been implemented and data collection processes established, the
M&E professional proceeds with the analysis and reporting of data. Responsibilities include:
- Determining data analysis procedures and use of quantitative or qualitative analysis tools
- Cleaning, sorting, categorizing and organizing data
- Analyzing quantitative and/or qualitative data
- Summarizing findings
- Developing monthly, quarterly or annual reports depending on project requirements
- Disseminating evaluation findings and project results to donors and other stakeholders
The M&E professional will often be involved in special studies or evaluations, which may be conducted by the M&E professional and project staff in the case of an internal evaluation, or with the assistance of external evaluation consultants in the case of final or impact evaluations, depending on donor requirements and resources. Responsibilities of the M&E professional include:
Knowledge Management
Knowledge Management
M&E professionals often provide much support to knowledge management processes within
their organizations. Responsibilities can include:
3. Write an essay on and evaluate the proposed Monitoring & Evaluation system for
The Minister of the National Planning and Development Commission, Fitsum Assefa (PhD), said the 10-year perspective plan is intended to make Ethiopia an African beacon of prosperity.
In her exclusive interview with Policy Matters, the Minister said that the 10-year plan envisions that Ethiopia becomes an African beacon of prosperity by 2030. The plan is intended to ensure shared prosperity in all its dimensions, where every citizen gets access to it, she added.
“For this to happen, the economy should stay on a high growth trajectory, and we target it to register 10 percent average GDP growth over the 10-year period,” Dr. Fitsum stressed.
Ethiopia, as per the 10-year plan, is working to slash poverty by half, register a per capita GDP of 2,200 US dollars, and position the country among middle-income countries by doubling the current per capita GDP, Dr. Fitsum explained.
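As a rough arithmetic illustration (not part of the plan document itself), the 10 percent growth target and the goal of doubling per capita GDP over ten years can be checked with simple compound-growth calculations; the growth rate and time horizon come from the interview, while the code and its framing are purely illustrative:

```python
# Compound growth: a value growing at rate r for n years is multiplied by (1 + r) ** n.

def compound(value: float, rate: float, years: int) -> float:
    """Project a value forward under a constant annual growth rate."""
    return value * (1 + rate) ** years

# 10% average GDP growth sustained for 10 years multiplies the economy by:
gdp_multiple = compound(1.0, 0.10, 10)
print(f"GDP multiple after 10 years at 10%: {gdp_multiple:.2f}")  # ~2.59x

# Doubling per capita GDP to 2,200 USD implies a starting level near 1,100 USD;
# the implied average annual per capita growth rate over 10 years is:
implied_rate = (2200 / 1100) ** (1 / 10) - 1
print(f"Implied per capita growth rate: {implied_rate:.1%}")  # ~7.2%
```

The gap between the 10 percent aggregate-GDP target and the roughly 7.2 percent implied per capita rate would, loosely speaking, be absorbed by population growth.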
“And also, unemployment is targeted to reach less than nine percent. In fact, other goals are also included on the infrastructure side. For instance, energy generation is targeted to reach about 20 gigawatts, and along with that, universal access to electricity and safe drinking water in both rural and urban areas over the coming 10 years,” she stated.
“It is intended to ensure that social development goals touch the lives of our citizens in sectors including health and education, among others, pursuant to the Sustainable Development Goals,” the minister underlined.
The African Agenda 2063 is also included in the plan to improve the livelihoods of the people, in line with the goals set to bring about sustainable and shared prosperity, she underscored.
The 10-year perspective plan has inclusive and comprehensive goals covering economic, social, administrative and institutional issues, according to the minister.
Agriculture, manufacturing, tourism, mining and ICT are the focus areas of the 10-year perspective plan in the coming three years.
The government is working to facilitate a better and smoother environment for the private sector, she said, adding that activities are underway to develop a private sector development strategy.
Work is underway to scale up best practices in the Public-Private Partnership initiative, which is becoming effective in areas such as energy and infrastructure.
She further underscored that the 10-year perspective plan is also intended to enable rural farmers to get access to financial support by using their cattle and crops on farmland as collateral.
All activities in the plan show the commitment of the government to open up the economy to private-sector investment.
Successful partnerships and joint ventures with the domestic private sector and foreign investors will be carried out to tap synergies in a more transparent manner.
The ease-of-doing-business initiative, including one-window services and reforms in state-owned enterprises such as the energy and telecom sectors, will be continued as per the 10-year plan, she added.
Evaluation/Assessment.
Monitoring
Monitoring is not an end in itself. Monitoring allows a program to determine what is and is not working well, so that adjustments can be made along the way. It allows a program to assess what is actually happening versus what was planned.
When monitoring activities are not carried out directly by the decision-makers of the program it
is crucial that the findings from those monitoring activities are coordinated and fed back to them.
Information from monitoring activities can also be disseminated to different groups outside of
the organization which helps promote transparency and provides an opportunity to obtain
feedback from key stakeholders.
There are no standard monitoring tools and methods. These will vary according to the type of
intervention and objectives outlined in the program. Examples of monitoring methods include:
Impact Evaluation
Impact evaluation measures the difference between what happened with the program and what
would have happened without it. It answers the question, “How much (if any) of the change
observed in the target population occurred because of the program or intervention?”
Rigorous research designs are needed for this level of evaluation. It is the most complex and
intensive type of evaluation, incorporating methods such as random selection, control and
comparison groups.
Impact evaluation seeks to:
- Establish causal links or relationships between the activities carried out and the desired outcomes.
- Identify and isolate any external factors that may influence the desired outcomes.
For example, an impact evaluation of an initiative aimed at preventing sexual assaults on women and girls in town x through infrastructural improvements (lighting, more visible walkways, etc.) might also look at data from a comparison community (town y) to assess whether reductions in the number of assaults seen at the end of the program could be attributed to those improvements. The aim is to isolate the effect of the program from other factors that might have influenced the reduction in assaults.
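The town x / town y comparison described above can be sketched as a simple difference-in-differences calculation; the assault counts below are invented purely for illustration:

```python
# Difference-in-differences: compare the change in the intervention town (x)
# with the change in the comparison town (y) over the same period.

def difference_in_differences(treat_before, treat_after, comp_before, comp_after):
    """Estimate the program effect net of the background trend."""
    change_treatment = treat_after - treat_before   # change in town x
    change_comparison = comp_after - comp_before    # background change in town y
    return change_treatment - change_comparison

# Hypothetical assault counts per year (illustrative only):
effect = difference_in_differences(treat_before=120, treat_after=70,
                                   comp_before=110, comp_after=100)
print(effect)  # -40: assaults fell by 40 more in town x than the background trend explains
```

The subtraction of town y's change is what "isolates" the other factors: anything that affected both towns equally cancels out of the estimate.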
Monitoring vs. Evaluation

Initial question:
- Monitoring: What's happening in the project?
- Evaluation: Why is something happening, with what degree of quality, and with what consequences (results)?

Why?
- Monitoring: To review the project's progress; to be able to make informed decisions; to be able to carry out adjustments; to provide a basis for further analysis (e.g., evaluation).
- Evaluation: To describe and assess progress and results; to draw conclusions and derive recommendations.

Who?
- Monitoring: Carried out internally by project staff.
- Evaluation: Carried out internally or externally.
The M&E Framework follows the hypothesized causal logic of the Theory of Change and the
corresponding results framework. However, it also provides additional details about how the hypotheses will be verified, which indicators will be used, how they will be measured, and how the information is intended to be used. After identifying the key indicators, the M&E plan assigns the appropriate
levels of rigor and outlines the assessment methodology. The assessment methodology guides
the M&E activities throughout the assessment implementation (before, during and after).
Performance Indicators
Performance indicators are measures of inputs, processes, outputs, outcomes, and impacts for
development projects, programs, or strategies. When supported with sound data collection—
perhaps involving formal surveys—analysis and reporting, indicators enable managers to track
progress, demonstrate results, and take corrective action to improve service delivery.
Participation of key stakeholders in defining indicators is important because they are then more
likely to understand and use indicators for management decision-making.
Indicators also serve purposes such as:
- Identifying problems via an early warning system to allow corrective action to be taken.
- Indicating whether an in-depth evaluation or review is needed.
Selecting Indicators
Selection must be based on a careful analysis of the objectives and the types of changes wanted, as well as how progress might be measured and an analysis of available human, technical and financial resources. A good indicator should closely track the objective that it is intended to measure. For example, the development and utilization of DRR Action Plans would be good indicators if the objective is to reduce disaster risks at national and local levels. Selection should also be based on an understanding of threats. For example, if natural disasters are a potential threat, indicators should include resources and mechanisms in place to reduce the impact of natural disasters. Two types of indicator are necessary:
1) Outcome/impact indicators, which measure changes in the system (e.g. resource allocation for DRR based on Strategic Action Plans).
2) Output/process indicators, which measure the degree to which activities are being implemented (e.g. number of stakeholder-developed Strategic Action Plans).
Note that it may be difficult to attribute a change, or effect, to one particular cause. For example, resource allocation for DRR could be due to good management of the DRR agencies/authorities outside of UNISDR support.
A good indicator should be precise and unambiguous so that different people can measure it and
get similarly reliable results. Each indicator should concern just one type of data (e.g. number of
UNISDR supported Strategic Action Plans rather than number of Strategic Action Plans in
general). Quantitative measurements (i.e. numerical) are most useful, but often only qualitative
data (i.e. based on individual judgments) are available, and this has its own value. Selecting
indicators for visible objectives or activities (e.g. early warning system installed or capacity
assessment undertaken) is easier than for objectives concerning behavioral changes (e.g.
awareness raised, community empowerment increased).
Does this indicator enable one to know about the expected result or condition?
Indicators should, to the extent possible, provide the most direct evidence of the condition or
result they are measuring. For example, if the desired result is a reduction in human loss due to
disasters, achievement would be best measured by an outcome indicator, such as the mortality
rate. The number of individuals receiving training on DRR would not be an optimal measure for
this result; however, it might well be a good output measure for monitoring the service delivery
necessary to reduce mortality rates due to disasters. Proxy measures may sometimes be necessary
due to data collection or time constraints. For example, there are few direct measures of
community preparedness. Instead, a number of measures are used to approximate this:
community’s participation in disaster risk reduction initiatives, government capacity to address
disaster risk reduction, and resources available for disaster preparedness and risk reduction.
When using proxy measures, planners must acknowledge that they will not always provide the
best evidence of conditions or results.
Is the indicator defined in the same way over time? Are data for the indicator collected in
the same way over time?
To draw conclusions over a period of time, decision-makers must be certain that they are looking
at data which measure the same phenomenon (often called reliability). The definition of an
indicator must therefore remain consistent each time it is measured. For example, assessment of
the indicator "successful employment" must rely on the same definition of "successful" (i.e., three months in a full-time job) each time data are collected. Likewise, where percentages are used, the denominator must be clearly identified and consistently applied. For example, when measuring child mortality rates after a disaster over time, the population of the target community from which children are counted must be consistent (i.e., children aged between 0 and 14). Additionally, care must be taken to use the same measurement instrument or data collection protocol to ensure consistent data collection.
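The denominator rule can be made concrete with a small sketch; the figures and the 0-14 age band below are illustrative, not taken from any real dataset:

```python
# A rate is only comparable over time if the denominator is defined the same
# way at every measurement point (here: children aged 0-14).

def child_mortality_rate(deaths: int, children_0_to_14: int) -> float:
    """Deaths per 1,000 children, using a fixed denominator definition."""
    return deaths / children_0_to_14 * 1000

# The same definition applied at both measurement points gives comparable figures:
baseline = child_mortality_rate(deaths=45, children_0_to_14=30000)
follow_up = child_mortality_rate(deaths=36, children_0_to_14=30000)
print(f"baseline: {baseline:.1f}, follow-up: {follow_up:.1f} per 1,000")

# If the follow-up survey instead counted children aged 0-17, the two rates
# would no longer measure the same phenomenon and could not be compared.
```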
Are data currently being collected? If not, can cost effective instruments for data collection
be developed?
As demands for accountability are growing, resources for monitoring and evaluation are
decreasing. Data, especially data relating to input and output indicators and some standard
outcome indicators, will often already be collected. Where data are not currently collected, the
cost of additional collection efforts must be weighed against the potential utility of the additional
data.
CPU College
Group Assignment 2
1. Provide the technical and conceptual definitions of the following terms:
a) Monitoring
b) Evaluation
c) System
a) Monitoring
Monitoring is a periodically recurring task already beginning in the planning stage of a project or
program. Monitoring allows results, processes and experiences to be documented and used as a
basis to steer decision-making and learning processes. Monitoring is checking progress against
plans. The data acquired through monitoring is used for evaluation.
Monitoring is the systematic process of collecting, analyzing and using information to track a
program’s progress toward reaching its objectives and to guide management decisions.
Monitoring usually focuses on processes, such as when and where activities occur, who delivers
them and how many people or entities they reach.
Monitoring is conducted after a program has begun and continues throughout the program
implementation period. Monitoring is sometimes referred to as process, performance or
formative evaluation.
Monitoring is the regular observation and recording of activities taking place in a project or
program. It is a process of routinely gathering information on all aspects of the project.
Monitoring also involves giving feedback about the progress of the project to the donors, implementers and beneficiaries of the project.
Reporting enables the gathered information to be used in making decisions for improving project
performance.
b) Evaluation
M&E is an embedded concept and a constitutive part of every project or program design (a "must-be"). M&E is not a control instrument imposed by the donor, nor an optional accessory ("nice to have") of any project or program. M&E is ideally understood as a dialogue on development and its progress between all stakeholders.
c) System
A system may have many definitions dependent upon the context in which it is used. A system
may be defined as a set of interdependent components or factors which accomplish a
predetermined goal. As such it may be concluded that the process of project management will
always require the creation or use of a system. Ideally the system, though made of
different components, will operate in such a fashion that these components will complement one
another in the performance of their defined function to a greater degree than they would be able
to as separate, independent components. Said system may exist either in a physical sense, with actual interlocking parts carrying out a tangible process, or in an intangible, conceptual sense. Most often a system will be composed of both physical and intellectual parts and processes.
With regard to effective project management, a system may be composed of different parts all
existing for the purpose of reaching the project’s goal. These may include pre-existing project
management processes, learned techniques, different methodologies or lines of thinking and
working, as well as tools existing as intellectual processes or physical objects operated by the
project management team itself.
A monitoring and evaluation (M&E) plan is a document that helps to track and assess the results
of the interventions throughout the life of a program. It is a living document that should be
referred to and updated on a regular basis.
A Project MEL Plan is technically an annex to a Project Appraisal Document (PAD).
Functionally it is a separable document that provides guidance to USAID staff over the life of a
project.
A Monitoring and Evaluation (M&E) Plan is a guide as to what you should evaluate, what
information you need, and who you are evaluating for.
The plan outlines the key evaluation questions and the detailed monitoring questions that help
answer the evaluation questions. This allows you to identify the information you need to collect,
and how you can collect it. Depending on the detail of the M&E plan, you can identify the
people responsible for different tasks, as well as timelines. The plan should be able to be picked
up by anyone involved in the project at any time and be clear as to what is happening in terms of
monitoring and evaluation.
An evaluation plan should ideally be developed at the planning stage of a project, before you commence implementation. This will allow you to plan ahead of time for data collection activities that you may need to undertake, such as pre-intervention surveys. However, it is never too late to develop an M&E plan. Retro-fitting an M&E plan to an existing project may simply mean that you are constrained in some of the data that you can collect.
What is a baseline study?
The purpose of a baseline study is to provide an information base against which to monitor and
assess an activity’s progress and effectiveness during implementation and after the activity is
completed. Sometimes the data needed for a baseline, against which to measure the degree and
quality of change during an activity’s implementation, will already exist. In such cases the only
task is to collate the data and ensure that it can be updated in the longer term. So it is important
to find out what information is already available. But more commonly, there will not be any
existing data, or it will be incomplete or of poor quality, or it will need to be supplemented or
broken out into categories that are relevant for the project being implemented.
When planning a baseline study, the implementing organization needs to determine both what
change needs to be assessed and what sort of comparison(s) will need to be made as part of that
assessment of change. There are two common ways to measure change:
- 'With and without' activity: this seeks to mimic the use of an experimental control, and compares change in the activity location to change in a similar location where the activity has not been implemented.
- 'Before and after' activity: this measures change over time in the activity location alone.
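The two comparison designs can be sketched side by side; all figures below are invented for illustration, not drawn from any real baseline study:

```python
# Two ways to measure change against a baseline:
# 1) 'before and after' - change over time in the activity location alone;
# 2) 'with and without' - that change netted against a similar location
#    where the activity was not implemented.

def before_and_after(baseline: float, endline: float) -> float:
    """Change over time in one location."""
    return endline - baseline

def with_and_without(activity_change: float, comparison_change: float) -> float:
    """Change in the activity location net of change in the comparison location."""
    return activity_change - comparison_change

# Hypothetical adult literacy rates (%) at baseline and endline:
activity = before_and_after(baseline=40.0, endline=55.0)      # +15 points in the activity area
comparison = before_and_after(baseline=42.0, endline=47.0)    # +5 points in the comparison area
print(with_and_without(activity, comparison))  # 10.0 points net of the background trend
```

The 'before and after' figure alone (+15 points) would overstate the activity's contribution if literacy was already rising everywhere, which is exactly what the 'with and without' design corrects for.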
The study should be closely linked with the activity monitoring plan so that the data collected
can be replicated if necessary during ongoing activity monitoring, for any mid-term review,
when the activity is being assessed for the activity completion report and for any subsequent
evaluations. Baseline data should provide the minimum information required to assess the
quality of the activity implementation and measure the development results.
Baseline data
Baseline data help to set achievable and realistic indicator targets for each level of result in a
project’s design, and then determine and adjust progress towards these targets and their
respective results.
1. Planning: The baseline will help establish and concretize objectives as well as the type of
support needed. It may also identify specific activities which will accomplish these objectives.
This will necessitate identifying locally perceived resources as well as needs and problems as
defined by local stakeholders.
2. Monitoring: The information gathered can be used during the implementation process to
monitor progress and make any necessary changes.
3. Evaluation: Because baselines use program or project-specific benchmarks and indicators, the
data can then be used to assess the achievement of the outcomes and impact of a program.
The following steps should be treated as initial considerations, to be taken either sequentially or in parallel depending on the circumstances.
Definitions
A “baseline” refers to measurements of key conditions (indicators) before a project begins, from
which change and progress can be assessed. Sometimes baseline data is available, other times a
baseline study is needed to determine baseline conditions. As this guide highlights, there are a
variety of different scenarios for and ways to conduct baseline studies. The specific methodology
will depend on a variety of project-specific factors, ranging from specific indicators to time and
budget.
Definitions (baseline)
Data that measures conditions (appropriate indicators) before project start for later comparison.
Further explanations
Baseline data provide a historical point of reference to: 1) inform program planning, such as target setting, and 2) monitor and evaluate change for program implementation and impact assessment.
Data collection and analysis exercise to determine the baseline conditions (indicators)
Further explanations
If resources are invested into a baseline study, it is important to budget and plan for an end line
study of the same baseline conditions (indicators) using the same methodology for reliable
comparison.
Without baseline data, it can be very difficult to plan, monitor and evaluate future performance.
Baseline data help to set achievable and realistic indicator targets for each level of result in a
project’s design (e.g. log frame), and then determine and adjust progress towards these targets
and their respective results. Additional reasons for conducting baseline studies include:
Monitoring vs. Evaluation
- Monitoring is an ongoing process carried out to see whether things/activities are on track or not, i.e. it regularly tracks the program's current status and thus helps immediate remedial action to be taken if necessary. Evaluation is done on a periodic basis to measure success against the objectives, i.e. it is an in-depth assessment of the program, used for long-term planning and lessons for organizational growth and success.
- Monitoring focuses on inputs, activities and outputs. Evaluation focuses on outcomes, impacts and the overall goal.
- Monitoring has multiple points of data collection. In evaluation, data collection is done at intervals only.
- Monitoring studies the present information and experiences of the project. Evaluation studies the past experience of the project's performance.
- Monitoring checks whether the project did what it said it would do. Evaluation checks whether what the project did had the impact that it intended.
- Information obtained from monitoring is most useful to the implementation/management team. Information obtained from evaluation is useful to all the stakeholders.
- Monitoring results are used for informed actions and decisions. Evaluation results are used for planning new programs and interventions.
- Regular reports and updates about the project/program act as the deliverables of monitoring. Reports with recommendations and lessons act as the deliverables of evaluation.
- Good or effective monitoring does not rely on evaluation results. Good or effective evaluation relies to some extent on good monitoring.
- There are few quality checks in monitoring. There are many quality checks in evaluation.
Audit
Audits are of different types: quality and integrated, or differentiated into personal, internal, external, statutory, non-statutory, social, performance and final audits.
The main steps involved in auditing are information gathering, followed by evaluation and validation of internal controls.
An audit is the evaluation of a person, system, organization or product done to determine its validity and authenticity.
An audit may occur if computer programs detect irregularities on a form, if there are an unusual number of deductions, or if a return is randomly selected.
4. What are the similarities and differences between monitoring and evaluation?
5. How do you select a Monitoring and Evaluation Framework that suits your effort?
Conceptual Frameworks
Conceptual frameworks are diagrams that identify and illustrate relationships among relevant
organizational, individual and other factors that may influence a program and the successful
achievement of goals and objectives. They help determine which factors will influence the program and outline how each of these factors (underlying, cultural, economic, socio-political, etc.) might relate to and affect the outcomes. They do not form the basis for monitoring and evaluation activities, but they can help explain program results.
Illustrative Example from the Rural AIDS Development Action Research
(RADAR) Program Intervention with Microfinance for AIDS and Gender
Equity (IMAGE) in South Africa
IMAGE seeks to influence factors that predispose individuals to HIV infection and gender-based
violence through targeting the environment in which they occur. Individual agency, household
well-being, communication and power relations, and the norms, networks, relationships and
responses of communities constitute the environment in the IMAGE framework. The framework attempts to conceptualize the complexity of factors and relationships that constitute the environment in which sexual behavior and gender-based violence occur. The framework was developed to guide both the intervention and evaluation components of the IMAGE program.