3.5.7. What Are the Steps in Implementing an Impact Assessment?

Conducting a good impact assessment of a value chain project involves the following steps (the steps assume two research rounds: a baseline and a follow-up):
1. Select the Project(s) to be Assessed.
2. Conduct an Evaluability Assessment.
3. Prepare a Research Plan.
4. Contract and Staff the Impact Assessment.
5. Carry out the Field Research and Analyze its Results.
6. Disseminate the Impact Assessment Findings.

Step 1: Select the Project(s) to be Assessed


Impact assessments are carried out because someone needs to know what results particular projects or intervention approaches are achieving. This demand for
information is most likely to arise from the higher levels of donor organizations (those who plan aid projects and allocate funds), but it can also come from Mission
Directors, program officers, and implementers, such as NGOs. Large projects or those that take innovative and promising approaches are particularly strong
candidates for an impact assessment. Another important criterion is a cooperative attitude on the part of project leadership and the USAID Mission. Finally, the availability of funding for the impact assessment is obviously critical.

Step 2: Conduct an Evaluability Assessment


An evaluability assessment is an initial appraisal of whether an impact assessment should be conducted on the project and, if so, what is the most appropriate
methodology to do so. An important part of the evaluability assessment involves sitting down with project staff to work through a causal model for all the project
activities to be covered in the impact assessment. This means determining what exactly the project is doing or will do, over what time period, and with which
expected outputs, outcomes, and impacts. If this discussion indicates that the hypothesized relationships between project activities and impacts are unrealistic, or if
the project time frame is unsuitable (as a general rule, a minimum of two years is necessary for sustainable impacts to occur), then the impact assessment is not
worthwhile.
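The causal model that emerges from such a discussion can usefully be written down in a structured form. The short Python sketch below shows one hypothetical way to record activities and their expected outputs, outcomes, and impacts, and to flag links whose impacts cannot plausibly appear within the assessment period; the project, activity, and time frame shown are illustrative assumptions, not drawn from any actual PSDIAI assessment.

# A minimal sketch of a causal model captured as a data structure.
# All names and values below are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class CausalLink:
    activity: str          # what the project does
    output: str            # the direct deliverable
    outcome: str           # the behavior change expected among MSEs
    impact: str            # the higher-level change in incomes or markets
    time_frame_years: int  # years before the impact can plausibly appear

@dataclass
class CausalModel:
    project: str
    links: list = field(default_factory=list)

    def unrealistic_links(self, assessment_horizon_years: int):
        """Return links whose impacts cannot appear within the assessment period."""
        return [l for l in self.links if l.time_frame_years > assessment_horizon_years]

model = CausalModel(
    project="Hypothetical horticulture value chain project",
    links=[
        CausalLink(
            activity="Train lead farmers in grading and packing",
            output="500 farmers trained",
            outcome="Farmers adopt improved post-harvest handling",
            impact="Higher farm-gate prices and household income",
            time_frame_years=2,
        ),
    ],
)
print(model.unrealistic_links(assessment_horizon_years=2))  # [] means no unrealistic links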
The evaluability assessment should also consider the purposes that would be served by the impact assessment, its potential audience, its cost-effectiveness, and its
potential credibility, along with the best timing for conducting the impact assessment.
The evaluability assessment is conducted prior to conducting the baseline impact assessment. It is also important, however, to conduct a modified evaluability
assessment prior to conducting the follow-up impact assessment. The purpose in this case is to determine (1) whether it is worthwhile to invest in the follow-up
assessment at all or (2) whether to reduce the scope of the impact assessment in light of events that have occurred since the baseline.
For more on causal models, see the Impact Assessment Series article #4, "Developing a Causal Model for Private Sector Development Programs."
For more on evaluability assessments, see the forthcoming Impact Assessment Primer Series article #6, "Evaluability Assessment: The First Step in Assessing Impacts."
The PSDIAI team has produced the following examples of evaluability assessments, which include corresponding causal models for each program:
Evaluability Assessment of USAID/Brazil's Micro and Small Enterprise Trade-Led Growth Program
Evaluability Assessment of the PROFIT Zambia Program

Step 3: Prepare a Research Plan


The research plan should include the causal model of the impact assessment and a practical plan for carrying out the study. The causal model is used to generate a
set of hypotheses about outcomes and impacts that will be tested in the study. Typically, impacts of several different types will be anticipated at three levels:
1. In the value chains and markets involved, including product markets and sometimes also supporting markets for inputs, business services, and/or finance
2. Among participating MSEs
3. In the households associated with participating MSEs
Once testable hypotheses have been identified, the next step is to define measurable indicators that can be used to determine whether impact has been achieved.
After that, sources of information for measuring the indicators must be identified. In the quasi-experimental approach (see definitions below), a longitudinal survey
serves as an important source of information for determining whether there is impact at the MSE and household levels. This involves selecting a sample of project
participants (explicitly defined in a manner consistent with the project's structure and approach) and matching it with a sample of non-participants who are as similar as possible to the project participants in all relevant characteristics (the control group). This must be done carefully to minimize the effect of selection bias (the tendency for people who would have done better anyway to become project participants), which leads to overstatement of the project's impact. In a quasi-experimental impact assessment, the two groups of survey respondents form a panel that will be interviewed at least twice, with a minimum interval of two years
between survey rounds. To allow for attrition in the sample between rounds, over-sampling is required in the baseline round. In an experimental assessment, the
two groups are selected at random and then interviewed just once at the conclusion of the study.
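As an illustration of the over-sampling point, the Python sketch below computes how many respondents to interview at baseline so that a target number remain for the follow-up round; the target of 400 completed interviews per group and the 20 percent attrition rate are hypothetical values used only for the example.

# A minimal sketch of adjusting the baseline sample for expected panel attrition.
import math

def baseline_sample_size(target_followup_n: int, expected_attrition: float) -> int:
    """Respondents to interview at baseline so that roughly target_followup_n
    remain after the expected share drops out before the follow-up round."""
    if not 0 <= expected_attrition < 1:
        raise ValueError("expected_attrition must be in [0, 1)")
    return math.ceil(target_followup_n / (1 - expected_attrition))

# Hypothetical example: 400 completed follow-up interviews wanted in each group,
# assuming 20 percent of baseline respondents cannot be re-interviewed.
for group in ("participants", "control"):
    print(group, baseline_sample_size(400, 0.20))  # 500 interviews each at baseline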
The survey is the quantitative part of the impact assessment. It can be combined with qualitative research to get a richer view of impact at the MSE and household
levels, as well as to obtain some idea of what the project's impact is at the value chain and market levels. At these higher levels, finding a satisfactory control group
is likely to be difficult if not impossible, so impact cannot be demonstrated as definitively as at the MSE and household levels. The qualitative research consists of structured interviews, focus group discussions, and other qualitative methods conducted with persons who participate in various ways in the relevant value chains and markets.
Their views and insights are then triangulated in an attempt to get a coherent picture of the structure of the markets concerned and changes over time that may be
attributable to project activities.
The research plan should also include detailed specifications for the questions to be asked on the survey questionnaire and guidelines for the interviews and focus
group discussions.
The PSDIAI team has produced the following examples of research plans:
Baseline Research Design for the PROFIT Zambia Program
Baseline Research Design for the Kenya BDS and Horticulture Development Center Projects

Step 4: Contract and Staff the Impact Assessment


Once a research plan is drawn up, the next step is to make arrangements to carry out the field research. Typically, a local research firm is contracted to carry out the
field research under the guidance of the sponsoring organization and external advisers (often international experts) it may have hired. In this case, a competitive
bidding process is desirable. Potential local research partners submit a bid based on a scope of work (SOW) prepared by the sponsoring organization or its external
advisers that clearly defines the responsibilities, time frame, and budget for the field research. Selection of the local research partner will consider several factors,
including past experience, technical expertise, the quality of the proposal, recommendations, timing, and cost.
For more on selecting local research partners, see the Impact Assessment Primer Series article #2, "Methodological Issues in Conducting Impact Assessments of Private Sector Development Programs."
The PSDIAI has produced the following examples of SOWs for local research partners:
Terms of Reference for Local Research Partners in the GMED India Impact Assessment
Terms of Reference for Local Research Partners in the PROFIT Zambia Impact Assessment

Step 5: Carry out the Field Research and Analyze Results


Under the supervision of the sponsoring organization and its external advisers, the local research partner carries out the baseline field research, which includes the
impact assessment survey and complementary qualitative data collection activities, principally key informant interviews and focus group discussions. The local
research partner next organizes the data and summarizes the findings. For the impact survey, this involves entering the survey responses into a data set, cleaning
the data, and tabulating the results into simple descriptives and frequencies. For the qualitative research, this involves organizing the raw responses, summarizing
them, and noting general trends.
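The tabulation step can be illustrated with a short Python (pandas) sketch such as the one below; the file name, column names, and group labels are hypothetical placeholders rather than a prescribed data format.

# A minimal sketch of cleaning and tabulating baseline survey data with pandas.
# File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("baseline_survey.csv")

# Basic cleaning: drop duplicate respondents and standardize a categorical field.
df = df.drop_duplicates(subset="respondent_id")
df["group"] = df["group"].str.strip().str.lower()  # "treatment" / "control"

# Simple descriptives for continuous indicators, split by group.
print(df.groupby("group")[["monthly_sales", "household_income"]].describe())

# Frequencies and cross-tabulations for categorical indicators.
print(df["main_product"].value_counts(dropna=False))
print(pd.crosstab(df["group"], df["uses_improved_inputs"], normalize="index"))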
It is up to the sponsoring organization to determine who conducts the in-depth data analysis. It may be the local research partner or an external consultant hired by
the sponsor. Before hiring the local research partner to perform the in-depth data analysis, the sponsor should confirm the partner's data analysis capacity, its
familiarity with the project and its causal model (relevant for interpreting the results), and its language capacities (relevant for preparing the final report). In some
cases, it may be appropriate for the local partner to perform all data analysis and report writing. In other cases, it may be appropriate to hire the local research
partner to perform the data analysis and report writing and to hire an external consultant to review and (if necessary) revise the data analysis and final report. This
option is particularly relevant where the local research partner does not possess native speaking capacity in the language of the final report. In yet other cases, it
may be appropriate to hire an external consultant to perform the data analysis and prepare the final report.
The analysis of the baseline results is largely descriptive. Its aim is to create an accurate picture of the conditions within the project and among treatment and
control group participants at or near project inception to serve as a baseline against which changes can be measured after the follow-up round of research is
completed.
In contrast, the aim of the follow-up (or endline) data analysis is to document the changes in outcomes and impacts that have occurred since the baseline and to
attribute the observed changes to project activities. Project impact is assessed using the difference-in-differences method, in which the changes among the
treatment group of project beneficiaries are compared to changes among the control group of non-beneficiaries. Impact is inferred if the changes among the
treatment group are significantly more favorable than changes among the control group. The analysis must also take account of mediating variables that might
affect this comparison, for example differences in wealth, age, gender, or educational attainment between the two samples.
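A difference-in-differences estimate of this kind can be computed with standard statistical software. The Python (statsmodels) sketch below shows one possible specification in which the coefficient on the interaction between treatment status and survey round is the estimated impact, with mediating variables included as controls; the variable names and data file are hypothetical.

# A minimal sketch of a difference-in-differences regression on the panel data.
# Variable names and the data file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("panel_long.csv")  # one row per respondent per survey round
# treated: 1 for project participants, 0 for the control group
# followup: 0 for the baseline round, 1 for the follow-up round

model = smf.ols(
    "household_income ~ treated * followup + age + education_years + C(gender)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["respondent_id"]})

print(model.summary().tables[1])
# The coefficient on the interaction term is the difference-in-differences estimate.
print("Estimated impact:", model.params["treated:followup"])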
For more information on data collection and analysis, see the Impact Assessment Primer Series article #3, "Collecting and Using Data for Impact Assessment."

Step 6: Disseminate the Impact Assessment Findings


Since the impact assessment is likely to generate information that has value beyond the particular project assessed, it is vital that the lessons learned through it be
disseminated effectively to all those who are in a position to use them. Possible means of dissemination include web postings, seminar or conference presentations,
workshops, and published papers.
The PSDIAI team has produced the following impact assessment reports:
Baseline Research Report for USAID/Brazil's Micro and Small Enterprise Trade-Led Growth Program
Baseline Research Report for Kenya BDS and Horticulture Development Centre Projects
