Monitoring With CART Principles
CHAPTER 5
A × B = Social Impact
In this formula, “A” means doing what you said you would do and doing it
efficiently, and “B” means choosing good ideas.
If only life were that simple.
Monitoring and evaluation roughly align with the two variables in this
formula. Think of monitoring as “A” and impact evaluation as “B.” Then consider what happens if you forget about A. Even the best idea, which has
been proved to work, will not have an impact if implemented poorly. This
chapter focuses on variable A: how to use your theory of change and the
CART principles (Credible, Actionable, Responsible, and Transportable) to
understand whether you are doing what you set out to do and to improve
how you do it.
We argue that if organizations develop accountable and transparent
monitoring systems, they can not only meet external accountability
requirements but go beyond them to use data to manage and optimize
operations. By following the CART principles, stakeholders can hold
organizations accountable for developing strong monitoring systems and
reporting the data those systems generate. They will also avoid imposing
systems with burdensome predetermined data collection requirements
that do not support learning and improvement.
Of course, variable B, impact evaluation, matters as well. And the push to
evaluate can be hard to resist. At what point should organizations consider
an impact evaluation? Given the cost and complexity of impact evaluations,
we argue that several conditions must be met before proceeding. Most importantly, organizations should first be confident that the part they can control—their implementation—is happening with the fidelity they desire. And monitoring data can help make sure implementation is done well.
Monitoring with the CART Principles [67]
We now have two key building blocks of a right-fit monitoring system: the
theory of change and the CART principles. With these in hand, we will walk
through how to develop a monitoring system that drives accountability,
learning, and improvement.
Five types of monitoring data are critical for learning and accountability. Two of these—financial and activity tracking—are already collected by many organizations. They help organizations demonstrate accountability by tracking program implementation and its costs. The other three—targeting, engagement, and feedback data—support learning and improvement.
3. Targeting Data
4. Engagement Data
We divide engagement data into two parts. The first part of this data is
what economists call the extensive margin, meaning the size of the program
as measured by a simple binary measure of participation: Did an individual
participate in a program or not? Second is the intensive margin, a measure
of how intensely someone participated: How did they interact with the
product or service? How passionate were they? Did they take advantage of
all the benefits they were offered?
Good targeting data make analysis of the extensive margin possible (who
is being reached and who is not). “Take-up” data measure whether someone
participated in a program, policy, or product. This is synonymous with the
extensive margin. For example, take-up data measure the percentage of
people who were offered a good or service and who actually end up using
it. Take-up data also can be broken down by type of person: 35% of women
took up the program, whereas only 20% of men did. Take-up data support program learning and improvement because they help an
organization understand whether a program is meeting client demand. If
services are being provided but take-up numbers are low, organizations may
need to go back to the drawing board—this could indicate a critical flaw in a
program’s design. Perhaps the product or service is not in demand, in which
case organizations should focus resources somewhere else. If demand exists,
then low take-up suggests that perhaps the program is not well advertised,
costs too much, or does not match the needs of people it aims to reach.
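The take-up calculations described above are simple shares. A minimal sketch in Python, using invented records (the field names and figures are hypothetical, not NFA data):

```python
# Hypothetical take-up records: one row per person offered the program,
# with a binary flag for whether they actually used it (the extensive margin).
offers = [
    {"gender": "F", "took_up": True},
    {"gender": "F", "took_up": False},
    {"gender": "F", "took_up": True},
    {"gender": "M", "took_up": True},
    {"gender": "M", "took_up": False},
]

def take_up_rate(records):
    """Share of people offered a good or service who actually used it."""
    return sum(r["took_up"] for r in records) / len(records)

# Overall take-up, and take-up broken down by type of person.
overall = take_up_rate(offers)
by_gender = {
    g: take_up_rate([r for r in offers if r["gender"] == g])
    for g in ("F", "M")
}
```

A low overall rate, or a large gap between subgroups, is the signal to revisit program design or outreach, as discussed above.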
5. Feedback Data
At this point, you may be thinking, “This is all fine in theory, but how does
it work in practice?” Let’s return to the hypothetical supplemental feeding
program run by Nutrition for All (NFA) for malnourished children to see
how the five types of monitoring data—together with the CART principles—can help NFA improve its program. We first revisit NFA’s theory of
change (Figure 5.1) to show how it can be used to set the priorities for data
collection. Then, we will introduce the five different types of monitoring
data we will need to collect to ensure the program is operating as intended.
Figure 5.1. NFA theory of change (excerpt).

Activities and outputs, in program sequence: CHW trainings conducted; CHWs trained; household visits conducted; malnourished children identified; families recruited for the program; families enroll in the program; family trainings conducted; families attend trainings; families receive supplements and advice on nutrition; families know how to use supplements; follow-up visits to detect moderate malnutrition; families serve malnourished children NFA food supplements.

Example link in the chain:
Activity: Community health workers (CHWs) conduct household visits to identify and enroll children in the supplemental feeding program.
Output: Malnourished children are enrolled in NFA’s supplemental feeding program.

Assumptions:
• CHWs visit all households they are assigned to visit
• Families of malnourished children are home and willing to talk with NFA staff about their children (and speak the same language) and allow them to assess nutritional status
• Families agree with the assessment of their children’s nutritional status, and see the value in NFA’s feeding program
• Concerns about the social stigma of not being able to provide adequate nutrition (as indicated by enrollment in the program) are outweighed by the desire to help children
• Families have time to attend required NFA training sessions on preparing supplements and proper nutrition (and are willing to commit to the sessions)
• Families enroll their children in NFA’s program
Activity Tracking
For NFA, a basic activity tracking system would measure the outputs of all
of the program’s core activities: the number of CHW trainings conducted,
health workers trained, villages visited, households visited, malnourished
children identified, families enrolled, family training sessions conducted,
supplements distributed, and supplements consumed by targeted children.
To begin the activity tracking process, NFA needs to know how many
CHW trainings were conducted and who was trained. Even more importantly, the organization would like to know that CHWs have actually
learned the technical skills needed to identify malnourished children and
that they can apply those skills in the field. Tracking training attendance
could be done at relatively low cost; paper forms or tablets could be used
to easily collect attendance information for CHW and family trainings.
Credible data on whether CHWs have learned the skills would be more
costly to obtain as it might involve testing CHWs or shadowing them on
home visits. This may not be a responsible use of resources if the benefits
of this additional data are not worth the costs. We return to this issue later.
Household visits and assessments are critical activities that need to be
closely monitored. If household visits do not take place or assessments are
not done correctly, NFA may fail to reach the malnourished children it aims
to serve. NFA plans to use CHW assessment forms filled out in each village
and after each household visit to track completed visits and gather data on
the nutritional status of all children in the villages.
NFA needs to keep its eye on the credibility of all this data. NFA has two
potential risks in using self-reported data on CHW assessment visits: poor
quality assessments and falsification of data. To ensure the data are credible, NFA could randomly select households to confirm that a CHW visited
and conducted an assessment. If NFA has reason to suspect poor quality
assessments by one or more CHWs, the organization could validate the
data by reassessing a random selection of children that the CHW had visited. However, this reassessment would burden children and families, and
the actionable and responsible principles dictate that it should only be done
if concerns about poor quality assessments are strong.
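A back-check sample like the one described could be drawn as follows. This is a sketch with an invented household list and an assumed 10% audit rate:

```python
import random

# Hypothetical list of households a CHW reported visiting.
reported_visits = [f"household_{i:03d}" for i in range(200)]

# A fixed seed makes the audit list reproducible, so the draw itself
# can be verified later.
rng = random.Random(42)

# Randomly select 10% of reported visits to confirm that the CHW
# actually came and conducted an assessment.
audit_sample = rng.sample(reported_visits, k=len(reported_visits) // 10)
```

Because the households are chosen at random, a CHW cannot predict which visits will be checked, which is what makes the spot check a credible deterrent to falsification.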
Targeting
the family earned an above average income that month. Perhaps there also
was a school feeding program implemented for the first time that year.
Perhaps NFA had no impact and was wasting money, yet with malnourishment rates on the decline, the program appeared effective and continued
unchanged when it should have been modified or shut down.
In other words, we are back to the counterfactual problem.
However, there is a targeting reason to track two of NFA’s outcomes. First, they should track weight-for-age to identify which children are still malnourished, in order to retarget them for additional support. Second, NFA
should track whether families are adopting improved nutritional practices
and are serving the supplements to malnourished children. This is a necessary step for the outcome of improved nutritional status, and NFA needs to
know this is happening before they can consider measuring impact.
The challenge for organizations is clear. When outcome data are collected for targeting purposes, they should not be used for assessing impact.
Yet the data are right there. Management and donors need self-control to
avoid comparing before to after and making sloppy claims about causes.
Engagement
Feedback
supplements are easy to prepare and appealing enough that children actually eat them. What could the program do better, and what services do
participants value most? NFA can use household surveys and focus group
discussions to collect these data, and it can also regularly ask field staff to
relay the feedback they gather informally from participants.
The challenge of collecting engagement and feedback data raises the important issue of choosing sampling methods in monitoring systems. Does
every client need to provide engagement and feedback data for the findings
to be credible, for example? Probably not. Fortunately, the actionable and
responsible principles help us think through these questions. Different data
needs dictate different sampling strategies. With a clear understanding
of how the data will be used, we can determine what type of sampling is
appropriate.
Some elements of activity tracking data need to be collected for all program sites, for example. In our NFA example, the organization needs to
know that supplements have been delivered and distributed at all program
sites. Similarly, administrative data are often automatically collected on all
program participants and can be used for targeting purposes (thus no sampling needed, since sampling only makes sense when collecting data is costly
for each additional observation). Engagement and feedback data, however,
can be more expensive and time-consuming to collect credibly. The responsible principle would therefore suggest considering a random sample of clients, rather than the full population. A smaller sample may provide sufficient data to accurately represent the full client population at a much lower
cost. Where data are intended to demonstrate accountability to external
stakeholders, a random sample may also alleviate concerns about “cherry-picking” the best results. Data for internal learning and improvement might
require purposeful sampling, like specifically interviewing participants
who did not finish a training, or signed up and never used a service, or
are particularly poor, or live in particularly remote areas. If organizations
want to understand why lower income or more rural individuals are not
taking up or engaging with the program, for example, they will want to
focus on those populations.
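The two strategies just described (a random sample for representativeness, a purposeful sample for targeted learning) can be sketched like this; the client roster and its attributes are invented for illustration:

```python
import random

# Hypothetical client roster with the attributes used to sample.
clients = [
    {"id": i, "rural": i % 3 == 0, "completed_training": i % 4 != 0}
    for i in range(120)
]

rng = random.Random(7)  # fixed seed for a reproducible draw

# Random sampling: a representative slice of the full client
# population, e.g. for externally credible feedback data.
random_sample = rng.sample(clients, k=30)

# Purposeful sampling: deliberately pick the clients who did not
# finish the training, to learn why the program is not reaching them.
dropouts = [c for c in clients if not c["completed_training"]]
```

The random draw supports claims about the whole client base; the purposeful list answers a narrower question about a group the organization is worried about.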
Data used for staff accountability (i.e., to track staff performance) likely
need a full sample. (A high-performing staff member would likely not be
happy to miss out on a performance bonus simply because the random
sampling skipped her!)
Armed with these five types of credible monitoring data (accounting, activity, take-up, engagement, feedback), NFA will be in a strong position to understand what is working well and what needs to be improved in its program. Data systems built around key questions and the CART principles will drive that learning and improvement.
The CART principles help organizations winnow the set of data they
should collect. The credible principle reminds us to collect only valid and
reliable data. This means ignoring data that cannot be collected with high
quality, data that “would be nice to have” but realistically will not be credible. The actionable principle tells us to collect only data we are going to use. Organizations should ignore easy-to-collect data that do not have a clear purpose or action associated with them. We have demonstrated how
a theory of change helps pinpoint critical program elements that are worth
collecting data on. The responsible principle suggests that organizations
should collect only those data that have greater benefits than costs—use
must be weighed against the cost of collection and the quality with which
data can be collected. Transportability may also help minimize costs. Where
organizations can focus on widely used and accepted indicators, they can
lower costs by building on efforts of others while creating opportunities for
sharing knowledge.
monthly staff meeting. The important thing is that data are reviewed on a
regular basis in a venue that involves both program managers and staff. At
each meeting, everyone should be able to see the data—hence the chalkboard or the dashboard.
But just holding meetings will not be enough to create organizational
commitment if accountability and learning are not built into the process.
Program staff should be responsible for reporting on the data, sharing what
is working well and developing strategies to improve performance when
things are not. Managers can demonstrate organizational commitment
by engaging in meetings and listening to program staff. Accountability
efforts should focus on the ability of staff to understand, explain, and
develop responses to data—in other words, focused on learning and improvement, not on punishment. If the system becomes punitive or has
strong incentives attached to predetermined targets, staff will have clear
incentives to manipulate the data. The 2016 discovery that Wells Fargo
bank employees were opening fake accounts to meet sales targets is one
example of incentives backfiring.
The final element of an actionable system is consistent follow-up.
Organizations must return to the data and actually use them to inform
program decisions. Without consistent follow-up, staff will quickly learn
that data collection doesn’t really matter and will stop investing in the
credibility of the data.
1. Can and will the (cost-effectively collected) data help manage the day-to-day operations or design decisions for your program?
2. Are the data useful for accountability, to verify that the organization is
doing what it said it would do?
3. If the data are being used to assess impact, do you have a credible
counterfactual?
If you cannot answer yes to at least one of these questions, then you probably should not be collecting the data.