
Monitoring Observed Policy Outcomes

LEARNING OBJECTIVES
By studying this chapter, you should be able to:

• Distinguish monitoring from other policy-analytic methods


• Describe the main functions of monitoring
• Distinguish policy inputs, activities, outputs, outcomes, and impacts
• Compare and contrast social systems accounting, social auditing, policy experimentation, research and practice
synthesis, systematic review, and meta-analysis
• Define and illustrate threats to the validity of experiments, quasi-experiments, and natural experiments
• Perform interrupted time-series and control-series analyses to explain policy outcomes

INTRODUCTION
The consequences of policies are never fully known in advance. For this reason, the policy-analytic procedure of
monitoring is essential to policy analysis. Indeed, much of the work of policy analysis is carried out after policies have
been prescribed and adopted. Although prescription provides valuable information about what should be done to solve
a problem, only by monitoring the outcomes of a policy can we find out whether the policy has been successfully
implemented. Monitoring helps answer a profoundly important practical question: Does the policy work?

MONITORING IN POLICY ANALYSIS


Monitoring permits the production of information about the causes and consequences of policies. Because monitoring
investigates relations between policy operations and their observed outcomes, it is the primary source of information
about the success of efforts to implement policies. Monitoring goes beyond the normative prescriptions of economic
analysis by establishing factual premises about policies. Monitoring is primarily concerned with establishing factual
premises about public policies, while evaluation (Chapter 7) is concerned with establishing value premises. While
factual and value premises are in continuous flux, and “facts” and “values” are interdependent, only monitoring
produces factual claims during and after policies have been adopted and implemented (ex post).
Monitoring performs several functions in policy analysis: compliance, auditing, accounting, and description and
explanation.

• Compliance. Monitoring helps determine whether the actions of program managers are in compliance with norms,
values, and standards mandated by legislatures, regulatory agencies, and professional associations. For example, the
Environmental Protection Agency’s Continuous Air Monitoring Program (CAMP) produces information about pollution
levels to determine whether industries are complying with federal air quality standards.

• Auditing. Monitoring helps discover whether resources and services intended for target groups and beneficiaries
actually reach them, for example, recipients of social services or municipalities that qualify for federal and state grants.
By monitoring federal revenue sharing, we can determine the extent to which funds are reaching local governments.

• Accounting. Monitoring produces information that is helpful in accounting for social and economic changes that
follow the implementation of policies over time. For example, changes in well-being may be monitored with such social
indicators as average education, percentage of the population below the poverty line, and average annual paid
vacations. The World Bank, the European Union, the Organization of African Unity, the United Nations, the United States, and other national and international bodies employ social indicators to monitor changes in quality of life.

• Description and explanation. Monitoring also yields information that helps explain why public policies and programs result in particular outcomes. The Campbell Collaboration, which has spread to more than forty
countries, is an archive of field experiments in education, labor, criminal justice, housing, social work, and other areas.
The Campbell Collaboration and the parallel organization in medicine and public health, the Cochrane Collaboration,
aim at evidence-based policy and evidence-based medicine.

Policy-Relevant Causes and Effects


In monitoring policy outcomes, there are several kinds of policy-relevant effects. Policy outcomes are the goods, services, and resources actually received by target groups and beneficiaries as an effect of the policy or program. By contrast, per capita welfare expenditures and units of packaged food produced by Meals on Wheels are examples of
policy outputs rather than policy outcomes. In contrast to policy outcomes, policy impacts are longer-term changes in
knowledge, attitudes, or behaviors that result from policy outcomes.
APPROACHES TO MONITORING

Monitoring may be broken down into several identifiable approaches: social systems accounting, social auditing,
policy experimentation, research and practice synthesis, meta-analysis, and case studies.

Social Systems Accounting


With this general framework in mind, we can proceed to contrast approaches to monitoring. Social systems
accounting is an approach and set of methods that permit analysts to monitor changes in objective and subjective
social conditions over time.23 The term social systems accounting comes from a report published by the National
Commission on Technology, Automation, and Economic Progress, a body established in 1964 to examine the social
consequences of technological development and economic growth.

Policy Experimentation
Random innovation is the process of executing a large number of alternative policies and programs whose inputs
are neither standardized nor systematically manipulated. Because there is no direct control over inputs, activities, and
outputs, the outcomes and impacts of policies cannot easily be traced back to known sources. By contrast, policy
experimentation is the process of systematically manipulating policies and programs in a way that permits corrigible
answers to questions about the sources of change in policy outcomes.

Social experimentation as a form of monitoring has several important characteristics:


• Direct control over experimental treatments (policy interventions). Analysts who use social experimentation directly control experimental treatments (policy interventions) and attempt to maximize differences among them in order to produce effects that differ as much as possible. One objective is the establishment of a plausible counterfactual claim, that is, a claim that the experimental (policy intervention) group would have experienced no change in the absence of the intervention, because no change occurred in the control group.
• Random assignment. Potential sources of variation in policy outcomes other than those produced by the experimental treatment are eliminated by randomly selecting members to participate in the policy experiment, by randomly dividing them into experimental and control groups, and by randomly assigning the treatment (policy intervention) to one of these groups. Random assignment minimizes biases in the selection of members of the experimental and control groups, who may respond differently to the experimental treatment. For example, children from middle-class families may respond to a special education program in a more positive way than children from poor families. This would mean that factors other than the program itself are responsible for such outcomes as higher reading scores. These selection biases are reduced or eliminated through randomization, as illustrated in the sketch below.
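The logic of random assignment can be illustrated with a short simulation. The following sketch is not part of the chapter: the number of participants, the reading-score model, and the five-point program effect are invented assumptions, written in Python for concreteness.

# Hypothetical sketch: children are randomly assigned to a special education
# program (treatment) or to a control group, and mean reading scores are compared.
import random
random.seed(42)

children = list(range(200))            # invented pool of 200 participants
random.shuffle(children)               # randomization removes selection bias
treatment, control = children[:100], children[100:]

def reading_score(treated):
    baseline = 60 + random.gauss(0, 10)      # invented baseline score
    return baseline + (5 if treated else 0)  # invented 5-point program effect

treated_scores = [reading_score(True) for _ in treatment]
control_scores = [reading_score(False) for _ in control]

print("Treatment mean:", round(sum(treated_scores) / len(treated_scores), 1))
print("Control mean:  ", round(sum(control_scores) / len(control_scores), 1))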

Social Auditing

One of the limitations of social systems accounting and social experimentation—namely, that both approaches neglect
or oversimplify policy processes—is partly overcome in social auditing. Social auditing explicitly monitors relations among inputs, activities, outputs, outcomes, and impacts in an effort to trace policy inputs “from the point at which they are disbursed to the point at which they are experienced by the ultimate intended recipient of those resources.”49 Social auditing, which has been used in areas of educational and youth policy by analysts at the RAND Corporation and the National Institute of Education, helps determine whether policy outcomes are a consequence of inadequate policy inputs or a result of processes that divert resources or services from intended target groups and beneficiaries.

Research and Practice Synthesis

Research and practice synthesis is a method of monitoring that involves the systematic compilation, comparison, and
assessment of the results of past efforts to implement policies and programs. It has been used to synthesize
information in a number of policy issue areas that range from social welfare, agriculture, and education to municipal
services and science and technology policy. It has also been employed to assess the quality of policy research
conducted on policy outcomes.

Systematic Reviews and Meta-Analyses

The systematic review is an evaluation methodology that summarizes the best available evidence on a specific
question using standardized procedures to identify, assess, and synthesize policy-relevant research findings.
Systematic reviews investigate the effectiveness of programs and policies through a process that is designed to be accurate, methodologically sound, comprehensive, and unbiased. The systematic review (SR) and meta-analysis (MA) have become indispensable tools for monitoring the outcomes and impacts of policies. For example, an SR helps identify studies of the outcomes of programs and policies in areas ranging from criminal justice to education, policing, and counter-terrorism. In addition, the SR helps evaluate the methodological quality of the studies used to document
these outcomes. These two questions—one about the outcomes of programs and the other about the quality of
evidence used to make claims about these outcomes—are two sides of the same coin. The coin is evidence of “what
works.”
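As a rough illustration of how an MA combines results across studies, the sketch below pools several effect sizes using fixed-effect (inverse-variance) weighting. The effect sizes and variances are invented for the example, and the fixed-effect model is an assumption, not a method prescribed by the chapter.

# Hypothetical sketch: fixed-effect (inverse-variance) pooling of effect sizes,
# as a meta-analysis might do after a systematic review identifies the studies.
effects = [0.30, 0.10, 0.45, 0.25]     # invented standardized mean differences
variances = [0.02, 0.05, 0.04, 0.03]   # invented sampling variances

weights = [1 / v for v in variances]   # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5  # standard error of the pooled effect

print(f"Pooled effect: {pooled:.3f} (SE = {pooled_se:.3f})")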
These approaches may be contrasted in terms of two major properties (Table 6.2):

1. Types of controls. Approaches to monitoring differ in terms of the control they exercise over policy and
program inputs, activities, and outputs. Only one of the approaches (policy experimentation) involves
prospective direct controls. The other approaches exercise control statistically, by measuring
retrospectively how much of the variation in outcomes and impacts can be explained by variations in inputs,
activities, and outputs, as well as factors that are external to the experiment. There is not one type of social
experimentation, but several: the randomized controlled trial or randomized experiment, the quasi-experiment,
the regression-discontinuity experiment, and the natural experiment.21

2. Type of information required. Approaches to monitoring differ according to their respective information
requirements. Some approaches (policy experimentation and social auditing) require the collection of new
information. Social systems accounting may or may not require new information, while research and practice
synthesis and meta-analysis rely exclusively on available information. Case studies rely on newly collected
and available information.

METHODS FOR MONITORING

Common and specialized methods are available for analyzing and visualizing data obtained with the approaches to
monitoring presented in the last section: social systems accounting, social auditing, social experimentation, research
and practice synthesis, and systematic reviews and meta-analyses.

Graphic Displays

Information about observed policy outcomes may be presented in the form of graphic displays, which are visual
representations of the values of one or more output, outcome, and impact variables. A graphic display can be used to
depict a single variable at one or more points in time, or to summarize the relation between two variables.
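As a minimal illustration, the sketch below plots a single hypothetical outcome indicator over time. The data are invented, and matplotlib is assumed as the plotting library, since the chapter prescribes no particular software.

# Minimal sketch of a graphic display: one outcome indicator plotted over time.
import matplotlib.pyplot as plt

years = list(range(2010, 2021))
poverty_rate = [15.1, 15.0, 15.0, 14.8, 14.8, 13.5, 12.7,
                12.3, 11.8, 10.5, 11.4]   # invented values

plt.plot(years, poverty_rate, marker="o")
plt.xlabel("Year")
plt.ylabel("Percent of population below poverty line")
plt.title("Hypothetical outcome indicator over time")
plt.show()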

Tabular Displays

Another useful way to represent policy outcomes is to construct tabular displays. A tabular display is a rectangular
array used to summarize the key features of one or more variables. The simplest form of tabular display is the one-
dimensional table, which presents information about policy outcomes in terms of a single dimension of interest, for
example, age, income, region, or time.
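A one-dimensional table of this kind can be produced, for example, with pandas; the sketch below uses invented participant data and illustrates only the layout, not a procedure from the chapter.

# Minimal sketch of a one-dimensional tabular display: counts and percentages
# of program participants along a single dimension (age group).
import pandas as pd

participants = pd.DataFrame({
    "age_group": ["<18", "18-34", "35-54", "55+", "<18",
                  "18-34", "35-54", "55+", "18-34", "35-54"],  # invented data
})

table = participants["age_group"].value_counts().rename("count").to_frame()
table["percent"] = 100 * table["count"] / table["count"].sum()
print(table)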

Index Numbers

Another useful way to monitor changes in outcome variables over time is to use index numbers. Index numbers are
measures of how much the value of an indicator or set of indicators changes over time relative to a base period. Base
periods are arbitrarily defined as having a value of 100, which serves as the standard for comparing subsequent
changes in the indicators of interest. Many index numbers are used in public policy analysis. These include index
numbers used to monitor changes in consumer prices, industrial production, crime severity, pollution, health care,
quality of life, and other important policy outcomes.
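The arithmetic is straightforward, as in the sketch below: each period's value is divided by the base-period value and multiplied by 100. The yearly price figures are invented for the example.

# Minimal sketch of index numbers: each year's value expressed relative to a
# base period whose index is set to 100.
prices = {2018: 250.0, 2019: 255.0, 2020: 258.8,
          2021: 271.0, 2022: 292.7}   # invented values
base_year = 2018

index = {year: 100 * value / prices[base_year] for year, value in prices.items()}
for year, idx in index.items():
    print(year, round(idx, 1))   # 2018 -> 100.0, ..., 2022 -> 117.1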

Effect Sizes

One of the goals of conducting a systematic review (SR) or meta-analysis (MA) is to determine the extent to which
programs and policies have achieved desired outcomes and impacts. There is a virtual consensus in the policy
analysis and program evaluation communities that the evidence required to determine “what works” is causal in
nature. Generally, policy interventions carried out in the form of experiments have several requirements: A presumed
cause is actively manipulated to produce a subsequent outcome, there is a correlation between the policy and the
outcome, and randomization is used to assign some units (persons, groups, or larger entities) to an intervention group
and others to a control group.
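One common effect size used in SRs and MAs is the standardized mean difference (Cohen's d). The sketch below computes it for two invented groups of scores; the data and the choice of d are assumptions made for illustration.

# Minimal sketch of an effect size: the standardized mean difference (Cohen's d)
# between a treatment group and a control group.
from statistics import mean, variance

treatment = [72, 75, 78, 74, 80, 77, 73, 79]   # invented scores
control   = [68, 70, 71, 69, 72, 67, 70, 71]   # invented scores

n_t, n_c = len(treatment), len(control)
pooled_var = ((n_t - 1) * variance(treatment) +
              (n_c - 1) * variance(control)) / (n_t + n_c - 2)
d = (mean(treatment) - mean(control)) / pooled_var ** 0.5
print(f"Cohen's d = {d:.2f}")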

Interrupted Time-Series Analysis

Interrupted time-series analysis is a set of procedures for displaying in graphic and statistical form the effects of policy
interventions on policy outcomes. Interrupted time-series analysis is appropriate for problems where an agency
initiates some action that is put into effect across an entire jurisdiction or target group, for example, in a particular state
or among all families below the poverty threshold. Because policy actions are limited to persons in the jurisdiction or
target group, there is no opportunity for comparing policy outcomes across different jurisdictions or among different
categories of the target group. In this situation, the only basis of comparison is the record of outcomes for previous
years.
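A common way to carry out such an analysis is segmented regression on the time series, with terms for the pre-intervention trend, a level change at the intervention, and a change in slope afterward. The sketch below is hypothetical: the outcome values and intervention year are invented, and statsmodels is assumed as the estimation library.

# Minimal sketch of an interrupted time-series (segmented regression) analysis.
import numpy as np
import statsmodels.api as sm

years = np.arange(2005, 2020)
intervention_year = 2012                                 # invented intervention point
outcome = np.array([50, 52, 51, 53, 54, 55, 56,          # invented pre-intervention series
                    48, 47, 46, 45, 44, 43, 42, 41],     # invented post-intervention series
                   dtype=float)

time = years - years[0]                                  # overall time trend
post = (years >= intervention_year).astype(int)          # level change at intervention
time_since = np.where(post == 1, years - intervention_year, 0)  # slope change afterward

X = sm.add_constant(np.column_stack([time, post, time_since]))
model = sm.OLS(outcome, X).fit()
print(model.params)   # intercept, pre-trend, level change, post-intervention slope change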
Regression-Discontinuity Analysis

Having reviewed computational procedures for correlation and regression analysis, we are now in a position to
consider regression-discontinuity analysis. Regression-discontinuity analysis is a set of graphic and statistical
procedures used to compute and compare estimates of the outcomes of policy actions undertaken among two or more
groups, one of which is exposed to some policy treatment while the other is not. Regression-discontinuity analysis is
the only procedure discussed so far that is appropriate solely for social experimentation. In fact, regression-
discontinuity analysis is designed for a particularly important type of social experiment, namely, an experiment where
some resource input is in such scarce supply that only a portion of some target population can receive needed
resources.
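In practice the analysis often fits a regression with the assignment score centered at the cutoff, a treatment indicator, and their interaction; the jump at the cutoff estimates the effect of receiving the scarce resource. The sketch below is hypothetical throughout: the scores, cutoff, five-point effect, and the use of statsmodels are all assumptions made for illustration.

# Minimal sketch of regression-discontinuity analysis: units scoring below a
# cutoff receive the scarce resource; the jump at the cutoff is the estimated effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 500)           # invented assignment variable (e.g., need score)
cutoff = 50
treated = (score < cutoff).astype(int)     # only units below the cutoff get the resource

# Invented outcome: a smooth function of the score plus a 5-point jump for treated units.
outcome = 20 + 0.3 * score + 5 * treated + rng.normal(0, 2, 500)

centered = score - cutoff
X = sm.add_constant(np.column_stack([centered, treated, centered * treated]))
effect = sm.OLS(outcome, X).fit().params[2]   # coefficient on the treatment indicator
print(f"Estimated effect at the cutoff: {effect:.2f}")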
