
Applied Microeconometrics:

Program Evaluation

Katja Kaufmann
Winter 2012/13
Bocconi University
Organization of the Course
What is expected of you?
1. Read the theory and applied papers marked with (*) in the syllabus
2. Present papers of your choice in class (marked with +)
→ 20 min presentation (question and contribution of the paper,
methodology, results, conclusion and critical assessment)
3. Problem sets

→ All of this is relevant for the final exam and final grade.


→ AND, more importantly, the course should help you in your future
research, both in analyzing other people's papers and in applying the
methods learned to address research questions of interest
Outline

Section 1: Introduction
1. Program Evaluation: Motivation and Questions that can be
addressed
2. Challenge: The Problem of Causal Inference
3. Counterfactual Framework
4. Different Approaches

Section 2: Randomized and Natural Experiments


Motivation: Why learn “Program Evaluation”
Approaches?
Learn to evaluate the impact of “programs/policies”
→ Helps you to address interesting and policy-relevant questions from
many different areas

Examples:
• Education: Effect of an additional year of schooling on earnings (“returns to
education”); effect of school inputs or class size on test scores; effect of
watching TV on test scores; effect of fellowships on college enrollment
• Labor: Effect of unemployment insurance on duration of unemployment;
effect of minimum wages or job training on employment
• Development: Effect of antipoverty programs such as conditional cash
transfer programs on children’s education and health; effect of microfinance
programs on level and volatility of income
• Health: Effect of smoking/drinking on health; effect of an advertising
campaign to cut smoking; effect of increasing the minimum drinking age on
traffic deaths
Challenge: The Problem of Causal Inference

• Drawing causal inferences, such as “What is the causal effect of a
job training program on earnings?”, requires answering
counterfactual questions:
1. How would individuals who participated in a program have fared in
the absence of the program?
2. How would those who were not exposed to the program have fared in
the presence of the program?
• Problem: we never observe counterfactual outcomes, as we cannot
observe a person in two different states of the world.

(The theoretical framework of counterfactual outcomes will be
introduced later today; for now we give the intuition.)
Problem of Causal Inference: Counterfactual
Outcomes are unobserved

To get at the causal effect, why not compare:

1. Outcomes of people who are in the program with those of people who are
not in the program, e.g. compare earnings of training participants to
those of non-participants?
→ Omitted variables (e.g. people who are more motivated to work
might decide to join the program, but because of that motivation
they would have higher earnings anyway, even without the program)
→ Self-selection into treatment (e.g. into job training, by those who
expect to benefit most from such a program)

2. Outcomes of the same person before and after the program

→ Other factors that affect outcomes change over time, for example
macroeconomic shocks such as business cycles, seasonal
differences, or natural processes like aging
Goal of the literature on Program Evaluation:
How to find a good comparison group to make up for not knowing
counterfactual outcomes
• Many different approaches that rely on different assumptions and that have
different data requirements (e.g. single cross-section, repeated
cross-section, panel data)
• Which approach is suitable and “available” depends on the context.

Goal of the course:


→ Understand the identifying assumptions needed to justify the application
of different estimators (and learn to apply the methods).
→ We discuss these for each approach and do a comprehensive comparison at the
end of the course.
Program Evaluation and Regression
Framework
Example: OLS Regression
→ We are usually interested in estimating causal (i.e. “ceteris paribus”) effects
(causation versus correlation)

Y = a + bX + U, assume E(U|X) = 0 (conditional mean independence)

→ dE(Y|X)/dX = b, the average causal effect
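A minimal simulation sketch (my own illustration, not from the slides): when E(U|X) = 0 holds, the OLS slope recovers the average causal effect b in Y = a + bX + U.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=n)
U = rng.normal(size=n)                # drawn independently of X, so E(U|X) = 0
Y = 1.0 + 2.0 * X + U                 # true a = 1, b = 2

b_hat = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)   # OLS slope for a single regressor
print(b_hat)                          # close to the causal effect b = 2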

Evaluate approaches according to


1) Internal validity: Does an approach provide credible estimates of these
effects for the population and setting under study?
2) External validity: When does an approach provide credible estimates of
these effects that can be generalized from the given population and
setting to other populations and settings (e.g. legal, policy and
physical environments).
Internal validity

• Internal validity: the statistical inferences about causal
effects are valid for the population being studied
• Five threats to internal validity
1. Omitted variable bias
2. Sample selection bias
3. Simultaneous causality bias (or reversed causality)
4. Wrong functional form
5. Measurement error (“Errors-in-variables” bias)

→ All of these threats lead to a violation of the conditional (mean)
independence assumption
1. Omitted variable bias
• Coefficients of interest in the long regression: y = β0 + β1·x1 + β2·x2 + u

• But instead we estimate the short regression: y = β0 + β1·x1 + v

• The OLS estimator of β1 then converges in probability to: β1 + β2·δ1,
where δ1 is the slope from a regression of the omitted x2 on x1

(Derive in problem set)


An omitted variable bias arises if the omitted variable is both (i) a
determinant of y and (ii) correlated with at least one included regressor
→ Evaluate the conditions for an upward or downward bias from this equation
(illustrated numerically in the sketch at the end of this slide)

Intuition: we attribute the effect of the omitted variable to another variable in the
model that is correlated with the omitted variable
→ The coefficient on that other variable cannot be interpreted as “causal”
(example: ability bias)

Potential solutions to omitted variable bias:

• include the omitted variable if it is measurable (or use a proxy)
• use panel data (solves the problem when the omitted factor is a time-constant individual effect)
• use instrumental variable regression
• run a randomized experiment
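A hedged numerical sketch (simulated data, my own example): omitting a regressor that is correlated with the included one pushes the OLS slope toward β1 + β2·δ1, in line with the probability limit above.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)          # omitted variable, correlated with x1
y = 1.0 + 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)   # true beta1 = 1, beta2 = 2

# Short regression of y on x1 only (x2 omitted)
slope_short = np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)
delta1 = np.cov(x1, x2)[0, 1] / np.var(x1, ddof=1)    # regression of x2 on x1
print(slope_short)                          # approx. 1 + 2*0.5 = 2, not the causal 1
print(1.0 + 2.0 * delta1)                   # beta1 + beta2*delta1, the plim formula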
2. Sample selection bias

• The regression framework typically assumes simple random sampling from
the population, BUT sometimes the sample “selects itself”
• Sample selection bias arises when a selection process
– influences the availability of data, and
– is related to the dependent variable.
→ This induces correlation between the regressors and the error term (again
violating conditional mean independence)
2. Sample selection bias

• Example: Returns to Education

– Random sample from the population of workers (!!), with data on earnings
and years of education
– Problem: the factors that determine whether someone works are quite
similar to the factors that determine how much that person earns when
employed → the fact that someone has a job suggests that the person has
a high U, and this error term could be correlated with the
included regressor (educ); see the sketch below
• Potential solutions to sample selection bias
– Randomized controlled experiment
– Construct a model of the sample selection problem and estimate that
model (Heckman’s (1979) sample selection model, which was cited in
his 2000 Nobel Prize)
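A hedged simulation sketch (my own stylized example): earnings are only observed for people who work, and the work decision depends on the same unobservable U that drives earnings, so the return to education estimated on the selected sample is biased.

import numpy as np

rng = np.random.default_rng(2)
n = 200_000
educ = rng.integers(8, 17, size=n).astype(float)
U = rng.normal(size=n)
log_wage = 1.0 + 0.10 * educ + U            # true return to education = 0.10

# People work if an index depending on educ and U crosses a threshold
works = (0.05 * educ + U + rng.normal(size=n)) > 1.5

slope_full = np.cov(educ, log_wage)[0, 1] / np.var(educ, ddof=1)
slope_selected = np.cov(educ[works], log_wage[works])[0, 1] / np.var(educ[works], ddof=1)
print(slope_full, slope_selected)           # the selected-sample slope is biased
                                            # (downward here: low-educ workers need a high U to be employed)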
3. Simultaneous causality bias (or reversed
causality)

• Example of reversed causality: do low wages cause bad health, or does bad
health cause low wages?
• More generally, simultaneous equations: price and quantity are
determined jointly by the demand and supply equations
3. Simultaneous causality bias (or reversed
causality)

• Economic theory provides us with not one, but two causal equations:

y = βx + u and x = γy + v,

and we observe an iid sample of (y, x)

→ OLS of y on x will not yield a consistent estimate of β
→ Intuition: a high value of u leads to a high value of y, which in turn leads
to a high value of x → x and u will be correlated
→ Formally: E(xu) = γ·E(yu) + E(vu);

even if E(uv) = 0, E(yu) is generally not equal to zero (see the first equation)
(see also the sketch at the end of this slide)
• Solutions to simultaneous causality bias:
– Randomized controlled experiment: because x is chosen at random by
the experimenter, there is no feedback from the outcome variable y back to x
(assuming perfect compliance)
– Use instrumental variables regression to estimate the causal effect of
interest (use the variation in x that is exogenous, e.g. supply shifters
such as bad weather which change supply but do not affect demand)
– Develop and estimate a complete model of both directions of causality
(idea behind large macro models, very difficult in practice).
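A hedged simulation sketch (stylized supply-and-demand example of my own): because price and quantity are determined jointly, OLS of quantity on price is inconsistent for the demand slope, while a supply shifter z (e.g. weather) that is excluded from demand can serve as an instrument.

import numpy as np

rng = np.random.default_rng(3)
n = 100_000
u = rng.normal(size=n)                      # demand shock
v = rng.normal(size=n)                      # supply shock
z = rng.normal(size=n)                      # supply shifter, excluded from demand

# demand: q = 10 - 1.0*p + u ;  supply: q = 2 + 1.0*p + 0.5*z + v
p = (10 - 2 - 0.5 * z + u - v) / 2.0        # equilibrium price
q = 10 - 1.0 * p + u                        # equilibrium quantity

ols_slope = np.cov(p, q)[0, 1] / np.var(p, ddof=1)
iv_slope = np.cov(z, q)[0, 1] / np.cov(z, p)[0, 1]
print(ols_slope, iv_slope)                  # OLS is badly biased; IV is close to -1.0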
Framework of potential outcomes
(Rubin’s causal model)

• Each individual has two potential outcomes

– Y0: potential outcome without treatment
– Y1: potential outcome with treatment

→ Treatment effect: Y1 − Y0
for each individual, but only one of the two outcomes is observed

• D = 1 if the individual receives treatment, else D = 0

• Observed outcome: Y = D·Y1 + (1 − D)·Y0
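A minimal sketch (simulated data, my own illustration) of the fundamental problem: each unit has a pair (Y0, Y1), but only Y = D·Y1 + (1 − D)·Y0 is ever observed; the other potential outcome is the unobserved counterfactual.

import numpy as np

rng = np.random.default_rng(4)
n = 5
Y0 = rng.normal(0.0, 1.0, size=n)
Y1 = Y0 + 2.0                               # individual treatment effect of 2
D = rng.integers(0, 2, size=n)

Y = D * Y1 + (1 - D) * Y0                   # the only outcome we get to see
print(np.column_stack([D, Y]))              # for each unit, one potential outcome stays hidden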
Framework of potential outcomes
(Rubin’s causal model)

• If the individual is treated (D = 1):
– Y1 is observed,
– Y0 is a counterfactual

• If the individual is not treated (D = 0):
– Y0 is observed,
– Y1 is a counterfactual
Parameters of interest

Most commonly used:


• Average treatment effect (ATE): E(Y1 − Y0)

• Average effect of treatment on the treated (TTE): E(Y1 − Y0 | D = 1)

• Average effect of treatment on the untreated (TUE): E(Y1 − Y0 | D = 0)

• (ATE is the weighted average of TTE and TUE, with weights P(D = 1) and
P(D = 0); see the sketch below)
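A hedged sketch (simulated data, my own example): with heterogeneous effects and selection on gains, ATE, TTE and TUE generally differ, and ATE is the treatment-share-weighted average of TTE and TUE.

import numpy as np

rng = np.random.default_rng(5)
n = 200_000
Y0 = rng.normal(0.0, 1.0, size=n)
gain = rng.normal(1.0, 1.0, size=n)         # heterogeneous treatment effect
Y1 = Y0 + gain
D = (gain + rng.normal(size=n)) > 1.0       # people with larger gains select in

ATE = np.mean(Y1 - Y0)
TTE = np.mean(Y1[D] - Y0[D])
TUE = np.mean(Y1[~D] - Y0[~D])
p1 = np.mean(D)
print(ATE, TTE, TUE)                        # TTE > ATE > TUE here
print(p1 * TTE + (1 - p1) * TUE)            # equals ATE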


Parameters of interest
Other parameters of interest
• Proportion of people benefiting from the program: P(Y1 > Y0)

• Effect of treatment on people at the margin of participation (MTE)
→ relevant for policy, as these are the people who will be affected by a
marginal policy change

• Distribution of treatment effects

• Selected quantiles (e.g. the effect of an antipoverty program differs for
people at different quantiles of the income distribution; see the sketch below)
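A minimal sketch (simulated data, my own illustration): quantile treatment effects compare quantiles of the Y1 and Y0 distributions, e.g. a transfer that only helps the poorest shows up at low income quantiles but not at the median or the top.

import numpy as np

rng = np.random.default_rng(6)
n = 200_000
Y0 = np.exp(rng.normal(0.0, 0.8, size=n))            # incomes without the program
Y1 = Y0 + 0.5 * (Y0 < np.quantile(Y0, 0.25))         # transfer only for the poorest quarter

for tau in (0.10, 0.50, 0.90):
    print(tau, np.quantile(Y1, tau) - np.quantile(Y0, tau))   # quantile "treatment effect" at tau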
Model for outcomes with and without
treatment
• Model: Y0 = a + U0, Y1 = a + g(X) + U1

• Observed outcome: Y = D·Y1 + (1 − D)·Y0 = a + g(X)·D + U0 + D·(U1 − U0)

→ Heterogeneous treatment effect: Y1 − Y0 = g(X) + (U1 − U0)

When are ATE and TTE the same?

• ATE (given X): E(Y1 − Y0 | X) = g(X) + E(U1 − U0 | X)
• TTE (given X): E(Y1 − Y0 | X, D = 1) = g(X) + E(U1 − U0 | X, D = 1)
• The parameters are the same if
– (A1) U1 = U0
→ homogeneous effect (conditional on X): g(X)
Y = a + g(X)·D + U0
Note: this is usually assumed in the regression framework; problems of
internal validity arise as discussed before (e.g. omitted variable bias)
When are ATE and TTE the same?

• ATE (given X): E(Y1 − Y0 | X) = g(X) + E(U1 − U0 | X)
• TTE (given X): E(Y1 − Y0 | X, D = 1) = g(X) + E(U1 − U0 | X, D = 1)
• The parameters are the same if
– (A2) E(U1 − U0 | X, D = 1) = E(U1 − U0 | X)
→ Effects can be heterogeneous, but the choice of treatment D is
independent of U1 − U0 (i.e. there is ex-post heterogeneity, but it is not
acted on ex-ante)
This is, for example, the case in a randomized experiment with
full compliance: E(Y1 − Y0 | D = 1) = E(Y1 − Y0), as (U1 − U0) is independent of D
(see the sketch below)
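A hedged sketch (simulated data, my own example): with heterogeneous effects but randomly assigned D and full compliance, D is independent of (U1 − U0), so the simple difference in observed means recovers both ATE and TTE.

import numpy as np

rng = np.random.default_rng(7)
n = 200_000
Y0 = rng.normal(0.0, 1.0, size=n)
Y1 = Y0 + rng.normal(1.0, 1.0, size=n)      # heterogeneous effects, ATE = 1
D = rng.integers(0, 2, size=n).astype(bool) # randomized treatment assignment

Y = np.where(D, Y1, Y0)                     # observed outcome
print(np.mean(Y[D]) - np.mean(Y[~D]))       # difference in means, approx. 1.0
print(np.mean(Y1 - Y0), np.mean(Y1[D] - Y0[D]))   # ATE and TTE, both approx. 1.0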
Goal of the literature on Program Evaluation
How to find a “good” comparison group to make up for not knowing
counterfactual outcomes

Illustration:
- Identification problem:
we observe E(Y0|D=0) and E(Y1|D=1), but not the counterfactuals E(Y0|D=1) and E(Y1|D=0)

- Example: To estimate the TTE, we would need


TTE = E(Y1|D=1) - E(Y0|D=1) (Problem: second term is unobserved)

- Assumption to identify TTE:


E(Y0|D=1) = E(Y0|D=0) = E(Y0),
i.e. no selectivity based on the outcome in the untreated state
(violated, for example, if those who face negative income shocks enter job training
programs; holds in a randomized experiment)
→ Substitute the unobserved second term with the observed E(Y0|D=0)
Selection problems

In general: E(Y0|D=1) ≠ E(Y0|D=0)

→ The “naïve” estimator (difference in observed means) is then a
biased estimator for TTE

E(Y1|D=1) - E(Y0|D=0)
= [E(Y1|D=1) - E(Y0|D=1)] + [E(Y0|D=1) - E(Y0|D=0)]
= TTE + Bias
→ Bias is the difference between the (average) counterfactual Y0 in the two
populations (treated and untreated); see the sketch below
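A hedged numerical check (simulated data, my own example): the naïve difference in observed means equals the TTE plus the bias term E(Y0|D=1) − E(Y0|D=0).

import numpy as np

rng = np.random.default_rng(8)
n = 200_000
Y0 = rng.normal(0.0, 1.0, size=n)
Y1 = Y0 + 1.0                               # true effect of 1 for everyone
D = (Y0 + rng.normal(size=n)) < 0.0         # e.g. people with low Y0 enter job training

Y = np.where(D, Y1, Y0)
naive = np.mean(Y[D]) - np.mean(Y[~D])      # E(Y1|D=1) - E(Y0|D=0) in the sample
TTE = np.mean(Y1[D] - Y0[D])
bias = np.mean(Y0[D]) - np.mean(Y0[~D])
print(naive, TTE + bias)                    # the decomposition holds exactly
print(bias)                                 # negative here: the naïve estimator understates the TTE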
Why is this bias likely?

1) Simple Roy model: “I participate if it is worth it for me.”

→ Selection rule: D = 1 if Y1 − Y0 > C

→ Then in general:
E(Y0 | D=1) = E(Y0 | Y0 < Y1 − C) → those who chose treatment
not equal to
E(Y0 | D=0) = E(Y0 | Y0 > Y1 − C) → those who chose not to be treated

In this case selectivity stems from:

- Comparative advantage in terms of Y1 − Y0
- Simple example: participants have a smaller Y0 and thus a larger potential gain
(think of the job training example and the “Ashenfelter dip”)
- Heterogeneity in costs C
(see the sketch at the end of this slide)
2) Administrative rule
- “Cream-skimming”: administrators choose “the best”, i.e.
E(Y0 | D=1) > E(Y0 | D=0) → the naïve comparison overestimates the treatment effect
- Or the weakest kids are put in smaller classes, i.e.
E(Y0 | D=1) < E(Y0 | D=0) → it underestimates the treatment effect (e.g. of
class size)
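A hedged sketch of the Roy-model selection rule above (simulated data, my own example): when D = 1 if Y1 − Y0 > C, the untreated outcomes of participants and non-participants differ, so the naïve comparison is biased.

import numpy as np

rng = np.random.default_rng(9)
n = 200_000
Y0 = rng.normal(0.0, 1.0, size=n)
Y1 = rng.normal(1.0, 1.0, size=n)           # gains Y1 - Y0 vary across people
C = rng.normal(0.5, 0.5, size=n)            # heterogeneous participation costs
D = (Y1 - Y0) > C                           # "I participate if it is worth it for me"

print(np.mean(Y0[D]), np.mean(Y0[~D]))      # E(Y0|D=1) != E(Y0|D=0): participants
                                            # have lower Y0 (comparative advantage)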
Different approaches to Program Evaluation
1. Selection on observables (unconfoundedness
assumption): we observe all X that affect the participation
decision and the outcome
• Matching
• Diff-in-diff: a very specific form of selection on unobservables is
allowed, based on fixed (i.e. time-constant) effects
• Regression discontinuity: no selection on unobservables around
the discontinuity
2. Selection on unobservables
• Control function approach
• Instrumental variable estimation: find a variable that is correlated
with the treatment participation decision but does not directly affect the outcome
(the “exclusion restriction”: the instrument is excluded from the outcome equation)
