
BEC2044 ECONOMETRICS 1

LECTURE 1:
INTRODUCTION

© 2011 Pearson Addison-Wesley. All rights reserved.


What is Econometrics?
Econometrics literally means "economic measurement"
It is the quantitative measurement and analysis of actual economic and business phenomena and so involves:
economic theory
statistics
math
observation/data collection


What is Econometrics? (cont.)


Three major uses of econometrics:
Describing economic reality
Testing hypotheses about economic theory
Forecasting future economic activity

So econometrics is all about questions: the researcher (YOU!) first asks questions and then uses econometrics to answer them


Example
Consider the general and purely theoretical relationship:
Q = f(P, Ps, Yd)    (1.1)

Econometrics allows this general and purely theoretical relationship to become explicit:
Q = 27.7 - 0.11P + 0.03Ps + 0.23Yd    (1.2)
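
To see what an estimated equation like (1.2) lets us do, we can evaluate it at chosen values of the explanatory variables. A minimal Python sketch; the coefficients come from (1.2), while the input values for P, Ps, and Yd are purely hypothetical:

```python
# Evaluate the estimated demand equation (1.2): Q = 27.7 - 0.11*P + 0.03*Ps + 0.23*Yd
# Coefficients are taken from (1.2); the input values below are hypothetical.

def predicted_quantity(p, p_substitute, disposable_income):
    """Quantity demanded implied by the estimated equation (1.2)."""
    return 27.7 - 0.11 * p + 0.03 * p_substitute + 0.23 * disposable_income

q_hat = predicted_quantity(100, 80, 150)  # hypothetical P, Ps, Yd
print(round(q_hat, 2))                    # 53.6
```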

What is Regression Analysis?


Economic theory can give us the direction of a change, e.g. the change in the demand for DVDs following a price decrease (or price increase)
But what if we want to know not just "how?" but also "how much?"
Then we need:
A sample of data
A way to estimate such a relationship
One of the most frequently used is regression analysis


What is Regression Analysis? (cont.)
Formally, regression analysis is a statistical
technique that attempts to explain
movements in one variable, the dependent
variable, as a function of movements in a set
of other variables, the independent (or
explanatory) variables, through the
quantification of a single equation


Example
Return to the example from before:
Q = f(P, Ps, Yd)    (1.1)

Here, Q is the dependent variable and P, Ps, and Yd are the independent variables
Don't be deceived by the words "dependent" and "independent", however
A statistically significant regression result does not necessarily imply causality
We also need:
Economic theory
Common sense


Single-Equation Linear Models


The simplest example is:
Y = β0 + β1X    (1.3)
The βs are denoted coefficients
β0 is the constant or intercept term
β1 is the slope coefficient: the amount that Y will change when X increases by one unit; for a linear model, β1 is constant over the entire function

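A tiny numerical sketch of equation (1.3), using made-up values for β0 and β1, shows why the slope of a linear model is the same one-unit effect everywhere along the function:

```python
# Hypothetical coefficient values for Y = beta0 + beta1*X (equation 1.3); purely illustrative.
beta0, beta1 = 2.0, 0.5

def y(x):
    """Deterministic linear function from equation (1.3)."""
    return beta0 + beta1 * x

# For a linear model, a one-unit increase in X changes Y by beta1, no matter where we start:
print(y(11) - y(10))    # 0.5
print(y(101) - y(100))  # 0.5
```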

Figure 1.1: Graphical Representation of the Coefficients of the Regression Line


Single-Equation Linear Models (cont.)
Application of linear regression techniques requires that the equation be linear, such as (1.3)
By contrast, the equation
Y = β0 + β1X²    (1.4)
is not linear
What to do? First define
Z = X²    (1.5)
Substituting into (1.4) yields:
Y = β0 + β1Z    (1.6)
This redefined equation is now linear (in the coefficients β0 and β1 and in the variables Y and Z)

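The substitution in (1.5) and (1.6) is also how such a model could be fitted in practice: transform the regressor first, then run an ordinary least-squares regression on the transformed variable. A sketch on simulated data, with all parameter values assumed for illustration:

```python
import numpy as np

# Simulated example of the substitution in (1.5)-(1.6); all numbers below are assumed.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 + 0.8 * x**2 + rng.normal(0, 2, size=100)  # data generated from a relationship in X^2

z = x**2                                            # define Z = X^2, as in (1.5)
X = np.column_stack([np.ones_like(z), z])           # constant plus Z
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS on the linear-in-Z equation (1.6)
print(beta_hat)  # estimates of (beta0, beta1), close to (3.0, 0.8)
```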

Single-Equation Linear Models (cont.)
Is (1.3) a complete description of the origins of variation in Y?
No, there are at least four sources of variation in Y other than the variation in the included Xs:
Other potentially important explanatory variables may be missing (e.g., X2 and X3)
Measurement error
Incorrect functional form
Purely random and totally unpredictable occurrences
Inclusion of a stochastic error term (ε) effectively takes care of all these other sources of variation in Y that are NOT captured by X, so that (1.3) becomes:
Y = β0 + β1X + ε    (1.7)

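Equation (1.7) can be made concrete by simulation: the deterministic component is what X determines, and the stochastic error term sweeps up everything else. A minimal sketch, with the coefficient values and error distribution assumed purely for illustration:

```python
import numpy as np

# Simulating equation (1.7): Y = beta0 + beta1*X + epsilon.
# The parameter values and sample size are assumptions made only for illustration.
rng = np.random.default_rng(42)
beta0, beta1 = 1.0, 2.0
x = rng.uniform(0, 5, size=200)
epsilon = rng.normal(0, 1, size=200)   # stochastic component: everything not captured by X
y_deterministic = beta0 + beta1 * x    # deterministic component
y = y_deterministic + epsilon          # observed Y scatters around the deterministic line
print(np.mean(y - y_deterministic))    # close to 0: the drawn errors average out
```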

Single-Equation Linear Models (cont.)
Two components in (1.7):
deterministic component (β0 + β1X)
stochastic/random component (ε)
Why deterministic?
It indicates the value of Y that is determined by a given value of X (which is assumed to be non-stochastic)
Alternatively, the deterministic component can be thought of as the expected value of Y given X, namely E(Y|X), i.e. the mean (or average) value of the Ys associated with a particular value of X
This is also denoted the conditional expectation (that is, the expectation of Y conditional on X)


Extending the Notation


Include reference to the number of observations
Single-equation linear case:
Yi = β0 + β1Xi + εi    (i = 1, 2, …, N)    (1.10)
So there are really N equations, one for each observation
the coefficients, β0 and β1, are the same
the values of Y, X, and ε differ across observations


Extending the Notation (cont.)


The general case: the multivariate regression model
Yi = β0 + β1X1i + β2X2i + β3X3i + εi    (i = 1, 2, …, N)    (1.11)
Each of the slope coefficients gives the impact of a one-unit increase in the corresponding X variable on Y, holding the other included independent variables constant (i.e., ceteris paribus)
As an (implicit) consequence of this, the impacts of variables that are not included in the regression are not held constant (we return to this in Ch. 6)


Example: Wage Regression


Let wages (WAGE) depend on:
years of work experience (EXP)
years of education (EDU)
gender of the worker (GEND: 1 if male, 0 if female)
Substituting into equation (1.11) yields:
WAGEi = β0 + β1EXPi + β2EDUi + β3GENDi + εi    (1.12)

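A sketch of how an equation like (1.12) might be estimated in practice, here on simulated workers using the statsmodels library; the data-generating values are assumptions for illustration, not estimates from any real data set:

```python
import numpy as np
import statsmodels.api as sm

# Simulate a hypothetical sample of workers and estimate equation (1.12) by OLS.
rng = np.random.default_rng(1)
n = 500
exper = rng.uniform(0, 30, n)     # years of work experience (EXP)
educ = rng.integers(8, 21, n)     # years of education (EDU)
gend = rng.integers(0, 2, n)      # 1 if male, 0 if female (GEND)
wage = 5 + 0.3 * exper + 1.2 * educ + 2.0 * gend + rng.normal(0, 3, n)  # assumed DGP

X = sm.add_constant(np.column_stack([exper, educ, gend]))
results = sm.OLS(wage, X).fit()
print(results.params)  # estimates of beta0, beta1, beta2, beta3 in (1.12)
```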

Indexing Conventions
Subscript i for data on individuals (so-called cross-sectional data)
Subscript t for time-series data (e.g., series of years, months, or days; daily exchange rates, for example)
Subscript it when we have both (for example, panel data)


The Estimated Regression Equation
The regression equation considered so far is the true, but unknown, theoretical regression equation
Instead of "true", we might think of this as the population regression vs. the sample/estimated regression
How do we obtain the empirical counterpart of the theoretical regression model (1.14)?
It has to be estimated
The empirical counterpart to (1.14) is:
Ŷi = β̂0 + β̂1Xi    (1.16)
The signs on top of the estimates are denoted "hats", so that we have Y-hat, for example


The Estimated Regression Equation (cont.)
For each sample we get a different set of estimated regression coefficients
Ŷi is the estimated value of Yi (i.e. the dependent variable for observation i); similarly, it is the prediction of E(Yi|Xi) from the regression equation
The closer Ŷi is to the observed value of Yi, the better the fit of the equation
Similarly, the smaller the estimated error term, ei, often denoted the residual, the better the fit


The Estimated Regression Equation (cont.)
This can also be seen from the fact that
ei = Yi - Ŷi    (1.17)
Note the difference with the error term, εi, given as
εi = Yi - E(Yi|Xi)    (1.18)
This all comes together in Figure 1.3

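The relationship in (1.17) is easy to verify once a regression has been estimated: each residual is simply the observed Yi minus the fitted Ŷi. A minimal sketch on simulated data (all values assumed):

```python
import numpy as np

# Fit a simple regression on simulated data and compute the residuals from (1.17).
rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 50)
y = 4.0 + 1.5 * x + rng.normal(0, 1, 50)          # hypothetical data

X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates
y_hat = X @ beta_hat                              # fitted values, Y-hat
e = y - y_hat                                     # residuals e_i = Y_i - Y_hat_i, equation (1.17)
print(e[:5], e.mean())  # with an intercept, OLS residuals average to (essentially) zero
```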

Figure 1.3: True and Estimated Regression Lines


Example: Using Regression to Explain Housing Prices
Houses are not homogeneous products, like corn or gold, that have generally known market prices
So, how to appraise a house against a given asking price?
Yes, it's true: many real estate appraisers actually use regression analysis for this!
Consider a specific case: suppose the asking price was $230,000


Example: Using Regression to Explain Housing Prices (cont.)
Is this fair / too much / too little?
It depends on the size of the house (the larger the size, the higher the price)
So, collect cross-sectional data on prices (in thousands of $) and sizes (in square feet) for, say, 43 houses
Then say this yields the following estimated regression line:
PRICEi = 40.0 + 0.138 SIZEi    (1.23)


Figure 1.5: A Cross-Sectional Model of Housing Prices


Example: Using Regression to Explain Housing Prices (cont.)
Note that the interpretation of the intercept term is problematic in this case (we'll get back to this later, in Section 7.1.2)
The literal interpretation of the intercept here is the price of a house with a size of zero square feet


Example: Using Regression to Explain Housing Prices (cont.)
How to use the estimated regression line / estimated regression coefficients to answer the question?
Just plug the particular size of the house you are interested in (here, 1,600 square feet) into (1.23)
Alternatively, read off the estimated price using Figure 1.5
Either way, we get an estimated price of $260.8 (thousand, remember!)
So, in terms of our original question, it's a good deal: go ahead and purchase!
Note that we simplified a lot in this example by assuming that only size matters for housing prices

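The appraisal step described above is a single substitution into (1.23). A small sketch; the coefficients come from (1.23), and the 1,600 square-foot size and $230,000 asking price are the ones given in the example:

```python
# Appraise the 1,600 square-foot house with the estimated line (1.23).

def predicted_price_thousands(size_sqft):
    """PRICE (in $ thousands) implied by (1.23): 40.0 + 0.138 * SIZE."""
    return 40.0 + 0.138 * size_sqft

asking = 230.0                              # asking price, in thousands of dollars
estimate = predicted_price_thousands(1600)  # 40.0 + 0.138 * 1600 = 260.8
print(round(estimate, 1), estimate > asking)  # 260.8 True -> the asking price looks like a good deal
```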

Table 1.1a: Data for and Results of the Weight-Guessing Equation


Table 1.1b: Data for and Results of the Weight-Guessing Equation


Figure 1.4: A Weight-Guessing Equation


Key Terms from Chapter 1


Regression analysis
Slope coefficient
Dependent variable
Multivariate regression model
Independent (or explanatory) variable(s)
Causality
Stochastic error term
Linear
Expected value
Residual
Time series
Cross-sectional data set
Intercept term

