
Non-exam Formulas, Econometrics, August 2018

SOME VERY IMPORTANT THINGS YOU MUST KNOW IN DETAIL, THAT ARE NOT ON THE FORMULA SHEET, BUT CAN BE FOUND IN THE BOOK (YOU CAN HELP YOUR MEMORY BY SEEING THE LOGIC OF EACH FORMULA)

PROBABILITY AND STATISTICS

$\mu_Y = E[Y]$, $\bar{Y} = \frac{1}{n}\sum_i Y_i$
$\sigma_Y^2 = \mathrm{var}(Y) = E[(Y - \mu_Y)^2] = E[Y^2] - \mu_Y^2$
$s_Y^2 = \frac{1}{n-1}\sum_i (Y_i - \bar{Y})^2$
$\mathrm{cov}(X, Y) = \sigma_{XY} = E[(X - \mu_X)(Y - \mu_Y)] = E[XY] - \mu_X \mu_Y$
$s_{XY} = \frac{1}{n-1}\sum_i (X_i - \bar{X})(Y_i - \bar{Y})$
$\mathrm{corr}(X, Y) = \rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}$, $r_{XY} = \frac{s_{XY}}{s_X s_Y}$
$E[a + bX + cY] = a + b\mu_X + c\mu_Y$
$\mathrm{var}(a + bX + cY) = b^2 \sigma_X^2 + c^2 \sigma_Y^2 + 2bc\,\sigma_{XY}$
$\mathrm{cov}(a + bX + cV, Y) = b\sigma_{XY} + c\sigma_{VY}$
$E[Y] = E[E(Y \mid X)]$
$\mathrm{var}(Y \mid X = x) = E[\{Y - E(Y \mid X = x)\}^2 \mid X = x]$
Skewness $= E[(Y - \mu_Y)^3] / \sigma_Y^3$
Kurtosis $= E[(Y - \mu_Y)^4] / \sigma_Y^4$
$E[\bar{Y}] = \mu_{\bar{Y}} = \mu_Y$, $\mathrm{var}(\bar{Y}) = \sigma_{\bar{Y}}^2 = \sigma_Y^2 / n$
$SE(\bar{Y}) = \hat{\sigma}_{\bar{Y}} = s_Y / \sqrt{n}$, $t = \frac{\bar{Y} - \mu_{Y,0}}{SE(\bar{Y})}$
95% confid. interval: $\bar{Y} \pm 1.96 \, SE(\bar{Y})$
If X and Y are independent, then $E(Y \mid X) = \mu_Y$.
If $E(Y \mid X) = \mu_Y$ (indep. of X), then $\sigma_{XY} = 0$.
If X and Y are independent, then $\sigma_{XY} = 0$.
If X and Y are jointly normally distributed, then $aX + bY + c$ is also normally distributed.
The concepts: convergence in probability/consistency and limiting distributions of random variables.
Convergence in probability/consistency and limiting distributions of a sample mean and a sample variance.
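
For example, the results $E[\bar{Y}] = \mu_Y$ and $\mathrm{var}(\bar{Y}) = \sigma_Y^2 / n$ above follow directly from the rules for means and variances (a short derivation, under the assumption that $Y_1, \ldots, Y_n$ is an i.i.d. sample with mean $\mu_Y$ and variance $\sigma_Y^2$):
$E[\bar{Y}] = \frac{1}{n}\sum_i E[Y_i] = \frac{1}{n} \cdot n \mu_Y = \mu_Y$
$\mathrm{var}(\bar{Y}) = \frac{1}{n^2}\sum_i \mathrm{var}(Y_i) = \frac{1}{n^2} \cdot n \sigma_Y^2 = \frac{\sigma_Y^2}{n}$ (all covariance terms are zero by independence).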

LS-REGRESSION

How to derive LS-estimators (a short sketch is given below).
LS-Assumptions (and what they mean).
Properties of LS-related statistics (incl. convergence in probability/consistency, limiting distributions and small sample distributions).
When you can use large sample results and when you can use small sample results.
Threats to the validity of LS.

$\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_{1i} + \ldots + \hat{\beta}_k X_{ki}$, $\hat{u}_i = Y_i - \hat{Y}_i$
$\sum_i \hat{u}_i = 0$, $s_{X_j \hat{u}} = 0$
$SSR = \sum_i \hat{u}_i^2$, $ESS = \sum_i (\hat{Y}_i - \bar{Y})^2$, $TSS = \sum_i (Y_i - \bar{Y})^2$, $TSS = ESS + SSR$
$R^2 = \frac{ESS}{TSS} = 1 - \frac{SSR}{TSS}$, $\bar{R}^2 = 1 - \frac{s_{\hat{u}}^2}{s_Y^2}$
$s_{\hat{u}}^2 = \frac{SSR}{n - k - 1}$, $SER = s_{\hat{u}}$
$SE(\hat{\beta}_j) = \sqrt{\hat{\sigma}^2_{\hat{\beta}_j}}$
$t = \frac{\hat{\beta}_j - \beta_{j,0}}{SE(\hat{\beta}_j)}$
95% confid. interval: $\hat{\beta}_j \pm 1.96 \, SE(\hat{\beta}_j)$
Homoscedasticity only:
$F = \frac{(SSR_r - SSR_u)/q}{SSR_u/(n - k_u - 1)} = \frac{(R_u^2 - R_r^2)/q}{(1 - R_u^2)/(n - k_u - 1)}$
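
As an illustration of how to derive LS-estimators, a minimal sketch for the single-regressor model $Y_i = \beta_0 + \beta_1 X_i + u_i$ (the multiple-regressor case follows the same logic):
Minimize the sum of squared residuals over $(b_0, b_1)$: $\min_{b_0, b_1} \sum_i (Y_i - b_0 - b_1 X_i)^2$.
The first-order conditions are $\sum_i (Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_i) = 0$ and $\sum_i X_i (Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_i) = 0$.
Solving these gives $\hat{\beta}_1 = \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_i (X_i - \bar{X})^2} = \frac{s_{XY}}{s_X^2}$ and $\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}$.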

IV-REGRESSION

IV-Assumptions (and what they mean), including assumption about the instruments.
TSLS procedure (sketched below).
Properties of IV-related statistics (incl. convergence in probability/consistency and limiting distributions).
When you can use large sample results.
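
A minimal sketch of the TSLS procedure, assuming one endogenous regressor $X$, instruments $Z_1, \ldots, Z_m$, and included exogenous regressors $W_1, \ldots, W_r$ (the notation here is illustrative):
Stage 1: regress $X$ on $Z_1, \ldots, Z_m$ and $W_1, \ldots, W_r$ by LS and save the fitted values $\hat{X}_i$.
Stage 2: regress $Y$ on $\hat{X}$ and $W_1, \ldots, W_r$ by LS; the coefficient on $\hat{X}$ is the TSLS estimate of the coefficient on $X$ (the standard errors must be the TSLS ones, not the second-stage LS standard errors).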

ML-ESTIMATION

How to set up a likelihood function and how to derive ML-estimators.
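
As an illustration of setting up a likelihood function and deriving an ML-estimator, a minimal example assuming $Y_1, \ldots, Y_n$ are i.i.d. Bernoulli($p$) (the specific distribution is chosen only for illustration):
Likelihood: $L(p) = \prod_i p^{Y_i} (1 - p)^{1 - Y_i}$.
Log-likelihood: $\ln L(p) = \sum_i [\, Y_i \ln p + (1 - Y_i) \ln(1 - p) \,]$.
Setting $\frac{d \ln L}{dp} = \frac{\sum_i Y_i}{p} - \frac{\sum_i (1 - Y_i)}{1 - p} = 0$ and solving gives $\hat{p}_{ML} = \bar{Y}$.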
