
UNIVERSITY OF DAR ES SALAAM

COLLEGE OF ARTS AND SOCIAL SCIENCES

DEPARTMENT OF ECONOMICS

EC 384: ECONOMETRICS

ASSIGNMENT

NAME: MASSAWE DEOGRATIUS B

REG NO: 2008 – 04 – 02858

ASSIGNMENT ON ORDINARY LEAST SQUARES (OLS)


THE ORDINARY LEAST-SQUARES METHOD

The OLS method gives the best-fitting straight line for a sample of X–Y observations in the sense
that it minimizes the sum of the squared (vertical) deviations of each observed point on the graph
from the straight line.

We take vertical deviations because we are trying to explain or predict movements in Y, which is
measured along the vertical axis. We cannot take the sum of the deviations of each of the
observed points from the OLS line because deviations that are equal in size but opposite in sign
cancel out, so the sum of the deviations equals 0.

Taking the sum of the absolute deviations avoids the problem of having the sum of the deviations
equal to 0. However, the sum of the squared deviations is preferred so as to penalize larger
deviations relatively more than smaller deviations.
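The least-squares criterion described above can be sketched in code. The closed-form slope and intercept below minimize the sum of squared vertical deviations; the data values are hypothetical, chosen only for illustration. The example also shows the point made earlier: the residuals from the fitted line sum to (numerically) zero.

```python
import numpy as np

# Hypothetical sample of X-Y observations (illustration only).
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# OLS slope and intercept minimizing sum((Y - a - b*X)^2),
# via the standard closed-form (normal-equation) solution.
b = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
a = Y.mean() - b * X.mean()

residuals = Y - (a + b * X)
# The OLS residuals sum to zero, which is why the raw (unsquared)
# deviations cannot serve as a goodness-of-fit criterion.
print(b, a, residuals.sum())
```

Squaring the deviations both removes the sign-cancellation problem and penalizes large deviations more heavily than small ones, as noted above.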

PROPERTIES OF LEAST-SQUARES ESTIMATORS: The Gauss–Markov Theorem

As noted earlier, given the assumptions of the classical linear regression model, the least-squares
estimates possess some ideal or optimum properties. These properties are contained in the well-
known Gauss–Markov theorem. To understand this theorem, we need to consider the best
linear unbiasedness property (BLUE) of an estimator. An estimator is said to be a best linear
unbiased estimator (BLUE) if the following hold:
1. It is linear, that is, a linear function of a random variable, such as the dependent variable Y in
the regression model
Y = β1 + β2X1 + β3X2 + µ
2. It is unbiased, that is, its average or expected value, E(β̂2), is equal to the true value, β2.
That is, E(β̂2) = β2
3. It has minimum variance in the class of all such linear unbiased estimators; an unbiased
estimator with the least variance is known as an efficient estimator.

4. It is consistent, that is, as the sample size n increases, the variance of the estimator shrinks to zero.
That is, Var(β̂2) → 0 as n → ∞
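The unbiasedness and consistency properties listed above can be checked by simulation. The sketch below assumes a simple bivariate model with true parameters β1 = 1 and β2 = 2 (chosen arbitrarily for the experiment): the average of many slope estimates lands near the true β2, and the sampling variance of the estimator shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2 = 1.0, 2.0  # true parameters (assumed for this simulation)

def ols_slope(n):
    # Draw one sample of size n and return the OLS slope estimate.
    X = rng.uniform(0, 10, n)
    Y = beta1 + beta2 * X + rng.normal(0, 1, n)
    return np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)

# Unbiasedness: the mean of many estimates is close to the true beta2.
# Consistency: the spread of the estimates shrinks as n increases.
small = np.array([ols_slope(20) for _ in range(2000)])
large = np.array([ols_slope(500) for _ in range(2000)])
print(small.mean(), small.var(), large.var())
```

Running the experiment, the mean of the 2,000 small-sample estimates sits very close to 2, while the variance of the n = 500 estimates is a small fraction of the n = 20 variance, matching properties 2 and 4.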

In time-series data the problem of autocorrelation often emerges, so the OLS estimator fails to be
the best estimator: it no longer has the minimum variance (it is inefficient), although it remains
linear and unbiased. As a solution to the problem, Generalized Least Squares (GLS) is used
instead.
