
Econometrics for Finance

Group Assignment
Section: D
Group Name            ID Number
1. Abiyan Tadesse     RAD/0181/2012
2. Bisrat Tesfaye     RAD/0187/2012
3. Bonsa Tezera       RAD/0188/2012
4. Dawit Matiwos      RAD/0189/2012
5. Yared Masresha     RAD/0225/2012

Submitted to: Mr. Surafel


What is Regression Analysis?
-Regression analysis is a statistical tool for investigating the relationships between variables.
These variables are classified as dependent and independent variables.
Dependent variable: the variable that you want the model to explain or predict. Its values
depend on other variables; it is the outcome that you are studying. It is also known as the
response variable, outcome variable, or left-hand-side variable.

Independent variables: the variables that you include in the model to explain or predict changes
in the dependent variable. "Independent" indicates that they stand alone and are not influenced
by the other variables in the model.

-Regression analysis estimates the mean (average) value of the dependent variable for fixed
values of the explanatory variable(s).

Objective of Regression analysis


The objective is to explain the variability in the dependent variable by means of one or more
independent or control variables.

Types of Regression Analysis


There are two main types of regression:

1. Simple Linear Regression


2. Multiple Linear Regression
1. Simple Linear Regression

-Represented by a single-equation regression model:

Y = a + bX + u

Where a: constant (intercept)    Y: dependent variable
      b: slope coefficient       X: independent variable
      u: error (disturbance) term
-The dependent variable is expressed as a function of only a single explanatory variable.
-The causal relationship between the variables flows in one direction only.
-Simple linear regression is a model that assesses the relationship between one dependent variable
and one independent variable.
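
To make this concrete, below is a minimal Python sketch of a simple linear regression; the X and Y values are invented purely for illustration, and a and b are estimated by ordinary least squares using numpy:

import numpy as np

# Hypothetical sample data, for illustration only
X = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)     # independent variable
Y = np.array([2.1, 2.9, 3.8, 4.2, 5.1, 5.8, 6.9, 7.4])  # dependent variable

# Fit Y = a + bX by ordinary least squares; np.polyfit returns [slope, intercept]
b, a = np.polyfit(X, Y, deg=1)

# The residuals u = Y - (a + bX) are the part of Y the fitted line does not explain
u = Y - (a + b * X)

print(f"intercept a = {a:.3f}, slope b = {b:.3f}")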

2. Multiple Linear Regression

-The dependent variable is explained by more than one explanatory variable.

Y = a + b1X1 + b2X2 + b3X3 + ... + btXt + u

Where: Y = the dependent variable you are trying to predict or explain

X = the explanatory (independent) variable(s) you are using to predict or associate with Y

a = the y-intercept

b = the (beta) coefficient, i.e., the slope for each explanatory variable

u = the regression residual or error term
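
As an illustration of the multiple regression above, the Python sketch below simulates data for two explanatory variables and fits Y = a + b1X1 + b2X2 + u by OLS; it assumes the statsmodels package is available, and all variable names and numbers are invented for the example:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
X1 = rng.normal(size=n)                # first explanatory variable (illustrative)
X2 = rng.normal(size=n)                # second explanatory variable (illustrative)
u = rng.normal(scale=0.5, size=n)      # unobserved error term
Y = 1.0 + 3.2 * X1 - 2.0 * X2 + u      # simulated dependent variable

X = sm.add_constant(np.column_stack([X1, X2]))  # add the intercept column a
fit = sm.OLS(Y, X).fit()

print(fit.params)     # estimates of a, b1, b2
print(fit.rsquared)   # share of the variation in Y explained by X1 and X2
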
Interpretation of Linearity in Regression Analysis

Linearity can be interpreted in two ways:

1. Linearity in the variables

The equation describes a straight line, i.e., the variables enter to the first degree only.

For example: Y = B0 + B1X + u

2. Linearity in the parameters

The parameters are raised to the first degree, even if the variables are not.

For example: Y = B0 + B1X² + u
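
To see the distinction in practice, the short sketch below (with made-up data) fits Y = B0 + B1X² + u, which is nonlinear in the variable X but still linear in the parameters, so ordinary least squares applies after transforming X:

import numpy as np

# Hypothetical data that grow roughly with the square of X
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([1.2, 4.1, 9.3, 15.8, 25.4, 35.9])

# Regress Y on the transformed variable Z = X**2; the model stays linear in B0 and B1
Z = X ** 2
B1, B0 = np.polyfit(Z, Y, deg=1)

print(f"B0 = {B0:.3f}, B1 = {B1:.3f}")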

What Is the Purpose of Regression?

In statistical analysis, regression is used to identify the associations between variables occurring
in some data. It can show both the magnitude of such an association and also determine its
statistical significance (i.e., whether or not the association is likely due to chance). Regression is
a powerful tool for statistical inference and has also been used to try to predict future outcomes
based on past observations.

Steps in Regression Analysis


1. Decide on purpose of model and appropriate dependent variable to meet that purpose.
2. Decide on independent variables

3. Estimate parameters of regression equation.


4. Interpret the estimated parameters and goodness of fit, and make a qualitative and quantitative
assessment of the parameters.
5. Assess appropriateness of assumptions.
6. If some assumptions are not satisfied, modify and revise estimated equation.
7. Validate estimated regression equation.
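
A rough Python sketch of steps 3 to 7 is given below; the simulated data, the statsmodels library, and the 80/20 holdout split used for validation are illustrative choices, not prescribed by the list above:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))                                   # step 2: independent variables
Y = 0.5 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=n)  # step 1: dependent variable (simulated)

# Hold out 20% of the sample for validation (step 7)
split = int(0.8 * n)
X_train, X_test = X[:split], X[split:]
Y_train, Y_test = Y[:split], Y[split:]

# Step 3: estimate the parameters of the regression equation
fit = sm.OLS(Y_train, sm.add_constant(X_train)).fit()

# Step 4: interpret the estimated parameters and goodness of fit
print(fit.params, fit.rsquared)

# Step 7: validate the estimated equation on data not used for estimation
Y_pred = fit.predict(sm.add_constant(X_test))
print("out-of-sample RMSE:", np.sqrt(np.mean((Y_test - Y_pred) ** 2)))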

Population Regression Function

-It is an idealized concept.

-It tells how the mean or average response of Y varies with X:

E(Y | Xi) = β1 + β2Xi

where β1 and β2 are unknown but fixed parameters known as the regression coefficients; β1 and β2
are also known as the intercept and slope coefficients, respectively. The equation above is
known as the linear population regression function. Some alternative expressions used in the
literature are linear population regression model or simply linear population regression. In the
sequel, the terms regression, regression equation, and regression model will be used
synonymously. In regression analysis our interest is in estimating PRFs like the one above,
that is, estimating the values of the unknowns β1 and β2 on the basis of observations on Y
and X.

Sample Regression function

Shows the estimated relationship between the dependent variable Y and the independent variable X.

It is a regression function based on a sample drawn from the population:

Ŷi = β̂1 + β̂2Xi

is the sample regression line,

where Ŷi is read as "Y-hat" and:

Ŷi = estimator of E(Y | Xi)

β̂1 = estimator of β1

β̂2 = estimator of β2
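
The estimators β̂1 and β̂2 are usually obtained by ordinary least squares. Below is a minimal sketch of the textbook formulas, β̂2 = Σ(Xi - X̄)(Yi - Ȳ) / Σ(Xi - X̄)² and β̂1 = Ȳ - β̂2X̄, applied to made-up sample data:

import numpy as np

# Hypothetical sample drawn from the population, for illustration only
X = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
Y = np.array([3.1, 5.2, 6.8, 9.1, 10.9])

x_dev = X - X.mean()
y_dev = Y - Y.mean()

beta2_hat = np.sum(x_dev * y_dev) / np.sum(x_dev ** 2)  # slope estimator
beta1_hat = Y.mean() - beta2_hat * X.mean()             # intercept estimator

Y_hat = beta1_hat + beta2_hat * X  # the sample regression line (Y-hat)
print(beta1_hat, beta2_hat)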

Classical linear regression


There are several assumptions underlying the classical linear regression model; the basic ones are:

Assumption 1: The model should be linear in the parameters, regardless of whether the explanatory
or the dependent variables themselves enter linearly.

y = Xβ + ε

Assumption 2: εi is a random, real-valued variable.

-Its value may be positive, negative, or zero.

Assumption 3 :- Zero mean value of the disturbance

This states that the expected value of the disturbance at observation i in the sample is not
a function of the independent variables observed at any observation, including this one.
This means that the independent variables will not carry useful information for prediction
of εi .
E[ε|X] = 0.

Assumption 4: Homoscedasticity: each disturbance εi has the same finite variance, σ².

Assumption 5: Independence, or no autocorrelation: the disturbances are uncorrelated with one
another, i.e., εi is uncorrelated with every other disturbance εj.

Assumption 6: Normality of the disturbances: the disturbances are normally distributed, and the
data (xj1, xj2, ..., xjK) may be any mixture of constants and random variables:

ε | X ∼ N(0, σ²I).

X may be fixed or random.

Assumption 7: The disturbance term εi is not correlated with the explanatory variables.

-The X values are non-stochastic (not random variables).

-εi and X do not move together: Cov(X, ε) = 0.

Assumption 8: No perfect multicollinearity among the explanatory variables.

Assumption 9: The model is linear in both the parameters and the variables, and in addition:

9.2 - There is no measurement error in X and Y.

9.3 - There is no correlation between Yt and Yt-1.

9.4 - Y is normally distributed: Y ∼ N(B0 + B1X, σ²).
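
Several of these assumptions can be checked from the fitted residuals. The sketch below uses simulated data and statsmodels; the particular tests (Breusch-Pagan for homoscedasticity, Jarque-Bera for normality of the disturbances, variance inflation factors for multicollinearity) are common conventions rather than part of the assumption list itself:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import jarque_bera
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 150
X = sm.add_constant(rng.normal(size=(n, 2)))       # constant plus two regressors
Y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

fit = sm.OLS(Y, X).fit()

# Assumption 4: homoscedasticity (small p-value suggests heteroscedasticity)
bp_stat, bp_pvalue, _, _ = het_breuschpagan(fit.resid, X)

# Assumption 6: normality of the disturbances (Jarque-Bera on the residuals)
jb_stat, jb_pvalue, _, _ = jarque_bera(fit.resid)

# Assumption 8: no perfect multicollinearity (VIF for each non-constant regressor)
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]

print(bp_pvalue, jb_pvalue, vifs)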

Regression and Econometrics

Econometrics is a set of statistical techniques used to analyze data in finance and economics. An
example of the application of econometrics is to study the income effect using observable data.
An economist may, for example, hypothesize that as a person's income increases, their
spending will also increase.

How do we interpret a Regression Model?


A regression model output may be in the form of Y = 1.0 + 3.2(X1) - 2.0(X2) + 0.21.

Here we have a multiple linear regression that relates a variable Y to two explanatory
variables, X1 and X2. We would interpret the model as follows: the value of Y changes by 3.2 units
for every one-unit change in X1 (if X1 goes up by 2, Y goes up by 6.4, etc.), holding all else
constant (all else equal). That means that, controlling for X2, X1 has this observed relationship.
Likewise, holding X1 constant, every one-unit increase in X2 is associated with a 2.0-unit decrease
in Y. We can also note the y-intercept of 1.0, meaning that Y = 1 when X1 and X2 are both zero. The
error term (residual) is 0.21.
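
As a quick worked check of this interpretation, the snippet below plugs a few hypothetical values of X1 and X2 (chosen only for illustration) into the reported equation:

def predict_y(x1, x2):
    # Fitted equation from the example above; 0.21 is the reported error/residual term
    return 1.0 + 3.2 * x1 - 2.0 * x2 + 0.21

print(predict_y(0, 0))  # 1.21: the intercept plus the residual term
print(predict_y(1, 0))  # 4.41: a one-unit rise in X1 adds 3.2 to Y
print(predict_y(1, 1))  # 2.41: a one-unit rise in X2 then subtracts 2.0
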
References

https://www.investopedia.com/terms/r/regression.asp

https://economictheoryblog.com/2015/04/01/ols_assumptions/

https://corporatefinanceinstitute.com/resources/data-science/regression-analysis/
