
Multiple Linear Regression Model

Objectives

In this chapter, you learn:


 How to develop a multiple regression model
 How to interpret the regression coefficients
 How to determine which independent variables to
include in the regression model



Concept of Multiple Regression Model
 The simple linear regression model (also called the two-variable model) was discussed extensively in the previous section.
 Such models assume that the dependent variable is influenced by only one explanatory variable.
 However, many economic variables are influenced by several factors or variables.
 Hence, simple regression models are often unrealistic.

 Adding more variables to the simple linear regression model leads us to multiple regression models, i.e., models in which the dependent variable (or regressand) depends on two or more explanatory variables, or regressors.
Cont…

 The simplest possible multiple regression model is the three-variable regression, with one dependent variable and two explanatory variables.
Assumptions of Multiple Linear Regression Model
Estimation of Partial Regression Coefficients
Variance and Standard errors of OLS Estimators
Variance of β̂₀:

Var(\hat{\beta}_0) = \sigma_u^2 \left[ \frac{1}{n} + \frac{\bar{X}_1^2 \sum x_2^2 + \bar{X}_2^2 \sum x_1^2 - 2 \bar{X}_1 \bar{X}_2 \sum x_1 x_2}{\sum x_1^2 \sum x_2^2 - \left( \sum x_1 x_2 \right)^2} \right] \qquad (3.17)
 
Variance of β̂₁:

Var(\hat{\beta}_1) = \sigma_u^2 \left[ \frac{\sum x_2^2}{\sum x_1^2 \sum x_2^2 - \left( \sum x_1 x_2 \right)^2} \right] \qquad (3.18)
Variance of β̂₂:

Var(\hat{\beta}_2) = \sigma_u^2 \left[ \frac{\sum x_1^2}{\sum x_1^2 \sum x_2^2 - \left( \sum x_1 x_2 \right)^2} \right] \qquad (3.19)
Self test question

Σŷ² = 135.03     Σx₁² = 49,206.92   ΣY = 639
Σx₁y = 1043.25   Σx₁x₂ = 960.67     ΣX₁ = 3679
Σe² = 1401.22    Σx₂² = 2500.65     ΣX₂ = 1684
Σy² = 1536.25    Σx₂y = -509        n = 12

a) Estimate the model.
b) Calculate the variances and standard errors of the parameters.
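A minimal sketch of how the estimation could be carried out from these sums, using the standard deviation-form OLS formulas for the three-variable model together with equations (3.17)-(3.19); the variable names and the k = 3 parameter count are assumptions for illustration:

```python
# OLS estimates and variances for Y = b0 + b1*X1 + b2*X2 from deviation-form sums
Sx1x1, Sx2x2, Sx1x2 = 49206.92, 2500.65, 960.67
Sx1y, Sx2y = 1043.25, -509.0
SY, SX1, SX2, Se2, n = 639.0, 3679.0, 1684.0, 1401.22, 12

D = Sx1x1 * Sx2x2 - Sx1x2**2            # common denominator of all three formulas
b1 = (Sx1y * Sx2x2 - Sx2y * Sx1x2) / D
b2 = (Sx2y * Sx1x1 - Sx1y * Sx1x2) / D
Ybar, X1bar, X2bar = SY / n, SX1 / n, SX2 / n
b0 = Ybar - b1 * X1bar - b2 * X2bar

sigma2 = Se2 / (n - 3)                  # residual variance; k = 3 estimated parameters
var_b1 = sigma2 * Sx2x2 / D             # eq. (3.18)
var_b2 = sigma2 * Sx1x1 / D             # eq. (3.19)
var_b0 = sigma2 * (1 / n + (X1bar**2 * Sx2x2 + X2bar**2 * Sx1x1
                            - 2 * X1bar * X2bar * Sx1x2) / D)   # eq. (3.17)

print(b0, b1, b2)
print(var_b0**0.5, var_b1**0.5, var_b2**0.5)   # standard errors
```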
The Multiple Regression Model

Idea: Examine the linear relationship between one dependent variable (Y) and two or more independent variables (Xᵢ).
Multiple Regression Model with k Independent Variables:

Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \cdots + \beta_k X_{ki} + \varepsilon_i

(β₀ = Y-intercept; β₁, …, β_k = population slopes; ε_i = random error)



Multiple Regression Equation
The coefficients of the multiple regression model are
estimated using sample data

Multiple regression equation with k independent variables:


\hat{Y}_i = b_0 + b_1 X_{1i} + b_2 X_{2i} + \cdots + b_k X_{ki}

(Ŷ_i = estimated or predicted value of Y; b₀ = estimated intercept; b₁, …, b_k = estimated slope coefficients)
In this chapter we will use Excel to obtain the regression
slope coefficients and other regression summary measures.



Multiple Regression Equation (continued)

Two-variable model:

\hat{Y} = b_0 + b_1 X_1 + b_2 X_2

[Figure: the fitted regression plane in (X₁, X₂, Y) space, with b₁ the slope for variable X₁ and b₂ the slope for variable X₂.]
Example: 2 Independent Variables

 A distributor of frozen dessert pies wants to evaluate factors thought to influence demand
 Dependent variable: Pie sales (units per week)
 Independent variables: Price (in $), Advertising (in $100s)
 Data are collected for 15 weeks



Pie Sales Example
Week   Pie Sales   Price ($)   Advertising ($100s)
 1     350         5.50        3.3
 2     460         7.50        3.3
 3     350         8.00        3.0
 4     430         8.00        4.5
 5     350         6.80        3.0
 6     380         7.50        4.0
 7     430         4.50        3.0
 8     470         6.40        3.7
 9     450         7.00        3.5
10     490         5.00        4.0
11     340         7.20        3.5
12     300         7.90        3.2
13     440         5.90        4.0
14     450         5.00        3.5
15     300         7.00        2.7

Multiple regression equation:
Sales = b0 + b1 (Price) + b2 (Advertising)



Excel Multiple Regression Output
Regression Statistics
Multiple R 0.72213

R Square 0.52148
Adjusted R Square 0.44172
Standard Error 47.46341
Observations 15

Sales = 306.526 - 24.975 (Price) + 74.131 (Advertising)

ANOVA   df SS MS F Significance F
Regression 2 29460.027 14730.013 6.53861 0.01201
Residual 12 27033.306 2252.776
Total 14 56493.333      

  Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 306.52619 114.25389 2.68285 0.01993 57.58835 555.46404
Price -24.97509 10.83213 -2.30565 0.03979 -48.57626 -1.37392
Advertising 74.13096 25.96732 2.85478 0.01449 17.55303 130.70888
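The same output can be reproduced outside Excel. A minimal sketch using Python's statsmodels (an assumption of this note, not part of the original slides), with the 15-week data keyed in from the table above:

```python
import pandas as pd
import statsmodels.api as sm

sales = [350, 460, 350, 430, 350, 380, 430, 470, 450, 490, 340, 300, 440, 450, 300]
price = [5.50, 7.50, 8.00, 8.00, 6.80, 7.50, 4.50, 6.40, 7.00, 5.00, 7.20, 7.90, 5.90, 5.00, 7.00]
advertising = [3.3, 3.3, 3.0, 4.5, 3.0, 4.0, 3.0, 3.7, 3.5, 4.0, 3.5, 3.2, 4.0, 3.5, 2.7]

# Build the design matrix with an intercept column and fit by OLS
X = sm.add_constant(pd.DataFrame({"Price": price, "Advertising": advertising}))
model = sm.OLS(sales, X).fit()
print(model.summary())   # coefficients, t stats, r-squared, and F should match the Excel output
```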



The Multiple Regression Equation

Sales = 306.526 - 24.975 (Price) + 74.131 (Advertising)


where
Sales is in number of pies per week
Price is in $
Advertising is in $100’s.
b₁ = -24.975: sales will decrease, on average, by 24.975 pies per week for each $1 increase in selling price, net of the effects of changes due to advertising.

b₂ = 74.131: sales will increase, on average, by 74.131 pies per week for each $100 increase in advertising, net of the effects of changes due to price.
Using The Equation to Make Predictions

Predict sales for a week in which the selling price is $5.50 and advertising is $350:

Sales = 306.526 - 24.975 (Price) + 74.131 (Advertising)
      = 306.526 - 24.975 (5.50) + 74.131 (3.5)
      = 428.62

Note that Advertising is in $100s, so $350 means that X₂ = 3.5.
Predicted sales is 428.62 pies.
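The same prediction from the fitted statsmodels object in the earlier sketch (the row order [const, Price, Advertising] matches the design matrix built there):

```python
# Predict weekly sales at Price = $5.50 and Advertising = $350 (i.e., 3.5 hundreds)
print(model.predict([[1.0, 5.50, 3.5]]))   # ≈ 428.62 pies
```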
Predictions in Excel using PHStat
 PHStat | regression | multiple regression …

Check the
“confidence and
prediction interval
estimates” box



Predictions in PHStat (continued)
[PHStat output screenshot showing: the input values; the predicted Y value; a confidence interval for the mean value of Y, given these X values; and a prediction interval for an individual Y value, given these X values.]
The coefficient of multiple determination
 Just as the coefficient of determination is the square of the simple correlation coefficient in the simple model, the coefficient of multiple determination is the square of the multiple correlation coefficient.
 The coefficient of multiple determination measures the proportion of the variation in the dependent variable that is explained jointly by the independent variables in the model.
Cont…

 In simple linear regression, a higher R² means the model is better explained by the explanatory variable in the model.
 In multiple linear regression, however, every time we insert an additional explanatory variable into the model, R² increases irrespective of any improvement in the goodness-of-fit of the model. That means a high R² may not imply that the model is good.
 Thus, we adjust R² as follows (n = sample size, k = number of explanatory variables):

\bar{R}^2 = 1 - (1 - R^2) \frac{n - 1}{n - k - 1}

In multiple linear regression, therefore, we should interpret the adjusted R² rather than the ordinary, unadjusted R².
Cont…

 It measures the proportion of variation explained by only those independent variables that really help in explaining the dependent variable.
 It penalizes the addition of independent variables that do not help in predicting the dependent variable.
The Coefficient of Multiple Determination, r²

 Reports the proportion of total variation in Y explained by all X variables taken together:

r^2 = \frac{SSR}{SST} = \frac{\text{regression sum of squares}}{\text{total sum of squares}}



Multiple Coefficient of Determination In Excel

Regression Statistics
Multiple R         0.72213
R Square           0.52148
Adjusted R Square  0.44172
Standard Error     47.46341
Observations       15

r² = SSR / SST = 29460.0 / 56493.3 = .52148

52.1% of the variation in pie sales is explained by the variation in price and advertising.
ANOVA   df SS MS F Significance F
Regression 2 29460.027 14730.013 6.53861 0.01201
Residual 12 27033.306 2252.776
Total 14 56493.333      

  Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 306.52619 114.25389 2.68285 0.01993 57.58835 555.46404
Price -24.97509 10.83213 -2.30565 0.03979 -48.57626 -1.37392
Advertising 74.13096 25.96732 2.85478 0.01449 17.55303 130.70888



Adjusted r2
 r² never decreases when a new X variable is added to the model
 This can be a disadvantage when comparing models
 What is the net effect of adding a new variable?
 We lose a degree of freedom when a new X variable is added
 Did the new X variable add enough explanatory power to offset the loss of one degree of freedom?



Adjusted r2 (continued)

 Shows the proportion of variation in Y explained by all X variables, adjusted for the number of X variables used:

r_{adj}^2 = 1 - (1 - r^2) \frac{n - 1}{n - k - 1}

(where n = sample size, k = number of independent variables)

 Penalizes excessive use of unimportant independent variables
 Smaller than r²
 Useful in comparing among models

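A quick numeric check of both measures (a sketch in Python; the SSR, SST, n, and k values are read off the pie-sales Excel output):

```python
# Verify r-squared and adjusted r-squared for the pie-sales model
SSR, SST = 29460.027, 56493.333
n, k = 15, 2                                     # 15 weeks, 2 independent variables

r2 = SSR / SST                                   # ≈ 0.52148
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)    # ≈ 0.44172
print(r2, r2_adj)
```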


Adjusted r2 in Excel
Regression Statistics
Multiple R         0.72213
R Square           0.52148
Adjusted R Square  0.44172
Standard Error     47.46341
Observations       15

r²_adj = .44172

44.2% of the variation in pie sales is explained by the variation in price and advertising, taking into account the sample size and number of independent variables.
ANOVA   df SS MS F Significance F
Regression 2 29460.027 14730.013 6.53861 0.01201
Residual 12 27033.306 2252.776
Total 14 56493.333      

  Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 306.52619 114.25389 2.68285 0.01993 57.58835 555.46404
Price -24.97509 10.83213 -2.30565 0.03979 -48.57626 -1.37392
Advertising 74.13096 25.96732 2.85478 0.01449 17.55303 130.70888



Cont…

For example:
 If adjusted r² reaches its maximum when two variables are included and declines when a third variable is added, while r² still increases when the third variable is included, the third variable is insignificant to the model.
Hypothesis testing in Multiple Regression

 Hypothesis testing is important for drawing inferences about the estimates and for knowing how representative the estimates are of the true population parameters.
 Basically, in the MLRM, hypothesis testing takes the following forms:

a) Testing individual partial regression coefficients
b) Testing the overall significance of the estimated multiple regression model
Hypothesis testing individual regression coefficients

 As in the two-variable case, the procedure for testing the significance of the partial regression coefficients uses:
  The standard error
  The t-test, or
  The Z-test
Cont…

Ŷ = 4.0 + 0.7X₁ + 0.2X₂        R² = 0.86
     (0.78)  (0.102)  (0.102)      n = 23

where X₁ = labor, X₂ = capital, Y = output.
Test the hypotheses β₁ = 0 and β₂ = 0 at α = 5% using both the test-of-significance and the confidence-interval approaches.
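A minimal sketch of both approaches for this example, assuming d.f. = n - 3 = 20 (two slopes plus an intercept estimated):

```python
from scipy import stats

df = 23 - 3                               # n minus the number of estimated parameters
t_crit = stats.t.ppf(0.975, df)           # two-tailed critical value at alpha = 5%

for name, b, se in [("beta1", 0.7, 0.102), ("beta2", 0.2, 0.102)]:
    t = (b - 0) / se                            # test-of-significance approach
    lo, hi = b - t_crit * se, b + t_crit * se   # confidence-interval approach
    print(name, round(t, 3), round(t_crit, 3), (round(lo, 3), round(hi, 3)))
# beta1's t (≈ 6.86) exceeds the critical value, so it is significant;
# beta2's t (≈ 1.96) does not, and beta2's interval contains zero.
```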
Testing the overall significance of the sample regression

 Here, we are interested in testing the overall significance of the observed or estimated regression line, that is, whether the dependent variable is linearly related to all of the explanatory variables.
 Hypotheses of this type are often called joint hypotheses.
 Testing the overall significance of the model means testing the null hypothesis that none of the explanatory variables in the model significantly determine the changes in the dependent variable; in other words, that none of the explanatory variables significantly explain the dependent variable in the model.
Cont…
H₀: β₁ = β₂ = 0
H₁: βᵢ ≠ 0, for at least one i

The test statistic for this test is given by:

F_{cal} = \frac{\sum \hat{y}^2 / (k - 1)}{\sum e^2 / (n - k)}

 Such a joint hypothesis cannot be tested with the t-test, Z-test, or SE-test.
 However, it can be tested by the analysis-of-variance (ANOVA) technique.
Cont…

 So, we should use the ANOVA table:

Source of variation   Sum of squares   Degrees of freedom   Mean sum of squares    F_cal
Regression            SSE = Σŷ²        k − 1                MSE = Σŷ² / (k − 1)    F = MSE / MSR
Residual              SSR = Σe²        n − k                MSR = Σe² / (n − k)
Total                 SST = Σy²        n − 1

(Note: here SSE denotes the explained sum of squares and SSR the residual sum of squares, the reverse of the Excel labels used later in this chapter.)

 If F_cal > F_α(k − 1, n − k), reject the null; whereas
 If F_cal < F_α(k − 1, n − k), do not reject the null.
 Alternatively, we can calculate the p-value of F_cal and reject the null when it is smaller than α.
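The p-value route is a one-liner with scipy (a sketch; the F value and degrees of freedom are taken from the pie-sales output that follows):

```python
from scipy import stats

# Upper-tail probability of F = 6.5386 with df1 = 2 and df2 = 12
print(stats.f.sf(6.5386, 2, 12))   # ≈ 0.0120, the "Significance F" in the Excel output
```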


Is the Model Significant?

 F Test for Overall Significance of the Model
 Shows if there is a linear relationship between all of the X variables considered together and Y
 Use F-test statistic
 Hypotheses:
H0: β1 = β2 = … = βk = 0 (no linear relationship)
H1: at least one βi ≠ 0 (at least one independent
variable affects Y)



F Test for Overall Significance
 Test statistic:

F_{STAT} = \frac{MSR}{MSE} = \frac{SSR / k}{SSE / (n - k - 1)}

where F_STAT has numerator d.f. = k and denominator d.f. = (n − k − 1).



F Test for Overall Significance In Excel
(continued)

Regression Statistics
Multiple R         0.72213
R Square           0.52148
Adjusted R Square  0.44172
Standard Error     47.46341
Observations       15

F_STAT = MSR / MSE = 14730.0 / 2252.8 = 6.5386, with 2 and 12 degrees of freedom; "Significance F" gives the p-value for the F test.

ANOVA   df SS MS F Significance F
Regression 2 29460.027 14730.013 6.53861 0.01201
Residual 12 27033.306 2252.776
Total 14 56493.333      

  Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 306.52619 114.25389 2.68285 0.01993 57.58835 555.46404
Price -24.97509 10.83213 -2.30565 0.03979 -48.57626 -1.37392
Advertising 74.13096 25.96732 2.85478 0.01449 17.55303 130.70888



F Test for Overall Significance (continued)

H₀: β₁ = β₂ = 0
H₁: β₁ and β₂ not both zero
α = .05, df₁ = 2, df₂ = 12

Critical value: F₀.₀₅ = 3.885
Test statistic: F_STAT = MSR / MSE = 6.5386

Decision: since the F_STAT test statistic falls in the rejection region (6.5386 > 3.885; p-value < .05), reject H₀.
Conclusion: there is evidence that at least one independent variable affects Y.
Self test question

Σŷ² = 135.03     Σx₁² = 49,206.92   ΣY = 639
Σx₁y = 1043.25   Σx₁x₂ = 960.67     ΣX₁ = 3679
Σe² = 1401.22    Σx₂² = 2500.65     ΣX₂ = 1684
Σy² = 1536.25    Σx₂y = -509        n = 12

t_{α/2} (9 d.f.) = 2.262    F_α(2, 11) = 3.98    F_α(2, 9) = 4.26

a) Calculate R² and R̄² and interpret the results.
b) Construct a 95% confidence interval for the true population parameters.
c) Test the significance of X₁ and X₂ in determining changes in Y.
d) Test the overall significance of the model.
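A sketch of parts (a) and (d) from the given sums, assuming k = 3 estimated parameters so that n − k = 9 (which is why the 9 d.f. t value and F_α(2, 9) are supplied):

```python
# R-squared, adjusted R-squared, and the overall F test from the given sums
ess, rss, tss = 135.03, 1401.22, 1536.25   # Σŷ², Σe², Σy²  (ESS + RSS = TSS)
n, k = 12, 3                               # k counts the intercept and two slopes

r2 = ess / tss
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k)
f_cal = (ess / (k - 1)) / (rss / (n - k))  # compare with F_alpha(2, 9) = 4.26
print(r2, r2_adj, f_cal)
```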
Multiple Regression Assumptions

Errors (residuals) from the regression model:

e_i = (Y_i − Ŷ_i)

Assumptions:
 The errors are normally distributed

 Errors have a constant variance

 The model errors are independent



Residual Plots Used in Multiple Regression
 These residual plots are used in multiple
regression:
 Residuals vs. Ŷᵢ
 Residuals vs. X₁ᵢ
 Residuals vs. X₂ᵢ
 Residuals vs. time (if time series data)

Use the residual plots to check for violations of regression assumptions.



Are Individual Variables Significant?

 Use t tests of individual variable slopes
 Shows if there is a linear relationship between the variable Xⱼ and Y, holding constant the effects of the other X variables
 Hypotheses:
 H0: βj = 0 (no linear relationship)
 H1: βj ≠ 0 (linear relationship does exist
between Xj and Y)



Are Individual Variables Significant?
(continued)

H₀: βⱼ = 0 (no linear relationship between Xⱼ and Y)
H₁: βⱼ ≠ 0 (linear relationship does exist between Xⱼ and Y)

Test Statistic:

t_{STAT} = \frac{b_j - 0}{S_{b_j}} \qquad (d.f. = n - k - 1)
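These slope t tests are easy to verify from the coefficient table (a quick sketch using scipy; the b and S_b values come from the pie-sales output below):

```python
from scipy import stats

df = 15 - 2 - 1
for name, b, se in [("Price", -24.97509, 10.83213), ("Advertising", 74.13096, 25.96732)]:
    t = (b - 0) / se                        # t_STAT = (b_j - 0) / S_bj
    p = 2 * stats.t.sf(abs(t), df)          # two-tailed p-value
    print(name, round(t, 3), round(p, 4))   # matches the Excel "t Stat" and "P-value"
```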



Are Individual Variables Significant? Excel Output (continued)

Regression Statistics
Multiple R         0.72213
R Square           0.52148
Adjusted R Square  0.44172
Standard Error     47.46341
Observations       15

t Stat for Price is t_STAT = -2.306, with p-value .0398
t Stat for Advertising is t_STAT = 2.855, with p-value .0145

ANOVA   df SS MS F Significance F
Regression 2 29460.027 14730.013 6.53861 0.01201
Residual 12 27033.306 2252.776
Total 14 56493.333      

  Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 306.52619 114.25389 2.68285 0.01993 57.58835 555.46404
Price -24.97509 10.83213 -2.30565 0.03979 -48.57626 -1.37392
Advertising 74.13096 25.96732 2.85478 0.01449 17.55303 130.70888



Inferences about the Slope: t Test Example

H₀: βⱼ = 0
H₁: βⱼ ≠ 0

From the Excel output:
For Price, t_STAT = -2.306, with p-value .0398
For Advertising, t_STAT = 2.855, with p-value .0145

d.f. = 15 − 2 − 1 = 12, α = .05, t_{α/2} = 2.1788

Decision: the test statistic for each variable falls in the rejection region (|t_STAT| > 2.1788; p-values < .05), so reject H₀ for each variable.
Conclusion: there is evidence that both Price and Advertising affect pie sales at α = .05.
Confidence Interval Estimate for the Slope

Confidence interval for the population slope βⱼ:

b_j \pm t_{\alpha/2} S_{b_j}, where t has (n − k − 1) d.f.

              Coefficients   Standard Error
Intercept     306.52619      114.25389
Price         -24.97509      10.83213
Advertising   74.13096       25.96732

Here, t has (15 − 2 − 1) = 12 d.f.

Example: Form a 95% confidence interval for the effect of changes in price (X₁) on pie sales:

-24.975 ± (2.1788)(10.832)

So the interval is (-48.576, -1.374).
(This interval does not contain zero, so price has a significant effect on sales.)
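The same interval computed directly (a sketch; 2.1788 is the two-tailed t critical value with 12 d.f.):

```python
from scipy import stats

b, se, df = -24.97509, 10.83213, 12
t_crit = stats.t.ppf(0.975, df)             # ≈ 2.1788
print(b - t_crit * se, b + t_crit * se)     # ≈ (-48.576, -1.374)
```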



Confidence Interval Estimate for the Slope
(continued)

Confidence interval for the population slope βj

  Coefficients Standard Error … Lower 95% Upper 95%


Intercept 306.52619 114.25389 … 57.58835 555.46404
Price -24.97509 10.83213 … -48.57626 -1.37392
Advertising 74.13096 25.96732 … 17.55303 130.70888

Example: Excel output also reports these interval endpoints:


Weekly sales are estimated to be reduced by between 1.37 and 48.58 pies for each $1 increase in the selling price, holding the effect of advertising constant.

