Solutions Manual to Accompany Business Forecasting with Business ForecastX, 6th Edition (ISBN 9780073373645)



CHAPTER 5
FORECASTING WITH MULTIPLE REGRESSION

CHAPTER OVERVIEW

This chapter extends our discussion of linear regression to multiple regression models. In addition, qualitative factors such as seasonality are modeled using dummy variables.

LEARNING OBJECTIVES

• Introduce the Classical Linear Multiple Regression Model
• Statistical Evaluation of the Multiple Regression Model
• Modeling Seasonality with Qualitative Variables
• Generating Multiple Regression Forecasts Using ForecastX™

NOTES TO TEACHERS

This chapter extends the classical linear regression model to multiple regression and its
accompanying modifications. Emphasis is placed on using dummy variables to account for data
seasonality.

1. Students will appreciate that multiple regression is a fairly straightforward extension of simple regression, including the optimal properties of the ordinary least squares estimator under the Gaussian assumptions. A notable difference, however, is that the slope parameters are now interpreted as partial derivatives, allowing us to analyze the effect of one independent variable on the dependent variable while holding all other variables constant. Thus, to control for a given factor, one must include it in the model as an explanatory variable. The three-dimensional representation of the two-independent-variable regression serves to make clear to students precisely what is being estimated in multiple regression. Regressions with more than two independent variables cannot be illustrated graphically but can be explained as representing hyperplanes (i.e., regression planes in four or more dimensions).
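As a concrete illustration of the two-independent-variable case, here is a minimal sketch in Python using statsmodels. The data and variable names are invented purely for illustration (the text itself works in ForecastX); the point is that each estimated slope is a partial effect.

```python
# Minimal sketch: two-independent-variable regression (hypothetical data).
# Each slope is a partial effect: the change in the dependent variable for a
# one-unit change in that regressor, holding the other regressor constant.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100
income = rng.normal(50, 10, n)                    # hypothetical regressor 1
price = rng.normal(20, 5, n)                      # hypothetical regressor 2
sales = 100 + 2.0 * income - 3.0 * price + rng.normal(0, 10, n)

X = sm.add_constant(np.column_stack([income, price]))  # intercept plus two regressors
results = sm.OLS(sales, X).fit()
print(results.params)   # [intercept, partial effect of income, partial effect of price]
```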

2. Having more than one independent variable raises the issue of overlapping causation and the problem of multicollinearity. Accordingly, the correlation among the independent variables in the model is now an important concern in the model selection process. When one independent variable is an exact linear combination of another, OLS fails and no reliable estimates are obtained. Near multicollinearity, on the other hand, arises when two independent variables are highly, but not perfectly, correlated. This causes OLS estimates to be imprecise, i.e., to have large standard errors. Finally, dummy variables are introduced as a way of measuring qualitative attributes in regression.
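A hedged sketch of how this screening might be done in practice with Python (the variable names and the VIF rule of thumb are illustrative, not from the text): inspect the correlation matrix of the regressors and their variance inflation factors before settling on a specification.

```python
# Sketch: screening regressors for near multicollinearity (hypothetical data).
# Large pairwise correlations or VIFs (a common rule of thumb is VIF > 10)
# signal imprecise, high-variance OLS estimates.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
income = rng.normal(50, 10, 200)
wealth = 5 * income + rng.normal(0, 5, 200)   # nearly a linear function of income
rate = rng.normal(3, 1, 200)
X = pd.DataFrame({"income": income, "wealth": wealth, "rate": rate})

print(X.corr().round(2))                      # pairwise correlations among regressors

exog = sm.add_constant(X)
for i, col in enumerate(exog.columns):
    if col != "const":
        print(col, variance_inflation_factor(exog.values, i))
```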


3. With the exception of multicollinearity, the statistical evaluation of multiple regression is very similar to that of bivariate regression. One important difference, however, is the use of the adjusted R-squared and other model selection criteria, such as AIC and BIC, as preferred alternatives to the simple R-squared. Students should note that these additional criteria are necessary since one can always increase the simple R-squared by simply adding more explanatory variables, even if they are only marginally correlated with the dependent variable. AIC and BIC are the preferred tools for model selection in actual practice. The Durbin-Watson test statistic remains an important tool in business forecasting since we almost always work with time series data. Note carefully that it is appropriate to check DW(4) with quarterly data and DW(12) with monthly data.
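A minimal sketch of how these diagnostics can be pulled from a fitted model in Python (the data here are simulated; statsmodels is assumed available):

```python
# Sketch: adjusted R-squared, AIC, BIC, and Durbin-Watson for a fitted model.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
n = 120
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.8 * x1 - 0.5 * x2 + rng.normal(scale=0.7, size=n)

results = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(results.rsquared, results.rsquared_adj)   # adding regressors never lowers the first
print(results.aic, results.bic)                 # lower is better for model selection
print(durbin_watson(results.resid))             # values near 2 suggest little serial correlation
```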

4. An important use of qualitative or dummy variables is in modeling data seasonality. It is important for students to note that dummy variables are essentially on/off switches, and that care must be taken not to include too many of them in the model. For example, we do not need separate dummy variables for males and females; a single variable for sex will do, i.e., either you are female or you are not. Failure to account for this simple fact leads to perfect multicollinearity, which renders OLS useless. Generally speaking, the rule is: for J attributes, at most J-1 dummy variables may be used (fewer than J-1 are always allowed).

It is also important to point out to students how to correctly interpret the estimated coefficients on a set of dummy variables. In the text's example on data seasonality, the dummy variables act to shift the estimated intercept relative to the base period (in this case, quarter one). Accordingly, dummy variable coefficients are interpreted relative to a base period, which may be arbitrarily selected by the forecaster. The base period is the period not accounted for by a dummy variable (e.g., if quarters two, three, and four are represented by dummy variables and there is no quarter one dummy variable, then quarter one is the base period; if two quarters are not represented by dummy variables, then the average of the two "missing" quarters is the base period). In this way, dummy variables allow forecasters to measure the effects of qualitative factors on quantitative variables.
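The mechanics can be sketched in Python as follows (a quarterly example with invented dates; quarter one is left out as the base period, which enforces the J-1 rule discussed above):

```python
# Sketch: quarterly seasonal dummies with quarter one as the base period.
# Dropping the first category enforces the J-1 rule and avoids the dummy-variable trap.
import pandas as pd

periods = pd.period_range("2020Q1", periods=8, freq="Q")
quarters = pd.Series(periods.quarter, index=periods, name="quarter")

dummies = pd.get_dummies(quarters, prefix="Q", drop_first=True).astype(int)
print(dummies)   # columns Q_2, Q_3, Q_4; each coefficient shifts the intercept relative to Q1
```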

5. Some extensions to multiple regression frequently encountered in practice, notably modeling data nonlinearities and dynamic adjustment issues, are also covered. Since many economic variables are subject to diminishing marginal returns, forecasters frequently add squared values of independent variables to model nonlinear relationships between variables. Note that the inclusion of a squared term does not "force" nonlinearity on the forecast, but allows for a nonlinear effect if, in fact, a nonlinear relationship truly exists. In addition, the inclusion of a lagged value of the dependent variable allows forecasters to model dynamic adjustment of the dependent variable to the set of independent variables and often helps remove the problem of serial correlation. Finally, exponential relationships can be modeled with linear regression by means of a logarithmic transformation.
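These three extensions can be sketched with statsmodels formulas as below; the series, coefficients, and column names are fabricated purely for illustration and are not taken from the text.

```python
# Sketch: a squared term, a lagged dependent variable, and a log-log specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
adv = rng.uniform(1, 10, 80)
sales = 5 + 4 * adv - 0.3 * adv**2 + rng.normal(0, 1, 80)   # diminishing returns
df = pd.DataFrame({"sales": sales, "adv": adv})
df["sales_lag1"] = df["sales"].shift(1)                     # lagged dependent variable

quad = smf.ols("sales ~ adv + I(adv**2)", data=df).fit()            # allows (does not force) curvature
dyn = smf.ols("sales ~ adv + sales_lag1", data=df.dropna()).fit()   # dynamic adjustment
loglog = smf.ols("np.log(sales) ~ np.log(adv)", data=df).fit()      # exponential relation linearized

print(quad.params, dyn.params, loglog.params, sep="\n")
```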


ANSWERS TO END-OF-CHAPTER EXERCISES

1. In evaluating a multiple regression, the adjusted R-squared (sometimes called the adjusted multiple coefficient of determination) should always be considered. The reason for the adjustment is that adding another independent variable will always increase the unadjusted R-squared, even if the variable has no meaningful relation to the dependent variable. To get around this and show only meaningful changes in R-squared, an adjustment is made to account for the loss in degrees of freedom as additional independent variables are added to the model.
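In the usual textbook notation, with n observations and k independent variables, the adjustment takes the form

\[
\bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - k - 1},
\]

so the adjusted R-squared rises only when a new variable improves the fit by more than the degree of freedom it costs.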

2. The estimated coefficients of the model for jewelry sales and their respective t-ratios are
reported in the table below.

Variable                     Coefficient Estimate    Calculated t-Test
Constant Term                          -163.57                 -0.76
Disposable Income (DPI)                   0.25                 12.47
Unemployment Rate (UR)                  -45.41                 -2.17
9/11 Dummy (911)                       -259.10                 -1.53
February Dummy (feb)                    449.59                  4.86
March Dummy (mar)                       177.66                  1.92
April Dummy (apr)                       245.75                  2.66
May Dummy (may)                         619.11                  6.69
June Dummy (jun)                        362.36                  3.91
July Dummy (jul)                        286.61                  3.10
August Dummy (aug)                      362.46                  3.91
September Dummy (sep)                   269.49                  2.88
October Dummy (oct)                     360.24                  3.85
November Dummy (nov)                    786.46                  8.49
December Dummy (dec)                  3,851.24                 41.55

Step #1: For the most part, the level variables have the expected signs. Sales increase with real disposable personal income (DPI) and decrease with increases in the unemployment rate. The model also reveals a September 11th effect through the negative sign on the 911 variable (although it is significant only at the 87% level).

The dummy variables are interpreted as follows: their magnitude is the difference between the
base period (the period left out; in this case, period 1 or January) and the measured period, i.e.,
they are always compared with a base period. Of the eleven seasonal dummy variables, all are
significant at least at the 90% level (and most at the 95% level). Since all the seasonal dummy
variables are positive in sign, sales in each of these months can be expected to be above January
sales levels.


Step #2: All of the estimated coefficients of the independent variables are significantly different from zero at the 5 percent level, using a two-tailed t-test, with the exception of the September 11th variable and the March dummy variable. Note carefully that we do not check the t-statistic for the constant term as part of this test.

Step #3: The adjusted R-squared is 95.51%, suggesting that the model explains 95.51 percent of the variability in jewelry sales. This is substantially higher than in previous versions of the model and suggests that the model above is a good candidate for forecasting JS. The Durbin-Watson statistic is 1.79 (close to two), suggesting that serial correlation is not a problem with this model. Finally, the overall significance of the regression is tested with the F-test (the calculated F-statistic is 218.36), and we reject the null hypothesis of no model fit at the 99% level of confidence.

In conclusion, based on in-sample results, the regression model with the additional variables appears to explain jewelry sales fairly well. This is shown in Table 5-9, in which the in-sample RMSE for this model (RMSE = 214.56) is well below that of the two-independent-variable regression (RMSE = 1,008.87).
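One way these diagnostics could be reproduced in Python is sketched below. The data file name and column names (JS, DPI, UR, d911, month) are hypothetical; the text's own solution is produced with ForecastX, so exact figures may differ slightly.

```python
# Sketch: reproducing the Exercise 2 diagnostics with statsmodels.
# File and column names below are assumptions, not the text's actual file.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("jewelry_sales.csv")                                     # hypothetical data file
month_dummies = pd.get_dummies(df["month"], prefix="m",
                               drop_first=True).astype(int)               # January is the base period
data = pd.concat([df, month_dummies], axis=1)

formula = "JS ~ DPI + UR + d911 + " + " + ".join(month_dummies.columns)
results = smf.ols(formula, data=data).fit()

print(results.params)                          # coefficient estimates, as in the table above
print(results.tvalues)                         # calculated t-ratios
print(results.rsquared_adj)                    # compare with the reported 0.9551
print(results.fvalue)                          # compare with the reported 218.36
print(durbin_watson(results.resid))            # compare with the reported 1.79
print(np.sqrt(np.mean(results.resid ** 2)))    # in-sample RMSE, compare with Table 5-9
```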

3. A dummy variable takes a value of either zero or one: zero if the event does not occur for that observation and one if it does. A dummy variable is a special type of variable used to account for the impact of seasonality (or other qualitative attributes). For example, you might use the coefficients on dummy variables to measure the seasonality of ski equipment sales. For the fourth and first quarters of the calendar year you would expect positive signs, whereas during the second and third quarters you would expect the dummy coefficients to have negative signs, depending of course on the base period. This pattern would be expected because the demand for ski equipment increases during the fall and winter and declines during the spring and summer.
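A small sketch of this 0/1 coding in Python (the quarterly setup is invented for illustration):

```python
# Sketch: a dummy variable is just a 0/1 indicator (hypothetical ski-sales setup).
import pandas as pd

periods = pd.period_range("2022Q1", periods=8, freq="Q")
df = pd.DataFrame({"quarter": periods.quarter}, index=periods)

# Ski-season indicator: 1 in the fourth and first quarters, 0 in the second and third.
df["ski_season"] = df["quarter"].isin([1, 4]).astype(int)
print(df)   # with Q2/Q3 as the base, a positive coefficient on ski_season would be expected
```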

4. The model for miles per gallon is summarized by the following results:

Variable     Coefficient    Standard Error    t-ratio
Intercept         6.51            1.28          5.09
CID               0.031           0.012         2.58
D                 9.46            2.67          3.54
M4               14.64            2.09          7.00
M5               14.86            2.42          6.14
US                4.64            2.48          1.87

a) The t-ratios were calculated by dividing each coefficient by its corresponding standard error. For example, the t-ratio for US is 4.64/2.48 = 1.87.
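The full set of t-ratios in the table can be checked the same way; a quick Python check using the reported coefficients and standard errors:

```python
# Quick check: t-ratio = coefficient / standard error for the Exercise 4 table.
coefs = {"Intercept": (6.51, 1.28), "CID": (0.031, 0.012), "D": (9.46, 2.67),
         "M4": (14.64, 2.09), "M5": (14.86, 2.42), "US": (4.64, 2.48)}
for name, (b, se) in coefs.items():
    print(f"{name}: t = {b / se:.2f}")   # reproduces 5.09, 2.58, 3.54, 7.00, 6.14, 1.87
```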

b) The signs on all five independent variables should be evaluated according to one's expectations. In some cases, such as cubic inch displacement (CID) and US (for cars made in the United States), differing arguments could be made. For D (diesel) and for M4 and M5 (manual four- and five-speed transmissions), positive signs would be expected. Given the large sample size of 120, the critical value of the t-ratio is approximately 1.645 for one-tailed tests and 1.96 for two-tailed tests at the 5 percent significance level. Since all of the calculated t-ratios exceed the one-tailed critical value (and all but US exceed the two-tailed value), we can conclude that all five independent variables are influential in determining MPG. Finally, the adjusted R-squared of .569 indicates that 56.9% of the variation in miles per gallon is accounted for by variation in the independent variables included in the model.
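The 1.645 and 1.96 figures are the large-sample (normal) approximations. If exact critical values were wanted for the residual degrees of freedom here (120 observations, five regressors plus an intercept), they could be computed as in this sketch, assuming scipy is available:

```python
# Sketch: exact critical t-values with 120 - 6 = 114 residual degrees of freedom.
# The text's 1.645 and 1.96 are the large-sample normal approximations.
from scipy import stats

df_resid = 120 - 6
print(stats.t.ppf(0.95, df_resid))    # one-tailed 5% critical value, about 1.66
print(stats.t.ppf(0.975, df_resid))   # two-tailed 5% critical value, about 1.98
```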

5. See Excel file c5p5Solution.

6. See Excel file c5p6Solution.

7. See Excel file c5p7Solution.

8. See Excel file c5p8Solution.

9. See Excel file c5p9Solution.

10. See Excel file c5p10Solution.

11. See Excel file c5p11Solution.

12. See Excel file c5p12Solution.

13. See Excel file c5p13Solution.
