
CHAPTER FOUR

4.0 INTRODUCTION

This chapter presents and analyses the data collected on the price of corrugated roofing sheets (Power hand type) manufactured by Midland Galvanizing Limited, Abeokuta, using the Autoregressive Integrated Moving Average (ARIMA) model, in order to predict the future price and proffer recommendations for policy making.

4.1 RESULTS AND DISCUSSION

4.1.1 STATIONARITY CHECK AND DETERMINATION OF THE APPROPRIATE


ARIMA ORDER

Fig. 4.1: Time series plot of Monthly Power Hand Sales figures
The time series plot in fig. 4.1 shows the price of roofing sheets (Power hand) over the years. The series appears non-stationary, with both upward and downward movements indicating an irregular pattern. This is also evidenced by the ACF and PACF plots in figs. 4.2 and 4.3.
Figure 4.2: Sample ACF plot of Monthly Power Hand Sales figures
The autocorrelation plot in fig. 4.2 shows significant spikes at lags 1, 2, 5 and 16, together with irregular variation, upward spikes at subsequent lags and a decay towards zero only at lag 23, which also indicates an element of non-stationarity.

Figure 4.3: Sample PACF plot of Monthly Power Hand Sales figures
There are also significant upward spikes at lags 1, 5 and 26, with a slow decay in the PACF plot around lag 14.
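The ACF and PACF plots can be reproduced with the base R acf() and pacf() functions; a minimal sketch, mirroring the Appendix code and assuming the data file Powerhand.csv used there, is shown below.

# Read the monthly Power hand series and plot its sample ACF and PACF
# (mirrors the Appendix; Powerhand.csv is the data file used there).
isiaq <- ts(read.csv("Powerhand.csv", header = FALSE),
            frequency = 12, start = c(2006, 1))
acf(ts(isiaq, frequency = 12), lag.max = 36,
    main = "Correlogram of Power hand Sales")
pacf(ts(isiaq, frequency = 12), lag.max = 36,
     main = "Partial Correlogram of Power hand Sales")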


Table 4.1: Augmented Dickey-Fuller Test for Series Stationarity @Level

Dickey-Fuller test statistic Lag order P-value

-2.7799 12 0.252

Source: R-Studio Output

Table 4.1 also indicates the presence of a unit root (P-value 0.252 > 0.05 level of significance) at the level of the series, consistent with figs. 4.1, 4.2 and 4.3.

In view of this, stationarity was achieved by applying first-order differencing, as evidenced in figs. 4.4, 4.5 and 4.6. The absence of a unit root in the differenced series was confirmed by the Augmented Dickey-Fuller test shown in table 4.2 below.
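A minimal sketch of the differencing step and the ADF test, mirroring the Appendix (adf.test() comes from the tseries package; isiaq is the series read in the Appendix), is:

# First-order differencing, time plot and ADF test on the differenced series
# (mirrors the Appendix; assumes isiaq has already been read in).
library(tseries)
diffisiaq <- diff(isiaq, differences = 1)
plot.ts(diffisiaq, ylab = "First differenced power hand sales", xlab = "Year")
adf.test(diffisiaq, alternative = "stationary", k = 12)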

Fig. 4.4: Time series plot of First Order Differenced Monthly Power Hand Sales figures
It can be seen that the trend and irregular variation have disappeared in figure 4.4 owing to the first-order differencing applied.


Figure 4.5: ACF plot of First Order Differenced Monthly Power Hand Sales figures.
The ACF and PACF of the differenced series in figures 4.5 and 4.6 show only a few minor significant spikes. Most of the spikes stay within the confidence bounds, and the autocorrelation function decays slowly to zero at some lags.
Figure 4.6: PACF plot of First Order Differenced Monthly Power Hand Sales figures.
Table 4.2: Augmented Dickey-Fuller Test for Series Stationarity @First Order
Differencing

Dickey-Fuller test statistic Lag order P-value

-3.9132 12 0.01563

Source: R-Studio Output

The stationarity of the differenced series is further confirmed in table 4.2, with an ADF statistic of -3.9132 and a P-value of 0.01563 < 0.05 significance level at lag order 12. With the series now stationary, we can proceed to model identification. At this point it is important to identify candidate orders for the AR(p) and MA(q) components; with a few iterations of this model-building strategy we hope to arrive at the best possible model for the series.

The orders of AR(p) and MA(q) were chosen from the significant spikes in the ACF and PACF plots. The significant spikes at lags 1, 3 and 4 in the ACF of figure 4.5 suggest an MA component of order 1, 3 or 4, while the significant spikes at lags 1, 2, 4, 8 and 15 in the PACF of figure 4.6 suggest an AR component of order 1, 2 and up to 15. These significant spikes suggested the candidate models assembled for selection in table 4.3 below.

Table 4.3: Iterated Models for selection

S/N   Model        AIC       Log-Likelihood   σ² (MSE)
1     (1, 1, 1)    2149.06   -1071.53         495841
2     (4, 1, 4)    2133.50   -1057.75         377924
3     (3, 1, 4)    2132.67   -1058.34         384981
4     (2, 1, 2)    2144.73   -1067.36         464692
5     (4, 1, 5)    2128.37   -1054.18         345716
6     (0, 1, 2)    2139.90   -1066.95         458389
7     (2, 1, 0)    2200.75   -1097.37         754965
8     (2, 1, 1)    2145.03   -1068.51         472326
9     *(1, 1, 2)   2130.28   -1061.14         407226*
10    (2, 1, 3)    2131.39   -1059.69         387476
Source: R-Studio Output
Table 4.3 above shows ARIMA models of various orders. To choose the best model, we look for the model with the smallest AIC and σ². Three models in the table give comparatively small AIC and estimated sigma squared (σ²): ARIMA(4, 1, 5), ARIMA(2, 1, 3) and ARIMA(1, 1, 2). Although ARIMA(4, 1, 5) gives the smallest AIC (2128.37), the smallest estimated σ² (345716) and the highest log-likelihood (-1054.18) of the three, applying the principle of parsimony, which says the simplest model should be chosen provided all necessary conditions are met, resulted in choosing the ARIMA(1, 1, 2) model, whose parameters were then tested for significance.
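The comparison in table 4.3 can be reproduced by fitting each candidate order in a short loop and tabulating the AIC, log-likelihood and estimated σ². The sketch below is a minimal illustration, assuming the differenced series diffisiaq from the Appendix.

# Fit the candidate ARIMA orders of Table 4.3 and compare AIC, log-likelihood
# and estimated sigma^2 (assumes diffisiaq from the Appendix).
orders <- list(c(1,1,1), c(4,1,4), c(3,1,4), c(2,1,2), c(4,1,5),
               c(0,1,2), c(2,1,0), c(2,1,1), c(1,1,2), c(2,1,3))
comparison <- t(sapply(orders, function(o) {
  fit <- arima(diffisiaq, order = o, method = "ML")
  c(p = o[1], d = o[2], q = o[3],
    AIC = AIC(fit), logLik = fit$loglik, sigma2 = fit$sigma2)
}))
comparison[order(comparison[, "AIC"]), ]  # smallest AIC first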

Table 4.4: Best Fitted ARIMA(1, 1, 2) Model Estimates

Coefficients Estimates Standard Error |Z| value


ϕ1 0.4465 0.0936 4.7703
θ1 -1.9649 0.0895 21.954
θ2 0.9652 0.0877 11.006
Source: R-Studio Output

The ARIMA(1, 1, 2) model in table 4.4 is written in backshift-operator form as

(1 − φ₁B)(1 − B)Xₜ = (1 − θ₁B − θ₂B²)εₜ        (4.1)

Substituting the estimated coefficients gives

(1 − 0.4465B)(1 − B)Xₜ = (1 + 1.9649B − 0.9652B²)εₜ        (4.2)

Table 4.4 shows that all parameters, AR(1), MA(1) and MA(2), are significant: the absolute Z value of each parameter estimate exceeds the critical value Z_(α/2) = Z_0.025 = 1.96. This indicates that the model coefficients are efficient in predicting the price of roofing sheets in Nigeria, since all the autoregressive and moving-average parameters are statistically significant.
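The |Z| values in table 4.4 are simply the coefficient estimates divided by their standard errors; a minimal sketch, using the fit9 object as fitted in the Appendix, is:

# Coefficient estimates, standard errors and |z| values for ARIMA(1, 1, 2)
# (fit9 as in the Appendix; assumes diffisiaq is available).
fit9 <- arima(diffisiaq, order = c(1, 1, 2), method = "ML")
est <- coef(fit9)
se  <- sqrt(diag(fit9$var.coef))
round(cbind(Estimate = est, Std.Error = se, z = abs(est / se)), 4)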
It is essential to check whether the model is correctly specified, that is, whether the model assumptions are supported by the data. If any of the key assumptions appear to be violated, a new model should be specified, fitted and checked again until a model that provides an adequate fit to the data is found.

Fig. 4.7: Adequacy Check for ARIMA(1,1,2) using Standardized Residuals, ACF Of

Residuals and P-values for Ljung-Box Statistics

Figure 4.7 depicts the standardized residuals, the ACF of the residuals and the P-values of the Ljung-Box statistic. It indicates that the model fits well, since the spikes of the residual ACF are insignificant and lie within the confidence bounds, and the P-values of the Ljung-Box statistic lie above the significance bound, evidence of an efficient and parsimonious fit.
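Figure 4.7 is produced by the tsdiag() call in the Appendix; a minimal sketch (assuming fit9 as above) is:

# Standardized residuals, residual ACF and Ljung-Box p-values (figure 4.7).
tsdiag(fit9, gof.lag = 12)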

4.1.2 FURTHER DIAGNOSTIC CHECK ON ARIMA(1, 1, 2) MODEL

Before we accept a fitted model and interpret its findings, it is essential to check whether the

model is correctly specified, that is, whether the model assumptions are supported by the

data. If some key assumptions seem to be violated, then a new model should be specified,

fitted and checked again until a model that provides an adequate fit to the data is found. Here,

the quality of the model is further assessed.

4.1.2.1 SHAPIRO-WILK TEST OF NORMALITY

Table 4.5: Shapiro-Wilk Test

Shapiro-Wilk P-value

W = 0.99367 0.8134

Source: R-Studio Output

The Shapiro-Wilk test of normality in table 4.5 above gives a test statistic of W = 0.99367 and a P-value of 0.8134 > 0.05, so normality of the residuals of the best-fitted ARIMA(1, 1, 2) model is not rejected at the 1%, 5% and 10% significance levels. This indicates that the residuals are normally distributed.
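The Shapiro-Wilk result in table 4.5 can be reproduced directly from the model residuals, as in the Appendix:

# Shapiro-Wilk test of normality on the ARIMA(1, 1, 2) residuals.
fit9residual <- resid(fit9)
shapiro.test(fit9residual)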


Fig. 4.8: Histogram showing residuals of the fitted ARIMA(1, 1, 2)

The histogram of the residuals of the best-fitted model with an embedded normal probability curve in figure 4.8 also confirms that the residuals are normally distributed, since the histogram follows the bell-shaped curve.
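A simpler alternative to the Fit9ErrorsPlot() helper in the Appendix is to overlay a normal density directly on the histogram; a minimal sketch is:

# Histogram of residuals with an overlaid normal density curve (cf. figure 4.8).
hist(fit9residual, freq = FALSE, col = "green",
     main = "Residuals of ARIMA(1, 1, 2)", xlab = "Residual")
curve(dnorm(x, mean = 0, sd = sd(fit9residual)),
      add = TRUE, col = "red", lwd = 2)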

4.1.2.2 LJUNG-BOX TEST OF INDEPENDENCE

The Ljung-Box test examines the null hypothesis of independence (no autocorrelation) in the residuals of the fitted model. The test gives a Chi-squared statistic of 8.024944 on 12 degrees of freedom, with a corresponding P-value of 0.8745 > 0.05 level of significance, as shown in table 4.6 below.

Table 4.6: Ljung-Box Test

Chi-Squared Degree of Freedom P-value

8.024944 12 0.8745

Source: R-Studio Output

The large P-value of the test above suggests that we fail to reject the null hypothesis that the autocorrelations at lags 1 to 12 are all zero. In other words, there is little or no evidence of non-zero autocorrelation in the forecast errors at lags 1 to 12 of the fitted model, which indicates that the model has captured the dependence in the series. The AIC and log-likelihood address the fit and parsimony of the model and provide a measure of efficient and parsimonious prediction. The ARIMA(1, 1, 2) model can therefore be confirmed as adequate for predicting the trend of the roofing-sheet price under study.
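The Appendix obtains this result with Box.test() under its default Box-Pierce setting; a Ljung-Box variant over 12 lags can be requested explicitly, as in the sketch below. This is an illustration of the test, not necessarily the exact call that produced table 4.6.

# Ljung-Box portmanteau test on the residuals at 12 lags; fitdf = 3 adjusts
# the degrees of freedom for the p + q = 3 estimated ARMA coefficients.
Box.test(fit9residual, lag = 12, type = "Ljung-Box", fitdf = 3)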

4.2 FORECASTING FROM ARIMA(1, 1, 2) MODEL

The ultimate aim of building any time series model is forecasting; if this objective is not achieved, the work is incomplete. Forecasts are usually based on the assumption that the prevailing conditions or pattern of variation will persist into the future, and they are needed over a period known as the lead time, which varies with each problem. Here we assume that the model parameters are correct and that the true parameters do not change. Forecasts were made of the possible sales trend over a two-year period.

Fig. 4.9: Two (2) Years Forecast from the fitted ARIMA(1, 1, 2) Model
Fig. 4.9 shows the two-year forecast. The two shaded zones represent the 80% and 95% (lower and upper) prediction intervals. A closer look indicates that there will be a steady rise in the price of roofing sheets in Nigeria, and that this rise will take only a short time to reach past peak values unless there is favourable government intervention to ensure that the nation's exports exceed its imports, thereby creating an avenue for a favourable exchange rate and price reduction.
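The Appendix obtains the forecast with forecast.Arima(); in current versions of the forecast package the generic forecast() provides the same functionality. A minimal sketch, assuming fit9 as above, is:

# Two-year (24-month) forecast with 80% and 95% prediction intervals.
library(forecast)
fit9forecast <- forecast(fit9, h = 24)
plot(fit9forecast)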

4.3 SUMMARY OF FINDINGS

This study is primarily concerned with building a suitable, parsimonious time series model using the monthly price of roofing sheets (Power hand) collected from Midland Galvanizing Limited, Abeokuta, from January 2006 to December 2016.

The widely used Box-Jenkins modelling approach was employed to identify and estimate the parameters of the candidate models for the series. The original series was not stationary and showed no indication of seasonality; stationarity was achieved after first-order differencing.

Non-seasonal models were identified from the ACF and PACF plots, and model adequacy was assessed using the AIC, the estimated variance σ² and the log-likelihood. These plots indicated that a model with an autoregressive AR(1) component and a moving-average MA(2) component would provide a good fit, since the ACF was significant at lags 1 and 2. ARIMA(1, 1, 2) was specified as the most adequate model, considering the significant spike at lag 2 in the sample ACF of the differenced series and at lag 1 in the sample PACF of the differenced series. The AR and MA parameters were significant at the 0.05 level, and the diagnostic checks show that the estimated model captures the dependence in the series and that the residuals behave as white noise for the price trend of Power hand roofing sheets produced by the manufacturing company.


Finally, short-term forecasting was carried out using the chosen model, and the results of the findings were fully interpreted.


APPENDIX
> isiaq<-ts(read.csv("Powerhand.csv",header=FALSE),frequency=12,
start=c(2006,1))
> isiaq
Jan Feb Mar Apr May Jun Jul
2006 16012.5 15487.5 17062.5 16012.5 15855.0 15907.5 16275.0
2007 16073.5 16916.7 16073.5 16073.5 16442.4 16073.5 16337.0
2008 16442.4 16916.7 17022.1 16600.5 16600.5 16864.0 15810.0
2009 16012.5 15645.0 14490.0 15750.0 15750.0 16380.0 16537.5
2010 16043.0 14991.0 16043.0 16463.8 16989.8 16989.8 16832.0
2011 16012.5 16275.0 16800.0 17325.0 17955.0 16275.0 16275.0
2012 16905.0 16432.5 16537.5 16537.5 17115.0 17220.0 17325.0
2013 16380.0 16905.0 17587.5 16852.5 17062.5 17535.0 17325.0
2014 16043.0 16095.6 17095.0 17463.2 16411.2 16569.0 16832.0
2015 16957.5 17062.5 17325.0 16275.0 17062.5 17325.0 16275.0
2016 16348.8 15982.0 17292.0 15982.0 16086.8 16348.8 16506.0
2017 17690.0 18700.0 18500.0 18400.0
Aug Sep Oct Nov Dec
2006 18007.5 17955.0 17482.5 16852.5 16275.0
2007 16073.5 16073.5 16231.6 16020.8 16916.7
2008 15810.0 15704.6 13965.5 17127.5 16442.4
2009 16957.5 17325.0 16380.0 15750.0 15487.5
2010 16306.0 17884.0 17095.0 16411.2 17358.0
2011 16012.5 16800.0 17325.0 16275.0 16012.5
2012 17955.0 17902.5 17325.0 16275.0 16012.5
2013 16537.5 17062.5 16905.0 17325.0 17850.0
2014 16832.0 17884.0 18410.0 17095.0 17358.0
2015 16800.0 16537.5 16905.0 17325.0 16380.0
2016 16610.8 16768.0 17292.0 16506.0 17030.0
2017

> plot.ts(isiaq, ylab="Value", xlab="YEAR", main="Monthy sales of Powerhand between Jan


to April, 2017")

> ACFSales<- acf(ts(isiaq, frequency=12), lag.max=36, main="Correlogram of Power hand Sales")

> PACFSales<- pacf(ts(isiaq, frequency=12), lag.max=36, main="Partial Correlogram Plot of Power hand Sales")

> adf.test(isiaq, alternative = c("stationary"), k=12)

Augmented Dickey-Fuller Test

data: isiaq
Dickey-Fuller = -2.7799, Lag order = 12,
p-value = 0.252
alternative hypothesis: stationary

> diffisiaq<-diff(isiaq, differences = 1)


> diffisiaq
Jan Feb Mar Apr May Jun
2006 -525.0 1575.0 -1050.0 -157.5 52.5
2007 -201.5 843.2 -843.2 0.0 368.9 -368.9
2008 -474.3 474.3 105.4 -421.6 0.0 263.5
2009 -429.9 -367.5 -1155.0 1260.0 0.0 630.0
2010 555.5 -1052.0 1052.0 420.8 526.0 0.0
2011 -1345.5 262.5 525.0 525.0 630.0 -1680.0
2012 892.5 -472.5 105.0 0.0 577.5 105.0
2013 367.5 525.0 682.5 -735.0 210.0 472.5
2014 -1807.0 52.6 999.4 368.2 -1052.0 157.8
2015 -400.5 105.0 262.5 -1050.0 787.5 262.5
2016 -31.2 -366.8 1310.0 -1310.0 104.8 262.0
2017 660.0 1010.0 -200.0 -100.0
Jul Aug Sep Oct Nov Dec
2006 367.5 1732.5 -52.5 -472.5 -630.0 -577.5
2007 263.5 -263.5 0.0 158.1 -210.8 895.9
2008 -1054.0 0.0 -105.4 -1739.1 3162.0 -685.1
2009 157.5 420.0 367.5 -945.0 -630.0 -262.5
2010 -157.8 -526.0 1578.0 -789.0 -683.8 946.8
2011 0.0 -262.5 787.5 525.0 -1050.0 -262.5
2012 105.0 630.0 -52.5 -577.5 -1050.0 -262.5
2013 -210.0 -787.5 525.0 -157.5 420.0 525.0
2014 263.0 0.0 1052.0 526.0 -1315.0 263.0
2015 -1050.0 525.0 -262.5 367.5 420.0 -945.0
2016 157.2 104.8 157.2 524.0 -786.0 524.0
2017
> plot.ts(diffisiaq, ylab="First differenced power hand sales",
xlab="year", main="Time plot of first order differenced Power hand sales")

> ACFdiffSales<- acf(ts(diffisiaq, frequency=12), lag.max=36, main="Correlogram Plot of first order differenced Power hand Sales")

> PACFdiffSales<- pacf(ts(diffisiaq, frequency=12), lag.max=36, main="PACF Plot of first order differenced Power hand Sales")

> adf.test(diffisiaq, alternative = c("stationary"), k=12)

Augmented Dickey-Fuller Test

data: diffisiaq
Dickey-Fuller = -3.9132, Lag order = 12, p-value = 0.01563
alternative hypothesis: stationary

> adf.test(diffisiaq, alternative = c("stationary"), k=6)


Augmented Dickey-Fuller Test

data: diffisiaq
Dickey-Fuller = -6.2145, Lag order =
6, p-value = 0.01
alternative hypothesis: stationary

Warning message:
In adf.test(diffisiaq, alternative = c("stationary"), k = 6) :
p-value smaller than printed p-value

> fit1<-arima(diffisiaq, order=c(1,1,1), method="ML")


> fit1

Call:
arima(x = diffisiaq, order = c(1, 1, 1), method = "ML")

Coefficients:
ar1 ma1
-0.2954 -1.0000
s.e. 0.0824 0.0195

sigma^2 estimated as 495841: log likelihood = -1071.53, aic = 2149.06


> fit2<-arima(diffisiaq, order=c(4,1,4), method="ML")
> fit2
Call:
arima(x = diffisiaq, order = c(4, 1, 4), method = "ML")

Coefficients:
ar1 ar2 ar3 ar4 ma1 ma2 ma3 ma4
-0.5175 0.2904 0.1663 -0.1104 -0.9923 -0.8278 0.6377 0.183
s.e. 0.3902 0.1882 0.1857 0.1127 0.5204 0.4730 0.4261 0.332

sigma^2 estimated as 377924: log likelihood = -1057.75, aic = 2133.5


> fit3<-arima(diffisiaq, order=c(3,1,4), method="ML")
Warning messages:
1: In log(s2) : NaNs produced
2: In log(s2) : NaNs produced
> fit3
Call:
arima(x = diffisiaq, order = c(3, 1, 4), method = "ML")
Coefficients:
ar1 ar2 ar3 ma1 ma2 ma3 ma4
-0.5680 0.3207 0.2385 -0.9384 -0.9335 0.6895 0.1836
s.e. 0.5058 0.2337 0.2460 0.5136 0.5336 0.4822 0.4844

sigma^2 estimated as 384981: log likelihood = -1058.34, aic = 2132.67


> fit4<-arima(diffisiaq, order=c(2,1,2), method="ML")
Warning messages:
1: In log(s2) : NaNs produced
2: In log(s2) : NaNs produced
> fit4
Call:
arima(x = diffisiaq, order = c(2, 1, 2), method = "ML")
Coefficients:
ar1 ar2 ma1 ma2
-1.1054 -0.3795 -0.1848 -0.8152
s.e. 0.1190 0.0803 0.1068 0.1062

sigma^2 estimated as 464692: log likelihood = -1067.36, aic = 2144.73


> fit5<-arima(diffisiaq, order=c(4,1,5), method="ML")
> fit5

Call:
arima(x = diffisiaq, order = c(4, 1, 5), method = "ML")

Coefficients:
ar1 ar2 ar3 ar4 ma1 ma2 ma3
ma4
-0.6811 -0.6332 -0.2360 0.2049 -0.8094 -0.1088 -0.4692 -
0.4944
s.e. 0.1236 0.1156 0.1265 0.1068 0.1362 0.1397 0.1227
0.1431
ma5
0.8826
s.e. 0.1301

sigma^2 estimated as 345716: log likelihood = -1054.18, aic = 2128.37


> fit6<-arima(diffisiaq, order=c(0,1,2), method="ML")
> fit6

Call:
arima(x = diffisiaq, order = c(0, 1, 2), method = "ML")
Coefficients:
ma1 ma2
-1.5553 0.5553
s.e. 0.1159 0.1143

sigma^2 estimated as 458389: log likelihood = -1066.95, aic = 2139.9

> fit7<-arima(diffisiaq, order=c(2,1,0), method="ML")


> fit7

Call:
arima(x = diffisiaq, order = c(2, 1, 0), method = "ML")

Coefficients:
ar1 ar2
-0.8326 -0.4546
s.e. 0.0770 0.0783

sigma^2 estimated as 754965: log likelihood = -1097.37, aic = 2200.75


> fit8<-arima(diffisiaq, order=c(2,1,1), method="ML")
> fit8

Call:
arima(x = diffisiaq, order = c(2, 1, 1), method = "ML")

Coefficients:
ar1 ar2 ma1
-0.3590 -0.2125 -1.0000
s.e. 0.0843 0.0853 0.0204

sigma^2 estimated as 472326: log likelihood = -1068.51, aic = 2145.03


> fit9<-arima(diffisiaq, order=c(1,1,2), method="ML")
> fit9

Call:
arima(x = diffisiaq, order = c(1, 1, 2), method = "ML")

Coefficients:
ar1 ma1 ma2
0.4465 -1.9649 0.9652
s.e. 0.0936 0.0895 0.0877

sigma^2 estimated as 407226: log likelihood = -1061.14, aic = 2130.28


> fit10<-arima(diffisiaq, order=c(2,1,3), method="ML")
> fit10
Call:
arima(x = diffisiaq, order = c(2, 1, 3), method = "ML")

Coefficients:
ar1 ar2 ma1 ma2 ma3
-0.4024 0.3400 -1.0951 -0.8093 0.9047
s.e. 0.1194 0.1058 NaN 0.1466 NaN

sigma^2 estimated as 387476: log likelihood = -1059.69, aic = 2131.39


> tsdiag(fit9)

> fit9residual<-resid(fit9)
> fit9residual
Jan Feb Mar Apr May
2006 -0.5249991 1137.4269883 -947.3339781 -229.1416414
2007 -632.3192147 323.8196583 -880.8554593 -367.6009079 32.7717932
2008 -219.9252633 459.1388749 292.0632748 -197.0614655 3.0507456
2009 -295.4132126 -429.6089609 -1338.0256579 522.0744242 -62.3813333
2010 271.9526726 -1014.0476472 560.7128292 476.2184001 776.5839704
2011 -834.8748340 69.7776395 466.3181869 721.2534626 1062.6869573
2012 606.9524617 -291.1073445 35.9107550 -14.4851464 555.4552464
2013 -54.2075806 302.8122656 727.6658058 -344.9377195 202.1621869
2014 -1367.9160232 -453.5036040 533.5412392 426.5542191 -807.0342635
2015 -279.4562061 10.4689032 219.7034652 -956.6968460 331.1787412
2016 -409.5399218 -747.5467853 748.9270376 -1174.0844656 -441.1280780
2017 703.8445280 1386.0005333 677.7587461 636.1393609
Jun Jul Aug Sep Oct
2006 -21.8061608 266.3998514 1427.2301172 144.0740007 -392.1638518
2007 -473.7621862 4.2430985 -353.5574889 -184.0619804 -4.5378862
2008 249.5427530 -905.5406009 -349.3057680 -409.2207183 -1984.9736043
2009 564.8282354 401.8565033 713.2634862 831.3281033 -316.8329093
2010 492.5286052 304.6581539 -161.6720259 1631.1984448 50.0690597
2011 -940.4568420 -144.3481275 -396.1554660 519.0874622 656.7282070
2012 370.7385122 404.8269784 956.4738299 569.8924133 -14.2352768
2013 563.4062106 112.8347501 -586.0787560 308.4646025 -100.2962724
2014 -150.0316882 44.2396446 -78.6472005 967.0528190 974.5712367
2015 223.4302982 -953.1369836 73.2226440 -429.0420853 67.9409620
2016 -211.2220451 -165.1367918 -126.4826981 -13.8095718 436.6371630
2017
Nov Dec
2006 -731.8832167 -858.8209417
2007 -265.1773618 709.1361492
2008 2030.6579299 -200.8575187
2009 -489.1203945 -425.2993362
2010 -279.2842193 970.4033729
2011 -653.2712602 -412.9774244
2012 -802.9547985 -560.2745947
2013 386.8726372 698.5439975
2014 -616.2910550 252.1527100
2015 315.8669477 -829.9089529
2016 -600.7634557 293.4316360
2017

> shapiro.test(fit9residual)

Shapiro-Wilk normality test

data: fit9residual
W = 0.99367, p-value = 0.8134

> Box.test(fit9residual)
Box-Pierce test
data: fit9residual
X-squared = 8.024944, df = 12, p-value = 0.8745
> Fit9ErrorsPlot<-function(fit9residual)
+ {
+ mybinsize<-IQR(fit9residual)/4
+
+ mysd<-sd(fit9residual)
+ mymin<-min(fit9residual)*5
+ mymax<-max(fit9residual)+mysd*3
+ mynorm <- rnorm(10000, mean=0, sd=mysd)
+ mymin2 <- min(mynorm)
+ mymax2 <- max(mynorm)
+ if (mymin2 < mymin) { mymin <- mymin2 }
+ if (mymax2 > mymax) { mymax <- mymax2 }
+ mybins <- seq(mymin, mymax, mybinsize)
+ hist(fit9residual, col="green", freq=FALSE, breaks=mybins)
+ myhist <- hist(mynorm, plot=FALSE, breaks=mybins)
+ points(myhist$mids, myhist$density, type="l", col="red", lwd=2)
+ }
> Fit9ErrorsPlot(fit9residual)

> fit9forecast<-forecast.Arima(fit9, h=24)


> fit9forecast
Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
May 2017 -652.853874 -1472.7656 167.0579 -1906.801 601.0929
Jun 2017 -287.281923 -1208.8802 634.3164 -1696.745 1122.1810
Jul 2017 -124.069404 -1064.5486 816.4098 -1562.408 1314.2694
Aug 2017 -51.201870 -995.3663 892.9625 -1495.177 1392.7729
Sep 2017 -18.669578 -963.5521 926.2129 -1463.743 1426.4035
Oct 2017 -4.145275 -949.1643 940.8738 -1449.427 1441.1366
Nov 2017 2.339215 -942.7042 947.3826 -1442.980 1447.6584
Dec 2017 5.234268 -939.8128 950.2813 -1440.090 1450.5590
Jan 2018 6.526788 -938.5205 951.5740 -1438.798 1451.8518
Feb 2018 7.103843 -937.9433 952.1510 -1438.221 1452.4287
Mar 2018 7.361474 -937.6856 952.4086 -1437.963 1452.6863
Apr 2018 7.476496 -937.5706 952.5236 -1437.848 1452.8013
May 2018 7.527848 -937.5193 952.5750 -1437.797 1452.8528
Jun 2018 7.550775 -937.4965 952.5981 -1437.774 1452.8759
Jul 2018 7.561010 -937.4864 952.6084 -1437.764 1452.8863
Aug 2018 7.565580 -937.4819 952.6131 -1437.760 1452.8910
Sep 2018 7.567621 -937.4800 952.6153 -1437.758 1452.8932
Oct 2018 7.568531 -937.4792 952.6163 -1437.757 1452.8943
Nov 2018 7.568938 -937.4789 952.6168 -1437.757 1452.8949
Dec 2018 7.569120 -937.4789 952.6171 -1437.757 1452.8953
Jan 2019 7.569201 -937.4789 952.6173 -1437.757 1452.8955
Feb 2019 7.569237 -937.4790 952.6175 -1437.757 1452.8958
Mar 2019 7.569253 -937.4791 952.6176 -1437.757 1452.8960
Apr 2019 7.569260 -937.4792 952.6177 -1437.758 1452.8962
> plot.forecast(fit9forecast)
