
Recursive Chain Forecasting:

A Hybrid Time Series Model


Trend/Comparative Analysis for Product A and Product B
Seasonality Analysis and Normalization
Model Validation
Forecasting Analysis

Bishal Neogi, Senior Data Scientist; Soumajyoti Mazumder, Data Scientist; Reetika Choudhary, Data Scientist; with inputs and guidance from Eron Kar, AVP, Advanced Analytics

www.blueoceanmi.com

ABSTRACT
A technology major had a few pre-existing brands in the market that it wanted to replace with a single easy-to-carry, less space-occupying product across the major U.S. electronics/technology retailers in all states. In the course of the analysis, we examined the parent product (Product A), which had three pre-existing variants, all of which the company plans to replace with a single branded product (Product B). Our methodology assesses the time-series sales data for a stipulated period using the full range of conventional tool kits. What really sets it apart is our hybrid methodology, which improves model precision over and above the standalone, well-known techniques.

OBJECTIVE
To understand the sales trend of Product A and Product B over a period of time
To determine the effect of seasonality, which acts as a key influencer of Product A (and consequently Product B)
To harmonize the data for reducing seasonal impact and carrying out an unbiased analysis over a period of time
To study the effect on the cross-sell unit volume of a product due to the change in sales volume of Product A and/or Product B
To develop a final time series model to establish the unit sales volume for Product A and Product B

DATA
Our data set comprises a product and its sub-product, referred to as Product A and Product B respectively throughout the analysis. We have considered a bivariate monthly series for our analysis, starting from July 2010 until March 2013.
Figure 1: Data Plot of Product A and Product B Sales ("Comparison of Product A and Product B sales data"; Product A in black and Product B in red, unit sales plotted against time in months)

TABLE 1
Variable  | Description
Product A | Sales
Product B | Sales

As we can see in Figure 1 above, Product A and Product B sales follow a similar pattern throughout the observed time period.

CHOICE OF MODEL
For monthly data, an additive model incorporates the underlying assumption that the difference between the values of any two distinct months remains approximately the same each year; in other words, the amplitude of the seasonal effect is the same every year. Based on this assumption, the graphical results, and the logical possibility of the extreme hypothetical situation in which sales reach zero, we have agreed to use the additive model. For a clearer picture, consider the two graphs below (Figure 2): on inspection, the additive model best fits the given data. Hence, we conclude that the additive model is the most appropriate choice.
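The comparison behind Figure 2 can be reproduced with standard tooling. Below is a minimal sketch in Python (the original analysis appears to have used different software), assuming the monthly sales are available as a pandas Series named product_b (an illustrative name): the series is decomposed under both forms and the relative spread of the residuals is compared as a simple diagnostic of which model fits better.

```python
# Minimal sketch: compare additive vs. multiplicative decomposition fits.
# Assumes the monthly sales are in a pandas Series `product_b` (illustrative name).
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

def residual_spread(series: pd.Series, model: str) -> float:
    """Decompose the series with period 12 and return the relative residual spread."""
    result = seasonal_decompose(series, model=model, period=12)
    resid = result.resid.dropna()
    # Multiplicative residuals fluctuate around 1, additive residuals around 0,
    # so normalise both to a comparable scale before comparing.
    if model == "multiplicative":
        return float(np.std(resid - 1.0))
    return float(np.std(resid) / series.mean())

# product_b = pd.read_csv("sales.csv", index_col=0, parse_dates=True)["product_b"]  # hypothetical file
# for m in ("additive", "multiplicative"):
#     print(m, residual_spread(product_b, m))
```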

Figure 2: Multiplicative and Additive Model Head-to-Head. Left panel: additive model implemented (decomposed additive fit over Product B sales, plotted against time, 2011.0 to 2013.0). Right panel: multiplicative model implemented (decomposed fit over Product B sales, plotted against time, 2011.0 to 2013.0).

APPROACH 1:
Decomposition
Trend Analysis
Seasonality Indexing and Normalization
Stationarity Check and Granger Causality
Regression

DECOMPOSITION
To proceed, we first decompose the data into its components: trend (T_t), seasonality (S_t) and irregularity (I_t). The equation below gives a concise view of our analytical modelling technique, and the graphs that follow substantiate it pictorially.

Data_t = T_t + S_t + I_t
Figure 3.1: Decomposition for Product A Sales (decomposition of additive time series: observed, trend, seasonal and random components plotted against time, 2010.5 to 2013.0)

Figure 3.2: Decomposition for Product B (decomposition of additive time series: observed, trend, seasonal and random components plotted against time, 2010.5 to 2013.0)

On observing the graphs above (Figures 3.1 and 3.2), we notice that the sales trend of both Product A and Product B follows a linearly increasing pattern up to 2012, after which the behaviour becomes roughly constant. As for seasonality, both products portray almost the same behaviour.

TREND
Next, we plot the trend components of both products together to capture permanent changes over the given time period. The trend plot in Figure 4 helps us look for an overall pattern hidden in the data and, in the long run, even helps us forecast future values.
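A sketch of how such a trend plot can be produced, assuming the monthly series are available as pandas Series named product_a and product_b (illustrative names): the trend components from an additive decomposition are overlaid on twin axes, mirroring Figure 4.

```python
# Minimal sketch: overlay the trend components of both products (cf. Figure 4).
# `product_a` and `product_b` are assumed pandas Series of monthly sales.
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

def trend_component(series, period=12):
    """Return the moving-average trend from an additive decomposition."""
    return seasonal_decompose(series, model="additive", period=period).trend

# fig, ax1 = plt.subplots()
# ax2 = ax1.twinx()                          # separate scales, as in Figure 4
# trend_component(product_a).plot(ax=ax1, color="black", label="Product A trend")
# trend_component(product_b).plot(ax=ax2, color="red", label="Product B trend")
# ax1.set_ylabel("Product A sales"); ax2.set_ylabel("Product B sales")
# plt.show()
```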

Figure 4: Product A and Product B Trend Plot ("Trend Plot of Product A and Product B Sales"; Product A sales on one axis and Product B sales on the other, plotted against time)

SEASONALITY IDENTIFICATION AND NORMALIZATION


Further, the periodic movement depicted by the seasonal index helps identify the general monthly behaviour masked in the data set (Figure 5). Considering a period of 12 months, we use the method of seasonal means to remove seasonality from the data (Figure 6).
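The seasonal-means adjustment can be sketched as follows, assuming the monthly sales sit in a pandas Series called sales with a DatetimeIndex (an illustrative name, not from the original data set): each observation has the centred mean of its calendar month subtracted from it.

```python
# Minimal sketch of the seasonal-means adjustment described above.
# `sales` is an assumed pandas Series of monthly sales with a DatetimeIndex.
import pandas as pd

def seasonally_adjust(sales: pd.Series) -> pd.Series:
    """Remove seasonality by subtracting centred monthly means (period = 12)."""
    monthly_means = sales.groupby(sales.index.month).transform("mean")
    seasonal_index = monthly_means - sales.mean()   # centred seasonal component
    return sales - seasonal_index                   # seasonally adjusted series

# adjusted = seasonally_adjust(sales)   # compare with Figure 6
```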
Figure 5: Product A Sales Seasonality Index ("Product A seasonal component plot"; the seasonal component of Product A sales plotted against time, 2010.5 to 2013.0)

Figure 6: Product A: Adjusted for Seasonality ("Plot of Adjusted Seasonal for Product A"; the seasonally adjusted series plotted against time, 2010.5 to 2013.0)

STATIONARITY CHECK
The stationarity check tells us that both the Product A and Product B series are non-stationary. Hence, to make the series stationary, we apply second-order differencing. The referred test then gives us sufficient evidence to claim that the seasonally adjusted, twice-differenced data is stationary at the 5% level of significance. After removing the non-stationarity in both Product A and Product B, we fit them into a linear regression framework.
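A minimal sketch of this check, assuming adjusted is the seasonally adjusted series from the previous step (illustrative name): an Augmented Dickey-Fuller test is applied to the twice-differenced values and compared against the 5% level.

```python
# Minimal sketch of the stationarity check: an Augmented Dickey-Fuller test on
# the seasonally adjusted, twice-differenced series.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def is_stationary(series, alpha=0.05) -> bool:
    """Reject the ADF null of a unit root at the given significance level."""
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    return pvalue < alpha

# diff2 = np.diff(adjusted.values, n=2)        # second-order differencing
# print(is_stationary(diff2))                  # expected: True at the 5% level
```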

CAUSALITY
Since we are modelling bivariate time series data, we have used the Granger causality test to check the causal dependence between the variables. The test gives us the following insights: on checking causality between Product A and Product B, we see that Product A is Granger-causal to Product B up to order 3, and the results stand valid up to a 47% level of significance. However, the reverse does not hold true: on checking causality between Product B and Product A, we see that Product B does not Granger-cause Product A at the 5% level of significance.
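A sketch of the causality check using statsmodels, assuming diff2_a and diff2_b are the stationarized Product A and Product B series (illustrative names); note that grangercausalitytests evaluates whether the second column Granger-causes the first.

```python
# Minimal sketch of the Granger causality check up to lag 3.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# grangercausalitytests tests whether the SECOND column Granger-causes the FIRST.
# Does Product A Granger-cause Product B?
# data = np.column_stack([diff2_b, diff2_a])
# results = grangercausalitytests(data, maxlag=3)
# for lag, (tests, _) in results.items():
#     print(lag, "ssr F-test p-value:", tests["ssr_ftest"][1])
```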

LINEAR REGRESSION
To see whether the data fits the linear regression framework well, we plot the adjusted data and obtain the graph shown below in Figure 7:

Figure 7: Linear Regression of ∇²Product A on ∇²Product B (scatter of the twice-differenced series with the fitted line)

With a goodness of fit of 0.93, we conclude that the data is well described by the linear model, and the derived model can be represented as:

∇²Product A = α + β · ∇²Product B

where ∇² denotes the second difference and α and β are the estimated intercept and slope.
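A minimal sketch of this fit, assuming diff2_a and diff2_b hold the twice-differenced series (illustrative names): an ordinary least squares regression with an intercept recovers the α and β above.

```python
# Minimal sketch of the linear regression on the twice-differenced series.
# `diff2_a` and `diff2_b` are assumed 1-D numpy arrays of the stationarized sales.
import statsmodels.api as sm

# X = sm.add_constant(diff2_b)          # alpha (intercept) + beta * diff2_b
# model = sm.OLS(diff2_a, X).fit()
# print(model.params)                   # estimated alpha and beta
# print(model.rsquared)                 # goodness of fit of the LM approach
```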

APPROACH 2:
MARS on complete data
Stationarity check
Fitting MARS on the normalized data

MULTIVARIATE ADAPTIVE REGRESSION SPLINES (MARS)


First, we use MARS on the raw data (Figure 8), where the fit has an R-squared of 70%, which is quite good but clearly not better than the LM fit. Next, we improve on this by stationarizing the data and then implementing MARS on it (Figure 8). On doing so, we derive a better fit, with an accuracy level of almost 95%. The model that we derive using MARS as our methodology is given below:

∇Product A = β₀ − β₁ · h(C₁ − ∇Product B) + β₂ · h(∇Product B − C₂)

where h(·) denotes the hinge function max(0, ·) and C₁, C₂ are the knots selected by MARS.
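The hinge-function form above can be illustrated with a small sketch that fits the coefficients by least squares at fixed, hypothetical knots C1 and C2; a genuine MARS fit would also search for those knots (for example via the py-earth package), so this is only a sketch of the model form, not of the original estimation.

```python
# Minimal sketch of the MARS-style hinge model, fitted by ordinary least squares
# at fixed (hypothetical) knots c1 and c2.
import numpy as np

def hinge(x: np.ndarray) -> np.ndarray:
    """MARS hinge function h(x) = max(0, x)."""
    return np.maximum(0.0, x)

def fit_hinge_model(diff_a: np.ndarray, diff_b: np.ndarray, c1: float, c2: float):
    """Fit diff_a ~ b0 + b1*h(c1 - diff_b) + b2*h(diff_b - c2) by least squares."""
    X = np.column_stack([np.ones_like(diff_b), hinge(c1 - diff_b), hinge(diff_b - c2)])
    coef, *_ = np.linalg.lstsq(X, diff_a, rcond=None)
    return coef                      # [b0, b1, b2]

# coef = fit_hinge_model(diff_a, diff_b, c1=-5000.0, c2=5000.0)   # knot values are illustrative
```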

Figure 8: MARS Implementation. Left panel: plot of the original data and the MARS fit upon it (Product A sales against index). Right panel: plot of the stationarized data and the MARS fit upon it (differenced Product A against index).

CHOICE OF FINAL MODEL


It can clearly be observed that MARS gives us better results but simultaneously introduces more complexity into the process. Had the improvement been significantly large, the use of MARS would have been justified; in our case, however, the improvement is only about 1% over the LM fit, so MARS cannot be claimed to be the appropriate method. Hence, we conclude that Approach 1, i.e. the LM, is the better choice when weighed on the cost-complexity-efficiency criteria. Thus our final model is:

∇²Product A = α + β · ∇²Product B

MODEL VALIDATION
To validate our model selection, we proceed to model validation. One reason why model validation is necessary is that a high R-squared alone is not sufficient to conclude goodness of fit. Hence, we conduct a few tests, such as residual analysis, and plot the normal Q-Q plot (Figure 10) to analyze the goodness of fit of the regression. We check whether the residuals are random and whether the model's predictive performance weakens substantially when we infuse new variables into the estimation process. Apart from this, we also adopt standard methods to detect outliers among the residuals. If you observe the residual plot below (Figure 9), you will see that the residuals behave in a random fashion around the line y = 0, which suggests that our model fits the data well. Further, autocorrelation tests give evidence at the 87% level that no autocorrelation exists between the residuals and time.
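A sketch of these diagnostics, assuming model is the fitted OLS object from the earlier sketch (illustrative name): residuals against fitted values, a normal Q-Q plot, and standard autocorrelation statistics.

```python
# Minimal sketch of the residual diagnostics: randomness around zero, a normal
# Q-Q plot, and autocorrelation checks.
import matplotlib.pyplot as plt
import scipy.stats as stats
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_ljungbox

# resid = model.resid
# plt.scatter(model.fittedvalues, resid); plt.axhline(0)      # cf. Figure 9
# stats.probplot(resid, dist="norm", plot=plt)                # cf. Figure 10
# print("Durbin-Watson:", durbin_watson(resid))               # ~2 suggests no autocorrelation
# print(acorr_ljungbox(resid, lags=[12]))                     # Ljung-Box test p-value
```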

Figure 9: Residual Plot (residual values plotted against predicted values, scattered around y = 0)

Figure 10: Normal Q-Q Plot (sample quantiles plotted against theoretical quantiles)

FORECASTING
Using our final model, shown again below, we now move ahead with forecasting sales for Product A, assuming that the sales for Product B are given and that the seasonal effect is known to us.

∇²Product A = α + β · ∇²Product B


We fit an ARIMA model on the residuals to obtain the forecasted value. For example, if we have to forecast for t = n+2, with the period taken to be T (say) and the known value of Product B being X1, then we use the equation below:

Product A_(n+2) = α + β · (Product B_(n+2) − 2·Product B_(n+1) + Product B_n) + 2·Product A_(n+1) − Product A_n + (Product A seasonal component for period T at n+2)

We have named the above-built hybrid methodology Recursive Chain Forecasting. One limitation of this method is that it cannot be used for point estimation, as the chain simply cannot be broken; hence it is necessary to carry out all the preceding forecast calculations beforehand.
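A minimal sketch of the Recursive Chain Forecast, assuming α, β and the monthly seasonal components come from the fitted model and the seasonal-means step (all names are illustrative); the ARIMA correction on the residuals mentioned above is omitted here for brevity. Each forecast feeds the next step, which is why the chain cannot be broken for point estimation.

```python
# Minimal sketch of the Recursive Chain Forecast: the second-difference
# regression is unrolled one step at a time, so every forecast feeds the next.
import numpy as np

def recursive_chain_forecast(history_a, future_b, alpha, beta, seasonal, last_b):
    """Forecast Product A given known future Product B values.

    history_a : last two observed (seasonally adjusted) Product A values [a_{n-1}, a_n]
    future_b  : known Product B values over the forecast horizon [b_{n+1}, b_{n+2}, ...]
    last_b    : last two observed Product B values [b_{n-1}, b_n]
    seasonal  : Product A seasonal component for each forecast month
    """
    a_prev2, a_prev1 = history_a
    b_prev2, b_prev1 = last_b
    forecasts = []
    for t, b_t in enumerate(future_b):
        diff2_b = b_t - 2.0 * b_prev1 + b_prev2
        a_t = alpha + beta * diff2_b + 2.0 * a_prev1 - a_prev2 + seasonal[t]
        forecasts.append(a_t)
        # advance the chain: each forecast becomes an input to the next step
        a_prev2, a_prev1 = a_prev1, a_t
        b_prev2, b_prev1 = b_prev1, b_t
    return np.array(forecasts)
```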

TECHNICAL ASPECTS & ANOMALIES


If we perform a linear regression on the basic variables (i.e. Product A regressed on Product B), we obtain a fairly standard R-squared; however, a deep dive into the data clearly shows signs of non-stationarity, so a spurious relationship between the variables may be observed. To overcome the spuriousness, we find both series to be I(2) and apply cointegration. Furthermore, our initial regression showed a Durbin-Watson (DW) statistic of 1.0491 < 1.37 = dL, on account of which we reject the hypothesis of no positive autocorrelation. After stationarizing both basic variables we obtain DW = 2.2614 > 1.501 = dU, and hence infer that there is no positive autocorrelation in the latter specification. Post cointegrating from I(2) to I(0), we run linear regression for our final model. The remaining anomaly is that the DW statistic for the final model shows a very small amount of negative autocorrelation when both series are cointegrated.
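A sketch of these diagnostics, assuming product_a and product_b are the raw monthly series (illustrative names): the Durbin-Watson statistic of the naive levels regression, and an Engle-Granger cointegration test between the two series.

```python
# Minimal sketch of the diagnostics above: Durbin-Watson on the naive levels
# regression, plus an Engle-Granger cointegration test.
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.tsa.stattools import coint

# naive = sm.OLS(product_a, sm.add_constant(product_b)).fit()
# print("DW of levels regression:", durbin_watson(naive.resid))   # ~1.05 signals positive autocorrelation
# t_stat, pvalue, _ = coint(product_a, product_b)                 # Engle-Granger test
# print("Cointegration p-value:", pvalue)
```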

KEY BUSINESS INSIGHTS


Seasonality analysis for Product A and Product B: seasonality affects most of the commodity sale patterns within a given geography in a uniform way.
Forecasting model for Product A unit sales: it includes how Product A sales affect Product B unit sales in an aggregative manner in the future.
Product A and Product B association: Granger causality, Recursive Chain Forecasting, and residual analysis inferentially determine that Product B drives Product A sales. It is evident that in some periods or quarters Product A and Product B behave quite alike.

For more information, visit www.blueoceanmi.com. www.facebook.com/blueoceanmi www.twitter.com/blueoceanmi

Contact: DURJOY PATRANABISH Senior Vice President - Analytics e: durjoy.p@blueoceanmi.com

Vrindavan Tech Village Building 2A, Ground Floor East Tower Sarjapur Outer Ring Road Bangalore, 560 037 INDIA p: 91.80.41785800

4835 E Cactus Road, #300 Scottsdale, AZ 85254 USA p: 602.441.2474 11280 Northup Way, #E200 Bellevue, WA 98005 USA

Copyright 2013 blueocean market intelligence, All Rights Reserved.
