
FACULTY OF COMPUTER & MATHEMATICAL SCIENCES

TIME SERIES ANALYSIS AND FORECASTING (STA570)


ASSESSMENT 3
FORECASTING THE MARKET STOCK PRICE OF
PADINI

Prepared by:
Name Student id
ESSHADIEQ DANIELL HAYYQAL 2020605302
SHAH B S MAZLAN

GROUP: CS2484A1
SUBMITTED TO:
DATE OF SUBMISSION: 4TH JULY 2022
TABLE OF CONTENTS
INTRODUCTION
RESEARCH OBJECTIVE
DATA DESCRIPTION
METHOD OF ANALYSIS
ANALYSIS AND RESULT
CONCLUSION
REFERENCES
APPENDIX
1.0 INTRODUCTION

Padini Holdings Bhd began operating in Malaysia's clothing sector, manufacturing, trading,
and distributing clothing on demand for retailers and distributors. The PADINI Concept Store offers
"one-stop shopping" for all PADINI Holdings brands. The first such store in Malaysia was situated in
the Johor Bahru City Square shopping centre in the Malaysian city of Johor Bahru.

PADINI became one of the primary forces in Malaysia's textile and garment industry once it
began operating there. PADINI also distributes and sells its own fashion lines through 190 freestanding
boutiques, franchise locations, and consignment counters.

PADINI primarily sells formal and fashionable clothing and accessories, and serves as
the umbrella organisation for several brands. Each brand stands for a particular fashion philosophy, and
each philosophy covers a wide range of products geared towards a particular consumer. The
brands are firmly associated with true value, which encompasses pricing, quality, and practicality.

PADINI comprises eight distinct brands: PADINI, PADINI Authentics, PDI, P & CO, Seed,
Miki, Vincci, and Vincci Accessories. Together, the brands target a diverse consumer base across both
sexes and all age groups. Seed Café has opened a new horizon in the company's culinary operations,
while Vincci and Vincci Accessories focus on the versatile tastes of women consumers in shoes, bags,
and accessories.

A histogram of PADINI's net profit from 2004 to 2010 shows that net profit increased
significantly over the period, rising from 1 percent to 12.2 percent. According to PADINI, the
demand for its products is attributed to strong branding, strategic retail locations, and enhanced
efficiency in warehousing, inventory management, design, and product mix; the company also
predicted average yearly net profit growth of 10% in 2011 and 2012.

The corporation must project its stock price for the coming days, months, and years. Stock
price predictions are crucial because they may help boost the company's earnings and revenue. If
the stock price is predicted to fall, they also help the company plan ahead and consider other
options.

2.0 RESEARCH OBJECTIVE

 To study the pattern of the observations in the series, which reveals the underlying
structure of the PADINI stock price.
 To determine the most suitable forecasting model to fit the data series.
 To produce forecasts of Padini Holdings Bhd's market stock price.

3.0 DATA DESCRIPTION


Our time series data were obtained mostly from the investing.com and malaysiastock.biz
websites, which contain all the relevant stock data. For our project, we chose Padini Holdings
Bhd as our data series, taking roughly six years of stock prices from 2017 until 2022. For our
analysis, we use weekly data from 24th April 2017 until 15th April 2022, a total of 261 weekly
observations. The dependent variable is the stock price of Padini Holdings Bhd, while the
independent variable is time, at the weekly intervals stated above.

Based on the calculations, the estimation part contains 196 observations, obtained from
April 2017 until January 2021, while the evaluation part consists of 65 observations, obtained
from January 2021 until April 2022. The evaluation part is crucial because it is used to identify
the model with the smallest error, i.e., the best mathematical model for forecasting Padini
Holdings Bhd's market stock price over the next six years.
4.0 METHOD OF ANALYSIS

In this study, the data set has been processed using three univariate methods: the Naïve
with Trend model, the single exponential smoothing technique, and the double exponential
smoothing technique; the Box-Jenkins (ARIMA) methodology is also applied for comparison.
Two error measures are used: Mean Squared Error (MSE) and Mean Absolute Percentage
Error (MAPE).

Naïve with Trend Model

Commonly, the Naïve model is altered to account for trend. Organizations frequently use
this model in their operations. Its applicability to even reasonably short time series is one of the
factors contributing to its appeal: it sidesteps the problem of inadequate data, which affects many
companies and would rule out more complex modelling approaches. Under this model, each
future prediction equals the most recent observed value multiplied by the most recent growth
rate. The one-step-ahead forecast made using this model, where y(t) is the final observation in
the series, is presented as follows:

F(t+1) = y(t) · ( y(t) / y(t−1) )
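The formula above translates directly into a few lines of Python. This is a minimal sketch; the price series used is illustrative, not the actual Padini data.

```python
# Naive with Trend: F(t+1) = y(t) * (y(t) / y(t-1)).
# Each forecast scales the latest value by the latest observed growth rate.
def naive_with_trend(y):
    """One-step-ahead forecasts for every period from the third onward."""
    return [y[t] * (y[t] / y[t - 1]) for t in range(1, len(y))]

prices = [3.00, 3.10, 3.05, 3.20]   # illustrative weekly close prices
forecasts = naive_with_trend(prices)
# forecasts[-1] is the forecast for the period after the last observation
```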

Single Exponential Smoothing Model


The single exponential smoothing model is the simplest member of the family of
exponential smoothing techniques. The model requires just a single parameter, the smoothing
constant α, to produce the fitted values and hence the forecasts. In single exponential
smoothing, the forecast for the next and all subsequent periods is determined by
adjusting the current period's forecast by a fraction of the difference between the current
forecast and the current actual value. This is described in terms of errors/residuals. Thus,
if recent forecasts proved accurate, it seems sensible to base subsequent forecasts on
those estimates; conversely, if recent predictions suffered large errors, new forecasts will
likewise take this into account.

The general equation for single exponentially smoothed statistics is given as:

F(t+1) = α·y(t) + (1 − α)·F(t)
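As a minimal sketch, the recursion F(t+1) = α·y(t) + (1 − α)·F(t) can be implemented as follows; the series is illustrative, and the initial forecast is set to the first observation, a common convention.

```python
# Single exponential smoothing: each new forecast adjusts the previous one
# by a fraction alpha of the latest forecast error.
def ses_forecast(y, alpha):
    f = y[0]                      # initialise with the first observation
    for obs in y:
        f = alpha * obs + (1 - alpha) * f
    return f                      # forecast for the next, unobserved period

prices = [3.00, 3.10, 3.05, 3.20]  # illustrative close prices
next_forecast = ses_forecast(prices, alpha=0.3)
```

With α = 1, the recursion collapses to the plain naive forecast (the last observed value), which matches the α = 1 result reported later for this data set.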
Double Exponential Smoothing (DES) (Brown’s method)

For series that display a linear trend, this approach is helpful. This method's key benefit is its
capacity to produce multiple-ahead forecasts. There are four primary equations at play, namely:
S′(t) = α·y(t) + (1 − α)·S′(t−1)
S″(t) = α·S′(t) + (1 − α)·S″(t−1)
a(t) = 2·S′(t) − S″(t)
b(t) = [α / (1 − α)] · (S′(t) − S″(t))

The first equation gives the singly (exponentially) smoothed series, whereas the second smooths
it a second time. Forecasts for the m-step-ahead horizon are then calculated using the formula:

F(t+m) = a(t) + b(t)·m
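Brown's equations above translate directly into code. A minimal sketch, with an illustrative trending series and both smoothed series initialised at the first observation (one common convention):

```python
# Brown's double exponential smoothing:
#   S1(t) = a*y(t) + (1-a)*S1(t-1)        singly smoothed
#   S2(t) = a*S1(t) + (1-a)*S2(t-1)       doubly smoothed
#   level a(t) = 2*S1(t) - S2(t),  trend b(t) = a/(1-a) * (S1(t) - S2(t))
#   F(t+m) = a(t) + b(t)*m
def brown_des(y, alpha, m=1):
    s1 = s2 = y[0]                        # initialise both smoothed series at y(0)
    for obs in y[1:]:
        s1 = alpha * obs + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + trend * m

prices = [3.00, 3.05, 3.12, 3.18, 3.25]   # illustrative trending series
print(brown_des(prices, alpha=0.4, m=2))  # two-step-ahead forecast
```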

Mean Squared Error (MSE)


To determine if a model is fit for a certain set of data, the standard error measure criterion is
utilised. This metric is frequently used to assess how well a model forecasts the future. Its
mathematical simplicity is its main strength. It is simple to comprehend and compute, and when
applied outside of samples, it typically meets the within-sample requirement (Chatfield, 1998). It
is regarded as the most appropriate metric to ascertain whether strategies avoid huge errors
because it has a tendency to penalise large forecast errors more harshly than other typical
accuracy measures. In other words, the value of MSE would be greatly impacted by the
incidence of a huge error. The Mean Squared Error Equation is as follows:

MSE = (1/n) · Σ (from i = 1 to n) ( Y(i) − Ŷ(i) )²
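Computing the MSE is straightforward; a minimal sketch with illustrative values:

```python
# Mean Squared Error: the average of squared forecast errors. Because the
# errors are squared, large errors are penalised disproportionately.
def mse(actual, fitted):
    return sum((a - f) ** 2 for a, f in zip(actual, fitted)) / len(actual)

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # errors 0, 0, -2 -> 4/3
```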

Mean Absolute Percentage Error (MAPE)

The mean absolute percentage error may be the most popular unit-free measure (Armstrong and
Collopy, 1992). Measured over a series, MAPE is denoted as follows:

MAPE = (1/n) · Σ (from t = 1 to n) | ( V(t) − P(t) ) / V(t) | × 100

Its relevance stems from the fact that it is valid only for ratio-scaled data, i.e., data with a
meaningful zero. When the actual observations are near zero, such as during a period of minimal
or zero growth, MAPE can explode for large forecast errors. Simple percentage metrics are
therefore not appropriate when the denominator is tiny, since they tend to greatly exaggerate
forecasting inaccuracy. Additionally, percentage metrics do not treat overestimation and
underestimation errors equally.
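The MAPE formula above, sketched in Python with illustrative values:

```python
# Mean Absolute Percentage Error: average absolute error as a percentage of
# the actual value. Explosive when actual values V(t) are near zero.
def mape(actual, predicted):
    n = len(actual)
    return 100.0 / n * sum(abs((v - p) / v) for v, p in zip(actual, predicted))

print(mape([100.0, 200.0], [110.0, 180.0]))  # (|-10|/100 + |20|/200)/2 x 100 = 10.0
```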

Box-Jenkins Method

Typically, sample statistics are used to estimate the Box-Jenkins models; however, these sample
statistics must be verified to confirm that they are accurate estimates of the underlying
population parameter values. The validation process in this instance consists of the statistical
validation of residual diagnostics, parameter validation, and model validation. However, the
goals of these three validation methods are remarkably similar. They are all meant to make sure
that the residuals adhere to the presumptions of a stationary univariate process, in which case
they are taken to be normally distributed, randomly distributed, and independently distributed,
with a mean of zero and a variance that satisfies the white noise specification.

Autoregressive Integrated Moving Average (ARIMA)

The Box-Jenkins approach fits a type of autoregressive integrated moving average (ARIMA)
model that explains a dependent variable in terms of its own past values and past errors. Instead
of using actual values, the model looks at differences between values in the series to forecast
future securities or financial market movements.

An ARIMA model can be understood by outlining each of its components as follows:

- Autoregression (AR): refers to a model where a variable is changing and regressing on its
own lagged, or prior, values. The variable's current value is calculated as a function of its
prior values plus an error term. In mathematics, it is expressed as,
Y(t) = μ + φ1·Y(t−1) + φ2·Y(t−2) + … + φp·Y(t−p) + ε(t)

- Integrated (I): indicates the varying of raw observations to enable the time series to
become stationary, i.e., the replacement of the data values with the difference between
the data values and the preceding values.
- Moving average (MA): incorporates the relationship between a residual error from a
moving average model applied to lagged observations and an observation. Instead of
using the values of the series themselves, the moving average model connects the current
values of time series to random errors that have happened in earlier periods. The formula
for the moving average model is:
Y(t) = μ + ε(t) − θ1·ε(t−1) − θ2·ε(t−2) − … − θq·ε(t−q)
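To make the AR, I, and MA components concrete, here is a hand-rolled sketch of an ARIMA(1,1,0)-style forecast: first-difference the series (the "I" step), estimate the AR(1) coefficient on the differences by least squares, then forecast one step ahead and integrate back to levels. This is a simplified illustration with made-up prices; the report's models were estimated in EViews by maximum likelihood and also include MA terms.

```python
def arima_110_forecast(y):
    # "I": first differencing removes the stochastic trend
    d = [y[t] - y[t - 1] for t in range(1, len(y))]
    # "AR": regress each difference on its predecessor (OLS, no intercept)
    x, z = d[:-1], d[1:]
    phi = sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)
    # forecast the next difference, then undo the differencing
    return y[-1] + phi * d[-1]

prices = [3.00, 3.10, 3.05, 3.20, 3.15, 3.25]   # illustrative close prices
print(arima_110_forecast(prices))
```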
A. Model Identification

STEP 1: INITIAL DATA INVESTIGATION

[Figure: time plot of the weekly close price (Close), 4/24/17 – 4/15/22, followed by the ACF + PACF correlogram below]
Date: 06/30/22 Time: 01:39
Sample: 4/24/2017 4/15/2022
Included observations: 261
Autocorrelation Partial Correlation AC PAC Q-Stat Prob

1 0.986 0.986 256.68 0.000


2 0.970 -0.092 505.89 0.000
3 0.951 -0.074 746.70 0.000
4 0.932 -0.021 978.90 0.000
5 0.915 0.061 1203.5 0.000
6 0.898 -0.009 1420.6 0.000
7 0.881 -0.024 1630.4 0.000
8 0.864 0.004 1833.0 0.000
9 0.847 -0.016 2028.5 0.000
10 0.830 -0.017 2216.9 0.000
11 0.811 -0.064 2397.5 0.000
12 0.794 0.048 2571.1 0.000
13 0.777 0.021 2738.2 0.000
14 0.760 -0.023 2898.9 0.000
15 0.744 -0.016 3053.2 0.000
16 0.726 -0.058 3200.8 0.000
17 0.707 -0.019 3341.5 0.000
18 0.687 -0.057 3474.8 0.000
19 0.669 0.085 3601.9 0.000
20 0.656 0.131 3724.3 0.000
21 0.642 -0.017 3842.3 0.000
22 0.627 -0.128 3955.4 0.000
23 0.614 0.082 4064.2 0.000
24 0.600 -0.025 4168.5 0.000
25 0.588 0.081 4269.1 0.000
26 0.576 -0.055 4365.8 0.000
27 0.565 0.087 4459.6 0.000
28 0.554 -0.055 4550.1 0.000
29 0.546 0.096 4638.4 0.000
30 0.542 0.094 4725.6 0.000
31 0.536 -0.048 4811.4 0.000
32 0.528 -0.098 4895.1 0.000
33 0.521 0.014 4976.7 0.000
34 0.510 -0.082 5055.4 0.000
35 0.501 0.026 5131.5 0.000
36 0.492 0.009 5205.3 0.000
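The AC column in the correlogram above is the sample autocorrelation function. A minimal pure-Python sketch of how each lag-k value is computed:

```python
# Sample autocorrelation at lag k: covariance of the series with its lag-k
# shifted copy, divided by the variance of the series.
def acf(y, k):
    n = len(y)
    mean = sum(y) / n
    c0 = sum((v - mean) ** 2 for v in y)
    ck = sum((y[t] - mean) * (y[t + k] - mean) for t in range(n - k))
    return ck / c0

trending = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # a trending series: ACF decays slowly
print(acf(trending, 1))  # 0.5
```

For a non-stationary series like the close price, the ACF starts near 1 and dies out only very slowly, exactly as in the table above.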

NOT STATIONARY — the ACF decays very slowly, indicating a non-stationary series.
Null Hypothesis: CLOSE has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on SIC, maxlag=15)

t-Statistic Prob.*

Augmented Dickey-Fuller test statistic -1.337708 0.6124


Test critical values: 1% level -3.455387
5% level -2.872455
10% level -2.572660

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation


Dependent Variable: D(CLOSE)
Method: Least Squares
Date: 06/30/22 Time: 01:42
Sample (adjusted): 5/01/2017 4/15/2022
Included observations: 260 after adjustments

Variable Coefficient Std. Error t-Statistic Prob.

CLOSE(-1) -0.013615 0.010178 -1.337708 0.1822


C 0.050126 0.038660 1.296591 0.1959

R-squared 0.006888 Mean dependent var 0.000462


Adjusted R-squared 0.003039 S.D. dependent var 0.174085
S.E. of regression 0.173820 Akaike info criterion -0.653927
Sum squared resid 7.795079 Schwarz criterion -0.626537
Log likelihood 87.01056 Hannan-Quinn criter. -0.642916
F-statistic 1.789462 Durbin-Watson stat 1.800143
Prob(F-statistic) 0.182170

NOT STATIONARY: the p-value of the unit root test (0.6124) is greater than α = 0.05, so the null hypothesis of a unit root cannot be rejected.

STEP 2 : PERFORMING THE FIRST DIFFERENCING


Date: 06/30/22 Time: 01:49
Sample (adjusted): 5/01/2017 4/15/2022
Included observations: 260 after adjustments
Autocorrelation Partial Correlation AC PAC Q-Stat Prob

1 0.093 0.093 2.2868 0.130


2 0.084 0.076 4.1344 0.127
3 0.019 0.005 4.2349 0.237
4 -0.064 -0.074 5.3296 0.255
5 0.002 0.012 5.3304 0.377
6 0.007 0.017 5.3438 0.501
7 -0.018 -0.020 5.4323 0.607
8 0.008 0.005 5.4516 0.708
9 0.000 0.003 5.4516 0.793
10 0.053 0.055 6.2124 0.797
11 -0.051 -0.065 6.9230 0.805
12 -0.022 -0.020 7.0502 0.854
13 -0.005 0.007 7.0583 0.899
14 0.009 0.022 7.0794 0.932
15 0.061 0.051 8.1017 0.920
16 0.036 0.020 8.4600 0.934
17 0.050 0.041 9.1669 0.935
18 -0.088 -0.106 11.338 0.879
19 -0.127 -0.117 15.931 0.662
20 -0.035 -0.000 16.280 0.699
21 0.078 0.125 18.009 0.648
22 -0.061 -0.087 19.086 0.640
23 0.045 0.021 19.656 0.663
24 -0.068 -0.068 21.004 0.639
25 0.011 0.031 21.040 0.690
26 -0.085 -0.101 23.145 0.625
27 0.041 0.068 23.633 0.651
28 -0.087 -0.072 25.871 0.580
29 -0.139 -0.135 31.566 0.339
30 0.040 0.050 32.038 0.366
31 0.085 0.101 34.193 0.317
32 0.002 -0.012 34.194 0.363
33 0.125 0.091 38.883 0.222
34 -0.054 -0.045 39.762 0.229
35 -0.027 -0.022 39.986 0.258
36 0.033 0.039 40.319 0.285

STATIONARY — no significant autocorrelation remains after first differencing.
Null Hypothesis: D(CLOSE) has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on SIC, maxlag=15)

t-Statistic Prob.*

Augmented Dickey-Fuller test statistic -14.60934 0.0000


Test critical values: 1% level -3.455486
5% level -2.872499
10% level -2.572684

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation


Dependent Variable: D(CLOSE,2)
Method: Least Squares
Date: 06/30/22 Time: 01:50
Sample (adjusted): 5/08/2017 4/15/2022
Included observations: 259 after adjustments

Variable Coefficient Std. Error t-Statistic Prob.

D(CLOSE(-1)) -0.906755 0.062067 -14.60934 0.0000


C 3.40E-05 0.010805 0.003148 0.9975

R-squared 0.453695 Mean dependent var -0.000386


Adjusted R-squared 0.451569 S.D. dependent var 0.234806
S.E. of regression 0.173888 Akaike info criterion -0.653114
Sum squared resid 7.770954 Schwarz criterion -0.625648
Log likelihood 86.57828 Hannan-Quinn criter. -0.642071
F-statistic 213.4329 Durbin-Watson stat 2.015400
Prob(F-statistic) 0.000000

CONFIRMED: THE DIFFERENCED SERIES IS STATIONARY (ADF p-value = 0.0000)

STEP 3 : MODEL IDENTIFICATION

- THE FIRST DIFFERENCE HAS BEEN TAKEN, SO d = 1. CANDIDATE MODELS:

- ARIMA(1,1,1)

- ARIMA(2,1,1)

- ARIMA(1,1,2)

- ARIMA(2,1,2)

- ARIMA(1,1,3)
B. MODEL ESTIMATION AND VALIDATION

ARIMA(1,1,1)
Dependent Variable: D(CLOSE)
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 06/30/22 Time: 02:01
Sample: 5/01/2017 1/18/2021
Included observations: 195
Convergence achieved after 35 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

AR(1) 0.487679 0.618251 0.788804 0.4312


MA(1) -0.386204 0.634922 -0.608269 0.5437
SIGMASQ 0.037035 0.002514 14.72994 0.0000

R-squared 0.013254 Mean dependent var -0.002718


Adjusted R-squared 0.002976 S.D. dependent var 0.194232
S.E. of regression 0.193943 Akaike info criterion -0.427161
Sum squared resid 7.221853 Schwarz criterion -0.376807
Log likelihood 44.64818 Hannan-Quinn criter. -0.406773
Durbin-Watson stat 2.016884

Inverted AR Roots .49


Inverted MA Roots .39

EQUATION: ΔY(t) = 0.4877·ΔY(t−1) + ε(t) − 0.3862·ε(t−1)

ELABORATION: Neither the AR(1) term (p = 0.4312) nor the MA(1) term (p = 0.5437) is significant at α = 0.05.
ARIMA(2,1,1)
Dependent Variable: D(CLOSE)
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 06/30/22 Time: 02:04
Sample: 5/01/2017 1/18/2021
Included observations: 195
Convergence achieved after 35 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

AR(1) 0.160152 0.928920 0.172406 0.8633


AR(2) 0.068188 0.126146 0.540548 0.5894
MA(1) -0.069772 0.936935 -0.074469 0.9407
SIGMASQ 0.036962 0.002565 14.41245 0.0000

R-squared 0.015201 Mean dependent var -0.002718


Adjusted R-squared -0.000267 S.D. dependent var 0.194232
S.E. of regression 0.194258 Akaike info criterion -0.418852
Sum squared resid 7.207606 Schwarz criterion -0.351714
Log likelihood 44.83808 Hannan-Quinn criter. -0.391669
Durbin-Watson stat 1.996707

Inverted AR Roots .35 -.19


Inverted MA Roots .07

EQUATION: ΔY(t) = 0.1602·ΔY(t−1) + 0.0682·ΔY(t−2) + ε(t) − 0.0698·ε(t−1)

ELABORATION: None of the AR or MA terms is significant at α = 0.05 (all p-values exceed 0.58).
ARIMA(1,1,2)
Dependent Variable: D(CLOSE)
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 06/30/22 Time: 02:19
Sample: 5/01/2017 1/18/2021
Included observations: 195
Convergence achieved after 33 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

AR(1) 0.170865 0.715435 0.238827 0.8115


MA(1) -0.082378 0.720070 -0.114402 0.9090
MA(2) 0.080688 0.106779 0.755655 0.4508
SIGMASQ 0.036930 0.002560 14.42733 0.0000

R-squared 0.016063 Mean dependent var -0.002718


Adjusted R-squared 0.000608 S.D. dependent var 0.194232
S.E. of regression 0.194173 Akaike info criterion -0.419708
Sum squared resid 7.201298 Schwarz criterion -0.352569
Log likelihood 44.92149 Hannan-Quinn criter. -0.392524
Durbin-Watson stat 1.994048

Inverted AR Roots .17


Inverted MA Roots .04+.28i .04-.28i

EQUATION: ΔY(t) = 0.1709·ΔY(t−1) + ε(t) − 0.0824·ε(t−1) + 0.0807·ε(t−2)

ELABORATION: None of the AR or MA terms is significant at α = 0.05.
ARIMA(2,1,2)
Dependent Variable: D(CLOSE)
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 06/30/22 Time: 02:21
Sample: 5/01/2017 1/18/2021
Included observations: 195
Convergence not achieved after 500 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

AR(1) -0.334527 0.054448 -6.144019 0.0000


AR(2) -0.898571 0.047848 -18.77959 0.0000
MA(1) 0.331856 0.060374 5.496648 0.0000
MA(2) 0.998327 0.331186 3.014396 0.0029
SIGMASQ 0.035483 0.011596 3.059930 0.0025

R-squared 0.054621 Mean dependent var -0.002718


Adjusted R-squared 0.034718 S.D. dependent var 0.194232
S.E. of regression 0.190831 Akaike info criterion -0.432111
Sum squared resid 6.919095 Schwarz criterion -0.348188
Log likelihood 47.13084 Hannan-Quinn criter. -0.398132
Durbin-Watson stat 1.784891

Inverted AR Roots -.17-.93i -.17+.93i


Inverted MA Roots -.17+.99i -.17-.99i

EQUATION: ΔY(t) = −0.3345·ΔY(t−1) − 0.8986·ΔY(t−2) + ε(t) + 0.3319·ε(t−1) + 0.9983·ε(t−2)

ELABORATION: All coefficients appear significant, but convergence was not achieved after 500 iterations and the inverted MA roots lie essentially on the unit circle, so the estimates are unreliable.
ARIMA(1,1,3)
Dependent Variable: D(CLOSE)
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 06/30/22 Time: 02:25
Sample: 5/01/2017 1/18/2021
Included observations: 195
Convergence achieved after 44 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

AR(1) -0.558696 0.690329 -0.809318 0.4193


MA(1) 0.655526 0.675248 0.970792 0.3329
MA(2) 0.149291 0.100153 1.490635 0.1377
MA(3) 0.110188 0.076824 1.434284 0.1531
SIGMASQ 0.036760 0.002620 14.02906 0.0000

R-squared 0.020592 Mean dependent var -0.002718


Adjusted R-squared -0.000028 S.D. dependent var 0.194232
S.E. of regression 0.194235 Akaike info criterion -0.413977
Sum squared resid 7.168153 Schwarz criterion -0.330054
Log likelihood 45.36278 Hannan-Quinn criter. -0.379998
Durbin-Watson stat 2.002712

Inverted AR Roots -.56


Inverted MA Roots .01+.40i .01-.40i -.68

EQUATION: ΔY(t) = −0.5587·ΔY(t−1) + ε(t) + 0.6555·ε(t−1) + 0.1493·ε(t−2) + 0.1102·ε(t−3)

ELABORATION: None of the AR or MA terms is significant at α = 0.05.
VALIDATION

MODEL DW AIC BIC


ARIMA(1,1,1) 2.016884 -0.427161 -0.376807
ARIMA(2,1,1) 1.996707 -0.418852 -0.351714
ARIMA(1,1,2) 1.994048 -0.419708 -0.352569
ARIMA(2,1,2) 1.784891 -0.432111 -0.348188
ARIMA(1,1,3) 2.002712 -0.413977 -0.330054

(Select the model with the smallest AIC and BIC.)

ARIMA(1,1,1) IS THE BEST MODEL: it has the smallest BIC, and although ARIMA(2,1,2) shows a slightly smaller AIC, its estimation did not converge.
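The selection in the table above amounts to picking the candidate with the smallest information criteria. A sketch using the reported values:

```python
# (AIC, BIC) values reported by EViews for the five candidates (smaller is better).
models = {
    "ARIMA(1,1,1)": (-0.427161, -0.376807),
    "ARIMA(2,1,1)": (-0.418852, -0.351714),
    "ARIMA(1,1,2)": (-0.419708, -0.352569),
    "ARIMA(2,1,2)": (-0.432111, -0.348188),   # smallest AIC, but estimation did not converge
    "ARIMA(1,1,3)": (-0.413977, -0.330054),
}
best_by_bic = min(models, key=lambda m: models[m][1])
print(best_by_bic)  # ARIMA(1,1,1)
```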

COMPARISONS BETWEEN BOX-JENKINS AND UNIVARIATE MODEL

[Figure: in-sample forecast of CLOSE (CLOSEF ± 2 S.E.), 5/08/2017 – 1/18/2021]

Forecast: CLOSEF    Actual: CLOSE
Forecast sample: 4/24/2017 – 1/18/2021 (adjusted: 5/08/2017 – 1/18/2021)
Included observations: 194
Root Mean Squared Error   0.192809
Mean Absolute Error       0.135198
Mean Abs. Percent Error   3.665458
Theil Inequality Coef.    0.023885
  Bias Proportion         0.000225
  Variance Proportion     0.000026
  Covariance Proportion   0.999749
Theil U2 Coefficient      0.994420
Symmetric MAPE            3.639127
[Figure: out-of-sample forecast of CLOSE (CLOSEF ± 2 S.E.), 1/25/2021 – 4/15/2022]

Forecast: CLOSEF    Actual: CLOSE
Forecast sample: 1/25/2021 – 4/15/2022
Included observations: 65
Root Mean Squared Error   0.390101
Mean Absolute Error       0.343795
Mean Abs. Percent Error   11.16421
Theil Inequality Coef.    0.069113
  Bias Proportion         0.765287
  Variance Proportion     0.229748
  Covariance Proportion   0.004965
Theil U2 Coefficient      4.085653
Symmetric MAPE            11.99361

STATISTICAL VALUES SINGLE ES ARIMA(1,1,1)


MSE ESTIMATION 0.03873 0.03717531048
MSE EVALUATION 0.00827 0.1521787902
MAPE ESTIMATION 3.61657 3.665458
MAPE EVALUATION 2.30979 11.16421

THE BEST MODEL FOR PADINI CLOSE PRICE IS SINGLE ES.

CONCLUSION

Every organization needs to perform time series analysis in order to understand the
seasons, cycles, trends, and randomness in sales, among other features. Every industry
develops with time, with changing patterns that depend on time. Studying the historical
data in this project allows us to discover those patterns.

Several conclusions can be drawn from this analysis. First and foremost, we were able to
locate and compile all the information on Padini Holdings Bhd's market stock price. After
examining the data, we identified the trends that had occurred and the prevailing tendency
over the course of several years.

Then, the Naïve with Trend model, the Single Exponential Smoothing model, and the Double
Exponential Smoothing model were the approaches applied to the data we selected. We
calculated the error measurements for each model applied to the data set. To guarantee the
accuracy of our assessment, we used two key error measurements: Mean Squared Error (MSE)
and Mean Absolute Percentage Error (MAPE).

Following these evaluations, we could then identify the best model. The Single Exponential
Smoothing approach is the most effective of the three models: it produced the smallest
evaluation Mean Squared Error (MSE), with a value of 0.00827, obtained with smoothing
parameters α = 1 and β = 0. Its Mean Absolute Percentage Error (MAPE) was 2.30979.

Next, we used the Box-Jenkins methodology, which enables the model to recognise patterns
and produce forecasts by means of differencing, moving averages, and autoregression. On the
evaluation part, the Box-Jenkins technique gave a mean squared error (MSE) of 0.1521787902
and a mean absolute percentage error (MAPE) of 11.16421.

After becoming familiar with all of these patterns, error metrics, univariate modelling
strategies, and the Box-Jenkins methodology, we were able to identify the Single Exponential
Smoothing approach, which has the lowest MSE of 0.00827, as the mathematical model that
best fits the data set.
