Garch Models
https://rpubs.com/yevonnael/garch-models-demo
Yevonnael Andrew
8/27/2020
library(tidyquant)
Computing returns
To manage financial risk, we first need to measure the risk by analyzing the return
series. In this exercise, you are given the S&P 500 price series and you need to plot the
daily returns. You will see that large (positive or negative) returns tend to be followed
by large returns, of either sign, and small returns tend to be followed by small returns.
The periods of sustained low volatility and high volatility are called volatility
clusters.
# Compute the annualized standard deviation for the full sample
sqrt(252) * sd(GOOG_ret)
## [1] 0.2087576
# Compute the annualized standard deviation for the year 2009
sqrt(252) * sd(GOOG_ret["2009"])
## [1] 0.228392
# Compute the annualized standard deviation for the year 2017
sqrt(252) * sd(GOOG_ret["2017"])
## [1] 0.09050465
Roll, roll, roll
You can visualize the time-variation in volatility by using the function
chart.RollingPerformance() in the package PerformanceAnalytics. An important tuning
parameter is the choice of the window length: the shorter the window, the
more responsive the rolling volatility estimate is to recent returns; the
longer the window, the smoother it will be. The function sd.annualized lets you
compute annualized volatility under the assumption that the number of trading days in
a year equals the number specified in the scale argument.
In this exercise you need to complete the code to compute the rolling estimate of
annualized volatility for the daily S&P 500 returns in sp500ret for the period 2005
until 2017.
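The exercise can be sketched as follows; sp500ret and the one-month (22-day) window come from the exercise context, and the remaining arguments follow the PerformanceAnalytics conventions:

```r
library(PerformanceAnalytics)

# Rolling estimate of annualized volatility for the daily S&P 500 returns
# (sp500ret is assumed to be an xts series of daily returns)
chart.RollingPerformance(R = sp500ret["2005::2017"],
                         width = 22,          # roughly one trading month
                         FUN = "sd.annualized",
                         scale = 252,         # trading days per year
                         main = "Rolling 1-month annualized volatility")
```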
Note the waves in the absolute prediction errors in the top plot. They correspond to
the presence of high and low volatility clusters. In the bottom plot, you can see the
large positive autocorrelations in the absolute prediction errors. Almost all of them are
above 0.15.
# Squared prediction errors
e2 <- e^2

predvar <- rep(NA, 4527)
# Compute the predicted variances recursively, using fixed GARCH(1,1) parameters
predvar[1] <- var(GOOG_ret)
for (t in 2:4527) {
  predvar[t] <- 0.001 + 0.1 * e2[t - 1] + 0.8 * predvar[t - 1]
}
The large spike in volatility in mid-September 2008 is due to the collapse of Lehman
Brothers on September 15, 2008.
Well done. Notice the typical GARCH behavior: after a large unexpected return,
volatility spikes upwards and then decays away until there is another shock. Let’s move
on to forecasting!
Out-of-sample forecasting
The garchvol series is the series of predicted volatilities for each of the returns in the
observed time series sp500ret.
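Assuming garchfit is the ugarchfit object from the previous exercise, this series can be extracted with the sigma() accessor:

```r
# In-sample predicted volatilities, one per observed return
garchvol <- sigma(garchfit)
plot(garchvol)
```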
For decision making, it is the volatility of the future (not yet observed) return that
matters. You get it by applying the ugarchforecast() function to the output from
ugarchfit(). In forecasting, we call these the out-of-sample volatility forecasts, as they
involve predictions of returns that have not been used when estimating the GARCH
model.
This exercise uses the garchfit and garchvol objects that you created in the previous
exercise. If you need to check which arguments a function takes, you can use
?name_of_function in the Console to access the documentation.
## [1] 0.01288
# Print the last 10 values of garchvol
tail(garchvol, 10)
## [,1]
## 2017-12-15 0.008501463
## 2017-12-18 0.008302661
## 2017-12-19 0.008234302
## 2017-12-20 0.008072244
## 2017-12-21 0.007873386
## 2017-12-22 0.008165352
## 2017-12-26 0.007953158
## 2017-12-27 0.007821857
## 2017-12-28 0.007640453
## 2017-12-29 0.007565508
# Forecast volatility 5 days ahead
garchforecast <- ugarchforecast(fitORspec = garchfit,
                                n.ahead = 5)
sigma(garchforecast)
## 2017-12-29
## T+1 0.007415905
## T+2 0.007477487
## T+3 0.007538063
## T+4 0.007597661
## T+5 0.007656309
Well done. In the next exercise you learn how to use GARCH models for portfolio
allocation.
Volatility targeting in tactical asset allocation
GARCH volatility predictions are of direct practical use in portfolio allocation.
According to the two-fund separation theorem of James Tobin, you should invest a
proportion w of your wealth in a risky portfolio and the remainder in a risk-free asset,
like a US Treasury bill.
When you target a portfolio with 5% annualized volatility, and the annualized volatility
of the risky asset is σt, then you should invest w = 0.05/σt of your wealth in the risky asset.
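As a minimal sketch (the volatility value is illustrative, not from the exercise), the weight follows directly from the target:

```r
# Volatility targeting: invest w = vol_target / sigma_t in the risky asset
vol_target <- 0.05   # 5% annualized target volatility
sigma_t    <- 0.10   # assumed annualized volatility of the risky asset
w <- vol_target / sigma_t
w                    # 0.5: invest 50% in the risky asset, 50% in T-bills
```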
A realistic distribution thus needs to accommodate the presence of:
- fat tails: a higher probability of observing large (positive or negative) returns than under the normal distribution
- skewness: asymmetry of the return distribution
Compared to the normal distribution, the skewed Student t distribution has two extra parameters:
- the degrees-of-freedom parameter: the lower it is, the fatter the tails
- the skewness parameter: values < 1 imply negative skewness, values > 1 positive skewness
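The effect of the two extra parameters can be visualized with rugarch's ddist() helper; the shape and skew values below are illustrative assumptions:

```r
library(rugarch)

x <- seq(-5, 5, by = 0.1)
# Normal density versus a skewed Student t with fat tails (shape = 5)
# and negative skewness (skew = 0.8); both parameter values are illustrative
plot(x, ddist("norm", y = x), type = "l", ylab = "density")
lines(x, ddist("sstd", y = x, shape = 5, skew = 0.8), lty = 2)
legend("topleft", lty = 1:2, legend = c("normal", "skewed Student t"))
```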
library(rugarch)
# Specify a standard GARCH model with skewed student t
garchspec <- ugarchspec(mean.model = list(armaOrder = c(0,0)),
variance.model = list(model = "sGARCH"),
distribution.model = "sstd")
coef(garchfit)
Leverage effect
Both the size and the sign of et matter for volatility prediction: negative returns
increase a firm's leverage and thus tend to raise future volatility. We therefore need
two equations: one explaining the GARCH variance after a negative unexpected return,
and one describing the variance reaction after a positive surprise in returns. In case of
a positive surprise, we take the usual GARCH(1,1). In case of a negative surprise, the
predicted variance should be higher than after a positive surprise, so we need a higher
coefficient. These two equations define the GJR GARCH model.
Just change model = "sGARCH" to model = "gjrGARCH".
library(rugarch)
# Specify a gjrGARCH model with skewed student t
garchspec <- ugarchspec(mean.model = list(armaOrder = c(0,0)),
variance.model = list(model = "gjrGARCH"),
distribution.model = "sstd")
coef(garchfit)
plot(fitted(garchfit))
The GARCH-in-mean model uses the financial theory of a risk-reward trade-off to build a
conditional mean model. Let's now use statistical theory to make a mean model that
exploits the correlation between today's return and tomorrow's return.
The most popular model is the AR(1) model. AR(1) stands for autoregressive model of
order 1. It predicts the next return using the deviation of the return from its long-term
mean value μ.
If the AR coefficient is positive, a return above the mean value tends to be followed by
another return above the mean value. A possible explanation: the market underreacts to
news and hence there is momentum in returns.
If the AR coefficient is negative, a higher-than-average return is followed by a lower-
than-average return. A possible explanation: the market overreacts to news and hence
there is reversal in returns.
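In rugarch, the AR(1) mean model corresponds to armaOrder = c(1,0); a sketch of the specification, keeping the distribution choice from the earlier examples:

```r
library(rugarch)

# AR(1) conditional mean, GARCH(1,1) variance, skewed Student t errors
argarchspec <- ugarchspec(mean.model = list(armaOrder = c(1,0)),
                          variance.model = list(model = "sGARCH"),
                          distribution.model = "sstd")
```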
coef(garchfit)
Here the ar1 coefficient is negative, which hints at overreaction and thus a reversal on
the next day.
Other popular models for the conditional mean are the MA(1) and ARMA(1,1) models.
That’s right! In contrast with using an ARMA model for the mean, we have that a
GARCH-in-mean model does not exploit the correlation between today’s and
tomorrow’s return. It exploits the relationship between the expected return and the
variance of the return. The higher the risk in terms of variance, the higher should be
the expected return on investment.
Avoid unnecessary complexity
The parameters of a GARCH model are estimated by maximum likelihood. Because of
sampling uncertainty, the estimated parameters inevitably contain some estimation
error. If we knew the true parameter value, it would therefore be best to impose that
value rather than estimate it.
Let’s do this in the case of the daily EUR/USD returns, available in the console as the
variable EURUSDret, for which an AR(1)-GARCH model with a skewed Student t
distribution has already been estimated and made available as the ugarchfit object
called flexgarchfit.
# Print the flexible GARCH parameters
coef(flexgarchfit)
# Restrict the flexible GARCH model by imposing fixed ar1 and skew parameters
rflexgarchspec <- flexgarchspec
setfixed(rflexgarchspec) <- list(ar1 = 0, skew = 1)
Parameter bounds and impact on forecasts
Let's take again the flexible GARCH model specification for which the estimated
coefficients are printed in the console. Now assume that you believe that the GARCH
parameter α should be between 0.05 and 0.1, while the β parameter should be between
0.8 and 0.95. You are asked to re-estimate the model by imposing those bounds and to
see the effect on the volatility forecasts for the next ten days obtained using
ugarchforecast.
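In rugarch this is done with the setbounds() replacement function on the specification; a sketch, assuming the flexible specification and the EURUSDret data from the previous exercise:

```r
# Impose bounds on alpha1 and beta1 before re-estimating
bflexgarchspec <- flexgarchspec
setbounds(bflexgarchspec) <- list(alpha1 = c(0.05, 0.1),
                                  beta1  = c(0.8, 0.95))

# Re-estimate under the bounds and forecast ten days ahead
bflexgarchfit <- ugarchfit(data = EURUSDret, spec = bflexgarchspec)
ugarchforecast(bflexgarchfit, n.ahead = 10)
```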
# Inspect coefficients
coef(bflexgarchfit)
Well done. Since volatility is mean reverting, it is wise to use this empirical fact when
estimating the GARCH model. As you’ve probably realized by now, there is a lot of skill
involved in finding the right model. You sharpen those skills in the next chapter.
Statistical significance
Are the variables in your GARCH model relevant? Can you simplify the model? ⟺ Are
some parameters zero?
- If the ar1 parameter is zero, you can use a constant mean model.
- If the gamma1 parameter is zero, there is no leverage effect and you can use a
standard GARCH model instead of the GJR.
It seems simple, but in practice we do not know the true parameter values; all of them
need to be estimated.
round(garchfit@fit$matcoef, 6)
## [1] 0.0001728729
# Goodness of fit for the variance prediction
e <- residuals(garchfit)
d <- e^2 - sigma(garchfit)^2
mean(d^2)
## [1] 2.974194e-07
# Compute the likelihood
likelihood(garchfit)
## [1] 14248.54
The likelihood should not be interpreted by itself, but compared with the likelihood
obtained using other models, such as a GJR model.
##
## Akaike -6.291382
## Bayes -6.280040
## Shibata -6.291388
## Hannan-Quinn -6.287386
That’s right! By choosing the model with the highest likelihood, you will end up with
the most complex model, which is not parsimonious and has a high risk of overfitting.
Diagnosing absolute standardized returns
Check 1: Mean and standard deviation of the standardized returns. The sample mean
should be 0 and the sample standard deviation 1.
Check 2: Time series plot of the standardized returns. It should show constant
variability.
Check 3: No predictability in the absolute standardized returns. Verify that there is no
correlation between the past absolute standardized return and the current absolute
standardized return. Why? The magnitude of the absolute standardized return should
be constant, hence no correlations in the absolute standardized returns.
Check 4: Ljung-Box test.
acf(abs(stdret), 22)
# We want it to have p-value > 0.05
Box.test(abs(stdret), 22, type = "Ljung-Box")
##
## Box-Ljung test
##
## data: abs(stdret)
## X-squared = 29.575, df = 22, p-value = 0.1292
head(preds)
# Prediction error for the mean
e <- preds$Realized - preds$Mu
mean(e^2)
## [1] 8.333045e-05
mean(d^2)
## [1] 3.764772e-08
Value-at-risk
How much would you lose in the best of the 5% worst cases?
A popular measure of downside risk is the 5% value-at-risk: the 5% quantile of the
return distribution, which represents the best return in the 5% worst scenarios.
The workflow to obtain predicted 5% quantiles is to run a rolling estimation with
ugarchroll() and then apply quantile() with probs = 0.05.
## [1] 0.04982733
VaR coverage and model validation
Interpretation of coverage for VaR at loss probability α (e.g. 5%): a valid prediction
model has a coverage that is close to the probability level α used.
- If coverage ≫ α: too many exceedances; the predicted quantile should be more
negative. The risk of losing money has been underestimated.
- If coverage ≪ α: too few exceedances; the predicted quantile was too negative. The
risk of losing money has been overestimated.
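A sketch of the rolling workflow and the coverage check, assuming garchspec and sp500ret from earlier; the n.start and refit.every values are illustrative choices:

```r
library(rugarch)
library(xts)

# Rolling out-of-sample estimation: refit every 500 observations
garchroll <- ugarchroll(garchspec, data = sp500ret,
                        n.start = 2500, refit.every = 500,
                        refit.window = "moving")

# Predicted 5% quantiles (the 5% value-at-risk)
garchVaR <- quantile(garchroll, probs = 0.05)

# Coverage: proportion of realized returns below the predicted VaR
actual <- xts(as.data.frame(garchroll)$Realized, time(garchVaR))
mean(actual < garchVaR)
```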
# Make the predictions for the mean and vol for the next ten days
garchforecast <- ugarchforecast(data = GOOG_ret,
fitORspec = progarchspec,
n.ahead = 10)
cbind(fitted(garchforecast), sigma(garchforecast))
## 2017-12-29 2017-12-29
## T+1 0.0004132459 0.007659785
## T+2 0.0004333941 0.007707519
## T+3 0.0004332626 0.007754692
## T+4 0.0004332635 0.007801316
## T+5 0.0004332635 0.007847402
## T+6 0.0004332635 0.007892961
## T+7 0.0004332635 0.007938003
## T+8 0.0004332635 0.007982539
## T+9 0.0004332635 0.008026579
## T+10 0.0004332635 0.008070131
You can also use the complete model spec to simulate artificial log-returns, defined as
the difference between the current log-price and the past log-price. The simulation is
useful to assess the randomness in future returns and their impact on future price
levels.
Step 1: Calibrate the simulation model
# Estimate the model and assign model params to the simulation model
garchspec <- ugarchspec(mean.model = list(armaOrder = c(1,0)),
variance.model = list(model = "gjrGARCH"),
distribution.model = "sstd")
# Estimate the model
garchfit <- ugarchfit(data = GOOG_log_ret["/2010-12"], spec = garchspec)
Step 2: Run the simulation with ugarchpath(). It requires you to choose:
- spec: a completely specified GARCH model
- m.sim: the number of time series of simulated returns you want
- n.sim: the number of observations in each simulated time series (e.g. 252)
- rseed: any number to fix the seed used to generate the simulated series (needed for
reproducibility)
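A sketch of this simulation step, fixing the simulation spec at the coefficients estimated in Step 1 (the seed value is arbitrary):

```r
# Fix the simulation spec at the estimated parameters
simspec <- garchspec
setfixed(simspec) <- as.list(coef(garchfit))

# Simulate 4 series of one year (252 days) of daily returns
simgarch <- ugarchpath(spec = simspec, m.sim = 4, n.sim = 252, rseed = 123)
simret <- fitted(simgarch)   # simulated returns
plot.zoo(sigma(simgarch))    # simulated volatilities
```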
plot.zoo(sigma(simgarch))
Analysis of simulated prices
simprices <- exp(apply(simret, 2, "cumsum"))
matplot(simprices, type = "l", lwd = 3)
That’s right! Stock returns tend to have fat tails and an asymmetric distribution. The
normal distribution is not a realistic assumption for stock returns.
In a corporate environment, there is often a distinction between the stage of model
engineering and the stage of using the model in production. When using the model in
production, the model may not be re-estimated at each stage. You then use the model
with fixed coefficients, while incorporating the new data on each prediction day. The
function ugarchfilter() is designed to complete this task.
In this exercise you use a model fitted on the January 1989 till December 2007 daily
S&P 500 returns to make a prediction of the future volatility in a turbulent period
(September 2008) and a stable period (September 2017). The model has already been
specified and is available as garchspec in the R console.
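A sketch of this production workflow, fixing the estimated coefficients in a specification and filtering new data through it:

```r
# Fix the coefficients estimated on the 1989-2007 sample
progarchspec <- garchspec
setfixed(progarchspec) <- as.list(coef(garchfit))

# Filter new observations through the model without re-estimating
garchfilter <- ugarchfilter(data = sp500ret, spec = progarchspec)
plot(sigma(garchfilter))
```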
# Compare the 252-days-ahead forecasts made at the end of September 2008 and September 2017
garchforecast2008 <- ugarchforecast(data = GOOG_ret["/2008-09"],
fitORspec = progarchspec, n.ahead = 252)
garchforecast2017 <- ugarchforecast(data = GOOG_ret["/2017-09"],
fitORspec = progarchspec, n.ahead = 252)
par(mfrow = c(2,1), mar = c(3,2,3,2))
plot(sigma(garchforecast2008), main = "/2008-09", type = "l")
plot(sigma(garchforecast2017), main = "/2017-09", type = "l")
Note that in the long run the volatility is predicted to revert to its average level.
This explains why the predicted volatility at T+252 is similar in September 2008 and
September 2017.
Model Risk
Sources: modeling choices, starting values in the optimization, and outliers in the
return series.
Solution: protect yourself through a robust approach:
- model averaging: averaging the predictions of multiple models
- trying several starting values and choosing the one that leads to the highest
likelihood
If you cannot choose which model to use, you could estimate them all and average
their volatility predictions.
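The averaged volatility series msigma could be computed as, for instance (the two fit objects are hypothetical names for two ugarchfit results):

```r
# Average the volatility predictions of two candidate models
# (sgarchfit and gjrgarchfit are hypothetical ugarchfit objects)
msigma <- (sigma(sgarchfit) + sigma(gjrgarchfit)) / 2
```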
plot(msigma)
Cleaning the data
Avoid letting outliers distort the volatility predictions. How? Through winsorization:
reduce the magnitude of the return to an acceptable level using the function
Return.clean() in the package PerformanceAnalytics with method = "boudt".
Step 3: Use cor() to estimate ρ as the sample correlation of the standardized returns.
Step 4: Compute the GARCH covariance by multiplying the estimated correlation and
the two volatilities.
discussion3
https://rpubs.com/jwzhang52/403782
Jingwen Zhang
Since it requires building two ARIMA models, I’m going to try another one: ARIMA(1,1,1)(0,1,0)[12]
myarima2 <- arima(train, order = c(1,1,1), seasonal = c(0,1,0))
myarima2
##
## Call:
## arima(x = train, order = c(1, 1, 1), seasonal = c(0, 1, 0))
##
## Coefficients:
## ar1 ma1
## -0.7461 0.2950
## s.e. 0.1674 0.2454
##
## sigma^2 estimated as 1939: log likelihood = -244.78, aic = 495.56
Just by comparing the AIC, myarima's AIC < myarima2's AIC, so the auto.arima model is
the best ARIMA model.
Since model MAM's AIC < model ZZZ's AIC, I choose MAM and will continue to use this
model for forecasting.
Conclusion: Comparing AIC and BIC, the ARIMA model has lower values than the ETS
model, so the ARIMA model fits better. When forecasting, comparing RMSE, MAE, ME,
MAPE and MASE, ETS has lower values and gives better results due to higher accuracy.
str(BOA)
## 'data.frame': 60 obs. of 7 variables:
## $ Date : chr "2015-08-01" "2015-09-01" "2015-10-01" "2015-11-01"
...
## $ Open : num 17.9 15.9 15.5 16.9 17.5 ...
## $ High : num 18.1 16.5 17.4 18.1 17.9 ...
## $ Low : num 14.6 15.2 14.6 16.9 16.5 ...
## $ Close : num 16.4 15.6 16.8 17.4 16.8 ...
## $ Adj.Close: num 15 14.2 15.4 16 15.4 ...
## $ Volume : num 2.33e+09 1.78e+09 1.85e+09 1.44e+09 1.77e+09 ...
head(BOA)
## Date Open High Low Close Adj.Close Volume
## 1 2015-08-01 17.91 18.07 14.60 16.44 15.00615 2325559300
## 2 2015-09-01 15.95 16.48 15.25 15.58 14.22115 1777912500
## 3 2015-10-01 15.52 17.44 14.63 16.78 15.36581 1848594500
## 4 2015-11-01 16.90 18.09 16.87 17.43 15.96102 1439390500
## 5 2015-12-01 17.52 17.89 16.50 16.83 15.41159 1771496200
## 6 2016-01-01 16.45 16.59 12.94 14.14 12.98476 2648604300
tail(BOA)
## Date Open High Low Close Adj.Close Volume
## 55 2020-02-01 33.00 35.45 27.70 28.50 28.12317 1072992600
## 56 2020-03-01 28.35 29.75 17.95 21.23 20.94929 2826499000
## 57 2020-04-01 19.93 25.32 19.51 24.05 23.88343 1636882500
## 58 2020-05-01 23.38 26.17 20.10 24.12 23.95295 1447546400
## 59 2020-06-01 24.28 29.01 23.02 23.75 23.58551 1801607100
## 60 2020-07-01 24.03 24.87 22.39 24.36 24.36000 1193407900
#Creating our time series
M1.ARIMA = auto.arima(train.ts)
summary(M1.ARIMA)
## Series: train.ts
## ARIMA(0,1,0)
##
## sigma^2 estimated as 4.329: log likelihood=-124.79
## AIC=251.59 AICc=251.66 BIC=253.65
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 0.1456674 2.06294 1.582365 0.3593268 7.018769 0.3124246
## ACF1
## Training set -0.009918847
fcast.M1.ARIMA = forecast(M1.ARIMA,h=12)
plot(fcast.M1.ARIMA)
lines(test.ts, col="red")
legend("topleft",lty=1,col=c("red","blue"),c("actual values","forecast"))
fcast.M1.ETS = forecast(M1.ETS,h=12)
plot(fcast.M1.ETS)
lines(test.ts, col="red")
legend("topleft",lty=1,col=c("red","blue"),c("actual values","forecast"))
library(tseries)
# Fit a GARCH(1,1) model with the tseries package
GBOA <- garch(BOA.ts)
##
## ***** ESTIMATION WITH ANALYTICAL GRADIENT *****
##
## I INITIAL X(I) D(I)
## 1 3.486087e+01 1.000e+00
## 2 5.000000e-02 1.000e+00
## 3 5.000000e-02 1.000e+00
##
## (iteration trace of 72 function evaluations omitted)
##
## ***** FALSE CONVERGENCE *****
##
## FUNCTION 2.135736e+02 RELDX 1.840e-14
## FUNC. EVALS 72 GRAD. EVALS 35
## PRELDF 1.832e-15 NPRELDF 4.228e-06
##
## I FINAL X(I) D(I) G(I)
## 1 3.486220e+01 1.000e+00 1.401e-03
## 2 9.537147e-01 1.000e+00 1.127e-01
## 3 8.032921e-13 1.000e+00 2.551e-01
summary(GBOA)
##
## Call:
## garch(x = BOA.ts)
##
## Model:
## GARCH(1,1)
##
## Residuals:
## Min 1Q Median 3Q Max
## 0.7457 0.9471 0.9956 1.0618 1.2193
##
## Coefficient(s):
## Estimate Std. Error t value Pr(>|t|)
## a0 3.486e+01 1.026e+03 0.034 0.973
## a1 9.537e-01 6.388e+00 0.149 0.881
## b1 8.033e-13 5.417e+00 0.000 1.000
##
## Diagnostic Tests:
## Jarque Bera Test
##
## data: Residuals
## X-squared = 1.8201, df = 2, p-value = 0.4025
##
##
## Box-Ljung test
##
## data: Squared.Residuals
## X-squared = 0.18546, df = 1, p-value = 0.6667
plot(GBOA)
#Comparing the GARCH model’s results with the other models’ results
acc.ARIMA
## ME RMSE MAE MPE MAPE MASE
## Training set 0.1456674 2.062940 1.582365 0.3593268 7.018769 0.3124246
## Test set 0.7744870 0.774487 0.774487 3.1793389 3.179339 0.1529160
## ACF1
## Training set -0.009918847
## Test set NA
acc.ETS
## ME RMSE MAE MPE MAPE MASE
## Training set 0.1569204 2.0651246 1.5877243 0.397414 7.046455 0.3134828
## Test set 0.7551898 0.7551898 0.7551898 3.100122 3.100122 0.1491059
## ACF1
## Training set 0.03991386
## Test set NA
#It appears that in terms of AIC and BIC the GARCH model does a way better job than the ETS or ARIMA model.
#Is it enough to say that the GARCH model is the best model? Probably not, because it would be nice to see how it does graphically and how it does in terms of accuracy.
#With the current info, I would still rather use an ARIMA or ETS model over a GARCH model.